[
{
"msg_contents": "Quick(?) question... why is there a Sort node after an Index Only Scan?\nShouldn't the index already spit out sorted tuples?\n\nCREATE INDEX ON orders_test(shipping_date, order_id);\n\nEXPLAIN ANALYZE SELECT\nFROM orders_test\nWHERE TRUE\nAND shipping_date >= '2022-05-01'\nAND shipping_date <= '2022-05-01'\nORDER BY order_id\nLIMIT 50;\n\nLimit (cost=8.46..8.46 rows=1 width=4) (actual time=0.031..0.032 rows=0\nloops=1)\n -> Sort (cost=8.46..8.46 rows=1 width=4) (actual time=0.025..0.025\nrows=0 loops=1)\n Sort Key: order_id\n Sort Method: quicksort Memory: 25kB\n -> Index Only Scan using orders_test_shipping_date_order_id_idx on\norders_test (cost=0.43..8.45 rows=1 width=4) (actual time=0.017..0.018\nrows=0 loops=1)\n Index Cond: ((shipping_date >= '2022-05-01'::date) AND\n(shipping_date <= '2022-05-01'::date))\n Heap Fetches: 0\n\nFiddle:\nhttps://dbfiddle.uk/?rdbms=postgres_14&fiddle=7a3bc2421b5de5a2a377bd39b78d1c\nd5\n\nI am actually asking because when I skew the distribution a little and\nrepeat the query I get a rather unfortunate plan:\n\nINSERT INTO orders_test SELECT generate_series(2000001, 2100000),\n'2022-05-01';\nANALYZE orders_test;\n\nLimit (cost=0.43..37.05 rows=50 width=4) (actual time=1186.565..1186.593\nrows=50 loops=1)\n -> Index Scan using orders_test_pkey on orders_test (cost=0.43..74336.43\nrows=101502 width=4) (actual time=1186.562..1186.584 rows=50 loops=1)\n Filter: ((shipping_date >= '2022-05-01'::date) AND (shipping_date <=\n'2022-05-01'::date))\n Rows Removed by Filter: 2000000\n\nPostgres here uses the primary key to get the sort order, so I'm wondering\nif there is anything about my index that precludes its use for ORDER BY.\n\n\n\n\n",
"msg_date": "Thu, 5 May 2022 01:15:43 +0200",
"msg_from": "=?iso-8859-1?Q?Andr=E9_H=E4nsel?= <andre@webkr.de>",
"msg_from_op": true,
"msg_subject": "Why is there a Sort after an Index Only Scan?"
},
{
"msg_contents": "On Wed, May 4, 2022 at 7:15 PM André Hänsel <andre@webkr.de> wrote:\n\n> Quick(?) question... why is there a Sort node after an Index Only Scan?\n> Shouldn't the index already spit out sorted tuples?\n>\n> CREATE INDEX ON orders_test(shipping_date, order_id);\n>\n> EXPLAIN ANALYZE SELECT\n> FROM orders_test\n> WHERE TRUE\n> AND shipping_date >= '2022-05-01'\n> AND shipping_date <= '2022-05-01'\n> ORDER BY order_id\n> LIMIT 50;\n>\n\nThey are sorted by order_id only within sets of the same shipping_date,\nwhich is not good enough. (It would be good enough if it were smart enough\nto know that there is only one possible shipping date to satisfy your weird\nrange condition.)\n\nCheers,\n\nJeff\n\nOn Wed, May 4, 2022 at 7:15 PM André Hänsel <andre@webkr.de> wrote:Quick(?) question... why is there a Sort node after an Index Only Scan?\nShouldn't the index already spit out sorted tuples?\n\nCREATE INDEX ON orders_test(shipping_date, order_id);\n\nEXPLAIN ANALYZE SELECT\nFROM orders_test\nWHERE TRUE\nAND shipping_date >= '2022-05-01'\nAND shipping_date <= '2022-05-01'\nORDER BY order_id\nLIMIT 50;They are sorted by order_id only within sets of the same shipping_date, which is not good enough. (It would be good enough if it were smart enough to know that there is only one possible shipping date to satisfy your weird range condition.) Cheers,Jeff",
"msg_date": "Wed, 4 May 2022 19:37:08 -0400",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Why is there a Sort after an Index Only Scan?"
},
{
"msg_contents": "On Thu, 5 May 2022 at 11:15, André Hänsel <andre@webkr.de> wrote:\n>\n> Quick(?) question... why is there a Sort node after an Index Only Scan?\n> Shouldn't the index already spit out sorted tuples?\n>\n> CREATE INDEX ON orders_test(shipping_date, order_id);\n>\n> EXPLAIN ANALYZE SELECT\n> FROM orders_test\n> WHERE TRUE\n> AND shipping_date >= '2022-05-01'\n> AND shipping_date <= '2022-05-01'\n> ORDER BY order_id\n> LIMIT 50;\n\nUnfortunately, the query planner is not quite smart enough to realise\nthat your shipping_date clauses can only match a single value.\nThere's quite a bit more we could do with the planner's\nEquivalanceClasses. There is a patch around to help improve things in\nthis area but it requires some more infrastructure to make it more\npractical to do from a performance standpoint in the planner.\n\nYou'll get the plan you want if you requite the query and replace your\ndate range with shipping_date = '2022-05-01'. Your use of WHERE TRUE\nindicates to me that you might be building this query in an\napplication already, so maybe you can just tweak that application to\ntest if the start and end dates are the same and use equality when\nthey are.\n\nDavid\n\n[1] https://commitfest.postgresql.org/38/3524/\n\n\n",
"msg_date": "Thu, 5 May 2022 11:42:08 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Why is there a Sort after an Index Only Scan?"
},
{
"msg_contents": "> They are sorted by order_id only within sets of the same shipping_date, which is not good enough. \n\nAh yes, that totally makes sense for the general case.\n\n> so maybe you can just tweak that application to test if the start and end dates are the same and use equality when they are.\n\nI definitely can.\n\nBut now I have a followup question, which probably should have been a separate question all along. I have modified the example a bit to have a more natural date distribution and I got rid of the weird shipping_date condition and actually made it different dates, so the index order is out of the picture. I also added some statistics so Postgres knows about the relationship between the columns.\n\nhttps://dbfiddle.uk/?rdbms=postgres_14&fiddle=54c7774432e896e3c0e89d8084c4b194\n\nAfter inserting more rows, Postgres still chooses a scan on the primary key instead of using the index.\n\nLimit (cost=0.43..296.63 rows=50 width=4) (actual time=1052.692..1052.737 rows=50 loops=1)\n -> Index Scan using orders_test_pkey on orders_test (cost=0.43..71149.43 rows=12010 width=4) (actual time=1052.690..1052.728 rows=50 loops=1)\n Filter: ((shipping_date >= '2022-04-30'::date) AND (shipping_date <= '2022-05-01'::date))\n Rows Removed by Filter: 1998734\n\nBy setting the CPU costs to 0 (last block in the fiddle) I can force the use of the previous plan and as I already suspected it is much better:\n\nLimit (cost=101.00..101.00 rows=50 width=4) (actual time=4.835..4.843 rows=50 loops=1)\n -> Sort (cost=101.00..101.00 rows=12010 width=4) (actual time=4.833..4.837 rows=50 loops=1)\n Sort Key: order_id\n Sort Method: top-N heapsort Memory: 27kB\n -> Index Scan using orders_test_shipping_date_idx on orders_test (cost=0.00..101.00 rows=12010 width=4) (actual time=0.026..3.339 rows=11266 loops=1)\n Index Cond: ((shipping_date >= '2022-04-30'::date) AND (shipping_date <= '2022-05-01'::date))\n\nIs it overestimating the cost of the sorting?\n\n\n\n\n",
"msg_date": "Thu, 5 May 2022 03:12:40 +0200",
"msg_from": "=?UTF-8?Q?Andr=C3=A9_H=C3=A4nsel?= <andre@webkr.de>",
"msg_from_op": false,
"msg_subject": "RE: Why is there a Sort after an Index Only Scan?"
},
{
"msg_contents": "=?UTF-8?Q?Andr=C3=A9_H=C3=A4nsel?= <andre@webkr.de> writes:\n> Limit (cost=0.43..296.63 rows=50 width=4) (actual time=1052.692..1052.737 rows=50 loops=1)\n> -> Index Scan using orders_test_pkey on orders_test (cost=0.43..71149.43 rows=12010 width=4) (actual time=1052.690..1052.728 rows=50 loops=1)\n> Filter: ((shipping_date >= '2022-04-30'::date) AND (shipping_date <= '2022-05-01'::date))\n> Rows Removed by Filter: 1998734\n\n> Is it overestimating the cost of the sorting?\n\nNo, but it's guessing it will hit 50 rows that satisfy the filter\nbefore it's gone very far in this index. If the shipping date and\npkey are correlated in the wrong direction, that could be a very\noveroptimistic guess. I don't think we have adequate stats yet\nto detect this sort of problem.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 04 May 2022 22:08:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Why is there a Sort after an Index Only Scan?"
}
]
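Jeff's point above — that a btree on (shipping_date, order_id) yields rows ordered by order_id only within each shipping_date, unless the planner can prove the date range collapses to a single value — can be sanity-checked outside the database. A minimal Python sketch with made-up rows (illustrative data only, not from the thread):

```python
# Rows of (shipping_date, order_id); a btree index on
# (shipping_date, order_id) keeps entries in this composite order.
rows = [("2022-05-02", 1), ("2022-05-01", 7), ("2022-05-01", 3), ("2022-05-02", 9)]
index_order = sorted(rows)  # composite sort, as the index stores them

# Across multiple dates, index order is NOT ordered by order_id alone,
# so a Sort node is still needed to satisfy ORDER BY order_id.
globally_sorted = index_order == sorted(rows, key=lambda r: r[1])

# Once the predicate pins a single date, the index's order_id suffix
# alone determines the order, and the Sort could be skipped.
one_date = [r for r in index_order if r[0] == "2022-05-01"]
one_date_sorted = one_date == sorted(one_date, key=lambda r: r[1])

print(globally_sorted, one_date_sorted)
```

Inside PostgreSQL, the equivalent of "pinning a single date" is David's suggested rewrite: use shipping_date = '2022-05-01' instead of the two-sided range, so the index can deliver both the filter and the ORDER BY order.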
[
{
"msg_contents": "Dear All,\n\nWe have recently upgraded Postgresql 9.4 standalone server to Postgresql\n11.2 with High Availability (2 servers : Master and Standby).\n\nWhile trying to test using ETL applications and reports, we observe that\nthe ETL jobs fails with below error,\n\n2022/05/06 16:27:36 - Error occurred while trying to connect to the database\n2022/05/06 16:27:36 - Error connecting to database: (using class\norg.postgresql.Driver)\n2022/05/06 16:27:36 - FATAL: Sorry, too many clients already\n\nWe have increased the max_connections = 1000 in postgresql.conf file.\n\nIt worked ok for a day and later we get the same error message.\n\nPlease help to advise on any additional settings required. The prior\nPostgresql 9.4 had the default max_connections = 100 and the applications\nworked fine.\n\nRegards,\nGuna\n\n\nDear All,We have recently upgraded \nPostgresql 9.4 standalone server to Postgresql 11.2 with High \nAvailability (2 servers : Master and Standby).While trying to test using ETL applications and reports, we observe that the ETL jobs fails with below error,2022/05/06 16:27:36 - Error occurred while trying to connect to the database2022/05/06 16:27:36 - Error connecting to database: (using class org.postgresql.Driver)2022/05/06 16:27:36 - FATAL: Sorry, too many clients alreadyWe have increased the max_connections = 1000 in postgresql.conf file.It worked ok for a day and later we get the same error message.Please help to advise on any additional settings required. The prior Postgresql 9.4 had the default \nmax_connections = 100\n\nand the applications worked fine.Regards,Guna",
"msg_date": "Wed, 11 May 2022 00:59:01 +0800",
"msg_from": "Sudhir Guna <sudhir.guna.sg@gmail.com>",
"msg_from_op": true,
"msg_subject": "DB connection issue suggestions"
},
{
"msg_contents": "\n\n\n\n\n\n Please show output of \"show max_connections\" to validate your assumptions.\n \n\n\n On 05/10/2022 12:59 PM Sudhir Guna <sudhir.guna.sg@gmail.com> wrote:\n \n\n\n\n\n\n\n\n\n Dear All,\n \n\n\n\n\n We have recently upgraded Postgresql 9.4 standalone server to Postgresql 11.2 with High Availability (2 servers : Master and Standby).\n \n\n\n\n\n While trying to test using ETL applications and reports, we observe that the ETL jobs fails with below error,\n \n\n\n\n\n 2022/05/06 16:27:36 - Error occurred while trying to connect to the database\n 2022/05/06 16:27:36 - Error connecting to database: (using class org.postgresql.Driver)\n 2022/05/06 16:27:36 - FATAL: Sorry, too many clients already\n \n\n\n\n\n We have increased the max_connections = 1000 in postgresql.conf file.\n\n\n\n\n\nIt worked ok for a day and later we get the same error message.\n\n\n\n\n\nPlease help to advise on any additional settings required. The prior Postgresql 9.4 had the default max_connections = 100 and the applications worked fine.\n\n\n\n\n\nRegards,\n\n\nGuna\n\n\n\n\n\n",
"msg_date": "Tue, 10 May 2022 14:13:14 -0400 (EDT)",
"msg_from": "MichaelDBA Vitale <michaeldba@sqlexec.com>",
"msg_from_op": false,
"msg_subject": "Re: DB connection issue suggestions"
},
{
"msg_contents": "On Wed, May 11, 2022 at 12:59:01AM +0800, Sudhir Guna wrote:\n> Dear All,\n> \n> We have recently upgraded Postgresql 9.4 standalone server to Postgresql\n> 11.2 with High Availability (2 servers : Master and Standby).\n> \n> While trying to test using ETL applications and reports, we observe that\n> the ETL jobs fails with below error,\n> \n> 2022/05/06 16:27:36 - Error occurred while trying to connect to the database\n> 2022/05/06 16:27:36 - Error connecting to database: (using class\n> org.postgresql.Driver)\n> 2022/05/06 16:27:36 - FATAL: Sorry, too many clients already\n> \n> We have increased the max_connections = 1000 in postgresql.conf file.\n> \n> It worked ok for a day and later we get the same error message.\n> \n> Please help to advise on any additional settings required. The prior\n> Postgresql 9.4 had the default max_connections = 100 and the applications\n> worked fine.\n\nIt sounds like at least one thing is still running, perhaps running very\nslowly.\n\nYou should monitor the number of connections to figure out what.\n\nIf you expect to be able to run with only 100 connections, then when\nconnections>200, there's already over 100 connections which shouldn't still be\nthere.\n\nYou could query pg_stat_activity to determine what they're doing - trying to\nrun a slow query ? Are all/most of them stuck doing the same thing ?\n\nYou should try to provide the information here for the slow query, and for the\nrest of your environment.\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 10 May 2022 13:15:40 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: DB connection issue suggestions"
},
{
"msg_contents": "Em ter., 10 de mai. de 2022 às 14:49, Sudhir Guna <sudhir.guna.sg@gmail.com>\nescreveu:\n\n> Dear All,\n>\n> We have recently upgraded Postgresql 9.4 standalone server to Postgresql\n> 11.2 with High Availability (2 servers : Master and Standby).\n>\n> While trying to test using ETL applications and reports, we observe that\n> the ETL jobs fails with below error,\n>\n> 2022/05/06 16:27:36 - Error occurred while trying to connect to the\n> database\n> 2022/05/06 16:27:36 - Error connecting to database: (using class\n> org.postgresql.Driver)\n> 2022/05/06 16:27:36 - FATAL: Sorry, too many clients already\n>\n> We have increased the max_connections = 1000 in postgresql.conf file.\n>\n> It worked ok for a day and later we get the same error message.\n>\n> Please help to advise on any additional settings required. The prior\n> Postgresql 9.4 had the default max_connections = 100 and the applications\n> worked fine.\n>\nI guess that ETL is pentaho?\nYou can try to use the latest JDBC driver (42.3.5) .\n\nregards,\nRanier Vilela\n\nEm ter., 10 de mai. de 2022 às 14:49, Sudhir Guna <sudhir.guna.sg@gmail.com> escreveu:\nDear All,We have recently upgraded \nPostgresql 9.4 standalone server to Postgresql 11.2 with High \nAvailability (2 servers : Master and Standby).While trying to test using ETL applications and reports, we observe that the ETL jobs fails with below error,2022/05/06 16:27:36 - Error occurred while trying to connect to the database2022/05/06 16:27:36 - Error connecting to database: (using class org.postgresql.Driver)2022/05/06 16:27:36 - FATAL: Sorry, too many clients alreadyWe have increased the max_connections = 1000 in postgresql.conf file.It worked ok for a day and later we get the same error message.Please help to advise on any additional settings required. 
The prior Postgresql 9.4 had the default \nmax_connections = 100\n\nand the applications worked fine.I guess that ETL is pentaho?You can try to use the latest JDBC driver (42.3.5)\n\n.regards,Ranier Vilela",
"msg_date": "Tue, 10 May 2022 15:54:53 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DB connection issue suggestions"
},
{
"msg_contents": "Hi Ranier,\n\nThank you for reviewing this.\n\nYes this is Pentaho and SSRS application.\n\nWe are currently using postgresql-42.2.4.jar currently.\n\nRegards,\nGuna\n\nOn Wed, May 11, 2022 at 2:55 AM Ranier Vilela <ranier.vf@gmail.com> wrote:\n\n>\n>\n> Em ter., 10 de mai. de 2022 às 14:49, Sudhir Guna <\n> sudhir.guna.sg@gmail.com> escreveu:\n>\n>> Dear All,\n>>\n>> We have recently upgraded Postgresql 9.4 standalone server to Postgresql\n>> 11.2 with High Availability (2 servers : Master and Standby).\n>>\n>> While trying to test using ETL applications and reports, we observe that\n>> the ETL jobs fails with below error,\n>>\n>> 2022/05/06 16:27:36 - Error occurred while trying to connect to the\n>> database\n>> 2022/05/06 16:27:36 - Error connecting to database: (using class\n>> org.postgresql.Driver)\n>> 2022/05/06 16:27:36 - FATAL: Sorry, too many clients already\n>>\n>> We have increased the max_connections = 1000 in postgresql.conf file.\n>>\n>> It worked ok for a day and later we get the same error message.\n>>\n>> Please help to advise on any additional settings required. The prior\n>> Postgresql 9.4 had the default max_connections = 100 and the\n>> applications worked fine.\n>>\n> I guess that ETL is pentaho?\n> You can try to use the latest JDBC driver (42.3.5) .\n>\n> regards,\n> Ranier Vilela\n>\n\nHi Ranier,Thank you for reviewing this.Yes this is Pentaho and SSRS application.We are currently using postgresql-42.2.4.jar currently.Regards,GunaOn Wed, May 11, 2022 at 2:55 AM Ranier Vilela <ranier.vf@gmail.com> wrote:Em ter., 10 de mai. 
de 2022 às 14:49, Sudhir Guna <sudhir.guna.sg@gmail.com> escreveu:\nDear All,We have recently upgraded \nPostgresql 9.4 standalone server to Postgresql 11.2 with High \nAvailability (2 servers : Master and Standby).While trying to test using ETL applications and reports, we observe that the ETL jobs fails with below error,2022/05/06 16:27:36 - Error occurred while trying to connect to the database2022/05/06 16:27:36 - Error connecting to database: (using class org.postgresql.Driver)2022/05/06 16:27:36 - FATAL: Sorry, too many clients alreadyWe have increased the max_connections = 1000 in postgresql.conf file.It worked ok for a day and later we get the same error message.Please help to advise on any additional settings required. The prior Postgresql 9.4 had the default \nmax_connections = 100\n\nand the applications worked fine.I guess that ETL is pentaho?You can try to use the latest JDBC driver (42.3.5)\n\n.regards,Ranier Vilela",
"msg_date": "Wed, 11 May 2022 09:44:59 +0800",
"msg_from": "Sudhir Guna <sudhir.guna.sg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: DB connection issue suggestions"
},
{
"msg_contents": "Hi MichaelDBA,\n\nThank you for reviewing.\n\nI had validated the show max_connections and its 1000.\n\n[image: image.png]\n\n\nRegards,\nGuna\n\nOn Wed, May 11, 2022 at 2:13 AM MichaelDBA Vitale <michaeldba@sqlexec.com>\nwrote:\n\n> Please show output of \"show max_connections\" to validate your assumptions.\n>\n> On 05/10/2022 12:59 PM Sudhir Guna <sudhir.guna.sg@gmail.com> wrote:\n>\n>\n> Dear All,\n>\n> We have recently upgraded Postgresql 9.4 standalone server to Postgresql\n> 11.2 with High Availability (2 servers : Master and Standby).\n>\n> While trying to test using ETL applications and reports, we observe that\n> the ETL jobs fails with below error,\n>\n> 2022/05/06 16:27:36 - Error occurred while trying to connect to the\n> database\n> 2022/05/06 16:27:36 - Error connecting to database: (using class\n> org.postgresql.Driver)\n> 2022/05/06 16:27:36 - FATAL: Sorry, too many clients already\n>\n> We have increased the max_connections = 1000 in postgresql.conf file.\n>\n> It worked ok for a day and later we get the same error message.\n>\n> Please help to advise on any additional settings required. The prior\n> Postgresql 9.4 had the default max_connections = 100 and the applications\n> worked fine.\n>\n> Regards,\n> Guna\n>\n>",
"msg_date": "Wed, 11 May 2022 09:46:45 +0800",
"msg_from": "Sudhir Guna <sudhir.guna.sg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: DB connection issue suggestions"
},
{
"msg_contents": "Hi Justin,\n\nThank you for reviewing.\n\nI have tried to run the below query and could see only less than 5\nconnections active when I get this error. The total rows I see is only 10\nincluding idle and active sessions for this output.\n\nselect pid as process_id,\nusename as username,\ndatname as database_name,\nclient_addr as client_address,\napplication_name,\nbackend_start,\nstate,\nstate_change\nfrom pg_stat_activity;\n\nRegards,\nGuna\n\nOn Wed, May 11, 2022 at 2:15 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Wed, May 11, 2022 at 12:59:01AM +0800, Sudhir Guna wrote:\n> > Dear All,\n> >\n> > We have recently upgraded Postgresql 9.4 standalone server to Postgresql\n> > 11.2 with High Availability (2 servers : Master and Standby).\n> >\n> > While trying to test using ETL applications and reports, we observe that\n> > the ETL jobs fails with below error,\n> >\n> > 2022/05/06 16:27:36 - Error occurred while trying to connect to the\n> database\n> > 2022/05/06 16:27:36 - Error connecting to database: (using class\n> > org.postgresql.Driver)\n> > 2022/05/06 16:27:36 - FATAL: Sorry, too many clients already\n> >\n> > We have increased the max_connections = 1000 in postgresql.conf file.\n> >\n> > It worked ok for a day and later we get the same error message.\n> >\n> > Please help to advise on any additional settings required. The prior\n> > Postgresql 9.4 had the default max_connections = 100 and the applications\n> > worked fine.\n>\n> It sounds like at least one thing is still running, perhaps running very\n> slowly.\n>\n> You should monitor the number of connections to figure out what.\n>\n> If you expect to be able to run with only 100 connections, then when\n> connections>200, there's already over 100 connections which shouldn't\n> still be\n> there.\n>\n> You could query pg_stat_activity to determine what they're doing - trying\n> to\n> run a slow query ? 
Are all/most of them stuck doing the same thing ?\n>\n> You should try to provide the information here for the slow query, and for\n> the\n> rest of your environment.\n>\n> https://wiki.postgresql.org/wiki/Slow_Query_Questions\n>\n> --\n> Justin\n>\n\nHi Justin,Thank you for reviewing.I have tried to run the below query and could see only less than 5 connections active when I get this error. The total rows I see is only 10 including idle and active sessions for this output.\nselect pid as process_id,usename as username,datname as database_name,client_addr as client_address,application_name,backend_start,state,state_changefrom pg_stat_activity; Regards,GunaOn Wed, May 11, 2022 at 2:15 AM Justin Pryzby <pryzby@telsasoft.com> wrote:On Wed, May 11, 2022 at 12:59:01AM +0800, Sudhir Guna wrote:\n> Dear All,\n> \n> We have recently upgraded Postgresql 9.4 standalone server to Postgresql\n> 11.2 with High Availability (2 servers : Master and Standby).\n> \n> While trying to test using ETL applications and reports, we observe that\n> the ETL jobs fails with below error,\n> \n> 2022/05/06 16:27:36 - Error occurred while trying to connect to the database\n> 2022/05/06 16:27:36 - Error connecting to database: (using class\n> org.postgresql.Driver)\n> 2022/05/06 16:27:36 - FATAL: Sorry, too many clients already\n> \n> We have increased the max_connections = 1000 in postgresql.conf file.\n> \n> It worked ok for a day and later we get the same error message.\n> \n> Please help to advise on any additional settings required. 
The prior\n> Postgresql 9.4 had the default max_connections = 100 and the applications\n> worked fine.\n\nIt sounds like at least one thing is still running, perhaps running very\nslowly.\n\nYou should monitor the number of connections to figure out what.\n\nIf you expect to be able to run with only 100 connections, then when\nconnections>200, there's already over 100 connections which shouldn't still be\nthere.\n\nYou could query pg_stat_activity to determine what they're doing - trying to\nrun a slow query ? Are all/most of them stuck doing the same thing ?\n\nYou should try to provide the information here for the slow query, and for the\nrest of your environment.\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\n-- \nJustin",
"msg_date": "Wed, 11 May 2022 09:52:10 +0800",
"msg_from": "Sudhir Guna <sudhir.guna.sg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: DB connection issue suggestions"
},
{
"msg_contents": "On Wed, 2022-05-11 at 00:59 +0800, Sudhir Guna wrote:\n> We have recently upgraded Postgresql 9.4 standalone server to Postgresql 11.2 with High Availability (2 servers : Master and Standby).\n> \n> While trying to test using ETL applications and reports, we observe that the ETL jobs fails with below error,\n> \n> 2022/05/06 16:27:36 - Error occurred while trying to connect to the database\n> 2022/05/06 16:27:36 - Error connecting to database: (using class org.postgresql.Driver)\n> 2022/05/06 16:27:36 - FATAL: Sorry, too many clients already\n> \n> We have increased the max_connections = 1000 in postgresql.conf file.\n> \n> It worked ok for a day and later we get the same error message.\n> \n> Please help to advise on any additional settings required. The prior Postgresql 9.4 had the default max_connections = 100and the applications worked fine.\n\nSome application that uses the database has a connection leak: it opens new connections\nwithout closing old ones. Examine \"pg_stat_activity\" to find out which application is\nat fault, and then go and fix that application.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Wed, 11 May 2022 08:13:04 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: DB connection issue suggestions"
},
{
"msg_contents": "Dear team,\n\nCan you confirm whether, post upgrade activity, all the post-upgrade steps including stats update on all the relations is complete. Upgrade doesn’t carry over the stats to the new upgraded cluster.\n\nRegards,\nAnbazhagan M\n\n> On 11-May-2022, at 11:43 AM, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> \n> On Wed, 2022-05-11 at 00:59 +0800, Sudhir Guna wrote:\n>> We have recently upgraded Postgresql 9.4 standalone server to Postgresql 11.2 with High Availability (2 servers : Master and Standby).\n>> \n>> While trying to test using ETL applications and reports, we observe that the ETL jobs fails with below error,\n>> \n>> 2022/05/06 16:27:36 - Error occurred while trying to connect to the database\n>> 2022/05/06 16:27:36 - Error connecting to database: (using class org.postgresql.Driver)\n>> 2022/05/06 16:27:36 - FATAL: Sorry, too many clients already\n>> \n>> We have increased the max_connections = 1000 in postgresql.conf file.\n>> \n>> It worked ok for a day and later we get the same error message.\n>> \n>> Please help to advise on any additional settings required. The prior Postgresql 9.4 had the default max_connections = 100and the applications worked fine.\n> \n> Some application that uses the database has a connection leak: it opens new connections\n> without closing old ones. Examine \"pg_stat_activity\" to find out which application is\n> at fault, and then go and fix that application.\n> \n> Yours,\n> Laurenz Albe\n> -- \n> Cybertec | https://www.cybertec-postgresql.com\n> \n> \n\n\n\n",
"msg_date": "Wed, 11 May 2022 12:00:52 +0530",
"msg_from": "Anbazhagan M <anbazhagan.m@instaclustr.com>",
"msg_from_op": false,
"msg_subject": "Re: DB connection issue suggestions"
},
{
"msg_contents": "Hi Ranier,\n\nWe have tried to upgrade the postgresql- 42.3.5 .jarand unfortunately the\nissue still persists.\n\nRegards,\nGuna\n\nOn Wed, May 11, 2022 at 9:44 AM Sudhir Guna <sudhir.guna.sg@gmail.com>\nwrote:\n\n> Hi Ranier,\n>\n> Thank you for reviewing this.\n>\n> Yes this is Pentaho and SSRS application.\n>\n> We are currently using postgresql-42.2.4.jar currently.\n>\n> Regards,\n> Guna\n>\n> On Wed, May 11, 2022 at 2:55 AM Ranier Vilela <ranier.vf@gmail.com> wrote:\n>\n>>\n>>\n>> Em ter., 10 de mai. de 2022 às 14:49, Sudhir Guna <\n>> sudhir.guna.sg@gmail.com> escreveu:\n>>\n>>> Dear All,\n>>>\n>>> We have recently upgraded Postgresql 9.4 standalone server to Postgresql\n>>> 11.2 with High Availability (2 servers : Master and Standby).\n>>>\n>>> While trying to test using ETL applications and reports, we observe that\n>>> the ETL jobs fails with below error,\n>>>\n>>> 2022/05/06 16:27:36 - Error occurred while trying to connect to the\n>>> database\n>>> 2022/05/06 16:27:36 - Error connecting to database: (using class\n>>> org.postgresql.Driver)\n>>> 2022/05/06 16:27:36 - FATAL: Sorry, too many clients already\n>>>\n>>> We have increased the max_connections = 1000 in postgresql.conf file.\n>>>\n>>> It worked ok for a day and later we get the same error message.\n>>>\n>>> Please help to advise on any additional settings required. 
The prior\n>>> Postgresql 9.4 had the default max_connections = 100 and the\n>>> applications worked fine.\n>>>\n>> I guess that ETL is pentaho?\n>> You can try to use the latest JDBC driver (42.3.5) .\n>>\n>> regards,\n>> Ranier Vilela\n>>\n>\n\nHi Ranier,We have tried to upgrade the \npostgresql-\n42.3.5\n\n.jarand unfortunately the issue still persists.Regards,GunaOn Wed, May 11, 2022 at 9:44 AM Sudhir Guna <sudhir.guna.sg@gmail.com> wrote:Hi Ranier,Thank you for reviewing this.Yes this is Pentaho and SSRS application.We are currently using postgresql-42.2.4.jar currently.Regards,GunaOn Wed, May 11, 2022 at 2:55 AM Ranier Vilela <ranier.vf@gmail.com> wrote:Em ter., 10 de mai. de 2022 às 14:49, Sudhir Guna <sudhir.guna.sg@gmail.com> escreveu:\nDear All,We have recently upgraded \nPostgresql 9.4 standalone server to Postgresql 11.2 with High \nAvailability (2 servers : Master and Standby).While trying to test using ETL applications and reports, we observe that the ETL jobs fails with below error,2022/05/06 16:27:36 - Error occurred while trying to connect to the database2022/05/06 16:27:36 - Error connecting to database: (using class org.postgresql.Driver)2022/05/06 16:27:36 - FATAL: Sorry, too many clients alreadyWe have increased the max_connections = 1000 in postgresql.conf file.It worked ok for a day and later we get the same error message.Please help to advise on any additional settings required. The prior Postgresql 9.4 had the default \nmax_connections = 100\n\nand the applications worked fine.I guess that ETL is pentaho?You can try to use the latest JDBC driver (42.3.5)\n\n.regards,Ranier Vilela",
"msg_date": "Wed, 11 May 2022 15:11:53 +0800",
"msg_from": "Sudhir Guna <sudhir.guna.sg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: DB connection issue suggestions"
},
{
"msg_contents": "Em qua., 11 de mai. de 2022 às 04:18, Sudhir Guna <sudhir.guna.sg@gmail.com>\nescreveu:\n\n> Hi MichaelDBA,\n>\n> Thank you for reviewing.\n>\n> I had validated the show max_connections and its 1000.\n>\nI think that you are wasting resources with this configuration.\nTry enabling Connection Pool at Pentaho configuration.\nAnd set the *Pool Size* (Maximum) to 100 for Pentaho and 100 for Postgres\n(max_connections).\nUnder Advanced Options (DataSource Windows) enable Connection Pool.\n\nProbably Pentaho is trying to use more connections than Postgres allows.\n\nregards,\nRanier Vilela\n\nEm qua., 11 de mai. de 2022 às 04:18, Sudhir Guna <sudhir.guna.sg@gmail.com> escreveu:Hi MichaelDBA,Thank you for reviewing.I had validated the show max_connections and its 1000.I think that you are wasting resources with this configuration.Try enabling Connection Pool at Pentaho configuration.And set the \nPool Size (Maximum) to 100 for Pentaho and 100 for Postgres (max_connections).Under Advanced Options (DataSource Windows) enable Connection Pool.Probably Pentaho is trying to use more connections than Postgres allows.regards,Ranier Vilela",
"msg_date": "Wed, 11 May 2022 08:45:04 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DB connection issue suggestions"
},
{
"msg_contents": "On Wed, May 11, 2022 at 09:52:10AM +0800, Sudhir Guna wrote:\n> Hi Justin,\n> \n> Thank you for reviewing.\n> \n> I have tried to run the below query and could see only less than 5\n> connections active when I get this error. The total rows I see is only 10\n> including idle and active sessions for this output.\n\nThat doesn't sound right. Are you sure you're connecting to the correct\ninstance ? Are there really only 5 postgres processes on the server, and fewer\nthan 5 connections to its network port or socket ?\n\nYou didn't provide any other info like what OS this is.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 11 May 2022 14:09:12 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: DB connection issue suggestions"
},
{
"msg_contents": "Hi Justin,\n\nYes , I have checked pg_stat_activity from both the master node and the\nstandby node server and the total rows of the connection doesn't even\nexceed 10.\n\nSorry the OS is Red Hat Enterprise Linux Server 7.5 (Maipo).\n\nDoes the streaming replication between the master and standby node have any\nimpact to this ?\n\n[image: image.png]\n\nRegards,\nGuna\n\nOn Thu, May 12, 2022 at 3:09 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Wed, May 11, 2022 at 09:52:10AM +0800, Sudhir Guna wrote:\n> > Hi Justin,\n> >\n> > Thank you for reviewing.\n> >\n> > I have tried to run the below query and could see only less than 5\n> > connections active when I get this error. The total rows I see is only 10\n> > including idle and active sessions for this output.\n>\n> That doesn't sound right. Are you sure you're connecting to the correct\n> instance ? Are there really only 5 postgres processes on the server, and\n> fewer\n> than 5 connections to its network port or socket ?\n>\n> You didn't provide any other info like what OS this is.\n>\n> --\n> Justin\n>",
"msg_date": "Thu, 12 May 2022 12:53:18 +0800",
"msg_from": "Sudhir Guna <sudhir.guna.sg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: DB connection issue suggestions"
},
{
"msg_contents": "If the problem occurs gradually (like leaking 20 connections per hour during\nETL), you can check pg_stat_activity every hour or so to try to observe the\nproblem before all the connection slots are used up, to collect diagnostic\ninformation.\n\nAlternately, leave a connection opened to the DB and wait until all connection\nslots *are* used up, and then check pg_stat_activity. That will take longer,\nand you'll have more information to weed through.\n\nWhat messages are in the server's log ?\n\nv11.2 is years old and hundreds of bugfixes behind. Since you ran into this\nproblem anyway, why not run 11.16, which was released today ?\n\nHow did you install postgres 11 ? From source or from packages ? Which\npackages ? The semi-official PGDG RPM packages are available here:\nhttps://yum.postgresql.org/\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 12 May 2022 23:08:28 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: DB connection issue suggestions"
}
] |
[
{
"msg_contents": "Hi Team,\n\nWe are facing an issue in running the query which takes at least 30 sec to run in PostgreSQL.\n\nWe have tried to create the indexes and done the maintenance and still that query is taking same time.\n\nBelow are the explain plan for the query.\n\nhttps://explain.depesz.com/s/sPo2#html\n\nWe have noticed that maximum time it is takin is do a Seq Scan on Table ps_delay_statistic which consist of approx. 35344812 records .\n\nCan anyone please help on the above issue.\n\nThanks and Regards,\nMukesh Kumar\n\n\n\n\n\n\n\n\n\n\nHi Team,\n\n \nWe are facing an issue in running the query which takes at least 30 sec to run in PostgreSQL.\n \nWe have tried to create the indexes and done the maintenance and still that query is taking same time.\n \nBelow are the explain plan for the query.\n \nhttps://explain.depesz.com/s/sPo2#html\n \nWe have noticed that maximum time it is takin is do a Seq Scan on Table ps_delay_statistic which consist of approx. 35344812 records .\n \nCan anyone please help on the above issue.\n \nThanks and Regards, \nMukesh Kumar",
"msg_date": "Fri, 20 May 2022 07:37:44 +0000",
"msg_from": "\"Kumar, Mukesh\" <MKumar@peabodyenergy.com>",
"msg_from_op": true,
"msg_subject": "Need help on Query Tunning and Not using the Index Scan "
},
{
"msg_contents": "On Fri, 2022-05-20 at 07:37 +0000, Kumar, Mukesh wrote:\n> We are facing an issue in running the query which takes at least 30 sec to run in PostgreSQL.\n> \n> We have tried to create the indexes and done the maintenance and still that query is taking same time.\n> \n> Below are the explain plan for the query.\n> \n> https://explain.depesz.com/s/sPo2#html\n> \n> We have noticed that maximum time it is takin is do a Seq Scan on Table ps_delay_statistic which consist of approx. 35344812 records .\n> \n> Can anyone please help on the above issue.\n\nThe problem is probably here:\n\n-> GroupAggregate (cost=0.57..18153.25 rows=2052 width=23) (actual time=13.764..13.765 rows=1 loops=1)\n Group Key: ds_1.fleet_object_number_f\"\n -> Index Scan using ndx_delay_stat_equipment on ps_delay_statistic ds_1 (cost=0.57..18050.67 rows=16412 width=23) (actual time=0.026..10.991 rows=18180 loops=1)\n Index Cond: (fleet_object_number_f = (COALESCE(NULLIF('4000100000000000277313'::text, ''::text)))::numeric)\n Filter: (activity_code_f IS NOT NULL)\n\nwhich comes from this subquery:\n\nSELECT max(dp1.daily_production_id) prodId\n FROM ps_daily_production_v dp1\nWHERE dp1.fleet_object_number = cast(coalesce(nullif (cast(4000100000000000277313 AS varchar), ''), NULL) AS numeric)\n AND dp1.activity_code IS NOT NULL\nGROUP BY dp1.fleet_object_number\n\nRemove the superfluous GROUP BY clause that confuses the optimizer.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Fri, 20 May 2022 12:37:17 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Need help on Query Tunning and Not using the Index Scan"
}
] |
[
{
"msg_contents": "Hi,\nOne of our applications needs 3000 max_connections to the database.\nConnection pooler like pgbouncer or pgpool is not certified within the\norganization yet. So they are looking for setting up high configuration\nHardware with CPU and Memory. Can someone advise how much memory and CPU\nthey will need if they want max_conenction value=3000.\n\nRegards,\nAditya.\n\nHi,One of our applications needs 3000 max_connections to the database. Connection pooler like pgbouncer or pgpool is not certified within the organization yet. So they are looking for setting up high configuration Hardware with CPU and Memory. Can someone advise how much memory and CPU they will need if they want max_conenction value=3000.Regards,Aditya.",
"msg_date": "Fri, 20 May 2022 13:57:50 +0530",
"msg_from": "aditya desai <admad123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Selecting RAM and CPU based on max_connections"
},
{
"msg_contents": "On 20 May 2022 10:27:50 CEST, aditya desai <admad123@gmail.com> wrote:\n>Hi,\n>One of our applications needs 3000 max_connections to the database.\n>Connection pooler like pgbouncer or pgpool is not certified within the\n>organization yet. So they are looking for setting up high configuration\n>Hardware with CPU and Memory. Can someone advise how much memory and CPU\n>they will need if they want max_conenction value=3000.\n>\n>Regards,\n>Aditya.\n\nPgbouncer would be the best solution. CPU: number of concurrent connections. RAM: shared_buffer + max_connections * work_mem + maintenance_mem + operating system + ...\n \n\n\n-- \n2ndQuadrant - The PostgreSQL Support Company\n\n\n",
"msg_date": "Fri, 20 May 2022 12:15:47 +0200",
"msg_from": "Andreas Kretschmer <andreas@a-kretschmer.de>",
"msg_from_op": false,
"msg_subject": "Re: Selecting RAM and CPU based on max_connections"
},
{
"msg_contents": "On Fri, 2022-05-20 at 12:15 +0200, Andreas Kretschmer wrote:\n> On 20 May 2022 10:27:50 CEST, aditya desai <admad123@gmail.com> wrote:\n> > One of our applications needs 3000 max_connections to the database.\n> > Connection pooler like pgbouncer or pgpool is not certified within the\n> > organization yet. So they are looking for setting up high configuration\n> > Hardware with CPU and Memory. Can someone advise how much memory and CPU\n> > they will need if they want max_conenction value=3000.\n> \n> Pgbouncer would be the best solution. CPU: number of concurrent connections.\n> RAM: shared_buffer + max_connections * work_mem + maintenance_mem + operating system + ...\n\nRight. And then hope and pray that a) the database doesn't get overloaded\nand b) you don't hit any of the database-internal bottlenecks caused by many\nconnections.\n\nI also got the feeling that the Linux kernel's memory accounting somehow lags.\nI have seen cases where every snapshot of \"pg_stat_activity\" I took showed\nonly a few active connections (but each time different ones), but the\namount of allocated memory exceeded what the currently active sessions could\nconsume. I may have made a mistake, and I have no reproducer, but I would\nbe curious to know if there is an explanation for that.\n(I am aware that \"top\" shows shared buffers multiple times).\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Fri, 20 May 2022 12:31:35 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Selecting RAM and CPU based on max_connections"
},
{
"msg_contents": "Thanks! I will run these suggestions with App team.\n\nOn Fri, May 20, 2022 at 4:01 PM Laurenz Albe <laurenz.albe@cybertec.at>\nwrote:\n\n> On Fri, 2022-05-20 at 12:15 +0200, Andreas Kretschmer wrote:\n> > On 20 May 2022 10:27:50 CEST, aditya desai <admad123@gmail.com> wrote:\n> > > One of our applications needs 3000 max_connections to the database.\n> > > Connection pooler like pgbouncer or pgpool is not certified within the\n> > > organization yet. So they are looking for setting up high configuration\n> > > Hardware with CPU and Memory. Can someone advise how much memory and\n> CPU\n> > > they will need if they want max_conenction value=3000.\n> >\n> > Pgbouncer would be the best solution. CPU: number of concurrent\n> connections.\n> > RAM: shared_buffer + max_connections * work_mem + maintenance_mem +\n> operating system + ...\n>\n> Right. And then hope and pray that a) the database doesn't get overloaded\n> and b) you don't hit any of the database-internal bottlenecks caused by\n> many\n> connections.\n>\n> I also got the feeling that the Linux kernel's memory accounting somehow\n> lags.\n> I have seen cases where every snapshot of \"pg_stat_activity\" I took showed\n> only a few active connections (but each time different ones), but the\n> amount of allocated memory exceeded what the currently active sessions\n> could\n> consume. I may have made a mistake, and I have no reproducer, but I would\n> be curious to know if there is an explanation for that.\n> (I am aware that \"top\" shows shared buffers multiple times).\n>\n> Yours,\n> Laurenz Albe\n>\n\nThanks! 
I will run these suggestions with App team.On Fri, May 20, 2022 at 4:01 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:On Fri, 2022-05-20 at 12:15 +0200, Andreas Kretschmer wrote:\n> On 20 May 2022 10:27:50 CEST, aditya desai <admad123@gmail.com> wrote:\n> > One of our applications needs 3000 max_connections to the database.\n> > Connection pooler like pgbouncer or pgpool is not certified within the\n> > organization yet. So they are looking for setting up high configuration\n> > Hardware with CPU and Memory. Can someone advise how much memory and CPU\n> > they will need if they want max_conenction value=3000.\n> \n> Pgbouncer would be the best solution. CPU: number of concurrent connections.\n> RAM: shared_buffer + max_connections * work_mem + maintenance_mem + operating system + ...\n\nRight. And then hope and pray that a) the database doesn't get overloaded\nand b) you don't hit any of the database-internal bottlenecks caused by many\nconnections.\n\nI also got the feeling that the Linux kernel's memory accounting somehow lags.\nI have seen cases where every snapshot of \"pg_stat_activity\" I took showed\nonly a few active connections (but each time different ones), but the\namount of allocated memory exceeded what the currently active sessions could\nconsume. I may have made a mistake, and I have no reproducer, but I would\nbe curious to know if there is an explanation for that.\n(I am aware that \"top\" shows shared buffers multiple times).\n\nYours,\nLaurenz Albe",
"msg_date": "Fri, 20 May 2022 21:10:37 +0530",
"msg_from": "aditya desai <admad123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Selecting RAM and CPU based on max_connections"
},
{
"msg_contents": "You may also need to tune shmmax and shmmin kernel parameters.\n\nRegards,\nGanesh Korde.\n\nOn Fri, 20 May 2022, 1:58 pm aditya desai, <admad123@gmail.com> wrote:\n\n> Hi,\n> One of our applications needs 3000 max_connections to the database.\n> Connection pooler like pgbouncer or pgpool is not certified within the\n> organization yet. So they are looking for setting up high configuration\n> Hardware with CPU and Memory. Can someone advise how much memory and CPU\n> they will need if they want max_conenction value=3000.\n>\n> Regards,\n> Aditya.\n>\n\nYou may also need to tune shmmax and shmmin kernel parameters.Regards,Ganesh Korde.On Fri, 20 May 2022, 1:58 pm aditya desai, <admad123@gmail.com> wrote:Hi,One of our applications needs 3000 max_connections to the database. Connection pooler like pgbouncer or pgpool is not certified within the organization yet. So they are looking for setting up high configuration Hardware with CPU and Memory. Can someone advise how much memory and CPU they will need if they want max_conenction value=3000.Regards,Aditya.",
"msg_date": "Fri, 20 May 2022 23:29:40 +0530",
"msg_from": "Ganesh Korde <ganeshakorde@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Selecting RAM and CPU based on max_connections"
}
] |
[
{
"msg_contents": "Hi All\n\n I am a Database DBA. I focus on PostgreSQL and DB2.\n Recently. I experience some memory issue. The postgres unable allocate\nmemory. I don't know how to monitor Postgres memory usage.\n I try to search some document. But not found any useful information.\n This server have 16G memory. On that time. The free command display only 3\nG memory used. The share_buffers almost 6G.\n\n On that time. The server have 100 active applications.\n New connection failed. I have to kill some application by os command \"kill\n-9\"\nThe checkpoint command execute very slow. almost need 5-10 seconds.\n\n[image: 图片.png]\n\n\n Is there any useful command to summary PostgreSQL memory usage ?\n\n How to analyse this memory issue ? Thanks for your help.\n\n2022-05-23 17:42:51.541 CST,,,21731,,6288963b.54e3,8055,,2022-05-21\n15:35:23 CST,,0,LOG,00000,\"could not fork autovacuum worker process: Cannot\nallocate memory\",,,,,,,,,\"\"\n2022-05-23 17:42:51.627 CST,,,21731,,6288963b.54e3,8056,,2022-05-21\n15:35:23 CST,,0,LOG,00000,\"could not fork new process for connection:\nCannot allocate memory\",,,,,,,,,\"\"\n2022-05-23 17:42:51.627 CST,,,21731,,6288963b.54e3,8057,,2022-05-21\n15:35:23 CST,,0,LOG,00000,\"could not fork new process for connection:\nCannot allocate memory\",,,,,,,,,\"\"\n2022-05-23 17:42:51.628 CST,,,21731,,6288963b.54e3,8058,,2022-05-21\n15:35:23 CST,,0,LOG,00000,\"could not fork new process for connection:\nCannot allocate memory\",,,,,,,,,\"\"\n2022-05-23 17:42:51.628 CST,,,21731,,6288963b.54e3,8059,,2022-05-21\n15:35:23 CST,,0,LOG,00000,\"could not fork new process for connection:\nCannot allocate memory\",,,,,,,,,\"\"\n2022-05-23 17:42:52.130 CST,,,21731,,6288963b.54e3,8060,,2022-05-21\n15:35:23 CST,,0,LOG,00000,\"could not fork new process for connection:\nCannot allocate memory\",,,,,,,,,\"\"\n2022-05-23 17:42:52.130 CST,,,21731,,6288963b.54e3,8061,,2022-05-21\n15:35:23 CST,,0,LOG,00000,\"could not fork new process for connection:\nCannot 
allocate memory\",,,,,,,,,\"\"\n2022-05-23 17:42:52.131 CST,,,21731,,6288963b.54e3,8062,,2022-05-21\n15:35:23 CST,,0,LOG,00000,\"could not fork new process for connection:\nCannot allocate memory\",,,,,,,,,\"\"\n2022-05-23 17:42:52.131 CST,,,21731,,6288963b.54e3,8063,,2022-05-21\n15:35:23 CST,,0,LOG,00000,\"could not fork new process for connection:\nCannot allocate memory\",,,,,,,,,\"\"\n2022-05-23 17:42:52.543 CST,,,21731,,6288963b.54e3,8064,,2022-05-21\n15:35:23 CST,,0,LOG,00000,\"could not fork autovacuum worker process: Cannot\nallocate memory\",,,,,,,,,\"\"",
"msg_date": "Wed, 25 May 2022 00:25:28 +0800",
"msg_from": "=?UTF-8?B?5b6Q5b+X5a6H5b6Q?= <xuzhiyuster@gmail.com>",
"msg_from_op": true,
"msg_subject": "How to monitor Postgres real memory usage"
},
{
"msg_contents": "On Wed, May 25, 2022 at 12:25:28AM +0800, 徐志宇徐 wrote:\n> Hi All\n> \n> I am a Database DBA. I focus on PostgreSQL and DB2.\n> Recently. I experience some memory issue. The postgres unable allocate\n> memory. I don't know how to monitor Postgres memory usage.\n\nPostgres is just an OS Process, so should be monitored like any other.\n\nWhat OS are you using ?\n\nKnow that the OS may attribute \"shared buffers\" to different processes, or\nmultiple processes.\n\n> This server have 16G memory. On that time. The free command display only 3\n> G memory used. The share_buffers almost 6G.\n> \n> On that time. The server have 100 active applications.\n> New connection failed. I have to kill some application by os command \"kill -9\"\n\nIt's almost always a bad idea to kill postgres with kill -9.\n\n> The checkpoint command execute very slow. almost need 5-10 seconds.\n\nDo you mean an interactive checkpoint command ?\nOr logs from log_checkpoint ?\n\n> Is there any useful command to summary PostgreSQL memory usage ?\n\nYou can check memory use of an individual query with \"explain (analyze,buffers) ..\"\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nWhat settings have you used in postgres ?\nhttps://wiki.postgresql.org/wiki/Server_Configuration\n\nWhat postgres version ?\nHow was it installed ? From souce? From a package ?\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 24 May 2022 12:40:50 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: How to monitor Postgres real memory usage"
},
{
"msg_contents": "Hi Justin\n\n Thanks for your update.\n\n Postgres is just an OS Process, so should be monitored like any other.\n\nWhat OS are you using ?\n\n> I am using Centos 7.5.\n\nKnow that the OS may attribute \"shared buffers\" to different processes, or\nmultiple processes.\n\nIt's almost always a bad idea to kill postgres with kill -9.\n\n> I unable to connect to database server. I have to kill some process to\nrelease memory. Then I could connect it.\n\n What settings have you used in postgres ?\nhttps://wiki.postgresql.org/wiki/Server_Configuration\n\n\n> Please reference my attachment.\n\nYou can check memory use of an individual query with \"explain\n(analyze,buffers) ..\"\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nThanks for your update. This memory allocation failed issue impact the\nwhole database running. not a slow query.\nIs there any commands or method could get totally Postgres memory\nutilization ? Thanks .\n\nJustin Pryzby <pryzby@telsasoft.com> 于2022年5月25日周三 01:40写道:\n\n> On Wed, May 25, 2022 at 12:25:28AM +0800, 徐志宇徐 wrote:\n> > Hi All\n> >\n> > I am a Database DBA. I focus on PostgreSQL and DB2.\n> > Recently. I experience some memory issue. The postgres unable allocate\n> > memory. I don't know how to monitor Postgres memory usage.\n>\n> Postgres is just an OS Process, so should be monitored like any other.\n>\n> What OS are you using ?\n>\n> Know that the OS may attribute \"shared buffers\" to different processes, or\n> multiple processes.\n>\n> > This server have 16G memory. On that time. The free command display\n> only 3\n> > G memory used. The share_buffers almost 6G.\n> >\n> > On that time. The server have 100 active applications.\n> > New connection failed. I have to kill some application by os command\n> \"kill -9\"\n>\n> It's almost always a bad idea to kill postgres with kill -9.\n>\n> > The checkpoint command execute very slow. 
almost need 5-10 seconds.\n>\n> Do you mean an interactive checkpoint command ?\n> Or logs from log_checkpoint ?\n>\n> > Is there any useful command to summary PostgreSQL memory usage ?\n>\n> You can check memory use of an individual query with \"explain\n> (analyze,buffers) ..\"\n> https://wiki.postgresql.org/wiki/Slow_Query_Questions\n>\n> What settings have you used in postgres ?\n> https://wiki.postgresql.org/wiki/Server_Configuration\n>\n> What postgres version ?\n> How was it installed ? From souce? From a package ?\n>\n> --\n> Justin\n>",
"msg_date": "Thu, 26 May 2022 23:36:44 +0800",
"msg_from": "=?UTF-8?B?5b6Q5b+X5a6H5b6Q?= <xuzhiyuster@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: How to monitor Postgres real memory usage"
},
{
"msg_contents": "Hi Justin\n\n I list the server configuration for your reference.\n\npostgres=# SELECT name, current_setting(name), source\npostgres-# FROM pg_settings\npostgres-# WHERE source NOT IN ('default', 'override');\n name | current_setting |\n source\n---------------------------------+-------------------------------------+----------------------\n application_name | psql |\nclient\n archive_command | cp %p /data/postgres/archive_log/%f |\nconfiguration file\n archive_mode | on |\nconfiguration file\n auto_explain.log_min_duration | 10s |\nconfiguration file\n autovacuum_analyze_scale_factor | 1e-05 |\nconfiguration file\n autovacuum_analyze_threshold | 5 |\nconfiguration file\n autovacuum_max_workers | 20 |\nconfiguration file\n autovacuum_vacuum_scale_factor | 0.0002 |\nconfiguration file\n autovacuum_vacuum_threshold | 5 |\nconfiguration file\n bgwriter_delay | 20ms |\nconfiguration file\n bgwriter_lru_maxpages | 400 |\nconfiguration file\n client_encoding | UTF8 |\nclient\n DateStyle | ISO, MDY |\nconfiguration file\n default_text_search_config | pg_catalog.english |\nconfiguration file\n dynamic_shared_memory_type | posix |\nconfiguration file\n enable_seqscan | off |\nconfiguration file\n lc_messages | en_US.UTF-8 |\nconfiguration file\n lc_monetary | en_US.UTF-8 |\nconfiguration file\n lc_numeric | en_US.UTF-8 |\nconfiguration file\n lc_time | en_US.UTF-8 |\nconfiguration file\n listen_addresses | * |\nconfiguration file\n lock_timeout | 5min |\nconfiguration file\n log_connections | on |\nconfiguration file\n log_destination | csvlog |\nconfiguration file\n log_directory | log |\nconfiguration file\n log_lock_waits | on |\nconfiguration file\n log_min_duration_statement | 10s |\nconfiguration file\n log_rotation_size | 30MB |\nconfiguration file\n log_statement | ddl |\nconfiguration file\n log_timezone | PRC |\nconfiguration file\n log_truncate_on_rotation | on |\nconfiguration file\n logging_collector | on |\nconfiguration file\n 
maintenance_work_mem | 64MB |\nconfiguration file\n max_connections | 1000 |\nconfiguration file\n max_parallel_workers_per_gather | 4 |\nconfiguration file\n max_stack_depth | 2MB |\nenvironment variable\n max_wal_size | 4GB |\nconfiguration file\n max_worker_processes | 4 |\nconfiguration file\n min_wal_size | 320MB |\nconfiguration file\n pg_stat_statements.max | 1000 |\nconfiguration file\n pg_stat_statements.track | all |\nconfiguration file\n port | 5432 |\nconfiguration file\n shared_buffers | 6352MB |\nconfiguration file\n shared_preload_libraries | pg_stat_statements,auto_explain |\nconfiguration file\n temp_buffers | 32MB |\nconfiguration file\n TimeZone | PRC |\nconfiguration file\n track_activities | on |\nconfiguration file\n track_commit_timestamp | off |\nconfiguration file\n track_counts | on |\nconfiguration file\n track_functions | all |\nconfiguration file\n track_io_timing | on |\nconfiguration file\n vacuum_cost_limit | 2000 |\nconfiguration file\n wal_compression | on |\nconfiguration file\n wal_keep_segments | 128 |\nconfiguration file\n wal_level | replica |\nconfiguration file\n work_mem | 40MB |\nconfiguration file\n(56 rows)\n\n徐志宇徐 <xuzhiyuster@gmail.com> 于2022年5月26日周四 23:36写道:\n\n> Hi Justin\n>\n> Thanks for your update.\n>\n> Postgres is just an OS Process, so should be monitored like any other.\n>\n> What OS are you using ?\n>\n> > I am using Centos 7.5.\n>\n> Know that the OS may attribute \"shared buffers\" to different processes, or\n> multiple processes.\n>\n> It's almost always a bad idea to kill postgres with kill -9.\n>\n> > I unable to connect to database server. I have to kill some process to\n> release memory. 
Then I could connect it.\n>\n> What settings have you used in postgres ?\n> https://wiki.postgresql.org/wiki/Server_Configuration\n>\n>\n> > Please reference my attachment.\n>\n> You can check memory use of an individual query with \"explain\n> (analyze,buffers) ..\"\n> https://wiki.postgresql.org/wiki/Slow_Query_Questions\n>\n> Thanks for your update. This memory allocation failed issue impact the\n> whole database running. not a slow query.\n> Is there any commands or method could get totally Postgres memory\n> utilization ? Thanks .\n>\n> Justin Pryzby <pryzby@telsasoft.com> 于2022年5月25日周三 01:40写道:\n>\n>> On Wed, May 25, 2022 at 12:25:28AM +0800, 徐志宇徐 wrote:\n>> > Hi All\n>> >\n>> > I am a Database DBA. I focus on PostgreSQL and DB2.\n>> > Recently. I experience some memory issue. The postgres unable allocate\n>> > memory. I don't know how to monitor Postgres memory usage.\n>>\n>> Postgres is just an OS Process, so should be monitored like any other.\n>>\n>> What OS are you using ?\n>>\n>> Know that the OS may attribute \"shared buffers\" to different processes, or\n>> multiple processes.\n>>\n>> > This server have 16G memory. On that time. The free command display\n>> only 3\n>> > G memory used. The share_buffers almost 6G.\n>> >\n>> > On that time. The server have 100 active applications.\n>> > New connection failed. I have to kill some application by os command\n>> \"kill -9\"\n>>\n>> It's almost always a bad idea to kill postgres with kill -9.\n>>\n>> > The checkpoint command execute very slow. 
almost need 5-10 seconds.\n>>\n>> Do you mean an interactive checkpoint command ?\n>> Or logs from log_checkpoint ?\n>>\n>> > Is there any useful command to summary PostgreSQL memory usage ?\n>>\n>> You can check memory use of an individual query with \"explain\n>> (analyze,buffers) ..\"\n>> https://wiki.postgresql.org/wiki/Slow_Query_Questions\n>>\n>> What settings have you used in postgres ?\n>> https://wiki.postgresql.org/wiki/Server_Configuration\n>>\n>> What postgres version ?\n>> How was it installed ? From souce? From a package ?\n>>\n>> --\n>> Justin\n>>\n>\n\nHi Justin I list the server configuration for your reference. postgres=# SELECT name, current_setting(name), sourcepostgres-# FROM pg_settingspostgres-# WHERE source NOT IN ('default', 'override'); name | current_setting | source---------------------------------+-------------------------------------+---------------------- application_name | psql | client archive_command | cp %p /data/postgres/archive_log/%f | configuration file archive_mode | on | configuration file auto_explain.log_min_duration | 10s | configuration file autovacuum_analyze_scale_factor | 1e-05 | configuration file autovacuum_analyze_threshold | 5 | configuration file autovacuum_max_workers | 20 | configuration file autovacuum_vacuum_scale_factor | 0.0002 | configuration file autovacuum_vacuum_threshold | 5 | configuration file bgwriter_delay | 20ms | configuration file bgwriter_lru_maxpages | 400 | configuration file client_encoding | UTF8 | client DateStyle | ISO, MDY | configuration file default_text_search_config | pg_catalog.english | configuration file dynamic_shared_memory_type | posix | configuration file enable_seqscan | off | configuration file lc_messages | en_US.UTF-8 | configuration file lc_monetary | en_US.UTF-8 | configuration file lc_numeric | en_US.UTF-8 | configuration file lc_time | en_US.UTF-8 | configuration file listen_addresses | * | configuration file lock_timeout | 5min | configuration file log_connections | 
on | configuration file log_destination | csvlog | configuration file log_directory | log | configuration file log_lock_waits | on | configuration file log_min_duration_statement | 10s | configuration file log_rotation_size | 30MB | configuration file log_statement | ddl | configuration file log_timezone | PRC | configuration file log_truncate_on_rotation | on | configuration file logging_collector | on | configuration file maintenance_work_mem | 64MB | configuration file max_connections | 1000 | configuration file max_parallel_workers_per_gather | 4 | configuration file max_stack_depth | 2MB | environment variable max_wal_size | 4GB | configuration file max_worker_processes | 4 | configuration file min_wal_size | 320MB | configuration file pg_stat_statements.max | 1000 | configuration file pg_stat_statements.track | all | configuration file port | 5432 | configuration file shared_buffers | 6352MB | configuration file shared_preload_libraries | pg_stat_statements,auto_explain | configuration file temp_buffers | 32MB | configuration file TimeZone | PRC | configuration file track_activities | on | configuration file track_commit_timestamp | off | configuration file track_counts | on | configuration file track_functions | all | configuration file track_io_timing | on | configuration file vacuum_cost_limit | 2000 | configuration file wal_compression | on | configuration file wal_keep_segments | 128 | configuration file wal_level | replica | configuration file work_mem | 40MB | configuration file(56 rows)徐志宇徐 <xuzhiyuster@gmail.com> 于2022年5月26日周四 23:36写道:Hi Justin Thanks for your update. \r\nPostgres is just an OS Process, so should be monitored like any other.\n\r\nWhat OS are you using ?> I am using Centos 7.5. \r\nKnow that the OS may attribute \"shared buffers\" to different processes, or multiple processes.\r\nIt's almost always a bad idea to kill postgres with kill -9.> I unable to connect to database server. I have to kill some process to release memory. 
Then I could connect it. \n\n \r\nWhat settings have you used in postgres ?\nhttps://wiki.postgresql.org/wiki/Server_Configuration> Please reference my attachment. \r\nYou can check memory use of an individual query with \"explain (analyze,buffers) ..\" \nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions Thanks for your update. This memory allocation failed issue impact the whole database running. not a slow query. Is there any commands or method could get totally Postgres memory utilization ? Thanks .Justin Pryzby <pryzby@telsasoft.com> 于2022年5月25日周三 01:40写道:On Wed, May 25, 2022 at 12:25:28AM +0800, 徐志宇徐 wrote:\r\n> Hi All\r\n> \r\n> I am a Database DBA. I focus on PostgreSQL and DB2.\r\n> Recently. I experience some memory issue. The postgres unable allocate\r\n> memory. I don't know how to monitor Postgres memory usage.\n\r\nPostgres is just an OS Process, so should be monitored like any other.\n\r\nWhat OS are you using ?\n\r\nKnow that the OS may attribute \"shared buffers\" to different processes, or\r\nmultiple processes.\n\r\n> This server have 16G memory. On that time. The free command display only 3\r\n> G memory used. The share_buffers almost 6G.\r\n> \r\n> On that time. The server have 100 active applications.\r\n> New connection failed. I have to kill some application by os command \"kill -9\"\n\r\nIt's almost always a bad idea to kill postgres with kill -9.\n\r\n> The checkpoint command execute very slow. almost need 5-10 seconds.\n\r\nDo you mean an interactive checkpoint command ?\r\nOr logs from log_checkpoint ?\n\r\n> Is there any useful command to summary PostgreSQL memory usage ?\n\r\nYou can check memory use of an individual query with \"explain (analyze,buffers) ..\"\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\r\nWhat settings have you used in postgres ?\nhttps://wiki.postgresql.org/wiki/Server_Configuration\n\r\nWhat postgres version ?\r\nHow was it installed ? From souce? From a package ?\n\r\n-- \r\nJustin",
"msg_date": "Thu, 26 May 2022 23:47:40 +0800",
"msg_from": "=?UTF-8?B?5b6Q5b+X5a6H5b6Q?= <xuzhiyuster@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: How to monitor Postgres real memory usage"
},
{
"msg_contents": "> enable_seqscan = 'off'\n\nWhy is this here ? I think when people set this, it's because they \"want to\nuse more index scans to make things faster\". But index scans aren't\nnecessarily faster, and this tries to force their use even when it will be\nslower. It's better to address the queries that are slow (or encourage index\nscans by decreasing random_page_cost).\n\n> maintenance_work_mem = '64MB'\n> autovacuum_max_workers = '20'\n> vacuum_cost_limit = '2000'\n> autovacuum_vacuum_scale_factor = '0.0002'\n> autovacuum_analyze_scale_factor = '0.00001'\n\nThis means you're going to use up to 20 processes simultaneously running vacuum\n(each of which may use 64MB memory). What kind of storage does the server\nhave? Can it support 20 background processes reading from disk, in addition to\nother processs ?\n\nJustin Pryzby <pryzby@telsasoft.com> 于2022年5月25日周三 01:40写道:\n> > What postgres version ?\n> > How was it installed ? From souce? From a package ?\n\nWhat about this ?\n\nI'm not sure how/if this would affect memory allocation, but if the server is\nslow, processes will be waiting longer, rather than completing quickly, and\nusing their RAM for a longer period...\n\nDoes the postgres user have any rlimits set ?\n\nCheck:\nps -fu postgres\n# then:\nsudo cat /proc/2948/limits\n\n\n",
"msg_date": "Thu, 26 May 2022 11:05:19 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: How to monitor Postgres real memory usage"
},
{
"msg_contents": "Hi Justin\n\n Thanks for you explaination.\n\n > > What postgres version ?\n > > How was it installed ? From souce? From a package ?\n I am using Postgres 11.1 .It's installed by package.\n\nCheck:\nps -fu postgres\n# then:\nsudo cat /proc/2948/limits\n\nroot@bl4n3icpms ~]# sudo cat /proc/21731/limits\nLimit Soft Limit Hard Limit Units\nMax cpu time unlimited unlimited seconds\nMax file size unlimited unlimited bytes\nMax data size unlimited unlimited bytes\nMax stack size 8388608 unlimited bytes\nMax core file size 0 unlimited bytes\nMax resident set unlimited unlimited bytes\nMax processes 4096 63445\n processes\nMax open files 65536 65536 files\nMax locked memory 65536 65536 bytes\nMax address space unlimited unlimited bytes\nMax file locks unlimited unlimited locks\nMax pending signals 63445 63445 signals\nMax msgqueue size 819200 819200 bytes\nMax nice priority 0 0\nMax realtime priority 0 0\nMax realtime timeout unlimited unlimited us\n\n\n>enable_seqscan = 'off'\n> maintenance_work_mem = '64MB'\n> autovacuum_max_workers = '20'\n> vacuum_cost_limit = '2000'\n> autovacuum_vacuum_scale_factor = '0.0002'\n> autovacuum_analyze_scale_factor = '0.00001'\n\nYour are correct.\n\nI will adjust those parameter .\nenable_seqscan = 'on'\nreduce autovacuum number .\n\nJustin Pryzby <pryzby@telsasoft.com> 于2022年5月27日周五 00:05写道:\n\n> > enable_seqscan = 'off'\n>\n> Why is this here ? I think when people set this, it's because they \"want\n> to\n> use more index scans to make things faster\". But index scans aren't\n> necessarily faster, and this tries to force their use even when it will be\n> slower. 
It's better to address the queries that are slow (or encourage\n> index\n> scans by decreasing random_page_cost).\n>\n> > maintenance_work_mem = '64MB'\n> > autovacuum_max_workers = '20'\n> > vacuum_cost_limit = '2000'\n> > autovacuum_vacuum_scale_factor = '0.0002'\n> > autovacuum_analyze_scale_factor = '0.00001'\n>\n> This means you're going to use up to 20 processes simultaneously running\n> vacuum\n> (each of which may use 64MB memory). What kind of storage does the server\n> have? Can it support 20 background processes reading from disk, in\n> addition to\n> other processs ?\n>\n> Justin Pryzby <pryzby@telsasoft.com> 于2022年5月25日周三 01:40写道:\n> > > What postgres version ?\n> > > How was it installed ? From souce? From a package ?\n>\n> What about this ?\n>\n> I'm not sure how/if this would affect memory allocation, but if the server\n> is\n> slow, processes will be waiting longer, rather than completing quickly, and\n> using their RAM for a longer period...\n>\n> Does the postgres user have any rlimits set ?\n>\n> Check:\n> ps -fu postgres\n> # then:\n> sudo cat /proc/2948/limits\n>",
"msg_date": "Fri, 27 May 2022 01:39:15 +0800",
"msg_from": "=?UTF-8?B?5b6Q5b+X5a6H5b6Q?= <xuzhiyuster@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: How to monitor Postgres real memory usage"
},
{
"msg_contents": "On Fri, May 27, 2022 at 01:39:15AM +0800, 徐志宇徐 wrote:\n> Hi Justin\n> \n> Thanks for you explaination.\n> \n> > > What postgres version ?\n> > > How was it installed ? From souce? From a package ?\n> I am using Postgres 11.1 .It's installed by package.\n\nThis is quite old, and missing ~4 years of bugfixes.\n\nWhat's the output of these commands?\ntail /proc/sys/vm/overcommit_*\ntail /proc/sys/vm/nr_*hugepages /proc/cmdline\ncat /proc/meminfo\nuname -a\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 26 May 2022 18:35:28 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: How to monitor Postgres real memory usage"
},
{
"msg_contents": "Hi Justin\n\n Thanks for your update. I collect those logs for your reference.\n I also collect this file /etc/sysctl.conf and PG logs on the attachment.\n(memory_issue.tar.gz).\n\n when the memory issue occur. The log directory will generate a new log\nfile.\n\n postgresql-2022-05-27_000000.log. Usually only one file to record log\ninformation.\n For example:postgresql-2022-05-27_000000.csv\n\n The log which named with *.log will contain a lot of infomation about\nmemory detail.\n\n\nWhat's the output of these commands?\ntail /proc/sys/vm/overcommit_*\n\n[root@bl4n3icpms vm]# tail /proc/sys/vm/overcommit_*\n==> /proc/sys/vm/overcommit_kbytes <==\n0\n\n==> /proc/sys/vm/overcommit_memory <==\n2\n\n==> /proc/sys/vm/overcommit_ratio <==\n60\n\n\n\ntail /proc/sys/vm/nr_*hugepages /proc/cmdline\n\n==> /proc/sys/vm/nr_hugepages <==\n0\n\n==> /proc/sys/vm/nr_overcommit_hugepages <==\n0\n\n==> /proc/cmdline <==\nBOOT_IMAGE=/vmlinuz-3.10.0-957.27.2.el7.x86_64\nroot=/dev/mapper/rootvg-lv_root ro crashkernel=auto rd.lvm.lv=rootvg/lv_root\nrd.lvm.lv=rootvg/lv_swap rhgb quiet LANG=en_US.UTF-8\n\n\ncat /proc/meminfo\n\n[root@bl4n3icpms vm]# cat /proc/meminfo\nMemTotal: 16266368 kB\nMemFree: 203364 kB\nMemAvailable: 7823244 kB\nBuffers: 3272 kB\nCached: 12978488 kB\nSwapCached: 0 kB\nActive: 10456284 kB\nInactive: 5042108 kB\nActive(anon): 6008156 kB\nInactive(anon): 1738892 kB\nActive(file): 4448128 kB\nInactive(file): 3303216 kB\nUnevictable: 11292 kB\nMlocked: 11292 kB\nSwapTotal: 2097148 kB\nSwapFree: 2097148 kB\nDirty: 20 kB\nWriteback: 0 kB\nAnonPages: 2527924 kB\nMapped: 4527276 kB\nShmem: 5226728 kB\nSlab: 268304 kB\nSReclaimable: 206876 kB\nSUnreclaim: 61428 kB\nKernelStack: 4576 kB\nPageTables: 106924 kB\nNFS_Unstable: 0 kB\nBounce: 0 kB\nWritebackTmp: 0 kB\nCommitLimit: 11856968 kB\nCommitted_AS: 10488212 kB\nVmallocTotal: 34359738367 kB\nVmallocUsed: 37468 kB\nVmallocChunk: 34359695100 kB\nHardwareCorrupted: 0 kB\nAnonHugePages: 124928 kB\nCmaTotal: 0 
kB\nCmaFree: 0 kB\nHugePages_Total: 0\nHugePages_Free: 0\nHugePages_Rsvd: 0\nHugePages_Surp: 0\nHugepagesize: 2048 kB\nDirectMap4k: 143208 kB\nDirectMap2M: 6148096 kB\nDirectMap1G: 12582912 kB\n\n\n\nuname -a\n\nLinux bl4n3icpms.lenovo.com 3.10.0-957.27.2.el7.x86_64 #1 SMP Mon Jul 29\n17:46:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux\n\n\n[root@bl4n3icpms vm]# cat /etc/sysctl.conf\n# sysctl settings are defined through files in\n# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.\n#\n# Vendors settings live in /usr/lib/sysctl.d/.\n# To override a whole file, create a new file with the same in\n# /etc/sysctl.d/ and put new settings there. To override\n# only specific settings, add a file with a lexically later\n# name in /etc/sysctl.d/ and put new settings there.\n#\n# For more information, see sysctl.conf(5) and sysctl.d(5).\nnet.ipv4.icmp_echo_ignore_broadcasts = 1\nnet.ipv4.conf.all.accept_redirects = 0\n# edit for pg database used#\nkernel.shmmax=16656777216\nkernel.shmall=4066596\nkernel.msgmnb = 65536\nkernel.msgmax = 65536\nkernel.sem = 250 32000 32 128\nkernel.pid_max=131072\nvm.overcommit_memory=2\nvm.overcommit_ratio=60\nvm.swappiness=0\n# edit for pg database used#\n\n\n\nJustin Pryzby <pryzby@telsasoft.com> 于2022年5月27日周五 07:35写道:\n\n> On Fri, May 27, 2022 at 01:39:15AM +0800, 徐志宇徐 wrote:\n> > Hi Justin\n> >\n> > Thanks for you explaination.\n> >\n> > > > What postgres version ?\n> > > > How was it installed ? From souce? From a package ?\n> > I am using Postgres 11.1 .It's installed by package.\n>\n> This is quite old, and missing ~4 years of bugfixes.\n>\n> What's the output of these commands?\n> tail /proc/sys/vm/overcommit_*\n> tail /proc/sys/vm/nr_*hugepages /proc/cmdline\n> cat /proc/meminfo\n> uname -a\n>\n> --\n> Justin\n>",
"msg_date": "Sat, 28 May 2022 01:40:14 +0800",
"msg_from": "=?UTF-8?B?5b6Q5b+X5a6H5b6Q?= <xuzhiyuster@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: How to monitor Postgres real memory usage"
},
{
"msg_contents": "On Sat, May 28, 2022 at 01:40:14AM +0800, 徐志宇徐 wrote:\n> vm.swappiness=0\n\nI think this is related to the problem.\n\nswappiness=0 means to *never* use swap, even if that means that processes are\nkilled.\n\nIf you really wanted that, you should remove the swap space.\n\nSwap is extremely slow and worth avoiding, but this doesn't let you use it at\nall. You can't even look at your swap usage as a diagnostic measure to tell if\nthings had been paged out at some point.\n\nI *suspect* the problem will go away if you set swappiness=1 in /proc (and in\nsysctl.conf).\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 27 May 2022 12:56:47 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: How to monitor Postgres real memory usage"
}
] |
[
{
"msg_contents": "Hello, please look into following example:\n\npostgres=# create table test_array_selectivity as select \narray[id]::int[] as a from generate_series(1, 10000000) gs(id);\nSELECT 10000000\npostgres=# explain analyze select * from test_array_selectivity where a \n@> array[1];\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------\n Seq Scan on test_array_selectivity (cost=0.00..198531.00 rows=50000 \nwidth=32) (actual time=0.023..2639.029 rows=1 loops=1)\n Filter: (a @> '{1}'::integer[])\n Rows Removed by Filter: 9999999\n Planning Time: 0.078 ms\n Execution Time: 2639.038 ms\n(5 rows)\n\n\nfor row estimation rows=50000=10000000*0.005 we are using constant \nDEFAULT_CONTAIN_SEL if I'm not mistaken.\nand we're using it unless we have something in most_common_elems (MCE) \nin statistics which in this case is empty.\n\nif we have something in MCE list then we could use much better estimate \n(https://github.com/postgres/postgres/blob/REL_14_STABLE/src/backend/utils/adt/array_selfuncs.c#L628):\nelem_selec = Min(DEFAULT_CONTAIN_SEL, minfreq / 2)\n\nfor small tables we could get away with larger stats target for column, \nbut for example in this case stats target 10000 is not enough.\n\n\nif I'm reading sources correctly element frequency in sample should be \nmore than 0.0063/stats_target to make it into MCE list:\nhttps://github.com/postgres/postgres/blob/REL_14_STABLE/src/backend/utils/adt/array_typanalyze.c#L471\n\nso if we store mostly one element in array and they're almost all \ndistinct then in tables with more then stats_target/0.0063 (~1.58M for \nmaximum stats target 10000) rows we'll get 0.005 constant for selectivity.\nwhich could be pretty bad estimate (in real example it was 6-7 orders of \nmagnitude difference).\n\nI ran into this issue 2 times in last year with 2 different projects so \nperhaps it's not very rare situation. 
In first case increasing stats \ntarget helped, in second it didn't (for table with 150M rows), had to \nuse hacks to fix the plan.\n\nIt was in PostgreSQL 12.x and 14.3.\n\nI'm not sure if there is a simple fix for this, maybe store and use \nsomething like n_distinct for elements for selectivity estimation ? or \nperhaps we should store something in MCE list anyway even if frequency \nis low (at least one element) ?\n\n\n--\n\nThanks,\n\nAlexey Ermakov\n\n\n\n",
"msg_date": "Fri, 27 May 2022 19:39:22 +0600",
"msg_from": "Alexey Ermakov <alexey.ermakov@dataegret.com>",
"msg_from_op": true,
"msg_subject": "rows selectivity overestimate for @> operator for arrays"
},
{
"msg_contents": "Alexey Ermakov <alexey.ermakov@dataegret.com> writes:\n> so if we store mostly one element in array and they're almost all \n> distinct then in tables with more then stats_target/0.0063 (~1.58M for \n> maximum stats target 10000) rows we'll get 0.005 constant for selectivity.\n\nYeah. There's a comment in array_selfuncs.c about\n\n * TODO: this estimate probably could be improved by using the distinct\n * elements count histogram. For example, excepting the special case of\n * \"column @> '{}'\", we can multiply the calculated selectivity by the\n * fraction of nonempty arrays in the column.\n\nbut I'm not sure whether that's relevant here.\n\nOne thought is that if there is a pg_statistic row but it contains\nno MCE list, we could assume that the column elements are all distinct\nand see what sort of estimate that leads us to.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 27 May 2022 14:02:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: rows selectivity overestimate for @> operator for arrays"
},
{
"msg_contents": "On Fri, May 27, 2022 at 12:19 PM Alexey Ermakov <\nalexey.ermakov@dataegret.com> wrote:\n\n> Hello, please look into following example:\n>\n> postgres=# create table test_array_selectivity as select\n> array[id]::int[] as a from generate_series(1, 10000000) gs(id);\n> SELECT 10000000\n> postgres=# explain analyze select * from test_array_selectivity where a\n> @> array[1];\n> QUERY PLAN\n>\n> -----------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on test_array_selectivity (cost=0.00..198531.00 rows=50000\n> width=32) (actual time=0.023..2639.029 rows=1 loops=1)\n> Filter: (a @> '{1}'::integer[])\n> Rows Removed by Filter: 9999999\n> Planning Time: 0.078 ms\n> Execution Time: 2639.038 ms\n> (5 rows)\n>\n>\n> for row estimation rows=50000=10000000*0.005 we are using constant\n> DEFAULT_CONTAIN_SEL if I'm not mistaken.\n> and we're using it unless we have something in most_common_elems (MCE)\n> in statistics which in this case is empty.\n>\n>\nThis was discussed before at\nhttps://www.postgresql.org/message-id/flat/CAMkU%3D1x2W1gpEP3AQsrSA30uxQk1Sau5VDOLL4LkhWLwrOY8Lw%40mail.gmail.com\n\nMy solution was to always store at least one element in the MCE, even if\nthe sample size was too small to be reliable. It would still be more\nreliable than the alternative fallback assumption. That patch still\napplies and fixes your example, or improves it anyway and to an extent\ndirectly related to the stats target size. 
(It also still has my bogus code\ncomments in which I confuse histogram with n_distinct).\n\nThen some other people proposed more elaborate patches, and I never wrapped\nmy head around what they were doing differently or why the elaboration was\nimportant.\n\nSince you're willing to dig into the source code and since this is directly\napplicable to you, maybe you would be willing to go over to pgsql-hackers\nto revive, test, and review these proposals with an eye of getting them\napplied in v16.\n\nI'm not sure if there is a simple fix for this, maybe store and use\n> something like n_distinct for elements for selectivity estimation ? or\n> perhaps we should store something in MCE list anyway even if frequency\n> is low (at least one element) ?\n>\n\nn_distinct might be the best solution, but I don't see how it could be\nadapted to the general array case. If it could only work when the vast\nmajority or arrays had length 1, I think that would be too esoteric to be\naccepted.\n\nCheers,\n\nJeff",
"msg_date": "Wed, 1 Jun 2022 23:33:33 -0400",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: rows selectivity overestimate for @> operator for arrays"
}
] |
[
{
"msg_contents": "Hi,\n We have a performance test on Postgresql 13.4 on RHEL8.4 , just after connection storm in ( 952 new connections coming in 1 minute), a lot of backends start on \" D \" state, and when more sessions got disconnected, they do not exit successfully, instead became \"defunct\". No errors from postgresql.log , just after the connection storm, some pg_cron workers can not started either. The server is a Virtual machine and no IO hang (actually) IO load is very low. Could be a postgresql bug or an OS abnormal behavior?\n\ntop - 13:18:02 up 4 days, 6:59, 6 users, load average: 308.68, 307.93, 307.40\nTasks: 1690 total, 1 running, 853 sleeping, 0 stopped, 836 zombie\n%Cpu(s): 0.1 us, 0.8 sy, 0.0 ni, 99.1 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st\nMiB Mem : 128657.6 total, 1188.7 free, 52921.5 used, 74547.4 buff/cache\nMiB Swap: 3072.0 total, 3066.7 free, 5.3 used. 74757.3 avail Mem\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n1070325 postgres 20 0 45.6g 15356 11776 D 0.5 0.0 19:21.37 postgres: xuser xdb 192.x.x.132(33318) BIND\n1070328 postgres 20 0 45.6g 16120 11660 D 0.5 0.0 19:21.96 postgres: xuser xdb 192.x.x.121(34372) BIND\n1070329 postgres 20 0 45.6g 15380 11872 D 0.5 0.0 19:20.90 postgres: xuser xdb 192.x.x.126(41316) BIND\n1070397 postgres 20 0 45.6g 14804 11604 D 0.5 0.0 19:23.57 postgres: xuser xdb 192.x.x.132(33324) BIND\n1070434 postgres 20 0 45.6g 14928 11812 D 0.5 0.0 19:21.14 postgres: xuser xdb 192.x.x.129(57298) BIND\n1070480 postgres 20 0 45.6g 14612 11660 D 0.5 0.0 19:19.88 postgres: xuser xdb 192.x.x.127(52424) BIND\n1070508 postgres 20 0 45.6g 14928 11812 D 0.5 0.0 19:20.48 postgres: xuser xdb 192.x.x.127(52428) BIND\n1070523 postgres 20 0 45.6g 14544 11716 D 0.5 0.0 19:22.53 postgres: xuser xdb 192.x.x.130(33678) BIND\n1070647 postgres 20 0 45.6g 14444 11660 D 0.5 0.0 19:24.36 postgres: xuser xdb 192.x.x.129(57316) BIND\n1070648 postgres 20 0 45.6g 14352 11524 D 0.5 0.0 19:22.86 postgres: xuser xdb 192.x.x.133(48796) 
BIND\n1070676 postgres 20 0 45.6g 14456 11660 D 0.5 0.0 19:21.92 postgres: xuser xdb 192.x.x.128(54614) BIND\n1070724 postgres 20 0 45.6g 14352 11524 D 0.5 0.0 19:20.90 postgres: xuser xdb 192.x.x.126(41370) BIND\n1070739 postgres 20 0 45.6g 14008 11412 D 0.5 0.0 19:22.69 postgres: xuser xdb 192.x.x.123(56164) BIND\n1070786 postgres 20 0 45.6g 14352 11524 D 0.5 0.0 19:22.51 postgres: xuser xdb 192.x.x.121(34428) BIND\n1070801 postgres 20 0 45.6g 13240 10688 D 0.5 0.0 19:22.19 postgres: xuser xdb 192.x.x.126(41382) BIND\n1070815 postgres 20 0 45.6g 13240 10688 D 0.5 0.0 19:21.36 postgres: xuser xdb 192.x.x.53(55950) BIND\n1070830 postgres 20 0 45.6g 13240 10688 D 0.5 0.0 19:23.80 postgres: xuser xdb 192.x.x.131(41704) BIND\n1070841 postgres 20 0 45.6g 13304 10744 D 0.5 0.0 19:24.25 postgres: xuser xdb 192.x.x.131(41706) BIND\n1070884 postgres 20 0 45.6g 13264 10688 D 0.5 0.0 19:20.61 postgres: xuser xdb 192.x.x.122(33734) BIND\n1070903 postgres 20 0 45.6g 14456 11660 D 0.5 0.0 19:23.43 postgres: xuser xdb 192.x.x.132(33384) BIND\n1070915 postgres 20 0 45.5g 7280 5372 D 0.5 0.0 19:21.20 postgres: xuser xdb 192.x.x.129(57350) initializing\n1070941 postgres 20 0 45.5g 7280 5372 D 0.5 0.0 19:20.58 postgres: xuser xdb 192.x.x.124(35934) initializing\n1070944 postgres 20 0 45.5g 7280 5372 D 0.5 0.0 19:18.57 postgres: xuser xdb 192.x.x.50(58964) initializing\n1070963 postgres 20 0 45.5g 7280 5372 D 0.5 0.0 19:21.98 postgres: xuser xdb 192.x.x.132(33362) initializing\n1070974 postgres 20 0 45.5g 7280 5372 D 0.5 0.0 19:22.76 postgres: xuser xdb 192.x.x.54(56774) initializing\n1070986 postgres 20 0 45.5g 7284 5372 D 0.5 0.0 19:21.89 postgres: xuser xdb 192.x.x.132(33394) initializing\n...\n\npostgres 1071160 1951 0 May27 ? 00:00:02 [postmaster] <defunct>\npostgres 1071161 1951 0 May27 ? 00:00:02 [postmaster] <defunct>\npostgres 1071162 1951 0 May27 ? 00:00:02 [postmaster] <defunct>\npostgres 1071163 1951 0 May27 ? 
00:00:02 [postmaster] <defunct>\npostgres 1071164 1951 0 May27 ? 00:00:03 [postmaster] <defunct>\npostgres 1071167 1951 0 May27 ? 00:00:21 [postmaster] <defunct>\npostgres 1071168 1951 0 May27 ? 00:00:03 [postmaster] <defunct>\npostgres 1071170 1951 0 May27 ? 00:00:03 [postmaster] <defunct>\npostgres 1071171 1951 0 May27 ? 00:00:02 [postmaster] <defunct>\npostgres 1071174 1951 0 May27 ? 00:00:02 [postmaster] <defunct>\npostgres 1071175 1951 0 May27 ? 00:00:03 [postmaster] <defunct>\npostgres 1071176 1951 0 May27 ? 00:00:02 [postmaster] <defunct>\npostgres 1071179 1951 0 May27 ? 00:00:02 [postmaster] <defunct>\npostgres 1071181 1951 0 May27 ? 00:00:03 [postmaster] <defunct>\npostgres 1071184 1951 0 May27 ? 00:00:02 [postmaster] <defunct>\npostgres 1071185 1951 0 May27 ? 00:00:02 [postmaster] <defunct>\npostgres 1071187 1951 0 May27 ? 00:00:03 [postmaster] <defunct>\n\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 0.00 0.00 0.16 0.00 0.00 99.84\n\nDevice r/s w/s rkB/s wkB/s rrqm/s wrqm/s %rrqm %wrqm r_await w_await aqu-sz rareq-sz wareq-sz svctm %util\nsdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\nsdd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\nsdc 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\nsda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\nscd0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\ndm-0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\ndm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\ndm-2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\ndm-3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\ndm-4 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\ndm-5 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\ndm-6 0.00 0.00 0.00 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\ndm-7 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\ndm-8 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\ndm-9 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n\n\nThanks,\n\nJames",
"msg_date": "Sun, 29 May 2022 13:20:12 +0000",
"msg_from": "\"James Pang (chaolpan)\" <chaolpan@cisco.com>",
"msg_from_op": true,
"msg_subject": "postgres backend process hang on \" D \" state"
},
{
"msg_contents": "On Sun, May 29, 2022 at 10:20 AM James Pang (chaolpan) <\nchaolpan@cisco.com> wrote:\n\n> Hi,\n>\n> We have a performance test on Postgresql 13.4 on RHEL8.4 ,\n>\nHard to say with this info, but for this \"test\", why not use 13.7,\nwith all the related bug fixes?\n\nregards,\nRanier Vilela",
"msg_date": "Sun, 29 May 2022 10:30:39 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres backend process hang on \" D \" state"
},
{
"msg_contents": "On Sun, May 29, 2022 at 01:20:12PM +0000, James Pang (chaolpan) wrote:\n> We have a performance test on Postgresql 13.4 on RHEL8.4 , just after connection storm in ( 952 new connections coming in 1 minute), a lot of backends start on \" D \" state, and when more sessions got disconnected, they do not exit successfully, instead became \"defunct\". No errors from postgresql.log , just after the connection storm, some pg_cron workers can not started either. The server is a Virtual machine and no IO hang (actually) IO load is very low. Could be a postgresql bug or an OS abnormal behavior?\n\nWhat settings have you set ?\nhttps://wiki.postgresql.org/wiki/Server_Configuration\n\nWhat extensions do you have loaded? \\dx\n\nSend the output of SELECT * FROM pg_stat_activity either as an attachment or in\n\\x mode?\n\nWhat is your data dir ? Is it on the VM's root filesystem or something else ?\nShow the output of \"mount\". Are there any kernel messages in /var/log/messages\nor `dmesg` ?\n\nHow many relations are in your schema ?\nAre you using temp tables ?\nLong-running transactions ?\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 29 May 2022 10:01:57 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres backend process hang on \" D \" state"
},
{
"msg_contents": "1. extensions \n shared_preload_libraries = 'orafce,pgaudit,pg_cron,pg_stat_statements,set_user'\n2. psql can not login now ,it hang there too, so can not check anything from pg_stats_* views\n3. one main app user and 2 schemas ,no long running transactions . \n4. we use /pgdata , it's on xfs , lvm/vg RHEL8.4 ,it's a shared storage, no use root filesystem.\n/dev/mapper/pgdatavg-pgdatalv 500G 230G 271G 46% /pgdata\n/dev/mapper/pgdatavg-pgarchivelv 190G 1.5G 189G 1% /pgarchive\n/dev/mapper/pgdatavg-pgwallv 100G 34G 67G 34% /pgwal\n\nRegards,\n\nJames \n-----Original Message-----\nFrom: Justin Pryzby <pryzby@telsasoft.com> \nSent: Sunday, May 29, 2022 11:02 PM\nTo: James Pang (chaolpan) <chaolpan@cisco.com>\nCc: pgsql-performance@lists.postgresql.org\nSubject: Re: postgres backend process hang on \" D \" state\n\nOn Sun, May 29, 2022 at 01:20:12PM +0000, James Pang (chaolpan) wrote:\n> We have a performance test on Postgresql 13.4 on RHEL8.4 , just after connection storm in ( 952 new connections coming in 1 minute), a lot of backends start on \" D \" state, and when more sessions got disconnected, they do not exit successfully, instead became \"defunct\". No errors from postgresql.log , just after the connection storm, some pg_cron workers can not started either. The server is a Virtual machine and no IO hang (actually) IO load is very low. Could be a postgresql bug or an OS abnormal behavior?\n\nWhat settings have you set ?\nhttps://wiki.postgresql.org/wiki/Server_Configuration\n\nWhat extensions do you have loaded? \\dx\n\nSend the output of SELECT * FROM pg_stat_activity either as an attachment or in \\x mode?\n\nWhat is your data dir ? Is it on the VM's root filesystem or something else ?\nShow the output of \"mount\". Are there any kernel messages in /var/log/messages or `dmesg` ?\n\nHow many relations are in your schema ?\nAre you using temp tables ?\nLong-running transactions ?\n\n--\nJustin\n\n\n",
"msg_date": "Mon, 30 May 2022 01:19:56 +0000",
"msg_from": "\"James Pang (chaolpan)\" <chaolpan@cisco.com>",
"msg_from_op": true,
"msg_subject": "RE: postgres backend process hang on \" D \" state"
},
{
"msg_contents": "On Mon, May 30, 2022 at 01:19:56AM +0000, James Pang (chaolpan) wrote:\n> 1. extensions \n> shared_preload_libraries = 'orafce,pgaudit,pg_cron,pg_stat_statements,set_user'\n> 2. psql can not login now ,it hang there too, so can not check anything from pg_stats_* views\n> 3. one main app user and 2 schemas ,no long running transactions . \n> 4. we use /pgdata , it's on xfs , lvm/vg RHEL8.4 ,it's a shared storage, no use root filesystem.\n> /dev/mapper/pgdatavg-pgdatalv 500G 230G 271G 46% /pgdata\n> /dev/mapper/pgdatavg-pgarchivelv 190G 1.5G 189G 1% /pgarchive\n> /dev/mapper/pgdatavg-pgwallv 100G 34G 67G 34% /pgwal\n\nWhat are the LVM PVs ? Is it a scsi/virt device ? Or iscsi/drbd/???\n\nI didn't hear back if there's any kernel errors.\nIs the storage broken/stuck/disconnected ?\nCan you run \"time find /pgdata /pgarchive /pgwal -ls |wc\" ?\n\nCould you run \"ps -u postgres -O wchan=============================\"\n\nCan you strace one of the stuck backends ?\n\nIt sounds like you'll have to restart the service or VM (forcibly if necessary)\nto resolve the immediate issue and then collect the other info, and leave a\n\"psql\" open to try to (if the problem recurs) check pg_stat_activity and other\nDB info.\n\n\n",
"msg_date": "Sun, 29 May 2022 21:19:44 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres backend process hang on \" D \" state"
},
{
"msg_contents": "\"James Pang (chaolpan)\" <chaolpan@cisco.com> writes:\n> 1. extensions \n> shared_preload_libraries = 'orafce,pgaudit,pg_cron,pg_stat_statements,set_user'\n\nCan you still reproduce this if you remove all of those?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 29 May 2022 22:21:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: postgres backend process hang on \" D \" state"
},
{
"msg_contents": "Update your questions\n\n. time find /pgdata /pgarchive /pgwal -ls |wc\n 82165 903817 8397391\n\nreal 0m1.120s\nuser 0m0.432s\nsys 0m0.800s\n\nps -u postgres -O wchan=============================\n PID ============================ S TTY TIME COMMAND\n 1951 - D ? 00:26:37 /usr/pgsql-13/bin/postmaster -D /pgdata -c config_file=/pgdata/postgresql.conf\n 2341 - S ? 00:00:06 postgres: logger\n 2361 - S ? 00:01:02 postgres: checkpointer\n 2362 - S ? 00:00:27 postgres: background writer\n 2363 - S ? 00:00:59 postgres: walwriter\n 2364 - S ? 00:02:00 postgres: autovacuum launcher\n 2365 - Z ? 00:00:04 [postmaster] <defunct>\n 2366 do_epoll_wait S ? 00:13:30 postgres: stats collector\n 2367 do_epoll_wait S ? 00:00:18 postgres: pg_cron launcher\n 2368 - S ? 00:00:00 postgres: logical replication launcher\n1053144 - Z ? 00:05:36 [postmaster] <defunct>\n1053319 - Z ? 00:05:29 [postmaster] <defunct>\n1053354 - Z ? 00:05:53 [postmaster] <defunct>\n1053394 - Z ? 00:05:51 [postmaster] <defunct>\n...\n1064387 - Z ? 00:05:13 [postmaster] <defunct>\n1070257 - D ? 00:24:23 postgres: test pbwd 192.168.205.53(55886) BIND\n1070258 - D ? 00:24:24 postgres: test pbwd 192.168.205.50(58910) BIND\n1070259 - D ? 00:24:22 postgres: test pbwd 192.168.205.133(48754) SELECT\n1070260 - Z ? 00:05:02 [postmaster] <defunct>\n...\n\nStrace / gdb will hang there too for trace a process.\n\nRegards,\n\nJames \n\n-----Original Message-----\nFrom: Justin Pryzby <pryzby@telsasoft.com> \nSent: Monday, May 30, 2022 10:20 AM\nTo: James Pang (chaolpan) <chaolpan@cisco.com>\nCc: pgsql-performance@lists.postgresql.org\nSubject: Re: postgres backend process hang on \" D \" state\n\nOn Mon, May 30, 2022 at 01:19:56AM +0000, James Pang (chaolpan) wrote:\n> 1. extensions \n> shared_preload_libraries = 'orafce,pgaudit,pg_cron,pg_stat_statements,set_user'\n> 2. psql can not login now ,it hang there too, so can not check \n> anything from pg_stats_* views 3. 
one main app user and 2 schemas ,no long running transactions .\n> 4. we use /pgdata , it's on xfs , lvm/vg RHEL8.4 ,it's a shared storage, no use root filesystem.\n> /dev/mapper/pgdatavg-pgdatalv 500G 230G 271G 46% /pgdata\n> /dev/mapper/pgdatavg-pgarchivelv 190G 1.5G 189G 1% /pgarchive\n> /dev/mapper/pgdatavg-pgwallv 100G 34G 67G 34% /pgwal\n\nWhat are the LVM PVs ? Is it a scsi/virt device ? Or iscsi/drbd/???\n\nI didn't hear back if there's any kernel errors.\nIs the storage broken/stuck/disconnected ?\nCan you run \"time find /pgdata /pgarchive /pgwal -ls |wc\" ?\n\nCould you run \"ps -u postgres -O wchan=============================\"\n\nCan you strace one of the stuck backends ?\n\nIt sounds like you'll have to restart the service or VM (forcibly if necessary) to resolve the immediate issue and then collect the other info, and leave a \"psql\" open to try to (if the problem recurs) check pg_stat_activity and other DB info.\n\n\n",
"msg_date": "Mon, 30 May 2022 02:58:03 +0000",
"msg_from": "\"James Pang (chaolpan)\" <chaolpan@cisco.com>",
"msg_from_op": true,
"msg_subject": "RE: postgres backend process hang on \" D \" state"
},
{
"msg_contents": " Maybe any bugs from these extensions ? I can try that removing all extensions, but we need these extensions. \n\nThanks,\n\nJames\n\n-----Original Message-----\nFrom: Tom Lane <tgl@sss.pgh.pa.us> \nSent: Monday, May 30, 2022 10:21 AM\nTo: James Pang (chaolpan) <chaolpan@cisco.com>\nCc: Justin Pryzby <pryzby@telsasoft.com>; pgsql-performance@lists.postgresql.org\nSubject: Re: postgres backend process hang on \" D \" state\n\n\"James Pang (chaolpan)\" <chaolpan@cisco.com> writes:\n> 1. extensions \n> shared_preload_libraries = 'orafce,pgaudit,pg_cron,pg_stat_statements,set_user'\n\nCan you still reproduce this if you remove all of those?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 30 May 2022 02:59:31 +0000",
"msg_from": "\"James Pang (chaolpan)\" <chaolpan@cisco.com>",
"msg_from_op": true,
"msg_subject": "RE: postgres backend process hang on \" D \" state"
},
{
"msg_contents": " Seems all process blocked by a system service \"fapolicy.d\", when I stop the service, defunct process got disappeared and pending backends got released or moving on, and database started to accept new connection again. \n This fapolicy.d in RHEL8.4 is enabled by system admin to support security compliance requirements. \n systemctl stop fapolicyd , after that, everything go back to be normal soon. \n\nRegards,\n\nJames\n\n\n-----Original Message-----\nFrom: James Pang (chaolpan) \nSent: Monday, May 30, 2022 11:00 AM\nTo: Tom Lane <tgl@sss.pgh.pa.us>\nCc: Justin Pryzby <pryzby@telsasoft.com>; pgsql-performance@lists.postgresql.org\nSubject: RE: postgres backend process hang on \" D \" state\n\n Maybe any bugs from these extensions ? I can try that removing all extensions, but we need these extensions. \n\nThanks,\n\nJames\n\n-----Original Message-----\nFrom: Tom Lane <tgl@sss.pgh.pa.us> \nSent: Monday, May 30, 2022 10:21 AM\nTo: James Pang (chaolpan) <chaolpan@cisco.com>\nCc: Justin Pryzby <pryzby@telsasoft.com>; pgsql-performance@lists.postgresql.org\nSubject: Re: postgres backend process hang on \" D \" state\n\n\"James Pang (chaolpan)\" <chaolpan@cisco.com> writes:\n> 1. extensions \n> shared_preload_libraries = 'orafce,pgaudit,pg_cron,pg_stat_statements,set_user'\n\nCan you still reproduce this if you remove all of those?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 31 May 2022 02:08:28 +0000",
"msg_from": "\"James Pang (chaolpan)\" <chaolpan@cisco.com>",
"msg_from_op": true,
"msg_subject": "RE: postgres backend process hang on \" D \" state"
}
] |
[
{
"msg_contents": "> Hi,\n>\n> We are trying to reindex 600k tables in a single database of size 2.7TB\n> using reindexdb utility in a shell script\n> reindexdb -v -d $dbname -h $hostname -U tkcsowner --concurrently -j\n> $parallel -S $schema\n>\n> our config is as below\n> name | setting\n> --------------------------------+---------\n> auto_explain.log_buffers | off\n> autovacuum_work_mem | 524288\n> dbms_pipe.total_message_buffer | 30\n> dynamic_shared_memory_type | posix\n> hash_mem_multiplier | 1\n> logical_decoding_work_mem | 65536\n> maintenance_work_mem | 2097152\n> shared_buffers | 4194304\n> shared_memory_type | mmap\n> temp_buffers | 1024\n> wal_buffers | 2048\n> work_mem | 16384\n>\n> Memory:\n> free -h\n> total used free shared buff/cache\n> available\n> Mem: 125G 38G 1.1G 93M 85G\n> 86G\n> Swap: 74G 188M 74G\n>\n> nproc\n> 16\n>\n> Initially it was processing 1000 tables per minute. Performance is\n> gradually dropping and now after 24 hr it was processing 90 tables per\n> minute.\n>\n> we see stats collector in top -c continuously active\n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n>\n> 3730 ****** 20 0 520928 233844 1244 R 61.8 0.2 650:31.36\n> postgres: stats collector\n>\n>\n> postgres=# SELECT date_trunc('second', current_timestamp -\n> pg_postmaster_start_time()) as uptime;\n> uptime\n> ----------------\n> 1 day 04:07:18\n>\n> top - 13:08:22 up 1 day, 5:45, 2 users, load average: 1.65, 1.65, 1.56\n> Tasks: 303 total, 3 running, 300 sleeping, 0 stopped, 0 zombie\n> %Cpu(s): 9.6 us, 3.4 sy, 0.0 ni, 86.8 id, 0.1 wa, 0.0 hi, 0.0 si,\n> 0.0 st\n> KiB Mem : 13185940+total, 992560 free, 40571300 used, 90295552 buff/cache\n> KiB Swap: 78643200 total, 78450376 free, 192820 used. 
90327376 avail Mem\n>\n> iostat -mxy 5\n> Linux 3.10.0-1160.53.1.el7.x86_64\n> (***************************************) 05/31/2022 _x86_64_\n> (16 CPU)\n>\n> avg-cpu: %user %nice %system %iowait %steal %idle\n> 8.22 0.00 3.23 0.06 0.00 88.49\n>\n> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz\n> avgqu-sz await r_await w_await svctm %util\n> sda 0.00 0.00 0.00 0.60 0.00 0.00\n> 16.00 0.00 2.67 0.00 2.67 3.33 0.20\n> sdb 0.00 0.00 0.00 0.00 0.00 0.00\n> 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n> sdc 0.00 0.00 0.00 26.80 0.00 0.16\n> 11.94 0.01 0.37 0.00 0.37 0.69 1.86\n> sde 0.00 0.00 3.80 26.80 0.04 0.43\n> 31.27 0.03 0.96 0.63 1.01 0.40 1.22\n>\n> DB version\n> PostgreSQL 13.4\n>\n> Os\n> bash-4.2$ cat /etc/redhat-release\n> CentOS Linux release 7.9.2009 (Core)\n>\n> What could be the possible bottleneck ?\n>\n> Best Regards\n> Praneel",
"msg_date": "Tue, 31 May 2022 20:44:29 +0530",
"msg_from": "Praneel Devisetty <devisettypraneel@gmail.com>",
"msg_from_op": true,
"msg_subject": "REINDEXdb performance degrading gradually PG13.4"
},
{
"msg_contents": "On Tuesday, May 31, 2022, Praneel Devisetty <devisettypraneel@gmail.com>\nwrote:\n\n>\n> Initially it was processing 1000 tables per minute. Performance is\n>> gradually dropping and now after 24 hr it was processing 90 tables per\n>> minute.\n>>\n>\nThat seems like a fairly problematic metric given the general vast\ndisparities in size tables have.\n\nBuilding indexes is so IO heavy that the non-IO bottlenecks that exists\nlikely have minimal impact on the overall times this rebuild everything\nwill take. That said, I’ve never done anything at this scale before. I\nwouldn’t be too surprised if per-session cache effects are coming into play\ngiven the number of objects involved and the assumption that each session\nused for parallelism is persistent. I’m not sure how the parallelism works\nfor managing the work queue though as it isn’t documented and I haven’t\ninspected the source code.",
"msg_date": "Tue, 31 May 2022 08:42:06 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "REINDEXdb performance degrading gradually PG13.4"
},
{
"msg_contents": "On Tue, May 31, 2022 at 08:42:06AM -0700, David G. Johnston wrote:\n> Building indexes is so IO heavy that the non-IO bottlenecks that exists\n> likely have minimal impact on the overall times this rebuild everything\n> will take. That said, I’ve never done anything at this scale before. I\n> wouldn’t be too surprised if per-session cache effects are coming into play\n> given the number of objects involved and the assumption that each session\n> used for parallelism is persistent. I’m not sure how the parallelism works\n> for managing the work queue though as it isn’t documented and I haven’t\n> inspected the source code.\n\nget_parallel_object_list() in reindexdb.c would give the idea, where\nthe list of tables to rebuild are ordered based on an \"ORDER BY\nc.relpages DESC\", then the table queue is processed with its own\ncommand, moving on to the next item once we are done with an item in\nthe list.\n--\nMichael",
"msg_date": "Wed, 1 Jun 2022 13:37:37 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: REINDEXdb performance degrading gradually PG13.4"
},
{
"msg_contents": "On Tue, May 31, 2022 at 9:12 PM David G. Johnston <\ndavid.g.johnston@gmail.com> wrote:\n\n> On Tuesday, May 31, 2022, Praneel Devisetty <devisettypraneel@gmail.com>\n> wrote:\n>\n>>\n>> Initially it was processing 1000 tables per minute. Performance is\n>>> gradually dropping and now after 24 hr it was processing 90 tables per\n>>> minute.\n>>>\n>>\n> That seems like a fairly problematic metric given the general vast\n> disparities in size tables have.\n>\n> Building indexes is so IO heavy that the non-IO bottlenecks that exists\n> likely have minimal impact on the overall times this rebuild everything\n> will take. That said, I’ve never done anything at this scale before. I\n> wouldn’t be too surprised if per-session cache effects are coming into play\n> given the number of objects involved and the assumption that each session\n> used for parallelism is persistent. I’m not sure how the parallelism works\n> for managing the work queue though as it isn’t documented and I haven’t\n> inspected the source code.\n>\n\ncould you please share more about per-session cache effects /Point me to\nlink with more info .",
"msg_date": "Wed, 1 Jun 2022 12:36:41 +0530",
"msg_from": "Praneel Devisetty <devisettypraneel@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: REINDEXdb performance degrading gradually PG13.4"
},
{
"msg_contents": "On Tue, May 31, 2022 at 11:14 AM Praneel Devisetty <\ndevisettypraneel@gmail.com> wrote:\n\n>\n> Hi,\n>>\n>> We are trying to reindex 600k tables in a single database of size 2.7TB\n>> using reindexdb utility in a shell script\n>> reindexdb -v -d $dbname -h $hostname -U tkcsowner --concurrently -j\n>> $parallel -S $schema\n>>\n>>\nWhat is the value of $parallel? Are all the tables in the same schema?\n\n\n> Initially it was processing 1000 tables per minute. Performance is\n>> gradually dropping and now after 24 hr it was processing 90 tables per\n>> minute.\n>>\n>\nI can't even get remotely close to 1000 per minute with those options, even\nwith only 100000 single-index tables with all of them being empty. Are you\nsure that isn't 1000 per hour?\n\nUsing --concurrently really hits the stats system hard (I'm not sure why).\n Could you just omit that? If it is running at 1000 per minute or even per\nhour, does it really matter if the table is locked for as long as it takes\nto reindex?\n\nCheers,\n\nJeff",
"msg_date": "Wed, 1 Jun 2022 13:41:07 -0400",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: REINDEXdb performance degrading gradually PG13.4"
}
] |
[
{
"msg_contents": "Hi listers,\n\nIs there any sql query which we can use to find the logical reads performed\nby particular sql statement in postgres ??\n\nThanks,\nGoti\n-- \nThanks,\n\nGoti",
"msg_date": "Tue, 31 May 2022 22:26:54 +0530",
"msg_from": "Goti <aryan.goti@gmail.com>",
"msg_from_op": true,
"msg_subject": "Logical reads"
}
] |
[
{
"msg_contents": "0\n<https://stackoverflow.com/posts/72515636/timeline>\n\nI am using libpq to connect the Postgres server in c++ code. Postgres\nserver version is 12.10\n\nMy table schema is defined below\n\n Column | Type | Collation | Nullable | Default |\nStorage | Stats target | Description\n---------------------+----------+-----------+----------+------------+----------+--------------+-------------\n event_id | bigint | | not null | |\nplain | |\n event_sec | integer | | not null | |\nplain | |\n event_usec | integer | | not null | |\nplain | |\n event_op | smallint | | not null | |\nplain | |\n rd | bigint | | not null | |\nplain | |\n addr | bigint | | not null | |\nplain | |\n masklen | bigint | | not null | |\nplain | |\n path_id | bigint | | | |\nplain | |\n attribs_tbl_last_id | bigint | | not null | |\nplain | |\n attribs_tbl_next_id | bigint | | not null | |\nplain | |\n bgp_id | bigint | | not null | |\nplain | |\n last_lbl_stk | bytea | | not null | |\nextended | |\n next_lbl_stk | bytea | | not null | |\nextended | |\n last_state | smallint | | | |\nplain | |\n next_state | smallint | | | |\nplain | |\n pkey | integer | | not null | 1654449420 |\nplain | | Partition key: LIST (pkey)\nIndexes:\n \"event_pkey\" PRIMARY KEY, btree (event_id, pkey)\n \"event_event_sec_event_usec_idx\" btree (event_sec, event_usec)\nPartitions: event_spl_1651768781 FOR VALUES IN (1651768781),\n event_spl_1652029140 FOR VALUES IN (1652029140),\n event_spl_1652633760 FOR VALUES IN (1652633760),\n event_spl_1653372439 FOR VALUES IN (1653372439),\n event_spl_1653786420 FOR VALUES IN (1653786420),\n event_spl_1654449420 FOR VALUES IN (1654449420)\n\nWhen I execute the following query it takes 1 - 2 milliseconds to execute.\nTime is provided as a parameter to function executing this query, it\ncontains epoche seconds and microseconds.\n\nSELECT event_id FROM event WHERE (event_sec > time.seconds) OR\n((event_sec=time.seconds) AND (event_usec>=time.useconds) ORDER 
BY\nevent_sec, event_usec LIMIT 1\n\nThis query is executed every 30 seconds on the same client connection\n(Which is persistent for weeks). This process runs for weeks, but some time\nsame query starts taking more than 10 minutes. Once it takes 10 minutes,\nafter that every execution takes > 10 minutes.\n\nIf I restart the process it recreated connection with the server and now\nexecution time again falls back to 1-2 milliseconds. This issue is\nintermittent, sometimes it triggers after a week of the running process and\nsometime after 2 - 3 weeks of the running process.\n\nWe add a new partition to the table every Sunday and write new data in the\nnew partition.\n\n\n-- \nregards\nMayank Kandari",
"msg_date": "Mon, 6 Jun 2022 15:28:43 +0530",
"msg_from": "Mayank Kandari <mayank.kandari@gmail.com>",
"msg_from_op": true,
"msg_subject": "Query is taking too long i intermittent"
},
{
"msg_contents": "On Mon, Jun 06, 2022 at 03:28:43PM +0530, Mayank Kandari wrote:\n> <https://stackoverflow.com/posts/72515636/timeline>\n\nThanks for including the link*.\n\n(*FYI, I find it to be kind of unfriendly to ask the same question in multiple\nforums, simultaneously - it's like cross-posting. The goal seems to be to\ndemand an answer from the internet community as quickly as possible.)\n\n> Indexes:\n> \"event_pkey\" PRIMARY KEY, btree (event_id, pkey)\n> \"event_event_sec_event_usec_idx\" btree (event_sec, event_usec)\n> When I execute the following query it takes 1 - 2 milliseconds to execute.\n\n> I am using libpq to connect the Postgres server in c++ code. Postgres\n> server version is 12.10\n> Time is provided as a parameter to function executing this query, it\n> contains epoche seconds and microseconds.\n\nAre you using the simple query protocol or the extended protocol ?\n\n> This query is executed every 30 seconds on the same client connection\n> (Which is persistent for weeks). This process runs for weeks, but some time\n> same query starts taking more than 10 minutes. Once it takes 10 minutes,\n> after that every execution takes > 10 minutes.\n\n> If I restart the process it recreated connection with the server and now\n> execution time again falls back to 1-2 milliseconds. This issue is\n> intermittent, sometimes it triggers after a week of the running process and\n> sometime after 2 - 3 weeks of the running process.\n\nCould you get the query plan for the good vs bad executions ?\n\nTo get the \"bad\" plan, I suggest to enable auto-explain and set its min\nduration to 10 seconds or 1 minute. The \"good\" plan you can get any time from\npsql.\n\n> SELECT event_id FROM event WHERE (event_sec > time.seconds) OR\n> ((event_sec=time.seconds) AND (event_usec>=time.useconds) ORDER BY\n> event_sec, event_usec LIMIT 1\n\nI think it'd be better if the column was a float storing the fractional number\nof seconds. 
Currently, it may be hard for the planner to estimate rowcounts if\nthe conditions are not independent. I don't know if it's related to this\nproblem, though.\n\n\n",
"msg_date": "Mon, 6 Jun 2022 05:35:04 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Query is taking too long i intermittent"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Mon, Jun 06, 2022 at 03:28:43PM +0530, Mayank Kandari wrote:\n>> SELECT event_id FROM event WHERE (event_sec > time.seconds) OR\n>> ((event_sec=time.seconds) AND (event_usec>=time.useconds) ORDER BY\n>> event_sec, event_usec LIMIT 1\n\n> I think it'd be better if the column was a float storing the fractional number\n> of seconds. Currently, it may be hard for the planner to estimate rowcounts if\n> the conditions are not independent. I don't know if it's related to this\n> problem, though.\n\nAlso, even if you can't change the data representation, there's a more\nidiomatic way to do that in SQL: use a row comparison.\n\nSELECT ...\nWHERE row(event_sec, event_usec) >= row(time.seconds, time.useconds) ...\n\nI doubt this is notably more execution-efficient, but if you're getting a\nbad rowcount estimate it should help with that. It's easier to read too.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 06 Jun 2022 10:11:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Query is taking too long i intermittent"
},
{
"msg_contents": "Thanks for the tip! I will update my process and monitor it.\n\nOn Mon, Jun 6, 2022 at 7:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > On Mon, Jun 06, 2022 at 03:28:43PM +0530, Mayank Kandari wrote:\n> >> SELECT event_id FROM event WHERE (event_sec > time.seconds) OR\n> >> ((event_sec=time.seconds) AND (event_usec>=time.useconds) ORDER BY\n> >> event_sec, event_usec LIMIT 1\n>\n> > I think it'd be better if the column was a float storing the fractional\n> number\n> > of seconds. Currently, it may be hard for the planner to estimate\n> rowcounts if\n> > the conditions are not independent. I don't know if it's related to this\n> > problem, though.\n>\n> Also, even if you can't change the data representation, there's a more\n> idiomatic way to do that in SQL: use a row comparison.\n>\n> SELECT ...\n> WHERE row(event_sec, event_usec) >= row(time.seconds, time.useconds) ...\n>\n> I doubt this is notably more execution-efficient, but if you're getting a\n> bad rowcount estimate it should help with that. It's easier to read too.\n>\n> regards, tom lane\n>\n\n\n-- \nregards\nMayank Kandari",
"msg_date": "Mon, 6 Jun 2022 22:43:25 +0530",
"msg_from": "Mayank Kandari <mayank.kandari@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Query is taking too long i intermittent"
}
] |
[
{
"msg_contents": "Hi,\n\nI'm using PostgreSQL 14.3 and I'm getting strange behavior in a complex\nquery generated by the Entity Framework.\n\nThe inner (complex) query has a quick execution time:\n\n# SELECT \"Extent1\".\"id\", CAST (\"Extent1\".\"date\" AS timestamp) AS \"C3\"\n FROM \"dbo\".\"ng_content\" AS \"Extent1\"\n INNER JOIN \"dbo\".\"ng_path_content\" AS \"Extent2\" ON \"Extent1\".\"id\" =\n\"Extent2\".\"id_content\"\n WHERE \"Extent1\".\"date_from\" <= CAST (LOCALTIMESTAMP AS timestamp)\n AND \"Extent1\".\"date_to\" >= CAST (LOCALTIMESTAMP AS\ntimestamp)\n AND 2 = \"Extent1\".\"id_status\"\n AND EXISTS (\n SELECT 1 AS \"C1\"\n FROM (\n SELECT \"Extent3\".\"TagId\" FROM\n\"dbo\".\"ngx_tag_content\" AS \"Extent3\"\n WHERE \"Extent1\".\"id\" = \"Extent3\".\"ContentId\"\n ) AS \"Project1\"\n WHERE EXISTS (\n SELECT 1 AS \"C1\" FROM (SELECT 1 AS \"C\") AS\n\"SingleRowTable1\"\n WHERE \"Project1\".\"TagId\" = 337139)\n )\n AND (\"Extent2\".\"id_path\" IN (27495,27554,27555) AND\nNOT EXISTS (SELECT 1 AS \"C1\" FROM (SELECT 1 AS \"C\") AS \"SingleRowTable2\"\nWHERE TRUE = FALSE)\n);\n id | C3\n----------+---------------------\n 13505155 | 2021-03-27 12:01:00\n 13505187 | 2021-03-27 12:03:00\n 13505295 | 2021-03-27 12:06:00\n 13505348 | 2021-03-27 12:09:00\n 13505552 | 2021-03-27 12:11:00\n(5 rows)\n\n*Time: 481.826 ms*\n\nIf I run the same query as a nested select I get similar results (Q1):\n\n\n*SELECT \"Project5\".idFROM (*\nSELECT \"Extent1\".\"id\", CAST (\"Extent1\".\"date\" AS timestamp) AS \"C3\"\n FROM \"dbo\".\"ng_content\" AS \"Extent1\"\n INNER JOIN \"dbo\".\"ng_path_content\" AS \"Extent2\" ON \"Extent1\".\"id\" =\n\"Extent2\".\"id_content\"\n WHERE \"Extent1\".\"date_from\" <= CAST (LOCALTIMESTAMP AS timestamp)\n AND \"Extent1\".\"date_to\" >= CAST (LOCALTIMESTAMP AS\ntimestamp)\n AND 2 = \"Extent1\".\"id_status\"\n AND EXISTS (\n SELECT 1 AS \"C1\"\n FROM (\n SELECT \"Extent3\".\"TagId\" FROM\n\"dbo\".\"ngx_tag_content\" AS \"Extent3\"\n WHERE 
\"Extent1\".\"id\" = \"Extent3\".\"ContentId\"\n ) AS \"Project1\"\n WHERE EXISTS (\n SELECT 1 AS \"C1\" FROM (SELECT 1 AS \"C\") AS\n\"SingleRowTable1\"\n WHERE \"Project1\".\"TagId\" = 337139)\n )\n AND (\"Extent2\".\"id_path\" IN (27495,27554,27555) AND\nNOT EXISTS (SELECT 1 AS \"C1\" FROM (SELECT 1 AS \"C\") AS \"SingleRowTable2\"\nWHERE TRUE = FALSE)\n)\n*) AS \"Project5\";*\n id\n----------\n 13505155\n 13505187\n 13505295\n 13505348\n 13505552\n(5 rows)\n\n*Time: 486.174 ms*\n\nBut if I add an ORDER BY and a LIMIT something goes very wrong (Q2):\n\n# SELECT \"Project5\".id\nFROM (\nSELECT \"Extent1\".\"id\", CAST (\"Extent1\".\"date\" AS timestamp) AS \"C3\"\n FROM \"dbo\".\"ng_content\" AS \"Extent1\"\n INNER JOIN \"dbo\".\"ng_path_content\" AS \"Extent2\" ON \"Extent1\".\"id\" =\n\"Extent2\".\"id_content\"\n WHERE \"Extent1\".\"date_from\" <= CAST (LOCALTIMESTAMP AS timestamp)\n AND \"Extent1\".\"date_to\" >= CAST (LOCALTIMESTAMP AS\ntimestamp)\n AND 2 = \"Extent1\".\"id_status\"\n AND EXISTS (\n SELECT 1 AS \"C1\"\n FROM (\n SELECT \"Extent3\".\"TagId\" FROM\n\"dbo\".\"ngx_tag_content\" AS \"Extent3\"\n WHERE \"Extent1\".\"id\" = \"Extent3\".\"ContentId\"\n ) AS \"Project1\"\n WHERE EXISTS (\n SELECT 1 AS \"C1\" FROM (SELECT 1 AS \"C\") AS\n\"SingleRowTable1\"\n WHERE \"Project1\".\"TagId\" = 337139)\n )\n AND (\"Extent2\".\"id_path\" IN (27495,27554,27555) AND\nNOT EXISTS (SELECT 1 AS \"C1\" FROM (SELECT 1 AS \"C\") AS \"SingleRowTable2\"\nWHERE TRUE = FALSE)\n)\n) AS \"Project5\" *ORDER BY \"Project5\".\"C3\" DESC LIMIT 6*;\n id\n----------\n 13505552\n 13505348\n 13505295\n 13505187\n 13505155\n(5 rows)\n\n*Time: 389375.374 ms (06:29.375)*\n\nAn EXPLAIN (ANALYZE, BUFFERS) for Q1 returns this:\n\n QUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=661.07..826757.96 rows=27943 width=4) 
(actual\ntime=446.767..492.874 rows=5 loops=1)\n One-Time Filter: (NOT $1)\n Buffers: shared hit=344618 read=17702 written=349\n InitPlan 2 (returns $1)\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.003..0.003\nrows=0 loops=1)\n One-Time Filter: false\n -> Nested Loop (cost=661.07..826757.96 rows=27943 width=4) (actual\ntime=267.061..313.166 rows=5 loops=1)\n Buffers: shared hit=344618 read=17702 written=349\n -> Bitmap Heap Scan on ng_path_content \"Extent2\"\n (cost=660.63..30053.47 rows=58752 width=4) (actual time=2.455..28.005\nrows=51330 loops=1)\n Recheck Cond: (id_path = ANY\n('{27495,27554,27555}'::integer[]))\n Heap Blocks: exact=2914\n Buffers: shared hit=5 read=2963 written=35\n -> Bitmap Index Scan on ng_path_content_id_path_idx\n (cost=0.00..645.94 rows=58752 width=0) (actual time=2.020..2.021\nrows=51332 loops=1)\n Index Cond: (id_path = ANY\n('{27495,27554,27555}'::integer[]))\n Buffers: shared hit=5 read=47\n -> Index Scan using pk_ng_content on ng_content \"Extent1\"\n (cost=0.43..13.55 rows=1 width=4) (actual time=0.005..0.005 rows=0\nloops=51330)\n Index Cond: (id = \"Extent2\".id_content)\n Filter: ((2 = id_status) AND (date_from <= LOCALTIMESTAMP)\nAND (date_to >= LOCALTIMESTAMP) AND (SubPlan 1))\n Rows Removed by Filter: 1\n Buffers: shared hit=344613 read=14739 written=314\n SubPlan 1\n -> Index Only Scan using ix_ngx_tag_content on\nngx_tag_content \"Extent3\" (cost=0.43..8.46 rows=1 width=0) (actual\ntime=0.001..0.001 rows=0 loops=51327)\n Index Cond: ((\"TagId\" = 337139) AND (\"ContentId\" =\n\"Extent1\".id))\n Heap Fetches: 5\n Buffers: shared hit=153982 read=4\n Planning:\n Buffers: shared hit=464 read=87 written=51\n Planning Time: 5.374 ms\n JIT:\n Functions: 18\n Options: Inlining true, Optimization true, Expressions true, Deforming\ntrue\n Timing: Generation 1.678 ms, Inlining 81.653 ms, Optimization 63.837 ms,\nEmission 33.967 ms, Total 181.135 ms\n Execution Time: 534.009 ms\n(33 rows)\n\nAn EXPLAIN (ANALYZE, 
BUFFERS) for Q2 returns this:\n\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.88..8979.75 rows=6 width=12) (actual\ntime=11037.149..183849.138 rows=5 loops=1)\n Buffers: shared hit=15414548 read=564485 written=504\n InitPlan 2 (returns $1)\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.000..0.001\nrows=0 loops=1)\n One-Time Filter: false\n -> Result (cost=0.86..41816103.19 rows=27943 width=12) (actual\ntime=11037.146..183849.127 rows=5 loops=1)\n One-Time Filter: (NOT $1)\n Buffers: shared hit=15414548 read=564485 written=504\n -> Nested Loop (cost=0.86..41816103.19 rows=27943 width=12)\n(actual time=11037.143..183849.116 rows=5 loops=1)\n Buffers: shared hit=15414548 read=564485 written=504\n -> Index Scan Backward using ix_ng_content_date on\nng_content \"Extent1\" (cost=0.43..40616715.85 rows=2231839 width=12)\n(actual time=11027.808..183839.289 rows=5 loops=1)\n Filter: ((2 = id_status) AND (date_from <=\nLOCALTIMESTAMP) AND (date_to >= LOCALTIMESTAMP) AND (SubPlan 1))\n Rows Removed by Filter: 4685618\n Buffers: shared hit=15414533 read=564480 written=504\n SubPlan 1\n -> Index Only Scan using ix_ngx_tag_content on\nngx_tag_content \"Extent3\" (cost=0.43..8.46 rows=1 width=0) (actual\ntime=0.003..0.003 rows=0 loops=4484963)\n Index Cond: ((\"TagId\" = 337139) AND\n(\"ContentId\" = \"Extent1\".id))\n Heap Fetches: 5\n Buffers: shared hit=13454890 read=4\n -> Index Scan using ng_path_content_id_content_idx on\nng_path_content \"Extent2\" (cost=0.43..0.53 rows=1 width=4) (actual\ntime=1.956..1.958 rows=1 loops=5)\n Index Cond: (id_content = \"Extent1\".id)\n Filter: (id_path = ANY\n('{27495,27554,27555}'::integer[]))\n Buffers: shared hit=15 read=5\n Planning:\n Buffers: shared hit=474 read=76\n Planning Time: 113.080 ms\n Execution Time: 183849.283 ms\n\nCan someone help me 
understand what's going on?\n-- \nPaulo Silva <paulojjs@gmail.com>",
"msg_date": "Wed, 8 Jun 2022 09:44:08 +0100",
"msg_from": "Paulo Silva <paulojjs@gmail.com>",
"msg_from_op": true,
"msg_subject": "Strange behavior of limit clause in complex query"
},
{
"msg_contents": "Em qua., 8 de jun. de 2022 às 05:44, Paulo Silva <paulojjs@gmail.com>\nescreveu:\n\n> Hi,\n>\n> I'm using PostgreSQL 14.3 and I'm getting strange behavior in a complex\n> query generated by the Entity Framework.\n>\n> The inner (complex) query has a quick execution time:\n>\n> # SELECT \"Extent1\".\"id\", CAST (\"Extent1\".\"date\" AS timestamp) AS \"C3\"\n> FROM \"dbo\".\"ng_content\" AS \"Extent1\"\n> INNER JOIN \"dbo\".\"ng_path_content\" AS \"Extent2\" ON \"Extent1\".\"id\"\n> = \"Extent2\".\"id_content\"\n> WHERE \"Extent1\".\"date_from\" <= CAST (LOCALTIMESTAMP AS timestamp)\n> AND \"Extent1\".\"date_to\" >= CAST (LOCALTIMESTAMP AS\n> timestamp)\n> AND 2 = \"Extent1\".\"id_status\"\n> AND EXISTS (\n> SELECT 1 AS \"C1\"\n> FROM (\n> SELECT \"Extent3\".\"TagId\" FROM\n> \"dbo\".\"ngx_tag_content\" AS \"Extent3\"\n> WHERE \"Extent1\".\"id\" =\n> \"Extent3\".\"ContentId\"\n> ) AS \"Project1\"\n> WHERE EXISTS (\n> SELECT 1 AS \"C1\" FROM (SELECT 1 AS \"C\") AS\n> \"SingleRowTable1\"\n> WHERE \"Project1\".\"TagId\" = 337139)\n> )\n> AND (\"Extent2\".\"id_path\" IN (27495,27554,27555)\n> AND NOT EXISTS (SELECT 1 AS \"C1\" FROM (SELECT 1 AS \"C\") AS\n> \"SingleRowTable2\" WHERE TRUE = FALSE)\n> );\n> id | C3\n> ----------+---------------------\n> 13505155 | 2021-03-27 12:01:00\n> 13505187 | 2021-03-27 12:03:00\n> 13505295 | 2021-03-27 12:06:00\n> 13505348 | 2021-03-27 12:09:00\n> 13505552 | 2021-03-27 12:11:00\n> (5 rows)\n>\n> *Time: 481.826 ms*\n>\n> If I run the same query as a nested select I get similar results (Q1):\n>\n>\n> *SELECT \"Project5\".idFROM (*\n> SELECT \"Extent1\".\"id\", CAST (\"Extent1\".\"date\" AS timestamp) AS \"C3\"\n> FROM \"dbo\".\"ng_content\" AS \"Extent1\"\n> INNER JOIN \"dbo\".\"ng_path_content\" AS \"Extent2\" ON \"Extent1\".\"id\"\n> = \"Extent2\".\"id_content\"\n> WHERE \"Extent1\".\"date_from\" <= CAST (LOCALTIMESTAMP AS timestamp)\n> AND \"Extent1\".\"date_to\" >= CAST (LOCALTIMESTAMP AS\n> timestamp)\n> AND 2 = 
\"Extent1\".\"id_status\"\n> AND EXISTS (\n> SELECT 1 AS \"C1\"\n> FROM (\n> SELECT \"Extent3\".\"TagId\" FROM\n> \"dbo\".\"ngx_tag_content\" AS \"Extent3\"\n> WHERE \"Extent1\".\"id\" =\n> \"Extent3\".\"ContentId\"\n> ) AS \"Project1\"\n> WHERE EXISTS (\n> SELECT 1 AS \"C1\" FROM (SELECT 1 AS \"C\") AS\n> \"SingleRowTable1\"\n> WHERE \"Project1\".\"TagId\" = 337139)\n> )\n> AND (\"Extent2\".\"id_path\" IN (27495,27554,27555)\n> AND NOT EXISTS (SELECT 1 AS \"C1\" FROM (SELECT 1 AS \"C\") AS\n> \"SingleRowTable2\" WHERE TRUE = FALSE)\n> )\n> *) AS \"Project5\";*\n> id\n> ----------\n> 13505155\n> 13505187\n> 13505295\n> 13505348\n> 13505552\n> (5 rows)\n>\n> *Time: 486.174 ms*\n>\n> But if I add an ORDER BY and a LIMIT something goes very wrong (Q2):\n>\n> # SELECT \"Project5\".id\n> FROM (\n> SELECT \"Extent1\".\"id\", CAST (\"Extent1\".\"date\" AS timestamp) AS \"C3\"\n> FROM \"dbo\".\"ng_content\" AS \"Extent1\"\n> INNER JOIN \"dbo\".\"ng_path_content\" AS \"Extent2\" ON \"Extent1\".\"id\"\n> = \"Extent2\".\"id_content\"\n> WHERE \"Extent1\".\"date_from\" <= CAST (LOCALTIMESTAMP AS timestamp)\n> AND \"Extent1\".\"date_to\" >= CAST (LOCALTIMESTAMP AS\n> timestamp)\n> AND 2 = \"Extent1\".\"id_status\"\n> AND EXISTS (\n> SELECT 1 AS \"C1\"\n> FROM (\n> SELECT \"Extent3\".\"TagId\" FROM\n> \"dbo\".\"ngx_tag_content\" AS \"Extent3\"\n> WHERE \"Extent1\".\"id\" =\n> \"Extent3\".\"ContentId\"\n> ) AS \"Project1\"\n> WHERE EXISTS (\n> SELECT 1 AS \"C1\" FROM (SELECT 1 AS \"C\") AS\n> \"SingleRowTable1\"\n> WHERE \"Project1\".\"TagId\" = 337139)\n> )\n> AND (\"Extent2\".\"id_path\" IN (27495,27554,27555)\n> AND NOT EXISTS (SELECT 1 AS \"C1\" FROM (SELECT 1 AS \"C\") AS\n> \"SingleRowTable2\" WHERE TRUE = FALSE)\n> )\n> ) AS \"Project5\" *ORDER BY \"Project5\".\"C3\" DESC LIMIT 6*;\n>\nI think that LIMIT is confusing the planner.\nForcing a path that in the end is not faster.\n\nCan you try something similar to this?\n\nWITH q AS (\nSELECT \"Project5\".id\nFROM (\nSELECT 
\"Extent1\".\"id\", CAST (\"Extent1\".\"date\" AS timestamp) AS \"C3\"\n FROM \"dbo\".\"ng_content\" AS \"Extent1\"\n INNER JOIN \"dbo\".\"ng_path_content\" AS \"Extent2\" ON \"Extent1\".\"id\" =\n\"Extent2\".\"id_content\"\n WHERE \"Extent1\".\"date_from\" <= CAST (LOCALTIMESTAMP AS timestamp)\n AND \"Extent1\".\"date_to\" >= CAST (LOCALTIMESTAMP AS\ntimestamp)\n AND 2 = \"Extent1\".\"id_status\"\n AND EXISTS (\n SELECT 1 AS \"C1\"\n FROM (\n SELECT \"Extent3\".\"TagId\" FROM\n\"dbo\".\"ngx_tag_content\" AS \"Extent3\"\n WHERE \"Extent1\".\"id\" = \"Extent3\".\"ContentId\"\n ) AS \"Project1\"\n WHERE EXISTS (\n SELECT 1 AS \"C1\" FROM (SELECT 1 AS \"C\") AS\n\"SingleRowTable1\"\n WHERE \"Project1\".\"TagId\" = 337139)\n )\n AND (\"Extent2\".\"id_path\" IN (27495,27554,27555) AND\nNOT EXISTS (SELECT 1 AS \"C1\" FROM (SELECT 1 AS \"C\") AS \"SingleRowTable2\"\nWHERE TRUE = FALSE))\n))\nSELECT * FROM q ORDER BY q.C3 DESC LIMIT 6;\n\nProbably, using CTE, the plan you want.\n\nregards,\nRanier Vilela",
"msg_date": "Wed, 8 Jun 2022 08:40:10 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Strange behavior of limit clause in complex query"
},
{
"msg_contents": "Hi,\n\nThe problem is that the query is generated by the framework, I'm not sure\nif I can change anything on it. Any other way to influence planner?\n\nRegards\n\nRanier Vilela <ranier.vf@gmail.com> escreveu no dia quarta, 8/06/2022 à(s)\n12:40:\n\n> Em qua., 8 de jun. de 2022 às 05:44, Paulo Silva <paulojjs@gmail.com>\n> escreveu:\n>\n>> Hi,\n>>\n>> I'm using PostgreSQL 14.3 and I'm getting strange behavior in a complex\n>> query generated by the Entity Framework.\n>>\n>> The inner (complex) query has a quick execution time:\n>>\n>> # SELECT \"Extent1\".\"id\", CAST (\"Extent1\".\"date\" AS timestamp) AS \"C3\"\n>> FROM \"dbo\".\"ng_content\" AS \"Extent1\"\n>> INNER JOIN \"dbo\".\"ng_path_content\" AS \"Extent2\" ON \"Extent1\".\"id\"\n>> = \"Extent2\".\"id_content\"\n>> WHERE \"Extent1\".\"date_from\" <= CAST (LOCALTIMESTAMP AS timestamp)\n>> AND \"Extent1\".\"date_to\" >= CAST (LOCALTIMESTAMP AS\n>> timestamp)\n>> AND 2 = \"Extent1\".\"id_status\"\n>> AND EXISTS (\n>> SELECT 1 AS \"C1\"\n>> FROM (\n>> SELECT \"Extent3\".\"TagId\" FROM\n>> \"dbo\".\"ngx_tag_content\" AS \"Extent3\"\n>> WHERE \"Extent1\".\"id\" =\n>> \"Extent3\".\"ContentId\"\n>> ) AS \"Project1\"\n>> WHERE EXISTS (\n>> SELECT 1 AS \"C1\" FROM (SELECT 1 AS \"C\")\n>> AS \"SingleRowTable1\"\n>> WHERE \"Project1\".\"TagId\" = 337139)\n>> )\n>> AND (\"Extent2\".\"id_path\" IN (27495,27554,27555)\n>> AND NOT EXISTS (SELECT 1 AS \"C1\" FROM (SELECT 1 AS \"C\") AS\n>> \"SingleRowTable2\" WHERE TRUE = FALSE)\n>> );\n>> id | C3\n>> ----------+---------------------\n>> 13505155 | 2021-03-27 12:01:00\n>> 13505187 | 2021-03-27 12:03:00\n>> 13505295 | 2021-03-27 12:06:00\n>> 13505348 | 2021-03-27 12:09:00\n>> 13505552 | 2021-03-27 12:11:00\n>> (5 rows)\n>>\n>> *Time: 481.826 ms*\n>>\n>> If I run the same query as a nested select I get similar results (Q1):\n>>\n>>\n>> *SELECT \"Project5\".idFROM (*\n>> SELECT \"Extent1\".\"id\", CAST (\"Extent1\".\"date\" AS timestamp) AS \"C3\"\n>> FROM 
\"dbo\".\"ng_content\" AS \"Extent1\"\n>> INNER JOIN \"dbo\".\"ng_path_content\" AS \"Extent2\" ON \"Extent1\".\"id\"\n>> = \"Extent2\".\"id_content\"\n>> WHERE \"Extent1\".\"date_from\" <= CAST (LOCALTIMESTAMP AS timestamp)\n>> AND \"Extent1\".\"date_to\" >= CAST (LOCALTIMESTAMP AS\n>> timestamp)\n>> AND 2 = \"Extent1\".\"id_status\"\n>> AND EXISTS (\n>> SELECT 1 AS \"C1\"\n>> FROM (\n>> SELECT \"Extent3\".\"TagId\" FROM\n>> \"dbo\".\"ngx_tag_content\" AS \"Extent3\"\n>> WHERE \"Extent1\".\"id\" =\n>> \"Extent3\".\"ContentId\"\n>> ) AS \"Project1\"\n>> WHERE EXISTS (\n>> SELECT 1 AS \"C1\" FROM (SELECT 1 AS \"C\")\n>> AS \"SingleRowTable1\"\n>> WHERE \"Project1\".\"TagId\" = 337139)\n>> )\n>> AND (\"Extent2\".\"id_path\" IN (27495,27554,27555)\n>> AND NOT EXISTS (SELECT 1 AS \"C1\" FROM (SELECT 1 AS \"C\") AS\n>> \"SingleRowTable2\" WHERE TRUE = FALSE)\n>> )\n>> *) AS \"Project5\";*\n>> id\n>> ----------\n>> 13505155\n>> 13505187\n>> 13505295\n>> 13505348\n>> 13505552\n>> (5 rows)\n>>\n>> *Time: 486.174 ms*\n>>\n>> But if I add an ORDER BY and a LIMIT something goes very wrong (Q2):\n>>\n>> # SELECT \"Project5\".id\n>> FROM (\n>> SELECT \"Extent1\".\"id\", CAST (\"Extent1\".\"date\" AS timestamp) AS \"C3\"\n>> FROM \"dbo\".\"ng_content\" AS \"Extent1\"\n>> INNER JOIN \"dbo\".\"ng_path_content\" AS \"Extent2\" ON \"Extent1\".\"id\"\n>> = \"Extent2\".\"id_content\"\n>> WHERE \"Extent1\".\"date_from\" <= CAST (LOCALTIMESTAMP AS timestamp)\n>> AND \"Extent1\".\"date_to\" >= CAST (LOCALTIMESTAMP AS\n>> timestamp)\n>> AND 2 = \"Extent1\".\"id_status\"\n>> AND EXISTS (\n>> SELECT 1 AS \"C1\"\n>> FROM (\n>> SELECT \"Extent3\".\"TagId\" FROM\n>> \"dbo\".\"ngx_tag_content\" AS \"Extent3\"\n>> WHERE \"Extent1\".\"id\" =\n>> \"Extent3\".\"ContentId\"\n>> ) AS \"Project1\"\n>> WHERE EXISTS (\n>> SELECT 1 AS \"C1\" FROM (SELECT 1 AS \"C\")\n>> AS \"SingleRowTable1\"\n>> WHERE \"Project1\".\"TagId\" = 337139)\n>> )\n>> AND (\"Extent2\".\"id_path\" IN (27495,27554,27555)\n>> AND 
NOT EXISTS (SELECT 1 AS \"C1\" FROM (SELECT 1 AS \"C\") AS\n>> \"SingleRowTable2\" WHERE TRUE = FALSE)\n>> )\n>> ) AS \"Project5\" *ORDER BY \"Project5\".\"C3\" DESC LIMIT 6*;\n>>\n> I think that LIMIT is confusing the planner.\n> Forcing a path that in the end is not faster.\n>\n> Can you try something similar to this?\n>\n> WITH q AS (\n> SELECT \"Project5\".id\n> FROM (\n> SELECT \"Extent1\".\"id\", CAST (\"Extent1\".\"date\" AS timestamp) AS \"C3\"\n> FROM \"dbo\".\"ng_content\" AS \"Extent1\"\n> INNER JOIN \"dbo\".\"ng_path_content\" AS \"Extent2\" ON \"Extent1\".\"id\"\n> = \"Extent2\".\"id_content\"\n> WHERE \"Extent1\".\"date_from\" <= CAST (LOCALTIMESTAMP AS timestamp)\n> AND \"Extent1\".\"date_to\" >= CAST (LOCALTIMESTAMP AS\n> timestamp)\n> AND 2 = \"Extent1\".\"id_status\"\n> AND EXISTS (\n> SELECT 1 AS \"C1\"\n> FROM (\n> SELECT \"Extent3\".\"TagId\" FROM\n> \"dbo\".\"ngx_tag_content\" AS \"Extent3\"\n> WHERE \"Extent1\".\"id\" =\n> \"Extent3\".\"ContentId\"\n> ) AS \"Project1\"\n> WHERE EXISTS (\n> SELECT 1 AS \"C1\" FROM (SELECT 1 AS \"C\") AS\n> \"SingleRowTable1\"\n> WHERE \"Project1\".\"TagId\" = 337139)\n> )\n> AND (\"Extent2\".\"id_path\" IN (27495,27554,27555)\n> AND NOT EXISTS (SELECT 1 AS \"C1\" FROM (SELECT 1 AS \"C\") AS\n> \"SingleRowTable2\" WHERE TRUE = FALSE))\n> ))\n> SELECT * FROM q ORDER BY q.C3 DESC LIMIT 6;\n>\n> Probably, using CTE, the plan you want.\n>\n> regards,\n> Ranier Vilela\n>\n\n\n-- \nPaulo Silva <paulojjs@gmail.com>",
"msg_date": "Wed, 8 Jun 2022 15:07:11 +0100",
"msg_from": "Paulo Silva <paulojjs@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Strange behavior of limit clause in complex query"
},
{
"msg_contents": "On Wed, Jun 08, 2022 at 09:44:08AM +0100, Paulo Silva wrote:\n> But if I add an ORDER BY and a LIMIT something goes very wrong (Q2):\n\nA somewhat common problem.\n\nA common workaround is to change \"ORDER BY a\" to something like \"ORDER BY a+0\"\n(if your framework will allow it).\n\n> An EXPLAIN (ANALYZE, BUFFERS) for Q2 returns this:\n...\n> -> Index Scan Backward using ix_ng_content_date on ng_content \"Extent1\" (cost=0.43..40616715.85 rows=2231839 width=12) (actual time=11027.808..183839.289 rows=5 loops=1)\n> Filter: ((2 = id_status) AND (date_from <= LOCALTIMESTAMP) AND (date_to >= LOCALTIMESTAMP) AND (SubPlan 1))\n> Rows Removed by Filter: 4685618\n> Buffers: shared hit=15414533 read=564480 written=504\n\nI'm not sure if it would help your original issue, but the rowcount estimate\nhere is bad - overestimating 2231839 rows instead of 5.\n\nCould you try to determine which of those conditions (id_status, date_from,\ndate_to, or SubPlan) causes the mis-estimate, or if the estimate is only wrong\nwhen they're combined ?\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 8 Jun 2022 09:32:47 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Strange behavior of limit clause in complex query"
}
] |
[
{
"msg_contents": "Hi,\n\nI have a table \"tbl\" with a couple of columns. One of them is \"row\"\njsonb. It has a GIN index as per below. The table isn't particularly\nlarge, in lower tens of GB. Each \"row\" has maybe 5-20 keys, nothing\ncrazy.\nNow, when I query it with @> operator I get very different performance\ndepending on the selection of keys I want to look for. The queries\nbelow return the same result set (just a few rows). I think I have\nnarrowed the problem down to uniqueness of a given key. For example\nthis query is fast:\n\nQ1\nselect count(*) from tbl where row @> '{\"SELECTIVE_COL\":\n\"SearchValue\", \"DATE\": \"20220606\"}'::jsonb\nIt takes about 0.6ms execution time\n\nHowever this one, is slow:\n\nQ2\nselect count(*) from tbl where row @> '{\"SELECTIVE_COL\":\n\"SearchValue\", \"DATE\": \"20220606\", \"NON_SELECTIVE_COL\": \"Abc\"}'::jsonb\nIt takes 17ms\n\nNote that the only difference is adding one more - not very unique -\nkey. If in Q2 I replaced NON_SELECTIVE_COL with another selective\ncolumn, it's becoming fast again.\n\nHere are the query plans:\nQ1: https://explain.depesz.com/s/qxU8\nQ2: https://explain.depesz.com/s/oIW3\nBoth look very similar, apart from a very different number of shared\nbuffers hit.\n\nIndex on \"row\":\n\"tbl_row_idx\" gin (\"row\" jsonb_path_ops) WITH (fastupdate=off) WHERE\nupper_inf(effective_range) AND NOT deleted\n\nPG Version: 14.3, work_mem 512MB\n\nWhat are my options? Why is the second query so much slower? I changed\nQ2 to conjunction of conditions on single columns (row @> '..' and row\n@> ...) and it was fast, even with the NON_SELECTIVE_COL included.\nSadly it will be difficult for me do to this in my code without using\ndynamic SQL.\n\nMany thanks,\n-- Marcin\n\n\n",
"msg_date": "Wed, 8 Jun 2022 11:55:44 +0100",
"msg_from": "Marcin Krupowicz <ma@rcin.me>",
"msg_from_op": true,
"msg_subject": "Adding non-selective key to jsonb query @> reduces performance?"
},
{
"msg_contents": "Marcin Krupowicz <ma@rcin.me> writes:\n> However this one, is slow:\n\n> Q2\n> select count(*) from tbl where row @> '{\"SELECTIVE_COL\":\n> \"SearchValue\", \"DATE\": \"20220606\", \"NON_SELECTIVE_COL\": \"Abc\"}'::jsonb\n> It takes 17ms\n\n> Note that the only difference is adding one more - not very unique -\n> key. If in Q2 I replaced NON_SELECTIVE_COL with another selective\n> column, it's becoming fast again.\n\nThis doesn't surprise me a whole lot based on what I know of GIN.\nIt's going to store sets of TIDs associated with each key or value\nmentioned in the data, and then a query will have to AND the sets\nof TIDs for keys/values mentioned in the query. That will take\nlonger when some of those sets are big.\n\nIt might be worth experimenting with an index built using the\nnon-default jsonb_path_ops opclass [1]. I'm not sure if that'd be\nfaster for this scenario, but it seems worth trying.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/docs/current/datatype-json.html#JSON-INDEXING\n\n\n",
"msg_date": "Wed, 08 Jun 2022 10:32:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Adding non-selective key to jsonb query @> reduces performance?"
}
] |
[
{
"msg_contents": "Hey y'all!\n\nSo recently, I ran into an issue where a query I wrote wasn't using an index, presumably because what I was doing was too hard for the query planner to figure out. I've distilled the problem into its essence to the best of my ability, and found that it's because `select` seems to hinder it.\n\nThe problem boils down to the planner not figuring out that these two queries should use an index:\n\n```sql\n-- Setup\ncreate table numbers(n int);\ninsert into numbers (n) select generate_series(1, 1000000);\ncreate index numbers_n_idx on numbers(n);\n\n-- Non-indexed queries\nexplain analyze select numbers.n from (values (5000000)) quantities(q)\njoin numbers on numbers.n in (select q);\n\nexplain analyze select numbers.n from (values (5000000)) quantities(q)\njoin numbers on numbers.n = any(select q);\n```\n\nThese examples may seem silly, so let me provide a \"case study\" query that should justify the need for such an optimization. I had a query that was generating an array of items, and wanted to join it to a table given that some column of that table was present in the array. It looked like so:\n\n```sql\nselect numbers.n from quantities join numbers on numbers.n in (select unnest(quantities.q));\n```\n\nThis turned out to be horrendously slow, because it was performing a sequential scan! I did however end up settling on the following form:\n\n```sql\nselect numbers.n from quantities join numbers on numbers.n = any(quantities.q);\n```\n\nThis was only possible because I was dealing with arrays though, and an operation such as `in (select unnest...)` can be easily converted to `= any(...)`. 
However for the general case, I believe an optimization in this area may provide benefit as there may exist a circumstance that does not have an alternative to a sub-query select (`= any()` was my alternative), but I am just a database newbie.\n\nI've noticed this problem has been around since at least 11.7, and is still present as of the `postgres:15beta1` docker image. I've attached a script which reproduces the issue. It uses docker, so I'm confident you'll be able to run it without issue.\n\nFinally, I ask:\n\n- Is this an issue that should be fixed? I'm a database newbie so I have no idea about the deep semantics of SQL and what a select inside a `join_condition` could imply to the planner to prevent it from optimizing it.\n\n- If \"yes\" to the previous question, what would be the precise semantics of such an optimization? I loosely say `n in (select q)` -> `n in (q)` for all n and q, but of course I don't have enough knowledge to know that this is correct in terms of whatever Postgres' internal query optimization IR is.\n\n- Can a database newbie like myself contribute an optimization pass in Postgres to fix this? I'm fascinated by the work y'all do, and submitting a patch to Postgres that makes it into production would make my week.\n\nThank you for your time, and have a great day!",
"msg_date": "Sat, 11 Jun 2022 23:50:49 +0000",
"msg_from": "\"Josh\" <postgres@sirjosh3917.com>",
"msg_from_op": true,
"msg_subject": "Missed query planner optimization: `n in (select q)` -> `n in\n (q)`"
},
{
"msg_contents": "On Sun, Jun 12, 2022 at 2:47 PM Josh <postgres@sirjosh3917.com> wrote:\n\n>\n> This was only possible because I was dealing with arrays though, and an\n> operation such as `in (select unnest...)` can be easily converted to `=\n> any(...)`. However for the general case,\n\n\nIn the general case you don't have subqueries inside join conditions.\n\n\n> I believe an optimization in this area may provide benefit as there may\n> exist a circumstance that does not have an alternative to a sub-query\n> select (`= any()` was my alternative)\n\n\nI think we'd want a concrete example of a non-poorly written query (or at\nleast a poorly written one that, say, is generated by a framework, not just\ninexperienced human SQL writers) before we'd want to even entertain\nspending time on something like this.\n\n\n> - Is this an issue that should be fixed?\n\n\nProbably not worth the effort.\n\nI'm fascinated by the work y'all do, and submitting a patch to Postgres\n> that makes it into production would make my week.\n>\n>\nMaybe you'll find almost as much good is done helping others get their\npatches committed. There are many in need of reviewers.\n\nhttps://commitfest.postgresql.org/",
"msg_date": "Sun, 12 Jun 2022 16:17:14 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Missed query planner optimization: `n in (select q)` -> `n in\n (q)`"
}
] |
[
{
"msg_contents": "Hi ,\n\n We plan to migrate large database from Oracle to Postgres(version 13.6, OS Redhat8 Enterprise), we are checking options to make data load in Postgres fast. Data volume is about several TB, thousands of indexes, many large table with partitions. We want to make data load running fast and avoid miss any indexes when reindexing. There are 2 options about reindex. Could you give some suggestions about the 2 options, which option is better.\n\n\n 1. Create tables and indexes( empty database) , update pg_index set indisready=false and inisvalid=false, then load data use COPY from csv , then reindex table ...\n\nReindex on Postgres 13.6 not support parallel ,right? So we need to start multiple session to reindex multiple tables/indexes in parallel.\n\n\n2). Use pg_dump to dump meta data only , then copy \"CREATE INDEX ... sql \"\n Drop indexes before data load\n After data load, increase max_parallel_maintenance_workers, maintenance_work_mem\n\n Run CREATE INDEX ... sql to leverage parallel create index feature.\n\n\n\n\n\n\n\nThanks,\n\n\n\nJames",
"msg_date": "Fri, 17 Jun 2022 05:34:26 +0000",
"msg_from": "\"James Pang (chaolpan)\" <chaolpan@cisco.com>",
"msg_from_op": true,
"msg_subject": "reindex option for tuning load large data "
},
{
"msg_contents": "I believe you should be able to use reindexdb with parallel jobs:\nhttps://www.postgresql.org/docs/13/app-reindexdb.html\nIt will still create multiple connections, but you won't need to run\nmultiple commands.\n\nчт, 16 черв. 2022 р. о 22:34 James Pang (chaolpan) <chaolpan@cisco.com>\nпише:\n\n> Hi ,\n>\n> We plan to migrate large database from Oracle to Postgres(version 13.6,\n> OS Redhat8 Enterprise), we are checking options to make data load in\n> Postgres fast. Data volume is about several TB, thousands of indexes,\n> many large table with partitions. We want to make data load running fast\n> and avoid miss any indexes when reindexing. There are 2 options about\n> reindex. Could you give some suggestions about the 2 options, which option\n> is better.\n>\n>\n>\n> 1. Create tables and indexes( empty database) , update pg_index set\n> indisready=false and inisvalid=false, then load data use COPY from csv ,\n> then reindex table …\n>\n> Reindex on Postgres 13.6 not support parallel ,right? So we need to start\n> multiple session to reindex multiple tables/indexes in parallel.\n>\n>\n>\n>\n>\n> 2). Use pg_dump to dump meta data only , then copy “CREATE INDEX … sql “\n>\n> Drop indexes before data load\n>\n> After data load, increase max_parallel_maintenance_workers,\n> maintenance_work_mem\n>\n> Run CREATE INDEX … sql to leverage parallel create index feature.\n>\n>\n>\n>\n>\n>\n>\n> Thanks,\n>\n>\n>\n> James\n>\n>\n>\n>\n",
"msg_date": "Fri, 17 Jun 2022 20:48:35 -0700",
"msg_from": "Vitalii Tymchyshyn <vit@tym.im>",
"msg_from_op": false,
"msg_subject": "Re: reindex option for tuning load large data"
},
{
"msg_contents": "We have more than 8500 indexes , > 1000 tables, many partition tables too ; it’s safe to update pg_index set indisready=false and indisvisilbe=false , then reindexdb with parallel ? reindex parallel got done by multiple sessions , each session reindex one index at the same time , reindex the one index done in serial instead of parallel ?\r\n Compared with “set max_maintain_parallel_workers, and run CREATE INDEX …” , which is faster ?\r\n\r\nThanks,\r\nFrom: Vitalii Tymchyshyn <vit@tym.im>\r\nSent: Saturday, June 18, 2022 11:49 AM\r\nTo: James Pang (chaolpan) <chaolpan@cisco.com>\r\nCc: pgsql-performance@lists.postgresql.org\r\nSubject: Re: reindex option for tuning load large data\r\n\r\nI believe you should be able to use reindexdb with parallel jobs:\r\nhttps://www.postgresql.org/docs/13/app-reindexdb.html\r\nIt will still create multiple connections, but you won't need to run multiple commands.\r\n\r\nчт, 16 черв. 2022 р. о 22:34 James Pang (chaolpan) <chaolpan@cisco.com<mailto:chaolpan@cisco.com>> пише:\r\nHi ,\r\n\r\n We plan to migrate large database from Oracle to Postgres(version 13.6, OS Redhat8 Enterprise), we are checking options to make data load in Postgres fast. Data volume is about several TB, thousands of indexes, many large table with partitions. We want to make data load running fast and avoid miss any indexes when reindexing. There are 2 options about reindex. Could you give some suggestions about the 2 options, which option is better.\r\n\r\n\r\n 1. Create tables and indexes( empty database) , update pg_index set indisready=false and inisvalid=false, then load data use COPY from csv , then reindex table …\r\n\r\nReindex on Postgres 13.6 not support parallel ,right? So we need to start multiple session to reindex multiple tables/indexes in parallel.\r\n\r\n\r\n2). 
Use pg_dump to dump meta data only , then copy “CREATE INDEX … sql “\r\n Drop indexes before data load\r\n After data load, increase max_parallel_maintenance_workers, maintenance_work_mem\r\n\r\n Run CREATE INDEX … sql to leverage parallel create index feature.\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\nThanks,\r\n\r\n\r\n\r\nJames\r\n\r\n\r\n\r\n",
"msg_date": "Sat, 18 Jun 2022 04:00:14 +0000",
"msg_from": "\"James Pang (chaolpan)\" <chaolpan@cisco.com>",
"msg_from_op": true,
"msg_subject": "RE: reindex option for tuning load large data"
},
{
"msg_contents": "On Fri, Jun 17, 2022 at 1:34 AM James Pang (chaolpan) <chaolpan@cisco.com>\nwrote:\n\n> Hi ,\n>\n> We plan to migrate large database from Oracle to Postgres(version 13.6,\n> OS Redhat8 Enterprise), we are checking options to make data load in\n> Postgres fast. Data volume is about several TB, thousands of indexes,\n> many large table with partitions. We want to make data load running fast\n> and avoid miss any indexes when reindexing. There are 2 options about\n> reindex. Could you give some suggestions about the 2 options, which option\n> is better.\n>\n>\n>\n> 1. Create tables and indexes( empty database) , update pg_index set\n> indisready=false and inisvalid=false, then load data use COPY from csv ,\n> then reindex table …\n>\n>\nWhere did this idea come from? This is likely to destroy your database.\n\n\n> 2). Use pg_dump to dump meta data only , then copy “CREATE INDEX … sql “\n>\n> Drop indexes before data load\n>\n> After data load, increase max_parallel_maintenance_workers,\n> maintenance_work_mem\n>\n> Run CREATE INDEX … sql to leverage parallel create index feature.\n>\n\npg_dump doesn't run against Oracle, so where is the thing you are running\npg_dump against coming from?\n\nIf you already have a fleshed out schema in PostgreSQL, you should dump the\nsections separately (with --section=pre-data and --section=post-data) to\nget the commands to build the objects which should be run before and after\nthe data is loaded.\n\nCheers,\n\nJeff\n\n>\n",
"msg_date": "Sat, 18 Jun 2022 16:01:00 -0400",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: reindex option for tuning load large data"
},
{
"msg_contents": "We extracted data from Oracle to csv first, already convert schema objects from Oracle to Postgresql too. Then use COPY from csv to Postgres.\r\nThe point is about the 2 options to how to make the data load fast, pg_dump only used to dump metadata in Postgres to rebuild index and recreate constraints.\r\n The questions is instead of drop index and create index, we check update pg_index set indisready=false and reindex again after load.\r\n\r\nFrom: Jeff Janes <jeff.janes@gmail.com>\r\nSent: Sunday, June 19, 2022 4:01 AM\r\nTo: James Pang (chaolpan) <chaolpan@cisco.com>\r\nCc: pgsql-performance@lists.postgresql.org\r\nSubject: Re: reindex option for tuning load large data\r\n\r\n\r\n\r\nOn Fri, Jun 17, 2022 at 1:34 AM James Pang (chaolpan) <chaolpan@cisco.com<mailto:chaolpan@cisco.com>> wrote:\r\nHi ,\r\n\r\n We plan to migrate large database from Oracle to Postgres(version 13.6, OS Redhat8 Enterprise), we are checking options to make data load in Postgres fast. Data volume is about several TB, thousands of indexes, many large table with partitions. We want to make data load running fast and avoid miss any indexes when reindexing. There are 2 options about reindex. Could you give some suggestions about the 2 options, which option is better.\r\n\r\n\r\n 1. Create tables and indexes( empty database) , update pg_index set indisready=false and inisvalid=false, then load data use COPY from csv , then reindex table …\r\n\r\nWhere did this idea come from? This is likely to destroy your database.\r\n\r\n2). 
Use pg_dump to dump meta data only , then copy “CREATE INDEX … sql “\r\n Drop indexes before data load\r\n After data load, increase max_parallel_maintenance_workers, maintenance_work_mem\r\n\r\n Run CREATE INDEX … sql to leverage parallel create index feature.\r\n\r\npg_dump doesn't run against Oracle, so where is the thing you are running pg_dump against coming from?\r\n\r\nIf you already have a fleshed out schema in PostgreSQL, you should dump the sections separately (with --section=pre-data and --section=post-data) to get the commands to build the objects which should be run before and after the data is loaded.\r\n\r\nCheers,\r\n\r\nJeff\r\n",
"msg_date": "Sun, 19 Jun 2022 07:16:26 +0000",
"msg_from": "\"James Pang (chaolpan)\" <chaolpan@cisco.com>",
"msg_from_op": true,
"msg_subject": "RE: reindex option for tuning load large data"
},
{
"msg_contents": "Hi James,\n\nYou should be using pgloader.\n\nRegards,\nIantcho\n\nOn Sun, Jun 19, 2022, 10:16 James Pang (chaolpan) <chaolpan@cisco.com>\nwrote:\n\n> We extracted data from Oracle to csv first, already convert schema objects\n> from Oracle to Postgresql too. Then use COPY from csv to Postgres.\n>\n> The point is about the 2 options to how to make the data load fast,\n> pg_dump only used to dump metadata in Postgres to rebuild index and\n> recreate constraints.\n>\n> The questions is instead of drop index and create index, we check\n> update pg_index set indisready=false and reindex again after load.\n>\n>\n>\n> *From:* Jeff Janes <jeff.janes@gmail.com>\n> *Sent:* Sunday, June 19, 2022 4:01 AM\n> *To:* James Pang (chaolpan) <chaolpan@cisco.com>\n> *Cc:* pgsql-performance@lists.postgresql.org\n> *Subject:* Re: reindex option for tuning load large data\n>\n>\n>\n>\n>\n>\n>\n> On Fri, Jun 17, 2022 at 1:34 AM James Pang (chaolpan) <chaolpan@cisco.com>\n> wrote:\n>\n> Hi ,\n>\n> We plan to migrate large database from Oracle to Postgres(version 13.6,\n> OS Redhat8 Enterprise), we are checking options to make data load in\n> Postgres fast. Data volume is about several TB, thousands of indexes,\n> many large table with partitions. We want to make data load running fast\n> and avoid miss any indexes when reindexing. There are 2 options about\n> reindex. Could you give some suggestions about the 2 options, which option\n> is better.\n>\n>\n>\n> 1. Create tables and indexes( empty database) , update pg_index set\n> indisready=false and inisvalid=false, then load data use COPY from csv ,\n> then reindex table …\n>\n>\n>\n> Where did this idea come from? This is likely to destroy your database.\n>\n>\n>\n> 2). 
Use pg_dump to dump meta data only , then copy “CREATE INDEX … sql “\n>\n>    Drop indexes before data load\n>\n>    After data load, increase max_parallel_maintenance_workers,\n> maintenance_work_mem\n>\n>    Run CREATE INDEX … sql  to leverage parallel create index feature.\n>\n>\n>\n> pg_dump doesn't run against Oracle, so where is the thing you are running\n> pg_dump against coming from?\n>\n>\n>\n> If you already have a fleshed out schema in PostgreSQL, you should dump\n> the sections separately (with --section=pre-data and --section=post-data)\n> to get the commands to build the objects which should be run before and\n> after the data is loaded.\n>\n>\n>\n> Cheers,\n>\n>\n>\n> Jeff\n>",
"msg_date": "Sun, 19 Jun 2022 11:46:53 +0300",
"msg_from": "\"I. V.\" <ianchov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: reindex option for tuning load large data"
}
] |
[
{
"msg_contents": "Hi,\n   We have a table have range partition (about 5K partitions) , when\nExplain select count(*) from table where partitionkey between to_timestamp() and to_timestamp();\nIt show\nAggregate  (cost=15594.72..15594.73 rows=1 width=8)\n   ->  Append  (cost=0.15..15582.00 rows=5088 width=0)\n         Subplans Removed: 5086\n\nBut when\nExplain update table set .. where partitionkey between to_timestamp() and to_timestamp();\n  It still show all of partitions with update ...\n\nenable_partition_pruning keep defaut value 'on',  It's expected ? and we found for update sql with same where condition, it consumes huge memory than select.\nDatabase version is Postgres 13.4 on RHEL8.4.\n\n\nThanks,\n\nJames",
"msg_date": "Tue, 28 Jun 2022 13:15:42 +0000",
"msg_from": "\"James Pang (chaolpan)\" <chaolpan@cisco.com>",
"msg_from_op": true,
"msg_subject": "partition pruning only works for select but update "
},
{
"msg_contents": "\"James Pang (chaolpan)\" <chaolpan@cisco.com> writes:\n> But when\n> Explain update table set .. where partitionkey between to_timestamp() and to_timestamp();\n> It still show all of partitions with update ...\n\nIn releases before v14, partition pruning is far stupider for UPDATE\n(and DELETE) than it is for SELECT.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 28 Jun 2022 09:30:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: partition pruning only works for select but update"
},
{
"msg_contents": "For release v14, optimizer can handle large partition counts query ( select ,update ,delete) and partition pruning is similar as SELECT, right? We will check option to upgrade to v14.\n\nThanks,\n\nJames\n\n-----Original Message-----\nFrom: Tom Lane <tgl@sss.pgh.pa.us> \nSent: Tuesday, June 28, 2022 9:30 PM\nTo: James Pang (chaolpan) <chaolpan@cisco.com>\nCc: pgsql-performance@lists.postgresql.org\nSubject: Re: partition pruning only works for select but update\n\n\"James Pang (chaolpan)\" <chaolpan@cisco.com> writes:\n> But when\n> Explain update table set .. where partitionkey between to_timestamp() and to_timestamp();\n> It still show all of partitions with update ...\n\nIn releases before v14, partition pruning is far stupider for UPDATE (and DELETE) than it is for SELECT.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 28 Jun 2022 13:34:18 +0000",
"msg_from": "\"James Pang (chaolpan)\" <chaolpan@cisco.com>",
"msg_from_op": true,
"msg_subject": "RE: partition pruning only works for select but update"
},
{
"msg_contents": "We have other application depend on V13, possible to backport code changes to V13 as https://git.postgresql.org/gitweb/?p=postgresql.git;a=patch;h=86dc90056dfdbd9d1b891718d2e5614e3e432f35\n\nThanks,\n\nJames\n\n-----Original Message-----\nFrom: Tom Lane <tgl@sss.pgh.pa.us> \nSent: Tuesday, June 28, 2022 9:30 PM\nTo: James Pang (chaolpan) <chaolpan@cisco.com>\nCc: pgsql-performance@lists.postgresql.org\nSubject: Re: partition pruning only works for select but update\n\n\"James Pang (chaolpan)\" <chaolpan@cisco.com> writes:\n> But when\n> Explain update table set .. where partitionkey between to_timestamp() and to_timestamp();\n> It still show all of partitions with update ...\n\nIn releases before v14, partition pruning is far stupider for UPDATE (and DELETE) than it is for SELECT.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 1 Jul 2022 08:30:40 +0000",
"msg_from": "\"James Pang (chaolpan)\" <chaolpan@cisco.com>",
"msg_from_op": true,
"msg_subject": "RE: partition pruning only works for select but update"
},
{
"msg_contents": "On Fri, Jul 01, 2022 at 08:30:40AM +0000, James Pang (chaolpan) wrote:\n> We have other application depend on V13, possible to backport code changes to V13 as https://git.postgresql.org/gitweb/?p=postgresql.git;a=patch;h=86dc90056dfdbd9d1b891718d2e5614e3e432f35\n\nDo you mean that the other application needs to be updated to work with v14?\nOr that you haven't checked yet if they work with v14?\n\nIn any case, I'm sure the feature won't be backpatched to v13 - it's an\nimprovement but not a bugfix.\n\n-- \nJustin\n\n> -----Original Message-----\n> From: Tom Lane <tgl@sss.pgh.pa.us> \n> Sent: Tuesday, June 28, 2022 9:30 PM\n> To: James Pang (chaolpan) <chaolpan@cisco.com>\n> Cc: pgsql-performance@lists.postgresql.org\n> Subject: Re: partition pruning only works for select but update\n> \n> \"James Pang (chaolpan)\" <chaolpan@cisco.com> writes:\n> > But when\n> > Explain update table set .. where partitionkey between to_timestamp() and to_timestamp();\n> > It still show all of partitions with update ...\n> \n> In releases before v14, partition pruning is far stupider for UPDATE (and DELETE) than it is for SELECT.\n\n\n",
"msg_date": "Fri, 1 Jul 2022 08:13:11 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: partition pruning only works for select but update"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Fri, Jul 01, 2022 at 08:30:40AM +0000, James Pang (chaolpan) wrote:\n>> We have other application depend on V13, possible to backport code changes to V13 as https://git.postgresql.org/gitweb/?p=postgresql.git;a=patch;h=86dc90056dfdbd9d1b891718d2e5614e3e432f35\n\n> In any case, I'm sure the feature won't be backpatched to v13 - it's an\n> improvement but not a bugfix.\n\nEven more to the point, it was an extremely major change and would take\na huge amount of QA effort to ensure that dropping it into v13 wouldn't\ncause fresh problems. The PG community has exactly no interest in making\nsuch effort.\n\nBesides which, what do you imagine \"depends on v13\" actually means?\nIf you have an app that works on v13 but not v14, maybe it's because\nit depends on the old behavior in some way.\n\nSpend your effort on updating your app, instead.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 01 Jul 2022 09:18:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: partition pruning only works for select but update"
},
{
"msg_contents": " Thanks for your quick response, yes, pretty complicated logical replication between databases for our system, the logical replication tool just support V13 until now and it take long time for this vendor to support V14. We just migrate from Oracle to Postgres, a lot of partition tables and huge table data in partition tables too, and we see big difference with part SQL response time between Oracle and PGv13, especially for \"updates\" on tables with many partitions(>2k partitions). \n I will share update and push replication tool and old app support V14 as a best way to improve partition table update for large tables, and as first step of tuning , we try to reduce partition count for these tables in PGV13.\n\nJames \n\n-----Original Message-----\nFrom: Tom Lane <tgl@sss.pgh.pa.us> \nSent: Friday, July 1, 2022 9:18 PM\nTo: Justin Pryzby <pryzby@telsasoft.com>\nCc: James Pang (chaolpan) <chaolpan@cisco.com>; pgsql-performance@lists.postgresql.org\nSubject: Re: partition pruning only works for select but update\n\nJustin Pryzby <pryzby@telsasoft.com> writes:\n> On Fri, Jul 01, 2022 at 08:30:40AM +0000, James Pang (chaolpan) wrote:\n>> We have other application depend on V13, possible to backport code \n>> changes to V13 as \n>> https://git.postgresql.org/gitweb/?p=postgresql.git;a=patch;h=86dc900\n>> 56dfdbd9d1b891718d2e5614e3e432f35\n\n> In any case, I'm sure the feature won't be backpatched to v13 - it's \n> an improvement but not a bugfix.\n\nEven more to the point, it was an extremely major change and would take a huge amount of QA effort to ensure that dropping it into v13 wouldn't cause fresh problems. The PG community has exactly no interest in making such effort.\n\nBesides which, what do you imagine \"depends on v13\" actually means?\nIf you have an app that works on v13 but not v14, maybe it's because it depends on the old behavior in some way.\n\nSpend your effort on updating your app, instead.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 1 Jul 2022 14:06:11 +0000",
"msg_from": "\"James Pang (chaolpan)\" <chaolpan@cisco.com>",
"msg_from_op": true,
"msg_subject": "RE: partition pruning only works for select but update"
}
] |
[
{
"msg_contents": "Hi all,\n\nI have a performance issue that I would really appreciate if somebody \ncould help me better understand and investigate. I experience \nfluctuating performance of combined updates and inserts, seemingly \nfollowing a pattern which isn't immediately obvious.\n\nIn short I'm running PostgreSQL 14.1 on Linux on a small test machine \nwith 16 GB ram. Postgres is configured with shared_buffers = 4GB, \nmax_wal_size = 1GB.\n\nThe database contains a table with 14 columns and 100000 rows. The total \nsize of the table according to pg_total_relation_size is 20MB (so \nbasically nothing) and the table has no indexes, defaults or \nconstraints.\n\nThe table has one \"before update, on each row\"-trigger where the trigger \nfunction does an insert in the table and then lets the update complete \nby replacing NEW with OLD with one column modified. Each update \ntherefore becomes an insert immediately followed by an update.\n\nThere is only a single client which is written in Java and it runs on \nthe same machine as the database. It generates a reproducible load \nconsisting mainly of updates of two columns in single rows with a few \ninserts mixed in. The inserts and updates are grouped together in \ntransactions of currently 20000 operations.\n\nInserts are always fast. As measured by an imprecise millisecond counter \nthey consistently take 0-1 ms.\n\nUpdates (that as mentioned above also cause an insert) are in phases \nfast, 0-1 ms, and in phases mainly slow, about 10 ms. Performance starts \nout fine, but then it seems that something happens that \"flips a switch\" \ncausing the updates to become slow for a while. A bit later they speed \nup again, and the pattern repeats. When the updates are slow about 1 in \n10 is fast, but it is highly irregular when that happens.\n\nWhat puzzles me is that each time I run the test load against the table \nit's always the exact same number of inserts/updates that happen in the \nfast and slow phases. 
At first 56 pure inserts mixed with 1531 fast \nupdates, then 71 inserts and 606 slow updates, then 33 inserts and 471 \nfast updates etc. In other words, the time to do an update follows an \nirregular square wave, where the \"wavelength\" (fast and slow phases) is \nabout 200-1500 updates. Apparently the flips between fast and slow keep \nhappening so there's no steady state.\n\nThe fact that the flips always happen after the same number of \ninserts/updates makes me think that the underlying reason must be pretty \ndeterministic but there is no immediately discernible structure in the \nload at the times when the update performance slows down. When it speeds \nup again it is seemingly always at a point where at least 3 inserts are \nexecuted right after each other, but that may be a coincidence.\n\nIs there any feasible way to find out what it is that causes Postgres to \nstart doing slow updates? My guess would be a buffer filling up or \nsomething similar, but the regularity between runs paired with the \nirregular lengths of the fast and slow phases in each run doesn't really \nseem to fit with this.\n\nBest regards & thanks,\n Mikkel Lauritsen\n\n\n",
"msg_date": "Wed, 29 Jun 2022 21:31:58 +0200",
"msg_from": "Mikkel Lauritsen <renard@tala.dk>",
"msg_from_op": true,
"msg_subject": "Fluctuating performance of updates on small table with trigger"
},
{
"msg_contents": "On Wed, Jun 29, 2022 at 09:31:58PM +0200, Mikkel Lauritsen wrote:\n> In short I'm running PostgreSQL 14.1 on Linux on a small test machine with\n\nshould try to upgrade to 14.4, for $reasons\n\n> Is there any feasible way to find out what it is that causes Postgres to\n> start doing slow updates? My guess would be a buffer filling up or something\n> similar, but the regularity between runs paired with the irregular lengths\n> of the fast and slow phases in each run doesn't really seem to fit with\n> this.\n\nSet log_checkpoints=on, log_autovacuum_min_duration=0, log_lock_waits=on, and\nenable autoexplain with auto_explain.log_nested_statements=on.\n\nThen see what's in the logs when that happens.\n\n@hackers: the first two of those are enabled by default in 15dev, and this\ninquiry seems to support that change.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 29 Jun 2022 14:52:33 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Fluctuating performance of updates on small table with trigger"
}
] |
[
{
"msg_contents": "Hi,\n\nI am running Postgres 13 on CentOS 7, installed from the yum.postgresql.org <http://yum.postgresql.org/> repo.\n\nI have 3 servers in different sites, with asynchronous replication managed by pgpool 4.0.11. Highest latency is 16ms RTT - but shouldn’t matter as it’s async - but my application is running many concurrent connections to keep throughput up when latency is high - as the queries are typically very fast.\n\nServers have 32GB RAM and 4 cores.\n\nmax_connections = 256\nmax_locks_per_transaction = 256\nmax_stack_depth = 2MB\nmax_wal_senders = 5\nmax_wal_size = 1GB\nmin_wal_size = 80MB\nshared_buffers = 10GB\n\nI am storing internet usage data and attributes from RADIUS messages.\nOur ingest does not come directly from the RADIUS server - instead it comes via rabbitmq which allows the database performance to drop without impacting our RADIUS services.\n\nI am making pretty heavy use of partitions to manage this. The data usage accounting is stored in a partition per hour, which runs through daily aggregation - aggregating usage older than 3 to usage per day, rather than usage per hour. At this point, we have a partition per day, rather than per hour.\nI am doing this, because we want hourly usage for 3 days, and RADIUS data comes in as a running total rather than a delta per some interval, and we want to have the deltas available. For each incoming record, we look at previous hours to find a previous total, and calculate a delta from that - this means we are regularly (hundred of times per second) looking at the last 2 hours. 
Partitioning per hour lets us manage that.\nI partition on an “interval_start” timestamp column.\n\nThe aggregation process which runs each night at some time after midnight does the following, in a transaction:\n1) Create a new “full day” partition for 3 days ago.\n2) Look at hourly data older than 3 days (so, across 24 partitions) and calculate totals for that internet session on that day and insert in to the “full day” table.\n3) Detach the hourly partitions\n4) Attach the full day partition\n5) Drop the hourly partitions\n\nThis process takes around 12s to run - aggregating around 16.8M rows in to 700k rows.\nEach partition is around 70MB.\n\nThrough this process, data continues to be ingested - but as the data being aggregated is 3+ days old, and the ingest process is only looking at current partitions (i.e. last couple hours), we don’t have any conflicts here.\n\n\n\nAdditionally (and I think less importantly so skip this if it’s getting long winded) we store all RADIUS attributes from the RADIUS messages as jsonb, doing deduplication - most RADIUS messages (apart from the usage, session timers, etc.) are static, so we have a table of those “static” attributes in a jsonb column, and a table of events matching timestamps to sets of RADIUS messages.\nThese tables also make use of partitioning - we have hot, cold, and frozen data so that only the hot and cold indexes are in memory most of the time - the “frozen” data is very large (100s of GB) so is only used for reporting.\nThere are a couple of processes which run over this data periodically:\n1) Move data from hot -> cold -> frozen as it ages\n2) Aggregate “events” (i.e. RADIUS message timestamp to attributes) together so that longer term we don’t have a row per message, and rather only a row each time the attributes for a RADIUS session changes. This means there is always dead rows in this table, but they regularly get re-used.\nThis appears to work very well, these processes run every 5 mins or so. 
The event aggregation process drops around 180k rows.\n\n\n\nThe issue I am having, is that when the daily data usage aggregation runs, sometimes we have a big performance impact, with the following characteristics which happen *after* the aggregation job runs in it usual fast time of 12s or so:\n- The aggregation runs fast as per normal\n- Load on the server goes to 30-40 - recall we have quite high “max connections” to keep throughput high when the client is far (16ms) from the server\n- IOWait drops to ~0% (it’s usually at around 1-2%) but actual disk IO rates seem approx normal\n- User and System CPU increase to a total of 100% - ~86% and ~14% respectively\n- Processing time for RADIUS messages increases, and a big processing backlog builds\n- Swap is not used - it is enabled, but very low swap IO\n- Memory usage does not change\n\nIf I stop the ingest process briefly, then start it up again, the problem goes away - the database server drops to 0% CPU, then after starting the ingest process the backlog clears very rapidly and performance is back to normal.\n\n\nThis happens maybe once or twice a week - it’s not every day. It’s not on specific days each week.\nThere is a vague correlation with other aggregation jobs (i.e. event aggregation mentioned above) running immediately after the daily data usage aggregation. Only one of these jobs runs at once - so if another scheduled job wants to run, it will run immediately after whatever is already running.\n\n\nI am wondering if there’s some sort of problem where we drop all these partitions, and postgres needs to do some work internally to free the space up or something, but has a hard time doing so with all the updates going on?\nI am not clear why this only happens some days - I am working on seeing if I can firm up (or rule out) the correlation with other aggregation jobs running immediately afterwards.\n\n\nCan anyone recommend some things to look at here? 
I’ve got quite a bit of metrics collected every minute - per-table io (i.e. hit/read), index sizes, table sizes, etc. - however everything there seems “normal” for the slow ingest rate when the issue occurs, so it’s hard to differentiate between cause and symptoms in those metrics.\n\n\nI have bumped up effective_cache_size from default of 4GB to 16GB since this last happened - but given IO doesn’t appear to be an issue, I don’t think this will have too much effect.\n\n--\nNathan Ward",
"msg_date": "Sun, 10 Jul 2022 16:55:34 +1200",
"msg_from": "Nathan Ward <lists+postgresql@daork.net>",
"msg_from_op": true,
"msg_subject": "Occasional performance issue after changing table partitions"
},
{
"msg_contents": "On Sun, Jul 10, 2022 at 04:55:34PM +1200, Nathan Ward wrote:\n> I am running Postgres 13 on CentOS 7, installed from the yum.postgresql.org <http://yum.postgresql.org/> repo.\n\nIt doesn't sound relevant, but what kind of storage systems is postgres using ?\nFilesystem, raid, device.\n\nIs the high CPU use related to to autovacuum/autoanalyze ?\n\n> The issue I am having, is that when the daily data usage aggregation runs, sometimes we have a big performance impact, with the following characteristics which happen *after* the aggregation job runs in it usual fast time of 12s or so:\n> - The aggregation runs fast as per normal\n> - Load on the server goes to 30-40 - recall we have quite high “max connections” to keep throughput high when the client is far (16ms) from the server\n\nI suggest to install and enable autoexplain to see what's running slowly here,\nand what its query plans are. It seems possible that when the daily\naggregation script drops the old partitions, the plan changes for the worse.\nI'm not sure what the fix is - maybe you just need to run vacuum or analyze on\nthe new partitions soon after populating them.\n\nFor good measure, also set log_autovacuum_min_duration=0 (or something other\nthan -1) (and while you're at it, log_checkpoints=on, and log_lock_waits=on if\nyou haven't already).\n\nNote that postgres doesn't automatically analyze parent tables, so you should\nmaybe do that whenever the data changes enough for it to matter.\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 10 Jul 2022 09:05:20 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Occasional performance issue after changing table partitions"
},
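The logging settings suggested above can be enabled roughly like this (a sketch: auto_explain must also be loaded, e.g. via shared_preload_libraries, and the auto_explain threshold below is an illustrative value, not one from the thread):

```sql
ALTER SYSTEM SET log_autovacuum_min_duration = 0;   -- log every autovacuum/autoanalyze run
ALTER SYSTEM SET log_checkpoints = on;
ALTER SYSTEM SET log_lock_waits = on;               -- logs waits longer than deadlock_timeout
ALTER SYSTEM SET shared_preload_libraries = 'auto_explain';  -- takes effect after a restart
ALTER SYSTEM SET auto_explain.log_min_duration = '250ms';    -- illustrative threshold
SELECT pg_reload_conf();
```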
{
"msg_contents": "\n> On 11/07/2022, at 2:05 AM, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> On Sun, Jul 10, 2022 at 04:55:34PM +1200, Nathan Ward wrote:\n>> I am running Postgres 13 on CentOS 7, installed from the yum.postgresql.org <http://yum.postgresql.org/> repo.\n> \n> It doesn't sound relevant, but what kind of storage systems is postgres using ?\n> Filesystem, raid, device.\n\nIt’s an NVME SSD backed SAN over 2x16G fibre channel. The postgres server is in a VM (vmware). I can pretty comfortably do 10Gbit/s or more to the disk (I’ve only personally tested that, because initial replication from other sites runs at around that sort of speed limited by ethernet interfaces). The normal IO is between 1MB/s and maybe 15MB/s - writes only. Reads are pretty minimal.\nFS is XFS.\n\n> Is the high CPU use related to to autovacuum/autoanalyze ?\n\nGood question - I don’t know. I’ve set the server to debug1 log level so I can see that - I see you have some notes below about autovacuum logs so I’ll see what that shows me.\nSince setting that log level I haven’t yet had the issue occur - I watched it tick over midnight last night and it was normal (as it is, most days).\n\n>> The issue I am having, is that when the daily data usage aggregation runs, sometimes we have a big performance impact, with the following characteristics which happen *after* the aggregation job runs in it usual fast time of 12s or so:\n>> - The aggregation runs fast as per normal\n>> - Load on the server goes to 30-40 - recall we have quite high “max connections” to keep throughput high when the client is far (16ms) from the server\n> \n> I suggest to install and enable autoexplain to see what's running slowly here,\n> and what its query plans are. 
It seems possible that when the daily\n> aggregation script drops the old partitions, the plan changes for the worse.\n> I'm not sure what the fix is - maybe you just need to run vacuum or analyze on\n> the new partitions soon after populating them.\n\nHmm, I’ll check it out. I hadn’t thought that the query planner could be doing something different, that’s a good point.\n\nNote that the normal data ingest queries don’t hit the new partition - the new partition is for data 3 days ago, and the ingest queries only hit the partitions covering the last ~2 hours.\n\n> For good measure, also set log_autovacuum_min_duration=0 (or something other\n> than -1) (and while you're at it, log_checkpoints=on, and log_lock_waits=on if\n> you haven't already).\n\nWilco.\n\n> Note that postgres doesn't automatically analyze parent tables, so you should\n> maybe do that whenever the data changes enough for it to matter.\n\nHmm. This raises some stuff I’m not familiar with - does analysing a parent table do anything? I got the impression that analysing the parent was just shorthand for analysing all of the attached partitions.\n\nPerhaps because I attach a table with data, the parent sometimes decides it needs to run analyse on a bunch of things?\nOr, maybe it uses the most recently attached partition, with bad statistics, to plan queries that only touch other partitions?\n\n--\nNathan Ward\n\n\n\n",
"msg_date": "Mon, 11 Jul 2022 15:21:38 +1200",
"msg_from": "Nathan Ward <lists+postgresql@daork.net>",
"msg_from_op": true,
"msg_subject": "Re: Occasional performance issue after changing table partitions"
},
{
"msg_contents": "On Mon, Jul 11, 2022 at 03:21:38PM +1200, Nathan Ward wrote:\n> > Note that postgres doesn't automatically analyze parent tables, so you should\n> > maybe do that whenever the data changes enough for it to matter.\n> \n> Hmm. This raises some stuff I’m not familiar with - does analysing a parent table do anything?\n\nYes\n\nYou could check if you have stats now (maybe due to a global ANALYZE or\nanalyzedb) and how the query plans change if you analyze.\nThe transaction may be overly conservative.\n\nSELECT COUNT(1) FROM pg_stats WHERE tablename=PARENT;\nSELECT last_analyze, last_autoanalyze, relname FROM pg_stat_all_tables WHERE relname=PARENT;\nbegin;\nSET default_statistics_target=10;\nANALYZE;\nexplain SELECT [...];\nrollback;\n\n> I got the impression that analysing the parent was just shorthand for analysing all of the attached partitions.\n\nCould you let us know if the documentation left that impression ?\n\nSee here (this was updated recently).\n\nhttps://www.postgresql.org/docs/13/sql-analyze.html#id-1.9.3.46.8\n\nFor partitioned tables, ANALYZE gathers statistics by sampling rows from all partitions; in addition, it will recurse into each partition and update its statistics. Each leaf partition is analyzed only once, even with multi-level partitioning. No statistics are collected for only the parent table (without data from its partitions), because with partitioning it's guaranteed to be empty.\n\nBy contrast, if the table being analyzed has inheritance children, ANALYZE gathers two sets of statistics: one on the rows of the parent table only, and a second including rows of both the parent table and all of its children. This second set of statistics is needed when planning queries that process the inheritance tree as a whole. The child tables themselves are not individually analyzed in this case.\n\nThe autovacuum daemon does not process partitioned tables, nor does it process inheritance parents if only the children are ever modified. 
It is usually necessary to periodically run a manual ANALYZE to keep the statistics of the table hierarchy up to date.\n\n> Perhaps because I attach a table with data, the parent sometimes decides it needs to run analyse on a bunch of things?\n\nNo, that doesn't happen.\n\n> Or, maybe it uses the most recently attached partition, with bad statistics, to plan queries that only touch other partitions?\n\nThis is closer to what I was talking about.\n\nTo be clear, you are using relkind=p partitions (added in v10), and not\ninheritance parents, right ?\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 10 Jul 2022 23:05:28 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Occasional performance issue after changing table partitions"
},
{
"msg_contents": "\n> On 11/07/2022, at 4:05 PM, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> On Mon, Jul 11, 2022 at 03:21:38PM +1200, Nathan Ward wrote:\n>>> Note that postgres doesn't automatically analyze parent tables, so you should\n>>> maybe do that whenever the data changes enough for it to matter.\n>> \n>> Hmm. This raises some stuff I’m not familiar with - does analysing a parent table do anything?\n> \n> Yes\n> \n> You could check if you have stats now (maybe due to a global ANALYZE or\n> analyzedb) and how the query plans change if you analyze.\n> The transaction may be overly conservative.\n> \n> SELECT COUNT(1) FROM pg_stats WHERE tablename=PARENT;\n> SELECT last_analyze, last_autoanalyze, relname FROM pg_stat_all_tables WHERE relname=PARENT;\n> begin;\n> SET default_statistics_target=10;\n> ANALYZE;\n> explain SELECT [...];\n> rollback;\n\nI have a development database which gets a mirror of about 50% of the data coming in, and ran a global ANALYZE earlier on - and note that the disk IO is actually a lot higher since which is interesting and not desirable obviously, so I have some more fiddling to do..\nThe behaviour during the ANALYZE was very similar to what happens on my production database when things go funny though, so, this feels like it’s getting me close.\n\nThe above is going to be a bit tricky to do I think - the ingest process runs a stored procedure, and behaviour varies quite a bit if I stick in synthetic values.\n\nI think probably my approach for now will be to turn on auto explain with some sampling, and see what happens.\n\n\nSide note, in the auto_explain docs, there is a note in a callout box saying that log_analyze has a high impact even if the query isn’t logged - if I use sampling, is this still the case - i.e. 
all queries are impacted - or is it only the sampled queries?\n\n>> I got the impression that analysing the parent was just shorthand for analysing all of the attached partitions.\n> \n> Could you let us know if the documentation left that impression ?\n> \n> See here (this was updated recently).\n> \n> https://www.postgresql.org/docs/13/sql-analyze.html#id-1.9.3.46.8\n> \n> For partitioned tables, ANALYZE gathers statistics by sampling rows from all partitions; in addition, it will recurse into each partition and update its statistics. Each leaf partition is analyzed only once, even with multi-level partitioning. No statistics are collected for only the parent table (without data from its partitions), because with partitioning it's guaranteed to be empty.\n> \n> By contrast, if the table being analyzed has inheritance children, ANALYZE gathers two sets of statistics: one on the rows of the parent table only, and a second including rows of both the parent table and all of its children. This second set of statistics is needed when planning queries that process the inheritance tree as a whole. The child tables themselves are not individually analyzed in this case.\n> \n> The autovacuum daemon does not process partitioned tables, nor does it process inheritance parents if only the children are ever modified. 
It is usually necessary to periodically run a manual ANALYZE to keep the statistics of the table hierarchy up to date.\n\n\nIt was this part:\n“””\nNo statistics are collected for *only* the parent table (without data from its partitions), because with partitioning it's guaranteed to be empty.\n“””\n\nEmphasis around “only” is mine - I think my brain skipped that word, but, it’s obviously critical.\n\nI also note this:\n“””\nIt is usually necessary to periodically run a manual ANALYZE to keep the statistics of the table hierarchy up to date.\n“””\nThis seems really important and is something I was entirely unaware of - maybe this should be in one of those callout boxes.\n\n\n>> Perhaps because I attach a table with data, the parent sometimes decides it needs to run analyse on a bunch of things?\n> \n> No, that doesn't happen.\n\nAck.\n\n>> Or, maybe it uses the most recently attached partition, with bad statistics, to plan queries that only touch other partitions?\n> \n> This is closer to what I was talking about.\n> \n> To be clear, you are using relkind=p partitions (added in v10), and not\n> inheritance parents, right ?\n\nYes, relkind=p.\n\n--\nNathan Ward\n\n\n\n",
"msg_date": "Mon, 11 Jul 2022 18:20:23 +1200",
"msg_from": "Nathan Ward <lists+postgresql@daork.net>",
"msg_from_op": true,
"msg_subject": "Re: Occasional performance issue after changing table partitions"
},
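For the sampling approach discussed above, the relevant auto_explain knobs look roughly like this (illustrative values; as I read the auto_explain documentation, the sampling decision is made before a statement runs, so with sample_rate below 1 only the sampled statements pay the log_analyze instrumentation cost):

```sql
ALTER SYSTEM SET auto_explain.log_min_duration = '5ms';    -- illustrative
ALTER SYSTEM SET auto_explain.log_analyze = on;            -- include actual rows/timings
ALTER SYSTEM SET auto_explain.log_nested_statements = on;  -- see statements inside the ingest function
ALTER SYSTEM SET auto_explain.sample_rate = 0.1;           -- explain ~10% of statements
SELECT pg_reload_conf();
```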
{
"msg_contents": "Hi,\n\nI haven’t caught the issue yet with this debug etc. in place, but auto_explain (and some pg_stat_statements poking about) has helped me find something interesting that might(?) be related.\n\nMy data ingest is in 2 functions, depending on the type of data:\n- RADIUS data with usage info\n- RADIUS data without usage info\n\nThe functions are the largely same, except the one with usage info has to go and work with a table with lots of partitions (444, right at the moment).\n\nThe function that works with the usage info is *significantly* slower.\nOne thing I have specifically noticed is that the run time for the total function doesn’t add up to the run time of each of the of the nested statements. Not even close. It’s around 16ms on average (both in pg_stat_statements and in the auto_explain), but the nested statements add up to around 1-2ms or so - which I think means the planner is the culprit here.\n\nI have been stepping through the various statements which are different between the two functions, and note that when I do math on a timestamp in a SELECT statement (i.e. 
_event_timestamp - INTERVAL ‘1 hour’), the planner takes 50ms or so - note that the result of the timestamp is used to search the partition key.\nIf I declare a function which does the math in advance, stores it in a variable and then runs the SELECT, the planner takes less than 1ms.\n\nDoes this mean it’s calculating the timestamp for each partition, or something like that?\n\n\nI have updated the function in my production database to do this math in advance, and the mean time is down to around 6ms, from 16ms.\nThis is still longer than the actual statement execution times in the auto_explain output - which add up to around 1-2ms - but it’s better!\n\n\nI’ve also turned on pg_stat_statements.track_planning and will see what that looks like after some time.\n\n\nI see Postgres 14 release notes has information about performance improvements in the planner for updates on tables with \"many partitions”. Is 444 partitions “many”?\nMy updates are all impacting a single partition only.\n\n> On 11/07/2022, at 6:20 PM, Nathan Ward <lists+postgresql@daork.net> wrote:\n> \n>> \n>> On 11/07/2022, at 4:05 PM, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>> \n>> On Mon, Jul 11, 2022 at 03:21:38PM +1200, Nathan Ward wrote:\n>>>> Note that postgres doesn't automatically analyze parent tables, so you should\n>>>> maybe do that whenever the data changes enough for it to matter.\n>>> \n>>> Hmm. 
This raises some stuff I’m not familiar with - does analysing a parent table do anything?\n>> \n>> Yes\n>> \n>> You could check if you have stats now (maybe due to a global ANALYZE or\n>> analyzedb) and how the query plans change if you analyze.\n>> The transaction may be overly conservative.\n>> \n>> SELECT COUNT(1) FROM pg_stats WHERE tablename=PARENT;\n>> SELECT last_analyze, last_autoanalyze, relname FROM pg_stat_all_tables WHERE relname=PARENT;\n>> begin;\n>> SET default_statistics_target=10;\n>> ANALYZE;\n>> explain SELECT [...];\n>> rollback;\n> \n> I have a development database which gets a mirror of about 50% of the data coming in, and ran a global ANALYZE earlier on - and note that the disk IO is actually a lot higher since which is interesting and not desirable obviously, so I have some more fiddling to do..\n> The behaviour during the ANALYZE was very similar to what happens on my production database when things go funny though, so, this feels like it’s getting me close.\n> \n> The above is going to be a bit tricky to do I think - the ingest process runs a stored procedure, and behaviour varies quite a bit if I stick in synthetic values.\n> \n> I think probably my approach for now will be to turn on auto explain with some sampling, and see what happens.\n> \n> \n> Side note, in the auto_explain docs, there is a note in a callout box saying that log_analyze has a high impact even if the query isn’t logged - if I use sampling, is this still the case - i.e. 
all queries are impacted - or is it only the sampled queries?\n> \n>>> I got the impression that analysing the parent was just shorthand for analysing all of the attached partitions.\n>> \n>> Could you let us know if the documentation left that impression ?\n>> \n>> See here (this was updated recently).\n>> \n>> https://www.postgresql.org/docs/13/sql-analyze.html#id-1.9.3.46.8\n>> \n>> For partitioned tables, ANALYZE gathers statistics by sampling rows from all partitions; in addition, it will recurse into each partition and update its statistics. Each leaf partition is analyzed only once, even with multi-level partitioning. No statistics are collected for only the parent table (without data from its partitions), because with partitioning it's guaranteed to be empty.\n>> \n>> By contrast, if the table being analyzed has inheritance children, ANALYZE gathers two sets of statistics: one on the rows of the parent table only, and a second including rows of both the parent table and all of its children. This second set of statistics is needed when planning queries that process the inheritance tree as a whole. The child tables themselves are not individually analyzed in this case.\n>> \n>> The autovacuum daemon does not process partitioned tables, nor does it process inheritance parents if only the children are ever modified. 
It is usually necessary to periodically run a manual ANALYZE to keep the statistics of the table hierarchy up to date.\n> \n> \n> It was this part:\n> “””\n> No statistics are collected for *only* the parent table (without data from its partitions), because with partitioning it's guaranteed to be empty.\n> “””\n> \n> Emphasis around “only” is mine - I think my brain skipped that word, but, it’s obviously critical.\n> \n> I also note this:\n> “””\n> It is usually necessary to periodically run a manual ANALYZE to keep the statistics of the table hierarchy up to date.\n> “””\n> This seems really important and is something I was entirely unaware of - maybe this should be in one of those callout boxes.\n> \n> \n>>> Perhaps because I attach a table with data, the parent sometimes decides it needs to run analyse on a bunch of things?\n>> \n>> No, that doesn't happen.\n> \n> Ack.\n> \n>>> Or, maybe it uses the most recently attached partition, with bad statistics, to plan queries that only touch other partitions?\n>> \n>> This is closer to what I was talking about.\n>> \n>> To be clear, you are using relkind=p partitions (added in v10), and not\n>> inheritance parents, right ?\n> \n> Yes, relkind=p.\n> \n> --\n> Nathan Ward",
"msg_date": "Wed, 13 Jul 2022 03:13:46 +1200",
"msg_from": "Nathan Ward <lists+postgresql@daork.net>",
"msg_from_op": true,
"msg_subject": "Re: Occasional performance issue after changing table partitions"
},
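The "do the math in advance" change described above amounts to something like the following plpgsql sketch (table and column names are hypothetical; the point is that the planner sees a plain parameter on the partition key instead of an expression):

```sql
-- Before (sketch): the expression appears inside the statement
--   UPDATE usage_data SET bytes = bytes + _bytes
--   WHERE bucket_start = _event_timestamp - INTERVAL '1 hour';

-- After (sketch): compute once into a variable, then reference it
DECLARE
    _window_start timestamptz;
BEGIN
    _window_start := _event_timestamp - INTERVAL '1 hour';
    UPDATE usage_data SET bytes = bytes + _bytes
    WHERE bucket_start = _window_start;  -- bucket_start is the partition key
END;
```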
{
"msg_contents": "On Wed, Jul 13, 2022 at 03:13:46AM +1200, Nathan Ward wrote:\n> I have been stepping through the various statements which are different between the two functions, and note that when I do math on a timestamp in a SELECT statement (i.e. _event_timestamp - INTERVAL ‘1 hour’),\n> the planner takes 50ms or so - note that the result of the timestamp is used to search the partition key.\n> If I declare a function which does the math in advance, stores it in a variable and then runs the SELECT, the planner takes less than 1ms.\n> Does this mean it’s calculating the timestamp for each partition, or something like that?\n\nI'm not sure I understand what you're doing - the relevant parts of your\nfunction text and query plan would help here.\n\nMaybe auto_explain.log_nested_statements would be useful ?\n\nNote that \"partition pruning\" can happen even if you don't have a literal\nconstant. For example:\n|explain(costs off) SELECT * FROM metrics WHERE start_time > now()::timestamp - '1 days'::interval;\n| Append\n| Subplans Removed: 36\n\n> I see Postgres 14 release notes has information about performance improvements in the planner for updates on tables with \"many partitions”. Is 444 partitions “many”?\n> My updates are all impacting a single partition only.\n\nIt sounds like that'll certainly help you. Another option is to update the\npartition directly (which is what we do, to be able to use \"ON CONFLICT\").\n\nI think with \"old partitioning with inheritance\", more than a few hundred\npartitions was considered unreasonable, and plan-time suffered.\n\nWith relkind=p native/declarative partitioning, a few hundred is considered\nreasonable, and a few thousand is still considered excessive - even if the\nplanner time is no issue, you'll still run into problems like \"work-mem is\nper-node\", which works poorly when you might have 10x more nodes.\n\nTBH, this doesn't sound related to your original issue.\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 12 Jul 2022 10:43:10 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Occasional performance issue after changing table partitions"
}
] |
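The two workarounds Justin mentions in his closing message above — runtime partition pruning with a non-constant bound, and writing to the leaf partition directly so ON CONFLICT can be used — look roughly like this (a sketch; metrics is the example table from his message, while the leaf partition name and columns are hypothetical):

```sql
-- Runtime pruning works even without a literal constant;
-- EXPLAIN shows "Subplans Removed: N" for the pruned partitions.
EXPLAIN (COSTS OFF)
SELECT * FROM metrics
WHERE start_time > now()::timestamp - '1 days'::interval;

-- Upserting into the leaf partition directly (hypothetical names):
INSERT INTO metrics_2022_07_12 AS m (start_time, device_id, bytes)
VALUES ('2022-07-12 00:05', 42, 1000)
ON CONFLICT (start_time, device_id)
DO UPDATE SET bytes = m.bytes + EXCLUDED.bytes;
```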
[
{
"msg_contents": "Hi,\n\nPostgreSQLv14- compiled with LLVM-Clangv13 and GCCv11,And captured\nperformance using HammerDBv4.3-TPC-H.\nAnd Observed the functionality differences as LLVM-Clangv13-triggers\nheapgetpage instead XidInMVCCSnapshot or vice versa with GCC.\nI would like to know here the functionality differences triggered i.e.\nfunction call \"heapgetpage vs XidInMVCCSnapshot '' in GCC vs LLVM-Clangv13\nAnd also observed the performance difference GCC performing (Query\nexecution time (small value is better) better than LLVM-Clang 13 on same\nBareMetal with same H/W and DB configurations\n\n perf data top hot functions:\n LLVM-Clangv13:\n=============\n TPCH-Query-completed-50.526 seconds\n\n Overhead Symbol\n 19.41% [.] tts_buffer_heap_getsomeattrs\n * 17.75% [.] heapgetpage*\n 9.46% [.] bpchareq\n 5.86% [.] ExecEvalScalarArrayOp\n 5.85% [.] ExecInterpExpr\n 4.50% [.] ReadBuffer_common\n 3.02% [.] heap_getnextslot\n\n GCCv11\n =======\n\n TPCH-Query-completed-41.593 seconds\n\n 21.13% [.] tts_buffer_heap_getsomeattrs\n *11.58% [.] XidInMVCCSnapshot*\n 10.87% [.] bpchareq\n 7.07% [.] ExecEvalScalarArrayOp\n 5.93% [.] ExecInterpExpr\n 5.16% [.] ReadBuffer_common\n 3.61% [.] heapgetpage\n\nRegards\nArjun",
"msg_date": "Mon, 11 Jul 2022 13:19:51 +0530",
"msg_from": "arjun shetty <arjunshetty955@gmail.com>",
"msg_from_op": true,
"msg_subject": "functionality difference-performance postgreSQLv14-GCC-llvm-clang"
}
] |
[
{
"msg_contents": "Hi,\nI have one Oracle fdw table which is giving performance issue when joined\nlocal temp table gives performance issue.\n\nselect * from oracle_fdw_table where transaction_id in ( select\ntransaction_id from temp_table) ---- 54 seconds. Seeing HASH SEMI JOIN in\nEXPLAIN PLAN. temp_table has only 74 records.\n\n\nselect * from from oracle_fdw_table where transaction_id in (\n1,2,3,.....,75)--- 23ms.\n\n\nCould you please help me understand this drastic behaviour change?\n\nRegards,\nAditya.",
"msg_date": "Mon, 11 Jul 2022 17:38:34 +0530",
"msg_from": "aditya desai <admad123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Oracle_FDW table performance issue"
},
{
"msg_contents": "On Mon, Jul 11, 2022 at 05:38:34PM +0530, aditya desai wrote:\n> Hi,\n> I have one Oracle fdw table which is giving performance issue when joined\n> local temp table gives performance issue.\n> \n> select * from oracle_fdw_table where transaction_id in ( select\n> transaction_id from temp_table) ---- 54 seconds. Seeing HASH SEMI JOIN in\n> EXPLAIN PLAN. temp_table has only 74 records.\n\nYou'd have to share the plan\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nDo the tables have updated stats ?\n\n\n",
"msg_date": "Mon, 11 Jul 2022 07:13:08 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Oracle_FDW table performance issue"
},
{
"msg_contents": "Hi Justin,\nSorry unable to send a query plan from a closed network. Here the stats are\nupdated on the Oracle table.\n\n\nIt seems like when joining the local tables it is not filtering data on\nOracle and bringing data to postgres. It is filtering when we actually pass\nthe values.\n\n\nRegards,\nAditya.\n\nOn Mon, Jul 11, 2022 at 5:43 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Mon, Jul 11, 2022 at 05:38:34PM +0530, aditya desai wrote:\n> > Hi,\n> > I have one Oracle fdw table which is giving performance issue when joined\n> > local temp table gives performance issue.\n> >\n> > select * from oracle_fdw_table where transaction_id in ( select\n> > transaction_id from temp_table) ---- 54 seconds. Seeing HASH SEMI JOIN\n> in\n> > EXPLAIN PLAN. temp_table has only 74 records.\n>\n> You'd have to share the plan\n>\n> https://wiki.postgresql.org/wiki/Slow_Query_Questions\n>\n> Do the tables have updated stats ?\n>\n",
"msg_date": "Mon, 11 Jul 2022 17:52:40 +0530",
"msg_from": "aditya desai <admad123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Oracle_FDW table performance issue"
},
{
"msg_contents": "On Mon, 2022-07-11 at 17:38 +0530, aditya desai wrote:\n> I have one Oracle fdw table which is giving performance issue when joined\n> local temp table gives performance issue.\n> \n> select * from oracle_fdw_table where transaction_id in ( select transaction_id from temp_table)\n> ---- 54 seconds. Seeing HASH SEMI JOIN in EXPLAIN PLAN. temp_table has only 74 records.\n> \n> select * from from oracle_fdw_table where transaction_id in ( 1,2,3,.....,75)--- 23ms.\n> \n> Could you please help me understand this drastic behaviour change?\n\nThe first query joins a local table with a remote Oracle table. The only way for\nsuch a join to avoid fetching the whole Oracle table would be to have the foreign scan\non the inner side of a nested loop join. But that would incur many round trips to Oracle\nand is therefore perhaps not a great plan either.\n\nIn the second case, the whole IN list is shipped to the remote side.\n\nIn short, the queries are quite different, and I don't think it is possible to get\nthe first query to perform as well as the second.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Mon, 11 Jul 2022 17:26:30 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Oracle_FDW table performance issue"
},
{
    "msg_contents": "Understood thanks!! Will try to build dynamiq query to send ids\nacross instead of join.\n\nOn Mon, Jul 11, 2022 at 8:56 PM Laurenz Albe <laurenz.albe@cybertec.at>\nwrote:\n\n> On Mon, 2022-07-11 at 17:38 +0530, aditya desai wrote:\n> > I have one Oracle fdw table which is giving performance issue when joined\n> > local temp table gives performance issue.\n> >\n> > select * from oracle_fdw_table where transaction_id in ( select\n> transaction_id from temp_table)\n> > ---- 54 seconds. Seeing HASH SEMI JOIN in EXPLAIN PLAN. temp_table has\n> only 74 records.\n> >\n> > select * from from oracle_fdw_table where transaction_id in (\n> 1,2,3,.....,75)--- 23ms.\n> >\n> > Could you please help me understand this drastic behaviour change?\n>\n> The first query joins a local table with a remote Oracle table. The only\n> way for\n> such a join to avoid fetching the whole Oracle table would be to have the\n> foreign scan\n> on the inner side of a nested loop join. But that would incur many round\n> trips to Oracle\n> and is therefore perhaps not a great plan either.\n>\n> In the second case, the whole IN list is shipped to the remote side.\n>\n> In short, the queries are quite different, and I don't think it is\n> possible to get\n> the first query to perform as well as the second.\n>\n> Yours,\n> Laurenz Albe\n> --\n> Cybertec | https://www.cybertec-postgresql.com\n>\n\nUnderstood thanks!! Will try to build dynamiq query to send ids across instead of join.On Mon, Jul 11, 2022 at 8:56 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:On Mon, 2022-07-11 at 17:38 +0530, aditya desai wrote:\n> I have one Oracle fdw table which is giving performance issue when joined\n> local temp table gives performance issue.\n> \n> select * from oracle_fdw_table where transaction_id in ( select transaction_id from temp_table)\n> ---- 54 seconds. Seeing HASH SEMI JOIN in EXPLAIN PLAN. temp_table has only 74 records.\n> \n> select * from from oracle_fdw_table where transaction_id in ( 1,2,3,.....,75)--- 23ms.\n> \n> Could you please help me understand this drastic behaviour change?\n\nThe first query joins a local table with a remote Oracle table. The only way for\nsuch a join to avoid fetching the whole Oracle table would be to have the foreign scan\non the inner side of a nested loop join. But that would incur many round trips to Oracle\nand is therefore perhaps not a great plan either.\n\nIn the second case, the whole IN list is shipped to the remote side.\n\nIn short, the queries are quite different, and I don't think it is possible to get\nthe first query to perform as well as the second.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com",
"msg_date": "Mon, 11 Jul 2022 23:07:21 +0530",
"msg_from": "aditya desai <admad123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Oracle_FDW table performance issue"
}
] |
[
{
    "msg_contents": "I'd be grateful for some comments on the advisability of using a large\nnumber of concurrent logical replication publications/subscriptions.\nBelow I've set out the current environment and a suggested design.\nApologies for the length of this email.\n\nWe presently have many hundreds of small databases in a cluster, in\nseparate databases due to historical issues, security considerations and\nthe ability to scale horizontally in future by using more than one\ncluster. The cluster is presently less than half a terabyte in size and\nruns very comfortably on a 96GB RAM/32 core Intel E5-2620 server on NVMe\ndisks in RAID10 configuration on Linux.\n\nThe individual databases cover a handful of discrete services (although\nthey have some common data structures) and are of different sizes\ndepending on client needs. The largest client database is currently\nabout 7.5GB in size.\n\nWe presently use streaming replication locally and remotely to replicate\nthe cluster and it has pg_bouncer in front of it. Some settings:\n\n    max_connections: 500\n    shared_buffers: 20GB\n    work_mem: 15MB\n\nDue to changing client and operational requirements we need to aggregate\nsome common data between client databases in the same organisation into\nsingle read-only databases for reporting purposes. This new requirement\nis in addition to keeping the client databases in operation as they are\nnow. The potential for using logical replication comes to mind,\nspecifically the use case of \"Consolidating multiple databases into a\nsingle one\" mentioned at\nhttps://www.postgresql.org/docs/current/logical-replication.html\n\nSome tests suggest that we can meet the requirements for publication\ntable replica identity and safe aggregation of data.\n\nAt an overview level this consolidation might require the setup of\nlogical replication publications from say 500 client databases\naggregating in close to real time to 50 target or aggregation\nsubscribing databases, averaging roughly 10 client database per\naggregation database, but with some aggregation databases having roughly\n50 clients.\n\nI would be grateful for comments on the following design proposals:\n\n* to avoid overloading the existing primary host with many replication\n  slots, it would be wise to implement aggregation on another host \n\n* the new aggregation host should receive streaming replication data\n  from the primary on a first postgresql instance which will also have\n  logical replication publishers on each relevant client database. As\n  noted above, there may be ~500 publications\n\n* the new aggregation host would have a second postgresql instance\n  serving the aggregation databases each acting as logical\n  replication subscribers. As noted above, there would be ~50 target\n  databases each with an average of ~10 subscriptions to the first\n  postgresql instance.\n\n* that only one subscription per client database is needed (after\n  initial synchronisation) to synchronise all tables in a particular\n  client database schema\n\n* that publications and subscriptions are brought online on a per-client\n  database basis, to reduce the number of replication slots required due\n  to initial synchronisation (the docs aren't clear about how many\n  temporary replication slots may be needed \"for the initial data\n  synchronisation of pre-existing table data\"; see\n  https://www.postgresql.org/docs/current/logical-replication-subscription.html)\n\n* that a similar server to the one noted above can handle two postgresql\n  instances as described together with ~250 concurrent client\n  connections to the second instance to serve client reporting needs.\n\nThoughts gratefully received,\nRory\n\n\n",
"msg_date": "Sat, 16 Jul 2022 17:07:09 +0100",
"msg_from": "Rory Campbell-Lange <rory@campbell-lange.net>",
"msg_from_op": true,
"msg_subject": "data consolidation: logical replication design considerations"
},
{
"msg_contents": "On Sat, Jul 16, 2022 at 12:07 PM Rory Campbell-Lange <\nrory@campbell-lange.net> wrote:\n\n> I'd be grateful for some comments on the advisability of using a large\n> number of concurrent logical replication publications/subscriptions.\n> Below I've set out the current environment and a suggested design.\n> Apologies for the length of this email.\n>\n\nAnother possibility is to use SymmetricDS for this. [\nhttps://symmetricds.org ] SymmetricDS was originally developed to keep\ndatabases on thousands of Point-of-Sale databases (in cash registers) in\nsync with pricing and inventory data for large international retailers.\n\nThere are lots of other use cases, but even 10-12 years ago it was scalable\nto the extent you are describing you need here.\n\nThe main drawback is that it is trigger based, so there is some slight\nlatency introduced for insert/update/delete actions on the tables on the\nappropriate master, but it usually isn't significant.\n\nOn Sat, Jul 16, 2022 at 12:07 PM Rory Campbell-Lange <rory@campbell-lange.net> wrote:I'd be grateful for some comments on the advisability of using a large\nnumber of concurrent logical replication publications/subscriptions.\nBelow I've set out the current environment and a suggested design.\nApologies for the length of this email.Another possibility is to use SymmetricDS for this. [ https://symmetricds.org ] SymmetricDS was originally developed to keep databases on thousands of Point-of-Sale databases (in cash registers) in sync with pricing and inventory data for large international retailers.There are lots of other use cases, but even 10-12 years ago it was scalable to the extent you are describing you need here.The main drawback is that it is trigger based, so there is some slight latency introduced for insert/update/delete actions on the tables on the appropriate master, but it usually isn't significant.",
"msg_date": "Sun, 17 Jul 2022 16:39:15 -0400",
"msg_from": "Rick Otten <rottenwindfish@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: data consolidation: logical replication design considerations"
},
{
"msg_contents": "On 17/07/22, Rick Otten (rottenwindfish@gmail.com) wrote:\n> On Sat, Jul 16, 2022 at 12:07 PM Rory Campbell-Lange <\n> rory@campbell-lange.net> wrote:\n> \n> > I'd be grateful for some comments on the advisability of using a large\n> > number of concurrent logical replication publications/subscriptions.\n> > Below I've set out the current environment and a suggested design.\n> > Apologies for the length of this email.\n> \n> Another possibility is to use SymmetricDS for this. [\n> https://symmetricds.org ] SymmetricDS was originally developed to keep\n> databases on thousands of Point-of-Sale databases (in cash registers) in\n> sync with pricing and inventory data for large international retailers.\n> \n> There are lots of other use cases, but even 10-12 years ago it was scalable\n> to the extent you are describing you need here.\n> \n> The main drawback is that it is trigger based, so there is some slight\n> latency introduced for insert/update/delete actions on the tables on the\n> appropriate master, but it usually isn't significant.\n\nThanks very much for the pointer to SymmetricDS. I haven't come across\nit before. The architecture, configuration and use look very\nstraightforward, although using java would be new to our production\nenvironment, and SymmetricDS doesn't seem to be in Debian.\n\nI'd be grateful to know if 500 odd publishers/subscribers is \"out of the\npark\" or reasonable for a reasonably powerful machine (as described in\nmy original email). I would have thought using the native logical\nreplication capabilities would be much more scalable and efficient than\nstepping outside of postgresql.\n\nRegards,\nRory\n\n\n",
"msg_date": "Mon, 18 Jul 2022 15:23:13 +0100",
"msg_from": "Rory Campbell-Lange <rory@campbell-lange.net>",
"msg_from_op": true,
"msg_subject": "Re: data consolidation: logical replication design considerations"
},
{
"msg_contents": "On 16/07/22, Rory Campbell-Lange (rory@campbell-lange.net) wrote:\n> I'd be grateful for some comments on the advisability of using a large\n> number of concurrent logical replication publications/subscriptions.\n> Below I've set out the current environment and a suggested design.\n> Apologies for the length of this email.\n...\n> * to avoid overloading the existing primary host with many replication\n> slots, it would be wise to implement aggregation on another host \n\nLooking into this further it appears that a streaming replication secondary can\nnot act as a logical replication publisher. Is that correct?\n\nRory\n\n\n",
"msg_date": "Sat, 23 Jul 2022 09:40:23 +0100",
"msg_from": "Rory Campbell-Lange <rory@campbell-lange.net>",
"msg_from_op": true,
"msg_subject": "Re: data consolidation: logical replication design considerations"
}
] |
[
{
"msg_contents": "Hello.\n\nI'm investigating an issue on a PostgresSql 9.5.21 installation that\nbecomes unusable in an intermittent way. Simple queries like \"select\nnow();\" could take 20s. commits take 2s. and all gets fixed after an engine\nrestart.\n\nI look into the pg logs and no signs of errors. and checkpoints are\nalways timed. The machine is well provisioned, load isn't too high, and cpu\nio wait is under 1%.\n\nany suggestions on what I should check more?\n\n\nThanks in advance.\n-- \nBruno da Silva\n\nHello.I'm investigating an issue on a PostgresSql 9.5.21 installation that becomes unusable in an intermittent way. Simple queries like \"select now();\" could take 20s. commits take 2s. and all gets fixed after an engine restart.I look into the pg logs and no signs of errors. and checkpoints are always timed. The machine is well provisioned, load isn't too high, and cpu io wait is under 1%.any suggestions on what I should check more?Thanks in advance.-- Bruno da Silva",
"msg_date": "Thu, 21 Jul 2022 14:37:35 -0400",
"msg_from": "bruno da silva <brunogiovs@gmail.com>",
"msg_from_op": true,
"msg_subject": "PostgresSQL 9.5.21 very slow to connect and perform basic queries"
},
{
"msg_contents": "On Thu, Jul 21, 2022 at 02:37:35PM -0400, bruno da silva wrote:\n> I'm investigating an issue on a PostgresSql 9.5.21 installation that\n> becomes unusable in an intermittent way. Simple queries like \"select\n> now();\" could take 20s. commits take 2s. and all gets fixed after an engine\n> restart.\n> \n> I look into the pg logs and no signs of errors. and checkpoints are\n> always timed. The machine is well provisioned, load isn't too high, and cpu\n> io wait is under 1%.\n> \n> any suggestions on what I should check more?\n\nWhat OS/version is it ?\n\nWhat GUCs have you changed ?\n\nIs it a new issue ?\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nOperating system+version\nWhat OS / version ? At least for linux, you can get the distribution by running: tail /etc/*release \n\nGUC Settings\nWhat database configuration settings have you changed? What are their values? (These are things like \"shared_buffers\", \"work_mem\", \"enable_seq_scan\", \"effective_io_concurrency\", \"effective_cache_size\", etc). See Server Configuration for a useful query that will show all of your non-default database settings, in an easier to read format than posting pieces of your postgresql.conf file. \n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 21 Jul 2022 14:33:27 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgresSQL 9.5.21 very slow to connect and perform basic queries"
},
{
    "msg_contents": "Thanks for the quick response.\n\n OS/version: CentOS release 6.9 (Final)\n\n Hardware(non dedicated to the db, other services and app run the same\nserver):\n\n Xeon(R) CPU E5-2690 v4 @ 2.60GHz - 56 cores - 504 GB RAM\n\nlogicaldrive 1 (1.5 TB, RAID 1, OK)\nphysicaldrive 1I:3:1 (port 1I:box 3:bay 1, Solid State SAS, 1600.3 GB, OK)\nphysicaldrive 1I:3:2 (port 1I:box 3:bay 2, Solid State SAS, 1600.3 GB, OK)\n\n\n GUC Settings:\n auto_explain.log_analyze 0\n auto_explain.log_min_duration 1000\n auto_explain.log_nested_statements 0\n auto_explain.log_verbose 0\n autovacuum_analyze_scale_factor 0.1\n autovacuum_analyze_threshold 50\n autovacuum_freeze_max_age 200000000\n autovacuum_max_workers 3\n autovacuum_multixact_freeze_max_age 400000000\n autovacuum_naptime 60\n autovacuum_vacuum_cost_delay 2\n autovacuum_vacuum_cost_limit 100\n autovacuum_vacuum_scale_factor 0.1\n autovacuum_vacuum_threshold 50\n autovacuum_work_mem -1\n checkpoint_timeout 2700\n effective_cache_size 4194304\n enable_seqscan 0\n log_autovacuum_min_duration 250\n log_checkpoints 1\n log_connections 1\n log_file_mode 600\n log_lock_waits 1\n log_min_duration_statement 1000\n log_rotation_age 1440\n log_truncate_on_rotation 1\n maintenance_work_mem 262144\n max_connections 300\n max_replication_slots 10\n max_wal_senders 10\n max_wal_size 1280\n max_worker_processes 15\n min_wal_size 5\n pg_stat_statements.max 10000\n standard_conforming_strings 1\n track_commit_timestamp 1\n wal_receiver_timeout 0\n wal_sender_timeout 0\n work_mem 8192\n\nOn Thu, Jul 21, 2022 at 3:33 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Thu, Jul 21, 2022 at 02:37:35PM -0400, bruno da silva wrote:\n> > I'm investigating an issue on a PostgresSql 9.5.21 installation that\n> > becomes unusable in an intermittent way. Simple queries like \"select\n> > now();\" could take 20s. commits take 2s. and all gets fixed after an\n> engine\n> > restart.\n> >\n> > I look into the pg logs and no signs of errors. and checkpoints are\n> > always timed. The machine is well provisioned, load isn't too high, and\n> cpu\n> > io wait is under 1%.\n> >\n> > any suggestions on what I should check more?\n>\n> What OS/version is it ?\n>\n> What GUCs have you changed ?\n>\n> Is it a new issue ?\n>\n> https://wiki.postgresql.org/wiki/Slow_Query_Questions\n>\n> Operating system+version\n> What OS / version ? At least for linux, you can get the distribution by\n> running: tail /etc/*release\n>\n> GUC Settings\n> What database configuration settings have you changed? What are their\n> values? (These are things like \"shared_buffers\", \"work_mem\",\n> \"enable_seq_scan\", \"effective_io_concurrency\", \"effective_cache_size\",\n> etc). See Server Configuration for a useful query that will show all of\n> your non-default database settings, in an easier to read format than\n> posting pieces of your postgresql.conf file.\n>\n> --\n> Justin\n>\n\n\n-- \nBruno da Silva\n\n Thanks for the quick response. OS/version: CentOS release 6.9 (Final) Hardware(non dedicated to the db, other services and app run the same server): Xeon(R) CPU E5-2690 v4 @ 2.60GHz - 56 cores - 504 GB RAM logicaldrive 1 (1.5 TB, RAID 1, OK)\nphysicaldrive 1I:3:1 (port 1I:box 3:bay 1, Solid State SAS, 1600.3 GB, OK)\nphysicaldrive 1I:3:2 (port 1I:box 3:bay 2, Solid State SAS, 1600.3 GB, OK) GUC Settings: auto_explain.log_analyze 0 auto_explain.log_min_duration 1000 auto_explain.log_nested_statements 0 auto_explain.log_verbose 0 autovacuum_analyze_scale_factor 0.1 autovacuum_analyze_threshold 50 autovacuum_freeze_max_age 200000000 autovacuum_max_workers 3 autovacuum_multixact_freeze_max_age 400000000 autovacuum_naptime 60 autovacuum_vacuum_cost_delay 2 autovacuum_vacuum_cost_limit 100 autovacuum_vacuum_scale_factor 0.1 autovacuum_vacuum_threshold 50 autovacuum_work_mem -1 checkpoint_timeout 2700 effective_cache_size 4194304 enable_seqscan 0 log_autovacuum_min_duration 250 log_checkpoints 1 log_connections 1 log_file_mode 600 log_lock_waits 1 log_min_duration_statement 1000 log_rotation_age 1440 log_truncate_on_rotation 1 maintenance_work_mem 262144 max_connections 300 max_replication_slots 10 max_wal_senders 10 max_wal_size 1280 max_worker_processes 15 min_wal_size 5 pg_stat_statements.max 10000 standard_conforming_strings 1 track_commit_timestamp 1 wal_receiver_timeout 0 wal_sender_timeout 0 work_mem 8192On Thu, Jul 21, 2022 at 3:33 PM Justin Pryzby <pryzby@telsasoft.com> wrote:On Thu, Jul 21, 2022 at 02:37:35PM -0400, bruno da silva wrote:\n> I'm investigating an issue on a PostgresSql 9.5.21 installation that\n> becomes unusable in an intermittent way. Simple queries like \"select\n> now();\" could take 20s. commits take 2s. and all gets fixed after an engine\n> restart.\n> \n> I look into the pg logs and no signs of errors. and checkpoints are\n> always timed. The machine is well provisioned, load isn't too high, and cpu\n> io wait is under 1%.\n> \n> any suggestions on what I should check more?\n\nWhat OS/version is it ?\n\nWhat GUCs have you changed ?\n\nIs it a new issue ?\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nOperating system+version\nWhat OS / version ? At least for linux, you can get the distribution by running: tail /etc/*release \n\nGUC Settings\nWhat database configuration settings have you changed? What are their values? (These are things like \"shared_buffers\", \"work_mem\", \"enable_seq_scan\", \"effective_io_concurrency\", \"effective_cache_size\", etc). See Server Configuration for a useful query that will show all of your non-default database settings, in an easier to read format than posting pieces of your postgresql.conf file. \n\n-- \nJustin\n-- Bruno da Silva",
"msg_date": "Thu, 21 Jul 2022 15:59:30 -0400",
"msg_from": "bruno da silva <brunogiovs@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PostgresSQL 9.5.21 very slow to connect and perform basic queries"
},
{
    "msg_contents": "The issue started a month ago.\n\nOn Thu, Jul 21, 2022 at 3:59 PM bruno da silva <brunogiovs@gmail.com> wrote:\n\n> Thanks for the quick response.\n>\n> OS/version: CentOS release 6.9 (Final)\n>\n> Hardware(non dedicated to the db, other services and app run the same\n> server):\n>\n> Xeon(R) CPU E5-2690 v4 @ 2.60GHz - 56 cores - 504 GB RAM\n>\n> logicaldrive 1 (1.5 TB, RAID 1, OK)\n> physicaldrive 1I:3:1 (port 1I:box 3:bay 1, Solid State SAS, 1600.3 GB, OK)\n> physicaldrive 1I:3:2 (port 1I:box 3:bay 2, Solid State SAS, 1600.3 GB, OK)\n>\n>\n> GUC Settings:\n> auto_explain.log_analyze 0\n> auto_explain.log_min_duration 1000\n> auto_explain.log_nested_statements 0\n> auto_explain.log_verbose 0\n> autovacuum_analyze_scale_factor 0.1\n> autovacuum_analyze_threshold 50\n> autovacuum_freeze_max_age 200000000\n> autovacuum_max_workers 3\n> autovacuum_multixact_freeze_max_age 400000000\n> autovacuum_naptime 60\n> autovacuum_vacuum_cost_delay 2\n> autovacuum_vacuum_cost_limit 100\n> autovacuum_vacuum_scale_factor 0.1\n> autovacuum_vacuum_threshold 50\n> autovacuum_work_mem -1\n> checkpoint_timeout 2700\n> effective_cache_size 4194304\n> enable_seqscan 0\n> log_autovacuum_min_duration 250\n> log_checkpoints 1\n> log_connections 1\n> log_file_mode 600\n> log_lock_waits 1\n> log_min_duration_statement 1000\n> log_rotation_age 1440\n> log_truncate_on_rotation 1\n> maintenance_work_mem 262144\n> max_connections 300\n> max_replication_slots 10\n> max_wal_senders 10\n> max_wal_size 1280\n> max_worker_processes 15\n> min_wal_size 5\n> pg_stat_statements.max 10000\n> standard_conforming_strings 1\n> track_commit_timestamp 1\n> wal_receiver_timeout 0\n> wal_sender_timeout 0\n> work_mem 8192\n>\n> On Thu, Jul 21, 2022 at 3:33 PM Justin Pryzby <pryzby@telsasoft.com>\n> wrote:\n>\n>> On Thu, Jul 21, 2022 at 02:37:35PM -0400, bruno da silva wrote:\n>> > I'm investigating an issue on a PostgresSql 9.5.21 installation that\n>> > becomes unusable in an intermittent way. Simple queries like \"select\n>> > now();\" could take 20s. commits take 2s. and all gets fixed after an\n>> engine\n>> > restart.\n>> >\n>> > I look into the pg logs and no signs of errors. and checkpoints are\n>> > always timed. The machine is well provisioned, load isn't too high, and\n>> cpu\n>> > io wait is under 1%.\n>> >\n>> > any suggestions on what I should check more?\n>>\n>> What OS/version is it ?\n>>\n>> What GUCs have you changed ?\n>>\n>> Is it a new issue ?\n>>\n>> https://wiki.postgresql.org/wiki/Slow_Query_Questions\n>>\n>> Operating system+version\n>> What OS / version ? At least for linux, you can get the distribution by\n>> running: tail /etc/*release\n>>\n>> GUC Settings\n>> What database configuration settings have you changed? What are their\n>> values? (These are things like \"shared_buffers\", \"work_mem\",\n>> \"enable_seq_scan\", \"effective_io_concurrency\", \"effective_cache_size\",\n>> etc). See Server Configuration for a useful query that will show all of\n>> your non-default database settings, in an easier to read format than\n>> posting pieces of your postgresql.conf file.\n>>\n>> --\n>> Justin\n>>\n>\n>\n> --\n> Bruno da Silva\n>\n\n\n-- \nBruno da Silva\n\nThe issue started a month ago.On Thu, Jul 21, 2022 at 3:59 PM bruno da silva <brunogiovs@gmail.com> wrote: Thanks for the quick response. OS/version: CentOS release 6.9 (Final) Hardware(non dedicated to the db, other services and app run the same server): Xeon(R) CPU E5-2690 v4 @ 2.60GHz - 56 cores - 504 GB RAM logicaldrive 1 (1.5 TB, RAID 1, OK)\nphysicaldrive 1I:3:1 (port 1I:box 3:bay 1, Solid State SAS, 1600.3 GB, OK)\nphysicaldrive 1I:3:2 (port 1I:box 3:bay 2, Solid State SAS, 1600.3 GB, OK) GUC Settings: auto_explain.log_analyze 0 auto_explain.log_min_duration 1000 auto_explain.log_nested_statements 0 auto_explain.log_verbose 0 autovacuum_analyze_scale_factor 0.1 autovacuum_analyze_threshold 50 autovacuum_freeze_max_age 200000000 autovacuum_max_workers 3 autovacuum_multixact_freeze_max_age 400000000 autovacuum_naptime 60 autovacuum_vacuum_cost_delay 2 autovacuum_vacuum_cost_limit 100 autovacuum_vacuum_scale_factor 0.1 autovacuum_vacuum_threshold 50 autovacuum_work_mem -1 checkpoint_timeout 2700 effective_cache_size 4194304 enable_seqscan 0 log_autovacuum_min_duration 250 log_checkpoints 1 log_connections 1 log_file_mode 600 log_lock_waits 1 log_min_duration_statement 1000 log_rotation_age 1440 log_truncate_on_rotation 1 maintenance_work_mem 262144 max_connections 300 max_replication_slots 10 max_wal_senders 10 max_wal_size 1280 max_worker_processes 15 min_wal_size 5 pg_stat_statements.max 10000 standard_conforming_strings 1 track_commit_timestamp 1 wal_receiver_timeout 0 wal_sender_timeout 0 work_mem 8192On Thu, Jul 21, 2022 at 3:33 PM Justin Pryzby <pryzby@telsasoft.com> wrote:On Thu, Jul 21, 2022 at 02:37:35PM -0400, bruno da silva wrote:\n> I'm investigating an issue on a PostgresSql 9.5.21 installation that\n> becomes unusable in an intermittent way. Simple queries like \"select\n> now();\" could take 20s. commits take 2s. and all gets fixed after an engine\n> restart.\n> \n> I look into the pg logs and no signs of errors. and checkpoints are\n> always timed. The machine is well provisioned, load isn't too high, and cpu\n> io wait is under 1%.\n> \n> any suggestions on what I should check more?\n\nWhat OS/version is it ?\n\nWhat GUCs have you changed ?\n\nIs it a new issue ?\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nOperating system+version\nWhat OS / version ? At least for linux, you can get the distribution by running: tail /etc/*release \n\nGUC Settings\nWhat database configuration settings have you changed? What are their values? (These are things like \"shared_buffers\", \"work_mem\", \"enable_seq_scan\", \"effective_io_concurrency\", \"effective_cache_size\", etc). See Server Configuration for a useful query that will show all of your non-default database settings, in an easier to read format than posting pieces of your postgresql.conf file. \n\n-- \nJustin\n-- Bruno da Silva\n-- Bruno da Silva",
"msg_date": "Thu, 21 Jul 2022 16:01:10 -0400",
"msg_from": "bruno da silva <brunogiovs@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PostgresSQL 9.5.21 very slow to connect and perform basic queries"
},
{
"msg_contents": "\nOn 2022-07-21 Th 14:37, bruno da silva wrote:\n> Hello.\n>\n> I'm investigating an issue on a PostgresSql 9.5.21 installation that\n> becomes unusable in an intermittent way. Simple queries like \"select\n> now();\" could take 20s. commits take 2s. and all gets fixed after an\n> engine restart.\n>\n> I look into the pg logs and no signs of errors. and checkpoints are\n> always timed. The machine is well provisioned, load isn't too high,\n> and cpu io wait is under 1%.\n>\n> any suggestions on what I should check more?\n>\n>\n>\n\n\n9.5 has been out of support for nearly 2 years. You should be looking to\nupgrade.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 21 Jul 2022 16:18:32 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: PostgresSQL 9.5.21 very slow to connect and perform basic queries"
},
{
"msg_contents": "On Thu, Jul 21, 2022 at 03:59:30PM -0400, bruno da silva wrote:\n> OS/version: CentOS release 6.9 (Final)\n\nHow are these set ?\n\ntail /sys/kernel/mm/ksm/run /sys/kernel/mm/transparent_hugepage/{defrag,enabled,khugepaged/defrag} /proc/sys/vm/zone_reclaim_mode\n\nI suspect you may be suffering from issues with transparent huge pages.\n\nI suggest to disable KSM and THP, or upgrade to a newer OS.\n\nI've written before about these:\nhttps://www.postgresql.org/message-id/20170524155855.GH31097@telsasoft.com\nhttps://www.postgresql.org/message-id/20190625162338.GF18602@telsasoft.com\nhttps://www.postgresql.org/message-id/20170718180152.GE17566@telsasoft.com\nhttps://www.postgresql.org/message-id/20191004060300.GA11241@telsasoft.com\nhttps://www.postgresql.org/message-id/20200413144254.GS2228@telsasoft.com\nhttps://www.postgresql.org/message-id/20220329182453.GA28503@telsasoft.com\n\nOn Thu, Jul 21, 2022 at 04:01:10PM -0400, bruno da silva wrote:\n> The issue started a month ago.\n\nOk .. but how long has the DB been running under this environment ?\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 21 Jul 2022 15:21:46 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgresSQL 9.5.21 very slow to connect and perform basic queries"
},
{
    "msg_contents": "Thanks, I will check it out.\n\nOn Thu, Jul 21, 2022 at 4:21 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Thu, Jul 21, 2022 at 03:59:30PM -0400, bruno da silva wrote:\n> > OS/version: CentOS release 6.9 (Final)\n>\n> How are these set ?\n>\n> tail /sys/kernel/mm/ksm/run\n> /sys/kernel/mm/transparent_hugepage/{defrag,enabled,khugepaged/defrag}\n> /proc/sys/vm/zone_reclaim_mode\n>\n> I suspect you may be suffering from issues with transparent huge pages.\n>\n> I suggest to disable KSM and THP, or upgrade to a newer OS.\n>\n> I've written before about these:\n> https://www.postgresql.org/message-id/20170524155855.GH31097@telsasoft.com\n> https://www.postgresql.org/message-id/20190625162338.GF18602@telsasoft.com\n> https://www.postgresql.org/message-id/20170718180152.GE17566@telsasoft.com\n> https://www.postgresql.org/message-id/20191004060300.GA11241@telsasoft.com\n> https://www.postgresql.org/message-id/20200413144254.GS2228@telsasoft.com\n> https://www.postgresql.org/message-id/20220329182453.GA28503@telsasoft.com\n>\n> On Thu, Jul 21, 2022 at 04:01:10PM -0400, bruno da silva wrote:\n> > The issue started a month ago.\n>\n> Ok .. but how long has the DB been running under this environment ?\n>\n> --\n> Justin\n>\n\n\n-- \nBruno da Silva\n\nThanks, I will check it out.On Thu, Jul 21, 2022 at 4:21 PM Justin Pryzby <pryzby@telsasoft.com> wrote:On Thu, Jul 21, 2022 at 03:59:30PM -0400, bruno da silva wrote:\n> OS/version: CentOS release 6.9 (Final)\n\nHow are these set ?\n\ntail /sys/kernel/mm/ksm/run /sys/kernel/mm/transparent_hugepage/{defrag,enabled,khugepaged/defrag} /proc/sys/vm/zone_reclaim_mode\n\nI suspect you may be suffering from issues with transparent huge pages.\n\nI suggest to disable KSM and THP, or upgrade to a newer OS.\n\nI've written before about these:\nhttps://www.postgresql.org/message-id/20170524155855.GH31097@telsasoft.com\nhttps://www.postgresql.org/message-id/20190625162338.GF18602@telsasoft.com\nhttps://www.postgresql.org/message-id/20170718180152.GE17566@telsasoft.com\nhttps://www.postgresql.org/message-id/20191004060300.GA11241@telsasoft.com\nhttps://www.postgresql.org/message-id/20200413144254.GS2228@telsasoft.com\nhttps://www.postgresql.org/message-id/20220329182453.GA28503@telsasoft.com\n\nOn Thu, Jul 21, 2022 at 04:01:10PM -0400, bruno da silva wrote:\n> The issue started a month ago.\n\nOk .. but how long has the DB been running under this environment ?\n\n-- \nJustin\n-- Bruno da Silva",
"msg_date": "Thu, 21 Jul 2022 16:32:08 -0400",
"msg_from": "bruno da silva <brunogiovs@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PostgresSQL 9.5.21 very slow to connect and perform basic queries"
},
{
"msg_contents": "It has been running at least since 2018.\n\nI got that from the tail\n\n=> /sys/kernel/mm/ksm/run <==\n0\n\n==> /sys/kernel/mm/transparent_hugepage/defrag <==\n[always] madvise never\n\n==> /sys/kernel/mm/transparent_hugepage/enabled <==\nalways madvise [never]\n\n==> /sys/kernel/mm/transparent_hugepage/khugepaged/defrag <==\n[yes] no\n\n==> /proc/sys/vm/zone_reclaim_mode <==\n0\n\n\nOn Thu, Jul 21, 2022 at 4:32 PM bruno da silva <brunogiovs@gmail.com> wrote:\n\n> Thanks, I will check it out.\n>\n> On Thu, Jul 21, 2022 at 4:21 PM Justin Pryzby <pryzby@telsasoft.com>\n> wrote:\n>\n>> On Thu, Jul 21, 2022 at 03:59:30PM -0400, bruno da silva wrote:\n>> > OS/version: CentOS release 6.9 (Final)\n>>\n>> How are these set ?\n>>\n>> tail /sys/kernel/mm/ksm/run\n>> /sys/kernel/mm/transparent_hugepage/{defrag,enabled,khugepaged/defrag}\n>> /proc/sys/vm/zone_reclaim_mode\n>>\n>> I suspect you may be suffering from issues with transparent huge pages.\n>>\n>> I suggest to disable KSM and THP, or upgrade to a newer OS.\n>>\n>> I've written before about these:\n>> https://www.postgresql.org/message-id/20170524155855.GH31097@telsasoft.com\n>> https://www.postgresql.org/message-id/20190625162338.GF18602@telsasoft.com\n>> https://www.postgresql.org/message-id/20170718180152.GE17566@telsasoft.com\n>> https://www.postgresql.org/message-id/20191004060300.GA11241@telsasoft.com\n>> https://www.postgresql.org/message-id/20200413144254.GS2228@telsasoft.com\n>> https://www.postgresql.org/message-id/20220329182453.GA28503@telsasoft.com\n>>\n>> On Thu, Jul 21, 2022 at 04:01:10PM -0400, bruno da silva wrote:\n>> > The issue started a month ago.\n>>\n>> Ok .. but how long has the DB been running under this environment ?\n>>\n>> --\n>> Justin\n>>\n>\n>\n> --\n> Bruno da Silva\n>\n\n\n-- \nBruno da Silva\n\nIt has been running at least since 2018. 
I got that from the tail=> /sys/kernel/mm/ksm/run <==0==> /sys/kernel/mm/transparent_hugepage/defrag <==[always] madvise never==> /sys/kernel/mm/transparent_hugepage/enabled <==always madvise [never]==> /sys/kernel/mm/transparent_hugepage/khugepaged/defrag <==[yes] no==> /proc/sys/vm/zone_reclaim_mode <==0On Thu, Jul 21, 2022 at 4:32 PM bruno da silva <brunogiovs@gmail.com> wrote:Thanks, I will check it out.On Thu, Jul 21, 2022 at 4:21 PM Justin Pryzby <pryzby@telsasoft.com> wrote:On Thu, Jul 21, 2022 at 03:59:30PM -0400, bruno da silva wrote:\n> OS/version: CentOS release 6.9 (Final)\n\nHow are these set ?\n\ntail /sys/kernel/mm/ksm/run /sys/kernel/mm/transparent_hugepage/{defrag,enabled,khugepaged/defrag} /proc/sys/vm/zone_reclaim_mode\n\nI suspect you may be suffering from issues with transparent huge pages.\n\nI suggest to disable KSM and THP, or upgrade to a newer OS.\n\nI've written before about these:\nhttps://www.postgresql.org/message-id/20170524155855.GH31097@telsasoft.com\nhttps://www.postgresql.org/message-id/20190625162338.GF18602@telsasoft.com\nhttps://www.postgresql.org/message-id/20170718180152.GE17566@telsasoft.com\nhttps://www.postgresql.org/message-id/20191004060300.GA11241@telsasoft.com\nhttps://www.postgresql.org/message-id/20200413144254.GS2228@telsasoft.com\nhttps://www.postgresql.org/message-id/20220329182453.GA28503@telsasoft.com\n\nOn Thu, Jul 21, 2022 at 04:01:10PM -0400, bruno da silva wrote:\n> The issue started a month ago.\n\nOk .. but how long has the DB been running under this environment ?\n\n-- \nJustin\n-- Bruno da Silva\n-- Bruno da Silva",
"msg_date": "Thu, 21 Jul 2022 16:37:23 -0400",
"msg_from": "bruno da silva <brunogiovs@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PostgresSQL 9.5.21 very slow to connect and perform basic queries"
},
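The sysfs output above shows `transparent_hugepage/defrag` still at `[always]` and khugepaged defrag at `[yes]`, even though THP itself is `[never]`. Whether defrag matters much with THP already off is debatable, but the usual recommendation in the links Justin posted is to switch all three off. A sketch of doing that at runtime on this CentOS 6 sysfs layout (run as root; these paths are not covered by sysctl, so persist the commands in an init script such as rc.local):

```shell
# Turn off THP defragmentation (the [always] setting reported above).
echo never > /sys/kernel/mm/transparent_hugepage/defrag
# THP is already [never] here, but set it explicitly for completeness.
echo never > /sys/kernel/mm/transparent_hugepage/enabled
# Stop khugepaged from defragmenting in the background.
echo no > /sys/kernel/mm/transparent_hugepage/khugepaged/defrag
```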
{
"msg_contents": "Hello.\n\nAfter more investigation, we found that pgss_query_texts.stat of a size of\n2.2GB. and this deployment has a 32bit pg.\nand this errors:\n\n\n*postgresql-2022-07-12-20:07:15.log.gz:[2022-07-14 11:17:06.713 EDT]\n207.89.58.230(46964) {62c87db0.8eb2} xxxx LOG: out of\nmemorypostgresql-2022-07-12-20:07:15.log.gz:[2022-07-14 11:17:06.713 EDT]\n207.89.58.230(46964) {62c87db0.8eb2} xxxx DETAIL: Could not allocate\nenough memory to read pg_stat_statement file\n\"pg_stat_tmp/pgss_query_texts.stat\".*\n\nSo, my question is if pgss_query_texts.stat increases in size gradually due\nto too many distincts large sql statements could it cause an overall\nslowness on the engine? this slowness could cause simple statements to be\nsuper slow to return like\n\"select now()\" taking 20s?\n\nThanks in advance\n\nEnvironment:\n\n OS/version: CentOS release 6.9 (Final)\n\n Hardware(non dedicated to the db, other services and app run the same\nserver):\n\n Xeon(R) CPU E5-2690 v4 @ 2.60GHz - 56 cores - 504 GB RAM\n\nlogicaldrive 1 (1.5 TB, RAID 1, OK)\nphysicaldrive 1I:3:1 (port 1I:box 3:bay 1, Solid State SAS, 1600.3 GB, OK)\nphysicaldrive 1I:3:2 (port 1I:box 3:bay 2, Solid State SAS, 1600.3 GB, OK)\n\n PostgresSQL 9.5.21 32bit\n\n GUC Settings:\n auto_explain.log_analyze 0\n auto_explain.log_min_duration 1000\n auto_explain.log_nested_statements 0\n auto_explain.log_verbose 0\n autovacuum_analyze_scale_factor 0.1\n autovacuum_analyze_threshold 50\n autovacuum_freeze_max_age 200000000\n autovacuum_max_workers 3\n autovacuum_multixact_freeze_max_age 400000000\n autovacuum_naptime 60\n autovacuum_vacuum_cost_delay 2\n autovacuum_vacuum_cost_limit 100\n autovacuum_vacuum_scale_factor 0.1\n autovacuum_vacuum_threshold 50\n autovacuum_work_mem -1\n checkpoint_timeout 2700\n effective_cache_size 4194304\n enable_seqscan 0\n log_autovacuum_min_duration 250\n log_checkpoints 1\n log_connections 1\n log_file_mode 600\n log_lock_waits 1\n log_min_duration_statement 1000\n 
log_rotation_age 1440\n log_truncate_on_rotation 1\n maintenance_work_mem 262144\n max_connections 300\n max_replication_slots 10\n max_wal_senders 10\n max_wal_size 1280\n max_worker_processes 15\n min_wal_size 5\n pg_stat_statements.max 10000\n standard_conforming_strings 1\n track_commit_timestamp 1\n wal_receiver_timeout 0\n wal_sender_timeout 0\n work_mem 8192\n\n\n\n\n\nOn Thu, Jul 21, 2022 at 2:37 PM bruno da silva <brunogiovs@gmail.com> wrote:\n\n> Hello.\n>\n> I'm investigating an issue on a PostgresSql 9.5.21 installation that\n> becomes unusable in an intermittent way. Simple queries like \"select\n> now();\" could take 20s. commits take 2s. and all gets fixed after an engine\n> restart.\n>\n> I look into the pg logs and no signs of errors. and checkpoints are\n> always timed. The machine is well provisioned, load isn't too high, and cpu\n> io wait is under 1%.\n>\n> any suggestions on what I should check more?\n>\n>\n> Thanks in advance.\n> --\n> Bruno da Silva\n>\n\n\n-- \nBruno da Silva\n\nHello.After more investigation, we found that pgss_query_texts.stat of a size of 2.2GB. and this deployment has a 32bit pg.and this errors:postgresql-2022-07-12-20:07:15.log.gz:[2022-07-14 11:17:06.713 EDT] 207.89.58.230(46964) {62c87db0.8eb2} xxxx LOG: out of memorypostgresql-2022-07-12-20:07:15.log.gz:[2022-07-14 11:17:06.713 EDT] 207.89.58.230(46964) {62c87db0.8eb2} xxxx DETAIL: Could not allocate enough memory to read pg_stat_statement file \"pg_stat_tmp/pgss_query_texts.stat\".So, my question is if pgss_query_texts.stat increases in size gradually due to too many distincts large sql statements could it cause an overall slowness on the engine? this slowness could cause simple statements to be super slow to return like \"select now()\" taking 20s? 
Thanks in advanceEnvironment: OS/version: CentOS release 6.9 (Final) Hardware(non dedicated to the db, other services and app run the same server): Xeon(R) CPU E5-2690 v4 @ 2.60GHz - 56 cores - 504 GB RAM logicaldrive 1 (1.5 TB, RAID 1, OK)\nphysicaldrive 1I:3:1 (port 1I:box 3:bay 1, Solid State SAS, 1600.3 GB, OK)\nphysicaldrive 1I:3:2 (port 1I:box 3:bay 2, Solid State SAS, 1600.3 GB, OK) PostgresSQL 9.5.21 32bit GUC Settings: auto_explain.log_analyze 0 auto_explain.log_min_duration 1000 auto_explain.log_nested_statements 0 auto_explain.log_verbose 0 autovacuum_analyze_scale_factor 0.1 autovacuum_analyze_threshold 50 autovacuum_freeze_max_age 200000000 autovacuum_max_workers 3 autovacuum_multixact_freeze_max_age 400000000 autovacuum_naptime 60 autovacuum_vacuum_cost_delay 2 autovacuum_vacuum_cost_limit 100 autovacuum_vacuum_scale_factor 0.1 autovacuum_vacuum_threshold 50 autovacuum_work_mem -1 checkpoint_timeout 2700 effective_cache_size 4194304 enable_seqscan 0 log_autovacuum_min_duration 250 log_checkpoints 1 log_connections 1 log_file_mode 600 log_lock_waits 1 log_min_duration_statement 1000 log_rotation_age 1440 log_truncate_on_rotation 1 maintenance_work_mem 262144 max_connections 300 max_replication_slots 10 max_wal_senders 10 max_wal_size 1280 max_worker_processes 15 min_wal_size 5 pg_stat_statements.max 10000 standard_conforming_strings 1 track_commit_timestamp 1 wal_receiver_timeout 0 wal_sender_timeout 0 work_mem 8192On Thu, Jul 21, 2022 at 2:37 PM bruno da silva <brunogiovs@gmail.com> wrote:Hello.I'm investigating an issue on a PostgresSql 9.5.21 installation that becomes unusable in an intermittent way. Simple queries like \"select now();\" could take 20s. commits take 2s. and all gets fixed after an engine restart.I look into the pg logs and no signs of errors. and checkpoints are always timed. 
The machine is well provisioned, load isn't too high, and cpu io wait is under 1%.any suggestions on what I should check more?Thanks in advance.-- Bruno da Silva\n-- Bruno da Silva",
"msg_date": "Tue, 2 Aug 2022 11:08:05 -0400",
"msg_from": "bruno da silva <brunogiovs@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PostgresSQL 9.5.21 very slow to connect and perform basic queries"
},
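A 2.2 GB `pgss_query_texts.stat` is far past what a 32-bit backend can allocate to read the file back in one piece, which matches the "Could not allocate enough memory" DETAIL above. A minimal sketch (hypothetical helper, not part of PostgreSQL) of the kind of check a monitoring job could run against the file's size:

```c
#include <stdio.h>
#include <sys/stat.h>

/* Size in MiB of pg_stat_statements' query text file, or -1.0 if the
 * file cannot be stat'ed.  The caller supplies the path; the usual
 * location is pg_stat_tmp/pgss_query_texts.stat under the data dir. */
double stat_file_mb(const char *path)
{
    struct stat st;

    if (stat(path, &st) != 0)
        return -1.0;
    return st.st_size / (1024.0 * 1024.0);
}
```

For example, a cron job could call `stat_file_mb("$PGDATA/pg_stat_tmp/pgss_query_texts.stat")` every minute and alert well before the file approaches what a 32-bit process can read back.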
{
"msg_contents": "bruno da silva <brunogiovs@gmail.com> writes:\n> After more investigation, we found that pgss_query_texts.stat of a size of\n> 2.2GB. and this deployment has a 32bit pg.\n\nHm ... we've heard one previous report of pg_stat_statements' query text\nfile getting unreasonably large, but it's not clear how that can come\nto be. Do you have a lot of especially long statements being tracked\nin the pg_stat_statements view? Are there any other signs of distress\nin the postmaster log, like complaints about being unable to write\npgss_query_texts.stat?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 02 Aug 2022 11:59:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgresSQL 9.5.21 very slow to connect and perform basic queries"
},
{
"msg_contents": "Do you have a lot of especially long statements being tracked\nin the pg_stat_statements view?* well, the view was showing the query\ncolumn null.*\n* but looking on pgss_query_texts.stat there are very large sql\nstatements, of around ~ 400kb, multiple thousands. *\n\nAre there any other signs of distress\nin the postmaster log, like complaints about being unable to write\npgss_query_texts.stat? *no, just complaints for reading it. *\n\nThanks\n\nOn Tue, Aug 2, 2022 at 11:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> bruno da silva <brunogiovs@gmail.com> writes:\n> > After more investigation, we found that pgss_query_texts.stat of a size\n> of\n> > 2.2GB. and this deployment has a 32bit pg.\n>\n> Hm ... we've heard one previous report of pg_stat_statements' query text\n> file getting unreasonably large, but it's not clear how that can come\n> to be. Do you have a lot of especially long statements being tracked\n> in the pg_stat_statements view? Are there any other signs of distress\n> in the postmaster log, like complaints about being unable to write\n> pgss_query_texts.stat?\n>\n> regards, tom lane\n>\n\n\n-- \nBruno da Silva\n\nDo you have a lot of especially long statements being trackedin the pg_stat_statements view? well, the view was showing the query column null. but looking on pgss_query_texts.stat there are very large sql statements, of around ~ 400kb, multiple thousands. Are there any other signs of distressin the postmaster log, like complaints about being unable to writepgss_query_texts.stat? no, just complaints for reading it. ThanksOn Tue, Aug 2, 2022 at 11:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:bruno da silva <brunogiovs@gmail.com> writes:\n> After more investigation, we found that pgss_query_texts.stat of a size of\n> 2.2GB. and this deployment has a 32bit pg.\n\nHm ... we've heard one previous report of pg_stat_statements' query text\nfile getting unreasonably large, but it's not clear how that can come\nto be. 
Do you have a lot of especially long statements being tracked\nin the pg_stat_statements view? Are there any other signs of distress\nin the postmaster log, like complaints about being unable to write\npgss_query_texts.stat?\n\n regards, tom lane\n-- Bruno da Silva",
"msg_date": "Tue, 2 Aug 2022 13:02:41 -0400",
"msg_from": "bruno da silva <brunogiovs@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PostgresSQL 9.5.21 very slow to connect and perform basic queries"
},
{
"msg_contents": "bruno da silva <brunogiovs@gmail.com> writes:\n> Do you have a lot of especially long statements being tracked\n> in the pg_stat_statements view?* well, the view was showing the query\n> column null.*\n> * but looking on pgss_query_texts.stat there are very large sql\n> statements, of around ~ 400kb, multiple thousands. *\n\nHm. We try to recover from such failures by (a) resetting all the view's\nquery text fields to null and (b) truncating the file --- well, unlinking\nit and creating it as empty. It seems like (a) happened and (b) didn't.\nIt's pretty hard to explain that from the code though. Are you quite\nsure this is a 9.5.21 version of the pg_stat_statements extension?\nIs it possible that the pg_stat_tmp directory has been made non-writable?\n\n\t\t\tregards, tom lane\n\n\n\n\n> Are there any other signs of distress\n> in the postmaster log, like complaints about being unable to write\n> pgss_query_texts.stat? *no, just complaints for reading it. *\n\n> Thanks\n\n> On Tue, Aug 2, 2022 at 11:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>> bruno da silva <brunogiovs@gmail.com> writes:\n> After more investigation, we found that pgss_query_texts.stat of a size\n>> of\n> 2.2GB. and this deployment has a 32bit pg.\n>> \n>> Hm ... we've heard one previous report of pg_stat_statements' query text\n>> file getting unreasonably large, but it's not clear how that can come\n>> to be. Do you have a lot of especially long statements being tracked\n>> in the pg_stat_statements view? Are there any other signs of distress\n>> in the postmaster log, like complaints about being unable to write\n>> pgss_query_texts.stat?\n>> \n>> regards, tom lane\n>> \n\n\n> -- \n> Bruno da Silva\n\n\n\n",
"msg_date": "Tue, 02 Aug 2022 13:25:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgresSQL 9.5.21 very slow to connect and perform basic queries"
},
{
"msg_contents": "Hello.\n\nAre you quite sure this is a 9.5.21 version of the pg_stat_statements\nextension? *I got version 1.3 from SELECT * FROM pg_extension;*\nIs it possible that the pg_stat_tmp directory has been made non-writable? *hard\nto tell if it was made non-writable during the outage. but now it is\nwritable.*\n\nThanks\n\nOn Tue, Aug 2, 2022 at 1:25 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> bruno da silva <brunogiovs@gmail.com> writes:\n> > Do you have a lot of especially long statements being tracked\n> > in the pg_stat_statements view?* well, the view was showing the query\n> > column null.*\n> > * but looking on pgss_query_texts.stat there are very large sql\n> > statements, of around ~ 400kb, multiple thousands. *\n>\n> Hm. We try to recover from such failures by (a) resetting all the view's\n> query text fields to null and (b) truncating the file --- well, unlinking\n> it and creating it as empty. It seems like (a) happened and (b) didn't.\n> It's pretty hard to explain that from the code though. Are you quite\n> sure this is a 9.5.21 version of the pg_stat_statements extension?\n> Is it possible that the pg_stat_tmp directory has been made non-writable?\n>\n> regards, tom lane\n>\n>\n>\n>\n> > Are there any other signs of distress\n> > in the postmaster log, like complaints about being unable to write\n> > pgss_query_texts.stat? *no, just complaints for reading it. *\n>\n> > Thanks\n>\n> > On Tue, Aug 2, 2022 at 11:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> >> bruno da silva <brunogiovs@gmail.com> writes:\n> > After more investigation, we found that pgss_query_texts.stat of a size\n> >> of\n> > 2.2GB. and this deployment has a 32bit pg.\n> >>\n> >> Hm ... we've heard one previous report of pg_stat_statements' query text\n> >> file getting unreasonably large, but it's not clear how that can come\n> >> to be. Do you have a lot of especially long statements being tracked\n> >> in the pg_stat_statements view? 
Are there any other signs of distress\n> >> in the postmaster log, like complaints about being unable to write\n> >> pgss_query_texts.stat?\n> >>\n> >> regards, tom lane\n> >>\n>\n>\n> > --\n> > Bruno da Silva\n>\n>\n\n-- \nBruno da Silva\n\nHello.Are you quite sure this is a 9.5.21 version of the pg_stat_statements extension? I got version 1.3 from SELECT * FROM pg_extension;Is it possible that the pg_stat_tmp directory has been made non-writable? hard to tell if it was made non-writable during the outage. but now it is writable.ThanksOn Tue, Aug 2, 2022 at 1:25 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:bruno da silva <brunogiovs@gmail.com> writes:\n> Do you have a lot of especially long statements being tracked\n> in the pg_stat_statements view?* well, the view was showing the query\n> column null.*\n> * but looking on pgss_query_texts.stat there are very large sql\n> statements, of around ~ 400kb, multiple thousands. *\n\nHm. We try to recover from such failures by (a) resetting all the view's\nquery text fields to null and (b) truncating the file --- well, unlinking\nit and creating it as empty. It seems like (a) happened and (b) didn't.\nIt's pretty hard to explain that from the code though. Are you quite\nsure this is a 9.5.21 version of the pg_stat_statements extension?\nIs it possible that the pg_stat_tmp directory has been made non-writable?\n\n regards, tom lane\n\n\n\n\n> Are there any other signs of distress\n> in the postmaster log, like complaints about being unable to write\n> pgss_query_texts.stat? *no, just complaints for reading it. *\n\n> Thanks\n\n> On Tue, Aug 2, 2022 at 11:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>> bruno da silva <brunogiovs@gmail.com> writes:\n> After more investigation, we found that pgss_query_texts.stat of a size\n>> of\n> 2.2GB. and this deployment has a 32bit pg.\n>> \n>> Hm ... 
we've heard one previous report of pg_stat_statements' query text\n>> file getting unreasonably large, but it's not clear how that can come\n>> to be. Do you have a lot of especially long statements being tracked\n>> in the pg_stat_statements view? Are there any other signs of distress\n>> in the postmaster log, like complaints about being unable to write\n>> pgss_query_texts.stat?\n>> \n>> regards, tom lane\n>> \n\n\n> -- \n> Bruno da Silva\n\n-- Bruno da Silva",
"msg_date": "Tue, 2 Aug 2022 13:58:02 -0400",
"msg_from": "bruno da silva <brunogiovs@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PostgresSQL 9.5.21 very slow to connect and perform basic queries"
},
{
"msg_contents": "I wrote:\n> bruno da silva <brunogiovs@gmail.com> writes:\n>> Do you have a lot of especially long statements being tracked\n>> in the pg_stat_statements view?* well, the view was showing the query\n>> column null.*\n>> * but looking on pgss_query_texts.stat there are very large sql\n>> statements, of around ~ 400kb, multiple thousands. *\n\nI see one possible piece of the puzzle here: since you're using a 32-bit\nbuild, overflowing size_t is a reachable hazard. Specifically, in this\ntest to see if we need to garbage-collect the query text file:\n\n\tif (extent < pgss->mean_query_len * pgss_max * 2)\n\t\treturn false;\n\nYou said earlier that pg_stat_statements.max = 10000, so a mean_query_len\nexceeding about 2^32 / 10000 / 2 = 214748.3648 would be enough to overflow\nsize_t and break this comparison. Now, a mean SQL query length in excess\nof 200kB sounds mighty improbable, but it's really the mean length of the\nquery texts in the view. If your \"normal\" queries fall into just a few\npatterns they might be represented by a relatively small number of view\nentries. And if the \"big\" queries are sufficiently not alike, they might\neach get their own view entry, which could potentially drive the mean high\nenough to cause trouble. It'd be interesting to track what\n\"SELECT avg(length(query)) FROM pg_stat_statements\" gives.\n\nHowever, even if we grant that mean_query_len is that big, overflow here\nwould make garbage collection of the query text file more likely not less\nso. What I'm speculating is that overflow is occurring and causing all\nprocesses to decide they need to run gc_qtexts() every time they insert\na new query entry, even though the query texts file isn't actually\nbloated. 
That could possibly explain your performance issues: a garbage\ncollection pass over a multi-gig file will take awhile, and what's worse\nis that it's done under an exclusive lock, meaning that all the backends\nstack up waiting their turn to perform a useless GC pass.\n\nWhat this doesn't explain is why the condition doesn't clear once you\nobserve one of those \"out of memory\" complaints, because that should\nlead to truncating the texts file. Maybe it does get truncated, but\nthen the cycle repeats after awhile? If you have a steady stream of\nincoming new 400kB queries, you could build back up to 2.2GB of text\nafter five thousand or so of those.\n\nI'm also curious whether this installation is in the habit of doing\npg_stat_statements_reset() a lot. It looks like that fails to\nreset mean_query_len, which might be intentional but perhaps it\ncould play into getting a silly result here later on.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 02 Aug 2022 15:14:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgresSQL 9.5.21 very slow to connect and perform basic queries"
},
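The overflow Tom describes can be sketched directly. In the quoted check the product is computed in `size_t` arithmetic, which is 32 bits wide on a 32-bit build; `uint32_t` models that below, and all values are hypothetical stand-ins for this installation's numbers:

```c
#include <stdint.h>

/* pg_stat_statements skips query-text garbage collection with, roughly:
 *     if (extent < pgss->mean_query_len * pgss_max * 2)
 *         return false;
 * The two functions below compute that threshold as a 32-bit build
 * actually does (wrapping modulo 2^32) and as intended (64-bit). */

/* Threshold as computed on a 32-bit build: the product wraps. */
uint32_t wrapped_threshold(uint32_t mean_query_len, uint32_t pgss_max)
{
    return mean_query_len * pgss_max * 2u;
}

/* Threshold as intended, in 64-bit arithmetic. */
uint64_t true_threshold(uint32_t mean_query_len, uint32_t pgss_max)
{
    return (uint64_t) mean_query_len * pgss_max * 2u;
}
```

With `pgss_max = 10000` and a mean text length of 220,000 bytes — just past the 2^32 / 10000 / 2 ≈ 214,748-byte wrap point — `true_threshold` gives 4,400,000,000 while `wrapped_threshold` gives 105,032,704. A ~2.2 GB text file is therefore below the intended threshold but above the wrapped one, so every new entry tries to garbage-collect under the exclusive lock, matching the stalls described.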
{
"msg_contents": "Hello Tom. Thanks for your response.\nI spent most of the time looking for evidence and checking other\ninstallations with similar patterns since your response.\n\nthis installation is in the habit of doing pg_stat_statements_reset() a lot?\n* resetting is very rare. How can I get \"pgss->mean_query_len\" via sql?*\n\nMaybe it does get truncated, but then the cycle repeats after a while?\n*it is possible as the slowness happened some days apart 3 times.*\n\n*Question: *Besides the gc issue that you mentioned, having a large ( 700MB\nor 1GB ) pgss_query_texts.stat could cause slowness in pg_stat_statement\nprocessing\nthan leading to slower query responses with a 32bit PG? I'm thinking in\nreducing pg_stat_statements.max from 10k to 3k\n\n\nThanks\n\nOn Tue, Aug 2, 2022 at 3:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I wrote:\n> > bruno da silva <brunogiovs@gmail.com> writes:\n> >> Do you have a lot of especially long statements being tracked\n> >> in the pg_stat_statements view?* well, the view was showing the query\n> >> column null.*\n> >> * but looking on pgss_query_texts.stat there are very large sql\n> >> statements, of around ~ 400kb, multiple thousands. *\n>\n> I see one possible piece of the puzzle here: since you're using a 32-bit\n> build, overflowing size_t is a reachable hazard. Specifically, in this\n> test to see if we need to garbage-collect the query text file:\n>\n> if (extent < pgss->mean_query_len * pgss_max * 2)\n> return false;\n>\n> You said earlier that pg_stat_statements.max = 10000, so a mean_query_len\n> exceeding about 2^32 / 10000 / 2 = 214748.3648 would be enough to overflow\n> size_t and break this comparison. Now, a mean SQL query length in excess\n> of 200kB sounds mighty improbable, but it's really the mean length of the\n> query texts in the view. If your \"normal\" queries fall into just a few\n> patterns they might be represented by a relatively small number of view\n> entries. 
And if the \"big\" queries are sufficiently not alike, they might\n> each get their own view entry, which could potentially drive the mean high\n> enough to cause trouble. It'd be interesting to track what\n> \"SELECT avg(length(query)) FROM pg_stat_statements\" gives.\n>\n> However, even if we grant that mean_query_len is that big, overflow here\n> would make garbage collection of the query text file more likely not less\n> so. What I'm speculating is that overflow is occurring and causing all\n> processes to decide they need to run gc_qtexts() every time they insert\n> a new query entry, even though the query texts file isn't actually\n> bloated. That could possibly explain your performance issues: a garbage\n> collection pass over a multi-gig file will take awhile, and what's worse\n> is that it's done under an exclusive lock, meaning that all the backends\n> stack up waiting their turn to perform a useless GC pass.\n>\n> What this doesn't explain is why the condition doesn't clear once you\n> observe one of those \"out of memory\" complaints, because that should\n> lead to truncating the texts file. Maybe it does get truncated, but\n> then the cycle repeats after awhile? If you have a steady stream of\n> incoming new 400kB queries, you could build back up to 2.2GB of text\n> after five thousand or so of those.\n>\n> I'm also curious whether this installation is in the habit of doing\n> pg_stat_statements_reset() a lot. It looks like that fails to\n> reset mean_query_len, which might be intentional but perhaps it\n> could play into getting a silly result here later on.\n>\n> regards, tom lane\n>\n\n\n-- \nBruno da Silva\n\nHello Tom. Thanks for your response. I spent most of the time looking for evidence and checking other installations with similar patterns since your response.this installation is in the habit of doing pg_stat_statements_reset() a lot? resetting is very rare. 
How can I get \"pgss->mean_query_len\" via sql?Maybe it does get truncated, but then the cycle repeats after a while? it is possible as the slowness happened some days apart 3 times.Question: Besides the gc issue that you mentioned, having a large ( 700MB or 1GB ) pgss_query_texts.stat could cause slowness in pg_stat_statement processing than leading to slower query responses with a 32bit PG? I'm thinking in reducing pg_stat_statements.max from 10k to 3kThanks On Tue, Aug 2, 2022 at 3:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:I wrote:\n> bruno da silva <brunogiovs@gmail.com> writes:\n>> Do you have a lot of especially long statements being tracked\n>> in the pg_stat_statements view?* well, the view was showing the query\n>> column null.*\n>> * but looking on pgss_query_texts.stat there are very large sql\n>> statements, of around ~ 400kb, multiple thousands. *\n\nI see one possible piece of the puzzle here: since you're using a 32-bit\nbuild, overflowing size_t is a reachable hazard. Specifically, in this\ntest to see if we need to garbage-collect the query text file:\n\n if (extent < pgss->mean_query_len * pgss_max * 2)\n return false;\n\nYou said earlier that pg_stat_statements.max = 10000, so a mean_query_len\nexceeding about 2^32 / 10000 / 2 = 214748.3648 would be enough to overflow\nsize_t and break this comparison. Now, a mean SQL query length in excess\nof 200kB sounds mighty improbable, but it's really the mean length of the\nquery texts in the view. If your \"normal\" queries fall into just a few\npatterns they might be represented by a relatively small number of view\nentries. And if the \"big\" queries are sufficiently not alike, they might\neach get their own view entry, which could potentially drive the mean high\nenough to cause trouble. 
It'd be interesting to track what\n\"SELECT avg(length(query)) FROM pg_stat_statements\" gives.\n\nHowever, even if we grant that mean_query_len is that big, overflow here\nwould make garbage collection of the query text file more likely not less\nso. What I'm speculating is that overflow is occurring and causing all\nprocesses to decide they need to run gc_qtexts() every time they insert\na new query entry, even though the query texts file isn't actually\nbloated. That could possibly explain your performance issues: a garbage\ncollection pass over a multi-gig file will take awhile, and what's worse\nis that it's done under an exclusive lock, meaning that all the backends\nstack up waiting their turn to perform a useless GC pass.\n\nWhat this doesn't explain is why the condition doesn't clear once you\nobserve one of those \"out of memory\" complaints, because that should\nlead to truncating the texts file. Maybe it does get truncated, but\nthen the cycle repeats after awhile? If you have a steady stream of\nincoming new 400kB queries, you could build back up to 2.2GB of text\nafter five thousand or so of those.\n\nI'm also curious whether this installation is in the habit of doing\npg_stat_statements_reset() a lot. It looks like that fails to\nreset mean_query_len, which might be intentional but perhaps it\ncould play into getting a silly result here later on.\n\n regards, tom lane\n-- Bruno da Silva",
"msg_date": "Wed, 3 Aug 2022 11:12:49 -0400",
"msg_from": "bruno da silva <brunogiovs@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PostgresSQL 9.5.21 very slow to connect and perform basic queries"
},
{
"msg_contents": "bruno da silva <brunogiovs@gmail.com> writes:\n> *Question: *Besides the gc issue that you mentioned, having a large ( 700MB\n> or 1GB ) pgss_query_texts.stat could cause slowness in pg_stat_statement\n> processing\n> than leading to slower query responses with a 32bit PG? I'm thinking in\n> reducing pg_stat_statements.max from 10k to 3k\n\nWhether or not we've fully identified the problem, I think cutting\npg_stat_statements.max is a good idea. Especially as long as you're\nstuck on an unsupported PG version.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 03 Aug 2022 11:17:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgresSQL 9.5.21 very slow to connect and perform basic queries"
},
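The effect of cutting `pg_stat_statements.max` can be put in back-of-envelope terms: the steady-state query-text file is roughly the number of tracked entries times the mean statement text length, and garbage collection is only meant to trigger once the file exceeds about twice that. A hedged sketch (hypothetical helpers, not PostgreSQL code):

```c
#include <stdint.h>

/* Rough steady-state size of pgss_query_texts.stat: one text per
 * tracked entry.  Dead texts awaiting gc can push the real file
 * higher, so treat this as a lower bound. */
uint64_t expected_file_bytes(uint64_t pgss_max, uint64_t mean_query_len)
{
    return pgss_max * mean_query_len;
}

/* File size at which gc_qtexts() is meant to fire (64-bit math). */
uint64_t gc_trigger_bytes(uint64_t pgss_max, uint64_t mean_query_len)
{
    return 2 * pgss_max * mean_query_len;
}
```

For instance, 3,000 entries averaging ~150 kB of text give roughly 450 MB of steady-state file, consistent with the peak reported downthread after the reduction from 10k to 3k entries.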
{
"msg_contents": "Hello Guys.\n\nI'd like to report back on this issue as I've been monitoring on this\ninstallation that has very large distinct sqls and I noticed something that\nisn't probably new here but I'd like to confirm that again.\n\nSo after I reduced the pg_stat_statements.max from 10k to 3k\npgss_query_texts.stat was peaking at a reasonable size of ~450MB and by\nmonitoring the file size I was able to have a 1min window interval when the\npgss_query_texts.stat gc was happening. but whenever a gc was detected a\nbunch of statements would get logged on the pg log as slow statements and\nall would report taking around 1s some statements are like \"BEGIN\",\n\"COMMIT\" then last week I asked for another reduction from 3k to 300\npg_stat_statements.max and those slow statement reports aren't happening\nanymore even if pgss_query_texts.stat gc still occurs.\n\nmy question is: is it safe to assume that because the gc of\npgss_query_texts.stat requires a global lock this is a limitation of\npg_stat_statements current implementation?\n\nThanks\n\nOn Wed, Aug 3, 2022 at 11:17 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> bruno da silva <brunogiovs@gmail.com> writes:\n> > *Question: *Besides the gc issue that you mentioned, having a large (\n> 700MB\n> > or 1GB ) pgss_query_texts.stat could cause slowness in pg_stat_statement\n> > processing\n> > than leading to slower query responses with a 32bit PG? I'm thinking in\n> > reducing pg_stat_statements.max from 10k to 3k\n>\n> Whether or not we've fully identified the problem, I think cutting\n> pg_stat_statements.max is a good idea. 
Especially as long as you're\n> stuck on an unsupported PG version.\n>\n> regards, tom lane\n>\n\n\n-- \nBruno da Silva\n\nHello Guys.I'd like to report back on this issue as I've been monitoring on this installation that has very large distinct sqls and I noticed something that isn't probably new here but I'd like to confirm that again.So after I reduced the pg_stat_statements.max from 10k to 3k pgss_query_texts.stat was peaking at a reasonable size of ~450MB and by monitoring the file size I was able to have a 1min window interval when the pgss_query_texts.stat gc was happening. but whenever a gc was detected a bunch of statements would get logged on the pg log as slow statements and all would report taking around 1s some statements are like \"BEGIN\", \"COMMIT\" then last week I asked for another reduction from 3k to 300 pg_stat_statements.max and those slow statement reports aren't happening anymore even if pgss_query_texts.stat gc still occurs. my question is: is it safe to assume that because the gc of pgss_query_texts.stat requires a global lock this is a limitation of pg_stat_statements current implementation? ThanksOn Wed, Aug 3, 2022 at 11:17 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:bruno da silva <brunogiovs@gmail.com> writes:\n> *Question: *Besides the gc issue that you mentioned, having a large ( 700MB\n> or 1GB ) pgss_query_texts.stat could cause slowness in pg_stat_statement\n> processing\n> than leading to slower query responses with a 32bit PG? I'm thinking in\n> reducing pg_stat_statements.max from 10k to 3k\n\nWhether or not we've fully identified the problem, I think cutting\npg_stat_statements.max is a good idea. Especially as long as you're\nstuck on an unsupported PG version.\n\n regards, tom lane\n-- Bruno da Silva",
"msg_date": "Tue, 6 Sep 2022 13:39:54 -0400",
"msg_from": "bruno da silva <brunogiovs@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PostgresSQL 9.5.21 very slow to connect and perform basic queries"
}
] |
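The pg_stat_statements.max reduction discussed in this thread cannot be applied with a plain SET; it is fixed at server start. A sketch of the change, plus a rough way to gauge how much query text the extension is retaining (which tracks the size of the pgss_query_texts.stat file), assuming the pg_stat_statements extension is installed:

```sql
-- Takes effect only after a server restart (the setting is startup-only):
ALTER SYSTEM SET pg_stat_statements.max = 3000;

-- Rough estimate of retained query text, which tracks the on-disk size
-- of pgss_query_texts.stat:
SELECT count(*)                                   AS tracked_statements,
       pg_size_pretty(sum(length(query))::bigint) AS approx_query_text
FROM pg_stat_statements;
```

Calling pg_stat_statements_reset() discards the accumulated statistics and truncates the external query-text file, which can be a quicker remedy than waiting for garbage collection on installations with very large distinct SQL texts.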
[
{
"msg_contents": "Hi ,\nWe have PG v13.6 in RHEL8.4, we try to set table unlogged before load data. There are a lot of existing data in this table, when 'alter table xxx set unlogged', we found it take long time and spend time on IO datafileread. Is it expected?\n\nThanks,\n\nJames\n\n\n\n\n\n\n\n\n\nHi ,\nWe have PG v13.6 in RHEL8.4, we try to set table unlogged before load data. There are a lot of existing data in this table, when ‘alter table xxx set unlogged’, we found it take long time and spend time on IO\n datafileread. Is it expected?\n \nThanks,\n \nJames",
"msg_date": "Tue, 26 Jul 2022 08:53:29 +0000",
"msg_from": "\"James Pang (chaolpan)\" <chaolpan@cisco.com>",
"msg_from_op": true,
"msg_subject": "alter table xxx set unlogged take long time"
},
{
"msg_contents": "On Tue, Jul 26, 2022 at 4:53 AM James Pang (chaolpan) <chaolpan@cisco.com>\nwrote:\n\n> Hi ,\n>\n> We have PG v13.6 in RHEL8.4, we try to set table unlogged before load\n> data. There are a lot of existing data in this table, when ‘alter table\n> xxx set unlogged’, we found it take long time and spend time on IO\n> datafileread. Is it expected?\n>\n>\n>\nYes, the whole table needs to be written to WAL so this could take a long\ntime for a large table\n\nOn Tue, Jul 26, 2022 at 4:53 AM James Pang (chaolpan) <chaolpan@cisco.com> wrote:\n\n\nHi ,\nWe have PG v13.6 in RHEL8.4, we try to set table unlogged before load data. There are a lot of existing data in this table, when ‘alter table xxx set unlogged’, we found it take long time and spend time on IO\n datafileread. Is it expected?\n Yes, the whole table needs to be written to WAL so this could take a long time for a large table",
"msg_date": "Tue, 26 Jul 2022 08:21:26 -0400",
"msg_from": "Jim Mlodgenski <jimmy76@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: alter table xxx set unlogged take long time"
},
{
"msg_contents": "How to make it fast ? These are our steps about copy large data from Oracle to Postgres\r\n\r\n 1. Create table in Postgres 2. Extract data from Oracle to CSV 3. Alter table set xxx unlogged, 4. Run copy command into Postgres db 5. Alter table set xxx logged 6. Create index …\r\nStep 5 took long time ,especially for large tables.\r\n\r\nThank,\r\n\r\nJames\r\n\r\nFrom: Jim Mlodgenski <jimmy76@gmail.com>\r\nSent: Tuesday, July 26, 2022 8:21 PM\r\nTo: James Pang (chaolpan) <chaolpan@cisco.com>\r\nCc: pgsql-performance@lists.postgresql.org\r\nSubject: Re: alter table xxx set unlogged take long time\r\n\r\n\r\n\r\nOn Tue, Jul 26, 2022 at 4:53 AM James Pang (chaolpan) <chaolpan@cisco.com<mailto:chaolpan@cisco.com>> wrote:\r\nHi ,\r\nWe have PG v13.6 in RHEL8.4, we try to set table unlogged before load data. There are a lot of existing data in this table, when ‘alter table xxx set unlogged’, we found it take long time and spend time on IO datafileread. Is it expected?\r\n\r\nYes, the whole table needs to be written to WAL so this could take a long time for a large table\r\n\n\n\n\n\n\n\n\n\nHow to make it fast ? These are our steps about copy large data from Oracle to Postgres\r\n\n\nCreate table in Postgres 2. Extract data from Oracle to CSV 3. Alter table set xxx unlogged, 4. Run copy command into Postgres db 5. Alter table set xxx logged 6. Create index\r\n … \nStep 5 took long time ,especially for large tables. \n \nThank,\n \nJames\n \n\nFrom: Jim Mlodgenski <jimmy76@gmail.com> \nSent: Tuesday, July 26, 2022 8:21 PM\nTo: James Pang (chaolpan) <chaolpan@cisco.com>\nCc: pgsql-performance@lists.postgresql.org\nSubject: Re: alter table xxx set unlogged take long time\n\n \n\n\n \n\n \n\n\nOn Tue, Jul 26, 2022 at 4:53 AM James Pang (chaolpan) <chaolpan@cisco.com> wrote:\n\n\n\n\nHi ,\n\r\nWe have PG v13.6 in RHEL8.4, we try to set table unlogged before load data. 
There are a lot of existing data in this table, when ‘alter table xxx set unlogged’, we found it take long time and spend time on IO datafileread. Is it expected?\n\r\n \n\n\n\nYes, the whole table needs to be written to WAL so this could take a long time for a large table",
"msg_date": "Tue, 26 Jul 2022 12:41:07 +0000",
"msg_from": "\"James Pang (chaolpan)\" <chaolpan@cisco.com>",
"msg_from_op": true,
"msg_subject": "RE: alter table xxx set unlogged take long time"
},
{
"msg_contents": "\"James Pang (chaolpan)\" <chaolpan@cisco.com> writes:\n> How to make it fast ? These are our steps about copy large data from Oracle to Postgres\n> 1. Create table in Postgres 2. Extract data from Oracle to CSV 3. Alter table set xxx unlogged, 4. Run copy command into Postgres db 5. Alter table set xxx logged 6. Create index …\n\nThe easy answer is to skip steps 3 and 5.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 26 Jul 2022 08:42:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: alter table xxx set unlogged take long time"
},
{
"msg_contents": "Without step 3 , copy data take long time. Use wal_level=minimal can help make COPY load data without logging ?\r\n\r\nThanks,\r\n\r\nJames\r\n\r\n-----Original Message-----\r\nFrom: Tom Lane <tgl@sss.pgh.pa.us> \r\nSent: Tuesday, July 26, 2022 8:43 PM\r\nTo: James Pang (chaolpan) <chaolpan@cisco.com>\r\nCc: Jim Mlodgenski <jimmy76@gmail.com>; pgsql-performance@lists.postgresql.org\r\nSubject: Re: alter table xxx set unlogged take long time\r\n\r\n\"James Pang (chaolpan)\" <chaolpan@cisco.com> writes:\r\n> How to make it fast ? These are our steps about copy large data from Oracle to Postgres\r\n> 1. Create table in Postgres 2. Extract data from Oracle to CSV 3. Alter table set xxx unlogged, 4. Run copy command into Postgres db 5. Alter table set xxx logged 6. Create index …\r\n\r\nThe easy answer is to skip steps 3 and 5.\r\n\r\n\t\t\tregards, tom lane\r\n",
"msg_date": "Tue, 26 Jul 2022 12:45:40 +0000",
"msg_from": "\"James Pang (chaolpan)\" <chaolpan@cisco.com>",
"msg_from_op": true,
"msg_subject": "RE: alter table xxx set unlogged take long time"
},
{
"msg_contents": "On Tue, Jul 26, 2022 at 8:45 AM James Pang (chaolpan) <chaolpan@cisco.com>\nwrote:\n\n> Without step 3 , copy data take long time. Use wal_level=minimal can\n> help make COPY load data without logging ?\n>\n>\nI assume that you're most concerned with the total time of moving the data\nfrom the source database into the final table so you might get a big win by\nnot moving the data twice and directly load the table through a Foregin\nData Wrapper and avoid the csv export/import. Something like the oracle_fdw\nmight help here:\nhttps://github.com/laurenz/oracle_fdw\n\n-----Original Message-----\n> From: Tom Lane <tgl@sss.pgh.pa.us>\n> Sent: Tuesday, July 26, 2022 8:43 PM\n> To: James Pang (chaolpan) <chaolpan@cisco.com>\n> Cc: Jim Mlodgenski <jimmy76@gmail.com>;\n> pgsql-performance@lists.postgresql.org\n> Subject: Re: alter table xxx set unlogged take long time\n>\n> \"James Pang (chaolpan)\" <chaolpan@cisco.com> writes:\n> > How to make it fast ? These are our steps about copy large data from\n> Oracle to Postgres\n> > 1. Create table in Postgres 2. Extract data from Oracle to CSV 3.\n> Alter table set xxx unlogged, 4. Run copy command into Postgres db 5.\n> Alter table set xxx logged 6. Create index …\n>\n> The easy answer is to skip steps 3 and 5.\n>\n> regards, tom lane\n>\n\nOn Tue, Jul 26, 2022 at 8:45 AM James Pang (chaolpan) <chaolpan@cisco.com> wrote:Without step 3 , copy data take long time. Use wal_level=minimal can help make COPY load data without logging ?\nI assume that you're most concerned with the total time of moving the data from the source database into the final table so you might get a big win by not moving the data twice and directly load the table through a Foregin Data Wrapper and avoid the csv export/import. 
Something like the oracle_fdw might help here:https://github.com/laurenz/oracle_fdw\n-----Original Message-----\nFrom: Tom Lane <tgl@sss.pgh.pa.us> \nSent: Tuesday, July 26, 2022 8:43 PM\nTo: James Pang (chaolpan) <chaolpan@cisco.com>\nCc: Jim Mlodgenski <jimmy76@gmail.com>; pgsql-performance@lists.postgresql.org\nSubject: Re: alter table xxx set unlogged take long time\n\n\"James Pang (chaolpan)\" <chaolpan@cisco.com> writes:\n> How to make it fast ? These are our steps about copy large data from Oracle to Postgres\n> 1. Create table in Postgres 2. Extract data from Oracle to CSV 3. Alter table set xxx unlogged, 4. Run copy command into Postgres db 5. Alter table set xxx logged 6. Create index …\n\nThe easy answer is to skip steps 3 and 5.\n\n regards, tom lane",
"msg_date": "Tue, 26 Jul 2022 08:52:59 -0400",
"msg_from": "Jim Mlodgenski <jimmy76@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: alter table xxx set unlogged take long time"
},
{
"msg_contents": "We use JDBC to export data into csv ,then copy that to Postgres. Multiple sessions working on multiple tables. If not set unlogged , how to make COPY run fast ? possible to start a transaction include all of these “truncate table xxx; copy table xxxx; create index on tables….” With wal_level=minimal, is it ok to make copy and create index without logging ?\r\n\r\nJames\r\n\r\nFrom: Jim Mlodgenski <jimmy76@gmail.com>\r\nSent: Tuesday, July 26, 2022 8:53 PM\r\nTo: James Pang (chaolpan) <chaolpan@cisco.com>\r\nCc: Tom Lane <tgl@sss.pgh.pa.us>; pgsql-performance@lists.postgresql.org\r\nSubject: Re: alter table xxx set unlogged take long time\r\n\r\n\r\n\r\nOn Tue, Jul 26, 2022 at 8:45 AM James Pang (chaolpan) <chaolpan@cisco.com<mailto:chaolpan@cisco.com>> wrote:\r\nWithout step 3 , copy data take long time. Use wal_level=minimal can help make COPY load data without logging ?\r\n\r\nI assume that you're most concerned with the total time of moving the data from the source database into the final table so you might get a big win by not moving the data twice and directly load the table through a Foregin Data Wrapper and avoid the csv export/import. Something like the oracle_fdw might help here:\r\nhttps://github.com/laurenz/oracle_fdw\r\n\r\n-----Original Message-----\r\nFrom: Tom Lane <tgl@sss.pgh.pa.us<mailto:tgl@sss.pgh.pa.us>>\r\nSent: Tuesday, July 26, 2022 8:43 PM\r\nTo: James Pang (chaolpan) <chaolpan@cisco.com<mailto:chaolpan@cisco.com>>\r\nCc: Jim Mlodgenski <jimmy76@gmail.com<mailto:jimmy76@gmail.com>>; pgsql-performance@lists.postgresql.org<mailto:pgsql-performance@lists.postgresql.org>\r\nSubject: Re: alter table xxx set unlogged take long time\r\n\r\n\"James Pang (chaolpan)\" <chaolpan@cisco.com<mailto:chaolpan@cisco.com>> writes:\r\n> How to make it fast ? These are our steps about copy large data from Oracle to Postgres\r\n> 1. Create table in Postgres 2. Extract data from Oracle to CSV 3. Alter table set xxx unlogged, 4. 
Run copy command into Postgres db 5. Alter table set xxx logged 6. Create index …\r\n\r\nThe easy answer is to skip steps 3 and 5.\r\n\r\n regards, tom lane\r\n\n\n\n\n\n\n\n\n\n We use JDBC to export data into csv ,then copy that to Postgres. Multiple sessions working on multiple tables. If not set unlogged , how to make COPY run fast ? possible to start a transaction include all of these “truncate table\r\n xxx; copy table xxxx; create index on tables….” With wal_level=minimal, is it ok to make copy and create index without logging ?\n \nJames\n \n\nFrom: Jim Mlodgenski <jimmy76@gmail.com> \nSent: Tuesday, July 26, 2022 8:53 PM\nTo: James Pang (chaolpan) <chaolpan@cisco.com>\nCc: Tom Lane <tgl@sss.pgh.pa.us>; pgsql-performance@lists.postgresql.org\nSubject: Re: alter table xxx set unlogged take long time\n\n \n\n\n \n\n \n\n\nOn Tue, Jul 26, 2022 at 8:45 AM James Pang (chaolpan) <chaolpan@cisco.com> wrote:\n\n\nWithout step 3 , copy data take long time. Use wal_level=minimal can help make COPY load data without logging ?\n\n\n \n\n\nI assume that you're most concerned with the total time of moving the data from the source database into the final table so you might get a big win by not moving the data twice and directly load the table through a Foregin Data Wrapper\r\n and avoid the csv export/import. Something like the oracle_fdw might help here:\n\n\nhttps://github.com/laurenz/oracle_fdw\n\n \n\n-----Original Message-----\r\nFrom: Tom Lane <tgl@sss.pgh.pa.us>\r\n\r\nSent: Tuesday, July 26, 2022 8:43 PM\r\nTo: James Pang (chaolpan) <chaolpan@cisco.com>\r\nCc: Jim Mlodgenski <jimmy76@gmail.com>;\r\npgsql-performance@lists.postgresql.org\r\nSubject: Re: alter table xxx set unlogged take long time\n\r\n\"James Pang (chaolpan)\" <chaolpan@cisco.com> writes:\r\n> How to make it fast ? These are our steps about copy large data from Oracle to Postgres\r\n> 1. Create table in Postgres 2. Extract data from Oracle to CSV 3. Alter table set xxx unlogged, 4. 
Run copy command into Postgres db 5. Alter table set xxx logged 6. Create index …\n\r\nThe easy answer is to skip steps 3 and 5.\n\r\n regards, tom lane",
"msg_date": "Tue, 26 Jul 2022 12:59:15 +0000",
"msg_from": "\"James Pang (chaolpan)\" <chaolpan@cisco.com>",
"msg_from_op": true,
"msg_subject": "RE: alter table xxx set unlogged take long time"
},
{
"msg_contents": "On Tue, Jul 26, 2022 at 5:45 AM James Pang (chaolpan) <chaolpan@cisco.com>\nwrote:\n\n> Without step 3 , copy data take long time. Use wal_level=minimal can\n> help make COPY load data without logging ?\n>\n>\nI believe you are referring to:\n\nhttps://www.postgresql.org/docs/current/populate.html#POPULATE-COPY-FROM\n\nSince the final state of your table will be \"logged\" relying on the above\noptimization is the correct path, if you enable \"logged\" at the end, even\nwith wal_level=minimal, you do not benefit from the optimization and thus\nyour data ends up being written to WAL.\n\nOtherwise, it is overall time that matters, it's no use boasting the COPY\nis fast if you end up spending hours waiting for ALTER TABLE at the end.\n\nDavid J.\n\nOn Tue, Jul 26, 2022 at 5:45 AM James Pang (chaolpan) <chaolpan@cisco.com> wrote:Without step 3 , copy data take long time. Use wal_level=minimal can help make COPY load data without logging ?I believe you are referring to:https://www.postgresql.org/docs/current/populate.html#POPULATE-COPY-FROMSince the final state of your table will be \"logged\" relying on the above optimization is the correct path, if you enable \"logged\" at the end, even with wal_level=minimal, you do not benefit from the optimization and thus your data ends up being written to WAL.Otherwise, it is overall time that matters, it's no use boasting the COPY is fast if you end up spending hours waiting for ALTER TABLE at the end.David J.",
"msg_date": "Tue, 26 Jul 2022 08:03:33 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: alter table xxx set unlogged take long time"
},
{
"msg_contents": "On 7/26/22 08:59, James Pang (chaolpan) wrote:\n> We use JDBC to export data into csv ,then copy that to Postgres. \n> Multiple sessions working on multiple tables. If not set unlogged , how \n> to make COPY run fast ? possible to start a transaction include all of \n> these “truncate table xxx; copy table xxxx; create index on tables….” \n> With wal_level=minimal, is it ok to make copy and create index without \n> logging ?\n\nNot sure if it would work for you, but perhaps a usable strategy would \nbe to partition the existing large table on something (e.g. a new column \nlike batch number?).\n\nThen (completely untested) I *think* you could create the \"partition\" \ninitially as a free standing unlogged table, load it, index it, switch \nto logged, and then attach it to the partitioned table.\n\nPerhaps you could also have a background job that periodically \naggregates the batch partitions into larger buckets to minimize the \noverall number of partitions.\n\n\n-- \nJoe Conway\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 27 Jul 2022 10:35:02 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: alter table xxx set unlogged take long time"
},
{
"msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> Then (completely untested) I *think* you could create the \"partition\" \n> initially as a free standing unlogged table, load it, index it, switch \n> to logged, and then attach it to the partitioned table.\n\nI'm still of the opinion that this plan to load the data unlogged\nand switch to logged later is a loser. Sooner or later you have\ngot to write the data to WAL, and this approach doesn't eliminate\nthat cost. What it does do is create one whole extra cycle of\nwriting the data to disk and reading it back. I don't think\nit's an oversight that no such thing is suggested in our standard\ntips for bulk-loading data:\n\nhttps://www.postgresql.org/docs/current/populate.html\n\nWhat perhaps *is* an oversight is that we don't suggest\nuse of COPY FREEZE there. AFAIK that doesn't reduce the initial\ndata loading cost directly, but it would save overhead later.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 27 Jul 2022 10:46:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: alter table xxx set unlogged take long time"
},
{
"msg_contents": "On 7/27/22 10:46, Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n>> Then (completely untested) I *think* you could create the \"partition\" \n>> initially as a free standing unlogged table, load it, index it, switch \n>> to logged, and then attach it to the partitioned table.\n> \n> I'm still of the opinion that this plan to load the data unlogged\n> and switch to logged later is a loser. Sooner or later you have\n> got to write the data to WAL, and this approach doesn't eliminate\n> that cost. What it does do is create one whole extra cycle of\n> writing the data to disk and reading it back. I don't think\n> it's an oversight that no such thing is suggested in our standard\n> tips for bulk-loading data:\n\nYeah, agreed. I was mostly responding to the OP desire to use unlogged \nand not taking a stance on that.\n\n> https://www.postgresql.org/docs/current/populate.html\n> \n> What perhaps *is* an oversight is that we don't suggest\n> use of COPY FREEZE there. AFAIK that doesn't reduce the initial\n> data loading cost directly, but it would save overhead later.\n\nOh, yes, very good point.\n\n\n-- \nJoe Conway\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 27 Jul 2022 11:01:33 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: alter table xxx set unlogged take long time"
},
{
"msg_contents": "At Tue, 26 Jul 2022 12:41:07 +0000, \"James Pang (chaolpan)\" <chaolpan@cisco.com> wrote in \n> How to make it fast ? These are our steps about copy large data from Oracle to Postgres\n> \n> 1. Create table in Postgres 2. Extract data from Oracle to CSV 3. Alter table set xxx unlogged, 4. Run copy command into Postgres db 5. Alter table set xxx logged 6. Create index …\n> Step 5 took long time ,especially for large tables.\n\nAs others pointed, the step5 inevitably requires WAL emittion. On the\nother hand, there is a proposed patch [1]. It lets ALTER TABLE SET\nLOGGED/UNLOGGED evade duping the whole target table and could reduce\nthe amount of WAL to be emitted (caused by the difference of\ntuple-based WAL and per-page WAL) (in major cases).\n\nCould you try it and see if it works for you in any extent?\n\nregards.\n\n[1] https://commitfest.postgresql.org/38/3461/\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 28 Jul 2022 13:56:59 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: alter table xxx set unlogged take long time"
},
{
"msg_contents": " Does \"wal_level=minimal\" help reducing wal emitting a lot for COPY and CREATE INDEX? We plan to remove \"set unlogged/log\" , instead , just set \"wal_level=minimal\" ,then COPY data in parallel, then create index.\r\n\r\n Thanks,\r\n\r\n James \r\n-----Original Message-----\r\nFrom: Joe Conway <mail@joeconway.com> \r\nSent: Wednesday, July 27, 2022 11:02 PM\r\nTo: Tom Lane <tgl@sss.pgh.pa.us>\r\nCc: James Pang (chaolpan) <chaolpan@cisco.com>; Jim Mlodgenski <jimmy76@gmail.com>; pgsql-performance@lists.postgresql.org\r\nSubject: Re: alter table xxx set unlogged take long time\r\n\r\nOn 7/27/22 10:46, Tom Lane wrote:\r\n> Joe Conway <mail@joeconway.com> writes:\r\n>> Then (completely untested) I *think* you could create the \"partition\" \r\n>> initially as a free standing unlogged table, load it, index it, \r\n>> switch to logged, and then attach it to the partitioned table.\r\n> \r\n> I'm still of the opinion that this plan to load the data unlogged and \r\n> switch to logged later is a loser. Sooner or later you have got to \r\n> write the data to WAL, and this approach doesn't eliminate that cost. \r\n> What it does do is create one whole extra cycle of writing the data to \r\n> disk and reading it back. I don't think it's an oversight that no \r\n> such thing is suggested in our standard tips for bulk-loading data:\r\n\r\nYeah, agreed. I was mostly responding to the OP desire to use unlogged and not taking a stance on that.\r\n\r\n> https://www.postgresql.org/docs/current/populate.html\r\n> \r\n> What perhaps *is* an oversight is that we don't suggest use of COPY \r\n> FREEZE there. AFAIK that doesn't reduce the initial data loading cost \r\n> directly, but it would save overhead later.\r\n\r\nOh, yes, very good point.\r\n\r\n\r\n--\r\nJoe Conway\r\nRDS Open Source Databases\r\nAmazon Web Services: https://aws.amazon.com\r\n",
"msg_date": "Thu, 28 Jul 2022 07:47:56 +0000",
"msg_from": "\"James Pang (chaolpan)\" <chaolpan@cisco.com>",
"msg_from_op": true,
"msg_subject": "RE: alter table xxx set unlogged take long time"
},
{
"msg_contents": "On 7/28/22 03:47, James Pang (chaolpan) wrote:\n> Does \"wal_level=minimal\" help reducing wal emitting a lot for COPY\n> and CREATE INDEX? We plan to remove \"set unlogged/log\" , instead ,\n> just set \"wal_level=minimal\" ,then COPY data in parallel, then create\n> index.\n\n(Note - please don't top post on these lists)\n\nYes, wal_level = minimal is a big help in my experience if you can \ntolerate it.\n\nSimilarly synchronous_commit = off might help as long as you are \nprepared to reload some data in the event of a crash (which generally is \ntrue when bulk loading). As noted in the docs:\n\n This parameter can be changed at any time; the\n behavior for any one transaction is determined by\n the setting in effect when it commits. It is\n therefore possible, and useful, to have some\n transactions commit synchronously and others\n asynchronously.\n\n\n-- \nJoe Conway\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 28 Jul 2022 08:21:54 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: alter table xxx set unlogged take long time"
}
] |
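The load pattern recommended in this thread (skip the unlogged/logged flip; rely on wal_level = minimal plus COPY FREEZE, and build indexes afterwards) can be sketched as follows. Table, column, and file names are hypothetical, and note that FREEZE only works when the table was created or truncated in the same transaction as the COPY:

```sql
-- postgresql.conf (both require a restart):
--   wal_level = minimal
--   max_wal_senders = 0

BEGIN;
SET LOCAL synchronous_commit = off;   -- optional, per the advice above
TRUNCATE orders;                      -- must happen in the same transaction as the COPY
COPY orders FROM '/data/orders.csv' WITH (FORMAT csv, FREEZE);
COMMIT;

-- Build indexes only after the data is loaded:
CREATE INDEX orders_shipping_date_idx ON orders (shipping_date);
```

With wal_level = minimal, a COPY into a table created or truncated in the same transaction skips WAL for the table data entirely, which is why this avoids the extra write-and-read cycle of loading unlogged and switching to logged later.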
[
{
"msg_contents": "I'm spinning up a new Postgresql 14 database where I'll have to store a\ncouple years worth of time series data at the rate of single-digit millions\nof rows per day. Since this has to run in AWS Aurora, I can't use\nTimescaleDB.\n\nI've been soliciting advice for best practices for building this.\n\nOne person I talked to said \"try not to have more than 100 partitions\",\neven with the latest postgresql you'll end up with a lot of lock contention\nif you go over 100 partitions. This person also recommended manually\nkicking off vacuums on a regular schedule rather than trusting autovacuum\nto work reliably on the partitioned tables.\n\nI've got several keys, besides the obvious time-key that I could partition\non. I could do a multi-key partitioning scheme. Since the data is\ninbound at a relatively steady rate, if I partition on time, I can adjust\nthe partitions to be reasonably similarly sized. What is a good partition\nsize?\n\nAre there any tunables I should experiment with in particular for a\ndatabase with only 2 or 3 tables in it but many partitions each with\nmillions of rows?\n\nSince the data most frequently queried would be recent data (say the past\nmonth or so) would it make sense to build an archiving strategy that rolled\nup older partitions into larger ones? ie, do daily partitions for the\nfirst four weeks, then come up with a process that rolled them up into\nmonthly partitions for the next few months, then maybe quarterly partitions\nfor the data older than a year? (I'm thinking about ways to keep the\npartition count low - if that advice is justified.)\n\nOr, should I just have a single 7 Trillion row table with a BRIN index on\nthe timestamp and not mess with partitions at all?\n\nI'm spinning up a new Postgresql 14 database where I'll have to store a couple years worth of time series data at the rate of single-digit millions of rows per day. 
Since this has to run in AWS Aurora, I can't use TimescaleDB.I've been soliciting advice for best practices for building this.One person I talked to said \"try not to have more than 100 partitions\", even with the latest postgresql you'll end up with a lot of lock contention if you go over 100 partitions. This person also recommended manually kicking off vacuums on a regular schedule rather than trusting autovacuum to work reliably on the partitioned tables.I've got several keys, besides the obvious time-key that I could partition on. I could do a multi-key partitioning scheme. Since the data is inbound at a relatively steady rate, if I partition on time, I can adjust the partitions to be reasonably similarly sized. What is a good partition size?Are there any tunables I should experiment with in particular for a database with only 2 or 3 tables in it but many partitions each with millions of rows?Since the data most frequently queried would be recent data (say the past month or so) would it make sense to build an archiving strategy that rolled up older partitions into larger ones? ie, do daily partitions for the first four weeks, then come up with a process that rolled them up into monthly partitions for the next few months, then maybe quarterly partitions for the data older than a year? (I'm thinking about ways to keep the partition count low - if that advice is justified.)Or, should I just have a single 7 Trillion row table with a BRIN index on the timestamp and not mess with partitions at all?",
"msg_date": "Wed, 27 Jul 2022 08:55:14 -0400",
"msg_from": "Rick Otten <rottenwindfish@gmail.com>",
"msg_from_op": true,
"msg_subject": "Postgresql 14 partitioning advice"
},
{
"msg_contents": "On Wed, Jul 27, 2022 at 08:55:14AM -0400, Rick Otten wrote:\n> I'm spinning up a new Postgresql 14 database where I'll have to store a\n> couple years worth of time series data at the rate of single-digit millions\n> of rows per day. Since this has to run in AWS Aurora, I can't use\n> TimescaleDB.\n\n> One person I talked to said \"try not to have more than 100 partitions\",\n> even with the latest postgresql you'll end up with a lot of lock contention\n> if you go over 100 partitions.\n\nI'm not familiar with this (but now I'm curious). We have over 2000 partitions\nin some tables. No locking issue that I'm aware of. One issue that I *have*\nseen is if you have many partitions, you can end up with query plans with a\nvery large number of planner nodes, and it's hard to set\nwork_mem*hash_mem_multiplier to account for that.\n\n> This person also recommended manually\n> kicking off vacuums on a regular schedule rather than trusting autovacuum\n> to work reliably on the partitioned tables.\n\nThey must mean *analyze*, which does not run automatically on the partitioned\ntables (only the partitions). The partitioned table is empty, so doesn't need\nto be vacuumed.\n\n> I've got several keys, besides the obvious time-key that I could partition\n> on. I could do a multi-key partitioning scheme. Since the data is\n> inbound at a relatively steady rate, if I partition on time, I can adjust\n> the partitions to be reasonably similarly sized. What is a good partition\n> size?\n\nDepends on 1) the target number of partitions; and 2) the target size for\nindexes on those partitions. 
More partition keys will lead to smaller indexes.\nDepending on the type of index, and the index keys, to get good INSERT\nperformance, you may need to set shared_buffers to accommodate the sum of size\nof all the indexes (but maybe not, if the leading column is timestamp).\n\n> Since the data most frequently queried would be recent data (say the past\n> month or so) would it make sense to build an archiving strategy that rolled\n> up older partitions into larger ones? ie, do daily partitions for the\n> first four weeks, then come up with a process that rolled them up into\n> monthly partitions for the next few months, then maybe quarterly partitions\n> for the data older than a year? (I'm thinking about ways to keep the\n> partition count low - if that advice is justified.)\n\nI think it can make sense. I do that myself in order to: 1) avoid having a\nhuge *total* number of tables (which causes pg_attribute to be large, since our\ntables are also \"wide\"); and 2) make our backups of \"recent data\" smaller; and\n3) make autoanalyze a bit more efficient (a monthly partition will be analyzed\nnumerous times the 2nd half of the month, even though all the historic data\nhasn't changed at all).\n\n> Or, should I just have a single 7 Trillion row table with a BRIN index on\n> the timestamp and not mess with partitions at all?\n\nAre you going to need to DELETE data ? Then this isn't great, and DELETEing\ndata will innevitably cause a lower correlation, making BRIN less effective.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 27 Jul 2022 08:38:44 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql 14 partitioning advice"
},
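The time-based scheme being discussed can be sketched with declarative range partitioning; the names and ranges below are hypothetical:

```sql
CREATE TABLE readings (
    ts     timestamptz NOT NULL,
    device int         NOT NULL,
    value  double precision
) PARTITION BY RANGE (ts);

-- One partition per day:
CREATE TABLE readings_2022_07_27 PARTITION OF readings
    FOR VALUES FROM ('2022-07-27') TO ('2022-07-28');

-- Retiring old data is then a cheap metadata operation rather than a DELETE:
ALTER TABLE readings DETACH PARTITION readings_2022_07_27;
DROP TABLE readings_2022_07_27;
```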
{
"msg_contents": "On Wed, Jul 27, 2022 at 8:55 AM Rick Otten <rottenwindfish@gmail.com> wrote:\n\n>\n> One person I talked to said \"try not to have more than 100 partitions\",\n> even with the latest postgresql you'll end up with a lot of lock contention\n> if you go over 100 partitions.\n>\n>\nIt is hard to know how seriously to take the advice of anonymous people\naccompanied with such sparse justification. Meanwhile, people who actually\nwrote the code seem to think that this problem has been mostly overcome\nwith declarative partitioning in the newer versions.\n\nWhen you do decide to start removing the oldest data, how will you do it?\nYour partitioning should probably be designed to align with this.\n\n> Since the data most frequently queried would be recent data (say the past\nmonth or so)\n\nIs this done specifically with a time clause, or just by criteria which\nhappen to align with time, but have no formal relationship with it?\n\nCheers,\n\nJeff\n\nOn Wed, Jul 27, 2022 at 8:55 AM Rick Otten <rottenwindfish@gmail.com> wrote:One person I talked to said \"try not to have more than 100 partitions\", even with the latest postgresql you'll end up with a lot of lock contention if you go over 100 partitions.It is hard to know how seriously to take the advice of anonymous people accompanied with such sparse justification. Meanwhile, people who actually wrote the code seem to think that this problem has been mostly overcome with declarative partitioning in the newer versions.When you do decide to start removing the oldest data, how will you do it? Your partitioning should probably be designed to align with this.> Since the data most frequently queried would be recent data (say the past month or so)Is this done specifically with a time clause, or just by criteria which happen to align with time, but have no formal relationship with it?Cheers,Jeff",
"msg_date": "Wed, 27 Jul 2022 13:17:23 -0400",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql 14 partitioning advice"
},
{
"msg_contents": "On Wed, Jul 27, 2022 at 8:55 AM Rick Otten <rottenwindfish@gmail.com> wrote:\n\n> I'm spinning up a new Postgresql 14 database where I'll have to store a\n> couple years worth of time series data at the rate of single-digit millions\n> of rows per day. Since this has to run in AWS Aurora, I can't use\n> TimescaleDB.\n>\n\nI thought I'd report back some of my findings from testing this week:\n\nI took the same real world two week data set and created identical tables\nexcept that I partitioned one by month, one by week, one by day, and one by\nhour. I partitioned a little bit into the past and a little bit into the\nfuture. I did this on a PG 14.2 RDS instance. This gave me tables with:\n3 partitions, 13 partitions, 90 partitions and 2136 partitions, but\notherwise the same data.\n\nInsert times were equivalent.\n\nThen I crafted a query that was one of the main use cases for the data and\nran it a bunch of times.\n\nI noticed a significant degradation in performance as the number of\npartitions increased. The jump from 13 to 90, in particular, was very\nsteep. It didn't matter what I set work_mem or other tunables to. 
I dug\ndeeper...\n\nSurprising to me was if you partition on a `timestamp with timezone`\ncolumn, call it \"ts\":\nIf your where clause looks like\n```\nwhere ts at time zone 'UTC' > '2022-07-01 00:00'::timestamp\n```\nyou will NOT get partition pruning and it will sequence scan.\nHowever if you change it to (with an appropriately adjusted right hand side\nif necessary):\n```\nwhere ts > '2022-07-01 00:00'::timestamp\n```\nIt will do partition pruning and will index scan.\n\nWhen I made that change the query performance was equivalent regardless of\nwhich number of partitions I had in play.\nI did a quick test and this happens on a regular timestamp index on a\nregular table as well.\n\nThe other problem I ran into, which I'm still building a test case for and\nI fear might be a bug if I can easily reproduce it,\nis if I did the original select in a CTE, and then did a sort outside of\nthe CTE, even though the CTE found 0 rows, the database\nstill spent a _ton_ of time sorting those 0 rows:\n```\n -> Sort (cost=70.03..72.53 rows=1000 width=112) (actual\ntime=84848.452..84848.453 rows=0 loops=1)\n```\nOnce I can reproduce this on test data I'll be able to pin down more\nclosely what is happening and tell if I'm just reading\nthe explain plan wrong or if something is broken. It was getting mixed up\nwith the lack of pruning/index usage problem.\n\nI'll report back again next week. Anyway it is looking to me like it\ndoesn't really matter (within reason) from a performance\nperspective how many partitions we use for our data set and query\npatterns. We should be able to pick the most convenient\nfrom an archiving and data management perspective instead.",
"msg_date": "Fri, 29 Jul 2022 17:44:11 -0400",
"msg_from": "Rick Otten <rottenwindfish@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql 14 partitioning advice"
},
{
"msg_contents": "> On 30/07/2022, at 9:44 AM, Rick Otten <rottenwindfish@gmail.com> wrote:\n> \n> On Wed, Jul 27, 2022 at 8:55 AM Rick Otten <rottenwindfish@gmail.com <mailto:rottenwindfish@gmail.com>> wrote:\n> I'm spinning up a new Postgresql 14 database where I'll have to store a couple years worth of time series data at the rate of single-digit millions of rows per day. Since this has to run in AWS Aurora, I can't use TimescaleDB.\n> \n> I thought I'd report back some of my findings from testing this week:\n> \n> I took the same real world two week data set and created identical tables except that I partitioned one by month, one by week, one by day, and one by hour. I partitioned a little bit into the past and a little bit into the future. I did this on a PG 14.2 RDS instance. This gave me tables with:\n> 3 partitions, 13 partitions, 90 partitions and 2136 partitions, but otherwise the same data.\n> \n> Insert times were equivalent.\n> \n> Then I crafted a query that was one of the main use cases for the data and ran it a bunch of times.\n> \n> I noticed a significant degradation in performance as the number of partitions increased. The jump from 13 to 90, in particular, was very steep. It didn't matter what I set work_mem or other tunables to. 
I dug deeper...\n> \n> Surprising to me was if you partition on a `timestamp with timezone` column, call it \"ts\":\n> If your where clause looks like\n> ```\n> where ts at time zone 'UTC' > '2022-07-01 00:00'::timestamp\n> ```\n> you will NOT get partition pruning and it will sequence scan.\n> However if you change it to (with an appropriately adjusted right hand side if necessary):\n> ```\n> where ts > '2022-07-01 00:00'::timestamp\n> ```\n> It will do partition pruning and will index scan.\n> \n> When I made that change the query performance was equivalent regardless of which number of partitions I had in play.\n> I did a quick test and this happens on a regular timestamp index on a regular table as well.\n> \n> The other problem I ran into, which I'm still building a test case for and I fear might be a bug if I can easily reproduce it,\n> is if I did the original select in a CTE, and then did a sort outside of the CTE, even though the CTE found 0 rows, the database\n> still spent a _ton_ of time sorting those 0 rows:\n> ```\n> -> Sort (cost=70.03..72.53 rows=1000 width=112) (actual time=84848.452..84848.453 rows=0 loops=1)\n> ```\n> Once I can reproduce this on test data I'll be able to pin down more closely what is happening and tell if I'm just reading\n> the explain plan wrong or if something is broken. It was getting mixed up with the lack of pruning/index usage problem.\n> \n> I'll report back again next week. Anyway it is looking to me like it doesn't really matter (within reason) from a performance\n> perspective how many partitions we use for our data set and query patterns. We should be able to pick the most convenient\n> from an archiving and data management perspective instead.\n\nHi Rick,\n\nI am working with data with a similar structure. 
The most recent data is accessed significantly more often than older data, so my next step will be to have very recent data in hourly tables, then daily, and probably monthly tables for what is effectively archived data. My data is a little different in that it’s stored by the start of an “interval” which means I can do a = comparison for the start of the hour (or day once data is aggregated).\n\nI found a similar interesting partitioning performance issue recently when partitioning by timestamp, where if your where clause for a timestamp includes math the planner runs very slowly. In my case saying something like:\n```\nselect * from table where ts = some_time_variable - interval ‘1 hour’;\n```\nis *much* slower than something like:\n```\noffset_time_variable = some_time_variable - '1 hour’ interval;\nselect * from table where ts = offset_time_variable;\n```\n\nEverything is `timestamp with time zone`.\nI believe that it’s calculating that offset for each partition - of which there are a couple hundred - and it was causing the planner to run very slowly. Pruning works correctly once the planner has run.\n\nThis is on postgres 13 - I have yet to try 14 and see if this issue persists in 14.\n\nChanging my main query to the above structure significantly improved performance - I was previously having lots of performance issues when aggregation tasks ran and dropped partitions etc.\n\nI posted about this here: https://www.postgresql.org/message-id/84101021-8B67-45AD-83F2-A3C8F0AA4BEE%40daork.net\n\n--\nNathan Ward",
"msg_date": "Sat, 30 Jul 2022 19:48:35 +1200",
"msg_from": "Nathan Ward <lists+postgresql@daork.net>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql 14 partitioning advice"
},
{
"msg_contents": ">\n>\n> The other problem I ran into, which I'm still building a test case for and\n> I fear might be a bug if I can easily reproduce it,\n> is if I did the original select in a CTE, and then did a sort outside of\n> the CTE, even though the CTE found 0 rows, the database\n> still spent a _ton_ of time sorting those 0 rows:\n> ```\n> -> Sort (cost=70.03..72.53 rows=1000 width=112) (actual\n> time=84848.452..84848.453 rows=0 loops=1)\n> ```\n> Once I can reproduce this on test data I'll be able to pin down more\n> closely what is happening and tell if I'm just reading\n> the explain plan wrong or if something is broken. It was getting mixed up\n> with the lack of pruning/index usage problem.\n>\n> I'll report back again next week. Anyway it is looking to me like it\n> doesn't really matter (within reason) from a performance\n> perspective how many partitions we use for our data set and query\n> patterns. We should be able to pick the most convenient\n> from an archiving and data management perspective instead.\n>\n>\nThis behavior is definitely consistent. 0 rows end up slower than when I\nfind some rows in my CTE:\n```\n -> Sort (cost=109.44..113.19 rows=1500 width=112) (actual\ntime=87110.841..87110.842 rows=0 loops=1)\n -> Sort (cost=109.44..113.19 rows=1500 width=112) (actual\ntime=25367.867..25367.930 rows=840 loops=1)\n```\nThe only thing I changed in the query was the date range. It is actually\nthe CTE scan step inside the Sort block that is slower when no rows are\nreturned than when rows are returned. 
It also only happens when all the\npartitions are sequence scanned instead of being partition pruned.\n\nI'm still writing up a test case that can demo this without using\nproprietary data.",
"msg_date": "Mon, 1 Aug 2022 10:16:11 -0400",
"msg_from": "Rick Otten <rottenwindfish@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql 14 partitioning advice"
},
{
"msg_contents": "On Mon, Aug 1, 2022 at 10:16 AM Rick Otten <rottenwindfish@gmail.com> wrote:\n\n>\n>> The other problem I ran into, which I'm still building a test case for\n>> and I fear might be a bug if I can easily reproduce it,\n>> is if I did the original select in a CTE, and then did a sort outside of\n>> the CTE, even though the CTE found 0 rows, the database\n>> still spent a _ton_ of time sorting those 0 rows:\n>> ```\n>> -> Sort (cost=70.03..72.53 rows=1000 width=112) (actual\n>> time=84848.452..84848.453 rows=0 loops=1)\n>> ```\n>> Once I can reproduce this on test data I'll be able to pin down more\n>> closely what is happening and tell if I'm just reading\n>> the explain plan wrong or if something is broken. It was getting mixed\n>> up with the lack of pruning/index usage problem.\n>>\n>> I'll report back again next week. Anyway it is looking to me like it\n>> doesn't really matter (within reason) from a performance\n>> perspective how many partitions we use for our data set and query\n>> patterns. We should be able to pick the most convenient\n>> from an archiving and data management perspective instead.\n>>\n>>\n> This behavior is definitely consistent. 0 rows end up slower than when I\n> find some rows in my CTE:\n> ```\n> -> Sort (cost=109.44..113.19 rows=1500 width=112) (actual\n> time=87110.841..87110.842 rows=0 loops=1)\n> -> Sort (cost=109.44..113.19 rows=1500 width=112) (actual\n> time=25367.867..25367.930 rows=840 loops=1)\n> ```\n> The only thing I changed in the query was the date range. It is actually\n> the CTE scan step inside the Sort block that is slower when no rows are\n> returned than when rows are returned. It also only happens when all the\n> partitions are sequence scanned instead of being partition pruned.\n>\n> I'm still writing up a test case that can demo this without using\n> proprietary data.\n>\n\nAfter a bunch of experiments I can explain this now. :-)\n\nI had a `limit` clause in my test CTE. 
When sequence scanning a bunch of\npartitions, if the limit is reached, the subsequent partitions are marked\nwith `never executed` and not scanned. On the other hand, when no rows are\nfound, all of the partitions are scanned.\n\nTherefore, with many millions of rows in the partitions, and being forced\nto sequence scan because I put the `at time zone` clause in the `where`,\nthe case when rows are found is always noticeably faster than the case when\nrows aren't found as long as at least one partition hasn't been scanned yet\nwhen the limit is hit.\n\nI'm now satisfied this is a good thing, and will move on to other\nproblems. Thanks for hearing me out. I was scratching my head for a while\nover that one.",
"msg_date": "Tue, 2 Aug 2022 08:54:46 -0400",
"msg_from": "Rick Otten <rottenwindfish@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql 14 partitioning advice"
},
{
"msg_contents": "> I'm spinning up a new Postgresql 14 database where I'll have to store a\ncouple years worth of time series data at the rate of single-digit millions\nof rows per day.\nI think you absolutely need to use partitioning for the following reasons:\n1. maintenance and roll-off of older data\n2. indexes are much smaller\n3. performance is predictable (if partition pruning kicks in)\n\nPostgres 14 improved partitioning quite a bit. I used it in Postgres 9 and\nthere was a lot of locking on partition hierarchy when you add/drop\npartition tables.\nHaving thousands of partitions shouldn't be a problem, BUT you will incur\ncost on query planning, which is usually under 0.1 second on modern\nhardware.\n\n\nOn Tue, Aug 2, 2022 at 5:55 AM Rick Otten <rottenwindfish@gmail.com> wrote:\n\n>\n>\n> On Mon, Aug 1, 2022 at 10:16 AM Rick Otten <rottenwindfish@gmail.com>\n> wrote:\n>\n>>\n>>> The other problem I ran into, which I'm still building a test case for\n>>> and I fear might be a bug if I can easily reproduce it,\n>>> is if I did the original select in a CTE, and then did a sort outside of\n>>> the CTE, even though the CTE found 0 rows, the database\n>>> still spent a _ton_ of time sorting those 0 rows:\n>>> ```\n>>> -> Sort (cost=70.03..72.53 rows=1000 width=112) (actual\n>>> time=84848.452..84848.453 rows=0 loops=1)\n>>> ```\n>>> Once I can reproduce this on test data I'll be able to pin down more\n>>> closely what is happening and tell if I'm just reading\n>>> the explain plan wrong or if something is broken. It was getting mixed\n>>> up with the lack of pruning/index usage problem.\n>>>\n>>> I'll report back again next week. Anyway it is looking to me like it\n>>> doesn't really matter (within reason) from a performance\n>>> perspective how many partitions we use for our data set and query\n>>> patterns. 
We should be able to pick the most convenient\n>>> from an archiving and data management perspective instead.\n>>>\n>>>\n>> This behavior is definitely consistent. 0 rows end up slower than when I\n>> find some rows in my CTE:\n>> ```\n>> -> Sort (cost=109.44..113.19 rows=1500 width=112) (actual\n>> time=87110.841..87110.842 rows=0 loops=1)\n>> -> Sort (cost=109.44..113.19 rows=1500 width=112) (actual\n>> time=25367.867..25367.930 rows=840 loops=1)\n>> ```\n>> The only thing I changed in the query was the date range. It is actually\n>> the CTE scan step inside the Sort block that is slower when no rows are\n>> returned than when rows are returned. It also only happens when all the\n>> partitions are sequence scanned instead of being partition pruned.\n>>\n>> I'm still writing up a test case that can demo this without using\n>> proprietary data.\n>>\n>\n> After a bunch of experiments I can explain this now. :-)\n>\n> I had a `limit` clause in my test CTE. When sequence scanning a bunch of\n> partitions, if the limit is reached, the subsequent partitions are marked\n> with `never executed` and not scanned. On the other hand, when no rows are\n> found, all of the partitions are scanned.\n>\n> Therefore, with many millions of rows in the partitions, and being forced\n> to sequence scan because I put the `at time zone` clause in the `where`,\n> the case when rows are found is always noticeably faster than the case when\n> rows aren't found as long as at least one partition hasn't been scanned yet\n> when the limit is hit.\n>\n> I'm now satisfied this is a good thing, and will move on to other\n> problems. Thanks for hearing me out. I was scratching my head for a while\n> over that one.\n>\n>\n>\n\n\n-- \n-slava\n\n>",
"msg_date": "Mon, 8 Aug 2022 15:45:11 -0700",
"msg_from": "Slava Mudry <slava44@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql 14 partitioning advice"
},
{
"msg_contents": "On Mon, Aug 08, 2022 at 03:45:11PM -0700, Slava Mudry wrote:\n> Postgres 14 improved partitioning quite a bit. I used it in Postgres 9 and\n> there was a lot of locking on partition hierarchy when you add/drop\n> partition tables.\n\nNote that postgres 9 didn't have native/declarative partitioning, and most\nimprovements in native partitioning don't apply to legacy/inheritance\npartitioning.\n\nhttps://www.postgresql.org/docs/devel/ddl-partitioning.html\n\n\"Native\" partitioning added in v10 tends to require stronger locks for add/drop\nthan legacy partitioning, since partitions have associated bounds, which cannot\noverlap. The locking is improved in v12 with CREATE+ATTACH and v14 with\nDETACH CONCURRENTLY+DROP.\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 8 Aug 2022 17:55:15 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql 14 partitioning advice"
}
] |
[
{
    "msg_contents": "We have a Postgresql 13 database where we have a single table with\nseveral millions of rows . We plan to partition it based on timestamp .\nWe have been seeking advice for best practices for building this.\nThis table will get lots of updates for the same rows during a short period\nof time.During this time rows would be in a single partition .\nAfter this short time these rows would move to another partition .Where\nno more updates take place on these rows.But might have some SELECT\nqueries running.\nWe plan to l have partitions based on months and then roll them up in a\nyear and then archive these older partitions\nOne consultant we talked with told us this row movement between the\npartitions will have\nhuge complications .But this was an issue during the Postgres 10 version .\nSo we are seeking advice on the performance perspective and things we\nshould take care of along with manual vacuums on a regular schedule and\nindexing.\nAre there any tunables I should experiment with in particular ?",
"msg_date": "Tue, 2 Aug 2022 09:48:02 +0200",
"msg_from": "Ameya Bidwalkar <bidwalkar.ameya10@gmail.com>",
"msg_from_op": true,
"msg_subject": "Postgresql 13 partitioning advice"
},
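A minimal sketch of the monthly layout described above, assuming a timestamp column named created_at (all other names are hypothetical; the yearly roll-up and archiving steps are left out):

```sql
CREATE TABLE events (
    id         bigint      NOT NULL,
    payload    text,
    created_at timestamptz NOT NULL
) PARTITION BY RANGE (created_at);

CREATE TABLE events_2022_08 PARTITION OF events
    FOR VALUES FROM ('2022-08-01') TO ('2022-09-01');
CREATE TABLE events_2022_09 PARTITION OF events
    FOR VALUES FROM ('2022-09-01') TO ('2022-10-01');

-- An index created on the parent is created on every partition automatically:
CREATE INDEX ON events (created_at, id);
```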
{
"msg_contents": "On Tue, 2 Aug 2022 at 19:48, Ameya Bidwalkar\n<bidwalkar.ameya10@gmail.com> wrote:\n> We have a Postgresql 13 database where we have a single table with several millions of rows . We plan to partition it based on timestamp .\n> We have been seeking advice for best practices for building this.\n> This table will get lots of updates for the same rows during a short period of time.During this time rows would be in a single partition .\n> After this short time these rows would move to another partition .Where no more updates take place on these rows.But might have some SELECT queries running.\n> We plan to l have partitions based on months and then roll them up in a year and then archive these older partitions\n> One consultant we talked with told us this row movement between the partitions will have\n> huge complications .But this was an issue during the Postgres 10 version .\n\nDefine \"huge complications\"?\n\nThe capabilities of partitioned tables have changed quite a bit since\nthe feature was added. It's very easy for knowledge to get out-dated\nin this area. I did quite a bit of work on them and I struggle to\nremember off the top of my head which versions saw which improvements.\nPG12 saw lots. See [1], search for \"partition\".\n\nOne possible complication is what is mentioned in [2] about\n\"serialization failure error\". UPDATEs that cause a tuple to move to\nanother partition can cause a serialization failure at transaction\nisolation level, not just serializable transactions. If it's not\nalready, you might want to have your application retry transactions on\nSQL:40001 errors.\n\nApart from that, assuming there's comparatively a small number of rows\nin the partition being updated compared to the partition with the\nstatic rows, then it sounds fairly efficient. As you describe it, the\nlarger static partition is effectively INSERT only and auto-vacuum\nwill need to touch it only for tuple freezing work. 
The smaller of\nthe two tables will receive more churn but will be faster to vacuum.\nPG13 got a new feature that makes sure auto-vacuum also does the\nrounds on INSERT-only tables too, so the static partition is not going\nto be neglected until anti-wrap-around-autovacuums trigger, like they\nwould have in PG12 and earlier.\n\nAnother thing to consider is that an UPDATE of a non-partitioned table\nhas a chance at being a HOT update. That's possible if the tuple can\nfit on the same page and does not update any of the indexed columns. A\nHOT update means no indexes need to be updated so these perform faster\nand require less space in WAL than a non-HOT update. An UPDATE that\nmoves a tuple to another partition can never be a HOT update. That's\nsomething you might want to consider. If you're updating indexed\ncolumns already then it's not a factor to consider. There's also\noverhead to postgres having to find the partition for the newly\nupdated version of the tuple. That's not hugely expensive, but it's\ngenerally measurable. RANGE partitioned tables with a large number of\npartitions will have the most overhead for this. HASH partitioned\ntables, the least.\n\nThe best thing you can likely do is set up a scenario with pgbench and\ncompare the performance. pgbench is a pretty flexible tool that will\nallow you to run certain queries X% of the time and even throttle the\nworkload at what you expect your production server to experience. You\ncould then run it overnight on a test server, or even for weeks and\nsee how auto-vacuum keeps up when compared to the non-partitioned\ncase. 
You can also check how much extra WAL is generated vs the\nnon-partitioned case.\n\n> So we are seeking advice on the performance perspective and things we should take care of along with manual vacuums on a regular schedule and indexing.\n> Are there any tunables I should experiment with in particular ?\n\nPerhaps if you want to keep a small high-churn table in check you might\nwant to consider if autovacuum_naptime is set low enough. You may not\ncare if the space being consumed in the standard 1min\nautovacuum_naptime is small enough not to be of concern.\n\nDavid\n\n[1] https://www.postgresql.org/docs/release/12.0/\n[2] https://www.postgresql.org/docs/13/sql-update.html\n\n\n",
"msg_date": "Tue, 2 Aug 2022 22:16:21 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql 13 partitioning advice"
},
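The row movement David discusses is just an ordinary UPDATE that changes the partition key; a sketch with hypothetical names shows what happens under the hood:

```sql
-- Rows live in the "hot" partition while they receive updates and move
-- to "done" afterwards.
CREATE TABLE jobs (
    id    bigint NOT NULL,
    state text   NOT NULL
) PARTITION BY LIST (state);

CREATE TABLE jobs_hot  PARTITION OF jobs FOR VALUES IN ('hot');
CREATE TABLE jobs_done PARTITION OF jobs FOR VALUES IN ('done');

INSERT INTO jobs VALUES (1, 'hot');

-- Internally a DELETE from jobs_hot plus an INSERT into jobs_done; it can
-- never be a HOT update, and under concurrent access it may fail with
-- SQLSTATE 40001, which the application should be prepared to retry.
UPDATE jobs SET state = 'done' WHERE id = 1;
```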
{
"msg_contents": "Hello David,\n\nThank you for the valuable inputs.We will test these scenarios .\n\nRegards,\nAmeya\n\nOn Tue, Aug 2, 2022 at 12:16 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Tue, 2 Aug 2022 at 19:48, Ameya Bidwalkar\n> <bidwalkar.ameya10@gmail.com> wrote:\n> > We have a Postgresql 13 database where we have a single table with\n> several millions of rows . We plan to partition it based on timestamp .\n> > We have been seeking advice for best practices for building this.\n> > This table will get lots of updates for the same rows during a short\n> period of time.During this time rows would be in a single partition .\n> > After this short time these rows would move to another partition\n> .Where no more updates take place on these rows.But might have some SELECT\n> queries running.\n> > We plan to l have partitions based on months and then roll them up in a\n> year and then archive these older partitions\n> > One consultant we talked with told us this row movement between the\n> partitions will have\n> > huge complications .But this was an issue during the Postgres 10\n> version .\n>\n> Define \"huge complications\"?\n>\n> The capabilities of partitioned tables have changed quite a bit since\n> the feature was added. It's very easy for knowledge to get out-dated\n> in this area. I did quite a bit of work on them and I struggle to\n> remember off the top of my head which versions saw which improvements.\n> PG12 saw lots. See [1], search for \"partition\".\n>\n> One possible complication is what is mentioned in [2] about\n> \"serialization failure error\". UPDATEs that cause a tuple to move to\n> another partition can cause a serialization failure at transaction\n> isolation level, not just serializable transactions. 
If it's not\n> already, you might want to have your application retry transactions on\n> SQL:40001 errors.\n>\n> Apart from that, assuming there's comparatively a small number of rows\n> in the partition being updated compared to the partition with the\n> static rows, then it sounds fairly efficient. As you describe it, the\n> larger static partition is effectively INSERT only and auto-vacuum\n> will need to touch it only for tuple freezing work. The smaller of\n> the two tables will receive more churn but will be faster to vacuum.\n> PG13 got a new feature that makes sure auto-vacuum also does the\n> rounds on INSERT-only tables too, so the static partition is not going\n> to be neglected until anti-wrap-around-autovacuums trigger, like they\n> would have in PG12 and earlier.\n>\n> Another thing to consider is that an UPDATE of a non-partitioned table\n> has a chance at being a HOT update. That's possible if the tuple can\n> fit on the same page and does not update any of the indexed columns. A\n> HOT update means no indexes need to be updated so these perform faster\n> and require less space in WAL than a non-HOT update. An UPDATE that\n> moves a tuple to another partition can never be a HOT update. That's\n> something you might want to consider. If you're updating indexed\n> columns already then it's not a factor to consider. There's also\n> overhead to postgres having to find the partition for the newly\n> updated version of the tuple. That's not hugely expensive, but it's\n> generally measurable. RANGE partitioned tables with a large number of\n> partitions will have the most overhead for this. HASH partitioned\n> tables, the least.\n>\n> The best thing you can likely do is set up a scenario with pgbench and\n> compare the performance. pgbench is a pretty flexible tool that will\n> allow you to run certain queries X% of the time and even throttle the\n> workload at what you expect your production server to experience. 
You\n> could then run it overnight on a test server, or even for weeks and\n> see how auto-vacuum keeps up when compared to the non-partitioned\n> case. You can also check how much extra WAL is generated vs the\n> non-partitioned case.\n>\n> > So we are seeking advice on the performance perspective and things we\n> should take care of along with manual vacuums on a regular schedule and\n> indexing.\n> > Are there any tunables I should experiment with in particular ?\n>\n> Perhaps if you want to keep a small high-churn table in check you might\n> want to consider if autovacuum_naptime is set low enough. You may not\n> care if the space being consumed in the standard 1min\n> autovacuum_naptime is small enough not to be of concern.\n>\n> David\n>\n> [1] https://www.postgresql.org/docs/release/12.0/\n> [2] https://www.postgresql.org/docs/13/sql-update.html\n>",
"msg_date": "Wed, 3 Aug 2022 13:05:25 +0200",
"msg_from": "Ameya Bidwalkar <bidwalkar.ameya10@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql 13 partitioning advice"
}
] |
[
{
"msg_contents": "Using logical replication is it possible to have a single table \nsubscriber connect to multiple publishers of the same table ? This \nwould be for INSERT's only.\n\nThink multiple DB publishers just inserting records (audit transaction \nlogs)....\n\nIs it possible to have a single subscriber table contact multiple \npublishers and just insert all of the data into a single table on the \nsubscriber? ie: merge type replication. There are no primary/FK \nconstraints, etc. The records are just time based audit log type data...\n\n\n-- \ninoc.net!rblayzor\nXMPP: rblayzor.AT.inoc.net\nPGP: https://pgp.inoc.net/rblayzor/\n\n\n",
"msg_date": "Tue, 2 Aug 2022 09:57:50 -0400",
"msg_from": "Robert Blayzor <rblayzor.bulk@inoc.net>",
"msg_from_op": true,
"msg_subject": "PgSQL 14 - Logical Rep - Single table multiple publications?"
},
{
"msg_contents": "On 02/08/22, Robert Blayzor (rblayzor.bulk@inoc.net) wrote:\n> Is it possible to have a single subscriber table contact multiple publishers\n> and just insert all of the data into a single table on the subscriber? ie:\n> merge type replication. There are no primary/FK constraints, etc. The\n> records are just time based audit log type data...\n\nYour use case meets, I think, the third \"typical use case\" listed at\nhttps://www.postgresql.org/docs/current/logical-replication.html, namely\n\"Consolidating multiple databases into a single one (for example for\nanalytical purposes).\"\n\nI've just been testing aggregating all the data in one schema across 300\npublisher databases into 5 subscriber schemas on two Postgresql 14 clusters on\nthe same machine. Each of 60 publisher tables are aggregating into a\nsingle table on the subscriber.\n\nSpecial care must be taken with the \"replica identity\" of published\ntables, as set out at\nhttps://www.postgresql.org/docs/current/logical-replication-publication.html.\nFor example, you may need a unique identifying column for each source\ntable in addition to the normal row identifier to differentiate *this*\ntable's id 1 row from the *other* table's id 1 row, otherwise the\nsubscriber won't be able to identify the row to delete if this table's\nid 1 row is deleted (for example).\n\nAlthough this seems to work fine with native replication, the pglogical\nextension has more knobs. For instance, the\npglogical.wait_for_subscription_sync_complete function is useful to ensure that\nsync finishes when part of a migration.\n\nRory\n\n\n",
"msg_date": "Tue, 2 Aug 2022 15:57:39 +0100",
"msg_from": "Rory Campbell-Lange <rory@campbell-lange.net>",
"msg_from_op": false,
"msg_subject": "Re: PgSQL 14 - Logical Rep - Single table multiple publications?"
},
{
"msg_contents": "On 8/2/22 10:57, Rory Campbell-Lange wrote:\n> Special care must be taken with the \"replica identity\" of published\n> tables, as set out at\n> https://www.postgresql.org/docs/current/logical-replication-publication.html.\n> For example, you may need a unique identifying column for each source\n> table in addition to the normal row identifier to differentiate*this*\n> table's id 1 row from the*other* table's id 1 row, otherwise the\n> subscriber won't be able to identify the row to delete if this table's\n> id 1 row is deleted (for example).\n> \n> Although this seems to work fine with native replication, the pglogical\n> extension has more knobs. For instance, the\n> pglogical.wait_for_subscription_sync_complete function is useful to ensure that\n> sync finishes when part of a migration.\n\n\nWe would literally just be merging bulk data rows that are considered \nimmutable, meaning they would never be updated or deleted. We would \nreplicate only inserts, not deletes, updates, etc.\n\nWould the table identifier still be required in this case?\n\n\nWe have a half a dozen DB's that just collect call records, they are in \ndifferent locations. They get their local call data and store it into \ntheir local table. We would want to aggregate all that data in a central \nsubscription database table for pure analytics/reporting purposes...\n\n-- \ninoc.net!rblayzor\nXMPP: rblayzor.AT.inoc.net\nPGP: https://pgp.inoc.net/rblayzor/\n\n\n",
"msg_date": "Tue, 2 Aug 2022 12:09:58 -0400",
"msg_from": "Robert Blayzor <rblayzor.bulk@inoc.net>",
"msg_from_op": true,
"msg_subject": "Re: PgSQL 14 - Logical Rep - Single table multiple publications?"
},
{
"msg_contents": "On 02/08/22, Robert Blayzor (rblayzor.bulk@inoc.net) wrote:\n> On 8/2/22 10:57, Rory Campbell-Lange wrote:\n> > Special care must be taken with the \"replica identity\" of published\n> > tables, as set out at\n> > https://www.postgresql.org/docs/current/logical-replication-publication.html.\n> \n> We would literally just be merging bulk data rows that are considered\n> immutable, meaning they would never be updated or deleted. We would\n> replicate only inserts, not deletes, updates, etc.\n> \n> Would the table identifier still be required in this case?\n\nOn the page referenced above is the following:\n\n\"INSERT operations can proceed regardless of any replica identity.\"\n\nSo you should be good.\n\nRory\n\n\n",
"msg_date": "Tue, 2 Aug 2022 20:18:16 +0100",
"msg_from": "Rory Campbell-Lange <rory@campbell-lange.net>",
"msg_from_op": false,
"msg_subject": "Re: PgSQL 14 - Logical Rep - Single table multiple publications?"
}
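For the insert-only case above, the setup can be sketched like this (connection details and object names are hypothetical; one subscription per source database, all writing into the same subscriber table):

```sql
-- On each publisher:
CREATE PUBLICATION audit_pub FOR TABLE call_records
    WITH (publish = 'insert');   -- replicate INSERTs only

-- On the central subscriber, repeated once per source database:
CREATE SUBSCRIPTION audit_sub_site1
    CONNECTION 'host=site1 dbname=calls user=replicator'
    PUBLICATION audit_pub;
```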
] |
[
{
"msg_contents": "Hi,\nWe are doing an oracle to postgres migration(5 TB+ data). We are encoding\nand decoding BLOB data after migration and for that we are running updates\non tables having BLOB/CLOB data. When we execute this pg_wal is filling up.\n\nDo you have any general guidelines for migrating a 5TB + database from\nOracle to Postgres? Any specific guidelines around archiving logs?\n\nRegards,\nAditya.",
"msg_date": "Fri, 5 Aug 2022 18:00:02 +0530",
"msg_from": "aditya desai <admad123@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_wal filling up while running huge updates"
},
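Independent of archiver tuning, one way to keep pg_wal bounded during such a backfill is to run the encode/decode UPDATE in small committed batches instead of one huge statement, so archived segments can be recycled between batches (table and column names below are hypothetical):

```sql
-- Repeat until the UPDATE reports 0 rows; commit after each batch.
UPDATE documents d
SET    body = convert_from(d.body_raw, 'UTF8')
WHERE  d.id IN (
    SELECT id
    FROM   documents
    WHERE  body IS NULL          -- not yet converted
    ORDER  BY id
    LIMIT  10000
);
```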
{
"msg_contents": "On Fri, Aug 5, 2022 at 7:30 AM aditya desai <admad123@gmail.com> wrote:\n\n>\n> We are doing an oracle to postgres migration(5 TB+ data). We are encoding\n> and decoding BLOB data after migration and for that we are running updates\n> on tables having BLOB/CLOB data. When we execute this pg_wal is filling up.\n>\n> Do you have any general guidelines for migrating a 5TB + database from\n> Oracle to Postgres? Any specific guidelines around archiving logs?\n>\n\nIf you use pgBackrest for WAL archiving, you could enable asynchronous\narchiving.\n\nhttps://pgbackrest.org/configuration.html#section-archive/option-archive-async\n\n\n-- \nDon Seiler\nwww.seiler.us",
"msg_date": "Fri, 5 Aug 2022 08:56:52 -0500",
"msg_from": "Don Seiler <don@seiler.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_wal filling up while running huge updates"
},
{
"msg_contents": "On Fri, Aug 05, 2022 at 06:00:02PM +0530, aditya desai wrote:\n> Hi,\n> We are doing an oracle to postgres migration(5 TB+ data). We are encoding\n> and decoding BLOB data after migration and for that we are running updates\n> on tables having BLOB/CLOB data. When we execute this pg_wal is filling up.\n\nCould you please include basic information in each new thread you create ?\n\nhttps://wiki.postgresql.org/wiki/Server_Configuration\nhttps://wiki.postgresql.org/wiki/Guide_to_reporting_problems\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 5 Aug 2022 09:04:00 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_wal filling up while running huge updates"
}
] |
[
{
"msg_contents": "Good day,\n\nconsider the following query:\n\nWITH aggregation AS (\n    SELECT\n        a.*,\n        (SELECT array_agg(b.*) FROM b WHERE b.a_id = a.id) as \"bs\",\n        (SELECT array_agg(c.*) FROM c WHERE c.a_id = a.id) as \"cs\",\n        (SELECT array_agg(d.*) FROM d WHERE d.a_id = a.id) as \"ds\",\n        (SELECT array_agg(e.*) FROM e WHERE e.a_id = a.id) as \"es\"\n    FROM a WHERE a.id IN (<some big list, ranging from 20-180 entries>)\n)\nSELECT to_jsonb(aggregation.*) as \"value\" FROM aggregation;\n\nImagine that for each \"a\" there exist between 5-100 \"b\", \"c\", \"d\" and \n\"e\" rows, which makes the result pretty big (worst case: around 300kb \nwhen saved to a text file).\nI noticed that adding the \"to_jsonb\" increases the query time by 100%, \nfrom 9-10ms to 17-23ms on average.\nThis may not seem slow at all, but this query has another issue: on an \nAWS Aurora Serverless V2 instance we are running into a RAM usage of \naround 30-50 GB, compared to < 10 GB when using a simple LEFT JOINed \nquery, when under high load (> 1000 queries / sec). Furthermore the CPU \nusage is quite high.\n\nIs there anything I could improve? I am open to other solutions, but I \nam wondering if I ran into an edge case of \"to_jsonb\" for \"anonymous \nrecords\" (these are just rows without a defined UDT) - this is just a \nwild guess though.\nI am mostly looking to decrease the load (CPU and memory) on Postgres \nitself. Furthermore I would like to know why the memory usage is so \nsignificant. Any tips on how to analyze this issue are appreciated as \nwell - my knowledge is limited to being average at interpreting EXPLAIN \nANALYZE results.\n\nHere's a succinct list of the whys, what I have found out so far and \nsolutions I already tried / don't want to consider:\n\n- LEFT JOINing potentially creates a huge result set because of the \ncartesian product, that's a no-no\n- not using \"to_jsonb\" is sadly also not possible, as Postgres' array + \nrecord syntax is very unfriendly and hard to parse (it's barely \ndocumented if at all and the quoting rules are cumbersome; furthermore I \nlack column names in the array, which would make the parsing sensitive to \nfuture table changes and thus cumbersome to maintain) in my application\n- I know I could solve this with a separate query for a, b, c, d and e \nwhile \"joining\" the results in my application, but I am looking for \nanother way to do this (bear with me, treat this as an academic question :))\n- I am using \"to_jsonb\" to simply map the result to my data model via a \nJSON mapper\n- EXPLAIN ANALYZE is not showing anything special when using \"to_jsonb\" \nvs. not using it, the outermost (hash) join just takes more time - is \nthere a more granular EXPLAIN that shows me the runtime of functions \nlike \"to_jsonb\"?\n- I tried an approach where b, c, d, e were array columns of UDTs: UDTs \nare not well supported by my application stack (JDBC) and are generally \nundesirable for me (because of a lack of migration possibilities)\n- I don't want to duplicate my data into another table (e.g. one that has \njsonb columns)\n- MATERIALIZED VIEWS are also undesirable because they require manual \nupdates and their refresh is non-incremental, which would make a refresh \non a big data set take a long time\n- splitting the query into chunks to reduce the IN()-statement list size \nmakes no measurable difference\n- I don't want to use JSONB columns for b, c, d and e because future \nchanges of b, c, d or e's structure (e.g. new fields, changing a datatype) \nare harder to achieve with JSONB and it lacks constraint checks on \ninsert (e.g. not null on column b.xy)\n\nKind regards and thank you for your time,\nNico Heller\n\nP.S.: Sorry for the long list of \"I don't want to do this\"; some of them \nare not possible because of other requirements",
"msg_date": "Fri, 12 Aug 2022 18:49:58 +0000",
"msg_from": "Nico Heller <nico.heller@posteo.de>",
"msg_from_op": true,
"msg_subject": "to_jsonb performance on array aggregated correlated subqueries"
},
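One variant worth benchmarking here is aggregating straight to jsonb instead of building arrays of anonymous records and converting afterwards, so to_jsonb never has to expand record arrays (same tables as the original query; whether it is actually cheaper needs measuring):

```sql
SELECT to_jsonb(a.*)
       || jsonb_build_object(
            'bs', (SELECT coalesce(jsonb_agg(to_jsonb(b.*)), '[]'::jsonb)
                   FROM b WHERE b.a_id = a.id),
            'cs', (SELECT coalesce(jsonb_agg(to_jsonb(c.*)), '[]'::jsonb)
                   FROM c WHERE c.a_id = a.id),
            'ds', (SELECT coalesce(jsonb_agg(to_jsonb(d.*)), '[]'::jsonb)
                   FROM d WHERE d.a_id = a.id),
            'es', (SELECT coalesce(jsonb_agg(to_jsonb(e.*)), '[]'::jsonb)
                   FROM e WHERE e.a_id = a.id)
          ) AS value
FROM a
WHERE a.id = ANY ('{1,2,3}'::bigint[]);  -- placeholder for the 20-180 id list
```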
{
"msg_contents": "What version of postgres ?\n\nI wonder if you're hitting the known memory leak involving jit.\nTry with jit=off or jit_inline_above_cost=-1.\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 12 Aug 2022 13:56:06 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: to_jsonb performance on array aggregated correlated subqueries"
},
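Justin's suggestion can be tried per session, without touching postgresql.conf:

```sql
SET jit = off;                   -- disable JIT compilation for this session
-- or, less drastic: keep JIT but skip the inlining step
SET jit_inline_above_cost = -1;
-- then re-run the query under EXPLAIN (ANALYZE, BUFFERS) and compare;
-- with jit = off the "JIT" section disappears from the plan output.
```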
{
"msg_contents": "I knew I forgot something: We are currently on 13.6. When was this issue \nfixed?\n\nAm 12.08.2022 um 20:56 schrieb Justin Pryzby:\n> What version of postgres ?\n>\n> I wonder if you're hitting the known memory leak involving jit.\n> Try with jit=off or jit_inline_above_cost=-1.\n\n> Good day,\n>\n>\n>\n> consider the following query:\n>\n>\n>\n> WITH aggregation(\n>\n> SELECT\n>\n> a.*,\n>\n> (SELECT array_agg(b.*) FROM b WHERE b.a_id = a.id) as \"bs\",\n>\n> (SELECT array_agg(c.*) FROM c WHERE c.a_id = a.id) as \"cs\",\n>\n> (SELECT array_agg(d.*) FROM d WHERE d.a_id = a.id) as \"ds\",\n>\n> (SELECT array_agg(e.*) FROM d WHERE e.a_id = a.id) as \"es\"\n>\n> FROM a WHERE a.id IN (<some big list, ranging from 20-180 entries)\n>\n> )\n>\n> SELECT to_jsonb(aggregation.*) as \"value\" FROM aggregation;\n>\n>\n>\n> Imagine that for each \"a\" there exists between 5-100 \"b\", \"c\", \"d\" and\n> \"e\" which makes the result of this pretty big (worst case: around 300kb\n> when saved to a text file).\n>\n> I noticed that adding the \"to_jsonb\" increases the query time by 100%,\n> from 9-10ms to 17-23ms on average.\n>\n> This may not seem slow at all but this query has another issue: on an\n> AWS Aurora Serverless V2 instance we are running into a RAM usage of\n> around 30-50 GB compared to < 10 GB when using a simple LEFT JOINed\n> query when under high load (> 1000 queries / sec). Furthermore the CPU\n> usage is quite high.\n>\n>\n>\n> Is there anything I could improve? I am open for other solutions but I\n> am wondering if I ran into an edge case of \"to_jsonb\" for \"anonymous\n> records\" (these are just rows without a defined UDT) - this is just a\n> wild guess though.\n>\n> I am mostly looking to decrease the load (CPU and memory) on Postgres\n> itself. Furthermore I would like to know why the memory usage is so\n> significant. 
Any tips on how to analyze this issue are appreciated as\n> well - my knowledge is limited to being average at interpreting EXPLAIN\n> ANALYZE results.\n>\n>\n>\n> Here's a succinct list of the why's, what I have found out so far and\n> solution I already tried/ don't want to consider:\n>\n>\n>\n> - LEFT JOINing potentially creates a huge resultset because of the\n> cartesian product, thats a nono\n>\n> - not using \"to_jsonb\" is sadly also not possible as Postgres' array +\n> record syntax is very unfriendly and hard to parse (it's barely\n> documented if at all and the quoting rules are cumbersome, furthermore I\n> lack column names in the array which would make the parsing sensitive to\n> future table changes and thus cumbersome to maintain) in my application\n>\n> - I know I could solve this with a separate query for a,b,c,d and e\n> while \"joinining\" the result in my application, but I am looking for\n> another way to do this (bear with me, treat this as an academic question :))\n>\n> - I am using \"to_jsonb\" to simply map the result to my data model via a\n> json mapper\n>\n> - EXPLAIN ANALYZE is not showing anything special when using \"to_jsonb\"\n> vs. not using it, the outermost (hash) join just takes more time - is\n> there a more granular EXPLAIN that shows me the runtime of functions\n> like \"to_jsonb\"?\n>\n> - I tried an approach where b,c,d,e where array columns of UDTs: UDTs\n> are not well supported by my application stack (JDBC) and are generally\n> undesireable for me (because of a lack of migration possibilities)\n>\n> - I don't want to duplicate my data into another table (e.g. 
that has\n> jsonb columns)\n>\n> - MATERIALIZED VIEWS are also undesirable as the manual update, its\n> update is non-incremental which would make a refresh on a big data set\n> take a long time\n>\n> - split the query into chunks to reduce the IN()-statement list size\n> makes no measurable difference\n>\n> - I don't want to use JSONB columns for b,c,d and e because future\n> changes of b,c,d or e's structure (e.g. new fields, changing a datatype)\n> are harder to achieve with JSONB and it lacks constraint checks on\n> insert (e.g. not null on column b.xy)\n>\n> Kind regards and thank you for your time,\n>\n> Nico Heller\n>\n> P.S: Sorry for the long list of \"I don't want to do this\", some of them\n> are not possible because of other requirements\n>",
"msg_date": "Fri, 12 Aug 2022 19:02:36 +0000",
"msg_from": "Nico Heller <nico.heller@posteo.de>",
"msg_from_op": true,
"msg_subject": "Re: to_jsonb performance on array aggregated correlated subqueries"
},
{
"msg_contents": "Am 12.08.2022 um 21:02 schrieb Rick Otten:\n\n>\n>\n> On Fri, Aug 12, 2022 at 2:50 PM Nico Heller <nico.heller@posteo.de> wrote:\n>\n> Good day,\n>\n> consider the following query:\n>\n> WITH aggregation(\n> SELECT\n> a.*,\n> (SELECT array_agg(b.*) FROM b WHERE b.a_id = a.id\n> <http://a.id>) as \"bs\",\n> (SELECT array_agg(c.*) FROM c WHERE c.a_id = a.id\n> <http://a.id>) as \"cs\",\n> (SELECT array_agg(d.*) FROM d WHERE d.a_id = a.id\n> <http://a.id>) as \"ds\",\n> (SELECT array_agg(e.*) FROM d WHERE e.a_id = a.id\n> <http://a.id>) as \"es\"\n> FROM a WHERE a.id <http://a.id> IN (<some big list, ranging\n> from 20-180 entries)\n> )\n> SELECT to_jsonb(aggregation.*) as \"value\" FROM aggregation;\n>\n>\n> - You do have an index on `b.a_id` and `c.a_id`, etc... ? You didn't \n> say...\nYes there are indices on all referenced columns of the subselect (they \nare all primary keys anyway)\n> - Are you sure it is the `to_jsonb` that is making this query slow?\nYes, EXPLAIN ANALYZE shows a doubling of execution time - I don't have \nnumbers on the memory usage difference though\n>\n> - Since you are serializing this for easy machine readable consumption \n> outside of the database, does it make a difference if you use \n> `to_json` instead?\n>\nUsing to_json vs. 
to_jsonb makes no difference in regards to runtime, I \nwill check if the memory consumption is different on monday - thank you \nfor the idea!\n\n\n\n\n\n\nAm 12.08.2022 um 21:02 schrieb Rick Otten:\n\n\n\n\n\n\n\n\nOn Fri, Aug 12, 2022 at 2:50\n PM Nico Heller <nico.heller@posteo.de>\n wrote:\n\nGood day,\n\n consider the following query:\n\n WITH aggregation(\n SELECT\n a.*,\n (SELECT array_agg(b.*) FROM b WHERE b.a_id = a.id) as \"bs\",\n (SELECT array_agg(c.*) FROM c WHERE c.a_id = a.id) as \"cs\",\n (SELECT array_agg(d.*) FROM d WHERE d.a_id = a.id) as \"ds\",\n (SELECT array_agg(e.*) FROM d WHERE e.a_id = a.id) as \"es\"\n FROM a WHERE a.id IN\n (<some big list, ranging from 20-180 entries)\n )\n SELECT to_jsonb(aggregation.*) as \"value\" FROM aggregation;\n\n\n\n\n- You do have an index on `b.a_id` and `c.a_id`, etc...\n ? You didn't say...\n\n\n\n Yes there are indices on all referenced columns of the subselect\n (they are all primary keys anyway)\n\n\n\n- Are you sure it is the `to_jsonb` that is making this\n query slow?\n\n\n\n Yes, EXPLAIN ANALYZE shows a doubling of execution time - I don't\n have numbers on the memory usage difference though\n\n\n\n\n\n- Since you are serializing this for easy machine\n readable consumption outside of the database, does it make a\n difference if you use `to_json` instead?\n\n\n\n\n\nUsing to_json vs. to_jsonb makes no difference in regards to\n runtime, I will check if the memory consumption is different on\n monday - thank you for the idea!",
"msg_date": "Fri, 12 Aug 2022 19:07:20 +0000",
"msg_from": "Nico Heller <nico.heller@posteo.de>",
"msg_from_op": true,
"msg_subject": "Re: to_jsonb performance on array aggregated correlated subqueries"
},
{
"msg_contents": "On Fri, Aug 12, 2022 at 07:02:36PM +0000, Nico Heller wrote:\n> I knew I forgot something: We are currently on 13.6. When was this issue\n> fixed?\n\nThere's a WIP/proposed fix, but the fix is not released.\nI asked about your version because jit was disabled by default in v11.\nBut it's enabled by default in v12.\n\nhttps://wiki.postgresql.org/wiki/PostgreSQL_15_Open_Items#Older_bugs_affecting_stable_branches\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 12 Aug 2022 14:10:53 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: to_jsonb performance on array aggregated correlated subqueries"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-12 18:49:58 +0000, Nico Heller wrote:\n> WITH aggregation(\n> ��� SELECT\n> ���������� a.*,\n> ��������� (SELECT array_agg(b.*) FROM b WHERE b.a_id = a.id) as \"bs\",\n> ��������� (SELECT array_agg(c.*) FROM c WHERE c.a_id = a.id) as \"cs\",\n> ��������� (SELECT array_agg(d.*) FROM d WHERE d.a_id = a.id) as \"ds\",\n> ��������� (SELECT array_agg(e.*) FROM d WHERE e.a_id = a.id) as \"es\"\n> ��� FROM a WHERE a.id IN (<some big list, ranging from 20-180 entries)\n> )\n> SELECT to_jsonb(aggregation.*) as \"value\" FROM aggregation;\n\n> Imagine that for each \"a\" there exists between 5-100 \"b\", \"c\", \"d\" and \"e\"\n> which makes the result of this pretty big (worst case: around 300kb when\n> saved to a text file).\n> I noticed that adding the \"to_jsonb\" increases the query time by 100%, from\n> 9-10ms to 17-23ms on average.\n\nCould we see the explain?\n\nHave you tried using json[b]_agg()?\n\n\n> This may not seem slow at all but this query has another issue: on an AWS\n> Aurora Serverless V2 instance we are running into a RAM usage of around\n> 30-50 GB compared to < 10 GB when using a simple LEFT JOINed query when\n> under high load (> 1000 queries / sec). Furthermore the CPU usage is quite\n> high.\n\nWe can't say much about aurora. It's a heavily modified fork of postgres. Did\nyou reproduce this with vanilla postgres? And if so, do you have it in a form\nthat somebody could try out?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 12 Aug 2022 12:15:35 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: to_jsonb performance on array aggregated correlated subqueries"
},
{
"msg_contents": "Am 12.08.2022 um 21:15 schrieb Rick Otten:\n>\n>\n> On Fri, Aug 12, 2022 at 3:07 PM Nico Heller <nico.heller@posteo.de> wrote:\n>\n> Am 12.08.2022 um 21:02 schrieb Rick Otten:\n>\n>>\n>>\n>> On Fri, Aug 12, 2022 at 2:50 PM Nico Heller\n>> <nico.heller@posteo.de> wrote:\n>>\n>> Good day,\n>>\n>> consider the following query:\n>>\n>> WITH aggregation(\n>> SELECT\n>> a.*,\n>> (SELECT array_agg(b.*) FROM b WHERE b.a_id = a.id\n>> <http://a.id>) as \"bs\",\n>> (SELECT array_agg(c.*) FROM c WHERE c.a_id = a.id\n>> <http://a.id>) as \"cs\",\n>> (SELECT array_agg(d.*) FROM d WHERE d.a_id = a.id\n>> <http://a.id>) as \"ds\",\n>> (SELECT array_agg(e.*) FROM d WHERE e.a_id = a.id\n>> <http://a.id>) as \"es\"\n>> FROM a WHERE a.id <http://a.id> IN (<some big list,\n>> ranging from 20-180 entries)\n>> )\n>> SELECT to_jsonb(aggregation.*) as \"value\" FROM aggregation;\n>>\n>>\n>> - You do have an index on `b.a_id` and `c.a_id`, etc... ? You\n>> didn't say...\n> Yes there are indices on all referenced columns of the subselect\n> (they are all primary keys anyway)\n>> - Are you sure it is the `to_jsonb` that is making this query slow?\n> Yes, EXPLAIN ANALYZE shows a doubling of execution time - I don't\n> have numbers on the memory usage difference though\n>>\n>> - Since you are serializing this for easy machine readable\n>> consumption outside of the database, does it make a difference if\n>> you use `to_json` instead?\n>>\n> Using to_json vs. to_jsonb makes no difference in regards to\n> runtime, I will check if the memory consumption is different on\n> monday - thank you for the idea!\n>\n>\n> One other thought. Does it help if you convert the arrays to json \n> first before you convert the whole row? ie, add some to_json()'s \n> around the bs, cs, ds, es columns in the CTE. I'm wondering if \n> breaking the json conversions up into smaller pieces will let the \n> outer to_json() have less work to do and overall run faster. 
You \n> could even separately serialize the elements inside the array too. I \n> wouldn't think it would make a huge difference, you'd be making a \n> bunch of extra to_json calls, but maybe it avoids some large memory \n> structure that would otherwise have to be constructed to serialize all \n> of those objects in all of the arrays all at the same time.\n\nUsing jsonb_array_agg and another to_jsonb at the (its still needed to \ncreate one value at the end and to include the columns \"a.*\") worsens \nthe query performance by 100%, I can't speak for the memory usage \nbecause I would have to push these changes to preproduction - will try \nthis on monday, thanks.\n\n\n\n\n\n\n\n\nAm 12.08.2022 um 21:15 schrieb Rick\n Otten:\n\n\n\n\n\n\n\n\nOn Fri, Aug 12, 2022 at 3:07\n PM Nico Heller <nico.heller@posteo.de>\n wrote:\n\n\n\nAm 12.08.2022 um 21:02 schrieb Rick Otten:\n\n\n\n\n\n\n\nOn Fri, Aug 12,\n 2022 at 2:50 PM Nico Heller <nico.heller@posteo.de>\n wrote:\n\nGood day,\n\n consider the following query:\n\n WITH aggregation(\n SELECT\n a.*,\n (SELECT array_agg(b.*) FROM b WHERE\n b.a_id = a.id)\n as \"bs\",\n (SELECT array_agg(c.*) FROM c WHERE\n c.a_id = a.id)\n as \"cs\",\n (SELECT array_agg(d.*) FROM d WHERE\n d.a_id = a.id)\n as \"ds\",\n (SELECT array_agg(e.*) FROM d WHERE\n e.a_id = a.id)\n as \"es\"\n FROM a WHERE a.id IN (<some big\n list, ranging from 20-180 entries)\n )\n SELECT to_jsonb(aggregation.*) as \"value\" FROM\n aggregation;\n\n\n\n\n- You do have an index on `b.a_id` and\n `c.a_id`, etc... ? 
You didn't say...\n\n\n\n Yes there are indices on all referenced columns of the\n subselect (they are all primary keys anyway)\n\n\n\n- Are you sure it is the `to_jsonb` that is\n making this query slow?\n\n\n\n Yes, EXPLAIN ANALYZE shows a doubling of execution time -\n I don't have numbers on the memory usage difference though\n\n\n\n\n\n- Since you are serializing this for easy\n machine readable consumption outside of the\n database, does it make a difference if you use\n `to_json` instead?\n\n\n\n\n\nUsing to_json vs. to_jsonb makes no difference in\n regards to runtime, I will check if the memory\n consumption is different on monday - thank you for the\n idea!\n\n\n\n\n\nOne other thought. Does it help if you convert the\n arrays to json first before you convert the whole row? ie,\n add some to_json()'s around the bs, cs, ds, es columns in\n the CTE. I'm wondering if breaking the json conversions up\n into smaller pieces will let the outer to_json() have less\n work to do and overall run faster. You could even\n separately serialize the elements inside the array too. I\n wouldn't think it would make a huge difference, you'd be\n making a bunch of extra to_json calls, but maybe it avoids\n some large memory structure that would otherwise have to be\n constructed to serialize all of those objects in all of the\n arrays all at the same time.\n\n\n\nUsing jsonb_array_agg and another to_jsonb at the (its still\n needed to create one value at the end and to include the columns\n \"a.*\") worsens the query performance by 100%, I can't speak for\n the memory usage because I would have to push these changes to\n preproduction - will try this on monday, thanks.",
"msg_date": "Fri, 12 Aug 2022 19:18:33 +0000",
"msg_from": "Nico Heller <nico.heller@posteo.de>",
"msg_from_op": true,
"msg_subject": "Re: to_jsonb performance on array aggregated correlated subqueries"
},
{
"msg_contents": "Here are the query plans (I hope my anonymization didn't break them). I \nran every query a couple times before copying the plan to avoid timing \nissues because of disk access.\nIgnore the sequential scan on one of the tables, it's very small (will \nchange in the future) so Postgres opts for a faster sequential scan - \nthe other sequential scan is on the IN()-statement which uses a VALUE \nlist in the actual query (using a non-VALUE list makes no difference).\nOverall the plan is quite optimal for me and performs really well \nconsidering the amount of rows it extracts and converts to json.\n\nNotice how removing to_jsonb improves the query performance \nsignificantly (see last query plan) and how the cost is attributed to \nthe hash join.\nUsing to_jsonb instead of to_jsonb or json_agg instead of jsonb_agg \nmakes no difference in query plan or execution time.\n\nI used random id's so I don't know how how big the result got but it \nshouldn't matter for the query plan:\n\n\n_*array_agg, then to_jsonb (my initially posted query)*_\n\nHash Semi Join (cost=5.00..15947.39 rows=200 width=32) (actual \ntime=0.266..18.128 rows=200 loops=1)\n\" Hash Cond: (a.id = \"\"*VALUES*\"\".column1)\"\n -> Seq Scan on a (cost=0.00..41.02 rows=502 width=422) (actual \ntime=0.013..0.268 rows=502 loops=1)\n -> Hash (cost=2.50..2.50 rows=200 width=32) (actual \ntime=0.091..0.092 rows=200 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 21kB\n\" -> Values Scan on \"\"*VALUES*\"\" (cost=0.00..2.50 rows=200 \nwidth=32) (actual time=0.001..0.040 rows=200 loops=1)\"\n SubPlan 1\n -> Aggregate (cost=42.20..42.21 rows=1 width=32) (actual \ntime=0.020..0.020 rows=1 loops=200)\n -> Bitmap Heap Scan on b (cost=4.38..42.17 rows=12 \nwidth=156) (actual time=0.012..0.017 rows=12 loops=200)\n Recheck Cond: (a_id = a.id)\n Heap Blocks: exact=382\n -> Bitmap Index Scan on fk_b_idx (cost=0.00..4.37 \nrows=12 width=0) (actual time=0.008..0.008 rows=14 loops=200)\n Index Cond: (a_id = 
a.id)\n SubPlan 2\n -> Aggregate (cost=27.68..27.69 rows=1 width=32) (actual \ntime=0.012..0.012 rows=1 loops=200)\n -> Bitmap Heap Scan on c (cost=4.35..27.66 rows=9 width=98) \n(actual time=0.009..0.010 rows=5 loops=200)\n Recheck Cond: (a_id = a.id)\n Heap Blocks: exact=169\n -> Bitmap Index Scan on fk_c_idx (cost=0.00..4.35 \nrows=9 width=0) (actual time=0.007..0.007 rows=5 loops=200)\n Index Cond: (a_id = a.id)\n SubPlan 3\n -> Aggregate (cost=8.30..8.31 rows=1 width=32) (actual \ntime=0.009..0.010 rows=1 loops=200)\n -> Index Scan using fk_d_idx on d (cost=0.28..8.29 rows=1 \nwidth=81) (actual time=0.008..0.008 rows=1 loops=200)\n Index Cond: (a_id = a.id)\n SubPlan 4\n -> Aggregate (cost=1.27..1.28 rows=1 width=32) (actual \ntime=0.005..0.005 rows=1 loops=200)\n -> Seq Scan on e (cost=0.00..1.26 rows=1 width=76) (actual \ntime=0.004..0.004 rows=0 loops=200)\n Filter: (a_id = a.id)\n Rows Removed by Filter: 21\nPlanning Time: 0.520 ms\nExecution Time: 18.650 ms\n\n_*jsonb_agg instead of array_agg, then to_jsonb*_\n\nHash Semi Join (cost=5.00..15947.39 rows=200 width=32) (actual \ntime=0.338..23.921 rows=200 loops=1)\n\" Hash Cond: (a.id = \"\"*VALUES*\"\".column1)\"\n -> Seq Scan on a (cost=0.00..41.02 rows=502 width=422) (actual \ntime=0.012..0.244 rows=502 loops=1)\n -> Hash (cost=2.50..2.50 rows=200 width=32) (actual \ntime=0.090..0.091 rows=200 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 21kB\n\" -> Values Scan on \"\"*VALUES*\"\" (cost=0.00..2.50 rows=200 \nwidth=32) (actual time=0.001..0.040 rows=200 loops=1)\"\n SubPlan 1\n -> Aggregate (cost=42.20..42.21 rows=1 width=32) (actual \ntime=0.050..0.050 rows=1 loops=200)\n -> Bitmap Heap Scan on b (cost=4.38..42.17 rows=12 \nwidth=156) (actual time=0.012..0.018 rows=12 loops=200)\n Recheck Cond: (a_id = a.id)\n Heap Blocks: exact=382\n -> Bitmap Index Scan on fk_b_idx (cost=0.00..4.37 \nrows=12 width=0) (actual time=0.008..0.008 rows=14 loops=200)\n Index Cond: (a_id = a.id)\n SubPlan 2\n -> 
Aggregate (cost=27.68..27.69 rows=1 width=32) (actual \ntime=0.028..0.028 rows=1 loops=200)\n -> Bitmap Heap Scan on c (cost=4.35..27.66 rows=9 width=98) \n(actual time=0.009..0.011 rows=5 loops=200)\n Recheck Cond: (a_id = a.id)\n Heap Blocks: exact=169\n -> Bitmap Index Scan on fk_c_idx (cost=0.00..4.35 \nrows=9 width=0) (actual time=0.007..0.007 rows=5 loops=200)\n Index Cond: (a_id = a.id)\n SubPlan 3\n -> Aggregate (cost=8.30..8.31 rows=1 width=32) (actual \ntime=0.014..0.014 rows=1 loops=200)\n -> Index Scan using fk_d_idx on d (cost=0.28..8.29 rows=1 \nwidth=81) (actual time=0.008..0.008 rows=1 loops=200)\n Index Cond: (a_id = a.id)\n SubPlan 4\n -> Aggregate (cost=1.27..1.28 rows=1 width=32) (actual \ntime=0.005..0.005 rows=1 loops=200)\n -> Seq Scan on e (cost=0.00..1.26 rows=1 width=76) (actual \ntime=0.004..0.004 rows=0 loops=200)\n Filter: (a_id = a.id)\n Rows Removed by Filter: 21\nPlanning Time: 0.513 ms\nExecution Time: 24.020 ms\n\n_*array_agg without to_jsonb at the end*_\n\nHash Semi Join (cost=5.00..15946.89 rows=200 width=550) (actual \ntime=0.209..9.784 rows=200 loops=1)\n\" Hash Cond: (a.id = \"\"*VALUES*\"\".column1)\"\n -> Seq Scan on a (cost=0.00..41.02 rows=502 width=422) (actual \ntime=0.013..0.190 rows=502 loops=1)\n -> Hash (cost=2.50..2.50 rows=200 width=32) (actual \ntime=0.079..0.080 rows=200 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 21kB\n\" -> Values Scan on \"\"*VALUES*\"\" (cost=0.00..2.50 rows=200 \nwidth=32) (actual time=0.001..0.040 rows=200 loops=1)\"\n SubPlan 1\n -> Aggregate (cost=42.20..42.21 rows=1 width=32) (actual \ntime=0.019..0.019 rows=1 loops=200)\n -> Bitmap Heap Scan on b (cost=4.38..42.17 rows=12 \nwidth=156) (actual time=0.012..0.017 rows=12 loops=200)\n Recheck Cond: (a_id = a.id)\n Heap Blocks: exact=382\n -> Bitmap Index Scan on fk_b_idx (cost=0.00..4.37 \nrows=12 width=0) (actual time=0.008..0.008 rows=14 loops=200)\n Index Cond: (a_id = a.id)\n SubPlan 2\n -> Aggregate (cost=27.68..27.69 rows=1 
width=32) (actual \ntime=0.012..0.012 rows=1 loops=200)\n -> Bitmap Heap Scan on c (cost=4.35..27.66 rows=9 width=98) \n(actual time=0.008..0.010 rows=5 loops=200)\n Recheck Cond: (a_id = a.id)\n Heap Blocks: exact=169\n -> Bitmap Index Scan on fk_c_idx (cost=0.00..4.35 \nrows=9 width=0) (actual time=0.007..0.007 rows=5 loops=200)\n Index Cond: (a_id = a.id)\n SubPlan 3\n -> Aggregate (cost=8.30..8.31 rows=1 width=32) (actual \ntime=0.009..0.009 rows=1 loops=200)\n -> Index Scan using fk_d_idx on d (cost=0.28..8.29 rows=1 \nwidth=81) (actual time=0.008..0.008 rows=1 loops=200)\n Index Cond: (a_id = a.id)\n SubPlan 4\n -> Aggregate (cost=1.27..1.28 rows=1 width=32) (actual \ntime=0.005..0.005 rows=1 loops=200)\n -> Seq Scan on e (cost=0.00..1.26 rows=1 width=76) (actual \ntime=0.004..0.004 rows=0 loops=200)\n Filter: (a_id = a.id)\n Rows Removed by Filter: 21\nPlanning Time: 0.496 ms\nExecution Time: 9.892 ms\n\n\n\nAm 12.08.2022 um 21:15 schrieb Andres Freund:\n> Hi,\n>\n> On 2022-08-12 18:49:58 +0000, Nico Heller wrote:\n>> WITH aggregation(\n>> SELECT\n>> a.*,\n>> (SELECT array_agg(b.*) FROM b WHERE b.a_id = a.id) as \"bs\",\n>> (SELECT array_agg(c.*) FROM c WHERE c.a_id = a.id) as \"cs\",\n>> (SELECT array_agg(d.*) FROM d WHERE d.a_id = a.id) as \"ds\",\n>> (SELECT array_agg(e.*) FROM d WHERE e.a_id = a.id) as \"es\"\n>> FROM a WHERE a.id IN (<some big list, ranging from 20-180 entries)\n>> )\n>> SELECT to_jsonb(aggregation.*) as \"value\" FROM aggregation;\n>> Imagine that for each \"a\" there exists between 5-100 \"b\", \"c\", \"d\" and \"e\"\n>> which makes the result of this pretty big (worst case: around 300kb when\n>> saved to a text file).\n>> I noticed that adding the \"to_jsonb\" increases the query time by 100%, from\n>> 9-10ms to 17-23ms on average.\n> Could we see the explain?\n>\n> Have you tried using json[b]_agg()?\n>\n>\n>> This may not seem slow at all but this query has another issue: on an AWS\n>> Aurora Serverless V2 instance we are running 
into a RAM usage of around\n>> 30-50 GB compared to < 10 GB when using a simple LEFT JOINed query when\n>> under high load (> 1000 queries / sec). Furthermore the CPU usage is quite\n>> high.\n> We can't say much about aurora. It's a heavily modified fork of postgres. Did\n> you reproduce this with vanilla postgres? And if so, do you have it in a form\n> that somebody could try out?\n>\n> Greetings,\n>\n> Andres Freund\n\n\n\n\n\nHere are the query plans (I hope my anonymization didn't break\n them). I ran every query a couple times before copying the plan to\n avoid timing issues because of disk access.\n Ignore the sequential scan on one of the tables, it's very small\n (will change in the future) so Postgres opts for a faster\n sequential scan - the other sequential scan is on the\n IN()-statement which uses a VALUE list in the actual query (using\n a non-VALUE list makes no difference).\n Overall the plan is quite optimal for me and performs really well\n considering the amount of rows it extracts and converts to json.\n\n Notice how removing to_jsonb improves the query performance\n significantly (see last query plan) and how the cost is attributed\n to the hash join.\n Using to_jsonb instead of to_jsonb or json_agg instead of\n jsonb_agg makes no difference in query plan or execution time.\nI used random id's so I don't know how how big the result got but\n it shouldn't matter for the query plan:\n\n\narray_agg, then to_jsonb (my initially posted query)\n \n Hash Semi Join (cost=5.00..15947.39 rows=200 width=32) (actual\n time=0.266..18.128 rows=200 loops=1)\n \" Hash Cond: (a.id = \"\"*VALUES*\"\".column1)\"\n -> Seq Scan on a (cost=0.00..41.02 rows=502 width=422)\n (actual time=0.013..0.268 rows=502 loops=1)\n -> Hash (cost=2.50..2.50 rows=200 width=32) (actual\n time=0.091..0.092 rows=200 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 21kB\n \" -> Values Scan on \"\"*VALUES*\"\" (cost=0.00..2.50\n rows=200 width=32) (actual time=0.001..0.040 rows=200 
loops=1)\"\n SubPlan 1\n -> Aggregate (cost=42.20..42.21 rows=1 width=32) (actual\n time=0.020..0.020 rows=1 loops=200)\n -> Bitmap Heap Scan on b (cost=4.38..42.17 rows=12\n width=156) (actual time=0.012..0.017 rows=12 loops=200)\n Recheck Cond: (a_id = a.id)\n Heap Blocks: exact=382\n -> Bitmap Index Scan on fk_b_idx \n (cost=0.00..4.37 rows=12 width=0) (actual time=0.008..0.008\n rows=14 loops=200)\n Index Cond: (a_id = a.id)\n SubPlan 2\n -> Aggregate (cost=27.68..27.69 rows=1 width=32) (actual\n time=0.012..0.012 rows=1 loops=200)\n -> Bitmap Heap Scan on c (cost=4.35..27.66 rows=9\n width=98) (actual time=0.009..0.010 rows=5 loops=200)\n Recheck Cond: (a_id = a.id)\n Heap Blocks: exact=169\n -> Bitmap Index Scan on fk_c_idx \n (cost=0.00..4.35 rows=9 width=0) (actual time=0.007..0.007 rows=5\n loops=200)\n Index Cond: (a_id = a.id)\n SubPlan 3\n -> Aggregate (cost=8.30..8.31 rows=1 width=32) (actual\n time=0.009..0.010 rows=1 loops=200)\n -> Index Scan using fk_d_idx on d (cost=0.28..8.29\n rows=1 width=81) (actual time=0.008..0.008 rows=1 loops=200)\n Index Cond: (a_id = a.id)\n SubPlan 4\n -> Aggregate (cost=1.27..1.28 rows=1 width=32) (actual\n time=0.005..0.005 rows=1 loops=200)\n -> Seq Scan on e (cost=0.00..1.26 rows=1 width=76)\n (actual time=0.004..0.004 rows=0 loops=200)\n Filter: (a_id = a.id)\n Rows Removed by Filter: 21\n Planning Time: 0.520 ms\n Execution Time: 18.650 ms\n \njsonb_agg instead of array_agg, then to_jsonb\n \n Hash Semi Join (cost=5.00..15947.39 rows=200 width=32) (actual\n time=0.338..23.921 rows=200 loops=1)\n \" Hash Cond: (a.id = \"\"*VALUES*\"\".column1)\"\n -> Seq Scan on a (cost=0.00..41.02 rows=502 width=422)\n (actual time=0.012..0.244 rows=502 loops=1)\n -> Hash (cost=2.50..2.50 rows=200 width=32) (actual\n time=0.090..0.091 rows=200 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 21kB\n \" -> Values Scan on \"\"*VALUES*\"\" (cost=0.00..2.50\n rows=200 width=32) (actual time=0.001..0.040 rows=200 loops=1)\"\n SubPlan 
1\n -> Aggregate (cost=42.20..42.21 rows=1 width=32) (actual\n time=0.050..0.050 rows=1 loops=200)\n -> Bitmap Heap Scan on b (cost=4.38..42.17 rows=12\n width=156) (actual time=0.012..0.018 rows=12 loops=200)\n Recheck Cond: (a_id = a.id)\n Heap Blocks: exact=382\n -> Bitmap Index Scan on fk_b_idx \n (cost=0.00..4.37 rows=12 width=0) (actual time=0.008..0.008\n rows=14 loops=200)\n Index Cond: (a_id = a.id)\n SubPlan 2\n -> Aggregate (cost=27.68..27.69 rows=1 width=32) (actual\n time=0.028..0.028 rows=1 loops=200)\n -> Bitmap Heap Scan on c (cost=4.35..27.66 rows=9\n width=98) (actual time=0.009..0.011 rows=5 loops=200)\n Recheck Cond: (a_id = a.id)\n Heap Blocks: exact=169\n -> Bitmap Index Scan on fk_c_idx \n (cost=0.00..4.35 rows=9 width=0) (actual time=0.007..0.007 rows=5\n loops=200)\n Index Cond: (a_id = a.id)\n SubPlan 3\n -> Aggregate (cost=8.30..8.31 rows=1 width=32) (actual\n time=0.014..0.014 rows=1 loops=200)\n -> Index Scan using fk_d_idx on d (cost=0.28..8.29\n rows=1 width=81) (actual time=0.008..0.008 rows=1 loops=200)\n Index Cond: (a_id = a.id)\n SubPlan 4\n -> Aggregate (cost=1.27..1.28 rows=1 width=32) (actual\n time=0.005..0.005 rows=1 loops=200)\n -> Seq Scan on e (cost=0.00..1.26 rows=1 width=76)\n (actual time=0.004..0.004 rows=0 loops=200)\n Filter: (a_id = a.id)\n Rows Removed by Filter: 21\n Planning Time: 0.513 ms\n Execution Time: 24.020 ms\n \narray_agg without to_jsonb at the end\n \n Hash Semi Join (cost=5.00..15946.89 rows=200 width=550) (actual\n time=0.209..9.784 rows=200 loops=1)\n \" Hash Cond: (a.id = \"\"*VALUES*\"\".column1)\"\n -> Seq Scan on a (cost=0.00..41.02 rows=502 width=422)\n (actual time=0.013..0.190 rows=502 loops=1)\n -> Hash (cost=2.50..2.50 rows=200 width=32) (actual\n time=0.079..0.080 rows=200 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 21kB\n \" -> Values Scan on \"\"*VALUES*\"\" (cost=0.00..2.50\n rows=200 width=32) (actual time=0.001..0.040 rows=200 loops=1)\"\n SubPlan 1\n -> Aggregate 
(cost=42.20..42.21 rows=1 width=32) (actual\n time=0.019..0.019 rows=1 loops=200)\n -> Bitmap Heap Scan on b (cost=4.38..42.17 rows=12\n width=156) (actual time=0.012..0.017 rows=12 loops=200)\n Recheck Cond: (a_id = a.id)\n Heap Blocks: exact=382\n -> Bitmap Index Scan on fk_b_idx \n (cost=0.00..4.37 rows=12 width=0) (actual time=0.008..0.008\n rows=14 loops=200)\n Index Cond: (a_id = a.id)\n SubPlan 2\n -> Aggregate (cost=27.68..27.69 rows=1 width=32) (actual\n time=0.012..0.012 rows=1 loops=200)\n -> Bitmap Heap Scan on c (cost=4.35..27.66 rows=9\n width=98) (actual time=0.008..0.010 rows=5 loops=200)\n Recheck Cond: (a_id = a.id)\n Heap Blocks: exact=169\n -> Bitmap Index Scan on fk_c_idx \n (cost=0.00..4.35 rows=9 width=0) (actual time=0.007..0.007 rows=5\n loops=200)\n Index Cond: (a_id = a.id)\n SubPlan 3\n -> Aggregate (cost=8.30..8.31 rows=1 width=32) (actual\n time=0.009..0.009 rows=1 loops=200)\n -> Index Scan using fk_d_idx on d (cost=0.28..8.29\n rows=1 width=81) (actual time=0.008..0.008 rows=1 loops=200)\n Index Cond: (a_id = a.id)\n SubPlan 4\n -> Aggregate (cost=1.27..1.28 rows=1 width=32) (actual\n time=0.005..0.005 rows=1 loops=200)\n -> Seq Scan on e (cost=0.00..1.26 rows=1 width=76)\n (actual time=0.004..0.004 rows=0 loops=200)\n Filter: (a_id = a.id)\n Rows Removed by Filter: 21\n Planning Time: 0.496 ms\n Execution Time: 9.892 ms\n \n\n\n\nAm 12.08.2022 um 21:15 schrieb Andres\n Freund:\n\n\nHi,\n\nOn 2022-08-12 18:49:58 +0000, Nico Heller wrote:\n\n\nWITH aggregation(\n SELECT\n a.*,\n (SELECT array_agg(b.*) FROM b WHERE b.a_id = a.id) as \"bs\",\n (SELECT array_agg(c.*) FROM c WHERE c.a_id = a.id) as \"cs\",\n (SELECT array_agg(d.*) FROM d WHERE d.a_id = a.id) as \"ds\",\n (SELECT array_agg(e.*) FROM d WHERE e.a_id = a.id) as \"es\"\n FROM a WHERE a.id IN (<some big list, ranging from 20-180 entries)\n)\nSELECT to_jsonb(aggregation.*) as \"value\" FROM aggregation;\n\n\n\n\n\nImagine that for each \"a\" there exists between 5-100 \"b\", 
\"c\", \"d\" and \"e\"\nwhich makes the result of this pretty big (worst case: around 300kb when\nsaved to a text file).\nI noticed that adding the \"to_jsonb\" increases the query time by 100%, from\n9-10ms to 17-23ms on average.\n\n\n\nCould we see the explain?\n\nHave you tried using json[b]_agg()?\n\n\n\n\nThis may not seem slow at all but this query has another issue: on an AWS\nAurora Serverless V2 instance we are running into a RAM usage of around\n30-50 GB compared to < 10 GB when using a simple LEFT JOINed query when\nunder high load (> 1000 queries / sec). Furthermore the CPU usage is quite\nhigh.\n\n\n\nWe can't say much about aurora. It's a heavily modified fork of postgres. Did\nyou reproduce this with vanilla postgres? And if so, do you have it in a form\nthat somebody could try out?\n\nGreetings,\n\nAndres Freund",
"msg_date": "Fri, 12 Aug 2022 19:48:07 +0000",
"msg_from": "Nico Heller <nico.heller@posteo.de>",
"msg_from_op": true,
"msg_subject": "Re: to_jsonb performance on array aggregated correlated subqueries"
},
{
"msg_contents": "On Fri, Aug 12, 2022 at 3:02 PM Rick Otten <rottenwindfish@gmail.com> wrote:\n\n>\n>\n> On Fri, Aug 12, 2022 at 2:50 PM Nico Heller <nico.heller@posteo.de> wrote:\n>\n>> Good day,\n>>\n>> consider the following query:\n>>\n>> WITH aggregation(\n>> SELECT\n>> a.*,\n>> (SELECT array_agg(b.*) FROM b WHERE b.a_id = a.id) as \"bs\",\n>> (SELECT array_agg(c.*) FROM c WHERE c.a_id = a.id) as \"cs\",\n>> (SELECT array_agg(d.*) FROM d WHERE d.a_id = a.id) as \"ds\",\n>> (SELECT array_agg(e.*) FROM d WHERE e.a_id = a.id) as \"es\"\n>> FROM a WHERE a.id IN (<some big list, ranging from 20-180 entries)\n>> )\n>> SELECT to_jsonb(aggregation.*) as \"value\" FROM aggregation;\n>>\n>>\n> - You do have an index on `b.a_id` and `c.a_id`, etc... ? You didn't\n> say...\n>\n> - Are you sure it is the `to_jsonb` that is making this query slow?\n>\n> - Since you are serializing this for easy machine readable consumption\n> outside of the database, does it make a difference if you use `to_json`\n> instead?\n>\n>\nTo follow up here a little. I ran some quick tests on my database and\nfound that `to_json` is consistently, slightly, faster than `to_jsonb` when\nyou are just serializing the result set for consumption. I feed in some\narrays of 1,000,000 elements for testing. While both json serializers are\nslower than just sending back the result set, it wasn't significant on my\nmachine with simple object types. (3% slower).\n\nAre any of your objects in \"b.*\", etc, complex data structures or deeper\narrays, or gis shapes, or strange data types that might be hard to\nserialize? 
I'm wondering if there is something hidden in those \".*\" row\nsets that are particularly problematic and compute intensive to process.",
"msg_date": "Fri, 12 Aug 2022 16:17:02 -0400",
"msg_from": "Rick Otten <rottenwindfish@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: to_jsonb performance on array aggregated correlated subqueries"
}
] |
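The thread above suggests trying json[b]_agg() for the correlated-subquery aggregation. As a rough client-side analogy (plain Python with hypothetical `a`/`b` rows standing in for the tables in the thread — a sketch of the result shape, not the server's actual to_jsonb implementation), the query's "array of child rows per parent, serialized as one JSON value" pattern looks like this:

```python
import json

# Hypothetical stand-ins for the "a" and "b" tables in the thread.
a_rows = [{"id": 1, "name": "first"}, {"id": 2, "name": "second"}]
b_rows = [{"a_id": 1, "val": 10}, {"a_id": 1, "val": 11}, {"a_id": 2, "val": 20}]

def aggregate(a_rows, b_rows):
    # Mimics: SELECT a.*, (SELECT json_agg(b.*) FROM b
    #                      WHERE b.a_id = a.id) AS bs FROM a
    by_parent = {}
    for b in b_rows:
        by_parent.setdefault(b["a_id"], []).append(b)
    return [dict(a, bs=by_parent.get(a["id"], [])) for a in a_rows]

# Analogue of SELECT to_json(aggregation.*): one big serialized document.
value = json.dumps(aggregate(a_rows, b_rows))
print(value)
```

The serialization step at the end is the part whose cost the thread is debating; the aggregation itself is cheap if `b.a_id` is indexed.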
[
{
"msg_contents": "Hi Everyone,\n\nI'm trying to run pgbench with various numbers of connections. However, my\nDB seems to be hitting some limit around 147-150 connections. I'd like to\nrun with at least 500 and even up to 2000 if possible.\n\nI've already increased the max_connections, shared_buffers\nand kernel.shmmax. All by 20 times.\n\nWhat's limiting my DB from allowing more connections?\n\nThis is a sample of the output I'm getting, which repeats the error 52\ntimes (one for each failed connection)\n\n-bash-4.2$ pgbench -c 200 -j 200 -t 100 benchy\n...\nconnection to database \"benchy\" failed:\ncould not connect to server: Resource temporarily unavailable\n Is the server running locally and accepting\n connections on Unix domain socket\n\"/var/run/postgresql/.s.PGSQL.5432\"?\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 50\nquery mode: simple\nnumber of clients: 200\nnumber of threads: 200\nnumber of transactions per client: 100\nnumber of transactions actually processed: 14800/20000\nlatency average = 165.577 ms\ntps = 1207.895829 (including connections establishing)\ntps = 1255.496312 (excluding connections establishing)\n\nThanks,\nKevin",
"msg_date": "Sat, 20 Aug 2022 20:08:47 -0600",
"msg_from": "Kevin McKibbin <kevinmckibbin123@gmail.com>",
"msg_from_op": true,
"msg_subject": "pgbench: could not connect to server: Resource temporarily\n unavailable"
},
{
"msg_contents": "Kevin McKibbin <kevinmckibbin123@gmail.com> writes:\n> What's limiting my DB from allowing more connections?\n\n> This is a sample of the output I'm getting, which repeats the error 52\n> times (one for each failed connection)\n\n> -bash-4.2$ pgbench -c 200 -j 200 -t 100 benchy\n> ...\n> connection to database \"benchy\" failed:\n> could not connect to server: Resource temporarily unavailable\n> Is the server running locally and accepting\n> connections on Unix domain socket\n> \"/var/run/postgresql/.s.PGSQL.5432\"?\n\nThis is apparently a client-side failure not a server-side failure\n(you could confirm that by seeing whether any corresponding\nfailure shows up in the postmaster log). That means that the\nkernel wouldn't honor pgbench's attempt to open a connection,\nwhich implies you haven't provisioned enough networking resources\nto support the number of connections you want. Since you haven't\nmentioned what platform this is on, it's impossible to say more\nthan that --- but it doesn't look like Postgres configuration\nsettings are at issue at all.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 20 Aug 2022 23:20:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgbench: could not connect to server: Resource temporarily\n unavailable"
},
{
"msg_contents": "Sorry Tom for the duplicate email. Resending with the mailing list.\n\n\n> Thanks for your response. I'm using a Centos Linux environment and have\n> the open files set very high:\n>\n> -bash-4.2$ ulimit -a|grep open\n> open files (-n) 65000\n>\n> What else could be limiting the connections?\n>\n> Kevin\n>\n>\n> On Sat, 20 Aug 2022 at 21:20, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n>> Kevin McKibbin <kevinmckibbin123@gmail.com> writes:\n>> > What's limiting my DB from allowing more connections?\n>>\n>> > This is a sample of the output I'm getting, which repeats the error 52\n>> > times (one for each failed connection)\n>>\n>> > -bash-4.2$ pgbench -c 200 -j 200 -t 100 benchy\n>> > ...\n>> > connection to database \"benchy\" failed:\n>> > could not connect to server: Resource temporarily unavailable\n>> > Is the server running locally and accepting\n>> > connections on Unix domain socket\n>> > \"/var/run/postgresql/.s.PGSQL.5432\"?\n>>\n>> This is apparently a client-side failure not a server-side failure\n>> (you could confirm that by seeing whether any corresponding\n>> failure shows up in the postmaster log). That means that the\n>> kernel wouldn't honor pgbench's attempt to open a connection,\n>> which implies you haven't provisioned enough networking resources\n>> to support the number of connections you want. Since you haven't\n>> mentioned what platform this is on, it's impossible to say more\n>> than that --- but it doesn't look like Postgres configuration\n>> settings are at issue at all.\n>>\n>> regards, tom lane\n>>\n>\n",
"msg_date": "Sun, 21 Aug 2022 10:29:56 -0600",
"msg_from": "Kevin McKibbin <kevinmckibbin123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pgbench: could not connect to server: Resource temporarily\n unavailable"
},
{
"msg_contents": "\nOn 2022-08-20 Sa 23:20, Tom Lane wrote:\n> Kevin McKibbin <kevinmckibbin123@gmail.com> writes:\n>> What's limiting my DB from allowing more connections?\n>> This is a sample of the output I'm getting, which repeats the error 52\n>> times (one for each failed connection)\n>> -bash-4.2$ pgbench -c 200 -j 200 -t 100 benchy\n>> ...\n>> connection to database \"benchy\" failed:\n>> could not connect to server: Resource temporarily unavailable\n>> Is the server running locally and accepting\n>> connections on Unix domain socket\n>> \"/var/run/postgresql/.s.PGSQL.5432\"?\n> This is apparently a client-side failure not a server-side failure\n> (you could confirm that by seeing whether any corresponding\n> failure shows up in the postmaster log). That means that the\n> kernel wouldn't honor pgbench's attempt to open a connection,\n> which implies you haven't provisioned enough networking resources\n> to support the number of connections you want. Since you haven't\n> mentioned what platform this is on, it's impossible to say more\n> than that --- but it doesn't look like Postgres configuration\n> settings are at issue at all.\n\n\n\nThe first question in my mind from the above is where this postgres\ninstance is actually listening. Is it really /var/run/postgresql? Its\npostmaster.pid will tell you. I have often seen client programs pick up\na system libpq which is compiled with a different default socket directory.\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sun, 21 Aug 2022 16:18:47 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: pgbench: could not connect to server: Resource temporarily\n unavailable"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2022-08-20 Sa 23:20, Tom Lane wrote:\n>> Kevin McKibbin <kevinmckibbin123@gmail.com> writes:\n>>> What's limiting my DB from allowing more connections?\n\n> The first question in my mind from the above is where this postgres\n> instance is actually listening. Is it really /var/run/postgresql? Its\n> postmaster.pid will tell you. I have often seen client programs pick up\n> a system libpq which is compiled with a different default socket directory.\n\nI wouldn't think that'd explain a symptom of some connections succeeding\nand others not within the same pgbench run.\n\nI tried to duplicate this behavior locally (on RHEL8) and got something\ninteresting. After increasing the server's max_connections to 1000,\nI can do\n\n$ pgbench -S -c 200 -j 100 -t 100 bench\n\nand it goes through fine. But:\n\n$ pgbench -S -c 200 -j 200 -t 100 bench\npgbench (16devel)\nstarting vacuum...end.\npgbench: error: connection to server on socket \"/tmp/.s.PGSQL.5440\" failed: Resource temporarily unavailable\n Is the server running locally and accepting connections on that socket?\npgbench: error: could not create connection for client 154\n\nSo whatever is triggering this has nothing to do with the server,\nbut with how many threads are created inside pgbench. I notice\nalso that sometimes it works, making it seem like possibly a race\ncondition. Either that or there's some limitation on how fast\nthreads within a process can open sockets.\n\nAlso, I determined that libpq's connect() call is failing synchronously\n(we get EAGAIN directly from the connect() call, not later). 
I wondered\nif libpq should accept EAGAIN as a synonym for EINPROGRESS, but no:\nthat just makes it fail on the next touch of the socket.\n\nThe only documented reason for connect(2) to fail with EAGAIN is\n\n EAGAIN Insufficient entries in the routing cache.\n\nwhich seems pretty unlikely to be the issue here, since all these\nconnections are being made to the same local address.\n\nOn the whole this is smelling more like a Linux kernel bug than\nanything else.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 21 Aug 2022 17:15:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgbench: could not connect to server: Resource temporarily\n unavailable"
},
{
"msg_contents": "\nOn 2022-08-21 Su 17:15, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 2022-08-20 Sa 23:20, Tom Lane wrote:\n>>> Kevin McKibbin <kevinmckibbin123@gmail.com> writes:\n>>>> What's limiting my DB from allowing more connections?\n>> The first question in my mind from the above is where this postgres\n>> instance is actually listening. Is it really /var/run/postgresql? Its\n>> postmaster.pid will tell you. I have often seen client programs pick up\n>> a system libpq which is compiled with a different default socket directory.\n> I wouldn't think that'd explain a symptom of some connections succeeding\n> and others not within the same pgbench run.\n\n\nOh, yes, I agree, I missed that aspect of it.\n\n\n>\n> I tried to duplicate this behavior locally (on RHEL8) and got something\n> interesting. After increasing the server's max_connections to 1000,\n> I can do\n>\n> $ pgbench -S -c 200 -j 100 -t 100 bench\n>\n> and it goes through fine. But:\n>\n> $ pgbench -S -c 200 -j 200 -t 100 bench\n> pgbench (16devel)\n> starting vacuum...end.\n> pgbench: error: connection to server on socket \"/tmp/.s.PGSQL.5440\" failed: Resource temporarily unavailable\n> Is the server running locally and accepting connections on that socket?\n> pgbench: error: could not create connection for client 154\n>\n> So whatever is triggering this has nothing to do with the server,\n> but with how many threads are created inside pgbench. I notice\n> also that sometimes it works, making it seem like possibly a race\n> condition. Either that or there's some limitation on how fast\n> threads within a process can open sockets.\n>\n> Also, I determined that libpq's connect() call is failing synchronously\n> (we get EAGAIN directly from the connect() call, not later). 
I wondered\n> if libpq should accept EAGAIN as a synonym for EINPROGRESS, but no:\n> that just makes it fail on the next touch of the socket.\n>\n> The only documented reason for connect(2) to fail with EAGAIN is\n>\n> EAGAIN Insufficient entries in the routing cache.\n>\n> which seems pretty unlikely to be the issue here, since all these\n> connections are being made to the same local address.\n>\n> On the whole this is smelling more like a Linux kernel bug than\n> anything else.\n>\n> \t\t\t\n\n\n*nod*\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sun, 21 Aug 2022 17:26:55 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: pgbench: could not connect to server: Resource temporarily\n unavailable"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2022-08-21 Su 17:15, Tom Lane wrote:\n>> On the whole this is smelling more like a Linux kernel bug than\n>> anything else.\n\n> *nod*\n\nConceivably we could work around this in libpq: on EAGAIN, just\nretry the failed connect(), or maybe better to close the socket\nand take it from the top with the same target server address.\n\nOn the one hand, reporting EAGAIN certainly sounds like an\ninvitation to do just that. On the other hand, if the failure\nis persistent then libpq is locked up in a tight loop --- and\n\"Insufficient entries in the routing cache\" doesn't seem like a\ncondition that would clear immediately.\n\nIt's also pretty unclear why the kernel would want to return\nEAGAIN instead of letting the nonblock connection path do the\nwaiting, which is why I'm suspecting a bug rather than designed\nbehavior.\n\nI think I'm disinclined to install such a workaround unless we\nget confirmation from some kernel hacker that it's operating\nas designed and application-level retry is intended.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 21 Aug 2022 17:48:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgbench: could not connect to server: Resource temporarily\n unavailable"
},
{
"msg_contents": "On Mon, Aug 22, 2022 at 9:48 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> It's also pretty unclear why the kernel would want to return\n> EAGAIN instead of letting the nonblock connection path do the\n> waiting, which is why I'm suspecting a bug rather than designed\n> behavior.\n\nCould it be that it fails like that if the listen queue is full on the\nother side?\n\nhttps://github.com/torvalds/linux/blob/master/net/unix/af_unix.c#L1493\n\nIf it's something like that, maybe increasing\n/proc/sys/net/core/somaxconn would help? I think older kernels only\nhad 128 here.\n\n\n",
"msg_date": "Mon, 22 Aug 2022 10:30:17 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench: could not connect to server: Resource temporarily\n unavailable"
},
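Thomas's suggestion above is to check and raise net.core.somaxconn. A small sketch of inspecting that limit programmatically (the /proc path is Linux-specific, and as the thread later notes the sysctl name differs on the BSDs, so the code falls back gracefully where the file is absent):

```python
import os

def parse_somaxconn(text: str) -> int:
    """Parse the contents of /proc/sys/net/core/somaxconn (a bare integer)."""
    return int(text.strip())

PATH = "/proc/sys/net/core/somaxconn"  # Linux; FreeBSD uses kern.ipc.soacceptqueue

if os.path.exists(PATH):
    with open(PATH) as f:
        limit = parse_somaxconn(f.read())
    print(f"kernel listen-backlog cap: {limit}")
else:
    # Non-Linux platforms: consult sysctl(8) instead.
    print("no /proc/sys on this platform; use sysctl")
```

Note that, per the thread, the server must be restarted after raising the sysctl: the listen queue length is fixed when the socket is created.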
{
"msg_contents": "Hi,\n\nOn 2022-08-21 17:15:01 -0400, Tom Lane wrote:\n> I tried to duplicate this behavior locally (on RHEL8) and got something\n> interesting. After increasing the server's max_connections to 1000,\n> I can do\n>\n> $ pgbench -S -c 200 -j 100 -t 100 bench\n>\n> and it goes through fine. But:\n>\n> $ pgbench -S -c 200 -j 200 -t 100 bench\n> pgbench (16devel)\n> starting vacuum...end.\n> pgbench: error: connection to server on socket \"/tmp/.s.PGSQL.5440\" failed: Resource temporarily unavailable\n> Is the server running locally and accepting connections on that socket?\n> pgbench: error: could not create connection for client 154\n>\n> So whatever is triggering this has nothing to do with the server,\n> but with how many threads are created inside pgbench. I notice\n> also that sometimes it works, making it seem like possibly a race\n> condition. Either that or there's some limitation on how fast\n> threads within a process can open sockets.\n\nI think it's more likely to be caused by the net.core.somaxconn sysctl\nlimiting the size of the listen backlog. The threads part just influences the\nspeed at which new connections are made, and thus how quickly the backlog is\nfilled.\n\nDo you get the same behaviour if you set net.core.somaxconn to higher than the\nnumber of connections? IIRC you need to restart postgres for it to take\neffect.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 21 Aug 2022 15:43:31 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pgbench: could not connect to server: Resource temporarily\n unavailable"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> If it's something like that, maybe increasing\n> /proc/sys/net/core/somaxconn would help? I think older kernels only\n> had 128 here.\n\nBingo! I see\n\n$ cat /proc/sys/net/core/somaxconn\n128\n\nby default, which is right about where the problem starts. After\n\n$ sudo sh -c 'echo 1000 >/proc/sys/net/core/somaxconn'\n\n*and restarting the PG server*, I can do a lot more threads without\na problem. Evidently, the server's socket's listen queue length\nis fixed at creation and adjusting the kernel limit won't immediately\nchange it.\n\nSo what we've got is that EAGAIN from connect() on a Unix socket can\nmean \"listen queue overflow\" and the kernel won't treat that as a\nnonblock-waitable condition. Still seems like a kernel bug perhaps,\nor at least a misfeature.\n\nNot sure what I think at this point about making libpq retry after\nEAGAIN. It would make sense for this particular undocumented use\nof EAGAIN, but I'm worried about others, especially the documented\nreason. On the whole I'm inclined to leave the code alone;\nbut is there sufficient reason to add something about adjusting\nsomaxconn to our documentation?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 21 Aug 2022 18:55:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgbench: could not connect to server: Resource temporarily\n unavailable"
},
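Tom's diagnosis — a full Unix-socket listen queue making a non-blocking connect() fail with EAGAIN — can be reproduced outside Postgres entirely. A minimal sketch (Python rather than libpq's C; the exact errno is platform-dependent, EAGAIN on Linux versus ECONNREFUSED on some BSDs, as discussed in the thread):

```python
import errno
import os
import socket
import tempfile

# A Unix-domain listening socket whose queue we deliberately never drain.
path = os.path.join(tempfile.mkdtemp(), "demo.sock")
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(path)
server.listen(0)  # ask for the smallest queue the kernel will give us

failures = []
clients = []
for _ in range(16):  # far more connections than the queue can hold
    c = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    c.setblocking(False)  # like libpq's non-blocking connect path
    try:
        c.connect(path)
    except OSError as e:
        failures.append(e.errno)
    clients.append(c)

# Linux reports EAGAIN once the backlog is full; blocking sockets
# would instead sleep and fight over queue space.
print(sorted(set(failures)))
```

This mirrors why pgbench only fails past ~150 clients: the first connections fit in the 128-entry backlog, and the rest get "Resource temporarily unavailable".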
{
"msg_contents": "On Mon, Aug 22, 2022 at 10:55 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Not sure what I think at this point about making libpq retry after\n> EAGAIN. It would make sense for this particular undocumented use\n> of EAGAIN, but I'm worried about others, especially the documented\n> reason. On the whole I'm inclined to leave the code alone;\n> but is there sufficient reason to add something about adjusting\n> somaxconn to our documentation?\n\nMy Debian system apparently has a newer man page:\n\n EAGAIN For nonblocking UNIX domain sockets, the socket is nonblocking,\n and the connection cannot be completed immediately. For other\n socket families, there are insufficient entries in the routing\n cache.\n\nYeah retrying doesn't seem that nice. +1 for a bit of documentation,\nwhich I guess belongs in the server tuning part where we talk about\nsysctls, perhaps with a link somewhere near max_connections? More\nrecent Linux kernels bumped it to 4096 by default so I doubt it'll\ncome up much in the future, though. Note that we also call listen()\nwith a backlog value capped to our own PG_SOMAXCONN which is 1000. I\ndoubt many people benchmark with higher numbers of connections but\nit'd be nicer if it worked when you do...\n\nI was curious and checked how FreeBSD would handle this. Instead of\nEAGAIN you get ECONNREFUSED here, until you crank up\nkern.ipc.somaxconn, which also defaults to 128 like older Linux.\n\n\n",
"msg_date": "Mon, 22 Aug 2022 11:33:33 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench: could not connect to server: Resource temporarily\n unavailable"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Yeah retrying doesn't seem that nice. +1 for a bit of documentation,\n> which I guess belongs in the server tuning part where we talk about\n> sysctls, perhaps with a link somewhere near max_connections? More\n> recent Linux kernels bumped it to 4096 by default so I doubt it'll\n> come up much in the future, though.\n\nHmm. It'll be awhile till the 128 default disappears entirely\nthough, especially if assorted BSDen use that too. Probably\nworth the trouble to document.\n\n> Note that we also call listen()\n> with a backlog value capped to our own PG_SOMAXCONN which is 1000. I\n> doubt many people benchmark with higher numbers of connections but\n> it'd be nicer if it worked when you do...\n\nActually it's 10000. Still, I wonder if we couldn't just remove\nthat limit now that we've desupported a bunch of stone-age kernels.\nIt's hard to believe any modern kernel can't defend itself against\nsilly listen-queue requests.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 21 Aug 2022 20:20:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgbench: could not connect to server: Resource temporarily\n unavailable"
},
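The point about removing PG_SOMAXCONN rests on POSIX's rule that the listen() backlog is only a hint: the kernel silently clamps oversized values rather than failing. A quick sketch demonstrating that an absurdly large backlog is still accepted:

```python
import os
import socket
import tempfile

def listen_with_backlog(backlog: int) -> bool:
    """Return True if listen() accepts the given backlog hint."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.bind(os.path.join(tempfile.mkdtemp(), "clamp.sock"))
        s.listen(backlog)  # POSIX: implementations clamp silently, not fail
        return True
    finally:
        s.close()

# The kernel clamps this to its own limit (e.g. somaxconn) rather than
# erroring out, which is why a userspace cap is unnecessary paranoia.
print(listen_with_backlog(10_000_000))
```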
{
"msg_contents": "On Mon, Aug 22, 2022 at 12:20 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > Yeah retrying doesn't seem that nice. +1 for a bit of documentation,\n> > which I guess belongs in the server tuning part where we talk about\n> > sysctls, perhaps with a link somewhere near max_connections? More\n> > recent Linux kernels bumped it to 4096 by default so I doubt it'll\n> > come up much in the future, though.\n>\n> Hmm. It'll be awhile till the 128 default disappears entirely\n> though, especially if assorted BSDen use that too. Probably\n> worth the trouble to document.\n\nI could try to write a doc patch if you aren't already on it.\n\n> > Note that we also call listen()\n> > with a backlog value capped to our own PG_SOMAXCONN which is 1000. I\n> > doubt many people benchmark with higher numbers of connections but\n> > it'd be nicer if it worked when you do...\n>\n> Actually it's 10000. Still, I wonder if we couldn't just remove\n> that limit now that we've desupported a bunch of stone-age kernels.\n> It's hard to believe any modern kernel can't defend itself against\n> silly listen-queue requests.\n\nOh, right. Looks like that was just paranoia in commit 153f4006763,\nback when you got away from using the (very conservative) SOMAXCONN\nmacro. Looks like that was 5 on ancient systems going back to the\noriginal sockets stuff, and later 128 was a popular number. Yeah I'd\nsay +1 for removing our cap. I'm pretty sure every system will\ninternally cap whatever value we pass in if it doesn't like it, as\nPOSIX explicitly says it can freely do with this \"hint\".\n\nThe main thing I learned today is that Linux's connect(AF_UNIX)\nimplementation doesn't refuse connections when the listen backlog is\nfull, unlike other OSes. Instead, for blocking sockets, it sleeps and\nwakes with everyone else to fight over space. 
I *guess* for\nnon-blocking sockets that introduced a small contradiction -- there\nisn't the state space required to give you a working EINPROGRESS with\nthe same sort of behaviour (if you reified a secondary queue for that\nyou might as well make the primary one larger...), but they also\ndidn't want to give you ECONNREFUSED just because you're non-blocking,\nso they went with EAGAIN, because you really do need to call again\nwith the sockaddr. The reason I wouldn't want to call it again is\nthat I guess it'd be a busy CPU burning loop until progress can be\nmade, which isn't nice, and failing with \"Resource temporarily\nunavailable\" to the user does in fact describe the problem, if\nsomewhat vaguely. Hmm, maybe we could add a hint to the error,\nthough?\n\n\n",
"msg_date": "Mon, 22 Aug 2022 14:02:50 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench: could not connect to server: Resource temporarily\n unavailable"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Mon, Aug 22, 2022 at 12:20 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Hmm. It'll be awhile till the 128 default disappears entirely\n>> though, especially if assorted BSDen use that too. Probably\n>> worth the trouble to document.\n\n> I could try to write a doc patch if you aren't already on it.\n\nI haven't done anything about it yet, but could do so tomorrow or so.\n\n(BTW, I just finished discovering that NetBSD has the same 128 limit.\nIt looks like they intended to make that settable via sysctl, because\nit's a variable not a constant; but they haven't actually wired up the\nvariable to sysctl yet.)\n\n> Oh, right. Looks like that was just paranoia in commit 153f4006763,\n> back when you got away from using the (very conservative) SOMAXCONN\n> macro. Looks like that was 5 on ancient systems going back to the\n> original sockets stuff, and later 128 was a popular number. Yeah I'd\n> say +1 for removing our cap. I'm pretty sure every system will\n> internally cap whatever value we pass in if it doesn't like it, as\n> POSIX explicitly says it can freely do with this \"hint\".\n\nYeah. I hadn't thought to check the POSIX text, but their listen(2)\npage is pretty clear that implementations should *silently* reduce\nthe value to what they can handle, not fail. Also, SUSv2 says the\nsame thing in different words, so the requirement's been that way\nfor a very long time. I think we could drop this ancient bit of\nparanoia.\n\n> ... Hmm, maybe we could add a hint to the error,\n> though?\n\nlibpq doesn't really have a notion of hints --- perhaps we ought\nto fix that sometime. But this doesn't seem like a very exciting\nplace to start, given the paucity of prior complaints. (And anyway\npeople using other client libraries wouldn't be helped.) I think\nsome documentation in the \"Managing Kernel Resources\" section\nshould be plenty for this.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 21 Aug 2022 22:18:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgbench: could not connect to server: Resource temporarily\n unavailable"
},
{
"msg_contents": "On Mon, Aug 22, 2022 at 2:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > On Mon, Aug 22, 2022 at 12:20 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Hmm. It'll be awhile till the 128 default disappears entirely\n> >> though, especially if assorted BSDen use that too. Probably\n> >> worth the trouble to document.\n>\n> > I could try to write a doc patch if you aren't already on it.\n>\n> I haven't done anything about it yet, but could do so tomorrow or so.\n\nCool. BTW small correction to something I said about FreeBSD: it'd be\nbetter to document the new name kern.ipc.soacceptqueue (see listen(2)\nHISTORY) even though the old name still works and matches OpenBSD and\nmacOS.\n\n\n",
"msg_date": "Mon, 22 Aug 2022 14:43:02 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench: could not connect to server: Resource temporarily\n unavailable"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Cool. BTW small correction to something I said about FreeBSD: it'd be\n> better to document the new name kern.ipc.soacceptqueue (see listen(2)\n> HISTORY) even though the old name still works and matches OpenBSD and\n> macOS.\n\nThanks. Sounds like we get to document at least three different\nsysctl names for this setting :-(\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 21 Aug 2022 22:55:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgbench: could not connect to server: Resource temporarily\n unavailable"
},
{
"msg_contents": "OK, here's some proposed patches.\n\n0001 adds a para about how to raise the listen queue length.\n\n0002 isn't quite related, but while writing 0001 I noticed a nearby\nuse of /proc/sys/... which I thought should be converted to sysctl.\nIMO /proc/sys pretty much sucks, at least for documentation purposes,\nfor multiple reasons:\n\n* It's unlike the way you do things on other platforms.\n\n* \"man sysctl\" will lead you to useful documentation about how to\nuse that command. There's no obvious way to find documentation\nabout /proc/sys.\n\n* It's not at all sudo-friendly. Compare\n\tsudo sh -c 'echo 0 >/proc/sys/kernel/randomize_va_space'\n\tsudo sysctl -w kernel.randomize_va_space=0\nThe former is a lot longer and it's far from obvious why you have\nto do it that way.\n\n* You have to think in sysctl terms anyway if you want to make the\nsetting persist across reboots, which you almost always do.\n\n* Everywhere else in runtime.sgml, we use sysctl not /proc/sys.\n\n0003 removes PG_SOMAXCONN. While doing that I noticed that this\ncomputation hadn't been touched throughout all the various\nchanges fooling with exactly what gets counted in MaxBackends.\nI think the most appropriate definition for the listen queue\nlength is now MaxConnections * 2, not MaxBackends * 2, because\nthe other processes counted in MaxBackends don't correspond to\nincoming connections.\n\nI propose 0003 for HEAD only, but the docs changes could be\nback-patched.\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 22 Aug 2022 12:57:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgbench: could not connect to server: Resource temporarily\n unavailable"
},
{
"msg_contents": "Thanks for your input everyone! I wanted to confirm that increasing the\nsomaxconn also fixed the issue for me.\n\nKevin\n\n\n> $ cat /proc/sys/net/core/somaxconn\n> 128\n>\n> by default, which is right about where the problem starts. After\n>\n> $ sudo sh -c 'echo 1000 >/proc/sys/net/core/somaxconn'\n>\n> *and restarting the PG server*, I can do a lot more threads without\n> a problem. Evidently, the server's socket's listen queue length\n> is fixed at creation and adjusting the kernel limit won't immediately\n> change it.\n>\n>\n>\n\nThanks for your input everyone! I wanted to confirm that increasing the somaxconn also fixed the issue for me.Kevin\n$ cat /proc/sys/net/core/somaxconn\n128\n\nby default, which is right about where the problem starts. After\n\n$ sudo sh -c 'echo 1000 >/proc/sys/net/core/somaxconn'\n\n*and restarting the PG server*, I can do a lot more threads without\na problem. Evidently, the server's socket's listen queue length\nis fixed at creation and adjusting the kernel limit won't immediately\nchange it.",
"msg_date": "Mon, 22 Aug 2022 15:14:23 -0600",
"msg_from": "Kevin McKibbin <kevinmckibbin123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pgbench: could not connect to server: Resource temporarily\n unavailable"
},
{
"msg_contents": "On Tue, Aug 23, 2022 at 4:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> 0001 adds a para about how to raise the listen queue length.\n\n+ service the requests, with those clients receiving unhelpful\n+ connection failure errors such as <quote>Resource temporarily\n+ unavailable</quote>.\n\nLGTM but I guess I would add \"... or Connection refused\"?\n\n> 0002 isn't quite related, but while writing 0001 I noticed a nearby\n> use of /proc/sys/... which I thought should be converted to sysctl.\n> IMO /proc/sys pretty much sucks, at least for documentation purposes,\n> for multiple reasons:\n\n+1\n\n> 0003 removes PG_SOMAXCONN. While doing that I noticed that this\n> computation hadn't been touched throughout all the various\n> changes fooling with exactly what gets counted in MaxBackends.\n> I think the most appropriate definition for the listen queue\n> length is now MaxConnections * 2, not MaxBackends * 2, because\n> the other processes counted in MaxBackends don't correspond to\n> incoming connections.\n\n+1\n\n> I propose 0003 for HEAD only, but the docs changes could be\n> back-patched.\n\n+1\n\n\n",
"msg_date": "Tue, 23 Aug 2022 14:42:13 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench: could not connect to server: Resource temporarily\n unavailable"
},
{
"msg_contents": "Just curious, *backlog* defines the maximum pending connections,\nwhy do we need to double the MaxConnections as the queue size?\nIt seems *listen* with larger *backlog* will tell the OS maintain a\nlarger buffer?\n\n- maxconn = MaxBackends * 2;\n- if (maxconn > PG_SOMAXCONN)\n- maxconn = PG_SOMAXCONN;\n+ maxconn = MaxConnections * 2;\n\nOn Tue, Aug 23, 2022 at 12:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> OK, here's some proposed patches.\n>\n> 0001 adds a para about how to raise the listen queue length.\n>\n> 0002 isn't quite related, but while writing 0001 I noticed a nearby\n> use of /proc/sys/... which I thought should be converted to sysctl.\n> IMO /proc/sys pretty much sucks, at least for documentation purposes,\n> for multiple reasons:\n>\n> * It's unlike the way you do things on other platforms.\n>\n> * \"man sysctl\" will lead you to useful documentation about how to\n> use that command. There's no obvious way to find documentation\n> about /proc/sys.\n>\n> * It's not at all sudo-friendly. Compare\n> sudo sh -c 'echo 0 >/proc/sys/kernel/randomize_va_space'\n> sudo sysctl -w kernel.randomize_va_space=0\n> The former is a lot longer and it's far from obvious why you have\n> to do it that way.\n>\n> * You have to think in sysctl terms anyway if you want to make the\n> setting persist across reboots, which you almost always do.\n>\n> * Everywhere else in runtime.sgml, we use sysctl not /proc/sys.\n>\n> 0003 removes PG_SOMAXCONN. 
While doing that I noticed that this\n> computation hadn't been touched throughout all the various\n> changes fooling with exactly what gets counted in MaxBackends.\n> I think the most appropriate definition for the listen queue\n> length is now MaxConnections * 2, not MaxBackends * 2, because\n> the other processes counted in MaxBackends don't correspond to\n> incoming connections.\n>\n> I propose 0003 for HEAD only, but the docs changes could be\n> back-patched.\n>\n> regards, tom lane\n>\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Tue, 23 Aug 2022 11:12:07 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench: could not connect to server: Resource temporarily\n unavailable"
},
{
"msg_contents": "Junwang Zhao <zhjwpku@gmail.com> writes:\n> Just curious, *backlog* defines the maximum pending connections,\n> why do we need to double the MaxConnections as the queue size?\n\nThe postmaster allows up to twice MaxConnections child processes\nto exist, per the comment in canAcceptConnections:\n\n * We allow more connections here than we can have backends because some\n * might still be authenticating; they might fail auth, or some existing\n * backend might exit before the auth cycle is completed. The exact\n * MaxBackends limit is enforced when a new backend tries to join the\n * shared-inval backend array.\n\nYou can argue that 2X might not be the right multiplier, and you\ncan argue that the optimal listen queue length might be more or\nless than the limit on number of child processes, but that's how\nwe've historically done it. I'm not especially interested in\nchanging that without somebody making a well-reasoned case for\nsome other number.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 22 Aug 2022 23:37:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgbench: could not connect to server: Resource temporarily\n unavailable"
},
{
"msg_contents": "Ok, thanks for the clarification.\n\nOn Tue, Aug 23, 2022 at 11:37 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Junwang Zhao <zhjwpku@gmail.com> writes:\n> > Just curious, *backlog* defines the maximum pending connections,\n> > why do we need to double the MaxConnections as the queue size?\n>\n> The postmaster allows up to twice MaxConnections child processes\n> to exist, per the comment in canAcceptConnections:\n>\n> * We allow more connections here than we can have backends because some\n> * might still be authenticating; they might fail auth, or some existing\n> * backend might exit before the auth cycle is completed. The exact\n> * MaxBackends limit is enforced when a new backend tries to join the\n> * shared-inval backend array.\n>\n> You can argue that 2X might not be the right multiplier, and you\n> can argue that the optimal listen queue length might be more or\n> less than the limit on number of child processes, but that's how\n> we've historically done it. I'm not especially interested in\n> changing that without somebody making a well-reasoned case for\n> some other number.\n>\n> regards, tom lane\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Tue, 23 Aug 2022 11:46:12 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench: could not connect to server: Resource temporarily\n unavailable"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Tue, Aug 23, 2022 at 4:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> + service the requests, with those clients receiving unhelpful\n> + connection failure errors such as <quote>Resource temporarily\n> + unavailable</quote>.\n\n> LGTM but I guess I would add \"... or Connection refused\"?\n\nIs that the spelling that appears on FreeBSD? Happy to add it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 22 Aug 2022 23:53:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgbench: could not connect to server: Resource temporarily\n unavailable"
},
{
"msg_contents": "On Tue, Aug 23, 2022 at 3:53 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > On Tue, Aug 23, 2022 at 4:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > + service the requests, with those clients receiving unhelpful\n> > + connection failure errors such as <quote>Resource temporarily\n> > + unavailable</quote>.\n>\n> > LGTM but I guess I would add \"... or Connection refused\"?\n>\n> Is that the spelling that appears on FreeBSD? Happy to add it.\n\nYep.\n\n\n",
"msg_date": "Tue, 23 Aug 2022 15:58:26 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench: could not connect to server: Resource temporarily\n unavailable"
},
{
"msg_contents": "On Tue, Aug 23, 2022 at 2:42 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > 0002 isn't quite related, but while writing 0001 I noticed a nearby\n> > use of /proc/sys/... which I thought should be converted to sysctl.\n> > IMO /proc/sys pretty much sucks, at least for documentation purposes,\n> > for multiple reasons:\n\nOh, one comment there is actually obsolete now AFAIK. Unless there is\nsome reason to think personality(ADDR_NO_RANDOMIZE) might not work in\nsome case where sysctl -w kernel.randomize_va_space=0 will, I think we\ncan just remove that.",
"msg_date": "Wed, 24 Aug 2022 14:55:31 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench: could not connect to server: Resource temporarily\n unavailable"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Oh, one comment there is actually obsolete now AFAIK. Unless there is\n> some reason to think personality(ADDR_NO_RANDOMIZE) might not work in\n> some case where sysctl -w kernel.randomize_va_space=0 will, I think we\n> can just remove that.\n\nAFAICS, f3e78069db7 silently does nothing on platforms lacking\nADDR_NO_RANDOMIZE and PROC_ASLR_FORCE_DISABLE. Are you asserting\nthere are no such platforms?\n\n(I'm happy to lose the comment if it's really useless now, but\nI think we have little evidence of that.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 Aug 2022 23:06:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgbench: could not connect to server: Resource temporarily\n unavailable"
},
{
"msg_contents": "On Wed, Aug 24, 2022 at 3:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > Oh, one comment there is actually obsolete now AFAIK. Unless there is\n> > some reason to think personality(ADDR_NO_RANDOMIZE) might not work in\n> > some case where sysctl -w kernel.randomize_va_space=0 will, I think we\n> > can just remove that.\n>\n> AFAICS, f3e78069db7 silently does nothing on platforms lacking\n> ADDR_NO_RANDOMIZE and PROC_ASLR_FORCE_DISABLE. Are you asserting\n> there are no such platforms?\n\nThat's a Linux-only sysctl. ADDR_NO_RANDOMIZE is also Linux-only.\nBoth controls are old enough to be in any kernel that anyone's\ndeveloping on. On further reflection, though, I guess the comment is\nstill useful. ADDR_NO_RANDOMIZE only helps you with clusters launched\nby pg_ctl and pg_regress. A developer trying to run \"postgres\"\ndirectly might still want to know about the sysctl, so I withdraw that\nidea.\n\nAs for whether there are platforms where it does nothing: definitely.\nThese are highly OS-specific, and we've only tackled Linux and FreeBSD\n(with other solutions for macOS and Windows elsewhere in the tree),\nbut I doubt it matters: these are just the OSes that have ASLR on by\ndefault, that someone in our community uses as a daily driver to hack\nPostgreSQL on, that has been annoyed enough to look up how to turn it\noff :-)\n\n\n",
"msg_date": "Wed, 24 Aug 2022 16:53:33 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench: could not connect to server: Resource temporarily\n unavailable"
}
] |
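
The backlog arithmetic discussed in the thread above can be sketched in the same language as the client test. This is an illustrative model, not PostgreSQL source: the postmaster requests a listen backlog of MaxConnections * 2, and the Linux kernel silently clamps whatever is requested to `net.core.somaxconn` (128 by default on older kernels), which is why failures start at roughly 128 concurrent clients. The class and method names are invented for the sketch.

```java
// Sketch of the listen-queue sizing described in the thread above.
// Assumption: "requested" mirrors maxconn = MaxConnections * 2 from the
// patch discussion; "effective" models the kernel clamping to somaxconn.
public class ListenBacklogSketch {
    // Backlog length the server asks for, per Tom Lane's 0003 patch.
    static int requestedBacklog(int maxConnections) {
        return maxConnections * 2;
    }

    // The kernel silently truncates the requested backlog to somaxconn;
    // raising the sysctl only helps after the listening socket is recreated.
    static int effectiveBacklog(int requested, int somaxconn) {
        return Math.min(requested, somaxconn);
    }

    public static void main(String[] args) {
        int requested = requestedBacklog(100);                 // default max_connections = 100
        System.out.println(requested);                         // 200
        System.out.println(effectiveBacklog(requested, 128));  // 128: clamped by the default sysctl
        System.out.println(effectiveBacklog(requested, 1000)); // 200: after raising somaxconn
    }
}
```

With the default `somaxconn` of 128 the effective queue is shorter than what the server asked for, matching the point in the thread that the sysctl must be raised (and the server restarted) before more simultaneous connection attempts succeed.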
[
{
"msg_contents": "Dear all,\n\nAfter detecting some performance issues accessing a partitioned table in a\npostgres database we created a simple Java test to analyse the possible\ncauses of this problem. The test is very simple. It just creates a database\nconnection and executes a query on a partitioned table including the\npartition key column in the where clause. What we expect is that the\nEXPLAIN ANALYZE returns a single INDEX SCAN on the child partitioned table,\nas reported when this *same* query is executed from the pgadmin or psql\nclients. What we actually get is a PARALLEL SEQ SCAN on all the tables\nbelonging to the partition. Here some more information\n\nDatabase:\n\n - PostgreSQL, version: 13.7\n\nJAVA\n\n - openjdk version \"1.8.0_292\"\n - JDBC: PostgreSQL JDBC Driver, version: 42.4.2\n\n\nTABLE DEFINITION\n\nCREATE TABLE IF NOT EXISTS test_product_ui_partition.product_ui\n(\n id bigint NOT NULL DEFAULT\nnextval('test_product_ui_partition.product_ui_seq'::regclass),\n bundle_description citext COLLATE pg_catalog.\"default\",\n bundle_distribution_path character varying COLLATE pg_catalog.\"default\",\n mission_id citext COLLATE pg_catalog.\"default\" NOT NULL,\n...\n has_facets boolean,\n CONSTRAINT product_ui_pkey PRIMARY KEY (id, mission_id)\n) PARTITION BY LIST (mission_id);\n\n-- Relevant indexes\n\nCREATE INDEX IF NOT EXISTS product_ui_mission_id_idx\n ON test_product_ui_partition.product_ui USING btree\n (mission_id COLLATE pg_catalog.\"default\" ASC NULLS LAST);\n\nCREATE INDEX IF NOT EXISTS product_ui_logical_identifier_idx\n ON test_product_ui_partition.product_ui USING btree\n (logical_identifier COLLATE pg_catalog.\"default\" ASC NULLS LAST)\n\n\nTABLE SIZES\n\n Schema | Name | Type |\nOwner | Persistence | Access method | Size |\n---------------------------+-----------------+-------------------+----------+-------------+---------------+---------\n test_product_ui_partition | product_ui | partitioned table |\npostgres | permanent | | 0 
bytes |\n---------------------------+-----------------+-------+----------+-------------+---------------+-------+-------------\n test_product_ui_partition | product_ui_em16 | table |\npostgres | permanent | heap | 19 GB |\n---------------------------+---------------+-------+----------+-------------+---------------+-------+---------------\n test_product_ui_partition | product_ui_bc | table |\npostgres | permanent | heap | 66 MB |\n\n\nTEST QUERY\n\nselect logical_identifier, version_id, lastproduct\n from test_product_ui_partition.product_ui pui\n where pui.mission_id='urn:esa:psa:context:investigation:mission.em16'\n and pui.logical_identifier='urn:esa:psa:em16_tgo_frd:data_raw:frd_raw_sc_n_20220729t000000-20220729t235959'\n\n*This query returns one single row*\n\n\nEXPLAIN ANALYZE FROM PGADMIN\n\nIndex Scan using product_ui_em16_logical_identifier_idx on\nproduct_ui_em16 pui (cost=0.69..19.75 rows=7 width=112) (actual\ntime=0.133..0.134 rows=1 loops=1)\n[...] Index Cond: (logical_identifier =\n'urn:esa:psa:em16_tgo_frd:data_raw:frd_raw_sc_n_20220729t000000-20220729t235959'::citext)\"\n[...] 
Filter: (mission_id =\n'urn:esa:psa:context:investigation:mission.em16'::citext)\"\nPlanning Time: 0.237 ms\nExecution Time: 0.149 ms\n\nEXPLAIN ANALYZE FROM JAVA TEST\n\nclass org.postgresql.jdbc.PgConnection\nPostgreSQL\nversion: 13.7\nPostgreSQL JDBC Driver\nversion: 42.4.2\nQuery: explain analyze select logical_identifier, version_id,\nlastproduct from test_product_ui_partition.product_ui pui where\npui.mission_id='urn:esa:psa:context:investigation:mission.em16' and\npui.logical_identifier='urn:esa:psa:em16_tgo_frd:data_raw:frd_raw_sc_n_20220729t000000-20220729t235959'\nGather (cost=1000.00..5617399.10 rows=19 width=82) (actual\ntime=9987.415..9999.325 rows=1 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Parallel Append (cost=0.00..5616397.20 rows=15 width=82)\n(actual time=8240.465..9981.736 rows=0 loops=3)\n -> Parallel Seq Scan on product_ui_em16 pui_10\n(cost=0.00..2603849.81 rows=3 width=112) (actual\ntime=3048.850..4790.105 rows=0 loops=3)\n Filter: (((mission_id)::text =\n'urn:esa:psa:context:investigation:mission.em16'::text) AND\n((logical_identifier)::text =\n'urn:esa:psa:em16_tgo_frd:data_raw:frd_raw_sc_n_20220729t000000-20220729t235959'::text))\n Rows Removed by Filter: 5337106\n -> Parallel Seq Scan on product_ui_rosetta pui_6\n(cost=0.00..2382752.79 rows=1 width=57) (actual\ntime=6070.946..6070.946 rows=0 loops=2)\n Filter: (((mission_id)::text =\n'urn:esa:psa:context:investigation:mission.em16'::text) AND\n((logical_identifier)::text =\n'urn:esa:psa:em16_tgo_frd:data_raw:frd_raw_sc_n_20220729t000000-20220729t235959'::text))\n Rows Removed by Filter: 5878988\n -> Parallel Seq Scan on product_ui_mex pui_7\n(cost=0.00..354434.31 rows=1 width=56) (actual time=1280.037..1280.037\nrows=0 loops=1)\n Filter: (((mission_id)::text =\n'urn:esa:psa:context:investigation:mission.em16'::text) AND\n((logical_identifier)::text =\n'urn:esa:psa:em16_tgo_frd:data_raw:frd_raw_sc_n_20220729t000000-20220729t235959'::text))\n Rows Removed by Filter: 
1085822\n -> Parallel Seq Scan on product_ui_smart1 pui_8\n(cost=0.00..148741.56 rows=1 width=63) (actual time=1045.532..1045.533\nrows=0 loops=1)\n Filter: (((mission_id)::text =\n'urn:esa:psa:context:investigation:mission.em16'::text) AND\n((logical_identifier)::text =\n'urn:esa:psa:em16_tgo_frd:data_raw:frd_raw_sc_n_20220729t000000-20220729t235959'::text))\n Rows Removed by Filter: 1187167\n -> Parallel Seq Scan on product_ui_vex pui_13\n(cost=0.00..112914.84 rows=1 width=56) (actual time=968.542..968.542\nrows=0 loops=1)\n Filter: (((mission_id)::text =\n'urn:esa:psa:context:investigation:mission.em16'::text) AND\n((logical_identifier)::text =\n'urn:esa:psa:em16_tgo_frd:data_raw:frd_raw_sc_n_20220729t000000-20220729t235959'::text))\n Rows Removed by Filter: 1133382\n -> Parallel Seq Scan on product_ui_bc pui_9\n(cost=0.00..8890.52 rows=1 width=83) (actual time=88.747..88.747\nrows=0 loops=1)\n Filter: (((mission_id)::text =\n'urn:esa:psa:context:investigation:mission.em16'::text) AND\n((logical_identifier)::text =\n'urn:esa:psa:em16_tgo_frd:data_raw:frd_raw_sc_n_20220729t000000-20220729t235959'::text))\n Rows Removed by Filter: 67763\n -> Parallel Seq Scan on product_ui_ch1 pui_2\n(cost=0.00..2224.51 rows=1 width=59) (actual time=23.304..23.304\nrows=0 loops=1)\n Filter: (((mission_id)::text =\n'urn:esa:psa:context:investigation:mission.em16'::text) AND\n((logical_identifier)::text =\n'urn:esa:psa:em16_tgo_frd:data_raw:frd_raw_sc_n_20220729t000000-20220729t235959'::text))\n Rows Removed by Filter: 20345\n -> Parallel Seq Scan on product_ui_huygens pui_1\n(cost=0.00..2090.82 rows=1 width=70) (actual time=21.786..21.786\nrows=0 loops=1)\n Filter: (((mission_id)::text =\n'urn:esa:psa:context:investigation:mission.em16'::text) AND\n((logical_identifier)::text =\n'urn:esa:psa:em16_tgo_frd:data_raw:frd_raw_sc_n_20220729t000000-20220729t235959'::text))\n Rows Removed by Filter: 19700\n -> Parallel Seq Scan on product_ui_ground_based pui_3\n(cost=0.00..260.74 rows=1 
width=75) (actual time=2.615..2.616 rows=0\nloops=1)\n Filter: (((mission_id)::text =\n'urn:esa:psa:context:investigation:mission.em16'::text) AND\n((logical_identifier)::text =\n'urn:esa:psa:em16_tgo_frd:data_raw:frd_raw_sc_n_20220729t000000-20220729t235959'::text))\n Rows Removed by Filter: 2237\n -> Parallel Seq Scan on product_ui_giotto pui_4\n(cost=0.00..209.79 rows=1 width=47) (actual time=2.188..2.189 rows=0\nloops=1)\n Filter: (((mission_id)::text =\n'urn:esa:psa:context:investigation:mission.em16'::text) AND\n((logical_identifier)::text =\n'urn:esa:psa:em16_tgo_frd:data_raw:frd_raw_sc_n_20220729t000000-20220729t235959'::text))\n Rows Removed by Filter: 2129\n -> Parallel Seq Scan on product_ui_juice pui_12\n(cost=0.00..13.88 rows=1 width=67) (actual time=0.132..0.132 rows=0\nloops=1)\n Filter: (((mission_id)::text =\n'urn:esa:psa:context:investigation:mission.em16'::text) AND\n((logical_identifier)::text =\n'urn:esa:psa:em16_tgo_frd:data_raw:frd_raw_sc_n_20220729t000000-20220729t235959'::text))\n Rows Removed by Filter: 100\n -> Parallel Seq Scan on product_ui_emrsp pui_11\n(cost=0.00..10.26 rows=1 width=65) (actual time=0.000..0.000 rows=0\nloops=1)\n Filter: (((mission_id)::text =\n'urn:esa:psa:context:investigation:mission.em16'::text) AND\n((logical_identifier)::text =\n'urn:esa:psa:em16_tgo_frd:data_raw:frd_raw_sc_n_20220729t000000-20220729t235959'::text))\n -> Parallel Seq Scan on product_ui_hubble pui_5\n(cost=0.00..3.29 rows=1 width=62) (actual time=0.047..0.047 rows=0\nloops=1)\n Filter: (((mission_id)::text =\n'urn:esa:psa:context:investigation:mission.em16'::text) AND\n((logical_identifier)::text =\n'urn:esa:psa:em16_tgo_frd:data_raw:frd_raw_sc_n_20220729t000000-20220729t235959'::text))\n Rows Removed by Filter: 33\nPlanning Time: 16.874 ms\nExecution Time: 9999.496 ms\nElapsed: 10122 ms\n\n\nSource code of the test is attached in the mail. 
It provides some other\ntest cases as:\n\n - Query other non partitioned table, with correct performance\n - Query the correct partitioned table according to the partition key\n column (mission_id). In this case only that table is scanned, but again,\n using PARALLEL SEQ SCAN and not the INDEX SCAN\n\nCould you please provide any hint on the possible reasons of this behavior\nand the performance degradation that is affecting only the JAVA client.\n\nBest regards,\n\nJose Osinde",
"msg_date": "Thu, 25 Aug 2022 10:49:51 +0200",
"msg_from": "Jose Osinde <jose.osinde@gmail.com>",
"msg_from_op": true,
"msg_subject": "Select on partitioned table is very slow"
},
{
"msg_contents": "Hi,\n\nOn Thu, Aug 25, 2022 at 10:49:51AM +0200, Jose Osinde wrote:\n> select logical_identifier, version_id, lastproduct\n> from test_product_ui_partition.product_ui pui\n> where pui.mission_id='urn:esa:psa:context:investigation:mission.em16'\n> and pui.logical_identifier='urn:esa:psa:em16_tgo_frd:data_raw:frd_raw_sc_n_20220729t000000-20220729t235959'\n\n> EXPLAIN ANALYZE FROM PGADMIN\n> \n> Index Scan using product_ui_em16_logical_identifier_idx on\n> product_ui_em16 pui (cost=0.69..19.75 rows=7 width=112) (actual\n> time=0.133..0.134 rows=1 loops=1)\n> [...] Index Cond: (logical_identifier =\n> 'urn:esa:psa:em16_tgo_frd:data_raw:frd_raw_sc_n_20220729t000000-20220729t235959'::citext)\"\n> [...] Filter: (mission_id =\n> 'urn:esa:psa:context:investigation:mission.em16'::citext)\"\n> Planning Time: 0.237 ms\n> Execution Time: 0.149 ms\n\nI really wish you didn't butcher explains like this, but we can work\nwith it.\n\nPlease note that the condition for filter is:\n\nmission_id = 'urn:esa:psa:context:investigation:mission.em16'::citext\n\nSpecifically, column mission_id (which is partition key) is compared\nwith some value that is in citext type - same as column.\nThis means that pg can take this value, compare with partitioning\nschema, and pick one partition.\n\nNow look at the explain from java:\n\n> Filter: (((mission_id)::text =\n> 'urn:esa:psa:context:investigation:mission.em16'::text) AND\n\nThe rest is irrelevant.\n\nThe important part is that java sent query that doesn't compare value of\ncolumn mission_id with some value, but rather compares *cast* of the\ncolumn.\n\nSince it's not column value, then partitioning can't check what's going\non (cast can just as well make it totally different value), and it also\ncan't really use index on mission_id.\n\nWhy it happens - no idea, sorry, I don't grok java.\n\nBut you should be able to test/work on fix with simple, non-partitioned\ntable, just make there citext column, and try searching for value 
in it,\nand check explain from the search. If it will cast column - it's no\ngood.\n\nSorry I can't tell you what to fix, but perhaps this will be enough for\nyou to find solution.\n\nBest regards,\n\ndepesz\n\n\n\n",
"msg_date": "Thu, 25 Aug 2022 11:10:14 +0200",
"msg_from": "hubert depesz lubaczewski <depesz@depesz.com>",
"msg_from_op": false,
"msg_subject": "Re: Select on partitioned table is very slow"
},
{
"msg_contents": "On Thu, 2022-08-25 at 11:10 +0200, hubert depesz lubaczewski wrote:\n> Hi,\n> \n> On Thu, Aug 25, 2022 at 10:49:51AM +0200, Jose Osinde wrote:\n> > select logical_identifier, version_id, lastproduct\n> > from test_product_ui_partition.product_ui pui\n> > where pui.mission_id='urn:esa:psa:context:investigation:mission.em16'\n> > and pui.logical_identifier='urn:esa:psa:em16_tgo_frd:data_raw:frd_raw_sc_n_20220729t000000-20220729t235959'\n> \n> > EXPLAIN ANALYZE FROM PGADMIN\n> > \n> > Index Scan using product_ui_em16_logical_identifier_idx on\n> > product_ui_em16 pui (cost=0.69..19.75 rows=7 width=112) (actual\n> > time=0.133..0.134 rows=1 loops=1)\n> > [...] Index Cond: (logical_identifier =\n> > 'urn:esa:psa:em16_tgo_frd:data_raw:frd_raw_sc_n_20220729t000000-20220729t235959'::citext)\"\n> > [...] Filter: (mission_id =\n> > 'urn:esa:psa:context:investigation:mission.em16'::citext)\"\n> > Planning Time: 0.237 ms\n> > Execution Time: 0.149 ms\n> \n> I really wish you didn't butcher explains like this, but we can work\n> with it.\n> \n> Please note that the condition for filter is:\n> \n> mission_id = 'urn:esa:psa:context:investigation:mission.em16'::citext\n> \n> Specifically, column mission_id (which is partition key) is compared\n> with some value that is in citext type - same as column.\n> This means that pg can take this value, compare with partitioning\n> schema, and pick one partition.\n> \n> Now look at the explain from java:\n> \n> > Filter: (((mission_id)::text =\n> > 'urn:esa:psa:context:investigation:mission.em16'::text) AND\n> \n> The rest is irrelevant.\n> \n> The important part is that java sent query that doesn't compare value of\n> column mission_id with some value, but rather compares *cast* of the\n> column.\n> \n> Since it's not column value, then partitioning can't check what's going\n> on (cast can just as well make it totally different value), and it also\n> can't really use index on mission_id.\n> \n> Why it happens - no idea, 
sorry, I don't grok java.\n> \n> But you should be able to test/work on fix with simple, non-partitioned\n> table, just make there citext column, and try searching for value in it,\n> and check explain from the search. If it will cast column - it's no\n> good.\n> \n> Sorry I can't tell you what to fix, but perhaps this will be enough for\n> you to find solution.\n\nQuite so.\n\nYou are probably using a prepared statement in JDBC.\n\nYou probably have to use explicit type casts, like:\n\nselect logical_identifier, version_id, lastproduct \n from test_product_ui_partition.product_ui pui \n where pui.mission_id = ? :: citext\n and pui.logical_identifier = ? :: citext\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Thu, 25 Aug 2022 12:00:04 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Select on partitioned table is very slow"
},
{
"msg_contents": "Dear Depesz, Laurenz,\n\nThanks very much for the fast responses. They are actually correct and\nsaved me a lot of time. I couldn't test the cast from the Java test but\nthis is something I can deal with later on (most probably updating the\ncolumn types to text in the database side instead). But what I could do was\nreproduce the same problem in the psql console using the cast in the other\nway. This sentence:\n\nexplain analyze select logical_identifier, version_id, lastproduct\n FROM test_product_ui_partition.product_ui pui\n WHERE\npui.mission_id='urn:esa:psa:context:investigation:mission.em16'::citext\n AND\npui.logical_identifier='urn:esa:psa:em16_tgo_frd:data_raw:frd_raw_sc_n_20220729t000000-20220729t235959'::citext;\n\nCreates an output equivalent to that returned from the JAVA application and\nreproduces the exact same problems: Scans all the partitions instead of\nselect the right one and uses sec scans for all the cases.\nAttached the result.\n\nAgain, many thanks for your help,\nJose Osinde",
"msg_date": "Thu, 25 Aug 2022 15:42:46 +0200",
"msg_from": "Jose Osinde <jose.osinde@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Select on partitioned table is very slow"
},
{
"msg_contents": "Em qui., 25 de ago. de 2022 às 10:43, Jose Osinde <jose.osinde@gmail.com>\nescreveu:\n\n>\n> Dear Depesz, Laurenz,\n>\n> Thanks very much for the fast responses. They are actually correct and\n> saved me a lot of time. I couldn't test the cast from the Java test but\n> this is something I can deal with later on (most probably updating the\n> column types to text in the database side instead). But what I could do was\n> reproduce the same problem in the psql console using the cast in the other\n> way. This sentence:\n>\n> explain analyze select logical_identifier, version_id, lastproduct\n> FROM test_product_ui_partition.product_ui pui\n> WHERE\n> pui.mission_id='urn:esa:psa:context:investigation:mission.em16'::citext\n> AND\n> pui.logical_identifier='urn:esa:psa:em16_tgo_frd:data_raw:frd_raw_sc_n_20220729t000000-20220729t235959'::citext;\n>\nThe query in explain.txt attached, it seems not the same.\n\nexplain analyze select logical_identifier, version_id, lastproduct\n FROM test_product_ui_partition.product_ui pui\n WHERE pui.mission_id='urn:esa:psa:context:investigation:mission.em16'::text\n AND pui.logical_identifier='urn:esa:psa:em16_tgo_frd:data_raw:frd_raw_sc_n_20220729t000000-20220729t235959'::text;\n\n::text?\n\nregards,\n\nRanier Vilela\n\nEm qui., 25 de ago. de 2022 às 10:43, Jose Osinde <jose.osinde@gmail.com> escreveu:Dear Depesz, Laurenz,Thanks very much for the fast responses. They are actually correct and saved me a lot of time. I couldn't test the cast from the Java test but this is something I can deal with later on (most probably updating the column types to text in the database side instead). But what I could do was reproduce the same problem in the psql console using the cast in the other way. 
This sentence:explain analyze select logical_identifier, version_id, lastproduct FROM test_product_ui_partition.product_ui pui WHERE pui.mission_id='urn:esa:psa:context:investigation:mission.em16'::citext AND pui.logical_identifier='urn:esa:psa:em16_tgo_frd:data_raw:frd_raw_sc_n_20220729t000000-20220729t235959'::citext;The query in explain.txt attached, it seems not the same.\nexplain analyze select logical_identifier, version_id, lastproduct \n FROM test_product_ui_partition.product_ui pui \n WHERE pui.mission_id='urn:esa:psa:context:investigation:mission.em16'::text \n AND pui.logical_identifier='urn:esa:psa:em16_tgo_frd:data_raw:frd_raw_sc_n_20220729t000000-20220729t235959'::text; ::text?regards,Ranier Vilela",
"msg_date": "Thu, 25 Aug 2022 10:48:06 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Select on partitioned table is very slow"
},
{
"msg_contents": "You are right but the correct query is the one in the attached file. What\nwe want to do here is to force psql to send the \"wrong data types\" to\npostgres and, as a result of this, get a bad plan.\n\nCheers,\nJose Osinde\n\nOn Thu, Aug 25, 2022 at 3:48 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n\n> Em qui., 25 de ago. de 2022 às 10:43, Jose Osinde <jose.osinde@gmail.com>\n> escreveu:\n>\n>>\n>> Dear Depesz, Laurenz,\n>>\n>> Thanks very much for the fast responses. They are actually correct and\n>> saved me a lot of time. I couldn't test the cast from the Java test but\n>> this is something I can deal with later on (most probably updating the\n>> column types to text in the database side instead). But what I could do was\n>> reproduce the same problem in the psql console using the cast in the other\n>> way. This sentence:\n>>\n>> explain analyze select logical_identifier, version_id, lastproduct\n>> FROM test_product_ui_partition.product_ui pui\n>> WHERE\n>> pui.mission_id='urn:esa:psa:context:investigation:mission.em16'::citext\n>> AND\n>> pui.logical_identifier='urn:esa:psa:em16_tgo_frd:data_raw:frd_raw_sc_n_20220729t000000-20220729t235959'::citext;\n>>\n> The query in explain.txt attached, it seems not the same.\n>\n> explain analyze select logical_identifier, version_id, lastproduct\n> FROM test_product_ui_partition.product_ui pui\n> WHERE pui.mission_id='urn:esa:psa:context:investigation:mission.em16'::text\n> AND pui.logical_identifier='urn:esa:psa:em16_tgo_frd:data_raw:frd_raw_sc_n_20220729t000000-20220729t235959'::text;\n>\n> ::text?\n>\n> regards,\n>\n> Ranier Vilela\n>\n>\n\nYou are right but the correct query is the one in the attached file. What we want to do here is to force psql to send the \"wrong data types\" to postgres and, as a result of this, get a bad plan.Cheers,Jose OsindeOn Thu, Aug 25, 2022 at 3:48 PM Ranier Vilela <ranier.vf@gmail.com> wrote:Em qui., 25 de ago. 
de 2022 às 10:43, Jose Osinde <jose.osinde@gmail.com> escreveu:Dear Depesz, Laurenz,Thanks very much for the fast responses. They are actually correct and saved me a lot of time. I couldn't test the cast from the Java test but this is something I can deal with later on (most probably updating the column types to text in the database side instead). But what I could do was reproduce the same problem in the psql console using the cast in the other way. This sentence:explain analyze select logical_identifier, version_id, lastproduct FROM test_product_ui_partition.product_ui pui WHERE pui.mission_id='urn:esa:psa:context:investigation:mission.em16'::citext AND pui.logical_identifier='urn:esa:psa:em16_tgo_frd:data_raw:frd_raw_sc_n_20220729t000000-20220729t235959'::citext;The query in explain.txt attached, it seems not the same.\nexplain analyze select logical_identifier, version_id, lastproduct \n FROM test_product_ui_partition.product_ui pui \n WHERE pui.mission_id='urn:esa:psa:context:investigation:mission.em16'::text \n AND pui.logical_identifier='urn:esa:psa:em16_tgo_frd:data_raw:frd_raw_sc_n_20220729t000000-20220729t235959'::text; ::text?regards,Ranier Vilela",
"msg_date": "Thu, 25 Aug 2022 15:52:30 +0200",
"msg_from": "Jose Osinde <jose.osinde@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Select on partitioned table is very slow"
}
] |
[
{
"msg_contents": "We run same update or delete SQL statement \" DELETE FROM ... WHERE ... \" the table is a hash partition table (256 hash partitions). When run the sql from Postgresql JDBC driver, it soon increased to 150MB memory (RES filed from top command), but when run the same SQL from psql , it only consumes about 10MB memory. UPDATE statements is similar , need 100MB memory, even it delete or update 0 rows. Any specific control about Postgresql JDBC driver ?\n\nThanks,\n\nJames\n\n\n\n\n\n\n\n\n\n\n\n We run same update or delete SQL statement “ DELETE FROM … WHERE … “ the table is a hash partition table (256 hash partitions). When run the sql from Postgresql JDBC driver, it soon increased to 150MB memory (RES filed from top command), \n but when run the same SQL from psql , it only consumes about 10MB memory. UPDATE statements is similar , need 100MB memory, even it delete or update 0 rows. Any specific control about Postgresql JDBC driver ?\n\n \nThanks,\n \nJames",
"msg_date": "Mon, 5 Sep 2022 12:40:46 +0000",
"msg_from": "\"James Pang (chaolpan)\" <chaolpan@cisco.com>",
"msg_from_op": true,
"msg_subject": "Postgresql JDBC process consumes more memory than psql client"
},
{
"msg_contents": "On Mon, Sep 05, 2022 at 12:40:46PM +0000, James Pang (chaolpan) wrote:\n> We run same update or delete SQL statement \" DELETE FROM ... WHERE ... \" the table is a hash partition table (256 hash partitions). When run the sql from Postgresql JDBC driver, it soon increased to 150MB memory (RES filed from top command), but when run the same SQL from psql , it only consumes about 10MB memory. UPDATE statements is similar , need 100MB memory, even it delete or update 0 rows. Any specific control about Postgresql JDBC driver ?\n\nIt sounds like JDBC is using prepared statements, and partitions maybe\nweren't pruned by the server. What is the query plan from psql vs from\njdbc ?\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nWhat version is the postgres server ?\nThat affects pruning as well as memory use.\n\nhttps://www.postgresql.org/docs/14/release-14.html\nImprove the performance of updates and deletes on partitioned tables\nwith many partitions (Amit Langote, Tom Lane)\n\nThis change greatly reduces the planner's overhead for such cases, and\nalso allows updates/deletes on partitioned tables to use execution-time\npartition pruning.\n\nActually, this is about the same response as when you asked in June,\nexcept that was about UPDATE.\nhttps://www.postgresql.org/message-id/PH0PR11MB519134D4171A126776E3E063D6B89@PH0PR11MB5191.namprd11.prod.outlook.com\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 5 Sep 2022 07:47:22 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql JDBC process consumes more memory than psql client"
},
{
"msg_contents": "PG V13, yes JDBC use prepared statements , from psql use pruned ,but even all partitions it NOT consumes too much memory. Any idea how to print SQL plan from JDBC driver ? \n\n-----Original Message-----\nFrom: Justin Pryzby <pryzby@telsasoft.com> \nSent: Monday, September 5, 2022 8:47 PM\nTo: James Pang (chaolpan) <chaolpan@cisco.com>\nCc: pgsql-performance@lists.postgresql.org\nSubject: Re: Postgresql JDBC process consumes more memory than psql client\n\nOn Mon, Sep 05, 2022 at 12:40:46PM +0000, James Pang (chaolpan) wrote:\n> We run same update or delete SQL statement \" DELETE FROM ... WHERE ... \" the table is a hash partition table (256 hash partitions). When run the sql from Postgresql JDBC driver, it soon increased to 150MB memory (RES filed from top command), but when run the same SQL from psql , it only consumes about 10MB memory. UPDATE statements is similar , need 100MB memory, even it delete or update 0 rows. Any specific control about Postgresql JDBC driver ?\n\nIt sounds like JDBC is using prepared statements, and partitions maybe weren't pruned by the server. What is the query plan from psql vs from jdbc ?\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nWhat version is the postgres server ?\nThat affects pruning as well as memory use.\n\nhttps://www.postgresql.org/docs/14/release-14.html\nImprove the performance of updates and deletes on partitioned tables with many partitions (Amit Langote, Tom Lane)\n\nThis change greatly reduces the planner's overhead for such cases, and also allows updates/deletes on partitioned tables to use execution-time partition pruning.\n\nActually, this is about the same response as when you asked in June, except that was about UPDATE.\nhttps://www.postgresql.org/message-id/PH0PR11MB519134D4171A126776E3E063D6B89@PH0PR11MB5191.namprd11.prod.outlook.com\n\n--\nJustin\n\n\n",
"msg_date": "Mon, 5 Sep 2022 12:52:14 +0000",
"msg_from": "\"James Pang (chaolpan)\" <chaolpan@cisco.com>",
"msg_from_op": true,
"msg_subject": "RE: Postgresql JDBC process consumes more memory than psql client"
},
{
"msg_contents": "On Mon, Sep 05, 2022 at 12:52:14PM +0000, James Pang (chaolpan) wrote:\n> Any idea how to print SQL plan from JDBC driver ? \n\nYou could use \"explain execute\" on the client, or autoexplain on the\nserver-side.\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 5 Sep 2022 08:26:58 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql JDBC process consumes more memory than psql client"
},
{
"msg_contents": "We make 2 comparisions between partitioned(256 HASH) and no-partitioned(same data volume,same table attributes) , do same \"UPDATE,DELETE \" .\n 1. with partitioned tables , the \"RES\" from top command memory increased quickly to 160MB and keep stable there. \n From auto_explain trace, we did saw partition pruning to specific partition when execution the prepared sql statement by Postgresql JDBC .\n2. with no-partitioned tables, the \"RES\" from top command memory only keep 24MB stable there. \n Same auto_explain , and only table and index scan there by prepared sql statement by Postgresql JDBC. \n3. with psql client , run the UPDATE/DELETE sql locally, partition pruning works and the \"RES\" memory\" is much less, it's about 9MB . \n\nYesterday, when workload test, a lot of Postgresql JDBC connections use 150-160MB memory , so we got ERROR: out of memory\n Detail: Failed on request of size 240 in memory context \"MessageContext\". And other non-postgresql process like top command even failed into no-memory error. \n\nSo, looks like something with Postgresql JDBC driver lead to the high memory consumption when table is partitioned , even when table is no partitioned , compared with psql client, it consumes more memory. Any suggestions to tune that ? PG V13 , OS RHEL8 , Virtua machine on VMWARE. We make shared_buffers=36% physical memory , effective_cache_size=70%physical memory , total physical memory is about 128GB.\n\nThanks,\n\nJames\n\n\n-----Original Message-----\nFrom: James Pang (chaolpan) <chaolpan@cisco.com> \nSent: Monday, September 5, 2022 8:52 PM\nTo: Justin Pryzby <pryzby@telsasoft.com>\nCc: pgsql-performance@lists.postgresql.org\nSubject: RE: Postgresql JDBC process consumes more memory than psql client\n\nPG V13, yes JDBC use prepared statements , from psql use pruned ,but even all partitions it NOT consumes too much memory. Any idea how to print SQL plan from JDBC driver ? 
\n\n-----Original Message-----\nFrom: Justin Pryzby <pryzby@telsasoft.com> \nSent: Monday, September 5, 2022 8:47 PM\nTo: James Pang (chaolpan) <chaolpan@cisco.com>\nCc: pgsql-performance@lists.postgresql.org\nSubject: Re: Postgresql JDBC process consumes more memory than psql client\n\nOn Mon, Sep 05, 2022 at 12:40:46PM +0000, James Pang (chaolpan) wrote:\n> We run same update or delete SQL statement \" DELETE FROM ... WHERE ... \" the table is a hash partition table (256 hash partitions). When run the sql from Postgresql JDBC driver, it soon increased to 150MB memory (RES filed from top command), but when run the same SQL from psql , it only consumes about 10MB memory. UPDATE statements is similar , need 100MB memory, even it delete or update 0 rows. Any specific control about Postgresql JDBC driver ?\n\nIt sounds like JDBC is using prepared statements, and partitions maybe weren't pruned by the server. What is the query plan from psql vs from jdbc ?\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nWhat version is the postgres server ?\nThat affects pruning as well as memory use.\n\nhttps://www.postgresql.org/docs/14/release-14.html\nImprove the performance of updates and deletes on partitioned tables with many partitions (Amit Langote, Tom Lane)\n\nThis change greatly reduces the planner's overhead for such cases, and also allows updates/deletes on partitioned tables to use execution-time partition pruning.\n\nActually, this is about the same response as when you asked in June, except that was about UPDATE.\nhttps://www.postgresql.org/message-id/PH0PR11MB519134D4171A126776E3E063D6B89@PH0PR11MB5191.namprd11.prod.outlook.com\n\n--\nJustin\n\n\n\n\n",
"msg_date": "Tue, 6 Sep 2022 04:15:03 +0000",
"msg_from": "\"James Pang (chaolpan)\" <chaolpan@cisco.com>",
"msg_from_op": true,
"msg_subject": "RE: Postgresql JDBC process consumes more memory than psql client"
},
{
"msg_contents": "On Tue, Sep 06, 2022 at 04:15:03AM +0000, James Pang (chaolpan) wrote:\n> We make 2 comparisions between partitioned(256 HASH) and no-partitioned(same data volume,same table attributes) , do same \"UPDATE,DELETE \" .\n> 1. with partitioned tables , the \"RES\" from top command memory increased quickly to 160MB and keep stable there. \n> From auto_explain trace, we did saw partition pruning to specific partition when execution the prepared sql statement by Postgresql JDBC .\n> 2. with no-partitioned tables, the \"RES\" from top command memory only keep 24MB stable there. \n> Same auto_explain , and only table and index scan there by prepared sql statement by Postgresql JDBC. \n> 3. with psql client , run the UPDATE/DELETE sql locally, partition pruning works and the \"RES\" memory\" is much less, it's about 9MB . \n> \n> Yesterday, when workload test, a lot of Postgresql JDBC connections use 150-160MB memory , so we got ERROR: out of memory\n\nHow many JDBC clients were there?\n\nDid you use the same number of clients when you used psql ?\nOtherwise it wasn't a fair test.\n\nAlso, did you try using psql with PREPARE+EXECUTE ? I imagine memory\nuse would match JDBC.\n\nIt's probably not important, but if you set the log level high enough,\nyou could log memory use more accurately using log_executor_stats\n(maxrss).\n\n> So, looks like something with Postgresql JDBC driver lead to the high memory consumption when table is partitioned , even when table is no partitioned , compared with psql client, it consumes more memory. Any suggestions to tune that ? PG V13 , OS RHEL8 , Virtua machine on VMWARE. 
We make shared_buffers=36% physical memory , effective_cache_size=70%physical memory , total physical memory is about 128GB.\n\nI sent this before hoping to get answers to all the most common\nquestions earlier, rather than being spread out over the first handful\nof emails.\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nversion 13 point what ?\nwhat are the other non-default gucs ?\nwhat are the query plans ?\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 6 Sep 2022 11:15:12 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql JDBC process consumes more memory than psql client"
},
{
"msg_contents": "Yes, same prepared statement from both psql and JDBC. We started to compare with one by one, and see big difference as explained. Psql and JDBC show big difference. Let's focuse on JDBC driver client ,why it consumes 160MB memory even table size is very small. \n\n-----Original Message-----\nFrom: Justin Pryzby <pryzby@telsasoft.com> \nSent: Wednesday, September 7, 2022 12:15 AM\nTo: James Pang (chaolpan) <chaolpan@cisco.com>\nCc: pgsql-performance@lists.postgresql.org\nSubject: Re: Postgresql JDBC process consumes more memory than psql client\n\nOn Tue, Sep 06, 2022 at 04:15:03AM +0000, James Pang (chaolpan) wrote:\n> We make 2 comparisions between partitioned(256 HASH) and no-partitioned(same data volume,same table attributes) , do same \"UPDATE,DELETE \" .\n> 1. with partitioned tables , the \"RES\" from top command memory increased quickly to 160MB and keep stable there. \n> From auto_explain trace, we did saw partition pruning to specific partition when execution the prepared sql statement by Postgresql JDBC .\n> 2. with no-partitioned tables, the \"RES\" from top command memory only keep 24MB stable there. \n> Same auto_explain , and only table and index scan there by prepared sql statement by Postgresql JDBC. \n> 3. with psql client , run the UPDATE/DELETE sql locally, partition pruning works and the \"RES\" memory\" is much less, it's about 9MB . \n> \n> Yesterday, when workload test, a lot of Postgresql JDBC connections \n> use 150-160MB memory , so we got ERROR: out of memory\n\nHow many JDBC clients were there?\n\nDid you use the same number of clients when you used psql ?\nOtherwise it wasn't a fair test.\n\nAlso, did you try using psql with PREPARE+EXECUTE ? 
I imagine memory use would match JDBC.\n\nIt's probably not important, but if you set the log level high enough, you could log memory use more accurately using log_executor_stats (maxrss).\n\n> So, looks like something with Postgresql JDBC driver lead to the high memory consumption when table is partitioned , even when table is no partitioned , compared with psql client, it consumes more memory. Any suggestions to tune that ? PG V13 , OS RHEL8 , Virtua machine on VMWARE. We make shared_buffers=36% physical memory , effective_cache_size=70%physical memory , total physical memory is about 128GB.\n\nI sent this before hoping to get answers to all the most common questions earlier, rather than being spread out over the first handful of emails.\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nversion 13 point what ?\nwhat are the other non-default gucs ?\nwhat are the query plans ?\n\n--\nJustin\n\n\n",
"msg_date": "Wed, 7 Sep 2022 00:05:13 +0000",
"msg_from": "\"James Pang (chaolpan)\" <chaolpan@cisco.com>",
"msg_from_op": true,
"msg_subject": "RE: Postgresql JDBC process consumes more memory than psql client"
},
{
"msg_contents": " Yes, same prepared statement from both psql and JDBC. We started to compare with one by one, and see big difference as explained. Psql and JDBC show big difference. Let's focuse on JDBC driver client ,why it consumes 160MB memory even table size is very small. But only consumes 25MB for non-partitioned tables with same table attributes and data volume size.\n\n-----Original Message-----\nFrom: Justin Pryzby <pryzby@telsasoft.com>\nSent: Wednesday, September 7, 2022 12:15 AM\nTo: James Pang (chaolpan) <chaolpan@cisco.com>\nCc: pgsql-performance@lists.postgresql.org\nSubject: Re: Postgresql JDBC process consumes more memory than psql client\n\nOn Tue, Sep 06, 2022 at 04:15:03AM +0000, James Pang (chaolpan) wrote:\n> We make 2 comparisions between partitioned(256 HASH) and no-partitioned(same data volume,same table attributes) , do same \"UPDATE,DELETE \" .\n> 1. with partitioned tables , the \"RES\" from top command memory increased quickly to 160MB and keep stable there. \n> From auto_explain trace, we did saw partition pruning to specific partition when execution the prepared sql statement by Postgresql JDBC .\n> 2. with no-partitioned tables, the \"RES\" from top command memory only keep 24MB stable there. \n> Same auto_explain , and only table and index scan there by prepared sql statement by Postgresql JDBC. \n> 3. with psql client , run the UPDATE/DELETE sql locally, partition pruning works and the \"RES\" memory\" is much less, it's about 9MB . \n> \n> Yesterday, when workload test, a lot of Postgresql JDBC connections \n> use 150-160MB memory , so we got ERROR: out of memory\n\nHow many JDBC clients were there?\n\nDid you use the same number of clients when you used psql ?\nOtherwise it wasn't a fair test.\n\nAlso, did you try using psql with PREPARE+EXECUTE ? 
I imagine memory use would match JDBC.\n\nIt's probably not important, but if you set the log level high enough, you could log memory use more accurately using log_executor_stats (maxrss).\n\n> So, looks like something with Postgresql JDBC driver lead to the high memory consumption when table is partitioned , even when table is no partitioned , compared with psql client, it consumes more memory. Any suggestions to tune that ? PG V13 , OS RHEL8 , Virtua machine on VMWARE. We make shared_buffers=36% physical memory , effective_cache_size=70%physical memory , total physical memory is about 128GB.\n\nI sent this before hoping to get answers to all the most common questions earlier, rather than being spread out over the first handful of emails.\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nversion 13 point what ?\nwhat are the other non-default gucs ?\nwhat are the query plans ?\n\n--\nJustin\n\n\n",
"msg_date": "Wed, 7 Sep 2022 00:10:09 +0000",
"msg_from": "\"James Pang (chaolpan)\" <chaolpan@cisco.com>",
"msg_from_op": true,
"msg_subject": "RE: Postgresql JDBC process consumes more memory with partition\n tables update delete"
},
{
"msg_contents": "Hi ,\n Looks like it's Postgresql JDBC driver related client.\n\n-----Original Message-----\nFrom: James Pang (chaolpan) \nSent: Wednesday, September 7, 2022 8:10 AM\nTo: 'Justin Pryzby' <pryzby@telsasoft.com>\nCc: 'pgsql-performance@lists.postgresql.org' <pgsql-performance@lists.postgresql.org>\nSubject: RE: Postgresql JDBC process consumes more memory with partition tables update delete\n\n Yes, same prepared statement from both psql and JDBC. We started to compare with one by one, and see big difference as explained. Psql and JDBC show big difference. Let's focuse on JDBC driver client ,why it consumes 160MB memory even table size is very small. But only consumes 25MB for non-partitioned tables with same table attributes and data volume size.\n\n-----Original Message-----\nFrom: Justin Pryzby <pryzby@telsasoft.com>\nSent: Wednesday, September 7, 2022 12:15 AM\nTo: James Pang (chaolpan) <chaolpan@cisco.com>\nCc: pgsql-performance@lists.postgresql.org\nSubject: Re: Postgresql JDBC process consumes more memory than psql client\n\nOn Tue, Sep 06, 2022 at 04:15:03AM +0000, James Pang (chaolpan) wrote:\n> We make 2 comparisions between partitioned(256 HASH) and no-partitioned(same data volume,same table attributes) , do same \"UPDATE,DELETE \" .\n> 1. with partitioned tables , the \"RES\" from top command memory increased quickly to 160MB and keep stable there. \n> From auto_explain trace, we did saw partition pruning to specific partition when execution the prepared sql statement by Postgresql JDBC .\n> 2. with no-partitioned tables, the \"RES\" from top command memory only keep 24MB stable there. \n> Same auto_explain , and only table and index scan there by prepared sql statement by Postgresql JDBC. \n> 3. with psql client , run the UPDATE/DELETE sql locally, partition pruning works and the \"RES\" memory\" is much less, it's about 9MB . 
\n> \n> Yesterday, when workload test, a lot of Postgresql JDBC connections \n> use 150-160MB memory , so we got ERROR: out of memory\n\nHow many JDBC clients were there?\n\nDid you use the same number of clients when you used psql ?\nOtherwise it wasn't a fair test.\n\nAlso, did you try using psql with PREPARE+EXECUTE ? I imagine memory use would match JDBC.\n\nIt's probably not important, but if you set the log level high enough, you could log memory use more accurately using log_executor_stats (maxrss).\n\n> So, looks like something with Postgresql JDBC driver lead to the high memory consumption when table is partitioned , even when table is no partitioned , compared with psql client, it consumes more memory. Any suggestions to tune that ? PG V13 , OS RHEL8 , Virtua machine on VMWARE. We make shared_buffers=36% physical memory , effective_cache_size=70%physical memory , total physical memory is about 128GB.\n\nI sent this before hoping to get answers to all the most common questions earlier, rather than being spread out over the first handful of emails.\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nversion 13 point what ?\nwhat are the other non-default gucs ?\nwhat are the query plans ?\n\n--\nJustin\n\n\n",
"msg_date": "Thu, 8 Sep 2022 06:17:39 +0000",
"msg_from": "\"James Pang (chaolpan)\" <chaolpan@cisco.com>",
"msg_from_op": true,
"msg_subject": "RE: Postgresql JDBC process consumes more memory with partition\n tables update delete"
},
{
"msg_contents": "> > Yesterday, when workload test, a lot of Postgresql JDBC connections\n> > use 150-160MB memory , so we got ERROR: out of memory\n\nWould you please share a reproducer? (e.g. DDL for the table, test code)\n\nHave you tried capturing memory context information for the backend that\nconsumes memory?\nhttps://wiki.postgresql.org/wiki/Developer_FAQ#Examining_backend_memory_use\n\nVladimir\n\n> > Yesterday, when workload test, a lot of Postgresql JDBC connections> > use 150-160MB memory , so we got ERROR: out of memoryWould you please share a reproducer? (e.g. DDL for the table, test code)Have you tried capturing memory context information for the backend that consumes memory?https://wiki.postgresql.org/wiki/Developer_FAQ#Examining_backend_memory_useVladimir",
"msg_date": "Thu, 8 Sep 2022 10:05:14 +0300",
"msg_from": "Vladimir Sitnikov <sitnikov.vladimir@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql JDBC process consumes more memory with partition\n tables update delete"
},
{
"msg_contents": "Please check attached.\r\n\r\nThanks,\r\nJames\r\nFrom: Vladimir Sitnikov <sitnikov.vladimir@gmail.com>\r\nSent: Thursday, September 8, 2022 3:05 PM\r\nTo: James Pang (chaolpan) <chaolpan@cisco.com>\r\nCc: pgsql-jdbc@lists.postgresql.org\r\nSubject: Re: Postgresql JDBC process consumes more memory with partition tables update delete\r\n\r\n> > Yesterday, when workload test, a lot of Postgresql JDBC connections\r\n> > use 150-160MB memory , so we got ERROR: out of memory\r\n\r\nWould you please share a reproducer? (e.g. DDL for the table, test code)\r\n\r\nHave you tried capturing memory context information for the backend that consumes memory?\r\nhttps://wiki.postgresql.org/wiki/Developer_FAQ#Examining_backend_memory_use\r\n\r\nVladimir",
"msg_date": "Thu, 8 Sep 2022 08:24:03 +0000",
"msg_from": "\"James Pang (chaolpan)\" <chaolpan@cisco.com>",
"msg_from_op": true,
"msg_subject": "RE: Postgresql JDBC process consumes more memory with partition\n tables update delete"
},
{
"msg_contents": "Please check attached. Removed some dbname and tablename , the interesting thing is we only see this issue by JDBC driver client, for psql client , only 25MB memory and similar SQL plan used.\r\n\r\nThanks,\r\nJames\r\nFrom: Vladimir Sitnikov <sitnikov.vladimir@gmail.com>\r\nSent: Thursday, September 8, 2022 3:05 PM\r\nTo: James Pang (chaolpan) <chaolpan@cisco.com>\r\nCc: pgsql-jdbc@lists.postgresql.org\r\nSubject: Re: Postgresql JDBC process consumes more memory with partition tables update delete\r\n\r\n> > Yesterday, when workload test, a lot of Postgresql JDBC connections\r\n> > use 150-160MB memory , so we got ERROR: out of memory\r\n\r\nWould you please share a reproducer? (e.g. DDL for the table, test code)\r\n\r\nHave you tried capturing memory context information for the backend that consumes memory?\r\nhttps://wiki.postgresql.org/wiki/Developer_FAQ#Examining_backend_memory_use\r\n\r\nVladimir",
"msg_date": "Thu, 8 Sep 2022 08:36:30 +0000",
"msg_from": "\"James Pang (chaolpan)\" <chaolpan@cisco.com>",
"msg_from_op": true,
"msg_subject": "RE: Postgresql JDBC process consumes more memory with partition\n tables update delete"
},
{
"msg_contents": "On Thu, 8 Sept 2022 at 04:36, James Pang (chaolpan) <chaolpan@cisco.com>\nwrote:\n\n>\n>\n> Please check attached. Removed some dbname and tablename , the\n> interesting thing is we only see this issue by JDBC driver client, for psql\n> client , only 25MB memory and similar SQL plan used.\n>\n>\n>\n> Thanks,\n>\n> James\n>\n> *From:* Vladimir Sitnikov <sitnikov.vladimir@gmail.com>\n> *Sent:* Thursday, September 8, 2022 3:05 PM\n> *To:* James Pang (chaolpan) <chaolpan@cisco.com>\n> *Cc:* pgsql-jdbc@lists.postgresql.org\n> *Subject:* Re: Postgresql JDBC process consumes more memory with\n> partition tables update delete\n>\n>\n>\n> > > Yesterday, when workload test, a lot of Postgresql JDBC connections\n> > > use 150-160MB memory , so we got ERROR: out of memory\n>\n> Would you please share a reproducer? (e.g. DDL for the table, test code)\n>\n>\n>\n> Have you tried capturing memory context information for the backend that\n> consumes memory?\n>\n> https://wiki.postgresql.org/wiki/Developer_FAQ#Examining_backend_memory_use\n>\n> Vladimir\n>\n\n\nI'd like to see the actual statement for each. Can you turn on log all\nstatements in the back end and capture the actual statement for psql and\njdbc ?\n\n\nDave Cramer\nwww.postgres.rocks\n\nOn Thu, 8 Sept 2022 at 04:36, James Pang (chaolpan) <chaolpan@cisco.com> wrote:\n\n\n \nPlease check attached. 
Removed some dbname and tablename , the interesting thing is we only see this issue by JDBC driver client, for psql client , only 25MB memory and similar SQL plan used.\n \nThanks,\nJames\n\nFrom: Vladimir Sitnikov <sitnikov.vladimir@gmail.com>\n\nSent: Thursday, September 8, 2022 3:05 PM\nTo: James Pang (chaolpan) <chaolpan@cisco.com>\nCc: pgsql-jdbc@lists.postgresql.org\nSubject: Re: Postgresql JDBC process consumes more memory with partition tables update delete\n\n \n\n> > Yesterday, when workload test, a lot of Postgresql JDBC connections\n> > use 150-160MB memory , so we got ERROR: out of memory\n\nWould you please share a reproducer? (e.g. DDL for the table, test code)\n\n \n\n\nHave you tried capturing memory context information for the backend that consumes memory?\n\n\nhttps://wiki.postgresql.org/wiki/Developer_FAQ#Examining_backend_memory_use\n\nVladimirI'd like to see the actual statement for each. Can you turn on log all statements in the back end and capture the actual statement for psql and jdbc ?Dave Cramerwww.postgres.rocks",
"msg_date": "Thu, 8 Sep 2022 05:31:15 -0400",
"msg_from": "Dave Cramer <davecramer@postgres.rocks>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql JDBC process consumes more memory with partition\n tables update delete"
},
{
"msg_contents": "> interesting thing is we only see this issue by JDBC driver client\n\nFirst of all, it turns out that a single UPDATE statement consumes 4M\n\nThen, it looks like you have **multiple** UPDATE statements in the\nserver-side cache.\nIt does sound strange that a single backend contains multiple entries\nfor the same SQL text.\n\n1) Would you please double-check that SQL text is the same. Do you use\nbind variables?\n2) Would you please double-check that you close statements after use\n(e.g. try-with-resources).\n\n\nCachedPlan: 4204544 total in 13 blocks; 489400 free (4 chunks);\n3715144 used: UPDATE WBXMEETINGINS\n\nFrankly speaking, I am not sure the JDBC driver is in a position to\npredict that a single-line statement would consume that much\nserver-side memory.\n\nIt would be nice if backend devs could optimize the memory consumption\nof the cached plan.\nIf optimization is not possible, then it would be nice if the backend\ncould provide clients with memory consumption of the cached plan.\nIn other words, it would be nice if there was a status message or\nsomething that says \"ok, by the way, the prepared statement S_01\nconsumes 2M\".\n\nJames, the captured dump includes only the first 100 entries.\nWould you please try capturing more details via the following command?\n\nMemoryContextStatsDetail(TopMemoryContext, 1000, true)\n\n(see https://github.com/postgres/postgres/blob/adb466150b44d1eaf43a2d22f58ff4c545a0ed3f/src/backend/utils/mmgr/mcxt.c#L574-L591\n)\n\n\nVladimir\n\n\n",
"msg_date": "Thu, 8 Sep 2022 12:55:38 +0300",
"msg_from": "Vladimir Sitnikov <sitnikov.vladimir@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql JDBC process consumes more memory with partition\n tables update delete"
},
{
"msg_contents": "Hi,\r\n When I convert the partitioned table to non-partitioned and copy all data to non-partitioned tables, then restart the load test , one backend server only consumes 25mb there. With partitioned tables , \r\nPGV13 , 160-170mb /per backend server, PGV14, 130-138mb/per backend server. So , it's partitioned tables make the memory consumption changes. The dumped stats is backend(session) level cached plans ,right? The test servers use shared connection pooling to run same insert/update/delete transaction by multiple connections(we simulate 300 connections) , so each session see similar cached SQL plans, and part of table has trigger before UPDATE, so when UPDATE it trigger to call pl/pgsql function. \r\n I only use psql to make same prepared SQL and run that in a loop, I see stable memory usage, maybe my psql test is not same as the JAVA test code. I will check the test code details and try to check if possible to dump more context details. \r\n\r\nThanks,\r\n\r\nJames \r\n \r\n\r\n-----Original Message-----\r\nFrom: Vladimir Sitnikov <sitnikov.vladimir@gmail.com> \r\nSent: Thursday, September 8, 2022 5:56 PM\r\nTo: James Pang (chaolpan) <chaolpan@cisco.com>\r\nCc: pgsql-jdbc@lists.postgresql.org\r\nSubject: Re: Postgresql JDBC process consumes more memory with partition tables update delete\r\n\r\n> interesting thing is we only see this issue by JDBC driver client\r\n\r\nFirst of all, it turns out that a single UPDATE statement consumes 4M\r\n\r\nThen, it looks like you have **multiple** UPDATE statements in the server-side cache.\r\nIt does sound strange that a single backend contains multiple entries for the same SQL text.\r\n\r\n1) Would you please double-check that SQL text is the same. Do you use bind variables?\r\n2) Would you please double-check that you close statements after use (e.g. 
try-with-resources).\r\n\r\n\r\nCachedPlan: 4204544 total in 13 blocks; 489400 free (4 chunks);\r\n3715144 used: UPDATE WBXMEETINGINS\r\n\r\nFrankly speaking, I am not sure the JDBC driver is in a position to predict that a single-line statement would consume that much server-side memory.\r\n\r\nIt would be nice if backend devs could optimize the memory consumption of the cached plan.\r\nIf optimization is not possible, then it would be nice if the backend could provide clients with memory consumption of the cached plan.\r\nIn other words, it would be nice if there was a status message or something that says \"ok, by the way, the prepared statement S_01 consumes 2M\".\r\n\r\nJames, the captured dump includes only the first 100 entries.\r\nWould you please try capturing more details via the following command?\r\n\r\nMemoryContextStatsDetail(TopMemoryContext, 1000, true)\r\n\r\n(see https://github.com/postgres/postgres/blob/adb466150b44d1eaf43a2d22f58ff4c545a0ed3f/src/backend/utils/mmgr/mcxt.c#L574-L591\r\n)\r\n\r\n\r\nVladimir\r\n",
"msg_date": "Thu, 8 Sep 2022 11:38:04 +0000",
"msg_from": "\"James Pang (chaolpan)\" <chaolpan@cisco.com>",
"msg_from_op": true,
"msg_subject": "RE: Postgresql JDBC process consumes more memory with partition\n tables update delete"
},
{
"msg_contents": "Hi,\r\n When I convert the partitioned table to non-partitioned and copy all data to non-partitioned tables, then restart the load test , one backend server only consumes 25mb there. With partitioned tables , \r\nPGV13 , 160-170mb /per backend server, PGV14, 130-138mb/per backend server. So , it's partitioned tables make the memory consumption changes. The dumped stats is backend(session) level cached plans ,right? The test servers use shared connection pooling to run same insert/update/delete transaction by multiple connections(we simulate 300 connections) , so each session see similar cached SQL plans, and part of table has trigger before UPDATE, so when UPDATE it trigger to call pl/pgsql function. Another thing is even after the backend server idle there long time, it's still keep the same memory without release back to OS.\r\n I only use psql to make same prepared SQL and run that in a loop, I see stable memory usage, maybe my psql test is not same as the JAVA test code. I will check the test code details and try to check if possible to dump more context details. \r\n\r\nThanks,\r\n\r\nJames \r\n \r\n\r\n-----Original Message-----\r\nFrom: Vladimir Sitnikov <sitnikov.vladimir@gmail.com> \r\nSent: Thursday, September 8, 2022 5:56 PM\r\nTo: James Pang (chaolpan) <chaolpan@cisco.com>\r\nCc: pgsql-jdbc@lists.postgresql.org\r\nSubject: Re: Postgresql JDBC process consumes more memory with partition tables update delete\r\n\r\n> interesting thing is we only see this issue by JDBC driver client\r\n\r\nFirst of all, it turns out that a single UPDATE statement consumes 4M\r\n\r\nThen, it looks like you have **multiple** UPDATE statements in the server-side cache.\r\nIt does sound strange that a single backend contains multiple entries for the same SQL text.\r\n\r\n1) Would you please double-check that SQL text is the same. Do you use bind variables?\r\n2) Would you please double-check that you close statements after use (e.g. 
try-with-resources).\r\n\r\n\r\nCachedPlan: 4204544 total in 13 blocks; 489400 free (4 chunks);\r\n3715144 used: UPDATE WBXMEETINGINS\r\n\r\nFrankly speaking, I am not sure the JDBC driver is in a position to predict that a single-line statement would consume that much server-side memory.\r\n\r\nIt would be nice if backend devs could optimize the memory consumption of the cached plan.\r\nIf optimization is not possible, then it would be nice if the backend could provide clients with memory consumption of the cached plan.\r\nIn other words, it would be nice if there was a status message or something that says \"ok, by the way, the prepared statement S_01 consumes 2M\".\r\n\r\nJames, the captured dump includes only the first 100 entries.\r\nWould you please try capturing more details via the following command?\r\n\r\nMemoryContextStatsDetail(TopMemoryContext, 1000, true)\r\n\r\n(see https://github.com/postgres/postgres/blob/adb466150b44d1eaf43a2d22f58ff4c545a0ed3f/src/backend/utils/mmgr/mcxt.c#L574-L591\r\n)\r\n\r\n\r\nVladimir\r\n",
"msg_date": "Thu, 8 Sep 2022 12:04:45 +0000",
"msg_from": "\"James Pang (chaolpan)\" <chaolpan@cisco.com>",
"msg_from_op": true,
"msg_subject": "RE: Postgresql JDBC process consumes more memory with partition\n tables update delete"
},
{
"msg_contents": "On Thu, 8 Sept 2022 at 08:05, James Pang (chaolpan) <chaolpan@cisco.com>\nwrote:\n\n> Hi,\n> When I convert the partitioned table to non-partitioned and copy all\n> data to non-partitioned tables, then restart the load test , one backend\n> server only consumes 25mb there. With partitioned tables ,\n> PGV13 , 160-170mb /per backend server, PGV14, 130-138mb/per backend\n> server. So , it's partitioned tables make the memory consumption changes.\n> The dumped stats is backend(session) level cached plans ,right? The test\n> servers use shared connection pooling to run same insert/update/delete\n> transaction by multiple connections(we simulate 300 connections) , so each\n> session see similar cached SQL plans, and part of table has trigger before\n> UPDATE, so when UPDATE it trigger to call pl/pgsql function. Another\n> thing is even after the backend server idle there long time, it's still\n> keep the same memory without release back to OS.\n>\n\nIf you are using a connection pool, then the connections aren't closed so I\ndon't see this an issue.\n\nDave\n\n> I only use psql to make same prepared SQL and run that in a loop, I see\n> stable memory usage, maybe my psql test is not same as the JAVA test code.\n> I will check the test code details and try to check if possible to dump\n> more context details.\n>\n> Thanks,\n>\n> James\n>\n>\n> -----Original Message-----\n> From: Vladimir Sitnikov <sitnikov.vladimir@gmail.com>\n> Sent: Thursday, September 8, 2022 5:56 PM\n> To: James Pang (chaolpan) <chaolpan@cisco.com>\n> Cc: pgsql-jdbc@lists.postgresql.org\n> Subject: Re: Postgresql JDBC process consumes more memory with partition\n> tables update delete\n>\n> > interesting thing is we only see this issue by JDBC driver client\n>\n> First of all, it turns out that a single UPDATE statement consumes 4M\n>\n> Then, it looks like you have **multiple** UPDATE statements in the\n> server-side cache.\n> It does sound strange that a single backend contains 
multiple entries for\n> the same SQL text.\n>\n> 1) Would you please double-check that SQL text is the same. Do you use\n> bind variables?\n> 2) Would you please double-check that you close statements after use (e.g.\n> try-with-resources).\n>\n>\n> CachedPlan: 4204544 total in 13 blocks; 489400 free (4 chunks);\n> 3715144 used: UPDATE WBXMEETINGINS\n>\n> Frankly speaking, I am not sure the JDBC driver is in a position to\n> predict that a single-line statement would consume that much server-side\n> memory.\n>\n> It would be nice if backend devs could optimize the memory consumption of\n> the cached plan.\n> If optimization is not possible, then it would be nice if the backend\n> could provide clients with memory consumption of the cached plan.\n> In other words, it would be nice if there was a status message or\n> something that says \"ok, by the way, the prepared statement S_01 consumes\n> 2M\".\n>\n> James, the captured dump includes only the first 100 entries.\n> Would you please try capturing more details via the following command?\n>\n> MemoryContextStatsDetail(TopMemoryContext, 1000, true)\n>\n> (see\n> https://github.com/postgres/postgres/blob/adb466150b44d1eaf43a2d22f58ff4c545a0ed3f/src/backend/utils/mmgr/mcxt.c#L574-L591\n> )\n>\n>\n> Vladimir\n>\n",
"msg_date": "Thu, 8 Sep 2022 08:12:19 -0400",
"msg_from": "Dave Cramer <davecramer@postgres.rocks>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql JDBC process consumes more memory with partition\n tables update delete"
}
] |
[
{
"msg_contents": "I was on this DBA.StackExchange question \nhttps://dba.stackexchange.com/questions/316715/in-postgresql-repmgr-master-slave-environment-is-it-necessary-to-have-same-h-w-c/316756, \nand which reminded me of my experience and disappointment with the \nhot-standby design.\n\nSay you have millions of users mostly querying. Then 1000 will also \ncreate change transactions. It should be really cool to have not just \none but 10 stand-by's load balancing all the queries through PgPool-II \n(or whenever that gets actually integrated into main PgSQL -- anyone \nthinking about this? After all, PgPool-II essentially is 75% Postgresql \ncode, is it not?)\n\nThe problem I found was that WAL log processing requires a lot of \nresources to the point where whatever work the master has done to turn \nthe transaction(s) into a WAL content, that seems insignificant compared \nto the work left just applying the WAL to the data files. I know this \nbecause I tried to run on a very insert-busy workload the stand-by on a \nlesser hardware, hoping that it would be enough just to \"keep up\", but \nit was not. I needed to use the same hardware configuration (I use AWS, \nso it's easy to try out different ones).\n\nWhat I then did to catch up, despite the files having grown quite big, \nit was pretty fast to just use rsync on the data files themselves, and \nquickly I was back in sync and could continue with processing the WAL \nupdates.\n\nI think I had asked here (it's over 1 or 2 years ago) to confirm, and \nthe conclusion was that, sadly, you do not gain that much free server \npower by using one master and several secondaries, because all those \nsecondaries will be quite busy handling the incoming WAL from the \nmaster, that they have very little spare resources left to handle a \nbunch of querying activity.\n\nBut this gave me an idea. Two ideas actually.\n\n 1. 
Use a shared device, also known as \"cluster\" filesystem, the point\n being, the slave operates on the same physical drive. But then that\n may cause contention with increased head-seek activity, which isn't\n really an issue these days with SSDs. Ultimately this pushes the\n issue down to the hardware where you have similar things that adding\n a stand-by will increase some of the load on the master and\n definitely clog up some significant amount of the stand-by resources\n just with keeping up. But the more low level seems way less CPU\n intensive than applying WAL.\n 2. Why not use some rsync-based replication from the actual data-files\n rather than the WAL? Perhaps such a stand-by would not be\n immediately ready to take over when the master goes down, but a\n combination of an rsync based delta applied to the stand-by plus the\n last few amounts of WAL should be able to bring a stand-by to a\n reliable state just like pure WAL shipping method.\n\nThis would seem particularly useful if most of the query activity is not \nso critical that it has to be up to the second of update from the \nmaster. I mean, you can always have a little lag with WAL-based \nreplication, so your results from a standby might just be a few minutes \nbehind of what you would get by querying the master. This means \nconsistent transactions would have to be applied to the master anyway, \nand some querying is involved there.\n\nLet's think of an airline fare finder use case with a booking feature. \nMost of the activity will be users finding their best itinerary \ncomparing fares, conveniences, maybe even available seats. Then they \nstart putting a reservation together, but we know that there will be \nattrition, where, say 50% of the reservation work will be abandoned. 
\nThat means we would let the users build their reservation, and only when \nthey are getting ready to pay would we begin moving the transaction to \nthe master, then re-check all the preconditions (e.g., is that seat on \n18C still available?) and lock the resources, ready to issue the \nreservation when the payment is confirmed, all in one transaction.\n\nIf the stand-by might be 30 seconds behind the master, they could be 3 \nminutes behind too, or 30 min. The less they can be behind, the more \nresources they have to spend on tracking the master. This can be tweaked \nfor real world use cases. But clearly it would be beneficial to have a \nmeans of a light-weight replication which is good enough and doesn't \ntake all the massive resources that a WAL replay based replication requires.\n\nAm I way off-base?\n\nCan I come up with a poor-man's implementation of that? For example, say \nI have 3 stand-by servers (the more stand-bys I have the more relevant \noverall this gets, as every stand-by replicates the heavy work of \napplying the WAL to the data files.) I would allow each stand-by to fall \nbehind up to n minutes. Say 10. In the 8th minute I would take it down \nbriefly, rsync the data files from the master, and start it back up. \nThis process might take just 2 minutes out of 10. (I think these are \nsomewhat realistic numbers). And my 3 stand-bys rotate doing that, so 2 \nof the 3 are always up while 1 of them might be briefly down.\n\nThis could be improved even with some OS and SAN support, all the way \ndown to RAID mirrors. But practically, I could mount the same file \nsystem (e.g., BSD UFS) on the master with -o rw and on the stand-by with \n-o ro. Even without UFS having any \"cluster\" support, I can do that. I \nwill have a few inconsistencies, but PostgreSQL is very tolerant of \nsmall inconsistencies and can fix them anyway. 
(I would not dare try \nanything like this with an Oracle system.)\n\nAnother tool I could think of using is BSD UFS snapshot support. I could \nmake a snapshot that is consistent from the file system perspective. \nPostgreSQL writers could interface with that, issue a sync, and trigger \nan UFS snapshot, then a file system sync. Now any standby who reads its \ndata files from this snapshot would not only have file-system level \nconsistency, but even database level consistency. So with this \nreplication using a file system mounted from multiple PgSQL servers, \nreplication should work well while consuming minimal amount of server \nresources and also not lead to too much actual disk IO contention (seeks).\n\nAnd even the disk contention could possibly be resolved by letting a \nRAID mirror sync to the same snapshot point and then split it off, or do \nthe read activity of the stand-by server querying like crazy only from \nthat mirror, while batching changes to the RAID master so that they can \nbe applied with very low overhead.\n\nAnyone thinking about these things?\n\nregards,\n-Gunther",
"msg_date": "Tue, 13 Sep 2022 05:42:40 -0400",
"msg_from": "Gunther Schadow <raj@gusw.net>",
"msg_from_op": true,
"msg_subject": "Faster more low-level methods of having hot standby / secondary\n read-only servers?"
}
] |
[
{
"msg_contents": "What could be the reason of a query, which is sometimes fast and sometimes slow (factor >10x)?\n(running on a large table).\n \n \n",
"msg_date": "Wed, 14 Sep 2022 17:02:07 +0200",
"msg_from": "tiaswin@gmx.de",
"msg_from_op": true,
"msg_subject": "Query is sometimes fast and sometimes slow: what could be the\n reason?"
},
{
"msg_contents": "On Wed, Sep 14, 2022 at 05:02:07PM +0200, tiaswin@gmx.de wrote:\n> <html><head></head><body><div style=\"font-family: Verdana;font-size: 12.0px;\"><div>What could be the reason of a query, which is sometimes fast and sometimes slow (factor >10x)?</div>\n> <div>(running on a large table).</div>\n> <div> </div>\n> <div> </div></div></body></html>\n\nLots of possible issues. Is it using a different query plan ?\nCollect a good plan and a bad one and compare, or send both.\nPerhaps use autoexplain to do so.\n\nTurn on logging and send as much information as you can as described\nhere.\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nPlease try to configure your mail client to send text mail (instead of\nor in addition to the html one).\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 14 Sep 2022 10:59:53 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Query is sometimes fast and sometimes slow: what could be the\n reason?"
}
] |
[
{
"msg_contents": "Hello! I have written python program to benchmark view efficiency,\nbecause in our platform they have a role to play and we noticed the\nperformance is less than expected.\nBasically, benchmark creates table:\n\nCREATE TABLE IF NOT EXISTS foobar ( id int, text varchar(40) );\n\nfor i in range(1200300):\n INSERT INTO foobar (id, text) VALUES ({i}, 'some string');\n CREATE VIEW foobar_{i} as select * from foobar where id={i};\n\nCouldn't be any simpler. Postgres 13.1 running in docker, on Ubuntu\n20. However, noticed that performance of certain commands is strangely\nslow:\n- dumping through pg_dump to tar took 13 minutes. Same table but\nwithout views: less than 1 second.\n- restoring through pg_restore took 147 minutes. Same table but\nwithout views: half a second.\n\nIn other situation (not observed by me) the dumping process of real\nworld db with not only 1.2M empty views but in addition gigabytes of\ndata in rows, lasted for many many hours, and ultimately had to be\nstopped.\n\n\nWhat's even stranger is dropping performance: DROP TABLE foobar\nCASCADE;. 
First of all, had to increase locks to allow it to finish,\notherwise it was quickly bailing because of \"too little shared\nmemory\".\n alter system set max_locks_per_transaction=40000;\n\nBut even after that, it took almost 7 hours and crashed:\n\n2022-09-13 23:16:31.113 UTC [1] LOG: server process (PID 404) was\nterminated by signal 9: Killed\n2022-09-13 23:16:31.113 UTC [1] DETAIL: Failed process was running:\ndrop table foobar cascade;\n2022-09-13 23:16:31.115 UTC [1] LOG: terminating any other active\nserver processes\n2022-09-13 23:16:31.115 UTC [1247] WARNING: terminating connection\nbecause of crash of another server process\n2022-09-13 23:16:31.115 UTC [1247] DETAIL: The postmaster has\ncommanded this server process to roll back the current transaction and\nexit, because another server process exited abnormally and possibly\ncorrupted shared memory.\n2022-09-13 23:16:31.117 UTC [97] HINT: In a moment you should be able\nto reconnect to the database and repeat your command.\n2022-09-13 23:16:31.136 UTC [1248] FATAL: the database system is in\nrecovery mode\n2022-09-13 23:16:31.147 UTC [1249] FATAL: the database system is in\nrecovery mode\n2022-09-13 23:16:31.192 UTC [1] LOG: all server processes terminated;\nreinitializing\n2022-09-13 23:16:31.819 UTC [1250] LOG: database system was\ninterrupted; last known up at 2022-09-13 23:15:47 UTC\n2022-09-13 23:16:34.959 UTC [1250] LOG: database system was not\nproperly shut down; automatic recovery in progress\n2022-09-13 23:16:34.965 UTC [1250] LOG: redo starts at 2/3A3FEEC8\n2022-09-13 23:16:36.421 UTC [1250] LOG: invalid record length at\n2/5F355008: wanted 24, got 0\n2022-09-13 23:16:36.421 UTC [1250] LOG: redo done at 2/5F354FD0\n2022-09-13 23:16:37.166 UTC [1] LOG: database system is ready to\naccept connections\n\nAfter updating Postgres to 14.5, it crashed in a bit different way:\n\n2022-09-15 19:20:26.000 UTC [67] LOG: checkpoints are occurring too\nfrequently (23 seconds apart)\n2022-09-15 19:20:26.000 
UTC [67] HINT: Consider increasing the\nconfiguration parameter \"max_wal_size\".\n2022-09-15 19:20:39.058 UTC [1] LOG: server process (PID 223) was\nterminated by signal 9: Killed\n2022-09-15 19:20:39.058 UTC [1] DETAIL: Failed process was running:\ndrop table foobar cascade;\n\n\nWihout the views, table can be dropped in 20ms.\n\nThere must be something inherently slow in the way that Postgres\nmanages views. I know that under the hood, views are like\ntable+relation to parent table, so it could be\ncompared to having about million of tables. They are not that light,\naren't they?\nProbably the issue is made worse because of atomicity: dropping the\nfoobar table with cascade needs to have all views dropped first, in\ntransaction. But why handling them would be so slow?\n\n\nAssuming the above is true, I'm wondering if there's a way to improve\nthe performance of Postgres commands like (the most important) backup\nand restore, in situation\nof so many views. Dropping table is not that important, but would be\ngood to have it working too, ie. by first deleting the views in\nbatches (my idea, will test).\nBut backups and restores must be faster and reliable in order to\nimplement one feature in our platform.\nPerhaps adding index on views, so that it can quickly assess how many\nthere are and to lock them, disabling something, tweaking some perf\noption... throwing ideas.\n\nPlease advice. Or maybe there's no hope to make this behave better :)\n\nRegards,\nHubert",
"msg_date": "Sat, 17 Sep 2022 01:05:57 +0200",
"msg_from": "Hubert Rutkowski <hubert.rutkowski@deepsense.ai>",
"msg_from_op": true,
"msg_subject": "Milions of views - performance, stability"
},
{
"msg_contents": "On Sat, 2022-09-17 at 01:05 +0200, Hubert Rutkowski wrote:\n> Hello! I have written python program to benchmark view efficiency, because in our platform\n> they have a role to play and we noticed the performance is less than expected.\n\nIf your platform plans to use millions of views, you should revise your design. As you\nsee, that is not going to fly. And no, I don't consider that a bug.\n\n> Basically, benchmark creates table:\n> \n> CREATE TABLE IF NOT EXISTS foobar ( id int, text varchar(40) );\n> \n> for i in range(1200300):\n> INSERT INTO foobar (id, text) VALUES ({i}, 'some string');\n> CREATE VIEW foobar_{i} as select * from foobar where id={i};\n> \n> Couldn't be any simpler. \n> [general slowness]\n> \n> What's even stranger is dropping performance: DROP TABLE foobar CASCADE;. First of all, had to\n> increase locks to allow it to finish, otherwise it was quickly bailing because of \"too little shared memory\".\n> alter system set max_locks_per_transaction=40000;\n> \n> But even after that, it took almost 7 hours and crashed:\n> \n> 2022-09-13 23:16:31.113 UTC [1] LOG: server process (PID 404) was terminated by signal 9: Killed\n> \n> After updating Postgres to 14.5, it crashed in a bit different way:\n> \n> 2022-09-15 19:20:26.000 UTC [67] LOG: checkpoints are occurring too frequently (23 seconds apart)\n> 2022-09-15 19:20:26.000 UTC [67] HINT: Consider increasing the configuration parameter \"max_wal_size\".\n> 2022-09-15 19:20:39.058 UTC [1] LOG: server process (PID 223) was terminated by signal 9: Killed\n> 2022-09-15 19:20:39.058 UTC [1] DETAIL: Failed process was running: drop table foobar cascade;\n> \n> Wihout the views, table can be dropped in 20ms. \n\nYou misconfigured your operating system and didn't disable memory overcommit, so you got killed\nby the OOM killer. Basically, the operation ran out of memory.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Sat, 17 Sep 2022 07:33:03 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Milions of views - performance, stability"
}
] |
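Hubert's closing idea above — deleting the views in batches before dropping the base table — could be sketched roughly as follows. This is a hypothetical helper, not something tested against a 1.2M-view cluster: the `foobar_{i}` naming comes from the benchmark, while the batch size of 1000 and the commented psycopg2 usage are assumptions. Batching keeps the number of locks taken per transaction bounded, instead of one giant `DROP ... CASCADE` holding locks on every view at once.

```python
# Sketch (untested at scale): drop dependent views in bounded batches
# before dropping the base table, so no single transaction needs locks
# on all 1.2M views at once. View names follow the foobar_{i} pattern
# from the benchmark; the psycopg2 calls below are illustrative only.

def drop_view_batches(view_names, batch_size=1000):
    """Yield one multi-view DROP statement per batch of view names."""
    for start in range(0, len(view_names), batch_size):
        batch = view_names[start:start + batch_size]
        yield "DROP VIEW IF EXISTS " + ", ".join(batch) + ";"

if __name__ == "__main__":
    views = [f"foobar_{i}" for i in range(1200300)]
    # import psycopg2
    # conn = psycopg2.connect("dbname=bench")
    # conn.autocommit = True  # one implicit transaction per batch
    # with conn.cursor() as cur:
    #     for stmt in drop_view_batches(views):
    #         cur.execute(stmt)
    # conn.close()
```

With autocommit on, each batch commits independently, so `max_locks_per_transaction` only needs to cover one batch rather than all views.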
[
{
"msg_contents": ">\n> Hi All,\n>\n> I'm looking for suggestions:\n>\n> Environment: AWS PostgreSQL RDS instance - Version 14.3\n> Operations support gets intermittent alerts from the monitoring tool\n> through AWS cloud watch metrics on Disk Queue Depth, CPU burst-credit & CPU\n> Utilization.\n> I would like to understand what is causing the spike - is the number of\n> logon's increased, (or) number of transactions per second increased, (or)\n> SQL execution picked wrong plan and the long running (I/O, CPU or memory\n> intensive) SQL is increasing load on server (cause and effect scenario)\n> etc.,\n>\n> Due to the reactive nature of the issues, we rely on the metrics gathered\n> in the AWS cloud watch monitoring (for the underlying OS stats),\n> Performance Insights (for the DB performance) and correlate SQL queries\n> with pg_Stat_Statements view. But the data in the view is an aggregated\n> stats. And, I'm looking to see the deltas compared to normal runs.\n> How should I approach and get to the root-cause?\n>\n> AppDynamics is already configured for the RDS instance. 
Are there any open\n> source monitoring tools available which would help to capture and visualize\n> the deltas?\n>\n> Thanks,\n> Senko\n>\n",
"msg_date": "Tue, 11 Oct 2022 16:36:55 +0530",
"msg_from": "Sengottaiyan T <techsenko@gmail.com>",
"msg_from_op": true,
"msg_subject": "Identify root-cause for intermittent spikes"
},
{
"msg_contents": "Hello,\n\nYour problem is probably, too many active, concurrent connections. Get \nit from here the db directly:\nselect datname, usename, application_name, substring(query, 1, 80) \nquery from pg_stat_activity where state in ('active','idle in \ntransaction');\n\nCompare the number of rows returned with the number of vCPUs. If it's \nmore than double the number of vCPUs in your AWS instance class, then \nyou are cpu saturated.\n\nRegards,\n\nMichael Vitale\n\n\n\nSengottaiyan T wrote on 10/11/2022 7:06 AM:\n>\n> Hi All,\n>\n> I'm looking for suggestions:\n>\n> Environment: AWS PostgreSQL RDS instance - Version 14.3\n> Operations support gets intermittent alerts from the monitoring\n> tool through AWS cloud watch metrics on Disk Queue Depth, CPU\n> burst-credit & CPU Utilization.\n> I would like to understand what is causing the spike - is the\n> number of logon's increased, (or) number of transactions per\n> second increased, (or) SQL execution picked wrong plan and the\n> long running (I/O, CPU or memory intensive) SQL is increasing load\n> on server (cause and effect scenario) etc.,\n>\n> Due to the reactive nature of the issues, we rely on the metrics\n> gathered in the AWS cloud watch monitoring (for the underlying OS\n> stats), Performance Insights (for the DB performance) and\n> correlate SQL queries with pg_Stat_Statements view. But the data\n> in the view is an aggregated stats. And, I'm looking to see the\n> deltas compared to normal runs.\n> How should I approach and get to the root-cause?\n>\n> AppDynamics is already configured for the RDS instance. Are there\n> any open source monitoring tools available which would help to\n> capture and visualize the deltas?\n>\n> Thanks,\n> Senko\n>\n",
"msg_date": "Tue, 11 Oct 2022 07:48:54 -0400",
"msg_from": "MichaelDBA <MichaelDBA@sqlexec.com>",
"msg_from_op": false,
"msg_subject": "Re: Identify root-cause for intermittent spikes"
},
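Michael's rule of thumb — more than double the vCPU count in active or idle-in-transaction backends means CPU saturation — could be wrapped in a small periodic check. A sketch only: the query text is taken from his reply, while `is_saturated` and the factor of 2 simply encode his heuristic, not any official RDS metric.

```python
# Sketch of the heuristic from the reply above: run ACTIVITY_SQL
# periodically, count the rows, and flag saturation when the count
# exceeds twice the instance's vCPU count.

ACTIVITY_SQL = (
    "select datname, usename, application_name, substring(query, 1, 80) as query "
    "from pg_stat_activity where state in ('active', 'idle in transaction');"
)

def is_saturated(busy_backends: int, vcpus: int, factor: float = 2.0) -> bool:
    """True when busy backends exceed factor * vCPUs (Michael's rule of thumb)."""
    return busy_backends > factor * vcpus

if __name__ == "__main__":
    # e.g. 40 busy backends on an 8-vCPU instance class -> saturated
    print(is_saturated(40, 8))
```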
{
"msg_contents": "I like to use pgbadger to collect data on what is happening in RDS\ninstances. You have to turn up a bunch of logging in RDS:\n\n1. Turn on connection logging, duration logging, lock_waits, and anything\nelse that you are interested in studying.\n\n2. Then grab all of your postgresql logs from AWS. I wrote this little\nbash script to pull all of the logs for a current day. It will work if you\nhave your aws credentials configured correctly and can run aws-cli commands.\n```\n#!/bin/env bash\n\n## Return all of the postgresql log files saved by RDS since midnight.\n## Save them in your current directory.\n## This is so we can use cli tools like \"grep\"\n## It is also really handy for feeding into pgbadger for deeper analysis.\n\n# aws requires the timestamp to be in milliseconds.\n# unfortunately date will provide either seconds or nano seconds, so we\nhave to do math.\nmidnight_timestamp=$(date -d $(date -I) '+%s')\nmidnight_timestamp_milliseconds=$(echo \"${midnight_timestamp} * 1000\" | bc)\n\nlogfiles=$(aws rds describe-db-log-files \\\n --profile default \\\n --db-instance-identifier \"*some_rds_instance_name*\" \\\n --output json \\\n --file-last-written ${midnight_timestamp_milliseconds} | jq\n-r \".DescribeDBLogFiles[].LogFileName\")\n\nfor logfile in $(echo ${logfiles})\ndo\n # remove the leading \"error/\" so we can use the name to save it.\n logfile_save=$(echo \"${logfile}\" | awk -F\\/ '{print $NF}')\n\n tput bold; echo \"${logfile}\"; tput sgr0\n aws rds download-db-log-file-portion \\\n --profile admin \\\n --db-instance-identifier prod-notify-me-1 \\\n --log-file-name ${logfile} \\\n --output text \\\n --no-paginate > ${logfile_save}\ndone\n```\n3. Then run pgbadger:\n``` ~/src/pgbadger/pgbadger -f rds postgresql*\n```\n4. Open the `out.html` in your browser, and poke around. There is a ton\nof stuff you can find in all the drop down menus about what was happening\nin your database over the time window you collected the logs for. 
The html\nis generated as a standalone file by a perl script of all things. It is\npretty impressive.\n\n\n\nOn Tue, Oct 11, 2022 at 7:07 AM Sengottaiyan T <techsenko@gmail.com> wrote:\n\n> Hi All,\n>>\n>> I'm looking for suggestions:\n>>\n>> Environment: AWS PostgreSQL RDS instance - Version 14.3\n>> Operations support gets intermittent alerts from the monitoring tool\n>> through AWS cloud watch metrics on Disk Queue Depth, CPU burst-credit & CPU\n>> Utilization.\n>> I would like to understand what is causing the spike - is the number of\n>> logon's increased, (or) number of transactions per second increased, (or)\n>> SQL execution picked wrong plan and the long running (I/O, CPU or memory\n>> intensive) SQL is increasing load on server (cause and effect scenario)\n>> etc.,\n>>\n>> Due to the reactive nature of the issues, we rely on the metrics gathered\n>> in the AWS cloud watch monitoring (for the underlying OS stats),\n>> Performance Insights (for the DB performance) and correlate SQL queries\n>> with pg_Stat_Statements view. But the data in the view is an aggregated\n>> stats. And, I'm looking to see the deltas compared to normal runs.\n>> How should I approach and get to the root-cause?\n>>\n>> AppDynamics is already configured for the RDS instance. Are there any\n>> open source monitoring tools available which would help to capture and\n>> visualize the deltas?\n>>\n>> Thanks,\n>> Senko\n>>\n>\n",
"msg_date": "Tue, 11 Oct 2022 20:42:19 -0400",
"msg_from": "Rick Otten <rottenwindfish@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Identify root-cause for intermittent spikes"
},
{
"msg_contents": "On Tue, 11 Oct 2022, 22:07 Sengottaiyan T, <techsenko@gmail.com> wrote:\n\n> Hi All,\n>>\n>> I'm looking for suggestions:\n>>\n>> Environment: AWS PostgreSQL RDS instance - Version 14.3\n>> Operations support gets intermittent alerts from the monitoring tool\n>> through AWS cloud watch metrics on Disk Queue Depth, CPU burst-credit & CPU\n>> Utilization.\n>> I would like to understand what is causing the spike - is the number of\n>> logon's increased, (or) number of transactions per second increased, (or)\n>> SQL execution picked wrong plan and the long running (I/O, CPU or memory\n>> intensive) SQL is increasing load on server (cause and effect scenario)\n>> etc.,\n>>\n>> Due to the reactive nature of the issues, we rely on the metrics gathered\n>> in the AWS cloud watch monitoring (for the underlying OS stats),\n>> Performance Insights (for the DB performance) and correlate SQL queries\n>> with pg_Stat_Statements view. But the data in the view is an aggregated\n>> stats. And, I'm looking to see the deltas compared to normal runs.\n>>\n>\nPerformance Insights should also offer you visibility into statement level\nstats for Top SQL if pg_stat_statements is enabled.\n\nPerformance Insights also has other metrics (Counter Metrics) that you can\nrefer to to understand some of the data points you are after -\nxact_count/second, session_in_idle_in_transactions/second,\nblocked_transactions/second etc. You need to add them to PI dashboard by\nusing Manage Metrics button on PI dashboard.\n\n\n\n>> How should I approach and get to the root-cause?\n>>\n>> AppDynamics is already configured for the RDS instance. 
Are there any\n>> open source monitoring tools available which would help to capture and\n>> visualize the deltas?\n>>\n>> Thanks,\n>> Senko\n>>\n>\nRegards\nSameer\nDB Specialist,\nAmazon Web Services",
"msg_date": "Wed, 12 Oct 2022 17:50:31 +1100",
"msg_from": "SAMEER KUMAR <sameer.kasi200x@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Identify root-cause for intermittent spikes"
},
{
"msg_contents": "Thanks, Michael.\n\nDue to reactive nature of the intermittent alerts, Is there any table which\nstores the historical information / periodic snapshots captured from the\npg_stat_activity view?\n\nOn Tue, Oct 11, 2022 at 5:18 PM MichaelDBA <MichaelDBA@sqlexec.com> wrote:\n\n> Hello,\n>\n> Your problem is probably, too many active, concurrent connections. Get it\n> from here the db directly:\n> select datname, usename, application_name, substring(query, 1, 80) query\n> from pg_stat_activity where state in ('active','idle in transaction');\n>\n> Compare the number of rows returned with the number of vCPUs. If it's\n> more than double the number of vCPUs in your AWS instance class, then you\n> are cpu saturated.\n>\n> Regards,\n>\n> Michael Vitale\n>\n>\n> Sengottaiyan T wrote on 10/11/2022 7:06 AM:\n>\n> Hi All,\n>>\n>> I'm looking for suggestions:\n>>\n>> Environment: AWS PostgreSQL RDS instance - Version 14.3\n>> Operations support gets intermittent alerts from the monitoring tool\n>> through AWS cloud watch metrics on Disk Queue Depth, CPU burst-credit & CPU\n>> Utilization.\n>> I would like to understand what is causing the spike - is the number of\n>> logon's increased, (or) number of transactions per second increased, (or)\n>> SQL execution picked wrong plan and the long running (I/O, CPU or memory\n>> intensive) SQL is increasing load on server (cause and effect scenario)\n>> etc.,\n>>\n>> Due to the reactive nature of the issues, we rely on the metrics gathered\n>> in the AWS cloud watch monitoring (for the underlying OS stats),\n>> Performance Insights (for the DB performance) and correlate SQL queries\n>> with pg_Stat_Statements view. But the data in the view is an aggregated\n>> stats. And, I'm looking to see the deltas compared to normal runs.\n>> How should I approach and get to the root-cause?\n>>\n>> AppDynamics is already configured for the RDS instance. 
Are there any\n>> open source monitoring tools available which would help to capture and\n>> visualize the deltas?\n>>\n>> Thanks,\n>> Senko\n>>\n>\n>\n>\n>\n",
"msg_date": "Wed, 12 Oct 2022 22:46:23 +0530",
"msg_from": "Sengottaiyan T <techsenko@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Identify root-cause for intermittent spikes"
},
{
"msg_contents": "Thanks, Rick - I will give it a try.\n\nOn Wed, Oct 12, 2022 at 6:12 AM Rick Otten <rottenwindfish@gmail.com> wrote:\n\n> I like to use pgbadger to collect data on what is happening in RDS\n> instances. You have to turn up a bunch of logging in RDS:\n>\n> 1. Turn on connection logging, duration logging, lock_waits, and anything\n> else that you are interested in studying.\n>\n> 2. Then grab all of your postgresql logs from AWS. I wrote this little\n> bash script to pull all of the logs for a current day. It will work if you\n> have your aws credentials configured correctly and can run aws-cli commands.\n> ```\n> #!/bin/env bash\n>\n> ## Return all of the postgresql log files saved by RDS since midnight.\n> ## Save them in your current directory.\n> ## This is so we can use cli tools like \"grep\"\n> ## It is also really handy for feeding into pgbadger for deeper analysis.\n>\n> # aws requires the timestamp to be in milliseconds.\n> # unfortunately date will provide either seconds or nano seconds, so we\n> have to do math.\n> midnight_timestamp=$(date -d $(date -I) '+%s')\n> midnight_timestamp_milliseconds=$(echo \"${midnight_timestamp} * 1000\" | bc)\n>\n> logfiles=$(aws rds describe-db-log-files \\\n> --profile default \\\n> --db-instance-identifier \"*some_rds_instance_name*\" \\\n> --output json \\\n> --file-last-written ${midnight_timestamp_milliseconds} | jq\n> -r \".DescribeDBLogFiles[].LogFileName\")\n>\n> for logfile in $(echo ${logfiles})\n> do\n> # remove the leading \"error/\" so we can use the name to save it.\n> logfile_save=$(echo \"${logfile}\" | awk -F\\/ '{print $NF}')\n>\n> tput bold; echo \"${logfile}\"; tput sgr0\n> aws rds download-db-log-file-portion \\\n> --profile admin \\\n> --db-instance-identifier prod-notify-me-1 \\\n> --log-file-name ${logfile} \\\n> --output text \\\n> --no-paginate > ${logfile_save}\n> done\n> ```\n> 3. Then run pgbadger:\n> ``` ~/src/pgbadger/pgbadger -f rds postgresql*\n> ```\n> 4. 
Open the `out.html` in your browser, and poke around. There is a ton\n> of stuff you can find in all the drop down menus about what was happening\n> in your database over the time window you collected the logs for. The html\n> is generated as a standalone file by a perl script of all things. It is\n> pretty impressive.\n>\n>\n>\n> On Tue, Oct 11, 2022 at 7:07 AM Sengottaiyan T <techsenko@gmail.com>\n> wrote:\n>\n>> Hi All,\n>>>\n>>> I'm looking for suggestions:\n>>>\n>>> Environment: AWS PostgreSQL RDS instance - Version 14.3\n>>> Operations support gets intermittent alerts from the monitoring tool\n>>> through AWS cloud watch metrics on Disk Queue Depth, CPU burst-credit & CPU\n>>> Utilization.\n>>> I would like to understand what is causing the spike - is the number of\n>>> logon's increased, (or) number of transactions per second increased, (or)\n>>> SQL execution picked wrong plan and the long running (I/O, CPU or memory\n>>> intensive) SQL is increasing load on server (cause and effect scenario)\n>>> etc.,\n>>>\n>>> Due to the reactive nature of the issues, we rely on the metrics\n>>> gathered in the AWS cloud watch monitoring (for the underlying OS stats),\n>>> Performance Insights (for the DB performance) and correlate SQL queries\n>>> with pg_Stat_Statements view. But the data in the view is an aggregated\n>>> stats. And, I'm looking to see the deltas compared to normal runs.\n>>> How should I approach and get to the root-cause?\n>>>\n>>> AppDynamics is already configured for the RDS instance. Are there any\n>>> open source monitoring tools available which would help to capture and\n>>> visualize the deltas?\n>>>\n>>> Thanks,\n>>> Senko\n>>>\n>>\n",
"msg_date": "Wed, 12 Oct 2022 22:49:19 +0530",
"msg_from": "Sengottaiyan T <techsenko@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Identify root-cause for intermittent spikes"
},
{
"msg_contents": "On Wed, Oct 12, 2022 at 10:46:23PM +0530, Sengottaiyan T wrote:\n> Thanks, Michael.\n> \n> Due to reactive nature of the intermittent alerts, Is there any table which\n> stores the historical information / periodic snapshots captured from the\n> pg_stat_activity view?\n\nNo. Unless you will make one, and add data to it from some cron-like\nthing.\n\nBest regards,\n\ndepesz\n\n\n\n",
"msg_date": "Thu, 13 Oct 2022 10:33:28 +0200",
"msg_from": "hubert depesz lubaczewski <depesz@depesz.com>",
"msg_from_op": false,
"msg_subject": "Re: Identify root-cause for intermittent spikes"
},
{
    "msg_contents": "Thanks, Depesz.\n\nPlease suggest: Is there any open-source tool available for capturing such\ninformation?\n\nOn Thu, Oct 13, 2022, 14:03 hubert depesz lubaczewski <depesz@depesz.com>\nwrote:\n\n> On Wed, Oct 12, 2022 at 10:46:23PM +0530, Sengottaiyan T wrote:\n> > Thanks, Michael.\n> >\n> > Due to reactive nature of the intermittent alerts, Is there any table\n> which\n> > stores the historical information / periodic snapshots captured from the\n> > pg_stat_activity view?\n>\n> No. Unless you will make one, and add data to it from some cron-like\n> thing.\n>\n> Best regards,\n>\n> depesz\n>",
"msg_date": "Fri, 14 Oct 2022 22:57:00 +0530",
"msg_from": "Sengottaiyan T <techsenko@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Identify root-cause for intermittent spikes"
},
{
"msg_contents": "On Fri, Oct 14, 2022 at 10:57:00PM +0530, Sengottaiyan T wrote:\n>\n> Please suggest: Is there any open-source tool available for capturing such\n> information?\n\nMost of the open-source tools won't work as you won't be able to install them\non RDS.\n\nAs far as I know the \"Performance Insights\" provides detailed information, not\nonly cumulated metrics, so that's probably your best option.\n\n\n",
"msg_date": "Sat, 15 Oct 2022 18:35:22 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Identify root-cause for intermittent spikes"
},
{
"msg_contents": "On Fri, Oct 14, 2022 at 10:57:00PM +0530, Sengottaiyan T wrote:\n> Please suggest: Is there any open-source tool available for capturing such\n> information?\n\npg_cron?\n\ndepesz\n\n\n",
"msg_date": "Sun, 16 Oct 2022 11:18:28 +0200",
"msg_from": "hubert depesz lubaczewski <depesz@depesz.com>",
"msg_from_op": false,
"msg_subject": "Re: Identify root-cause for intermittent spikes"
}
] |
[
{
    "msg_contents": "> I executed the following statements 3 times\n> explain(analyze, buffers) select * from table1\n>\n> The number of rows are different. Is the table corrupted? How to confirm\n> and how to fix it?",
"msg_date": "Thu, 20 Oct 2022 07:58:29 -0400",
"msg_from": "Vince McMahon <sippingonesandzeros@gmail.com>",
"msg_from_op": true,
"msg_subject": "Explain returns different number of rows"
},
{
    "msg_contents": "I did not get a reply so I am trying again.\n\n> I executed the following statements 3 times\n> explain(analyze, buffers) select * from table1\n>\n> The number of rows are different. Is the table corrupted? How to confirm\n> and how to fix it?",
"msg_date": "Thu, 20 Oct 2022 12:52:12 -0400",
"msg_from": "Vince McMahon <sippingonesandzeros@gmail.com>",
"msg_from_op": true,
"msg_subject": "Explain returns different number of rows"
},
{
"msg_contents": "\n\n> On Oct 20, 2022, at 09:52, Vince McMahon <sippingonesandzeros@gmail.com> wrote:\n> The number of rows are different. \n\nThis isn't unexpected. EXPLAIN does not actually run the query and determine how many rows are returned; it calculates an estimate based on the current system statistics, which vary constantly depending on activity in the database.\n\n",
"msg_date": "Thu, 20 Oct 2022 09:56:23 -0700",
"msg_from": "Christophe Pettus <xof@thebuild.com>",
"msg_from_op": false,
"msg_subject": "Re: Explain returns different number of rows"
},
{
"msg_contents": "On 2022-10-20 09:56:23 -0700, Christophe Pettus wrote:\n> On Oct 20, 2022, at 09:52, Vince McMahon <sippingonesandzeros@gmail.com> wrote:\n> > The number of rows are different. \n> \n> This isn't unexpected. EXPLAIN does not actually run the query and\n> determine how many rows are returned; it calculates an estimate based\n> on the current system statistics, which vary constantly depending on\n> activity in the database.\n\nEXPLAIN ANALYZE (which is what he did) does run the query and return the\nactual number of rows:\n\n#v+\nwdsah=> explain (analyze, buffers) select * from facttable_eurostat_comext_cpa2_1 ;\n╔══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╗\n║ QUERY PLAN ║\n╟──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╢\n║ Seq Scan on facttable_eurostat_comext_cpa2_1 (cost=0.00..1005741.32 rows=39633432 width=85) (actual time=0.396..6541.701 rows=39633591 loops=1) ║\n║ Buffers: shared read=609407 ║\n║ Planning Time: 1.650 ms ║\n║ Execution Time: 7913.027 ms ║\n╚══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╝\n(4 rows)\n#v-\n\nThe first tuple (cost=0.00..1005741.32 rows=39633432 width=85) is an\nestimate used to plan the query. But the second one\n(actual time=0.396..6541.701 rows=39633591 loops=1)\ncontains measurements from actually running the query.\n\nI think it's possible that the rows estimate in the first tuple changes\nwithout any actual data change (although the only reason I can think of\nright now would be an ANALYZE (in another session or by autovacuum)).\nBut the actual rows definitely shouldn't change.\n\n hp\n\n-- \n _ | Peter J. 
Holzer | Story must make more sense than reality.\n|_|_) | |\n| | | hjp@hjp.at | -- Charles Stross, \"Creative writing\n__/ | http://www.hjp.at/ | challenge!\"",
"msg_date": "Sat, 22 Oct 2022 11:32:32 +0200",
"msg_from": "\"Peter J. Holzer\" <hjp-pgsql@hjp.at>",
"msg_from_op": false,
"msg_subject": "Re: Explain returns different number of rows"
},
{
"msg_contents": "Thanks for the clarification, Peter.\r\n\r\n\r\n\r\nOn Sat, Oct 22, 2022, 05:32 Peter J. Holzer <hjp-pgsql@hjp.at> wrote:\r\n\r\n> On 2022-10-20 09:56:23 -0700, Christophe Pettus wrote:\r\n> > On Oct 20, 2022, at 09:52, Vince McMahon <sippingonesandzeros@gmail.com>\r\n> wrote:\r\n> > > The number of rows are different.\r\n> >\r\n> > This isn't unexpected. EXPLAIN does not actually run the query and\r\n> > determine how many rows are returned; it calculates an estimate based\r\n> > on the current system statistics, which vary constantly depending on\r\n> > activity in the database.\r\n>\r\n> EXPLAIN ANALYZE (which is what he did) does run the query and return the\r\n> actual number of rows:\r\n>\r\n> #v+\r\n> wdsah=> explain (analyze, buffers) select * from\r\n> facttable_eurostat_comext_cpa2_1 ;\r\n>\r\n> ╔══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╗\r\n> ║ QUERY\r\n> PLAN ║\r\n>\r\n> ╟──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╢\r\n> ║ Seq Scan on facttable_eurostat_comext_cpa2_1 (cost=0.00..1005741.32\r\n> rows=39633432 width=85) (actual time=0.396..6541.701 rows=39633591 loops=1)\r\n> ║\r\n> ║ Buffers: shared read=609407\r\n> ║\r\n> ║ Planning Time: 1.650 ms\r\n> ║\r\n> ║ Execution Time: 7913.027 ms\r\n> ║\r\n>\r\n> ╚══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╝\r\n> (4 rows)\r\n> #v-\r\n>\r\n> The first tuple (cost=0.00..1005741.32 rows=39633432 width=85) is an\r\n> estimate used to plan the query. 
But the second one\r\n> (actual time=0.396..6541.701 rows=39633591 loops=1)\r\n> contains measurements from actually running the query.\r\n>\r\n> I think it's possible that the rows estimate in the first tuple changes\r\n> without any actual data change (although the only reason I can think of\r\n> right now would be an ANALYZE (in another session or by autovacuum)).\r\n> But the actual rows definitely shouldn't change.\r\n>\r\n> hp\r\n>\r\n> --\r\n> _ | Peter J. Holzer | Story must make more sense than reality.\r\n> |_|_) | |\r\n> | | | hjp@hjp.at | -- Charles Stross, \"Creative writing\r\n> __/ | http://www.hjp.at/ | challenge!\"\r\n>",
"msg_date": "Mon, 24 Oct 2022 10:31:44 -0400",
"msg_from": "Vince McMahon <sippingonesandzeros@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Explain returns different number of rows"
}
] |
[
{
"msg_contents": "Hei\n\nThe main problem is that for instance ogr2ogr is using more time to get system info about tables than doing the actual job.\n\nThe time pick up postgresql meta info takes between 30 and 60 seconds and sometimes hours if we have not done vacuum analyze recenlty.\nThen actual spatial jobs takes less than 10 seconds.\n\nBefore I run ogr2ogr I do vacuum analyze\n\n schemaname | relname | n_live_tup | n_dead_tup | last_autovacuum\n------------+----------------+------------+------------+-------------------------------\n pg_catalog | pg_class | 215296 | 4365 | 2022-10-18 10:24:05.745915+02\n pg_catalog | pg_attribute | 1479648 | 18864 | 2022-10-18 12:36:52.820133+02\n pg_catalog | pg_type | 200777 | 2318 | 2022-10-18 06:33:58.598257+02\n pg_catalog | pg_constraint | 10199 | 104 | 2022-10-20 15:10:57.894674+02\n pg_catalog | pg_namespace | 860 | 1 | [NULL]\n pg_catalog | pg_description | 9119 | 0 | 2022-05-06 01:59:58.664618+02\n(6 rows)\n\nVACUUM ANALYZE pg_catalog.pg_class;\nVACUUM ANALYZE pg_catalog.pg_attribute;\nVACUUM ANALYZE pg_catalog.pg_namespace;\nVACUUM ANALYZE pg_catalog.pg_type;\nVACUUM ANALYZE pg_catalog.pg_constraint;\nVACUUM ANALYZE pg_catalog.pg_description;\n\nAfter running, we have this values\n\nschemaname | relname | n_live_tup | n_dead_tup | last_autovacuum\n------------+----------------+------------+------------+-------------------------------\n pg_catalog | pg_class | 221739 | 464 | 2022-10-18 10:24:05.745915+02\n pg_catalog | pg_namespace | 860 | 2 |\n pg_catalog | pg_attribute | 1464900 | 1672 | 2022-10-18 12:36:52.820133+02\n pg_catalog | pg_constraint | 10200 | 8 | 2022-10-20 15:10:57.894674+02\n pg_catalog | pg_type | 204936 | 93 | 2022-10-18 06:33:58.598257+02\n pg_catalog | pg_description | 9119 | 0 | 2022-05-06 01:59:58.664618+02\n(6 rows)\n\nHere https://explain.depesz.com/s/oU19#stats the sql generated by ogr2ogr that takes 33 seconds in this sample\n\nThen we take copy of the pg_catalog tables involved.\n\nCREATE 
SCHEMA test_pg_metadata;\n\nCREATE TABLE test_pg_metadata.pg_class ( like pg_class including all);\nINSERT INTO test_pg_metadata.pg_class SELECT * FROM pg_class;\n\n-- CREATE TABLE test_pg_metadata.pg_attribute ( like pg_attribute including all);\n-- Failes with ERROR: 42P16: column \"attmissingval\" has pseudo-type anyarray\n-- has to do it this way\nCREATE TABLE test_pg_metadata.pg_attribute AS SELECT\nattcollation,attrelid,attname,atttypid,attstattarget,attlen,attnum,attndims,attcacheoff,atttypmod,attbyval,attstorage,attalign,attnotnull,atthasdef,atthasmissing,attidentity,attgenerated,attisdropped,attislocal,attinhcount,attacl,attoptions,attfdwoptions\nFROM pg_attribute;\nCREATE UNIQUE INDEX ON test_pg_metadata.pg_attribute(attrelid, attnum);\nCREATE UNIQUE INDEX ON test_pg_metadata.pg_attribute(attrelid, attname);\n\nCREATE TABLE test_pg_metadata.pg_namespace ( like pg_namespace including all);\nINSERT INTO test_pg_metadata.pg_namespace SELECT * FROM pg_namespace;\n\nCREATE TABLE test_pg_metadata.pg_type ( like pg_type including all);\nINSERT INTO test_pg_metadata.pg_type SELECT * FROM pg_type;\n\nCREATE TABLE test_pg_metadata.pg_constraint ( like pg_constraint including all);\nINSERT INTO test_pg_metadata.pg_constraint SELECT * FROM pg_constraint;\n\nCREATE TABLE test_pg_metadata.pg_description ( like pg_description including all);\nINSERT INTO test_pg_metadata.pg_description SELECT * FROM pg_description;\n\nThere is no primary key on pg_attribute but that does not make any difference when testing, it seems like.\n\nHere https://explain.depesz.com/s/NEwB#source is the trace when using the same sql as from ogr2ogr but using the tables in test_pg_metadata and then it runs in 5 seconds.\n\nI do not understand way it's so much slower to use the tables in pg_catalog than in test_pg_metadata tables because they have the same content.\n\nI also tried to create a new index on pg_catalog.pg_attribute to check if that could help but that was not allowed, I was running 
as the postgres user.\n\nCREATE INDEX ON pg_catalog.pg_attribute(atttypid);\nERROR: 42501: permission denied: \"pg_attribute\" is a system catalog\nLOCATION: RangeVarCallbackOwnsRelation, tablecmds.c:16486\n\nWe run on\nPostgreSQL 12.6 (Ubuntu 12.6-0ubuntu0.20.04.1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0, 64-bit\nPOSTGIS=\"3.1.1 aaf4c79\" [EXTENSION] PGSQL=\"120\" GEOS=\"3.9.0-CAPI-1.16.2\" SFCGAL=\"1.3.7\" PROJ=\"7.2.1\" GDAL=\"GDAL 3.2.1, released 2020/12/29\" LIBXML=\"2.9.10\" LIBJSON=\"0.13.1\" LIBPROTOBUF=\"1.3.3\" WAGYU=\"0.5.0 (Internal)\" TOPOLOGY RASTER\n\nThanks\n\nLars",
"msg_date": "Fri, 21 Oct 2022 09:19:58 +0000",
"msg_from": "Lars Aksel Opsahl <Lars.Opsahl@nibio.no>",
"msg_from_op": true,
"msg_subject": "ogr2ogr slow sql when checking system tables for column info and so\n on."
},
{
"msg_contents": "Hi,\n\nOn Fri, Oct 21, 2022 at 09:19:58AM +0000, Lars Aksel Opsahl wrote:\n>\n> The main problem is that for instance ogr2ogr is using more time to get system info about tables than doing the actual job.\n>\n> The time pick up postgresql meta info takes between 30 and 60 seconds and sometimes hours if we have not done vacuum analyze recenlty.\n> Then actual spatial jobs takes less than 10 seconds.\n>\n> Before I run ogr2ogr I do vacuum analyze\n>\n> schemaname | relname | n_live_tup | n_dead_tup | last_autovacuum\n> ------------+----------------+------------+------------+-------------------------------\n> pg_catalog | pg_class | 215296 | 4365 | 2022-10-18 10:24:05.745915+02\n> pg_catalog | pg_attribute | 1479648 | 18864 | 2022-10-18 12:36:52.820133+02\n> pg_catalog | pg_type | 200777 | 2318 | 2022-10-18 06:33:58.598257+02\n> pg_catalog | pg_constraint | 10199 | 104 | 2022-10-20 15:10:57.894674+02\n> pg_catalog | pg_namespace | 860 | 1 | [NULL]\n> pg_catalog | pg_description | 9119 | 0 | 2022-05-06 01:59:58.664618+02\n> (6 rows)\n>\n> VACUUM ANALYZE pg_catalog.pg_class;\n> VACUUM ANALYZE pg_catalog.pg_attribute;\n> VACUUM ANALYZE pg_catalog.pg_namespace;\n> VACUUM ANALYZE pg_catalog.pg_type;\n> VACUUM ANALYZE pg_catalog.pg_constraint;\n> VACUUM ANALYZE pg_catalog.pg_description;\n>\n> After running, we have this values\n>\n> schemaname | relname | n_live_tup | n_dead_tup | last_autovacuum\n> ------------+----------------+------------+------------+-------------------------------\n> pg_catalog | pg_class | 221739 | 464 | 2022-10-18 10:24:05.745915+02\n> pg_catalog | pg_namespace | 860 | 2 |\n> pg_catalog | pg_attribute | 1464900 | 1672 | 2022-10-18 12:36:52.820133+02\n> pg_catalog | pg_constraint | 10200 | 8 | 2022-10-20 15:10:57.894674+02\n> pg_catalog | pg_type | 204936 | 93 | 2022-10-18 06:33:58.598257+02\n> pg_catalog | pg_description | 9119 | 0 | 2022-05-06 01:59:58.664618+02\n> (6 rows)\n>\n> Here https://explain.depesz.com/s/oU19#stats the sql 
generated by ogr2ogr that takes 33 seconds in this sample\n> [...]\n> -> Seq Scan on pg_attribute a (rows=1464751) (actual time=0.028..17740.663\n> [...]\n> Then we take copy of the pg_catalog tables involved.\n>\n> Here https://explain.depesz.com/s/NEwB#source is the trace when using the same sql as from ogr2ogr but using the tables in test_pg_metadata and then it runs in 5 seconds.\n> [...]\n> -> Seq Scan on pg_attribute a (rows=1452385) (actual time=0.006..156.392\n>\n> I do not understand way it's so much slower to use the tables in pg_catalog than in test_pg_metadata tables because they have the same content.\n\nIn both case you have a sequential scan over the pg_attribute table, but for\npg_catalog it takes 17 seconds to retrieve the 1.4M rows, and in the new table\nit takes 156 ms.\n\nIt looks like you catalog is heavily bloated, which is the cause of the\nslowdown.\n\nYou could do a VACUUM FULL of the tables in pg_catalog but it would only be a\nshort term fix as it's likely that your catalog will get bloated again. Do you\nrely a lot on temporary tables? If yes it can easily lead to this kind of side\neffect, and you should modify you code to perform manual vacuum of catalogs\ntables very often, or add a dedicated high frequency task for running similar\nvacuum and keep the bloat under control.\n\n\n",
"msg_date": "Fri, 21 Oct 2022 17:48:48 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ogr2ogr slow sql when checking system tables for column info and\n so on."
},
{
"msg_contents": "________________________________\nFrom: Julien Rouhaud <rjuju123@gmail.com>\nSent: Friday, October 21, 2022 11:48 AM\nTo: Lars Aksel Opsahl <Lars.Opsahl@nibio.no>\nCc: pgsql-performance@lists.postgresql.org <pgsql-performance@lists.postgresql.org>\nSubject: Re: ogr2ogr slow sql when checking system tables for column info and so on.\n\n>From: Julien Rouhaud <rjuju123@gmail.com>\n>Sent: Friday, October 21, 2022 11:48 AMTo: Lars Aksel Opsahl <Lars.Opsahl@nibio.no>Cc: pgsql-performance@lists.postgresql.org <pgsql-performance@lists.postgresql.org>Subject: Re: ogr2ogr slow sql when checking system tables for column info and so on.\n>\n>Hi,\n>\n\n>> Here https://explain.depesz.com/s/oU19#stats the sql generated by ogr2ogr that takes 33 seconds in this sample\n>> [...]\n>> -> Seq Scan on pg_attribute a (rows=1464751) (actual time=0.028..17740.663\n>> [...]\n>> Then we take copy of the pg_catalog tables involved.\n>>\n>> Here https://explain.depesz.com/s/NEwB#source is the trace when using the same sql as from ogr2ogr but using the tables in test_pg_metadata and then it runs in 5 seconds.\n>> [...]\n>> -> Seq Scan on pg_attribute a (rows=1452385) (actual time=0.006..156.392\n>>\n>> I do not understand way it's so much slower to use the tables in pg_catalog than in test_pg_metadata tables because they have the same content.\n>\n>In both case you have a sequential scan over the pg_attribute table, but for\n>pg_catalog it takes 17 seconds to retrieve the 1.4M rows, and in the new table\n>it takes 156 ms.\n>\n>It looks like you catalog is heavily bloated, which is the cause of the\n>slowdown.\n>\n>You could do a VACUUM FULL of the tables in pg_catalog but it would only be a\n>short term fix as it's likely that your catalog will get bloated again. Do you\n>rely a lot on temporary tables? 
If yes it can easily lead to this kind of side\n>effect, and you should modify you code to perform manual vacuum of catalogs\n>tables very often, or add a dedicated high frequency task for running similar\n>vacuum and keep the bloat under control.\n\nHi again.\n\nYes we use a lot of temp tables sometimes.\n\nWith \"VACUUM FULL ANALYZE\" we got the same time as from the created tables https://explain.depesz.com/s/Yxy9 so that works.\n\nOK then we start up by triggering a 'VACUUM FULL ANALYZE' for all the tables in the pg_catalog because this seems to be the only thing that is working for now.\n\nI assume that adding more indexes on the tables in pg_catalog to avoid table scans is not that easy.\n\nThanks for your help.\n\nLars",
"msg_date": "Fri, 21 Oct 2022 10:30:27 +0000",
"msg_from": "Lars Aksel Opsahl <Lars.Opsahl@nibio.no>",
"msg_from_op": true,
"msg_subject": "Re: ogr2ogr slow sql when checking system tables for column info and\n so on."
},
{
"msg_contents": "Hi,\n\nOn Fri, Oct 21, 2022 at 10:30:27AM +0000, Lars Aksel Opsahl wrote:\n>\n> >\n> >In both case you have a sequential scan over the pg_attribute table, but for\n> >pg_catalog it takes 17 seconds to retrieve the 1.4M rows, and in the new table\n> >it takes 156 ms.\n> >\n> >It looks like you catalog is heavily bloated, which is the cause of the\n> >slowdown.\n> >\n> >You could do a VACUUM FULL of the tables in pg_catalog but it would only be a\n> >short term fix as it's likely that your catalog will get bloated again. Do you\n> >rely a lot on temporary tables? If yes it can easily lead to this kind of side\n> >effect, and you should modify you code to perform manual vacuum of catalogs\n> >tables very often, or add a dedicated high frequency task for running similar\n> >vacuum and keep the bloat under control.\n>\n> Yes we use a lot of temp tables sometimes .\n\nWhat do you mean by sometimes? If you only have non frequent or specialized\njobs the creates a lot of temp tables, you just need to modify them to issue\nsome VACUUM (not VACUUM FULL) at the end, or regularly if you creates millions\nof tables in a single job.\n\n> With \"VACUUM FULL ANALYZE \" we got the same time as from the created tables\n> https://explain.depesz.com/s/Yxy9 so that works.\n>\n> OK then we start up by trigger a 'VACUUM FULL ANALYZE ' for all the tables in\n> th pg_catalog because this seems to be only thing that is working for now.\n\nJust to be clear the VACUUM FULL is only needed to shrink the tables which were\nlikely multiple GB each. 
If you do simple VACUUM frequently enough, you won't\nhave too much bloat in the first place and everything will just work as\nintended.\n\nYou could setup some monitoring on the size of the catalog tables to make sure\nthat you run those maintenance as frequently as necessary.\n\n> I assume that adding more indexes on the tables in pg_catalog to avoid tables\n> scans are not that easy.\n\nIndeed, that's not a supported operation.\n\n\n",
"msg_date": "Fri, 21 Oct 2022 18:41:59 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ogr2ogr slow sql when checking system tables for column info and\n so on."
},
{
"msg_contents": "________________________________\nFrom: Julien Rouhaud <rjuju123@gmail.com>\nSent: Friday, October 21, 2022 12:41 PM\nTo: Lars Aksel Opsahl <Lars.Opsahl@nibio.no>\nCc: pgsql-performance@lists.postgresql.org <pgsql-performance@lists.postgresql.org>\nSubject: Re: ogr2ogr slow sql when checking system tables for column info and so on.\n\n>From: Julien Rouhaud <rjuju123@gmail.com>Sent: Friday, October 21, 2022 12:41 PMTo: Lars Aksel Opsahl <Lars.Opsahl@nibio.no>Cc: pgsql-performance@lists.postgresql.org <pgsql-performance@lists.postgresql.org>Subject: Re: ogr2ogr slow sql when checking system tables for column info and so on.\n>\n>\n>What do you mean by sometimes? If you only have non frequent or specialized\n>jobs the creates a lot of temp tables, you just need to modify them to issue\n>some VACUUM (not VACUUM FULL) at the end, or regularly if you creates millions\n>of tables in a single job.\n>\n\nHi again.\n\nIn this case the only thing that solved this performance issue was VACUUM FULL so to be sure we have run VACUUM FULL and not only VACUUM ANALYZE\n\nIt's a lot of different people using this server and many do not have rights to run vacuum on system tables and we do not want many people to run vacuum full at the same time either.\n\nSo we have to set up this as a schedueld or triggered job as you suggest.\n\nThanks again.\n\nLars\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nFrom: Julien Rouhaud <rjuju123@gmail.com>\n\nSent: Friday, October 21, 2022 12:41 PM\nTo: Lars Aksel Opsahl <Lars.Opsahl@nibio.no>\n\nCc: pgsql-performance@lists.postgresql.org <pgsql-performance@lists.postgresql.org>\n\nSubject: Re: ogr2ogr slow sql when checking system tables for column info and so on.\n\n\n \n\n>From: Julien Rouhaud <rjuju123@gmail.com>Sent: Friday, October 21, 2022 12:41 PMTo: Lars Aksel Opsahl <Lars.Opsahl@nibio.no>Cc: pgsql-performance@lists.postgresql.org <pgsql-performance@lists.postgresql.org>Subject: Re: ogr2ogr slow sql when checking system\n tables for 
column info and so on.\n> \n>\n>What do you mean by sometimes? If you only have non frequent or specialized\n>jobs the creates a lot of temp tables, you just need to modify them to issue\n>some VACUUM (not VACUUM FULL) at the end, or regularly if you creates millions\n>of tables in a single job.\n>\n\n\n\n\nHi again.\n\n\n\n\nIn this case the only thing that solved this performance issue was VACUUM FULL so to be sure we have run VACUUM FULL and not only VACUUM ANALYZE\n\n\n\n\n\nIt's a lot of different people using this server and many do not have rights to run vacuum on system tables and we do not want many people to run vacuum full at the same time either.\n\n\n\n\nSo we have to set up this as a schedueld or triggered job as you suggest.\n\n\n\n\n\nThanks again.\n\n\n\n\n\nLars",
"msg_date": "Fri, 21 Oct 2022 11:41:26 +0000",
"msg_from": "Lars Aksel Opsahl <Lars.Opsahl@nibio.no>",
"msg_from_op": true,
"msg_subject": "Re: ogr2ogr slow sql when checking system tables for column info and\n so on."
}
] |
[
{
"msg_contents": "Hello!\n\nI have been struggling with finding a proper solution for this query for\nsome time and wanted to ask if someone here knows how to approach this?\n\nI have a table named \"report\" which has an index on report.reporter_id.\nThis column consists of IDs which are grouped together using a table named\n\"group_links\".\nSo for every reporter id which is part of the same group, there is a row in\n\"group_links\" with the same group_id.\n\nNow, I noticed that I can select reports for a group in two ways. Both\nqueries return the same but one is using =ANY(ARRAY(expr)) (\"subselect\")\nand one is using =ANY(ARRAY) (\"static array\") with the same array as the\nexpression would return.\nThe static array query is running very fast for small selections and where\nnot a lot of rows match the condition. It uses a bitmap index scan.\nThe subselect is running very slow and uses an index scan. However, it is\nparticularly slow if not many rows match the condition and thus a lot of\nrows are filtered while scanning the index.\nI was able to reproduce a similar issue with using `= ANY(VALUES)`\ninstead of `= ANY(ARRAY)`:\n\n1. fast query using =ANY(ARRAY): https://explain.depesz.com/s/dwP8\n2. slow query using =ANY(ARRAY(expr)): https://explain.depesz.com/s/3hGb\n3. slow query using =ANY(VALUES): https://explain.depesz.com/s/cYrn\n\nI guess the difference comes from the query planner not being able to know\nthe exact values for the WHERE condition beforehand. But how should cases\nlike this be best handled?\n\nShould I denormalize the data such that I have a table with columns\nreport.id and group_id and report.created such that I can create an index\non (created, group_id)? 
Then I don't have to do a subselect anymore.\n\nI would be very glad for any help regarding this!\n\nPostgres version: PostgreSQL 13.3 (Debian 13.3-1.pgdg100+1) on\nx86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit\n------------------------------------\n\n> \\d report\n> Table \"public.report\"\n> Column | Type | Collation | Nullable | Default\n> ---------------+--------------------------+-----------+----------+---------\n> reporter_id | uuid | | not null |\n> parsed | boolean | | |\n> id | text | | not null |\n> request_id | uuid | | |\n> created | timestamp with time zone | | not null | now()\n> customer | text | | |\n> subject | text | | |\n> parser_result | text | | not null |\n> parser | text | | |\n> event_types | jsonb | | |\n> event_count | integer | | |\n> account_id | integer | | |\n> reviewable | boolean | | not null | false\n> reviewed | boolean | | not null | false\n> Indexes:\n> \"PK_99e4d0bea58cba73c57f935a546\" PRIMARY KEY, btree (id)\n> \"idx_report_created_desc_id_asc\" btree (created DESC, id)\n> \"idx_report_created_desc_reporter_id_asc\" btree (created DESC,\n> reporter_id)\n> \"idx_report_event_types\" gin (event_types)\n> \"idx_report_parser_gin\" gin (parser gin_trgm_ops)\n> \"idx_report_parser_result_created_desc\" btree (parser_result, created\n> DESC)\n> \"idx_report_reporter_id_asc_created_desc\" btree (reporter_id, created\n> DESC)\n> \"idx_report_request_id_asc_created_desc\" btree (request_id, created\n> DESC)\n> \"idx_report_subject_gin\" gin (subject gin_trgm_ops)\n> Check constraints:\n> \"report_parser_result_constraint\" CHECK (parser_result = ANY\n> (ARRAY['PARSED'::text, 'UNPARSED'::text, 'REJECTED'::text]))\n> Foreign-key constraints:\n> \"FK_5b809608bb38d119333b69f65f9\" FOREIGN KEY (request_id) REFERENCES\n> request(id)\n> \"FK_d41df66b60944992386ed47cf2e\" FOREIGN KEY (reporter_id) REFERENCES\n> reporter(id)\n> Referenced by:\n> TABLE \"event\" CONSTRAINT \"event_report_id_foreign\" FOREIGN KEY\n> 
(report_id) REFERENCES report(id)\n\n------------------------------------\n\n> SELECT relname, relpages, reltuples, relallvisible, relkind, relnatts,\n> relhassubclass, reloptions, pg_table_size(oid) FROM pg_class WHERE\n> relname='report';\n> relname | relpages | reltuples | relallvisible | relkind | relnatts |\n> relhassubclass | reloptions | pg_table_size\n>\n> ---------+----------+---------------+---------------+---------+----------+----------------+------------+---------------\n> report | 2062252 | 8.5893344e+07 | 2062193 | r | 22 |\n> f | | 16898801664\n> (1 row)\n\n------------------------------------\n\n> \\d group_links\n> Table \"public.group_links\"\n> Column | Type | Collation | Nullable |\n> Default\n>\n> ------------------+--------------------------+-----------+----------+-------------------\n> rule_id | uuid | | not null |\n> reporter_id | uuid | | not null |\n> group_id | uuid | | not null |\n> exclusion | boolean | | | false\n> last_update_time | timestamp with time zone | | |\n> CURRENT_TIMESTAMP\n> Indexes:\n> \"group_rules_matches_pkey\" PRIMARY KEY, btree (rule_id, reporter_id)\n> \"idx_group_rules_matches_group_id\" btree (group_id)\n> \"idx_group_rules_matches_group_id_reporter_id_exclusion\" btree\n> (group_id, reporter_id, exclusion)\n> \"idx_group_rules_matches_reporter_id\" btree (reporter_id)\n> Foreign-key constraints:\n> \"group_rules_matches_group_id_foreign\" FOREIGN KEY (group_id)\n> REFERENCES \"group\"(id) ON DELETE CASCADE\n> \"group_rules_matches_reporter_id_foreign\" FOREIGN KEY (reporter_id)\n> REFERENCES reporter(id)\n\n \"group_rules_matches_rule_id_foreign\" FOREIGN KEY (rule_id) REFERENCES\n> group_rules(id) ON DELETE CASCADE\n\n------------------------------------",
"msg_date": "Mon, 14 Nov 2022 02:49:13 +0100",
"msg_from": "Ramdip Gill <ramdip.singhgill@gmail.com>",
"msg_from_op": true,
"msg_subject": "=ANY(ARRAY) vs =ANY(ARRAY(expr)) performance"
},
{
"msg_contents": "Okay, increasing the collection of statistics seems to have helped. I used\n`ALTER TABLE report ALTER COLUMN reporter_id SET STATISTICS 10000` and now\nqueries which previously didn't finish at all now finish in < 1 ms.\n\nThe following gave me the hint:\n\n“The amount of information stored in `pg_statistic` by `ANALYZE`, in\nparticular the maximum number of entries in\nthe `most_common_vals` and `histogram_bounds` arrays for each column, can\nbe set on a column-by-column basis using the `ALTER TABLE SET\nSTATISTICS` command, or globally by setting the default_statistics_target\nconfiguration variable. The default limit is presently 100 entries. *Raising\nthe limit might allow more accurate planner estimates to be made,\nparticularly for columns with irregular data distributions*, at the price\nof consuming more space in `pg_statistic` and slightly more time to compute\nthe estimates. Conversely, a lower limit might be sufficient for columns\nwith simple data distributions.”\n\n— https://www.postgresql.org/docs/current/planner-stats.html\n\n>\n\nOkay, increasing the collection of statistics seems to have helped. I used `ALTER TABLE report ALTER COLUMN reporter_id SET STATISTICS 10000` and now queries which previously didn't finish at all now finish in < 1 ms.The following gave me the hint:“The amount of information stored in `pg_statistic` by `ANALYZE`, in particular the maximum number of entries in the `most_common_vals` and `histogram_bounds` arrays for each column, can be set on a column-by-column basis using the `ALTER TABLE SET STATISTICS` command, or globally by setting the default_statistics_target configuration variable. The default limit is presently 100 entries. Raising the limit might allow more accurate planner estimates to be made, particularly for columns with irregular data distributions, at the price of consuming more space in `pg_statistic` and slightly more time to compute the estimates. 
Conversely, a lower limit might be sufficient for columns with simple data distributions.”— https://www.postgresql.org/docs/current/planner-stats.html",
"msg_date": "Mon, 14 Nov 2022 05:17:17 +0100",
"msg_from": "Ramdip Gill <ramdip.singhgill@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: =ANY(ARRAY) vs =ANY(ARRAY(expr)) performance"
},
{
"msg_contents": "I was able to reproduce a similar issue with using `= ANY(VALUES)`\n> instead of `= ANY(ARRAY)`:\n>\n> 1. fast query using =ANY(ARRAY): https://explain.depesz.com/s/dwP8\n> 2. slow query using =ANY(ARRAY(expr)): https://explain.depesz.com/s/3hGb\n> 3. slow query using =ANY(VALUES): https://explain.depesz.com/s/cYrn\n>\n>\n I have found the \"ANY\" operator to be slow in general. It is almost\nalways faster to use the \"<@\" operator:\n```\n-- more intuitive:\nselect\n count(*)\nfrom\n testarray\nwhere\n 'test' = ANY (myarray)\n;\n\n-- faster:\nselect\n count(*)\nfrom\n testarray\nwhere\n ARRAY['test'::varchar] <@ myarray\n;\n```\nIt is just one of those things, like replacing \"OR\" with \"UNION ALL\"\nwhenever possible too, that just make queries faster in PostgreSQL without\na ton of effort or fuss.\n\nI was able to reproduce a similar issue with using `= ANY(VALUES)` instead of `= ANY(ARRAY)`:1. fast query using =ANY(ARRAY): https://explain.depesz.com/s/dwP82. slow query using =ANY(ARRAY(expr)): https://explain.depesz.com/s/3hGb3. slow query using =ANY(VALUES): https://explain.depesz.com/s/cYrn I have found the \"ANY\" operator to be slow in general. It is almost always faster to use the \"<@\" operator:```-- more intuitive:select count(*)from testarraywhere 'test' = ANY (myarray);-- faster:select count(*)from testarraywhere ARRAY['test'::varchar] <@ myarray;```It is just one of those things, like replacing \"OR\" with \"UNION ALL\" whenever possible too, that just make queries faster in PostgreSQL without a ton of effort or fuss.",
"msg_date": "Mon, 14 Nov 2022 09:11:18 -0500",
"msg_from": "Rick Otten <rottenwindfish@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: =ANY(ARRAY) vs =ANY(ARRAY(expr)) performance"
},
{
"msg_contents": "Rick Otten <rottenwindfish 'at' gmail.com> writes:\n\n> I was able to reproduce a similar issue with using `= ANY(VALUES)` instead of `= ANY(ARRAY)`:\n>\n> 1. fast query using =ANY(ARRAY): https://explain.depesz.com/s/dwP8\n> 2. slow query using =ANY(ARRAY(expr)): https://explain.depesz.com/s/3hGb\n> 3. slow query using =ANY(VALUES): https://explain.depesz.com/s/cYrn\n>\n> I have found the \"ANY\" operator to be slow in general. It is almost always faster to use the \"<@\" operator:\n> ```\n> -- more intuitive:\n> select\n> count(*)\n> from\n> testarray\n> where\n> 'test' = ANY (myarray)\n> ;\n>\n> -- faster:\n> select\n> count(*)\n> from\n> testarray\n> where\n> ARRAY['test'::varchar] <@ myarray\n> ;\n> ```\n> It is just one of those things, like replacing \"OR\" with \"UNION ALL\" whenever possible too, that just make queries faster in PostgreSQL without a\n> ton of effort or fuss.\n\ndepends^^\n\ndb=> select count(*) from table where uid = any( string_to_array('11290331,11290332,11290333,11290431',',')::int[]);\n count \n-------\n 4\n(1 row)\n\nTime: 0.837 ms\ndb=> select count(*) from table where uid = any( string_to_array('11290331,11290332,11290333,11290431',',')::int[]);\n count \n-------\n 4\n(1 row)\n\nTime: 0.854 ms\ndb=> select count(*) from table where array[uid] <@ string_to_array('11290331,11290332,11290333,11290431',',')::int[];\n count \n-------\n 4\n(1 row)\n\nTime: 52.335 ms\ndb=> select count(*) from table where array[uid] <@ string_to_array('11290331,11290332,11290333,11290431',',')::int[];\n count \n-------\n 4\n(1 row)\n\nTime: 44.176 ms\n\n\n-- \nGuillaume Cottenceau\n\n\n",
"msg_date": "Mon, 14 Nov 2022 15:22:27 +0100",
"msg_from": "Guillaume Cottenceau <gc@mnc.ch>",
"msg_from_op": false,
"msg_subject": "Re: =ANY(ARRAY) vs =ANY(ARRAY(expr)) performance"
}
] |
[
{
"msg_contents": "I've come up with a patch designed to improve performance for the case\nwhere Postgres has one or more overflowed subtransactions, i.e. at\nleast one open transaction has >64 subtransactions.\n\nSo far, performance results are very promising, which shows a loss of\nperformance of x8-9 unpatched and almost fully restored performance in\nthe patched version, for a test case of overflowed subxacts with data\ncontention, in the presence of a long running transaction.\n[Image attached]\n\nTo be accepted into PG16, we think it's a good idea to gather wide\nperformance results.\n\nIf you use subtransactions in your application, could you help\ncharacterize the performance of this patch? Sorry, no binaries\navailable, just the patch.\nhttps://www.postgresql.org/message-id/attachment/140021/002_minimize_calls_to_SubTransSetParent.v12.patch\n\nAny test case is OK, as long as you publish the code to run it and\npublish the measurements with/without the patch.\n\nThanks very much.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Wed, 16 Nov 2022 05:27:54 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Help needed with perf tests on subtransaction overflow"
}
] |
[
{
"msg_contents": "Hey all!\n\nI'm interested in the rules that Postgres uses to avoid joins. Are\nthese documented somewhere? If I had to look at the source code, where\nwould I start? They don't seem to match my intuition about which joins\ncould be avoided. Then again, it's quite possible that I'm wrong and\nmisunderstanding the semantics.\n\nHere are some particular examples I'm interested in.\nSet up some tables. Intuition: table a contains some info, table b\ncontains some (optional) extra info, table c contains some more\n(optional) extra info.\n\ncreate table c (\n c_id int primary key,\n c_foo text );\n\ncreate table b (\n b_id int primary key,\n c_id int references c(c_id),\n b_bar text );\n\ncreate table a (\n a_id int primary key,\n b_id int references b(b_id),\n a_baz text );\n\n\n-- Now some queries (join on b and c in various ways,\n-- but only ever select columns from a)\n\n-- This joins on b, as expected.\n-- (Because missing rows could reduce cardinality?\n-- but making a.b_id NOT NULL doesn't help...)\n explain\n select a_baz\n from a\n join b using (b_id);\n\n-- Making it a LEFT join avoids the join on b, as expected\n explain\n select a_baz\n from a\nLEFT join b using (b_id);\n\n-- Joins on b and c. This is very strange to me.\n-- Whether or not the join on c results in any rows\n-- shouldn't make any difference.\n explain\n select a_baz\n from a\nleft join (select *\n from b\n join c using (c_id)) bc using (b_id);\n\n-- making the join in the subquery a LEFT join\n-- avoids joining entirely (no b or c in the plan)\n explain\n select a_baz\n from a\nleft join (select *\n from b\n LEFT join c using (c_id)) bc using (b_id);\n\n\nIf anybody knows why Postgres behaves this way, please let me know :)\n\nCheers,\n Stefan\n\n\n",
"msg_date": "Wed, 16 Nov 2022 11:39:13 +0100",
"msg_from": "Stefan Fehrenbach <stefan.fehrenbach@gmail.com>",
"msg_from_op": true,
"msg_subject": "When can joins be avoided?"
},
{
"msg_contents": "Stefan Fehrenbach <stefan.fehrenbach@gmail.com> writes:\n> I'm interested in the rules that Postgres uses to avoid joins. Are\n> these documented somewhere? If I had to look at the source code, where\n> would I start?\n\nsrc/backend/optimizer/plan/analyzejoins.c\n\n> They don't seem to match my intuition about which joins\n> could be avoided.\n\nI believe only left joins to single tables can be elided ATM.\nIt's too hard to prove uniqueness of the join key in more-\ncomplicated cases.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 17 Nov 2022 09:49:20 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: When can joins be avoided?"
}
] |
[
{
"msg_contents": "Hi all,\nI'm just having a doubt about the choice of the planner for a small\nexample table.\nI've a table with a numeric column (integer), and I've created two\nindexes on such column, one btree and one hash. The hash results much\nlarger as the btree, but what puzzles me is that executing an equality\nsimple query, the system chooses the hash index (that has a final cost\nof 8984.08 while the btree index would have a final cost a little\nlower (8901.94).\n\nThe only difference I can spot in the EXPLAIN plans is that the btree\nindex has an initial cost, but I don't think this is the reason, since\nit should be the final cost what matters, right?\n\nNow, even if the two costs are comparable, why does the optimizer\nchooses to use the larger hash index? What am I missing here and where\ndo I have to dig?\nUsing EXPLAIN ANALYZE shows that the two indexes are very similar\ntimings, so while using the hash index is clearly not a wrong choice,\nI'm wondering why preferring a bigger index.\n\nPlease note that the table has been manually ANALYZEd and is not clustered.\n\n\nThanks,\nLuca\n\n\ntestdb=> select version();\nversion\n----------------------------------------------------------------------------------------------------------\nPostgreSQL 14.5 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 11.2.1\n20220127 (Red Hat 11.2.1-9), 64-bit\n\ntestdb=> select relname, pg_size_pretty( pg_relation_size( oid )) from\npg_class where relname = 'articoli';\nrelname | pg_size_pretty\n----------+----------------\narticoli | 134 MB\n\ntestdb=> \\d articoli\nTable \"public.articoli\"\nColumn | Type | Collation | Nullable | Default\n-----------+---------+-----------+----------+------------------------------\npk | integer | | not null | generated always as identity\ncodice | text | | not null |\nprezzo | integer | | | 0\ncitta | text | | |\nmagazzino | integer | | | 1\nvisibile | boolean | | | true\n\n\n\n\ntestdb=> create index articoli_prezzo_idx on 
articoli(prezzo);\ntestdb=> create index articoli_prezzo_hash_idx on articoli using hash (prezzo);\nCREATE INDEX\ntestdb=> select relname, pg_size_pretty( pg_relation_size( oid )) from\npg_class where relname like 'articoli%idx%';\nrelname | pg_size_pretty\n--------------------------+----------------\narticoli_prezzo_idx | 15 MB\narticoli_prezzo_hash_idx | 47 MB\n\n\ntestdb=> explain select * from articoli where prezzo = 77;\nQUERY PLAN\n------------------------------------------------------------------------------------------------\nIndex Scan using articoli_prezzo_hash_idx on articoli\n(cost=0.00..8984.08 rows=5033 width=28)\nIndex Cond: (prezzo = 77)\n\n\n\ntestdb=> begin;\nBEGIN\ntestdb=*> drop index articoli_prezzo_hash_idx;\nDROP INDEX\ntestdb=*> explain select * from articoli where prezzo = 77;\nQUERY PLAN\n-------------------------------------------------------------------------------------------\nIndex Scan using articoli_prezzo_idx on articoli (cost=0.42..8901.94\nrows=5033 width=28)\nIndex Cond: (prezzo = 77)\n(2 rows)\n\ntestdb=*> rollback;\nROLLBACK\n\n\n\nIf it does matter, these is an excerpt from the pg_stats:\n\ntestdb=> select tablename, attname, n_distinct, most_common_vals,\nmost_common_freqs from pg_stats where tablename = 'articoli' and\nattname = 'prezzo';\ntablename | attname | n_distinct | most_common_vals |\nmost_common_freqs\n-----------+---------+------------+------------------+-----------------------------------\narticoli | prezzo | 200 | {62,147,154} |\n{0.0062666666,0.0060333335,0.006}\n\n\nAnd here the EXPLAIN ANALYZE outputs:\n\ntestdb=> explain analyze select * from articoli where prezzo = 77;\n QUERY\nPLAN\n---------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using articoli_prezzo_hash_idx on articoli\n(cost=0.00..8971.95 rows=5026 width=27) (actual time=0.013..5.821\nrows=5200 loops=1)\n Index Cond: (prezzo = 77)\n Planning Time: 
0.108 ms\n Execution Time: 6.037 ms\n(4 rows)\n\ntestdb=> begin;\nBEGIN\ntestdb=*> drop index articoli_prezzo_hash_idx;\nDROP INDEX\ntestdb=*> explain analyze select * from articoli where prezzo = 77;\n QUERY\nPLAN\n----------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using articoli_prezzo_idx on articoli (cost=0.42..8891.65\nrows=5026 width=27) (actual time=0.034..6.561 rows=5200 loops=1)\n Index Cond: (prezzo = 77)\n Planning Time: 0.165 ms\n Execution Time: 6.843 ms\n(4 rows)\n\ntestdb=*> rollback;\nROLLBACK\n\n\n",
"msg_date": "Fri, 18 Nov 2022 13:15:23 +0100",
"msg_from": "Luca Ferrari <fluca1978@gmail.com>",
"msg_from_op": true,
"msg_subject": "why choosing an hash index instead of the btree version even if the\n cost is lower?"
},
{
"msg_contents": "On 11/18/22 13:15, Luca Ferrari wrote:\n> Hi all,\n> I'm just having a doubt about the choice of the planner for a small\n> example table.\n> I've a table with a numeric column (integer), and I've created two\n> indexes on such column, one btree and one hash. The hash results much\n> larger as the btree, but what puzzles me is that executing an equality\n> simple query, the system chooses the hash index (that has a final cost\n> of 8984.08 while the btree index would have a final cost a little\n> lower (8901.94).\n> \n> The only difference I can spot in the EXPLAIN plans is that the btree\n> index has an initial cost, but I don't think this is the reason, since\n> it should be the final cost what matters, right?\n> \n> Now, even if the two costs are comparable, why does the optimizer\n> chooses to use the larger hash index? What am I missing here and where\n> do I have to dig?\n> Using EXPLAIN ANALYZE shows that the two indexes are very similar\n> timings, so while using the hash index is clearly not a wrong choice,\n> I'm wondering why preferring a bigger index.\n> \n> Please note that the table has been manually ANALYZEd and is not clustered.\n> \n\nMy guess is this is due to STD_FUZZ_FACTOR, see [1] and [2].\n\nThat is, when comparing costs, we require the cost to be at least 1%,\nbecause we have a cheapest path, and we're checking if it's worth\nbuilding another one (which is not free - we have to allocate stuff\netc.). And if the difference is tiny, it's not worth it.\n\nIn this case we have the indexscan for the hash index, with cost\n8971.95, and we're considering to build indexacan path for the btree\nindex. We haven't built it yet, we only calculate cost 8891.65. But\n\n 8971.95/8891.65 = 1.009\n\nSo it's close to 1.01, but just a little bit less. 
So we conclude it's\nnot worth building the second path, and we keep the hash index scan.\n\nNow, this is based on the idea that\n\n small cost difference => small runtime difference\n\nAnd from the timings you shared, this seems to be the case - 6.0 vs. 6.8\nms is fairly close.\n\nI'd say btree/hash estimates this close are not very common in practice,\nconsidering the costing formulas for the index types is quite different.\nYou may try tweaking the different cost parameters (e.g. operator cost,\nrandom page cost) to make the difference larger.\n\n\nregards\n\n[1]\nhttps://github.com/postgres/postgres/blob/master/src/backend/optimizer/util/pathnode.c#L51\n\n[2]\nhttps://github.com/postgres/postgres/blob/master/src/backend/optimizer/util/pathnode.c#L166\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 18 Nov 2022 14:23:31 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: why choosing an hash index instead of the btree version even if\n the cost is lower?"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> On 11/18/22 13:15, Luca Ferrari wrote:\n>> I've a table with a numeric column (integer), and I've created two\n>> indexes on such column, one btree and one hash. The hash results much\n>> larger as the btree, but what puzzles me is that executing an equality\n>> simple query, the system chooses the hash index (that has a final cost\n>> of 8984.08 while the btree index would have a final cost a little\n>> lower (8901.94).\n>> \n>> The only difference I can spot in the EXPLAIN plans is that the btree\n>> index has an initial cost, but I don't think this is the reason, since\n>> it should be the final cost what matters, right?\n\n> My guess is this is due to STD_FUZZ_FACTOR, see [1] and [2].\n\n> That is, when comparing costs, we require the cost to be at least 1%,\n> because we have a cheapest path, and we're checking if it's worth\n> building another one (which is not free - we have to allocate stuff\n> etc.). And if the difference is tiny, it's not worth it.\n\nEven more to the point: if the total costs are fuzzily the same,\nthen the next point of comparison will be the startup costs,\nwhich is where the hash index wins. I'm not sure if it's quite\nfair to give hash a zero startup cost; but it doesn't have to\ndescend a search tree, so it is fair that its startup cost is\nless than btree's.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 18 Nov 2022 09:55:12 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: why choosing an hash index instead of the btree version even if\n the cost is lower?"
},
{
"msg_contents": "On Fri, Nov 18, 2022 at 2:23 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> That is, when comparing costs, we require the cost to be at least 1%,\n> because we have a cheapest path, and we're checking if it's worth\n> building another one (which is not free - we have to allocate stuff\n> etc.). And if the difference is tiny, it's not worth it.\n>\n> In this case we have the indexscan for the hash index, with cost\n> 8971.95, and we're considering to build indexacan path for the btree\n> index. We haven't built it yet, we only calculate cost 8891.65. But\n>\n> 8971.95/8891.65 = 1.009\n>\n> So it's close to 1.01, but just a little bit less. So we conclude it's\n> not worth building the second path, and we keep the hash index scan.\n>\n\n\nAn excellent explanation, it totally does make sense to me and\nexplains what I felt (i.e., similar costs lead to a kind of equality\nin choosing the index).\n\n\nThanks,\nLuca\n\n\n",
"msg_date": "Fri, 18 Nov 2022 17:16:37 +0100",
"msg_from": "Luca Ferrari <fluca1978@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: why choosing an hash index instead of the btree version even if\n the cost is lower?"
},
{
"msg_contents": "On Fri, Nov 18, 2022 at 3:55 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Even more to the point: if the total costs are fuzzily the same,\n> then the next point of comparison will be the startup costs,\n> which is where the hash index wins.\n\nThanks, it is clear now.\n\nLuca\n\n\n",
"msg_date": "Fri, 18 Nov 2022 17:17:55 +0100",
"msg_from": "Luca Ferrari <fluca1978@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: why choosing an hash index instead of the btree version even if\n the cost is lower?"
}
] |
[
{
"msg_contents": "Hi All,\n\nI'm looking for suggestions:\n\nWhat kind of metrics would be beneficial to monitor and alert on Postgres\nRDS while switching to GP3 volumes (from existing GP2 volumes)?\n- We see intermittent I/O (disk queue depth) alerts with the existing GP2\nstorage type and burst credit alerts.\n\nhttps://aws.amazon.com/about-aws/whats-new/2022/11/amazon-rds-general-purpose-gp3-storage-volumes/\n\nThanks,\nSenko\n",
"msg_date": "Sat, 19 Nov 2022 08:28:31 +0530",
"msg_from": "Sengottaiyan T <techsenko@gmail.com>",
"msg_from_op": true,
"msg_subject": "Need suggestion to set-up RDS alerts on GP3 volumes"
}
] |
[
{
"msg_contents": "Hey, folks:\n\nI haven't configured a PostgreSQL server since version 11 (before that, \nI did quite a few).\n\nWhat's changed in terms of performance configuration since then? Have \nthe fundamentals of shared_buffers/work_mem/max_connections changed at \nall? Which new settings are must-tunes?\n\nI've heard about new parallel stuff and JIT, but neither is that \napplicable to my use-case.\n\n-- \nJosh Berkus\n\n\n",
"msg_date": "Mon, 28 Nov 2022 18:59:41 -0800",
"msg_from": "Josh Berkus <josh@berkus.org>",
"msg_from_op": true,
"msg_subject": "Catching up with performance & PostgreSQL 15"
},
{
"msg_contents": "On Mon, Nov 28, 2022 at 06:59:41PM -0800, Josh Berkus wrote:\n> Hey, folks:\n> \n> I haven't configured a PostgreSQL server since version 11 (before that, I\n> did quite a few).\n> \n> What's changed in terms of performance configuration since then? Have the\n> fundamentals of shared_buffers/work_mem/max_connections changed at all?\n> Which new settings are must-tunes?\n> \n> I've heard about new parallel stuff an JIT, but neither is that applicable\n> to my use-case.\n\nshared buffers is the same, but btree indexes are frequently (IME) 3x\nsmaller (!) since deduplication was added in v13, so s_b might not need\nto be as large.\n\nIn addition to setting work_mem, you can also (since v13) set\nhash_mem_multiplier.\n\ndefault_toast_compression = lz4 # v14\nrecovery_init_sync_method = syncfs # v14\ncheck_client_connection_interval = ... # v14\nwal_compression = {lz4,zstd} # v15\n\nPeeking at my notes, there's also: partitioning, parallel query, brin\nindexes, extended statistics, reindex concurrently, ...\n\n... but I don't think anything is radically changed :)\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 28 Nov 2022 21:34:58 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Catching up with performance & PostgreSQL 15"
},
{
"msg_contents": "On 11/28/22 21:59, Josh Berkus wrote:\n> Hey, folks:\n>\n> I haven't configured a PostgreSQL server since version 11 (before \n> that, I did quite a few).\n>\n> What's changed in terms of performance configuration since then? Have \n> the fundamentals of shared_buffers/work_mem/max_connections changed at \n> all? Which new settings are must-tunes?\n>\n> I've heard about new parallel stuff an JIT, but neither is that \n> applicable to my use-case.\n>\nWell, well! Long time no see! You'll probably be glad to learn that we \nhave hints now. Thank you for the following page you created:\n\nhttps://laptrinhx.com/why-postgresql-doesn-t-have-query-hints-2912445911/\n\nI've used it several times, with great success. It's priceless.\n\nNow, to answer your question: no, fundamentals of shared buffers, work \nmemory and connections haven't changed. Parallelism works fine, it's \nreliable and easy to enable. All you need is to set \nmax_parallel_workers_per_gather to an integer > 0 and PgSQL 15 will \nautomatically use a parallel plan if the planner decides that it's the \nbest path. However, to warn you in advance, parallel query is not a \npanacea. On OLTP databases, I usually disable it on purpose. Parallel \nquery will speed up sequential scans, but if your application is OLTP, \nsequential scan is a sign of trouble. Parallelism is a data warehouse \nonly feature. And even then, you don't want it to be run by multiple \nusers at the same time. Namely, the number of your CPU resources is \nfinite and having multiple users launch multiple processes is the best \nway to run out of CPU power fast. Normally, you would package an \noutput of the parallel query into a materialized view and let the users \nquery the view.\n\nAs for JIT, I've recently asked that question myself. I was told that \nPostgreSQL with LLVM enabled performs approximately 25% better than \nwithout it. I haven't measured it so I can't either confirm or deny the \nnumber. 
I can tell you that there is a noticeable throughput \nimprovement with PL/PGSQL intensive applications. There was also an \nincrease in CPU consumption. I wasn't doing benchmarks, I was looking \nfor generic settings to install via Ansible so I don't have the \nnumbers, only the feeling. One way of quantifying the difference would \nbe to run pgbench with and without JIT.\n\nPS:\n\nI am still an Oracle DBA, just as you wrote in the paper.\n\n-- \nMladen Gogala\nDatabase Consultant\nTel: (347) 321-1217\nhttps://dbwhisperer.wordpress.com\n",
"msg_date": "Mon, 28 Nov 2022 22:39:47 -0500",
"msg_from": "Mladen Gogala <gogala.mladen@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Catching up with performance & PostgreSQL 15"
},
{
"msg_contents": "On 11/28/22 19:34, Justin Pryzby wrote:\n> In addition to setting work_mem, you can also (since v13) set\n> hash_mem_multiplier.\n\nIs there any guidance on setting this? Or is it still \"use the default \nunless you can play around with it\"?\n\n> default_toast_compression = lz4 # v14\n> recovery_init_sync_method = syncfs # v14\n> check_client_connection_interval = ... # v14\n> wal_compression = {lz4,zstd} # v15\n\nIf anyone has links to blogs or other things that discuss the \nperformance implications of the above settings that would be wonderful!\n\n-- \nJosh Berkus\n\n\n\n",
"msg_date": "Mon, 28 Nov 2022 22:09:57 -0800",
"msg_from": "Josh Berkus <josh@berkus.org>",
"msg_from_op": true,
"msg_subject": "Re: Catching up with performance & PostgreSQL 15"
},
{
"msg_contents": "On 2022-Nov-28, Mladen Gogala wrote:\n\n> You'll probably be glad to learn that we have hints now.\n\nWhat hints are you talking about? As I understand, we still don't have\nOracle-style query hints.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 29 Nov 2022 09:31:51 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Catching up with performance & PostgreSQL 15"
},
{
"msg_contents": "On 2022-Nov-28, Mladen Gogala wrote:\n\n> As for JIT, I've recently asked that question myself. I was told that\n> PostgreSQL with LLVM enabled performs approximately 25% better than without\n> it.\n\nHmm, actually, normally you're better off turning JIT off, because it's\nvery common to diagnose cases of queries that become much, much slower\nbecause of it. Some queries do become faster, but it's not a wide\nmargin, and it's not a lot. There are rare cases where JIT is\nbeneficial, but those tend to be queries that take upwards of several\nseconds already.\n\nIMO it was a mistake to turn JIT on in the default config, so that's one\nthing you'll likely want to change.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Thou shalt not follow the NULL pointer, for chaos and madness await\nthee at its end.\" (2nd Commandment for C programmers)\n\n\n",
"msg_date": "Tue, 29 Nov 2022 09:36:15 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Catching up with performance & PostgreSQL 15"
},
{
"msg_contents": "On 11/29/22 03:31, Alvaro Herrera wrote:\n> On 2022-Nov-28, Mladen Gogala wrote:\n>\n>> You'll probably be glad to learn that we have hints now.\n> What hints are you talking about? As I understand, we still don't have\n> Oracle-style query hints.\n>\nhttps://github.com/ossc-db/pg_hint_plan\n\n-- \nMladen Gogala\nDatabase Consultant\nTel: (347) 321-1217\nhttps://dbwhisperer.wordpress.com\n",
"msg_date": "Tue, 29 Nov 2022 09:16:11 -0500",
"msg_from": "Mladen Gogala <gogala.mladen@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Catching up with performance & PostgreSQL 15"
},
{
"msg_contents": "On 11/29/22 03:36, Alvaro Herrera wrote:\n> On 2022-Nov-28, Mladen Gogala wrote:\n>\n>> As for JIT, I've recently asked that question myself. I was told that\n>> PostgreSQL with LLVM enabled performs approximately 25% better than without\n>> it.\n> Hmm, actually, normally you're better off turning JIT off, because it's\n> very common to diagnose cases of queries that become much, much slower\n> because of it. Some queries do become faster, but it's not a wide\n> margin, and it's not a lot. There are rare cases where JIT is\n> beneficial, but those tend to be queries that take upwards of several\n> seconds already.\n>\n> IMO it was a mistake to turn JIT on in the default config, so that's one\n> thing you'll likely want to change.\n>\nHmmm, I think I will run pgbench with and without JIT on and see the \ndifference.\n\n-- \nMladen Gogala\nDatabase Consultant\nTel: (347) 321-1217\nhttps://dbwhisperer.wordpress.com\n",
"msg_date": "Tue, 29 Nov 2022 09:17:48 -0500",
"msg_from": "Mladen Gogala <gogala.mladen@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Catching up with performance & PostgreSQL 15"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> IMO it was a mistake to turn JIT on in the default config, so that's one\n> thing you'll likely want to change.\n\nI wouldn't necessarily go quite that far, but I do think that the\ndefault cost thresholds for invoking it are enormously too low,\nor else there are serious bugs in the cost-estimation algorithms\nfor deciding when to use it. A nearby example[1] of a sub-1-sec\npartitioned query that took 30sec after JIT was enabled makes me\nwonder if we're accounting correctly for per-partition JIT costs.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/B6025887-D73F-4B5B-9925-4DA4B675F7E5%40elevated-dev.com\n\n\n",
"msg_date": "Tue, 29 Nov 2022 09:31:36 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Catching up with performance & PostgreSQL 15"
},
{
"msg_contents": "On 2022-Nov-29, Mladen Gogala wrote:\n\n> Hmmm, I think I will run pgbench with and without JIT on and see the\n> difference.\n\nI doubt you'll notice anything, because the pgbench queries will be far\nbelow the JIT cost, so nothing will get JIT compiled at all. Or are you\nplanning on using a custom set of queries?\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 29 Nov 2022 19:09:14 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Catching up with performance & PostgreSQL 15"
},
{
"msg_contents": "On Wed, 30 Nov 2022 at 03:31, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > IMO it was a mistake to turn JIT on in the default config, so that's one\n> > thing you'll likely want to change.\n>\n> I wouldn't necessarily go quite that far, but I do think that the\n> default cost thresholds for invoking it are enormously too low,\n> or else there are serious bugs in the cost-estimation algorithms\n> for deciding when to use it. A nearby example[1] of a sub-1-sec\n> partitioned query that took 30sec after JIT was enabled makes me\n> wonder if we're accounting correctly for per-partition JIT costs.\n\nI'm very grateful for JIT. However, I do agree that the costs need to work.\n\nThe problem is that the threshold to turn JIT on does not consider how\nmany expressions need to be compiled. It's quite different to JIT\ncompile a simple one-node plan with a total cost of 100000 than to JIT\ncompile a plan that costs the same but queries 1000 partitions. I\nthink we should be compiling expressions based on the cost of the\nindividual node rather than the total cost of the plan. We need to\nmake some changes so we can more easily determine the number of times\na given node will be executed before we can determine how worthwhile\nJITting an expression in a node will be.\n\nDavid\n\n> [1] https://www.postgresql.org/message-id/B6025887-D73F-4B5B-9925-4DA4B675F7E5%40elevated-dev.com\n\n\n",
"msg_date": "Wed, 30 Nov 2022 10:06:40 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Catching up with performance & PostgreSQL 15"
},
{
"msg_contents": "On Tue, 2022-11-29 at 19:09 +0100, Alvaro Herrera wrote:\n> On 2022-Nov-29, Mladen Gogala wrote:\n> \n> > Hmmm, I think I will run pgbench with and without JIT on and see\n> > the\n> > difference.\n> \n> I doubt you'll notice anything, because the pgbench queries will be\n> far\n> below the JIT cost, so nothing will get JIT compiled at all. Or are\n> you\n> planning on using a custom set of queries?\n> \n\nNope. I am planning to set jit_above_cost parameter to 5. That should\ntake care of the pgbench problem. Other than that, you're right: JIT\nshould not be used for OLTP. However, pure OLTP or DW databases are a\nrarity these days. Reporting is a crucial function and almost every\nOLTP database that I've seen also has reporting function, which means\nthat there are complex queries to be executed.\n-- \nMladen Gogala\nDatabase Consultant\nhttps://dbwhisperer.wordpress.com\n",
"msg_date": "Tue, 29 Nov 2022 19:44:23 -0500",
"msg_from": "Mladen Gogala <gogala.mladen@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Catching up with performance & PostgreSQL 15"
},
{
"msg_contents": "\nOn 2022-11-29 Tu 16:06, David Rowley wrote:\n> On Wed, 30 Nov 2022 at 03:31, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n>>> IMO it was a mistake to turn JIT on in the default config, so that's one\n>>> thing you'll likely want to change.\n>> I wouldn't necessarily go quite that far, but I do think that the\n>> default cost thresholds for invoking it are enormously too low,\n>> or else there are serious bugs in the cost-estimation algorithms\n>> for deciding when to use it. A nearby example[1] of a sub-1-sec\n>> partitioned query that took 30sec after JIT was enabled makes me\n>> wonder if we're accounting correctly for per-partition JIT costs.\n> I'm very grateful for JIT. However, I do agree that the costs need to work.\n>\n> The problem is that the threshold to turn JIT on does not consider how\n> many expressions need to be compiled. It's quite different to JIT\n> compile a simple one-node plan with a total cost of 100000 than to JIT\n> compile a plan that costs the same but queries 1000 partitions. I\n> think we should be compiling expressions based on the cost of the\n> individial node rather than the total cost of the plan. We need to\n> make some changes so we can more easily determine the number of times\n> a given node will be executed before we can determine how worthwhile\n> JITting an expression in a node will be.\n>\n\nI think Alvaro's point is that it would have been better to work out\nthese wrinkles before turning on JIT by default. Based on anecdotal\nreports from the field I'm inclined to agree.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 30 Nov 2022 06:47:32 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Catching up with performance & PostgreSQL 15"
},
{
"msg_contents": "Hi, \n\nOn November 30, 2022 3:47:32 AM PST, Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n>On 2022-11-29 Tu 16:06, David Rowley wrote:\n>> On Wed, 30 Nov 2022 at 03:31, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n>>>> IMO it was a mistake to turn JIT on in the default config, so that's one\n>>>> thing you'll likely want to change.\n>>> I wouldn't necessarily go quite that far, but I do think that the\n>>> default cost thresholds for invoking it are enormously too low,\n>>> or else there are serious bugs in the cost-estimation algorithms\n>>> for deciding when to use it. A nearby example[1] of a sub-1-sec\n>>> partitioned query that took 30sec after JIT was enabled makes me\n>>> wonder if we're accounting correctly for per-partition JIT costs.\n>> I'm very grateful for JIT. However, I do agree that the costs need to work.\n>>\n>> The problem is that the threshold to turn JIT on does not consider how\n>> many expressions need to be compiled. It's quite different to JIT\n>> compile a simple one-node plan with a total cost of 100000 than to JIT\n>> compile a plan that costs the same but queries 1000 partitions. I\n>> think we should be compiling expressions based on the cost of the\n>> individial node rather than the total cost of the plan. We need to\n>> make some changes so we can more easily determine the number of times\n>> a given node will be executed before we can determine how worthwhile\n>> JITting an expression in a node will be.\n>>\n>\n>I think Alvaro's point is that it would have been better to work out\n>these wrinkles before turning on JIT by default. Based on anecdotal\n>reports from the field I'm inclined to agree.\n\nThe problem is that back when it was introduced these problems didn't exist to a significant degree. JIT was developed when partitioning was very minimal- and the problems we're seeing are almost exclusively with queries with many partitions. 
The problems really only started much more recently. It also wasn't enabled in the first release.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Wed, 30 Nov 2022 08:07:11 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Catching up with performance & PostgreSQL 15"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On November 30, 2022 3:47:32 AM PST, Andrew Dunstan <andrew@dunslane.net> wrote:\n>> I think Alvaro's point is that it would have been better to work out\n>> these wrinkles before turning on JIT by default. Based on anecdotal\n>> reports from the field I'm inclined to agree.\n\n> The problem is that back when it was introduced these problems didn't exist to a significant degree. JIT was developed when partitioning was very minimal- and the problems we're seeing are almost exclusively with queries with many partitions. The problems really only started much more recently. It also wasn't enabled in the first release..\n\nWell, wherever you want to pin the blame, it seems clear that we\nhave a problem now. And I don't think flipping back to off-by-default\nis the answer -- surely there is some population of users who will\nnot be happy with that. We really need to prioritize fixing the\ncost-estimation problems, and/or tweaking the default thresholds.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 30 Nov 2022 11:36:28 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Catching up with performance & PostgreSQL 15"
},
{
"msg_contents": "\nOn 2022-11-30 We 11:36, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> On November 30, 2022 3:47:32 AM PST, Andrew Dunstan <andrew@dunslane.net> wrote:\n>>> I think Alvaro's point is that it would have been better to work out\n>>> these wrinkles before turning on JIT by default. Based on anecdotal\n>>> reports from the field I'm inclined to agree.\n>> The problem is that back when it was introduced these problems didn't exist to a significant degree. JIT was developed when partitioning was very minimal- and the problems we're seeing are almost exclusively with queries with many partitions. The problems really only started much more recently. It also wasn't enabled in the first release..\n> Well, wherever you want to pin the blame, it seems clear that we\n> have a problem now. And I don't think flipping back to off-by-default\n> is the answer -- surely there is some population of users who will\n> not be happy with that. We really need to prioritize fixing the\n> cost-estimation problems, and/or tweaking the default thresholds.\n>\n> \t\t\t\n\n\n+1\n\n\nFTR I am not trying to pin blame anywhere. I think the work that's been\ndone on JIT is more than impressive.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 1 Dec 2022 12:05:01 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Catching up with performance & PostgreSQL 15"
},
{
"msg_contents": "On Tue, Nov 29, 2022 at 4:07 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Wed, 30 Nov 2022 at 03:31, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > > IMO it was a mistake to turn JIT on in the default config, so that's\n> one\n> > > thing you'll likely want to change.\n> >\n> > I wouldn't necessarily go quite that far, but I do think that the\n> > default cost thresholds for invoking it are enormously too low,\n> > or else there are serious bugs in the cost-estimation algorithms\n> > for deciding when to use it. A nearby example[1] of a sub-1-sec\n> > partitioned query that took 30sec after JIT was enabled makes me\n> > wonder if we're accounting correctly for per-partition JIT costs.\n>\n> I'm very grateful for JIT. However, I do agree that the costs need to work.\n>\n> The problem is that the threshold to turn JIT on does not consider how\n> many expressions need to be compiled. It's quite different to JIT\n> compile a simple one-node plan with a total cost of 100000 than to JIT\n> compile a plan that costs the same but queries 1000 partitions. I\n> think we should be compiling expressions based on the cost of the\n> individial node rather than the total cost of the plan.\n\n\nI think a big win for JIT would be to be able to do it just once per cached\nplan, not once per execution. And then have it turned on only for prepared\nstatements. Of course that means JIT couldn't do parameter folding, but I\ndon't know if it does that anyway. 
Also, very expensive plans are\ngenerally dominated by IO cost estimates, and I think it doesn't make sense\nto drive JIT decisions based predominantly on the expected cost of the IO.\nIf the planner separated IO cost estimate totals from CPU cost estimate\ntotals, it might open up better choices.\n\nCheers,\n\nJeff\n",
"msg_date": "Sat, 3 Dec 2022 22:50:57 -0500",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Catching up with performance & PostgreSQL 15"
}
] |
[
{
"msg_contents": "Hello everyone,\n\nI'm having a problem regarding the point type/gist indexes. Here's a\nminimal reproduction of it:\n\ncreate table test(p point);\ninsert into test(p) values (point(0, 0));\ninsert into test(p) values (point(0, 1));\ninsert into test(p) values (point(1, 0));\ninsert into test(p) values (point(1, 1));\ninsert into test(p) values (point(50, 0));\nanalyze test;\nexplain analyze select * from test where p <@ box '(0,0),(1,1)';\nexplain analyze select * from test where p <@ box '(50,0),(51,1)';\n\nThe two queries get the same cost/row estimation, of 1 row. This is the\nEXPLAIN ANALYZE of the first query:\n\nSeq Scan on test (cost=0.00..1.07 rows=1 width=16) (actual\ntime=0.022..0.026 rows=4 loops=1)\n Filter: ((p[0] >= '0'::double precision) AND (p[0] <= '1'::double\nprecision))\n Rows Removed by Filter: 1\n Planning Time: 0.115 ms\n Execution Time: 0.055 ms\n(5 rows)\n\nWhat I was expecting is the first query to estimate 4 rows and the second\nto estimate 1, like what I get If I try the same thing using integers.\n\ncreate table test(x integer, y integer);\ninsert into test(x, y) values (0, 0);\ninsert into test(x, y) values (0, 1);\ninsert into test(x, y) values (1, 0);\ninsert into test(x, y) values (1, 1);\ninsert into test(x, y) values (50, 0);\nanalyze test;\nexplain analyze select * from test where x between 0 and 1 and y between 0\nand 1;\nexplain analyze select * from test where x between 50 and 51 and y between\n0 and 1;\n\nMy question is: is this expected behaviour? 
I actually have a much larger\ntable with a gist index where I found this occurring, and this causes the\nplanner to make bad decisions: every query that I do will have the same\nestimation, and whenever this estimation is very wrong, the planner does\nnot take the optimal decision.\n\nI'm using the official docker image, PostgreSQL 15.1 (Debian\n15.1-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6)\n10.2.1 20210110, 64-bit, running everything in psql (PostgreSQL) 15.1\n(Ubuntu 15.1-1.pgdg22.04+1).\n\nBest regards,\nIgor\n\nHello everyone,I'm having a problem regarding the point type/gist indexes. Here's a minimal reproduction of it:create table test(p point);insert into test(p) values (point(0, 0));insert into test(p) values (point(0, 1));insert into test(p) values (point(1, 0));insert into test(p) values (point(1, 1));insert into test(p) values (point(50, 0));analyze test;explain analyze select * from test where p <@ box '(0,0),(1,1)';explain analyze select * from test where p <@ box '(50,0),(51,1)';The two queries get the same cost/row estimation, of 1 row. This is the EXPLAIN ANALYZE of the first query:Seq Scan on test (cost=0.00..1.07 rows=1 width=16) (actual time=0.022..0.026 rows=4 loops=1) Filter: ((p[0] >= '0'::double precision) AND (p[0] <= '1'::double precision)) Rows Removed by Filter: 1 Planning Time: 0.115 ms Execution Time: 0.055 ms(5 rows)What I was expecting is the first query to estimate 4 rows and the second to estimate 1, like what I get If I try the same thing using integers.create table test(x integer, y integer);insert into test(x, y) values (0, 0);insert into test(x, y) values (0, 1);insert into test(x, y) values (1, 0);insert into test(x, y) values (1, 1);insert into test(x, y) values (50, 0);analyze test;explain analyze select * from test where x between 0 and 1 and y between 0 and 1;explain analyze select * from test where x between 50 and 51 and y between 0 and 1;My question is: is this expected behaviour? 
I actually have a much larger table with a gist index where I found this occurring, and this causes the planner to make bad decisions: every query that I do will have the same estimation, and whenever this estimation is very wrong, the planner does not take the optimal decision.I'm using the official docker image, PostgreSQL 15.1 (Debian 15.1-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit, running everything in psql (PostgreSQL) 15.1 (Ubuntu 15.1-1.pgdg22.04+1).Best regards,Igor",
"msg_date": "Wed, 30 Nov 2022 17:44:36 +0100",
"msg_from": "Igor ALBUQUERQUE SILVA <i.albuquerque-silva@kayrros.com>",
"msg_from_op": true,
"msg_subject": "Geometric types row estimation"
},
{
"msg_contents": "I'm sorry, I sent the wrong EXPLAIN ANALYZE for the first query, this is\nthe correct one:\n\nSeq Scan on test (cost=0.00..1.06 rows=1 width=16) (actual\ntime=0.018..0.022 rows=4 loops=1)\n Filter: (p <@ '(1,1),(0,0)'::box)\n Rows Removed by Filter: 1\n Planning Time: 0.211 ms\n Execution Time: 0.051 ms\n(5 rows)\n\nOn Wed, 30 Nov 2022 at 17:44, Igor ALBUQUERQUE SILVA <\ni.albuquerque-silva@kayrros.com> wrote:\n\n> Hello everyone,\n>\n> I'm having a problem regarding the point type/gist indexes. Here's a\n> minimal reproduction of it:\n>\n> create table test(p point);\n> insert into test(p) values (point(0, 0));\n> insert into test(p) values (point(0, 1));\n> insert into test(p) values (point(1, 0));\n> insert into test(p) values (point(1, 1));\n> insert into test(p) values (point(50, 0));\n> analyze test;\n> explain analyze select * from test where p <@ box '(0,0),(1,1)';\n> explain analyze select * from test where p <@ box '(50,0),(51,1)';\n>\n> The two queries get the same cost/row estimation, of 1 row. 
This is the\n> EXPLAIN ANALYZE of the first query:\n>\n> Seq Scan on test (cost=0.00..1.07 rows=1 width=16) (actual\n> time=0.022..0.026 rows=4 loops=1)\n> Filter: ((p[0] >= '0'::double precision) AND (p[0] <= '1'::double\n> precision))\n> Rows Removed by Filter: 1\n> Planning Time: 0.115 ms\n> Execution Time: 0.055 ms\n> (5 rows)\n>\n> What I was expecting is the first query to estimate 4 rows and the second\n> to estimate 1, like what I get If I try the same thing using integers.\n>\n> create table test(x integer, y integer);\n> insert into test(x, y) values (0, 0);\n> insert into test(x, y) values (0, 1);\n> insert into test(x, y) values (1, 0);\n> insert into test(x, y) values (1, 1);\n> insert into test(x, y) values (50, 0);\n> analyze test;\n> explain analyze select * from test where x between 0 and 1 and y between 0\n> and 1;\n> explain analyze select * from test where x between 50 and 51 and y between\n> 0 and 1;\n>\n> My question is: is this expected behaviour? I actually have a much larger\n> table with a gist index where I found this occurring, and this causes the\n> planner to make bad decisions: every query that I do will have the same\n> estimation, and whenever this estimation is very wrong, the planner does\n> not take the optimal decision.\n>\n> I'm using the official docker image, PostgreSQL 15.1 (Debian\n> 15.1-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6)\n> 10.2.1 20210110, 64-bit, running everything in psql (PostgreSQL) 15.1\n> (Ubuntu 15.1-1.pgdg22.04+1).\n>\n> Best regards,\n> Igor\n>\n\nI'm sorry, I sent the wrong EXPLAIN ANALYZE for the first query, this is the correct one:Seq Scan on test (cost=0.00..1.06 rows=1 width=16) (actual time=0.018..0.022 rows=4 loops=1) Filter: (p <@ '(1,1),(0,0)'::box) Rows Removed by Filter: 1 Planning Time: 0.211 ms Execution Time: 0.051 ms(5 rows)On Wed, 30 Nov 2022 at 17:44, Igor ALBUQUERQUE SILVA <i.albuquerque-silva@kayrros.com> wrote:Hello everyone,I'm having a problem regarding 
the point type/gist indexes. Here's a minimal reproduction of it:create table test(p point);insert into test(p) values (point(0, 0));insert into test(p) values (point(0, 1));insert into test(p) values (point(1, 0));insert into test(p) values (point(1, 1));insert into test(p) values (point(50, 0));analyze test;explain analyze select * from test where p <@ box '(0,0),(1,1)';explain analyze select * from test where p <@ box '(50,0),(51,1)';The two queries get the same cost/row estimation, of 1 row. This is the EXPLAIN ANALYZE of the first query:Seq Scan on test (cost=0.00..1.07 rows=1 width=16) (actual time=0.022..0.026 rows=4 loops=1) Filter: ((p[0] >= '0'::double precision) AND (p[0] <= '1'::double precision)) Rows Removed by Filter: 1 Planning Time: 0.115 ms Execution Time: 0.055 ms(5 rows)What I was expecting is the first query to estimate 4 rows and the second to estimate 1, like what I get If I try the same thing using integers.create table test(x integer, y integer);insert into test(x, y) values (0, 0);insert into test(x, y) values (0, 1);insert into test(x, y) values (1, 0);insert into test(x, y) values (1, 1);insert into test(x, y) values (50, 0);analyze test;explain analyze select * from test where x between 0 and 1 and y between 0 and 1;explain analyze select * from test where x between 50 and 51 and y between 0 and 1;My question is: is this expected behaviour? I actually have a much larger table with a gist index where I found this occurring, and this causes the planner to make bad decisions: every query that I do will have the same estimation, and whenever this estimation is very wrong, the planner does not take the optimal decision.I'm using the official docker image, PostgreSQL 15.1 (Debian 15.1-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit, running everything in psql (PostgreSQL) 15.1 (Ubuntu 15.1-1.pgdg22.04+1).Best regards,Igor",
"msg_date": "Wed, 30 Nov 2022 17:46:58 +0100",
"msg_from": "Igor ALBUQUERQUE SILVA <i.albuquerque-silva@kayrros.com>",
"msg_from_op": true,
"msg_subject": "Re: Geometric types row estimation"
},
{
"msg_contents": "Igor ALBUQUERQUE SILVA <i.albuquerque-silva@kayrros.com> writes:\n> I'm having a problem regarding the point type/gist indexes. Here's a\n> minimal reproduction of it:\n> ...\n> What I was expecting is the first query to estimate 4 rows and the second\n> to estimate 1, like what I get If I try the same thing using integers.\n\nUnfortunately, the selectivity estimation functions for PG's geometric\ntypes are mostly just stubs. The estimation function for point <@ box\nin particular is contsel [1]:\n\n/*\n *\tcontsel -- How likely is a box to contain (be contained by) a given box?\n *\n * This is a tighter constraint than \"overlap\", so produce a smaller\n * estimate than areasel does.\n */\nDatum\ncontsel(PG_FUNCTION_ARGS)\n{\n\tPG_RETURN_FLOAT8(0.001);\n}\n\nIt's been like that (excepting notational changes) since Berkeley days,\nbecause nobody has bothered to make it better.\n\nIn general, PG's built-in geometric types have never gotten much\nbeyond their origins as an academic proof-of-concept. I think people\nwho are doing serious work that requires such operations mostly use\nPostGIS, and I'd suggest looking into that.\n\nOr, if you feel like doing a lot of work to make these estimators\nbetter, have at it.\n\n\t\t\tregards, tom lane\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob_plain;f=src/backend/utils/adt/geo_selfuncs.c;hb=HEAD\n\n\n",
"msg_date": "Wed, 30 Nov 2022 12:18:41 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Geometric types row estimation"
},
{
"msg_contents": "Hi Tom,\n\nThanks a lot for the explanation, I thought the built-in types were more\nstandard, so I didn't mention that I was having the same thing using\npostgis. Here's the example (I changed the values a little bit to avoid\nrounding errors):\n\ncreate table test(p geometry(point));\ninsert into test(p) values (st_makepoint(0,0));\ninsert into test(p) values (st_makepoint(0,1));\ninsert into test(p) values (st_makepoint(1,0));\ninsert into test(p) values (st_makepoint(1,1));\ninsert into test(p) values (st_makepoint(50,0));\nanalyze test;\nexplain analyze select * from test where\nST_Contains(ST_GeomFromText('POLYGON((-1 -1,2 -1,2 2,-1 2,-1 -1))'), p);\nexplain analyze select * from test where\nST_Contains(ST_GeomFromText('POLYGON((49 -1,51 -1,51 1,49 1,49 -1))'), p);\n\nEXPLAIN ANALYZE:\n\n Seq Scan on test (cost=0.00..126.05 rows=1 width=32) (actual\ntime=0.015..0.022 rows=4 loops=1)\n Filter:\nst_contains('01030000000100000005000000000000000000F0BF000000000000F0BF0000000000000040000000000000F0BF00000000000000400000000000000040000000000000F0BF0000000000000040000000000000F0BF000000000000F0BF'::geometry,\np)\n Rows Removed by Filter: 1\n Planning Time: 0.072 ms\n Execution Time: 0.035 ms\n(5 rows)\n\nDo you know if the functions in Postgis are also stubbed? Or maybe I'm\ndoing something wrong with the syntax?\n\nThis time I'm using the postgis docker image, PostgreSQL 15.1 (Debian\n15.1-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6)\n10.2.1 20210110, 64-bit\n\nBest regards,\nIgor\n\nOn Wed, 30 Nov 2022 at 18:18, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Igor ALBUQUERQUE SILVA <i.albuquerque-silva@kayrros.com> writes:\n> > I'm having a problem regarding the point type/gist indexes. 
Here's a\n> > minimal reproduction of it:\n> > ...\n> > What I was expecting is the first query to estimate 4 rows and the second\n> > to estimate 1, like what I get If I try the same thing using integers.\n>\n> Unfortunately, the selectivity estimation functions for PG's geometric\n> types are mostly just stubs. The estimation function for point <@ box\n> in particular is contsel [1]:\n>\n> /*\n> * contsel -- How likely is a box to contain (be contained by) a\n> given box?\n> *\n> * This is a tighter constraint than \"overlap\", so produce a smaller\n> * estimate than areasel does.\n> */\n> Datum\n> contsel(PG_FUNCTION_ARGS)\n> {\n> PG_RETURN_FLOAT8(0.001);\n> }\n>\n> It's been like that (excepting notational changes) since Berkeley days,\n> because nobody has bothered to make it better.\n>\n> In general, PG's built-in geometric types have never gotten much\n> beyond their origins as an academic proof-of-concept. I think people\n> who are doing serious work that requires such operations mostly use\n> PostGIS, and I'd suggest looking into that.\n>\n> Or, if you feel like doing a lot of work to make these estimators\n> better, have at it.\n>\n> regards, tom lane\n>\n> [1]\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob_plain;f=src/backend/utils/adt/geo_selfuncs.c;hb=HEAD\n>\n\nHi Tom,Thanks a lot for the explanation, I thought the built-in types were more standard, so I didn't mention that I was having the same thing using postgis. 
Here's the example (I changed the values a little bit to avoid rounding errors):create table test(p geometry(point));insert into test(p) values (st_makepoint(0,0));insert into test(p) values (st_makepoint(0,1));insert into test(p) values (st_makepoint(1,0));insert into test(p) values (st_makepoint(1,1));insert into test(p) values (st_makepoint(50,0));analyze test;explain analyze select * from test where ST_Contains(ST_GeomFromText('POLYGON((-1 -1,2 -1,2 2,-1 2,-1 -1))'), p);explain analyze select * from test where ST_Contains(ST_GeomFromText('POLYGON((49 -1,51 -1,51 1,49 1,49 -1))'), p);EXPLAIN ANALYZE: Seq Scan on test (cost=0.00..126.05 rows=1 width=32) (actual time=0.015..0.022 rows=4 loops=1) Filter: st_contains('01030000000100000005000000000000000000F0BF000000000000F0BF0000000000000040000000000000F0BF00000000000000400000000000000040000000000000F0BF0000000000000040000000000000F0BF000000000000F0BF'::geometry, p) Rows Removed by Filter: 1 Planning Time: 0.072 ms Execution Time: 0.035 ms(5 rows)Do you know if the functions in Postgis are also stubbed? Or maybe I'm doing something wrong with the syntax?This time I'm using the postgis docker image, PostgreSQL 15.1 (Debian 15.1-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bitBest regards,IgorOn Wed, 30 Nov 2022 at 18:18, Tom Lane <tgl@sss.pgh.pa.us> wrote:Igor ALBUQUERQUE SILVA <i.albuquerque-silva@kayrros.com> writes:\n> I'm having a problem regarding the point type/gist indexes. Here's a\n> minimal reproduction of it:\n> ...\n> What I was expecting is the first query to estimate 4 rows and the second\n> to estimate 1, like what I get If I try the same thing using integers.\n\nUnfortunately, the selectivity estimation functions for PG's geometric\ntypes are mostly just stubs. 
The estimation function for point <@ box\nin particular is contsel [1]:\n\n/*\n * contsel -- How likely is a box to contain (be contained by) a given box?\n *\n * This is a tighter constraint than \"overlap\", so produce a smaller\n * estimate than areasel does.\n */\nDatum\ncontsel(PG_FUNCTION_ARGS)\n{\n PG_RETURN_FLOAT8(0.001);\n}\n\nIt's been like that (excepting notational changes) since Berkeley days,\nbecause nobody has bothered to make it better.\n\nIn general, PG's built-in geometric types have never gotten much\nbeyond their origins as an academic proof-of-concept. I think people\nwho are doing serious work that requires such operations mostly use\nPostGIS, and I'd suggest looking into that.\n\nOr, if you feel like doing a lot of work to make these estimators\nbetter, have at it.\n\n regards, tom lane\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob_plain;f=src/backend/utils/adt/geo_selfuncs.c;hb=HEAD",
"msg_date": "Wed, 30 Nov 2022 18:38:16 +0100",
"msg_from": "Igor ALBUQUERQUE SILVA <i.albuquerque-silva@kayrros.com>",
"msg_from_op": true,
"msg_subject": "Re: Geometric types row estimation"
},
{
"msg_contents": "Igor ALBUQUERQUE SILVA <i.albuquerque-silva@kayrros.com> writes:\n> Thanks a lot for the explanation, I thought the built-in types were more\n> standard, so I didn't mention that I was having the same thing using\n> postgis.\n\nHm --- you'd have to take that up with the PostGIS people. But they\nat least would be likely to have motivation to improve things.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 30 Nov 2022 12:45:47 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Geometric types row estimation"
},
{
"msg_contents": "Ok I'll do that, thanks a lot!\n\nOn Wed, 30 Nov 2022 at 18:45, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Igor ALBUQUERQUE SILVA <i.albuquerque-silva@kayrros.com> writes:\n> > Thanks a lot for the explanation, I thought the built-in types were more\n> > standard, so I didn't mention that I was having the same thing using\n> > postgis.\n>\n> Hm --- you'd have to take that up with the PostGIS people. But they\n> at least would be likely to have motivation to improve things.\n>\n> regards, tom lane\n>\n\nOk I'll do that, thanks a lot!On Wed, 30 Nov 2022 at 18:45, Tom Lane <tgl@sss.pgh.pa.us> wrote:Igor ALBUQUERQUE SILVA <i.albuquerque-silva@kayrros.com> writes:\n> Thanks a lot for the explanation, I thought the built-in types were more\n> standard, so I didn't mention that I was having the same thing using\n> postgis.\n\nHm --- you'd have to take that up with the PostGIS people. But they\nat least would be likely to have motivation to improve things.\n\n regards, tom lane",
"msg_date": "Wed, 30 Nov 2022 18:48:18 +0100",
"msg_from": "Igor ALBUQUERQUE SILVA <i.albuquerque-silva@kayrros.com>",
"msg_from_op": true,
"msg_subject": "Re: Geometric types row estimation"
}
] |
[
{
"msg_contents": "Hi there,\n\nI'm wondering if anyone has any insight into what might make the database\nchoose a sequential scan for a query (table defs and plan below) like :\n\nSELECT orders.orderid FROM orders\nWHERE (\norders.orderid IN ('546111')\n OR\norders.orderid IN (select orderid FROM orderstotrans WHERE (transid IN\n('546111')))\n);\n\nI have a couple of environments, all on Postgresql 13.7 and:\n- on one the query executes with an sequential scan on the orders table\n- on the other sequential scan on an index (ie walks index and filters,\nrather than looking up ids on the index as an index condition.)\n\nPlan and tables are below, but it seems to me that the planner knows the\nsubplan is going to return 1 row (max) and should \"know\" that there is a\nmax of 2 IDs to look up an indexes would be faster than a sequential scan\n(of either table or index) and filter. I've tried re analyzing to make sure\nstats are good and it hasn't helped\n\nI can get a good plan that does use the index efficiently by using a union,\neg:\n\nselect orders.orderid FROM orders\nWHERE (\norders.orderid IN (\n SELECT '546111'\n UNION\n SELECT orderid FROM orderstotrans WHERE (transid IN ('546111'))\n)\n);\n\nbut I want to understand what warning signs I should be aware of with the\noriginal query that put it on the path of a bad plan, so I don't do it\nagain.\n\n\nPlan - seq scan of table:\n=====\n> explain\nselect orders.orderid FROM orders WHERE (orders.orderid IN ('546111') OR\norders.orderid IN (select orderid FROM orderstotrans WHERE (transid IN\n('546111'))));\n\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------------------\n Seq Scan on orders (cost=8.45..486270.87 rows=4302781 width=8)\n Filter: ((orderid = '546111'::bigint) OR (hashed SubPlan 1))\n SubPlan 1\n -> Index Scan using orderstotrans_transid_key on orderstotrans\n (cost=0.43..8.45 rows=1 width=8)\n Index Cond: (transid = '546111'::bigint)\n(5 
rows)\n=====\n\nPlan - Seq scan and filter of index:\n=====\n> explain select orders.orderid FROM orders WHERE (orders.orderid IN\n('546111') OR orders.orderid IN (select orderid FROM orderstotrans WHERE\n(transid IN ('546111'))));\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------------------\n Index Only Scan using orders_pkey on orders (cost=9.16..4067888.60\nrows=64760840 width=8)\n Filter: ((orderid = '546111'::bigint) OR (hashed SubPlan 1))\n SubPlan 1\n -> Index Scan using orderstotrans_transid_key on orderstotrans\n (cost=0.57..8.59 rows=1 width=8)\n Index Cond: (transid = '546111'::bigint)\n(5 rows)\n=====\n\n\nTables:\n=====\n Table \"test.orders\"\n Column | Type | Collation | Nullable\n| Default\n----------------------+-----------------------------+-----------+----------+--------------\n orderid | bigint | | not null |\n istest | smallint | | not null\n| 0\n orderstatusid | integer | | |\n customername | text | | |\n customeraddress | text | | |\n customercountry | text | |\n |\n customercity | text | |\n |\n customerstate | text | |\n |\n customerzip | text | | |\n \"orders_pkey\" PRIMARY KEY, btree (orderid)\n\n Table \"test.orderstotrans\"\n Column | Type | Collation | Nullable | Default\n-------------+---------+-----------+----------+---------\n orderid | bigint | | |\n transid | bigint | | |\n orderitemid | integer | | |\nIndexes:\n \"orderstotrans_orderid_idx\" btree (orderid)\n \"orderstotrans_orderitemid_idx\" btree (orderitemid)\n \"orderstotrans_transid_key\" UNIQUE, btree (transid)\n\n\nHappier plan for the union version:\n====\nexplain select orders.orderid FROM orders\nWHERE (\norders.orderid IN (\n SELECT '3131275553'\n UNION\n select orderid FROM orderstotrans WHERE (transid IN ('3131275553'))\n)\n);\n\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------\n 
Nested Loop (cost=9.21..21.84 rows=2 width=8) (actual time=0.034..0.043\nrows=1 loops=1)\n -> Unique (cost=8.64..8.65 rows=2 width=8) (actual time=0.024..0.026\nrows=2 loops=1)\n -> Sort (cost=8.64..8.64 rows=2 width=8) (actual\ntime=0.023..0.024 rows=2 loops=1)\n Sort Key: ('3131275553'::bigint)\n Sort Method: quicksort Memory: 25kB\n -> Append (cost=0.00..8.63 rows=2 width=8) (actual\ntime=0.001..0.019 rows=2 loops=1)\n -> Result (cost=0.00..0.01 rows=1 width=8) (actual\ntime=0.001..0.001 rows=1 loops=1)\n -> Index Scan using orderstotrans_transid_key on\norderstotrans (cost=0.57..8.59 rows=1 width=8) (actual time=0.015..0.016\nrows=1 loops=1)\n Index Cond: (transid = '3131275553'::bigint)\n -> Index Only Scan using orders_pkey on orders (cost=0.57..6.58 rows=1\nwidth=8) (actual time=0.007..0.007 rows=0 loops=2)\n Index Cond: (orderid = ('3131275553'::bigint))\n Heap Fetches: 0\n Planning Time: 0.165 ms\n Execution Time: 0.065 ms\n(14 rows)\n====\n(though that plan is a bit misleading, as that index condition isn't\nexactly what is used, ie with:\n\nselect orders.orderid FROM orders\nWHERE (\norders.orderid IN (\n SELECT '3131275553'\n UNION\n select orderid FROM orderstotrans WHERE (transid IN ('3131275553'))\n)\n);\n orderid\n-----------\n 439155713\n(1 row)\n\nthe orderid it matches, isn't the one the planner showed, but it works)\n\nHi there,I'm wondering if anyone has any insight into what might make the database choose a sequential scan for a query (table defs and plan below) like :SELECT orders.orderid FROM orders WHERE (orders.orderid IN ('546111') OR orders.orderid IN (select orderid FROM orderstotrans WHERE (transid IN ('546111'))));I have a couple of environments, all on Postgresql 13.7 and:- on one the query executes with an sequential scan on the orders table - on the other sequential scan on an index (ie walks index and filters, rather than looking up ids on the index as an index condition.)Plan and tables are below, but it seems to me that the 
planner knows the subplan is going to return 1 row (max) and should \"know\" that there is a max of 2 IDs to look up an indexes would be faster than a sequential scan (of either table or index) and filter. I've tried re analyzing to make sure stats are good and it hasn't helpedI can get a good plan that does use the index efficiently by using a union, eg:select orders.orderid FROM ordersWHERE (orders.orderid IN ( SELECT '546111' UNION SELECT orderid FROM orderstotrans WHERE (transid IN ('546111'))));but I want to understand what warning signs I should be aware of with the original query that put it on the path of a bad plan, so I don't do it again.Plan - seq scan of table:=====> explain select orders.orderid FROM orders WHERE (orders.orderid IN ('546111') OR orders.orderid IN (select orderid FROM orderstotrans WHERE (transid IN ('546111')))); QUERY PLAN ------------------------------------------------------------------------------------------------------- Seq Scan on orders (cost=8.45..486270.87 rows=4302781 width=8) Filter: ((orderid = '546111'::bigint) OR (hashed SubPlan 1)) SubPlan 1 -> Index Scan using orderstotrans_transid_key on orderstotrans (cost=0.43..8.45 rows=1 width=8) Index Cond: (transid = '546111'::bigint)(5 rows)=====Plan - Seq scan and filter of index:=====> explain select orders.orderid FROM orders WHERE (orders.orderid IN ('546111') OR orders.orderid IN (select orderid FROM orderstotrans WHERE (transid IN ('546111')))); QUERY PLAN ------------------------------------------------------------------------------------------------------- Index Only Scan using orders_pkey on orders (cost=9.16..4067888.60 rows=64760840 width=8) Filter: ((orderid = '546111'::bigint) OR (hashed SubPlan 1)) SubPlan 1 -> Index Scan using orderstotrans_transid_key on orderstotrans (cost=0.57..8.59 rows=1 width=8) Index Cond: (transid = '546111'::bigint)(5 rows)=====Tables:===== Table \"test.orders\" Column | Type | Collation | Nullable | Default 
----------------------+-----------------------------+-----------+----------+-------------- orderid | bigint | | not null | istest | smallint | | not null | 0 orderstatusid | integer | | | customername | text | | | customeraddress | text | | | customercountry | text | | | customercity | text | | | customerstate | text | | | customerzip | text | | | \"orders_pkey\" PRIMARY KEY, btree (orderid) Table \"test.orderstotrans\" Column | Type | Collation | Nullable | Default -------------+---------+-----------+----------+--------- orderid | bigint | | | transid | bigint | | | orderitemid | integer | | | Indexes: \"orderstotrans_orderid_idx\" btree (orderid) \"orderstotrans_orderitemid_idx\" btree (orderitemid) \"orderstotrans_transid_key\" UNIQUE, btree (transid)Happier plan for the union version:====explain select orders.orderid FROM ordersWHERE (orders.orderid IN ( SELECT '3131275553' UNION select orderid FROM orderstotrans WHERE (transid IN ('3131275553')))); QUERY PLAN ----------------------------------------------------------------------------------------------------------------------------------------------------------------- Nested Loop (cost=9.21..21.84 rows=2 width=8) (actual time=0.034..0.043 rows=1 loops=1) -> Unique (cost=8.64..8.65 rows=2 width=8) (actual time=0.024..0.026 rows=2 loops=1) -> Sort (cost=8.64..8.64 rows=2 width=8) (actual time=0.023..0.024 rows=2 loops=1) Sort Key: ('3131275553'::bigint) Sort Method: quicksort Memory: 25kB -> Append (cost=0.00..8.63 rows=2 width=8) (actual time=0.001..0.019 rows=2 loops=1) -> Result (cost=0.00..0.01 rows=1 width=8) (actual time=0.001..0.001 rows=1 loops=1) -> Index Scan using orderstotrans_transid_key on orderstotrans (cost=0.57..8.59 rows=1 width=8) (actual time=0.015..0.016 rows=1 loops=1) Index Cond: (transid = '3131275553'::bigint) -> Index Only Scan using orders_pkey on orders (cost=0.57..6.58 rows=1 width=8) (actual time=0.007..0.007 rows=0 loops=2) Index Cond: (orderid = ('3131275553'::bigint)) Heap 
Fetches: 0 Planning Time: 0.165 ms Execution Time: 0.065 ms(14 rows)====(though that plan is a bit misleading, as that index condition isn't exactly what is used, ie with:select orders.orderid FROM ordersWHERE ( orders.orderid IN ( SELECT '3131275553' UNION select orderid FROM orderstotrans WHERE (transid IN ('3131275553'))) ); orderid ----------- 439155713(1 row)the orderid it matches, isn't the one the planner showed, but it works)",
"msg_date": "Fri, 2 Dec 2022 11:52:19 +1100",
"msg_from": "Paul McGarry <paul@paulmcgarry.com>",
"msg_from_op": true,
"msg_subject": "Odd Choice of seq scan"
},
{
"msg_contents": "On Fri, Dec 02, 2022 at 11:52:19AM +1100, Paul McGarry wrote:\n> Hi there,\n> \n> I'm wondering if anyone has any insight into what might make the database\n> choose a sequential scan for a query (table defs and plan below) like :\n\n> Plan - seq scan of table:\n> =====\n> > explain select orders.orderid FROM orders WHERE (orders.orderid IN ('546111') OR orders.orderid IN (select orderid FROM orderstotrans WHERE (transid IN ('546111'))));\n\n> Plan - Seq scan and filter of index:\n> =====\n> > explain select orders.orderid FROM orders WHERE (orders.orderid IN ('546111') OR orders.orderid IN (select orderid FROM orderstotrans WHERE (transid IN ('546111'))));\n\nCould you show explain analyze ?\n\nShow the size of the table and its indexes \nAnd GUC settings\nAnd the \"statistics\" here:\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions#Statistics:_n_distinct.2C_MCV.2C_histogram\nMaybe on both a well-behaving instance and a badly-behaving instance.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 1 Dec 2022 19:21:24 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Odd Choice of seq scan"
},
{
"msg_contents": "On Thu, Dec 1, 2022 at 8:21 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Could you show explain analyze ?\n>\n> Maybe on both a well-behaving instance and a badly-beving instance.\n\nApologies for barging into this thread with a potentially unrelated\n\"me too\" but here's a similar OR-causes-seqscan from 2018:\nhttps://www.postgresql.org/message-id/CAPhHnhpc6bdGbRBa9hG7FQiKByVqR3s37VoY64DSMUxjeJGOjQ%40mail.gmail.com\n\nI don't have other versions handy but can confirm that the problem\nexists on Postgres 11.17 (dated but newer than the 10.1 in that post).\n\nWe've been working around the problem by rewriting queries to use UNION instead.\n\n\n",
"msg_date": "Fri, 2 Dec 2022 00:37:50 -0500",
"msg_from": "Ronuk Raval <ronuk.raval@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Odd Choice of seq scan"
},
{
"msg_contents": "Ronuk Raval <ronuk.raval@gmail.com> writes:\n> We've been working around the problem by rewriting queries to use UNION instead.\n\nYeah, that. The real issue here is that the seqscan and indexscan plans\nboth suck, because they will both run that sub-select for every row\nin the table. The index-only plan might fetch fewer blocks along the\nway, because it only has to read the index not the table proper ...\nbut that's only true if the table's pages are mostly marked all-visible.\n(My bet about the plan instability is that the planner might choose\ndifferently depending on how much of the table it believes is\nall-visible.) That only helps a bit, though.\n\nWhat you really want to have happen, assuming there are not too many\ninteresting orderid values, is to do a point indexscan for each one\nof them. Currently the planner won't think of that by itself when\nfaced with OR'd conditions in WHERE. You have to help it along with\nUNION or some similar locution.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 02 Dec 2022 01:16:10 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Odd Choice of seq scan"
},
{
"msg_contents": "On Fri, 2 Dec 2022 at 12:21, Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> Could you show explain analyze ?\n>\n> Show the size of the table and its indexes\n> And GUC settings\n> And the \"statistics\" here:\n>\n> https://wiki.postgresql.org/wiki/Slow_Query_Questions#Statistics:_n_distinct.2C_MCV.2C_histogram\n> Maybe on both a well-behaving instance and a badly-beving instance.\n>\n>\nAnalyzes below, but they are both \"badly\" behaved and the plans. The index\nscan is presumably a marginally better option when the magic that allows\n\"index only\" lines up, but it's still a scan of the whole index rather than\nan index lookup. Both plans fetch \"everything\" and then filter out all but\nthe 0 to 2 rows that match.\n\nIn my head the stats should be simple, as\n1) The\n \"orderstotrans_transid_key\" UNIQUE, btree (transid)\nmeans the subquery can return at most one order_id when I look up by\ntrans_id (and the query plan does seem to know that, ie says rows=1)\n2) The other OR'd clause is exactly one order_id.\n\nSo the worst case scenario is effectively the same as:\nselect orders.orderid FROM orders WHERE (orders.orderid IN\n('546111','436345353'));\nwhich would be:\n====\n Index Only Scan using orders_pkey on orders (cost=0.57..13.17 rows=2\nwidth=8) (actual time=0.038..0.039 rows=1 loops=1)\n Index Cond: (orderid = ANY ('{546111,436345353}'::bigint[]))\n====\nie \"Index Cond\" rather than \"filter\"\n\nAnyway, maybe that insight is more naturally obvious to a human than\nsomething the planner can determine cheaply and easily.\n\nThe alternate \"union\" phrasing of the query works and as Ronuk and Tom said\nin other replies (thanks) seems to be the way to go and for now at least I\njust need to remember that ORs like this don't help the planner and should\nbe avoided.\n\nThanks all.\n\n\n====\n explain analyze\n select orders.orderid FROM orders WHERE (orders.orderid IN ('546111') OR\norders.orderid IN (select orderid FROM orderstotrans 
"WHERE (transid IN\n('546111'))));\n\n=====\n QUERY\nPLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on orders (cost=8.45..486499.59 rows=4304805 width=8) (actual\ntime=9623.981..20796.568 rows=1 loops=1)\n Filter: ((orderid = '546111'::bigint) OR (hashed SubPlan 1))\n Rows Removed by Filter: 8615097\n SubPlan 1\n -> Index Scan using orderstotrans_transid_key on orderstotrans\n (cost=0.43..8.45 rows=1 width=8) (actual time=1.105..1.105 rows=0 loops=1)\n Index Cond: (transid = '546111'::bigint)\n Planning Time: 0.199 ms\n Execution Time: 20796.613 ms\n====\n\n QUERY\nPLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------\n Index Only Scan using orders_pkey on orders (cost=9.16..4070119.84\nrows=64770768 width=8) (actual time=21011.157..21011.158 rows=0 loops=1)\n Filter: ((orderid = '546111'::bigint) OR (hashed SubPlan 1))\n Rows Removed by Filter: 130888763\n Heap Fetches: 3171118\n SubPlan 1\n -> Index Scan using orderstotrans_transid_key on orderstotrans\n (cost=0.57..8.59 rows=1 width=8) (actual time=1.113..1.113 rows=0 loops=1)\n Index Cond: (transid = '546111'::bigint)\n Planning Time: 0.875 ms\n Execution Time: 21011.224 ms",
"msg_date": "Fri, 2 Dec 2022 19:04:25 +1100",
"msg_from": "Paul McGarry <paul@paulmcgarry.com>",
"msg_from_op": true,
"msg_subject": "Re: Odd Choice of seq scan"
},
{
"msg_contents": "I don’t have a database running the versions you are, but what I’ve had to do to get around things like this is to write the query something like this:\n\nWITH orderids AS (\nSELECT ‘546111’ AS orderid\nUNION\nSELECT orderid\n FROM orderstotrans\n WHERE transid IN ('546111')\n)\nselect orders.orderid\n FROM orderids\n JOIN orders USING (orderid);\n\nHope this helps your situation.\n\nThanks,\n\n\nChris Hoover\nSenior DBA\nAWeber.com\nCell: (803) 528-2269\nEmail: chrish@aweber.com\n\n\n\n> On Dec 1, 2022, at 7:52 PM, Paul McGarry <paul@paulmcgarry.com> wrote:\n> \n> Hi there,\n> \n> I'm wondering if anyone has any insight into what might make the database choose a sequential scan for a query (table defs and plan below) like :\n> \n> SELECT orders.orderid FROM orders \n> WHERE (\n> orders.orderid IN ('546111') \n> OR \n> orders.orderid IN (select orderid FROM orderstotrans WHERE (transid IN ('546111')))\n> );\n> \n> I have a couple of environments, all on Postgresql 13.7 and:\n> - on one the query executes with an sequential scan on the orders table \n> - on the other sequential scan on an index (ie walks index and filters, rather than looking up ids on the index as an index condition.)\n> \n> Plan and tables are below, but it seems to me that the planner knows the subplan is going to return 1 row (max) and should \"know\" that there is a max of 2 IDs to look up an indexes would be faster than a sequential scan (of either table or index) and filter. 
I've tried re analyzing to make sure stats are good and it hasn't helped\n> \n> I can get a good plan that does use the index efficiently by using a union, eg:\n> \n> select orders.orderid FROM orders\n> WHERE (\n> orders.orderid IN (\n> SELECT '546111'\n> UNION\n> SELECT orderid FROM orderstotrans WHERE (transid IN ('546111'))\n> )\n> );\n> \n> but I want to understand what warning signs I should be aware of with the original query that put it on the path of a bad plan, so I don't do it again.\n> \n> \n> Plan - seq scan of table:\n> =====\n> > explain \n> select orders.orderid FROM orders WHERE (orders.orderid IN ('546111') OR orders.orderid IN (select orderid FROM orderstotrans WHERE (transid IN ('546111'))));\n> \n> QUERY PLAN \n> -------------------------------------------------------------------------------------------------------\n> Seq Scan on orders (cost=8.45..486270.87 rows=4302781 width=8)\n> Filter: ((orderid = '546111'::bigint) OR (hashed SubPlan 1))\n> SubPlan 1\n> -> Index Scan using orderstotrans_transid_key on orderstotrans (cost=0.43..8.45 rows=1 width=8)\n> Index Cond: (transid = '546111'::bigint)\n> (5 rows)\n> =====\n> \n> Plan - Seq scan and filter of index:\n> =====\n> > explain select orders.orderid FROM orders WHERE (orders.orderid IN ('546111') OR orders.orderid IN (select orderid FROM orderstotrans WHERE (transid IN ('546111'))));\n> QUERY PLAN \n> -------------------------------------------------------------------------------------------------------\n> Index Only Scan using orders_pkey on orders (cost=9.16..4067888.60 rows=64760840 width=8)\n> Filter: ((orderid = '546111'::bigint) OR (hashed SubPlan 1))\n> SubPlan 1\n> -> Index Scan using orderstotrans_transid_key on orderstotrans (cost=0.57..8.59 rows=1 width=8)\n> Index Cond: (transid = '546111'::bigint)\n> (5 rows)\n> =====\n> \n> \n> Tables:\n> =====\n> Table \"test.orders\"\n> Column | Type | Collation | Nullable | Default \n> 
----------------------+-----------------------------+-----------+----------+--------------\n> orderid | bigint | | not null |\n> istest | smallint | | not null | 0\n> orderstatusid | integer | | |\n> customername | text | | |\n> customeraddress | text | | |\n> customercountry | text | | | \n> customercity | text | | | \n> customerstate | text | | | \n> customerzip | text | | |\n> \"orders_pkey\" PRIMARY KEY, btree (orderid)\n> \n> Table \"test.orderstotrans\"\n> Column | Type | Collation | Nullable | Default \n> -------------+---------+-----------+----------+---------\n> orderid | bigint | | | \n> transid | bigint | | | \n> orderitemid | integer | | | \n> Indexes:\n> \"orderstotrans_orderid_idx\" btree (orderid)\n> \"orderstotrans_orderitemid_idx\" btree (orderitemid)\n> \"orderstotrans_transid_key\" UNIQUE, btree (transid)\n> \n> \n> Happier plan for the union version:\n> ====\n> explain select orders.orderid FROM orders\n> WHERE (\n> orders.orderid IN (\n> SELECT '3131275553'\n> UNION\n> select orderid FROM orderstotrans WHERE (transid IN ('3131275553'))\n> )\n> );\n> QUERY PLAN \n> -----------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Nested Loop (cost=9.21..21.84 rows=2 width=8) (actual time=0.034..0.043 rows=1 loops=1)\n> -> Unique (cost=8.64..8.65 rows=2 width=8) (actual time=0.024..0.026 rows=2 loops=1)\n> -> Sort (cost=8.64..8.64 rows=2 width=8) (actual time=0.023..0.024 rows=2 loops=1)\n> Sort Key: ('3131275553'::bigint)\n> Sort Method: quicksort Memory: 25kB\n> -> Append (cost=0.00..8.63 rows=2 width=8) (actual time=0.001..0.019 rows=2 loops=1)\n> -> Result (cost=0.00..0.01 rows=1 width=8) (actual time=0.001..0.001 rows=1 loops=1)\n> -> Index Scan using orderstotrans_transid_key on orderstotrans (cost=0.57..8.59 rows=1 width=8) (actual time=0.015..0.016 rows=1 loops=1)\n> Index Cond: (transid = '3131275553'::bigint)\n> -> Index Only Scan using 
orders_pkey on orders (cost=0.57..6.58 rows=1 width=8) (actual time=0.007..0.007 rows=0 loops=2)\n> Index Cond: (orderid = ('3131275553'::bigint))\n> Heap Fetches: 0\n> Planning Time: 0.165 ms\n> Execution Time: 0.065 ms\n> (14 rows)\n> ====\n> (though that plan is a bit misleading, as that index condition isn't exactly what is used, ie with:\n> \n> select orders.orderid FROM orders\n> WHERE ( \n> orders.orderid IN (\n> SELECT '3131275553'\n> UNION \n> select orderid FROM orderstotrans WHERE (transid IN ('3131275553'))\n> ) \n> );\n> orderid \n> -----------\n> 439155713\n> (1 row)\n> \n> the orderid it matches, isn't the one the planner showed, but it works)\n> \n\n\n\n",
"msg_date": "Fri, 2 Dec 2022 14:35:19 -0500",
"msg_from": "Chris Hoover <chrish@aweber.com>",
"msg_from_op": false,
"msg_subject": "Re: Odd Choice of seq scan"
}
] |
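Editor's note — a condensed SQL sketch of the rewrite this thread converges on (the UNION workaround suggested by Tom Lane and Ronuk Raval), using the table names from the posted schema. The explicit `::bigint` cast is an addition here; exact plans and costs will vary by installation:

```sql
-- Original form: the hashed subplan becomes a Filter evaluated against
-- every row, so both the seqscan and the index-only scan visit the whole
-- orders table.
SELECT orders.orderid
FROM orders
WHERE orders.orderid IN ('546111')
   OR orders.orderid IN (SELECT orderid FROM orderstotrans
                         WHERE transid IN ('546111'));

-- UNION rewrite: the candidate orderids are computed once (at most two
-- rows here, since orderstotrans_transid_key is unique), then each one is
-- looked up via orders_pkey as an Index Cond instead of a Filter.
SELECT orders.orderid
FROM orders
WHERE orders.orderid IN (
        SELECT '546111'::bigint
        UNION
        SELECT orderid FROM orderstotrans WHERE transid IN ('546111')
      );
```

Because the UNION arm yields at most two candidate ids, the outer lookup can become a pair of point index scans rather than a whole-table (or whole-index) filter.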
[
{
"msg_contents": "Hi,\n It's a prepared sql statement on a non-partitioned table with 16 million\ntuples and multiple indexes. pk_xxxxx is the primary\nkey (aid,bid,btype), all 3 cols are bigint, and there is another index\nidx_xxxxx(starttime,endtime), both cols \"timestamp(0) without time\nzone\".\n the data distribution is skewed, not even. For the first 5 executions\nwith a custom_plan, the optimizer chooses the primary key, but when it\nstarts building a generic plan it chooses another index, idx_xxxx, and the\ngeneric plan makes significantly different row and cost estimates.\n below is the sql; sensitive info is masked here (tablename,\ncolumnname).\n\n --with custom_plan\n Update on xxxxx (cost=0.56..8.60 rows=1 width=2923) (actual\ntime=0.030..0.031 rows=0 loops=1)\n Buffers: shared hit=4\n -> Index Scan using pk_xxxxx on xxxxxxx (cost=0.56..8.60 rows=1\nwidth=2923) (actual time=0.028..0.028 rows=0 loops=1)\n Index Cond: ((aid = '14654072'::bigint) AND (bid =\n'243379969878556159'::bigint) AND (btype = '0'::bigint))\n Filter: ((password IS NULL) AND ...) AND (starttime = '2071-12-31\n00:00:00'::timestamp without time zone) AND (endtime = '2072-01-01\n00:00:00'::timestamp without time zone) AND (opentime = '2022-11-07 09:\n40:26'::timestamp without time zone)\n Buffers: shared hit=4\n Planning Time: 1.575 ms\n Execution Time: 0.123 ms\n\n --after 5 executions, it starts to build a generic plan and thinks\nthe generic plan cost=0.44..8.48 is less than the custom plan, so it\nchooses the generic plan for the following sql executions,\n Update on xxxxx (cost=0.44..8.48 rows=1 width=2923) (actual\ntime=8136.243..8136.245 rows=0 loops=1)\n Buffers: shared hit=1284549\n -> Index Scan using idx_xxxxx_time on xxxxx (cost=0.44..8.48 rows=1\nwidth=2923) (actual time=8136.242..8136.242 rows=0 loops=1)\n Index Cond: ((starttime = $7) AND (endtime = $8))\n Filter: ((password IS NULL) AND ...(aid = $4) AND (bid = $5) AND\n(btype = $6) AND...\n Rows Removed by Filter: 5534630\n Buffers: shared hit=1284549\n Planning Time: 0.754 ms\n Execution Time: 8136.302 ms\n\n as a workaround, I removed the \"starttime\" and \"endtime\" stats tuples\nfrom pg_statistic, so the optimizer uses a DEFAULT value for the NULL stats\ntuple and the idx_xxxxx index_path cost > the primary key index_path cost,\nfollowing the eqsel function logic, postgres/selfuncs.c at REL_13_STABLE ·\npostgres/postgres · GitHub\n<https://github.com/postgres/postgres/blob/REL_13_STABLE/src/backend/utils/adt/selfuncs.c>\n the optimizer is very complicated; could you explain how the optimizer\ndoes selectivity estimation when building a generic plan, for this case?\nFor a custom_plan, the optimizer knows the boundparams values, but for a\ngeneric_plan, planner() uses boundparams=NULL; does it try to calculate an\naverage value based on the mcv list of the index attributes\n(starttime,endtime)?\n please see the attachment for sql details and the pg_stats tuples for\nthe index attributes.\n\nThanks,\n\nJames",
"msg_date": "Tue, 6 Dec 2022 10:28:11 +0800",
"msg_from": "James Pang <jamespang886@gmail.com>",
"msg_from_op": true,
"msg_subject": "wrong rows and cost estimation when generic plan"
},
{
"msg_contents": "Hi,\r\n It's a prepared sql statement on a non-partitioned table , 16millions tuples and multiple indexes on this table. pk_xxxxx primary key (aid,bid,btype) all 3 cols are bigint datatype, there is another index idx_xxxxx(starttime,endtime) , both cols are \"timestamp(0) without time zone\".\r\n the data distribution is skewed, not even. with first 5 times execution custom_plan, optimizer choose primary key, but when it start building generic plan and choose another index idx_xxxx, obviously generic plan make significant different rows and cost estimation.\r\n below is the sql , sensitive info got masked here (tablename, columnname) .\r\n\r\n --with custom_plan\r\n Update on xxxxx (cost=0.56..8.60 rows=1 width=2923) (actual time=0.030..0.031 rows=0 loops=1)\r\n Buffers: shared hit=4\r\n -> Index Scan using pk_xxxxx on xxxxxxx (cost=0.56..8.60 rows=1 width=2923) (actual time=0.028..0.028 rows=0 loops=1)\r\n Index Cond: ((aid = '14654072'::bigint) AND (bid = '243379969878556159'::bigint) AND (btype = '0'::bigint))\r\n Filter: ((password IS NULL) AND ...) 
AND (starttime = '2071-12-31 00:00:00'::timestamp without time zone) AND (endtime = '2072-01-01 00:00:00'::timestamp without time zone) AND (opentime = '2022-11-07 09:\r\n40:26'::timestamp without time zone)\r\n Buffers: shared hit=4\r\n Planning Time: 1.575 ms\r\n Execution Time: 0.123 ms\r\n\r\n --after 5 times execution, it start to build generic plan and thought generic plan cost=0.44..8.48 that less than the customer plan ,so it choose generic plan for following sql executions,\r\n Update on xxxxx (cost=0.44..8.48 rows=1 width=2923) (actual time=8136.243..8136.245 rows=0 loops=1)\r\n Buffers: shared hit=1284549\r\n -> Index Scan using idx_xxxxx_time on xxxxx (cost=0.44..8.48 rows=1 width=2923) (actual time=8136.242..8136.242 rows=0 loops=1)\r\n Index Cond: ((starttime = $7) AND (endtime = $8))\r\n Filter: ((password IS NULL) AND ...(aid = $4) AND (bid = $5) AND (btype = $6) AND...\r\n Rows Removed by Filter: 5534630\r\n Buffers: shared hit=1284549\r\n Planning Time: 0.754 ms\r\n Execution Time: 8136.302 ms\r\n\r\n as a workaround, I remove \"starttime\" and \"endtime\" stats tuple from pg_statistic, and optimizer use a DEFAULT value with NULL stats tuple so that index_path cost > the primary key index_path cost, following eqsel function logic, postgres/selfuncs.c at REL_13_STABLE · postgres/postgres · GitHub<https://github.com/postgres/postgres/blob/REL_13_STABLE/src/backend/utils/adt/selfuncs.c>\r\n optimzer is very complicated, could you direct me how optimizer to do selectivity estimation when building generic plan, for this case? 
for custom_plan, optimizer knows boundparams values, but when generic_plan, planner() use boundparams=NULL, it try to calculate average value based on mcv list of the index attributes (starttime,endtime) ?\r\n please check attached about sql details and pg_stats tuple for the index attributes.\r\n\r\nThanks,\r\n\r\nJames",
"msg_date": "Tue, 6 Dec 2022 05:28:33 +0000",
"msg_from": "\"James Pang (chaolpan)\" <chaolpan@cisco.com>",
"msg_from_op": false,
"msg_subject": "RE: wrong rows and cost estimation when generic plan"
},
{
"msg_contents": "On Tue, 6 Dec 2022 at 18:28, James Pang (chaolpan) <chaolpan@cisco.com> wrote:\n> -> Index Scan using idx_xxxxx_time on xxxxx (cost=0.44..8.48 rows=1 width=2923) (actual time=8136.242..8136.242 rows=0 loops=1)\n> Index Cond: ((starttime = $7) AND (endtime = $8))\n> Filter: ((password IS NULL) AND ...(aid = $4) AND (bid = $5) AND (btype = $6) AND...\n> Rows Removed by Filter: 5534630\n\nI wonder if you did:\n\ncreate statistics xxxxx_starttime_endtime_stats (ndistinct) on\nstarttime,endtime from xxxxx;\nanalyze xxxxx;\n\nif the planner would come up with a higher estimate than what it's\ngetting for the above and cause it to use the other index instead.\n\n> optimzer is very complicated, could you direct me how optimizer to do selectivity estimation when building generic plan, for this case? for custom_plan, optimizer knows boundparams values, but when generic_plan, planner() use boundparams=NULL, it try to calculate average value based on mcv list of the index attributes (starttime,endtime) ?\n\nIIRC, generic plan estimates become based on distinct estimations\nrather than histograms or MCVs.\n\nDavid\n\n\n",
"msg_date": "Tue, 6 Dec 2022 18:59:11 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: wrong rows and cost estimation when generic plan"
},
{
"msg_contents": " No, there are no extended statistics (ndistinct) on (starttime,endtime); the index is on \"starttime,endtime\" with default analyze. I tested increasing statistics_target but that did not help. Could you provide the function name for generic plan selectivity estimation? \r\n\r\nThanks,\r\n\r\nJames\r\n\r\n-----Original Message-----\r\nFrom: David Rowley <dgrowleyml@gmail.com> \r\nSent: Tuesday, December 6, 2022 1:59 PM\r\nTo: James Pang (chaolpan) <chaolpan@cisco.com>\r\nCc: pgsql-performance@lists.postgresql.org; jamespang886@gmail.com\r\nSubject: Re: wrong rows and cost estimation when generic plan\r\n\r\nOn Tue, 6 Dec 2022 at 18:28, James Pang (chaolpan) <chaolpan@cisco.com> wrote:\r\n> -> Index Scan using idx_xxxxx_time on xxxxx (cost=0.44..8.48 rows=1 width=2923) (actual time=8136.242..8136.242 rows=0 loops=1)\r\n> Index Cond: ((starttime = $7) AND (endtime = $8))\r\n> Filter: ((password IS NULL) AND ...(aid = $4) AND (bid = $5) AND (btype = $6) AND...\r\n> Rows Removed by Filter: 5534630\r\n\r\nI wonder if you did:\r\n\r\ncreate statistics xxxxx_starttime_endtime_stats (ndistinct) on starttime,endtime from xxxxx; analyze xxxxx;\r\n\r\nif the planner would come up with a higher estimate than what it's getting for the above and cause it to use the other index instead.\r\n\r\n> optimzer is very complicated, could you direct me how optimizer to do selectivity estimation when building generic plan, for this case? for custom_plan, optimizer knows boundparams values, but when generic_plan, planner() use boundparams=NULL, it try to calculate average value based on mcv list of the index attributes (starttime,endtime) ?\r\n\r\nIIRC, generic plan estimates become based on distinct estimations rather than histograms or MCVs.\r\n\r\nDavid\r\n",
"msg_date": "Tue, 6 Dec 2022 07:16:55 +0000",
"msg_from": "\"James Pang (chaolpan)\" <chaolpan@cisco.com>",
"msg_from_op": false,
"msg_subject": "RE: wrong rows and cost estimation when generic plan"
},
{
"msg_contents": "On Tue, 6 Dec 2022 at 20:17, James Pang (chaolpan) <chaolpan@cisco.com> wrote:\n> Could you provide the function name for generic plan selectivity estimation?\n\nIf you look at eqsel_internal(), you'll see there are two functions\nthat it'll call var_eq_const() for Consts and otherwise\nvar_eq_non_const(). It'll take the non-Const path for planning generic\nplans.\n\nDavid\n\n\n",
"msg_date": "Tue, 6 Dec 2022 22:04:21 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: wrong rows and cost estimation when generic plan"
}
] |
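Editor's note — a short SQL sketch of two remedies relevant to this thread: the extended-statistics object from David Rowley's reply (the masked table/column names are kept as posted), plus the stock `plan_cache_mode` setting, which is not mentioned above but has existed since PostgreSQL 12 and targets exactly this generic-plan symptom:

```sql
-- Give the planner a joint ndistinct estimate for the two indexed
-- columns, so the generic-plan selectivity for (starttime = $7 AND
-- endtime = $8) -- roughly 1/ndistinct, per the reply above -- is less
-- likely to make idx_xxxxx_time look artificially cheap.
CREATE STATISTICS xxxxx_starttime_endtime_stats (ndistinct)
    ON starttime, endtime FROM xxxxx;
ANALYZE xxxxx;

-- Alternatively, keep planning every execution with the real bound
-- parameter values instead of ever switching to a generic plan:
SET plan_cache_mode = force_custom_plan;   -- PostgreSQL 12+
```

Forcing custom plans trades a little planning time per execution for estimates based on the actual parameter values (MCVs/histograms) rather than ndistinct averages.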
[
{
"msg_contents": "Hi everyone,\nI'm writing to ask about a correlation I was surprised to observe on our\nPSQL machines (particularly read-only standbys) where increasing\n\"shared_buffers\" appears to result in\nincreased pg_stat_database.blk_read_time and CPU iowait, which in turn\nseems to correlate with reduced throughput for our query-heavy services -\ndetails below.\n\nIs this expected, or are there configuration changes we might make to\nimprove the performance at higher \"shared_buffers\" values?\n\nThanks, let me know if I can provide any more info,\nJordan\n\n - Tests and results - public Datadog dashboard here\n <https://p.datadoghq.com/sb/0d34b3451-8bde042f82c012981b94796cdc26e259>,\n screenshot attached:\n - Our beta system (\"endor\") was run with three different\n configurations over the ~30hrs from Dec 11 17:00 to Dec 13 0:00 (UTC)\n - The only changes between these deployments was the \"shared_buffers\"\n parameter for all PSQL instances (machine and configuration\ndetails below).\n - \"shared_buffers\" = \"4000MB\" - from Dec 10 19:00 to Dec 11 20:00\n UTC\n - \"shared_buffers\" = \"8000MB\" - from Dec 11 21:00 to Dec 12 13:30\n UTC\n - \"shared_buffers\" = \"14000MB\" - from Dec 12, 14:30 to Dec 13,\n 0:00 UTC\n - The datadog dashboard\n <https://p.datadoghq.com/sb/0d34b3451-8bde042f82c012981b94796cdc26e259>\n shows our results including cpu divided by usage and the cache\nhit vs disk\n read ratio including blk_read_time (additional metrics were enabled at\n about Dec 11, 10am PST)\n - Our most query heavy service is our \"Trends worker\" for which\n the average worker duration is shown in the top-left graph\n - We expect the workload to be relatively constant throughout\n this period, particularly focusing on the standby\ninstances (PQSL2 and\n PSQL3) where all read-only queries should be sent.\n - We see the lowest duration, i.e. 
best performance, most\n consistently with the lowest setting for shared_buffers, \"4000MB\"\n - As we increase shared_buffers we see increased iowait on the\n standby instances (PSQL2 and PSQL3) and increased blk_read_time\n (per pg_stat_database), in the bottom-most graphs as \"blks_read_time\".\n - Even though we also see a higher ratio of cache hits on those\n instances. Our graphs show the per second change\n in pg_stat_database.blks_read and blks_hit (as \"all_hit/s\" and\n \"all_read/s\") and pg_statio_user_tables.heap_blks_read,\nheap_blks_hit,\n idx_blks_read, and idx_blks_hit\n - Cluster contains 3 PSQL nodes, all on AWS EC2 instances,\n postgresql.conf attached\n - Version: psql (PostgreSQL) 14.1\n - Machine:\n - AWS \"c6gd.4xlarge\" (32GB RAM, 16 core 2.5 GHz, local storage 950\n GB ssd)\n - uname -a: Linux ip-172-30-64-110 5.4.0-1038-aws #40-Ubuntu SMP\n Fri Feb 5 23:53:34 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux\n - Replication via WAL:\n - Line configuration: PSQL1 (master), PSQL1 followed by PSQL2,\n PSQL2 followed by PSQL3\n - Managed by repmgr (version: repmgr 5.3.0), no failovers observed\n during timeframe of interest\n - Load balancing:\n - Managed by PGPool-II (version: 4.3.2 (tamahomeboshi)) on 3 AWS\n instances\n - All write queries go to master. All read-only queries go to\n standbys unless WAL on standby > 10MB, falling back to read\nfrom master as\n last resort",
"msg_date": "Mon, 12 Dec 2022 16:29:05 -0800",
"msg_from": "Jordan Hurwich <jhurwich@pulsasensors.com>",
"msg_from_op": true,
"msg_subject": "Increased iowait and blk_read_time with higher shared_buffers"
},
{
"msg_contents": "Hi Jordan,\n\nIncreased shared buffer size does not necessarily mean an increased\nperformance.\n\nRegarding the negative correlation between IOWait and shared_buffers' size;\nif you don't increase memory of the system, it is an expected result in my\nopinion. Because, PostgreSQL starts reserving a bigger portion of the\nsystem memory, and the OS cache size decreases respectively. Smaller OS\ncache can easily result with more disk access and higher IO demand and\nbigger IOWait.\n\nAs you can see in graphs, when you increase the size of shared_buffers, you\nsee higher block hits and lower block reads. \"hits\" refers to the blocks\nthat are already in shared_buffers. \"reads\" refers to the blocks that are\nnot in shared_buffers and *\"read from* *disk\"*. But, *\"read from disk\"*\nthat you see in PostgreSQL's statistic catalogs doesn't mean all of those\nblocks were read from the disk. PostgreSQL requests data blocks, which are\nnot already in shared_buffers, from the kernel. And, if the requested block\nis in the OS cache, the kernel provides it directly from the memory. No\ndisk access, therefore, happens at all. And, you observe that through lower\ndisk access (I/O) and lower IOWait on your operating system.\n\nWhen you increase size of shared_buffers without increasing amount of the\nsystem memory and with or without decreasing effective_cache_size,\nPostgreSQL considers the possibility of the block to be requested on the\nmemory lower than previous configuration. So, it creates execution plans\nwith less index usages. Less index usage means more sequential scan. More\nsequential scan means more disk read. We already have less OS cache. And\nthe system has to carry out more disk accesses.\n\nAs you can see, they are all connected. 
Setting shared_buffers higher than\na threshold, which varies from database to database, actually decreases\nyour performance.\n\nTo conclude, your results are expected results.\n\nA useful resource to read:\nhttps://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n\n> ..... given the way PostgreSQL also relies on the operating system cache,\n> it's unlikely you'll find using more than 40% of RAM to work better than a\n> smaller amount.\n>\n\nBest regards.\nSamed YILDIRIM\n\n\nOn Tue, 13 Dec 2022 at 02:29, Jordan Hurwich <jhurwich@pulsasensors.com>\nwrote:\n\n> Hi everyone,\n> I'm writing to ask about a correlation I was surprised to observe on our\n> PSQL machines (particularly read-only standbys) where increasing\n> \"shared_buffers\" appears to result in\n> increased pg_stat_database.blk_read_time and CPU iowait, which in turn\n> seems to correlate with reduced throughput for our query-heavy services -\n> details below.\n>\n> Is this expected, or are there configuration changes we might make to\n> improve the performance at higher \"shared_buffers\" values?\n>\n> Thanks, let me know if I can provide any more info,\n> Jordan\n>\n> - Tests and results - public Datadog dashboard here\n> <https://p.datadoghq.com/sb/0d34b3451-8bde042f82c012981b94796cdc26e259>,\n> screenshot attached:\n> - Our beta system (\"endor\") was run with three different\n> configurations over the ~30hrs from Dec 11 17:00 to Dec 13 0:00 (UTC)\n> - The only changes between these deployments was the\n> \"shared_buffers\" parameter for all PSQL instances (machine and\n> configuration details below).\n> - \"shared_buffers\" = \"4000MB\" - from Dec 10 19:00 to Dec 11\n> 20:00 UTC\n> - \"shared_buffers\" = \"8000MB\" - from Dec 11 21:00 to Dec 12\n> 13:30 UTC\n> - \"shared_buffers\" = \"14000MB\" - from Dec 12, 14:30 to Dec 13,\n> 0:00 UTC\n> - The datadog dashboard\n> <https://p.datadoghq.com/sb/0d34b3451-8bde042f82c012981b94796cdc26e259>\n> shows our results including cpu divided by usage 
and the cache hit vs disk\n> read ratio including blk_read_time (additional metrics were enabled at\n> about Dec 11, 10am PST)\n> - Our most query heavy service is our \"Trends worker\" for which\n> the average worker duration is shown in the top-left graph\n> - We expect the workload to be relatively constant throughout\n> this period, particularly focusing on the standby instances (PQSL2 and\n> PSQL3) where all read-only queries should be sent.\n> - We see the lowest duration, i.e. best performance, most\n> consistently with the lowest setting for shared_buffers, \"4000MB\"\n> - As we increase shared_buffers we see increased iowait on the\n> standby instances (PSQL2 and PSQL3) and increased blk_read_time\n> (per pg_stat_database), in the bottom-most graphs as \"blks_read_time\".\n> - Even though we also see a higher ratio of cache hits on\n> those instances. Our graphs show the per second change\n> in pg_stat_database.blks_read abd blks_hit (as \"all_hit/s\" and\n> \"all_read/s\") and pg_statio_user_tables.heap_blks_read, heap_blks_hit,\n> idx_blks_read, and idx_blks_hit\n> - Cluster contains 3 PSQL nodes, all on AWS EC2 instances,\n> postgresql.conf attached\n> - Version: psql (PostgreSQL) 14.1\n> - Machine:\n> - AWS \"c6gd.4xlarge\" (32GB RAM, 16 core 2.5 GHz, local storage\n> 950 GB ssd)\n> - uname -a: Linux ip-172-30-64-110 5.4.0-1038-aws #40-Ubuntu SMP\n> Fri Feb 5 23:53:34 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux\n> - Replication via WAL:\n> - Line configuration: PSQL1 (master), PSQL1 followed by PSQL2,\n> PSQL2 followed by PSQL3\n> - Managed by repmgr (version: repmgr 5.3.0), no failovers\n> observed during timeframe of interest\n> - Load balancing:\n> - Managed by PGPool-II (version: 4.3.2 (tamahomeboshi)) on 3 AWS\n> instances\n> - All write queries go to master. 
All read-only queries go to\n> standbys unless WAL on standby > 10MB, falling back to read from master as\n> last resort\n>\n>",
"msg_date": "Wed, 14 Dec 2022 14:38:10 +0200",
"msg_from": "Samed YILDIRIM <samed@reddoc.net>",
"msg_from_op": false,
"msg_subject": "Re: Increased iowait and blk_read_time with higher shared_buffers"
},
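Samed's distinction between "hit" and "read" blocks boils down to the usual cache-hit-ratio arithmetic over the pg_stat_database counters: blks_hit counts blocks found in shared_buffers, while blks_read counts blocks requested from the kernel (which may still be served from the OS page cache rather than the disk). A minimal sketch of that arithmetic, with made-up counter values:

```python
# Sketch of the cache-hit-ratio arithmetic described above. The counter
# values used below are invented for illustration, not taken from the thread.

def hit_ratio(blks_hit: int, blks_read: int) -> float:
    """Fraction of block requests served directly from shared_buffers."""
    total = blks_hit + blks_read
    return blks_hit / total if total else 0.0

def per_second(prev: int, curr: int, interval_s: float) -> float:
    """Per-second rate between two counter samples, as plotted on the
    dashboard ("all_hit/s", "all_read/s")."""
    return (curr - prev) / interval_s

if __name__ == "__main__":
    print(round(hit_ratio(9_900, 100), 2))  # 0.99
    print(per_second(1_000, 1_600, 60.0))   # 10.0
```

Note that a high hit ratio can coexist with high blk_read_time, exactly as observed in the thread, because the "read" side of the ratio mixes OS-cache hits and true disk reads.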
{
"msg_contents": "Thanks for your thoughtful response Samed.\n\nI'm familiar with the article you linked to, and part of my surprise is\nthat with these 32GB RAM machines we're seeing better performance at 12.5%\n(4GB) than the commonly recommended 25% (8GB) of system memory for\nshared_buffers. Your notes about disk read stats from Postgres potentially\nactually representing blocks read from the OS cache make sense, I just\nimagined that Postgres would be better at managing the memory when it was\ndedicated to it via shared_buffers than the OS (obviously with some point\nof diminishing returns); and I'm still hoping there's some Postgres\nconfiguration change we can make that enables better performance through\nimproved utilization of shared_buffers at the commonly recommended 25% of\nsystem memory.\n\nYou mentioned effective_cache_size, which we currently have set to 16GB\n(50% of system memory). Is it worth us experimenting with that value, if so\nwould you recommend we try reducing it or increasing it? Are there other\nsettings that we might consider to see if we can improve the utilization of\nshared_buffers at higher values like 8GB (25% of system memory)?\n\nOn Wed, Dec 14, 2022 at 4:38 AM Samed YILDIRIM <samed@reddoc.net> wrote:\n\n> Hi Jordan,\n>\n> Increased shared buffer size does not necessarily mean an increased\n> performance.\n>\n> Regarding the negative correlation between IOWait and shared_buffers'\n> size; if you don't increase memory of the system, it is an expected result\n> in my opinion. Because, PostgreSQL starts reserving a bigger portion of the\n> system memory, and the OS cache size decreases respectively. Smaller OS\n> cache can easily result with more disk access and higher IO demand and\n> bigger IOWait.\n>\n> As you can see in graphs, when you increase the size of shared_buffers,\n> you see higher block hits and lower block reads. \"hits\" refers to the\n> blocks that are already in shared_buffers. 
\"reads\" refers to the blocks\n> that are not in shared_buffers and *\"read from* *disk\"*. But, *\"read from\n> disk\"* that you see in PostgreSQL's statistic catalogs doesn't mean all\n> of those blocks were read from the disk. PostgreSQL requests data blocks,\n> which are not already in shared_buffers, from the kernel. And, if the\n> requested block is in the OS cache, the kernel provides it directly from\n> the memory. No disk access, therefore, happens at all. And, you observe\n> that through lower disk access (I/O) and lower IOWait on your operating\n> system.\n>\n> When you increase size of shared_buffers without increasing amount of the\n> system memory and with or without decreasing effective_cache_size,\n> PostgreSQL considers the possibility of the block to be requested on the\n> memory lower than previous configuration. So, it creates execution plans\n> with less index usages. Less index usage means more sequential scan. More\n> sequential scan means more disk read. We already have less OS cache. And\n> the system has to carry out more disk accesses.\n>\n> As you can see, they are all connected. Setting shared_buffers higher than\n> a threshold, which varies from database to database, actually decreases\n> your performance.\n>\n> To conclude, your results are expected results.\n>\n> A useful resource to read:\n> https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n>\n>> ..... 
given the way PostgreSQL also relies on the operating system\n>> cache, it's unlikely you'll find using more than 40% of RAM to work better\n>> than a smaller amount.\n>>\n>\n> Best regards.\n> Samed YILDIRIM\n>\n>\n> On Tue, 13 Dec 2022 at 02:29, Jordan Hurwich <jhurwich@pulsasensors.com>\n> wrote:\n>\n>> Hi everyone,\n>> I'm writing to ask about a correlation I was surprised to observe on our\n>> PSQL machines (particularly read-only standbys) where increasing\n>> \"shared_buffers\" appears to result in\n>> increased pg_stat_database.blk_read_time and CPU iowait, which in turn\n>> seems to correlate with reduced throughput for our query-heavy services -\n>> details below.\n>>\n>> Is this expected, or are there configuration changes we might make to\n>> improve the performance at higher \"shared_buffers\" values?\n>>\n>> Thanks, let me know if I can provide any more info,\n>> Jordan\n>>\n>> - Tests and results - public Datadog dashboard here\n>> <https://p.datadoghq.com/sb/0d34b3451-8bde042f82c012981b94796cdc26e259>,\n>> screenshot attached:\n>> - Our beta system (\"endor\") was run with three different\n>> configurations over the ~30hrs from Dec 11 17:00 to Dec 13 0:00 (UTC)\n>> - The only changes between these deployments was the\n>> \"shared_buffers\" parameter for all PSQL instances (machine and\n>> configuration details below).\n>> - \"shared_buffers\" = \"4000MB\" - from Dec 10 19:00 to Dec 11\n>> 20:00 UTC\n>> - \"shared_buffers\" = \"8000MB\" - from Dec 11 21:00 to Dec 12\n>> 13:30 UTC\n>> - \"shared_buffers\" = \"14000MB\" - from Dec 12, 14:30 to Dec 13,\n>> 0:00 UTC\n>> - The datadog dashboard\n>> <https://p.datadoghq.com/sb/0d34b3451-8bde042f82c012981b94796cdc26e259>\n>> shows our results including cpu divided by usage and the cache hit vs disk\n>> read ratio including blk_read_time (additional metrics were enabled at\n>> about Dec 11, 10am PST)\n>> - Our most query heavy service is our \"Trends worker\" for which\n>> the average worker duration is 
shown in the top-left graph\n>> - We expect the workload to be relatively constant\n>> throughout this period, particularly focusing on the standby instances\n>> (PQSL2 and PSQL3) where all read-only queries should be sent.\n>> - We see the lowest duration, i.e. best performance, most\n>> consistently with the lowest setting for shared_buffers, \"4000MB\"\n>> - As we increase shared_buffers we see increased iowait on the\n>> standby instances (PSQL2 and PSQL3) and increased blk_read_time\n>> (per pg_stat_database), in the bottom-most graphs as \"blks_read_time\".\n>> - Even though we also see a higher ratio of cache hits on\n>> those instances. Our graphs show the per second change\n>> in pg_stat_database.blks_read abd blks_hit (as \"all_hit/s\" and\n>> \"all_read/s\") and pg_statio_user_tables.heap_blks_read, heap_blks_hit,\n>> idx_blks_read, and idx_blks_hit\n>> - Cluster contains 3 PSQL nodes, all on AWS EC2 instances,\n>> postgresql.conf attached\n>> - Version: psql (PostgreSQL) 14.1\n>> - Machine:\n>> - AWS \"c6gd.4xlarge\" (32GB RAM, 16 core 2.5 GHz, local storage\n>> 950 GB ssd)\n>> - uname -a: Linux ip-172-30-64-110 5.4.0-1038-aws #40-Ubuntu\n>> SMP Fri Feb 5 23:53:34 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux\n>> - Replication via WAL:\n>> - Line configuration: PSQL1 (master), PSQL1 followed by PSQL2,\n>> PSQL2 followed by PSQL3\n>> - Managed by repmgr (version: repmgr 5.3.0), no failovers\n>> observed during timeframe of interest\n>> - Load balancing:\n>> - Managed by PGPool-II (version: 4.3.2 (tamahomeboshi)) on 3\n>> AWS instances\n>> - All write queries go to master. 
All read-only queries go to\n>> standbys unless WAL on standby > 10MB, falling back to read from master as\n>> last resort\n>>\n>>",
"msg_date": "Wed, 14 Dec 2022 10:12:16 -0800",
"msg_from": "Jordan Hurwich <jhurwich@pulsasensors.com>",
"msg_from_op": true,
"msg_subject": "Re: Increased iowait and blk_read_time with higher shared_buffers"
},
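For reference, the settings discussed above are plain postgresql.conf parameters. A hypothetical excerpt matching the best-performing configuration from the thread (values taken from the messages, not a recommendation):

```
shared_buffers = '4000MB'       # best-performing of the three sizes tested
effective_cache_size = '16GB'   # planner hint only; allocates no memory
track_io_timing = on            # required for pg_stat_database.blk_read_time
```

Unlike shared_buffers, changing effective_cache_size needs no restart and no shared memory: it only tells the planner how much combined shared_buffers plus OS cache it may assume when costing index scans.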
{
"msg_contents": "Jordan Hurwich <jhurwich@pulsasensors.com> writes:\n> I'm familiar with the article you linked to, and part of my surprise is\n> that with these 32GB RAM machines we're seeing better performance at 12.5%\n> (4GB) than the commonly recommended 25% (8GB) of system memory for\n> shared_buffers. Your notes about disk read stats from Postgres potentially\n> actually representing blocks read from the OS cache make sense, I just\n> imagined that Postgres would be better at managing the memory when it was\n> dedicated to it via shared_buffers than the OS (obviously with some point\n> of diminishing returns); and I'm still hoping there's some Postgres\n> configuration change we can make that enables better performance through\n> improved utilization of shared_buffers at the commonly recommended 25% of\n> system memory.\n\nKeep in mind that 25% was never some kind of golden number. It is\na rough rule of thumb that was invented for far smaller machines than\nwhat you're talking about here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 14 Dec 2022 13:27:23 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Increased iowait and blk_read_time with higher shared_buffers"
},
{
"msg_contents": "Thanks Tom, that makes a lot of sense. Given we're seeing low iowait and\nblk_read_time at 4GB shared_buffers, sounds like we should just declare\nvictory here and be happy with that setting?\n\nOn Wed, Dec 14, 2022 at 10:27 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Jordan Hurwich <jhurwich@pulsasensors.com> writes:\n> > I'm familiar with the article you linked to, and part of my surprise is\n> > that with these 32GB RAM machines we're seeing better performance at\n> 12.5%\n> > (4GB) than the commonly recommended 25% (8GB) of system memory for\n> > shared_buffers. Your notes about disk read stats from Postgres\n> potentially\n> > actually representing blocks read from the OS cache make sense, I just\n> > imagined that Postgres would be better at managing the memory when it was\n> > dedicated to it via shared_buffers than the OS (obviously with some point\n> > of diminishing returns); and I'm still hoping there's some Postgres\n> > configuration change we can make that enables better performance through\n> > improved utilization of shared_buffers at the commonly recommended 25% of\n> > system memory.\n>\n> Keep in mind that 25% was never some kind of golden number. It is\n> a rough rule of thumb that was invented for far smaller machines than\n> what you're talking about here.\n>\n> regards, tom lane\n>",
"msg_date": "Wed, 14 Dec 2022 10:32:34 -0800",
"msg_from": "Jordan Hurwich <jhurwich@pulsasensors.com>",
"msg_from_op": true,
"msg_subject": "Re: Increased iowait and blk_read_time with higher shared_buffers"
},
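Tom's point that 25% was never a golden number can be made concrete by restating the three tested sizes as fractions of the machines' 32 GB of RAM (using the thread's round 32000 MB base). A trivial sketch; note the largest setting sits above the roughly 40% ceiling the wiki page linked earlier warns about:

```python
# The shared_buffers sizes tested in the thread, as fractions of RAM.
# Uses the round 32000 MB base implied by the thread's "12.5%"/"25%" figures.
RAM_MB = 32_000

def fraction_of_ram(shared_buffers_mb: int) -> float:
    return shared_buffers_mb / RAM_MB

for size_mb in (4_000, 8_000, 14_000):
    print(f"{size_mb}MB -> {fraction_of_ram(size_mb):.1%} of RAM")
# 4000MB -> 12.5% of RAM
# 8000MB -> 25.0% of RAM
# 14000MB -> 43.8% of RAM
```

The 14000MB configuration, at roughly 44% of RAM, leaves the least room for the OS page cache, which is consistent with it showing the highest iowait and blk_read_time in the tests.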
{
"msg_contents": "Hello Jordan,\n\nYou don't have to set %25 for the best performance. You need to test\ndifferent values for your database. If I were you, I would\n\n - try to enable huge pages. You probably will see better performance\n with bigger shared_buffers when you configure huge pages. ->\n https://www.postgresql.org/docs/14/kernel-resources.html#LINUX-HUGE-PAGES\n - set effective_io_concurrency to 200. But, you need to test to figure\n out the best value. It significantly depends on your disk's\n metrics/configuration\n - set random_page_cost to 2 and try to decrease it gradually until 1.2.\n - set effective_cache_size to 24GB\n - run pg_test_timing on the server to see the cost of asking time to the\n system. Because track_io_timing is enabled in your configuration file. If\n it is expensive, I would disable tracking io timing.\n\n\nNote that I assumed that those resources/servers are reserved for\nPostgreSQL and there is no other service running on them.\n\nBest regards.\nSamed YILDIRIM\n\n\nOn Wed, 14 Dec 2022 at 20:12, Jordan Hurwich <jhurwich@pulsasensors.com>\nwrote:\n\n> Thanks for your thoughtful response Samed.\n>\n> I'm familiar with the article you linked to, and part of my surprise is\n> that with these 32GB RAM machines we're seeing better performance at 12.5%\n> (4GB) than the commonly recommended 25% (8GB) of system memory for\n> shared_buffers. 
Your notes about disk read stats from Postgres potentially\n> actually representing blocks read from the OS cache make sense, I just\n> imagined that Postgres would be better at managing the memory when it was\n> dedicated to it via shared_buffers than the OS (obviously with some point\n> of diminishing returns); and I'm still hoping there's some Postgres\n> configuration change we can make that enables better performance through\n> improved utilization of shared_buffers at the commonly recommended 25% of\n> system memory.\n>\n> You mentioned effective_cache_size, which we currently have set to 16GB\n> (50% of system memory). Is it worth us experimenting with that value, if so\n> would you recommend we try reducing it or increasing it? Are there other\n> settings that we might consider to see if we can improve the utilization of\n> shared_buffers at higher values like 8GB (25% of system memory)?\n>\n> On Wed, Dec 14, 2022 at 4:38 AM Samed YILDIRIM <samed@reddoc.net> wrote:\n>\n>> Hi Jordan,\n>>\n>> Increased shared buffer size does not necessarily mean an increased\n>> performance.\n>>\n>> Regarding the negative correlation between IOWait and shared_buffers'\n>> size; if you don't increase memory of the system, it is an expected result\n>> in my opinion. Because, PostgreSQL starts reserving a bigger portion of the\n>> system memory, and the OS cache size decreases respectively. Smaller OS\n>> cache can easily result with more disk access and higher IO demand and\n>> bigger IOWait.\n>>\n>> As you can see in graphs, when you increase the size of shared_buffers,\n>> you see higher block hits and lower block reads. \"hits\" refers to the\n>> blocks that are already in shared_buffers. \"reads\" refers to the blocks\n>> that are not in shared_buffers and *\"read from* *disk\"*. But, *\"read\n>> from disk\"* that you see in PostgreSQL's statistic catalogs doesn't mean\n>> all of those blocks were read from the disk. 
PostgreSQL requests data\n>> blocks, which are not already in shared_buffers, from the kernel. And, if\n>> the requested block is in the OS cache, the kernel provides it directly\n>> from the memory. No disk access, therefore, happens at all. And, you\n>> observe that through lower disk access (I/O) and lower IOWait on your\n>> operating system.\n>>\n>> When you increase size of shared_buffers without increasing amount of the\n>> system memory and with or without decreasing effective_cache_size,\n>> PostgreSQL considers the possibility of the block to be requested on the\n>> memory lower than previous configuration. So, it creates execution plans\n>> with less index usages. Less index usage means more sequential scan. More\n>> sequential scan means more disk read. We already have less OS cache. And\n>> the system has to carry out more disk accesses.\n>>\n>> As you can see, they are all connected. Setting shared_buffers higher\n>> than a threshold, which varies from database to database, actually\n>> decreases your performance.\n>>\n>> To conclude, your results are expected results.\n>>\n>> A useful resource to read:\n>> https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n>>\n>>> ..... 
given the way PostgreSQL also relies on the operating system\n>>> cache, it's unlikely you'll find using more than 40% of RAM to work better\n>>> than a smaller amount.\n>>>\n>>\n>> Best regards.\n>> Samed YILDIRIM\n>>\n>>\n>> On Tue, 13 Dec 2022 at 02:29, Jordan Hurwich <jhurwich@pulsasensors.com>\n>> wrote:\n>>\n>>> Hi everyone,\n>>> I'm writing to ask about a correlation I was surprised to observe on our\n>>> PSQL machines (particularly read-only standbys) where increasing\n>>> \"shared_buffers\" appears to result in\n>>> increased pg_stat_database.blk_read_time and CPU iowait, which in turn\n>>> seems to correlate with reduced throughput for our query-heavy services -\n>>> details below.\n>>>\n>>> Is this expected, or are there configuration changes we might make to\n>>> improve the performance at higher \"shared_buffers\" values?\n>>>\n>>> Thanks, let me know if I can provide any more info,\n>>> Jordan\n>>>\n>>> - Tests and results - public Datadog dashboard here\n>>> <https://p.datadoghq.com/sb/0d34b3451-8bde042f82c012981b94796cdc26e259>,\n>>> screenshot attached:\n>>> - Our beta system (\"endor\") was run with three different\n>>> configurations over the ~30hrs from Dec 11 17:00 to Dec 13 0:00 (UTC)\n>>> - The only changes between these deployments was the\n>>> \"shared_buffers\" parameter for all PSQL instances (machine and\n>>> configuration details below).\n>>> - \"shared_buffers\" = \"4000MB\" - from Dec 10 19:00 to Dec 11\n>>> 20:00 UTC\n>>> - \"shared_buffers\" = \"8000MB\" - from Dec 11 21:00 to Dec 12\n>>> 13:30 UTC\n>>> - \"shared_buffers\" = \"14000MB\" - from Dec 12, 14:30 to Dec 13,\n>>> 0:00 UTC\n>>> - The datadog dashboard\n>>> <https://p.datadoghq.com/sb/0d34b3451-8bde042f82c012981b94796cdc26e259>\n>>> shows our results including cpu divided by usage and the cache hit vs disk\n>>> read ratio including blk_read_time (additional metrics were enabled at\n>>> about Dec 11, 10am PST)\n>>> - Our most query heavy service is our \"Trends worker\" 
for\n>>> which the average worker duration is shown in the top-left graph\n>>> - We expect the workload to be relatively constant\n>>> throughout this period, particularly focusing on the standby instances\n>>> (PQSL2 and PSQL3) where all read-only queries should be sent.\n>>> - We see the lowest duration, i.e. best performance, most\n>>> consistently with the lowest setting for shared_buffers, \"4000MB\"\n>>> - As we increase shared_buffers we see increased iowait on the\n>>> standby instances (PSQL2 and PSQL3) and increased blk_read_time\n>>> (per pg_stat_database), in the bottom-most graphs as \"blks_read_time\".\n>>> - Even though we also see a higher ratio of cache hits on\n>>> those instances. Our graphs show the per second change\n>>> in pg_stat_database.blks_read abd blks_hit (as \"all_hit/s\" and\n>>> \"all_read/s\") and pg_statio_user_tables.heap_blks_read, heap_blks_hit,\n>>> idx_blks_read, and idx_blks_hit\n>>> - Cluster contains 3 PSQL nodes, all on AWS EC2 instances,\n>>> postgresql.conf attached\n>>> - Version: psql (PostgreSQL) 14.1\n>>> - Machine:\n>>> - AWS \"c6gd.4xlarge\" (32GB RAM, 16 core 2.5 GHz, local storage\n>>> 950 GB ssd)\n>>> - uname -a: Linux ip-172-30-64-110 5.4.0-1038-aws #40-Ubuntu\n>>> SMP Fri Feb 5 23:53:34 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux\n>>> - Replication via WAL:\n>>> - Line configuration: PSQL1 (master), PSQL1 followed by PSQL2,\n>>> PSQL2 followed by PSQL3\n>>> - Managed by repmgr (version: repmgr 5.3.0), no failovers\n>>> observed during timeframe of interest\n>>> - Load balancing:\n>>> - Managed by PGPool-II (version: 4.3.2 (tamahomeboshi)) on 3\n>>> AWS instances\n>>> - All write queries go to master. All read-only queries go to\n>>> standbys unless WAL on standby > 10MB, falling back to read from master as\n>>> last resort\n>>>\n>>>\n\nHello Jordan,You don't have to set %25 for the best performance. You need to test different values for your database. If I were you, I would try to enable huge pages. 
You probably will see better performance with bigger shared_buffers when you configure huge pages. -> https://www.postgresql.org/docs/14/kernel-resources.html#LINUX-HUGE-PAGESset effective_io_concurrency to 200. But, you need to test to figure out the best value. It significantly depends on your disk's metrics/configurationset random_page_cost to 2 and try to decrease it gradually until 1.2.set effective_cache_size to 24GBrun pg_test_timing on the server to see the cost of asking time to the system. Because track_io_timing is enabled in your configuration file. If it is expensive, I would disable tracking io timing.Note that I assumed that those resources/servers are reserved for PostgreSQL and there is no other service running on them.Best regards.Samed YILDIRIMOn Wed, 14 Dec 2022 at 20:12, Jordan Hurwich <jhurwich@pulsasensors.com> wrote:Thanks for your thoughtful response Samed.I'm familiar with the article you linked to, and part of my surprise is that with these 32GB RAM machines we're seeing better performance at 12.5% (4GB) than the commonly recommended 25% (8GB) of system memory for shared_buffers. Your notes about disk read stats from Postgres potentially actually representing blocks read from the OS cache make sense, I just imagined that Postgres would be better at managing the memory when it was dedicated to it via shared_buffers than the OS (obviously with some point of diminishing returns); and I'm still hoping there's some Postgres configuration change we can make that enables better performance through improved utilization of shared_buffers at the commonly recommended 25% of system memory.You mentioned effective_cache_size, which we currently have set to 16GB (50% of system memory). Is it worth us experimenting with that value, if so would you recommend we try reducing it or increasing it? Are there other settings that we might consider to see if we can improve the utilization of shared_buffers at higher values like 8GB (25% of system memory)? 
On Wed, Dec 14, 2022 at 4:38 AM Samed YILDIRIM <samed@reddoc.net> wrote:Hi Jordan,Increased shared buffer size does not necessarily mean an increased performance.Regarding the negative correlation between IOWait and shared_buffers' size; if you don't increase memory of the system, it is an expected result in my opinion. Because, PostgreSQL starts reserving a bigger portion of the system memory, and the OS cache size decreases respectively. Smaller OS cache can easily result with more disk access and higher IO demand and bigger IOWait.As you can see in graphs, when you increase the size of shared_buffers, you see higher block hits and lower block reads. \"hits\" refers to the blocks that are already in shared_buffers. \"reads\" refers to the blocks that are not in shared_buffers and \"read from disk\". But, \"read from disk\" that you see in PostgreSQL's statistic catalogs doesn't mean all of those blocks were read from the disk. PostgreSQL requests data blocks, which are not already in shared_buffers, from the kernel. And, if the requested block is in the OS cache, the kernel provides it directly from the memory. No disk access, therefore, happens at all. And, you observe that through lower disk access (I/O) and lower IOWait on your operating system.When you increase size of shared_buffers without increasing amount of the system memory and with or without decreasing effective_cache_size, PostgreSQL considers the possibility of the block to be requested on the memory lower than previous configuration. So, it creates execution plans with less index usages. Less index usage means more sequential scan. More sequential scan means more disk read. We already have less OS cache. And the system has to carry out more disk accesses.As you can see, they are all connected. 
Setting shared_buffers higher than a threshold, which varies from database to database, actually decreases your performance.To conclude, your results are expected results.A useful resource to read: https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server ..... given the way PostgreSQL also relies on the operating system cache, it's\n unlikely you'll find using more than 40% of RAM to work better than a \nsmaller amount.Best regards.Samed YILDIRIMOn Tue, 13 Dec 2022 at 02:29, Jordan Hurwich <jhurwich@pulsasensors.com> wrote:Hi everyone,I'm writing to ask about a correlation I was surprised to observe on our PSQL machines (particularly read-only standbys) where increasing \"shared_buffers\" appears to result in increased pg_stat_database.blk_read_time and CPU iowait, which in turn seems to correlate with reduced throughput for our query-heavy services - details below. Is this expected, or are there configuration changes we might make to improve the performance at higher \"shared_buffers\" values?Thanks, let me know if I can provide any more info,JordanTests and results - public Datadog dashboard here, screenshot attached:Our beta system (\"endor\") was run with three different configurations over the ~30hrs from Dec 11 17:00 to Dec 13 0:00 (UTC)The only changes between these deployments was the \"shared_buffers\" parameter for all PSQL instances (machine and configuration details below). 
\"shared_buffers\" = \"4000MB\" - from Dec 10 19:00 to Dec 11 20:00 UTC\"shared_buffers\" = \"8000MB\" - from Dec 11 21:00 to Dec 12 13:30 UTC\"shared_buffers\" = \"14000MB\" - from Dec 12, 14:30 to Dec 13, 0:00 UTCThe datadog dashboard shows our results including cpu divided by usage and the cache hit vs disk read ratio including blk_read_time (additional metrics were enabled at about Dec 11, 10am PST)Our most query heavy service is our \"Trends worker\" for which the average worker duration is shown in the top-left graphWe expect the workload to be relatively constant throughout this period, particularly focusing on the standby instances (PQSL2 and PSQL3) where all read-only queries should be sent.We see the lowest duration, i.e. best performance, most consistently with the lowest setting for shared_buffers, \"4000MB\"As we increase shared_buffers we see increased iowait on the standby instances (PSQL2 and PSQL3) and increased blk_read_time (per pg_stat_database), in the bottom-most graphs as \"blks_read_time\".Even though we also see a higher ratio of cache hits on those instances. Our graphs show the per second change in pg_stat_database.blks_read abd blks_hit (as \"all_hit/s\" and \"all_read/s\") and pg_statio_user_tables.heap_blks_read, heap_blks_hit, idx_blks_read, and idx_blks_hitCluster contains 3 PSQL nodes, all on AWS EC2 instances, postgresql.conf attachedVersion: psql (PostgreSQL) 14.1Machine: AWS \"c6gd.4xlarge\" (32GB RAM, 16 core 2.5 GHz, local storage 950 GB ssd)uname -a: Linux ip-172-30-64-110 5.4.0-1038-aws #40-Ubuntu SMP Fri Feb 5 23:53:34 UTC 2021 aarch64 aarch64 aarch64 GNU/LinuxReplication via WAL:Line configuration: PSQL1 (master), PSQL1 followed by PSQL2, PSQL2 followed by PSQL3Managed by repmgr (version: repmgr 5.3.0), no failovers observed during timeframe of interestLoad balancing:Managed by PGPool-II (version: 4.3.2 (tamahomeboshi)) on 3 AWS instancesAll write queries go to master. 
All read-only queries go to standbys unless WAL on standby > 10MB, falling back to read from master as last resort",
"msg_date": "Wed, 14 Dec 2022 20:41:27 +0200",
"msg_from": "Samed YILDIRIM <samed@reddoc.net>",
"msg_from_op": false,
"msg_subject": "Re: Increased iowait and blk_read_time with higher shared_buffers"
},
{
"msg_contents": "Awesome, this is really helpful Samed. I'll start experimenting with these\nsettings next. Really appreciate your guidance.\n\nOn Wed, Dec 14, 2022 at 10:41 AM Samed YILDIRIM <samed@reddoc.net> wrote:\n\n> Hello Jordan,\n>\n> You don't have to set %25 for the best performance. You need to test\n> different values for your database. If I were you, I would\n>\n> - try to enable huge pages. You probably will see better performance\n> with bigger shared_buffers when you configure huge pages. ->\n> https://www.postgresql.org/docs/14/kernel-resources.html#LINUX-HUGE-PAGES\n> - set effective_io_concurrency to 200. But, you need to test to figure\n> out the best value. It significantly depends on your disk's\n> metrics/configuration\n> - set random_page_cost to 2 and try to decrease it gradually until 1.2.\n> - set effective_cache_size to 24GB\n> - run pg_test_timing on the server to see the cost of asking time to\n> the system. Because track_io_timing is enabled in your configuration file.\n> If it is expensive, I would disable tracking io timing.\n>\n>\n> Note that I assumed that those resources/servers are reserved for\n> PostgreSQL and there is no other service running on them.\n>\n> Best regards.\n> Samed YILDIRIM\n>\n>\n> On Wed, 14 Dec 2022 at 20:12, Jordan Hurwich <jhurwich@pulsasensors.com>\n> wrote:\n>\n>> Thanks for your thoughtful response Samed.\n>>\n>> I'm familiar with the article you linked to, and part of my surprise is\n>> that with these 32GB RAM machines we're seeing better performance at 12.5%\n>> (4GB) than the commonly recommended 25% (8GB) of system memory for\n>> shared_buffers. 
Your notes about disk read stats from Postgres potentially\n>> actually representing blocks read from the OS cache make sense, I just\n>> imagined that Postgres would be better at managing the memory when it was\n>> dedicated to it via shared_buffers than the OS (obviously with some point\n>> of diminishing returns); and I'm still hoping there's some Postgres\n>> configuration change we can make that enables better performance through\n>> improved utilization of shared_buffers at the commonly recommended 25% of\n>> system memory.\n>>\n>> You mentioned effective_cache_size, which we currently have set to 16GB\n>> (50% of system memory). Is it worth us experimenting with that value, if so\n>> would you recommend we try reducing it or increasing it? Are there other\n>> settings that we might consider to see if we can improve the utilization of\n>> shared_buffers at higher values like 8GB (25% of system memory)?\n>>\n>> On Wed, Dec 14, 2022 at 4:38 AM Samed YILDIRIM <samed@reddoc.net> wrote:\n>>\n>>> Hi Jordan,\n>>>\n>>> Increased shared buffer size does not necessarily mean an increased\n>>> performance.\n>>>\n>>> Regarding the negative correlation between IOWait and shared_buffers'\n>>> size; if you don't increase memory of the system, it is an expected result\n>>> in my opinion. Because, PostgreSQL starts reserving a bigger portion of the\n>>> system memory, and the OS cache size decreases respectively. Smaller OS\n>>> cache can easily result with more disk access and higher IO demand and\n>>> bigger IOWait.\n>>>\n>>> As you can see in graphs, when you increase the size of shared_buffers,\n>>> you see higher block hits and lower block reads. \"hits\" refers to the\n>>> blocks that are already in shared_buffers. \"reads\" refers to the blocks\n>>> that are not in shared_buffers and *\"read from* *disk\"*. But, *\"read\n>>> from disk\"* that you see in PostgreSQL's statistic catalogs doesn't\n>>> mean all of those blocks were read from the disk. 
PostgreSQL requests data\n>>> blocks, which are not already in shared_buffers, from the kernel. And, if\n>>> the requested block is in the OS cache, the kernel provides it directly\n>>> from the memory. No disk access, therefore, happens at all. And, you\n>>> observe that through lower disk access (I/O) and lower IOWait on your\n>>> operating system.\n>>>\n>>> When you increase size of shared_buffers without increasing amount of\n>>> the system memory and with or without decreasing effective_cache_size,\n>>> PostgreSQL considers the possibility of the block to be requested on the\n>>> memory lower than previous configuration. So, it creates execution plans\n>>> with less index usages. Less index usage means more sequential scan. More\n>>> sequential scan means more disk read. We already have less OS cache. And\n>>> the system has to carry out more disk accesses.\n>>>\n>>> As you can see, they are all connected. Setting shared_buffers higher\n>>> than a threshold, which varies from database to database, actually\n>>> decreases your performance.\n>>>\n>>> To conclude, your results are expected results.\n>>>\n>>> A useful resource to read:\n>>> https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n>>>\n>>>> ..... 
given the way PostgreSQL also relies on the operating system\n>>>> cache, it's unlikely you'll find using more than 40% of RAM to work better\n>>>> than a smaller amount.\n>>>>\n>>>\n>>> Best regards.\n>>> Samed YILDIRIM\n>>>\n>>>\n>>> On Tue, 13 Dec 2022 at 02:29, Jordan Hurwich <jhurwich@pulsasensors.com>\n>>> wrote:\n>>>\n>>>> Hi everyone,\n>>>> I'm writing to ask about a correlation I was surprised to observe on\n>>>> our PSQL machines (particularly read-only standbys) where increasing\n>>>> \"shared_buffers\" appears to result in\n>>>> increased pg_stat_database.blk_read_time and CPU iowait, which in turn\n>>>> seems to correlate with reduced throughput for our query-heavy services -\n>>>> details below.\n>>>>\n>>>> Is this expected, or are there configuration changes we might make to\n>>>> improve the performance at higher \"shared_buffers\" values?\n>>>>\n>>>> Thanks, let me know if I can provide any more info,\n>>>> Jordan\n>>>>\n>>>> - Tests and results - public Datadog dashboard here\n>>>> <https://p.datadoghq.com/sb/0d34b3451-8bde042f82c012981b94796cdc26e259>,\n>>>> screenshot attached:\n>>>> - Our beta system (\"endor\") was run with three different\n>>>> configurations over the ~30hrs from Dec 11 17:00 to Dec 13 0:00 (UTC)\n>>>> - The only changes between these deployments was the\n>>>> \"shared_buffers\" parameter for all PSQL instances (machine and\n>>>> configuration details below).\n>>>> - \"shared_buffers\" = \"4000MB\" - from Dec 10 19:00 to Dec 11\n>>>> 20:00 UTC\n>>>> - \"shared_buffers\" = \"8000MB\" - from Dec 11 21:00 to Dec 12\n>>>> 13:30 UTC\n>>>> - \"shared_buffers\" = \"14000MB\" - from Dec 12, 14:30 to Dec\n>>>> 13, 0:00 UTC\n>>>> - The datadog dashboard\n>>>> <https://p.datadoghq.com/sb/0d34b3451-8bde042f82c012981b94796cdc26e259>\n>>>> shows our results including cpu divided by usage and the cache hit vs disk\n>>>> read ratio including blk_read_time (additional metrics were enabled at\n>>>> about Dec 11, 10am PST)\n>>>> - Our most 
query heavy service is our \"Trends worker\" for\n>>>> which the average worker duration is shown in the top-left graph\n>>>> - We expect the workload to be relatively constant\n>>>> throughout this period, particularly focusing on the standby instances\n>>>> (PQSL2 and PSQL3) where all read-only queries should be sent.\n>>>> - We see the lowest duration, i.e. best performance, most\n>>>> consistently with the lowest setting for shared_buffers, \"4000MB\"\n>>>> - As we increase shared_buffers we see increased iowait on\n>>>> the standby instances (PSQL2 and PSQL3) and increased blk_read_time\n>>>> (per pg_stat_database), in the bottom-most graphs as \"blks_read_time\".\n>>>> - Even though we also see a higher ratio of cache hits on\n>>>> those instances. Our graphs show the per second change\n>>>> in pg_stat_database.blks_read abd blks_hit (as \"all_hit/s\" and\n>>>> \"all_read/s\") and pg_statio_user_tables.heap_blks_read, heap_blks_hit,\n>>>> idx_blks_read, and idx_blks_hit\n>>>> - Cluster contains 3 PSQL nodes, all on AWS EC2 instances,\n>>>> postgresql.conf attached\n>>>> - Version: psql (PostgreSQL) 14.1\n>>>> - Machine:\n>>>> - AWS \"c6gd.4xlarge\" (32GB RAM, 16 core 2.5 GHz, local\n>>>> storage 950 GB ssd)\n>>>> - uname -a: Linux ip-172-30-64-110 5.4.0-1038-aws #40-Ubuntu\n>>>> SMP Fri Feb 5 23:53:34 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux\n>>>> - Replication via WAL:\n>>>> - Line configuration: PSQL1 (master), PSQL1 followed by\n>>>> PSQL2, PSQL2 followed by PSQL3\n>>>> - Managed by repmgr (version: repmgr 5.3.0), no failovers\n>>>> observed during timeframe of interest\n>>>> - Load balancing:\n>>>> - Managed by PGPool-II (version: 4.3.2 (tamahomeboshi)) on 3\n>>>> AWS instances\n>>>> - All write queries go to master. All read-only queries go to\n>>>> standbys unless WAL on standby > 10MB, falling back to read from master as\n>>>> last resort\n>>>>\n>>>>\n\nAwesome, this is really helpful Samed. I'll start experimenting with these settings next. 
Really appreciate your guidance. On Wed, Dec 14, 2022 at 10:41 AM Samed YILDIRIM <samed@reddoc.net> wrote:Hello Jordan,You don't have to set %25 for the best performance. You need to test different values for your database. If I were you, I would try to enable huge pages. You probably will see better performance with bigger shared_buffers when you configure huge pages. -> https://www.postgresql.org/docs/14/kernel-resources.html#LINUX-HUGE-PAGESset effective_io_concurrency to 200. But, you need to test to figure out the best value. It significantly depends on your disk's metrics/configurationset random_page_cost to 2 and try to decrease it gradually until 1.2.set effective_cache_size to 24GBrun pg_test_timing on the server to see the cost of asking time to the system. Because track_io_timing is enabled in your configuration file. If it is expensive, I would disable tracking io timing.Note that I assumed that those resources/servers are reserved for PostgreSQL and there is no other service running on them.Best regards.Samed YILDIRIMOn Wed, 14 Dec 2022 at 20:12, Jordan Hurwich <jhurwich@pulsasensors.com> wrote:Thanks for your thoughtful response Samed.I'm familiar with the article you linked to, and part of my surprise is that with these 32GB RAM machines we're seeing better performance at 12.5% (4GB) than the commonly recommended 25% (8GB) of system memory for shared_buffers. Your notes about disk read stats from Postgres potentially actually representing blocks read from the OS cache make sense, I just imagined that Postgres would be better at managing the memory when it was dedicated to it via shared_buffers than the OS (obviously with some point of diminishing returns); and I'm still hoping there's some Postgres configuration change we can make that enables better performance through improved utilization of shared_buffers at the commonly recommended 25% of system memory.You mentioned effective_cache_size, which we currently have set to 16GB (50% of system memory). 
Is it worth us experimenting with that value, if so would you recommend we try reducing it or increasing it? Are there other settings that we might consider to see if we can improve the utilization of shared_buffers at higher values like 8GB (25% of system memory)? On Wed, Dec 14, 2022 at 4:38 AM Samed YILDIRIM <samed@reddoc.net> wrote:Hi Jordan,Increased shared buffer size does not necessarily mean an increased performance.Regarding the negative correlation between IOWait and shared_buffers' size; if you don't increase memory of the system, it is an expected result in my opinion. Because, PostgreSQL starts reserving a bigger portion of the system memory, and the OS cache size decreases respectively. Smaller OS cache can easily result with more disk access and higher IO demand and bigger IOWait.As you can see in graphs, when you increase the size of shared_buffers, you see higher block hits and lower block reads. \"hits\" refers to the blocks that are already in shared_buffers. \"reads\" refers to the blocks that are not in shared_buffers and \"read from disk\". But, \"read from disk\" that you see in PostgreSQL's statistic catalogs doesn't mean all of those blocks were read from the disk. PostgreSQL requests data blocks, which are not already in shared_buffers, from the kernel. And, if the requested block is in the OS cache, the kernel provides it directly from the memory. No disk access, therefore, happens at all. And, you observe that through lower disk access (I/O) and lower IOWait on your operating system.When you increase size of shared_buffers without increasing amount of the system memory and with or without decreasing effective_cache_size, PostgreSQL considers the possibility of the block to be requested on the memory lower than previous configuration. So, it creates execution plans with less index usages. Less index usage means more sequential scan. More sequential scan means more disk read. We already have less OS cache. 
And the system has to carry out more disk accesses.As you can see, they are all connected. Setting shared_buffers higher than a threshold, which varies from database to database, actually decreases your performance.To conclude, your results are expected results.A useful resource to read: https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server ..... given the way PostgreSQL also relies on the operating system cache, it's\n unlikely you'll find using more than 40% of RAM to work better than a \nsmaller amount.Best regards.Samed YILDIRIMOn Tue, 13 Dec 2022 at 02:29, Jordan Hurwich <jhurwich@pulsasensors.com> wrote:Hi everyone,I'm writing to ask about a correlation I was surprised to observe on our PSQL machines (particularly read-only standbys) where increasing \"shared_buffers\" appears to result in increased pg_stat_database.blk_read_time and CPU iowait, which in turn seems to correlate with reduced throughput for our query-heavy services - details below. Is this expected, or are there configuration changes we might make to improve the performance at higher \"shared_buffers\" values?Thanks, let me know if I can provide any more info,JordanTests and results - public Datadog dashboard here, screenshot attached:Our beta system (\"endor\") was run with three different configurations over the ~30hrs from Dec 11 17:00 to Dec 13 0:00 (UTC)The only changes between these deployments was the \"shared_buffers\" parameter for all PSQL instances (machine and configuration details below). 
\"shared_buffers\" = \"4000MB\" - from Dec 10 19:00 to Dec 11 20:00 UTC\"shared_buffers\" = \"8000MB\" - from Dec 11 21:00 to Dec 12 13:30 UTC\"shared_buffers\" = \"14000MB\" - from Dec 12, 14:30 to Dec 13, 0:00 UTCThe datadog dashboard shows our results including cpu divided by usage and the cache hit vs disk read ratio including blk_read_time (additional metrics were enabled at about Dec 11, 10am PST)Our most query heavy service is our \"Trends worker\" for which the average worker duration is shown in the top-left graphWe expect the workload to be relatively constant throughout this period, particularly focusing on the standby instances (PQSL2 and PSQL3) where all read-only queries should be sent.We see the lowest duration, i.e. best performance, most consistently with the lowest setting for shared_buffers, \"4000MB\"As we increase shared_buffers we see increased iowait on the standby instances (PSQL2 and PSQL3) and increased blk_read_time (per pg_stat_database), in the bottom-most graphs as \"blks_read_time\".Even though we also see a higher ratio of cache hits on those instances. Our graphs show the per second change in pg_stat_database.blks_read abd blks_hit (as \"all_hit/s\" and \"all_read/s\") and pg_statio_user_tables.heap_blks_read, heap_blks_hit, idx_blks_read, and idx_blks_hitCluster contains 3 PSQL nodes, all on AWS EC2 instances, postgresql.conf attachedVersion: psql (PostgreSQL) 14.1Machine: AWS \"c6gd.4xlarge\" (32GB RAM, 16 core 2.5 GHz, local storage 950 GB ssd)uname -a: Linux ip-172-30-64-110 5.4.0-1038-aws #40-Ubuntu SMP Fri Feb 5 23:53:34 UTC 2021 aarch64 aarch64 aarch64 GNU/LinuxReplication via WAL:Line configuration: PSQL1 (master), PSQL1 followed by PSQL2, PSQL2 followed by PSQL3Managed by repmgr (version: repmgr 5.3.0), no failovers observed during timeframe of interestLoad balancing:Managed by PGPool-II (version: 4.3.2 (tamahomeboshi)) on 3 AWS instancesAll write queries go to master. 
All read-only queries go to standbys unless WAL on standby > 10MB, falling back to read from master as last resort",
"msg_date": "Wed, 14 Dec 2022 10:43:45 -0800",
"msg_from": "Jordan Hurwich <jhurwich@pulsasensors.com>",
"msg_from_op": true,
"msg_subject": "Re: Increased iowait and blk_read_time with higher shared_buffers"
}
] |
[
{
"msg_contents": "I inherited a database with several single-digit billion row tables. Those\ntables have a varchar(36) column populated with uuids (all connected to\neach other via FKs) each currently supported by a btree index.\n\nAfter the recent conversations about hash indexes I thought I'd do some\ncomparisons to see if using a hash index could help and perhaps\ndepriortize my burning desire to change the data type. We never look up\nuuids with inequalities after all. Indeed, in my test environments the\nhash index was half the size of the btree index, and the select performance\nwas slightly faster than btree lookups. varchar(36) with hash index was\nroughly comparable to using a uuid data type (btree or hash index).\n\nI was pretty excited until I tried to create the index on a table with the\ndata (instead of creating it ahead of time and then loading up the test\ndata).\n\nWorking in PG 14.5, on a tiny 9M row table, in an idle database, I found:\n- creating the btree index on the varchar(36) column to consistently take 7\n*seconds*\n- creating the hash index on the varchar(36) to consistently take 1 *hour*\n\nI was surprised at how dramatically slower it was. I tried this on both\npartitioned and non-partitioned tables (with the same data set) and in both\ncases the timings came out similar.\n\nI also tried creating a hash index on a varchar(100) column, also with 9M\nrows. I gave up after it did not complete after several hours. (it wasn't\nlocked, just slow)\n\nWhile I was experimenting with the different index types, I did some insert\ntests. After putting the hash index on the column, the inserts were\nsignificantly slower. The btree index was *6-7x *slower than no index, and\nthe hash index was *100x* slower than no index.\n\nAssuming I can live with the slower inserts, is there any parameter in\nparticular I can tweak that would make the time it takes to create the hash\nindex closer to the btree index creation time? 
In particular if I wanted\nto try this on a several billion row table in a busy database?\n\n---\n\nFWIW, from my tests on my laptop, on a 250M row table last weekend, after\n100K selects:\n\nMEAN (ms) | btree | hash\n--------- | ------- | ----\nvarchar | 28.14916 | 27.03769\nuuid | 27.04855 | 27.64424\n\nand the sizes\n\nSIZE | btree | hash\n---- | ----- | ----\nvarchar | 12 GB | 6212 MB\nuuid | 6595 MB | 6212 MB\n\n- As long as the index fits in memory, varchar btree isn't really that\nmuch slower in postgresql 14 (the way it was a few years ago), so we'll\nprobably just live with that for the foreseeable future given the complexity\nof changing things at the moment.\n\n--\nRick\n\nI inherited a database with several single-digit billion row tables. Those tables have a varchar(36) column populated with uuids (all connected to each other via FKs) each currently supported by a btree index.After the recent conversations about hash indexes I thought I'd do some comparisons to see if using a hash index could help and perhaps deprioritize my burning desire to change the data type. We never look up uuids with inequalities after all. Indeed, in my test environments the hash index was half the size of the btree index, and the select performance was slightly faster than btree lookups. varchar(36) with hash index was roughly comparable to using a uuid data type (btree or hash index).I was pretty excited until I tried to create the index on a table with the data (instead of creating it ahead of time and then loading up the test data).Working in PG 14.5, on a tiny 9M row table, in an idle database, I found:- creating the btree index on the varchar(36) column to consistently take 7 seconds- creating the hash index on the varchar(36) to consistently take 1 hourI was surprised at how dramatically slower it was. 
I tried this on both partitioned and non-partitioned tables (with the same data set) and in both cases the timings came out similar.I also tried creating a hash index on a varchar(100) column, also with 9M rows. I gave up after it did not complete after several hours. (it wasn't locked, just slow)While I was experimenting with the different index types, I did some insert tests. After putting the hash index on the column, the inserts were significantly slower. The btree index was 6-7x slower than no index, and the hash index was 100x slower than no index.Assuming I can live with the slower inserts, is there any parameter in particular I can tweak that would make the time it takes to create the hash index closer to the btree index creation time? In particular if I wanted to try this on a several billion row table in a busy database?---FWIW, from my tests on my laptop, on a 250M row table last weekend, after 100K selects:MEAN (ms) | btree | hash--------- | ------- | ----varchar | 28.14916 | 27.03769uuid | 27.04855 | 27.64424and the sizesSIZE | btree | hash---- | ----- | ----varchar | 12 GB | 6212 MBuuid | 6595 MB | 6212 MB- As long as the index fits in memory, varchar btree isn't really that much slower in postgresql 14 (the way it was a few years ago), so we'll probably just live with that for the forseeable future given the complexity of changing things at the moment.--Rick",
"msg_date": "Wed, 14 Dec 2022 15:03:42 -0500",
"msg_from": "Rick Otten <rottenwindfish@gmail.com>",
"msg_from_op": true,
"msg_subject": "creating hash indexes"
},
{
"msg_contents": "On Wed, Dec 14, 2022 at 12:03 PM Rick Otten <rottenwindfish@gmail.com> wrote:\n> Assuming I can live with the slower inserts, is there any parameter in particular I can tweak that would make the time it takes to create the hash index closer to the btree index creation time? In particular if I wanted to try this on a several billion row table in a busy database?\n\nNo. B-Tree index builds are parallelized, and are far better optimized\nin general.\n\n> - As long as the index fits in memory, varchar btree isn't really that much slower in postgresql 14 (the way it was a few years ago), so we'll probably just live with that for the forseeable future given the complexity of changing things at the moment.\n\nThe other things to consider are 1.) the index size after retail\ninserts, 2.) the index size following some number of updates and\ndeletes.\n\nEven if you just had plain inserts for your production workload, the\npicture will not match your test case (which I gather just looked at\nthe index size after a CREATE INDEX ran). I think that B-Tree indexes\nwill still come out ahead if you take this growth into account, and by\nquite a bit, but probably not due to any effect that your existing test case\nexercises.\n\nB-Tree indexes are good at accommodating unpredictable growth, without\never getting terrible performance on any metric of interest. So it's\nnot just that they tend to have better performance on average than\nhash indexes (though they do); it's that they have much more\n*predictable* performance characteristics as conditions change.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 14 Dec 2022 12:28:47 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: creating hash indexes"
}
] |
[
{
"msg_contents": "Hi, \n\ncould someone please comment on this article https://vladmihalcea.com/uuid-database-primary-key/ specifically re the comments (copied below) in regards to a Postgres database. \n\n... \n\n\nBut, using a random UUID as a database table Primary Key is a bad idea for multiple reasons. \n\nFirst, the UUID is huge. Every single record will need 16 bytes for the database identifier, and this impacts all associated Foreign Key columns as well. \n\nSecond, the Primary Key column usually has an associated B+Tree index to speed up lookups or joins, and B+Tree indexes store data in sorted order. \n\nHowever, indexing random values using B+Tree causes a lot of problems: \n\n * Index pages will have a very low fill factor because the values come randomly. So, a page of 8kB will end up storing just a few elements, therefore wasting a lot of space, both on the disk and in the database memory, as index pages could be cached in the Buffer Pool. \n * Because the B+Tree index needs to rebalance itself in order to maintain its equidistant tree structure, the random key values will cause more index page splits and merges as there is no pre-determined order of filling the tree structure. \n... \n\n\nAny other general comments about time sorted UUIDs would be welcome. \n\n\n\nThanks, \n\nTim Jones",
"msg_date": "Thu, 15 Dec 2022 10:56:34 +1300 (NZDT)",
"msg_from": "Tim Jones <tim.jones@mccarthy.co.nz>",
"msg_from_op": true,
"msg_subject": "time sorted UUIDs"
},
{
"msg_contents": "On Thu, 2022-12-15 at 10:56 +1300, Tim Jones wrote:\n> could someone please comment on this article https://vladmihalcea.com/uuid-database-primary-key/\n> specifically re the comments (copied below) in regards to a Postgres database.\n> \n> ...\n> But, using a random UUID as a database table Primary Key is a bad idea for multiple reasons.\n> First, the UUID is huge. Every single record will need 16 bytes for the database identifier,\n> and this impacts all associated Foreign Key columns as well.\n> Second, the Primary Key column usually has an associated B+Tree index to speed up lookups or\n> joins, and B+Tree indexes store data in sorted order.\n> However, indexing random values using B+Tree causes a lot of problems:\n> * Index pages will have a very low fill factor because the values come randomly. So, a page\n> of 8kB will end up storing just a few elements, therefore wasting a lot of space, both\n> on the disk and in the database memory, as index pages could be cached in the Buffer Pool.\n> * Because the B+Tree index needs to rebalance itself in order to maintain its equidistant\n> tree structure, the random key values will cause more index page splits and merges as\n> there is no pre-determined order of filling the tree structure.\n\nI'd say that is quite accurate.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Thu, 15 Dec 2022 12:59:05 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: time sorted UUIDs"
},
{
"msg_contents": "Tomas Vondra made an extension to have sequential uuid:\n\nhttps://www.2ndquadrant.com/en/blog/sequential-uuid-generators/\nhttps://github.com/tvondra/sequential-uuids\n\n\n--\nAdrien NAYRAT\n\n\n\n\n",
"msg_date": "Thu, 15 Dec 2022 13:05:42 +0100",
"msg_from": "Adrien Nayrat <adrien.nayrat@anayrat.info>",
"msg_from_op": false,
"msg_subject": "Re: time sorted UUIDs"
},
{
"msg_contents": "Hi Tim -- I am looking at the issue of random IDs (ie, UUIDs) as well. Did\nyou have a chance to try time sorted UUIDs as was suggested in one of the\nresponses?\n\nOn Mon, Apr 17, 2023 at 5:23 PM Tim Jones <tim.jones@mccarthy.co.nz> wrote:\n\n> Hi,\n>\n> could someone please comment on this article\n> https://vladmihalcea.com/uuid-database-primary-key/ specifically re the\n> comments (copied below) in regards to a Postgres database.\n>\n> ...\n>\n> But, using a random UUID as a database table Primary Key is a bad idea for\n> multiple reasons.\n>\n> First, the UUID is huge. Every single record will need 16 bytes for the\n> database identifier, and this impacts all associated Foreign Key columns as\n> well.\n>\n> Second, the Primary Key column usually has an associated B+Tree index to\n> speed up lookups or joins, and B+Tree indexes store data in sorted order.\n>\n> However, indexing random values using B+Tree causes a lot of problems:\n>\n> - Index pages will have a very low fill factor because the values come\n> randomly. So, a page of 8kB will end up storing just a few elements,\n> therefore wasting a lot of space, both on the disk and in the database\n> memory, as index pages could be cached in the Buffer Pool.\n> - Because the B+Tree index needs to rebalance itself in order to\n> maintain its equidistant tree structure, the random key values will cause\n> more index page splits and merges as there is no pre-determined order of\n> filling the tree structure.\n>\n> ...\n>\n>\n> Any other general comments about time sorted UUIDs would be welcome.\n>\n>\n>\n> Thanks,\n>\n> *Tim Jones*\n>",
"msg_date": "Mon, 17 Apr 2023 17:25:06 -0700",
"msg_from": "peter plachta <pplachta@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: time sorted UUIDs"
}
] |
[
{
"msg_contents": "Hi,\n We ran some load tests (DML inserts/deletes/updates on tens of hash-partitioned tables) and found that PGv14 slowed down 10-15% compared with PGv13. Same test server, same schema tables and data. From pg_stat_statements sql exec_time, we found that mean_exec_time increased by 5%-25% for the same SQL statements. Both v14 and v13 give very fast sql response times; just compare the %diff of the statements' mean_exec_time.\n Now, I ran a pgbench test on the same server with the steps below; it is similar to our application workload test. The small sql statements run very fast, but we still see v14 slow down 5-10% for DML compared with v13.\n 1.date;pgbench -i -s 6000 -F 85 -U pgbench --partition-method=hash --partitions=32\n 2.reboot OS to refresh buffer\n 3.run four rounds of test: date;pgbench -c 10 -j 10 -n -T 180 -U pgbench -M prepared\n\nComparing 14.6 and 13.9 on RHEL8.4, the \"add primary key\" step is much faster on 14.6 than on 13.9, but most inserts/updates slow down 5-10%. The table is very simple and the sql should be the same; no idea what contributes to the sql exec_time difference? 
Attached please find sql exec_time.\n\nI copy the sql here too,\n\nversion | min_exec_time | max_exec_time | mean_exec_time | calls | SQL\n------- | ------------- | ------------- | -------------- | ------- | ---\n13.9 | 0.002814 | 1.088559 | 0.004214798 | 3467468 | INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES ($1, $2, $3, $4, CURRENT_TIMESTAMP)\n14.6 | 0.003169 | 0.955241 | 0.004482497 | 3466665 | INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES ($1, $2, $3, $4, CURRENT_TIMESTAMP)\n%diff | 12.61549396 | | 6.351410351 | |\n\n13.9 | 0.013449 | 15.638027 | 1.18372356 | 3467468 | UPDATE pgbench_accounts SET abalance = abalance + $1 WHERE aid = $2\n14.6 | 0.016109 | 133.106913 | 1.228978518 | 3466665 | UPDATE pgbench_accounts SET abalance = abalance + $1 WHERE aid = $2\n%diff | 19.77842219 | | 3.823101875 | |\n\n13.9 | 0.005433 | 2.051736 | 0.008532748 | 3467468 | UPDATE pgbench_branches SET bbalance = bbalance + $1 WHERE bid = $2\n14.6 | 0.00625 | 1.847688 | 0.009062454 | 3466665 | UPDATE pgbench_branches SET bbalance = bbalance + $1 WHERE bid = $2\n%diff | 15.03773238 | | 6.207914363 | |\n\nThanks,\n\nJames",
"msg_date": "Thu, 15 Dec 2022 08:12:31 +0000",
"msg_from": "\"James Pang (chaolpan)\" <chaolpan@cisco.com>",
"msg_from_op": true,
"msg_subject": "DML sql execution time slow down PGv14 compared with PGv13"
},
{
"msg_contents": "On Thu, 15 Dec 2022 at 21:12, James Pang (chaolpan) <chaolpan@cisco.com> wrote:\n> We had some load test ( DML inserts/deletes/updates/ on tens of hash partition tables) and found that PGV14 slow down 10-15% compared with PGV13. Same test server, same schema tables and data. From pg_stat_statements, sql exec_time, we did found similar mean_exec_time increased from 5%-25% with same SQL statements. Both v14 and v13 give very fast sql response time, just compare the %diff from sql statements mean_exec_time.\n\nI tried this out on the tip of the PG13 and PG14 branch with the same\nscale of pgbench as you mentioned and I don't see the same slowdown as\nyou do.\n\nPG13:\ntps = 1711.980109 (excluding connections establishing)\n\nPG14:\ntps = 1736.466835 (without initial connection time)\n\nAs for why yours might be slower. You might want to have a look at\nthe EXPLAIN ANALYZE output for the UPDATE statements. You can recreate\nthe -M prepared by using PREPARE and EXECUTE. You might want to\nexecute the statements 6 times and see if the plan changes on the 6th\nexecution. It's likely not impossible that PG14 is using custom\nplans, whereas PG13 might be using generic plans for these updates.\nThere were some quite significant changes made to the query planner in\nPG14 that changed how planning works for UPDATEs and DELETEs from\npartitioned tables. Perhaps there's some reason there that the\ncustom/generic plan choice might differ. I see no reason why INSERT\nwould have become slower. Both the query planning and execution is\nvery different for INSERT.\n\nYou might also want to have a look at what perf says. If you have the\ndebug symbols installed, then you could just watch \"perf top --pid=<pg\nbackend running the pgbench workload>\". Maybe that will show you\nsomething interesting.\n\n\n",
"msg_date": "Thu, 15 Dec 2022 23:42:14 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DML sql execution time slow down PGv14 compared with PGv13"
},
{
"msg_contents": "Did you check pg_stat_statements? It looks like some SELECTs are better, but DML decreased.\r\n\r\n\r\n\r\n-----Original Message-----\r\nFrom: David Rowley <dgrowleyml@gmail.com> \r\nSent: Thursday, December 15, 2022 6:42 PM\r\nTo: James Pang (chaolpan) <chaolpan@cisco.com>\r\nCc: pgsql-performance@lists.postgresql.org\r\nSubject: Re: DML sql execution time slow down PGv14 compared with PGv13\r\n\r\nOn Thu, 15 Dec 2022 at 21:12, James Pang (chaolpan) <chaolpan@cisco.com> wrote:\r\n> We had some load test ( DML inserts/deletes/updates/ on tens of hash partition tables) and found that PGV14 slow down 10-15% compared with PGV13. Same test server, same schema tables and data. From pg_stat_statements, sql exec_time, we did found similar mean_exec_time increased from 5%-25% with same SQL statements. Both v14 and v13 give very fast sql response time, just compare the %diff from sql statements mean_exec_time.\r\n\r\nI tried this out on the tip of the PG13 and PG14 branch with the same scale of pgbench as you mentioned and I don't see the same slowdown as you do.\r\n\r\nPG13:\r\ntps = 1711.980109 (excluding connections establishing)\r\n\r\nPG14:\r\ntps = 1736.466835 (without initial connection time)\r\n\r\nAs for why yours might be slower. You might want to have a look at the EXPLAIN ANALYZE output for the UPDATE statements. You can recreate the -M prepared by using PREPARE and EXECUTE. You might want to execute the statements 6 times and see if the plan changes on the 6th execution. It's likely not impossible that PG14 is using custom plans, whereas PG13 might be using generic plans for these updates.\r\nThere were some quite significant changes made to the query planner in\r\nPG14 that changed how planning works for UPDATEs and DELETEs from partitioned tables. Perhaps there's some reason there that the custom/generic plan choice might differ. I see no reason why INSERT would have become slower. 
Both the query planning and execution is very different for INSERT.\r\n\r\nYou might also want to have a look at what perf says. If you have the debug symbols installed, then you could just watch \"perf top --pid=<pg backend running the pgbench workload>\". Maybe that will show you something interesting.\r\n",
"msg_date": "Thu, 15 Dec 2022 11:34:50 +0000",
"msg_from": "\"James Pang (chaolpan)\" <chaolpan@cisco.com>",
"msg_from_op": true,
"msg_subject": "RE: DML sql execution time slow down PGv14 compared with PGv13"
},
{
"msg_contents": "Did you check pg_stat_statements? It looks like SELECTs are better but DML decreased, so the average tps looks similar.\r\n\r\n\r\n\r\n-----Original Message-----\r\nFrom: David Rowley <dgrowleyml@gmail.com> \r\nSent: Thursday, December 15, 2022 6:42 PM\r\nTo: James Pang (chaolpan) <chaolpan@cisco.com>\r\nCc: pgsql-performance@lists.postgresql.org\r\nSubject: Re: DML sql execution time slow down PGv14 compared with PGv13\r\n\r\nOn Thu, 15 Dec 2022 at 21:12, James Pang (chaolpan) <chaolpan@cisco.com> wrote:\r\n> We had some load test ( DML inserts/deletes/updates/ on tens of hash partition tables) and found that PGV14 slow down 10-15% compared with PGV13. Same test server, same schema tables and data. From pg_stat_statements, sql exec_time, we did found similar mean_exec_time increased from 5%-25% with same SQL statements. Both v14 and v13 give very fast sql response time, just compare the %diff from sql statements mean_exec_time.\r\n\r\nI tried this out on the tip of the PG13 and PG14 branch with the same scale of pgbench as you mentioned and I don't see the same slowdown as you do.\r\n\r\nPG13:\r\ntps = 1711.980109 (excluding connections establishing)\r\n\r\nPG14:\r\ntps = 1736.466835 (without initial connection time)\r\n\r\nAs for why yours might be slower. You might want to have a look at the EXPLAIN ANALYZE output for the UPDATE statements. You can recreate the -M prepared by using PREPARE and EXECUTE. You might want to execute the statements 6 times and see if the plan changes on the 6th execution. It's likely not impossible that PG14 is using custom plans, whereas PG13 might be using generic plans for these updates.\r\nThere were some quite significant changes made to the query planner in\r\nPG14 that changed how planning works for UPDATEs and DELETEs from partitioned tables. Perhaps there's some reason there that the custom/generic plan choice might differ. I see no reason why INSERT would have become slower. 
Both the query planning and execution is very different for INSERT.\r\n\r\nYou might also want to have a look at what perf says. If you have the debug symbols installed, then you could just watch \"perf top --pid=<pg backend running the pgbench workload>\". Maybe that will show you something interesting.\r\n",
"msg_date": "Thu, 15 Dec 2022 12:19:54 +0000",
"msg_from": "\"James Pang (chaolpan)\" <chaolpan@cisco.com>",
"msg_from_op": true,
"msg_subject": "RE: DML sql execution time slow down PGv14 compared with PGv13"
}
] |
[
{
"msg_contents": "Hi,\n We ran some load tests (DML inserts/deletes/updates) and found that PGv14 slowed down 10-15% compared with PGv13. Same test server, same schema tables and data. From pg_stat_statements sql exec_time, we found that mean_exec_time increased by 20%-30% for the same SQL statements. Both v14 and v13 give very fast sql response times; just compare the %diff of the statements' mean_exec_time.\n Now, I ran a pgbench test on the same server with the steps below. The small sql statements run very fast, not like our application load test that showed INSERTs slowing down 20-30%, but we did see v14 slow down 5-10% for DML compared with v13.\n 1.date;pgbench -i -s 6000 -F 85 -U pgbench --partition-method=hash --partitions=32\n 2.reboot OS to refresh buffer\n 3.run four rounds of test: date;pgbench -c 10 -j 10 -n -T 180 -U pgbench -M prepared\n\nComparing 14.6 and 13.9 on RHEL8.4, the \"add primary key\" step is much faster on 14.6 than on 13.9, but most inserts/updates slow down 5-10%. The table is very simple and the sql should be the same; no idea what contributes to the sql exec_time difference? 
Attached please find sql exec_time.\n\nI copy the sql here too,\n\nversion | min_exec_time | max_exec_time | mean_exec_time | calls | SQL\n------- | ------------- | ------------- | -------------- | ------- | ---\n13.9 | 0.002814 | 1.088559 | 0.004214798 | 3467468 | INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES ($1, $2, $3, $4, CURRENT_TIMESTAMP)\n14.6 | 0.003169 | 0.955241 | 0.004482497 | 3466665 | INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES ($1, $2, $3, $4, CURRENT_TIMESTAMP)\n%diff | 12.61549396 | | 6.351410351 | |\n\n13.9 | 0.013449 | 15.638027 | 1.18372356 | 3467468 | UPDATE pgbench_accounts SET abalance = abalance + $1 WHERE aid = $2\n14.6 | 0.016109 | 133.106913 | 1.228978518 | 3466665 | UPDATE pgbench_accounts SET abalance = abalance + $1 WHERE aid = $2\n%diff | 19.77842219 | | 3.823101875 | |\n\n13.9 | 0.005433 | 2.051736 | 0.008532748 | 3467468 | UPDATE pgbench_branches SET bbalance = bbalance + $1 WHERE bid = $2\n14.6 | 0.00625 | 1.847688 | 0.009062454 | 3466665 | UPDATE pgbench_branches SET bbalance = bbalance + $1 WHERE bid = $2\n%diff | 15.03773238 | | 6.207914363 | |\n\nThanks,\n\nJames",
"msg_date": "Thu, 15 Dec 2022 08:22:29 +0000",
"msg_from": "\"James Pang (chaolpan)\" <chaolpan@cisco.com>",
"msg_from_op": true,
"msg_subject": "DML sql execution time slow down PGv14 compared with PGv13"
},
{
"msg_contents": "Hello James,\n\nCould you please add configurations of your PostgreSQL installations too?\nI also wonder why you skip vacuuming (-n parameter) before starting of\ntests.\n\nBest regards.\nSamed YILDIRIM\n\n\nOn Thu, 15 Dec 2022 at 10:22, James Pang (chaolpan) <chaolpan@cisco.com>\nwrote:\n\n> Hi,\n>\n> We had some load test ( DML inserts/deletes/updates) and found that\n> PGV14 slow down 10-15% compared with PGV13. Same test server, same schema\n> tables and data. From pg_stat_statements, sql exec_time, we did found\n> similar mean_exec_time increased from 20%-30% with same SQL statements.\n> Both v14 and v13 give very fast sql response time, just compare the %diff\n> from sql statements mean_exec_time.\n>\n> Now, I get a pgbench test in same server, the steps as below, small sql\n> statement running very fast, not like our application load test that show\n> INSERTS slow down 20-30%, but did see v14 slow down 5-10% for DML,compared\n> with v13.\n>\n> 1.date;pgbench -i -s 6000 -F 85 -U pgbench --partition-method=hash\n> --partitions=32\n>\n> 2.reboot OS to refresh buffer\n>\n> 3.run four rounds of test: date;pgbench -c 10 -j 10 -n -T 180 -U\n> pgbench -M prepared\n>\n>\n>\n> Compare 14.6 and 13.9 on RHEL8.4, the “add primary key” step 14.6 much\n> fast than 13.9, but most of insert/updates slow down 5-10%. The table is\n> very simple and sql should be same, no idea what contribute to the sql\n> exec_time difference? 
Attached please find sql exec_time.\n>\n> I copy the sql here too,\n>\n> version | min_exec_time | max_exec_time | mean_exec_time | calls | SQL\n> 13.9 | 0.002814 | 1.088559 | 0.004214798 | 3467468 | INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES ($1, $2, $3, $4, CURRENT_TIMESTAMP)\n> 14.6 | 0.003169 | 0.955241 | 0.004482497 | 3466665 | INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES ($1, $2, $3, $4, CURRENT_TIMESTAMP)\n> %diff | 12.61549396 | | 6.351410351 | |\n>\n> 13.9 | 0.013449 | 15.638027 | 1.18372356 | 3467468 | UPDATE pgbench_accounts SET abalance = abalance + $1 WHERE aid = $2\n> 14.6 | 0.016109 | 133.106913 | 1.228978518 | 3466665 | UPDATE pgbench_accounts SET abalance = abalance + $1 WHERE aid = $2\n> %diff | 19.77842219 | | 3.823101875 | |\n>\n> 13.9 | 0.005433 | 2.051736 | 0.008532748 | 3467468 | UPDATE pgbench_branches SET bbalance = bbalance + $1 WHERE bid = $2\n> 14.6 | 0.00625 | 1.847688 | 0.009062454 | 3466665 | UPDATE pgbench_branches SET bbalance = bbalance + $1 WHERE bid = $2\n> %diff | 15.03773238 | | 6.207914363 | |\n>\n> Thanks,\n>\n> James\n>",
"msg_date": "Thu, 15 Dec 2022 10:38:24 +0200",
"msg_from": "Samed YILDIRIM <samed@reddoc.net>",
"msg_from_op": false,
"msg_subject": "Re: DML sql execution time slow down PGv14 compared with PGv13"
},
{
"msg_contents": "When running pgbench -i, vacuuming was already done just before the pgbench tpc-b test; below is the output of the init loading. Same postgresql.conf for both v14 and v13, please check attached.\r\n\r\ndate;pgbench -i -s 6000 -F 85 -U pgbench --partitions 6\r\nFri Dec 9 05:54:17 GMT 2022\r\ndropping old tables...\r\ncreating tables...\r\ncreating 6 partitions...\r\ngenerating data (client-side)...\r\n600000000 of 600000000 tuples (100%) done (elapsed 577.18 s, remaining 0.00 s))\r\nvacuuming...\r\ncreating primary keys...\r\ndone in 1568.52 s (drop tables 8.40 s, create tables 0.02 s, client-side generate 579.66 s, vacuum 339.54 s, primary keys 640.91 s).\r\n\r\nThanks,\r\n\r\nJames\r\n\r\nFrom: Samed YILDIRIM <samed@reddoc.net>\r\nSent: Thursday, December 15, 2022 4:38 PM\r\nTo: James Pang (chaolpan) <chaolpan@cisco.com>\r\nCc: pgsql-performance@lists.postgresql.org\r\nSubject: Re: DML sql execution time slow down PGv14 compared with PGv13\r\n\r\nHello James,\r\n\r\nCould you please add configurations of your PostgreSQL installations too?\r\nI also wonder why you skip vacuuming (-n parameter) before starting of tests.\r\n\r\nBest regards.\r\nSamed YILDIRIM\r\n\r\n\r\nOn Thu, 15 Dec 2022 at 10:22, James Pang (chaolpan) <chaolpan@cisco.com<mailto:chaolpan@cisco.com>> wrote:\r\nHi,\r\n We had some load test ( DML inserts/deletes/updates) and found that PGV14 slow down 10-15% compared with PGV13. Same test server, same schema tables and data. From pg_stat_statements, sql exec_time, we did found similar mean_exec_time increased from 20%-30% with same SQL statements. 
Both v14 and v13 give very fast sql response time, just compare the %diff from sql statements mean_exec_time.\r\n Now, I get a pgbench test in same server, the steps as below, small sql statement running very fast, not like our application load test that show INSERTS slow down 20-30%, but did see v14 slow down 5-10% for DML,compared with v13.\r\n 1.date;pgbench -i -s 6000 -F 85 -U pgbench --partition-method=hash --partitions=32\r\n 2.reboot OS to refresh buffer\r\n 3.run four rounds of test: date;pgbench -c 10 -j 10 -n -T 180 -U pgbench -M prepared\r\n\r\nCompare 14.6 and 13.9 on RHEL8.4, the “add primary key” step 14.6 much fast than 13.9, but most of insert/updates slow down 5-10%. The table is very simple and sql should be same, no idea what contribute to the sql exec_time difference? Attached please find sql exec_time.\r\n\r\nI copy the sql here too,\r\n\r\nversion\r\nmin_exec_time\r\nmax_exec_time\r\nmean_exec_time\r\ncalls\r\nSQL\r\n13.9\r\n0.002814\r\n1.088559\r\n0.004214798\r\n3467468\r\nINSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES ($1, $2, $3, $4, CURRENT_TIMESTAMP)\r\n14.6\r\n0.003169\r\n0.955241\r\n0.004482497\r\n3466665\r\nINSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES ($1, $2, $3, $4, CURRENT_TIMESTAMP)\r\n%diff\r\n12.61549396\r\n\r\n6.351410351\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n13.9\r\n0.013449\r\n15.638027\r\n1.18372356\r\n3467468\r\nUPDATE pgbench_accounts SET abalance = abalance + $1 WHERE aid = $2\r\n14.6\r\n0.016109\r\n133.106913\r\n1.228978518\r\n3466665\r\nUPDATE pgbench_accounts SET abalance = abalance + $1 WHERE aid = $2\r\n%diff\r\n19.77842219\r\n\r\n3.823101875\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n13.9\r\n0.005433\r\n2.051736\r\n0.008532748\r\n3467468\r\nUPDATE pgbench_branches SET bbalance = bbalance + $1 WHERE bid = $2\r\n14.6\r\n0.00625\r\n1.847688\r\n0.009062454\r\n3466665\r\nUPDATE pgbench_branches SET bbalance = bbalance + $1 WHERE bid = 
$2\r\n%diff\r\n15.03773238\r\n\r\n6.207914363\r\n\r\n\r\n\r\nThanks,\r\n\r\nJames",
"msg_date": "Thu, 15 Dec 2022 08:44:30 +0000",
"msg_from": "\"James Pang (chaolpan)\" <chaolpan@cisco.com>",
"msg_from_op": true,
"msg_subject": "RE: DML sql execution time slow down PGv14 compared with PGv13"
},
{
"msg_contents": "Actually, with our application, which uses JDBC clients instead of pgbench, we saw a similar DML exec_time increase too.\r\n\r\nFrom: James Pang (chaolpan) <chaolpan@cisco.com>\r\nSent: Thursday, December 15, 2022 4:45 PM\r\nTo: Samed YILDIRIM <samed@reddoc.net>\r\nCc: pgsql-performance@lists.postgresql.org\r\nSubject: RE: DML sql execution time slow down PGv14 compared with PGv13\r\n\r\nWhen running pgbench -i, vacuuming was already done just before the pgbench tpc-b test; below is the output of the init loading. The same postgresql.conf was used for both v14 and v13, please check attached.\r\n\r\ndate;pgbench -i -s 6000 -F 85 -U pgbench --partitions 6\r\nFri Dec 9 05:54:17 GMT 2022\r\ndropping old tables...\r\ncreating tables...\r\ncreating 6 partitions...\r\ngenerating data (client-side)...\r\n600000000 of 600000000 tuples (100%) done (elapsed 577.18 s, remaining 0.00 s)\r\nvacuuming...\r\ncreating primary keys...\r\ndone in 1568.52 s (drop tables 8.40 s, create tables 0.02 s, client-side generate 579.66 s, vacuum 339.54 s, primary keys 640.91 s).\r\n\r\nThanks,\r\n\r\nJames\r\n\r\nFrom: Samed YILDIRIM <samed@reddoc.net<mailto:samed@reddoc.net>>\r\nSent: Thursday, December 15, 2022 4:38 PM\r\nTo: James Pang (chaolpan) <chaolpan@cisco.com<mailto:chaolpan@cisco.com>>\r\nCc: pgsql-performance@lists.postgresql.org<mailto:pgsql-performance@lists.postgresql.org>\r\nSubject: Re: DML sql execution time slow down PGv14 compared with PGv13\r\n\r\nHello James,\r\n\r\nCould you please add the configurations of your PostgreSQL installations too?\r\nI also wonder why you skip vacuuming (-n parameter) before starting the tests.\r\n\r\nBest regards.\r\nSamed YILDIRIM\r\n\r\n\r\nOn Thu, 15 Dec 2022 at 10:22, James Pang (chaolpan) <chaolpan@cisco.com<mailto:chaolpan@cisco.com>> wrote:\r\nHi,\r\n We ran some load tests (DML inserts/deletes/updates) and found that PGv14 slowed down 10-15% compared with PGv13. Same test server, same schema, tables and data. From pg_stat_statements sql exec_time, we found that mean_exec_time similarly increased by 20%-30% for the same SQL statements. Both v14 and v13 give very fast sql response times; we just compare the %diff of the statements' mean_exec_time.\r\n Now I ran a pgbench test on the same server with the steps below. Small sql statements run very fast, not like our application load test where INSERTs slowed down 20-30%, but we did see v14 slow down 5-10% for DML compared with v13.\r\n 1.date;pgbench -i -s 6000 -F 85 -U pgbench --partition-method=hash --partitions=32\r\n 2.reboot OS to refresh buffer\r\n 3.run four rounds of test: date;pgbench -c 10 -j 10 -n -T 180 -U pgbench -M prepared\r\n\r\nComparing 14.6 and 13.9 on RHEL8.4, the \u201cadd primary key\u201d step is much faster in 14.6 than in 13.9, but most inserts/updates slowed down 5-10%. The table is very simple and the sql should be the same; no idea what contributes to the sql exec_time difference. Attached please find the sql exec_time.\r\n\r\nI copy the sql here too:\r\n\r\nversion  min_exec_time  max_exec_time  mean_exec_time  calls    SQL\r\n13.9     0.002814       1.088559       0.004214798     3467468  INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES ($1, $2, $3, $4, CURRENT_TIMESTAMP)\r\n14.6     0.003169       0.955241       0.004482497     3466665  INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES ($1, $2, $3, $4, CURRENT_TIMESTAMP)\r\n%diff    12.61549396 (min)             6.351410351 (mean)\r\n\r\n13.9     0.013449       15.638027      1.18372356      3467468  UPDATE pgbench_accounts SET abalance = abalance + $1 WHERE aid = $2\r\n14.6     0.016109       133.106913     1.228978518     3466665  UPDATE pgbench_accounts SET abalance = abalance + $1 WHERE aid = $2\r\n%diff    19.77842219 (min)             3.823101875 (mean)\r\n\r\n13.9     0.005433       2.051736       0.008532748     3467468  UPDATE pgbench_branches SET bbalance = bbalance + $1 WHERE bid = $2\r\n14.6     0.00625        1.847688       0.009062454     3466665  UPDATE pgbench_branches SET bbalance = bbalance + $1 WHERE bid = $2\r\n%diff    15.03773238 (min)             6.207914363 (mean)\r\n\r\nThanks,\r\n\r\nJames",
"msg_date": "Thu, 15 Dec 2022 08:46:18 +0000",
"msg_from": "\"James Pang (chaolpan)\" <chaolpan@cisco.com>",
"msg_from_op": true,
"msg_subject": "RE: DML sql execution time slow down PGv14 compared with PGv13"
}
] |
[
{
"msg_contents": "Hello to everyone\n\nI'm new here, and I hope that my question is on the correct mailing list.\n\nWe use PostgreSQL to store JSONB in different tables; all tables have the same schema and all are indexed with a GIN index on the JSON data.\n\nWe use two properties of the JSON to locate data:\n\n{\n\t\"section_id\":\"1\",\n\t\"section_tipo\":\"numisdata3\"\n}\n\nThe issue:\nWhen we search our locator with section_id: 1 (or any number < 4), PostgreSQL takes around 40000, 5000, 8000 ms or more.\nWhen we search our locator with section_id: 4 (or any larger number), PostgreSQL takes around 100 ms ( ~ the expected time).\n\nThe following queries are run against a database with roughly 1 million rows in total, and we tested on PostgreSQL 13, 14 and 15 with similar results.\n\n_____________________________________________\n\nThe query for section_id: 1 (13 rows)\n\nEXPLAIN ANALYZE SELECT section_tipo, section_id, datos\nFROM \"matrix\"\nWHERE (\ndatos#>'{relations}' @> '[{\"section_id\":\"1\",\"section_tipo\":\"numisdata3\"}]'::jsonb)\nUNION ALL\nSELECT section_tipo, section_id, datos\nFROM \"matrix_activities\"\nWHERE (\ndatos#>'{relations}' @> '[{\"section_id\":\"1\",\"section_tipo\":\"numisdata3\"}]'::jsonb)\nUNION ALL\nSELECT section_tipo, section_id, datos\nFROM \"matrix_hierarchy\"\nWHERE (\ndatos#>'{relations}' @> '[{\"section_id\":\"1\",\"section_tipo\":\"numisdata3\"}]'::jsonb)\nUNION ALL\n SELECT section_tipo, section_id, datos\nFROM \"matrix_list\"\nWHERE (\ndatos#>'{relations}' @> '[{\"section_id\":\"1\",\"section_tipo\":\"numisdata3\"}]'::jsonb)\nORDER BY section_tipo, section_id ASC\nLIMIT ALL;\n\nQUERY PLAN\nSort (cost=8984.49..8991.16 rows=2669 width=1357) (actual time=8752.794..8752.797 rows=13 loops=1)\n Sort Key: matrix.section_tipo, matrix.section_id\n Sort Method: quicksort Memory: 47kB\n -> Append (cost=92.21..8832.59 rows=2669 width=1357) (actual time=415.709..8741.565 rows=13 loops=1)\n -> Bitmap Heap Scan on matrix (cost=92.21..199.36 rows=27 
width=1144) (actual time=415.708..8325.296 rows=11 loops=1)\n Recheck Cond: ((datos #> '{relations}'::text[]) @> '[{\"section_id\": \"1\", \"section_tipo\": \"numisdata3\"}]'::jsonb)\n Rows Removed by Index Recheck: 272037\n Heap Blocks: exact=34164 lossy=33104\n -> Bitmap Index Scan on matrix_relations_idx (cost=0.00..92.20 rows=27 width=0) (actual time=61.462..61.462 rows=155031 loops=1)\n Index Cond: ((datos #> '{relations}'::text[]) @> '[{\"section_id\": \"1\", \"section_tipo\": \"numisdata3\"}]'::jsonb)\n -> Seq Scan on matrix_activities (cost=0.00..0.00 rows=1 width=68) (actual time=0.012..0.012 rows=0 loops=1)\n Filter: ((datos #> '{relations}'::text[]) @> '[{\"section_id\": \"1\", \"section_tipo\": \"numisdata3\"}]'::jsonb)\n -> Bitmap Heap Scan on matrix_hierarchy (cost=52.26..8492.67 rows=2614 width=1362) (actual time=269.624..414.954 rows=2 loops=1)\n Recheck Cond: ((datos #> '{relations}'::text[]) @> '[{\"section_id\": \"1\", \"section_tipo\": \"numisdata3\"}]'::jsonb)\n Rows Removed by Index Recheck: 5043\n Heap Blocks: exact=3034\n -> Bitmap Index Scan on matrix_hierarchy_relations_idx (cost=0.00..51.61 rows=2614 width=0) (actual time=9.529..9.529 rows=5049 loops=1)\n Index Cond: ((datos #> '{relations}'::text[]) @> '[{\"section_id\": \"1\", \"section_tipo\": \"numisdata3\"}]'::jsonb)\n -> Bitmap Heap Scan on matrix_list (cost=12.21..100.53 rows=27 width=1161) (actual time=1.260..1.260 rows=0 loops=1)\n Recheck Cond: ((datos #> '{relations}'::text[]) @> '[{\"section_id\": \"1\", \"section_tipo\": \"numisdata3\"}]'::jsonb)\n -> Bitmap Index Scan on matrix_list_relations_idx (cost=0.00..12.21 rows=27 width=0) (actual time=1.258..1.258 rows=0 loops=1)\n Index Cond: ((datos #> '{relations}'::text[]) @> '[{\"section_id\": \"1\", \"section_tipo\": \"numisdata3\"}]'::jsonb)\nPlanning Time: 33.625 ms\nExecution Time: 8753.461 ms\n\n_____________________________________________\n\nThe query for section_id: 2 (18 rows)\n\n\nEXPLAIN ANALYZE SELECT section_tipo, 
section_id, datos\nFROM \"matrix\"\nWHERE (\ndatos#>'{relations}' @> '[{\"section_id\":\"2\",\"section_tipo\":\"numisdata3\"}]'::jsonb)\nUNION ALL\nSELECT section_tipo, section_id, datos\nFROM \"matrix_activities\"\nWHERE (\ndatos#>'{relations}' @> '[{\"section_id\":\"2\",\"section_tipo\":\"numisdata3\"}]'::jsonb)\nUNION ALL\nSELECT section_tipo, section_id, datos\nFROM \"matrix_hierarchy\"\nWHERE (\ndatos#>'{relations}' @> '[{\"section_id\":\"2\",\"section_tipo\":\"numisdata3\"}]'::jsonb)\nUNION ALL\n SELECT section_tipo, section_id, datos\nFROM \"matrix_list\"\nWHERE (\ndatos#>'{relations}' @> '[{\"section_id\":\"2\",\"section_tipo\":\"numisdata3\"}]'::jsonb)\nORDER BY section_tipo, section_id ASC\nLIMIT ALL;\n\nSort (cost=8984.49..8991.16 rows=2669 width=1357) (actual time=5236.090..5236.097 rows=18 loops=1)\n Sort Key: matrix.section_tipo, matrix.section_id\n Sort Method: quicksort Memory: 57kB\n -> Append (cost=92.21..8832.59 rows=2669 width=1357) (actual time=200.244..5235.964 rows=18 loops=1)\n -> Bitmap Heap Scan on matrix (cost=92.21..199.36 rows=27 width=1144) (actual time=200.244..5188.015 rows=16 loops=1)\n Recheck Cond: ((datos #> '{relations}'::text[]) @> '[{\"section_id\": \"2\", \"section_tipo\": \"numisdata3\"}]'::jsonb)\n Rows Removed by Index Recheck: 270935\n Heap Blocks: exact=33885 lossy=33106\n -> Bitmap Index Scan on matrix_relations_idx (cost=0.00..92.20 rows=27 width=0) (actual time=51.763..51.764 rows=153659 loops=1)\n Index Cond: ((datos #> '{relations}'::text[]) @> '[{\"section_id\": \"2\", \"section_tipo\": \"numisdata3\"}]'::jsonb)\n -> Seq Scan on matrix_activities (cost=0.00..0.00 rows=1 width=68) (actual time=0.022..0.022 rows=0 loops=1)\n Filter: ((datos #> '{relations}'::text[]) @> '[{\"section_id\": \"2\", \"section_tipo\": \"numisdata3\"}]'::jsonb)\n -> Bitmap Heap Scan on matrix_hierarchy (cost=52.26..8492.67 rows=2614 width=1362) (actual time=5.112..47.171 rows=2 loops=1)\n Recheck Cond: ((datos #> '{relations}'::text[]) @> 
'[{\"section_id\": \"2\", \"section_tipo\": \"numisdata3\"}]'::jsonb)\n Rows Removed by Index Recheck: 2479\n Heap Blocks: exact=1805\n -> Bitmap Index Scan on matrix_hierarchy_relations_idx (cost=0.00..51.61 rows=2614 width=0) (actual time=4.834..4.834 rows=2484 loops=1)\n Index Cond: ((datos #> '{relations}'::text[]) @> '[{\"section_id\": \"2\", \"section_tipo\": \"numisdata3\"}]'::jsonb)\n -> Bitmap Heap Scan on matrix_list (cost=12.21..100.53 rows=27 width=1161) (actual time=0.738..0.738 rows=0 loops=1)\n Recheck Cond: ((datos #> '{relations}'::text[]) @> '[{\"section_id\": \"2\", \"section_tipo\": \"numisdata3\"}]'::jsonb)\n -> Bitmap Index Scan on matrix_list_relations_idx (cost=0.00..12.21 rows=27 width=0) (actual time=0.735..0.735 rows=0 loops=1)\n Index Cond: ((datos #> '{relations}'::text[]) @> '[{\"section_id\": \"2\", \"section_tipo\": \"numisdata3\"}]'::jsonb)\nPlanning Time: 30.869 ms\nExecution Time: 5236.796 ms\n\n_____________________________________________\n\nThe query for section_id: 3 (7 rows)\n\nEXPLAIN ANALYZE SELECT section_tipo, section_id, datos\nFROM \"matrix\"\nWHERE (\ndatos#>'{relations}' @> '[{\"section_id\":\"3\",\"section_tipo\":\"numisdata3\"}]'::jsonb)\nUNION ALL\nSELECT section_tipo, section_id, datos\nFROM \"matrix_activities\"\nWHERE (\ndatos#>'{relations}' @> '[{\"section_id\":\"3\",\"section_tipo\":\"numisdata3\"}]'::jsonb)\nUNION ALL\nSELECT section_tipo, section_id, datos\nFROM \"matrix_hierarchy\"\nWHERE (\ndatos#>'{relations}' @> '[{\"section_id\":\"3\",\"section_tipo\":\"numisdata3\"}]'::jsonb)\nUNION ALL\n SELECT section_tipo, section_id, datos\nFROM \"matrix_list\"\nWHERE (\ndatos#>'{relations}' @> '[{\"section_id\":\"3\",\"section_tipo\":\"numisdata3\"}]'::jsonb)\nORDER BY section_tipo, section_id ASC\nLIMIT ALL;\n\nSort (cost=8984.49..8991.16 rows=2669 width=1357) (actual time=1796.808..1796.813 rows=7 loops=1)\n Sort Key: matrix.section_tipo, matrix.section_id\n Sort Method: quicksort Memory: 36kB\n -> Append 
(cost=92.21..8832.59 rows=2669 width=1357) (actual time=114.715..1796.731 rows=7 loops=1)\n -> Bitmap Heap Scan on matrix (cost=92.21..199.36 rows=27 width=1144) (actual time=114.714..1403.005 rows=6 loops=1)\n Recheck Cond: ((datos #> '{relations}'::text[]) @> '[{\"section_id\": \"3\", \"section_tipo\": \"numisdata3\"}]'::jsonb)\n Rows Removed by Index Recheck: 63200\n Heap Blocks: exact=39788\n -> Bitmap Index Scan on matrix_relations_idx (cost=0.00..92.20 rows=27 width=0) (actual time=52.239..52.239 rows=63248 loops=1)\n Index Cond: ((datos #> '{relations}'::text[]) @> '[{\"section_id\": \"3\", \"section_tipo\": \"numisdata3\"}]'::jsonb)\n -> Seq Scan on matrix_activities (cost=0.00..0.00 rows=1 width=68) (actual time=0.018..0.018 rows=0 loops=1)\n Filter: ((datos #> '{relations}'::text[]) @> '[{\"section_id\": \"3\", \"section_tipo\": \"numisdata3\"}]'::jsonb)\n -> Bitmap Heap Scan on matrix_hierarchy (cost=52.26..8492.67 rows=2614 width=1362) (actual time=329.263..392.708 rows=1 loops=1)\n Recheck Cond: ((datos #> '{relations}'::text[]) @> '[{\"section_id\": \"3\", \"section_tipo\": \"numisdata3\"}]'::jsonb)\n Rows Removed by Index Recheck: 4925\n Heap Blocks: exact=2996\n -> Bitmap Index Scan on matrix_hierarchy_relations_idx (cost=0.00..51.61 rows=2614 width=0) (actual time=6.059..6.059 rows=4930 loops=1)\n Index Cond: ((datos #> '{relations}'::text[]) @> '[{\"section_id\": \"3\", \"section_tipo\": \"numisdata3\"}]'::jsonb)\n -> Bitmap Heap Scan on matrix_list (cost=12.21..100.53 rows=27 width=1161) (actual time=0.988..0.988 rows=0 loops=1)\n Recheck Cond: ((datos #> '{relations}'::text[]) @> '[{\"section_id\": \"3\", \"section_tipo\": \"numisdata3\"}]'::jsonb)\n -> Bitmap Index Scan on matrix_list_relations_idx (cost=0.00..12.21 rows=27 width=0) (actual time=0.985..0.985 rows=0 loops=1)\n Index Cond: ((datos #> '{relations}'::text[]) @> '[{\"section_id\": \"3\", \"section_tipo\": \"numisdata3\"}]'::jsonb)\nPlanning Time: 4.339 ms\nExecution Time: 1797.240 
ms\n\n_____________________________________________\n\nThe query for section_id: 4 (6 rows)\n\nEXPLAIN ANALYZE SELECT section_tipo, section_id, datos\nFROM \"matrix\"\nWHERE (\ndatos#>'{relations}' @> '[{\"section_id\":\"4\",\"section_tipo\":\"numisdata3\"}]'::jsonb)\nUNION ALL\nSELECT section_tipo, section_id, datos\nFROM \"matrix_activities\"\nWHERE (\ndatos#>'{relations}' @> '[{\"section_id\":\"4\",\"section_tipo\":\"numisdata3\"}]'::jsonb)\nUNION ALL\nSELECT section_tipo, section_id, datos\nFROM \"matrix_hierarchy\"\nWHERE (\ndatos#>'{relations}' @> '[{\"section_id\":\"4\",\"section_tipo\":\"numisdata3\"}]'::jsonb)\nUNION ALL\n SELECT section_tipo, section_id, datos\nFROM \"matrix_list\"\nWHERE (\ndatos#>'{relations}' @> '[{\"section_id\":\"4\",\"section_tipo\":\"numisdata3\"}]'::jsonb)\nORDER BY section_tipo, section_id ASC\nLIMIT ALL;\n\nQUERY PLAN\nSort (cost=8984.49..8991.16 rows=2669 width=1357) (actual time=25.171..25.174 rows=6 loops=1)\n Sort Key: matrix.section_tipo, matrix.section_id\n Sort Method: quicksort Memory: 34kB\n -> Append (cost=92.21..8832.59 rows=2669 width=1357) (actual time=6.227..25.112 rows=6 loops=1)\n -> Bitmap Heap Scan on matrix (cost=92.21..199.36 rows=27 width=1144) (actual time=6.227..24.955 rows=4 loops=1)\n Recheck Cond: ((datos #> '{relations}'::text[]) @> '[{\"section_id\": \"4\", \"section_tipo\": \"numisdata3\"}]'::jsonb)\n Rows Removed by Index Recheck: 1758\n Heap Blocks: exact=1469\n -> Bitmap Index Scan on matrix_relations_idx (cost=0.00..92.20 rows=27 width=0) (actual time=3.139..3.139 rows=1765 loops=1)\n Index Cond: ((datos #> '{relations}'::text[]) @> '[{\"section_id\": \"4\", \"section_tipo\": \"numisdata3\"}]'::jsonb)\n -> Seq Scan on matrix_activities (cost=0.00..0.00 rows=1 width=68) (actual time=0.010..0.010 rows=0 loops=1)\n Filter: ((datos #> '{relations}'::text[]) @> '[{\"section_id\": \"4\", \"section_tipo\": \"numisdata3\"}]'::jsonb)\n -> Bitmap Heap Scan on matrix_hierarchy (cost=52.26..8492.67 rows=2614 
width=1362) (actual time=0.101..0.126 rows=2 loops=1)\n Recheck Cond: ((datos #> '{relations}'::text[]) @> '[{\"section_id\": \"4\", \"section_tipo\": \"numisdata3\"}]'::jsonb)\n Rows Removed by Index Recheck: 1\n Heap Blocks: exact=3\n -> Bitmap Index Scan on matrix_hierarchy_relations_idx (cost=0.00..51.61 rows=2614 width=0) (actual time=0.088..0.088 rows=3 loops=1)\n Index Cond: ((datos #> '{relations}'::text[]) @> '[{\"section_id\": \"4\", \"section_tipo\": \"numisdata3\"}]'::jsonb)\n -> Bitmap Heap Scan on matrix_list (cost=12.21..100.53 rows=27 width=1161) (actual time=0.015..0.015 rows=0 loops=1)\n Recheck Cond: ((datos #> '{relations}'::text[]) @> '[{\"section_id\": \"4\", \"section_tipo\": \"numisdata3\"}]'::jsonb)\n -> Bitmap Index Scan on matrix_list_relations_idx (cost=0.00..12.21 rows=27 width=0) (actual time=0.015..0.015 rows=0 loops=1)\n Index Cond: ((datos #> '{relations}'::text[]) @> '[{\"section_id\": \"4\", \"section_tipo\": \"numisdata3\"}]'::jsonb)\nPlanning Time: 3.579 ms\nExecution Time: 25.278 ms\n\n_____________________________________________\n\nThe query for section_id: 5 (24 rows)\n\nEXPLAIN ANALYZE SELECT section_tipo, section_id, datos\nFROM \"matrix\"\nWHERE (\ndatos#>'{relations}' @> '[{\"section_id\":\"5\",\"section_tipo\":\"numisdata3\"}]'::jsonb)\nUNION ALL\nSELECT section_tipo, section_id, datos\nFROM \"matrix_activities\"\nWHERE (\ndatos#>'{relations}' @> '[{\"section_id\":\"5\",\"section_tipo\":\"numisdata3\"}]'::jsonb)\nUNION ALL\nSELECT section_tipo, section_id, datos\nFROM \"matrix_hierarchy\"\nWHERE (\ndatos#>'{relations}' @> '[{\"section_id\":\"5\",\"section_tipo\":\"numisdata3\"}]'::jsonb)\nUNION ALL\n SELECT section_tipo, section_id, datos\nFROM \"matrix_list\"\nWHERE (\ndatos#>'{relations}' @> '[{\"section_id\":\"5\",\"section_tipo\":\"numisdata3\"}]'::jsonb)\nORDER BY section_tipo, section_id ASC\nLIMIT ALL;\n\n\nQUERY PLAN\nSort (cost=8984.49..8991.16 rows=2669 width=1357) (actual time=111.243..111.249 rows=28 
loops=1)\n Sort Key: matrix.section_tipo, matrix.section_id\n Sort Method: quicksort Memory: 69kB\n -> Append (cost=92.21..8832.59 rows=2669 width=1357) (actual time=13.804..111.086 rows=28 loops=1)\n -> Bitmap Heap Scan on matrix (cost=92.21..199.36 rows=27 width=1144) (actual time=13.803..108.578 rows=26 loops=1)\n Recheck Cond: ((datos #> '{relations}'::text[]) @> '[{\"section_id\": \"5\", \"section_tipo\": \"numisdata3\"}]'::jsonb)\n Rows Removed by Index Recheck: 5967\n Heap Blocks: exact=4691\n -> Bitmap Index Scan on matrix_relations_idx (cost=0.00..92.20 rows=27 width=0) (actual time=11.815..11.815 rows=6000 loops=1)\n Index Cond: ((datos #> '{relations}'::text[]) @> '[{\"section_id\": \"5\", \"section_tipo\": \"numisdata3\"}]'::jsonb)\n -> Seq Scan on matrix_activities (cost=0.00..0.00 rows=1 width=68) (actual time=0.011..0.011 rows=0 loops=1)\n Filter: ((datos #> '{relations}'::text[]) @> '[{\"section_id\": \"5\", \"section_tipo\": \"numisdata3\"}]'::jsonb)\n -> Bitmap Heap Scan on matrix_hierarchy (cost=52.26..8492.67 rows=2614 width=1362) (actual time=2.034..2.052 rows=2 loops=1)\n Recheck Cond: ((datos #> '{relations}'::text[]) @> '[{\"section_id\": \"5\", \"section_tipo\": \"numisdata3\"}]'::jsonb)\n Rows Removed by Index Recheck: 1\n Heap Blocks: exact=3\n -> Bitmap Index Scan on matrix_hierarchy_relations_idx (cost=0.00..51.61 rows=2614 width=0) (actual time=1.987..1.987 rows=4 loops=1)\n Index Cond: ((datos #> '{relations}'::text[]) @> '[{\"section_id\": \"5\", \"section_tipo\": \"numisdata3\"}]'::jsonb)\n -> Bitmap Heap Scan on matrix_list (cost=12.21..100.53 rows=27 width=1161) (actual time=0.426..0.426 rows=0 loops=1)\n Recheck Cond: ((datos #> '{relations}'::text[]) @> '[{\"section_id\": \"5\", \"section_tipo\": \"numisdata3\"}]'::jsonb)\n -> Bitmap Index Scan on matrix_list_relations_idx (cost=0.00..12.21 rows=27 width=0) (actual time=0.418..0.418 rows=0 loops=1)\n Index Cond: ((datos #> '{relations}'::text[]) @> '[{\"section_id\": \"5\", 
\"section_tipo\": \"numisdata3\"}]'::jsonb)\nPlanning Time: 4.328 ms\nExecution Time: 111.514 ms\n\n_____________________________________________\n\nWe have checked the index and it's ok; we did vacuum, vacuum analyze...\n\nWe can understand that the first 3 searches for section_id didn't use the index... But... how can we fix it?\n\nThanks!\n\nBest\nAlex\nalex@render.es\n\n\n657661974 · Denia 50, bajo izquierda · 46006 · Valencia
msExecution Time: 25.278 ms_____________________________________________The query for section_id: 5 (24 rows)EXPLAIN ANALYZE SELECT section_tipo, section_id, datosFROM \"matrix\"WHERE (datos#>'{relations}' @> '[{\"section_id\":\"5\",\"section_tipo\":\"numisdata3\"}]'::jsonb)UNION ALLSELECT section_tipo, section_id, datosFROM \"matrix_activities\"WHERE (datos#>'{relations}' @> '[{\"section_id\":\"5\",\"section_tipo\":\"numisdata3\"}]'::jsonb)UNION ALLSELECT section_tipo, section_id, datosFROM \"matrix_hierarchy\"WHERE (datos#>'{relations}' @> '[{\"section_id\":\"5\",\"section_tipo\":\"numisdata3\"}]'::jsonb)UNION ALL SELECT section_tipo, section_id, datosFROM \"matrix_list\"WHERE (datos#>'{relations}' @> '[{\"section_id\":\"5\",\"section_tipo\":\"numisdata3\"}]'::jsonb)ORDER BY section_tipo, section_id ASCLIMIT ALL;QUERY PLANSort (cost=8984.49..8991.16 rows=2669 width=1357) (actual time=111.243..111.249 rows=28 loops=1) Sort Key: matrix.section_tipo, matrix.section_id Sort Method: quicksort Memory: 69kB -> Append (cost=92.21..8832.59 rows=2669 width=1357) (actual time=13.804..111.086 rows=28 loops=1) -> Bitmap Heap Scan on matrix (cost=92.21..199.36 rows=27 width=1144) (actual time=13.803..108.578 rows=26 loops=1) Recheck Cond: ((datos #> '{relations}'::text[]) @> '[{\"section_id\": \"5\", \"section_tipo\": \"numisdata3\"}]'::jsonb) Rows Removed by Index Recheck: 5967 Heap Blocks: exact=4691 -> Bitmap Index Scan on matrix_relations_idx (cost=0.00..92.20 rows=27 width=0) (actual time=11.815..11.815 rows=6000 loops=1) Index Cond: ((datos #> '{relations}'::text[]) @> '[{\"section_id\": \"5\", \"section_tipo\": \"numisdata3\"}]'::jsonb) -> Seq Scan on matrix_activities (cost=0.00..0.00 rows=1 width=68) (actual time=0.011..0.011 rows=0 loops=1) Filter: ((datos #> '{relations}'::text[]) @> '[{\"section_id\": \"5\", \"section_tipo\": \"numisdata3\"}]'::jsonb) -> Bitmap Heap Scan on matrix_hierarchy (cost=52.26..8492.67 rows=2614 width=1362) (actual time=2.034..2.052 rows=2 
loops=1) Recheck Cond: ((datos #> '{relations}'::text[]) @> '[{\"section_id\": \"5\", \"section_tipo\": \"numisdata3\"}]'::jsonb) Rows Removed by Index Recheck: 1 Heap Blocks: exact=3 -> Bitmap Index Scan on matrix_hierarchy_relations_idx (cost=0.00..51.61 rows=2614 width=0) (actual time=1.987..1.987 rows=4 loops=1) Index Cond: ((datos #> '{relations}'::text[]) @> '[{\"section_id\": \"5\", \"section_tipo\": \"numisdata3\"}]'::jsonb) -> Bitmap Heap Scan on matrix_list (cost=12.21..100.53 rows=27 width=1161) (actual time=0.426..0.426 rows=0 loops=1) Recheck Cond: ((datos #> '{relations}'::text[]) @> '[{\"section_id\": \"5\", \"section_tipo\": \"numisdata3\"}]'::jsonb) -> Bitmap Index Scan on matrix_list_relations_idx (cost=0.00..12.21 rows=27 width=0) (actual time=0.418..0.418 rows=0 loops=1) Index Cond: ((datos #> '{relations}'::text[]) @> '[{\"section_id\": \"5\", \"section_tipo\": \"numisdata3\"}]'::jsonb)Planning Time: 4.328 msExecution Time: 111.514 ms_____________________________________________We have checked the index and it's ok, we did vacuum, vacuum analyze... We can understand that the first 3 searches for section_id didn't use the index... But... how can we fix it? Thanks!\nBestAlexalex@render.es657661974 · Denia 50, bajo izquierda · 46006 · Valencia",
"msg_date": "Fri, 16 Dec 2022 14:30:42 +0100",
"msg_from": "Render Comunicacion S.L. <alex@render.es>",
"msg_from_op": true,
"msg_subject": "JSON down performacen when id:1"
},
{
"msg_contents": "\"Render Comunicacion S.L.\" <alex@render.es> writes:\n> The issue:\n> When we search our locator with section_id: 1 (or any number < 4), PostgreSQL takes around 40000, 5000, 8000ms or more.\n> When we search our locator with section_id: 4 (or any other bigger number), PostgreSQL takes around 100 ms. ( ~ expected time)\n\nYour index is providing pretty awful performance:\n\n> -> Bitmap Heap Scan on matrix (cost=92.21..199.36 rows=27 width=1144) (actual time=415.708..8325.296 rows=11 loops=1)\n> Recheck Cond: ((datos #> '{relations}'::text[]) @> '[{\"section_id\": \"1\", \"section_tipo\": \"numisdata3\"}]'::jsonb)\n> Rows Removed by Index Recheck: 272037\n> Heap Blocks: exact=34164 lossy=33104\n> -> Bitmap Index Scan on matrix_relations_idx (cost=0.00..92.20 rows=27 width=0) (actual time=61.462..61.462 rows=155031 loops=1)\n> Index Cond: ((datos #> '{relations}'::text[]) @> '[{\"section_id\": \"1\", \"section_tipo\": \"numisdata3\"}]'::jsonb)\n\nI read that as 155K hits delivered by the index, of which only 11 were\nreal matches. To make matters worse, with so many hits the bitmap was\nallowed to become \"lossy\" (ie track some hits at page-level not\ntuple-level) to conserve memory, so that the executor actually had to\ncheck even more than 155K rows.\n\nYou need a better index. It might be that switching to a jsonb_path_ops\nindex would be enough to fix it, or you might need to build an expression\nindex matched specifically to this type of query. See\n\nhttps://www.postgresql.org/docs/current/datatype-json.html#JSON-INDEXING\n\nAlso, if any of the terminology there doesn't make sense, read\n\nhttps://www.postgresql.org/docs/current/indexes.html\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 16 Dec 2022 10:06:51 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: JSON down performacen when id:1"
},
{
"msg_contents": "Hi Tom\n\nThanks for your quick answer.\n\nI did not mention that the index for all tables is:\n\nCREATE INDEX IF NOT EXISTS matrix_relations_idx\n ON public.matrix USING gin\n ((datos #> '{relations}') jsonb_path_ops) TABLESPACE pg_default;\n\nAnd we tried with and without the jsonb_path_ops option, with similar results.\n\nMy question is: what is the difference between the first 3 searches and the searches for id 4 and higher?\nWe don't know why in the first 3 cases it seems that PostgreSQL doesn't use the index (the result takes the same time with or without the index), while for every number higher than 3 it works perfectly...\n\nWe are really desperate about this...\n\nThanks in advance.\n\nBest\nAlex\nalex@render.es\n\n657661974 · Denia 50, bajo izquierda · 46006 · Valencia\n\n> On 16 Dec 2022, at 16:06, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> \"Render Comunicacion S.L.\" <alex@render.es> writes:\n>> The issue:\n>> When we search our locator with section_id: 1 (or any number < 4), PostgreSQL takes around 40000, 5000, 8000ms or more.\n>> When we search our locator with section_id: 4 (or any other bigger number), PostgreSQL takes around 100 ms. ( ~ expected time)\n> \n> Your index is providing pretty awful performance:\n> \n>> -> Bitmap Heap Scan on matrix (cost=92.21..199.36 rows=27 width=1144) (actual time=415.708..8325.296 rows=11 loops=1)\n>> Recheck Cond: ((datos #> '{relations}'::text[]) @> '[{\"section_id\": \"1\", \"section_tipo\": \"numisdata3\"}]'::jsonb)\n>> Rows Removed by Index Recheck: 272037\n>> Heap Blocks: exact=34164 lossy=33104\n>> -> Bitmap Index Scan on matrix_relations_idx (cost=0.00..92.20 rows=27 width=0) (actual time=61.462..61.462 rows=155031 loops=1)\n>> Index Cond: ((datos #> '{relations}'::text[]) @> '[{\"section_id\": \"1\", \"section_tipo\": \"numisdata3\"}]'::jsonb)\n> \n> I read that as 155K hits delivered by the index, of which only 11 were\n> real matches. 
To make matters worse, with so many hits the bitmap was\n> allowed to become \"lossy\" (ie track some hits at page-level not\n> tuple-level) to conserve memory, so that the executor actually had to\n> check even more than 155K rows.\n> \n> You need a better index. It might be that switching to a jsonb_path_ops\n> index would be enough to fix it, or you might need to build an expression\n> index matched specifically to this type of query. See\n> \n> https://www.postgresql.org/docs/current/datatype-json.html#JSON-INDEXING\n> \n> Also, if any of the terminology there doesn't make sense, read\n> \n> https://www.postgresql.org/docs/current/indexes.html\n> \n> \t\t\tregards, tom lane\n> \n>",
"msg_date": "Fri, 16 Dec 2022 18:01:47 +0100",
"msg_from": "\"Render Comunicacion S.L.\" <alex@render.es>",
"msg_from_op": false,
"msg_subject": "Re: JSON down performacen when id:1"
}
] |
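A note on the lossy bitmaps visible in the slow plans of the thread above: the sketch below restates the two levers the replies point at. The index DDL mirrors the one already posted in the thread; the `work_mem` value is an illustrative assumption, not a recommendation tuned for that machine.

```sql
-- Sketch under assumptions; object names are taken from the thread above.
-- jsonb_path_ops GIN indexes support only the @> containment operator,
-- but they hash entire paths, so they usually return far fewer false
-- candidates than the default jsonb_ops opclass:
CREATE INDEX IF NOT EXISTS matrix_relations_idx
    ON public.matrix USING gin
    ((datos #> '{relations}') jsonb_path_ops);

-- "Heap Blocks: exact=34164 lossy=33104" in the slow plan means the
-- bitmap overflowed work_mem and fell back to page-level (lossy)
-- entries, so every row on those pages had to be rechecked. A larger
-- session-level work_mem (the value here is an assumption) keeps the
-- bitmap exact:
SET work_mem = '256MB';
```

Even with an exact bitmap, a containment probe for a very common value (such as section_id 1-3 in the thread) still delivers many candidate rows, so the recheck cost only disappears when the index itself becomes more selective.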
[
{
"msg_contents": "Hi! Sorry to post to this mailing list, but I could not find many tips working around HashAggregate issues.\n\nIn a research project involving text repetition analysis (on top of public documents)\nI have a VirtualMachine (CPU AMD Epyc 7502P, 128GB RAM, 12TB HDD, 2TB SSD),\nrunning postgres 12.12 (Ubuntu 12.12-0ubuntu0.20.04.1)\nand some tables with many rows:\n\nnsoamt=> ANALYSE VERBOSE SentenceSource;\nINFO: analyzing \"public.sentencesource\"\nINFO: \"sentencesource\": scanned 30000 of 9028500 pages, containing 3811990 live rows and 268323 dead rows; 30000 rows in sample, 1147218391 estimated total rows\nANALYZE\nnsoamt=> ANALYSE VERBOSE SentenceToolCheck;\nINFO: analyzing \"public.sentencetoolcheck\"\nINFO: \"sentencetoolcheck\": scanned 30000 of 33536425 pages, containing 498508 live rows and 25143 dead rows; 30000 rows in sample, 557272538 estimated total rows\nANALYZE\nnsoamt=> ANALYZE VERBOSE Document;\nINFO: analyzing \"public.document\"\nINFO: \"document\": scanned 30000 of 34570 pages, containing 1371662 live rows and 30366 dead rows; 30000 rows in sample, 1580612 estimated total rows\nANALYZE\n\nThe estimates for the number of rows above are accurate.\n\nI am running this query\n\n SELECT COUNT(*), COUNT(NULLIF(Stchk.haserrors,'f'))\n FROM SentenceToolCheck Stchk\n WHERE EXISTS (SELECT SSrc.sentence\n FROM SentenceSource SSrc, Document Doc\n WHERE SSrc.sentence = Stchk.id\n AND Doc.id = SSrc.document\n AND Doc.source ILIKE '/bigpostgres/misc/arxiv/arxiv/arxiv/pdf/%');\n\nand I have 2 (related?) problems\n\n\n1 - the query is making a postgresql project have 76.7 GB resident RAM usage.\nHaving a WORK_MEM setting of 2GB (and \"simple\" COUNT() results),\nthat was not expected.\n(I risk oom-killer killing my postgres as soon as I run another concurrent\nquery.)\n\nThe memory settings are:\n\nwork_mem = 2GB\nshared_buffers = 16GB\nmaintenance_work_mem = 1GB\n\n\n\n2 - the query never finishes... 
(it is over 3x24hours execution by now,\nand I have no ideia how far from finishing it is).\n\nThe EXPLAIN plan is:\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------\n Aggregate (cost=28630195.79..28630195.80 rows=1 width=16)\n -> Nested Loop (cost=26397220.49..28628236.23 rows=261275 width=1)\n -> HashAggregate (cost=26397219.92..26399832.67 rows=261275 width=8)\n Group Key: ssrc.sentence\n -> Hash Join (cost=73253.21..23635527.52 rows=1104676957 width=8)\n Hash Cond: (ssrc.document = doc.id)\n -> Seq Scan on sentencesource ssrc (cost=0.00..20540394.02 rows=1151189402 width=16)\n -> Hash (cost=54310.40..54310.40 rows=1515425 width=4)\n -> Seq Scan on document doc (cost=0.00..54310.40 rows=1515425 width=4)\n Filter: (source ~~* '/bigpostgres/misc/arxiv/arxiv/arxiv/pdf/%'::text)\n -> Index Scan using pk_sentencetoolcheck on sentencetoolcheck stchk (cost=0.57..8.53 rows=1 width=9)\n Index Cond: (id = ssrc.sentence)\n JIT:\n Functions: 20\n Options: Inlining true, Optimization true, Expressions true, Deforming true\n(15 rows)\n\nThe rows=1515425 estimate on Seq Scan on document doc (cost=0.00..54310.40 rows=1515425 width=4) seems right.\n\nThe rows=1104676957 estimate on Hash Join (cost=73253.21..23635527.52 rows=1104676957 width=8) also seems right.\n\nThe rows=261275 on HashAggregate (cost=26397219.92..26399832.67 rows=261275 width=8) seems VERY WRONG!\nI was expecting something like rows=1.0E+09 instead.\n\n\nOn a laptop (with just 80% of the rows, 32GB RAM, but all SSD disks),\nI finish the query in a few hours (+/- 2 hours).\n\nThe EXPLAIN plan is different on the laptop:\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------\n Aggregate (cost=216688374.89..216688374.90 rows=1 width=16)\n -> Nested Loop (cost=211388557.47..216686210.27 rows=288616 width=1)\n -> Unique (cost=211388556.90..215889838.75 rows=288616 
width=8)\n -> Sort (cost=211388556.90..213639197.82 rows=900256370 width=8)\n Sort Key: ssrc.sentence\n -> Hash Join (cost=56351.51..28261726.31 rows=900256370 width=8)\n Hash Cond: (ssrc.document = doc.id)\n -> Seq Scan on sentencesource ssrc (cost=0.00..16453055.44 rows=948142144 width=16)\n -> Hash (cost=38565.65..38565.65 rows=1084069 width=4)\n -> Seq Scan on document doc (cost=0.00..38565.65 rows=1084069 width=4)\n Filter: (source ~~* '/bigpostgres/misc/arxiv/arxiv/arxiv/pdf/%'::text)\n -> Index Scan using pk_sentencetoolcheck on sentencetoolcheck stchk (cost=0.57..2.76 rows=1 width=9)\n Index Cond: (id = ssrc.sentence)\n JIT:\n Functions: 18\n Options: Inlining true, Optimization true, Expressions true, Deforming true\n\n(The Unique rows estimation is also very wrong, but at least the query finishes).\n\nI would guess that HashAggregate is behaving very badly (using to much RAM beyond WORK_MEM, amd also badly estimating the #rows and taking forever...)\n\nAny suggestions ?\n\n\nJoão Luís\n\nSenior Developer\n\n<mailto:%%Email%%>joao.luis@pdmfc.com<mailto:joao.luis@pdmfc.com>\n\n+351 210 337 700\n\n\n[https://dlnk.bio/wp-content/uploads/2022/11/assinaturaPDM-Natal-1-1.gif]\n[https://www.pdmfc.com/images/email-signature/28-04.png]<https://pdmfc.com> [https://www.pdmfc.com/images/email-signature/28-06.png] <https://www.facebook.com/PDMFC> [https://www.pdmfc.com/images/email-signature/28-05.png] <https://www.linkedin.com/company/pdmfc> [https://www.pdmfc.com/images/email-signature/28-07.png] <https://www.instagram.com/pdmfc.tech> [https://www.pdmfc.com/images/email-signature/28-08.png] <https://www.youtube.com/channel/UCFiu8g5wv10TfMB-OfOaJUA>\n\n\n\n\nConfidentiality\nThe information in this message is confidential and privileged. It is intended solely for the addressee. 
If you are not the intended recipient, any disclosure, copying, or distribution of the message, or any action or omission taken by you in reliance on it is prohibited.\nPlease contact the sender immediately if you have received this message by mistake.\nThank you for your cooperation.",
"msg_date": "Fri, 16 Dec 2022 15:24:17 +0000",
"msg_from": "=?iso-8859-1?Q?Jo=E3o_Paulo_Lu=EDs?= <joao.luis@pdmfc.com>",
"msg_from_op": true,
"msg_subject": "Postgres12 looking for possible HashAggregate issue workarounds?"
},
{
"msg_contents": "On Fri, Dec 16, 2022 at 03:24:17PM +0000, João Paulo Luís wrote:\n> Hi! Sorry to post to this mailing list, but I could not find many tips working around HashAggregate issues.\n> \n> In a research project involving text repetition analysis (on top of public documents)\n> I have a VirtualMachine (CPU AMD Epyc 7502P, 128GB RAM, 12TB HDD, 2TB SSD),\n> running postgres 12.12 (Ubuntu 12.12-0ubuntu0.20.04.1)\n> and some tables with many rows:\n\n> 1 - the query is making a postgresql project have 76.7 GB resident RAM usage.\n> Having a WORK_MEM setting of 2GB (and \"simple\" COUNT() results),\n> that was not expected.\n> (I risk oom-killer killing my postgres as soon as I run another concurrent\n> query.)\n\n> The rows=261275 on HashAggregate (cost=26397219.92..26399832.67 rows=261275 width=8) seems VERY WRONG!\n> I was expecting something like rows=1.0E+09 instead.\n\n> I would guess that HashAggregate is behaving very badly (using to much RAM beyond WORK_MEM, amd also badly estimating the #rows and taking forever...)\n\nHuge memory use sounds like what was fixed in postgres 13.\n\nhttps://www.postgresql.org/docs/13/release-13.html\n\nAllow hash aggregation to use disk storage for large aggregation result\nsets (Jeff Davis)\n\nPreviously, hash aggregation was avoided if it was expected to use more\nthan work_mem memory. Now, a hash aggregation plan can be chosen despite\nthat. The hash table will be spilled to disk if it exceeds work_mem\ntimes hash_mem_multiplier.\n\nThis behavior is normally preferable to the old behavior, in which once\nhash aggregation had been chosen, the hash table would be kept in memory\nno matter how large it got — which could be very large if the planner\nhad misestimated. If necessary, behavior similar to that can be obtained\nby increasing hash_mem_multiplier.\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 16 Dec 2022 10:06:55 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgres12 looking for possible HashAggregate issue workarounds?"
},
{
"msg_contents": "Thank you. It seems it is precisely that problem.\n\n(I will discuss with the rest of the team upgrade possibilities, as I guess it will never be backported to the bugfixes of version 12.)\n\nMeanwhile, as a one-time workaround I've disabled the hashagg algorithm,\n\nSET enable_hashagg=off;\n\nrepeated the query, and it finished in 1h28m (and the RAM resident memory stayed just a little above the 16GB of shared_buffers).\n\nHappy holidays!\n\nJoão Luís\njoao.luis@pdmfc.com\n\n________________________________\nDe: Justin Pryzby <pryzby@telsasoft.com>\nEnviado: 16 de dezembro de 2022 16:06\nPara: João Paulo Luís <joao.luis@pdmfc.com>\nCc: pgsql-performance@lists.postgresql.org <pgsql-performance@lists.postgresql.org>\nAssunto: Re: Postgres12 looking for possible HashAggregate issue workarounds?\n\nOn Fri, Dec 16, 2022 at 03:24:17PM +0000, João Paulo Luís wrote:\n> Hi! Sorry to post to this mailing list, but I could not find many tips working around HashAggregate issues.\n>\n> In a research project involving text repetition analysis (on top of public documents)\n> I have a VirtualMachine (CPU AMD Epyc 7502P, 128GB RAM, 12TB HDD, 2TB SSD),\n> running postgres 12.12 (Ubuntu 12.12-0ubuntu0.20.04.1)\n> and some tables with many rows:\n\n> 1 - the query is making a postgresql project have 76.7 GB resident RAM usage.\n> Having a WORK_MEM setting of 2GB (and \"simple\" COUNT() results),\n> that was not expected.\n> (I risk oom-killer killing my postgres as soon as I run another concurrent\n> query.)\n\n> The rows=261275 on HashAggregate (cost=26397219.92..26399832.67 rows=261275 width=8) seems VERY WRONG!\n> I was expecting something like rows=1.0E+09 instead.\n\n> I would guess that HashAggregate is behaving very badly (using to much RAM beyond WORK_MEM, amd also badly estimating the #rows and taking forever...)\n\nHuge memory use sounds like what was fixed in postgres 13.\n\nhttps://www.postgresql.org/docs/13/release-13.html\n\nAllow hash aggregation to use disk storage for large aggregation result\nsets (Jeff Davis)\n\nPreviously, hash aggregation was avoided if it was expected to use more\nthan work_mem memory. Now, a hash aggregation plan can be chosen despite\nthat. The hash table will be spilled to disk if it exceeds work_mem\ntimes hash_mem_multiplier.\n\nThis behavior is normally preferable to the old behavior, in which once\nhash aggregation had been chosen, the hash table would be kept in memory\nno matter how large it got — which could be very large if the planner\nhad misestimated. If necessary, behavior similar to that can be obtained\nby increasing hash_mem_multiplier.\n\n--\nJustin",
"msg_date": "Fri, 16 Dec 2022 17:47:07 +0000",
"msg_from": "=?Windows-1252?Q?Jo=E3o_Paulo_Lu=EDs?= <joao.luis@pdmfc.com>",
"msg_from_op": false,
"msg_subject": "RE: Postgres12 looking for possible HashAggregate issue workarounds?"
},
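{
"msg_contents": "[Editorial note] The work_mem × hash_mem_multiplier spill threshold described in the quoted v13 release note is simple arithmetic. A minimal sketch of that rule (illustrative helper only, not PostgreSQL code; `work_mem` and `hash_mem_multiplier` are the real GUC names):

```python
def hashagg_spill_threshold_kb(work_mem_kb: int, hash_mem_multiplier: float) -> float:
    """Memory a v13+ hash aggregate may use before spilling to disk.

    In v12, once HashAggregate was chosen, the hash table grew without
    bound when the planner's group-count estimate was too low; from v13
    on, it spills batches to temp files past this threshold instead.
    """
    return work_mem_kb * hash_mem_multiplier


# With the thread's work_mem = 2GB and the v13/v14 default multiplier of
# 1.0, a misestimated hash aggregate would spill after ~2GB rather than
# ballooning to 76GB of resident memory.
print(hashagg_spill_threshold_kb(2 * 1024 * 1024, 1.0))
```

Raising hash_mem_multiplier (rather than work_mem) is the documented way to give hash tables more headroom without also enlarging sorts.
},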
{
"msg_contents": "On Sun, 18 Dec 2022 at 23:44, João Paulo Luís <joao.luis@pdmfc.com> wrote:\n> Meanwhile, as a one-time workaround I've disabled the hashagg algorithm,\n\nThe way the query planner determines if Hash Aggregate's hash table\nwill fit in work_mem or not is based on the n_distinct estimate of the\ncolumns being grouped on. You may want to review what analyze set\nn_distinct to on this table. That can be done by looking at:\n\nselect attname,n_distinct from pg_Stats where tablename =\n'sentencesource' and attname = 'sentence';\n\nIf what that's set to does not seem realistic, then you can overwrite this with:\n\nALTER TABLE sentencesource ALTER COLUMN sentence SET (n_distinct = N);\n\nPlease see the paragraph in [1] about n_distinct. Using an absolute\nvalue is likely not a great idea if the table is going to grow. You\ncould maybe give it a better estimate about how many times values are\nrepeated by setting some negative value, as described in the\ndocuments. You'll need to analyze the table again after changing this\nsetting.\n\nDavid\n\n[1] https://www.postgresql.org/docs/12/sql-altertable.html\n\n\n",
"msg_date": "Mon, 19 Dec 2022 00:06:30 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgres12 looking for possible HashAggregate issue workarounds?"
},
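{
"msg_contents": "[Editorial note] David's point about negative n_distinct values can be illustrated numerically. Per the documented convention, a negative value stores the negated *fraction* of rows that are distinct, so the estimate scales as the table grows; a positive value is an absolute count that goes stale. This sketch mimics that interpretation (it is not the planner's actual code):

```python
def estimated_n_distinct(stadistinct: float, reltuples: float) -> float:
    """Turn a stored n_distinct statistic into an absolute estimate.

    Positive values are taken literally; negative values mean
    "-stadistinct of the rows are distinct" and scale with the table.
    """
    if stadistinct < 0:
        return -stadistinct * reltuples
    return stadistinct


# The thread's stale absolute estimate vs. the corrected setting,
# on ~1.15 billion rows:
rows = 1_150_174_041
print(estimated_n_distinct(255349, rows))  # absolute: stays 255349
print(estimated_n_distinct(-1, rows))      # -1: every row assumed distinct
```

With ~886M distinct sentences out of 1.15B rows, a value around -0.77 would track the real ratio more closely than -1 while still growing with the table.
},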
{
"msg_contents": "Thank you David Rowley (best performance fix so far)!\n\nnsoamt=> select attname,n_distinct from pg_Stats where tablename = 'sentencesource' and attname = 'sentence';\n attname | n_distinct\n----------+------------\n sentence | 255349\n(1 row)\n\nselect count(*), count(distinct sentence) from sentencesource;\n count | count\n------------+-----------\n 1150174041 | 885946963\n(1 row)\n\n-- Seems badly estimated to me.\n\n-- I expect +/-80% of rows to have a distinct value. Manual says -1 is for all rows being distinct.\nnsoamt=> ALTER TABLE sentencesource ALTER COLUMN sentence SET (n_distinct = -1);\n\nnsoamt=> ANALYZE VERBOSE sentencesource ;\nINFO: analyzing \"public.sentencesource\"\nINFO: \"sentencesource\": scanned 30000 of 9028500 pages, containing 3819977 live rows and 260307 dead rows; 30000 rows in sample, 1149622078 estimated total rows\nANALYZE\n\nnsoamt=> select attname,n_distinct from pg_Stats where tablename = 'sentencesource' and attname = 'sentence';\n attname | n_distinct\n----------+------------\n sentence | -1\n(1 row)\n\n\nnsoamt=> EXPLAIN SELECT COUNT(*), COUNT(NULLIF(Stchk.haserrors,'f'))\n FROM SentenceToolCheck Stchk\n WHERE EXISTS (SELECT SSrc.sentence\n FROM SentenceSource SSrc, Document Doc\n WHERE SSrc.sentence = Stchk.id\n AND Doc.id = SSrc.document\n AND Doc.source ILIKE '/bigpostgres/misc/arxiv/arxiv/arxiv/pdf/%');\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------\n Finalize Aggregate (cost=275199757.84..275199757.85 rows=1 width=16)\n -> Gather (cost=275199757.62..275199757.83 rows=2 width=16)\n Workers Planned: 2\n -> Partial Aggregate (cost=275198757.62..275198757.63 rows=1 width=16)\n -> Hash Join (cost=228004096.84..273527643.59 rows=222815204 width=1)\n Hash Cond: (stchk.id = ssrc.sentence)\n -> Parallel Seq Scan on sentencetoolcheck stchk (cost=0.00..35858393.80 rows=232196880 width=9)\n -> Hash 
(cost=209905168.81..209905168.81 rows=1103172722 width=8)\n -> Unique (cost=204389305.20..209905168.81 rows=1103172722 width=8)\n -> Sort (cost=204389305.20..207147237.01 rows=1103172722 width=8)\n Sort Key: ssrc.sentence\n -> Hash Join (cost=73287.01..23615773.05 rows=1103172722 width=8)\n Hash Cond: (ssrc.document = doc.id)\n -> Seq Scan on sentencesource ssrc (cost=0.00..20524720.16 rows=1149622016 width=16)\n -> Hash (cost=54327.65..54327.65 rows=1516749 width=4)\n -> Seq Scan on document doc (cost=0.00..54327.65 rows=1516749 width=4)\n Filter: (source ~~* '/bigpostgres/misc/arxiv/arxiv/arxiv/pdf/%'::text)\n JIT:\n Functions: 25\n Options: Inlining true, Optimization true, Expressions true, Deforming true\n(20 rows)\n\nAnd the query finished in Time: 2637891.352 ms (43:57.891) (the best performance so far, although sentencesource fits in RAM :-)\n\nCurious, now that I've manually set it to -1, who/what will change that setting in the future (not ANALYZE?) ?\nIt will stay that way until someone else (human user) changes it ? (How do I set it back to \"automatic\"?)\n\nHope that there is a way that this poor estimation is fixed in the future releases...\n\nJoão Luís\nSenior Developer\njoao.luis@pdmfc.com\n+351 210 337 700\n\nConfidentiality\nThe information in this message is confidential and privileged. It is intended solely for the addressee. If you are not the intended recipient, any disclosure, copying, or distribution of the message, or any action or omission taken by you in reliance on it is prohibited.\nPlease contact the sender immediately if you have received this message by mistake.\nThank you for your cooperation.\n\n________________________________\nFrom: David Rowley <dgrowleyml@gmail.com>\nSent: 18 December 2022 11:06\nTo: João Paulo Luís <joao.luis@pdmfc.com>\nCc: Justin Pryzby <pryzby@telsasoft.com>; pgsql-performance@lists.postgresql.org <pgsql-performance@lists.postgresql.org>\nSubject: Re: Postgres12 looking for possible HashAggregate issue workarounds?\n\nOn Sun, 18 Dec 2022 at 23:44, João Paulo Luís <joao.luis@pdmfc.com> wrote:\n> Meanwhile, as a one-time workaround I've disabled the hashagg algorithm,\n\nThe way the query planner determines if Hash Aggregate's hash table\nwill fit in work_mem or not is based on the n_distinct estimate of the\ncolumns being grouped on. You may want to review what analyze set\nn_distinct to on this table. That can be done by looking at:\n\nselect attname,n_distinct from pg_Stats where tablename =\n'sentencesource' and attname = 'sentence';\n\nIf what that's set to does not seem realistic, then you can overwrite this with:\n\nALTER TABLE sentencesource ALTER COLUMN sentence SET (n_distinct = N);\n\nPlease see the paragraph in [1] about n_distinct. Using an absolute\nvalue is likely not a great idea if the table is going to grow. You\ncould maybe give it a better estimate about how many times values are\nrepeated by setting some negative value, as described in the\ndocuments. 
You'll need to analyze the table again after changing this\nsetting.\n\nDavid\n\n[1] https://www.postgresql.org/docs/12/sql-altertable.html",
"msg_date": "Mon, 19 Dec 2022 16:50:56 +0000",
"msg_from": "=?iso-8859-1?Q?Jo=E3o_Paulo_Lu=EDs?= <joao.luis@pdmfc.com>",
"msg_from_op": true,
"msg_subject": "RE: Postgres12 looking for possible HashAggregate issue workarounds?"
}
] |
[
{
"msg_contents": "Hi list,\n\nI have a misbehaving query which uses all available disk space and then\nterminates with a \"cannot write block\" error. To prevent other processes\nfrom running into trouble I've set the following:\n\ntemp_file_limit = 100GB\n\nThe query does parallelize and uses one parallel worker while executing,\nbut it does not abort when the temp file limit is reached:\n\n345G pgsql_tmp\n\nIt does abort way later, after using around 300+ GB:\n[53400] ERROR: temporary file size exceeds temp_file_limit (104857600kB)\nWhere: parallel worker\nThe comment in the file states that this is a per-session parameter, so\nwhat is going wrong here?\n\nI am using Postgres 14 on Ubuntu.\n\nRegards,\n\nFrits",
"msg_date": "Sun, 18 Dec 2022 12:48:03 +0100",
"msg_from": "Frits Jalvingh <jal@etc.to>",
"msg_from_op": true,
"msg_subject": "temp_file_limit?"
},
{
"msg_contents": "On Sun, Dec 18, 2022 at 12:48:03PM +0100, Frits Jalvingh wrote:\n> Hi list,\n> \n> I have a misbehaving query which uses all available disk space and then\n> terminates with a \"cannot write block\" error. To prevent other processes\n> from running into trouble I've set the following:\n> \n> temp_file_limit = 100GB\n\n> The comment in the file states that this is a per-session parameter, so\n> what is going wrong here?\n\nDo you mean the comment in postgresql.conf ?\n\ncommit d1f822e58 changed to say that temp_file_limit is actually\nper-process and not per-session.\n\nCould you send the query plan, preferably \"explain analyze\" (if the\nquery finishes sometimes) ?\n\nlog_temp_files may be helpful here.\n\n> The query does parallelize and uses one parallel worker while executing,\n> but it does not abort when the temp file limit is reached:\n> \n> 345G pgsql_tmp\n> \n> It does abort way later, after using around 300+ GB:\n> [53400] ERROR: temporary file size exceeds temp_file_limit (104857600kB)\n> Where: parallel worker\n\nAre you sure the 345G are from only one instance of the query ?\nOr is it running multiple times, or along with other queries writing\n100GB of tempfiles.\n\nIt seems possible that it sometimes runs with more than one parallel\nworker. Also, are there old/stray tempfiles there which need to be\ncleaned up?\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 18 Dec 2022 09:57:05 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: temp_file_limit?"
},
{
"msg_contents": "Hi Justin, thanks for your help!\n\nSimple things first:\n- I am running a single query on a developer machine. Nothing else uses the\ndatabase at that point.\n- The database runs on a disk that has 473GB in use and 1.3T still free. I\nam watching the increase in size used (watch df -hl /d2).\n- If I remove the temp_file_limit the query will run until it has used the\n1.3TB that was free, then it dies.\n- when it runs I see two PG processes active: a main and a worker process\nfor that main.\n\nI hope this answers some of the questions: yes, the query is the one using\nthe tempspace; it is the only one running; it uses only one parallel worker.\n\nJust to be clear: my real question is: why is temp_file_limit not working\nat the specified size? Because this is my real problem: when a query is\ndying like this it will also kill other queries because these are also\nrunning out of space. Even when the limit is per-process it should not have\nexceeded 200GB imo. BTW, if that limit is really per process instead of per\nsession/query then that is a Very Bad Thing(tm), because this makes the\nlimit effectively worthless - if a query can spawn 8 parallel processes\nthen you can suddenly, without any form of control, again fill up that disk.\n\nI'm not really asking for a solution to the bad performance, but hints are\nalways welcome so I'll include the requested info below:\n\nWith the failing plan the query never finishes; it just uses 1.3TB of\nspace, then dies.\nThis also means I cannot explain analyze as this does not produce output\nwhen the query dies. This is a pretty terrible bug in my eyes, because you\ncannot get the info when it's most needed. If I ever have time left to work\non Postgres' code this will be the first thing to fix 8-/\n\nAnyway. 
The plan that fails badly is this one:\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=37360.85..37360.86 rows=1 width=42)\n -> Sort (cost=37360.85..37360.85 rows=1 width=42)\n Sort Key: (COALESCE(tijd.tijdkey, 'Unknown'::character varying)),\ns_h_eenheid_ssm.identificatie\n -> Hash Join (cost=34899.49..37360.84 rows=1 width=42)\n Hash Cond: ((ve03678.calender_id)::text =\n(COALESCE(tijd.tijdkey, 'Unknown'::character varying))::text)\n Join Filter: ((s_h_eenheid_ssm.dv_start_dts <=\ntijd.einddatum) AND (s_h_eenheid_ssm.dv_end_dts > tijd.einddatum) AND\n(l_eenheid_sturingslabel_ssm_pe.dv_start_dts <= tijd.einddatum) AND\n(l_eenheid_sturingslabel_ssm_pe.dv_end_dts > tijd.einddatum) AND\n(sturingslabel_pe.dv_start_dts <= tijd.einddatum) AND\n(sturingslabel_pe.dv_end_dts > tijd.einddatum))\n -> Gather (cost=34897.66..37358.98 rows=1 width=65)\n Workers Planned: 1\n -> Parallel Hash Join (cost=33897.66..36358.88\nrows=1 width=65)\n Hash Cond: ((s_h_eenheid_ssm.id_h_eenheid =\nl_eenheid_sturingslabel_ssm_pe.id_h_eenheid) AND\n(COALESCE(s_h_eenheid_ssm.id_s, '-1'::integer) = ve03678.eenheid_id))\n -> Parallel Seq Scan on s_h_eenheid_ssm\n (cost=0.00..2326.55 rows=17955 width=34)\n -> Parallel Hash (cost=33896.02..33896.02\nrows=109 width=47)\n -> Parallel Hash Join\n (cost=18850.80..33896.02 rows=109 width=47)\n Hash Cond: (ve03678.ve03678 =\nsturingslabel_pe.datum)\n -> Parallel Seq Scan on ve0367801\nve03678 (cost=0.00..12584.92 rows=655792 width=15)\n -> Parallel Hash\n (cost=18850.78..18850.78 rows=1 width=40)\n -> Parallel Hash Join\n (cost=15458.27..18850.78 rows=1 width=40)\n Hash 
Cond:\n(l_eenheid_sturingslabel_ssm_pe.id_h_sturingslabel =\nsturingslabel_pe.id_h_sturingslabel)\n -> Parallel Seq Scan on\nl_eenheid_sturingslabel_ssm l_eenheid_sturingslabel_ssm_pe\n (cost=0.00..2963.36 rows=114436 width=24)\n -> Parallel Hash\n (cost=15458.26..15458.26 rows=1 width=24)\n -> Parallel Seq\nScan on s_h_sturingslabel_ssm sturingslabel_pe (cost=0.00..15458.26 rows=1\nwidth=24)\n Filter:\n((soort = 'MSL'::text) AND (code = 'DAE'::text))\n -> Hash (cost=1.37..1.37 rows=37 width=11)\n -> Seq Scan on tijd (cost=0.00..1.37 rows=37\nwidth=11)\n(24 rows)\nhttps://explain.depesz.com/s/qwsh\n\nBy itself I'm used to bad query performance in Postgresql; our application\nonly does bulk queries and Postgres quite often makes terrible plans for\nthose, but with set enable_nestloop=false set always most of them at least\nexecute. The remaining failing queries are almost 100% caused by bad join\nsequences; I plan to work around those by forcing the join order from our\napplication. For instance, the exact same query above can also generate the\nfollowing plan (this one was created by manually setting\njoin_collapse_limit = 1, but fast variants also occur quite often when\ndisabling parallelism):\n Unique (cost=70070.71..70070.72 rows=1 width=42) (actual\ntime=4566.379..4766.112 rows=1058629 loops=1)\n -> Sort (cost=70070.71..70070.71 rows=1 width=42) (actual\ntime=4566.377..4618.021 rows=1058629 loops=1)\n Sort Key: (COALESCE(tijd.tijdkey, 'Unknown'::character varying)),\ns_h_eenheid_ssm.identificatie\n Sort Method: quicksort Memory: 115317kB\n -> Gather (cost=50108.01..70070.70 rows=1 width=42) (actual\ntime=1297.620..1651.003 rows=1058629 loops=1)\n Workers Planned: 1\n Workers Launched: 1\n -> Parallel Hash Join (cost=49108.01..69070.60 rows=1\nwidth=42) (actual time=1294.655..1604.524 rows=529314 loops=2)\n Hash Cond: (((ve03678.calender_id)::text =\n(COALESCE(tijd.tijdkey, 'Unknown'::character varying))::text) AND\n(ve03678.eenheid_id = 
COALESCE(s_h_eenheid_ssm.id_s, '-1'::integer)) AND\n(ve03678.ve03678 = sturingslabel_pe.datum))\n -> Parallel Seq Scan on ve0367801 ve03678\n (cost=0.00..12584.92 rows=655792 width=15) (actual time=0.004..27.971\nrows=557423 loops=2)\n -> Parallel Hash (cost=49107.99..49107.99 rows=1\nwidth=25) (actual time=1294.347..1294.352 rows=542512 loops=2)\n Buckets: 2097152 (originally 1024) Batches: 1\n(originally 1) Memory Usage: 100728kB\n -> Parallel Hash Join (cost=39244.15..49107.99\nrows=1 width=25) (actual time=390.276..1089.770 rows=542512 loops=2)\n Hash Cond:\n(l_eenheid_sturingslabel_ssm_pe.id_h_sturingslabel =\nsturingslabel_pe.id_h_sturingslabel)\n Join Filter:\n((sturingslabel_pe.dv_start_dts <= tijd.einddatum) AND\n(sturingslabel_pe.dv_end_dts > tijd.einddatum))\n -> Hash Join (cost=23785.87..33486.40\nrows=43548 width=29) (actual time=342.982..791.469 rows=3367092 loops=2)\n Hash Cond:\n(l_eenheid_sturingslabel_ssm_pe.id_h_eenheid = s_h_eenheid_ssm.id_h_eenheid)\n Join Filter:\n((l_eenheid_sturingslabel_ssm_pe.dv_start_dts <= tijd.einddatum) AND\n(l_eenheid_sturingslabel_ssm_pe.dv_end_dts > tijd.einddatum))\n -> Parallel Seq Scan on\nl_eenheid_sturingslabel_ssm l_eenheid_sturingslabel_ssm_pe\n (cost=0.00..2963.36 rows=114436 width=24) (actual time=0.002..5.818\nrows=97271 loops=2)\n -> Hash (cost=22217.33..22217.33\nrows=125483 width=29) (actual time=342.703..342.705 rows=1129351 loops=2)\n Buckets: 2097152 (originally\n131072) Batches: 1 (originally 1) Memory Usage: 86969kB\n -> Nested Loop\n (cost=0.00..22217.33 rows=125483 width=29) (actual time=0.039..175.471\nrows=1129351 loops=2)\n Join Filter:\n((s_h_eenheid_ssm.dv_start_dts <= tijd.einddatum) AND\n(s_h_eenheid_ssm.dv_end_dts > tijd.einddatum))\n -> Seq Scan on\ns_h_eenheid_ssm (cost=0.00..2452.23 rows=30523 width=34) (actual\ntime=0.022..4.488 rows=30523 loops=2)\n -> Materialize\n (cost=0.00..1.56 rows=37 width=11) (actual time=0.000..0.001 rows=37\nloops=61046)\n -> Seq Scan on\ntijd 
(cost=0.00..1.37 rows=37 width=11) (actual time=0.009..0.013 rows=37\nloops=2)\n -> Parallel Hash\n (cost=15458.26..15458.26 rows=1 width=24) (actual time=47.265..47.265\nrows=69 loops=2)\n Buckets: 1024 Batches: 1 Memory\nUsage: 72kB\n -> Parallel Seq Scan on\ns_h_sturingslabel_ssm sturingslabel_pe (cost=0.00..15458.26 rows=1\nwidth=24) (actual time=4.478..47.241 rows=69 loops=2)\n Filter: ((soort = 'MSL'::text)\nAND (code = 'DAE'::text))\n Rows Removed by Filter: 233072\n Planning Time: 0.623 ms\n Execution Time: 5144.937 ms\n(33 rows)\nhttps://explain.depesz.com/s/CKhC\n\nSame query, now runs in 5 seconds.\n\nThis query is behaving quite special in one of our customers' databases; it\nruns about 80% of the time in 8 to 16 seconds, about 15% of the time it\ntakes about 2 hours, and the remaining 5% it dies with a disk space issue...\n\nRegards,\n\nFrits",
"msg_date": "Sun, 18 Dec 2022 18:29:41 +0100",
"msg_from": "Frits Jalvingh <jal@etc.to>",
"msg_from_op": true,
"msg_subject": "Fwd: temp_file_limit?"
},
{
"msg_contents": "Frits Jalvingh <jal@etc.to> writes:\n> Just to be clear: my real question is: why is temp_file_limit not working\n> at the specified size?\n\nI've not looked at that code lately, but I strongly suspect that\nit's implemented in such a way that it's a per-process limit, not a\nper-session limit. So each parallel worker could use up that much\nspace.\n\nIt's also possible that you've found an actual bug, but without\na reproducer case nobody's going to take that possibility too\nseriously. We're unlikely to accept \"the limit should work\nacross multiple processes\" as a valid bug though. That would\nrequire a vastly more complicated implementation.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 18 Dec 2022 14:28:32 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: temp_file_limit?"
},
{
"msg_contents": "On Sun, Dec 18, 2022 at 06:29:41PM +0100, Frits Jalvingh wrote:\n> Just to be clear: my real question is: why is temp_file_limit not\n> working at the specified size? Because this is my real problem: when a\n> query is dying like this it will also kill other queries because these\n> are also running out of space. Even when the limit is per-process it\n> should not have exceeded 200GB imo.\n\nWhat OS and filesystem are in use ?\n\nCould you list the tmpdir when it's getting huge? The filenames include\nthe PID, which would indicate if there's another procecss involved, or a\nbug allowed it to get huge.\nsudo du --max=2 -mx ./pgsql_tmp |sort -nr\n\nBTW, pg_ls_tmpdir() hides directories, so you shouldn't rely on it for\nlisting temporary directories...\n\nOne possibility is that there are files in the tmpdir, which have been\nunlinked, but are still opened, so their space hasn't been reclaimed.\nYou could check for that by running lsof -nn |grep pgsql_tmp Any deleted\nfiles would say things like 'DEL|deleted|inode|no such'\n\n> BTW, if that limit is really per process instead of per\n> session/query then that is a Very Bad Thing(tm), because this makes the\n> limit effectively worthless - if a query can spawn 8 parallel processes\n> then you can suddenly, without any form of control, again fill up that disk.\n\n8 is the default value of max_worker_processes and max_parallel_workers,\nbut 2 is the default value of max_parallel_workers_per_gather. You're\nfree the change the default value to balance it with the temp_file_limit\n(as suggested by the earlier-mentioned commit).\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 18 Dec 2022 14:10:52 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: temp_file_limit?"
},
{
"msg_contents": "On Mon, Dec 19, 2022 at 9:11 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Sun, Dec 18, 2022 at 06:29:41PM +0100, Frits Jalvingh wrote:\n> > Just to be clear: my real question is: why is temp_file_limit not\n> > working at the specified size? Because this is my real problem: when a\n> > query is dying like this it will also kill other queries because these\n> > are also running out of space. Even when the limit is per-process it\n> > should not have exceeded 200GB imo.\n\nIt's really the limit for a single file (or virtual file because we\nsplit them on 1GB boundaries, probably well past time we stopped doing\nthat), but we create many temporary files for various reasons. One\npossibility is that you've hit a case that needs several rounds of\nrepartitioning (because of a failure to estimate the number of tuples\nwell), but we can't see that because you didn't show EXPLAIN (ANALYZE)\noutput (understandably if it runs out of disk space before\ncompleting...). The parallel hash code doesn't free up the previous\ngenerations' temporary files; it really only needs two generations'\nworth concurrently (the one it's reading from and the one it's writing\nto). In rare cases where more generations are needed it could unlink\nthe older ones -- that hasn't been implemented yet. If you set\nlog_temp_files = 0 to log temporary file names, it should be clear if\nit's going through multiple rounds of repartitioning, from the names\n(...of32..., ...of64..., ...of128..., ...of256..., ...).\n\n\n",
"msg_date": "Mon, 19 Dec 2022 13:51:33 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: temp_file_limit?"
},
{
"msg_contents": "On Mon, Dec 19, 2022 at 1:51 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> It's really the limit for a single file\n\nOops, sorry, I take that back. It should be per process.\n\n\n",
"msg_date": "Mon, 19 Dec 2022 13:53:32 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: temp_file_limit?"
},
{
"msg_contents": "Hi Tom and Thomas, thanks for your help.\n\n@Tom:\nIf it really is per-process then I would have expected it to die after\n200GB was used?\nAs far as \"valid bug\" is concerned: I had hoped this would be per session,\nas this at least delivers a reasonable and usable limit; it is easy to\ncontrol the number of sessions/statements in execution.\nIf it really is per process then the limit is not really useful, just like\nwork_mem: the execution plan of a query determines the number of processes\n(up to the max, at least that is way better than work_mem) and that can\nchange whenever Postgres feels a new plan is in order. I can understand\nthat solving this might be harder (although to me it looks like just a\nlittle bit of shared memory per session to keep a number). To me it does\nnot sound like an invalid bug, just one you do not want to solve now ;) And\nthe real problem, for me, is actually that both work_mem and\ntemp_file_limit should be for the entire instance/cluster ;) I know that\nthat is even harder.\n\nFor us it means we really cannot use Postgres parallelism: it is infinitely\nbetter to have a query that runs longer but which finishes than to have the\ndatabase die and recover randomly with OOM or with disk space filling up\nkilling random queries. Which is a bit of a pity, ofc.\n\n@Justin\nThe test is running on Ubuntu 22.04.1, x86_64, the disk is an NVMe 2TB\nWD850X with ext4 as a file system.\nI will collect the other data around tmpfiles hopefully later today.\nI have already set max_parallel_workers_per_gather to 1. 
I will probably\ndisable all parallelism for the next runs to see whether that makes the\nsize limit more workable..\n\nHi Tom and Thomas, thanks for your help.@Tom:If it really is per-process then I would have expected it to die after 200GB was used?As far as \"valid bug\" is concerned: I had hoped this would be per session, as this at least delivers a reasonable and usable limit; it is easy to control the number of sessions/statements in execution.If it really is per process then the limit is not really useful, just like work_mem: the execution plan of a query determines the number of processes (up to the max, at least that is way better than work_mem) and that can change whenever Postgres feels a new plan is in order. I can understand that solving this might be harder (although to me it looks like just a little bit of shared memory per session to keep a number). To me it does not sound like an invalid bug, just one you do not want to solve now ;) And the real problem, for me, is actually that both work_mem and temp_file_limit should be for the entire instance/cluster ;) I know that that is even harder.For us it means we really cannot use Postgres parallelism: it is infinitely better to have a query that runs longer but which finishes than to have the database die and recover randomly with OOM or with disk space filling up killing random queries. Which is a bit of a pity, ofc.@JustinThe test is running on Ubuntu 22.04.1, x86_64, the disk is an NVMe 2TB WD850X with ext4 as a file system.I will collect the other data around tmpfiles hopefully later today.I have already set max_parallel_workers_per_gather to 1. I will probably disable all parallelism for the next runs to see whether that makes the size limit more workable..",
"msg_date": "Mon, 19 Dec 2022 10:47:34 +0100",
"msg_from": "Frits Jalvingh <jal@etc.to>",
"msg_from_op": true,
"msg_subject": "Re: Fwd: temp_file_limit?"
},
{
"msg_contents": "Em seg., 19 de dez. de 2022 às 06:47, Frits Jalvingh <jal@etc.to> escreveu:\n\n>\n> The test is running on Ubuntu 22.04.1, x86_64, the disk is an NVMe 2TB\n> WD850X with ext4 as a file system.\n>\nIt's probably not a production environment.\nAny chance of adding another 2TB NVMe, just for the temp files?\nTo see if Postgres can finish the queries and provide more information?\nWhich exact version of Postgres (14.???) are you using?\n\nregards,\nRanier Vilela\n\nEm seg., 19 de dez. de 2022 às 06:47, Frits Jalvingh <jal@etc.to> escreveu:The test is running on Ubuntu 22.04.1, x86_64, the disk is an NVMe 2TB WD850X with ext4 as a file system.It's probably not a production environment.Any chance of adding another 2TB NVMe, just for the temp files?To see if Postgres can finish the queries and provide more information?Which exact version of Postgres (14.???) are you using?regards,Ranier Vilela",
"msg_date": "Mon, 19 Dec 2022 09:15:14 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: temp_file_limit?"
},
{
"msg_contents": "Hi Ranier, thanks for your help.\n\nI do not have more disks lying around, and I fear that if it does not\ncomplete with 1.3TB of disk space it might not be that likely that adding\n750GB would work...\nPostgres version: the original (prd) issue was on 10.x. I also tested it on\n14.x with the same issue. I then upgraded my machine to 15.1 to make sure\nto report on the latest version, and all information mentioned in this\nthread is from that version.\n\nbtw, this query generates quite different plans when tweaking things like\nnested_loop=false/true, and the \"fast\" plan requires nested_loops=true and\njoin_collapse_limit=1 (5 seconds response). An odd thing is that both plans\ncontain only one nested loop (a cross join, I think it cannot do that\nwithout one) but the general plan changes a lot.. I am trying to get output\nfrom that second plan because this one just loops using CPU, not disk...\nPerhaps that one will finish with some statistics...\n\nHi Ranier, thanks for your help.I do not have more disks lying around, and I fear that if it does not complete with 1.3TB of disk space it might not be that likely that adding 750GB would work...Postgres version: the original (prd) issue was on 10.x. I also tested it on 14.x with the same issue. I then upgraded my machine to 15.1 to make sure to report on the latest version, and all information mentioned in this thread is from that version.btw, this query generates quite different plans when tweaking things like nested_loop=false/true, and the \"fast\" plan requires nested_loops=true and join_collapse_limit=1 (5 seconds response). An odd thing is that both plans contain only one nested loop (a cross join, I think it cannot do that without one) but the general plan changes a lot.. I am trying to get output from that second plan because this one just loops using CPU, not disk... Perhaps that one will finish with some statistics...",
"msg_date": "Mon, 19 Dec 2022 15:45:25 +0100",
"msg_from": "Frits Jalvingh <jal@etc.to>",
"msg_from_op": true,
"msg_subject": "Re: Fwd: temp_file_limit?"
},
{
"msg_contents": "Em seg., 19 de dez. de 2022 às 11:45, Frits Jalvingh <jal@etc.to> escreveu:\n\n> Hi Ranier, thanks for your help.\n>\n> I do not have more disks lying around, and I fear that if it does not\n> complete with 1.3TB of disk space it might not be that likely that adding\n> 750GB would work...\n> Postgres version: the original (prd) issue was on 10.x. I also tested it\n> on 14.x with the same issue. I then upgraded my machine to 15.1 to make\n> sure to report on the latest version, and all information mentioned in this\n> thread is from that version.\n>\nYou can run with a Postgres debug compiled version?\nhttps://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Linux/BSD\n\nMaybe, some light appears.\n\nregards,\nRanier Vilela\n\n>\n\nEm seg., 19 de dez. de 2022 às 11:45, Frits Jalvingh <jal@etc.to> escreveu:Hi Ranier, thanks for your help.I do not have more disks lying around, and I fear that if it does not complete with 1.3TB of disk space it might not be that likely that adding 750GB would work...Postgres version: the original (prd) issue was on 10.x. I also tested it on 14.x with the same issue. I then upgraded my machine to 15.1 to make sure to report on the latest version, and all information mentioned in this thread is from that version.You can run with a Postgres debug compiled version?https://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Linux/BSDMaybe, some light appears.regards,Ranier Vilela",
"msg_date": "Mon, 19 Dec 2022 11:52:03 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: temp_file_limit?"
},
{
"msg_contents": "Hehehe, that is not the worst plan ;) I did that once to debug a deadlock\nin the JDBC driver when talking with Postgres, but it's not an adventure\nI'd like to repeat right now ;)\n\n>\n\nHehehe, that is not the worst plan ;) I did that once to debug a deadlock in the JDBC driver when talking with Postgres, but it's not an adventure I'd like to repeat right now ;)",
"msg_date": "Mon, 19 Dec 2022 16:01:21 +0100",
"msg_from": "Frits Jalvingh <jal@etc.to>",
"msg_from_op": true,
"msg_subject": "Re: Fwd: temp_file_limit?"
},
{
"msg_contents": "@justin:\n\nRan the query again. Top shows the following processes:\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+\nCOMMAND\n\n 650830 postgres 20 0 7503,2m 2,6g 2,6g R 100,0 4,2 12:46.34\npostgres: jal datavault_317_prd [local] EXPLAIN\n\n 666141 postgres 20 0 7486,3m 2,6g 2,6g R 100,0 4,1 2:10.24\npostgres: parallel worker for PID 650830\n\nYour commands shows, during execution:\nroot@chatelet:/d2/var/lib/postgresql/15/main/base# du --max=2 -mx\n./pgsql_tmp |sort -nr\n68629 ./pgsql_tmp/pgsql_tmp650830.3.fileset\n68629 ./pgsql_tmp\n\nroot@chatelet:/d2/var/lib/postgresql/15/main/base# du --max=2 -mx\n./pgsql_tmp |sort -nr\n194494 ./pgsql_tmp\n194493 ./pgsql_tmp/pgsql_tmp650830.3.fileset\n\nroot@chatelet:/d2/var/lib/postgresql/15/main/base# du --max=2 -mx\n./pgsql_tmp |sort -nr\n335289 ./pgsql_tmp/pgsql_tmp650830.3.fileset\n335289 ./pgsql_tmp\n\nroot@chatelet:/d2/var/lib/postgresql/15/main/base# du --max=2 -mx\n./pgsql_tmp |sort -nr\n412021 ./pgsql_tmp/pgsql_tmp650830.3.fileset\n412021 ./pgsql_tmp\n^^^ a few seconds after this last try the query aborted:\nERROR: temporary file size exceeds temp_file_limit (104857600kB)\n\nOne possibility is that there are files in the tmpdir, which have been\n> unlinked, but are still opened, so their space hasn't been reclaimed.\n> You could check for that by running lsof -nn |grep pgsql_tmp Any deleted\n> files would say things like 'DEL|deleted|inode|no such'\n>\nI do not really understand what you would like me to do, and when. The disk\nspace is growing, and it is actual files under pgsql_tmp?\n\nHope this tells you something, please let me know if you would like more\ninfo, and again - thanks!\n\n@justin:Ran the query again. 
Top shows the following processes: PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 650830 postgres 20 0 7503,2m 2,6g 2,6g R 100,0 4,2 12:46.34 postgres: jal datavault_317_prd [local] EXPLAIN 666141 postgres 20 0 7486,3m 2,6g 2,6g R 100,0 4,1 2:10.24 postgres: parallel worker for PID 650830Your commands shows, during execution:root@chatelet:/d2/var/lib/postgresql/15/main/base# du --max=2 -mx ./pgsql_tmp |sort -nr68629\t./pgsql_tmp/pgsql_tmp650830.3.fileset68629\t./pgsql_tmproot@chatelet:/d2/var/lib/postgresql/15/main/base# du --max=2 -mx ./pgsql_tmp |sort -nr194494\t./pgsql_tmp194493\t./pgsql_tmp/pgsql_tmp650830.3.filesetroot@chatelet:/d2/var/lib/postgresql/15/main/base# du --max=2 -mx ./pgsql_tmp |sort -nr335289\t./pgsql_tmp/pgsql_tmp650830.3.fileset335289\t./pgsql_tmproot@chatelet:/d2/var/lib/postgresql/15/main/base# du --max=2 -mx ./pgsql_tmp |sort -nr412021\t./pgsql_tmp/pgsql_tmp650830.3.fileset412021\t./pgsql_tmp^^^ a few seconds after this last try the query aborted:ERROR: temporary file size exceeds temp_file_limit (104857600kB)One possibility is that there are files in the tmpdir, which have been\nunlinked, but are still opened, so their space hasn't been reclaimed.\nYou could check for that by running lsof -nn |grep pgsql_tmp Any deleted\nfiles would say things like 'DEL|deleted|inode|no such'I do not really understand what you would like me to do, and when. The disk space is growing, and it is actual files under pgsql_tmp?Hope this tells you something, please let me know if you would like more info, and again - thanks!",
"msg_date": "Mon, 19 Dec 2022 17:57:42 +0100",
"msg_from": "Frits Jalvingh <jal@etc.to>",
"msg_from_op": true,
"msg_subject": "Re: Fwd: temp_file_limit?"
},
{
"msg_contents": "On Mon, Dec 19, 2022 at 05:57:42PM +0100, Frits Jalvingh wrote:\n> @justin:\n> \n> Ran the query again. Top shows the following processes:\n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+\n\nThanks\n\n> root@chatelet:/d2/var/lib/postgresql/15/main/base# du --max=2 -mx\n> ./pgsql_tmp |sort -nr\n> 412021 ./pgsql_tmp/pgsql_tmp650830.3.fileset\n> 412021 ./pgsql_tmp\n> ^^^ a few seconds after this last try the query aborted:\n> ERROR: temporary file size exceeds temp_file_limit (104857600kB)\n> \n> One possibility is that there are files in the tmpdir, which have been\n> > unlinked, but are still opened, so their space hasn't been reclaimed.\n> > You could check for that by running lsof -nn |grep pgsql_tmp Any deleted\n> > files would say things like 'DEL|deleted|inode|no such'\n>\n> I do not really understand what you would like me to do, and when. The disk\n> space is growing, and it is actual files under pgsql_tmp?\n\nRun this during the query as either postgres or root:\n| lsof -nn |grep pgsql_tmp |grep -E 'DEL|deleted|inode|no such'\n\nAny files it lists would be interesting to know about.\n\n> Hope this tells you something, please let me know if you would like more\n> info, and again - thanks!\n\nI think Thomas' idea is more likely. We'd want to know the names of\nfiles being written, either as logged by log_temp_files or from \n| find pgsql_tmp -ls\nduring the query.\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 19 Dec 2022 11:15:43 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: temp_file_limit?"
},
{
"msg_contents": "I have listed the files during that run, I will try to add those as an\nattachment to the mail, no idea if the list accepts that. There were 2024\nfiles in that directory around the end...\n\nExecuting that lsof command only produces some warnings, no files:\n\nroot@chatelet:/d2/var/lib/postgresql/15/main/base# lsof -nn |grep pgsql_tmp\n|grep -E 'DEL|deleted|inode|no such'\nlsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs\n Output information may be incomplete.\nlsof: WARNING: can't stat() fuse.jetbrains-toolbox file system\n/tmp/.mount_jetbraOAyv5H\n Output information may be incomplete.\nlsof: WARNING: can't stat() fuse.portal file system /run/user/1000/doc\n Output information may be incomplete.",
"msg_date": "Mon, 19 Dec 2022 18:27:57 +0100",
"msg_from": "Frits Jalvingh <jal@etc.to>",
"msg_from_op": true,
"msg_subject": "Re: Fwd: temp_file_limit?"
},
{
"msg_contents": "On Mon, Dec 19, 2022 at 06:27:57PM +0100, Frits Jalvingh wrote:\n> I have listed the files during that run,\n\n> 213M -rw------- 1 postgres postgres 213M dec 19 17:46 i100of128.p0.0\n> 207M -rw------- 1 postgres postgres 207M dec 19 17:46 i100of128.p1.0\n> 210M -rw------- 1 postgres postgres 210M dec 19 17:49 i100of256.p0.0\n> 211M -rw------- 1 postgres postgres 211M dec 19 17:49 i100of256.p1.0\n> 188M -rw------- 1 postgres postgres 188M dec 19 17:53 i100of512.p0.0\n[...]\n\nI think that proves Thomas' theory. I'm not sure how that helps you,\nthough...\n\nOn Mon, Dec 19, 2022 at 01:51:33PM +1300, Thomas Munro wrote:\n> One possibility is that you've hit a case that needs several rounds of\n> repartitioning (because of a failure to estimate the number of tuples\n> well), but we can't see that because you didn't show EXPLAIN (ANALYZE)\n> output (understandably if it runs out of disk space before\n> completing...). The parallel hash code doesn't free up the previous\n> generations' temporary files; it really only needs two generations'\n> worth concurrently (the one it's reading from and the one it's writing\n> to). In rare cases where more generations are needed it could unlink\n> the older ones -- that hasn't been implemented yet. If you set\n> log_temp_files = 0 to log temporary file names, it should be clear if\n> it's going through multiple rounds of repartitioning, from the names\n> (...of32..., ...of64..., ...of128..., ...of256..., ...).\n\n-- \nJustin Pryzby\nSystem Administrator\nTelsasoft\n+1-952-707-8581\n\n\n",
"msg_date": "Mon, 19 Dec 2022 11:46:44 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: temp_file_limit?"
},
{
"msg_contents": "Ok, just to make sure that I understand correctly:\nThe parallel hash implementation needs to resize its table because of a\nmismatch in expected tuple count. I do expect this to be true: Postgres\noften grossly underestimates the expected row counts in our queries.\nThis is not fully implemented yet: removing the \"old \"files is not yet\ndone, so every time the table resizes it creates a new set of files and the\nold ones remain.\nI assume that the \"used file size\" only includes the \"current\" set of\nfiles, and that the old ones are not counted towards that amount? That\nwould explain why it overallocates, of course.\n\nBy itself I now know what to do: I just need to disable all parallelism (\n•̀ᴗ•́ )و ̑̑\n\nI usually do that anyway because it makes queries die randomly. This is\njust another reason.\n\nI restarted that query with max_parallel_workers_per_gather=0, and this\ndoes not seem to use tempspace at all. It was not exactly fast, it took 82\nminutes of a single process running at 100% cpu.\nhttps://explain.depesz.com/s/HedE\n\nThanks a lot for your help Justin, and all others that chimed in too.\n\nNext round is to try to get that query to run in the 5 seconds that we know\nit can do, reliably.\n\n\nOn Mon, Dec 19, 2022 at 6:46 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Mon, Dec 19, 2022 at 06:27:57PM +0100, Frits Jalvingh wrote:\n> > I have listed the files during that run,\n>\n> > 213M -rw------- 1 postgres postgres 213M dec 19 17:46 i100of128.p0.0\n> > 207M -rw------- 1 postgres postgres 207M dec 19 17:46 i100of128.p1.0\n> > 210M -rw------- 1 postgres postgres 210M dec 19 17:49 i100of256.p0.0\n> > 211M -rw------- 1 postgres postgres 211M dec 19 17:49 i100of256.p1.0\n> > 188M -rw------- 1 postgres postgres 188M dec 19 17:53 i100of512.p0.0\n> [...]\n>\n> I think that proves Thomas' theory. 
I'm not sure how that helps you,\n> though...\n>\n> On Mon, Dec 19, 2022 at 01:51:33PM +1300, Thomas Munro wrote:\n> > One possibility is that you've hit a case that needs several rounds of\n> > repartitioning (because of a failure to estimate the number of tuples\n> > well), but we can't see that because you didn't show EXPLAIN (ANALYZE)\n> > output (understandably if it runs out of disk space before\n> > completing...). The parallel hash code doesn't free up the previous\n> > generations' temporary files; it really only needs two generations'\n> > worth concurrently (the one it's reading from and the one it's writing\n> > to). In rare cases where more generations are needed it could unlink\n> > the older ones -- that hasn't been implemented yet. If you set\n> > log_temp_files = 0 to log temporary file names, it should be clear if\n> > it's going through multiple rounds of repartitioning, from the names\n> > (...of32..., ...of64..., ...of128..., ...of256..., ...).\n>\n> --\n> Justin Pryzby\n> System Administrator\n> Telsasoft\n> +1-952-707-8581\n>\n\nOk, just to make sure that I understand correctly:The parallel hash implementation needs to resize its table because of a mismatch in expected tuple count. I do expect this to be true: Postgres often grossly underestimates the expected row counts in our queries.This is not fully implemented yet: removing the \"old \"files is not yet done, so every time the table resizes it creates a new set of files and the old ones remain.I assume that the \"used file size\" only includes the \"current\" set of files, and that the old ones are not counted towards that amount? That would explain why it overallocates, of course.By itself I now know what to do: I just need to disable all parallelism ( •̀ᴗ•́ )و ̑̑I usually do that anyway because it makes queries die randomly. This is just another reason.I restarted that query with max_parallel_workers_per_gather=0, and this does not seem to use tempspace at all. 
It was not exactly fast, it took 82 minutes of a single process running at 100% cpu. https://explain.depesz.com/s/HedEThanks a lot for your help Justin, and all others that chimed in too.Next round is to try to get that query to run in the 5 seconds that we know it can do, reliably.On Mon, Dec 19, 2022 at 6:46 PM Justin Pryzby <pryzby@telsasoft.com> wrote:On Mon, Dec 19, 2022 at 06:27:57PM +0100, Frits Jalvingh wrote:\n> I have listed the files during that run,\n\n> 213M -rw------- 1 postgres postgres 213M dec 19 17:46 i100of128.p0.0\n> 207M -rw------- 1 postgres postgres 207M dec 19 17:46 i100of128.p1.0\n> 210M -rw------- 1 postgres postgres 210M dec 19 17:49 i100of256.p0.0\n> 211M -rw------- 1 postgres postgres 211M dec 19 17:49 i100of256.p1.0\n> 188M -rw------- 1 postgres postgres 188M dec 19 17:53 i100of512.p0.0\n[...]\n\nI think that proves Thomas' theory. I'm not sure how that helps you,\nthough...\n\nOn Mon, Dec 19, 2022 at 01:51:33PM +1300, Thomas Munro wrote:\n> One possibility is that you've hit a case that needs several rounds of\n> repartitioning (because of a failure to estimate the number of tuples\n> well), but we can't see that because you didn't show EXPLAIN (ANALYZE)\n> output (understandably if it runs out of disk space before\n> completing...). The parallel hash code doesn't free up the previous\n> generations' temporary files; it really only needs two generations'\n> worth concurrently (the one it's reading from and the one it's writing\n> to). In rare cases where more generations are needed it could unlink\n> the older ones -- that hasn't been implemented yet. If you set\n> log_temp_files = 0 to log temporary file names, it should be clear if\n> it's going through multiple rounds of repartitioning, from the names\n> (...of32..., ...of64..., ...of128..., ...of256..., ...).\n\n-- \nJustin Pryzby\nSystem Administrator\nTelsasoft\n+1-952-707-8581",
"msg_date": "Mon, 19 Dec 2022 20:29:39 +0100",
"msg_from": "Frits Jalvingh <jal@etc.to>",
"msg_from_op": true,
"msg_subject": "Re: Fwd: temp_file_limit?"
},
{
"msg_contents": "Em seg., 19 de dez. de 2022 às 16:29, Frits Jalvingh <jal@etc.to> escreveu:\n\n> Ok, just to make sure that I understand correctly:\n> The parallel hash implementation needs to resize its table because of a\n> mismatch in expected tuple count. I do expect this to be true: Postgres\n> often grossly underestimates the expected row counts in our queries.\n> This is not fully implemented yet: removing the \"old \"files is not yet\n> done, so every time the table resizes it creates a new set of files and the\n> old ones remain.\n> I assume that the \"used file size\" only includes the \"current\" set of\n> files, and that the old ones are not counted towards that amount? That\n> would explain why it overallocates, of course.\n>\nThat is not necessarily what is happening.\nCould you try manually deleting (rm) these files, using the postgres user?\nIt's an ugly and dirty test, but it could indicate that files are really\nbeing left behind, without being deleted by Postgres.\n\nAlternatively, you could compile a version with\nCHECK_WRITE_VS_EXTEND set, and try to fetch as much information from the\nlogs as possible,\nas has been indicated by others here.\n\n\n> By itself I now know what to do: I just need to disable all parallelism (\n> •̀ᴗ•́ )و ̑̑\n>\n> I usually do that anyway because it makes queries die randomly. This is\n> just another reason.\n>\n> I restarted that query with max_parallel_workers_per_gather=0, and this\n> does not seem to use tempspace at all. It was not exactly fast, it took 82\n> minutes of a single process running at 100% cpu.\n> https://explain.depesz.com/s/HedE\n>\nAnyway, see the hint page (https://explain.depesz.com/s/HedE#hints),\nmaybe it will be useful.\n\nregards,\nRanier Vilela\n\nEm seg., 19 de dez. de 2022 às 16:29, Frits Jalvingh <jal@etc.to> escreveu:Ok, just to make sure that I understand correctly:The parallel hash implementation needs to resize its table because of a mismatch in expected tuple count. 
I do expect this to be true: Postgres often grossly underestimates the expected row counts in our queries.This is not fully implemented yet: removing the \"old \"files is not yet done, so every time the table resizes it creates a new set of files and the old ones remain.I assume that the \"used file size\" only includes the \"current\" set of files, and that the old ones are not counted towards that amount? That would explain why it overallocates, of course.That is not necessarily what is happening.Could you try manually deleting (rm) these files, using the postgres user?It's an ugly and dirty test, but it could indicate that files are really being left behind, without being deleted by Postgres.Alternatively, you could compile a version withCHECK_WRITE_VS_EXTEND set, and try to fetch as much information from the logs as possible,as has been indicated by others here. By itself I now know what to do: I just need to disable all parallelism ( •̀ᴗ•́ )و ̑̑I usually do that anyway because it makes queries die randomly. This is just another reason.I restarted that query with max_parallel_workers_per_gather=0, and this does not seem to use tempspace at all. It was not exactly fast, it took 82 minutes of a single process running at 100% cpu. https://explain.depesz.com/s/HedEAnyway, see the hint page (https://explain.depesz.com/s/HedE#hints),maybe it will be useful.regards,Ranier Vilela",
"msg_date": "Mon, 19 Dec 2022 16:42:15 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: temp_file_limit?"
},
{
"msg_contents": "On Sun, Dec 18, 2022 at 06:29:41PM +0100, Frits Jalvingh wrote:\n> By itself I'm used to bad query performance in Postgresql; our application\n> only does bulk queries and Postgres quite often makes terrible plans for\n> those, but with set enable_nestloop=false set always most of them at least\n> execute. The remaining failing queries are almost 100% caused by bad join\n> sequences; I plan to work around those by forcing the join order from our\n> application. For instance, the exact same query above can also generate the\n> following plan (this one was created by manually setting\n> join_collapse_limit = 1, but fast variants also occur quite often when\n> disabling parallelism):\n\nI, too, ended up setting enable_nestloop=false for our report queries,\nto avoid the worst-case plans.\n\nBut you should also try to address the rowcount misestimates. This\nunderestimates the rowcount by a factor of 69 (or 138 in the plan you\nsent today):\n\n| (soort = 'MSL'::text) AND (code = 'DAE'::text)\n\nIf those conditions are correlated, you can improve the estimate by\nadding extended stats object.\n\n| CREATE STATISTICS s_h_sturingslabel_ssm_stats soort,code FROM s_h_sturingslabel_ssm; ANALYZE s_h_sturingslabel_ssm;\n\nUnfortunately, stats objects currently only improve scans, and not\njoins, so that might *improve* some queries, but it won't resolve the\nworst problems:\n\n| Hash Join (cost=22,832.23..44,190.21 rows=185 width=47) (actual time=159.725..2,645,634.918 rows=28,086,472,886 loops=1) \n\nMaybe you can improve that by adjusting the stats target or ndistinct...\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 19 Dec 2022 13:50:03 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: temp_file_limit?"
},
{
"msg_contents": "@ranier\nThese files ONLY exist during the query. They get deleted as soon as the\nquery terminates, by Postgres itself. Once the query terminates pgsql_tmp\nis completely empty. Considering what Thomas said (and the actual\noccurrence of the files he mentioned) this does seem to be the more likely\ncause to me.\n\n\n",
"msg_date": "Mon, 19 Dec 2022 20:59:31 +0100",
"msg_from": "Frits Jalvingh <jal@etc.to>",
"msg_from_op": true,
"msg_subject": "Re: Fwd: temp_file_limit?"
},
{
"msg_contents": "On Tue, Dec 20, 2022 at 8:59 AM Frits Jalvingh <jal@etc.to> wrote:\n> @ranier\n> These files ONLY exist during the query. They get deleted as soon as the query terminates, by Postgres itself. Once the query terminates pgsql_tmp is completely empty. Considering what Thomas said (and the actual occurrence of the files he mentioned) this does seem to be the more likely cause to me.\n\nI'm working on some bug fixes near this area at the moment, so I'll\nalso see if I can figure out how to implement the missing eager\ncleanup of earlier generations. It's still a pretty bad scenario once\nyou reach it (repartitioning repeatedly, that is) and the solution to\nthat is probably much harder, but it's obviously not great to waste\ntemporary disk space like that. BTW you can disable just parallel\nhash with enable_parallel_hash=false.\n\n\n",
"msg_date": "Tue, 20 Dec 2022 09:07:48 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: temp_file_limit?"
},
{
"msg_contents": "@justin\n\nI tried the create statistics variant and that definitely improves the\nestimate, and with that one of the \"bad\" cases (the one with the 82 minute\nplan) now creates a good plan using only a few seconds.\nThat is a worthwhile path to follow. A bit hard to do, because those\nconditions can be anything, but I can probably calculate the ones used per\ncustomer and create those correlation statistics from that... It is\ndefinitely better than tweaking the \"poor man's query hints\" enable_xxxx\n8-/ which is really not helping with plan stability either.\n\nThat will be a lot of work, but I'll let you know the results ;)\n\n\n",
"msg_date": "Mon, 19 Dec 2022 21:10:27 +0100",
"msg_from": "Frits Jalvingh <jal@etc.to>",
"msg_from_op": true,
"msg_subject": "Re: Fwd: temp_file_limit?"
},
{
"msg_contents": "@Thomas\n\nThanks for helping identify the issue. I think it would be nice to clean\nup those obsoleted files during the run, because running out of disk is\nreally not a good thing to have ;) Of course the bad estimates leading to\nthe resize are the real issue, but this at least makes it less bad.\n\nThanks for the tip about disabling parallel_hash, but I also found it in the\nsource. As mentioned before I disable (on production systems) all\nparallelism, not just for this issue but to prevent the OOM killer from\nkilling Postgres - which happens way more often with parallelism on...\n\n\n",
"msg_date": "Mon, 19 Dec 2022 21:32:33 +0100",
"msg_from": "Frits Jalvingh <jal@etc.to>",
"msg_from_op": true,
"msg_subject": "Re: Fwd: temp_file_limit?"
},
{
"msg_contents": "On Mon, Dec 19, 2022 at 09:10:27PM +0100, Frits Jalvingh wrote:\n> @justin\n> \n> I tried the create statistics variant and that definitely improves the\n> estimate, and with that one of the \"bad\" cases (the one with the 82 minute\n> plan) now creates a good plan using only a few seconds.\n> That is a worthwhile path to follow. A bit hard to do, because those\n> conditions can be anything, but I can probably calculate the ones used per\n> customer and create those correlation statistics from that... It is\n> definitely better than tweaking the \"poor man's query hints\" enable_xxxx\n> 8-/ which is really not helping with plan stability either.\n> \n> That will be a lot of work, but I'll let you know the results ;)\n\nYeah, if the conditions are arbitrary, then it's going to be more\ndifficult. Hopefully you don't have too many columns. :)\n\nI suggest enabling autoexplain and monitoring for queries which were\nslow, and retroactively adding statistics to those columns which are\nmost-commonly queried, and which have correlations (which the planner\ndoesn't otherwise know about).\n\nYou won't want to have more than a handful of columns in a stats object\n(since it requires factorial(N) complexity), but you can have multiple\nstats objects with different combinations of columns (and, in v14,\nexpressions). You can also set a lower stats target to make the cost a\nbit lower.\n\nYou could try to check which columns are correlated, either by running:\n| SELECT COUNT(1),col1,col2 FROM tbl GROUP BY 2,3 ORDER BY 1;\nfor different combinations of columns.\n\nOr by creating a tentative/experimental stats object on a handful of\ncolumns at a time for which you have an intuition about their\ncorrelation, and then checking the calculated dependencies FROM\npg_stats_ext. You may need to do something clever to use that for\narbitrary columns. Maybe this is a start.\n| SELECT dep.value::float, tablename, attnames, dep.key, exprs FROM (SELECT (json_each_text(dependencies::text::json)).* AS dep, * FROM pg_stats_ext)dep WHERE dependencies IS NOT NULL ORDER BY 1 DESC ; -- AND regexp_count(key, ',') < 2\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 20 Dec 2022 15:11:09 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: temp_file_limit?"
},
{
"msg_contents": "Hi Justin,\n\nAs our queries are generated I decided to create a peephole optimizer kind\nof thing to scan the generated SQL AST to find multiple conditions on the\nsame table reference. I can then use our metadata to see if these\nreferences are expected to be correlated. This creates about 20 statistics\nsets, including the one you have indicated. This at least makes the\nproblematic query have a stable and very fast plan (so far). I had hoped\nfor some more improvement with other queries but that has not yet been\nevident ;)\n\nThanks a lot for the tips and your help!\n\nCordially,\n\nFrits\n\nOn Tue, Dec 20, 2022 at 10:11 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Mon, Dec 19, 2022 at 09:10:27PM +0100, Frits Jalvingh wrote:\n> > @justin\n> >\n> > I tried the create statistics variant and that definitely improves the\n> > estimate, and with that one of the \"bad\" cases (the one with the 82\n> minute\n> > plan) now creates a good plan using only a few seconds.\n> > That is a worthwhile path to follow. A bit hard to do, because those\n> > conditions can be anything, but I can probably calculate the ones used\n> per\n> > customer and create those correlation statistics from that... It is\n> > definitely better than tweaking the \"poor man's query hints\" enable_xxxx\n> > 8-/ which is really not helping with plan stability either.\n> >\n> > That will be a lot of work, but I'll let you know the results ;)\n>\n> Yeah, if the conditions are arbitrary, then it's going to be more\n> difficult. Hopefully you don't have too many columns. 
:)\n>\n> I suggest enabling autoexplain and monitoring for queries which were\n> slow, and retroactively adding statistics to those columns which are\n> most-commonly queried, and which have correlations (which the planner\n> doesn't otherwise know about).\n>\n> You won't want to have more than a handful of columns in a stats object\n> (since it requires factorial(N) complexity), but you can have multiple\n> stats objects with different combinations of columns (and, in v14,\n> expressions). You can also set a lower stats target to make the cost a\n> bit lower.\n>\n> You could try to check which columns are correlated, either by running:\n> | SELECT COUNT(1),col1,col2 FROM tbl GROUP BY 2,3 ORDER BY 1;\n> for different combinations of columns.\n>\n> Or by creating a tentative/experimental stats object on a handful of\n> columns at a time for which you have an intuition about their\n> correlation, and then checking the calculated dependencies FROM\n> pg_stats_ext. You may need to to something clever to use that for\n> arbitrarily columns. Maybe this is a start.\n> | SELECT dep.value::float, tablename, attnames, dep.key, exprs FROM\n> (SELECT (json_each_text(dependencies::text::json)).* AS dep, * FROM\n> pg_stats_ext)dep WHERE dependencies IS NOT NULL ORDER BY 1 DESC ; -- AND\n> regexp_count(key, ',') < 2\n>\n> --\n> Justin\n>\n",
"msg_date": "Fri, 23 Dec 2022 09:12:08 +0100",
"msg_from": "Frits Jalvingh <jal@etc.to>",
"msg_from_op": true,
"msg_subject": "Re: Fwd: temp_file_limit?"
}
] |
[
{
"msg_contents": "I have a complex query which essentially runs a finite state automaton \nthrough a with recursive union, adding the next state based on the \nprevious. This is run at 100,000 or a million start states at the same \ntime, picking a new record (token), matching it to the FSA (a three-way \njoin:\n\n token inner join next token * state-transition-table -> next state\n\nI know this doesn't really tell you much. The following might give you a \nglimpse:\n\nwith recursive Token as (\n select * from steps left outer join token using(event)\n limit 100000\n), StartStates as (\nselect pathId, start, end, m.new_state as state, m.goalId\n from Token w inner join FSA m\n on(m.token = w.token and m.old_state = w.state)\n), Phrase as (\nselect pathId, start, end, state, goalId\n from StartStates\nunion all\nselect p.pathId, p.start, n.end, n.new_state as state, n.goalId\n from Phrase p\n inner join (\n select pathId, start, end, old_state as state, new_state, f.goalId\n from Token inner join FSA f using(token)\n ) n using(pathId, end, state)\n\nThere are 100s of thousands of states. This join has a HUGE fan out if \nit is not immediately limited by the chaining criterion on the \nold_state. So any attempt to use merge join or hash join which will \ncompute the whole big thing and only then apply the chaining criterion, \nwill just create massive amounts of sort load and/or humongous hash \ntables only to throw the vast majority away every time. But when it runs \nthrough nested loops, the indexes help to make it really quick.\n\nI cannot show you the exact data, but I can show you the plan that works \namazingly fast:\n\n Insert on good_paths (cost=912224.51..912228.71 rows=240 width=302)\n CTE token\n -> Limit (cost=46728.24..81127.75 rows=100000 width=519)\n -> Hash Left Join (cost=46728.24..115752.23 rows=200654 width=519)\n ... 
this is creating the start states\n CTE path\n -> Recursive Union (cost=23293.75..831082.45 rows=241 width=278)\n -> Merge Join (cost=23293.75..289809.83 rows=171 width=278)\n Merge Cond: ((m.old_state = w_1.state) AND (m.token = w_1.token))\n -> Index Scan using fsa_old_state_token_idx on fsa m (cost=0.43..245365.63 rows=4029834 width=28)\n -> Materialize (cost=23293.32..23793.32 rows=100000 width=278)\n -> Sort (cost=23293.32..23543.32 rows=100000 width=278)\n Sort Key: w_1.state, w_1.token\n -> CTE Scan on token w_1 (cost=0.00..2000.00 rows=100000 width=278)\n -> Nested Loop (cost=18295.78..54126.78 rows=7 width=278)\n -> Merge Join (cost=18295.35..19120.16 rows=4275 width=340)\n Merge Cond: ((token.pathid = p.pathid) AND (token.start = p.end))\n -> Sort (cost=18169.32..18419.32 rows=100000 width=160)\n Sort Key: token.pathid, token.start\n -> CTE Scan on token (cost=0.00..2000.00 rows=100000 width=160)\n -> Sort (cost=126.03..130.30 rows=1710 width=212)\n Sort Key: p.pathid, p.end\n -> WorkTable Scan on path p (cost=0.00..34.20 rows=1710 width=212)\n -> Index Scan using fsa_old_state_token_idx on fsa f (cost=0.43..8.18 rows=1 width=28)\n\nNow, when that initial token list (of start states) grows beyond this \nlimit of 100,000, the execution plan flips:\n\n Insert on good_paths (cost=2041595.63..2041606.66 rows=630 width=302)\n CTE token\n -> Limit (cost=46728.24..115752.23 rows=200654 width=519)\n -> Hash Left Join (cost=46728.24..115752.23 rows=200654 width=519)\n ... 
this is creating the start states\n CTE path\n -> Recursive Union (cost=47749.96..1925801.45 rows=633 width=278)\n -> Merge Join (cost=47749.96..315274.30 rows=343 width=278)\n Merge Cond: ((m.old_state = w_1.state) AND (m.token = w_1.token))\n -> Index Scan using fsa_old_state_token_idx on fsa m (cost=0.43..245365.63 rows=4029834 width=28)\n -> Materialize (cost=47749.53..48752.80 rows=200654 width=278)\n -> Sort (cost=47749.53..48251.16 rows=200654 width=278)\n Sort Key: w_1.state, w_1.token\n -> CTE Scan on token w_1 (cost=0.00..4013.08 rows=200654 width=278)\n -> Merge Join (cost=158013.87..161051.45 rows=29 width=278)\n Merge Cond: ((token.token = f.token) AND (token.pathid = p.pathid) AND (token.start = p.end))\n -> Sort (cost=37459.53..37961.16 rows=200654 width=160)\n Sort Key: token.token, token.pathid, token.start\n -> CTE Scan on token (cost=0.00..4013.08 rows=200654 width=160)\n -> Materialize (cost=120554.35..120966.44 rows=82419 width=228)\n -> Sort (cost=120554.35..120760.39 rows=82419 width=228)\n Sort Key: f.token, p.pathid, p.end\n -> Nested Loop (cost=0.43..104808.55 rows=82419 width=228)\n -> WorkTable Scan on path p (cost=0.00..68.60 rows=3430 width=212)\n -> Index Scan using fsa_old_state_token_idx on fsa f (cost=0.43..30.30 rows=24 width=28)\n\nOnce this merge join kicks in, the query essentially stalls (I mean, \neach of the limited components runs in seconds, and I can iteratively \nrun them so that my initial set of tokens never grows past 100,000, and \nthen I can complete everything in about linear time, with each iteration \ntaking time roughly proportional to the number of tokens). But with \nthe merge join it doesn't complete even after several times that amount of time.\n\nI doubt that I can find any trick to give the planner better data \nwhich it can then use to figure out that the merge join is a bad \nproposition.\n\nI wish I could just force it. I probably had this discussion here some \nyears ago. 
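
(For what it's worth, the forcing can at least be scoped to a single
transaction instead of the whole session, via a transaction-local GUC.
Just a sketch - the insert below merely stands in for the actual
recursive query:

begin;
set local enable_mergejoin = off; -- reverts automatically at commit/rollback
insert into good_paths select ... ; -- the recursive query goes here
commit;

That way other statements in the session keep the default planner
behavior.)
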
I think that while the PostgreSQL optimizer is \npretty good, there are situations such as this where its predictions do not work.\n\nNote, for my immediate relief I have forced it by simply setting \nenable_mergejoin=off. This works fine, except that it converts both merge joins into \nnested loops, while the upper merge join was not a problem, and sometimes \n(most often) a nested loop is a bad choice for bulk data. It's only for \nthis recursive query that it sometimes makes sense.\n\nregards,\n-Gunther",
"msg_date": "Wed, 28 Dec 2022 10:39:14 -0500",
"msg_from": "Gunther Schadow <raj@gusw.net>",
"msg_from_op": true,
"msg_subject": "When you really want to force a certain join type?"
},
{
"msg_contents": "On Wed, Dec 28, 2022 at 10:39:14AM -0500, Gunther Schadow wrote:\n> I have a complex query which essentially runs a finite state automaton\n> through a with recursive union, adding the next state based on the\n> previous. This is run at 100,000 or a million start states at the same\n> time, picking a new record (token), matching it to the FSA (a three-way\n> join:\n\n> There are 100s of thousands of states. This join has a HUGE fan out if it is\n\n> I doubt that I can find any trick to give to the planner better data which\n> it can then use to figure out that the merge join is a bad proposition.\n\n> Note, for my immediate relief I have forced it by simply set\n> enable_mergejoin=off. This works fine, except, it converts both into a\n> nested loop, but the upper merge join was not a problem, and sometimes (most\n> often) nested loop is a bad choice for bulk data. It's only for this\n> recursive query it sometimes makes sense.\n\nMaybe the new parameter in v15 would help.\n\nhttps://www.postgresql.org/docs/15/runtime-config-query.html#GUC-RECURSIVE-WORKTABLE-FACTOR\nrecursive_worktable_factor (floating point)\n\n Sets the planner's estimate of the average size of the working table\n of a recursive query, as a multiple of the estimated size of the\n initial non-recursive term of the query. This helps the planner\n choose the most appropriate method for joining the working table to\n the query's other tables. The default value is 10.0. A smaller value\n such as 1.0 can be helpful when the recursion has low “fan-out” from\n one step to the next, as for example in shortest-path queries. Graph\n analytics queries may benefit from larger-than-default values.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 28 Dec 2022 09:48:37 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: When you really want to force a certain join type?"
},
{
"msg_contents": "On 12/28/2022 10:48 AM, Justin Pryzby wrote:\n> Maybe the new parameter in v15 would help.\n>\n> https://www.postgresql.org/docs/15/runtime-config-query.html#GUC-RECURSIVE-WORKTABLE-FACTOR\n> recursive_worktable_factor (floating point)\n>\n> Sets the planner's estimate of the average size of the working table\n> of a recursive query, as a multiple of the estimated size of the\n> initial non-recursive term of the query. This helps the planner\n> choose the most appropriate method for joining the working table to\n> the query's other tables. The default value is 10.0. A smaller value\n> such as 1.0 can be helpful when the recursion has low “fan-out” from\n> one step to the next, as for example in shortest-path queries. Graph\n> analytics queries may benefit from larger-than-default values.\n\nThanks, that's something I will try after I upgrade.\n\nBut speaking of such other uses for recursive queries, I can say I have \nquite a bit of experience of turning graph related \"traversal\" and \nsearch and optimization and classification queries into SQL, in short, \ncomputing the transitive closure. And usually I have stayed away from \nthe recursive WITH query and instead set up a start table and then \nperform the iterative step. And there are two ways to go about it. Say \nyou have a graph, simple nodes and arcs. You want to find all paths \nthrough the graph.\n\nNow you can set up start nodes and then extend them at the end by \njoining the recursive table to the simple arc table and extend your path \nevery time. This is what the WITH RECURSIVE supports. 
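
(Concretely - just a sketch on a hypothetical plain arcs(source, target)
table, with a depth cap so it also terminates on cyclic graphs:

with recursive paths(source, target, distance) as (
  select source, target, 1 from arcs
  union all
  select p.source, a.target, p.distance + 1
  from paths p inner join arcs a
  on(a.source = p.target)
  where p.distance < 10
)
select * from paths;

The where clause on distance is only the cycle guard; on a DAG it can be
dropped.)
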
These queries \nlinearly iterate as many times as the length of the longest path.\n\nwith recursive arcs as (\n select source, target, 1 as distance, ...\n from ...\n), paths as (\n select * from arcs\n union all\n select a.source, b.target, a.distance + 1 as distance, ...\n from paths a inner join_*arcs *_b\n on(b.source = a.target)\n)\nselect * from paths\n\nBut another way is to join paths with paths. It would be this, which I \nthink I have seen postgresql unable to deal with:\n\nwith recursive arcs as (\n select source, target, 1 as distance, ...\n from ...\n), paths as (\n select * from arcs\n union all\n select a.source, b.target, a.distance + 1 as distance, ...\n from paths a inner join_*paths *_b\n on(b.source = a.target)\n)\nselect * from paths\n\nSo, instead of the recursive union to join back to the fixed table, it \njoins the recursive table to the recursive table, and the benefit of \nthat is that these queries converge much quicker. Instead of going 10 \niterations to find a path of length 10, you go 1 iteration to find all \npaths of 2 (if distance 1 is the base table of all arcs), then next you \nfind paths of up to 4 then you find paths of up to 8, then 16, 32, ... \nThis converges much faster. I usually do that as follows\n\ncreate table paths as\nselect source, target, 1 as distance, ...\n from arcs;\n\nprepare rep as\ninsert into paths(source, target, distance, ...)\nselect a.source, b.target, a.distance + b.distance as distance, ...\n from paths a inner join paths b on(b.source = a.target)\nexcept\nselect * from paths;\n\nexecute rep;\nexecute rep;\n...\n\nor instead of the except, in order to minimize distances:\n\nwhere not exists (select 1 from paths x\n where x.source = a.source\n and x.target = a.target\n and x.distance < a.distance)\n\nI have even done a group by in the recursive step which replaces the \npaths relation at every iteration (e.g. 
with only minimal distance paths).\n\nSince this converges so rapidly I often prefer that approach over a \nrecursive union query.\n\nI think in IBM DB2 allowed to join the recursive table with itself. Is \nthis something you want to support at some time?\n\nAlso, why even use the RECURSIVE keyword, DB2 didn't need it, and the \nquery analyzer should immediately see the recursion, so no need to have \nthat keyword.\n\nregards,\n-Gunther\n\n\n\n\n\n\nOn 12/28/2022 10:48 AM, Justin Pryzby\n wrote:\n \n\nMaybe the new parameter in v15 would help.\n\nhttps://www.postgresql.org/docs/15/runtime-config-query.html#GUC-RECURSIVE-WORKTABLE-FACTOR\nrecursive_worktable_factor (floating point)\n\n Sets the planner's estimate of the average size of the working table\n of a recursive query, as a multiple of the estimated size of the\n initial non-recursive term of the query. This helps the planner\n choose the most appropriate method for joining the working table to\n the query's other tables. The default value is 10.0. A smaller value\n such as 1.0 can be helpful when the recursion has low “fan-out” from\n one step to the next, as for example in shortest-path queries. Graph\n analytics queries may benefit from larger-than-default values.\n\nThanks that's something I will try after I upgraded.\nBut speaking of such other uses for recursive queries, I can say\n I have quite a bit of experience of turning graph related\n \"traversal\" and search and optimization and classification queries\n into SQL, in short, computing the transitive closure. And usually\n I have stayed away from the recursive WITH query and instead set\n up a start table and then perform the iterative step. And there\n are two ways to go about it. Say you have a graph, simple nodes\n and arcs. You want to find all paths through the graph. \n\nNow you can set up start nodes and then extend them at the end by\n joining the recursive table to the simple arc table and extend\n your path every time. 
This is what the WITH RECURSIVE supports.\n These queries linearly iterate as many times as the length of the\n longest path. \n\nwith recursive arcs as (\n select source, target, 1 as distance, ...\n from ...\n), paths as (\n select * from arcs\n union all\n select a.source, b.target, a.distance + 1 as distance, ...\n from paths a inner join arcs b\n on(b.source = a.target)\n)\nselect * from paths\n\nBut another way is to join paths with paths. It would be this,\n which I think I have seen postgresql unable to deal with:\nwith recursive arcs as (\n select source, target, 1 as distance, ...\n from ...\n), paths as (\n select * from arcs\n union all\n select a.source, b.target, a.distance + 1 as distance, ...\n from paths a inner join paths b\n on(b.source = a.target)\n)\nselect * from paths\n\nSo, instead of the recursive union to join back to the fixed\n table, it joins the recursive table to the recursive table, and\n the benefit of that is that these queries converge much quicker.\n Instead of going 10 iterations to find a path of length 10, you go\n 1 iteration to find all paths of 2 (if distance 1 is the base\n table of all arcs), then next you find paths of up to 4 then you\n find paths of up to 8, then 16, 32, ... This converges much\n faster. I usually do that as follows\n\ncreate table paths as\nselect source, target, 1 as distance, ...\n from arcs;\n\nprepare rep as\ninsert into paths(source, target, distance, ...)\nselect a.source, b.target, a.distance + b.distance as distance, ... \n from paths a inner join paths b on(b.source = a.target) \nexcept\nselect * from paths;\n\nexecute rep;\nexecute rep;\n...\n\nor instead of the except, in order to minimize distances:\nwhere not exists (select 1 from paths x \n where x.source = a.source \n and x.target = a.target\n and x.distance < a.distance)\n\nI have even done a group by in the recursive step which replaces\n the paths relation at every iteration (e.g. 
with only minimal\n distance paths).\n\nSince this converges so rapidly I often prefer that approach over\n a recursive union query. \n\nI think IBM DB2 allowed joining the recursive table with\n itself. Is this something you want to support at some time?\nAlso, why even use the RECURSIVE keyword, DB2 didn't need it, and\n the query analyzer should immediately see the recursion, so no\n need to have that keyword.\nregards,\n -Gunther",
"msg_date": "Thu, 29 Dec 2022 02:31:59 -0500",
"msg_from": "Gunther Schadow <raj@gusw.net>",
"msg_from_op": true,
"msg_subject": "Re: When you really want to force a certain join type?"
},
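Gunther's "join paths with paths" trick can be sketched outside SQL to make the convergence argument concrete. The following Python sketch (not from the thread; the chain graph and function names are invented for illustration) computes the same transitive closure two ways — extending by one arc per iteration, the shape `WITH RECURSIVE` supports, versus joining the accumulated paths with themselves — and counts iterations to the fixpoint:

```python
# Sketch of the two convergence strategies discussed above: extending
# paths by one arc per iteration vs. joining paths with paths (doubling).
# A graph is a set of (source, target) arcs.

def close_linear(arcs):
    """paths JOIN arcs: one extra hop per iteration (the WITH RECURSIVE shape)."""
    paths, iterations = set(arcs), 0
    while True:
        new = paths | {(a, d) for (a, b) in paths for (c, d) in arcs if b == c}
        iterations += 1
        if new == paths:          # fixpoint reached
            return paths, iterations
        paths = new

def close_squaring(arcs):
    """paths JOIN paths: reachable path lengths roughly double per iteration."""
    paths, iterations = set(arcs), 0
    while True:
        new = paths | {(a, d) for (a, b) in paths for (c, d) in paths if b == c}
        iterations += 1
        if new == paths:
            return paths, iterations
        paths = new

# A chain 0 -> 1 -> ... -> 16: the longest path has 16 arcs.
chain = {(i, i + 1) for i in range(16)}
closure_lin, n_lin = close_linear(chain)
closure_sq, n_sq = close_squaring(chain)
assert closure_lin == closure_sq      # both compute the same closure
print(n_lin, n_sq)                    # squaring converges in ~log2 of the iterations
```

On this chain, the linear form needs as many productive iterations as the longest path, while the squaring form covers lengths 2, 4, 8, 16 and stops — matching the "1 iteration for all paths of 2, then up to 4, then 8, 16, 32, ..." description above.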
{
"msg_contents": "Gunther Schadow <raj@gusw.net> writes:\n> Also, why even use the RECURSIVE keyword, DB2 didn't need it, and the \n> query analyzer should immediately see the recursion, so no need to have \n> that keyword.\n\nOur reading of the SQL spec is that it's required. The scope of\nvisibility of CTE names is different depending on whether you\nwrite RECURSIVE or not, so it's not a question of \"the query analyzer\nshould see it\": the analyzer is required NOT to see it.\n\nDB2 generally has a reputation for agreeing with the spec,\nso I'm surprised to hear that they're not doing this per spec.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 29 Dec 2022 02:52:10 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: When you really want to force a certain join type?"
}
] |
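The standard-conforming form from the thread — the recursive term joining back to the fixed `arcs` table, with the `RECURSIVE` keyword Tom describes as required — runs on any engine implementing SQL:1999 recursive CTEs. A minimal runnable sketch using Python's `sqlite3` module (the four-node DAG is invented; because it is acyclic, the recursion terminates without a depth guard):

```python
import sqlite3

# Minimal demo of the linear-recursion path query discussed above,
# against an in-memory SQLite database. Like PostgreSQL, SQLite only
# permits one reference to the recursive table in the recursive term,
# so the "paths JOIN paths" variant is rejected here too.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE arcs(source INTEGER, target INTEGER)")
conn.executemany("INSERT INTO arcs VALUES (?, ?)",
                 [(1, 2), (2, 3), (3, 4), (1, 3)])

shortest = conn.execute("""
    WITH RECURSIVE paths(source, target, distance) AS (
        SELECT source, target, 1 FROM arcs
        UNION ALL
        SELECT p.source, a.target, p.distance + 1
        FROM paths AS p JOIN arcs AS a ON a.source = p.target
    )
    SELECT MIN(distance) FROM paths WHERE source = 1 AND target = 4
""").fetchone()[0]
print(shortest)  # shortest 1 -> 4 route goes via node 3
```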
[
{
"msg_contents": "Hi,\n\nWhen performing post-mortem analysis of some short latency spikes on a\nheavily loaded database, I found that the reason for (less than 10 second\nlatency spike) wasn't on the EXECUTE stage but on the BIND stage.\nAt the same time graphical monitoring shows that during this few second\nperiod there were some queries waiting in the BIND stage.\n\nLogging setup:\nlog_min_duration_statement=200ms\nlog_lock_waits=on\ndeadlock_timeout=100ms\nSo I expected that every lock waiting over 100ms (>deadlock_timeout) should\nbe in the log.\nBut in the log I see only spikes on slow BIND but not lock waits logged.\n(\ngrep BIND /var/log/postgresql/postgresql-2022-12-29.log | grep 'duration' |\nperl -pe 's/^(2022-12-29 \\d\\d:\\d\\d:\\d).*$/$1/' | sort | uniq -c | less\n...\n 9 2022-12-29 00:12:5\n 2 2022-12-29 00:13:1\n 3 2022-12-29 00:13:5\n!!! 68 2022-12-29 00:14:0\n 5 2022-12-29 00:14:1\n 3 2022-12-29 00:14:2\n 2 2022-12-29 00:14:3\n).\nBut no lock waits on the BIND stage logged during the problem period (and\nno lock waits in general).\nSimilar issues happen a few times per day without any visible pattern (but\non the same tables usually).\nNo CPU or IO load/latency spikes found during problem periods.\nNo EXECUTE slowdown found in the log during that time.\n\nSo currently I have two hypotheses in research:\n1)during BIND stage not every lock waits logged\n2)there are some not a lock related intermittent slowdown of BIND\n\nI ask for any ideas how to debug this issue (duration of such spike usually\nunder 1s but given how many TPS database serving - 1s is too much and\naffect end users).\n\n\n-- \nMaxim Boguk\nSenior Postgresql DBA\nhttps://dataegret.com/\n\nPhone UA: +380 99 143 0000\nPhone AU: +61 45 218 5678\n\nHi,When performing post-mortem analysis of some short latency spikes on a heavily loaded database, I found that the reason for (less than 10 second latency spike) wasn't on the EXECUTE stage but on the BIND stage.At the same time graphical 
monitoring shows that during this few second period there were some queries waiting in the BIND stage.Logging setup:log_min_duration_statement=200mslog_lock_waits=ondeadlock_timeout=100msSo I expected that every lock waiting over 100ms (>deadlock_timeout) should be in the log.But in the log I see only spikes on slow BIND but not lock waits logged.(grep BIND /var/log/postgresql/postgresql-2022-12-29.log | grep 'duration' | perl -pe 's/^(2022-12-29 \\d\\d:\\d\\d:\\d).*$/$1/' | sort | uniq -c | less... 9 2022-12-29 00:12:5 2 2022-12-29 00:13:1 3 2022-12-29 00:13:5!!! 68 2022-12-29 00:14:0 5 2022-12-29 00:14:1 3 2022-12-29 00:14:2 2 2022-12-29 00:14:3).But no lock waits on the BIND stage logged during the problem period (and no lock waits in general).Similar issues happen a few times per day without any visible pattern (but on the same tables usually).No CPU or IO load/latency spikes found during problem periods.No EXECUTE slowdown found in the log during that time.So currently I have two hypotheses in research:1)during BIND stage not every lock waits logged2)there are some not a lock related intermittent slowdown of BINDI ask for any ideas how to debug this issue (duration of such spike usually under 1s but given how many TPS database serving - 1s is too much and affect end users).-- Maxim BogukSenior Postgresql DBAhttps://dataegret.com/Phone UA: +380 99 143 0000Phone AU: +61 45 218 5678",
"msg_date": "Sat, 31 Dec 2022 14:26:08 +0200",
"msg_from": "Maxim Boguk <maxim.boguk@gmail.com>",
"msg_from_op": true,
"msg_subject": "How to analyze of short but heavy intermittent slowdown on BIND on\n production database (or BIND vs log_lock_waits)"
},
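The grep/perl pipeline above truncates each timestamp after the first digit of the seconds field, i.e. it counts slow-BIND log lines per 10-second bucket. An equivalent sketch in Python (the sample log lines are fabricated to mimic the format shown in the thread):

```python
import re
from collections import Counter

# Equivalent of:
#   grep BIND log | grep duration | perl -pe 's/^(YYYY-MM-DD HH:MM:S).*$/$1/' | sort | uniq -c
# Truncating after the first seconds digit yields 10-second buckets.
sample_log = [
    "2022-12-29 00:13:55.120 UTC 101 [BIND] LOG:  duration: 231.2 ms  bind <unnamed>: SELECT 1",
    "2022-12-29 00:14:01.003 UTC 102 [BIND] LOG:  duration: 338.8 ms  bind <unnamed>: ROLLBACK",
    "2022-12-29 00:14:04.981 UTC 103 [BIND] LOG:  duration: 205.4 ms  bind <unnamed>: SELECT 2",
    "2022-12-29 00:14:05.200 UTC 104 [EXECUTE] LOG:  duration: 450.0 ms  execute ...",  # not a BIND
]
stamp = re.compile(r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d)")  # drop last seconds digit
buckets = Counter(
    stamp.match(line).group(1)
    for line in sample_log
    if "BIND" in line and "duration" in line
)
for bucket, count in sorted(buckets.items()):
    print(count, bucket)
```

A sudden jump in one bucket's count, with quiet neighbors, is the spike signature described in the original message.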
{
"msg_contents": "On Sat, Dec 31, 2022 at 02:26:08PM +0200, Maxim Boguk wrote:\n> Hi,\n> \n> When performing post-mortem analysis of some short latency spikes on a\n> heavily loaded database, I found that the reason for (less than 10 second\n> latency spike) wasn't on the EXECUTE stage but on the BIND stage.\n> At the same time graphical monitoring shows that during this few second\n> period there were some queries waiting in the BIND stage.\n> \n> Logging setup:\n> log_min_duration_statement=200ms\n> log_lock_waits=on\n> deadlock_timeout=100ms\n> So I expected that every lock waiting over 100ms (>deadlock_timeout) should\n> be in the log.\n> But in the log I see only spikes on slow BIND but not lock waits logged.\n\nWhat version postgres? What settings have non-default values ?\nWhat OS/version? What environment/hardware? VM/image/provider/...\nWhat are the queries that are running BIND ? What parameter types ?\nAre the slow BINDs failing? Are their paramters being logged ?\nWhat else is running besides postgres ? Are the DB clients local or\nremote ? It shouldn't matter, but what client library?\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 31 Dec 2022 08:32:03 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: How to analyze of short but heavy intermittent slowdown on BIND\n on production database (or BIND vs log_lock_waits)"
},
{
"msg_contents": "On Sat, Dec 31, 2022 at 4:32 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Sat, Dec 31, 2022 at 02:26:08PM +0200, Maxim Boguk wrote:\n> > Hi,\n> >\n> > When performing post-mortem analysis of some short latency spikes on a\n> > heavily loaded database, I found that the reason for (less than 10 second\n> > latency spike) wasn't on the EXECUTE stage but on the BIND stage.\n> > At the same time graphical monitoring shows that during this few second\n> > period there were some queries waiting in the BIND stage.\n> >\n> > Logging setup:\n> > log_min_duration_statement=200ms\n> > log_lock_waits=on\n> > deadlock_timeout=100ms\n> > So I expected that every lock waiting over 100ms (>deadlock_timeout)\n> should\n> > be in the log.\n> > But in the log I see only spikes on slow BIND but not lock waits logged.\n>\n> What version postgres? What settings have non-default values ?\n> What OS/version? What environment/hardware? VM/image/provider/...\n> What are the queries that are running BIND ? What parameter types ?\n> Are the slow BINDs failing? Are their paramters being logged ?\n> What else is running besides postgres ? Are the DB clients local or\n> remote ? It shouldn't matter, but what client library?\n>\n\nWhat version of postgres? - 14.6\n\nWhat settings have non-default values ? - a lot (it's 48 core Amazon EC2\nserver with 396GB of RAM)\n(e.g. it carefully tuned database for particular workload)\n\nWhat OS/version? - Ubuntu 20.04LTS\n\nWhat environment/hardware? - 48 core Amazon EC2 server with 396GB of RAM\nand local NVME storage\n(i3en.12xlarge)\n\nWhat are the queries that are running BIND ? 
- nothing special, e.g.\nduring problem period a lot completely different queries become stuck in\nBIND and PARSE stage but no long duration (>100ms) EXECUTE calls found, in\ngeneral it feel that whole BIND/PARSE mechanics lock for short period\n==== LOG SAMPLE ==========================\n2023-01-01 09:07:31.622 UTC 1848286 ****** from [local] [vxid:84/20886619\ntxid:0] [PARSE] LOG: duration: 235.472 ms parse <unnamed>: SELECT\nCOUNT(*) FROM \"job_stats_master\" WHERE (job_stats_master.created_at >\n= '2022-12-31 09:07:31.350000') AND (job_stats_master.created_at <\n'2023-01-01 09:07:31.350000') AND \"job_stats_master\".\"employer_id\" = ****\nAND \"job_stats_master\".\"action\" = 2 AND \"job_stats_master\".\"job_board_id\" =\n**** AND \"job_stats_master\".\"ip_matching_id\" = *****\n2023-01-01 09:07:31.622 UTC 1898699 ****** from [local] [vxid:158/22054921\ntxid:0] [BIND] LOG: duration: 231.274 ms bind <unnamed>: SELECT id, name\nFROM job_types WHERE id IN ($1)\n2023-01-01 09:07:31.622 UTC 1898699 ******* from [local] [vxid:158/22054921\ntxid:0] [BIND] DETAIL: parameters: $1 = '0'\n2023-01-01 09:07:31.622 UTC 1794756 ******* from [local] [vxid:281/10515416\ntxid:0] [BIND] LOG: duration: 231.024 ms bind <unnamed>: SELECT id, name\nFROM job_types WHERE id IN ($1)\n2023-01-01 09:07:31.622 UTC 1794756 ******* from [local] [vxid:281/10515416\ntxid:0] [BIND] DETAIL: parameters: $1 = '0'\n\n... 
5 pages of BIND/PARSE of different/unrelated to each other queries\nlogged with over 100ms runtime\n\n2023-01-01 09:07:31.623 UTC 1806315 ******* from [local] [vxid:231/17406673\ntxid:0] [BIND] LOG: duration: 139.372 ms bind <unnamed>: SELECT\nemployers.*, third_party_employer_pixels.facebook_pixel_id AS\nfacebook_pixel_id, third_party_employer_pixels.google_pixel_id AS\ngoogle_pixel_id, third_party_employer_pixels.google_actions AS\ngoogle_actions, employer_pixel_configurations.solution AS\ntracking_solution, employer_pixel_configurations.domain_name AS\ndomain, settings.use_multiple_bids FROM employers LEFT JOIN\nthird_party_employer_pixels ON third_party_employer_pixels.employer_id =\nemployers.id LEFT JOIN employer_pixel_configurations ON\nemployer_pixel_configurations.id = employers.id LEFT JOIN settings\n ON settings.id = employers.setting_id WHERE employers.id =\n$1\n2023-01-01 09:07:31.623 UTC 1806315 ******* from [local] [vxid:231/17406673\ntxid:0] [BIND] DETAIL: parameters: $1 = '*****'\n2023-01-01 09:07:31.624 UTC 1806321 ******* from [local] [vxid:176/21846997\ntxid:0] [BIND] LOG: duration: 120.237 ms bind <unnamed>: SELECT\njob_boards.*, enterprises.product_type,\nfeed_settings.use_employer_exported_name as use_employer_exported_name,\nintegration_job_board_settings.integration_status as integration_status\nFROM job_boards LEFT JOIN integration_job_board_settings ON\nintegration_job_board_settings.id =\njob_boards.integration_job_board_setting_id LEFT JOIN enterprises ON\nenterprises.id = job_boards.enterprise_id LEFT JOIN feed_settings ON\nfeed_settings.id = job_boards.feed_setting_id WHERE job_boards.id = $1\n2023-01-01 09:07:31.624 UTC 1806321 ******* from [local] [vxid:176/21846997\ntxid:0] [BIND] DETAIL: parameters: $1 = '****'\n===============================================================\nWhat really curious in the log: that every of 100+ stuck in PARSE/BIND\nstage queries that had been logged (and thus unstuck) in the same exact\nmoment... 
that highly likely means that they all had been stuck in the same\nsingle place.\nE.g. something locked the whole PARSE/BIND machinery (but not an EXECUTE)\nfor 200+ms.\n\nAre the slow BINDs failing?\nNo, they all executed successfully later after being unstuck.\n\nAre their parameters being logged ?\nYes.\n\nWhat else is running besides postgres ?\nNothing else , dedicated DB server.\n\nAre the DB clients local or remote ?\nRemote all over a fast network.\n\nIt shouldn't matter, but what client library?\n50% ROR (ruby on rails) / 50% java(jdbc).\n\n\nProblem that issue happens only few times per 24 hour and usual duration\nunder 1second\nso it very hard to catch problem with perf or gdb or strace.\n\n\n\n-- \nMaxim Boguk\nSenior Postgresql DBA\nhttps://dataegret.com/\n\nPhone UA: +380 99 143 0000\nPhone AU: +61 45 218 5678\n\nOn Sat, Dec 31, 2022 at 4:32 PM Justin Pryzby <pryzby@telsasoft.com> wrote:On Sat, Dec 31, 2022 at 02:26:08PM +0200, Maxim Boguk wrote:\n> Hi,\n> \n> When performing post-mortem analysis of some short latency spikes on a\n> heavily loaded database, I found that the reason for (less than 10 second\n> latency spike) wasn't on the EXECUTE stage but on the BIND stage.\n> At the same time graphical monitoring shows that during this few second\n> period there were some queries waiting in the BIND stage.\n> \n> Logging setup:\n> log_min_duration_statement=200ms\n> log_lock_waits=on\n> deadlock_timeout=100ms\n> So I expected that every lock waiting over 100ms (>deadlock_timeout) should\n> be in the log.\n> But in the log I see only spikes on slow BIND but not lock waits logged.\n\nWhat version postgres? What settings have non-default values ?\nWhat OS/version? What environment/hardware? VM/image/provider/...\nWhat are the queries that are running BIND ? What parameter types ?\nAre the slow BINDs failing? Are their paramters being logged ?\nWhat else is running besides postgres ? Are the DB clients local or\nremote ? 
It shouldn't matter, but what client library?What version of postgres? - 14.6What settings have non-default values ? - a lot (it's 48 core Amazon EC2 server with 396GB of RAM)(e.g. it carefully tuned database for particular workload)What OS/version? - Ubuntu 20.04LTSWhat environment/hardware? - 48 core Amazon EC2 server with 396GB of RAM and local NVME storage(i3en.12xlarge)What are the queries that are running BIND ? - nothing special, e.g. during problem period a lot completely different queries become stuck in BIND and PARSE stage but no long duration (>100ms) EXECUTE calls found, in general it feel that whole BIND/PARSE mechanics lock for short period==== LOG SAMPLE ==========================2023-01-01 09:07:31.622 UTC 1848286 ****** from [local] [vxid:84/20886619 txid:0] [PARSE] LOG: duration: 235.472 ms parse <unnamed>: SELECT COUNT(*) FROM \"job_stats_master\" WHERE (job_stats_master.created_at >= '2022-12-31 09:07:31.350000') AND (job_stats_master.created_at < '2023-01-01 09:07:31.350000') AND \"job_stats_master\".\"employer_id\" = **** AND \"job_stats_master\".\"action\" = 2 AND \"job_stats_master\".\"job_board_id\" = **** AND \"job_stats_master\".\"ip_matching_id\" = *****2023-01-01 09:07:31.622 UTC 1898699 ****** from [local] [vxid:158/22054921 txid:0] [BIND] LOG: duration: 231.274 ms bind <unnamed>: SELECT id, name FROM job_types WHERE id IN ($1)2023-01-01 09:07:31.622 UTC 1898699 ******* from [local] [vxid:158/22054921 txid:0] [BIND] DETAIL: parameters: $1 = '0'2023-01-01 09:07:31.622 UTC 1794756 ******* from [local] [vxid:281/10515416 txid:0] [BIND] LOG: duration: 231.024 ms bind <unnamed>: SELECT id, name FROM job_types WHERE id IN ($1)2023-01-01 09:07:31.622 UTC 1794756 ******* from [local] [vxid:281/10515416 txid:0] [BIND] DETAIL: parameters: $1 = '0'... 
5 pages of BIND/PARSE of different/unrelated to each other queries logged with over 100ms runtime2023-01-01 09:07:31.623 UTC 1806315 ******* from [local] [vxid:231/17406673 txid:0] [BIND] LOG: duration: 139.372 ms bind <unnamed>: SELECT employers.*, third_party_employer_pixels.facebook_pixel_id AS facebook_pixel_id, third_party_employer_pixels.google_pixel_id AS google_pixel_id, third_party_employer_pixels.google_actions AS google_actions, employer_pixel_configurations.solution AS tracking_solution, employer_pixel_configurations.domain_name AS domain, settings.use_multiple_bids FROM employers LEFT JOIN third_party_employer_pixels ON third_party_employer_pixels.employer_id = employers.id LEFT JOIN employer_pixel_configurations ON employer_pixel_configurations.id = employers.id LEFT JOIN settings ON settings.id = employers.setting_id WHERE employers.id = $12023-01-01 09:07:31.623 UTC 1806315 ******* from [local] [vxid:231/17406673 txid:0] [BIND] DETAIL: parameters: $1 = '*****'2023-01-01 09:07:31.624 UTC 1806321 ******* from [local] [vxid:176/21846997 txid:0] [BIND] LOG: duration: 120.237 ms bind <unnamed>: SELECT job_boards.*, enterprises.product_type, feed_settings.use_employer_exported_name as use_employer_exported_name, integration_job_board_settings.integration_status as integration_status FROM job_boards LEFT JOIN integration_job_board_settings ON integration_job_board_settings.id = job_boards.integration_job_board_setting_id LEFT JOIN enterprises ON enterprises.id = job_boards.enterprise_id LEFT JOIN feed_settings ON feed_settings.id = job_boards.feed_setting_id WHERE job_boards.id = $12023-01-01 09:07:31.624 UTC 1806321 ******* from [local] [vxid:176/21846997 txid:0] [BIND] DETAIL: parameters: $1 = '****'===============================================================What really curious in the log: that every of 100+ stuck in PARSE/BIND stage queries that had been logged (and thus unstuck) in the same exact moment... 
that highly likely means that they all had been stuck in the same single place.E.g. something locked the whole PARSE/BIND machinery (but not an EXECUTE) for 200+ms.Are the slow BINDs failing?No, they all executed successfully later after being unstuck.Are their parameters being logged ?Yes.What else is running besides postgres ?Nothing else , dedicated DB server.Are the DB clients local or remote ?Remote all over a fast network.It shouldn't matter, but what client library?50% ROR (ruby on rails) / 50% java(jdbc).Problem that issue happens only few times per 24 hour and usual duration under 1secondso it very hard to catch problem with perf or gdb or strace.-- Maxim BogukSenior Postgresql DBAhttps://dataegret.com/Phone UA: +380 99 143 0000Phone AU: +61 45 218 5678",
"msg_date": "Sun, 1 Jan 2023 13:34:50 +0200",
"msg_from": "Maxim Boguk <maxim.boguk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: How to analyze of short but heavy intermittent slowdown on BIND\n on production database (or BIND vs log_lock_waits)"
},
{
"msg_contents": "Howdy,\n\nFew additional questions:\n\n 1. How many concurrent, active connections are running when these BIND\n problems occur? select count(*) from pg_stat_activity where state\n in ('active','idle in transaction')\n 2. Are the queries using gigantic IN (<big list>) values?\n 3. Perhaps unrelated, but is log_temp_files turned on, and if so, do you\n have a lot of logs related to that?\n\nRegards,\nMichael Vitale, just another PG DBA\n\n\n\n\n\n\n\n\n\n\nHowdy,\n\nFew additional questions:\n\nHow many concurrent, active connections are running when these \nBIND problems occur? select count(*) from pg_stat_activity where state \nin ('active','idle in transaction')\nAre the queries using gigantic IN (<big list>) values?\nPerhaps unrelated, but is \nlog_temp_files turned on,\n and if so, do you have a lot of logs related to that?\n\nRegards,\nMichael Vitale, just another PG DBA",
"msg_date": "Sun, 1 Jan 2023 08:27:34 -0500",
"msg_from": "MichaelDBA <MichaelDBA@sqlexec.com>",
"msg_from_op": false,
"msg_subject": "Re: How to analyze of short but heavy intermittent slowdown on BIND\n on production database (or BIND vs log_lock_waits)"
},
{
"msg_contents": "On Sun, Jan 1, 2023 at 3:27 PM MichaelDBA <MichaelDBA@sqlexec.com> wrote:\n\n> Howdy,\n>\n> Few additional questions:\n>\n> 1. How many concurrent, active connections are running when these BIND\n> problems occur? select count(*) from pg_stat_activity where state in\n> ('active','idle in transaction')\n> 2. Are the queries using gigantic IN (<big list>) values?\n> 3. Perhaps unrelated, but is log_temp_files turned on, and if so, do\n> you have a lot of logs related to that?\n>\n> Regards,\n> Michael Vitale, just another PG DBA\n>\n\n1)usual load (e.g. no anomalies)\n10-20 concurrent query runs (e.g. issues isn't related to the load spike or\nsimilar anomalies)\nadditionally 5-10 short idle in transaction (usual amount too)\ntotal around 300 active connections to the database (after local pgbouncer\nin transaction mode)\n\n2)no... long BIND for huge parameter lists is a known issue for me, in this\ncase there is nothing like that... just (every?) PARSE/BIND stuck for a\nshort period (including ones which don't require pg_statistic table\naccess)...\nThere are some funny samples from the latest spike:\n2023-01-01 15:45:09.151 UTC 2421121 ******** from [local]\n[vxid:109/20732521 txid:0] [BIND] LOG: duration: 338.830 ms bind\n<unnamed>: ROLLBACK\n2023-01-01 15:45:09.151 UTC 2365255 ******** from [local] [vxid:41/21277531\ntxid:2504447286] [PARSE] LOG: duration: 338.755 ms parse <unnamed>:\nselect nextval ('jobs_id_seq')\nalong with normal select/insert/update/delete operations stuck for a short\ntime too...\n\n3)log_temp_files on for sure, I found no correlation with temp file usage,\nas well as no correlation between latency spikes and logged autovacuum\nactions.\n\nPS: '[BIND] LOG: duration: 338.830 ms bind <unnamed>: ROLLBACK' on a\ndefinitely not overloaded and perfectly healthy server - probably the most\ncurious log entry of 2022 year for me.\n\n-- \nMaxim Boguk\nSenior Postgresql DBA\nhttps://dataegret.com/\n\nPhone UA: +380 99 143 0000\nPhone 
AU: +61 45 218 5678\n\nOn Sun, Jan 1, 2023 at 3:27 PM MichaelDBA <MichaelDBA@sqlexec.com> wrote:\nHowdy,\n\nFew additional questions:\n\nHow many concurrent, active connections are running when these \nBIND problems occur? select count(*) from pg_stat_activity where state \nin ('active','idle in transaction')\nAre the queries using gigantic IN (<big list>) values?\nPerhaps unrelated, but is \nlog_temp_files turned on,\n and if so, do you have a lot of logs related to that?\n\nRegards,\nMichael Vitale, just another PG DBA1)usual load (e.g. no anomalies) 10-20 concurrent query runs (e.g. issues isn't related to the load spike or similar anomalies)additionally 5-10 short idle in transaction (usual amount too)total around 300 active connections to the database (after local pgbouncer in transaction mode)2)no... long BIND for huge parameter lists is a known issue for me, in this case there is nothing like that... just (every?) PARSE/BIND stuck for a short period (including ones which don't require pg_statistic table access)... There are some funny samples from the latest spike:2023-01-01 15:45:09.151 UTC 2421121 ******** from [local] [vxid:109/20732521 txid:0] [BIND] LOG: duration: 338.830 ms bind <unnamed>: ROLLBACK2023-01-01 15:45:09.151 UTC 2365255 ******** from [local] [vxid:41/21277531 txid:2504447286] [PARSE] LOG: duration: 338.755 ms parse <unnamed>: select nextval ('jobs_id_seq')along with normal select/insert/update/delete operations stuck for a short time too...3)log_temp_files on for sure, I found no correlation with temp file usage, as well as no correlation between latency spikes and logged autovacuum actions.PS: '[BIND] LOG: duration: 338.830 ms bind <unnamed>: ROLLBACK' on a definitely not overloaded and perfectly healthy server - probably the most curious log entry of 2022 year for me.-- Maxim BogukSenior Postgresql DBAhttps://dataegret.com/Phone UA: +380 99 143 0000Phone AU: +61 45 218 5678",
"msg_date": "Sun, 1 Jan 2023 18:30:55 +0200",
"msg_from": "Maxim Boguk <maxim.boguk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: How to analyze of short but heavy intermittent slowdown on BIND\n on production database (or BIND vs log_lock_waits)"
},
{
"msg_contents": "Hi Maxim,\n\n10-20 active, concurrent connections is way below any CPU load problem \nyou should have with 48 available vCPUs.\nYou never explicitly said what the load is, so what is it in the context \nof the 1,5,15?\n\nMaxim Boguk wrote on 1/1/2023 11:30 AM:\n> 1)usual load (e.g. no anomalies)\n> 10-20 concurrent query runs (e.g. issues isn't related to the load \n> spike or similar anomalies)\n> additionally 5-10 short idle in transaction (usual amount too)\n> total around 300 active connections to the database (after local \n> pgbouncer in transaction mode)\n\n\nRegards,\n\nMichael Vitale\n\nMichaeldba@sqlexec.com <mailto:michaelvitale@sqlexec.com>\n\n703-600-9343",
"msg_date": "Sun, 1 Jan 2023 11:43:10 -0500",
"msg_from": "MichaelDBA <MichaelDBA@sqlexec.com>",
"msg_from_op": false,
"msg_subject": "Re: How to analyze of short but heavy intermittent slowdown on BIND\n on production database (or BIND vs log_lock_waits)"
},
{
"msg_contents": "On Sun, Jan 1, 2023 at 6:43 PM MichaelDBA <MichaelDBA@sqlexec.com> wrote:\n\n> Hi Maxim,\n>\n> 10-20 active, concurrent connections is way below any CPU load problem you\n> should have with 48 available vCPUs.\n> You never explicitly said what the load is, so what is it in the context\n> of the 1,5,15?\n>\n>\nLA 10-15 all time, servers are really overprovisioned (2-3x by available\nCPU resources) because an application is quite sensitive to the database\nlatency.\nAnd during these latency spikes - EXECUTE work without any issues (e.g.\nonly PARSE/BIND suck).\n\n\n-- \nMaxim Boguk\nSenior Postgresql DBA\nhttps://dataegret.com/\n\nPhone UA: +380 99 143 0000\nPhone AU: +61 45 218 5678\n\nOn Sun, Jan 1, 2023 at 6:43 PM MichaelDBA <MichaelDBA@sqlexec.com> wrote:\nHi Maxim, \n\n10-20 active, concurrent connections is way below any CPU load problem \nyou should have with 48 available vCPUs. \nYou never explicitly said what the load is, so what is it in the context\n of the 1,5,15?\nLA 10-15 all time, servers are really overprovisioned (2-3x by available CPU resources) because an application is quite sensitive to the database latency. And during these latency spikes - EXECUTE work without any issues (e.g. only PARSE/BIND suck).-- Maxim BogukSenior Postgresql DBAhttps://dataegret.com/Phone UA: +380 99 143 0000Phone AU: +61 45 218 5678",
"msg_date": "Sun, 1 Jan 2023 18:51:07 +0200",
"msg_from": "Maxim Boguk <maxim.boguk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: How to analyze of short but heavy intermittent slowdown on BIND\n on production database (or BIND vs log_lock_waits)"
},
{
"msg_contents": "You said it's a dedicated server, but pgbouncer is running locally, \nright? PGBouncer has a small footprint, but is the CPU high for it?\n\nMaxim Boguk wrote on 1/1/2023 11:51 AM:\n>\n>\n> On Sun, Jan 1, 2023 at 6:43 PM MichaelDBA <MichaelDBA@sqlexec.com \n> <mailto:MichaelDBA@sqlexec.com>> wrote:\n>\n> Hi Maxim,\n>\n> 10-20 active, concurrent connections is way below any CPU load\n> problem you should have with 48 available vCPUs.\n> You never explicitly said what the load is, so what is it in the\n> context of the 1,5,15?\n>\n>\n> LA 10-15 all time, servers are really overprovisioned (2-3x by \n> available CPU resources) because an application is quite sensitive to \n> the database latency.\n> And during these latency spikes - EXECUTE work without any issues \n> (e.g. only PARSE/BIND suck).\n>\n>\n> -- \n> Maxim Boguk\n> Senior Postgresql DBA\n> https://dataegret.com/\n>\n> Phone UA: +380 99 143 0000\n> Phone AU: +61 45 218 5678\n>\n\n\nRegards,\n\nMichael Vitale\n\nMichaeldba@sqlexec.com <mailto:michaelvitale@sqlexec.com>\n\n703-600-9343",
"msg_date": "Sun, 1 Jan 2023 11:54:59 -0500",
"msg_from": "MichaelDBA <MichaelDBA@sqlexec.com>",
"msg_from_op": false,
"msg_subject": "Re: How to analyze of short but heavy intermittent slowdown on BIND\n on production database (or BIND vs log_lock_waits)"
},
{
"msg_contents": "On Sun, Jan 1, 2023 at 6:55 PM MichaelDBA <MichaelDBA@sqlexec.com> wrote:\n\n> You said it's a dedicated server, but pgbouncer is running locally,\n> right? PGBouncer has a small footprint, but is the CPU high for it?\n>\n\nThere are 4 pgbouncer processes in so_reuseport mode.\nI never saw more than 40% of a single CPU core per one pgbouncer process\n(most time under 20%).\nSo it's an unlikely result of pgbouncer being overloaded.\n\n\n-- \nMaxim Boguk\nSenior Postgresql DBA\nhttps://dataegret.com/\n\nPhone UA: +380 99 143 0000\nPhone AU: +61 45 218 5678\n\nOn Sun, Jan 1, 2023 at 6:55 PM MichaelDBA <MichaelDBA@sqlexec.com> wrote:\nYou said it's a dedicated server, but \npgbouncer is running locally, right? PGBouncer has a small footprint, \nbut is the CPU high for it?There are 4 pgbouncer processes in so_reuseport mode. I never saw more than 40% of a single CPU core per one pgbouncer process (most time under 20%).So it's an unlikely result of pgbouncer being overloaded. -- Maxim BogukSenior Postgresql DBAhttps://dataegret.com/Phone UA: +380 99 143 0000Phone AU: +61 45 218 5678",
"msg_date": "Sun, 1 Jan 2023 19:06:49 +0200",
"msg_from": "Maxim Boguk <maxim.boguk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: How to analyze of short but heavy intermittent slowdown on BIND\n on production database (or BIND vs log_lock_waits)"
},
{
"msg_contents": "On Sat, Dec 31, 2022 at 2:26 PM Maxim Boguk <maxim.boguk@gmail.com> wrote:\n\n> Hi,\n>\n> When performing post-mortem analysis of some short latency spikes on a\n> heavily loaded database, I found that the reason for (less than 10 second\n> latency spike) wasn't on the EXECUTE stage but on the BIND stage.\n> At the same time graphical monitoring shows that during this few second\n> period there were some queries waiting in the BIND stage.\n>\n> Logging setup:\n> log_min_duration_statement=200ms\n> log_lock_waits=on\n> deadlock_timeout=100ms\n> So I expected that every lock waiting over 100ms (>deadlock_timeout)\n> should be in the log.\n> But in the log I see only spikes on slow BIND but not lock waits logged.\n> (\n> grep BIND /var/log/postgresql/postgresql-2022-12-29.log | grep 'duration'\n> | perl -pe 's/^(2022-12-29 \\d\\d:\\d\\d:\\d).*$/$1/' | sort | uniq -c | less\n> ...\n> 9 2022-12-29 00:12:5\n> 2 2022-12-29 00:13:1\n> 3 2022-12-29 00:13:5\n> !!! 68 2022-12-29 00:14:0\n> 5 2022-12-29 00:14:1\n> 3 2022-12-29 00:14:2\n> 2 2022-12-29 00:14:3\n> ).\n> But no lock waits on the BIND stage logged during the problem period (and\n> no lock waits in general).\n> Similar issues happen a few times per day without any visible pattern (but\n> on the same tables usually).\n> No CPU or IO load/latency spikes found during problem periods.\n> No EXECUTE slowdown found in the log during that time.\n>\n\n\nFollowup research of this issue lead me to following results:\nEvery logged spike of BIND/PARSE response time correlated with\ncorresponding backend waiting on\nwait_event_type = LWLock\nwait_event = pg_stat_statements\nand all of these spikes happen during increment of\npg_stat_statements_info.dealloc counter.\n\nSome searching about this issue lead me to following blog post about\nsimilar issue:\nhttps://yhuelf.github.io/2021/09/30/pg_stat_statements_bottleneck.html\n\nHowever, we already have pg_stat_statements.max=10000 so further increase\nof this 
parameter\nseems counterproductive (the size of\n14/main/pg_stat_tmp/pgss_query_texts.stat is already over 20MB).\n\n\nOpen questions remains:\n1)Is it expected behaviour of pg_stat_statements to block every BIND/PARSE\nduring deallocation of least used entries for the whole period of cleanup?\n\n\n2)Any recommended workaround for this issue for systems with strict latency\nSLA\n(block every database query (used extended query protocol) for 200-500ms\n50+ times per day at random time - isn't acceptable for our case\nunfortunately)?\n\n\n3)Why only BIND/PARSE locks but not EXECUTE?\n(may be some difference in implementation of plan vs exec\npg_stat_statements counters?).\n\n\nKind Regards,\nMaxim\n\n\n-- \nMaxim Boguk\nSenior Postgresql DBA\nhttps://dataegret.com/\n\nPhone UA: +380 99 143 0000\nPhone AU: +61 45 218 5678\n\nOn Sat, Dec 31, 2022 at 2:26 PM Maxim Boguk <maxim.boguk@gmail.com> wrote:Hi,When performing post-mortem analysis of some short latency spikes on a heavily loaded database, I found that the reason for (less than 10 second latency spike) wasn't on the EXECUTE stage but on the BIND stage.At the same time graphical monitoring shows that during this few second period there were some queries waiting in the BIND stage.Logging setup:log_min_duration_statement=200mslog_lock_waits=ondeadlock_timeout=100msSo I expected that every lock waiting over 100ms (>deadlock_timeout) should be in the log.But in the log I see only spikes on slow BIND but not lock waits logged.(grep BIND /var/log/postgresql/postgresql-2022-12-29.log | grep 'duration' | perl -pe 's/^(2022-12-29 \\d\\d:\\d\\d:\\d).*$/$1/' | sort | uniq -c | less... 9 2022-12-29 00:12:5 2 2022-12-29 00:13:1 3 2022-12-29 00:13:5!!! 
68 2022-12-29 00:14:0 5 2022-12-29 00:14:1 3 2022-12-29 00:14:2 2 2022-12-29 00:14:3).But no lock waits on the BIND stage logged during the problem period (and no lock waits in general).Similar issues happen a few times per day without any visible pattern (but on the same tables usually).No CPU or IO load/latency spikes found during problem periods.No EXECUTE slowdown found in the log during that time.Followup research of this issue lead me to following results:Every logged spike of BIND/PARSE response time correlated with corresponding backend waiting onwait_event_type = LWLockwait_event = pg_stat_statementsand all of these spikes happen during increment of pg_stat_statements_info.dealloc counter.Some searching about this issue lead me to following blog post about similar issue:https://yhuelf.github.io/2021/09/30/pg_stat_statements_bottleneck.htmlHowever, we already have pg_stat_statements.max=10000 so further increase of this parameterseems counterproductive (the size of 14/main/pg_stat_tmp/pgss_query_texts.stat is already over 20MB).Open questions remains:1)Is it expected behaviour of pg_stat_statements to block every BIND/PARSE during deallocation of least used entries for the whole period of cleanup?2)Any recommended workaround for this issue for systems with strict latency SLA(block every database query (used extended query protocol) for 200-500ms 50+ times per day at random time - isn't acceptable for our case unfortunately)?3)Why only BIND/PARSE locks but not EXECUTE?(may be some difference in implementation of plan vs exec pg_stat_statements counters?).Kind Regards,Maxim-- Maxim BogukSenior Postgresql DBAhttps://dataegret.com/Phone UA: +380 99 143 0000Phone AU: +61 45 218 5678",
"msg_date": "Thu, 5 Jan 2023 12:57:00 +0200",
"msg_from": "Maxim Boguk <maxim.boguk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: How to analyze of short but heavy intermittent slowdown on BIND\n on production database (or BIND vs log_lock_waits)"
},
{
"msg_contents": "What happens if you take pg_stat_statements out of the picture (remove \nfrom shared_preload_libraries)? Does your BIND problem go away?\n\n\n\n\nWhat happens if you take pg_stat_statements\n out of the picture (remove from\n shared_preload_libraries)? Does your BIND problem go away?",
"msg_date": "Thu, 5 Jan 2023 06:31:46 -0500",
"msg_from": "MichaelDBA <MichaelDBA@sqlexec.com>",
"msg_from_op": false,
"msg_subject": "Re: How to analyze of short but heavy intermittent slowdown on BIND\n on production database (or BIND vs log_lock_waits)"
},
{
"msg_contents": "On Thu, Jan 5, 2023 at 1:31 PM MichaelDBA <MichaelDBA@sqlexec.com> wrote:\n\n>\n> What happens if you take pg_stat_statements out of the picture (remove\n> from shared_preload_libraries)? Does your BIND problem go away?\n>\n\nI didn't test this idea, because it requires restart of the database (it\ncannot be done quickly) and without pg_stat_statements there will be no\nadequate performance monitoring of the database.\nBut I'm pretty sure that the issue will go away with pg_stat_statements\ndisabled.\n\n-- \nMaxim Boguk\nSenior Postgresql DBA\nhttps://dataegret.com/\n\nPhone UA: +380 99 143 0000\nPhone AU: +61 45 218 5678\n\nOn Thu, Jan 5, 2023 at 1:31 PM MichaelDBA <MichaelDBA@sqlexec.com> wrote:\n\nWhat happens if you take pg_stat_statements\n out of the picture (remove from\n shared_preload_libraries)? Does your BIND problem go away?\n\nI didn't test this idea, because it requires restart of the database (it cannot be done quickly) and without pg_stat_statements there will be no adequate performance monitoring of the database.But I'm pretty sure that the issue will go away with pg_stat_statements disabled.-- Maxim BogukSenior Postgresql DBAhttps://dataegret.com/Phone UA: +380 99 143 0000Phone AU: +61 45 218 5678",
"msg_date": "Thu, 5 Jan 2023 13:44:33 +0200",
"msg_from": "Maxim Boguk <maxim.boguk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: How to analyze of short but heavy intermittent slowdown on BIND\n on production database (or BIND vs log_lock_waits)"
},
{
"msg_contents": "Well if you find out for sure, please let me know. I'm very interested \nin the outcome of this problem.\n\nMaxim Boguk wrote on 1/5/2023 6:44 AM:\n>\n>\n> On Thu, Jan 5, 2023 at 1:31 PM MichaelDBA <MichaelDBA@sqlexec.com \n> <mailto:MichaelDBA@sqlexec.com>> wrote:\n>\n>\n> What happens if you takepg_stat_statements out of the picture\n> (remove from shared_preload_libraries)? Does your BIND problem go\n> away?\n>\n>\n> I didn't test this idea, because it requires restart of the database \n> (it cannot be done quickly) and without pg_stat_statementsthere will \n> be no adequate performance monitoring of the database.\n> But I'm pretty sure that the issue will go away with \n> pg_stat_statements disabled.\n>\n> -- \n> Maxim Boguk\n> Senior Postgresql DBA\n> https://dataegret.com/\n>\n> Phone UA: +380 99 143 0000\n> Phone AU: +61 45 218 5678\n>\n\n\n\n\n\n\nWell if you find out for sure, please let me\n know. I'm very interested in the outcome of this problem.\n\nMaxim Boguk wrote on 1/5/2023 6:44 AM:\n\n\nOn Thu, Jan 5, \n2023 at 1:31 PM MichaelDBA <MichaelDBA@sqlexec.com> wrote:\nWhat happens if you take pg_stat_statements\n out of the picture (remove from\n shared_preload_libraries)? Does your BIND problem go away?\nI didn't test this idea, because it requires \nrestart of the database (it cannot be done quickly) and without pg_stat_statements there will be no adequate performance \nmonitoring of the database.But I'm pretty sure that the issue will go away \nwith pg_stat_statements\n disabled.--\n Maxim BogukSenior Postgresql\n DBAhttps://dataegret.com/Phone UA: +380 \n99 143 0000Phone AU: +61 45 218 5678",
"msg_date": "Thu, 5 Jan 2023 06:46:49 -0500",
"msg_from": "MichaelDBA <MichaelDBA@sqlexec.com>",
"msg_from_op": false,
"msg_subject": "Re: How to analyze of short but heavy intermittent slowdown on BIND\n on production database (or BIND vs log_lock_waits)"
}
] |
[
{
"msg_contents": "Hello,\n\nI am trying to speed up the initial logical replication sync process. The database being replicated is dominated by one table that is 750GB (heap). The process quickly boils down to a single COPY writing into the subscriber. We have dropped all indexes and key constraints in the subscriber but we are seeing the sync take >24 hours before we have to kill the process (to avoid so much WAL being reserved on the publisher).\n\nI haven’t done a huge amount of performance tuning at the Linux level before as I’m used to working with cloud-managed installations where you obviously don’t have access to the underlying host. However, in this case, the subscriber instance is not a cloud-managed one.\n\nCan anyone give comment on what might be a reasonable throughput in MB/s for a single COPY operation?\n\nThe material I’ve read on I/O talks about saturating the device … I’m pretty sure that a single COPY operation is not capable of doing this. It’s therefore one thing to see the advertised top-line figures about IOPS and throughput, vs what you can actually do with the single COPY. I’d be interested in hearing what other people are able to get as a throughput figure for COPY.\n\n-Joe\n\n\n\n\n",
"msg_date": "Fri, 6 Jan 2023 21:09:54 +0000",
"msg_from": "Joe Wildish <joe@lateraljoin.com>",
"msg_from_op": true,
"msg_subject": "Max write throughput for single COPY"
}
] |
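One way to get a concrete MB/s figure for a single COPY, assuming the subscriber runs PostgreSQL 14 or later (the release that added the `pg_stat_progress_copy` view): sample `bytes_processed` twice and divide the delta by the elapsed time.

```sql
-- Run on the subscriber while the initial table-sync COPY is active.
SELECT pid,
       relid::regclass AS target_table,
       bytes_processed,
       pg_size_pretty(bytes_processed) AS so_far,
       now() AS sampled_at
FROM pg_stat_progress_copy;
```

Two samples taken a minute apart give throughput as the difference in `bytes_processed` divided by 60 seconds, with no need to instrument the COPY itself.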
[
{
"msg_contents": "Hello,\n\nWe have a table containing ~1.75 billion rows, using 170GB storage.\nThe table schema is the following:\n\nmessages=# \\d messages\n Table \"public.messages\"\n Column | Type | Collation | Nullable | Default \n--------------+---------+-----------+----------+---------\n mid | bigint | | not null | \n channel | bigint | | not null | \n member | integer | | | \n sender | bigint | | not null | \n original_mid | bigint | | | \n guild | bigint | | | \nIndexes:\n \"messages_pkey\" PRIMARY KEY, btree (mid)\n\n\nThis table is used essentially as a key-value store; rows are accessed\nonly with `mid` primary key. Additionally, inserted rows may only be\ndeleted, but never updated.\n\nWe only run the following queries:\n- INSERT INTO messages VALUES (...data...);\n- SELECT * FROM messages WHERE mid = $1;\n- DELETE FROM messages WHERE mid = $1;\n- DELETE FROM messages WHERE mid IN ($1...$n);\n- SELECT count(*) FROM messages;\n\nFor the \"IN\" query, it is possible for there to be up to 100\nparameters, and it is possible that none of them will match an existing\nrow.\n\nSo, the problem: I don't know how to best store this data in\npostgres, or what system requirements would be needed.\nOriginally, this table did not contain a substantial amount of data,\nand so I stored it in the same database as our CRUD user data. However,\nas the table became larger, cache was being allocated to (mostly\nunused) historical data from the `messages` table, and I decided to\nmove the large table to its own postgres instance.\n\nAt the same time, I partitioned the table, with TimescaleDB's automatic\ntime-series partitioning, because our data is essentially time-series\n(`mid` values are Twitter-style snowflakes) and it was said that\npartitioning would improve performance.\nThis ended up being a mistake... 
shared_buffers memory usage went way\nup, from the 20GB of the previous combined database to 28GB for just\nthe messages database, and trying to lower shared_buffers at all made\nthe database start throwing \"out of shared memory\" errors when running\nDELETE queries. A TimescaleDB update did improve this, but 28GB is way\nmore memory than I can afford to allocate to this database - instead of\n\"out of shared memory\", it gets OOM killed by the system.\n\nWhat is the best course of action here?\n- Ideally, I would like to host this database on a machine with 4\n (Ryzen) cores, 8GB RAM, and tiered storage (our cloud provider doesn't\n support adding additional local storage to a VPS plan). Of course,\n this seems very unrealistic, so it's not a requirement, but the\n closer we can get to this, the better.\n- Is it a good idea to use table partitioning? I heard advice that one\n should partition tables with above a couple million rows, but I don't\n know how true this is. We have a table with ~6mil rows in our main\n database that has somewhat slow lookups, but we also have a table\n with ~13mil rows that has fast lookups, so I'm not sure.\n\nThanks\nspiral\n\n\n",
"msg_date": "Sun, 8 Jan 2023 07:02:01 -0500",
"msg_from": "spiral <spiral@spiral.sh>",
"msg_from_op": true,
"msg_subject": "Advice on best way to store a large amount of data in postgresql"
},
{
"msg_contents": "That’s crazy only having 8GB memory when you have tables with over 100GBs. One general rule of thumb is have enough memory to hold the biggest index.\n\nSent from my iPad\n\n> On Jan 9, 2023, at 3:23 AM, spiral <spiral@spiral.sh> wrote:\n> \n> Hello,\n> \n> We have a table containing ~1.75 billion rows, using 170GB storage.\n> The table schema is the following:\n> \n> messages=# \\d messages\n> Table \"public.messages\"\n> Column | Type | Collation | Nullable | Default \n> --------------+---------+-----------+----------+---------\n> mid | bigint | | not null | \n> channel | bigint | | not null | \n> member | integer | | | \n> sender | bigint | | not null | \n> original_mid | bigint | | | \n> guild | bigint | | | \n> Indexes:\n> \"messages_pkey\" PRIMARY KEY, btree (mid)\n> \n> \n> This table is used essentially as a key-value store; rows are accessed\n> only with `mid` primary key. Additionally, inserted rows may only be\n> deleted, but never updated.\n> \n> We only run the following queries:\n> - INSERT INTO messages VALUES (...data...);\n> - SELECT * FROM messages WHERE mid = $1;\n> - DELETE FROM messages WHERE mid = $1;\n> - DELETE FROM messages WHERE mid IN ($1...$n);\n> - SELECT count(*) FROM messages;\n> \n> For the \"IN\" query, it is possible for there to be up to 100\n> parameters, and it is possible that none of them will match an existing\n> row.\n> \n> So, the problem: I don't know how to best store this data in\n> postgres, or what system requirements would be needed.\n> Originally, this table did not contain a substantial amount of data,\n> and so I stored it in the same database as our CRUD user data. 
However,\n> as the table became larger, cache was being allocated to (mostly\n> unused) historical data from the `messages` table, and I decided to\n> move the large table to its own postgres instance.\n> \n> At the same time, I partitioned the table, with TimescaleDB's automatic\n> time-series partitioning, because our data is essentially time-series\n> (`mid` values are Twitter-style snowflakes) and it was said that\n> partitioning would improve performance.\n> This ended up being a mistake... shared_buffers memory usage went way\n> up, from the 20GB of the previous combined database to 28GB for just\n> the messages database, and trying to lower shared_buffers at all made\n> the database start throwing \"out of shared memory\" errors when running\n> DELETE queries. A TimescaleDB update did improve this, but 28GB is way\n> more memory than I can afford to allocate to this database - instead of\n> \"out of shared memory\", it gets OOM killed by the system.\n> \n> What is the best course of action here?\n> - Ideally, I would like to host this database on a machine with 4\n> (Ryzen) cores, 8GB RAM, and tiered storage (our cloud provider doesn't\n> support adding additional local storage to a VPS plan). Of course,\n> this seems very unrealistic, so it's not a requirement, but the\n> closer we can get to this, the better.\n> - Is it a good idea to use table partitioning? I heard advice that one\n> should partition tables with above a couple million rows, but I don't\n> know how true this is. We have a table with ~6mil rows in our main\n> database that has somewhat slow lookups, but we also have a table\n> with ~13mil rows that has fast lookups, so I'm not sure.\n> \n> Thanks\n> spiral\n> \n> \n\n\n\n",
"msg_date": "Mon, 9 Jan 2023 05:39:45 -0500",
"msg_from": "\"Michaeldba@sqlexec.com\" <Michaeldba@sqlexec.com>",
"msg_from_op": false,
"msg_subject": "Re: Advice on best way to store a large amount of data in postgresql"
},
{
"msg_contents": "On Sun, Jan 08, 2023 at 07:02:01AM -0500, spiral wrote:\n> This table is used essentially as a key-value store; rows are accessed\n> only with `mid` primary key. Additionally, inserted rows may only be\n> deleted, but never updated.\n> \n> We only run the following queries:\n> - INSERT INTO messages VALUES (...data...);\n> - SELECT * FROM messages WHERE mid = $1;\n> - DELETE FROM messages WHERE mid = $1;\n> - DELETE FROM messages WHERE mid IN ($1...$n);\n> - SELECT count(*) FROM messages;\n\nGreat - it's good to start with the queries to optimize.\n\nAre you using the extended query protocol with \"bind\" parameters, or are they\nescaped and substituted by the client library ?\n\n> So, the problem: I don't know how to best store this data in\n> postgres, or what system requirements would be needed.\n> Originally, this table did not contain a substantial amount of data,\n> and so I stored it in the same database as our CRUD user data. However,\n> as the table became larger, cache was being allocated to (mostly\n> unused) historical data from the `messages` table, and I decided to\n> move the large table to its own postgres instance.\n> \n> At the same time, I partitioned the table, with TimescaleDB's automatic\n> time-series partitioning, because our data is essentially time-series\n> (`mid` values are Twitter-style snowflakes) and it was said that\n> partitioning would improve performance.\n> This ended up being a mistake... shared_buffers memory usage went way\n> up, from the 20GB of the previous combined database to 28GB for just\n> the messages database, and trying to lower shared_buffers at all made\n> the database start throwing \"out of shared memory\" errors when running\n> DELETE queries. A TimescaleDB update did improve this, but 28GB is way\n> more memory than I can afford to allocate to this database - instead of\n> \"out of shared memory\", it gets OOM killed by the system.\n\nCan you avoid using DELETE and instead use DROP ? 
I mean, can you\narrange your partitioning such that the things to be dropped are all in\none partition, to handle in bulk ? That's one of the main reasons for\nusing partitioning.\n\n(Or, as a worse option, if you need to use DELETE, can you change the\nquery to DELETE one MID at a time, and loop over MIDs?)\n\nWhat version of postgres is it ? Ah, I found that you reported the same thing\nat least one other place. (It'd be useful to include here that information as\nwell as the prior discussion with other product/vendor).\n\nhttps://github.com/timescale/timescaledb/issues/5075\n\nIn this other issue report, you said that you increased\nmax_locks_per_transaction. I suppose you need to increase it further,\nor decrease your chunk size. How many \"partitions\" do you have\n(actually, timescale uses inheritance) ?\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 9 Jan 2023 11:56:47 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Advice on best way to store a large amount of data in postgresql"
},
{
"msg_contents": "Hi Spiral,\n\nIf I were you, I would absolutely consider using table partitioning. There\nare a couple of questions to be answered.\n1. What is the rate/speed of the table's growth?\n2. What is the range of values you use for mid columns to query the table?\nAre they generally close to each other? Or, are they generally closer to\nthe newest rows?\n3. What is your speed limitation/expectation for the query execution time?\n4. What is the version of PostgreSQL installation you use?\n\nBest regards.\nSamed YILDIRIM\n\n\nOn Mon, 9 Jan 2023 at 10:23, spiral <spiral@spiral.sh> wrote:\n\n> Hello,\n>\n> We have a table containing ~1.75 billion rows, using 170GB storage.\n> The table schema is the following:\n>\n> messages=# \\d messages\n> Table \"public.messages\"\n> Column | Type | Collation | Nullable | Default\n> --------------+---------+-----------+----------+---------\n> mid | bigint | | not null |\n> channel | bigint | | not null |\n> member | integer | | |\n> sender | bigint | | not null |\n> original_mid | bigint | | |\n> guild | bigint | | |\n> Indexes:\n> \"messages_pkey\" PRIMARY KEY, btree (mid)\n>\n>\n> This table is used essentially as a key-value store; rows are accessed\n> only with `mid` primary key. Additionally, inserted rows may only be\n> deleted, but never updated.\n>\n> We only run the following queries:\n> - INSERT INTO messages VALUES (...data...);\n> - SELECT * FROM messages WHERE mid = $1;\n> - DELETE FROM messages WHERE mid = $1;\n> - DELETE FROM messages WHERE mid IN ($1...$n);\n> - SELECT count(*) FROM messages;\n>\n> For the \"IN\" query, it is possible for there to be up to 100\n> parameters, and it is possible that none of them will match an existing\n> row.\n>\n> So, the problem: I don't know how to best store this data in\n> postgres, or what system requirements would be needed.\n> Originally, this table did not contain a substantial amount of data,\n> and so I stored it in the same database as our CRUD user data. 
However,\n> as the table became larger, cache was being allocated to (mostly\n> unused) historical data from the `messages` table, and I decided to\n> move the large table to its own postgres instance.\n>\n> At the same time, I partitioned the table, with TimescaleDB's automatic\n> time-series partitioning, because our data is essentially time-series\n> (`mid` values are Twitter-style snowflakes) and it was said that\n> partitioning would improve performance.\n> This ended up being a mistake... shared_buffers memory usage went way\n> up, from the 20GB of the previous combined database to 28GB for just\n> the messages database, and trying to lower shared_buffers at all made\n> the database start throwing \"out of shared memory\" errors when running\n> DELETE queries. A TimescaleDB update did improve this, but 28GB is way\n> more memory than I can afford to allocate to this database - instead of\n> \"out of shared memory\", it gets OOM killed by the system.\n>\n> What is the best course of action here?\n> - Ideally, I would like to host this database on a machine with 4\n> (Ryzen) cores, 8GB RAM, and tiered storage (our cloud provider doesn't\n> support adding additional local storage to a VPS plan). Of course,\n> this seems very unrealistic, so it's not a requirement, but the\n> closer we can get to this, the better.\n> - Is it a good idea to use table partitioning? I heard advice that one\n> should partition tables with above a couple million rows, but I don't\n> know how true this is. We have a table with ~6mil rows in our main\n> database that has somewhat slow lookups, but we also have a table\n> with ~13mil rows that has fast lookups, so I'm not sure.\n>\n> Thanks\n> spiral\n>\n>\n>",
"msg_date": "Tue, 10 Jan 2023 01:14:51 +0200",
"msg_from": "Samed YILDIRIM <samed@reddoc.net>",
"msg_from_op": false,
"msg_subject": "Re: Advice on best way to store a large amount of data in postgresql"
},
{
"msg_contents": "(re-sending this because I forgot to use \"reply all\". Sorry!)\n\nOn Mon, 9 Jan 2023 11:56:47 -0600\nJustin Pryzby <pryzby@telsasoft.com> wrote:\n\n> Are you using the extended query protocol with \"bind\" parameters, or\n> are they escaped and substituted by the client library ?\n\nOur client library uses parameters, yes. \"$1\" is passed literally to\npostgres.\n\n> can you arrange your partitioning such that the things to be dropped\n> are all in one partition, to handle in bulk ?\n\nUnfortunately, no. Deletes are all generated from user actions - if a\nuser deletes one message, we need to delete one single row from our\ndatabase.\n\n> [...] one of the main reasons for using partitioning\n> How many \"partitions\" do you have (actually, timescale uses\n> inheritance) ?\n\nWe have ~1600 timescaledb chunks. We could increase the chunk count. We\ncould also stop using partitions/chunks if it would improve things. I'm\ncurrently setting up a test database without partitioning to see what\nthe performance would look like.\n\nspiral\n\n\n",
"msg_date": "Tue, 10 Jan 2023 04:25:38 -0500",
"msg_from": "spiral <spiral@spiral.sh>",
"msg_from_op": true,
"msg_subject": "Re: Advice on best way to store a large amount of data in\n postgresql"
}
] |
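Justin's suggestion in the thread above (DELETE one mid at a time instead of a large IN list) can be sketched as a plpgsql loop; the mids in the array below are placeholders, not values from the original report:

```sql
DO $$
DECLARE
  m bigint;
BEGIN
  -- Hypothetical mids; in practice the application supplies them.
  FOREACH m IN ARRAY ARRAY[111111111111, 222222222222, 333333333333]::bigint[]
  LOOP
    DELETE FROM messages WHERE mid = m;  -- one primary-key lookup per row
  END LOOP;
END;
$$;
```

With range partitioning on `mid`, each single-row DELETE should prune to one partition, which keeps the per-statement lock count far below what a 100-element IN list across ~1600 chunks requires.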
[
{
"msg_contents": "The default is enable_bitmapscan on. However, TPC-H.query17 get slower running on my NVMe SSD (WD SN850) after switching on the parameter: latency drop from 9secs to 16secs. During a B-tree Index Scan, bitmapscan optimization converts random I/O into sequential. However, many users use SSDs rather than HDDs. But they may not know the trick. Is there a possibility that can change the default value to off?\n\nThanks!\n\n\n\n\n\n\n\nThe\n default is enable_bitmapscan on. However, TPC-H.query17\n get slower running on my NVMe SSD (WD SN850) after switching on the parameter: latency drop from 9secs to 16secs. During a B-tree Index Scan, bitmapscan optimization converts random I/O into sequential. However, many users use SSDs rather than HDDs. But they\n may not know the trick. Is there a possibility that can change the default value to off?\n\n\n\nThanks!",
"msg_date": "Sat, 14 Jan 2023 14:51:03 +0000",
"msg_from": "\"hehaochen@hotmail.com\" <hehaochen@hotmail.com>",
"msg_from_op": true,
"msg_subject": "change the default value of enable_bitmapscan to off"
},
{
"msg_contents": "Hi\n\n\nso 14. 1. 2023 v 15:51 odesílatel hehaochen@hotmail.com <\nhehaochen@hotmail.com> napsal:\n\n> The default is enable_bitmapscan on. However, TPC-H.query17 get slower\n> running on my NVMe SSD (WD SN850) after switching on the parameter: latency\n> drop from 9secs to 16secs. During a B-tree Index Scan, bitmapscan\n> optimization converts random I/O into sequential. However, many users use\n> SSDs rather than HDDs. But they may not know the trick. Is there a\n> possibility that can change the default value to off?\n>\n\nI don't think it can be disabled by default.\n\nWhen you have fast SSD disk, then common setting is decreasing\nrandom_page_cost to some value to 2 or maybe 1.5\n\nRegards\n\nPavel\n\n\n>\n> Thanks!\n>\n\nHiso 14. 1. 2023 v 15:51 odesílatel hehaochen@hotmail.com <hehaochen@hotmail.com> napsal:\n\nThe\n default is enable_bitmapscan on. However, TPC-H.query17\n get slower running on my NVMe SSD (WD SN850) after switching on the parameter: latency drop from 9secs to 16secs. During a B-tree Index Scan, bitmapscan optimization converts random I/O into sequential. However, many users use SSDs rather than HDDs. But they\n may not know the trick. Is there a possibility that can change the default value to off?I don't think it can be disabled by default.When you have fast SSD disk, then common setting is decreasing random_page_cost to some value to 2 or maybe 1.5RegardsPavel \n\n\n\nThanks!",
"msg_date": "Sat, 14 Jan 2023 16:00:29 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: change the default value of enable_bitmapscan to off"
},
{
"msg_contents": "\"hehaochen@hotmail.com\" <hehaochen@hotmail.com> writes:\n> The default is enable_bitmapscan on. However, TPC-H.query17 get slower running on my NVMe SSD (WD SN850) after switching on the parameter: latency drop from 9secs to 16secs. During a B-tree Index Scan, bitmapscan optimization converts random I/O into sequential. However, many users use SSDs rather than HDDs. But they may not know the trick. Is there a possibility that can change the default value to off?\n\nUse ALTER SYSTEM SET, or edit postgresql.conf:\n\nhttps://www.postgresql.org/docs/current/config-setting.html\n\nNote that changing planner parameters on the basis of a single\nquery getting slower is a classic beginner error. You need\nto think about the totality of the installation's workload.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 14 Jan 2023 10:57:07 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: change the default value of enable_bitmapscan to off"
}
] |
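The tuning the replies above point to, expressed as SQL; 1.5 is the ballpark Pavel mentions for fast SSDs, not a measured value, so test it at session level before persisting:

```sql
-- Experiment in the current session only:
SET random_page_cost = 1.5;
-- Re-run the slow query under EXPLAIN (ANALYZE, BUFFERS) and compare plans.

-- If the whole workload benefits, persist the setting cluster-wide:
ALTER SYSTEM SET random_page_cost = 1.5;
SELECT pg_reload_conf();
```

Unlike setting enable_bitmapscan = off, this adjusts the planner's cost model rather than vetoing a plan type outright, which is the workload-wide view Tom's reply argues for.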
[
{
"msg_contents": "Hi,\nWe have a Postgres 11.16 DB which is continuously connected to informatica\nand data gets read from it continuously.\n\nWhen we have to ALTER TABLE.. ADD COLUMN.. it gets blocked by the SELECTs\non the table mentioned by process above.\n\nIs there any way to ALTER the table concurrently without getting blocked?\nAny parameter or option? Can someone give a specific command?\n\nRegards,\nAditya.\n\nHi,We have a Postgres 11.16 DB which is continuously connected to informatica and data gets read from it continuously.When we have to ALTER TABLE.. ADD COLUMN.. it gets blocked by the SELECTs on the table mentioned by process above. Is there any way to ALTER the table concurrently without getting blocked? Any parameter or option? Can someone give a specific command?Regards,Aditya.",
"msg_date": "Thu, 19 Jan 2023 23:00:41 +0530",
"msg_from": "aditya desai <admad123@gmail.com>",
"msg_from_op": true,
"msg_subject": "ALTER STATEMENT getting blocked"
},
{
"msg_contents": "aditya desai <admad123@gmail.com> writes:\n> We have a Postgres 11.16 DB which is continuously connected to informatica\n> and data gets read from it continuously.\n\n> When we have to ALTER TABLE.. ADD COLUMN.. it gets blocked by the SELECTs\n> on the table mentioned by process above.\n\n> Is there any way to ALTER the table concurrently without getting blocked?\n> Any parameter or option? Can someone give a specific command?\n\nALTER TABLE requires exclusive lock to do that, so it will queue up\nbehind any existing table locks --- but then new lock requests will\nqueue up behind its request. So this'd only happen if your existing\nreading transactions don't terminate. Very long-running transactions\nare unfriendly to other transactions for lots of reasons including\nthis one; see if you can fix your application to avoid that. Or\nmanually cancel the blocking transaction(s) after the ALTER begins\nwaiting.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 19 Jan 2023 12:45:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ALTER STATEMENT getting blocked"
},
{
"msg_contents": "Do something like this to get it without being behind other \ntransactions...You either get in and get your work done or try again\n\nDO language plpgsql $$\nBEGIN\nFOR get_lock IN 1 .. 100 LOOP\n BEGIN\n ALTER TABLE mytable <do something>;\n EXIT;\n END;\nEND LOOP;\nEND;\n$$;\n\n\n\nTom Lane wrote on 1/19/2023 12:45 PM:\n> aditya desai <admad123@gmail.com> writes:\n>> We have a Postgres 11.16 DB which is continuously connected to informatica\n>> and data gets read from it continuously.\n>> When we have to ALTER TABLE.. ADD COLUMN.. it gets blocked by the SELECTs\n>> on the table mentioned by process above.\n>> Is there any way to ALTER the table concurrently without getting blocked?\n>> Any parameter or option? Can someone give a specific command?\n> ALTER TABLE requires exclusive lock to do that, so it will queue up\n> behind any existing table locks --- but then new lock requests will\n> queue up behind its request. So this'd only happen if your existing\n> reading transactions don't terminate. Very long-running transactions\n> are unfriendly to other transactions for lots of reasons including\n> this one; see if you can fix your application to avoid that. Or\n> manually cancel the blocking transaction(s) after the ALTER begins\n> waiting.\n>\n> \t\t\tregards, tom lane\n>\n>\n\n\nRegards,\n\nMichael Vitale\n\nMichaeldba@sqlexec.com <mailto:michaelvitale@sqlexec.com>\n\n703-600-9343",
"msg_date": "Thu, 19 Jan 2023 16:06:17 -0500",
"msg_from": "MichaelDBA <MichaelDBA@sqlexec.com>",
"msg_from_op": false,
"msg_subject": "Re: ALTER STATEMENT getting blocked"
},
{
"msg_contents": "Thanks All. Let me check this and get back to you.\n\nOn Fri, Jan 20, 2023 at 2:36 AM MichaelDBA <MichaelDBA@sqlexec.com> wrote:\n\n> Do something like this to get it without being behind other\n> transactions...You either get in and get your work done or try again\n>\n> DO language plpgsql $$\n> BEGIN\n> FOR get_lock IN 1 .. 100 LOOP\n> BEGIN\n> ALTER TABLE mytable <do something>;\n> EXIT;\n> END;\n> END LOOP;\n> END;\n> $$;\n>\n>\n>\n> Tom Lane wrote on 1/19/2023 12:45 PM:\n>\n> aditya desai <admad123@gmail.com> <admad123@gmail.com> writes:\n>\n> We have a Postgres 11.16 DB which is continuously connected to informatica\n> and data gets read from it continuously.\n>\n> When we have to ALTER TABLE.. ADD COLUMN.. it gets blocked by the SELECTs\n> on the table mentioned by process above.\n>\n> Is there any way to ALTER the table concurrently without getting blocked?\n> Any parameter or option? Can someone give a specific command?\n>\n> ALTER TABLE requires exclusive lock to do that, so it will queue up\n> behind any existing table locks --- but then new lock requests will\n> queue up behind its request. So this'd only happen if your existing\n> reading transactions don't terminate. Very long-running transactions\n> are unfriendly to other transactions for lots of reasons including\n> this one; see if you can fix your application to avoid that. Or\n> manually cancel the blocking transaction(s) after the ALTER begins\n> waiting.\n>\n> \t\t\tregards, tom lane\n>\n>\n>\n>\n>\n> Regards,\n>\n> Michael Vitale\n>\n> Michaeldba@sqlexec.com <michaelvitale@sqlexec.com>\n>\n> 703-600-9343\n>\n>\n>\n>",
"msg_date": "Sun, 22 Jan 2023 16:59:14 +0530",
"msg_from": "aditya desai <admad123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: ALTER STATEMENT getting blocked"
}
] |
[
{
"msg_contents": "Hi,\nIs there any way to improve performance of LIKE clause on VIEWS.\n\nselect * From request_vw where upper(status) like '%CAPTURED%' - 28 seconds.\n\nselect * from request_vw where status='CAPTURED'\n\nApplication team is reluctant to change queries from the Application side\nto = instead of LIKE.\n\nAlso as this is VIEW TRIGRAM nor normal indexes don't get used.\n\n\nRegards,\nAditya.\n\nHi,Is there any way to improve performance of LIKE clause on VIEWS.select * From request_vw where upper(status) like '%CAPTURED%' - 28 seconds.select * from request_vw where status='CAPTURED'Application team is reluctant to change queries from the Application side to = instead of LIKE.Also as this is VIEW TRIGRAM nor normal indexes don't get used.Regards,Aditya.",
"msg_date": "Sun, 22 Jan 2023 17:03:46 +0530",
"msg_from": "aditya desai <admad123@gmail.com>",
"msg_from_op": true,
"msg_subject": "LIKE CLAUSE on VIEWS"
},
{
"msg_contents": "Hi Aditya,\n\nIf you share your view's query and the query you run against the view, it\nwould help all of us to understand better.\n\npg_trgm would be the life saver option for you, of course if you created it\non the right column, with the right expression, and by using the right\nindexing method. It doesn't mean you can't use any index and indexes won't\nbe used because it is a view, well, if you do it right.\n\nhttps://www.postgresql.org/docs/current/pgtrgm.html\n\nBest regards.\nSamed YILDIRIM\n\n\nOn Sun, 22 Jan 2023 at 13:34, aditya desai <admad123@gmail.com> wrote:\n\n> Hi,\n> Is there any way to improve performance of LIKE clause on VIEWS.\n>\n> select * From request_vw where upper(status) like '%CAPTURED%' - 28\n> seconds.\n>\n> select * from request_vw where status='CAPTURED'\n>\n> Application team is reluctant to change queries from the Application side\n> to = instead of LIKE.\n>\n> Also as this is VIEW TRIGRAM nor normal indexes don't get used.\n>\n>\n> Regards,\n> Aditya.\n>\n\nHi Aditya,If you share your view's query and the query you run against the view, it would help all of us to understand better.pg_trgm would be the life saver option for you, of course if you created it on the right column, with the right expression, and by using the right indexing method. It doesn't mean you can't use any index and indexes won't be used because it is a view, well, if you do it right.https://www.postgresql.org/docs/current/pgtrgm.htmlBest regards.Samed YILDIRIMOn Sun, 22 Jan 2023 at 13:34, aditya desai <admad123@gmail.com> wrote:Hi,Is there any way to improve performance of LIKE clause on VIEWS.select * From request_vw where upper(status) like '%CAPTURED%' - 28 seconds.select * from request_vw where status='CAPTURED'Application team is reluctant to change queries from the Application side to = instead of LIKE.Also as this is VIEW TRIGRAM nor normal indexes don't get used.Regards,Aditya.",
"msg_date": "Sun, 22 Jan 2023 13:40:58 +0200",
"msg_from": "Samed YILDIRIM <samed@reddoc.net>",
"msg_from_op": false,
"msg_subject": "Re: LIKE CLAUSE on VIEWS"
},
{
"msg_contents": ">\n>\n> On Sun, 22 Jan 2023 at 13:34, aditya desai <admad123@gmail.com> wrote:\n>\n>> Hi,\n>> Is there any way to improve performance of LIKE clause on VIEWS.\n>>\n>> select * From request_vw where upper(status) like '%CAPTURED%' - 28\n>> seconds.\n>>\n>> select * from request_vw where status='CAPTURED'\n>>\n>> Application team is reluctant to change queries from the Application side\n>> to = instead of LIKE.\n>>\n>> Also as this is VIEW TRIGRAM nor normal indexes don't get used.\n>>\n>>\n>> Regards,\n>> Aditya.\n>>\n>\nYou could try using the `text_pattern_ops` operator class on your index on\nthe `status` column:\nhttps://www.postgresql.org/docs/current/indexes-opclass.html\n\nOn Sun, 22 Jan 2023 at 13:34, aditya desai <admad123@gmail.com> wrote:Hi,Is there any way to improve performance of LIKE clause on VIEWS.select * From request_vw where upper(status) like '%CAPTURED%' - 28 seconds.select * from request_vw where status='CAPTURED'Application team is reluctant to change queries from the Application side to = instead of LIKE.Also as this is VIEW TRIGRAM nor normal indexes don't get used.Regards,Aditya.You could try using the `text_pattern_ops` operator class on your index on the `status` column: https://www.postgresql.org/docs/current/indexes-opclass.html",
"msg_date": "Sun, 22 Jan 2023 10:55:13 -0500",
"msg_from": "Rick Otten <rottenwindfish@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: LIKE CLAUSE on VIEWS"
},
{
"msg_contents": "On Sun, Jan 22, 2023 at 6:34 AM aditya desai <admad123@gmail.com> wrote:\n\n> Hi,\n> Is there any way to improve performance of LIKE clause on VIEWS.\n>\n> select * From request_vw where upper(status) like '%CAPTURED%' - 28\n> seconds.\n>\n\nYou would need to have an expression index over upper(status) to support\nsuch a query, not an index on status itself. It would probably be better\nto just use ILIKE rather than upper(), so `status ILIKE '%captured%'`,\nwhich can benefit from an index on \"status\" itself.\n\nAlso as this is VIEW TRIGRAM nor normal indexes don't get used.\n>\n\nThere is no problem in general using trigram indexes (or any other index\ntypes) on views. Maybe your view has particular features which inhibit the\nuse of the index, but you haven't given any information which would be\nuseful for assessing that. Did you try an index, or just assume it\nwouldn't work without trying?\n\nCheers,\n\nJeff\n\n>\n\nOn Sun, Jan 22, 2023 at 6:34 AM aditya desai <admad123@gmail.com> wrote:Hi,Is there any way to improve performance of LIKE clause on VIEWS.select * From request_vw where upper(status) like '%CAPTURED%' - 28 seconds.You would need to have an expression index over upper(status) to support such a query, not an index on status itself. It would probably be better to just use ILIKE rather than upper(), so `status ILIKE '%captured%'`, which can benefit from an index on \"status\" itself.Also as this is VIEW TRIGRAM nor normal indexes don't get used.There is no problem in general using trigram indexes (or any other index types) on views. Maybe your view has particular features which inhibit the use of the index, but you haven't given any information which would be useful for assessing that. Did you try an index, or just assume it wouldn't work without trying? Cheers,Jeff",
"msg_date": "Sun, 22 Jan 2023 12:49:51 -0500",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: LIKE CLAUSE on VIEWS"
}
] |
[
{
"msg_contents": "Hi,\n\nWe've started to observe instances of one of our databases stalling for a\nfew seconds.\n\nWe see a spike in wal write locks then nothing for a few seconds. After\nwhich we have spike latency as processes waiting to get to the db can do\nso.\n\nThere is nothing in the postgres logs that give us any clues to what could\nbe happening, no locks, unusually high/long running transactions, just a\npause and resume.\n\nCould anyone give me any advice as to what to look for when it comes to\nchecking the underlying disk that the db is on?\n\nThanks,\n\nGurmokh\n\nHi, We've started to observe instances of one of our databases stalling for a few seconds. We see a spike in wal write locks then nothing for a few seconds. After which we have spike latency as processes waiting to get to the db can do so. There is nothing in the postgres logs that give us any clues to what could be happening, no locks, unusually high/long running transactions, just a pause and resume. Could anyone give me any advice as to what to look for when it comes to checking the underlying disk that the db is on? Thanks, Gurmokh",
"msg_date": "Mon, 30 Jan 2023 17:47:49 +0000",
"msg_from": "Mok <gurmokh@gmail.com>",
"msg_from_op": true,
"msg_subject": "Database Stalls"
},
{
"msg_contents": "On Mon, Jan 30, 2023 at 05:47:49PM +0000, Mok wrote:\n> Hi,\n> \n> We've started to observe instances of one of our databases stalling for a\n> few seconds.\n> \n> We see a spike in wal write locks then nothing for a few seconds. After\n> which we have spike latency as processes waiting to get to the db can do\n> so.\n> \n> There is nothing in the postgres logs that give us any clues to what could\n> be happening, no locks, unusually high/long running transactions, just a\n> pause and resume.\n> \n> Could anyone give me any advice as to what to look for when it comes to\n> checking the underlying disk that the db is on?\n\nWhat version postgres? What settings have non-default values ? \nWhat OS/version? What environment/hardware? VM/image/provider/... \n\nHave you enabled logging for vacuum/checkpoints/locks ?\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\n\n",
"msg_date": "Mon, 30 Jan 2023 11:51:18 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Database Stalls"
},
{
"msg_contents": "On Mon, Jan 30, 2023 at 2:51 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Mon, Jan 30, 2023 at 05:47:49PM +0000, Mok wrote:\n> > Hi,\n> >\n> > We've started to observe instances of one of our databases stalling for a\n> > few seconds.\n> >\n> > We see a spike in wal write locks then nothing for a few seconds. After\n> > which we have spike latency as processes waiting to get to the db can do\n> > so.\n> >\n> > There is nothing in the postgres logs that give us any clues to what\n> could\n> > be happening, no locks, unusually high/long running transactions, just a\n> > pause and resume.\n> >\n> > Could anyone give me any advice as to what to look for when it comes to\n> > checking the underlying disk that the db is on?\n>\n> What version postgres? What settings have non-default values ?\n>\n>\n> What OS/version? What environment/hardware? VM/image/provider/...\n>\n>\n>\n> Have you enabled logging for vacuum/checkpoints/locks ?\n>\n> https://wiki.postgresql.org/wiki/Slow_Query_Questions\n>\n>\n> In addition to previous questions, if possible, a SELECT * FROM\npg_stat_activity at the moment of the stall. The most important information\nis the wait_event column. My guess is the disk, but just the select at the\nright moment can answer this.\n\n-- \nJosé Arthur Benetasso Villanova\n\nOn Mon, Jan 30, 2023 at 2:51 PM Justin Pryzby <pryzby@telsasoft.com> wrote:On Mon, Jan 30, 2023 at 05:47:49PM +0000, Mok wrote:\n> Hi,\n> \n> We've started to observe instances of one of our databases stalling for a\n> few seconds.\n> \n> We see a spike in wal write locks then nothing for a few seconds. 
After\n> which we have spike latency as processes waiting to get to the db can do\n> so.\n> \n> There is nothing in the postgres logs that give us any clues to what could\n> be happening, no locks, unusually high/long running transactions, just a\n> pause and resume.\n> \n> Could anyone give me any advice as to what to look for when it comes to\n> checking the underlying disk that the db is on?\n\nWhat version postgres? What settings have non-default values ? \nWhat OS/version? What environment/hardware? VM/image/provider/... \n\nHave you enabled logging for vacuum/checkpoints/locks ?\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\n\nIn addition to previous questions, if possible, a SELECT * FROM pg_stat_activity at the moment of the stall. The most important information is the wait_event column. My guess is the disk, but just the select at the right moment can answer this.-- José Arthur Benetasso Villanova",
"msg_date": "Mon, 30 Jan 2023 15:10:54 -0300",
"msg_from": "=?UTF-8?Q?Jos=C3=A9_Arthur_Benetasso_Villanova?= <jose.arthur@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Database Stalls"
},
{
"msg_contents": "Hi Burmokh,\n\nPlease take a look at this article copied below and ping me for further guidance. Thanks! \n\n\nHow expensive SQLs can impact PostgreSQL Performance? - https://minervadb.xyz/how-expensive-sqls-can-impact-postgresql-performance/ \n\n\n—\nBest\nShiv \n\n\n\n> On 30-Jan-2023, at 11:17 PM, Mok <gurmokh@gmail.com> wrote:\n> \n> Hi, \n> \n> We've started to observe instances of one of our databases stalling for a few seconds. \n> \n> We see a spike in wal write locks then nothing for a few seconds. After which we have spike latency as processes waiting to get to the db can do so. \n> \n> There is nothing in the postgres logs that give us any clues to what could be happening, no locks, unusually high/long running transactions, just a pause and resume. \n> \n> Could anyone give me any advice as to what to look for when it comes to checking the underlying disk that the db is on? \n> \n> Thanks, \n> \n> Gurmokh \n> \n> \n\n\nHi Burmokh,Please take a look at this article copied below and ping me for further guidance. Thanks! How expensive SQLs can impact PostgreSQL Performance? - https://minervadb.xyz/how-expensive-sqls-can-impact-postgresql-performance/ —BestShiv On 30-Jan-2023, at 11:17 PM, Mok <gurmokh@gmail.com> wrote:Hi, We've started to observe instances of one of our databases stalling for a few seconds. We see a spike in wal write locks then nothing for a few seconds. After which we have spike latency as processes waiting to get to the db can do so. There is nothing in the postgres logs that give us any clues to what could be happening, no locks, unusually high/long running transactions, just a pause and resume. Could anyone give me any advice as to what to look for when it comes to checking the underlying disk that the db is on? Thanks, Gurmokh",
"msg_date": "Tue, 31 Jan 2023 00:17:33 +0530",
"msg_from": "Shiv Iyer <shiv@minervadb.com>",
"msg_from_op": false,
"msg_subject": "Re: Database Stalls"
},
{
"msg_contents": "Hi,\n\nUnfortunately there is no pg_stat_activity data available as we are unaware\nof the issue until it has already happened.\n\nThe version we are on is 12.11.\n\nI don't think it is due to locks as there are none in the logs. Vacuums are\nlogged also and none occur before or after this event. Checkpoint timeout\nis set to 1 hour and these events do not coincide with checkpoints.\n\nGurmokh\n\nOn Mon, 30 Jan 2023 at 18:47, Shiv Iyer <shiv@minervadb.com> wrote:\n\n> Hi Burmokh,\n>\n> Please take a look at this article copied below and ping me for further\n> guidance. Thanks!\n>\n>\n> How expensive SQLs can impact PostgreSQL Performance? -\n> https://minervadb.xyz/how-expensive-sqls-can-impact-postgresql-performance/\n>\n>\n> —\n> Best\n> Shiv\n>\n>\n>\n> On 30-Jan-2023, at 11:17 PM, Mok <gurmokh@gmail.com> wrote:\n>\n> Hi,\n>\n> We've started to observe instances of one of our databases stalling for a\n> few seconds.\n>\n> We see a spike in wal write locks then nothing for a few seconds. After\n> which we have spike latency as processes waiting to get to the db can do\n> so.\n>\n> There is nothing in the postgres logs that give us any clues to what could\n> be happening, no locks, unusually high/long running transactions, just a\n> pause and resume.\n>\n> Could anyone give me any advice as to what to look for when it comes to\n> checking the underlying disk that the db is on?\n>\n> Thanks,\n>\n> Gurmokh\n>\n>\n>\n>\n\nHi, Unfortunately there is no pg_stat_activity data available as we are unaware of the issue until it has already happened. The version we are on is 12.11. I don't think it is due to locks as there are none in the logs. Vacuums are logged also and none occur before or after this event. Checkpoint timeout is set to 1 hour and these events do not coincide with checkpoints. 
GurmokhOn Mon, 30 Jan 2023 at 18:47, Shiv Iyer <shiv@minervadb.com> wrote:Hi Burmokh,Please take a look at this article copied below and ping me for further guidance. Thanks! How expensive SQLs can impact PostgreSQL Performance? - https://minervadb.xyz/how-expensive-sqls-can-impact-postgresql-performance/ —BestShiv On 30-Jan-2023, at 11:17 PM, Mok <gurmokh@gmail.com> wrote:Hi, We've started to observe instances of one of our databases stalling for a few seconds. We see a spike in wal write locks then nothing for a few seconds. After which we have spike latency as processes waiting to get to the db can do so. There is nothing in the postgres logs that give us any clues to what could be happening, no locks, unusually high/long running transactions, just a pause and resume. Could anyone give me any advice as to what to look for when it comes to checking the underlying disk that the db is on? Thanks, Gurmokh",
"msg_date": "Mon, 30 Jan 2023 21:31:57 +0000",
"msg_from": "Mok <gurmokh@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Database Stalls"
},
{
"msg_contents": "On Mon, Jan 30, 2023 at 4:32 PM Mok <gurmokh@gmail.com> wrote:\n\n> Hi,\n>\n> Unfortunately there is no pg_stat_activity data available as we are\n> unaware of the issue until it has already happened.\n>\n> The version we are on is 12.11.\n>\n> I don't think it is due to locks as there are none in the logs. Vacuums\n> are logged also and none occur before or after this event. Checkpoint\n> timeout is set to 1 hour and these events do not coincide with checkpoints.\n>\n> Gurmokh\n>\n>>\n>>\nHave you eliminated network issues? I have seen what looks like a database\nstalling to end up actually being the network packets taking a side trip to\nhalfway around the world for a while. Or DNS lookups suddenly taking a\nreally long time.\n\nThe next most likely thing is disk i/o. Do you have huge corresponding\ndisk i/o spikes or does it drop completely to zero (which is also bad -\nespecially if you are on a SAN and you can't get any packets out on that\nnetwork). You'll have to look at your disks via OS tools to see.\n\nDo you have any hardware faults? Errors on a hardware bus? Overheating?\nI used to have a system that would freeze up entirely due to a problem with\na serial port that we had a console attached to - it was sending a low\nlevel interrupt. Sometimes it would recover mysteriously if someone hit\nthe carriage return a couple times. Ie, is it _really_ the database that\nis locking up, or is it your hardware?\n\nOn Mon, Jan 30, 2023 at 4:32 PM Mok <gurmokh@gmail.com> wrote:Hi, Unfortunately there is no pg_stat_activity data available as we are unaware of the issue until it has already happened. The version we are on is 12.11. I don't think it is due to locks as there are none in the logs. Vacuums are logged also and none occur before or after this event. Checkpoint timeout is set to 1 hour and these events do not coincide with checkpoints. GurmokhHave you eliminated network issues? 
I have seen what looks like a database stalling to end up actually being the network packets taking a side trip to halfway around the world for a while. Or DNS lookups suddenly taking a really long time.The next most likely thing is disk i/o. Do you have huge corresponding disk i/o spikes or does it drop completely to zero (which is also bad - especially if you are on a SAN and you can't get any packets out on that network). You'll have to look at your disks via OS tools to see.Do you have any hardware faults? Errors on a hardware bus? Overheating? I used to have a system that would freeze up entirely due to a problem with a serial port that we had a console attached to - it was sending a low level interrupt. Sometimes it would recover mysteriously if someone hit the carriage return a couple times. Ie, is it _really_ the database that is locking up, or is it your hardware?",
"msg_date": "Mon, 30 Jan 2023 17:47:50 -0500",
"msg_from": "Rick Otten <rottenwindfish@gmail.com>",
"msg_from_op": false,
"msg_subject": "Fwd: Database Stalls"
},
{
"msg_contents": "Consider creating a pg_stat_activity history table. This would allow you to\nlook back at the time of incident and verify if any unusual activity was\noccurring in the database. Something like:\n\nCREATE TABLE pg_stat_activity_hist ASSELECT\n now() AS sample_time,\n a.*FROM\n pg_stat_activity a WITH NO data;\n\n\nThen with a cron job or a pg job scheduler insert the pg_stat_activity\nhistory at some desired interval (e.g 30s, 1m or 5m):\n\nINSERT INTO pg_stat_activity_hist\nSELECT\n now(),\n a.*\nFROM\n pg_stat_activity a\nWHERE\n state IN ('active', 'idle in transaction’);\n\nThen regularly purge any sample_times older than some desired interval (1\nday, 1 week, 1 month).\n\nNot a perfect solution because the problem (if a db problem) could occur\nbetween your pg_stat_activity samples. We keep this kind of history and it\nis very helpful when trying to find a post-event root cause.\n\nCraig\n\n\n\nOn Jan 30, 2023 at 10:47:49 AM, Mok <gurmokh@gmail.com> wrote:\n\n> Hi,\n>\n> We've started to observe instances of one of our databases stalling for a\n> few seconds.\n>\n> We see a spike in wal write locks then nothing for a few seconds. After\n> which we have spike latency as processes waiting to get to the db can do\n> so.\n>\n> There is nothing in the postgres logs that give us any clues to what could\n> be happening, no locks, unusually high/long running transactions, just a\n> pause and resume.\n>\n> Could anyone give me any advice as to what to look for when it comes to\n> checking the underlying disk that the db is on?\n>\n> Thanks,\n>\n> Gurmokh\n>\n>\n>\n\n-- \nThis electronic communication and the information and any files transmitted \nwith it, or attached to it, are confidential and are intended solely for \nthe use of the individual or entity to whom it is addressed and may contain \ninformation that is confidential, legally privileged, protected by privacy \nlaws, or otherwise restricted from disclosure to anyone else. 
If you are \nnot the intended recipient or the person responsible for delivering the \ne-mail to the intended recipient, you are hereby notified that any use, \ncopying, distributing, dissemination, forwarding, printing, or copying of \nthis e-mail is strictly prohibited. If you received this e-mail in error, \nplease return the e-mail to the sender, delete it from your computer, and \ndestroy any printed copy of it.",
"msg_date": "Fri, 3 Feb 2023 08:25:45 -0800",
"msg_from": "Craig Jackson <craig.jackson@broadcom.com>",
"msg_from_op": false,
"msg_subject": "Re: Database Stalls"
}
] |
[
{
"msg_contents": "Hello,\n\nI'm trying to get the following query to use a plan with parallelism, but I\nhaven't been successful and would like some advice.\n\nThe schema and table that I'm using is this:\n\nCREATE TABLE testing(\n id INT,\n info INT,\n data_one TEXT,\n data_two TEXT,\n primary key(id, info)\n);\n\nINSERT INTO testing(id, info, data_one, data_two)\nSELECT idx, idx, md5(random()::text), md5(random()::text)\nFROM generate_series(1,10000000) idx;\n\nThen the query that I'm trying to run is this (I'll include the full query\nat the very end of the email because it is long:\n\nselect * from testing where id in (1608377,5449811, ... <1000 random ids>\n,4654284,3558460);\n\nEssentially I have a list of 1000 ids and I would like the rows for all of\nthose ids.\n\nThis seems like it would be pretty easy to parallelize, if you have X\nthreads then you would split the list of IDs into 1000/X sub lists and give\none to each thread to go find the rows for ids in the given list. Even\nwhen I use the following configs I don't get a query plan that actually\nuses any parallelism:\n\npsql (15.1 (Debian 15.1-1.pgdg110+1))\nType \"help\" for help.\n\npostgres=# show max_parallel_workers;\n max_parallel_workers\n----------------------\n 8\n(1 row)\n\npostgres=# set max_parallel_workers_per_gather = 8;\nSET\npostgres=# set parallel_setup_cost = 0;\nSET\npostgres=# set parallel_tuple_cost = 0;\nSET\npostgres=# set force_parallel_mode = on;\nSET\npostgres=# explain select * from testing where id in (1608377,5449811, ...\n<removed for brevity> ... ,4654284,3558460);\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------\n Gather (cost=0.43..6138.81 rows=1000 width=74)\n Workers Planned: 1\n Single Copy: true\n -> Index Scan using testing_pkey on testing (cost=0.43..6138.81\nrows=1000 width=74)\n Index Cond: (id = ANY ('{1608377,5449811 ... <removed for brevity>\n... 
4654284,3558460}'::integer[]))\n(5 rows)\n\npostgres=# explain (analyze, buffers) select * from testing where id in\n(1608377,5449811, ... <removed for brevity> ... ,4654284,3558460);\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------\n Gather (cost=0.43..6138.81 rows=1000 width=74) (actual\ntime=22.388..59.860 rows=1000 loops=1)\n Workers Planned: 1\n Workers Launched: 1\n Single Copy: true\n Buffers: shared hit=4003\n -> Index Scan using testing_pkey on testing (cost=0.43..6138.81\nrows=1000 width=74) (actual time=0.443..43.660 rows=1000 loops=1)\n Index Cond: (id = ANY ('{1608377,5449811 ... <removed for brevity>\n... 4654284,3558460}'::integer[]))\n Buffers: shared hit=4003\n Planning Time: 3.101 ms\n Execution Time: 60.211 ms\n(10 rows)\n\npostgres=# explain select * from testing;\n QUERY PLAN\n----------------------------------------------------------------------------------\n Gather (cost=0.00..153334.10 rows=10000050 width=74)\n Workers Planned: 5\n -> Parallel Seq Scan on testing (cost=0.00..153334.10 rows=2000010\nwidth=74)\n(3 rows)\n\n\nThat last query is just to show that I can get parallel plans, so they\naren't completely turned off.\n\nIs there a particular reason why this query can't be parallelized? 
Or is\nthere some other way I could structure the query to get it to use\nparallelism?\n\nI've tried this both on PG 15.1 (In docker, which seems to be on Debian\n15.1) and PG 14.5 (on Centos 7) and got the same results\n\nThanks,\nAlex Kaiser\n\nFull query:\nselect * from testing where id in\n(1608377,5449811,5334677,5458230,2053195,3572313,1949724,3559988,5061560,8479775,6604845,1531946,8578236,1767138,1887562,9224796,801839,1389073,2070024,3378634,5935175,253322,6503217,492190,1646143,6073879,6344884,3120926,6077454,7988246,2359088,2758185,2277417,6144637,7869743,450645,2675170,307844,2752378,9765759,7604173,4702773,9447882,6403407,1020813,2421819,2246889,6118484,5675269,38400,989987,5226654,2910389,9741575,5909526,8752890,1429931,3598345,9541469,6728532,2454806,6470370,6338418,2525642,2286146,9319587,5821710,4138188,8677346,2188096,3242293,9711468,8308979,6505437,5620847,5870305,5177061,7519783,1441852,8264516,7637571,1994901,3979976,8828452,6327321,4377585,6055558,2620337,9944860,7822890,664424,8832299,8564521,4978015,5910646,8527205,3573524,996558,1270265,7774940,1747145,104339,6867262,9128122,1303267,3810412,2694329,7145818,6719318,3789062,9870348,986684,5603862,1698361,7732472,2816324,1337682,5012390,2309943,1691809,3480539,49005,6857269,9555513,2599309,2515895,4568931,641192,781186,4762944,13013,4987725,8990541,5654081,193138,4012985,2884209,5352762,9816619,1363209,3019900,8276055,2129378,1121730,7607112,5210575,3288097,1489630,1163497,7136711,9799048,375373,8046412,8724195,6005442,1290573,5721078,1214636,7569919,4654551,8618870,7709458,9852972,9717197,5704549,4163520,9558516,5443577,24670,332576,6877103,5932612,8298990,6309522,8041687,5977063,9500416,6432058,4937450,9923650,9117734,7237497,1798290,4124950,2185197,9948176,1094346,6746478,7304769,5568030,3796416,8891995,1053559,1821980,1185072,2349200,2219299,2969613,2472087,2450905,3121489,9638165,4790546,3720200,1311820,1296827,1138950,7784270,3824064,6915212,7383886,6855810,3491033,256301,9997854,2214084,
9878366,5682387,5710729,8856125,9335563,3901871,2085478,5444947,4838361,9332499,1225090,3004836,9119361,5476573,9425201,9613762,9108411,4271769,6614784,3201217,8138778,1219241,4984103,6557882,2197275,3579784,5011159,7465713,760962,6200169,9687904,9045984,3827388,8586783,9949942,4918807,1309167,3406506,2453149,1061703,8054158,6778320,1431668,4145674,331232,6461486,6929178,5155683,5003625,9836477,6152755,2343676,2988832,6746977,2399198,8124075,8757743,4311457,5031384,8400655,1912444,6677221,5574997,1386860,1031616,3689530,4131063,5438418,944326,6217568,3395754,8937413,9269528,3699673,8552533,7437048,1024909,4343149,1434220,6593217,6142852,9110998,6207558,921357,2186560,6091282,1928657,4302412,6325582,1337393,6427695,3469914,4356086,8892231,8384082,1477346,3822408,5268755,4070623,3119427,3290973,4265753,817119,4504091,2401305,1925450,429200,1094436,9602884,5245982,1824411,432238,596900,8421662,8595645,2424955,1782602,1894324,427312,6048285,5864834,1348501,955343,6950739,8252446,3828615,9670815,3706371,3717929,7814353,1757583,8490290,4413043,2322689,4891500,5054674,4600353,1281555,3863893,1162106,2958640,6006984,4302963,1117738,8642737,5409180,9556862,841143,5045278,3748140,8894409,2506817,4273288,2633581,3119707,9952893,2750853,5474210,9249846,5639610,83338,9908504,8465361,2074546,7720208,5654917,7144433,8071670,3197270,1756937,9289716,6653496,4772491,7468146,1582580,2386228,5539203,6113389,5099513,9876191,9628095,3183250,6775459,7665608,6794804,8653394,4434664,8513441,5103707,8053446,6073965,2622184,4532773,6334178,5336613,3266043,8146834,7920939,1870993,8202151,309347,748345,6260993,1923670,377350,580449,3369377,2396135,661803,1731830,3729992,8501495,8212247,8515391,8718631,4730537,3122036,6299099,1923435,615308,1863293,6995898,5760160,2666671,9125446,3641934,6430855,489597,7183510,4181075,4815452,8985924,2344090,3416311,8092533,6306505,426770,5383875,4362857,3212107,5146937,2293104,6022662,7250711,4970184,4239079,1302390,8935997,1533922,1393172,5048505,4293843,15708
27,9805238,1420916,293318,6162275,9177640,89886,1543620,3113059,1726434,3340563,1719843,9570231,2501492,2949354,2036931,8557586,1691786,6073593,3495457,416982,1373202,3858682,6765954,9991676,9190916,222078,8272108,9779778,6417060,2312865,3283936,3978241,7360141,3681005,3208006,7322741,1390421,3998891,5168998,7500754,4350760,724402,2576055,1365770,8550804,529521,2631191,333968,7544501,8130917,7154053,7885496,5928191,9471764,6755786,8272211,9432888,8840290,1228823,4915460,1801542,5852244,358500,7775207,7769606,5831998,4249440,1307330,4463268,766442,9131985,9780620,6820832,2601339,8317405,1679354,3419739,4819118,7326443,4510262,3015014,7192154,6284079,4207593,236283,4464714,7062157,7028124,9523370,7911438,2671064,1290471,9669065,5520807,5938961,4575373,9253011,7962875,8783002,6512827,8263442,6729440,3942648,856559,5202945,4928362,3282835,7887470,9975130,7615773,4030926,6176507,4497481,6033126,8621176,4504739,500044,2278118,9346590,6744253,7017476,4682119,3657000,5095471,174918,4551074,6687135,8296926,2622254,8752505,991505,8631264,8088985,5785268,1926815,2574783,6431649,8982423,8142710,824511,1875290,5054562,1437928,2075485,1949035,3757345,2528250,3307412,1779505,2096270,8807006,2685238,6559635,2027260,7526005,7616809,8731914,6472225,8846633,6619892,8782922,8631158,9069894,8547921,1293574,6272547,9859811,5509842,5516969,974646,3242662,2794043,5569866,2520950,5133422,9998183,5874455,4938074,5455495,9439197,7571865,2250902,1610594,9624168,1041235,2889120,6083148,3913825,4455711,405261,4303490,5588906,7985761,890989,7957500,13751,3022733,1380315,4471197,7128770,8145719,1786111,5209933,3062919,3753422,8123022,3230853,6095301,5093459,817527,2151655,9266058,9472989,9925539,1615290,8411945,95723,8567772,7870496,4487771,5124509,2453780,3946342,5859762,596133,8612152,3616196,2317853,6221780,5234609,1429272,5190050,1756430,4596457,5402935,5318101,3655060,1256006,4843877,3148982,5386241,4538154,7817465,3904008,2144081,5551025,2749786,4748282,6185119,2091766,2701159,5191374,121834
5,3542677,4075715,6222181,4159050,6540911,3119637,6367663,2682116,9943058,1115652,5939513,6070897,3798441,408171,9264198,2727531,9187981,2837304,333856,5538241,8714618,6736394,313999,3015204,861772,7326900,961309,1722967,4652654,7328448,3670361,9081414,362096,9292335,1684179,1284622,3312337,1824664,7767797,5533043,5793208,1725413,6214729,9992784,8418622,6493664,8776426,1426161,1031983,3715268,6505887,8305875,7013880,7144356,9729782,436564,8608028,1010584,70717,2873837,7856269,2316654,7170184,6723773,7698527,9252650,5040660,7181806,5377517,5424349,3805788,4033651,4294239,5355707,6900075,5625668,3410262,4013203,481183,62184,8797500,970495,6625255,7254913,7662343,8987287,2610657,7294315,2724733,4649950,6509042,5306803,8816473,1173624,170600,1668636,3774797,3439784,7700452,9720665,7032018,8549446,9971526,9109279,9765304,5229101,5563539,6800753,5298323,5622436,5774485,6651444,1375607,7729739,7534311,7677402,9028109,9022462,9169017,6708403,8618359,7862319,4164876,5267625,5752478,3394094,2743359,7883411,3192807,6908084,2511599,9077668,2223928,9051932,5693857,4006603,364537,3964003,695520,2486464,2451789,524608,2937878,3432943,2987441,6847474,3349875,1847131,4010301,4885624,1193549,7902402,8756424,1890613,9598187,5647783,5375794,1835320,2363315,7101994,9646975,2582592,6539719,8914453,5196939,8161107,3899236,3050366,3449634,2616291,1669386,8632847,493803,8630172,7503179,6089968,8019732,9133326,1778968,6843066,6618579,2994096,8618807,9159460,303658,33203,6218402,4193805,338210,8828259,3770193,5646522,1959199,7231533,9087536,5524141,8049095,831964,2876993,119133,2008356,4142233,1763463,3510804,144448,8034613,6689542,6209014,5200398,7821812,7806829,3007319,371296,6503646,7713090,2140125,4895835,5475298,2381570,1813346,5893364,1287930,9494416,3264004,4379806,7156907,9199443,8766138,1521584,2700616,8516805,5936484,8717735,3035350,6076409,9913722,3638170,5015296,1824135,1546175,3240878,7591542,5853806,2678731,8194246,3846118,9304679,1055867,2073446,2082338,3043546,7440437,2437338,
7237400,4411273,7560449,7042633,1236595,1900140,3129298,5580344,8006821,1554224,7064671,5722874,1873303,4876629,7638248,1434123,461213,2892216,9979823,1764459,1218933,1091006,8106607,4654284,3558460);\n\nHello,I'm trying to get the following query to use a plan with parallelism, but I haven't been successful and would like some advice.The schema and table that I'm using is this:CREATE TABLE testing( id INT, info INT, data_one TEXT, data_two TEXT, primary key(id, info));INSERT INTO testing(id, info, data_one, data_two)SELECT idx, idx, md5(random()::text), md5(random()::text)FROM generate_series(1,10000000) idx;Then the query that I'm trying to run is this (I'll include the full query at the very end of the email because it is long:select * from testing where id in (1608377,5449811, ... <1000 random ids> ,4654284,3558460);Essentially I have a list of 1000 ids and I would like the rows for all of those ids.This seems like it would be pretty easy to parallelize, if you have X threads then you would split the list of IDs into 1000/X sub lists and give one to each thread to go find the rows for ids in the given list. Even when I use the following configs I don't get a query plan that actually uses any parallelism:psql (15.1 (Debian 15.1-1.pgdg110+1))Type \"help\" for help.postgres=# show max_parallel_workers; max_parallel_workers---------------------- 8(1 row)postgres=# set max_parallel_workers_per_gather = 8;SETpostgres=# set parallel_setup_cost = 0;SETpostgres=# set parallel_tuple_cost = 0;SETpostgres=# set force_parallel_mode = on;SETpostgres=# explain select * from testing where id in (1608377,5449811, ... <removed for brevity> ... 
,4654284,3558460); QUERY PLAN--------------------------------------------------------------------------------------------------------------- Gather (cost=0.43..6138.81 rows=1000 width=74) Workers Planned: 1 Single Copy: true -> Index Scan using testing_pkey on testing (cost=0.43..6138.81 rows=1000 width=74) Index Cond: (id = ANY ('{1608377,5449811 ... <removed for brevity> ... 4654284,3558460}'::integer[]))(5 rows)postgres=# explain (analyze, buffers) select * from testing where id in (1608377,5449811, ... <removed for brevity> ... ,4654284,3558460); QUERY PLAN--------------------------------------------------------------------------------------------------------------- Gather (cost=0.43..6138.81 rows=1000 width=74) (actual time=22.388..59.860 rows=1000 loops=1) Workers Planned: 1 Workers Launched: 1 Single Copy: true Buffers: shared hit=4003 -> Index Scan using testing_pkey on testing (cost=0.43..6138.81 rows=1000 width=74) (actual time=0.443..43.660 rows=1000 loops=1) Index Cond: (id = ANY ('{1608377,5449811 ... <removed for brevity> ... 4654284,3558460}'::integer[])) Buffers: shared hit=4003 Planning Time: 3.101 ms Execution Time: 60.211 ms(10 rows)postgres=# explain select * from testing; QUERY PLAN---------------------------------------------------------------------------------- Gather (cost=0.00..153334.10 rows=10000050 width=74) Workers Planned: 5 -> Parallel Seq Scan on testing (cost=0.00..153334.10 rows=2000010 width=74)(3 rows)That last query is just to show that I can get parallel plans, so they aren't completely turned off.Is there a particular reason why this query can't be parallelized? 
Or is there some other way I could structure the query to get it to use parallelism?I've tried this both on PG 15.1 (In docker, which seems to be on Debian 15.1) and PG 14.5 (on Centos 7) and got the same resultsThanks,Alex KaiserFull query:select * from testing where id in (1608377,5449811,5334677,5458230,2053195,3572313,1949724,3559988,5061560,8479775,6604845,1531946,8578236,1767138,1887562,9224796,801839,1389073,2070024,3378634,5935175,253322,6503217,492190,1646143,6073879,6344884,3120926,6077454,7988246,2359088,2758185,2277417,6144637,7869743,450645,2675170,307844,2752378,9765759,7604173,4702773,9447882,6403407,1020813,2421819,2246889,6118484,5675269,38400,989987,5226654,2910389,9741575,5909526,8752890,1429931,3598345,9541469,6728532,2454806,6470370,6338418,2525642,2286146,9319587,5821710,4138188,8677346,2188096,3242293,9711468,8308979,6505437,5620847,5870305,5177061,7519783,1441852,8264516,7637571,1994901,3979976,8828452,6327321,4377585,6055558,2620337,9944860,7822890,664424,8832299,8564521,4978015,5910646,8527205,3573524,996558,1270265,7774940,1747145,104339,6867262,9128122,1303267,3810412,2694329,7145818,6719318,3789062,9870348,986684,5603862,1698361,7732472,2816324,1337682,5012390,2309943,1691809,3480539,49005,6857269,9555513,2599309,2515895,4568931,641192,781186,4762944,13013,4987725,8990541,5654081,193138,4012985,2884209,5352762,9816619,1363209,3019900,8276055,2129378,1121730,7607112,5210575,3288097,1489630,1163497,7136711,9799048,375373,8046412,8724195,6005442,1290573,5721078,1214636,7569919,4654551,8618870,7709458,9852972,9717197,5704549,4163520,9558516,5443577,24670,332576,6877103,5932612,8298990,6309522,8041687,5977063,9500416,6432058,4937450,9923650,9117734,7237497,1798290,4124950,2185197,9948176,1094346,6746478,7304769,5568030,3796416,8891995,1053559,1821980,1185072,2349200,2219299,2969613,2472087,2450905,3121489,9638165,4790546,3720200,1311820,1296827,1138950,7784270,3824064,6915212,7383886,6855810,3491033,256301,9997854,2214084,9878366,5682387,5710
729,8856125,9335563,3901871,2085478,5444947,4838361,9332499,1225090,3004836,9119361,5476573,9425201,9613762,9108411,4271769,6614784,3201217,8138778,1219241,4984103,6557882,2197275,3579784,5011159,7465713,760962,6200169,9687904,9045984,3827388,8586783,9949942,4918807,1309167,3406506,2453149,1061703,8054158,6778320,1431668,4145674,331232,6461486,6929178,5155683,5003625,9836477,6152755,2343676,2988832,6746977,2399198,8124075,8757743,4311457,5031384,8400655,1912444,6677221,5574997,1386860,1031616,3689530,4131063,5438418,944326,6217568,3395754,8937413,9269528,3699673,8552533,7437048,1024909,4343149,1434220,6593217,6142852,9110998,6207558,921357,2186560,6091282,1928657,4302412,6325582,1337393,6427695,3469914,4356086,8892231,8384082,1477346,3822408,5268755,4070623,3119427,3290973,4265753,817119,4504091,2401305,1925450,429200,1094436,9602884,5245982,1824411,432238,596900,8421662,8595645,2424955,1782602,1894324,427312,6048285,5864834,1348501,955343,6950739,8252446,3828615,9670815,3706371,3717929,7814353,1757583,8490290,4413043,2322689,4891500,5054674,4600353,1281555,3863893,1162106,2958640,6006984,4302963,1117738,8642737,5409180,9556862,841143,5045278,3748140,8894409,2506817,4273288,2633581,3119707,9952893,2750853,5474210,9249846,5639610,83338,9908504,8465361,2074546,7720208,5654917,7144433,8071670,3197270,1756937,9289716,6653496,4772491,7468146,1582580,2386228,5539203,6113389,5099513,9876191,9628095,3183250,6775459,7665608,6794804,8653394,4434664,8513441,5103707,8053446,6073965,2622184,4532773,6334178,5336613,3266043,8146834,7920939,1870993,8202151,309347,748345,6260993,1923670,377350,580449,3369377,2396135,661803,1731830,3729992,8501495,8212247,8515391,8718631,4730537,3122036,6299099,1923435,615308,1863293,6995898,5760160,2666671,9125446,3641934,6430855,489597,7183510,4181075,4815452,8985924,2344090,3416311,8092533,6306505,426770,5383875,4362857,3212107,5146937,2293104,6022662,7250711,4970184,4239079,1302390,8935997,1533922,1393172,5048505,4293843,1570827,9805238,1420916,2
93318,6162275,9177640,89886,1543620,3113059,1726434,3340563,1719843,9570231,2501492,2949354,2036931,8557586,1691786,6073593,3495457,416982,1373202,3858682,6765954,9991676,9190916,222078,8272108,9779778,6417060,2312865,3283936,3978241,7360141,3681005,3208006,7322741,1390421,3998891,5168998,7500754,4350760,724402,2576055,1365770,8550804,529521,2631191,333968,7544501,8130917,7154053,7885496,5928191,9471764,6755786,8272211,9432888,8840290,1228823,4915460,1801542,5852244,358500,7775207,7769606,5831998,4249440,1307330,4463268,766442,9131985,9780620,6820832,2601339,8317405,1679354,3419739,4819118,7326443,4510262,3015014,7192154,6284079,4207593,236283,4464714,7062157,7028124,9523370,7911438,2671064,1290471,9669065,5520807,5938961,4575373,9253011,7962875,8783002,6512827,8263442,6729440,3942648,856559,5202945,4928362,3282835,7887470,9975130,7615773,4030926,6176507,4497481,6033126,8621176,4504739,500044,2278118,9346590,6744253,7017476,4682119,3657000,5095471,174918,4551074,6687135,8296926,2622254,8752505,991505,8631264,8088985,5785268,1926815,2574783,6431649,8982423,8142710,824511,1875290,5054562,1437928,2075485,1949035,3757345,2528250,3307412,1779505,2096270,8807006,2685238,6559635,2027260,7526005,7616809,8731914,6472225,8846633,6619892,8782922,8631158,9069894,8547921,1293574,6272547,9859811,5509842,5516969,974646,3242662,2794043,5569866,2520950,5133422,9998183,5874455,4938074,5455495,9439197,7571865,2250902,1610594,9624168,1041235,2889120,6083148,3913825,4455711,405261,4303490,5588906,7985761,890989,7957500,13751,3022733,1380315,4471197,7128770,8145719,1786111,5209933,3062919,3753422,8123022,3230853,6095301,5093459,817527,2151655,9266058,9472989,9925539,1615290,8411945,95723,8567772,7870496,4487771,5124509,2453780,3946342,5859762,596133,8612152,3616196,2317853,6221780,5234609,1429272,5190050,1756430,4596457,5402935,5318101,3655060,1256006,4843877,3148982,5386241,4538154,7817465,3904008,2144081,5551025,2749786,4748282,6185119,2091766,2701159,5191374,1218345,3542677,4075715,62
22181,4159050,6540911,3119637,6367663,2682116,9943058,1115652,5939513,6070897,3798441,408171,9264198,2727531,9187981,2837304,333856,5538241,8714618,6736394,313999,3015204,861772,7326900,961309,1722967,4652654,7328448,3670361,9081414,362096,9292335,1684179,1284622,3312337,1824664,7767797,5533043,5793208,1725413,6214729,9992784,8418622,6493664,8776426,1426161,1031983,3715268,6505887,8305875,7013880,7144356,9729782,436564,8608028,1010584,70717,2873837,7856269,2316654,7170184,6723773,7698527,9252650,5040660,7181806,5377517,5424349,3805788,4033651,4294239,5355707,6900075,5625668,3410262,4013203,481183,62184,8797500,970495,6625255,7254913,7662343,8987287,2610657,7294315,2724733,4649950,6509042,5306803,8816473,1173624,170600,1668636,3774797,3439784,7700452,9720665,7032018,8549446,9971526,9109279,9765304,5229101,5563539,6800753,5298323,5622436,5774485,6651444,1375607,7729739,7534311,7677402,9028109,9022462,9169017,6708403,8618359,7862319,4164876,5267625,5752478,3394094,2743359,7883411,3192807,6908084,2511599,9077668,2223928,9051932,5693857,4006603,364537,3964003,695520,2486464,2451789,524608,2937878,3432943,2987441,6847474,3349875,1847131,4010301,4885624,1193549,7902402,8756424,1890613,9598187,5647783,5375794,1835320,2363315,7101994,9646975,2582592,6539719,8914453,5196939,8161107,3899236,3050366,3449634,2616291,1669386,8632847,493803,8630172,7503179,6089968,8019732,9133326,1778968,6843066,6618579,2994096,8618807,9159460,303658,33203,6218402,4193805,338210,8828259,3770193,5646522,1959199,7231533,9087536,5524141,8049095,831964,2876993,119133,2008356,4142233,1763463,3510804,144448,8034613,6689542,6209014,5200398,7821812,7806829,3007319,371296,6503646,7713090,2140125,4895835,5475298,2381570,1813346,5893364,1287930,9494416,3264004,4379806,7156907,9199443,8766138,1521584,2700616,8516805,5936484,8717735,3035350,6076409,9913722,3638170,5015296,1824135,1546175,3240878,7591542,5853806,2678731,8194246,3846118,9304679,1055867,2073446,2082338,3043546,7440437,2437338,7237400,4411273,7560
449,7042633,1236595,1900140,3129298,5580344,8006821,1554224,7064671,5722874,1873303,4876629,7638248,1434123,461213,2892216,9979823,1764459,1218933,1091006,8106607,4654284,3558460);",
"msg_date": "Tue, 31 Jan 2023 21:39:06 -0800",
"msg_from": "Alex Kaiser <alextkaiser@gmail.com>",
"msg_from_op": true,
"msg_subject": "Getting an index scan to be a parallel index scan"
},
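[Editor's note] The manual strategy the question describes — split the 1000 ids into 1000/X sublists and give one sublist to each of X workers — can be done client-side even though the planner will not do it. The sketch below only illustrates the splitting and fan-out; `fetch_rows` is a hypothetical placeholder for a worker opening its own connection and running `SELECT * FROM testing WHERE id = ANY(%s)` with its chunk.

```python
from concurrent.futures import ThreadPoolExecutor

def chunk(ids, n_workers):
    """Deal the ids round-robin into n_workers disjoint sublists."""
    return [ids[i::n_workers] for i in range(n_workers)]

def fetch_rows(id_chunk):
    # Hypothetical placeholder: a real worker would open its own connection
    # and run SELECT * FROM testing WHERE id = ANY(%s), passing id_chunk.
    return [(i, i) for i in id_chunk]  # stand-in for the matched rows

def parallel_lookup(ids, n_workers=8):
    # Fan the disjoint chunks out to the workers, then flatten the results.
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        parts = pool.map(fetch_rows, chunk(ids, n_workers))
    return [row for part in parts for row in part]

rows = parallel_lookup(list(range(1000)), n_workers=8)
print(len(rows))  # 1000: every id is probed by exactly one worker
```

As the rest of the thread discusses, a static split like this can be unfair when matches are skewed across chunks: one worker may do most of the real work while the others finish early.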
{
"msg_contents": "Em qua., 1 de fev. de 2023 às 02:39, Alex Kaiser <alextkaiser@gmail.com>\nescreveu:\n\n> Hello,\n>\n> I'm trying to get the following query to use a plan with parallelism, but\n> I haven't been successful and would like some advice.\n>\n> The schema and table that I'm using is this:\n>\n> CREATE TABLE testing(\n> id INT,\n> info INT,\n> data_one TEXT,\n> data_two TEXT,\n> primary key(id, info)\n> );\n>\n> INSERT INTO testing(id, info, data_one, data_two)\n> SELECT idx, idx, md5(random()::text), md5(random()::text)\n> FROM generate_series(1,10000000) idx;\n>\n> Then the query that I'm trying to run is this (I'll include the full query\n> at the very end of the email because it is long:\n>\n> select * from testing where id in (1608377,5449811, ... <1000 random ids>\n> ,4654284,3558460);\n>\n> Essentially I have a list of 1000 ids and I would like the rows for all of\n> those ids.\n>\n> This seems like it would be pretty easy to parallelize, if you have X\n> threads then you would split the list of IDs into 1000/X sub lists and give\n> one to each thread to go find the rows for ids in the given list. Even\n> when I use the following configs I don't get a query plan that actually\n> uses any parallelism:\n>\n> psql (15.1 (Debian 15.1-1.pgdg110+1))\n> Type \"help\" for help.\n>\n> postgres=# show max_parallel_workers;\n> max_parallel_workers\n> ----------------------\n> 8\n> (1 row)\n>\n> postgres=# set max_parallel_workers_per_gather = 8;\n> SET\n> postgres=# set parallel_setup_cost = 0;\n> SET\n> postgres=# set parallel_tuple_cost = 0;\n> SET\n> postgres=# set force_parallel_mode = on;\n> SET\n> postgres=# explain select * from testing where id in (1608377,5449811, ...\n> <removed for brevity> ... ,4654284,3558460);\n>\nCan you try:\nselect * from testing where id any = (values(1608377),(5449811),(5334677)\n... <removed for brevity> ... 
,(4654284),(3558460));\n\nOr alternately you can use EXTEND STATISTICS to improve Postgres planner\nchoice.\n\nregards,\nRanier Vilela\n\nEm qua., 1 de fev. de 2023 às 02:39, Alex Kaiser <alextkaiser@gmail.com> escreveu:Hello,I'm trying to get the following query to use a plan with parallelism, but I haven't been successful and would like some advice.The schema and table that I'm using is this:CREATE TABLE testing( id INT, info INT, data_one TEXT, data_two TEXT, primary key(id, info));INSERT INTO testing(id, info, data_one, data_two)SELECT idx, idx, md5(random()::text), md5(random()::text)FROM generate_series(1,10000000) idx;Then the query that I'm trying to run is this (I'll include the full query at the very end of the email because it is long:select * from testing where id in (1608377,5449811, ... <1000 random ids> ,4654284,3558460);Essentially I have a list of 1000 ids and I would like the rows for all of those ids.This seems like it would be pretty easy to parallelize, if you have X threads then you would split the list of IDs into 1000/X sub lists and give one to each thread to go find the rows for ids in the given list. Even when I use the following configs I don't get a query plan that actually uses any parallelism:psql (15.1 (Debian 15.1-1.pgdg110+1))Type \"help\" for help.postgres=# show max_parallel_workers; max_parallel_workers---------------------- 8(1 row)postgres=# set max_parallel_workers_per_gather = 8;SETpostgres=# set parallel_setup_cost = 0;SETpostgres=# set parallel_tuple_cost = 0;SETpostgres=# set force_parallel_mode = on;SETpostgres=# explain select * from testing where id in (1608377,5449811, ... <removed for brevity> ... ,4654284,3558460);Can you try:\nselect * from testing where id any = (values(1608377),(5449811),(5334677)\n... <removed for brevity> ... ,(4654284),(3558460));Or alternately you can use EXTEND STATISTICS to improve Postgres planner choice.regards,Ranier Vilela",
"msg_date": "Wed, 1 Feb 2023 08:17:17 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Getting an index scan to be a parallel index scan"
},
{
"msg_contents": "On Wed, 1 Feb 2023 at 18:39, Alex Kaiser <alextkaiser@gmail.com> wrote:\n> postgres=# set force_parallel_mode = on;\n\nThere's been a bit of debate about that GUC and I'm wondering how you\ncame to the conclusion that it might help you. Can you share details\nof how you found out about it and what made you choose to set it to\n\"on\"?\n\nDavid\n\n\n",
"msg_date": "Thu, 2 Feb 2023 00:30:31 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Getting an index scan to be a parallel index scan"
},
{
"msg_contents": "Rainier,\n\nI tried using the any syntax (had to modify your query slightly) and it\ndidn't result in any change in the query plan.\n\npostgres=# explain select * from testing where id =\nANY(array[1608377,5449811, ... <removed for brevity> ...\n,4654284,3558460]::integer[]);\n QUERY PLAN\n----------------------------------------------------------------------------------\n Gather (cost=0.43..6138.81 rows=1000 width=74)\n Workers Planned: 1\n Single Copy: true\n -> Index Scan using testing_pkey on testing (cost=0.43..6138.81\nrows=1000 width=74)\n Index Cond: (id = ANY ('{1608377,5449811, ... <removed for\nbrevity> ... ,4654284,3558460}'::integer[]))\n\nI've never messed around with extended statistics, but I'm not sure how\nthey would help here. From what I've read they seem to help when your query\nis restricting over multiple columns. Since this query is only on one\ncolumn I'm not sure what a good \"CREATE STATISTICS ...\" command to run\nwould be to improve the query plan. Any suggestions?\n\n\nDavid,\n\nAs for how I found 'force_parallel_mode', I think I found it first here:\nhttps://postgrespro.com/list/thread-id/2574997 and then I also saw it when\nI was searching for 'parallel' on https://postgresqlco.nf .\n\nIt's not that I think the parameter would help my query, it was really as a\nlast resort to try and force the query to be parallel. Without that\nparameter, it just does a normal index scan (see the result below). My\nthinking with using that parameter was to see if I could force a parallel\nquery plan just to see if maybe the planner just thought the parallel plan\nwould be more expensive. So I was surprised to see that even with that\nparameter turned on it doesn't actually do anything in parallel. Here is\nthe plan with that parameter turned off:\n\npostgres=# set force_parallel_mode = off;\nSET\npostgres=# explain select * from testing where id =\nANY(array[1608377,5449811, ... 
<removed for brevity> ...\n,4654284,3558460]::integer[]);\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------\n Index Scan using testing_pkey on testing (cost=0.43..6138.81 rows=1000\nwidth=74)\n Index Cond: (id = ANY ('{1608377,5449811, ... < removed for brevity >\n... 4654284,3558460}'::integer[]))\n(2 rows)\n\n\nThanks,\nAlex Kaiser\n\nOn Wed, Feb 1, 2023 at 3:30 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Wed, 1 Feb 2023 at 18:39, Alex Kaiser <alextkaiser@gmail.com> wrote:\n> > postgres=# set force_parallel_mode = on;\n>\n> There's been a bit of debate about that GUC and I'm wondering how you\n> came to the conclusion that it might help you. Can you share details\n> of how you found out about it and what made you choose to set it to\n> \"on\"?\n>\n> David\n>\n\nRainier,I tried using the any syntax (had to modify your query slightly) and it didn't result in any change in the query plan.postgres=# explain select * from testing where id = ANY(array[1608377,5449811, ... <removed for brevity> ... ,4654284,3558460]::integer[]); QUERY PLAN---------------------------------------------------------------------------------- Gather (cost=0.43..6138.81 rows=1000 width=74) Workers Planned: 1 Single Copy: true -> Index Scan using testing_pkey on testing (cost=0.43..6138.81 rows=1000 width=74) Index Cond: (id = ANY ('{1608377,5449811, ... <removed for brevity> ... ,4654284,3558460}'::integer[]))I've never messed around with extended statistics, but I'm not sure how they would help here. From what I've read they seem to help when your query is restricting over multiple columns. Since this query is only on one column I'm not sure what a good \"CREATE STATISTICS ...\" command to run would be to improve the query plan. 
Any suggestions?David,As for how I found 'force_parallel_mode', I think I found it first here: https://postgrespro.com/list/thread-id/2574997 and then I also saw it when I was searching for 'parallel' on https://postgresqlco.nf . It's not that I think the parameter would help my query, it was really as a last resort to try and force the query to be parallel. Without that parameter, it just does a normal index scan (see the result below). My thinking with using that parameter was to see if I could force a parallel query plan just to see if maybe the planner just thought the parallel plan would be more expensive. So I was surprised to see that even with that parameter turned on it doesn't actually do anything in parallel. Here is the plan with that parameter turned off:postgres=# set force_parallel_mode = off;SETpostgres=# explain select * from testing where id = ANY(array[1608377,5449811, ... <removed for brevity> ... ,4654284,3558460]::integer[]); QUERY PLAN-------------------------------------------------------------------------------------------------------------------- Index Scan using testing_pkey on testing (cost=0.43..6138.81 rows=1000 width=74) Index Cond: (id = ANY ('{1608377,5449811, ... < removed for brevity > ... 4654284,3558460}'::integer[]))(2 rows)Thanks,Alex KaiserOn Wed, Feb 1, 2023 at 3:30 AM David Rowley <dgrowleyml@gmail.com> wrote:On Wed, 1 Feb 2023 at 18:39, Alex Kaiser <alextkaiser@gmail.com> wrote:\n> postgres=# set force_parallel_mode = on;\n\nThere's been a bit of debate about that GUC and I'm wondering how you\ncame to the conclusion that it might help you. Can you share details\nof how you found out about it and what made you choose to set it to\n\"on\"?\n\nDavid",
"msg_date": "Wed, 1 Feb 2023 11:22:47 -0800",
"msg_from": "Alex Kaiser <alextkaiser@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Getting an index scan to be a parallel index scan"
},
{
"msg_contents": "On Wed, Feb 01, 2023 at 11:22:47AM -0800, Alex Kaiser wrote:\n> I've never messed around with extended statistics, but I'm not sure how\n> they would help here. From what I've read they seem to help when your query\n> is restricting over multiple columns. Since this query is only on one\n> column I'm not sure what a good \"CREATE STATISTICS ...\" command to run\n> would be to improve the query plan. Any suggestions?\n\nThey wouldn't help. It seems like that was a guess.\n\n> As for how I found 'force_parallel_mode', I think I found it first here:\n> https://postgrespro.com/list/thread-id/2574997 and then I also saw it when\n> I was searching for 'parallel' on https://postgresqlco.nf .\n\nYeah. force_parallel_mode is meant for debugging, only, and we're\nwondering how people end up trying to use it for other purposes.\n\nhttp://rhaas.blogspot.com/2018/06/using-forceparallelmode-correctly.html\n\nDid you try adjusting min_parallel_index_scan_size /\nmin_parallel_table_scan_size ?\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 1 Feb 2023 14:02:39 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Getting an index scan to be a parallel index scan"
},
{
"msg_contents": "On Wed, Feb 1, 2023 at 6:39 PM Alex Kaiser <alextkaiser@gmail.com> wrote:\n> select * from testing where id in (1608377,5449811, ... <1000 random ids> ,4654284,3558460);\n>\n> Essentially I have a list of 1000 ids and I would like the rows for all of those ids.\n>\n> This seems like it would be pretty easy to parallelize, if you have X threads then you would split the list of IDs into 1000/X sub lists and give one to each thread to go find the rows for ids in the given list. Even when I use the following configs I don't get a query plan that actually uses any parallelism:\n\nIt sounds like the plan you are imagining is something like:\n\nGather\n Nested Loop Join\n Outer side: <partial scan of your set of constant values>\n Inner side: Index scan of your big table\n\nSuch a plan would only give the right answer if each process has a\nnon-overlapping subset of the constant values to probe the index with,\nand together they have the whole set. Hypothetically, a planner could\nchop that set up beforehand and give a different subset to each\nprocess (just as you could do that yourself using N connections and\nseparate queries), but that might be unfair: one process might find\nlots of matches, and the others might find none, because of the\ndistribution of data. So you'd ideally want some kind of \"work\nstealing\" scheme, where each worker can take more values to probe from\nwhenever it needs more, so that they all keep working until the values\nrun out. We don't have a thing that can do that. You might imagine\nthat a CTE could do it, so WITH keys_to_look_up AS (VALUES (1), (2),\n...) SELECT ... JOIN ON ..., but that also doesn't work because we\ndon't have a way to do \"partial\" scans of CTEs either (though someone\ncould invent that). Likewise for temporary tables: they are invisible\nto parallel workers, so they can't help us. 
I have contemplated\n\"partial function scans\" for set-returning functions, where a function\ncould be given a bit of shared memory and various other infrastructure\nto be able to be \"parallel aware\" (= able to coordinate across\nprocesses so that each process gets a subset of the data), and one\ncould imagine that that would allow various solutions to the problem,\nbut that's vapourware.\n\nBut you can get a plan like that if you insert all those values into a\nregular table, depending on various settings, stats and\nmin_parallel_table_scan_size (try 0, I guess that'll definitely do\nit). Which probably isn't the answer you wanted to hear.\n\n\n",
"msg_date": "Thu, 2 Feb 2023 10:51:38 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Getting an index scan to be a parallel index scan"
},
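[Editor's note] The "work stealing" scheme described in the reply above — each worker takes another probe value whenever it runs out, so all workers stay busy regardless of how the matches are distributed — can be illustrated with a shared queue. This is a toy model of the idea, not anything the PostgreSQL executor actually does; `probe` is a stand-in for an index probe.

```python
import queue
import threading

def work_stealing_lookup(ids, n_workers=4):
    """Workers pull the next id from a shared queue instead of being
    handed a fixed sublist, so no worker idles while work remains."""
    todo = queue.Queue()
    for i in ids:
        todo.put(i)
    results, lock = [], threading.Lock()

    def probe(i):
        # Stand-in for probing the index with id = i.
        return (i, i)

    def worker():
        while True:
            try:
                i = todo.get_nowait()  # "steal" the next value to probe
            except queue.Empty:
                return               # shared pool exhausted; this worker is done
            row = probe(i)
            with lock:
                results.append(row)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

print(len(work_stealing_lookup(list(range(1000)))))  # 1000
```

Unlike a static pre-split, a worker whose probes happen to be cheap simply takes more values, which is why this scheme stays fair under skew.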
{
"msg_contents": "Justin,\n\nI did try changing min_parallel_index_scan_size /\nmin_parallel_table_scan_size and didn't see any change (the below is with\nforce_parallel_mode = off):\n\npostgres=# set min_parallel_index_scan_size = 0;\nSET\npostgres=# set min_parallel_table_scan_size = 0;\nSET\npostgres=# explain select * from testing where id =\nANY(array[1608377,5449811, ... <removed for brevity> ...\n,4654284,3558460]::integer[]);\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------\n Index Scan using testing_pkey on testing (cost=0.43..6138.81 rows=1000\nwidth=74)\n Index Cond: (id = ANY ('{1608377,5449811, ... < removed for brevity >\n... 4654284,3558460}'::integer[]))\n(2 rows)\n\n\nAs for 'force_parallel_mode', while this isn't \"debugging PG\", it isn't\nsomething that I would actually turn on production, just something I was\nplaying with to see the cost of parallel queries when the planner might not\nthink they are the most efficient.\n\n\nThomas,\n\nThanks for the explanation. Yes, that is the query plan I was imagining. I\ndo see how chopping it up could result in an unfair distribution. But my\ncounter to that would be that wouldn't chopping it up still be better than\nnot. If things do happen to work out to be fair, now it's X times as fast,\nif things are very unfair, then you haven't really lost much (besides the\nparallel overhead) compared to the non-parallel query. Or maybe it should\nbe possible to do the parallel query if there were some statistics (either\nnormal ones or extended ones) that told the planner that the result would\nprobably be fair?\n\nThough I do agree that the \"work stealing\" option would be the most\nefficient, but would be a lot more complicated to code up.\n\nI tried out inserting into a separate table, and as you guessed that\nworked. 
For my production scenario that isn't really feasible, but still\ncool to see it work.\n\n\npostgres=# create table ids(\n probe_id int PRIMARY KEY\n);\n\ninsert into ids(probe_id) values (774494);\ninsert into ids(probe_id) values (9141914);\n...\n\npostgres=# select count(*) from ids;\n count\n-------\n 1000\n(1 row)\n\npostgres=# explain select * from testing where id in (select * from ids);\n QUERY PLAN\n-----------------------------------------------------------------------------------------\n Gather (cost=0.43..3504.67 rows=1000 width=74)\n Workers Planned: 2\n -> Nested Loop (cost=0.43..3504.67 rows=417 width=74)\n -> Parallel Seq Scan on ids (cost=0.00..9.17 rows=417 width=4)\n -> Index Scan using testing_pkey on testing (cost=0.43..8.37\nrows=1 width=74)\n Index Cond: (id = ids.probe_id)\n(6 rows)\n\nThanks,\nAlex Kaiser\n\nOn Wed, Feb 1, 2023 at 1:52 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Wed, Feb 1, 2023 at 6:39 PM Alex Kaiser <alextkaiser@gmail.com> wrote:\n> > select * from testing where id in (1608377,5449811, ... <1000 random\n> ids> ,4654284,3558460);\n> >\n> > Essentially I have a list of 1000 ids and I would like the rows for all\n> of those ids.\n> >\n> > This seems like it would be pretty easy to parallelize, if you have X\n> threads then you would split the list of IDs into 1000/X sub lists and give\n> one to each thread to go find the rows for ids in the given list. Even\n> when I use the following configs I don't get a query plan that actually\n> uses any parallelism:\n>\n> It sounds like the plan you are imagining is something like:\n>\n> Gather\n> Nested Loop Join\n> Outer side: <partial scan of your set of constant values>\n> Inner side: Index scan of your big table\n>\n> Such a plan would only give the right answer if each process has a\n> non-overlapping subset of the constant values to probe the index with,\n> and together they have the whole set. 
Hypothetically, a planner could\n> chop that set up beforehand and give a different subset to each\n> process (just as you could do that yourself using N connections and\n> separate queries), but that might be unfair: one process might find\n> lots of matches, and the others might find none, because of the\n> distribution of data. So you'd ideally want some kind of \"work\n> stealing\" scheme, where each worker can take more values to probe from\n> whenever it needs more, so that they all keep working until the values\n> run out. We don't have a thing that can do that. You might imagine\n> that a CTE could do it, so WITH keys_to_look_up AS (VALUES (1), (2),\n> ...) SELECT ... JOIN ON ..., but that also doesn't work because we\n> don't have a way to do \"partial\" scans of CTEs either (though someone\n> could invent that). Likewise for temporary tables: they are invisible\n> to parallel workers, so they can't help us. I have contemplated\n> \"partial function scans\" for set-returning functions, where a function\n> could be given a bit of shared memory and various other infrastructure\n> to be able to be \"parallel aware\" (= able to coordinate across\n> processes so that each process gets a subset of the data), and one\n> could imagine that that would allow various solutions to the problem,\n> but that's vapourware.\n>\n> But you can get a plan like that if you insert all those values into a\n> regular table, depending on various settings, stats and\n> min_parallel_table_scan_size (try 0, I guess that'll definitely do\n> it). Which probably isn't the answer you wanted to hear.\n>\n",
"msg_date": "Wed, 1 Feb 2023 16:54:17 -0800",
"msg_from": "Alex Kaiser <alextkaiser@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Getting an index scan to be a parallel index scan"
},
{
"msg_contents": "On Thu, Feb 2, 2023 at 1:54 PM Alex Kaiser <alextkaiser@gmail.com> wrote:\n> Thanks for the explanation. Yes, that is the query plan I was imagining. I do see how chopping it up could result in an unfair distribution. But my counter to that would be that wouldn't chopping it up still be better than not. If things do happen to work out to be fair, now it's X times as fast, if things are very unfair, then you haven't really lost much (besides the parallel overhead) compared to the non-parallel query. Or maybe it should be possible to do the parallel query if there were some statistics (either normal ones or extended ones) that told the planner that the result would probably be fair?\n\nMaybe, but unfairness multiplies if it's part of a larger plan; what\nif the output of those nodes is the input to much more work, but now\nTHAT work is being done by one process? But yeah, statistics could\nhelp with that. I'm vaguely aware that other systems that do more\npartition-based parallelism spend a lot of effort on that sort of\nthinking.\n\n> Though I do agree that the \"work stealing\" option would be the most efficient, but would be a lot more complicated to code up.\n\nYeah. I probably used the wrong word; what I was describing is\n(something like) page-based parallelism, where input gets chopped up\ninto arbitrary chunks and handed out to consumers on demand, but we\ndon't know anything about the values in those chunks; that allows for\nmany interesting kind of plans, and it's nice because it's fair.\n\nAnother kind of parallelism is partition-based, which PostgreSQL can\ndo in a limited sense: we can send workers into different partitions\nof a table (what we can't do is partition the table on-the-fly, which\nis central to most parallelism in some other systems). 
Let's see:\n\nCREATE TABLE testing(\n id INT,\n info INT,\n data_one TEXT,\n data_two TEXT,\n primary key(id, info)\n) partition by hash (id);\ncreate table testing_p0 partition of testing for values with (modulus\n2, remainder 0);\ncreate table testing_p1 partition of testing for values with (modulus\n2, remainder 1);\nINSERT INTO testing(id, info, data_one, data_two)\nSELECT idx, idx, md5(random()::text), md5(random()::text)\nFROM generate_series(1,10000000) idx;\nanalyze;\n\nexplain select count(*) from testing where id in\n(1608377,5449811,5334677,5458230,2053195,3572313,1949724,3559988,5061560,8479775,\n...);\n\n Aggregate\n -> Append\n -> Index Only Scan using testing_p0_pkey on testing_p0 testing_1\n -> Index Only Scan using testing_p1_pkey on testing_p1 testing_2\n\nHmph. I can't seem to convince it to use Parallel Append. I think it\nmight be because the planner is not smart enough to chop down the =ANY\nlists to match the partitions. One sec...\n\nOk I hacked my copy of PostgreSQL to let me set parallel_setup_costs\nto negative numbers, and then I told it that parallelism is so awesome\nthat it makes your queries cost -1000000 timerons before they even\nstart. Now I see a plan like:\n\n Gather (cost=-999999.57..-987689.45 rows=2000 width=74)\n Workers Planned: 2\n -> Parallel Append (cost=0.43..12110.55 rows=832 width=74)\n -> Parallel Index Scan using testing_p0_pkey on testing_p0 testing_1\n -> Parallel Index Scan using testing_p1_pkey on testing_p1 testing_2\n\nBut it's probing every index for every one of the values in the big\nlist, not just the ones that have a non-zero chance of finding a\nmatch, which is a waste of cycles. 
I think if the planner were\nsmarter about THAT (as it is for plain old \"=\"), then the costing\nwould have chosen parallelism naturally by cost.\n\nBut it's probably not as cool as page-based parallelism, because\nparallelism is limited by your partitioning scheme.\n\nIf I had more timerons myself, I'd like to try to make parallel\nfunction scans, or parallel CTE scans, work...\n\n\n",
"msg_date": "Thu, 2 Feb 2023 14:48:14 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Getting an index scan to be a parallel index scan"
},
{
"msg_contents": "On Thu, 2 Feb 2023 at 14:49, Thomas Munro <thomas.munro@gmail.com> wrote:\n> If I had more timerons myself, I'd like to try to make parallel\n> function scans, or parallel CTE scans, work...\n\nI've not really looked in detail but I thought parallel VALUES scan\nmight be easier than those two.\n\nDavid\n\n\n",
"msg_date": "Thu, 2 Feb 2023 15:11:53 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Getting an index scan to be a parallel index scan"
},
{
"msg_contents": "Okay after reading\nhttp://rhaas.blogspot.com/2018/06/using-forceparallelmode-correctly.html I\ndo see that I was using force_parallel_mode incorrectly and wouldn't have\ngotten what I wanted even if the original query was possible to parallelize.\n\n> Maybe, but unfairness multiplies if it's part of a larger plan\n\nAh, I didn't think of that, and it's a good point.\n\n> Ok I hacked my copy of PostgreSQL to let me set parallel_setup_costs\n> to negative numbers ...\n\nThanks for taking the time to do that and look into that. I don't actually\nthink it's worth the confusion to allow this in general, but I was thinking\nthat setting \"force_parallel_mode = on\" would essentially be doing\nsomething equivalent to this (though I now see that is wrong).\n\n> But it's probing every index for every one of the values in the big\n> list, not just the ones that have a non-zero chance of finding a\n> match, which is a waste of cycles.\n\nIn my case, this would actually be quite helpful because the real\nbottleneck when I run this in production is time spent waiting for IO. I\nwas hoping to spread that IO wait time over multiple threads, and wouldn't\nreally care about the few extra wasted CPU cycles. But I can't actually do\nthis as I can't set parallel_setup_costs to be negative, so I wouldn't be\nable to get PG to choose the parallel plan even if I did partition the\ntable.\n\n> If I had more timerons myself ...\n\nIf only we all had more timerons ... 
:)\n\nThanks,\nAlex Kaiser\n\nOn Wed, Feb 1, 2023 at 6:12 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Thu, 2 Feb 2023 at 14:49, Thomas Munro <thomas.munro@gmail.com> wrote:\n> > If I had more timerons myself, I'd like to try to make parallel\n> > function scans, or parallel CTE scans, work...\n>\n> I've not really looked in detail but I thought parallel VALUES scan\n> might be easier than those two.\n>\n> David\n>\n",
"msg_date": "Wed, 1 Feb 2023 21:00:09 -0800",
"msg_from": "Alex Kaiser <alextkaiser@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Getting an index scan to be a parallel index scan"
}
] |
[
{
"msg_contents": "Dear Postgres Family,\n\nI need some clarity and suggestions on the situation below.\n\nThere is a table called table_1\n\n 1. it is immutable; only INSERTs are allowed.\n 2. it should get partitioned on a monthly basis\n 3. partitions should be created while inserting data\n 4. the inheritance method should be used\n\nI have written everything with a trigger function.\n\nNow my doubt is how I can do this INSERT routing better. It must be\noptimized for efficiency and handle concurrency.\n\nAny thoughts?\n\nThanks,\n",
"msg_date": "Tue, 7 Feb 2023 10:03:18 +0100",
"msg_from": "chanukya SDS <chanukyasds@gmail.com>",
"msg_from_op": true,
"msg_subject": "Routing & Concurrency with trigger functions"
}
] |
[
{
"msg_contents": "I'm used to adding an empty column being instant in most cases, so my \nattention was drawn when it took a long lock.\n\nThe timings below imply that each row is running the CHECK?\n\nI've come to expect addition of a NULL column to be fast, and what I'm \nseeing seems to contradict the docs [1]:\n\n> PostgreSQL assumes that CHECK constraints' conditions are immutable, \n> that is, they will always give the same result for the same input value. \n> This assumption is what justifies examining CHECK constraints only when \n> a value is first converted to be of a domain type, and not at other \n> times.\n\nI've ruled out waiting on a lock; nothing is reported with \nlog_lock_waits=on. This is a test database with exclusive access (2.5 \nmillion rows):\n\nI don't think this is another index or constraint, as removing them does \nnot affect performance. Also the \"text\" case below seems to prove this. \nResults are fully reproducible by repeatedly dropping and adding these \ncolumns.\n\nReporting in case something is not as expected. I can't even think of a \nworkaround here...\n\nThis is PostgreSQL 14.5 on Alpine Linux. Thanks.\n\n[1] https://www.postgresql.org/docs/current/sql-createdomain.html\n\n---\n\nCREATE DOMAIN hash AS text\n CHECK (VALUE ~ E'^[a-zA-Z0-9]{8,32}$');\n \ndevstats=> ALTER TABLE invite ADD COLUMN test text;\nALTER TABLE\nTime: 8.988 ms\n \ndevstats=> ALTER TABLE invite ADD COLUMN test hash;\nALTER TABLE\nTime: 30923.380 ms (00:30.923)\n \ndevstats=> ALTER TABLE invite ADD COLUMN test hash DEFAULT NULL;\nALTER TABLE\nTime: 30344.272 ms (00:30.344)\n \ndevstats=> ALTER TABLE invite ADD COLUMN test hash DEFAULT '123abc123'::hash;\nALTER TABLE\nTime: 67439.232 ms (01:07.439)\n\n-- \nMark\n\n\n",
"msg_date": "Wed, 8 Feb 2023 18:01:23 +0000 (GMT)",
"msg_from": "Mark Hills <mark@xwax.org>",
"msg_from_op": true,
"msg_subject": "Domain check taking place unnecessarily?"
},
{
"msg_contents": "On Wed, Feb 8, 2023 at 11:01 AM Mark Hills <mark@xwax.org> wrote:\n\n>\n> CREATE DOMAIN hash AS text\n> CHECK (VALUE ~ E'^[a-zA-Z0-9]{8,32}$');\n>\n> devstats=> ALTER TABLE invite ADD COLUMN test hash;\n> ALTER TABLE\n> Time: 30923.380 ms (00:30.923)\n>\n\nNecessarily, I presume because if you decided that the check on the domain\nshould be \"value is not null\" (don't do this though...) the column addition\nwould have to fail for existing rows (modulo defaults...).\n\nDavid J.\n",
"msg_date": "Wed, 8 Feb 2023 11:09:18 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Domain check taking place unnecessarily?"
},
{
"msg_contents": "On Wed, 2023-02-08 at 18:01 +0000, Mark Hills wrote:\n> I've ruled out waiting on a lock; nothing is reported with \n> log_lock_waits=on. This is a test database with exclusive access (2.5 \n> million rows):\n> \n> This is PostgreSQL 14.5 on Alpine Linux. Thanks.\n> \n> CREATE DOMAIN hash AS text\n> CHECK (VALUE ~ E'^[a-zA-Z0-9]{8,32}$');\n> \n> devstats=> ALTER TABLE invite ADD COLUMN test text;\n> ALTER TABLE\n> Time: 8.988 ms\n> \n> devstats=> ALTER TABLE invite ADD COLUMN test hash;\n> ALTER TABLE\n> Time: 30923.380 ms (00:30.923)\n> \n> devstats=> ALTER TABLE invite ADD COLUMN test hash DEFAULT NULL;\n> ALTER TABLE\n> Time: 30344.272 ms (00:30.344)\n> \n> devstats=> ALTER TABLE invite ADD COLUMN test hash DEFAULT '123abc123'::hash;\n> ALTER TABLE\n> Time: 67439.232 ms (01:07.439)\n\nIt takes 30 seconds to scan the table and determine that all existing rows\nsatisfy the constraint.\n\nThe last example is slower, because there is actually a non-NULL value to check.\n\nIf that were not a domain, but a normal check constraint, you could first add\nthe constraint as NOT VALID and later run ALTER TABLE ... VALIDATE CONSTRAINT ...,\nwhich takes a while too, but does not lock the table quite that much.\nBut I don't think there is a way to do that with a domain.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Wed, 08 Feb 2023 20:47:07 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Domain check taking place unnecessarily?"
},
{
"msg_contents": "On Wed, 8 Feb 2023, Laurenz Albe wrote:\n\n> On Wed, 2023-02-08 at 18:01 +0000, Mark Hills wrote:\n> > I've ruled out waiting on a lock; nothing is reported with \n> > log_lock_waits=on. This is a test database with exclusive access (2.5 \n> > million rows):\n> > \n> > This is PostgreSQL 14.5 on Alpine Linux. Thanks.\n> > \n> > CREATE DOMAIN hash AS text\n> > CHECK (VALUE ~ E'^[a-zA-Z0-9]{8,32}$');\n> > \n> > devstats=> ALTER TABLE invite ADD COLUMN test text;\n> > ALTER TABLE\n> > Time: 8.988 ms\n> > \n> > devstats=> ALTER TABLE invite ADD COLUMN test hash;\n> > ALTER TABLE\n> > Time: 30923.380 ms (00:30.923)\n> > \n> > devstats=> ALTER TABLE invite ADD COLUMN test hash DEFAULT NULL;\n> > ALTER TABLE\n> > Time: 30344.272 ms (00:30.344)\n> > \n> > devstats=> ALTER TABLE invite ADD COLUMN test hash DEFAULT '123abc123'::hash;\n> > ALTER TABLE\n> > Time: 67439.232 ms (01:07.439)\n> \n> It takes 30 seconds to scan the table and determine that all existing \n> rows satisfy the constraint.\n\nBut there's no existing data (note this is adding column, not constraint)\n\nExisting rows are guaranteed to satisfy the domain check, because the \ndomain check is guaranteed to be immutable (per [1] in my original mail)\n\nOf course, if it were a table constraint it may involve multiple columns, \nrequiring it to be evaluated per-row.\n\nBut the docs make it clear the domain check is expected to be evaluated on \ninput, precisely for this purpose.\n\nSo I wondered if this was a shortcoming or even a bug.\n\nIt seems that adding a column of NULL (or even default) values for a \ndomain can (should?) be as quick as a basic data type like text or \ninteger...?\n\n-- \nMark",
"msg_date": "Thu, 9 Feb 2023 10:56:35 +0000 (GMT)",
"msg_from": "Mark Hills <mark@xwax.org>",
"msg_from_op": true,
"msg_subject": "Re: Domain check taking place unnecessarily?"
},
{
"msg_contents": "On Wed, 8 Feb 2023, David G. Johnston wrote:\n\n> On Wed, Feb 8, 2023 at 11:01 AM Mark Hills <mark@xwax.org> wrote:\n> \n> >\n> > CREATE DOMAIN hash AS text\n> > CHECK (VALUE ~ E'^[a-zA-Z0-9]{8,32}$');\n> >\n> > devstats=> ALTER TABLE invite ADD COLUMN test hash;\n> > ALTER TABLE\n> > Time: 30923.380 ms (00:30.923)\n> >\n> \n> Necessarily, I presume because if you decided that the check on the domain\n> should be \"value is not null\" (don't do this though...) the column addition\n> would have to fail for existing rows (modulo defaults...).\n\nI'm not sure I'm parsing this paragraph correctly, but the existing rows \ndon't provide any data to the domain check. Perhaps you could clarify.\n\nMany thanks\n\n-- \nMark\n\n\n",
"msg_date": "Thu, 9 Feb 2023 11:01:41 +0000 (GMT)",
"msg_from": "Mark Hills <mark@xwax.org>",
"msg_from_op": true,
"msg_subject": "Re: Domain check taking place unnecessarily?"
},
{
"msg_contents": "Mark Hills <mark@xwax.org> writes:\n> On Wed, 8 Feb 2023, Laurenz Albe wrote:\n>> It takes 30 seconds to scan the table and determine that all existing \n>> rows satisfy the constraint.\n\n> But there's no existing data (note this is adding column, not constraint)\n\n> Existing rows are guaranteed to satisfy the domain check, because the \n> domain check is guaranteed to be immutable (per [1] in my original mail)\n\nimmutable != \"will accept null\".\n\nThere could be some more optimizations here, perhaps, but there aren't.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 Feb 2023 10:10:01 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Domain check taking place unnecessarily?"
},
{
"msg_contents": "On Thu, 9 Feb 2023, Tom Lane wrote:\n\n> Mark Hills <mark@xwax.org> writes:\n> > On Wed, 8 Feb 2023, Laurenz Albe wrote:\n> >> It takes 30 seconds to scan the table and determine that all existing \n> >> rows satisfy the constraint.\n> \n> > But there's no existing data (note this is adding column, not constraint)\n> \n> > Existing rows are guaranteed to satisfy the domain check, because the \n> > domain check is guaranteed to be immutable (per [1] in my original mail)\n> \n> immutable != \"will accept null\".\n> \n> There could be some more optimizations here, perhaps, but there aren't.\n\nWell that's no problem at all. Thanks for the clarification.\n\nI mentioned this case to a few people and they were also surprised by the \noutcome, to the point where we wondered if this might be misbehaving. \nHence bringing it up in this forum.\n\nWe'll go ahead and deal with the pauses in production, as I don't think \nthere's a workaround.\n\nThanks\n\n-- \nMark\n\n\n",
"msg_date": "Fri, 10 Feb 2023 10:49:25 +0000 (GMT)",
"msg_from": "Mark Hills <mark@xwax.org>",
"msg_from_op": true,
"msg_subject": "Re: Domain check taking place unnecessarily?"
}
] |
[
{
"msg_contents": "Hello postgres people!\n\nThis is not an issue report so much as a gripe. I'm on postgres 12.2, so it\nis entirely possible that the issue I describe is fixed in a later version.\nIf so, it is not described in the docs or any posts I can find archived on\npgsql-performance. (I am not brave enough to delve into pgsql-developer,\nwhere I'm sure this has been brought up at some point)\n\nBasically- window partition functions don't take advantage of existing\ntable partitions. I use window functions as a more powerful GROUP BY clause\nthat preserves row-by-row information- super handy for a lot of things.\n\nIn particular, I want to use window functions on already partitioned\ntables, like the below example:\n\ncreate table abb (a int, b int, g int) partition by hash(b)\n/* populate table etc... */\nselect a, b, min(a) over (partition by b) as g from abb\n\nIdeally with a query plan like this:\n\nWindow:\n Append:\n Sort on table_p0\n Sort on table_p1\n Sort on table_p2\n\nInstead, I get this:\n\nWindow:\n Sort:\n Append:\n Parallel seq scan on table_p0\n Parallel seq scan on table_p1\n Parallel seq scan on table_p2\n\nWhich is a BIG no-no, as there could potentially be thousands of partitions\nand BILLIONS of rows per table. This can be solved by manually implementing\nthe first query plan via scripting, e.g:\n\ndo $$\ndeclare i int;\nbegin\n for i in 0..get_npartitions() loop\n execute format('select a, b, min(a) over (partition by b) as g from\nabb_p%s', i);\n end loop;\nend $$ language plpgsql;\n\nThis is not ideal, but perfectly workable. I'm sure you guys are already\naware of this, it just seems like a really simple fix to me- if the window\nfunction partition scheme exactly matches the partition scheme of the table\nit queries, it should take advantage of those partitions.\n\nThanks,\nBen\n",
"msg_date": "Wed, 8 Feb 2023 11:45:09 -0800",
"msg_from": "Benjamin Tingle <ben@tingle.org>",
"msg_from_op": true,
"msg_subject": "Window Functions & Table Partitions"
},
{
"msg_contents": "On Thu, 9 Feb 2023 at 10:45, Benjamin Tingle <ben@tingle.org> wrote:\n> Basically- window partition functions don't take advantage of existing table partitions. I use window functions as a more powerful GROUP BY clause that preserves row-by-row information- super handy for a lot of things.\n>\n> In particular, I want to use window functions on already partitioned tables, like the below example:\n>\n> create table abb (a int, b int, g int) partition by hash(b)\n> /* populate table etc... */\n> select a, b, min(a) over (partition by b) as g from abb\n>\n> Ideally with a query plan like this:\n>\n> Window:\n> Append:\n> Sort on table_p0\n> Sort on table_p1\n> Sort on table_p2\n\nThere was some effort [1] in version 12 to take advantage of the order\ndefined by the partitioning scheme. The release notes [2] mention:\n\n\"Avoid sorting when partitions are already being scanned in the necessary order\"\n\nHowever, it's not 100% of what you need as there'd have to be a btree\nindex on abb(b) for the planner to notice.\n\nLikely this could be made better so that add_paths_to_append_rel()\nadded the pathkeys defined by the partitioned table into\nall_child_pathkeys if they didn't exist already. In fact, I've\nattached a very quickly hacked together patch against master to do\nthis. I've given it very little thought and it comes complete with\nfailing regression tests.\n\nIf you're interested in pursuing this then feel free to take the patch\nto the pgsql-hackers mailing list and propose it. It's unlikely I'll\nget time to do that for a while, but I will keep a branch locally with\nit to remind me in case I do at some point in the future.\n\nDavid\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=959d00e9dbe4cfcf4a63bb655ac2c29a5e579246\n[2] https://www.postgresql.org/docs/release/12.0/",
"msg_date": "Thu, 9 Feb 2023 11:35:46 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Window Functions & Table Partitions"
},
{
"msg_contents": "Thanks for the helpful response david! I'll have a shot at getting the\npatch to work myself & submitting to pgsql-hackers.\n\nBen\n\nOn Wed, Feb 8, 2023 at 2:36 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Thu, 9 Feb 2023 at 10:45, Benjamin Tingle <ben@tingle.org> wrote:\n> > Basically- window partition functions don't take advantage of existing\n> table partitions. I use window functions as a more powerful GROUP BY clause\n> that preserves row-by-row information- super handy for a lot of things.\n> >\n> > In particular, I want to use window functions on already partitioned\n> tables, like the below example:\n> >\n> > create table abb (a int, b int, g int) partition by hash(b)\n> > /* populate table etc... */\n> > select a, b, min(a) over (partition by b) as g from abb\n> >\n> > Ideally with a query plan like this:\n> >\n> > Window:\n> > Append:\n> > Sort on table_p0\n> > Sort on table_p1\n> > Sort on table_p2\n>\n> There was some effort [1] in version 12 to take advantage of the order\n> defined by the partitioning scheme. The release notes [2] mention:\n>\n> \"Avoid sorting when partitions are already being scanned in the necessary\n> order\"\n>\n> However, it's not 100% of what you need as there'd have to be a btree\n> index on abb(b) for the planner to notice.\n>\n> Likely this could be made better so that add_paths_to_append_rel()\n> added the pathkeys defined by the partitioned table into\n> all_child_pathkeys if they didn't exist already. In fact, I've\n> attached a very quickly hacked together patch against master to do\n> this. I've given it very little thought and it comes complete with\n> failing regression tests.\n>\n> If you're interested in pursuing this then feel free to take the patch\n> to the pgsql-hackers mailing list and propose it. 
It's unlikely I'll\n> get time to do that for a while, but I will keep a branch locally with\n> it to remind me in case I do at some point in the future.\n>\n> David\n>\n> [1]\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=959d00e9dbe4cfcf4a63bb655ac2c29a5e579246\n> [2] https://www.postgresql.org/docs/release/12.0/\n>\n\n\n-- \n\nBen(t).\n\nThanks for the helpful response david! I'll have a shot at getting the patch to work myself & submitting to pgsql-hackers.BenOn Wed, Feb 8, 2023 at 2:36 PM David Rowley <dgrowleyml@gmail.com> wrote:On Thu, 9 Feb 2023 at 10:45, Benjamin Tingle <ben@tingle.org> wrote:\n> Basically- window partition functions don't take advantage of existing table partitions. I use window functions as a more powerful GROUP BY clause that preserves row-by-row information- super handy for a lot of things.\n>\n> In particular, I want to use window functions on already partitioned tables, like the below example:\n>\n> create table abb (a int, b int, g int) partition by hash(b)\n> /* populate table etc... */\n> select a, b, min(a) over (partition by b) as g from abb\n>\n> Ideally with a query plan like this:\n>\n> Window:\n> Append:\n> Sort on table_p0\n> Sort on table_p1\n> Sort on table_p2\n\nThere was some effort [1] in version 12 to take advantage of the order\ndefined by the partitioning scheme. The release notes [2] mention:\n\n\"Avoid sorting when partitions are already being scanned in the necessary order\"\n\nHowever, it's not 100% of what you need as there'd have to be a btree\nindex on abb(b) for the planner to notice.\n\nLikely this could be made better so that add_paths_to_append_rel()\nadded the pathkeys defined by the partitioned table into\nall_child_pathkeys if they didn't exist already. In fact, I've\nattached a very quickly hacked together patch against master to do\nthis. 
I've given it very little thought and it comes complete with\nfailing regression tests.\n\nIf you're interested in pursuing this then feel free to take the patch\nto the pgsql-hackers mailing list and propose it. It's unlikely I'll\nget time to do that for a while, but I will keep a branch locally with\nit to remind me in case I do at some point in the future.\n\nDavid\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=959d00e9dbe4cfcf4a63bb655ac2c29a5e579246\n[2] https://www.postgresql.org/docs/release/12.0/\n-- Ben(t).",
"msg_date": "Thu, 9 Feb 2023 09:40:48 -0800",
"msg_from": "Benjamin Tingle <ben@tingle.org>",
"msg_from_op": true,
"msg_subject": "Re: Window Functions & Table Partitions"
},
{
"msg_contents": "On Fri, 10 Feb 2023 at 06:40, Benjamin Tingle <ben@tingle.org> wrote:\n> Thanks for the helpful response david! I'll have a shot at getting the patch to work myself & submitting to pgsql-hackers.\n\nI took some time today for this and fixed up a few mistakes in the\npatch and added it to the March commitfest [1]. Time is ticking away\nfor v16, so given this is a fairly trivial patch, I thought it might\nbe nice to have it.\n\nAny discussion on the patch can be directed at [2]\n\nDavid\n\n[1] https://commitfest.postgresql.org/42/4198/\n[2] https://www.postgresql.org/message-id/flat/CAApHDvojKdBR3MR59JXmaCYbyHB6Q_5qPRU+dy93En8wm+XiDA@mail.gmail.com\n\n> Ben\n>\n> On Wed, Feb 8, 2023 at 2:36 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>>\n>> On Thu, 9 Feb 2023 at 10:45, Benjamin Tingle <ben@tingle.org> wrote:\n>> > Basically- window partition functions don't take advantage of existing table partitions. I use window functions as a more powerful GROUP BY clause that preserves row-by-row information- super handy for a lot of things.\n>> >\n>> > In particular, I want to use window functions on already partitioned tables, like the below example:\n>> >\n>> > create table abb (a int, b int, g int) partition by hash(b)\n>> > /* populate table etc... */\n>> > select a, b, min(a) over (partition by b) as g from abb\n>> >\n>> > Ideally with a query plan like this:\n>> >\n>> > Window:\n>> > Append:\n>> > Sort on table_p0\n>> > Sort on table_p1\n>> > Sort on table_p2\n>>\n>> There was some effort [1] in version 12 to take advantage of the order\n>> defined by the partitioning scheme. 
The release notes [2] mention:\n>>\n>> \"Avoid sorting when partitions are already being scanned in the necessary order\"\n>>\n>> However, it's not 100% of what you need as there'd have to be a btree\n>> index on abb(b) for the planner to notice.\n>>\n>> Likely this could be made better so that add_paths_to_append_rel()\n>> added the pathkeys defined by the partitioned table into\n>> all_child_pathkeys if they didn't exist already. In fact, I've\n>> attached a very quickly hacked together patch against master to do\n>> this. I've given it very little thought and it comes complete with\n>> failing regression tests.\n>>\n>> If you're interested in pursuing this then feel free to take the patch\n>> to the pgsql-hackers mailing list and propose it. It's unlikely I'll\n>> get time to do that for a while, but I will keep a branch locally with\n>> it to remind me in case I do at some point in the future.\n>>\n>> David\n>>\n>> [1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=959d00e9dbe4cfcf4a63bb655ac2c29a5e579246\n>> [2] https://www.postgresql.org/docs/release/12.0/\n>\n>\n>\n> --\n>\n> Ben(t).>\n> Ben\n>\n> On Wed, Feb 8, 2023 at 2:36 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>>\n>> On Thu, 9 Feb 2023 at 10:45, Benjamin Tingle <ben@tingle.org> wrote:\n>> > Basically- window partition functions don't take advantage of existing table partitions. I use window functions as a more powerful GROUP BY clause that preserves row-by-row information- super handy for a lot of things.\n>> >\n>> > In particular, I want to use window functions on already partitioned tables, like the below example:\n>> >\n>> > create table abb (a int, b int, g int) partition by hash(b)\n>> > /* populate table etc... 
*/\n>> > select a, b, min(a) over (partition by b) as g from abb\n>> >\n>> > Ideally with a query plan like this:\n>> >\n>> > Window:\n>> > Append:\n>> > Sort on table_p0\n>> > Sort on table_p1\n>> > Sort on table_p2\n>>\n>> There was some effort [1] in version 12 to take advantage of the order\n>> defined by the partitioning scheme. The release notes [2] mention:\n>>\n>> \"Avoid sorting when partitions are already being scanned in the necessary order\"\n>>\n>> However, it's not 100% of what you need as there'd have to be a btree\n>> index on abb(b) for the planner to notice.\n>>\n>> Likely this could be made better so that add_paths_to_append_rel()\n>> added the pathkeys defined by the partitioned table into\n>> all_child_pathkeys if they didn't exist already. In fact, I've\n>> attached a very quickly hacked together patch against master to do\n>> this. I've given it very little thought and it comes complete with\n>> failing regression tests.\n>>\n>> If you're interested in pursuing this then feel free to take the patch\n>> to the pgsql-hackers mailing list and propose it. It's unlikely I'll\n>> get time to do that for a while, but I will keep a branch locally with\n>> it to remind me in case I do at some point in the future.\n>>\n>> David\n>>\n>> [1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=959d00e9dbe4cfcf4a63bb655ac2c29a5e579246\n>> [2] https://www.postgresql.org/docs/release/12.0/\n>\n>\n>\n> --\n>\n> Ben(t).\n\n\n",
"msg_date": "Tue, 21 Feb 2023 16:18:00 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Window Functions & Table Partitions"
}
] |
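The workaround that exists today for the window-function thread above — letting the planner notice the partition order via a btree index on the partition key — can be sketched as follows. The schema is the one from the thread; the resulting plan shape depends on the PostgreSQL version and on table statistics, so treat this as illustrative rather than guaranteed:

```sql
-- Schema from the thread, plus the abb(b) index David says the planner
-- needs in order to notice the partition-defined order.
CREATE TABLE abb (a int, b int, g int) PARTITION BY HASH (b);
CREATE TABLE abb_p0 PARTITION OF abb FOR VALUES WITH (MODULUS 3, REMAINDER 0);
CREATE TABLE abb_p1 PARTITION OF abb FOR VALUES WITH (MODULUS 3, REMAINDER 1);
CREATE TABLE abb_p2 PARTITION OF abb FOR VALUES WITH (MODULUS 3, REMAINDER 2);
CREATE INDEX ON abb (b);

INSERT INTO abb SELECT i, i % 100, 0 FROM generate_series(1, 100000) AS i;
ANALYZE abb;

-- With the index in place, PostgreSQL 12+ can feed the WindowAgg from an
-- ordered scan of the partitions instead of sorting the whole table.
EXPLAIN (COSTS OFF)
SELECT a, b, min(a) OVER (PARTITION BY b) AS g FROM abb;
```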
[
{
"msg_contents": "I've been thinking about the max_wal_senders parameter lately and wondering\nif there is any harm in setting it too high. I'm wondering if I should try\nto shave a few senders off, perhaps to match my logical replicas + 1,\ninstead of just leaving it at the default of 10. Or vice-versa, can\nclients use more than one sender if they are available? Would increasing\nit result in lower latency? The documentation is a little vague.\n\nThe documentation mentions an orphaned connection slot that may take a\nwhile to time out. How can I tell if I have any of those? I was looking\nfor a `pg_wal_slots` table similar to the `pg_replication_slots` table, but\ndon't see anything obvious in the catalog.\n\nI've been thinking about the max_wal_senders parameter lately and wondering if there is any harm in setting it too high. I'm wondering if I should try to shave a few senders off, perhaps to match my logical replicas + 1, instead of just leaving it at the default of 10. Or vice-versa, can clients use more than one sender if they are available? Would increasing it result in lower latency? The documentation is a little vague.The documentation mentions an orphaned connection slot that may take a while to time out. How can I tell if I have any of those? I was looking for a `pg_wal_slots` table similar to the `pg_replication_slots` table, but don't see anything obvious in the catalog.",
"msg_date": "Wed, 8 Feb 2023 18:07:15 -0500",
"msg_from": "Rick Otten <rottenwindfish@gmail.com>",
"msg_from_op": true,
"msg_subject": "max_wal_senders"
},
{
"msg_contents": "On Wed, 2023-02-08 at 18:07 -0500, Rick Otten wrote:\n> I've been thinking about the max_wal_senders parameter lately and wondering if there\n> is any harm in setting it too high.\n\nNo, there isn't, except that if you end up having too many *actual* WAL senders, it\nwill cause load. A high limit is no problem as such.\n\n> The documentation mentions an orphaned connection slot that may take a while to time out.\n> How can I tell if I have any of those? I was looking for a `pg_wal_slots` table\n> similar to the `pg_replication_slots` table, but don't see anything obvious in the catalog.\n\nThe view is \"pg_stat_replication\", but you won't see there if an entry is\nabandoned before PostgreSQL does and terminates it. You can set \"tcp_keepalived_idle\"\nlow enough so that the kernel will detect broken connections early on.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Thu, 09 Feb 2023 06:59:53 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: max_wal_senders"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-09 06:59:53 +0100, Laurenz Albe wrote:\n> On Wed, 2023-02-08 at 18:07 -0500, Rick Otten wrote:\n> > I've been thinking about the max_wal_senders parameter lately and wondering if there\n> > is any harm in setting it too high.\n> \n> No, there isn't, except that if you end up having too many *actual* WAL senders, it\n> will cause load. A high limit is no problem as such.\n\nThat's not *quite* true. The downsides are basically the same as for\nmax_connections (It's basically treated the same, see\nInitializeMaxBackends()): You need more shared memory. There's a small\ndegradation of performance due to the increased size of some shared\ndatastructures, most prominently the lock table for heavyweight locks.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 8 Feb 2023 22:40:12 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: max_wal_senders"
}
] |
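On the question of spotting orphaned WAL sender connections in the thread above: active senders are listed one row each in `pg_stat_replication`, and the keepalive parameter referred to is spelled `tcp_keepalives_idle`. A sketch — the threshold values below are illustrative, not recommendations:

```sql
-- One row per active WAL sender; an abandoned sender shows up as a row
-- whose reply_time stops advancing until the connection is terminated.
SELECT pid, application_name, client_addr, state,
       now() - reply_time AS since_last_reply
FROM pg_stat_replication;

-- Let the kernel detect broken connections earlier (illustrative values).
ALTER SYSTEM SET tcp_keepalives_idle = 60;      -- seconds of idle before first probe
ALTER SYSTEM SET tcp_keepalives_interval = 10;  -- seconds between probes
ALTER SYSTEM SET tcp_keepalives_count = 3;      -- failed probes before giving up
SELECT pg_reload_conf();
```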
[
{
"msg_contents": "Hi all,\n\nI am running a simple test and am curious to know why a difference in execution times between PostgreSQL 12 vs PostgreSQL 15.\n\nI have this function:\nCREATE function test() returns int language plpgsql as $$\ndeclare\n v_number bigint;\n v_multiplier float = 3.14159;\n loop_cnt bigint;\nbegin\n\n for loop_cnt in 1..1000000000\n loop\n v_number := 1000;\n v_number := v_number * v_multiplier;\n end loop;\n\n return 0;\n\nend;$$;\n\nI execute this in PostgreSQL 12:\n\n(screenshot of timing output omitted)\n\n\nPostgreSQL 15:\n(screenshot of timing output omitted)\n\nIt is much faster in 15 than in 12, and while I love the performance improvement. I am curious to know the rationale behind this improvement on PostgreSQL 15.\n\nThe test result is from PostgreSQL on Windows but I observed the same behavior on Linux OS too.\n\nServer Spec:\nIntel i7-8650U CPU @1.90GHz 2.11GHz\nRAM 16 GB\nWindows 11 Enterprise\n\nThanks,\nAdi",
"msg_date": "Fri, 10 Feb 2023 18:53:09 +0000",
"msg_from": "Adithya Kumaranchath <akumaranchath@live.com>",
"msg_from_op": true,
"msg_subject": "For loop execution times in PostgreSQL 12 vs 15"
},
{
"msg_contents": "Hi\n\n\npá 10. 2. 2023 v 19:53 odesílatel Adithya Kumaranchath <\nakumaranchath@live.com> napsal:\n\n> Hi all,\n>\n> I am running a simple test and am curious to know why a difference in\n> execution times between PostgreSQL 12 vs PostgreSQL 15.\n>\n> *I have this function:*\n> CREATE function test() returns int language plpgsql as $$\n> declare\n> v_number bigint;\n> v_multiplier float = 3.14159;\n> loop_cnt bigint;\n> begin\n>\n> for loop_cnt in 1..1000000000\n> loop\n> v_number := 1000;\n> v_number := v_number * v_multiplier;\n> end loop;\n>\n> return 0;\n>\n> end;$$;\n>\n> *I execute this in PostgreSQL 12:*\n>\n>\n>\n>\n> *PostgreSQL 15:*\n>\n>\n> It is much faster in 15 than in 12, and while I love the performance\n> improvement. I am curious to know the rationale behind this improvement on\n> PostgreSQL 15.\n>\n> The test result is from PostgreSQL on Windows but I observed the same\n> behavior on Linux OS too.\n>\n> *Server Spec:*\n> Intel i7-8650U CPU @1.90GHz 2.11GHz\n> RAM 16 GB\n> Windows 11 Enterprise\n>\n> Thanks,\n> Adi\n>\n\nPlease, don't send screenshots - we believe you :-)\n\nYour code can be little bit faster if you use flag IMMUTABLE\n\nThere were more patches that reduced the overhead of expression's\nevaluation in PL/pgSQL.\n\nHistory\nhttps://github.com/postgres/postgres/commits/master/src/pl/plpgsql/src/pl_exec.c\n\nSome interesting commits\nhttps://github.com/postgres/postgres/commit/8f59f6b9c0376173a072e4fb7de1edd6a26e6b52\nhttps://github.com/postgres/postgres/commit/fbc7a716084ebccd2a996cc415187c269ea54b3e\nhttps://github.com/postgres/postgres/commit/73b06cf893c9d3bb38c11878a12cc29407e78b6c\n\nOriginally, PL/pgSQL was designed as glue of SQL and the expression\nevaluation was not too good. 
It was significantly slower in expression's\nevaluation than other interpreters like Perl or Python.\n\nBut lot of people uses PL/pgSQL for numeric calculations with PostGIS, so\nspeed of expression's evaluation is more important than before, and after\nall optimizations, although the PL/pgSQL is still slower than generic\ninterprets - still PL/pgSQL should be used mainly like glue of SQL, the\ndifference is significantly less - from 10x times slower to 2 slower. Still\nthere is not any JIT - so the performance is almost good I think.\n\nRegards\n\nPavel",
"msg_date": "Fri, 10 Feb 2023 20:35:11 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: For loop execution times in PostgreSQL 12 vs 15"
},
{
"msg_contents": "Hi\n\n\n> Please, don't send screenshots - we believe you :-)\n>\n> Your code can be little bit faster if you use flag IMMUTABLE\n>\n> There were more patches that reduced the overhead of expression's\n> evaluation in PL/pgSQL.\n>\n> History\n>\n> https://github.com/postgres/postgres/commits/master/src/pl/plpgsql/src/pl_exec.c\n>\n> Some interesting commits\n>\n> https://github.com/postgres/postgres/commit/8f59f6b9c0376173a072e4fb7de1edd6a26e6b52\n>\n> https://github.com/postgres/postgres/commit/fbc7a716084ebccd2a996cc415187c269ea54b3e\n>\n> https://github.com/postgres/postgres/commit/73b06cf893c9d3bb38c11878a12cc29407e78b6c\n>\n> Originally, PL/pgSQL was designed as glue of SQL and the expression\n> evaluation was not too good. It was significantly slower in expression's\n> evaluation than other interpreters like Perl or Python.\n>\n> But lot of people uses PL/pgSQL for numeric calculations with PostGIS, so\n> speed of expression's evaluation is more important than before, and after\n> all optimizations, although the PL/pgSQL is still slower than generic\n> interprets - still PL/pgSQL should be used mainly like glue of SQL, the\n> difference is significantly less - from 10x times slower to 2 slower. Still\n> there is not any JIT - so the performance is almost good I think.\n>\n\nstill there is a lot of overhead there - in profiler the overhead of\nmultiplication is less than 1%. But for significant improvements it needs\nsome form of JIT (Postgres has JIT for SQL expressions, but it is not used\nfor PLpgSQL expressions). On second hand, PL/pgSQL is not designed (and\nusually) not used for extensive numeric calculations like this. 
But if\nsomebody try to enhance performance, (s)he will be welcome every time (I\nthink so there is some space for 2x better performance - but it requires\nJIT).\n\nRegards\n\nPavel\n\n\n\n\n> Regards\n>\n> Pavel\n>\n>\n\nHiPlease, don't send screenshots - we believe you :-)Your code can be little bit faster if you use flag IMMUTABLEThere were more patches that reduced the overhead of expression's evaluation in PL/pgSQL. Historyhttps://github.com/postgres/postgres/commits/master/src/pl/plpgsql/src/pl_exec.cSome interesting commitshttps://github.com/postgres/postgres/commit/8f59f6b9c0376173a072e4fb7de1edd6a26e6b52https://github.com/postgres/postgres/commit/fbc7a716084ebccd2a996cc415187c269ea54b3ehttps://github.com/postgres/postgres/commit/73b06cf893c9d3bb38c11878a12cc29407e78b6cOriginally, PL/pgSQL was designed as glue of SQL and the expression evaluation was not too good. It was significantly slower in expression's evaluation than other interpreters like Perl or Python.But lot of people uses PL/pgSQL for numeric calculations with PostGIS, so speed of expression's evaluation is more important than before, and after all optimizations, although the PL/pgSQL is still slower than generic interprets - still PL/pgSQL should be used mainly like glue of SQL, the difference is significantly less - from 10x times slower to 2 slower. Still there is not any JIT - so the performance is almost good I think. still there is a lot of overhead there - in profiler the overhead of multiplication is less than 1%. But for significant improvements it needs some form of JIT (Postgres has JIT for SQL expressions, but it is not used for PLpgSQL expressions). On second hand, PL/pgSQL is not designed (and usually) not used for extensive numeric calculations like this. But if somebody try to enhance performance, (s)he will be welcome every time (I think so there is some space for 2x better performance - but it requires JIT).RegardsPavelRegardsPavel",
"msg_date": "Fri, 10 Feb 2023 20:45:39 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: For loop execution times in PostgreSQL 12 vs 15"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-10 20:45:39 +0100, Pavel Stehule wrote:\n> But for significant improvements it needs some form of JIT (Postgres has JIT\n> for SQL expressions, but it is not used for PLpgSQL expressions). On second\n> hand, PL/pgSQL is not designed (and usually) not used for extensive numeric\n> calculations like this. But if somebody try to enhance performance, (s)he\n> will be welcome every time (I think so there is some space for 2x better\n> performance - but it requires JIT).\n\nI think there's a *lot* of performance gain to be had before JIT is\nrequired. Or before JIT really can do a whole lot.\n\nWe do a lot of work for each plpgsql statement / expr. Most of the time\ntypically isn't spent actually evaluating expressions, but doing setup /\ninvalidation work.\n\nE.g. here's a profile of the test() function from upthread:\n\n Overhead Command Shared Object Symbol\n+ 17.31% postgres plpgsql.so [.] exec_stmts\n+ 15.43% postgres postgres [.] ExecInterpExpr\n+ 14.29% postgres plpgsql.so [.] exec_eval_expr\n+ 11.79% postgres plpgsql.so [.] exec_assign_value\n+ 7.06% postgres plpgsql.so [.] plpgsql_param_eval_var\n+ 6.58% postgres plpgsql.so [.] exec_assign_expr\n+ 4.82% postgres postgres [.] recomputeNamespacePath\n+ 3.90% postgres postgres [.] CachedPlanIsSimplyValid\n+ 3.45% postgres postgres [.] dtoi8\n+ 3.02% postgres plpgsql.so [.] exec_stmt_fori\n+ 2.88% postgres postgres [.] OverrideSearchPathMatchesCurrent\n+ 2.76% postgres postgres [.] EnsurePortalSnapshotExists\n+ 2.16% postgres postgres [.] float8mul\n+ 1.62% postgres postgres [.] MemoryContextReset\n\nSome of this is a bit distorted due to inlining (e.g. 
exec_eval_simple_expr()\nis attributed to exec_eval_expr()).\n\n\nMost of the checks we do ought to be done once, at the start of plpgsql\nevaluation, rather than be done over and over, during evaluation.\n\nFor things like simple exprs, we likely could gain a lot by pushing more of\nthe work into ExecEvalExpr(), rather than calling ExecEvalExpr() multiple\ntimes.\n\nThe memory layout of plpgsql statements should be improved, there's a lot of\nunnecessary indirection. That's what e.g. hurts exec_stmts() a lot.\n\nGreetings,\n\nAndres\n\n\n",
"msg_date": "Mon, 13 Feb 2023 13:21:57 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: For loop execution times in PostgreSQL 12 vs 15"
},
{
"msg_contents": "po 13. 2. 2023 v 22:22 odesílatel Andres Freund <andres@anarazel.de> napsal:\n\n> Hi,\n>\n> On 2023-02-10 20:45:39 +0100, Pavel Stehule wrote:\n> > But for significant improvements it needs some form of JIT (Postgres has\n> JIT\n> > for SQL expressions, but it is not used for PLpgSQL expressions). On\n> second\n> > hand, PL/pgSQL is not designed (and usually) not used for extensive\n> numeric\n> > calculations like this. But if somebody try to enhance performance, (s)he\n> > will be welcome every time (I think so there is some space for 2x better\n> > performance - but it requires JIT).\n>\n> I think there's a *lot* of performance gain to be had before JIT is\n> required. Or before JIT really can do a whole lot.\n>\n> We do a lot of work for each plpgsql statement / expr. Most of the time\n> typically isn't spent actually evaluating expressions, but doing setup /\n> invalidation work.\n>\n\nAnd it is the reason why I think JIT can help.\n\nYou repeatedly read and use switches based if the variable has fixed length\nor if it is varlena, if it is native composite or plpgsql composite, every\ntime you check if target is mutable or not, every time you check if\nexpression type is the same as target type. The PL/pgSQL compiler is very\n\"lazy\". Lots of checks are executed at runtime (or repeated). Another\nquestion is the cost of v1 calling notation. These functions require some\nenvironment, and preparing this environment is expensive. SQL executor has\na lot of parameters and setup is not cheap.\n\nThere are the same cases where expression: use buildin stable or immutable\nfunctions, operators and types, and these types are immutable. Maybe it can\nbe extended with buffering for different search_paths, and then it cannot\nbe limited just for buildin's objects.\n\n\n\n\n>\n> E.g. here's a profile of the test() function from upthread:\n>\n> Overhead Command Shared Object Symbol\n> + 17.31% postgres plpgsql.so [.] 
exec_stmts\n> + 15.43% postgres postgres [.] ExecInterpExpr\n> + 14.29% postgres plpgsql.so [.] exec_eval_expr\n> + 11.79% postgres plpgsql.so [.] exec_assign_value\n> + 7.06% postgres plpgsql.so [.] plpgsql_param_eval_var\n> + 6.58% postgres plpgsql.so [.] exec_assign_expr\n> + 4.82% postgres postgres [.] recomputeNamespacePath\n> + 3.90% postgres postgres [.] CachedPlanIsSimplyValid\n> + 3.45% postgres postgres [.] dtoi8\n> + 3.02% postgres plpgsql.so [.] exec_stmt_fori\n> + 2.88% postgres postgres [.]\n> OverrideSearchPathMatchesCurrent\n> + 2.76% postgres postgres [.] EnsurePortalSnapshotExists\n> + 2.16% postgres postgres [.] float8mul\n> + 1.62% postgres postgres [.] MemoryContextReset\n>\n> Some of this is a bit distorted due to inlining (e.g.\n> exec_eval_simple_expr()\n> is attributed to exec_eval_expr()).\n>\n>\n> Most of the checks we do ought to be done once, at the start of plpgsql\n> evaluation, rather than be done over and over, during evaluation.\n>\n> For things like simple exprs, we likely could gain a lot by pushing more of\n> the work into ExecEvalExpr(), rather than calling ExecEvalExpr() multiple\n> times.\n>\n> The memory layout of plpgsql statements should be improved, there's a lot\n> of\n> unnecessary indirection. That's what e.g. hurts exec_stmts() a lot.\n>\n> Greetings,\n>\n> Andres\n>\n\npo 13. 2. 2023 v 22:22 odesílatel Andres Freund <andres@anarazel.de> napsal:Hi,\n\nOn 2023-02-10 20:45:39 +0100, Pavel Stehule wrote:\n> But for significant improvements it needs some form of JIT (Postgres has JIT\n> for SQL expressions, but it is not used for PLpgSQL expressions). On second\n> hand, PL/pgSQL is not designed (and usually) not used for extensive numeric\n> calculations like this. But if somebody try to enhance performance, (s)he\n> will be welcome every time (I think so there is some space for 2x better\n> performance - but it requires JIT).\n\nI think there's a *lot* of performance gain to be had before JIT is\nrequired. 
Or before JIT really can do a whole lot.\n\nWe do a lot of work for each plpgsql statement / expr. Most of the time\ntypically isn't spent actually evaluating expressions, but doing setup /\ninvalidation work.And it is the reason why I think JIT can help.You repeatedly read and use switches based if the variable has fixed length or if it is varlena, if it is native composite or plpgsql composite, every time you check if target is mutable or not, every time you check if expression type is the same as target type. The PL/pgSQL compiler is very \"lazy\". Lots of checks are executed at runtime (or repeated). Another question is the cost of v1 calling notation. These functions require some environment, and preparing this environment is expensive. SQL executor has a lot of parameters and setup is not cheap. There are the same cases where expression: use buildin stable or immutable functions, operators and types, and these types are immutable. Maybe it can be extended with buffering for different search_paths, and then it cannot be limited just for buildin's objects. \n\nE.g. here's a profile of the test() function from upthread:\n\n Overhead Command Shared Object Symbol\n+ 17.31% postgres plpgsql.so [.] exec_stmts\n+ 15.43% postgres postgres [.] ExecInterpExpr\n+ 14.29% postgres plpgsql.so [.] exec_eval_expr\n+ 11.79% postgres plpgsql.so [.] exec_assign_value\n+ 7.06% postgres plpgsql.so [.] plpgsql_param_eval_var\n+ 6.58% postgres plpgsql.so [.] exec_assign_expr\n+ 4.82% postgres postgres [.] recomputeNamespacePath\n+ 3.90% postgres postgres [.] CachedPlanIsSimplyValid\n+ 3.45% postgres postgres [.] dtoi8\n+ 3.02% postgres plpgsql.so [.] exec_stmt_fori\n+ 2.88% postgres postgres [.] OverrideSearchPathMatchesCurrent\n+ 2.76% postgres postgres [.] EnsurePortalSnapshotExists\n+ 2.16% postgres postgres [.] float8mul\n+ 1.62% postgres postgres [.] MemoryContextReset\n\nSome of this is a bit distorted due to inlining (e.g. 
exec_eval_simple_expr()\nis attributed to exec_eval_expr()).\n\n\nMost of the checks we do ought to be done once, at the start of plpgsql\nevaluation, rather than be done over and over, during evaluation.\n\nFor things like simple exprs, we likely could gain a lot by pushing more of\nthe work into ExecEvalExpr(), rather than calling ExecEvalExpr() multiple\ntimes.\n\nThe memory layout of plpgsql statements should be improved, there's a lot of\nunnecessary indirection. That's what e.g. hurts exec_stmts() a lot.\n\nGreetings,\n\nAndres",
"msg_date": "Tue, 14 Feb 2023 06:53:54 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: For loop execution times in PostgreSQL 12 vs 15"
}
] |
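Pavel's IMMUTABLE suggestion from the thread above, applied to the original function. This is a sketch only: the iteration count is reduced so it finishes quickly, and absolute timings depend entirely on the machine and server version:

```sql
-- Marking the function IMMUTABLE is safe here because it reads no tables
-- and depends only on constants; it lets the executor skip some per-call work.
CREATE OR REPLACE FUNCTION test() RETURNS int
LANGUAGE plpgsql IMMUTABLE AS $$
DECLARE
    v_number     bigint;
    v_multiplier float := 3.14159;
    loop_cnt     bigint;
BEGIN
    FOR loop_cnt IN 1..10000000 LOOP  -- reduced from 1000000000 in the thread
        v_number := 1000;
        v_number := v_number * v_multiplier;
    END LOOP;
    RETURN 0;
END;
$$;

\timing on
SELECT test();
```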
[
{
"msg_contents": "Dear Postgres Folks,\n\nTypically we expect that UPDATE is a slow operation in PostgreSQL, however,\nthere are cases where it's hard to understand why. In particular, I have a table like\n\n```\nCREATE SEQUENCE t_inodes_inumber_seq\n START WITH 1\n INCREMENT BY 1\n NO MINVALUE\n NO MAXVALUE\n CACHE 1;\n\n\nCREATE TABLE t_inodes (\n inumber bigint PRIMARY KEY,\n icrtime timestamp with time zone NOT NULL,\n igeneration bigint NOT NULL\n);\n```\n\nand a transaction that inserts and update an entry in that table:\n\n```\nBEGIN;\nINSERT INTO t_inodes (inumber, icrtime, igeneration)\n VALUES (nextval('t_inodes_inumber_seq'), now(), 0) RETURNING inumber \\gset\n\nUPDATE t_inodes SET igeneration = igeneration + 1 where inumber = :inumber;\nEND;\n```\n\nThe pgbench shows the following result:\n\n```\n$ pgbench -h localhost -n -r -f update.sql -t 10000 -c 64 -j 64 testdb\npgbench (15.0 (Debian 15.0-1.pgdg110+1))\ntransaction type: update.sql\nscaling factor: 1\nquery mode: simple\nnumber of clients: 64\nnumber of threads: 64\nmaximum number of tries: 1\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 640000/640000\nnumber of failed transactions: 0 (0.000%)\nlatency average = 11.559 ms\ninitial connection time = 86.038 ms\ntps = 5536.736898 (without initial connection time)\nstatement latencies in milliseconds and failures:\n 0.524 0 BEGIN;\n 0.819 0 INSERT INTO t_inodes (inumber, icrtime, igeneration)\n 0.962 0 UPDATE t_inodes SET igeneration = igeneration + 1 where inumber = :inumber;\n 9.203 0 END;\n```\n\nMy naive expectation will be that updating the newly inserted record should cost nothing... Are there ways\nto make it less expensive?\n\nBest regards,\n Tigran.",
"msg_date": "Mon, 13 Feb 2023 16:09:27 +0100 (CET)",
"msg_from": "\"Mkrtchyan, Tigran\" <tigran.mkrtchyan@desy.de>",
"msg_from_op": true,
"msg_subject": "Performance of UPDATE operation"
},
{
"msg_contents": "On Mon, 2023-02-13 at 16:09 +0100, Mkrtchyan, Tigran wrote:\n> Typically we expect that UPDATE is a slow operation in PostgreSQL, however,\n> there are cases where it's hard to understand why. In particular, I have a table like\n> \n> ```\n> CREATE SEQUENCE t_inodes_inumber_seq\n> START WITH 1\n> INCREMENT BY 1\n> NO MINVALUE\n> NO MAXVALUE\n> CACHE 1;\n> \n> \n> CREATE TABLE t_inodes (\n> inumber bigint PRIMARY KEY,\n> icrtime timestamp with time zone NOT NULL,\n> igeneration bigint NOT NULL\n> );\n> ```\n> \n> and a transaction that inserts and update an entry in that table:\n> \n> ```\n> BEGIN;\n> INSERT INTO t_inodes (inumber, icrtime, igeneration)\n> VALUES (nextval('t_inodes_inumber_seq'), now(), 0) RETURNING inumber \\gset\n> \n> UPDATE t_inodes SET igeneration = igeneration + 1 where inumber = :inumber;\n> END;\n> ```\n> \n> The pgbench shows the following result:\n> \n> ```\n> $ pgbench -h localhost -n -r -f update.sql -t 10000 -c 64 -j 64 testdb\n> pgbench (15.0 (Debian 15.0-1.pgdg110+1))\n> transaction type: update.sql\n> scaling factor: 1\n> query mode: simple\n> number of clients: 64\n> number of threads: 64\n> maximum number of tries: 1\n> number of transactions per client: 10000\n> number of transactions actually processed: 640000/640000\n> number of failed transactions: 0 (0.000%)\n> latency average = 11.559 ms\n> initial connection time = 86.038 ms\n> tps = 5536.736898 (without initial connection time)\n> statement latencies in milliseconds and failures:\n> 0.524 0 BEGIN;\n> 0.819 0 INSERT INTO t_inodes (inumber, icrtime, igeneration)\n> 0.962 0 UPDATE t_inodes SET igeneration = igeneration + 1 where inumber = :inumber;\n> 9.203 0 END;\n> ```\n> \n> My naive expectation will be that updating the newly inserted record should cost nothing... 
Are there ways\n> to make it less expensive?\n\nUpdating a newly inserted row is about as expensive as inserting the row in the first place.\n\nYou can reduce the overall impact somewhat by creating the table with a \"fillfactor\" below\n100, in your case 90 would probably be enough. That won't speed up the UPDATE itself, but\nit should greatly reduce the need for VACUUM.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Mon, 13 Feb 2023 18:47:51 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Performance of UPDATE operation"
},
{
"msg_contents": "Maybe reconsider your expectation.\nNote: Every “update” have to “select” before modifying data.\nEven if the page is in memory, there still work…reading ,acquiring lock, modifying and request to write to disk.\n\n\nRegards,\nTobi\n\n> On 13 Feb 2023, at 18:48, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> \n> On Mon, 2023-02-13 at 16:09 +0100, Mkrtchyan, Tigran wrote:\n>> Typically we expect that UPDATE is a slow operation in PostgreSQL, however,\n>> there are cases where it's hard to understand why. In particular, I have a table like\n>> \n>> ```\n>> CREATE SEQUENCE t_inodes_inumber_seq\n>> START WITH 1\n>> INCREMENT BY 1\n>> NO MINVALUE\n>> NO MAXVALUE\n>> CACHE 1;\n>> \n>> \n>> CREATE TABLE t_inodes (\n>> inumber bigint PRIMARY KEY,\n>> icrtime timestamp with time zone NOT NULL,\n>> igeneration bigint NOT NULL\n>> );\n>> ```\n>> \n>> and a transaction that inserts and update an entry in that table:\n>> \n>> ```\n>> BEGIN;\n>> INSERT INTO t_inodes (inumber, icrtime, igeneration)\n>> VALUES (nextval('t_inodes_inumber_seq'), now(), 0) RETURNING inumber \\gset\n>> \n>> UPDATE t_inodes SET igeneration = igeneration + 1 where inumber = :inumber;\n>> END;\n>> ```\n>> \n>> The pgbench shows the following result:\n>> \n>> ```\n>> $ pgbench -h localhost -n -r -f update.sql -t 10000 -c 64 -j 64 testdb\n>> pgbench (15.0 (Debian 15.0-1.pgdg110+1))\n>> transaction type: update.sql\n>> scaling factor: 1\n>> query mode: simple\n>> number of clients: 64\n>> number of threads: 64\n>> maximum number of tries: 1\n>> number of transactions per client: 10000\n>> number of transactions actually processed: 640000/640000\n>> number of failed transactions: 0 (0.000%)\n>> latency average = 11.559 ms\n>> initial connection time = 86.038 ms\n>> tps = 5536.736898 (without initial connection time)\n>> statement latencies in milliseconds and failures:\n>> 0.524 0 BEGIN;\n>> 0.819 0 INSERT INTO t_inodes (inumber, icrtime, igeneration)\n>> 0.962 0 UPDATE t_inodes SET 
igeneration = igeneration + 1 where inumber = :inumber;\n>> 9.203 0 END;\n>> ```\n>> \n>> My naive expectation will be that updating the newly inserted record should cost nothing... Are there ways\n>> to make it less expensive?\n> \n> Updating a newly inserted row is about as expensive as inserting the row in the first place.\n> \n> You can reduce the overall impact somewhat by creating the table with a \"fillfactor\" below\n> 100, in your case 90 would probably be enough. That won't speed up the UPDATE itself, but\n> it should greatly reduce the need for VACUUM.\n> \n> Yours,\n> Laurenz Albe\n> \n> \n\n\n\n",
"msg_date": "Mon, 13 Feb 2023 21:52:31 +0100",
"msg_from": "Oluwatobi Ogunsola <tobfis@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Performance of UPDATE operation"
},
{
    "msg_contents": "On Mon, Feb 13, 2023 at 10:09 AM Mkrtchyan, Tigran <tigran.mkrtchyan@desy.de>\nwrote:\n\n>\n>          0.524           0  BEGIN;\n>          0.819           0  INSERT INTO t_inodes (inumber, icrtime,\n> igeneration)\n>          0.962           0  UPDATE t_inodes SET igeneration = igeneration\n> + 1 where inumber = :inumber;\n>          9.203           0  END;\n> ```\n>\n> My naive expectation will be that updating the newly inserted record\n> should cost nothing\n\n\nIt takes less than 1/10 of the total time.  That is pretty close to\nnothing.  Why would you expect it to be truly free?\n\n\n> ... Are there ways\n> to make it less expensive?\n>\n\nObviously here you could just insert the correct value in the first place\nand not do the update at all.\n\nCheers,\n\nJeff",
"msg_date": "Mon, 13 Feb 2023 16:49:15 -0500",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Performance of UPDATE operation"
}
] |
[
{
    "msg_contents": "Hi,\nWe are getting this error when transferring data using COPY command or\nrunning workflow for huge data. We are using Password Authentication(LDAP)\n\n\"Connection forcibly closed remote server\"\n\nCan someone help how we can avoid this?\n\n\nRegards,\nAditya.",
"msg_date": "Wed, 15 Feb 2023 17:43:27 +0530",
"msg_from": "aditya desai <admad123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Connection forcibly closed remote server error."
},
{
    "msg_contents": "Forgot to mention. The error comes intermittently. It is not consistent.\nObservation is that it comes when a connection has to process a larger data\nset.\n\nOn Wed, Feb 15, 2023 at 5:43 PM aditya desai <admad123@gmail.com> wrote:\n\n> Hi,\n> We are getting this error when transferring data using COPY command or\n> running workflow for huge data. We are using Password Authentication(LDAP)\n>\n> \"Connection forcibly closed remote server\"\n>\n> Can someone help how we can avoid this?\n>\n>\n> Regards,\n> Aditya.\n>",
"msg_date": "Wed, 15 Feb 2023 18:05:47 +0530",
"msg_from": "aditya desai <admad123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Connection forcibly closed remote server error."
},
{
"msg_contents": "aditya desai <admad123@gmail.com> writes:\n> We are getting this error when transferring data using COPY command or\n> running workflow for huge data. We are using Password Authentication(LDAP)\n> \"Connection forcibly closed remote server\"\n\nNetwork timeout perhaps? If so, setting more-aggressive TCP keepalive\nparameters might help.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 15 Feb 2023 09:58:01 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Connection forcibly closed remote server error."
},
{
    "msg_contents": "On Wed, Feb 15, 2023 at 7:13 AM aditya desai <admad123@gmail.com> wrote:\n\n> Hi,\n> We are getting this error when transferring data using COPY command or\n> running workflow for huge data. We are using Password Authentication(LDAP)\n>\n> \"Connection forcibly closed remote server\"\n>\n\nAre you sure that that is the exact wording?  It doesn't sound like grammar\nthat would be used for an error message.  Or did you perhaps translate it\nto English from a localized error message?\n\nIs that error reported by the client, or in the server log?  Whichever end\nthat is, what does the other end say?\n\nCheers,\n\nJeff",
"msg_date": "Wed, 15 Feb 2023 13:36:48 -0500",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Connection forcibly closed remote server error."
},
{
    "msg_contents": "Hi Jeff,\nApologies. Here is how the message actually looks like.\n\ncould not receive data from client: An existing connection was forcibly\nclosed by the remote host.\n\nAll links from Google pointing towards Connection Pooling. However it has\nbeen implemented from the application side.\n\nRegards,\nAditya.\n\nOn Thu, Feb 16, 2023 at 12:07 AM Jeff Janes <jeff.janes@gmail.com> wrote:\n\n> On Wed, Feb 15, 2023 at 7:13 AM aditya desai <admad123@gmail.com> wrote:\n>\n>> Hi,\n>> We are getting this error when transferring data using COPY command or\n>> running workflow for huge data. We are using Password Authentication(LDAP)\n>>\n>> \"Connection forcibly closed remote server\"\n>>\n>\n> Are you sure that that is the exact wording? It doesn't sound like grammar\n> that would be used for an error message. Or did you perhaps translate it\n> to English from a localized error message?\n>\n> Is that error reported by the client, or in the server log? Whichever end\n> that is, what does the other end say?\n>\n> Cheers,\n>\n> Jeff\n>\n>",
"msg_date": "Thu, 16 Feb 2023 00:25:21 +0530",
"msg_from": "aditya desai <admad123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Connection forcibly closed remote server error."
},
{
    "msg_contents": "Em qua., 15 de fev. de 2023 às 15:55, aditya desai <admad123@gmail.com>\nescreveu:\n\n> Hi Jeff,\n> Apologies. Here is how the message actually looks like.\n>\n> could not receive data from client: An existing connection was forcibly\n> closed by the remote host.\n>\nIt looks like a Microsoft tool message.\nPerhaps .NET\nhttps://learn.microsoft.com/en-us/troubleshoot/sql/database-engine/connect/tls-exist-connection-closed\n\nI think that Postgres has nothing to do with the problem.\n\nregards,\nRanier Vilela\n\n>",
"msg_date": "Wed, 15 Feb 2023 16:02:01 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Connection forcibly closed remote server error."
}
] |
[
{
"msg_contents": "Hello everyone,\n\nI'm playing around with BRIN indexes so as to get a feel for the feature.\nDuring my tests, I was unable to make BRIN indexes perform better than a\nsequential scan for queries searching for large value sets (20K values in\nthe example down below).\n\nCreating the table with one single BIGINT required column:\n\n> CREATE TABLE\n> test_brin\n> (\n> idx BIGINT NOT NULL\n> )\n> WITH\n> (\n> fillfactor = 100\n> )\n> ;\n\n\nFilling the table with 20 million sorted random BIGINT values:\n\n> INSERT INTO\n> test_brin\n> (\n> idx\n> )\n> SELECT\n> CAST(FLOOR(RANDOM() * 9223372036854775806) AS BIGINT)\n> AS idx\n> FROM\n> GENERATE_SERIES(1, 20 * 1000 * 1000, 1)\n> AS g\n> ORDER BY\n> idx ASC\n> ;\n\n\nNow we cluster the table (even though this shouldn't be needed):\n\n> CREATE UNIQUE INDEX test_brin_idx_uniq ON test_brin (idx);\n> CLUSTER test_brin USING test_brin_idx_uniq;\n> DROP INDEX test_brin_idx_uniq;\n\n\n\nNow we create the BRIN index on what should be a perfectly ordered and\ndense table:\n\n> CREATE INDEX\n> test_brin_idx\n> ON\n> test_brin\n> USING\n> BRIN\n> (\n> idx\n> )\n> ;\n\n\nLet's VACUUM the table just to be safe:\n\n> VACUUM test_brin;\n> VACUUM ANALYSE test_brin;\n\n\nAnd now let's query 20K random rows from our 20M total rows:\n\n> EXPLAIN (\n> ANALYZE,\n> VERBOSE,\n> COSTS,\n> BUFFERS,\n> TIMING\n> )\n> SELECT\n> tb.idx\n> FROM\n> test_brin\n> AS tb\n> WHERE\n> EXISTS (\n> SELECT\n> FROM\n> (\n> SELECT\n> idx\n> FROM\n> (\n> SELECT\n> -- Just trying to break the optimisation that would\n> recognize \"idx\" as being an indexed column\n> (idx + (CEIL(RANDOM()) - 1))::BIGINT\n> AS idx\n> FROM\n> test_brin\n> ORDER BY\n> RANDOM()\n> LIMIT\n> 20000\n> )\n> AS t\n> ORDER BY\n> idx ASC\n> )\n> AS q\n> WHERE\n> tb.idx = q.idx\n> )\n> ;\n\n\nBy default, this query will not use the BRIN index and simply run a 1.5s\nlong sequential scan (hitting 700 MB) and a 2.47s hash join for a total\n8.7s query 
time:\n\nhttps://explain.dalibo.com/plan/46c3191g8a6c1bc7\n\nIf we force the use of the BRIN index using (`SET LOCAL enable_seqscan =\nOFF;`) the same query will now take 50s with 2.5s spent on the bitmap index\nscan (hitting 470 MB of data) and a whopping 42s on the bitmap heap scan\n(hitting 20 GB of data!):\n\nhttps://explain.dalibo.com/plan/7f73bg9172a8b226\n\nSince the bitmap heap scan is taking a long time, lets reduce the\n`pages_per_range` parameter from its 128 default value to 32:\n\n> CREATE INDEX\n> test_brin_idx\n> ON\n> test_brin\n> USING\n> BRIN\n> (\n> idx\n> )\n> WITH\n> (\n> pages_per_range = 32\n> )\n> ;\n\n\nThe query now takes 25s, half the time we had before, with 9.7s (up from\n2.5s) spent on the bitmap index scan (hitting 2.6GB of data, up from 470\nMB) and 10s (down from 42s) on the bitmap heap scan (hitting 4.9GB of data,\ndown from 20 GB):\n\nhttps://explain.dalibo.com/plan/64fh5h1daaheeab3\n\nWe go a step further and lower the `pages_per_range` parameter to 8 (the\nother extreme).\n\nThe query now takes 45s, close-ish to the initial time, with 38.5s (up from\n2.5s) spent on the bitmap index scan (hitting 9.8GB of data, up from 470\nMB) and 2.6s (down from 42s) on the bitmap heap scan (hitting 1.2GB of\ndata, down from 20 GB):\n\nhttps://explain.dalibo.com/plan/431fbde7gb19g6g6\n\nSo I had the following two questions:\n\n 1. Why is the BRIN index systematically worse than a sequential scan,\n even when the table is x1000 larger than the search set, physically\n pre-sorted, dense (fillfactor at 100%) and the search rows are themselves\n sorted?\n This setup would seem to be the ideal conditions for a BRIN index to run\n well.\n\n 2. 
Since we only select the \"idx\" column, why does the BRIN index not\n   simply return the searched value if included in one of it's ranges?\n   Hitting the actual row data stored in the table seems to be unnessary no?\n\nHere's my test environnement:\n\n   - PostgreSQL version: v14\n   - Memory: 32GB\n   - CPUs: 8\n\nThanks a lot in advance,\n\nMickael",
"msg_date": "Fri, 24 Feb 2023 17:40:55 +0100",
"msg_from": "Mickael van der Beek <mickael.van.der.beek@gmail.com>",
"msg_from_op": true,
"msg_subject": "BRIN index worse than sequential scan for large search set"
},
{
"msg_contents": "On Fri, Feb 24, 2023 at 05:40:55PM +0100, Mickael van der Beek wrote:\n> Hello everyone,\n> \n> I'm playing around with BRIN indexes so as to get a feel for the feature.\n> During my tests, I was unable to make BRIN indexes perform better than a\n> sequential scan for queries searching for large value sets (20K values in\n> the example down below).\n\n> And now let's query 20K random rows from our 20M total rows:\n\nI didn't try your test, but I think *random* is the problem/explanation.\n\n> By default, this query will not use the BRIN index and simply run a 1.5s\n> long sequential scan (hitting 700 MB) and a 2.47s hash join for a total\n> 8.7s query time:\n> https://explain.dalibo.com/plan/46c3191g8a6c1bc7\n\n> If we force the use of the BRIN index using (`SET LOCAL enable_seqscan =\n> OFF;`) the same query will now take 50s with 2.5s spent on the bitmap index\n> scan (hitting 470 MB of data) and a whopping 42s on the bitmap heap scan\n> (hitting 20 GB of data!):\n> https://explain.dalibo.com/plan/7f73bg9172a8b226\n\nThat means the planner's cost model correctly preferred a seq scan.\n\n> So I had the following two questions:\n> \n> 1. Why is the BRIN index systematically worse than a sequential scan,\n> even when the table is x1000 larger than the search set, physically\n> pre-sorted, dense (fillfactor at 100%) and the search rows are themselves\n> sorted?\n\nThe table may be dense, but the tuples aren't. You're asking to return\n1/1000th of the tuples, across the entire table. Suppose there are ~100\ntuples per page, and you need to read about every 10th page. It makes\nsense that it's slow to read a large amount of data nonsequentially.\nThat's why random_page_cost is several times higher than seq_page_cost.\n\nI would expect brin to win if the pages to be accessed were dense rather\nthan distributed across the whole table.\n\n> 2. 
Since we only select the \"idx\" column, why does the BRIN index not\n> simply return the searched value if included in one of it's ranges?\n> Hitting the actual row data stored in the table seems to be unnessary no?\n\nBecause it's necessary to check if the tuple is visible to the current\ntransaction. It might be from an uncommited/aborted transaction.\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 24 Feb 2023 11:19:44 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: BRIN index worse than sequential scan for large search set"
},
{
"msg_contents": "Hello Justin,\n\nThanks for the quick response!\n\nThe table may be dense, but the tuples aren't. You're asking to return\n> 1/1000th of the tuples, across the entire table. Suppose there are ~100\n> tuples per page, and you need to read about every 10th page. It makes\n> sense that it's slow to read a large amount of data nonsequentially.\n\n\nAh, of course, you're right!\nI forgot that the BRIN indexes store ranges that are not fully covered by\nthe row values and that PostgreSQL has to double-check (bitmap heap scan)\n...\nWould you thus advise to only use BRIN indexes for columns who's values are\n(1) monotonically increasing but also (2) close to each other?\n\nI guess that in my use case, something like a roaring bitmap would have\nbeen perfect but I do not believe that those exist in PostgreSQL by default.\nBtrees work well performance wise but are simply too large (> 400MB for 20M\nrows) even when the index fill factor is 100% (+/- 380 MB) for my use case\nas I need to index around 6B rows partitioned in roughly 3K buckets which\nwould result in ~120GB of Btree indexes which seems a bit much for simple\nexistence checks.\n\nBecause it's necessary to check if the tuple is visible to the current\n> transaction. 
It might be from an uncommited/aborted transaction.\n\n\nInteresting, that's something I did not consider indeed.\nI'm guessing that this is a cost brought by MVCC that you can't get around\nno matter the isolation level?\n\nMickael\n\n\nOn Fri, Feb 24, 2023 at 6:19 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Fri, Feb 24, 2023 at 05:40:55PM +0100, Mickael van der Beek wrote:\n> > Hello everyone,\n> >\n> > I'm playing around with BRIN indexes so as to get a feel for the feature.\n> > During my tests, I was unable to make BRIN indexes perform better than a\n> > sequential scan for queries searching for large value sets (20K values in\n> > the example down below).\n>\n> > And now let's query 20K random rows from our 20M total rows:\n>\n> I didn't try your test, but I think *random* is the problem/explanation.\n>\n> > By default, this query will not use the BRIN index and simply run a 1.5s\n> > long sequential scan (hitting 700 MB) and a 2.47s hash join for a total\n> > 8.7s query time:\n> > https://explain.dalibo.com/plan/46c3191g8a6c1bc7\n>\n> > If we force the use of the BRIN index using (`SET LOCAL enable_seqscan =\n> > OFF;`) the same query will now take 50s with 2.5s spent on the bitmap\n> index\n> > scan (hitting 470 MB of data) and a whopping 42s on the bitmap heap scan\n> > (hitting 20 GB of data!):\n> > https://explain.dalibo.com/plan/7f73bg9172a8b226\n>\n> That means the planner's cost model correctly preferred a seq scan.\n>\n> > So I had the following two questions:\n> >\n> > 1. Why is the BRIN index systematically worse than a sequential scan,\n> > even when the table is x1000 larger than the search set, physically\n> > pre-sorted, dense (fillfactor at 100%) and the search rows are\n> themselves\n> > sorted?\n>\n> The table may be dense, but the tuples aren't. You're asking to return\n> 1/1000th of the tuples, across the entire table. Suppose there are ~100\n> tuples per page, and you need to read about every 10th page. 
It makes\n> sense that it's slow to read a large amount of data nonsequentially.\n> That's why random_page_cost is several times higher than seq_page_cost.\n>\n> I would expect brin to win if the pages to be accessed were dense rather\n> than distributed across the whole table.\n>\n> >    2. Since we only select the \"idx\" column, why does the BRIN index not\n> >    simply return the searched value if included in one of it's ranges?\n> >    Hitting the actual row data stored in the table seems to be unnessary\n> no?\n>\n> Because it's necessary to check if the tuple is visible to the current\n> transaction.  It might be from an uncommited/aborted transaction.\n>\n> --\n> Justin\n>\n\n\n-- \nMickael van der BeekWeb developer & Security analyst\n\nmickael.van.der.beek@gmail.com",
"msg_date": "Fri, 24 Feb 2023 18:51:00 +0100",
"msg_from": "Mickael van der Beek <mickael.van.der.beek@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: BRIN index worse than sequential scan for large search set"
},
{
"msg_contents": "On Fri, Feb 24, 2023 at 06:51:00PM +0100, Mickael van der Beek wrote:\n> Hello Justin,\n> \n> Thanks for the quick response!\n> \n> > The table may be dense, but the tuples aren't. You're asking to return\n> > 1/1000th of the tuples, across the entire table. Suppose there are ~100\n> > tuples per page, and you need to read about every 10th page. It makes\n> > sense that it's slow to read a large amount of data nonsequentially.\n> \n> Ah, of course, you're right!\n> I forgot that the BRIN indexes store ranges that are not fully covered by\n> the row values and that PostgreSQL has to double-check (bitmap heap scan)\n> ...\n> Would you thus advise to only use BRIN indexes for columns who's values are\n> (1) monotonically increasing but also (2) close to each other?\n\nIt's not important whether they're \"rigidly\" monotonic (nor \"strictly\").\nWhat's important is that a query doesn't need to access a large number\nof pages.\n\nFor example, some of the BRIN indexes that I'm familiar with are created\non a column called \"start time\", but the table's data tends to be\nnaturally sorted by \"end time\" - and that's good enough. If someone\nqueries for data between 12pm and 1pm, there's surely no data for the\nfirst 12 hours of the day's table (because it hadn't happened yet) and\nthere's probably no data for the last 9+ hours of the day, either, so\nit's only got to read data for a 1-2h interval in the middle. This\nassumes that the column's data is typically correlated. If the tuples\naren't clustered/\"close to each other\" then it probably doesn't work\nwell. I haven't played with brin \"multi minmax\", though.\n\n> > > 2. Since we only select the \"idx\" column, why does the BRIN index not\n> > > simply return the searched value if included in one of it's ranges?\n> > > Hitting the actual row data stored in the table seems to be unnessary no?\n> >\n> > Because it's necessary to check if the tuple is visible to the current\n> > transaction. 
It might be from an uncommitted/aborted transaction.\n\nActually, a better explanation is that all the brin scan returns is the page,\nand not the tuples.\n\n\"BRIN indexes can satisfy queries via regular bitmap index scans, and\nwill return all tuples in all pages within each range if the summary\ninfo stored by the index is CONSISTENT with the query conditions. The\nquery executor is in charge of rechecking these tuples and discarding\nthose that do not match the query conditions — in other words, these\nindexes are LOSSY\".\n\nThe index is returning pages where matching tuples *might* be found,\nafter excluding those pages where it's certain that no tuples are found.\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 24 Feb 2023 14:37:15 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: BRIN index worse than sequential scan for large search set"
},
{
"msg_contents": "FWIW I don't think the randomness per se is the main issue. The main\nproblem is that this results in a loop of bitmap index scans, with 20k\nloops. This is made worse by the block-range nature of BRIN indexes,\nresulting in many more heap block accesses.\n\nThe table has ~80k pages, but the bitmapscan plan reads ~2559362. That\ncan easily happen if each loop matches 100 ranges (out of 700), each\nhaving 128 pages. Making the ranges smaller should help to minimize the\namount of pages read unnecessarily, but the loops are an issue.\n\nAnd the same range may be scanned again and again, if the range is\nconsistent with multiple values.\n\nInterestingly enough, these are the kind of queries / plans I thought\nabout a couple weeks ago, which made me look at SK_SEARCHARRAY\nsupport for BRIN indexes.\n\nImagine you did rewrite the query to something like:\n\n  SELECT * FROM test_brin WHERE id IN (... literals ...);\n\nThat would only scan the table once, reducing the number of heap pages it\nhas to access. The drawback is that we still have to build the bitmap,\nand without the SK_SEARCHARRAY support we just scan the index for each\nelement again. So better.\n\nWith the SK_SEARCHARRAY patch [1] this is further optimized, and for me\nthe query runs a bit faster than seqscan (not by much, and it depends on\nhow the data is cached). At least with pages_per_range=1.\n\nOf course, this won't help unless the query can be rewritten like this.\nAt least not currently, but I wonder if we could invent some sort of\n\"pushdown\" that'd derive an array of values and push it down into a\nparameterized path at once (instead of doing that for each value in a loop).\n\nregards\n\n[1] https://commitfest.postgresql.org/42/4187/\n\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 25 Feb 2023 17:18:16 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: BRIN index worse than sequential scan for large search set"
}
] |
[
{
"msg_contents": "Hi All,\nUnfortunately I am unable to share a query plan or query.\n\nI have a SQL which is getting called from a web service. At a certain point\nwhere it inserts data in the table. Process is going in a hung state.\npg_stat_activity shows wait_event_type='IPC', wait_event='MessageQueueSend'.\nIn Webservice log we see I/O error occurred message.\n\nSurprisingly when I run it from PSQL or pgadmin it runs fine.\n\nHas anyone come across this issue? Could you please help?\n\nRegards,\nAditya.\n\nHi All,Unfortunately I am unable to share a query plan or query.I have a SQL which is getting called from a web service. At a certain point where it inserts data in the table. Process is going in a hung state. pg_stat_activity shows wait_event_type='IPC', wait_event='MessageQueueSend'. In Webservice log we see I/O error occurred message.Surprisingly when I run it from PSQL or pgadmin it runs fine. Has anyone come across this issue? Could you please help?Regards,Aditya.",
"msg_date": "Thu, 2 Mar 2023 02:10:02 +0530",
"msg_from": "aditya desai <admad123@gmail.com>",
"msg_from_op": true,
"msg_subject": "INSERT statement going in IPC Wait_event"
},
{
"msg_contents": "On 2023-03-01 We 15:40, aditya desai wrote:\n> Hi All,\n> Unfortunately I am unable to share a query plan or query.\n>\n> I have a SQL which is getting called from a web service. At a certain \n> point where it inserts data in the table . Process is going in a hung \n> state. pg_stat_activity shows wait_even='IPC' , \n> wait_even_type=MessageQueueSend. In Webservice log we see I/O error \n> occurred message.\n>\n> Surprisingly when I run it from PSQL or pgadmin it runs fine.\n>\n>\n\nDoesn't this suggest that the problem is probably not with Postgres but \nwith your web service (about which you have given us no information \nwhatsoever)?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-03-01 We 15:40, aditya desai\n wrote:\n\n\n\nHi All,\nUnfortunately I am unable to share a query plan or query.\n\n\nI have a SQL which is getting called from a web service. At\n a certain point where it inserts data in the table . Process\n is going in a hung state. pg_stat_activity shows\n wait_even='IPC' , wait_even_type=MessageQueueSend. In\n Webservice log we see I/O error occurred message.\n\n\nSurprisingly when I run it from PSQL or pgadmin it runs\n fine. \n\n\n\n\n\n\n\nDoesn't this suggest that the problem is probably not with\n Postgres but with your web service (about which you have given us\n no information whatsoever)?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 1 Mar 2023 16:36:38 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: INSERT statement going in IPC Wait_event"
},
{
"msg_contents": "Hello Aditya,\n\nHow many connections do you have on your PostgreSQL cluster? And, do your\nwebserver and database service run on the same machine/VM?\n\nI would check system logs on the server on which PostgreSQL cluster run.\n\nBest regards.\nSamed YILDIRIM\n\n\nOn Wed, 1 Mar 2023 at 22:40, aditya desai <admad123@gmail.com> wrote:\n\n> Hi All,\n> Unfortunately I am unable to share a query plan or query.\n>\n> I have a SQL which is getting called from a web service. At a certain\n> point where it inserts data in the table . Process is going in a hung\n> state. pg_stat_activity shows wait_even='IPC' ,\n> wait_even_type=MessageQueueSend. In Webservice log we see I/O error\n> occurred message.\n>\n> Surprisingly when I run it from PSQL or pgadmin it runs fine.\n>\n> Has anyone come across this issue? Could you please help?\n>\n> Regards,\n> Aditya.\n>\n>\n>\n\nHello Aditya,How many connections do you have on your PostgreSQL cluster? And, do your webserver and database service run on the same machine/VM?I would check system logs on the server on which PostgreSQL cluster run.Best regards.Samed YILDIRIMOn Wed, 1 Mar 2023 at 22:40, aditya desai <admad123@gmail.com> wrote:Hi All,Unfortunately I am unable to share a query plan or query.I have a SQL which is getting called from a web service. At a certain point where it inserts data in the table . Process is going in a hung state. pg_stat_activity shows wait_even='IPC' , wait_even_type=MessageQueueSend. In Webservice log we see I/O error occurred message.Surprisingly when I run it from PSQL or pgadmin it runs fine. Has anyone come across this issue? Could you please help?Regards,Aditya.",
"msg_date": "Sat, 11 Mar 2023 12:37:20 +0200",
"msg_from": "Samed YILDIRIM <samed@reddoc.net>",
"msg_from_op": false,
"msg_subject": "Re: INSERT statement going in IPC Wait_event"
}
] |
[
{
"msg_contents": "Hi,\n\nI have a query which is taking roughly 10mins to complete and the query\nplanner is choosing a nested loop.\n\nquery and query plan with analyze,verbose,buffers\nqsEn | explain.depesz.com <https://explain.depesz.com/s/qsEn#html>\n\nDisabling the nested loop on session is allowing the query planner to\nchoose a better plan and complete it in 2mins. Stats are up to date and\nanalyze was performed a few hours ago.\n\nAny suggestions on what is causing the planner to choose a nested loop in\nplace of hash and how can we get the query to choose a better plan without\ndisabling enable_nestloop?\n\nThanks\nPraneel\n\nHi,I have a query which is taking roughly 10mins to complete and the query planner is choosing a nested loop.query and query plan with analyze,verbose,buffersqsEn | explain.depesz.comDisabling the nested loop on session is allowing the query planner to choose a better plan and complete it in 2mins. Stats are up to date and analyze was performed a few hours ago.Any suggestions on what is causing the planner to choose a nested loop in place of hash and how can we get the query to choose a better plan without disabling enable_nestloop?Thanks Praneel",
"msg_date": "Tue, 7 Mar 2023 17:44:08 +0530",
"msg_from": "Praneel Devisetty <devisettypraneel@gmail.com>",
"msg_from_op": true,
"msg_subject": "Planner choosing nested loop in place of Hashjoin"
},
{
"msg_contents": "Hi Praneel,\n\nIt is hard to propose a solution without seeing the actual query and\nknowing details of the tables. If I were you, I would try to increase\nstatistics target for the columns used in joins. Default value is 100. You\nneed to analyze those tables again after updating the statistics targets.\n\nALTER TABLE table ALTER COLUMN column SET STATISTICS 300;\n\nhttps://www.postgresql.org/docs/14/runtime-config-query.html#GUC-DEFAULT-STATISTICS-TARGET\n\nBest regards.\nSamed YILDIRIM\n\n\nOn Tue, 7 Mar 2023 at 14:14, Praneel Devisetty <devisettypraneel@gmail.com>\nwrote:\n\n> Hi,\n>\n> I have a query which is taking roughly 10mins to complete and the query\n> planner is choosing a nested loop.\n>\n> query and query plan with analyze,verbose,buffers\n> qsEn | explain.depesz.com <https://explain.depesz.com/s/qsEn#html>\n>\n> Disabling the nested loop on session is allowing the query planner to\n> choose a better plan and complete it in 2mins.Stats are up to date and\n> analyze was performed a few hours ago.\n>\n> Any suggestions on what is causing the planner to choose a nested loop in\n> place of hash and how can we get the query to choose a better plan without\n> disabling the enable_nestloopenable_nestloopenable_nestloop\n> enable_nestloopenable_nestloop?\n>\n> Thanks\n> Praneel\n>\n>\n>\n\nHi Praneel,It is hard to propose a solution without seeing the actual query and knowing details of the tables. If I were you, I would try to increase statistics target for the columns used in joins. Default value is 100. 
You need to analyze those tables again after updating the statistics targets.ALTER TABLE table ALTER COLUMN column SET STATISTICS 300;https://www.postgresql.org/docs/14/runtime-config-query.html#GUC-DEFAULT-STATISTICS-TARGETBest regards.Samed YILDIRIMOn Tue, 7 Mar 2023 at 14:14, Praneel Devisetty <devisettypraneel@gmail.com> wrote:Hi,I have a query which is taking roughly 10mins to complete and the query planner is choosing a nested loop.query and query plan with analyze,verbose,buffersqsEn | explain.depesz.comDisabling the nested loop on session is allowing the query planner to choose a better plan and complete it in 2mins.Stats are up to date and analyze was performed a few hours ago.Any suggestions on what is causing the planner to choose a nested loop in place of hash and how can we get the query to choose a better plan without disabling the enable_nestloopenable_nestloopenable_nestloop enable_nestloopenable_nestloop?Thanks Praneel",
"msg_date": "Sat, 11 Mar 2023 12:31:04 +0200",
"msg_from": "Samed YILDIRIM <samed@reddoc.net>",
"msg_from_op": false,
"msg_subject": "Re: Planner choosing nested loop in place of Hashjoin"
},
{
"msg_contents": "On Tue, Mar 7, 2023 at 7:14 AM Praneel Devisetty <devisettypraneel@gmail.com>\nwrote:\n\n> Hi,\n>\n> I have a query which is taking roughly 10mins to complete and the query\n> planner is choosing a nested loop.\n>\n> query and query plan with analyze,verbose,buffers\n> qsEn | explain.depesz.com <https://explain.depesz.com/s/qsEn#html>\n>\n>\nWhat version is this? Any chance you can share this without\nanonymization? Not knowing the actual names makes it a lot harder to\nunderstand. In particular, what is the actual function golf_romeo()? And\nfive_two()? And what is the regexp pattern that is bastardized into\n'oscar_mike'::text ?\n\n\n> Disabling the nested loop on session is allowing the query planner to\n> choose a better plan and complete it in 2mins.Stats are up to date and\n> analyze was performed a few hours ago.\n>\n\nA lot can change in a few hours, do another analyze immediately before\ngathering the execution plan. Your row estimates are dreadful, but we\ncan't really tell why with the info provided.\n\nCheers,\n\nJeff\n\nOn Tue, Mar 7, 2023 at 7:14 AM Praneel Devisetty <devisettypraneel@gmail.com> wrote:Hi,I have a query which is taking roughly 10mins to complete and the query planner is choosing a nested loop.query and query plan with analyze,verbose,buffersqsEn | explain.depesz.comWhat version is this? Any chance you can share this without anonymization? Not knowing the actual names makes it a lot harder to understand. In particular, what is the actual function golf_romeo()? And five_two()? And what is the regexp pattern that is bastardized into 'oscar_mike'::text ? Disabling the nested loop on session is allowing the query planner to choose a better plan and complete it in 2mins.Stats are up to date and analyze was performed a few hours ago.A lot can change in a few hours, do another analyze immediately before gathering the execution plan. 
Your row estimates are dreadful, but we can't really tell why with the info provided.Cheers,Jeff",
"msg_date": "Sat, 11 Mar 2023 17:13:59 -0500",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planner choosing nested loop in place of Hashjoin"
}
] |
[
{
"msg_contents": "Hello Guys\n\nRegarding a particular performance + architecture situation with postgres\n12, I have a table with 300 million rows and then I ask you, which basic\napproach like *parameters in postgres.conf*, suitable index type,\npartition type, would you suggest knowing that we have queries using\nbind with range id (primary key) + 1 or 2 columns?\n\n\nBest regards\nAndre\n\nHello Guys Regarding a particular performance + architecture situation with postgres 12, I have a table with 300 million rows and then I ask you, which basic approach like parameters in postgres.conf, suitable index type, partition type, would you suggest knowing that we have queries using bind with range id (primary key) + 1 or 2 columns? Best regardsAndre",
"msg_date": "Sat, 11 Mar 2023 11:47:46 +0000",
"msg_from": "=?UTF-8?Q?Andr=C3=A9_Rodrigues?= <db.andre@gmail.com>",
"msg_from_op": true,
"msg_subject": "Huge Tables"
},
{
"msg_contents": "300M rows isn't \"huge\", but it is starting to get to be real data.\n\nSome notes/very general rules of thumb since you asked a very general\nquestion:\n1. Consider updating the statistics on the table from the default sample\nof 100 rows to something larger - especially if you have a wide variety of\ndata. (either set on a per-table basis or set globally on your database\nwith the `default_statistics_target` parameter.\n2. Consider the `create statistics` command to see if there any other\nadditional hints you can give the planner to help figure out if columns are\nrelated.\n3. If you partition:\n a. Your queries could be _slower_ if they don't include the partition\ncriteria. So partition on something you are likely to almost always want\nto filter on anyhow. That way you can take advantage of \"partition\npruning\".\n b. One of the main advantages of partitioning is to be able to archive\nold data easily - either by moving it to other tables, dropping it, or\ndoing other things with it. Think about whether you ever intend to roll\nout old data and figure out ways partitions might make that easier.\n4. Consider tweaking `max_parallel_workers` to enable more concurrency if\nyou are running a lot of big queries on your larger table.\n a. There are a number of other `*parallel*` parameters you can study\nand tune as well.\n5. Consider bumping `work_mem` if you are running queries that are doing a\nlot of sorting and other intermediary work on the larger data sets.\n6. For a table with only 300M rows, btree is going to be fine for most use\ncases. If you have a monotonically increasing/decreasing column you may be\nable to use a BRIN index on it to save a little space and make for slightly\nmore efficient query.\n7. You may want to tweak the vacuum parameters to be able to use a little\nmore memory and more parallel processing. 
Since autovacuums are triggered\nby a percentage of change in the table, you may want to lower the\npercentage of rows that trigger the vacuums.\n\nYou'll need to get a lot more specific about the issues you are running\ninto for us to be able to provide more specific recommendations\n\n\nOn Sat, Mar 11, 2023 at 6:48 AM André Rodrigues <db.andre@gmail.com> wrote:\n\n> Hello Guys\n>\n> Regarding a particular performance + architecture situation with postgres\n> 12, I have a table with 300 millions rows and then I ask you, which basic\n> approach like *parameters in postgres.conf*, suitable index type ,\n> partitions type, would you suggest me knowing that we have Queries using\n> bind with range id ( primary Key ) + 1 or 2 columns ?\n>\n>\n> Best regards\n> Andre\n>\n>\n>\n\n300M rows isn't \"huge\", but it is starting to get to be real data.Some notes/very general rules of thumb since you asked a very general question:1. Consider updating the statistics on the table from the default sample of 100 rows to something larger - especially if you have a wide variety of data. (either set on a per-table basis or set globally on your database with the `default_statistics_target` parameter.2. Consider the `create statistics` command to see if there any other additional hints you can give the planner to help figure out if columns are related.3. If you partition: a. Your queries could be _slower_ if they don't include the partition criteria. So partition on something you are likely to almost always want to filter on anyhow. That way you can take advantage of \"partition pruning\". b. One of the main advantages of partitioning is to be able to archive old data easily - either by moving it to other tables, dropping it, or doing other things with it. Think about whether you ever intend to roll out old data and figure out ways partitions might make that easier.4. 
Consider tweaking `max_parallel_workers` to enable more concurrency if you are running a lot of big queries on your larger table. a. There are a number of other `*parallel*` parameters you can study and tune as well.5. Consider bumping `work_mem` if you are running queries that are doing a lot of sorting and other intermediary work on the larger data sets.6. For a table with only 300M rows, btree is going to be fine for most use cases. If you have a monotonically increasing/decreasing column you may be able to use a BRIN index on it to save a little space and make for slightly more efficient query.7. You may want to tweak the vacuum parameters to be able to use a little more memory and more parallel processing. Since autovacuums are triggered by a percentage of change in the table, you may want to lower the percentage of rows that trigger the vacuums.You'll need to get a lot more specific about the issues you are running into for us to be able to provide more specific recommendationsOn Sat, Mar 11, 2023 at 6:48 AM André Rodrigues <db.andre@gmail.com> wrote:Hello Guys Regarding a particular performance + architecture situation with postgres 12, I have a table with 300 millions rows and then I ask you, which basic approach like parameters in postgres.conf, suitable index type , partitions type, would you suggest me knowing that we have Queries using bind with range id ( primary Key ) + 1 or 2 columns ? Best regardsAndre",
"msg_date": "Mon, 13 Mar 2023 08:45:14 -0400",
"msg_from": "Rick Otten <rottenwindfish@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Huge Tables"
},
{
"msg_contents": "Thanks a Million for your tips !!!!!\nVery very good !\n\nEm seg., 13 de mar. de 2023 às 12:45, Rick Otten <rottenwindfish@gmail.com>\nescreveu:\n\n> 300M rows isn't \"huge\", but it is starting to get to be real data.\n>\n> Some notes/very general rules of thumb since you asked a very general\n> question:\n> 1. Consider updating the statistics on the table from the default sample\n> of 100 rows to something larger - especially if you have a wide variety of\n> data. (either set on a per-table basis or set globally on your database\n> with the `default_statistics_target` parameter.\n> 2. Consider the `create statistics` command to see if there any other\n> additional hints you can give the planner to help figure out if columns are\n> related.\n> 3. If you partition:\n> a. Your queries could be _slower_ if they don't include the partition\n> criteria. So partition on something you are likely to almost always want\n> to filter on anyhow. That way you can take advantage of \"partition\n> pruning\".\n> b. One of the main advantages of partitioning is to be able to\n> archive old data easily - either by moving it to other tables, dropping it,\n> or doing other things with it. Think about whether you ever intend to roll\n> out old data and figure out ways partitions might make that easier.\n> 4. Consider tweaking `max_parallel_workers` to enable more concurrency if\n> you are running a lot of big queries on your larger table.\n> a. There are a number of other `*parallel*` parameters you can study\n> and tune as well.\n> 5. Consider bumping `work_mem` if you are running queries that are doing\n> a lot of sorting and other intermediary work on the larger data sets.\n> 6. For a table with only 300M rows, btree is going to be fine for most\n> use cases. If you have a monotonically increasing/decreasing column you\n> may be able to use a BRIN index on it to save a little space and make for\n> slightly more efficient query.\n> 7. 
You may want to tweak the vacuum parameters to be able to use a little\n> more memory and more parallel processing. Since autovacuums are triggered\n> by a percentage of change in the table, you may want to lower the\n> percentage of rows that trigger the vacuums.\n>\n> You'll need to get a lot more specific about the issues you are running\n> into for us to be able to provide more specific recommendations\n>\n>\n> On Sat, Mar 11, 2023 at 6:48 AM André Rodrigues <db.andre@gmail.com>\n> wrote:\n>\n>> Hello Guys\n>>\n>> Regarding a particular performance + architecture situation with postgres\n>> 12, I have a table with 300 millions rows and then I ask you, which basic\n>> approach like *parameters in postgres.conf*, suitable index type ,\n>> partitions type, would you suggest me knowing that we have Queries using\n>> bind with range id ( primary Key ) + 1 or 2 columns ?\n>>\n>>\n>> Best regards\n>> Andre\n>>\n>>\n>>\n\n-- \n Atenciosamente,\n*André Rodrigues *\n\nThanks a Million for your tips !!!!!Very very good ! Em seg., 13 de mar. de 2023 às 12:45, Rick Otten <rottenwindfish@gmail.com> escreveu:300M rows isn't \"huge\", but it is starting to get to be real data.Some notes/very general rules of thumb since you asked a very general question:1. Consider updating the statistics on the table from the default sample of 100 rows to something larger - especially if you have a wide variety of data. (either set on a per-table basis or set globally on your database with the `default_statistics_target` parameter.2. Consider the `create statistics` command to see if there any other additional hints you can give the planner to help figure out if columns are related.3. If you partition: a. Your queries could be _slower_ if they don't include the partition criteria. So partition on something you are likely to almost always want to filter on anyhow. That way you can take advantage of \"partition pruning\". b. 
One of the main advantages of partitioning is to be able to archive old data easily - either by moving it to other tables, dropping it, or doing other things with it. Think about whether you ever intend to roll out old data and figure out ways partitions might make that easier.4. Consider tweaking `max_parallel_workers` to enable more concurrency if you are running a lot of big queries on your larger table. a. There are a number of other `*parallel*` parameters you can study and tune as well.5. Consider bumping `work_mem` if you are running queries that are doing a lot of sorting and other intermediary work on the larger data sets.6. For a table with only 300M rows, btree is going to be fine for most use cases. If you have a monotonically increasing/decreasing column you may be able to use a BRIN index on it to save a little space and make for slightly more efficient query.7. You may want to tweak the vacuum parameters to be able to use a little more memory and more parallel processing. Since autovacuums are triggered by a percentage of change in the table, you may want to lower the percentage of rows that trigger the vacuums.You'll need to get a lot more specific about the issues you are running into for us to be able to provide more specific recommendationsOn Sat, Mar 11, 2023 at 6:48 AM André Rodrigues <db.andre@gmail.com> wrote:Hello Guys Regarding a particular performance + architecture situation with postgres 12, I have a table with 300 millions rows and then I ask you, which basic approach like parameters in postgres.conf, suitable index type , partitions type, would you suggest me knowing that we have Queries using bind with range id ( primary Key ) + 1 or 2 columns ? Best regardsAndre \n\n\n-- Atenciosamente,\nAndré Rodrigues",
"msg_date": "Mon, 13 Mar 2023 12:51:43 +0000",
"msg_from": "=?UTF-8?Q?Andr=C3=A9_Rodrigues?= <db.andre@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Huge Tables"
}
] |
[
{
"msg_contents": "Hey folks,\nI am having issues with multicolumn partitioning. For reference I am using\nthe following link as my guide:\nhttps://www.postgresql.org/docs/devel/sql-createtable.html\n\nTo demonstrate my problem, I created a simple table called humans. I want\nto partition by the year of the human birth and then the first character of\nthe hash. So for each year I'll have year*16 partitions. (hex)\n\nCREATE TABLE humans (\n hash bytea,\n fname text,\n dob date\n )PARTITION BY RANGE (EXTRACT(YEAR FROM dob),substring(hash::text, 1,\n1));\n\nReading the documentation: \"When creating a range partition, the lower\nbound specified with FROM is an inclusive bound, whereas the upper bound\nspecified with TO is an exclusive bound\".\n\nHowever I can't insert any of the following after the first one, because it\nsays it overlaps. Do I need to do anything different when defining\nmulti-column partitions?\n\n\nThis works:\nCREATE TABLE humans_1968_0 PARTITION OF humans FOR VALUES FROM (1968, '0')\nTO (1969, '1');\n\n\nThese fail:\nCREATE TABLE humans_1968_1 PARTITION OF humans FOR VALUES FROM (1968, '1')\nTO (1969, '2');\nCREATE TABLE humans_1968_2 PARTITION OF humans FOR VALUES FROM (1968, '2')\nTO (1969, '3');\nCREATE TABLE humans_1968_3 PARTITION OF humans FOR VALUES FROM (1968, '3')\nTO (1969, '4');\nCREATE TABLE humans_1968_4 PARTITION OF humans FOR VALUES FROM (1968, '4')\nTO (1969, '5');\nCREATE TABLE humans_1968_5 PARTITION OF humans FOR VALUES FROM (1968, '5')\nTO (1969, '6');\nCREATE TABLE humans_1968_6 PARTITION OF humans FOR VALUES FROM (1968, '6')\nTO (1969, '7');\nCREATE TABLE humans_1968_7 PARTITION OF humans FOR VALUES FROM (1968, '7')\nTO (1969, '8');\nCREATE TABLE humans_1968_8 PARTITION OF humans FOR VALUES FROM (1968, '8')\nTO (1969, '9');\nCREATE TABLE humans_1968_9 PARTITION OF humans FOR VALUES FROM (1968, '9')\nTO (1969, 'a');\nCREATE TABLE humans_1968_a PARTITION OF humans FOR VALUES FROM (1968, 'a')\nTO (1969, 'b');\nCREATE TABLE 
humans_1968_b PARTITION OF humans FOR VALUES FROM (1968, 'b')\nTO (1969, 'c');\nCREATE TABLE humans_1968_c PARTITION OF humans FOR VALUES FROM (1968, 'c')\nTO (1969, 'd');\nCREATE TABLE humans_1968_d PARTITION OF humans FOR VALUES FROM (1968, 'd')\nTO (1969, 'e');\nCREATE TABLE humans_1968_e PARTITION OF humans FOR VALUES FROM (1968, 'e')\nTO (1969, 'f');\nCREATE TABLE humans_1968_f PARTITION OF humans FOR VALUES FROM (1968, 'f')\nTO (1969, 'g');\nCREATE TABLE humans_1969_0 PARTITION OF humans FOR VALUES FROM (1969, '0')\nTO (1970, '1');\nCREATE TABLE humans_1969_1 PARTITION OF humans FOR VALUES FROM (1969, '1')\nTO (1970, '2');\nCREATE TABLE humans_1969_2 PARTITION OF humans FOR VALUES FROM (1969, '2')\nTO (1970, '3');\nCREATE TABLE humans_1969_3 PARTITION OF humans FOR VALUES FROM (1969, '3')\nTO (1970, '4');\nCREATE TABLE humans_1969_4 PARTITION OF humans FOR VALUES FROM (1969, '4')\nTO (1970, '5');\nCREATE TABLE humans_1969_5 PARTITION OF humans FOR VALUES FROM (1969, '5')\nTO (1970, '6');\nCREATE TABLE humans_1969_6 PARTITION OF humans FOR VALUES FROM (1969, '6')\nTO (1970, '7');\nCREATE TABLE humans_1969_7 PARTITION OF humans FOR VALUES FROM (1969, '7')\nTO (1970, '8');\nCREATE TABLE humans_1969_8 PARTITION OF humans FOR VALUES FROM (1969, '8')\nTO (1970, '9');\nCREATE TABLE humans_1969_9 PARTITION OF humans FOR VALUES FROM (1969, '9')\nTO (1970, 'a');\nCREATE TABLE humans_1969_a PARTITION OF humans FOR VALUES FROM (1969, 'a')\nTO (1970, 'b');\nCREATE TABLE humans_1969_b PARTITION OF humans FOR VALUES FROM (1969, 'b')\nTO (1970, 'c');\nCREATE TABLE humans_1969_c PARTITION OF humans FOR VALUES FROM (1969, 'c')\nTO (1970, 'd');\nCREATE TABLE humans_1969_d PARTITION OF humans FOR VALUES FROM (1969, 'd')\nTO (1970, 'e');\nCREATE TABLE humans_1969_e PARTITION OF humans FOR VALUES FROM (1969, 'e')\nTO (1970, 'f');\nCREATE TABLE humans_1969_f PARTITION OF humans FOR VALUES FROM (1969, 'f')\nTO (1970, 'g');\n\nThank you for reviewing this problem.\n\nHey folks,I am 
having issues with multicolumn partitioning. For reference I am using the following link as my guide:https://www.postgresql.org/docs/devel/sql-createtable.htmlTo demonstrate my problem, I created a simple table called humans. I want to partition by the year of the human birth and then the first character of the hash. So for each year I'll have year*16 partitions. (hex)CREATE TABLE humans ( hash bytea, fname text, dob date )PARTITION BY RANGE (EXTRACT(YEAR FROM dob),substring(hash::text, 1, 1)); Reading the documentation: \"When creating a range partition, the lower bound specified with FROM is an inclusive bound, whereas the upper bound specified with TO is an exclusive bound\".However I can't insert any of the following after the first one, because it says it overlaps. Do I need to do anything different when defining multi-column partitions?This works:CREATE TABLE humans_1968_0 PARTITION OF humans FOR VALUES FROM (1968, '0') TO (1969, '1');These fail: CREATE TABLE humans_1968_1 PARTITION OF humans FOR VALUES FROM (1968, '1') TO (1969, '2');CREATE TABLE humans_1968_2 PARTITION OF humans FOR VALUES FROM (1968, '2') TO (1969, '3');CREATE TABLE humans_1968_3 PARTITION OF humans FOR VALUES FROM (1968, '3') TO (1969, '4');CREATE TABLE humans_1968_4 PARTITION OF humans FOR VALUES FROM (1968, '4') TO (1969, '5');CREATE TABLE humans_1968_5 PARTITION OF humans FOR VALUES FROM (1968, '5') TO (1969, '6');CREATE TABLE humans_1968_6 PARTITION OF humans FOR VALUES FROM (1968, '6') TO (1969, '7');CREATE TABLE humans_1968_7 PARTITION OF humans FOR VALUES FROM (1968, '7') TO (1969, '8');CREATE TABLE humans_1968_8 PARTITION OF humans FOR VALUES FROM (1968, '8') TO (1969, '9');CREATE TABLE humans_1968_9 PARTITION OF humans FOR VALUES FROM (1968, '9') TO (1969, 'a');CREATE TABLE humans_1968_a PARTITION OF humans FOR VALUES FROM (1968, 'a') TO (1969, 'b');CREATE TABLE humans_1968_b PARTITION OF humans FOR VALUES FROM (1968, 'b') TO (1969, 'c');CREATE TABLE humans_1968_c PARTITION OF 
humans FOR VALUES FROM (1968, 'c') TO (1969, 'd');CREATE TABLE humans_1968_d PARTITION OF humans FOR VALUES FROM (1968, 'd') TO (1969, 'e');CREATE TABLE humans_1968_e PARTITION OF humans FOR VALUES FROM (1968, 'e') TO (1969, 'f');CREATE TABLE humans_1968_f PARTITION OF humans FOR VALUES FROM (1968, 'f') TO (1969, 'g');CREATE TABLE humans_1969_0 PARTITION OF humans FOR VALUES FROM (1969, '0') TO (1970, '1');CREATE TABLE humans_1969_1 PARTITION OF humans FOR VALUES FROM (1969, '1') TO (1970, '2');CREATE TABLE humans_1969_2 PARTITION OF humans FOR VALUES FROM (1969, '2') TO (1970, '3');CREATE TABLE humans_1969_3 PARTITION OF humans FOR VALUES FROM (1969, '3') TO (1970, '4');CREATE TABLE humans_1969_4 PARTITION OF humans FOR VALUES FROM (1969, '4') TO (1970, '5');CREATE TABLE humans_1969_5 PARTITION OF humans FOR VALUES FROM (1969, '5') TO (1970, '6');CREATE TABLE humans_1969_6 PARTITION OF humans FOR VALUES FROM (1969, '6') TO (1970, '7');CREATE TABLE humans_1969_7 PARTITION OF humans FOR VALUES FROM (1969, '7') TO (1970, '8');CREATE TABLE humans_1969_8 PARTITION OF humans FOR VALUES FROM (1969, '8') TO (1970, '9');CREATE TABLE humans_1969_9 PARTITION OF humans FOR VALUES FROM (1969, '9') TO (1970, 'a');CREATE TABLE humans_1969_a PARTITION OF humans FOR VALUES FROM (1969, 'a') TO (1970, 'b');CREATE TABLE humans_1969_b PARTITION OF humans FOR VALUES FROM (1969, 'b') TO (1970, 'c');CREATE TABLE humans_1969_c PARTITION OF humans FOR VALUES FROM (1969, 'c') TO (1970, 'd');CREATE TABLE humans_1969_d PARTITION OF humans FOR VALUES FROM (1969, 'd') TO (1970, 'e');CREATE TABLE humans_1969_e PARTITION OF humans FOR VALUES FROM (1969, 'e') TO (1970, 'f');CREATE TABLE humans_1969_f PARTITION OF humans FOR VALUES FROM (1969, 'f') TO (1970, 'g');Thank you for reviewing this problem.",
"msg_date": "Sun, 12 Mar 2023 13:59:32 -0400",
"msg_from": "James Robertson <james@jsrobertson.net>",
"msg_from_op": true,
"msg_subject": "multicolumn partitioning help"
},
{
"msg_contents": "On Sun, Mar 12, 2023 at 01:59:32PM -0400, James Robertson wrote:\n> Hey folks,\n> I am having issues with multicolumn partitioning. For reference I am using\n> the following link as my guide:\n> https://www.postgresql.org/docs/devel/sql-createtable.html\n> \n> Reading the documentation: \"When creating a range partition, the lower\n> bound specified with FROM is an inclusive bound, whereas the upper bound\n> specified with TO is an exclusive bound\".\n> \n> However I can't insert any of the following after the first one, because it\n> says it overlaps. Do I need to do anything different when defining\n> multi-column partitions?\n\nThe bounds are compared like rows:\n\nWhen creating a range partition, the lower bound specified with FROM is\nan inclusive bound, whereas the upper bound specified with TO is an\nexclusive bound. That is, the values specified in the FROM list are\nvalid values of the corresponding partition key columns for this\npartition, whereas those in the TO list are not. Note that this\nstatement must be understood according to the rules of row-wise\ncomparison (Section 9.24.5). For example, given PARTITION BY RANGE\n(x,y), a partition bound FROM (1, 2) TO (3, 4) allows x=1 with any y>=2,\nx=2 with any non-null y, and x=3 with any y<4.\n\nhttps://www.postgresql.org/docs/current/functions-comparisons.html#ROW-WISE-COMPARISON\n\n> This works:\n> CREATE TABLE humans_1968_0 PARTITION OF humans FOR VALUES FROM (1968, '0')\n> TO (1969, '1');\n\nThis table is everything from 1968 (starting with '0') to 1969\n\n> These fail:\n> CREATE TABLE humans_1968_1 PARTITION OF humans FOR VALUES FROM (1968, '1')\n> TO (1969, '2');\n\nWhich is why these are overlapping.\n\n> CREATE TABLE humans_1969_1 PARTITION OF humans FOR VALUES FROM (1969, '1')\n> TO (1970, '2');\n\nThis one doesn't fail, but it \"occupies\" / subjugates all of 1969\nstarting with 1.\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 14 Mar 2023 12:54:35 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: multicolumn partitioning help"
},
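Justin's point about row-wise comparison can be illustrated outside SQL: Python tuples compare lexicographically exactly the way composite range-partition bounds do, so a minimal sketch (not Postgres code) shows why the second partition's range overlaps the first, and why keeping the same year in both bounds fixes it:

```python
# Row-wise (lexicographic) comparison: (1968, '1') sorts *inside* the range
# FROM (1968, '0') TO (1969, '1'), because 1968 < 1969 decides the
# upper-bound comparison before the second column is ever looked at.
p1 = ((1968, '0'), (1969, '1'))  # humans_1968_0: [from, to)
p2 = ((1968, '1'), (1969, '2'))  # humans_1968_1: [from, to)

def overlaps(a, b):
    # Half-open intervals [a_from, a_to) and [b_from, b_to) overlap
    # unless one ends at or before the other begins.
    return a[0] < b[1] and b[0] < a[1]

print(overlaps(p1, p2))  # True -> Postgres rejects the second partition

# Bounds that keep the year fixed, e.g. FROM (1968, '0') TO (1968, '1'),
# do not overlap the next slice FROM (1968, '1') TO (1968, '2'):
print(overlaps(((1968, '0'), (1968, '1')),
               ((1968, '1'), (1968, '2'))))  # False
```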
{
"msg_contents": "On Sun, 2023-03-12 at 13:59 -0400, James Robertson wrote:\n> I am having issues with multicolumn partitioning. For reference I am using the following link as my guide:\n> https://www.postgresql.org/docs/devel/sql-createtable.html\n> \n> To demonstrate my problem, I created a simple table called humans. I want to partition by the year\n> of the human birth and then the first character of the hash. So for each year I'll have year*16 partitions. (hex)\n> \n> CREATE TABLE humans (\n> hash bytea,\n> fname text,\n> dob date\n> )PARTITION BY RANGE (EXTRACT(YEAR FROM dob),substring(hash::text, 1, 1));\n> \n> Reading the documentation: \"When creating a range partition, the lower bound specified with\n> FROM is an inclusive bound, whereas the upper bound specified with TO is an exclusive bound\".\n> \n> However I can't insert any of the following after the first one, because it says it overlaps.\n> Do I need to do anything different when defining multi-column partitions?\n> \n> \n> This works:\n> CREATE TABLE humans_1968_0 PARTITION OF humans FOR VALUES FROM (1968, '0') TO (1969, '1');\n> \n> \n> These fail: \n> CREATE TABLE humans_1968_1 PARTITION OF humans FOR VALUES FROM (1968, '1') TO (1969, '2');\n\nJustin has explained what the problem is, let me supply a solution.\n\nI think you want subpartitioning, like\n\n CREATE TABLE humans (\n hash bytea,\n fname text,\n dob date\n ) PARTITION BY LIST (EXTRACT (YEAR FROM dob));\n\n CREATE TABLE humans_2002\n PARTITION OF humans FOR VALUES IN (2002)\n PARTITION BY HASH (hash);\n\n CREATE TABLE humans_2002_0\n PARTITION OF humans_2002 FOR VALUES WITH (MODULUS 26, REMAINDER 0);\n\n [...]\n\n CREATE TABLE humans_2002_25\n PARTITION OF humans_2002 FOR VALUES WITH (MODULUS 26, REMAINDER 25);\n\nand so on for the other years.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Tue, 14 Mar 2023 22:41:32 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: multicolumn partitioning help"
},
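Laurenz's layout needs one CREATE TABLE per year plus one per hash slice, which is tedious to type out by hand. A sketch that generates the DDL strings (plain string generation only, table and column names taken from the thread; a modulus of 16 matches the 16 hex slices the original poster wanted):

```python
def subpartition_ddl(year, modulus=16):
    """Generate DDL for one year-level LIST partition and its HASH
    subpartitions, following the layout suggested in this thread."""
    stmts = [
        f"CREATE TABLE humans_{year} PARTITION OF humans "
        f"FOR VALUES IN ({year}) PARTITION BY HASH (hash);"
    ]
    for r in range(modulus):
        stmts.append(
            f"CREATE TABLE humans_{year}_{r} PARTITION OF humans_{year} "
            f"FOR VALUES WITH (MODULUS {modulus}, REMAINDER {r});"
        )
    return stmts

ddl = subpartition_ddl(1968)
print(len(ddl))  # 17: one parent partition plus 16 hash subpartitions
```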
{
"msg_contents": "Laurenz, Justin,\nThank you both for thinking of this problem.\n\nLaurenz your solution is how I thought I would work around my (lack of)\nunderstanding of partitioning. (nested partitions).\nI was hesitant because I didn't know what sort of performance problems I\nwould create for myself.\n\nIf we have true multi-column don't we get the benefit of:\n\nTopLevelTable\n|\n|----> worker-thread 1\n|\n|----> worker-thread 2\n|\n|----> worker-thread n\n\nDoesn't that give me more performance than:\n\nTopLevelTable\n|\n|----> worker-thread 1\n........|----> sub-table 1.1\n........|----> sub-table 1.2\n........|----> sub-table 1.n\n|\n|----> worker-thread 2\n........|----> sub-table 2.1\n........|----> sub-table 2.2\n........|----> sub-table 2.n\n\nor do we get?\n\nTopLevelTable\n|\n|----> worker-thread 1 (default catch)\n........|----> worker thread 2 -> sub-table 1.1\n........|----> worker thread 3 -> sub-table 1.2\n........|----> worker thread 4 -> sub-table 1.n\n|\n|----> worker-thread 5 (default catch)\n........|----> worker thread 6 -> sub-table 2.1\n........|----> worker thread 7 -> sub-table 2.2\n........|----> worker thread 8 -> sub-table 2.n\n\n\nSummary:\n1) if we create nested partitions, do we create performance issues:\n2) if nested partitions are the solutions, what is the point of\nmulti-column partitioning?\n\n\nwish list) wouldn't it be neat if we can do mult-mode multi-column? like\nPARTITION BY RANGE (EXTRACT(YEAR FROM dob)) LIST (SUBSTRING(hash, 1, 1));\n\nOn Tue, Mar 14, 2023 at 5:41 PM Laurenz Albe <laurenz.albe@cybertec.at>\nwrote:\n\n> On Sun, 2023-03-12 at 13:59 -0400, James Robertson wrote:\n> > I am having issues with multicolumn partitioning. For reference I am\n> using the following link as my guide:\n> > https://www.postgresql.org/docs/devel/sql-createtable.html\n> >\n> > To demonstrate my problem, I created a simple table called humans. 
I\n> want to partition by the year\n> > of the human birth and then the first character of the hash. So for each\n> year I'll have year*16 partitions. (hex)\n> >\n> > CREATE TABLE humans (\n> > hash bytea,\n> > fname text,\n> > dob date\n> > )PARTITION BY RANGE (EXTRACT(YEAR FROM dob),substring(hash::text, 1,\n> 1));\n> >\n> > Reading the documentation: \"When creating a range partition, the lower\n> bound specified with\n> > FROM is an inclusive bound, whereas the upper bound specified with TO is\n> an exclusive bound\".\n> >\n> > However I can't insert any of the following after the first one, because\n> it says it overlaps.\n> > Do I need to do anything different when defining multi-column partitions?\n> >\n> >\n> > This works:\n> > CREATE TABLE humans_1968_0 PARTITION OF humans FOR VALUES FROM (1968,\n> '0') TO (1969, '1');\n> >\n> >\n> > These fail:\n> > CREATE TABLE humans_1968_1 PARTITION OF humans FOR VALUES FROM (1968,\n> '1') TO (1969, '2');\n>\n> Justin has explained what the problem is, let me supply a solution.\n>\n> I think you want subpartitioning, like\n>\n> CREATE TABLE humans (\n> hash bytea,\n> fname text,\n> dob date\n> ) PARTITION BY LIST (EXTRACT (YEAR FROM dob));\n>\n> CREATE TABLE humans_2002\n> PARTITION OF humans FOR VALUES IN (2002)\n> PARTITION BY HASH (hash);\n>\n> CREATE TABLE humans_2002_0\n> PARTITION OF humans_2002 FOR VALUES WITH (MODULUS 26, REMAINDER 0);\n>\n> [...]\n>\n> CREATE TABLE humans_2002_25\n> PARTITION OF humans_2002 FOR VALUES WITH (MODULUS 26, REMAINDER 25);\n>\n> and so on for the other years.\n>\n> Yours,\n> Laurenz Albe\n>\n\nLaurenz, Justin,Thank you both for thinking of this problem.Laurenz your solution is how I thought I would work around my (lack of) understanding of partitioning. 
(nested partitions).I was hesitant because I didn't know what sort of performance problems I would create for myself.If we have true multi-column don't we get the benefit of:TopLevelTable||----> worker-thread 1||----> worker-thread 2||----> worker-thread nDoesn't that give me more performance than:TopLevelTable||----> worker-thread 1........|----> sub-table 1.1........|----> sub-table 1.2........|----> sub-table 1.n||----> worker-thread 2........|----> sub-table 2.1........|----> sub-table 2.2........|----> sub-table 2.nor do we get?TopLevelTable||----> worker-thread 1 (default catch)........|----> worker thread 2 -> sub-table 1.1........|----> worker thread 3 -> sub-table 1.2........|----> worker thread 4 -> sub-table 1.n||----> worker-thread 5 (default catch)........|----> worker thread 6 -> sub-table 2.1........|----> worker thread 7 -> sub-table 2.2........|----> worker thread 8 -> sub-table 2.nSummary: 1) if we create nested partitions, do we create performance issues:2) if nested partitions are the solutions, what is the point of multi-column partitioning? wish list) wouldn't it be neat if we can do mult-mode multi-column? like PARTITION BY RANGE (EXTRACT(YEAR FROM dob)) LIST (SUBSTRING(hash, 1, 1));On Tue, Mar 14, 2023 at 5:41 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:On Sun, 2023-03-12 at 13:59 -0400, James Robertson wrote:\n> I am having issues with multicolumn partitioning. For reference I am using the following link as my guide:\n> https://www.postgresql.org/docs/devel/sql-createtable.html\n> \n> To demonstrate my problem, I created a simple table called humans. I want to partition by the year\n> of the human birth and then the first character of the hash. So for each year I'll have year*16 partitions. 
(hex)\n> \n> CREATE TABLE humans (\n> hash bytea,\n> fname text,\n> dob date\n> )PARTITION BY RANGE (EXTRACT(YEAR FROM dob),substring(hash::text, 1, 1));\n> \n> Reading the documentation: \"When creating a range partition, the lower bound specified with\n> FROM is an inclusive bound, whereas the upper bound specified with TO is an exclusive bound\".\n> \n> However I can't insert any of the following after the first one, because it says it overlaps.\n> Do I need to do anything different when defining multi-column partitions?\n> \n> \n> This works:\n> CREATE TABLE humans_1968_0 PARTITION OF humans FOR VALUES FROM (1968, '0') TO (1969, '1');\n> \n> \n> These fail: \n> CREATE TABLE humans_1968_1 PARTITION OF humans FOR VALUES FROM (1968, '1') TO (1969, '2');\n\nJustin has explained what the problem is, let me supply a solution.\n\nI think you want subpartitioning, like\n\n CREATE TABLE humans (\n hash bytea,\n fname text,\n dob date\n ) PARTITION BY LIST (EXTRACT (YEAR FROM dob));\n\n CREATE TABLE humans_2002\n PARTITION OF humans FOR VALUES IN (2002)\n PARTITION BY HASH (hash);\n\n CREATE TABLE humans_2002_0\n PARTITION OF humans_2002 FOR VALUES WITH (MODULUS 26, REMAINDER 0);\n\n [...]\n\n CREATE TABLE humans_2002_25\n PARTITION OF humans_2002 FOR VALUES WITH (MODULUS 26, REMAINDER 25);\n\nand so on for the other years.\n\nYours,\nLaurenz Albe",
"msg_date": "Tue, 14 Mar 2023 19:33:11 -0400",
"msg_from": "James Robertson <james@jsrobertson.net>",
"msg_from_op": true,
"msg_subject": "Re: multicolumn partitioning help"
},
{
"msg_contents": "On Tue, 2023-03-14 at 19:33 -0400, James Robertson wrote:\n> Laurenz your solution is how I thought I would work around my (lack of) understanding\n> of partitioning. (nested partitions).\n> I was hesitant because I didn't know what sort of performance problems I would create for myself.\n> \n> [...] more performance [...]\n\nIf you are thinking of subpartitioning primarily in terms of boosting performance,\nyou should know that you only get performance benefits from partitioning with\nvery special queries that effectively have to be designed together with the\npartitioning strategy. Other statements typically become somewhat slower\nthrough partitioning.\n\nSo it is really impossible to discuss performance benefits without knowing\nthe exact query. It may be best if you build a play database with realistic amounts\nof test data and use EXPLAIN and EXPLAIN (ANALYZE) to see the effects that\npartitioning has on your queries.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Wed, 15 Mar 2023 08:37:39 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: multicolumn partitioning help"
},
{
"msg_contents": "On Wed, 15 Mar 2023 at 10:41, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> I think you want subpartitioning, like\n>\n> CREATE TABLE humans (\n> hash bytea,\n> fname text,\n> dob date\n> ) PARTITION BY LIST (EXTRACT (YEAR FROM dob));\n\nThis may be perfectly fine, but it is also important to highlight that\npartitioning in this way may hinder partition pruning.\n\nIf the first level partitioned table was to be BY RANGE (dob); then\nthe partitions could be defined like FOR VALUES FROM ('2023-01-01') TO\n('2024-01-01'). For a query that had something like WHERE dob =\n'2023-03-16', then PostgreSQL could prune away all the partitions for\nthe other years. The same wouldn't occur if the table was partitioned\nby LIST (EXTRACT (YEAR FROM dob)) unless you added a AND EXTRACT (YEAR\nFROM dob) = 2023 to the query's WHERE clause.\n\nRobert, there are a few tips about partitioning in [1] that you may\nwish to review.\n\nDavid\n\n[1] https://www.postgresql.org/docs/devel/ddl-partitioning.html\n\n\n",
"msg_date": "Thu, 16 Mar 2023 09:46:28 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: multicolumn partitioning help"
},
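David's pruning caveat can be modeled in a few lines: with real range bounds on dob the planner can discard partitions by comparing the qual value against each partition's half-open [from, to) interval, whereas partitioning on EXTRACT(YEAR FROM dob) gives it nothing to compare a bare `dob = ...` qual against. A toy model of range pruning (illustrative only, not the planner's actual code):

```python
from datetime import date

# Year partitions expressed as real ranges on dob, as David suggests:
# FOR VALUES FROM ('2022-01-01') TO ('2023-01-01'), and so on.
partitions = {
    "humans_2022": (date(2022, 1, 1), date(2023, 1, 1)),
    "humans_2023": (date(2023, 1, 1), date(2024, 1, 1)),
}

def prune(partitions, dob):
    # Keep only partitions whose [from, to) range can contain
    # the value from a WHERE dob = <value> qual.
    return [name for name, (lo, hi) in partitions.items() if lo <= dob < hi]

print(prune(partitions, date(2023, 3, 16)))  # ['humans_2023']
```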
{
"msg_contents": "On Thu, 16 Mar 2023 at 00:47, James Robertson <james@jsrobertson.net> wrote:\n> or do we get?\n>\n> TopLevelTable\n> |\n> |----> worker-thread 1 (default catch)\n> ........|----> worker thread 2 -> sub-table 1.1\n> ........|----> worker thread 3 -> sub-table 1.2\n> ........|----> worker thread 4 -> sub-table 1.n\n> |\n> |----> worker-thread 5 (default catch)\n> ........|----> worker thread 6 -> sub-table 2.1\n> ........|----> worker thread 7 -> sub-table 2.2\n> ........|----> worker thread 8 -> sub-table 2.n\n\nThe planner generally flattens out the scans to each partition into a\nsingle Append or MergeAppend node. Append nodes can be parallelised.\nAssuming there's no reason that a particular partition can't support\nit, the parallel workers can be distributed to assist without\nrestriction to which partition they help with. Multiple workers can\neven help with a single partition. Workers can then move over to\nhelping with other partitions when they're done with the partition\nthey've been working on. I believe some other databases do or did at\nleast restrict parallelism to 1 worker per partition (which perhaps is\nwhy you raised this). There's no such restriction with PostgreSQL.\n\n> Summary:\n> 1) if we create nested partitions, do we create performance issues:\n\nIf you create too many partitions, it can create performance issues.\nYou should look at the partitioning best practices section of the\ndocuments for details about that. I recommend a careful read of those.\n\n> 2) if nested partitions are the solutions, what is the point of multi-column partitioning?\n\nThere are many reasons. If you have multiple levels of partitioning,\nthen the partition pruning done during planning is going to have more\nwork to do as it'll be executed once, then once again for each\npartitioned table remaining after running it for the first level.\nAlso, it seems much easier to PARTITION BY HASH(a,b) than to first do\nHASH(a) then another level to HASH(b). 
However, there may be\nadvantages to having multiple levels here as the planner would still\nbe able to prune partitions if the WHERE clause didn't contain any\nquals like \"b = <value>\". The key take away here is that they're\ndifferent, so we support both.\n\n> wish list) wouldn't it be neat if we can do mult-mode multi-column? like PARTITION BY RANGE (EXTRACT(YEAR FROM dob)) LIST (SUBSTRING(hash, 1, 1));\n\nEffectively, multi-level partitioning gives you that, It's just the\nDDL is different from how you wrote it.\n\nDavid\n\n\n",
"msg_date": "Thu, 16 Mar 2023 10:20:23 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: multicolumn partitioning help"
}
] |
[
{
"msg_contents": "1) Querying the table using the primary key and selecting one of the\ncolumns from the table which is not part of the index.\n explain (analyze, buffers) select date_created from schema_name.table1\nwhere id = 200889258190298;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using pk_id on table1 (cost=0.58..8.60 rows=1 width=11)\n(actual time=0.015..0.016 rows=1 loops=1)\n Index Cond: (id = '200889258190298'::numeric)\n Buffers: shared hit=5\n Planning Time: 0.059 ms\n Execution Time: 0.029 ms\n(5 rows)\n\n2) Querying the table using the primary key and selecting a constant so it\ndoesn't need to fetch data from the table.\n explain (analyze, buffers) select 1 from schema_name.table1 where id =\n200889258190298;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------\n Index Only Scan using pk_id on table1 (cost=0.58..8.60 rows=1 width=4)\n(actual time=0.020..0.021 rows=1 loops=1)\n Index Cond: (id = '200889258190298'::numeric)\n Heap Fetches: 1\n Buffers: shared hit=6\n Planning Time: 0.054 ms\n Execution Time: 0.029 ms\n(6 rows)\n\nI was expecting SQL (2) to report fewer shared hits compared to SQL (1) but\nseeing the opposite. As in case (1) it has to read index+tale and case 2 it\nonly has to read the index. Both queries are querying the same row and\nusing the same index.\n\nIs there any reason why, shared hit is reported higher for \"Index Only\nScan\" querying the index only?\n\nThank you\n\n1) Querying the table using the primary key and selecting one of the columns from the table which is not part of the index. 
explain (analyze, buffers) select date_created from schema_name.table1 where id = 200889258190298; QUERY PLAN----------------------------------------------------------------------------------------------------------------------------------------- Index Scan using pk_id on table1 (cost=0.58..8.60 rows=1 width=11) (actual time=0.015..0.016 rows=1 loops=1) Index Cond: (id = '200889258190298'::numeric) Buffers: shared hit=5 Planning Time: 0.059 ms Execution Time: 0.029 ms(5 rows)2) Querying the table using the primary key and selecting a constant so it doesn't need to fetch data from the table. explain (analyze, buffers) select 1 from schema_name.table1 where id = 200889258190298; QUERY PLAN--------------------------------------------------------------------------------------------------------------------------------------------- Index Only Scan using pk_id on table1 (cost=0.58..8.60 rows=1 width=4) (actual time=0.020..0.021 rows=1 loops=1) Index Cond: (id = '200889258190298'::numeric) Heap Fetches: 1 Buffers: shared hit=6 Planning Time: 0.054 ms Execution Time: 0.029 ms(6 rows)I was expecting SQL (2) to report fewer shared hits compared to SQL (1) but seeing the opposite. As in case (1) it has to read index+tale and case 2 it only has to read the index. Both queries are querying the same row and using the same index.Is there any reason why, shared hit is reported higher for \"Index Only Scan\" querying the index only?Thank you",
"msg_date": "Mon, 3 Apr 2023 23:21:11 -0700",
"msg_from": "Amin Jaffer <aminjaffer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Explain plan shows fewer shared blocks when index+table compared to\n index alone?"
},
{
"msg_contents": "Hello\n\nThis block is reading and checking the visibility map, I think. We don't have to check the visibility map during the index scan - we still need to get a tuple from the table, we can check the visibility for current transaction there. With index only scan, we need to check the visibility map: if the tuple is visible to all transactions, then we return it. Otherwise, we read the tuple from the table as in the index scan (this is your case, as indicated by \"Heap Fetches: 1\")\n\nIndex only scan does not mean that we will not read the tuple from the table. It means that we can skip reading the table if the visibility map allows it for given tuple.\n\nregards, Sergei\n\n\n",
"msg_date": "Tue, 04 Apr 2023 10:51:03 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re:Explain plan shows fewer shared blocks when index+table compared\n to index alone?"
}
] |
[
{
"msg_contents": "The test data below is from a non-virtualized (client system and database\nserver) Postgres 14 environment, with no replication, no high availability,\nand with no load balancing. This environment has older and slower disk\ndrives, and the test is driven by a single client process.\n\n\nIn this case 24% of the round trips (client to database and back) are for\ncommit processing. However, commit processing is consuming 89% of the\ntotal database time. (All times are measured from within the client.)\n\n\nIn this non-virtualized environment, on the exact same hardware, other\nRBMSs have a much lower commit-time/total-database-time ratio.\n\nIn a virtualized environment (both client system and database server) are\nrunning in separate VMs with faster disks and with possibly many other\nactive VMs this number drops to about 70% for Postgres.\n\n\nWe see similar results in Linux environments as well.\n\n\n*What is a good approach to identifying what is happening within the commit\nprocessing?*\n\n\n*Are there any known bugs in this area?*\n\n\nAny other thoughts would be greatly appreciated.\n\nThank you.\n\n\n-Tim\n\n\nLine Freq Cum.t Max.t Avg.t Rows Err. 
Statement\n\n1 2268 *301.908* 0.243 0.133 2235 0 COMMIT\n\n2 755 9.665 0.102 0.013 2326 0 INSERT INTO\nPOMQUERY_U ( col0 ) VALUES (:1)\n\n3 266 0.195 0.103 0.001 263 0 SELECT t_01.puid\nFROM PITEM t_01 WHERE ( UPPER ( t_01.pitem_id ) = UPPER( :1 ) )\n\n4 244 0.186 0.002 0.001 260 0 INSERT INTO\nPOM_TIMESTAMP (puid, ptimestamp, pdbtimestamp, pdeleted) (SELECT :1, :2,\nnow() ...\n\n[...snip...]\n\nSum: 9264 *338.200* - - 12050 -\n\nPercent Commit 24% *89%*\n\n\nMy latest run was similar, in that its total database time was 14876.691\nseconds with total commit time of 13032.575 seconds, or 88% commit time.\n\n\nPostgres Version: PostgreSQL 14.5, compiled by Visual C++ build 1914, 64-bit\nOS Name: Microsoft Windows Server 2019 Standard\nOS Version: 10.0.17763 N/A Build 17763\n\nThe test data below is from a non-virtualized (client system\nand database server) Postgres 14 environment, with no replication, no high availability,\nand with no load balancing. This environment has older and slower disk drives,\nand the test is driven by a single client process.In this case 24% of the round trips (client to database and\nback) are for commit processing. \nHowever, commit processing is consuming 89% of the total database time. (All\ntimes are measured from within the client.)In this non-virtualized environment, on the exact same\nhardware, other RBMSs have a much lower commit-time/total-database-time ratio.In a virtualized environment (both client system and\ndatabase server) are running in separate VMs with faster disks and with\npossibly many other active VMs this number drops to about 70% for Postgres. We see similar results in Linux environments as well.What is a good approach to identifying what is happening\nwithin the commit processing?Are there any known bugs in this area?Any other thoughts would be greatly appreciated.Thank you.-TimLine Freq Cum.t Max.t Avg.t Rows Err. 
Statement1 2268 301.908 0.243 0.133 2235 0 COMMIT2 755 9.665 0.102 0.013 2326 0 INSERT INTO POMQUERY_U ( col0 ) VALUES (:1)3 266 0.195 0.103 0.001 263 0 SELECT t_01.puid FROM PITEM t_01 WHERE ( UPPER ( t_01.pitem_id ) = UPPER( :1 ) )4 244 0.186 0.002 0.001 260 0 INSERT INTO POM_TIMESTAMP (puid, ptimestamp, pdbtimestamp, pdeleted) (SELECT :1, :2, now() ...[...snip...]Sum: 9264 338.200 - - 12050 - Percent Commit 24% 89% My latest run was similar, in that its total database time was 14876.691 seconds with total commit time of 13032.575 seconds, or 88% commit time.Postgres Version: PostgreSQL 14.5, compiled by Visual C++\nbuild 1914, 64-bit\nOS Name: Microsoft Windows Server 2019\nStandard\nOS Version: 10.0.17763 N/A Build 17763",
"msg_date": "Tue, 4 Apr 2023 09:46:15 -0500",
"msg_from": "Tim Slechta <trslechta@gmail.com>",
"msg_from_op": true,
"msg_subject": "Why are commits consuming most of the database time?"
},
{
"msg_contents": "Tim Slechta <trslechta@gmail.com> writes:\n> The test data below is from a non-virtualized (client system and database\n> server) Postgres 14 environment, with no replication, no high availability,\n> and with no load balancing. This environment has older and slower disk\n> drives, and the test is driven by a single client process.\n\n> In this case 24% of the round trips (client to database and back) are for\n> commit processing. However, commit processing is consuming 89% of the\n> total database time. (All times are measured from within the client.)\n\nYou didn't say how big the transactions are, but if they're not writing\na lot of data apiece, this result seems totally non-surprising. The\ncommits have to push WAL log data down to disk before they can promise\nthat the transaction's results are durable, while the statements within\nthe transactions probably are not waiting for any disk writes at all.\n\nIf you don't need strict ACID compliance, you could turn off\nsynchronous_commit so that commits don't wait for WAL flush.\n(This doesn't risk the consistency of your database, but it does\nmean that a crash might lose the last few transactions that clients\nwere told got committed.)\n\nIf you do need strict ACID compliance, get a better disk subsystem.\nOr, perhaps, just a better OS ... Windows is generally not thought of\nas the best-performing platform for Postgres.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 04 Apr 2023 10:57:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Why are commits consuming most of the database time?"
},
{
"msg_contents": "Tom,\n\nThank you for your comments, they are very much appreciated.\n\nYou are correct that the transactions are typically short, likely with\ndozens of rows.\n\nDo you know of any problems or defects in this area?\n\nWould there be any usefulness to generating Postgres log files?\n\nOnce again, thanks for your help.\n\n-Tim\n\nOn Tue, Apr 4, 2023 at 10:55 AM Tim Slechta <trslechta@gmail.com> wrote:\n\n>\n>\n> On Tue, Apr 4, 2023 at 9:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n>> Tim Slechta <trslechta@gmail.com> writes:\n>> > The test data below is from a non-virtualized (client system and\n>> database\n>> > server) Postgres 14 environment, with no replication, no high\n>> availability,\n>> > and with no load balancing. This environment has older and slower disk\n>> > drives, and the test is driven by a single client process.\n>>\n>> > In this case 24% of the round trips (client to database and back) are\n>> for\n>> > commit processing. However, commit processing is consuming 89% of the\n>> > total database time. (All times are measured from within the client.)\n>>\n>> You didn't say how big the transactions are, but if they're not writing\n>> a lot of data apiece, this result seems totally non-surprising. The\n>> commits have to push WAL log data down to disk before they can promise\n>> that the transaction's results are durable, while the statements within\n>> the transactions probably are not waiting for any disk writes at all.\n>>\n>> If you don't need strict ACID compliance, you could turn off\n>> synchronous_commit so that commits don't wait for WAL flush.\n>> (This doesn't risk the consistency of your database, but it does\n>> mean that a crash might lose the last few transactions that clients\n>> were told got committed.)\n>>\n>> If you do need strict ACID compliance, get a better disk subsystem.\n>> Or, perhaps, just a better OS ... 
Windows is generally not thought of\n>> as the best-performing platform for Postgres.\n>>\n>> regards, tom lane\n>>\n>\n\nTom, \nThank you for your comments,\nthey are very much appreciated. \nYou are correct that the\ntransactions are typically short, likely with dozens of rows.\nDo you know of any problems\nor defects in this area?\nWould there be any usefulness\nto generating Postgres log files?\nOnce again, thanks for\nyour help. \n-TimOn Tue, Apr 4, 2023 at 10:55 AM Tim Slechta <trslechta@gmail.com> wrote:On Tue, Apr 4, 2023 at 9:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:Tim Slechta <trslechta@gmail.com> writes:\n> The test data below is from a non-virtualized (client system and database\n> server) Postgres 14 environment, with no replication, no high availability,\n> and with no load balancing. This environment has older and slower disk\n> drives, and the test is driven by a single client process.\n\n> In this case 24% of the round trips (client to database and back) are for\n> commit processing. However, commit processing is consuming 89% of the\n> total database time. (All times are measured from within the client.)\n\nYou didn't say how big the transactions are, but if they're not writing\na lot of data apiece, this result seems totally non-surprising. The\ncommits have to push WAL log data down to disk before they can promise\nthat the transaction's results are durable, while the statements within\nthe transactions probably are not waiting for any disk writes at all.\n\nIf you don't need strict ACID compliance, you could turn off\nsynchronous_commit so that commits don't wait for WAL flush.\n(This doesn't risk the consistency of your database, but it does\nmean that a crash might lose the last few transactions that clients\nwere told got committed.)\n\nIf you do need strict ACID compliance, get a better disk subsystem.\nOr, perhaps, just a better OS ... 
Windows is generally not thought of\nas the best-performing platform for Postgres.\n\n regards, tom lane",
"msg_date": "Tue, 4 Apr 2023 15:24:02 -0500",
"msg_from": "Tim Slechta <trslechta@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Why are commits consuming most of the database time?"
}
] |
[
{
"msg_contents": "Hello,\n\nWe have encountered an unexpected quirk with our DB and we are unsure if\nthis is expected behaviour or an issue.\n\nPG version PostgreSQL 14.3 on aarch64-unknown-linux-gnu, compiled by\naarch64-unknown-linux-gnu-gcc (GCC) 7.4.0, 64-bit\n\nschema of table in question and related indexes\n\n\nCREATE TABLE public.marketplace_sale (\n log_index integer NOT NULL,\n created_at timestamp with time zone DEFAULT now() NOT NULL,\n updated_at timestamp with time zone DEFAULT now() NOT NULL,\n block_timestamp timestamp with time zone NOT NULL,\n block bigint NOT NULL,\n contract_address character(42) NOT NULL,\n buyer_address character(42) NOT NULL,\n seller_address character(42) NOT NULL,\n transaction_hash character(66) NOT NULL,\n quantity numeric NOT NULL,\n token_id numeric NOT NULL,\n seller_amount_wei numeric,\n marketplace_fees_wei numeric DEFAULT 0,\n royalty_fees_wei numeric DEFAULT 0,\n data_source text NOT NULL,\n marketplace text,\n original_data jsonb,\n source_discriminator text,\n total_amount_wei numeric NOT NULL,\n unique_hash bytea GENERATED ALWAYS AS\n(sha512((((((((((transaction_hash)::text || (block)::text) ||\n(log_index)::text) || (contract_address)::text) || (token_id)::text) ||\n(buyer_address)::text) || (seller_address)::text) ||\n(quantity)::text))::bytea)) STORED NOT NULL,\n CONSTRAINT buyer_address_lower CHECK (((buyer_address)::text =\nlower((buyer_address)::text))),\n CONSTRAINT buyer_address_prefix CHECK\n(starts_with((buyer_address)::text, '0x'::text)),\n CONSTRAINT contract_address_lower CHECK (((contract_address)::text =\nlower((contract_address)::text))),\n CONSTRAINT contract_address_prefix CHECK\n(starts_with((contract_address)::text, '0x'::text)),\n CONSTRAINT seller_address_lower CHECK (((seller_address)::text =\nlower((seller_address)::text))),\n CONSTRAINT seller_address_prefix CHECK\n(starts_with((seller_address)::text, '0x'::text)),\n CONSTRAINT transaction_hash_lower CHECK (((transaction_hash)::text 
=\nlower((transaction_hash)::text))),\n CONSTRAINT transaction_hash_prefix CHECK\n(starts_with((transaction_hash)::text, '0x'::text))\n);\n\nALTER TABLE ONLY public.marketplace_sale\n ADD CONSTRAINT marketplace_sale_pkey PRIMARY KEY (unique_hash);\nCREATE INDEX sales_contract_blocktimestamp_idx ON public.marketplace_sale\nUSING btree (contract_address, block_timestamp);\nCREATE INDEX sales_contract_date_idx ON public.marketplace_sale USING btree\n(contract_address, token_id, block_timestamp);\n\n\nWhen running this query\n\nEXPLAIN(verbose, costs, buffers) with token_pairs(contract_address,\ntoken_id) as (\n values ('0xed5af388653567af2f388e6224dc7c4b3241c544', '1375'::numeric ),\n('0xed5af388653567af2f388e6224dc7c4b3241c544', '4'::numeric )\n)\nselect sales.* from token_pairs, LATERAL (\nselect\n contract_address, token_id,\n block_timestamp, total_amount_wei, buyer_address,\n seller_address, block, quantity, transaction_hash\n from marketplace_sale\nwhere\n(marketplace_sale.contract_address, marketplace_sale.token_id) =\n(token_pairs.contract_address, token_pairs.token_id)\norder by contract_address desc, token_id desc, block_timestamp desc\nlimit 1\n) sales;\n\nwe get the query plan\n\n\n QUERY PLAN\n\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.69..332764.78 rows=2 width=231)\n Output: marketplace_sale.contract_address, marketplace_sale.token_id,\nmarketplace_sale.block_timestamp, marketplace_sale.total_amount_wei,\nmarketplace_sale.buyer_address, marketplace_sale.seller_address,\nmarketplace_sale.block, marketplace_sale.quantity,\nmarketplace_sale.transaction_hash\n -> Values Scan on \"*VALUES*\" (cost=0.00..0.03 rows=2 width=64)\n Output: \"*VALUES*\".column1, 
\"*VALUES*\".column2\n -> Limit (cost=0.69..166382.36 rows=1 width=231)\n Output: marketplace_sale.contract_address,\nmarketplace_sale.token_id, marketplace_sale.block_timestamp,\nmarketplace_sale.total_amount_wei, marketplace_sale.buyer_address,\nmarketplace_sale.seller_address, marketplace_sale.block,\nmarketplace_sale.quantity, marketplace_sale.transaction_hash\n -> Index Scan Backward using sales_contract_date_idx on\npublic.marketplace_sale (cost=0.69..3660397.27 rows=22 width=231)\n Output: marketplace_sale.contract_address,\nmarketplace_sale.token_id, marketplace_sale.block_timestamp,\nmarketplace_sale.total_amount_wei, marketplace_sale.buyer_address,\nmarketplace_sale.seller_address, marketplace_sale.block,\nmarketplace_sale.quantity, marketplace_sale.transaction_hash\n Index Cond: (marketplace_sale.token_id = \"*VALUES*\".column2)\n Filter: ((marketplace_sale.contract_address)::text =\n\"*VALUES*\".column1)\n Query Identifier: 8815736494208428864\n Planning:\n Buffers: shared hit=4\n(13 rows)\n\nAs you can see it is unable to fully utilize the (contract_address,\ntoken_id, block_timestamp) index and can only use the token_id column as\nthe index condition.\n\nHowever if we explicitly cast the contract values in the values list to\nvarchar or character(42)\n\nLike so\nEXPLAIN(verbose, costs, buffers) with token_pairs(contract_address,\ntoken_id) as (\n values ('0xed5af388653567af2f388e6224dc7c4b3241c544'::varchar,\n'1375'::numeric ), ('0xed5af388653567af2f388e6224dc7c4b3241c544'::varchar,\n'4'::numeric )\n)\nselect sales.* from token_pairs, LATERAL (\nselect\n contract_address, token_id,\n block_timestamp, total_amount_wei, buyer_address,\n seller_address, block, quantity, transaction_hash\n from marketplace_sale\nwhere\n(marketplace_sale.contract_address, marketplace_sale.token_id) =\n(token_pairs.contract_address, token_pairs.token_id)\norder by contract_address desc, token_id desc, block_timestamp desc\nlimit 1\n) sales;\n\nIt can now use the 
index\n\n\n\n QUERY PLAN\n\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.69..17.49 rows=2 width=231)\n Output: marketplace_sale.contract_address, marketplace_sale.token_id,\nmarketplace_sale.block_timestamp, marketplace_sale.total_amount_wei,\nmarketplace_sale.buyer_address, marketplace_sale.seller_address,\nmarketplace_sale.block, marketplace_sale.quantity,\nmarketplace_sale.transaction_hash\n -> Values Scan on \"*VALUES*\" (cost=0.00..0.03 rows=2 width=64)\n Output: \"*VALUES*\".column1, \"*VALUES*\".column2\n -> Limit (cost=0.69..8.71 rows=1 width=231)\n Output: marketplace_sale.contract_address,\nmarketplace_sale.token_id, marketplace_sale.block_timestamp,\nmarketplace_sale.total_amount_wei, marketplace_sale.buyer_address,\nmarketplace_sale.seller_address, marketplace_sale.block,\nmarketplace_sale.quantity, marketplace_sale.transaction_hash\n -> Index Scan Backward using sales_contract_date_idx on\npublic.marketplace_sale (cost=0.69..8.71 rows=1 width=231)\n Output: marketplace_sale.contract_address,\nmarketplace_sale.token_id, marketplace_sale.block_timestamp,\nmarketplace_sale.total_amount_wei, marketplace_sale.buyer_address,\nmarketplace_sale.seller_address, marketplace_sale.block,\nmarketplace_sale.quantity, marketplace_sale.transaction_hash\n Index Cond: ((marketplace_sale.contract_address =\n(\"*VALUES*\".column1)::bpchar) AND (marketplace_sale.token_id =\n\"*VALUES*\".column2))\n Query Identifier: -5527103051535383406\n Planning:\n Buffers: shared hit=4\n(12 rows)\n\n\n\nWe were expecting behaviour similar to\nexplain (verbose, costs, buffers) select * from marketplace_sale where\ncontract_address = '0xed5af388653567af2f388e6224dc7c4b3241c544'\nand token_id = '1375'\norder by 
contract_address desc, token_id desc, block_timestamp desc\nlimit 1;\n\n\nQUERY PLAN\n\n\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.69..6.04 rows=1 width=1610)\n Output: log_index, created_at, updated_at, block_timestamp, block,\ncontract_address, buyer_address, seller_address, transaction_hash,\nquantity, token_id, seller_amount_wei, marketplace_fees_wei,\nroyalty_fees_wei, data_source, marketplace, original_data,\nsource_discriminator, total_amount_wei, unique_hash\n -> Index Scan Backward using sales_contract_date_idx on\npublic.marketplace_sale (cost=0.69..16.74 rows=3 width=1610)\n Output: log_index, created_at, updated_at, block_timestamp, block,\ncontract_address, buyer_address, seller_address, transaction_hash,\nquantity, token_id, seller_amount_wei, marketplace_fees_wei,\nroyalty_fees_wei, data_source, marketplace, original_data,\nsource_discriminator, total_amount_wei, unique_hash\n Index Cond: ((marketplace_sale.contract_address =\n'0xed5af388653567af2f388e6224dc7c4b3241c544'::bpchar) AND\n(marketplace_sale.token_id = '1375'::numeric))\n Query Identifier: -2069211501626469745\n Planning:\n Buffers: shared hit=2\n(8 rows)\n\n\nAny insight into why this happens would be greatly appreciated",
"msg_date": "Thu, 6 Apr 2023 12:23:41 +0200",
"msg_from": "ahi <ahm3d.hisham@gmail.com>",
"msg_from_op": true,
"msg_subject": "Query unable to utilize index without typecast to fixed length\n character"
},
{
"msg_contents": "ahi <ahm3d.hisham@gmail.com> writes:\n> CREATE TABLE public.marketplace_sale (\n> log_index integer NOT NULL,\n> created_at timestamp with time zone DEFAULT now() NOT NULL,\n> updated_at timestamp with time zone DEFAULT now() NOT NULL,\n> block_timestamp timestamp with time zone NOT NULL,\n> block bigint NOT NULL,\n> contract_address character(42) NOT NULL,\n> buyer_address character(42) NOT NULL,\n> seller_address character(42) NOT NULL,\n> transaction_hash character(66) NOT NULL,\n> quantity numeric NOT NULL,\n> token_id numeric NOT NULL,\n ...\n\nType character(N) is a hangover from the days of punched cards.\nDon't use it. It has weird semantics concerning trailing spaces,\nwhich are almost never the behavior you actually want, and cause\ninteroperability issues with type text. (Text is Postgres' native\nstring type, meaning that unlabeled string constants will tend to\nget resolved to that.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 06 Apr 2023 10:50:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Query unable to utilize index without typecast to fixed length\n character"
},
{
"msg_contents": "You are right we should move from character(N) to text, however the\nexplicit typecast is also required for the numeric column not just the\ncharacter one\n\nOn Thu, Apr 6, 2023 at 4:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> ahi <ahm3d.hisham@gmail.com> writes:\n> > CREATE TABLE public.marketplace_sale (\n> > log_index integer NOT NULL,\n> > created_at timestamp with time zone DEFAULT now() NOT NULL,\n> > updated_at timestamp with time zone DEFAULT now() NOT NULL,\n> > block_timestamp timestamp with time zone NOT NULL,\n> > block bigint NOT NULL,\n> > contract_address character(42) NOT NULL,\n> > buyer_address character(42) NOT NULL,\n> > seller_address character(42) NOT NULL,\n> > transaction_hash character(66) NOT NULL,\n> > quantity numeric NOT NULL,\n> > token_id numeric NOT NULL,\n> ...\n>\n> Type character(N) is a hangover from the days of punched cards.\n> Don't use it. It has weird semantics concerning trailing spaces,\n> which are almost never the behavior you actually want, and cause\n> interoperability issues with type text. 
(Text is Postgres' native\n> string type, meaning that unlabeled string constants will tend to\n> get resolved to that.)\n>\n> regards, tom lane\n>",
"msg_date": "Fri, 7 Apr 2023 09:09:06 +0200",
"msg_from": "ahi <ahm3d.hisham@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Query unable to utilize index without typecast to fixed length\n character"
},
{
"msg_contents": "Hi,\n\n \n\nYour error is the use of quotes around the constant numeric value!\n\nYou should not use it because that means then that it is a character constant causing an implicit conversion. \n\nWe must consider any implicit conversion in our queries as a potential problem and we must absolutely avoid using implicit conversions…\n\n \n\nBest regards\n\n \n\nMichel SALAIS\n\nConsultant Oracle, PostgreSQL\n\nDe : ahi <ahm3d.hisham@gmail.com> \nEnvoyé : vendredi 7 avril 2023 09:09\nÀ : Tom Lane <tgl@sss.pgh.pa.us>\nCc : pgsql-performance@lists.postgresql.org\nObjet : Re: Query unable to utilize index without typecast to fixed length character\n\n \n\nYou are right we should move from character(N) to text, however the explicit typecast is also required for the numeric column not just the character one\n\n \n\nOn Thu, Apr 6, 2023 at 4:50 PM Tom Lane <tgl@sss.pgh.pa.us <mailto:tgl@sss.pgh.pa.us> > wrote:\n\nahi <ahm3d.hisham@gmail.com <mailto:ahm3d.hisham@gmail.com> > writes:\n> CREATE TABLE public.marketplace_sale (\n> log_index integer NOT NULL,\n> created_at timestamp with time zone DEFAULT now() NOT NULL,\n> updated_at timestamp with time zone DEFAULT now() NOT NULL,\n> block_timestamp timestamp with time zone NOT NULL,\n> block bigint NOT NULL,\n> contract_address character(42) NOT NULL,\n> buyer_address character(42) NOT NULL,\n> seller_address character(42) NOT NULL,\n> transaction_hash character(66) NOT NULL,\n> quantity numeric NOT NULL,\n> token_id numeric NOT NULL,\n ...\n\nType character(N) is a hangover from the days of punched cards.\nDon't use it. It has weird semantics concerning trailing spaces,\nwhich are almost never the behavior you actually want, and cause\ninteroperability issues with type text. 
(Text is Postgres' native\nstring type, meaning that unlabeled string constants will tend to\nget resolved to that.)\n\n regards, tom lane\n",
"msg_date": "Sat, 8 Apr 2023 19:58:46 +0200",
"msg_from": "<msalais@msym.fr>",
"msg_from_op": false,
"msg_subject": "RE: Query unable to utilize index without typecast to fixed length\n character"
}
] |
[
{
"msg_contents": "Hi Listers,\nAnyone here use such a tool for Postgres? Any recommendations?\n\nSay I have 150 queries in Postgres 11 and I want to upgrade to Postgres 15.\nI want to run explain analyze for 150 in both versions for comparative\nanalysis.\n\nI am looking for the easiest way to do it with a tool :)\n\n-- \nCheers,\nKunwar\n\nHi Listers,Anyone here use such a tool for Postgres? Any recommendations?Say I have 150 queries in Postgres 11 and I want to upgrade to Postgres 15. I want to run explain analyze for 150 in both versions for comparative analysis. I am looking for the easiest way to do it with a tool :) -- Cheers,Kunwar",
"msg_date": "Fri, 7 Apr 2023 13:57:32 -0400",
"msg_from": "kunwar singh <krishsingh.111@gmail.com>",
"msg_from_op": true,
"msg_subject": "Is there any tool which will help me run and explain analyze about\n 150 queries?"
},
{
"msg_contents": "Prepend \"EXPLAIN ANALYZE \" on every statement :\n\ncat foo.sql | awk '{print \" EXPLAIN (ANALYZE, BUFFERS, TIMING, SUMMARY) \n\" $0}' | psql testdb -f -\n\nΣτις 7/4/23 20:57, ο/η kunwar singh έγραψε:\n> Hi Listers,\n> Anyone here use such a tool for Postgres? Any recommendations?\n>\n> Say I have 150 queries in Postgres 11 and I want to upgrade to \n> Postgres 15. I want to run explain analyze for 150 in both versions \n> for comparative analysis.\n>\n> I am looking for the easiest way to do it with a tool :)\n>\n> -- \n> Cheers,\n> Kunwar\n\n-- \nAchilleas Mantzios\n IT DEV - HEAD\n IT DEPT\n Dynacom Tankers Mgmt\n\n\n\n\n\n\nPrepend \"EXPLAIN ANALYZE \" on every statement :\ncat foo.sql |\n awk '{print \" EXPLAIN (ANALYZE, BUFFERS, TIMING, SUMMARY) \"\n $0}' | psql testdb -f -\n\nΣτις 7/4/23 20:57, ο/η kunwar singh\n έγραψε:\n\n\n\nHi Listers,\n Anyone here use such a tool for Postgres? Any\n recommendations?\n\n\nSay I have 150 queries in Postgres 11 and I want to upgrade\n to Postgres 15. I want to run explain analyze for 150 in both\n versions for comparative analysis. \n\n\nI am looking for the easiest way to do it with a tool :) \n\n\n\n-- \nCheers,\n Kunwar\n\n\n-- \nAchilleas Mantzios\n IT DEV - HEAD\n IT DEPT\n Dynacom Tankers Mgmt",
"msg_date": "Fri, 7 Apr 2023 21:19:36 +0300",
"msg_from": "Achilleas Mantzios <a.mantzios@cloud.gatewaynet.com>",
"msg_from_op": false,
"msg_subject": "Re: Is there any tool which will help me run and explain analyze\n about 150 queries?"
},
{
"msg_contents": "In my opinion, Datadog is the best Postgres monitor available, but it\ndoesn't have a feature that has been discussed. However, you can use\nauto_explain to analyze long-running queries by setting a limit. I recently\nenabled it in GCP Cloud SQL, but I haven't seen any results in the logs\nyet. I need to figure out which parameters to enable to get it working.\n\n\n\n\nOn Fri, Apr 7, 2023 at 10:57 AM kunwar singh <krishsingh.111@gmail.com>\nwrote:\n\n> Hi Listers,\n> Anyone here use such a tool for Postgres? Any recommendations?\n>\n> Say I have 150 queries in Postgres 11 and I want to upgrade to Postgres\n> 15. I want to run explain analyze for 150 in both versions for comparative\n> analysis.\n>\n> I am looking for the easiest way to do it with a tool :)\n>\n> --\n> Cheers,\n> Kunwar\n>\n\nIn my opinion, Datadog is the best Postgres monitor available, but it doesn't have a feature that has been discussed. However, you can use auto_explain to analyze long-running queries by setting a limit. I recently enabled it in GCP Cloud SQL, but I haven't seen any results in the logs yet. I need to figure out which parameters to enable to get it working.On Fri, Apr 7, 2023 at 10:57 AM kunwar singh <krishsingh.111@gmail.com> wrote:Hi Listers,Anyone here use such a tool for Postgres? Any recommendations?Say I have 150 queries in Postgres 11 and I want to upgrade to Postgres 15. I want to run explain analyze for 150 in both versions for comparative analysis. I am looking for the easiest way to do it with a tool :) -- Cheers,Kunwar",
"msg_date": "Fri, 7 Apr 2023 11:40:47 -0700",
"msg_from": "kyle Hailey <kylelf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Is there any tool which will help me run and explain analyze\n about 150 queries?"
},
{
"msg_contents": "Thanks Kyle. I am also trying to get it working :).\n\n@Archilleas ,Thanks for your inputs. Appreciate it. I am further\ninterested in learning if we can automate the generation/creation of foo.sql\nby including queries with bind variables defined and bind values populated\n( speaking in Oracle linguistics , pardon my Postgres ignorance, a newbie\nhere)\n\nOn Fri, Apr 7, 2023 at 2:41 PM kyle Hailey <kylelf@gmail.com> wrote:\n\n>\n> In my opinion, Datadog is the best Postgres monitor available, but it\n> doesn't have a feature that has been discussed. However, you can use\n> auto_explain to analyze long-running queries by setting a limit. I recently\n> enabled it in GCP Cloud SQL, but I haven't seen any results in the logs\n> yet. I need to figure out which parameters to enable to get it working.\n>\n>\n>\n>\n> On Fri, Apr 7, 2023 at 10:57 AM kunwar singh <krishsingh.111@gmail.com>\n> wrote:\n>\n>> Hi Listers,\n>> Anyone here use such a tool for Postgres? Any recommendations?\n>>\n>> Say I have 150 queries in Postgres 11 and I want to upgrade to Postgres\n>> 15. I want to run explain analyze for 150 in both versions for comparative\n>> analysis.\n>>\n>> I am looking for the easiest way to do it with a tool :)\n>>\n>> --\n>> Cheers,\n>> Kunwar\n>>\n>\n\n-- \nCheers,\nKunwar",
"msg_date": "Fri, 7 Apr 2023 16:48:59 -0400",
"msg_from": "kunwar singh <krishsingh.111@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Is there any tool which will help me run and explain analyze\n about 150 queries?"
},
{
"msg_contents": "kunwar singh <krishsingh.111@gmail.com> writes:\n\n> Hi Listers,\n> Anyone here use such a tool for Postgres? Any recommendations?\n>\n> Say I have 150 queries in Postgres 11 and I want to upgrade to Postgres 15. I want to run explain analyze for 150 in both versions for comparative\n> analysis. \n>\n> I am looking for the easiest way to do it with a tool :) \n\nI'd use a tool like bash for this which is very affordable :-)\n\nJust load your queries into individual files in some directory with a\n.sql suffix...\n\nfor file in \"$some_directory\"/*.sql; do\n psql <<EOF >\"$file\".explain-output 2>&1\nexplain analyze\n$(<\"$file\")\nEOF\ndone\n\n\n",
"msg_date": "Fri, 07 Apr 2023 23:40:36 -0500",
"msg_from": "Jerry Sievers <gsievers19@comcast.net>",
"msg_from_op": false,
"msg_subject": "Re: Is there any tool which will help me run and explain analyze\n about 150 queries?"
},
{
"msg_contents": "Just wrote up my experiences setting up auto_explain on Google Cloud SQL to\nget explain analyze:\n\nhttps://www.kylehailey.com/post/auto_explain-on-google-cloud-sql-gcp\n\n\n\n\nOn Fri, Apr 7, 2023 at 9:40 PM Jerry Sievers <gsievers19@comcast.net> wrote:\n\n> kunwar singh <krishsingh.111@gmail.com> writes:\n>\n> > Hi Listers,\n> > Anyone here use such a tool for Postgres? Any recommendations?\n> >\n> > Say I have 150 queries in Postgres 11 and I want to upgrade to Postgres\n> 15. I want to run explain analyze for 150 in both versions for comparative\n> > analysis.\n> >\n> > I am looking for the easiest way to do it with a tool :)\n>\n> I'd use a tool like bash for this which is very affordable :-)\n>\n> Just load your queries into individual files in some directory with a\n> .sql suffix...\n>\n> for file in $some-directory/*.sql; do\n> psql <<EOF >$file.explain-output 2>&1\n> explain analyze\n> $(<$file)\n> EOF\n> done\n>\n>\n>",
"msg_date": "Sat, 8 Apr 2023 14:44:28 -0700",
"msg_from": "kyle Hailey <kylelf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Is there any tool which will help me run and explain analyze\n about 150 queries?"
},
{
"msg_contents": "Thanks, I'll check it out!\n\nOn Sat, Apr 8, 2023 at 5:45 PM kyle Hailey <kylelf@gmail.com> wrote:\n\n>\n> Just wrote up my experiences setting up auto_explain on Google Cloud SQL\n> to get explain analyze:\n>\n> https://www.kylehailey.com/post/auto_explain-on-google-cloud-sql-gcp\n>\n>\n>\n>\n> On Fri, Apr 7, 2023 at 9:40 PM Jerry Sievers <gsievers19@comcast.net>\n> wrote:\n>\n>> kunwar singh <krishsingh.111@gmail.com> writes:\n>>\n>> > Hi Listers,\n>> > Anyone here use such a tool for Postgres? Any recommendations?\n>> >\n>> > Say I have 150 queries in Postgres 11 and I want to upgrade to Postgres\n>> 15. I want to run explain analyze for 150 in both versions for comparative\n>> > analysis.\n>> >\n>> > I am looking for the easiest way to do it with a tool :)\n>>\n>> I'd use a tool like bash for this which is very affordable :-)\n>>\n>> Just load your queries into individual files in some directory with a\n>> .sql suffix...\n>>\n>> for file in $some-directory/*.sql; do\n>> psql <<EOF >$file.explain-output 2>&1\n>> explain analyze\n>> $(<$file)\n>> EOF\n>> done\n>>\n>>\n>>\n\n-- \nCheers,\nKunwar",
"msg_date": "Sun, 9 Apr 2023 18:15:33 -0400",
"msg_from": "kunwar singh <krishsingh.111@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Is there any tool which will help me run and explain analyze\n about 150 queries?"
},
{
"msg_contents": "> > Anyone here use such a tool for Postgres? Any recommendations?\n>>> >\n>>> > Say I have 150 queries in Postgres 11 and I want to upgrade to\n>>> Postgres 15. I want to run explain analyze for 150 in both versions for\n>>> comparative\n>>> > analysis.\n>>> >\n>>> > I am looking for the easiest way to do it with a tool :)\n>>>\n>>\nI hadn't considered using it for this, and it's not free, but our pgMustard\nscoring API[1] might make your comparative analysis easier.\n\nIf you send plans to it (in JSON format), you'll get back summary info like\ntotal time and total buffers, as well as scored tips. You can then compare\nthe summary info, tips, and scores to find those with the biggest\ndifferences, and some starting points for what the issue(s) may be.\n\n[1]: https://www.pgmustard.com/docs/scoring-api",
"msg_date": "Wed, 12 Apr 2023 10:17:30 +0100",
"msg_from": "Michael Christofides <michael@pgmustard.com>",
"msg_from_op": false,
"msg_subject": "Re: Is there any tool which will help me run and explain analyze\n about 150 queries?"
}
] |
[
{
"msg_contents": "Hello\n\nI have just started a new challenge as a Postgres DBA and I am in\ncharge of enhancing our Postgres monitoring (dashboards and alarms)\nusing Grafana.\n\nCould you list relevant metrics in Postgres that I can apply in my\nenvironment?\n\n\nIn addition, if someone else has a Grafana template implemented, please\nshare it with me, or a link on how to do that.\n\n\nRegards\n\n*André Rodrigues *",
"msg_date": "Sat, 15 Apr 2023 12:04:12 +0100",
"msg_from": "=?UTF-8?Q?Andr=C3=A9_Rodrigues?= <db.andre@gmail.com>",
"msg_from_op": true,
"msg_subject": "Postgres Metrics"
},
{
"msg_contents": "On 4/15/23 07:04, André Rodrigues wrote:\n> Hello\n> \n> I have just started in the new challenger as dba postgres and I am in \n> charge of enhancing our postgres monitoring( dashboards and alarms ) \n> using Grafana.\n> \n> Could you list me relevant metrics in postgres that I can apply in my \n> environment ?\n> \n> In additional, If someone else has grafana template implemented, please \n> share with me or some link to do that ,\n\nSee:\nhttps://github.com/CrunchyData/pgmonitor\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Sat, 15 Apr 2023 10:53:22 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Metrics"
},
{
"msg_contents": "On 4/15/23 10:53, Joe Conway wrote:\n> On 4/15/23 07:04, André Rodrigues wrote:\n>> Hello\n>>\n>> I have just started in the new challenger as dba postgres and I am in \n>> charge of enhancing our postgres monitoring( dashboards and alarms ) \n>> using Grafana.\n>>\n>> Could you list me relevant metrics in postgres that I can apply in my \n>> environment ?\n>>\n>> In additional, If someone else has grafana template implemented, \n>> please share with me or some link to do that ,\n>\n> See:\n> https://github.com/CrunchyData/pgmonitor\n>\n\nOr https://pgexporter.github.io/\n\n\n\n\n",
"msg_date": "Mon, 24 Apr 2023 04:23:23 -0400",
"msg_from": "Jesper Pedersen <jesper.pedersen@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Metrics"
}
] |
[
{
"msg_contents": "Hi,\n\nI am currently trying to migrate an influxdb 1.7 smarthome database to\npostgresql (13.9) running on my raspberry 3.\nIt works quite well, but for the queries executed by grafana I get a\nbit higher execution times than I'd hoped for.\n\nExample:\ntable smartmeter with non-null column ts (timestamp with time zone)\nand brin index on ts, no pk to avoid a btree index.\nSensor values are stored every 5s, so for 1 month there are about 370k\nrows - and in total the table currently holds about 3M rows.\nThe query to display the values for 1 month takes ~3s, with the bitmap\nheap scan as well as aggregation taking up most of the time, with\nsorting in between.\n\nIs there anything that could be improved?\nWith influxdb I was able to view 3 and 6 months graphs, with\npostgresql it simply takes too long.\n\nI am currently running the 32-bit ARMv6 build, would it be a big\nimprovement running ARMv8/64-bit?\n\nThank you in advance, Clemens\n\nsmarthomedb=> explain analyze SELECT floor(extract(epoch from\nts)/10800)*10800 AS \"time\", AVG(stromL1) as l1, AVG(stromL2) as l2,\nAVG(stroml3) as l3 FROM smartmeter WHERE ts BETWEEN\n'2023-03-16T09:51:28.397Z' AND '2023-04-16T08:51:28.397Z' GROUP BY time order by time;\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------\nGroupAggregate (cost=117490.70..132536.10 rows=376135 width=32)\n(actual time=2061.253..2974.336 rows=236 loops=1)\n Group Key: ((floor((date_part('epoch'::text, ts) / '10800'::double\nprecision)) * '10800'::double precision))\n -> Sort (cost=117490.70..118431.04 rows=376135 width=20) (actual\ntime=2058.407..2410.467 rows=371810 loops=1)\n Sort Key: ((floor((date_part('epoch'::text, ts) /\n'10800'::double precision)) * '10800'::double precision))\n Sort Method: external merge Disk: 10960kB\n -> Bitmap Heap Scan on smartmeter (cost=112.09..74944.93\nrows=376135 
width=20) (actual time=88.336..1377.862 rows=371810\nloops=1)\n Recheck Cond: ((ts >= '2023-03-16\n10:51:28.397+01'::timestamp with time zone) AND (ts <= '2023-04-16\n10:51:28.397+02'::timestamp with time zone))\n Rows Removed by Index Recheck: 2131\n Heap Blocks: lossy=4742\n -> Bitmap Index Scan on smartmeter_ts_idx\n(cost=0.00..18.05 rows=377166 width=0) (actual time=1.376..1.377\nrows=47420 loops=1)\n Index Cond: ((ts >= '2023-03-16\n10:51:28.397+01'::timestamp with time zone) AND (ts <= '2023-04-16\n10:51:28.397+02'::timestamp with time zone))\nPlanning Time: 0.419 ms\nJIT:\n Functions: 9\n Options: Inlining false, Optimization false, Expressions true, Deforming true\n Timing: Generation 8.734 ms, Inlining 0.000 ms, Optimization 2.388\nms, Emission 83.137 ms, Total 94.259 ms\nExecution Time: 2990.772 ms\n(17 Zeilen)\n\n\n",
"msg_date": "Sun, 16 Apr 2023 19:00:33 +0200",
"msg_from": "Clemens Eisserer <linuxhippy@gmail.com>",
"msg_from_op": true,
"msg_subject": "speeding up grafana sensor-data query on raspberry pi 3"
},
{
"msg_contents": "On Sun, Apr 16, 2023 at 07:00:33PM +0200, Clemens Eisserer wrote:\n> Hi,\n> \n> I am currently trying to migrate an influxdb 1.7 smarthome database to\n> postgresql (13.9) running on my raspberry 3.\n> It works quite well, but for the queries executed by grafana I get a\n> bit highter execution times than I'd hoped for.\n\nSuggestions:\n\n - enable track_io_timing and show explain (analyze,buffers,settings)\n - or otherwise show your non-default settings;\n - show \\d of your table(s)\n - show the query plan for the 6 months query . The query plan may be\n different, or (if you can run it with \"analyze\") it may be\n illuminating to see how the query \"scales\".\n - consider trying postgres 15 (btw, v16 will have a beta release next\n month)\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 16 Apr 2023 12:15:11 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: speeding up grafana sensor-data query on raspberry pi 3"
},
{
"msg_contents": "On Mon, 17 Apr 2023 at 05:00, Clemens Eisserer <linuxhippy@gmail.com> wrote:\n> Example:\n> table smartmeter with non-null column ts (timestamp with time zone)\n> and brinc index on ts, no pk to avoid a btree index.\n> Sensor values are stored every 5s, so for 1 month there are about 370k\n> rows - and in total the table currently holds about 3M rows.\n> The query to display the values for 1 month takes ~3s, with the bitmap\n> heap scan as well as aggregation taking up most of the time, with\n> sorting in between.\n\nI know you likely don't have much RAM to spare here, but more work_mem\nmight help, even just 16MBs might be enough. This would help the Sort\nand to a lesser extent the Bitmap Heap Scan too.\n\nAlso, if you'd opted to use PostgreSQL 14 or 15, then you could have\nperformed CREATE STATISTICS on your GROUP BY clause expression and\nthen run ANALYZE. That might cause the planner to flip to a Hash\nAggregate which would eliminate the Sort before aggregation. You'd\nonly need to sort 236 rows after the Hash Aggregate for the ORDER BY.\n\nPlus, what Justin said.\n\nDavid\n\n\n",
"msg_date": "Mon, 17 Apr 2023 08:50:31 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: speeding up grafana sensor-data query on raspberry pi 3"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-16 19:00:33 +0200, Clemens Eisserer wrote:\n> I am currently trying to migrate an influxdb 1.7 smarthome database to\n> postgresql (13.9) running on my raspberry 3.\n> It works quite well, but for the queries executed by grafana I get a\n> bit highter execution times than I'd hoped for.\n>\n> Example:\n> table smartmeter with non-null column ts (timestamp with time zone)\n> and brinc index on ts, no pk to avoid a btree index.\n> Sensor values are stored every 5s, so for 1 month there are about 370k\n> rows - and in total the table currently holds about 3M rows.\n> The query to display the values for 1 month takes ~3s, with the bitmap\n> heap scan as well as aggregation taking up most of the time, with\n> sorting in between.\n> \n> Is there anything that could be improved?\n> With influxdb I was able to view 3 and 6 months graphs, with\n> postgresql it simply takes too long.\n> \n> I am currently running the 32-bit ARMv6 build, would it be a big\n> improvement running ARMv8/64-bit?\n\nYes, I suspect so. On a 64bit system most of the datatypes you're dealing with\nare going to be pass-by-value, i.e. not incur memory allocation\noverhead. 
Whereas timestamps, doubles, etc will all require allocations on a\n32bit system.\n\n\n> smarthomedb=> explain analyze SELECT floor(extract(epoch from\n> ts)/10800)*10800 AS \"time\", AVG(stromL1) as l1, AVG(stromL2) as l2,\n> AVG(stroml3) as l3 FROM smartmeter WHERE ts BETWEEN '2023-03-16\n> T09:51:28.397Z' AND '2023-04-16T08:51:28.397Z' GROUP BY time order by time;\n> \n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> GroupAggregate (cost=117490.70..132536.10 rows=376135 width=32)\n> (actual time=2061.253..2974.336 rows=236 loops=1)\n> Group Key: ((floor((date_part('epoch'::text, ts) / '10800'::double\n> precision)) * '10800'::double precision))\n> -> Sort (cost=117490.70..118431.04 rows=376135 width=20) (actual\n> time=2058.407..2410.467 rows=371810 loops=1)\n> Sort Key: ((floor((date_part('epoch'::text, ts) /\n> '10800'::double precision)) * '10800'::double precision))\n\nGiven the number of rows you're sorting on a somewhat slow platform, the\ncomplexity of the expression here might be a relevant factor. Particularly on\na 32bit system (see above), due to the memory allocations we'll end up doing.\n\n\nI don't know how much control over the query generation you have. 
Consider\nrewriting\n floor(extract(epoch from ts)/10800)*10800 AS \"time\"\nto something like\n date_bin('3h', ts, '2001-01-01 00:00')\n\n\n\n> Sort Method: external merge Disk: 10960kB\n> -> Bitmap Heap Scan on smartmeter (cost=112.09..74944.93\n> rows=376135 width=20) (actual time=88.336..1377.862 rows=371810\n> loops=1)\n\nGiven the time spent in the bitmap heap scan, it might be beneficial to\nincrease effective_io_concurrency some.\n\n\n> Recheck Cond: ((ts >= '2023-03-16\n> 10:51:28.397+01'::timestamp with time zone) AND (ts <= '2023-04-16\n> 10:51:28.397+02'::timestamp with time zone))\n> Rows Removed by Index Recheck: 2131\n> Heap Blocks: lossy=4742\n\nThe lossiness might also incur some overhead, so increasing work_mem a bit\nwill help some.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 16 Apr 2023 15:10:17 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: speeding up grafana sensor-data query on raspberry pi 3"
},
{
"msg_contents": "Is an option partitioning the table by month? If your report is month\nbased, you can improve performance by partitioning.\n\nFelipph\n\n\nEm dom., 16 de abr. de 2023 às 19:10, Andres Freund <andres@anarazel.de>\nescreveu:\n\n> Hi,\n>\n> On 2023-04-16 19:00:33 +0200, Clemens Eisserer wrote:\n> > I am currently trying to migrate an influxdb 1.7 smarthome database to\n> > postgresql (13.9) running on my raspberry 3.\n> > It works quite well, but for the queries executed by grafana I get a\n> > bit highter execution times than I'd hoped for.\n> >\n> > Example:\n> > table smartmeter with non-null column ts (timestamp with time zone)\n> > and brinc index on ts, no pk to avoid a btree index.\n> > Sensor values are stored every 5s, so for 1 month there are about 370k\n> > rows - and in total the table currently holds about 3M rows.\n> > The query to display the values for 1 month takes ~3s, with the bitmap\n> > heap scan as well as aggregation taking up most of the time, with\n> > sorting in between.\n> >\n> > Is there anything that could be improved?\n> > With influxdb I was able to view 3 and 6 months graphs, with\n> > postgresql it simply takes too long.\n> >\n> > I am currently running the 32-bit ARMv6 build, would it be a big\n> > improvement running ARMv8/64-bit?\n>\n> Yes, I suspect so. On a 64bit system most of the datatypes you're dealing\n> with\n> are going to be pass-by-value, i.e. not incur memory allocation\n> overhead. 
Whereas timestamps, doubles, etc will all require allocations on\n> a\n> 32bit system.\n>\n>\n> > smarthomedb=> explain analyze SELECT floor(extract(epoch from\n> > ts)/10800)*10800 AS \"time\", AVG(stromL1) as l1, AVG(stromL2) as l2,\n> > AVG(stroml3) as l3 FROM smartmeter WHERE ts BETWEEN '2023-03-16\n> > T09:51:28.397Z' AND '2023-04-16T08:51:28.397Z' GROUP BY time order by\n> time;\n> >\n> > QUERY PLAN\n> >\n> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> > GroupAggregate (cost=117490.70..132536.10 rows=376135 width=32)\n> > (actual time=2061.253..2974.336 rows=236 loops=1)\n> > Group Key: ((floor((date_part('epoch'::text, ts) / '10800'::double\n> > precision)) * '10800'::double precision))\n> > -> Sort (cost=117490.70..118431.04 rows=376135 width=20) (actual\n> > time=2058.407..2410.467 rows=371810 loops=1)\n> > Sort Key: ((floor((date_part('epoch'::text, ts) /\n> > '10800'::double precision)) * '10800'::double precision))\n>\n> Given the number of rows you're sorting on a somewhat slow platform, the\n> complexity of the expression here might be a relevant factor. Particularly\n> on\n> a 32bit system (see above), due to the memory allocations we'll end up\n> doing.\n>\n>\n> I don't know how much control over the query generation you have. 
Consider\n> rewriting\n> floor(extract(epoch from ts)/10800)*10800 AS \"time\"\n> to something like\n> date_bin('3h', ts, '2001-01-01 00:00')\n>\n>\n>\n> > Sort Method: external merge Disk: 10960kB\n> > -> Bitmap Heap Scan on smartmeter (cost=112.09..74944.93\n> > rows=376135 width=20) (actual time=88.336..1377.862 rows=371810\n> > loops=1)\n>\n> Given the time spent in the bitmap heap scan, it might be beneficial to\n> increase effective_io_concurrency some.\n>\n>\n> > Recheck Cond: ((ts >= '2023-03-16\n> > 10:51:28.397+01'::timestamp with time zone) AND (ts <= '2023-04-16\n> > 10:51:28.397+02'::timestamp with time zone))\n> > Rows Removed by Index Recheck: 2131\n> > Heap Blocks: lossy=4742\n>\n> The lossiness might also incur some overhead, so increasing work_mem a bit\n> will help some.\n>\n>\n> Greetings,\n>\n> Andres Freund\n>\n>\n>",
"msg_date": "Mon, 17 Apr 2023 16:06:53 -0300",
"msg_from": "Luiz Felipph <luizfelipph@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: speeding up grafana sensor-data query on raspberry pi 3"
},
{
"msg_contents": "Hi again,\n\nThanks for the suggestions.\n\n- I increased work_mem to 64M, which caused disk-based sorting to be\nreplaced with quicksort and resulted in a modest speedup. However I\nhave to admit I didn't understand why more work_mem speeds up the heap\nscan.\n- the suggestion regarding \"create statistics on group by\" is awesome,\nto get rid of sorting is probably the best that could happen to the\nquery.\n- ARMv8 instead of ARMv6 could have a positive impact\n\nI'll migrate to a containerized postgresql-version running on raspbian\nos 64-bit as I find some time to spare and report back.\n\nThanks again, Clemens\n\nAm So., 16. Apr. 2023 um 22:50 Uhr schrieb David Rowley <dgrowleyml@gmail.com>:\n>\n> On Mon, 17 Apr 2023 at 05:00, Clemens Eisserer <linuxhippy@gmail.com> wrote:\n> > Example:\n> > table smartmeter with non-null column ts (timestamp with time zone)\n> > and brinc index on ts, no pk to avoid a btree index.\n> > Sensor values are stored every 5s, so for 1 month there are about 370k\n> > rows - and in total the table currently holds about 3M rows.\n> > The query to display the values for 1 month takes ~3s, with the bitmap\n> > heap scan as well as aggregation taking up most of the time, with\n> > sorting in between.\n>\n> I know you likely don't have much RAM to spare here, but more work_mem\n> might help, even just 16MBs might be enough. This would help the Sort\n> and to a lesser extent the Bitmap Heap Scan too.\n>\n> Also, if you'd opted to use PostgreSQL 14 or 15, then you could have\n> performed CREATE STATISTICS on your GROUP BY clause expression and\n> then run ANALYZE. That might cause the planner to flip to a Hash\n> Aggregate which would eliminate the Sort before aggregation. You'd\n> only need to sort 236 rows after the Hash Aggregate for the ORDER BY.\n>\n> Plus, what Justin said.\n>\n> David\n\n\n",
"msg_date": "Tue, 18 Apr 2023 14:14:45 +0200",
"msg_from": "Clemens Eisserer <linuxhippy@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: speeding up grafana sensor-data query on raspberry pi 3"
},
{
"msg_contents": "Hi,\n\nI was too lazy to switch to an ARMv8 rootfs, instead I used an ARMv7\npostgresql 15.2 docker image running via podman to try the\nsuggestions.\nThe performance improvements offered by the new postgresql features are\nreally impressive!\n\n5318.382 ms: original query\n2372.618 ms: with date_bin\n2154.530 ms: date_bin + work_mem=64mb (quicksort instead of disk-based sorting)\n0826.196 ms: date_bin + work-mem + create-statistics\n0589.445 ms: date_bin + work-mem + create-statistics + max_workers=2\n(instead of 1)\n\nSo evaluating the complex/old expression indeed was really slow, using\ndate_bin already reduced query time to 50%.\nHash based aggregation further more than halved the remaining 2.1s -\ndown to ~825ms!\n826ms for one month, and ~8s for a whole year is actually great - as\nfar as I can remember even influxdb, which is actually optimized for\nthis kind of data, didn't perform nearly as well as postgresql -\nawesome!\n\nThanks again for all the suggestions and for such a great dbms!\n\n- Clemens\n\n\nAm Di., 18. Apr. um 14:14 Uhr schrieb Clemens Eisserer\n<linuxhippy@gmail.com>:\n>\n> Hi again,\n>\n> Thanks for the suggestions.\n>\n> - I increased work_mem to 64M, which caused disk-based sorting to be\n> replaced with quicksort and resulted in a modest speedup. However I\n> have to admit I didn't understand why more work_mem speeds up the heap\n> scan.\n> - the suggestion regarding \"create statistics on group by\" is awesome,\n> to get rid of sorting is probably the best that could happen to the\n> query.\n> - ARMv8 instead of ARMv6 could have a positive impact\n>\n> I'll migrate to a containerized postgresql-version running on raspbian\n> os 64-bit as I find some time to spare and report back.\n>\n> Thanks again, Clemens\n>\n> Am So., 16. Apr. 
2023 um 22:50 Uhr schrieb David Rowley <dgrowleyml@gmail.com>:\n> >\n> > On Mon, 17 Apr 2023 at 05:00, Clemens Eisserer <linuxhippy@gmail.com> wrote:\n> > > Example:\n> > > table smartmeter with non-null column ts (timestamp with time zone)\n> > > and brinc index on ts, no pk to avoid a btree index.\n> > > Sensor values are stored every 5s, so for 1 month there are about 370k\n> > > rows - and in total the table currently holds about 3M rows.\n> > > The query to display the values for 1 month takes ~3s, with the bitmap\n> > > heap scan as well as aggregation taking up most of the time, with\n> > > sorting in between.\n> >\n> > I know you likely don't have much RAM to spare here, but more work_mem\n> > might help, even just 16MBs might be enough. This would help the Sort\n> > and to a lesser extent the Bitmap Heap Scan too.\n> >\n> > Also, if you'd opted to use PostgreSQL 14 or 15, then you could have\n> > performed CREATE STATISTICS on your GROUP BY clause expression and\n> > then run ANALYZE. That might cause the planner to flip to a Hash\n> > Aggregate which would eliminate the Sort before aggregation. You'd\n> > only need to sort 236 rows after the Hash Aggregate for the ORDER BY.\n> >\n> > Plus, what Justin said.\n> >\n> > David\n\n\n",
"msg_date": "Thu, 20 Apr 2023 18:39:35 +0200",
"msg_from": "Clemens Eisserer <linuxhippy@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: speeding up grafana sensor-data query on raspberry pi 3"
}
] |
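The combination that produced the ~826 ms result in the thread above can be sketched as follows. The table and timestamp column (`smartmeter`, `ts`) come from the thread; the `value` column, the 5-minute bucket width, and the statistics-object name are illustrative assumptions:

```sql
-- Bucket timestamps with date_bin (PostgreSQL 14+) instead of the
-- older arithmetic on extract(epoch from ts):
SELECT date_bin('5 minutes', ts, TIMESTAMPTZ '2000-01-01') AS bucket,
       avg(value) AS avg_value
FROM smartmeter
WHERE ts >= now() - interval '1 month'
GROUP BY bucket
ORDER BY bucket;

-- Extended statistics on the GROUP BY expression (PostgreSQL 14+),
-- so the planner can estimate the group count and pick a hash
-- aggregate instead of sort + group aggregate:
CREATE STATISTICS smartmeter_bucket_stats
    ON (date_bin('5 minutes', ts, TIMESTAMPTZ '2000-01-01'))
    FROM smartmeter;
ANALYZE smartmeter;

-- More sort/hash memory, per session or in postgresql.conf:
SET work_mem = '64MB';
```

The CREATE STATISTICS step is what let the planner flip to hash aggregation in the thread; it only pays off after a fresh ANALYZE.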
[
{
"msg_contents": "Hi all\n\nThe company I work for has a large (50+ instances, 2-4 TB each) Postgres\ninstall. One of the key problems we are facing in vanilla Postgres is\nvacuum behavior on high QPS (20K writes/s), random index access on UUIDs.\n\nIn one case the table is 50Gb and has 3 indexes which are also 50Gb each.\nIt takes 20 hours to vacuum the entire thing, where bulk of the time is\nspent doing 'index vacuuming'. The table is then instantly vacuumed again.\nI increased work_mem to 2Gb, decreased sleep threshold to 2ms and increased\nthe IO limit to 2000. I also changed the autovacuum thresholds for this\ntable.\n\nI understand that doing random index writes is not a good strategy, but, 20\nhours to vacuum 200Gb is excessive.\n\nMy question is: what is the recommended strategy to deal with such cases in\nPostgres?\n\nThanks very much!!\n\nHi allThe company I work for has a large (50+ instances, 2-4 TB each) Postgres install. One of the key problems we are facing in vanilla Postgres is vacuum behavior on high QPS (20K writes/s), random index access on UUIDs.In one case the table is 50Gb and has 3 indexes which are also 50Gb each. It takes 20 hours to vacuum the entire thing, where bulk of the time is spent doing 'index vacuuming'. The table is then instantly vacuumed again.I increased work_mem to 2Gb, decreased sleep threshold to 2ms and increased the IO limit to 2000. I also changed the autovacuum thresholds for this table.I understand that doing random index writes is not a good strategy, but, 20 hours to vacuum 200Gb is excessive.My question is: what is the recommended strategy to deal with such cases in Postgres?Thanks very much!!",
"msg_date": "Mon, 17 Apr 2023 17:35:33 -0700",
"msg_from": "peter plachta <pplachta@gmail.com>",
"msg_from_op": true,
"msg_subject": "High QPS, random index writes and vacuum"
},
{
"msg_contents": "On Mon, Apr 17, 2023 at 5:35 PM peter plachta <pplachta@gmail.com> wrote:\n> My question is: what is the recommended strategy to deal with such cases in Postgres?\n\nYou didn't say what version of Postgres you're using...\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 17 Apr 2023 17:38:40 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: High QPS, random index writes and vacuum"
},
{
"msg_contents": "peter plachta <pplachta@gmail.com> writes:\n> The company I work for has a large (50+ instances, 2-4 TB each) Postgres\n> install. One of the key problems we are facing in vanilla Postgres is\n> vacuum behavior on high QPS (20K writes/s), random index access on UUIDs.\n\nIndexing on a UUID column is an antipattern, because you're pretty much\nguaranteed the worst-case random access patterns for both lookups and\ninsert/delete/maintenance cases. Can you switch to timestamps or\nthe like?\n\nThere are proposals out there for more database-friendly ways of\ngenerating UUIDs than the traditional ones, but nobody's gotten\naround to implementing that in Postgres AFAIK.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 17 Apr 2023 22:01:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: High QPS, random index writes and vacuum"
},
{
"msg_contents": "On Tue, 18 Apr 2023 at 12:35, peter plachta <pplachta@gmail.com> wrote:\n> I increased work_mem to 2Gb\n\nmaintenance_work_mem is the configuration option that vacuum uses to\ncontrol how much memory it'll make available for storage of dead\ntuples. I believe 1GB would allow 178,956,970 tuples to be stored\nbefore multiple passes would be required. The chunk of memory for dead\ntuple storage is capped at 1GB.\n\nDavid\n\n\n",
"msg_date": "Tue, 18 Apr 2023 14:40:37 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: High QPS, random index writes and vacuum"
},
{
"msg_contents": "Thank you Tom.\nVersion: I sheepishly admit it's 9.6, 10 and 11 (it's Azure Single Server,\nthat's another story).\n\nI am definitely looking at redoing the way we do UUIDs... but that's not a\ntrivial change given the volume of data we have + 24/7 workload.\n\nI was trying to understand whether there are any known workarounds for\nrandom access + index vacuums. Are my vacuum times 'normal' ?\n\nOn Mon, Apr 17, 2023 at 7:01 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> peter plachta <pplachta@gmail.com> writes:\n> > The company I work for has a large (50+ instances, 2-4 TB each) Postgres\n> > install. One of the key problems we are facing in vanilla Postgres is\n> > vacuum behavior on high QPS (20K writes/s), random index access on UUIDs.\n>\n> Indexing on a UUID column is an antipattern, because you're pretty much\n> guaranteed the worst-case random access patterns for both lookups and\n> insert/delete/maintenance cases. Can you switch to timestamps or\n> the like?\n>\n> There are proposals out there for more database-friendly ways of\n> generating UUIDs than the traditional ones, but nobody's gotten\n> around to implementing that in Postgres AFAIK.\n>\n> regards, tom lane\n>\n\nThank you Tom.Version: I sheepishly admit it's 9.6, 10 and 11 (it's Azure Single Server, that's another story).I am definitely looking at redoing the way we do UUIDs... but that's not a trivial change given the volume of data we have + 24/7 workload.I was trying to understand whether there are any known workarounds for random access + index vacuums. Are my vacuum times 'normal' ?On Mon, Apr 17, 2023 at 7:01 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:peter plachta <pplachta@gmail.com> writes:\n> The company I work for has a large (50+ instances, 2-4 TB each) Postgres\n> install. 
One of the key problems we are facing in vanilla Postgres is\n> vacuum behavior on high QPS (20K writes/s), random index access on UUIDs.\n\nIndexing on a UUID column is an antipattern, because you're pretty much\nguaranteed the worst-case random access patterns for both lookups and\ninsert/delete/maintenance cases. Can you switch to timestamps or\nthe like?\n\nThere are proposals out there for more database-friendly ways of\ngenerating UUIDs than the traditional ones, but nobody's gotten\naround to implementing that in Postgres AFAIK.\n\n regards, tom lane",
"msg_date": "Mon, 17 Apr 2023 19:43:22 -0700",
"msg_from": "peter plachta <pplachta@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: High QPS, random index writes and vacuum"
},
{
"msg_contents": "Thank you David -- I increased this to 1GB as well (seeing as that was the\nmax). We are doing mostly single passes now.\n\nOn Mon, Apr 17, 2023 at 7:40 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Tue, 18 Apr 2023 at 12:35, peter plachta <pplachta@gmail.com> wrote:\n> > I increased work_mem to 2Gb\n>\n> maintenance_work_mem is the configuration option that vacuum uses to\n> control how much memory it'll make available for storage of dead\n> tuples. I believe 1GB would allow 178,956,970 tuples to be stored\n> before multiple passes would be required. The chunk of memory for dead\n> tuple storage is capped at 1GB.\n>\n> David\n>\n\nThank you David -- I increased this to 1GB as well (seeing as that was the max). We are doing mostly single passes now.On Mon, Apr 17, 2023 at 7:40 PM David Rowley <dgrowleyml@gmail.com> wrote:On Tue, 18 Apr 2023 at 12:35, peter plachta <pplachta@gmail.com> wrote:\n> I increased work_mem to 2Gb\n\nmaintenance_work_mem is the configuration option that vacuum uses to\ncontrol how much memory it'll make available for storage of dead\ntuples. I believe 1GB would allow 178,956,970 tuples to be stored\nbefore multiple passes would be required. The chunk of memory for dead\ntuple storage is capped at 1GB.\n\nDavid",
"msg_date": "Mon, 17 Apr 2023 19:44:47 -0700",
"msg_from": "peter plachta <pplachta@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: High QPS, random index writes and vacuum"
},
{
"msg_contents": "On Tue, Apr 18, 2023 at 2:43 PM peter plachta <pplachta@gmail.com> wrote:\n> I was trying to understand whether there are any known workarounds for random access + index vacuums. Are my vacuum times 'normal' ?\n\nAh, it's not going to help on the old versions you mentioned, but for\nwhat it's worth: I remember noticing that I could speed up vacuum of\nuncorrelated indexes using parallel vacuum (v13), huge_pages=on,\nmaintainance_work_mem=BIG, min_dynamic_shared_memory=BIG (v14),\nbecause then the memory that is binary-searched in random order avoids\nthrashing the TLB.\n\n\n",
"msg_date": "Tue, 18 Apr 2023 14:49:59 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: High QPS, random index writes and vacuum"
},
{
"msg_contents": "On Mon, Apr 17, 2023 at 7:43 PM peter plachta <pplachta@gmail.com> wrote:\n> Version: I sheepishly admit it's 9.6, 10 and 11 (it's Azure Single Server, that's another story).\n\nIf you can upgrade to 14, you'll find that there is much improved\nmanagement of index updates on that version:\n\nhttps://www.postgresql.org/docs/current/btree-implementation.html#BTREE-DELETION\n\nBut it's not clear what the problem really is here. If the problem is\nthat you're dependent on vacuum to get acceptable response times by\nholding back index bloat, then an upgrade could easily help a lot. But\nan upgrade might not make VACUUM take less time, given that you've\nalready tuned it fairly aggressively. It depends.\n\nAn upgrade might make VACUUM go faster if you set\nvacuum_cost_page_miss to 2, which is the default on later versions\nanyway -- looks like you didn't touch that. And, as Thomas said, later\nversions do have parallel VACUUM, though that cannot be used by\nautovacuum workers.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 17 Apr 2023 19:52:34 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: High QPS, random index writes and vacuum"
}
] |
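The vacuum-tuning advice in the thread above can be sketched as follows. Note that `hot_table` and the threshold numbers are illustrative assumptions, not values from the thread, and the parallel VACUUM option requires PostgreSQL 13+ (not the 9.6-11 versions being discussed):

```sql
-- Give VACUUM the maximum dead-tuple memory (capped at 1GB on these
-- versions), so each 50GB index is scanned in a single pass:
SET maintenance_work_mem = '1GB';

-- Per-table autovacuum tuning for a hot table, so vacuum starts
-- earlier and is throttled less (numbers are illustrative):
ALTER TABLE hot_table SET (
    autovacuum_vacuum_scale_factor = 0.01,
    autovacuum_vacuum_cost_delay   = 2,     -- milliseconds
    autovacuum_vacuum_cost_limit   = 2000
);

-- A manual VACUUM on PostgreSQL 13+ can process the three indexes
-- concurrently (autovacuum workers cannot use this):
VACUUM (PARALLEL 3, VERBOSE) hot_table;
```

As Thomas notes in the thread, the index-vacuum phase of uncorrelated (e.g. UUID) indexes is memory-access bound, so large `maintenance_work_mem` plus parallelism tends to help more than I/O tuning alone.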
[
{
"msg_contents": "If not, how can I get the same functionality by writing a\nquery/job/procedure?\n\nSomething which doesn't require me to install a plugin; it is RDS\nPostgreSQL, so I cannot install/enable a plugin without downtime to a\ncritical production database\n\nv$sesstat use case - I run a query, I would like to see what all metrics\nin postgres change for my session\n\nv$sql_plan use case - I want to capture all operations for all plans and do\nanalysis on those.\n\n-- \nCheers,\nKunwar\n\nIf not, how can I get the same functionality by writing a query/job/procedure?Something which doesn't require me to install a plugin; it is RDS PostgreSQL, so I cannot install/enable a plugin without downtime to a critical production databasev$sesstat use case - I run a query, I would like to see what all metrics in postgres change for my sessionv$sql_plan use case - I want to capture all operations for all plans and do analysis on those.-- Cheers,Kunwar",
"msg_date": "Thu, 20 Apr 2023 09:37:24 -0400",
"msg_from": "kunwar singh <krishsingh.111@gmail.com>",
"msg_from_op": true,
"msg_subject": "What is equivalent of v$sesstat and v$sql_plan in postgres?"
},
{
"msg_contents": "RDS does allow you to now create your customized extensions via the \npg_tle extension. Regarding your desire to capture session metrics and \nsql plan information, I do not know anything that can do that in PG.\nRegards,\nMichael Vitale\n\nkunwar singh wrote on 4/20/2023 9:37 AM:\n> If not , how can I get the same functionality by writing a \n> query/job/procedure?\n>\n> Something which doesn't require me to install a plugin , it is RDS \n> PostgreSQL so cannot install /enable plugin without downtime to a \n> critical production database\n>\n> v$sesstat use case - I run a query , I would like to see what all \n> metrics in postgres change for my session\n>\n> v$sql_plan use case - I want to capture all operations for all plans \n> and do analysis on those.\n>\n> -- \n> Cheers,\n> Kunwar\n\n\nRegards,\n\nMichael Vitale\n\nMichaeldba@sqlexec.com <mailto:michaelvitale@sqlexec.com>\n\n703-600-9343",
"msg_date": "Thu, 20 Apr 2023 09:40:47 -0400",
"msg_from": "MichaelDBA <MichaelDBA@sqlexec.com>",
"msg_from_op": false,
"msg_subject": "Re: What is equivalent of v$sesstat and v$sql_plan in postgres?"
}
] |
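There is no exact v$sesstat/v$sql_plan counterpart in PostgreSQL, but the nearest built-in starting points can be sketched as below. Column names are per PostgreSQL 13+, and whether `pg_stat_statements` can be enabled on a given RDS instance without a restart depends on the parameter group, so treat that as an assumption to verify:

```sql
-- Nearest v$sql-style view: cumulative per-statement statistics.
-- CREATE EXTENSION itself needs no downtime once the library is in
-- shared_preload_libraries:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

SELECT queryid, calls, total_exec_time, rows,
       shared_blks_hit, shared_blks_read, temp_blks_written
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;

-- Session-level activity (no per-session counter breakdown like
-- v$sesstat, but state and current query per backend):
SELECT pid, state, wait_event_type, wait_event, query
FROM pg_stat_activity
WHERE pid = pg_backend_pid();
```

For plan capture across all statements (the v$sql_plan use case), the usual approach is the `auto_explain` module logging plans of slow queries to the server log, which on RDS is enabled through the parameter group.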
[
{
"msg_contents": "Postgres noob question.\n\nFor example say datadog\nhttps://docs.datadoghq.com/database_monitoring/query_metrics/#explain-plans\n\nDo they run EXPLAIN on all queries?\nOr is it automatic in postgres? Like when auto_explain is enabled?\n\n\n-- \nCheers,\nKunwar\n\nPostgres noob question.For example say datadoghttps://docs.datadoghq.com/database_monitoring/query_metrics/#explain-plansDo they run EXPLAIN on all queries?Or is it automatic in postgres? Like when auto_explain is enabled?-- Cheers,Kunwar",
"msg_date": "Sat, 22 Apr 2023 11:28:39 -0400",
"msg_from": "kunwar singh <krishsingh.111@gmail.com>",
"msg_from_op": true,
"msg_subject": "How do Monitoring tools capture explain plan of a query"
},
{
"msg_contents": "Datadog runs explain on a subset of queries, which should be most of the top\nqueries, and doesn't use auto_explain (though there is a request to use it\nif it is already set up)\n\nKyle\n\n\n\nOn Sat, Apr 22, 2023 at 8:29 AM kunwar singh <krishsingh.111@gmail.com>\nwrote:\n\n> Postgres noob question.\n>\n> For example say datadog\n> https://docs.datadoghq.com/database_monitoring/query_metrics/#explain-plans\n>\n> Do they run EXPLAIN on all queries?\n> Or is it automatic in postgres? Like when auto_explain is enabled?\n>\n>\n> --\n> Cheers,\n> Kunwar\n>\n\nDatadog runs explain on a subset of queries, which should be most of the top queries, and doesn't use auto_explain (though there is a request to use it if it is already set up)KyleOn Sat, Apr 22, 2023 at 8:29 AM kunwar singh <krishsingh.111@gmail.com> wrote:Postgres noob question.For example say datadoghttps://docs.datadoghq.com/database_monitoring/query_metrics/#explain-plansDo they run EXPLAIN on all queries?Or is it automatic in postgres? Like when auto_explain is enabled?-- Cheers,Kunwar",
"msg_date": "Sat, 22 Apr 2023 09:17:00 -0700",
"msg_from": "kyle Hailey <kylelf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: How do Monitoring tools capture explain plan of a query"
},
{
"msg_contents": "Thank you for the clarification Kyle 🙂\n\nOn Sat, Apr 22, 2023 at 12:17 PM kyle Hailey <kylelf@gmail.com> wrote:\n\n>\n> Datadog runs explain on a subset of queries , should be most of the top\n> queries, and doesn't use auto_explain (though there is an request to use it\n> if it is already set up)\n>\n> Kyle\n>\n>\n>\n> On Sat, Apr 22, 2023 at 8:29 AM kunwar singh <krishsingh.111@gmail.com>\n> wrote:\n>\n>> Postgres noob question.\n>>\n>> For example say datadog\n>>\n>> https://docs.datadoghq.com/database_monitoring/query_metrics/#explain-plans\n>>\n>> Do they run EXPLAIN on all queries?\n>> Or is it automatic in postgres? Like when auto_explain is enabled?\n>>\n>>\n>> --\n>> Cheers,\n>> Kunwar\n>>\n> --\nCheers,\nKunwar\n\nThank you for the clarification Kyle 🙂On Sat, Apr 22, 2023 at 12:17 PM kyle Hailey <kylelf@gmail.com> wrote:Datadog runs explain on a subset of queries , should be most of the top queries, and doesn't use auto_explain (though there is an request to use it if it is already set up)KyleOn Sat, Apr 22, 2023 at 8:29 AM kunwar singh <krishsingh.111@gmail.com> wrote:Postgres noob question.For example say datadoghttps://docs.datadoghq.com/database_monitoring/query_metrics/#explain-plansDo they run EXPLAIN on all queries?Or is it automatic in postgres? Like when auto_explain is enabled?-- Cheers,Kunwar\n\n-- Cheers,Kunwar",
"msg_date": "Sun, 23 Apr 2023 15:28:31 -0400",
"msg_from": "kunwar singh <krishsingh.111@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: How do Monitoring tools capture explain plan of a query"
}
] |
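For reference, the auto_explain alternative mentioned in the thread above looks roughly like this; the thresholds are illustrative, and on RDS these settings go into a parameter group (`session_preload_libraries` / `shared_preload_libraries`) rather than being set interactively:

```sql
-- Load for the current (superuser) session; set globally via
-- shared_preload_libraries in production:
LOAD 'auto_explain';

SET auto_explain.log_min_duration = '500ms'; -- log plans of slower queries
SET auto_explain.log_analyze = off;          -- 'on' adds per-node timing overhead
SET auto_explain.log_format = 'json';        -- easier for tools to parse
```

Unlike a monitoring agent re-running EXPLAIN, auto_explain logs the plan the query actually used, at the cost of writing to the server log.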
[
{
"msg_contents": "Dear all,\n\nWe are facing a performance issue with the following query. Executing this\nquery takes about 20 seconds.\n(the database version is 14.1)\n\nThe query:\n\n----- SLOW QUERY -----\n\nSELECT lead_record.id AS id\n FROM \"lead_record\" lead_record\n LEFT JOIN \"lead_record__field_msisdn\"\n\"lead_record__field_msisdn\" ON \"lead_record\".id =\n\"lead_record__field_msisdn\".entity_id\n LEFT JOIN \"lead_record__field_campaign\"\n\"lead_record__field_campaign\" ON \"lead_record\".id =\n\"lead_record__field_campaign\".entity_id\n LEFT JOIN \"lead_record__field_number_of_calls\"\n\"lead_record__field_number_of_calls\" ON \"lead_record\".id =\n\"lead_record__field_number_of_calls\".entity_id\n LEFT JOIN \"lead_record__field_last_call\"\n\"lead_record__field_last_call\" ON \"lead_record\".id =\n\"lead_record__field_last_call\".entity_id\n LEFT JOIN \"lead_record__field_last_offered_plan\"\n\"lead_record__field_last_offered_plan\" ON \"lead_record\".id =\n\"lead_record__field_last_offered_plan\".entity_id\n LEFT JOIN \"lead_record__field_status\"\n\"lead_record__field_status\" ON \"lead_record\".id =\n\"lead_record__field_status\".entity_id\n LEFT JOIN \"lead_record__field_comment_text\"\n\"lead_record__field_comment_text\" ON \"lead_record\".id =\n\"lead_record__field_comment_text\".entity_id\n LEFT JOIN \"lead_record__field_date_of_visit\"\n\"lead_record__field_date_of_visit\" ON \"lead_record\".id =\n\"lead_record__field_date_of_visit\".entity_id\n LEFT JOIN \"lead_record__field_current_mf\"\n\"lead_record__field_current_mf\" ON \"lead_record\".id =\n\"lead_record__field_current_mf\".entity_id\n LEFT JOIN \"lead_record__field_pos_code\"\n\"lead_record__field_pos_code\" ON \"lead_record\".id =\n\"lead_record__field_pos_code\".entity_id\n LEFT JOIN \"lead_record__field_assignee\"\n\"lead_record__field_assignee\" ON \"lead_record\".id =\n\"lead_record__field_assignee\".entity_id\n LEFT JOIN 
\"lead_record__field_checks_passed\"\n\"lead_record__field_checks_passed\" ON \"lead_record\".id =\n\"lead_record__field_checks_passed\".entity_id\n LEFT JOIN \"lead_record__field_last_offer_name\"\n\"lead_record__field_last_offer_name\" ON \"lead_record\".id =\n\"lead_record__field_last_offer_name\".entity_id\n LEFT JOIN \"lead_record__field_next_scheduled_call\"\n\"lead_record__field_next_scheduled_call\" ON \"lead_record\".id =\n\"lead_record__field_next_scheduled_call\".entity_id\n LEFT JOIN \"taxonomy_term_field_data\"\n\"taxonomy_term_field_data_lead_record__field_campaign\" ON\n\"lead_record__field_campaign\".field_campaign_target_id =\n\"taxonomy_term_field_data_lead_record__field_campaign\".tid\n LEFT JOIN \"taxonomy_term__field_active\"\n\"taxonomy_term__field_active\" ON\ntaxonomy_term_field_data_lead_record__field_campaign.tid =\ntaxonomy_term__field_active.entity_id\n LEFT JOIN \"users_field_data\"\n\"users_field_data_lead_record__field_assignee\" ON\n\"lead_record__field_assignee\".field_assignee_target_id =\n\"users_field_data_lead_record__field_assignee\".uid\n LEFT JOIN \"taxonomy_term__field_campaign_end_date\"\n\"taxonomy_term_field_data_lead_record__field_campaign__taxonomy_term__field_campaign_end_date\"\nON \"taxonomy_term_field_data_lead_record__field_campaign\".tid =\n\"taxonomy_term_field_data_lead_record__field_campaign__taxonomy_term__field_campaign_end_date\".entity_id\n\n WHERE\n((TO_DATE(\"taxonomy_term_field_data_lead_record__field_campaign__taxonomy_term__field_campaign_end_date\".field_campaign_end_date_value,\n'YYYY-MM-DDTHH24:MI:SS') >= (now() - INTERVAL '1 days'))\n and \"taxonomy_term__field_active\".field_active_value = 1\n and \"lead_record__field_assignee\".field_assignee_target_id = 140\n and \"lead_record__field_pos_code\".field_pos_code_value =\n'100000064'\n and\n\"lead_record__field_checks_passed\".field_checks_passed_value = 1\n and\n\"lead_record__field_number_of_calls\".field_number_of_calls_value < 10);\n\n\nThis is 
the execution plan:\n\n----- EXPLAIN ANALYZE -----\n\nNested Loop Left Join (cost=65337.38..77121.07 rows=1 width=4) (actual\ntime=27164.156..27209.338 rows=0 loops=1)\n -> Nested Loop (cost=65337.11..77120.46 rows=1 width=12) (actual\ntime=27164.155..27209.337 rows=0 loops=1)\n -> Nested Loop (cost=65336.82..77120.15 rows=1 width=36) (actual\ntime=27164.155..27209.336 rows=0 loops=1)\n Join Filter: (taxonomy_term__field_active.entity_id =\nlead_record__field_campaign.field_campaign_target_id)\n -> Merge Join (cost=3.46..5.64 rows=1 width=16) (actual\ntime=0.109..0.547 rows=14 loops=1)\n Merge Cond: (taxonomy_term__field_active.entity_id =\ntaxonomy_term_field_data_lead_record__field_campaign__taxonomy_.entity_id)\n -> Index Scan using\ntaxonomy_term__field_active____pkey on taxonomy_term__field_active\n (cost=0.28..53.37 rows=25 width=8) (actual time=0.006..0.382 rows=23\nloops=1)\n Filter: (field_active_value = 1)\n Rows Removed by Filter: 1337\n -> Sort (cost=3.10..3.16 rows=25 width=8) (actual\ntime=0.096..0.111 rows=14 loops=1)\n Sort Key:\ntaxonomy_term_field_data_lead_record__field_campaign__taxonomy_.entity_id\n Sort Method: quicksort Memory: 25kB\n -> Seq Scan on\ntaxonomy_term__field_campaign_end_date\ntaxonomy_term_field_data_lead_record__field_campaign__taxonomy_\n (cost=0.00..2.52 rows=25 width=8) (actual t\nime=0.074..0.088 rows=14 loops=1)\n Filter:\n(to_date((field_campaign_end_date_value)::text,\n'YYYY-MM-DDTHH24:MI:SS'::text) >= (now() - '1 day'::interval))\n Rows Removed by Filter: 65\n -> Nested Loop Left Join (cost=65333.36..77114.49 rows=1\nwidth=20) (actual time=1940.255..1943.482 rows=0 loops=14)\n -> Nested Loop Left Join (cost=65332.94..77114.04\nrows=1 width=20) (actual time=1940.255..1943.482 rows=0 loops=14)\n -> Nested Loop Left Join\n (cost=65332.53..77113.59 rows=1 width=20) (actual time=1940.254..1943.481\nrows=0 loops=14)\n -> Nested Loop Left Join\n (cost=65332.10..77113.13 rows=1 width=20) (actual time=1940.254..1943.481\nrows=0 
loops=14)\n -> Gather (cost=65331.68..77112.68\nrows=1 width=20) (actual time=1940.254..1943.481 rows=0 loops=14)\n Workers Planned: 2\n Workers Launched: 1\n -> Nested Loop\n (cost=64331.68..76112.58 rows=1 width=20) (actual time=1076.181..1076.182\nrows=0 loops=25)\n -> Nested Loop\n (cost=64331.26..76112.10 rows=1 width=52) (actual time=1076.180..1076.181\nrows=0 loops=25)\n -> Hash Join\n (cost=64330.84..76065.41 rows=99 width=44) (actual time=1076.179..1076.181\nrows=0 loops=25)\n Hash Cond: (\nlead_record.id = lead_record__field_assignee.entity_id)\n -> Parallel\nHash Left Join (cost=64071.94..75186.33 rows=165117 width=28) (actual\ntime=1564.113..2045.479 rows=396506 loops=1\n3)\n Hash\nCond: (lead_record.id = lead_record__field_comment_text.entity_id)\n ->\n Parallel Hash Left Join (cost=60385.75..70584.84 rows=165117 width=28)\n(actual time=1522.981..1849.529 rows=396506 l\noops=13)\n\nHash Cond: (lead_record.id = lead_record__field_last_offered_plan.entity_id)\n\n-> Parallel Hash Join (cost=56819.26..66289.32 rows=165117 width=28)\n(actual time=1484.191..1677.707 rows=396506\nloops=13)\n\n Hash Cond: (lead_record__field_campaign.entity_id = lead_record.id)\n\n -> Parallel Seq Scan on lead_record__field_campaign\n (cost=0.00..5741.41 rows=165241 width=16) (actual time=\n0.007..60.109 rows=396579 loops=13)\n\n -> Parallel Hash (cost=53947.02..53947.02 rows=165219 width=12)\n(actual time=1360.127..1360.128 rows=396506\n loops=13)\n\n Buckets: 131072 Batches: 8 Memory Usage: 3392kB\n\n -> Parallel Hash Join (cost=33369.97..53947.02 rows=165219\nwidth=12) (actual time=1120.261..1297.656\nrows=396506 loops=13)\n\n Hash Cond: (lead_record.id =\nlead_record__field_number_of_calls.entity_id)\n\n -> Parallel Hash Left Join (cost=23851.21..41181.82\nrows=165343 width=4) (actual time=798.152..\n937.445 rows=396579 loops=13)\n\n Hash Cond: (lead_record.id =\nlead_record__field_status.entity_id)\n\n -> Parallel Hash Left Join (cost=16358.35..30690.03\nrows=165343 
width=4) (actual time=520\n.228..659.801 rows=396579 loops=13)\nTime: 27237.160 ms (00:27.237)\n\n\n\n\nAny assistance to improve the performance would be appreciated.\nThank you in advance.\n\nDear all,We are facing a performance issue with the following query. Executing this query takes about 20 seconds.(the database version is 14.1)The query: ----- SLOW QUERY -----SELECT lead_record.id AS id FROM \"lead_record\" lead_record LEFT JOIN \"lead_record__field_msisdn\" \"lead_record__field_msisdn\" ON \"lead_record\".id = \"lead_record__field_msisdn\".entity_id LEFT JOIN \"lead_record__field_campaign\" \"lead_record__field_campaign\" ON \"lead_record\".id = \"lead_record__field_campaign\".entity_id LEFT JOIN \"lead_record__field_number_of_calls\" \"lead_record__field_number_of_calls\" ON \"lead_record\".id = \"lead_record__field_number_of_calls\".entity_id LEFT JOIN \"lead_record__field_last_call\" \"lead_record__field_last_call\" ON \"lead_record\".id = \"lead_record__field_last_call\".entity_id LEFT JOIN \"lead_record__field_last_offered_plan\" \"lead_record__field_last_offered_plan\" ON \"lead_record\".id = \"lead_record__field_last_offered_plan\".entity_id LEFT JOIN \"lead_record__field_status\" \"lead_record__field_status\" ON \"lead_record\".id = \"lead_record__field_status\".entity_id LEFT JOIN \"lead_record__field_comment_text\" \"lead_record__field_comment_text\" ON \"lead_record\".id = \"lead_record__field_comment_text\".entity_id LEFT JOIN \"lead_record__field_date_of_visit\" \"lead_record__field_date_of_visit\" ON \"lead_record\".id = \"lead_record__field_date_of_visit\".entity_id LEFT JOIN \"lead_record__field_current_mf\" \"lead_record__field_current_mf\" ON \"lead_record\".id = \"lead_record__field_current_mf\".entity_id LEFT JOIN \"lead_record__field_pos_code\" \"lead_record__field_pos_code\" ON \"lead_record\".id = \"lead_record__field_pos_code\".entity_id LEFT JOIN \"lead_record__field_assignee\" \"lead_record__field_assignee\" ON \"lead_record\".id 
= \"lead_record__field_assignee\".entity_id LEFT JOIN \"lead_record__field_checks_passed\" \"lead_record__field_checks_passed\" ON \"lead_record\".id = \"lead_record__field_checks_passed\".entity_id LEFT JOIN \"lead_record__field_last_offer_name\" \"lead_record__field_last_offer_name\" ON \"lead_record\".id = \"lead_record__field_last_offer_name\".entity_id LEFT JOIN \"lead_record__field_next_scheduled_call\" \"lead_record__field_next_scheduled_call\" ON \"lead_record\".id = \"lead_record__field_next_scheduled_call\".entity_id LEFT JOIN \"taxonomy_term_field_data\" \"taxonomy_term_field_data_lead_record__field_campaign\" ON \"lead_record__field_campaign\".field_campaign_target_id = \t\t\t\t \t\t\"taxonomy_term_field_data_lead_record__field_campaign\".tid LEFT JOIN \"taxonomy_term__field_active\" \"taxonomy_term__field_active\" ON taxonomy_term_field_data_lead_record__field_campaign.tid = taxonomy_term__field_active.entity_id LEFT JOIN \"users_field_data\" \"users_field_data_lead_record__field_assignee\" ON \"lead_record__field_assignee\".field_assignee_target_id = \t\t\t\t\t\t\t \t\t\"users_field_data_lead_record__field_assignee\".uid LEFT JOIN \"taxonomy_term__field_campaign_end_date\" \"taxonomy_term_field_data_lead_record__field_campaign__taxonomy_term__field_campaign_end_date\" ON \t\t \t\t\t\t\t\"taxonomy_term_field_data_lead_record__field_campaign\".tid = \"taxonomy_term_field_data_lead_record__field_campaign__taxonomy_term__field_campaign_end_date\".entity_id WHERE ((TO_DATE(\"taxonomy_term_field_data_lead_record__field_campaign__taxonomy_term__field_campaign_end_date\".field_campaign_end_date_value, 'YYYY-MM-DDTHH24:MI:SS') >= (now() - INTERVAL '1 days')) and \"taxonomy_term__field_active\".field_active_value = 1 and \"lead_record__field_assignee\".field_assignee_target_id = 140 and \"lead_record__field_pos_code\".field_pos_code_value = '100000064' and \"lead_record__field_checks_passed\".field_checks_passed_value = 1 and 
\"lead_record__field_number_of_calls\".field_number_of_calls_value < 10); This is the execution plan:----- EXPLAIN ANALYZE -----Nested Loop Left Join (cost=65337.38..77121.07 rows=1 width=4) (actual time=27164.156..27209.338 rows=0 loops=1) -> Nested Loop (cost=65337.11..77120.46 rows=1 width=12) (actual time=27164.155..27209.337 rows=0 loops=1) -> Nested Loop (cost=65336.82..77120.15 rows=1 width=36) (actual time=27164.155..27209.336 rows=0 loops=1) Join Filter: (taxonomy_term__field_active.entity_id = lead_record__field_campaign.field_campaign_target_id) -> Merge Join (cost=3.46..5.64 rows=1 width=16) (actual time=0.109..0.547 rows=14 loops=1) Merge Cond: (taxonomy_term__field_active.entity_id = taxonomy_term_field_data_lead_record__field_campaign__taxonomy_.entity_id) -> Index Scan using taxonomy_term__field_active____pkey on taxonomy_term__field_active (cost=0.28..53.37 rows=25 width=8) (actual time=0.006..0.382 rows=23 loops=1) Filter: (field_active_value = 1) Rows Removed by Filter: 1337 -> Sort (cost=3.10..3.16 rows=25 width=8) (actual time=0.096..0.111 rows=14 loops=1) Sort Key: taxonomy_term_field_data_lead_record__field_campaign__taxonomy_.entity_id Sort Method: quicksort Memory: 25kB -> Seq Scan on taxonomy_term__field_campaign_end_date taxonomy_term_field_data_lead_record__field_campaign__taxonomy_ (cost=0.00..2.52 rows=25 width=8) (actual time=0.074..0.088 rows=14 loops=1) Filter: (to_date((field_campaign_end_date_value)::text, 'YYYY-MM-DDTHH24:MI:SS'::text) >= (now() - '1 day'::interval)) Rows Removed by Filter: 65 -> Nested Loop Left Join (cost=65333.36..77114.49 rows=1 width=20) (actual time=1940.255..1943.482 rows=0 loops=14) -> Nested Loop Left Join (cost=65332.94..77114.04 rows=1 width=20) (actual time=1940.255..1943.482 rows=0 loops=14) -> Nested Loop Left Join (cost=65332.53..77113.59 rows=1 width=20) (actual time=1940.254..1943.481 rows=0 loops=14) -> Nested Loop Left Join (cost=65332.10..77113.13 rows=1 width=20) (actual 
time=1940.254..1943.481 rows=0 loops=14) -> Gather (cost=65331.68..77112.68 rows=1 width=20) (actual time=1940.254..1943.481 rows=0 loops=14) Workers Planned: 2 Workers Launched: 1 -> Nested Loop (cost=64331.68..76112.58 rows=1 width=20) (actual time=1076.181..1076.182 rows=0 loops=25) -> Nested Loop (cost=64331.26..76112.10 rows=1 width=52) (actual time=1076.180..1076.181 rows=0 loops=25) -> Hash Join (cost=64330.84..76065.41 rows=99 width=44) (actual time=1076.179..1076.181 rows=0 loops=25) Hash Cond: (lead_record.id = lead_record__field_assignee.entity_id) -> Parallel Hash Left Join (cost=64071.94..75186.33 rows=165117 width=28) (actual time=1564.113..2045.479 rows=396506 loops=13) Hash Cond: (lead_record.id = lead_record__field_comment_text.entity_id) -> Parallel Hash Left Join (cost=60385.75..70584.84 rows=165117 width=28) (actual time=1522.981..1849.529 rows=396506 loops=13) Hash Cond: (lead_record.id = lead_record__field_last_offered_plan.entity_id) -> Parallel Hash Join (cost=56819.26..66289.32 rows=165117 width=28) (actual time=1484.191..1677.707 rows=396506loops=13) Hash Cond: (lead_record__field_campaign.entity_id = lead_record.id) -> Parallel Seq Scan on lead_record__field_campaign (cost=0.00..5741.41 rows=165241 width=16) (actual time=0.007..60.109 rows=396579 loops=13) -> Parallel Hash (cost=53947.02..53947.02 rows=165219 width=12) (actual time=1360.127..1360.128 rows=396506 loops=13) Buckets: 131072 Batches: 8 Memory Usage: 3392kB -> Parallel Hash Join (cost=33369.97..53947.02 rows=165219 width=12) (actual time=1120.261..1297.656rows=396506 loops=13) Hash Cond: (lead_record.id = lead_record__field_number_of_calls.entity_id) -> Parallel Hash Left Join (cost=23851.21..41181.82 rows=165343 width=4) (actual time=798.152..937.445 rows=396579 loops=13) Hash Cond: (lead_record.id = lead_record__field_status.entity_id) -> Parallel Hash Left Join (cost=16358.35..30690.03 rows=165343 width=4) (actual time=520.228..659.801 rows=396579 loops=13)Time: 27237.160 
ms (00:27.237)Any assistance to improve the performance would be appreciated.Thank you in advance.",
"msg_date": "Fri, 28 Apr 2023 15:19:21 +0300",
"msg_from": "=?UTF-8?B?zqDOsc+BzrHPg866zrXPhc63IM6gzrHPg8+DzrHPgc63?=\n <passari.paraskevi@gmail.com>",
"msg_from_op": true,
"msg_subject": "Performance issues in query with multiple joins"
},
{
"msg_contents": "On Friday, April 28, 2023, Παρασκευη Πασσαρη <passari.paraskevi@gmail.com>\nwrote:\n\n> Dear all,\n>\n> We are facing a performance issue with the following query. Executing this\n> query takes about 20 seconds.\n> (the database version is 14.1)\n>\n\nGiven the possibility of this working better in the supported 14.7 I\nsuggest starting with a minor version update then posting again if still\nhaving problems.\n\nDavid J.\n\nOn Friday, April 28, 2023, Παρασκευη Πασσαρη <passari.paraskevi@gmail.com> wrote:Dear all,We are facing a performance issue with the following query. Executing this query takes about 20 seconds.(the database version is 14.1)Given the possibility of this working better in the supported 14.7 I suggest starting with a minor version update then posting again if still having problems.David J.",
"msg_date": "Fri, 28 Apr 2023 06:03:13 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Performance issues in query with multiple joins"
},
{
"msg_contents": "=?UTF-8?B?zqDOsc+BzrHPg866zrXPhc63IM6gzrHPg8+DzrHPgc63?= <passari.paraskevi@gmail.com> writes:\n> We are facing a performance issue with the following query. Executing this\n> query takes about 20 seconds.\n\nRaising join_collapse_limit (to more than the number of joins in\nthe query) might help. But I think really if performance is a\nproblem you should think about ditching the star schema design.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 28 Apr 2023 10:07:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Performance issues in query with multiple joins"
},
{
"msg_contents": "On Fri, 2023-04-28 at 15:19 +0300, Παρασκευη Πασσαρη wrote:\n> We are facing a performance issue with the following query. Executing this query takes about 20 seconds.\n> (the database version is 14.1)\n\nThe execution plan seems to be incomplete.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Mon, 01 May 2023 06:15:56 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Performance issues in query with multiple joins"
}
] |
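Tom's advice above — raising join_collapse_limit past the number of joins — lets the planner consider reordering the entire join list, at the cost of planning time: the number of possible join orders grows factorially with the relation count, which is why the default limit is 8. A small, purely illustrative sketch of that growth (my own counting code, not anything from the thread):

```python
from math import comb, factorial

# Left-deep join orderings for n base relations: n! possibilities.
def left_deep_orders(n: int) -> int:
    return factorial(n)

# Bushy join trees with n labeled leaves: n! * Catalan(n-1).
def bushy_plans(n: int) -> int:
    catalan = comb(2 * (n - 1), n - 1) // n  # Catalan(n-1)
    return factorial(n) * catalan

for n in (4, 8, 12):
    print(n, left_deep_orders(n), bushy_plans(n))
```

This is why very wide star-schema joins either get planned greedily in chunks (when the join list exceeds the collapse limit) or handed to the genetic optimizer once geqo_threshold relations are reached — raising the limit buys plan quality for planning time.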
[
{
"msg_contents": "Hi there,\n\n I've been struggling with very high write load on a server.\n\n We are collecting around 400k values each 5 minutes into a hypertable. \n(We use timescaledb extension, I also shared this on timescale forum but \nthen I realised the issue is postgresql related.)\n\n When running iostat I see a constant 7-10MB/s write by postgres, and \nthis just doesn't add up for me and I'm fully stuck with this. Even \nwith the row overhead it should be around 20MB / 5 mins! Even with \nindices this 7-10MB/s constant write is inexplicable for me.\n\n The writes may trigger an update in another table, but not all of \nthem do (I use a time filter). Let's say 70% does (which I don't think). \nThere we update two timestamps, and two ints. This still doesn't add up \nfor me. Even if we talk about 50MB of records, that should be 0.16MB/s \nat most!\n\n So I dug in and found it was WAL, of course, what else.\n\n Tweaking all around the config, reading forums and docs, to no avail. \nThe only thing that made the scenario realistic is disabling fsync \n(which I know I must not, but for the experiment I did). That eased the \nwrite load to 0.6MB/s.\n\n I also found that the 16MB WAL segment got 80+ MB written into it \nbefore being closed. So what's happening here? Does fsync cause the \nwhole file to be written out again and again?\n\n I checked with pg_dump, the content is as expected.\n\n We are talking about some insane data overhead here, two magnitudes \nmore is being written to WAL than the actual useful data.\n\nAll help is greatly appreciated.\n\nThanks!\n\nAndrás\n\n ---\nOlcsó Virtuális szerver:\nhttp://www.ProfiVPS.hu\n\nTámogatás: Support@ProfiVPS.hu\n\nHi there, \n I've been struggling with very high write load on a server. \n We are collecting around 400k values each 5 minutes into a hypertable. (We use timescaledb extension, I also shared this on timescale forum but then I realised the issue is postgresql related.) 
\n\n\n When running iostat I see a constant 7-10MB/s write by postgres, and this just doesn’t add up for me and I’m fully stuck with this. Even with the row overhead it should be around 20MB / 5 mins! Even with indices this 7-10MB/s constant write is inexplicable for me.\n The writes may trigger an update in another table, but not all of them do (I use a time filter). Let’s say 70% does (which I don’t think). There we update two timestamps, and two ints. This still doesn’t add up for me. Even if we talk about 50MB of records, that should be 0.16MB/s at most!\n So I dug in and found it was WAL, of course, what else. \n Tweaking all around the config, reading forums and docs, to no avail. The only thing that made the scenario realistic is disabling fsync (which I know I must not, but for the experiment I did). That eased the write load to 0.6MB/s.\n I also found that the 16MB WAL segment got 80+ MB written into it before being closed. So what's happening here? Does fsync cause the whole file to be written out again and again? \n I checked with pg_dump, the content is as expected. \n We are talking about some insane data overhead here, two magnitudes more is being written to WAL than the actual useful data. \n\nAll help is greatly appreciated. \nThanks!\nAndrás\n\n\n---Olcsó Virtuális szerver:http://www.ProfiVPS.huTámogatás: Support@ProfiVPS.hu",
"msg_date": "Thu, 04 May 2023 19:31:45 +0200",
"msg_from": "ProfiVPS Support <support@profivps.hu>",
"msg_from_op": true,
"msg_subject": "Fsync IO issue"
},
{
"msg_contents": "Oh, sorry, we are using PostgreSQL 13.10 (Debian 13.10-1.pgdg100+1) on \nthe server with TimescaleDB 2.5.1 on Debian 10.\n\n2023-05-04 19:31 időpontban ProfiVPS Support ezt írta:\n\n> Hi there,\n> \n> I've been struggling with very high write load on a server.\n> \n> We are collecting around 400k values each 5 minutes into a hypertable. \n> (We use timescaledb extension, I also shared this on timescale forum \n> but then I realised the issue is postgresql related.)\n> \n> When running iostat I see a constant 7-10MB/s write by postgres, and \n> this just doesn't add up for me and I'm fully stuck with this. Even \n> with the row overhead it should be around 20Mb / 5 mins ! Even with \n> indeces this 7-10MB/s constant write is inexplicable for me.\n> \n> The writes may trigger an update in an other table, but not all of them \n> do (I use a time filter). Let's say 70% does (which I dont think). \n> There we update two timestamps, and two ints. This still doesnt add up \n> for me. Even if we talk about 50MB of records, that should be 0,16MB/s \n> at most!\n> \n> So I dag in and found it was WAL, of course, what else.\n> \n> Tweaking all around the config, reading forums and docs, to no avail. \n> The only thing that made the scenario realistic is disabling fsync \n> (which I know I must not, but for the experiment I did). That eased the \n> write load to 0.6MB/s.\n> \n> I also found that the 16MB WAL segment got 80+ MB written into it \n> before being closed. So what's happening here? 
Does fsync cause the \n> whole file to be written out again and again?\n> \n> I checked with pg_dump, the content is as expected.\n> \n> We are talking about some insane data overhead here, two magnitudes \n> more is being written to WAL than the actual useful data.\n> \n> All help is greatly appreciated.\n> \n> Thanks!\n> \n> András\n> \n> ---\n> Olcsó Virtuális szerver:\n> http://www.ProfiVPS.hu\n> \n> Támogatás: Support@ProfiVPS.hu\n\n---\nOlcsó Virtuális szerver:\nhttp://www.ProfiVPS.hu\n\nTámogatás: Support@ProfiVPS.hu",
"msg_date": "Thu, 04 May 2023 19:41:13 +0200",
"msg_from": "ProfiVPS Support <support@profivps.hu>",
"msg_from_op": true,
"msg_subject": "Re: Fsync IO issue"
},
{
"msg_contents": "Hi,\n\nOn 2023-05-04 19:31:45 +0200, ProfiVPS Support wrote:\n> We are collecting around 400k values each 5 minutes into a hypertable. (We\n> use timescaledb extension, I also shared this on timescale forum but then I\n> realised the issue is postgresql related.)\n\nI don't know how timescale does its storage - how did you conclude this is\nabout postgres, not about timescale? Obviously WAL write patterns depend on\nthe way records are inserted and flushed.\n\n\n> I also found that the 16MB WAL segment got 80+ MB written into it before\n> being closed. So what's happening here? Does fsync cause the whole file to\n> be written out again and again?\n\nOne possible reason for this is that you are committing small transactions\nvery frequently. When a transaction commits, the commit record needs to be\nflushed to disk. If the transactions are small, the next commit might reside\non the same page - which needs to be written out again. Which of course can\nincrease the write rate considerably.\n\nYour workload does not sound like it actually needs to commit in tiny\ntransactions? Some larger batching / using longer lived transactions might\nhelp a lot.\n\nAnother possibility is that timescale does flush WAL too frequently for some\nreason...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 4 May 2023 12:21:46 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Fsync IO issue"
},
{
"msg_contents": "Hi,\n\n thank you for your response :)\n\n Yes, that's exactly what's happening and I understand the issue with \nfsync in these cases. But I see no workaround about this as the data is \ningested one-by-one (sent by collectd) and a db function handles it (it \nhas to do lookup and set state in a different table based on the \nincoming value).\n\n I feel like ANYTHING would be better than this. Even risking losing \n_some_ of the latest data in case of a server crash (if it crashes we \nlose data anyways until restart, ofc we could have HA I know and we will \nwhen there'll be a need) .\n\n Around 100 times the write need for WAL than the useful data! That's \ninsane. This is actually endangering the whole project we've been \nworking on for the last 1.5 years and I face this issue after 100k \ndevices have been added for a client. So I'm between a rock and a hard \nplace :(\n\n Ye, I think this is called \"experience\", but I must be honest, I was \nnot expecting this at all :(\n\n However, collectd's plugin does have an option to increase commit \ninterval, but that kept the records locked and it caused strange issues, \nthat's why I disabled it. I tried now to add that setting back and it \ndoes ease the situation somewhat with write spikes on commit.\n\n All in all, thank you for your help. Honestly, after today's journey I \nthought that's the issue, but didn't want to believe it.\n\nThanks,\n\nAndrás\n\n2023-05-04 21:21 időpontban Andres Freund ezt írta:\n\n> Hi,\n> \n> On 2023-05-04 19:31:45 +0200, ProfiVPS Support wrote:\n> \n>> We are collecting around 400k values each 5 minutes into a hypertable. \n>> (We\n>> use timescaledb extension, I also shared this on timescale forum but \n>> then I\n>> realised the issue is postgresql related.)\n> \n> I don't know how timescale does its storage - how did you conclude this \n> is\n> about postgres, not about timescale? 
> Obviously WAL write patterns \n> depend on\n> the way records are inserted and flushed.\n> \n>> I also found that the 16MB WAL segment got 80+ MB written into it \n>> before\n>> being closed. So what's happening here? Does fsync cause the whole \n>> file to\n>> be written out again and again?\n> \n> One possible reason for this is that you are committing small \n> transactions\n> very frequently. When a transaction commits, the commit records needs \n> to be\n> flushed to disk. If the transactions are small, the next commit might \n> reside\n> on the same page - which needs to be written out again. Which of course \n> can\n> increase the write rate considerably.\n> \n> Your workload does not sound like it actually needs to commit in tiny\n> transactions? Some larger batching / using longer lived transactions \n> might\n> help a lot.\n> \n> Another possibility is that timescale does flush WAL too frequently for \n> some\n> reason...\n> \n> Greetings,\n> \n> Andres Freund\n\n---\nOlcsó Virtuális szerver:\nhttp://www.ProfiVPS.hu\n\nTámogatás: Support@ProfiVPS.hu\n\nHi, \n thank you for your response :)\n Yes, that's exactly what's happening and I understand the issue with fsync in these cases. But I see no workaround about this as the data is ingested one-by-one (sent by collectd) and a db function handles it (it has to do lookup and set state in a different table based on the incoming value). \n I feel like ANYTHING would be better than this. Even risking losing _some_ of the latest data in case of a server crash (if it crashes we lose data anyways until restart, ofc we could have HA I know and we will when there'll be a need) . \n Around 100 times the write need for WAL than the useful data! That's insane. This is actually endangering the whole project we've been working on for the last 1.5 years and I face this issue after 100k devices have been added for a client. 
So I'm between a rock and a hard place :( \n Ye, I think this is called \"experience\", but I must be honest, I was not expecting this at all :(\n However, collectd's plugin does have an option to increase commit interval, but that kept the records locked and it caused strange issues, that's why I disabled it. I tried now to add that setting back and it does ease the situation somewhat with write spikes on commit. \n All in all, thank you for your help. Honestly, after todays journey I thought that's the issue, but didn't want to believe it.\n\nThanks,\nAndrás\n\n2023-05-04 21:21 időpontban Andres Freund ezt írta:\n\nHi,On 2023-05-04 19:31:45 +0200, ProfiVPS Support wrote:\n We are collecting around 400k values each 5 minutes into a hypertable. (Weuse timescaledb extension, I also shared this on timescale forum but then Irealised the issue is postgresql related.)\nI don't know how timescale does its storage - how did you conclude this isabout postgres, not about timescale? Obviously WAL write patterns depend onthe way records are inserted and flushed.\n I also found that the 16MB WAL segment got 80+ MB written into it beforebeing closed. So what's happening here? Does fsync cause the whole file tobe written out again and again?\nOne possible reason for this is that you are committing small transactionsvery frequently. When a transaction commits, the commit records needs to beflushed to disk. If the transactions are small, the next commit might resideon the same page - which needs to be written out again. Which of course canincrease the write rate considerably.Your workload does not sound like it actually needs to commit in tinytransactions? Some larger batching / using longer lived transactions mighthelp a lot.Another possibility is that timescale does flush WAL too frequently for somereason...Greetings,Andres Freund\n\n\n\n---Olcsó Virtuális szerver:http://www.ProfiVPS.huTámogatás: Support@ProfiVPS.hu",
"msg_date": "Thu, 04 May 2023 22:37:22 +0200",
"msg_from": "ProfiVPS Support <support@profivps.hu>",
"msg_from_op": true,
"msg_subject": "Re: Fsync IO issue"
},
{
"msg_contents": "On Fri, May 5, 2023 at 8:37 AM ProfiVPS Support <support@profivps.hu> wrote:\n> I feel like ANYTHING would be better than this. Even risking loosing _some_ of the latest data in case of a server crash (if it crashes we lose data anyways until restart, ofc we could have HA I know and we will when there'll be a need) .\n\nTry synchronous_commit=off:\n\nhttps://www.postgresql.org/docs/current/wal-async-commit.html\n\n\n",
"msg_date": "Fri, 5 May 2023 09:23:07 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fsync IO issue"
},
{
"msg_contents": "On Thu, May 4, 2023 at 4:23 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Fri, May 5, 2023 at 8:37 AM ProfiVPS Support <support@profivps.hu>\n> wrote:\n> > I feel like ANYTHING would be better than this. Even risking loosing\n> _some_ of the latest data in case of a server crash (if it crashes we lose\n> data anyways until restart, ofc we could have HA I know and we will when\n> there'll be a need) .\n>\n> Try synchronous_commit=off:\n>\n> https://www.postgresql.org/docs/current/wal-async-commit.html\n\n\nYeah, or batch multiple inserts into a transaction somehow. In the worst\ncase, each insert can cause multiple things to happen, write to WAL, flush\nto heap (8k write), commit bit set (another 8k write), etc. In most\nworkloads these steps can aggregate together in various ways but not\nalways.\n\nmerlin\n\nOn Thu, May 4, 2023 at 4:23 PM Thomas Munro <thomas.munro@gmail.com> wrote:On Fri, May 5, 2023 at 8:37 AM ProfiVPS Support <support@profivps.hu> wrote:\n> I feel like ANYTHING would be better than this. Even risking loosing _some_ of the latest data in case of a server crash (if it crashes we lose data anyways until restart, ofc we could have HA I know and we will when there'll be a need) .\n\nTry synchronous_commit=off:\n\nhttps://www.postgresql.org/docs/current/wal-async-commit.htmlYeah, or batch multiple inserts into a transaction somehow. In the worst case, each insert can cause multiple things to happen, write to WAL, flush to heap (8k write), commit bit set (another 8k write), etc. In most workloads these steps can aggregate together in various ways but not always. merlin",
"msg_date": "Tue, 30 May 2023 15:56:36 -0500",
"msg_from": "Merlin Moncure <mmoncure@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fsync IO issue"
}
] |
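Andres's diagnosis above — the same 8 kB WAL page being flushed once per tiny commit — can be sanity-checked with a toy model. This is my own back-of-the-envelope arithmetic, not PostgreSQL internals: it assumes a fixed commit-record size and one whole-page write per fsync, and ignores full_page_writes, checkpoints, and group commit.

```python
# With one fsync per tiny commit, the current 8 kB WAL page is
# rewritten on every flush until it fills up.
WAL_PAGE = 8192  # PostgreSQL's WAL block size (XLOG_BLCKSZ), in bytes

def write_amplification(record_bytes: int) -> float:
    """Bytes hitting disk per byte of WAL payload, one fsync per commit."""
    commits_per_page = WAL_PAGE // record_bytes
    bytes_written = commits_per_page * WAL_PAGE   # full page per fsync
    payload = commits_per_page * record_bytes
    return bytes_written / payload

print(write_amplification(100))   # ~82x for ~100-byte commit records
print(write_amplification(8192))  # 1.0 once one commit fills a whole page
```

An amplification in the tens-to-hundreds range for small per-row commits is consistent with the "two magnitudes" overhead reported in the thread, and with why batching commits or setting synchronous_commit=off collapses the write rate.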
[
{
"msg_contents": "Hi,\nWe are getting intermittent connection errors on Postgres 11.16, in\ninformatica as well as Python jobs that run queries on Postgres.\n\nIn informatica logs we see below error.\n\nODBC PostgreSQL Wire Protocol driver]SSL I/O Error.[DataDirect][ODBC lib]\nConnection in use\n\nWhile in Postgres logs we see below errors.\n\nAn existing connection was forcibly closed by the remote host.\n\n\n\nRegards,\nAditya.\n\nHi,We are getting intermittent connection errors on Postgres 11.16, in informatica as well as Python jobs that run queries on Postgres. In informatica logs we see below error.ODBC PostgreSQL Wire Protocol driver]SSL I/O Error.[DataDirect][ODBC lib] Connection in useWhile in Postgres logs we see below errors.An existing connection was forcibly closed by the remote host.Regards,Aditya.",
"msg_date": "Fri, 12 May 2023 00:10:06 +0530",
"msg_from": "aditya desai <admad123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Connection drops on postgres 11.16"
}
] |
[
{
"msg_contents": "Hi there,\n\nI am investigating possible throughput with PostgreSQL 14.4 on an ARM i.MX6 Quad CPU (NXP sabre board).\nTesting with a simple python script (running on the same CPU), I get ~1000 request/s.\n\nimport psycopg as pg\nconn = pg.connect('dbname=test')\nconn.autocommit = True\ncur = conn.cursor()\nwhile True:\n cur.execute(\"call dummy_call(%s,%s,%s, ARRAY[%s, %s, %s]::real[]);\", (1,2,3, 4.0, 5.0, 6.0), binary=True )\n\nwhere the called procedure is basically a no-op:\n\nCREATE OR REPLACE PROCEDURE dummy_call(\n in arg1 int,\n in arg2 int,\n in arg3 int,\n in arg4 double precision[])\nAS $$\nBEGIN\nEND\n$$ LANGUAGE plpgsql;\n\nThis seems to be a quite low number of requests/s, given that there are no changes to the database.\nLooking for suggestions what could cause this poor performance and where to start investigations.\n\nThanks,\n\nMarc\n\n________________________________\nThe information contained in this message may be confidential and legally protected under applicable law. The message is intended solely for the addressee(s). If you are not the intended recipient, you are hereby notified that any use, forwarding, dissemination, or reproduction of this message is strictly prohibited and may be unlawful. 
If you are not the intended recipient, please contact the sender by return e-mail and destroy all copies of the original message.\n\n\n\n\n\n\n\n\n\nHi there,\n \nI am investigating possible throughput with PostgreSQL 14.4 on an ARM i.MX6 Quad CPU (NXP sabre board).\nTesting with a simple python script (running on the same CPU), I get ~1000 request/s.\n \nimport psycopg as pg\nconn = pg.connect('dbname=test')\nconn.autocommit = True\ncur = conn.cursor()\nwhile True:\n cur.execute(\"call dummy_call(%s,%s,%s, ARRAY[%s, %s, %s]::real[]);\", (1,2,3, 4.0, 5.0, 6.0), binary=True )\n \nwhere the called procedure is basically a no-op:\n \nCREATE OR REPLACE PROCEDURE dummy_call(\n in arg1 int,\n in arg2 int,\n in arg3 int,\n in arg4 double precision[])\nAS $$\nBEGIN\nEND\n$$ LANGUAGE plpgsql;\n \nThis seems to be a quite low number of requests/s, given that there are no changes to the database.\nLooking for suggestions what could cause this poor performance and where to start investigations.\n \nThanks,\n\nMarc\n\n\n\nThe information contained in this message may be confidential and legally protected under applicable law. The message is intended solely for the addressee(s). If you are not the intended recipient, you are hereby notified\n that any use, forwarding, dissemination, or reproduction of this message is strictly prohibited and may be unlawful. If you are not the intended recipient, please contact the sender by return e-mail and destroy all copies of the original message.",
"msg_date": "Tue, 23 May 2023 11:42:53 +0000",
"msg_from": "\"Druckenmueller, Marc\" <marc.druckenmueller@philips.com>",
"msg_from_op": true,
"msg_subject": "PostgreSQL performance on ARM i.MX6"
},
{
"msg_contents": "On Tue, 23 May 2023 at 13:43, Druckenmueller, Marc\n<marc.druckenmueller@philips.com> wrote:\n\n> Testing with a simple python script (running on the same CPU), I get ~1000 request/s.\n\nIs the time spent in the client or in the server? Are there noticeable\ndifferences if you execute that statement in a loop in psql (with the\nvariables already bound)?\n\n-- Daniele\n\n\n",
"msg_date": "Tue, 23 May 2023 15:25:22 +0200",
"msg_from": "Daniele Varrazzo <daniele.varrazzo@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL performance on ARM i.MX6"
},
{
"msg_contents": "\"Druckenmueller, Marc\" <marc.druckenmueller@philips.com> writes:\n> I am investigating possible throughput with PostgreSQL 14.4 on an ARM i.MX6 Quad CPU (NXP sabre board).\n> Testing with a simple python script (running on the same CPU), I get ~1000 request/s.\n\nThat does seem pretty awful for modern hardware, but it's hard to\ntease apart the various potential causes. How beefy is that CPU\nreally? Maybe the overhead is all down to client/server network round\ntrips? Maybe psycopg is doing something unnecessarily inefficient?\n\nFor comparison, on my development workstation I get\n\n[ create the procedure manually in db test ]\n$ cat bench.sql\ncall dummy_call(1,2,3,array[1,2,3]::float8[]);\n$ pgbench -f bench.sql -n -T 10 test\npgbench (16beta1)\ntransaction type: bench.sql\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nmaximum number of tries: 1\nduration: 10 s\nnumber of transactions actually processed: 353891\nnumber of failed transactions: 0 (0.000%)\nlatency average = 0.028 ms\ninitial connection time = 7.686 ms\ntps = 35416.189844 (without initial connection time)\n\nand it'd be more if I weren't using an assertions-enabled\ndebug build. It would be interesting to see what you get\nfrom exactly that test case on your ARM board.\n\nBTW, one thing I see that's definitely an avoidable inefficiency in\nyour test is that you're forcing the array parameter to real[]\n(i.e. float4) when the procedure takes double precision[]\n(i.e. float8). That forces an extra run-time conversion. Swapping\nbetween float4 and float8 in my pgbench test doesn't move the needle\na lot, but it's noticeable.\n\nAnother thing to think about is that psycopg might be defaulting\nto a TCP rather than Unix-socket connection, and that might add\noverhead depending on what kernel you're using. Although, rather\nthan try to micro-optimize that, you probably ought to be thinking\nof how to remove network round trips altogether. 
I can get upwards\nof 300K calls/second if I push the loop to the server side:\n\ntest=# \\timing\nTiming is on.\ntest=# do $$\ndeclare x int := 1; a float8[] := array[1,2,3];\nbegin\nfor i in 1..1000000 loop\n call dummy_call (x,x,x,a);\nend loop;\nend $$;\nDO\nTime: 3256.023 ms (00:03.256)\ntest=# select 1000000/3.256023;\n ?column? \n---------------------\n 307123.137643683721\n(1 row)\n\nAgain, it would be interesting to compare exactly that\ntest case on your ARM board.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 May 2023 12:29:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL performance on ARM i.MX6"
},
{
"msg_contents": "On 2023-05-23 12:42, Druckenmueller, Marc wrote:\n> Hi there,\n> \n> I am investigating possible throughput with PostgreSQL 14.4 on an ARM\n> i.MX6 Quad CPU (NXP sabre board).\n> \n> Testing with a simple python script (running on the same CPU), I get\n> ~1000 request/s.\n\nI tweaked your script slightly, but this is what I got on the Raspberry \nPi 4 that I have in the corner of the room. Almost twice the speed you \nare seeing.\n\n 0: this = 0.58 tot = 0.58\n 1: this = 0.55 tot = 1.13\n 2: this = 0.59 tot = 1.72\n 3: this = 0.55 tot = 2.27\n 4: this = 0.56 tot = 2.83\n 5: this = 0.57 tot = 3.40\n 6: this = 0.56 tot = 3.96\n 7: this = 0.55 tot = 4.51\n 8: this = 0.59 tot = 5.11\n 9: this = 0.60 tot = 5.71\n\nThat's with governor=performance and a couple of background tasks \nrunning as well as the python. PostgreSQL 15 in a container on a Debian \nO.S. I've not done any tuning on PostgreSQL (but your call isn't doing \nanything really) nor the Pi.\n\nThe minor tweaks to your script were as below:\n\n import psycopg as pg\n import time\n\n conn = pg.connect('')\n conn.autocommit = True\n cur = conn.cursor()\n start = time.time()\n prev = start\n end = start\n for j in range(10):\n for i in range(1000):\n cur.execute(\"call dummy_call(%s,%s,%s, ARRAY[%s, %s, \n%s]::real[]);\", (1,2,3, 4.0, 5.0, 6.0), binary=True )\n end = time.time()\n print(f\"{j}: this = {(end - prev):.2f} tot = {(end - \nstart):.2f}\")\n prev = end\n\n-- \n Richard Huxton\n\n\n",
"msg_date": "Tue, 23 May 2023 20:46:14 +0100",
"msg_from": "Richard Huxton <richard@huxton.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL performance on ARM i.MX6"
},
{
"msg_contents": "Em ter., 23 de mai. de 2023 às 08:43, Druckenmueller, Marc <\nmarc.druckenmueller@philips.com> escreveu:\n\n> Hi there,\n>\n>\n>\n> I am investigating possible throughput with PostgreSQL 14.4 on an ARM\n> i.MX6 Quad CPU (NXP sabre board).\n>\n> Testing with a simple python script (running on the same CPU), I get ~1000\n> request/s.\n>\nCan you share kernel and python details? (version, etc).\n\nregards,\nRanier Vilela\n\nEm ter., 23 de mai. de 2023 às 08:43, Druckenmueller, Marc <marc.druckenmueller@philips.com> escreveu:\n\n\nHi there,\n \nI am investigating possible throughput with PostgreSQL 14.4 on an ARM i.MX6 Quad CPU (NXP sabre board).\nTesting with a simple python script (running on the same CPU), I get ~1000 request/s.Can you share kernel and python details? (version, etc).regards,Ranier Vilela",
"msg_date": "Tue, 23 May 2023 16:56:20 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL performance on ARM i.MX6"
}
] |
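One way to read the numbers across this thread: a single synchronous client caps out at the reciprocal of its per-call round-trip latency, so the i.MX6's ~1000 req/s implies roughly 1 ms spent per call across Python, psycopg, the socket, and the server combined, while Tom's 0.028 ms pgbench latency corresponds to ~35 k tps. A trivial sanity check (my own arithmetic, not from the thread):

```python
# A single synchronous client can never exceed 1 / round-trip-latency.
def tps_from_latency(latency_ms: float) -> float:
    return 1000.0 / latency_ms

print(tps_from_latency(1.0))    # ~1000 req/s, i.e. ~1 ms per call on the i.MX6
print(tps_from_latency(0.028))  # ~35714, in the ballpark of Tom's pgbench figure
```

This framing explains why server-side loops (Tom's DO block) and pipelining help so much: they remove the per-call round trip rather than making any single call faster.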
[
{
"msg_contents": "Hello\n\nWe have an application (https://dhis2.org) which has been using postgresql\nas a backend for the past 15 years or so. Gradually moving through pg\nversions 8,9,10 etc as the years went by. At the moment a large number of\nour implementations are using versions 13, 14 and 15. Unfortunately we\nhave recently discovered that, despite most operations performing\nconsiderably better on later versions, there is a particular type of query\nthat is very much slower (up to 100x) than it was on postgresql 11. We\nhave seen this regression in 13, 14 and 15. Unfortunately I don't have\nstats on version 12 yet.\n\nThe query is not beautifully crafted. It is automatically generated from a\ncustom expression language. We know that it can probably be improved, but\nat the moment we would really like to know if there is anything we can\nconfigure with the SQL as-is to get performance like we had back on pg11.\n\nThe example below is a typical such query. I've attached below that, links\nto the results of EXPLAIN (ANALYZE, BUFFERS) for pg11 and pg15 on the same\nphysical environment loaded with the same database. 
I would appreciate\nsome help trying to understand what we are seeing with the EXPLAIN output\nand whether there is anything to be done.\n\nEXPLAIN ANALYZE\nselect\n count(pi) as value,\n '2022W21' as Weekly\nfrom\n analytics_enrollment_gr3uwzvzpqt as ax\nwhere\n cast(\n (\n select\n \"IEMtgZapP2s\"\n from\n analytics_event_gr3uWZVzPQT\n where\n analytics_event_gr3uWZVzPQT.pi = ax.pi\n and \"IEMtgZapP2s\" is not null\n and ps = 'XO45JBGJcXJ'\n order by\n executiondate desc\n limit\n 1\n ) as date\n ) < cast('2022-05-30' as date)\n and cast(\n (\n select\n \"IEMtgZapP2s\"\n from\n analytics_event_gr3uWZVzPQT\n where\n analytics_event_gr3uWZVzPQT.pi = ax.pi\n and \"IEMtgZapP2s\" is not null\n and ps = 'XO45JBGJcXJ'\n order by\n executiondate desc\n limit\n 1\n ) as date\n ) >= cast('2022-05-23' as date)\n and (uidlevel1 = 'Plmg8ikyfrK')\n and (\n coalesce(\n (\n select\n \"QEbYS2QOXLf\"\n from\n analytics_event_gr3uWZVzPQT\n where\n analytics_event_gr3uWZVzPQT.pi = ax.pi\n and \"QEbYS2QOXLf\" is not null\n and ps = 'XO45JBGJcXJ'\n order by\n executiondate desc\n limit\n 1\n ):: text,\n ''\n ) = 'FIN_CASE_CLASS_CONFIRM_LAB'\n )\nlimit 1;\n-- limit 200001;\n\nThe EXPLAIN result for postgresql 11 is here:\nhttps://explain.depesz.com/s/3QfC\n\nThe same query on postgresql 15 is here:\nhttps://explain.depesz.com/s/BzpA#html\n\nWhereas the first example takes 23s, the pg15 one takes 243s (this time\ndifference is even more stark when you remove BUFFERS from the explain).\nDuring execution the pg15 query consumes 100% of a CPU core throughout\nindicating it is probably cpu bound rather than IO.\n\nThe plan selected in both cases seems to be exactly the same. But pg15\nseems to make a lot of work of the final aggregation step. Anecdotally I\nunderstand that the same difference is there with pg13 and 14. 
The only\nsignificant factor I could think of relating to new behaviour in pg13 is\nthe new hash_mem_multiplier configuration and its relation to work_mem\navailable for hash tables. I have attempted to turn up both\nhash_mem_multiplier and work_mem to ridiculous values and I see no change\nwhatsoever on pg15.\n\nI also removed the LIMIT and tested again with no significant difference:\nhttps://explain.depesz.com/s/K9Lq\n\nDoes anyone have a theory of why pg15 should behave so differently to pg11\nhere? Better still, any suggestions for configuration that might make pg15\nbehave more like pg11. I am really dreading the prospect of stepping our\nmany live implementations back to pg11 :-(.\n\nRegards\nBob\n\nHelloWe have an application (https://dhis2.org) which has been using postgresql as a backend for the past 15 years or so. Gradually moving through pg versions 8,9,10 etc as the years went by. At the moment a large number of our implementations are using versions 13, 14 and 15. Unfortunately we have recently discovered that, despite most operations performing considerably better on later versions, there is a particular type of query that is very much slower (up to 100x) than it was on postgresql 11. We have seen this regression in 13, 14 and 15. Unfortunately I don't have stats on version 12 yet.The query is not beautifully crafted. It is automatically generated from a custom expression language. We know that it can probably be improved, but at the moment we would really like to know if there is anything we can configure with the SQL as-is to get performance like we had back on pg11.The example below is a typical such query. I've attached below that, links to the results of EXPLAIN (ANALYZE, BUFFERS) for pg11 and pg15 on the same physical environment loaded with the same database. 
I would appreciate some help trying to understand what we are seeing with the EXPLAIN output and whether there is anything to be done.EXPLAIN ANALYZE select count(pi) as value, '2022W21' as Weekly from analytics_enrollment_gr3uwzvzpqt as ax where cast( ( select \"IEMtgZapP2s\" from analytics_event_gr3uWZVzPQT where analytics_event_gr3uWZVzPQT.pi = ax.pi and \"IEMtgZapP2s\" is not null and ps = 'XO45JBGJcXJ' order by executiondate desc limit 1 ) as date ) < cast('2022-05-30' as date) and cast( ( select \"IEMtgZapP2s\" from analytics_event_gr3uWZVzPQT where analytics_event_gr3uWZVzPQT.pi = ax.pi and \"IEMtgZapP2s\" is not null and ps = 'XO45JBGJcXJ' order by executiondate desc limit 1 ) as date ) >= cast('2022-05-23' as date) and (uidlevel1 = 'Plmg8ikyfrK') and ( coalesce( ( select \"QEbYS2QOXLf\" from analytics_event_gr3uWZVzPQT where analytics_event_gr3uWZVzPQT.pi = ax.pi and \"QEbYS2QOXLf\" is not null and ps = 'XO45JBGJcXJ' order by executiondate desc limit 1 ):: text, '' ) = 'FIN_CASE_CLASS_CONFIRM_LAB' ) limit 1;-- limit 200001;The EXPLAIN result for postgresql 11 is here: https://explain.depesz.com/s/3QfCThe same query on postgresql 15 is here: https://explain.depesz.com/s/BzpA#htmlWhereas the first example takes 23s, the pg15 one takes 243s (this time difference is even more stark when you remove BUFFERS from the explain). During execution the pg15 query consumes 100% of a CPU core throughout indicating it is probably cpu bound rather than IO.The plan selected in both cases seems to be exactly the same. But pg15 seems to make a lot of work of the final aggregation step. Anecdotally I understand that the same difference is there with pg13 and 14. The only significant factor I could think of relating to new behaviour in pg13 is the new hash_mem_multiplier configuration and it its relation to work_mem availbale for hash tables. I have attempted to turn up both hash_mem_multilier and work_mem to ridiculous values and I see no change whatsoever on pg15. 
I also removed the LIMIT and tested again with no significant difference: https://explain.depesz.com/s/K9LqDoes anyone have a theory of why pg15 should behave so differently to pg11 here? Better still, any suggestions for configuration that might make pg15 behave more like pg10. I am really dreading the prospect of stepping our many live implementations back to pg11 :-(.RegardsBob",
"msg_date": "Wed, 31 May 2023 09:43:42 +0100",
"msg_from": "Bob Jolliffe <bobjolliffe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Unaccounted regression from postgresql 11 in later versions"
},
{
"msg_contents": ">\n> Does anyone have a theory of why pg15 should behave so differently to pg11\n> here? Better still, any suggestions for configuration that might make pg15\n> behave more like pg10. I am really dreading the prospect of stepping our\n> many live implementations back to pg11 :-(.\n>\n\nOne major factor here appears to be JIT compilation, which is off by\ndefault in pg11, but on by default in pg12+.\n\nYou can see at the bottom of your slowest query plan that about 233s of the\n240s are JIT related.\n\nThere is good info in the docs about tuning, or turning off, JIT:\nhttps://www.postgresql.org/docs/current/jit-decision.html",
"msg_date": "Wed, 31 May 2023 11:11:29 +0100",
"msg_from": "Michael Christofides <michael@pgmustard.com>",
"msg_from_op": false,
"msg_subject": "Re: Unaccounted regression from postgresql 11 in later versions"
},
{
"msg_contents": "Wow Michael you are absolutely right. Turning jit off results in a query\nexecution about twice as fast as pg11. That is a huge relief. I will read\nthe jit related docs and see if there is anything smarter I should be doing\nother than disabling jit entirely, but it works a treat for this query.\n\nRegards\nBob\n\nOn Wed, 31 May 2023 at 11:11, Michael Christofides <michael@pgmustard.com>\nwrote:\n\n> Does anyone have a theory of why pg15 should behave so differently to pg11\n>> here? Better still, any suggestions for configuration that might make pg15\n>> behave more like pg10. I am really dreading the prospect of stepping our\n>> many live implementations back to pg11 :-(.\n>>\n>\n> One major factor here appears to be JIT compilation, which is off by\n> default in pg11, but on by default in pg12+.\n>\n> You can see at the bottom of your slowest query plan that about 233s of\n> the 240s are JIT related.\n>\n> There is good info in the docs about tuning, or turning off, JIT:\n> https://www.postgresql.org/docs/current/jit-decision.html\n>",
"msg_date": "Wed, 31 May 2023 11:26:58 +0100",
"msg_from": "Bob Jolliffe <bobjolliffe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Unaccounted regression from postgresql 11 in later versions"
}
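For reference, the JIT behaviour discussed in this thread can be adjusted per session or cluster-wide. A minimal sketch of the relevant settings (the GUC names `jit`, `jit_above_cost`, `jit_inline_above_cost` and `jit_optimize_above_cost` are the documented ones in pg12+; the raised thresholds shown are illustrative, not recommendations):

```sql
-- Disable JIT for the current session only, e.g. to confirm it is the culprit:
SET jit = off;

-- Or raise the cost thresholds so JIT only kicks in for genuinely expensive plans
-- (defaults: 100000 / 500000 / 500000):
SET jit_above_cost = 1000000;
SET jit_inline_above_cost = 5000000;
SET jit_optimize_above_cost = 5000000;

-- Cluster-wide, persisted across restarts:
ALTER SYSTEM SET jit = off;
SELECT pg_reload_conf();
```

EXPLAIN (ANALYZE) output includes a JIT section whenever compilation happened, so comparing runs with `jit` on and off shows the overhead directly, as it did in the plans above.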
] |
[
{
"msg_contents": "Hi guys,\n\nI've been configuring a new server and tuning Postgresql 15.3, but I'm\nstruggling with a latency I'm consistently seeing with this new server when\nrunning fast short queries, compared to the other server.\n\nWe're running two different versions of Postgresql:\n\n- Server A: Postgresql 9.3\n- Server B: Postgresql 15.3\n\nServer B is the new server and is way more powerful than server A:\n\n- Server A: 1x Intel Xeon E3-1270 3.5GHz, 2x 8GB DDR3, RAID0\n- Server B: 2x Intel Xeon Platinum 8260 2.4GHz, 4x 16GB DDR4, RAID1\n\nWe're running Linux Ubuntu 20.04 on server B and I've been tweaking some\nsettings in Linux and Postgresql 15.3. With the current setup, Postgresql\n15.3 is able to process more than 1 million transactions per second running\npgbench:\n\n # pgbench --username postgres --select-only --client 100 --jobs 10\n--time 20 test\n pgbench (15.3 (Ubuntu 15.3-1.pgdg20.04+1))\n starting vacuum...end.\n transaction type: <builtin: select only>\n scaling factor: 1\n query mode: simple\n number of clients: 100\n number of threads: 10\n maximum number of tries: 1\n duration: 20 s\n number of transactions actually processed: 23039950\n number of failed transactions: 0 (0.000%)\n latency average = 0.087 ms\n initial connection time = 62.536 ms\n tps = 1155053.135317 (without initial connection time)\n\nAs shown in pgbench, the performance is great. Also when testing individual\nqueries, heavy queries (those taking a few ms) run faster on server B than\nserver A. 
Unfortunately when we run fast short SELECT queries (< 1 ms),\nserver A is consistently running faster than server B, even if the query\nplans are the same:\n\n Server A:\n\n # EXPLAIN (ANALYZE, BUFFERS, VERBOSE) SELECT 1 AS \"a\" FROM \"foobar\"\nWHERE (\"foobar\".\"id\" = 1) LIMIT 1;\n\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.42..8.44 rows=1 width=0) (actual time=0.008..0.008\nrows=1 loops=1)\n Output: (1)\n Buffers: shared hit=5\n -> Index Only Scan using foobar_pkey on public.foobar\n (cost=0.42..8.44 rows=1 width=0) (actual time=0.008..0.008 rows=1 loops=1)\n Output: 1\n Index Cond: (foobar.id = 1)\n Heap Fetches: 1\n Buffers: shared hit=5\n Total runtime: 0.017 ms\n (9 rows)\n\n Time: 0.281 ms\n\n\n Server B:\n\n # EXPLAIN (ANALYZE, BUFFERS, VERBOSE) SELECT 1 AS \"a\" FROM \"foobar\"\nWHERE (\"foobar\".\"id\" = 1) LIMIT 1;\n\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..1.11 rows=1 width=4) (actual time=0.019..0.021\nrows=1 loops=1)\n Output: 1\n Buffers: shared hit=4\n -> Index Only Scan using foobar_pkey on public.foobar\n (cost=0.00..1.11 rows=1 width=4) (actual time=0.018..0.018 rows=1 loops=1)\n Output: 1\n Index Cond: (foobar.id = 1)\n Heap Fetches: 0\n Buffers: shared hit=4\n Planning Time: 0.110 ms\n Execution Time: 0.045 ms\n (10 rows)\n\n Time: 0.635 ms\n\n\nRAID1 could add some latency on server B if it was reading from disk, but\nI've confirmed that these queries are hitting the buffer/cache and\ntherefore reading data from memory and not from disk. 
I've checked the hit\nrate with the following query:\n\n SELECT 'cache hit rate' AS name, sum(heap_blks_hit) /\n(sum(heap_blks_hit) + sum(heap_blks_read)) AS ratio FROM\npg_statio_user_tables;\n\nThe hit rate was over 95% and it increased as soon as I ran those queries.\nSame thing with the index hit rate.\n\nI've been playing with some parameters in Postgresql, decreasing/increasing\nthe number of workers, shared buffers, work_mem, JIT, cpu_*_cost variables,\netc, but nothing did help to reduce that latency.\n\nHere are the settings I'm currently using with Postgresql 15.3 after a lot\nof work experimenting with different values:\n\n checkpoint_completion_target = 0.9\n checkpoint_timeout = 900\n cpu_index_tuple_cost = 0.00001\n cpu_operator_cost = 0.00001\n effective_cache_size = 12GB\n effective_io_concurrency = 200\n jit = off\n listen_addresses = 'localhost'\n maintenance_work_mem = 1GB\n max_connections = 100\n max_parallel_maintenance_workers = 4\n max_parallel_workers = 12\n max_parallel_workers_per_gather = 4\n max_wal_size = 4GB\n max_worker_processes = 12\n min_wal_size = 1GB\n random_page_cost = 1.1\n shared_buffers = 4GB\n ssl = off\n timezone = 'UTC'\n wal_buffers = 16MB\n work_mem = 64MB\n\nSome notes about those settings:\n\n - We're running other services on this server, that's why I'm not using\nmore resources.\n - Tweaking the cpu_*_cost parameters was crucial to improve the query\nplan. With the default values Postgresql was consistently using a slower\nquery plan.\n\nI've been looking at some settings in Linux as well:\n\n - Swappiness is set to the lowest safe value: vm.swappiness = 1\n - Huge Pages is not being used and Transparent Huge Pages (THP) is set\nto 'madvise'. Postgresql 15.3 is using the default value for the\n'huge_pages' parameter: 'try'.\n - The memory overcommit policy is set to 1: vm.overcommit_memory = 1\n\nI've been playing with Huge Pages, to try to force Postgresql using this\nfeature. 
I manually allocated the number of Huge Pages as shown in this\nquery:\n\n    SHOW shared_memory_size_in_huge_pages;\n\nI confirmed Huge Pages were being used by Postgresql, but unfortunately I\ndidn't see any improvement regarding latency and performance. So I set this\nback to the previous state.\n\nConclusion:\n\nThe latency is quite low on both servers, but when you're running dozens or\nhundreds of fast short queries concurrently, on aggregate you see the\ndifference, with server A being 0.1-1.0 seconds faster than server B.\n\nAs you can see, server B has 2 CPUs and is using NUMA on Linux. And the\nCPU clock is slower on server B than server A. Maybe any of those are\ncausing that latency?\n\nAny suggestions or ideas where to look? I'd really appreciate your help.\n\nThank you",
"msg_date": "Wed, 31 May 2023 14:40:05 +0200",
"msg_from": "Sergio Rus <geiros@gmail.com>",
"msg_from_op": true,
"msg_subject": "How to reduce latency with fast short queries in Postgresql 15.3 on a\n NUMA server"
},
{
"msg_contents": "On Wed, May 31, 2023 at 02:40:05PM +0200, Sergio Rus wrote:\n> Hi guys,\n> \n> I've been configuring a new server and tuning Postgresql 15.3, but I'm\n> struggling with a latency I'm consistently seeing with this new server when\n> running fast short queries, compared to the other server.\n> \n> We're running two different versions of Postgresql:\n> \n> - Server A: Postgresql 9.3\n> - Server B: Postgresql 15.3\n> \n> Server B is the new server and is way more powerful than server A:\n> \n> - Server A: 1x Intel Xeon E3-1270 3.5GHz, 2x 8GB DDR3, RAID0\n> - Server B: 2x Intel Xeon Platinum 8260 2.4GHz, 4x 16GB DDR4, RAID1\n> ...\n> Conclusion:\n> \n> As you can see, server B has 2 CPUs and is using NUMA on Linux. And the\n> CPU clock is slower on server B than server A. Maybe any of those are\n> causing that latency?\n> \n\nHi Sergio,\n\nThis really looks like it is caused by the CPU clock speed difference.\nThe E3 is 1.6X faster at the base frequency. Many times that is the\ntrade-off when going to many more cores. Simple short queries will run faster on\nthe older CPU even though overall the new CPU has much more total\ncapacity.\n\nRegards,\nKen\n\n\n",
"msg_date": "Wed, 31 May 2023 08:47:12 -0500",
"msg_from": "Kenneth Marshall <ktm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: How to reduce latency with fast short queries in Postgresql 15.3\n on a NUMA server"
},
{
"msg_contents": "Thanks for your reply, Ken.\n\nWith such a big server I was convinced that we should see a boost\neverywhere, but after spending so much time tweaking and looking at so many\nparameters on Linux, Postgresql and our current setup, I started to think\nthat maybe that latency was intrinsic to the hardware and therefore\ninevitable. So after all, the CPU clock speed still counts these days! I\nthink many of us are just looking at the number of CPU cores and forgetting\nthat the clock speed is still relevant for many tasks.\n\nI guess those simple short queries are very sensitive to the hardware specs\nand there is no room for improving as much as the heavy queries in recent\nversions of Postgres, as I have seen in my tests.\n\nOn Wed, 31 May 2023 at 15:47, Kenneth Marshall <ktm@rice.edu> wrote:\n\n> On Wed, May 31, 2023 at 02:40:05PM +0200, Sergio Rus wrote:\n> > Hi guys,\n> >\n> > I've been configuring a new server and tuning Postgresql 15.3, but I'm\n> > struggling with a latency I'm consistently seeing with this new server\n> when\n> > running fast short queries, compared to the other server.\n> >\n> > We're running two different versions of Postgresql:\n> >\n> > - Server A: Postgresql 9.3\n> > - Server B: Postgresql 15.3\n> >\n> > Server B is the new server and is way more powerful than server A:\n> >\n> > - Server A: 1x Intel Xeon E3-1270 3.5GHz, 2x 8GB DDR3, RAID0\n> > - Server B: 2x Intel Xeon Platinum 8260 2.4GHz, 4x 16GB DDR4, RAID1\n> > ...\n> > Conclusion:\n> >\n> > As you can see, server B has 2 CPUs and is using NUMA on Linux. And the\n> > CPU clock is slower on server B than server A. Maybe any of those are\n> > causing that latency?\n> >\n>\n> Hi Sergio,\n>\n> This really looks like it is caused by the CPU clock speed difference.\n> The E3 is 1.6X faster at the base frequency. Many times that is the\n> trade-off when going to many more cores. Simple short will run faster on\n> the older CPU even though overall the new CPU has much more total\n> capacity.\n>\n> Regards,\n> Ken\n>",
"msg_date": "Wed, 31 May 2023 18:03:11 +0200",
"msg_from": "Sergio Rus <geiros@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: How to reduce latency with fast short queries in Postgresql 15.3\n on a NUMA server"
},
{
"msg_contents": "> Server B is the new server and is way more powerful than server A:\n> ...\n> So after all, the CPU clock speed still counts these days!\n\nHi Sergio,\n\nMaybe \"powerful\" + \"powersave\"?\nas I see Server B : Processor Base Frequency : 2.40 GHz AND\n\n* Max Turbo Frequency : 3.90 GHz*\nCould you verify this by running the 'cpupower frequency-info' command and\nchecking the governor line?\n\nread more:\n*\"But If we haven’t emphasised it enough, firstly whatever database\nbenchmark you are running *\n*make sure that your CPUs are not configured to run in powersave mode.\"*\nhttps://www.hammerdb.com/blog/uncategorized/how-to-maximize-cpu-performance-for-postgresql-12-0-benchmarks-on-linux/\n\nregards,\n Imre\n\nOn Wed, 31 May 2023 at 18:03, Sergio Rus <geiros@gmail.com>\nwrote:\n\n> Thanks for your reply, Ken.\n>\n> With such a big server I was convinced that we should see a boost\n> everywhere, but after spending so much time tweaking and looking at so many\n> parameters on Linux, Postgresql and our current setup, I started to think\n> that maybe that latency was intrinsic to the hardware and therefore\n> inevitable. So after all, the CPU clock speed still counts these days! I\n> think we're many just looking at the number of CPU cores and forgetting\n> that the clock speed is still relevant for many tasks.\n>\n> I guess those simple short queries are very sensible to the hardware specs\n> and there is no room for improving as much as the heavy queries in recent\n> versions of Postgres, as I have seen in my tests.\n>",
"msg_date": "Wed, 31 May 2023 19:17:28 +0200",
"msg_from": "Imre Samu <pella.samu@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: How to reduce latency with fast short queries in Postgresql 15.3\n on a NUMA server"
},
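The powersave check suggested above can be scripted from the shell. A sketch for Linux (standard cpufreq sysfs paths; writing the governor needs root and does not survive a reboot unless persisted by a service such as a systemd unit):

```shell
# Show the current governor on every core
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor | sort | uniq -c

# Switch all cores to the performance governor (root required)
for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
    echo performance > "$g"
done

# Equivalent one-liner with the cpupower tool
cpupower frequency-set -g performance
```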
{
"msg_contents": "Hi,\n\nOn 2023-05-31 14:40:05 +0200, Sergio Rus wrote:\n> I've been configuring a new server and tuning Postgresql 15.3, but I'm\n> struggling with a latency I'm consistently seeing with this new server when\n> running fast short queries, compared to the other server.\n> \n> We're running two different versions of Postgresql:\n> \n> - Server A: Postgresql 9.3\n> - Server B: Postgresql 15.3\n> \n> Server B is the new server and is way more powerful than server A:\n> \n> - Server A: 1x Intel Xeon E3-1270 3.5GHz, 2x 8GB DDR3, RAID0\n> - Server B: 2x Intel Xeon Platinum 8260 2.4GHz, 4x 16GB DDR4, RAID1\n> \n> We're running Linux Ubuntu 20.04 on server B and I've been tweaking some\n> settings in Linux and Postgresql 15.3. With the current setup, Postgresql\n> 15.3 is able to process more than 1 million transactions per second running\n> pgbench:\n> \n> # pgbench --username postgres --select-only --client 100 --jobs 10\n> --time 20 test\n\nCould you post the pgbench results for both systems? Which one is this from?\n\n\n> As shown in pgbench, the performance is great. Also when testing individual\n> queries, heavy queries (those taking a few ms) run faster on server B than\n> server A. Unfortunately when we run fast short SELECT queries (< 1 ms),\n> server A is consistently running faster than server B, even if the query\n> plans are the same:\n\nOne explanation for this can be the powersaving settings. Newer CPUs can\nthrottle down a lot further than the older ones. Increasing the clock speed\nhas a fair bit of latency - for a long running query that's not really\nvisible, but if you run a short query in isolation, it'll likely complete\nbefore the clock speed has finished ramping up.\n\nYou can query that with\n cpupower frequency-info\n\nAnother thing is that you're comparing a two socket system with a one socket\nsystem. 
Latency between a program running on one node and a client on another,\nand similarly, a program running on one node and memory attached to the other\nCPU, is higher.\n\nYou could check what happens if you bind both server and client to the same\nCPU socket.\n numactl --membind 1 --cpunodebind 1 <program> <parameters>\nforces programs to allocate memory and run on a specific CPU socket.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 31 May 2023 10:54:23 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: How to reduce latency with fast short queries in Postgresql 15.3\n on a NUMA server"
},
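The node binding suggested above could be exercised end to end roughly like this (the node ids, the database name `test` and the use of pg_ctl are illustrative assumptions; restarting the server this way is for a test box, not production):

```shell
# Inspect the node layout first
numactl --hardware

# Start the server and run the benchmark client on the same NUMA node
numactl --cpunodebind=0 --membind=0 pg_ctl -D "$PGDATA" -l logfile start
numactl --cpunodebind=0 --membind=0 pgbench --select-only -c 1 -T 20 test

# Compare against a deliberately cross-node client run
numactl --cpunodebind=1 --membind=1 pgbench --select-only -c 1 -T 20 test
```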
{
"msg_contents": "On Wed, 31 May 2023 at 09:40, Sergio Rus <geiros@gmail.com>\nwrote:\n\n> As you can see, server B has 2 CPUs and is using NUMA on Linux. And the\n> CPU clock is slower on server B than server A. Maybe any of those are\n> causing that latency?\n>\n> Any suggestions or ideas where to look? I'd really appreciate your help.\n>\nIf this is cpu bound, linux perf can show the difference.\nhttps://wiki.postgresql.org/wiki/Profiling_with_perf\n\nregards,\nRanier Vilela",
"msg_date": "Wed, 31 May 2023 15:16:40 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: How to reduce latency with fast short queries in Postgresql 15.3\n on a NUMA server"
},
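Following the wiki page linked above, a minimal perf session against a single backend might look like this (`<pid>` is a placeholder for the backend process id, e.g. from `SELECT pg_backend_pid();`; perf must be installed and run with suitable permissions):

```shell
# Sample the call stacks of one backend for 30 seconds
perf record -g -p <pid> -- sleep 30

# Summarize where the CPU time went
perf report
```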
{
"msg_contents": "Thanks for your replies, you were totally right, it was due to the CPU\ngovernor: the governor was set to 'powersave'. I've changed it to\n'performance' and the server is flying now. I'm still working on it,\nbut the first quick tests I've made are showing much better numbers.\nThose simple short queries are running faster now, the latency is now\nbasically the same or even lower than the old server. The server feels\nmore responsive overall.\n\nI've finally installed cpupower, to simplify the process, but you can\nuse basic shell commands. Here are the output for some commands:\n\n # cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors\n =>\n performance powersave\n\n # cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor\n =>\n performance\n\n # cpupower -c all frequency-info\n =>\n analyzing CPU 0:\n driver: intel_pstate\n CPUs which run at the same hardware frequency: 0\n CPUs which need to have their frequency coordinated by software: 0\n maximum transition latency: Cannot determine or is not supported.\n hardware limits: 1000 MHz - 3.90 GHz\n available cpufreq governors: performance powersave\n current policy: frequency should be within 1000 MHz and 3.90 GHz.\n The governor \"performance\" may decide which speed to use\n within this range.\n current CPU frequency: Unable to call hardware\n current CPU frequency: 1.94 GHz (asserted by call to kernel)\n boost state support:\n Supported: yes\n Active: yes\n\n analyzing CPU 1:\n driver: intel_pstate\n CPUs which run at the same hardware frequency: 1\n CPUs which need to have their frequency coordinated by software: 1\n maximum transition latency: Cannot determine or is not supported.\n hardware limits: 1000 MHz - 3.90 GHz\n available cpufreq governors: performance powersave\n current policy: frequency should be within 1000 MHz and 3.90 GHz.\n The governor \"performance\" may decide which speed to use\n within this range.\n current CPU frequency: Unable to call hardware\n 
current CPU frequency: 1.91 GHz (asserted by call to kernel)\n boost state support:\n Supported: yes\n Active: yes\n\n analyzing CPU 2:\n driver: intel_pstate\n CPUs which run at the same hardware frequency: 2\n CPUs which need to have their frequency coordinated by software: 2\n maximum transition latency: Cannot determine or is not supported.\n hardware limits: 1000 MHz - 3.90 GHz\n available cpufreq governors: performance powersave\n current policy: frequency should be within 1000 MHz and 3.90 GHz.\n The governor \"performance\" may decide which speed to use\n within this range.\n current CPU frequency: Unable to call hardware\n current CPU frequency: 2.14 GHz (asserted by call to kernel)\n boost state support:\n Supported: yes\n Active: yes\n\n ... (cropped)\n\n analyzing CPU 9:\n driver: intel_pstate\n CPUs which run at the same hardware frequency: 9\n CPUs which need to have their frequency coordinated by software: 9\n maximum transition latency: Cannot determine or is not supported.\n hardware limits: 1000 MHz - 3.90 GHz\n available cpufreq governors: performance powersave\n current policy: frequency should be within 1000 MHz and 3.90 GHz.\n The governor \"performance\" may decide which speed to use\n within this range.\n current CPU frequency: Unable to call hardware\n current CPU frequency: 2.95 GHz (asserted by call to kernel)\n boost state support:\n Supported: yes\n Active: yes\n\n ... 
(cropped)\n\n analyzing CPU 26:\n driver: intel_pstate\n CPUs which run at the same hardware frequency: 26\n CPUs which need to have their frequency coordinated by software: 26\n maximum transition latency: Cannot determine or is not supported.\n hardware limits: 1000 MHz - 3.90 GHz\n available cpufreq governors: performance powersave\n current policy: frequency should be within 1000 MHz and 3.90 GHz.\n The governor \"performance\" may decide which speed to use\n within this range.\n current CPU frequency: Unable to call hardware\n current CPU frequency: 1.00 GHz (asserted by call to kernel)\n boost state support:\n Supported: yes\n Active: yes\n\n analyzing CPU 27:\n driver: intel_pstate\n CPUs which run at the same hardware frequency: 27\n CPUs which need to have their frequency coordinated by software: 27\n maximum transition latency: Cannot determine or is not supported.\n hardware limits: 1000 MHz - 3.90 GHz\n available cpufreq governors: performance powersave\n current policy: frequency should be within 1000 MHz and 3.90 GHz.\n The governor \"performance\" may decide which speed to use\n within this range.\n current CPU frequency: Unable to call hardware\n current CPU frequency: 1000 MHz (asserted by call to kernel)\n boost state support:\n Supported: yes\n Active: yes\n\n ... (cropped)\n\n---\n\nBefore this change, with the CPU governor set to 'powersave',\nbasically all the CPU cores were at 1.00 GHz. I haven't listed all the\ncores, but I'm seeing very different frequencies now. I noticed that\nsome of the cores are still at 1 GHz, which is good if they're idle,\notherwise the server would get really hot!\n\n> Could you post the pgbench results for both systems? Which one is this from?\nAndres, I ran pgbench on the new server. Unfortunately the old server\nis in production and quite busy, so I can't run any benchmark over\nthere.\n\nThanks!\n\n\n",
"msg_date": "Thu, 1 Jun 2023 16:26:59 +0200",
"msg_from": "Sergio Rus <geiros@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: How to reduce latency with fast short queries in Postgresql 15.3\n on a NUMA server"
}
] |
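Editor's note on the thread above: the fix hinged on checking each core's cpufreq governor via sysfs (`/sys/devices/system/cpu/cpuN/cpufreq/scaling_governor`). As a rough illustration only (not part of the thread, and the function name and sample data are hypothetical), here is a minimal Python sketch that summarizes per-core governors from such readings and flags cores still left on `powersave`:

```python
from collections import Counter

def summarize_governors(per_cpu_governors):
    """Count how many cores run under each cpufreq governor.

    per_cpu_governors maps a CPU index to the string read from
    /sys/devices/system/cpu/cpuN/cpufreq/scaling_governor.
    Returns (Counter of governor -> core count, sorted list of
    CPU indices not set to 'performance').
    """
    counts = Counter(g.strip() for g in per_cpu_governors.values())
    # Flag the situation the thread describes: cores still on 'powersave'
    laggards = sorted(cpu for cpu, g in per_cpu_governors.items()
                      if g.strip() != "performance")
    return counts, laggards

# Hypothetical sample: CPUs 0-2 switched to 'performance', CPU 3 left behind.
sample = {0: "performance\n", 1: "performance\n",
          2: "performance\n", 3: "powersave\n"}
counts, laggards = summarize_governors(sample)
print(counts["performance"], laggards)  # → 3 [3]
```

In practice one would read the real sysfs files (or just run `cpupower frequency-info` as Sergio did); the sketch only shows the bookkeeping.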
[
{
"msg_contents": "PG V14.8-1 , client using Postgresql JDBC driver we found 40MB process memory per backend, from Operating system and memorycontext dump \"Grand total:\", both mached. But from details, we found almost of entry belong to \"CacheMemoryContext\", from this line CacheMemoryContext: 8737352 total in 42 blocks; 1021944 free (215 chunks); 7715408 used, but there are thousands of lines of it's child, the sum of blocks much more than \"8737352\" total in 42 blocks\n CacheMemoryContext: 8737352 total in 42 blocks; 1021944 free (215 chunks); 7715408 used\n CachedPlan: 4096 total in 3 blocks; 888 free (0 chunks); 3208 used: xxxxxxx\n CachedPlanSource: 2048 total in 2 blocks; 440 free (0 chunks); 1608 used: xxxxxxx\n unnamed prepared statement: 8192 total in 1 blocks; 464 free (0 chunks); 7728 used\n CachedPlan: 66560 total in 7 blocks; 15336 free (0 chunks); 51224 used: xxxxxxx\n CachedPlan: 8192 total in 4 blocks; 2456 free (0 chunks); 5736 used: xxxxxxx\n CachedPlan: 33792 total in 6 blocks; 14344 free (1 chunks); 19448 used: xxxxxxx\n ...\n SPI Plan: 1024 total in 1 blocks; 600 free (0 chunks); 424 used\n CachedPlanSource: 4096 total in 3 blocks; 1152 free (0 chunks); 2944 used: xxxxxxx\n CachedPlanQuery: 4096 total in 3 blocks; 848 free (0 chunks); 3248 used\n SPI Plan: 1024 total in 1 blocks; 600 free (0 chunks); 424 used\n CachedPlanSource: 4096 total in 3 blocks; 1472 free (0 chunks); 2624 used: xxxxxxx\n CachedPlanQuery: 4096 total in 3 blocks; 1464 free (0 chunks); 2632 used\n SPI Plan: 1024 total in 1 blocks; 600 free (0 chunks); 424 used\n index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: xxxxxxx\n index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: xxxxxxx\n index info: 2048 total in 2 blocks; 528 free (1 chunks); 1520 used: xxxxxxx\n index info: 2048 total in 2 blocks; 528 free (1 chunks); 1520 used: xxxxxxx\n index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: xxxxxxx\n index info: 2048 total in 2 
blocks; 448 free (1 chunks); 1600 used: xxxxxxx\n index info: 3072 total in 2 blocks; 696 free (1 chunks); 2376 used: xxxxxxx\n index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: xxxxxxx\n index info: 2048 total in 2 blocks; 656 free (2 chunks); 1392 used: xxxxxxx\n index info: 3072 total in 2 blocks; 1160 free (2 chunks); 1912 used: xxxxxxx\n index info: 2048 total in 2 blocks; 904 free (1 chunks); 1144 used: xxxxxxx\n index info: 2048 total in 2 blocks; 904 free (1 chunks); 1144 used: xxxxxxx\n index info: 2048 total in 2 blocks; 904 free (1 chunks); 1144 used: xxxxxxx\n WAL record construction: 49768 total in 2 blocks; 6360 free (0 chunks); 43408 used\n PrivateRefCount: 8192 total in 1 blocks; 1576 free (0 chunks); 6616 used\n MdSmgr: 32768 total in 3 blocks; 10104 free (7 chunks); 22664 used\n LOCALLOCK hash: 65536 total in 4 blocks; 18704 free (13 chunks); 46832 used\n Timezones: 104120 total in 2 blocks; 2616 free (0 chunks); 101504 used\n ErrorContext: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used\nGrand total: 34558032 bytes in 8798 blocks; 9206536 free (2484 chunks); 25351496 used\n\nOur application use Postgresql JDBC driver with default parameters(maxprepared statement 256), there are many triggers, functions in this database, and a few functions run sql by an extension pg_background. We have thousands of connections and have big concern why have thousands of entrys of cached SQL ? that will consume huge memory , anyway to limit the cached plan entry to save memory consumption? Or it looks like an abnormal behavior or bug to see so many cached plan lines.\n Attached please see details, the detail of SQL got masked sensitive information, this backend has huge lines so using MemoryContextStatsDetail(TopMemoryContext) instead to dump all lines.",
"msg_date": "Thu, 1 Jun 2023 03:36:13 +0000",
"msg_from": "\"James Pang (chaolpan)\" <chaolpan@cisco.com>",
"msg_from_op": true,
"msg_subject": "thousands of CachedPlan entry per backend"
},
{
"msg_contents": "On Thu, 2023-06-01 at 03:36 +0000, James Pang (chaolpan) wrote:\n> PG V14.8-1 , client using Postgresql JDBC driver we found 40MB process memory per\n> backend, from Operating system and memorycontext dump “Grand total:”, both mached.\n> But from details, we found almost of entry belong to “CacheMemoryContext”,\n> from this line CacheMemoryContext: 8737352 total in 42 blocks; 1021944 free (215 chunks); 7715408 used,\n> but there are thousands of lines of it’s child, the sum of blocks much more than “8737352” total in 42 blocks\n> \n> Our application use Postgresql JDBC driver with default parameters(maxprepared statement 256),\n> there are many triggers, functions in this database, and a few functions run sql by an extension\n> pg_background. We have thousands of connections and have big concern why have thousands of entrys\n> of cached SQL ? that will consume huge memory , anyway to limit the cached plan entry to save memory\n> consumption? Or it looks like an abnormal behavior or bug to see so many cached plan lines.\n\nIf you have thousands of connections, that's your problem. You need effective connection pooling.\nThen 40MB per backend won't be a problem at all. Having thousands of connections will cause\nother, worse, problems for you.\n\nSee for example \nhttps://www.cybertec-postgresql.com/en/tuning-max_connections-in-postgresql/\n\nIf you want to use functions, but don't want to benefit from plan caching, you can set\nthe configuration parameter \"plan_cache_mode\" to \"force_custom_plan\".\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Thu, 01 Jun 2023 08:53:41 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: thousands of CachedPlan entry per backend"
},
{
"msg_contents": "Hi\n\nčt 1. 6. 2023 v 8:53 odesílatel Laurenz Albe <laurenz.albe@cybertec.at>\nnapsal:\n\n> On Thu, 2023-06-01 at 03:36 +0000, James Pang (chaolpan) wrote:\n> > PG V14.8-1 , client using Postgresql JDBC driver we found 40MB process\n> memory per\n> > backend, from Operating system and memorycontext dump “Grand total:”,\n> both mached.\n> > But from details, we found almost of entry belong to\n> “CacheMemoryContext”,\n> > from this line CacheMemoryContext: 8737352 total in 42 blocks; 1021944\n> free (215 chunks); 7715408 used,\n> > but there are thousands of lines of it’s child, the sum of blocks much\n> more than “8737352” total in 42 blocks\n> >\n> > Our application use Postgresql JDBC driver with default\n> parameters(maxprepared statement 256),\n> > there are many triggers, functions in this database, and a few functions\n> run sql by an extension\n> > pg_background. We have thousands of connections and have big concern\n> why have thousands of entrys\n> > of cached SQL ? that will consume huge memory , anyway to limit the\n> cached plan entry to save memory\n> > consumption? Or it looks like an abnormal behavior or bug to see so\n> many cached plan lines.\n>\n> If you have thousands of connections, that's your problem. You need\n> effective connection pooling.\n> Then 40MB per backend won't be a problem at all. Having thousands of\n> connections will cause\n> other, worse, problems for you.\n>\n> See for example\n>\n> https://www.cybertec-postgresql.com/en/tuning-max_connections-in-postgresql/\n>\n> If you want to use functions, but don't want to benefit from plan caching,\n> you can set\n> the configuration parameter \"plan_cache_mode\" to \"force_custom_plan\".\n>\n\nThe problem with too big of cached metadata can be forced by too long\nsessions too.\n\nIn this case it is good to throw a session (connect) after 1hour or maybe\nless.\n\nRegards\n\nPavel\n\n\n>\n> Yours,\n> Laurenz Albe\n>\n>\n>\n\nHičt 1. 6. 
2023 v 8:53 odesílatel Laurenz Albe <laurenz.albe@cybertec.at> napsal:On Thu, 2023-06-01 at 03:36 +0000, James Pang (chaolpan) wrote:\n> PG V14.8-1 , client using Postgresql JDBC driver we found 40MB process memory per\n> backend, from Operating system and memorycontext dump “Grand total:”, both mached.\n> But from details, we found almost of entry belong to “CacheMemoryContext”,\n> from this line CacheMemoryContext: 8737352 total in 42 blocks; 1021944 free (215 chunks); 7715408 used,\n> but there are thousands of lines of it’s child, the sum of blocks much more than “8737352” total in 42 blocks\n> \n> Our application use Postgresql JDBC driver with default parameters(maxprepared statement 256),\n> there are many triggers, functions in this database, and a few functions run sql by an extension\n> pg_background. We have thousands of connections and have big concern why have thousands of entrys\n> of cached SQL ? that will consume huge memory , anyway to limit the cached plan entry to save memory\n> consumption? Or it looks like an abnormal behavior or bug to see so many cached plan lines.\n\nIf you have thousands of connections, that's your problem. You need effective connection pooling.\nThen 40MB per backend won't be a problem at all. Having thousands of connections will cause\nother, worse, problems for you.\n\nSee for example \nhttps://www.cybertec-postgresql.com/en/tuning-max_connections-in-postgresql/\n\nIf you want to use functions, but don't want to benefit from plan caching, you can set\nthe configuration parameter \"plan_cache_mode\" to \"force_custom_plan\".The problem with too big of cached metadata can be forced by too long sessions too.In this case it is good to throw a session (connect) after 1hour or maybe less.RegardsPavel \n\nYours,\nLaurenz Albe",
"msg_date": "Thu, 1 Jun 2023 09:18:31 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: thousands of CachedPlan entry per backend"
},
{
"msg_contents": "Yes, too many cached metadata and we are thinking of a workaround to disconnect the sessions timely.\r\nIn addition, based on the dumped memory context, I have questions\r\n 1) we found thousands of cached plan , since JDBC driver only allow max 256 cached prepared statements, how backend cache so many sql plans. If we have one function, when application call that function will make backend to cache every SQL statement plan in that function too? and for table triggers, have similar caching behavior ?\r\n 2) from this line, we saw total 42 blocks ,215 chunks CacheMemoryContext: 8737352 total in 42 blocks; 1021944 free (215 chunks); 7715408 used,\r\n But from sum of it’s child level entrys, total sum(child lines) block ,trunks show much more than “CacheMemoryContext, is expected to see that?\r\n\r\n\r\nThanks,\r\n\r\nJames\r\n\r\n\r\n\r\n\r\nFrom: Pavel Stehule <pavel.stehule@gmail.com>\r\nSent: Thursday, June 1, 2023 3:19 PM\r\nTo: Laurenz Albe <laurenz.albe@cybertec.at>\r\nCc: James Pang (chaolpan) <chaolpan@cisco.com>; pgsql-performance@lists.postgresql.org\r\nSubject: Re: thousands of CachedPlan entry per backend\r\n\r\nHi\r\n\r\nčt 1. 6. 
2023 v 8:53 odesílatel Laurenz Albe <laurenz.albe@cybertec.at<mailto:laurenz.albe@cybertec.at>> napsal:\r\nOn Thu, 2023-06-01 at 03:36 +0000, James Pang (chaolpan) wrote:\r\n> PG V14.8-1 , client using Postgresql JDBC driver we found 40MB process memory per\r\n> backend, from Operating system and memorycontext dump “Grand total:”, both mached.\r\n> But from details, we found almost of entry belong to “CacheMemoryContext”,\r\n> from this line CacheMemoryContext: 8737352 total in 42 blocks; 1021944 free (215 chunks); 7715408 used,\r\n> but there are thousands of lines of it’s child, the sum of blocks much more than “8737352” total in 42 blocks\r\n>\r\n> Our application use Postgresql JDBC driver with default parameters(maxprepared statement 256),\r\n> there are many triggers, functions in this database, and a few functions run sql by an extension\r\n> pg_background. We have thousands of connections and have big concern why have thousands of entrys\r\n> of cached SQL ? that will consume huge memory , anyway to limit the cached plan entry to save memory\r\n> consumption? Or it looks like an abnormal behavior or bug to see so many cached plan lines.\r\n\r\nIf you have thousands of connections, that's your problem. You need effective connection pooling.\r\nThen 40MB per backend won't be a problem at all. 
Having thousands of connections will cause\r\nother, worse, problems for you.\r\n\r\nSee for example\r\nhttps://www.cybertec-postgresql.com/en/tuning-max_connections-in-postgresql/\r\n\r\nIf you want to use functions, but don't want to benefit from plan caching, you can set\r\nthe configuration parameter \"plan_cache_mode\" to \"force_custom_plan\".\r\n\r\nThe problem with too big of cached metadata can be forced by too long sessions too.\r\n\r\nIn this case it is good to throw a session (connect) after 1hour or maybe less.\r\n\r\nRegards\r\n\r\nPavel\r\n\r\n\r\nYours,\r\nLaurenz Albe\r\n\r\n\n\n\n\n\n\n\n\n\n Yes, too many cached metadata and we are thinking of a workaround to disconnect the sessions timely.\nIn addition, based on the dumped memory context, I have questions\r\n\n 1) we found thousands of cached plan , since JDBC driver only allow max 256 cached prepared statements, how backend cache so many sql plans. If we have one function, when application call that function will make backend to cache every\r\n SQL statement plan in that function too? and for table triggers, have similar caching behavior ?\r\n\n 2) from this line, we saw total 42 blocks ,215 chunks CacheMemoryContext: 8737352 total in 42 blocks; 1021944 free (215 chunks); 7715408 used,\n But from sum of it’s child level entrys, total sum(child lines) block ,trunks show much more than “CacheMemoryContext, is expected to see that?\n \n \nThanks,\n \nJames\n \n \n \n \n\nFrom: Pavel Stehule <pavel.stehule@gmail.com> \nSent: Thursday, June 1, 2023 3:19 PM\nTo: Laurenz Albe <laurenz.albe@cybertec.at>\nCc: James Pang (chaolpan) <chaolpan@cisco.com>; pgsql-performance@lists.postgresql.org\nSubject: Re: thousands of CachedPlan entry per backend\n\n \n\n\nHi\n\n \n\n\nčt 1. 6. 
2023 v 8:53 odesílatel Laurenz Albe <laurenz.albe@cybertec.at> napsal:\n\n\nOn Thu, 2023-06-01 at 03:36 +0000, James Pang (chaolpan) wrote:\r\n> PG V14.8-1 , client using Postgresql JDBC driver we found 40MB process memory per\r\n> backend, from Operating system and memorycontext dump “Grand total:”, both mached.\r\n> But from details, we found almost of entry belong to “CacheMemoryContext”,\r\n> from this line CacheMemoryContext: 8737352 total in 42 blocks; 1021944 free (215 chunks); 7715408 used,\r\n> but there are thousands of lines of it’s child, the sum of blocks much more than “8737352” total in 42 blocks\r\n> \r\n> Our application use Postgresql JDBC driver with default parameters(maxprepared statement 256),\r\n> there are many triggers, functions in this database, and a few functions run sql by an extension\r\n> pg_background. We have thousands of connections and have big concern why have thousands of entrys\r\n> of cached SQL ? that will consume huge memory , anyway to limit the cached plan entry to save memory\r\n> consumption? Or it looks like an abnormal behavior or bug to see so many cached plan lines.\n\r\nIf you have thousands of connections, that's your problem. You need effective connection pooling.\r\nThen 40MB per backend won't be a problem at all. Having thousands of connections will cause\r\nother, worse, problems for you.\n\r\nSee for example \nhttps://www.cybertec-postgresql.com/en/tuning-max_connections-in-postgresql/\n\r\nIf you want to use functions, but don't want to benefit from plan caching, you can set\r\nthe configuration parameter \"plan_cache_mode\" to \"force_custom_plan\".\n\n\n \n\n\nThe problem with too big of cached metadata can be forced by too long sessions too.\n\n\n \n\n\nIn this case it is good to throw a session (connect) after 1hour or maybe less.\n\n\n \n\n\nRegards\n\n\n \n\n\nPavel\n\n\n \n\n\n\r\nYours,\r\nLaurenz Albe",
"msg_date": "Thu, 1 Jun 2023 08:50:44 +0000",
"msg_from": "\"James Pang (chaolpan)\" <chaolpan@cisco.com>",
"msg_from_op": true,
"msg_subject": "RE: thousands of CachedPlan entry per backend"
},
{
"msg_contents": "On Thu, 2023-06-01 at 08:50 +0000, James Pang (chaolpan) wrote:\n> we found thousands of cached plan , since JDBC driver only allow max 256 cached\n> prepared statements, how backend cache so many sql plans. If we have one function,\n> when application call that function will make backend to cache every SQL statement\n> plan in that function too? and for table triggers, have similar caching behavior ?\n\nYes, as long as the functions are written in PL/pgSQL.\nIt only affects static SQL, that is, nothing that is run with EXECUTE.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Thu, 01 Jun 2023 14:48:23 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: thousands of CachedPlan entry per backend"
},
{
"msg_contents": " these lines about \"SPI Plan\" are these PL/PGSQL functions related SPI_prepare plan entry, right? Possible to set a GUC to max(cached plan) per backend ? \r\n\r\n SPI Plan: 1024 total in 1 blocks; 600 free (0 chunks); 424 used\r\n CachedPlan: 2048 total in 2 blocks; 304 free (1 chunks); 1744 used: xxxxxxx\r\n CachedPlanSource: 2048 total in 2 blocks; 200 free (0 chunks); 1848 used: xxxxxxx\r\n CachedPlanQuery: 2048 total in 2 blocks; 704 free (0 chunks); 1344 used\r\n\r\nThanks,\r\n\r\nJames\r\n\r\n-----Original Message-----\r\nFrom: Laurenz Albe <laurenz.albe@cybertec.at> \r\nSent: Thursday, June 1, 2023 8:48 PM\r\nTo: James Pang (chaolpan) <chaolpan@cisco.com>; Pavel Stehule <pavel.stehule@gmail.com>\r\nCc: pgsql-performance@lists.postgresql.org\r\nSubject: Re: thousands of CachedPlan entry per backend\r\n\r\nOn Thu, 2023-06-01 at 08:50 +0000, James Pang (chaolpan) wrote:\r\n> we found thousands of cached plan , since JDBC driver only allow max \r\n> 256 cached prepared statements, how backend cache so many sql plans. \r\n> If we have one function, when application call that function will make \r\n> backend to cache every SQL statement plan in that function too? and for table triggers, have similar caching behavior ?\r\n\r\nYes, as long as the functions are written in PL/pgSQL.\r\nIt only affects static SQL, that is, nothing that is run with EXECUTE.\r\n\r\nYours,\r\nLaurenz Albe\r\n",
"msg_date": "Fri, 2 Jun 2023 01:45:26 +0000",
"msg_from": "\"James Pang (chaolpan)\" <chaolpan@cisco.com>",
"msg_from_op": true,
"msg_subject": "RE: thousands of CachedPlan entry per backend"
},
{
"msg_contents": "pá 2. 6. 2023 v 3:45 odesílatel James Pang (chaolpan) <chaolpan@cisco.com>\nnapsal:\n\n> these lines about \"SPI Plan\" are these PL/PGSQL functions related\n> SPI_prepare plan entry, right? Possible to set a GUC to max(cached plan)\n> per backend ?\n>\n\nThere is no limit for size of system cache. You can use pgbouncer that\nimplicitly refresh session after 1 hour (and this limit can be reduced)\n\nRegards\n\nPavel\n\n\n\n\n>\n> SPI Plan: 1024 total in 1 blocks; 600 free (0 chunks); 424 used\n> CachedPlan: 2048 total in 2 blocks; 304 free (1 chunks); 1744 used:\n> xxxxxxx\n> CachedPlanSource: 2048 total in 2 blocks; 200 free (0 chunks); 1848\n> used: xxxxxxx\n> CachedPlanQuery: 2048 total in 2 blocks; 704 free (0 chunks); 1344\n> used\n>\n> Thanks,\n>\n> James\n>\n> -----Original Message-----\n> From: Laurenz Albe <laurenz.albe@cybertec.at>\n> Sent: Thursday, June 1, 2023 8:48 PM\n> To: James Pang (chaolpan) <chaolpan@cisco.com>; Pavel Stehule <\n> pavel.stehule@gmail.com>\n> Cc: pgsql-performance@lists.postgresql.org\n> Subject: Re: thousands of CachedPlan entry per backend\n>\n> On Thu, 2023-06-01 at 08:50 +0000, James Pang (chaolpan) wrote:\n> > we found thousands of cached plan , since JDBC driver only allow max\n> > 256 cached prepared statements, how backend cache so many sql plans.\n> > If we have one function, when application call that function will make\n> > backend to cache every SQL statement plan in that function too? and\n> for table triggers, have similar caching behavior ?\n>\n> Yes, as long as the functions are written in PL/pgSQL.\n> It only affects static SQL, that is, nothing that is run with EXECUTE.\n>\n> Yours,\n> Laurenz Albe\n>\n\npá 2. 6. 2023 v 3:45 odesílatel James Pang (chaolpan) <chaolpan@cisco.com> napsal: these lines about \"SPI Plan\" are these PL/PGSQL functions related SPI_prepare plan entry, right? Possible to set a GUC to max(cached plan) per backend ? There is no limit for size of system cache. 
You can use pgbouncer that implicitly refresh session after 1 hour (and this limit can be reduced)RegardsPavel \n\n SPI Plan: 1024 total in 1 blocks; 600 free (0 chunks); 424 used\n CachedPlan: 2048 total in 2 blocks; 304 free (1 chunks); 1744 used: xxxxxxx\n CachedPlanSource: 2048 total in 2 blocks; 200 free (0 chunks); 1848 used: xxxxxxx\n CachedPlanQuery: 2048 total in 2 blocks; 704 free (0 chunks); 1344 used\n\nThanks,\n\nJames\n\n-----Original Message-----\nFrom: Laurenz Albe <laurenz.albe@cybertec.at> \nSent: Thursday, June 1, 2023 8:48 PM\nTo: James Pang (chaolpan) <chaolpan@cisco.com>; Pavel Stehule <pavel.stehule@gmail.com>\nCc: pgsql-performance@lists.postgresql.org\nSubject: Re: thousands of CachedPlan entry per backend\n\nOn Thu, 2023-06-01 at 08:50 +0000, James Pang (chaolpan) wrote:\n> we found thousands of cached plan , since JDBC driver only allow max \n> 256 cached prepared statements, how backend cache so many sql plans. \n> If we have one function, when application call that function will make \n> backend to cache every SQL statement plan in that function too? and for table triggers, have similar caching behavior ?\n\nYes, as long as the functions are written in PL/pgSQL.\nIt only affects static SQL, that is, nothing that is run with EXECUTE.\n\nYours,\nLaurenz Albe",
"msg_date": "Fri, 2 Jun 2023 06:56:51 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: thousands of CachedPlan entry per backend"
},
{
"msg_contents": "these lines about \"SPI Plan\" are these PL/PGSQL functions related through SPI_prepare plan entry, right?\r\n\r\n\r\nSPI Plan: 1024 total in 1 blocks; 600 free (0 chunks); 424 used\r\n\r\n CachedPlan: 2048 total in 2 blocks; 304 free (1 chunks); 1744 used: xxxxxxx\r\n\r\n CachedPlanSource: 2048 total in 2 blocks; 200 free (0 chunks); 1848 used: xxxxxxx\r\n\r\n CachedPlanQuery: 2048 total in 2 blocks; 704 free (0 chunks); 1344 used\r\n\r\nFrom: Pavel Stehule <pavel.stehule@gmail.com>\r\nSent: Friday, June 2, 2023 12:57 PM\r\nTo: James Pang (chaolpan) <chaolpan@cisco.com>\r\nCc: Laurenz Albe <laurenz.albe@cybertec.at>; pgsql-performance@lists.postgresql.org\r\nSubject: Re: thousands of CachedPlan entry per backend\r\n\r\n\r\n\r\npá 2. 6. 2023 v 3:45 odesílatel James Pang (chaolpan) <chaolpan@cisco.com<mailto:chaolpan@cisco.com>> napsal:\r\n these lines about \"SPI Plan\" are these PL/PGSQL functions related SPI_prepare plan entry, right? Possible to set a GUC to max(cached plan) per backend ?\r\n\r\nThere is no limit for size of system cache. 
You can use pgbouncer that implicitly refresh session after 1 hour (and this limit can be reduced)\r\n\r\nRegards\r\n\r\nPavel\r\n\r\n\r\n\r\n\r\n SPI Plan: 1024 total in 1 blocks; 600 free (0 chunks); 424 used\r\n CachedPlan: 2048 total in 2 blocks; 304 free (1 chunks); 1744 used: xxxxxxx\r\n CachedPlanSource: 2048 total in 2 blocks; 200 free (0 chunks); 1848 used: xxxxxxx\r\n CachedPlanQuery: 2048 total in 2 blocks; 704 free (0 chunks); 1344 used\r\n\r\nThanks,\r\n\r\nJames\r\n\r\n-----Original Message-----\r\nFrom: Laurenz Albe <laurenz.albe@cybertec.at<mailto:laurenz.albe@cybertec.at>>\r\nSent: Thursday, June 1, 2023 8:48 PM\r\nTo: James Pang (chaolpan) <chaolpan@cisco.com<mailto:chaolpan@cisco.com>>; Pavel Stehule <pavel.stehule@gmail.com<mailto:pavel.stehule@gmail.com>>\r\nCc: pgsql-performance@lists.postgresql.org<mailto:pgsql-performance@lists.postgresql.org>\r\nSubject: Re: thousands of CachedPlan entry per backend\r\n\r\nOn Thu, 2023-06-01 at 08:50 +0000, James Pang (chaolpan) wrote:\r\n> we found thousands of cached plan , since JDBC driver only allow max\r\n> 256 cached prepared statements, how backend cache so many sql plans.\r\n> If we have one function, when application call that function will make\r\n> backend to cache every SQL statement plan in that function too? 
and for table triggers, have similar caching behavior ?\r\n\r\nYes, as long as the functions are written in PL/pgSQL.\r\nIt only affects static SQL, that is, nothing that is run with EXECUTE.\r\n\r\nYours,\r\nLaurenz Albe\r\n\n\n\n\n\n\n\n\n\nthese lines about \"SPI Plan\" are these PL/PGSQL functions related through SPI_prepare plan entry, right?\n \nSPI Plan: 1024 total in 1 blocks; 600 free (0 chunks); 424 used\n CachedPlan: 2048 total in 2 blocks; 304 free (1 chunks); 1744 used: xxxxxxx\n CachedPlanSource: 2048 total in 2 blocks; 200 free (0 chunks); 1848 used: xxxxxxx\n CachedPlanQuery: 2048 total in 2 blocks; 704 free (0 chunks); 1344 used\n \n\nFrom: Pavel Stehule <pavel.stehule@gmail.com> \nSent: Friday, June 2, 2023 12:57 PM\nTo: James Pang (chaolpan) <chaolpan@cisco.com>\nCc: Laurenz Albe <laurenz.albe@cybertec.at>; pgsql-performance@lists.postgresql.org\nSubject: Re: thousands of CachedPlan entry per backend\n\n \n\n\n \n\n \n\n\npá 2. 6. 2023 v 3:45 odesílatel James Pang (chaolpan) <chaolpan@cisco.com> napsal:\n\n\n these lines about \"SPI Plan\" are these PL/PGSQL functions related SPI_prepare plan entry, right? Possible to set a GUC to max(cached plan) per backend ?\r\n\n\n\n \n\n\nThere is no limit for size of system cache. 
You can use pgbouncer that implicitly refresh session after 1 hour (and this limit can be reduced)\n\n\n \n\n\nRegards\n\n\n \n\n\nPavel\n\n\n \n\n\n \n\n\n \n\n\n\r\n SPI Plan: 1024 total in 1 blocks; 600 free (0 chunks); 424 used\r\n CachedPlan: 2048 total in 2 blocks; 304 free (1 chunks); 1744 used: xxxxxxx\r\n CachedPlanSource: 2048 total in 2 blocks; 200 free (0 chunks); 1848 used: xxxxxxx\r\n CachedPlanQuery: 2048 total in 2 blocks; 704 free (0 chunks); 1344 used\n\r\nThanks,\n\r\nJames\n\r\n-----Original Message-----\r\nFrom: Laurenz Albe <laurenz.albe@cybertec.at>\r\n\r\nSent: Thursday, June 1, 2023 8:48 PM\r\nTo: James Pang (chaolpan) <chaolpan@cisco.com>; Pavel Stehule <pavel.stehule@gmail.com>\r\nCc: pgsql-performance@lists.postgresql.org\r\nSubject: Re: thousands of CachedPlan entry per backend\n\r\nOn Thu, 2023-06-01 at 08:50 +0000, James Pang (chaolpan) wrote:\r\n> we found thousands of cached plan , since JDBC driver only allow max \r\n> 256 cached prepared statements, how backend cache so many sql plans. \r\n> If we have one function, when application call that function will make \r\n> backend to cache every SQL statement plan in that function too? and for table triggers, have similar caching behavior ?\n\r\nYes, as long as the functions are written in PL/pgSQL.\r\nIt only affects static SQL, that is, nothing that is run with EXECUTE.\n\r\nYours,\r\nLaurenz Albe",
"msg_date": "Fri, 2 Jun 2023 09:06:12 +0000",
"msg_from": "\"James Pang (chaolpan)\" <chaolpan@cisco.com>",
"msg_from_op": true,
"msg_subject": "RE: thousands of CachedPlan entry per backend"
},
{
"msg_contents": "On Thu, Jun 1, 2023 at 4:51 AM James Pang (chaolpan) <chaolpan@cisco.com>\nwrote:\n\n> 2) from this line, we saw total 42 blocks ,215 chunks\n> CacheMemoryContext: 8737352 total in 42 blocks; 1021944 free (215 chunks);\n> 7715408 used,\n>\n> But from sum of it’s child level entrys, total sum(child lines)\n> block ,trunks show much more than “CacheMemoryContext, is expected to see\n> that?\n>\n\nYes, that is expected. The parent context reports only its own direct\nmemory usage and blocks. It does not include the sum of memory usage of\nits child contexts.\n\nCheers,\n\nJeff\n\n>\n\nOn Thu, Jun 1, 2023 at 4:51 AM James Pang (chaolpan) <chaolpan@cisco.com> wrote:\n\n\n 2) from this line, we saw total 42 blocks ,215 chunks CacheMemoryContext: 8737352 total in 42 blocks; 1021944 free (215 chunks); 7715408 used,\n But from sum of it’s child level entrys, total sum(child lines) block ,trunks show much more than “CacheMemoryContext, is expected to see that?Yes, that is expected. The parent context reports only its own direct memory usage and blocks. It does not include the sum of memory usage of its child contexts.Cheers,Jeff",
"msg_date": "Fri, 2 Jun 2023 15:17:13 -0400",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: thousands of CachedPlan entry per backend"
}
] |
[
{
"msg_contents": "Hi Listers,\n\n\nWe would like to determine how long it takes for each SQL statement to\nexecute within a long-running procedure. I tried to see if\npg_stat_statements could offer any insight into the matter. But I was\nunable to locate any. Is this even possible? How can we also determine the\nprecise SQL execution plan used when a SQL is run from an application? The\nquery runs without issue when we try to execute it directly, but it takes\nlonger to run when an application is used.\n\n\n\nRegards,\n\nSatalabha\n\nHi Listers,We would like to determine how long it takes for each SQL statement to execute within a long-running procedure. I tried to see if pg_stat_statements could offer any insight into the matter. But I was unable to locate any. Is this even possible? How can we also determine the precise SQL execution plan used when a SQL is run from an application? The query runs without issue when we try to execute it directly, but it takes longer to run when an application is used.Regards,Satalabha",
"msg_date": "Sat, 3 Jun 2023 12:48:37 +0530",
"msg_from": "Satalabaha Postgres <satalabaha.postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Understand time taken by individual SQL statements in a procedure"
},
{
"msg_contents": "Hi,\n\nOn Sat, Jun 03, 2023 at 12:48:37PM +0530, Satalabaha Postgres wrote:\n> Hi Listers,\n>\n> We would like to determine how long it takes for each SQL statement to\n> execute within a long-running procedure. I tried to see if\n> pg_stat_statements could offer any insight into the matter. But I was\n> unable to locate any. Is this even possible?\n\npg_stat_statements can tell you about queries executed inside a procedure, as\nlong as you set pg_stat_statements.track = 'all':\n\n rjuju=# select pg_stat_statements_reset();\n pg_stat_statements_reset\n--------------------------\n\n(1 row)\n\nrjuju=# set pg_stat_statements.track = 'all';\nSET\n\nrjuju=# do\n$$\nbegin\nperform count(*) from pg_class;\nperform pg_sleep(2);\nend;\n$$ language plpgsql;\nDO\n\nrjuju=# select query, total_exec_time from pg_stat_statements;\n query | total_exec_time\n--------------------------------------+---------------------\n SELECT count(*) from pg_class | 0.13941699999999999\n do +| 2001.903792\n $$ +|\n begin +|\n perform count(*) from pg_class; +|\n perform pg_sleep(2); +|\n end; +|\n $$ language plpgsql |\n SELECT pg_sleep($1) | 2000.227249\n[...]\n\nIf that's not enough, and if your procedures are written in plpgsql you could\nalso look at plpgsql_check: https://github.com/okbob/plpgsql_check. It has an\nintegrated profiler (see https://github.com/okbob/plpgsql_check#profiler) that\nworks very well.\n\n> unable to locate any. Is this even possible? How can we also determine the\n> precise SQL execution plan used when a SQL is run from an application? The\n> query runs without issue when we try to execute it directly, but it takes\n> longer to run when an application is used.\n\nYou could look at auto_explain for that:\nhttps://www.postgresql.org/docs/current/auto-explain.html.\n\n\n",
"msg_date": "Sat, 3 Jun 2023 15:36:04 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Understand time taken by individual SQL statements in a procedure"
},
{
"msg_contents": "Thanks Julien.\n\nRegards,\n\nSatalabha\n\n\nOn Sat, 3 Jun 2023 at 13:06, Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> Hi,\n>\n> On Sat, Jun 03, 2023 at 12:48:37PM +0530, Satalabaha Postgres wrote:\n> > Hi Listers,\n> >\n> > We would like to determine how long it takes for each SQL statement to\n> > execute within a long-running procedure. I tried to see if\n> > pg_stat_statements could offer any insight into the matter. But I was\n> > unable to locate any. Is this even possible?\n>\n> pg_stat_statements can tell you about queries executed inside a procedure,\n> as\n> long as you set pg_stat_statements.track = 'all':\n>\n> rjuju=# select pg_stat_statements_reset();\n> pg_stat_statements_reset\n> --------------------------\n>\n> (1 row)\n>\n> rjuju=# set pg_stat_statements.track = 'all';\n> SET\n>\n> rjuju=# do\n> $$\n> begin\n> perform count(*) from pg_class;\n> perform pg_sleep(2);\n> end;\n> $$ language plpgsql;\n> DO\n>\n> rjuju=# select query, total_exec_time from pg_stat_statements;\n> query | total_exec_time\n> --------------------------------------+---------------------\n> SELECT count(*) from pg_class | 0.13941699999999999\n> do +| 2001.903792\n> $$ +|\n> begin +|\n> perform count(*) from pg_class; +|\n> perform pg_sleep(2); +|\n> end; +|\n> $$ language plpgsql |\n> SELECT pg_sleep($1) | 2000.227249\n> [...]\n>\n> If that's not enough, and if your procedures are written in plpgsql you\n> could\n> also look at plpgsql_check: https://github.com/okbob/plpgsql_check. It\n> has an\n> integrated profiler (see https://github.com/okbob/plpgsql_check#profiler)\n> that\n> works very well.\n>\n> > unable to locate any. Is this even possible? 
How can we also determine\n> the\n> > precise SQL execution plan used when a SQL is run from an application?\n> The\n> > query runs without issue when we try to execute it directly, but it takes\n> > longer to run when an application is used.\n>\n> You could look at auto_explain for that:\n> https://www.postgresql.org/docs/current/auto-explain.html.\n>\n\nThanks Julien.Regards,SatalabhaOn Sat, 3 Jun 2023 at 13:06, Julien Rouhaud <rjuju123@gmail.com> wrote:Hi,\n\nOn Sat, Jun 03, 2023 at 12:48:37PM +0530, Satalabaha Postgres wrote:\n> Hi Listers,\n>\n> We would like to determine how long it takes for each SQL statement to\n> execute within a long-running procedure. I tried to see if\n> pg_stat_statements could offer any insight into the matter. But I was\n> unable to locate any. Is this even possible?\n\npg_stat_statements can tell you about queries executed inside a procedure, as\nlong as you set pg_stat_statements.track = 'all':\n\n rjuju=# select pg_stat_statements_reset();\n pg_stat_statements_reset\n--------------------------\n\n(1 row)\n\nrjuju=# set pg_stat_statements.track = 'all';\nSET\n\nrjuju=# do\n$$\nbegin\nperform count(*) from pg_class;\nperform pg_sleep(2);\nend;\n$$ language plpgsql;\nDO\n\nrjuju=# select query, total_exec_time from pg_stat_statements;\n query | total_exec_time\n--------------------------------------+---------------------\n SELECT count(*) from pg_class | 0.13941699999999999\n do +| 2001.903792\n $$ +|\n begin +|\n perform count(*) from pg_class; +|\n perform pg_sleep(2); +|\n end; +|\n $$ language plpgsql |\n SELECT pg_sleep($1) | 2000.227249\n[...]\n\nIf that's not enough, and if your procedures are written in plpgsql you could\nalso look at plpgsql_check: https://github.com/okbob/plpgsql_check. It has an\nintegrated profiler (see https://github.com/okbob/plpgsql_check#profiler) that\nworks very well.\n\n> unable to locate any. Is this even possible? 
How can we also determine the\n> precise SQL execution plan used when a SQL is run from an application? The\n> query runs without issue when we try to execute it directly, but it takes\n> longer to run when an application is used.\n\nYou could look at auto_explain for that:\nhttps://www.postgresql.org/docs/current/auto-explain.html.",
"msg_date": "Sat, 3 Jun 2023 23:16:30 +0530",
"msg_from": "Satalabaha Postgres <satalabaha.postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Understand time taken by individual SQL statements in a procedure"
}
] |
[
{
"msg_contents": "Hi Listers,\n\nDB : postgres 14.\n\nWe are experiencing weird performance issue of one simple insert statement\ntaking several minutes to insert data. The application calls insert\nstatement via stored procedure show mentioned below.\n\nThe select query in the insert returns about 499 rows. However, this insert\nstatement when executed from application user i.e. schema1_u takes close to\n 8 minutes. When the same insert statement gets executed as postgres user\nit takes less than 280 ms. Both the executions use the same execution plan\nwith only difference that when schema1_u executes the SQL, we observe\n\"Trigger for constraint fk_con_tablea: time=426499.314 calls=499\" taking\nmore time. Both the parent and child tables are not big in size. There is\nno table bloat etc for both of these tables. Below are the details.\nIs there any way we can identify why as postgres user the insert statement\nworks fine and why not with application user schema1_u?\n\nStored Procedure:\n====================\n\nCREATE OR REPLACE FUNCTION schema1.ins_staging_fn(parfileid double\nprecision, parcreatedby text)\n RETURNS void\n LANGUAGE plpgsql\nAS $function$\n BEGIN\n insert into table_a\n (\n ROWVERSION,\n CREATED,\n ISDELETED,\n ISIGNORED,\n IMPORTEDACCOUNTCODE,\n IMPORTEDUNITCODE,\n BEGINNINGBALANCE,\n ENDINGBALANCE,\n CREATEDBY,\n FILEID\n )\n select to_timestamp(To_char(clock_timestamp(),'DD-MON-YY\nHH.MI.SS.FF4 AM'),'DD-MON-YY HH.MI.SS.FF4 AM'),\n to_timestamp(To_char(clock_timestamp() at time zone\n'utc', 'DD-MON-YY HH.MI.SS.MS AM'),'DD-MON-YY HH.MI.SS.FF4 AM'),\n false,\n false,\n IMPORTEDACCOUNTCODE,\n IMPORTEDUNITCODE,\n BEGINNINGBALANCE,\n ENDINGBALANCE,\n parCreatedBy,\n FILEID\n from STAGING_table_a\n where FILEID = parFileId;\n\n END;\n $function$\n;\n\n\n\nCount of tables:\n=================\n\nselect count(*) from schema1.table_a;\n count\n-------\n 67019\n\nselect count(*) from schema1.table_b;\n count\n-------\n 20\n\n\n\ncreate trigger 
table_a_trigger before\ninsert\n on\n schema1.table_a for each row execute function\nschema1.\"table_a_trigger$table_a\"();\n\n\n\nCREATE OR REPLACE FUNCTION schema1.\"table_a_trigger$table_a\"()\n RETURNS trigger\n LANGUAGE plpgsql\nAS $function$\n BEGIN\n IF COALESCE(new.id, - 1) = - 1 THEN\n SELECT\n nextval('table_a_sq')\n INTO STRICT new.id;\n END IF;\n RETURN NEW;\n END;\n $function$;\n\nALTER TABLE schema1.table_a ADD CONSTRAINT fk_con_tablea FOREIGN KEY\n(fileid) REFERENCES schema1.table_b(id);\n\n\n POSTGRES\nQUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------\n Insert on table_a (cost=0.00..431.80 rows=0 width=0) (actual\ntime=35.806..35.807 rows=0 loops=1)\n Buffers: shared hit=9266 written=19\n -> Subquery Scan on \"*SELECT*\" (cost=0.00..431.80 rows=499 width=771)\n(actual time=0.427..5.654 rows=499 loops=1)\n Buffers: shared hit=162\n -> Seq Scan on staging_table_a (cost=0.00..414.33 rows=499\nwidth=83) (actual time=0.416..4.286 rows=499 loops=1)\n Filter: (fileid = 37)\n Rows Removed by Filter: 18989\n Buffers: shared hit=162\n Settings: effective_cache_size = '266500832kB', maintenance_io_concurrency\n= '1', max_parallel_workers = '24', search_path = 'schema1, public'\n Planning Time: 0.092 ms\n Trigger for constraint fk_con_tablea: time=6.304 calls=499\n Trigger table_a_trigger: time=5.658 calls=499\n Execution Time: 42.206 ms\n(13 rows)\n\n\n\nschema1_U QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------\n Insert on table_a (cost=0.00..431.80 rows=0 width=0) (actual\ntime=34.806..34.806 rows=0 loops=1)\n Buffers: shared hit=9247 written=9\n -> Subquery Scan on \"*SELECT*\" (cost=0.00..431.80 rows=499 width=771)\n(actual time=0.427..5.372 rows=499 loops=1)\n Buffers: shared hit=162\n -> Seq Scan on staging_table_a (cost=0.00..414.33 
rows=499\nwidth=83) (actual time=0.416..4.159 rows=499 loops=1)\n Filter: (fileid = 37)\n Rows Removed by Filter: 18989\n Buffers: shared hit=162\n Settings: effective_cache_size = '266500832kB', maintenance_io_concurrency\n= '1', max_parallel_workers = '24', search_path = 'schema1, public'\n Planning Time: 0.092 ms\n Trigger for constraint fk_con_tablea: time=426499.314 calls=499\n<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< Issue\n Trigger table_a_trigger: time=5.712 calls=499\n Execution Time: 426534.633 ms\n(13 rows)\n\nRegards,\n\nSatalabha\n\nHi Listers,DB : postgres 14.We are experiencing weird performance issue of one simple insert statement taking several minutes to insert data. The application calls insert statement via stored procedure show mentioned below.The select query in the insert returns about 499 rows. However, this insert statement when executed from application user i.e. schema1_u takes close to 8 minutes. When the same insert statement gets executed as postgres user it takes less than 280 ms. Both the executions use the same execution plan with only difference that when schema1_u executes the SQL, we observe \"Trigger for constraint fk_con_tablea: time=426499.314 calls=499\" taking more time. Both the parent and child tables are not big in size. There is no table bloat etc for both of these tables. 
Below are the details.Is there any way we can identify why as postgres user the insert statement works fine and why not with application user schema1_u?Stored Procedure:====================CREATE OR REPLACE FUNCTION schema1.ins_staging_fn(parfileid double precision, parcreatedby text) RETURNS void LANGUAGE plpgsqlAS $function$ BEGIN insert into table_a ( ROWVERSION, CREATED, ISDELETED, ISIGNORED, IMPORTEDACCOUNTCODE, IMPORTEDUNITCODE, BEGINNINGBALANCE, ENDINGBALANCE, CREATEDBY, FILEID ) select to_timestamp(To_char(clock_timestamp(),'DD-MON-YY HH.MI.SS.FF4 AM'),'DD-MON-YY HH.MI.SS.FF4 AM'), to_timestamp(To_char(clock_timestamp() at time zone 'utc', 'DD-MON-YY HH.MI.SS.MS AM'),'DD-MON-YY HH.MI.SS.FF4 AM'), false, false, IMPORTEDACCOUNTCODE, IMPORTEDUNITCODE, BEGINNINGBALANCE, ENDINGBALANCE, parCreatedBy, FILEID from STAGING_table_a where FILEID = parFileId; END; $function$;Count of tables:=================select count(*) from schema1.table_a; count------- 67019select count(*) from schema1.table_b; count------- 20create trigger table_a_trigger beforeinsert on schema1.table_a for each row execute function schema1.\"table_a_trigger$table_a\"();CREATE OR REPLACE FUNCTION schema1.\"table_a_trigger$table_a\"() RETURNS trigger LANGUAGE plpgsqlAS $function$ BEGIN IF COALESCE(new.id, - 1) = - 1 THEN SELECT nextval('table_a_sq') INTO STRICT new.id; END IF; RETURN NEW; END; $function$;ALTER TABLE schema1.table_a ADD CONSTRAINT fk_con_tablea FOREIGN KEY (fileid) REFERENCES schema1.table_b(id); POSTGRES QUERY PLAN-------------------------------------------------------------------------------------------------------------------------------------------------- Insert on table_a (cost=0.00..431.80 rows=0 width=0) (actual time=35.806..35.807 rows=0 loops=1) Buffers: shared hit=9266 written=19 -> Subquery Scan on \"*SELECT*\" (cost=0.00..431.80 rows=499 width=771) (actual time=0.427..5.654 rows=499 loops=1) Buffers: shared hit=162 -> Seq Scan on staging_table_a (cost=0.00..414.33 
rows=499 width=83) (actual time=0.416..4.286 rows=499 loops=1) Filter: (fileid = 37) Rows Removed by Filter: 18989 Buffers: shared hit=162 Settings: effective_cache_size = '266500832kB', maintenance_io_concurrency = '1', max_parallel_workers = '24', search_path = 'schema1, public' Planning Time: 0.092 ms Trigger for constraint fk_con_tablea: time=6.304 calls=499 Trigger table_a_trigger: time=5.658 calls=499 Execution Time: 42.206 ms(13 rows) schema1_U QUERY PLAN-------------------------------------------------------------------------------------------------------------------------------------------------- Insert on table_a (cost=0.00..431.80 rows=0 width=0) (actual time=34.806..34.806 rows=0 loops=1) Buffers: shared hit=9247 written=9 -> Subquery Scan on \"*SELECT*\" (cost=0.00..431.80 rows=499 width=771) (actual time=0.427..5.372 rows=499 loops=1) Buffers: shared hit=162 -> Seq Scan on staging_table_a (cost=0.00..414.33 rows=499 width=83) (actual time=0.416..4.159 rows=499 loops=1) Filter: (fileid = 37) Rows Removed by Filter: 18989 Buffers: shared hit=162 Settings: effective_cache_size = '266500832kB', maintenance_io_concurrency = '1', max_parallel_workers = '24', search_path = 'schema1, public' Planning Time: 0.092 ms Trigger for constraint fk_con_tablea: time=426499.314 calls=499 <<<<<<<<<<<<<<<<<<<<<<<<<<<<<< Issue Trigger table_a_trigger: time=5.712 calls=499 Execution Time: 426534.633 ms(13 rows)Regards,Satalabha",
"msg_date": "Sun, 4 Jun 2023 14:04:52 +0530",
"msg_from": "Satalabaha Postgres <satalabaha.postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Weird behavior of INSERT QUERY"
},
{
"msg_contents": "Forgot to mention. If I cancel the long running insert statement from\nschema1_u user query before completion, I see the below. Not sure if this\nwill help but just thought to mention it.\n\nCONTEXT: SQL statement \"SELECT 1 FROM ONLY \"schema1\".\"table_b\" x WHERE\n\"id\" OPERATOR(pg_catalog.=) $1 FOR KEY SHARE OF x\"\nRegards,\n\nSatalabha\n\n\nOn Sun, 4 Jun 2023 at 14:04, Satalabaha Postgres <\nsatalabaha.postgres@gmail.com> wrote:\n\n> Hi Listers,\n>\n> DB : postgres 14.\n>\n> We are experiencing weird performance issue of one simple insert statement\n> taking several minutes to insert data. The application calls insert\n> statement via stored procedure show mentioned below.\n>\n> The select query in the insert returns about 499 rows. However, this\n> insert statement when executed from application user i.e. schema1_u takes\n> close to 8 minutes. When the same insert statement gets executed as\n> postgres user it takes less than 280 ms. Both the executions use the same\n> execution plan with only difference that when schema1_u executes the SQL,\n> we observe \"Trigger for constraint fk_con_tablea: time=426499.314\n> calls=499\" taking more time. Both the parent and child tables are not big\n> in size. There is no table bloat etc for both of these tables. 
Below are\n> the details.\n> Is there any way we can identify why as postgres user the insert statement\n> works fine and why not with application user schema1_u?\n>\n> Stored Procedure:\n> ====================\n>\n> CREATE OR REPLACE FUNCTION schema1.ins_staging_fn(parfileid double\n> precision, parcreatedby text)\n> RETURNS void\n> LANGUAGE plpgsql\n> AS $function$\n> BEGIN\n> insert into table_a\n> (\n> ROWVERSION,\n> CREATED,\n> ISDELETED,\n> ISIGNORED,\n> IMPORTEDACCOUNTCODE,\n> IMPORTEDUNITCODE,\n> BEGINNINGBALANCE,\n> ENDINGBALANCE,\n> CREATEDBY,\n> FILEID\n> )\n> select to_timestamp(To_char(clock_timestamp(),'DD-MON-YY\n> HH.MI.SS.FF4 AM'),'DD-MON-YY HH.MI.SS.FF4 AM'),\n> to_timestamp(To_char(clock_timestamp() at time zone\n> 'utc', 'DD-MON-YY HH.MI.SS.MS AM'),'DD-MON-YY HH.MI.SS.FF4 AM'),\n> false,\n> false,\n> IMPORTEDACCOUNTCODE,\n> IMPORTEDUNITCODE,\n> BEGINNINGBALANCE,\n> ENDINGBALANCE,\n> parCreatedBy,\n> FILEID\n> from STAGING_table_a\n> where FILEID = parFileId;\n>\n> END;\n> $function$\n> ;\n>\n>\n>\n> Count of tables:\n> =================\n>\n> select count(*) from schema1.table_a;\n> count\n> -------\n> 67019\n>\n> select count(*) from schema1.table_b;\n> count\n> -------\n> 20\n>\n>\n>\n> create trigger table_a_trigger before\n> insert\n> on\n> schema1.table_a for each row execute function\n> schema1.\"table_a_trigger$table_a\"();\n>\n>\n>\n> CREATE OR REPLACE FUNCTION schema1.\"table_a_trigger$table_a\"()\n> RETURNS trigger\n> LANGUAGE plpgsql\n> AS $function$\n> BEGIN\n> IF COALESCE(new.id, - 1) = - 1 THEN\n> SELECT\n> nextval('table_a_sq')\n> INTO STRICT new.id;\n> END IF;\n> RETURN NEW;\n> END;\n> $function$;\n>\n> ALTER TABLE schema1.table_a ADD CONSTRAINT fk_con_tablea FOREIGN KEY\n> (fileid) REFERENCES schema1.table_b(id);\n>\n>\n>\n> POSTGRES QUERY PLAN\n>\n> --------------------------------------------------------------------------------------------------------------------------------------------------\n> Insert on table_a 
(cost=0.00..431.80 rows=0 width=0) (actual\n> time=35.806..35.807 rows=0 loops=1)\n> Buffers: shared hit=9266 written=19\n> -> Subquery Scan on \"*SELECT*\" (cost=0.00..431.80 rows=499 width=771)\n> (actual time=0.427..5.654 rows=499 loops=1)\n> Buffers: shared hit=162\n> -> Seq Scan on staging_table_a (cost=0.00..414.33 rows=499\n> width=83) (actual time=0.416..4.286 rows=499 loops=1)\n> Filter: (fileid = 37)\n> Rows Removed by Filter: 18989\n> Buffers: shared hit=162\n> Settings: effective_cache_size = '266500832kB',\n> maintenance_io_concurrency = '1', max_parallel_workers = '24', search_path\n> = 'schema1, public'\n> Planning Time: 0.092 ms\n> Trigger for constraint fk_con_tablea: time=6.304 calls=499\n> Trigger table_a_trigger: time=5.658 calls=499\n> Execution Time: 42.206 ms\n> (13 rows)\n>\n>\n>\n> schema1_U QUERY PLAN\n>\n> --------------------------------------------------------------------------------------------------------------------------------------------------\n> Insert on table_a (cost=0.00..431.80 rows=0 width=0) (actual\n> time=34.806..34.806 rows=0 loops=1)\n> Buffers: shared hit=9247 written=9\n> -> Subquery Scan on \"*SELECT*\" (cost=0.00..431.80 rows=499 width=771)\n> (actual time=0.427..5.372 rows=499 loops=1)\n> Buffers: shared hit=162\n> -> Seq Scan on staging_table_a (cost=0.00..414.33 rows=499\n> width=83) (actual time=0.416..4.159 rows=499 loops=1)\n> Filter: (fileid = 37)\n> Rows Removed by Filter: 18989\n> Buffers: shared hit=162\n> Settings: effective_cache_size = '266500832kB',\n> maintenance_io_concurrency = '1', max_parallel_workers = '24', search_path\n> = 'schema1, public'\n> Planning Time: 0.092 ms\n> Trigger for constraint fk_con_tablea: time=426499.314 calls=499\n> <<<<<<<<<<<<<<<<<<<<<<<<<<<<<< Issue\n> Trigger table_a_trigger: time=5.712 calls=499\n> Execution Time: 426534.633 ms\n> (13 rows)\n>\n> Regards,\n>\n> Satalabha\n>\n\nForgot to mention. 
If I cancel the long running insert statement from schema1_u user query before completion, I see the below. Not sure if this will help but just thought to mention it.CONTEXT: SQL statement \"SELECT 1 FROM ONLY \"schema1\".\"table_b\" x WHERE \"id\" OPERATOR(pg_catalog.=) $1 FOR KEY SHARE OF x\"Regards,SatalabhaOn Sun, 4 Jun 2023 at 14:04, Satalabaha Postgres <satalabaha.postgres@gmail.com> wrote:Hi Listers,DB : postgres 14.We are experiencing weird performance issue of one simple insert statement taking several minutes to insert data. The application calls insert statement via stored procedure show mentioned below.The select query in the insert returns about 499 rows. However, this insert statement when executed from application user i.e. schema1_u takes close to 8 minutes. When the same insert statement gets executed as postgres user it takes less than 280 ms. Both the executions use the same execution plan with only difference that when schema1_u executes the SQL, we observe \"Trigger for constraint fk_con_tablea: time=426499.314 calls=499\" taking more time. Both the parent and child tables are not big in size. There is no table bloat etc for both of these tables. 
Below are the details.Is there any way we can identify why as postgres user the insert statement works fine and why not with application user schema1_u?Stored Procedure:====================CREATE OR REPLACE FUNCTION schema1.ins_staging_fn(parfileid double precision, parcreatedby text) RETURNS void LANGUAGE plpgsqlAS $function$ BEGIN insert into table_a ( ROWVERSION, CREATED, ISDELETED, ISIGNORED, IMPORTEDACCOUNTCODE, IMPORTEDUNITCODE, BEGINNINGBALANCE, ENDINGBALANCE, CREATEDBY, FILEID ) select to_timestamp(To_char(clock_timestamp(),'DD-MON-YY HH.MI.SS.FF4 AM'),'DD-MON-YY HH.MI.SS.FF4 AM'), to_timestamp(To_char(clock_timestamp() at time zone 'utc', 'DD-MON-YY HH.MI.SS.MS AM'),'DD-MON-YY HH.MI.SS.FF4 AM'), false, false, IMPORTEDACCOUNTCODE, IMPORTEDUNITCODE, BEGINNINGBALANCE, ENDINGBALANCE, parCreatedBy, FILEID from STAGING_table_a where FILEID = parFileId; END; $function$;Count of tables:=================select count(*) from schema1.table_a; count------- 67019select count(*) from schema1.table_b; count------- 20create trigger table_a_trigger beforeinsert on schema1.table_a for each row execute function schema1.\"table_a_trigger$table_a\"();CREATE OR REPLACE FUNCTION schema1.\"table_a_trigger$table_a\"() RETURNS trigger LANGUAGE plpgsqlAS $function$ BEGIN IF COALESCE(new.id, - 1) = - 1 THEN SELECT nextval('table_a_sq') INTO STRICT new.id; END IF; RETURN NEW; END; $function$;ALTER TABLE schema1.table_a ADD CONSTRAINT fk_con_tablea FOREIGN KEY (fileid) REFERENCES schema1.table_b(id); POSTGRES QUERY PLAN-------------------------------------------------------------------------------------------------------------------------------------------------- Insert on table_a (cost=0.00..431.80 rows=0 width=0) (actual time=35.806..35.807 rows=0 loops=1) Buffers: shared hit=9266 written=19 -> Subquery Scan on \"*SELECT*\" (cost=0.00..431.80 rows=499 width=771) (actual time=0.427..5.654 rows=499 loops=1) Buffers: shared hit=162 -> Seq Scan on staging_table_a (cost=0.00..414.33 
rows=499 width=83) (actual time=0.416..4.286 rows=499 loops=1) Filter: (fileid = 37) Rows Removed by Filter: 18989 Buffers: shared hit=162 Settings: effective_cache_size = '266500832kB', maintenance_io_concurrency = '1', max_parallel_workers = '24', search_path = 'schema1, public' Planning Time: 0.092 ms Trigger for constraint fk_con_tablea: time=6.304 calls=499 Trigger table_a_trigger: time=5.658 calls=499 Execution Time: 42.206 ms(13 rows) schema1_U QUERY PLAN-------------------------------------------------------------------------------------------------------------------------------------------------- Insert on table_a (cost=0.00..431.80 rows=0 width=0) (actual time=34.806..34.806 rows=0 loops=1) Buffers: shared hit=9247 written=9 -> Subquery Scan on \"*SELECT*\" (cost=0.00..431.80 rows=499 width=771) (actual time=0.427..5.372 rows=499 loops=1) Buffers: shared hit=162 -> Seq Scan on staging_table_a (cost=0.00..414.33 rows=499 width=83) (actual time=0.416..4.159 rows=499 loops=1) Filter: (fileid = 37) Rows Removed by Filter: 18989 Buffers: shared hit=162 Settings: effective_cache_size = '266500832kB', maintenance_io_concurrency = '1', max_parallel_workers = '24', search_path = 'schema1, public' Planning Time: 0.092 ms Trigger for constraint fk_con_tablea: time=426499.314 calls=499 <<<<<<<<<<<<<<<<<<<<<<<<<<<<<< Issue Trigger table_a_trigger: time=5.712 calls=499 Execution Time: 426534.633 ms(13 rows)Regards,Satalabha",
"msg_date": "Sun, 4 Jun 2023 14:10:05 +0530",
"msg_from": "Satalabaha Postgres <satalabaha.postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Weird behavior of INSERT QUERY"
},
{
"msg_contents": "Hi,\n\nOn Sun, Jun 04, 2023 at 02:04:52PM +0530, Satalabaha Postgres wrote:\n>\n> DB : postgres 14.\n>\n> We are experiencing weird performance issue of one simple insert statement\n> taking several minutes to insert data. The application calls insert\n> statement via stored procedure show mentioned below.\n>\n> The select query in the insert returns about 499 rows. However, this insert\n> statement when executed from application user i.e. schema1_u takes close to\n> 8 minutes. When the same insert statement gets executed as postgres user\n> it takes less than 280 ms. Both the executions use the same execution plan\n> with only difference that when schema1_u executes the SQL, we observe\n> \"Trigger for constraint fk_con_tablea: time=426499.314 calls=499\" taking\n> more time. Both the parent and child tables are not big in size. There is\n> no table bloat etc for both of these tables. Below are the details.\n> Is there any way we can identify why as postgres user the insert statement\n> works fine and why not with application user schema1_u?\n\nAre you sure that in both case the exact same tables are accessed? It looks\nlike schema1_u is checking the rows for a way bigger table. The usual answer\nis to create a proper index for the table referenced by the FK.\n\n\n",
"msg_date": "Sun, 4 Jun 2023 19:21:08 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Weird behavior of INSERT QUERY"
},
{
"msg_contents": "Hi Julien,\n\nYes both in both the cases the same tables are accessed. Yes we tried\nindexing as well, but we have the same behaviour.\n\nRegards,\n\nSatalabha\n\n\nOn Sun, 4 Jun 2023 at 16:51, Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> Hi,\n>\n> On Sun, Jun 04, 2023 at 02:04:52PM +0530, Satalabaha Postgres wrote:\n> >\n> > DB : postgres 14.\n> >\n> > We are experiencing weird performance issue of one simple insert\n> statement\n> > taking several minutes to insert data. The application calls insert\n> > statement via stored procedure show mentioned below.\n> >\n> > The select query in the insert returns about 499 rows. However, this\n> insert\n> > statement when executed from application user i.e. schema1_u takes close\n> to\n> > 8 minutes. When the same insert statement gets executed as postgres\n> user\n> > it takes less than 280 ms. Both the executions use the same execution\n> plan\n> > with only difference that when schema1_u executes the SQL, we observe\n> > \"Trigger for constraint fk_con_tablea: time=426499.314 calls=499\" taking\n> > more time. Both the parent and child tables are not big in size. There is\n> > no table bloat etc for both of these tables. Below are the details.\n> > Is there any way we can identify why as postgres user the insert\n> statement\n> > works fine and why not with application user schema1_u?\n>\n> Are you sure that in both case the exact same tables are accessed? It\n> looks\n> like schema1_u is checking the rows for a way bigger table. The usual\n> answer\n> is to create a proper index for the table referenced by the FK.\n>\n\nHi Julien,Yes both in both the cases the same tables are accessed. Yes we tried indexing as well, but we have the same behaviour. 
Regards,SatalabhaOn Sun, 4 Jun 2023 at 16:51, Julien Rouhaud <rjuju123@gmail.com> wrote:Hi,\n\nOn Sun, Jun 04, 2023 at 02:04:52PM +0530, Satalabaha Postgres wrote:\n>\n> DB : postgres 14.\n>\n> We are experiencing weird performance issue of one simple insert statement\n> taking several minutes to insert data. The application calls insert\n> statement via stored procedure show mentioned below.\n>\n> The select query in the insert returns about 499 rows. However, this insert\n> statement when executed from application user i.e. schema1_u takes close to\n> 8 minutes. When the same insert statement gets executed as postgres user\n> it takes less than 280 ms. Both the executions use the same execution plan\n> with only difference that when schema1_u executes the SQL, we observe\n> \"Trigger for constraint fk_con_tablea: time=426499.314 calls=499\" taking\n> more time. Both the parent and child tables are not big in size. There is\n> no table bloat etc for both of these tables. Below are the details.\n> Is there any way we can identify why as postgres user the insert statement\n> works fine and why not with application user schema1_u?\n\nAre you sure that in both case the exact same tables are accessed? It looks\nlike schema1_u is checking the rows for a way bigger table. The usual answer\nis to create a proper index for the table referenced by the FK.",
"msg_date": "Sun, 4 Jun 2023 17:12:27 +0530",
"msg_from": "Satalabaha Postgres <satalabaha.postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Weird behavior of INSERT QUERY"
},
{
"msg_contents": "Please don't top post on this mailing list:\nhttps://wiki.postgresql.org/wiki/Mailing_Lists#Email_etiquette_mechanics\n\nOn Sun, Jun 4, 2023 at 7:42 PM Satalabaha Postgres\n<satalabaha.postgres@gmail.com> wrote:\n>\n> Yes both in both the cases the same tables are accessed. Yes we tried indexing as well, but we have the same behaviour.\n\nDo you reproduce the problem if you connect and execute that query\nmanually with the schema1_u user? Is the query always fast when\nrunning as postgres and always slow when running as schema1_u? Is\nthere any special configuration for that user? \\drds can tell you\nthat, but those should get logged with the rest of the non default\nparameters already displayed in the explain plan.\n\nIf you can reproduce easily, you should be able to get the execution\nplans of the underlying FK queries using auto_explain and\nauto_explain.log_nested_statements = true. It might show you the\nproblem.\n\n\n",
"msg_date": "Sun, 4 Jun 2023 21:21:37 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Weird behavior of INSERT QUERY"
},
{
"msg_contents": "Satalabaha Postgres <satalabaha.postgres@gmail.com> writes:\n> The select query in the insert returns about 499 rows. However, this insert\n> statement when executed from application user i.e. schema1_u takes close to\n> 8 minutes. When the same insert statement gets executed as postgres user\n> it takes less than 280 ms. Both the executions use the same execution plan\n> with only difference that when schema1_u executes the SQL, we observe\n> \"Trigger for constraint fk_con_tablea: time=426499.314 calls=499\" taking\n> more time.\n\nSo you need to find out what's happening in the trigger. Perhaps\nauto_explain with auto_explain.log_nested_statements enabled\nwould give some insight.\n\nI suspect there might be a permissions problem causing schema1_u\nto not be allowed to \"see\" the statistics for table_b, resulting\nin a bad plan choice for the FK enforcement query; but that's\njust a guess.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 04 Jun 2023 09:26:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Weird behavior of INSERT QUERY"
},
{
"msg_contents": "Em dom., 4 de jun. de 2023 às 05:35, Satalabaha Postgres <\nsatalabaha.postgres@gmail.com> escreveu:\n\n> Hi Listers,\n>\n> DB : postgres 14.\n>\n> We are experiencing weird performance issue of one simple insert statement\n> taking several minutes to insert data. The application calls insert\n> statement via stored procedure show mentioned below.\n>\n> The select query in the insert returns about 499 rows. However, this\n> insert statement when executed from application user i.e. schema1_u takes\n> close to 8 minutes. When the same insert statement gets executed as\n> postgres user it takes less than 280 ms. Both the executions use the same\n> execution plan with only difference that when schema1_u executes the SQL,\n> we observe \"Trigger for constraint fk_con_tablea: time=426499.314\n> calls=499\" taking more time. Both the parent and child tables are not big\n> in size. There is no table bloat etc for both of these tables. Below are\n> the details.\n> Is there any way we can identify why as postgres user the insert statement\n> works fine and why not with application user schema1_u?\n>\n> Stored Procedure:\n> ====================\n>\n> CREATE OR REPLACE FUNCTION schema1.ins_staging_fn(parfileid double\n> precision, parcreatedby text)\n> RETURNS void\n> LANGUAGE plpgsql\n> AS $function$\n> BEGIN\n> insert into table_a\n> (\n> ROWVERSION,\n> CREATED,\n> ISDELETED,\n> ISIGNORED,\n> IMPORTEDACCOUNTCODE,\n> IMPORTEDUNITCODE,\n> BEGINNINGBALANCE,\n> ENDINGBALANCE,\n> CREATEDBY,\n> FILEID\n> )\n> select to_timestamp(To_char(clock_timestamp(),'DD-MON-YY\n> HH.MI.SS.FF4 AM'),'DD-MON-YY HH.MI.SS.FF4 AM'),\n> to_timestamp(To_char(clock_timestamp() at time zone\n> 'utc', 'DD-MON-YY HH.MI.SS.MS AM'),'DD-MON-YY HH.MI.SS.FF4 AM'),\n> false,\n> false,\n> IMPORTEDACCOUNTCODE,\n> IMPORTEDUNITCODE,\n> BEGINNINGBALANCE,\n> ENDINGBALANCE,\n> parCreatedBy,\n> FILEID\n> from STAGING_table_a\n> where FILEID = parFileId;\n>\n> END;\n> $function$\n> ;\n>\nCan you show 
what type is FILEID?\n\nCan there be type mismatch?\n\nregards,\nRanier Vilela",
"msg_date": "Sun, 4 Jun 2023 11:16:45 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Weird behavior of INSERT QUERY"
},
{
"msg_contents": "On Sun, 4 Jun 2023 at 19:46, Ranier Vilela <ranier.vf@gmail.com> wrote:\n\n> Em dom., 4 de jun. de 2023 às 05:35, Satalabaha Postgres <\n> satalabaha.postgres@gmail.com> escreveu:\n>\n>> Hi Listers,\n>>\n>> DB : postgres 14.\n>>\n>> We are experiencing weird performance issue of one simple insert\n>> statement taking several minutes to insert data. The application calls\n>> insert statement via stored procedure show mentioned below.\n>>\n>> The select query in the insert returns about 499 rows. However, this\n>> insert statement when executed from application user i.e. schema1_u takes\n>> close to 8 minutes. When the same insert statement gets executed as\n>> postgres user it takes less than 280 ms. Both the executions use the same\n>> execution plan with only difference that when schema1_u executes the SQL,\n>> we observe \"Trigger for constraint fk_con_tablea: time=426499.314\n>> calls=499\" taking more time. Both the parent and child tables are not big\n>> in size. There is no table bloat etc for both of these tables. 
Below are\n>> the details.\n>> Is there any way we can identify why as postgres user the insert\n>> statement works fine and why not with application user schema1_u?\n>>\n>> Stored Procedure:\n>> ====================\n>>\n>> CREATE OR REPLACE FUNCTION schema1.ins_staging_fn(parfileid double\n>> precision, parcreatedby text)\n>> RETURNS void\n>> LANGUAGE plpgsql\n>> AS $function$\n>> BEGIN\n>> insert into table_a\n>> (\n>> ROWVERSION,\n>> CREATED,\n>> ISDELETED,\n>> ISIGNORED,\n>> IMPORTEDACCOUNTCODE,\n>> IMPORTEDUNITCODE,\n>> BEGINNINGBALANCE,\n>> ENDINGBALANCE,\n>> CREATEDBY,\n>> FILEID\n>> )\n>> select to_timestamp(To_char(clock_timestamp(),'DD-MON-YY\n>> HH.MI.SS.FF4 AM'),'DD-MON-YY HH.MI.SS.FF4 AM'),\n>> to_timestamp(To_char(clock_timestamp() at time zone\n>> 'utc', 'DD-MON-YY HH.MI.SS.MS AM'),'DD-MON-YY HH.MI.SS.FF4 AM'),\n>> false,\n>> false,\n>> IMPORTEDACCOUNTCODE,\n>> IMPORTEDUNITCODE,\n>> BEGINNINGBALANCE,\n>> ENDINGBALANCE,\n>> parCreatedBy,\n>> FILEID\n>> from STAGING_table_a\n>> where FILEID = parFileId;\n>>\n>> END;\n>> $function$\n>> ;\n>>\n> Can you show what type is FILEID?\n>\n> Can there be type mismatch?\n>\n>\nregards,\n> Ranier Vilela\n>\n\nThanks Ranier. 
Please find the below.\n\n\\d+ schema1.table_a\n Table \"schema1.table_a\"\n Column | Type | Collation |\nNullable | Default | Storage | Stats target | Description\n---------------------+--------------------------------+-----------+----------+---------+----------+--------------+-------------\n id | numeric(20,0) | | not\nnull | | main | |\n rowversion | timestamp(4) without time zone | | not\nnull | | plain | |\n created | timestamp(4) without time zone | | not\nnull | | plain | |\n isdeleted | boolean | | not\nnull | | plain | |\n lastupdated | timestamp(4) without time zone | |\n | | plain | |\n isignored | boolean | | not\nnull | | plain | |\n importedaccountcode | character varying(255) | |\n | | extended | |\n importedunitcode | character varying(255) | |\n | | extended | |\n beginningbalance | numeric(19,5) | |\n | | main | |\n endingbalance | numeric(19,5) | |\n | | main | |\n createdbyid | numeric(20,0) | |\n | | main | |\n updatedbyid | numeric(20,0) | |\n | | main | |\n fileid | numeric(20,0) | | not\nnull | | main | |\n previousid | numeric(20,0) | |\n | | main | |\n createdby | character varying(255) | |\n | | extended | |\n lastupdatedby | character varying(255) | |\n | | extended | |\n\n\\d+ schema1.table_b\n Table \"schema1.table_b\"\n Column | Type | Collation |\nNullable | Default | Storage | Stats target | Description\n--------------------------+--------------------------------+-----------+----------+---------+----------+--------------+-------------\n id | numeric(20,0) | |\nnot null | | main | |\n rowversion | timestamp(4) without time zone | |\nnot null | | plain | |\n created | timestamp(4) without time zone | |\nnot null | | plain | |\n isdeleted | boolean | |\nnot null | | plain | |\n lastupdated | timestamp(4) without time zone | |\n | | plain | |\n version | numeric(10,0) | |\nnot null | | main | |\n isactive | boolean | |\nnot null | | plain | |\n name | character varying(255) | |\nnot null | | extended | |\n displayname | character 
varying(255) | |\nnot null | | extended | |\n ispublished | boolean | |\nnot null | | plain | |\n isretired | boolean | |\nnot null | | plain | |\n publishdatetime | timestamp(4) without time zone | |\n | | plain | |\n createdbyid | numeric(20,0) | |\n | | main | |\n updatedbyid | numeric(20,0) | |\n | | main | |\n periodid | numeric(20,0) | |\nnot null | | main | |\n uploadchartyearversionid | numeric(20,0) | |\nnot null | | main | |\n importchartyearversionid | numeric(20,0) | |\n | | main | |\n initialtbadjversionid | numeric(20,0) | |\n | | main | |\n latesttbadjversionid | numeric(20,0) | |\n | | main | |\n trialbalancesourceid | numeric(20,0) | |\nnot null | | main | |\n filedefinitionid | numeric(20,0) | |\nnot null | | main | |\n createdby | character varying(255) | |\n | | extended | |\n lastupdatedby | character varying(255) | |\n | | extended | |\n\nRegards, Satalabaha",
"msg_date": "Sun, 4 Jun 2023 20:19:37 +0530",
"msg_from": "Satalabaha Postgres <satalabaha.postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Weird behavior of INSERT QUERY"
},
{
"msg_contents": "Regards,\n\nSatalabha\n\n\nOn Sun, 4 Jun 2023 at 18:56, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Satalabaha Postgres <satalabaha.postgres@gmail.com> writes:\n> > The select query in the insert returns about 499 rows. However, this\n> insert\n> > statement when executed from application user i.e. schema1_u takes close\n> to\n> > 8 minutes. When the same insert statement gets executed as postgres\n> user\n> > it takes less than 280 ms. Both the executions use the same execution\n> plan\n> > with only difference that when schema1_u executes the SQL, we observe\n> > \"Trigger for constraint fk_con_tablea: time=426499.314 calls=499\" taking\n> > more time.\n>\n> So you need to find out what's happening in the trigger. Perhaps\n> auto_explain with auto_explain.log_nested_statements enabled\n> would give some insight.\n>\n> I suspect there might be a permissions problem causing schema1_u\n> to not be allowed to \"see\" the statistics for table_b, resulting\n> in a bad plan choice for the FK enforcement query; but that's\n> just a guess.\n>\n> regards, tom lane\n>\n\n\nHi Tom,\n\nWe did enable auto_explain and auto_explain.log_nested_statements and apart\nfrom the insert statement we couldn't find any other SQL's causing that was\ntaking more time.\n\nRegards, Satalabaha.\n\nRegards,SatalabhaOn Sun, 4 Jun 2023 at 18:56, Tom Lane <tgl@sss.pgh.pa.us> wrote:Satalabaha Postgres <satalabaha.postgres@gmail.com> writes:\n> The select query in the insert returns about 499 rows. However, this insert\n> statement when executed from application user i.e. schema1_u takes close to\n> 8 minutes. When the same insert statement gets executed as postgres user\n> it takes less than 280 ms. Both the executions use the same execution plan\n> with only difference that when schema1_u executes the SQL, we observe\n> \"Trigger for constraint fk_con_tablea: time=426499.314 calls=499\" taking\n> more time.\n\nSo you need to find out what's happening in the trigger. 
Perhaps\nauto_explain with auto_explain.log_nested_statements enabled\nwould give some insight.\n\nI suspect there might be a permissions problem causing schema1_u\nto not be allowed to \"see\" the statistics for table_b, resulting\nin a bad plan choice for the FK enforcement query; but that's\njust a guess.\n\n regards, tom laneHi Tom,We did enable auto_explain and auto_explain.log_nested_statements and apart from the insert statement we couldn't find any other SQL's causing that was taking more time. Regards, Satalabaha.",
"msg_date": "Sun, 4 Jun 2023 20:28:18 +0530",
"msg_from": "Satalabaha Postgres <satalabaha.postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Weird behavior of INSERT QUERY"
},
{
"msg_contents": "Em dom., 4 de jun. de 2023 às 11:49, Satalabaha Postgres <\nsatalabaha.postgres@gmail.com> escreveu:\n\n>\n>\n>\n> On Sun, 4 Jun 2023 at 19:46, Ranier Vilela <ranier.vf@gmail.com> wrote:\n>\n>> Em dom., 4 de jun. de 2023 às 05:35, Satalabaha Postgres <\n>> satalabaha.postgres@gmail.com> escreveu:\n>>\n>>> Hi Listers,\n>>>\n>>> DB : postgres 14.\n>>>\n>>> We are experiencing weird performance issue of one simple insert\n>>> statement taking several minutes to insert data. The application calls\n>>> insert statement via stored procedure show mentioned below.\n>>>\n>>> The select query in the insert returns about 499 rows. However, this\n>>> insert statement when executed from application user i.e. schema1_u takes\n>>> close to 8 minutes. When the same insert statement gets executed as\n>>> postgres user it takes less than 280 ms. Both the executions use the same\n>>> execution plan with only difference that when schema1_u executes the SQL,\n>>> we observe \"Trigger for constraint fk_con_tablea: time=426499.314\n>>> calls=499\" taking more time. Both the parent and child tables are not big\n>>> in size. There is no table bloat etc for both of these tables. 
Below are\n>>> the details.\n>>> Is there any way we can identify why as postgres user the insert\n>>> statement works fine and why not with application user schema1_u?\n>>>\n>>> Stored Procedure:\n>>> ====================\n>>>\n>>> CREATE OR REPLACE FUNCTION schema1.ins_staging_fn(parfileid double\n>>> precision, parcreatedby text)\n>>> RETURNS void\n>>> LANGUAGE plpgsql\n>>> AS $function$\n>>> BEGIN\n>>> insert into table_a\n>>> (\n>>> ROWVERSION,\n>>> CREATED,\n>>> ISDELETED,\n>>> ISIGNORED,\n>>> IMPORTEDACCOUNTCODE,\n>>> IMPORTEDUNITCODE,\n>>> BEGINNINGBALANCE,\n>>> ENDINGBALANCE,\n>>> CREATEDBY,\n>>> FILEID\n>>> )\n>>> select to_timestamp(To_char(clock_timestamp(),'DD-MON-YY\n>>> HH.MI.SS.FF4 AM'),'DD-MON-YY HH.MI.SS.FF4 AM'),\n>>> to_timestamp(To_char(clock_timestamp() at time zone\n>>> 'utc', 'DD-MON-YY HH.MI.SS.MS AM'),'DD-MON-YY HH.MI.SS.FF4 AM'),\n>>> false,\n>>> false,\n>>> IMPORTEDACCOUNTCODE,\n>>> IMPORTEDUNITCODE,\n>>> BEGINNINGBALANCE,\n>>> ENDINGBALANCE,\n>>> parCreatedBy,\n>>> FILEID\n>>> from STAGING_table_a\n>>> where FILEID = parFileId;\n>>>\n>>> END;\n>>> $function$\n>>> ;\n>>>\n>> Can you show what type is FILEID?\n>>\n>> Can there be type mismatch?\n>>\n>>\n> regards,\n>> Ranier Vilela\n>>\n>\n> Thanks Ranier. 
Please find the below.\n>\n> \\d+ schema1.table_a\n> Table \"schema1.table_a\"\n> Column | Type | Collation |\n> Nullable | Default | Storage | Stats target | Description\n>\n> ---------------------+--------------------------------+-----------+----------+---------+----------+--------------+-------------\n> id | numeric(20,0) | | not\n> null | | main | |\n> rowversion | timestamp(4) without time zone | | not\n> null | | plain | |\n> created | timestamp(4) without time zone | | not\n> null | | plain | |\n> isdeleted | boolean | | not\n> null | | plain | |\n> lastupdated | timestamp(4) without time zone | |\n> | | plain | |\n> isignored | boolean | | not\n> null | | plain | |\n> importedaccountcode | character varying(255) | |\n> | | extended | |\n> importedunitcode | character varying(255) | |\n> | | extended | |\n> beginningbalance | numeric(19,5) | |\n> | | main | |\n> endingbalance | numeric(19,5) | |\n> | | main | |\n> createdbyid | numeric(20,0) | |\n> | | main | |\n> updatedbyid | numeric(20,0) | |\n> | | main | |\n> fileid | numeric(20,0) | | not\n> null | | main | |\n> previousid | numeric(20,0) | |\n> | | main | |\n> createdby | character varying(255) | |\n> | | extended | |\n> lastupdatedby | character varying(255) | |\n> | | extended | |\n>\n> \\d+ schema1.table_b\n> Table \"schema1.table_b\"\n> Column | Type | Collation |\n> Nullable | Default | Storage | Stats target | Description\n>\n> --------------------------+--------------------------------+-----------+----------+---------+----------+--------------+-------------\n> id | numeric(20,0) | |\n> not null | | main | |\n> rowversion | timestamp(4) without time zone | |\n> not null | | plain | |\n> created | timestamp(4) without time zone | |\n> not null | | plain | |\n> isdeleted | boolean | |\n> not null | | plain | |\n> lastupdated | timestamp(4) without time zone | |\n> | | plain | |\n> version | numeric(10,0) | |\n> not null | | main | |\n> isactive | boolean | |\n> not null | | plain | |\n> name | 
character varying(255) | |\n> not null | | extended | |\n> displayname | character varying(255) | |\n> not null | | extended | |\n> ispublished | boolean | |\n> not null | | plain | |\n> isretired | boolean | |\n> not null | | plain | |\n> publishdatetime | timestamp(4) without time zone | |\n> | | plain | |\n> createdbyid | numeric(20,0) | |\n> | | main | |\n> updatedbyid | numeric(20,0) | |\n> | | main | |\n> periodid | numeric(20,0) | |\n> not null | | main | |\n> uploadchartyearversionid | numeric(20,0) | |\n> not null | | main | |\n> importchartyearversionid | numeric(20,0) | |\n> | | main | |\n> initialtbadjversionid | numeric(20,0) | |\n> | | main | |\n> latesttbadjversionid | numeric(20,0) | |\n> | | main | |\n> trialbalancesourceid | numeric(20,0) | |\n> not null | | main | |\n> filedefinitionid | numeric(20,0) | |\n> not null | | main | |\n> createdby | character varying(255) | |\n> | | extended | |\n> lastupdatedby | character varying(255) | |\n> | | extended | |\n>\nI think you are in trouble when comparing float8 (double precision) with\nnumeric.\nThis small example shows problems.\n\nPostgres version 14.2:\nSELECT '8217316934885843456'::float8 =\n'8217316934885843456'::float8::bigint::float8,\n'8217316934885843456'::float8 =\n'8217316934885843456'::float8::numeric::float8;\n ?column? | ?column?\n----------+----------\n t | f\n(1 row)\n\nI suggest a study to switch to bigint.\n\nregards,\nRanier Vilela",
"msg_date": "Sun, 4 Jun 2023 20:05:44 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Weird behavior of INSERT QUERY"
},
{
"msg_contents": "On Mon, 5 Jun 2023 at 04:35, Ranier Vilela <ranier.vf@gmail.com> wrote:\n\n> Em dom., 4 de jun. de 2023 às 11:49, Satalabaha Postgres <\n> satalabaha.postgres@gmail.com> escreveu:\n>\n>>\n>>\n>>\n>> On Sun, 4 Jun 2023 at 19:46, Ranier Vilela <ranier.vf@gmail.com> wrote:\n>>\n>>> Em dom., 4 de jun. de 2023 às 05:35, Satalabaha Postgres <\n>>> satalabaha.postgres@gmail.com> escreveu:\n>>>\n>>>> Hi Listers,\n>>>>\n>>>> DB : postgres 14.\n>>>>\n>>>> We are experiencing weird performance issue of one simple insert\n>>>> statement taking several minutes to insert data. The application calls\n>>>> insert statement via stored procedure show mentioned below.\n>>>>\n>>>> The select query in the insert returns about 499 rows. However, this\n>>>> insert statement when executed from application user i.e. schema1_u takes\n>>>> close to 8 minutes. When the same insert statement gets executed as\n>>>> postgres user it takes less than 280 ms. Both the executions use the same\n>>>> execution plan with only difference that when schema1_u executes the SQL,\n>>>> we observe \"Trigger for constraint fk_con_tablea: time=426499.314\n>>>> calls=499\" taking more time. Both the parent and child tables are not big\n>>>> in size. There is no table bloat etc for both of these tables. 
Below are\n>>>> the details.\n>>>> Is there any way we can identify why as postgres user the insert\n>>>> statement works fine and why not with application user schema1_u?\n>>>>\n>>>> Stored Procedure:\n>>>> ====================\n>>>>\n>>>> CREATE OR REPLACE FUNCTION schema1.ins_staging_fn(parfileid double\n>>>> precision, parcreatedby text)\n>>>> RETURNS void\n>>>> LANGUAGE plpgsql\n>>>> AS $function$\n>>>> BEGIN\n>>>> insert into table_a\n>>>> (\n>>>> ROWVERSION,\n>>>> CREATED,\n>>>> ISDELETED,\n>>>> ISIGNORED,\n>>>> IMPORTEDACCOUNTCODE,\n>>>> IMPORTEDUNITCODE,\n>>>> BEGINNINGBALANCE,\n>>>> ENDINGBALANCE,\n>>>> CREATEDBY,\n>>>> FILEID\n>>>> )\n>>>> select to_timestamp(To_char(clock_timestamp(),'DD-MON-YY\n>>>> HH.MI.SS.FF4 AM'),'DD-MON-YY HH.MI.SS.FF4 AM'),\n>>>> to_timestamp(To_char(clock_timestamp() at time zone\n>>>> 'utc', 'DD-MON-YY HH.MI.SS.MS AM'),'DD-MON-YY HH.MI.SS.FF4 AM'),\n>>>> false,\n>>>> false,\n>>>> IMPORTEDACCOUNTCODE,\n>>>> IMPORTEDUNITCODE,\n>>>> BEGINNINGBALANCE,\n>>>> ENDINGBALANCE,\n>>>> parCreatedBy,\n>>>> FILEID\n>>>> from STAGING_table_a\n>>>> where FILEID = parFileId;\n>>>>\n>>>> END;\n>>>> $function$\n>>>> ;\n>>>>\n>>> Can you show what type is FILEID?\n>>>\n>>> Can there be type mismatch?\n>>>\n>>>\n>> regards,\n>>> Ranier Vilela\n>>>\n>>\n>> Thanks Ranier. 
Please find the below.\n>>\n>> \\d+ schema1.table_a\n>> Table \"schema1.table_a\"\n>> Column | Type | Collation |\n>> Nullable | Default | Storage | Stats target | Description\n>>\n>> ---------------------+--------------------------------+-----------+----------+---------+----------+--------------+-------------\n>> id | numeric(20,0) | | not\n>> null | | main | |\n>> rowversion | timestamp(4) without time zone | | not\n>> null | | plain | |\n>> created | timestamp(4) without time zone | | not\n>> null | | plain | |\n>> isdeleted | boolean | | not\n>> null | | plain | |\n>> lastupdated | timestamp(4) without time zone | |\n>> | | plain | |\n>> isignored | boolean | | not\n>> null | | plain | |\n>> importedaccountcode | character varying(255) | |\n>> | | extended | |\n>> importedunitcode | character varying(255) | |\n>> | | extended | |\n>> beginningbalance | numeric(19,5) | |\n>> | | main | |\n>> endingbalance | numeric(19,5) | |\n>> | | main | |\n>> createdbyid | numeric(20,0) | |\n>> | | main | |\n>> updatedbyid | numeric(20,0) | |\n>> | | main | |\n>> fileid | numeric(20,0) | | not\n>> null | | main | |\n>> previousid | numeric(20,0) | |\n>> | | main | |\n>> createdby | character varying(255) | |\n>> | | extended | |\n>> lastupdatedby | character varying(255) | |\n>> | | extended | |\n>>\n>> \\d+ schema1.table_b\n>> Table \"schema1.table_b\"\n>> Column | Type | Collation |\n>> Nullable | Default | Storage | Stats target | Description\n>>\n>> --------------------------+--------------------------------+-----------+----------+---------+----------+--------------+-------------\n>> id | numeric(20,0) | |\n>> not null | | main | |\n>> rowversion | timestamp(4) without time zone | |\n>> not null | | plain | |\n>> created | timestamp(4) without time zone | |\n>> not null | | plain | |\n>> isdeleted | boolean | |\n>> not null | | plain | |\n>> lastupdated | timestamp(4) without time zone | |\n>> | | plain | |\n>> version | numeric(10,0) | |\n>> not null | | main | |\n>> 
isactive | boolean | |\n>> not null | | plain | |\n>> name | character varying(255) | |\n>> not null | | extended | |\n>> displayname | character varying(255) | |\n>> not null | | extended | |\n>> ispublished | boolean | |\n>> not null | | plain | |\n>> isretired | boolean | |\n>> not null | | plain | |\n>> publishdatetime | timestamp(4) without time zone | |\n>> | | plain | |\n>> createdbyid | numeric(20,0) | |\n>> | | main | |\n>> updatedbyid | numeric(20,0) | |\n>> | | main | |\n>> periodid | numeric(20,0) | |\n>> not null | | main | |\n>> uploadchartyearversionid | numeric(20,0) | |\n>> not null | | main | |\n>> importchartyearversionid | numeric(20,0) | |\n>> | | main | |\n>> initialtbadjversionid | numeric(20,0) | |\n>> | | main | |\n>> latesttbadjversionid | numeric(20,0) | |\n>> | | main | |\n>> trialbalancesourceid | numeric(20,0) | |\n>> not null | | main | |\n>> filedefinitionid | numeric(20,0) | |\n>> not null | | main | |\n>> createdby | character varying(255) | |\n>> | | extended | |\n>> lastupdatedby | character varying(255) | |\n>> | | extended | |\n>>\n> I think you are in trouble when comparing float8 (double precision) with\n> numeric.\n> This small example shows problems.\n>\n> Postgres version 14.2:\n> SELECT '8217316934885843456'::float8 =\n> '8217316934885843456'::float8::bigint::float8,\n> '8217316934885843456'::float8 =\n> '8217316934885843456'::float8::numeric::float8;\n> ?column? | ?column?\n> ----------+----------\n> t | f\n> (1 row)\n>\n> I suggest a study to switch to bigint.\n>\n\n\n>\n> regards,\n> Ranier Vilela\n>\n\nHi Ranier / All,\n\nAny idea if that is the case why as postgres user the query just works\nfine? Also I enabled all parameters for auto_explain and couldn't find any\nSQL that is taking more time. 
In the end it just showed the insert statement\nand its execution plan, which I mentioned at the beginning of the email,\nstating \"trigger for constraints \" taking more time.\n\nOne thing that was observed is that, when I ran the query as the postgres user,\nit was not taking RowShare locks on the parent table (table_b), whereas\nwhen I ran the same SQL as the schema1_u user, I saw the Row Share locks\nacquired on the parent table and its FK's and indexes etc. Not sure if I am\nmissing something here.\n\nAs postgres user:\n\n clock_timestamp | relname | locktype |\ndatabase | relation | page | tuple | virtualtransaction | pid |\nmode | granted\n-------------------------------+----------------------+----------+----------+----------+------+-------+--------------------+-------+------------------+---------\n 2023-06-05 08:57:38.596859+00 | table_a_sq | relation | 16400 |\n12826203 | | | 11/14697 | 17833 | RowExclusiveLock | t\n 2023-06-05 08:57:38.596877+00 | idx1_table_a | relation | 16400\n| 28894204 | | | 11/14697 | 17833 | RowExclusiveLock |\nt\n 2023-06-05 08:57:38.596884+00 | idx2_table_a | relation | 16400 |\n28894201 | | | 11/14697 | 17833 | RowExclusiveLock | t\n 2023-06-05 08:57:38.59689+00 | idx3_table_a | relation | 16400\n| 28894199 | | | 11/14697 | 17833 | RowExclusiveLock |\nt\n 2023-06-05 08:57:38.596896+00 | idx4_table_a | relation | 16400 |\n28894197 | | | 11/14697 | 17833 | RowExclusiveLock | t\n 2023-06-05 08:57:38.596902+00 | idx5_table_a | relation | 16400\n| 28894195 | | | 11/14697 | 17833 | RowExclusiveLock |\nt\n 2023-06-05 08:57:38.596909+00 | fk1_table_a | relation | 16400 |\n28894193 | | | 11/14697 | 17833 | RowExclusiveLock | t\n 2023-06-05 08:57:38.596915+00 | fk2_table_a | relation | 16400 |\n28894191 | | | 11/14697 | 17833 | RowExclusiveLock | t\n 2023-06-05 08:57:38.596923+00 | table_a_pkey | relation | 16400 |\n12826690 | | | 11/14697 | 17833 | RowExclusiveLock | t\n 2023-06-05 08:57:38.596932+00 | staging_table_a | relation | 16400 
|\n12826482 | | | 11/14697 | 17833 | AccessShareLock | t\n 2023-06-05 08:57:38.596939+00 | table_a | relation | 16400 |\n12826497 | | | 11/14697 | 17833 | RowExclusiveLock | t\n(11 rows)\n\nAs schema1_u user:\n\n===========================\n\n clock_timestamp | relname | locktype |\ndatabase | relation | page | tuple | virtualtransaction | pid |\nmode | granted\n-------------------------------+------------------------+----------+----------+----------+------+-------+--------------------+-------+------------------+---------\n 2023-06-05 09:16:24.097184+00 | fk1_table_b | relation | 16400 |\n28894114 | | | 13/18586 | 21032 | RowShareLock | t\n 2023-06-05 09:16:24.097203+00 | fk2_table_b | relation | 16400 |\n28894112 | | | 13/18586 | 21032 | RowShareLock | t\n 2023-06-05 09:16:24.09721+00 | fk3_table_b | relation | 16400 |\n28894110 | | | 13/18586 | 21032 | RowShareLock | t\n 2023-06-05 09:16:24.097221+00 | table_b_pkey | relation | 16400 |\n12826648 | | | 13/18586 | 21032 | RowShareLock | t\n 2023-06-05 09:16:24.097229+00 | table_b | relation | 16400 |\n12826410 | | | 13/18586 | 21032 | RowShareLock | t\n 2023-06-05 09:16:24.097238+00 | table_a_sq | relation | 16400 |\n12826203 | | | 13/18586 | 21032 | RowExclusiveLock | t\n 2023-06-05 09:16:24.097246+00 | idx1_table_a | relation |\n 16400 | 28894204 | | | 13/18586 | 21032 |\nRowExclusiveLock | t\n 2023-06-05 09:16:24.097252+00 | idx2_table_a | relation | 16400\n| 28894201 | | | 13/18586 | 21032 | RowExclusiveLock |\nt\n 2023-06-05 09:16:24.097258+00 | idx3_table_a | relation |\n 16400 | 28894199 | | | 13/18586 | 21032 |\nRowExclusiveLock | t\n 2023-06-05 09:16:24.097264+00 | idx4_table_a | relation | 16400\n| 28894197 | | | 13/18586 | 21032 | RowExclusiveLock |\nt\n 2023-06-05 09:16:24.097271+00 | idx5_table_a | relation |\n 16400 | 28894195 | | | 13/18586 | 21032 |\nRowExclusiveLock | t\n 2023-06-05 09:16:24.097277+00 | fk1_table_a | relation | 16400\n| 28894193 | | | 13/18586 | 21032 | RowExclusiveLock |\nt\n 
2023-06-05 09:16:24.097283+00 | fk2_table_a | relation | 16400\n| 28894191 | | | 13/18586 | 21032 | RowExclusiveLock |\nt\n 2023-06-05 09:16:24.09729+00 | table_a_pkey | relation | 16400 |\n12826690 | | | 13/18586 | 21032 | RowExclusiveLock | t\n 2023-06-05 09:16:24.097298+00 | staging_table_a | relation | 16400 |\n12826482 | | | 13/18586 | 21032 | AccessShareLock | t\n 2023-06-05 09:16:24.097305+00 | table_a | relation | 16400 |\n12826497 | | | 13/18586 | 21032 | RowExclusiveLock | t\n 2023-06-05 09:16:24.097318+00 | fk4_table_b | relation | 16400 |\n28894116 | | | 13/18586 | 21032 | RowShareLock | t\n 2023-06-05 09:16:24.097324+00 | fk5_table_b | relation | 16400 |\n28894120 | | | 13/18586 | 21032 | RowShareLock | t\n 2023-06-05 09:16:24.09733+00 | fk6_table_b | relation | 16400 |\n28894122 | | | 13/18586 | 21032 | RowShareLock | t\n 2023-06-05 09:16:24.097336+00 | fk7_table_b | relation | 16400\n| 28894118 | | | 13/18586 | 21032 | RowShareLock |\nt\n 2023-06-05 09:16:24.097344+00 | fk8_table_b | relation | 16400\n| 29343754 | | | 13/18586 | 21032 | RowShareLock |\nt\n(21 rows)\n\nRegards, Satalabaha",
"msg_date": "Mon, 5 Jun 2023 17:25:33 +0530",
"msg_from": "Satalabaha Postgres <satalabaha.postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Weird behavior of INSERT QUERY"
}
] |
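Ranier's float8/numeric point in the thread above is easy to reproduce outside the database: the stored procedure's `parfileid` parameter is `double precision` while `fileid` is `numeric(20,0)`, and a 20-digit id needs more significant bits than the 53-bit mantissa of an IEEE-754 double. A minimal Python sketch of the same precision loss, reusing the literal from Ranier's example (this only demonstrates the binary64 rounding, not PostgreSQL behaviour itself):

```python
# Value borrowed from Ranier's example; it needs 63 significant bits,
# but a double precision (IEEE-754 binary64) mantissa holds only 53.
v = 8217316934885843456

f = float(v)      # what arrives when the id is passed as double precision
back = int(f)     # the integer the comparison would actually see

print(back == v)  # False: the nearest representable double is 512 away
print(back - v)   # -512
```

This is why switching such id parameters and columns to `bigint` (or passing them as `numeric`) avoids surprises: ids above 2^53 cannot survive a round-trip through `double precision`.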
[
{
"msg_contents": "How does the hash join estimate rows? On PG v14 it makes a wrong row estimate and then picks a nested loop left join, which makes a poor SQL plan.\n\n -> Nested Loop Left Join (cost=171112.69..475856.90 rows=1 width=521)\n -> Nested Loop Left Join (cost=171111.31..474489.54 rows=1 width=423)\n -> Hash Join (cost=171110.76..474488.93 rows=1 width=257) <<< here, actually the row count is 98000, but the optimizer estimates rows=1\n Hash Cond: (((ccsm.xxx_id)::text = (cc.xxx_id)::text) AND ((ccsm.xxx_key)::text = (cc.account_key)::text)) <<< ccsm.xx_id and ccsm.xx_key are part of the primary key.\n -> Seq Scan on cs_xxxxx ccsm (cost=0.00..254328.08 rows=4905008 width=201)\n -> Hash (cost=167540.92..167540.92 rows=237989 width=115)\n -> Index Scan using cs_xxxx_test on cs_contract cc (cost=0.43..167540.92 rows=237989 width=115)\n Index Cond: ((xx_to > CURRENT_DATE) AND ((status)::text = ANY ('{Active,Inactive,Pending}'::text[])))\n -> Index Scan using cs_xxx_pk on cs_site cs (cost=0.56..0.61 rows=1 width=203)\n Index Cond: ((xxx_key)::text = (ccsm.xxx_key)::text)\n\nThanks,\n\nJames",
"msg_date": "Fri, 9 Jun 2023 08:36:01 +0000",
"msg_from": "\"James Pang (chaolpan)\" <chaolpan@cisco.com>",
"msg_from_op": true,
"msg_subject": "wrong rows estimation by hash join"
},
{
"msg_contents": "Hi,\n\nOn 6/9/23 10:36, James Pang (chaolpan) wrote:\n> How does hash join estimation rows ? pg v14, it make wrong rows\n> estimation then leave nest loop lef join that make poor sql plan. A\n> \n\nI doubt this is specific to hashjoins, we estimate cardinality the same\nway for all joins (or more precisely, we estimate it before picking the\nparticular join method).\n\nI'm just guessing, but I'd bet the join condition is correlated with the\nfilter on cs_contract:\n\n> -> Index\n> Scan using cs_xxxx_test on cs_contract cc (cost=0.43..167540.92\n> rows=237989 width=115)\n> Index Cond: ((xx_to > CURRENT_DATE) AND ((status)::text = ANY\n> ('{Active,Inactive,Pending}'::text[])))\n> \n\nIf you remove that condition, does the estimate improve?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 10 Jun 2023 22:38:31 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: wrong rows estimation by hash join"
}
] |
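Tomas's point — that join cardinality is estimated the same way before any join method is chosen, and that multiplying per-clause selectivities under an independence assumption collapses the estimate when the clauses are correlated — can be illustrated with a toy model. This is a deliberately simplified sketch with synthetic data and hypothetical column roles (the real estimator also consults MCV lists and null fractions); it shows how two equality clauses on columns of one composite primary key get their selectivities multiplied:

```python
# Two join columns where xxx_key functionally determines xxx_id,
# as when both columns belong to the same composite primary key.
n = 10_000
ccsm = [(k % 100, k) for k in range(n)]          # (xxx_id, xxx_key)
cc = [(k % 100, k) for k in range(0, n, 10)]     # filtered contract rows

ndistinct = lambda rows, col: len({r[col] for r in rows})

# Independence assumption: selectivity of the two-column equality join
# is the product of per-column selectivities ~ 1/max(nd_left, nd_right).
sel = (1 / max(ndistinct(ccsm, 0), ndistinct(cc, 0))) \
    * (1 / max(ndistinct(ccsm, 1), ndistinct(cc, 1)))
estimate = len(ccsm) * len(cc) * sel

# Actual cardinality: xxx_key alone already pins down the matching row,
# so every cc row joins to exactly one ccsm row.
ccsm_set = set(ccsm)
actual = sum(1 for row in cc if row in ccsm_set)

print(round(estimate), actual)   # 10 vs 1000: a 100x underestimate
```

The second selectivity factor is redundant information here, but the multiplication cannot know that, so the product drives the estimate toward rows=1, after which a nested loop on top looks artificially cheap.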
[
{
"msg_contents": "Hi,\n We migrated from Oracle to PostgreSQL 14.8, and one SQL statement has regressed: it runs in 5800 milliseconds in PostgreSQL v14.8, while the same SQL finished in several hundred milliseconds in the Oracle database.\n With multiple table JOINs, if the join condition is tablea.column1=tableb.column1, the optimizer will use the index to filter data in nested loops, but if it is tablea.column1=regexp_replace(tableb.column1....),\nthe optimizer cannot use the index on tablea.column1; it then does a table scan, and the nested loop produces a lot of rows before tablea.column1=regexp_replace(tableb.column1....) is applied as a filter. As a workaround we created a view and joined on tablea.column1=view.column1, which works.\n Is this expected? Details below.\n\n SELECT DISTINCT a.xxx, b.xxx as TOLLFREE FROM tableA a, tableB b\n WHERE a.column1 = regexp_replace(b.column1,'[^0-9]','') AND b.column2 = $1 AND b.column3= $2\n AND NOT EXISTS (SELECT 1 FROM tableC c\n WHERE c.xxid = b.xxid\n AND c.xxtype = case when b.col4 = 1 then 'TollFree' else 'Toll' end\n AND c.xxid = $3)\n\n Unique (cost=423374.60..423377.87 rows=436 width=21) (actual time=6070.963..6071.054 rows=395 loops=1)\n Buffers: shared hit=148\n -> Sort (cost=423374.60..423375.69 rows=436 width=21) (actual time=6070.963..6070.992 rows=397 loops=1)\n Sort Key: a.xx, b.xx\n Sort Method: quicksort Memory: 56kB\n Buffers: shared hit=148\n -> Nested Loop (cost=0.69..423355.48 rows=436 width=21) (actual time=120.338..6070.669 rows=397 loops=1)\n Join Filter: ((a.column1)::text = regexp_replace((b.column1)::text, '[^0-9]'::text, ''::text)) <<< optimizer only applies the filter after the nested loop has produced a lot of rows\n Rows Removed by Join Filter: 1511155\n Buffers: shared hit=145\n -> Seq Scan on tableA a (cost=0.00..161.12 rows=7712 width=25) (actual time=0.022..1.380 rows=7712 loops=1)\n Buffers: shared hit=84\n -> Materialize (cost=0.69..153.12 rows=207 width=21) (actual time=0.000..0.011 rows=196 loops=7712)\n Buffers: shared hit=58\n -> Nested Loop
Anti Join (cost=0.69..152.09 rows=207 width=21) (actual time=0.069..0.278 rows=196 loops=1)\n Join Filter: ((c.xxid = b.xxid) AND ((c.xxxx)::text = CASE WHEN (b.column2 = 1) THEN 'aaa'::text ELSE 'bbb'::text END))\n Buffers: shared hit=58\n -> Index Scan using idx_xxx on tableB b (cost=0.42..146.55 rows=207 width=29) (actual time=0.047..0.207 rows=196 loops=1)\n Index Cond: ((colum3 = 40957) AND (column2 = 1))\n Buffers: shared hit=56\n -> Materialize (cost=0.27..1.40 rows=1 width=15) (actual time=0.000..0.000 rows=0 loops=196)\n Buffers: shared hit=2\n -> Index Only Scan using pk_xxxx on tableC c (cost=0.27..1.39 rows=1 width=15) (actual time=0.020..0.020 rows=0 loops=1)\n Index Cond: (xxxid = 12407262)\n Heap Fetches: 0\n Buffers: shared hit=2\n\nIf we create a view, the SQL gets done in a few milliseconds:\nCREATE VIEW tableBREGXP as (select xx,column2,column3,xxid,regexp_replace(column1,'[^0-9]','') as column1 from tableB);\n SELECT DISTINCT a.xxx, b.xxx as TOLLFREE FROM tableA a, tableBREGXP b <<< replace tableB with the view name.\n WHERE a.column1 = b.column1 AND b.column2 = $1 AND b.column3= $2 <<< use b.column1 to replace regexp_replace((b.column1)::text, '[^0-9]'::text, ''::text)\n AND NOT EXISTS (SELECT 1 FROM tableC c\n WHERE c.xxid = b.xxid\n AND c.xxtype = case when b.col4 = 1 then 'TollFree' else 'Toll' end\n AND c.xxid = $3)\n\nHashAggregate (cost=408.19..412.76 rows=457 width=21) (actual time=4.524..4.644 rows=395 loops=1)\n Group Key: a.xxx, b.xx\n Batches: 1 Memory Usage: 61kB\n Buffers: shared hit=693\n -> Nested Loop (cost=0.97..405.90 rows=457 width=21) (actual time=0.154..4.205 rows=397 loops=1)\n Buffers: shared hit=693\n -> Nested Loop Anti Join (cost=0.69..214.97 rows=217 width=40) (actual time=0.137..2.877 rows=196 loops=1)\n Join Filter: ((c.xxyid = b.xxid) AND ((c.xxxxx)::text = CASE WHEN (b.column2 = 1) THEN 'TollFree'::text ELSE 'Toll'::text END))\n Buffers: shared hit=55\n -> Index Scan using idx_xxx on b 
(cost=0.42..207.06 rows=217 width=64) (actual time=0.123..2.725 rows=196 loops=1)\n Index Cond: ((column2 = 40957) AND (column3 = 1))\n Buffers: shared hit=53\n -> Materialize (cost=0.27..1.40 rows=1 width=15) (actual time=0.000..0.000 rows=0 loops=196)\n Buffers: shared hit=2\n -> Index Only Scan using pk_xxxx on tableC c (cost=0.27..1.39 rows=1 width=15) (actual time=0.010..0.010 rows=0 loops=1)\n Index Cond: (siteid = 12407262)\n Heap Fetches: 0\n Buffers: shared hit=2\n -> Index Scan using idx_xxx on tableA a (cost=0.28..0.86 rows=2 width=25) (actual time=0.004..0.005 rows=2 loops=196)\n Index Cond: ((xxxx)::text = (regexp_replace((b.phonenumber)::text, '[^0-9]'::text, ''::text))) <<< it use the index to filter a lot of rows here,\n Buffers: shared hit=638\nPlanning Time: 0.619 ms\nExecution Time: 4.762 ms\n\n\nThanks,\n\nJames",
"msg_date": "Mon, 12 Jun 2023 08:09:28 +0000",
"msg_from": "\"James Pang (chaolpan)\" <chaolpan@cisco.com>",
"msg_from_op": true,
"msg_subject": "Postgresql equal join on function with columns not use index"
},
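The numbers in James's slow plan above are internally consistent, which confirms the diagnosis: every outer row from the seq scan on tableA was compared against every materialized inner row before the regexp join filter rejected the pairs. A quick arithmetic check of the EXPLAIN counters:

```python
outer_rows = 7712   # Seq Scan on tableA, rows=7712
inner_rows = 196    # rows returned by Materialize for each outer row
surviving = 397     # rows that passed the join filter

pairs = outer_rows * inner_rows
print(pairs)               # 1511552 join-filter evaluations in total
print(pairs - surviving)   # 1511155, the plan's "Rows Removed by Join Filter"
```

With an indexable join clause, the inner side is probed ~196 times through the index instead, which is why the view-based rewrite drops to a few milliseconds.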
{
"msg_contents": "\"James Pang (chaolpan)\" <chaolpan@cisco.com> writes:\n> We migrate from Oracle to Postgresql14.8, one SQL has regression in Postgres run in 5800 milliseconds in Postgresql v14.8, but the same SQL got done in several hundred milliseconds in Oracle database.\n> With multiple table JOINs, if the join condition is tablea.column1=tableb.column1, optimizer will use the index to filter data in nest loops, but if tablea.column1=regexp_replace(tableb.column1....),\n> Optimizer will not be able to use the index on tablea.column1, then it do a table scan and nestloop to produce a lot rows then use tablea.column1=regexp_replace(tableb.column1....) as a filter. As a workaround we create a view then use tablea.column1=view.column1 that works.\n> Is it expected ? details as below.\n\nIt's impossible to comment on this usefully with such a fragmentary\ndescription of the problem. Please send a complete, self-contained\ntest case if you want anybody to look at it carefully.\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 12 Jun 2023 09:18:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql equal join on function with columns not use index"
},
{
"msg_contents": "Hi, \n Looks like it's the function \"regexp_replace\" volatile and restrict=false make the difference, we have our application role with default search_path=oracle,$user,public,pg_catalog. \n =# select oid,proname,pronamespace::regnamespace,prosecdef,proisstrict,provolatile from pg_proc where proname='regexp_replace' order by oid;\n oid | proname | pronamespace | prosecdef | proisstrict | provolatile\n-------+----------------+--------------+-----------+-------------+-------------\n 2284 | regexp_replace | pg_catalog | f | t | i\n 2285 | regexp_replace | pg_catalog | f | t | i\n 17095 | regexp_replace | oracle | f | f | v \n 17096 | regexp_replace | oracle | f | f | v\n 17097 | regexp_replace | oracle | f | f | v\n 17098 | regexp_replace | oracle | f | f | v\n\n--with default it use orafce, oracle.regexp_replace function,\nSelect a.phonenumber,... from tableA a, tableB b where a.phonenumber=oracle. regexp_replace(b.PHONENUMBER,'[^0-9]','') , \n --index on a.phonenumber not used\n \nSwitch to pg_catalog.regexp_replace(b.PHONENUMBER,'[^0-9]',''), \n Index on a.phonenumber got used.\n\nThanks,\n\nJames Pang \n\n-----Original Message-----\nFrom: Tom Lane <tgl@sss.pgh.pa.us> \nSent: Monday, June 12, 2023 9:19 PM\nTo: James Pang (chaolpan) <chaolpan@cisco.com>\nCc: pgsql-performance@lists.postgresql.org\nSubject: Re: Postgresql equal join on function with columns not use index\n\n\"James Pang (chaolpan)\" <chaolpan@cisco.com> writes:\n> We migrate from Oracle to Postgresql14.8, one SQL has regression in Postgres run in 5800 milliseconds in Postgresql v14.8, but the same SQL got done in several hundred milliseconds in Oracle database.\n> With multiple table JOINs, if the join condition is \n> tablea.column1=tableb.column1, optimizer will use the index to filter \n> data in nest loops, but if \n> tablea.column1=regexp_replace(tableb.column1....),\n> Optimizer will not be able to use the index on tablea.column1, then it do a table scan and nestloop to 
produce a lot rows then use tablea.column1=regexp_replace(tableb.column1....) as a filter. As a workaround we create a view then use tablea.column1=view.column1 that works.\n> Is it expected ? details as below.\n\nIt's impossible to comment on this usefully with such a fragmentary description of the problem. Please send a complete, self-contained test case if you want anybody to look at it carefully.\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 12 Jun 2023 14:20:25 +0000",
"msg_from": "\"James Pang (chaolpan)\" <chaolpan@cisco.com>",
"msg_from_op": true,
"msg_subject": "RE: Postgresql equal join on function with columns not use index"
},
{
"msg_contents": "\"James Pang (chaolpan)\" <chaolpan@cisco.com> writes:\n> Looks like it's the function \"regexp_replace\" volatile and restrict=false make the difference, we have our application role with default search_path=oracle,$user,public,pg_catalog. \n> =# select oid,proname,pronamespace::regnamespace,prosecdef,proisstrict,provolatile from pg_proc where proname='regexp_replace' order by oid;\n> oid | proname | pronamespace | prosecdef | proisstrict | provolatile\n> -------+----------------+--------------+-----------+-------------+-------------\n> 2284 | regexp_replace | pg_catalog | f | t | i\n> 2285 | regexp_replace | pg_catalog | f | t | i\n> 17095 | regexp_replace | oracle | f | f | v \n> 17096 | regexp_replace | oracle | f | f | v\n> 17097 | regexp_replace | oracle | f | f | v\n> 17098 | regexp_replace | oracle | f | f | v\n\nWhy in the world are the oracle ones marked volatile? That's what's\npreventing them from being used in index quals.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 13 Jun 2023 09:50:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql equal join on function with columns not use index"
},
{
"msg_contents": "út 13. 6. 2023 v 15:50 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> \"James Pang (chaolpan)\" <chaolpan@cisco.com> writes:\n> > Looks like it's the function \"regexp_replace\" volatile and\n> restrict=false make the difference, we have our application role with\n> default search_path=oracle,$user,public,pg_catalog.\n> > =# select\n> oid,proname,pronamespace::regnamespace,prosecdef,proisstrict,provolatile\n> from pg_proc where proname='regexp_replace' order by oid;\n> > oid | proname | pronamespace | prosecdef | proisstrict |\n> provolatile\n> >\n> -------+----------------+--------------+-----------+-------------+-------------\n> > 2284 | regexp_replace | pg_catalog | f | t | i\n> > 2285 | regexp_replace | pg_catalog | f | t | i\n> > 17095 | regexp_replace | oracle | f | f | v\n> > 17096 | regexp_replace | oracle | f | f | v\n> > 17097 | regexp_replace | oracle | f | f | v\n> > 17098 | regexp_replace | oracle | f | f | v\n>\n> Why in the world are the oracle ones marked volatile? That's what's\n> preventing them from being used in index quals.\n>\n\nIt looks like orafce issue\n\nI'll fix it\n\nRegards\n\nPavel\n\n\n>\n>\n> regards, tom lane\n>\n>\n>\n",
"msg_date": "Tue, 13 Jun 2023 16:17:59 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql equal join on function with columns not use index"
},
{
"msg_contents": "út 13. 6. 2023 v 16:17 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> út 13. 6. 2023 v 15:50 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>\n>> \"James Pang (chaolpan)\" <chaolpan@cisco.com> writes:\n>> > Looks like it's the function \"regexp_replace\" volatile and\n>> restrict=false make the difference, we have our application role with\n>> default search_path=oracle,$user,public,pg_catalog.\n>> > =# select\n>> oid,proname,pronamespace::regnamespace,prosecdef,proisstrict,provolatile\n>> from pg_proc where proname='regexp_replace' order by oid;\n>> > oid | proname | pronamespace | prosecdef | proisstrict |\n>> provolatile\n>> >\n>> -------+----------------+--------------+-----------+-------------+-------------\n>> > 2284 | regexp_replace | pg_catalog | f | t | i\n>> > 2285 | regexp_replace | pg_catalog | f | t | i\n>> > 17095 | regexp_replace | oracle | f | f | v\n>> > 17096 | regexp_replace | oracle | f | f | v\n>> > 17097 | regexp_replace | oracle | f | f | v\n>> > 17098 | regexp_replace | oracle | f | f | v\n>>\n>> Why in the world are the oracle ones marked volatile? That's what's\n>> preventing them from being used in index quals.\n>>\n>\n> It looks like orafce issue\n>\n> I'll fix it\n>\n\nshould be fixed in orafce 4.4.\n\nRegards\n\nPavel\n\n\n>\n> Regards\n>\n> Pavel\n>\n>\n>>\n>> regards, tom lane\n>>\n>>\n>>\n",
"msg_date": "Tue, 13 Jun 2023 17:00:57 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql equal join on function with columns not use index"
},
{
"msg_contents": "Thanks a lot, we use orafce 3.17, and there some varchar2 columns and function indexes depends on oracle.substr too. Is it ok to upgrade to orafce version 4.4 by “alter extension update to ‘4.4’? it’s online to do that ?\r\n\r\nThanks,\r\n\r\nJames\r\n\r\nFrom: Pavel Stehule <pavel.stehule@gmail.com>\r\nSent: Tuesday, June 13, 2023 11:01 PM\r\nTo: Tom Lane <tgl@sss.pgh.pa.us>\r\nCc: James Pang (chaolpan) <chaolpan@cisco.com>; pgsql-performance@lists.postgresql.org\r\nSubject: Re: Postgresql equal join on function with columns not use index\r\n\r\n\r\n\r\nút 13. 6. 2023 v 16:17 odesílatel Pavel Stehule <pavel.stehule@gmail.com<mailto:pavel.stehule@gmail.com>> napsal:\r\n\r\n\r\nút 13. 6. 2023 v 15:50 odesílatel Tom Lane <tgl@sss.pgh.pa.us<mailto:tgl@sss.pgh.pa.us>> napsal:\r\n\"James Pang (chaolpan)\" <chaolpan@cisco.com<mailto:chaolpan@cisco.com>> writes:\r\n> Looks like it's the function \"regexp_replace\" volatile and restrict=false make the difference, we have our application role with default search_path=oracle,$user,public,pg_catalog.\r\n> =# select oid,proname,pronamespace::regnamespace,prosecdef,proisstrict,provolatile from pg_proc where proname='regexp_replace' order by oid;\r\n> oid | proname | pronamespace | prosecdef | proisstrict | provolatile\r\n> -------+----------------+--------------+-----------+-------------+-------------\r\n> 2284 | regexp_replace | pg_catalog | f | t | i\r\n> 2285 | regexp_replace | pg_catalog | f | t | i\r\n> 17095 | regexp_replace | oracle | f | f | v\r\n> 17096 | regexp_replace | oracle | f | f | v\r\n> 17097 | regexp_replace | oracle | f | f | v\r\n> 17098 | regexp_replace | oracle | f | f | v\r\n\r\nWhy in the world are the oracle ones marked volatile? That's what's\r\npreventing them from being used in index quals.\r\n\r\nIt looks like orafce issue\r\n\r\nI'll fix it\r\n\r\nshould be fixed in orafce 4.4.\r\n\r\nRegards\r\n\r\nPavel\r\n\r\n\r\nRegards\r\n\r\nPavel\r\n\r\n\r\n regards, tom lane\r\n\r\n",
"msg_date": "Thu, 15 Jun 2023 08:32:41 +0000",
"msg_from": "\"James Pang (chaolpan)\" <chaolpan@cisco.com>",
"msg_from_op": true,
"msg_subject": "RE: Postgresql equal join on function with columns not use index"
},
{
"msg_contents": "Hi\n\nčt 15. 6. 2023 v 10:32 odesílatel James Pang (chaolpan) <chaolpan@cisco.com>\nnapsal:\n\n> Thanks a lot, we use orafce 3.17, and there some varchar2 columns and\n> function indexes depends on oracle.substr too. Is it ok to upgrade to\n> orafce version 4.4 by “alter extension update to ‘4.4’? it’s online to do\n> that ?\n>\n\nI didn't release 4.4, but it is available on github. Orafce supports\nonline upgrades\n\nHot fix can be execution of\nhttps://github.com/orafce/orafce/blob/master/orafce--4.3--4.4.sql file\n\nRegards\n\nPavel\n\n\n\n>\n> Thanks,\n>\n>\n>\n> James\n>\n>\n>\n> *From:* Pavel Stehule <pavel.stehule@gmail.com>\n> *Sent:* Tuesday, June 13, 2023 11:01 PM\n> *To:* Tom Lane <tgl@sss.pgh.pa.us>\n> *Cc:* James Pang (chaolpan) <chaolpan@cisco.com>;\n> pgsql-performance@lists.postgresql.org\n> *Subject:* Re: Postgresql equal join on function with columns not use\n> index\n>\n>\n>\n>\n>\n>\n>\n> út 13. 6. 2023 v 16:17 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n> napsal:\n>\n>\n>\n>\n>\n> út 13. 6. 2023 v 15:50 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>\n> \"James Pang (chaolpan)\" <chaolpan@cisco.com> writes:\n> > Looks like it's the function \"regexp_replace\" volatile and\n> restrict=false make the difference, we have our application role with\n> default search_path=oracle,$user,public,pg_catalog.\n> > =# select\n> oid,proname,pronamespace::regnamespace,prosecdef,proisstrict,provolatile\n> from pg_proc where proname='regexp_replace' order by oid;\n> > oid | proname | pronamespace | prosecdef | proisstrict |\n> provolatile\n> >\n> -------+----------------+--------------+-----------+-------------+-------------\n> > 2284 | regexp_replace | pg_catalog | f | t | i\n> > 2285 | regexp_replace | pg_catalog | f | t | i\n> > 17095 | regexp_replace | oracle | f | f | v\n> > 17096 | regexp_replace | oracle | f | f | v\n> > 17097 | regexp_replace | oracle | f | f | v\n> > 17098 | regexp_replace | oracle | f | f | v\n>\n> Why in the world are the oracle ones marked volatile? That's what's\n> preventing them from being used in index quals.\n>\n>\n>\n> It looks like orafce issue\n>\n>\n>\n> I'll fix it\n>\n>\n>\n> should be fixed in orafce 4.4.\n>\n>\n>\n> Regards\n>\n>\n>\n> Pavel\n>\n>\n>\n>\n>\n> Regards\n>\n>\n>\n> Pavel\n>\n>\n>\n>\n> regards, tom lane\n>\n>\n",
"msg_date": "Thu, 15 Jun 2023 10:35:47 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql equal join on function with columns not use index"
}
] |
[
{
"msg_contents": "Hello\n\n\nI have a database with few 60gb tables. Tables rows are requested with multiple ANY or IN operators. I am not able to find an easy way to make DB able to use indexes. I often hit the index, but see a a spike of 200mb of IO or disk read.\n\n\nI am using version 13 but soon 14.\n\n\nI wrote a reproduction script on version 14 with plans included. https://gist.github.com/benoittgt/ab72dc4cfedea2a0c6a5ee809d16e04d\n\n\nI also have plans on a snapshot of the DB with real data.\n\n- The current query that I try to improve : https://explain.dalibo.com/plan/8b8f6e0he9feb551\n\n - I added the DB schema + index in query view. As you can see I have many indexes for testing purpose and try what the planner can do.\n\n- The optimized query when I have only one ANY and migrate to UNION ALL for each parameter of the ANY operator https://explain.dalibo.com/plan/427gg053d07328ga . Query is fast as I would like but it means generate some merge to be able to get a fast result.\n\n- The new issue I have when I have a new ANY operator on the previous optimized query. Big IO/read https://explain.dalibo.com/plan/e7ha9g637b4eh946\n\n\nIt seems to me quite undoable to generate for every parameters a query that will then merge. I have sometimes 3-4 ANY operators with up to 15 elements in an array.\n\n\nIs there a misusage of my indexes?\n\nIs there a limitation when using ANY or IN operators and ordered LIMIT behind?\n\n\nThanks a lot",
"msg_date": "Mon, 12 Jun 2023 20:17:34 +0000",
"msg_from": "benoit <benoit@hopsandfork.com>",
"msg_from_op": true,
"msg_subject": "Forced to use UNION ALL when having multiple ANY operators and ORDER\n BY LIMIT"
},
{
"msg_contents": "On Mon, Jun 12, 2023 at 1:17 PM benoit <benoit@hopsandfork.com> wrote:\n> Is there a misusage of my indexes?\n>\n> Is there a limitation when using ANY or IN operators and ordered LIMIT behind?\n\nIt's complicated. Do you find that you get satisfactory performance if\nyou force a bitmap index scan? In other words, what is the effect of\n\"set enable_indexscan = off\" on your original query? Does that speed\nup execution at all? (I think that this approach ought to produce a\nplan that uses a bitmap index scan in place of the index scan, without\nchanging anything else.)\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 12 Jun 2023 13:34:50 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Forced to use UNION ALL when having multiple ANY operators and\n ORDER BY LIMIT"
},
{
"msg_contents": "Sadly it doesn't help to disable indexscan. The plan : https://explain.dalibo.com/plan/3b3gfce5b29c3hh4\r\n\r\n________________________________\r\nDe : Peter Geoghegan <pg@bowt.ie>\r\nEnvoyé : lundi 12 juin 2023 22:34:50\r\nÀ : benoit\r\nCc : pgsql-performance@lists.postgresql.org\r\nObjet : Re: Forced to use UNION ALL when having multiple ANY operators and ORDER BY LIMIT\r\n\r\nOn Mon, Jun 12, 2023 at 1:17 PM benoit <benoit@hopsandfork.com> wrote:\r\n> Is there a misusage of my indexes?\r\n>\r\n> Is there a limitation when using ANY or IN operators and ordered LIMIT behind?\r\n\r\nIt's complicated. Do you find that you get satisfactory performance if\r\nyou force a bitmap index scan? In other words, what is the effect of\r\n\"set enable_indexscan = off\" on your original query? Does that speed\r\nup execution at all? (I think that this approach ought to produce a\r\nplan that uses a bitmap index scan in place of the index scan, without\r\nchanging anything else.)\r\n\r\n--\r\nPeter Geoghegan\r\n",
"msg_date": "Mon, 12 Jun 2023 20:50:50 +0000",
"msg_from": "benoit <benoit@hopsandfork.com>",
"msg_from_op": true,
"msg_subject": "RE: Forced to use UNION ALL when having multiple ANY operators and\n ORDER BY LIMIT"
},
{
"msg_contents": "I normally create my indexes to match the where clause of the query. While technically, it should not matter, I find a lot of time, it does.\n\nI would create an index on (status, sender_reference, sent_at) and see if the improves your query performance.\n\nSELECT *\n FROM docs\n WHERE status IN ('draft', 'sent') \n AND sender_reference IN ('Custom/1175', 'Client/362', 'Custom/280') \n ORDER BY sent_at DESC\n\nThanks,\n\n\nChris Hoover\nSenior DBA\nAWeber.com\nCell: (803) 528-2269\nEmail: chrish@aweber.com\n\n\n\n> On Jun 12, 2023, at 4:17 PM, benoit <benoit@hopsandfork.com> wrote:\n> \n> Hello\n> \n> I have a database with few 60gb tables. Tables rows are requested with multiple ANY or IN operators. I am not able to find an easy way to make DB able to use indexes. I often hit the index, but see a a spike of 200mb of IO or disk read.\n> \n> I am using version 13 but soon 14.\n> \n> I wrote a reproduction script on version 14 with plans included. https://gist.github.com/benoittgt/ab72dc4cfedea2a0c6a5ee809d16e04d\n> \n> I also have plans on a snapshot of the DB with real data.\n> - The current query that I try to improve : https://explain.dalibo.com/plan/8b8f6e0he9feb551\n> - I added the DB schema + index in query view. As you can see I have many indexes for testing purpose and try what the planner can do.\n> - The optimized query when I have only one ANY and migrate to UNION ALL for each parameter of the ANY operator https://explain.dalibo.com/plan/427gg053d07328ga . Query is fast as I would like but it means generate some merge to be able to get a fast result.\n> - The new issue I have when I have a new ANY operator on the previous optimized query. Big IO/read https://explain.dalibo.com/plan/e7ha9g637b4eh946\n> \n> It seems to me quite undoable to generate for every parameters a query that will then merge. I have sometimes 3-4 ANY operators with up to 15 elements in an array.\n> \n> Is there a misusage of my indexes?\n> Is there a limitation when using ANY or IN operators and ordered LIMIT behind?\n> \n> Thanks a lot",
"msg_date": "Mon, 12 Jun 2023 16:55:22 -0400",
"msg_from": "Chris Hoover <chrish@aweber.com>",
"msg_from_op": false,
"msg_subject": "Re: Forced to use UNION ALL when having multiple ANY operators and\n ORDER BY LIMIT"
},
{
"msg_contents": "This new index is used but still the read is 230mb.\n\n\nhttps://explain.dalibo.com/plan/b0f28a9e8a136afd\n\n\n________________________________\nDe : Chris Hoover <chrish@aweber.com>\nEnvoyé : lundi 12 juin 2023 22:55\nÀ : benoit\nCc : pgsql-performance@lists.postgresql.org\nObjet : Re: Forced to use UNION ALL when having multiple ANY operators and ORDER BY LIMIT\n\nI normally create my indexes to match the where clause of the query. While technically, it should not matter, I find a lot of time, it does.\n\nI would create an index on (status, sender_reference, sent_at) and see if the improves your query performance.\n\n SELECT * FROM docs WHERE status IN ('draft', 'sent') AND sender_reference IN ('Custom/1175', 'Client/362', 'Custom/280') ORDER BY sent_at DESC\nThanks,\n\n\nChris Hoover\nSenior DBA\nAWeber.com\nCell: (803) 528-2269\nEmail: chrish@aweber.com\n\n\n\nOn Jun 12, 2023, at 4:17 PM, benoit <benoit@hopsandfork.com> wrote:\n\n\nHello\n\nI have a database with few 60gb tables. Tables rows are requested with multiple ANY or IN operators. I am not able to find an easy way to make DB able to use indexes. I often hit the index, but see a a spike of 200mb of IO or disk read.\n\nI am using version 13 but soon 14.\n\nI wrote a reproduction script on version 14 with plans included. https://gist.github.com/benoittgt/ab72dc4cfedea2a0c6a5ee809d16e04d\n\nI also have plans on a snapshot of the DB with real data.\n- The current query that I try to improve : https://explain.dalibo.com/plan/8b8f6e0he9feb551\n - I added the DB schema + index in query view. As you can see I have many indexes for testing purpose and try what the planner can do.\n- The optimized query when I have only one ANY and migrate to UNION ALL for each parameter of the ANY operator https://explain.dalibo.com/plan/427gg053d07328ga . Query is fast as I would like but it means generate some merge to be able to get a fast result.\n- The new issue I have when I have a new ANY operator on the previous optimized query. Big IO/read https://explain.dalibo.com/plan/e7ha9g637b4eh946\n\nIt seems to me quite undoable to generate for every parameters a query that will then merge. I have sometimes 3-4 ANY operators with up to 15 elements in an array.\n\nIs there a misusage of my indexes?\nIs there a limitation when using ANY or IN operators and ordered LIMIT behind?\n\nThanks a lot",
"msg_date": "Mon, 12 Jun 2023 21:34:51 +0000",
"msg_from": "benoit <benoit@hopsandfork.com>",
"msg_from_op": true,
"msg_subject": "RE: Forced to use UNION ALL when having multiple ANY operators and\n ORDER BY LIMIT"
},
{
"msg_contents": "So, had a bit more time to look into this.\n\nHere is the issue:\nYour query is requesting 20 rows. However, you are doing a sort on sent_at. Because of this, the database is having to pull all rows that match the status and sender_reference, sort them, and then give you 20.\n\nFrom your example:\n1. You have 29744 rows that match your criteria. ->\nIndex Scan using docs_sent_at_idx_1 on public.docs (cost=0.56..39748945.58 rows=29744 width=38) \n\n2. To get those 29744 rows, the database had to read 45046 database blocks (8KB/block). 45046 * 8KB is your ~220MB of read. -> \nBuffers: shared hit=1421 read=45046\n\n3. Since you are selecting all rows, the database uses the index to find the matching rows and then has to go read the heap blocks to retrieve and validate the rows. Then it sorts all the returned rows by date and returns\nthe first 20 rows.\n\nSo, PostgreSQL is performing well. It’s just an expensive way to get 20 rows and I don’t see an easy way to make it better if that is what is needed.\n\nThanks,\n\n\nChris Hoover\nSenior DBA\nAWeber.com\nCell: (803) 528-2269\nEmail: chrish@aweber.com\n\n\n\n> On Jun 12, 2023, at 5:34 PM, benoit <benoit@hopsandfork.com> wrote:\n> \n> This new index is used but still the read is 230mb.\n> \n> https://explain.dalibo.com/plan/b0f28a9e8a136afd\n> \n> \n> De : Chris Hoover <chrish@aweber.com>\n> Envoyé : lundi 12 juin 2023 22:55\n> À : benoit\n> Cc : pgsql-performance@lists.postgresql.org\n> Objet : Re: Forced to use UNION ALL when having multiple ANY operators and ORDER BY LIMIT\n> \n> I normally create my indexes to match the where clause of the query. 
While technically, it should not matter, I find a lot of time, it does.\n> \n> I would create an index on (status, sender_reference, sent_at) and see if the improves your query performance.\n> \n> \n> SELECT * FROM docs WHERE status \n> IN ('draft',\n> 'sent')\n> AND sender_reference \n> IN ('Custom/1175',\n> 'Client/362',\n> 'Custom/280')\n> ORDER BY sent_at DESC\n> Thanks,\n> \n> \n> Chris Hoover\n> Senior DBA\n> AWeber.com\n> Cell: (803) 528-2269\n> Email: chrish@aweber.com\n> \n> \n> \n>> On Jun 12, 2023, at 4:17 PM, benoit <benoit@hopsandfork.com> wrote:\n>> \n>> Hello\n>> \n>> I have a database with few 60gb tables. Tables rows are requested with multiple ANY or IN operators. I am not able to find an easy way to make DB able to use indexes. I often hit the index, but see a a spike of 200mb of IO or disk read.\n>> \n>> I am using version 13 but soon 14.\n>> \n>> I wrote a reproduction script on version 14 with plans included. https://gist.github.com/benoittgt/ab72dc4cfedea2a0c6a5ee809d16e04d\n>> \n>> I also have plans on a snapshot of the DB with real data.\n>> - The current query that I try to improve : https://explain.dalibo.com/plan/8b8f6e0he9feb551\n>> - I added the DB schema + index in query view. As you can see I have many indexes for testing purpose and try what the planner can do.\n>> - The optimized query when I have only one ANY and migrate to UNION ALL for each parameter of the ANY operator https://explain.dalibo.com/plan/427gg053d07328ga . Query is fast as I would like but it means generate some merge to be able to get a fast result.\n>> - The new issue I have when I have a new ANY operator on the previous optimized query. Big IO/read https://explain.dalibo.com/plan/e7ha9g637b4eh946\n>> \n>> It seems to me quite undoable to generate for every parameters a query that will then merge. 
I have sometimes 3-4 ANY operators with up to 15 elements in an array.\n>> \n>> Is there a misusage of my indexes?\n>> Is there a limitation when using ANY or IN operators and ordered LIMIT behind?\n>> \n>> Thanks a lot",
"msg_date": "Mon, 12 Jun 2023 20:09:53 -0400",
"msg_from": "Chris Hoover <chrish@aweber.com>",
"msg_from_op": false,
"msg_subject": "Re: Forced to use UNION ALL when having multiple ANY operators and\n ORDER BY LIMIT"
},
{
"msg_contents": "Hi,\n\nDo you really need to do select *?\n\nIn other words, is it necessary to have all columns in the result?\n\n \n\nMichel SALAIS\n\n \n\nDe : benoit <benoit@hopsandfork.com> \nEnvoyé : lundi 12 juin 2023 23:35\nÀ : Chris Hoover <chrish@aweber.com>\nCc : pgsql-performance@lists.postgresql.org\nObjet : RE: Forced to use UNION ALL when having multiple ANY operators and\nORDER BY LIMIT\n\n \n\nThis new index is used but still the read is 230mb.\n\n \n\nhttps://explain.dalibo.com/plan/b0f28a9e8a136afd\n\n \n\n _____ \n\nDe : Chris Hoover <chrish@aweber.com <mailto:chrish@aweber.com> >\nEnvoyé : lundi 12 juin 2023 22:55\nÀ : benoit\nCc : pgsql-performance@lists.postgresql.org\n<mailto:pgsql-performance@lists.postgresql.org> \nObjet : Re: Forced to use UNION ALL when having multiple ANY operators and\nORDER BY LIMIT \n\n \n\nI normally create my indexes to match the where clause of the query. While\ntechnically, it should not matter, I find a lot of time, it does. \n\n \n\nI would create an index on (status, sender_reference, sent_at) and see if\nthe improves your query performance.\n\n \n\n\t\n \n\nSELECT * FROM docs WHERE status \n\nIN ('draft',\n\n'sent')\n\nAND sender_reference \n\nIN ('Custom/1175',\n\n'Client/362',\n\n'Custom/280')\n\nORDER BY sent_at DESC\n\n \n\n \n\n \n\nThanks,\n\n \n\n \n\nChris Hoover\n\nSenior DBA\n\nAWeber.com\n\nCell: (803) 528-2269\n\nEmail: chrish@aweber.com <mailto:chrish@aweber.com> \n\n \n\n \n\n\n\n\n\nOn Jun 12, 2023, at 4:17 PM, benoit <benoit@hopsandfork.com\n<mailto:benoit@hopsandfork.com> > wrote:\n\n \n\nHello\n\n \n\nI have a database with few 60gb tables. Tables rows are requested with\nmultiple ANY or IN operators. I am not able to find an easy way to make DB\nable to use indexes. 
I often hit the index, but see a a spike of 200mb of IO\nor disk read.\n\n \n\nI am using version 13 but soon 14.\n\n \n\nI wrote a reproduction script on version 14 with plans included.\nhttps://gist.github.com/benoittgt/ab72dc4cfedea2a0c6a5ee809d16e04d\n\n \n\nI also have plans on a snapshot of the DB with real data.\n\n- The current query that I try to improve :\nhttps://explain.dalibo.com/plan/8b8f6e0he9feb551\n\n - I added the DB schema + index in query view. As you can see I have many\nindexes for testing purpose and try what the planner can do.\n\n- The optimized query when I have only one ANY and migrate to UNION ALL for\neach parameter of the ANY operator\nhttps://explain.dalibo.com/plan/427gg053d07328ga . Query is fast as I would\nlike but it means generate some merge to be able to get a fast result.\n\n- The new issue I have when I have a new ANY operator on the previous\noptimized query. Big IO/read\nhttps://explain.dalibo.com/plan/e7ha9g637b4eh946\n\n \n\nIt seems to me quite undoable to generate for every parameters a query that\nwill then merge. I have sometimes 3-4 ANY operators with up to 15 elements\nin an array.\n\n \n\nIs there a misusage of my indexes?\n\nIs there a limitation when using ANY or IN operators and ordered LIMIT\nbehind?\n\n \n\nThanks a lot",
"msg_date": "Sun, 18 Jun 2023 23:23:42 +0200",
"msg_from": "<msalais@msym.fr>",
"msg_from_op": false,
"msg_subject": "RE: Forced to use UNION ALL when having multiple ANY operators and\n ORDER BY LIMIT"
},
{
"msg_contents": "No it is not. But do you think there is an impact here?\n\n\n\nLe 18/06/2023 à 23:23, msalais@msym.fr a écrit :\n>\n> Hi,\n>\n> Do you really need to do “select *”?\n>\n> In other words, is it necessary to have all columns in the result?\n>\n> /Michel SALAIS/\n>\n> *De :*benoit <benoit@hopsandfork.com>\n> *Envoyé :* lundi 12 juin 2023 23:35\n> *À :* Chris Hoover <chrish@aweber.com>\n> *Cc :* pgsql-performance@lists.postgresql.org\n> *Objet :* RE: Forced to use UNION ALL when having multiple ANY \n> operators and ORDER BY LIMIT\n>\n> This new index is used but still the read is 230mb.\n>\n> https://explain.dalibo.com/plan/b0f28a9e8a136afd\n>\n> ------------------------------------------------------------------------\n>\n> *De :*Chris Hoover <chrish@aweber.com>\n> *Envoyé :* lundi 12 juin 2023 22:55\n> *À :* benoit\n> *Cc :* pgsql-performance@lists.postgresql.org\n> *Objet :* Re: Forced to use UNION ALL when having multiple ANY \n> operators and ORDER BY LIMIT\n>\n> I normally create my indexes to match the where clause of the query. \n> While technically, it should not matter, I find a lot of time, it does.\n>\n> I would create an index on (status, sender_reference, sent_at) and see \n> if the improves your query performance.\n>\n>\n> \t\n>\n> SELECT * FROM docs WHEREstatus\n>\n> IN('draft',\n>\n> 'sent')\n>\n> ANDsender_reference\n>\n> IN('Custom/1175',\n>\n> 'Client/362',\n>\n> 'Custom/280')\n>\n> ORDER BYsent_at DESC\n>\n> Thanks,\n>\n> Chris Hoover\n>\n> Senior DBA\n>\n> AWeber.com\n>\n> Cell: (803) 528-2269\n>\n> Email: chrish@aweber.com\n>\n>\n>\n> On Jun 12, 2023, at 4:17 PM, benoit <benoit@hopsandfork.com> wrote:\n>\n> Hello\n>\n> I have a database with few 60gb tables. Tables rows are requested\n> with multiple ANY or IN operators. I am not able to find an easy\n> way to make DB able to use indexes. 
I often hit the index, but see\n> a a spike of 200mb of IO or disk read.\n>\n> I am using version 13 but soon 14.\n>\n> I wrote a reproduction script on version 14 with plans included.\n> https://gist.github.com/benoittgt/ab72dc4cfedea2a0c6a5ee809d16e04d\n>\n> I also have plans on a snapshot of the DB with real data.\n>\n> - The current query that I try to improve :\n> https://explain.dalibo.com/plan/8b8f6e0he9feb551\n>\n> - I added the DB schema + index in query view. As you can see I\n> have many indexes for testing purpose and try what the planner can do.\n>\n> - The optimized query when I have only one ANY and migrate to\n> UNION ALL for each parameter of the ANY operator\n> https://explain.dalibo.com/plan/427gg053d07328ga . Query is fast\n> as I would like but it means generate some merge to be able to get\n> a fast result.\n>\n> - The new issue I have when I have a new ANY operator on the\n> previous optimized query. Big IO/read\n> https://explain.dalibo.com/plan/e7ha9g637b4eh946\n>\n> It seems to me quite undoable to generate for every parameters a\n> query that will then merge. I have sometimes 3-4 ANY operators\n> with up to 15 elements in an array.\n>\n> Is there a misusage of my indexes?\n>\n> Is there a limitation when using ANY or IN operators and\n> ordered LIMIT behind?\n>\n> Thanks a lot\n>",
"msg_date": "Mon, 19 Jun 2023 18:30:12 +0200",
"msg_from": "Benoit Tigeot <benoit@hopsandfork.com>",
"msg_from_op": false,
"msg_subject": "Re: Forced to use UNION ALL when having multiple ANY operators and\n ORDER BY LIMIT"
}
] |
[
{
"msg_contents": "Hi,\n When join two table on multiple columns equaljoin, rows estimation always use selectivity = multiplied by distinct multiple individual columns, possible to use extended n-distinct statistics on multiple columns?\n PG v14.8-1, attached please check test case with details.\n\nThanks,\n\nJames",
"msg_date": "Tue, 13 Jun 2023 09:21:25 +0000",
"msg_from": "\"James Pang (chaolpan)\" <chaolpan@cisco.com>",
"msg_from_op": true,
"msg_subject": "extended statistics n-distinct on multiple columns not used when join\n two tables"
},
{
"msg_contents": "Hi\n\nút 13. 6. 2023 v 11:21 odesílatel James Pang (chaolpan) <chaolpan@cisco.com>\nnapsal:\n\n> Hi,\n>\n> When join two table on multiple columns equaljoin, rows estimation\n> always use selectivity = multiplied by distinct multiple individual\n> columns, possible to use extended n-distinct statistics on multiple\n> columns?\n>\n> PG v14.8-1, attached please check test case with details.\n>\n\nThere is not any support for multi tables statistic\n\nRegards\n\nPavel\n\n\n>\n>\n> Thanks,\n>\n>\n>\n> James\n>\n>",
"msg_date": "Tue, 13 Jun 2023 11:29:49 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: extended statistics n-distinct on multiple columns not used when\n join two tables"
},
{
"msg_contents": "(moving to -hackers)\n\nOn Tue, 13 Jun 2023 at 21:30, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> út 13. 6. 2023 v 11:21 odesílatel James Pang (chaolpan) <chaolpan@cisco.com> napsal:\n>> When join two table on multiple columns equaljoin, rows estimation always use selectivity = multiplied by distinct multiple individual columns, possible to use extended n-distinct statistics on multiple columns?\n>>\n>> PG v14.8-1, attached please check test case with details.\n>\n> There is not any support for multi tables statistic\n\nI think it's probably worth adjusting the docs to mention this. It\nseems like it might be something that could surprise someone.\n\nSomething like the attached, maybe?\n\nDavid",
"msg_date": "Tue, 13 Jun 2023 23:25:53 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: extended statistics n-distinct on multiple columns not used when\n join two tables"
},
{
"msg_contents": "út 13. 6. 2023 v 13:26 odesílatel David Rowley <dgrowleyml@gmail.com>\nnapsal:\n\n> (moving to -hackers)\n>\n> On Tue, 13 Jun 2023 at 21:30, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > út 13. 6. 2023 v 11:21 odesílatel James Pang (chaolpan) <\n> chaolpan@cisco.com> napsal:\n> >> When join two table on multiple columns equaljoin, rows estimation\n> always use selectivity = multiplied by distinct multiple individual\n> columns, possible to use extended n-distinct statistics on multiple\n> columns?\n> >>\n> >> PG v14.8-1, attached please check test case with details.\n> >\n> > There is not any support for multi tables statistic\n>\n> I think it's probably worth adjusting the docs to mention this. It\n> seems like it might be something that could surprise someone.\n>\n> Something like the attached, maybe?\n>\n\n+1\n\nPavel\n\n\n> David\n>",
"msg_date": "Tue, 13 Jun 2023 13:28:34 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: extended statistics n-distinct on multiple columns not used when\n join two tables"
},
{
"msg_contents": "Thanks for your information, yes, with multiple columns equal join and correlation , looks like extended statistics could help reduce “significantly rows estimation”. Hopefully it’s in future version.\r\n\r\nJames\r\n\r\nFrom: Pavel Stehule <pavel.stehule@gmail.com>\r\nSent: Tuesday, June 13, 2023 7:29 PM\r\nTo: David Rowley <dgrowleyml@gmail.com>\r\nCc: PostgreSQL Developers <pgsql-hackers@lists.postgresql.org>; James Pang (chaolpan) <chaolpan@cisco.com>\r\nSubject: Re: extended statistics n-distinct on multiple columns not used when join two tables\r\n\r\n\r\n\r\nút 13. 6. 2023 v 13:26 odesílatel David Rowley <dgrowleyml@gmail.com<mailto:dgrowleyml@gmail.com>> napsal:\r\n(moving to -hackers)\r\n\r\nOn Tue, 13 Jun 2023 at 21:30, Pavel Stehule <pavel.stehule@gmail.com<mailto:pavel.stehule@gmail.com>> wrote:\r\n> út 13. 6. 2023 v 11:21 odesílatel James Pang (chaolpan) <chaolpan@cisco.com<mailto:chaolpan@cisco.com>> napsal:\r\n>> When join two table on multiple columns equaljoin, rows estimation always use selectivity = multiplied by distinct multiple individual columns, possible to use extended n-distinct statistics on multiple columns?\r\n>>\r\n>> PG v14.8-1, attached please check test case with details.\r\n>\r\n> There is not any support for multi tables statistic\r\n\r\nI think it's probably worth adjusting the docs to mention this. It\r\nseems like it might be something that could surprise someone.\r\n\r\nSomething like the attached, maybe?\r\n\r\n+1\r\n\r\nPavel\r\n\r\n\r\nDavid\r\n",
"msg_date": "Tue, 13 Jun 2023 11:32:54 +0000",
"msg_from": "\"James Pang (chaolpan)\" <chaolpan@cisco.com>",
"msg_from_op": true,
"msg_subject": "RE: extended statistics n-distinct on multiple columns not used when\n join two tables"
},
{
"msg_contents": "On Tue, 13 Jun 2023 at 23:29, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>> I think it's probably worth adjusting the docs to mention this. It\n>> seems like it might be something that could surprise someone.\n>>\n>> Something like the attached, maybe?\n>\n> +1\n\nOk, I pushed that patch. Thanks.\n\nDavid\n\n\n",
"msg_date": "Thu, 22 Jun 2023 12:53:10 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: extended statistics n-distinct on multiple columns not used when\n join two tables"
}
] |
[
{
"msg_contents": "Hi all,\n\nI recently started at a new firm and have been trying to help to grok\ncertain planner behavior. A strip-down example of the sort of join we do in\nthe database looks like this, wherein we join two tables that have about 1\nmillion rows:\n\n-- VACUUM (FULL, VERBOSE, ANALYZE), run the query twice first, then...\nEXPLAIN(ANALYZE, VERBOSE, COSTS, SETTINGS, BUFFERS, WAL, TIMING, SUMMARY)\nSELECT\n ci.conversation_uuid,\n ci.item_uuid,\n ci.tenant_id,\n it.item_subject,\n it.item_body_start\nFROM\n conversation_item AS ci\nINNER JOIN item_text AS it ON it.item_uuid = ci.item_uuid;\n\n-- The necessary DDL that creates these tables and indexes is attached.\nI've commented out some extra stuff that isn't directly related to the\nabove query.\n\nDepending on config, we get different results in terms of performance\n(EXPLAIN output attached):\n\nPLAN A (default config, effective cache size just shy of 15GB): 3.829\nseconds. A nested loop is used to probe the hash index\n`conversation_item_item_hash_index` for each row of item_text. Although the\ncost of probing once is low, a fair amount of time passes because the\noperation is repeated ~1.3 million times.\n\nPLAN B (enable_indexscan off, effective cache same as before): 3.254\nseconds (~15% speedup, sometimes 30%). Both tables are scanned sequentially\nand conversation_item is hashed before results are combined with a hash\njoin.\n\nPLAN C: (random_page_cost = 8.0, instead of default 4, effective cache same\nas before): 2.959 (~23% speedup, sometimes 38%). Same overall plan as PLAN\nB, some differences in buffers and I/O. I'll note we had to get to 8.0\nbefore we saw a change to planner behavior; 5.0, 6.0, and 7.0 were too low\nto make a difference.\n\nEnvironment:\n\nPostgres 15.2\nAmazon RDS — db.m6g.2xlarge\n\n\nQuestions:\n\n 1. In Plan A, what factors are causing the planner to select a\n substantially slower plan despite having recent stats about number of rows?\n 2. 
Is there a substantial difference between the on-the-fly hash done in\n Plan B and Plan C compared to the hash-index used in Plan A? Can I assume\n they are essentially the same? Perhaps there are there differences in how\n they're applied?\n 3. Is it common to see values for random_page_cost set as high as 8.0?\n We would of course need to investigate whether we see a net positive or net\n negative impact on other queries, to adopt this as a general setting, but\n is it a proposal we should actually consider?\n 4. Maybe we are barking up the wrong tree with the previous questions.\n Are there other configuration parameters we should consider first to\n improve performance in situations like the one illustrated?\n 5. Are there other problems with our schema, query, or plans shown here?\n Other approaches (or tools/analyses) we should consider?",
"msg_date": "Tue, 13 Jun 2023 13:24:51 -0600",
"msg_from": "\"Patrick O'Toole\" <patrick.otoole@sturdy.ai>",
"msg_from_op": true,
"msg_subject": "Helping planner to chose sequential scan when it improves performance"
},
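For intuition on why Plan A's cheap-per-probe nested loop and Plan B/C's hash join can land so close in cost, the two plan shapes reduce to a toy page-count model. All constants below (pages per probe, table page counts) are assumptions for illustration, not PostgreSQL's actual costing arithmetic:

```python
# Toy model of the two plan shapes from the thread; illustrative only.
RANDOM_PAGE_COST = 4.0  # default planner setting
SEQ_PAGE_COST = 1.0     # default planner setting

def plan_a_cost(probe_rows: int, pages_per_probe: int = 2) -> float:
    """Nested loop: one hash-index probe per item_text row (~1.3M probes)."""
    return probe_rows * pages_per_probe * RANDOM_PAGE_COST

def plan_b_cost(item_text_pages: int, conversation_item_pages: int) -> float:
    """Hash join: read both tables sequentially once, hash on the fly."""
    return (item_text_pages + conversation_item_pages) * SEQ_PAGE_COST

# The probe cost is multiplied by row count; the sequential scans are paid once.
assert plan_a_cost(2_600_000) == 2 * plan_a_cost(1_300_000)
# With these (made-up) page counts the hash join would be far cheaper:
assert plan_a_cost(1_300_000) > plan_b_cost(50_000, 60_000)
```

In the attached plans the planner actually scored the nested loop slightly cheaper (~200k vs ~250k), so the real per-probe constants are much smaller than these toy ones; the point is only the scaling: per-probe cost grows linearly with the outer row count, while the hash join's scan cost does not.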
{
"msg_contents": "On Tue, Jun 13, 2023 at 10:28 PM Patrick O'Toole <patrick.otoole@sturdy.ai>\nwrote:\n\n> Hi all,\n>\n\n\n\n> Questions:\n>\n> 1. In Plan A, what factors are causing the planner to select a\n> substantially slower plan despite having recent stats about number of rows?\n>\n> Estimated overall cost. For Plan A it is ~200k. For plans B/C (haven't\nnoticed any differences in these two) it is ~250k. The planner uses a less\nexpensive plan.\n\nAlso, in the plans you can see that Pg estimates the number of rows\ncorrectly.\n\n>\n> 1. Is there a substantial difference between the on-the-fly hash done\n> in Plan B and Plan C compared to the hash-index used in Plan A? Can I\n> assume they are essentially the same? Perhaps there are there differences\n> in how they're applied?\n>\n> I don't see any difference in plans B and C, but you report timing\nchanges. To me this looks like just a fluctuation in measurements. So I\nwouldn't trust any measurements for plan A either.\n\nI'm not a big expert, but can not say that plan A and B are essentially the\nsame.\n\nPlan A: DB scans item_text table and for every record looks into the index\nof conversation_item table, then looks into the table itself.\n\nPlan B/C: DB scans conversation_item table without looking into its indexes\nbuilding a hash table on the fly.\n\n\n\n> 1. Is it common to see values for random_page_cost set as high as 8.0?\n> We would of course need to investigate whether we see a net positive or net\n> negative impact on other queries, to adopt this as a general setting, but\n> is it a proposal we should actually consider?\n>\n> No idea.\n\n>\n> 1. Maybe we are barking up the wrong tree with the previous questions.\n> Are there other configuration parameters we should consider first to\n> improve performance in situations like the one illustrated?\n>\n> Recheck your numbers.\n\n>\n> 1. Are there other problems with our schema, query, or plans shown\n> here? 
Other approaches (or tools/analyses) we should consider?\n>\n> You can try the following index:\n\nCREATE INDEX conversation_item_ruz1 ON conversation_item(item_uuid,\nconversation_uuid, tenant_id);\n\nI believe this index would allow Pg to use \"index only scan\" as variation\nof Plan A and avoid touching the conversation_item table completely.\n\n-- \nBest regards, Ruslan.",
"msg_date": "Thu, 15 Jun 2023 02:00:12 +0300",
"msg_from": "Ruslan Zakirov <ruslan.zakirov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Helping planner to chose sequential scan when it improves\n performance"
},
{
"msg_contents": "Hello! I tried asking this over on the general listserv before realizing\npgsql-performance is probably better suited.\n\nHi all,\n\nI recently started at a new firm and have been trying to help to grok\ncertain planner behavior. A strip-down example of the sort of join we do in\nthe database looks like this, wherein we join two tables that have about 1\nmillion rows:\n\n-- VACUUM (FULL, VERBOSE, ANALYZE), run the query twice first, then...\nEXPLAIN(ANALYZE, VERBOSE, COSTS, SETTINGS, BUFFERS, WAL, TIMING, SUMMARY)\nSELECT\n ci.conversation_uuid,\n ci.item_uuid,\n ci.tenant_id,\n it.item_subject,\n it.item_body_start\nFROM\n conversation_item AS ci\nINNER JOIN item_text AS it ON it.item_uuid = ci.item_uuid;\n\n-- The necessary DDL that creates these tables and indexes is attached.\nI've commented out some extra stuff that isn't directly related to the\nabove query.\n\nDepending on config, we get different results in terms of performance\n(EXPLAIN output attached):\n\nPLAN A (default config, effective cache size just shy of 15GB): 3.829\nseconds. A nested loop is used to probe the hash index\n`conversation_item_item_hash_index` for each row of item_text. Although the\ncost of probing once is low, a fair amount of time passes because the\noperation is repeated ~1.3 million times.\n\nPLAN B (enable_indexscan off, effective cache same as before): 3.254\nseconds (~15% speedup, sometimes 30%). Both tables are scanned sequentially\nand conversation_item is hashed before results are combined with a hash\njoin.\n\nPLAN C: (random_page_cost = 8.0, instead of default 4, effective cache same\nas before): 2.959 (~23% speedup, sometimes 38%). Same overall plan as PLAN\nB, some differences in buffers and I/O. I'll note we had to get to 8.0\nbefore we saw a change to planner behavior; 5.0, 6.0, and 7.0 were too low\nto make a difference.\n\nEnvironment:\n\nPostgres 15.2\nAmazon RDS — db.m6g.2xlarge\n\n\nQuestions:\n\n 1. 
In Plan A, what factors (read: which GUC settings) are causing the\n planner to select a substantially slower plan despite having recent stats\n about number of rows?\n 2. Is there a substantial difference between the on-the-fly hash done in\n Plan B and Plan C compared to the hash-index used in Plan A? Can I assume\n they are essentially the same? Perhaps there are differences in how\n they're applied?\n 3. Is it common to see values for random_page_cost set as high as 8.0?\n We would of course need to investigate whether we see a net positive or net\n negative impact on other queries, to adopt this as a general setting, but\n is it a proposal we should actually consider?\n 4. Maybe we are barking up the wrong tree with the previous questions.\n Are there other configuration parameters we should consider first to\n improve performance in situations like the one illustrated?\n 5. Are there other problems with our schema, query, or plans shown here?\n Other approaches (or tools/analyses) we should consider?\n\nAttached also are the results of\n\nSELECT name, current_setting(name), source\n  FROM pg_settings\n  WHERE source NOT IN ('default', 'override');\n\n- Patrick",
"msg_date": "Sun, 25 Jun 2023 00:29:00 -0600",
"msg_from": "\"Patrick O'Toole\" <patrick.otoole@sturdy.ai>",
"msg_from_op": true,
"msg_subject": "Fwd: Helping planner to chose sequential scan when it improves\n performance"
},
{
"msg_contents": "On Tue, Jun 13, 2023 at 3:28 PM Patrick O'Toole <patrick.otoole@sturdy.ai>\nwrote:\n\n> run the query twice first, then...\n\nIs that a realistic way to run the test? Often forcing all the data needed\nfor the query into memory is going to make things less realistic, not more\nrealistic. Assuming the system has more stuff to do than just perform this\none query, it might be unusual for the query to find everything it needs in\nmemory. Also, if you really do want to do it this way, then you should do\nthis for every plan. Different plans need to access a different\ncollections of buffers, so prewarming just one plan will privilege that one\nover the others.\n\n\n\n>\n> PLAN A (default config, effective cache size just shy of 15GB): 3.829\n> seconds. A nested loop is used to probe the hash index\n> `conversation_item_item_hash_index` for each row of item_text. Although the\n> cost of probing once is low, a fair amount of time passes because the\n> operation is repeated ~1.3 million times.\n>\n> PLAN B (enable_indexscan off, effective cache same as before): 3.254\n> seconds (~15% speedup, sometimes 30%). Both tables are scanned sequentially\n> and conversation_item is hashed before results are combined with a hash\n> join.\n>\n> PLAN C: (random_page_cost = 8.0, instead of default 4, effective cache\n> same as before): 2.959 (~23% speedup, sometimes 38%). Same overall plan as\n> PLAN B, some differences in buffers and I/O. 
I'll note we had to get to 8.0\n> before we saw a change to planner behavior; 5.0, 6.0, and 7.0 were too low\n> to make a difference.\n>\n\nThe difference between B and C looks like it is entirely noise, having to\ndo with how many buffers it found already in the cache and how many of them\nneeded cleaning (which causes the buffer to be dirty as the cleaned version\nnow needs to be written to disk) and how many dirty buffers it found that\nneeded to be written in order to make way to read other buffers it needs.\n(This last number most generally reflects dirty buffers left around by\nother things which this query encountered, not the buffers the query itself\ndirtied). None of this is likely to be reproducible, and so not worth\ninvestigating.\n\nAnd the difference between A and BC is small enough that it is unlikely to\nbe worth pursuing, either, even if it is reproducible. If your apps runs\nthis one exact query often enough that a 30% difference is worth worrying\nabout, you would probably be better served by questioning the business\ncase. What are you doing with 1.4 million rows once you do fetch them,\nthat it needs to be repeated so often?\n\nIf you think that taking a deep dive into this one query is going to\ndeliver knowledge which will pay off for other (so far unexamined) queries,\nI suspect you are wrong. Look for queries where the misestimation is more\nstark than 30% to serve as your case studies.\n\n\n>\n> Environment:\n>\n> Postgres 15.2\n> Amazon RDS — db.m6g.2xlarge\n>\n>\n> Questions:\n>\n\n\n> In Plan A, what factors are causing the planner to select a substantially\n> slower plan despite having recent stats about number of rows?\n>\n\nEven if it were worth trying to answer this (which I think it is not),\nthere isn't much we can do with dummy tables containing no data. 
You would\nneed to include a script to generate data of a size and distribution which\nreproduces the given behavior.\n\n> Is there a substantial difference between the on-the-fly hash done in\nPlan B and Plan C compared to the hash-index used in Plan A? Can I assume\nthey are essentially the same? Perhaps there are differences in how\nthey're applied?\n\nThey are pretty much entirely different. One jumps all over the index on\ndisk, the other reads the table sequentially and (due to work_mem) parcels\nit out into chunks where it expects each chunk can also be read back in\nsequentially as well. About the only thing not different is that they both\ninvolve computing a hash function.\n\n> Is it common to see values for random_page_cost set as high as 8.0? We\nwould of course need to investigate whether we see a net positive or net\nnegative impact on other queries, to adopt this as a general setting, but\nis it a proposal we should actually consider?\n\nI've never needed to set it that high, but there is no a priori reason it\nwouldn't make sense to do. Settings that high would probably only be\nsuitable for HDD (rather than SSD) storage and when caching is not very\neffective, which does seem to be the opposite of your situation. So I\ncertainly wouldn't do it just based on the evidence at hand.\n\nCheers,\n\nJeff",
"msg_date": "Sun, 25 Jun 2023 15:34:38 -0400",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Helping planner to chose sequential scan when it improves\n performance"
},
{
"msg_contents": "On Wed, 14 Jun 2023 at 07:28, Patrick O'Toole <patrick.otoole@sturdy.ai> wrote:\n> Maybe we are barking up the wrong tree with the previous questions. Are there other configuration parameters we should consider first to improve performance in situations like the one illustrated?\n\nrandom_page_cost and effective_cache_size are the main settings which\nwill influence plan A vs plan B. Larger values of\neffective_cache_size will have the planner apply more seq_page_costs\nto the index scan. Lower values of effective_cache_size will mean\nmore pages will be assumed to cost random_page_cost.\n\nDavid\n\n\n",
"msg_date": "Mon, 26 Jun 2023 07:48:27 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Helping planner to chose sequential scan when it improves\n performance"
},
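David's point about effective_cache_size can be made concrete. Below is a rough sketch of the cache-size discount, loosely based on the Mackert–Lohman approximation documented for `index_pages_fetched()` in PostgreSQL's costsize.c. It is a simplified single-scan version (the real code also folds in a repeated-scan factor and clamps differently), so treat it as illustrative, not as the planner's exact code:

```python
def index_pages_fetched(tuples: float, table_pages: float, cache_pages: float) -> float:
    """Estimate distinct heap pages fetched by an index scan touching
    `tuples` rows of a `table_pages`-page table given `cache_pages` of
    cache. Simplified sketch of the Mackert-Lohman approximation."""
    T = max(table_pages, 1.0)
    N = max(tuples, 0.0)
    b = max(cache_pages, 1.0)
    if T <= b:
        # Table fits in cache: re-reads are free, charge at most T pages.
        return min(2.0 * T * N / (2.0 * T + N), T)
    threshold = 2.0 * T * b / (2.0 * T - b)
    if N <= threshold:
        return 2.0 * T * N / (2.0 * T + N)
    # Cache exhausted: each further tuple costs proportionally more reads.
    return b + (N - threshold) * (T - b) / T

# A larger effective cache means fewer pages are charged for the same scan:
assert index_pages_fetched(1_000_000, 10_000, 8_000) < index_pages_fetched(1_000_000, 10_000, 2_000)
# When the table fits in cache, the charge never exceeds the table size:
assert index_pages_fetched(10**9, 100, 1_000) == 100.0
```

This is why raising effective_cache_size makes index scans look cheaper: it lowers the estimated number of physical page fetches, each of which is then charged at random_page_cost.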
{
"msg_contents": "On Sun, Jun 25, 2023 at 3:48 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Wed, 14 Jun 2023 at 07:28, Patrick O'Toole <patrick.otoole@sturdy.ai>\n> wrote:\n> > Maybe we are barking up the wrong tree with the previous questions. Are\n> there other configuration parameters we should consider first to improve\n> performance in situations like the one illustrated?\n>\n> random_page_cost and effective_cache_size are the main settings which\n> will influence plan A vs plan B. Larger values of\n> effective_cache_size will have the planner apply more seq_page_costs\n> to the index scan.\n\n\nSqueezing otherwise-random page costs towards seq_page_costs is what bitmap\nscans do, and what large index scans with high pg_stats.correlation do.\nBut effective_cache_size does something else, it squeezes the per page\ncosts towards zero, not towards seq_page_costs. This is surely not\naccurate, as the costs of locking the buffer mapping partition, finding the\nbuffer or reading it from the kernel cache if not found, maybe faulting the\nbuffer from main memory into on-CPU memory, pinning the buffer, and\nread-locking it are certainly well above zero, even if not nearly as high\nas seq_page_cost. I'd guess they are truly about 2 to 5 times a\ncpu_tuple_cost per buffer. But zero is what they currently get, there is\nno knob to twist to change that.\n\n\n> Lower values of effective_cache_size will mean\n> more pages will be assumed to cost random_page_cost.\n>\n\n\nSure, but it addresses the issue only obliquely (as does raising\nrandom_page_cost) not directly. So the change you need to make to them\nwill be large, and will likely make other things worse.\n\nCheers,\n\nJeff",
"msg_date": "Tue, 27 Jun 2023 08:47:44 -0400",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Helping planner to chose sequential scan when it improves\n performance"
}
]
[
{
"msg_contents": "In my use case I have a 2 billion row / 1 TB table. I have daily data to upsert, around 2 million rows, with say 50% inserts, based on the primary key in a freshly analyzed table.\n\nI have tested multiple strategies to merge the data, all based on a first stage that copies the 2M dataset into a staging unlogged / indexed table:\n\n1. Join insert then join update \n2.1. Usage of the new merge statement\n2.2 Usage of merge on two hash partitioned tables with partition-wise join enabled\n3. Usage of merge by batch of 1000 rows\n\nFirst remark: the merge statement is almost 30% faster than two statements in my benchmarks. Thanks to the pg community for this.\n\nWhile strategies 1 and 2.x are incredibly slow (canceled after 10 hours), the third one finishes within 30 minutes.\n\nMy interpretation reading the query plan is: well-sized small batches of upserts leverage the indexes, while the regular join chooses the sequential scan, including sorting and hashing, which takes forever in time and resources including disk.\n\nSadly my incoming dataset is too small to benefit from a seq scan and too large to benefit from an index scan join. However, when split manually in N portions, the problem can be tackled with N * small cost, which is cheap anyway.\n\nQuestions:\n1. Is there another strategy ?\n2. Could postgres support a \"batched indexed join\" itself, leveraging indexes by dynamically sized batches ?\n\n\nIt is error-prone to write code to split and iterate; I suspect postgres has everything internally (index catalog, planner) to split the job itself, making David vs Goliath something trivial.\n\n\n",
"msg_date": "Sat, 17 Jun 2023 13:48:33 +0000",
"msg_from": "Nicolas Paris <nicolas.paris@riseup.net>",
"msg_from_op": true,
"msg_subject": "Merge David and Goliath tables efficiently"
},
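The manual batching described as strategy 3 can be scripted rather than hand-written. A sketch of generating the per-partition MERGE statements, using the table and column names that appear later in the thread; the UPDATE/INSERT column lists are elided as "..." in the original mails, so they stay placeholders here and the strings are not meant to be executed as-is:

```python
def batched_merge_statements(partition: int, partition_rows: int, batch_size: int = 500):
    """Generate one MERGE per LIMIT/OFFSET window of a david partition,
    mirroring the per-batch statements used in strategy 3."""
    statements = []
    for offset in range(0, partition_rows, batch_size):
        statements.append(
            f'MERGE INTO "goliath_{partition}" ca '
            f'USING (SELECT * FROM "david_{partition}" ORDER BY "list_id" '
            f'LIMIT {batch_size} OFFSET {offset}) AS t '
            f'ON t."list_id" = ca."list_id" '
            'WHEN MATCHED THEN UPDATE SET ... '
            'WHEN NOT MATCHED THEN INSERT (...) VALUES (...)'
        )
    return statements

batches = batched_merge_statements(23, 19_058)  # row estimate from the plan
assert len(batches) == 39                       # ceil(19058 / 500)
assert 'LIMIT 500 OFFSET 0' in batches[0]
assert 'OFFSET 19000' in batches[-1]
```

One design caveat with this shape: `ORDER BY ... LIMIT ... OFFSET n` rescans the partition's index from the start for every batch, so total work grows quadratically with the number of batches; keyset pagination (a `WHERE "list_id" > last_seen` cursor instead of OFFSET) would scale better if the batch count gets large.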
{
"msg_contents": "On 6/17/23 15:48, Nicolas Paris wrote:\n> In my use case I have a 2billion / 1To table. I have daily data to upsert around 2milion, with say 50% inserts, based on the primary key in a fresh analyzed table.\n> \n> I have tested multiple strategies to merge the data, all based on first stage to copy the 2m dataset in an staging unlogged / indexed table:\n> \n> 1. Join insert then join update \n> 2.1. Usage of the new merge statement\n> 2.2 Usage of merge on two hash partitioned tables wit partition wide join enabled\n> 3. Usage of merge by batch of 1000 rows\n> \n> First remark is the merge statement is almost 30% faster than two statements in my benchmarks. Thanks to the pg community for this.\n> \n> While the strategies 1 and 2.x are incredibly slow (canceled after 10 hours), the third one finishes within 30 minutes.\n> \n\nSeems pretty terrible, provided the data is on reasonable storage (with\nacceptable random I/O behavior).\n\n> My interpretation reading the query plan is: well sized small batches of upserts leverage the indexes while the regular join choose the sequential scan, including sorting and hashing which takes forever time and resources including disk.\n\nYou may be right, but it's hard to tell without seeing the query plan.\n\n> \n> Sadly my incoming dataset is too small to benefit from a seq scan and too large to benefit from an index scan join. However when splited manuallyin N portions, the problem can be tackled with N * small cost, which is cheap anyway.\n> \n\nSounds very much like you'd benefit from tuning some cost parameters to\nmake the index scan look cheaper.\n\n> Questions:\n> 1. Is there another strategy ?\n> 2. 
Could postgres support a \"batched indexed join itself\", leveraging indexes itself by dynamic sized batches ?\n> \n\nNot sure what 'batched indexed join' would be, but it very much sounds\nlike a nested loop with an index scan.\n\n> \n> It is error prone write code to split and iterate I suspect postgres has everything internally (indexes catalog, planner) to split itself the job, making David vs Goliath something trivial.\n> \n\nWhat PostgreSQL version are you using, what hardware? Did you tune it in\nany way, or is everything just default?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 17 Jun 2023 21:52:07 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Merge David and Goliath tables efficiently"
},
{
"msg_contents": "> > My interpretation reading the query plan is: well sized small\n> > batches of upserts leverage the indexes while the regular join\n> > choose the sequential scan, including sorting and hashing which\n> > takes forever time and resources including disk.\n> \n> You may be right, but it's hard to tell without seeing the query\n> plan.\n\nHere are part of both plans:\n\nBad case (strategy 2.1):\n\n-> Merge Left Join (cost=530202629.03..255884257913.32\nrows=17023331531230 width=579)\nMerge Cond: (david.list_id = ca.list_id)\n-> Sort (cost=2019172.91..2024398.82 rows=2090361 width=569)\n Sort Key: david.list_id\n -> Append (cost=0.00..192152.41 rows=2090361 width=569)\n -> Seq Scan on david_0 david_1 (cost=0.00..1812.52\nrows=20852 width=569)\n -> Seq Scan on david_1 david_2 (cost=0.00..1800.09\nrows=20709 width=569)\n -> Seq Scan on david_2 david_3 (cost=0.00..1794.44\nrows=20644 width=569)\n\nGood case (strategy 3):\n\nMerge on goliath_23 ca (cost=2139.75..11077.17 rows=0 width=0)\n -> Nested Loop Left Join (cost=2139.75..11077.17 rows=1000\nwidth=575)\n -> Limit (cost=2139.19..2495.67 rows=1000 width=569)\n -> Index Scan using david_23_list_id_account_id_idx on\ndavid_23 (cost=0.29..6794.16 rows=19058 width=569)\n -> Index Scan using goliath_23_list_id_account_id_idx on\ngoliath_23 ca (cost=0.56..8.56 rows=1 width=14)\n Index Cond: (list_id = david_23.list_id)\n\n> \n> Sounds very much like you'd benefit from tuning some cost parameters\n> to\n> make the index scan look cheaper.\n> Not sure what 'batched indexed join' would be, but it very much\n> sounds\n> like a nested loop with an index scan.\n\nAgreed, a 2M nested loop over index scan would likely work as well.\nWould tuning the costs param could lead to get such large nested loop ?\n\n> What PostgreSQL version are you using, what hardware? 
Did you tune it\n> in\n> any way, or is everything just default?\n\nIt is pg 15.3, on 2 cores / 8GO / 2TO ssds, with defaults cloud\nprovider parameters (RDS). \n\n\n",
"msg_date": "Sat, 17 Jun 2023 23:42:48 +0200",
"msg_from": "nicolas paris <nicolas.paris@riseup.net>",
"msg_from_op": false,
"msg_subject": "Re: Merge David and Goliath tables efficiently"
},
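A back-of-the-envelope model of why the per-partition nested loop wins here, and why a single big nested loop plausibly could too. All constants below (B-tree page reads per probe, the table's page count) are assumptions for illustration, not values taken from the plans:

```python
# Toy crossover model; not the planner's real arithmetic.
RANDOM_PAGE_COST = 4.0        # default setting
SEQ_PAGE_COST = 1.0           # default setting
BTREE_PROBE_PAGES = 4         # assumed root-to-heap page reads per probe
GOLIATH_PAGES = 125_000_000   # ~1 TB of 8 kB pages

def nestloop_cost(incoming_rows: int) -> float:
    """One index probe into goliath per incoming david row."""
    return incoming_rows * BTREE_PROBE_PAGES * RANDOM_PAGE_COST

def fullscan_cost() -> float:
    """Scanning goliath once, as the sort/merge plan must."""
    return GOLIATH_PAGES * SEQ_PAGE_COST

# Index probing stays cheaper until ~7.8M incoming rows in this model:
crossover = GOLIATH_PAGES * SEQ_PAGE_COST / (BTREE_PROBE_PAGES * RANDOM_PAGE_COST)
assert crossover > 2_000_000
assert nestloop_cost(2_000_000) < fullscan_cost()
```

Under these assumptions a single 2M-row nested loop already beats one full scan of goliath by a wide margin, which supports the idea that cost tuning could get the planner to pick one large index nested loop directly, without the manual LIMIT/OFFSET batching.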
{
"msg_contents": "\n\nOn 6/17/23 23:42, nicolas paris wrote:\n>>> My interpretation reading the query plan is: well sized small\n>>> batches of upserts leverage the indexes while the regular join\n>>> choose the sequential scan, including sorting and hashing which\n>>> takes forever time and resources including disk.\n>>\n>> You may be right, but it's hard to tell without seeing the query\n>> plan.\n> \n> Here are part of both plans:\n> \n\nI don't understand why you're sharing just a part of the plan and not\nthe whole thing, ideally including the actual update ... Giving us the\ninfo in small pieces just means we need to guess and speculate.\n\n> Bad case (strategy 2.1):\n> \n> -> Merge Left Join (cost=530202629.03..255884257913.32\n> rows=17023331531230 width=579)\n> Merge Cond: (david.list_id = ca.list_id)\n> -> Sort (cost=2019172.91..2024398.82 rows=2090361 width=569)\n> Sort Key: david.list_id\n> -> Append (cost=0.00..192152.41 rows=2090361 width=569)\n> -> Seq Scan on david_0 david_1 (cost=0.00..1812.52\n> rows=20852 width=569)\n> -> Seq Scan on david_1 david_2 (cost=0.00..1800.09\n> rows=20709 width=569)\n> -> Seq Scan on david_2 david_3 (cost=0.00..1794.44\n> rows=20644 width=569)\n> \n\nWell, I kinda doubt you have 17023331531230 rows (not even physically\npossible with 2TB disk), so that's immediately suspicious. I'd bet the\nUPDATE ... FROM ... 
is missing a condition or something like that, which\nresults in a cartesian product.\n\n> Good case (strategy 3):\n> \n> Merge on goliath_23 ca (cost=2139.75..11077.17 rows=0 width=0)\n> -> Nested Loop Left Join (cost=2139.75..11077.17 rows=1000\n> width=575)\n> -> Limit (cost=2139.19..2495.67 rows=1000 width=569)\n> -> Index Scan using david_23_list_id_account_id_idx on\n> david_23 (cost=0.29..6794.16 rows=19058 width=569)\n> -> Index Scan using goliath_23_list_id_account_id_idx on\n> goliath_23 ca (cost=0.56..8.56 rows=1 width=14)\n> Index Cond: (list_id = david_23.list_id)\n> \n>>\n>> Sounds very much like you'd benefit from tuning some cost parameters\n>> to\n>> make the index scan look cheaper.\n>> Not sure what 'batched indexed join' would be, but it very much\n>> sounds\n>> like a nested loop with an index scan.\n> \n> Agreed, a 2M nested loop over index scan would likely work as well.\n> Would tuning the costs param could lead to get such large nested loop ?\n> \n\nIt should be, but maybe let's see if there are other problems in the\nquery itself. If it's generating a cartesian product, it's pointless to\ntune parameters.\n\n>> What PostgreSQL version are you using, what hardware? Did you tune it\n>> in\n>> any way, or is everything just default?\n> \n> It is pg 15.3, on 2 cores / 8GO / 2TO ssds, with defaults cloud\n> provider parameters (RDS). \n> \n\nI assume 2TO is 2TB?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 18 Jun 2023 01:48:51 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Merge David and Goliath tables efficiently"
},
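Tomas's cartesian-product suspicion can be checked with quick arithmetic on the two row counts visible in the bad plan:

```python
# Numbers read directly off the Merge Left Join node in the bad plan.
estimated_join_rows = 17_023_331_531_230  # planner's join-size estimate
outer_rows = 2_090_361                    # Append over the david partitions

matches_per_david_row = estimated_join_rows / outer_rows
# ~8.1 million goliath rows expected per david row: the planner thinks
# the list_id condition barely restricts the match, which is exactly
# what a (near-)cartesian estimate looks like.
assert 8_000_000 < matches_per_david_row < 8_300_000
```

Whether that reflects reality (list_id genuinely very non-unique in the statistics) or a lost join qual, an estimate of ~8M matches per outer row explains why the planner refuses index probing for this join.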
{
"msg_contents": "> I assume 2TO is 2TB?\n\nYes. 2TB\n\n\n> I don't understand why you're sharing just a part of the plan \n\n\nAs for the nested loop plan, what I shared is the full plan. Actually\nit is repeated many times, since 2M batched by 500 rows. I add it\nagain:\n\nMerge on goliath_23 ca (cost=2139.75..11077.17 rows=0 width=0)\n -> Nested Loop Left Join (cost=2139.75..11077.17 rows=1000\nwidth=575)\n -> Limit (cost=2139.19..2495.67 rows=1000 width=569)\n -> Index Scan using david_23_list_id_account_id_idx on\ndavid_23 (cost=0.29..6794.16 rows=19058 width=569)\n -> Index Scan using goliath_23_list_id_account_id_idx on\ngoliath_23 ca (cost=0.56..8.56 rows=1 width=14)\n Index Cond: (list_id = david_23.list_id)\n\n\n> Well, I kinda doubt you have 17023331531230 rows (not even physically\n> possible with 2TB disk), so that's immediately suspicious.\n\nBelow is the full plan for the strategy 2.1 (Indeed the previous email\nplan was truncated and wrong, sorry for that). \n\nNote that both plan acome from the same partitioned by hash table with\n100 parts, with a unique index on the list_id + hash_key. For strategy\n2.1, I turned on enable_partitionwise_join, since david table has the\nsame partitioning scheme as goliath including unique indexe. In both\ncase the query is:\n\nMERGE INTO \"goliath\" ca \nUSING (SELECT * FROM \"david\" ORDER BY \"list_id\") AS t \nON t.\"list_id\" = ca.\"list_id\" \nWHEN MATCHED THEN\nUPDATE SET ...\nWHEN NOT MATCHED THEN \nINSERT (...) \nVALUES (...)\n\nExcept in strategy 3 david is split by limit/offset 500 on each part\ntables such:\n\nMERGE INTO \"goliath_23\" ca \nUSING (SELECT * FROM \"david_23\" ORDER BY \"list_id\" LIMIT 500 OFFSET 0)\nAS t \nON t.\"list_id\" = ca.\"list_id\" \nWHEN MATCHED THEN\nUPDATE SET ...\nWHEN NOT MATCHED THEN \nINSERT (...) \nVALUES (...) 
\n\n \n\nMerge on goliath ca (cost=178016528.81..192778842.44 rows=0 width=0)\n Merge on goliath_0 ca_1\n Merge on goliath_1 ca_2\n Merge on goliath_2 ca_3\n Merge on goliath_3 ca_4\n Merge on goliath_4 ca_5\n Merge on goliath_5 ca_6\n Merge on goliath_6 ca_7\n Merge on goliath_7 ca_8\n Merge on goliath_8 ca_9\n Merge on goliath_9 ca_10\n Merge on goliath_10 ca_11\n Merge on goliath_11 ca_12\n Merge on goliath_12 ca_13\n Merge on goliath_13 ca_14\n Merge on goliath_14 ca_15\n Merge on goliath_15 ca_16\n Merge on goliath_16 ca_17\n Merge on goliath_17 ca_18\n Merge on goliath_18 ca_19\n Merge on goliath_19 ca_20\n Merge on goliath_20 ca_21\n Merge on goliath_21 ca_22\n Merge on goliath_22 ca_23\n Merge on goliath_23 ca_24\n Merge on goliath_24 ca_25\n Merge on goliath_25 ca_26\n Merge on goliath_26 ca_27\n Merge on goliath_27 ca_28\n Merge on goliath_28 ca_29\n Merge on goliath_29 ca_30\n Merge on goliath_30 ca_31\n Merge on goliath_31 ca_32\n Merge on goliath_32 ca_33\n Merge on goliath_33 ca_34\n Merge on goliath_34 ca_35\n Merge on goliath_35 ca_36\n Merge on goliath_36 ca_37\n Merge on goliath_37 ca_38\n Merge on goliath_38 ca_39\n Merge on goliath_39 ca_40\n Merge on goliath_40 ca_41\n Merge on goliath_41 ca_42\n Merge on goliath_42 ca_43\n Merge on goliath_43 ca_44\n Merge on goliath_44 ca_45\n Merge on goliath_45 ca_46\n Merge on goliath_46 ca_47\n Merge on goliath_47 ca_48\n Merge on goliath_48 ca_49\n Merge on goliath_49 ca_50\n Merge on goliath_50 ca_51\n Merge on goliath_51 ca_52\n Merge on goliath_52 ca_53\n Merge on goliath_53 ca_54\n Merge on goliath_54 ca_55\n Merge on goliath_55 ca_56\n Merge on goliath_56 ca_57\n Merge on goliath_57 ca_58\n Merge on goliath_58 ca_59\n Merge on goliath_59 ca_60\n Merge on goliath_60 ca_61\n Merge on goliath_61 ca_62\n Merge on goliath_62 ca_63\n Merge on goliath_63 ca_64\n Merge on goliath_64 ca_65\n Merge on goliath_65 ca_66\n Merge on goliath_66 ca_67\n Merge on goliath_67 ca_68\n Merge on goliath_68 ca_69\n Merge 
on goliath_69 ca_70\n Merge on goliath_70 ca_71\n Merge on goliath_71 ca_72\n Merge on goliath_72 ca_73\n Merge on goliath_73 ca_74\n Merge on goliath_74 ca_75\n Merge on goliath_75 ca_76\n Merge on goliath_76 ca_77\n Merge on goliath_77 ca_78\n Merge on goliath_78 ca_79\n Merge on goliath_79 ca_80\n Merge on goliath_80 ca_81\n Merge on goliath_81 ca_82\n Merge on goliath_82 ca_83\n Merge on goliath_83 ca_84\n Merge on goliath_84 ca_85\n Merge on goliath_85 ca_86\n Merge on goliath_86 ca_87\n Merge on goliath_87 ca_88\n Merge on goliath_88 ca_89\n Merge on goliath_89 ca_90\n Merge on goliath_90 ca_91\n Merge on goliath_91 ca_92\n Merge on goliath_92 ca_93\n Merge on goliath_93 ca_94\n Merge on goliath_94 ca_95\n Merge on goliath_95 ca_96\n Merge on goliath_96 ca_97\n Merge on goliath_97 ca_98\n Merge on goliath_98 ca_99\n Merge on goliath_99 ca_100\n -> Hash Left Join (cost=178016528.81..192778842.44 rows=2187354 width=579)\n Hash Cond: (david.list_id = ca.list_id)\n -> Append (cost=0.00..201068.31 rows=2187354 width=569)\n -> Seq Scan on david_0 david_1 (cost=0.00..1926.65 rows=22165 width=569)\n -> Seq Scan on david_1 david_2 (cost=0.00..1809.13 rows=20813 width=569)\n -> Seq Scan on david_2 david_3 (cost=0.00..1812.52 rows=20852 width=569)\n -> Seq Scan on david_3 david_4 (cost=0.00..1648.67 rows=18967 width=569)\n -> Seq Scan on david_4 david_5 (cost=0.00..1853.20 rows=21320 width=569)\n -> Seq Scan on david_5 david_6 (cost=0.00..1735.68 rows=19968 width=569)\n -> Seq Scan on david_6 david_7 (cost=0.00..1693.87 rows=19487 width=569)\n -> Seq Scan on david_7 david_8 (cost=0.00..1872.41 rows=21541 width=569)\n -> Seq Scan on david_8 david_9 (cost=0.00..1827.21 rows=21021 width=569)\n -> Seq Scan on david_9 david_10 (cost=0.00..1815.91 rows=20891 width=569)\n -> Seq Scan on david_10 david_11 (cost=0.00..1757.15 rows=20215 width=569)\n -> Seq Scan on david_11 david_12 (cost=0.00..1624.94 rows=18694 width=569)\n -> Seq Scan on david_12 david_13 (cost=0.00..1867.89 
rows=21489 width=569)\n -> Seq Scan on david_13 david_14 (cost=0.00..1979.76 rows=22776 width=569)\n -> Seq Scan on david_14 david_15 (cost=0.00..1706.30 rows=19630 width=569)\n -> Seq Scan on david_15 david_16 (cost=0.00..1828.34 rows=21034 width=569)\n -> Seq Scan on david_16 david_17 (cost=0.00..1702.91 rows=19591 width=569)\n -> Seq Scan on david_17 david_18 (cost=0.00..1805.74 rows=20774 width=569)\n -> Seq Scan on david_18 david_19 (cost=0.00..3531.25 rows=40625 width=569)\n -> Seq Scan on david_19 david_20 (cost=0.00..1522.11 rows=17511 width=569)\n -> Seq Scan on david_20 david_21 (cost=0.00..1950.38 rows=22438 width=569)\n -> Seq Scan on david_21 david_22 (cost=0.00..1957.16 rows=22516 width=569)\n -> Seq Scan on david_22 david_23 (cost=0.00..1745.85 rows=20085 width=569)\n -> Seq Scan on david_23 david_24 (cost=0.00..1730.03 rows=19903 width=569)\n -> Seq Scan on david_24 david_25 (cost=0.00..1784.27 rows=20527 width=569)\n -> Seq Scan on david_25 david_26 (cost=0.00..1698.39 rows=19539 width=569)\n -> Seq Scan on david_26 david_27 (cost=0.00..1900.66 rows=21866 width=569)\n -> Seq Scan on david_27 david_28 (cost=0.00..1813.65 rows=20865 width=569)\n -> Seq Scan on david_28 david_29 (cost=0.00..2009.14 rows=23114 width=569)\n -> Seq Scan on david_29 david_30 (cost=0.00..1778.62 rows=20462 width=569)\n -> Seq Scan on david_30 david_31 (cost=0.00..1779.75 rows=20475 width=569)\n -> Seq Scan on david_31 david_32 (cost=0.00..1892.75 rows=21775 width=569)\n -> Seq Scan on david_32 david_33 (cost=0.00..1988.80 rows=22880 width=569)\n -> Seq Scan on david_33 david_34 (cost=0.00..1804.61 rows=20761 width=569)\n -> Seq Scan on david_34 david_35 (cost=0.00..1857.72 rows=21372 width=569)\n -> Seq Scan on david_35 david_36 (cost=0.00..1782.01 rows=20501 width=569)\n -> Seq Scan on david_36 david_37 (cost=0.00..2352.66 rows=27066 width=569)\n -> Seq Scan on david_37 david_38 (cost=0.00..1962.81 rows=22581 width=569)\n -> Seq Scan on david_38 david_39 
(cost=0.00..2002.36 rows=23036 width=569)\n -> Seq Scan on david_39 david_40 (cost=0.00..1852.07 rows=21307 width=569)\n -> Seq Scan on david_40 david_41 (cost=0.00..2116.49 rows=24349 width=569)\n -> Seq Scan on david_41 david_42 (cost=0.00..1785.40 rows=20540 width=569)\n -> Seq Scan on david_42 david_43 (cost=0.00..1838.51 rows=21151 width=569)\n -> Seq Scan on david_43 david_44 (cost=0.00..1931.17 rows=22217 width=569)\n -> Seq Scan on david_44 david_45 (cost=0.00..1878.06 rows=21606 width=569)\n -> Seq Scan on david_45 david_46 (cost=0.00..1859.98 rows=21398 width=569)\n -> Seq Scan on david_46 david_47 (cost=0.00..1882.58 rows=21658 width=569)\n -> Seq Scan on david_47 david_48 (cost=0.00..1791.05 rows=20605 width=569)\n -> Seq Scan on david_48 david_49 (cost=0.00..1925.52 rows=22152 width=569)\n -> Seq Scan on david_49 david_50 (cost=0.00..1953.77 rows=22477 width=569)\n -> Seq Scan on david_50 david_51 (cost=0.00..1797.83 rows=20683 width=569)\n -> Seq Scan on david_51 david_52 (cost=0.00..1680.31 rows=19331 width=569)\n -> Seq Scan on david_52 david_53 (cost=0.00..1626.07 rows=18707 width=569)\n -> Seq Scan on david_53 david_54 (cost=0.00..2003.49 rows=23049 width=569)\n -> Seq Scan on david_54 david_55 (cost=0.00..1771.84 rows=20384 width=569)\n -> Seq Scan on david_55 david_56 (cost=0.00..1700.65 rows=19565 width=569)\n -> Seq Scan on david_56 david_57 (cost=0.00..1931.17 rows=22217 width=569)\n -> Seq Scan on david_57 david_58 (cost=0.00..1833.99 rows=21099 width=569)\n -> Seq Scan on david_58 david_59 (cost=0.00..1918.74 rows=22074 width=569)\n -> Seq Scan on david_59 david_60 (cost=0.00..1885.97 rows=21697 width=569)\n -> Seq Scan on david_60 david_61 (cost=0.00..4095.12 rows=47112 width=569)\n -> Seq Scan on david_61 david_62 (cost=0.00..2076.94 rows=23894 width=569)\n -> Seq Scan on david_62 david_63 (cost=0.00..2876.98 rows=33098 width=569)\n -> Seq Scan on david_63 david_64 (cost=0.00..1647.54 rows=18954 width=569)\n -> Seq Scan on david_64 
david_65 (cost=0.00..1653.19 rows=19019 width=569)\n -> Seq Scan on david_65 david_66 (cost=0.00..1684.83 rows=19383 width=569)\n -> Seq Scan on david_66 david_67 (cost=0.00..1863.37 rows=21437 width=569)\n -> Seq Scan on david_67 david_68 (cost=0.00..1717.60 rows=19760 width=569)\n -> Seq Scan on david_68 david_69 (cost=0.00..1847.55 rows=21255 width=569)\n -> Seq Scan on david_69 david_70 (cost=0.00..2235.14 rows=25714 width=569)\n -> Seq Scan on david_70 david_71 (cost=0.00..2273.56 rows=26156 width=569)\n -> Seq Scan on david_71 david_72 (cost=0.00..1745.85 rows=20085 width=569)\n -> Seq Scan on david_72 david_73 (cost=0.00..1861.11 rows=21411 width=569)\n -> Seq Scan on david_73 david_74 (cost=0.00..1856.59 rows=21359 width=569)\n -> Seq Scan on david_74 david_75 (cost=0.00..1885.97 rows=21697 width=569)\n -> Seq Scan on david_75 david_76 (cost=0.00..1665.62 rows=19162 width=569)\n -> Seq Scan on david_76 david_77 (cost=0.00..1870.15 rows=21515 width=569)\n -> Seq Scan on david_77 david_78 (cost=0.00..1776.36 rows=20436 width=569)\n -> Seq Scan on david_78 david_79 (cost=0.00..1766.19 rows=20319 width=569)\n -> Seq Scan on david_79 david_80 (cost=0.00..1812.52 rows=20852 width=569)\n -> Seq Scan on david_80 david_81 (cost=0.00..1995.58 rows=22958 width=569)\n -> Seq Scan on david_81 david_82 (cost=0.00..1701.78 rows=19578 width=569)\n -> Seq Scan on david_82 david_83 (cost=0.00..1658.84 rows=19084 width=569)\n -> Seq Scan on david_83 david_84 (cost=0.00..1840.77 rows=21177 width=569)\n -> Seq Scan on david_84 david_85 (cost=0.00..1688.22 rows=19422 width=569)\n -> Seq Scan on david_85 david_86 (cost=0.00..1918.74 rows=22074 width=569)\n -> Seq Scan on david_86 david_87 (cost=0.00..2963.99 rows=34099 width=569)\n -> Seq Scan on david_87 david_88 (cost=0.00..2075.81 rows=23881 width=569)\n -> Seq Scan on david_88 david_89 (cost=0.00..1783.14 rows=20514 width=569)\n -> Seq Scan on david_89 david_90 (cost=0.00..1765.06 rows=20306 width=569)\n -> Seq Scan on 
david_90 david_91 (cost=0.00..1950.38 rows=22438 width=569)\n -> Seq Scan on david_91 david_92 (cost=0.00..1840.77 rows=21177 width=569)\n -> Seq Scan on david_92 david_93 (cost=0.00..1783.14 rows=20514 width=569)\n -> Seq Scan on david_93 david_94 (cost=0.00..1705.17 rows=19617 width=569)\n -> Seq Scan on david_94 david_95 (cost=0.00..1817.04 rows=20904 width=569)\n -> Seq Scan on david_95 david_96 (cost=0.00..1977.50 rows=22750 width=569)\n -> Seq Scan on david_96 david_97 (cost=0.00..1946.99 rows=22399 width=569)\n -> Seq Scan on david_97 david_98 (cost=0.00..1814.78 rows=20878 width=569)\n -> Seq Scan on david_98 david_99 (cost=0.00..1844.16 rows=21216 width=569)\n -> Seq Scan on david_99 david_100 (cost=0.00..1769.58 rows=20358 width=569)\n -> Hash (cost=147650973.90..147650973.90 rows=1653953593 width=18)\n -> Append (cost=0.00..147650973.90 rows=1653953593 width=18)\n -> Seq Scan on goliath_0 ca_1 (cost=0.00..11177255.56 rows=150300456 width=18)\n -> Seq Scan on goliath_1 ca_2 (cost=0.00..1234238.22 rows=14780522 width=18)\n -> Seq Scan on goliath_2 ca_3 (cost=0.00..1323160.42 rows=15336142 width=18)\n -> Seq Scan on goliath_3 ca_4 (cost=0.00..1256666.46 rows=15029146 width=18)\n -> Seq Scan on goliath_4 ca_5 (cost=0.00..1272324.75 rows=15157175 width=18)\n -> Seq Scan on goliath_5 ca_6 (cost=0.00..1270349.37 rows=15044037 width=18)\n -> Seq Scan on goliath_6 ca_7 (cost=0.00..1284810.74 rows=15261474 width=18)\n -> Seq Scan on goliath_7 ca_8 (cost=0.00..1263715.41 rows=15020741 width=18)\n -> Seq Scan on goliath_8 ca_9 (cost=0.00..1265121.73 rows=14953673 width=18)\n -> Seq Scan on goliath_9 ca_10 (cost=0.00..1309331.70 rows=15314570 width=18)\n -> Seq Scan on goliath_10 ca_11 (cost=0.00..1269041.02 rows=15086702 width=18)\n -> Seq Scan on goliath_11 ca_12 (cost=0.00..1268299.98 rows=15042498 width=18)\n -> Seq Scan on goliath_12 ca_13 (cost=0.00..1294069.08 rows=15206708 width=18)\n -> Seq Scan on goliath_13 ca_14 (cost=0.00..1344155.97 rows=15480897 
width=18)\n -> Seq Scan on goliath_14 ca_15 (cost=0.00..1258529.41 rows=15007641 width=18)\n -> Seq Scan on goliath_15 ca_16 (cost=0.00..1247612.99 rows=14801699 width=18)\n -> Seq Scan on goliath_16 ca_17 (cost=0.00..1398973.15 rows=15833115 width=18)\n -> Seq Scan on goliath_17 ca_18 (cost=0.00..1234430.89 rows=14907189 width=18)\n -> Seq Scan on goliath_18 ca_19 (cost=0.00..1491068.89 rows=16395989 width=18)\n -> Seq Scan on goliath_19 ca_20 (cost=0.00..1241254.74 rows=14743874 width=18)\n -> Seq Scan on goliath_20 ca_21 (cost=0.00..1308537.31 rows=15139031 width=18)\n -> Seq Scan on goliath_21 ca_22 (cost=0.00..1274257.03 rows=15133903 width=18)\n -> Seq Scan on goliath_22 ca_23 (cost=0.00..1348582.00 rows=15415900 width=18)\n -> Seq Scan on goliath_23 ca_24 (cost=0.00..1245613.99 rows=14770499 width=18)\n -> Seq Scan on goliath_24 ca_25 (cost=0.00..1232552.90 rows=14592890 width=18)\n -> Seq Scan on goliath_25 ca_26 (cost=0.00..1237272.86 rows=14785186 width=18)\n -> Seq Scan on goliath_26 ca_27 (cost=0.00..1395925.80 rows=15794380 width=18)\n -> Seq Scan on goliath_27 ca_28 (cost=0.00..1243112.59 rows=14888659 width=18)\n -> Seq Scan on goliath_28 ca_29 (cost=0.00..1261176.05 rows=15014705 width=18)\n -> Seq Scan on goliath_29 ca_30 (cost=0.00..1375287.81 rows=15912981 width=18)\n -> Seq Scan on goliath_30 ca_31 (cost=0.00..1236320.19 rows=14789519 width=18)\n -> Seq Scan on goliath_31 ca_32 (cost=0.00..1278375.97 rows=15203897 width=18)\n -> Seq Scan on goliath_32 ca_33 (cost=0.00..1263550.43 rows=14860643 width=18)\n -> Seq Scan on goliath_33 ca_34 (cost=0.00..1299830.39 rows=15186239 width=18)\n -> Seq Scan on goliath_34 ca_35 (cost=0.00..1352761.61 rows=15664361 width=18)\n -> Seq Scan on goliath_35 ca_36 (cost=0.00..1323724.61 rows=15543061 width=18)\n -> Seq Scan on goliath_36 ca_37 (cost=0.00..1508080.05 rows=16098705 width=18)\n -> Seq Scan on goliath_37 ca_38 (cost=0.00..1247180.20 rows=15092220 width=18)\n -> Seq Scan on goliath_38 ca_39 
(cost=0.00..1265121.63 rows=14913063 width=18)\n -> Seq Scan on goliath_39 ca_40 (cost=0.00..1263161.74 rows=15008274 width=18)\n -> Seq Scan on goliath_40 ca_41 (cost=0.00..1422028.56 rows=15874056 width=18)\n -> Seq Scan on goliath_41 ca_42 (cost=0.00..1276259.24 rows=15052824 width=18)\n -> Seq Scan on goliath_42 ca_43 (cost=0.00..1331700.23 rows=15499623 width=18)\n -> Seq Scan on goliath_43 ca_44 (cost=0.00..1246053.10 rows=14665010 width=18)\n -> Seq Scan on goliath_44 ca_45 (cost=0.00..1275255.85 rows=15143785 width=18)\n -> Seq Scan on goliath_45 ca_46 (cost=0.00..1305362.83 rows=15361783 width=18)\n -> Seq Scan on goliath_46 ca_47 (cost=0.00..1280577.37 rows=15247837 width=18)\n -> Seq Scan on goliath_47 ca_48 (cost=0.00..1251285.15 rows=14806015 width=18)\n -> Seq Scan on goliath_48 ca_49 (cost=0.00..1351232.48 rows=15433548 width=18)\n -> Seq Scan on goliath_49 ca_50 (cost=0.00..1347924.50 rows=15296550 width=18)\n -> Seq Scan on goliath_50 ca_51 (cost=0.00..1357086.54 rows=15541854 width=18)\n -> Seq Scan on goliath_51 ca_52 (cost=0.00..1216370.11 rows=14562711 width=18)\n -> Seq Scan on goliath_52 ca_53 (cost=0.00..1358864.91 rows=15543891 width=18)\n -> Seq Scan on goliath_53 ca_54 (cost=0.00..1303103.21 rows=15233521 width=18)\n -> Seq Scan on goliath_54 ca_55 (cost=0.00..1247450.04 rows=14984504 width=18)\n -> Seq Scan on goliath_55 ca_56 (cost=0.00..1265316.35 rows=14879835 width=18)\n -> Seq Scan on goliath_56 ca_57 (cost=0.00..1256864.72 rows=14942272 width=18)\n -> Seq Scan on goliath_57 ca_58 (cost=0.00..1234443.50 rows=14857950 width=18)\n -> Seq Scan on goliath_58 ca_59 (cost=0.00..1293245.96 rows=15297596 width=18)\n -> Seq Scan on goliath_59 ca_60 (cost=0.00..1234137.56 rows=14820356 width=18)\n -> Seq Scan on goliath_60 ca_61 (cost=0.00..1561333.77 rows=16903277 width=18)\n -> Seq Scan on goliath_61 ca_62 (cost=0.00..1289386.58 rows=15273158 width=18)\n -> Seq Scan on goliath_62 ca_63 (cost=0.00..1375996.18 rows=15783618 width=18)\n -> Seq 
Scan on goliath_63 ca_64 (cost=0.00..1318835.42 rows=15393842 width=18)\n -> Seq Scan on goliath_64 ca_65 (cost=0.00..1279811.42 rows=15025442 width=18)\n -> Seq Scan on goliath_65 ca_66 (cost=0.00..1238623.43 rows=14795243 width=18)\n -> Seq Scan on goliath_66 ca_67 (cost=0.00..1305470.50 rows=15230950 width=18)\n -> Seq Scan on goliath_67 ca_68 (cost=0.00..1261104.19 rows=14894319 width=18)\n -> Seq Scan on goliath_68 ca_69 (cost=0.00..1283365.18 rows=15239718 width=18)\n -> Seq Scan on goliath_69 ca_70 (cost=0.00..1314101.63 rows=15292263 width=18)\n -> Seq Scan on goliath_70 ca_71 (cost=0.00..1253906.46 rows=14978446 width=18)\n -> Seq Scan on goliath_71 ca_72 (cost=0.00..1294493.71 rows=15331171 width=18)\n -> Seq Scan on goliath_72 ca_73 (cost=0.00..1286198.09 rows=15418009 width=18)\n -> Seq Scan on goliath_73 ca_74 (cost=0.00..1289391.43 rows=15229243 width=18)\n -> Seq Scan on goliath_74 ca_75 (cost=0.00..1242385.13 rows=14670113 width=18)\n -> Seq Scan on goliath_75 ca_76 (cost=0.00..1263916.51 rows=14982751 width=18)\n -> Seq Scan on goliath_76 ca_77 (cost=0.00..1294899.69 rows=15200869 width=18)\n -> Seq Scan on goliath_77 ca_78 (cost=0.00..1269406.36 rows=14930436 width=18)\n -> Seq Scan on goliath_78 ca_79 (cost=0.00..1235383.65 rows=14800365 width=18)\n -> Seq Scan on goliath_79 ca_80 (cost=0.00..1236916.73 rows=14710273 width=18)\n -> Seq Scan on goliath_80 ca_81 (cost=0.00..1421834.11 rows=16753911 width=18)\n -> Seq Scan on goliath_81 ca_82 (cost=0.00..1335250.54 rows=15351354 width=18)\n -> Seq Scan on goliath_82 ca_83 (cost=0.00..1243153.05 rows=14688205 width=18)\n -> Seq Scan on goliath_83 ca_84 (cost=0.00..1250345.33 rows=14741833 width=18)\n -> Seq Scan on goliath_84 ca_85 (cost=0.00..1301004.00 rows=14956400 width=18)\n -> Seq Scan on goliath_85 ca_86 (cost=0.00..1289135.50 rows=15009950 width=18)\n -> Seq Scan on goliath_86 ca_87 (cost=0.00..1488060.65 rows=16380865 width=18)\n -> Seq Scan on goliath_87 ca_88 (cost=0.00..1265255.27 
rows=15004127 width=18)\n -> Seq Scan on goliath_88 ca_89 (cost=0.00..1291141.46 rows=15112446 width=18)\n -> Seq Scan on goliath_89 ca_90 (cost=0.00..1236365.37 rows=14733237 width=18)\n -> Seq Scan on goliath_90 ca_91 (cost=0.00..1276548.94 rows=14938294 width=18)\n -> Seq Scan on goliath_91 ca_92 (cost=0.00..1375334.48 rows=16060648 width=18)\n -> Seq Scan on goliath_92 ca_93 (cost=0.00..1257210.14 rows=14934014 width=18)\n -> Seq Scan on goliath_93 ca_94 (cost=0.00..1268048.19 rows=14998419 width=18)\n -> Seq Scan on goliath_94 ca_95 (cost=0.00..1277075.92 rows=15219292 width=18)\n -> Seq Scan on goliath_95 ca_96 (cost=0.00..1351108.41 rows=15454141 width=18)\n -> Seq Scan on goliath_96 ca_97 (cost=0.00..1283608.49 rows=15091049 width=18)\n -> Seq Scan on goliath_97 ca_98 (cost=0.00..1277019.32 rows=15265132 width=18)\n -> Seq Scan on goliath_98 ca_99 (cost=0.00..1258056.96 rows=15006496 width=18)\n -> Seq Scan on goliath_99 ca_100 (cost=0.00..1221325.89 rows=14612389 width=18)\n\n\n\n\n",
"msg_date": "Sun, 18 Jun 2023 22:57:11 +0200",
"msg_from": "nicolas paris <nicolas.paris@riseup.net>",
"msg_from_op": false,
"msg_subject": "Re: Merge David and Goliath tables efficiently"
},
{
"msg_contents": "I came here to talk about partitionwise join, but then noticed you have\nalready thought of that:\n\nOn 2023-Jun-18, nicolas paris wrote:\n\n> Note that both plan acome from the same partitioned by hash table with\n> 100 parts, with a unique index on the list_id + hash_key. For strategy\n> 2.1, I turned on enable_partitionwise_join, since david table has the\n> same partitioning scheme as goliath including unique indexe. In both\n> case the query is:\n\nHmm, I suppose the reason partitionwise join isn't having any effect is\nthat the presence of WHEN NOT MATCHED clauses forces an outer join, which\nprobably disarms partitionwise joining, since each join pair would\nneed to match on nulls, so there would be two matching partitions at\nthe other end. A quick test for this hypothesis might be to try the\nMERGE without the WHEN NOT MATCHED clauses and see if partitionwise join\nworks better.\n\nMaybe Tom L's new outer-join infrastructure in 16 allows improving on\nthis, not sure.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Los dioses no protegen a los insensatos. Éstos reciben protección de\notros insensatos mejor dotados\" (Luis Wu, Mundo Anillo)\n\n\n",
"msg_date": "Mon, 19 Jun 2023 09:46:46 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Merge David and Goliath tables efficiently"
},
{
"msg_contents": "\n\nOn 6/18/23 22:57, nicolas paris wrote:\n>> ...\n>> Well, I kinda doubt you have 17023331531230 rows (not even physically\n>> possible with 2TB disk), so that's immediately suspicious.\n> \n> Below is the full plan for the strategy 2.1 (Indeed the previous email\n> plan was truncated and wrong, sorry for that). \n> \n\nNone of the plans has estimates anywhere close to 17023331531230, so\nwhere did that come from?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 19 Jun 2023 13:34:45 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Merge David and Goliath tables efficiently"
},
{
"msg_contents": "On 6/19/23 09:46, Alvaro Herrera wrote:\n> I came here to talk about partitionwise join, but then noticed you have\n> already thought of that:\n> \n> On 2023-Jun-18, nicolas paris wrote:\n> \n>> Note that both plan acome from the same partitioned by hash table with\n>> 100 parts, with a unique index on the list_id + hash_key. For strategy\n>> 2.1, I turned on enable_partitionwise_join, since david table has the\n>> same partitioning scheme as goliath including unique indexe. In both\n>> case the query is:\n> \n> Hmm, I suppose the reason partitionwise join isn't having any effect is\n> that the presence of WHEN NOT MATCHED clauses force an outer join, which\n> probably disarms partitionwise joining, since each join pair would\n> require to match for nulls, so there would be two matching partitions at\n> the other end. A quick test for this hypothesis might be to try the\n> MERGE without the WHEN NOT MATCHED clauses and see if partitionwise join\n> works better.\n> \n> Maybe Tom L's new outer-join infrastructure in 16 allows to improve on\n> this, not sure.\n> \n\nNot sure why that would disarm partitionwise join - attached is a simple\nreproducer, generating two tables, loading 10000000 and 10000 rows into\nthem, and then doing explain on a simple merge.\n\nIMHO the thing that breaks it is the ORDER BY in the merge, which likely\nacts as an optimization fence and prevents all sorts of smart things\nincluding the partitionwise join. I'd bet that if Nicolas replaces\n\n   MERGE INTO \"goliath\" ca\n   USING (SELECT * FROM \"david\" ORDER BY \"list_id\") AS t\n   ..\n\nwith\n\n   MERGE INTO \"goliath\" ca\n   USING \"david\" AS t\n   ...\n\nit'll start working much better.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 19 Jun 2023 13:53:50 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Merge David and Goliath tables efficiently"
},
{
"msg_contents": "On Mon, 2023-06-19 at 13:34 +0200, Tomas Vondra wrote:\n> \n> \n> On 6/18/23 22:57, nicolas paris wrote:\n> > > ...\n> > > Well, I kinda doubt you have 17023331531230 rows (not even\n> > > physically\n> > > possible with 2TB disk), so that's immediately suspicious.\n> > \n> > Below is the full plan for the strategy 2.1 (Indeed the previous\n> > email\n> > plan was truncated and wrong, sorry for that). \n> > \n> \n> None of the plans has estimates anywhere close to 17023331531230, so\n> where did that come from?\n> \n\nWell, this was an old plan where there was an issue: the david table did\nnot have the same partitioning scheme as goliath. It was partitioned by\nanother column.\n\n\n",
"msg_date": "Mon, 19 Jun 2023 14:09:37 +0200",
"msg_from": "nicolas paris <nicolas.paris@riseup.net>",
"msg_from_op": false,
"msg_subject": "Re: Merge David and Goliath tables efficiently"
},
{
"msg_contents": "> IMHO the thing that breaks it is the ORDER BY in the merge, which\n> likely\n> acts as an optimization fence and prevents all sorts of smart things\n> including the partitionwise join. I'd bet that if Nicolas replaces\n> \n> MERGE INTO \"goliath\" ca\n> USING (SELECT * FROM \"david\" ORDER BY \"list_id\") AS t\n> .\n\nSorry if it was not clear; however, there is no ORDER BY in the 2.1\nstrategy, so this cannot be the reason the optimization is not triggered.\n\nFor information, I do enable the partitionwise join feature with a JDBC call\njust before the merge:\nset enable_partitionwise_join=true\n\n\n",
"msg_date": "Mon, 19 Jun 2023 14:20:26 +0200",
"msg_from": "nicolas paris <nicolas.paris@riseup.net>",
"msg_from_op": false,
"msg_subject": "Re: Merge David and Goliath tables efficiently"
},
{
"msg_contents": "On 6/19/23 14:20, nicolas paris wrote:\n>> IMHO the thing that breaks it is the ORDER BY in the merge, which\n>> likely\n>> acts as an optimization fence and prevents all sorts of smart things\n>> including the partitionwise join. I'd bet that if Nicolas replaces\n>>\n>> MERGE INTO \"goliath\" ca\n>> USING (SELECT * FROM \"david\" ORDER BY \"list_id\") AS t\n>> .\n> \n> Sorry if it was not clear, however there is no order by in the 2.1\n> strategy. Then this cannot be the reason of not triggering the optim.\n> \n> For information I do enable partition join feature with jdbc call just\n> before the merge:\n> set enable_partitionwise_join=true\n> \n\nBut you wrote that in both cases the query is:\n\n MERGE INTO \"goliath\" ca\n USING (SELECT * FROM \"david\" ORDER BY \"list_id\") AS t\n ON t.\"list_id\" = ca.\"list_id\"\n WHEN MATCHED THEN\n UPDATE SET ...\n WHEN NOT MATCHED THEN\n INSERT (...)\n VALUES (...)\n\nWith all due respect, I'm getting a bit tired of having to speculate\nabout what exactly you're doing etc. based on bits of information.\n\nI'm willing to continue to investigate, but only if you prepare a\nreproducer, i.e. a SQL script that demonstrates the issue - I don't\nthink preparing that should be difficult, something like the SQL script\nI shared earlier today should do the trick.\n\nI suggest you do that directly, not through JDBC. Perhaps the JDBC\nconnection pool does something funny (like connection pooling and\nresetting settings).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 19 Jun 2023 15:13:39 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Merge David and Goliath tables efficiently"
},
{
"msg_contents": "> But you wrote that in both cases the query is:\n\nThat was indeed yet another typo; I hope to do better in the future.\n\n\n> I'm willing to continue to investigate, but only if you prepare a\n> reproducer, \n\nThanks for your starter script. Please find attached 2 scripts which\nnow illustrate two problems.\n\nrepro1.sql is a slight evolution of yours. When I play with the david size\n(as described in the comments) you will see the plan going from a nested loop\nto a sequential scan. Also note that the partitionwise join is likely\nworking. This illustrates my initial problem: the sequential scan is not\ngoing to work well on my workload (david is too large). How can I suggest\nthat Postgres use a nested loop here? (I suspect playing with costs should\nhelp)\n\n\nrepro2.sql: now I changed the table layout (similar to my setup) to\nreproduce the partitionwise join, which does not trigger. I added a\npartition column, and a unique index to be able to mimic a primary key.\nNow partitionwise join (in my local Docker vanilla Postgres 15.3) does not\nwork. That said, if I do small batches, then the merge works fast.\nThat's another, unrelated problem.\n\n\n> I suggest you do that directly, not through JDBC. Perhaps the JDBC\n> connection pool does something funny (like connection pooling and\n> resetting settings).\n\nI can tell JDBC was working, so the reason is likely in my\ncurrent table setup.",
"msg_date": "Mon, 19 Jun 2023 17:45:06 +0200",
"msg_from": "nicolas paris <nicolas.paris@riseup.net>",
"msg_from_op": false,
"msg_subject": "Re: Merge David and Goliath tables efficiently"
},
{
"msg_contents": "\n\nOn 6/19/23 17:45, nicolas paris wrote:\n>> But you wrote that in both cases the query is:\n> \n> that was indeed yet another tipo, hope to do better in the future.\n> \n> \n>> I'm willing to continue to investigate, but only if you prepare a\n>> reproducer, \n> \n> Thanks for your starter script. Please find attached 2 scripts which\n> now illustrates two troubles.\n> \n> repro1.sql is a slight evolution of yours. When I play with david size\n> (as described in the comments) you will see plan going from nested loop\n> to sequential scan. Also note that the partition wise join is likely\n> working. This illustrate my initial problem: the sequential scan is not\n> going to work fine on my workload (david too large). How to suggest\n> postgres to use a nested loop here ? (suspect playing with costs should\n> help)\n> \n\nIn general, this behavior is expected. The overall idea is that nested\nloops are efficient for small row counts, but the cost rises quickly\nexactly because they do a lot of random I/O (due to the index scan). At\nsome point it's cheaper to switch to a plan that does sequential I/O. 
Which is\nexactly what's happening here - we switch from a plan doing a lot of\nrandom I/O on goliath\n\n QUERY PLAN\n ----------------------------------------------------------------------\n Merge on goliath (cost=0.29..7888.69 rows=0 width=0)\n Merge on goliath_0 goliath_1\n ...\n Merge on goliath_99 goliath_100\n -> Append (cost=0.29..7888.69 rows=9998 width=47)\n -> Nested Loop Left Join (cost=0.29..93.89 rows=120 ...\n -> Seq Scan on david_0 david_1 (cost=0.00..2.20 ...\n -> Index Scan using goliath_0_pkey on goliath_0 ...\n Index Cond: (id = david_1.id)\n -> Nested Loop Left Join (cost=0.29..77.10 rows=98 ...\n -> Seq Scan on david_1 david_2 (cost=0.00..1.98 ...\n -> Index Scan using goliath_1_pkey on goliath_1 ...\n Index Cond: (id = david_2.id)\n ...\n -> Nested Loop Left Join (cost=0.29..74.58 rows=95 ...\n -> Seq Scan on david_99 david_100 (cost=0.00..1.95\n -> Index Scan using goliath_99_pkey on goliath_99 ...\n Index Cond: (id = david_100.id)\n (502 rows)\n\nto a plan that does a lot of sequential I/O\n\n QUERY PLAN\n ----------------------------------------------------------------------\n Merge on goliath (cost=293.44..264556.47 rows=0 width=0)\n Merge on goliath_0 goliath_1\n ...\n Merge on goliath_99 goliath_100\n -> Append (cost=293.44..264556.47 rows=951746 width=47)\n -> Hash Right Join (cost=293.44..2597.05 rows=9486 ...\n Hash Cond: (goliath_1.id = david_1.id)\n -> Seq Scan on goliath_0 goliath_1 (cost=0.00..\n -> Hash (cost=174.86..174.86 rows=9486 width=37)\n -> Seq Scan on david_0 david_1 (cost=0.00..\n -> Hash Right Join (cost=295.62..2613.90 rows=9583 width=\n Hash Cond: (goliath_2.id = david_2.id)\n -> Seq Scan on goliath_1 goliath_2 (cost=0.00..1845\n -> Hash (cost=175.83..175.83 rows=9583 width=37)\n -> Seq Scan on david_1 david_2 (cost=0.00..\n ...\n -> Hash Right Join (cost=288.33..2593.16 rows=9348 width\n Hash Cond: (goliath_100.id = david_100.id)\n -> Seq Scan on goliath_99 goliath_100 (cost=0.00..\n -> Hash (cost=171.48..171.48 
rows=9348 width=37)\n              ->  Seq Scan on david_99 david_100 (cost=0.00..\n (602 rows)\n\nThat's expected, because the cost if I force the Nested Loop with the\nhigher row count in \"david\" looks like this:\n\n                              QUERY PLAN\n  ----------------------------------------------------------------------\n   Merge on goliath  (cost=0.29..331253.00 rows=0 width=0)\n   ...\n\nOf course, the question is at which point the switch should happen. You\ncan try setting enable_hashjoin=off, which should push the optimizer to\nuse the first plan. If it does, you'll know the cost difference between\nthe two plans.\n\nIf you run it and it's actually faster than the \"default\" plan with a\nhashjoin/seqscans, you can try lowering random_page_cost, which is\nlikely the main input. The default is 4, in general it should be higher\nthan seq_page_cost=1.0 (because random I/O is more expensive).\n\nIn any case, there's a point when the nested loops get terrible. I mean,\nimagine the \"david\" has 100000 rows, and \"goliath\" has 100000 pages\n(800MB). It's just cheaper to do seqscan 100k pages than randomly scan\nthe same 100k pages. You can tune where the plan flips, ofc.\n\n> \n> repro2.sql now I changed the table layout (similar to my setup) to\n> reproduce the partition wise join which does not triggers. I added a\n> partition column, and a unique index to be able to mimic a primary key.\n> Now partition wise (in my local docker vanilla postgres 15.3) does not\n> work. Eventually, if I do small batch, then the merge is working fast.\n> That's an other, unrelated problem.\n> \n\nThis is absolutely expected. If you partition by hash (id, part_key),\nyou can't join on (id) and expect partitionwise join to work. To quote\nthe enable_partitionwise_join documentation [1]:\n\n    Enables or disables the query planner's use of partitionwise join,\n    which allows a join between partitioned tables to be performed by\n    joining the matching partitions. 
Partitionwise join currently\n applies only when the join conditions include all the partition\n keys, which must be of the same data type and have one-to-one\n matching sets of child partitions.\n\nSo the fact that\n\n merge into goliath using david on david.id = goliath.id\n when matched then update set val = david.val\n when not matched then insert (id, val) values (david.id, david.val);\n\ndoes not work is absolutely expected. You need to join on part_col too.\n\nThis is because the partition is determined by hash of both columns, a\nbit like md5(id+part_col) so knowing just the \"id\" is not enough to\ndetermine the partition.\n\nEven if you fix that, the last query won't use partitionwise join\nbecause of the ORDER BY subquery, which serves as an optimization fence\n(which means the merge does not actually see the underlying table is\npartitioned).\n\nIf you get rid of that and add the part_col to the join, it translates\nto the first issue with setting costs to flip to the sequential scan at\nthe right point.\n\n\n\n[1]\nhttps://www.postgresql.org/docs/15/runtime-config-query.html#GUC-ENABLE-PARTITIONWISE-JOIN\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 20 Jun 2023 01:25:52 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Merge David and Goliath tables efficiently"
},
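A minimal sketch of the fix Tomas describes: with hash partitioning on (id, part_col), partitionwise join only applies when the MERGE joins on *both* partition key columns. Table and column names follow the thread's repro; the schema and partition count here are assumptions for illustration.

```sql
-- Assumed schema, mirroring the thread's repro2.sql.
CREATE TABLE goliath (id bigint, part_col int, val text)
    PARTITION BY HASH (id, part_col);
CREATE TABLE david (id bigint, part_col int, val text)
    PARTITION BY HASH (id, part_col);
-- ... plus one-to-one matching partitions goliath_0..goliath_99
--     and david_0..david_99 ...

SET enable_partitionwise_join = on;

-- Joining on the full partition key lets each david_N be merged
-- directly into its matching goliath_N, instead of every lookup
-- probing all 100 goliath partitions:
MERGE INTO goliath g
USING david d
    ON d.id = g.id AND d.part_col = g.part_col
WHEN MATCHED THEN
    UPDATE SET val = d.val
WHEN NOT MATCHED THEN
    INSERT (id, part_col, val) VALUES (d.id, d.part_col, d.val);
```

Note the USING clause must be the partitioned table itself, not an ORDER BY subquery over it, or the planner no longer sees the partitioning.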
{
"msg_contents": "> This is absolutely expected. If you partition by hash (id, part_key),\n> you can't join on (id) and expect partitionwise join to work. To\n> quote\n> the enable_partitionwise_join documentation [1]:\n> \n> Enables or disables the query planner's use of partitionwise\n> join,\n> which allows a join between partitioned tables to be performed by\n> joining the matching partitions. Partitionwise join currently\n> applies only when the join conditions include all the partition\n> keys, which must be of the same data type and have one-to-one\n> matching sets of child partitions.\n> \n> So the fact that\n> \n> merge into goliath using david on david.id = goliath.id\n> when matched then update set val = david.val\n> when not matched then insert (id, val) values (david.id,\n> david.val);\n> \n> does not work is absolutely expected. You need to join on part_col\n> too.\n\nIt definitely makes sense to add part_col to the join columns.\nIt also helps the planner choose a better plan: with my current workload\nso far, it now uses a per-partition nested loop without having to trick\nthe costs (either enable_hashjoin or random_page_cost).\n\n\n\nThank you goliath\n\n\n-- david\n\n\n",
"msg_date": "Tue, 20 Jun 2023 12:02:17 +0200",
"msg_from": "nicolas paris <nicolas.paris@riseup.net>",
"msg_from_op": false,
"msg_subject": "Re: Merge David and Goliath tables efficiently"
},
{
"msg_contents": "On 6/20/23 12:02, nicolas paris wrote:\n>...\n>\n> Definitely this makes sense to add the part_col in the join columns.\n> Also it helps the planner to choose a better plan, since now it goes\n> with per partition nested loop without having to trick the costs\n> (either enable_hashjoin/random_page_cost), with my current workload so\n> far.\n>\n\nRight. With non-partitionwise join the nestloop inner lookup has to do\nindexscan on every partition (it can't decide which of the partitions\nwill have a match, and for costing we assume there's at least 1 row in\neach lookup). Which essentially amplifies the amount of random I/O by a\nfactor of 100x (or whatever the number of partitions is).\n\nThat is, instead of doing 100x nested loops like this:\n\n -> Nested Loop Left Join (cost=0.29..33.42 rows=8 width=47)\n -> Seq Scan on david_98 david_99 (cost=0.00..1.08\n -> Index Scan using goliath_98_id_part_col_idx on\n Index Cond: ((id = david_99.id) AND ...)\n\nwe end up doing one nested loop with an inner lookup like this\n\n -> Append (cost=0.29..557.63 rows=100 width=14)\n -> Index Scan using ... goliath_1 (cost=0.29..5.57 ...\n Index Cond: (id = david.id)\n ...\n\nAnd this is per-loop, of which there'll be 500 (because the small david\ntable has 500 rows).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 20 Jun 2023 14:45:30 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Merge David and Goliath tables efficiently"
}
] |
[
{
"msg_contents": "Dear fellow list members,\n\nI'm in the process of implementing a file storage system that is based on\nPostgreSQL and streaming replication. There will possibly be many similar\nfiles stored. I would like to implement block-level deduplication: each\nfile consists of a series of blocks, and each unique block is stored only\nonce (e.g. one block may be reused by multiple files). It will be part of a\nbigger software system, e.g. the same database will be used for other purposes too.\n\nHere is the basic idea for storing individual blocks:\n\n\ncreate table block(\n id uuid not null primary key,\n block bytea not null,\n hs256 bytea not null\n) ;\ncreate unique index uidx_block_hs256 on block(hs256);\n\ncreate or replace function trg_biu_block() returns trigger language plpgsql\nas\n$function$\nbegin\nnew.hs256 = digest(new.block, 'sha256');\nreturn new;\nend;\n$function$;\n\ncreate trigger trg_biu_block before insert or update on block for each row\nexecute procedure trg_biu_block();\n\nThis is just for storing the blocks. I'm going to put this \"block\" table\ninto a separate tablespace. File operations will be at least 95% read and\nat most 5% write. (Streaming replication will hopefully allow almost\nhorizontal scaling for read operations.) Most of the files will be several\nmegabytes in size (images), and some of them will be 100MB or more\n(videos). Total capacity is in the 10TB range. Storage will be SSD\n(possibly fiber connected, or local RAID, we are not sure yet).\n\nI do not want to use PostgreSQL large objects, because they do not have\nblock-level deduplication.\n\nHere are some things that I need help with:\n\n1. What should be the maximum size of a block? I was trying to find out the\noptimal value. Default BLCKSZ is 8192 bytes. AFAIK PostgreSQL does not\nallow a row to occupy multiple blocks. I don't know enough to calculate the\noptimal \"block size\" (the max. 
number of bytes stored in a single row in\nthe block.block field), but I suspect that it should be 4K or something\nsimilar. I think that it should be as large as possible, without hitting\nthe toast. My thinking is this: most files will be at least 1MB in size, so\nmost \"block\" rows will reach the maximum tuple size. It would be practical\nto make one row in the \"block\" table occupy almost one PostgreSQL block.\n2. I'm not sure if it would be beneficial to increase BLCKSZ. I will be\nable to test the speed of my (not yet finished) implementation with\ndifferent BLCKSZ values, but that alone won't help me make the decision,\nbecause AFAIK BLCKSZ must have a fixed value for the PostgreSQL instance,\nso it will affect all other tables in the database. It would be hard to\ntell how changing BLCKSZ would affect the system as a whole.\n3. In the above example, I used SHA-256 (pgcrypto), because apparently it\nis very well optimized for 64 bit machines, and it has practically zero\nchance of a collision. I think that sha512 would be an overkill. But I'm\nnot sure that this is the best choice. Maybe somebody with more experience\ncan suggest a better hash function.\n4. The hs256 value will always be non-null, fixed 32 byte binary value, but\nprobably the query planner will not know anything about that. I was also\nthinking about bit(256), but I don't see an easy way to convert the bytea\ndigest into bit(256). A simple type cast won't work here. Maybe using bytea\nhere is perfectly fine, and creating an index on the hs256 bytea fields is\nas effective as possible.\n\nI'm not looking for a definitive answer, just trying to get some hints from\nmore experienced users before I fill up the drives with terabytes of data.\n\nThank you,\n\n Laszlo",
"msg_date": "Mon, 19 Jun 2023 22:05:33 +0200",
"msg_from": "Les <nagylzs@gmail.com>",
"msg_from_op": true,
"msg_subject": "Index on (fixed size) bytea value"
},
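The unique index on the hash also gives a natural write path for deduplication. A sketch (not from the thread): `:chunk` stands for the block's bytea parameter, and it assumes the BEFORE trigger from the original post populates `hs256` before the uniqueness check runs.

```sql
-- Insert the block only if no row with the same SHA-256 already
-- exists; either way, return the id of the row storing this content.
WITH ins AS (
    INSERT INTO block (id, block)
    VALUES (gen_random_uuid(), :chunk)
    ON CONFLICT (hs256) DO NOTHING
    RETURNING id
)
SELECT id FROM ins
UNION ALL
SELECT id FROM block WHERE hs256 = digest(:chunk, 'sha256')
LIMIT 1;
```

The UNION ALL branch is needed because a row skipped by DO NOTHING returns nothing from the CTE; under heavy concurrency a retry loop around this statement is still advisable.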
{
"msg_contents": "On Mon, Jun 19, 2023 at 1:05 PM Les <nagylzs@gmail.com> wrote:\n\n> AFAIK PostgreSQL does not allow a row to occupy multiple blocks.\n>\n\nYour plan is going to heavily involve out-of-band storage. Please read up\non it here:\n\nhttps://www.postgresql.org/docs/current/storage-toast.html\n\nI'm not looking for a definitive answer, just trying to get some hints from\n> more experienced users before I fill up the drives with terabytes of data.\n>\n>\nStore a hash (and other metadata, like the hashing algorithm) as well as\nthe path to some system better designed for object storage and retrieval\ninstead.\n\nDavid J.",
"msg_date": "Mon, 19 Jun 2023 13:30:32 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index on (fixed size) bytea value"
},
{
"msg_contents": "David G. Johnston <david.g.johnston@gmail.com> wrote (on 19 Jun 2023, Mon, 22:30):\n\n> On Mon, Jun 19, 2023 at 1:05 PM Les <nagylzs@gmail.com> wrote:\n>\n>> AFAIK PostgreSQL does not allow a row to occupy multiple blocks.\n>>\n>\n> Your plan is going to heavily involve out-of-band storage. Please read up\n> on it here:\n>\n> https://www.postgresql.org/docs/current/storage-toast.html\n>\nI'm aware of the TOAST, and how it works. I was referring to it (\"I think\nthat it should be as large as possible, without hitting the toast. \") I\nhave designed a separate \"block\" table specifically to avoid storing binary\ndata in the TOAST. So my plan is not going to involve out-of-band storage.\n\nJust to make this very clear: a record in the block table would store a\nblock, not the whole file. My question is about finding the optimal block\nsize (without hitting the toast), and finding the optimal hash algorithm\nfor block de-duplication.\n\nUnless I totally misunderstood how the TOAST works. (?)\n\n Laszlo",
"msg_date": "Tue, 20 Jun 2023 08:13:07 +0200",
"msg_from": "Les <nagylzs@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Index on (fixed size) bytea value"
},
{
"msg_contents": "On Tue, 2023-06-20 at 08:13 +0200, Les wrote:\n> I'm aware of the TOAST, and how it works. I was referring to it (\"I think that it should\n> be as large as possible, without hitting the toast. \") I have designed a separate \"block\"\n> table specifically to avoid storing binary data in the TOAST. So my plan is not going to\n> involve out-of-band storage.\n> \n> Just to make this very clear: a record in the block table would store a block, not the\n> whole file. My question is to finding the optimal block size (without hitting the toast),\n> and finding the optimal hash algorithm for block de-duplication.\n\nThen you would ALTER the column and SET STORAGE MAIN, so that it does not ever use TOAST.\n\nThe size limit for a row would then be 8kB minus page header minus row header, which\nshould be somewhere in the vicinity of 8140 bytes.\n\nIf you want your block size to be a power of two, the limit would be 4kB, which would waste\nalmost half your storage space.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Tue, 20 Jun 2023 08:50:58 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Index on (fixed size) bytea value"
},
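Laurenz's suggestion translates to the sketch below. Note that STORAGE MAIN still permits in-page compression and will fall back to out-of-line TOAST only if the row cannot fit in a page at all; the size check afterwards is an assumed sanity test, not part of the thread.

```sql
-- Keep the bytea column in the main heap, avoiding TOAST for rows
-- that fit in one ~8 kB page:
ALTER TABLE block ALTER COLUMN block SET STORAGE MAIN;

-- After loading data, the table's toast relation should stay empty:
SELECT pg_size_pretty(pg_relation_size(reltoastrelid)) AS toast_size
FROM pg_class
WHERE relname = 'block' AND reltoastrelid <> 0;
```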
{
"msg_contents": ">\n>\n> Then you would ALTER the column and SET STORAGE MAIN, so that it does not\n> ever use TOAST.\n>\n> The size limit for a row would then be 8kB minus page header minus row\n> header, which\n> should be somewhere in the vicinity of 8140 bytes.\n>\n> If you want your block size to be a power of two, the limit would be 4kB,\n> which would waste\n> almost half your storage space.\n>\n\nOh I see. So if I want to save some space for future columns, then storing\nabout 7500 bytes in the \"block bytea\" column would be close to optimal,\nutilizing more than 90% of the block space. I guess that the fillfactor\nsetting will have no effect on this table, and it does not matter if I set\nit or not.",
"msg_date": "Tue, 20 Jun 2023 09:18:31 +0200",
"msg_from": "Les <nagylzs@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Index on (fixed size) bytea value"
}
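Whether ~7500-byte rows really pack one per page can be verified empirically. A sketch using the pgstattuple contrib extension (assumed to be installable on the instance):

```sql
CREATE EXTENSION IF NOT EXISTS pgstattuple;

-- tuple_percent above ~90% and roughly one live tuple per 8 kB page
-- would confirm the intended layout of the block table:
SELECT tuple_count,
       pg_relation_size('block') / 8192 AS pages,
       tuple_percent,
       free_percent
FROM pgstattuple('block');
```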
] |
[
{
"msg_contents": "Hi,\n\n------\nPostgres version\n------\npostgres=# SELECT version();\n version\n\n-----------------------------------------------------------------------------------------------------------------------------------\n PostgreSQL 15.3 (Debian 15.3-1.pgdg110+1) on aarch64-unknown-linux-gnu,\ncompiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit\n(1 row)\n------\n\n------\nLoad data\n------\nChinook database\nhttps://github.com/lerocha/chinook-database/blob/master/ChinookDatabase/DataSources/Chinook_PostgreSql.sql\n------\n\n------\nInsert dummy data into Track to bring rows count to 10 million\n------\nINSERT INTO \"Track\"(\"TrackId\", \"Name\", \"AlbumId\", \"MediaTypeId\", \"GenreId\",\n\"Milliseconds\", \"Bytes\", \"UnitPrice\")\nSELECT i::int, i::text, 1, 1, 1, 276349, 9056902, 0.99\nFROM generate_series(3504, 10000000) AS t(i);\n------\n\n------\nSetup role and policies\n------\ncreate role \"User\";\ngrant select on \"Album\" to \"User\";\nCREATE POLICY artist_rls_policy ON \"Album\" FOR SELECT TO public USING\n(\"ArtistId\"=((current_setting('rls.artistID'))::integer));\nALTER TABLE \"Album\" ENABLE ROW LEVEL SECURITY;\ngrant select on \"Track\" to \"User\";\nCREATE POLICY album_rls_policy ON \"Track\" FOR SELECT to public\nUSING (\n EXISTS (\n select 1 from \"Album\" where \"Track\".\"AlbumId\" = \"Album\".\"AlbumId\"\n )\n);\nALTER TABLE \"Track\" ENABLE ROW LEVEL SECURITY;\n------\n\n------\nQuery and verify the policies through psql\n------\nset role \"User\";\nset rls.artistID = '116';\nselect * from \"Track\";\n------\n\n------\nQuery plan for postgres\n------\npostgres=> explain analyze select * from \"Track\";\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on \"Track\" (cost=0.00..34589179.11 rows=2110303 width=58)\n(actual time=68.097..350.074 rows=14 loops=1)\n Filter: (hashed SubPlan 2)\n Rows Removed by Filter: 
4220538\n SubPlan 2\n -> Index Scan using \"IFK_AlbumArtistId\" on \"Album\" (cost=0.15..8.17\nrows=1 width=4) (actual time=0.017..0.018 rows=1 loops=1)\n Index Cond: (\"ArtistId\" =\n(current_setting('rls.artistID'::text))::integer)\n Planning Time: 0.091 ms\n JIT:\n Functions: 17\n Options: Inlining true, Optimization true, Expressions true, Deforming\ntrue\n Timing: Generation 1.008 ms, Inlining 11.450 ms, Optimization 33.233 ms,\nEmission 22.443 ms, Total 68.135 ms\n Execution Time: 350.922 ms\n(12 rows)\n------\n\n------\nDisabled ROW LEVEL SECURITY and get appropriate tracks\n------\n\n\nQUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=7657.40..7657.41 rows=1 width=32) (actual\ntime=0.070..0.071 rows=1 loops=1)\n -> Nested Loop Left Join (cost=7650.01..7657.38 rows=1 width=55)\n(actual time=0.061..0.068 rows=1 loops=1)\n -> Seq Scan on \"Album\" (cost=0.00..7.34 rows=1 width=27) (actual\ntime=0.020..0.026 rows=1 loops=1)\n Filter: (\"ArtistId\" = 116)\n Rows Removed by Filter: 346\n -> Aggregate (cost=7650.01..7650.02 rows=1 width=32) (actual\ntime=0.040..0.040 rows=1 loops=1)\n -> Nested Loop (cost=0.43..6107.07 rows=102863 width=11)\n(actual time=0.016..0.026 rows=14 loops=1)\n -> Seq Scan on \"Album\" \"__be_0_Album\"\n (cost=0.00..8.21 rows=1 width=4) (actual time=0.008..0.015 rows=1 loops=1)\n Filter: ((\"AlbumId\" = \"Album\".\"AlbumId\") AND\n(\"ArtistId\" = 116))\n Rows Removed by Filter: 346\n -> Index Scan using \"IFK_TrackAlbumId\" on \"Track\"\n (cost=0.43..5070.23 rows=102863 width=15) (actual time=0.008..0.009\nrows=14 loops=1)\n Index Cond: (\"AlbumId\" = \"Album\".\"AlbumId\")\n SubPlan 2\n -> Result (cost=0.00..0.01 rows=1 width=32) (actual\ntime=0.000..0.000 rows=1 loops=14)\n SubPlan 1\n -> Result (cost=0.00..0.01 rows=1 width=32) (actual\ntime=0.000..0.000 rows=1 loops=1)\n Planning Time: 0.182 ms\n 
Execution Time: 0.094 ms\n(18 rows)\n------\n\nWhy did Postgres choose to do a sequential\nscan on Track when RLS is\nenabled?\n\nRegards,\nAkash Anand",
"msg_date": "Mon, 10 Jul 2023 11:33:47 +0530",
"msg_from": "Akash Anand <akash@hasura.io>",
"msg_from_op": true,
"msg_subject": "Why is query performance on RLS enabled Postgres worse?"
},
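There is no dedicated "policy" node in EXPLAIN: the RLS qual shows up as an ordinary Filter (here, the `hashed SubPlan 2` on Track, which is the policy's EXISTS clause and cannot be turned into an index condition). The policies themselves can be inspected from the catalog; a sketch, using the table names from this thread:

```sql
-- pg_policies shows the USING / WITH CHECK expressions the planner
-- injects into every query a non-owner runs on the table:
SELECT tablename, policyname, cmd, qual
FROM pg_policies
WHERE tablename IN ('Album', 'Track');
```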
{
"msg_contents": "Hi,\n\nIs there a way to visualize RLS policy check(s) in the query plan?\n\nRegards,\nAkash Anand\n\nOn Mon, Jul 10, 2023 at 11:33 AM Akash Anand <akash@hasura.io> wrote:\n\n> Hi,\n>\n> ------\n> Postgres version\n> ------\n> postgres=# SELECT version();\n> version\n>\n>\n> -----------------------------------------------------------------------------------------------------------------------------------\n> PostgreSQL 15.3 (Debian 15.3-1.pgdg110+1) on aarch64-unknown-linux-gnu,\n> compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit\n> (1 row)\n> ------\n>\n> ------\n> Load data\n> ------\n> Chinook database\n>\n> https://github.com/lerocha/chinook-database/blob/master/ChinookDatabase/DataSources/Chinook_PostgreSql.sql\n> ------\n>\n> ------\n> Insert dummy data into Track to bring rows count to 10 million\n> ------\n> INSERT INTO \"Track\"(\"TrackId\", \"Name\", \"AlbumId\", \"MediaTypeId\",\n> \"GenreId\", \"Milliseconds\", \"Bytes\", \"UnitPrice\")\n> SELECT i::int, i::text, 1, 1, 1, 276349, 9056902, 0.99\n> FROM generate_series(3504, 10000000) AS t(i);\n> ------\n>\n> ------\n> Setup role and policies\n> ------\n> create role \"User\";\n> grant select on \"Album\" to \"User\";\n> CREATE POLICY artist_rls_policy ON \"Album\" FOR SELECT TO public USING\n> (\"ArtistId\"=((current_setting('rls.artistID'))::integer));\n> ALTER TABLE \"Album\" ENABLE ROW LEVEL SECURITY;\n> grant select on \"Track\" to \"User\";\n> CREATE POLICY album_rls_policy ON \"Track\" FOR SELECT to public\n> USING (\n> EXISTS (\n> select 1 from \"Album\" where \"Track\".\"AlbumId\" = \"Album\".\"AlbumId\"\n> )\n> );\n> ALTER TABLE \"Track\" ENABLE ROW LEVEL SECURITY;\n> ------\n>\n> ------\n> Query and verify the policies through psql\n> ------\n> set role \"User\";\n> set rls.artistID = '116';\n> select * from \"Track\";\n> ------\n>\n> ------\n> Query plan for postgres\n> ------\n> postgres=> explain analyze select * from \"Track\";\n> QUERY PLAN\n>\n>\n> 
-------------------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on \"Track\" (cost=0.00..34589179.11 rows=2110303 width=58)\n> (actual time=68.097..350.074 rows=14 loops=1)\n> Filter: (hashed SubPlan 2)\n> Rows Removed by Filter: 4220538\n> SubPlan 2\n> -> Index Scan using \"IFK_AlbumArtistId\" on \"Album\" (cost=0.15..8.17\n> rows=1 width=4) (actual time=0.017..0.018 rows=1 loops=1)\n> Index Cond: (\"ArtistId\" =\n> (current_setting('rls.artistID'::text))::integer)\n> Planning Time: 0.091 ms\n> JIT:\n> Functions: 17\n> Options: Inlining true, Optimization true, Expressions true, Deforming\n> true\n> Timing: Generation 1.008 ms, Inlining 11.450 ms, Optimization 33.233\n> ms, Emission 22.443 ms, Total 68.135 ms\n> Execution Time: 350.922 ms\n> (12 rows)\n> ------\n>\n> ------\n> Disabled ROW LEVEL SECURITY and get appropriate tracks\n> ------\n>\n>\n> QUERY PLAN\n>\n>\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=7657.40..7657.41 rows=1 width=32) (actual\n> time=0.070..0.071 rows=1 loops=1)\n> -> Nested Loop Left Join (cost=7650.01..7657.38 rows=1 width=55)\n> (actual time=0.061..0.068 rows=1 loops=1)\n> -> Seq Scan on \"Album\" (cost=0.00..7.34 rows=1 width=27)\n> (actual time=0.020..0.026 rows=1 loops=1)\n> Filter: (\"ArtistId\" = 116)\n> Rows Removed by Filter: 346\n> -> Aggregate (cost=7650.01..7650.02 rows=1 width=32) (actual\n> time=0.040..0.040 rows=1 loops=1)\n> -> Nested Loop (cost=0.43..6107.07 rows=102863 width=11)\n> (actual time=0.016..0.026 rows=14 loops=1)\n> -> Seq Scan on \"Album\" \"__be_0_Album\"\n> (cost=0.00..8.21 rows=1 width=4) (actual time=0.008..0.015 rows=1 loops=1)\n> Filter: ((\"AlbumId\" = \"Album\".\"AlbumId\") AND\n> (\"ArtistId\" = 116))\n> Rows Removed by Filter: 346\n> -> Index Scan using \"IFK_TrackAlbumId\" on 
\"Track\"\n> (cost=0.43..5070.23 rows=102863 width=15) (actual time=0.008..0.009\n> rows=14 loops=1)\n> Index Cond: (\"AlbumId\" = \"Album\".\"AlbumId\")\n> SubPlan 2\n> -> Result (cost=0.00..0.01 rows=1 width=32) (actual\n> time=0.000..0.000 rows=1 loops=14)\n> SubPlan 1\n> -> Result (cost=0.00..0.01 rows=1 width=32) (actual\n> time=0.000..0.000 rows=1 loops=1)\n> Planning Time: 0.182 ms\n> Execution Time: 0.094 ms\n> (18 rows)\n> ------\n>\n> Why did Postgres choose to do a sequential scan on Track when RLS is\n> enabled?\n>\n> Regards,\n> Akash Anand\n>\n>",
"msg_date": "Mon, 10 Jul 2023 12:31:00 +0530",
"msg_from": "Akash Anand <akash@hasura.io>",
"msg_from_op": true,
"msg_subject": "Re: Why is query performance on RLS enabled Postgres worse?"
}
]