[ { "msg_contents": "Andrew Martin wrote:\n> Just run the regression tests on 6.1 and as I suspected the array bug\n> is still there. The regression test passes because the expected output\n> has been fixed to the *wrong* output.\n\nOK, I think I understand the current array behavior, which is apparently\ndifferent than the behavior for v1.0x.\n\nPostgres v6.1 allows one to specify a dimensionality for an array object\nwhen declaring that object/column. However, that specification is not\nused when decoding a field. Instead, the dimensionality is deduced from\nthe input string itself. The dimensionality is stored with each field,\nand is used to encode the array on output. So, one is currently allowed\nto mix array dimensions within a column, but Postgres seems to keep that\nall straight for input and output.\n\nIs this the behavior that we want? Just because it is different from\nprevious behavior doesn't mean that it is undesirable. However, when\nmixing dimensionality within the same column it must be more difficult\nto figure out how to do comparison and other operations on arrays.\n\nIf we are to enforce dimensionality within columns, then I need to\nfigure out how to get that information from the table declaration when\ndecoding fields. Bruce, do you know where to look for that kind of code?\nAnyone have an idea on how much this code has changed over the last\nyear??\n\n\t\t\t- Tom\n\n\n--ELM913966242-1523-0_\n\n--ELM913966242-1523-0_--\n", "msg_date": "Tue, 24 Jun 1997 15:16:32 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Array bug is still there...." } ]
[ { "msg_contents": "\nignore\n\n", "msg_date": "Sun, 4 Jan 1998 00:47:02 -0500 (EST)", "msg_from": "\"Marc G. Fournier\" <scrappy>", "msg_from_op": true, "msg_subject": "setting up mhonarc one list at a time" } ]
[ { "msg_contents": "\nI've just created a *very* simple script that creates a snapshot\nbased on the current source tree. Nothing at all fancy, its just\nmeant to give those without CVSup access to something to test and\nwork with.\n\nIt will get regenerated every Friday/Saturday night via cron\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 4 Jan 1998 03:26:34 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "New Snapshot(s)" } ]
[ { "msg_contents": "\nOn Sat, 3 Jan 1998, Bruce Momjian wrote:\n\n> Sounds maybe a little too serious. We currently use WARN a lot to\n> indicate errors in the supplied SQL statement. Perhaps we need to make\n> the parser elog's ERROR, and the non-parser WARN's ABORT? Is that good?\n> When can I make the change? I don't want to mess up people's current work.\n\nThis shouldn't affect JDBC. The only thing that would break things, is if\nthe notification sent by the \"show datestyle\" statement is changed.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n\n", "msg_date": "Sun, 4 Jan 1998 12:16:08 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Error messages/logging (Was: Re: [HACKERS] Re: [COMMITTERS]\n\t'pgsql/src/backend/parser gram.y parse_oper.c')" }, { "msg_contents": "Hello,\n\nfor thanks for the great stuff you wrote.\n\nI remember some time ago talk about error numbers in PostgreSQL. Are \nthey implemented? And standard SQL error codes (5 character long \nstrings). As PostgreSQL is moving towards ANSI SQL rather fast, it \nwould be nice to see standard error codes and messages as well.\n\n> > > > I just think the WARN word coming up on users terminals is odd. I can\n> > > > make the change in all the source files easily if we decide what the new\n> > > > error word should be. Error? Failure?\n> > > >\n> > > \n> > > Yes, that's one of the things I don't understand with PostgreSQL.\n> > > ERROR would be much better.\n> > \n> > How about ABORT ?\n> \n> Sounds maybe a little too serious. We currently use WARN a lot to\n> indicate errors in the supplied SQL statement. Perhaps we need to make\n> the parser elog's ERROR, and the non-parser WARN's ABORT? Is that good?\n> When can I make the change? I don't want to mess up people's current work.\n\nMe too :-)\n\nCiao\n\nCiao\n\nDas Boersenspielteam.\n\n---------------------------------------------------------------------------\n http://www.boersenspiel.de\n \t Das Boersenspiel im Internet\n *Realitaetsnah* *Kostenlos* *Ueber 6000 Spieler*\n---------------------------------------------------------------------------\n", "msg_date": "Sun, 4 Jan 1998 14:04:44 +0000", "msg_from": "\"Boersenspielteam\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Error messages/logging (Was: Re: [HACKERS] Re: [COMMITTERS] " }, { "msg_contents": "> \n> Mattias Kregert wrote:\n> > \n> > Bruce Momjian wrote:\n> > >\n> > \n> > > I just think the WARN word coming up on users terminals is odd. I can\n> > > make the change in all the source files easily if we decide what the new\n> > > error word should be. Error? Failure?\n> > >\n> > \n> > Yes, that's one of the things I don't understand with PostgreSQL.\n> > ERROR would be much better.\n> \n> How about ABORT ?\n\nSo I assume no one has pending patches where this change would cause a\nproblem. 
So I will go ahead.\n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Sun, 4 Jan 1998 21:22:50 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Error messages/logging (Was: Re: [HACKERS] Re: [COMMITTERS]\n\t'pgsql/src/backend/parser gram.y parse_oper.c')" }, { "msg_contents": "> ABORT means that transaction is ABORTed.\n> Will ERROR mean something else ?\n> Why should we use two different flag-words for the same thing ?\n> Note, that I don't object against using ERROR, but against using two words.\n\nI wanted two words to distinguish between user errors like a mis-spelled\nfield name, and internal errors like btree failure messages.\n\nMake sense?\n\nI made all the error messages coming from the parser as ERROR, and\nnon-parser messages as ABORT. I think I will need to fine-tune the\nmessages because I am sure I missed some messages that should be ERROR\nbut are ABORT. For example, utils/adt messages about improper data\nformats, is that an ERROR or an ABORT?\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Sun, 4 Jan 1998 22:25:03 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Error messages/logging (Was: Re: [HACKERS] Re: [COMMITTERS]\n\t'pgsql/src/backend/parser gram.y parse_oper.c')" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> >\n> > Mattias Kregert wrote:\n> > >\n> > > Bruce Momjian wrote:\n> > > >\n> > >\n> > > > I just think the WARN word coming up on users terminals is odd. I can\n> > > > make the change in all the source files easily if we decide what the new\n> > > > error word should be. Error? Failure?\n> > > >\n> > >\n> > > Yes, that's one of the things I don't understand with PostgreSQL.\n> > > ERROR would be much better.\n> >\n> > How about ABORT ?\n> \n> Sounds maybe a little too serious. We currently use WARN a lot to\n> indicate errors in the supplied SQL statement. Perhaps we need to make\n> the parser elog's ERROR, and the non-parser WARN's ABORT? Is that good?\n> When can I make the change? I don't want to mess up people's current work.\n\nABORT means that transaction is ABORTed.\nWill ERROR mean something else ?\nWhy should we use two different flag-words for the same thing ?\nNote, that I don't object against using ERROR, but against using two words.\n\nVadim\n", "msg_date": "Mon, 05 Jan 1998 10:35:10 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Error messages/logging (Was: Re: [HACKERS] Re: [COMMITTERS]\n\t'pgsql/src/backend/parser gram.y parse_oper.c')" }, { "msg_contents": "> \n> Bruce Momjian wrote:\n> > \n> > > ABORT means that transaction is ABORTed.\n> > > Will ERROR mean something else ?\n> > > Why should we use two different flag-words for the same thing ?\n> > > Note, that I don't object against using ERROR, but against using two words.\n> > \n> > I wanted two words to distinguish between user errors like a mis-spelled\n> > field name, and internal errors like btree failure messages.\n> > \n> > Make sense?\n> \n> No, for me. 
Do Informix, Oracle, etc use two words ?\n> What benefit of special \"in-parser-error\" word for user - in any case\n> user will read error message itself to understand what caused error.\n\nOK, if no one likes my idea in the next day, I will make them all ERROR.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Sun, 4 Jan 1998 23:43:21 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Error messages/logging (Was: Re: [HACKERS] Re: [COMMITTERS]\n\t'pgsql/src/backend/parser gram.y parse_oper.c')" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > ABORT means that transaction is ABORTed.\n> > Will ERROR mean something else ?\n> > Why should we use two different flag-words for the same thing ?\n> > Note, that I don't object against using ERROR, but against using two words.\n> \n> I wanted two words to distinguish between user errors like a mis-spelled\n> field name, and internal errors like btree failure messages.\n> \n> Make sense?\n\nNo, for me. Do Informix, Oracle, etc use two words ?\nWhat benefit of special \"in-parser-error\" word for user - in any case\nuser will read error message itself to understand what caused error.\n\n> \n> I made all the error messages coming from the parser as ERROR, and\n> non-parser messages as ABORT. I think I will need to fine-tune the\n> messages because I am sure I missed some messages that should be ERROR\n> but are ABORT. For example, utils/adt messages about improper data\n> formats, is that an ERROR or an ABORT?\n\nGood question :)\n\nFollowing your way\n\ninsert into X (an_int2_field) values (9999999999);\n\nshould cause ERROR message, but\n\ninsert into X (an_int2_field) select an_int4_field from Y;\n\nshould return ABORT message if value of some an_int4_field in Y is\ngreater than 32768.\n\nVadim\n", "msg_date": "Mon, 05 Jan 1998 11:51:30 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Error messages/logging (Was: Re: [HACKERS] Re: [COMMITTERS]\n\t'pgsql/src/backend/parser gram.y parse_oper.c')" }, { "msg_contents": "> > > I wanted two words to distinguish between user errors like a mis-spelled\n> > > field name, and internal errors like btree failure messages.\n> > >\n> > > Make sense?\n> >\n> > No, for me. Do Informix, Oracle, etc use two words ?\n> > What benefit of special \"in-parser-error\" word for user - in any case\n> > user will read error message itself to understand what caused error.\n>\n> OK, if no one likes my idea in the next day, I will make them all ERROR.\n\nWell, _I_ like your idea. Seems like we can distinguish between operator error\n(which the operator can fix) and internal problems, and we could flag them\ndifferently. Perhaps there are so many grey areas that this becomes difficult to\ndo??\n\n - Tom\n\n", "msg_date": "Mon, 05 Jan 1998 07:12:11 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Error messages/logging (Was: Re: [HACKERS] Re: [COMMITTERS]\n\t'pgsql/src/backend/parser gram.y parse_oper.c')" }, { "msg_contents": "Thomas G. Lockhart wrote:\n> \n> > > > I wanted two words to distinguish between user errors like a mis-spelled\n> > > > field name, and internal errors like btree failure messages.\n> > > >\n> > > > Make sense?\n> > >\n> > > No, for me. 
Do Informix, Oracle, etc use two words ?\n> > > What benefit of special \"in-parser-error\" word for user - in any case\n> > > user will read error message itself to understand what caused error.\n> >\n> > OK, if no one likes my idea in the next day, I will make them all ERROR.\n> \n> Well, _I_ like your idea. Seems like we can distinguish between operator error\n> (which the operator can fix) and internal problems, and we could flag them\n> differently. Perhaps there are so many grey areas that this becomes difficult to\n> do??\n\nAll adt/*.c are \"grey areas\":\n\ninsert into X (an_int2_field) values (9999999999);\n\nshould cause ERROR message, but\n\ninsert into X (an_int2_field) select an_int4_field from Y;\n\nshould return ABORT message if value of some an_int4_field in Y is\ngreater than 32768.\n\nVadim\n", "msg_date": "Mon, 05 Jan 1998 14:48:59 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Error messages/logging (Was: Re: [HACKERS] Re: [COMMITTERS]\n\t'pgsql/src/backend/parser gram.y parse_oper.c')" }, { "msg_contents": "> > I made all the error messages coming from the parser as ERROR, and\n> > non-parser messages as ABORT. I think I will need to fine-tune the\n> > messages because I am sure I missed some messages that should be ERROR\n> > but are ABORT. For example, utils/adt messages about improper data\n> > formats, is that an ERROR or an ABORT?\n> \n> Good question :)\n> \n> Following your way\n> \n> insert into X (an_int2_field) values (9999999999);\n> \n> should cause ERROR message, but\n> \n> insert into X (an_int2_field) select an_int4_field from Y;\n\nThis generates an ERROR, because the parser catches the type mismatch.\n\nIt looks like the changes are broken up pretty much among directories. \nutils/adt and catalog/ and commands/ are all pretty much ERROR.\n\n> \n> should return ABORT message if value of some an_int4_field in Y is\n> greater than 32768.\n> \n> Vadim\n> \n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Mon, 5 Jan 1998 11:10:37 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Error messages/logging (Was: Re: [HACKERS] Re: [COMMITTERS]\n\t'pgsql/src/backend/parser gram.y parse_oper.c')" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > > I made all the error messages coming from the parser as ERROR, and\n> > > non-parser messages as ABORT. I think I will need to fine-tune the\n> > > messages because I am sure I missed some messages that should be ERROR\n> > > but are ABORT. For example, utils/adt messages about improper data\n> > > formats, is that an ERROR or an ABORT?\n> >\n> > Good question :)\n> >\n> > Following your way\n> >\n> > insert into X (an_int2_field) values (9999999999);\n> >\n> > should cause ERROR message, but\n> >\n> > insert into X (an_int2_field) select an_int4_field from Y;\n> \n> This generates an ERROR, because the parser catches the type mismatch.\n\nHm - this is just example, I could use casting here...\n\nVadim\n", "msg_date": "Tue, 06 Jan 1998 00:12:38 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Error messages/logging (Was: Re: [HACKERS] Re: [COMMITTERS]\n\t'pgsql/src/backend/parser gram.y parse_oper.c')" }, { "msg_contents": "> > This generates an ERROR, because the parser catches the type mismatch.\n> \n> Hm - this is just example, I could use casting here...\n\nAh, you got me here. If you cast int2(), you would get a different\nmessage. 
You are right.\n\nI changes parser/, commands/, utils/adt/, and several of the /tcop\nfiles. Should take care of most of them. Any errors coming out of the\noptimizer or executor, or cache code should be marked as serious. Let's\nsee if it helps. I can easily make them all the same.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Mon, 5 Jan 1998 12:27:53 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Error messages/logging (Was: Re: [HACKERS] Re: [COMMITTERS]\n\t'pgsql/src/backend/parser gram.y parse_oper.c')" } ]
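A short SQL sketch of the split debated in the thread above, reusing Vadim's int2 example (table and column names are his illustrations, not a real schema). Under Bruce's proposal, failures detected while the statement is being parsed come back as ERROR, while failures that only surface inside the executor or adt code come back as ABORT.

    -- mis-spelled column name: caught by the parser -> ERROR
    SELECT no_such_column FROM x;
    -- out-of-range literal: the constant is decoded while the statement is processed -> ERROR
    INSERT INTO x (an_int2_field) VALUES (9999999999);
    -- the same overflow reached only at execution time, row by row -> ABORT
    INSERT INTO x (an_int2_field) SELECT an_int4_field FROM y;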
[ { "msg_contents": "\nOn Sat, 3 Jan 1998, Bruce Momjian wrote:\n\n> OK, I am CC'ing Peter. I know he suspected a protocol issue, and\n> perhaps this will help him.\n\nPS: Its best to send postgresql stuff to my home addresses, rather than\nthe work one. Also, I'm actually on leave this week, so stuff sent to\nmaidstone.gov.uk won't be read until the 12th.\n\n> > The output produced looks like the lo_import() call worked (a id number is\n> > returned), but the second query runs into problenms. For example:\n> > \n> > Here is some output:\n> > \n> > IMPORTING FILE: /tmp/10-225341.jpg\n> > \n> > File Id is 18209Executing Query : INSERT INTO foo VALUES ('Testing', 18209)\n> > \n> > Database Error: unknown protocol character 'V' read from backend. (The protocol\n> > character is the first character the backend sends in\n> > response to a query it receives). \n\nThis was the first bug that I found (and the reason I first suspected the\nprotocol). I have a possible fix for this one (which I applied to the jdbc\ndriver), but another problem then rears its head else where.\n\nPS: this problem is infact in libpq.\n\n> > this is a CGI program, and the http user has all access to table \"foo\" in the\n> > database. The postgres.log file also spits out this:\n> > NOTICE:LockRelease: locktable lookup failed, no lock\n\nYep, I get this as well. It's on my list of things to find.\n\n> > -------------------------------------------------------------\n> > \n> > \n> > Bruce:\n> > \n> > Do you see a problem with the code that I missed, or are the large objects\n> > interface functions broken in 6.2.1?\n\nThey are broken.\n\nWhen testing, I find that as long as the large object is < 1k in size, it\nworks once, then will never work again. This obviously is useless.\n\nI'm spending the whole of next week hitting this with a vengence (doing\nsome preliminary work this afternoon). I'll keep everyone posted.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n\n", "msg_date": "Sun, 4 Jan 1998 12:39:39 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] tuple is too big?" } ]
[ { "msg_contents": "> \n> On Sat, 3 Jan 1998, Bruce Momjian wrote:\n> \n> > I believe it is 8k to match the base size for the filesystem block size,\n> > for performance.\n> \n> \tHrmmm...how does one find out what the file system block size is? I know\n> there is an option to newfs for changing this, and at least under FreeBSD, the\n> default is set to:\n> \n> sys/param.h:#define DFLTBSIZE 4096\n> \n> \tSo maybe a multiple of block size should be considered more appropriate?\n> Maybe have it so that you can stipulate # of blocks that equal max tuple size?\n> Then, if I wanted, I could format a drive with a block size of 16k that is only\n> going to be used for databases, and have a tuple size up to that level?\n> \n\nYes, you certainly could do that. The comment says:\n\t\n\t/*\n\t * the maximum size of a disk block for any possible installation.\n\t *\n\t * in theory this could be anything, but in practice this is actually\n\t * limited to 2^13 bytes because we have limited ItemIdData.lp_off and\n\t * ItemIdData.lp_len to 13 bits (see itemid.h).\n\t */\n\t#define MAXBLCKSZ 8192\n\nYou can now specify the actual file system block size at the time of\nnewfs. We actually could query the file system at time of compile, but\nthat would be strange becuase the database location is set at time of\npostmaster startup, and I don't think we can make this a run-time\nparameter, but I may be wrong.\n\nOf course, you have to change the structures the mention.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Sun, 4 Jan 1998 10:50:53 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] include/config.h FOLLOWUP" }, { "msg_contents": "On Sun, 4 Jan 1998, Bruce Momjian wrote:\n\n> Yes, you certainly could do that. The comment says:\n> \t\n> \t/*\n> \t * the maximum size of a disk block for any possible installation.\n> \t *\n> \t * in theory this could be anything, but in practice this is actually\n> \t * limited to 2^13 bytes because we have limited ItemIdData.lp_off and\n> \t * ItemIdData.lp_len to 13 bits (see itemid.h).\n> \t */\n> \t#define MAXBLCKSZ 8192\n> \n> You can now specify the actual file system block size at the time of\n> newfs. We actually could query the file system at time of compile, but\n> that would be strange becuase the database location is set at time of\n> postmaster startup, and I don't think we can make this a run-time\n> parameter, but I may be wrong.\n\n\tNo, don't make it a run-time or auto-detect thing, just a compile time\noption. By default, leave it at 8192, since \"that's the way its always been\"...\nbut if we are justifying it based on disk block size, its 2x the disk block \nsize that my system is setup for. What's the difference between that and making\nit 3x or 4x? Or, hell, would I get a performance increase if I brought it\ndown to 4096, which is what my actually disk block size is?\n\n\tSo, what we would really be doing is setting the default to 8192, but give\nthe installer the opportunity (with a caveat that this value should be a multiple\nof default file system block size for optimal performance) to increase it as they\nsee fit.\n\nMarc G. 
Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 4 Jan 1998 14:18:53 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] include/config.h FOLLOWUP" }, { "msg_contents": "> \tNo, don't make it a run-time or auto-detect thing, just a compile time\n> option. By default, leave it at 8192, since \"that's the way its always been\"...\n> but if we are justifying it based on disk block size, its 2x the disk block \n> size that my system is setup for. What's the difference between that and making\n> it 3x or 4x? Or, hell, would I get a performance increase if I brought it\n> down to 4096, which is what my actually disk block size is?\n> \n> \tSo, what we would really be doing is setting the default to 8192, but give\n> the installer the opportunity (with a caveat that this value should be a multiple\n> of default file system block size for optimal performance) to increase it as they\n> see fit.\n\nI assume you changed the default, becuase the BSD44 default is 8k\nblocks, with 1k fragments.\n\nI don't think there is any 'performance' improvement with making it\ngreater than the file system block size.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Sun, 4 Jan 1998 14:05:32 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] include/config.h FOLLOWUP" }, { "msg_contents": "On Sun, 4 Jan 1998, Bruce Momjian wrote:\n\n> > \tNo, don't make it a run-time or auto-detect thing, just a compile time\n> > option. By default, leave it at 8192, since \"that's the way its always been\"...\n> > but if we are justifying it based on disk block size, its 2x the disk block \n> > size that my system is setup for. What's the difference between that and making\n> > it 3x or 4x? Or, hell, would I get a performance increase if I brought it\n> > down to 4096, which is what my actually disk block size is?\n> > \n> > \tSo, what we would really be doing is setting the default to 8192, but give\n> > the installer the opportunity (with a caveat that this value should be a multiple\n> > of default file system block size for optimal performance) to increase it as they\n> > see fit.\n> \n> I assume you changed the default, becuase the BSD44 default is 8k\n> blocks, with 1k fragments.\n\n\tGood question, I don't know. What does BSDi have it set at? Linux? NetBSD?\n\n\tI just checked our sys/param.h file under Solaris 2.5.1, and it doesn't\nseem to define a DEFAULT, but a MAXSIZE of 8192...oops, newfs defines the default\nthere for 8192 also\n\n> I don't think there is any 'performance' improvement with making it\n> greater than the file system block size.\n\n\tNo no...you missed the point. If we are saying that max tuple size is 8k\nbecause of block size of the file system, under FreeBSD, the tuple size is 2x\nthe block size of the file system. So, if there a performance decrease because\nof that...on modern OSs, how much does that even matter anymore? The 8192 that\nwe have current set, that's probably still from the original Postgres4.2 system\nthat was written in which decade? :)\n\n\nMarc G. 
Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 4 Jan 1998 15:31:49 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] include/config.h FOLLOWUP" }, { "msg_contents": "> \n> On Sun, 4 Jan 1998, Bruce Momjian wrote:\n> \n> > > \tNo, don't make it a run-time or auto-detect thing, just a compile time\n> > > option. By default, leave it at 8192, since \"that's the way its always been\"...\n> > > but if we are justifying it based on disk block size, its 2x the disk block \n> > > size that my system is setup for. What's the difference between that and making\n> > > it 3x or 4x? Or, hell, would I get a performance increase if I brought it\n> > > down to 4096, which is what my actually disk block size is?\n> > > \n> > > \tSo, what we would really be doing is setting the default to 8192, but give\n> > > the installer the opportunity (with a caveat that this value should be a multiple\n> > > of default file system block size for optimal performance) to increase it as they\n> > > see fit.\n> > \n> > I assume you changed the default, becuase the BSD44 default is 8k\n> > blocks, with 1k fragments.\n> \n> \tGood question, I don't know. What does BSDi have it set at? Linux? NetBSD?\n> \n> \tI just checked our sys/param.h file under Solaris 2.5.1, and it doesn't\n> seem to define a DEFAULT, but a MAXSIZE of 8192...oops, newfs defines the default\n> there for 8192 also\n> \n> > I don't think there is any 'performance' improvement with making it\n> > greater than the file system block size.\n> \n> \tNo no...you missed the point. If we are saying that max tuple size is 8k\n> because of block size of the file system, under FreeBSD, the tuple size is 2x\n> the block size of the file system. So, if there a performance decrease because\n> of that...on modern OSs, how much does that even matter anymore? The 8192 that\n> we have current set, that's probably still from the original Postgres4.2 system\n> that was written in which decade? :)\n\nI see, we could increase it and it probably would not matter much.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Sun, 4 Jan 1998 14:39:30 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] include/config.h FOLLOWUP" } ]
[ { "msg_contents": "On Sun, 4 Jan 1998, Stan Brown wrote:\n\n> >\n> >\n> >I've just created a *very* simple script that creates a snapshot\n> >based on the current source tree. Nothing at all fancy, its just\n> >meant to give those without CVSup access to something to test and\n> >work with.\n> >\n> >It will get regenerated every Friday/Saturday night via cron\n> >\n> \n> \tAh. Mark, a bit more info please. Where will I find these snapshots.\n\n\tftp.postgresql.org/pub\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 4 Jan 1998 17:51:57 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] New Snapshot(s)" } ]
[ { "msg_contents": "\nCan someone comment on this? For an SQL server in Canada, should we be using\nEuropean format? *raised eyebrow* I'm currently using the default (US) format\n\n\n------------------------------------\n\n> \tmmddyy is proper ISO/SQL format for north america...we went through a major\n> discussion as to the differences, because European dates are *totally* different :(\n\nYes canada is supposed to be the same as europe being date/month/year\nas in small > large while americans go month day year which is no\norder at all...\n\n", "msg_date": "Sun, 4 Jan 1998 20:28:10 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "date format: Canada same as European or US?" }, { "msg_contents": "On Sun, 4 Jan 1998, The Hermit Hacker wrote:\n\n> \n> Can someone comment on this? For an SQL server in Canada, should we be using\n> European format? *raised eyebrow* I'm currently using the default (US) format\n\nYes - Canada follows european dates, spelling, and measurement system.\nI guess it's a matter of choice though... I think how the US spell\n\"colour\" is weird for instance.... and reading their dates is a pain.\n(also I prefer tea with a little bit of milk and sugar... haven't a clue\nwhere that's from :)\n\nSo how do I setup postgres for Canada? *curious?*\n\n> ------------------------------------\n> \n> > \tmmddyy is proper ISO/SQL format for north america...we went through a major\n> > discussion as to the differences, because European dates are *totally* different :(\n> \n> Yes canada is supposed to be the same as europe being date/month/year\n> as in small > large while americans go month day year which is no\n> order at all...\n> \n\nG'day, eh? :)\n\t- Teunis ... trying to catch up on ~500 backlogged messages\n\t\t\there...\n\n", "msg_date": "Thu, 15 Jan 1998 15:45:33 -0700 (MST)", "msg_from": "teunis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] date format: Canada same as European or US?" } ]
[ { "msg_contents": "> \n> So, is it lp_len or lp_offset that can be reduced by 2? I want to \n> experiment with this...\n> \n\nNeither...try grep'ing around for uses of lp_flags. I dug into this\nlast Dec/Jan...check the hackers digests from that time for any\nrelevent info. At that time, only two bits in lp_flags were in use.\nDon't know if any more are taken now or not.\n\nBoth lp_len and lp_offset should be the same, so if you take four bits\nfrom lp_flags (and give two apiece to lp_len & lp_offset), that would\nget you to a block size of 32k.\n\nNow that there're timely src snapshots available, I'm going to try to\nget back into coding (assuming the aix port still works. :)\n\ndarrenk\n", "msg_date": "Sun, 4 Jan 1998 20:45:30 -0500", "msg_from": "[email protected] (Darren King)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] config.h/Followup FOLLOWUP" } ]
[ { "msg_contents": "I have added NOT NULL and DEFAULT indications to \\d:\n\ntest=> \\d testz\n\nTable = testz\n+----------------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+----------------------------------+----------------------------------+-------+\n| x | int4 not null default '4' | 4 |\n+----------------------------------+----------------------------------+-------+\n\nSome people have asked for this on the questions list.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Sun, 4 Jan 1998 21:14:37 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "new \\d information" } ]
[ { "msg_contents": "Integration wrote:\n> \n> ps. why not allow for larger tuples in general? Do we take a speed hit?\n\nUsing large blocks is bad for performance: by increasing block size\nyou automatically decrease number of blocks in shared buffer pool -\nthis is bad for index scans and in multi-user environment!\nJust remember that Informix (and others) use 2K blocks.\n(Actually, I would like to have smaller blocks, but postgres lives\nover file system...)\n\nAs for having big tuples - someone said about multi-representation\nfeature of Illustra (automatically storing of big fields outside\nof tuple itself - in blobs, large objects, ...): looks very nice.\n\nVadim\n", "msg_date": "Mon, 05 Jan 1998 11:14:49 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] include/config.h FOLLOWUP" }, { "msg_contents": "On Mon, 5 Jan 1998, Vadim B. Mikheev wrote:\n\n> Just remember that Informix (and others) use 2K blocks.\n\n\tSo we're 4x what the commercial ones are as of right now? \n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 5 Jan 1998 00:21:08 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] include/config.h FOLLOWUP" }, { "msg_contents": "> \n> On Mon, 5 Jan 1998, Vadim B. Mikheev wrote:\n> \n> > Just remember that Informix (and others) use 2K blocks.\n> \n> \tSo we're 4x what the commercial ones are as of right now? \n\nThat is because they do not use the file system, so they try to match\nthe raw disk block sizes, while we try to match the file system size.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Sun, 4 Jan 1998 23:28:46 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] include/config.h FOLLOWUP" }, { "msg_contents": "On Sun, 4 Jan 1998, Bruce Momjian wrote:\n\n> > \n> > On Mon, 5 Jan 1998, Vadim B. Mikheev wrote:\n> > \n> > > Just remember that Informix (and others) use 2K blocks.\n> > \n> > \tSo we're 4x what the commercial ones are as of right now? \n> \n> That is because they do not use the file system, so they try to match\n> the raw disk block sizes, while we try to match the file system size.\n\n\tIrrelevant to my question...our tuples...are they 4x the size of the\ncommercial vendors, or is Vadim talking about something altogether different?\n\n\tIf we are 4x their size, then I think this whole discussion is a joke since\nwe are already *way* better then \"the others\"\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 5 Jan 1998 00:54:25 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] include/config.h FOLLOWUP" }, { "msg_contents": "> \n> On Sun, 4 Jan 1998, Bruce Momjian wrote:\n> \n> > > \n> > > On Mon, 5 Jan 1998, Vadim B. Mikheev wrote:\n> > > \n> > > > Just remember that Informix (and others) use 2K blocks.\n> > > \n> > > \tSo we're 4x what the commercial ones are as of right now? 
\n> > \n> > That is because they do not use the file system, so they try to match\n> > the raw disk block sizes, while we try to match the file system size.\n> \n> \tIrrelevant to my question...our tuples...are they 4x the size of the\n> commercial vendors, or is Vadim talking about something altogether different?\n> \n> \tIf we are 4x their size, then I think this whole discussion is a joke since\n> we are already *way* better then \"the others\"\n\nThat's a good question. What is the maximum tuple size for Informix or\nOracle tuples?\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Mon, 5 Jan 1998 00:08:41 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] include/config.h FOLLOWUP" } ]
[ { "msg_contents": "> On Mon, 5 Jan 1998, Vadim B. Mikheev wrote:\n> \n> > Just remember that Informix (and others) use 2K blocks.\n> \n> \tSo we're 4x what the commercial ones are as of right now? \n> \n\nDate: Sat, 14 Dec 1996 17:29:53 -0500\nFrom: aixssd!darrenk (Darren King)\nTo: abs.net!postgreSQL.org!hackers\nSubject: [HACKERS] The 8k block size.\n\n--- snip ---\n\n(All of this is taken from their respective web site docs)\n\n block size max tuple size\nIBM DB2 4K 4005 bytes\nSybase 2K 2016 bytes\nInformix 2K 32767 bytes\nOracle (left my Oracle books at home...oops)\n\n--- snip ---\n\nDarren [email protected]\n\n\n", "msg_date": "Sun, 4 Jan 1998 23:23:37 -0500", "msg_from": "[email protected] (Darren King)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] include/config.h FOLLOWUP" }, { "msg_contents": "> \n> > On Mon, 5 Jan 1998, Vadim B. Mikheev wrote:\n> > \n> > > Just remember that Informix (and others) use 2K blocks.\n> > \n> > \tSo we're 4x what the commercial ones are as of right now? \n> > \n> \n> (All of this is taken from their respective web site docs)\n> \n> block size max tuple size\n> IBM DB2 4K 4005 bytes\n> Sybase 2K 2016 bytes\n> Informix 2K 32767 bytes\n> Oracle (left my Oracle books at home...oops)\n\nWow, I guess we are not as bad as I thought. If Peter gets large\nobjects working properly, we can close this issue.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Mon, 5 Jan 1998 08:13:49 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] include/config.h FOLLOWUP" } ]
[ { "msg_contents": "I have changed the WARN messages to ERROR and ABORT messages.\n\nI wanted to differentiate between errors the user caused, and are\nnormal, like mistyped field names, and more serious errors coming from\nthe backend routines.\n\nI have made all the elog's in the parser/ directory as ERRORS. All the\nothers are ABORT.\n\nDoes someone want to review all the elog's and submit a patch changing\nABORT to ERROR as appropriate?\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Sun, 4 Jan 1998 23:35:04 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "ERROR/ABORT message" } ]
[ { "msg_contents": "I was thinking about subselects, and how to attach the two queries.\n\nWhat if the subquery makes a range table entry in the outer query, and\nthe query is set up like the UNION queries where we put the scans in a\nrow, but in the case we put them over/under each other.\n\nAnd we push a temp table into the catalog cache that represents the\nresult of the subquery, then we could join to it in the outer query as\nthough it was a real table.\n\nAlso, can't we do the correlated subqueries by adding the proper\ntarget/output columns to the subquery, and have the outer query\nreference those columns in the subquery range table entry.\n\nMaybe I can write up a sample of this? Vadim, would this help? Is this\nthe point we are stuck at?\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Mon, 5 Jan 1998 00:16:49 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "subselect" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> I was thinking about subselects, and how to attach the two queries.\n> \n> What if the subquery makes a range table entry in the outer query, and\n> the query is set up like the UNION queries where we put the scans in a\n> row, but in the case we put them over/under each other.\n> \n> And we push a temp table into the catalog cache that represents the\n> result of the subquery, then we could join to it in the outer query as\n> though it was a real table.\n> \n> Also, can't we do the correlated subqueries by adding the proper\n> target/output columns to the subquery, and have the outer query\n> reference those columns in the subquery range table entry.\n\nYes, this is a way to handle subqueries by joining to temp table.\nAfter getting plan we could change temp table access path to\nnode material. On the other hand, it could be useful to let optimizer\nknow about cost of temp table creation (have to think more about it)...\nUnfortunately, not all subqueries can be handled by \"normal\" joins: NOT IN\nis one example of this - joining by <> will give us invalid results.\nSetting special NOT EQUAL flag is not enough: subquery plan must be\nalways inner one in this case. The same for handling ALL modifier.\nNote, that we generaly can't use aggregates here: we can't add MAX to \nsubquery in the case of > ALL (subquery), because of > ALL should return FALSE\nif subquery returns NULL(s) but aggregates don't take NULLs into account.\n\n> \n> Maybe I can write up a sample of this? Vadim, would this help? Is this\n> the point we are stuck at?\n\nPersonally, I was stuck by holydays -:)\nNow I can spend ~ 8 hours ~ each day for development...\n\nVadim\n", "msg_date": "Mon, 05 Jan 1998 19:35:59 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] subselect" }, { "msg_contents": "\nVadim,\n\n Unfortunately, not all subqueries can be handled by \"normal\" joins: NOT IN\n is one example of this - joining by <> will give us invalid results.\n\nWhat is you approach towards this problem?\nI got an idea that one could reverse the order,\nthat is execute the outer first into a temptable\nand delete from that according to the result of the\nsubquery and then return it.\nProbably this is too raw and slow. 
;-)\n\n Personally, I was stuck by holydays -:)\n Now I can spend ~ 8 hours ~ each day for development...\n\nOh, isn't it christmas eve right now in Russia?\n\n best regards,\n-- \n---------------------------------------------\nG�ran Thyni, sysadm, JMS Bildbasen, Kiruna\n\n", "msg_date": "5 Jan 1998 13:28:25 -0000", "msg_from": "Goran Thyni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] subselect" }, { "msg_contents": "> Yes, this is a way to handle subqueries by joining to temp table.\n> After getting plan we could change temp table access path to\n> node material. On the other hand, it could be useful to let optimizer\n> know about cost of temp table creation (have to think more about it)...\n> Unfortunately, not all subqueries can be handled by \"normal\" joins: NOT IN\n> is one example of this - joining by <> will give us invalid results.\n> Setting special NOT EQUAL flag is not enough: subquery plan must be\n> always inner one in this case. The same for handling ALL modifier.\n> Note, that we generaly can't use aggregates here: we can't add MAX to \n> subquery in the case of > ALL (subquery), because of > ALL should return FALSE\n> if subquery returns NULL(s) but aggregates don't take NULLs into account.\n\nOK, here are my ideas. First, I think you have to handle subselects in\nthe outer node because a subquery could have its own subquery. Also, we\nnow have a field in Aggreg to all us to 'usenulls'.\n\nOK, here it is. I recommend we pass the outer and subquery through\nthe parser and optimizer separately.\n\nWe parse the subquery first. If the subquery is not correlated, it\nshould parse fine. If it is correlated, any columns we find in the\nsubquery that are not already in the FROM list, we add the table to the\nsubquery FROM list, and add the referenced column to the target list of\nthe subquery.\n\nWhen we are finished parsing the subquery, we create a catalog cache\nentry for it called 'sub1' and make its fields match the target\nlist of the subquery.\n\nIn the outer query, we add 'sub1' to its target list, and change\nthe subquery reference to point to the new range table. We also add\nWHERE clauses to do any correlated joins.\n\nHere is a simple example:\n\n\tselect *\n\tfrom taba\n\twhere col1 = (select col2\n\t\t from tabb)\n\nThis is not correlated, and the subquery parser easily. We create a\n'sub1' catalog cache entry, and add 'sub1' to the outer query FROM\nclause. We also replace 'col1 = (subquery)' with 'col1 = sub1.col2'.\n\nHere is a more complex correlated subquery:\n\n\tselect *\n\tfrom taba\n\twhere col1 = (select col2\n\t\t from tabb\n\t\t where taba.col3 = tabb.col4)\n\nHere we must add 'taba' to the subquery's FROM list, and add col3 to the\ntarget list of the subquery. 
After we parse the subquery, add 'sub1' to\nthe FROM list of the outer query, change 'col1 = (subquery)' to 'col1 =\nsub1.col2', and add to the outer WHERE clause 'AND taba.col3 = sub1.col3'.\nTHe optimizer will do the correlation for us.\n\nIn the optimizer, we can parse the subquery first, then the outer query,\nand then replace all 'sub1' references in the outer query to use the\nsubquery plan.\n\nI realize making merging the two plans and doing IN and NOT IN is the\nreal challenge, but I hoped this would give us a start.\n\nWhat do you think?\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Mon, 5 Jan 1998 10:28:48 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] subselect" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > always inner one in this case. The same for handling ALL modifier.\n> > Note, that we generaly can't use aggregates here: we can't add MAX to\n> > subquery in the case of > ALL (subquery), because of > ALL should return FALSE\n> > if subquery returns NULL(s) but aggregates don't take NULLs into account.\n> \n> OK, here are my ideas. First, I think you have to handle subselects in\n> the outer node because a subquery could have its own subquery. Also, we\n\nI hope that this is no matter: if results of subquery (with/without sub-subqueries)\nwill go into temp table then this table will be re-scanned for each outer tuple.\n\n> now have a field in Aggreg to all us to 'usenulls'.\n ^^^^^^^^\n This can't help:\n\nvac=> select * from x;\ny\n-\n1\n2\n3\n <<< this is NULL\n(4 rows)\n\nvac=> select max(y) from x;\nmax\n---\n 3\n\n==> we can't replace \n\nselect * from A where A.a > ALL (select y from x);\n ^^^^^^^^^^^^^^^\n (NULL will be returned and so A.a > ALL is FALSE - this is what \n Sybase does, is it right ?)\nwith\n\nselect * from A where A.a > (select max(y) from x);\n ^^^^^^^^^^^^^^^^^^^^\njust because of we lose knowledge about NULLs here.\n\nAlso, I would like to handle ANY and ALL modifiers for all bool\noperators, either built-in or user-defined, for all data types -\nisn't PostgreSQL OO-like RDBMS -:)\n\n> OK, here it is. I recommend we pass the outer and subquery through\n> the parser and optimizer separately.\n\nI don't like this. I would like to get parse-tree from parser for\nentire query and let optimizer (on upper level) decide how to rewrite\nparse-tree and what plans to produce and how these plans should be\nmerged. Note, that I don't object your methods below, but only where\nto place handling of this. I don't understand why should we add\nnew part to the system which will do optimizer' work (parse-tree --> \nexecution plan) and deal with optimizer nodes. Imho, upper optimizer\nlevel is nice place to do this.\n\n> \n> We parse the subquery first. If the subquery is not correlated, it\n> should parse fine. If it is correlated, any columns we find in the\n> subquery that are not already in the FROM list, we add the table to the\n> subquery FROM list, and add the referenced column to the target list of\n> the subquery.\n> \n> When we are finished parsing the subquery, we create a catalog cache\n> entry for it called 'sub1' and make its fields match the target\n> list of the subquery.\n> \n> In the outer query, we add 'sub1' to its target list, and change\n> the subquery reference to point to the new range table. 
We also add\n> WHERE clauses to do any correlated joins.\n...\n> Here is a more complex correlated subquery:\n> \n> select *\n> from taba\n> where col1 = (select col2\n> from tabb\n> where taba.col3 = tabb.col4)\n> \n> Here we must add 'taba' to the subquery's FROM list, and add col3 to the\n> target list of the subquery. After we parse the subquery, add 'sub1' to\n> the FROM list of the outer query, change 'col1 = (subquery)' to 'col1 =\n> sub1.col2', and add to the outer WHERE clause 'AND taba.col3 = sub1.col3'.\n> THe optimizer will do the correlation for us.\n> \n> In the optimizer, we can parse the subquery first, then the outer query,\n> and then replace all 'sub1' references in the outer query to use the\n> subquery plan.\n> \n> I realize making merging the two plans and doing IN and NOT IN is the\n ^^^^^^^^^^^^^^^^^^^^^\nThis is very easy to do! As I already said we have just change sub1\naccess path (SeqScan of sub1) with SeqScan of Material node with \nsubquery plan.\n\n> real challenge, but I hoped this would give us a start.\n\nDecision about how to record subquery stuff in to parse-tree\nwould be very good start -:)\n\nBTW, note that for _expression_ subqueries (which are introduced without\nIN, EXISTS, ALL, ANY - this follows Sybase' naming) - as in your examples - \nwe have to check that subquery returns single tuple...\n\nVadim\n", "msg_date": "Tue, 06 Jan 1998 02:55:57 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] subselect" }, { "msg_contents": "> \n> Bruce Momjian wrote:\n> > \n> > > always inner one in this case. The same for handling ALL modifier.\n> > > Note, that we generaly can't use aggregates here: we can't add MAX to\n> > > subquery in the case of > ALL (subquery), because of > ALL should return FALSE\n> > > if subquery returns NULL(s) but aggregates don't take NULLs into account.\n> > \n> > OK, here are my ideas. First, I think you have to handle subselects in\n> > the outer node because a subquery could have its own subquery. Also, we\n> \n> I hope that this is no matter: if results of subquery (with/without sub-subqueries)\n> will go into temp table then this table will be re-scanned for each outer tuple.\n\nOK, sounds good.\n\n> \n> > now have a field in Aggreg to all us to 'usenulls'.\n> ^^^^^^^^\n> This can't help:\n> \n> vac=> select * from x;\n> y\n> -\n> 1\n> 2\n> 3\n> <<< this is NULL\n> (4 rows)\n> \n> vac=> select max(y) from x;\n> max\n> ---\n> 3\n> \n> ==> we can't replace \n> \n> select * from A where A.a > ALL (select y from x);\n> ^^^^^^^^^^^^^^^\n> (NULL will be returned and so A.a > ALL is FALSE - this is what \n> Sybase does, is it right ?)\n> with\n> \n> select * from A where A.a > (select max(y) from x);\n\nI agree. I don't see how we can ever replace an '> ALL (y)' with '> ALL\n(max(y))'. This sounds too detailed for the system to deal with. If\nthey do ALL, we have to implement ALL, without any use of aggregates to\ntry and second-guess their request.\n\n> ^^^^^^^^^^^^^^^^^^^^\n> just because of we lose knowledge about NULLs here.\n\nYep. And it is too much work. If they want to replace the query with\nmax(), let them do it, if not, we do what they requested.\n\n> \n> Also, I would like to handle ANY and ALL modifiers for all bool\n> operators, either built-in or user-defined, for all data types -\n> isn't PostgreSQL OO-like RDBMS -:)\n\nOK, sounds good to me.\n\n> \n> > OK, here it is. 
I recommend we pass the outer and subquery through\n> > the parser and optimizer separately.\n> \n> I don't like this. I would like to get parse-tree from parser for\n> entire query and let optimizer (on upper level) decide how to rewrite\n> parse-tree and what plans to produce and how these plans should be\n> merged. Note, that I don't object your methods below, but only where\n> to place handling of this. I don't understand why should we add\n> new part to the system which will do optimizer' work (parse-tree --> \n> execution plan) and deal with optimizer nodes. Imho, upper optimizer\n> level is nice place to do this.\n\nI am confused. Do you want one flat query and want to pass the whole\nthing into the optimizer? That brings up some questions:\n\nHow do we want to do this? Well, we could easily have the two queries\nshare the same range table by making the subquery have the proper\nalias's/refnames.\n\nHowever, how do we represent the join and correlated joins to the\nsubquery. We can do the correlated stuff by having the outer columns\nreference the inner queries range table entries that we added, but how\nto represent the subquery WHERE clause, and the join of the outer to\ninner queries?\n\nIn:\n\n\tselect *\n\tfrom taba\n\twhere col1 = (select col2\n\t\t from tabb\n\t\t where taba.col3 = tabb.col4)\n\nHow do we represent join of col1 to tabb.col2? I guess we have a new\nnode type for IN and NOT IN and ANY, and we put that operator in the\nparse grammar.\n\nSo I assume you are suggesting we flatten the query, to merge the range\ntables of the two queries, and the WHERE clauses of the two queries, add\nthe proper WHERE conditionals to join the two range tables for\ncorrelated queries, and have the IN, NOT IN, ALL nodes in the WHERE\nclause, and have the optimizer figure out how to handle the issues.\n\nHow do we handle aggregates in the subquery? Currently the optimizer\ndoes those last, but we must put them above the materialized node. And\nif we merge the outer and subquery to produce one flat query, how do we\ntell the optimizer to make sure the aggregate is in a node that can be\nmaterialized?\n\n---------------------------------------------------------------------------\n\nIf you don't want to flatten the outer query and subquery into one\nquery, I am really confused. There certainly will be stuff that needs\nto be put into the upper optimizer, to properly handle the two plans and\nmake sure they are merged into one plan.\n\nAre you suggesting we put the IN node in the upper optimizer, and the\ncorrelation stuff. That sounds good.\n\n> > I realize making merging the two plans and doing IN and NOT IN is the\n> ^^^^^^^^^^^^^^^^^^^^^\n> This is very easy to do! As I already said we have just change sub1\n> access path (SeqScan of sub1) with SeqScan of Material node with \n> subquery plan.\n\nGood. Makes sense. 
This is what I was suggesting.\n\n> \n> > real challenge, but I hoped this would give us a start.\n> \n> Decision about how to record subquery stuff in to parse-tree\n> would be very good start -:)\n> \n> BTW, note that for _expression_ subqueries (which are introduced without\n> IN, EXISTS, ALL, ANY - this follows Sybase' naming) - as in your examples - \n> we have to check that subquery returns single tuple...\n\nYes, I realize this.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Mon, 5 Jan 1998 15:51:26 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] subselect" }, { "msg_contents": "> > I am confused. Do you want one flat query and want to pass the whole\n> > thing into the optimizer? That brings up some questions:\n> \n> No. I just want to follow Tom's way: I would like to see new\n> SubSelect node as shortened version of struct Query (or use\n> Query structure for each subquery - no matter for me), some \n> subquery-related stuff added to Query (and SubSelect) to help\n> optimizer to start, and see\n\nOK, so you want the subquery to actually be INSIDE the outer query\nexpression. Do they share a common range table? If they don't, we\ncould very easily just fly through when processing the WHERE clause, and\nstart a new query using a new query structure for the subquery. Believe\nme, you don't want a separate SubQuery-type, just re-use Query for it. \nIt allows you to call all the normal query stuff with a consistent\nstructure.\n\nThe parser will need to know it is in a subquery, so it can add the\nproper target columns to the subquery, or are you going to do that in\nthe optimizer. You can do it in the optimizer, and join the range table\nreferences there too.\n\n> \n> typedef struct A_Expr\n> {\n> NodeTag type;\n> int oper; /* type of operation\n> * {OP,OR,AND,NOT,ISNULL,NOTNULL} */\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> IN, NOT IN, ANY, ALL, EXISTS here,\n> \n> char *opname; /* name of operator/function */\n> Node *lexpr; /* left argument */\n> Node *rexpr; /* right argument */\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> and SubSelect (Query) here (as possible case).\n> \n> One thought to follow this way: RULEs (and so - VIEWs) are handled by using\n> Query - how else can we implement VIEWs on selects with subqueries ?\n\nViews are stored as nodeout structures, and are merged into the query's\nfrom list, target list, and where clause. I am working out\nreadfunc,outfunc now to make sure they are up-to-date with all the\ncurrent fields.\n\n> \n> BTW, is\n> \n> select * from A where (select TRUE from B);\n> \n> valid syntax ?\n\nI don't think so.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Mon, 5 Jan 1998 17:16:40 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] subselect" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > > OK, here it is. I recommend we pass the outer and subquery through\n> > > the parser and optimizer separately.\n> >\n> > I don't like this. I would like to get parse-tree from parser for\n> > entire query and let optimizer (on upper level) decide how to rewrite\n> > parse-tree and what plans to produce and how these plans should be\n> > merged. Note, that I don't object your methods below, but only where\n> > to place handling of this. 
I don't understand why should we add\n> > new part to the system which will do optimizer' work (parse-tree -->\n> > execution plan) and deal with optimizer nodes. Imho, upper optimizer\n> > level is nice place to do this.\n> \n> I am confused. Do you want one flat query and want to pass the whole\n> thing into the optimizer? That brings up some questions:\n\nNo. I just want to follow Tom's way: I would like to see new\nSubSelect node as shortened version of struct Query (or use\nQuery structure for each subquery - no matter for me), some \nsubquery-related stuff added to Query (and SubSelect) to help\noptimizer to start, and see\n\ntypedef struct A_Expr\n{\n NodeTag type;\n int oper; /* type of operation\n * {OP,OR,AND,NOT,ISNULL,NOTNULL} */\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n IN, NOT IN, ANY, ALL, EXISTS here,\n\n char *opname; /* name of operator/function */\n Node *lexpr; /* left argument */\n Node *rexpr; /* right argument */\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n and SubSelect (Query) here (as possible case).\n\nOne thought to follow this way: RULEs (and so - VIEWs) are handled by using\nQuery - how else can we implement VIEWs on selects with subqueries ?\n\nBTW, is\n\nselect * from A where (select TRUE from B);\n\nvalid syntax ?\n\nVadim\n", "msg_date": "Tue, 06 Jan 1998 05:18:11 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] subselect" }, { "msg_contents": "Goran Thyni wrote:\n> \n> Vadim,\n> \n> Unfortunately, not all subqueries can be handled by \"normal\" joins: NOT IN\n> is one example of this - joining by <> will give us invalid results.\n> \n> What is you approach towards this problem?\n\nActually, this is problem of ALL modifier (NOT IN is _not_equal_ ALL)\nand so, we have to have not just NOT EQUAL flag but some ALL node\nwith modified operator.\n\nAfter that, one way is put subquery into inner plan of an join node\nto be sure that for an outer tuple all corresponding subquery tuples\nwill be tested with modified operator (this will require either\nchanging code of all join nodes or addition of new plan type - we'll see)\nand another way is ... suggested by you:\n\n> I got an idea that one could reverse the order,\n> that is execute the outer first into a temptable\n> and delete from that according to the result of the\n> subquery and then return it.\n> Probably this is too raw and slow. ;-)\n\nThis will be faster in some cases (when subquery returns many results\nand there are \"not so many\" results from outer query) - thanks for idea!\n\n> \n> Personally, I was stuck by holydays -:)\n> Now I can spend ~ 8 hours ~ each day for development...\n> \n> Oh, isn't it christmas eve right now in Russia?\n\nDue to historic reasons New Year is mu-u-u-uch popular\nholiday in Russia -:)\n\nVadim\n", "msg_date": "Tue, 06 Jan 1998 05:48:58 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] subselect" }, { "msg_contents": "> No. I just want to follow Tom's way: I would like to see new\n> SubSelect node as shortened version of struct Query (or use\n> Query structure for each subquery - no matter for me), some \n> subquery-related stuff added to Query (and SubSelect) to help\n> optimizer to start, and see\n\nThis is fine. I thought it would be too much work for the optimizer to\npass a subquery inside the WHERE clause, but if you think you can handle\nit, great. 
I think it is more likely views will work with subqueries if\nwe do that too, and it is cleaner.\n\nI recommend adding a boolean flag to the rangetable entries to show if\nthe range was added automatically, meaning it refers to an outer query. \nAlso, we will need a flag in the Query structure to tell if it is a\nsubquery, and a pointer to the parent's range table to resolve\nreferences like:\n\n\tselect *\n\tfrom taba\n\twhere col1 = (select col2\n\t\t from tabb\n\t\t where col3 = tabb.col4)\n\nIn this case, the proper table for col3 can not be determined from the\nsubquery range table, so we must search the parent range table to add\nthe proper entry to the child. If we add target entries at the same\ntime in the parser, we should add a flag to the targetentry structure to\nidentify it as an entry that will have to have additional WHERE clauses\nadded to the parent to restrict the join, or we could add those entries\nin the parser, but at the time we are processing the subquery, we are\nalready inside the WHERE clause, so we must be careful.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Mon, 5 Jan 1998 18:02:08 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] subselect" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > > I am confused. Do you want one flat query and want to pass the whole\n> > > thing into the optimizer? That brings up some questions:\n> >\n> > No. I just want to follow Tom's way: I would like to see new\n> > SubSelect node as shortened version of struct Query (or use\n> > Query structure for each subquery - no matter for me), some\n> > subquery-related stuff added to Query (and SubSelect) to help\n> > optimizer to start, and see\n> \n> OK, so you want the subquery to actually be INSIDE the outer query\n> expression. Do they share a common range table? If they don't, we\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nNo.\n\n> could very easily just fly through when processing the WHERE clause, and\n> start a new query using a new query structure for the subquery. Believe\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n... and filling some subquery-related stuff in upper query structure -\nstill don't know what exactly this could be -:)\n\n> me, you don't want a separate SubQuery-type, just re-use Query for it.\n> It allows you to call all the normal query stuff with a consistent\n> structure.\n\nNo objections.\n\n> \n> The parser will need to know it is in a subquery, so it can add the\n> proper target columns to the subquery, or are you going to do that in\n\nI don't think that we need in it, but list of correlation clauses\ncould be good thing - all in all parser has to check all column \nreferences...\n\n> the optimizer. 
You can do it in the optimizer, and join the range table\n> references there too.\n\nYes.\n\n> > typedef struct A_Expr\n> > {\n> > NodeTag type;\n> > int oper; /* type of operation\n> > * {OP,OR,AND,NOT,ISNULL,NOTNULL} */\n> > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> > IN, NOT IN, ANY, ALL, EXISTS here,\n> >\n> > char *opname; /* name of operator/function */\n> > Node *lexpr; /* left argument */\n> > Node *rexpr; /* right argument */\n> > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> > and SubSelect (Query) here (as possible case).\n> >\n> > One thought to follow this way: RULEs (and so - VIEWs) are handled by using\n> > Query - how else can we implement VIEWs on selects with subqueries ?\n> \n> Views are stored as nodeout structures, and are merged into the query's\n> from list, target list, and where clause. I am working out\n> readfunc,outfunc now to make sure they are up-to-date with all the\n> current fields.\n\nNice! This stuff was out-of-date for too long time.\n\n> > BTW, is\n> >\n> > select * from A where (select TRUE from B);\n> >\n> > valid syntax ?\n> \n> I don't think so.\n\nAnd so, *rexpr can be of Query type only for oper \"in\" OP, IN, NOT IN,\nANY, ALL, EXISTS - well.\n\n(Time to sleep -:)\n\nVadim\n", "msg_date": "Tue, 06 Jan 1998 06:09:56 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] subselect" }, { "msg_contents": "> BTW, note that for _expression_ subqueries (which are introduced without\n> IN, EXISTS, ALL, ANY - this follows Sybase' naming) - as in your examples -\n> we have to check that subquery returns single tuple...\n\nIt might be nice to have a tuple-counting operation or query node (is this the right\nterminology?) which could be used to help implement EXISTS. It might help to\nre-implement the count(*) function also.\n\n - Tom\n\n\n", "msg_date": "Tue, 06 Jan 1998 04:50:12 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] subselect" }, { "msg_contents": "> \n> > BTW, note that for _expression_ subqueries (which are introduced without\n> > IN, EXISTS, ALL, ANY - this follows Sybase' naming) - as in your examples -\n> > we have to check that subquery returns single tuple...\n> \n> It might be nice to have a tuple-counting operation or query node (is this the right\n> terminology?) which could be used to help implement EXISTS. It might help to\n> re-implement the count(*) function also.\n\nIn the new code, count(*) picks a column from one of the tables to count\non.\n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Tue, 6 Jan 1998 00:06:39 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] subselect" } ]
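The representation the thread converges on - an A_Expr-style node whose oper field gains IN/NOT IN/ANY/ALL/EXISTS values and whose rexpr may point at a whole Query - is easy to picture with a small stand-alone model. The sketch below is illustrative only: the type and field names beyond those quoted in the messages are invented, and it is not the committed parser code.

    #include <stdio.h>

    /* Toy model of the proposal above: the right-hand side of an
     * expression node is itself a (sub)Query.  All names are made up
     * for illustration. */
    typedef enum { EXPR_OP, EXPR_IN, EXPR_NOT_IN, EXPR_ANY, EXPR_ALL, EXPR_EXISTS } ExprKind;

    typedef struct SubQuery
    {
        const char *targetList;    /* "col2"                               */
        const char *fromClause;    /* "tabb"                               */
        const char *whereClause;   /* "taba.col3 = tabb.col4" (correlated) */
    } SubQuery;

    typedef struct SubExpr
    {
        ExprKind    oper;          /* IN, NOT IN, ANY, ALL, EXISTS         */
        const char *opname;        /* comparison operator, "=" for IN      */
        const char *lexpr;         /* outer column, "taba.col1"            */
        SubQuery   *rexpr;         /* the subquery kept as a Query node    */
    } SubExpr;

    int main(void)
    {
        SubQuery sub = { "col2", "tabb", "taba.col3 = tabb.col4" };
        SubExpr  e   = { EXPR_IN, "=", "taba.col1", &sub };

        printf("%s IN (SELECT %s FROM %s WHERE %s)\n",
               e.lexpr, e.rexpr->targetList,
               e.rexpr->fromClause, e.rexpr->whereClause);
        return 0;
    }

The point of keeping the subquery as a plain Query node is the same one made for rules and views in the messages above: the identical structure can be handed back through the rewriter and the upper level of the optimizer without inventing a second node format.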
[ { "msg_contents": "Hi all,\n\nthe current snapshot reserves 'char' as keyword.\nCan anyone tell me the reason ?\n\nthanks\nEdmund\n-- \nEdmund Mergl mail: [email protected]\nIm Haldenhau 9 http://www.bawue.de/~mergl\nD 70565 Stuttgart fon: +49 711 747503\nGermany gsm: +49 171 2645325\n", "msg_date": "Mon, 05 Jan 1998 07:07:37 +0100", "msg_from": "Edmund Mergl <[email protected]>", "msg_from_op": true, "msg_subject": "why is char now a keyword" }, { "msg_contents": "Edmund Mergl wrote:\n\n> Hi all,\n>\n> the current snapshot reserves 'char' as keyword.\n> Can anyone tell me the reason ?\n\nAt least in part so that we can do the explicit parsing required to\nsupport SQL92 syntax elements such as \"character sets\" and \"collating\nsequences\". I'd like to get support for multiple character sets, but am\nimmersed in documentation so will not likely get to this for the next\nrelease. Have you encountered a problem with an existing database? In\nwhat context??\n\n - Tom\n\n", "msg_date": "Mon, 05 Jan 1998 07:56:07 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] why is char now a keyword" } ]
[ { "msg_contents": "> > \n> > 2. This is more like a C issue rather than aix-specific. The aix compiler complains\n> > about assigning the (void)NULL to isnull in the heap_getattr macro. Changing the\n> > (void) to (bool) works and seems like it should be (bool) to match the type of isnull,\n> > shouldn't it?\n> > \n> > *** include/access/heapam.h.org\tSun Jan 4 23:52:05 1998\n> > --- include/access/heapam.h\tSun Jan 4 23:52:11 1998\n> > ***************\n> > *** 101,110 ****\n> > #define heap_getattr(tup, b, attnum, tupleDesc, isnull) \\\n> > \t(AssertMacro((tup) != NULL) ? \\\n> > \t\t((attnum) > (int) (tup)->t_natts) ? \\\n> > ! \t\t\t(((isnull) ? (*(isnull) = true) : (void)NULL), (Datum)NULL) : \\\n> > \t\t((attnum) > 0) ? \\\n> > \t\t\tfastgetattr((tup), (attnum), (tupleDesc), (isnull)) : \\\n> > ! \t\t(((isnull) ? (*(isnull) = false) : (void)NULL), heap_getsysattr((tup), (b), (attnum))) : \\\n> > \t(Datum)NULL)\n> > \n> > extern HeapAccessStatistics heap_access_stats;\t/* in stats.c */\n> > --- 101,110 ----\n> > #define heap_getattr(tup, b, attnum, tupleDesc, isnull) \\\n> > \t(AssertMacro((tup) != NULL) ? \\\n> > \t\t((attnum) > (int) (tup)->t_natts) ? \\\n> > ! \t\t\t(((isnull) ? (*(isnull) = true) : (bool)NULL), (Datum)NULL) : \\\n> > \t\t((attnum) > 0) ? \\\n> > \t\t\tfastgetattr((tup), (attnum), (tupleDesc), (isnull)) : \\\n> > ! \t\t(((isnull) ? (*(isnull) = false) : (bool)NULL), heap_getsysattr((tup), (b), (attnum))) : \\\n> > \t(Datum)NULL)\n> > \n> > extern HeapAccessStatistics heap_access_stats;\t/* in stats.c */\n> \n> We made if void so that we would stop getting gcc warnings about 'unused\n> left-hand side of conditional' messages. Does aix complain or stop. If\n> it just complains, I think we have to leave it alone, because everyone\n> else will complain about bool.\n\nBut this is then trying to assign a (void)NULL to isnull, which is a bool (really a char).\nIMHO gcc should complain. Aix gives a severe error since the types don't match.\n\nMaybe better to have a warning than fix it by causing an error. Gcc just happens to be in\na forgiving mood. What does the C standard say about casting (void) ptrs to other types?\n\nWhy not make this a _little_ more legible and compiler-friendly by making it into an\nif-then-else block? Is the ?: operator really saving any ops?\n\n---------\n\nRe: the StrNCpy macro...\n\nThe aix compiler complains about trying to assign a (void)NULL to (len > 0). Can this be\nfixed with another set of parens separating the returned dest from the ?: operator?\n\nLike...\n\n(((strncpy((dst),(src),(len)),(len > 0)) ? *((dst)+(len)-1)='\\0' : ((char)NULL)),(dst)))\n ^ ^\n \nThis gets the return value back doesn't it? And changing to a (char)NULL makes the\ncompiler happy again too. Is this acceptable?\n\ndarrenk\n", "msg_date": "Mon, 5 Jan 1998 09:13:30 -0500", "msg_from": "[email protected] (Darren King)", "msg_from_op": true, "msg_subject": "Re: [PORTS] (void)NULL in macros and aix" } ]
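The underlying C issue in the report above is that the second and third operands of ?: must have compatible types, so an arm that assigns to a bool cannot be paired with a (void)NULL arm on stricter compilers, while casting to (bool)NULL quietly changes the type of the whole expression. The usual portable idiom is to cast both arms to void, since the expression is evaluated only for its side effect. A minimal stand-alone illustration follows; the macro name is made up for the example.

    #include <stdio.h>

    /* Both arms are void, so no compiler has to reconcile an assignment
     * with a NULL pointer constant - the same situation as the isnull
     * handling in the heap_getattr() macro discussed above. */
    #define SET_ISNULL(isnull, value) \
            ((isnull) ? (void) (*(isnull) = (value)) : (void) 0)

    int main(void)
    {
        char  isnull = 0;
        char *ip = &isnull;

        SET_ISNULL(ip, 1);
        printf("isnull = %d\n", isnull);

        ip = NULL;
        SET_ISNULL(ip, 1);              /* safe no-op when no flag is wanted */
        return 0;
    }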
[ { "msg_contents": "\nOn Tue, 30 Dec 1997, Bruce Momjian wrote:\n\n> The following was sent to me. Does it fit our needs anywhere? Let's\n> discuss it.\n> > \n> > I wrote an indexed file system some time ago in ANSI C. I've compiled\n> > and used it on several platforms, but have never aspired to do very much\n> > with it. I'm not really the marketing type, and I don't want to compete\n> > with existing standards.\n> > \n> > I wonder if it could make any contribution to the PostgreSQL effort? \n> > Here are the pluses and minuses:\n\n[snip]\n\n> > Maybe it could do strange sorts or handle BLOBs. If you think it could\n> > make a contribution I'd be willing to learn and work on the appropriate\n> > code. You're welcome to a copy of it or any additional information you\n> > might want.\n\nBruce, while reading another thread about the tuple size, could this be\nutilised in building some form of MEMO field, getting round the 8k limit?\n\nWe have the existing large objects that are ideal for binary data, but for\ntextual data this could be of use. From the client's point of view, this\nis stored with the tuple, but in reality, the content is stored in a large\nobject.\n\n\nAlso, quite some time ago, someone did ask on how to do searches on large\nobjects (consisting of large unicode documents). The existing stuff\ndoesn't support this, but could it be done with this code?\n\nAnyhow, just a thought.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n\n", "msg_date": "Mon, 5 Jan 1998 15:10:35 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: development" }, { "msg_contents": "> Also, quite some time ago, someone did ask on how to do searches on large\n> objects (consisting of large unicode documents). The existing stuff\n> doesn't support this, but could it be done with this code?\n\nI think this is what Vadim was alluding to when he talked about Illustra\nhaving large objects that looked like columns in a table. Perhaps it\ncould be done easily by defining a function that takes a large object\nname stored in a table, and returns its value.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Mon, 5 Jan 1998 10:41:41 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: development" } ]
[ { "msg_contents": "Hello all,\n\nWARNING : It's a long mail, but please have patience and read it *all*\n\nI have reached a point in developing PgAccess when I discovered that\nsome functions in libpgtcl are bad implemented and even libpq does not\nhelp me a lot!\n\nWhat's the problem ! Is about working with large queries result, big\ntables with thousand of records that have to be processed in order to\nget a full report for example.\n\nGetting a query result from Tcl/Tk (pg_select function) uses PQexec.\nBut PQexec IS GETTING ALL THE RECORDS IN MEMORY and after that user can\nhandle query results.\nBut what if table has thousand records ? Probably I would need more than\n512 Mb of RAM in order to get a report finished.\n\nViorel Mitache from RENEL Braila, ([email protected], please cc him and me\nbecause we aren't on hacker list) proposed another sollution.\n\nWith some small changes in libpq-fe.h\n\n ( void (* callback)(PGresult *,void *ptr,int stat);\n void *usr_ptr;)\n\nand also in libpq to allow a newly defined function in libpgtcl\n(pg_loop) to initiate a query and then calling back a user defined\nfunction after every record fetched from the connection.\n\nIn order to do this, the connection is 'cloned' and on this new\nconnection the query is issued. For every record fetched, the C callback\nfunction is called, here the Tcl interpreted is invoked for the source\ninside the loop, then memory used by the record is release and the next\nrecord is ready to come.\nMore than that, after processing some records, user can choose to break\nthe loop (using break command in Tcl) that is actually breaking the\nconnection.\n\nWhat we achieve making this patches ?\n\nFirst of all the ability of sequential processing large tables.\nThen increasing performance due to parallel execution of receiving data\non the network and local processing. The backend process on the server\nis filling the communication channel with data and the local task is\nprocessing it as it comes.\nIn the old version, the local task has to wait until *all* data has\ncomed (buffered in memory if it was room enough) and then processing it.\n\nWhat I would ask from you?\n1) First of all, if my needs could be satisfied in other way with\ncurrent functions in libpq of libpgtcl. I can assure you that with\ncurrent libpgtcl is rather impossible. I am not sure if there is another\nmechanism using some subtle functions that I didn't know about them.\n2) Then, if you agree with the idea, to whom we must send more accurate\nthe changes that we would like to make in order to be analysed and\nchecked for further development of Pg.\n3) Is there any other normal mode to tell to the backend not to send any\nmore tuples instead of breaking the connection ?\n4) Even working in C, using PQexec , it's impossible to handle large\nquery results, am I true ?\n\nPlease cc to : [email protected] and also [email protected]\n\n-- \nConstantin Teodorescu\nFLEX Consulting Braila, ROMANIA\n", "msg_date": "Mon, 05 Jan 1998 20:43:45 +0200", "msg_from": "Constantin Teodorescu <[email protected]>", "msg_from_op": true, "msg_subject": "I want to change libpq and libpgtcl for better handling of large\n\tquery results" }, { "msg_contents": "> Getting a query result from Tcl/Tk (pg_select function) uses PQexec.\n> But PQexec IS GETTING ALL THE RECORDS IN MEMORY and after that user can\n> handle query results.\n> But what if table has thousand records ? 
Probably I would need more than\n> 512 Mb of RAM in order to get a report finished.\n\nThis issue has come up before. The accepted solution is to open a\ncursor, and fetch whatever records you need. The backend still\ngenerates the full result, but the front end requests the records it\nwants.\n\nDoes that not work in your case?\n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Mon, 5 Jan 1998 21:15:10 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] I want to change libpq and libpgtcl for better handling\n\tof large query results" }, { "msg_contents": "On Mon, 5 Jan 1998, Constantin Teodorescu wrote:\n\n> In order to do this, the connection is 'cloned' and on this new\n> connection the query is issued. For every record fetched, the C callback\n> function is called, here the Tcl interpreted is invoked for the source\n> inside the loop, then memory used by the record is release and the next\n> record is ready to come.\n> More than that, after processing some records, user can choose to break\n> the loop (using break command in Tcl) that is actually breaking the\n> connection.\n> \n> What we achieve making this patches ?\n> \n> First of all the ability of sequential processing large tables.\n> Then increasing performance due to parallel execution of receiving data\n> on the network and local processing. The backend process on the server\n> is filling the communication channel with data and the local task is\n> processing it as it comes.\n> In the old version, the local task has to wait until *all* data has\n> comed (buffered in memory if it was room enough) and then processing it.\n> \n> What I would ask from you?\n> 1) First of all, if my needs could be satisfied in other way with\n> current functions in libpq of libpgtcl. I can assure you that with\n> current libpgtcl is rather impossible. I am not sure if there is another\n> mechanism using some subtle functions that I didn't know about them.\n\n\tBruce answered this one by asking about cursors...\n\n> 2) Then, if you agree with the idea, to whom we must send more accurate\n> the changes that we would like to make in order to be analysed and\n> checked for further development of Pg.\n\n\tHere, on this mailing list...\n\n\tNow, let's see if I understand what you are thinking of...\n\n\tBasically, by \"cloning\", you are effectively looking at implementing ftp's\nway of dealing with a connection, having one \"control\" channel, and one \"data\"\nchannel, is this right? So that the \"frontend\" has a means of sending a STOP\ncommand to the backend even while the backend is still sending the frontend\nthe data?\n\n\tNow, from reading Bruce's email before reading this, this doesn't get \naround the fact that the backend is still going to have to finish generating\na response to the query before it can send *any* data back, so, as Bruce has\nasked, don't cursors already provide what you are looking for? With cursors,\nas I understand it, you basically tell the backend to send forward X tuples at\na time and after that, if you want to break the connection, you just break \nthe connection. \n\n\tWith what you are proposing (again, if I'm understanding correctly), the\nfrontend would effectively accept X bytes of data (or X tuples) and then it\nwould have an opportunity to send back a STOP over a control channel...\n\n\tOversimplified, I know, but I'm a simple man *grin*\n\nMarc G. 
Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 6 Jan 1998 01:45:58 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] I want to change libpq and libpgtcl for better handling\n\tof large query results" }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> > What I would ask from you?\n> > 1) First of all, if my needs could be satisfied in other way with\n> > current functions in libpq of libpgtcl. I can assure you that with\n> > current libpgtcl is rather impossible. I am not sure if there is another\n> > mechanism using some subtle functions that I didn't know about them.\n> \n> Bruce answered this one by asking about cursors...\n\nYes. It's true. I have used cursors for speeding up opening tables in\nPgAccess fetching only the first 200 records from the table.\nBut for a 10 thousand record table I will send over the network 10\nthousand \"FETCH 1 IN CURSOR\" because in a report table I am processing\nrecords one by one.\nThe time for this kind of retrieval would be more than twice as in the\n'callback' mechanism.\n\nIf you think that is better to keep libpq and libpgtcl as they are, then\nI will use cursors.\nBut using the 'callback' method it would increase performance.\n\nI am waiting for the final resolution :-)\n\n> Basically, by \"cloning\", you are effectively looking at implementing ftp's\n> way of dealing with a connection, having one \"control\" channel, and one \"data\"\n> channel, is this right? So that the \"frontend\" has a means of sending a STOP\n> command to the backend even while the backend is still sending the frontend\n> the data?\n\nNot exactly. Looking from Tcl/Tk point of view, the mechanism is\ntransparent. I am using this structure :\n\npg_loop $database \"select * from sometable\" record {\n set something $record(somefield)\n}\n\nBut the new libpgtcl is opening a 'cloned' connection in order to :\n- send the query through it\n- receive the data from it\nI am not able to break the connection using commands send through the\n'original' one. The query is 'stopped' by breaking the connection.\nThat's why we needed another connection. Because there isn't (yet) a\nmechanism to tell the backend to abort transmission of the rest of the\nquery. I understand that the backend is not reading any more the socket\nin order to receive some CANCEL signal from the frontend. So, dropping\nthe rest of the query results isn't possible without a hard break of the\nconnection.\n\n\n-- \nConstantin Teodorescu\nFLEX Consulting Braila, ROMANIA\n", "msg_date": "Tue, 06 Jan 1998 09:32:26 +0200", "msg_from": "Constantin Teodorescu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] I want to change libpq and libpgtcl for better handling\n\tof large query results" }, { "msg_contents": "As far as I understood, this seems to be another solution to the older\nproblem of speeding up the browser display of large results. The first one\nconsisted on nonblocking exec/blocking fetchtuple in libpq (the patch is\nvery simple). But the main point is that I learned at that time that\nbackend send tuples as soon as it computes them. 
\n\nCan someone give an authorized answer?\n\nOn Tue, 6 Jan 1998, The Hermit Hacker wrote:\n\n> On Mon, 5 Jan 1998, Constantin Teodorescu wrote:\n...\n> \n> \tNow, from reading Bruce's email before reading this, this doesn't get \n> around the fact that the backend is still going to have to finish generating\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> a response to the query before it can send *any* data back, so, as Bruce has\n...\n> \n> Marc G. Fournier \n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n\nPS. On the other hand, if someone is working on back/front protocol, could\nhe think about how difficult would be to have a async full duplex\nconnection?\n\nCostin Oproiu\n\n", "msg_date": "Tue, 6 Jan 1998 10:39:21 +0200 (EET)", "msg_from": "PostgreSQL <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] I want to change libpq and libpgtcl for better handling\n\tof large query results" }, { "msg_contents": "On Mon, 5 Jan 1998, Constantin Teodorescu wrote:\n\n> I have reached a point in developing PgAccess when I discovered that\n> some functions in libpgtcl are bad implemented and even libpq does not\n> help me a lot!\n> \n> What's the problem ! Is about working with large queries result, big\n> tables with thousand of records that have to be processed in order to\n> get a full report for example.\n\nIn the past, I've had a lot of people complaining about the performance\n(or lack of) when handling large results in JDBC.\n\n> Getting a query result from Tcl/Tk (pg_select function) uses PQexec.\n> But PQexec IS GETTING ALL THE RECORDS IN MEMORY and after that user can\n> handle query results.\n> But what if table has thousand records ? Probably I would need more than\n> 512 Mb of RAM in order to get a report finished.\n\nThe only solution I was able to give was for them to use cursors, and\nfetch the result in chunks.\n\n> With some small changes in libpq-fe.h\n> \n> ( void (* callback)(PGresult *,void *ptr,int stat);\n> void *usr_ptr;)\n> \n> and also in libpq to allow a newly defined function in libpgtcl\n> (pg_loop) to initiate a query and then calling back a user defined\n> function after every record fetched from the connection.\n> \n> In order to do this, the connection is 'cloned' and on this new\n> connection the query is issued. For every record fetched, the C callback\n> function is called, here the Tcl interpreted is invoked for the source\n> inside the loop, then memory used by the record is release and the next\n> record is ready to come.\n\nI understand the idea here as I've use this trick before with tcl, but\nthis could cause a problem with the other languages that we support. I\ndon't know how this would be done for Perl, but with Java, the JDBC spec\ndoesn't have this type of callback.\n\nSome time back (around v6.0), I did look at having a seperate thread on\nthe client, that read the results in the background, and the foreground\nthread would then get the results almost immediately. It would only wait,\nif it had read everything transfered so far, and (as JDBC cannot go back a\nrow in a ResultSet), the read rows are freed once used.\n\nAlthough the idea was sound, in practice, it didn't work well. Not every\nJVM implemented threading in the same way, so it locked up a lot. 
In the\nend, the idea was dropped.\n\n> More than that, after processing some records, user can choose to break\n> the loop (using break command in Tcl) that is actually breaking the\n> connection.\n\nWhat side effects could this have to the backend if the second connection\nis broken. I think the existing code would simply terminate.\n\n> What we achieve making this patches ?\n> \n> First of all the ability of sequential processing large tables.\n> Then increasing performance due to parallel execution of receiving data\n> on the network and local processing. The backend process on the server\n> is filling the communication channel with data and the local task is\n> processing it as it comes.\n> In the old version, the local task has to wait until *all* data has\n> comed (buffered in memory if it was room enough) and then processing it.\n\n> What I would ask from you?\n> 1) First of all, if my needs could be satisfied in other way with\n> current functions in libpq of libpgtcl. I can assure you that with\n> current libpgtcl is rather impossible. I am not sure if there is another\n> mechanism using some subtle functions that I didn't know about them.\n\nWe were talking about some changes to the protocol. Perhaps, we could do\nsomething like changing it so it sends the result in blocks of tuples,\nrather than everything in one block. Then, in between each packet, an ACK\nor CAN style packet could be sent to the backend, either asking for the\nnext, or canceling the results.\n\nAnother alternative is (as an option definable by the client at run time)\nto have results open another connection on a per-result basis (aka FTP).\nHowever, I can see a performance hit with the overhead involved in opening\na new connection every time.\n\nAlso, I can see a possible problem:-\n\nSay, a client has executed a query, which returns a large number of rows.\nWe have read in the first 100 rows. The backend still has the majority of\nthe result queued up behind it.\n\nNow in JDBC, we have getAsciiStream/getBinaryStream/getUnicodeStream which\nare the standard way of getting at BLOBS.\n\nIf one of the columns is a blob, and the client tries to read from the\nblob, it will fail, because we are not in the main loop in the backend\n(were still transfering a result, and BLOBS use fastpath).\n\nThere are ways around this, but things could get messy if were not\ncareful.\n\n> 3) Is there any other normal mode to tell to the backend not to send any\n> more tuples instead of breaking the connection ?\n\nApart from using cursors, not that I know of.\n\n> 4) Even working in C, using PQexec , it's impossible to handle large\n> query results, am I true ?\n\nMemory is the only limitation to this.\n\n> Please cc to : [email protected] and also [email protected]\n\nDone..\n\nIt would be interesting to see what the others think. 
Both TCL & Java are\nclose relatives, and Sun are working on a TCL extension to Java, so any\nchanges could (in the future) help both of us.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Tue, 6 Jan 1998 12:07:07 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] I want to change libpq and libpgtcl for better handling\n\tof large query results" }, { "msg_contents": "> \n> The Hermit Hacker wrote:\n> > \n> > > What I would ask from you?\n> > > 1) First of all, if my needs could be satisfied in other way with\n> > > current functions in libpq of libpgtcl. I can assure you that with\n> > > current libpgtcl is rather impossible. I am not sure if there is another\n> > > mechanism using some subtle functions that I didn't know about them.\n> > \n> > Bruce answered this one by asking about cursors...\n> \n> Yes. It's true. I have used cursors for speeding up opening tables in\n> PgAccess fetching only the first 200 records from the table.\n> But for a 10 thousand record table I will send over the network 10\n> thousand \"FETCH 1 IN CURSOR\" because in a report table I am processing\n> records one by one.\n> The time for this kind of retrieval would be more than twice as in the\n> 'callback' mechanism.\n\nYou can tell fetch to give you as many records as you want, so you can\nread in 100-tuple blocks.\n\n> \n> If you think that is better to keep libpq and libpgtcl as they are, then\n> I will use cursors.\n> But using the 'callback' method it would increase performance.\n> \n> I am waiting for the final resolution :-)\n> \n> > Basically, by \"cloning\", you are effectively looking at implementing ftp's\n> > way of dealing with a connection, having one \"control\" channel, and one \"data\"\n> > channel, is this right? So that the \"frontend\" has a means of sending a STOP\n> > command to the backend even while the backend is still sending the frontend\n> > the data?\n> \n> Not exactly. Looking from Tcl/Tk point of view, the mechanism is\n> transparent. I am using this structure :\n> \n> pg_loop $database \"select * from sometable\" record {\n> set something $record(somefield)\n> }\n> \n> But the new libpgtcl is opening a 'cloned' connection in order to :\n> - send the query through it\n> - receive the data from it\n> I am not able to break the connection using commands send through the\n> 'original' one. The query is 'stopped' by breaking the connection.\n> That's why we needed another connection. Because there isn't (yet) a\n> mechanism to tell the backend to abort transmission of the rest of the\n> query. I understand that the backend is not reading any more the socket\n> in order to receive some CANCEL signal from the frontend. So, dropping\n> the rest of the query results isn't possible without a hard break of the\n> connection.\n\nWe have this on the TODO list. We could use the TCP/IP out-of-band\nconnection option to inform the backend to stop things, but no one has\nimplemented it yet. (For the new Unix domain sockets, we could use\nsignals.) Anyone want to tackle it?\n\nman send shows:\n\n The flags parameter may include one or more of the following:\n\n #define MSG_OOB 0x1 /* process out-of-band data */\n #define MSG_DONTROUTE 0x4 /* bypass routing, use direct interface */\n\n The flag MSG_OOB is used to send ``out-of-band'' data on sockets that\n support this notion (e.g. 
SOCK_STREAM); the underlying protocol must al-\n so support ``out-of-band'' data. MSG_DONTROUTE is usually used only by\n diagnostic or routing programs.\n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Tue, 6 Jan 1998 10:13:01 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] I want to change libpq and libpgtcl for better handling\n\tof large query results" }, { "msg_contents": "On Tue, 6 Jan 1998, Bruce Momjian wrote:\n\n> We have this on the TODO list. We could use the TCP/IP out-of-band\n> connection option to inform the backend to stop things, but no one has\n> implemented it yet. (For the new Unix domain sockets, we could use\n> signals.) Anyone want to tackle it?\n\nI'll have to check, but I'm not sure if OOB is possible with Java.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Tue, 6 Jan 1998 18:11:32 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] I want to change libpq and libpgtcl for better handling\n\tof large query results" }, { "msg_contents": "Sorry, just to clear things:\n\n> We were talking about some changes to the protocol. Perhaps, we could do\n> something like changing it so it sends the result in blocks of tuples,\n> rather than everything in one block. Then, in between each packet, an ACK\n ^^^^^^^^^^^^^^^^^^^^^^^\n\nBackend sends tuples one by one - just after executor gets next tuple \nfrom upper plan, backend sends this tuple to client-side.\n\n> or CAN style packet could be sent to the backend, either asking for the\n> next, or canceling the results.\n\nVadim\n", "msg_date": "Wed, 07 Jan 1998 01:51:15 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] I want to change libpq and libpgtcl for better handling\n\tof large query results" }, { "msg_contents": "On Wed, 7 Jan 1998, Vadim B. Mikheev wrote:\n\n> Sorry, just to clear things:\n> \n> > We were talking about some changes to the protocol. Perhaps, we could do\n> > something like changing it so it sends the result in blocks of tuples,\n> > rather than everything in one block. 
Then, in between each packet, an ACK\n> ^^^^^^^^^^^^^^^^^^^^^^^\n> \n> Backend sends tuples one by one - just after executor gets next tuple \n> from upper plan, backend sends this tuple to client-side.\n\nOops, of course it does, sorry ;-)\n\n> > or CAN style packet could be sent to the backend, either asking for the\n> > next, or canceling the results.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Tue, 6 Jan 1998 23:13:23 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] I want to change libpq and libpgtcl for better handling\n\tof large query results" }, { "msg_contents": "Peter T Mount wrote:\n> \n> The only solution I was able to give was for them to use cursors, and\n> fetch the result in chunks.\n\nGot it!!!\n\nSeems everyone has 'voted' for using cursors.\n\nAs a matter of fact, I have tested both a \nBEGIN ; DECLARE CURSOR ; FETCH N; END;\nand a \nSELECT FROM \n\nBoth of them are locking for write the tables that they use, until end\nof processing.\n\nFetching records in chunks (100) would speed up a little the processing.\n\nBut I am still convinced that if frontend would be able to process\ntuples as soon as they come, the overall time of processing a big table\nwould be less.\nFetching in chunks, the frontend waits for the 100 records to come (time\nA) and then process them (time B). A and B cannot be overlapped.\n\nThanks a lot for helping me to decide. Reports in PgAccess will use\ncursors.\n\n-- \nConstantin Teodorescu\nFLEX Consulting Braila, ROMANIA\n", "msg_date": "Wed, 07 Jan 1998 10:14:37 +0200", "msg_from": "Constantin Teodorescu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] I want to change libpq and libpgtcl for better handling\n\tof large query results" } ]
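Since the thread settles on cursors as the supported way to walk a large result, a minimal libpq sketch of the pattern may help; the database name, table name, cursor name and chunk size are placeholders, and error handling is cut to the bare minimum. Only one FETCH worth of tuples is ever held in client memory.

    #include <stdio.h>
    #include <stdlib.h>
    #include "libpq-fe.h"

    int main(void)
    {
        PGconn   *conn = PQsetdb(NULL, NULL, NULL, NULL, "mydb");   /* placeholder */
        PGresult *res;
        int       i;

        if (PQstatus(conn) == CONNECTION_BAD)
        {
            fprintf(stderr, "connection to database failed\n");
            exit(1);
        }

        PQclear(PQexec(conn, "BEGIN"));
        PQclear(PQexec(conn, "DECLARE c CURSOR FOR SELECT * FROM bigtable"));

        for (;;)
        {
            res = PQexec(conn, "FETCH 100 IN c");       /* 100 rows per round trip */
            if (PQresultStatus(res) != PGRES_TUPLES_OK || PQntuples(res) == 0)
            {
                PQclear(res);
                break;                                  /* done, or an error */
            }
            for (i = 0; i < PQntuples(res); i++)
                printf("%s\n", PQgetvalue(res, i, 0));
            PQclear(res);                               /* free this chunk before the next */
        }

        PQclear(PQexec(conn, "CLOSE c"));
        PQclear(PQexec(conn, "END"));
        PQfinish(conn);
        return 0;
    }

The trade-off raised in the thread still holds: each FETCH is a round trip, so the chunk size is what balances client memory use against the number of trips to the backend.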
[ { "msg_contents": "Gautam Thaker (609-338-3907) wrote:\n> \n> I did not see contrib/spi because I have just the binary .rpm\n> for linux. I guess I will have to go get the entire src dist.\n> and perhaps even make the system myself.\n\nI see your problem. It seems that spi.h (and trigger.h) should be \ninstalled into _pg_installation_dir_/include dir to go into rpm - thanks,\nhope to fix this for 6.3. \n\n...Also, contrib should be included into rpm...\n\n> \n> BTW, how can I tell which version I have on my system?\n\nOnly rpm' author knows -:)\n\nVadim\n", "msg_date": "Tue, 06 Jan 1998 03:59:56 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [QUESTIONS] where to find SPI support (and what is the latest\n\tversion?)" } ]
[ { "msg_contents": "> Hi everybody:\n> I have the following information for you.\n> I installed Postgress 6.2.1 in the next platform:\n> Apollo 9000 Serie 800 with PA-RISC running HP-UX 10.20\n> - The version of PostgreSQL (v6.2.1, 6.1.1, beta 970703, etc.).\n> v6.2.1\n> - Your operating system (i.e. RedHat v4.0 Linux v2.0.26).\n> HP-UX 10.20\n> - Your hardware (SPARC, i486, etc.).\n> Precision Arquitecture-RISC (32 o 64 Mz ???)\n> - Install problems:\n> I had problems running HP's make so I need to install 'gmake'. By the\n> way, I couldn't find a program called gmake so I got 'make' instead.\n> I did ftp from prep.ai.mit.edu. File:/pub/gnu/make-3.76.1.tar.gz\n>\n> I hadn't any problems to install gnu make.\n>\n> As to the Postgress installation I had the following problems:\n> When I ran 'make all', I got a yacc error:\n> Too many states compiling gram.y. (I got a clue from the yacc compiler:\n> use Ns option)\n> Solution: I edited the Makefile.global file. I modified the line with\n> the YFLAGS variable (line 211), so I added the option Ns with a value of\n> 5000(The default for HP was 1000)\n> After this change The problem vanished but I found the next problem:The\n> size of look ahead tables was not big enough. (Default 650) So I\n> modified to 2500 with the Nl (En -el) option.\n> At last the line was modified in the following way:\n> Original line:\n> YFLAGS= -d\n> Modified line\n> YFLAGS= -d -Ns5000 -Nl2500\n>\n> After this I got the next fatal error:\n> cc -I../../include -W l,-E -Ae -DNOFIXADE -Dhpux -I.. -I.\n> include -c scan.c -o scan.o\n> cc: \"scan.c\", line 145: error 1000: Unexpected symbol: \"&\".\n>\n> The problem was very simple to solve. One comment was erronous written.\n> The '/' was missing. I just edited the file scan.c and everythig worked\n> fine.\n\nOh, I assumed that this was a comment from scan.l, but I'm now guessing\nthat this was a comment inserted by HP's lex program. Yes?\n\n> I ran the regress test and I could find some tests failed principally\n> due to the floating point precision.\n\nWhich is OK.\n\nGood information. Anyone interested in typing this up as a FAQ\n(doc/FAQ_HP)?\n\n - Tom\n\n", "msg_date": "Tue, 06 Jan 1998 01:55:02 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PORTS] Postgress installation in HP-UX 10.20." } ]
[ { "msg_contents": "\nOracle has adjustable disk block sizes, tuneable per instance\n(can you change it after install?? can't remember). In any\ncase the default is 2k or 4k, and things almost always go\nfaster with larger block sizes. On one project we needed to\ngo fast and had disk space to burn, so we upped it to 16k.\nThis was all _without_ using raw devices.\n\nMy *gut* feeling is that the underlying block size is a trade-off,\nsmaller blocks are better for small transactions, bigger blocks\nare better for bulk load/extract operations, with a penalty for \nfinding a single row. Optimum depends on the application, but is\nsomewhere between 2 and 32 k.\n\nHow hard would it be for postgresql to support adjustable block sizes? \nJust wondering.\n\n-- cary\[email protected]\n", "msg_date": "Mon, 5 Jan 1998 22:43:19 -0500 (EST)", "msg_from": "\"Cary B. O'Brien\" <[email protected]>", "msg_from_op": true, "msg_subject": "Block Sizes" } ]
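A first, minimal step toward the adjustable sizes asked about above would be to turn the hard-wired page size into a build-time knob. The sketch below only assumes that the existing BLCKSZ constant becomes overridable; the range check mirrors the 2k-32k window suggested in the message, and the override mechanism (a -D flag at compile time) is an assumption, not current behaviour.

    #include <stdio.h>

    #ifndef BLCKSZ
    #define BLCKSZ 8192                 /* default disk block size in bytes */
    #endif

    #if BLCKSZ < 2048 || BLCKSZ > 32768 || (BLCKSZ & (BLCKSZ - 1)) != 0
    #error "BLCKSZ must be a power of two between 2kB and 32kB"
    #endif

    int main(void)
    {
        printf("compiled-in block size: %d bytes\n", BLCKSZ);
        return 0;
    }

A 16k build would then be a matter of compiling with -DBLCKSZ=16384 rather than editing headers, which is about as close to "tuneable per instance" as a compile-time constant can get.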
[ { "msg_contents": "I believe I found a bug. If a user other than the postgres superuser is\ngiven permission to create databases, then he should be able to destroy\nthe databases he creates. Currently he can't, at least in version 6.2.1\ncomplied for SunOS 5.5. Only the poostgres superuser can delete\ndatabases. If otherusers try they get the following error message:\n\n\"WARN:pg_database: Permission denied.\ndestroydb: database destroy failed on tmpdb.\"\n\neventhough this user is the database admin for tmpdb as shown in the\npd_database table.\n", "msg_date": "Mon, 05 Jan 1998 20:05:30 -0800", "msg_from": "Kevin Witten <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres acl" }, { "msg_contents": "> I believe I found a bug. If a user other than the postgres superuser is\n> given permission to create databases, then he should be able to destroy\n> the databases he creates. Currently he can't, at least in version 6.2.1\n> complied for SunOS 5.5. Only the poostgres superuser can delete\n> databases. If otherusers try they get the following error message:\n>\n> \"WARN:pg_database: Permission denied.\n> destroydb: database destroy failed on tmpdb.\"\n>\n> eventhough this user is the database admin for tmpdb as shown in the\n> pd_database table.\n\nAt the moment, one requires \"create users\" privilege to destroy your own\ndatabase, but only \"create databases\" privilege to create one. I think\nthere is something about this on the ToDo list...\n\n - Tom\n\n", "msg_date": "Tue, 06 Jan 1998 04:36:06 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postgres acl" }, { "msg_contents": "> \n> I believe I found a bug. If a user other than the postgres superuser is\n> given permission to create databases, then he should be able to destroy\n> the databases he creates. Currently he can't, at least in version 6.2.1\n> complied for SunOS 5.5. Only the poostgres superuser can delete\n> databases. If otherusers try they get the following error message:\n> \n> \"WARN:pg_database: Permission denied.\n> destroydb: database destroy failed on tmpdb.\"\n> \n> eventhough this user is the database admin for tmpdb as shown in the\n> pd_database table.\n> \n> \n\nHere is the fix. This bug has been around for a while:\n\n---------------------------------------------------------------------------\n\n*** ./aclchk.c.orig\tTue Jan 6 00:10:25 1998\n--- ./aclchk.c\tTue Jan 6 00:18:40 1998\n***************\n*** 410,416 ****\n \t\t * pg_database table, there is still additional permissions\n \t\t * checking in dbcommands.c\n \t\t */\n! \t\tif (mode & ACL_AP)\n \t\t\treturn ACLCHECK_OK;\n \t}\n \n--- 410,416 ----\n \t\t * pg_database table, there is still additional permissions\n \t\t * checking in dbcommands.c\n \t\t */\n! \t\tif ((mode & ACL_WR) || (mode & ACL_AP))\n \t\t\treturn ACLCHECK_OK;\n \t}\n \n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Tue, 6 Jan 1998 00:19:32 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postgres acl" } ]
[ { "msg_contents": "I would like to stand up as being very much in favor of two related things:\n\n(1) adjustable blocksizes\n(2) a larger MAX TUPLE size\n\nAs far as I can tell (and that ain't far), there would only be relatively \nminor changes that would have to be made to give the option of allowing \nthe user to select 2, 4, 8 or 16 as the blocksize. Concurrently, it \nwould seem wise to simply up the max tuple size to 32k. It seems to me \nunlikely that this would have a noticeable performance impact. In order \nto do this, we would need to know about the 32 bit ItemIdData structure in \n/storage/itemid.h (see my previous posts). It was recommended to me that \nlp_flags might still be only using 2 of the 6 bits allocated to it. If \nso, increasing lp_offset to 15 and lp_len to 15, i.e. 2^15 bits, i.e. \n32768 bytes max tuple size, would be possible! I think!\n\nJust my 2 cents.\n\nEddie\n", "msg_date": "Mon, 05 Jan 1998 23:08:10 -0500 (EST)", "msg_from": "Integration <[email protected]>", "msg_from_op": true, "msg_subject": "My 2c on adjustable blocksizes" }, { "msg_contents": "> I would like to stand up as being very much in favor of two related things:\n>\n> (1) adjustable blocksizes\n> (2) a larger MAX TUPLE size\n>\n> As far as I can tell (and that ain't far), there would only be relatively\n> minor changes that would have to be made to give the option of allowing\n> the user to select 2, 4, 8 or 16 as the blocksize. Concurrently, it\n> would seem wise to simply up the max tuple size to 32k. It seems to me\n> unlikely that this would have a noticeable performance impact. In order\n> to do this, we would need to know about the 32 bit ItemIdData structure in\n> /storage/itemid.h (see my previous posts). It was recommended to me that\n> lp_flags might still be only using 2 of the 6 bits allocated to it. If\n> so, increasing lp_offset to 15 and lp_len to 15, i.e. 2^15 bits, i.e.\n> 32768 bytes max tuple size, would be possible! I think!\n\nIf someone came up with some clean patches to allow #define declarations for\nblock size and for tuple sizes, I'm sure they would be of interest. The\nongoing work being discussed for v6.3 would not conflict with those areas (I\nsuspect) so go to it!\n\nI have noticed some integer constants scattered around the code (in places\nwhere they don't belong) which are related to a maximum tuple size. For\nexample, there is an arbitrary 4096 byte limit on the size of a character\ncolumn, and the 4096 is hardcoded into the parser. That particular one would\nbe easy to change...\n\n - Tom\n\n", "msg_date": "Tue, 06 Jan 1998 04:43:26 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] My 2c on adjustable blocksizes" } ]
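The 15/2/15 split proposed above still fits in the same 32 bits as the existing line pointer, which is easy to check with a stand-alone snippet. This is the layout from the message, not the current one in include/storage/itemid.h (which, as noted, gives lp_flags six bits); whether every compiler packs it into exactly four bytes is the alignment question raised in the Block Sizes discussion.

    #include <stdio.h>

    /* The proposed layout: one 32-bit word per line pointer, with 15 bits
     * each for offset and length (32k pages/tuples) and only the 2 flag
     * bits actually in use.  Illustration of the proposal, nothing more. */
    typedef struct ItemIdData
    {
        unsigned lp_off   : 15;     /* offset of tuple within the page */
        unsigned lp_flags : 2;      /* flag bits                       */
        unsigned lp_len   : 15;     /* length of the tuple             */
    } ItemIdData;

    int main(void)
    {
        printf("sizeof(ItemIdData)    = %u bytes\n", (unsigned) sizeof(ItemIdData));
        printf("largest offset/length = %u\n", (1u << 15) - 1);
        return 0;
    }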
[ { "msg_contents": "\nWell, I've found the problem that was breaking Large Objects and, although\nI have a fix, I still believe the cause is still there.\n\nIt wasn't a protocol problem after all, but a memory leak that was causing\nthe backend to throw a Segmentation Violation when an argument was being\nfree'd.\n\nFor example:\n\nRunning src/test/example/testlo2, it calls lo_import on a file, then\nlo_export to export the newly formed large object to another file.\n\nAnyhow, every time, on the last call to lo_write (when importing the\nfile), the backend seemed to just die. The only difference between this\ncall and the previous calls, is that the amount of data to write is\nsmaller. Even changing the block size didn't change this fact.\n\nAnyhow, I'm now trying to break it before posting the patch, but both\nlibpq & JDBC are running flawlessly.\n\nHopefully, the patch will be up later this afternoon.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Tue, 6 Jan 1998 12:16:08 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": true, "msg_subject": "Large objects fixed" } ]
[ { "msg_contents": "> \n> How hard would it be for postgresql to support adjustable block sizes? \n> Just wondering.\n> \n\nI can take a stab at this tonite after work now that the snapshot is there.\nStill have around some of the files/diffs from looking at this a year ago...\n\nI don't think it will be hard, just a few files with BLCKSZ/MAXBLCKSZ\nreferences to check for breakage. Appears that only one bit of lp_flags is\nbeing used too, so that would seem to allow up to 32k blocks.\n\nOther issue is the bit alignment in the ItemIdData structure. In the past,\nI've read that bit operations were slower than int ops. Is this the case?\n\nI want to check to see if the structure is only 32 bits and not being padded\nby the compiler. Worse to worse, make one field of 32 bits and make macros\nto access the three pieces or make lp_off & lp_len shorts and lp_flags a char.\n\nI can check the aix compiler, but what does gcc and other compilers do with\nbit field alignment?\n\n\ndarrenk\n", "msg_date": "Tue, 6 Jan 1998 08:51:45 -0500", "msg_from": "[email protected] (Darren King)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Block Sizes" }, { "msg_contents": "> \n> > \n> > How hard would it be for postgresql to support adjustable block sizes? \n> > Just wondering.\n> > \n> \n> I can take a stab at this tonite after work now that the snapshot is there.\n> Still have around some of the files/diffs from looking at this a year ago...\n> \n> I don't think it will be hard, just a few files with BLCKSZ/MAXBLCKSZ\n> references to check for breakage. Appears that only one bit of lp_flags is\n> being used too, so that would seem to allow up to 32k blocks.\n> \n> Other issue is the bit alignment in the ItemIdData structure. In the past,\n> I've read that bit operations were slower than int ops. Is this the case?\n\nUsually, yes.\n\n> \n> I want to check to see if the structure is only 32 bits and not being padded\n> by the compiler. Worse to worse, make one field of 32 bits and make macros\n> to access the three pieces or make lp_off & lp_len shorts and lp_flags a char.\n> \n> I can check the aix compiler, but what does gcc and other compilers do with\n> bit field alignment?\n\nI don't know.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Tue, 6 Jan 1998 10:18:18 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Block Sizes" } ]
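The alternative mentioned above - one plain 32-bit field with macros to pick out the three pieces - avoids relying on how a particular compiler packs and aligns bit fields, at the cost of doing the shifts and masks by hand. A stand-alone sketch follows; the macro names and the 15/2/15 split are illustrative, matching the proposal in the earlier blocksize message rather than any committed layout.

    #include <stdio.h>

    typedef unsigned int ItemIdWord;        /* assumed to be 32 bits wide */

    #define ITEMID_GET_OFF(w)    ((w) & 0x7FFF)                /* low 15 bits  */
    #define ITEMID_GET_FLAGS(w)  (((w) >> 15) & 0x3)           /* next 2 bits  */
    #define ITEMID_GET_LEN(w)    (((w) >> 17) & 0x7FFF)        /* high 15 bits */
    #define ITEMID_MAKE(off, flags, len) \
            (((ItemIdWord) (off)    & 0x7FFF)        | \
             (((ItemIdWord) (flags) & 0x3)    << 15) | \
             (((ItemIdWord) (len)   & 0x7FFF) << 17))

    int main(void)
    {
        ItemIdWord w = ITEMID_MAKE(8100, 1, 144);

        printf("off=%u flags=%u len=%u, word is %u bytes\n",
               ITEMID_GET_OFF(w), ITEMID_GET_FLAGS(w), ITEMID_GET_LEN(w),
               (unsigned) sizeof(ItemIdWord));
        return 0;
    }

Whether open-coded shifts beat compiler-generated bit-field access is exactly the performance question asked above, but the packed-word form at least makes the on-disk size independent of the compiler's bit-field rules.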
[ { "msg_contents": "> I've run the regression tests on today's source tree, and found lots of\n> ordering differences..., and one different result.\n>\n> The different result is in the \"select_distinct_on\" test; the original\n> result had 8 rows and the new result has 40 rows. However, I'm getting\n> myself confused on what the correct result _should_ be, since \"select\n> distinct on\" is not documented. For a query like:\n>\n> SELECT DISTINCT ON string4 two, string4, ten FROM temp;\n>\n> What is the \"ON string4 two\" clause saying? Anyway, the result is\n> different than before, so we would probably want to look at it. I'm away\n> 'til after the weekend, but can help after that.\n\nHi Bruce. Some of the \"order by\" clauses are currently broken in the\nregression tests (at least on my machine). Do you see this also? For\nexample, in the point test:\n\nQUERY: SET geqo TO 'off';\nQUERY: SELECT '' AS thirtysix, p1.f1 AS point1, p2.f1 AS point2, p1.f1 <->\np2.f1 AS dist\n FROM POINT_TBL p1, POINT_TBL p2\n ORDER BY dist, point1 using <<, point2 using <<;\nthirtysix|point1 |point2 | dist\n---------+----------+----------+----------------\n |(0,0) |(-10,0) | 10\n |(-10,0) |(-10,0) | 0\n |(-3,4) |(-10,0) |8.06225774829855\n ...\n\nAlso, some of Vadim's contrib stuff is broken since WARN is no longer\ndefined. I can post patches for that (there are two files affected) but\nsubstituted ERROR and am not certain whether that is the correct choice.\n\nLet me know if I can help with anything...\n\n - Tom\n\n", "msg_date": "Tue, 06 Jan 1998 15:04:18 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Current regression tests" }, { "msg_contents": "> \n> > I've run the regression tests on today's source tree, and found lots of\n> > ordering differences..., and one different result.\n> >\n> > The different result is in the \"select_distinct_on\" test; the original\n> > result had 8 rows and the new result has 40 rows. However, I'm getting\n> > myself confused on what the correct result _should_ be, since \"select\n> > distinct on\" is not documented. For a query like:\n> >\n> > SELECT DISTINCT ON string4 two, string4, ten FROM temp;\n> >\n> > What is the \"ON string4 two\" clause saying? Anyway, the result is\n> > different than before, so we would probably want to look at it. I'm away\n> > 'til after the weekend, but can help after that.\n> \n> Hi Bruce. Some of the \"order by\" clauses are currently broken in the\n> regression tests (at least on my machine). Do you see this also? For\n> example, in the point test:\n> \n> QUERY: SET geqo TO 'off';\n> QUERY: SELECT '' AS thirtysix, p1.f1 AS point1, p2.f1 AS point2, p1.f1 <->\n> p2.f1 AS dist\n> FROM POINT_TBL p1, POINT_TBL p2\n> ORDER BY dist, point1 using <<, point2 using <<;\n> thirtysix|point1 |point2 | dist\n> ---------+----------+----------+----------------\n> |(0,0) |(-10,0) | 10\n> |(-10,0) |(-10,0) | 0\n> |(-3,4) |(-10,0) |8.06225774829855\n> ...\n> \n> Also, some of Vadim's contrib stuff is broken since WARN is no longer\n> defined. 
I can post patches for that (there are two files affected) but\n> substituted ERROR and am not certain whether that is the correct choice.\n> \n> Let me know if I can help with anything...\n\nI am starting to agree with Vadim that it is too much work to go though\nevery elog(), and doing it by directory or file is very imprecise, and\nmay cause confusion.\n\nShould I throw in the towel and make them all ERROR?\n\nI don't know anything that would cause the ORDER BY problems.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Tue, 6 Jan 1998 10:55:12 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current regression tests" }, { "msg_contents": "On Tue, 6 Jan 1998, Bruce Momjian wrote:\n\n> Should I throw in the towel and make them all ERROR?\n\n\tI'm curious, but, again, how does everyone else handle this?\n\n", "msg_date": "Tue, 6 Jan 1998 11:46:56 -0500 (EST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current regression tests" } ]
[ { "msg_contents": "", "msg_date": "Tue, 06 Jan 1998 15:53:35 +0000", "msg_from": "Tony Rios <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] database size" }, { "msg_contents": "Hi,\n\nI created a table with two columns of type int, and loaded about 300 K records\nin it. So, the total size of the table is approx. that of 600 K integers,\nroughly 2.4 MB.\nBut, the file corresponding to the table in pgsql/data/base directory\nhas a size of 19 MB. I was wondering if I have done something wrong in\nthe installation or usage, or is it the normal behavior ?\n\nAlso, I was trying to execute the query: \nselect item as item, count(*) as cnt into table C_temp \nfrom temp group by item;\n\nHere, temp is the name of the table which contains the data and item is an\ninteger attribute. While doing the sort for the group by, the size of one of\nthe temporary pg_psort relation grows to about 314 MB. The size of the temp \ntable is as mentioned above. If someone tried similar queries, could you\nplease tell me if this is normal. \nThe above query did not finish even after 2 hours. I am executing it on a \nSun Sparc 5 running Sun OS 5.5.\n\nThanks\n--shiby\n\n\n", "msg_date": "Tue, 06 Jan 1998 18:09:52 -0500", "msg_from": "Shiby Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "database size" }, { "msg_contents": "Shiby Thomas wrote:\n> \n> Hi,\n> \n> I created a table with two columns of type int, and loaded about 300 K records\n> in it. So, the total size of the table is approx. that of 600 K integers,\n> roughly 2.4 MB.\n> But, the file corresponding to the table in pgsql/data/base directory\n> has a size of 19 MB. I was wondering if I have done something wrong in\n> the installation or usage, or is it the normal behavior ?\n\nThis is OK. First thing - int is not 2 bytes long, it's 4 bytes long.\nUse int2 if you want so. Second - you have to add up other per-record\nstuff like oids and other internal attributes. \n\n> Also, I was trying to execute the query:\n> select item as item, count(*) as cnt into table C_temp\n> from temp group by item;\n> \n> Here, temp is the name of the table which contains the data and item is an\n> integer attribute. While doing the sort for the group by, the size of one of\n> the temporary pg_psort relation grows to about 314 MB. The size of the temp\n> table is as mentioned above. \n\nIt ain't good. Seems like the psort is very hungry. \n\nMike\n\n-- \nWWW: http://www.lodz.pdi.net/~mimo tel: Int. Acc. Code + 48 42 148340\nadd: Michal Mosiewicz * Bugaj 66 m.54 * 95-200 Pabianice * POLAND\n", "msg_date": "Tue, 06 Jan 1998 23:37:43 +0000", "msg_from": "\"Micha��� Mosiewicz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] database size" }, { "msg_contents": "On Tue, 6 Jan 1998, Shiby Thomas wrote:\n\n> Hi,\n> \n> I created a table with two columns of type int, and loaded about 300 K records\n> in it. So, the total size of the table is approx. that of 600 K integers,\n> roughly 2.4 MB.\n> But, the file corresponding to the table in pgsql/data/base directory\n> has a size of 19 MB. I was wondering if I have done something wrong in\n> the installation or usage, or is it the normal behavior ?\n> \n> Also, I was trying to execute the query: \n> select item as item, count(*) as cnt into table C_temp \n> from temp group by item;\n> \n> Here, temp is the name of the table which contains the data and item is an\n> integer attribute. 
While doing the sort for the group by, the size of one of\n> the temporary pg_psort relation grows to about 314 MB. The size of the temp \n> table is as mentioned above. If someone tried similar queries, could you\n> please tell me if this is normal. \n> The above query did not finish even after 2 hours. I am executing it on a \n> Sun Sparc 5 running Sun OS 5.5.\n\n\tWhat version of PostgreSQL are you running? *raised eyebrow*\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 6 Jan 1998 19:42:18 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] database size" }, { "msg_contents": "On Tue, 6 Jan 1998, Tony Rios wrote:\n\n> At 06:09 PM 1/6/98 -0500, Shiby Thomas wrote:\n> \n> >Hi,\n> \n> >\n> \n> >I created a table with two columns of type int, and loaded about 300 K records\n> \n> >in it. So, the total size of the table is approx. that of 600 K integers,\n> \n> >roughly 2.4 MB.\n> \n> >But, the file corresponding to the table in pgsql/data/base directory\n> \n> >has a size of 19 MB. I was wondering if I have done something wrong in\n> \n> >the installation or usage, or is it the normal behavior ?\n> \n> >\n> \n> \n> Just wondering.. did you happen to do an INSERT into the database,\n> \n> then delete some rows.. say 19megs worth, then re-add... From what I've\n> \n> seen msql db's will always be at least the size of the largest you've ever\n> \n> had the database before. It will over time, overrite existing deleted\n> \n> records, but it keeps the data still in there, just sets a delete flag.\n> \n> \n> If you really need to cut the size down, I've had to delete the database\n> \n> completely, then create another table from scratch. Not sure if there\n> \n> is a 'purge' type function available, but you have to be careful that\n> \n> nobody is accessing the db at that time, since it's very sensitive at\n> \n> that time.\n\n\tvacuum will clean out the deleted records and truncate the table...has\nbeen so since v6.1, I believe...\n\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 6 Jan 1998 20:18:19 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] database size" }, { "msg_contents": "\n=> \tWhat version of PostgreSQL are you running? *raised eyebrow*\n6.2.1. I haven't yet applied the patches(put in the PostgreSQL web page) \nthough.\n\n--shiby\n\n\n\n", "msg_date": "Tue, 06 Jan 1998 20:10:39 -0500", "msg_from": "Shiby Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] database size " }, { "msg_contents": "> Just wondering.. did you happen to do an INSERT into the database,\n> \n> then delete some rows.. say 19megs worth, then re-add... From what I've\n> \n> seen msql db's will always be at least the size of the largest you've ever\n> \n> had the database before. It will over time, overrite existing deleted\n> \n> records, but it keeps the data still in there, just sets a delete flag.\n> \n> \n> If you really need to cut the size down, I've had to delete the database\n> \n> completely, then create another table from scratch. 
Not sure if there\n> \n> is a 'purge' type function available, but you have to be careful that\n> \n> nobody is accessing the db at that time, since it's very sensitive at\n> \n> that time.\n> \n\nThanks to Vadim, vacuum shrinks the size to the exact amount needed to\nstore the data. Also, the table is locked while vacuuming, so no one\ncan accidentally access it.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Tue, 6 Jan 1998 20:16:34 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] database size" } ]
[ { "msg_contents": "Forwarded message:\n> > I believe I found a bug. If a user other than the postgres superuser is\n> > given permission to create databases, then he should be able to destroy\n> > the databases he creates. Currently he can't, at least in version 6.2.1\n> > complied for SunOS 5.5. Only the poostgres superuser can delete\n> > databases. If otherusers try they get the following error message:\n> > \n> > \"WARN:pg_database: Permission denied.\n> > destroydb: database destroy failed on tmpdb.\"\n> > \n> > eventhough this user is the database admin for tmpdb as shown in the\n> > pd_database table.\n> > \n> > \n> \n> Here is the fix. This bug has been around for a while:\n> \n> ---------------------------------------------------------------------------\n> \n> *** ./aclchk.c.orig\tTue Jan 6 00:10:25 1998\n> --- ./aclchk.c\tTue Jan 6 00:18:40 1998\n> ***************\n> *** 410,416 ****\n> \t\t * pg_database table, there is still additional permissions\n> \t\t * checking in dbcommands.c\n> \t\t */\n> ! \t\tif (mode & ACL_AP)\n> \t\t\treturn ACLCHECK_OK;\n> \t}\n> \n> --- 410,416 ----\n> \t\t * pg_database table, there is still additional permissions\n> \t\t * checking in dbcommands.c\n> \t\t */\n> ! \t\tif ((mode & ACL_WR) || (mode & ACL_AP))\n> \t\t\treturn ACLCHECK_OK;\n> \t}\n\nI am now thinking about this patch, and I don't think I like it. The\noriginal code allowed APPEND-only for users who can create databases,\nbut no DELETE. The patch gives them DELETE permission, so they can\ndestroy their database, but they could issue the command:\n\n\tselect from pg_database\n\nand destroy everyone's. 'drop database' does checkes, but the acl check\nis done in the executor, and it doesn't know if the the checks have been\nperformed or not.\n\nCan someone who has permission to create databases be trusted not to\ndelete others? If we say no, how do we make sure they can change\npg_database rows on only databases that they own?\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Tue, 6 Jan 1998 11:52:17 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Postgres acl (fwd)" }, { "msg_contents": "On Tue, 6 Jan 1998, Bruce Momjian wrote:\n\n> Can someone who has permission to create databases be trusted not to\n> delete others? If we say no, how do we make sure they can change\n> pg_database rows on only databases that they own?\n\n\tdeleting a database is accomplished using 'drop database', no?\nCan the code for that not be modified to see whether the person dropping\nthe database is the person that owns it *or* pgsuperuser?\n\n\n", "msg_date": "Tue, 6 Jan 1998 12:11:19 -0500 (EST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postgres acl (fwd)" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Forwarded message:\n> > > I believe I found a bug. If a user other than the postgres superuser is\n> > > given permission to create databases, then he should be able to destroy\n> > > the databases he creates. Currently he can't, at least in version 6.2.1\n> > > complied for SunOS 5.5. Only the poostgres superuser can delete\n> > > databases. If otherusers try they get the following error message:\n> > >\n> > > \"WARN:pg_database: Permission denied.\n> > > destroydb: database destroy failed on tmpdb.\"\n> > >\n> > > eventhough this user is the database admin for tmpdb as shown in the\n> > > pd_database table.\n> > >\n> > >\n> >\n> > Here is the fix. 
This bug has been around for a while:\n> >\n> > ---------------------------------------------------------------------------\n> >\n> > *** ./aclchk.c.orig Tue Jan 6 00:10:25 1998\n> > --- ./aclchk.c Tue Jan 6 00:18:40 1998\n> > ***************\n> > *** 410,416 ****\n> > * pg_database table, there is still additional permissions\n> > * checking in dbcommands.c\n> > */\n> > ! if (mode & ACL_AP)\n> > return ACLCHECK_OK;\n> > }\n> >\n> > --- 410,416 ----\n> > * pg_database table, there is still additional permissions\n> > * checking in dbcommands.c\n> > */\n> > ! if ((mode & ACL_WR) || (mode & ACL_AP))\n> > return ACLCHECK_OK;\n> > }\n> \n> I am now thinking about this patch, and I don't think I like it. The\n> original code allowed APPEND-only for users who can create databases,\n> but no DELETE. The patch gives them DELETE permission, so they can\n> destroy their database, but they could issue the command:\n> \n> select from pg_database\n> \n> and destroy everyone's. 'drop database' does checkes, but the acl check\n> is done in the executor, and it doesn't know if the the checks have been\n> performed or not.\n> \n> Can someone who has permission to create databases be trusted not to\n> delete others? If we say no, how do we make sure they can change\n> pg_database rows on only databases that they own?\n> \n> --\n> Bruce Momjian\n> [email protected]\n\n\nCan't you check to see if they own the database before you let them\ndelete the row in pg_database. If a row is deleted from pg_database, it\nis disallowed unless the userid is the same as the datdba field in that\nrow?\n", "msg_date": "Tue, 06 Jan 1998 10:01:03 -0800", "msg_from": "Kevin Witten <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postgres acl (fwd)" }, { "msg_contents": "> \n> On Tue, 6 Jan 1998, Bruce Momjian wrote:\n> \n> > Can someone who has permission to create databases be trusted not to\n> > delete others? If we say no, how do we make sure they can change\n> > pg_database rows on only databases that they own?\n> \n> \tdeleting a database is accomplished using 'drop database', no?\n> Can the code for that not be modified to see whether the person dropping\n> the database is the person that owns it *or* pgsuperuser?\n\nIt already does the check, but issues an SQL from the C code to delete\nfrom pg_database. I believe any user who can create a database can\nissue the same SQL command from psql, bypassing the drop database\nchecks, no?\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Tue, 6 Jan 1998 13:42:02 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Postgres acl (fwd)" }, { "msg_contents": "On Tue, 6 Jan 1998, Bruce Momjian wrote:\n\n> > \n> > On Tue, 6 Jan 1998, Bruce Momjian wrote:\n> > \n> > > Can someone who has permission to create databases be trusted not to\n> > > delete others? If we say no, how do we make sure they can change\n> > > pg_database rows on only databases that they own?\n> > \n> > \tdeleting a database is accomplished using 'drop database', no?\n> > Can the code for that not be modified to see whether the person dropping\n> > the database is the person that owns it *or* pgsuperuser?\n> \n> It already does the check, but issues an SQL from the C code to delete\n> from pg_database. 
I believe any user who can create a database can\n> issue the same SQL command from psql, bypassing the drop database\n> checks, no?\n\n\tOkay, I understand what you mean here...so I guess the next\nquestion is should system tables be directly modifyable by non-superuser?\n\n\tFor instance, we have a 'drop database' SQL command...can we\nrestrict 'delete from pg_database' to just superuser, while leaving 'drop\ndatabase' open to those with createdb privileges? Same with 'create\nuser', and, possible, a 'create group' command instead of 'insert into\npg_group'?\n\n\n", "msg_date": "Tue, 6 Jan 1998 13:47:17 -0500 (EST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postgres acl (fwd)" }, { "msg_contents": "> \n> On Tue, 6 Jan 1998, Bruce Momjian wrote:\n> \n> > > \n> > > On Tue, 6 Jan 1998, Bruce Momjian wrote:\n> > > \n> > > > Can someone who has permission to create databases be trusted not to\n> > > > delete others? If we say no, how do we make sure they can change\n> > > > pg_database rows on only databases that they own?\n> > > \n> > > \tdeleting a database is accomplished using 'drop database', no?\n> > > Can the code for that not be modified to see whether the person dropping\n> > > the database is the person that owns it *or* pgsuperuser?\n> > \n> > It already does the check, but issues an SQL from the C code to delete\n> > from pg_database. I believe any user who can create a database can\n> > issue the same SQL command from psql, bypassing the drop database\n> > checks, no?\n> \n> \tOkay, I understand what you mean here...so I guess the next\n> question is should system tables be directly modifyable by non-superuser?\n> \n> \tFor instance, we have a 'drop database' SQL command...can we\n> restrict 'delete from pg_database' to just superuser, while leaving 'drop\n> database' open to those with createdb privileges? Same with 'create\n> user', and, possible, a 'create group' command instead of 'insert into\n> pg_group'?\n\nYes, we must replace the SQL commands in commands/dbcommands.c with\nlower-level C table access routines so we do not have to go to the\nexecutor, where the access permissions are checked.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Tue, 6 Jan 1998 14:21:32 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Postgres acl (fwd)" } ]
[ { "msg_contents": "> > > > Can someone who has permission to create databases be trusted not to\n> > > > delete others? If we say no, how do we make sure they can change\n> > > > pg_database rows on only databases that they own?\n> > > \n> > > \tdeleting a database is accomplished using 'drop database', no?\n> > > Can the code for that not be modified to see whether the person dropping\n> > > the database is the person that owns it *or* pgsuperuser?\n> > \n> > It already does the check, but issues an SQL from the C code to delete\n> > from pg_database. I believe any user who can create a database can\n> > issue the same SQL command from psql, bypassing the drop database\n> > checks, no?\n> \n> \tOkay, I understand what you mean here...so I guess the next\n> question is should system tables be directly modifyable by non-superuser?\n> \n> \tFor instance, we have a 'drop database' SQL command...can we\n> restrict 'delete from pg_database' to just superuser, while leaving 'drop\n> database' open to those with createdb privileges? Same with 'create\n> user', and, possible, a 'create group' command instead of 'insert into\n> pg_group'?\n\nIMHO, the system tables should _never_ be directly modifiable by anyone\nother than the superuser/dba. The rest of the population should have to\nuse a command of some sort that can be grant/revoked by said superuser/dba.\n\ndarrenk\n", "msg_date": "Tue, 6 Jan 1998 14:20:15 -0500", "msg_from": "[email protected] (Darren King)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Postgres acl (fwd)" }, { "msg_contents": "> > > > > Can someone who has permission to create databases be trusted not to\n> > > > > delete others? If we say no, how do we make sure they can change\n> > > > > pg_database rows on only databases that they own?\n> > > >\n> > > > deleting a database is accomplished using 'drop database', no?\n> > > > Can the code for that not be modified to see whether the person dropping\n> > > > the database is the person that owns it *or* pgsuperuser?\n> > >\n> > > It already does the check, but issues an SQL from the C code to delete\n> > > from pg_database. I believe any user who can create a database can\n> > > issue the same SQL command from psql, bypassing the drop database\n> > > checks, no?\n> >\n> > Okay, I understand what you mean here...so I guess the next\n> > question is should system tables be directly modifyable by non-superuser?\n> >\n> > For instance, we have a 'drop database' SQL command...can we\n> > restrict 'delete from pg_database' to just superuser, while leaving 'drop\n> > database' open to those with createdb privileges? Same with 'create\n> > user', and, possible, a 'create group' command instead of 'insert into\n> > pg_group'?\n>\n> IMHO, the system tables should _never_ be directly modifiable by anyone\n> other than the superuser/dba. The rest of the population should have to\n> use a command of some sort that can be grant/revoked by said superuser/dba.\n\nAre there any maintenance operations which require a \"delete from pg_xxx\"? If\nnot, then we could just modify the parser (or the executor?) to check the table\nname and not allow insert/delete from any table whose name starts with \"pg_\". Had\nto ask, although I'm sure this is too easy to actually work :)\n\n - Tom\n\n", "msg_date": "Wed, 07 Jan 1998 01:25:47 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postgres acl (fwd)" }, { "msg_contents": "On Wed, 7 Jan 1998, Thomas G. 
Lockhart wrote:\n\n> Are there any maintenance operations which require a \"delete from pg_xxx\"? If\n> not, then we could just modify the parser (or the executor?) to check the table\n> name and not allow insert/delete from any table whose name starts with \"pg_\". Had\n> to ask, although I'm sure this is too easy to actually work :)\n\n\tAs long as what you are suggesting doesn't break \"drop database\", \"drop\ntable\", \"drop view\"...I realize that this is obvious, but...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 6 Jan 1998 22:18:01 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postgres acl (fwd)" }, { "msg_contents": "> >\n> > IMHO, the system tables should _never_ be directly modifiable by anyone\n> > other than the superuser/dba. The rest of the population should have to\n> > use a command of some sort that can be grant/revoked by said superuser/dba.\n> \n> Are there any maintenance operations which require a \"delete from pg_xxx\"? If\n> not, then we could just modify the parser (or the executor?) to check the table\n> name and not allow insert/delete from any table whose name starts with \"pg_\". Had\n> to ask, although I'm sure this is too easy to actually work :)\n\nInteresting thought. Wonder if it would work?\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Tue, 6 Jan 1998 21:23:13 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postgres acl (fwd)" }, { "msg_contents": "> \n> On Wed, 7 Jan 1998, Thomas G. Lockhart wrote:\n> \n> > Are there any maintenance operations which require a \"delete from pg_xxx\"? If\n> > not, then we could just modify the parser (or the executor?) to check the table\n> > name and not allow insert/delete from any table whose name starts with \"pg_\". Had\n> > to ask, although I'm sure this is too easy to actually work :)\n> \n> \tAs long as what you are suggesting doesn't break \"drop database\", \"drop\n> table\", \"drop view\"...I realize that this is obvious, but...\n\nGood point. Yes it does. dbcommands.c and user.c both do direct calls\nto pg_exec to pass everything into the parser, optimizer, and executor. \n\nThe real fix is to do things like copy.c does, by directly calling the C\nroutines and making the desired changes there. Or to have some global\nflag that says \"Backend performed the rights test, let this SQL\nsucceed.\" That may be cleaner. Table access rights are tested in just\none function, I think.\n\nWe still have the pg_user.passwd problem, and pg_user is not readable by\ngeneral users. I can't think of a fix for this.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Tue, 6 Jan 1998 21:27:44 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postgres acl (fwd)" } ]
[ { "msg_contents": "> \n> Bruce,\n> \n> Just running the regression tests on the latest CVS on SPARC-Linux!!\n> \n> Appart from several other ordering and precision errors I'm seeing\n> errors in constraints tests due to output/constraints.source not\n> being updated for the new error messages.\n\nI have just fixed many of these WARN problems. I am looking at the new\nresults. The first problem:\n\t\n\t====== boolean ======\n\t166,168d165\n\t< |f |f \n\t< |f |f \n\t< |f |f \n\t170a168\n\t> |f |f \n\t173a172\n\t> |f |f \n\t176a176\n\t> |f |f \n\nis because the query has no ORDER BY.\n\nThe second problem looks serious:\n\nQUERY: SET geqo TO 'off';\nQUERY: SELECT '' AS thirtysix, p1.f1 AS point1, p2.f1 AS point2, p1.f1<-> p2.f$\n FROM POINT_TBL p1, POINT_TBL p2\n ORDER BY dist, point1 using <<, point2 using <<;\nthirtysix|point1 |point2 | dist\n---------+----------+----------+----------------\n |(10,10) |(-10,0) |22.3606797749979 \n |(0,0) |(-10,0) | 10\n\nThe 'dist' is not being ordered. \n\nIn geometry we have:\n\n104c103\n\t< |(0,0) |[(0,0),(6,6)] |(-0,0) \n\t---\n\t> |(0,0) |[(0,0),(6,6)] |(0,0) \n \nI am happy to see the -0 changed to zero, but this may be just on my\nplatform. Also:\n\n\t< |(-0,0),(-20,-20) \n\t---\n\t> |(0,0),(-20,-20) \n\t213c212\n\t< |(-0,2),(-14,0) \n\t---\n\t> |(0,2),(-14,0) \n\t221c220\n\t< |(14,-0),(0,-34) \n\t---\n\t> |(14,0),(0,-34) \n\t236c235\n\nWe also have broken sorting in timespan:\n\nQUERY: SELECT '' AS fortyfive, r1.*, r2.*\n FROM TIMESPAN_TBL r1, TIMESPAN_TBL r2\n WHERE r1.f1 > r2.f1\n ORDER BY r1.f1, r2.f1;\nfortyfive|f1 |f1\n---------+-----------------------------+-----------------------------\n |@ 6 years |@ 14 secs ago\n |@ 5 mons |@ 14 secs ago\n |@ 5 mons 12 hours |@ 14 secs ago\n \nHow long has this been broken? Any idea on a cause. Obviously it is a\nsorting issue, but where?\n\n-- \nBruce Momjian [email protected]\n\n", "msg_date": "Tue, 6 Jan 1998 14:57:01 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: consttraints.source" } ]
[ { "msg_contents": "\nThe Mailing List Archives are now available through your Web Browser at:\n\n\thttp://www.postgresql.org/mhonarc/pgsql-questions\n\t\t- all archives converted over\n\thttp://www.postgresql.org/mhonarc/pgsql-hackers\n\t\t- Oct to present converted so far\n\nWill most likely be integrating WebGlimpse in as the search engine once I've \nfully figured that thing out :)\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 6 Jan 1998 17:10:01 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Mailing List Archives via MHonarc" } ]
[ { "msg_contents": "Hi,\n\nI'm seeing similar problems, mainly due to failure to sort correctly\neven though there is an \"order by\" clause.\n\nI did a few tests and found that the sort sort seemed to fail when there\nwere multiple columns in the \"order by\" clause. (Not conclusive)\n\nI don't know when it 1st appeared as I've been trying to compile on \nSPARC-Linux for the past few attempts and this is the 1st time I've\nhad a fully working package to run the regression tests on!!\n\nThanks,\nKeith.\n \n\nBruce Momjian <[email protected]>\n> [email protected]\n> > \n> > Bruce,\n> > \n> > Just running the regression tests on the latest CVS on SPARC-Linux!!\n> > \n> > Appart from several other ordering and precision errors I'm seeing\n> > errors in constraints tests due to output/constraints.source not\n> > being updated for the new error messages.\n> \n> I have just fixed many of these WARN problems. I am looking at the new\n> results. The first problem:\n> \t\n> \t====== boolean ======\n> \t166,168d165\n> \t< |f |f \n> \t< |f |f \n> \t< |f |f \n> \t170a168\n> \t> |f |f \n> \t173a172\n> \t> |f |f \n> \t176a176\n> \t> |f |f \n> \n> is because the query has no ORDER BY.\n> \n> The second problem looks serious:\n> \n> QUERY: SET geqo TO 'off';\n> QUERY: SELECT '' AS thirtysix, p1.f1 AS point1, p2.f1 AS point2, p1.f1<-> \np2.f$\n> FROM POINT_TBL p1, POINT_TBL p2\n> ORDER BY dist, point1 using <<, point2 using <<;\n> thirtysix|point1 |point2 | dist\n> ---------+----------+----------+----------------\n> |(10,10) |(-10,0) |22.3606797749979 \n> |(0,0) |(-10,0) | 10\n> \n> The 'dist' is not being ordered. \n> \n> In geometry we have:\n> \n> 104c103\n> \t< |(0,0) |[(0,0),(6,6)] |(-0,0) \n> \t---\n> \t> |(0,0) |[(0,0),(6,6)] |(0,0) \n> \n> I am happy to see the -0 changed to zero, but this may be just on my\n> platform. Also:\n> \n> \t< |(-0,0),(-20,-20) \n> \t---\n> \t> |(0,0),(-20,-20) \n> \t213c212\n> \t< |(-0,2),(-14,0) \n> \t---\n> \t> |(0,2),(-14,0) \n> \t221c220\n> \t< |(14,-0),(0,-34) \n> \t---\n> \t> |(14,0),(0,-34) \n> \t236c235\n> \n> We also have broken sorting in timespan:\n> \n> QUERY: SELECT '' AS fortyfive, r1.*, r2.*\n> FROM TIMESPAN_TBL r1, TIMESPAN_TBL r2\n> WHERE r1.f1 > r2.f1\n> ORDER BY r1.f1, r2.f1;\n> fortyfive|f1 |f1\n> ---------+-----------------------------+-----------------------------\n> |@ 6 years |@ 14 secs ago\n> |@ 5 mons |@ 14 secs ago\n> |@ 5 mons 12 hours |@ 14 secs ago\n> \n> How long has this been broken? Any idea on a cause. Obviously it is a\n> sorting issue, but where?\n> \n> -- \n> Bruce Momjian [email protected]\n> \n\n", "msg_date": "Tue, 6 Jan 1998 21:11:53 +0000 (GMT)", "msg_from": "Keith Parks <[email protected]>", "msg_from_op": true, "msg_subject": "Re: consttraints.source" }, { "msg_contents": "> \n> Hi,\n> \n> I'm seeing similar problems, mainly due to failure to sort correctly\n> even though there is an \"order by\" clause.\n> \n> I did a few tests and found that the sort sort seemed to fail when there\n> were multiple columns in the \"order by\" clause. (Not conclusive)\n\nThis is a huge help. I think I have found it. 
I just overhauled the\nreadfunc/outfunc code, so it was now very clear that was in the\nQuery.sortClause.\n\nYour hint that the it fails when there is more than one sort identifier\nwas the trick.\n\n> \n> I don't know when it 1st appeared as I've been trying to compile on \n> SPARC-Linux for the past few attempts and this is the 1st time I've\n> had a fully working package to run the regression tests on!!\n> \n> Thanks,\n> Keith.\n> \n> \n> Bruce Momjian <[email protected]>\n> > [email protected]\n> > > \n> > > Bruce,\n> > > \n> > > Just running the regression tests on the latest CVS on SPARC-Linux!!\n> > > \n> > > Appart from several other ordering and precision errors I'm seeing\n> > > errors in constraints tests due to output/constraints.source not\n> > > being updated for the new error messages.\n> > \n> > I have just fixed many of these WARN problems. I am looking at the new\n> > results. The first problem:\n> > \t\n> > \t====== boolean ======\n> > \t166,168d165\n> > \t< |f |f \n> > \t< |f |f \n> > \t< |f |f \n> > \t170a168\n> > \t> |f |f \n> > \t173a172\n> > \t> |f |f \n> > \t176a176\n> > \t> |f |f \n> > \n> > is because the query has no ORDER BY.\n> > \n> > The second problem looks serious:\n> > \n> > QUERY: SET geqo TO 'off';\n> > QUERY: SELECT '' AS thirtysix, p1.f1 AS point1, p2.f1 AS point2, p1.f1<-> \n> p2.f$\n> > FROM POINT_TBL p1, POINT_TBL p2\n> > ORDER BY dist, point1 using <<, point2 using <<;\n> > thirtysix|point1 |point2 | dist\n> > ---------+----------+----------+----------------\n> > |(10,10) |(-10,0) |22.3606797749979 \n> > |(0,0) |(-10,0) | 10\n> > \n> > The 'dist' is not being ordered. \n> > \n> > In geometry we have:\n> > \n> > 104c103\n> > \t< |(0,0) |[(0,0),(6,6)] |(-0,0) \n> > \t---\n> > \t> |(0,0) |[(0,0),(6,6)] |(0,0) \n> > \n> > I am happy to see the -0 changed to zero, but this may be just on my\n> > platform. Also:\n> > \n> > \t< |(-0,0),(-20,-20) \n> > \t---\n> > \t> |(0,0),(-20,-20) \n> > \t213c212\n> > \t< |(-0,2),(-14,0) \n> > \t---\n> > \t> |(0,2),(-14,0) \n> > \t221c220\n> > \t< |(14,-0),(0,-34) \n> > \t---\n> > \t> |(14,0),(0,-34) \n> > \t236c235\n> > \n> > We also have broken sorting in timespan:\n> > \n> > QUERY: SELECT '' AS fortyfive, r1.*, r2.*\n> > FROM TIMESPAN_TBL r1, TIMESPAN_TBL r2\n> > WHERE r1.f1 > r2.f1\n> > ORDER BY r1.f1, r2.f1;\n> > fortyfive|f1 |f1\n> > ---------+-----------------------------+-----------------------------\n> > |@ 6 years |@ 14 secs ago\n> > |@ 5 mons |@ 14 secs ago\n> > |@ 5 mons 12 hours |@ 14 secs ago\n> > \n> > How long has this been broken? Any idea on a cause. Obviously it is a\n> > sorting issue, but where?\n> > \n> > -- \n> > Bruce Momjian [email protected]\n> > \n> \n> \n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Tue, 6 Jan 1998 18:43:30 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: consttraints.source" }, { "msg_contents": "> \n> Hi,\n> \n> I'm seeing similar problems, mainly due to failure to sort correctly\n> even though there is an \"order by\" clause.\n> \n> I did a few tests and found that the sort sort seemed to fail when there\n> were multiple columns in the \"order by\" clause. (Not conclusive)\n> \n> I don't know when it 1st appeared as I've been trying to compile on \n> SPARC-Linux for the past few attempts and this is the 1st time I've\n> had a fully working package to run the regression tests on!!\n\nSort is now fixed. When I added UNION, I needed to add UNIQUE from\noptimizer, so I added a SortClause node to the routine. 
Turns out it\nwas NULL'ing it for every sort field. Should work now.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Tue, 6 Jan 1998 18:56:52 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: consttraints.source" }, { "msg_contents": "> > I'm seeing similar problems, mainly due to failure to sort correctly\n> > even though there is an \"order by\" clause.\n> >\n> > I did a few tests and found that the sort sort seemed to fail when there\n> > were multiple columns in the \"order by\" clause. (Not conclusive)\n>\n> This is a huge help. I think I have found it. I just overhauled the\n> readfunc/outfunc code, so it was now very clear that was in the\n> Query.sortClause.\n>\n> Your hint that the it fails when there is more than one sort identifier\n> was the trick.\n\nAh, Keith beat me to the test :) fwiw, the problem was introduced between 971227\nand 980101...\n\n - Tom\n\n", "msg_date": "Wed, 07 Jan 1998 01:36:20 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: consttraints.source" } ]
[ { "msg_contents": "> I created a table with two columns of type int, and loaded about 300 K records\n> in it. So, the total size of the table is approx. that of 600 K integers,\n> roughly 2.4 MB.\n> But, the file corresponding to the table in pgsql/data/base directory\n> has a size of 19 MB. I was wondering if I have done something wrong in\n> the installation or usage, or is it the normal behavior ?\n\n48 bytes + each row header (on my aix box..._your_ mileage may vary)\n 8 bytes + two int fields @ 4 bytes each\n 4 bytes + pointer on page to tuple\n-------- =\n60 bytes per tuple\n\n8192 / 60 give 136 tuples per page.\n\n300000 / 136 ... round up ... need 2206 pages which gives us ...\n\n2206 * 8192 = 18,071,532\n\nSo 19 MB is about right. And this is the best to be done, unless\nyou can make do with int2s which would optimally shrink the table\nsize to 16,834,560 bytes. Any nulls in there might add a few bytes\nper offending row too, but other than that, this should be considered\nnormal postgresql behavior.\n\n> ...\n> One massive sort file...\n> ...\n\nThis one I don't know if is \"normal\"...\n\n\nDarren aka [email protected]\n", "msg_date": "Tue, 6 Jan 1998 18:26:31 -0500", "msg_from": "[email protected] (Darren King)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] database size" }, { "msg_contents": "On Tue, 6 Jan 1998, Darren King wrote:\n\n> 48 bytes + each row header (on my aix box..._your_ mileage may vary)\n> 8 bytes + two int fields @ 4 bytes each\n> 4 bytes + pointer on page to tuple\n> -------- =\n> 60 bytes per tuple\n> \n> 8192 / 60 give 136 tuples per page.\n> \n> 300000 / 136 ... round up ... need 2206 pages which gives us ...\n> \n> 2206 * 8192 = 18,071,532\n> \n> So 19 MB is about right. And this is the best to be done, unless\n> you can make do with int2s which would optimally shrink the table\n> size to 16,834,560 bytes. Any nulls in there might add a few bytes\n> per offending row too, but other than that, this should be considered\n> normal postgresql behavior.\n\n\tBruce...this would be *great* to have in the FAQ!! What we do need is\na section of the User Manual dealing with computing resources required for\na table, similar to this :)\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 6 Jan 1998 20:32:39 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] database size" }, { "msg_contents": "> \n> > I created a table with two columns of type int, and loaded about 300 K records\n> > in it. So, the total size of the table is approx. that of 600 K integers,\n> > roughly 2.4 MB.\n> > But, the file corresponding to the table in pgsql/data/base directory\n> > has a size of 19 MB. I was wondering if I have done something wrong in\n> > the installation or usage, or is it the normal behavior ?\n> \n> 48 bytes + each row header (on my aix box..._your_ mileage may vary)\n> 8 bytes + two int fields @ 4 bytes each\n> 4 bytes + pointer on page to tuple\n> -------- =\n> 60 bytes per tuple\n> \n> 8192 / 60 give 136 tuples per page.\n> \n> 300000 / 136 ... round up ... need 2206 pages which gives us ...\n> \n> 2206 * 8192 = 18,071,532\n> \n> So 19 MB is about right. And this is the best to be done, unless\n> you can make do with int2s which would optimally shrink the table\n> size to 16,834,560 bytes. 
Any nulls in there might add a few bytes\n> per offending row too, but other than that, this should be considered\n> normal postgresql behavior.\n\n\nNice math exercise.\n\nDoes anyone want to tell me the row overhead on commercial databases?\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Tue, 6 Jan 1998 20:18:26 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] database size" }, { "msg_contents": "> \n> On Tue, 6 Jan 1998, Darren King wrote:\n> \n> > 48 bytes + each row header (on my aix box..._your_ mileage may vary)\n> > 8 bytes + two int fields @ 4 bytes each\n> > 4 bytes + pointer on page to tuple\n> > -------- =\n> > 60 bytes per tuple\n> > \n> > 8192 / 60 give 136 tuples per page.\n> > \n> > 300000 / 136 ... round up ... need 2206 pages which gives us ...\n> > \n> > 2206 * 8192 = 18,071,532\n> > \n> > So 19 MB is about right. And this is the best to be done, unless\n> > you can make do with int2s which would optimally shrink the table\n> > size to 16,834,560 bytes. Any nulls in there might add a few bytes\n> > per offending row too, but other than that, this should be considered\n> > normal postgresql behavior.\n> \n> \tBruce...this would be *great* to have in the FAQ!! What we do need is\n> a section of the User Manual dealing with computing resources required for\n> a table, similar to this :)\n\nAdded to FAQ.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Tue, 6 Jan 1998 22:02:42 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] database size" } ]
[ { "msg_contents": "Hi All,\n\nI'm just investigating some regression failures on a SPARC-Linux\nbuild of PostgreSQL from the 7th Jan CVS tree.\n\nAlthough there are many failures I can't explain, the failure\nof the sanity_check VACUUM has attracted my interest.\n\nDoing a VACUUM on the regression database I'm getting:-\n\nregression=> vacuum;\nABORT: nodeRead: Bad type 0\nregression=>\n\nThe log shows:-\n\nDEBUG: Rel equipment_r: Pages 1: Changed 0, Reapped 0, Empty 0, New 0; Tup 4: \nVac 0, Crash 0, UnUsed 0, MinLen 62, MaxLen 75; Re-using: Free/Avail. Space 0/0; \nEndEmpty/Avail. Pages 0/0. Elapsed 0/0 sec.\nDEBUG: Rel iportaltest: Pages 1: Changed 0, Reapped 0, Empty 0, New 0; Tup 2: \nVac 0, Crash 0, UnUsed 0, MinLen 120, MaxLen 120; Re-using: Free/Avail. Space \n0/0; EndEmpty/Avail. Pages 0/0. Elapsed 0/0 sec.\nABORT: nodeRead: Bad type 0\n\n\n\nSo what comes next?\n\nIt looks like the Rel iportaltest vac'd OK.\n\nIf I VACUUM each relation individually everything seems to be OK.\n\nIf I try to vacuum a VIEW I get the same error.\n\nregression=> vacuum toyemp;\nABORT: nodeRead: Bad type 0\nregression=> \n\n\nAnyone have any insight into this?\n\nKeith.\n\n", "msg_date": "Wed, 7 Jan 1998 00:35:20 +0000 (GMT)", "msg_from": "Keith Parks <[email protected]>", "msg_from_op": true, "msg_subject": "VACUUM error on CVS build 07-JAN-98" }, { "msg_contents": "> \n> Hi All,\n> \n> I'm just investigating some regression failures on a SPARC-Linux\n> build of PostgreSQL from the 7th Jan CVS tree.\n> \n> Although there are many failures I can't explain, the failure\n> of the sanity_check VACUUM has attracted my interest.\n> \n> Doing a VACUUM on the regression database I'm getting:-\n> \n> regression=> vacuum;\n> ABORT: nodeRead: Bad type 0\n> regression=>\n> \n> The log shows:-\n> \n> DEBUG: Rel equipment_r: Pages 1: Changed 0, Reapped 0, Empty 0, New 0; Tup 4: \n> Vac 0, Crash 0, UnUsed 0, MinLen 62, MaxLen 75; Re-using: Free/Avail. Space 0/0; \n> EndEmpty/Avail. Pages 0/0. Elapsed 0/0 sec.\n> DEBUG: Rel iportaltest: Pages 1: Changed 0, Reapped 0, Empty 0, New 0; Tup 2: \n> Vac 0, Crash 0, UnUsed 0, MinLen 120, MaxLen 120; Re-using: Free/Avail. Space \n> 0/0; EndEmpty/Avail. Pages 0/0. Elapsed 0/0 sec.\n> ABORT: nodeRead: Bad type 0\n> \n> \n> \n> So what comes next?\n> \n> It looks like the Rel iportaltest vac'd OK.\n> \n> If I VACUUM each relation individually everything seems to be OK.\n> \n> If I try to vacuum a VIEW I get the same error.\n> \n> regression=> vacuum toyemp;\n> ABORT: nodeRead: Bad type 0\n> regression=> \n> \n> \n> Anyone have any insight into this?\n> \n> Keith.\n> \n> \n> \n\nTry the newest version. I think I fixed it.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Tue, 6 Jan 1998 21:16:20 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] VACUUM error on CVS build 07-JAN-98" } ]
[ { "msg_contents": "Hi,\n\nAnyone have any ideas about this error in the horology regression test?\n\nThe platform is SPARC-Linux running the latest CVS build.\n\nKeith.\n\n\nQUERY: CREATE TABLE TEMP_DATETIME (f1 datetime);\nQUERY: INSERT INTO TEMP_DATETIME (f1)\n SELECT d1 FROM DATETIME_TBL\n WHERE d1 BETWEEN '13-jun-1957' AND '1-jan-1997'\n OR d1 BETWEEN '1-jan-1999' AND '1-jan-2010';\nABORT: floating point exception! The last floating point operation either \nexceeded legal ranges or was a divide by zero\nQUERY: SELECT '' AS ten, f1 AS datetime\n FROM TEMP_DATETIME\n ORDER BY datetime;\nten|datetime\n---+--------\n(0 rows) \n\n", "msg_date": "Wed, 7 Jan 1998 00:39:57 +0000 (GMT)", "msg_from": "Keith Parks <[email protected]>", "msg_from_op": true, "msg_subject": "Another regression test failure." } ]
[ { "msg_contents": "> I can take a stab at this tonite after work now that the snapshot is there.\n> Still have around some of the files/diffs from looking at this a year ago...\n> \n> I don't think it will be hard, just a few files with BLCKSZ/MAXBLCKSZ\n> references to check for breakage. Appears that only one bit of lp_flags is\n> being used too, so that would seem to allow up to 32k blocks.\n\nI have finished \"fixing\" the code for this and have a test system of postgres\nrunning with 4k blocks right now. Tables appear to take about 10% less space.\nSimple btree indices are taking the same as with 8k blocks. Regression is\nrunning now and is going smoothly.\n\nNow for the question...\n\nIn backend/access/nbtree/nbtsort.c, ---> #define TAPEBLCKSZ (MAXBLCKSZ << 2)\n\nSo far MAXBLCKSZ has been equal to BLCKSZ. What effect will a MAXBLCKSZ=32768\nhave on these tape files? Should I leave it as MAXBLCKSZ this big or change\nthem to BLCKSZ to mirror the real block size being used?\n\n\n> I can check the aix compiler, but what does gcc and other compilers do with\n> bit field alignment?\n\nThe ibm compiler allocates the ItemIdData as four bytes. My C book says though\nthat the individual compiler is free to align bit fields however it chooses.\nThe bit-fields might not always be packed or allowed to cross integer boundaries.\n\ndarrenk\n", "msg_date": "Tue, 6 Jan 1998 19:52:42 -0500", "msg_from": "[email protected] (Darren King)", "msg_from_op": true, "msg_subject": "Tape files and MAXBLCKSZ vs. BLCKSZ" }, { "msg_contents": "> \n> > I can take a stab at this tonite after work now that the snapshot is there.\n> > Still have around some of the files/diffs from looking at this a year ago...\n> > \n> > I don't think it will be hard, just a few files with BLCKSZ/MAXBLCKSZ\n> > references to check for breakage. Appears that only one bit of lp_flags is\n> > being used too, so that would seem to allow up to 32k blocks.\n> \n> I have finished \"fixing\" the code for this and have a test system of postgres\n> running with 4k blocks right now. Tables appear to take about 10% less space.\n> Simple btree indices are taking the same as with 8k blocks. Regression is\n> running now and is going smoothly.\n> \n> Now for the question...\n> \n> In backend/access/nbtree/nbtsort.c, ---> #define TAPEBLCKSZ (MAXBLCKSZ << 2)\n> \n> So far MAXBLCKSZ has been equal to BLCKSZ. What effect will a MAXBLCKSZ=32768\n> have on these tape files? Should I leave it as MAXBLCKSZ this big or change\n> them to BLCKSZ to mirror the real block size being used?\n> \n\nI would keep it equal to BLCKSZ. I see no reason to make it different,\nunless the btree sorting is expecting to take 2x the block size. Vadim\nmay know.\n\n\n> \n> > I can check the aix compiler, but what does gcc and other compilers do with\n> > bit field alignment?\n> \n> The ibm compiler allocates the ItemIdData as four bytes. My C book says though\n> that the individual compiler is free to align bit fields however it chooses.\n> The bit-fields might not always be packed or allowed to cross integer boundaries.\n> \n> darrenk\n> \n> \n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Tue, 6 Jan 1998 21:19:16 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Tape files and MAXBLCKSZ vs. BLCKSZ" } ]
[ { "msg_contents": "\nHi...\n\n\tI got ahold of Julie today (maintainer of PostODBC) about including the\nPostODBC stuff as part of the general distribution, so that we pretty much\nhad all the interfaces covered.\n\n\tJulie agreed, and uploaded a zip file of the current sources for me to\nintegrate into the source tree...which I did...and then *very* quickly\nundid...PostODBC falls under LGPL, and therefore can't be included as part of\nour source distribution without contaminating our code :(\n\n\tDoes anyone know of *any* way around this? Like, can a section of our\ndistribution contain software that falls under LGPL without it affecting *our*\ncopyright (Berkeley)? Or does it have to remain completely seperate? Its\neffectively a seperate package, but because its wrapped in our \"tar\" file\nfor distribution, how does that affect things?\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 6 Jan 1998 21:07:44 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "PostODBC..." }, { "msg_contents": "The Hermit Hacker wrote:\n\n> Hi...\n>\n> I got ahold of Julie today (maintainer of PostODBC) about including the\n> PostODBC stuff as part of the general distribution, so that we pretty much\n> had all the interfaces covered.\n>\n> Julie agreed, and uploaded a zip file of the current sources for me to\n> integrate into the source tree...which I did...and then *very* quickly\n> undid...PostODBC falls under LGPL, and therefore can't be included as part of\n> our source distribution without contaminating our code :(\n>\n> Does anyone know of *any* way around this? Like, can a section of our\n> distribution contain software that falls under LGPL without it affecting *our*\n> copyright (Berkeley)? Or does it have to remain completely seperate? Its\n> effectively a seperate package, but because its wrapped in our \"tar\" file\n> for distribution, how does that affect things?\n\nI'm no expert, but (for example) RedHat distributes Linux as well as commercial\nproducts on the same CDROM. There are separate licensing statements for each\ncategory of software. It would seem to be the same issue with us; we aren't\n_forcing_ someone to use both categories...\n\n - Tom\n\n", "msg_date": "Wed, 07 Jan 1998 02:02:54 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostODBC..." }, { "msg_contents": "On Wed, 7 Jan 1998, Thomas G. Lockhart wrote:\n\n> I'm no expert, but (for example) RedHat distributes Linux as well as commercial\n> products on the same CDROM. There are separate licensing statements for each\n> category of software. It would seem to be the same issue with us; we aren't\n> _forcing_ someone to use both categories...\n\n\tRight, this I have no problems with...but, would that mean that we could\ndistribute it as PostODBC.tar.gz on the same CD as PostgreSQL-v6.3.tar.gz, or\nas part of the overall tar file? Where does the line get drawn? :(\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 6 Jan 1998 22:19:15 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] PostODBC..." } ]
[ { "msg_contents": "Forwarded message:\n> \n> Update of /usr/local/cvsroot/pgsql/src/interfaces/odbc/src/socket\n> In directory hub.org:/home/staff/scrappy/src/pgsql/src/interfaces/odbc/src/socket\n> \n> Removed Files:\n> \tcompat.h connect.h connectp.cpp errclass.cpp errclass.h \n> \tsockio.cpp sockio.h wrapper.cpp wrapper.h \n> Log Message:\n> \n> Can't include this...it falls under GPL...it will contaminate all the\n> other code :(\n\nCan't we just GPL that directory? We already distribute the source.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Tue, 6 Jan 1998 21:20:31 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "[COMMITTERS] 'pgsql/src/interfaces/odbc/src/socket compat.h connect.h\n\tconnectp.cpp errclass.cpp errclass.h sockio.cpp sockio.h wO (fwd)" }, { "msg_contents": "On Tue, 6 Jan 1998, Bruce Momjian wrote:\n\n> > Can't include this...it falls under GPL...it will contaminate all the\n> > other code :(\n> \n> Can't we just GPL that directory? We already distribute the source.\n\n\tThis is what I'm curious about...can the GPL be directory specific?\n\n\tWhat do you mean \"we already distribute the source\"? The source to what?\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 6 Jan 1998 22:37:39 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] 'pgsql/src/interfaces/odbc/src/socket compat.h\n\tconnect.h connectp.cpp errclass.cpp errclass.h sockio.cpp\n\tsockio.h wO (fwd)" }, { "msg_contents": "> \n> On Tue, 6 Jan 1998, Bruce Momjian wrote:\n> \n> > > Can't include this...it falls under GPL...it will contaminate all the\n> > > other code :(\n> > \n> > Can't we just GPL that directory? We already distribute the source.\n> \n> \tThis is what I'm curious about...can the GPL be directory specific?\n> \n> \tWhat do you mean \"we already distribute the source\"? The source to what?\n\nThe Postodbc source is already distributed. It is not like we are\ngiving people a simple binary.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Tue, 6 Jan 1998 22:10:17 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [COMMITTERS] 'pgsql/src/interfaces/odbc/src/socket compat.h\n\tconnect.h connectp.cpp errclass.cpp errclass.h sockio.cpp sockio" } ]
[ { "msg_contents": "Do we have code to preserver grants in pg_dump? I maked it in the TODO\nlist as completed, but I am not sure it was done.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Tue, 6 Jan 1998 22:40:44 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "pg_dump and groups" }, { "msg_contents": "On Tue, 6 Jan 1998, Bruce Momjian wrote:\n\n> Do we have code to preserver grants in pg_dump? I maked it in the TODO\n> list as completed, but I am not sure it was done.\n\n\tYup *nod*\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 6 Jan 1998 23:55:21 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_dump and groups" }, { "msg_contents": "> \n> On Tue, 6 Jan 1998, Bruce Momjian wrote:\n> \n> > Do we have code to preserver grants in pg_dump? I marked it in the TODO\n> > list as completed, but I am not sure it was done.\n> \n> \tYup *nod*\n\n6.3, a \"no excuses\" release.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Tue, 6 Jan 1998 23:01:37 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] pg_dump and groups" } ]
[ { "msg_contents": "\nHi...\n\n\tWell, I decided to use WebGlimpse to provide a search engine in front of\nthe MHonarc archives...its slower then I'd like, but that's what the upgrade\nis for (and many other things)...\n\n\tpgsql-hackers and pgsql-questions are currently searchable\n\n\tAnd, finally, you can access it all, until Neil gets hooks in place on the\nregular pages, at:\n\n\thttp://www.postgresql.org/mhonarc\n\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 7 Jan 1998 00:04:06 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Search engine in place..." } ]
[ { "msg_contents": "I have fixed the Node 0 problem with views. It was added as part of my\nreadnode/outnode additions.\n\nAt this point, I think I am done with the readnode/outnode changes. \nShould make rewrite system a little more robust.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Wed, 7 Jan 1998 03:07:32 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "fix for views and outnodes" } ]
[ { "msg_contents": "Bruce Momjian <[email protected]>\n> [email protected]\n> > \n> > Hi,\n> > \n> > I'm seeing similar problems, mainly due to failure to sort correctly\n> > even though there is an \"order by\" clause.\n> > \n> > I did a few tests and found that the sort sort seemed to fail when there\n> > were multiple columns in the \"order by\" clause. (Not conclusive)\n> > \n> > I don't know when it 1st appeared as I've been trying to compile on \n> > SPARC-Linux for the past few attempts and this is the 1st time I've\n> > had a fully working package to run the regression tests on!!\n> \n> Sort is now fixed. When I added UNION, I needed to add UNIQUE from\n> optimizer, so I added a SortClause node to the routine. Turns out it\n> was NULL'ing it for every sort field. Should work now.\n\nThanks for the quick response, I'm building the new code now so will know\nin the morning how it stands.\n\nI'm building the whole thing with -O instead of -O2 to see if it helps\nwith some of the other errors I'm seeing. ( an old problem with gcc and\nthe SPARC processor makes me suspicious)\n\nLater...\n\nThe sorting problems are fixes but I'm still getting many fails.\n\nWill investigate...\n\nKeith.\n\n", "msg_date": "Wed, 7 Jan 1998 09:40:41 +0000 (GMT)", "msg_from": "Keith Parks <[email protected]>", "msg_from_op": true, "msg_subject": "Re: consttraints.source" } ]
[ { "msg_contents": "> \tJulie agreed, and uploaded a zip file of the current sources for me to\n> integrate into the source tree...which I did...and then *very* quickly\n> undid...PostODBC falls under LGPL, and therefore can't be included as part of\n> our source distribution without contaminating our code :(\n> \n> \tDoes anyone know of *any* way around this? Like, can a section of our\n> distribution contain software that falls under LGPL without it affecting *our*\n> copyright (Berkeley)? Or does it have to remain completely seperate? Its\n> effectively a seperate package, but because its wrapped in our \"tar\" file\n> for distribution, how does that affect things?\n> \n> Marc G. Fournier \n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n> \nBy LGPL, I assume you mean the library version of GPL. I thought that the whole\npoint of the library version was that it didn't contaminate any other code.\nThat's how commercial products can release executables which are linked with\nthe GNU libc (or whatever) without the whole product falling under the GPL.\n\n\nAndrew\n\n----------------------------------------------------------------------------\nDr. Andrew C.R. Martin University College London\nEMAIL: (Work) [email protected] (Home) [email protected]\nURL: http://www.biochem.ucl.ac.uk/~martin\nTel: (Work) +44(0)171 419 3890 (Home) +44(0)1372 275775\n", "msg_date": "Wed, 7 Jan 1998 11:23:40 GMT", "msg_from": "Andrew Martin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] PostODBC..." } ]
[ { "msg_contents": "> > 48 bytes + each row header (on my aix box..._your_ mileage may vary)\n> > 8 bytes + two int fields @ 4 bytes each\n> > 4 bytes + pointer on page to tuple\n> > -------- =\n> > 60 bytes per tuple\n> > \n> > ...\n> \n> Nice math exercise.\n> \n> Does anyone want to tell me the row overhead on commercial databases?\n\nI've seen this for Oracle, but I _can't_ find it right now. I'll dig it\nup tonite...this is driving me nuts trying to remember where it is now.\n\nBut this I do have handy! It's an HTML page from IBM DB2 docs. A touch\nlong, but I found it to most interesting.\n\nIf there are any of the linked pages that someone else is interested in,\ncontact me and if I have it, I can send it to you off-list.\n\nDarren aka [email protected]\n\n<HTML>\n<HEAD>\n <TITLE>DB2 Administration Guide</TITLE>\n</HEAD>\n<BODY TEXT=\"#000000\" BGCOLOR=\"#FFFFFF\" LINK=\"#9900CC\" VLINK=\"#3366CC\" ALINK=\"#3399CC\">\n\n<H2><A NAME=\"HDRDBSIZE\"></A>Estimating Space Requirements for Tables</H2>\n\n<P>The following information provides a general rule for estimating the\nsize of a database: </P>\n\n<UL COMPACT>\n<LI><A HREF=\"#HDROPCAT\">&quot;System Catalog Tables&quot;</A> </LI>\n\n<LI><A HREF=\"#HDROPDAT\">&quot;User Table Data&quot;</A> </LI>\n\n<LI><A HREF=\"#HDROPLF\">&quot;Long Field Data&quot;</A> </LI>\n\n<LI><A HREF=\"#HDROPLOB\">&quot;Large Object (LOB) Data&quot;</A> </LI>\n\n<LI><A HREF=\"#HDROPINX\">&quot;Index Space&quot;</A> </LI>\n</UL>\n\n<P>After reading these sections, you should read <A HREF=\"sqld00025.html#HDRTBSPACE\">&quot;Designing\nand Choosing Table Spaces&quot;</A>. </P>\n\n<P>Information is not provided for the space required by such things as:\n</P>\n\n<UL COMPACT>\n<LI>The local database directory file </LI>\n\n<LI>The system database directory file </LI>\n\n<LI>The file management overhead required by the operating system, including:\n</LI>\n\n<UL COMPACT>\n<LI>file block size </LI>\n\n<LI>directory control space </LI>\n</UL>\n</UL>\n\n<P>Information such as row size and structure is precise. However, multiplication\nfactors for file overhead because of disk fragmentation, free space, and\nvariable length columns will vary in your own database since there is such\na wide range of possibilities for the column types and lengths of rows\nin a database. After initially estimating your database size, create a\ntest database and populate it with representative data. You will then find\na multiplication factor that is more accurate for your own particular database\ndesign. </P>\n\n<H3><A NAME=\"HDROPCAT\"></A>System Catalog Tables</H3>\n\n<P>When a database is initially created, system catalog tables are created.\nThese system tables will grow as user tables, views, indexes, authorizations,\nand packages are added to the database. Initially, they use approximately\n1600 KB of disk space. </P>\n\n<P>The amount of space allocated for the catalog tables depends on the\ntype of table space and the extent size for the table space. For example,\nif a DMS table space with an extent size of 32 is used, the catalog table\nspace will initially be allocated 20MB of space. For more information,\nsee <A HREF=\"sqld00025.html#HDRTBSPACE\">&quot;Designing and Choosing Table\nSpaces&quot;</A>. </P>\n\n<H3><A NAME=\"HDROPDAT\"></A>User Table Data</H3>\n\n<P>Table data is stored on 4KB pages. Each page contains 76 bytes of overhead\nfor the database manager. This leaves 4020 bytes to hold user data (or\nrows), although no row can exceed 4005 bytes in length. 
A row will <I>not</I>\nspan multiple pages. </P>\n\n<P>Note that the table data pages <B>do not</B> contain the data for columns\ndefined with LONG VARCHAR, LONG VARGRAPHIC, BLOB, CLOB, or DBCLOB data\ntypes. The rows in a table data page do, however, contain a descriptor\nof these columns. (See <A HREF=\"#HDROPLF\">&quot;Long Field Data&quot;</A>\nfor information about estimating the space required for the table objects\nthat will contain the data stored using these data types.) </P>\n\n<P>Rows are inserted into the table in a first-fit order. The file is searched\n(using a free space map) for the first available space that is large enough\nto hold the new row. When a row is updated, it is updated in place unless\nthere is insufficient room left on the 4KB page to contain it. If this\nis the case, a &quot;tombstone record&quot; is created in the original\nrow location which points to the new location in the table file of the\nupdated row. </P>\n\n<P>See <A HREF=\"#HDROPLF\">&quot;Long Field Data&quot;</A> for information\nabout how LONG VARCHAR, LONG VARGRAPHIC, BLOB, CLOB and DBCLOB data is\nstored and for estimating the space required to store these types of columns.\n</P>\n\n<P>For each user table in the database, the space needed is: </P>\n\n<PRE> (average row size + 8) * number of rows * 1.5\n</PRE>\n\n<P>The average row size is the sum of the average column sizes. For information\non the size of each column, see CREATE TABLE in the <A HREF=\"/data/db2/support/sqls00aa/sqls0.html\"><I>SQL\nReference</I>. </A></P>\n\n<P>The factor of &quot;1.5&quot; is for overhead such as page overhead\nand free space. </P>\n\n<H3><A NAME=\"HDROPLF\"></A>Long Field Data</H3>\n\n<P>If a table has LONG VARCHAR or LONG VARGRAPHIC data, in addition to\nthe byte count of 20 for the LONG VARCHAR or LONG VARGRAPHIC descriptor\n(in the table row), the data itself must be stored. Long field data is\nstored in a separate table object which is structured differently from\nthe other data types (see <A HREF=\"#HDROPDAT\">&quot;User Table Data&quot;</A>\nand <A HREF=\"#HDROPLOB\">&quot;Large Object (LOB) Data&quot;</A>). </P>\n\n<P>Data is stored in 32KB areas that are broken up into segments whose\nsizes are &quot;powers of two&quot; times 512 bytes. (Hence these segments\ncan be 512 bytes, 1024 bytes, 2048 bytes, and so on, up to 32KB.) </P>\n\n<P>They are stored in a fashion that enables free space to be reclaimed\neasily. Allocation and free space information is stored in 4KB allocation\npages, which appear infrequently throughout the object. </P>\n\n<P>The amount of unused space in the object depends on the size of the\nlong field data and whether this size is relatively constant across all\noccurrences of the data. For data entries larger than 255 bytes, this unused\nspace can be up to 50 percent of the size of the long field data. </P>\n\n<P>If character data is less than 4KB in length, the CHAR, GRAPHIC, VARCHAR,\nor VARGRAPHIC data types should be used instead of LONG VARCHAR or LONG\nVARGRAPHIC. </P>\n\n<H3><A NAME=\"HDROPLOB\"></A>Large Object (LOB) Data</H3>\n\n<P>If a table has BLOB, CLOB, or DBCLOB data, in addition to the byte count\n(between 72 and 280 bytes) for the BLOB, CLOB, or DBCLOB descriptor (in\nthe table row), the data itself must be stored. This data is stored in\ntwo separate table objects that are structured differently than other data\ntypes (see <A HREF=\"#HDROPDAT\">&quot;User Table Data&quot;</A>). 
</P>\n\n<P>To estimate the space required by large object data, you need to consider\nthe two table objects used to store data defined with these data types:\n</P>\n\n<UL>\n<LI><B>LOB Data Objects</B> </LI>\n\n<P>Data is stored in 64MB areas that are broken up into segments whose\nsizes are &quot;powers of two&quot; times 1024 bytes. (Hence these segments\ncan be 1024 bytes, 2048 bytes, 4096 bytes, and so on, up to 64MB.) </P>\n\n<P>To reduce the amount of disk space used by the LOB data, you can use\nthe COMPACT parameter on the <I>lob-options-clause</I> on the CREATE TABLE\nand ALTER TABLE statements. The COMPACT option minimizes the amount of\ndisk space required by allowing the LOB data to be split into smaller segments\nso that it will use the smallest amount of space possible. Without the\nCOMPACT option, the entire LOB value must contiguously fit into a single\nsegment. Appending to LOB values stored using the COMPACT option may result\nin slower performance compared to LOB values for which the COMPACT option\nis not specified. </P>\n\n<P>The amount of free space contained in LOB data objects will be influenced\nby the amount of update and delete activity, as well as the size of the\nLOB values being inserted. </P>\n\n<LI><B>LOB Allocation Objects</B> </LI>\n\n<P>Allocation and free space information is stored in 4KB allocation pages\nseparated from the actual data. The number of these 4KB pages is dependent\non the amount of data, including unused space, allocated for the large\nobject data. The overhead is calculated as follows: one 4KB pages for every\n64GB plus one 4KB page for every 8MB. </P>\n</UL>\n\n<P>If character data is less than 4KB in length, the CHAR, GRAPHIC, VARCHAR,\nor VARGRAPHIC data types should be used instead of BLOB, CLOB or DBCLOB.\n</P>\n\n<H3><A NAME=\"HDROPINX\"></A>Index Space</H3>\n\n<P>For each index, the space needed can be estimated as: </P>\n\n<PRE> (average index key size + 8) * number of rows * 2\n</PRE>\n\n<P>where: </P>\n\n<UL COMPACT>\n<LI>The &quot;average index key size&quot; is the byte count of each column\nin the index key. See the CREATE TABLE statement <A HREF=\"/data/db2/support/sqls00aa/sqls0.html\"><I>SQL\nReference</I> </A>for information on how to calculate the byte count for\ncolumns with different data types. (Note that to estimate the average column\nsize for VARCHAR and VARGRAPHIC columns, use an average of the current\ndata size, plus one byte. Do not use the maximum declared size.) </LI>\n\n<LI>The factor of 2 is for overhead, such as non-leaf pages and free space.\n</LI>\n</UL>\n\n<P><B>Note: </B></P>\n\n<BLOCKQUOTE>\n<P>For every column that allows nulls, add one extra byte for the null\nindicator. </P>\n</BLOCKQUOTE>\n\n<P>Temporary space is required when creating the index. The maximum amount\nof temporary space required during index creation can be estimated as:\n</P>\n\n<PRE> (average index key size + 8) * number of rows * 3.2\n</PRE>\n\n<P>where the factor of 3.2 is for index overhead as well as space required\nfor the sorting needed to create the index. </P>\n\n<P>\n<HR><B>[ <A HREF=\"sqld0.html#ToC\">Table of Contents</A>\n| <A HREF=\"sqld00022.html\">Previous Page</A> | <A HREF=\"sqld00024.html\">Next\nPage</A> ]</B> \n<HR></P>\n\n</BODY>\n</HTML>\n", "msg_date": "Wed, 7 Jan 1998 09:54:35 -0500", "msg_from": "[email protected] (Darren King)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] database size" }, { "msg_contents": "> I've seen this for Oracle, but I _can't_ find it right now. 
I'll dig it\n> up tonite...this is driving me nuts trying to remember where it is now.\n> \n> But this I do have handy! It's an HTML page from IBM DB2 docs. A touch\n> long, but I found it to most interesting.\n> \n> If there are any of the linked pages that someone else is interested in,\n> contact me and if I have it, I can send it to you off-list.\n\nInteresting that they have \"tombstone\" records, which sounds like our\ntime travel that vacuum cleans up.\n\nThey recommend (rowsize+8) * 1.5.\n\nSounds like we are not too bad.\n\nI assume our index overhead is not as large as data rows, but still\nsignificant. I am adding a mention of it to the FAQ. That comes up\noften too.\n\n\tIndexes do not contain the same overhead, but do contain the\n\tdata that is being indexed, so they can be large also.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Wed, 7 Jan 1998 12:18:40 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] database size" } ]
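As a rough cross-check of the two rules of thumb quoted in this thread (illustrative arithmetic only, reusing the two-int4, 300,000-row table from the start of the thread):

    DB2 rule:      (average row size + 8) * number of rows * 1.5
                 = (8 + 8) * 300000 * 1.5
                 = 7,200,000 bytes   (about 7 MB)

    Postgres 6.2.1, worked out earlier in the thread: about 18 MB for the same data.

Most of the difference is per-tuple overhead (48 bytes of row header here versus the 8 bytes allowed in DB2's formula), not the page layout itself.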
[ { "msg_contents": "\nIn the next update to s_lock.c, would it be possible to add an\n#else to the #ifdef with nothing more than a ; in it? Like...\n\n#if defined (__alpha__) && defined(linux)\n\n... alpha linux code ...\n\n#else\n;\n#endif\n\nOr perhaps put a #include <stdio.h> outside the ifdef'd block?\n\nThe aix compiler requires there be _some_ sort of valid code left\nafter the pre-processor finishes with the file, and currently\nthere isn't, so my compile fails since the s_lock.c file's always\nin the make.\n\nOr could the #if be moved to the makefile to add s_lock.c to OBJS\nif defined(__alpha__) and defined(linux)?\n\n\nDarren aka [email protected]\n", "msg_date": "Wed, 7 Jan 1998 10:14:19 -0500", "msg_from": "[email protected] (Darren King)", "msg_from_op": true, "msg_subject": "Linux/Alpha's s_lock.c and other ports..." }, { "msg_contents": "\nThis was done...I didn't think the #if's shoudl be around the header\nfile stuff, so just moved them down a bit \n\n\nOn Wed, 7 Jan 1998, Darren King wrote:\n\n> \n> In the next update to s_lock.c, would it be possible to add an\n> #else to the #ifdef with nothing more than a ; in it? Like...\n> \n> #if defined (__alpha__) && defined(linux)\n> \n> ... alpha linux code ...\n> \n> #else\n> ;\n> #endif\n> \n> Or perhaps put a #include <stdio.h> outside the ifdef'd block?\n> \n> The aix compiler requires there be _some_ sort of valid code left\n> after the pre-processor finishes with the file, and currently\n> there isn't, so my compile fails since the s_lock.c file's always\n> in the make.\n> \n> Or could the #if be moved to the makefile to add s_lock.c to OBJS\n> if defined(__alpha__) and defined(linux)?\n> \n> \n> Darren aka [email protected]\n> \n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 7 Jan 1998 18:43:38 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Linux/Alpha's s_lock.c and other ports..." } ]
[ { "msg_contents": "Hi All,\n\nI suspect this is an O/S or platform problem but can anyone offer any\nsuggestions as to how I might locate the cause.\n\nDROP TABLE FLOAT8_TBL;\nDROP\n\nCREATE TABLE FLOAT8_TBL(f1 float8);\nCREATE\n\nINSERT INTO FLOAT8_TBL(f1) VALUES ('1.2345678901234e-200');\nINSERT 277993 1\n\nSELECT '' AS bad, : (f.f1) from FLOAT8_TBL f;\nABORT: floating point exception! The last floating point operation either \nexceeded legal ranges or was a divide by zero\n\n\nThe ABORT message comes from tcop.c when we are hit by a FPE signal by\nthe operating system.\n\n.....\n\nHere's some additional tests that seem to show the threshold.\n\npostgres=> CREATE TABLE FLOAT8_TBL(f1 float8);\nCREATE\npostgres=> INSERT INTO FLOAT8_TBL(f1) VALUES ('1.2345678901234e-150');\nINSERT 278057 1\npostgres=> SELECT '' AS bad, : (f.f1) from FLOAT8_TBL f;\nbad|?column?\n---+--------\n | 1\n(1 row)\n\npostgres=> INSERT INTO FLOAT8_TBL(f1) VALUES ('1.2345678901234e-151');\nINSERT 278058 1\npostgres=> SELECT '' AS bad, : (f.f1) from FLOAT8_TBL f;\nABORT: floating point exception! The last floating point operation either \nexceeded legal ranges or was a divide by zero\n\nKeith.\n\n", "msg_date": "Wed, 7 Jan 1998 16:11:09 +0000 (GMT)", "msg_from": "Keith Parks <[email protected]>", "msg_from_op": true, "msg_subject": "Floating point exceptions." } ]
[ { "msg_contents": "Does someone want to remind me why we allocate the full size for char()\nand varchar(), when we really can just allocate the size of the given\nstring?\n\nI relize char() has to be padded, but why varchar()?\n\nIn my experience, char() is full size as defined by create, and\nvarchar() is the the size of the actual data in the field, like text,\nbut with a pre-defined limit.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Wed, 7 Jan 1998 12:42:45 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "varchar/char size" }, { "msg_contents": "On Wed, 7 Jan 1998, Bruce Momjian wrote:\n\n> In my experience, char() is full size as defined by create, and\n> varchar() is the the size of the actual data in the field, like text,\n> but with a pre-defined limit.\n\n\tCan you remind me what the difference is between text and varchar? Why\nwould you use varchar over text?\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 7 Jan 1998 18:44:24 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] varchar/char size" }, { "msg_contents": "> \n> On Wed, 7 Jan 1998, Bruce Momjian wrote:\n> \n> > In my experience, char() is full size as defined by create, and\n> > varchar() is the the size of the actual data in the field, like text,\n> > but with a pre-defined limit.\n> \n> \tCan you remind me what the difference is between text and varchar? Why\n> would you use varchar over text?\n\nOnly because SQL people are used to varchar, and not text, and sometimes\npeople want to have a maximum size if they are displaying this data in a\nform that is only of limited size.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Wed, 7 Jan 1998 18:04:08 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] varchar/char size" }, { "msg_contents": "> Does someone want to remind me why we allocate the full size for char()\n> and varchar(), when we really can just allocate the size of the given\n> string?\n> I relize char() has to be padded, but why varchar()?\n\n> In my experience, char() is full size as defined by create, and\n> varchar() is the the size of the actual data in the field, like text,\n> but with a pre-defined limit.\n\nWell, in many relational databases access can be optimized by having\nfixed-length tuple storage structures. Also, it allows re-use of deleted\nspace in storage pages. It may be that neither of these points have any\nbearing on Postgres, and never will, but unless that clearly the case then\nI would be inclined to keep the storage scheme as it is currently.\n\n - Tom\n\n", "msg_date": "Thu, 08 Jan 1998 03:07:13 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] varchar/char size" }, { "msg_contents": "> \n> > Does someone want to remind me why we allocate the full size for char()\n> > and varchar(), when we really can just allocate the size of the given\n> > string?\n> > I relize char() has to be padded, but why varchar()?\n> \n> > In my experience, char() is full size as defined by create, and\n> > varchar() is the the size of the actual data in the field, like text,\n> > but with a pre-defined limit.\n> \n> Well, in many relational databases access can be optimized by having\n> fixed-length tuple storage structures. 
Also, it allows re-use of deleted\n> space in storage pages. It may be that neither of these points have any\n> bearing on Postgres, and never will, but unless that clearly the case then\n> I would be inclined to keep the storage scheme as it is currently.\n\nWith Ingres and Informix char() is fixed size, while varchar() is\nVARiable size.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Wed, 7 Jan 1998 22:17:50 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] varchar/char size" }, { "msg_contents": "Bruce Momjian wrote:\n\n> >\n> > > Does someone want to remind me why we allocate the full size for char()\n> > > and varchar(), when we really can just allocate the size of the given\n> > > string?\n> > > I relize char() has to be padded, but why varchar()?\n> >\n> > > In my experience, char() is full size as defined by create, and\n> > > varchar() is the the size of the actual data in the field, like text,\n> > > but with a pre-defined limit.\n> >\n> > Well, in many relational databases access can be optimized by having\n> > fixed-length tuple storage structures. Also, it allows re-use of deleted\n> > space in storage pages. It may be that neither of these points have any\n> > bearing on Postgres, and never will, but unless that clearly the case then\n> > I would be inclined to keep the storage scheme as it is currently.\n>\n> With Ingres and Informix char() is fixed size, while varchar() is\n> VARiable size.\n\nGo for it. Let me know if I can help with testing or anything...\n\n - Tom\n\n", "msg_date": "Thu, 08 Jan 1998 03:20:14 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] varchar/char size" }, { "msg_contents": "> \n> Bruce Momjian wrote:\n> \n> > >\n> > > > Does someone want to remind me why we allocate the full size for char()\n> > > > and varchar(), when we really can just allocate the size of the given\n> > > > string?\n> > > > I relize char() has to be padded, but why varchar()?\n> > >\n> > > > In my experience, char() is full size as defined by create, and\n> > > > varchar() is the the size of the actual data in the field, like text,\n> > > > but with a pre-defined limit.\n> > >\n> > > Well, in many relational databases access can be optimized by having\n> > > fixed-length tuple storage structures. Also, it allows re-use of deleted\n> > > space in storage pages. It may be that neither of these points have any\n> > > bearing on Postgres, and never will, but unless that clearly the case then\n> > > I would be inclined to keep the storage scheme as it is currently.\n> >\n> > With Ingres and Informix char() is fixed size, while varchar() is\n> > VARiable size.\n> \n> Go for it. 
Let me know if I can help with testing or anything...\n\nI know we have text, and that it is better than the others, but if our\nvarchar() were both variable sized storage, and you could place a max on\nit, it would be useful for certain applications.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Wed, 7 Jan 1998 22:24:21 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] varchar/char size" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Does someone want to remind me why we allocate the full size for char()\n> and varchar(), when we really can just allocate the size of the given\n> string?\n> \n> I relize char() has to be padded, but why varchar()?\n> \n> In my experience, char() is full size as defined by create, and\n> varchar() is the the size of the actual data in the field, like text,\n> but with a pre-defined limit.\n\nIs CHAR padded on disk? Of course it should be padded for \nrepresentation, but for storage, couldn't it be stored just like\nTEXT or VARCHAR? Before storing, it could be trimmed, and when\nread from storage, it could be padded with spaces on the right.\n\nBtw, why is VARCHAR not simply an alias for TEXT, with maxlen added?\nShouldn't these types be the same internally, but with maxlen checked\nfor VARCHAR in the parser and maxlen set to \"infinite\"(-1?) for TEXT?\nOr perhaps CHAR could be put into the same type also?\n\nIf we have a type called VARTEXT(int maxLen, bool doPaddingProcessing):\n\nVARCHAR(10) becomes VARTEXT(10, false)\t// 10 chars, no padding\nTEXT becomes VARTEXT(0, false)\t\t// infinite length, no padding\nCHAR(10) becomes VARTEXT(10, true)\t// 10 chars, padded\n\nWould not this be easier to handle than three different types? This\ntype stuff would be handled in the parser. There would be only one\nstorage function, which could do any kind of coding to make the VARTEXT\ntake as little space as possible on disk.\nPerhaps it would (in some cases) be good to have the possibility to\nspecify compression of the text. That could be another bool attribute\nto VARTEXT, used by \"COMPRESSED VARCHAR()\" or \"COMPRESSED TEXT\" so that\npeople can squeeze the maximum out of their disk space.\n\nA related question: Is it possible to store tuples over more than one\nblock? Would it be possible to split a big TEXT into multiple blocks?\n\n/* m */\n", "msg_date": "Fri, 09 Jan 1998 14:26:43 +0100", "msg_from": "Mattias Kregert <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] varchar/char size" }, { "msg_contents": "> > Does someone want to remind me why we allocate the full size for char()\n> > and varchar(), when we really can just allocate the size of the given\n> > string?\n> >\n> > I relize char() has to be padded, but why varchar()?\n> >\n> > In my experience, char() is full size as defined by create, and\n> > varchar() is the the size of the actual data in the field, like text,\n> > but with a pre-defined limit.\n>\n> Is CHAR padded on disk? Of course it should be padded for\n> representation, but for storage, couldn't it be stored just like\n> TEXT or VARCHAR? Before storing, it could be trimmed, and when\n> read from storage, it could be padded with spaces on the right.\n\nMy CA/Ingres Admin manual points out that there is a tradeoff between\ncompressing tuples to save disk storage and the extra processing work\nrequired to uncompress for use. 
They suggest that the only case where you\nwould consider compressing on disk is when your system is very I/O bound,\nand you have CPU to burn.\n\nThe default for Ingres is to not compress anything, but you can specify\ncompression on a table-by-table basis.\n\nbtw, char() is a bit trickier to handle correctly if you do compress it on\ndisk, since trailing blanks must be handled correctly all the way through.\nFor example, you would want 'hi' = 'hi ' to be true, which is not a\nrequirement for varchar().\n\n - Tom\n\n", "msg_date": "Fri, 09 Jan 1998 14:50:42 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] varchar/char size" }, { "msg_contents": "> \n> Bruce Momjian wrote:\n> > \n> > Does someone want to remind me why we allocate the full size for char()\n> > and varchar(), when we really can just allocate the size of the given\n> > string?\n> > \n> > I relize char() has to be padded, but why varchar()?\n> > \n> > In my experience, char() is full size as defined by create, and\n> > varchar() is the the size of the actual data in the field, like text,\n> > but with a pre-defined limit.\n> \n> Is CHAR padded on disk? Of course it should be padded for \n> representation, but for storage, couldn't it be stored just like\n> TEXT or VARCHAR? Before storing, it could be trimmed, and when\n> read from storage, it could be padded with spaces on the right.\n\nWell, traditionally, CHAR() is fixed length, and VARCHAR() is variable. \nThis is how Ingres and Informix handle it.\n\nThere is very little difference in the types because internally they are\nhandled the same. The only difference is when we need to specify a max\nlength, we do that with those types.\n\n> \n> Btw, why is VARCHAR not simply an alias for TEXT, with maxlen added?\n> Shouldn't these types be the same internally, but with maxlen checked\n> for VARCHAR in the parser and maxlen set to \"infinite\"(-1?) for TEXT?\n> Or perhaps CHAR could be put into the same type also?\n\nRight now we do some of the special processing using the OID of VARCHAR\nand BPCHAR, which is char(). We would have to generalize the length\nidea for each type, which is not hard to do.\n\n> \n> If we have a type called VARTEXT(int maxLen, bool doPaddingProcessing):\n> \n> VARCHAR(10) becomes VARTEXT(10, false)\t// 10 chars, no padding\n> TEXT becomes VARTEXT(0, false)\t\t// infinite length, no padding\n> CHAR(10) becomes VARTEXT(10, true)\t// 10 chars, padded\n> \n> Would not this be easier to handle than three different types? This\n> type stuff would be handled in the parser. There would be only one\n> storage function, which could do any kind of coding to make the VARTEXT\n> take as little space as possible on disk.\n> Perhaps it would (in some cases) be good to have the possibility to\n> specify compression of the text. That could be another bool attribute\n> to VARTEXT, used by \"COMPRESSED VARCHAR()\" or \"COMPRESSED TEXT\" so that\n> people can squeeze the maximum out of their disk space.\n> \n> A related question: Is it possible to store tuples over more than one\n> block? 
Would it be possible to split a big TEXT into multiple blocks?\n\nI don't know why it is not possible, but I suppose it goes to the\ninternal workings of PostgreSQL and how rows are added and modified.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Fri, 9 Jan 1998 10:58:59 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] varchar/char size" }, { "msg_contents": "On Fri, 9 Jan 1998, Bruce Momjian wrote:\n\n> > Is CHAR padded on disk? Of course it should be padded for \n> > representation, but for storage, couldn't it be stored just like\n> > TEXT or VARCHAR? Before storing, it could be trimmed, and when\n> > read from storage, it could be padded with spaces on the right.\n> \n> Well, traditionally, CHAR() is fixed length, and VARCHAR() is variable. \n> This is how Ingres and Informix handle it.\n\n\tBut how do we store this to the file system? If I setup a table\nwith a char(20), and one of the records has a value of \"a\", does it then\nwrite 1 byte to the file system, or does it write 1 byte (\"a\") + 19 bytes\n(\"\")?\n\n\tIf the second, is there a reason why, as far as writing to the\nfile system is concerned, char() can't be treated like varchar()? I'd\nimagine you could save one helluva lot of \"disk space\" by doing that, no?\n\n\tThen again, thinkiing of it that way, I may as well just use\nvarchar() instead, right?\n\n\tSee, this is what *really* gets me lost...I use text for\neverything, since I really haven't got a clue as to *why* I'd want to use\neither char() or varchar() instead...\n\n\tNow, from what I *think* I recall you stating, char() and\nvarchar() are more for backwards compatibility? Compatibility with other\nSQL engines? If so...as long as we have a type char(), does our backend\nrepresentation have to be any different between char() and text? \n\n\n", "msg_date": "Fri, 9 Jan 1998 12:56:17 -0500 (EST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] varchar/char size" }, { "msg_contents": "> \n> On Fri, 9 Jan 1998, Bruce Momjian wrote:\n> \n> > > Is CHAR padded on disk? Of course it should be padded for \n> > > representation, but for storage, couldn't it be stored just like\n> > > TEXT or VARCHAR? Before storing, it could be trimmed, and when\n> > > read from storage, it could be padded with spaces on the right.\n> > \n> > Well, traditionally, CHAR() is fixed length, and VARCHAR() is variable. \n> > This is how Ingres and Informix handle it.\n> \n> \tBut how do we store this to the file system? If I setup a table\n> with a char(20), and one of the records has a value of \"a\", does it then\n> write 1 byte to the file system, or does it write 1 byte (\"a\") + 19 bytes\n> (\"\")?\n\n20+VARHDRSZ bytes for char(20), 1+VARHDRSZ for varchar(20)\n\n> \n> \tIf the second, is there a reason why, as far as writing to the\n> file system is concerned, char() can't be treated like varchar()? I'd\n> imagine you could save one helluva lot of \"disk space\" by doing that, no?\n\nBut then you have variable length records where char(x) forces a fixed\nlength. 
Currently, the code treats all varlena structures as variable,\nso we readly don't take advantage of this, but we may some day.\n\n> \n> \tThen again, thinkiing of it that way, I may as well just use\n> varchar() instead, right?\n\nYep.\n\n> \n> \tSee, this is what *really* gets me lost...I use text for\n> everything, since I really haven't got a clue as to *why* I'd want to use\n> either char() or varchar() instead...\n> \n> \tNow, from what I *think* I recall you stating, char() and\n> varchar() are more for backwards compatibility? Compatibility with other\n> SQL engines? If so...as long as we have a type char(), does our backend\n> representation have to be any different between char() and text? \n\nWe need the fixed length trim cabability of char(), and I think we need\nthe padding of char() too.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Fri, 9 Jan 1998 13:21:38 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] varchar/char size" } ]
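A small illustrative example of the practical difference being described here (table and values made up):

    create table pad_demo (c char(5), v varchar(5));
    insert into pad_demo values ('hi', 'hi');
    -- c comes back blank-padded to its declared width: 'hi   '
    -- v stores and returns only the two characters supplied
    --   (just the data plus the varlena header on disk)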
[ { "msg_contents": "> > I created a table with two columns of type int, and loaded about 300 K records\n> > in it. So, the total size of the table is approx. that of 600 K integers,\n> > roughly 2.4 MB.\n> > But, the file corresponding to the table in pgsql/data/base directory\n> > has a size of 19 MB. I was wondering if I have done something wrong in\n> > the installation or usage, or is it the normal behavior ?\n> \n> 48 bytes + each row header (on my aix box..._your_ mileage may vary)\n> 8 bytes + two int fields @ 4 bytes each\n> 4 bytes + pointer on page to tuple\n> -------- =\n> 60 bytes per tuple\n> \n> 8192 / 60 give 136 tuples per page.\n> \n> 300000 / 136 ... round up ... need 2206 pages which gives us ...\n> \n> 2206 * 8192 = 18,071,532\n\nThe above is for the current release of 6.2.1. For 6.3, a couple of things\nhave been removed from the header that gives a 13% size savings for the above.\nThat percentage will go down of course as you add fields to the table.\n\nA little more accurate by including the tuple rounding before storage. For\nme the above would still be true if there is one or two int4s since the four\nbytes I would save would be taken back by the double-word tuple alignment.\n\nWith the current src tree...again, all with aix alignment...\n\n 40 bytes + each row header\n 8 bytes + two int fields @ 4 bytes each\n--------- =\n 48 bytes per tuple (round up to next highest mulitple of 8)\n 4 bytes + pointer on page to tuple\n--------- =\n 52 bytes per tuple\n \n8192 bytes - page size\n 8 bytes - page header\n 0 bytes - \"special\" Opaque space at page end...currently unused.\n---------- =\n8184 bytes\n\n8184 / 52 gives 157 tuples per page.\n\n300000 / 157 ... round up ... need 1911 pages which gives us ...\n\n1911 * 8192 = 15,654,912 ... 13% smaller than 6.2 file size!\n\nspace = pg_sz * ceil(num_tuples / floor((pg_sz - pg_hdr - pg_opaque) / tup_sz))\n\nwhere tup_sz is figured out from above. You can figure out what your\nplatform is using by creating the table, inserting one record and then\nexamining the table file with a binary editor such as bpatch or beav.\n\nUsing the above and knowing the size of the fields, you should be able\nto accurately calculate the amount a space any table will require before\nyou create it.\n\ndarrenk\n", "msg_date": "Wed, 7 Jan 1998 13:03:07 -0500", "msg_from": "[email protected] (Darren King)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] database size" }, { "msg_contents": "> A little more accurate by including the tuple rounding before storage. For\n> me the above would still be true if there is one or two int4s since the four\n> bytes I would save would be taken back by the double-word tuple alignment.\n> \n> With the current src tree...again, all with aix alignment...\n> \n> 40 bytes + each row header\n> 8 bytes + two int fields @ 4 bytes each\n> --------- =\n> 48 bytes per tuple (round up to next highest mulitple of 8)\n> 4 bytes + pointer on page to tuple\n> --------- =\n> 52 bytes per tuple\n> \n\nThanks. Updated FAQ.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Wed, 7 Jan 1998 14:26:49 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] database size" } ]
[ { "msg_contents": "I have applied the following patch to allow varchar() fields to store\njust the needed bytes, and not the maximum size.\n\nI have made a few more cleanup changes related to this, and it seems to\nwork perfectly.\n\nI think this is one of those \"Why didn't we do this earlier?\" patches.\n\n\ttest=> create table testvarchar (x varchar(2));\n\tCREATE\n\ttest=> insert into testvarchar values ('1');\n\tINSERT 912201 1\n\ttest=> insert into testvarchar values ('22');\n\tINSERT 912202 1\n\ttest=> insert into testvarchar values ('333');\n\tINSERT 912203 1\n\ttest=> select * from testvarchar;\n\t x\n\t--\n\t 1\n\t22\n\t33\n\t(3 rows)\n\nAnd if I create a varchar(2000), it does not take several 8k blocks to\nstore 10 rows, like it did before.\n\nThis makes varchar() behave much more like text, with a pre-defined\nlength limit.\n\nAlso, the fact that varchar() no longer has all those trailing zero's\nshould make it more portable with other types.\n\n---------------------------------------------------------------------------\n\n*** ./backend/utils/adt/varchar.c.orig\tWed Jan 7 12:43:00 1998\n--- ./backend/utils/adt/varchar.c\tWed Jan 7 13:26:16 1998\n***************\n*** 70,85 ****\n \t\ttyplen = len + VARHDRSZ;\n \t}\n \telse\n- \t{\n \t\tlen = typlen - VARHDRSZ;\n- \t}\n \n \tif (len > 4096)\n \t\telog(ERROR, \"bpcharin: length of char() must be less than 4096\");\n \n \tresult = (char *) palloc(typlen);\n! \t*(int32 *) result = typlen;\n! \tr = result + VARHDRSZ;\n \tfor (i = 0; i < len; i++, r++, s++)\n \t{\n \t\t*r = *s;\n--- 70,83 ----\n \t\ttyplen = len + VARHDRSZ;\n \t}\n \telse\n \t\tlen = typlen - VARHDRSZ;\n \n \tif (len > 4096)\n \t\telog(ERROR, \"bpcharin: length of char() must be less than 4096\");\n \n \tresult = (char *) palloc(typlen);\n! \tVARSIZE(result) = typlen;\n! \tr = VARDATA(result);\n \tfor (i = 0; i < len; i++, r++, s++)\n \t{\n \t\t*r = *s;\n***************\n*** 108,116 ****\n \t}\n \telse\n \t{\n! \t\tlen = *(int32 *) s - VARHDRSZ;\n \t\tresult = (char *) palloc(len + 1);\n! \t\tStrNCpy(result, s + VARHDRSZ, len+1);\t/* these are blank-padded */\n \t}\n \treturn (result);\n }\n--- 106,114 ----\n \t}\n \telse\n \t{\n! \t\tlen = VARSIZE(s) - VARHDRSZ;\n \t\tresult = (char *) palloc(len + 1);\n! \t\tStrNCpy(result, VARDATA(s), len+1);\t/* these are blank-padded */\n \t}\n \treturn (result);\n }\n***************\n*** 129,155 ****\n varcharin(char *s, int dummy, int typlen)\n {\n \tchar\t *result;\n! \tint\t\t\tlen = typlen - VARHDRSZ;\n \n \tif (s == NULL)\n \t\treturn ((char *) NULL);\n \n! \tif (typlen == -1)\n! \t{\n! \n! \t\t/*\n! \t\t * this is here because some functions can't supply the typlen\n! \t\t */\n! \t\tlen = strlen(s);\n! \t\ttyplen = len + VARHDRSZ;\n! \t}\n \n \tif (len > 4096)\n \t\telog(ERROR, \"varcharin: length of char() must be less than 4096\");\n \n! \tresult = (char *) palloc(typlen);\n! \t*(int32 *) result = typlen;\n! \tstrncpy(result + VARHDRSZ, s, len+1);\n \n \treturn (result);\n }\n--- 127,147 ----\n varcharin(char *s, int dummy, int typlen)\n {\n \tchar\t *result;\n! \tint\t\t\tlen;\n \n \tif (s == NULL)\n \t\treturn ((char *) NULL);\n \n! \tlen = strlen(s) + VARHDRSZ;\n! \tif (typlen != -1 && len > typlen)\n! \t\tlen = typlen;\t/* clip the string at max length */\n \n \tif (len > 4096)\n \t\telog(ERROR, \"varcharin: length of char() must be less than 4096\");\n \n! \tresult = (char *) palloc(len);\n! \tVARSIZE(result) = len;\n! 
\tmemmove(VARDATA(result), s, len - VARHDRSZ);\n \n \treturn (result);\n }\n***************\n*** 168,176 ****\n \t}\n \telse\n \t{\n! \t\tlen = *(int32 *) s - VARHDRSZ;\n \t\tresult = (char *) palloc(len + 1);\n! \t\tStrNCpy(result, s + VARHDRSZ, len+1);\n \t}\n \treturn (result);\n }\n--- 160,168 ----\n \t}\n \telse\n \t{\n! \t\tlen = VARSIZE(s) - VARHDRSZ;\n \t\tresult = (char *) palloc(len + 1);\n! \t\tStrNCpy(result, VARDATA(s), len+1);\n \t}\n \treturn (result);\n }\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Wed, 7 Jan 1998 14:41:21 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "varchar size" } ]
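For readers following the patch, a simplified sketch of the varlena layout that VARHDRSZ, VARSIZE and VARDATA refer to (illustrative; see the real definitions in the backend headers):

    /* simplified picture of a variable-length attribute */
    struct varlena
    {
        int32   vl_len;     /* total size in bytes, INCLUDING this header */
        char    vl_dat[1];  /* the actual data follows */
    };

    /* VARHDRSZ   -- size of the length word (4 bytes)           */
    /* VARSIZE(p) -- total size, header included                 */
    /* VARDATA(p) -- pointer to the data just past the header    */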
[ { "msg_contents": "I see a small problem I am working on with the new varchar().\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Wed, 7 Jan 1998 17:21:39 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "varchar" } ]
[ { "msg_contents": "Bruce Momjian <[email protected]>\n\n> > \n> > Doing a VACUUM on the regression database I'm getting:-\n> > \n> > regression=> vacuum;\n> > ABORT: nodeRead: Bad type 0\n> > regression=>\n> > \n> > \n> > Keith.\n> > \n> > \n> > \n> \n> Try the newest version. I think I fixed it.\n> \n\nOops, forgot to reply...\n\nYes, this fixes the problem nicely, thanks Bruce.\n\nKeith.\n\n", "msg_date": "Wed, 7 Jan 1998 22:39:51 +0000 (GMT)", "msg_from": "Keith Parks <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] VACUUM error on CVS build 07-JAN-98" } ]
[ { "msg_contents": "I have found that the varchar() change I made requires a new\npg_attribute field, so I am rolling back the change until I get a fully\nworking patch.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Wed, 7 Jan 1998 22:03:22 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "rollback varchar change" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> I have found that the varchar() change I made requires a new\n> pg_attribute field, so I am rolling back the change until I get a fully\n ^^^^^^^^^^^^^^^^^^\n\nPlease try some other way. As I remember, you said about breaking\nvl_len into 2 parts - very nice for me, but I'd recommend to leave\nvarlena as is and add new structure - for compatibility and to allow\ntext (etc) be longer 2^16 someday (multi-representation feature).\nJust like attlen -1 is used in many parts of code to flag varlena,\nyou could use -2 to flag new structure.\n\n> working patch.\n\nVadim\n", "msg_date": "Thu, 08 Jan 1998 11:47:36 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] rollback varchar change" }, { "msg_contents": "> \n> Bruce Momjian wrote:\n> > \n> > I have found that the varchar() change I made requires a new\n> > pg_attribute field, so I am rolling back the change until I get a fully\n> ^^^^^^^^^^^^^^^^^^\n> \n> Please try some other way. As I remember, you said about breaking\n> vl_len into 2 parts - very nice for me, but I'd recommend to leave\n> varlena as is and add new structure - for compatibility and to allow\n> text (etc) be longer 2^16 someday (multi-representation feature).\n> Just like attlen -1 is used in many parts of code to flag varlena,\n> you could use -2 to flag new structure.\n\nOK, I have now figured out that my original idea was sound. The only\nproblem is that I started using VARSIZE in varchar.c instead of the old\nvcTruelen() function. It turns out that constants have the full length\nof the type with trailing nulls, and the disk data doesn't. I put the\nold comparison function back, and it seems to be working just fine now.\n\nI will look through the code some more to make sure we are safe. The\nregression tests pass, so it must be working pefectly :-) The attlen\nfield must contain -1 so it knows it is a varlena, and just references\nthe real attribute length when it needs to do some creation things.\n\nThat works fine for my purposes.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Thu, 8 Jan 1998 00:07:05 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] rollback varchar change" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> >\n> > Bruce Momjian wrote:\n> > >\n> > > I have found that the varchar() change I made requires a new\n> > > pg_attribute field, so I am rolling back the change until I get a fully\n> > ^^^^^^^^^^^^^^^^^^\n> >\n> > Please try some other way. As I remember, you said about breaking\n> > vl_len into 2 parts - very nice for me, but I'd recommend to leave\n> > varlena as is and add new structure - for compatibility and to allow\n> > text (etc) be longer 2^16 someday (multi-representation feature).\n> > Just like attlen -1 is used in many parts of code to flag varlena,\n> > you could use -2 to flag new structure.\n> \n> OK, I have now figured out that my original idea was sound. 
The only\n\nHaving new column in pg_attribute I'm not sure that we need in new\nstructure...\n\nVadim\n", "msg_date": "Thu, 08 Jan 1998 12:54:15 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] rollback varchar change" } ]
[ { "msg_contents": "Let me go over the issues with the varchar() change.\n\nchar() will continue to store full bytes, while varchar() function like\ntext, but with a limited length.\n\nNow, pg_attribute.attlen is access everywhere, trying to find out how\nlong the data field is. With text, the length is -1, but with varchar\ncurrently, it is the max length, and hence, it has to store all those\nbytes.\n\nNow, my idea is to add a new pg_attribute column called 'attmaxlen'\nwhich will hold the maximum length of the field. char() and varchar()\nwill use this field, and the code will have be changed. Cases where\nattlen is referenced to determine data size will continue to use -1, but\nreferences to all functions that create a data entry will use the\nattmaxlen. I see 124 references to attlen in the code. Not too bad. \nMost are obvious.\n\nWe had some of this work in the past, fixing places where the size was\nnot properly passed into the table creation code, because varchar() and\nchar() do not have lengths defined in pg_type like everyone else, but it\nis only in pg_attribute.\n\nThis is a related change to allow data reference and tuple max length\nreference to be separate. I can see other new types using this field\nto.\n\nCome to think of it, I wonder if I could have the disk copy of\npg_attribute use the pg_type length, and use the pg_attribute length\nonly when creating/updating entries? I wonder if that is what it does\nalready. Looks like that may be true.\n\nComments?\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Wed, 7 Jan 1998 23:31:02 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "varchar() change" }, { "msg_contents": "> Let me go over the issues with the varchar() change.\n>\n> char() will continue to store full bytes, while varchar() function like\n> text, but with a limited length.\n>\n> Now, pg_attribute.attlen is access everywhere, trying to find out how\n> long the data field is. With text, the length is -1, but with varchar\n> currently, it is the max length, and hence, it has to store all those\n> bytes.\n>\n> Now, my idea is to add a new pg_attribute column called 'attmaxlen'\n> which will hold the maximum length of the field. char() and varchar()\n> will use this field, and the code will have be changed. Cases where\n> attlen is referenced to determine data size will continue to use -1, but\n> references to all functions that create a data entry will use the\n> attmaxlen. I see 124 references to attlen in the code. Not too bad.\n> Most are obvious.\n>\n> We had some of this work in the past, fixing places where the size was\n> not properly passed into the table creation code, because varchar() and\n> char() do not have lengths defined in pg_type like everyone else, but it\n> is only in pg_attribute.\n>\n> This is a related change to allow data reference and tuple max length\n> reference to be separate. I can see other new types using this field\n> to.\n>\n> Come to think of it, I wonder if I could have the disk copy of\n> pg_attribute use the pg_type length, and use the pg_attribute length\n> only when creating/updating entries? I wonder if that is what it does\n> already. 
Looks like that may be true.\n>\n> Comments?\n\nIs what you are trying to do related to what could be used to implement\nother (SQL92) data types like numeric(precision,scale) where there are one\nor two additional parameters which are assigned when a column/class/type is\ndefined and which must be available when working with column/class/type\ninstances? We probably don't want to do anything about the latter for v6.3\n(spread pretty thin with the work we've already picked up) but I'd like to\ndo something for v6.4...\n\nOh, while I'm thinking about it, this kind of thing is probably also\nnecessary to get arrays working as expected (enforcing dimensions specified\nin the declaration).\n\n - Tom\n\n", "msg_date": "Thu, 08 Jan 1998 05:37:29 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] varchar() change" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Let me go over the issues with the varchar() change.\n> \n> char() will continue to store full bytes, while varchar() function like\n> text, but with a limited length.\n> \n> Now, pg_attribute.attlen is access everywhere, trying to find out how\n> long the data field is. With text, the length is -1, but with varchar\n> currently, it is the max length, and hence, it has to store all those\n> bytes.\n> \n> Now, my idea is to add a new pg_attribute column called 'attmaxlen'\n> which will hold the maximum length of the field. char() and varchar()\n> will use this field, and the code will have be changed. Cases where\n> attlen is referenced to determine data size will continue to use -1, but\n> references to all functions that create a data entry will use the\n> attmaxlen. I see 124 references to attlen in the code. Not too bad.\n> Most are obvious.\n\nOk. I agreed that we have to add new column to pg_attribute, but I recommend\n\n1. use some other name - not attmaxlen: this field could be used for \n NUMBER, etc and \"maxlen\" is not good name for storing precision, etc\n (atttspec ?)\n2. use -2 for varchar: let's think about attlen -1 as about \"un-limited\"\n varlena, and about attlen -2 as about \"limited\" one, with maxlen\n specified in att???. I don't see problem with -2 - just new case of\n switch (attlen) - and this will allow leave text (-1) untouched\n (or you will have to store -1 in att??? for text to differentiate\n text from varchar)...\n Hmm, ... on the other hand, we could check atttype before switch(attlen)\n in heaptuple.c and other places - don't know what's better...\n\nVadim\n", "msg_date": "Thu, 08 Jan 1998 12:51:45 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] varchar() change" }, { "msg_contents": "> > Come to think of it, I wonder if I could have the disk copy of\n> > pg_attribute use the pg_type length, and use the pg_attribute length\n> > only when creating/updating entries? I wonder if that is what it does\n> > already. Looks like that may be true.\n> >\n> > Comments?\n> \n> Is what you are trying to do related to what could be used to implement\n> other (SQL92) data types like numeric(precision,scale) where there are one\n> or two additional parameters which are assigned when a column/class/type is\n> defined and which must be available when working with column/class/type\n> instances? 
We probably don't want to do anything about the latter for v6.3\n> (spread pretty thin with the work we've already picked up) but I'd like to\n> do something for v6.4...\n> \n> Oh, while I'm thinking about it, this kind of thing is probably also\n> necessary to get arrays working as expected (enforcing dimensions specified\n> in the declaration).\n\nYes, I had the numeric in mind.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Thu, 8 Jan 1998 01:16:25 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] varchar() change" } ]
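For anyone who wants to see what actually ends up in the catalog for such columns, an illustrative query (the example table name is borrowed from the earlier varchar patch mail):

    SELECT c.relname, a.attname, a.attlen
    FROM   pg_class c, pg_attribute a
    WHERE  c.relname = 'testvarchar'
    AND    a.attrelid = c.oid
    AND    a.attnum > 0;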
[ { "msg_contents": "OK, I now figured out why the initial code was not working. It was not\nconstant null padding, but the fact that the old vcTruetype did not\ninclude VARHDRSZ, and VARSIZE does, to strncmp() was comparing junk\nbytes.\n\nNow, with the proper VARSIZE() - VARHDRSZ, things work fine. No varlena\nstructure changes, and no new columns.\n\nThe funny thing is that with varchar() and char(), because they are\nfixed length for all data but are also varlena, they seem to work fine\nin the code as -1 or as the default size plus VARHDRSZ, so code works no\nmatter how it is referenced.\n\nWith our new change, this will not work. Data access MUST use the type\nlength of -1, and maximum length checks must be performed using the\npg_attribute length.\n\nFortunately, it looks like all the 'attlen' references I saw already had\nthe proper handling, in part to fix problems we had in creating tables,\nlike the old SELECT INTO TABLE bug for varchar, where all the varchar\nlengths were -1.\n\nIt turns out the old code could use -1 or the max length, but had to use\nthe max length when creating tables, and we hacked that around to get it\nworking.\n\nNow, it turns out the only way to get stuff into a able is through\nINSERT with a constant, data from another table, or COPY, and I think we\ncheck all of those for proper length already. In fact, we can reduce\nthe restriction that the old just not be longer than the new.\n\nI can't see any place where the system uses the pg_attribute length over\nthe pg_type length.\n\nThe Descriptor tuple is type length, execept when a table is being\ncreated, but that makes sense.\n\nLet's let it run for a while and see if any problems come up. If\nsomeone wants to hunt through the code looking for a possible problem,\ngo ahead.\n\nI think it as important feature for people using varchar(), and\nhopefully if there is a problem, testing will bring it out. Usually a\nproblem of this nature will show itself fairly early.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Thu, 8 Jan 1998 01:29:55 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "varchar() fix" } ]
[ { "msg_contents": "From: Constantin Teodorescu <[email protected]> wrote\n\n> Peter T Mount wrote:\n> >\n> > The only solution I was able to give was for them to use cursors, and\n> > fetch the result in chunks.\n>\n> Got it!!!\n>\n> Seems everyone has 'voted' for using cursors.\n>\nAs I saw it the cursors were suggested as a replacement for opening a\nseparate connection, not as a substitute for row-level callbacks, which\nwould be very nice to have and which can probably be implemented in a\nbackward compatible manner anyhow.\n\n> As a matter of fact, I have tested both a\n> BEGIN ; DECLARE CURSOR ; FETCH N; END;\n> and a\n> SELECT FROM\n>\n> Both of them are locking for write the tables that they use, until end\n> of processing.\n>\n> Fetching records in chunks (100) would speed up a little the processing.\n>\n> But I am still convinced that if frontend would be able to process\n> tuples as soon as they come, the overall time of processing a big table\n> would be less.\n> Fetching in chunks, the frontend waits for the 100 records to come (time\n> A) and then process them (time B). A and B cannot be overlapped.\n>\nPerhaps you could overlap A2 and B1 (by sending the request for next 100\nand then processing the first 100.\n\nStill I think that using callbacks for special cases would be more\nefficient and also more \"symmetric\" with what backend does\n\n> Thanks a lot for helping me to decide. Reports in PgAccess will use\n> cursors.\n>\nI still urge you to add callbacks to libpq and libpgtcl.\n\nThe way I see it would be one additional function that sets (or resets\nif given NULL) the callback.\n\nBTW, are you sure that you can't do something similar using the current\nlibpq?\n\nHannu\n\n\n\n\n\n\n", "msg_date": "Thu, 08 Jan 1998 11:37:39 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": true, "msg_subject": "Re: I want to change libpq and libpgtcl for better handling of large\n\tquery" } ]
[ { "msg_contents": "Bruce wrote:\n> \n> On Wed, 7 Jan 1998, Bruce Momjian wrote:\n>> \n>> > In my experience, char() is full size as defined by create, and\n>> > varchar() is the the size of the actual data in the field, like\ntext,\n>> > but with a pre-defined limit.\n>> \n>> \tCan you remind me what the difference is between text and\nvarchar? Why\n>> would you use varchar over text?\n>\n>Only because SQL people are used to varchar, and not text, and\nsometimes\n>people want to have a maximum size if they are displaying this data in\na\n>form that is only of limited size\n\n1. Thanks for the very nice change !\n2. now the difference:\n\t- varchar must fit directly into tuple\n\t- varchar enforces a supplied max length (as in varchar(256))\n\t- text has no size limit (2Gb in Informix) (therefore should be\npointer to LOB iff >= max row size)\n\t- max size of varchar is limited by max row size (32k)\n\t- max size can be used to align btree index (advantage ?\n(Informix does it))\n\t- therefore varchar better performance than text for small texts\n(implementation specific)\n\t- index for text ? (is btree useful for avg. 50k html pages ? I\ndon't think so.)\n\nAndreas\n\t\n", "msg_date": "Thu, 8 Jan 1998 11:11:35 +0100", "msg_from": "Zeugswetter Andreas DBT <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] varchar/char size" } ]
[ { "msg_contents": "I posted a fairly large patch for the jdbc driver a couple of days ago,\nbut I haven't seen it appear on the patches list. Did anyone else see it?\n\nIf not, I'll repackage it into smaller chunks, and resend it.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Thu, 8 Jan 1998 14:17:03 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": true, "msg_subject": "Did the patch get recieved?" }, { "msg_contents": "On Thu, 8 Jan 1998, Peter T Mount wrote:\n\n> I posted a fairly large patch for the jdbc driver a couple of days ago,\n> but I haven't seen it appear on the patches list. Did anyone else see it?\n> \n> If not, I'll repackage it into smaller chunks, and resend it.\n\n\tJust FTP it into ftp.postgresql.org:/pub/incoming and let us know\nits there? Then I'll apply to the source tree from that...if its that\nbig?\n\n", "msg_date": "Thu, 8 Jan 1998 10:01:18 -0500 (EST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Did the patch get recieved?" }, { "msg_contents": "On Thu, 8 Jan 1998, The Hermit Hacker wrote:\n\n> On Thu, 8 Jan 1998, Peter T Mount wrote:\n> \n> > I posted a fairly large patch for the jdbc driver a couple of days ago,\n> > but I haven't seen it appear on the patches list. Did anyone else see it?\n> > \n> > If not, I'll repackage it into smaller chunks, and resend it.\n> \n> \tJust FTP it into ftp.postgresql.org:/pub/incoming and let us know\n> its there? Then I'll apply to the source tree from that...if its that\n> big?\n\nWill do. I was going to post another patch with the bits I'm working on\nnow, so I'll combine the lot in one go. That will probably be tomorrow.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Thu, 8 Jan 1998 22:20:57 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Did the patch get recieved?" } ]
[ { "msg_contents": "I've just fixed a minor bug in the datestyle handling in the jdbc driver\nthat was caused by two spaces being added after NOTICE: in notifications\nbeing sent from the backend.\n\nThis broke how the jdbc driver discovered what datestyle is in use.\n\nHopefully, the changes I've made should make it more resilient to this\ntype of change.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Thu, 8 Jan 1998 14:20:03 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": true, "msg_subject": "When did the formatting of NOTICE messages change?" } ]
[ { "msg_contents": "> I have applied the following patch to allow varchar() fields to store\n> just the needed bytes, and not the maximum size.\n> \n> ...\n> \n> And if I create a varchar(2000), it does not take several 8k blocks to\n> store 10 rows, like it did before.\n\nFixes the following \"problem\" too...\n\nCurrently, you can create a table with attributes that _can_ total more\nthan the max_tup_size if they maximum size, but not be able to insert\nvalid data into all of them.\n\nFor instance, ...\n\ncreate table foo (bar varchar(4000),\n bah varchar(3000),\n baz varchar(2000));\n\n... is fine as long as one of the attributes is null. Now you can have\nnon-null values for all three as long as they don't go over max_tup_size\nin _total_.\n\ndarrenk\n\n", "msg_date": "Thu, 8 Jan 1998 11:31:10 -0500", "msg_from": "[email protected] (Darren King)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] varchar size" } ]
[ { "msg_contents": "\nA few things that I have noticed will be affected by allowing the\ndisk block size to be other than 8k. (4k, 8k, 16k or 32k)\n\n1. Rules\n\nThe rule system currently stores plans as tuples in pg_rewrite.\nMaking the block size smaller will accordingly reduce the size of\nthe rules you can create.\n\nBut, the converse is also true...bigger blocks -> bigger rules.\n\nAre the rules ever going to become large objects? Is this something\nto put on the TODO to investigate now that Peter has fixed them?\n\n\n2. Attribute limits\n\nShould the size limits of the varchar/char be driven by the chosen\nblock size?\n\nSince the current max len is 4k, should I for now advise that the\nblock size not be made smaller than the current 8k? Or could the\nlimit be dropped from 4096 to 4000 to allow 4k blocks?\n\nOracle has a limit of 2000 on their varchar since they allow blocks\nof as little as 2k.\n\nSeems there would be an inconsistency in there with telling the user\nthat the text/varchar/char limit is 4096 and then not letting them\nstore a value of that size because of the tuple/block size limit.\n\nPerhaps mention this as a caveat also if using 4k blocks? Are 4k\nblock something that someone would be beneficial or only 16k/32k?\n\nOn the flip-side of this, uping the max text size though will run\ninto the 8k packet size.\n\nI've run thru the regression tests a few times with 4k blocks and\nthey seem to pass with the same differences. Today I will try with\n16k and 32k. If those work, I'll submit the patch for perusal.\n\nComments welcome...\n\[email protected]\n", "msg_date": "Thu, 8 Jan 1998 12:02:40 -0500", "msg_from": "[email protected] (Darren King)", "msg_from_op": true, "msg_subject": "Disk block size issues." }, { "msg_contents": "> \n> \n> A few things that I have noticed will be affected by allowing the\n> disk block size to be other than 8k. (4k, 8k, 16k or 32k)\n> \n> 1. Rules\n> \n> The rule system currently stores plans as tuples in pg_rewrite.\n> Making the block size smaller will accordingly reduce the size of\n> the rules you can create.\n\nI say make it match the given block size at compile time.\n\n> \n> But, the converse is also true...bigger blocks -> bigger rules.\n> \n> Are the rules ever going to become large objects? Is this something\n> to put on the TODO to investigate now that Peter has fixed them?\n> \n> \n> 2. Attribute limits\n> \n> Should the size limits of the varchar/char be driven by the chosen\n> block size?\n\nYes, they should be calculated based on the compile block size.\n\n> \n> Since the current max len is 4k, should I for now advise that the\n> block size not be made smaller than the current 8k? Or could the\n> limit be dropped from 4096 to 4000 to allow 4k blocks?\n> \n> Oracle has a limit of 2000 on their varchar since they allow blocks\n> of as little as 2k.\n> \n> Seems there would be an inconsistency in there with telling the user\n> that the text/varchar/char limit is 4096 and then not letting them\n> store a value of that size because of the tuple/block size limit.\n> \n> Perhaps mention this as a caveat also if using 4k blocks? Are 4k\n> block something that someone would be beneficial or only 16k/32k?\n\nJust make the max size based on the block size.\n\n> \n> On the flip-side of this, uping the max text size though will run\n> into the 8k packet size.\n\nThis is an interesting point. 
While we can compute most of the changes\nat compile time, we will have to communicate with clients that were\ncompiled with different max limits.\n\nI recommend we increase the max client buffer size to what we believe is\nthe largest block size anyone would ever reasonably choose. That way,\nall can communicate. I recommend you contact Peter Mount for JDBC,\nOpenlink for ODBC, and all the other client maintainers and let them\nknow the changes will be in 6.3 so they can be ready with new version\nwhen 6.3 starts beta on February 1.\n\n> \n> I've run thru the regression tests a few times with 4k blocks and\n> they seem to pass with the same differences. Today I will try with\n> 16k and 32k. If those work, I'll submit the patch for perusal.\n\nGreat.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Thu, 8 Jan 1998 21:03:51 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Disk block size issues." }, { "msg_contents": "Darren King wrote:\n> \n> A few things that I have noticed will be affected by allowing the\n> disk block size to be other than 8k. (4k, 8k, 16k or 32k)\n> \n> 1. Rules\n> \n> The rule system currently stores plans as tuples in pg_rewrite.\n> Making the block size smaller will accordingly reduce the size of\n> the rules you can create.\n> \n> But, the converse is also true...bigger blocks -> bigger rules.\n> \n> Are the rules ever going to become large objects? Is this something\n> to put on the TODO to investigate now that Peter has fixed them?\n\nIt's better to implement multi-representation feature for all verlena\ntypes. We could use on-disk vl_len < 0 to flag that data of size ABS(vl_len)\nare in large object specified in vl_data. It seems very easy to do.\n\nThis will also resolve item 2 below.\n\nVadim\n\n> \n> 2. Attribute limits\n> \n> Should the size limits of the varchar/char be driven by the chosen\n> block size?\n> \n> Since the current max len is 4k, should I for now advise that the\n> block size not be made smaller than the current 8k? Or could the\n> limit be dropped from 4096 to 4000 to allow 4k blocks?\n> \n> Oracle has a limit of 2000 on their varchar since they allow blocks\n> of as little as 2k.\n> \n> Seems there would be an inconsistency in there with telling the user\n> that the text/varchar/char limit is 4096 and then not letting them\n> store a value of that size because of the tuple/block size limit.\n> \n> Perhaps mention this as a caveat also if using 4k blocks? Are 4k\n> block something that someone would be beneficial or only 16k/32k?\n> \n> On the flip-side of this, uping the max text size though will run\n> into the 8k packet size.\n> \n> I've run thru the regression tests a few times with 4k blocks and\n> they seem to pass with the same differences. Today I will try with\n> 16k and 32k. If those work, I'll submit the patch for perusal.\n> \n> Comments welcome...\n> \n> [email protected]\n", "msg_date": "Fri, 09 Jan 1998 09:50:54 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Disk block size issues." } ]
[ { "msg_contents": "Hi,\n\nI seem to be getting a failure in the \"triggers\" regression tests in the\nlatest CVS source.\n\nI can't be 100% sure that this test worked before but now I'm seeing errors\nthat would indicate check_foreign_key() is not working.\n\nAnyone else seeing this?\n\nKeith.\n\n\n--- expected/triggers.out Tue Jan 6 20:59:58 1998\n+++ results/triggers.out Thu Jan 8 18:08:12 1998\n@@ -40,27 +40,25 @@\n QUERY: insert into fkeys2 values (40, '4', 5);\n QUERY: insert into fkeys2 values (50, '5', 3);\n QUERY: insert into fkeys2 values (70, '5', 3);\n-ERROR: check_fkeys2_pkey_exist: tuple references non-existing key in pkeys\n QUERY: insert into fkeys values (10, '1', 2);\n QUERY: insert into fkeys values (30, '3', 3);\n QUERY: insert into fkeys values (40, '4', 2);\n QUERY: insert into fkeys values (50, '5', 2);\n QUERY: insert into fkeys values (70, '5', 1);\n-ERROR: check_fkeys_pkey_exist: tuple references non-existing key in pkeys\n QUERY: insert into fkeys values (60, '6', 4);\n-ERROR: check_fkeys_pkey2_exist: tuple references non-existing key in fkeys2\n QUERY: delete from pkeys where pkey1 = 30 and pkey2 = '3';\n NOTICE: check_pkeys_fkey_cascade: 1 tuple(s) of fkeys are deleted\n-ERROR: check_fkeys2_fkey_restrict: tuple referenced in fkeys\n+NOTICE: check_pkeys_fkey_cascade: 1 tuple(s) of fkeys2 are deleted\n QUERY: delete from pkeys where pkey1 = 40 and pkey2 = '4';\n NOTICE: check_pkeys_fkey_cascade: 1 tuple(s) of fkeys are deleted\n NOTICE: check_pkeys_fkey_cascade: 1 tuple(s) of fkeys2 are deleted\n QUERY: update pkeys set pkey1 = 7, pkey2 = '70' where pkey1 = 50 and pkey2 = \n'5';\n NOTICE: check_pkeys_fkey_cascade: 1 tuple(s) of fkeys are deleted\n-ERROR: check_fkeys2_fkey_restrict: tuple referenced in fkeys\n+NOTICE: check_pkeys_fkey_cascade: 1 tuple(s) of fkeys2 are deleted\n QUERY: update pkeys set pkey1 = 7, pkey2 = '70' where pkey1 = 10 and pkey2 = \n'1';\n NOTICE: check_pkeys_fkey_cascade: 1 tuple(s) of fkeys are deleted\n NOTICE: check_pkeys_fkey_cascade: 1 tuple(s) of fkeys2 are deleted\n+ERROR: Cannot insert a duplicate key into a unique index\n QUERY: DROP TABLE pkeys;\n QUERY: DROP TABLE fkeys;\n QUERY: DROP TABLE fkeys2;\n\n", "msg_date": "Thu, 8 Jan 1998 20:28:19 +0000 (GMT)", "msg_from": "Keith Parks <[email protected]>", "msg_from_op": true, "msg_subject": "refinit, check_foreign_key() not working?" }, { "msg_contents": "> I seem to be getting a failure in the \"triggers\" regression tests in the\n> latest CVS source.\n\nI saw things like this yesterday (and they are in expected/triggers.out) but\ntoday they seem to have gone away (the orig file in my diff below is from\ntoday's distribution and the second file is today's result). The only \"ERROR\"\nmessages remaining are two reasonable ones.\n\nI'll update the source tree for regression output soon (hopefully tonight). I am\nseeing small differences in select_views.out, but afaict these are due to\nchanges in the backend but are still producing a valid result. One thing which\nis confusing is that the \"<\" operator for paths is currently just counting nodes\non the path. Funny enough there is an old comment from the original implementors\nsaying that it is a kludge and they will change it. 
I'm planning on changing\nthis to compare the lengths of the paths instead (a bit more intuitive I think).\n\n - Tom\n\ngolem$ diff triggers.out.orig triggers.out\n43d42\n< ERROR: check_fkeys2_pkey_exist: tuple references non-existing key in pkeys\n49d47\n< ERROR: check_fkeys_pkey_exist: tuple references non-existing key in pkeys\n51d48\n< ERROR: check_fkeys_pkey2_exist: tuple references non-existing key in fkeys2\n54c51\n< ERROR: check_fkeys2_fkey_restrict: tuple referenced in fkeys\n---\n> NOTICE: check_pkeys_fkey_cascade: 1 tuple(s) of fkeys2 are deleted\n60c57\n< ERROR: check_fkeys2_fkey_restrict: tuple referenced in fkeys\n---\n> NOTICE: check_pkeys_fkey_cascade: 1 tuple(s) of fkeys2 are deleted\n63a61\n> ERROR: Cannot insert a duplicate key into a unique index\ngolem$\n\n>\n>\n> I can't be 100% sure that this test worked before but now I'm seeing errors\n> that would indicate check_foreign_key() is not working.\n>\n> Anyone else seeing this?\n>\n> Keith.\n>\n> --- expected/triggers.out Tue Jan 6 20:59:58 1998\n> +++ results/triggers.out Thu Jan 8 18:08:12 1998\n> @@ -40,27 +40,25 @@\n> QUERY: insert into fkeys2 values (40, '4', 5);\n> QUERY: insert into fkeys2 values (50, '5', 3);\n> QUERY: insert into fkeys2 values (70, '5', 3);\n> -ERROR: check_fkeys2_pkey_exist: tuple references non-existing key in pkeys\n> QUERY: insert into fkeys values (10, '1', 2);\n> QUERY: insert into fkeys values (30, '3', 3);\n> QUERY: insert into fkeys values (40, '4', 2);\n> QUERY: insert into fkeys values (50, '5', 2);\n> QUERY: insert into fkeys values (70, '5', 1);\n> -ERROR: check_fkeys_pkey_exist: tuple references non-existing key in pkeys\n> QUERY: insert into fkeys values (60, '6', 4);\n> -ERROR: check_fkeys_pkey2_exist: tuple references non-existing key in fkeys2\n> QUERY: delete from pkeys where pkey1 = 30 and pkey2 = '3';\n> NOTICE: check_pkeys_fkey_cascade: 1 tuple(s) of fkeys are deleted\n> -ERROR: check_fkeys2_fkey_restrict: tuple referenced in fkeys\n> +NOTICE: check_pkeys_fkey_cascade: 1 tuple(s) of fkeys2 are deleted\n> QUERY: delete from pkeys where pkey1 = 40 and pkey2 = '4';\n> NOTICE: check_pkeys_fkey_cascade: 1 tuple(s) of fkeys are deleted\n> NOTICE: check_pkeys_fkey_cascade: 1 tuple(s) of fkeys2 are deleted\n> QUERY: update pkeys set pkey1 = 7, pkey2 = '70' where pkey1 = 50 and pkey2 =\n> '5';\n> NOTICE: check_pkeys_fkey_cascade: 1 tuple(s) of fkeys are deleted\n> -ERROR: check_fkeys2_fkey_restrict: tuple referenced in fkeys\n> +NOTICE: check_pkeys_fkey_cascade: 1 tuple(s) of fkeys2 are deleted\n> QUERY: update pkeys set pkey1 = 7, pkey2 = '70' where pkey1 = 10 and pkey2 =\n> '1';\n> NOTICE: check_pkeys_fkey_cascade: 1 tuple(s) of fkeys are deleted\n> NOTICE: check_pkeys_fkey_cascade: 1 tuple(s) of fkeys2 are deleted\n> +ERROR: Cannot insert a duplicate key into a unique index\n> QUERY: DROP TABLE pkeys;\n> QUERY: DROP TABLE fkeys;\n> QUERY: DROP TABLE fkeys2;\n\n\n\n", "msg_date": "Fri, 09 Jan 1998 02:56:48 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] refinit, check_foreign_key() not working?" } ]
[ { "msg_contents": "While implementing a method to retrieve the permissions on a table,\nthe statement: \"grant all on test to public;\" kills the backend.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Thu, 8 Jan 1998 21:57:44 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": true, "msg_subject": "grant broken" }, { "msg_contents": "> \n> While implementing a method to retrieve the permissions on a table,\n> the statement: \"grant all on test to public;\" kills the backend.\n\nWorks here.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Thu, 8 Jan 1998 20:48:00 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] grant broken" }, { "msg_contents": "On Thu, 8 Jan 1998, Bruce Momjian wrote:\n\n> > \n> > While implementing a method to retrieve the permissions on a table,\n> > the statement: \"grant all on test to public;\" kills the backend.\n> \n> Works here.\n\nI've just resynced with cvs, rebuilt from scratch and still:\n\ntest=> \\z\n\nDatabase = test\n +------------------+----------------------------------------------------+\n | Relation | Grant/Revoke Permissions |\n +------------------+----------------------------------------------------+\n | test | | \n +------------------+----------------------------------------------------+\ntest=> grant all on test to pmount;\nPQexec() -- Request was sent to backend, but backend closed the channel\nbefore responding.\n This probably means the backend terminated abnormally before or\nwhile processing the request.\n\nThis happens both with and without the large object patch, so that's ruled\nout.\n\nPlatform: Linux 2.0.27\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Fri, 9 Jan 1998 15:32:22 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] grant broken" }, { "msg_contents": "How about a new initdb?\n\n> \n> On Thu, 8 Jan 1998, Bruce Momjian wrote:\n> \n> > > \n> > > While implementing a method to retrieve the permissions on a table,\n> > > the statement: \"grant all on test to public;\" kills the backend.\n> > \n> > Works here.\n> \n> I've just resynced with cvs, rebuilt from scratch and still:\n> \n> test=> \\z\n> \n> Database = test\n> +------------------+----------------------------------------------------+\n> | Relation | Grant/Revoke Permissions |\n> +------------------+----------------------------------------------------+\n> | test | | \n> +------------------+----------------------------------------------------+\n> test=> grant all on test to pmount;\n> PQexec() -- Request was sent to backend, but backend closed the channel\n> before responding.\n> This probably means the backend terminated abnormally before or\n> while processing the request.\n> \n> This happens both with and without the large object patch, so that's ruled\n> out.\n> \n> Platform: Linux 2.0.27\n> \n> -- \n> Peter T Mount [email protected] or [email protected]\n> Main Homepage: http://www.demon.co.uk/finder\n> Work Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n> \n> \n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Fri, 9 Jan 1998 11:31:11 -0500 (EST)", "msg_from": "Bruce Momjian 
<[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] grant broken" }, { "msg_contents": "On Fri, 9 Jan 1998, Bruce Momjian wrote:\n\n> How about a new initdb?\n\nThat was the first thing I tried. When that didn't work, I removed the\nentire distribution, resynced the source with cvs, and recompiled.\n\nWhen checking that the large object fix wasn't to blame, I did a fresh\ninitdb again before testing.\n\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Fri, 9 Jan 1998 17:09:34 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] grant broken" }, { "msg_contents": "> \n> On Fri, 9 Jan 1998, Bruce Momjian wrote:\n> \n> > How about a new initdb?\n> \n> That was the first thing I tried. When that didn't work, I removed the\n> entire distribution, resynced the source with cvs, and recompiled.\n> \n> When checking that the large object fix wasn't to blame, I did a fresh\n> initdb again before testing.\n\nCan you give me a test case? Is it \\z on an empty database, or does a\ntable have to have a specific permissoin?\n\n---------------------------------------------------------------------------\n\ntest=> \\z\n\nDatabase = test\n +------------------+----------------------------------------------------+\n | Relation | Grant/Revoke Permissions |\n +------------------+----------------------------------------------------+\n | test | {\"=arwR\",\"wilson=arwR\"} | \n | test2 | | \n | test3 | | \n | test4 | | \n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Fri, 9 Jan 1998 16:56:00 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] grant broken" }, { "msg_contents": "On Fri, 9 Jan 1998, Bruce Momjian wrote:\n\n> > \n> > On Fri, 9 Jan 1998, Bruce Momjian wrote:\n> > \n> > > How about a new initdb?\n> > \n> > That was the first thing I tried. When that didn't work, I removed the\n> > entire distribution, resynced the source with cvs, and recompiled.\n> > \n> > When checking that the large object fix wasn't to blame, I did a fresh\n> > initdb again before testing.\n> \n> Can you give me a test case? Is it \\z on an empty database, or does a\n> table have to have a specific permissoin?\n\nIt's a test database, with a single table in it, and two users (the DBA,\nand a normal user).\n\nOne of the methods in JDBC's DatabaseMetaData returns details about the\nrights granted on the database, so to test the method, I was setting up\nthe normal user with update rights to the table, so I had something to\nwork on.\n\nThe backend simply dies when ever the grant statement is entered\ncorrectly.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Sat, 10 Jan 1998 12:08:41 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] grant broken" }, { "msg_contents": "> > Can you give me a test case? 
Is it \\z on an empty database, or does a\n> > table have to have a specific permissoin?\n>\n> It's a test database, with a single table in it, and two users (the DBA,\n> and a normal user).\n>\n> One of the methods in JDBC's DatabaseMetaData returns details about the\n> rights granted on the database, so to test the method, I was setting up\n> the normal user with update rights to the table, so I had something to\n> work on.\n>\n> The backend simply dies when ever the grant statement is entered\n> correctly.\n\nIf I understood the test case you published, you are specifying to the grant\ncommand the database \"test\", not a table within the database. The man page for\ngrant is not very specific, but is this supposed to work?\n\n - Tom\n\n", "msg_date": "Sat, 10 Jan 1998 16:34:25 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] grant broken" }, { "msg_contents": "On Sat, 10 Jan 1998, Thomas G. Lockhart wrote:\n\n> If I understood the test case you published, you are specifying to the grant\n> command the database \"test\", not a table within the database. The man page for\n> grant is not very specific, but is this supposed to work?\n\nNo, there is a table called test, that I'm granting permissions to. The\ndatabase is also called test.\n\nCould grant be confusing the two?\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Sun, 11 Jan 1998 11:40:40 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] grant broken" } ]
[ { "msg_contents": "Vadim, I know you are still thinking about subselects, but I have some\nmore clarification that may help.\n\nWe have to add phantom range table entries to correlated subselects so\nthey will pass the parser. We might as well add those fields to the\ntarget list of the subquery at the same time:\n\n\tselect *\n\tfrom taba\n\twhere col1 = (select col2\n\t\t from tabb\n\t\t where taba.col3 = tabb.col4)\n\nbecomes:\n\n\tselect *\n\tfrom taba\n\twhere col1 = (select col2, tabb.col4 <---\n\t\t from tabb, taba <---\n\t\t where taba.col3 = tabb.col4)\n\nWe add a field to TargetEntry and RangeTblEntry to mark the fact that it\nwas entered as a correlation entry:\n\n\tbool\tisCorrelated;\n\nSecond, we need to hook the subselect to the main query. I recommend we\nadd two fields to Query for this:\n\n\tQuery *parentQuery;\n\tList *subqueries;\n\nThe parentQuery pointer is used to resolve field names in the correlated\nsubquery.\n\n\tselect *\n\tfrom taba\n\twhere col1 = (select col2, tabb.col4 <---\n\t\t from tabb, taba <---\n\t\t where taba.col3 = tabb.col4)\n\nIn the query above, the subquery can be easily parsed, and we add the\nsubquery to the parsent's parentQuery list.\n\nIn the parent query, to parse the WHERE clause, we create a new operator\ntype, called IN or NOT_IN, or ALL, where the left side is a Var, and the\nright side is an index to a slot in the subqueries List.\n\nWe can then do the rest in the upper optimizer.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Thu, 8 Jan 1998 22:55:03 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "subselects" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Vadim, I know you are still thinking about subselects, but I have some\n> more clarification that may help.\n> \n> We have to add phantom range table entries to correlated subselects so\n> they will pass the parser. We might as well add those fields to the\n> target list of the subquery at the same time:\n> \n> select *\n> from taba\n> where col1 = (select col2\n> from tabb\n> where taba.col3 = tabb.col4)\n> \n> becomes:\n> \n> select *\n> from taba\n> where col1 = (select col2, tabb.col4 <---\n> from tabb, taba <---\n> where taba.col3 = tabb.col4)\n> \n> We add a field to TargetEntry and RangeTblEntry to mark the fact that it\n> was entered as a correlation entry:\n> \n> bool isCorrelated;\n\nNo, I don't like to add anything in parser. Example:\n\n select *\n from tabA\n where col1 = (select col2\n from tabB\n where tabA.col3 = tabB.col4\n and exists (select * \n from tabC \n where tabB.colX = tabC.colX and\n tabC.colY = tabA.col2)\n )\n\n: a column of tabA is referenced in sub-subselect \n(is it allowable by standards ?) - in this case it's better \nto don't add tabA to 1st subselect but add tabA to second one\nand change tabA.col3 in 1st to reference col3 in 2nd subquery temp table -\nthis gives us 2-tables join in 1st subquery instead of 3-tables join.\n(And I'm still not sure that using temp tables is best of what can be \ndone in all cases...)\n\nInstead of using isCorrelated in TE & RTE we can add \n\nIndex varlevel;\n\nto Var node to reflect (sub)query from where this Var is come\n(where is range table to find var's relation using varno). Upmost query\nwill have varlevel = 0, all its (dirrect) children - varlevel = 1 and so on.\n ^^^ ^^^^^^^^^^^^\n(I don't see problems with distinguishing Vars of different children\non the same level...)\n\n> \n> Second, we need to hook the subselect to the main query. 
I recommend we\n> add two fields to Query for this:\n> \n> Query *parentQuery;\n> List *subqueries;\n\nAgreed. And maybe Index queryLevel.\n\n> In the parent query, to parse the WHERE clause, we create a new operator\n> type, called IN or NOT_IN, or ALL, where the left side is a Var, and the\n ^^^^^^^^^^^^^^^^^^\nNo. We have to handle (a,b,c) OP (select x, y, z ...) and \n'_a_constant_' OP (select ...) - I don't know is last in standards,\nSybase has this.\n\nWell,\n\ntypedef enum OpType\n{\n OP_EXPR, FUNC_EXPR, OR_EXPR, AND_EXPR, NOT_EXPR\n\n+ OP_EXISTS, OP_ALL, OP_ANY\n\n} OpType;\n\ntypedef struct Expr\n{\n NodeTag type;\n Oid typeOid; /* oid of the type of this expr */\n OpType opType; /* type of the op */\n Node *oper; /* could be Oper or Func */\n List *args; /* list of argument nodes */\n} Expr;\n\nOP_EXISTS: oper is NULL, lfirst(args) is SubSelect (index in subqueries\n List, following your suggestion)\n\nOP_ALL, OP_ANY:\n\noper is List of Oper nodes. We need in list because of data types of\na, b, c (above) can be different and so Oper nodes will be different too.\n\nlfirst(args) is List of expression nodes (Const, Var, Func ?, a + b ?) -\nleft side of subquery' operator.\nlsecond(args) is SubSelect.\n\nNote, that there are no OP_IN, OP_NOTIN in OpType-s for Expr. We need in\nIN, NOTIN in A_Expr (parser node), but both of them have to be transferred\nby parser into corresponding ANY and ALL. At the moment we can do:\n\nIN --> = ANY, NOT IN --> <> ALL\n\nbut this will be \"known bug\": this breaks OO-nature of Postgres, because of\noperators can be overrided and '=' can mean s o m e t h i n g (not equality).\nExample: box data type. For boxes, = means equality of _areas_ and =~\nmeans that boxes are the same ==> =~ ANY should be used for IN.\n\n> right side is an index to a slot in the subqueries List.\n\nVadim\n", "msg_date": "Fri, 09 Jan 1998 22:10:06 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: subselects" }, { "msg_contents": "> \n> Bruce Momjian wrote:\n> > \n> > Vadim, I know you are still thinking about subselects, but I have some\n> > more clarification that may help.\n> > \n> > We have to add phantom range table entries to correlated subselects so\n> > they will pass the parser. We might as well add those fields to the\n> > target list of the subquery at the same time:\n> > \n> > select *\n> > from taba\n> > where col1 = (select col2\n> > from tabb\n> > where taba.col3 = tabb.col4)\n> > \n> > becomes:\n> > \n> > select *\n> > from taba\n> > where col1 = (select col2, tabb.col4 <---\n> > from tabb, taba <---\n> > where taba.col3 = tabb.col4)\n> > \n> > We add a field to TargetEntry and RangeTblEntry to mark the fact that it\n> > was entered as a correlation entry:\n> > \n> > bool isCorrelated;\n> \n> No, I don't like to add anything in parser. Example:\n> \n> select *\n> from tabA\n> where col1 = (select col2\n> from tabB\n> where tabA.col3 = tabB.col4\n> and exists (select * \n> from tabC \n> where tabB.colX = tabC.colX and\n> tabC.colY = tabA.col2)\n> )\n> \n> : a column of tabA is referenced in sub-subselect \n\nThis is a strange case that I don't think we need to handle in our first\nimplementation.\n\n> (is it allowable by standards ?) 
- in this case it's better \n> to don't add tabA to 1st subselect but add tabA to second one\n> and change tabA.col3 in 1st to reference col3 in 2nd subquery temp table -\n> this gives us 2-tables join in 1st subquery instead of 3-tables join.\n> (And I'm still not sure that using temp tables is best of what can be \n> done in all cases...)\n\nI don't see any use for temp tables in subselects anymore. After having\nimplemented UNIONS, I now see how much can be done in the upper\noptimizer. I see you just putting the subquery PLAN into the proper\nplace in the plan tree, with some proper JOIN nodes for IN, NOT IN.\n\n> \n> Instead of using isCorrelated in TE & RTE we can add \n> \n> Index varlevel;\n\nOK. Sounds good.\n\n> \n> to Var node to reflect (sub)query from where this Var is come\n> (where is range table to find var's relation using varno). Upmost query\n> will have varlevel = 0, all its (dirrect) children - varlevel = 1 and so on.\n> ^^^ ^^^^^^^^^^^^\n> (I don't see problems with distinguishing Vars of different children\n> on the same level...)\n> \n> > \n> > Second, we need to hook the subselect to the main query. I recommend we\n> > add two fields to Query for this:\n> > \n> > Query *parentQuery;\n> > List *subqueries;\n> \n> Agreed. And maybe Index queryLevel.\n\nSure. If it helps.\n\n> \n> > In the parent query, to parse the WHERE clause, we create a new operator\n> > type, called IN or NOT_IN, or ALL, where the left side is a Var, and the\n> ^^^^^^^^^^^^^^^^^^\n> No. We have to handle (a,b,c) OP (select x, y, z ...) and \n> '_a_constant_' OP (select ...) - I don't know is last in standards,\n> Sybase has this.\n\nI have never seen this in my eight years of SQL. Perhaps we can leave\nthis for later, maybe much later.\n\n> \n> Well,\n> \n> typedef enum OpType\n> {\n> OP_EXPR, FUNC_EXPR, OR_EXPR, AND_EXPR, NOT_EXPR\n> \n> + OP_EXISTS, OP_ALL, OP_ANY\n> \n> } OpType;\n> \n> typedef struct Expr\n> {\n> NodeTag type;\n> Oid typeOid; /* oid of the type of this expr */\n> OpType opType; /* type of the op */\n> Node *oper; /* could be Oper or Func */\n> List *args; /* list of argument nodes */\n> } Expr;\n> \n> OP_EXISTS: oper is NULL, lfirst(args) is SubSelect (index in subqueries\n> List, following your suggestion)\n> \n> OP_ALL, OP_ANY:\n> \n> oper is List of Oper nodes. We need in list because of data types of\n> a, b, c (above) can be different and so Oper nodes will be different too.\n> \n> lfirst(args) is List of expression nodes (Const, Var, Func ?, a + b ?) -\n> left side of subquery' operator.\n> lsecond(args) is SubSelect.\n> \n> Note, that there are no OP_IN, OP_NOTIN in OpType-s for Expr. We need in\n> IN, NOTIN in A_Expr (parser node), but both of them have to be transferred\n> by parser into corresponding ANY and ALL. At the moment we can do:\n> \n> IN --> = ANY, NOT IN --> <> ALL\n> \n> but this will be \"known bug\": this breaks OO-nature of Postgres, because of\n> operators can be overrided and '=' can mean s o m e t h i n g (not equality).\n> Example: box data type. For boxes, = means equality of _areas_ and =~\n> means that boxes are the same ==> =~ ANY should be used for IN.\n\nThat is interesting, to use =~ for ANY.\n\nYes, but how many operators take a SUBQUERY as an operand. This is a\nspecial case to me.\n\nI think I see where you are trying to go. You want subselects to behave\nlike any other operator, with a subselect type, and you do all the\nsubselect handling in the optimizer, with special Nodes and actions.\n\nI think this may be just too much of a leap. 
We have such clean query\nlogic for single queries, I can't imagine having an operator that has a\nQuery operand, and trying to get everything to properly handle it. \nUNIONS were very easy to implement as a List off of Query, with some\nforeach()'s in rewrite and the high optimizer.\n\nSubselects are SQL standard, and are never going to be over-ridden by a\nuser. Same with UNION. They want UNION, they get UNION. They want\nSubselect, we are going to spin through the Query structure and give\nthem what they want.\n\nThe complexities of subselects and correlated queries and range tables\nand stuff is so bizarre that trying to get it to work inside the type\nsystem could be a huge project.\n\n> \n> > right side is an index to a slot in the subqueries List.\n\nI guess the question is what can we have by February 1?\n\nI have been reading some postings, and it seems to me that subselects\nare the litmus test for many evaluators when deciding if a database\nengine is full-featured.\n\nSorry to be so straightforward, but I want to keep hashing this around\nuntil we get a conclusion, so coding can start.\n\nMy suggestions have been, I believe, trying to get subselects working\nwith the fullest functionality by adding the least amount of code, and\nkeeping the logic clean.\n\nHave you checked out the UNION code? It is very small, but it works. I\nthink it could make a good sample for subselects.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Fri, 9 Jan 1998 17:31:41 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: subselects" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > No, I don't like to add anything in parser. Example:\n> >\n> > select *\n> > from tabA\n> > where col1 = (select col2\n> > from tabB\n> > where tabA.col3 = tabB.col4\n> > and exists (select *\n> > from tabC\n> > where tabB.colX = tabC.colX and\n> > tabC.colY = tabA.col2)\n> > )\n> >\n> > : a column of tabA is referenced in sub-subselect\n> \n> This is a strange case that I don't think we need to handle in our first\n> implementation.\n\nI don't know is this strange case or not :)\nBut I would like to know is this allowed by standards - can someone\ncomment on this ?\nAnd I don't see problems with handling this...\n\n> \n> > (is it allowable by standards ?) - in this case it's better\n> > to don't add tabA to 1st subselect but add tabA to second one\n> > and change tabA.col3 in 1st to reference col3 in 2nd subquery temp table -\n> > this gives us 2-tables join in 1st subquery instead of 3-tables join.\n> > (And I'm still not sure that using temp tables is best of what can be\n> > done in all cases...)\n> \n> I don't see any use for temp tables in subselects anymore. After having\n> implemented UNIONS, I now see how much can be done in the upper\n> optimizer. I see you just putting the subquery PLAN into the proper\n> place in the plan tree, with some proper JOIN nodes for IN, NOT IN.\n\nWhen saying about temp tables, I meant tables created by node Material\nfor subquery plan. This is one of two ways - run subquery once for all\npossible upper plan tuples and then just join result table with upper\nquery. 
Another way is re-run subquery for each upper query tuple,\nwithout temp table but may be with caching results by some ways.\nActually, there is special case - when subquery can be alternatively \nformulated as joins, - but this is just special case.\n\n> > > In the parent query, to parse the WHERE clause, we create a new operator\n> > > type, called IN or NOT_IN, or ALL, where the left side is a Var, and the\n> > ^^^^^^^^^^^^^^^^^^\n> > No. We have to handle (a,b,c) OP (select x, y, z ...) and\n> > '_a_constant_' OP (select ...) - I don't know is last in standards,\n> > Sybase has this.\n> \n> I have never seen this in my eight years of SQL. Perhaps we can leave\n> this for later, maybe much later.\n\nAre you saying about (a, b, c) or about 'a_constant' ?\nAgain, can someone comment on are they in standards or not ?\nTom ?\nIf yes then please add parser' support for them now...\n\n> > Note, that there are no OP_IN, OP_NOTIN in OpType-s for Expr. We need in\n> > IN, NOTIN in A_Expr (parser node), but both of them have to be transferred\n> > by parser into corresponding ANY and ALL. At the moment we can do:\n> >\n> > IN --> = ANY, NOT IN --> <> ALL\n> >\n> > but this will be \"known bug\": this breaks OO-nature of Postgres, because of\n> > operators can be overrided and '=' can mean s o m e t h i n g (not equality).\n> > Example: box data type. For boxes, = means equality of _areas_ and =~\n> > means that boxes are the same ==> =~ ANY should be used for IN.\n> \n> That is interesting, to use =~ for ANY.\n> \n> Yes, but how many operators take a SUBQUERY as an operand. This is a\n> special case to me.\n> \n> I think I see where you are trying to go. You want subselects to behave\n> like any other operator, with a subselect type, and you do all the\n> subselect handling in the optimizer, with special Nodes and actions.\n> \n> I think this may be just too much of a leap. We have such clean query\n> logic for single queries, I can't imagine having an operator that has a\n> Query operand, and trying to get everything to properly handle it.\n> UNIONS were very easy to implement as a List off of Query, with some\n> foreach()'s in rewrite and the high optimizer.\n> \n> Subselects are SQL standard, and are never going to be over-ridden by a\n> user. Same with UNION. They want UNION, they get UNION. They want\n> Subselect, we are going to spin through the Query structure and give\n> them what they want.\n> \n> The complexities of subselects and correlated queries and range tables\n> and stuff is so bizarre that trying to get it to work inside the type\n> system could be a huge project.\n\nPostgreSQL is a robust, next-generation, Object-Relational DBMS (ORDBMS),\nderived from the Berkeley Postgres database management system. While\nPostgreSQL retains the powerful object-relational data model, rich data types and\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\neasy extensibility of Postgres, it replaces the PostQuel query language with an\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nextended subset of SQL.\n^^^^^^^^^^^^^^^^^^^^^^\n\nShould we say users that subselect will work for standard data types only ?\nI don't see why subquery can't be used with ~, ~*, @@, ... operators, do you ?\nIs there difference between handling = ANY and ~ ANY ? I don't see any.\nCurrently we can't get IN working properly for boxes (and may be for others too)\nand I don't like to try to resolve these problems now, but hope that someday\nwe'll be able to do this. 
At the moment - just convert IN into = ANY and\nNOT IN into <> ALL in parser.\n\n(BTW, do you know how DISTINCT is implemented ? It doesn't use = but\nuse type_out funcs and uses strcmp()... DISTINCT is standard SQL thing...)\n\n> >\n> > > right side is an index to a slot in the subqueries List.\n> \n> I guess the question is what can we have by February 1?\n> \n> I have been reading some postings, and it seems to me that subselects\n> are the litmus test for many evaluators when deciding if a database\n> engine is full-featured.\n> \n> Sorry to be so straightforward, but I want to keep hashing this around\n> until we get a conclusion, so coding can start.\n> \n> My suggestions have been, I believe, trying to get subselects working\n> with the fullest functionality by adding the least amount of code, and\n> keeping the logic clean.\n> \n> Have you checked out the UNION code? It is very small, but it works. I\n> think it could make a good sample for subselects.\n\nThere is big difference between subqueries and queries in UNION - \nthere are not dependences between UNION queries.\n\nOk, opened issues:\n\n1. Is using upper query' vars in all subquery levels in standard ?\n2. Is (a, b, c) OP (subselect) in standard ?\n3. What types of expressions (Var, Const, ...) are allowed on the left\n side of operator with subquery on the right ?\n4. What types of operators should we support (=, >, ..., like, ~, ...) ?\n (My vote for all boolean operators).\n\nAnd - did we get consensus on presentation subqueries stuff in Query,\nExpr and Var ?\nI would like to have something done in parser near Jan 17 to get\nsubqueries working by Feb 1. I vote for support of all standard\nthings (1. - 3.) in parser right now - if there will be no time\nto implement something like (a, b, c) then optimizer will call\nelog(WARN) (oh, sorry, - elog(ERROR)).\n\nVadim\n", "msg_date": "Sun, 11 Jan 1998 00:19:08 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: subselects" }, { "msg_contents": "> > > Note, that there are no OP_IN, OP_NOTIN in OpType-s for Expr. We need in\n> > > IN, NOTIN in A_Expr (parser node), but both of them have to be transferred\n> > > by parser into corresponding ANY and ALL. At the moment we can do:\n> > >\n> > > IN --> = ANY, NOT IN --> <> ALL\n> > >\n> > > but this will be \"known bug\": this breaks OO-nature of Postgres, because of\n> > > operators can be overrided and '=' can mean s o m e t h i n g (not equality).\n> > > Example: box data type. For boxes, = means equality of _areas_ and =~\n> > > means that boxes are the same ==> =~ ANY should be used for IN.\n> >\n> > That is interesting, to use =~ for ANY.\n\nIf I understand the discussion, I would think is is fine to make an assumption about\nwhich operator is used to implement a subselect expression. If someone remaps an\noperator to mean something different, then they will get a different result (or a\nnonsensical one) from a subselect.\n\nI'd be happy to remap existing operators to fit into a convention which would work\nwith subselects (especially if I got to help choose :).\n\n> > Subselects are SQL standard, and are never going to be over-ridden by a\n> > user. Same with UNION. They want UNION, they get UNION. They want\n> > Subselect, we are going to spin through the Query structure and give\n> > them what they want.\n>\n> PostgreSQL is a robust, next-generation, Object-Relational DBMS (ORDBMS),\n> derived from the Berkeley Postgres database management system. 
While\n> PostgreSQL retains the powerful object-relational data model, rich data types and\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> easy extensibility of Postgres, it replaces the PostQuel query language with an\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> extended subset of SQL.\n> ^^^^^^^^^^^^^^^^^^^^^^\n>\n> Should we say users that subselect will work for standard data types only ?\n> I don't see why subquery can't be used with ~, ~*, @@, ... operators, do you ?\n> Is there difference between handling = ANY and ~ ANY ? I don't see any.\n> Currently we can't get IN working properly for boxes (and may be for others too)\n> and I don't like to try to resolve these problems now, but hope that someday\n> we'll be able to do this. At the moment - just convert IN into = ANY and\n> NOT IN into <> ALL in parser.\n>\n> (BTW, do you know how DISTINCT is implemented ? It doesn't use = but\n> use type_out funcs and uses strcmp()... DISTINCT is standard SQL thing...)\n\n?? I didn't know that. Wouldn't we want it to eventually use \"=\" through a sorted\nlist? That would give more consistant behavior...\n\n> > I have been reading some postings, and it seems to me that subselects\n> > are the litmus test for many evaluators when deciding if a database\n> > engine is full-featured.\n> >\n> > Sorry to be so straightforward, but I want to keep hashing this around\n> > until we get a conclusion, so coding can start.\n> >\n> > My suggestions have been, I believe, trying to get subselects working\n> > with the fullest functionality by adding the least amount of code, and\n> > keeping the logic clean.\n> >\n> > Have you checked out the UNION code? It is very small, but it works. I\n> > think it could make a good sample for subselects.\n>\n> There is big difference between subqueries and queries in UNION -\n> there are not dependences between UNION queries.\n>\n> Ok, opened issues:\n>\n> 1. Is using upper query' vars in all subquery levels in standard ?\n\nI'm not certain. Let me know if you do not get an answer from someone else and I will\nresearch it.\n\n> 2. Is (a, b, c) OP (subselect) in standard ?\n\nYes. In fact, it _is_ the standard, and \"a OP (subselect)\" is a special case where\nthe parens are allowed to be omitted from a one element list.\n\n> 3. What types of expressions (Var, Const, ...) are allowed on the left\n> side of operator with subquery on the right ?\n\nI think most expressions are allowed. The \"constant OP (subselect)\" case you were\nasking about is just a simplified case since \"(a, b, constant) OP (subselect)\" where\na and b are column references should be allowed. Of course, our optimizer could\nperhaps change this to \"(a, b) OP (subselect where x = constant)\", or for the first\nexample \"EXISTS (subselect where x = constant)\".\n\n> 4. What types of operators should we support (=, >, ..., like, ~, ...) ?\n> (My vote for all boolean operators).\n\nSounds good. But I'll vote with Bruce (and I'll bet you already agree) that it is\nimportant to get an initial implementation for v6.3 which covers a little, some, or\nall of the usual SQL subselect constructs. If we have to revisit this for v6.4 then\nwe will have the benefit of feedback from others in practical applications which\nalways uncovers new things to consider.\n\n> And - did we get consensus on presentation subqueries stuff in Query,\n> Expr and Var ?\n> I would like to have something done in parser near Jan 17 to get\n> subqueries working by Feb 1. I vote for support of all standard\n> things (1. 
- 3.) in parser right now - if there will be no time\n> to implement something like (a, b, c) then optimizer will callelog(WARN) (oh,\n> sorry, - elog(ERROR)).\n\nGreat. I'd like to help with the remaining parser issues; at the moment \"row_expr\"\ndoes the right thing with expression comparisions but just parses then ignores\nsubselect expressions. Let me know what structures you want passed back and I'll put\nthem in, or if you prefer put in the first one and I'll go through and clean up and\nadd the rest.\n\n - Tom\n\n", "msg_date": "Sat, 10 Jan 1998 18:01:03 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: subselects" }, { "msg_contents": "On Sun, 11 Jan 1998, Vadim B. Mikheev wrote:\n\n> > > No, I don't like to add anything in parser. Example:\n> > >\n> > > select *\n> > > from tabA\n> > > where col1 = (select col2\n> > > from tabB\n> > > where tabA.col3 = tabB.col4\n> > > and exists (select *\n> > > from tabC\n> > > where tabB.colX = tabC.colX and\n> > > tabC.colY = tabA.col2)\n> > > )\n> > >\n> > > : a column of tabA is referenced in sub-subselect\n> > \n> > This is a strange case that I don't think we need to handle in our first\n> > implementation.\n> \n> I don't know is this strange case or not :)\n> But I would like to know is this allowed by standards - can someone\n> comment on this ?\n> And I don't see problems with handling this...\n\n\tI don't know about \"the standards\", but in my mind, the above should\nwork if subselects work...so what if you add a third or fourth level subselect\nto the overall query? IMHO, the \"outer most\" (inner most?) subselect should\nbe resolved to provide the \"EXISTS\" list, the the next should be resolved,\netc...\n\n\tHell...looking at this, I'd almost think that you could use subselects to\nforce a pseudo-ordering onto a large complex JOIN (ya ya, really messy though)\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 10 Jan 1998 14:51:56 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: subselects" }, { "msg_contents": "> Are you saying about (a, b, c) or about 'a_constant' ?\n> Again, can someone comment on are they in standards or not ?\n> Tom ?\n> If yes then please add parser' support for them now...\n\nAs I mentioned a few minutes ago in my last message, I parse the row descriptors and\nthe subselects but for subselect expressions (e.g. \"(a,b) OP (subselect)\" I currently\nignore the result. I didn't want to pass things back as lists until something in the\nbackend was ready to receive them.\n\nIf it is OK, I'll go ahead and start passing back a list of expressions when a row\ndescriptor is present. So, what you will find is lexpr or rexpr in the A_Expr node\nbeing a list rather than an atomic node.\n\nAlso, I can start passing back the subselect expression as the rexpr; right now the\nparser calls elog() and quits.\n\nbtw, to implement \"(a,b,c) OP (d,e,f)\" I made a new routine in the parser called\nmakeRowExpr() which breaks this up into a sequence of \"and\" and/or \"or\" expressions.\nIf lists are handled farther back, this routine should move to there also and the\nparser will just pass the lists. Note that some assumptions have to be made about the\nmeaning of \"(a,b) OP (c,d)\", since usually we only have knowledge of the behavior of\n\"a OP c\". 
Easy for the standard SQL operators, unknown for others, but maybe it is OK\nto disallow those cases or to look for specific appearance of the operator to guess\nthe behavior (e.g. if the operator has \"<\" or \"=\" or \">\" then build as \"and\"s and if\nit has \"<>\" or \"!\" then build as \"or\"s.\n\nLet me know what you want...\n\n - Tom\n\n", "msg_date": "Sat, 10 Jan 1998 19:31:29 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: subselects" }, { "msg_contents": "> I would like to have something done in parser near Jan 17 to get\n> subqueries working by Feb 1.\n\nHere are some changes to gram.y and to keywords.c which start to pass through\nsubselect constructs. I won't commit until/unless you have a chance to look at it and\nagree that this is something close to the right direction to head.\n\n - Tom\n\npostgres=> create table x (i int);\nCREATE\npostgres=> insert into x values (1);\nINSERT 18121 1\npostgres=> select i from x where i = 1;\ni\n-\n1\n(1 row)\n\npostgres=> select i from x where i in (select i from x);\nERROR: transformExpr: does not know how to transform node 604\npostgres=> select i from x where (i, 1) in (select i, 1 from x);\nERROR: transformExpr: does not know how to transform node 501\npostgres=>", "msg_date": "Sat, 10 Jan 1998 19:55:08 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: subselects" }, { "msg_contents": "> \n> This is a multi-part message in MIME format.\n> --------------130974A8F3C8025EB9E3F25C\n> Content-Type: text/plain; charset=us-ascii\n> Content-Transfer-Encoding: 7bit\n> \n> > I would like to have something done in parser near Jan 17 to get\n> > subqueries working by Feb 1.\n> \n> Here are some changes to gram.y and to keywords.c which start to pass through\n> subselect constructs. I won't commit until/unless you have a chance to look at it and\n> agree that this is something close to the right direction to head.\n> \n\nDo you realize these are the files, and not context diffs?\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Sat, 10 Jan 1998 21:37:34 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: subselects" }, { "msg_contents": "> > > I would like to have something done in parser near Jan 17 to get\n> > > subqueries working by Feb 1.\n> >\n> > Here are some changes to gram.y and to keywords.c which start to pass through\n> > subselect constructs. I won't commit until/unless you have a chance to look at it and\n> > agree that this is something close to the right direction to head.\n> >\n>\n> Do you realize these are the files, and not context diffs?\n\nYup. Thought it would be easier for you, but probably should have sent a diff. Sorry.\n\n - Tom\n\n", "msg_date": "Sun, 11 Jan 1998 03:31:33 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: subselects" }, { "msg_contents": "Here are context diffs of gram.y and keywords.c; sorry about sending the full files.\nThese start sending lists of arguments toward the backend from the parser to\nimplement row descriptors and subselects.\n\nThey should apply OK even over Bruce's recent changes...\n\n - Tom", "msg_date": "Sun, 11 Jan 1998 05:58:01 +0000", "msg_from": "\"Thomas G. 
Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: subselects" }, { "msg_contents": "> I would like to have something done in parser near Jan 17 to get\n> subqueries working by Feb 1. I vote for support of all standard\n> things (1. - 3.) in parser right now - if there will be no time\n> to implement something like (a, b, c) then optimizer will call\n> elog(WARN) (oh, sorry, - elog(ERROR)).\n\nFirst, let me say I am glad we are still on schedule for Feb 1. I was\npanicking because I thought we wouldn't make it in time.\n\n\n> > > (is it allowable by standards ?) - in this case it's better\n> > > to don't add tabA to 1st subselect but add tabA to second one\n> > > and change tabA.col3 in 1st to reference col3 in 2nd subquery temp table -\n> > > this gives us 2-tables join in 1st subquery instead of 3-tables join.\n> > > (And I'm still not sure that using temp tables is best of what can be\n> > > done in all cases...)\n> > \n> > I don't see any use for temp tables in subselects anymore. After having\n> > implemented UNIONS, I now see how much can be done in the upper\n> > optimizer. I see you just putting the subquery PLAN into the proper\n> > place in the plan tree, with some proper JOIN nodes for IN, NOT IN.\n> \n> When saying about temp tables, I meant tables created by node Material\n> for subquery plan. This is one of two ways - run subquery once for all\n> possible upper plan tuples and then just join result table with upper\n> query. Another way is re-run subquery for each upper query tuple,\n> without temp table but may be with caching results by some ways.\n> Actually, there is special case - when subquery can be alternatively \n> formulated as joins, - but this is just special case.\n\nThis is interesting. It really only applies for correlated subqueries,\nand certainly it may help sometimes to just evaluate the subquery for\nvalid values that are going to come from the upper query than for all\npossible values. Perhaps we can use the 'cost' value of each query to\ndecide how to handle this.\n\n> \n> > > > In the parent query, to parse the WHERE clause, we create a new operator\n> > > > type, called IN or NOT_IN, or ALL, where the left side is a Var, and the\n> > > ^^^^^^^^^^^^^^^^^^\n> > > No. We have to handle (a,b,c) OP (select x, y, z ...) and\n> > > '_a_constant_' OP (select ...) - I don't know is last in standards,\n> > > Sybase has this.\n> > \n> > I have never seen this in my eight years of SQL. Perhaps we can leave\n> > this for later, maybe much later.\n> \n> Are you saying about (a, b, c) or about 'a_constant' ?\n> Again, can someone comment on are they in standards or not ?\n> Tom ?\n> If yes then please add parser' support for them now...\n\nOK, Thomas says it is, so we will put in as much code as we can to handle\nit.\n\n> Should we say users that subselect will work for standard data types only ?\n> I don't see why subquery can't be used with ~, ~*, @@, ... operators, do you ?\n> Is there difference between handling = ANY and ~ ANY ? I don't see any.\n> Currently we can't get IN working properly for boxes (and may be for others too)\n> and I don't like to try to resolve these problems now, but hope that someday\n> we'll be able to do this. At the moment - just convert IN into = ANY and\n> NOT IN into <> ALL in parser.\n\nOK.\n\n> \n> (BTW, do you know how DISTINCT is implemented ? It doesn't use = but\n> use type_out funcs and uses strcmp()... 
DISTINCT is standard SQL thing...)\n\nI did not know that either.\n\n> There is big difference between subqueries and queries in UNION - \n> there are not dependences between UNION queries.\n\nYes, I know UNIONS are trivial compared to subselects.\n\n> \n> Ok, opened issues:\n> \n> 1. Is using upper query' vars in all subquery levels in standard ?\n> 2. Is (a, b, c) OP (subselect) in standard ?\n> 3. What types of expressions (Var, Const, ...) are allowed on the left\n> side of operator with subquery on the right ?\n> 4. What types of operators should we support (=, >, ..., like, ~, ...) ?\n> (My vote for all boolean operators).\n> \n> And - did we get consensus on presentation subqueries stuff in Query,\n> Expr and Var ?\n\nOK, here are my concrete ideas on changes and structures.\n\nI think we all agreed that Query needs new fields:\n\n Query *parentQuery;\n List *subqueries;\n\nMaybe query level too, but I don't think so (see later ideas on Var).\n\nWe need a new Node structure, call it Sublink:\n\n\tint \tlinkType\t(IN, NOTIN, ANY, EXISTS, OPERATOR...)\n\tOid\toperator\t/* subquery must return single row */\n\tList\t*lefthand;\t/* parent stuff */\n\tNode \t*subquery;\t/* represents nodes from parser */\n\tIndex\tSubindex;\t/* filled in to index Query->subqueries */\n\nOf course, the names are just suggestions. Every time we run through\nthe parsenodes of a query to create a Query* structure, when we do the\nWHERE clause, if we come upon one of these Sublink nodes (created in the\nparser), we move the supplied Query* in Sublink->subquery to a local\nList variable, and we set Subquery->subindex to equal the index of the\nnew query, i.e. is it the first subquery we found, 1, or the second, 2,\netc.\n\nAfter we have created the parent Query structure, we run through our\nlocal List variable of subquery parsenodes we created above, and add\nQuery* entries to Query->subqueries. In each subquery Query*, we set\nthe parentQuery pointer.\n\nAlso, when parsing the subqueries, we need to keep track of correlated\nreferences. I recommend we add a field to the Var structure:\n\n\tIndex\tsublevel;\t/* range table reference:\n\t\t\t\t = 0 current level of query\n\t\t\t\t < 0 parent above this many levels\n\t\t\t\t > 0 index into subquery list\n\t\t\t\t */\n\nThis way, a Var node with sublevel 0 is the current level, and is true\nin most cases. This helps us not have to change much code. sublevel =\n-1 means it references the range table in the parent query. sublevel =\n-2 means the parent's parent. sublevel = 2 means it references the range\ntable of the second entry in Query->subqueries. Varno and varattno are\nstill meaningful. Of course, we can't reference variables in the\nsubqueries from the parent in the parser code, but Vadim may want to.\n\nWhen doing a Var lookup in the parser, we look in the current level\nfirst, but if not found, if it is a subquery, we can look at the parent\nand parent's parent to set the sublevel, varno, and varatno properly.\n\nWe create no phantom range table entries in the subquery, and no phantom\ntarget list entries. 
We can leave that all for the upper optimizer.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Sun, 11 Jan 1998 00:59:23 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: subselects" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> We need a new Node structure, call it Sublink:\n> \n> int linkType (IN, NOTIN, ANY, EXISTS, OPERATOR...)\n> Oid operator /* subquery must return single row */\n> List *lefthand; /* parent stuff */\n> Node *subquery; /* represents nodes from parser */\n> Index Subindex; /* filled in to index Query->subqueries */\n\nOk, I agreed that it's better to have new node and don't put subquery stuff\ninto Expr node.\n\nint linkType\n is one of EXISTS, ANY, ALL, EXPR. EXPR is for the case of expression\n subqueries (following Sybase naming) which must return single row -\n (a, b, c) = (subquery).\n Note again, that there are no linkType for IN and NOTIN here. \n User' IN and NOT IN must be converted to = ANY and <> ALL by parser.\n\nWe need not in Oid operator! In all cases we need in\n\nList *oper\n list of Oper nodes for each of a, b, c, ... and operator (=, ...)\n corresponding to data type of a, b, c, ...\n\nList *lefthand\n is list of Var/Const nodes - representation of (a, b, c, ...)\n\nWhat is Node *subquery ?\nIn optimizer we need either in Subindex (to get subquery from Query->subqueries\nwhen beeing in Sublink) or in Node *subquery inside Sublink itself.\nBTW, after some thought I don't see how Query->subqueries will be usefull.\nSo, may be just add bool hassubqueries to Query (and Query *parentQuery)\nand use Query *subquery in Sublink, but not subindex ?\n\n> \n> Also, when parsing the subqueries, we need to keep track of correlated\n> references. I recommend we add a field to the Var structure:\n> \n> Index sublevel; /* range table reference:\n> = 0 current level of query\n> < 0 parent above this many levels\n> > 0 index into subquery list\n> */\n> \n> This way, a Var node with sublevel 0 is the current level, and is true\n> in most cases. This helps us not have to change much code. sublevel =\n> -1 means it references the range table in the parent query. sublevel =\n> -2 means the parent's parent. sublevel = 2 means it references the range\n> table of the second entry in Query->subqueries. Varno and varattno are\n> still meaningful. Of course, we can't reference variables in the\n> subqueries from the parent in the parser code, but Vadim may want to.\n ^^^^^^^^^^^^^^^^^\nNo. So, just use sublevel >= 0: 0 - current level, 1 - one level up, ...\nsublevel is for optimizer only - executor will not use it.\n\n> \n> When doing a Var lookup in the parser, we look in the current level\n> first, but if not found, if it is a subquery, we can look at the parent\n> and parent's parent to set the sublevel, varno, and varatno properly.\n> \n> We create no phantom range table entries in the subquery, and no phantom\n> target list entries. We can leave that all for the upper optimizer.\n\nOk.\n\nVadim\n", "msg_date": "Mon, 12 Jan 1998 12:09:20 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: subselects" }, { "msg_contents": "Thomas G. 
Lockhart wrote:\n> \n> btw, to implement \"(a,b,c) OP (d,e,f)\" I made a new routine in the parser called\n> makeRowExpr() which breaks this up into a sequence of \"and\" and/or \"or\" expressions.\n> If lists are handled farther back, this routine should move to there also and the\n> parser will just pass the lists. Note that some assumptions have to be made about the\n> meaning of \"(a,b) OP (c,d)\", since usually we only have knowledge of the behavior of\n> \"a OP c\". Easy for the standard SQL operators, unknown for others, but maybe it is OK\n> to disallow those cases or to look for specific appearance of the operator to guess\n> the behavior (e.g. if the operator has \"<\" or \"=\" or \">\" then build as \"and\"s and if\n> it has \"<>\" or \"!\" then build as \"or\"s.\n\nOh, god! I never thought about this!\nOk, I have to agree:\n\n1. Only <, <=, =, >, >=, <> is allowed with subselects\n2. Use OR's for <>, and so - we need in bool useor in SubLink \n for <>, <> ANY and <> ALL:\n\ntypedef struct SubLink {\n\tNodeTag\t\ttype;\n\tint\t\tlinkType; /* EXISTS, ALL, ANY, EXPR */\n\tbool\t\tuseor; /* TRUE for <> */\n\tList\t *lefthand; /* List of Var/Const nodes on the left */\n\tList\t *oper; /* List of Oper nodes */\n\tQuery\t *subquery; /* */\n} SubLink;\n\nVadim\n", "msg_date": "Mon, 12 Jan 1998 16:34:45 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: subselects" }, { "msg_contents": "Thomas G. Lockhart wrote:\n> \n> btw, to implement \"(a,b,c) OP (d,e,f)\" I made a new routine in the parser called\n> makeRowExpr() which breaks this up into a sequence of \"and\" and/or \"or\" expressions.\n> If lists are handled farther back, this routine should move to there also and the\n> parser will just pass the lists. Note that some assumptions have to be made about the\n> meaning of \"(a,b) OP (c,d)\", since usually we only have knowledge of the behavior of\n> \"a OP c\". Easy for the standard SQL operators, unknown for others, but maybe it is OK\n> to disallow those cases or to look for specific appearance of the operator to guess\n> the behavior (e.g. if the operator has \"<\" or \"=\" or \">\" then build as \"and\"s and if\n> it has \"<>\" or \"!\" then build as \"or\"s.\n\nSorry, I forgot something: is (a, b) OP (x, y) in standard ?\nIf not then I suggest to don't implement it at all and allow\n(a, b) OP [ANY|ALL] (subselect) only.\n\nVadim\n", "msg_date": "Mon, 12 Jan 1998 16:40:48 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: subselects" }, { "msg_contents": "> > btw, to implement \"(a,b,c) OP (d,e,f)\" I made a new routine in the parser called\n> > makeRowExpr() which breaks this up into a sequence of \"and\" and/or \"or\" expressions.\n> > If lists are handled farther back, this routine should move to there also and the\n> > parser will just pass the lists. Note that some assumptions have to be made about the\n> > meaning of \"(a,b) OP (c,d)\", since usually we only have knowledge of the behavior of\n> > \"a OP c\". Easy for the standard SQL operators, unknown for others, but maybe it is OK\n> > to disallow those cases or to look for specific appearance of the operator to guess\n> > the behavior (e.g. if the operator has \"<\" or \"=\" or \">\" then build as \"and\"s and if\n> > it has \"<>\" or \"!\" then build as \"or\"s.\n>\n> Sorry, I forgot something: is (a, b) OP (x, y) in standard ?\n\nYes. 
The problem wouldn't be very interesting otherwise :)\n\n - Tom\n\n> If not then I suggest to don't implement it at all and allow\n> (a, b) OP [ANY|ALL] (subselect) only.\n\n\n\n", "msg_date": "Mon, 12 Jan 1998 13:41:31 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: subselects" }, { "msg_contents": "> \n> Thomas G. Lockhart wrote:\n> > \n> > btw, to implement \"(a,b,c) OP (d,e,f)\" I made a new routine in the parser called\n> > makeRowExpr() which breaks this up into a sequence of \"and\" and/or \"or\" expressions.\n> > If lists are handled farther back, this routine should move to there also and the\n> > parser will just pass the lists. Note that some assumptions have to be made about the\n> > meaning of \"(a,b) OP (c,d)\", since usually we only have knowledge of the behavior of\n> > \"a OP c\". Easy for the standard SQL operators, unknown for others, but maybe it is OK\n> > to disallow those cases or to look for specific appearance of the operator to guess\n> > the behavior (e.g. if the operator has \"<\" or \"=\" or \">\" then build as \"and\"s and if\n> > it has \"<>\" or \"!\" then build as \"or\"s.\n> \n> Oh, god! I never thought about this!\n> Ok, I have to agree:\n> \n> 1. Only <, <=, =, >, >=, <> is allowed with subselects\n> 2. Use OR's for <>, and so - we need in bool useor in SubLink \n> for <>, <> ANY and <> ALL:\n\nAh, but this is just a problem when there are multiple fields on the\nleft.\n\n> \n> typedef struct SubLink {\n> \tNodeTag\t\ttype;\n> \tint\t\tlinkType; /* EXISTS, ALL, ANY, EXPR */\n> \tbool\t\tuseor; /* TRUE for <> */\n> \tList\t *lefthand; /* List of Var/Const nodes on the left */\n> \tList\t *oper; /* List of Oper nodes */\n> \tQuery\t *subquery; /* */\n> } SubLink;\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Mon, 12 Jan 1998 08:58:25 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: subselects" }, { "msg_contents": "> \n> Bruce Momjian wrote:\n> > \n> > We need a new Node structure, call it Sublink:\n> > \n> > int linkType (IN, NOTIN, ANY, EXISTS, OPERATOR...)\n> > Oid operator /* subquery must return single row */\n> > List *lefthand; /* parent stuff */\n> > Node *subquery; /* represents nodes from parser */\n> > Index Subindex; /* filled in to index Query->subqueries */\n> \n> Ok, I agreed that it's better to have new node and don't put subquery stuff\n> into Expr node.\n> \n> int linkType\n> is one of EXISTS, ANY, ALL, EXPR. EXPR is for the case of expression\n> subqueries (following Sybase naming) which must return single row -\n> (a, b, c) = (subquery).\n> Note again, that there are no linkType for IN and NOTIN here. \n> User' IN and NOT IN must be converted to = ANY and <> ALL by parser.\n> \n> We need not in Oid operator! In all cases we need in\n> \n> List *oper\n> list of Oper nodes for each of a, b, c, ... and operator (=, ...)\n> corresponding to data type of a, b, c, ...\n> \n> List *lefthand\n> is list of Var/Const nodes - representation of (a, b, c, ...)\n\nI see, the opoids would be different for '=' if different variable types\nare used in (a,b,c) in (subselect). 
Got it.\n\n> \n> What is Node *subquery ?\n> In optimizer we need either in Subindex (to get subquery from Query->subqueries\n> when beeing in Sublink) or in Node *subquery inside Sublink itself.\n> BTW, after some thought I don't see how Query->subqueries will be usefull.\n> So, may be just add bool hassubqueries to Query (and Query *parentQuery)\n> and use Query *subquery in Sublink, but not subindex ?\n\nOK, I originally created it because the parser would have trouble\nfilling in a List* field in SelectStmt while it was parsing a WHERE\nclause. I decided to just stick the SelectStmt* into Sublink->subquery.\n\nWhile we are going through the parse output to fill in the Query*, I\nthought we should move the actual subquery parse output to a separate\nplace, and once the Query* was completed, spin through the saved\nsubquery parse list and stuff Query->subqueries with a list of Query*\nfor the subqueries. I thought this would be easier, because we would\nthen have all the subqueries in a nice list that we can manage easier.\n\nIn fact, we can fill Query->subqueries with SelectStmt* as we process\nthe WHERE clause, then convert them to Query* at the end.\n\nIf you would rather keep the subquery Query* entries in the Sublink\nstructure, we can do that. The only issue I see is that when you want\nto get to them, you have to wade through the WHERE clause to find them. \nFor example, we will have to run the subquery Query* through the rewrite\nsystem. Right now, for UNION, I have a nice union List* in Query, and I\njust spin through it in postgres.c for each Union query. If we keep the\nsubquery Query* inside Sublink, we have to have some logic to go through\nand find them.\n\nIf we just have an Index in Sublink to the Query->subqueries, we can use\nthe nth() macro to find them quite easily.\n\nBut it is up to you. I really don't know how you are going to handle\nthings like:\n\n\tselect *\n\tfrom taba\n\twhere x = 3 and y = 5 and (z=6 or q in (select g from tabb ))\n\nMy logic was to break the problem down to single queries as much as\npossible, so we would be breaking the problem up into pieces. Whatever\nis easier for you.\n\n> \n> > \n> > Also, when parsing the subqueries, we need to keep track of correlated\n> > references. I recommend we add a field to the Var structure:\n> > \n> > Index sublevel; /* range table reference:\n> > = 0 current level of query\n> > < 0 parent above this many levels\n> > > 0 index into subquery list\n> > */\n> > \n> > This way, a Var node with sublevel 0 is the current level, and is true\n> > in most cases. This helps us not have to change much code. sublevel =\n> > -1 means it references the range table in the parent query. sublevel =\n> > -2 means the parent's parent. sublevel = 2 means it references the range\n> > table of the second entry in Query->subqueries. Varno and varattno are\n> > still meaningful. Of course, we can't reference variables in the\n> > subqueries from the parent in the parser code, but Vadim may want to.\n> ^^^^^^^^^^^^^^^^^\n> No. 
So, just use sublevel >= 0: 0 - current level, 1 - one level up, ...\n> sublevel is for optimizer only - executor will not use it.\n\nOK, if you don't need to reference range tables DOWN in subqueries, we\ncan use positive numbers.\n\n> > When doing a Var lookup in the parser, we look in the current level\n> > first, but if not found, if it is a subquery, we can look at the parent\n> > and parent's parent to set the sublevel, varno, and varatno properly.\n> > \n> > We create no phantom range table entries in the subquery, and no phantom\n> > target list entries. We can leave that all for the upper optimizer.\n> \n> Ok.\n> \n> Vadim\n> \n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Mon, 12 Jan 1998 09:23:49 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: subselects" }, { "msg_contents": "> typedef struct SubLink {\n> \tNodeTag\t\ttype;\n> \tint\t\tlinkType; /* EXISTS, ALL, ANY, EXPR */\n> \tbool\t\tuseor; /* TRUE for <> */\n> \tList\t *lefthand; /* List of Var/Const nodes on the left */\n> \tList\t *oper; /* List of Oper nodes */\n> \tQuery\t *subquery; /* */\n> } SubLink;\n\nIf you want Query* inside Sublink, rather than a separate Query* field,\nthis can be our SubLink structure.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Mon, 12 Jan 1998 09:25:54 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: subselects" }, { "msg_contents": "Ok. I don't see how Query->subqueries could me help, but I foresee\nthat Query->sublinks can do it. Could you add this ? \n\nBruce Momjian wrote:\n> \n> >\n> > What is Node *subquery ?\n> > In optimizer we need either in Subindex (to get subquery from Query->subqueries\n> > when beeing in Sublink) or in Node *subquery inside Sublink itself.\n> > BTW, after some thought I don't see how Query->subqueries will be usefull.\n> > So, may be just add bool hassubqueries to Query (and Query *parentQuery)\n> > and use Query *subquery in Sublink, but not subindex ?\n> \n> OK, I originally created it because the parser would have trouble\n> filling in a List* field in SelectStmt while it was parsing a WHERE\n> clause. I decided to just stick the SelectStmt* into Sublink->subquery.\n> \n> While we are going through the parse output to fill in the Query*, I\n> thought we should move the actual subquery parse output to a separate\n> place, and once the Query* was completed, spin through the saved\n> subquery parse list and stuff Query->subqueries with a list of Query*\n> for the subqueries. I thought this would be easier, because we would\n> then have all the subqueries in a nice list that we can manage easier.\n> \n> In fact, we can fill Query->subqueries with SelectStmt* as we process\n> the WHERE clause, then convert them to Query* at the end.\n> \n> If you would rather keep the subquery Query* entries in the Sublink\n> structure, we can do that. The only issue I see is that when you want\n> to get to them, you have to wade through the WHERE clause to find them.\n> For example, we will have to run the subquery Query* through the rewrite\n> system. Right now, for UNION, I have a nice union List* in Query, and I\n> just spin through it in postgres.c for each Union query. 
If we keep the\n> subquery Query* inside Sublink, we have to have some logic to go through\n> and find them.\n> \n> If we just have an Index in Sublink to the Query->subqueries, we can use\n> the nth() macro to find them quite easily.\n> \n> But it is up to you. I really don't know how you are going to handle\n> things like:\n> \n> select *\n> from taba\n> where x = 3 and y = 5 and (z=6 or q in (select g from tabb ))\n\nNo problems.\n\n> \n> My logic was to break the problem down to single queries as much as\n> possible, so we would be breaking the problem up into pieces. Whatever\n> is easier for you.\n\nVadim\n", "msg_date": "Tue, 13 Jan 1998 21:20:25 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: subselects" }, { "msg_contents": "> \n> Ok. I don't see how Query->subqueries could me help, but I foresee\n> that Query->sublinks can do it. Could you add this ? \n\nOK, so instead of moving the query out of the SubLink structure, you\nwant the Query* in the Sublink structure, and a List* of SubLink\npointers in the query structure?\n\n\tQuery\n\t{\n\t\t...\n\t\tList *sublink; /* list of pointers to Sublinks\n\t\t...\n\t}\n\nI can do that. Let me know.\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Tue, 13 Jan 1998 09:48:00 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: subselects" }, { "msg_contents": "Thomas G. Lockhart wrote:\n> \n> > > btw, to implement \"(a,b,c) OP (d,e,f)\" I made a new routine in the parser called\n> > > makeRowExpr() which breaks this up into a sequence of \"and\" and/or \"or\" expressions.\n> > > If lists are handled farther back, this routine should move to there also and the\n> > > parser will just pass the lists. Note that some assumptions have to be made about the\n> > > meaning of \"(a,b) OP (c,d)\", since usually we only have knowledge of the behavior of\n> > > \"a OP c\". Easy for the standard SQL operators, unknown for others, but maybe it is OK\n> > > to disallow those cases or to look for specific appearance of the operator to guess\n> > > the behavior (e.g. if the operator has \"<\" or \"=\" or \">\" then build as \"and\"s and if\n> > > it has \"<>\" or \"!\" then build as \"or\"s.\n> >\n> > Sorry, I forgot something: is (a, b) OP (x, y) in standard ?\n> \n> Yes. The problem wouldn't be very interesting otherwise :)\n\nCould we restrict OPs to standard ones (like we do for subselects) - I don't\nlike assumption about ORs for operators with \"!\" ?\n\"Assume as little as possible\" is good rule...\n\nVadim\n", "msg_date": "Tue, 13 Jan 1998 21:51:45 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: subselects" }, { "msg_contents": "> > > > Note that some assumptions have to be made about the\n> > > > meaning of \"(a,b) OP (c,d)\", since usually we only have knowledge of the behavior of\n> > > > \"a OP c\". Easy for the standard SQL operators, unknown for others, but maybe it is OK\n> > > > to disallow those cases or to look for specific appearance of the operator to guess\n> > > > the behavior (e.g. if the operator has \"<\" or \"=\" or \">\" then build as \"and\"s and if\n> > > > it has \"<>\" or \"!\" then build as \"or\"s.\n>\n> Could we restrict OPs to standard ones (like we do for subselects) - I don't\n> like assumption about ORs for operators with \"!\" ?\n> \"Assume as little as possible\" is good rule...\n\nYes, I agree. 
The suggestion about \"!\" was made without thinking very hard just to raise the\npossibility. Extending to other operators in a reliable way is an interesting problem, but is\nnot required and can be explicitly disallowed for now.\n\n - Tom\n\n", "msg_date": "Tue, 13 Jan 1998 15:24:30 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: subselects" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> >\n> > Ok. I don't see how Query->subqueries could me help, but I foresee\n> > that Query->sublinks can do it. Could you add this ?\n> \n> OK, so instead of moving the query out of the SubLink structure, you\n> want the Query* in the Sublink structure, and a List* of SubLink\n> pointers in the query structure?\n\nYes.\n\n> \n> Query\n> {\n> ...\n> List *sublink; /* list of pointers to Sublinks\n> ...\n> }\n> \n> I can do that. Let me know.\n\nThanks!\n\nAre there any opened issues ?\n\nVadim\n", "msg_date": "Wed, 14 Jan 1998 10:09:02 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: subselects" }, { "msg_contents": "> \n> Bruce Momjian wrote:\n> > \n> > >\n> > > Ok. I don't see how Query->subqueries could me help, but I foresee\n> > > that Query->sublinks can do it. Could you add this ?\n> > \n> > OK, so instead of moving the query out of the SubLink structure, you\n> > want the Query* in the Sublink structure, and a List* of SubLink\n> > pointers in the query structure?\n> \n> Yes.\n> \n> > \n> > Query\n> > {\n> > ...\n> > List *sublink; /* list of pointers to Sublinks\n> > ...\n> > }\n> > \n> > I can do that. Let me know.\n> \n> Thanks!\n> \n> Are there any opened issues ?\n\nOK, what do you need me to do. Do you want me to create the Sublink\nsupport stuff, fill them in in the parser, and pass them through the\nrewrite section and into the optimizer. I will prepare a list of\nchanges.\n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Thu, 15 Jan 1998 18:18:31 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: subselects" }, { "msg_contents": "> typedef struct SubLink {\n> \tNodeTag\t\ttype;\n> \tint\t\tlinkType; /* EXISTS, ALL, ANY, EXPR */\n> \tbool\t\tuseor; /* TRUE for <> */\n> \tList\t *lefthand; /* List of Var/Const nodes on the left */\n> \tList\t *oper; /* List of Oper nodes */\n> \tQuery\t *subquery; /* */\n> } SubLink;\n\nOK, we add this structure above. During parsing, *subquery actually\nwill hold Node *parsetree, not Query *.\n\nAnd add to Query:\n\n\tbool\thasSubLinks;\n\nAlso need a function to return a List* of SubLink*. I just did a\nsimilar thing with Aggreg*. And Var gets:\n\n\tint uplevels;\n\nIs that it?\n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Thu, 15 Jan 1998 18:26:41 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: subselects" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > typedef struct SubLink {\n> > NodeTag type;\n> > int linkType; /* EXISTS, ALL, ANY, EXPR */\n> > bool useor; /* TRUE for <> */\n> > List *lefthand; /* List of Var/Const nodes on the left */\n> > List *oper; /* List of Oper nodes */\n> > Query *subquery; /* */\n> > } SubLink;\n> \n> OK, we add this structure above. 
During parsing, *subquery actually\n> will hold Node *parsetree, not Query *.\n ^^^^^^^^^^^^^^^\nBut optimizer will get node Query here, yes ?\n\n> \n> And add to Query:\n> \n> bool hasSubLinks;\n> \n> Also need a function to return a List* of SubLink*. I just did a\n> similar thing with Aggreg*. And Var gets:\n> \n> int uplevels;\n> \n> Is that it?\n\nYes.\n\nVadim\n", "msg_date": "Fri, 16 Jan 1998 16:34:15 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: subselects" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> >\n> > Are there any opened issues ?\n> \n> OK, what do you need me to do. Do you want me to create the Sublink\n> support stuff, fill them in in the parser, and pass them through the\n> rewrite section and into the optimizer. I will prepare a list of\n> changes.\n\nPlease do this. I'm ready to start coding of things in optimizer.\n\nVadim\n", "msg_date": "Fri, 16 Jan 1998 16:37:20 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: subselects" } ]
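For readers following the thread above, here is a minimal sketch of how the agreed-on SubLink node might describe a clause such as WHERE (i, j) IN (SELECT a, b FROM y) once the parser has rewritten IN as "= ANY". The struct mirrors the typedef Vadim posted; the enum constants, the stub List/Query types and the main() wrapper are illustrative stand-ins, not the code that was actually committed.

#include <stdio.h>

typedef enum { EXISTS_SUBLINK, ALL_SUBLINK, ANY_SUBLINK, EXPR_SUBLINK } SubLinkType;
typedef struct List  List;      /* stand-ins for the backend's List,   */
typedef struct Query Query;     /* Oper and Var node types             */

typedef struct SubLink {
    SubLinkType linkType;   /* EXISTS, ALL, ANY or EXPR                          */
    int         useor;      /* true only for <>: OR the per-column comparisons   */
    List       *lefthand;   /* (i, j) -- one Var/Const node per left-hand column */
    List       *oper;       /* one "=" Oper node per left-hand column            */
    Query      *subquery;   /* the parsed SELECT a, b FROM y                     */
} SubLink;

int main(void)
{
    /* IN     becomes = ANY,  per-column tests ANDed (useor = 0) */
    /* NOT IN becomes <> ALL, per-column tests ORed  (useor = 1) */
    SubLink in_clause = { ANY_SUBLINK, 0, NULL, NULL, NULL };

    printf("linkType=%d useor=%d\n", (int) in_clause.linkType, in_clause.useor);
    return 0;
}

The uplevels field being added to Var plays its part outside this node: a correlated reference from the subquery to an outer range table simply carries uplevels greater than zero, which is why no phantom range table or target list entries are needed.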
[ { "msg_contents": "> \n> A related question: Is it possible to store tuples over more than one\n> block? Would it be possible to split a big TEXT into multiple blocks?\n> \n\nPossible, but would cut the access speed to (1 / # blocks), no?\n\nThere is a var in the tuple header, t_chain, 6.2.1 that has since been\nremoved for 6.3. I think its original purpose was with time-travel,\n_but_, if we go with a ROWID instead of an oid in the future, this could\nbe put back in the header and would be the actual address of the next\nblock in the chain.\n\nOracle has this concept of chained rows. It is how they implement all\nof their LONG* types and also handle rows of normal types that are\nlarger than the block size.\n\ndarrenk\n", "msg_date": "Fri, 9 Jan 1998 09:43:52 -0500", "msg_from": "[email protected] (Darren King)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] varchar/char size" }, { "msg_contents": "Darren King wrote:\n> > A related question: Is it possible to store tuples over more than one\n> > block? Would it be possible to split a big TEXT into multiple blocks?\n> Possible, but would cut the access speed to (1 / # blocks), no?\n\nFor \"big\" (multile blocks) rows, maybe. Consecutive blocks should be\nbuffered by the disk or the os, so I don't think the difference would\nbe big, or even noticeable.\n\n> There is a var in the tuple header, t_chain, 6.2.1 that has since been\n> removed for 6.3. I think its original purpose was with time-travel,\n> _but_, if we go with a ROWID instead of an oid in the future, this could\n> be put back in the header and would be the actual address of the next\n> block in the chain.\n> \n> Oracle has this concept of chained rows. It is how they implement all\n> of their LONG* types and also handle rows of normal types that are\n> larger than the block size.\n\nYes! I can't see why PostgreSQL should not be able to store rows bigger\nthan one block? I have seen people referring to this limitation every\nnow and then, but I don't understand why it has to be that way?\nIs this something fundamental to PostgreSQL?\n\n/* m */\n", "msg_date": "Mon, 12 Jan 1998 18:18:47 +0100", "msg_from": "Mattias Kregert <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Storing rows bigger than one block" }, { "msg_contents": "Mattias Kregert wrote:\n> \n> Darren King wrote:\n> \n> > There is a var in the tuple header, t_chain, 6.2.1 that has since been\n> > removed for 6.3. I think its original purpose was with time-travel,\n> > _but_, if we go with a ROWID instead of an oid in the future, this could\n> > be put back in the header and would be the actual address of the next\n> > block in the chain.\n\nNo, this is not for time-travel. Look at implementation guide.\n\n> >\n> > Oracle has this concept of chained rows. It is how they implement all\n> > of their LONG* types and also handle rows of normal types that are\n> > larger than the block size.\n> \n> Yes! I can't see why PostgreSQL should not be able to store rows bigger\n> than one block? I have seen people referring to this limitation every\n> now and then, but I don't understand why it has to be that way?\n> Is this something fundamental to PostgreSQL?\n ^^^^^^^^^^^\nIt seems that answeer is \"No\". Just - not implemented feature.\nPersonally, I would like multi-representation feature more than that.\nAnd easy to implement.\n\nVadim\n", "msg_date": "Tue, 13 Jan 1998 15:54:09 +0700", "msg_from": "\"Vadim B. 
Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Storing rows bigger than one block" } ]
[ { "msg_contents": "Hi all,\n\nHave just tried 9. Jan snapshot on AIX 4.1.5,\ncompiles with gcc not with cc (fails in heaptuple.c), aix won't get\ndefined though, did a -Daix\n\nThe as keyword is now obligatory as in:\n\nregression=> select name a, age b from emp;\nERROR: parser: parse error at or near \"a\"\n\nstrange results as in (should be syntax error):\nregression=> select name 'a' from emp;\n?column?\n--------\na\n(1 row)\n\nSince the other(DBMS)s don't insist on the as I would suggest not to be\nstricter than the others even if it needs a lot of brainwork. (Tom ?)\n\nMight try something like:\n\tif not registered right unary operator then label (probably no\ngood)\nor\n\tforce non alpha 1. char for unary operators (i think this is\nbest)\nor even\n\tdisallow creation of right and left unary operators alltogether\n(can always use function instead)\n\nhm... really not easy... \nComments ?\n\nAndreas \n", "msg_date": "Fri, 9 Jan 1998 16:17:13 +0100", "msg_from": "Zeugswetter Andreas DBT <[email protected]>", "msg_from_op": true, "msg_subject": "column labels now with obligatory 'as'" }, { "msg_contents": "> The as keyword is now obligatory as in:\n>\n> regression=> select name a, age b from emp;\n> ERROR: parser: parse error at or near \"a\"\n\nI have confirmed that this has been the behavior since at least v6.0. It\ndoes appear to be the case that omitting the \"AS\" is allowed in SQL92\nsyntax. As you note below, conflicts in syntax with unary operators are\nthe problem. I suspect it is the right unary operators only, of which\nthere are only a few:\n\n ! integer factorial\n % truncate float8\n\nThe unary right operators have another problem:\n\npostgres=> select 1.23 %;\nERROR: parser: parse error at or near \"\"\n\nFor some reason the parser can't end a statement with a unary right\noperator. Don't know why exactly. Can currently work around by adding junk\nto the end or by putting parens around:\n\npostgres=> select 1.23 % order by 1;\n?column?\n--------\n 1\n(1 row)\n\n> strange results as in (should be syntax error):\n> regression=> select name 'a' from emp;\n> ?column?\n> --------\n> a\n> (1 row)\n\nThis is new support for SQL92 type specification of string constants. The\nsyntax requires a valid type followed by a string literal constant. \"name\"\nis a Postgres type. Glad it works :)\n\n> Since the other(DBMS)s don't insist on the as I would suggest not to be\n> stricter than the others even if it needs a lot of brainwork. (Tom ?)\n\nDo other systems allow right unary operators? Which ones now allow\ncreating new right unary operators? How do they handle it? These are the\nfeatures which lead to the parser ambiguities, since you can't hardcode\nthe cases into yacc, our parser-of-choice.\n\n> Might try something like:\n> if not registered right unary operator then label (probably no\n> good)\n\nyacc would still have already found shift/reduce conflicts...\n\n> or\n> force non alpha 1. char for unary operators (i think this is\n> best)\n\nThis isn't the source of the problem, and in fact is already required by\nthe scanner. The problem is with parsing\n\n column Op label\nvs\n column Op column\n\n> or even\n> disallow creation of right and left unary operators alltogether\n> (can always use function instead)\n\nWell, losing other features to get this one may not be worth it- \"throwing\nout the baby with the bathwater\".\n\n> hm... 
really not easy...\n> Comments ?\n\nThe problem is with parsing\n\n column Op label\nvs\n column Op columnThe parser only knows about syntax, and is not allowed\nto peek and see if, for example, something is a valid type (at that point\nyacc has already complained anyway).\n\nWe could probably break the shift/reduce conflict in yacc by insisting\nthat right unary expressions be enclosed in parentheses, but that seems\nugly. It is certainly true though that the most common action by users is\nlabeling columns, not using right unary operators. Comments?\n\n - Tom\n\nAnother lurking problem is allowing \";\" to be an operator as well as a\nstatement terminator. Probably a holdover from Postquel, but doesn't work\nat all well in SQL. We should yank it.\n\n", "msg_date": "Fri, 09 Jan 1998 16:26:49 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] column labels now with obligatory 'as'" }, { "msg_contents": "> We could probably break the shift/reduce conflict in yacc by insisting\n> that right unary expressions be enclosed in parentheses, but that seems\n> ugly. It is certainly true though that the most common action by users is\n> labeling columns, not using right unary operators. Comments?\n> \n> - Tom\n> \n> Another lurking problem is allowing \";\" to be an operator as well as a\n> statement terminator. Probably a holdover from Postquel, but doesn't work\n> at all well in SQL. We should yank it.\n\nI agree on ';'.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Fri, 9 Jan 1998 12:34:37 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] column labels now with obligatory 'as'" } ]
[ { "msg_contents": "> Have just tried 9. Jan snapshot on AIX 4.1.5,\n> compiles with gcc not with cc (fails in heaptuple.c), aix won't get\n> defined though, did a -Daix\n\nIBM compiler doesn't like the (void)NULLs in the heap_getattr and\nStrNCpy macros. Change to (bool)NULL in heap_getattr in include/\naccess/heapam.h and to (char)NULL in StrNCpy in include/c.h. In 6.2,\nheap_getattr has (char)NULL (which is what bool is, so this worked),\nbut gcc gave warnings, so they were changed to _errors_ to \"fix\" them\nfor 6.3.\n\ndarrenk\n", "msg_date": "Fri, 9 Jan 1998 10:35:07 -0500", "msg_from": "[email protected] (Darren King)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] column labels now with obligatory 'as'" }, { "msg_contents": "> > Have just tried 9. Jan snapshot on AIX 4.1.5,\n> > compiles with gcc not with cc (fails in heaptuple.c), aix won't get\n> > defined though, did a -Daix\n>\n> IBM compiler doesn't like the (void)NULLs in the heap_getattr and\n> StrNCpy macros. Change to (bool)NULL in heap_getattr in include/\n> access/heapam.h and to (char)NULL in StrNCpy in include/c.h. In 6.2,\n> heap_getattr has (char)NULL (which is what bool is, so this worked),\n> but gcc gave warnings, so they were changed to _errors_ to \"fix\" them\n> for 6.3.\n\nAck! It looks to me like the StrNCpy macro needs (void *)NULL or just\nNULL, not (void)NULL. Same with heap_getattr. Was this a workaround for\na compiler problem on another platform, or are we mixing type coersions?\n\n - Tom\n\n", "msg_date": "Fri, 09 Jan 1998 17:14:21 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] column labels now with obligatory 'as'" }, { "msg_contents": "> \n> > > Have just tried 9. Jan snapshot on AIX 4.1.5,\n> > > compiles with gcc not with cc (fails in heaptuple.c), aix won't get\n> > > defined though, did a -Daix\n> >\n> > IBM compiler doesn't like the (void)NULLs in the heap_getattr and\n> > StrNCpy macros. Change to (bool)NULL in heap_getattr in include/\n> > access/heapam.h and to (char)NULL in StrNCpy in include/c.h. In 6.2,\n> > heap_getattr has (char)NULL (which is what bool is, so this worked),\n> > but gcc gave warnings, so they were changed to _errors_ to \"fix\" them\n> > for 6.3.\n> \n> Ack! It looks to me like the StrNCpy macro needs (void *)NULL or just\n> NULL, not (void)NULL. Same with heap_getattr. Was this a workaround for\n> a compiler problem on another platform, or are we mixing type coersions?\n\nPrevented the warning 'right-hand side of conditional skipped...' Try\nand and see what you get. It has to do with macro usage and constant\nparameters.\n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Fri, 9 Jan 1998 15:10:11 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] column labels now with obligatory 'as'" } ]
[ { "msg_contents": ">> \n>> While implementing a method to retrieve the permissions on a table,\n>> the statement: \"grant all on test to public;\" kills the backend.\n>\n>Works here.\n\nDon't work here either:\n(dbx) where\nOrderedElemPop(elem = 0x2008b4c8), line 130 in \"oset.c\"\nAllocSetFree(0x2008b1c8, 0x2008b4d8), line 226 in \"aset.c\"\nAllocSetReset(0x2008b1c8), line 134 in \"aset.c\"\nEndPortalAllocMode(), line 914 in \"portalmem.c\"\nAtCommit_Memory(), line 688 in \"xact.c\"\nCommitTransaction(), line 899 in \"xact.c\"\nCommitTransactionCommand(), line 1084 in \"xact.c\"\nPostgresMain(0xa, 0x2ff22aa0), line 1600 in \"postgres.c\"\nmain(argc = 10, argv = 0x2ff22aa0), line 79 in \"main.c\"\n\n(invalid char ptr (0x2008b4d8))), line 226 in \"aset.c\"\n\nAndreas\n", "msg_date": "Fri, 9 Jan 1998 16:36:27 +0100", "msg_from": "Zeugswetter Andreas DBT <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] grant broken" } ]
[ { "msg_contents": "> > A few things that I have noticed will be affected by allowing the\n> > disk block size to be other than 8k. (4k, 8k, 16k or 32k)\n> > \n> > 1. Rules\n> > \n> > The rule system currently stores plans as tuples in pg_rewrite.\n> > Making the block size smaller will accordingly reduce the size of\n> > the rules you can create.\n> \n> I say make it match the given block size at compile time.\n\nFor now it does. There's a comment in rewriteDefine.c though that\nindicates the original pg coders thought about putting the stored\nplans into large objects if 8k was too limiting.\n\nCould be nice to have the type limits stored in a system table so\nthe user or a program could query the limits of the current db.\n\n> > 2. Attribute limits\n> > \n> > Should the size limits of the varchar/char be driven by the chosen\n> > block size?\n> \n> Yes, they should be calculated based on the compile block size.\n> ...\n> Just make the max size based on the block size.\n> ...\n> This is an interesting point. While we can compute most of the changes\n> at compile time, we will have to communicate with clients that were\n> compiled with different max limits.\n> \n> I recommend we increase the max client buffer size to what we believe is\n> the largest block size anyone would ever reasonably choose. That way,\n> all can communicate. I recommend you contact Peter Mount for JDBC,\n> Openlink for ODBC, and all the other client maintainers and let them\n> know the changes will be in 6.3 so they can be ready with new version\n> when 6.3 starts beta on February 1.\n\nSo the buffer size will be defined in one place also that they should all\nreference when compiling or running? In include/config.h I assume?\n\nThis could be difficult for the ODBC and JDBC drivers to determine\nautomagically since they are usually compiled on different systems that\nthe postgres src.\n\nOther stuff...\n\nCould the block size be made into a command line option, like \"-k 8192\"?\n\nWould only require that the BLCKSZ define become a variable and that it\nbe passed to the backends too. Much easier than having to recompile/install\npostgres to change the block size. Could have multiple postmasters running\ndifferent block-sized databases without having to have a binary around for\neach size.\n\nRenaming BLCKSZ...\n\nHow about PG_BLOCK_SIZE? Or if it's made a variable, DiskBlockSize, keeping\nit in the tradition of SortMem, ShowStats, etc.\n\ndarrenk\n", "msg_date": "Fri, 9 Jan 1998 10:43:27 -0500", "msg_from": "[email protected] (Darren King)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Disk block size issues." }, { "msg_contents": "> \n> > > A few things that I have noticed will be affected by allowing the\n> > > disk block size to be other than 8k. (4k, 8k, 16k or 32k)\n> > > \n> > > 1. Rules\n> > > \n> > > The rule system currently stores plans as tuples in pg_rewrite.\n> > > Making the block size smaller will accordingly reduce the size of\n> > > the rules you can create.\n> > \n> > I say make it match the given block size at compile time.\n> \n> For now it does. There's a comment in rewriteDefine.c though that\n> indicates the original pg coders thought about putting the stored\n> plans into large objects if 8k was too limiting.\n\nYep, I saw that too.\n\n> Could be nice to have the type limits stored in a system table so\n> the user or a program could query the limits of the current db.\n\nSomeday.\n\n> \n> > > 2. 
Attribute limits\n> > > \n> > > Should the size limits of the varchar/char be driven by the chosen\n> > > block size?\n> > \n> > Yes, they should be calculated based on the compile block size.\n> > ...\n> > Just make the max size based on the block size.\n> > ...\n> > This is an interesting point. While we can compute most of the changes\n> > at compile time, we will have to communicate with clients that were\n> > compiled with different max limits.\n> > \n> > I recommend we increase the max client buffer size to what we believe is\n> > the largest block size anyone would ever reasonably choose. That way,\n> > all can communicate. I recommend you contact Peter Mount for JDBC,\n> > Openlink for ODBC, and all the other client maintainers and let them\n> > know the changes will be in 6.3 so they can be ready with new version\n> > when 6.3 starts beta on February 1.\n> \n> So the buffer size will be defined in one place also that they should all\n> reference when compiling or running? In include/config.h I assume?\n\nYes, in config.h, and let's call it PG... so it is clear, and everything\ncan key off of that.\n\n> \n> This could be difficult for the ODBC and JDBC drivers to determine\n> automagically since they are usually compiled on different systems that\n> the postgres src.\n\nI think they will need to handle the maximum size someone could ever\nchoose. Let's face it, 32k or 64k is not too much to ask for a buffer. \nI just hope there are not too many of them. I only see it in one place\nin libpq. The others are malloc'ed based on how big the result is when\nit comes back from the socket.\n\nI recommend we add a test in config.h to make sure they do not set the\nmax size greater than some predefined limit, and mention why we test\nthere (for clients). The interface/* files will not use the backend\nblock size, but will use another config.h define called PGMAXBLCKSZ, or\nsomething like that, so they can interoperate will all backends.\n\n> \n> Other stuff...\n> \n> Could the block size be made into a command line option, like \"-k 8192\"?\n\nToo scary for me.\n\n> \n> Would only require that the BLCKSZ define become a variable and that it\n> be passed to the backends too. Much easier than having to recompile/install\n> postgres to change the block size. Could have multiple postmasters running\n> different block-sized databases without having to have a binary around for\n> each size.\n\nYes, we could do that, but if they ever start the postmaster with a\ndifferent value, he is lost. I thought because of the bit fields and\ncases where BLCKSZ is used in macros to define sized arrays that we\ncan't make it variable.\n\nI think we should make it a config.h constant for now, but I am not firm\non this.\n\n> \n> Renaming BLCKSZ...\n> \n> How about PG_BLOCK_SIZE? Or if it's made a variable, DiskBlockSize, keeping\n> it in the tradition of SortMem, ShowStats, etc.\n\nI like that new name.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Fri, 9 Jan 1998 12:28:30 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Disk block size issues." }, { "msg_contents": "On Fri, 9 Jan 1998, Darren King wrote:\n\n> How about PG_BLOCK_SIZE? 
Or if it's made a variable, DiskBlockSize, keeping\n> it in the tradition of SortMem, ShowStats, etc.\n\n\tI know of one site that builds their Virtual Websites into\nchroot()'d environments...something like this would be perfect for them,\nas it would prvent them having to recompile for each individual size...\n\n\tBut...initdb would have to have an appropriate option...and we'd\nhave to have a mechanism in place that checks that -k parameter is\nactually appropriate.\n\n\tWould it not make a little more sense to have a pg_block_size file\ncreated in the data directory that postmaster reads at startup?\n\n\n", "msg_date": "Fri, 9 Jan 1998 13:10:20 -0500 (EST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Disk block size issues." }, { "msg_contents": "> \n> On Fri, 9 Jan 1998, Darren King wrote:\n> \n> > How about PG_BLOCK_SIZE? Or if it's made a variable, DiskBlockSize, keeping\n> > it in the tradition of SortMem, ShowStats, etc.\n> \n> \tI know of one site that builds their Virtual Websites into\n> chroot()'d environments...something like this would be perfect for them,\n> as it would prvent them having to recompile for each individual size...\n> \n> \tBut...initdb would have to have an appropriate option...and we'd\n> have to have a mechanism in place that checks that -k parameter is\n> actually appropriate.\n> \n> \tWould it not make a little more sense to have a pg_block_size file\n> created in the data directory that postmaster reads at startup?\n\nI like that, but the postmaster and each backend would have to read that\nfile before starting, or the postmaster can pass it down into the\npostgres backend via a command-line option.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Fri, 9 Jan 1998 13:36:15 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Disk block size issues." }, { "msg_contents": "On Fri, 9 Jan 1998, Bruce Momjian wrote:\n\n> > Other stuff...\n> > \n> > Could the block size be made into a command line option, like \"-k 8192\"?\n> \n> Too scary for me.\n\n\tI kinda like this one...if it can be relatively implimented. The main\nreason I like it is that, like -B and -S, it means that someone could deal\nwith \"tweaking\" a system without having to recompile from scratch...\n\n\tThat said, I'd much rather that -k option being something that is \nan option only available when *creating* the database (ie. initdb) with a\npg_blocksize file being created and checked when postmaster starts up.\n\n\tEssentially, make '-k 8192' an option only available to the postgres\nprocess, not the postmaster process. And not settable by the -O option to\npostmaster...\n\n> Yes, we could do that, but if they ever start the postmaster with a\n> different value, he is lost. \n\n\tSee above...it should only be something that is settable at initdb time,\nnot accessible via 'postmaster' itself...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 9 Jan 1998 17:48:59 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Disk block size issues." }, { "msg_contents": "\n=> \tI kinda like this one...if it can be relatively implimented. 
The main\n=> reason I like it is that, like -B and -S, it means that someone could deal\n=> with \"tweaking\" a system without having to recompile from scratch...\n=> \nThe -S flag for the postmaster seems to be setting the silentflag. But the\nFAQ says, it can be used to set the sort memory. The following is 6.2.1 version\ncode in src/backend/postmaster/postmaster.c\n case 'S':\n\n /*\n * Start in 'S'ilent mode (disassociate from controlling\n * tty). You may also think of this as 'S'ysV mode since\n * it's most badly needed on SysV-derived systems like\n * SVR4 and HP-UX.\n */\n silentflag = 1;\n break;\n\nAm I looking at the wrong file? Can someone please tell me how to increase\nthe sort memory size.\n\nThanks\n--shiby\n\n\n", "msg_date": "Fri, 09 Jan 1998 17:40:45 -0500", "msg_from": "Shiby Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Disk block size issues. " }, { "msg_contents": "Bug in FAQ, fixed now. The -S in postmaster is silent, the -S in\npostgres is sort. The FAQ had it as postmaster when it should have been\npostgres.\n\n> \n> \n> => \tI kinda like this one...if it can be relatively implimented. The main\n> => reason I like it is that, like -B and -S, it means that someone could deal\n> => with \"tweaking\" a system without having to recompile from scratch...\n> => \n> The -S flag for the postmaster seems to be setting the silentflag. But the\n> FAQ says, it can be used to set the sort memory. The following is 6.2.1 version\n> code in src/backend/postmaster/postmaster.c\n> case 'S':\n> \n> /*\n> * Start in 'S'ilent mode (disassociate from controlling\n> * tty). You may also think of this as 'S'ysV mode since\n> * it's most badly needed on SysV-derived systems like\n> * SVR4 and HP-UX.\n> */\n> silentflag = 1;\n> break;\n> \n> Am I looking at the wrong file? Can someone please tell me how to increase\n> the sort memory size.\n> \n> Thanks\n> --shiby\n> \n> \n> \n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Fri, 9 Jan 1998 17:54:01 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Disk block size issues." }, { "msg_contents": "On Fri, 9 Jan 1998, Bruce Momjian wrote:\n\n> > > This is an interesting point. While we can compute most of the changes\n> > > at compile time, we will have to communicate with clients that were\n> > > compiled with different max limits.\n> > > \n> > > I recommend we increase the max client buffer size to what we believe is\n> > > the largest block size anyone would ever reasonably choose. That way,\n> > > all can communicate. I recommend you contact Peter Mount for JDBC,\n> > > Openlink for ODBC, and all the other client maintainers and let them\n> > > know the changes will be in 6.3 so they can be ready with new version\n> > > when 6.3 starts beta on February 1.\n\nI'll be ready :-)\n\n> > So the buffer size will be defined in one place also that they should all\n> > reference when compiling or running? In include/config.h I assume?\n> \n> Yes, in config.h, and let's call it PG... so it is clear, and everything\n> can key off of that.\n> \n> > \n> > This could be difficult for the ODBC and JDBC drivers to determine\n> > automagically since they are usually compiled on different systems that\n> > the postgres src.\n\nNot necesarily for JDBC. 
Because of it's nature, there is no real reason\nwhy we can't even include it precompiled with the source - the same jar\nfile runs on any platform.\n\nInfact, this does bring up the same problem we were discussing about\nearlier, where we were thinking about changing the protocol on startup. If\nthat change occurs, then this value is an ideal candidate to add to the\nstartup packet.\n\n> I think they will need to handle the maximum size someone could ever\n> choose. Let's face it, 32k or 64k is not too much to ask for a buffer. \n> I just hope there are not too many of them. I only see it in one place\n> in libpq. The others are malloc'ed based on how big the result is when\n> it comes back from the socket.\n> \n> I recommend we add a test in config.h to make sure they do not set the\n> max size greater than some predefined limit, and mention why we test\n> there (for clients). The interface/* files will not use the backend\n> block size, but will use another config.h define called PGMAXBLCKSZ, or\n> something like that, so they can interoperate will all backends.\n\nSlight problem with JDBC (or Java in general), in that we don't use .h\nfiles, so settings in config.h are useless to us. So far, certain\nconstants have been duplicated in the source.\n\nI was thinking of possibly adding a couple of functions to the backend, to\nallow us to get certain details about the backend, which is needed for\ncertain DatabaseMetaData methods. Perhaps adding PGMAXBLCKSZ to that may \nget round the problem.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Sat, 10 Jan 1998 12:02:10 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Disk block size issues." } ]
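A small sketch of the config.h arrangement being discussed in the thread above: the block size defined in exactly one place, plus a compile-time guard so nobody builds a backend with a value larger than what clients size their buffers for. The macro names and the exact bounds are illustrative, not necessarily what was committed.

/* include/config.h (sketch) */
#define BLCKSZ              8192    /* backend disk block size, chosen at build time */
#define PG_MAX_BLCKSZ      32768    /* largest size any client must be able to read  */

#if BLCKSZ < 1024 || BLCKSZ > PG_MAX_BLCKSZ
#error BLCKSZ must lie between 1024 and PG_MAX_BLCKSZ, since clients size buffers to it
#endif

#if (BLCKSZ & (BLCKSZ - 1)) != 0
#error BLCKSZ must be a power of 2
#endif

Interfaces such as libpq, ODBC and JDBC would then build (or, for JDBC, ship) against PG_MAX_BLCKSZ rather than the backend's BLCKSZ, which is the interoperability point Bruce and Peter are making.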
[ { "msg_contents": "Hi,\n\nI have built and installed the latest version of the PostgreSQL RDBMS\non S/Linux but am having problems with the regression tests.\n\nMany of the failures come from floating point exceptions.\n\nI've written a small c prog to demonstrate the problem which seems to\nshow that using exp() on a number less than 1.0e-150 gives an FPE.\n\nCan anyone tell me what's broken, if anything.\n\nI'm really not sure whether it is...\n\ncpu : Fujitsu or Weitek Power-UP\nfpu : Fujitsu or Weitek on-chip FPU \nkernel : sparclinux-2.0-980106\nlibc/m : libc.so.5.3.12/libm.so.5.0.6\ngcc : gcc version 2.7.2.1\n\nSomething else???\nExpected behaviour???\n\nThe machine is a SPARCstation IPX running Red Hat 4.2.\n\nThanks for any help,\nKeith.\n\n[emkxp01@sparclinux ~]$ cat mxdb.c\n#include <stdio.h>\n#include <stdlib.h>\n#include <math.h>\n#include <float.h>\n\nmain()\n{\n double dbval;\n\n dbval = atof(\"1.0e-150\");\n printf(\"dbl1 = %e\\n\", dbval);\n printf(\"Exp dbl1 = %e\\n\", exp(dbval));\n dbval = atof(\"1.0e-151\");\n printf(\"dbl2 = %e\\n\", dbval);\n printf(\"Exp dbl2 = %e\\n\", exp(dbval));\n} \n[emkxp01@sparclinux ~]$ gcc mxdb.c -lm\n[emkxp01@sparclinux ~]$ ./a.out\ndbl1 = 1.000000e-150\nExp dbl1 = 1.000000e+00\ndbl2 = 1.000000e-151\nFloating exception (core dumped) \n\n\n", "msg_date": "Fri, 9 Jan 1998 15:53:57 +0000 (GMT)", "msg_from": "Keith Parks <[email protected]>", "msg_from_op": true, "msg_subject": "Trouble with exp() on S/Linux?" } ]
[ { "msg_contents": "Tom,\n\nThanks for your testing and feedback.\n\nI got the attached reply from Jakub Jelinek on the \"sparclinux\" list which\nexplains the problem.\n\nI guess that your sparc20 does not need help from the kernel with this math.\n\nIf you get chance to try a snapshot I'd be interested in your experiences.\n\nThanks,\nKeith.\n\nBTW: I used the linux-elf template with a few minor changes.\n\n*** linux-elf Sat Oct 25 09:49:56 1997\n--- sparc-linux-elf Thu Jan 8 16:56:44 1998\n***************\n*** 1,9 ****\n AROPT:crs\n! CFLAGS:-O2 -m486\n! SHARED_LIB:-fpic\n ALL:\n SRCH_INC:/usr/include/ncurses /usr/include/readline\n! SRCH_LIB:\n USE_LOCALE:no\n DLSUFFIX:.so\n YFLAGS:-d\n--- 1,9 ----\n AROPT:crs\n! CFLAGS:-O2\n! SHARED_LIB:-fPIC\n ALL:\n SRCH_INC:/usr/include/ncurses /usr/include/readline\n! SRCH_LIB:/usr/local/lib\n USE_LOCALE:no\n DLSUFFIX:.so\n YFLAGS:-d \n------------- Begin Forwarded Message -------------\n\nFrom: Jakub Jelinek <[email protected]>\nSubject: Re: Trouble with exp() on S/Linux?\nTo: [email protected]\nDate: Fri, 9 Jan 1998 17:32:16 +0100 (MET)\nMime-Version: 1.0\nX-Orcpt: rfc822;[email protected]\n\n> Many of the failures come from floating point exceptions.\n> \n> I've written a small c prog to demonstrate the problem which seems to\n> show that using exp() on a number less than 1.0e-150 gives an FPE.\n> \n> Can anyone tell me what's broken, if anything.\nKernel is broken.\n> Expected behaviour???\n\nNo, just need to be fixed. On some FPUs hw expect subnormal calculations to\nbe emulated in software and kernel is not doing that. We have support for\nthis for sparc64, it just needs to be backported.\n\nCheers,\n Jakub\n___________________________________________________________________\nJakub Jelinek | [email protected] | http://sunsite.mff.cuni.cz\nAdministrator of SunSITE Czech Republic, MFF, Charles University\n___________________________________________________________________\nUltralinux - first 64bit OS to take full power of the UltraSparc\nLinux version 2.0.32 on a sparc machine (291.64 BogoMips).\n___________________________________________________________________\n\n\n------------- End Forwarded Message -------------\n\n\n", "msg_date": "Fri, 9 Jan 1998 18:30:59 +0000 (GMT)", "msg_from": "Keith Parks <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Trouble with exp() on S/Linux?" } ]
[ { "msg_contents": "While you can do a UNION of views, you can not do a VIEW of UNIONs.\n\nThis is OK:\n\n\tselect * from view1 UNION select * from view2;\n\nThis is not OK:\n\n\tcreate view testv as select * from test1 UNION select * from test2;\n\nDoes the standard allow this? Thomas? I currently print a 'not\nimplemented' message.\n\nInformix does not allow it, and I can't figure out how to do it with the\nre-write system yet.\n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Fri, 9 Jan 1998 16:33:05 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Views on UNIONs" } ]
[ { "msg_contents": "> On Fri, 9 Jan 1998, Bruce Momjian wrote:\n> \n> > > Could the block size be made into a command line option, like \"-k 8192\"?\n> > \n> > Too scary for me.\n> \n> \tI kinda like this one...if it can be relatively implimented. The main\n> reason I like it is that, like -B and -S, it means that someone could deal\n> with \"tweaking\" a system without having to recompile from scratch...\n> \n> \tThat said, I'd much rather that -k option being something that is \n> an option only available when *creating* the database (ie. initdb) with a\n> pg_blocksize file being created and checked when postmaster starts up.\n> \n> \tEssentially, make '-k 8192' an option only available to the postgres\n> process, not the postmaster process. And not settable by the -O option to\n> postmaster...\n> \n> > Yes, we could do that, but if they ever start the postmaster with a\n> > different value, he is lost. \n> \n> \tSee above...it should only be something that is settable at initdb time,\n> not accessible via 'postmaster' itself...\n\nThis is a pretty reasonable restriction, but...\n\nThe major change would be like Bruce has stated earlier, the variables that\nare declared with the #define value would have to be made into pointers and\npalloc'd/pfree'd as necessary. Could get pretty ugly in files like nbtsort.c\nwith double-dereferenced pointers and all.\n\nI'll make a list of these variables this weekend and come with a more definate\nopinion on the subject.\n\ndarrenk\n", "msg_date": "Fri, 9 Jan 1998 17:24:04 -0500", "msg_from": "[email protected] (Darren King)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Disk block size issues." } ]
[ { "msg_contents": "Here's a blast from the past. Shows how I keep those open issues in my\nmailbox.\n\nForwarded message:\n> From [email protected] Wed Jan 29 14:49:26 1997\n> X-Authentication-Warning: hub.org: pgsql set sender to [email protected] using -f\n> Date: Wed, 29 Jan 1997 13:38:10 -0500\n> From: [email protected] (Darren King)\n> Message-Id: <9701291838.AA22296@ceodev>\n> To: [email protected]\n> Subject: [HACKERS] Max size of data types and tuples.\n> Mime-Version: 1.0\n> Content-Type: text/plain; charset=US-ASCII\n> Content-Transfer-Encoding: 7bit\n> Content-Md5: 9bBKVeeTrq97EhvkS6qT3A==\n> Sender: [email protected]\n> Reply-To: [email protected], [email protected] (Darren King)\n> \n> \n> Does anyone know of a valid reason for the 4096 byte limit\n> on the textual fields (char, varchar, text, etc...)?\n> \n> Could it be bumped to (MAXTUPLEN - sizeof(int)) in the parser\n> and in utils/adt/varchar.c? MAXTUPLEN would have to be\n> calculated more accurately though as noted in the comment\n> around its definition.\n> \n> \n> The following are just some of my observations and comments\n> on things that I think should be cleaned up a little in\n> the source (and I'll volunteer to do them).\n> \n> 1. Clean up the #defines for the max block size. Currently,\n> there are at least four references to 8192...\n> \n> include/config.h\n> include/storage/bufmgr.h\n> include/optimizer/internal.h\n> backend/rewrite/rewriteDefine.c\n> \n> The _PAGE_SIZE_ in internal.h is _not_ used anywhere in the\n> optimizer. This define should be linked to BLCKSZ, but would\n> be better to remove it and just use BLCKSZ.\n> \n> optimizer/costsize.c includes storage/bufpage.h for BLCKSZ,\n> but in fact this is defined in config.h!. Also included in\n> \n> executor/nodeHash.c\n> executor/nodeHashjoin.c\n> utils/sort/psort.c\n> \n> __These includes of storage/bufpage.h can be removed.__\n> \n> \n> There should be #define MAX_BLOCK_SIZE 8192\n> #define CURRENT_BLOCK_SIZE 8192\n> \n> The MAX_BLOCK_SIZE is a hard and fast limit since only 13 bits\n> can be used in offsets and the like. I believe that in the\n> future, PostgreSql should support block sizes other than 8k,\n> like 2k and 4k. Long-term goal, but the code should be done\n> to allow this at a future time. The comments in storage/bufpage.h\n> suggest to me that the original designers of Postgres had this\n> in mind.\n> \n> \n> 2. Once the block size issue is taken care of, calculate the\n> maximum tuple size more accurately.\n> \n> \n> 3. When #1 & #2 are resolved, let the textual fields have a max\n> of (MAX_TUPLE_SIZE - sizeof(int)).\n> \n> \n> 4. Since only 13 bits are needed for storing the size of these\n> textual fields in a tuple, could PostgreSql use a 16-bit int to\n> store it? Currently, the size is padded to four bytes in the\n> tuple and this eats space if you have many textual fields.\n> Without further digging, I'm assuming that the size is double-word\n> aligned so that the actual text starts on a double-word boundary.\n> \n> Thanks for reading this far. :)\n> \n> Comments, suggestions most welcome...\n> \n> \n> Darren [email protected]\n> \n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Fri, 9 Jan 1998 22:08:29 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "[HACKERS] Max size of data types and tuples. (fwd)" } ]
[ { "msg_contents": "gcc -I../../include -I/usr/include/ncurses -I/usr/local/include/readline -O2 -m486 -Wall -Wmissing-prototypes -I.. -Wno-error -c scan.c -o scan.o\nlex.yy.c:800: warning: no previous prototype for `yylex'\nscan.l: In function `yylex':\nscan.l:202: `ABORT' undeclared (first use this function)\nscan.l:202: (Each undeclared identifier is reported only once\nscan.l:202: for each function it appears in.)\nscan.l: At top level:\nscan.l:379: warning: no previous prototype for `yyerror'\nscan.l: In function `yyerror':\nscan.l:380: `ABORT' undeclared (first use this function)\nlex.yy.c: At top level:\nlex.yy.c:2103: warning: `yy_flex_realloc' defined but not used\nmake[2]: *** [scan.o] Error 1\nmake[2]: Leaving directory `/home/local/postgresql-pre.6.3/src/backend/parser'\nmake[1]: *** [parser.dir] Error 2\nmake[1]: Leaving directory `/home/local/postgresql-pre.6.3/src/backend'\nmake: *** [all] Error 2\n\n-- \nEdmund Mergl mailto:[email protected]\nIm Haldenhau 9 http://www.bawue.de/~mergl\nD 70565 Stuttgart fon: +49 711 747503\nGermany gsm: +49 171 2645325\n", "msg_date": "Sat, 10 Jan 1998 09:33:08 +0100", "msg_from": "Edmund Mergl <[email protected]>", "msg_from_op": true, "msg_subject": "snapshot doesn't compile on linux" } ]
[ { "msg_contents": "Hi all,\n\nin the current snapshot there is no default for PGHOST \nin src/interfaces/libpq/fe-connect.c. \nIn 6.2 this was 'localhost', which is a reasonable default.\n\nIf there is no real reason for not having a default for\nthe host-option, please provide 'localhost' as default.\n\n\nthanks\nEdmund\n-- \nEdmund Mergl mailto:[email protected]\nIm Haldenhau 9 http://www.bawue.de/~mergl\nD 70565 Stuttgart fon: +49 711 747503\nGermany gsm: +49 171 2645325\n", "msg_date": "Sat, 10 Jan 1998 09:48:51 +0100", "msg_from": "Edmund Mergl <[email protected]>", "msg_from_op": true, "msg_subject": "DefaultHost" }, { "msg_contents": "On Sat, 10 Jan 1998, Edmund Mergl wrote:\n\n> Hi all,\n> \n> in the current snapshot there is no default for PGHOST \n> in src/interfaces/libpq/fe-connect.c. \n> In 6.2 this was 'localhost', which is a reasonable default.\n> \n> If there is no real reason for not having a default for\n> the host-option, please provide 'localhost' as default.\n\nI think this was changed to allow UNIX Sockets to be the default if no\nhostname is provided.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Sat, 10 Jan 1998 12:13:24 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] DefaultHost" }, { "msg_contents": "\n in the current snapshot there is no default for PGHOST \n in src/interfaces/libpq/fe-connect.c. \n In 6.2 this was 'localhost', which is a reasonable default.\n\n If there is no real reason for not having a default for\n the host-option, please provide 'localhost' as default.\n\nNo host means connect with unix domain socket.\nIf host (PGHOST) is set connection is made with tcp socket.\n\nThis is related to the problem I mentioned before with\nDBD::Pg and libpq.\n\n\tregards,\n-- \n---------------------------------------------\nG�ran Thyni, sysadm, JMS Bildbasen, Kiruna\n\n", "msg_date": "10 Jan 1998 12:59:59 -0000", "msg_from": "Goran Thyni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] DefaultHost" }, { "msg_contents": "Goran Thyni wrote:\n> \n> in the current snapshot there is no default for PGHOST\n> in src/interfaces/libpq/fe-connect.c.\n> In 6.2 this was 'localhost', which is a reasonable default.\n> \n> If there is no real reason for not having a default for\n> the host-option, please provide 'localhost' as default.\n> \n> No host means connect with unix domain socket.\n> If host (PGHOST) is set connection is made with tcp socket.\n> \n> This is related to the problem I mentioned before with\n> DBD::Pg and libpq.\n> \n> regards,\n> --\n> ---------------------------------------------\n> G�ran Thyni, sysadm, JMS Bildbasen, Kiruna\n\n\nok, but there is still no reasonable default. The only way\nI got things working is starting the postmaster with -i\nand setting the environment-variable PGHOST to 'localhost'.\n\nIt should be possible to connect to the backend on the \nlocalhost without having to define the host in the environment \nor as argument in PQconnectdb(). 
\n\nEdmund\n-- \nEdmund Mergl mailto:[email protected]\nIm Haldenhau 9 http://www.bawue.de/~mergl\nD 70565 Stuttgart fon: +49 711 747503\nGermany gsm: +49 171 2645325\n", "msg_date": "Sat, 10 Jan 1998 15:18:18 +0100", "msg_from": "Edmund Mergl <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] DefaultHost" }, { "msg_contents": "\n > No host means connect with unix domain socket.\n > If host (PGHOST) is set connection is made with tcp socket.\n > \n > This is related to the problem I mentioned before with\n > DBD::Pg and libpq.\n\n ok, but there is still no reasonable default. The only way\n I got things working is starting the postmaster with -i\n and setting the environment-variable PGHOST to 'localhost'.\n\n It should be possible to connect to the backend on the \n localhost without having to define the host in the environment \n or as argument in PQconnectdb(). \n\nThe problem is in PQconnectdb() in libpq.\nI will supply a patch as soon as I have time to make a clean one\nagainst current sources.\n\n regards,\n\n-- \n---------------------------------------------\nG�ran Thyni, sysadm, JMS Bildbasen, Kiruna\n\n", "msg_date": "10 Jan 1998 14:50:18 -0000", "msg_from": "Goran Thyni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] DefaultHost" }, { "msg_contents": "\nOK, here comes a patch, DBD::Pg (and possibly other 3rd party clients)\ncan connect to unix sockets.\nPatch is against current source tree.\n\nBackground:\nlibpq set some policy for client, which it should not\nIMHO. It prevent some 3rd party clients to connect with\nunix domain sockets etc.\n\n regards,\n-- \n---------------------------------------------\nG�ran Thyni, sysadm, JMS Bildbasen, Kiruna\n\n------------------------- snip ---------------------------------\n\ndiff -cr pg-sup/pgsql/src/interfaces/libpq/fe-connect.c pgsql-hack/src/interfaces/libpq/fe-connect.c\n*** pg-sup/pgsql/src/interfaces/libpq/fe-connect.c\tFri Dec 5 17:33:44 1997\n--- pgsql-hack/src/interfaces/libpq/fe-connect.c\tSat Jan 10 16:09:37 1998\n***************\n*** 150,156 ****\n \tPGconn\t *conn;\n \tPQconninfoOption *option;\n \tchar\t\terrorMessage[ERROR_MSG_LENGTH];\n! \n \t/* ----------\n \t * Allocate memory for the conn structure\n \t * ----------\n--- 150,156 ----\n \tPGconn\t *conn;\n \tPQconninfoOption *option;\n \tchar\t\terrorMessage[ERROR_MSG_LENGTH];\n! \tchar* tmp;\n \t/* ----------\n \t * Allocate memory for the conn structure\n \t * ----------\n***************\n*** 177,213 ****\n \t}\n \n \t/* ----------\n- \t * Check that we have all connection parameters\n- \t * ----------\n- \t */\n- \tfor (option = PQconninfoOptions; option->keyword != NULL; option++)\n- \t{\n- \t\tif (option->val != NULL)\n- \t\t\tcontinue;\t\t\t/* Value was in conninfo */\n- \n- \t\t/* ----------\n- \t\t * No value was found for this option. 
Return an error.\n- \t\t * ----------\n- \t\t */\n- \t\tconn->status = CONNECTION_BAD;\n- \t\tsprintf(conn->errorMessage,\n- \t\t\t\t\"ERROR: PQconnectdb(): Cannot determine a value for option '%s'.\\n\",\n- \t\t\t\toption->keyword);\n- \t\tstrcat(conn->errorMessage,\n- \t\t\t \"Option not specified in conninfo string\");\n- \t\tif (option->environ)\n- \t\t{\n- \t\t\tstrcat(conn->errorMessage,\n- \t\t\t\t \", environment variable \");\n- \t\t\tstrcat(conn->errorMessage, option->environ);\n- \t\t\tstrcat(conn->errorMessage, \"\\nnot set\");\n- \t\t}\n- \t\tstrcat(conn->errorMessage, \" and no compiled in default value.\\n\");\n- \t\tconninfo_free();\n- \t\treturn conn;\n- \t}\n- \n- \t/* ----------\n \t * Setup the conn structure\n \t * ----------\n \t */\n--- 177,182 ----\n***************\n*** 218,231 ****\n \tconn->port = NULL;\n \tconn->notifyList = DLNewList();\n \n! \tconn->pghost = strdup(conninfo_getval(\"host\"));\n! \tconn->pgport = strdup(conninfo_getval(\"port\"));\n! \tconn->pgtty = strdup(conninfo_getval(\"tty\"));\n! \tconn->pgoptions = strdup(conninfo_getval(\"options\"));\n! \tconn->pguser = strdup(conninfo_getval(\"user\"));\n! \tconn->pgpass = strdup(conninfo_getval(\"password\"));\n! \tconn->pgauth = strdup(conninfo_getval(\"authtype\"));\n! \tconn->dbName = strdup(conninfo_getval(\"dbname\"));\n \n \t/* ----------\n \t * Free the connection info - all is in conn now\n--- 187,208 ----\n \tconn->port = NULL;\n \tconn->notifyList = DLNewList();\n \n! \ttmp = conninfo_getval(\"host\");\n! \tconn->pghost = tmp ? strdup(tmp) : NULL;\n! \ttmp = conninfo_getval(\"port\");\n! \tconn->pgport = tmp ? strdup(tmp) : NULL;\n! \ttmp = conninfo_getval(\"tty\");\n! \tconn->pgtty = tmp ? strdup(tmp) : NULL;\n! \ttmp = conninfo_getval(\"options\");\n! \tconn->pgoptions = tmp ? strdup(tmp) : NULL;\n! \ttmp = conninfo_getval(\"user\");\n! \tconn->pguser = tmp ? strdup(tmp) : NULL;\n! \ttmp = conninfo_getval(\"password\");\n! \tconn->pgpass = tmp ? strdup(tmp) : NULL;\n! \ttmp = conninfo_getval(\"authtype\");\n! \tconn->pgauth = tmp ? strdup(tmp) : NULL;\n! \ttmp = conninfo_getval(\"dbname\");\n! \tconn->dbName = tmp ? strdup(tmp) : NULL;\n \n \t/* ----------\n \t * Free the connection info - all is in conn now\n\n", "msg_date": "10 Jan 1998 15:24:52 -0000", "msg_from": "Goran Thyni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] DefaultHost" }, { "msg_contents": "On Sat, 10 Jan 1998, Edmund Mergl wrote:\n\n> Hi all,\n> \n> in the current snapshot there is no default for PGHOST \n> in src/interfaces/libpq/fe-connect.c. \n> In 6.2 this was 'localhost', which is a reasonable default.\n\n\tDefault operation now is 'Unix Domain Sockets'...PGHOST isn't a viable\ndefault. The only time that PGHOST is a viable default is if -i is issued to\npostmaster when started up...\n\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 10 Jan 1998 14:34:15 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] DefaultHost" }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> On Sat, 10 Jan 1998, Edmund Mergl wrote:\n> \n> > Hi all,\n> >\n> > in the current snapshot there is no default for PGHOST\n> > in src/interfaces/libpq/fe-connect.c.\n> > In 6.2 this was 'localhost', which is a reasonable default.\n> \n> Default operation now is 'Unix Domain Sockets'...PGHOST isn't a viable\n> default. 
The only time that PGHOST is a viable default is if -i is issued to\n> postmaster when started up...\n> \n> Marc G. Fournier\n> Systems Administrator @ hub.org\n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n\n\n\nsorry, but I still didn't get that. What do I have to do in order\nto be able to connect to the backend running on the localhost\nwithout setting PGHOST and without specifying the host in \nPQconnectdb() ?\n\nEdmund\n-- \nEdmund Mergl mailto:[email protected]\nIm Haldenhau 9 http://www.bawue.de/~mergl\nD 70565 Stuttgart fon: +49 711 747503\nGermany gsm: +49 171 2645325\n", "msg_date": "Sat, 10 Jan 1998 21:01:27 +0100", "msg_from": "Edmund Mergl <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] DefaultHost" } ]
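For client authors following this thread, the upshot is easiest to see in a minimal libpq program. This is a sketch, not code from the distribution: it assumes a database named template1 exists and that PQconnectdb()/PQstatus()/PQerrorMessage()/PQfinish() are available in your libpq vintage (on very old libpq the error text may only be reachable through conn->errorMessage).

#include <stdio.h>
#include "libpq-fe.h"

int main(void)
{
    PGconn *conn;

    /* No host given and PGHOST unset: libpq uses the Unix-domain socket. */
    conn = PQconnectdb("dbname=template1");
    if (PQstatus(conn) == CONNECTION_OK)
        printf("connected over the Unix-domain socket\n");
    else
        fprintf(stderr, "unix-socket connect failed: %s", PQerrorMessage(conn));
    PQfinish(conn);

    /* Explicit host: TCP, which needs a postmaster started with -i. */
    conn = PQconnectdb("host=localhost dbname=template1");
    if (PQstatus(conn) == CONNECTION_OK)
        printf("connected over TCP to localhost\n");
    else
        fprintf(stderr, "TCP connect failed: %s", PQerrorMessage(conn));
    PQfinish(conn);

    return 0;
}

Link with -lpq and run it once against a postmaster started without -i and once with it to see the two behaviours described above.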
[ { "msg_contents": "I am trying to run the latest version (supping source daily) and I get the\nfollowing error when I run psql.\n\nConnection to database 'darcy' failed.\nconnectDB() failed: Is the postmaster running and accepting connections at 'UNIX Socket' on port '5432'?\n\nHave I missed some change that I have to make? This is on the same\nsystem as the server.\n\nTIA.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Sat, 10 Jan 1998 09:19:33 -0500 (EST)", "msg_from": "[email protected] (D'Arcy J.M. Cain)", "msg_from_op": true, "msg_subject": "Can't run current PostgreSQL" }, { "msg_contents": "On Sat, 10 Jan 1998, D'Arcy J.M. Cain wrote:\n\n> I am trying to run the latest version (supping source daily) and I get the\n> following error when I run psql.\n> \n> Connection to database 'darcy' failed.\n> connectDB() failed: Is the postmaster running and accepting connections at 'UNIX Socket' on port '5432'?\n> \n> Have I missed some change that I have to make? This is on the same\n> system as the server.\n\n\tNew default startup disabled TCP/IP connections, using Unix Domain Sockets\nexclusively. If PGHOST/PGPORT are set, then the \"frontends\" try to use TCP/IP\nvs Unix Domain, and will therefore fail. To get the old behaviour, startup\npostmaster with the -i option to turn TCP/IP connections back on again...\n\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 10 Jan 1998 14:45:36 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Can't run current PostgreSQL" }, { "msg_contents": "> \n> I am trying to run the latest version (supping source daily) and I get the\n> following error when I run psql.\n> \n> Connection to database 'darcy' failed.\n> connectDB() failed: Is the postmaster running and accepting connections at 'UNIX Socket' on port '5432'?\n> \n> Have I missed some change that I have to make? This is on the same\n> system as the server.\n\nI have seen cases since the password changes that the postmaster does\nnot start properly. If I start it from a script, it failes, but if I\nstart it from the command line or gdb, it works.\n\nHaven't figured out why yet.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Sat, 10 Jan 1998 17:45:10 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Can't run current PostgreSQL" } ]
[ { "msg_contents": "Bruce Momjian wrote:\n> \n> OK, a few questions:\n> \n> Should we use sortmerge, so we can use our psort as temp tables,\n> or do we use hashunique?\n> \n> How do we pass the query to the optimizer? How do we represent\n> the range table for each, and the links between them in correlated\n> subqueries?\n\nMy suggestion is just use varlevel in Var and don't put upper query'\nrelations into subquery range table.\n\nVadim\n", "msg_date": "Sun, 11 Jan 1998 00:41:19 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] subselects" }, { "msg_contents": "Vadim B. Mikheev wrote:\n> \n> Bruce Momjian wrote:\n> >\n> > OK, a few questions:\n> >\n> > Should we use sortmerge, so we can use our psort as temp tables,\n> > or do we use hashunique?\n> >\n> > How do we pass the query to the optimizer? How do we represent\n> > the range table for each, and the links between them in correlated\n> > subqueries?\n> \n> My suggestion is just use varlevel in Var and don't put upper query'\n> relations into subquery range table.\n\nHmm... Sorry, it seems that I did reply to very old message - forget it.\n\nVadim\n", "msg_date": "Sun, 11 Jan 1998 00:58:52 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] subselects" }, { "msg_contents": "> \n> Bruce Momjian wrote:\n> > \n> > OK, a few questions:\n> > \n> > Should we use sortmerge, so we can use our psort as temp tables,\n> > or do we use hashunique?\n> > \n> > How do we pass the query to the optimizer? How do we represent\n> > the range table for each, and the links between them in correlated\n> > subqueries?\n> \n> My suggestion is just use varlevel in Var and don't put upper query'\n> relations into subquery range table.\n> \n> Vadim\n> \n\nOK.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Sun, 11 Jan 1998 00:23:29 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] subselects" } ]
[ { "msg_contents": "Thus spake The Hermit Hacker\n> > connectDB() failed: Is the postmaster running and accepting connections at 'UNIX Socket' on port '5432'?\n> \tNew default startup disabled TCP/IP connections, using Unix Domain Sockets\n> exclusively. If PGHOST/PGPORT are set, then the \"frontends\" try to use TCP/IP\n> vs Unix Domain, and will therefore fail. To get the old behaviour, startup\n> postmaster with the -i option to turn TCP/IP connections back on again...\n\nHmmm. I saw that discussion but I didn't think it applied to me because\nI don't run it over the network. So if I unset PGHOST/PGPORT then it\nshould work?\n\nI tried this and it wasn't set. I tried setting it and it still didn't\nwork. So, so far I have tried;\n\n With and without PGHOST/PGPORT defined\n With and without -i flag\n\nSomething else?\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Sat, 10 Jan 1998 17:23:50 -0500 (EST)", "msg_from": "[email protected] (D'Arcy J.M. Cain)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Can't run current PostgreSQL" } ]
[ { "msg_contents": "Here are some of the fixes in 6.3 if people want to test them.\n\nThe only problem I know of is that:\n\n\tupdate test set x = max(test2.y)\n\nonly updates one row each time it is run. Any ideas on a fix?\n\n---------------------------------------------------------------------------\n\nRELIABILITY\n-----------\n* Overhaul mdmgr/smgr to fix double unlinking and double opens, cleanup\n* Overhaul bufmgr/lockmgr/transaction manager\n* -Fix CLUSTER\n* Remove EXTEND?\n* -Aggregates on VIEW always returns zero (maybe because there is no oid for views?)\n* CREATE VIEW requires super-user priviledge\n* can lo_export()/lo_import() read/write anywhere, causing a security problem?\n* Tables that start with xinv confused to be large objects\n* Two and three dimmensional arrays display improperly, missing {}\n* -Add GROUP BY to INSERT INTO table SELECT * FROM table2\n* lo_unlink() crashes server\n* Prevent auto-table reference, like SELECT table.col WHERE col = 3 (?)\n* -Remove un-needed malloc() calls and replace with palloc().\n* SELECT * FROM table WHERE int4_column = '1' fails\n* SELECT a[1] FROM test fails, it needs test.a[1]\n* -SELECT COUNT(*) FROM TAB1, TAB2 fails\n* -SELECT SUM(2+2) FROM table dumps core\n* UPDATE table SET table.value = 3 fails\n* -UPDATE key_table SET keyval=count(reftab.num) fails\n* -INSERT INTO ... SELECT DISTINCT ... does not accept DISTINCT\n* -INSERT INTO table SELECT id, count(*) FROM table2 GROUP BY id generates error\n* Make pg_dump preserve inheritance column order, do non-inherits first\n* User who can create databases can modify pg_database table\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Sat, 10 Jan 1998 22:50:32 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "6.3 patches" }, { "msg_contents": "Some more items from my list which have been added since v6.2.1:\n\nSQL92 binary and hex input and string type coersion\nallow casting of non-constants using both SQL92 and Postgres syntax\nparser support for PRIMARY, FOREIGN KEY\nbackend support for PRIMARY KEY, UNIQUE (create index)\nmore parser support for DEFAULT, CHECK clauses\nadd 'doy' as argument to datetime_part()\nfunctions datetime_time(), time(datetime)\nfunctions int4_datetime(), int4_timespan()\nadd unixdate package to contrib area\nconstant CURRENT_USER as GetPgUserName()\nsupport SQL3 syntax TRUE, FALSE\nimplement IS TRUE, IS FALSE, IS NOT TRUE, IS NOT FALSE\nprovide timezone support in libpq and backend\nsupport SQL92 \"delimited identifiers\" (libpq, scan.l, psql)\nPGTZ environment variable for frontend\nfix session initialization from front-end environment variables\nuse PGDATESTYLE to initialize backend at startup\nadd GERMAN date style\nallow alternate locations for databases\nadd hash functions for datetime, timespan\nfix hash function declarations for float8 and int4\nconvert text to/from int4, int2\n\nAlso, we've added UNIONs (didn't see it on Bruce's list).\n\n - Tom\n\n", "msg_date": "Sun, 11 Jan 1998 06:10:27 +0000", "msg_from": "\"Thomas G. 
Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.3 patches" }, { "msg_contents": "> \n> Some more items from my list which have been added since v6.2.1:\n> \n> SQL92 binary and hex input and string type coersion\n> allow casting of non-constants using both SQL92 and Postgres syntax\n> parser support for PRIMARY, FOREIGN KEY\n> backend support for PRIMARY KEY, UNIQUE (create index)\n> more parser support for DEFAULT, CHECK clauses\n> add 'doy' as argument to datetime_part()\n> functions datetime_time(), time(datetime)\n> functions int4_datetime(), int4_timespan()\n> add unixdate package to contrib area\n> constant CURRENT_USER as GetPgUserName()\n> support SQL3 syntax TRUE, FALSE\n> implement IS TRUE, IS FALSE, IS NOT TRUE, IS NOT FALSE\n> provide timezone support in libpq and backend\n> support SQL92 \"delimited identifiers\" (libpq, scan.l, psql)\n> PGTZ environment variable for frontend\n> fix session initialization from front-end environment variables\n> use PGDATESTYLE to initialize backend at startup\n> add GERMAN date style\n> allow alternate locations for databases\n> add hash functions for datetime, timespan\n> fix hash function declarations for float8 and int4\n> convert text to/from int4, int2\n> \n> Also, we've added UNIONs (didn't see it on Bruce's list).\n\nI was just highlighting fixes to things currently on the TODO list under\nreliability.\n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Sun, 11 Jan 1998 07:46:44 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] 6.3 patches" } ]
[ { "msg_contents": "> \n> Bruce Momjian <[email protected]> writes:\n> \n> > I have added new commands to psql to give a list of types, operators,\n> > and aggregates.\n> \n> [...]\n> \n> > Any suggestions for improvements? \\do is slow, but the use of a\n> > function to do an outer join looking for descriptions slowed this down.\n> \n> How about an \\d command that will show all default values?\n> \n> [...]\n\nThis will be in 6.3.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Sun, 11 Jan 1998 15:08:03 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [DOCS] new psql commands" } ]
[ { "msg_contents": "OK, we never installed this for 6.2 because we were already in Beta. \nCan we do this for 6.3? Vadim suggested we make this part of libpq, so\nall applications could make use of it.\n\nI have one of the original patches, but not the others. Martin, what do you\nthink? Any other comments on this?\n\n\n> \n> \n> Since one of my pet hates is people who add features without\n> adding them to the documentation, I thought I'd better supply\n> a patch to the psql man page which documents the .psqlrc file :-)\n> (I forgot yesterday....)\n> \n> \n> Andrew\n> \n> \n> \n> *** psql.1.old\tSat Jun 21 14:54:46 1997\n> --- psql.1\tSat Jun 21 15:02:09 1997\n> ***************\n> *** 97,102 ****\n> --- 97,113 ----\n> environment variable or, if that's not set, to the Unix account name of the\n> current user.\n> .PP\n> + When\n> + .IR \"psql\"\n> + starts, it reads SQL commands from\n> + .IR \"/etc/psqlrc\"\n> + and then from\n> + .IR \"$(HOME)/.psqlrc\"\n> + This allows SQL commands like\n> + .IR SET\n> + which can be used to set the date style to be run at the start of\n> + evry session.\n> + .PP\n> .IR \"psql\"\n> understands the following command-line options:\n> .TP\n> \n> \n> \n> ----------------------------------------------------------------------------\n> Dr. Andrew C.R. Martin University College London\n> EMAIL: (Work) [email protected] (Home) [email protected]\n> URL: http://www.biochem.ucl.ac.uk/~martin\n> Tel: (Work) +44(0)171 419 3890 (Home) +44(0)1372 275775\n> \n> \n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Sun, 11 Jan 1998 15:18:00 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] PSQL man page patch" } ]
[ { "msg_contents": "> \n> 1. Permissions on tables was not preserved when moving from\n> 6.1 to 6.2 (using 6.1 postmaster and 6.2 pg_dumpall).\n\nFixed in 6.3.\n\n> \n> 2. \\z in psql does not show permissions for sequences!\n\nAre you sure we want to show them? Let me know.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Sun, 11 Jan 1998 15:37:21 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Small? things to fix..." } ]
[ { "msg_contents": "Applied with #if defined(sco) for 6.3. Beta testing Feb 1.\n\n> \n> You wrote:\n> \n> > > 2.) For the float8, it's required to edit the file\n> > >\n> > > ./src/include/utils/memutils.h\n> > >\n> > > #define DOUBLEALIGN(LEN) INTALIGN(LEN)\n> > > #define MAXALIGN(LEN) INTALIGN(LEN)\n> > >\n> > > Otherwise the backend will crash at the insertion of any float8.\n> > \n> > I am unsure why the existing code did not work.\n> \n> Sorry, I am sure. Let me try to convince you.\n> \n> I must quote the HTML version of the manual entitled as\n> \"Programming Tools Guide Appendix A, ANSI implementation-defined\n> behavior\".\n> \n> ****<Beginning of partial partial citation>\n> \n> This section describes the implementation-defined characteristics of\n> structures, unions, enumerations, and bit-fields. It corresponds to \n> section ``F.3.9 Structures, Unions, Enumerations, and Bit-Fields'' in \n> the ANSI document. \n> ........\n> 80x86 does not impose a restriction on the alignment of objects;\n> any object can start at any address. However, for certain objects, \n> having a particular starting address can speed up processor access. \n> \n> The C compiler aligns the whole structure on a 4-byte boundary by \n> default (see ``Pragmas''). All [4|8|10]-byte objects are aligned on a \n> 4-byte boundary, 2-byte objects are aligned on a 2-byte boundary, while \n> 1-byte objects are not aligned. \n> \n> ****<End of citation>\n> \n> Now, it's clear: the *double* struct members will be aligned to a \n> *4-byte* address boundary (on SCO), but *the original code* computes \n> \"DOUBLEALIGN\" and \"MAXALIGN\" to a \n> *8-byte boundary*, because it defines the boundary of alignment as \n> *sizeof(double)* which is equal to 8 (on SCO). \n> This may lead to the \"segmentation violation error\", \n> which is only the consequence of a correct malloc (palloc) executed \n> after the corruption of administrative areas of malloc caused by \n> erroneous access of double struct members. (I have traced it.)\n> \n> Let me make some possibly unneccesary comments:\n> This type of assumptions is very \"popular\" in sytems originally\n> developed on other (BSD-derived or RISC-based) sytems. \n> The most popular form is the assumption about the behaviour of *malloc*: \n> it will align an malloc(sizeof(something)) to a *8-byte boundary*. \n> But it isn't the case. \n> Fortunately the postgreSQL not uses this assumption which holds \n> for your reference platform too.\n> \n> \n> Regards,\n> Tamas\n> _________________________________________\n> Tamas Laufer\n> Voice/Fax: +36-72-447-570 \n> Email: [email protected] \n> H-7632 Pecs, Fulep L. u 26 III/11 Hungary\n> \n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Sun, 11 Jan 1998 15:45:23 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [BUGS] Some BUG-FIXES to postgreSQL on SCO 3.2v5.0.2" } ]
[ { "msg_contents": "Again, installed in 6.3.\n> \n> This is a SCO problem report.\n> \n> It seems the alignment macros do not work on SCO because a malloc(8) is\n> aligned on a 4-byte boundary on SCO, and not an 8-byte boundary as\n> assumed by our alignment code.\n> \n> I always thought that memory would be malloc'ed to align with the size\n> of the request, but SCO doesn't do that.\n> \n> Would people please look at memutils.h and tell me what to do? We have\n> some fairly complex alignment stuff ifdef'ed out at the top of the file,\n> and some simpler stuff used in its place.\n> \n> > \n> > You wrote:\n> > \n> > > > 2.) For the float8, it's required to edit the file\n> > > >\n> > > > ./src/include/utils/memutils.h\n> > > >\n> > > > #define DOUBLEALIGN(LEN) INTALIGN(LEN)\n> > > > #define MAXALIGN(LEN) INTALIGN(LEN)\n> > > >\n> > > > Otherwise the backend will crash at the insertion of any float8.\n> > > \n> > > I am unsure why the existing code did not work.\n> > \n> > Sorry, I am sure. Let me try to convince you.\n> > \n> > I must quote the HTML version of the manual entitled as\n> > \"Programming Tools Guide Appendix A, ANSI implementation-defined\n> > behavior\".\n> > \n> > ****<Beginning of partial partial citation>\n> > \n> > This section describes the implementation-defined characteristics of\n> > structures, unions, enumerations, and bit-fields. It corresponds to \n> > section ``F.3.9 Structures, Unions, Enumerations, and Bit-Fields'' in \n> > the ANSI document. \n> > ........\n> > 80x86 does not impose a restriction on the alignment of objects;\n> > any object can start at any address. However, for certain objects, \n> > having a particular starting address can speed up processor access. \n> > \n> > The C compiler aligns the whole structure on a 4-byte boundary by \n> > default (see ``Pragmas''). All [4|8|10]-byte objects are aligned on a \n> > 4-byte boundary, 2-byte objects are aligned on a 2-byte boundary, while \n> > 1-byte objects are not aligned. \n> > \n> > ****<End of citation>\n> > \n> > Now, it's clear: the *double* struct members will be aligned to a \n> > *4-byte* address boundary (on SCO), but *the original code* computes \n> > \"DOUBLEALIGN\" and \"MAXALIGN\" to a \n> > *8-byte boundary*, because it defines the boundary of alignment as \n> > *sizeof(double)* which is equal to 8 (on SCO). \n> > This may lead to the \"segmentation violation error\", \n> > which is only the consequence of a correct malloc (palloc) executed \n> > after the corruption of administrative areas of malloc caused by \n> > erroneous access of double struct members. (I have traced it.)\n> > \n> > Let me make some possibly unneccesary comments:\n> > This type of assumptions is very \"popular\" in sytems originally\n> > developed on other (BSD-derived or RISC-based) sytems. \n> > The most popular form is the assumption about the behaviour of *malloc*: \n> > it will align an malloc(sizeof(something)) to a *8-byte boundary*. \n> > But it isn't the case. \n> > Fortunately the postgreSQL not uses this assumption which holds \n> > for your reference platform too.\n> > \n> > \n> > Regards,\n> > Tamas\n> > _________________________________________\n> > Tamas Laufer\n> > Voice/Fax: +36-72-447-570 \n> > Email: [email protected] \n> > H-7632 Pecs, Fulep L. 
u 26 III/11 Hungary\n> > \n> \n> \n> -- \n> Bruce Momjian\n> [email protected]\n> \n> \n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Sun, 11 Jan 1998 15:45:54 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [BUGS] Some BUG-FIXES to postgreSQL on SCO\n 3.2v5.0.2" } ]
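A small probe program makes the SCO report easy to verify on any platform without reading compiler manuals. It is not code from memutils.h; the struct and macro here merely mimic the rounding logic. The point is that the safe DOUBLEALIGN boundary is whatever offset the compiler itself gives a double after a char — which the SCO documentation quoted above says is 4, even though sizeof(double) is 8.

#include <stdio.h>
#include <stddef.h>

/* How far the compiler pushes a double past a leading char member. */
struct double_probe
{
    char   pad;
    double d;
};

/* Same round-up idea as the alignment macros under discussion. */
#define TYPEALIGN(ALIGNVAL, LEN) \
    (((long)(LEN) + ((ALIGNVAL) - 1)) & ~((long)((ALIGNVAL) - 1)))

int main(void)
{
    long dalign = (long) offsetof(struct double_probe, d);

    printf("sizeof(double)        = %lu\n", (unsigned long) sizeof(double));
    printf("compiler double align = %ld\n", dalign);
    printf("align 10 by compiler  = %ld\n", TYPEALIGN(dalign, 10));
    printf("align 10 by sizeof    = %ld\n", TYPEALIGN((long) sizeof(double), 10));
    return 0;
}

On a platform like the one in the report the last two lines differ (12 vs 16), which is the mismatch the report blames for the corrupted malloc bookkeeping.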
[ { "msg_contents": "Just checking to make sure this properly documented for 6.3 beta. Is\nit?\n\n> \n> > > CREATE DATABASE dbname WITH LOCATION = 'dbpath';\n> > How much work would it be to also be able to specify an alternate\n> > path for a table within a database? I have some multi-Gb tables\n> > and am scrambling for room. If I could have the tables on separate\n> > disks, that'd be wonderful.\n> \n> Well, it is (almost) trivial to get the full database in a different location;\n> in fact I put into service an unused column in pg_database which had clearly\n> been defined for this purpose. Probably not so trivial for individual tables,\n> indices, etc. If it is not on the ToDo list, perhaps Bruce could add it? I'm\n> probably not going to pursue it at the moment, myself, but would be happy to\n> work with someone if they want to do it :) As an aside, there is _no_\n> performance penalty for alternate database locations, but there might be for\n> distributed tables/indices since the location would need to be looked up at\n> least occasionally.\n> \n> \n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Sun, 11 Jan 1998 15:48:39 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Alternate locations for databases" }, { "msg_contents": "> Just checking to make sure this properly documented for 6.3 beta. Is\n> it?\n\nman initlocationman create_database\n\nOf course, it is not _properly_ documented. I haven't yet absorbed the man page\ninfo into the new html/postscript docs, and may not be able to in time for v6.3.\nWill see if anyone volunteers to help.\n\n - Tom\n\n> > > > CREATE DATABASE dbname WITH LOCATION = 'dbpath';\n> > > How much work would it be to also be able to specify an alternate\n> > > path for a table within a database? I have some multi-Gb tables\n> > > and am scrambling for room. If I could have the tables on separate\n> > > disks, that'd be wonderful.\n> >\n> > Well, it is (almost) trivial to get the full database in a different location;\n> > in fact I put into service an unused column in pg_database which had clearly\n> > been defined for this purpose. Probably not so trivial for individual tables,\n> > indices, etc. If it is not on the ToDo list, perhaps Bruce could add it? I'm\n> > probably not going to pursue it at the moment, myself, but would be happy to\n> > work with someone if they want to do it :) As an aside, there is _no_\n> > performance penalty for alternate database locations, but there might be for\n> > distributed tables/indices since the location would need to be looked up at\n> > least occasionally.\n\n", "msg_date": "Mon, 12 Jan 1998 04:57:55 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Alternate locations for databases" }, { "msg_contents": "On Mon, 12 Jan 1998, Thomas G. Lockhart wrote:\n\n> > Just checking to make sure this properly documented for 6.3 beta. Is\n> > it?\n> \n> man initlocationman create_database\n> \n> Of course, it is not _properly_ documented. 
I haven't yet absorbed the man page\n> info into the new html/postscript docs, and may not be able to in time for v6.3.\n> Will see if anyone volunteers to help.\n\nI am familiar with a utility man2html that creates html from man pages.\nI would be happy to do this as you folks wish (although I'm not\nsure it's a great help).\n\nMarc Zuckman\[email protected]\n\n_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\n_ Visit The Home and Condo MarketPlace\t\t _\n_ http://www.ClassyAd.com\t\t\t _\n_\t\t\t\t\t\t\t _\n_ FREE basic property listings/advertisements and searches. _\n_\t\t\t\t\t\t\t _\n_ Try our premium, yet inexpensive services for a real\t _\n_ selling or buying edge!\t\t\t\t _\n_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\n\n", "msg_date": "Mon, 12 Jan 1998 21:59:31 -0500 (EST)", "msg_from": "Marc Howard Zuckman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Alternate locations for databases" } ]
[ { "msg_contents": "> \n> > * allow varchar() to only store used bytes, not maximum\n> \n> Hmm. Some file managers might make use of the fact that a column is of\n> fixed storage size, which I would think varchar() should be. Also, it\n> would possibly allow rows to be updated in place with predictable\n> performance. The text type (or varchar w/o maximum specified) are\n> available for truely variable length fields. I'll bet that others DBs\n> make these assumptions too...\n\nChar() is now fixed, varchar() is not, hence the varchar name. text is\nunchanged.\n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Sun, 11 Jan 1998 15:49:57 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] TODO addition" } ]
[ { "msg_contents": "I believe we have addressed many of these issues in 6.3. 6.3 beta\ntesting begins Feb 1. At that time, can you test things for hpux 9 and\nsend us a new patch.\n\nAlso, try to keep in mind we are supporting hpux 10 too, so if you can\nmake the patch changes only affect hpux 9, that would help, so hpux 10\npeople don't come along and try removing your stuff. I think this is\nhow hpux 9 got broken in the first place.\n\n> \n> At 11:15 03.11.97 -0500, you wrote:\n> >> Ooops, my fault. Forgot that some mail programs folds lines longer than 80\n> >> characters. I'm sending the diff again uuencoded. If you have a mail reader\n> >> which folds lines please make sure the window is at least 80 characters wide.\n> >> \n> >> Knut Tvedten\n> >> \n> >> begin 664\n> >> hpux-9.07.diff\n> >> M*BHJ('-R8R]B86-K96YD+W!O<G0O:'!U>\"]P;W)T+F,)36]N($]C=\"`R,\"`Q\n> >> \n> >> M-#HR,CHQ,R`Q.3DW\"BTM+2!S<F,O8F%C:V5N9\"]P;W)T+VAP=7@O<&]R=\"YC\n> >> M\"4UO;B!/8W0@\n> >> ,C`@,34Z,#`Z-#@@,3DY-PHJ*BHJ*BHJ*BHJ*BHJ*BH**BHJ\n> >> M(#$W+#(S(\"HJ*BH*(\"`@*B\\*(\"\n> >> `C:6YC;'5D92`\\=6YI<W1D+F@^\"0D)\"2\\J\n> >\n> >Sorry, still no good, as you can see.\n> >\n> >-- \n> >Bruce Momjian\n> >[email protected]\n> >\n> \n> It should be ok now!\n> \n> Knut Tvedten\n> \n> begin 664 hpux-9.07.diff\n> M*BHJ('-R8R]B86-K96YD+W!O<G0O:'!U>\"]P;W)T+F,)36]N($]C=\"`R,\"`Q\n> M-#HR,CHQ,R`Q.3DW\"BTM+2!S<F,O8F%C:V5N9\"]P;W)T+VAP=7@O<&]R=\"YC\n> M\"4UO;B!/8W0@,C`@,34Z,#`Z-#@@,3DY-PHJ*BHJ*BHJ*BHJ*BHJ*BH**BHJ\n> M(#$W+#(S(\"HJ*BH*(\"`@*B\\*(\"`C:6YC;'5D92`\\=6YI<W1D+F@^\"0D)\"2\\J\n> M(&9O<B!R86YD*\"DO<W)A;F0H*2!P<F]T;W1Y<&5S(\"HO\"B`@(VEN8VQU9&4@\n> M/&UA=&@N:#X)\"0D)+RH@9F]R('!O=R@I('!R;W1O='EP92`J+PHA(\"-I;F-L\n> M=61E(#QS>7,O<WES8V%L;\"YH/@D)+RH@9F]R('-Y<V-A;&P@(V1E9FEN97,@\n> M*B\\*(\"`*(\"`C:6YC;'5D92`B8RYH(@H@(`HM+2T@,3<L,C<@+2TM+0H@(\"`J\n> M+PH@(\"-I;F-L=61E(#QU;FES=&0N:#X)\"0D)+RH@9F]R(')A;F0H*2]S<F%N\n> M9\"@I('!R;W1O='EP97,@*B\\*(\"`C:6YC;'5D92`\\;6%T:\"YH/@D)\"0DO*B!F\n> M;W(@<&]W*\"D@<')O=&]T>7!E(\"HO\"B$@(VEN8VQU9&4@/'-Y<R]S>7-C86QL\n> M+F@^\"0D)+RH@9F]R('-Y<V-A;&P@(V1E9FEN97,@*B\\*(2`*(2`C:69N9&5F\n> M($A!5D5?1T544E5304=%\"B$@(VEN8VQU9&4@/'-Y<R]R97-O=7)C92YH/@D)\n> M\"2\\J(&9O<B!S=')U8W0@<G5S86=E(\"HO\"B$@(V5N9&EF\"B`@\"B`@(VEN8VQU\n> M9&4@(F,N:\"(*(\"`**BHJ*BHJ*BHJ*BHJ*BHJ\"BHJ*B`S,2PT.2`J*BHJ\"BTM\n> M+2`S-2PW-B`M+2TM\"B`@\"2`J+PH@('T*(\"`**R`C:69N9&5F($A!5D5?4D%.\n> M1$]-\"B`@;&]N9PH@(')A;F1O;2@I\"B`@>PH@(`ER971U<FX@*&QR86YD-#@H\n> M*2D[\"B`@?0HK(\"-E;F1I9@H@(`HK(\"-I9FYD968@2$%615]34D%.1$]-\"B`@\n> M=F]I9`H@('-R86YD;VTH=6YS:6=N960@<V5E9\"D*(\"![\"B`@\"7-R86YD-#@H\n> M*&QO;F<@:6YT*2!S965D*3L*(\"!]\"BL@(V5N9&EF\"B`@\"BL@(VEF;F1E9B!(\n> M059%7T=%5%)54T%'10HK(&EN=`H@(&=E=')U<V%G92AI;G0@=VAO+\"!S=')U\n> M8W0@<G5S86=E(\"H@<G4I\"B`@>PH@(`ER971U<FX@*'-Y<V-A;&PH4UE37T=%\n> M5%)54T%'12P@=VAO+\"!R=2DI.PH@('T**R`C96YD:68**R`**R`C:69N9&5F\n> M($A!5D5?4DE.5`HK(&1O=6)L90HK(')I;G0H9&]U8FQE('@I\"BL@>PHK(\"`@\n> M(\"`@(\"`@9&]U8FQE(&EP.PHK(\"`@(\"`@(\"`@9&]U8FQE(&9P.PHK(`HK(\"`@\n> M(\"`@(\"`@9G`@/2!F86)S*&UO9&8H>\"P@)FEP*2D[\"BL@(\"`@(\"`@(\"`**R`@\n> M(\"`@(\"`@(&EF(\"AI<\"`^/2`P*0HK(\"`@(\"`@(\"`@(\"`@(\"`@(\"!R971U<FXH\n> M*&9P(#X](#`N-2D@/R!I<\"`K(#[email protected]!I<\"D[\"BL@(\"`@(\"`@(\"!E;'-E\"BL@\n> M(\"`@(\"`@(\"`@(\"`@(\"`@(')E='5R;B@H9G`@/CT@,\"XU*2`_(&EP(\"T@,2`Z\n> M(&EP*3L**R!]\"BL@(V5N9&EF\"BHJ*B!S<F,O8F%C:V5N9\"]P;W)T+VAP=7@O\n> M<G5S86=E<W1U8BYH\"4UO;B!/8W0@,C`@,38Z,#`Z-#`@,3DY-PHM+2T@<W)C\n> M+V)A8VME;F0O<&]R=\"]H<'5X+W)U<V%G97-T=6(N:`E-;VX@3V-T(#(P(#$V\n> 
M.C`P.C`U(#$Y.3<**BHJ*BHJ*BHJ*BHJ*BHJ\"BHJ*B`Q-BPS,2`J*BHJ\"B`@\n> M(VEN8VQU9&4@/'-Y<R]T:6UE+F@^\"0D)+RH@9F]R('-T<G5C=\"!T:6UE=F%L\n> M(\"HO\"B`@(VEN8VQU9&4@/'-Y<R]T:6UE<RYH/@D)\"2\\J(&9O<B!S=')U8W0@\n> M=&US(\"HO\"B`@(VEN8VQU9&4@/&QI;6ET<RYH/@D)\"0DO*B!F;W(@0TQ+7U1#\n> M2R`J+PH@(`HA(\"-D969I;F4@4E5304=%7U-%3$8)\"3`*(2`C9&5F:6YE(%)5\n> M4T%'15]#2$E,1%)%3B`M,0HA(`HA('-T<G5C=\"!R=7-A9V4*(2![\"B$@\"7-T\n> M<G5C=\"!T:6UE=F%L(')U7W5T:6UE.PDO*B!U<V5R('1I;64@=7-E9\"`J+PHA\n> M(`ES=')U8W0@=&EM979A;\"!R=5]S=&EM93L)+RH@<WES=&5M('1I;64@=7-E\n> M9\"`J+PHA('T[\"B$@\"B$@97AT97)N(&EN=`EG971R=7-A9V4H:6YT('=H;RP@\n> M<W1R=6-T(')U<V%G92`J(')U<V%G92D[\"B`@\"B`@(V5N9&EF\"0D)\"0D)\"2\\J\n> M(%)54T%'15-454)?2\"`J+PHM+2T@,38L,C,@+2TM+0H@(\"-I;F-L=61E(#QS\n> M>7,O=&EM92YH/@D)\"2\\J(&9O<B!S=')U8W0@=&EM979A;\"`J+PH@(\"-I;F-L\n> M=61E(#QS>7,O=&EM97,N:#X)\"0DO*B!F;W(@<W1R=6-T('1M<R`J+PH@(\"-I\n> M;F-L=61E(#QL:6UI=',N:#X)\"0D)+RH@9F]R($-,2U]40TL@*B\\**R`C:6YC\n> M;'5D92`\\<WES+W)E<V]U<F-E+F@^\"B`@\"B$@97AT97)N(&EN=\"!G971R=7-A\n> M9V4H:6YT('=H;RP@<W1R=6-T(')U<V%G92`J(')U<V%G92D[\"B`@\"B`@(V5N\n> M9&EF\"0D)\"0D)\"2\\J(%)54T%'15-454)?2\"`J+PHJ*BH@<W)C+V)A8VME;F0O\n> M=&-O<\"]P;W-T9W)E<RYC\"4UO;B!/8W0@,C`@,30Z,C$Z,S4@,3DY-PHM+2T@\n> M<W)C+V)A8VME;F0O=&-O<\"]P;W-T9W)E<RYC\"4UO;B!/8W0@,C`@,30Z,CDZ\n> M,3<@,3DY-PHJ*BHJ*BHJ*BHJ*BHJ*BH**BHJ(#$S-S(L,3,W-R`J*BHJ\"BTM\n> M+2`Q,S<R+#$S.#`@+2TM+0H@(`EI9B`H<VEG<V5T:FUP*%=A<FY?<F5S=&%R\n> M=\"P@,2D@(3T@,\"D*(\"`)>PH@(`D)26Y787)N(#T@,3L**R`C:68@(61E9FEN\n> M960H55-%7U!/4TE87U-)1TY!3%,I\"BL@\"0EP<7-I9VYA;\"A324=(55`L(&AA\n> M;F1L95]W87)N*3L**R`C96YD:68*(\"`*(\"`)\"71I;64H)G1I;2D[\"B`@\"BHJ\n> M*B!S<F,O:6YC;'5D92]P;W)T+VAP=7@N:`E-;VX@3V-T(#(P(#$T.C(S.C`W\n> M(#$Y.3<*+2TM('-R8R]I;F-L=61E+W!O<G0O:'!U>\"YH\"4UO;B!/8W0@,C`@\n> M,34Z-#4Z,C4@,3DY-PHJ*BHJ*BHJ*BHJ*BHJ*BH**BHJ(#$L-R`J*BHJ\"B`@\n> M(V1E9FEN92!*35!?0E5&\"B`@(V1E9FEN92!54T5?4$]325A?5$E-10HM(\"-D\n> M969I;F4@55-%7U!/4TE87U-)1TY!3%,*(\"`C9&5F:6YE($A!4U]415-47T%.\n> M1%]3150*(\"!T>7!E9&5F('-T<G5C=`H@('L*(\"`):6YT\"0D)<V5M6S1=.PHM\n> M+2T@,2PR,\"`M+2TM\"B`@(V1E9FEN92!*35!?0E5&\"B`@(V1E9FEN92!54T5?\n> M4$]325A?5$E-10H@(\"-D969I;F4@2$%37U1%4U1?04Y$7U-%5`HK(\"\\J(\"-D\n> M969I;F4@55-%7U!/4TE87U-)1TY!3%,@;6]V960@=&\\@36%K969I;&4N:'!U\n> M>\"`J+PHK(`HK(\"-I9FYD968@2$%615]204Y$3TT**R!L;VYG(')A;F1O;2AV\n> M;VED*3L**R`C96YD:68**R`**R`C:69N9&5F($A!5D5?4U)!3D1/30HK('9O\n> M:60@<W)A;F1O;2AU;G-I9VYE9\"!S965D*3L**R`C96YD:68**R`**R`C:69N\n> M9&5F($A!5D5?4DE.5`HK(&1O=6)L92!R:6YT*&1O=6)L92!X*3L**R`C96YD\n> M:68**R`*(\"!T>7!E9&5F('-T<G5C=`H@('L*(\"`):6YT\"0D)<V5M6S1=.PHJ\n> M*BH@<W)C+VUA:V5F:6QE<R]-86ME9FEL92YH<'5X\"4UO;B!/8W0@,C`@,30Z\n> M,C,Z,C$@,3DY-PHM+2T@<W)C+VUA:V5F:6QE<R]-86ME9FEL92YH<'5X\"4UO\n> M;B!/8W0@,C`@,38Z-#8Z,#`@,3DY-PHJ*BHJ*BHJ*BHJ*BHJ*BH**BHJ(#(L\n> M-R`J*BHJ\"BTM+2`R+#$S(\"TM+2T*(\"`C($A0+558(#$P(&AA<R!A('-E;&5C\n> M=\"@I(&EN(&QI8F-U<G-E<RP@<V\\@=V4@;F5E9\"!T;R!G970@=&AE(&QI8F,@\n> M=F5R<VEO;B!F:7)S=`H@(&EF97$@*\"0H2%!56%]-04I/4BDL(#$P*0H@(\"`@\n> M($Q$1DQ!1U,Z/2`M5VPL+44@+6QC(\"0H3$1&3$%'4RD**R`@(\"!#1DQ!1U,K\n> M/2`M1%5315]03U-)6%]324=.04Q3\"BL@96YD:68**R`**R`C($A0+558(#`Y\n> M(&YE961S(&QI8F,@8F5F;W)E(&QI8E!7+\"!S;R!W92!N965D('1O(&=E=\"!T\n> M:&4@;&EB8R!V97)S:6]N(&9I<G-T\"BL@:69E<2`H)\"A(4%587TU!2D]2*2P@\n> M,#DI\"BL@(\"`@3$1&3$%'4SH](\"U7;\"PM12`D*$Q$1DQ!1U,Z+6Q05STM;&,@\n> M+6Q05RD*(\"!E;F1I9@H@(`H@(\",@1&]E<R!A;GEO;F4@=7-E('1H:7,@<W1U\n> M9F8_\"BHJ*BHJ*BHJ*BHJ*BHJ*@HJ*BH@,C,L,C8@*BHJ*@H@(`H@(\"4N<VPZ\n> L(\"4N;PH@(`DD*$Q$*2`M8B`M;R`D0\"`D/`HM(`HM+2T@,CDL,S$@+2TM+0HZ\n> `\n> end\n> \n> \n> \n\n\n-- \nBruce Momjian\[email 
protected]\n", "msg_date": "Sun, 11 Jan 1998 15:53:54 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] Patches for getting version 6.2/6.2.1 running on" } ]
[ { "msg_contents": "Fixed in 6.3. Beta is Feb 1.\n\n> sql query \"insert into XXX select * from YYY\" does not working if\n> YYY are view.\n> \n> \n> \n> Please describe a way to repeat the problem. Please try to provide a\n> concise reproducible example, if at all possible: \n> ----------------------------------------------------------------------\n> \n> \n> [ic@cms ic]$ createdb bugexample\n> [ic@cms ic]$ psql bugexample\n> Welcome to the POSTGRESQL interactive sql monitor:\n> Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n> \n> type \\? for help on slash commands\n> type \\q to quit\n> type \\g or terminate with semicolon to execute query\n> You are currently connected to the database: bugexample\n> \n> bugexample=> create table foo ( x int );\n> CREATE\n> bugexample=> insert into foo values (1);\n> INSERT 3035758\n> bugexample=> insert into foo values (2);\n> INSERT 3035759\n> bugexample=> select * from foo;\n> x\n> -\n> 1\n> 2\n> (2 rows)\n> \n> bugexample=> create view bar as select * from foo;\n> CREATE\n> bugexample=> select * from bar;\n> x\n> -\n> 1\n> 2\n> (2 rows)\n> \n> bugexample=> create table foobar ( x int );\n> CREATE\n> bugexample=> insert into foobar select * from bar;\n> INSERT 0\n> bugexample=> select * from foobar;\n> x\n> -\n> (0 rows)\n> \n> bugexample=> \n> \n> \n> \n> NOTE: This bug also hapenes if we runing\n> \"insert into XXX (F1,F2...) select F1,F2... from YYY\",\n> or if we select from multiple tables and one or more tables\n> are views, independant of \"where ....\" expression.\n> \n> \n> If you know how this problem might be fixed, list the solution below:\n> ---------------------------------------------------------------------\n> \n> ?\n> \n> \n> \n> ic\n> \n> P.S. Sorry for my bad english grammatic ... :-(\n> \n> \n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Sun, 11 Jan 1998 16:33:51 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [BUGS] postgresql-6.1.1: insert into XXX select * from YYY does\n\tnot working where YYY are view." } ]
[ { "msg_contents": "Distinct is now disabled for Views in 6.3. It reports an error and fails.\n\n> Please describe a way to repeat the problem. Please try to provide a\n> concise reproducible example, if at all possible: \n> ----------------------------------------------------------------------\n> \n> [ic@cms PGconi]$ createdb bugexample\n> [ic@cms PGconi]$ psql bugexample\n> Welcome to the POSTGRESQL interactive sql monitor:\n> Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n> \n> type \\? for help on slash commands\n> type \\q to quit\n> type \\g or terminate with semicolon to execute query\n> You are currently connected to the database: bugexample\n> \n> bugexample=> create table foo ( x int );\n> CREATE\n> bugexample=> insert into foo values (1);\n> INSERT 3035854\n> bugexample=> insert into foo values (1);\n> INSERT 3035855\n> bugexample=> insert into foo values (2);\n> INSERT 3035856\n> bugexample=> create view bar as select distinct * from foo;\n> CREATE\n> bugexample=> select * from bar;\n> x\n> -\n> 1\n> 1\n> 2\n> (3 rows)\n> \n> bugexample=> \n> \n> \n> If you know how this problem might be fixed, list the solution below:\n> ---------------------------------------------------------------------\n> \n> ?\n> \n> \n> \n> ic\n> \n> P.S. Sorry for my bad english grammatic ... :-(\n> \n> \n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Sun, 11 Jan 1998 16:34:44 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [BUGS] postgresql-6.1.1: losing \"distinct\" in \"create view XXX as\n\tselect distinct ...\"." } ]
[ { "msg_contents": "Drop table is not a transaction-able operation. This is expected\nbehavior.\n\n\n> [ic@cms PGconi]$ createdb bugexample\n> [ic@cms PGconi]$ psql bugexample\n> Welcome to the POSTGRESQL interactive sql monitor:\n> Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n> \n> type \\? for help on slash commands\n> type \\q to quit\n> type \\g or terminate with semicolon to execute query\n> You are currently connected to the database: bugexample\n> \n> bugexample=> create table foobar ( x int );\n> CREATE\n> bugexample=> insert into foobar values (1);\n> INSERT 3035918\n> bugexample=> insert into foobar values (2);\n> INSERT 3035919\n> bugexample=> \\d\n> \n> Database = bugexample\n> +------------------+----------------------------------+----------+\n> | Owner | Relation | Type |\n> +------------------+----------------------------------+----------+\n> | ic | foobar | table |\n> +------------------+----------------------------------+----------+\n> bugexample=> select * from foobar;\n> x\n> -\n> 1\n> 2\n> (2 rows)\n> \n> bugexample=> begin;\n> BEGIN\n> bugexample=> drop table foobar;\n> DROP\n> bugexample=> abort;\n> ABORT\n> bugexample=> select * from foobar;\n> x\n> -\n> (0 rows)\n> \n> bugexample=> \n> \n> \n> If you know how this problem might be fixed, list the solution below:\n> ---------------------------------------------------------------------\n> \n> ?\n> \n> \n> \n> ic\n> \n> P.S. Sorry for my bad english grammatic ... :-(\n> \n> \n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Sun, 11 Jan 1998 16:35:41 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [BUGS] postgresql-6.1.1: wrong roll-back'ing \"drop table\" query." } ]
[ { "msg_contents": "Can someone figure out why this is happening?\n\n\nForwarded message:\n> ------------------------------------------------\n> Dropping table after aborting a transanction makes PosgresSQL unsable.\n> \n> \n> Please describe a way to repeat the problem. Please try to provide a\n> concise reproducible example, if at all possible: \n> ----------------------------------------------------------------------\n> [srashd]t-ishii{67} psql -e test < b\n> QUERY: drop table test;\n> WARN:Relation test Does Not Exist!\n> QUERY: create table test (i int4);\n> QUERY: create index iindex on test using btree(i);\n> QUERY: begin;\n> QUERY: insert into test values (100);\n> QUERY: select * from test;\n> i\n> ---\n> 100\n> (1 row)\n> \n> QUERY: rollback;\n> QUERY: drop table test;\n> NOTICE:AbortTransaction and not in in-progress state \n> NOTICE:AbortTransaction and not in in-progress state \n> \n> Note that if I do not make an index, it would be ok.\n> \n> If you know how this problem might be fixed, list the solution below:\n> ---------------------------------------------------------------------\n> \n> \n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Sun, 11 Jan 1998 16:39:09 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "[BUGS] NOTICE:AbortTransaction and not in in-progress state (fwd)" } ]
[ { "msg_contents": "Is this fixed yet?\n\nForwarded message:\n> I seem to recall that someone here said that because of the way comparisons\n> are done, in an ORDER-BY query, the NULLs will always come up last.\n> \n> It seems to have another effect, too: if I do a SELECT ... ORDER BY\n> col1,col2 - and the col1 attribute has nulls, the rows with the nulls don't\n> get sorted at all.\n> \n> Apparently, this is because the rows which have NULL in the col1 attribute\n> are not considered to have an equal value in col1, which is the requirement\n> for sorting on col2 - or am I missing something here?\n> \n> Herouth\n> \n> \n> \n> \n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Sun, 11 Jan 1998 16:43:01 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "[QUESTIONS] ORDER BY and nulls (fwd)" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Is this fixed yet?\n> \n> Forwarded message:\n> > I seem to recall that someone here said that because of the way comparisons\n> > are done, in an ORDER-BY query, the NULLs will always come up last.\n> >\n> > It seems to have another effect, too: if I do a SELECT ... ORDER BY\n> > col1,col2 - and the col1 attribute has nulls, the rows with the nulls don't\n> > get sorted at all.\n> >\n> > Apparently, this is because the rows which have NULL in the col1 attribute\n> > are not considered to have an equal value in col1, which is the requirement\n> > for sorting on col2 - or am I missing something here?\n\nI hope to fix this after Feb 1 (seems easy to do).\n\nVadim\n", "msg_date": "Mon, 12 Jan 1998 10:06:27 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [QUESTIONS] ORDER BY and nulls (fwd)" } ]
[ { "msg_contents": "> > It has to be this way, otherwise it would be possible for user to see\n> > other users' passwords in pg_user. I spoke to you all about this when I\n> > first started. I was going to make a separate relation (pg_password),\n> > but I was convinced not to since there is a one to one correlation\n> > between users and passwords. At this point I sent email to the effect\n> > that pg_user could no longer be readable by the group 'public'. If it\n> > was readable by public, then the passwords would have to be encrypted in\n> > pg_user. If this is the case, then the frontends will have to pass an\n> > unencrypted password over the network. Again this degrades the security\n> > of PostgreSQL. \n> > \n> > The real solution to this problem would be to create a pg_privileges\n> > relation, overhauling the privileges system entirely. Then we could\n> > just restrict access to the password column of pg_user. However, I\n> > would suggest that the entire pg_privileges table be cached in shared\n> > memory to speed things up. I am unsure if the catalog table are cached\n> > in shared memory or not (They really should be, but then this would\n> > probably require some logging to files in case of system crash). \n> > \n> > In the meantime, there should really be nothing that the average user\n> > will need from pg_user. The '\\d' is the only problem I have encountered\n> > thus far, and I hope to solve that problem soon. Therefore, if you\n> > really, really need something from pg_user, then you need to have select\n> > privileges given to you explicitly, or you could explicitly give them to\n> > public. This would, however, give public the ability to see user\n> > passwords (If you are using, HBA only, then just give public the select\n> > over pg_user). \n> \n> \tWait, let me just get this straight here...pg_user is, by default,\n> unreadable by the general public, but is changeable just using a simple\n> grant/revoke??\n> \n> \tIf so, I'm confused as to why this is a bad thing? Bruce? Sort\n> of seems to me that its like the TCP/Unix Socket argument...go to the most\n> secure first, then let the one setting it up downgrade as they feel is\n> appropriate...no?\n\nOK, general question. Does pg_user need to be readable? Do\nnon-postgres users want to see who owns each table? I don't know.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Sun, 11 Jan 1998 16:53:27 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: New pg_pwd patch and stuff" }, { "msg_contents": "On Sun, 11 Jan 1998, Bruce Momjian wrote:\n\n> > \tWait, let me just get this straight here...pg_user is, by default,\n> > unreadable by the general public, but is changeable just using a simple\n> > grant/revoke??\n> > \n> > \tIf so, I'm confused as to why this is a bad thing? Bruce? Sort\n> > of seems to me that its like the TCP/Unix Socket argument...go to the most\n> > secure first, then let the one setting it up downgrade as they feel is\n> > appropriate...no?\n> \n> OK, general question. Does pg_user need to be readable? Do\n> non-postgres users want to see who owns each table? I don't know.\n\n\tErk...hrmmm...my understanding is that if pg_user is non-readable, then\ndoing a \\d to list tables won't tell me who owns any of the tables...which\ncould be a problem if multiple users have access to the same database, but\nhave \"personal tables\"? 
\n\n\tActually, right now I think that this is one of the potential problems\nI brought up previous...\n\n\tIf I create a database, *anyone* that is a user (createuser <>) has access\nto that database...granted that I can use the 'revoke' command to restrict\ntable access, there should be some means of restricting a database (and its\ntables) to the owner of that database...\n\n\tOn top of that, a table/database should be restricted by default...for\nexample, this should not happen:\n\n> createdb scrappy\n> psql\nWelcome to the POSTGRESQL interactive sql monitor:\n Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n\n type \\? for help on slash commands\n type \\q to quit\n type \\g or terminate with semicolon to execute query\n You are currently connected to the database: scrappy\n\nscrappy=> \\q\n> su\nPassword:\n# su - acctng\n> psql scrappy\n> ~scrappy/pgsql/bin/psql scrappy\nConnection to database 'scrappy' failed.\nFATAL 1:SetUserId: user \"acctng\" is not in \"pg_user\"\n> logout\n# exit\n> createuser acctng\nEnter user's postgres ID or RETURN to use unix user ID: 1010 ->\nIs user \"acctng\" allowed to create databases (y/n) n\nIs user \"acctng\" allowed to add users? (y/n) n\ncreateuser: acctng was successfully added\ndon't forget to create a database for acctng\n> su\nPassword:\n# su - acctng\n> ~scrappy/pgsql/bin/psql scrappy\nWelcome to the POSTGRESQL interactive sql monitor:\n Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n\n type \\? for help on slash commands\n type \\q to quit\n type \\g or terminate with semicolon to execute query\n You are currently connected to the database: scrappy\n\nscrappy=> \\d\nWARN:pg_user: Permission denied.\nscrappy=>\n\n\tI shouldn't be able to get into the database itself...right now, there\nreally isn't any \"cross database\" boundaries...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 12 Jan 1998 01:19:51 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New pg_pwd patch and stuff" }, { "msg_contents": "On Sun, 11 Jan 1998, Bruce Momjian wrote:\n\n> OK, general question. Does pg_user need to be readable? Do\n> non-postgres users want to see who owns each table? I don't know.\n\nI'd say yes, as we have stuff in JDBC yet to implement that will access\nthis table.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Mon, 12 Jan 1998 06:58:55 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: New pg_pwd patch and stuff" }, { "msg_contents": "> \n> On Sun, 11 Jan 1998, Bruce Momjian wrote:\n> \n> > > \tWait, let me just get this straight here...pg_user is, by default,\n> > > unreadable by the general public, but is changeable just using a simple\n> > > grant/revoke??\n> > > \n> > > \tIf so, I'm confused as to why this is a bad thing? Bruce? Sort\n> > > of seems to me that its like the TCP/Unix Socket argument...go to the most\n> > > secure first, then let the one setting it up downgrade as they feel is\n> > > appropriate...no?\n> > \n> > OK, general question. Does pg_user need to be readable? Do\n> > non-postgres users want to see who owns each table? 
I don't know.\n> \n> \tErk...hrmmm...my understanding is that if pg_user is non-readable, then\n> doing a \\d to list tables won't tell me who owns any of the tables...which\n> could be a problem if multiple users have access to the same database, but\n> have \"personal tables\"? \n> \n> \tActually, right now I think that this is one of the potential problems\n> I brought up previous...\n> \n> \tIf I create a database, *anyone* that is a user (createuser <>) has access\n> to that database...granted that I can use the 'revoke' command to restrict\n> table access, there should be some means of restricting a database (and its\n> tables) to the owner of that database...\n> \n> \tOn top of that, a table/database should be restricted by default...for\n> example, this should not happen:\n\nYes, I agree we should be able to restrict who gets into which\ndatabases. It is on the TODO list.\n\n* More access control over who can create tables and access the database\n\nThe reason it doesn't get complained about more is that many commercial\ndatabases have similar lack of funciontality.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Mon, 12 Jan 1998 08:30:12 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: New pg_pwd patch and stuff" }, { "msg_contents": "On Mon, 12 Jan 1998, Bruce Momjian wrote:\n \n> Yes, I agree we should be able to restrict who gets into which\n> databases. It is on the TODO list.\n> \n> * More access control over who can create tables and access the database\n> \n> The reason it doesn't get complained about more is that many commercial\n> databases have similar lack of funciontality.\n\n\t*nod* \n\n", "msg_date": "Mon, 12 Jan 1998 08:32:26 -0500 (EST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New pg_pwd patch and stuff" }, { "msg_contents": "On Mon, 12 Jan 1998, Bruce Momjian wrote:\n> Yes, I agree we should be able to restrict who gets into which\n> databases. It is on the TODO list.\n> \n> * More access control over who can create tables and access the database\n> \n> The reason it doesn't get complained about more is that many commercial\n> databases have similar lack of funciontality.\n\nAlthough not perfect, we can do this now by using different password files\nfor each database, and having an entry for each database in pg_hba.conf\n\n--\nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Mon, 12 Jan 1998 23:10:07 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: New pg_pwd patch and stuff" }, { "msg_contents": "> \n> On Mon, 12 Jan 1998, Bruce Momjian wrote:\n> > Yes, I agree we should be able to restrict who gets into which\n> > databases. 
It is on the TODO list.\n> > \n> > * More access control over who can create tables and access the database\n> > \n> > The reason it doesn't get complained about more is that many commercial\n> > databases have similar lack of funciontality.\n> \n> Although not perfect, we can do this now by using different password files\n> for each database, and having an entry for each database in pg_hba.conf\n\nSomeone sent in a patch to pg_hba.conf that allows use of the %\ncharacter to say people can only access databases with their name on it.\nMarc will apply it soon, and Marc, we need a manual page mention for it.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Mon, 12 Jan 1998 21:10:08 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: New pg_pwd patch and stuff" } ]
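For reference, the explicit grants mentioned in this thread look like this (a sketch only; "someuser" is a placeholder name). Re-opening pg_user to everyone restores the old behaviour for \d at the cost of exposing the password column when password authentication is in use:

    -- World-readable again, as discussed above:
    GRANT SELECT ON pg_user TO PUBLIC;
    -- Or only to a particular user who needs \d to show table owners
    -- ("someuser" is a placeholder):
    GRANT SELECT ON pg_user TO someuser;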
[ { "msg_contents": "> \n> Here's a blast from the past. Shows how I keep those open issues in my\n> mailbox.\n> \n> Forwarded message:\n> > Date: Wed, 29 Jan 1997 13:38:10 -0500\n> > From: [email protected] (Darren King)\n> > To: [email protected]\n> > Subject: [HACKERS] Max size of data types and tuples.\n\nStill buried in my 'received' box here too. Can't imagine all the bugs\nand/or issues you have kept in yours.\n\n\n> > 1. Clean up the #defines for the max block size. Currently,\n> > there are at least four references to 8192...\n\nThink I found and fixed all of these up.\n\n\n> > __These includes of storage/bufpage.h can be removed.__\n\nStill _quite_ a few #includes that can be removed throughout. First,\n\"utils/elog.h\" and \"util/palloc.h\" are include in \"postgres.h\", so are\nunnecessary to include by themselves since \"postgres.h\" is include in\n_every_ .c file, correct?\n\nAlso numerous #includes of \"storage/bufpage.h\" and \"storage/fd.h\" that are\nunnecessary since the things they were included for (BLCKSZ and SEEK_*) are\nnow either in \"config.h\" or found in a system include file.\n\n\n> > 2. Once the block size issue is taken care of, calculate the\n> > maximum tuple size more accurately.\n...\n> > 3. When #1 & #2 are resolved, let the textual fields have a max\n> > of (MAX_TUPLE_SIZE - sizeof(int)).\n\nThis could be done as soon as I come up with a way of defining the packet\nsize for the interfaces since this is the newest limiting factor.\n\nPeter's suggestion of backend functions for getting info might be the way to\ngo. It would let the various interfaces get the info they need and would be\na step towards JDBC and ODBC compliance.\n\ndarrenk\n", "msg_date": "Sun, 11 Jan 1998 21:04:15 -0500", "msg_from": "[email protected] (Darren King)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Max size of data types and tuples. (fwd)" }, { "msg_contents": "> \n> > \n> > Here's a blast from the past. Shows how I keep those open issues in my\n> > mailbox.\n> > \n> > Forwarded message:\n> > > Date: Wed, 29 Jan 1997 13:38:10 -0500\n> > > From: [email protected] (Darren King)\n> > > To: [email protected]\n> > > Subject: [HACKERS] Max size of data types and tuples.\n> \n> Still buried in my 'received' box here too. Can't imagine all the bugs\n> and/or issues you have kept in yours.\n\nNot too bad now.\n\n> \n> \n> > > 1. Clean up the #defines for the max block size. Currently,\n> > > there are at least four references to 8192...\n> \n> Think I found and fixed all of these up.\n> \n> \n> > > __These includes of storage/bufpage.h can be removed.__\n> \n> Still _quite_ a few #includes that can be removed throughout. First,\n> \"utils/elog.h\" and \"util/palloc.h\" are include in \"postgres.h\", so are\n> unnecessary to include by themselves since \"postgres.h\" is include in\n> _every_ .c file, correct?\n\nYes, must be included. Period. Even 3rd party apps.\n\n> \n> Also numerous #includes of \"storage/bufpage.h\" and \"storage/fd.h\" that are\n> unnecessary since the things they were included for (BLCKSZ and SEEK_*) are\n> now either in \"config.h\" or found in a system include file.\n> \n> \n> > > 2. Once the block size issue is taken care of, calculate the\n> > > maximum tuple size more accurately.\n> ...\n> > > 3. 
When #1 & #2 are resolved, let the textual fields have a max\n> > > of (MAX_TUPLE_SIZE - sizeof(int)).\n> \n> This could be done as soon as I come up with a way of defining the packet\n> size for the interfaces since this is the newest limiting factor.\n> \n> Peter's suggestion of backend functions for getting info might be the way to\n> go. It would let the various interfaces get the info they need and would be\n> a step towards JDBC and ODBC compliance.\n\nAgain, we could just set 3rd party apps to be the maximum tuple size we\nwill ever have to support.\n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Wed, 14 Jan 1998 10:29:30 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Max size of data types and tuples. (fwd)" }, { "msg_contents": "On Wed, 14 Jan 1998, Bruce Momjian wrote:\n\n> > This could be done as soon as I come up with a way of defining the packet\n> > size for the interfaces since this is the newest limiting factor.\n> > \n> > Peter's suggestion of backend functions for getting info might be the way to\n> > go. It would let the various interfaces get the info they need and would be\n> > a step towards JDBC and ODBC compliance.\n> \n> Again, we could just set 3rd party apps to be the maximum tuple size we\n> will ever have to support.\n\nCurrently, were returning some defaults based on the 8K block size.\n\nProbably for these, we may be able to get away with the values we are\nsetting. However, there are a few things that I think we will still need\nto implement as functions.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Wed, 14 Jan 1998 19:36:59 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Max size of data types and tuples. (fwd)" }, { "msg_contents": "> \n> On Wed, 14 Jan 1998, Bruce Momjian wrote:\n> \n> > > This could be done as soon as I come up with a way of defining the packet\n> > > size for the interfaces since this is the newest limiting factor.\n> > > \n> > > Peter's suggestion of backend functions for getting info might be the way to\n> > > go. It would let the various interfaces get the info they need and would be\n> > > a step towards JDBC and ODBC compliance.\n> > \n> > Again, we could just set 3rd party apps to be the maximum tuple size we\n> > will ever have to support.\n> \n> Currently, were returning some defaults based on the 8K block size.\n> \n> Probably for these, we may be able to get away with the values we are\n> setting. However, there are a few things that I think we will still need\n> to implement as functions.\n\nOK, let's decide soon, so people can be ready for Feb 1.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Wed, 14 Jan 1998 16:27:24 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Max size of data types and tuples. (fwd)" }, { "msg_contents": "On Wed, 14 Jan 1998, Bruce Momjian wrote:\n> > On Wed, 14 Jan 1998, Bruce Momjian wrote:\n> > \n> > > > This could be done as soon as I come up with a way of defining the packet\n> > > > size for the interfaces since this is the newest limiting factor.\n> > > > \n> > > > Peter's suggestion of backend functions for getting info might be the way to\n> > > > go. 
It would let the various interfaces get the info they need and would be\n> > > > a step towards JDBC and ODBC compliance.\n> > > \n> > > Again, we could just set 3rd party apps to be the maximum tuple size we\n> > > will ever have to support.\n> > \n> > Currently, were returning some defaults based on the 8K block size.\n> > \n> > Probably for these, we may be able to get away with the values we are\n> > setting. However, there are a few things that I think we will still need\n> > to implement as functions.\n> \n> OK, let's decide soon, so people can be ready for Feb 1.\n\nI'm going to sort out what needs to be done to get us as close to\ncompliance as possible over the next couple of days. Hopefully, we can\ndecide on some of them then.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Thu, 15 Jan 1998 19:25:43 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Max size of data types and tuples. (fwd)" }, { "msg_contents": "Are we done with these issues, or are you still working on them, or is\nPeter working on this?\n\n> \n> > \n> > > \n> > > Here's a blast from the past. Shows how I keep those open issues in my\n> > > mailbox.\n> > > \n> > > Forwarded message:\n> > > > Date: Wed, 29 Jan 1997 13:38:10 -0500\n> > > > From: [email protected] (Darren King)\n> > > > To: [email protected]\n> > > > Subject: [HACKERS] Max size of data types and tuples.\n> > \n> > Still buried in my 'received' box here too. Can't imagine all the bugs\n> > and/or issues you have kept in yours.\n> \n> Not too bad now.\n> \n> > \n> > \n> > > > 1. Clean up the #defines for the max block size. Currently,\n> > > > there are at least four references to 8192...\n> > \n> > Think I found and fixed all of these up.\n> > \n> > \n> > > > __These includes of storage/bufpage.h can be removed.__\n> > \n> > Still _quite_ a few #includes that can be removed throughout. First,\n> > \"utils/elog.h\" and \"util/palloc.h\" are include in \"postgres.h\", so are\n> > unnecessary to include by themselves since \"postgres.h\" is include in\n> > _every_ .c file, correct?\n> \n> Yes, must be included. Period. Even 3rd party apps.\n> \n> > \n> > Also numerous #includes of \"storage/bufpage.h\" and \"storage/fd.h\" that are\n> > unnecessary since the things they were included for (BLCKSZ and SEEK_*) are\n> > now either in \"config.h\" or found in a system include file.\n> > \n> > \n> > > > 2. Once the block size issue is taken care of, calculate the\n> > > > maximum tuple size more accurately.\n> > ...\n> > > > 3. When #1 & #2 are resolved, let the textual fields have a max\n> > > > of (MAX_TUPLE_SIZE - sizeof(int)).\n> > \n> > This could be done as soon as I come up with a way of defining the packet\n> > size for the interfaces since this is the newest limiting factor.\n> > \n> > Peter's suggestion of backend functions for getting info might be the way to\n> > go. 
It would let the various interfaces get the info they need and would be\n> > a step towards JDBC and ODBC compliance.\n> \n> Again, we could just set 3rd party apps to be the maximum tuple size we\n> will ever have to support.\n> \n> \n> -- \n> Bruce Momjian\n> [email protected]\n> \n> \n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Wed, 21 Jan 1998 21:49:00 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Max size of data types and tuples. (fwd)" } ]
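A back-of-the-envelope sketch of the arithmetic behind items #2 and #3 above (illustrative figures only; the real numbers come from the #defines in config.h and the page/tuple header layout):

    -- With the default 8K block:
    --   BLCKSZ            = 8192
    --   MAX_TUPLE_SIZE    = BLCKSZ - (page header + tuple header overhead)
    --   max text payload  = MAX_TUPLE_SIZE - sizeof(int)   -- the length word
    -- so a single text/varchar value tops out a little under 8K, and the
    -- frontend/backend packet size then becomes the next limit to raise.
    SELECT 8192 - 4 AS payload_upper_bound;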
[ { "msg_contents": "> Are we sure the new expected/triggers.out is correct?\n\nNo.\n\n> The query..\n>\n> QUERY: insert into fkeys values (70, '5', 1);\n> and\n> QUERY: insert into fkeys values (70, '5', 1);\n>\n> should be triggering...\n>\n> QUERY: create trigger check_fkeys_pkey_exist\n> before insert or update on fkeys\n> for each row\n> execute procedure\n> check_primary_key ('fkey1', 'fkey2', 'pkeys', 'pkey1', 'pkey2');\n>\n> and failing because 70 does not exist in pkeys.\n>\n> I think the old triggers.out was correct.\n>\n> Keith.\n>\n> BTW the results I'm getting now check OK against the old expected out file.\n> (CVS snapshot about 2 hours ago.)\n\nWell, I'll try with a fresh snapshot. Don't know why I'm getting a different\nresult...\n\n - Tom\n\n", "msg_date": "Mon, 12 Jan 1998 04:52:01 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: triggers regression tests." } ]
[ { "msg_contents": "Vadim wrote:\n> but this will be \"known bug\": this breaks OO-nature of Postgres,\nbecause of\n> operators can be overrided and '=' can mean s o m e t h i n g (not\nequality).\n> Example: box data type. For boxes, = means equality of _areas_ and =~\n> means that boxes are the same ==> =~ ANY should be used for IN.\n\nOk, here I think there should be a restriction to have the = operator\nalways be defined as equality operator. Because in the long run it will\nbe hard \nto write equality restrictions. a = a1 and b =~ b1 and c +*#~ c1.\nAlso =, >, <, >= and the like will allways be candidates for use by the\noptimizer\n(boolean math to simplify restriction or to make an existing index\nusable could be used).\n\nI vote for: = must always be defined as equality in user defined types.\n\n(if such comparison is not possible for a special type the = should not\nbe defined for it) \nI therefore also suggest changing the box ops =~ to = and the area = to\nsome other sign.\n\nAndreas\n", "msg_date": "Mon, 12 Jan 1998 09:12:34 +0100", "msg_from": "Zeugswetter Andreas DBT <[email protected]>", "msg_from_op": true, "msg_subject": "= is not always defined as equality is bad" }, { "msg_contents": "> \n> Vadim wrote:\n> > but this will be \"known bug\": this breaks OO-nature of Postgres,\n> because of\n> > operators can be overrided and '=' can mean s o m e t h i n g (not\n> equality).\n> > Example: box data type. For boxes, = means equality of _areas_ and =~\n> > means that boxes are the same ==> =~ ANY should be used for IN.\n> \n> Ok, here I think there should be a restriction to have the = operator\n> always be defined as equality operator. Because in the long run it will\n> be hard \n> to write equality restrictions. a = a1 and b =~ b1 and c +*#~ c1.\n> Also =, >, <, >= and the like will allways be candidates for use by the\n> optimizer\n> (boolean math to simplify restriction or to make an existing index\n> usable could be used).\n\nI think each operator in pg_operator has a 'commutative' field for this:\n\n| oprcom | oid | 4 |\n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Mon, 12 Jan 1998 08:33:05 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] = is not always defined as equality is bad" } ]
[ { "msg_contents": "hello,\n\nis there any way to be able to peer to a database' information (tables and\ntheir fields) similar to reflection in java using libpq? i know one can\nuse the \\d recursively but isn't there a way using libpq? i need the\ninformation to properly modify a particular part in radius.\n\nthanks.\n\n[---]\nNeil D. Quiogue <[email protected]>\nIPhil Communications Network, Inc.\nOther: [email protected]\n\n", "msg_date": "Mon, 12 Jan 1998 17:38:13 +0800 (HKT)", "msg_from": "\"neil d. quiogue\" <[email protected]>", "msg_from_op": true, "msg_subject": "libpq and db information" }, { "msg_contents": "On Mon, 12 Jan 1998, neil d. quiogue wrote:\n\n> is there any way to be able to peer to a database' information (tables and\n> their fields) similar to reflection in java using libpq? i know one can\n> use the \\d recursively but isn't there a way using libpq? i need the\n> information to properly modify a particular part in radius.\n\ndon't you just love replying to your own questions. i'm looking at how\npg_dump.c implemented it (through the system catalogue). unless anyone\nhas another answer, i'll be glad to hear it.\n\n[---]\nNeil D. Quiogue <[email protected]>\nIPhil Communications Network, Inc.\nOther: [email protected]\n\n", "msg_date": "Mon, 12 Jan 1998 19:59:30 +0800 (HKT)", "msg_from": "\"neil d. quiogue\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [QUESTIONS] libpq and db information" }, { "msg_contents": "On Mon, 12 Jan 1998, neil d. quiogue wrote:\n\n> hello,\n> \n> is there any way to be able to peer to a database' information (tables and\n> their fields) similar to reflection in java using libpq? i know one can\n> use the \\d recursively but isn't there a way using libpq? i need the\n> information to properly modify a particular part in radius.\n\n\tNot sure if this is what you are looking for, but to implement\nthis in some of my code, I just went into the pg_dump.c code and pulled\nout the required SQL statement...\n\n\n", "msg_date": "Mon, 12 Jan 1998 08:11:40 -0500 (EST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] libpq and db information" }, { "msg_contents": "On Mon, 12 Jan 1998, neil d. quiogue wrote:\n\n> hello,\n> \n> is there any way to be able to peer to a database' information (tables and\n> their fields) similar to reflection in java using libpq? i know one can\n> use the \\d recursively but isn't there a way using libpq? i need the\n> information to properly modify a particular part in radius.\n\nIn jdbc, there are methods that allow you to get details about what tables\nand columns are present, and their details.\n\nI'm still working on the code to get information about access rights to\nthe various tables.\n\nAll of these use SQL queries, based on libpq.\n\nThese are new additions to the jdbc driver for V6.3, due beta Feb 1st.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Mon, 12 Jan 1998 19:39:48 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [QUESTIONS] libpq and db information" } ]
[ { "msg_contents": "> > > select *\n> > > from tabA\n> > > where col1 = (select col2\n> > > from tabB\n> > > where tabA.col3 = tabB.col4\n> > > and exists (select *\n> > > from tabC\n> > > where tabB.colX = tabC.colX and\n> > > tabC.colY = tabA.col2)\n> > > )\n\nI checked this in Informix, it works.\n\nAndreas\n", "msg_date": "Mon, 12 Jan 1998 11:25:00 +0100", "msg_from": "Zeugswetter Andreas DBT <[email protected]>", "msg_from_op": true, "msg_subject": "nested subselects are allowed" } ]
[ { "msg_contents": "> My CA/Ingres Admin manual points out that there is a tradeoff between\n> compressing tuples to save disk storage and the extra processing work\n> required to uncompress for use. They suggest that the only case where you\n> would consider compressing on disk is when your system is very I/O bound,\n> and you have CPU to burn.\n> \n> The default for Ingres is to not compress anything, but you can specify\n> compression on a table-by-table basis.\n> \n> btw, char() is a bit trickier to handle correctly if you do compress it on\n> disk, since trailing blanks must be handled correctly all the way through.\n> For example, you would want 'hi' = 'hi ' to be true, which is not a\n> requirement for varchar().\n> \n> - Tom\n\nAnybody thought about real gzip style compression? There's a specialiased\nRDBMS called Iditis (written specifically for one task) which, like\nPostgreSQL stores data at the file level and uses a gzip-based library\nto access the files. I gather this is transparent to the software. Has\nanyone thought of anything equivalent for PG/SQL?\n\nTo be honest I haven't looked into how Iditis does it (it's a commercial\nprogram and I don't have the source). I don't actually see how this\ncould be done for small writes of data - how does it build the lookup\ntables for the compression? However, it might be worth considering for\nuse with the text field type.\n\nAndrew\n----------------------------------------------------------------------------\nDr. Andrew C.R. Martin University College London\nEMAIL: (Work) [email protected] (Home) [email protected]\nURL: http://www.biochem.ucl.ac.uk/~martin\nTel: (Work) +44(0)171 419 3890 (Home) +44(0)1372 275775\n", "msg_date": "Mon, 12 Jan 1998 14:36:20 GMT", "msg_from": "Andrew Martin <[email protected]>", "msg_from_op": true, "msg_subject": "Compression (was Re: [HACKERS] varchar/char size)" } ]