[ { "msg_contents": "I've been seeing trouble with varchar() columns for a little while, and\nsince it is still there with a fresh install of the development tree\nit's time to report it:\n\npostgres=> create table t (v varchar(80),i int);\nCREATE\npostgres=> insert into t values ('hi',1);\nINSERT 18122 1\npostgres=> select * from t;\nv |i\n--+-\nhi|0\n(1 row)\n\nAs you can see, the varchar() column apparently trashes the subsequent\ncolumn. I ran across it trying to verify the tutorial examples for the\ndocumentation. If the varchar() is the last column in the table, the\nproblem does not crop up, at least in the simplest case:\n\npostgres=> create table t2 (i int, v varchar(80));\nCREATE\npostgres=> insert into t2 values (2,'hi');\nINSERT 18133 1\npostgres=> select * from t2;\ni|v\n-+--\n2|hi\n(1 row)\n\nAlso, I believe that if the varchar() field is substantially shorter the\nproblem does not manifest itself:\n\npostgres=> create table t4 (v varchar(4), i int);\nCREATE\npostgres=> insert into t4 values ('hi',4);\nINSERT 18156 1\npostgres=> select * from t4;\nv |i\n--+-\nhi|4\n(1 row)\n\nbut varchar(10) still shows the problem:\n\npostgres=> create table t3 (v varchar(10), i int);\nCREATE\npostgres=> insert into t3 values ('hi',3);\nINSERT 18145 1\npostgres=> select * from t3;\nv |i\n--+-\nhi|0\n(1 row)\n\nThis is from the development source as of around 1998-01-12 14:30 GMT\n\n -\nTom\n\n", "msg_date": "Mon, 12 Jan 1998 14:51:36 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "varchar() troubles" }, { "msg_contents": "> \n> I've been seeing trouble with varchar() columns for a little while, and\n> since it is still there with a fresh install of the development tree\n> it's time to report it:\n> \n> postgres=> create table t (v varchar(80),i int);\n> CREATE\n> postgres=> insert into t values ('hi',1);\n> INSERT 18122 1\n> postgres=> select * from t;\n> v |i\n> --+-\n> hi|0\n> (1 row)\n\nDid you see this before or only after the varchar() length change I\nmade?\n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Mon, 12 Jan 1998 10:24:01 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: varchar() troubles" }, { "msg_contents": "Bruce Momjian wrote:\n\n> >\n> > I've been seeing trouble with varchar() columns for a little while, and\n> > since it is still there with a fresh install of the development tree\n> > it's time to report it:\n> >\n> > postgres=> create table t (v varchar(80),i int);\n> > CREATE\n> > postgres=> insert into t values ('hi',1);\n> > INSERT 18122 1\n> > postgres=> select * from t;\n> > v |i\n> > --+-\n> > hi|0\n> > (1 row)\n>\n> Did you see this before or only after the varchar() length change I\n> made?\n\nPretty sure only after but it's hard to tell for sure. My trees for 971204\nand 971222 both core dump on inserts to varchar, but I can't remember what\nelse I was doing with the trees at the time. v6.2.1p5 works OK on this:\n\npostgres=> create table t (v varchar(80),i int);\nCREATE\npostgres=> insert into t values ('hi',1);\nINSERT 142735 1\npostgres=> select * from t;\nv |i\n--+-\nhi|1\n(1 row)\n\n -\nTom\n\n", "msg_date": "Mon, 12 Jan 1998 15:41:07 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: varchar() troubles" }, { "msg_contents": "> Pretty sure only after but it's hard to tell for sure. 
My trees for 971204\n> and 971222 both core dump on inserts to varchar, but I can't remember what\n> else I was doing with the trees at the time.\n\nI now recall that the varchar code was broken during this time (for who knows\nhow long?), as I discovered when trying to reproduce the tutorial results for\nthe documentation.\n\nThe problem was that some things were copied using VARSIZE rather than\nsubtracting out VARHDRSZ first (actually, I think it might have use\nsizeof(int) and other dangers too). 
I patched that near the end of the year\n> and my 980101.d tree and 980106.d tree do not exhibit the symptom:\n> \n> postgres=> create table t (v varchar(80),i int);\n> CREATE\n> postgres=> insert into t values ('hi', 1);\n> INSERT 643562 1\n> postgres=> select * from t;\n> v |i\n> --+-\n> hi|1\n> (1 row)\n\nI have found that ExecEvalVar() uses a descriptor that has the attr\nlength set to the maximum, instead of -1. The ExecTypeFromTL() comment\nsays:\n\n/* ----------------------------------------------------------------\n * ExecTypeFromTL\n *\n * Currently there are about 4 different places where we create\n * TupleDescriptors. They should all be merged, or perhaps\n * be rewritten to call BuildDesc().\n *\n\nClearly stating that the tuple descriptors in the system are created in\nseveral places. Some places have the length set wrong. I am going to\nhave to take a look at all those places, and make sure they have\nconsistent behaviour.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Mon, 12 Jan 1998 23:15:09 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: varchar() troubles" } ]
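The cause named in this thread, copying with VARSIZE where VARHDRSZ should first have been subtracted, is easy to picture. The sketch below is not the actual patch; it is a minimal, hypothetical example using the macros mentioned above (VARSIZE, VARHDRSZ, VARDATA) to show where the data-only length matters. The function name is invented for illustration, and a backend compilation context is assumed.

```c
#include <string.h>
#include "postgres.h"        /* struct varlena, VARSIZE, VARHDRSZ, VARDATA, palloc */

/*
 * Illustrative sketch only, not the committed fix.  A varlena value such
 * as varchar starts with a length header, so:
 *   VARSIZE(v)            = header bytes + data bytes
 *   VARSIZE(v) - VARHDRSZ = data bytes only
 * Copying VARSIZE(v) bytes of *data* (instead of the data-only length)
 * runs past the value and can clobber whatever is stored next, e.g. the
 * int column that follows a varchar(80).
 */
static struct varlena *
copy_varlena(struct varlena *src)
{
    int             total = VARSIZE(src);        /* header + data bytes */
    int             datalen = total - VARHDRSZ;  /* data bytes only     */
    struct varlena *dst = (struct varlena *) palloc(total);

    memcpy(dst, src, VARHDRSZ);                  /* copy the length header     */
    memcpy(VARDATA(dst), VARDATA(src), datalen); /* then exactly datalen bytes;
                                                  * using total here would
                                                  * overrun the value          */
    return dst;
}
```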
[ { "msg_contents": "\n> OK, we never installed this for 6.2 because we were already in Beta. \n> Can we do this for 6.3? Vadim suggested we make this part of libpq, so\n> all applications could make use of it.\n> \n> I have one of the original patches, but not the others. Martin, what do you\n> think? Any other comments on this?\n> \n> \n> > \n> > \n> > adding them to the documentation, I thought I'd better supply\n> > a patch to the psql man page which documents the .psqlrc file :-)\n> > (I forgot yesterday....)\n> > \n> > \n> > Andrew\n> > \n> > \n[DELETED patch to man page for psql.1 which documents suggested addition\nof support for a /etc/psqlrc and/or $(HOME)/.psqlrc containing SQL code\nto be run whenever psql is started]\n\nPersonally, I think this should just be a function of psql not libpq - it's\nreally there as a convenience to the person running psql to save typing a\nfew lines of SQL every time (like setting the date format). If you are\nrunning PG/SQL via some other interface (such as Perl), then it is\ntrivial to write those few lines as part of your Perl script rather than\nin a .psqlrc file.\n\nI still have the patch file for the source as well as for the man page\n\n\nAndrew\n\n----------------------------------------------------------------------------\nDr. Andrew C.R. Martin University College London\nEMAIL: (Work) [email protected] (Home) [email protected]\nURL: http://www.biochem.ucl.ac.uk/~martin\nTel: (Work) +44(0)171 419 3890 (Home) +44(0)1372 275775\n", "msg_date": "Mon, 12 Jan 1998 14:58:03 GMT", "msg_from": "Andrew Martin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] PSQL man page patch" }, { "msg_contents": "> ... patch to man page for psql.1 which documents suggested addition\n> of support for a /etc/psqlrc and/or $(HOME)/.psqlrc containing SQL code\n> to be run whenever psql is started]\n>\n> Personally, I think this should just be a function of psql not libpq - it's\n> really there as a convenience to the person running psql to save typing a\n> few lines of SQL every time (like setting the date format). If you are\n> running PG/SQL via some other interface (such as Perl), then it is\n> trivial to write those few lines as part of your Perl script rather than\n> in a .psqlrc file.\n\nI have added support for PGTZ to libpq and fixed the existing PGDATESTYLE\nsupport, so at least a few common environment settings will not need .psqlrc.\nThe bigger problem is that .psqlrc could contain commands which might screw up\nan embedded application. For example. a query to show the current time in\n.psqlrc would result in an unexpected response from the backend as an app fires\nup. It does seem like a nice feature for at least psql though, as Andrew\nsuggests. I'd suggest that if we add it to libpq that we do so as a separate\ncall or an option, so an embedded app can choose to use it or not.\n\n - Tom\n\n> I still have the patch file for the source as well as for the man page\n\n\n\n", "msg_date": "Mon, 12 Jan 1998 15:27:46 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PSQL man page patch" } ]
[ { "msg_contents": "unsubscribe\n", "msg_date": "Mon, 12 Jan 1998 09:06:50 -0600", "msg_from": "Andy Doerr <[email protected]>", "msg_from_op": true, "msg_subject": "(no subject)" } ]
[ { "msg_contents": "Try changing your OS default memory size. Unsure how to do this under\nAIX.\n\n> \n> \n> ============================================================================\n> POSTGRESQL BUG REPORT TEMPLATE\n> ============================================================================\n> \n> \n> Your name\t\t: David Hartwig\n> Your email address\t: [email protected]\n> \n> Category\t\t: runtime: back-end: SQL\n> Severity\t\t: serious\n> \n> Summary: palloc fails with lots of ANDs and ORs\n> \n> System Configuration\n> --------------------\n> Operating System : AIX 4.1\n> \n> PostgreSQL version : 6.2\n> \n> Compiler used : native CC\n> \n> Hardware:\n> ---------\n> RS 6000\n> \n> Versions of other tools:\n> ------------------------\n> NA\n> \n> --------------------------------------------------------------------------\n> \n> Problem Description:\n> --------------------\n> The follow is a mail message describing the problem on the PostODBC mailing list:\n> \n> \n> I have run across this also. We traced it down to a failure in the PostgreSQL server. This occurs under the following conditions. \n> \n> 1. MS Access \n> 2. Specify a multi-part key in the link time setup with postgresql \n> 3. Click on table view. \n> \n> What happens is MS Access takes the following steps. First it selects all possible key values for the table being viewed. I\n> suspect it maps the key values to the relative row position in the display. Then it uses the mapping to generate future queries based\n> on the mapping and the rows showing on the screen. The queries take the following form: \n> \n> SELECT keypart1, keypart2, keypart3, col4, col5, col6 ... FROM example_table \n> WHERE \n> (keypart1 = row1keypartval1 AND keypart2 = row1keypartval2 AND keypart3 = row1keypartval3) OR \n> (keypart1 = row2keypartval1 AND keypart2 = row2keypartval2 AND keypart3 = row2keypartval3) OR \n> . \n> . -- 28 lines of this stuff. Why 28... Why not 28 \n> . \n> (keypart1 = row27keypartval1 AND keypart2 = row27keypartval2 AND keypart3 = row27keypartval3) OR \n> (keypart1 = row28keypartval1 AND keypart2 = row28keypartval2 AND keypart3 = row28keypartval3); \n> \n> \n> The PostgreSQL sever chokes on this statement claiming it is out of memory. (palloc) In this example I used a three part key. I\n> do not recall if a three part key is enough to trash the backend. It has been a while. I have tried sending these kinds of statements\n> directly through the psql monitor and get the same result. \n> \n> \n> --------------------------------------------------------------------------\n> \n> Test Case:\n> ----------\n> select c1, c1 c3, c4, c5 ... 
from example_table\n> where\n> (c1 = something and c2 = something and c3 = something and c4 = something) or\n> (c1 = something and c2 = something and c3 = something and c4 = something) or\n> (c1 = something and c2 = something and c3 = something and c4 = something) or\n> (c1 = something and c2 = something and c3 = something and c4 = something) or\n> (c1 = something and c2 = something and c3 = something and c4 = something) or\n> (c1 = something and c2 = something and c3 = something and c4 = something) or\n> (c1 = something and c2 = something and c3 = something and c4 = something) or\n> (c1 = something and c2 = something and c3 = something and c4 = something) or\n> (c1 = something and c2 = something and c3 = something and c4 = something) or\n> (c1 = something and c2 = something and c3 = something and c4 = something) or\n> (c1 = something and c2 = something and c3 = something and c4 = something) or\n> (c1 = something and c2 = something and c3 = something and c4 = something) or\n> (c1 = something and c2 = something and c3 = something and c4 = something) or\n> (c1 = something and c2 = something and c3 = something and c4 = something) or\n> (c1 = something and c2 = something and c3 = something and c4 = something) or\n> (c1 = something and c2 = something and c3 = something and c4 = something) or\n> (c1 = something and c2 = something and c3 = something and c4 = something) or\n> (c1 = something and c2 = something and c3 = something and c4 = something) or\n> (c1 = something and c2 = something and c3 = something and c4 = something) or\n> (c1 = something and c2 = something and c3 = something and c4 = something) or\n> (c1 = something and c2 = something and c3 = something and c4 = something) or\n> (c1 = something and c2 = something and c3 = something and c4 = something) or\n> (c1 = something and c2 = something and c3 = something and c4 = something) or\n> (c1 = something and c2 = something and c3 = something and c4 = something) or\n> (c1 = something and c2 = something and c3 = something and c4 = something) or\n> (c1 = something and c2 = something and c3 = something and c4 = something) or\n> (c1 = something and c2 = something and c3 = something and c4 = something) or\n> (c1 = something and c2 = something and c3 = something and c4 = something) or\n> (c1 = something and c2 = something and c3 = something and c4 = something);\n> \n> \n> --------------------------------------------------------------------------\n> \n> Solution:\n> ---------\n> \n> \n> --------------------------------------------------------------------------\n> \n> \n> \n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Mon, 12 Jan 1998 11:35:55 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [BUGS] General Bug Report: palloc fails with lots of ANDs and ORs" } ]
[ { "msg_contents": "\n\nHi...\n\n\tWell, since this has all sort of died off, and since I'd like to\nget some resolution on it.\n\n\tDoes anyone here *understand* the LGPL? If we put the ODBC\ndrivers *under* src/interfaces, does that risk contaminating the rest of\nthe code *in any way*? Anyone here done a reasonably thorough study of\nthe LGPL and can comment on it?\n\n\n\n", "msg_date": "Mon, 12 Jan 1998 13:12:59 -0500 (EST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "ODBC & LGPL license..." }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\n\nOn 12-Jan-98 The Hermit Hacker wrote:\n> Does anyone here *understand* the LGPL? If we put the ODBC\n>drivers *under* src/interfaces, does that risk contaminating the rest of\n>the code *in any way*? Anyone here done a reasonably thorough study of\n>the LGPL and can comment on it?\n\nMy understanding from Stallman's statements on the matter are: Distribution of\nGPL'd source with non-GPL'd source is fine, as long as it is simple to figure\nout which is which. By definition, GPL'd sources can be distributed freely.\nFor binaries which fall under the GPL, again, mixing them with other stuff is\nOK, as long as GPL'd stuff is identified as such. Sources must be available,\nof course.\n\nLGPL is completely different. LGPL is what you use when you link your\nnon-GPL'd sources against a library built with GPL'd sources. In that case,\nyou are legal IFF you stuff can be re-linked against a different, non-GPL'd\nlibrary without recompilation. Actually, there's a bit of confusion on my\npart about how much recompilation is permitted.\n\nCompanies like DG/Sequent/Sun/etc wouldn't be able to include FSF software on\nthe distributions if the above were not the case.\n\nObCaveat: I'm not a lawyer. I don't look like a lawyer, I don't smell like a\nlawyer, and I don't lie like a lawyer.\n\n\n=====================================================================\n| \"If you're all through trying to burn the field down, will you |\n| kindly get up and tell me why you're sitting in a fruit field, |\n| stark naked, frying peaches?\" |\n=====================================================================\n| Finger [email protected] for my public key. |\n=====================================================================\n-----BEGIN PGP SIGNATURE-----\nVersion: 2.6.2\n\niQBVAwUBNLp4SYdzVnzma+gdAQHRmAIArMU8KwW6eoplN/hiQ79Sev4TdeAEVcBp\nejh/Px3zYZH6xJh75uXRLnelyXZeij5+UUNs4wwE3GIUQ9d02rBbQw==\n=uGid\n-----END PGP SIGNATURE-----\n", "msg_date": "Mon, 12 Jan 1998 14:57:57 -0500 (EST)", "msg_from": "\"Brian E. Gallew\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] ODBC & LGPL license..." }, { "msg_contents": ">>>>> \"scrappy\" == The Hermit Hacker <[email protected]> writes:\n\n > Hi...\n\n > \tWell, since this has all sort of died off, and since I'd like\n > to get some resolution on it.\n\n > \tDoes anyone here *understand* the LGPL? If we put the ODBC\n > drivers *under* src/interfaces, does that risk contaminating the\n > rest of the code *in any way*? Anyone here done a reasonably\n > thorough study of the LGPL and can comment on it?\n\nWhy not put the LGPL libraries in a separate area from the rest of the\ncode (src/lgpl?). 
This would make the libraries covered by the\naggregation clause (part of section 2 says -- In addition, mere\naggregation of another work not based on the Library with the Library\n(or with a work based on the Library) on a volume of a storage or\ndistribution medium does not bring the other work under the scope of\nthis License.). I think this clearly states that it would not risk\ncontaminating any of the other code.\n\nI would consider sending mail to GNU ([email protected]) to get any\nadditional clarification needed.\n\n\n\nKent S. Gordon\nArchitect\niNetSpace Co.\nvoice: (972)851-3494 fax:(972)702-0384 e-mail:[email protected]\n\n\n\n\n", "msg_date": "Tue, 13 Jan 1998 18:02:31 -0600 (CST)", "msg_from": "\"Kent S. Gordon\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ODBC & LGPL license..." } ]
[ { "msg_contents": "Hi,\n\nI can confirm that I see the same problem here.\n\nMaybe todo with the recent varchar() storage changes.\n\nKeith.\n\n\n\"Thomas G. Lockhart\" <[email protected]>\n\n> I've been seeing trouble with varchar() columns for a little while, and\n> since it is still there with a fresh install of the development tree\n> it's time to report it:\n> \n> postgres=> create table t (v varchar(80),i int);\n> CREATE\n> postgres=> insert into t values ('hi',1);\n> INSERT 18122 1\n> postgres=> select * from t;\n> v |i\n> --+-\n> hi|0\n> (1 row)\n> \n> As you can see, the varchar() column apparently trashes the subsequent\n> column. I ran across it trying to verify the tutorial examples for the\n> documentation. If the varchar() is the last column in the table, the\n> problem does not crop up, at least in the simplest case:\n> \n> postgres=> create table t2 (i int, v varchar(80));\n> CREATE\n> postgres=> insert into t2 values (2,'hi');\n> INSERT 18133 1\n> postgres=> select * from t2;\n> i|v\n> -+--\n> 2|hi\n> (1 row)\n> \n> Also, I believe that if the varchar() field is substantially shorter the\n> problem does not manifest itself:\n> \n> postgres=> create table t4 (v varchar(4), i int);\n> CREATE\n> postgres=> insert into t4 values ('hi',4);\n> INSERT 18156 1\n> postgres=> select * from t4;\n> v |i\n> --+-\n> hi|4\n> (1 row)\n> \n> but varchar(10) still shows the problem:\n> \n> postgres=> create table t3 (v varchar(10), i int);\n> CREATE\n> postgres=> insert into t3 values ('hi',3);\n> INSERT 18145 1\n> postgres=> select * from t3;\n> v |i\n> --+-\n> hi|0\n> (1 row)\n> \n> This is from the development source as of around 1998-01-12 14:30 GMT\n> \n> -\n> Tom\n> \n> \n\n", "msg_date": "Mon, 12 Jan 1998 18:24:00 +0000 (GMT)", "msg_from": "Keith Parks <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] varchar() troubles" } ]
[ { "msg_contents": "unsubscribe\n", "msg_date": "Mon, 12 Jan 1998 10:33:10 -0800", "msg_from": "Kevin Witten <[email protected]>", "msg_from_op": true, "msg_subject": "(no subject)" } ]
[ { "msg_contents": "\nAbout commercial databases lacking access control \nfunctionality to a degree \"similar\" to PG: hmm, I dunno\nabout that. Seems like kind of a slur on the commercial\nproducts :-)\n \nThe two Big Names (Oracle and Sybase) have pretty well- \ndeveloped access control. I can't speak to Oracle in detail, \nbut I've used Sybase for years and it has an internal AC \nmechanism at least as functional as the *nix file system. It \noffers access control over databases as well as tables, and \ngood granularity (many modes and degrees of access/privilege, \nuser vs group, etc). Maybe an Oracle person can tell us how \nthat engine does access control, but I was under the \nimpression that it uses internal auth tables, like Sybase. \n\nI have never liked tying access to external OS usernames. \nMaybe it's just a personal idiosyncrasy :-) I'd jump for joy \nif PG access control became OS-independent, with usernames, \ngroups, and passcodes maintained internally to the engine... \njust my $0.02. \n\nde\n\nPS I'd also jump for joy if SQL statements could access\ntables from more than one DB. I can dream, can't I?\n\n.............................................................................\n:De Clarke, Software Engineer UCO/Lick Observatory, UCSC:\n:Mail: [email protected] | \"There is no problem in computer science that cannot: \n:Web: www.ucolick.org | be solved by another level of indirection\" --J.O. :\n\n\n\n", "msg_date": "Mon, 12 Jan 1998 19:07:36 -0800 (PST)", "msg_from": "De Clarke <[email protected]>", "msg_from_op": true, "msg_subject": "access control" }, { "msg_contents": "On Mon, 12 Jan 1998, De Clarke wrote:\n\n> I have never liked tying access to external OS usernames. \n> Maybe it's just a personal idiosyncrasy :-) I'd jump for joy \n> if PG access control became OS-independent, with usernames, \n> groups, and passcodes maintained internally to the engine... \n> just my $0.02. \n\n\tUmmmm...this was added in for v6.3 for over a month now :)\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 12 Jan 1998 23:20:30 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] access control" }, { "msg_contents": "\n>> I have never liked tying access to external OS usernames. \n>> Maybe it's just a personal idiosyncrasy :-) I'd jump for \n>> joy...\n\n> Ummmm...this was added in for v6.3 for over a month now :)\n\nblush, shuffle, stammer!\n\nsorry, I've been buried in other projects and have been\nout of touch. I have 6.2.1 running at work and home, but\nhaven't caught up with the 6.3 features list yet! \nvisualize me jumping for joy :-)\n\nde\n\n.............................................................................\n:De Clarke, Software Engineer UCO/Lick Observatory, UCSC:\n:Mail: [email protected] | \"There is no problem in computer science that cannot: \n:Web: www.ucolick.org | be solved by another level of indirection\" --J.O. :\n\n\n\n", "msg_date": "Tue, 13 Jan 1998 10:55:48 -0800 (PST)", "msg_from": "De Clarke <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] access control " } ]
[ { "msg_contents": "*sigh* we have the same problem and i used the libpq c interface and i\nbased it from pg_dump.c (and bruce mentioned psql.c). as with the Gtk+,\nis this the tcl/tk library? i'm not sure but someone made PgAccess which\nmay be similar to what you're doing. in any case, i'm forwarding this to\nthe more knowledgable people.\n\n[---]\nNeil D. Quiogue <[email protected]>\nIPhil Communications Network, Inc.\nOther: [email protected]\n\n---------- Forwarded message ----------\nDate: Mon, 12 Jan 1998 15:01:52 +0100 (CET)\nFrom: Zio Budda <[email protected]>\nTo: [email protected]\nSubject: for the development team\n\nHi, sorry for the offtopic, but i don't know the right e-mail address, so\nplz forward this mail to the development team.\n\n-begin\nHi, i'm Michel and i'm working on a interface like Access95 for Postgresql\nunder Linux with Gtk+ libraries. I want to know where and how is\nrecord/save the type, the name of every table of selected database and\nfileds names/attributes of the tables of selected database too.\nPlz respond to this... tnx\n \n\nMorelli 'ZioBudda' Davide Michel - Member of Pluto Linux User Group\[email protected] - http://www.dau.ing.univaq.it/~ziobudda/\nLinux Problem? Ask to [email protected]\n\"E che la scimmia sia con te in ogni posto e in ogni momento.\" (ZioBudda)\n\n", "msg_date": "Tue, 13 Jan 1998 12:38:56 +0800 (HKT)", "msg_from": "\"neil d. quiogue\" <[email protected]>", "msg_from_op": true, "msg_subject": "for the development team (fwd)" } ]
[ { "msg_contents": "Forwarded message:\n> > The problem was that some things were copied using VARSIZE rather than\n> > subtracting out VARHDRSZ first (actually, I think it might have use\n> > sizeof(int) and other dangers too). I patched that near the end of the year\n> > and my 980101.d tree and 980106.d tree do not exhibit the symptom:\n> > \n> > postgres=> create table t (v varchar(80),i int);\n> > CREATE\n> > postgres=> insert into t values ('hi', 1);\n> > INSERT 643562 1\n> > postgres=> select * from t;\n> > v |i\n> > --+-\n> > hi|1\n> > (1 row)\n> \n> I have found that ExecEvalVar() uses a descriptor that has the attr\n> length set to the maximum, instead of -1. The ExecTypeFromTL() comment\n> says:\n> \n> /* ----------------------------------------------------------------\n> * ExecTypeFromTL\n> *\n> * Currently there are about 4 different places where we create\n> * TupleDescriptors. They should all be merged, or perhaps\n> * be rewritten to call BuildDesc().\n> *\n> \n> Clearly stating that the tuple descriptors in the system are created in\n> several places. Some places have the length set wrong. I am going to\n> have to take a look at all those places, and make sure they have\n> consistent behaviour.\n\nVadim, can you look at this for me. If you set a break at ExecEvalVar\nbefore executing the SELECT, you will see its\ntupledescriptor->attrs[0].attlen is the max length, and not -1 as it\nshould be.\n\nI can't figure out where that is getting set. Can you also check the\nother tupledescriptor initializations to see they have the -1 for\nvarchar too. I am stumped.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Tue, 13 Jan 1998 00:42:08 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: varchar() troubles (fwd)" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> >\n> > I have found that ExecEvalVar() uses a descriptor that has the attr\n> > length set to the maximum, instead of -1. The ExecTypeFromTL() comment\n...\n> \n> Vadim, can you look at this for me. If you set a break at ExecEvalVar\n> before executing the SELECT, you will see its\n> tupledescriptor->attrs[0].attlen is the max length, and not -1 as it\n> should be.\n> \n> I can't figure out where that is getting set. Can you also check the\n> other tupledescriptor initializations to see they have the -1 for\n> varchar too. I am stumped.\n\nWhy attlen should be -1 ?\nattlen in pg_attribute for v in table t is 84, why run-time attlen\nshould be -1 ? How else maxlen constraint could be checked ?\nIMHO, you have to change heap_getattr() to check is atttype == VARCHAROID\nand use vl_len if yes. Also, other places where attlen is used must be \nchanged too - e.g. ExecEvalVar():\n\n {\n len = tuple_type->attrs[attnum - 1]->attlen;\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n byval = tuple_type->attrs[attnum - 1]->attbyval ? true : false;\n }\n\n execConstByVal = byval;\n execConstLen = len;\n ^^^^^^^^^^^^^^^^^^ - used in nodeHash.c\n\nVadim\n", "msg_date": "Wed, 14 Jan 1998 14:09:35 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: varchar() troubles (fwd)" }, { "msg_contents": "> \n> Bruce Momjian wrote:\n> > \n> > >\n> > > I have found that ExecEvalVar() uses a descriptor that has the attr\n> > > length set to the maximum, instead of -1. The ExecTypeFromTL() comment\n> ...\n> > \n> > Vadim, can you look at this for me. 
If you set a break at ExecEvalVar\n> > before executing the SELECT, you will see its\n> > tupledescriptor->attrs[0].attlen is the max length, and not -1 as it\n> > should be.\n> > \n> > I can't figure out where that is getting set. Can you also check the\n> > other tupledescriptor initializations to see they have the -1 for\n> > varchar too. I am stumped.\n> \n> Why attlen should be -1 ?\n> attlen in pg_attribute for v in table t is 84, why run-time attlen\n> should be -1 ? How else maxlen constraint could be checked ?\n> IMHO, you have to change heap_getattr() to check is atttype == VARCHAROID\n> and use vl_len if yes. Also, other places where attlen is used must be \n> changed too - e.g. ExecEvalVar():\n> \n> {\n> len = tuple_type->attrs[attnum - 1]->attlen;\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> byval = tuple_type->attrs[attnum - 1]->attbyval ? true : false;\n> }\n> \n> execConstByVal = byval;\n> execConstLen = len;\n> ^^^^^^^^^^^^^^^^^^ - used in nodeHash.c\n> \n\nThe major problem is that TupleDesc comes from several places, and\nattlen means several things.\n\nThere are some cases where TupleDesc (int numatt, Attrs[]) is created\non-the-fly (tupdesc.c), and the attlen is the length of the type. In\nother cases, we get attlen from opening the relation, heap_open(), and\nin these cases it is the length as defined for the particular attribute.\n\nCertainly a bad situation. I am not sure about a fix.\n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Thu, 15 Jan 1998 18:41:08 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: varchar() troubles (fwd)" } ]
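A small sketch may help picture the idea Vadim outlines: when a descriptor's attlen carries the declared maximum (84 for a varchar(80)) rather than -1, the executor cannot use it as the width of a fetched value and must read the varlena header instead. This is illustrative only, not the code that was eventually committed; the helper name is invented and a backend compilation context is assumed.

```c
#include "postgres.h"          /* Datum, DatumGetPointer, VARSIZE */
#include "catalog/pg_type.h"   /* VARCHAROID */

/*
 * Illustrative sketch of the discussed fix: for varchar (or any attlen of
 * -1) the usable size comes from the value's own header, never from attlen.
 */
static int
attribute_value_size(Oid atttypid, int attlen, Datum value)
{
    if (atttypid == VARCHAROID || attlen == -1)
        return VARSIZE(DatumGetPointer(value));  /* header included  */
    return attlen;                               /* fixed-width type */
}
```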
[ { "msg_contents": "Since PostgreSQL doesn't have column level permissions, I tried to do\nsomething with views like this.\n\nCREATE TABLE account (\n uid int, # Unique UID for account\n login char8, # User login - must also be unique\n cdate date, # Creation date\n a_active bool, # true or false\n gedit bool, # edit privs for group\n bid int, # reference to billing group table\n password text, # Encrypted password\n gcos text, # Public information\n home text, # home directory\n shell char8); # which shell\nCREATE UNIQUE INDEX account_uid ON account (uid);\nCREATE UNIQUE INDEX account_login ON account (login char8_ops);\nREVOKE ALL ON account FROM PUBLIC;\n\nCREATE VIEW passwd AS SELECT uid, login, bid, gcos, home, shell\n FROM account WHERE a_active = 't';\n \nREVOKE ALL ON passwd FROM PUBLIC;\nGRANT SELECT ON passwd TO PUBLIC;\n\nUnfortunately this doesn't work. The VIEW inherits the permissions\nfrom the table it is a view of. It seems to me that allowing a view\nto define permissions separately from its parent would be a useful\nthing. So, does anyone know if this behaviour is allowed by the\nSQL spec and if it is allowed, would this be difficult to do?\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Tue, 13 Jan 1998 10:44:32 -0500 (EST)", "msg_from": "[email protected] (D'Arcy J.M. Cain)", "msg_from_op": true, "msg_subject": "Priviliges on tables and views" }, { "msg_contents": "D'Arcy J.M. Cain wrote:\n> \n> REVOKE ALL ON account FROM PUBLIC;\n> \n> CREATE VIEW passwd AS SELECT uid, login, bid, gcos, home, shell\n> FROM account WHERE a_active = 't';\n> \n> REVOKE ALL ON passwd FROM PUBLIC;\n> GRANT SELECT ON passwd TO PUBLIC;\n> \n> Unfortunately this doesn't work. The VIEW inherits the permissions\n> from the table it is a view of. It seems to me that allowing a view\n> to define permissions separately from its parent would be a useful\n> thing. So, does anyone know if this behaviour is allowed by the\n> SQL spec and if it is allowed, would this be difficult to do?\n\nThis is allowed by SQL and this is very useful thing. Not easy to implement:\nviews are handled by RULES - after parsing and before planning, - but\npermissions are checked by executor (execMain.c:InitPlan()->ExecCheckPerms()).\n\nVadim\n", "msg_date": "Wed, 14 Jan 1998 10:19:50 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Priviliges on tables and views" }, { "msg_contents": "Thus spake Vadim B. Mikheev\n> > CREATE VIEW passwd AS SELECT uid, login, bid, gcos, home, shell\n> > FROM account WHERE a_active = 't';\n> > \n> > REVOKE ALL ON passwd FROM PUBLIC;\n> > GRANT SELECT ON passwd TO PUBLIC;\n> > \n> > Unfortunately this doesn't work. The VIEW inherits the permissions\n> > from the table it is a view of. It seems to me that allowing a view\n> > to define permissions separately from its parent would be a useful\n> > thing. So, does anyone know if this behaviour is allowed by the\n> > SQL spec and if it is allowed, would this be difficult to do?\n> \n> This is allowed by SQL and this is very useful thing. Not easy to implement:\n> views are handled by RULES - after parsing and before planning, - but\n> permissions are checked by executor (execMain.c:InitPlan()->ExecCheckPerms()).\n\nOh well. Is it worth putting on the TODO list at least? 
Maybe someone\nwill get to it eventually.\n\nIn the meantime, how close are we to being able to update views? I can\ndo what I want that way - just make two tables with public perms on\none but not the other and make a view for the combined table instead\nof for a subset of a table.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Tue, 13 Jan 1998 22:49:13 -0500 (EST)", "msg_from": "[email protected] (D'Arcy J.M. Cain)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Priviliges on tables and views" }, { "msg_contents": "> \n> Thus spake Vadim B. Mikheev\n> > > CREATE VIEW passwd AS SELECT uid, login, bid, gcos, home, shell\n> > > FROM account WHERE a_active = 't';\n> > > \n> > > REVOKE ALL ON passwd FROM PUBLIC;\n> > > GRANT SELECT ON passwd TO PUBLIC;\n> > > \n> > > Unfortunately this doesn't work. The VIEW inherits the permissions\n> > > from the table it is a view of. It seems to me that allowing a view\n> > > to define permissions separately from its parent would be a useful\n> > > thing. So, does anyone know if this behaviour is allowed by the\n> > > SQL spec and if it is allowed, would this be difficult to do?\n> > \n> > This is allowed by SQL and this is very useful thing. Not easy to implement:\n> > views are handled by RULES - after parsing and before planning, - but\n> > permissions are checked by executor (execMain.c:InitPlan()->ExecCheckPerms()).\n> \n> Oh well. Is it worth putting on the TODO list at least? Maybe someone\n> will get to it eventually.\n> \n> In the meantime, how close are we to being able to update views? I can\n> do what I want that way - just make two tables with public perms on\n> one but not the other and make a view for the combined table instead\n> of for a subset of a table.\n\nCertainly is a good item for the TODO list. Added:\n\n* Allow VIEW permissions to be set separately from the underlying tables\n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Wed, 14 Jan 1998 09:48:33 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Priviliges on tables and views" } ]
[ { "msg_contents": "Well, I've rebuilt postgres again, using a fresh cvs snapshot, and grant\nis still broken.\n\nInfact, it gets worse. Once you try to use the grant statement, then any\nother connection that was active at that time - even on different\ndatabases, will die with the same message, hinting it may be something in\nshared memory that's breaking.\n\nHeres the output of psql, first as user postgres:\n\n[postgres@maidast pgsql]$ psql -h localhost db1\nWelcome to the POSTGRESQL interactive sql monitor:\n Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n\n type \\? for help on slash commands\n type \\q to quit\n type \\g or terminate with semicolon to execute query\n You are currently connected to the database: db1\n\ndb1=> create table a (id int4,url text);\nCREATE\ndb1=> insert into a values (1,'http://www.demon.co.uk/finder');\nINSERT 18250 1\ndb1=> grant all on a to public;\nPQexec() -- Request was sent to backend, but backend closed the channel\nbefore responding.\n This probably means the backend terminated abnormally before or\nwhile processing the request.\n\nAnd as user pmount:\n\n[pmount@maidast pmount]$ psql -h localhost\nWelcome to the POSTGRESQL interactive sql monitor:\n Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n\n type \\? for help on slash commands\n type \\q to quit\n type \\g or terminate with semicolon to execute query\n You are currently connected to the database: pmount\n\npmount=> create table a (id int4,url text);\nCREATE\npmount=> insert into a values (1,'http://www.demon.co.uk/finder');\nINSERT 18314 1\npmount=> select * from a;\nid|url \n--+-----------------------------\n 1|http://www.demon.co.uk/finder\n(1 row)\n\npmount=> select * from a;\nPQexec() -- Request was sent to backend, but backend closed the channel\nbefore responding.\n This probably means the backend terminated abnormally before or\nwhile processing the request.\n\nNB: the select done as pmount was after the postgres user tried to grant\nrights, while the other statements were before.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Tue, 13 Jan 1998 19:53:44 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": true, "msg_subject": "grant still broken" }, { "msg_contents": "> Well, I've rebuilt postgres again, using a fresh cvs snapshot, and grant\n> is still broken.\n\nI see the same behavior here, so it's not just you :)\n\n - Tom\n\n", "msg_date": "Wed, 14 Jan 1998 02:46:27 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] grant still broken" }, { "msg_contents": "> \n> Well, I've rebuilt postgres again, using a fresh cvs snapshot, and grant\n> is still broken.\n> \n> Infact, it gets worse. Once you try to use the grant statement, then any\n> other connection that was active at that time - even on different\n> databases, will die with the same message, hinting it may be something in\n> shared memory that's breaking.\n> \n> Heres the output of psql, first as user postgres:\n> \n> [postgres@maidast pgsql]$ psql -h localhost db1\n> Welcome to the POSTGRESQL interactive sql monitor:\n> Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n> \n> type \\? 
for help on slash commands\n> type \\q to quit\n> type \\g or terminate with semicolon to execute query\n> You are currently connected to the database: db1\n> \n> db1=> create table a (id int4,url text);\n> CREATE\n> db1=> insert into a values (1,'http://www.demon.co.uk/finder');\n> INSERT 18250 1\n> db1=> grant all on a to public;\n> PQexec() -- Request was sent to backend, but backend closed the channel\n> before responding.\n> This probably means the backend terminated abnormally before or\n> while processing the request.\n\nGee, still works fine here under bsd/os 3.0.\n\n\t#$ sql test\n\tWelcome to the POSTGRESQL interactive sql monitor:\n\t Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n\t\n\t type \\? for help on slash commands\n\t type \\q to quit\n\t type \\g or terminate with semicolon to execute query\n\t You are currently connected to the database: test\n\t\n\ttest=> create table a (id int4,url text);\n\tCREATE\n\ttest=> insert into a values (1,'http://www.demon.co.uk/finder');\n\tINSERT 143530 1\n\ttest=> grant all on a to public;\n\tCHANGE\n\ttest=> \n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Tue, 13 Jan 1998 22:18:23 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] grant still broken" }, { "msg_contents": "On Tue, 13 Jan 1998, Bruce Momjian wrote:\n\n> Gee, still works fine here under bsd/os 3.0.\n> \n> \t#$ sql test\n> \tWelcome to the POSTGRESQL interactive sql monitor:\n> \t Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n> \t\n> \t type \\? for help on slash commands\n> \t type \\q to quit\n> \t type \\g or terminate with semicolon to execute query\n> \t You are currently connected to the database: test\n> \t\n> \ttest=> create table a (id int4,url text);\n> \tCREATE\n> \ttest=> insert into a values (1,'http://www.demon.co.uk/finder');\n> \tINSERT 143530 1\n> \ttest=> grant all on a to public;\n> \tCHANGE\n> \ttest=> \n\nFreeBSD 3.0-CURRENT as of yesterday:\n\nscrappy=> create table a (id int4,url text);\nCREATE\nscrappy=> insert into a values (1,'http://www.demon.co.uk/finder');\nINSERT 143210 1\nscrappy=> grant all on a to public;\nCHANGE\nscrappy=> \\z\n\nDatabase = scrappy\n +------------------+----------------------------------------------------+\n | Relation | Grant/Revoke Permissions |\n +------------------+----------------------------------------------------+\n | a | {\"=arwR\"} |\n | testtable | |\n +------------------+----------------------------------------------------+\nscrappy=>\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 13 Jan 1998 23:54:26 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] grant still broken" }, { "msg_contents": "On Wed, 14 Jan 1998, Thomas G. Lockhart wrote:\n\n> > Well, I've rebuilt postgres again, using a fresh cvs snapshot, and grant\n> > is still broken.\n> \n> I see the same behavior here, so it's not just you :)\n\nTom, what platform are you running it on? 
It seems the others have no\nproblems with BSD.\n\nI'm using Linux.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Wed, 14 Jan 1998 06:58:37 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] grant still broken" }, { "msg_contents": "\n\nOn Wed, 14 Jan 1998, Peter T Mount wrote:\n\n> On Wed, 14 Jan 1998, Thomas G. Lockhart wrote:\n> \n> > > Well, I've rebuilt postgres again, using a fresh cvs snapshot, and grant\n> > > is still broken.\n> > \n> > I see the same behavior here, so it's not just you :)\n> \n> Tom, what platform are you running it on? It seems the others have no\n> problems with BSD.\n> \n> I'm using Linux.\n> \n\nI'm running Linux and I'm getting the current cvs tree now. I will try to \nreproduce the problem on my system and track it down.\n\nI am still running 6.2.0. Anything I need to watch out for when running\nthe latest? \n\n\n\n-James\n\n\n", "msg_date": "Wed, 14 Jan 1998 07:25:09 -0500 (EST)", "msg_from": "James Hughes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] grant still broken" }, { "msg_contents": "> > > Well, I've rebuilt postgres again, using a fresh cvs snapshot, and grant\n> > > is still broken.\n> >\n> > I see the same behavior here, so it's not just you :)\n>\n> Tom, what platform are you running it on? It seems the others have no\n> problems with BSD.\n>\n> I'm using Linux.\n\nMe too. Shouldn't be platform-specific, but maybe it will be clear once the\nsolution is found. There was a reported problem with \"group by\" sorting which I\ncould reproduce but which other platforms could not; it went away after a month\nor so. Maybe we'll get lucky on this one too :/\n\n - Tom\n\n", "msg_date": "Wed, 14 Jan 1998 15:10:58 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] grant still broken" }, { "msg_contents": "> > > > Well, I've rebuilt postgres again, using a fresh cvs snapshot, and grant\n> > > > is still broken.\n> > >\n> > > I see the same behavior here, so it's not just you :)\n> >\n> > Tom, what platform are you running it on? It seems the others have no\n> > problems with BSD.\n> >\n> > I'm using Linux.\n>\n> I'm running Linux and I'm getting the current cvs tree now. I will try to\n> reproduce the problem on my system and track it down.\n>\n> I am still running 6.2.0. Anything I need to watch out for when running\n> the latest?\n\nJust watch out for better performance :)\n\nOh, there is a problem with varchar() at the moment.\n\n", "msg_date": "Wed, 14 Jan 1998 15:13:37 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] grant still broken" }, { "msg_contents": "\n\nOn Wed, 14 Jan 1998, Thomas G. Lockhart wrote:\n\n> > > > > Well, I've rebuilt postgres again, using a fresh cvs snapshot, and grant\n> > > > > is still broken.\n> > > >\n> > > > I see the same behavior here, so it's not just you :)\n> > >\n> > > Tom, what platform are you running it on? It seems the others have no\n> > > problems with BSD.\n> > >\n> > > I'm using Linux.\n> >\n> > I'm running Linux and I'm getting the current cvs tree now. I will try to\n> > reproduce the problem on my system and track it down.\n> >\n> > I am still running 6.2.0. 
Anything I need to watch out for when running\n> > the latest?\n> \n> Just watch out for better performance :)\n> \n> Oh, there is a problem with varchar() at the moment.\n> \n\nI've (quickly) compiled and installed this mornings cvs and am getting some\nodd behavior. I have a death in the family, so I'm not going to be able to\nsort it all out before this weekend. \n\nI get segfaults and authentication problems when trying to run perl scripts.\nAlso, I'm not sure I have a handle on the new acl stuff.\n\n\n`Anyways', hopefully I can get everything worked out and compile a debug\nversion to poke through with gdb this weekend.\n\n\n-James\n\n", "msg_date": "Wed, 14 Jan 1998 10:34:47 -0500 (EST)", "msg_from": "James Hughes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] grant still broken" } ]
[ { "msg_contents": "Hi All,\n\nI wonder if it might be useful to maintain a \"bugs\" regression test, where\nwe could include various tests that people have supplied to highlight bugs\nin the system and that have subsequently been fixed?\n\nI'm thinking along the lines of the current varchar() bug that can be\neasily demonstrated in a couple of lines of sql.\n\nIdeally the varchar() bug test could be included in the varchar tests\nbut it may be easier to collect such misc things in a seperate file\nthat could be regularly updated as we get examples.\n\nThe people testing and fixing the problems could supply the additions\nto the bugs.sql and bugs.out files as we find/fix them?\n\nJust a thought?\n\nKeith.\n\n", "msg_date": "Tue, 13 Jan 1998 23:36:18 +0000 (GMT)", "msg_from": "Keith Parks <[email protected]>", "msg_from_op": true, "msg_subject": "how about a \"bugs\" regression test?" } ]
[ { "msg_contents": "Sorry for the response delay. I was out of town.\n\nI don't believe that pg_user needs to be readable by users in general. They\ndon't really need to know who else has access to the DB, and they certainly \ndon't need to know what access they do have (e.g. usesuper and createuser).\n\nAs for the suggestion that the passwords don't need to be in the cache, this is\nincorrect. For the system (as I have designed it) to work, the postmaster must\ncheck at each login to see if the user has a password. Using another relation\nalong with a select to look up the password from pg_user is not as efficient,\nand it is not possible from the postmaster. In order for this to work, each\ntime that pg_user or pg_password (if we use a 2nd relation) is modified, a join\nmust be performed between the two (essentially perform a select on a view that\nperforms the join) before the data can be copied to the pg_pwd file for the\npostmaster to use. I don't even know if the copy command will work with a view.\nFor these reasons I still believe that pg_user should just remain non-accessible\nto the general public.\n\nTodd A. Brandys\n", "msg_date": "Tue, 13 Jan 1998 23:09:38 -0600", "msg_from": "todd brandys <[email protected]>", "msg_from_op": true, "msg_subject": "Re: New pg_pwd patch and stuff" }, { "msg_contents": "> \n> Sorry for the response delay. I was out of town.\n> \n> I don't believe that pg_user needs to be readable by users in general. They\n> don't really need to know who else has access to the DB, and they certainly \n> don't need to know what access they do have (e.g. usesuper and createuser).\n> \n> As for the suggestion that the passwords don't need to be in the cache, this is\n> incorrect. For the system (as I have designed it) to work, the postmaster must\n> check at each login to see if the user has a password. Using another relation\n> along with a select to look up the password from pg_user is not as efficient,\n> and it is not possible from the postmaster. In order for this to work, each\n> time that pg_user or pg_password (if we use a 2nd relation) is modified, a join\n> must be performed between the two (essentially perform a select on a view that\n> performs the join) before the data can be copied to the pg_pwd file for the\n> postmaster to use. I don't even know if the copy command will work with a view.\n> For these reasons I still believe that pg_user should just remain non-accessible\n> to the general public.\n> \n> Todd A. Brandys\n> \n\nCan't we create a function to get the info:\n\ncreate function get_passwd returns text as\n\t'select passwd from pg_password'\n\tlanguage 'sql';\n\nAnd this will return a null for password not found, and a valid password\nfor others. I don't think a view will work. I think you would have to\ndo a SELECT ... INTO and do a COPY from that temp table. Sounds like\nsome work.\n\nNow this is done ONLY when a password changed is made, or a user is\ndeleted or added. Is that correct? Doesn't sound like too much of a\nhit to me. Now if it was done for every connection, we would have big\ntroubles.\n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Wed, 14 Jan 1998 10:03:34 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New pg_pwd patch and stuff" }, { "msg_contents": "On Wed, 14 Jan 1998, Bruce Momjian wrote:\n\n> > Sorry for the response delay. I was out of town.\n> > \n> > I don't believe that pg_user needs to be readable by users in general. 
They\n> > don't really need to know who else has access to the DB, and they certainly \n> > don't need to know what access they do have (e.g. usesuper and createuser).\n> > \n> > As for the suggestion that the passwords don't need to be in the cache, this is\n> > incorrect. For the system (as I have designed it) to work, the postmaster must\n> > check at each login to see if the user has a password. Using another relation\n> > along with a select to look up the password from pg_user is not as efficient,\n> > and it is not possible from the postmaster. In order for this to work, each\n> > time that pg_user or pg_password (if we use a 2nd relation) is modified, a join\n> > must be performed between the two (essentially perform a select on a view that\n> > performs the join) before the data can be copied to the pg_pwd file for the\n> > postmaster to use. I don't even know if the copy command will work with a view.\n> > For these reasons I still believe that pg_user should just remain non-accessible\n> > to the general public.\n> > \n> > Todd A. Brandys\n> > \n> \n> Can't we create a function to get the info:\n> \n> create function get_passwd returns text as\n> \t'select passwd from pg_password'\n> \tlanguage 'sql';\n> \n> And this will return a null for password not found, and a valid password\n> for others. I don't think a view will work. I think you would have to\n> do a SELECT ... INTO and do a COPY from that temp table. Sounds like\n> some work.\n> \n> Now this is done ONLY when a password changed is made, or a user is\n> deleted or added. Is that correct? Doesn't sound like too much of a\n> hit to me. Now if it was done for every connection, we would have big\n> troubles.\n\n\tJust curious here, but why couldn't this be done in postgres vs\npostmaster? Essentially, the way I'm seeing things right now, we'd added\nprocessing to the \"startup loop\" inside of the postmaster...while\npostmaster is authenticating one user, the next person trying to connect\nwould have to wait that little bit longer...\n\n\tFork off the postgres process first, then authenticate inside of\nthere...which would get rid of the problem with pg_user itself being a\ntext file vs a relation...no?\n\n\n", "msg_date": "Wed, 14 Jan 1998 11:25:08 -0500 (EST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New pg_pwd patch and stuff" } ]
[ { "msg_contents": "\nI would like to suggest the following augmentation to the PostgreSQL DBMS.\nThis augmentation is to add a pg_privileges table for each database instance.\nSuch a table should be responsible for maintaining the SELECT, UPDATE, INSERT,\nand DELETE permissions on all database objects. Furthermore, it should maintain\nother privileges such as the CREATE DATABASE, CREATE USER, DESTROY USER, \nCREATE TABLE, and the list goes on. One other benefit this would bring would be\nto allow the setting of privileges on table columns. This would alleviate\nthe question of creating a separte relation for holding passwords rather than\nkeeping this info in pg_user (Simply make the password field non-selectable by\npublic).\n\nI don't know that I can volunteer to perform all the changes this would involve,\nbut I would be very willing to help, as this would greatly improve the security\nof PostgreSQL.\n\nIf anyone has any comments or concerns about such a project, let me know. Suuch a\nsystem should be crafted with care. I would like to reach a consensus among the\nhacker community before I begin to make any mods to bring this about.\n\nI see the changes taking place in the following order:\n\n1) Code the creation of pg_privileges.\n2) Make sure the initial permissions of database instance object are in the\n pg_privileges relation upon database creation.\n3) Rewrite the GRANT and REVOKE statements to update pg_privileges, and (this\n must be done at the same time) supplant the old privileges system. This\n would give us table privileges as they are now.\n4-Infinity) Begin adding new privileges such as CREATE USER, CREATE DATABASE,\n CREATE TABLE, DESTROY TABLE, etc to the system.\n\nThis is a very coarse view of how to accomplish this task. Also, I left out\ncolumn privileges. This should probably be listed at (3.5) above.\n\nLet me know what you think (If you send a reply to the pgsql-hackers email\naccount, please be certain to cc me also). I will pull all the comments\ntogether and start to create a requirements document for pg_privileges.\n\nTodd A. Brandys\[email protected]\n", "msg_date": "Tue, 13 Jan 1998 23:40:57 -0600", "msg_from": "todd brandys <[email protected]>", "msg_from_op": true, "msg_subject": "Suggest a pg_privileges table" }, { "msg_contents": "On Tue, 13 Jan 1998, todd brandys wrote:\n\n> \n> I would like to suggest the following augmentation to the PostgreSQL DBMS.\n> This augmentation is to add a pg_privileges table for each database instance.\n> Such a table should be responsible for maintaining the SELECT, UPDATE, INSERT,\n> and DELETE permissions on all database objects. Furthermore, it should maintain\n> other privileges such as the CREATE DATABASE, CREATE USER, DESTROY USER, \n> CREATE TABLE, and the list goes on. One other benefit this would bring would be\n> to allow the setting of privileges on table columns. This would alleviate\n> the question of creating a separte relation for holding passwords rather than\n> keeping this info in pg_user (Simply make the password field non-selectable by\n> public).\n\nThis could be useful for implementing the getColumnPrivileges() and\ngetTablePrivileges() methods in the JDBC driver.\n\n> If anyone has any comments or concerns about such a project, let me know. Suuch a\n> system should be crafted with care. 
I would like to reach a consensus among the\n> hacker community before I begin to make any mods to bring this about.\n> \n> I see the changes taking place in the following order:\n> \n> 1) Code the creation of pg_privileges.\n> 2) Make sure the initial permissions of database instance object are in the\n> pg_privileges relation upon database creation.\n> 3) Rewrite the GRANT and REVOKE statements to update pg_privileges, and (this\n> must be done at the same time) supplant the old privileges system. This\n> would give us table privileges as they are now.\n> 4-Infinity) Begin adding new privileges such as CREATE USER, CREATE DATABASE,\n> CREATE TABLE, DESTROY TABLE, etc to the system.\n> \n> This is a very coarse view of how to accomplish this task. Also, I left out\n> column privileges. This should probably be listed at (3.5) above.\n> \n> Let me know what you think (If you send a reply to the pgsql-hackers email\n> account, please be certain to cc me also). I will pull all the comments\n> together and start to create a requirements document for pg_privileges.\n\nHereis whats needed for JDBC:\n\n Each privilige description has the following columns:\n \n 1. TABLE_CAT String => table catalog (may be null) \n 2. TABLE_SCHEM String => table schema (may be null)\n 3. TABLE_NAME String => table name\n 4. COLUMN_NAME String => column name \n 5. GRANTOR => grantor of access (may be null)\n 6. GRANTEE String => grantee of access \n 7. PRIVILEGE String => name of access (SELECT, INSERT, UPDATE,\n REFRENCES, ...)\n 8. IS_GRANTABLE String => \"YES\" if grantee is permitted to grant\n to others; \"NO\" if not; null if unknown \n\nNow, the first two we return null for, and only getColumnPrivileges()\nreturns COLUMN_NAME\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Wed, 14 Jan 1998 07:07:58 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Suggest a pg_privileges table" } ]
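As a point of reference, the requirements listed in these two messages boil down to something like the following relation. None of these names exist today; this is only a sketch of what a pg_privileges catalog might contain, covering both whole-object and per-column rights:

    create table pg_privileges (
        privobject   name,     -- table, view, sequence, database, ...
        privcolumn   name,     -- empty for privileges on the whole object
        grantor      name,
        grantee      name,     -- user or group
        privname     name,     -- SELECT, INSERT, UPDATE, DELETE, CREATE USER, ...
        grantable    bool      -- may the grantee pass the right on?
    );

With rows at this granularity, GRANT and REVOKE become inserts and deletes against one relation, and a non-selectable passwd column in pg_user is just one more row.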
[ { "msg_contents": "On Tue, 13 Jan 1998, todd brandys wrote:\n\n> On Sun, 11 Jan 1998, Bruce Momjian wrote:\n> \n> > OK, general question. Does pg_user need to be readable? Do\n> > non-postgres users want to see who owns each table? I don't know.\n> \n> I'd say yes, as we have stuff in JDBC yet to implement that will access\n> this table.\n> \n> ----------------------------------\n> \n> What is it that you need to implement in JDBC for which a general user needs\n> to be able to see tables that other users own? If this is some type of admin,\n> 'stuff' that the postgres user will execute, then he/she will be able to run it\n> no problem.\n\nIt's a call that's part of the JDBC specification. so it can be called by\nuser code, or admin code.\n\nHere's what I have on it.\n\nInterface java.sql.DatabaseMetaData\n\n public abstract ResultSet getTablePrivileges(String catalog,\n String schemaPattern,\n String tableNamePattern)\n\t\t\t\t\tthrows SQLException\n \n \n Get a description of the access rights for each table available\n in a catalog. Note that a table privilege applies to one or \n more columns in the table. It would be wrong to assume that \n this priviledge applies to all columns (this may be true for\n some systems but is not true for all.)\n \n Only privileges matching the schema and table name criteria are\n returned. They are ordered by TABLE_SCHEM, TABLE_NAME, and\n PRIVILEGE.\n \n Each privilige description has the following columns:\n \n 1. TABLE_CAT String => table catalog (may be null) \n 2. TABLE_SCHEM String => table schema (may be null)\n 3. TABLE_NAME String => table name \n 4. GRANTOR => grantor of access (may be null) \n 5. GRANTEE String => grantee of access \n 6. PRIVILEGE String => name of access (SELECT, INSERT, UPDATE,\n REFRENCES, ...) \n 7. IS_GRANTABLE String => \"YES\" if grantee is permitted to grant\n to others; \"NO\" if not; null if unknown \n \n \n Parameters: \n catalog - a catalog name; \"\" retrieves those without a \n catalog; null means drop catalog name from the selection\n criteria \n schemaPattern - a schema name pattern; \"\" retrieves those\n without a schema \n tableNamePattern - a table name pattern \n \n Returns:\n ResultSet - each row is a table privilege description\n \n Throws: SQLException \n if a database-access error occurs. \n\n See Also: \n getSearchStringEscape \n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Wed, 14 Jan 1998 06:52:45 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: New pg_pwd patch and stuff" } ]
[ { "msg_contents": "On Wed, 14 Jan 1998, Ralf Mattes wrote:\n\n> Yes, i agree. Mike's implementation is the way to go in a traditional\n> realtional db (BTW, a question about oids: lurking on this list \n> i overheard the idea of dropping oids. This would break a lot\n> of my code! What's the last word on this ?).\n\n\tThe last we discussed in pgsql-hackers was that OIDs would not be\ndropped... \n\n> I think what's going on is a shift away from the OR-model. A\n> lot of the OO features come from Postgres non-relational past.\n> I don't see a lot of development emphasis on the OO side :-(\n> mostly people work on getting PostgreSQL more ANSI-sql conforming.\n> This by itself is very nice, but i think it would be a good\n> idea for the developers to make a public statement saying\n> how they envision PostgreSQLs future (i.e. will it still support\n> the OO features or not).\n\n\tNobody is actively *removing* OO features...or at least not that\nI'm aware of...we (the developers) are addressing problems as ppl are\nbringing them up, and adding in those features required to bring in\nANSI-SQL compliancy...\n\n\tThat said, does ANSI-SQL compliancy mean that the OR model no\nlonger has a place? Quite frankly, I haven't got a clue...since I've\nnever used it. What I would recommend is anyone *actually* using it\nshould pop onto the pgsql-hackers mailing list, where this sort of stuff\nis discussed, so that if someone does bring up removing a feature because\nthere doesn't appear to be any apparent use for it, they can pop up and\nrecommend against it.\n\n\tI guess as far as that is concerned, is there anything that should\nbe added to the current regression tests that *test* whether we break some\npart of the OR model?\n\n\n\n", "msg_date": "Wed, 14 Jan 1998 08:17:50 -0500 (EST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [QUESTIONS] Arrays (inserting and removing)" }, { "msg_contents": "> > I think what's going on is a shift away from the OR-model. A\n> > lot of the OO features come from Postgres non-relational past.\n> > I don't see a lot of development emphasis on the OO side :-(\n> > mostly people work on getting PostgreSQL more ANSI-sql conforming.\n> > This by itself is very nice, but i think it would be a good\n> > idea for the developers to make a public statement saying\n> > how they envision PostgreSQLs future (i.e. will it still support\n> > the OO features or not).\n\nOO and OR are not the same. Postgres is definitely OR, and definitely not OO\n(yet).\n\n> Nobody is actively *removing* OO features...or at least not that\n> I'm aware of...\n\nNobody's even doing it secretly :)\n\n> we (the developers) are addressing problems as ppl are\n> bringing them up, and adding in those features required to bring in\n> ANSI-SQL compliancy...\n\nYes, at this stage of Postgres' career, the main emphasis *from a user's\npoint of view* is on moving the parser and the backend capabilities toward\nSQL92/3 compliance and on fixing broken features. A year ago, the main\nemphasis was on getting the backend to stop crashing. I would guess that the\nnext 6 months/year will continue to see work on SQL compliance and backend\nperformance, but after that the project will move on to getting more OO in\nthe OR features. Just a guess though...\n\nHowever, we have been pretty clear in the hackers discussions that we will\nnot sacrifice _any_ of the OR/OO features of Postgres solely to obtain\ncompliance with bad/ugly/stupid features of the standard. 
Also, we consider\nany feature which damages OR as being bad and ugly and stupid...\n\n> That said, does ANSI-SQL compliancy mean that the OR model no\n> longer has a place?\n\nNaw, SQL3 has some OR features (many demonstrated by Postgres 5 years\nearlier). It's the wave of the future...\n\n> Quite frankly, I haven't got a clue...since I've\n> never used it. What I would recommend is anyone *actually* using it\n> should pop onto the pgsql-hackers mailing list, where this sort of stuff\n> is discussed, so that if someone does bring up removing a feature because\n> there doesn't appear to be any apparent use for it, they can pop up and\n> recommend against it.\n\nThe only case of a deprecated feature in the last year of development was the\nremoval of \"time travel\" which was done not for standards compliance but for\nperformance reasons (and only after extensive discussions on the questions\nlist where it was not generally felt to be an important feature).\n\nThe only other discussions I can recall of \"feature removal\" involve the\npossibility of removing some of the \"specialty character types\" like char16\nsince one can achieve an identical or better result with other existing\nstandard or quasi-standard types like varchar() and text(). There may be\nother cases of \"type consolidation\" in the future as we work to avoid \"type\nbloat\".\n\n> I guess as far as that is concerned, is there anything that should\n> be added to the current regression tests that *test* whether we break some\n> part of the OR model?\n\nAlready there. Don't sweat it...\n\nThere has been reluctance on the part of the current developers to write a\n\"future directions\" statement because, as volunteers, we really can't\n_guarantee_ the behavior of future developers. Also, future development\nrequires that someone actually do the work, and if someone has stated that\nthey are going to work on a feature but then must drop from the project for\npersonal reasons (yes, sometimes there is more to life than Postgres) there\nis not some new developer guaranteed to take his place.\n\nHowever, imho it is appropriate to write a future directions statement which,\nfor example, applies to the next few releases, or which is reevaluated from\ntime to time.\n\n - Tom\n\n", "msg_date": "Wed, 14 Jan 1998 14:44:18 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [QUESTIONS] Arrays (inserting and removing)" }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> On Wed, 14 Jan 1998, Ralf Mattes wrote:\n> \n> > Yes, i agree. Mike's implementation is the way to go in a traditional\n> > realtional db (BTW, a question about oids: lurking on this list\n> > i overheard the idea of dropping oids. This would break a lot\n> > of my code! What's the last word on this ?).\n> \n> The last we discussed in pgsql-hackers was that OIDs would not be\n> dropped...\n\n..but would be optional.\n\nVadim\n", "msg_date": "Thu, 15 Jan 1998 09:27:21 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [QUESTIONS] Arrays (inserting and removing)" }, { "msg_contents": "On Thu, Jan 15, 1998 at 09:27:21AM +0700, Vadim B. Mikheev wrote:\n> The Hermit Hacker wrote:\n> > \n> > On Wed, 14 Jan 1998, Ralf Mattes wrote:\n> > \n> > > Yes, i agree. Mike's implementation is the way to go in a traditional\n> > > realtional db (BTW, a question about oids: lurking on this list\n> > > i overheard the idea of dropping oids. This would break a lot\n> > > of my code! 
What's the last word on this ?).\n> > \n> > The last we discussed in pgsql-hackers was that OIDs would not be\n> > dropped...\n> \n> ..but would be optional.\n> \n> Vadim\n> \n\nOIDs are a bastardization of the relational model. If you have to keep\nthem, then do so, but their use should be SEVERELY discouraged.\n\n--\n-- \nKarl Denninger ([email protected])| MCSNet - Serving Chicagoland and Wisconsin\nhttp://www.mcs.net/ | T1's from $600 monthly to FULL DS-3 Service\n\t\t\t | NEW! K56Flex support on ALL modems\nVoice: [+1 312 803-MCS1 x219]| EXCLUSIVE NEW FEATURE ON ALL PERSONAL ACCOUNTS\nFax: [+1 312 803-4929] | *SPAMBLOCK* Technology now included at no cost\n", "msg_date": "Thu, 15 Jan 1998 09:34:14 -0600", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [QUESTIONS] Arrays (inserting and removing)" }, { "msg_contents": " \n> OIDs are a bastardization of the relational model. If you have to keep\n> them, then do so, but their use should be SEVERELY discouraged.\n\n\tActually, I use them quite extensively...I have several WWW-based\nsearch directories that are searched with:\n\nselect oid,<fields> from <table> where <search conditions>;\n\n\tThat display minimal data to the browser, and then if someone\nwants more detailed information, I just do:\n\nselect * from <table> where oid = '';\n\n\tIts also great if you mess up the original coding for a table and\nwant to remove 1 of many duplicates that you've accidently let pass\nthrough :(\n\n\n\n", "msg_date": "Thu, 15 Jan 1998 10:40:43 -0500 (EST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [QUESTIONS] Arrays (inserting and removing)" }, { "msg_contents": "Thus spake The Hermit Hacker\n> > OIDs are a bastardization of the relational model. If you have to keep\n> > them, then do so, but their use should be SEVERELY discouraged.\n\nI agree although I do use them in some library routines (Tcl and Python)\nthat I use in a get/update cycle. The way it is set up though, the calling\nroutine is never aware of the use of OIDs. It's almost like part of the\ndatabase engine as far as the routine is concerned.\n\nexample:\n\nfrom pg import *\nfrom dbgen import *\ndb = connect('table')\nuser = db_get(db, 'user', 100)\nuser['name'] = 'Joe'\ndb_update(db, 'user', user)\n\nThe db_get puts the oid into the Python dictionary and the db_update uses\nthat to update the same record. The caller never sees it.\n\nNote: Of course I realize that a simple SQL statement will do this. It's\na contrived example.\n\n> \tActually, I use them quite extensively...I have several WWW-based\n> search directories that are searched with:\n> \n> select oid,<fields> from <table> where <search conditions>;\n> \n> \tThat display minimal data to the browser, and then if someone\n> wants more detailed information, I just do:\n> \n> select * from <table> where oid = '';\n\nBut really there should be a proper key on this database. I think that\nthat's what Karl was getting at. If you need a unique ID number then\nyou should really create one and make it a unique index on the table.\n\n> \tIts also great if you mess up the original coding for a table and\n> want to remove 1 of many duplicates that you've accidently let pass\n> through :(\n\nWith unique keys this shouldn't really be a problem. In fact, the entire\nrecord can be a complex key if necessary (I have done this on small\ntables) so it should always be possible. 
If you can still get dups\nwith the entire record keyed then just add an extra, non-keyed field\nwhich holds the count.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Thu, 15 Jan 1998 13:33:40 -0500 (EST)", "msg_from": "[email protected] (D'Arcy J.M. Cain)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] Arrays (inserting and removing)" }, { "msg_contents": "On Thu, 15 Jan 1998, D'Arcy J.M. Cain wrote:\n\n> > \tActually, I use them quite extensively...I have several WWW-based\n> > search directories that are searched with:\n> > \n> > select oid,<fields> from <table> where <search conditions>;\n> > \n> > \tThat display minimal data to the browser, and then if someone\n> > wants more detailed information, I just do:\n> > \n> > select * from <table> where oid = '';\n> \n> But really there should be a proper key on this database. I think that\n> that's what Karl was getting at. If you need a unique ID number then\n> you should really create one and make it a unique index on the table.\n> \n\n\tThis stuff was all built \"pre unique index\"...and we all know what\nits like to go in and \"Rebuild\" something that is already working :(\n\n\n", "msg_date": "Thu, 15 Jan 1998 13:42:51 -0500 (EST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] Arrays (inserting and removing)" }, { "msg_contents": "On Thu, Jan 15, 1998 at 10:58:22PM +0000, Ralf Mattes wrote:\n> On Thu, 15 Jan 1998, Karl Denninger wrote:\n> \n> > > > The last we discussed in pgsql-hackers was that OIDs would not be\n> > > > dropped...\n> > > \n> > > ..but would be optional.\n> > > Vadim\n> \n> Phew, safed some code... :-)\n> \n> > OIDs are a bastardization of the relational model. If you have to keep\n> > them, then do so, but their use should be SEVERELY discouraged.\n> \n> Yes, shure, but Postgres (and many com. systems) isn't afull im-\n> plementation of the relational model. And sometimes i's very handy\n> to be able ti identify a specific record/tuple (i use them in front\n> end user interfaces. The interface stores the oid of the currently\n> displayed record--if the user changes/deletes the record it's easy\n> to do an update/delete. Even so it's possible to store the unique\n> index key this is much more elaborate to implement and is a pain\n> when the table definitions aren't hardcoded in the frontend \n> application). I don't see why oids per se violate the relational\n> model (and of course when some of my dbs started there was nothing\n> like 'unique key' in postgres and in some theunique key stretches\n> over several fields...(\n> \n> Ralf\n\nUnique indices over multiple fields are both legal and work, and do what you\nwould expect.\n\nI understand why people like OIDs - \"row numbers\" are useful to lots of\nfolks. That doesn't change the fact that they are a throwback and I can't\nfind much of a good reason to use them in a relational world.\n\nI've done a *lot* of DBMS coding over the last 15 years, with a boatload of\nit on custom database packages that didn't do relational anything :-) \n\nFrankly, the faster and further I can get away from the concept of a \nrow ID, the better I feel.\n\n--\n-- \nKarl Denninger ([email protected])| MCSNet - Serving Chicagoland and Wisconsin\nhttp://www.mcs.net/ | T1's from $600 monthly to FULL DS-3 Service\n\t\t\t | NEW! 
K56Flex support on ALL modems\nVoice: [+1 312 803-MCS1 x219]| EXCLUSIVE NEW FEATURE ON ALL PERSONAL ACCOUNTS\nFax: [+1 312 803-4929] | *SPAMBLOCK* Technology now included at no cost\n", "msg_date": "Thu, 15 Jan 1998 14:32:20 -0600", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [QUESTIONS] Arrays (inserting and removing)" }, { "msg_contents": "Thus spake Karl Denninger\n> Frankly, the faster and further I can get away from the concept of a \n> row ID, the better I feel.\n\nRow IDs are for spreadsheets, not databases. :-)\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Thu, 15 Jan 1998 17:15:36 -0500 (EST)", "msg_from": "[email protected] (D'Arcy J.M. Cain)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] Arrays (inserting and removing)" }, { "msg_contents": "On Thu, 15 Jan 1998, Karl Denninger wrote:\n\n> > > The last we discussed in pgsql-hackers was that OIDs would not be\n> > > dropped...\n> > \n> > ..but would be optional.\n> > Vadim\n\nPhew, safed some code... :-)\n\n> OIDs are a bastardization of the relational model. If you have to keep\n> them, then do so, but their use should be SEVERELY discouraged.\n\nYes, shure, but Postgres (and many com. systems) isn't afull im-\nplementation of the relational model. And sometimes i's very handy\nto be able ti identify a specific record/tuple (i use them in front\nend user interfaces. The interface stores the oid of the currently\ndisplayed record--if the user changes/deletes the record it's easy\nto do an update/delete. Even so it's possible to store the unique\nindex key this is much more elaborate to implement and is a pain\nwhen the table definitions aren't hardcoded in the frontend \napplication). I don't see why oids per se violate the relational\nmodel (and of course when some of my dbs started there was nothing\nlike 'unique key' in postgres and in some theunique key stretches\nover several fields...(\n\nRalf\n \n\n===========================================================================\nRalf Mattes\nJoh.-Seb.- Bach Str. 13\nD-79104 Freiburg i. Br.\nemail: [email protected] / [email protected]\n===========================================================================\n\n", "msg_date": "Thu, 15 Jan 1998 22:58:22 +0000 ( )", "msg_from": "Ralf Mattes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [QUESTIONS] Arrays (inserting and removing)" }, { "msg_contents": "Hi all! I'm back :)\n[and was bit by the varchar() bug so you can ignore earlier message..\nthat's what I get for not reading pg-hackers for 2 months... but still\nupdating from CVS *sigh*]\n\nOn Wed, 14 Jan 1998, Thomas G. Lockhart wrote:\n\n> > > I think what's going on is a shift away from the OR-model. A\n> > > lot of the OO features come from Postgres non-relational past.\n> > > I don't see a lot of development emphasis on the OO side :-(\n> > > mostly people work on getting PostgreSQL more ANSI-sql conforming.\n> > > This by itself is very nice, but i think it would be a good\n> > > idea for the developers to make a public statement saying\n> > > how they envision PostgreSQLs future (i.e. will it still support\n> > > the OO features or not).\n> \n> OO and OR are not the same. 
Postgres is definitely OR, and definitely not OO\n> (yet).\n\nSo what does it need / what should I read to see what the TODO for\nOO-compatibility?\n\nI've seen a lot of data to suggest that -Postgres- is the heart of most OO\ndatabases... or at least a very strong influence.\n[especially after reading the history of OO database research :]\n\n> > we (the developers) are addressing problems as ppl are\n> > bringing them up, and adding in those features required to bring in\n> > ANSI-SQL compliancy...\n> \n> Yes, at this stage of Postgres' career, the main emphasis *from a user's\n> point of view* is on moving the parser and the backend capabilities toward\n> SQL92/3 compliance and on fixing broken features. A year ago, the main\n> emphasis was on getting the backend to stop crashing. I would guess that the\n> next 6 months/year will continue to see work on SQL compliance and backend\n> performance, but after that the project will move on to getting more OO in\n> the OR features. Just a guess though...\n\nGood - enough time for me to start working .... :)\n\n> However, we have been pretty clear in the hackers discussions that we will\n> not sacrifice _any_ of the OR/OO features of Postgres solely to obtain\n> compliance with bad/ugly/stupid features of the standard. Also, we consider\n> any feature which damages OR as being bad and ugly and stupid...\n\n*grin* - good additude! :)\n\n> > That said, does ANSI-SQL compliancy mean that the OR model no\n> > longer has a place?\n> \n> Naw, SQL3 has some OR features (many demonstrated by Postgres 5 years\n> earlier). It's the wave of the future...\n\nYep - Postgres was the testcase for all of those OR features :)\n[read : postgres was used to design them]\nThe work was too leading-edge for the database people to bother looking...\n(but guess where \"Datablades\" came from :)\n\n> > Quite frankly, I haven't got a clue...since I've\n> > never used it. What I would recommend is anyone *actually* using it\n> > should pop onto the pgsql-hackers mailing list, where this sort of stuff\n> > is discussed, so that if someone does bring up removing a feature because\n> > there doesn't appear to be any apparent use for it, they can pop up and\n> > recommend against it.\n> \n> The only case of a deprecated feature in the last year of development was the\n> removal of \"time travel\" which was done not for standards compliance but for\n> performance reasons (and only after extensive discussions on the questions\n> list where it was not generally felt to be an important feature).\n\nTime travel has it's uses... but can be implemented using standard hooks\nif ever needed. Though it was considered of paramount importance to the\npeople who were working on postgres way back. (for data reliability and\nrecoverability).\n\nI don't miss it (I am implementing it using standard SQL :)\n\n> > I guess as far as that is concerned, is there anything that should\n> > be added to the current regression tests that *test* whether we break some\n> > part of the OR model?\n> \n> Already there. Don't sweat it...\n> \n> There has been reluctance on the part of the current developers to write a\n> \"future directions\" statement because, as volunteers, we really can't\n> _guarantee_ the behavior of future developers. 
Also, future development\n> requires that someone actually do the work, and if someone has stated that\n> they are going to work on a feature but then must drop from the project for\n> personal reasons (yes, sometimes there is more to life than Postgres) there\n> is not some new developer guaranteed to take his place.\n> \n> However, imho it is appropriate to write a future directions statement which,\n> for example, applies to the next few releases, or which is reevaluated from\n> time to time.\n\nheh. Life is why I never got to \"ALTER TABLE DROP <column\" yet... That\nand I'm still learning how postgres is put together in my copious free\ntime *grin*.... (is it still needed?)\n\nG'day, eh? :)\n\t- Teunis\n\n", "msg_date": "Fri, 16 Jan 1998 23:56:27 -0700 (MST)", "msg_from": "teunis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] Arrays (inserting and removing)" } ]
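For anyone weighing the two sides of this thread, here is the oid pattern described above next to the explicit-key alternative, as a minimal sketch; the listing table, its columns, and the sequence name are all invented for the example:

    -- oid-based two-step lookup, as used in the WWW directories above
    select oid, name, city from listing where name ~ 'Pizza';
    select * from listing where oid = 18122;

    -- the same pattern with a declared unique key instead of oid
    create sequence listing_seq;
    create table listing2 (
        id    int4 default nextval('listing_seq'),
        name  varchar(30),
        city  varchar(30)
    );
    create unique index listing2_key on listing2 using btree (id);

    select id, name, city from listing2 where name ~ 'Pizza';
    select * from listing2 where id = 42;

The second form gives a key the application controls and can reference from other tables, which is what the "proper key" side of the thread is arguing for.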
[ { "msg_contents": "> Bruce,\n> \n> I did some homework. Here is what I have. The default max data segment size on our (AIX 4.1.4) box is around 130000 kbytes.\n> \n> I put together a query which put me just past the threshold of the palloc \"out of memory error\". It is as follows:\n> \n> create table outlet (\n> number int,\n> name varchar(30),\n> ...\n> }\n> \n> create unique index outlet_key on outlet using btree (number);\n> \n> select count(*) from outlet\n> where\n> (number = 1 and number = 1 and number = 1) or\n> (number = 1 and number = 1 and number = 1) or\n> (number = 1 and number = 1 and number = 1) or\n> (number = 1 and number = 1 and number = 1) or\n> (number = 1 and number = 1 and number = 1) or\n> (number = 1 and number = 1 and number = 1) or\n> (number = 1 and number = 1 and number = 1) or\n> (number = 1 and number = 1 and number = 1) or\n> (number = 1 and number = 1 and number = 1);\n> \n> Not pretty but it makes the point. Take out two OR clauses and the query works fine (but a bit slow).\n> \n> The above query is all it takes to use up all 130000 Kbytes of memory. And, since the query takes a long time to finally fail, I was able to\n> observe the memory consumption.\n> \n> I extended the max data segment to 300000. And tried again. I could observer the memory consumption up to about 280000 when the system\n> suddenly got sick. I was getting all kinds of messages like \"cant fork\"; bad stuff. The system did finally recover on its own. I am not\n> sure happened there. I know that ulimit puts us right around the physical memory limits of out system.\n> \n> Using 300 meg for the above query seems like a bit of a problem. It is difficult to imagine where all that memory is being used. I will\n> research the problem further if you need more information.\n> \n\n\nWow, looks like a bug. Vadim, why would this happen? I got the same\npalloc failure message here, and there is NO data in the table.\n\nOriginal messages attached.\n\n\n---------------------------------------------------------------------------\n\n> Bruce Momjian wrote:\n> \n> > Try changing your OS default memory size. Unsure how to do this under\n> > AIX.\n> >\n> > >\n> > >\n> > > ============================================================================\n> > > POSTGRESQL BUG REPORT TEMPLATE\n> > > ============================================================================\n> > >\n> > >\n> > > Your name : David Hartwig\n> > > Your email address : [email protected]\n> > >\n> > > Category : runtime: back-end: SQL\n> > > Severity : serious\n> > >\n> > > Summary: palloc fails with lots of ANDs and ORs\n> > >\n> > > System Configuration\n> > > --------------------\n> > > Operating System : AIX 4.1\n> > >\n> > > PostgreSQL version : 6.2\n> > >\n> > > Compiler used : native CC\n> > >\n> > > Hardware:\n> > > ---------\n> > > RS 6000\n> > >\n> > > Versions of other tools:\n> > > ------------------------\n> > > NA\n> > >\n> > > --------------------------------------------------------------------------\n> > >\n> > > Problem Description:\n> > > --------------------\n> > > The follow is a mail message describing the problem on the PostODBC mailing list:\n> > >\n> > >\n> > > I have run across this also. We traced it down to a failure in the PostgreSQL server. This occurs under the following conditions.\n> > >\n> > > 1. MS Access\n> > > 2. Specify a multi-part key in the link time setup with postgresql\n> > > 3. Click on table view.\n> > >\n> > > What happens is MS Access takes the following steps. 
First it selects all possible key values for the table being viewed. I\n> > > suspect it maps the key values to the relative row position in the display. Then it uses the mapping to generate future queries based\n> > > on the mapping and the rows showing on the screen. The queries take the following form:\n> > >\n> > > SELECT keypart1, keypart2, keypart3, col4, col5, col6 ... FROM example_table\n> > > WHERE\n> > > (keypart1 = row1keypartval1 AND keypart2 = row1keypartval2 AND keypart3 = row1keypartval3) OR\n> > > (keypart1 = row2keypartval1 AND keypart2 = row2keypartval2 AND keypart3 = row2keypartval3) OR\n> > > .\n> > > . -- 28 lines of this stuff. Why 28... Why not 28\n> > > .\n> > > (keypart1 = row27keypartval1 AND keypart2 = row27keypartval2 AND keypart3 = row27keypartval3) OR\n> > > (keypart1 = row28keypartval1 AND keypart2 = row28keypartval2 AND keypart3 = row28keypartval3);\n> > >\n> > >\n> > > The PostgreSQL sever chokes on this statement claiming it is out of memory. (palloc) In this example I used a three part key. I\n> > > do not recall if a three part key is enough to trash the backend. It has been a while. I have tried sending these kinds of statements\n> > > directly through the psql monitor and get the same result.\n> > >\n> > >\n> > > --------------------------------------------------------------------------\n> > >\n> > > Test Case:\n> > > ----------\n> > > select c1, c1 c3, c4, c5 ... from example_table\n> > > where\n> > > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > > (c1 = something and c2 = something and c3 = something and c4 = something) 
or\n> > > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > > (c1 = something and c2 = something and c3 = something and c4 = something) or\n> > > (c1 = something and c2 = something and c3 = something and c4 = something);\n> > >\n> > >\n> > > --------------------------------------------------------------------------\n> > >\n> > > Solution:\n> > > ---------\n> > >\n> > >\n> > > --------------------------------------------------------------------------\n> > >\n> > >\n> > >\n> >\n> > --\n> > Bruce Momjian\n> > [email protected]\n> \n> \n> \n> --------------20C7AC27E8BCA117B23354BE\n> Content-Type: text/x-vcard; charset=us-ascii; name=\"vcard.vcf\"\n> Content-Transfer-Encoding: 7bit\n> Content-Description: Card for David Hartwig\n> Content-Disposition: attachment; filename=\"vcard.vcf\"\n> \n> begin: vcard\n> fn: David Hartwig\n> n: Hartwig;David\n> org: Insight Distribution Systems\n> adr: 222 Shilling Circle;;;Hunt Valley ;MD;21030;USA\n> email;internet: [email protected]\n> title: Manager Research & Development\n> tel;work: (410)403-2308\n> x-mozilla-cpt: ;0\n> x-mozilla-html: TRUE\n> version: 2.1\n> end: vcard\n> \n> \n> --------------20C7AC27E8BCA117B23354BE--\n> \n> \n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Wed, 14 Jan 1998 10:33:45 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [BUGS] General Bug Report: palloc fails with lots of ANDs and ORs" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > Bruce,\n> >\n> > I did some homework. Here is what I have. The default max data segment size on our (AIX 4.1.4) box is around 130000 kbytes.\n> >\n> > I put together a query which put me just past the threshold of the palloc \"out of memory error\". It is as follows:\n> >\n> > create table outlet (\n> > number int,\n> > name varchar(30),\n> > ...\n> > }\n> >\n> > create unique index outlet_key on outlet using btree (number);\n> >\n> > select count(*) from outlet\n> > where\n> > (number = 1 and number = 1 and number = 1) or\n> > (number = 1 and number = 1 and number = 1) or\n> > (number = 1 and number = 1 and number = 1) or\n> > (number = 1 and number = 1 and number = 1) or\n> > (number = 1 and number = 1 and number = 1) or\n> > (number = 1 and number = 1 and number = 1) or\n> > (number = 1 and number = 1 and number = 1) or\n> > (number = 1 and number = 1 and number = 1) or\n> > (number = 1 and number = 1 and number = 1);\n> >\n...\n> >\n> \n> Wow, looks like a bug. Vadim, why would this happen? I got the same\n> palloc failure message here, and there is NO data in the table.\n\nThis is bug in optimizer - try to EXPLAIN query...\nI have no time to fix it now - could return to this after Feb 1.\n\nVadim\n", "msg_date": "Thu, 15 Jan 1998 11:34:26 +0700", "msg_from": "\"Vadim B. 
Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [BUGS] General Bug Report: palloc fails with lots\n\tof ANDs and ORs" }, { "msg_contents": "> >\n> > select count(*) from outlet\n> > where\n> > (number = 1 and number = 1 and number = 1) or\n> > (number = 1 and number = 1 and number = 1) or\n> > (number = 1 and number = 1 and number = 1) or\n> > (number = 1 and number = 1 and number = 1) or\n> > (number = 1 and number = 1 and number = 1) or\n> > (number = 1 and number = 1 and number = 1) or\n> > (number = 1 and number = 1 and number = 1) or\n> > (number = 1 and number = 1 and number = 1) or\n> > (number = 1 and number = 1 and number = 1);\n> >\n> > Not pretty but it makes the point. Take out two OR clauses and the query \n> > works fine (but a bit slow).\n> >\n> > The above query is all it takes to use up all 130000 Kbytes of memory. \n> > And, since the query takes a long time to finally fail, I was able to\n> > observe the memory consumption.\n\nOptimizator tries to transform qual above into AND clause with\n3 (# of and-ed clauses) ^ 9 (# of OR-s) = 19683 args (each arg\nis OR clause with 9 op. expressions. My estimation for current\ncnfify() code is that this will require =~ 500Mb of memory :)\nI made little changes - just to free memory when it's possible:\n\n current code with free-ing\n\n6 ORs 14.3 Mb 4.3 Mb\n7 ORs 53 Mb 10.3 Mb\n8 ORs estimation: ~ 160 Mb 30.6 MB\n\nI'm not sure should I aplly my changes or not - it doesn't fix\nproblem, just reduces memory impact. It obviously can't help you,\nDavid, in your real example (3 ^ 28 = 22876792454961 clauses - he he :).\n\nResume: cnfify() makes mathematically strong but in some cases\npractically unwise work. I can't fix this for 6.3\n\nVadim\n", "msg_date": "Tue, 17 Feb 1998 14:57:17 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [BUGS] General Bug Report: palloc fails with lots\n\tof ANDs and ORs" }, { "msg_contents": "Added to TODO list.\n\n\n> \n> > >\n> > > select count(*) from outlet\n> > > where\n> > > (number = 1 and number = 1 and number = 1) or\n> > > (number = 1 and number = 1 and number = 1) or\n> > > (number = 1 and number = 1 and number = 1) or\n> > > (number = 1 and number = 1 and number = 1) or\n> > > (number = 1 and number = 1 and number = 1) or\n> > > (number = 1 and number = 1 and number = 1) or\n> > > (number = 1 and number = 1 and number = 1) or\n> > > (number = 1 and number = 1 and number = 1) or\n> > > (number = 1 and number = 1 and number = 1);\n> > >\n> > > Not pretty but it makes the point. Take out two OR clauses and the query \n> > > works fine (but a bit slow).\n> > >\n> > > The above query is all it takes to use up all 130000 Kbytes of memory. \n> > > And, since the query takes a long time to finally fail, I was able to\n> > > observe the memory consumption.\n> \n> Optimizator tries to transform qual above into AND clause with\n> 3 (# of and-ed clauses) ^ 9 (# of OR-s) = 19683 args (each arg\n> is OR clause with 9 op. expressions. My estimation for current\n> cnfify() code is that this will require =~ 500Mb of memory :)\n> I made little changes - just to free memory when it's possible:\n> \n> current code with free-ing\n> \n> 6 ORs 14.3 Mb 4.3 Mb\n> 7 ORs 53 Mb 10.3 Mb\n> 8 ORs estimation: ~ 160 Mb 30.6 MB\n> \n> I'm not sure should I aplly my changes or not - it doesn't fix\n> problem, just reduces memory impact. 
It obviously can't help you,\n> David, in your real example (3 ^ 28 = 22876792454961 clauses - he he :).\n> \n> Resume: cnfify() makes mathematically strong but in some cases\n> practically unwise work. I can't fix this for 6.3\n> \n> Vadim\n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 16 Mar 1998 00:09:01 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [BUGS] General Bug Report: palloc fails with lots\n\tof ANDs and ORs" } ]
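Vadim's advice to EXPLAIN the query is the easiest way to watch the blow-up on a small scale before it exhausts memory. A minimal sketch, reusing the outlet table from the report; the arithmetic is the point, not the data:

    create table outlet (number int4, name varchar(30));
    create unique index outlet_key on outlet using btree (number);

    -- three OR-ed groups of three ANDs: cnfify() expands this to 3^3 = 27 clauses
    explain select count(*) from outlet
     where (number = 1 and number = 1 and number = 1) or
           (number = 1 and number = 1 and number = 1) or
           (number = 1 and number = 1 and number = 1);

Each extra OR-ed group multiplies the clause count by three, so the nine-group query in the report already means 3^9 = 19683 clauses and roughly 130MB of palloc'd memory, and the 28-group statements that MS Access generates (3^28 clauses) are hopeless until cnfify() is taught to be less thorough.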
[ { "msg_contents": "On Wed, 14 Jan 1998, Bruce Momjian wrote:\n\n> > \n> > \n> > Check out:\n> > \n> > http://www.hub.org/stats/postgresql.current.html\n> > \n> > At the bottom of it is the referrer report, which attempts to show where\n> > ppl are coming from. Its not auto-generated, just something that I did\n> > manually this evening cause I wanted to see how popular mhonarc was...its\n> > just slightly popular *grin*\n> > \n> \n> Why so many hits from Japan and Italy?\n\n\tWe've always been particularly popular in Japan, I've\nfound...there are *alot* of \"Search Engine\" hits that seem to point to\nsites in Japan\n\n\n\n", "msg_date": "Wed, 14 Jan 1998 10:59:23 -0500 (EST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [CORE] Something...interesting..." } ]
[ { "msg_contents": "On Wed, 14 Jan 1998, Mattias Kregert wrote:\n\n> May I suggest a minor change?\n> \n> - \"Keep the directory tidy\"\n> \n> The \"data/base/*/\" directories looks awful, almost as bad as the\n> c:\\windows dir on my dos partition... ;-) All types of files are\n> just thrown in, without any structure. It would be nice if the\n> files were put into separate subdirs:\n> data/base/mydb/{systables,tables,indexes,sequences,tmp} etc.\n\n\tI like to be able to do an 'ls -lt' on the directory to watch\nvacuum's process, so don't really like this idea, except the idea of\nmoving the tmp files into a seperate subdirectory, as you are right, being\nable to \"move\" just the temp file creation to a seperate area would be\nnice\n\n", "msg_date": "Wed, 14 Jan 1998 12:01:15 -0500 (EST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Just a small thing for 6.3 ..." }, { "msg_contents": "May I suggest a minor change?\n\n- \"Keep the directory tidy\"\n\n The \"data/base/*/\" directories looks awful, almost as bad as the\n c:\\windows dir on my dos partition... ;-) All types of files are\n just thrown in, without any structure. It would be nice if the\n files were put into separate subdirs:\n data/base/mydb/{systables,tables,indexes,sequences,tmp} etc.\n\n Sometimes my temporary sort files grow too big, and I would love to\n be able to symlink tmp/ to another partition.\n\n\n/* m */\n", "msg_date": "Wed, 14 Jan 1998 18:02:01 +0100", "msg_from": "Mattias Kregert <[email protected]>", "msg_from_op": false, "msg_subject": "Just a small thing for 6.3 ..." }, { "msg_contents": "> > The \"data/base/*/\" directories looks awful, almost as bad as the\n> > c:\\windows dir on my dos partition... ;-) All types of files are\n> > just thrown in, without any structure. It would be nice if the\n> > files were put into separate subdirs:\n> > data/base/mydb/{systables,tables,indexes,sequences,tmp} etc.\n>\n> I like to be able to do an 'ls -lt' on the directory to watch\n> vacuum's process, so don't really like this idea, except the idea of\n> moving the tmp files into a seperate subdirectory, as you are right, being\n> able to \"move\" just the temp file creation to a seperate area would be\n> nice\n\nThe system would have to keep track of which kinds of things are temporary\nand which are permanent. This might be easier to do once we start\nimplementing the session- and transaction-only definitions in SQL92. Not\nthere for v6.3...\n\n - Tom\n\n", "msg_date": "Thu, 15 Jan 1998 02:06:05 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Just a small thing for 6.3 ..." }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> On Wed, 14 Jan 1998, Mattias Kregert wrote:\n> \n> > May I suggest a minor change?\n> >\n> > - \"Keep the directory tidy\"\n> >\n> > The \"data/base/*/\" directories looks awful, almost as bad as the\n> > c:\\windows dir on my dos partition... ;-) All types of files are\n> > just thrown in, without any structure. 
It would be nice if the\n> > files were put into separate subdirs:\n> > data/base/mydb/{systables,tables,indexes,sequences,tmp} etc.\n\nWe told about TABLESPACE concept...\n\n> I like to be able to do an 'ls -lt' on the directory to watch\n> vacuum's process, so don't really like this idea, except the idea of\n\nYou'll be able to do this - vacuum' lock file will be placed in .../mydb\nwhen vacuuming any table from any tablespace.\n\n> moving the tmp files into a seperate subdirectory, as you are right, being\n> able to \"move\" just the temp file creation to a seperate area would be\n> nice\n\nYou are able to define tmp-dir at compile time right now...\n\nVadim\n", "msg_date": "Thu, 15 Jan 1998 11:01:52 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Just a small thing for 6.3 ..." } ]
[ { "msg_contents": "\n\nJust added July, August and September of 1997 to the MHonarc\narchive...searchable index will be updated later tonight\n\nAlso, am working on automating the cvs2html script to produce an HTML\ndisplay of the CVS logs (http://www.postgresql.org/cvs-logs/pgsql.html)\n\n\n\n", "msg_date": "Wed, 14 Jan 1998 14:10:02 -0500 (EST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Just added..." } ]
[ { "msg_contents": "> \n> Hi, Bruce!\n> \n> vac=> \\d t\n> \n> Table = t\n> +----------------------------------+----------------------------------+-------+\n> | Field | Type | Length|\n> +----------------------------------+----------------------------------+-------+\n> | v | varchar() | 80 |\n> | i | int4 | 4 |\n> +----------------------------------+----------------------------------+-------+\n> vac=> explain select sum(2+i) from t where 1 > 0;\n> ERROR: replace_result_clause: Can not handle this tlist!\n> \n> \"where 1 > 0\" is also handled by Result node -> something still unfixed here\n> (in optimizer). Will you fix this ?\n> \n> Vadim\n> \n\nI have decided the whole qry_aggs is bad. It is bad becuase it makes\nmultiple copies of Aggreg, and both copies must be processed by any\nchanges by rewrite and optimizer. I am removing the field\nQuery->qry_aggs, and replacing it with a function that will called when\ncreating the Agg which spins through the Plan target list and returns a\nlinked list of Agg*. Much cleaner, and I can remove much of the special\nqry_aggs handling I added to get other Agg stuff to work.\n\nAlso, this relates to the SubLink change. I am now going to recommend\nnot having a separate Sublink pointer list field in Query, but adding a\nfunction that will return a list of valid Sublink entries in from\nqry->qual, or maybe even qry->targetlist.\n\n\n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Wed, 14 Jan 1998 14:58:17 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix for select sum(2+2)..." }, { "msg_contents": "Bruce Momjian wrote:\n> \n> I have decided the whole qry_aggs is bad. It is bad becuase it makes\n> multiple copies of Aggreg, and both copies must be processed by any\n> changes by rewrite and optimizer. I am removing the field\n> Query->qry_aggs, and replacing it with a function that will called when\n> creating the Agg which spins through the Plan target list and returns a\n> linked list of Agg*. Much cleaner, and I can remove much of the special\n> qry_aggs handling I added to get other Agg stuff to work.\n> \n> Also, this relates to the SubLink change. I am now going to recommend\n> not having a separate Sublink pointer list field in Query, but adding a\n> function that will return a list of valid Sublink entries in from\n> qry->qual, or maybe even qry->targetlist.\n\nAgreed. This will also simplify readfuncs.c\n\nVadim\n", "msg_date": "Thu, 15 Jan 1998 10:57:21 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix for select sum(2+2)..." } ]
[ { "msg_contents": "I have kept qry_numAgg, so the code can easily determine if it should go\nlooking for Aggreg entries. I will have a qry_numSubLink field, so you\ncan easily know if SubLinks exist in the Query.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Wed, 14 Jan 1998 14:59:51 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "qry_aggs removal" } ]
[ { "msg_contents": "\nSeems like the discussion on views and access control is\ndrifting in a direction that interests me. At the risk of\nonce again bringing up an issue that's already been solved\nand tabled, I offer the following wish item:\n\nContent-based access control (CBAC). In my experience,\nwhen these words are uttered, DBAs and MIS designers groan.\nI wish CBAC were never required. Unfortunately sometimes\nit is, and I wonder if the PG team is thinking about it.\n\nColumn protection is not CBAC, of course, though it sorta\nfeels like it. Column protection can be useful, but I've\nhad less need for it than true CBAC. I'd like to see\ncolumn AC in PG someday, but it's not very important to me\npersonally -- whereas I have immediate requirements for CBAC.\n\nIn true CBAC, the whole record is confidential. In table T, \nUser X \"owns\" some records and User Y \"owns\" some records, and \nthe two of them should not see each other's records. You can \naddress this problem with views (if your view mechanism allows \nRSE as well as FSE, and if views don't inherit AC from parent \ntables). But this gets you into a maintenance headache, where \nyou're creating new views every time a new user joins the \ncrowd. \n\nWhat I'd like, when I really think about it, is a rule \nmechanism for selects. Perhaps PG already has this, but my \nconceptual model is the Sybase trigger feature. On Update and \nDelete (but not select) in the older Sybase engines, the DB \ndesigner can interrupt the transaction and abort or alter it \naccording to rules coded in SQL. This was *very* useful, but \n(at least back then) it didn't work on select. \n\nI wish that all I had to do was code the records with the \nowner UID, and slap a \"select trigger\" or rule on the table \nthat said effectively: on a select, if record N has a user \ncode not matching the user code of the query connection, \nsuppress that record from the output stream. [We could get \nmore sophisticated than that, of course: if (RSE) and \n(group|user code) in (list)...] \n\nOf course the privileged user (database owner or postgres) \nwould have to be exempt. I'd like to see this suppression \nmechanism work for COUNT and all other stats. In fact, to each \nuser, the table should look like it contains only that user's \ndata. That would be truly cool. Like a view, but rule-based.\n\nI *could* write a canned UI that creates this kind of view \ndynamically as it starts up, but (imho) rules don't belong in \nthe app, they belong in the engine. I want my access control \nto be proof against C code using API lib, interactive sql \nsessions, and any other way users can query the db. \n\nSo, is this \"already in 6.3\"? Has anyone faced this problem\nand solved it by various other clever means? I haven't\nthought it through as clearly as I would like, so I'd be\ninterested to hear from those who have.\n\nde\n\n.............................................................................\n:De Clarke, Software Engineer UCO/Lick Observatory, UCSC:\n:Mail: [email protected] | \"There is no problem in computer science that cannot: \n:Web: www.ucolick.org | be solved by another level of indirection\" --J.O. :\n\n\n\n", "msg_date": "Wed, 14 Jan 1998 12:22:34 -0800 (PST)", "msg_from": "De Clarke <[email protected]>", "msg_from_op": true, "msg_subject": "content-based access control (Re: views, access control, etc)" }, { "msg_contents": "De Clarke wrote:\n> \n> Seems like the discussion on views and access control is\n> drifting in a direction that interests me. 
At the risk of\n> once again bringing up an issue that's already been solved\n> and tabled, I offer the following wish item:\n> \n> Content-based access control (CBAC). In my experience,\n> when these words are uttered, DBAs and MIS designers groan.\n> I wish CBAC were never required. Unfortunately sometimes\n> it is, and I wonder if the PG team is thinking about it.\n> \n> Column protection is not CBAC, of course, though it sorta\n> feels like it. Column protection can be useful, but I've\n> had less need for it than true CBAC. I'd like to see\n> column AC in PG someday, but it's not very important to me\n> personally -- whereas I have immediate requirements for CBAC.\n> \n> In true CBAC, the whole record is confidential. In table T,\n> User X \"owns\" some records and User Y \"owns\" some records, and\n> the two of them should not see each other's records. You can\n> address this problem with views (if your view mechanism allows\n> RSE as well as FSE, and if views don't inherit AC from parent\n> tables). But this gets you into a maintenance headache, where\n> you're creating new views every time a new user joins the\n> crowd.\n> \n> What I'd like, when I really think about it, is a rule\n> mechanism for selects. Perhaps PG already has this, but my\n> conceptual model is the Sybase trigger feature. On Update and\n> Delete (but not select) in the older Sybase engines, the DB\n> designer can interrupt the transaction and abort or alter it\n> according to rules coded in SQL. This was *very* useful, but\n> (at least back then) it didn't work on select.\n\nYou could use PG triggers on Updates, Deletes and Inserts (to insert\nowner user name) and try to use RULEs to rewrite SELECT statement.\n(I never played with RULEs...)\n\nVadim\n", "msg_date": "Thu, 15 Jan 1998 11:11:51 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] content-based access control (Re: views,\n access control,\n\tetc)" }, { "msg_contents": "\nVadim said:\n\n> You could use PG triggers on Updates, Deletes and Inserts (to insert\n> owner user name) and try to use RULEs to rewrite SELECT statement.\n> (I never played with RULEs...)\n\nPerhaps if it can be done easily with rules, and then the \nmethod can be published -- but I have to wonder about using \nfeatures that even the hardcore developers have never played \nwith :-) I guess I was hoping for some kind of specific\nsupport in the core.\n\nIf CBAC were supported in a straightforward, easy way in PG it \nwould be a major \"selling\" point imho. We could say \"and\nPostgreSQL supports content-based access control...\" \n\n\t\t----------------------------\n\nOn the related subject, I like oids. They are very useful. \nBut I *also* want table-specific autonumbering (monotonic \ninteger series in which gaps do indicate deleted records) and \nI'd really prefer that it be provided automatically as a table \ncreation option, rather than having to write the same darn \ntrigger N hundred times (as I have using the older \nSybase engines). \n\nIn fact, while I'm dreaming: why not have a set of *three*\ntable creation options? \n\t-recno \tcauses an autonumber field to be prepended \n\t\tto each record. \n\t-user\tcauses the user ID of the inserting process \n\t\tto be prepended to each record. \n\t-stamp\tcauses the timestamp at insert time to be \n\t\tprepended. 
\nThe fields could have fixed names (like oid), say \"recno\",\n\"user\", and \"stamp\" -- or as a luxury option the user could\nsupply their names as part of the option syntax:\n\t-user uid -stamp itime -recno seqno \n\nLike oid, these fields would *not appear* in result streams \nunless explicitly included in the FSE. 'select *' would not \nreveal them, but 'select oid,recno,user,stamp,*' would show \nall fields. (If these options are specified at table create \ntime, then all subsequent inserts to that table automatically \nfill the special fields, without the user or developer having \nto know or care.) \n\nOf the couple of hundred tables I've designed and deployed in \nvarious apps over the last 5 years, perhaps 10 or 20 have \n*not* needed these fields. I've cut-n-pasted the same darn \nboilerplate code a couple of hundred times to implement these \nfields on all the other tables. If I could specify a few \nsimple options on 'create table' instead, so much drudgery \nwould be eliminated! And users would not see the \"bookkeeping\" \nfields by default. Sounds heavenly to me. \n\nOnce you imagine that automatic entry options of this sort \nexists, then CBAC seems like a logical accompaniment, \navailable only on those tables where the UID auto-stamp has \nbeen selected at create time. It would be just one more \noption: CBAC: Y/N. If Y, then the uid field is checked \nagainst the owner of the select query and the filtering done \nas described in my last post to HACKERS.\n\nThis seems (to me anyway) like a killer value-added feature,\nsomething that would make PG so-o-o-o attractive for real,\npractical, bread-n-butter app development. How hard would \nit be? (I know, that's the most annoying question anyone \ncan ask a developer).\n\nAm I alone in thinking this would be incredibly cool and\nworth some effort? I would love to see PG be demonstrably\nBETTER than the commercial competition in several specific\nareas (then maybe I could get approval to use it for\nproduction work) and this looks like a good opportunity.\n\nde\n\n.............................................................................\n:De Clarke, Software Engineer UCO/Lick Observatory, UCSC:\n:Mail: [email protected] | \"There is no problem in computer science that cannot: \n:Web: www.ucolick.org | be solved by another level of indirection\" --J.O. :\n\n\n\n", "msg_date": "Thu, 15 Jan 1998 12:53:00 -0800 (PST)", "msg_from": "De Clarke <[email protected]>", "msg_from_op": true, "msg_subject": "CBAC (content based access control), OIDs, auto fields" }, { "msg_contents": "> Am I alone in thinking this would be incredibly cool and\n> worth some effort? I would love to see PG be demonstrably\n> BETTER than the commercial competition in several specific\n> areas (then maybe I could get approval to use it for\n> production work) and this looks like a good opportunity.\n> \n\nYes, these would all be nice, but we have to get the 'bread and butter'\nfeatures working perfectly first, like subselects.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Thu, 15 Jan 1998 16:03:19 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CBAC (content based access control), OIDs, auto fields" }, { "msg_contents": "On Thu, 15 Jan 1998, De Clarke wrote:\n\n> Am I alone in thinking this would be incredibly cool and\n> worth some effort? 
I would love to see PG be demonstrably\n> BETTER than the commercial competition in several specific\n> areas (then maybe I could get approval to use it for\n> production work) and this looks like a good opportunity.\n\n\tLook forward to seeing the patches? :)\n\n\tI think the concepts are cool (uid, recno, timestamp) for having\nthis directly in the tables, and the concept of being able to have record\nbased restrictions on a table doubly so. \n\n\tAn example of the usefulness of this is the radiusd accounting\npackage I've been working on. Be nice to be able to let the salesman have\ndirect select access to the login records for *their* clients, and then\nhave full access to the personal information records to actually make\nchanges...without them having access to another clients records...\n\n\n", "msg_date": "Thu, 15 Jan 1998 16:10:37 -0500 (EST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CBAC (content based access control), OIDs, auto fields" }, { "msg_contents": "Thus spake De Clarke\n> In fact, while I'm dreaming: why not have a set of *three*\n> table creation options? \n> \t-recno \tcauses an autonumber field to be prepended \n> \t\tto each record. \n> \t-user\tcauses the user ID of the inserting process \n> \t\tto be prepended to each record. \n> \t-stamp\tcauses the timestamp at insert time to be \n> \t\tprepended. \n> The fields could have fixed names (like oid), say \"recno\",\n> \"user\", and \"stamp\" -- or as a luxury option the user could\n> supply their names as part of the option syntax:\n> \t-user uid -stamp itime -recno seqno \n> \n> Like oid, these fields would *not appear* in result streams \n> unless explicitly included in the FSE. 'select *' would not \n> reveal them, but 'select oid,recno,user,stamp,*' would show \n> all fields. (If these options are specified at table create \n\nI like the general idea but I think they should either show up\nwhen '*' is selected or else automatically put them into each\ntable. If you don't then generic functions will have problems.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Thu, 15 Jan 1998 23:21:30 -0500 (EST)", "msg_from": "[email protected] (D'Arcy J.M. Cain)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CBAC (content based access control), OIDs, auto fields" } ]
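A minimal sketch of the boilerplate described in the thread above, using pieces that already exist (a sequence plus column defaults; a trigger remains the fallback for anything a default cannot supply). The names are invented, and whether CURRENT_USER and 'now' are accepted as column defaults on a given release is an assumption rather than a tested claim:

    create sequence orders_seqno;

    create table orders (
        seqno  int4     default nextval('orders_seqno'), -- the proposed recno column
        uid    text     default CURRENT_USER,            -- the proposed user column (assumption)
        itime  datetime default 'now',                    -- the proposed stamp column
        item   text,
        qty    int4
    );

    insert into orders (item, qty) values ('widget', 3);
    select seqno, uid, itime, item, qty from orders;

Unlike the proposal, these columns still show up in select * and still have to be spelled out in every table definition, which is exactly the drudgery the request is about.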
[ { "msg_contents": "\n\nJust finished adding in May and June's archives into MHonarc...only, what,\n6 months to go :)\n\n\n\n", "msg_date": "Wed, 14 Jan 1998 15:48:23 -0500 (EST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "May & June added..." } ]
[ { "msg_contents": "> > > > I came across the same grouping problem myself. In my installation it\n> > > > only happens with tables containing thousands of rows (for example 10000\n> > > > or more).\n\nI've been keeping track of this problem and was preparing to send a message to the\nlist saying that it had disappeared as of a few weeks ago. Even did some more test\ncases to confirm, and (unfortunately) tried one last case:\n\ntgl=> select c1, c2, count(*) from v group by c1, c2;\nc1 |c2 |count\n----+----+-----\nfoo1|foo2| 2\nfoo1|foo2| 2\nfoo1|foo2| 3\nfoo1|foo2| 4\nfoo1|foo2| 2\nfoo1|foo2|27151\n(6 rows)\n\nwhere other cases like:\n\ntgl=> select c1, count(*) from v group by c1;\nc1 |count\n----+-----\nfoo1|27164\n(1 row)\n\nseem to work. I get identical results for both char16 and for text fields in the\ntwo-column table, and the order of the \"group by\" does not matter.\n\n>From what others said earlier, the problem is not reproducible on all systems, but\nclearly shows up on at least two (perhaps both Linux?). Bruce, do you have some\nsuggestions on where to look to track this down? Where in the code does the sorting\nand ordering happen during the select?\n\n - Tom\n\n> OK, thanks for the cookbook (retained below) on how to demonstrate the problem.\n> The limit for triggering the problem seems to be system-dependent, but not\n> related to the postmaster -B option (I tried with both 256 and 64 with the same\n> results).\n>\n> This is a problem which is _not_ present in v6.1. I suspect it may be related to\n> changes in sorting for using \"psort\", but have nothing on which to base that\n> other than the v6.1 success.\n>\n> My results (following the cookbook):\n>\n> tgl=> create table test (field1 char16, field2 char8);\n> CREATE\n> tgl=> copy test from '/home/tgl/postgres/testagg.input';\n> COPY\n> tgl=> select count(*) from test;\n> count\n> -----\n> 6791\n> (1 row)\n>\n> tgl=> select field1, field2, count(*) from test group by field1, field2;\n> NOTICE:copyObject: don't know how to copy 720\n> NOTICE:copyObject: don't know how to copy 720\n> field1|field2|count\n> ------+------+-----\n> foo1 |foo2 | 6791\n> (1 row)\n>\n> tgl=> copy test from '/home/tgl/postgres/testagg.input';\n> COPY\n> tgl=> select count(*) from test;\n> count\n> -----\n> 13582\n> (1 row)\n>\n> tgl=> select field1, field2, count(*) from test group by field1, field2;\n> NOTICE:copyObject: don't know how to copy 720\n> NOTICE:copyObject: don't know how to copy 720\n> field1|field2|count\n> ------+------+-----\n> foo1 |foo2 | 2\n> foo1 |foo2 | 2\n> foo1 |foo2 | 3\n> foo1 |foo2 | 4\n> foo1 |foo2 | 2\n> foo1 |foo2 |13569\n> (6 rows)\n>\n> I tried v6.1 with up to 27164 rows and did not see the problem. Any ideas\n> hackers??\n>\n> -\n> Tom\n>\n> > > Yes. If possible please shrink the test case to the minimum needed to\n> > > exhibit the problem. TIA\n> >\n> > create table test (field1 char16, field2 char8);\n> >\n> > Insert 6791 or more rows. 
An easy way is to:\n> > - create a text file using vi\n> > - insert a line with 2 words like foo1 and foo2 separated\n> > using a tab\n> > - copy it (yy) and paste it 6790 times (6790p)\n> > - save it and exit vi\n> > Then, using psql enter the following query:\n> > copy test from 'the_file_you_created';\n> >\n> > Now to trigger the bug:\n> > select field1, field2, count(*) from test group by field1, field2;\n> >\n> > At my system I would see many lines like this:\n> > foo1 foo2 1\n> > foo1 foo2 3\n> > foo1 foo2 1\n> > foo1 foo2 3\n> > foo1 foo2 1\n> > foo1 foo2 3\n> > foo1 foo2 1\n> > foo1 foo2 3\n> > instead of:\n> > foo1 foo2 6791\n> >\n> > This also happens when some of the rows are different from the others\n> > (they don't have to be the same like in the example above).\n> >\n> > When the table has 6790 or less rows, everything is ok.\n> >\n> > When the table contains int's instead of char8/char16's you will probably\n> > need more rows in order to exhibit the problem. I will try to find out the\n> > exact number of rows needed in that case.\n> >\n> > Cheers,\n> > Ronald\n\n\n\n", "msg_date": "Thu, 15 Jan 1998 03:54:15 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] trouble grouping rows" }, { "msg_contents": "Is this fixed in the current release?\n\n> \n> > > > > I came across the same grouping problem myself. In my installation it\n> > > > > only happens with tables containing thousands of rows (for example 10000\n> > > > > or more).\n> \n> I've been keeping track of this problem and was preparing to send a message to the\n> list saying that it had disappeared as of a few weeks ago. Even did some more test\n> cases to confirm, and (unfortunately) tried one last case:\n> \n> tgl=> select c1, c2, count(*) from v group by c1, c2;\n> c1 |c2 |count\n> ----+----+-----\n> foo1|foo2| 2\n> foo1|foo2| 2\n> foo1|foo2| 3\n> foo1|foo2| 4\n> foo1|foo2| 2\n> foo1|foo2|27151\n> (6 rows)\n> \n> where other cases like:\n> \n> tgl=> select c1, count(*) from v group by c1;\n> c1 |count\n> ----+-----\n> foo1|27164\n> (1 row)\n> \n> seem to work. I get identical results for both char16 and for text fields in the\n> two-column table, and the order of the \"group by\" does not matter.\n> \n> >From what others said earlier, the problem is not reproducible on all systems, but\n> clearly shows up on at least two (perhaps both Linux?). Bruce, do you have some\n> suggestions on where to look to track this down? Where in the code does the sorting\n> and ordering happen during the select?\n> \n> - Tom\n> \n> > OK, thanks for the cookbook (retained below) on how to demonstrate the problem.\n> > The limit for triggering the problem seems to be system-dependent, but not\n> > related to the postmaster -B option (I tried with both 256 and 64 with the same\n> > results).\n> >\n> > This is a problem which is _not_ present in v6.1. 
I suspect it may be related to\n> > changes in sorting for using \"psort\", but have nothing on which to base that\n> > other than the v6.1 success.\n> >\n> > My results (following the cookbook):\n> >\n> > tgl=> create table test (field1 char16, field2 char8);\n> > CREATE\n> > tgl=> copy test from '/home/tgl/postgres/testagg.input';\n> > COPY\n> > tgl=> select count(*) from test;\n> > count\n> > -----\n> > 6791\n> > (1 row)\n> >\n> > tgl=> select field1, field2, count(*) from test group by field1, field2;\n> > NOTICE:copyObject: don't know how to copy 720\n> > NOTICE:copyObject: don't know how to copy 720\n> > field1|field2|count\n> > ------+------+-----\n> > foo1 |foo2 | 6791\n> > (1 row)\n> >\n> > tgl=> copy test from '/home/tgl/postgres/testagg.input';\n> > COPY\n> > tgl=> select count(*) from test;\n> > count\n> > -----\n> > 13582\n> > (1 row)\n> >\n> > tgl=> select field1, field2, count(*) from test group by field1, field2;\n> > NOTICE:copyObject: don't know how to copy 720\n> > NOTICE:copyObject: don't know how to copy 720\n> > field1|field2|count\n> > ------+------+-----\n> > foo1 |foo2 | 2\n> > foo1 |foo2 | 2\n> > foo1 |foo2 | 3\n> > foo1 |foo2 | 4\n> > foo1 |foo2 | 2\n> > foo1 |foo2 |13569\n> > (6 rows)\n> >\n> > I tried v6.1 with up to 27164 rows and did not see the problem. Any ideas\n> > hackers??\n> >\n> > -\n> > Tom\n> >\n> > > > Yes. If possible please shrink the test case to the minimum needed to\n> > > > exhibit the problem. TIA\n> > >\n> > > create table test (field1 char16, field2 char8);\n> > >\n> > > Insert 6791 or more rows. An easy way is to:\n> > > - create a text file using vi\n> > > - insert a line with 2 words like foo1 and foo2 separated\n> > > using a tab\n> > > - copy it (yy) and paste it 6790 times (6790p)\n> > > - save it and exit vi\n> > > Then, using psql enter the following query:\n> > > copy test from 'the_file_you_created';\n> > >\n> > > Now to trigger the bug:\n> > > select field1, field2, count(*) from test group by field1, field2;\n> > >\n> > > At my system I would see many lines like this:\n> > > foo1 foo2 1\n> > > foo1 foo2 3\n> > > foo1 foo2 1\n> > > foo1 foo2 3\n> > > foo1 foo2 1\n> > > foo1 foo2 3\n> > > foo1 foo2 1\n> > > foo1 foo2 3\n> > > instead of:\n> > > foo1 foo2 6791\n> > >\n> > > This also happens when some of the rows are different from the others\n> > > (they don't have to be the same like in the example above).\n> > >\n> > > When the table has 6790 or less rows, everything is ok.\n> > >\n> > > When the table contains int's instead of char8/char16's you will probably\n> > > need more rows in order to exhibit the problem. I will try to find out the\n> > > exact number of rows needed in that case.\n> > >\n> > > Cheers,\n> > > Ronald\n> \n> \n> \n> \n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Fri, 13 Feb 1998 15:12:22 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] trouble grouping rows" }, { "msg_contents": "[About the grouping problem]\n\nBruce wrote:\n> Is this fixed in the current release?\n\nI found the following in one of the earlier messages about this:\n\nVadim wrote:\n> Serj wrote:\n> > \n> > In snapshot 270198 \"GROUP BY\" bug present too.\n> > \n> > Test on linux-elf (\"select a,b,count(*) from c group by a,b\");\n\n> We know by what this bug is caused - will be fixed before 6.3 beta 2.\n> There will be also patch for 6.2.1\n\n> As \"workarround\" - try to increase -S max_sort_memory to prevent\n> disk sorting. 
Sorry.\n\n> Vadim\n\nSo I guess it isn't fixed yet? (*uncertain look on his face*)\n\nCheers,\nRonald\n", "msg_date": "Mon, 16 Feb 1998 11:38:50 +0100 (MET)", "msg_from": "Ronald Baljeu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] trouble grouping rows" }, { "msg_contents": "Ronald Baljeu wrote:\n> \n> So I guess it isn't fixed yet? (*uncertain look on his face*)\n\nNo. I just found a way to get core. It was not easy to do on my FreeBSD.\n\nHope to fix in 1-2 days.\n\nVadim\n", "msg_date": "Tue, 17 Feb 1998 17:24:43 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] trouble grouping rows" } ]
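For completeness, the same test table can be grown past the threshold without the editor step; this is an untested sketch that follows the recipe quoted above:

    create table test (field1 char16, field2 char8);
    insert into test values ('foo1', 'foo2');
    -- repeat the next statement; thirteen passes give 8192 rows,
    -- comfortably past the ~6790 rows reported as the trigger point
    insert into test select * from test;
    select count(*) from test;
    select field1, field2, count(*) from test group by field1, field2;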
[ { "msg_contents": "Got my Linux Journal today and the first featured article is entitled\n\n \"PostgreSQL - The Linux of Databases\"\n\nNow scrappy, before you get bent out of joint, they mean this in a nice\nway ;-)\n\nThe author is Rolf Herzog from Germany. It seems like a good article,\nwith a few factual errors but on the whole complimentary without\nignoring the weak points (yes Bruce, subselects are mentioned).\n\nMentions Marc by name and gets 6.5 pages of space (the longest article\nin the magazine if you fudge counting a couple of others which have a\ntwo page listing and lots of large graphics).\n\nIt runs through examples from src/tutorial and also calls out sequences,\ntriggers, defaults, constraints, and (I noticed this one :) the\ndate/time features as being helpful and noteworthy. Of the 5 items\nmentioned as missing from the SQL capabilities, at least one (primary\nkey) and perhaps two (subselects) will be available in v6.3.\n\n - Tom\n\n", "msg_date": "Thu, 15 Jan 1998 04:23:17 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Linux Journal article on PostgreSQL" }, { "msg_contents": "On Thu, 15 Jan 1998, Thomas G. Lockhart wrote:\n\n> Got my Linux Journal today and the first featured article is entitled\n> \n> \"PostgreSQL - The Linux of Databases\"\n> \n> Now scrappy, before you get bent out of joint, they mean this in a nice\n> way ;-)\n\n\t*rofl* Is it available anywhere on the 'Net?\n\n> It runs through examples from src/tutorial and also calls out sequences,\n> triggers, defaults, constraints, and (I noticed this one :) the\n> date/time features as being helpful and noteworthy. Of the 5 items\n> mentioned as missing from the SQL capabilities, at least one (primary\n> key) and perhaps two (subselects) will be available in v6.3.\n\n\tAs I think I mentioned to just Bruce the other day, I think that\nv6.3 is going to be our biggest \"jump forward\" yet...there are *sooo* many\nadvances going into her...\n\n", "msg_date": "Thu, 15 Jan 1998 07:47:41 -0500 (EST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Linux Journal article on PostgreSQL" }, { "msg_contents": "> > Got my Linux Journal today and the first featured article is entitled\n> >\n> > \"PostgreSQL - The Linux of Databases\"\n> >\n> > Now scrappy, before you get bent out of joint, they mean this in a nice\n> > way ;-)\n>\n> *rofl* Is it available anywhere on the 'Net?\n\nI'm not certain, but since it _is_ a magazine I think they want you to buy a\ncopy. Let me know if you can't find it on a newstand or in a computer store\n(~$3US) and I'll try to find one here. Also, some articles do get republished\na month or more later by Linux Gazette, which is available on line.\n\n> > It runs through examples from src/tutorial and also calls out sequences,\n> > triggers, defaults, constraints, and (I noticed this one :) the\n> > date/time features as being helpful and noteworthy. Of the 5 items\n> > mentioned as missing from the SQL capabilities, at least one (primary\n> > key) and perhaps two (subselects) will be available in v6.3.\n>\n> As I think I mentioned to just Bruce the other day, I think that\n> v6.3 is going to be our biggest \"jump forward\" yet...there are *sooo* many\n> advances going into her...\n\nI agree wrt user-visible features. 
I'd think that maybe the most important\nsingle step was the work y'all did a while ago to settle down the backend and\nget the crashes out since it gives us a reliable base to work from. btw, the\nfactual error in the magazine which annoyed me the most is the statement\nthat:\n\n\"PostgreSQL is now developed by a couple of volunteers, who coordinate their\nefforts via the Internet.\"\n\nwhich vastly understates the wide range of contributions. I wonder if he was\nthinking of both Marc and scrappy :) Oh well...\n\n - Tom\n\n\n", "msg_date": "Thu, 15 Jan 1998 15:34:58 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Linux Journal article on PostgreSQL" }, { "msg_contents": "On Thu, 15 Jan 1998, Thomas G. Lockhart wrote:\n\n> \n> \"PostgreSQL is now developed by a couple of volunteers, who coordinate their\n> efforts via the Internet.\"\n> \n> which vastly understates the wide range of contributions. I wonder if he was\n> thinking of both Marc and scrappy :) Oh well..\n\n\tIt does understate it, but there are really only three *high\nvisibility\" programmers working, and, oh, a couple of dozen less visible\nones...he might only be noticing the high visibility ones and totally\nmissing the important contributions by the less visible ones :9\n\n\n", "msg_date": "Thu, 15 Jan 1998 10:48:08 -0500 (EST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Linux Journal article on PostgreSQL" }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> On Thu, 15 Jan 1998, Thomas G. Lockhart wrote:\n> \n> > Got my Linux Journal today and the first featured article is entitled\n> >\n> > \"PostgreSQL - The Linux of Databases\"\n> >\n> > Now scrappy, before you get bent out of joint, they mean this in a nice\n> > way ;-)\n> \n> *rofl* Is it available anywhere on the 'Net?\n> \n\n\n http://www.linuxjournal.com/\n\n\n/* m */\n", "msg_date": "Fri, 16 Jan 1998 12:26:30 +0100", "msg_from": "Mattias Kregert <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Linux Journal article on PostgreSQL" } ]
[ { "msg_contents": "> Content-based access control (CBAC). In my experience,\n> when these words are uttered, DBAs and MIS designers groan.\n> I wish CBAC were never required. Unfortunately sometimes\n> it is, and I wonder if the PG team is thinking about it.\n\nWe handle this with views in our Informix Data Warehouse Installation.\nThis said Informix has separate view and table permissions.\nThe way we did it we only need one single view per table.\nAll users are only granted access to this view.\nThere is a separate cbac_table with fields (username, group).\nThe column group (we say mandant) is also in the data tables.\nThe view is always a join between the cabc_table and the data_table:\ncreate view emp as\n\tselect d.* from data_table d, cbac_table c where \n\td.group = c.group and c.user = USER -- USER is a db\nsupplied var (SQLID in DB/2) \n\twith check option;\n\nThis said, user joins on these views can be very nasty for the\noptimizer, \nbut it works great with recent versions of Informix.\n\nThe user name is in CURRENT_USER in postgresql,\nSeparate view and table privs are on the TODO,\nupdateable views are also on the TODO. (I think) \n\nAndreas\n", "msg_date": "Thu, 15 Jan 1998 12:01:23 +0100", "msg_from": "Zeugswetter Andreas DBT <[email protected]>", "msg_from_op": true, "msg_subject": "Content-based access control" } ]
[ { "msg_contents": "I have a potential patch for the glibc2 date problem; however I can't\ntest it because the snapshot won't build. \n\n\nHas this build error been cured recently, or do I have my own unique problem?\n\n\nO/S: Linux 2.0.32\nGlibc: 2.0.6\nGcc: 2.7.2.3\nFlex: 2.5.4\nBison: 1.25\nPostgreSQL: snapshot downloaded Jan 12\n\nmake[2]: Entering directory `/usr1/home/olly/mypackages/pgsql/src/backend/parse\nr'\n/usr/bin/bison -y -d gram.y\nmv y.tab.c gram.c\nmv y.tab.h parse.h\ngcc -I../../include -I/usr/include/ncurses -I/usr/include/readline -O2 -m486 \n-Wall -Wmissing-prototypes -I.. -Wno-error -c analyze.c -o analyze.o\ngcc -I../../include -I/usr/include/ncurses -I/usr/include/readline -O2 -m486 \n-Wall -Wmissing-prototypes -I.. -Wno-error -c gram.c -o gram.o\n/usr/share/bison.simple: In function `yyparse':\n/usr/share/bison.simple:327: warning: implicit declaration of function \n`yyerror'\n/usr/share/bison.simple:387: warning: implicit declaration of function `yylex'\ngcc -I../../include -I/usr/include/ncurses -I/usr/include/readline -O2 -m486 \n-Wall -Wmissing-prototypes -I.. -Wno-error -c keywords.c -o keywords.o\n\n....\n\ngcc -I../../include -I/usr/include/ncurses -I/usr/include/readline -O2 -m486 \n-Wall -Wmissing-prototypes -I.. -Wno-error -c scan.c -o scan.o\nlex.yy.c:800: warning: no previous prototype for `yylex'\nscan.l: In function `yylex':\nscan.l:202: `ABORT' undeclared (first use this function)\nscan.l:202: (Each undeclared identifier is reported only once\nscan.l:202: for each function it appears in.)\nscan.l: At top level:\nscan.l:379: warning: no previous prototype for `yyerror'\nscan.l: In function `yyerror':\nscan.l:380: `ABORT' undeclared (first use this function)\nlex.yy.c: At top level:\nlex.yy.c:2103: warning: `yy_flex_realloc' defined but not used\nmake[2]: *** [scan.o] Error 1\nmake[2]: Leaving directory `/usr1/home/olly/mypackages/pgsql/src/backend/parser\n'\n\nscan.l version is:\n$Header: /usr/local/cvsroot/pgsql/src/backend/parser/scan.l,v 1.34 1998/01/05 \n16:39:19 momjian Exp $\n\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n\nPGP key from public servers; key ID 32B8FAA1\n\nUnsolicited email advertisements are not welcome; any person sending\nsuch will be invoiced for telephone time used in downloading together\nwith a £25 administration charge.\n\n\n", "msg_date": "Thu, 15 Jan 1998 12:45:41 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Cannot build recent snapshot" }, { "msg_contents": "On Thu, 15 Jan 1998, Oliver Elphick wrote:\n\n> I have a potential patch for the glibc2 date problem; however I can't\n> test it because the snapshot won't build. \n\n\tremove src/backend/parser/scan.c...that should fix your problem\n\n\n", "msg_date": "Thu, 15 Jan 1998 08:06:46 -0500 (EST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Cannot build recent snapshot" }, { "msg_contents": "The Hermit Hacker wrote:\n >On Thu, 15 Jan 1998, Oliver Elphick wrote:\n >\n >> I have a potential patch for the glibc2 date problem; however I can't\n >> test it because the snapshot won't build. \n >\n >\tremove src/backend/parser/scan.c...that should fix your problem\n >\n\nThanks, it did.\n\n\nThe patch for glibc2 dates is attached. 
With this applied, a Linux system\nwith libc6 (glibc2) passes all the date and time related regression tests.\n\n\n\n\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n\nPGP key from public servers; key ID 32B8FAA1\n\nUnsolicited email advertisements are not welcome; any person sending\nsuch will be invoiced for telephone time used in downloading together\nwith a �25 administration charge.", "msg_date": "Thu, 15 Jan 1998 18:02:25 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Patch for glibc2 date problems " }, { "msg_contents": "> The patch for glibc2 dates is attached. With this applied, a Linux system\n> with libc6 (glibc2) passes all the date and time related regression tests.\n\nIt looks as though this patch is a bit Linux-specific (or specific to some version of glibc which has only been tested on\nLinux).\n\nCan we wait until glibc2 settles down, or provide this as an add-on patch rather than merging it into the main tree? I hate\nadding machine-specific code into otherwise general code...\n\nAnother possibility would be to add a new #define variable like HAVE_FUNNY_LIBRARY in config.h or in linux.h so we can\npossibly use this with other ports if necessary in the future.\n\nI'm planning on installing RH5.0 sometime soon (I have a clean disk so can fall back to RH4.2). I'm sure I'll sound more\nsympathetic by then :)\n\n - Tom\n\n", "msg_date": "Fri, 16 Jan 1998 02:16:32 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Patch for glibc2 date problems" }, { "msg_contents": "Oliver Elphick wrote:\n> \n> I have a potential patch for the glibc2 date problem; however I can't\n> test it because the snapshot won't build.\n> \n> Has this build error been cured recently, or do I have my own unique problem?\n> \n> O/S: Linux 2.0.32\n> Glibc: 2.0.6\n> Gcc: 2.7.2.3\n> Flex: 2.5.4\n> Bison: 1.25\n> PostgreSQL: snapshot downloaded Jan 12\n> \n> make[2]: Entering directory `/usr1/home/olly/mypackages/pgsql/src/backend/parse\n> r'\n> /usr/bin/bison -y -d gram.y\n> mv y.tab.c gram.c\n> mv y.tab.h parse.h\n> gcc -I../../include -I/usr/include/ncurses -I/usr/include/readline -O2 -m486\n> -Wall -Wmissing-prototypes -I.. -Wno-error -c analyze.c -o analyze.o\n> gcc -I../../include -I/usr/include/ncurses -I/usr/include/readline -O2 -m486\n> -Wall -Wmissing-prototypes -I.. -Wno-error -c gram.c -o gram.o\n> /usr/share/bison.simple: In function `yyparse':\n> /usr/share/bison.simple:327: warning: implicit declaration of function\n> `yyerror'\n> /usr/share/bison.simple:387: warning: implicit declaration of function `yylex'\n> gcc -I../../include -I/usr/include/ncurses -I/usr/include/readline -O2 -m486\n> -Wall -Wmissing-prototypes -I.. -Wno-error -c keywords.c -o keywords.o\n> \n> ....\n> \n> gcc -I../../include -I/usr/include/ncurses -I/usr/include/readline -O2 -m486\n> -Wall -Wmissing-prototypes -I.. 
-Wno-error -c scan.c -o scan.o\n> lex.yy.c:800: warning: no previous prototype for `yylex'\n> scan.l: In function `yylex':\n> scan.l:202: `ABORT' undeclared (first use this function)\n> scan.l:202: (Each undeclared identifier is reported only once\n> scan.l:202: for each function it appears in.)\n> scan.l: At top level:\n> scan.l:379: warning: no previous prototype for `yyerror'\n> scan.l: In function `yyerror':\n> scan.l:380: `ABORT' undeclared (first use this function)\n> lex.yy.c: At top level:\n> lex.yy.c:2103: warning: `yy_flex_realloc' defined but not used\n> make[2]: *** [scan.o] Error 1\n> make[2]: Leaving directory `/usr1/home/olly/mypackages/pgsql/src/backend/parser\n> '\n\nMy solution was to remove scan.c from that directory and let it be\nrebuilt\nfrom scan.l - the Makefile does not remove scan.c on \"make clean\"\nI had a couple of warnings, but no premature termination of make.\n(Linux 2.0.29, libc-5.4.33)\n\nHope this helps\n\nDarrell\n", "msg_date": "Thu, 15 Jan 1998 21:32:05 -0800", "msg_from": "\"Darrell A. Escola\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Cannot build recent snapshot" }, { "msg_contents": "\"Thomas G. Lockhart\" wrote:\n >> The patch for glibc2 dates is attached. With this applied, a Linux system\n >> with libc6 (glibc2) passes all the date and time related regression tests.\n >\n >It looks as though this patch is a bit Linux-specific (or specific to some v\n >ersion of glibc which has only been tested on\n >Linux).\n\nI don't have experience of using glibc2 on any other type of machine.\n\nHowever, isn't part of the point of it to remove inter-machine differences?\n\n >\n >Can we wait until glibc2 settles down, or provide this as an add-on patch ra\n >ther than merging it into the main tree? I hate\n >adding machine-specific code into otherwise general code...\n\nI guess that's up to you. \n >\n >Another possibility would be to add a new #define variable like HAVE_FUNNY_L\n >IBRARY in config.h or in linux.h so we can\n\nWhy isn't\n #if __GLIBC__ < 2\nenough for this?\n >possibly use this with other ports if necessary in the future.\n >\n\nI don't have experience of using glibc2 on any other type of machine.\nHowever, isn't part of the point of it to remove inter-machine differences?\n\n >I'm planning on installing RH5.0 sometime soon (I have a clean disk so can f\n >all back to RH4.2). I'm sure I'll sound more\n >sympathetic by then :)\n >\n > - Tom\n\nMy assumption was that any system using glibc2 would not have a broken \nrint() function; so the general change to TMODULO would be justified.\n\nThe change of the test of `var != 0' to `var != rint(var)' should not break\nanything, even if var is non-zero. 
It is merely saying, don't use\ndecimal points if there's no decimal part.\n\nThe remaining part of the patch is to force the undefinition of\nHAVE_INT_TIMEZONE; again this is glibc2-specific, but I don't know\nany reason to suppose it wouldn't be needed on any machine with glibc2.\n\nIt would really be helpful to have someone on a non-Linux machine test\nit; but is there anyone?\n\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n\nPGP key from public servers; key ID 32B8FAA1\n\nUnsolicited email advertisements are not welcome; any person sending\nsuch will be invoiced for telephone time used in downloading together\nwith a £25 administration charge.\n\n\n", "msg_date": "Fri, 16 Jan 1998 06:42:44 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Patch for glibc2 date problems " }, { "msg_contents": "On Fri, 16 Jan 1998, Thomas G. Lockhart wrote:\n\n> > The patch for glibc2 dates is attached. With this applied, a Linux system\n> > with libc6 (glibc2) passes all the date and time related regression tests.\n \n> It looks as though this patch is a bit Linux-specific (or specific to\nsome version of glibc which has only been tested on Linux). \n \n> Can we wait until glibc2 settles down, or provide this as an add-on\npatch rather than merging it into the main tree? I hate adding\nmachine-specific code into otherwise general code... \n \n> Another possibility would be to add a new #define variable like\nHAVE_FUNNY_LIBRARY in config.h or in linux.h so we can possibly use this\nwith other ports if necessary in the future. \n\nHow about making this #define 'HAVE_GLIBC' ???\n\nMaarten\n\n_____________________________________________________________________________\n| Maarten Boekhold, Faculty of Electrical Engineering TU Delft, NL |\n| Computer Architecture and Digital Technique section |\n| [email protected] |\n-----------------------------------------------------------------------------\n\n", "msg_date": "Fri, 16 Jan 1998 10:52:06 +0100 (MET)", "msg_from": "Maarten Boekhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Patch for glibc2 date problems" }, { "msg_contents": "> \n> \"Thomas G. Lockhart\" wrote:\n> >> The patch for glibc2 dates is attached. With this applied, a Linux system\n> >> with libc6 (glibc2) passes all the date and time related regression tests.\n> >\n> >It looks as though this patch is a bit Linux-specific (or specific to some v\n> >ersion of glibc which has only been tested on\n> >Linux).\n> \n> I don't have experience of using glibc2 on any other type of machine.\n> \n> However, isn't part of the point of it to remove inter-machine differences?\n> \n> >\n> >Can we wait until glibc2 settles down, or provide this as an add-on patch ra\n> >ther than merging it into the main tree? I hate\n> >adding machine-specific code into otherwise general code...\n> \n> I guess that's up to you. \n> >\n> >Another possibility would be to add a new #define variable like HAVE_FUNNY_L\n> >IBRARY in config.h or in linux.h so we can\n> \n> Why isn't\n> #if __GLIBC__ < 2\n> enough for this?\n> >possibly use this with other ports if necessary in the future.\n> >\n> \n> I don't have experience of using glibc2 on any other type of machine.\n> However, isn't part of the point of it to remove inter-machine differences?\n> \n> >I'm planning on installing RH5.0 sometime soon (I have a clean disk so can f\n> >all back to RH4.2). 
I'm sure I'll sound more\n> >sympathetic by then :)\n> >\n> > - Tom\n> \n> My assumption was that any system using glibc2 would not have a broken \n> rint() function; so the general change to TMODULO would be justified.\n> \n> The change of the test of `var != 0' to `var != rint(var)' should not break\n> anything, even if var is non-zero. It is merely saying, don't use\n> decimal points if there's no decimal part.\n> \n> The remaining part of the patch is to force the undefinition of\n> HAVE_INT_TIMEZONE; again this is glibc2-specific, but I don't know\n> any reason to suppose it wouldn't be needed on any machine with glibc2.\n> \n> It would really be helpful to have someone on a non-Linux machine test\n> it; but is there anyone?\n\nOur code is complicated enough without adding patches for OS bugs. A\ngood place for the patch is the Linux FAQ. If it really becomes a\nproblem, we can put the patch as a separate file in the distribution,\nand mention it in the INSTALL instructions. If you really want to get\nfancy, as I did with the flex bug, you can run a test at compile time to\nsee if the bug exists.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Fri, 16 Jan 1998 09:17:47 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Patch for glibc2 date problems" }, { "msg_contents": "\"Thomas G. Lockhart\" wrote:\n >The other good possibility is for Oliver to develop a patch kit (he has done\n > so\n >already) which we can include in the v6.3 distribution to be applied only to\n >linux/glibc2. When the beta settles down perhaps Oliver can generate a new p\n >atch\n >kit which we can include?\n\nI'm happy to remake the patch in a week or so. However I'd like to be clear\nwhat you think is 'Linux/glibc-specific' (and therefore undesirable); obviously\nit is better to make changes as general as possible.\n\nI'm attaching the previous patch I made (which, of course, may not\nactually work with the current release - I haven't tried yet) so as to comment\non it:\n\n1. I use `#if __GLIBC__ < 2' to test for glibc, where the change is only\n for glibc. So I use it to force HAVE_INT_TIMEZONE to be undefined\n and to redefine TMODULO(); incidentally, I used the hint in the existing\n source that modf() was sometimes broken and chose to use glibc's\n unbroken modf(). \n\n [ Incidentally, is modf() _still_ broken on whatever platforms it was\n broken on? ]\n\n2. The second set of changes are to do with unnecessary printing of n.00\n instead of n, where the test was whether fsec is non-zero. With the\n glibc version, fsec is often non-zero but integer, so I substituted\n the test (fsec != rint(fsec)) for (fsec != 0). This seems to me to\n be a completely general change that will work correctly on all\n implementations.\n\nNow I can understand that you may choose to ignore the existence of glibc \nfor the standard code (not that I agree, mind you) but I really don't see\nhow I could make the changes any more general than I have.\n\n\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n\nPGP key from public servers; key ID 32B8FAA1", "msg_date": "Tue, 03 Feb 1998 12:02:08 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] (: JDBC+(Sun ~3:pm MST) CVS :) -also question about\n\tregression tests" }, { "msg_contents": "> 1. I use `#if __GLIBC__ < 2' to test for glibc, where the change is only\n> for glibc. 
So I use it to force HAVE_INT_TIMEZONE to be undefined\n> and to redefine TMODULO(); incidentally, I used the hint in the existing\n> source that modf() was sometimes broken and chose to use glibc's\n> unbroken modf().\n>\n> [ Incidentally, is modf() _still_ broken on whatever platforms it was\n> broken on? ]\n\nProbably, since it was on Solaris, which should have been mature enough by now to not exhibit the problem :(\n\n> 2. The second set of changes are to do with unnecessary printing of n.00\n> instead of n, where the test was whether fsec is non-zero. With the\n> glibc version, fsec is often non-zero but integer, so I substituted\n> the test (fsec != rint(fsec)) for (fsec != 0). This seems to me to\n> be a completely general change that will work correctly on all\n> implementations.\n\nThe second set of changes does work in general, but is an extra math library call which is _only_ required for glibc2. This\nis the math library bug I was alluding to earlier.\n\nThe extra overhead of this unnecessary function call should not be required for all platforms, since it only there as a bug\nfix workaround for one platform. \"fsec\" is \"fractional seconds\", not \"fractional seconds except for glibc2 where it could be\nan integer too\".\n\n> Now I can understand that you may choose to ignore the existence of glibc\n> for the standard code (not that I agree, mind you) but I really don't see\n> how I could make the changes any more general than I have.\n\nHeh, don't get me wrong, I _love_ glibc2 and am looking forward to being able to write thread-safe code using it. However,\nthe glibc2 you are using seems to be v2.0.5 or thereabouts and it still apparently has a few rough edges, this math rounding\nproblem being one of them.\n\nAs you know, the problem is not really with rint() but with whatever allowed the fsec variable to become a non-zero integer.\n_That_ is the thing which should be patched with glibc2-specific #ifdefs. Between all other tested platforms (~10?) the\nbehavior is uniform and correct. So, I'll be at least somewhat happier with the patches if you identify the place where fsec\nis going bad and fix that for glibc2, rather than coding a general workaround later in the code.\n\nThe more I think about it, the more I want to go back and rip out the general workaround I put in to fix Solaris. Then, we\ncan isolate up in the top of the file some #ifdef equivalent functions for Solaris and for glibc2.\n\nSomeday the glibc2 behavior will be fixed, and it should be easy to then remove the workaround code.\n\n - Tom\n\n", "msg_date": "Tue, 03 Feb 1998 15:12:30 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] (: JDBC+(Sun ~3:pm MST) CVS :) -also question about\n\tregression tests" }, { "msg_contents": "Bruce Momjian wrote:\n >I am noticing a lot of animal lovers in the group.\n\nYes, and I suppose I'm another one.\n\nI'm 46 with a dog, three cats, a wife and five children, and we live\nhere on the Isle of Wight (Britain) and have done since 1996. The\nchildren range from 20 down to 13, and only three are now at home all\nthe time. My prime leisure activity (apart from computers...) is singing,\nmainly in church.\n\nI started my working life as an accountant, but then, as the ad says,\nI discovered computers. For a number of years I worked with the PICK\ndatabase and one of its Unix copies, UniVerse. 
Now I'm doing sufficient\nconsultancy to keep all the menagerie fed and housed, and doing my best\nto promote Open Source software (note the new advertisement-speak for\nfree software!).\n\nI maintain the PostgreSQL package for Debian GNU/Linux, but haven't\nreally found my way into the real meat of the source code yet. I am\njust starting the design of a system for a pharmaceutical supplies\nmanufacturer, to replace his existing UniVerse-based one.\n\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n\nPGP key from public servers; key ID 32B8FAA1\n\n\n", "msg_date": "Fri, 20 Feb 1998 19:18:03 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Who is everyone? " }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\n\nThen Bruce Momjian wrote:\n> I am noticing a lot of animal lovers in the group.\n\nI guess I'm sorta one, too (I'm just too allergic to actually keep\nthem at my own house).\n\nI'm a 31 year old father of two (a third is due 29Mar). I recently\nfinished my BS in Electrical and Computer Engineering at Carnegie\nMellon University, located in Pittsburgh, PA, USA. As it happens,\nI've been a full-time employee here for the last five years, doing\nsystem administration. Currently, I'm riding herd on a coupld of\nHP9000s, a Sequent, two DG AViiONs, a VAXcluster, and an SP/2. My\nonly real project in PostgreSQL was actually written against 4.2 in\nQUEL (a helpdesk app that is still listed on the PostgreSQL home page\nin spite of the fact that it hasn't been updated to keep it even\nfunctional. NOTE TO MYSELF: do something about that!). I ride horse,\nand watch fish. Since the allergies started, I've had to stop playing\nwith the dogs and cats.\n\n- -- \n\n\n=====================================================================\n| \"If you're all through trying to burn the field down, will you |\n| kindly get up and tell me why you're sitting in a fruit field, |\n| stark naked, frying peaches?\" |\n=====================================================================\n| Finger [email protected] for my public key. |\n=====================================================================\n\n-----BEGIN PGP SIGNATURE-----\nVersion: 2.6.2\n\niQBVAwUBNO3deYdzVnzma+gdAQGZ3gH8DDD4TLacIyjf2Al30pUm1iIhJZUp8HYp\n+sgoMDO4fuHfigWdrxBHOuGDyiQ+9i8vkt7lNfShinckzju0aIaGsg==\n=2EQp\n-----END PGP SIGNATURE-----\n\n", "msg_date": "20 Feb 1998 14:46:01 -0500", "msg_from": "<[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Who is everyone? " }, { "msg_contents": "Bruce Momjian wrote:\n >All tables are created with default permissions for SELECT to PUBLIC, so\n >views are no different.\n\nIs this not contrary to the SQL standard? I understood that SQL tables\nare created with permissions for their creator only; any permissions for\nother users must be granted explicitly. According to \"SQL The Standard\nHandbook\" (Cannan & Otten, 1993), the owner of the schema in which a table\nis created is given a full set of privileges, and no other user can access\nthe table or even discover that it exists!\n\nIt certainly seems undesirable to give automatic access to data of unknown\nsensitivity. 
Surely the default permission should be for the table's\ncreator alone or for the owner of the PostgreSQL database (which I suppose \nis equivalent to the `schema').\n\nI see that Jan Wieck has posted a method for preventing world readability;\nperhaps this should just be flagged as a configurable option.\n\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n\nPGP key from public servers; key ID 32B8FAA1\n\n\n", "msg_date": "Mon, 23 Feb 1998 21:53:57 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Here it is - view permissions " }, { "msg_contents": "\nOliver Elphick wrote:\n>\n> Bruce Momjian wrote:\n> >All tables are created with default permissions for SELECT to PUBLIC, =\n> so\n> >views are no different.\n>\n> Is this not contrary to the SQL standard? I understood that SQL tables\n> are created with permissions for their creator only; any permissions for\n> other users must be granted explicitly. According to \"SQL The Standard\n> Handbook\" (Cannan & Otten, 1993), the owner of the schema in which a tabl=\n> e\n> is created is given a full set of privileges, and no other user can acces=\n> s\n> the table or even discover that it exists!\n\n ^^^^^^^^^^^^^^!!!\n\n Ha!\n\n The next table we must hide and create a view on :-)\n\n This time the view must check if the user has at least SELECT\n permission on the table/view and hide rows. More tricky -\n I'll try to work it out. But not doday - I'm tired and I know\n what can happen then (saying '... and make even this little\n thing' at 23:00 to reach the state of 22:59 at 04:00 :-).\n Good night to all!\n\n But a last word: There are even more such tables as the\n tables/views are also reflected in pg_attributes, pg_rewrite\n and so on. Even if here only the Oid shows up.\n\n If we really want to get all this up to the highest level, we\n need sometimes a proacl field in pg_proc ... *Ack* - Bruce,\n *Outch* - no - not the pumpgun - *Help*\n\n :-)\n\n>\n> It certainly seems undesirable to give automatic access to data of unknow=\n> n\n> sensitivity. Surely the default permission should be for the table's\n> creator alone or for the owner of the PostgreSQL database (which I suppos=\n> e =\n>\n> is equivalent to the `schema').\n>\n> I see that Jan Wieck has posted a method for preventing world readability=\n> ;\n> perhaps this should just be flagged as a configurable option.\n\n But configurable at compile time - not a runtime option\n please.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Mon, 23 Feb 1998 23:27:59 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Here it is - view permissions" }, { "msg_contents": " > \n> Bruce Momjian wrote:\n> >All tables are created with default permissions for SELECT to PUBLIC, so\n> >views are no different.\n> \n> Is this not contrary to the SQL standard? I understood that SQL tables\n> are created with permissions for their creator only; any permissions for\n> other users must be granted explicitly. 
According to \"SQL The Standard\n> Handbook\" (Cannan & Otten, 1993), the owner of the schema in which a table\n> is created is given a full set of privileges, and no other user can access\n\nWill be the default in 6.3, I think.\n \n> the table or even discover that it exists!\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nNot in 6.3, or maybe ever. Too much OO stuff for that, I think.\n\n> \n> It certainly seems undesirable to give automatic access to data of unknown\n> sensitivity. Surely the default permission should be for the table's\n> creator alone or for the owner of the PostgreSQL database (which I suppose \n> is equivalent to the `schema').\n> \n> I see that Jan Wieck has posted a method for preventing world readability;\n> perhaps this should just be flagged as a configurable option.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 23 Feb 1998 17:35:09 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Here it is - view permissions" }, { "msg_contents": "\nSo I've never gotten the distinction -- what makes postgreSQL an\nobject oriented database, aside from the oid attribute and class\ninheritance (which could work a little better.. no way to find out the\nchild class of a tuple in a select from parent_class* query).\n\nand what makes it relational? the fact that it can do joins?\n\nin confused delerium,\n--brett\n\nOn Mon, 23 February 1998, at 17:35:09, Bruce Momjian wrote:\n\n> Will be the default in 6.3, I think.\n> \n> > the table or even discover that it exists!\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> \n> Not in 6.3, or maybe ever. Too much OO stuff for that, I think.\n> \n> > \n> > It certainly seems undesirable to give automatic access to data of unknown\n> > sensitivity. Surely the default permission should be for the table's\n> > creator alone or for the owner of the PostgreSQL database (which I suppose \n> > is equivalent to the `schema').\n> > \n> > I see that Jan Wieck has posted a method for preventing world readability;\n> > perhaps this should just be flagged as a configurable option.\n> \n> \n> -- \n> Bruce Momjian | 830 Blythe Avenue\n> [email protected] | Drexel Hill, Pennsylvania 19026\n> + If your life is a hard drive, | (610) 353-9879(w)\n> + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 23 Feb 1998 15:06:32 -0800", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Here it is - view permissions" }, { "msg_contents": "> So I've never gotten the distinction -- what makes postgreSQL an\n> object oriented database...\n\nIt is not. It is object-relational, which is relational with some object-oriented\nfeatures. The type/function extensibility is the most visible of these features.\n\n> and what makes it relational? the fact that it can do joins?\n\nAnd allows one to use other aspects of relational algebra.\n\n - Tom\n\n", "msg_date": "Tue, 24 Feb 1998 02:37:02 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Here it is - view permissions" }, { "msg_contents": "On Tue, 24 February 1998, at 02:37:02, Thomas G. Lockhart wrote:\n\n> > So I've never gotten the distinction -- what makes postgreSQL an\n> > object oriented database...\n> \n> It is not. 
It is object-relational, which is relational with some object-oriented\n> features. The type/function extensibility is the most visible of these features.\n\nWhoops, that should have been a no-brainer (because I already knew that)\n\n> \n> > and what makes it relational? the fact that it can do joins?\n> \n> And allows one to use other aspects of relational algebra.\n\nwhat is relational algebra? operations on entire tuples?\n", "msg_date": "Mon, 23 Feb 1998 21:00:31 -0800", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Here it is - view permissions" }, { "msg_contents": "Last week, I wrote this, but no-one has answered. :( \n >I am currently designing a database that I expect to put into use after\n >6.4 is released. This makes use of inheritance, and I would like to\n >ask about how inheritance will relate to the handling of constraints.\n >\n >1. I would like to be able to say:\n >\n > create table job\n > (\n > ...\n > resid char(4) not null references resource*(id),\n > ...\n > )\n >\n > to indicate that the foreign key constraint would be satisfied by the\n > presence of the desired item in any class of the inheritance tree \nstarting\n > at resource. The parser does not recognise this syntax at present.\n > (This is parallel to `select ... from class*', by which we can currently\n > list all items in an inheritance tree.)\n >\n >2. Will all constraints on a class be inherited along with the column\n > definitions?\n >\n > If constraints are inherited, there is the possibility of conflict or\n > redefinition.\n >\n > In single inheritance, could a constraint be redefined by being restated \n > in the descendent?\n >\n > In multiple inheritance, a conflict of column types causes an error; how\n > will a conflict of constraint names be handled, if the check condition\n > is different? (Options: error; drop the constraint; require a new\n > definition of the constraint in the descendent class.)\n >\n > At the moment, check constraints are inherited and are silently mangled\n > by prefixing the class name; this can lead to an impossible combination\n > of constraints, which could be solved if redefinition were possible.\n >\n > Example:\n >\n >junk=> create table aa (id char(4) check (id > 'M'), name text);\n >CREATE\n >junk=> create table bb (id char(4) check (id < 'M'), qty int);\n >CREATE\n >junk=> create table ab (value money) inherits (aa, bb);\n >CREATE\n >junk=> insert into ab values ('ABCD', 5);\n >ERROR: ExecAppend: rejected due to CHECK constraint aa_id\n >junk=> insert into ab values ('WXYZ', 5);\n >ERROR: ExecAppend: rejected due to CHECK constraint bb_id\n >\n > We could perhaps allow syntax such as:\n >\n > create table ab (..., constraint id check (id > 'E' and id < 'Q'))\n > inherits (aa, bb)\n > undefine (constraint aa_id, constraint bb_id)\n >\n > Is this feasible?\n >\n > At present, primary key definitions are not inherited. Could they be?\n > (either to share the same index or have a new one for each class, at\n > the designer's option.)\n >\n \n\nHaving thought a bit more about it, I feel that all constraints should be\ninherited, including primary key, and that the primary key index should\nserve the entire tree, so that `select key from foo*' can be guaranteed\nnot to contain any duplicates. Where there are non-identical constraints\nof the same name, there ought to be a method for selecting the one to use\nand also a method of redefining the constraint. 
(These ideas are taken\nfrom the Eiffel language.)\n\n CREATE TABLE foobar (<field_definitions_and_constraints>)\n INHERITS (foo, bar)\n CHOOSE (constraint foo_id)\n REDEFINE (constraint id check (...))\n\nThe CHOOSE clause says that constraint foo_id is to be used in preference\nto bar_id; the REDEFINE clause allows id (which would otherwise be foo_id, \nfrom the CHOOSE clause) to be redefined as something new.\n\nThe CHOOSE clause would only be valid if there were constraints of the\nsame name in more than one ancestor. If the constraints were actually the\nsame, it would be legal but unnecessary.\n\nThe REDEFINE clause would be legal if any ancestor contained the redefined\nconstraint. \n\nI am thinking only of check constraints, at the moment. As I said above,\nI think that the primary key should apply to all descendants, using the same\nindex. I am not sure about foreign keys, NOT NULL and UNIQUE.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n\nPGP key from public servers; key ID 32B8FAA1\n\n ========================================\n Come to me, all you who labour and are heavily laden, and I will\n give you rest. Take my yoke upon you, and learn from me; for I am\n meek and lowly in heart, and you shall find rest for your souls.\n For my yoke is easy and my burden is light. (Matthew 11: 28-30)\n\n\n", "msg_date": "Mon, 20 Apr 1998 21:45:08 +0200", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Constraints and inheritance " }, { "msg_contents": "Oliver Elphick wrote:\n> \n> I am thinking only of check constraints, at the moment. As I said above,\n> I think that the primary key should apply to all descendants, using the same\n> index. I am not sure about foreign keys, NOT NULL and UNIQUE.\n\nThis can be done: we could store table' oid in btree index items\nand keep index items for all tables in one index.\n\nVadim\n", "msg_date": "Tue, 21 Apr 1998 09:18:31 +0800", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Constraints and inheritance" }, { "msg_contents": "I have a bug reported on the Debian package of 6.3.2:\n\n > videotapes=> grant all on tapes to www-data;\n > ERROR: aclparse: non-existent user \"www\"\n\nIt is, in fact, impossible to create the user www-data:\n\n template1=> create user www-data;\n ERROR: parser: parse error at or near \"-\"\n template1=> create user 'www-data';\n ERROR: parser: parse error at or near \"'\"\n\nSo there are two problems:\n\n1. The error message\n\n\t `ERROR: aclparse: non-existent user \"www\"'\n\n is incorrect. The parser should actually object to the `-' character; it\n appears to be silently dropping the `-data'.\n\n2. The range of possible user names is not the same as the range of possible\n Unix login names. However, the manual pages do not define what characters\n are valid. The SQL standard is silent on this point; it simply regards\n the current user name as an identifier supplied by the system. 
On the\n other hand, it is clear that PostgreSQL regards a user name as an SQL\n identifier, so that there is no distinction of case and no punctuation\n characters are allowed.\n\nIs it possible to make the parser accept the full range of Unix login names,\nincluding some punctuation characters and upper- and lower-case letters?\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n\nPGP key from public servers; key ID 32B8FAA1\n\n ========================================\n Come to me, all you who labour and are heavily laden, and I will\n give you rest. Take my yoke upon you, and learn from me; for I am\n meek and lowly in heart, and you shall find rest for your souls.\n For my yoke is easy and my burden is light. (Matthew 11: 28-30)\n\n\n", "msg_date": "Sun, 26 Apr 1998 08:02:09 +0200", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "User names cannot contain `-'" }, { "msg_contents": "> \n> I have a bug reported on the Debian package of 6.3.2:\n> \n> > videotapes=> grant all on tapes to www-data;\n> > ERROR: aclparse: non-existent user \"www\"\n> \n> It is, in fact, impossible to create the user www-data:\n> \n> template1=> create user www-data;\n> ERROR: parser: parse error at or near \"-\"\n> template1=> create user 'www-data';\n> ERROR: parser: parse error at or near \"'\"\n> \n> So there are two problems:\n> \n> 1. The error message\n> \n> \t `ERROR: aclparse: non-existent user \"www\"'\n> \n> is incorrect. The parser should actually object to the `-' character; it\n> appears to be silently dropping the `-data'.\n> \n> 2. The range of possible user names is not the same as the range of possible\n> Unix login names. However, the manual pages do not define what characters\n> are valid. The SQL standard is silent on this point; it simply regards\n> the current user name as an identifier supplied by the system. On the\n> other hand, it is clear that PostgreSQL regards a user name as an SQL\n> identifier, so that there is no distinction of case and no punctuation\n> characters are allowed.\n\nWe allow undercores, but not dashes.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sun, 26 Apr 1998 10:11:00 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] User names cannot contain `-'" }, { "msg_contents": "\"Oliver Elphick\" <[email protected]> writes:\n\n> I have a bug reported on the Debian package of 6.3.2:\n> \n> > videotapes=> grant all on tapes to www-data;\n> > ERROR: aclparse: non-existent user \"www\"\n> \n> It is, in fact, impossible to create the user www-data:\n> \n> template1=> create user www-data;\n> ERROR: parser: parse error at or near \"-\"\n> template1=> create user 'www-data';\n> ERROR: parser: parse error at or near \"'\"\n\nI believe createuser program did that for me (either that or it was\npreinstalled; I don't recall.) 
I did not issue a direct SQL command\nto do it, so I think it is likely that createuser did it.\n\nIn any case:\n\n\ntemplate1=> select usename, usesysid, valuntil from pg_shadow;\nusename |usesysid|valuntil \n--------+--------+----------------------------\npostgres| 31|Sat Jan 31 00:00:00 2037 CST\nwww-data| 33|Sat Jan 31 00:00:00 2037 CST\njgoerzen| 1000|Sat Jan 31 00:00:00 2037 CST\n(3 rows)\n\n\n-- \nJohn Goerzen Linux, Unix programming [email protected] |\nDeveloper, Debian GNU/Linux (Free powerful OS upgrade) www.debian.org |\n----------------------------------------------------------------------------+\n``You'll notice that this scanner, Bill [Gates]...'' <Blue Screen of Death>\n``Whoa!'' <Applause> ``Moving right along....'' -- Microsoft (Comdex\n video at: http://cnn.com/TECH/computing/9804/20/gates.comdex/index.html\n", "msg_date": "27 Apr 1998 20:16:39 -0500", "msg_from": "John Goerzen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User names cannot contain `-'" }, { "msg_contents": "John Goerzen wrote:\n \n >> Were you able to create a user `www-data'?\n >> I get:\n >\n >I believe that the createuser script did it for me. \n\nYes, it does. Very inconsistent!\n\n >> I agree that there is a bug, but it is that the error message is wrong!\n >\n >If grant would permit the username to be quoted in ' characters, then\n >the problem ought to go away, I think.\n\nCurrently, it appears that the authors don't want to change, so it is\nnecessary to specify a postgres user-id when connecting.\n\nHowever there is, as you say in another mail, no convenient way of\ndoing that automatically. We need an environment variable or a\ncommand-line option to specify the user and (optionally) password.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n\nPGP key from public servers; key ID 32B8FAA1\n\n ========================================\n Come to me, all you who labour and are heavily laden, and I will\n give you rest. Take my yoke upon you, and learn from me; for I am\n meek and lowly in heart, and you shall find rest for your souls.\n For my yoke is easy and my burden is light. (Matthew 11: 28-30)\n\n\n", "msg_date": "Tue, 28 Apr 1998 12:04:43 +0200", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bug#21681: postgresql: Doesn't allow granting to www-data " }, { "msg_contents": "On 27 Apr 1998, John Goerzen wrote:\n\n> \"Oliver Elphick\" <[email protected]> writes:\n> \n> > I have a bug reported on the Debian package of 6.3.2:\n> > \n> > > videotapes=> grant all on tapes to www-data;\n> > > ERROR: aclparse: non-existent user \"www\"\n> > \n> > It is, in fact, impossible to create the user www-data:\n> > \n> > template1=> create user www-data;\n> > ERROR: parser: parse error at or near \"-\"\n> > template1=> create user 'www-data';\n> > ERROR: parser: parse error at or near \"'\"\n> \n> I believe createuser program did that for me (either that or it was\n> preinstalled; I don't recall.) 
I did not issue a direct SQL command\n> to do it, so I think it is likely that createuser did it.\n> \n> In any case:\n> \n> \n> template1=> select usename, usesysid, valuntil from pg_shadow;\n> usename |usesysid|valuntil \n> --------+--------+----------------------------\n> postgres| 31|Sat Jan 31 00:00:00 2037 CST\n> www-data| 33|Sat Jan 31 00:00:00 2037 CST\n> jgoerzen| 1000|Sat Jan 31 00:00:00 2037 CST\n> (3 rows)\n\n\tThis might have already been gone over, but if this was an upgrade\nfrom a previous release, its possible that this was created with a\n'dump/reload'?\n\n\n", "msg_date": "Tue, 28 Apr 1998 08:35:29 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: User names cannot contain `-'" }, { "msg_contents": "> >> Were you able to create a user `www-data'?\n> >I believe that the createuser script did it for me.\n> Yes, it does. Very inconsistent!\n> >> I agree that there is a bug, but it is that the error message is \n> >> wrong!\n> >If grant would permit the username to be quoted in ' characters, then\n> >the problem ought to go away, I think.\n> \n> Currently, it appears that the authors don't want to change, so it is\n> necessary to specify a postgres user-id when connecting.\n\n\"Don't want to change\"? Probably not. We're trying to figure out how to\ncope with an ever-increasing number of interested users _and_\ndevelopers, and don't always react quickly to good suggestions.\n\nThe topic just came up recently, as I recall, and your suggestions are\ngood. Do you really want the patch applied which disables the more\ngeneral user names, or do you want to move more slowly and try to get\nfull user names in v6.4 (we have several months to get this right; in\nfact we may already have them; see below :)\n\n> However there is, as you say in another mail, no convenient way of\n> doing that automatically. We need an environment variable or a\n> command-line option to specify the user and (optionally) password.\n\nSorry, I didn't follow the whole discussion. Is the problem only with\nexplicit CREATE USER and GRANT commands in SQL, or are there other\ninterfaces which would show problems too (you mention command-line\noptions above, but I don't know to what).\n\nOh, I just tried something:\n\ntgl=> create user \"hi-there\";\nCREATE USER\ntgl=> select usename from pg_user;\nusename\n--------\npostgres\ntgl\nhi-there\n(3 rows)\n\nIsn't this what you want?? I haven't figured out how to get GRANT to\nwork, but it seems to swallow the double-quoted user name as it\nshould...\n\n - Tom\n\nbtw, how is it going with the docs conversion, Oliver? I'd expect that\nit would keep you out of trouble for a little while at least :)\n", "msg_date": "Tue, 28 Apr 1998 12:38:29 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Bug#21681: postgresql: Doesn't allow granting to\n\twww-data" }, { "msg_contents": "The Hermit Hacker wrote:\n >> template1=> select usename, usesysid, valuntil from pg_shadow;\n >> usename |usesysid|valuntil \n >> --------+--------+----------------------------\n >> postgres| 31|Sat Jan 31 00:00:00 2037 CST\n >> www-data| 33|Sat Jan 31 00:00:00 2037 CST\n >...\n >\n >\tThis might have already been gone over, but if this was an upgrade\n >from a previous release, its possible that this was created with a\n >'dump/reload'?\n\nCreateuser does not use the CREATE USER command. It updates the\nsystem tables directly. 
This enables it to be used to specify a user id,\nwhich CREATE USER does not allow. However, it also allows inconsistencies\nto arise, as here. So createuser can put in user names that CREATE USER\ncannot and that GRANT does not recognise.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n\nPGP key from public servers; key ID 32B8FAA1\n\n ========================================\n Come to me, all you who labour and are heavily laden, and I will\n give you rest. Take my yoke upon you, and learn from me; for I am\n meek and lowly in heart, and you shall find rest for your souls.\n For my yoke is easy and my burden is light. (Matthew 11: 28-30)\n\n\n", "msg_date": "Tue, 28 Apr 1998 16:47:20 +0200", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: User names cannot contain `-' " }, { "msg_contents": "\"Thomas G. Lockhart\" wrote:\n >\"Don't want to change\"? Probably not. We're trying to figure out how to\n >cope with an ever-increasing number of interested users _and_\n >developers, and don't always react quickly to good suggestions.\n \nDon't take it personally! I'm happy to accept the developers' decisions,\nsince you know the code much better.\n\n >The topic just came up recently, as I recall, and your suggestions are\n >good. Do you really want the patch applied which disables the more\n >general user names,\n\nI would prefer to have Unix user names allowed throughout. However, if\nthe developers decide not to do this, the patch to createuser is\nrequired to maintain consistency. From Bruce's original reply, I had\nthought that was the position.\n\n > or do you want to move more slowly and try to get\n >full user names in v6.4 (we have several months to get this right; in\n >fact we may already have them; see below :)\n\nBy all means, lets have them! \n\n >> However there is, as you say in another mail, no convenient way of\n >> doing that automatically. We need an environment variable or a\n >> command-line option to specify the user and (optionally) password.\n >\n >Sorry, I didn't follow the whole discussion. Is the problem only with\n >explicit CREATE USER and GRANT commands in SQL, or are there other\n >interfaces which would show problems too (you mention command-line\n >options above, but I don't know to what).\n\nSorry; that's what comes of running a three-way discussion. The problem is\nthat you can't (I think) start a connection while supplying another\nuser-name than your login-name, except by the -u option to psql. This\nleads to an interactive prompt for name and password. This is not\nconvenient for CGI scripts on web-servers (which is how the original\nproblem manifested itself.) It seems to be desirable to be able to\nspecify the postgres user name while starting the connection.\n\n >Oh, I just tried something:\n >\n >tgl=> create user \"hi-there\";\n >CREATE USER\n\n >Isn't this what you want?? I haven't figured out how to get GRANT to\n >work, but it seems to swallow the double-quoted user name as it\n >should...\n\nYes it is; I hadn't tried double-quotes, because single-quotes are used\nfor strings - it didn't occur to me! (Incidentally, WHY double-quotes here\ninstead of single-quotes? Surely that's against SQL practice?) 
It doesn't\nwork for GRANT, though, with either kind of quote:\n\n bray-> grant all on address to www-data;\n ERROR: aclparse: non-existent user \"www\"\n bray=> grant all on address to \"www-data\";\n ERROR: aclparse: mode flags must use \"arwR\"\n bray=> grant all on address to 'www-data';\n ERROR: parser: parse error at or near \"'\"\n\n\nOverall, it seems to me that a user-name is just a string, that is used\nas a key into pg_shadow. The SQL92 definition allows it to be a\ncharacter string literal. So there ought to be no problem in specifying\na string rather than an identifier in all the relevant places.\n(I speak in happy ignorance of whatever the real problems may be!)\n\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n\nPGP key from public servers; key ID 32B8FAA1\n\n ========================================\n Come to me, all you who labour and are heavily laden, and I will\n give you rest. Take my yoke upon you, and learn from me; for I am\n meek and lowly in heart, and you shall find rest for your souls.\n For my yoke is easy and my burden is light. (Matthew 11: 28-30)\n\n\n", "msg_date": "Tue, 28 Apr 1998 16:47:25 +0200", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Bug#21681: postgresql: Doesn't allow granting to\n\twww-data" }, { "msg_contents": "> Don't take it personally! I'm happy to accept the developers' \n> decisions, since you know the code much better.\n\nNo offense taken. It's just that you got an opinion, not a \"decision\".\nWhen you are bouncing ideas off of the list, you might get an opinion\nback, but if you really think it should be pursued then it's OK to push\nthe topic to see if it resonates with someone. For something big, it\ndoes usually take a consensus among the core developers to get it\nincorporated, but on smaller issues which fit in with how things already\nare it isn't that complicated.\n\nbtw, we try to not be too thin-skinned; sometimes even polite people\nneed to be blunt to get their point across when we're being dense :)\n\n> >Do you really want the patch applied which disables the more\n> >general user names,\n> I would prefer to have Unix user names allowed throughout.\n\nNot exactly possible with SQL92 grammar; see below...\n\n> (What started this thread?) The problem is\n> that you can't (I think) start a connection while supplying another\n> user-name than your login-name, except by the -u option to psql. This\n> leads to an interactive prompt for name and password.\n\nThis sounds like a feature to add to libpq and/or psql...\n\n> >tgl=> create user \"hi-there\";\n> >CREATE USER\n> I hadn't tried double-quotes, because single-quotes are \n> used for strings - it didn't occur to me! (Incidentally, WHY \n> double-quotes here instead of single-quotes? Surely that's against SQL \n> practice?)\n\nI don't think so; syntactically an authorization ID seems to resemble\nother identifiers, not literal strings. Identifiers are allowed to be\n\"delimited\", when means surrounded by double-quotes, to allow mixed case\nand weird characters into an identifier.\n\n> It doesn't work for GRANT, though, with either kind of quote:\n\nYeah, well, that is probably a smaller problem. Perhaps there is a query\nburied in the backend which needs to have some double-quoting applied.\nWe can look into it, eh?\n\n> Overall, it seems to me that a user-name is just a string, that is \n> used as a key into pg_shadow. 
The SQL92 definition allows it to be a\n> character string literal.\n\nI don't see this in my reference books. A character string literal is\ndelimited by single-quotes; never run across it for a user name\n(actually an \"authorization ID\" in SQL92).\n\n> So there ought to be no problem in specifying\n> a string rather than an identifier in all the relevant places.\n> (I speak in happy ignorance of whatever the real problems may be!)\n\nYup :)\n\nThe SQL92 character set specifically allows only a few characters other\nthan [A-Za-z0-9] in non-delimited identifiers. And, it specifically\ndefines most other interesting characters (including \"-\") as explicit\ndelimiters.\n\nI think we should concentrate on making the features work as well as\npossible within the SQL92 framework, and within the limitations of our\nlex/yacc grammar.\n\nDelimited identifiers are afaik the way to do this...\n\n - Tom\n", "msg_date": "Wed, 29 Apr 1998 02:19:07 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Bug#21681: postgresql: Doesn't allow granting to\n\twww-data" }, { "msg_contents": "More info on the GRANT problem with user names containing a minus sign:\n\nIt turns out that \"+-=\" are used inside an ACL string constructed\ninternally in the backend. So, putting one of those characters into the\nuser name causes what follows to be misinterpreted.\n\n * aclparse\n * Consumes and parses an ACL specification of the form:\n * [group|user] [A-Za-z0-9]*[+-=][rwaR]*\n\nI think that we would need to restructure this internal information to\nmake the user name field unambiguous no matter its contents.\n\nBruce, can you put this on the ToDo list? In the meantime I would\nsuggest _not_ restricting the allowable user names elsewhere, since this\nis a bug fix kind of thing...\n\n - Tom\n", "msg_date": "Wed, 29 Apr 1998 02:58:53 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Bug#21681: postgresql: Doesn't allow granting to\n\twww-data" }, { "msg_contents": "> \n> More info on the GRANT problem with user names containing a minus sign:\n> \n> It turns out that \"+-=\" are used inside an ACL string constructed\n> internally in the backend. So, putting one of those characters into the\n> user name causes what follows to be misinterpreted.\n> \n> * aclparse\n> * Consumes and parses an ACL specification of the form:\n> * [group|user] [A-Za-z0-9]*[+-=][rwaR]*\n> \n> I think that we would need to restructure this internal information to\n> make the user name field unambiguous no matter its contents.\n> \n> Bruce, can you put this on the ToDo list? In the meantime I would\n> suggest _not_ restricting the allowable user names elsewhere, since this\n> is a bug fix kind of thing...\n\nAdded to TODO:\n\n* Restructure storing of GRANT permission information to allow +-=\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 28 Apr 1998 23:56:21 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Bug#21681: postgresql: Doesn't allow granting to" }, { "msg_contents": "To bring everything back to base here...In one of the posts CC'd to\nme, somebody wondered how this mess got started, so let me recap...\n\nOn Debian, a webserver runs under the user www-data. 
I had created a\nwww-data user for PostgreSQL using createuser, so that CGI scripts\nwould be able to access the database (using ident authentication).\nHowever, the newer Postgres releases default to not granting select to \npublic (wise), so it did not work on the first try. I looked into it, \nand remembered -- Ah ha! -- that I needed to grant all (in this case)\nto www-data. However, psql wouldn't parse my grant command due to the \nhyphen in the www-data username. I had no problem creating the user\n(although it appears now that the SQL way of creating the user may be\nbroken/strange). I also had no problem getting the CGI to connect to\nthe database once www-data was able to access the tables. (I was able \nto use a temporary workaround of granting all to public on the\nspecific tables, since nobody else happens to have a user account with \nthis postgres installation).\n\nThe problem, then, was the GRANT command for me. I reported this\nusing Debian's bug-tracking system as per usual procedure, and Oliver\nsent it on up to y'all as per usual procedure.\n\n\"Oliver Elphick\" <[email protected]> writes:\n\n> However there is, as you say in another mail, no convenient way of\n> doing that automatically. We need an environment variable or a\n> command-line option to specify the user and (optionally) password.\n\nident will work fine for the CGI. It is just that I need to be able\nto grant access to the ident'd user :-)\n\nAnd finally, I hope you excuse this paragraph in the Debian logs\nOliver... :-)\n\nFYI, while the -hackers are getting CCd anyway, I ought to mention\nhere how amazingly cool I think Postgres is. At work, I am currently\nwriting a multiplatform Perl/Tk application using DBI::Pg as the\ndatabase driver, and Postgres running on BSDi. (One of my jobs is at\nan ISP, where BSDi is in heavy use.) Besides being free (yay! it's a \nsmallish ISP), it also is the only one that runs natively on BSDi\n(this is the BIG win), excepting the mSQLs of the world that we would\nrather ignore :-) Another person there is investigating moving all of \nthe billing information over from flatfile ASCII databases to a\nPostgres database, and I will be working with moving usage accounting\nsoftware from flatfile databases to Postgres, as well as my current\ncall tracking project. Nobody has officially said that we will stick\nwith Postgres, but I think it is likely. Good job guys! (At home, I\ndo the relatively simple task of indexing my videotape collection with \na couple of tables with no more than 700 rows each, compared with\n15000/table with 10 tables at work <g>)\n\n-- \nJohn Goerzen Linux, Unix programming [email protected] |\nDeveloper, Debian GNU/Linux (Free powerful OS upgrade) www.debian.org |\n----------------------------------------------------------------------------+\n``You'll notice that this scanner, Bill [Gates]...'' <Blue Screen of Death>\n``Whoa!'' <Applause> ``Moving right along....'' -- Microsoft (Comdex\n video at: http://cnn.com/TECH/computing/9804/20/gates.comdex/index.html\n", "msg_date": "29 Apr 1998 00:13:52 -0500", "msg_from": "John Goerzen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug#21681: postgresql: Doesn't allow granting to www-data" }, { "msg_contents": "Hi all,\n\nI think that PostgreSQL money type should be very useful if we could\nremove the dollar sign. We can't use it with Lira/Peseta/Mark etc.\nIn europe now we have Euro. If we remove the $ it will be useful otherwise\nwe have to rename it to 'dollar'. 
;-)\n----\nPS: Is there a reason to left justify it ?\n\nselect dollar from prova;\ndollar\n----------\n$300.32\n$302.21\n$312.10\n$12,312.10\n$12,386.00\n$12,312.00\n Thanks, Jose'\n\n", "msg_date": "Mon, 11 May 1998 13:33:55 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": false, "msg_subject": "money or dollar type" }, { "msg_contents": "\"Jose' Soares Da Silva\" wrote:\n >I think that PostgreSQL money type should be very useful if we could\n >remove the dollar sign. We can't use it with Lira/Peseta/Mark etc.\n >In europe now we have Euro. If we remove the $ it will be useful otherwise\n >we have to rename it to 'dollar'. ;-)\n \nCompile with LANG support and set the LANG environment variable for the\npostmaster. Restart the postmaster.\n\nThen you get your own currency symbol:\n\n\njunk=> select * from moneybag;\nwho|amount \n---+-------\nA |£250.00\n(1 row)\n\n\nBut I don't like the fact that this has to be done in the backend. It\nmeans that the currency of money is tied to the LANG environment of the\npostmaster, rather than to the data itself. One of the characteristics of\nmoney is the currency in which it is denominated; this ought to be part\nof the datatype. It would then be invalid to perform arithmetical\noperations between different currencies, which would correctly reflect\nthe real world.\n\nTherefore, I propose that the money type be extended to include a\ncurrency definition, the default being that of the backend environment.\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"Search me, O God, and know my heart; try me, and know \n my thoughts. And see if there be any wicked way in me,\n and lead me in the way everlasting.\" \n Psalms 139:23,24 \n\n\n", "msg_date": "Mon, 11 May 1998 16:14:44 +0200", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [QUESTIONS] money or dollar type " }, { "msg_contents": "> I think that PostgreSQL money type should be very useful if we could\n> remove the dollar sign. We can't use it with Lira/Peseta/Mark etc.\n> In europe now we have Euro. If we remove the $ it will be useful \n> otherwise we have to rename it to 'dollar'. ;-)\n\nHave you tried compiling with \"USE_LOCALE\" turned on and with the right\nsetting for LC_xxx? The code is supposed to use local conventions, but I\ndon't know if it works in the way you want. I agree that it should...\n\n> PS: Is there a reason to left justify it ?\n\nThat is just an artifact of the column formatting; all columns are left\njustified in psql afaik.\n\n - Tom\n", "msg_date": "Mon, 11 May 1998 14:29:52 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] money or dollar type" }, { "msg_contents": "\"Oliver Elphick\" <[email protected]> writes:\n> Therefore, I propose that the money type be extended to include a\n> currency definition, the default being that of the backend environment.\n\nThis is not a bad idea; it would address some problems that I have in\nmy application too. (What I was planning to do was store a separate\ncurrency field associated with every money amount, but I think Oliver's\nidea is better.)\n\nHowever, what money *really* needs is more precision. Has there been\nany thought of working on the full SQL exact-numeric package? 
(If I\nread what I've seen correctly, that boils down to user-specifiable\ndecimal field widths, right?) A variable-width money type including\na currency indicator would actually solve my problem...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 11 May 1998 11:42:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] money or dollar type " }, { "msg_contents": "\"Jose' Soares Da Silva\" wrote:\n >On Mon, 11 May 1998, Oliver Elphick wrote:\n >\n >> \"Jose' Soares Da Silva\" wrote:\n >> >I think that PostgreSQL money type should be very useful if we could\n >> >remove the dollar sign. We can't use it with Lira/Peseta/Mark etc.\n >> >In europe now we have Euro. If we remove the $ it will be useful otherw\n >ise\n >> >we have to rename it to 'dollar'. ;-)\n >> \n >> Compile with LANG support and set the LANG environment variable for the\n >> postmaster. Restart the postmaster.\n >> \n >> Then you get your own currency symbol:\n >What's happening with EURO sign?\n \nI guess that will need a tweak to libc to support it - I wonder if the\nglibc developers have thought about it?\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"Search me, O God, and know my heart; try me, and know \n my thoughts. And see if there be any wicked way in me,\n and lead me in the way everlasting.\" \n Psalms 139:23,24 \n\n\n", "msg_date": "Mon, 11 May 1998 18:30:07 +0200", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [QUESTIONS] money or dollar type " }, { "msg_contents": "On Mon, 11 May 1998, Thomas G. Lockhart wrote:\n\n> > I think that PostgreSQL money type should be very useful if we could\n> > remove the dollar sign. We can't use it with Lira/Peseta/Mark etc.\n> > In europe now we have Euro. If we remove the $ it will be useful \n> > otherwise we have to rename it to 'dollar'. ;-)\n> \n> Have you tried compiling with \"USE_LOCALE\" turned on and with the right\n> setting for LC_xxx? The code is supposed to use local conventions, but I\n> don't know if it works in the way you want. I agree that it should...\nThanks Tom, I will try it.\n> \n> > PS: Is there a reason to left justify it ?\n> \n> That is just an artifact of the column formatting; all columns are left\n> justified in psql afaik.\n ^^^^^\nSorry Tom, I can't find the word 'afaik' on my dictionary.\n\nAny way, seems that psql justify numbers to the right and text to the left, \nmoney is numeric then I expect that psql justify it to the right.\nIt has also a little problem justifying varchars, look:\n\nprova=> select var as my_varchar from prova where var = '12';\nmy_varchar\n----------\n 12 <--right justified\n(1 row)\n\nprova=> select var as my_varchar from prova;\nmy_varchar\n----------\n12 \t\t<--left justified, this time ???\na12\na12\n(3 rows)\n Jose'\n\n", "msg_date": "Mon, 11 May 1998 17:24:10 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] money or dollar type" }, { "msg_contents": "On Mon, 11 May 1998, Oliver Elphick wrote:\n\n> \"Jose' Soares Da Silva\" wrote:\n> >I think that PostgreSQL money type should be very useful if we could\n> >remove the dollar sign. We can't use it with Lira/Peseta/Mark etc.\n> >In europe now we have Euro. If we remove the $ it will be useful otherwise\n> >we have to rename it to 'dollar'. 
;-)\n> \n> Compile with LANG support and set the LANG environment variable for the\n> postmaster. Restart the postmaster.\n> \n> Then you get your own currency symbol:\nWhat's happening with EURO sign?\n> \n> \n> junk=> select * from moneybag;\n> who|amount \n> ---+-------\n> A |£250.00\n> (1 row)\n> \n> \n> But I don't like the fact that this has to be done in the backend. It\n> means that the currency of money is tied to the LANG environment of the\n> postmaster, rather than to the data itself. One of the characteristics of\n> money is the currency in which it is denominated; this ought to be part\n> of the datatype. It would then be invalid to perform arithmetical\n> operations between different currencies, which would correctly reflect\n> the real world.\n> \n> Therefore, I propose that the money type be extended to include a\n> currency definition, the default being that of the backend environment.\nI agree, currently we can have only one currency definition. We can't\nhave for example Dollars and Pesetas in the same database.\n Jose'\n\n", "msg_date": "Mon, 11 May 1998 17:38:33 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [QUESTIONS] money or dollar type " }, { "msg_contents": "> However, what money *really* needs is more precision. Has there been\n> any thought of working on the full SQL exact-numeric package?\n\nYes. The problem is that afaik there is no variable-width exact numeric\npackage available. BCD arithmetic could work if a package were\navailable. The GNU extended precision package looks interesting, but we\nwould have to translate from a string to internal format for every\noperation, or somehow store the internal representation in each tuple\nwhich seems messy.\n\nI'm thinking of moving the 64-bit integer contrib package I wrote into\nthe native backend as a foundation for the numeric/decimal data types.\nWe would need to get feedback from more of the supported platforms on\nhow to do 64-bit integers (a few processors have them as a \"long\" type,\nand the GNU 32-bit compilers seem to allow a \"long long\" declaration,\nbut I don't know what other systems do for this). \n\nThe only other thing which would need to be handled is how to pass along\nthe two value precision/scale parameters which are a part of the\ndeclaration for these types. I've just finished working on the type\nconversion algorithms so understand the current \"atttypmod\" field a bit\nbetter, but have not decided how to extend it to multiple fields.\n\n - Tom\n", "msg_date": "Tue, 12 May 1998 05:17:43 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] money or dollar type" }, { "msg_contents": "On Mon, 11 May 1998, Thomas G. Lockhart wrote:\n\n> > I think that PostgreSQL money type should be very useful if we could\n> > remove the dollar sign. We can't use it with Lira/Peseta/Mark etc.\n> > In europe now we have Euro. If we remove the $ it will be useful \n> > otherwise we have to rename it to 'dollar'. ;-)\n> \n> Have you tried compiling with \"USE_LOCALE\" turned on and with the right\n> setting for LC_xxx? The code is supposed to use local conventions, but I\n> don't know if it works in the way you want. I agree that it should...\n> \n> > PS: Is there a reason to left justify it ?\n> \n> That is just an artifact of the column formatting; all columns are left\n> justified in psql afaik.\n> \n\nSeems there's some problems with type 'money'... 
I can't multiply or\ndivide 'money' types, and can't cast it properly to other data types.\n\nprova=> select ename,job,hiredate, sal from employees;\nename |job | hiredate|sal\n------+----------+----------+---------\nALLEN |SALESMAN |1981-02-20|$1,600.00\nBLAKE |MANAGER |1981-05-01|$2,850.00\nJONES |CLERK |1981-12-03|$950.00\nMILLER|SALESMAN |1981-09-28|$1,250.00\nCLARK |SALESMAN |1981-09-08|$1,500.00\nKING |SALESMAN |1981-02-22|$1,250.00\n(6 rows)\n\nprova=> select ename,job,hiredate, sal*1.1 as dream from employees;\nERROR: There is no operator '*' for types 'money' and 'money'\n You will either have to retype this query using an explicit cast,\n or you will have to define the operator using CREATE OPERATOR\n\nprova=> select ename,job,hiredate,sal, sal::float as dream from employees;\nename |job | hiredate|sal | dream\n------+----------+----------+---------+----------\nALLEN |SALESMAN |1981-02-20|$1,600.00|1079143604\nBLAKE |MANAGER |1981-05-01|$2,850.00|1079143508\nJONES |CLERK |1981-12-03|$950.00 |1079143412\nMILLER|SALESMAN |1981-09-28|$1,250.00|1079143316\nCLARK |SALESMAN |1981-09-08|$1,500.00|1079143220\nKING |SALESMAN |1981-02-22|$1,250.00|1079143120\n(6 rows)\n\nIs this a bug ?\n Jose'\n\n", "msg_date": "Tue, 12 May 1998 10:07:53 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] money or dollar type" }, { "msg_contents": "Thus spake Jose' Soares Da Silva\n> I think that PostgreSQL money type should be very useful if we could\n> remove the dollar sign. We can't use it with Lira/Peseta/Mark etc.\n> In europe now we have Euro. If we remove the $ it will be useful otherwise\n> we have to rename it to 'dollar'. ;-)\n\nI have been trying to remove this from the code. For some reason I can't\ncompile the system (something about wrong number of args to gettimeofday\nin backend/tcop/postgres.c) but in the meantime, have you tried the\nUSE_LOCALE define? That should at least switch it to your local money\nindicator.\n\n> PS: Is there a reason to left justify it ?\n\nNot that I can think of but I'm not sure where you change that.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Tue, 12 May 1998 06:37:39 -0400 (EDT)", "msg_from": "[email protected] (D'Arcy J.M. Cain)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] money or dollar type" }, { "msg_contents": "Thus spake Thomas G. Lockhart\n> > PS: Is there a reason to left justify it ?\n> \n> That is just an artifact of the column formatting; all columns are left\n> justified in psql afaik.\n\nInts are right formatted so it must be possible to do.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Tue, 12 May 1998 06:41:23 -0400 (EDT)", "msg_from": "[email protected] (D'Arcy J.M. Cain)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] money or dollar type" }, { "msg_contents": "> Seems there's some problems with type 'money'... 
I can't multiply or\n> divide 'money' types, and can't cast it properly to other data types.\n> Is this a bug ?\n\nWith the new type conversion code:\n\ntgl=> create table mm (m money);\nCREATE\ntgl=> insert into mm values ('$1600.00');\nINSERT 268105 1\ntgl=> select m * 1.1 from mm;\n?column?\n---------\n$1,760.00\n(1 row)\n\nBut,\n\ntgl=> select cast(m as float8) from mm;\n float8\n----------\n1077124288\n(1 row)\n\nSo there is some funny interaction on the casting, the same as you found\nin v6.3.2 (and presumably forever), which I will look into...\n\n - Tom\n", "msg_date": "Tue, 12 May 1998 13:38:39 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] money or dollar type" }, { "msg_contents": "> prova=> select var as my_varchar from prova where var = '12';\n> my_varchar\n> ----------\n> 12 <--right justified\n> (1 row)\n> \n> prova=> select var as my_varchar from prova;\n> my_varchar\n> ----------\n> 12 \t\t<--left justified, this time ???\n> a12\n> a12\n> (3 rows)\n> Jose'\n\nI can't reproduce this here.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 12 May 1998 12:47:08 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] money or dollar type" }, { "msg_contents": "> The only other thing which would need to be handled is how to pass along\n> the two value precision/scale parameters which are a part of the\n> declaration for these types. I've just finished working on the type\n> conversion algorithms so understand the current \"atttypmod\" field a bit\n> better, but have not decided how to extend it to multiple fields.\n> \n\nI have thought about this. Just bitmask the 16-bits to two 8-bit\nquantities. Give you max 256 length with 256 currencies.\n\nThe only place they are used is in the type-specific *.c function, so\nyou just us the mask there, or create a union of :8 and :8 and reference\nit that way.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 12 May 1998 12:54:09 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] money or dollar type" }, { "msg_contents": "On Tue, 12 May 1998, Bruce Momjian wrote:\n\n> > prova=> select var as my_varchar from prova where var = '12';\n> > my_varchar\n> > ----------\n> > 12 <--right justified\n> > (1 row)\n> > \n> > prova=> select var as my_varchar from prova;\n> > my_varchar\n> > ----------\n> > 12 \t\t<--left justified, this time ???\n> > a12\n> > a12\n> > (3 rows)\n> > Jose'\n> \n> I can't reproduce this here.\nSeems that PostgreSQL justify data based on data not on data type.\nMy environment is:\n PostgreSQL v6.3\n Linux 2.0.33\n\nalso Daniel A. 
Gauthier <[email protected]>\nreported the same problem.\n\nhere my script:\n\ncreate table prova ( my_varchar varchar );\nCREATE\ninsert into prova values ('12');\nINSERT 528521 1\ninsert into prova values ('a12');\nINSERT 528522 1\nselect * from prova where my_varchar = '12';\nmy_varchar\n----------\n 12\n(1 row)\n\nselect * from prova;\nmy_varchar\n----------\n12\na12\n(2 rows)\n\nEOF\n Jose'\n\n", "msg_date": "Wed, 13 May 1998 09:55:01 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] money or dollar type" }, { "msg_contents": "> \n> create table prova ( my_varchar varchar );\n> CREATE\n> insert into prova values ('12');\n> INSERT 528521 1\n> insert into prova values ('a12');\n> INSERT 528522 1\n> select * from prova where my_varchar = '12';\n> my_varchar\n> ----------\n> 12\n> (1 row)\n> \n> select * from prova;\n> my_varchar\n> ----------\n> 12\n> a12\n> (2 rows)\n> \n\nOK, I can reproduce this now. I would love to know why it is happening.\nSeems very strange to me.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 13 May 1998 11:51:12 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] money or dollar type" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I have thought about this. Just bitmask the 16-bits to two 8-bit\n> quantities. Give you max 256 length with 256 currencies.\n\nUh, no: what we were discussing was the total width and decimal place\nposition of exact numerics. Probably, 255 numeric digits are enough\nfor practical purposes, so I don't feel an urgent need to make atttypmod\nwider for this. But if you want to make it 32 bits, that would\neliminate any concern --- we'd have room for 64k-digit numerics...\n\nIf we're going to associate currencies with the money datatype, the\ncurrency needs to be part of the data, not part of the column type.\nI need to be able to store amounts of different currencies in the same\ncolumn. (Otherwise, a transaction log would need a separate column for\nevery possible currency, all but one of which would be null in any given\nrow. Ick.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 13 May 1998 13:36:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] money or dollar type " }, { "msg_contents": "> \n> Bruce Momjian <[email protected]> writes:\n> > I have thought about this. Just bitmask the 16-bits to two 8-bit\n> > quantities. Give you max 256 length with 256 currencies.\n> \n> Uh, no: what we were discussing was the total width and decimal place\n> position of exact numerics. Probably, 255 numeric digits are enough\n> for practical purposes, so I don't feel an urgent need to make atttypmod\n> wider for this. But if you want to make it 32 bits, that would\n> eliminate any concern --- we'd have room for 64k-digit numerics...\n> \n> If we're going to associate currencies with the money datatype, the\n> currency needs to be part of the data, not part of the column type.\n> I need to be able to store amounts of different currencies in the same\n> column. (Otherwise, a transaction log would need a separate column for\n> every possible currency, all but one of which would be null in any given\n> row. 
Ick.)\n\nYep, good point.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 13 May 1998 14:06:04 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] money or dollar type" }, { "msg_contents": "Was looking through the new docs and noticed that the example for\ncreating a database in an alternate location has trouble:\n\n $ mkdir private_db\n $ initlocation ~/private_db\n Creating Postgres database system directory\n/home/olly/private_db/base\n\n $ chmod a+rx private_db\n $ chmod a+rwx private_db/base\n $ psql\n ...\n\nThe chmod's are a Bad Idea (tm) since it blows the security assumptions\nfor Postgres. The protections are explicitly set by initlocation to lock\ndown these directories.\n\nI guess that the alternate location setup (initlocation) was really\nmeant as a tool for the Postgres administrator, not for individual\nusers. If users create alternate locations, and then for example create\na database and then delete the directories from the file system rather\nthan through Postgres things will become ugly. The assumption is that\nthe administrator is likely to be more careful since she is likely to be\nmore aware of the issues.\n\nI have (or had) some #ifdef code which _requires_ that environment\nvariables be used to specify alternate locations, rather than allowing\nabsolute paths also. This helps ensure that locations are used which\nhave been set up by the Postgres administrator, since the admin must\nhave defined the environment variables for the backend before it starts\nup.\n\nI'm not sure how to write an example which had initlocation being run by\nsomeone other than the Postgres superuser while still being clear on\nthese security/integrity issues. What would you suggest?\n\n - Tom\n", "msg_date": "Fri, 15 May 1998 13:43:16 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "CREATE DATABASE" }, { "msg_contents": "\"Thomas G. Lockhart\" wrote:\n >Was looking through the new docs and noticed that the example for\n >creating a database in an alternate location has trouble:\n >\n > $ mkdir private_db\n > $ initlocation ~/private_db\n > Creating Postgres database system directory\n >/home/olly/private_db/base\n >\n > $ chmod a+rx private_db\n > $ chmod a+rwx private_db/base\n > $ psql\n > ...\n >\n >The chmod's are a Bad Idea (tm) since it blows the security assumptions\n >for Postgres. The protections are explicitly set by initlocation to lock\n >down these directories.\n\nWell, I documented what I actually had to do to make it work.\n\nI got the impression that this particular feature had not been properly\nthought out!\n\n >\n >I guess that the alternate location setup (initlocation) was really\n >meant as a tool for the Postgres administrator, not for individual\n >users. If users create alternate locations, and then for example create\n >a database and then delete the directories from the file system rather\n >than through Postgres things will become ugly.\n\nShouldn't it cope with this possibility anyway? (The mad systems\nadministrator....) 
What would happen to PostgreSQl if a database\ndirectory disappeared from under it?\n\n > The assumption is that\n >the administrator is likely to be more careful since she is likely to be\n >more aware of the issues.\n \nIf so, the restriction should be placed in the code: only the postgres\nadministrator should be allowed to run the command. That would get rid\nof the permissions problems. On the other hand, it would remove most of\nthe point of having a separate location (seeing that it could just as\nwell be implemented by symbolic links).\n\n >I have (or had) some #ifdef code which _requires_ that environment\n >variables be used to specify alternate locations, rather than allowing\n >absolute paths also. This helps ensure that locations are used which\n >have been set up by the Postgres administrator, since the admin must\n >have defined the environment variables for the backend before it starts\n >up.\n >\n >I'm not sure how to write an example which had initlocation being run by\n >someone other than the Postgres superuser while still being clear on\n >these security/integrity issues. What would you suggest?\n >\n\nThe first thing to do is decide whether this feature is required at all.\nI'm trying to think of things in its favour:\n\n1. It enables the administrator to direct a large database to a partition\n with sufficient space.\n\n2. It allows especially sensitive data to be stored on an encrypted file\n system.\n\n3. It allows a database to be created on removable media. (Is this a good\n thing?)\n\n\nIf run by a user:\n\n4. It might give the chance of having data that is secure against the\n administrator, but only if a backend can be launched that is owned by\n the user rather than by the administrator.\n\nSince 4 isn't possible at the moment, there seems to be no reason for \nallowing a user to run this command, even if he is otherwise allowed to\ncreate databases. Removing the ability from users means that the\ndocumentation can be simplified by documenting it for administrative\nuse only.\n\nEven simpler is to remove it altogether and let the administrator\nhandle 1-3 by unix commands and symbolic links.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"Let not your heart be troubled: ye believe in God, \n believe also in me.\" John 14:1 \n\n\n", "msg_date": "Fri, 15 May 1998 17:28:28 +0200", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CREATE DATABASE " }, { "msg_contents": "> Well, I documented what I actually had to do to make it work.\n\nWell, not to cause trouble here but it will work as I intended, and\ndidn't work as you tried to use it. That's not so bad, and the usage is\ndocumented in the administrator's guide. But your points below are good\nand should be used to improve things.\n\n> Shouldn't it cope with this possibility anyway? (The mad systems\n> administrator....) 
What would happen to PostgreSQl if a database\n> directory disappeared from under it?\n\nPostgres, via the master database, would think that the database\nexisted, but would not be able to connect and not be able to delete it.\nThe administrator would have to brute-force copy a database (template1\nwould do) into the removed area, at which time Postgres could remove it\nand clean up internally.\n\n> > The assumption is that\n> >the administrator is likely to be more careful since she is likely \n> >to be more aware of the issues.\n> If so, the restriction should be placed in the code: only the postgres\n> administrator should be allowed to run the command. That would get \n> rid of the permissions problems. On the other hand, it would remove \n> most of the point of having a separate location (seeing that it could \n> just as well be implemented by symbolic links).\n\nNo, there is a point to having a separate location; that and the\nsoftlink kludge are two different things. The \"initlocation\" alternate\nlocation can be used by any database user (with createdb privileges) to\ncreate and maintain a new database, once set up by the Postgres\nadministrator.\nSoft links don't do that; Postgres doesn't know about them and can't\ncreate/delete/create databases in multiple locations by using them\nafaik.\n\n\"initlocation\" isn't actually the issue; the real issue is whether\nalternate locations not created by the Postgres superuser should be\nallowed at all. I'm thinking not, given these issues which would have to\nbe kept in mind by every user trying to use the command themselves.\n\n> The first thing to do is decide whether this feature is required at \n> all. I'm trying to think of things in its favour:\n> 1. It enables the administrator to direct a large database to a \n> partition with sufficient space.\n\nbingo. This is sufficient to make the feature desirable, but the\ndownsides should be addressed.\n\n> 2. It allows especially sensitive data to be stored on an encrypted \n> file system.\n> 3. It allows a database to be created on removable media. (Is this a \n> good thing?)\n\nRemovable media are not a good thing for this mechanism. Postgres used\nto have direct support for databases on removable media (tape); this\nshould be resurrected if someone wants the feature.\n\n> If run by a user:\n> 4. It might give the chance of having data that is secure against the\n> administrator, but only if a backend can be launched that is owned \n> by the user rather than by the administrator.\n\nIn that case the user would be the administrator. Postgres allows that,\nand allows multiple servers on the same machine (the listening port\nwould need to be different for each one).\n\n> Since 4 isn't possible at the moment, there seems to be no reason for\n> allowing a user to run this command, even if he is otherwise allowed \n> to create databases. 
Removing the ability from users means that the\n> documentation can be simplified by documenting it for administrative\n> use only.\n\nI did document it for administrators only ;)\n\n> Even simpler is to remove it altogether and let the administrator\n> handle 1-3 by unix commands and symbolic links.\n\nNo, that is not a solid option.\n\nAt the moment, initlocation is just a shell utility which sets up a\ndirectory area and changes permissions to be appropriate for Postgres\nuse *if it is run by the Postgres superuser*.\n\nIf we are concerned that alternate database locations can be misused and\nshould be under tighter control of the Postgres superuser, then it would\nbe easy to take out (or #ifdef) the code which allows absolute path\nnames and instead require that all paths for alternate locations be\nspecified as environment variables (that is already supported and was\nalways preferable imho).\n\nThat way the Postgres administrator can run initlocation, define an\nenvironment variable pointing at that alternate location, and then start\nup the backend. Users would have only those previously defined areas to\nwork with.\n\nI'm being pretty direct here in my comments because you are raising very\ngood issues and we haven't had much discussion of them in the past. I am\nnot intending to be difficult or obstinant, just trying to clear things\nup and get a plan for improving things.\n\nComments (on the issues, and/or whether I'm being a pain :)?\n\n - Tom\n", "msg_date": "Sat, 16 May 1998 02:00:07 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE DATABASE" }, { "msg_contents": "\"Thomas G. Lockhart\" wrote:\n >> Well, I documented what I actually had to do to make it work.\n >\n >Well, not to cause trouble here but it will work as I intended, and\n >didn't work as you tried to use it. That's not so bad, and the usage is\n >documented in the administrator's guide.\n\nI was actually working from the existing man pages, but now that I check,\nit isn't sufficiently clear from the guide that initlocation is an\nadministrator-only command. (Sections II.7, III.22) Even reading them in\nthe light of your comments, I cannot see that this restriction is stated.\n(Or have you updated this since 6.3.2?)\n...\n >\"initlocation\" isn't actually the issue; the real issue is whether\n >alternate locations not created by the Postgres superuser should be\n >allowed at all. I'm thinking not, given these issues which would have to\n >be kept in mind by every user trying to use the command themselves.\n\nIf initlocation is to be run only by the administrator, it should be owned\nby the administrator login and have execute-permission only for the\nowner.\n...\n >> Since 4 isn't possible at the moment, there seems to be no reason for\n >> allowing a user to run this command, even if he is otherwise allowed \n >> to create databases. Removing the ability from users means that the\n >> documentation can be simplified by documenting it for administrative\n >> use only.\n >\n >I did document it for administrators only ;)\n\nI still don't see it! Sorry.\n...\n >At the moment, initlocation is just a shell utility which sets up a\n >directory area and changes permissions to be appropriate for Postgres\n >use *if it is run by the Postgres superuser*.\n\nSo make it executable only by the superuser. 
If some site wanted to have\nmultiple administrators, it would need some fancy code to check that\nthe postmaster listening on $PGPORT was owned by the same user as is\nrunning initlocation. Then it could be made generally executable.\n \n >If we are concerned that alternate database locations can be misused and\n >should be under tighter control of the Postgres superuser, then it would\n >be easy to take out (or #ifdef) the code which allows absolute path\n >names and instead require that all paths for alternate locations be\n >specified as environment variables (that is already supported and was\n >always preferable imho).\n >\n >That way the Postgres administrator can run initlocation, define an\n >environment variable pointing at that alternate location, and then start\n >up the backend. Users would have only those previously defined areas to\n >work with.\n\nHow many alternate locations can be specified? I see PGDATA2 listed in your\ndocumentation. Can this be extended to PGDATAn?\n\n >\n >I'm being pretty direct here in my comments because you are raising very\n >good issues and we haven't had much discussion of them in the past. I am\n >not intending to be difficult or obstinant, just trying to clear things\n >up and get a plan for improving things.\n >\n >Comments (on the issues, and/or whether I'm being a pain :)?\n\nNo problem! In this case, I was documenting from the point of view of a \nuser, since I was running commands in my own name (with database creation\nprivileges). The old man pages don't mention any restriction, so it didn't\noccur to me that this was an administrator only command. The way I read it,\nit seemed to be designed for a user to set up his own private area.\n\n\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"In my Father's house are many mansions: if it were not\n so, I would have told you. I go to prepare a place for\n you. And if I go and prepare a place for you, I will \n come again, and receive you unto myself; that where I \n am, there ye may be also.\" John 14:2,3 \n\n\n", "msg_date": "Sat, 16 May 1998 10:02:15 +0200", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CREATE DATABASE " }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\n\nThen <[email protected]> spoke up and said:\n> That way the Postgres administrator can run initlocation, define an\n> environment variable pointing at that alternate location, and then start\n> up the backend. Users would have only those previously defined areas to\n> work with.\n\nPersonally, I don't care *how* alternate locations are\nsetup/specified, as long as\n1) With a single postmaster, I can access databases in all of the\nlocations. \n2) Adding a new location should be reserved to the PostgreSQL\nsuperuser.\n3) Adding a new location should *not* require restarting the\npostmaster.\n\nAdditionally, it would be nice to have syntax available for spreading\na database across multiple locations (a la Ingres).\n- -- \n=====================================================================\n| JAVA must have been developed in the wilds of West Virginia. |\n| After all, why else would it support only single inheritance?? |\n=====================================================================\n| Finger [email protected] for my public key. 
|\n=====================================================================\n\n-----BEGIN PGP SIGNATURE-----\nVersion: 2.6.2\n\niQBVAwUBNWAxr4dzVnzma+gdAQEeGQH/c2L/THi6ZpijGz+JyCfnLhgaVnf/CC0Z\nw3iX7UEAp1C5Oc/UkmkX3cMZ3cEYkdYAYOBvGtKbhnxO9x90wp5+Mw==\n=8QdB\n-----END PGP SIGNATURE-----\n\n", "msg_date": "18 May 1998 09:03:43 -0400", "msg_from": "<[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: CREATE DATABASE" }, { "msg_contents": "> ... and what worked for me was: default (datetime(now())\n> \n> So, the reason I'm annoying you now is this: For some time now, I've\n> been wondering what version of PostgreSQL my ISP has running. He's\n> never answered that particular query, and the version files are not\n> readable by lowly me.\n> \n> So, can you tell me in what version of PostgreSQL this worked/broke?\n\npostgres=> create table t (i int,\npostgres-> d datetime default datetime(text 'now'));\nCREATE\n\nWorks now. Have you tried\n\npostgres=> select version();\nversion\n--------------------------------------------------------------\nPostgreSQL 6.4.0 on i686-pc-linux-gnu, compiled by gcc 2.7.2.1\n(1 row)\n\nThe CVS log shows something was added back in 1996. Don't know if it\nactually worked back then though...\nrevision 1.1\ndate: 1996/08/28 01:57:23; author: scrappy;\n", "msg_date": "Fri, 07 Aug 1998 05:23:36 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Default 'now'" }, { "msg_contents": "Hi all\n\nCould this be added to the FAQ for regular users? Something like \"How do I know\nwhat version I'm using?\"\n\nJust a thought.\n\nOn Aug 7, 5:23am, Thomas G. Lockhart wrote:\n> Works now. Have you tried\n>\n> postgres=> select version();\n> version\n> --------------------------------------------------------------\n> PostgreSQL 6.4.0 on i686-pc-linux-gnu, compiled by gcc 2.7.2.1\n> (1 row)\n\n-- \nSincerely,\n\nJazzman (a.k.a. Justin Hickey) e-mail: [email protected]\nHigh Performance Computing Center\nNational Electronics and Computer Technology Center (NECTEC)\nBangkok, Thailand\n==================================================================\nPeople who think they know everything are very irritating to those\nof us who do. ---Anonymous\n\nJazz and Trek Rule!!!\n==================================================================\n", "msg_date": "Fri, 7 Aug 1998 14:55:23 +0000", "msg_from": "\"Justin Hickey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Default 'now'" }, { "msg_contents": "Added to FAQ.\n\n> Hi all\n> \n> Could this be added to the FAQ for regular users? Something like \"How do I know\n> what version I'm using?\"\n> \n> Just a thought.\n> \n> On Aug 7, 5:23am, Thomas G. Lockhart wrote:\n> > Works now. Have you tried\n> >\n> > postgres=> select version();\n> > version\n> > --------------------------------------------------------------\n> > PostgreSQL 6.4.0 on i686-pc-linux-gnu, compiled by gcc 2.7.2.1\n> > (1 row)\n> \n> -- \n> Sincerely,\n> \n> Jazzman (a.k.a. Justin Hickey) e-mail: [email protected]\n> High Performance Computing Center\n> National Electronics and Computer Technology Center (NECTEC)\n> Bangkok, Thailand\n> ==================================================================\n> People who think they know everything are very irritating to those\n> of us who do. 
---Anonymous\n> \n> Jazz and Trek Rule!!!\n> ==================================================================\n> \n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sat, 22 Aug 1998 07:41:25 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Default 'now'" }, { "msg_contents": "Hi,\n\nI have successeed in porting PostgreSQL to Win NT. The patch that is\nincluded is for development version from Sep 15, but I think that the\nchanges are version independent. The main difference from other port is\nthe renamed system table pg_version (vs. PG_VERSION) to pg_ver - Windows\nfile names are case insensitive :-) So this should be solved on a global\nlevel perhaps in main sources. And also the communication through\nAF_UNIX sockets is disabled. There are only two other changes:\n- added some flag while opening directory with open() syscall\n- changed flags for file descriptor in function pq_init()\nand that's all :-)\n\nthe steps ;-) are:\ndo the steps that Joost wrote some time ago\npatch <pgsql.diff (maybe by hand for newer versions of PostgreSQL)\nmake\nmake install\ninitdb\npostmaster -i\n\nI was able to run postmaster and two concurrent psql connections\nyesterday. I will run the test later.\n\n\t\t\t\t\tDan Horak\n\n\nPS: where are you, Joost? the email for you is returning to me", "msg_date": "Wed, 7 Oct 1998 14:03:35 +0200 ", "msg_from": "Horak Daniel <[email protected]>", "msg_from_op": false, "msg_subject": "NT port of PGSQL - success" }, { "msg_contents": "> Hi,\n> \n> I have successeed in porting PostgreSQL to Win NT. The patch that is\n> included is for development version from Sep 15, but I think that the\n> changes are version independent. The main difference from other port is\n> the renamed system table pg_version (vs. PG_VERSION) to pg_ver - Windows\n\nI thought Windows allowed any case, so you could open a file with\n\"PG_VERSION\" or \"pg_version\" and it will open any file of any matching\ncase.\n\n> file names are case insensitive :-) So this should be solved on a global\n> level perhaps in main sources. And also the communication through\n> AF_UNIX sockets is disabled. There are only two other changes:\n> - added some flag while opening directory with open() syscall\n> - changed flags for file descriptor in function pq_init()\n> and that's all :-)\n> \n> the steps ;-) are:\n> do the steps that Joost wrote some time ago\n> patch <pgsql.diff (maybe by hand for newer versions of PostgreSQL)\n> make\n> make install\n> initdb\n> postmaster -i\n> \n> I was able to run postmaster and two concurrent psql connections\n> yesterday. I will run the test later.\n\nThis is amazing. You can actually run SQL statements on NT.\n\nWhat would you like done with this patch? Should it merged into the\ntree, or just used for people testing things on NT, and later merged in\nas you feel more comfortable? You can make a 6.4 final patch, perhaps.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\n", "msg_date": "Wed, 7 Oct 1998 20:23:50 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NT port of PGSQL - success" }, { "msg_contents": "> Hi,\n> \n> I have successeed in porting PostgreSQL to Win NT. The patch that is\n> included is for development version from Sep 15, but I think that the\n> changes are version independent. The main difference from other port is\n> the renamed system table pg_version (vs. PG_VERSION) to pg_ver - Windows\n> file names are case insensitive :-) So this should be solved on a global\n> level perhaps in main sources. And also the communication through\n> AF_UNIX sockets is disabled. There are only two other changes:\n> - added some flag while opening directory with open() syscall\n> - changed flags for file descriptor in function pq_init()\n> and that's all :-)\n\n\nI have removed the data/base/*/pg_version file because it was never\nused. We had removed the 'version' functions long ago, but\ninclude/catalog/pg_version.h was still being processed by genbki.sh. No\nlonger. backend/command/version.c is also no longer compiled.\n\nThis should make your table changes unnecessary.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n", "msg_date": "Thu, 8 Oct 1998 17:40:29 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NT port of PGSQL - success" }, { "msg_contents": "> Hello Bruce\n> \n> On Oct 8, 5:40pm, Bruce Momjian wrote:\n> > I have removed the data/base/*/pg_version file because it was never\n> > used. We had removed the 'version' functions long ago, but\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> \n> Does this mean that the following from the FAQ is wrong?\n> \n> \t3.25) How do I tell what PostgreSQL version I am running?\n> \n> \tFrom psql, type select version();\n> \n> If so then this question should probably be changed to point users to the\n> PG_VERSION file.\n\nNo, the version command was removed. I should not have said version\nfunction. select version() works just as always.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n", "msg_date": "Thu, 8 Oct 1998 22:46:07 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] version functions (was NT port of PGSQL - success)" }, { "msg_contents": "Hello Bruce\n\nOn Oct 8, 5:40pm, Bruce Momjian wrote:\n> I have removed the data/base/*/pg_version file because it was never\n> used. We had removed the 'version' functions long ago, but\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nDoes this mean that the following from the FAQ is wrong?\n\n\t3.25) How do I tell what PostgreSQL version I am running?\n\n\tFrom psql, type select version();\n\nIf so then this question should probably be changed to point users to the\nPG_VERSION file.\n\nJust a thought\n\n-- \nSincerely,\n\nJazzman (a.k.a. Justin Hickey) e-mail: [email protected]\nHigh Performance Computing Center\nNational Electronics and Computer Technology Center (NECTEC)\nBangkok, Thailand\n==================================================================\nPeople who think they know everything are very irritating to those\nof us who do. 
---Anonymous\n\nJazz and Trek Rule!!!\n==================================================================\n", "msg_date": "Fri, 9 Oct 1998 09:07:23 +0000", "msg_from": "\"Justin Hickey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] version functions (was NT port of PGSQL - success)" }, { "msg_contents": "Jazzman wrote:\n>\n> Hello Bruce\n>\n> On Oct 8, 5:40pm, Bruce Momjian wrote:\n> > I have removed the data/base/*/pg_version file because it was never\n> > used. We had removed the 'version' functions long ago, but\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n>\n> Does this mean that the following from the FAQ is wrong?\n>\n> 3.25) How do I tell what PostgreSQL version I am running?\n>\n> From psql, type select version();\n>\n> If so then this question should probably be changed to point users to the\n> PG_VERSION file.\n\n No, it is still correct. The version function is there and it\n returns the compiled in string from version.h.\n\n But take a look at version.c please. I think it should use\n memcpy() or strncpy() instead of strcpy(). As it is now it\n writes the null byte after the palloc'ed area.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Fri, 9 Oct 1998 12:05:48 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] version functions (was NT port of PGSQL - success)" }, { "msg_contents": "> No, it is still correct. The version function is there and it\n> returns the compiled in string from version.h.\n> \n> But take a look at version.c please. I think it should use\n> memcpy() or strncpy() instead of strcpy(). As it is now it\n> writes the null byte after the palloc'ed area.\n\nYes, thanks. Fixed using StrNCpy().\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n", "msg_date": "Fri, 9 Oct 1998 12:39:17 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] version functions (was NT port of PGSQL - success)" }, { "msg_contents": "I believe Tom Lane removed the double-backslash from the psql output by\nmodifying PQprint.\n\nI recommend that is reversed, because psql escapes out the table column\ndelimiters with a single backslash, and now the format will be\nambigious.\n\n\trtest=> create table test(x text);\n\tERROR: test relation already exists\n\ttest=> create table test3(x text);\n\tCREATE\n\ttest=> insert into test3 values ('\\\\x');\n\tINSERT 322185 1\n\ttest=> select * from test3;\n\tx \n\t--\n\t\\x\n\t(1 row)\n\t\n\nThis used to show as:\n\n\tx \n\t--\n\t\\\\x\n\t(1 row)\n\nComments?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\n", "msg_date": "Fri, 9 Oct 1998 16:24:14 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "backslash in psql output" }, { "msg_contents": "> now the format will be ambigious.\n> test=> insert into test3 values ('\\\\x');\n> test=> select * from test3;\n> --\n> \\x\n> This used to show as:\n> --\n> \\\\x\n> Comments?\n\nWell, actually I've been thinking that this is closer to the behavior we\nmight want (though I haven't looked carefully at the new version). Of\ncourse it bothered me more than it should have, since I misunderstood\nwhere the re-escaping was happening; I had thought it was happening in\nthe backend.\n\npsql could have an option to re-escape strings, but imho by default\nshould display what is stored, not what was typed in originally.\n\npg_dump _should_ re-escape everything, so that it reloads properly.\n\n - $0.02 from Tom\n", "msg_date": "Sat, 10 Oct 1998 02:31:07 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] backslash in psql output" }, { "msg_contents": "> > now the format will be ambigious.\n> > test=> insert into test3 values ('\\\\x');\n> > test=> select * from test3;\n> > --\n> > \\x\n> > This used to show as:\n> > --\n> > \\\\x\n> > Comments?\n> \n> Well, actually I've been thinking that this is closer to the behavior we\n> might want (though I haven't looked carefully at the new version). Of\n> course it bothered me more than it should have, since I misunderstood\n> where the re-escaping was happening; I had thought it was happening in\n> the backend.\n> \n> psql could have an option to re-escape strings, but imho by default\n> should display what is stored, not what was typed in originally.\n> \n> pg_dump _should_ re-escape everything, so that it reloads properly.\n\nBut what about backward compatability? Aren't there people expecting\npsql output to show double backslashes? What do we do to display pipes\nin the output?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n", "msg_date": "Fri, 9 Oct 1998 23:04:13 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] backslash in psql output" }, { "msg_contents": "> > psql could have an option to re-escape strings, but imho by default\n> > should display what is stored, not what was typed in originally.\n> > pg_dump _should_ re-escape everything, so that it reloads properly.\n> But what about backward compatability? Aren't there people expecting\n> psql output to show double backslashes? What do we do to display \n> pipes in the output?\n\nWe can include a command-line option to re-enable the escapes. Anyone\ncan use that form, and can define an alias for it if they need.\n\nI understand your concern about backward compatibility, but imho the\nconvention it was using makes things more confusing, especially for a\nnew user. I'm not certain (assuming things have changed recently) that\nthe new behavior is what I would want, but the old behavior was not one\nI would have guessed at.\n\nUsing psql to pipe output to another program is the best example of why\none might _not_ want to re-escape the strings. 
If the goal is to use\ndata from the database, then that data is what you will want to see in\nthe pipe.\n\nAnyway, in the long run I'd like to consider moving toward a \"no-escape\"\ndefault output for psql. If you think that it is too short notice to\nfully understand the ramifications of a change this close to release, or\nif your \"faithful correspondents\" have cottage cheese for brains, then\ngo ahead and revert it ;) But I like the new behavior in principle...\n\n - Tom\n\nAnother possibility is to make it a configuration option:\n\n ./configure --enable-psql-string-escapes\n\nor something like that...\n", "msg_date": "Sat, 10 Oct 1998 04:21:32 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] backslash in psql output" }, { "msg_contents": "> We can include a command-line option to re-enable the escapes. Anyone\n> can use that form, and can define an alias for it if they need.\n> \n> I understand your concern about backward compatibility, but imho the\n> convention it was using makes things more confusing, especially for a\n> new user. I'm not certain (assuming things have changed recently) that\n> the new behavior is what I would want, but the old behavior was not one\n> I would have guessed at.\n> \n> Using psql to pipe output to another program is the best example of why\n> one might _not_ want to re-escape the strings. If the goal is to use\n> data from the database, then that data is what you will want to see in\n> the pipe.\n> \n> Anyway, in the long run I'd like to consider moving toward a \"no-escape\"\n> default output for psql. If you think that it is too short notice to\n> fully understand the ramifications of a change this close to release, or\n> if your \"faithful correspondents\" have cottage cheese for brains, then\n> go ahead and revert it ;) But I like the new behavior in principle...\n> \n> - Tom\n> \n> Another possibility is to make it a configuration option:\n> \n> ./configure --enable-psql-string-escapes\n> \n> or something like that...\n> \n\nI realize the double-backslash is confusing, but I don't think we can\nmake such a user-visible change at this time. I think we need to open\ndiscussion on this issue on the general list, and to include discussion\nof NULL displays, and any other issues, as well as how to properly\noutput the column separation character if that appears in the data.\n\nSo, I think we have to put it back to the old way, and open discussion\nabout this after 6.4.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n", "msg_date": "Sat, 10 Oct 1998 01:20:57 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] backslash in psql output" }, { "msg_contents": "Bruce Momjian wrote:\n >> > now the format will be ambigious.\n >> > test=> insert into test3 values ('\\\\x');\n >> > test=> select * from test3;\n >> > --\n >> > \\x\n >> > This used to show as:\n >> > --\n >> > \\\\x\n >> > Comments?\n >>...\n >But what about backward compatability? Aren't there people expecting\n >psql output to show double backslashes? What do we do to display pipes\n >in the output?\n\nThat change seems a good thing: the front-end ought to display what the\nuser wants. Any manipulations should be done behind the scenes. 
If I\nstore a DOS pathname, I don't want to see the backslashes doubled in it.\nEven worse, I don't want to see them eliminated altogether, which is what\nhappens now if I don't remember to double them on input.\n\nYou mentioned that psql backslash-escapes the column delimiter character.\nI think that this behaviour ought to be removed as well; it should be obvious\nfrom the alignment with headings and other lines whether a pipe character is\npart of the data or a column delimiter. If it really matters, a user\ncan specify another character to use as delimiter. \n\nAn unsophisticated user expects to type characters and have them\naccepted; he should not have to know that certain characters need to be\ndoubled or escaped, nor that certain characters he sees in the output are\nto be ignored.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"He that covereth his sins shall not prosper; but whoso\n confesseth and forsaketh them shall have mercy.\" \n Proverbs 28:13 \n\n\n", "msg_date": "Sat, 10 Oct 1998 08:01:55 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] backslash in psql output " }, { "msg_contents": "> Bruce Momjian wrote:\n> >> > now the format will be ambigious.\n> >> > test=> insert into test3 values ('\\\\x');\n> >> > test=> select * from test3;\n> >> > --\n> >> > \\x\n> >> > This used to show as:\n> >> > --\n> >> > \\\\x\n> >> > Comments?\n> >>...\n> >But what about backward compatability? Aren't there people expecting\n> >psql output to show double backslashes? What do we do to display pipes\n> >in the output?\n> \n> That change seems a good thing: the front-end ought to display what the\n> user wants. Any manipulations should be done behind the scenes. If I\n> store a DOS pathname, I don't want to see the backslashes doubled in it.\n> Even worse, I don't want to see them eliminated altogether, which is what\n> happens now if I don't remember to double them on input.\n> \n> You mentioned that psql backslash-escapes the column delimiter character.\n> I think that this behaviour ought to be removed as well; it should be obvious\n> from the alignment with headings and other lines whether a pipe character is\n> part of the data or a column delimiter. If it really matters, a user\n> can specify another character to use as delimiter. \n\nIf the user is reading psql output into a program, it is very unclear\nhow to determine a valid column delimiter vs. a delimiter in the data. \nYes, they can change delimiters, but they person has to choose one that\nwould never appear in the data stream, and that is sometimes impossible.\n\nI don't know how many people do this type of thing, but I think we have\nto ask the general users.\n\n> \n> An unsophisticated user expects to type characters and have them\n> accepted; he should not have to know that certain characters need to be\n> doubled or escaped, nor that certain characters he sees in the output are\n> to be ignored.\n\nIf we don't require double backslashes on input, we can no longer accept\nC escape sequences like \\r or \\n, or octal values. 
As it is now, octal\nvalues are now interpreted, instead of returning their values:\n\n\ttest=> insert into test3 values ('\\253');\n\tINSERT 323237 1\n\ttest=> select * from test3;\n\tx \n\t--------\n\tnedd\n\t� \n\t� \n\t(3 rows)\n\nI don't know if it always did this or not.\n\nI can understand people wanting to change things, but we have to discuss\nall the issues.\n\nAnd what about the COPY command. Do you want to change the display of\nescape characters there too? I hope not.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n", "msg_date": "Sat, 10 Oct 1998 11:02:41 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] backslash in psql output" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I realize the double-backslash is confusing, but I don't think we can\n> make such a user-visible change at this time. I think we need to open\n> discussion on this issue on the general list, and to include discussion\n> of NULL displays, and any other issues, as well as how to properly\n> output the column separation character if that appears in the data.\n> So, I think we have to put it back to the old way, and open discussion\n> about this after 6.4.\n\nWell, actually there *was* public discussion of this issue, on the\npgsql-interfaces list around 12/13 August. The consensus was that\nunnecessary backslashing was a bad idea --- in fact, I didn't see\n*anyone* arguing in favor of the old behavior, and the people who\nactually had backslashes in their data definitely didn't want it.\nAdmittedly it was a pretty small sample (Tom Lockhart and I were\ntwo of the primary complainers) but there wasn't any sentiment\nfor keeping the old behavior.\n\nKeep in mind that what we are discussing here is the behavior of\nPQprint(), not the behavior of FE/BE transport protocol or anything\nelse that affects data received by applications. PQprint's goal in\nlife is to present data in a reasonably human-friendly way, *not*\nto produce a completely unambiguous machine-readable syntax. Its\noutput format is in fact very ambiguous. Here's an example:\n\nplay=> create table test(id int4, val text);\nplay=> insert into test values (1, NULL);\nplay=> insert into test values (2, ' ');\nplay=> insert into test values (3, 'foobar');\nplay=> insert into test values (4, 'oneback\\\\slash');\nplay=> insert into test values (5, 'onevert|bar');\nplay=> select * from test;\nid|val\n--+-------------\n 1|\n 2|\n 3|foobar\n 4|oneback\\slash\n 5|onevert|bar\n(5 rows)\n\nYou can't tell the difference between a NULL field and an all-blanks\nvalue in this format; nor can you really be sure how many trailing\nblanks there are in tuples 3 and 5. So the goal is readability,\nnot lack of ambiguity. Given that goal, I don't see the value of\nprinting backslash escapes. Are you really having difficulty telling\nthe data vertical bar from the ones used as column separators?\nPhysical alignment is the cue the eye relies on, I think.\n\nThe only cases that PQprint inserted backslashes for were the column\nseparator char (unnecessary per above example), newlines (also not\nexactly hard to recognize), and backslash itself. All of these\nseem unnecessary and confusing to me.\n\nI'm sorry that this change sat in my to-do queue for so long, but\nI don't see it as a last-minute thing. 
The consensus to do it was\nestablished two months ago.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 10 Oct 1998 11:19:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] backslash in psql output " }, { "msg_contents": "Bruce Momjian wrote:...\n >And what about the COPY command. Do you want to change the display of\n >escape characters there too? I hope not.\n\nNo; but I wouldn't expect unsophisticated users to be using COPY.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"He that covereth his sins shall not prosper; but whoso\n confesseth and forsaketh them shall have mercy.\" \n Proverbs 28:13 \n\n\n", "msg_date": "Sat, 10 Oct 1998 23:06:22 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] backslash in psql output " }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> If the user is reading psql output into a program, it is very unclear\n> how to determine a valid column delimiter vs. a delimiter in the data. \n\nQuite true, but the PQprint output format has never been\nprogram-friendly, as I pointed out previously.\n\nI think the appropriate response to this gripe is to add another\ncommand-line switch to psql, which would change its display of SELECT\ndata to something more suitable for program consumption, rather than\ntrying to make a single output format that's a bad compromise between\nhuman readability and machine readability.\n\nPerhaps the COPY data syntax would work for a program-friendly output\nformat? (Basically tabs and newlines are field and row delimiters,\nwith (IIRC) backslash-escaping of tabs, newlines, backslash itself,\nand maybe nulls and suchlike. Also \\N stands for a null field.)\n\n> And what about the COPY command. Do you want to change the display of\n> escape characters there too? I hope not.\n\nI wasn't proposing any such thing, unless it proves necessary to ensure\nwe can copy arbitrary-8-bit text data out and back in. But that's a\ntask for a future release.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 10 Oct 1998 18:36:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] backslash in psql output " }, { "msg_contents": "> You can't tell the difference between a NULL field and an all-blanks\n> value in this format; nor can you really be sure how many trailing\n> blanks there are in tuples 3 and 5. So the goal is readability,\n> not lack of ambiguity. Given that goal, I don't see the value of\n> printing backslash escapes. Are you really having difficulty telling\n> the data vertical bar from the ones used as column separators?\n> Physical alignment is the cue the eye relies on, I think.\n> \n> The only cases that PQprint inserted backslashes for were the column\n> separator char (unnecessary per above example), newlines (also not\n> exactly hard to recognize), and backslash itself. All of these\n> seem unnecessary and confusing to me.\n\nOK, I understand your point here, that we must maximize readability, and\nthat robustness is not as important.\n\nOK, let's keep the removal of backslashes. Can you recommend a nice\nNULL display, perhaps '[NULL]' or '<NULL>'?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\n", "msg_date": "Sat, 10 Oct 1998 21:45:50 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] backslash in psql output" }, { "msg_contents": "Thus spake Bruce Momjian\n> OK, let's keep the removal of backslashes. Can you recommend a nice\n> NULL display, perhaps '[NULL]' or '<NULL>'?\n\nI'd like to make at least one vote to keep the status quo. Some of\nus have come to depend on the existing behaviour. At least make\nit an option that you have to turn on.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Sun, 11 Oct 1998 00:40:24 -0400 (EDT)", "msg_from": "[email protected] (D'Arcy J.M. Cain)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] backslash in psql output" }, { "msg_contents": "> Thus spake Bruce Momjian\n> > OK, let's keep the removal of backslashes. Can you recommend a nice\n> > NULL display, perhaps '[NULL]' or '<NULL>'?\n> \n> I'd like to make at least one vote to keep the status quo. Some of\n> us have come to depend on the existing behaviour. At least make\n> it an option that you have to turn on.\n\nMan, I can't win. :-)\n\nI vote for the status quo, and people want double backslashes removed. \nI want to add a NULL display, and people want status quo.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n", "msg_date": "Sun, 11 Oct 1998 01:02:56 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] backslash in psql output" }, { "msg_contents": "> \n> Bruce Momjian <[email protected]> writes:\n> > I realize the double-backslash is confusing, but I don't think we can\n> > make such a user-visible change at this time. I think we need to open\n> > discussion on this issue on the general list, and to include discussion\n> > of NULL displays, and any other issues, as well as how to properly\n> > output the column separation character if that appears in the data.\n> > So, I think we have to put it back to the old way, and open discussion\n> > about this after 6.4.\n> \n> Well, actually there *was* public discussion of this issue, on the\n> pgsql-interfaces list around 12/13 August. The consensus was that\n> unnecessary backslashing was a bad idea --- in fact, I didn't see\n> *anyone* arguing in favor of the old behavior, and the people who\n> actually had backslashes in their data definitely didn't want it.\n> Admittedly it was a pretty small sample (Tom Lockhart and I were\n> two of the primary complainers) but there wasn't any sentiment\n> for keeping the old behavior.\n> \n> Keep in mind that what we are discussing here is the behavior of\n> PQprint(), not the behavior of FE/BE transport protocol or anything\n> else that affects data received by applications. PQprint's goal in\n> life is to present data in a reasonably human-friendly way, *not*\n> to produce a completely unambiguous machine-readable syntax. Its\n> output format is in fact very ambiguous. 
Here's an example:\n> \n> play=> create table test(id int4, val text);\n> play=> insert into test values (1, NULL);\n> play=> insert into test values (2, ' ');\n> play=> insert into test values (3, 'foobar');\n> play=> insert into test values (4, 'oneback\\\\slash');\n> play=> insert into test values (5, 'onevert|bar');\n> play=> select * from test;\n> id|val\n> --+-------------\n> 1|\n> 2|\n> 3|foobar\n> 4|oneback\\slash\n> 5|onevert|bar\n> (5 rows)\n> \n> You can't tell the difference between a NULL field and an all-blanks\n> value in this format; nor can you really be sure how many trailing\n> blanks there are in tuples 3 and 5. So the goal is readability,\n> not lack of ambiguity. Given that goal, I don't see the value of\n> printing backslash escapes. Are you really having difficulty telling\n> the data vertical bar from the ones used as column separators?\n> Physical alignment is the cue the eye relies on, I think.\n> \n> The only cases that PQprint inserted backslashes for were the column\n> separator char (unnecessary per above example), newlines (also not\n> exactly hard to recognize), and backslash itself. All of these\n> seem unnecessary and confusing to me.\n> \n> I'm sorry that this change sat in my to-do queue for so long, but\n> I don't see it as a last-minute thing. The consensus to do it was\n> established two months ago.\n> \n> \t\t\tregards, tom lane\n> \n> \n> \n\nIn my opinion we should privilege machine-readableness first and then provide\nsome user option to enable user-friendly conversion in psql output if one\nreally needs it.\n\nIn situations where data is processed by other programs it is very important\nthat there is no ambiguity in strings exchanged between the application and\nthe backend. This is already done for input, which supports C-like escape,\nbut not yet for output, which can produce ambiguous data when nulls, arrays\nor non-printing characters are involved. This is the reason why I always use\nmy C-like output functions (contrib/string-io) in all my applications.\n\nThese arguments apply also to the copy command which uses the same output\nfunctions. Consider the case where a text field contains a multi-line string\nwith newlines embedded; if you export the table into an external files the\nfield is split into many lines which are interpreted as separate records by\ncommonly used line-oriented filters like awk or grep.\n\nI believe that the right way to handle all this stuff is the following:\n\n input:\n\n binary data escaped data\n | |\n (user conversion) (psql input)\n | |\n +-----------------------+\n |\n escaped query\n |\n (libpq)\n |\n escaped query escaped data\n | |\n (parser unescape) (copy-from unescape)\n | |\n +-----------------------+\n |\n binary data\n |\n (input function)\n |\n internal data\n\n\n output:\n\n internal data\n |\n (output function)\n |\n escaped data\n |\n +-----------------------+\n | |\n (libpq) (copy-to)\n | |\n escaped data escaped data\n |\n |\n +-----------------------+-----------------------+\n | | |\n (user conversion) (psql output) (psql unescape)\n | | |\n binary data escaped data binary data\n\n\nIn the above schema binary data means the external representation of data\ncontaining non-printing or delimiters characters like quotes or newlines.\nIn this schema all the data exchanged with the backend should be escaped\nin order to guarantee unambiguity to applications. 
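To make the idea concrete, a minimal sketch of the escaping side could\nlook like the function below (it is not the actual contrib/string-io\ncode, the name is made up, and for brevity it writes every special byte\nas a plain octal escape instead of the nicer \\n or \\t forms; error\nchecking is omitted):\n\n\t/*\n\t * Return a malloc'ed escaped copy of src: a backslash or any\n\t * non-printing byte (newline, tab, ...) becomes a backslash\n\t * followed by three octal digits; everything else is copied\n\t * unchanged. Needs <stdio.h>, <stdlib.h>, <string.h>, <ctype.h>.\n\t */\n\tchar *\n\tescape_string(const char *src)\n\t{\n\t    char *dst = malloc(strlen(src) * 4 + 1);\t/* worst case */\n\t    char *p = dst;\n\n\t    for (; *src; src++)\n\t    {\n\t        unsigned char c = (unsigned char) *src;\n\n\t        if (c == '\\\\' || !isprint(c))\n\t            p += sprintf(p, \"\\\\%03o\", c);\n\t        else\n\t            *p++ = c;\n\t    }\n\t    *p = '\\0';\n\t    return dst;\n\t}\n\nAn unescape_string() for the opposite direction would be just as small.\n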
The input and output\nuser conversion functions could be provided by libpq as utilities, and the\nconversion could possibly be done automatically by libpq itself if some\nglobal flag is set by the application.\nPsql input should accept only escaped data while the output could be escaped\n(default) or binary depending on a user supplied switch.\nFiles read or written by the copy command should be always escaped with\nexactly one record for line. Pg_dump should produce escaped strings.\nAll this stuff requires the use of new output functions like those provided\nin contrib/string-io.\n\nThere is still the problem of distinguishing between scalars and arrays which\nis necessary for user output conversion. In my output functions I solved the\nproblem by escaping the first '{' of each field if it is not an array.\nAnother problem is that array input requires a double escaping, one for the\nquery parser and a second one for the array parser. Also nulls (\\0) are not\nhandled by the input code. This should be fixed if we want true binary data.\n\nI don't know if C-escapes violate the ansi sql standard but I believe they\nmakes life easier for the programmer. And if we add some global flag in\nlibpq we could also do automatic conversion to be compatible with ansi sql\nand old applications. Note that arrays aren't ansi sql anyway.\n\nAnyway a runtime switch is preferable to a configure switch.\n\n-- \nMassimo Dal Zotto\n\n+----------------------------------------------------------------------+\n| Massimo Dal Zotto email: [email protected] |\n| Via Marconi, 141 phone: ++39-461-534251 |\n| 38057 Pergine Valsugana (TN) www: http://www.cs.unitn.it/~dz/ |\n| Italy pgp: finger [email protected] |\n+----------------------------------------------------------------------+\n", "msg_date": "Sun, 11 Oct 1998 14:24:14 +0200 (MET DST)", "msg_from": "Massimo Dal Zotto <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] backslash in psql output" }, { "msg_contents": "On Sun, 11 Oct 1998, D'Arcy J.M. Cain wrote:\n\n> I'd like to make at least one vote to keep the status quo. Some of\n> us have come to depend on the existing behaviour. At least make\n> it an option that you have to turn on.\n> \n> -- \n> D'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\n> http://www.druid.net/darcy/ | and a sheep voting on\n> +1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n> \n\nTo which I add my hearty approval (I'd like both options)!\n\nMarc Zuckman\[email protected]\n\n_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\n_ Visit The Home and Condo MarketPlace\t\t _\n_ http://www.ClassyAd.com\t\t\t _\n_\t\t\t\t\t\t\t _\n_ FREE basic property listings/advertisements and searches. 
_\n_\t\t\t\t\t\t\t _\n_ Try our premium, yet inexpensive services for a real\t _\n_ selling or buying edge!\t\t\t\t _\n_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\n\n", "msg_date": "Sun, 11 Oct 1998 13:10:41 -0400 (EDT)", "msg_from": "Marc Howard Zuckman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] backslash in psql output" }, { "msg_contents": "(Sorry for being so slow to respond to this...)\n\nMassimo Dal Zotto <[email protected]> writes:\n> I believe that the right way to handle all this stuff is the following:\n\n> input:\n\n> binary data escaped data\n> | |\n> (user conversion) (psql input)\n> | |\n> +-----------------------+\n> |\n> escaped query\n> |\n> (libpq)\n> |\n> escaped query escaped data\n> | |\n> (parser unescape) (copy-from unescape)\n> | |\n> +-----------------------+\n> |\n> binary data\n> |\n> (input function)\n> |\n> internal data\n\n\n> output:\n\n> internal data\n> |\n> (output function)\n> |\n> escaped data\n> |\n> +-----------------------+\n> | |\n> (libpq) (copy-to)\n> | |\n> escaped data escaped data\n> |\n> |\n> +-----------------------+-----------------------+\n> | | |\n> (user conversion) (psql output) (psql unescape)\n> | | |\n> binary data escaped data binary data\n\n\nI disagree with this, at least for the output side. The FE/BE protocol\nfor SELECT/FETCH results is already completely 8-bit clean. There is\nno reason to escape output data before passing it across the wire and\nthrough libpq. The application program might want to escape the data\nfor its own purposes, but that's not our concern.\n\nAs far as I can tell, COPY IN/OUT data is the only case where we really\nhave an issue. Since the COPY protocol is inherently text-based, we\nhave to escape anything that won't do as text. (Offhand, I think only\ntab, newline, null, and of course backslash are essential to convert,\nthough we might also want to convert other nonprinting characters for\nreadability's sake.) The conversions involved need to be nailed down\nand documented as part of the FE/BE protocol.\n\nCoping with array-valued fields is also a concern --- there needs to\nbe some reasonable way for an application to discover that a given field\nis an array and what datatype it is an array of. But I think the need\nthere is to extend the RowDescription information returned by SELECT,\nnot to modify the data representation.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 18 Oct 1998 12:28:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] backslash in psql output " }, { "msg_contents": "> As far as I can tell, COPY IN/OUT data is the only case where we really\n> have an issue. Since the COPY protocol is inherently text-based, we\n> have to escape anything that won't do as text. (Offhand, I think only\n> tab, newline, null, and of course backslash are essential to convert,\n> though we might also want to convert other nonprinting characters for\n> readability's sake.) The conversions involved need to be nailed down\n> and documented as part of the FE/BE protocol.\n\\. as end-of-input is also escaped. Not sure that gets sent to the\nbackend, or is just used by the frontend protocol to signal\nend-of-input.\n\n\n> \n> Coping with array-valued fields is also a concern --- there needs to\n> be some reasonable way for an application to discover that a given field\n> is an array and what datatype it is an array of. 
But I think the need\n> there is to extend the RowDescription information returned by SELECT,\n> not to modify the data representation.\n\nYes, arrays can be a problem. Not sure.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 18 Oct 1998 14:29:13 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] backslash in psql output" }, { "msg_contents": "Hi. I believe we are still shooting for a Nov 1 release, though without\nreports of successful regression tests on more platforms I'm not sure we\ncan do that. I know that at least some of those listed below are the\nactive development platforms for some contributors, so those are\nprobably covered but I need confirmation. So, if you have a platform you\nhave tested or plan to have tested in the next few days please speak up.\nNow. Or at least Soon :)\n\nHere are the ones on the \"currently supported\" list (let me know if you\nhave something running on another platform. Any Ultrix people out there\nstill?):\n\n_ AIX 4.1.x-4.2\n_ BSDi\n_ FreeBSD 2.2.x-3.x\n_ NetBSD 1.3\n_ NetBSD 1.3 NS32532\n_ NetBSD 1.3 Sparc\n_ NetBSD 1.3 VAX\n_ DGUX 5.4R4.11 m88k\n_ HPUX 10.20\n_ IRIX 6.x\n_ Digital 4.0\n_ linux 2.0.x Alpha\nx linux 2.0.x x86\n_ linux 2.0.x Sparc\nx mklinux PPC750\n_ SCO\n_ Solaris x86\n_ Solaris 2.5.1-2.6 x86\n_ SunOS 4.1.4 Sparc\nx SVR4 MIPS\n_ SVR4 4.4 m88k\nx Unixware x86\nx Windows NT\n\n\nThe porting info goes into the Admin Guide in the docs. I plan to freeze\nthat one last, a few days before release to give Bruce et al a chance to\npolish the installation and release notes.\n\nThe other docs will need to freeze earlier to give me a chance to\ngenerate hardcopy for v6.4. So the freeze schedule will be (again\nassuming a Nov 1 release, and I'm probably not giving myself enough\ntime):\n\nOct 26: freeze Programmer's Guide and Developer's Guide\nOct 27: freeze User's Guide and reference pages\nOct 28: freeze Admin Guide\nOct 29-30: finish hardcopy, generate html\n\nI will be out of town Oct 31-Nov 1, so need to finish a day or two\nearly. As it is, I should have frozen some docs by now to get this stuff\ndone.\n\nSo, if you have anything else to contribute or update for docs, SEND IT\nIN NOW. Or at least let me know it is coming soon. Or it will have to\nwait for v6.5...\n\nTIA\n\n - Tom\n", "msg_date": "Sun, 25 Oct 1998 04:29:38 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Last call?" }, { "msg_contents": "> Here are the ones on the \"currently supported\" list (let me know if you\n> have something running on another platform. Any Ultrix people out there\n> still?):\n\nChange:\n\t\n\t> _ BSDi\n\t\nto\n\t\n\t> _ BSDI 3.x and 4.0\n\n\nAlso, the Windows NT item is of much interest. Can we report this as\nworking, and have a binary of 6.4 built at the time of the 6.4 release,\nNovember 1? (I am CC'ing the NT person.)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 25 Oct 1998 01:17:54 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Last call?" 
}, { "msg_contents": "> Change:\n> > _ BSDi\n> to\n> > _ BSDI 3.x and 4.0\n\nThe underscores meant that I don't know if the platform has been\nregression tested yet. Exes meant I did. I was assuming that you had\nalready done BSDI (isn't that you're development platform?) and was\nhoping to get a report from you saying to put an \"x\" for that one.\n\nSo the only platforms I've marked as being confirmed are NT, Unixware,\nSVR4, mklinux, and linux. But I know there are others which are already\nworking, I'm just not certain exactly which ones...\n\n - Tom\n", "msg_date": "Sun, 25 Oct 1998 14:21:19 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Last call?" }, { "msg_contents": "Hi Tom and all\n\nI did not know what the '_' and 'x' meant the first time, as I recall the\none for 'Linux ix86' had an '_', or unknown, I can do this for you, should\nI grab the Bata2 or the latest snapshot?\n\nOn Sun, 25 Oct 1998, Thomas G. Lockhart wrote:\n\n> > Change:\n> > > _ BSDi\n> > to\n> > > _ BSDI 3.x and 4.0\n> \n> The underscores meant that I don't know if the platform has been\n> regression tested yet. Exes meant I did. I was assuming that you had\n> already done BSDI (isn't that you're development platform?) and was\n> hoping to get a report from you saying to put an \"x\" for that one.\n> \n> So the only platforms I've marked as being confirmed are NT, Unixware,\n> SVR4, mklinux, and linux. But I know there are others which are already\n> working, I'm just not certain exactly which ones...\n> \n> - Tom\n> \n\nTerry Mackintosh <[email protected]> http://www.terrym.com\nsysadmin/owner Please! No MIME encoded or HTML mail, unless needed.\n\nProudly powered by R H Linux 4.2, Apache 1.3, PHP 3, PostgreSQL 6.3\n-------------------------------------------------------------------\nSuccess Is A Choice ... book by Rick Patino, get it, read it!\n\n", "msg_date": "Sun, 25 Oct 1998 10:32:51 -0500 (EST)", "msg_from": "Terry Mackintosh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [DOCS] Last call?" }, { "msg_contents": "\"Thomas G. Lockhart\" wrote:\n >So the only platforms I've marked as being confirmed are NT, Unixware,\n >SVR4, mklinux, and linux. \n\nTom, can you distinguish between linux with libc5 and with glibc (aka libc6)?\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"If ye then be risen with Christ, seek those things \n which are above, where Christ sitteth on the right \n hand of God. Set your affection on things above, not \n on things on the earth.\" Colossians 3:1,2\n\n\n", "msg_date": "Sun, 25 Oct 1998 16:03:47 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [DOCS] Last call? " }, { "msg_contents": "> I did not know what the '_' and 'x' meant the first time, as I recall \n> the one for 'Linux ix86' had an '_', or unknown\n\nThanks Terry, but I've already got that one done (it's my development\nplatform). The Linux Alpha and Sparc need testing, which is probably\nwhat you remember seeing...\n\n - Tom\n", "msg_date": "Sun, 25 Oct 1998 16:09:57 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [DOCS] Last call?" }, { "msg_contents": "> can you distinguish between linux with libc5 and with glibc?\n\nYes, if necessary. 
I'm running glibc at work (with the RH 6.3.2 rpms)\nand libc5 at home (with v6.4beta). I don't expect there to be any\nsignificant differences; are you aware of any?\n\nAlso, I'm planning on doing a clean install of v6.4beta on my glibc\nmachine, so can verify it then.\n\nOr are you just saying that we should mention both explicitly so that\npeople can know that it would work on their machine for sure?\n\n - Tom\n", "msg_date": "Sun, 25 Oct 1998 16:14:24 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Last call?" }, { "msg_contents": "> > Change:\n> > > _ BSDi\n> > to\n> > > _ BSDI 3.x and 4.0\n> \n> The underscores meant that I don't know if the platform has been\n> regression tested yet. Exes meant I did. I was assuming that you had\n> already done BSDI (isn't that you're development platform?) and was\n> hoping to get a report from you saying to put an \"x\" for that one.\n> \n> So the only platforms I've marked as being confirmed are NT, Unixware,\n> SVR4, mklinux, and linux. But I know there are others which are already\n> working, I'm just not certain exactly which ones...\n\nOh, sorry. Put an X for BSDI.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 25 Oct 1998 11:44:37 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Last call?" }, { "msg_contents": " Here are the ones on the \"currently supported\" list (let me know if you\n have something running on another platform. Any Ultrix people out there\n still?):\n\n x NetBSD 1.3.2/i386\n\nI'm running on NetBSD 1.3.2/i386 and as of yesterday the regressions\nall passed. I'm trying to redo them every few days to make sure\nnothing creeps in so this should be a supported platform.\n\nCheers,\nBrook\n", "msg_date": "Sun, 25 Oct 1998 10:29:51 -0700 (MST)", "msg_from": "Brook Milligan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Last call?" }, { "msg_contents": "> x NetBSD 1.3.2/i386\n> I'm running on NetBSD 1.3.2/i386 and as of yesterday the regressions\n> all passed. I'm trying to redo them every few days to make sure\n> nothing creeps in so this should be a supported platform.\n\nThis is exactly what makes it a supported platform :)\n\n - Tom\n", "msg_date": "Sun, 25 Oct 1998 17:33:54 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Last call?" }, { "msg_contents": "> > 'Linux ix86' had an '_', or unknown\n> Thanks Terry, but I've already got that one done\n\nBut Oliver was asking about glibc2 vs libc5. Do you happen to have a\nglibc machine (RH 5.x?) available? If so we could use a direct report of\nsuccess since I'm still running RH4.2/libc5 for development...\n\n - Tom\n", "msg_date": "Sun, 25 Oct 1998 18:07:59 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [DOCS] Last call?" }, { "msg_contents": "I compiled it on RH 5.1... 
no problems.\n\nTaral\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Thomas G.\n> Lockhart\n> Sent: Sunday, October 25, 1998 12:08 PM\n> To: Terry Mackintosh; PostgreSQL-development\n> Subject: Re: [HACKERS] Re: [DOCS] Last call?\n> \n> \n> > > 'Linux ix86' had an '_', or unknown\n> > Thanks Terry, but I've already got that one done\n> \n> But Oliver was asking about glibc2 vs libc5. Do you happen to have a\n> glibc machine (RH 5.x?) available? If so we could use a direct report of\n> success since I'm still running RH4.2/libc5 for development...\n> \n> - Tom\n> \n", "msg_date": "Sun, 25 Oct 1998 12:15:15 -0600", "msg_from": "\"Taral\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Re: [DOCS] Last call?" }, { "msg_contents": "I'm sure Oliver runs a libc6 system. He is one of the debian core\ndevelopers.\n\n-Egon\n\nOn Sun, 25 Oct 1998, Thomas G. Lockhart wrote:\n\n> > > 'Linux ix86' had an '_', or unknown\n> > Thanks Terry, but I've already got that one done\n> \n> But Oliver was asking about glibc2 vs libc5. Do you happen to have a\n> glibc machine (RH 5.x?) available? If so we could use a direct report of\n> success since I'm still running RH4.2/libc5 for development...\n> \n> - Tom\n> \n> \n\n", "msg_date": "Sun, 25 Oct 1998 19:19:39 +0100 (MET)", "msg_from": "Egon Schmid <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [DOCS] Last call?" }, { "msg_contents": "Hi Tom\n\nNope, RH4.2/libc5, same as you.\nStill waiting for the dust to settle on the glibc thing:)\n\nOn Sun, 25 Oct 1998, Thomas G. Lockhart wrote:\n\n> > > 'Linux ix86' had an '_', or unknown\n> > Thanks Terry, but I've already got that one done\n> \n> But Oliver was asking about glibc2 vs libc5. Do you happen to have a\n> glibc machine (RH 5.x?) available? If so we could use a direct report of\n> success since I'm still running RH4.2/libc5 for development...\n> \n> - Tom\n\n\nTerry Mackintosh <[email protected]> http://www.terrym.com\nsysadmin/owner Please! No MIME encoded or HTML mail, unless needed.\n\nProudly powered by R H Linux 4.2, Apache 1.3, PHP 3, PostgreSQL 6.3\n-------------------------------------------------------------------\nSuccess Is A Choice ... book by Rick Patino, get it, read it!\n\n", "msg_date": "Sun, 25 Oct 1998 13:26:45 -0500 (EST)", "msg_from": "Terry Mackintosh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [DOCS] Last call?" }, { "msg_contents": "I cannot believe, RH4.2 isn't a glibc2 system.\n\n-Egon\n\nOn Sun, 25 Oct 1998, Terry Mackintosh wrote:\n\n> Hi Tom\n> \n> Nope, RH4.2/libc5, same as you.\n> Still waiting for the dust to settle on the glibc thing:)\n> \n> On Sun, 25 Oct 1998, Thomas G. Lockhart wrote:\n> \n> > > > 'Linux ix86' had an '_', or unknown\n> > > Thanks Terry, but I've already got that one done\n> > \n> > But Oliver was asking about glibc2 vs libc5. Do you happen to have a\n> > glibc machine (RH 5.x?) available? If so we could use a direct report of\n> > success since I'm still running RH4.2/libc5 for development...\n> > \n> > - Tom\n> \n> \n> Terry Mackintosh <[email protected]> http://www.terrym.com\n> sysadmin/owner Please! No MIME encoded or HTML mail, unless needed.\n> \n> Proudly powered by R H Linux 4.2, Apache 1.3, PHP 3, PostgreSQL 6.3\n> -------------------------------------------------------------------\n> Success Is A Choice ... 
book by Rick Patino, get it, read it!\n> \n> \n> \n\n", "msg_date": "Sun, 25 Oct 1998 19:26:51 +0100 (MET)", "msg_from": "Egon Schmid <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [DOCS] Last call?" }, { "msg_contents": "\nI just gave today's (Oct 25) snapshot a try on Sparc Linux. \nLooks good except datetime. I'm getting failures due to this type \nof thing:\n\nregression=> SELECT ('today'::datetime );\n?column?\n----------------------------\nSun Oct 25 00:00:00 1998 EDT\n(1 row)\n\nregression=> SELECT ('tomorrow'::datetime - '1 day'::timespan);\n?column?\n----------------------------\nSun Oct 25 01:00:00 1998 EDT\n(1 row) \n\n\nI *think* this may because we're not too far into EST yet.\nSound good?\n\nMy machine is Kernel is 2.0.29. Libc 5.3.12.\n\nTom Szybist\[email protected]\n\n\n\n\nIn message <[email protected]>, \"Thomas G. Lockhart\" writes:\n> Hi. I believe we are still shooting for a Nov 1 release, though without\n> reports of successful regression tests on more platforms I'm not sure we\n> can do that. I know that at least some of those listed below are the\n> active development platforms for some contributors, so those are\n> probably covered but I need confirmation. So, if you have a platform you\n> have tested or plan to have tested in the next few days please speak up.\n> Now. Or at least Soon :)\n> \n> Here are the ones on the \"currently supported\" list (let me know if you\n> have something running on another platform. Any Ultrix people out there\n> still?):\n> \n> _ AIX 4.1.x-4.2\n> _ BSDi\n> _ FreeBSD 2.2.x-3.x\n> _ NetBSD 1.3\n> _ NetBSD 1.3 NS32532\n> _ NetBSD 1.3 Sparc\n> _ NetBSD 1.3 VAX\n> _ DGUX 5.4R4.11 m88k\n> _ HPUX 10.20\n> _ IRIX 6.x\n> _ Digital 4.0\n> _ linux 2.0.x Alpha\n> x linux 2.0.x x86\n> _ linux 2.0.x Sparc\n> x mklinux PPC750\n> _ SCO\n> _ Solaris x86\n> _ Solaris 2.5.1-2.6 x86\n> _ SunOS 4.1.4 Sparc\n> x SVR4 MIPS\n> _ SVR4 4.4 m88k\n> x Unixware x86\n> x Windows NT\n> \n> \n> The porting info goes into the Admin Guide in the docs. I plan to freeze\n> that one last, a few days before release to give Bruce et al a chance to\n> polish the installation and release notes.\n> \n> The other docs will need to freeze earlier to give me a chance to\n> generate hardcopy for v6.4. So the freeze schedule will be (again\n> assuming a Nov 1 release, and I'm probably not giving myself enough\n> time):\n> \n> Oct 26: freeze Programmer's Guide and Developer's Guide\n> Oct 27: freeze User's Guide and reference pages\n> Oct 28: freeze Admin Guide\n> Oct 29-30: finish hardcopy, generate html\n> \n> I will be out of town Oct 31-Nov 1, so need to finish a day or two\n> early. As it is, I should have frozen some docs by now to get this stuff\n> done.\n> \n> So, if you have anything else to contribute or update for docs, SEND IT\n> IN NOW. Or at least let me know it is coming soon. Or it will have to\n> wait for v6.5...\n> \n> TIA\n> \n> - Tom\n> \n", "msg_date": "Sun, 25 Oct 1998 14:24:13 -0500", "msg_from": "\"Thomas A. Szybist\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Last call? " }, { "msg_contents": "\"Thomas G. Lockhart\" wrote:\n >> can you distinguish between linux with libc5 and with glibc?\n >\n >Yes, if necessary. I'm running glibc at work (with the RH 6.3.2 rpms)\n >and libc5 at home (with v6.4beta). 
I don't expect there to be any\n >significant differences; are you aware of any?\n\nNo; there are minor textual differences in the math overflow messages\nand, last time I tried, the geometry tests had differences of around\n10 ^ -12.\n\n >...\n >Or are you just saying that we should mention both explicitly so that\n >people can know that it would work on their machine for sure?\n \nPrecisely.\n\nThe difference is such a major one that linux-libc5 and linux-glibc\nshould be regarded nearly as two different systems.\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"If ye then be risen with Christ, seek those things \n which are above, where Christ sitteth on the right \n hand of God. Set your affection on things above, not \n on things on the earth.\" Colossians 3:1,2\n\n\n", "msg_date": "Sun, 25 Oct 1998 21:31:23 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [DOCS] Last call? " }, { "msg_contents": "\"Thomas A. Szybist\" wrote:\n> \n> I just gave today's (Oct 25) snapshot a try on Sparc Linux. \n> Looks good except datetime. I'm getting failures due to this type \n> of thing:\n> \n> regression=> SELECT ('today'::datetime );\n> ?column?\n> ----------------------------\n> Sun Oct 25 00:00:00 1998 EDT\n> (1 row)\n> \n> regression=> SELECT ('tomorrow'::datetime - '1 day'::timespan);\n> ?column?\n> ----------------------------\n> Sun Oct 25 01:00:00 1998 EDT\n> (1 row) \n> \n> \n> I *think* this may because we're not too far into EST yet.\n> Sound good?\n> \n\nThis is not a failure. The date 24 hours (1 day) before 'Mon Oct 26 00:00:00 1998 EST' is 'Sun Oct 25 01:00:00 1998 EDT'. You would think it should be 'Sun Oct 25 00:00:00 1998 EST', but thanks to Daylight Savings Time that datetime does not exist :-)\n-- \n____ | Billy G. Allie | Domain....: [email protected]\n| /| | 7436 Hartwell | Compuserve: 76337,2061\n|-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n|/ |LLIE | (313) 582-1540 | \n\n\n", "msg_date": "Sun, 25 Oct 1998 16:35:16 -0500", "msg_from": "\"Billy G. Allie\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Last call? " }, { "msg_contents": "> I just gave today's (Oct 25) snapshot a try on Sparc Linux.\n> Looks good except datetime.\n\nOK, picky picky. Run the test tomorrow instead. And I'll mark you down\nas having tested and passed today :)\n\n - Tom\n", "msg_date": "Sun, 25 Oct 1998 22:56:11 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Last call?" }, { "msg_contents": "> I compiled it on RH 5.1... no problems.\n\nOK. Just to be unambiguous, I'd prefer a statement that includes \"...\nand passed all regression tests\". It did, right?\n\n - Tom\n", "msg_date": "Sun, 25 Oct 1998 22:58:12 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [DOCS] Last call?" }, { "msg_contents": "> The difference is such a major one that linux-libc5 and linux-glibc\n> should be regarded nearly as two different systems.\n\nSure. Well, I'm planning on building v6.4beta at work anyway, and will\ntry the regression tests. So we should have a firm confirmation for v6.4\nthen. 
Though it sounds like Oliver has tried something fairly recently?\n\nAside from the math rounding problems (I even see slight differences on\nmy libc5 machine when using the egcs compiler) I _really_ don't expect\nto see a true failure.\n\n - Tom\n", "msg_date": "Sun, 25 Oct 1998 23:02:53 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [DOCS] Last call?" }, { "msg_contents": "Err.. umm... Updated 10/25 17:05 CST. The following tests fail on my system\n(RH 5.1 - glibc):\n\nint2, int4: Different error format\n(Math result not representable is now Numerical result out of range)\nfloat8: Weird... one query that was supposed to fail returned a bunch of\nNaNs, another failed when it wasn't supposed to with an out of range error.\ngeometry: The results are only approximately right... but only in the first\nsig. figure :(\ndatetime: CDT/CST thing\nsanity_check: **backend aborts**\nrandom: Fails because it returns a value outside the 80-120 range\nmisc: Rows out of order\n\nI'm afraid this one can't go as 'supported'. Sorry.\n\nTaral\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Thomas G.\n> Lockhart\n> Sent: Sunday, October 25, 1998 4:58 PM\n> To: Taral\n> Cc: Terry Mackintosh; PostgreSQL-development\n> Subject: Re: [HACKERS] Re: [DOCS] Last call?\n>\n>\n> > I compiled it on RH 5.1... no problems.\n>\n> OK. Just to be unambiguous, I'd prefer a statement that includes \"...\n> and passed all regression tests\". It did, right?\n>\n> - Tom\n>\n\n", "msg_date": "Sun, 25 Oct 1998 17:29:22 -0600", "msg_from": "\"Taral\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Re: [DOCS] Last call?" }, { "msg_contents": "> sanity_check: **backend aborts**\n> random: Fails because it returns a value outside the 80-120 range\n> misc: Rows out of order\n\nAll passed when I reinitialized the db.\n\nTaral\n", "msg_date": "Sun, 25 Oct 1998 17:37:15 -0600", "msg_from": "\"Taral\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Re: [DOCS] Last call?" }, { "msg_contents": "> > sanity_check: **backend aborts**\n> > random: Fails because it returns a value outside the 80-120 range\n> > misc: Rows out of order\n> All passed when I reinitialized the db.\n\nOK, good. \n\nbtw, the random test will still occasionally fail, since a two or three\nsigma result will have values outside the 80-120 range. But rerunning\nshould get something within range, usually.\n\n - Tom\n", "msg_date": "Mon, 26 Oct 1998 00:42:40 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [DOCS] Last call?" }, { "msg_contents": "> > > sanity_check: **backend aborts**\n> > > random: Fails because it returns a value outside the 80-120 range\n> > > misc: Rows out of order\n\nNote that int2,int4,float8,geometry are still failing...\n\nWhy is geometry sooo far out?\n\nTaral\n", "msg_date": "Sun, 25 Oct 1998 19:17:28 -0600", "msg_from": "\"Taral\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Re: [DOCS] Last call?" }, { "msg_contents": "\"Thomas G. Lockhart\" <[email protected]> writes:\n> _ HPUX 10.20\n\nYou can put down an X for both HPUX 9.03 and 10.20.\n\nI discovered a number of minor problems when I tried to compile with\nHP's cc instead of gcc like I usually do. 
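
[Editor's note, added to the archive and not part of any message: the float8 oddities Taral reports above, and the '10e-400' diffs that show up later in this thread on NetBSD, come down to how each platform's strtod() reports overflow and underflow. The following is a hypothetical sketch of the usual errno/ERANGE idiom, not necessarily byte-for-byte what the backend's float8 input routine does; whether underflow sets ERANGE is implementation-defined, which is exactly why the regression output differs between libcs.]

/* Editorial sketch: probe how this platform's strtod() classifies the
 * literals the float8 regression test feeds it.  Overflow is rejected
 * everywhere; underflow ('10e-400') is flagged only on some systems. */
#include <errno.h>
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

static void
check(const char *literal)
{
	char	   *end;
	double		val;

	errno = 0;
	val = strtod(literal, &end);

	if (*end != '\0')
		printf("%s: not a valid number\n", literal);
	else if (errno == ERANGE && (val == HUGE_VAL || val == -HUGE_VAL))
		printf("%s: overflow\n", literal);
	else if (errno == ERANGE)
		printf("%s: underflow\n", literal);
	else
		printf("%s: accepted as %g\n", literal, val);
}

int
main(void)
{
	check("-10e400");
	check("10e-400");
	return 0;
}
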
I just committed fixes for\nthose.\n\nI am still getting a discrepancy in the \"rules\" regression test,\nnamely a difference in the order in which tuples are returned:\n\n*** expected/rules.out\tFri Oct 2 12:28:01 1998\n--- results/rules.out\tSun Oct 25 19:31:42 1998\n***************\n*** 315,322 ****\n pname |sysname\n ------+-------\n bm |pluto \n- jwieck|orion \n jwieck|notjw \n (3 rows)\n \n QUERY: delete from rtest_system where sysname = 'orion';\n--- 315,322 ----\n pname |sysname\n ------+-------\n bm |pluto \n jwieck|notjw \n+ jwieck|orion \n (3 rows)\n \n QUERY: delete from rtest_system where sysname = 'orion';\n\n----------------------\n\nThis happens on all four permutations of HPUX version and compiler.\nAre other people really seeing the tuple order given in the \"expected\"\nfile?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 25 Oct 1998 20:30:15 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Last call? " }, { "msg_contents": "In message <[email protected]>, \"Thomas G. Lockhart\" writes:\n> > I just gave today's (Oct 25) snapshot a try on Sparc Linux.\n> > Looks good except datetime.\n> \n> OK, picky picky. Run the test tomorrow instead. And I'll mark you down\n> as having tested and passed today :)\n> \n> - Tom\n\nDo I still get full credit ?:)\n\nI noticed your list did not have Solaris / Sparc. I gave that a spin\nas well. Aside from datetime, my results confirm the expected diffs\nsupplied.\n\nTom Szybist\[email protected]\n", "msg_date": "Sun, 25 Oct 1998 21:46:53 -0500", "msg_from": "\"Thomas A. Szybist\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Last call? " }, { "msg_contents": "I still need reports on:\n\n> _ AIX 4.1.x-4.2\n> _ FreeBSD 2.2.x-3.x\n> _ NetBSD 1.3 NS32532\n> _ NetBSD 1.3 Sparc\n> _ NetBSD 1.3 VAX\n> _ DGUX 5.4R4.11 (m88k or ? we need a new maintainer)\n> _ IRIX 6.x (Andrew, are you available?)\n> _ Digital 4.0\n> _ linux 2.0.x Alpha\n> _ SCO (Billy, can you confirm success now?)\n> _ Solaris x86\n> _ Solaris 2.5.1-2.6 Sparc\n> _ SunOS 4.1.4 Sparc\n> _ SVR4 4.4 m88k (I have v6.3 \"confirmed with patching\". better now?)\n> x Windows NT\n\nTatsuo, is your usual stable of machines available for reporting? It\nwould be nice to get confirmation with that big-endian/little-endian\nmix.\n\nHow should I list WindowsNT? I have \"mostly working, check web site for\npatches\" at the moment, but I don't know if we have more info available\nnow.\n\nI would welcome confirming reports on other platforms too.\n\n - Tom\n", "msg_date": "Mon, 26 Oct 1998 15:12:40 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Last call?" }, { "msg_contents": "On Mon, 26 Oct 1998, Thomas G. Lockhart wrote:\n\n> I still need reports on:\n> \n> > _ linux 2.0.x Alpha\n\n \tI have gotten it to compile successfully again (fixes to our old\nfriend S_LOCK) and I should have a patch out in a few days (big test {at\ncollege} on Wedesday, so it will later in the week). Most of the\nregression tests are working, except the datetime ones (no, sorry, can't\nblame daylight savings this time, that is unless the year is all of a\nsudden 2136. :( ). I doubt I will have that working by Nov 2, as I have\nbeen trying for a few months with out success to find these bugs. I will\npost the patch and more detail on the datetime problems by the end of this\nweek to the pgsql-{ports,hacker,patch} mailing lists. 
TTYAL.\n\n----------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n----------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | [email protected] |\n----------------------------------------------------------------------------\n| http://www-ugrad.cs.colorado.edu/~rkirkpat/ |\n----------------------------------------------------------------------------\n\n", "msg_date": "Mon, 26 Oct 1998 09:45:12 -0600 (CST)", "msg_from": "Ryan Kirkpatrick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Last call?" }, { "msg_contents": "Tom Lane wrote:\n\n> I am still getting a discrepancy in the \"rules\" regression test,\n> namely a difference in the order in which tuples are returned:\n>\n> *** expected/rules.out Fri Oct 2 12:28:01 1998\n> --- results/rules.out Sun Oct 25 19:31:42 1998\n> ***************\n> *** 315,322 ****\n> pname |sysname\n> ------+-------\n> bm |pluto\n> - jwieck|orion\n> jwieck|notjw\n> (3 rows)\n>\n> QUERY: delete from rtest_system where sysname = 'orion';\n> --- 315,322 ----\n> pname |sysname\n> ------+-------\n> bm |pluto\n> jwieck|notjw\n> + jwieck|orion\n> (3 rows)\n>\n> QUERY: delete from rtest_system where sysname = 'orion';\n>\n> ----------------------\n>\n> This happens on all four permutations of HPUX version and compiler.\n> Are other people really seeing the tuple order given in the \"expected\"\n> file?\n\n I think they should be in the order given in the expected\n file.\n\n The rows inserted into the rtest_admin table (really a table,\n not a view) are:\n\n pname|sysname\n -----+-------\n jw |orion\n jw |notjw\n bm |neptun\n\n Then two updates are performed. The rules that are there\n would add parsetrees as if these queries are given:\n\n UPDATE rtest_admin SET sysname = 'pluto'\n WHERE rtest_system.sysname = 'neptun'\n AND rtest_admin.sysname = rtest_system.sysname;\n\n UPDATE rtest_admin SET pname = 'jwieck'\n WHERE rtest_person.pdesc = 'Jan Wieck'\n AND rtest_admin.pname = rtest_person.pdesc;\n\n These two queries will produce join plans. Since there are no\n indices on any of the tables, they should produce tuples in\n exactly the order they where entered into the table. An\n UPDATE creates a new tuple, inserts it and outdates the\n current by ctid.\n\n In rtest_system and rtest_person there are only 1 row that\n matches each of the given qualifications. So the question is\n why on HPUX the order of tuples returned in the resulting\n join plans differs from other OS's. The SELECT that produces\n the wrong order above should result in a SeqScan, and that\n must return the tuples in ctid order.\n\n The first rule update on rtest_admin (fired at the UPDATE on\n rtest_system.sysname) doesn't change the order of the tuples\n (or did you omit this in your mail?). So why the hell does\n the second? Updated rows always appear at the end of a\n SeqScan in the order they where updated. There is no vacuum\n between the updates, so the space from the outdated tuples\n should not be reused here.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Mon, 26 Oct 1998 19:44:02 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Last call?" }, { "msg_contents": "[email protected] (Jan Wieck) writes:\n> These two queries will produce join plans. Since there are no\n> indices on any of the tables, they should produce tuples in\n> exactly the order they where entered into the table.\n\nI modified the rules.sql test to perform an EXPLAIN of the query\nthat is generating the unexpected result, and it says:\n\nQUERY: explain update rtest_person set pname = 'jwieck' where pdesc = 'Jan Wieck';\nNOTICE: QUERY PLAN:\n\nMerge Join (cost=0.00 size=1 width=42)\n -> Seq Scan (cost=0.00 size=0 width=0)\n -> Sort (cost=0.00 size=0 width=0)\n -> Seq Scan on rtest_admin (cost=0.00 size=0 width=30)\n -> Seq Scan (cost=0.00 size=0 width=0)\n -> Sort (cost=0.00 size=0 width=0)\n -> Seq Scan on rtest_person (cost=0.00 size=0 width=12)\n\nNOTICE: QUERY PLAN:\n\nSeq Scan on rtest_person (cost=0.00 size=0 width=18)\n\nThe thing that jumps out at me here is the \"sort\" step (which I assume\nis on pname for rtest_admin and pdesc for rtest_person, those being the\nfields to be joined). Since the prior state of rtest_admin is\n\nQUERY: select * from rtest_admin;\npname|sysname\n-----+-------\njw |orion \njw |notjw \nbm |pluto \n(3 rows)\n\nthe two rows that will be updated have equal sort keys and therefore the\nsort could legally return them in either order. Does Postgres contain\nits own sort algorithm for this kind of operation, or does it depend on\nthe system qsort? System library qsorts don't normally guarantee\nstability for equal keys. It could be that we're looking at a byproduct\nof some minor implementation difference between the qsorts on my machine\nand yours. If that's it, though, I'm surprised that I'm the only one\nreporting a difference in the test's output.\n\nBTW, I get the same query plan if I EXPLAIN the query that you say the\nrule should expand to,\n\n UPDATE rtest_admin SET pname = 'jwieck'\n WHERE rtest_person.pdesc = 'Jan Wieck'\n AND rtest_admin.pname = rtest_person.pdesc;\n\nso there doesn't seem to be anything wrong with the rule rewriter...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Oct 1998 18:07:31 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "rules regression test diff (was Re: [HACKERS] Last call?)" }, { "msg_contents": "On Mon, 26 Oct 1998, Thomas G. Lockhart wrote:\n\n> I still need reports on:\n> \n> > _ AIX 4.1.x-4.2\n\n\tDuane out there was going to do this one for us, assuming that he\nstill had access to that machine...\n\n> > _ FreeBSD 2.2.x-3.x\n\n\tBuilding now...\n\n> > _ NetBSD 1.3 NS32532\n> > _ NetBSD 1.3 Sparc\n> > _ NetBSD 1.3 VAX\n> > _ DGUX 5.4R4.11 (m88k or ? we need a new maintainer)\n> > _ IRIX 6.x (Andrew, are you available?)\n> > _ Digital 4.0\n> > _ linux 2.0.x Alpha\n> > _ SCO (Billy, can you confirm success now?)\n> > _ Solaris x86\n> > _ Solaris 2.5.1-2.6 Sparc\n\n\tWill rebuild and regress test both the x86/Sparc platforms for 2.6\nduring the day tomorrow...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 26 Oct 1998 23:08:15 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Last call?" 
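
[Editor's note, added to the archive and not part of any message: Tom Lane's point above about equal sort keys is easy to reproduce outside the backend. The sketch below is illustration only and assumes nothing beyond a C compiler and the platform's libc qsort(). Because the C standard does not require qsort() to be stable, the two "jw" rows may legally come back in either order, which is the same cross-platform difference showing up in the rules test; the fixes discussed further down (adding ORDER BY, or sorting the displayed tuples) remove the ambiguity.]

/* Editorial sketch, not code from the thread: rows that compare equal on
 * the sort key have no guaranteed relative order after qsort().  Different
 * vendors' qsort implementations (the HP-UX vs. everyone-else difference
 * discussed in this thread) may emit the two "jw" rows in opposite orders. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct
{
	char	   *pname;
	char	   *sysname;
} Row;

static int
cmp_pname(const void *a, const void *b)
{
	/* only pname takes part in the comparison, as in the merge-join sort */
	return strcmp(((const Row *) a)->pname, ((const Row *) b)->pname);
}

int
main(void)
{
	Row			rows[] = {{"jw", "orion"}, {"jw", "notjw"}, {"bm", "pluto"}};
	int			i;

	qsort(rows, sizeof(rows) / sizeof(rows[0]), sizeof(Row), cmp_pname);

	/* which of "orion"/"notjw" prints first is implementation-defined */
	for (i = 0; i < 3; i++)
		printf("%s|%s\n", rows[i].pname, rows[i].sysname);
	return 0;
}
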
}, { "msg_contents": "On Sun, 25 Oct 1998, Thomas G. Lockhart wrote:\n\n> _ linux 2.0.x Alpha\n\nNeither 6.4B2 nor the oct26 snapshot compile on my RH 5.0 Alpha Linux\n(2.0.30) computer.\n\nI don't have much time to work on it, but I can provide the gmake.log and\naccess to the machine if it would help.\n\nDoug\n\n------------------------------------------------------------------------\nMy views are actually owned by a small, strange, orange man from the\nplanet zikalikzutorabibian who can only communicate by making cryptic\noblong symbols in warm food.\n\nIf you have problems with my views, please take them up with him.\n------------------------------------------------------------------------\nFor Hire: Will write code/manage networks for obscene amounts of money.\n------------------------------------------------------------------------\nDoug Babst - Owner | P.O. Box 103 | (402) 463-3426 - Voice\nTCG Computer Services | Hastings, NE 68901 | (402) 463-1754 - Fax\nhttp://www.tcgcs.com | http://www.babst.org/~dbabst/resume\n\n", "msg_date": "Tue, 27 Oct 1998 00:10:39 -0600 (CST)", "msg_from": "Douglas W Babst <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Last call?" }, { "msg_contents": ">Tatsuo, is your usual stable of machines available for reporting? It\n>would be nice to get confirmation with that big-endian/little-endian\n>mix.\n\nI have run the regression tests of Oct26 Snapshot on some platforms\nand a cross platform testing on MkLinux and FreeBSD.\n\nHere are results.\n\n(1) regression tests on some platforms\n\nLinuxPPC on PowerMac:\n\to PowerBook 2400c (PPC 603e) running 2.1.24 kernel\n\n\t Note that 2.1.24 kernel doesn't support PCMCIA cards,\n\t so we cannot use the network facility at all. sigh.\n\t (Unix domain sockets are ok)\n\t On the other hand, 2.1.1xx kernels do support PCMCIA,\n\t unfortunately, these are broken in that using the Unix domain\n\t sockets causes the system crash.\n\t Anyway, these are not PostgreSQL's problem, of course.\n\n\to regresion tests are almost good except the datetime test.\n\n\t\tSELECT ('tomorrow'::datetime = ('yesterday'::datetime\n\t\t + '2 days'::timespan)) as \"True\"; --> shows 'false'\n\t\tSELECT ('current'::datetime = 'now'::datetime)\n\t\tas \"True\"; --> shows 'false'\n\t\tSELECT count(*) AS one FROM DATETIME_TBL WHERE \n\t\td1 = 'today'::datetime; --> no row selected\n\t\n\t\tThey were ok on 6.3.2.\n\nMkLinux on PowerMac:\n\to PowerMac 7600 (PPC 750) running MkLinux DR3\n\to Same failure of the datetime test as LinuxPPC\n\nFreeBSD:\n\to FreeBSD 2.2.6-RELEASE\n\to datetime testing fails (seems same phenomenon as LinuxPPC)\n\to int8 testing fails (is this normal?)\n\nSeems there's something wrong with datetime. Comment?\n\n\n(2) cross platform testing\n\nI have run the test with FreeBSD 2.2.7 and Mklinux. Seems they are\nhappy to talk to each other.\n\n\nPlease let me know If you need addional testings.\n--\nTatsuo Ishii\[email protected]\n", "msg_date": "Tue, 27 Oct 1998 17:06:40 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Last call? 
" }, { "msg_contents": "Tom Lane wrote:\n\n> I modified the rules.sql test to perform an EXPLAIN of the query\n> that is generating the unexpected result, and it says:\n>\n> QUERY: explain update rtest_person set pname = 'jwieck' where pdesc = 'Jan Wieck';\n> NOTICE: QUERY PLAN:\n>\n> Merge Join (cost=0.00 size=1 width=42)\n> -> Seq Scan (cost=0.00 size=0 width=0)\n> -> Sort (cost=0.00 size=0 width=0)\n> -> Seq Scan on rtest_admin (cost=0.00 size=0 width=30)\n> -> Seq Scan (cost=0.00 size=0 width=0)\n> -> Sort (cost=0.00 size=0 width=0)\n> -> Seq Scan on rtest_person (cost=0.00 size=0 width=12)\n>\n> NOTICE: QUERY PLAN:\n>\n> Seq Scan on rtest_person (cost=0.00 size=0 width=18)\n\n Isn't it nice to have EXPLAIN doing the rewrite step?\n\n> [...]\n>\n> the two rows that will be updated have equal sort keys and therefore the\n> sort could legally return them in either order. Does Postgres contain\n> its own sort algorithm for this kind of operation, or does it depend on\n> the system qsort? System library qsorts don't normally guarantee\n> stability for equal keys. It could be that we're looking at a byproduct\n> of some minor implementation difference between the qsorts on my machine\n> and yours. If that's it, though, I'm surprised that I'm the only one\n> reporting a difference in the test's output.\n\n Could be the reason. createfirstrun() in psort.c is using\n qsort as a first try. Maybe we should add ORDER BY pname,\n sysname to the queries to avoid it.\n\n>\n> BTW, I get the same query plan if I EXPLAIN the query that you say the\n> rule should expand to,\n>\n> UPDATE rtest_admin SET pname = 'jwieck'\n> WHERE rtest_person.pdesc = 'Jan Wieck'\n> AND rtest_admin.pname = rtest_person.pdesc;\n>\n> so there doesn't seem to be anything wrong with the rule rewriter...\n\n Sure. The parsetree generated by the rule system is exactly\n that what the parser outputs for this query. I hope some\n people understand my new documentation of the rule system.\n Sometimes the lonesome rule-rider needs a partner too.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Tue, 27 Oct 1998 10:45:30 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: rules regression test diff (was Re: [HACKERS] Last call?)" }, { "msg_contents": "> LinuxPPC on PowerMac:\n> o PowerBook 2400c (PPC 603e) running 2.1.24 kernel\n> o regresion tests are almost good except the datetime test.\n> They were ok on 6.3.2.\n> \n> MkLinux on PowerMac:\n> o PowerMac 7600 (PPC 750) running MkLinux DR3\n> o Same failure of the datetime test as LinuxPPC\n> \n> FreeBSD:\n> o FreeBSD 2.2.6-RELEASE\n> o datetime testing fails (seems same phenomenon as LinuxPPC)\n> o int8 testing fails (is this normal?)\n> \n> Seems there's something wrong with datetime. Comment?\n\nYes. I have learned to never ask for regression testing reports near a\ndaylight savings time boundary. I assume Japan set the clock back an\nhour last Sunday? The explicit tests for 'yesterday', 'today',\n'tomorrow' combined with date arithmetic fail since there is an hour\noffset across that boundary. In a day or two the tests will succeed.\n\nI'm not sure why FreeBSD has trouble with int8. It of course requires\nsupport from the compiler, and configure tries to test for it. 
You don't\nuse gcc? If not, then perhaps you could check into 64-bit integer\nsupport on your compiler. If you do use gcc, perhaps the test is having\ntrouble finding the complete set of libraries. I used to have a problem\non my Linux box with that; the 64-bit int subtraction routine didn't\nmake it into libc, but was hidden in some machine-specific library way\ndown the tree.\n\nPerhaps you and Marc can look into the FreeBSD int8 problem?\n\n> (2) cross platform testing\n> I have run the test with FreeBSD 2.2.7 and Mklinux. Seems they are\n> happy to talk to each other.\n\nThis is all good. Thanks.\n\n - Tom\n", "msg_date": "Tue, 27 Oct 1998 15:38:44 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Last call?" }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> \to regresion tests are almost good except the datetime test.\n> \t\tSELECT ('tomorrow'::datetime = ('yesterday'::datetime\n> \t\t + '2 days'::timespan)) as \"True\"; --> shows 'false'\n\nI assume you ran these tests during Monday (PST)? That's a symptom\nof the daylight savings time transition issue that was just discussed\nto death on pg-hackers. Not to worry --- it's just a shortcoming\nof the datetime regression test, not a bug in Postgres.\n\n> \t\tSELECT ('current'::datetime = 'now'::datetime)\n> \t\tas \"True\"; --> shows 'false'\n> \t\tSELECT count(*) AS one FROM DATETIME_TBL WHERE \n> \t\td1 = 'today'::datetime; --> no row selected\n\nThese two worry me more. They don't look like they should be\nsubject to the DST issue, and no one else reported seeing them\nfail over the weekend. Thomas, any thoughts?\n\n> FreeBSD:\n> \to int8 testing fails (is this normal?)\n\nIt is if the platform does not have a 64-bit integer data type.\n\nIf FreeBSD's compiler and C library do support a 64-bit int type,\nthen there is a problem that we ought to fix. Most likely,\nconfigure doesn't know the name of the type to try, or isn't trying\nthe right format string for sprintf/sscanf of a long long int.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Oct 1998 11:04:47 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Last call? " }, { "msg_contents": "\"Thomas G. Lockhart\" <[email protected]> writes:\n> Yes. I have learned to never ask for regression testing reports near a\n> daylight savings time boundary. I assume Japan set the clock back an\n> hour last Sunday?\n\nJapan doesn't use DST last I heard, but it doesn't matter. The\nregression tests are run with TZ=PST8PDT, so American DST boundaries are\nthe windows in which the datetime test will fail, anywhere in the world.\n\n(Given correctly installed worldwide TZ info on the local system, of\ncourse, but I suspect you can assume that most Unix systems will have\ninfo about the American timezones...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Oct 1998 11:39:19 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Last call? " }, { "msg_contents": "> Japan doesn't use DST last I heard, but it doesn't matter. The\n> regression tests are run with TZ=PST8PDT, so American DST boundaries \n> are the windows in which the datetime test will fail, anywhere in the \n> world.\n\nOh. Right.\n\n - Tom\n", "msg_date": "Tue, 27 Oct 1998 17:00:52 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Last call?" }, { "msg_contents": "\"Thomas G. 
Lockhart\" <[email protected]> writes:\n\n> _ NetBSD 1.3\n\nWe normally write NetBSD/i386 1.3 (or 1.3.2, the latest version).\n\n> _ NetBSD 1.3 Sparc\n\nI just ran the regression tests on NetBSD/sparc 1.3H, which is an\ninterim (current) version between 1.3.2 and 1.4. All tests pass\nexcept float8, which generates the following extra error messages:\n\nbarsoom:tih> diff -c expected/float8-NetBSD.out results/float8.out \n*** expected/float8-NetBSD.out Thu Oct 8 18:12:14 1998\n--- results/float8.out Tue Oct 27 20:07:18 1998\n***************\n*** 213,219 ****\n--- 213,221 ----\n QUERY: INSERT INTO FLOAT8_TBL(f1) VALUES ('-10e400');\n ERROR: Bad float8 input format '-10e400'\n QUERY: INSERT INTO FLOAT8_TBL(f1) VALUES ('10e-400');\n+ ERROR: Bad float8 input format '10e-400'\n QUERY: INSERT INTO FLOAT8_TBL(f1) VALUES ('-10e-400');\n+ ERROR: Bad float8 input format '-10e-400'\n QUERY: DELETE FROM FLOAT8_TBL;\n QUERY: INSERT INTO FLOAT8_TBL(f1) VALUES ('0.0');\n QUERY: INSERT INTO FLOAT8_TBL(f1) VALUES ('-34.84');\nbarsoom:tih> \n\n> _ NetBSD 1.3 VAX\n\nUnfortunately, I can't run the regression tests under NetBSD/vax, as\nmy old VAX is having stability problems at the moment. Things were\nfine with 6.3, though, and since NetBSD/i386 and NetBSD/sparc both\nlike 6.4BETA, it's extremely probable that NetBSD/vax will as well.\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n", "msg_date": "27 Oct 1998 20:41:34 +0100", "msg_from": "Tom Ivar Helbekkmo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Last call?" }, { "msg_contents": "[email protected] (Jan Wieck) writes:\n> Tom Lane wrote:\n>> the two rows that will be updated have equal sort keys and therefore the\n>> sort could legally return them in either order. Does Postgres contain\n>> its own sort algorithm for this kind of operation, or does it depend on\n>> the system qsort? System library qsorts don't normally guarantee\n>> stability for equal keys. It could be that we're looking at a byproduct\n>> of some minor implementation difference between the qsorts on my machine\n>> and yours. If that's it, though, I'm surprised that I'm the only one\n>> reporting a difference in the test's output.\n\n> Could be the reason. createfirstrun() in psort.c is using\n> qsort as a first try. Maybe we should add ORDER BY pname,\n> sysname to the queries to avoid it.\n\nI think this is the answer then. Some poking around in HP's patch\ndocumentation shows that they modified their version of qsort a while\nback:\n\n\tPHCO_6780:\n\tqsort performs very badly on sorted blocks of data\n\t- customer found that qsort on a file with 100,000\n\trandomly sorted records took seconds, whereas a file\n\tof 100,000 records containing large sorted blocks\n\ttook over an hour to sort.\n\tqsort needed to pick an alternate pivot point when\n\tdetecting sorted or partially sorted data in order\n\tto improve poor performance.\n\nSo I guess it's not so surprising if HP's qsort has slightly different\nbehavior on equal-keyed data than everyone else's.\n\nDoes anyone object to modifying this regression test to sort the\ntuples for display? It seems that only the one query needs to be\nchanged...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Oct 1998 14:49:26 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: rules regression test diff (was Re: [HACKERS] Last call?) 
" }, { "msg_contents": "> I just ran the regression tests on NetBSD/sparc 1.3H, which is an\n> interim (current) version between 1.3.2 and 1.4. All tests pass\n> except float8, which generates the following extra error messages:\n\nOK, looks good...\n\n - Thomas\n", "msg_date": "Tue, 27 Oct 1998 21:16:45 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Last call?" }, { "msg_contents": "Would like to get reports running the Postgres v6.4beta on these\nremaining platforms in the next couple of days:\n\n> _ DGUX (needs a new maintainer)\n> _ IRIX 6.x (Andrew?)\n> _ Digital 4.0\n> _ linux 2.0.x Alpha (needs some work; someone have patches?)\n> _ NetBSD 1.3 VAX (Tom H's machine is down; anyone else?)\n> _ SCO (Billy, have you had any luck with this?)\n> _ Solaris x86 (Marc?)\n> _ SunOS 4.1.4 Sparc (Tatsuo?)\n> _ SVR4 4.4 m88k\n\nTIA\n\n - Tom\n", "msg_date": "Wed, 28 Oct 1998 17:31:33 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Last call?" }, { "msg_contents": "On Wed, 28 Oct 1998, Thomas G. Lockhart wrote:\n\n> Would like to get reports running the Postgres v6.4beta on these\n> remaining platforms in the next couple of days:\n> \n> > _ DGUX (needs a new maintainer)\n> > _ IRIX 6.x (Andrew?)\n> > _ Digital 4.0\n> > _ linux 2.0.x Alpha (needs some work; someone have patches?)\n> > _ NetBSD 1.3 VAX (Tom H's machine is down; anyone else?)\n> > _ SCO (Billy, have you had any luck with this?)\n> > _ Solaris x86 (Marc?)\n\n\nsorry...yes...beautiful build and regression test...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 28 Oct 1998 15:26:22 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Last call?" }, { "msg_contents": "Hi, \n\nI just compiled beta4 on Solaris 7 (also known as 2.7) using egcs 1.1.\n\nEverything went very smoothly and regression tests were passed.\n\nThomas G. Lockhart writes:\n > Would like to get reports running the Postgres v6.4beta on these\n > remaining platforms in the next couple of days:\n > \n > > _ DGUX (needs a new maintainer)\n > > _ IRIX 6.x (Andrew?)\n > > _ Digital 4.0\n > > _ linux 2.0.x Alpha (needs some work; someone have patches?)\n > > _ NetBSD 1.3 VAX (Tom H's machine is down; anyone else?)\n > > _ SCO (Billy, have you had any luck with this?)\n > > _ Solaris x86 (Marc?)\n > > _ SunOS 4.1.4 Sparc (Tatsuo?)\n > > _ SVR4 4.4 m88k\n > \n > TIA\n > \n > - Tom\n > \n > \n\nMfG/Regards\n--\n /==== Siemens AG\n / Ridderbusch / , ICP CS XS QM4\n / /./ Heinz Nixdorf Ring\n /=== /,== ,===/ /,==, // 33106 Paderborn, Germany\n / // / / // / / \\ Tel.: (49) 5251-8-15211\n/ / `==/\\ / / / \\ Email: [email protected]\n\nSince I have taken all the Gates out of my computer, it finally works!!\n", "msg_date": "Wed, 28 Oct 1998 21:28:23 +0100 (MET)", "msg_from": "Frank Ridderbusch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Last call?" }, { "msg_contents": "Hi,\n\nhere are my results for building beta3 on MIPS SVR4. The regression\ntests are as good as they can be. There are a couple of changes\nnecessary, which I've described in a little README, which I've\nappended below.\n\nThomas G. 
Lockhart writes:\n > Would like to get reports running the Postgres v6.4beta on these\n > remaining platforms in the next couple of days:\n > \n > > _ DGUX (needs a new maintainer)\n > > _ IRIX 6.x (Andrew?)\n > > _ Digital 4.0\n > > _ linux 2.0.x Alpha (needs some work; someone have patches?)\n > > _ NetBSD 1.3 VAX (Tom H's machine is down; anyone else?)\n > > _ SCO (Billy, have you had any luck with this?)\n > > _ Solaris x86 (Marc?)\n > > _ SunOS 4.1.4 Sparc (Tatsuo?)\n > > _ SVR4 4.4 m88k\n > \n > TIA\n > \n > - Tom\n > \n > \n\nReadme for building Postgresql 6.4 on Siemens RM Systems\n========================================================\n\n1. Overview\n===========\n\nThis README describes the necessary steps to build the Postgresql\ndatabase system on SINIX/Reliant UNIX. Reliant UNIX (previously called\nSINIX) is a SVR4 variant, which runs on Siemens RM400/RM600 servers.\nThese servers use the MIPS R3000/R4400/R10000 family of processors.\n\nThe following description is based on SINIX-P 5.42A10 running on a\nRM600-xx with 4 processors (both the machine and the operating system\nare pretty old, therefore your milage my vary on newer os versions).\n\n2. Building\n===========\n\nYou can not use the GCC version for this platform (2.7.2.3 from\nftp://ftp.mch.sni.de/sni/mr/pd/gnu). But you have to install it\nanyway, since the GCC cpp must be used during an intermediate step,\nwhen header files are produced. You have to use the Siemens\nC-compilations environment (I used CDS 1.1A00) for the actual\ncompilation process. Reason is, that the postgresql backend is build\nwith several 'ld -R' passes. The linker expects the symbols to be\nsorted in ELF object file, which this GCC port apparently does not do.\n\nApart from flex, bison, you also need GNU awk. Awk is used in\ngenbki.sh and the expression used in the shell script appears to be\ntoo complex for the system awk. Therefore install GNU awk and make\nsure, that this version is found before the system awk (ordering of\nPATH variable).\n\nI configured with\n\n ./configure --with-template=svr4 \\\n --prefix=/home/tools/pgsql-6.4 \\\n --with-includes=/home/tools/include \\\n --with-libraries=/home/tools/lib\n\nYou might be tempted to run configure with the additional argument\n--with-CC='cc -W0' to activate the native C-compiler. However, when I\ndid this, compilation stopped with this error message.\n\n cc -W0 -I../../../include -I../../../backend -I/home/tools/include\n -I../.. -c istrat.c -o istrat.o\n istrat.c 496: [error]: 2324 Undefined: 'F_OIDEQ'\n 2086 c1: errors: 1, warnings: 15\n\nAfter that the following changes are necessary (the changes are\nexplicitly listed, since the changes might not be compatible with\nother SVR4 platforms, which use the same files):\n\no Add -lsocket before -lnsl in Makefile.global\n\no edit Makefile.port and extend LDFLAGS with -lmutex, so that it\n like this\n\n LDFLAGS+= -lmutex -lc /usr/ucblib/libucb.a -LD-Blargedynsym\n\n And add a line to enable the native C-compiler\n\n CUSTOM_CC = cc -W0 -O2\n\no configure does not correctly reqognize the number of arguments\n for gettimeofday. 
Therefore 'undef' HAVE_GETTIMEOFDAY_2_ARGS.\n\no These two patches are necessary to fix a compiler bug.\n\n In backend/utils/adt/date.c around line 179 change\n\n EncodeTimeSpan(tm, 0, DateStyle, buf);\n\n to\n\n EncodeTimeSpan(tm, 0.0, DateStyle, buf);\n\n and in backend/utils/adt/dt.c around line 349 and 359 change\n\n tm2datetime(&tt, 0, NULL, &dt);\n\n to\n\n tm2datetime(&tt, 0.0, NULL, &dt);\n\n Patches in diff -u form are appended below.\n\no In src/backend/port/Makefile remove the 'strcasecmp.o'.\n\n3. Restrictions\n===============\n\nConnecting to the backend via unix domain sockets does not work and\nthe 64bit data type is not supported. Otherwise the regression test\nshows no remarkable differences.\n\n4. Patches\n==========\n\nPlease remove the first blank column, if you apply the patches.\n\n diff -u src/backend/utils/adt/date.c~ src/backend/utils/adt/date.c\n --- src/backend/utils/adt/date.c~ Wed Oct 28 20:43:42 1998\n +++ src/backend/utils/adt/date.c Wed Oct 28 20:43:42 1998\n @@ -176,7 +176,7 @@\n else\n {\n reltime2tm(time, tm);\n - EncodeTimeSpan(tm, 0, DateStyle, buf);\n + EncodeTimeSpan(tm, 0.0, DateStyle, buf);\n }\n\n result = palloc(strlen(buf) + 1);\n diff -u src/backend/utils/adt/dt.c~ src/backend/utils/adt/dt.c\n --- src/backend/utils/adt/dt.c~ Wed Oct 28 20:45:42 1998\n +++ src/backend/utils/adt/dt.c Wed Oct 28 20:45:42 1998\n @@ -346,7 +346,7 @@\n if (DATETIME_IS_CURRENT(dt))\n {\n GetCurrentTime(&tt);\n - tm2datetime(&tt, 0, NULL, &dt);\n + tm2datetime(&tt, 0.0, NULL, &dt);\n dt = dt2local(dt, -CTimeZone);\n\n #ifdef DATEDEBUG\n @@ -356,7 +356,7 @@\n else\n { /* if (DATETIME_IS_EPOCH(dt1)) */\n GetEpochTime(&tt);\n - tm2datetime(&tt, 0, NULL, &dt);\n + tm2datetime(&tt, 0.0, NULL, &dt);\n #ifdef DATEDEBUG\n printf(\"SetDateTime- epoch time is %f\\n\", dt);\n #endif\n\nMfG/Regards\n--\n /==== Siemens AG\n / Ridderbusch / , ICP CS XS QM4\n / /./ Heinz Nixdorf Ring\n /=== /,== ,===/ /,==, // 33106 Paderborn, Germany\n / // / / // / / \\ Tel.: (49) 5251-8-15211\n/ / `==/\\ / / / \\ Email: [email protected]\n\nSince I have taken all the Gates out of my computer, it finally works!!\n", "msg_date": "Wed, 28 Oct 1998 23:07:26 +0100 (MET)", "msg_from": "Frank Ridderbusch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Last call?" }, { "msg_contents": "\"Thomas G. Lockhart\" wrote:\n> Would like to get reports running the Postgres v6.4beta on these\n> remaining platforms in the next couple of days:\n> \n[...]\n> > _ SCO (Billy, have you had any luck with this?)\n\nTom,\n\nWhich SCO?\n\nThe only port I can support at this time is SCO UnixWare 7 (I no longer have \nUnixWare 2.x loaded on my machines). The changes I made to support that port \nshould also work for UnixWare 2.x if the UDK (SCO Universal Development Kit) \nis used to build it, but this has not been tested. Those same changes may \nalso work for SCO OpenServer 5.x, if the UDK is used to build it.\n\nI have not done any work with the univel (aka UnixWare 2.x) port for 6.4. I can not say if it will work with 6.4.\n\nBTW. Once 6.4 is offically released, I will supply a binary version that should execute on SCO OpenServer 5.x, UnixWare 2.x, and UnixWare 7.x. The ability to generate binaries that will run across all of these platforms is the main reason the UDK exists.\n\n-- \n____ | Billy G. 
Allie | Domain....: [email protected]\n| /| | 7436 Hartwell | Compuserve: 76337,2061\n|-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n|/ |LLIE | (313) 582-1540 | \n\n\n", "msg_date": "Wed, 28 Oct 1998 23:41:37 -0500", "msg_from": "\"Billy G. Allie\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Last call? " }, { "msg_contents": "> > > _ SCO (Billy, have you had any luck with this?)\n> Which SCO?\n> The only port I can support at this time is SCO UnixWare 7 (I no \n> longer have UnixWare 2.x loaded on my machines).\n\nAh, I was confused, and had thought they were more distinct. Should I\nlist them separately, or should I just say SCO UnixWare 7 to cover all\ncurrent SCO products?\n\n - Tom\n", "msg_date": "Thu, 29 Oct 1998 05:17:31 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Last call?" }, { "msg_contents": "Frank Ridderbusch <[email protected]> writes:\n> You might be tempted to run configure with the additional argument\n> --with-CC='cc -W0' to activate the native C-compiler. However, when I\n> did this, compilation stopped with this error message.\n\n> cc -W0 -I../../../include -I../../../backend -I/home/tools/include\n> -I../.. -c istrat.c -o istrat.o\n> istrat.c 496: [error]: 2324 Undefined: 'F_OIDEQ'\n> 2086 c1: errors: 1, warnings: 15\n\nI believe that is the first symptom you'd see when configure chooses\na cpp-from-stdin technique that doesn't actually work. We went around\non that a couple of times in the past three or four days, and eventually\nchanged the shell scripts so that they don't need cpp from stdin.\n\nSo, with the current sources (or BETA4 when it's out) it might work to\nspecify --with-CC; would you try it and let us know?\n\nThis cpp mistake may also be the root of the apparent need to have gcc\ninstalled --- please check and see if that's still true.\n\n> After that the following changes are necessary (the changes are\n> explicitly listed, since the changes might not be compatible with\n> other SVR4 platforms, which use the same files):\n\nI think all of these config changes could be handled with a special\nMakefile.port for Siemens ... is that worth adding?\n\n> o configure does not correctly reqognize the number of arguments\n> for gettimeofday. Therefore 'undef' HAVE_GETTIMEOFDAY_2_ARGS.\n\nThis should be fixed; can you look into it and see why configure\nis getting the wrong answer? Look at configure.in --- there is a\nsmall test program that the script tries to compile, and if there\nis no compile error then it assumes gettimeofday has two args.\nA first guess is that gettimeofday is not declared in sys/time.h\non your machine. If we add wherever it is declared then perhaps\nthe test will work.\n\n> o In src/backend/port/Makefile remove the 'strcasecmp.o'.\n\nLikewise, this should be fixable by improving configure's test\nto see whether the system has strcasecmp.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Oct 1998 10:41:06 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Last call? 
" }, { "msg_contents": "Tom Lane writes:\n > Frank Ridderbusch <[email protected]> writes:\n > > You might be tempted to run configure with the additional argument\n....\n > \n > This cpp mistake may also be the root of the apparent need to have gcc\n > installed --- please check and see if that's still true.\n\nOnce beta4 is ready I will definitely rebuild.\n\n > > After that the following changes are necessary (the changes are\n > > explicitly listed, since the changes might not be compatible with\n > > other SVR4 platforms, which use the same files):\n > \n > I think all of these config changes could be handled with a special\n > Makefile.port for Siemens ... is that worth adding?\n\nWell I'd say, this short before your expected release date, I wouldn't \nwant to shake things up by adding another special Makefile.port with\nthe possible iterations to get it right. And although I would like it\nto be otherwise, Siemens RM system have only a great market share in\nGermany. I think, if Pyramid Nile users and, I think NEC has/had a MIPS\nbased SVR4, would speak up, then you should add a special MIPS based\nMakefile.port \n\n > > o configure does not correctly reqognize the number of arguments\n > > for gettimeofday. Therefore 'undef' HAVE_GETTIMEOFDAY_2_ARGS.\n > \n > This should be fixed; can you look into it and see why configure\n > is getting the wrong answer? Look at configure.in --- there is a\n > small test program that the script tries to compile, and if there\n > is no compile error then it assumes gettimeofday has two args.\n > A first guess is that gettimeofday is not declared in sys/time.h\n > on your machine. If we add wherever it is declared then perhaps\n > the test will work.\n\nAs I say, my system is pretty old (still R3000) and ths os is just as\nold. And indeed the missing prototype in sys/time.h is the\ncause. Later OS version have it defined. Therefore I really see no\nneed for changes here.\n\n > > o In src/backend/port/Makefile remove the 'strcasecmp.o'.\n > \n > Likewise, this should be fixable by improving configure's test\n > to see whether the system has strcasecmp.\n > \n > \t\t\tregards, tom lane\n\nWell, this problem is originally caused by the need to link against\n/usr/ucblib/libucb.a. Without libucb.a, I'm getting three undefined\nsymbols\n\n strncasecmp commands/SUBSYS.o\n alloca bootstrap/SUBSYS.o\n strcasecmp commands/SUBSYS.o\n\nalloca is ok (GCC users have it build in). But strncasecmp and\nstrcasecmp are defined in the same archive member in\nlibucb.a. Therefore I get multiple defines if I link with strcasecmp.o \nfrom pgsql.\n\nCome to think of it, shouldn't configure check also for strncasecmp if \nit checks for strcasecmp? But apparently, since no one complains, it\nappears, that all other platforms do have strncasecmp and shouldn't\nalso need strcasecmp.\n\nI general, I would say, leave the configure process as it is, once\nbeta4 is available. I'm quite happy as it stand, with all the\nshortcommings of my dated hard- and software.\n\nRegards,\n\n\tFrank\n", "msg_date": "Thu, 29 Oct 1998 22:41:25 +0100 (MET)", "msg_from": "Frank Ridderbusch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Last call? " }, { "msg_contents": "Frank Ridderbusch <[email protected]> writes:\n>> Likewise, this should be fixable by improving configure's test\n>> to see whether the system has strcasecmp.\n\n> Well, this problem is originally caused by the need to link against\n> /usr/ucblib/libucb.a. 
Without libucb.a, I'm getting three undefined\n> symbols\n> strncasecmp commands/SUBSYS.o\n> alloca bootstrap/SUBSYS.o\n> strcasecmp commands/SUBSYS.o\n> alloca is ok (GCC users have it build in). But strncasecmp and\n> strcasecmp are defined in the same archive member in\n> libucb.a. Therefore I get multiple defines if I link with strcasecmp.o \n> from pgsql.\n\nOK, the reason that configure is deciding strcasecmp.o is needed is that\nit has no idea you are planning to link with /usr/ucblib/libucb.a.\n\nRather than editing the generated makefiles after the fact, you might\nhave better luck if you edit the template file you plan to use *before*\nrunning configure. I think adding \"-L/usr/ucblib -lucb\" to the LIBS\nline in the template file would solve this particular problem. I think\nyou had mentioned needing a few other unusual libraries as well --- you\nshould be able to take care of all of them that way.\n\n> Come to think of it, shouldn't configure check also for strncasecmp if \n> it checks for strcasecmp? But apparently, since no one complains, it\n> appears, that all other platforms do have strncasecmp and shouldn't\n> also need strcasecmp.\n\nBasically configure is assuming that your system library provides\nboth or neither. I think that's a reasonable assumption. Of course,\nif we find any actual instances of systems with just one, we may have\nto change it...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Oct 1998 19:34:42 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Last call? " }, { "msg_contents": "Hello!\n\nI tried postgresql.v6.4-BETA3 and found some problems especially\nwith FreeBSD.\nThe following is my environment:\n\tFreeBSD 2.2.7-RELEASE\n\tGNU Make version 3.76.1, by Richard Stallman and Roland McGrath.\n\tgcc version 2.7.2.1\n\tGNU Bison version 1.25\n\tflex version 2.5.4\n\n* makefiles/Makefile.freebsd\n\tIt should have \"-Bforcearchive\" option just like as\n\tmakefiles/Makefile.bsd (and it used to be).\n\tWithout it, I got such errors in test/regress/results/misc.out.\n\n========================================================================\nERROR: Can't find function reverse_name in file /usr/local/src/postgresql.v6.4-BETA3/pgsql/src/test/regress/input/../regress.so\n========================================================================\n\n\t# And the original Makefile.freebsd does not seem to work\n\t# as you intended. 
As `\\' overrides `#', the last 2 lines\n\t# are commented out together.\n\n\n* Makefile.shlib\n\tFreeBSD is not supported.\n\tSo no shared library for plpgsql is installed and regression\n\ttest for it fails.\n\n========================================================================\ngmake -C src install\ngmake[3]: Entering directory `/usr/local/src/postgresql.v6.4-BETA3/pgsql/src/pl/\nplpgsql/src'\n/usr/bin/install -c -m 644 /usr/local/pgsql/lib/plpgsql.so\nusage: install [-CcDps] [-f flags] [-g group] [-m mode] [-o owner] file1 file2\n install [-CcDps] [-f flags] [-g group] [-m mode] [-o owner] file1 ...\n fileN directory\n install -d [-g group] [-m mode] [-o owner] directory ...\ngmake[3]: *** [install] Error 64\n========================================================================\n\n\tAnd no shared libraries for libecpg, libpq++ and libpq.a\n\tare made and installed.\n\n\n* pg_dumpall\n\tPg_dumpall command outputs \"create database\" statement such as:\n========================================================================\ncreate database with encoding='EUC_JP' DBNAME;\n========================================================================\n\tBut it should be:\n========================================================================\ncreate database DBNAME with encoding='EUC_JP';\n========================================================================\n\n\nRegards\n\n-- \nASCII CORPORATION\nTechnical Center\nSHIOZAKI Takehiko\n<[email protected]>\n", "msg_date": "Fri, 30 Oct 1998 10:09:21 +0900 (JST)", "msg_from": "SHIOZAKI Takehiko <[email protected]>", "msg_from_op": false, "msg_subject": "v6.4-BETA3 problems with FreeBSD" }, { "msg_contents": "On Fri, 30 Oct 1998, SHIOZAKI Takehiko wrote:\n\n> Hello!\n> \n> I tried postgresql.v6.4-BETA3 and found some problems especially\n> with FreeBSD.\n> The following is my environment:\n> \tFreeBSD 2.2.7-RELEASE\n> \tGNU Make version 3.76.1, by Richard Stallman and Roland McGrath.\n> \tgcc version 2.7.2.1\n> \tGNU Bison version 1.25\n> \tflex version 2.5.4\n> \n> * makefiles/Makefile.freebsd\n> \tIt should have \"-Bforcearchive\" option just like as\n> \tmakefiles/Makefile.bsd (and it used to be).\n> \tWithout it, I got such errors in test/regress/results/misc.out.\n\n\tUnder ELF, this does not work, so adding it back in \"as is\" isn't\nan option. Let me see if I can come up with something here that works...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 29 Oct 1998 22:23:53 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] v6.4-BETA3 problems with FreeBSD" }, { "msg_contents": ">> I tried postgresql.v6.4-BETA3 and found some problems especially\n>> with FreeBSD.\n>> The following is my environment:\n>> \tFreeBSD 2.2.7-RELEASE\n>> \tGNU Make version 3.76.1, by Richard Stallman and Roland McGrath.\n>> \tgcc version 2.7.2.1\n>> \tGNU Bison version 1.25\n>> \tflex version 2.5.4\n>> \n>> * makefiles/Makefile.freebsd\n>> \tIt should have \"-Bforcearchive\" option just like as\n>> \tmakefiles/Makefile.bsd (and it used to be).\n>> \tWithout it, I got such errors in test/regress/results/misc.out.\n>\n>\tUnder ELF, this does not work, so adding it back in \"as is\" isn't\n>an option. 
Let me see if I can come up with something here that works...\n\nMaybe we should have separate makefile/Makefiles.whatever for FreeBSD\n3.x and 2.2.x ?\n--\nTatsuo Ishii\[email protected]\n", "msg_date": "Fri, 30 Oct 1998 12:13:36 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] v6.4-BETA3 problems with FreeBSD " }, { "msg_contents": "On Fri, 30 Oct 1998, Tatsuo Ishii wrote:\n\n> >> I tried postgresql.v6.4-BETA3 and found some problems especially\n> >> with FreeBSD.\n> >> The following is my environment:\n> >> \tFreeBSD 2.2.7-RELEASE\n> >> \tGNU Make version 3.76.1, by Richard Stallman and Roland McGrath.\n> >> \tgcc version 2.7.2.1\n> >> \tGNU Bison version 1.25\n> >> \tflex version 2.5.4\n> >> \n> >> * makefiles/Makefile.freebsd\n> >> \tIt should have \"-Bforcearchive\" option just like as\n> >> \tmakefiles/Makefile.bsd (and it used to be).\n> >> \tWithout it, I got such errors in test/regress/results/misc.out.\n> >\n> >\tUnder ELF, this does not work, so adding it back in \"as is\" isn't\n> >an option. Let me see if I can come up with something here that works...\n> \n> Maybe we should have separate makefile/Makefiles.whatever for FreeBSD\n> 3.x and 2.2.x ?\n\n\tEwww...\n\n\tWaht does tools/ccsym show for you? I'm guessing, but you don't\nhave a __ELF__ defined, correct?\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 29 Oct 1998 23:52:50 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] v6.4-BETA3 problems with FreeBSD " }, { "msg_contents": ">> Maybe we should have separate makefile/Makefiles.whatever for FreeBSD\n>> 3.x and 2.2.x ?\n>\n>\tEwww...\n>\n>\tWaht does tools/ccsym show for you? I'm guessing, but you don't\n>have a __ELF__ defined, correct?\n\nRight.\n--\nTatsuo Ishii\[email protected]\n", "msg_date": "Fri, 30 Oct 1998 12:57:34 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] v6.4-BETA3 problems with FreeBSD " }, { "msg_contents": "On Fri, 30 Oct 1998, Tatsuo Ishii wrote:\n\n> >> Maybe we should have separate makefile/Makefiles.whatever for FreeBSD\n> >> 3.x and 2.2.x ?\n> >\n> >\tEwww...\n> >\n> >\tWaht does tools/ccsym show for you? I'm guessing, but you don't\n> >have a __ELF__ defined, correct?\n> \n> Right.\n\n\tAnd you don't have a /usr/include/elf.h, correct? almost fixed,\njust want to confirm my assumptions are correct, that is all :)\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 30 Oct 1998 00:03:29 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] v6.4-BETA3 problems with FreeBSD " }, { "msg_contents": ">\tAnd you don't have a /usr/include/elf.h, correct? almost fixed,\n>just want to confirm my assumptions are correct, that is all :)\n\nWe have /usr/include/elf.h (FreeBSD 2.2.6-RELEASE). What about yours,\nTakehiko?\n--\nTatsuo Ishii\[email protected]\n", "msg_date": "Fri, 30 Oct 1998 13:12:02 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] v6.4-BETA3 problems with FreeBSD " }, { "msg_contents": "On Fri, 30 Oct 1998, Tatsuo Ishii wrote:\n\n> >\tAnd you don't have a /usr/include/elf.h, correct? 
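
[Editor's note, added to the archive and not part of Marc's message: the __ELF__ / elf.h questions here are about finding a reliable marker for an ELF toolchain. The program below is a hypothetical sketch only, not what was eventually committed; it relies on the compiler predefining __ELF__ on ELF systems, which is what the tools/ccsym exchange above is probing for. The configure.in patch further down sidesteps the issue by keying off the release number instead (freebsd3* vs. freebsd[12]*).]

/* Editorial sketch: prints "elf" when the toolchain predefines __ELF__
 * and "aout" otherwise.  A configure script can get the same answer
 * without running anything, e.g. by feeding the token "__ELF__" to the
 * preprocessor and checking whether it survives unexpanded. */
#include <stdio.h>

int
main(void)
{
#ifdef __ELF__
	puts("elf");
#else
	puts("aout");
#endif
	return 0;
}
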
almost fixed,\n> >just want to confirm my assumptions are correct, that is all :)\n> \n> We have /usr/include/elf.h (FreeBSD 2.2.6-RELEASE). What about yours,\n\n\tOh, shit, there goes that idea :( Hrmmm...back to the drawing\nboard :)\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 30 Oct 1998 00:17:32 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] v6.4-BETA3 problems with FreeBSD " }, { "msg_contents": "[On Oct 30, Tatsuo Ishii <[email protected]> writes:]\n>\n>We have /usr/include/elf.h (FreeBSD 2.2.6-RELEASE). What about yours,\n>Takehiko?\n\nI also have /usr/include/elf.h in FreeBSD 2.2.7-RELEASE with RCS header:\n\t$Id: elf.h,v 1.1.2.1 1998/01/27 16:23:39 jdp Exp $\nAnd I don't have it in FreeBSD 2.2.5-RELEASE.\n\n-- \n\u001b$B$?$@$7!\";d$O<*I!0v9\"$,<e$$$N$G!\"2q5D$O6X1l$K$7$F$/$@$5$$!#\u001b(B\n--\n\u001b$B1v:j\u001b(B \u001b$B5#I'\u001b(B(SHIOZAKI Takehiko)\t<[email protected]>\nTechnical Center,\tASCII CORPORATION\n", "msg_date": "Fri, 30 Oct 1998 13:19:08 +0900 (JST)", "msg_from": "SHIOZAKI Takehiko <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] v6.4-BETA3 problems with FreeBSD" }, { "msg_contents": "SHIOZAKI Takehiko <[email protected]> writes:\n> * Makefile.shlib\n> \tFreeBSD is not supported.\n\nCan you provide an appropriate entry to add to Makefile.shlib for\nFreeBSD? It's probably much like one of the existing entries...\n\n> gmake[3]: Entering directory `/usr/local/src/postgresql.v6.4-BETA3/pgsql/src/pl/\n> plpgsql/src'\n> /usr/bin/install -c -m 644 /usr/local/pgsql/lib/plpgsql.so\n> usage: install [-CcDps] [-f flags] [-g group] [-m mode] [-o owner] file1 file2\n> install [-CcDps] [-f flags] [-g group] [-m mode] [-o owner] file1 ...\n> fileN directory\n> install -d [-g group] [-m mode] [-o owner] directory ...\n> gmake[3]: *** [install] Error 64\n\nOops. The plpgsql Makefile should exit a little more gracefully on\na system not supported by Makefile.shlib. I'll do something about\nthat tomorrow.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Oct 1998 23:41:41 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] v6.4-BETA3 problems with FreeBSD " }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> On Fri, 30 Oct 1998, SHIOZAKI Takehiko wrote:\n>> * makefiles/Makefile.freebsd\n>> It should have \"-Bforcearchive\" option just like as\n>> makefiles/Makefile.bsd (and it used to be).\n\n> \tUnder ELF, this does not work, so adding it back in \"as is\" isn't\n> an option. Let me see if I can come up with something here that works...\n\nCan we teach configure how to tell the difference between\nELF and pre-ELF BSD?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Oct 1998 23:44:01 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] v6.4-BETA3 problems with FreeBSD " }, { "msg_contents": "On Thu, 29 Oct 1998, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > On Fri, 30 Oct 1998, SHIOZAKI Takehiko wrote:\n> >> * makefiles/Makefile.freebsd\n> >> It should have \"-Bforcearchive\" option just like as\n> >> makefiles/Makefile.bsd (and it used to be).\n> \n> > \tUnder ELF, this does not work, so adding it back in \"as is\" isn't\n> > an option. 
Let me see if I can come up with something here that works...\n> \n> Can we teach configure how to tell the difference between\n> ELF and pre-ELF BSD?\n\n\tWorking on it now...:)\n\n\tIts not a \"nice fix\", but it will get us through the release.\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 30 Oct 1998 00:56:05 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] v6.4-BETA3 problems with FreeBSD " }, { "msg_contents": "> > * makefiles/Makefile.freebsd\n> > \tIt should have \"-Bforcearchive\" option just like as\n> > \tmakefiles/Makefile.bsd (and it used to be).\n> > \tWithout it, I got such errors in test/regress/results/misc.out.\n> \n> \tUnder ELF, this does not work, so adding it back in \"as is\" isn't\n> an option. Let me see if I can come up with something here that works...\n\nTake a look at bsd 4.0. I had to do the same thing. I defined a new\ntemplate file, and changed the DLSUFFIX, and checked that in\nMakefile.shlib.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 30 Oct 1998 10:59:38 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] v6.4-BETA3 problems with FreeBSD" }, { "msg_contents": "[On Oct 29, Tom Lane <[email protected]> writes:]\n>\n>Can you provide an appropriate entry to add to Makefile.shlib for\n>FreeBSD? It's probably much like one of the existing entries...\n\nO.K.\nI tried snapshot(Oct30) and made some patches.\n# I think that it is confused to manage both Makefile.shlib and\n# makefiles/Makefile.*, don't you?\n\n* configure\n\tNow FreeBSD 2.X is not supported..., so I added its entry.\n\tIf ELF_SYSTEM is set, gmake treat it defined even though\n\tit is \"false\". So nothing should be set to use \"ifdef\".\n\tBSD_SHLIB etc. may have same problems.\n\n* Makefile.shlib\n\tAs you said, FreeBSD entry is much like BSD's.\n\tI only added ELF_SYSTEM code.\n\n* makefiles/Makefile.freebsd\n\tIfdef/else/endif can not be indented with TABs.\n\n========================================================================\n*** configure.in.orig\tFri Oct 30 17:00:20 1998\n--- configure.in\tSat Oct 31 10:07:38 1998\n***************\n*** 17,23 ****\n linux*) os=linux need_tas=no ;;\n bsdi*) os=bsdi need_tas=no ;;\n freebsd3*) os=freebsd need_tas=no elf=yes ;;\n! freebsd1*) os=freebsd need_tas=no ;;\n netbsd*|openbsd*) os=bsd need_tas=no ;;\n dgux*) os=dgux need_tas=no ;;\n aix*) os=aix need_tas=no ;;\n--- 17,23 ----\n linux*) os=linux need_tas=no ;;\n bsdi*) os=bsdi need_tas=no ;;\n freebsd3*) os=freebsd need_tas=no elf=yes ;;\n! freebsd[12]*) os=freebsd need_tas=no ;;\n netbsd*|openbsd*) os=bsd need_tas=no ;;\n dgux*) os=dgux need_tas=no ;;\n aix*) os=aix need_tas=no ;;\n***************\n*** 52,58 ****\n then\n \tELF_SYS=true\n else\n! \tELF_SYS=false\n fi\n \n if test \"X$need_tas\" = \"Xyes\"\n--- 52,58 ----\n then\n \tELF_SYS=true\n else\n! \tELF_SYS=\n fi\n \n if test \"X$need_tas\" = \"Xyes\"\n*** configure.orig\tFri Oct 30 17:00:20 1998\n--- configure\tSat Oct 31 10:07:00 1998\n***************\n*** 617,623 ****\n linux*) os=linux need_tas=no ;;\n bsdi*) os=bsdi need_tas=no ;;\n freebsd3*) os=freebsd need_tas=no elf=yes ;;\n! 
freebsd1*) os=freebsd need_tas=no ;;\n netbsd*|openbsd*) os=bsd need_tas=no ;;\n dgux*) os=dgux need_tas=no ;;\n aix*) os=aix need_tas=no ;;\n--- 617,623 ----\n linux*) os=linux need_tas=no ;;\n bsdi*) os=bsdi need_tas=no ;;\n freebsd3*) os=freebsd need_tas=no elf=yes ;;\n! freebsd[12]*) os=freebsd need_tas=no ;;\n netbsd*|openbsd*) os=bsd need_tas=no ;;\n dgux*) os=dgux need_tas=no ;;\n aix*) os=aix need_tas=no ;;\n***************\n*** 652,658 ****\n then\n \tELF_SYS=true\n else\n! \tELF_SYS=false\n fi\n \n if test \"X$need_tas\" = \"Xyes\"\n--- 652,658 ----\n then\n \tELF_SYS=true\n else\n! \tELF_SYS=\n fi\n \n if test \"X$need_tas\" = \"Xyes\"\n*** Makefile.shlib.orig\tSat Oct 31 10:12:32 1998\n--- Makefile.shlib\tSat Oct 31 10:14:47 1998\n***************\n*** 56,61 ****\n--- 56,74 ----\n # Makefile.global (or really Makefile.port) to supply DLSUFFIX and other\n # symbols.\n \n+ ifeq ($(PORTNAME), freebsd)\n+ ifdef BSD_SHLIB\n+ install-shlib-dep\t:= install-shlib\n+ shlib\t\t\t\t:= lib$(NAME)$(DLSUFFIX).$(SO_MAJOR_VERSION).$(SO_MINOR_VERSION)\n+ ifdef ELF_SYSTEM\n+ LDFLAGS_SL\t\t\t:= -x -Bshareable\n+ else\n+ LDFLAGS_SL\t\t\t:= -x -Bshareable -Bforcearchive\n+ endif\n+ CFLAGS\t\t\t\t+= $(CFLAGS_SL)\n+ endif\n+ endif\n+ \n ifeq ($(PORTNAME), bsd)\n ifdef BSD_SHLIB\n install-shlib-dep\t:= install-shlib\n*** makefiles/Makefile.freebsd.orig\tSat Oct 31 09:13:20 1998\n--- makefiles/Makefile.freebsd\tSat Oct 31 09:13:45 1998\n***************\n*** 5,13 ****\n \t@${AR} cq [email protected] `lorder $<.obj | tsort`\n \t${RANLIB} [email protected]\n \t@rm -f $@\n! \tifdef ELF_SYSTEM\n! \t\t$(LD) -x -Bshareable -o $@ [email protected]\n! \telse\n! \t $(LD) -x -Bshareable -Bforcearchive -o $@ [email protected]\n! \tendif\n \n--- 5,13 ----\n \t@${AR} cq [email protected] `lorder $<.obj | tsort`\n \t${RANLIB} [email protected]\n \t@rm -f $@\n! ifdef ELF_SYSTEM\n! \t$(LD) -x -Bshareable -o $@ [email protected]\n! else\n! \t$(LD) -x -Bshareable -Bforcearchive -o $@ [email protected]\n! endif\n \n========================================================================\n\n-- \nASCII CORPORATION\nTechnical Center\nSHIOZAKI Takehiko\n<[email protected]>\n", "msg_date": "Sat, 31 Oct 1998 11:21:13 +0900 (JST)", "msg_from": "SHIOZAKI Takehiko <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] v6.4-BETA3 problems with FreeBSD" }, { "msg_contents": "On Sat, 31 Oct 1998, SHIOZAKI Takehiko wrote:\n\n> [On Oct 29, Tom Lane <[email protected]> writes:]\n> >\n> >Can you provide an appropriate entry to add to Makefile.shlib for\n> >FreeBSD? It's probably much like one of the existing entries...\n> \n> O.K.\n> I tried snapshot(Oct30) and made some patches.\n> # I think that it is confused to manage both Makefile.shlib and\n> # makefiles/Makefile.*, don't you?\n\n\tSomething ot look at for v6.5 .. right now, it works as is :)\n\n> * configure\n> \tNow FreeBSD 2.X is not supported..., so I added its entry.\n> \tIf ELF_SYSTEM is set, gmake treat it defined even though\n> \tit is \"false\". So nothing should be set to use \"ifdef\".\n> \tBSD_SHLIB etc. may have same problems.\n\n\tMy oops, actually...I meant to put 'freebsd2*', not 'freebsd1*' :)\nI put in yours though, anyway...\n\n\nMarc G. 
Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 31 Oct 1998 00:06:37 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] v6.4-BETA3 problems with FreeBSD" }, { "msg_contents": "[On Oct 31, The Hermit Hacker <[email protected]> writes:]\n>\n>\tMy oops, actually...I meant to put 'freebsd2*', not 'freebsd1*' :)\n>I put in yours though, anyway...\n\nBETA5 stll needs the following small patch for FreeBSD 2.X.\n# And my patch for nbtsearch.c is not applied yet. Was it wrong?\n\n========================================================================\n*** configure.orig\tFri Oct 30 17:00:20 1998\n--- configure\tSat Oct 31 10:07:00 1998\n***************\n*** 617,623 ****\n linux*) os=linux need_tas=no ;;\n bsdi*) os=bsdi need_tas=no ;;\n freebsd3*) os=freebsd need_tas=no elf=yes ;;\n! freebsd12*) os=freebsd need_tas=no ;;\n netbsd*|openbsd*) os=bsd need_tas=no ;;\n dgux*) os=dgux need_tas=no ;;\n aix*) os=aix need_tas=no ;;\n--- 617,623 ----\n linux*) os=linux need_tas=no ;;\n bsdi*) os=bsdi need_tas=no ;;\n freebsd3*) os=freebsd need_tas=no elf=yes ;;\n! freebsd[12]*) os=freebsd need_tas=no ;;\n netbsd*|openbsd*) os=bsd need_tas=no ;;\n dgux*) os=dgux need_tas=no ;;\n aix*) os=aix need_tas=no ;;\n*** makefiles/Makefile.freebsd.orig\tMon Nov 2 11:38:01 1998\n--- makefiles/Makefile.freebsd\tMon Nov 2 11:38:08 1998\n***************\n*** 8,13 ****\n ifdef ELF_SYSTEM\n \t$(LD) -x -Bshareable -o $@ [email protected]\n else\n! $(LD) -x -Bshareable -Bforcearchive -o $@ [email protected]\n endif\n \n--- 8,13 ----\n ifdef ELF_SYSTEM\n \t$(LD) -x -Bshareable -o $@ [email protected]\n else\n! \t$(LD) -x -Bshareable -Bforcearchive -o $@ [email protected]\n endif\n \n========================================================================\n\n-- \nASCII CORPORATION\nTechnical Center\nSHIOZAKI Takehiko\n<[email protected]>\n", "msg_date": "Mon, 2 Nov 1998 14:18:12 +0900 (JST)", "msg_from": "SHIOZAKI Takehiko <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] v6.4-BETA3 problems with FreeBSD" }, { "msg_contents": "On Mon, 2 Nov 1998, SHIOZAKI Takehiko wrote:\n\n> [On Oct 31, The Hermit Hacker <[email protected]> writes:]\n> >\n> >\tMy oops, actually...I meant to put 'freebsd2*', not 'freebsd1*' :)\n> >I put in yours though, anyway...\n> \n> BETA5 stll needs the following small patch for FreeBSD 2.X.\n\nModifying configure directly isn't accetable, as it will change again once\nautoconf is run. Should have been written as:\n\n\tfreebsd1*|freebsd2*) ... :(\n\nSecond patch applied...\n\n> # And my patch for nbtsearch.c is not applied yet. Was it wrong?\n\n\tDid I miss something?\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 2 Nov 1998 01:31:15 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] v6.4-BETA3 problems with FreeBSD" }, { "msg_contents": "[On Nov 2, The Hermit Hacker <[email protected]> writes:]\n>\n>> # And my patch for nbtsearch.c is not applied yet. 
Was it wrong?\n>\n>\tDid I miss something?\n\nPlease make sure whether this is correct.\nhttp://www.PostgreSQL.ORG/mhonarc/pgsql-hackers/1998-10/msg01135.html\n\n-- \nASCII CORPORATION\nTechnical Center\nSHIOZAKI Takehiko\n<[email protected]>\n", "msg_date": "Mon, 2 Nov 1998 14:36:46 +0900 (JST)", "msg_from": "SHIOZAKI Takehiko <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] v6.4-BETA3 problems with FreeBSD" }, { "msg_contents": "On Mon, 2 Nov 1998, SHIOZAKI Takehiko wrote:\n\n> [On Nov 2, The Hermit Hacker <[email protected]> writes:]\n> >\n> >> # And my patch for nbtsearch.c is not applied yet. Was it wrong?\n> >\n> >\tDid I miss something?\n> \n> Please make sure whether this is correct.\n> http://www.PostgreSQL.ORG/mhonarc/pgsql-hackers/1998-10/msg01135.html\n\n\tBruce, can you check that one. Basically, we try and pass:\n\n\trel->rd_rel->relname\n\n\tinstead of\n\n\trel->rd_rel->relname.data\n\n\taround line 217 (elog(...)) in nbtsearch.c ...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 2 Nov 1998 01:56:41 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] v6.4-BETA3 problems with FreeBSD" }, { "msg_contents": "> On Mon, 2 Nov 1998, SHIOZAKI Takehiko wrote:\n> \n> > [On Nov 2, The Hermit Hacker <[email protected]> writes:]\n> > >\n> > >> # And my patch for nbtsearch.c is not applied yet. Was it wrong?\n> > >\n> > >\tDid I miss something?\n> > \n> > Please make sure whether this is correct.\n> > http://www.PostgreSQL.ORG/mhonarc/pgsql-hackers/1998-10/msg01135.html\n> \n> \tBruce, can you check that one. Basically, we try and pass:\n> \n> \trel->rd_rel->relname\n> \n> \tinstead of\n> \n> \trel->rd_rel->relname.data\n> \n> \taround line 217 (elog(...)) in nbtsearch.c ...\n\nOh, I see the problem now. Originally, I thought it was just a question\nof style, but now I see that the whole structure is being passed to\nelog(), rather than the character pointer.\n\nPatch applied.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 2 Nov 1998 10:27:14 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] v6.4-BETA3 problems with FreeBSD" }, { "msg_contents": "RedHat 5.1 i386 with PostgreSQL v6.4 final version\n\n./configure --with-tcl\n\nafter installing libpgtcl , cannot run pgaccess because library crypt\nhasn't been linked into libpgtcl.so\n\nmust manually add -lcrypt in Makefile into src/interfaces/libpgtcl\n\nAlso, PgAccess version that has been included in 6.4 final version is\n0.90 , but current stable version is 0.91\n(ftp://ftp.flex.ro/pub/pgaccess/pgaccess-0.91.tar.gz)\n\n\nConstantin Teodorescu\nFLEX Consulting Braila, ROMANIA\n", "msg_date": "Thu, 05 Nov 1998 16:48:52 +0000", "msg_from": "Constantin Teodorescu <[email protected]>", "msg_from_op": false, "msg_subject": "crypt not included when compiling libpgtcl !!!!!!!" }, { "msg_contents": "> Also, PgAccess version that has been included in 6.4 final version is\n> 0.90 , but current stable version is 0.91\n> (ftp://ftp.flex.ro/pub/pgaccess/pgaccess-0.91.tar.gz)\n\nYes, it missed the code freeze cutoff and the bug fix cutoff. 
Next\nrelease try to get the new version announced at least a couple of weeks\nbefore the nominal release date, and a month before would be even\nbetter. Should be in v6.4.1 but I don't think anyone has committed it\nyet.\n\n - Tom\n", "msg_date": "Thu, 05 Nov 1998 17:29:59 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] crypt not included when compiling libpgtcl !!!!!!!" }, { "msg_contents": "Constantin Teodorescu <[email protected]> writes:\n> RedHat 5.1 i386 with PostgreSQL v6.4 final version\n> after installing libpgtcl , cannot run pgaccess because library crypt\n> hasn't been linked into libpgtcl.so\n> must manually add -lcrypt in Makefile into src/interfaces/libpgtcl\n\nI don't think that's the right approach. libpgtcl doesn't depend on\nlibcrypt. libpq does, and that's where we need to fix this, or else\nwe'll see the same problem with any other shared library that depends\non libpq.\n\nWould you check whether it works to add the following patch to\nsrc/interfaces/libpq's Makefile, and then build libpgtcl *without*\na reference to crypt? (I can't test it here since crypt is part of\nlibc on my machine...)\n\n\n*** Makefile.in~\tThu Nov 5 18:08:26 1998\n--- Makefile.in\tThu Nov 5 18:11:43 1998\n***************\n*** 34,39 ****\n--- 34,43 ----\n OBJS+= common.o wchar.o conv.o\n endif\n \n+ # If crypt is a separate library, rather than part of libc,\n+ # make sure it gets included in shared libpq.\n+ SHLIB_LINK= $(findstring -lcrypt,$(LIBS))\n+ \n # Shared library stuff, also default 'all' target\n include $(SRCDIR)/Makefile.shlib\n \n\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 05 Nov 1998 18:16:27 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] crypt not included when compiling libpgtcl !!!!!!! " }, { "msg_contents": "On Thu, 5 Nov 1998, Thomas G. Lockhart wrote:\n\n> > Also, PgAccess version that has been included in 6.4 final version is\n> > 0.90 , but current stable version is 0.91\n> > (ftp://ftp.flex.ro/pub/pgaccess/pgaccess-0.91.tar.gz)\n> \n> Yes, it missed the code freeze cutoff and the bug fix cutoff. Next\n> release try to get the new version announced at least a couple of weeks\n> before the nominal release date, and a month before would be even\n> better. Should be in v6.4.1 but I don't think anyone has committed it\n> yet.\n\n\tWorking on it right now...sorry Constantin :(\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 5 Nov 1998 23:14:01 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] crypt not included when compiling libpgtcl !!!!!!!" }, { "msg_contents": "> On Thu, 5 Nov 1998, Thomas G. Lockhart wrote:\n> \n> > > Also, PgAccess version that has been included in 6.4 final version is\n> > > 0.90 , but current stable version is 0.91\n> > > (ftp://ftp.flex.ro/pub/pgaccess/pgaccess-0.91.tar.gz)\n> > \n> > Yes, it missed the code freeze cutoff and the bug fix cutoff. Next\n> > release try to get the new version announced at least a couple of weeks\n> > before the nominal release date, and a month before would be even\n> > better. 
Should be in v6.4.1 but I don't think anyone has committed it\n> > yet.\n> \n> \tWorking on it right now...sorry Constantin :(\n\nI did pull down 0.90 about a month ago, and never went back to see if a\nnewer one existed.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 5 Nov 1998 23:37:14 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] crypt not included when compiling\n\tlibpgtcl !!!!!!!" }, { "msg_contents": "Tom Lane wrote:\n> \n> I don't think that's the right approach. libpgtcl doesn't depend on\n> libcrypt. libpq does, and that's where we need to fix this, or else\n> we'll see the same problem with any other shared library that depends\n> on libpq.\n> \n> Would you check whether it works to add the following patch to\n> src/interfaces/libpq's Makefile, and then build libpgtcl *without*\n> a reference to crypt? (I can't test it here since crypt is part of\n> libc on my machine...)\n\nCompletely delete postgresql from my machine and any trace of libpq and\nlibpgtcl\nUntarred the 6.4 final distribution\napply the patch\ncompiled from scratch\ninstalled\n\nGot :\n\n[teo@teo teo]$ Error in startup script: couldn't load file\n\"libpgtcl.so\": /usr/local/pgsql/lib/libpgtcl.so: undefined symbol: crypt\n while executing\n\"load libpgtcl.so\"\n (procedure \"main\" line 3)\n invoked from within\n\"main $argc $argv\"\n (file \"pgaccess.tcl\" line 4813) \n\n\nWhen modify the Makefile from libpgtcl adding -lcrypt , recompile,\ninstall . Everything it's ok.\n\n\n\n\n> \n> *** Makefile.in~ Thu Nov 5 18:08:26 1998\n> --- Makefile.in Thu Nov 5 18:11:43 1998\n> ***************\n> *** 34,39 ****\n> --- 34,43 ----\n> OBJS+= common.o wchar.o conv.o\n> endif\n> \n> + # If crypt is a separate library, rather than part of libc,\n> + # make sure it gets included in shared libpq.\n> + SHLIB_LINK= $(findstring -lcrypt,$(LIBS))\n> +\n> # Shared library stuff, also default 'all' target\n> include $(SRCDIR)/Makefile.shlib\n> \n> \n> regards, tom lane\n\n-- \nConstantin Teodorescu\nFLEX Consulting Braila, ROMANIA\n", "msg_date": "Fri, 06 Nov 1998 08:18:57 +0000", "msg_from": "Constantin Teodorescu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] crypt not included when compiling libpgtcl !!!!!!!" }, { "msg_contents": "Constantin Teodorescu <[email protected]> writes:\n> Tom Lane wrote:\n>> Would you check whether it works to add the following patch to\n>> src/interfaces/libpq's Makefile, and then build libpgtcl *without*\n>> a reference to crypt? (I can't test it here since crypt is part of\n>> libc on my machine...)\n>\n> [ Still doesn't work ]\n\nOdd. Perhaps your system doesn't support libraries referring to other\nshared libraries? (Seems unlikely, but...) In that case you'd be\ngetting the static libpq.a bound into libpgtcl.so, minus crypt of\ncourse, and then this would happen.\n\nIf that's what's happening you should easily be able to tell from the\nsizes of libpgtcl.so and libpq.so. On my machine, libpgtcl is actually\nsmaller than libpq (about half as big, in fact). If libpq is getting\nbound into libpgtcl.so then libpgtcl.so would have to be bigger than\nlibpq.so. Which one is bigger on your machine?\n\nAlso, do you have any sort of utility that shows directly what a\nprogram's shared library requirements are? 
For example, on HPUX\nI can do\n\n$ chatr /usr/local/pgsql/bin/pgtclsh\n/usr/local/pgsql/bin/pgtclsh:\n shared executable\n shared library dynamic path search:\n SHLIB_PATH disabled second\n embedded path enabled first /usr/local/pgsql/lib\n shared library list:\n static pgtclsh\n dynamic ../../interfaces/libpgtcl/libpgtcl.sl\n dynamic ../../interfaces/libpq/libpq.sl\n dynamic /usr/lib/libdld.sl\n dynamic /lib/libcurses.sl\n dynamic /lib/libc.sl\n shared library binding:\n deferred\n\n$ chatr /usr/local/pgsql/lib/libpgtcl.sl\n/usr/local/pgsql/lib/libpgtcl.sl:\n shared library\n shared library list:\n dynamic ../libpq/libpq.sl\n\n\nOn Linux I think the corresponding command is \"ldd\".\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 06 Nov 1998 10:27:03 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] crypt not included when compiling libpgtcl !!!!!!! " }, { "msg_contents": "Yes. libpgtcl is bigger than libpq\n\nBut ldd shows that libpq is statically linked :-( . Maybe this is the\nproblem ?\n\n[root@teo libpgtcl]# ldd libpgtcl.so.2.0\n libcrypt.so.1 => /lib/libcrypt.so.1 (0x40014000)\n libc.so.6 => /lib/libc.so.6 (0x40041000)\n /lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x00000000)\n\n[root@teo libpgtcl]# ldd ../libpq/libpq.so.2.0\n statically linked\n\n[root@teo libpgtcl]# vdir ../libpq/libpq.so.2.0\n-rwxr-xr-x 1 root root 50014 Nov 6 10:08\n../libpq/libpq.so.2.0\n\n[root@teo libpgtcl]# vdir libpgtcl.so.2.0\n-rwxr-xr-x 1 root root 64725 Nov 6 10:19 libpgtcl.so.2.0 \n\n\n\n-- \nConstantin Teodorescu\nFLEX Consulting Braila, ROMANIA\n", "msg_date": "Fri, 06 Nov 1998 15:49:59 +0000", "msg_from": "Constantin Teodorescu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] crypt not included when compiling libpgtcl !!!!!!!" }, { "msg_contents": "Constantin Teodorescu <[email protected]> writes:\n> [root@teo libpgtcl]# ldd libpgtcl.so.2.0\n> libcrypt.so.1 => /lib/libcrypt.so.1 (0x40014000)\n> libc.so.6 => /lib/libc.so.6 (0x40041000)\n> /lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x00000000)\n\n> [root@teo libpgtcl]# ldd ../libpq/libpq.so.2.0\n> statically linked\n\nWell, that's pretty interesting. Evidently you were able to persuade\nthe linker to dynamically link libpgtcl.so to libcrypt.so, but the patch\nI suggested did *not* persuade the linker to dynamically link libpq.so\nto libcrypt.so. Just exactly what change did you make in libpgtcl's\nMakefile, anyway? I assumed it was simply adding -lcrypt, but now I\nam not so sure.\n\nIt's also curious that libpq.so is not showing any dynamic dependency\non libc. I think we must have the wrong linker options for libpq.\n\n> [root@teo libpgtcl]# vdir ../libpq/libpq.so.2.0\n> -rwxr-xr-x 1 root root 50014 Nov 6 10:08\n> ../libpq/libpq.so.2.0\n\n> [root@teo libpgtcl]# vdir libpgtcl.so.2.0\n> -rwxr-xr-x 1 root root 64725 Nov 6 10:19 libpgtcl.so.2.0 \n\nAnd we also need to figure out why the linker is including libpq.a into\nlibpgtcl.so, instead of creating a dynamic link from libpgtcl.so to\nlibpq.so like it did for libcrypt and libc.\n\n(BTW, do you have a libcrypt.a in /lib, or just libcrypt.so?)\n\nIt seems clear from your ldd results that your machine is capable of\ndoing the right thing, but we aren't invoking the linker with the right\noptions. 
Where I want to get to is:\n\n\tlibpgtcl.so: dynamic dependency on libpq.so (and libc of course)\n\n\tlibpq.so: dynamic dependency on libcrypt.so (and libc of course)\n\nIt might be worth extracting the part of the \"make all\" log that shows\nwhat commands are being used to build each of these libraries.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 06 Nov 1998 13:45:13 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] crypt not included when compiling libpgtcl !!!!!!! " }, { "msg_contents": "Tom Lane wrote:\n \n >It's also curious that libpq.so is not showing any dynamic dependency\n >on libc. I think we must have the wrong linker options for libpq.\n\nTo get the shared library dependency on libc you need to add -lc at the \nend of the link options.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"Let all that you do be done in love.\" \n 1 Corinthians 16:14 \n\n\n", "msg_date": "Fri, 06 Nov 1998 22:30:01 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [INTERFACES] crypt not included when compiling libpgtcl !!!!!!! " }, { "msg_contents": "Tom Lane wrote:\n> \n> Constantin Teodorescu <[email protected]> writes:\n> > [root@teo libpgtcl]# ldd libpgtcl.so.2.0\n> > libcrypt.so.1 => /lib/libcrypt.so.1 (0x40014000)\n> > libc.so.6 => /lib/libc.so.6 (0x40041000)\n> > /lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x00000000)\n> \n> > [root@teo libpgtcl]# ldd ../libpq/libpq.so.2.0\n> > statically linked\n> \n> Well, that's pretty interesting. Evidently you were able to persuade\n> the linker to dynamically link libpgtcl.so to libcrypt.so, but the patch\n> I suggested did *not* persuade the linker to dynamically link libpq.so\n> to libcrypt.so.\n\nThe patch that you send me I have applied ONLY to Makefile.in from\nsrc/interfaces/libpgtcl directory.\nShould I apply it to Makefile.in from src/interfaces/libpq directory ?\n\n\n> Just exactly what change did you make in libpgtcl's\n> Makefile, anyway? I assumed it was simply adding -lcrypt, but now I\n> am not so sure.\n\nYes. Only add -lcrypt\n\n\n> And we also need to figure out why the linker is including libpq.a into\n> libpgtcl.so, instead of creating a dynamic link from libpgtcl.so to\n> libpq.so like it did for libcrypt and libc.\n> \n> (BTW, do you have a libcrypt.a in /lib, or just libcrypt.so?)\n\nin /usr/lib libcrypt.a and libcrypt.so\nin /lib/libcrypt.so\n\n-- \nConstantin Teodorescu\nFLEX Consulting Braila, ROMANIA\n", "msg_date": "Sat, 07 Nov 1998 08:20:25 +0000", "msg_from": "Constantin Teodorescu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] crypt not included when compiling libpgtcl !!!!!!!" }, { "msg_contents": "Constantin Teodorescu <[email protected]> writes:\n> The patch that you send me I have applied ONLY to Makefile.in from\n> src/interfaces/libpgtcl directory.\n> Should I apply it to Makefile.in from src/interfaces/libpq directory ?\n\nNo no, that change was intended to be applied to libpq's Makefile *not*\nlibpgtcl's. Sorry if I wasn't clear.\n\nThere's also the question of why the linker is making a dynamic\ndependency on libc for libpgtcl but not libpq. Oliver suggested\nthat -lc is needed as well as -lcrypt in the link step, but that\ndoesn't seem to explain all the facts --- libpgtcl doesn't have\n-lc, and it seems to be getting built right. 
Still, you might try\nadding -lc to SHLIB_LINK in libpq's Makefile, and see if that\nchanges the ldd result.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 07 Nov 1998 10:45:10 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] crypt not included when compiling libpgtcl !!!!!!! " }, { "msg_contents": "Sferacarta Software <[email protected]> el d�a Mon, 9 Nov 1998 14:33:06 \n+0100, escribi�:\n\n>Hello Radhakrishnan,\n>\n>luned�, 9 novembre 98, you wrote:\n>\n>\n>RCV> how can i limit the number of rows obtained from a select statement\n>RCV> in postgreSQL to say, 10 rows while the select condition actually\n>RCV> matches more than that. in oracle we can use the ROW_NUM variable\n>RCV> for this purpose but now i met such an issue with postgreSQL\n>\n>On v6.4 you can specify a limit for queries as:\n>\n>set QUERY_LIMIT TO '10';\n>To have only the first 10 rows from a select;\n\nthis will limit _all_ the querys, right ?\nwich is not very flexible.\n\nI can't do something like\n\nselect * from news order by news_date limit 10\n\n?\n\nSergio\n\n", "msg_date": "Sat, 7 Nov 1998 13:39:27 -0300", "msg_from": "Sergio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] limiting the rows selected in postgresql" }, { "msg_contents": "\nhow can i limit the number of rows obtained from a select statement\nin postgreSQL to say, 10 rows while the select condition actually\nmatches more than that. in oracle we can use the ROW_NUM variable\nfor this purpose but now i met such an issue with postgreSQL\n\nthanks in advance\n\nCV Radhakrishnan\n\nhttp://www.river-valley.com\n\n", "msg_date": "Mon, 9 Nov 1998 18:22:58 +0530 (IST)", "msg_from": "RADHAKRISHNAN C V <[email protected]>", "msg_from_op": false, "msg_subject": "limiting the rows selected in postgresql" }, { "msg_contents": "Hello Radhakrishnan,\n\nluned�, 9 novembre 98, you wrote:\n\n\nRCV> how can i limit the number of rows obtained from a select statement\nRCV> in postgreSQL to say, 10 rows while the select condition actually\nRCV> matches more than that. in oracle we can use the ROW_NUM variable\nRCV> for this purpose but now i met such an issue with postgreSQL\n\nOn v6.4 you can specify a limit for queries as:\n\nset QUERY_LIMIT TO '10';\nTo have only the first 10 rows from a select;\n\n-Jose'-\n\n\n", "msg_date": "Mon, 9 Nov 1998 14:33:06 +0100", "msg_from": "Sferacarta Software <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] limiting the rows selected in postgresql" }, { "msg_contents": "On Sat, 7 Nov 1998, Sergio wrote:\n\n> Date: Sat, 7 Nov 1998 13:39:27 -0300\n> From: Sergio <[email protected]>\n> To: [email protected]\n> Subject: Re: [INTERFACES] limiting the rows selected in postgresql\n> \n> Sferacarta Software <[email protected]> el dО©╫a Mon, 9 Nov 1998 14:33:06 \n> +0100, escribiО©╫:\n> \n> >Hello Radhakrishnan,\n> >\n> >lunedО©╫, 9 novembre 98, you wrote:\n> >\n> >\n> >RCV> how can i limit the number of rows obtained from a select statement\n> >RCV> in postgreSQL to say, 10 rows while the select condition actually\n> >RCV> matches more than that. 
in oracle we can use the ROW_NUM variable\n> >RCV> for this purpose but now i met such an issue with postgreSQL\n> >\n> >On v6.4 you can specify a limit for queries as:\n> >\n> >set QUERY_LIMIT TO '10';\n> >To have only the first 10 rows from a select;\n> \n> this will limit _all_ the querys, right ?\n> wich is not very flexible.\n> \n> I can't do something like\n> \n> select * from news order by news_date limit 10\n> \n> ?\n\nSergio,\n\nbe patient. Jan Wieck is working on patch for support LIMIT in select query.\nI'm using his patch (trial 2) with 6.4 and it works great for me.\nWith this patch you can limit output using LIMIT offset,count statement.\nHope, this patch will come to 6.4.1.\n\n\tRegards,\n\n\t\tOleg\n\nPS. Browse hackers mailing list for last month to read discussion on\n this subject.\n> \n> Sergio\n> \n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n", "msg_date": "Mon, 9 Nov 1998 23:27:49 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] limiting the rows selected in postgresql" }, { "msg_contents": "\nI did this with:-\n\ndeclare tmp cursor for select * from table;fetch forward 10 in tmp;\n\nBut if someone else has a better way, please let me know....\n\nOn Sat, 7 Nov 1998, Sergio wrote:\n\n> Sferacarta Software <[email protected]> el d���a Mon, 9 Nov 1998 14:33:06 \n> +0100, escribi���:\n> \n> >Hello Radhakrishnan,\n> >\n> >luned���, 9 novembre 98, you wrote:\n> >\n> >\n> >RCV> how can i limit the number of rows obtained from a select statement\n> >RCV> in postgreSQL to say, 10 rows while the select condition actually\n> >RCV> matches more than that. in oracle we can use the ROW_NUM variable\n> >RCV> for this purpose but now i met such an issue with postgreSQL\n> >\n> >On v6.4 you can specify a limit for queries as:\n> >\n> >set QUERY_LIMIT TO '10';\n> >To have only the first 10 rows from a select;\n> \n> this will limit _all_ the querys, right ?\n> wich is not very flexible.\n> \n> I can't do something like\n> \n> select * from news order by news_date limit 10\n> \n> ?\n> \n> Sergio\n> \n> \n\nJames ([email protected])\nVortex Internet\nMy Windows unders~1 long filena~1, and yours?\n\n", "msg_date": "Mon, 9 Nov 1998 22:06:11 +0000 (GMT)", "msg_from": "A James Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] limiting the rows selected in postgresql" }, { "msg_contents": "pg_dump has improved a lot since 6.3.2, but here is one bug that relates\nto inheritance, where a parent table has constraints. 
\n\nThese are the original table definitions:\n\ncreate table individual\n(\n gender char(1) check (gender = 'M' or gender = 'F' or \ngender is null),\n born datetime check ((born >= '1 Jan 1880' and born \n<= 'today') or born is null),\n surname text,\n forenames text,\n title text,\n old_surname text,\n mobile text,\n ni_no text,\n\n constraint is_named check (not (surname isnull and forenames isnull))\n)\n inherits (person)\n;\n\ncreate table outworker\n(\n started datetime not null,\n finish datetime\n\n)\n inherits (individual)\n;\n\nThis is the output from trying to reload the pg_dump output:\n\nCREATE TABLE \"individual\" (\"gender\" char(1), \"born\" \"datetime\", \"surname\" \n\"text\", \"forenames\" \"text\", \"title\" \"text\", \"old_surname\" \"text\", \"mobile\" \n\"text\", \"ni_no\" \"text\", CONSTRAINT is_named CHECK (NOT ( surname IS NULL AND \nforenames IS NULL )), CONSTRAINT individual_born CHECK (( born >= '1 Jan \n1880' AND born <= 'today' ) OR born IS NULL), CONSTRAINT individual_gender \nCHECK (gender = 'M' OR gender = 'F' OR gender IS NULL)) inherits ( \"person\");\nCREATE\n\nCREATE TABLE \"outworker\" (\"started\" \"datetime\" NOT NULL, \"finish\" \"datetime\", \nCONSTRAINT individual_gender CHECK (gender = 'M' OR gender = 'F' OR gender IS \nNULL), CONSTRAINT individual_born CHECK (( born >= '1 Jan 1880' AND born <= \n'today' ) OR born IS NULL), CONSTRAINT is_named CHECK (NOT ( surname IS NULL \nAND forenames IS NULL ))) inherits ( \"individual\");\nERROR: DefineRelation: name (individual_gender) of CHECK constraint duplicated\n\n\nThe problem is that pg_dump is unnecessarily restating the constraints for\nthe parent table in its descendants.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"Therefore being justified by faith, we have peace \n with God through our Lord Jesus Christ.\" Romans 5:1\n\n\n", "msg_date": "Mon, 16 Nov 1998 11:50:34 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "pg-dump bug (at 6.4)" }, { "msg_contents": "I have investigated further the bug in pg_dump relating to inherited\ncheck constraints. This arises in src/bin/pg_dump/pg_dump.c in getTables(), where the query recovers all the constraints for a table, whether or not\nthey are inherited:\n \n 1477 sprintf(query, \"SELECT rcname, rcsrc from pg_relcheck \" \n 1478 \"where rcrelid = '%s'::oid \", \n 1479 tblinfo[i].oid); \n\nIn the following example, a constraint is inherited from the\ntable `individual':\n\n bray=> select oid, relname from pg_class\n where oid in\n (select rcrelid from pg_relcheck\n where rcname = 'is_named')\n\t order by oid desc;\n oid|relname \n -----+----------\n 67552|staff \n 67436|outworker \n 67111|individual\n (3 rows)\n\n\n bray=> select rcrelid, rcname, rcsrc from pg_relcheck\n where rcname = 'is_named'\n order by rcrelid desc;\n rcrelid|rcname |rcsrc \n -------+--------+---------------------------------------------\n 67552|is_named|NOT ( surname IS NULL AND forenames IS NULL )\n 67436|is_named|NOT ( surname IS NULL AND forenames IS NULL )\n 67111|is_named|NOT ( surname IS NULL AND forenames IS NULL )\n (3 rows)\n\n\npg_dump writes all three constraints into its output, which causes the\ntable creation to fail on the inherited tables when the database is\nrestored.\n\nWe actually need to select a check constraint only if, for each constraint,\ntblinfo[i].oid = min(rcrelid). 
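In outline, the query wanted would look something like the following sketch (shown only to make the idea concrete; it does not run as-is, for the reason given next):\n\n  SELECT rcname, rcsrc\n    FROM pg_relcheck\n   WHERE rcrelid = '115404'::oid       -- oid of the table being dumped\n     AND rcrelid IN (SELECT min(rcrelid)\n                       FROM pg_relcheck\n                      GROUP BY rcname);\n\n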
However, I cannot work out how\nto write the query (not least because there is no min()\nfunction for oids).\n\nCan anyone take this further, please?\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"For by grace are ye saved through faith; and that not\n of yourselves. It is the gift of God; not of works, \n lest any man should boast.\" Ephesians 2:8,9 \n\n\n", "msg_date": "Wed, 18 Nov 1998 01:38:05 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] pg-dump bug (at 6.4) " }, { "msg_contents": "Still trying to fix the bug with inherited check constraints...\n\nI have tried to create a min(oid) aggregate, but when I use it, I get\nthe message `ERROR: fmgr_info: function 108994: cache lookup failed'.\n\nWhat is the problem, please?\n\nI created it thus:\n\ncreate function oid4smaller (oid, oid) returns oid as\n '/home/olly/cprogs/oidcompare.so' language 'c';\n\ncreate aggregate min (basetype = oid, sfunc1 = oid4smaller,\n stype1 = oid, stype2 = oid);\n\n\nThe C file is compiled and linked thus (for Linux x86):\n\n $ gcc -o oidcompare.o -c -I/usr/include/postgresql oidcompare.c\n $ gcc -shared -o oidcompare.so oidcompare.o\n\nand it says:\n#include <postgresql/postgres.h>\n\nOid oid4smaller(Oid o1, Oid o2) {\n return (o1 < o2 ? o1 : o2);\n}\n\nOid oid4larger(Oid o1, Oid o2) {\n return (o1 > o2 ? o1 : o2);\n}\n\n\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"If my people, which are called by my name, shall \n humble themselves, and pray, and seek my face, and \n turn from their wicked ways; then will I hear from \n heaven, and will forgive their sin, and will heal \n their land.\" II Chronicles 7:14 \n\n\n", "msg_date": "Fri, 20 Nov 1998 23:30:29 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "pg_dump bug - problems along the way" }, { "msg_contents": "Hello Oliver,\n\nsabato, 21 novembre 98, you wrote:\n\nOE> Still trying to fix the bug with inherited check constraints...\n\nOE> I have tried to create a min(oid) aggregate, but when I use it, I get\nOE> the message `ERROR: fmgr_info: function 108994: cache lookup failed'.\n\nOE> What is the problem, please?\n\nOE> I created it thus:\n\nOE> create function oid4smaller (oid, oid) returns oid as\nOE> '/home/olly/cprogs/oidcompare.so' language 'c';\n\nOE> create aggregate min (basetype = oid, sfunc1 = oid4smaller,\nOE> stype1 = oid, stype2 = oid);\n\nTry this...it works...\n\ncreate function oid4smaller (oid, oid) returns oid as\n'\nbegin\n if $1 > $2 then\n return $2;\n else\n return $1;\n end if;\n end;\n' language 'plpgsql';\n\ncreate aggregate m (basetype = oid, sfunc1 = oid4smaller,\n stype1 = oid, stype2 = oid);\n\nprova=> select oid from a;\n oid\n------\n376064\n376065\n380064\n380065\n380066\n380067\n(6 rows)\n\nprova=> select min(oid) from a;\n min\n------\n376064\n(1 row)\n\n\n-Jose'-\n\n\n", "msg_date": "Mon, 23 Nov 1998 15:35:34 +0100", "msg_from": "Sferacarta Software <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_dump bug - problems along the way" }, { "msg_contents": "I think this is a bug in 6.4:\n\nbray=> select rcname, rcsrc from pg_relcheck where rcrelid = '115404'::oid and \nrcrelid in (select min(rcrelid) from pg_relcheck group 
by rcname);\nERROR: parser: Subselect has too many or too few fields.\n\nThe subselect only produces one column; so I think that the error message \nis wrong.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"The LORD is nigh unto all them that call upon him, to\n all that call upon him in truth.\" \n Psalms 145:18\n\n\n", "msg_date": "Mon, 23 Nov 1998 17:17:30 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Parser bug?" }, { "msg_contents": "Sferacarta Software wrote:\n >Try this...it works...\n >\n >create function oid4smaller (oid, oid) returns oid as\n >'\n >begin\n > if $1 > $2 then\n > return $2;\n > else\n > return $1;\n > end if;\n > end;\n >' language 'plpgsql';\n\nI'm afraid it doesn't work for me; clearly the problem is elsewhere:\n \nbray=> select min(oid) from europe;\nERROR: fmgr_info: function 108994: cache lookup failed\n\n**Idea** - try in another database -- it works, so it must be a database\ncorruption of some kind.\n\nThanks for your help.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"The LORD is nigh unto all them that call upon him, to\n all that call upon him in truth.\" \n Psalms 145:18\n\n\n", "msg_date": "Mon, 23 Nov 1998 17:17:31 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] pg_dump bug - problems along the way " }, { "msg_contents": ">\n> Sferacarta Software wrote:\n> >Try this...it works...\n> >\n> >create function oid4smaller (oid, oid) returns oid as\n> >'\n> >begin\n> > if $1 > $2 then\n> > return $2;\n> > else\n> > return $1;\n> > end if;\n> > end;\n> >' language 'plpgsql';\n>\n> I'm afraid it doesn't work for me; clearly the problem is elsewhere:\n>\n> bray=> select min(oid) from europe;\n> ERROR: fmgr_info: function 108994: cache lookup failed\n>\n> **Idea** - try in another database -- it works, so it must be a database\n> corruption of some kind.\n\n Looks like you dropped and recreated the function used in the\n min(oid) aggregate without dropping and recreating the\n aggregate itself.\n\n Note that the functions used in an aggregate are referenced\n by OID, not by name. In pg_aggregate the pg_proc tuple with\n the old OID is still referenced and cannot be found (cache\n lookup failed). Drop the agg and recreate it.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Mon, 23 Nov 1998 19:12:53 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_dump bug - problems along the way" }, { "msg_contents": "Jan Wieck wrote:\n >> bray=> select min(oid) from europe;\n >> ERROR: fmgr_info: function 108994: cache lookup failed\n >>\n >> **Idea** - try in another database -- it works, so it must be a database\n >> corruption of some kind.\n >\n > Looks like you dropped and recreated the function used in the\n > min(oid) aggregate without dropping and recreating the\n > aggregate itself.\n >\n > Note that the functions used in an aggregate are referenced\n > by OID, not by name. 
In pg_aggregate the pg_proc tuple with\n > the old OID is still referenced and cannot be found (cache\n > lookup failed). Drop the agg and recreate it.\n\nYes; that is what I did, which explains why it works in another\ndatabase where I hadn't been playing. Thank you.\n\nMay I suggest that the error message be changed from\n\n `ERROR: fmgr_info: function 108994: cache lookup failed'\n\nto:\n\n `ERROR: fmgr_info: function 108994 not found in pg_proc'\n\nwhich would be a better explanation of what was wrong, and would have\nlet me diagnose the problem for myself (I hope!). The existing message\nsuggests that the function itself is present but somehow faulty.\n\n(src/backend/utils/fmgr/fmgr.c in fmgr_info())\n\nIn general, is it not better to say as explicitly as possible what is\nwrong, rather than to use terms like \"cache lookup\" which are only\nmeaningful to someone who knows the program internals? There are several\nother error messages in that same file which could do with similar\nclarification.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"The LORD is nigh unto all them that call upon him, to\n all that call upon him in truth.\" \n Psalms 145:18\n\n\n", "msg_date": "Mon, 23 Nov 1998 19:50:06 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] pg_dump bug - problems along the way " }, { "msg_contents": "> I think this is a bug in 6.4:\n> \n> bray=> select rcname, rcsrc from pg_relcheck where rcrelid =\n> '115404'::oid and rcrelid in (select min(rcrelid) from pg_relcheck\n> group by rcname); ERROR: parser: Subselect has too many or too\n> few fields.\n> \n> The subselect only produces one column; so I think that the\n> error message is wrong.\n\nWhat is the GROUP BY doing?\n\n--\n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 23 Nov 1998 15:10:15 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Parser bug?" }, { "msg_contents": "Bruce Momjian wrote:\n >> I think this is a bug in 6.4:\n >> \n >> bray=> select rcname, rcsrc from pg_relcheck where rcrelid =\n >> '115404'::oid and rcrelid in (select min(rcrelid) from pg_relcheck\n >> group by rcname); ERROR: parser: Subselect has too many or too\n >> few fields.\n >> \n >> The subselect only produces one column; so I think that the\n >> error message is wrong.\n >\n >What is the GROUP BY doing?\n\nThis relates to the bug in pg_dump which messes up inherited constraints.\n\nThe object is to find which is the table in an inheritance hierarchy for\nwhich the check constraint is first defined, which must inevitably be the\none with the lowest numbered oid. The GROUP BY operates with the aggregate\nto return the low-numbered oid for each separate rcname.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"The LORD is nigh unto all them that call upon him, to\n all that call upon him in truth.\" \n Psalms 145:18\n\n\n", "msg_date": "Mon, 23 Nov 1998 23:18:26 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Parser bug? 
" }, { "msg_contents": "> Bruce Momjian wrote:\n> >> I think this is a bug in 6.4:\n> >> \n> >> bray=> select rcname, rcsrc from pg_relcheck where rcrelid =\n> >> '115404'::oid and rcrelid in (select min(rcrelid) from pg_relcheck\n> >> group by rcname); ERROR: parser: Subselect has too many or too\n> >> few fields.\n> >> \n> >> The subselect only produces one column; so I think that the\n> >> error message is wrong.\n> >\n> >What is the GROUP BY doing?\n> \n> This relates to the bug in pg_dump which messes up inherited constraints.\n> \n> The object is to find which is the table in an inheritance hierarchy for\n> which the check constraint is first defined, which must inevitably be the\n> one with the lowest numbered oid. The GROUP BY operates with the aggregate\n> to return the low-numbered oid for each separate rcname.\n\nMaybe I should be clearer. You are grouping by a column that is not in\nthe target list. If you try the subquery on its own, it should fail\nwith a better error message.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 23 Nov 1998 22:42:57 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Parser bug?" }, { "msg_contents": "Bruce Momjian wrote:\n...\n >> >> bray=> select rcname, rcsrc from pg_relcheck where rcrelid =\n >> >> '115404'::oid and rcrelid in (select min(rcrelid) from pg_relcheck\n >> >> group by rcname); ERROR: parser: Subselect has too many or too\n >> >> few fields.\n >> >> \n >> >> The subselect only produces one column; so I think that the\n >> >> error message is wrong.\n >> >\n >> >What is the GROUP BY doing?\n \n...\n\n >Maybe I should be clearer. You are grouping by a column that is not in\n >the target list. If you try the subquery on its own, it should fail\n >with a better error message.\n \nIt doesn't fail; it produces the results I want.\n\n bray=> select min(rcrelid) from pg_relcheck group by rcname;\n min\n ------\n 115940\n 115026\n 115026\n 115026\n ... etc ...\n\nAny way, why should it be an error to group by a column that is not in the\nresults list, if the results list comprises aggregates only?\n\n(Mind you, I think I have not yet got a reliable way of finding the\nultimate ancestor of an inherited constraint. Is it actually possible to\ndo this with queries or do we have to add a boolean flag to pg_relcheck\nto be set where the constraint is/is not inherited?)\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"Jesus saith unto him, I am the way, the truth, and the\n life; no man cometh unto the Father, but by me.\" \n John 14:6 \n\n\n", "msg_date": "Tue, 24 Nov 1998 08:11:50 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Parser bug? 
" }, { "msg_contents": "> Bruce Momjian wrote:\n> ...\n> >> >> bray=> select rcname, rcsrc from pg_relcheck where rcrelid =\n> >> >> '115404'::oid and rcrelid in (select min(rcrelid) from pg_relcheck\n> >> >> group by rcname); ERROR: parser: Subselect has too many or too\n> >> >> few fields.\n> >> >> \n> >> >> The subselect only produces one column; so I think that the\n> >> >> error message is wrong.\n> >> >\n> >> >What is the GROUP BY doing?\n> \n> ...\n> \n> >Maybe I should be clearer. You are grouping by a column that is not in\n> >the target list. If you try the subquery on its own, it should fail\n> >with a better error message.\n> \n> It doesn't fail; it produces the results I want.\n> \n> bray=> select min(rcrelid) from pg_relcheck group by rcname;\n> min\n> ------\n> 115940\n> 115026\n> 115026\n> 115026\n> ... etc ...\n> \n> Any way, why should it be an error to group by a column that is not in the\n> results list, if the results list comprises aggregates only?\n> \n> (Mind you, I think I have not yet got a reliable way of finding the\n> ultimate ancestor of an inherited constraint. Is it actually possible to\n> do this with queries or do we have to add a boolean flag to pg_relcheck\n> to be set where the constraint is/is not inherited?)\n\nGee, I didn't know we could do that. Seems like doing that in a\nsubquery messes things up. My guess is that the GROUP BY internally\ncarries the GROUP BY column, and that is not getting stripped when used\nin a subquery, so it thinks the subquery returns two columns. Perhaps\nthe junknode code needs to be added somewhere for subqueries?\n\nCan anyone else comment on this possibility?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 24 Nov 1998 10:22:39 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Parser bug?" }, { "msg_contents": "hi.\n\nI hope this is not a rehash of anything; a quick look at the mailing list\narchives turned up similar, but not identical stories.\n\nWhen I use the \"-z\" option (dump permissions)when dumping a database I have, I\nget a segfault and no output. The other options are irrelevant (i.e., I can\nspecify any other option or options I like, it still happens). To the best of\nmy knowledge I have nothing tricky or complex in my database, just standard\ntypes like varchar, bool and int, the refint stuff and a trigger or two.\n\nIt does NOT segfault with template1, but nor do I get any output, maybe this\nis normal, I'm a total novice at this :-)\n\nThis is not a critical issue for me, I can always set the permissions on my\ntables manually, but a) it would be nice not to have to and b) I thought it\nmight interest someone... 
Seems to me that whatever the reasons for it,\npg_dump should not lose it to the extent of segfaulting :-)\n\nRegards, K.\n\n---\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nKarl Auer ([email protected]) Geschaeft/work +41-1-6327531\nKommunikation, ETHZ RZ Privat/home +41-1-4517941\nClausiusstrasse 59 Fax +41-1-6321225\nCH-8092 ZUERICH Switzerland\n", "msg_date": "Tue, 24 Nov 1998 16:49:43 +0100 (MET)", "msg_from": "Karl Auer <[email protected]>", "msg_from_op": false, "msg_subject": "pg_dump - segfault with -z option" }, { "msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> hi.\n> \n> I hope this is not a rehash of anything; a quick look at the mailing list\n> archives turned up similar, but not identical stories.\n> \n> When I use the \"-z\" option (dump permissions)when dumping a database I have, I\n> get a segfault and no output. The other options are irrelevant (i.e., I can\n> specify any other option or options I like, it still happens). To the best of\n> my knowledge I have nothing tricky or complex in my database, just standard\n> types like varchar, bool and int, the refint stuff and a trigger or two.\n> \n> It does NOT segfault with template1, but nor do I get any output, maybe this\n> is normal, I'm a total novice at this :-)\n> \n> This is not a critical issue for me, I can always set the permissions on my\n> tables manually, but a) it would be nice not to have to and b) I thought it\n> might interest someone... Seems to me that whatever the reasons for it,\n> pg_dump should not lose it to the extent of segfaulting :-)\n\nI believe this will be fixed in the first 6.4 minor release. Should we\nschedule 6.4.1 soon, folks.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 24 Nov 1998 11:08:42 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_dump - segfault with -z option" }, { "msg_contents": "\"Oliver Elphick\" <[email protected]> writes:\n> (Mind you, I think I have not yet got a reliable way of finding the\n> ultimate ancestor of an inherited constraint. 
Is it actually possible to\n> do this with queries or do we have to add a boolean flag to pg_relcheck\n> to be set where the constraint is/is not inherited?)\n\nIn fact, I was about to point out that the query you were describing\ncouldn't possibly give you a reliable answer, quite independent of\nwhether the backend is implementing it properly or not.\n\nConsider\n\nCREATE TABLE parent1 (i1 int4, CONSTRAINT c1 CHECK (i1 > 0));\n\nCREATE TABLE child1 (CONSTRAINT c2 CHECK (i1 > 4)) INHERITS (parent1);\n\nCREATE TABLE child2 (CONSTRAINT c2 CHECK (i1 > 4)) INHERITS (parent1);\n\nThis will give us a pg_relcheck like\n\n\trcrelid\t\trcname\trcbin\t\trcsrc\n\n\tparent1\t\tc1\tgobbledegook\ti1 > 0\n\tchild1\t\tc1\tgobbledegook\ti1 > 0\n\tchild2\t\tc1\tgobbledegook\ti1 > 0\n\tchild1\t\tc2\tgobbledegook\ti1 > 4\n\tchild2\t\tc2\tgobbledegook\ti1 > 4\n\n(where I've written table names in place of numeric OIDs for rcrelid).\nNow child2 did not inherit c2 from child1, but child1 has a lower OID\nthan child2, so your test would mistakenly omit c2 from child2's\ndefinition.\n\nIt seems to me that the correct way to do this is to compare each of a\ntable's constraints against its immediate parent's constraints, and omit\nfrom the child any constraints that have the same rcname AND the same\nrcsrc as a constraint of the parent. (You need not look at anything\nother than the immediate parent, because constraints inherited from\nmore distant ancestors will also be listed for the parent.)\n\nThere is a case that pg_relcheck does not allow you to distinguish,\nand that is whether or not the child definition was actually written\nwith a redundant constraint:\n\nCREATE TABLE parentx (i1 int4, CONSTRAINT c1 CHECK (i1 > 0));\n\nCREATE TABLE childx (CONSTRAINT c1 CHECK (i1 > 0)) INHERITS (parentx);\n\nUnless we modify pg_relcheck, pg_dump will have to dump this as simply\n\nCREATE TABLE parentx (i1 int4, CONSTRAINT c1 CHECK (i1 > 0));\n\nCREATE TABLE childx () INHERITS (parentx);\n\nsince it cannot tell that childx's constraint wasn't simply inherited.\nHowever, it's not clear to me that suppression of redundant constraints\nis a bad thing ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 24 Nov 1998 11:17:58 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Parser bug? " }, { "msg_contents": "Karl Auer <[email protected]> writes:\n> When I use the \"-z\" option (dump permissions)when dumping a database I\n> have, I get a segfault and no output.\n\nAre you on Postgres 6.4? I recall fixing several nasty problems with\npermissions in pg_dump between 6.3.2 and 6.4.\n\nIf you are on 6.4, could you use gdb or something to get a backtrace\nshowing exactly where pg_dump dies?\n\nFWIW, I currently use -z routinely, but my database is probably even\nsimpler than yours ... no triggers, for example. My guess is pg_dump\ndoesn't work for permissions attached to a trigger, or something along\nthat line.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 24 Nov 1998 11:25:46 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_dump - segfault with -z option " }, { "msg_contents": "Thanks Tom.\n\nYes, I am on 6.4. As to gdb, I really wouldn't know how to get the info you\nrequest. The segfault doesn't result in a core file either.\n\nThere is nothing sacred about my project (supporting RADIUS authentication),\nI'll happily send you my database creation scripts if you want to try making\nthis happen yourself. 
This is 6.4 as stated, running on a SuSE 5.3 (Linux\n2.0.34) distribution. No problems running the database and all the build\ntests work fine.\n\nI have no permissions attached to triggers as far as I know; and the\npermissions aren't complicated either. The segfault occurs when running the\ndump as user postgres, which should have godlike permissions anyway...\n\u001b\nRegards, K.\n\nAm 24-Nov-98 schrieb Tom Lane:\n> Karl Auer <[email protected]> writes:\n>> When I use the \"-z\" option (dump permissions)when dumping a database I\n>> have, I get a segfault and no output.\n> \n> Are you on Postgres 6.4? I recall fixing several nasty problems with\n> permissions in pg_dump between 6.3.2 and 6.4.\n> \n> If you are on 6.4, could you use gdb or something to get a backtrace\n> showing exactly where pg_dump dies?\n\n---\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nKarl Auer ([email protected]) Geschaeft/work +41-1-6327531\nKommunikation, ETHZ RZ Privat/home +41-1-4517941\nClausiusstrasse 59 Fax +41-1-6321225\nCH-8092 ZUERICH Switzerland\n", "msg_date": "Tue, 24 Nov 1998 17:38:40 +0100 (MET)", "msg_from": "Karl Auer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_dump - segfault with -z option" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Should we schedule 6.4.1 soon, folks.\n\nI still have about 10 things on my to-do-for-6.4.1 list, and no hope\nof doing any of 'em before this weekend. If \"soon\" means \"a couple\nof weeks\", then fine ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 24 Nov 1998 11:54:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "6.4.1 schedule (was segfault with -z option)" }, { "msg_contents": "On Tue, 24 Nov 1998, Bruce Momjian wrote:\n\n> Date: Tue, 24 Nov 1998 11:08:42 -0500 (EST)\n> From: Bruce Momjian <[email protected]>\n> To: [email protected]\n> Cc: [email protected]\n> Subject: Re: [HACKERS] pg_dump - segfault with -z option\n> \n> [Charset iso-8859-1 unsupported, filtering to ASCII...]\n> > hi.\n> \n> I believe this will be fixed in the first 6.4 minor release. Should we\n> schedule 6.4.1 soon, folks.\n> \n\nAlso, how about to include Jan's patch for LIMIT to 6.4.1 ?\n\n\tRegards,\n\n\t\tOleg\n\n\n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Tue, 24 Nov 1998 20:05:45 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_dump - segfault with -z option" }, { "msg_contents": "I have very simple database without triggers and also see\n20:10[mira]:~/tmp>pg_dump -z guta > qq\nSegmentation fault\n\nDatabase = guta\n +---------------------+----------------------------------------+\n | Relation | Grant/Revoke Permissions |\n +---------------------+----------------------------------------+\n | groups | {\"=\",\"megera=arwR\",\"er=r\",\"httpd=arw\"} |\n | groups_pkey | |\n | status | {\"=\",\"megera=arwR\",\"er=r\",\"httpd=arw\"} |\n | status_pkey | |\n | stidx_status_id | |\n | ugmidx_group_id | |\n | ugmidx_user_id | |\n | user_group_map | {\"=\",\"megera=arwR\",\"er=r\",\"httpd=arw\"} |\n | user_group_map_pkey | |\n | users | {\"=\",\"megera=arwR\",\"er=r\",\"httpd=arw\"} |\n | users_pkey | |\n |\n\n\tRegards,\n\n\t\tOleg\n\nOn Tue, 24 Nov 1998, Tom Lane wrote:\n\n> Date: Tue, 24 Nov 1998 11:25:46 -0500\n> From: Tom Lane <[email protected]>\n> To: [email protected]\n> Cc: [email protected]\n> Subject: Re: [HACKERS] pg_dump - segfault with -z option \n> \n> Karl Auer <[email protected]> writes:\n> > When I use the \"-z\" option (dump permissions)when dumping a database I\n> > have, I get a segfault and no output.\n> \n> Are you on Postgres 6.4? I recall fixing several nasty problems with\n> permissions in pg_dump between 6.3.2 and 6.4.\n> \n> If you are on 6.4, could you use gdb or something to get a backtrace\n> showing exactly where pg_dump dies?\n> \n> FWIW, I currently use -z routinely, but my database is probably even\n> simpler than yours ... no triggers, for example. 
My guess is pg_dump\n> doesn't work for permissions attached to a trigger, or something along\n> that line.\n> \n> \t\t\tregards, tom lane\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Tue, 24 Nov 1998 20:11:09 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_dump - segfault with -z option " }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> pg_dump should not lose it to the extent of segfaulting :-)\n\n> I believe this will be fixed in the first 6.4 minor release.\n\nIf you meant that it has already been fixed, I don't think so.\nI see only one post-6.4 patch so far against pg_dump, and it fixes\na wrong-output kind of problem, not a core-dump kind of problem.\n\nBTW, is Oliver intending to try to fix pg_dump's inherited-constraints\nproblems in time for 6.4.1, or is that a longer-range project?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 24 Nov 1998 12:21:48 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_dump - segfault with -z option " }, { "msg_contents": "Tom Lane wrote:\n >\"Oliver Elphick\" <[email protected]> writes:\n >> (Mind you, I think I have not yet got a reliable way of finding the\n >> ultimate ancestor of an inherited constraint. Is it actually possible to\n >> do this with queries or do we have to add a boolean flag to pg_relcheck\n >> to be set where the constraint is/is not inherited?)\n >\n >In fact, I was about to point out that the query you were describing\n >couldn't possibly give you a reliable answer, quite independent of\n >whether the backend is implementing it properly or not.\n\nYes, I had been using a concrete example; the results were not\nsufficiently general.\n \n[... skip example ...]\n\n >It seems to me that the correct way to do this is to compare each of a\n >table's constraints against its immediate parent's constraints, and omit\n >from the child any constraints that have the same rcname AND the same\n >rcsrc as a constraint of the parent. (You need not look at anything\n >other than the immediate parent, because constraints inherited from\n >more distant ancestors will also be listed for the parent.)\n\nThat looks good. I'll see if I can do it that way.\n\n >There is a case that pg_relcheck does not allow you to distinguish,\n >and that is whether or not the child definition was actually written\n >with a redundant constraint:\n \n[...skip example...]\n\n >since it cannot tell that childx's constraint wasn't simply inherited.\n >However, it's not clear to me that suppression of redundant constraints\n >is a bad thing ;-)\n \nIt seems to be quite reasonable to drop them.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"Jesus saith unto him, I am the way, the truth, and the\n life; no man cometh unto the Father, but by me.\" \n John 14:6 \n\n\n", "msg_date": "Tue, 24 Nov 1998 17:29:22 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Parser bug? 
" }, { "msg_contents": "On Tue, 24 Nov 1998, Tom Lane wrote:\n\n> Bruce Momjian <[email protected]> writes:\n> > Should we schedule 6.4.1 soon, folks.\n> \n> I still have about 10 things on my to-do-for-6.4.1 list, and no hope\n> of doing any of 'em before this weekend. If \"soon\" means \"a couple\n> of weeks\", then fine ...\n\n\tWhat's on that scheduale? Anything we can help with?\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 24 Nov 1998 13:29:36 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.4.1 schedule (was segfault with -z option)" }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n>> I still have about 10 things on my to-do-for-6.4.1 list, and no hope\n>> of doing any of 'em before this weekend. If \"soon\" means \"a couple\n>> of weeks\", then fine ...\n\n> \tWhat's on that scheduale? Anything we can help with?\n\nHere's my unvarnished to-do list. Most of it is configuration checks\nthat I figured I was in a good position to test (particularly the HPUX\nitems), but if you feel that I'm holding things up then feel free to\ntake a couple...\n\nActually, what I was kind of hoping *you* would work on is the\nprocess-title-setting issue, since you had once indicated you'd\nhave a go at that for 6.4.1. I still think we need to snaffle\nthe process-title code from sendmail. What we have now doesn't\nmanage to change the ps listing on HPUX, and a couple of other\nOSes too if I recall the discussion properly.\n\n\t\t\tregards, tom lane\n\n\nAuer reports segfault with pg_dump -z ... wtf?\n\nHPUX FAQ! Ought to document yacc issues for HPUX 9. HP's lex doesn't\nseem to work either, under both 9.* & 10.*. If you don't install patch\nPHSS_4630 on HPUX 9, rint() is broken which leads to weird bugs in\ndatetime.\n\nUpdate FAQ_CVS, also web page if not same file --- 1.9 is obsolete.\n\npostmaster.c's fflush(NULL) doesn't work on SUNOS ... seems safe to\njust fflush stdout and stderr instead. Worth making a configure test?\n11/24 msgs\n\nfloat8 regress test, pow, exp problems --- 11/18, 11/24 msgs\n\nconfigure should unset USE_TK if X stuff not found. Cf. situation on\ncsp linux box, which has Tcl but no X.\n\nconfigure.in ought not have a hardwired set of assumptions about what\nthe template file can define ... why not just read what's there, for\npete's sake? Proposed to list 11/22/98.\n\nmake install in doc failed on hpc200. Likely cause: no \"zcat\" in path.\nNeed to use autoconf to look for gzcat or zcat.\n\nbogus selection of template when .similar entry includes partial rev\nnumber --- see ts' msg of 11/19 8:44. Note that second try tries\nto match without regard to version number, which is good ... it just\ngoes too far when there are multiple possibilities in .similar.\nNote the wrong fix here could break HPUX --- might want to delete\nrev number from hp line in .similar. (Fixed 11/22, I think.)\n\nhpux 10.01 does not have POSIX signals? Perhaps we ought to use\nautoconf to detect whether system has SA_RESTART... 
trick here is\nthat Makefile.hpux needs to know it too, to adjust library link order.\nSo would need to set info in both Makefile.global and config.h.\nCould get rid of -DHPUX_10, which'd be a good idea anyway (in fact,\nit might be possible to stop calling uname at all in Makefile.hpux,\nwhich'd be a performance win).\nNOTE: make sure template file or Makefile.port can override autoconf decision\nin either direction (or at least the no-posix direction)... Makefile.port\ncan't do it because of config.h dependency. See pgsql msgs of 10/30.\n\nWhy does configure want libcurses to be included? Causes problems\nin Makefile.hpux for HP10.. Could suppress libBSD instead of adding\nan extra -lc pass if we got rid of curses. (libBSD only has signal\nstuff.) Looks like libreadline requires either libtermcap or libcurses,\nwhichever is available on a given platform. Safest thing might be to\njust put -lBSD in front of -lc ... Cf msgs 10/31 to 11/1.\n\n%.sl rule in Makefile.hpux is probably not needed anymore?\n", "msg_date": "Tue, 24 Nov 1998 12:41:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.4.1 schedule (was segfault with -z option) " }, { "msg_contents": "Oleg Bartunov wrote:\n\n> On Tue, 24 Nov 1998, Bruce Momjian wrote:\n> > I believe this will be fixed in the first 6.4 minor release. Should we\n> > schedule 6.4.1 soon, folks.\n> >\n>\n> Also, how about to include Jan's patch for LIMIT to 6.4.1 ?\n\n No!\n\n Should be put out as a separate patch. We forgot about the\n initial v6.4-feature-patch I created over that branch/tag\n discussion. I still have it here - who can put it onto the\n ftp-server?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Tue, 24 Nov 1998 18:55:47 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_dump - segfault with -z option" }, { "msg_contents": "On Tue, 24 Nov 1998, Jan Wieck wrote:\n\n> Oleg Bartunov wrote:\n> \n> > On Tue, 24 Nov 1998, Bruce Momjian wrote:\n> > > I believe this will be fixed in the first 6.4 minor release. Should we\n> > > schedule 6.4.1 soon, folks.\n> > >\n> >\n> > Also, how about to include Jan's patch for LIMIT to 6.4.1 ?\n> \n> No!\n> \n> Should be put out as a separate patch. We forgot about the\n> initial v6.4-feature-patch I created over that branch/tag\n> discussion. I still have it here - who can put it onto the\n> ftp-server?\n\n\tYou :) ~pgsql/ftp/pub/patches is writable by those in group\npgsql, which is all those wiht comitter's access...gets wiped out after\neach release is made though, as its only pertinent to the current\nrelease...\n\nMarc G. 
Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 24 Nov 1998 14:24:58 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_dump - segfault with -z option" }, { "msg_contents": "Tom Lane wrote:\n >BTW, is Oliver intending to try to fix pg_dump's inherited-constraints\n >problems in time for 6.4.1, or is that a longer-range project?\n \nI hope to fix it quickly, since I want to back port a good pg_dump into\nDebian's 6.3.2 before I release a 6.4 for Debian.\n\nIf anyone else can fix it, don't wait for me to find out how!\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"Jesus saith unto him, I am the way, the truth, and the\n life; no man cometh unto the Father, but by me.\" \n John 14:6 \n\n\n", "msg_date": "Tue, 24 Nov 1998 18:33:01 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] pg_dump - segfault with -z option " }, { "msg_contents": "Tom Lane wrote:\n...\n >In fact, I was about to point out that the query you were describing\n >couldn't possibly give you a reliable answer, quite independent of\n >whether the backend is implementing it properly or not.\n\nI think this will do the job; can you please check it out:\n\nselect rcname,rcsrc from pg_relcheck, pg_inherits as i \n where rcrelid = '%s'::oid \n and (not exists -- for\n\t\t (select * from pg_inherits -- non-inherited\n where inhrel = pg_relcheck.rcrelid) -- tables\n \n or (not exists --\n\t\t (select * from pg_relcheck as c -- for\n where c.rcname = pg_relcheck.rcname -- inherited\n and c.rcrelid = i.inhparent) -- tables\n and rcrelid = i.inhrel)); --\n\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"Jesus saith unto him, I am the way, the truth, and the\n life; no man cometh unto the Father, but by me.\" \n John 14:6 \n\n\n", "msg_date": "Tue, 24 Nov 1998 19:22:02 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Parser bug? " }, { "msg_contents": "\"Oliver Elphick\" wrote:\n >I think this will do the job; can you please check it out:\n \nIt seems to work, at least for my database, so here is a patch:\n\n*** postgresql-6.4/src/bin/pg_dump/pg_dump.c\tTue Nov 24 23:01:27 1998\n--- postgresql-6.4.orig/src/bin/pg_dump/pg_dump.c\tMon Oct 26 01:05:07 1998\n***************\n*** 1459,1504 ****\n \t\ttblinfo[i].ncheck = atoi(PQgetvalue(res, i, i_relchecks));\n \t\ttblinfo[i].ntrig = atoi(PQgetvalue(res, i, i_reltriggers));\n \n! \t\t/* Exclude inherited CHECKs from CHECK constraints total */\n! \t\tif (tblinfo[i].ncheck > 0)\n! \t\t{\n! \t\t\tPGresult\t*res2;\n! \t\t\tint\t\tntups2;\n! \n! \t\t\tif (g_verbose)\n! \t\t\t\tfprintf(stderr, \"%s excluding inherited CHECK constraints \"\n! \t\t\t\t\t\t\"for relation: '%s' %s\\n\",\n! \t\t\t\t\t\tg_comment_start,\n! \t\t\t\t\t\ttblinfo[i].relname,\n! \t\t\t\t\t\tg_comment_end);\n! \n! \t\t\tsprintf(query, \"SELECT * from pg_relcheck, pg_inherits as i \"\n! \t\t\t\t\t\"where rcrelid = '%s'::oid \"\n! \t\t\t\t\t\" and exists \"\n! \t\t\t\t\t\" (select * from pg_relcheck as c \"\n! \t\t\t\t\t\" where c.rcname = pg_relcheck.rcname \"\n! 
\t\t\t\t\t\" and c.rcrelid = i.inhparent) \"\n! \t\t\t\t\t\" and rcrelid = i.inhrel\",\n! \t\t\t\t\ttblinfo[i].oid);\n! \t\t\tres2 = PQexec(g_conn, query);\n! \t\t\tif (!res2 ||\n! \t\t\t\tPQresultStatus(res2) != PGRES_TUPLES_OK)\n! \t\t\t{\n! \t\t\t\tfprintf(stderr, \"getTables(): SELECT (for inherited CHECK) failed\\n\");\n! \t\t\t\texit_nicely(g_conn);\n! \t\t\t}\n! \t\t\tntups2 = PQntuples(res2);\n! \t\t\ttblinfo[i].ncheck -= ntups2;\n! \t\t\tif (tblinfo[i].ncheck < 0)\n! \t\t\t{\n! \t\t\t\tfprintf(stderr, \"getTables(): found more inherited CHECKs than total for \"\n! \t\t\t\t\t\t\"relation %s\\n\",\n! \t\t\t\t\t\ttblinfo[i].relname);\n! \t\t\t\texit_nicely(g_conn);\n! \t\t\t}\n! \t\t}\n! \n! \t\t/* Get CHECK constraints originally defined for this table */\n \t\tif (tblinfo[i].ncheck > 0)\n \t\t{\n \t\t\tPGresult *res2;\n--- 1459,1465 ----\n \t\ttblinfo[i].ncheck = atoi(PQgetvalue(res, i, i_relchecks));\n \t\ttblinfo[i].ntrig = atoi(PQgetvalue(res, i, i_reltriggers));\n \n! \t\t/* Get CHECK constraints */\n \t\tif (tblinfo[i].ncheck > 0)\n \t\t{\n \t\t\tPGresult *res2;\n***************\n*** 1513,1531 ****\n \t\t\t\t\t\ttblinfo[i].relname,\n \t\t\t\t\t\tg_comment_end);\n \n! \t\t\tsprintf(query, \"SELECT DISTINCT rcname, rcsrc \"\n! \t\t\t\t\t\"from pg_relcheck, pg_inherits as i \"\n! \t\t\t\t\t\"where rcrelid = '%s'::oid \"\n! \t\t\t\t\t/* allow all checks from tables that do not inherit */\n! \t\t\t\t\t\" and (not exists \"\n! \t\t\t\t\t\" (select * from pg_inherits \"\n! \t\t\t\t\t\" where inhrel = pg_relcheck.rcrelid)\"\n! \t\t\t\t\t/* and allow checks that are not inherited from other tables */\n! \t\t\t\t\t\" or (not exists \"\n! \t\t\t\t\t\" (select * from pg_relcheck as c \"\n! \t\t\t\t\t\" where c.rcname = pg_relcheck.rcname \"\n! \t\t\t\t\t\" and c.rcrelid = i.inhparent) \"\n! \t\t\t\t\t\" and rcrelid = i.inhrel))\",\n \t\t\t\t\ttblinfo[i].oid);\n \t\t\tres2 = PQexec(g_conn, query);\n \t\t\tif (!res2 ||\n--- 1474,1481 ----\n \t\t\t\t\t\ttblinfo[i].relname,\n \t\t\t\t\t\tg_comment_end);\n \n! \t\t\tsprintf(query, \"SELECT rcname, rcsrc from pg_relcheck \"\n! \t\t\t\t\t\"where rcrelid = '%s'::oid \",\n \t\t\t\t\ttblinfo[i].oid);\n \t\t\tres2 = PQexec(g_conn, query);\n \t\t\tif (!res2 ||\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"Jesus saith unto him, I am the way, the truth, and the\n life; no man cometh unto the Father, but by me.\" \n John 14:6 \n\n", "msg_date": "Tue, 24 Nov 1998 23:16:23 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Patch for pg_dump (6.4) inheritance bug " }, { "msg_contents": "> > What's on that scheduale? Anything we can help with?\n\nI've made a few fixes, and have one more on my list for v6.4.x:\n\nUpdate contrib/linux startup scripts (someone had modified them to give\nboth sh and csh examples, but the labeling is scrambled).\n\nWon't get to it until next week, but it isn't critical to a release\neither...\n\n - Tom\n", "msg_date": "Wed, 25 Nov 1998 02:27:15 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.4.1 schedule (was segfault with -z option)" }, { "msg_contents": "> I've made a few fixes, and have one more on my list for v6.4.x:\n\nNever mind. I went ahead and committed the updates for this.\n\n - Tom\n", "msg_date": "Wed, 25 Nov 1998 02:58:25 +0000", "msg_from": "\"Thomas G. 
Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.4.1 schedule (was segfault with -z option)" }, { "msg_contents": "I tried posting a bug report to [email protected] and it bounced.\n\nI redirected the bounce message to [email protected] (this list) and\nit never showed up.\n\nHello?\n\nAssuming this message got though I'd like to report my problems compiling\n6.4 with KTH-KRB, and what I can say about how to fix it.\n\nI'm still testing, but basically it looks like if you have kerberos 4 then\nyou need to disable use of the system crypt routines. This *should* be\nhandled in the configure stuff. I fixed it by modifying fe-auth.c and\nfe-connect.c in libpq to not include <crypt.h> and by modifying\nMakefile.global to include -lresolv instead of -lcrypt (that gives the\nright load order).\n\nThe function des_encrypt exists in both the KTH kerberos and the system\ncrypt libraries with different arguments.\n\nSignature failed Preliminary Design Review.\nFeasibility of a new signature is currently being evaluated.\[email protected], or [email protected]\n\n\n", "msg_date": "Wed, 25 Nov 1998 12:32:08 -0800", "msg_from": "\"Henry B. Hotz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Testing, Hello?" }, { "msg_contents": "This is a Debian Linux bug report.\n\nSize is still defined as unsigned int in 6.4. Paul is suggesting that this\nbe changed to size_t for compatibility with alpha.\n\nIs that correct?\n\n------- Forwarded Message\n\nDate: Wed, 16 Sep 1998 21:03:44 +0200\nFrom: Paul Slootman <[email protected]>\nTo: [email protected]\nSubject: Bug#26778: postgresql-dev: postgresql/c.h typedefs Size as 'unsigned i\n\t nt', should be 'size_t'\n\nPackage: postgresql-dev\nVersion: 6.3.2-11\n\nUsing the above wrong type for \"Size\" leads to conflicts with system\nprototypes, e.g. palloc() which is apparently a define for malloc().\nReplacing the \"typedef unsigned int Size;\" with \"typedef size_t Size;\"\nleads to a usable system again; at least, on Alpha, where \"unsigned int\"\nis NOT the same as \"unsigned long\", unlike i386.\n\nThanks,\nPaul Slootman\n\n- -- System Information\nDebian Release: 2.0\nKernel Version: Linux alf 2.0.35 #1 Tue Aug 11 11:09:24 CEST 1998 alpha unknown\n\n------- End of Forwarded Message\n\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"Let us therefore come boldly unto the throne of grace,\n that we may obtain mercy, and find grace to help in \n time of need.\" Hebrews 4:16 \n\n\n", "msg_date": "Wed, 25 Nov 1998 23:48:34 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "postgresql/c.h typedefs Size as 'unsigned int' (fwd)" }, { "msg_contents": "\"Henry B. Hotz\" <[email protected]> writes:\n> I'm still testing, but basically it looks like if you have kerberos 4 then\n> you need to disable use of the system crypt routines. This *should* be\n> handled in the configure stuff.\n\nThat's fairly unpleasant, since it's not out of the question that a\ngiven site might need to support both auth methods to cope with varying\nclients.\n\n> The function des_encrypt exists in both the KTH kerberos and the system\n> crypt libraries with different arguments.\n\nNot everywhere --- there's no such routine in my crypt library, for\ninstance. 
I would not like to see kerberos + crypt disabled everywhere\nbecause it does not work on your machine.\n\nIdeally we'd need an autoconf test to discover whether kerberos and\ncrypt libraries are compatible on a given machine, and an autoconf\n--with switch to allow the user to decide which one to include if\nthey're not. Do you have any ideas about a simple way to check whether\nthis problem exists on a given platform?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 25 Nov 1998 19:18:37 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Testing, Hello? " }, { "msg_contents": "At 4:18 PM -0800 11/25/98, Tom Lane wrote:\n>\"Henry B. Hotz\" <[email protected]> writes:\n>> I'm still testing, but basically it looks like if you have kerberos 4 then\n>> you need to disable use of the system crypt routines. This *should* be\n>> handled in the configure stuff.\n>\n>That's fairly unpleasant, since it's not out of the question that a\n>given site might need to support both auth methods to cope with varying\n>clients.\n\nYeah. I note that if you use the Solaris built-in kerberos support the\nconflict should not exist. For Postgres this problem is specific to the\nKTH kerberos implementation I think, but it also exists with SSL. I have\nno information about MIT kerberos IV or V.\n\n>> The function des_encrypt exists in both the KTH kerberos and the system\n>> crypt libraries with different arguments.\n>\n>Not everywhere --- there's no such routine in my crypt library, for\n>instance. I would not like to see kerberos + crypt disabled everywhere\n>because it does not work on your machine.\n\nThis is Solaris 2.5, presumably 2.6 and 7 have the same problem.\n\n>Ideally we'd need an autoconf test to discover whether kerberos and\n>crypt libraries are compatible on a given machine, and an autoconf\n>--with switch to allow the user to decide which one to include if\n>they're not. Do you have any ideas about a simple way to check whether\n>this problem exists on a given platform?\n\nIf you include <crypt.h> and <krb.h> from the system and\n/usr/athena/include respectively then you get a compile error.\n\nMy problem may actually be a bit obscure. I'm using the KTH implementation\nof kerberos IV because I want to be able to use the JPL AFS kerberos\nserver. (AFS kerberos is an incompatable variant of MIT kerberos IV for\nthose who don't know. Solaris and NetBSD come with MIT kerberos IV support\nbuilt-in. MIT kerberos V can support both kerberos IV variants, but\nPostgres is a client.)\n\nI will put in a plug for autoconf support for kerberos in any case. We\nneed a --with-kerberos[={4,5}] option and --with-kerberos-include=..,\n--with-kerberos-lib=.., and --with-kerberos-srvtab=.. options.\n\nThe administrator guide says support for kerberos IV will disappear when 5\nis released. I think there should be a fairly long delay in that. Many\npeople will need to use kerberos IV in order to use an institutional\ncapability, like AFS accounting. Many people should prefer to use the\nbuilt-in capabilities of their OS and all current bundled kerberos support\nis at version IV. 
This will take a *long* time.\n\nFinally let me put in a big public thank-you to Tom Ivar Helbekkmo for\npatiently explaining many things that I should have understood from the\ndocumentation.\n\nSignature failed Preliminary Design Review.\nFeasibility of a new signature is currently being evaluated.\[email protected], or [email protected]\n\n\n", "msg_date": "Mon, 30 Nov 1998 09:53:28 -0800", "msg_from": "\"Henry B. Hotz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Testing, Hello?" }, { "msg_contents": "On Mon, 30 Nov 1998, Henry B. Hotz wrote:\n\n> those who don't know. Solaris and NetBSD come with MIT kerberos IV support\n> built-in. MIT kerberos V can support both kerberos IV variants, but\n> Postgres is a client.)\n\nSolaris 7 ships with Kerberos V support now. I'll dig up a little info I\nput together on Solaris 7 and send it on.\n\nDax Kelson\nInternet Connect, Inc.\n\n", "msg_date": "Mon, 30 Nov 1998 13:14:11 -0700 (MST)", "msg_from": "Dax Kelson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Testing, Hello?" }, { "msg_contents": "\"Henry B. Hotz\" <[email protected]> writes:\n> At 4:18 PM -0800 11/25/98, Tom Lane wrote:\n>> Ideally we'd need an autoconf test to discover whether kerberos and\n>> crypt libraries are compatible on a given machine, and an autoconf\n>> --with switch to allow the user to decide which one to include if\n>> they're not. Do you have any ideas about a simple way to check whether\n>> this problem exists on a given platform?\n\n> If you include <crypt.h> and <krb.h> from the system and\n> /usr/athena/include respectively then you get a compile error.\n\nOK, that seems pretty easy to check.\n\n> I will put in a plug for autoconf support for kerberos in any case. We\n> need a --with-kerberos[={4,5}] option and --with-kerberos-include=..,\n> --with-kerberos-lib=.., and --with-kerberos-srvtab=.. options.\n\nWhere does that information get entered now --- do you have to do it\nmanually after running configure?\n\nI'd be willing to do the autoconf hacking, but since I have no kerberos\nsetup here, I can't test it; and I'm not familiar enough with kerberos\nto expect to get it right the first time. If you can test but don't\nwant to hack the code, let's get together off-list and work on it.\n\n> The administrator guide says support for kerberos IV will disappear when 5\n> is released. I think there should be a fairly long delay in that.\n\nAs long as we have kerb4 support in the code (and I'm not hearing anyone\npropose to take that out), it ought to be supported at the autoconf\nlevel too.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 01 Dec 1998 11:25:49 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Testing, Hello? " }, { "msg_contents": "At 8:25 AM -0800 12/1/98, Tom Lane wrote:\n>\"Henry B. Hotz\" <[email protected]> writes:\n>> If you include <crypt.h> and <krb.h> from the system and\n>> /usr/athena/include respectively then you get a compile error.\n>\n>OK, that seems pretty easy to check.\n\nSpecifically des_encrypt() is declared with a different number of arguments.\n\n>> I will put in a plug for autoconf support for kerberos in any case. We\n>> need a --with-kerberos[={4,5}] option and --with-kerberos-include=..,\n>> --with-kerberos-lib=.., and --with-kerberos-srvtab=.. options.\n>\n>Where does that information get entered now --- do you have to do it\n>manually after running configure?\n\nYes. 
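\n\n(Coming back to the compile check for a moment: the probe I have in mind is nothing more elaborate than a couple of lines of C -- a rough, untested sketch only:\n\n  /* configure-time probe: if this fails to compile with the krb4\n     include path (e.g. -I/usr/athena/include), the des_encrypt()\n     prototypes in <crypt.h> and <krb.h> clash on this platform */\n  #include <crypt.h>\n  #include <krb.h>\n  int main() { return 0; }\n\nand configure could then decide whether -lcrypt can safely be linked alongside the krb4 libraries.)\n\n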
It's documented in the administrator guide actually. You edit\nMakefile.global. The comments in Makefile.global are pretty clear.\n\nThe autoconf support should just be a matter of copying the flags I\nindicated into Makefile.global.\n\nSignature failed Preliminary Design Review.\nFeasibility of a new signature is currently being evaluated.\[email protected], or [email protected]\n\n\n", "msg_date": "Tue, 1 Dec 1998 09:44:34 -0800", "msg_from": "\"Henry B. Hotz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Testing, Hello?" }, { "msg_contents": "\"Oliver Elphick\" <[email protected]> writes:\n> It seems to work, at least for my database, so here is a patch:\n> [ patch to prevent dumping of inherited constraints snipped ]\n\nI have applied this patch to both the main CVS tree and REL6_4 branch,\nalong with Constantin Teodorescu's suggestion to improve the formatting\nof pg_dump's CREATE TABLE commands, and some work of my own to stop\nan occasional coredump in pg_dump -z.\n\nFYI, I was able to simplify your query for fetching non-inherited\nchecks; it now looks like:\n\nsprintf(query, \"SELECT rcname, rcsrc from pg_relcheck \"\n\t\t\"where rcrelid = '%s'::oid \"\n\t\t\" and not exists \"\n\t\t\" (select * from pg_relcheck as c, pg_inherits as i \"\n\t\t\" where i.inhrel = pg_relcheck.rcrelid \"\n\t\t\" and c.rcname = pg_relcheck.rcname \"\n\t\t\" and c.rcsrc = pg_relcheck.rcsrc \"\n\t\t\" and c.rcrelid = i.inhparent) \",\n\t\ttblinfo[i].oid);\n\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 05 Dec 1998 17:29:59 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Patch for pg_dump (6.4) inheritance bug " }, { "msg_contents": "On Thu, 5 Nov 1998, Constantin Teodorescu wrote:\n> RedHat 5.1 i386 with PostgreSQL v6.4 final version\n> \n> ./configure --with-tcl\n> \n> after installing libpgtcl , cannot run pgaccess because library crypt\n> hasn't been linked into libpgtcl.so\n> \n> must manually add -lcrypt in Makefile into src/interfaces/libpgtcl\n\nHi Constantin. This is a problem only with systems running glibc, because\nfor legal reasons the FSF doesn't want to include encryption as part of\nglibc. I assume you are doing this on Debian or on Red Hat 5.x?\n\nPerhaps we need an extra template for glibc? \n\nI came across this problem too today (I am currently packaging 6.4 into an\nRPM for Red Hat 5.2). I have generated a patch file which fixes the lack\nof -lcrypt in a number of places for glibc systems. After I have\nthoroughly tested it I'll try to post it here.\n\n> Also, PgAccess version that has been included in 6.4 final version is\n> 0.90 , but current stable version is 0.91\n> (ftp://ftp.flex.ro/pub/pgaccess/pgaccess-0.91.tar.gz)\n\nThanks for the pointer. I will incorporate that as another patch in\nthe .spec file for my Red Hat RPM. \n\n--\nEric Lee Green [email protected] http://www.linux-hw.com/~eric\n \"Linux represents a best-of-breed UNIX, that is trusted in mission\n critical applications...\" -- internal Microsoft memo\n\n", "msg_date": "Sun, 6 Dec 1998 15:45:15 -0500 (EST)", "msg_from": "Eric Lee Green <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] crypt not included when compiling libpgtcl !!!!!!!" 
}, { "msg_contents": "Eric Lee Green <[email protected]> writes:\n> On Thu, 5 Nov 1998, Constantin Teodorescu wrote:\n>> must manually add -lcrypt in Makefile into src/interfaces/libpgtcl\n\n> I came across this problem too today (I am currently packaging 6.4 into an\n> RPM for Red Hat 5.2). I have generated a patch file which fixes the lack\n> of -lcrypt in a number of places for glibc systems.\n\nI believe this is already fixed in the current Postgres sources. Rather\nthan generating your own patch, would you verify that the current\nsources are fixed?\n\nYou can get the current code from the Postgres CVS server --- see\nhttp://www.postgresql.org/docs/faq-cvs.shtml. Do the checkout with\n\"-r REL6_4\" to get the soon-to-be-6.4.1 stable branch, rather than\nthe 6.5 development branch...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 06 Dec 1998 16:06:04 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] crypt not included when compiling libpgtcl !!!!!!! " }, { "msg_contents": "> (I am currently packaging 6.4 into an RPM for Red Hat 5.2).\n\nIs this an alternative to the RedHat packaging, or are you using the\nsame layout and RPM setup that the RedHat 6.3 package had? Just curious;\nI've been thinking about trying to help with RPM support but would be\nhappy that someone else is looking after it.\n\nbtw, we are currently looking at a date/time peculiarity in Postgres\nwith glibc2 which gives funny results for at least one test case. You\nmight want to stay in touch so we can fix that up for you also, once we\nhave a good workaround.\n\nActually, we should also test with RH5.2 to make sure the date/time\nproblem is still present, since it may have been due to a problem in\nglibc2 which has since been fixed. I'm only running RH4.2 and RH5.1 so\ncan't directly test that myself...\n\n - Tom\n", "msg_date": "Sun, 06 Dec 1998 23:39:04 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] crypt not included when compiling libpgtcl !!!!!!!" }, { "msg_contents": "On Sun, 6 Dec 1998, Tom Lane wrote:\n> Eric Lee Green <[email protected]> writes:\n> > On Thu, 5 Nov 1998, Constantin Teodorescu wrote:\n> >> must manually add -lcrypt in Makefile into src/interfaces/libpgtcl\n> \n> > I came across this problem too today (I am currently packaging 6.4 into an\n> > RPM for Red Hat 5.2). I have generated a patch file which fixes the lack\n> > of -lcrypt in a number of places for glibc systems.\n> \n> I believe this is already fixed in the current Postgres sources. Rather\n> than generating your own patch, would you verify that the current\n> sources are fixed?\n\nHi, Tom. I'll see if I can do that, but be forewarned that I will not be\nable to actually run the \"current\" Postgres sources in order to actually\ntest it. I am working on a database application, not on Postgres, and I\nhave to maintain a stable Postgres on my development machine. The reason\nI'm working on the 6.4 RPM for Red Hat 5.2 is for deployment purposes, not\nfor testing purposes, though of course I'll test it before deploying it.\n(I want the latest stable version possible for deployment, because once\ndeployed, it will be out there for a LONG time).\n\nBTW, congrats to everybody on an excellent job. I examined several\ndatabases, including some commercial ones, and for everything but speed\nPostgreSQL measured up nicely. 
And speed wasn't THAT bad, certainly quite\nacceptable for my purposes (and definitely a lot faster than it used to\nbe!). \n\n--\nEric Lee Green [email protected] http://www.linux-hw.com/~eric\n \"Linux represents a best-of-breed UNIX, that is trusted in mission\n critical applications...\" -- internal Microsoft memo\n\n", "msg_date": "Mon, 7 Dec 1998 00:18:48 -0500 (EST)", "msg_from": "Eric Lee Green <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] crypt not included when compiling libpgtcl !!!!!!!" }, { "msg_contents": "On Sun, 6 Dec 1998, Thomas G. Lockhart wrote:\n> > (I am currently packaging 6.4 into an RPM for Red Hat 5.2).\n> \n> Is this an alternative to the RedHat packaging, or are you using the\n> same layout and RPM setup that the RedHat 6.3 package had? Just curious;\n\nI am using the same layout and RPM setup that the Red Hat 6.3 package\nhad, but I had to make a fair number of changes to their setup to make\n6.4 work. I think I probably ended up changing about 1/3rd of the\n\"action\" portions of their .spec file, and the patch file I used to\nmake it install and link cleanly was generated from scratch. I am not\ndoing this for Red Hat Software, I am doing it because I need to\ndeploy 6.4 and doing it via RPM is the easiest way to deploy on Red\nHat. And perhaps more importantly, I'm doing it because it's there\n:-).\n\n> I've been thinking about trying to help with RPM support but would be\n> happy that someone else is looking after it.\n\nWell, if you have some suggestions I'm certainly not loathe to ignore them!\nRight now, the only suggestion I can think of that I'd like to implement\nwould be to somehow figure out how to compile the Python module at the same\ntime that I compile the rest of it, but I'm not sure that Python is capable\nyet of doing that Perl trick of compiling a module against the Perl .so \nlink library without having the source code around. I'll have to check\nwith the Python gurus on that one. \n\n> Actually, we should also test with RH5.2 to make sure the date/time\n> problem is still present, since it may have been due to a problem in\n> glibc2 which has since been fixed. I'm only running RH4.2 and RH5.1 so\n> can't directly test that myself...\n\nWell, RH5.2 is still using glibc 2.0.7, albeit patched up about 10\nmore times since RH 5.1 :-). But sure, if you have a test case to run,\ndrop it in the mail to me and I'll be happy to run it. \n\nOne thought: Do you have a TZ environment variable set on your RH5.1?\nI don't remember whether glibc needs one or not (I know there were other\nprograms sensitive to the TZ environment variable under RH5.1, like BRU, \nbut it is 12:30am here on the east coast and my mind is not as clear as\nit should be :-). \n\n--\nEric Lee Green [email protected] http://www.linux-hw.com/~eric\n \"Linux represents a best-of-breed UNIX, that is trusted in mission\n critical applications...\" -- internal Microsoft memo\n\n", "msg_date": "Mon, 7 Dec 1998 00:33:36 -0500 (EST)", "msg_from": "Eric Lee Green <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] crypt not included when compiling libpgtcl !!!!!!!" }, { "msg_contents": "Eric Lee Green <[email protected]> writes:\n> I have to maintain a stable Postgres on my development machine. 
The reason\n> I'm working on the 6.4 RPM for Red Hat 5.2 is for deployment purposes, not\n> for testing purposes, though of course I'll test it before deploying it.\n> (I want the latest stable version possible for deployment, because once\n> deployed, it will be out there for a LONG time).\n\nFair enough. I have not been around Postgres very long and may be out\nof line to make this remark, but here goes anyway: as far as I've seen,\nthe minor releases (6.3.2, 6.4.1, etc) are likely to be your best bet\nfor a solid install-and-forget version. Major releases are for feature\nadditions, minor releases are made to clean up whatever got broken. \nWe do the best we can to test major releases before they go out, but\nthings do slip through the cracks sometimes.\n\nIn short, unless you need something in the next few days, the upcoming\n6.4.1 release is likely to be a better bet for an RPM distribution\nthan 6.4.\n\nPerhaps one of the senior Postgres guys would like to expound on the\nrelease philosophy? The above is just what I've observed, it may\nnot be the official policy...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 Dec 1998 00:54:32 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] crypt not included when compiling libpgtcl !!!!!!! " }, { "msg_contents": "Microsoft SQL Server v6.5 have SQL92 join syntax. I don't have the\nstandard in front of me but here's what I remember.\n\njoin_clause :\n table_name|view_name|join_clause [alias ][LEFT |RIGHT |CROSS ] JOIN\ntable_name|view_name|join_clause [alias ]ON join_tatements\n\nThe allows for neat little tricks like (hope you can follow it):\n SELECT a3.name, a3.address, a3.city, a4.state_abbrev, a6.postal_code,\na9.country_code\n FROM (\n (\n (states_list a5\n JOIN postal_codes a6 ON (a5.stateid = a6.stateid)\n ) a4\n RIGHT JOIN \n (clients a1 \n LEFT JOIN addresses a2 ON (a1.clientid = a2.clientid AND\na2.prefered = 1)\n ) a3 ON (a3.stateid = a4.stateid)\n ) a7 \n LEFT JOIN \n countries a8 ON (a7.countryid = a8.countryid)\n ) a9\n\nI'm not sure if Microsoft implemented it but I believe that subselects\nwould be a great addition the above. \n\nI can load up a Microsoft SQL server for any testing you need done. I'm\npretty sure that the Help files have a run down of their supported\nsyntax but I never trust Microsoft to stick to a standard (even their\nown).\n\n\n\n\n> -----Original Message-----\n> From: Dan Gowin [mailto:[email protected]]\n> Sent: Friday, December 11, 1998 7:26 AM\n> To: 'Thomas G. Lockhart'; PGSQL HACKERS (E-mail)\n> Subject: RE: [HACKERS] JOIN syntax. Examples?\n>\n>\n> I run three HP minicomputers and two Sun Ultra 3000 all with\n> Oracle 7.3 and one with Oracle 8.0 .\n>\n> Send it to me.\n>\n> D.\n>\n>\n> -----Original Message-----\n> From: Thomas G. Lockhart [mailto:[email protected]]\n> Sent: Friday, December 11, 1998 1:36 AM\n> To: Postgres Hackers List\n> Subject: [HACKERS] JOIN syntax. Examples?\n>\n>\n> Well, I've started looking through my books for info on\n> joins. The cross\n> join was pretty easy:\n>\n> postgres=> select * from (a cross join b);\n> i| j|i| k\n> -+----+-+--\n> 1|10.1|1|-1\n> 2|20.2|1|-1\n> 4| |1|-1\n> <snip>\n>\n> which I've put into my copy of the parser.\n>\n> Does anyone have a commercial installation which has good support for\n> SQL92 joins? 
I'd like to send some small test cases to verify that I\n> understand what the behavior should be.\n>\n> Also, if anyone has worked with join syntax, outer joins\n> especially, it\n> would be great to get some test case contributions...\n>\n> - Tom\n> \n", "msg_date": "Fri, 11 Dec 1998 12:28:01 -0600", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] JOIN syntax. Examples?" }, { "msg_contents": "> Microsoft SQL Server v6.5 have SQL92 join syntax. I don't have the\n> standard in front of me but here's what I remember.\n\nOK, it's pretty clear that Oracle doesn't implement SQL92-syntax on\nouter joins (unless they support it as an alternative; does anyone find\n\"OUTER JOIN\" in the syntax docs?).\n\nLet's assume that M$ may be close to standard, but given that they don't\nbother following standards in other areas (WHERE x = NULL, etc) we can't\nuse them as a truth generator.\n\nWe are looking for a system which supports syntax like DeJuan gave:\n\nSELECT * FROM (A LEFT OUTER JOIN B USING (X));\nor\nSELECT * FROM (A LEFT OUTER JOIN B ON (A.X = B.X));\n\netc. if we are going to try for the SQL92 standard,\n\nrather than the Oracle form:\n\nSELECT * FROM A, B WHERE A.X = (+) B.X;\n\nor the Informix form:\n\nSELECT * FROM A, OUTER B WHERE A.X = B.X;\n (is the WHERE clause required here?)\n\nDoes anyone have a non-M$ RDBMS which implements SQL92 joins?\n\notoh, any system which can test the results of a query, even if the\nquery needs to be translated first, has some benefit. As/if I progress\nI'll take some of you up on the offer to run queries.\n\n - Tom\n", "msg_date": "Fri, 11 Dec 1998 22:11:28 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] JOIN syntax. Examples?" }, { "msg_contents": "\"Thomas G. Lockhart\" wrote:\n >Does anyone have a non-M$ RDBMS which implements SQL92 joins?\n \nThe book \"The Practical SQL Handbook\", which is often recommended on\nthese lists, uses the syntax `*=' and `=*' for left and right outer\njoins (page 211). I think we ought to support this syntax as well,\nsince it will save new users from confusion.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"The spirit of the Lord GOD is upon me; because the \n LORD hath anointed me to preach good tidings unto the \n meek; he hath sent me to bind up the brokenhearted, to\n proclaim liberty to the captives, and the opening of \n the prison to them that are bound.\" \n Isaiah 61:1 \n\n\n", "msg_date": "Fri, 11 Dec 1998 22:35:18 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] JOIN syntax. Examples? " }, { "msg_contents": "On Fri, 11 Dec 1998, Oliver Elphick wrote:\n> The book \"The Practical SQL Handbook\", which is often recommended on\n> these lists, uses the syntax `*=' and `=*' for left and right outer\n> joins (page 211). I think we ought to support this syntax as well,\n> since it will save new users from confusion.\n\n'A Guide to The SQL Standard\" (4th Ed.) seems to indicate that the MS\nsyntax is fairly close.\n\nISBN 0-201-96426-0\n\n-- \n| Matthew N. Dodd | 78 280Z | 75 164E | 84 245DL | FreeBSD/NetBSD/Sprite/VMS |\n| [email protected] | This Space For Rent | ix86,sparc,m68k,pmax,vax |\n| http://www.jurai.net/~winter | Are you k-rad elite enough for my webpage? 
|\n\n", "msg_date": "Fri, 11 Dec 1998 18:39:58 -0500 (EST)", "msg_from": "\"Matthew N. Dodd\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] JOIN syntax. Examples? " }, { "msg_contents": "> The book \"The Practical SQL Handbook\", which is often recommended on\n> these lists, uses the syntax `*=' and `=*' for left and right outer\n> joins (page 211). I think we ought to support this syntax as well,\n> since it will save new users from confusion.\n\nThis one conflicts with Postgres' operator extensibility features, since\nit would look just like a legal operator.\n\nThe two books I have at hand (besides my old Ingres docs) are A Guide to\nthe SQL Standard by Date and Darwen and Understanding the New SQL by\nMelton and Simon. Both focus on SQL standard syntax, and neither mention\nthe various outer join syntaxes accepted by Oracle, Informix, or Sybase.\n\nAn explanation for the lack of standards compliance by the big three\nprobably involves the fact that they predate the standard by a\nsignificant number of years.\n\n - Tom\n", "msg_date": "Sat, 12 Dec 1998 04:02:38 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] JOIN syntax. Examples?" }, { "msg_contents": "> This is a Debian Linux bug report.\n> \n> Size is still defined as unsigned int in 6.4. Paul is suggesting that this\n> be changed to size_t for compatibility with alpha.\n> \n> Is that correct?\n\nI am going to change Size to size_t only in the CURRENT tree, but leave\n6.4.1/RELEASE tree alone, OK?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 12 Dec 1998 22:42:36 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postgresql/c.h typedefs Size as 'unsigned int' (fwd)" }, { "msg_contents": "Bruce Momjian wrote:\n >> This is a Debian Linux bug report.\n >> \n >> Size is still defined as unsigned int in 6.4. Paul is suggesting that thi\n >s\n >> be changed to size_t for compatibility with alpha.\n >> \n >> Is that correct?\n >\n >I am going to change Size to size_t only in the CURRENT tree, but leave\n >6.4.1/RELEASE tree alone, OK?\n \nOK, I will change it in the Debian release of 6.4.1 and let you know if it\ncauses any problems (not that I expect any).\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"For thou art my hope, O Lord GOD; thou art my trust \n from my youth.\" Psalms 71:5 \n\n\n", "msg_date": "Sun, 13 Dec 1998 10:04:37 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] postgresql/c.h typedefs Size as 'unsigned int' (fwd) " }, { "msg_contents": "Thomas G. Lockhart wrote:\n> \n> > The book \"The Practical SQL Handbook\", which is often recommended on\n> > these lists, uses the syntax `*=' and `=*' for left and right outer\n> > joins (page 211). 
I think we ought to support this syntax as well,\n> > since it will save new users from confusion.\n> \n> This one conflicts with Postgres' operator extensibility features, since\n> it would look just like a legal operator.\n\nso does =\n\nCould it be possible to extend the operator extensibility features \nto achieve the behaviour of outer/cross joins ?\n\n> The two books I have at hand (besides my old Ingres docs) are A Guide to\n> the SQL Standard by Date and Darwen and Understanding the New SQL by\n> Melton and Simon. Both focus on SQL standard syntax, and neither mention\n> the various outer join syntaxes accepted by Oracle, Informix, or Sybase.\n\nHas anybody tried out DB2 ?\n\nI have downloaded it (for linux) but have not yet tried it.\n \n> An explanation for the lack of standards compliance by the big three\n> probably involves the fact that they predate the standard by a\n> significant number of years.\n\nNot to mention that both =* and =(+) are more concise and easier to \nfollow, at least for one with my headshape.\n\nThe standard is probably the 'worst common denominator' or something \nlike that :(\n\n-----------------\nHannu\n", "msg_date": "Mon, 14 Dec 1998 21:10:52 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] JOIN syntax. Examples?" }, { "msg_contents": "> > This one conflicts with Postgres' operator extensibility features, \n> > since it would look just like a legal operator.\n> so does =\n\nBut in fact its usage for joins matches the typical usage elsewhere.\n\n> Has anybody tried out DB2 ?\n> I have downloaded it (for linux) but have not yet tried it.\n\nJust downloaded it this morning (and afternoon, it's a thin pipe at home\nfor 60MB of files :) Have you looked at what it takes to do an\ninstallation yet?\n\n> Not to mention that both =* and =(+) are more concise and easier to\n> follow, at least for one with my headshape.\n> The standard is probably the 'worst common denominator' or something\n> like that :(\n\nDeJuan points out a major strength of the SQL92 syntax, which allows\nmultiple outer joins in the same query. One of my books shows an\nexample:\n\n select * from\n q1 full outer join q2 on (q1.id = q2.id)\n full outer join q3 on (coalesce(q1.id,q2.id)=q3.id)\n full outer join q4 on (coalesce(q1.id,q2.id,q3.id)=q4.id)\n\nI suppose one can do something similar using a *= operator by using\nparentheses? Not sure though...\n\n - Tom\n", "msg_date": "Tue, 15 Dec 1998 01:01:01 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] JOIN syntax. Examples?" }, { "msg_contents": "> > Has anybody tried out DB2 ?\n> > I have downloaded it (for linux) but have not yet tried it.\n> Just downloaded it this morning (and afternoon, it's a thin pipe at \n> home for 60MB of files :) Have you looked at what it takes to do an\n> installation yet?\n\nWell, I'll have to save it for later, at least at home. It's glibc2\nonly. Also, the tar file has a bunch of rpms but also other files. Don't\nknow what's up with that...\n\n - Tom\n", "msg_date": "Tue, 15 Dec 1998 01:20:47 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] JOIN syntax. Examples?" 
}, { "msg_contents": "Hi all,\n\n>> > Has anybody tried out DB2 ?\n>> > I have downloaded it (for linux) but have not yet tried it.\n>> Just downloaded it this morning (and afternoon, it's a thin pipe at \n>> home for 60MB of files :) Have you looked at what it takes to do an\n>> installation yet?\n\nCould someone tell me please where I can download DB2?\nThanks,\n-Jose'-\n\n\n", "msg_date": "Tue, 15 Dec 1998 14:43:20 +0100", "msg_from": "Sferacarta Software <[email protected]>", "msg_from_op": false, "msg_subject": "Re[2]: [HACKERS] JOIN syntax. Examples?" }, { "msg_contents": "Hello Thomas,\n\nvenerd�, 11 dicembre 98, you wrote:\n\n>> Microsoft SQL Server v6.5 have SQL92 join syntax. I don't have the\n>> standard in front of me but here's what I remember.\n\nTGL> OK, it's pretty clear that Oracle doesn't implement SQL92-syntax on\nTGL> outer joins (unless they support it as an alternative; does anyone find\nTGL> \"OUTER JOIN\" in the syntax docs?).\n\nTGL> Let's assume that M$ may be close to standard, but given that they don't\nTGL> bother following standards in other areas (WHERE x = NULL, etc) we can't\nTGL> use them as a truth generator.\n\nTGL> We are looking for a system which supports syntax like DeJuan gave:\n\nTGL> SELECT * FROM (A LEFT OUTER JOIN B USING (X));\nTGL> or\nTGL> SELECT * FROM (A LEFT OUTER JOIN B ON (A.X = B.X));\n\nTGL> etc. if we are going to try for the SQL92 standard,\n\nTGL> rather than the Oracle form:\n\nTGL> SELECT * FROM A, B WHERE A.X = (+) B.X;\n\nTGL> or the Informix form:\n\nTGL> SELECT * FROM A, OUTER B WHERE A.X = B.X;\nTGL> (is the WHERE clause required here?)\n\nTGL> Does anyone have a non-M$ RDBMS which implements SQL92 joins?\n\nDownload OCELOT for Win32 at http://ourworld.compuserve.com/homepages/OCELOTSQL\ntheir database implements SQL92 joins.\n\nTheir home page says:\n\nOcelot makes the only Database Management System (DBMS) that supports\nthe full ANSI / ISO SQL Standard (1992).\n...\nThis is also the only place on the Net where you can find documentation\nthat explains and provides examples of the full SQL-92 standard. This is version 1.0.\n\nI'm trying it, is very interesting but it is only for M$-win.\n\n-Jose'-\n\n\n", "msg_date": "Mon, 11 Jan 1999 15:45:51 +0100", "msg_from": "Sferacarta Software <[email protected]>", "msg_from_op": false, "msg_subject": "Re[2]: [HACKERS] JOIN syntax. Examples?" }, { "msg_contents": "> Download OCELOT for Win32...\n> their database implements SQL92 joins.\n> I'm trying it, is very interesting but it is only for M$-win.\n\nMy linux system doesn't know how to boot or run M$ stuff. Funny, but my\nMac before that didn't know how either :)\n\n - Tom\n", "msg_date": "Tue, 12 Jan 1999 03:05:16 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] JOIN syntax. Examples?" }, { "msg_contents": "Hello Thomas,\n\nmarted�, 12 gennaio 99, you wrote:\n\n>> Download OCELOT for Win32...\n>> their database implements SQL92 joins.\n>> I'm trying it, is very interesting but it is only for M$-win.\n\nTGL> My linux system doesn't know how to boot or run M$ stuff. 
Funny, but my\nTGL> Mac before that didn't know how either :)\n\nTGL> - Tom\n\nYou are a very puritan, I'm glad for you ;)\nUnfortunately I can't be 100% puritan like you :(\n\nI tried some joins on Ocelot...seems nice.\nIf you want something more significant I can try it for you.\n\ntable P:\n\nPNO PNAME COLOR WEIGHT CITY\n-----------------------------------------\nP1 NUT RED 12 LONDON\nP4 SCREW RED 14 LONDON\nP2 BOLT GREEN 17 PARIS\n\ntable SP:\nSNO PNO QTY\n-----------------------\nS1 P1 300\nS1 P2 200\nS1 P2 200\n\nSELECT DISTINCT SP.PNO, P.CITY FROM SP NATURAL JOIN P; \nPNO CITY\n---------------\nP1 LONDON\nP2 PARIS\n\nSELECT DISTINCT SP.PNO, P.CITY FROM SP LEFT OUTER JOIN P USING (PNO); \nPNO CITY\n\n---------------\nP1 LONDON\nP2 PARIS\n\nSELECT DISTINCT SP.PNO, P.CITY FROM SP LEFT OUTER JOIN P ON (P.PNO = sp.pno);\nPNO CITY\n---------------\nP1 LONDON\nP2 ?\nP2 PARIS\n\nSELECT DISTINCT SP.PNO, P.CITY FROM SP RIGHT OUTER JOIN P ON (P.PNO = sp.pno);\nPNO CITY\n---------------\nP1 LONDON\nP2 PARIS\n? PARIS\n\nSELECT DISTINCT SP.PNO, P.CITY FROM SP FULL OUTER JOIN P ON (P.PNO = sp.pno);\nPNO CITY\n---------------\nP1 LONDON\nP2 ?\nP2 PARIS\n? PARIS\n\nSELECT DISTINCT SP.PNO, P.CITY FROM SP INNER JOIN P ON (P.PNO = sp.pno);\nPNO CITY\n---------------\nP1 LONDON\nP2 PARIS\n\n\n-Jose'-\n\n\n", "msg_date": "Tue, 12 Jan 1999 15:15:35 +0100", "msg_from": "Sferacarta Software <[email protected]>", "msg_from_op": false, "msg_subject": "Re[2]: [HACKERS] JOIN syntax. Examples?" }, { "msg_contents": "\nTatsuo, Vadim, Oleg, Scrappy,\n\nMany thanks for the response.\n\nA couple of you weren't convinced that this\nis a Postgres problem so let me try to clear\nthe water a little bit. Maybe the use of \nApache and mod_perl is confusing the issue -\nthe point I was trying to make is that if \nthere are 49+ concurrent postgres processes\non a normal machine (i.e. where kernel \nparameters are the defaults, etc.) the \npostmaster dies in a nasty way with \npotentially damaging results. \n\nHere's a case without Apache/mod_perl that\ncauses exactly the same behaviour. Simply\nenter the following 49 times:\n\nkandinsky:patrick> psql template1 &\n\nNote that I tried to automate this without\nsuccess: \n\nperl -e 'for ( 1..49 ) { system(\"/usr/local/pgsql/bin/psql template1 &\"); }'\n\nThe 49th attempt to initiate a connection \nfails:\n\nConnection to database 'template1' failed.\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally before or while processing the request.\n\nand the error_log says:\n\nInitPostgres\nIpcSemaphoreCreate: semget failed (No space left on device) key=5432017, num=16, permission=600\nproc_exit(3) [#0]\nshmem_exit(3) [#0]\nexit(3)\n/usr/local/pgsql/bin/postmaster: reaping dead processes...\n/usr/local/pgsql/bin/postmaster: CleanupProc: pid 1521 exited with status 768\n/usr/local/pgsql/bin/postmaster: CleanupProc: sending SIGUSR1 to process 1518\nNOTICE: Message from PostgreSQL backend:\n The Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\n I have rolled back the current transaction and am going to terminate your database system connection and exit.\n Please reconnect to the database system and repeat your query.\n\nFATAL: s_lock(dfebe065) at spin.c:125, stuck spinlock. Aborting.\n\nFATAL: s_lock(dfebe065) at spin.c:125, stuck spinlock. 
Aborting.\n\n\nEven if there is a hard limit there is no way that \nPostgres should die in this spectacular fashion.\nI wouldn't have said that it was unreasonable for\nsome large applications to peak at >48 processes\nwhen using powerful hardware with plenty of RAM.\n\nThe other point is that even if one had 1 GB RAM,\nPostgres won't scale beyond 48 processes, using\nprobably less than 100 MB of RAM. Would it be\npossible to make the 'MaxBackendId' configurable\nfor those who have the resources?\n\nI have reproduced this behaviour on both \nFreeBSD 2.2.8 and Intel Solaris 2.6 using\nversion 6.4.x of PostgreSQL.\n\nI'll try to change some of the parameters\nsuggested and see how far I get but the bottom \nline is Postgres shouldn't be dying like this.\n\nLet me know if you need any more info.\n\nCheers.\n\n\n\nPatrick\n\n-- \n\n#===============================#\n\\ KAN Design & Publishing Ltd /\n/ T: +44 (0)1223 511134 \\\n\\ F: +44 (0)1223 571968 /\n/ E: mailto:[email protected] \\ \n\\ W: http://www.kan.co.uk /\n#===============================#\n", "msg_date": "Fri, 29 Jan 1999 16:05:28 +0000", "msg_from": "Patrick Verdon <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postmaster dies with many child processes\n\t(spinlock/semget failed)" }, { "msg_contents": "Patrick Verdon wrote:\n> \n> \n> Even if there is a hard limit there is no way that\n> Postgres should die in this spectacular fashion.\n\n[snip]\n\n> I have reproduced this behaviour on both\n> FreeBSD 2.2.8 and Intel Solaris 2.6 using\n> version 6.4.x of PostgreSQL.\n> \n> I'll try to change some of the parameters\n> suggested and see how far I get but the bottom\n> line is Postgres shouldn't be dying like this.\n\nWe definitely need a chapter on tuning postgres in some of the manuals.\n\nIt should contain not only the parameters that one can change in\nPostgreSQL - for either better response or for taking a larger load -\nbut also the ways one can tune the underlying OS, being it Linux, *BSD, \nSolaris or whatever.\n\nEven commercial databases (at least Oracle) tend to rebuild kernel \nduring installation (obsereved with Oracle 7.1 on Solaris)\n\nWhen I once needed the info about setting shared memory limits on \nsolaris I cried out here and got the example lines (I actually had them \nalready copied from a macine where oracle was running)\n\nBut the same info, and possibly more(increasing limits for max \nfiles per process/globally, shared mem config, ... whatever else \nis needed) seems to be essential part of setting up a serious DB \nserver on any system.\n\n---------------\nHannu\n", "msg_date": "Fri, 29 Jan 1999 19:02:43 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postmaster dies with many child processes\n\t(spinlock/semget failed)" }, { "msg_contents": "Thomas Metz <[email protected]> writes:\n> SELECT DISTINCT ON id id, name FROM test ORDER BY name;\n> [doesn't work as expected]\n\nThere have been related discussions before on pg-hackers mail list;\nyou might care to check the list archives. The conclusion I recall\nis that it's not real clear how the combination of SELECT DISTINCT\non one column and ORDER BY on another *should* work. Postgres'\ncurrent behavior is clearly wrong IMHO, but there isn't a unique\ndefinition of right behavior, because it's not clear which tuples\nshould get selected for the sort.\n\nThis \"SELECT DISTINCT ON attribute\" option strikes me as even more\nbogus. 
Where did we get that from --- is it in the SQL92 standard?\nIf you SELECT DISTINCT on a subset of the attributes to be returned,\nthen there's no unique definition of which values get returned in the\nother columns. In Thomas' example:\n\n> Assuming the table TEST as follows:\n> ID NAME\n> - -----------------\n> 1 Alex\n> 2 Oliver\n> 1 Thomas\n> 2 Fenella\n\n> SELECT DISTINCT ON id id, name FROM test;\n> produces:\n> ID NAME\n> - -----------------\n> 1 Alex\n> 2 Oliver\n\nThere's no justifiable reason for preferring this output over\n\t1 Thomas\n\t2 Oliver\nor\n\t1 Alex\n\t2 Fenella\nor\n\t1 Thomas\n\t2 Fenella\n\nAny of these are \"DISTINCT ON id\", but it's purely a matter of\nhappenstance table order and unspecified implementation choices which\none will appear. Do we really have (or want) a statement with\ninherently undefined behavior?\n\nAnyway, to answer Thomas' question, the way SELECT DISTINCT is\nimplemented is that first there's a sort on the DISTINCT columns,\nthen there's a pass that eliminates adjacent duplicates (like the Unix\nuniq(1) program). In the current backend, doing an ORDER BY on another\ncolumn overrides the sorting on the DISTINCT columns, so when the\nduplicate-eliminator runs it will fail to get rid of duplicates that\ndon't happen to appear consecutively in its input. That's pretty\nbroken, but then the entire concept of combining these two options\ndoesn't seem well defined; the SELECT DISTINCT doesn't make any promises\nabout which tuples (with the same DISTINCT columns) it's going to pick,\ntherefore the result of ordering by some other column isn't clear.\n\nIf you're willing to live with poorly defined behavior, the fix\nis fairly obvious: run the sort and uniq passes for the DISTINCT\ncolumns, *then* run the sort on the ORDER BY columns --- which\nwill use whichever tuple the DISTINCT phase selected at random\nout of each set with the same DISTINCT value.\n\nI think the issue got put on the back burner last time in hopes that\nsome definition with consistent behavior would come up, but I haven't\nseen any hope that there is one.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 29 Jan 1999 12:02:52 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT DISTINCT ON ... ORDER BY ..." }, { "msg_contents": "Patrick Verdon <[email protected]> writes:\n> the point I was trying to make is that if there are 49+ concurrent\n> postgres processes on a normal machine (i.e. where kernel parameters\n> are the defaults, etc.) the postmaster dies in a nasty way with\n> potentially damaging results.\n\nRight. It looks to me like your problem is running out of SysV\nsemaphores:\n\n> IpcSemaphoreCreate: semget failed (No space left on device) key=5432017, num=16, permission=600\n\n(read the man page for semget(2):\n [ENOSPC] A semaphore identifier is to be created, but the\n system-imposed limit on the maximum number of\n allowed semaphore identifiers system wide would be\n exceeded.\nOld bad habit of Unix kernel programmers: re-use closest available error\ncode, rather than deal with the hassle of inventing a new kernel errno.)\n\nYou can increase the kernel's number-of-semaphores parameter (on my box,\nboth SEMMNI and SEMMNS need to be changed), but it'll probably take a\nkernel rebuild to do it.\n\n> Even if there is a hard limit there is no way that \n> Postgres should die in this spectacular fashion.\n\nWell, running out of resources is something that it's hard to guarantee\nrecovery from. 
Postgres is designed on the assumption that it's better\nto try to prevent corruption of the database than to try to limp along\nafter a failure --- so the crash recovery behavior is exactly what you\nsee, mutual mass suicide of all surviving backends. Restarting all your\nclients is a pain in the neck, agreed, but would you rather have\ndatabase corruption spreading invisibly?\n\n> The other point is that even if one had 1 GB RAM,\n> Postgres won't scale beyond 48 processes, using\n> probably less than 100 MB of RAM. Would it be\n> possible to make the 'MaxBackendId' configurable\n> for those who have the resources?\n\nMaxBackendId is 64 by default, so that's not the limit you're hitting.\n\nIt should be easier to configure MaxBackendId --- probably it should be\nan option to the configure script. I've put this on my personal to-do\nlist. (I don't think it's a good idea to have *no* upper limit, even\nif it were easy to do in the code --- otherwise an unfriendly person\ncould run you out of memory by starting more and more clients. If he\nstops just short of exhausting swap space, then Postgres is perfectly\nhappy, but all the rest of your system starts misbehaving ... not cool.)\n\nAnother thing we ought to look at is changing the use of semaphores so\nthat Postgres uses a fixed number of semaphores, not a number that\nincreases as more and more backends are started. Kernels are\ntraditionally configured with very low limits for the SysV IPC\nresources, so having a big appetite for semaphores is a Bad Thing.\n\nRight now it looks like we use a sema per backend to support spinlocks.\nPerhaps we could just use a single sema that all backends block on when\nwaiting for a spinlock? This might be marginally slower, or it might\nnot, but hopefully one is not blocking on spinlocks too often anyway.\nOr, given that the system seems to contain only a small fixed number of\nspinlocks, maybe a sema per spinlock would work best.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 29 Jan 1999 13:13:54 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postmaster dies with many child processes\n\t(spinlock/semget failed)" }, { "msg_contents": "Tom Lane wrote:\n >Thomas Metz <[email protected]> writes:\n >> SELECT DISTINCT ON id id, name FROM test ORDER BY name;\n >> [doesn't work as expected]\n >\n >There have been related discussions before on pg-hackers mail list;\n >you might care to check the list archives. The conclusion I recall\n >is that it's not real clear how the combination of SELECT DISTINCT\n >on one column and ORDER BY on another *should* work. Postgres'\n >current behavior is clearly wrong IMHO, but there isn't a unique\n >definition of right behavior, because it's not clear which tuples\n >should get selected for the sort.\n >\n >This \"SELECT DISTINCT ON attribute\" option strikes me as even more\n >bogus. Where did we get that from --- is it in the SQL92 standard?\n\nI looked through the standard yesterday and couldn't find it. It doesn't\nseem to be a useful extension, since it does nothing that you can't do\nwith GROUP BY and seems much less well defined. For the moment I have\nadded a brief description to the reference documentation for SELECT.\n\n >If you SELECT DISTINCT on a subset of the attributes to be returned,\n >then there's no unique definition of which values get returned in the\n >other columns. 
In Thomas' example:\n ...\n >Any of these are \"DISTINCT ON id\", but it's purely a matter of\n >happenstance table order and unspecified implementation choices which\n >one will appear. Do we really have (or want) a statement with\n >inherently undefined behavior?\n\nWe have it; I suggest we don't want it!\n \n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"My son, if sinners entice thee, consent thou not.\" \n Proverbs 1:10 \n\n\n", "msg_date": "Fri, 29 Jan 1999 18:24:03 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: SELECT DISTINCT ON ... ORDER BY ... " }, { "msg_contents": "> MaxBackendId is 64 by default, so that's not the limit you're hitting.\n> \n> It should be easier to configure MaxBackendId --- probably it should be\n> an option to the configure script. I've put this on my personal to-do\n> list. (I don't think it's a good idea to have *no* upper limit, even\n\nOr even better, MaxBackendId can be set at the run time such as\npostmaster's option. Also, it would be nice if we could monitor number\nof backends currently running. Maybe we should have a new protocol for\nthis kind of puropose?\n\nBTW, as I pointed out before, PostgreSQL will have serious problem\nonce hitting the MaxBackendId. My patches I proposed for this seem\nstill under discussion. I think we should solve the problem in the\nnext release in whatever way, however.\n---\nTatsuo Ishii\n", "msg_date": "Sat, 30 Jan 1999 10:18:52 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postmaster dies with many child processes\n\t(spinlock/semget failed)" }, { "msg_contents": "On Fri, 29 Jan 1999, Patrick Verdon wrote:\n\n> \n> Tatsuo, Vadim, Oleg, Scrappy,\n> \n> Many thanks for the response.\n> \n> A couple of you weren't convinced that this\n> is a Postgres problem so let me try to clear\n> the water a little bit. Maybe the use of \n> Apache and mod_perl is confusing the issue -\n> the point I was trying to make is that if \n> there are 49+ concurrent postgres processes\n> on a normal machine (i.e. where kernel \n> parameters are the defaults, etc.) the \n> postmaster dies in a nasty way with \n> potentially damaging results. \n> \n> Here's a case without Apache/mod_perl that\n> causes exactly the same behaviour. 
Simply\n> enter the following 49 times:\n> \n> kandinsky:patrick> psql template1 &\n> \n> Note that I tried to automate this without\n> success: \n> \n> perl -e 'for ( 1..49 ) { system(\"/usr/local/pgsql/bin/psql template1 &\"); }'\n> \n> The 49th attempt to initiate a connection \n> fails:\n> \n> Connection to database 'template1' failed.\n> pqReadData() -- backend closed the channel unexpectedly.\n> This probably means the backend terminated abnormally before or while processing the request.\n> \n> and the error_log says:\n> \n> InitPostgres\n> IpcSemaphoreCreate: semget failed (No space left on device) key=5432017, num=16, permission=600\n\n\nthis error indicates taht you are out of semaphores...you have enough\nconfigures to allow for 48 processes, but not the 49th...\n\n> I have reproduced this behaviour on both \n> FreeBSD 2.2.8 and Intel Solaris 2.6 using\n> version 6.4.x of PostgreSQL.\n\nBoth of them have \"default\" settings for semaphores...I don't recall what\nthey are, but the error you are seeing about IPCSemaphoreCreate indicates\nthat you are exceeding it...\n\n> I'll try to change some of the parameters\n> suggested and see how far I get but the bottom \n> line is Postgres shouldn't be dying like this.\n\nPostgreSQL cannot allocate past what the operating sytem has hardcoded as\nthe max...maybe a more graceful exit should be in order, though? Or is\nthat what you mean?\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 30 Jan 1999 04:05:50 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postmaster dies with many child processes\n\t(spinlock/semget failed)" }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> BTW, as I pointed out before, PostgreSQL will have serious problem\n> once hitting the MaxBackendId. My patches I proposed for this seem\n> still under discussion.\n\nNot sure why that didn't get applied before, but I just put it in,\nand verified that you can start exactly MaxBackendId backends\n(assuming that you don't hit any kernel resource limits on the way).\n\nBTW, we do recover quite gracefully from hitting MAXUPRC (kernel\nlimit on processes for one userid) :-). But that's just because the\npostmaster's initial fork() fails. A failure any later than that\nin backend startup will be treated as a backend crash ...\n\nI agree with Hannu Krosing's remark that we really need some\ndocumentation about kernel parameters that have to be checked when\nsetting up a non-toy database server. I've personally run into\nNFILES limits, for instance, with not all that many backends running.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 30 Jan 1999 15:18:15 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postmaster dies with many child processes\n\t(spinlock/semget failed)" }, { "msg_contents": "I said:\n> Another thing we ought to look at is changing the use of semaphores so\n> that Postgres uses a fixed number of semaphores, not a number that\n> increases as more and more backends are started. 
Kernels are\n> traditionally configured with very low limits for the SysV IPC\n> resources, so having a big appetite for semaphores is a Bad Thing.\n\nI've been looking into this issue today, and it looks possible but messy.\n\nThe source of the problem is the lock manager\n(src/backend/storage/lmgr/proc.c), which wants to be able to wake up a\nspecific process that is blocked on a lock. I had first thought that it\nwould be OK to wake up any one of the processes waiting for a lock, but\nafter looking at the lock manager that seems a bad idea --- considerable\nthought has gone into the queuing order of waiting processes, and we\ndon't want to give that up. So we need to preserve this ability.\n\nThe way it's currently done is that each extant backend has its own\nSysV-style semaphore, and when you want to wake up a particular backend\nyou just V() its semaphore. (BTW, the semaphores get allocated in\nchunks of 16, so an out-of-semaphores condition will always occur when\ntrying to start the 16*N+1'th backend...) This is simple and reliable\nbut fails if you want to have more backends than the kernel has SysV\nsemaphores. Unfortunately kernels are usually configured with not\nvery many semaphores --- 64 or so is typical. Also, running the system\ndown to nearly zero free semaphores is likely to cause problems for\nother subsystems even if Postgres itself doesn't run out.\n\nWhat seems practical to do instead is this:\n* At postmaster startup, allocate a fixed number of semaphores for\n use by all child backends. (\"Fixed\" can really mean \"configurable\",\n of course, but the point is we won't ask for more later.)\n* The semaphores aren't dedicated to use by particular backends.\n Rather, when a backend needs to block, it finds a currently free\n semaphore and grabs it for the duration of its wait. The number\n of the semaphore a backend is using to wait with would be recorded\n in its PROC struct, and we'd also need an array of per-sema data\n to keep track of free and in-use semaphores.\n* This works with very little extra overhead until we have more\n simultaneously-blocked backends than we have semaphores. When that\n happens (which we hope is really seldom), we overload semaphores ---\n that is, we use the same sema to block two or more backends. Then\n the V() operation by the lock's releaser might wake the wrong backend.\n So, we need an extra field in the LOCK struct to identify the intended\n wake-ee. When a backend is released in ProcSleep, it has to look at\n the lock it is waiting on to see if it is supposed to be wakened\n right now. If not, it V()s its shared semaphore a second time (to\n release the intended wakee), then P()s the semaphore again to go\n back to sleep itself. There probably has to be a delay in here,\n to ensure that the intended wakee gets woken and we don't have its\n bed-mates indefinitely trading wakeups among the wrong processes.\n This is why we don't want this scenario happening often.\n\nI think this could be made to work, but it would be a delicate and\nhard-to-test change in what is already pretty subtle code.\n\nA considerably more straightforward approach is just to forget about\nincremental allocation of semaphores and grab all we could need at\npostmaster startup. 
(\"OK, Mac, you told me to allow up to N backends?\nFine, I'm going to grab N semaphores at startup, and if I can't get them\nI won't play.\") This would force the DB admin to either reconfigure the\nkernel or reduce MaxBackendId to something the kernel can support right\noff the bat, rather than allowing the problem to lurk undetected until\ntoo many clients are started simultaneously. (Note there are still\npotential gotchas with running out of processes, swap space, or file\ntable slots, so we wouldn't have really guaranteed that N backends can\nbe started safely.)\n\nIf we make MaxBackendId settable from a postmaster command-line switch\nthen this second approach is probably not too inconvenient, though it\nsurely isn't pretty.\n\nAny thoughts about which way to jump? I'm sort of inclined to take\nthe simpler approach myself...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 30 Jan 1999 19:11:54 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Reducing sema usage (was Postmaster dies with many child processes)" }, { "msg_contents": "I said:\n> Any thoughts about which way to jump? I'm sort of inclined to take\n> the simpler approach myself...\n\nA further thought: we could leave the semaphore management as-is,\nand instead try to make running out of semaphores a less catastrophic\nfailure. I'm thinking that the postmaster could be the one to try\nto allocate more semaphores whenever there are none left, just before\ntrying to fork a new backend. (The postmaster has access to the same\nshared memory as the backends, right? So no reason it couldn't do this.)\nIf the allocation fails, it can simply refuse the connection request,\nrather than having to proceed as though we'd had a full-fledged backend\ncrash. This only works because we can predict the number of semas\nneeded by an additional backend -- but we can: one.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 30 Jan 1999 19:42:25 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reducing sema usage (was Postmaster dies with many child\n\tprocesses)" }, { "msg_contents": "> A considerably more straightforward approach is just to forget about\n> incremental allocation of semaphores and grab all we could need at\n> postmaster startup. (\"OK, Mac, you told me to allow up to N backends?\n> Fine, I'm going to grab N semaphores at startup, and if I can't get them\n> I won't play.\") This would force the DB admin to either reconfigure the\n> kernel or reduce MaxBackendId to something the kernel can support right\n> off the bat, rather than allowing the problem to lurk undetected until\n> too many clients are started simultaneously. (Note there are still\n> potential gotchas with running out of processes, swap space, or file\n> table slots, so we wouldn't have really guaranteed that N backends can\n> be started safely.)\n> \n> If we make MaxBackendId settable from a postmaster command-line switch\n> then this second approach is probably not too inconvenient, though it\n> surely isn't pretty.\n> \n> Any thoughts about which way to jump? I'm sort of inclined to take\n> the simpler approach myself...\n\nSemaphore are hard enough without overloading them. I say just gram\nthem on startup. They are cheap. Many databases use semaphores for\nevery row/page they lock, and boy that can be a lot of semaphores. 
We\nare only getting a few.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 30 Jan 1999 20:42:56 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Reducing sema usage (was Postmaster dies with many\n\tchild processes)" }, { "msg_contents": "> I said:\n> > Any thoughts about which way to jump? I'm sort of inclined to take\n> > the simpler approach myself...\n> \n> A further thought: we could leave the semaphore management as-is,\n> and instead try to make running out of semaphores a less catastrophic\n> failure. I'm thinking that the postmaster could be the one to try\n> to allocate more semaphores whenever there are none left, just before\n> trying to fork a new backend. (The postmaster has access to the same\n> shared memory as the backends, right? So no reason it couldn't do this.)\n> If the allocation fails, it can simply refuse the connection request,\n> rather than having to proceed as though we'd had a full-fledged backend\n> crash. This only works because we can predict the number of semas\n> needed by an additional backend -- but we can: one.\n\nIf they asked for 64 backends, we better be able go give them to them,\nand not crash or fail under a load. 64 semaphores is nothing.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 30 Jan 1999 20:45:23 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Reducing sema usage (was Postmaster dies with many\n\tchild processes)" }, { "msg_contents": "On Sat, 30 Jan 1999, Tom Lane wrote:\n\n> I said:\n> > Another thing we ought to look at is changing the use of semaphores so\n> > that Postgres uses a fixed number of semaphores, not a number that\n> > increases as more and more backends are started. Kernels are\n> > traditionally configured with very low limits for the SysV IPC\n> > resources, so having a big appetite for semaphores is a Bad Thing.\n> \n> I've been looking into this issue today, and it looks possible but messy.\n> \n> The source of the problem is the lock manager\n> (src/backend/storage/lmgr/proc.c), which wants to be able to wake up a\n> specific process that is blocked on a lock. I had first thought that it\n> would be OK to wake up any one of the processes waiting for a lock, but\n> after looking at the lock manager that seems a bad idea --- considerable\n> thought has gone into the queuing order of waiting processes, and we\n> don't want to give that up. So we need to preserve this ability.\n> \n> The way it's currently done is that each extant backend has its own\n> SysV-style semaphore, and when you want to wake up a particular backend\n> you just V() its semaphore. (BTW, the semaphores get allocated in\n> chunks of 16, so an out-of-semaphores condition will always occur when\n> trying to start the 16*N+1'th backend...) This is simple and reliable\n> but fails if you want to have more backends than the kernel has SysV\n> semaphores. Unfortunately kernels are usually configured with not\n> very many semaphores --- 64 or so is typical. 
Also, running the system\n> down to nearly zero free semaphores is likely to cause problems for\n> other subsystems even if Postgres itself doesn't run out.\n> \n> What seems practical to do instead is this:\n> * At postmaster startup, allocate a fixed number of semaphores for\n> use by all child backends. (\"Fixed\" can really mean \"configurable\",\n> of course, but the point is we won't ask for more later.)\n> * The semaphores aren't dedicated to use by particular backends.\n> Rather, when a backend needs to block, it finds a currently free\n> semaphore and grabs it for the duration of its wait. The number\n> of the semaphore a backend is using to wait with would be recorded\n> in its PROC struct, and we'd also need an array of per-sema data\n> to keep track of free and in-use semaphores.\n> * This works with very little extra overhead until we have more\n> simultaneously-blocked backends than we have semaphores. When that\n> happens (which we hope is really seldom), we overload semaphores ---\n> that is, we use the same sema to block two or more backends. Then\n> the V() operation by the lock's releaser might wake the wrong backend.\n> So, we need an extra field in the LOCK struct to identify the intended\n> wake-ee. When a backend is released in ProcSleep, it has to look at\n> the lock it is waiting on to see if it is supposed to be wakened\n> right now. If not, it V()s its shared semaphore a second time (to\n> release the intended wakee), then P()s the semaphore again to go\n> back to sleep itself. There probably has to be a delay in here,\n> to ensure that the intended wakee gets woken and we don't have its\n> bed-mates indefinitely trading wakeups among the wrong processes.\n> This is why we don't want this scenario happening often.\n> \n> I think this could be made to work, but it would be a delicate and\n> hard-to-test change in what is already pretty subtle code.\n> \n> A considerably more straightforward approach is just to forget about\n> incremental allocation of semaphores and grab all we could need at\n> postmaster startup. (\"OK, Mac, you told me to allow up to N backends?\n> Fine, I'm going to grab N semaphores at startup, and if I can't get them\n> I won't play.\") This would force the DB admin to either reconfigure the\n> kernel or reduce MaxBackendId to something the kernel can support right\n> off the bat, rather than allowing the problem to lurk undetected until\n> too many clients are started simultaneously. (Note there are still\n> potential gotchas with running out of processes, swap space, or file\n> table slots, so we wouldn't have really guaranteed that N backends can\n> be started safely.)\n> \n> If we make MaxBackendId settable from a postmaster command-line switch\n> then this second approach is probably not too inconvenient, though it\n> surely isn't pretty.\n> \n> Any thoughts about which way to jump? I'm sort of inclined to take\n> the simpler approach myself...\n\nI'm inclined to agree...get rid of the 'hard coded' max, make it a\nsettable option on run time, and 'reserve the semaphores' on startup...\n\nMarc G. 
Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 30 Jan 1999 21:52:42 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Reducing sema usage (was Postmaster dies with many\n\tchild processes)" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> A further thought: we could leave the semaphore management as-is,\n>> and instead try to make running out of semaphores a less catastrophic\n>> failure.\n\n> If they asked for 64 backends, we better be able go give them to them,\n> and not crash or fail under a load. 64 semaphores is nothing.\n\nThat argument would be pretty convincing if pre-grabbing the semaphores\nwas sufficient to ensure we could start N backends, but of course it's\nnot sufficient. The system could also run out of processes or file\ndescriptors, and I doubt that it's reasonable to grab all of those\ninstantly at postmaster startup.\n\nThe consensus seems clear not to go for the complex solution I described\nat first. But I'm still vacillating whether to do pre-reservation of\nsemaphores or just fix the postmaster to reject a connection cleanly if\nno more can be gotten. An advantage of the latter is that it would more\nreadily support on-the-fly changes of the max backend limit. (Which I\nam *not* proposing to support now; I only plan to make it settable at\npostmaster startup; but someday we might want to change it on the fly.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 31 Jan 1999 13:00:23 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Reducing sema usage (was Postmaster dies with many\n\tchild processes)" }, { "msg_contents": "Tom Lane wrote:\n >Bruce Momjian <[email protected]> writes:\n >> If they asked for 64 backends, we better be able go give them to them,\n >> and not crash or fail under a load. 64 semaphores is nothing.\n >\n >That argument would be pretty convincing if pre-grabbing the semaphores\n >was sufficient to ensure we could start N backends, but of course it's\n >not sufficient. The system could also run out of processes or file\n >descriptors, and I doubt that it's reasonable to grab all of those\n >instantly at postmaster startup.\n \nThe major problem at the moment is not that a new backend fails, but\nthat it brings down everything else with it. How about having a new\nbackend set a one-byte flag in shared memory when it has\nfinished setting itself up? 
as long as the flag is unset, the\nbackend is still starting itself up, and a failure will not require\nother backends to be brought down.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"Jesus saith unto him, I am the way, the truth, and the\n life; no man cometh unto the Father, but by me.\" \n John 14:6 \n\n\n", "msg_date": "Sun, 31 Jan 1999 21:33:35 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Reducing sema usage (was Postmaster dies with many\n\tchild processes)" }, { "msg_contents": "\"Oliver Elphick\" <[email protected]> writes:\n> The major problem at the moment is not that a new backend fails, but\n> that it brings down everything else with it.\n\nAgreed.\n\n> How about having a new backend set a one-byte flag in shared memory\n> when it has finished setting itself up? as long as the flag is unset,\n> the backend is still starting itself up, and a failure will not\n> require other backends to be brought down.\n\nNot much win to be had there, I suspect. The main problem is that as\nsoon as a new backend starts altering shared memory, you have potential\ncorruption issues to worry about if it goes down. And there's not\nreally very much the new backend can do before it alters shared memory.\nIn fact, it can't do much of *anything* until it's made an entry for\nitself in the lock manager's PROC array, because it cannot find out\nanything interesting without locking shared structures.\n\nHmm. If that's true, then the failure to get a sema would occur very\nearly in the new backend's lifetime, before it's had a chance to create\nany trouble. Maybe the very easiest solution to the sema issue is to\nmake the new backend send a failure report to its client and then\nexit(0) instead of exit(1), so that the postmaster considers it a clean\nexit rather than a crash...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 31 Jan 1999 18:02:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Reducing sema usage (was Postmaster dies with many\n\tchild processes)" }, { "msg_contents": "Tom Lane wrote:\n> \n> I said:\n> > Another thing we ought to look at is changing the use of semaphores so\n> > that Postgres uses a fixed number of semaphores, not a number that\n> > increases as more and more backends are started. Kernels are\n> > traditionally configured with very low limits for the SysV IPC\n> > resources, so having a big appetite for semaphores is a Bad Thing.\n> \n...\n> \n> Any thoughts about which way to jump? I'm sort of inclined to take\n> the simpler approach myself...\n\nCould we use sigpause (or something like this) to block\nand some signal to wake up?\n\nVadim\n", "msg_date": "Mon, 01 Feb 1999 09:42:57 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Reducing sema usage (was Postmaster dies with many\n\tchild processes)" }, { "msg_contents": "> Hmm. If that's true, then the failure to get a sema would occur very\n> early in the new backend's lifetime, before it's had a chance to \n> create any trouble. Maybe the very easiest solution to the sema issue \n> is to make the new backend send a failure report to its client and \n> then exit(0) instead of exit(1), so that the postmaster considers it a \n> clean exit rather than a crash...\n\nSounds like the cleanest solution too. 
If it pans out, I like it...\n\n - Tom\n", "msg_date": "Mon, 01 Feb 1999 12:46:21 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Reducing sema usage (was Postmaster dies with many\n\tchild processes)" }, { "msg_contents": "Oliver Elphick ha scritto:\n\n> Tom Lane wrote:\n> >Thomas Metz <[email protected]> writes:\n> >> SELECT DISTINCT ON id id, name FROM test ORDER BY name;\n> >> [doesn't work as expected]\n> >\n> >There have been related discussions before on pg-hackers mail list;\n> >you might care to check the list archives. The conclusion I recall\n> >is that it's not real clear how the combination of SELECT DISTINCT\n> >on one column and ORDER BY on another *should* work. Postgres'\n> >current behavior is clearly wrong IMHO, but there isn't a unique\n> >definition of right behavior, because it's not clear which tuples\n> >should get selected for the sort.\n> >\n> >This \"SELECT DISTINCT ON attribute\" option strikes me as even more\n> >bogus. Where did we get that from --- is it in the SQL92 standard?\n>\n> I looked through the standard yesterday and couldn't find it. It doesn't\n> seem to be a useful extension, since it does nothing that you can't do\n> with GROUP BY and seems much less well defined. For the moment I have\n> added a brief description to the reference documentation for SELECT.\n>\n> >If you SELECT DISTINCT on a subset of the attributes to be returned,\n> >then there's no unique definition of which values get returned in the\n> >other columns. In Thomas' example:\n> ...\n> >Any of these are \"DISTINCT ON id\", but it's purely a matter of\n> >happenstance table order and unspecified implementation choices which\n> >one will appear. Do we really have (or want) a statement with\n> >inherently undefined behavior?\n>\n> We have it; I suggest we don't want it!\n\nYes, seems that SELECT DISTINCT ON is not part of SQL92 but it is very\ninteresting and I think it is something missing to Standard.\nI don't know how to do the following, if we take off DISTINCT ON from\nPostgreSQL:\n\ndb=> select distinct cognome, nome,via from membri where cap = '41010';\ncognome|nome |via\n-------+----------------+--------------------------\nFIORANI|ELISABETTA |VIA PRETI PARTIGIANI, 63\nFIORANI|GASTONE |VIA PRETI PARTIGIANI, 63\nFIORANI|MATTIA |VIA PRETI PARTIGIANI, 63\nFIORANI|SIMONE |VIA PRETI PARTIGIANI, 63\nGOZZI |LILIANA |VIA MAGNAGHI, 39\nGOZZI |MATTEO |VIA MAGNAGHI, 39\nRUSSO |DAVIDE |STRADA CORLETTO SUD, 194/1\nRUSSO |ELENA TERESA |STRADA CORLETTO SUD, 194/1\nRUSSO |FORTUNATO |STRADA CORLETTO SUD, 194/1\nRUSSO |MAURIZIO ANTONIO|STRADA CORLETTO SUD, 194/1\n(10 rows)\n\ndb=> select distinct on cognome cognome, nome,via from membri where cap =\n'41010';\ncognome|nome |via\n-------+----------------+--------------------------\nFIORANI|GASTONE |VIA PRETI PARTIGIANI, 63\nGOZZI |LILIANA |VIA MAGNAGHI, 39\nRUSSO |MAURIZIO ANTONIO|STRADA CORLETTO SUD, 194/1\n(3 rows)\n\n\n\n \nOliver Elphick ha scritto:\nTom Lane wrote:\n  >Thomas Metz <[email protected]> writes:\n  >> SELECT DISTINCT ON id id, name FROM test ORDER BY name;\n  >> [doesn't work as expected]\n  >\n  >There have been related discussions before on pg-hackers mail\nlist;\n  >you might care to check the list archives.  The conclusion\nI recall\n  >is that it's not real clear how the combination of SELECT DISTINCT\n  >on one column and ORDER BY on another *should* work. 
\nPostgres'\n  >current behavior is clearly wrong IMHO, but there isn't a unique\n  >definition of right behavior, because it's not clear which\ntuples\n  >should get selected for the sort.\n  >\n  >This \"SELECT DISTINCT ON attribute\" option strikes me as even\nmore\n  >bogus.  Where did we get that from --- is it in the SQL92\nstandard?\nI looked through the standard yesterday and couldn't find it. \nIt doesn't\nseem to be a useful extension, since it does nothing that you can't\ndo\nwith GROUP BY and seems much less well defined.  For the moment\nI have\nadded a brief description to the reference documentation for SELECT.\n  >If you SELECT DISTINCT on a subset of the attributes to be returned,\n  >then there's no unique definition of which values get returned\nin the\n  >other columns.  In Thomas' example:\n ...\n  >Any of these are \"DISTINCT ON id\", but it's purely a matter\nof\n  >happenstance table order and unspecified implementation choices\nwhich\n  >one will appear.  Do we really have (or want) a statement\nwith\n  >inherently undefined behavior?\nWe have it; I suggest we don't want it!\nYes, seems that SELECT DISTINCT ON is not part of SQL92 but it is very\ninteresting and I think it is something missing to Standard.\nI don't know how to do the following, if we take off DISTINCT ON\nfrom PostgreSQL:\ndb=> select distinct cognome, nome,via from membri where cap = '41010';\ncognome|nome           \n|via\n-------+----------------+--------------------------\nFIORANI|ELISABETTA      |VIA PRETI PARTIGIANI,\n63\nFIORANI|GASTONE        \n|VIA PRETI PARTIGIANI, 63\nFIORANI|MATTIA         \n|VIA PRETI PARTIGIANI, 63\nFIORANI|SIMONE         \n|VIA PRETI PARTIGIANI, 63\nGOZZI  |LILIANA        \n|VIA MAGNAGHI, 39\nGOZZI  |MATTEO         \n|VIA MAGNAGHI, 39\nRUSSO  |DAVIDE         \n|STRADA CORLETTO SUD, 194/1\nRUSSO  |ELENA TERESA    |STRADA CORLETTO SUD,\n194/1\nRUSSO  |FORTUNATO       |STRADA\nCORLETTO SUD, 194/1\nRUSSO  |MAURIZIO ANTONIO|STRADA CORLETTO SUD, 194/1\n(10 rows)\ndb=> select distinct on cognome cognome, nome,via from membri where\ncap = '41010';\ncognome|nome           \n|via\n-------+----------------+--------------------------\nFIORANI|GASTONE        \n|VIA PRETI PARTIGIANI, 63\nGOZZI  |LILIANA        \n|VIA MAGNAGHI, 39\nRUSSO  |MAURIZIO ANTONIO|STRADA CORLETTO SUD, 194/1\n(3 rows)", "msg_date": "Mon, 01 Feb 1999 15:57:52 +0100", "msg_from": "\"jose' soares\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Re: [HACKERS] Re: SELECT DISTINCT ON ... ORDER BY ..." 
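One way to get a single row per cognome without DISTINCT ON, sketched here against the same membri table used in the message above, is to group and pick an explicit representative value with min() or max() instead of relying on whichever row happens to be seen first:

    SELECT cognome, min(nome) AS nome, min(via) AS via
      FROM membri
     WHERE cap = '41010'
     GROUP BY cognome;

This is deterministic, although nome and via are then chosen independently and may come from different underlying rows.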
}, { "msg_contents": "\"jose' soares\" wrote:\n >Yes, seems that SELECT DISTINCT ON is not part of SQL92 but it is very\n >interesting and I think it is something missing to Standard.\n >I don't know how to do the following, if we take off DISTINCT ON from\n >PostgreSQL:\n >\n >db=> select distinct cognome, nome,via from membri where cap = '41010';\n >cognome|nome |via\n >-------+----------------+--------------------------\n >FIORANI|ELISABETTA |VIA PRETI PARTIGIANI, 63\n >FIORANI|GASTONE |VIA PRETI PARTIGIANI, 63\n >FIORANI|MATTIA |VIA PRETI PARTIGIANI, 63\n >FIORANI|SIMONE |VIA PRETI PARTIGIANI, 63\n >GOZZI |LILIANA |VIA MAGNAGHI, 39\n >GOZZI |MATTEO |VIA MAGNAGHI, 39\n >RUSSO |DAVIDE |STRADA CORLETTO SUD, 194/1\n >RUSSO |ELENA TERESA |STRADA CORLETTO SUD, 194/1\n >RUSSO |FORTUNATO |STRADA CORLETTO SUD, 194/1\n >RUSSO |MAURIZIO ANTONIO|STRADA CORLETTO SUD, 194/1\n >(10 rows)\n >\n >db=> select distinct on cognome cognome, nome,via from membri where cap =\n >'41010';\n >cognome|nome |via\n >-------+----------------+--------------------------\n >FIORANI|GASTONE |VIA PRETI PARTIGIANI, 63\n >GOZZI |LILIANA |VIA MAGNAGHI, 39\n >RUSSO |MAURIZIO ANTONIO|STRADA CORLETTO SUD, 194/1\n >(3 rows)\n\nThis gives the same results:\n\njunk=> select cognome, nome, via from membri where cap = '41010' group by cognome;\ncognome|nome |via \n-------+----------+--------------------------\nFIORANI|ELISABETTA|VIA PRETI PARTIGIANI, 63 \nGOZZI |LILIANA |VIA MAGNAGHI, 39 \nRUSSO |DAVIDE |STRADA CORLETTO SUD, 194/1\n(3 rows)\n\nThe particular values returned for nome and via are different from yours\nbut the same as I get using DISTINCT ON. Since nome and via are not\naggregated, the value returned for those columns is unpredictable and \ntherefore not useful. I think that it is actually a bug that you are\nable to name them at all.\n\nIn fact, if you add an aggregate column to the column list, GROUP BY does\nnot then allow columns that are neither grouped nor aggregated:\n\njunk=> select cognome, nome,via, max(age) from membri where cap = '41010' group by cognome;\nERROR: parser: illegal use of aggregates or non-group column in target list\njunk=> select cognome, max(age) from membri where cap = '41010' group by cognome;\ncognome|max\n-------+---\nFIORANI| 54\nGOZZI | 76\nRUSSO | 45\n(3 rows)\n\nwhich definitely suggests that it is a bug to allow such fields when no\naggregate is specified.\n\nDISTINCT ON fails with an aggregate, even if no other columns are named:\n\njunk=> select distinct on cognome cognome, max(age) from membri where cap = '41010';\nERROR: parser: illegal use of aggregates or non-group column in target list\n\nwhich makes it even less useful!\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"And be not conformed to this world; but be ye \n transformed by the renewing of your mind, that ye may \n prove what is that good, and acceptable, and perfect, \n will of God.\" Romans 12:2 \n\n\n", "msg_date": "Mon, 01 Feb 1999 16:40:37 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [SQL] Re: [HACKERS] Re: SELECT DISTINCT ON ... ORDER BY ... 
" }, { "msg_contents": "Oliver Elphick ha scritto:\n\n> \"jose' soares\" wrote:\n> >Yes, seems that SELECT DISTINCT ON is not part of SQL92 but it is very\n> >interesting and I think it is something missing to Standard.\n> >I don't know how to do the following, if we take off DISTINCT ON from\n> >PostgreSQL:\n> >\n> >db=> select distinct cognome, nome,via from membri where cap = '41010';\n> >cognome|nome |via\n> >-------+----------------+--------------------------\n> >FIORANI|ELISABETTA |VIA PRETI PARTIGIANI, 63\n> >FIORANI|GASTONE |VIA PRETI PARTIGIANI, 63\n> >FIORANI|MATTIA |VIA PRETI PARTIGIANI, 63\n> >FIORANI|SIMONE |VIA PRETI PARTIGIANI, 63\n> >GOZZI |LILIANA |VIA MAGNAGHI, 39\n> >GOZZI |MATTEO |VIA MAGNAGHI, 39\n> >RUSSO |DAVIDE |STRADA CORLETTO SUD, 194/1\n> >RUSSO |ELENA TERESA |STRADA CORLETTO SUD, 194/1\n> >RUSSO |FORTUNATO |STRADA CORLETTO SUD, 194/1\n> >RUSSO |MAURIZIO ANTONIO|STRADA CORLETTO SUD, 194/1\n> >(10 rows)\n> >\n> >db=> select distinct on cognome cognome, nome,via from membri where cap =\n> >'41010';\n> >cognome|nome |via\n> >-------+----------------+--------------------------\n> >FIORANI|GASTONE |VIA PRETI PARTIGIANI, 63\n> >GOZZI |LILIANA |VIA MAGNAGHI, 39\n> >RUSSO |MAURIZIO ANTONIO|STRADA CORLETTO SUD, 194/1\n> >(3 rows)\n>\n> This gives the same results:\n>\n> junk=> select cognome, nome, via from membri where cap = '41010' group by cognome;\n> cognome|nome |via\n> -------+----------+--------------------------\n> FIORANI|ELISABETTA|VIA PRETI PARTIGIANI, 63\n> GOZZI |LILIANA |VIA MAGNAGHI, 39\n> RUSSO |DAVIDE |STRADA CORLETTO SUD, 194/1\n\nThis is very interesting and useful, I thought it wasn't possible. Seems that standard allows\nonly the \"order by\" column(s)\nand the aggregate function(s) on target list.\nI tried the same query on Informix, also on Ocelot but it gives me an error.\n\nOn Informix and Ocelot\nqueries like:\n select cognome, max(age) from membri where cap = '41010' group by cognome;\nare allowed.\nbut\nqueries like:\n select cognome, nome, via from membri where cap = '41010' group by cognome;\naren't allowed.\n\n-Jose'-\n\n\n\n \nOliver Elphick ha scritto:\n\"jose' soares\" wrote:\n  >Yes, seems that SELECT DISTINCT ON is not part of SQL92\nbut it is very\n  >interesting and I think it is something missing to Standard.\n  >I don't know how to do the following, if we take off DISTINCT\nON from\n  >PostgreSQL:\n  >\n  >db=> select distinct cognome, nome,via from membri where\ncap = '41010';\n  >cognome|nome           \n|via\n  >-------+----------------+--------------------------\n  >FIORANI|ELISABETTA      |VIA PRETI\nPARTIGIANI, 63\n  >FIORANI|GASTONE        \n|VIA PRETI PARTIGIANI, 63\n  >FIORANI|MATTIA         \n|VIA PRETI PARTIGIANI, 63\n  >FIORANI|SIMONE         \n|VIA PRETI PARTIGIANI, 63\n  >GOZZI  |LILIANA        \n|VIA MAGNAGHI, 39\n  >GOZZI  |MATTEO         \n|VIA MAGNAGHI, 39\n  >RUSSO  |DAVIDE         \n|STRADA CORLETTO SUD, 194/1\n  >RUSSO  |ELENA TERESA    |STRADA CORLETTO\nSUD, 194/1\n  >RUSSO  |FORTUNATO      \n|STRADA CORLETTO SUD, 194/1\n  >RUSSO  |MAURIZIO ANTONIO|STRADA CORLETTO SUD, 194/1\n  >(10 rows)\n  >\n  >db=> select distinct on cognome cognome, nome,via from\nmembri where cap =\n  >'41010';\n  >cognome|nome           \n|via\n  >-------+----------------+--------------------------\n  >FIORANI|GASTONE        \n|VIA PRETI PARTIGIANI, 63\n  >GOZZI  |LILIANA        \n|VIA MAGNAGHI, 39\n  >RUSSO  |MAURIZIO ANTONIO|STRADA CORLETTO SUD, 194/1\n  >(3 rows)\nThis gives the same results:\njunk=> select cognome, nome, via from 
membri where cap = '41010'\ngroup by cognome;\ncognome|nome      |via\n-------+----------+--------------------------\nFIORANI|ELISABETTA|VIA PRETI PARTIGIANI, 63\nGOZZI  |LILIANA   |VIA MAGNAGHI, 39\nRUSSO  |DAVIDE    |STRADA CORLETTO SUD, 194/1\nThis is very interesting and useful, I thought it wasn't possible. Seems\nthat standard allows only the \"order by\" column(s)\nand the aggregate function(s) on target list.\nI tried the same query on Informix, also on Ocelot but it gives me\nan error.\nOn Informix and Ocelot\nqueries like:\n   select  cognome, max(age) from membri where cap\n= '41010' group by cognome;\nare allowed.\nbut\nqueries like:\n   select cognome, nome, via from membri where cap =\n'41010' group by cognome;\naren't allowed.\n-Jose'-", "msg_date": "Tue, 02 Feb 1999 16:25:19 +0100", "msg_from": "\"jose' soares\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Re: [HACKERS] Re: SELECT DISTINCT ON ... ORDER BY ..." }, { "msg_contents": "At 17:25 +0200 on 02/02/1999, jose' soares wrote:\n\n\n> This gives the same results:\n>\n> junk=> select cognome, nome, via from membri where cap = '41010'\n> group by cognome;\n> cognome|nome����� |via\n> -------+----------+--------------------------\n> FIORANI|ELISABETTA|VIA PRETI PARTIGIANI, 63\n> GOZZI� |LILIANA�� |VIA MAGNAGHI, 39\n> RUSSO� |DAVIDE��� |STRADA CORLETTO SUD, 194/1\n>\n> This is very interesting and useful, I thought it wasn't possible. Seems\n>that standard allows only the \"order by\" column(s)\n> and the aggregate function(s) on target list.\n> I tried the same query on Informix, also on Ocelot but it gives me an error.\n\nAnd with good reason, too. The above query has the same drawback as the\n\"select distinct on\", which is: it does not fully specify which value\nshould be selected for the \"nome\" and \"via\" fields.\n\nThus, running this same query on a table that has the same data but was,\nfor example, filled in a different order, gives a different result. That's\nbad, because order should not make a difference for output. Tables are\ntaken to be unordered sets.\n\nIf you want to have a representative of the \"nome\" and \"via\" fields, and it\ndoesn't matter which representative, then min(nome) or max(nome) should do\nthe trick. And this query (select cognome, min(nome), min(via)... group by\ncognome) should give you the same result on all databases, no matter which\nrows were inserted first.\n\nIf it was up to me, I wouldn't use the above form, and frankly, I am\nsurprised the Postgres allows this.\n\nHerouth\n\n--\nHerouth Maoz, Internet developer.\nOpen University of Israel - Telem project\nhttp://telem.openu.ac.il/~herutma\n\n\n", "msg_date": "Tue, 2 Feb 1999 17:57:08 +0200", "msg_from": "Herouth Maoz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Re: [HACKERS] Re: SELECT DISTINCT ON ... ORDER BY ..." }, { "msg_contents": "I get this error when I try to create a function using plpgsql:\n\nERROR: Unrecognized language specified in a CREATE FUNCTION: 'plpgsql'.\nRecognized languages are sql, C, internal and the created procedural\nlanguages.\n\nDo I need to specify another flag when I compile Postgresql?\n\nAdam\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nSi hoc legere scis nimium eruditionis habes.\n\n\n", "msg_date": "Mon, 10 May 1999 16:32:09 -0400", "msg_from": "\"Adam H. 
Pendleton\" <[email protected]>", "msg_from_op": false, "msg_subject": "plpgsql error" }, { "msg_contents": "Edit: /usr/src/pgsql/postgresql-6.4.2/src/pl/plpgsql/src/mklang.sql\n\nChange: as '${exec_prefix}/lib/plpgsql.so'\nto: as '/usr/local/pgsql/lib/plpgsql.so'\n\nThen: psql your_db < mklang.sql\n\nThis should really be part of the documentation as I wasted two days on\nthis same problem a few weeks back.\n\nHave fun\n\nAndy\n\nOn Mon, 10 May 1999, Adam H. Pendleton wrote:\n\n> I get this error when I try to create a function using plpgsql:\n> \n> ERROR: Unrecognized language specified in a CREATE FUNCTION: 'plpgsql'.\n> Recognized languages are sql, C, internal and the created procedural\n> languages.\n> \n> Do I need to specify another flag when I compile Postgresql?\n> \n> Adam\n> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n> Si hoc legere scis nimium eruditionis habes.\n> \n> \n> \n\n\n\n", "msg_date": "Mon, 10 May 1999 16:05:22 -0500 (CDT)", "msg_from": "Andy Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] plpgsql error" }, { "msg_contents": "\nYes, this clearly looks broken. In mklang.sql.in, @libdir@ is replaced\nwith ${exec_prefix} in mklang.sql. Seems it should be something else.\n\n\n> Edit: /usr/src/pgsql/postgresql-6.4.2/src/pl/plpgsql/src/mklang.sql\n> \n> Change: as '${exec_prefix}/lib/plpgsql.so'\n> to: as '/usr/local/pgsql/lib/plpgsql.so'\n> \n> Then: psql your_db < mklang.sql\n> \n> This should really be part of the documentation as I wasted two days on\n> this same problem a few weeks back.\n> \n> Have fun\n> \n> Andy\n> \n> On Mon, 10 May 1999, Adam H. Pendleton wrote:\n> \n> > I get this error when I try to create a function using plpgsql:\n> > \n> > ERROR: Unrecognized language specified in a CREATE FUNCTION: 'plpgsql'.\n> > Recognized languages are sql, C, internal and the created procedural\n> > languages.\n> > \n> > Do I need to specify another flag when I compile Postgresql?\n> > \n> > Adam\n> > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n> > Si hoc legere scis nimium eruditionis habes.\n> > \n> > \n> > \n> \n> \n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 May 1999 17:38:16 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] plpgsql error" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Yes, this clearly looks broken. In mklang.sql.in, @libdir@ is replaced\n> with ${exec_prefix} in mklang.sql. Seems it should be something else.\n\nOh ... OK, that looks like a garden-variety configure bug (one too many\nlevels of quoting, or some such). I can look at this if no one else\nbeats me to it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 May 1999 17:55:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [SQL] plpgsql error " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Yes, this clearly looks broken. In mklang.sql.in, @libdir@ is replaced\n> > with ${exec_prefix} in mklang.sql. Seems it should be something else.\n> \n> Oh ... OK, that looks like a garden-variety configure bug (one too many\n> levels of quoting, or some such). I can look at this if no one else\n> beats me to it.\n\nIt replaces @libdir@ with ${exec_prefix}/lib. 
It appears the\nconfigure code expects the replacement to occour in a Makefile, so\n${exec_prefix} can be replaced in Makefile.global. However,\n$exec_prefix is not in Makefile.global, so maybe it is just a problem\nwith configure that $exec_prefix is replace before @libdir@, and\nlibdir's that contain exec_prefix have a problem.\n\nHowever, it appears the default value of libdir contains exec_prefix, so\nyou would think they would have found such a problem themselves in\ntesting.\n\nI am confused.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 May 1999 20:22:20 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [SQL] plpgsql error" }, { "msg_contents": ">\n> Edit: /usr/src/pgsql/postgresql-6.4.2/src/pl/plpgsql/src/mklang.sql\n>\n> Change: as '${exec_prefix}/lib/plpgsql.so'\n> to: as '/usr/local/pgsql/lib/plpgsql.so'\n>\n> Then: psql your_db < mklang.sql\n>\n> This should really be part of the documentation as I wasted two days on\n> this same problem a few weeks back.\n\n And this became IMHO an FAQ. Should we avoid it by installing\n PL/pgSQL and PL/Tcl (if built) by default in the template1\n database during intidb? Installing it in template1 would\n have the effect that it will be available after every\n createdb.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Tue, 11 May 1999 09:35:27 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [SQL] plpgsql error" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Oh ... OK, that looks like a garden-variety configure bug (one too many\n>> levels of quoting, or some such). I can look at this if no one else\n>> beats me to it.\n\n> It replaces @libdir@ with ${exec_prefix}/lib. It appears the\n> configure code expects the replacement to occour in a Makefile, so\n> ${exec_prefix} can be replaced in Makefile.global. However,\n> $exec_prefix is not in Makefile.global, so maybe it is just a problem\n> with configure that $exec_prefix is replace before @libdir@, and\n> libdir's that contain exec_prefix have a problem.\n\nconfigure is designed to generate makefiles that look like this:\n\n\texec_prefix=/usr/local\n\tbindir=${exec_prefix}/bin\n\tlibdir=${exec_prefix}/lib\n\nwith the notion that this will simplify after-the-fact hand tweaking\nof install destinations in the makefile if you feed a need to do that.\nSo that's why libdir's default definition looks the way it does.\n\nNow, that works OK in makefiles and in shell scripts, where the\nreference to the exec_prefix variable can get expanded when the file\nis read. But it falls down for mklang.sql, where the value of libdir\nis substituted into an SQL command --- Postgres ain't gonna expand the\nvariable reference.\n\nWhat we need is to substitute a \"fully expanded\" version of libdir into\nthis file, instead of a version that might depend on other variables.\n\nAny shell-scripting gurus on the list? 
I thought this would be an easy\nfix, but I'm having some difficulty getting the configure script to\nproduce a fully-expanded value for libdir. Given a shell variable that\nmay contain $-references to other variables, the requirement is to\nassign to a new variable an expanded value containing no $-references.\nI tried\n\texpanded_libdir=\"$libdir\"\nbut that just gets you an exact copy, no recursive expansion. A few\nother ideas didn't work either; the Bourne shell doesn't seem to want\nto re-expand text it's already expanded. Suggestions?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 May 1999 10:18:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [SQL] plpgsql error " }, { "msg_contents": "> >\n> > Edit: /usr/src/pgsql/postgresql-6.4.2/src/pl/plpgsql/src/mklang.sql\n> >\n> > Change: as '${exec_prefix}/lib/plpgsql.so'\n> > to: as '/usr/local/pgsql/lib/plpgsql.so'\n> >\n> > Then: psql your_db < mklang.sql\n> >\n> > This should really be part of the documentation as I wasted two days on\n> > this same problem a few weeks back.\n> \n> And this became IMHO an FAQ. Should we avoid it by installing\n> PL/pgSQL and PL/Tcl (if built) by default in the template1\n> database during intidb? Installing it in template1 would\n> have the effect that it will be available after every\n> createdb.\n\nSure, why not?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 May 1999 10:56:42 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] plpgsql error" }, { "msg_contents": "> What we need is to substitute a \"fully expanded\" version of libdir into\n> this file, instead of a version that might depend on other variables.\n> \n> Any shell-scripting gurus on the list? I thought this would be an easy\n> fix, but I'm having some difficulty getting the configure script to\n> produce a fully-expanded value for libdir. Given a shell variable that\n> may contain $-references to other variables, the requirement is to\n> assign to a new variable an expanded value containing no $-references.\n> I tried\n\n\n\n> \texpanded_libdir=\"$libdir\"\n> but that just gets you an exact copy, no recursive expansion. A few\n> other ideas didn't work either; the Bourne shell doesn't seem to want\n> to re-expand text it's already expanded. Suggestions?\n\nPlease try:\n\n\texpanded_libdir=\"`eval echo $libdir`\"\n\nThen I assume you have to do a:\n\nsed \"s/@libdir@/$expanded_libdir/g\" <mklang.sql.template >mklang.sql\n\nI can take it if you commit what you have. The one item I am not sure\nabout is having it generate mklang.sql when the configure values change.\nWhen they run configure, I think we have to generate a new file, so the\nMakefile can see the change in datestamp and generate a new mklang.sql. \nSounds like we need mklang.template.in, mklang.template, and mklang.sql\nand a rule in the makefile that mklang.sql depends on mklang.template.\n\nYou can complete it, or I will take a crack at it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 May 1999 11:21:48 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [SQL] plpgsql error" }, { "msg_contents": "Tom Lane wrote:\n >Any shell-scripting gurus on the list? I thought this would be an easy\n >fix, but I'm having some difficulty getting the configure script to\n >produce a fully-expanded value for libdir. Given a shell variable that\n >may contain $-references to other variables, the requirement is to\n >assign to a new variable an expanded value containing no $-references.\n >I tried\n >\texpanded_libdir=\"$libdir\"\n >but that just gets you an exact copy, no recursive expansion. A few\n >other ideas didn't work either; the Bourne shell doesn't seem to want\n >to re-expand text it's already expanded. Suggestions?\n \nUse eval:\n\n$ v1=DF_\\$EIFFEL_GTK \n$ echo $v1\nDF_$EIFFEL_GTK\n$ v2=$v1\n$ echo $v2\nDF_$EIFFEL_GTK\n$ eval v2=$v1\n$ echo $v2\nDF_/usr/lib/eiffel-gtk\n$ \n\n\nbut if it gets too complicated, you might have to change to Perl\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"Search me, O God, and know my heart; try me, and know \n my thoughts. And see if there be any wicked way in me,\n and lead me in the way everlasting.\" \n Psalms 139:23,24 \n\n\n", "msg_date": "Tue, 11 May 1999 16:52:24 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [SQL] plpgsql error " }, { "msg_contents": "\"Oliver Elphick\" <[email protected]> writes:\n> Use eval:\n> $ eval v2=$v1\n> $ echo $v2\n> DF_/usr/lib/eiffel-gtk\n\nLooks good. Thanks for the clue.\n\n> but if it gets too complicated, you might have to change to Perl\n\nIf configure depended on Perl, you couldn't build Postgres at all\nwithout having a working Perl installation... somehow I doubt that\nwould go over well...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 May 1999 14:08:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [SQL] plpgsql error " }, { "msg_contents": " Any shell-scripting gurus on the list? I thought this would be an easy\n fix, but I'm having some difficulty getting the configure script to\n produce a fully-expanded value for libdir. Given a shell variable that\n may contain $-references to other variables, the requirement is to\n assign to a new variable an expanded value containing no $-references.\n I tried\n\t expanded_libdir=\"$libdir\"\n but that just gets you an exact copy, no recursive expansion. A few\n other ideas didn't work either; the Bourne shell doesn't seem to want\n to re-expand text it's already expanded. Suggestions?\n\nIsn't the correct solution to have the Makefile contain a rule that\ncreates the file from a template (e.g., with sed -e\n's/@xxx@/${xxx}/g')? That way make resolves the variable references\nand you needn't worry about it. You can have the rule depend on\nsomething like Makefile or Makefile.global or wherever the relevant\nvariables are set so that if local tweaks are made the files get\nremade automatically.\n\nCheers,\nBrook\n\n\n", "msg_date": "Tue, 11 May 1999 15:59:34 -0600 (MDT)", "msg_from": "Brook Milligan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [SQL] plpgsql error" }, { "msg_contents": "> Any shell-scripting gurus on the list? 
I thought this would be an easy\n> fix, but I'm having some difficulty getting the configure script to\n> produce a fully-expanded value for libdir. Given a shell variable that\n> may contain $-references to other variables, the requirement is to\n> assign to a new variable an expanded value containing no $-references.\n> I tried\n> \t expanded_libdir=\"$libdir\"\n> but that just gets you an exact copy, no recursive expansion. A few\n> other ideas didn't work either; the Bourne shell doesn't seem to want\n> to re-expand text it's already expanded. Suggestions?\n> \n> Isn't the correct solution to have the Makefile contain a rule that\n> creates the file from a template (e.g., with sed -e\n> 's/@xxx@/${xxx}/g')? That way make resolves the variable references\n> and you needn't worry about it. You can have the rule depend on\n> something like Makefile or Makefile.global or wherever the relevant\n> variables are set so that if local tweaks are made the files get\n> remade automatically.\n\nYes, that is what we were saying.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 May 1999 18:04:40 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [SQL] plpgsql error" }, { "msg_contents": ">> Isn't the correct solution to have the Makefile contain a rule that\n\n> Yes, that is what we were saying.\n\nThe problem is simply a matter of failing to expand indirect references\nin that substitution process. I just committed a fix.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 May 1999 19:01:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [SQL] plpgsql error " }, { "msg_contents": "Brook Milligan <[email protected]> writes:\n> Isn't the correct solution to have the Makefile contain a rule that\n> creates the file from a template (e.g., with sed -e\n> 's/@xxx@/${xxx}/g')? That way make resolves the variable references\n> and you needn't worry about it.\n\n(after further thought...) Oh, right, I see what you're saying: don't\ngenerate mklang.sql in configure at all, but let pl/plpgsql/src/Makefile\nbe responsible for it. Yeah, that'd be a cleaner solution. However,\nwhat I just committed works ;-). If you feel like improving it, be\nmy guest; I have other items on the to-do list...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 May 1999 19:26:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [SQL] plpgsql error " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> Oh ... OK, that looks like a garden-variety configure bug (one too many\n> >> levels of quoting, or some such). I can look at this if no one else\n> >> beats me to it.\n> \n> > It replaces @libdir@ with ${exec_prefix}/lib. It appears the\n> > configure code expects the replacement to occour in a Makefile, so\n> > ${exec_prefix} can be replaced in Makefile.global. 
However,\n> > $exec_prefix is not in Makefile.global, so maybe it is just a problem\n> > with configure that $exec_prefix is replace before @libdir@, and\n> > libdir's that contain exec_prefix have a problem.\n> \n> configure is designed to generate makefiles that look like this:\n> \n> \texec_prefix=/usr/local\n> \tbindir=${exec_prefix}/bin\n> \tlibdir=${exec_prefix}/lib\n> \n> with the notion that this will simplify after-the-fact hand tweaking\n> of install destinations in the makefile if you feed a need to do that.\n> So that's why libdir's default definition looks the way it does.\n\nTom, I like your fix in configure.in better than adding a silly\nMakefile�rule. Yours is much cleaner. You just created an\nexpanded_libdir in configure.in and let that be expanded in the *.in\nfiles. Great.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 12 May 1999 00:39:11 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [SQL] plpgsql error" }, { "msg_contents": ">\n> Brook Milligan <[email protected]> writes:\n> > Isn't the correct solution to have the Makefile contain a rule that\n> > creates the file from a template (e.g., with sed -e\n> > 's/@xxx@/${xxx}/g')? That way make resolves the variable references\n> > and you needn't worry about it.\n>\n> (after further thought...) Oh, right, I see what you're saying: don't\n> generate mklang.sql in configure at all, but let pl/plpgsql/src/Makefile\n> be responsible for it. Yeah, that'd be a cleaner solution. However,\n> what I just committed works ;-). If you feel like improving it, be\n> my guest; I have other items on the to-do list...\n\n I've just committed a little change to initdb and it's\n Makefile. The initdb Makefile now expands __DLSUFFIX__ into\n it and initdb uses $PGLIB/plpgsql__DLSUFFIX__ to test if it\n is there and then runs the appropriate queries against\n template1. Same for PL/Tcl.\n\n If anyone agrees we can get rid of these mklang.sql scripts\n totally.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Wed, 12 May 1999 12:37:24 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [SQL] plpgsql error" }, { "msg_contents": "> >\n> > Brook Milligan <[email protected]> writes:\n> > > Isn't the correct solution to have the Makefile contain a rule that\n> > > creates the file from a template (e.g., with sed -e\n> > > 's/@xxx@/${xxx}/g')? That way make resolves the variable references\n> > > and you needn't worry about it.\n> >\n> > (after further thought...) Oh, right, I see what you're saying: don't\n> > generate mklang.sql in configure at all, but let pl/plpgsql/src/Makefile\n> > be responsible for it. Yeah, that'd be a cleaner solution. However,\n> > what I just committed works ;-). If you feel like improving it, be\n> > my guest; I have other items on the to-do list...\n> \n> I've just committed a little change to initdb and it's\n> Makefile. 
The initdb Makefile now expands __DLSUFFIX__ into\n> it and initdb uses $PGLIB/plpgsql__DLSUFFIX__ to test if it\n> is there and then runs the appropriate queries against\n> template1. Same for PL/Tcl.\n> \n> If anyone agrees we can get rid of these mklang.sql scripts\n> totally.\n\nSure.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 12 May 1999 08:50:17 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [SQL] plpgsql error" }, { "msg_contents": " Tom, I like your fix in configure.in better than adding a silly\n Makefile�rule. Yours is much cleaner. You just created an\n expanded_libdir in configure.in and let that be expanded in the *.in\n files. Great.\n\nThe problem with solutions like this is that we end up proliferating\nexpanded and unexpanded versions of the same variables; hence,\nmaintenance problems with coordinating the various uses of this\ninformation.\n\nWe've already had this discussion with some of the other cases (the\nperl or tcl interfaces come to mind but I'm not sure). It would be\nunfortunate to make configure that much more complex when we don't\nreally need to at all. It seems we should be using Makefile rules to\ndo what they are best at: automatically generating files based on\nknown rules that depend on a specific database of local configuration\noptions; configure's role should simply be to create that database.\n\nCheers,\nBrook\n\n\n", "msg_date": "Wed, 12 May 1999 08:08:27 -0600 (MDT)", "msg_from": "Brook Milligan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [SQL] plpgsql error" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Tom, I like your fix in configure.in better than adding a silly\n> Makefile rule. Yours is much cleaner. You just created an\n> expanded_libdir in configure.in and let that be expanded in the *.in\n> files.\n\nNot really --- did you see what I had to do to get the thing expanded\nproperly? Ick... Brook's approach would be cleaner. But I don't want\nto spend more time on it now.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 12 May 1999 10:12:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [SQL] plpgsql error " }, { "msg_contents": "[email protected] (Jan Wieck) writes:\n> I've just committed a little change to initdb and it's\n> Makefile. The initdb Makefile now expands __DLSUFFIX__ into\n> it and initdb uses $PGLIB/plpgsql__DLSUFFIX__ to test if it\n> is there and then runs the appropriate queries against\n> template1. Same for PL/Tcl.\n> If anyone agrees we can get rid of these mklang.sql scripts\n> totally.\n\nWell, I hate to be a party-pooper, but I don't agree: I like having\nthe flexibility of installing plpgsql into just selected databases.\nIf it's automatically included into template1 then I no longer have\nany choice in the matter.\n\nPerhaps this could be driven by a configuration switch?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 12 May 1999 10:22:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [SQL] plpgsql error " }, { "msg_contents": ">\n> Bruce Momjian <[email protected]> writes:\n> > Tom, I like your fix in configure.in better than adding a silly\n> > Makefile rule. Yours is much cleaner. 
You just created an\n> > expanded_libdir in configure.in and let that be expanded in the *.in\n> > files.\n>\n> Not really --- did you see what I had to do to get the thing expanded\n> properly? Ick... Brook's approach would be cleaner. But I don't want\n> to spend more time on it now.\n\n And it's not required for PL/pgSQL or PL/Tcl any more. initdb\n now installs them in template1 (if their shared objects are\n installed in the libdir), so we can remove the mklang.sql\n scripts. So concentrate on your other items.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Wed, 12 May 1999 17:03:36 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [SQL] plpgsql error" }, { "msg_contents": ">\n> [email protected] (Jan Wieck) writes:\n> > I've just committed a little change to initdb and it's\n> > Makefile. The initdb Makefile now expands __DLSUFFIX__ into\n> > it and initdb uses $PGLIB/plpgsql__DLSUFFIX__ to test if it\n> > is there and then runs the appropriate queries against\n> > template1. Same for PL/Tcl.\n> > If anyone agrees we can get rid of these mklang.sql scripts\n> > totally.\n>\n> Well, I hate to be a party-pooper, but I don't agree: I like having\n> the flexibility of installing plpgsql into just selected databases.\n> If it's automatically included into template1 then I no longer have\n> any choice in the matter.\n>\n> Perhaps this could be driven by a configuration switch?\n\n Or maybe some switch to initdb and createdb?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Wed, 12 May 1999 17:06:09 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [SQL] plpgsql error" }, { "msg_contents": "[email protected] (Jan Wieck) writes:\n>> Perhaps this could be driven by a configuration switch?\n\n> Or maybe some switch to initdb and createdb?\n\nThat's a good idea; it would save having to propagate the value out of\nconfigure and into the places where it'd be needed.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 12 May 1999 11:20:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [SQL] plpgsql error " }, { "msg_contents": ">\n> [email protected] (Jan Wieck) writes:\n> >> Perhaps this could be driven by a configuration switch?\n>\n> > Or maybe some switch to initdb and createdb?\n>\n> That's a good idea; it would save having to propagate the value out of\n> configure and into the places where it'd be needed.\n\n I've thought a little more about that one and I don't like it\n anymore. It's a bad idea that enabling a language in such a\n looser-friendly way is only possible during db creation, not\n for existing databases.\n\n I'll remove that from initdb again and instead add two new\n utilities \"installpl\" and \"removepl\". 
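 Usage would presumably be something like\n\n\tinstallpl plpgsql mydb\n\tremovepl plpgsql mydb\n\n (just a sketch -- the names and argument order aren't final), i.e. do\n for an existing database what mklang.sql does now: create the call\n handler function and the language entry, or drop them again.\n\n 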
That way it's also\n possible to automate the process in scripts but it is as\n flexible as can.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Wed, 12 May 1999 17:58:16 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [SQL] plpgsql error" }, { "msg_contents": "Restoring a full dump from 6.4.2 into 6.5:\n\nWhen the owner of a database does not have usesuper privilege in pg_shadow,\nit is not possible to recreate items that were created with superuser\nprivilege, even if the superuser is running the script:\n\n======== example ==============\n\\connect template1\nselect datdba into table tmp_pg_shadow from pg_database where datname = \n'template1';\ndelete from pg_shadow where usesysid <> tmp_pg_shadow.datdba;\ndrop table tmp_pg_shadow;\ncopy pg_shadow from stdin;\n...\nuser1 1005 t t f t \\N\n\\.\n\\connect template1 user1\ncreate database morejunk;\n\\connect morejunk user1\n...\nCREATE FUNCTION \"plpgsql_call_handler\" ( ) RETURNS opaque AS \n'/usr/lib/postgresql/lib/plpgsql.so' LANGUAGE 'C';\n...\n\nResult:\nQUERY: CREATE FUNCTION \"plpgsql_call_handler\" ( ) RETURNS opaque AS \n'/usr/lib/postgresql/lib/plpgsql.so' LANGUAGE 'C';\nERROR: Only users with Postgres superuser privilege are permitted to create a \nfunction in the 'C' language. Others may use the 'sql' language or the \ncreated procedural languages.\n======================\n\nIt would seem that there should be some command that operates with \nsuperuser privilege, whatever the nominal state indicated by the data.\nSince the above script was being run by postgres, it should all have\nbeen capable of being executed.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"There is a way that seems right to a man, but in the \n end it leads to death.\" \n Proverbs 16:25 \n\n\n", "msg_date": "Tue, 08 Jun 1999 00:32:54 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Problem when reloading data from older version" }, { "msg_contents": "\"Oliver Elphick\" <[email protected]> writes:\n> Restoring a full dump from 6.4.2 into 6.5:\n> When the owner of a database does not have usesuper privilege in pg_shadow,\n> it is not possible to recreate items that were created with superuser\n> privilege, even if the superuser is running the script:\n\n> QUERY: CREATE FUNCTION \"plpgsql_call_handler\" ( ) RETURNS opaque AS \n> '/usr/lib/postgresql/lib/plpgsql.so' LANGUAGE 'C';\n> ERROR: Only users with Postgres superuser privilege are permitted to create\n> a function in the 'C' language. Others may use the 'sql' language or the \n> created procedural languages.\n\nHmm. This seems wrong; if the function was created by the superuser\nthen it should have proowner set to the superuser, and pg_dump looks\nlike it does the right thing about reconnecting as the function owner\n(assuming you used -z, which is now default but wasn't in 6.4.2...).\n\nIs it possible that \"plpgsql_call_handler\" was somehow marked as being\nowned by the database owner rather than the superuser? 
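(A quick, untested way to check: something like\n\n\tSELECT proname, usename FROM pg_proc, pg_user\n\tWHERE proowner = usesysid AND proname = 'plpgsql_call_handler';\n\nin the restored database should show who the handler actually belongs to.)\n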
If so, I'd think\nthat that is the real bug.\n\n> It would seem that there should be some command that operates with \n> superuser privilege, whatever the nominal state indicated by the data.\n\nI'd be worried about security holes if it's not designed very carefully...\n\n> Since the above script was being run by postgres, it should all have\n> been capable of being executed.\n\nI wonder whether we need a notion of \"effective\" and \"real\" user ID,\nsuch as most Unix systems have. Then it'd be possible for the system\nto know \"I may be creating objects on behalf of user X, but I really\nam the superuser\" and apply protection checks appropriately. This'd\nbe a much more elegant solution than \\connect for pg_dump scripts,\nsince the whole script would run in a single superuser session and just\ndo a SET VARIABLE or something to indicate which user would be the owner\nof created objects.\n\nHowever, that's not going to happen for 6.5. For a short-term fix, we\nneed to look at why pg_dump didn't reconnect as superuser before trying\nto create that C function.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 Jun 1999 20:13:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Problem when reloading data from older version " }, { "msg_contents": "Tom Lane wrote:\n >Hmm. This seems wrong; if the function was created by the superuser\n >then it should have proowner set to the superuser, and pg_dump looks\n >like it does the right thing about reconnecting as the function owner\n >(assuming you used -z, which is now default but wasn't in 6.4.2...).\n \nAh... looking back, I see that I did not use -z.\n\nUsing -z, it works OK.\n\n >I wonder whether we need a notion of \"effective\" and \"real\" user ID,\n >such as most Unix systems have. Then it'd be possible for the system\n >to know \"I may be creating objects on behalf of user X, but I really\n >am the superuser\" and apply protection checks appropriately. This'd\n >be a much more elegant solution than \\connect for pg_dump scripts,\n >since the whole script would run in a single superuser session and just\n >do a SET VARIABLE or something to indicate which user would be the owner\n >of created objects.\n \nI definitely agree with that. It's also needed in order to restrict\npassword manipulation of other users' passwords to the superuser alone.\n\n-- \n Vote against SPAM: http://www.politik-digital.de/spam/\n ========================================\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"Fear not, for I am with thee; be not dismayed, \n for I am thy God. I will strengthen thee and I will \n help thee; yea, I will uphold thee with the right hand\n of my righteousness.\" Isaiah 41:10 \n\n\n", "msg_date": "Tue, 08 Jun 1999 07:01:48 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Problem when reloading data from older version " }, { "msg_contents": "> Tom Lane wrote:\n> >Hmm. This seems wrong; if the function was created by the superuser\n> >then it should have proowner set to the superuser, and pg_dump looks\n> >like it does the right thing about reconnecting as the function owner\n> >(assuming you used -z, which is now default but wasn't in 6.4.2...).\n> \n> Ah... 
looking back, I see that I did not use -z.\n> \n> Using -z, it works OK.\n\n-z is now default in 6.5.\n\n\n> \n> >I wonder whether we need a notion of \"effective\" and \"real\" user ID,\n> >such as most Unix systems have. Then it'd be possible for the system\n> >to know \"I may be creating objects on behalf of user X, but I really\n> >am the superuser\" and apply protection checks appropriately. This'd\n> >be a much more elegant solution than \\connect for pg_dump scripts,\n> >since the whole script would run in a single superuser session and just\n> >do a SET VARIABLE or something to indicate which user would be the owner\n> >of created objects.\n> \n> I definitely agree with that. It's also needed in order to restrict\n> password manipulation of other users' passwords to the superuser alone.\n> \n> -- \n> Vote against SPAM: http://www.politik-digital.de/spam/\n> ========================================\n> Oliver Elphick [email protected]\n> Isle of Wight http://www.lfix.co.uk/oliver\n> PGP key from public servers; key ID 32B8FAA1\n> ========================================\n> \"Fear not, for I am with thee; be not dismayed, \n> for I am thy God. I will strengthen thee and I will \n> help thee; yea, I will uphold thee with the right hand\n> of my righteousness.\" Isaiah 41:10 \n> \n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 8 Jun 1999 11:40:18 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Problem when reloading data from older version" }, { "msg_contents": "This patch should enable 6.5 to build on Motorola 68000 architecture. It comes\nfrom Roman Hodek <[email protected]>.\n\n\n\n\n Vote against SPAM: http://www.politik-digital.de/spam/\n ========================================\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"I love them that love me; and those that seek me early\n shall find me.\" Proverbs 8:17", "msg_date": "Thu, 10 Jun 1999 16:29:11 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Patch for m68k architecture " }, { "msg_contents": "Applied. You man want to give us a fix for /template/.similar for that\nplatform, if needed to configure guesses the proper platform.\n\n> This patch should enable 6.5 to build on Motorola 68000 architecture. It comes\n> from Roman Hodek <[email protected]>.\n> \n> \nContent-Description: ol\n\n[Attachment, skipping...]\n\n> Vote against SPAM: http://www.politik-digital.de/spam/\n> ========================================\n> Oliver Elphick [email protected]\n> Isle of Wight http://www.lfix.co.uk/oliver\n> PGP key from public servers; key ID 32B8FAA1\n> ========================================\n> \"I love them that love me; and those that seek me early\n> shall find me.\" Proverbs 8:17\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 10 Jun 1999 18:58:22 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PORTS] Patch for m68k architecture" }, { "msg_contents": "At 8:29 AM -0700 6/10/99, Oliver Elphick wrote:\n>This patch should enable 6.5 to build on Motorola 68000 architecture. It\n>comes\n>from Roman Hodek <[email protected]>.\n\nHas anyone compared the Linux/m68k patch with the NetBSD/m68k patch (which\nI believe was already included in 6.5)?\n\nAlso I have been trying to cross-post some traffic on the\[email protected] list to the PG-ports list and it hasn't been\nappearing afaict. Am I just not looking carefully enough or is something\nscrewy?\n\nSignature failed Preliminary Design Review.\nFeasibility of a new signature is currently being evaluated.\[email protected], or [email protected]\n", "msg_date": "Fri, 11 Jun 1999 10:59:28 -0700", "msg_from": "\"Henry B. Hotz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PORTS] Patch for m68k architecture" }, { "msg_contents": ">At 8:29 AM -0700 6/10/99, Oliver Elphick wrote:\n>>This patch should enable 6.5 to build on Motorola 68000 architecture. It\n>>comes\n>>from Roman Hodek <[email protected]>.\n>\n>Has anyone compared the Linux/m68k patch with the NetBSD/m68k patch (which\n>I believe was already included in 6.5)?\n\nyes.\n\n>Also I have been trying to cross-post some traffic on the\n>[email protected] list to the PG-ports list and it hasn't been\n>appearing afaict. Am I just not looking carefully enough or is something\n>screwy?\n\nI have tried 6.4beta4 on NetBSD 1.3.3/m68k. It failed while running\ninitdb:\n\nCreating template database in /usr/local/pgsql/data/base/template1\n\nFATAL: s_lock(001bbea3) at bufmgr.c:1992, stuck spinlock. Aborting.\n\nFATAL: s_lock(001bbea3) at bufmgr.c:1992, stuck spinlock. Aborting.\n\nSeems something really bad is going on...\n--\nTatsuo Ishii\n", "msg_date": "Sat, 12 Jun 1999 17:47:18 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [PORTS] Patch for m68k architecture " }, { "msg_contents": ">>At 8:29 AM -0700 6/10/99, Oliver Elphick wrote:\n>>>This patch should enable 6.5 to build on Motorola 68000 architecture. It\n>>>comes\n>>>from Roman Hodek <[email protected]>.\n>>\n>>Has anyone compared the Linux/m68k patch with the NetBSD/m68k patch (which\n>>I believe was already included in 6.5)?\n>\n>yes.\n>\n>>Also I have been trying to cross-post some traffic on the\n>>[email protected] list to the PG-ports list and it hasn't been\n>>appearing afaict. Am I just not looking carefully enough or is something\n>>screwy?\n>\n>I have tried 6.4beta4 on NetBSD 1.3.3/m68k. It failed while running\n>initdb:\n>\n>Creating template database in /usr/local/pgsql/data/base/template1\n>\n>FATAL: s_lock(001bbea3) at bufmgr.c:1992, stuck spinlock. Aborting.\n>\n>FATAL: s_lock(001bbea3) at bufmgr.c:1992, stuck spinlock. Aborting.\n>\n>Seems something really bad is going on...\n\nI reverted back the patch for include/storage/s_lock.h and seems\nNetBSD/m68k port begins to work again.\n\nI think we should revert back the linux/m68k patches and leave them\nfor 6.5.1. 
Objection?\n--\nTatsuo Ishii\n\n", "msg_date": "Sun, 13 Jun 1999 00:39:19 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [PORTS] Patch for m68k architecture " }, { "msg_contents": "Tatsuo Ishii wrote:\n >>>At 8:29 AM -0700 6/10/99, Oliver Elphick wrote:\n >>>>This patch should enable 6.5 to build on Motorola 68000 architecture. \n...\n >>Seems something really bad is going on...\n >\n >I reverted back the patch for include/storage/s_lock.h and seems\n >NetBSD/m68k port begins to work again.\n >\n >I think we should revert back the linux/m68k patches and leave them\n >for 6.5.1. Objection?\n\nThat seems sensible; presumably no other current users are on linux_m68k\nor this would have been sorted already. I will keep it in the Debian\nversion where there can't be any conflict with NetBSD users.\n\nIt seems that the patch needs to depend not only on being m68k but also\non being linux. What defined variable can we use to distinguish between\nthe two?\n\n-- \n Vote against SPAM: http://www.politik-digital.de/spam/\n ========================================\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"And he said unto Jesus, Lord, remember me when thou \n comest into thy kingdom. And Jesus said unto him, \n Verily I say unto thee, To day shalt thou be with me \n in paradise.\" Luke 23:42,43 \n\n\n", "msg_date": "Sat, 12 Jun 1999 21:02:06 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [PORTS] Patch for m68k architecture " }, { "msg_contents": "> >I reverted back the patch for include/storage/s_lock.h and seems\n> >NetBSD/m68k port begins to work again.\n> >\n> >I think we should revert back the linux/m68k patches and leave them\n> >for 6.5.1. Objection?\n>\n>That seems sensible; presumably no other current users are on linux_m68k\n>or this would have been sorted already. I will keep it in the Debian\n>version where there can't be any conflict with NetBSD users.\n>\n>It seems that the patch needs to depend not only on being m68k but also\n>on being linux. What defined variable can we use to distinguish between\n>the two?\n\nI have changed \n#if defined(__mc68000__)\nto:\n#if defined(__mc68000__) && defined(__linux__)\nin s_lock.h.\n--\nTatsuo Ishii\n", "msg_date": "Sun, 13 Jun 1999 09:10:17 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [PORTS] Patch for m68k architecture " }, { "msg_contents": "At 1:47 AM -0700 6/12/99, Tatsuo Ishii wrote:\n>I have tried 6.4beta4 on NetBSD 1.3.3/m68k. It failed while running\n>initdb:\n>\n>Creating template database in /usr/local/pgsql/data/base/template1\n>\n>FATAL: s_lock(001bbea3) at bufmgr.c:1992, stuck spinlock. Aborting.\n>\n>FATAL: s_lock(001bbea3) at bufmgr.c:1992, stuck spinlock. Aborting.\n>\n>Seems something really bad is going on...\n\nThat certainly seems bad enough, but I did not see that problem. Are you\ntalking about way back when 6.4 was still in beta? Or did you mean 6.5?\n\nAs I tried to post earlier: when I built 6.4.2 using the patches it built\nfine and initdb worked. 
Most regression tests seemed ok-ish, but one of\nthem noticed that 'now' - 'current' was more than 200 days.\n\nI had a problem building 6.5, but I think it was related to my\nconfiguration rather than to Postgres.\n\nSignature failed Preliminary Design Review.\nFeasibility of a new signature is currently being evaluated.\[email protected], or [email protected]\n", "msg_date": "Sun, 13 Jun 1999 17:39:23 -0700", "msg_from": "\"Henry B. Hotz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [PORTS] Patch for m68k architecture" }, { "msg_contents": "At 8:39 AM -0700 6/12/99, Tatsuo Ishii wrote:\n>I reverted back the patch for include/storage/s_lock.h and seems\n>NetBSD/m68k port begins to work again.\n>\n>I think we should revert back the linux/m68k patches and leave them\n>for 6.5.1. Objection?\n\nI would like to support linux/m68k, but on the Mac 68K platform the NetBSD\nfolks are more numerous. Unless linux outnumbers NetBSD on some 68k\nplatforms other than Mac (and I think Amiga) I don't think you should break\na (mostly) working port in order to support a less widely used one.\n\nJust my $0.02.\n\nSignature failed Preliminary Design Review.\nFeasibility of a new signature is currently being evaluated.\[email protected], or [email protected]\n", "msg_date": "Sun, 13 Jun 1999 17:50:50 -0700", "msg_from": "\"Henry B. Hotz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [PORTS] Patch for m68k architecture" }, { "msg_contents": ">>I reverted back the patch for include/storage/s_lock.h and seems\n>>NetBSD/m68k port begins to work again.\n>>\n>>I think we should revert back the linux/m68k patches and leave them\n>>for 6.5.1. Objection?\n>\n>I would like to support linux/m68k, but on the Mac 68K platform the NetBSD\n>folks are more numerous. Unless linux outnumbers NetBSD on some 68k\n>platforms other than Mac (and I think Amiga) I don't think you should break\n>a (mostly) working port in order to support a less widely used one.\n\nI think the change I made in s_lock.h should make both NetBSD/m68k and \nLinux/m68k happy. Can some Linux/m68k folks confirm it?\n--\nTatsuo Ishii\n", "msg_date": "Mon, 14 Jun 1999 10:03:31 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [PORTS] Patch for m68k architecture " }, { "msg_contents": "The regression tests no longer seem to be using the \"alternative\" expected\nfiles should they exist. I have run out of time looking for the cause, but\nthe story so far is in going from version 1.28-1.29 of regress.sh,\nSYSTEM has gone from \n... printf\"%s-%s\", $1, a[1] }'\nto\n... printf\"%s-%s\", $portname, a[1] }'\nwhich means an example of output has changed from\ni386-netbsd\nto\ni386-unknown-netbsd1.4-netbsd\n\nNow, portname comes from PORTNAME=${os} in configure, which it appears ought\nto be set in my case to\n\n netbsd*)\n os=bsd need_tas=no\n case \"$host_cpu\" in\n powerpc) elf=yes ;;\n esac ;;\n\n\"bsd\", so I would expect SYSTEM to be set to \"bsd-netbsd\" ?! which doesn't\nseem right either...\n\nMaybe \"someone\" could take another look?\n\nCheers,\n\nPatrick\n\n", "msg_date": "Mon, 14 Jun 1999 12:48:56 +0100 (BST)", "msg_from": "\"Patrick Welche\" <[email protected]>", "msg_from_op": false, "msg_subject": "regress.sh" }, { "msg_contents": ">\n> The regression tests no longer seem to be using the \"alternative\" expected\n> files should they exist. 
I have run out of time looking for the cause, but\n> the story so far is in going from version 1.28-1.29 of regress.sh,\n> SYSTEM has gone from\n> ... printf\"%s-%s\", $1, a[1] }'\n> to\n> ... printf\"%s-%s\", $portname, a[1] }'\n> which means an example of output has changed from\n> i386-netbsd\n> to\n> i386-unknown-netbsd1.4-netbsd\n>\n> Now, portname comes from PORTNAME=${os} in configure, which it appears ought\n> to be set in my case to\n>\n> netbsd*)\n> os=bsd need_tas=no\n> case \"$host_cpu\" in\n> powerpc) elf=yes ;;\n> esac ;;\n>\n> \"bsd\", so I would expect SYSTEM to be set to \"bsd-netbsd\" ?! which doesn't\n> seem right either...\n>\n> Maybe \"someone\" could take another look?\n\n Ouch - looks like my recent change made while adding the\n NUMERIC regression tests.\n\n Looking at the actual sources I wonder why it can cause any\n problems. At the very beginning I've added\n\n portname=$1\n export portname\n shift\n\n That variable is used ONLY ONCE in the awk line you're\n quoting above. Prior to my changes, $1 was directly used as\n argument to awk and all remaining args ignored silently by\n regress.sh.\n\n Is it required that variables local in regress.sh have upper\n case? If so, why?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Mon, 14 Jun 1999 15:40:12 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] regress.sh" }, { "msg_contents": "Jan Wieck wrote:\n> \n> Looking at the actual sources I wonder why it can cause any\n> problems. At the very beginning I've added\n> \n> portname=$1\n> export portname\n> shift\n> \n> That variable is used ONLY ONCE in the awk line you're\n> quoting above. Prior to my changes, $1 was directly used as\n> argument to awk and all remaining args ignored silently by\n> regress.sh.\n\nAh! portname=$1 means take $1 command line argument that the script\nwas called by.\n\nawk -F\\- '{ split($3,a,/[0-9]/); printf\"%s-%s\", $1, a[1] }'\n\n$1 here is the 1st variable from the line split by awk. ie., $1 in the\nfirst case is \"sh\" syntax, $1 in second case is \"awk\" syntax.\n\nSo now that I know there is no intentional magic, we can go back successfully\nwith\n\n39c39\n< SYSTEM=`../../config.guess | awk -F\\- '{ split($3,a,/[0-9]/); printf\"%s-%s\", $portname, a[1] }'`\n---\n> SYSTEM=`../../config.guess | awk -F\\- '{ split($3,a,/[0-9]/); printf\"%s-%s\", $1, a[1] }'`\n\nthe only remaining query being:\n\n*** expected/random.out Sun Aug 30 19:50:58 1998\n--- results/random.out Mon Jun 14 15:18:04 1999\n***************\n*** 19,23 ****\n WHERE random NOT BETWEEN 80 AND 120;\n random\n ------\n! (0 rows)\n \n--- 19,24 ----\n WHERE random NOT BETWEEN 80 AND 120;\n random\n ------\n! 124\n! (1 row)\n\n?\n\nCheers,\n\nPatrick\n", "msg_date": "Mon, 14 Jun 1999 15:22:09 +0100 (BST)", "msg_from": "\"Patrick Welche\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] regress.sh" }, { "msg_contents": "[email protected] (Jan Wieck) writes:\n> Is it required that variables local in regress.sh have upper\n> case? If so, why?\n\nNope, you just plain broke it. 
The only use of the script's $1\nparameter is *above* where you inserted portname=$1 (the test to\nsee if on windows).\n\nThe $1 in the awk script is awk's own meaning of $1. Since it is\ninside single quotes, the shell doesn't substitute for it.\n\nI strongly suggest patching this before 6.5 ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 Jun 1999 10:22:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] regress.sh " }, { "msg_contents": "> the only remaining query being:\n> *** expected/random.out Sun Aug 30 19:50:58 1998\n> --- results/random.out Mon Jun 14 15:18:04 1999\n> ***************\n> *** 19,23 ****\n> WHERE random NOT BETWEEN 80 AND 120;\n> random\n> ------\n> ! (0 rows)\n> \n> --- 19,24 ----\n> WHERE random NOT BETWEEN 80 AND 120;\n> random\n> ------\n> ! 124\n> ! (1 row)\n\nWell, sometimes random is too random. I'll bet if you run again you\nwill see a different result; I'd hope that *usually* you will see the\nhoped-for result. I didn't want to make the criteria too loose so that\nwe would miss real problems. But sometimes the test fails, even on my\nmachine.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Mon, 14 Jun 1999 14:54:51 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] regress.sh" }, { "msg_contents": ">\n> [email protected] (Jan Wieck) writes:\n> > Is it required that variables local in regress.sh have upper\n> > case? If so, why?\n>\n> Nope, you just plain broke it. The only use of the script's $1\n> parameter is *above* where you inserted portname=$1 (the test to\n> see if on windows).\n\n Uhhhh - and there where times where I wrote awk scripts who's\n output got piped into 'groff -p' to produce statistical\n reports with graphics - hurts to see that I'm no real\n programmer anymore :-(\n\n>\n> The $1 in the awk script is awk's own meaning of $1. Since it is\n> inside single quotes, the shell doesn't substitute for it.\n>\n> I strongly suggest patching this before 6.5 ...\n\n No comment other that \"sorry\".\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Mon, 14 Jun 1999 16:59:07 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] regress.sh" }, { "msg_contents": "Thomas Lockhart wrote:\n> \n> Well, sometimes random is too random.\n\n?! So all is alright then :)\n\nCheers,\n\nPatrick\n", "msg_date": "Mon, 14 Jun 1999 16:18:09 +0100 (BST)", "msg_from": "\"Patrick Welche\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] regress.sh" }, { "msg_contents": "Thomas Lockhart wrote:\n >> the only remaining query being:\n >> *** expected/random.out Sun Aug 30 19:50:58 1998\n >> --- results/random.out Mon Jun 14 15:18:04 1999\n >> ***************\n >> *** 19,23 ****\n >> WHERE random NOT BETWEEN 80 AND 120;\n >> random\n >> ------\n >> ! (0 rows)\n >> \n >> --- 19,24 ----\n >> WHERE random NOT BETWEEN 80 AND 120;\n >> random\n >> ------\n >> ! 124\n >> ! (1 row)\n >\n >Well, sometimes random is too random. I'll bet if you run again you\n >will see a different result; I'd hope that *usually* you will see the\n >hoped-for result. 
I didn't want to make the criteria too loose so that\n >we would miss real problems. But sometimes the test fails, even on my\n >machine.\n \nEvery time I have run it I have had a 124 row. That's about 8 times.\n(glibc2/linux/i386)\n\n-- \n Vote against SPAM: http://www.politik-digital.de/spam/\n ========================================\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"I beseech you therefore, brethren, by the mercies of \n God, that ye present your bodies a living sacrifice, \n holy, acceptable unto God, which is your reasonable \n service.\" Romans 12:1 \n\n\n", "msg_date": "Mon, 14 Jun 1999 20:29:37 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] regress.sh " }, { "msg_contents": "I'm trying to do some Linux rpms for the v6.5 release, and want to\ninclude more interfaces than RH has done in the past. The pygres\ninstallation instructions make no sense to me. There is a mention of a\n\"make redhat\", but no Makefile, and a mention of \"building python\",\nwhich is already on my system. Can anyone lead me through what\n*really* needs to be done? I'm guessing that README.linux is out of\ndate, but don't know enough to be sure...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Sun, 20 Jun 1999 04:34:24 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "pygres/python installation" }, { "msg_contents": "Thomas Lockhart wrote:\n >I'm trying to do some Linux rpms for the v6.5 release, and want to\n >include more interfaces than RH has done in the past. The pygres\n >installation instructions make no sense to me. There is a mention of a\n >\"make redhat\", but no Makefile, and a mention of \"building python\",\n >which is already on my system. Can anyone lead me through what\n >*really* needs to be done? I'm guessing that README.linux is out of\n >date, but don't know enough to be sure...\n\nI believe it is.\n\nHere are the instructions I use for building Pygresql for the Debian\ndistribution, which I put together after asking for help on Debian lists:\n\nbuild-python:\n cd src/interfaces/python && \\\n cp /usr/lib/python1.5/config/Makefile.pre.in `pwd`\n cd src/interfaces/python && \\\n echo *shared* > Setup\n cd src/interfaces/python && \\\n echo _pg pgmodule.c -I../../include -I../libpq \\\n -L../libpq -lpq -lcrypt \\\n >> Setup\n cd src/interfaces/python && \\\n $(MAKE) -f Makefile.pre.in boot\n cd src/interfaces/python && \\\n $(MAKE)\n touch build-python\n\n\n-- \n Vote against SPAM: http://www.politik-digital.de/spam/\n ========================================\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"There is therefore now no condemnation to them which \n are in Christ Jesus, who walk not after the flesh, but\n after the Spirit.\" Romans 8:1 \n\n\n", "msg_date": "Sun, 20 Jun 1999 07:19:43 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] pygres/python installation " }, { "msg_contents": "I mentioned this the other day on another list. 
I want to reiterate it\nhere because I can't seem to get anywhere.\n\nI create a temporary table\n=> create temp table foo (bar text);\nCREATE\n=> insert into foo values ('hi');\nERROR: pg_temp.29112.0: Permission denied.\n\nThis apparently happens if and only if the user that executes this has\npg_shadow.usecatupd = 'f'.\n\nI have tried this with the 6.5.1 source rpm bundle, fresh after initdb and\nalso with a 6.5.0 tar ball installation -- same result. (both on RH Linux\n5.2-ish)\n\nA potential reason that this has gone unnoticed so far is that when you\ncreate a user thus:\n=> create user joe;\nthe usecatupd defaults to true (why?).\n\nAlso this does not have anything to do with superuser status, the ability\nto create and use regular tables, the ability to create databases, the\ndatatypes in the temp table, any hba stuff, or anything else I could think\nof.\n\nAnyone got a clue?\n\nRegards,\n\tPeter\n\n-- \nPeter Eisentraut\nPathWay Computing, Inc.\n\n", "msg_date": "Fri, 30 Jul 1999 13:22:19 -0400 (EDT)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Cannot insert into temp tables" }, { "msg_contents": "> I mentioned this the other day on another list. I want to reiterate it\n> here because I can't seem to get anywhere.\n> \n> I create a temporary table\n> => create temp table foo (bar text);\n> CREATE\n> => insert into foo values ('hi');\n> ERROR: pg_temp.29112.0: Permission denied.\n> \n> This apparently happens if and only if the user that executes this has\n> pg_shadow.usecatupd = 'f'.\n> \n> I have tried this with the 6.5.1 source rpm bundle, fresh after initdb and\n> also with a 6.5.0 tar ball installation -- same result. (both on RH Linux\n> 5.2-ish)\n> \n> A potential reason that this has gone unnoticed so far is that when you\n> create a user thus:\n> => create user joe;\n> the usecatupd defaults to true (why?).\n> \n> Also this does not have anything to do with superuser status, the ability\n> to create and use regular tables, the ability to create databases, the\n> datatypes in the temp table, any hba stuff, or anything else I could think\n> of.\n\nOK, you have good points. usecatupd should not be set by default. \nMaking changes to the system tables can mess things up for everyone. \nInitdb will give the postgres superuser permissions, but now createuser\nand the SQL command CREATE USER will not give this permission. Also, I\nhave fixed the code so temp tables, which are acutally named pg_temp,\ncan be updated by normal users without usecatupd permissions.\n\nAttached is a patch. I will apply it to the current tree.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n? src/log\n? src/config.log\n? src/config.cache\n? src/config.status\n? src/GNUmakefile\n? src/Makefile.global\n? src/Makefile.custom\n? src/backend/fmgr.h\n? src/backend/parse.h\n? src/backend/postgres\n? src/backend/global1.bki.source\n? src/backend/local1_template1.bki.source\n? src/backend/global1.description\n? src/backend/local1_template1.description\n? src/backend/bootstrap/bootparse.c\n? src/backend/bootstrap/bootstrap_tokens.h\n? src/backend/bootstrap/bootscanner.c\n? src/backend/catalog/genbki.sh\n? src/backend/catalog/global1.bki.source\n? src/backend/catalog/global1.description\n? src/backend/catalog/local1_template1.bki.source\n? src/backend/catalog/local1_template1.description\n? 
src/backend/port/Makefile\n? src/backend/utils/Gen_fmgrtab.sh\n? src/backend/utils/fmgr.h\n? src/backend/utils/fmgrtab.c\n? src/bin/cleardbdir/cleardbdir\n? src/bin/createdb/createdb\n? src/bin/createlang/createlang\n? src/bin/createuser/createuser\n? src/bin/destroydb/destroydb\n? src/bin/destroylang/destroylang\n? src/bin/destroyuser/destroyuser\n? src/bin/initdb/initdb\n? src/bin/initlocation/initlocation\n? src/bin/ipcclean/ipcclean\n? src/bin/pg_dump/Makefile\n? src/bin/pg_dump/pg_dump\n? src/bin/pg_id/pg_id\n? src/bin/pg_passwd/pg_passwd\n? src/bin/pg_version/Makefile\n? src/bin/pg_version/pg_version\n? src/bin/pgtclsh/mkMakefile.tcldefs.sh\n? src/bin/pgtclsh/mkMakefile.tkdefs.sh\n? src/bin/pgtclsh/Makefile.tkdefs\n? src/bin/pgtclsh/Makefile.tcldefs\n? src/bin/pgtclsh/pgtclsh\n? src/bin/pgtclsh/pgtksh\n? src/bin/psql/Makefile\n? src/bin/psql/psql\n? src/include/version.h\n? src/include/config.h\n? src/interfaces/ecpg/lib/Makefile\n? src/interfaces/ecpg/lib/libecpg.so.3.0.0\n? src/interfaces/ecpg/lib/libecpg.so.3.0.1\n? src/interfaces/ecpg/preproc/ecpg\n? src/interfaces/libpgtcl/Makefile\n? src/interfaces/libpgtcl/libpgtcl.so.2.0\n? src/interfaces/libpq/Makefile\n? src/interfaces/libpq/libpq.so.2.0\n? src/interfaces/libpq++/Makefile\n? src/interfaces/libpq++/libpq++.so.3.0\n? src/interfaces/odbc/GNUmakefile\n? src/interfaces/odbc/Makefile.global\n? src/lextest/lex.yy.c\n? src/lextest/lextest\n? src/pl/plpgsql/src/Makefile\n? src/pl/plpgsql/src/mklang.sql\n? src/pl/plpgsql/src/pl_gram.c\n? src/pl/plpgsql/src/pl.tab.h\n? src/pl/plpgsql/src/pl_scan.c\n? src/pl/plpgsql/src/libplpgsql.so.1.0\n? src/pl/tcl/mkMakefile.tcldefs.sh\n? src/pl/tcl/Makefile.tcldefs\nIndex: src/backend/catalog/aclchk.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/catalog/aclchk.c,v\nretrieving revision 1.26\ndiff -c -r1.26 aclchk.c\n*** src/backend/catalog/aclchk.c\t1999/07/17 20:16:47\t1.26\n--- src/backend/catalog/aclchk.c\t1999/07/30 17:58:38\n***************\n*** 392,397 ****\n--- 392,398 ----\n \t */\n \tif (((mode & ACL_WR) || (mode & ACL_AP)) &&\n \t\t!allowSystemTableMods && IsSystemRelationName(relname) &&\n+ \t\tstrncmp(relname,\"pg_temp.\", strlen(\"pg_temp.\")) != 0 &&\n \t\t!((Form_pg_shadow) GETSTRUCT(tuple))->usecatupd)\n \t{\n \t\telog(DEBUG, \"pg_aclcheck: catalog update to \\\"%s\\\": permission denied\",\nIndex: src/backend/commands/user.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/commands/user.c,v\nretrieving revision 1.32\ndiff -c -r1.32 user.c\n*** src/backend/commands/user.c\t1999/07/17 20:16:54\t1.32\n--- src/backend/commands/user.c\t1999/07/30 17:58:38\n***************\n*** 169,175 ****\n \tsnprintf(sql, SQL_LENGTH,\n \t\t\t \"insert into %s (usename,usesysid,usecreatedb,usetrace,\"\n \t\t\t \"usesuper,usecatupd,passwd,valuntil) \"\n! \t\t\t \"values('%s',%d,'%c','t','%c','t',%s%s%s,%s%s%s)\",\n \t\t\t ShadowRelationName,\n \t\t\t stmt->user,\n \t\t\t max_id + 1,\n--- 169,175 ----\n \tsnprintf(sql, SQL_LENGTH,\n \t\t\t \"insert into %s (usename,usesysid,usecreatedb,usetrace,\"\n \t\t\t \"usesuper,usecatupd,passwd,valuntil) \"\n! 
\t\t\t \"values('%s',%d,'%c','f','%c','f',%s%s%s,%s%s%s)\",\n \t\t\t ShadowRelationName,\n \t\t\t stmt->user,\n \t\t\t max_id + 1,\nIndex: src/bin/createuser/createuser.sh\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/bin/createuser/createuser.sh,v\nretrieving revision 1.11\ndiff -c -r1.11 createuser.sh\n*** src/bin/createuser/createuser.sh\t1999/01/31 05:04:25\t1.11\n--- src/bin/createuser/createuser.sh\t1999/07/30 17:58:45\n***************\n*** 218,224 ****\n QUERY=\"insert into pg_shadow \\\n (usename, usesysid, usecreatedb, usetrace, usesuper, usecatupd) \\\n values \\\n! ('$NEWUSER', $SYSID, '$CANCREATE', 't', '$CANADDUSER','t')\"\n \n RES=`$PSQL -c \"$QUERY\" template1`\n \n--- 218,224 ----\n QUERY=\"insert into pg_shadow \\\n (usename, usesysid, usecreatedb, usetrace, usesuper, usecatupd) \\\n values \\\n! ('$NEWUSER', $SYSID, '$CANCREATE', 'f', '$CANADDUSER','f')\"\n \n RES=`$PSQL -c \"$QUERY\" template1`", "msg_date": "Fri, 30 Jul 1999 14:08:09 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Cannot insert into temp tables" }, { "msg_contents": "Bruce Momjian wrote:\n...\n >OK, you have good points. usecatupd should not be set by default. \n >Making changes to the system tables can mess things up for everyone. \n >Initdb will give the postgres superuser permissions, but now createuser\n >and the SQL command CREATE USER will not give this permission. Also, I\n >have fixed the code so temp tables, which are acutally named pg_temp,\n >can be updated by normal users without usecatupd permissions.\n >\n >Attached is a patch. I will apply it to the current tree.\n\nBruce, this change has some other implications. I tested\nthe effect of the patch by altering the rights of my own account (setting\nusecatupd to false). I cannot now create other users: although usesuper is\ntrue, the attempt to update pg_shadow with the new user's row fails:\n\n olly@linda$ createuser fred\n Enter user's postgres ID -> 999\n Is user \"fred\" allowed to create databases (y/n) n\n Is user \"fred\" a superuser? (y/n) n\n ERROR: pg_shadow: Permission denied.\n createuser: fred was NOT added successfully\n\nso I think your change needs to be extended to allow pg_shadow to be\nupdated when a user is created; in this case, usesuper should\noverride usecatupd.\n\nOn the other hand, a user with usecreatedb true is able to modify\npg_database outside the context of a create database command. This also\nseems to be undesirable. I think that create user, alter user and create\ndatabase should work even though the user does not have usecatupd, but the\nuser should be able to change the affected tables only through those\ncommands and not by direct manipulation, unless he has usecatupd in\naddition to other privileges.\n\nI regret that I can only point out these problems rather than provide \na fix...\n\n\n\n-- \n Vote against SPAM: http://www.politik-digital.de/spam/\n ========================================\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"Have not I commanded thee? 
Be strong and of a good \n courage; be not afraid, neither be thou dismayed; for \n the LORD thy God is with thee whithersoever thou \n goest.\" Joshua 1:9 \n\n\n", "msg_date": "Sat, 31 Jul 1999 18:26:22 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Cannot insert into temp tables " }, { "msg_contents": "I've posted a tarball of new man pages at\n\n ftp://postgresql.org/pub/doc/man.tar.gz\n\nwhich were generated from the sgml sources used in the other docs.\nThere are a few new pages corresponding to applications like pgtclsh,\nipcclean, etc which were not documented in reference pages previously.\n\nAlso, the old man pages have been removed from the main branch of the\ncvs repository (to take effect for the v6.6 release). All information\nfrom those man pages appears somewhere else in the new docs; usually\nin the reference pages but sometimes in a more suitable place.\n\nPlease let me know if you see any problems with these.\n\n - Thomas\n\nbtw, the man tarball will also appear in a source distribution of\nPostgres, much like the tarballs of html documentation. We may choose\nto build the docs on the fly for a release, making the tarballs\nsuperfluous, but I decided to do it this way to get us started...\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 10 Aug 1999 02:40:08 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "New man pages" }, { "msg_contents": "> I've posted a tarball of new man pages at\n> \n> ftp://postgresql.org/pub/doc/man.tar.gz\n> \n> which were generated from the sgml sources used in the other docs.\n> There are a few new pages corresponding to applications like pgtclsh,\n> ipcclean, etc which were not documented in reference pages previously.\n> \n> Also, the old man pages have been removed from the main branch of the\n> cvs repository (to take effect for the v6.6 release). All information\n> from those man pages appears somewhere else in the new docs; usually\n> in the reference pages but sometimes in a more suitable place.\n\nGreat. Good job.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 9 Aug 1999 22:54:49 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ANNOUNCE] New man pages" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > All information\n> > from those man pages appears somewhere else in the new docs; usually\n> > in the reference pages but sometimes in a more suitable place.\n> Great. Good job.\n\nThanks. So, at the moment we have man pages for section one (1) and\nsection ell (l). Is this the right section numbering for the future?\nHow did we choose the \"ell\"? Should we consider having some utilities\nlike initdb documented in, say, section eight (8)? How about having\nsections like one-sql (1sql) or something similar?\n\nIt's easy to move things around, and I'm wondering if the current\nscheme works for people. 
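For concreteness, this is roughly what the current scheme looks like from an administrator's side; the /usr/local/pgsql prefix and the page names are only examples, and the MANPATH line is the kind of manual step the scheme currently implies.

    ls /usr/local/pgsql/man/man1   # client programs: psql.1, createdb.1, ...
    ls /usr/local/pgsql/man/manl   # SQL commands: select.l, create_table.l, ...

    # to make "man select" find the section-l pages, the directory normally
    # has to be put on the search path by hand:
    MANPATH=/usr/local/pgsql/man:$MANPATH
    export MANPATH
    # ...and some man(1) implementations additionally want section "l" listed
    # in their configuration file (/etc/man.config or equivalent) before they
    # will search manl/ at all.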
I don't have a strong opinion on this...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 10 Aug 1999 03:04:37 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ANNOUNCE] New man pages" }, { "msg_contents": "> Bruce Momjian wrote:\n> > \n> > > All information\n> > > from those man pages appears somewhere else in the new docs; usually\n> > > in the reference pages but sometimes in a more suitable place.\n> > Great. Good job.\n> \n> Thanks. So, at the moment we have man pages for section one (1) and\n> section ell (l). Is this the right section numbering for the future?\n> How did we choose the \"ell\"? Should we consider having some utilities\n\nIt is very confusing. I think they chose 'l' for language. Does\nsomeone want to suggest a better section number. Maybe just put them\nall in 1.\n\n> like initdb documented in, say, section eight (8)? How about having\n> sections like one-sql (1sql) or something similar?\n\nOh, I get it. Can everyone handle multi-character man sections?\n\n> \n> It's easy to move things around, and I'm wondering if the current\n> scheme works for people. I don't have a strong opinion on this...\n> \n\nI would like to use existing sections, rather than do our own. I found\nI had to modify the man page search to look in a manl, and others may\nhave the same problem.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 9 Aug 1999 23:08:10 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ANNOUNCE] New man pages" }, { "msg_contents": "> Oh, I get it. Can everyone handle multi-character man sections?\n\nThat is how, for example, the X system does their man pages. There are\nsections \"1x\", etc. Except that now that I look on my RH linux system\nthey are squirreled away in /usr/X11/man/man1/, etc so I must have\nseen that on another system. Perhaps my old Alpha boxes??\n\n> I would like to use existing sections, rather than do our own. I found\n> I had to modify the man page search to look in a manl, and others may\n> have the same problem.\n\nYes, that is a consideration. It is easy to automate adding a new\nsection (like \"1sql\" or \"1p\" or ??) for packages, but is something the\nadmin needs to remember to do on a from-source installation. \n\notoh, it does eliminate the possibility of man page pollution if we\nmanage to have the same man page name as some other existing page.\n*That* would be a bad thing. And in general adding ~75 man pages to\nexisting sections is a pretty big load...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 10 Aug 1999 03:21:06 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ANNOUNCE] New man pages" }, { "msg_contents": "> > Oh, I get it. Can everyone handle multi-character man sections?\n> \n> That is how, for example, the X system does their man pages. There are\n> sections \"1x\", etc. Except that now that I look on my RH linux system\n> they are squirreled away in /usr/X11/man/man1/, etc so I must have\n> seen that on another system. Perhaps my old Alpha boxes??\n\nSCO does that. Section 1M.\n\n> \n> > I would like to use existing sections, rather than do our own. 
I found\n> > I had to modify the man page search to look in a manl, and others may\n> > have the same problem.\n> \n> Yes, that is a consideration. It is easy to automate adding a new\n> section (like \"1sql\" or \"1p\" or ??) for packages, but is something the\n> admin needs to remember to do on a from-source installation. \n> \n> otoh, it does eliminate the possibility of man page pollution if we\n> manage to have the same man page name as some other existing page.\n> *That* would be a bad thing. And in general adding ~75 man pages to\n> existing sections is a pretty big load...\n\nOh, good point. How do we get around that. I don't see another\nexisting section that looks appropriate for SQL commands.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 9 Aug 1999 23:29:06 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ANNOUNCE] New man pages" }, { "msg_contents": "Thomas Lockhart wrote:\n >> Oh, I get it. Can everyone handle multi-character man sections?\n >\n >That is how, for example, the X system does their man pages. There are\n >sections \"1x\", etc. Except that now that I look on my RH linux system\n >they are squirreled away in /usr/X11/man/man1/, etc so I must have\n >seen that on another system. Perhaps my old Alpha boxes??\n \nPages from multi-character sections are stored in the directory for the\nfirst character. For instance: /usr/man/man7/select.7l.gz\n\n >> I would like to use existing sections, rather than do our own. I found\n >> I had to modify the man page search to look in a manl, and others may\n >> have the same problem.\n \nFor Debian, I have relocated the SQL pages to section 7l and commands such\nas psql and createuser go in section 1. Policy requires me to use one of\nthe numbered sections (1-8), though I can use a suffix to ensure uniqueness.\n\nOn Debian GNU/Linux, the sections are:\n1 User commands\n2 System calls\n3 Library routines\n4 Devices\n5 File formats\n6 Games\n7 Miscellaneous\n8 System administration\n\n...\n >otoh, it does eliminate the possibility of man page pollution if we\n >manage to have the same man page name as some other existing page.\n\nAs of course we do; for example, select is also in section 2.\n\n >*That* would be a bad thing. And in general adding ~75 man pages to\n >existing sections is a pretty big load...\n\nI'm not sure that's much of a problem. These are the figures from my\nsystem for /usr/man, /usr/share/man, /usr/X11R6/man and /usr/local/man\ncombined:\n\nSection Count\n1 2258\n2 236\n3 6554\n4 39\n5 236\n6 26\n7 128\n8 517\n\n-- \n Vote against SPAM: http://www.politik-digital.de/spam/\n ========================================\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"If ye abide in me, and my words abide in you, ye shall\n ask what ye will, and it shall be done unto you.\" \n John 15:7 \n\n\n", "msg_date": "Tue, 10 Aug 1999 12:31:04 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [ANNOUNCE] New man pages " }, { "msg_contents": "> Pages from multi-character sections are stored in the directory for the\n> first character. For instance: /usr/man/man7/select.7l.gz\n\nOh! 
afaik that is one option; the man system in general could also\nhandle man7l/select.7.gz right? You would update /etc/man.config to\nadd, say, \"7l\" to the list of sections.\n\nBut is is against Debian policy to invent new directories for pages? I\nsee that my RH linux system actually does about the same as Debian;\nthere are some \".1x\" files in the /usr/man/man1 directory.\n\n> >> I would like to use existing sections, rather than do our own. I found\n> >> I had to modify the man page search to look in a manl, and others may\n> >> have the same problem.\n> For Debian, I have relocated the SQL pages to section 7l and commands such\n> as psql and createuser go in section 1. Policy requires me to use one of\n> the numbered sections (1-8), though I can use a suffix to ensure uniqueness.\n> On Debian GNU/Linux, the sections are:\n> 1 User commands\n> 2 System calls\n> 3 Library routines\n> 4 Devices\n> 5 File formats\n> 6 Games\n> 7 Miscellaneous\n> 8 System administration\n\nSame for Linux (\"man 7 man\" has a summary).\n\n> >otoh, it does eliminate the possibility of man page pollution if we\n> >manage to have the same man page name as some other existing page.\n> As of course we do; for example, select is also in section 2.\n\nA near miss, since we weren't likely to have chosen section 2 for\n*our* select. But it does illustrate the risk.\n\n> >*That* would be a bad thing. And in general adding ~75 man pages to\n> >existing sections is a pretty big load...\n> I'm not sure that's much of a problem. These are the figures from my\n> system for /usr/man, /usr/share/man, /usr/X11R6/man and /usr/local/man\n> combined:\n\nRight.\n\nSo, do Oliver's conventions make sense for most platforms? istm that\nthey do. Would folks have problems with a mapping similar to what\nOliver uses? We would use section one (1) and section seven (7), with\na qualifier of ell (l) on each of the man page names. I won't do\nanything about it right now, but would like to get a consensus now\nthat the subject has come up. Speak up now or forever hold your...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 10 Aug 1999 12:47:53 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [ANNOUNCE] New man pages" }, { "msg_contents": "Thomas Lockhart wrote:\n >> Pages from multi-character sections are stored in the directory for the\n >> first character. For instance: /usr/man/man7/select.7l.gz\n >\n >Oh! afaik that is one option; the man system in general could also\n >handle man7l/select.7.gz right? You would update /etc/man.config to\n >add, say, \"7l\" to the list of sections.\n >\n >But is is against Debian policy to invent new directories for pages?\n\n6.1 Manual pages \n\nYou must install manual pages in nroff source form, in appropriate places\nunder /usr/share/man. You should only use sections 1 to 9 (see the FHS\n ^^^\n FHS only defines 1 to 8; this may be an error\n\nfor more details). You must not install a preformatted `cat page'. 
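To make the two placements just discussed concrete, a packaging script would do something along these lines (a sketch following Oliver's /usr/man/man7/select.7l.gz example above; whether a given man(1) then finds the page depends on its section configuration):

    # (a) keep the full "7l" suffix but store the page in the man7 directory
    install -d /usr/man/man7
    install -m 644 select.7l /usr/man/man7/select.7l
    gzip -9 /usr/man/man7/select.7l       # -> /usr/man/man7/select.7l.gz

    # (b) use a separate man7l directory, which then needs "7l" added to the
    #     section list in /etc/man.config (or the local equivalent)
    install -d /usr/man/man7l
    install -m 644 select.7 /usr/man/man7l/select.7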
\n\nThe FHS has a section (4.8.2) on manual pages, which we ought to follow\nif possible: <http://www.pathname.com/fhs/2.0/fhs-4.8.2.html>\n\n\n-- \n Vote against SPAM: http://www.politik-digital.de/spam/\n ========================================\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"If ye abide in me, and my words abide in you, ye shall\n ask what ye will, and it shall be done unto you.\" \n John 15:7 \n\n\n", "msg_date": "Tue, 10 Aug 1999 14:16:04 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [ANNOUNCE] New man pages " }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> Oh, I get it. Can everyone handle multi-character man sections?\n>\n> That is how, for example, the X system does their man pages. There are\n> sections \"1x\", etc. Except that now that I look on my RH linux system\n> they are squirreled away in /usr/X11/man/man1/, etc so I must have\n> seen that on another system. Perhaps my old Alpha boxes??\n\nHPUX, as usual, is off in left field somewhere: they use 1m for sysadmin\ncommands, but everything else just goes into the single-digit-named\nsubdirectories (man1, man3, etc). There is no separate namespace for\nsection 3c vs. section 3m, for example --- all those man pages live in\nman3. And sections named by a bare letter don't work at all. AFAICT\nthis section search logic is implemented by hardwired hacks in the guts\nof man(1) --- there is no way to affect it with MANPATH, for example,\nbecause MANPATH determines where the manual root directories are, not\nwhich subdirectories get looked at.\n\nNewer implementations of man(1) are probably cleaner, but I fear that\nHPUX's may be representative of what you'll find on older Unixes.\n\nI'd like to see us change away from putting SQL commands in section l\n(ell), simply because that doesn't work on HPUX. Something like 8l\nor 8s would work a lot better for me. However, I'm not sure which\nmajor section to use --- there doesn't seem to be very much cross-\nplatform standardization about the meanings of the sections beyond 4.\nOn BSD, section 8 seems to contain admin programs (the stuff HPUX\nkeeps in 1m). I don't see any sections on either my HPUX box or\na nearby BSD box that contain pages for individual keywords of\na programming language...\n\n> otoh, it does eliminate the possibility of man page pollution if we\n> manage to have the same man page name as some other existing page.\n> *That* would be a bad thing. And in general adding ~75 man pages to\n> existing sections is a pretty big load...\n\nAs long as we install into /usr/local/pgsql/man/man*, naming conflicts\nwith other packages aren't too big a deal --- there's no physical file\nconflict, and people can just add or remove /usr/local/pgsql/man/ in\ntheir MANPATH settings to see or not see Postgres manpages.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 10 Aug 1999 10:27:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [ANNOUNCE] New man pages " }, { "msg_contents": "> I'd like to see us change away from putting SQL commands in section l\n> (ell), simply because that doesn't work on HPUX. Something like 8l\n> or 8s would work a lot better for me. 
However, I'm not sure which\n> major section to use --- there doesn't seem to be very much cross-\n> platform standardization about the meanings of the sections beyond 4.\n> On BSD, section 8 seems to contain admin programs (the stuff HPUX\n> keeps in 1m). I don't see any sections on either my HPUX box or\n> a nearby BSD box that contain pages for individual keywords of\n> a programming language...\n\nOliver uses section 7 (miscellaneous...) for SQL commands, which seems\nto be the right choice given the guidelines for Debian, RedHat, FHS,\nand my imprecise impression of what is typical.\n\n> > otoh, it does eliminate the possibility of man page pollution if we\n> > manage to have the same man page name as some other existing page.\n> > *That* would be a bad thing. And in general adding ~75 man pages to\n> > existing sections is a pretty big load...\n> As long as we install into /usr/local/pgsql/man/man*, naming conflicts\n> with other packages aren't too big a deal --- there's no physical file\n> conflict, and people can just add or remove /usr/local/pgsql/man/ in\n> their MANPATH settings to see or not see Postgres manpages.\n\nYeah, but \"we\" don't always install into /usr/local/pgsql, though we\ncan suggest that as a possibility.\n\nbtw, on my Solaris boxes MANPATH is worse than useless; if you specify\nit then none of the other paths mentioned in /etc/man.config (or\nwherever that is on Solaris) get used. So you have to recreate all of\nthe default MANPATH settings in your environment variable. Of course,\nnow that I've whined about this perhaps someone knows a way around\nthis?\n\nIs there any concern about using \"l\" (ell) for the section\ndiscriminator?\n\nSo, I'll count you as not objecting to sections \"1l\" (one-ell) and\n\"7l\" (seven-ell), ok?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 10 Aug 1999 14:44:21 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [ANNOUNCE] New man pages" }, { "msg_contents": "On Tue, Aug 10, 1999 at 12:47:53PM +0000, Thomas Lockhart wrote:\n> \n> So, do Oliver's conventions make sense for most platforms? istm that\n> they do. Would folks have problems with a mapping similar to what\n> Oliver uses? We would use section one (1) and section seven (7), with\n> a qualifier of ell (l) on each of the man page names. I won't do\n> anything about it right now, but would like to get a consensus now\n> that the subject has come up. Speak up now or forever hold your...\n> \n\nWell, they make sense for _me_ but then, I run Debian and use Oliver's \npackages ;-) As to the general proposal: put them in the nubmered sections\nwitha unique suffix. I'd suggest, however, at least a two character suffix:\nhow about pg? or sql? (tcl/tk uses section 3 with foo.3tcl and bar.3tk)\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n", "msg_date": "Tue, 10 Aug 1999 09:44:22 -0500", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [ANNOUNCE] New man pages" }, { "msg_contents": " So, do Oliver's conventions make sense for most platforms? istm that\n they do. Would folks have problems with a mapping similar to what\n Oliver uses? 
We would use section one (1) and section seven (7), with\n a qualifier of ell (l) on each of the man page names. I won't do\n anything about it right now, but would like to get a consensus now\n that the subject has come up. Speak up now or forever hold your...\n\nOK, I'll speak up. :)\n\nThis doesn't make too much sense to me based on my experience with\nzillions of other man pages.\n\nA quick look through all of my system-supplied, X11, and locally\ninstalled man pages in section 1 shows none with any suffix other than\n.1 or .1.gz. What exactly is the point of a .1l or whatever? Is it\njust to avoid name collisions? If so, why not use something more\nmeaningful, like .1sql? Note that the downside of any \"odd\" suffix,\nthough, is that the man system will likely need reconfiguring so as to\nrecognize it. This will add an extra installation step, one that\nprobably cannot be easily automated. Perhaps a relevant question here\nis, how likely is a name collision anyway? Is it likely enough to\nrequire reconfiguration of the man system for all users?\n\nAs far as sections go, I think the following conventions apply for\nsections of relevance to PostgreSQL:\n\n1 - most of the commands which comprise the user environment\n3 - an overview of the library functions, their error returns and\n other common definitions and concepts\n5 - documentation on binary and configuration file formats\n8 - information related to system operation and maintenance\n\nThus, it seems that documentation for user commands, e.g., CREATE\nTABLE, SELECT, UPDATE, ... should go into section 1, the API to\nlibpq/libpq++/libpgtcl/... should go into section 3, configuration\nfiles like pg_hba.conf should go into section 5, and admin commands,\ne.g., createdb, createuser, pg_dump, ... should go into section 8.\n\nCheers,\nBrook\n", "msg_date": "Tue, 10 Aug 1999 08:48:15 -0600 (MDT)", "msg_from": "Brook Milligan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [ANNOUNCE] New man pages" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> On BSD, section 8 seems to contain admin programs (the stuff HPUX\n>> keeps in 1m). I don't see any sections on either my HPUX box or\n>> a nearby BSD box that contain pages for individual keywords of\n>> a programming language...\n\n> Oliver uses section 7 (miscellaneous...) for SQL commands, which seems\n> to be the right choice given the guidelines for Debian, RedHat, FHS,\n> and my imprecise impression of what is typical.\n\n7 works for me; there's not much there except a few kernel device\ndriver manpages on my box.\n\n> Is there any concern about using \"l\" (ell) for the section\n> discriminator?\n> So, I'll count you as not objecting to sections \"1l\" (one-ell) and\n> \"7l\" (seven-ell), ok?\n\nHow about \"s\" for \"SQL\"? ell is too easily mistaken for one; we'd\nhave to put \"no, it's not section eleven\" in the FAQ ...\n\n> btw, on my Solaris boxes MANPATH is worse than useless; if you specify\n> it then none of the other paths mentioned in /etc/man.config (or\n> wherever that is on Solaris) get used. So you have to recreate all of\n> the default MANPATH settings in your environment variable. 
Of course,\n> now that I've whined about this perhaps someone knows a way around\n> this?\n\nEr, why not \"MANPATH=/usr/local/pgsql/man:$MANPATH\" ?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 10 Aug 1999 11:01:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [ANNOUNCE] New man pages " }, { "msg_contents": "Tom Lane wrote:\n >As long as we install into /usr/local/pgsql/man/man*, naming conflicts\n >with other packages aren't too big a deal --- there's no physical file\n >conflict, and people can just add or remove /usr/local/pgsql/man/ in\n >their MANPATH settings to see or not see Postgres manpages.\n\nBut do remember us poor distribution maintainers. It makes life a lot\neasier for us if upstream writers make an effort not to conflict with the\nrest of the world!\n\nIf some local user or administrator puts PostgreSQL on his machine, it\nmay be appropriate to use /usr/local, but a PostgreSQL package installed as \npart of a distribution should never use it.\n\n-- \n Vote against SPAM: http://www.politik-digital.de/spam/\n ========================================\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"If ye abide in me, and my words abide in you, ye shall\n ask what ye will, and it shall be done unto you.\" \n John 15:7 \n\n\n", "msg_date": "Tue, 10 Aug 1999 16:04:12 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [ANNOUNCE] New man pages " }, { "msg_contents": "\"Ross J. Reedstrom\" <[email protected]> writes:\n> I'd suggest, however, at least a two character suffix:\n\nDoesn't work with the standard man(1) on either HPUX or SunOS...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 10 Aug 1999 11:18:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [ANNOUNCE] New man pages " }, { "msg_contents": "------- Start of forwarded message -------\n So, do Oliver's conventions make sense for most platforms? istm that\n they do. Would folks have problems with a mapping similar to what\n Oliver uses? We would use section one (1) and section seven (7), with\n a qualifier of ell (l) on each of the man page names. I won't do\n anything about it right now, but would like to get a consensus now\n that the subject has come up. Speak up now or forever hold your...\n\nOK, I'll speak up. :)\n\nThis doesn't make too much sense to me based on my experience with\nzillions of other man pages.\n\nA quick look through all of my system-supplied, X11, and locally\ninstalled man pages in section 1 shows none with any suffix other than\n.1 or .1.gz. What exactly is the point of a .1l or whatever? Is it\njust to avoid name collisions? If so, why not use something more\nmeaningful, like .1sql? Note that the downside of any \"odd\" suffix,\nthough, is that the man system will likely need reconfiguring so as to\nrecognize it. This will add an extra installation step, one that\nprobably cannot be easily automated. Perhaps a relevant question here\nis, how likely is a name collision anyway? 
Is it likely enough to\nrequire reconfiguration of the man system for all users?\n\nAs far as sections go, I think the following conventions apply for\nsections of relevance to PostgreSQL:\n\n1 - most of the commands which comprise the user environment\n3 - an overview of the library functions, their error returns and\n other common definitions and concepts\n5 - documentation on binary and configuration file formats\n8 - information related to system operation and maintenance\n\nThus, it seems that documentation for user commands, e.g., CREATE\nTABLE, SELECT, UPDATE, ... should go into section 1, the API to\nlibpq/libpq++/libpgtcl/... should go into section 3, configuration\nfiles like pg_hba.conf should go into section 5, and admin commands,\ne.g., createdb, createuser, pg_dump, ... should go into section 8.\n\nCheers,\nBrook\n", "msg_date": "Tue, 10 Aug 1999 10:30:09 -0600 (MDT)", "msg_from": "Brook Milligan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [ANNOUNCE] New man pages" }, { "msg_contents": "> > Pages from multi-character sections are stored in the directory for the\n> > first character. For instance: /usr/man/man7/select.7l.gz\n> \n> Oh! afaik that is one option; the man system in general could also\n> handle man7l/select.7.gz right? You would update /etc/man.config to\n> add, say, \"7l\" to the list of sections.\n> \n> But is is against Debian policy to invent new directories for pages? I\n> see that my RH linux system actually does about the same as Debian;\n> there are some \".1x\" files in the /usr/man/man1 directory.\n\nI have never seen a 'name.1x' or anything with a more than\nsingle-character file prefix, and once it is formatted, it becomes\nname.0. I don't see it buys us anything to do this. What we could do\nis to put throw them in section 7, assuming there is no conflict. I\nonly have two pgp man pages in my 7.\n\n> \n> > >> I would like to use existing sections, rather than do our own. I found\n> > >> I had to modify the man page search to look in a manl, and others may\n> > >> have the same problem.\n> > For Debian, I have relocated the SQL pages to section 7l and commands such\n> > as psql and createuser go in section 1. Policy requires me to use one of\n> > the numbered sections (1-8), though I can use a suffix to ensure uniqueness.\n> > On Debian GNU/Linux, the sections are:\n> > 1 User commands\n> > 2 System calls\n> > 3 Library routines\n> > 4 Devices\n> > 5 File formats\n> > 6 Games\n> > 7 Miscellaneous\n> > 8 System administration\n> \n> Same for Linux (\"man 7 man\" has a summary).\n> \n> > >otoh, it does eliminate the possibility of man page pollution if we\n> > >manage to have the same man page name as some other existing page.\n> > As of course we do; for example, select is also in section 2.\n> \n> A near miss, since we weren't likely to have chosen section 2 for\n> *our* select. But it does illustrate the risk.\n> \n> > >*That* would be a bad thing. And in general adding ~75 man pages to\n> > >existing sections is a pretty big load...\n> > I'm not sure that's much of a problem. These are the figures from my\n> > system for /usr/man, /usr/share/man, /usr/X11R6/man and /usr/local/man\n> > combined:\n> \n> Right.\n> \n> So, do Oliver's conventions make sense for most platforms? istm that\n> they do. Would folks have problems with a mapping similar to what\n> Oliver uses? We would use section one (1) and section seven (7), with\n> a qualifier of ell (l) on each of the man page names. 
I won't do\n> anything about it right now, but would like to get a consensus now\n> that the subject has come up. Speak up now or forever hold your...\n\nI agree with the 7, but see no need for the additional qualifier. I\nthink that could cause more problems than it is worth.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 10 Aug 1999 12:43:29 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [ANNOUNCE] New man pages" }, { "msg_contents": "> btw, on my Solaris boxes MANPATH is worse than useless; if you specify\n> it then none of the other paths mentioned in /etc/man.config (or\n> wherever that is on Solaris) get used. So you have to recreate all of\n> the default MANPATH settings in your environment variable. Of course,\n> now that I've whined about this perhaps someone knows a way around\n> this?\n> \n> Is there any concern about using \"l\" (ell) for the section\n> discriminator?\n> \n> So, I'll count you as not objecting to sections \"1l\" (one-ell) and\n> \"7l\" (seven-ell), ok?\n\nWhy not just put it in section 7, not 7l.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 10 Aug 1999 13:00:09 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [ANNOUNCE] New man pages" }, { "msg_contents": "Hackers: has anyone any insight on this one, please?\n\nCraig Sanders wrote:\n >one of my clients is having problems with postgresql. i've upgraded to\n >the latest version 6.5.1-5 hoping that might fix the problem..no luck.\n >\n >any ideas?\n >\n >\n >BTW, i turned on debugging in postmaster.init and this is a sample of\n >what it shows - \"ERROR: vacuum: can't destroy lock file!\". there are 2\n >or 3 instances of this in the logs.\n >\n...\n >ERROR: vacuum: can't destroy lock file!\n >AbortCurrentTransaction\n...\n\nThis does seem to be a problem with 6.5.1. I have got a similar one coming up\nin the regression test database. Very interestingly, it has arisen since\nmy last clean vacuum and I have not touched the database since then. 
I\nwonder if it is possible that vacuum has itself corrupted the database.\n\nI have found no useful information in the logs; the actual error seems to\nindicate that the vc_shutdown() routine is being called a second time for\nthe same database, but I cannot yet see why (if you want to investigate,\nit is in src/backend/commands/vacuum.c).\n\n-- \n Vote against SPAM: http://www.politik-digital.de/spam/\n ========================================\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"If ye abide in me, and my words abide in you, ye shall\n ask what ye will, and it shall be done unto you.\" \n John 15:7 \n\n\n", "msg_date": "Tue, 10 Aug 1999 18:27:35 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: (fwd) Problems with Postgres " }, { "msg_contents": "\"Oliver Elphick\" wrote:\n >Hackers: has anyone any insight on this one, please?\n \nThis is where things are going wrong:\n\nregression=> vacuum verbose analyze;\nNOTICE: --Relation pg_type--\nNOTICE: Pages 4: Changed 0, Reapped 1, Empty 0, New 0; Tup 248: Vac 0, \nKeep/VTL\n 0/0, Crash 0, UnUsed 31, MinLen 105, MaxLen 109; Re-using: Free/Avail. Space \n41\n16/0; EndEmpty/Avail. Pages 0/0. Elapsed 0/0 sec.\nNOTICE: Index pg_type_typname_index: Pages 4; Tuples 248: Deleted 0. Elapsed \n0/\n0 sec.\nNOTICE: Index pg_type_oid_index: Pages 2; Tuples 248: Deleted 0. Elapsed 0/0 \nse\nc.\nNOTICE: --Relation pg_attribute--\nNOTICE: Pages 22: Changed 0, Reapped 1, Empty 0, New 0; Tup 1666: Vac 0, \nKeep/V\nTL 0/0, Crash 0, UnUsed 50, MinLen 97, MaxLen 100; Re-using: Free/Avail. Space \n5\n072/0; EndEmpty/Avail. Pages 0/0. Elapsed 0/0 sec.\nNOTICE: Index pg_attribute_attrelid_index: Pages 10; Tuples 1666: Deleted 0. \nEl\napsed 0/0 sec.\nNOTICE: Index pg_attribute_relid_attnum_index: Pages 12; Tuples 1666: Deleted \n0\n. Elapsed 0/0 sec.\nNOTICE: Index pg_attribute_relid_attnam_index: Pages 28; Tuples 1666: Deleted \n0\n. Elapsed 0/0 sec.\nNOTICE: --Relation pg_proc--\nNOTICE: Pages 24: Changed 0, Reapped 0, Empty 0, New 0; Tup 1070: Vac 0, \nKeep/V\nTL 0/0, Crash 0, UnUsed 0, MinLen 145, MaxLen 2369; Re-using: Free/Avail. \nSpace\n0/0; EndEmpty/Avail. Pages 0/0. Elapsed 0/0 sec.\nNOTICE: Index pg_proc_prosrc_index: Pages 10; Tuples 1070. Elapsed 0/0 sec.\nNOTICE: Index pg_proc_proname_narg_type_index: Pages 17; Tuples 1070. Elapsed \n0\n/0 sec.\nNOTICE: Index pg_proc_oid_index: Pages 5; Tuples 1070. Elapsed 0/0 sec.\nNOTICE: --Relation pg_class--\nNOTICE: Pages 3: Changed 0, Reapped 2, Empty 0, New 0; Tup 210: Vac 0, \nKeep/VTL\n 0/0, Crash 0, UnUsed 14, MinLen 102, MaxLen 132; Re-using: Free/Avail. Space \n14\n32/0; EndEmpty/Avail. Pages 0/0. Elapsed 0/0 sec.\nNOTICE: Index pg_class_relname_index: Pages 5; Tuples 210: Deleted 0. Elapsed \n0\n/0 sec.\nNOTICE: Index pg_class_oid_index: Pages 2; Tuples 210: Deleted 0. 
Elapsed 0/0 \ns\nec.\n\nThe relation now being vacuumed is bt_text_heap\n\nthe pg_vlock file gets deleted in vc_abort with this backtrace:\n(gdb) bt\n#0 vc_abort () at vacuum.c:252\n#1 0x8078f28 in TransactionIdAbort (transactionId=156164) at transam.c:578\n#2 0x80e0269 in XactLockTableWait (xid=156164) at lmgr.c:332\n#3 0x806c9a9 in heap_delete (relation=0x8216ac8, tid=0x8225c48, ctid=0x0) at \nheapam.c:1149\n#4 0x808ce04 in vc_delhilowstats (relid=2073242, attcnt=0, attnums=0x0) at \nvacuum.c:2484\n#5 0x80897fc in vc_vacone (relid=2073242, analyze=1, va_cols=0x0) at \nvacuum.c:510\n#6 0x80891fe in vc_vacuum (VacRelP=0x0, analyze=1 '\\001', va_cols=0x0) at \nvacuum.c:279\n#7 0x80890df in vacuum (vacrel=0x0, verbose=1, analyze=1 '\\001', va_spec=0x0) \nat vacuum.c:164\n#8 0x80e7ee0 in ProcessUtility (parsetree=0x8237e08, dest=Remote) at \nutility.c:638\n#9 0x80e4e66 in pg_exec_query_dest (query_string=0xbfffa2b4 \"vacuum verbose \nanalyze;\", dest=Remote, aclOverride=0) at postgres.c:727\n#10 0x80e4d74 in pg_exec_query (query_string=0xbfffa2b4 \"vacuum verbose \nanalyze;\") at postgres.c:656\n#11 0x80e5edf in PostgresMain (argc=6, argv=0xbffff42c, real_argc=10, \nreal_argv=0xbffff9b4) at postgres.c:1647\n#12 0x80cec5f in DoBackend (port=0x81d5140) at postmaster.c:1628\n#13 0x80ce73a in BackendStartup (port=0x81d5140) at postmaster.c:1373\n#14 0x80cde59 in ServerLoop () at postmaster.c:823\n#15 0x80cd989 in PostmasterMain (argc=10, argv=0xbffff9b4) at postmaster.c:616\n#16 0x80a4245 in main (argc=10, argv=0xbffff9b4) at main.c:97\n\n >Craig Sanders wrote:\n > >one of my clients is having problems with postgresql. i've upgraded to\n > >the latest version 6.5.1-5 hoping that might fix the problem..no luck.\n > >\n > >any ideas?\n > >\n > >\n > >BTW, i turned on debugging in postmaster.init and this is a sample of\n > >what it shows - \"ERROR: vacuum: can't destroy lock file!\". there are 2\n > >or 3 instances of this in the logs.\n > >\n >...\n > >ERROR: vacuum: can't destroy lock file!\n > >AbortCurrentTransaction\n >...\n >\n >This does seem to be a problem with 6.5.1. I have got a similar one coming \n >up\n >in the regression test database. Very interestingly, it has arisen since\n >my last clean vacuum and I have not touched the database since then. 
I\n >wonder if it is possible that vacuum has itself corrupted the database.\n >\n >I have found no useful information in the logs; the actual error seems to\n >indicate that the vc_shutdown() routine is being called a second time for\n >the same database, but I cannot yet see why\n\nIn fact vc_abort unlinks pg_vlock, but then the vacuum keeps running and\nvc_shutdown prints the error when it tries to unlink pg_vlock again.\n\n-- \n Vote against SPAM: http://www.politik-digital.de/spam/\n ========================================\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"If ye abide in me, and my words abide in you, ye shall\n ask what ye will, and it shall be done unto you.\" \n John 15:7 \n\n\n", "msg_date": "Tue, 10 Aug 1999 20:42:58 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: (fwd) Problems with Postgres " }, { "msg_contents": "\"Oliver Elphick\" <[email protected]> writes:\n> This is where things are going wrong:\n> the pg_vlock file gets deleted in vc_abort with this backtrace:\n> (gdb) bt\n> #0 vc_abort () at vacuum.c:252\n> #1 0x8078f28 in TransactionIdAbort (transactionId=156164) at transam.c:578\n> #2 0x80e0269 in XactLockTableWait (xid=156164) at lmgr.c:332\n> #3 0x806c9a9 in heap_delete (relation=0x8216ac8, tid=0x8225c48, ctid=0x0) at heapam.c:1149\n\nAh-hah, I *knew* that code was bogus: TransactionIdAbort() has no\nbusiness calling vc_abort(). I fixed that about two days ago\nin both current and REL6_5 branches...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 10 Aug 1999 17:21:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: (fwd) Problems with Postgres " }, { "msg_contents": "Brook Milligan <[email protected]> writes:\n> Thus, it seems that documentation for user commands, e.g., CREATE\n> TABLE, SELECT, UPDATE, ... should go into section 1,\n\nNo. Section 1 is exclusively for *programs*, ie, commands available\nat a shell command line. There isn't really anyplace in the standard\nman conventions to put a separate man page for each command accepted\nby a single program, which is what SQL language constructs are.\nWe're usurping more \"man page namespace\" than we ought to by putting\neach SQL construct on its own man page. However, one man page for\nthe whole of SQL isn't too appealing, so we have little choice.\nThe proposal to put 'em in section 7 sounded reasonable to me.\n\n> the API to\n> libpq/libpq++/libpgtcl/... should go into section 3, configuration\n> files like pg_hba.conf should go into section 5, and admin commands,\n> e.g., createdb, createuser, pg_dump, ... should go into section 8.\n\nThese would make sense, but you start to run into some of the\ncross-platform idiosyncracies in section usage as soon as you go past\nsection 3. For example, on HPUX file format docs live in section 4,\nand admin commands live in section 1m. I don't think HP invented those\nconventions on their own --- they are probably common to a lot of\nold-line SysV-derived Unixes. The Debian conventions that Oliver\nmentioned look like they descend from BSD Unix.\n\nI agree with putting the libpq etc. APIs in section 3, assuming we still\nhave man pages for them at all (at one time there was talk of dropping\nthose manpages in favor of the chapters in the SGML docs). 
I'd be\ninclined to just put createdb and friends in section 1, rather than\nworrying about where they should go to classify them as admin commands...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 10 Aug 1999 17:32:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [ANNOUNCE] New man pages " }, { "msg_contents": "Tom Lane wrote:\n >Ah-hah, I *knew* that code was bogus: TransactionIdAbort() has no\n >business calling vc_abort(). I fixed that about two days ago\n >in both current and REL6_5 branches...\n \nTom, can you give me a patch, or instructions to fix it? I don't want to\nrelease the whole REL6_5 branch until it's officially released.\n\n-- \n Vote against SPAM: http://www.politik-digital.de/spam/\n ========================================\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"If ye abide in me, and my words abide in you, ye shall\n ask what ye will, and it shall be done unto you.\" \n John 15:7 \n\n\n", "msg_date": "Tue, 10 Aug 1999 22:45:16 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: (fwd) Problems with Postgres " }, { "msg_contents": "> > btw, on my Solaris boxes MANPATH is worse than useless; if you specify\n> > it then none of the other paths mentioned in /etc/man.config (or\n> > wherever that is on Solaris) get used. So you have to recreate all of\n> > the default MANPATH settings in your environment variable. Of course,\n> > now that I've whined about this perhaps someone knows a way around\n> > this?\n> Er, why not \"MANPATH=/usr/local/pgsql/man:$MANPATH\" ?\n\n'Cause MANPATH is not defined to start with. My point was that man\ndoes a great job just using its configuration file, but if you start\nusing MANPATH you have to (apparently) figure out what paths were in\nthe config file and put those in too...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 11 Aug 1999 03:10:39 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [ANNOUNCE] New man pages" }, { "msg_contents": "> > > btw, on my Solaris boxes MANPATH is worse than useless; if you specify\n> > > it then none of the other paths mentioned in /etc/man.config (or\n> > > wherever that is on Solaris) get used. So you have to recreate all of\n> > > the default MANPATH settings in your environment variable. Of course,\n> > > now that I've whined about this perhaps someone knows a way around\n> > > this?\n> > Er, why not \"MANPATH=/usr/local/pgsql/man:$MANPATH\" ?\n> \n> 'Cause MANPATH is not defined to start with. My point was that man\n> does a great job just using its configuration file, but if you start\n> using MANPATH you have to (apparently) figure out what paths were in\n> the config file and put those in too...\n\nYes, I have seen this too.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 10 Aug 1999 23:36:18 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [ANNOUNCE] New man pages" }, { "msg_contents": "I've been getting a strange error from the vacuum command. 
When I try\n'vacuum verbose analyze' the vacuum goes through the tables fine until\njust after finishing one particular table. Then I get the error:\n\nNOTICE: AbortTransaction and not in in-progress state\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nWe have lost the connection to the backend, so further processing is\nimpossible. Terminating.\n\n\nWhen I try to vacuum the tables individually, I get no problems with\naborted backends.\n\nDoes anyone know what is going on here?\n\n-Tony\n\nPostgreSQL 6.5.1 on RH Linux 6.0, PII/400 MHz, 512 Meg RAM, 18 Gig HD\n\n\n\n", "msg_date": "Thu, 12 Aug 1999 16:33:32 -0700", "msg_from": "\"G. Anthony Reina\" <[email protected]>", "msg_from_op": false, "msg_subject": "Aborted Transaction During Vacuum" }, { "msg_contents": "\"G. Anthony Reina\" <[email protected]> writes:\n> I've been getting a strange error from the vacuum command. When I try\n> 'vacuum verbose analyze' the vacuum goes through the tables fine until\n> just after finishing one particular table. Then I get the error:\n\n> NOTICE: AbortTransaction and not in in-progress state\n> pqReadData() -- backend closed the channel unexpectedly.\n\nInteresting. Is there any additional message appearing in the\npostmaster log? Is a core file being generated? (look in the\ndata/base/ subdirectory for the database in question) If there\nis a corefile, a debugger backtrace from it would be helpful.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 12 Aug 1999 21:28:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Aborted Transaction During Vacuum " }, { "msg_contents": "Tom Lane wrote:\n\n>\n> Interesting. Is there any additional message appearing in the\n> postmaster log? Is a core file being generated? (look in the\n> data/base/ subdirectory for the database in question) If there\n> is a corefile, a debugger backtrace from it would be helpful.\n>\n> regards, tom lane\n\nTom,\n\n I tried the 'vacuum verbose analyze' again today. I get the same error\nwith the AbortTransaction. There is a core file generated but no pg_vlock\nfile. The core is over 1 Gig in size (38 Megs gzipped) so I'm not so sure\nyou'd want to get that (you can have it if you want it though!). Plus,\nthere seems to be nothing written to the postmaster.log file (I re-started\nthe postmaster before the vacuum using 'nohup postmaster -i -B 15000 -o -F\n> postmaster.log&').\n\n Oliver Elphick ([email protected]) told me that this sounds\nlike like a bug you patched for him a few days ago:\n\n> It does sound very like the bug that was found a few days back where the\n\n> pg_vlock file gets deleted by a mistaken call to vc_abort(). This only\n\n> gets called in the analyze code.\n\nWe're probably talking about the same bug. When I try just 'vacuum verbose'\nwithout the analyze, the vacuum completes just fine. So it must be within\nthe analyze code.\n\n-Tony\n\n\n", "msg_date": "Fri, 13 Aug 1999 10:28:10 -0700", "msg_from": "\"G. Anthony Reina\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Aborted Transaction During Vacuum" }, { "msg_contents": "\"G. Anthony Reina\" <[email protected]> writes:\n> I tried the 'vacuum verbose analyze' again today. I get the same error\n> with the AbortTransaction. There is a core file generated but no pg_vlock\n> file. 
The core is over 1 Gig in size (38 Megs gzipped) so I'm not so sure\n> you'd want to get that (you can have it if you want it though!).\n\nIt wouldn't do me any good anyway without a copy of the executable and a\ncopy of gdb built for whatever platform you are on. I was hoping you\ncould run gdb on the corefile there and just send the backtrace output\n(\"gdb postgres-executable core-file\" and then say \"bt\").\n\n> Oliver Elphick ([email protected]) told me that this sounds\n> like like a bug you patched for him a few days ago:\n\nI doubt it's the same bug --- the error message generated by the other\nbug was different...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 13 Aug 1999 13:37:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Aborted Transaction During Vacuum " }, { "msg_contents": "Tom Lane wrote:\n >> Oliver Elphick ([email protected]) told me that this sounds\n >> like like a bug you patched for him a few days ago:\n >\n >I doubt it's the same bug --- the error message generated by the other\n >bug was different...\n\nThe error message I posted was the log output at level 3, which he doesn't\nseem to be running; if you switch to logging at that level, look for\nmessages that mention pg_vlock.\n\nAs far as I recall, the only error message he has listed so far is from\npsql, which merely records the unlink failure at the end of the vacuum.\nEverything else looks remarkably like the vc_abort bug.\n\n-- \n Vote against SPAM: http://www.politik-digital.de/spam/\n ========================================\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"Watch ye therefore, and pray always, that ye may be \n accounted worthy to escape all these things that shall\n come to pass, and to stand before the Son of man.\" \n Luke 21:36 \n\n\n", "msg_date": "Fri, 13 Aug 1999 19:04:43 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Aborted Transaction During Vacuum " }, { "msg_contents": "\"Oliver Elphick\" <[email protected]> writes:\n> As far as I recall, the only error message he has listed so far is from\n> psql, which merely records the unlink failure at the end of the vacuum.\n> Everything else looks remarkably like the vc_abort bug.\n\nBut the vc_abort problem didn't cause a backend coredump --- it just\nreported an error and failed to complete the vacuum, no?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 13 Aug 1999 14:09:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Aborted Transaction During Vacuum " }, { "msg_contents": "Tom Lane wrote:\n >\"Oliver Elphick\" <[email protected]> writes:\n >> As far as I recall, the only error message he has listed so far is from\n >> psql, which merely records the unlink failure at the end of the vacuum.\n >> Everything else looks remarkably like the vc_abort bug.\n >\n >But the vc_abort problem didn't cause a backend coredump --- it just\n >reported an error and failed to complete the vacuum, no?\n >\n\nI got a coredump too; I never mentioned it, because I found the proximate\ncause and curing that made it go away. 
When unlink failed in vc_shutdown,\nit called ELOG and a segfault occurred a little later.\n\nI forgot about that when your patch fixed the original problem.\n\n-- \n Vote against SPAM: http://www.politik-digital.de/spam/\n ========================================\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"Watch ye therefore, and pray always, that ye may be \n accounted worthy to escape all these things that shall\n come to pass, and to stand before the Son of man.\" \n Luke 21:36 \n\n\n", "msg_date": "Fri, 13 Aug 1999 19:28:31 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Aborted Transaction During Vacuum " }, { "msg_contents": "Tom Lane wrote:\n\n> But the vc_abort problem didn't cause a backend coredump --- it just\n> reported an error and failed to complete the vacuum, no?\n>\n> regards, tom lane\n\nHere's the error message again:\n\nNOTICE: --Relation ex_ellipse_proc--\nNOTICE: --Relation ex_ellipse_proc--\nNOTICE: Pages 2545: Changed 0, Reapped 0, Empty 0, New 0; Tup 30535: Vac 0,\nKeep/VTL 0/0, Crash 0, UnUsed 0,\nMinLen 660, MaxLen 660; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail.\nPages 0/0. Elapsed 1/0 sec.\nNOTICE: Pages 2545: Changed 0, Reapped 0, Empty 0, New 0; Tup 30535: Vac 0,\nKeep/VTL 0/0, Crash 0, UnUsed 0,\nMinLen 660, MaxLen 660; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail.\nPages 0/0. Elapsed 1/0 sec.\nNOTICE: Index pkex_ellipse_proc: Pages 138; Tuples 30535. Elapsed 0/0 sec.\nNOTICE: Index pkex_ellipse_proc: Pages 138; Tuples 30535. Elapsed 0/0 sec.\nNOTICE: --Relation ex_ellipse_cell--\nNOTICE: --Relation ex_ellipse_cell--\nNOTICE: Pages 370: Changed 0, Reapped 0, Empty 0, New 0; Tup 6109: Vac 0,\nKeep/VTL 0/0, Crash 0, UnUsed 0,\nMinLen 80, MaxLen 2736; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail.\nPages 0/0. Elapsed 0/0 sec.\nNOTICE: Pages 370: Changed 0, Reapped 0, Empty 0, New 0; Tup 6109: Vac 0,\nKeep/VTL 0/0, Crash 0, UnUsed 0,\nMinLen 80, MaxLen 2736; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail.\nPages 0/0. Elapsed 0/0 sec.\nNOTICE: Index pkex_ellipse_cell: Pages 42; Tuples 6109. Elapsed 0/0 sec.\nNOTICE: Index pkex_ellipse_cell: Pages 42; Tuples 6109. Elapsed 0/0 sec.\nNOTICE: --Relation ex_ellipse_opto--\nNOTICE: --Relation ex_ellipse_opto--\nNOTICE: Pages 26356: Changed 0, Reapped 0, Empty 0, New 0; Tup 71475: Vac\n0, Keep/VTL 0/0, Crash 0, UnUsed 0,\nMinLen 1760, MaxLen 6108; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail.\nPages 0/0. Elapsed 3/2 sec.\nNOTICE: Pages 26356: Changed 0, Reapped 0, Empty 0, New 0; Tup 71475: Vac\n0, Keep/VTL 0/0, Crash 0, UnUsed 0,\nMinLen 1760, MaxLen 6108; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail.\nPages 0/0. Elapsed 3/2 sec.\nNOTICE: Index pkex_ellipse_opto: Pages 357; Tuples 71475. Elapsed 0/0 sec.\nNOTICE: Index pkex_ellipse_opto: Pages 357; Tuples 71475. Elapsed 0/0 sec.\nNOTICE: --Relation ex_ellipse_opto_proc--\nNOTICE: --Relation ex_ellipse_opto_proc--\nNOTICE: Pages 14742: Changed 0, Reapped 0, Empty 0, New 0; Tup 30535: Vac\n0, Keep/VTL 0/0, Crash 0, UnUsed 0,\nMinLen 1816, MaxLen 5900; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail.\nPages 0/0. Elapsed 1/1 sec.\nNOTICE: Pages 14742: Changed 0, Reapped 0, Empty 0, New 0; Tup 30535: Vac\n0, Keep/VTL 0/0, Crash 0, UnUsed 0,\nMinLen 1816, MaxLen 5900; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail.\nPages 0/0. 
Elapsed 1/1 sec.\nNOTICE: Index pkex_ellipse_opto_proc: Pages 138; Tuples 30535. Elapsed 0/0\nsec.\nNOTICE: Index pkex_ellipse_opto_proc: Pages 138; Tuples 30535. Elapsed 0/0\nsec.\nERROR: vacuum: can't destroy lock file!\nNOTICE: AbortTransaction and not in in-progress state\nNOTICE: AbortTransaction and not in in-progress state\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nWe have lost the connection to the backend, so further processing is\nimpossible. Terminating.\n[postgres@bigred ~]$\n\n\n\n-Tony\n\n\n", "msg_date": "Fri, 13 Aug 1999 11:38:12 -0700", "msg_from": "\"G. Anthony Reina\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Aborted Transaction During Vacuum" }, { "msg_contents": "\"Oliver Elphick\" <[email protected]> writes:\n>> But the vc_abort problem didn't cause a backend coredump --- it just\n>> reported an error and failed to complete the vacuum, no?\n\n> I got a coredump too; I never mentioned it, because I found the proximate\n> cause and curing that made it go away. When unlink failed in vc_shutdown,\n> it called ELOG and a segfault occurred a little later.\n\nAh, I wish I'd known that. So what Tony is seeing is exactly the same\nbehavior you observed. OK, I feel better now --- I thought the coredump\nwas probably some platform-specific misbehavior that only Tony was seeing.\n\nWe still need to figure out what is causing it, because I can see no\nreason for a coredump after vc_shutdown elog()s. Something is being\nclobbered that should not be. But it sounds like installing the\nvc_abort patch will get Tony on his feet, and then we can look for the\nsecondary bug at our leisure.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 13 Aug 1999 17:31:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Aborted Transaction During Vacuum " }, { "msg_contents": "On Tue, 10 Aug 1999, Tom Lane wrote:\n\n> > manage to have the same man page name as some other existing page.\n> > *That* would be a bad thing. And in general adding ~75 man pages to\n> > existing sections is a pretty big load...\n> \n> As long as we install into /usr/local/pgsql/man/man*, naming conflicts\n> with other packages aren't too big a deal --- there's no physical file\n> conflict, and people can just add or remove /usr/local/pgsql/man/ in\n> their MANPATH settings to see or not see Postgres manpages.\n\nJumping in mid-stream and a week late (love holidays, eh? *grin*) ... My\nopinion is to make the default ${PREFIX}/man and have a --manpath=\nvariable set to configure to move it to a different place...\n\nI *believe* that Oracle installs its man pages under its install\ndirectory, but can't confirm right now...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 16 Aug 1999 00:00:49 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [ANNOUNCE] New man pages " }, { "msg_contents": "On Tue, 10 Aug 1999, Brook Milligan wrote:\n\n> ------- Start of forwarded message -------\n> So, do Oliver's conventions make sense for most platforms? istm that\n> they do. Would folks have problems with a mapping similar to what\n> Oliver uses? 
We would use section one (1) and section seven (7), with\n> a qualifier of ell (l) on each of the man page names. I won't do\n> anything about it right now, but would like to get a consensus now\n> that the subject has come up. Speak up now or forever hold your...\n> \n> OK, I'll speak up. :)\n> \n> This doesn't make too much sense to me based on my experience with\n> zillions of other man pages.\n> \n> A quick look through all of my system-supplied, X11, and locally\n\nJust a quick note, but, under FreeBSD *at least*, X11 puts its man pages\nin /usr/X11R6/man/* ... so it doesn't conflict with \"system\" man pages...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 16 Aug 1999 00:03:15 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [ANNOUNCE] New man pages" }, { "msg_contents": "On Wed, 11 Aug 1999, Thomas Lockhart wrote:\n\n> > > btw, on my Solaris boxes MANPATH is worse than useless; if you specify\n> > > it then none of the other paths mentioned in /etc/man.config (or\n> > > wherever that is on Solaris) get used. So you have to recreate all of\n> > > the default MANPATH settings in your environment variable. Of course,\n> > > now that I've whined about this perhaps someone knows a way around\n> > > this?\n> > Er, why not \"MANPATH=/usr/local/pgsql/man:$MANPATH\" ?\n> \n> 'Cause MANPATH is not defined to start with. My point was that man\n> does a great job just using its configuration file, but if you start\n> using MANPATH you have to (apparently) figure out what paths were in\n> the config file and put those in too...\n\nOkay, I run Solaris at work (2.5.x -> 7) and have yet to find a\n/etc/man.config file...I've always used MANPATH here :(\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 16 Aug 1999 00:04:50 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [ANNOUNCE] New man pages" }, { "msg_contents": "> Bruce Momjian wrote:\n> ...\n> >OK, you have good points. usecatupd should not be set by default. \n> >Making changes to the system tables can mess things up for everyone. \n> >Initdb will give the postgres superuser permissions, but now createuser\n> >and the SQL command CREATE USER will not give this permission. Also, I\n> >have fixed the code so temp tables, which are acutally named pg_temp,\n> >can be updated by normal users without usecatupd permissions.\n> >\n> >Attached is a patch. I will apply it to the current tree.\n> \n> Bruce, this change has some other implications. I tested\n> the effect of the patch by altering the rights of my own account (setting\n> usecatupd to false). I cannot now create other users: although usesuper is\n> true, the attempt to update pg_shadow with the new user's row fails:\n\n\nOK, I have committed a fix for this. I now give update catalog rights\nto anyone who has createdb or super-user rights. This is in the current\ntree only. Let me know if you see any additional problems.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
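For readers skimming the thread, the behavior Bruce describes just above can be pictured with a short, purely illustrative SQL session; the user and table names are invented rather than taken from the discussion, and the exact privilege and temporary-table syntax assumed here is only a sketch of the tree being described:

    -- Hypothetical sketch only: a user given createdb rights (and therefore,
    -- after the change described above, catalog-update rights as well).
    CREATE USER report_user CREATEDB;

    -- An ordinary user should also be able to use a temporary table, since
    -- the backend maps it to an internal pg_temp relation on the user's
    -- behalf, without needing the usecatupd permission.
    CREATE TEMP TABLE scratch (id int4, note text);
    INSERT INTO scratch VALUES (1, 'no usecatupd needed');
    SELECT * FROM scratch;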
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 27 Sep 1999 12:43:09 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Cannot insert into temp tables" }, { "msg_contents": "Hmm. I see a problem coming up for myself :(\n\nI'm working on JOIN syntax and (hopefully) OUTER JOIN capabilities for\nthe next release. That will take a lot of time afaik. We also have\nlots of great new features (almost certainly) coming up, such as WAL\nand RI. Lots of stuff, and taken together it should lead to a v7.0\nrelease.\n\nHowever! I've been waiting for v7.0 to do internal remapping of the\ndatetime types (and possibly other types; I'm not remembering right\nnow) so that the 4-byte TIMESTAMP becomes, internally, what is\ncurrently DATETIME and so that the current DATETIME becomes a synonym\nfor TIMESTAMP. If we are releasing v7.0 rather than v6.6, then I\nshould work on that now to ensure it is completed. If we might release\nas v6.6, I should continue to work on OUTER JOIN, since the type\nchanges shouldn't go in yet.\n\nAny opinions/comments on the right path to take? With code changes and\ndocs changes I would think that the datetime stuff is two or a few\nweeks work with my off-work time, which I suppose wouldn't impace a\nrelease schedule too much...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Fri, 08 Oct 1999 15:07:33 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Features for next release" }, { "msg_contents": "Thomas Lockhart wrote:\n\n>\n> Hmm. I see a problem coming up for myself :(\n>\n> I'm working on JOIN syntax and (hopefully) OUTER JOIN capabilities for\n> the next release. That will take a lot of time afaik. We also have\n> lots of great new features (almost certainly) coming up, such as WAL\n> and RI. Lots of stuff, and taken together it should lead to a v7.0\n> release.\n\n What I actually see when starting/stopping the postmaster\n lets me think that Vadim alread has the WAL target in sight.\n\n And for RI I can say that we'll have at least FOREIGN KEY for\n MATCH FULL without PENDANT in the next release. The key\n features are in place and there's plenty of time left to do\n all the stuff around.\n\n>\n> However! I've been waiting for v7.0 to do internal remapping of the\n> datetime types (and possibly other types; I'm not remembering right\n> now) so that the 4-byte TIMESTAMP becomes, internally, what is\n> currently DATETIME and so that the current DATETIME becomes a synonym\n> for TIMESTAMP. If we are releasing v7.0 rather than v6.6, then I\n> should work on that now to ensure it is completed. If we might release\n> as v6.6, I should continue to work on OUTER JOIN, since the type\n> changes shouldn't go in yet.\n>\n> Any opinions/comments on the right path to take? With code changes and\n> docs changes I would think that the datetime stuff is two or a few\n> weeks work with my off-work time, which I suppose wouldn't impace a\n> release schedule too much...\n\n Break TIMESTAMP right now. IMHO it would be THE reason (a\n functional incompatibility to v6.5) we where looking for to\n be 100% sure the next release must be v7.0.\n\n I clearly vote for v7.0 - even if I personally hoped we would\n have a little chance to produce the ultimate devils\n PostgreSQL-v6.6.6 someday.\n\n Apropos devil: tomorrow Braunschweig Lions vs. Hamburg Blue\n Devils here in Hamburg - German bowl! 
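For anyone not familiar with the MATCH FULL form Jan mentions, here is a minimal, hypothetical sketch of such a constraint; the tables and columns are invented for illustration and do not come from this thread. With MATCH FULL, a multi-column referencing key must be either entirely NULL or match a referenced row completely:

    -- Illustrative sketch only: a composite foreign key declared MATCH FULL.
    CREATE TABLE projects (
        customer_id int4,
        project_no  int4,
        PRIMARY KEY (customer_id, project_no)
    );

    CREATE TABLE tasks (
        task_id     int4 PRIMARY KEY,
        customer_id int4,
        project_no  int4,
        FOREIGN KEY (customer_id, project_no)
            REFERENCES projects (customer_id, project_no) MATCH FULL
    );

    -- Under MATCH FULL, a tasks row may leave both customer_id and
    -- project_no NULL, or fill in both with values that exist in projects;
    -- a half-filled key is rejected.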
Go Devils go!\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Fri, 8 Oct 1999 18:25:50 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Features for next release" }, { "msg_contents": "> Hmm. I see a problem coming up for myself :(\n> \n> I'm working on JOIN syntax and (hopefully) OUTER JOIN capabilities for\n> the next release. That will take a lot of time afaik. We also have\n> lots of great new features (almost certainly) coming up, such as WAL\n> and RI. Lots of stuff, and taken together it should lead to a v7.0\n> release.\n> \n> However! I've been waiting for v7.0 to do internal remapping of the\n> datetime types (and possibly other types; I'm not remembering right\n> now) so that the 4-byte TIMESTAMP becomes, internally, what is\n> currently DATETIME and so that the current DATETIME becomes a synonym\n> for TIMESTAMP. If we are releasing v7.0 rather than v6.6, then I\n> should work on that now to ensure it is completed. If we might release\n> as v6.6, I should continue to work on OUTER JOIN, since the type\n> changes shouldn't go in yet.\n> \n> Any opinions/comments on the right path to take? With code changes and\n> docs changes I would think that the datetime stuff is two or a few\n> weeks work with my off-work time, which I suppose wouldn't impace a\n> release schedule too much...\n\nMy guess is 7.0. With the minor releases still being made, Marc is in\nno rush.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 8 Oct 1999 13:06:10 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Features for next release" }, { "msg_contents": "> Thomas Lockhart wrote:\n> \n> >\n> > Hmm. I see a problem coming up for myself :(\n> >\n> > I'm working on JOIN syntax and (hopefully) OUTER JOIN capabilities for\n> > the next release. That will take a lot of time afaik. We also have\n> > lots of great new features (almost certainly) coming up, such as WAL\n> > and RI. Lots of stuff, and taken together it should lead to a v7.0\n> > release.\n> \n> What I actually see when starting/stopping the postmaster\n> lets me think that Vadim alread has the WAL target in sight.\n\nYes, looks very impressive:\n\t\n\tDEBUG: Data Base System is starting up at Fri Oct 8 01:38:04 1999\n\tDEBUG: Data Base System was shutdowned at Fri Oct 8 01:37:42 1999\n\tDEBUG: CheckPoint record at (0, 968)\n\tDEBUG: Redo record at (0, 968); Undo record at (0, 0)\n\tDEBUG: NextTransactionId: 20995; NextOid: 0\n\tDEBUG: Invalid NextTransactionId/NextOid\n\tDEBUG: Data Base System is in production state at Fri Oct 8 01:38:04 1999\n\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 8 Oct 1999 13:15:56 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Features for next release" }, { "msg_contents": "Found this quote by Philip Greenspun (who is unabashedly in love with\nOracle) in a LinuxWorld article:\n\"> The open source purist's only realistic choice for an RDBMS\n> is PostgreSQL (see Resources for the URL). In some ways,\n> PostgreSQL has more advanced features than any commercial\n> RDBMS. Most important, the loosely organized, unpaid \n> developers of PostgreSQL were able to convert to an \n> Oracle-style multiversion concurrency system (see below),\n> leaving all the rest of the commercial competition \n> deadlocked in the dust. If you've decided to accept John\n> Ousterhout as your personal savior, you'll be delighted\n> to learn that you can run Tcl procedures inside the \n> PostgreSQL database. And if your business can't live without\n> commercial support for your RDBMS, you can buy it \n> (see Resources for a link). \n\nCheers! (the full series of articles, which are about AOLserver, is\navailable at http://linuxworld.com)\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Fri, 08 Oct 1999 13:28:56 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Interesting Quote you might enjoy about PGSQL." }, { "msg_contents": "> Found this quote by Philip Greenspun (who is unabashedly in love with\n> Oracle) in a LinuxWorld article:\n> \"> The open source purist's only realistic choice for an RDBMS\n> > is PostgreSQL (see Resources for the URL). In some ways,\n> > PostgreSQL has more advanced features than any commercial\n> > RDBMS. Most important, the loosely organized, unpaid \n> > developers of PostgreSQL were able to convert to an \n> > Oracle-style multiversion concurrency system (see below),\n> > leaving all the rest of the commercial competition \n> > deadlocked in the dust. If you've decided to accept John\n\nThat is am amazing quotation. Vince, can we get a link to it on the\nwebpage?\n\n\n\n> > Ousterhout as your personal savior, you'll be delighted\n> > to learn that you can run Tcl procedures inside the \n> > PostgreSQL database. And if your business can't live without\n> > commercial support for your RDBMS, you can buy it \n> > (see Resources for a link). \n> \n> Cheers! (the full series of articles, which are about AOLserver, is\n> available at http://linuxworld.com)\n> \n> --\n> Lamar Owen\n> WGCR Internet Radio\n> 1 Peter 4:11\n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 8 Oct 1999 14:25:36 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Interesting Quote you might enjoy about PGSQL.]" }, { "msg_contents": "On Fri, Oct 08, 1999 at 06:25:50PM +0200, Jan Wieck wrote:\n> I clearly vote for v7.0 - even if I personally hoped we would\n\nMe too. Although it means I have to check whether there is more stuff I have\nto add to ECPG before 7.0 comes out.\n\n> have a little chance to produce the ultimate devils\n> PostgreSQL-v6.6.6 someday.\n\nIt seems we are too superstitous to allow this number. :-)\n\n> Apropos devil: tomorrow Braunschweig Lions vs. Hamburg Blue\n> Devils here in Hamburg - German bowl! 
Go Devils go!\n\nJan, now you surprise me. You are a football fan? I never expected another\nGerman on this list to love football (american style that is). Maybe we are\nable to meet at a football games. But then I cannot come to the German Bowl. \n\nDo you go to NFLE games too?\n\nMichael\n\nP.S.: Sorry for being off-topic for this list in the last part of my mail.\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n", "msg_date": "Fri, 8 Oct 1999 20:57:43 +0200", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Features for next release" }, { "msg_contents": "It seems people are thinking winter or early spring (northern hemisphere\nthat is ;-)) for the next major release, and by then I think there will\nbe enough cool stuff done that we can call it 7.0. The only really big\nto-do item that no one seems to be committed to doing in this cycle is\ntuples bigger than a disk block, and maybe once the dust settles for\nlong queries someone will feel like tackling that...\n\nI have another reason for calling it 7.0, which is that if we fix\nthe function-call interface the way I want to, we will break existing\nuser-written loadable modules that contain C-language functions.\nBetter to do that in a \"7.0\" than in a \"6.6\", no?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 08 Oct 1999 19:14:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Features for next release " }, { "msg_contents": "Lamar Owen wrote:\n> \n> Found this quote by Philip Greenspun (who is unabashedly in love with\n> Oracle) in a LinuxWorld article:\n\nIt's nice to know he is a fan. btw, Philip was inquiring about the\nfeasibility of making PostgreSQL more compatible with Oracle to allow\nhim to port a large software project he built (in fact, you referred\nus to his web site on some other topic recently). Until we have outer\njoins it is a non-starter for him, but it was nice he asked :)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Sat, 09 Oct 1999 02:05:55 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Interesting Quote you might enjoy about PGSQL." }, { "msg_contents": "> > ... PostgreSQL has more advanced features than any commercial\n> > RDBMS. Most important, the loosely organized, unpaid\n> > developers of PostgreSQL were able to convert to an\n> > Oracle-style multiversion concurrency system (see below),\n> > leaving all the rest of the commercial competition\n> > deadlocked in the dust.\n\nThere we go, riding Vadim's coattails to glory :))\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Sat, 09 Oct 1999 02:18:39 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Interesting Quote you might enjoy about PGSQL." }, { "msg_contents": "> > > ... PostgreSQL has more advanced features than any commercial\n> > > RDBMS. Most important, the loosely organized, unpaid\n> > > developers of PostgreSQL were able to convert to an\n> > > Oracle-style multiversion concurrency system (see below),\n> > > leaving all the rest of the commercial competition\n> > > deadlocked in the dust.\n> \n> There we go, riding Vadim's coattails to glory :))\n\nYep. 
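The "Tcl procedures inside the database" that Greenspun's quote refers to are PL/Tcl functions. A rough, hypothetical example of one, assuming the pltcl procedural language has been installed in the database (the function itself is invented for illustration):

    -- Hypothetical example: a tiny PL/Tcl function returning the larger of
    -- two integers; the arguments are available to the Tcl body as $1 and $2.
    CREATE FUNCTION tcl_max(int4, int4) RETURNS int4 AS '
        if {$1 > $2} {return $1}
        return $2
    ' LANGUAGE 'pltcl';

    SELECT tcl_max(7, 11);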
The article makes it sound like we all did MVCC. No one told him\nit was just one person.\n\nI was mentioning to the MySQL folks that we have some brilliant\ndevelopers. Certainly most database companies have only a few really\ngood developers. Because we have the entire Internet to get people, we\ndraw the best database developers in the world.\n\nAdded to developers page:\n\nWe don't hire people to develop PostgreSQL. We have the entire Internet\nto get people, so we draw the best database developers in the world. \n\n[Jan, can we get a better image for the developers page?]\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 8 Oct 1999 22:46:43 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Interesting Quote you might enjoy about PGSQL." }, { "msg_contents": "> > > ... PostgreSQL has more advanced features than any commercial\n> > > RDBMS. Most important, the loosely organized, unpaid\n> > > developers of PostgreSQL were able to convert to an\n> > > Oracle-style multiversion concurrency system (see below),\n> > > leaving all the rest of the commercial competition\n> > > deadlocked in the dust.\n> \n> There we go, riding Vadim's coattails to glory :))\n> \n\nNew text:\n\nWe don't hire developers. We reach across the Internet, drawing the\nbest database developers in the world to PostgreSQL.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 8 Oct 1999 22:53:35 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Interesting Quote you might enjoy about PGSQL." }, { "msg_contents": "> > > ... PostgreSQL has more advanced features than any commercial\n> > > RDBMS. Most important, the loosely organized, unpaid\n> > > developers of PostgreSQL were able to convert to an\n> > > Oracle-style multiversion concurrency system (see below),\n> > > leaving all the rest of the commercial competition\n> > > deadlocked in the dust.\n> \n> There we go, riding Vadim's coattails to glory :))\n\nWow, that web page looks good now, with the quote at the bottom. Jan,\nwe need the nicer world image.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup.
Jan,\n> we need the nicer world image.\n\n You mean one with the mountains - no?\n\n Well, I'll spend some time, polish up the Povray sources etc.\n so Vince can easily maintain the map after - only that he\n needs Povray 3.1 and maybe Tcl/Tk 8.0 to do it, but that\n shouldn't be a problem since both are freely available and\n easily to install.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Mon, 11 Oct 1999 12:42:23 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Interesting Quote you might enjoy about PGSQL." }, { "msg_contents": "On Mon, 11 Oct 1999, Jan Wieck wrote:\n\n> Bruce Momjian wrote:\n> \n> >\n> > > > > ... PostgreSQL has more advanced features than any commercial\n> > > > > RDBMS. Most important, the loosely organized, unpaid\n> > > > > developers of PostgreSQL were able to convert to an\n> > > > > Oracle-style multiversion concurrency system (see below),\n> > > > > leaving all the rest of the commercial competition\n> > > > > deadlocked in the dust.\n> > >\n> > > There we go, riding Vadim's coattails to glory :))\n> >\n> > Wow, that web page looks good now, with the quote at the bottom. Jan,\n> > we need the nicer world image.\n> \n> You mean one with the mountains - no?\n> \n> Well, I'll spend some time, polish up the Povray sources etc.\n> so Vince can easily maintain the map after - only that he\n> needs Povray 3.1 and maybe Tcl/Tk 8.0 to do it, but that\n> shouldn't be a problem since both are freely available and\n> easily to install.\n\nNot a problem. I may have some questions re Povray's setup (I've \nnever used it).\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Mon, 11 Oct 1999 07:40:20 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Interesting Quote you might enjoy about PGSQL." }, { "msg_contents": "> > Wow, that web page looks good now, with the quote at the bottom. Jan,\n> > we need the nicer world image.\n> \n> You mean one with the mountains - no?\n> \n> Well, I'll spend some time, polish up the Povray sources etc.\n> so Vince can easily maintain the map after - only that he\n> needs Povray 3.1 and maybe Tcl/Tk 8.0 to do it, but that\n> shouldn't be a problem since both are freely available and\n> easily to install.\n\nYes. I would also like to see the sources.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 11 Oct 1999 09:45:54 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Interesting Quote you might enjoy about PGSQL." }, { "msg_contents": "> > > Wow, that web page looks good now, with the quote at the bottom. 
Jan,\n> > > we need the nicer world image.\n> > \n> > You mean one with the mountains - no?\n> > \n> > Well, I'll spend some time, polish up the Povray sources etc.\n> > so Vince can easily maintain the map after - only that he\n> > needs Povray 3.1 and maybe Tcl/Tk 8.0 to do it, but that\n> > shouldn't be a problem since both are freely available and\n> > easily to install.\n\nI have tcl 8.0.5 and povray here too.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 11 Oct 1999 09:46:17 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Interesting Quote you might enjoy about PGSQL." }, { "msg_contents": "Thomas Lockhart wrote:\n> \n> Lamar Owen wrote:\n> >\n> > Found this quote by Philip Greenspun (who is unabashedly in love with\n> > Oracle) in a LinuxWorld article:\n> \n> It's nice to know he is a fan. btw, Philip was inquiring about the\n> feasibility of making PostgreSQL more compatible with Oracle to allow\n> him to port a large software project he built (in fact, you referred\n> us to his web site on some other topic recently). Until we have outer\n> joins it is a non-starter for him, but it was nice he asked :)\n\nThat would be the ArsDigita Community System -- ACS for short. This is\nan absolutely wonderful community system for database-backed web sites\n-- www.arsdigita.com for more info. Version 7 territory! I have played\nwith ACS against the free development-only Oracle 8i for Linux -- very\ncomprehensive. However, Oracle for a database-backed web site is priced\nsky-high -- I saw one quote for $25,000 to back a website with Oracle 8i\non a 500MHz Pentium III (their license factors server speed into the\ncost -- a 386-16 might only cost $1,000 on that scale...;-)) \nCompatibility with Oracle is IMHO a pretty good thing, as long as we\ndon't become an Oracle clone -- PostgreSQL has too many nice extras for\nthat.\n\nYeah, it was _because_ of AOLserver and Philip Greenspun that I got into\nPostgreSQL in the first place. \n\nBTW: RedHat 6.1 shipped, with the last RPM's I posted.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 11 Oct 1999 11:24:38 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Interesting Quote you might enjoy about PGSQL." }, { "msg_contents": "On Fri, 8 Oct 1999, Tom Lane wrote:\n\n> It seems people are thinking winter or early spring (northern hemisphere\n> that is ;-)) for the next major release, and by then I think there will\n> be enough cool stuff done that we can call it 7.0. The only really big\n> to-do item that no one seems to be committed to doing in this cycle is\n> tuples bigger than a disk block, and maybe once the dust settles for\n> long queries someone will feel like tackling that...\n> \n> I have another reason for calling it 7.0, which is that if we fix\n> the function-call interface the way I want to, we will break existing\n> user-written loadable modules that contain C-language functions.\n> Better to do that in a \"7.0\" than in a \"6.6\", no?\n\nBased on all the comments flyng around, v7.0 it is. to be released...some\nday :)\n\nBruce, you asked for a v6.5.3 to be released...anything outstanding that\nshould prevent me from doing that tomorrow afternoon? \n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 12 Oct 1999 01:07:57 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Features for next release " }, { "msg_contents": "> > I have another reason for calling it 7.0, which is that if we fix\n> > the function-call interface the way I want to, we will break existing\n> > user-written loadable modules that contain C-language functions.\n> > Better to do that in a \"7.0\" than in a \"6.6\", no?\n> \n> Based on all the comments flyng around, v7.0 it is. to be released...some\n> day :)\n> \n> Bruce, you asked for a v6.5.3 to be released...anything outstanding that\n> should prevent me from doing that tomorrow afternoon? \n\nNot that I know of. I was waiting to see if we could come up with other\npatches, but I don't think that is going to happen anytime soon.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 12 Oct 1999 09:56:10 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Features for next release" }, { "msg_contents": ">> Bruce, you asked for a v6.5.3 to be released...anything outstanding that\n>> should prevent me from doing that tomorrow afternoon? \n\n> Not that I know of. I was waiting to see if we could come up with other\n> patches, but I don't think that is going to happen anytime soon.\n\nOn the other hand, is there a reason to be in a rush to put out 6.5.3?\nI didn't think we had many important changes from 6.5.2 yet.\n\nI imagine that we will acquire a few more fixes for 6.5.* as time goes\non, and it looks like 7.0 will be months away. Maybe a 6.5.* update\nevery month or two (if anything's been patched) would be the way to\nproceed.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 12 Oct 1999 10:14:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Features for next release " }, { "msg_contents": "> >> Bruce, you asked for a v6.5.3 to be released...anything outstanding that\n> >> should prevent me from doing that tomorrow afternoon? \n> \n> > Not that I know of. I was waiting to see if we could come up with other\n> > patches, but I don't think that is going to happen anytime soon.\n> \n> On the other hand, is there a reason to be in a rush to put out 6.5.3?\n> I didn't think we had many important changes from 6.5.2 yet.\n> \n> I imagine that we will acquire a few more fixes for 6.5.* as time goes\n> on, and it looks like 7.0 will be months away. Maybe a 6.5.* update\n> every month or two (if anything's been patched) would be the way to\n> proceed.\n> \n\nMissing pgaccess source code.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 12 Oct 1999 10:41:24 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Features for next release" }, { "msg_contents": "On Tue, 12 Oct 1999, Tom Lane wrote:\n\n> >> Bruce, you asked for a v6.5.3 to be released...anything outstanding that\n> >> should prevent me from doing that tomorrow afternoon? \n> \n> > Not that I know of. I was waiting to see if we could come up with other\n> > patches, but I don't think that is going to happen anytime soon.\n> \n> On the other hand, is there a reason to be in a rush to put out 6.5.3?\n> I didn't think we had many important changes from 6.5.2 yet.\n\nv6.5.3, I believe, was because PgAccess somehow got removed in v6.5.2 :(\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 12 Oct 1999 11:44:29 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Features for next release " }, { "msg_contents": "> On Tue, 12 Oct 1999, Tom Lane wrote:\n> \n> > >> Bruce, you asked for a v6.5.3 to be released...anything outstanding that\n> > >> should prevent me from doing that tomorrow afternoon? \n> > \n> > > Not that I know of. I was waiting to see if we could come up with other\n> > > patches, but I don't think that is going to happen anytime soon.\n> > \n> > On the other hand, is there a reason to be in a rush to put out 6.5.3?\n> > I didn't think we had many important changes from 6.5.2 yet.\n> \n> v6.5.3, I believe, was because PgAccess somehow got removed in v6.5.2 :(\n\nOK, it was me, I didn't want to add it to 6.5.2 because it was not a\nmajor bugfix, I got it 24 hours before release, it was a completely new\nsource tree layout, I asked the author to check my work, I didn't check\nmy work, and it was not committed properly. I feel better now. :-)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 12 Oct 1999 10:48:09 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Features for next release" }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n>> On the other hand, is there a reason to be in a rush to put out 6.5.3?\n>> I didn't think we had many important changes from 6.5.2 yet.\n\n> v6.5.3, I believe, was because PgAccess somehow got removed in v6.5.2 :(\n\nAh, right, I'd forgotten that. OK, we need a 6.5.3.\n\nIt sounds like Peter wants another day to check JDBC though...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 12 Oct 1999 11:24:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Features for next release " }, { "msg_contents": "Bruce Momjian wrote:\n >> Bruce, you asked for a v6.5.3 to be released...anything outstanding that\n >> should prevent me from doing that tomorrow afternoon? \n >\n >Not that I know of. 
I was waiting to see if we could come up with other\n >patches, but I don't think that is going to happen anytime soon.\n\nWill this include a patch to let NUMERIC fields be indexed?\n\n-- \n Vote against SPAM: http://www.politik-digital.de/spam/\n ========================================\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"Blessed is the man who makes the LORD his trust, \n who does not look to the proud, to those who turn \n aside to false gods.\" Psalms 40:4 \n\n\n", "msg_date": "Tue, 12 Oct 1999 17:03:46 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Features for next release " }, { "msg_contents": "> Bruce Momjian wrote:\n> >> Bruce, you asked for a v6.5.3 to be released...anything outstanding that\n> >> should prevent me from doing that tomorrow afternoon? \n> >\n> >Not that I know of. I was waiting to see if we could come up with other\n> >patches, but I don't think that is going to happen anytime soon.\n> \n> Will this include a patch to let NUMERIC fields be indexed?\n\nNot sure.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 12 Oct 1999 12:08:12 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Features for next release" }, { "msg_contents": ">\n> > > > Wow, that web page looks good now, with the quote at the bottom. Jan,\n> > > > we need the nicer world image.\n> > >\n> > > You mean one with the mountains - no?\n> > >\n> > > Well, I'll spend some time, polish up the Povray sources etc.\n> > > so Vince can easily maintain the map after - only that he\n> > > needs Povray 3.1 and maybe Tcl/Tk 8.0 to do it, but that\n> > > shouldn't be a problem since both are freely available and\n> > > easily to install.\n>\n> I have tcl 8.0.5 and povray here too.\n\n I've setup an example for the new developers page at\n\n <http://www.PostgreSQL.ORG/~wieck>\n\n The image size is adjusted for the page width.\n\n To maintain the hotspots I made a little, slightly\n overspecialized, Tcl/Tk application that creates the imagemap\n so it can easily be pasted into the page.\n\n I'll contact you and Vince via private mail after packing it\n up.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Tue, 12 Oct 1999 18:20:18 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "New developer globe (was: Re: [HACKERS] Interesting Quote you might\n\tenjoy about PGSQL.)" }, { "msg_contents": ">\n> > Bruce Momjian wrote:\n> > >> Bruce, you asked for a v6.5.3 to be released...anything outstanding that\n> > >> should prevent me from doing that tomorrow afternoon?\n> > >\n> > >Not that I know of. I was waiting to see if we could come up with other\n> > >patches, but I don't think that is going to happen anytime soon.\n> >\n> > Will this include a patch to let NUMERIC fields be indexed?\n>\n> Not sure.\n\n Not FOR sure. Adding the default operator class etc. 
is\n adding catalog entries. Thus it requires \"initdb\" and that's\n a NONO for bugfix releases according to our release policy.\n\n It's done in the v7.0 tree.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Tue, 12 Oct 1999 18:27:47 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Features for next release" }, { "msg_contents": "> >\n> > > > > Wow, that web page looks good now, with the quote at the bottom. Jan,\n> > > > > we need the nicer world image.\n> > > >\n> > > > You mean one with the mountains - no?\n> > > >\n> > > > Well, I'll spend some time, polish up the Povray sources etc.\n> > > > so Vince can easily maintain the map after - only that he\n> > > > needs Povray 3.1 and maybe Tcl/Tk 8.0 to do it, but that\n> > > > shouldn't be a problem since both are freely available and\n> > > > easily to install.\n> >\n> > I have tcl 8.0.5 and povray here too.\n> \n> I've setup an example for the new developers page at\n> \n> <http://www.PostgreSQL.ORG/~wieck>\n> \n> The image size is adjusted for the page width.\n> \n> To maintain the hotspots I made a little, slightly\n> overspecialized, Tcl/Tk application that creates the imagemap\n> so it can easily be pasted into the page.\n> \n> I'll contact you and Vince via private mail after packing it\n> up.\n\nYikes, that's amazingly good looking.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 12 Oct 1999 12:34:25 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New developer globe (was: Re: [HACKERS] Interesting Quote you\n\tmight enjoy about PGSQL.)u" }, { "msg_contents": "> > I have tcl 8.0.5 and povray here too.\n> \n> I've setup an example for the new developers page at\n> \n> <http://www.PostgreSQL.ORG/~wieck>\n> \n> The image size is adjusted for the page width.\n> \n> To maintain the hotspots I made a little, slightly\n> overspecialized, Tcl/Tk application that creates the imagemap\n> so it can easily be pasted into the page.\n> \n> I'll contact you and Vince via private mail after packing it\n> up.\n\nI just realized I can put my mouse over a pin, and the name appears. \nThat is great.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 12 Oct 1999 12:35:28 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New developer globe (was: Re: [HACKERS] Interesting Quote you\n\tmight enjoy about PGSQL.)" }, { "msg_contents": "Here is my proposal for an outline for a PostgreSQL book. Many of us\nhave been asked by publishers about writing a book. 
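Tying this back to Oliver's question above about indexing NUMERIC columns: once the default operator class Jan mentions exists in the catalogs, nothing beyond the ordinary CREATE INDEX form should be required. A hypothetical sketch, with invented names:

    -- Illustration only: a plain index on a NUMERIC column, which depends on
    -- the default numeric operator class being present in the system catalogs.
    CREATE TABLE invoices (
        invoice_no int4 PRIMARY KEY,
        amount     numeric(12,2)
    );
    CREATE INDEX invoices_amount_idx ON invoices (amount);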
Here is what I\nthink would be a good outline for the book.\n\nI am interested in whether this is a good outline for a PostgreSQL book,\nhow our existing documentation matches this outline, where our existing\ndocumentation can be managed into a published book, etc.\n\nAny comments would be welcome.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n...................................................................\n\nThe attached document is in both web page and text formats.\nView the one which looks best.\n\n\n\n\nPostgreSQL Book Proposal\n\n\n\n\n\n\n\n\n\n\n\nPostgreSQL Book Proposal\nBruce Momjian\n\n\n\n1.\nIntroduction\n2.\nInstallation\n\n\n(a)\nGetting POSTGRESQL\n(b)\nCompiling\n(c)\nInitialization\n(d)\nStarting the server\n(e)\nCreating a database\n(f)\nIssuing database commands \n3.\nIntroduction to SQL\n\n\n\n(a)\nWhy a database?\n(b)\nCreating tables \n(c)\nAdding data with INSERT\n(d)\nViewing data with SELECT\n(e)\nRemoving data with DELETE\n(f)\nModifying data with UPDATE\n(g)\nRestriction with WHERE\n(h)\nSorting data with ORDER BY\n(i)\nUsage of NULL values\n4.\nAdvanced SQL Commands\n\n\n\n(a)\nInserting data from a SELECT\n(b)\nAggregates: COUNT, SUM, etc.\n(c)\nGROUP BY with aggregates\n(d)\nHAVING with aggregates\n(e)\nJoining tables\n(f)\nUsing table aliases\n(g)\nUNION clause\n(h)\nSubqueries\n(i)\nTransactions\n(j)\nCursors\n(k)\nIndexing\n(l)\nTable defaults\n(m)\nPrimary/Foreign keys\n(n)\nAND/OR usage \n(o)\nLIKE clause usage\n(p)\nTemporary tables\n(q)\nImporting data\n5.\nPOSTGRESQL'S Unique Features\n\n\n\n(a)\nObject ID'S (OID)\n(b)\nMulti-version Concurrency Control (MVCC)\n(c)\nLocking and Deadlocks\n(d)\nVacuum\n(e)\nViews\n(f)\nRules\n(g)\nSequences\n(h)\nTriggers\n(i)\nLarge Objects(BLOBS)\n(j)\nAdding User-defined Functions\n(k)\nAdding User-defined Operators\n(l)\nAdding User-defined Types\n(m)\nExotic Preinstalled Types\n(n)\nArrays\n(o)\nInheritance\n6.\nInterfacing to the POSTGRESQL Database\n\n\n\n(a)\nC Language API\n(b)\nEmbedded C\n(c)\nC++\n(d)\nJAVA\n(e)\nODBC\n(f)\nPERL\n(g)\nTCL/TK\n(h)\nPYTHON\n(i)\nWeb access (PHP)\n(j)\nServer-side programming (PLPGSQL and SPI)\n7.\nPOSTGRESQL Adminstration\n\n\n\n(a)\nCreating users and databases\n(b)\nBackup and restore\n(c)\nPerformance tuning\n(d)\nTroubleshooting\n(e)\nCustomization options\n(f)\nSetting access permissions\n8.\nAdditional Resources\n\n\n\n(a)\nFrequently Asked Questions (FAQ'S)\n(b)\nMailing list support\n(c)\nSupplied documentation\n(d)\nCommercial support\n(e)\nModifying the source code\n\n\n\n\n\n\n PostgreSQL Book Proposal\n \n Bruce Momjian\n \n 1.\n Introduction\n 2.\n Installation\n (a)\n Getting POSTGRESQL\n (b)\n Compiling\n (c)\n Initialization\n (d)\n Starting the server\n (e)\n Creating a database\n (f)\n Issuing database commands\n 3.\n Introduction to SQL\n (a)\n Why a database?\n (b)\n Creating tables\n (c)\n Adding data with INSERT\n (d)\n Viewing data with SELECT\n (e)\n Removing data with DELETE\n (f)\n Modifying data with UPDATE\n (g)\n Restriction with WHERE\n (h)\n Sorting data with ORDER BY\n (i)\n Usage of NULL values\n 4.\n Advanced SQL Commands\n (a)\n Inserting data from a SELECT\n (b)\n Aggregates: COUNT, SUM, etc.\n (c)\n GROUP BY with aggregates\n (d)\n HAVING with aggregates\n (e)\n Joining tables\n (f)\n Using table aliases\n (g)\n UNION clause\n (h)\n Subqueries\n (i)\n Transactions\n (j)\n 
Cursors\n (k)\n Indexing\n (l)\n Table defaults\n (m)\n Primary/Foreign keys\n (n)\n AND/OR usage\n (o)\n LIKE clause usage\n (p)\n Temporary tables\n (q)\n Importing data\n 5.\n POSTGRESQL'S Unique Features\n (a)\n Object ID'S (OID)\n (b)\n Multi-version Concurrency Control (MVCC)\n (c)\n Locking and Deadlocks\n (d)\n Vacuum\n (e)\n Views\n (f)\n Rules\n (g)\n Sequences\n (h)\n Triggers\n (i)\n Large Objects(BLOBS)\n (j)\n Adding User-defined Functions\n (k)\n Adding User-defined Operators\n (l)\n Adding User-defined Types\n (m)\n Exotic Preinstalled Types\n (n)\n Arrays\n (o)\n Inheritance\n 6.\n Interfacing to the POSTGRESQL Database\n (a)\n C Language API\n (b)\n Embedded C\n (c)\n C++\n (d)\n JAVA\n (e)\n ODBC\n (f)\n PERL\n (g)\n TCL/TK\n (h)\n PYTHON\n (i)\n Web access (PHP)\n (j)\n Server-side programming (PLPGSQL and SPI)\n 7.\n POSTGRESQL Adminstration\n (a)\n Creating users and databases\n (b)\n Backup and restore\n (c)\n Performance tuning\n (d)\n Troubleshooting\n (e)\n Customization options\n (f)\n Setting access permissions\n 8.\n Additional Resources\n (a)\n Frequently Asked Questions (FAQ'S)\n (b)\n Mailing list support\n (c)\n Supplied documentation\n (d)\n Commercial support\n (e)\n Modifying the source code", "msg_date": "Tue, 12 Oct 1999 13:16:35 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Outline for PostgreSQL book" }, { "msg_contents": "Jan Wieck wrote:\n> I've setup an example for the new developers page at\n> \n> <http://www.PostgreSQL.ORG/~wieck>\n> \n> The image size is adjusted for the page width.\n> \n> To maintain the hotspots I made a little, slightly\n> overspecialized, Tcl/Tk application that creates the imagemap\n> so it can easily be pasted into the page.\n\nI get a Javascript error:\nJavaScript Error: http://www.PostgreSQL.ORG/~wieck/, line 51:\n\npg_dot_start is not defined.\n\nNetscape Communicator 4.61.\n\nOtherwise, this globe is stunning.\n\nThere are a couple of typos (to be pedantic...): (;-))\nUnder Thomas, 'documentation' is incorrectly spelled.\nUnder Tom Lane, 'He has works on the optimizer', unless some arcane\nusage of works is in vogue, should be either 'He works on' or 'He has\nworked on', with the latter fitting with the tense already used in his\ndescription.\n\nIf Vince wants to add my name as RPM maintainer, and no one objects, I\nvolunteer to continue to maintain the RedHat Linux RPM set (the last\nrevision of which shipped with RedHat 6.1).\n\nAgain, THIS GLOBE IS STUNNING!\n\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 12 Oct 1999 13:21:57 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New developer globe (was: Re: [HACKERS] Interesting Quote you\n\tmight enjoy about PGSQL.)" }, { "msg_contents": "> Again, THIS GLOBE IS STUNNING!\n> \n\nYep. That's the word for it. I will fix the wording once Jan is\nfinished. The mistakes are mine.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 12 Oct 1999 13:26:25 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New developer globe (was: Re: [HACKERS] Interesting Quote you\n\tmight enjoy about PGSQL.)" }, { "msg_contents": "On Tue, 12 Oct 1999, Lamar Owen wrote:\n\n> If Vince wants to add my name as RPM maintainer, and no one objects, I\n> volunteer to continue to maintain the RedHat Linux RPM set (the last\n> revision of which shipped with RedHat 6.1).\n\nVince always thought that was Bruce's page.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Tue, 12 Oct 1999 13:26:43 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New developer globe (was: Re: [HACKERS] Interesting Quote you\n\tmight enjoy about PGSQL.)" }, { "msg_contents": "Lamar Owen wrote:\n\n>\n> Jan Wieck wrote:\n> > I've setup an example for the new developers page at\n> >\n> > <http://www.PostgreSQL.ORG/~wieck>\n> >\n> > The image size is adjusted for the page width.\n> >\n> > To maintain the hotspots I made a little, slightly\n> > overspecialized, Tcl/Tk application that creates the imagemap\n> > so it can easily be pasted into the page.\n>\n> I get a Javascript error:\n> JavaScript Error: http://www.PostgreSQL.ORG/~wieck/, line 51:\n>\n> pg_dot_start is not defined.\n\n Ooops - thanks and corrected, try again. I forgot to remove\n the onLoad() in the <body>. That blinking-dot stuff isn't\n required anymore since the globe looks already nice enough\n without animation.\n\n> Again, THIS GLOBE IS STUNNING!\n\n\n Tnx :-)\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Tue, 12 Oct 1999 19:26:59 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: New developer globe (was: Re: [HACKERS] Interesting Quote you\n\tmight" }, { "msg_contents": "> On Tue, 12 Oct 1999, Lamar Owen wrote:\n> \n> > If Vince wants to add my name as RPM maintainer, and no one objects, I\n> > volunteer to continue to maintain the RedHat Linux RPM set (the last\n> > revision of which shipped with RedHat 6.1).\n> \n> Vince always thought that was Bruce's page.\n> \n\nSure, I'll add you once Jan finishes.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 12 Oct 1999 13:27:04 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New developer globe (was: Re: [HACKERS] Interesting Quote you\n\tmight enjoy about PGSQL.)" }, { "msg_contents": "Jan Wieck wrote:\n> > pg_dot_start is not defined.\n> \n> Ooops - thanks and corrected, try again. I forgot to remove\n> the onLoad() in the <body>. 
That blinking-dot stuff isn't\n> required anymore since the globe looks already nice enough\n> without animation.\n\nAh, yes.  Much better.\n\n--\nLamar Owen\nWGCR Internet Radio\nPisgah Forest, North Carolina\n1 Peter 4:11\n", "msg_date": "Tue, 12 Oct 1999 13:40:05 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New developer globe (was: Re: [HACKERS] Interesting Quote you\n\tmight" }, { "msg_contents": "On Tue, 12 Oct 1999, Bruce Momjian wrote:\n\n> > > I have tcl 8.0.5 and povray here too.\n> > \n> > I've setup an example for the new developers page at\n> > \n> >     <http://www.PostgreSQL.ORG/~wieck>\n> > \n> >     The image size is adjusted for the page width.\n> > \n> >     To maintain the hotspots I made a little, slightly\n> >     overspecialized, Tcl/Tk application that creates the imagemap\n> >     so it can easily be pasted into the page.\n> > \n> >     I'll contact you and Vince via private mail after packing it\n> >     up.\n> \n> I just realized I can put my mouse over a pin, and the name appears. \n> That is great.\n\nmost cool...one thing though...considering the width of those tags, how\nabout adding location info in there also?  For instance \"Edmund, Mergyl:\nStuttgart, Germany\" ... :)\n\nMarc G. Fournier                   ICQ#7615664               IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected]           secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 12 Oct 1999 14:43:54 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New developer globe (was: Re: [HACKERS] Interesting Quote you\n\tmight enjoy about PGSQL.)" }, { "msg_contents": "> > > I'll contact you and Vince via private mail after packing it\n> > > up.\n> > \n> > I just realized I can put my mouse over a pin, and the name appears. \n> > That is great.\n> \n> most cool...one thing though...considering the width of those tags, how\n> about adding location info in there also?  For instance \"Edmund, Mergyl:\n> Stuttgart, Germany\" ... :)\n\nMarc, let's not get Jan upset. :-)\n\nTheir location is already in the lower page next to their name anyway.\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 12 Oct 1999 13:45:15 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New developer globe (was: Re: [HACKERS] Interesting Quote you\n\tmight enjoy about PGSQL.)" }, { "msg_contents": ">\n> > > > I'll contact you and Vince via private mail after packing it\n> > > > up.\n> > >\n> > > I just realized I can put my mouse over a pin, and the name appears.\n> > > That is great.\n> >\n> > most cool...one thing though...considering the width of those tags, how\n> > about adding location info in there also?  For instance \"Edmund, Mergyl:\n> > Stuttgart, Germany\" ... :)\n>\n> Marc, let's not get Jan upset. :-)\n\n    Bruce, to upset me Marc needs a lot more effort!\n\n    And why not?  If you UPDATE you'll see that we can get rid of\n    most of the entire body. 
Remember - a picture says more than\n    a thousand words :-)\n\n    Instead there could be more important information like our\n    release policy (no features in bugfix releases, backward\n    compatibility in minor releases), how to submit\n    patches/improvements, where to find documentation on how to\n    enhance/hack PostgreSQL etc.\n\n>\n> Their location is already in the lower page next to their name anyway.\n\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me.                                  #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Tue, 12 Oct 1999 21:11:23 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: New developer globe (was: Re: [HACKERS] Interesting Quote you\n\tmight" }, { "msg_contents": "Hi,\n\nwho is the maintainer of the Developers List? \nI understand that I'm not a postgres coder, but isn't it\npossible to mention somewhere:\n\nBartunov, Oleg in Moscow, Russia ([email protected])\nhas introduced locale support.\n\nIt was rather long ago and was very limited, but\nI'm still proud of this little hack that many people\nused. (take into account I'm not a C-programmer at all :-)\n\n\n\tRegards,\n\n\t\tOleg\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Tue, 12 Oct 1999 23:28:48 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New developer globe (was: Re: [HACKERS] Interesting Quote you\n\tmight enjoy about PGSQL.)" }, { "msg_contents": "> > Marc, let's not get Jan upset. :-)\n> \n>     Bruce, to upset me Marc needs a lot more effort!\n> \n>     And why not?  If you UPDATE you'll see that we can get rid of\n>     most of the entire body.  Remember - a picture says more than\n>     a thousand words :-)\n\nWow, that is cool.  No need for text at bottom, except to list names and\ne-mail addresses.\n\n>     Instead there could be more important information like our\n>     release policy (no features in bugfix releases, backward\n>     compatibility in minor releases), how to submit\n>     patches/improvements, where to find documentation on how to\n>     enhance/hack PostgreSQL etc.\n\nThat's an interesting idea, though I think the documentation or download\npage may be a better place for that.\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 12 Oct 1999 15:37:39 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New developer globe (was: Re: [HACKERS] Interesting Quote you\n\tmight" }, { "msg_contents": "> Hi,\n> \n> who is the maintainer of the Developers List? \n> I understand that I'm not a postgres coder, but isn't it\n> possible to mention somewhere:\n> \n> Bartunov, Oleg in Moscow, Russia ([email protected])\n> has introduced locale support.\n> \n> It was rather long ago and was very limited, but\n> I'm still proud of this little hack that many people\n> used. 
(take into account I'm not a C-programmer at all :-)\n\nSure, let me do that when Jan is done.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 12 Oct 1999 15:38:22 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New developer globe (was: Re: [HACKERS] Interesting Quote you\n\tmight enjoy about PGSQL.)" }, { "msg_contents": "Bruce Momjian wrote:\n >Here is my proposal for an outline for a PostgreSQL book. Many of us\n >have been asked by publishers about writing a book. Here is what I\n >think would be a good outline for the book.\n >\n >I am interested in whether this is a good outline for a PostgreSQL book,\n >how our existing documentation matches this outline, where our existing\n >documentation can be managed into a published book, etc.\n >\n >Any comments would be welcome.\n \n\n\n > PostgreSQL Book Proposal\n > \n > Bruce Momjian\n > \n > 1.\n > Introduction\n > 2.\n > Installation\n...\n > 3.\n > Introduction to SQL\n...\n\nIt looks good; I have a comment on the order of chapters, however.\n\nI suggest that Installlation goes in an Appendix. More and more, people\nwill be coming to machines that already have software installed, or have\nthem installed as packages (.rpm or .deb).\n\nInstallation is the administrator's job, and not of interest to `normal'\nusers, so it should not be placed in the book as if every user had to do\nit.\n\n-- \n Vote against SPAM: http://www.politik-digital.de/spam/\n ========================================\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"Blessed is the man who makes the LORD his trust, \n who does not look to the proud, to those who turn \n aside to false gods.\" Psalms 40:4 \n\n\n", "msg_date": "Tue, 12 Oct 1999 21:49:55 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Outline for PostgreSQL book " }, { "msg_contents": "Bruce Momjian wrote:\n >> > Marc, let's not get Jan upset. :-)\n >> \n >> Bruce, to upset me Marc needs alot of more efford!\n >> \n >> And why not? If you UPDATE you'll see that we can get rid of\n >> most of the entire body. Remember - a picture says more than\n >> a thousand words :-)\n >\n >Wow, that is cool. 
No need for text at bottom, except to list names and\n >e-mail addresses.\n \nDon't forget the people who use Lynx and other such browsers.\n\nIn Lynx the JavaScript stuff is not visible at all, so there ought to be\nnormal text for those who cannot display images.\n\n-- \n Vote against SPAM: http://www.politik-digital.de/spam/\n ========================================\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"Blessed is the man who makes the LORD his trust, \n who does not look to the proud, to those who turn \n aside to false gods.\" Psalms 40:4 \n\n\n", "msg_date": "Tue, 12 Oct 1999 21:53:56 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: New developer globe (was: Re: [HACKERS] Interesting Quote you\n\tmight" }, { "msg_contents": "> It looks good; I have a comment on the order of chapters, however.\n> \n> I suggest that Installlation goes in an Appendix. More and more, people\n> will be coming to machines that already have software installed, or have\n> them installed as packages (.rpm or .deb).\n\n> \n> Installation is the administrator's job, and not of interest to `normal'\n> users, so it should not be placed in the book as if every user had to do\n> it.\n\nDone. Certainly better to get that at the end. That chapter is going\nto be a mess to look at.\n\nNew version attached. No PDF version this time. Do people like PDF?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n...................................................................\n\nThe attached document is in both web page and text formats.\nView the one which looks best.\n\n\n\n\nPostgreSQL Book Proposal\n\n\n\n\n\n\n\n\n\n\n\nPostgreSQL Book Proposal\nBruce Momjian\n\n\n\n1.\nIntroduction\n\n\n(a)\nHistory of POSTGRESQL\n(b)\nOpen source software\n(c)\nWhen to use a database\n2.\nIssuing database commands\n\n\n\n(a)\nStarting a database session\n(b)\nControlling a session\n(c)\nSending queries\n(d)\nGetting help\n3.\nIntroduction to SQL\n\n\n\n(a)\nCreating tables \n(b)\nAdding data with INSERT\n(c)\nViewing data with SELECT\n(d)\nRemoving data with DELETE\n(e)\nModifying data with UPDATE\n(f)\nRestricting with WHERE\n(g)\nSorting data with ORDER BY\n(h)\nUsing NULL values\n4.\nAdvanced SQL Commands\n\n\n\n(a)\nInserting data from a SELECT\n(b)\nAggregates: COUNT, SUM, ...\n(c)\nGROUP BY with aggregates\n(d)\nHAVING with aggregates\n(e)\nJoining tables\n(f)\nUsing table aliases\n(g)\nUNION clause\n(h)\nSubqueries\n(i)\nTransactions\n(j)\nCursors\n(k)\nIndexing\n(l)\nColumn defaults\n(m)\nPrimary/foreign keys\n(n)\nAND/OR usage \n(o)\nLIKE clause usage\n(p)\nTemporary tables\n(q)\nImporting data\n5.\nPOSTGRESQL'S Unique Features\n\n\n\n(a)\nObject ID'S (OID'S)\n(b)\nMulti-Version Concurrency Control\n(c)\nLocking and deadlocks\n(d)\nVacuum\n(e)\nViews\n(f)\nRules\n(g)\nSequences\n(h)\nTriggers\n(i)\nLarge objects(BLOBS)\n(j)\nAdding user-defined functions\n(k)\nAdding user-defined operators\n(l)\nAdding user-defined types\n(m)\nExotic pre-installed types\n(n)\nArrays\n(o)\nInheritance\n6.\nInterfacing to the POSTGRESQL Database\n\n\n\n(a)\nC Language API\n(b)\nEmbedded C\n(c)\nC++\n(d)\nJAVA\n(e)\nODBC\n(f)\nPERL\n(g)\nTCL/TK\n(h)\nPYTHON\n(i)\nWeb access (PHP)\n(j)\nServer-side 
programming (PLPGSQL and SPI)\n7.\nPOSTGRESQL Administration\n\n\n\n(a)\nCreating users and databases\n(b)\nBackup and restore\n(c)\nPerformance\n(d)\nTroubleshooting\n(e)\nCustomization\n(f)\nSetting access permissions\n(g)\nInternational character encodings\n8.\nAdditional Resources\n\n\n\n(a)\nFrequently Asked Questions (FAQ'S)\n(b)\nMailing list support\n(c)\nSupplied documentation\n(d)\nCommercial support\n(e)\nModifying the source code\n9.\nAppendix: Installation\n\n\n\n(a)\nGetting POSTGRESQL\n(b)\nCompiling\n(c)\nInitialization\n(d)\nStarting the server\n(e)\nCreating a database\n10.\nAnnotated Bibliography", "msg_date": "Tue, 12 Oct 1999 16:56:04 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Outline for PostgreSQL book" }, { "msg_contents": "\nOn 12-Oct-99 Bruce Momjian wrote:\n>> It looks good; I have a comment on the order of chapters, however.\n>> \n>> I suggest that Installlation goes in an Appendix. 
More and more, people\n>> will be coming to machines that already have software installed, or have\n>> them installed as packages (.rpm or .deb).\n> \n>> \n>> Installation is the administrator's job, and not of interest to `normal'\n>> users, so it should not be placed in the book as if every user had to do\n>> it.\n> \n> Done. Certainly better to get that at the end. That chapter is going\n> to be a mess to look at.\n> \n> New version attached. No PDF version this time. Do people like PDF?\n\nNot at this early stage. Wait till it's further along.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n", "msg_date": "Tue, 12 Oct 1999 17:18:11 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Outline for PostgreSQL book" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n\n> PostgreSQL Book Proposal\n> \n> Bruce Momjian\n\n[...]\n\n> 4.\n> Advanced SQL Commands\n\n[...]\n\n\n> 6.\n> Interfacing to the POSTGRESQL Database\n> (a)\n> C Language API\n> (b)\n> Embedded C\n> (c)\n> C++\n> (d)\n> JAVA\n> (e)\n> ODBC\n> (f)\n> PERL\n> (g)\n> TCL/TK\n> (h)\n> PYTHON\n> (i)\n> Web access (PHP)\n> (j)\n> Server-side programming (PLPGSQL and SPI)\n\nIsn't (j) logically part of chapter 4? (Or 5, if it's PostgreSQL\nspecific.) Or am I completely confused? (Where can I read about\nPLPGSQL and/or SPI, other than in the forthcoming book?)\n\nIf it came to a choice between having very short sections in chapter\n6, and having two or three of them covered in more depth, I'd go for\nthe latter. \n\n(Of course, you'll inevitably choose two or three which don't match\nwhat many readers will want (whichever two or three you choose), but\neven so, I think I'd get more out of a reasonably thorough coverage of\na couple of languages that I won't use than superficial coverage of\nall of them which doesn't really reveal anything useful.)\n", "msg_date": "12 Oct 1999 23:08:05 +0100", "msg_from": "Bruce Stephens <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Outline for PostgreSQL book" }, { "msg_contents": "At 16:56 12-10-99 -0400, Bruce Momjian wrote: \n>\n> > Installation is the administrator's job, and not of interest to `normal'\n> > users, so it should not be placed in the book as if every user had to do\n> > it.\n> Done. Certainly better to get that at the end. 
That chapter is going\n> to be a mess to look at.\n\n\nI agree on this particular item BUT...\n\nwouldn't it be nice if the book would be a 'hard' book?\nthere already are lots of database tutorials, introductions to sql etc.\n\nfocus on db developers, go into (maybe not that deep) how and why postgres\nis/was developed the way it is/was.\n\n'hard' books are a 'good thing' (tm) and it would be too bad if this would\nbecome another entry-level booklet.\n\nOTOH an entry-level book is probably required to get as big a user-base as\npossible.\n\n\n", "msg_date": "Wed, 13 Oct 1999 01:27:29 +0200", "msg_from": "gravity <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Re: [HACKERS] Outline for PostgreSQL book" }, { "msg_contents": "At 16:56 12-10-99 -0400, Bruce Momjian wrote: \n> Installation is the administrator's job, and not of interest to `normal'\n> users, so it should not be placed in the book as if every user had to do\n> it.\nDone. Certainly better to get that at the end. That chapter is going\nto be a mess to look at.\n\nI agree on this particular item BUT...\n\nwouldn't it be nice if the book would be a 'hard' book?\nthere already are lots of database tutorials, introductions to sql etc.\n\nfocus on db developers, go into (maybe not that deep) how and why postgres\nis/was developed the way it is/was.\n\n'hard' books are a 'good thing' (tm) and it would be too bad if this would\nbecome another entry-level booklet.\n\nOTOH an entry-level book is probably required to get as big a user-base as\npossible.\n\n\n", "msg_date": "Wed, 13 Oct 1999 01:41:27 +0200", "msg_from": "gravity <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Re: [HACKERS] Outline for PostgreSQL book" }, { "msg_contents": "\nOn 12-Oct-99 Bruce Momjian wrote:\n>> It looks good; I have a comment on the order of chapters, however.\n>> \n>> I suggest that Installlation goes in an Appendix. More and more, people\n>> will be coming to machines that already have software installed, or have\n>> them installed as packages (.rpm or .deb).\n> \n>> \n>> Installation is the administrator's job, and not of interest to `normal'\n>> users, so it should not be placed in the book as if every user had to do\n>> it.\n> \n> Done. Certainly better to get that at the end. That chapter is going\n> to be a mess to look at.\n\nIt doesn't have to be a mess. 
\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n", "msg_date": "Tue, 12 Oct 1999 19:56:11 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [DOCS] Re: [HACKERS] Outline for PostgreSQL book" }, { "msg_contents": "> I agree on this particular item BUT...\n> \n> wouldn't it be nice if the book would be a 'hard' book?\n> there already are lots of database tutorials, introductions to sql etc.\n> \n> focus on db developers, go into (maybe not that deep) how and why postgres\n> is/was developed the way it is/was.\n> \n> 'hard' books are a 'good thing' (tm) and it would be too bad if this would\n> become another entry-level booklet.\n> \n> OTOH an entry-level book is probably required to get as big a user-base as\n> possible.\n\nPublishers have already talked to me about multiple books. I think we\nneed to start with an newbie book, with the chapters clearly arranged so\nexperienced people can skip newbie chapters.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 12 Oct 1999 20:02:35 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Re: [HACKERS] Outline for PostgreSQL book" }, { "msg_contents": "> \n> On 12-Oct-99 Bruce Momjian wrote:\n> >> It looks good; I have a comment on the order of chapters, however.\n> >> \n> >> I suggest that Installlation goes in an Appendix. More and more, people\n> >> will be coming to machines that already have software installed, or have\n> >> them installed as packages (.rpm or .deb).\n> > \n> >> \n> >> Installation is the administrator's job, and not of interest to `normal'\n> >> users, so it should not be placed in the book as if every user had to do\n> >> it.\n> > \n> > Done. Certainly better to get that at the end. That chapter is going\n> > to be a mess to look at.\n> \n> It doesn't have to be a mess. \n> \n\nFrankly, if I can get away with having whole sections that point to URL\nfiles, so much the better. If I can just point them to the INSTALL\nfile, and be done with it, great.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 12 Oct 1999 20:03:47 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Re: [HACKERS] Outline for PostgreSQL book" }, { "msg_contents": "\nOn 13-Oct-99 Bruce Momjian wrote:\n>> \n>> On 12-Oct-99 Bruce Momjian wrote:\n>> >> It looks good; I have a comment on the order of chapters, however.\n>> >> \n>> >> I suggest that Installlation goes in an Appendix. 
More and more, people\n>> >> will be coming to machines that already have software installed, or have\n>> >> them installed as packages (.rpm or .deb).\n>> > \n>> >> \n>> >> Installation is the administrator's job, and not of interest to `normal'\n>> >> users, so it should not be placed in the book as if every user had to do\n>> >> it.\n>> > \n>> > Done. Certainly better to get that at the end. That chapter is going\n>> > to be a mess to look at.\n>> \n>> It doesn't have to be a mess. \n>> \n> \n> Frankly, if I can get away with having whole sections that point to URL\n> files, so much the better. If I can just point them to the INSTALL\n> file, and be done with it, great.\n\nThe installation can be quick and painless. I'll explain more tomorrow.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n", "msg_date": "Tue, 12 Oct 1999 20:55:59 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Re: [HACKERS] Outline for PostgreSQL book" }, { "msg_contents": "Bruce Momjian wrote:\n\n> > OTOH an entry-level book is probably required to get as big a user-base as\n> > possible.\n>\n> Publishers have already talked to me about multiple books. I think we\n> need to start with an newbie book, with the chapters clearly arranged so\n> experienced people can skip newbie chapters.\n\n I don't think it really matters that much if the first book\n about PostgreSQL is more for a newbie than a professional or\n vice versa. What count's is that it is up to date and\n correct. If I go to a book store and find only one book on a\n topic, it's usually not the one \"I\" was looking for. But\n what would make other authors write another book on the same\n topic - most likely the authors who write details I haven't\n known before? It's the success of the former one.\n\n Think about it a little.\n\n The first book has to be successful. Therefore it has to\n address most of the interested people. Those who know how to\n get the information they need out of manpages, RFC's and W3C\n recommendations aren't the ppl who to address in this case.\n So let it please be a newbie book, and the hard ones will\n follow.\n\n Another problem is that during the last release cycles, it\n wasn't that easy to follow all the changes in the\n capabilities of PostgreSQL. Not even for me, and I'm not\n counting myself to the outermost circle. Now what chance do\n you give a book that's written based on v6.5 if we are about\n to release v7.1 some months ahead? And more important, if it\n happens this way, does our \"aggressive\" development invite\n other authors to take a chance on the same topic? I don't\n think so.\n\n If we really want professional publishing about PostgreSQL\n (we want - no?), the core team has to co-operate with the\n authors of those books in a way, that they can write their\n book based on the upcoming release and sell it with a CD\n where that release is included. 
At the time it is published,\n there should only be bugfixes available on the net - not\n already two newer releases.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Wed, 13 Oct 1999 04:15:07 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [DOCS] Re: [HACKERS] Outline for PostgreSQL book" }, { "msg_contents": "> Bruce Momjian wrote:\n> \n> > > OTOH an entry-level book is probably required to get as big a user-base as\n> > > possible.\n> >\n> > Publishers have already talked to me about multiple books. I think we\n> > need to start with an newbie book, with the chapters clearly arranged so\n> > experienced people can skip newbie chapters.\n> \n> I don't think it really matters that much if the first book\n> about PostgreSQL is more for a newbie than a professional or\n> vice versa. What count's is that it is up to date and\n> correct. If I go to a book store and find only one book on a\n> topic, it's usually not the one \"I\" was looking for. But\n> what would make other authors write another book on the same\n> topic - most likely the authors who write details I haven't\n> known before? It's the success of the former one.\n> \n> Think about it a little.\n> \n> The first book has to be successful. Therefore it has to\n> address most of the interested people. Those who know how to\n> get the information they need out of manpages, RFC's and W3C\n> recommendations aren't the ppl who to address in this case.\n> So let it please be a newbie book, and the hard ones will\n> follow.\n\n\n\nThat was my thought too.\n\n\n> Another problem is that during the last release cycles, it\n> wasn't that easy to follow all the changes in the\n> capabilities of PostgreSQL. Not even for me, and I'm not\n> counting myself to the outermost circle. Now what chance do\n> you give a book that's written based on v6.5 if we are about\n> to release v7.1 some months ahead? And more important, if it\n> happens this way, does our \"aggressive\" development invite\n> other authors to take a chance on the same topic? I don't\n> think so.\n> \n> If we really want professional publishing about PostgreSQL\n> (we want - no?), the core team has to co-operate with the\n> authors of those books in a way, that they can write their\n> book based on the upcoming release and sell it with a CD\n> where that release is included. At the time it is published,\n> there should only be bugfixes available on the net - not\n> already two newer releases.\n\nI have a list of interested publishers, and am going to post it so\npeople can get involved and start writing.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 12 Oct 1999 22:36:54 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Re: [HACKERS] Outline for PostgreSQL book" }, { "msg_contents": "You should put one special section to talk about DATE. (datetime,\nabstime, etc..)\nThat is the most asked by the mailing list.\nYou could put tons of samples on how to use and manipulate DATE.\nie. 
How to get the time, how to get the minute, how to get the hour,\n how to get the month in number (1,2,3,4),\n how to get the month in word (JAN, FEB, MAR)\n\nThe most important thing is to have alot of samples.\nPractical samples are more useful than just syntax.\n\nRegards,\nChairudin Sentosa\n\n> Bruce Momjian wrote:\n> \n> Here is my proposal for an outline for a PostgreSQL book. Many of us\n> have been asked by publishers about writing a book. Here is what I\n> think would be a good outline for the book.\n> \n> I am interested in whether this is a good outline for a PostgreSQL\n> book,\n> how our existing documentation matches this outline, where our\n> existing\n> documentation can be managed into a published book, etc.\n> \n> Any comments would be welcome.\n> \n> --\n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania\n> 19026\n> ...................................................................\n> \n> The attached document is in both web page and text formats.\n> View the one which looks best.\n> \n> ---------------------------------------------------------------\n> \n> PostgreSQL Book Proposal\n> \n> Bruce Momjian\n> \n> 1. Introduction\n> 2. Installation\n> \n> (a) Getting POSTGRESQL\n> (b) Compiling\n> (c) Initialization\n> (d) Starting the server\n> (e) Creating a database\n> (f) Issuing database commands\n> 3. Introduction to SQL\n> \n> (a) Why a database?\n> (b) Creating tables\n> (c) Adding data with INSERT\n> (d) Viewing data with SELECT\n> (e) Removing data with DELETE\n> (f) Modifying data with UPDATE\n> (g) Restriction with WHERE\n> (h) Sorting data with ORDER BY\n> (i) Usage of NULL values\n> 4. Advanced SQL Commands\n> \n> (a) Inserting data from a SELECT\n> (b) Aggregates: COUNT, SUM, etc.\n> (c) GROUP BY with aggregates\n> (d) HAVING with aggregates\n> (e) Joining tables\n> (f) Using table aliases\n> (g) UNION clause\n> (h) Subqueries\n> (i) Transactions\n> (j) Cursors\n> (k) Indexing\n> (l) Table defaults\n> (m) Primary/Foreign keys\n> (n) AND/OR usage\n> (o) LIKE clause usage\n> (p) Temporary tables\n> (q) Importing data\n> 5. POSTGRESQL'S Unique Features\n> \n> (a) Object ID'S (OID)\n> (b) Multi-version Concurrency Control (MVCC)\n> (c) Locking and Deadlocks\n> (d) Vacuum\n> (e) Views\n> (f) Rules\n> (g) Sequences\n> (h) Triggers\n> (i) Large Objects(BLOBS)\n> (j) Adding User-defined Functions\n> (k) Adding User-defined Operators\n> (l) Adding User-defined Types\n> (m) Exotic Preinstalled Types\n> (n) Arrays\n> (o) Inheritance\n> 6. Interfacing to the POSTGRESQL Database\n> \n> (a) C Language API\n> (b) Embedded C\n> (c) C++\n> (d) JAVA\n> (e) ODBC\n> (f) PERL\n> (g) TCL/TK\n> (h) PYTHON\n> (i) Web access (PHP)\n> (j) Server-side programming (PLPGSQL and SPI)\n> 7. POSTGRESQL Adminstration\n> \n> (a) Creating users and databases\n> (b) Backup and restore\n> (c) Performance tuning\n> (d) Troubleshooting\n> (e) Customization options\n> (f) Setting access permissions\n> 8. 
Additional Resources\n> \n>         (a) Frequently Asked Questions (FAQ'S)\n>         (b) Mailing list support\n>         (c) Supplied documentation\n>         (d) Commercial support\n>         (e) Modifying the source code\n", "msg_date": "Wed, 13 Oct 1999 11:22:01 +0700", "msg_from": "Chairudin Sentosa Harjo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Outline for PostgreSQL book" }, { "msg_contents": "> Here is my proposal for an outline for a PostgreSQL book.  Many of us\n> have been asked by publishers about writing a book. 
Here is what I\n> think would be a good outline for the book.\n> \n> I am interested in whether this is a good outline for a PostgreSQL book,\n> how our existing documentation matches this outline, where our existing\n> documentation can be managed into a published book, etc.\n> \n> Any comments would be welcome.\n\nA chapter following the Introduction discussing the philosophy behind\nPostgres development, the benefits of Postgres, and an outline of the\narchitecture would be useful.\n\nFor the rest - I can't wait :)\n\n--------\nRegards\nTheo\n", "msg_date": "Wed, 13 Oct 1999 09:53:45 +0200", "msg_from": "Theo Kramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Outline for PostgreSQL book" }, { "msg_contents": "\nOn 13-Oct-99 Vince Vielhaber wrote:\n> \n> On 13-Oct-99 Bruce Momjian wrote:\n\n2BOOK Authors:\nPlease, try to keep rights for translating this book into another\nlanguages by you self, not by publisher.\n\n\nI may ask some St.Pitersburg's publishing company \nto make russian translation of this book, but some publishers\nlike O'Reilly have too hard license policy \nand too long reaction time. \n\n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* There will come soft rains ...\n", "msg_date": "Wed, 13 Oct 1999 12:06:59 +0400 (MSD)", "msg_from": "Dmitry Samersoff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Re: [HACKERS] Outline for PostgreSQL book" }, { "msg_contents": "Dmitry Samersoff wrote:\n> \n> On 13-Oct-99 Vince Vielhaber wrote:\n> >\n> > On 13-Oct-99 Bruce Momjian wrote:\n> \n> 2BOOK Authors:\n> Please, try to keep rights for translating this book into another\n> languages by you self, not by publisher.\n> \n> I may ask some St.Pitersburg's publishing company\n> to make russian translation of this book, but some publishers\n> like O'Reilly have too hard license policy\n> and too long reaction time.\n> \n\nI may ask an Indonesian publishing company to make\nan Indonesian translation of this book too.\nI may help with the translation from English to Indonesian.\n\nRegards,\nChairudin Sentosa\n", "msg_date": "Wed, 13 Oct 1999 15:21:35 +0700", "msg_from": "Chairudin Sentosa Harjo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Re: [HACKERS] Outline for PostgreSQL book" }, { "msg_contents": "Chairudin Sentosa Harjo wrote:\n> \n> You should put one special section to talk about DATE. (datetime,\n> abstime, etc..)\n> That is the most asked by the mailing list.\n> You could put tons of samples on how to use and manipulate DATE.\n> ie. 
How to get the time, how to get the minute, how to get the hour,\n>     how to get the month in number (1,2,3,4),\n     how to get the month in word (JAN, FEB, MAR)\n     how to select all individuals that are more than 50 years old today\nat 12:00PM\n\nAnd of course about TimeZones (why sometimes I insert one date and \nget out another ;-p)\n\nAnd of course mention that mostly it is not PostgreSQL's problem but\njust a very hairy subject ;(\n\nAnd perhaps some discussion about Date/Time types also in the section \nfor programming languages (what you get from date field, how to be \nsure that what you give to postgresql is understood correctly)\n\n> The most important thing is to have alot of samples.\n> Practical samples are more useful than just syntax.\n\nAgreed.\n\nMaybe \"Advanced SQL Commands\" could also contain:\n\n* Outer Joins\n* Casts (both AS and ::, with some explanations on why and when)\n\nNot sure if Views and Triggers should go under Unique features - they are \nsupported on most commercial SQL databases.\n\n\"Interfacing to the POSTGRESQL Database\" could also contain more options\nthan PHP,\nat least as references - there are many more of both established (CGI,\nPyApache,mod_perl) \nand emerging (like the recent pgxml announcement) technologies for that\npurpose.\n\nServer side programming should also mention PL/Tcl \n\nUnder which section would the requirements/restrictions (db size, field\nsize/count, record size/count) \nor lack of them :) go?\n(I recently checked Oracle8i for Linux, and it quoted 128MB RAM as\nminimum and 256 \nas recommended - I think we are a lot less demanding, I guess PG can\nreasonably run in 16MB ?)\n\n-----------------\nHannu\n", "msg_date": "Wed, 13 Oct 1999 12:43:55 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Outline for PostgreSQL book" }, { "msg_contents": "\n\n> Bruce Momjian wrote:\n> \n> \n> New version attached.  No PDF version this time.  Do people like PDF?\n> \n\nThere could also be a chapter on client-side tools - pgAccess et al.\nand also small (but working ;) examples for using ODBC from MS Access, \nMS Query, Delphi, JBuilder, Visual Cafe - each about 2-5 pages to get \npeople started.\n\n--------------\nHannu\n", "msg_date": "Wed, 13 Oct 1999 12:48:58 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Outline for PostgreSQL book" }, { "msg_contents": "On Tue, 12 Oct 1999, Vince Vielhaber wrote:\n\n> \n> On 13-Oct-99 Bruce Momjian wrote:\n> > Frankly, if I can get away with having whole sections that point to URL\n> > files, so much the better.  If I can just point them to the INSTALL\n> > file, and be done with it, great.\n> \n> The installation can be quick and painless.  I'll explain more tomorrow.\n\nOk it's tomorrow.  This will apply mainly to the actual installation \nprocess, but will also apply to the instructions for the book.\n\n1) Add to configure: --with-postgres-user= and --with-postgres-group=\n   with the default being postgres:postgres.\n\n2) When generating the makefiles, use the -g and -o parameters to install.\n   These, of course, will come from configure.\n\nAt this point, anyone can build PostgreSQL, but only root and the \npostgres user can install it. 
Either way, the ownership will be correct\nsince we added the -g and -o to install during the configuration process.\n\nCurrently, the only difference between the way I do it and the above is\nthat I override INSTALL in Makefile.custom (which I believe Tom Lockhart\ntipped me off to). I build as myself and use sudo to install (as many\nother admins also do).\n\n3) gmake\n\n4) gmake install (or sudo gmake install)\n\nAlternately, steps 3 and 4 can be done this way:\n\n3a) gmake > gmake.out 2>&1 &\n tail -f gmake.out\n\n4a) gmake install >> gmake.out 2>&1 &\n tail -f gmake.out\n\nBut I don't usually bother.\n\nThe final step may not be necessary if steps 1 and 2 are done right. I\ndo it just to make sure the directories have the right permissions:\n\n5) cd /usr/local (or whatever the directory is just below postgres')\n chown -R postgres:postgres pgsql (substitute as necessary).\n\nThen do whatever initdb's and createdb's etc as necessary/desired.\n\nEven doing step 5 (and doing the custom makefile), on a 450 P-III running\nFreeBSD 3.2-RELEASE, I have the entire system installed and running in \nabout 20-30 minutes.\n\n\nSo how's that? There may be some minor differences between systems, \nI seem to recall HP having some with install(1), but I believe there\nwas a workaround script that fixed it for most things called bsd.install\nor something like that. I'll verify HPs install on systems from 8-10.2\nlater this mourning - think I have a couple SGIs I can check too.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 13 Oct 1999 05:53:12 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Re: [HACKERS] Outline for PostgreSQL book" }, { "msg_contents": "[Charset KOI8-R unsupported, filtering to ASCII...]\n> \n> On 13-Oct-99 Vince Vielhaber wrote:\n> > \n> > On 13-Oct-99 Bruce Momjian wrote:\n> \n> 2BOOK Authors:\n> Please, try to keep rights for translating this book into another\n> languages by you self, not by publisher.\n> \n> \n> I may ask some St.Pitersburg's publishing company \n> to make russian translation of this book, but some publishers\n> like O'Reilly have too hard license policy \n> and too long reaction time. \n\nActually, I want to make sure the book is accessible on the web. I will\nkeep your translation idea in mind. Good point.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
|  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 13 Oct 1999 07:09:58 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Re: [HACKERS] Outline for PostgreSQL book" }, { "msg_contents": "[Charset KOI8-R unsupported, filtering to ASCII...]\n> \n> On 13-Oct-99 Vince Vielhaber wrote:\n> > \n> > On 13-Oct-99 Bruce Momjian wrote:\n> \n> 2BOOK Authors:\n> Please, try to keep rights for translating this book into another\n> languages by you self, not by publisher.\n> \n> \n> I may ask some St.Pitersburg's publishing company \n> to make russian translation of this book, but some publishers\n> like O'Reilly have too hard license policy \n> and too long reaction time. \n> \n\nFYI, I just did the bibliography, and got:\n\n        (a) The Practical SQL Handbook, Bowman et al., Addison Wesley\n        (b) Web Development with PHP and PostgreSQL, \\ldots{}, Addison Wesley\n        (c) A Guide to The SQL Standard, C.J. Date, Addison Wesley\n        (d) An Introduction to Database Systems, C.J. Date, Addison Wesley\n        (e) SQL For Smarties, Joe Celko, Morgan, Kaufmann\n\nLooks like Addison Wesley is the winner.\n\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 13 Oct 1999 07:11:34 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Re: [HACKERS] Outline for PostgreSQL book" }, { "msg_contents": "> And perhaps some discussion about Date/Time types also in the section \n> for programming languages (what you get from date field, how to be \n> sure that what you give to postgresql is understood correctly)\n> \n> > The most important thing is to have alot of samples.\n> > Practical samples are more useful than just syntax.\n> \n> Agreed.\n> \n> Maybe \"Advanced SQL Commands\" could also contain:\n> \n> * Outer Joins\n\nBut we don't have them yet.\n\n> * Casts (both AS and ::, with some explanations on why and when)\n> \n> Not sure if Views and Triggers should go under Unique features - they are \n> supported on most commercial SQL databases.\n\nYes, I know, but Triggers are specific to the database, and I guess I\nhave to move views to the other section.\n\n> \n> \"Interfacing to the POSTGRESQL Database\" could also contain more options\n> than PHP,\n> at least as references - there are many more of both established (CGI,\n> PyApache,mod_perl) \n\nThat's probably too advanced for this book.  Not sure how much detail I\nam going to give each interface anyway.  Probably just two examples of\neach.\n\n> and emerging (like the recent pgxml announcement) technologies for that\n> purpose.\n> \n> Server side programming should also mention PL/Tcl \n> \n\nAdded.\n\n\n> Under which section would the requirements/restrictions (db size, field\n> size/count, record size/count) \n> or lack of them :) go?\n\nUnder Administration, or Installation.\n\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 13 Oct 1999 07:20:13 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Outline for PostgreSQL book" }, { "msg_contents": "> \n> \n> > Bruce Momjian wrote:\n> > \n> > \n> > New version attached.  No PDF version this time. 
Do people like PDF?\n> > \n> \n> There could be also a chapter on client side tools - pgAccess et.al.\n> and also small (but working ;) examples for using ODBC from MS Access, \n> MS Query, Delphi, JBuilder, Visual Cafe - each about 2-5 pages to get \n> people started.\n\nI wasn't sure where to put pgaccess. I will add it to interfaces. The\nothers are probably better left to another book, Interfaces PostgreSQL\nto PC's.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 13 Oct 1999 07:21:43 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Re: [HACKERS] Outline for PostgreSQL book" }, { "msg_contents": "On Wed, 13 Oct 1999, Bruce Momjian wrote:\n\n> > And perhaps some discussion about Date/Time types also in the section \n> > for programming languages (what you get from date field, how to be \n> > sure that what you give to postgresql is understood correctly)\n> > \n> > > The most important thing is to have alot of samples.\n> > > Practical samples are more useful than just syntax.\n> > \n> > Agreed.\n> > \n> > Maybe \"Advanced SQL Commands\" could also contain:\n> > \n> > * Outer Joins\n> \n> But we don't have them yet.\n\nYes, but we will (or should) by the time the book comes out. \n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 13 Oct 1999 07:35:21 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Outline for PostgreSQL book" }, { "msg_contents": "> > 2BOOK Authors:\n> > Please, try to keep rights for translating this book into another\n> > languages by you self, not by publisher.\n> > \n> > \n> > I may ask some St.Pitersburg's publishing company \n> > to make russian translation of this book, but some publishers\n> > like O'Reilly have too hard license policy \n> > and too long reaction time. \n> \n> Actually, I want to make sure the book is accessible on the web. I will\n> keep your translation idea in mind. Good point.\n\nWhat about translating the regular docs first? Are you planning on using a\nlot of the existing documentation for the book? Perhaps one could organize\nthe distributed documentation as a sort of abridged version of the\nofficial book, similar to what is being done with Perl. Just an idea.\n\nAnd what kind of timespan did you have in mind? \n\nI had the insane plan of providing a German documentation (at least the\nUser's Guide) in time for 7.0. (big market) On the other hand, I have lots\nof insane plans ...\n\n\t-Peter\n\n", "msg_date": "Wed, 13 Oct 1999 13:49:30 +0200 (MET DST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Re: [HACKERS] Outline for PostgreSQL book" }, { "msg_contents": "> > Actually, I want to make sure the book is accessible on the web. I will\n> > keep your translation idea in mind. Good point.\n> \n> What about translating the regular docs first? 
Are you planning on using a\n> lot of the existing documentation for the book? Perhaps one could organize\n> the distributed documentation as a sort of abridged version of the\n> official book, similar to what is being done with Perl. Just an idea.\n> \n> And what kind of timespan did you have in mind? \n> \n> I had the insane plan of providing a German documentation (at least the\n> User's Guide) in time for 7.0. (big market) On the other hand, I have lots\n> of insane plans ...\n\nNot sure on a timespan. I think I could do most if it in one month if I\ndid nothing else. My wife seems supportive of the idea.\n\nNot sure how to merge current documentation into it. I would like to\npoint them to URL locations as much as possible. I thought if I give\nthem enough to get started, and to understand how the current docs fit\ntogether, that would be good.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 13 Oct 1999 07:54:02 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Re: [HACKERS] Outline for PostgreSQL book" }, { "msg_contents": "On Wed, 13 Oct 1999, Bruce Momjian wrote:\n\n> Not sure how to merge current documentation into it. I would like to\n> point them to URL locations as much as possible. I thought if I give\n> them enough to get started, and to understand how the current docs fit\n> together, that would be good.\n\nPersonally, I always think that computer books that point you to URLs to\nget the complete information are less than desirable. The very point of\nreading the book is that you don't have to get up to your computer all the\ntime. Books should be self-contained and add to the existing documentation\nsince otherwise I won't need it.\n\nAlso, think about the fact that the online documentation might change more\nquickly than a book is published. In a magazine you can do that, but in a\nbook that's questionable. I have a few books that are only about two years\nold and the information in them is still very valid, but the URLs with all\nthe examples and all don't work anymore. Who knows why, I don't have the\ntime to find out.\n\nBut the publishers are probably a lot smarter in that area than I am.\n\n\t-Peter\n\n\n", "msg_date": "Wed, 13 Oct 1999 14:08:01 +0200 (MET DST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Re: [HACKERS] Outline for PostgreSQL book" }, { "msg_contents": "> On Wed, 13 Oct 1999, Bruce Momjian wrote:\n> \n> > Not sure how to merge current documentation into it. I would like to\n> > point them to URL locations as much as possible. I thought if I give\n> > them enough to get started, and to understand how the current docs fit\n> > together, that would be good.\n> \n> Personally, I always think that computer books that point you to URLs to\n> get the complete information are less than desirable. The very point of\n> reading the book is that you don't have to get up to your computer all the\n> time. Books should be self-contained and add to the existing documentation\n> since otherwise I won't need it.\n> \n> Also, think about the fact that the online documentation might change more\n> quickly than a book is published. In a magazine you can do that, but in a\n> book that's questionable. 
I have a few books that are only about two years\n> old and the information in them is still very valid, but the URLs with all\n> the examples and all don't work anymore. Who knows why, I don't have the\n> time to find out.\n> \n\nI just don't want to produce a 500 page book to cover these topics.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 13 Oct 1999 08:51:12 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Re: [HACKERS] Outline for PostgreSQL book" }, { "msg_contents": ">Here is my proposal for an outline for a PostgreSQL book. Many of us\n>have been asked by publishers about writing a book. Here is what I\n>think would be a good outline for the book.\n>\n>I am interested in whether this is a good outline for a PostgreSQL book,\n>how our existing documentation matches this outline, where our existing\n>documentation can be managed into a published book, etc.\n>\n>Any comments would be welcome.\n\nFYI, here is the table of contents from my PostgreSQL book published\nin Japan at the beginning of this year (I do not guarantee the\naccuracy of translation to English, however).\n\nNote that the second edition about to be released will contain many more \ntopics including MVCC, transactions, views, rules, triggers...\n---\nTatsuo Ishii\n\n-----------------------------------------------------------\nChapter 1\n\nIntroduction to PostgreSQL\n\n1.1 History of PostgreSQL\n1.2 Advantages of PostgreSQL\n\nChapter 2\n\nInstallation\n\n2.1 Before installation\n2.2 Preparing for installation\n2.3 Compiling and installation\n2.4 Setting your environment\n2.5 Initialization\n2.6 Adding users and creating database\n2.7 Using psql\n2.8 Security\n\nChapter 3\n\nLearning PostgreSQL\n\n3.1 Processes and modules\n3.2 Source tree\n3.3 Data types\n3.4 User defined functions\n3.5 User defined operators\n3.6 User defined types\n\nChapter 4\n\nMake applications\n\n4.1 Tcl/Tk\n4.2 C\n4.3 PHP\n4.4 Perl\n4.5 Java\n\nChapter 5\n\nTips for PostgreSQL\n\n5.1 Backuping database\n5.2 Benchmark tests\n5.3 Performance tuning\n5.4 Troubles shooting\n5.5 New functionalities (while writing this book 6.4 was about to release)\n5.6 Developers and future plans\n5.7 Resources on the Internet\n\nAppendix\n\nTotal pages: 309\n", "msg_date": "Wed, 13 Oct 1999 23:37:56 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Outline for PostgreSQL book " }, { "msg_contents": "On Wed, 13 Oct 1999, Peter Eisentraut wrote:\n\n> I had the insane plan of providing a German documentation (at least the\n> User's Guide) in time for 7.0. (big market) On the other hand, I have lots\n> of insane plans ...\n\nHi Peter,\n\nI translated the FAQ and Linux-FAQ to german recently. I consider to\ntranslate the online doc, too. But it is so much for one person, so I\ndidn't started yet. What about a cooperation? 
\n(Warning: I really need some advices to set up my sgml system ;))\n\nhave fun!\nKarsten\n\n\n", "msg_date": "Wed, 13 Oct 1999 16:52:11 +0200 (CEST)", "msg_from": "Karsten Schulz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Re: [HACKERS] Outline for PostgreSQL book" }, { "msg_contents": "On Wed, 13 Oct 1999, Bruce Momjian wrote:\n\n> > Maybe \"Advanced SQL Commands\" could also contain:\n> > \n> > * Outer Joins\n> \n> But we don't have them yet.\n\nAs someone else mentioned previously, due to time required to write,\npublish and get this out on the shelf, shouldn't we be revolving this\naround the upcoming v7 release, which I believe Thomas has Outer Joins\ntarget'd for?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 13 Oct 1999 12:31:09 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Outline for PostgreSQL book" }, { "msg_contents": "Hello,\n\nI just joined this list and am getting into the middle of this thread,\nbut I did go back and read the archives.\n\nJust a few comments.\n\nTo answer a previous question, I like PDFs.\n\nI like the approach that Bruce Eckel has taken to writing his last\ncouple of books. It was somewhat of an opensource approach. He wrote the\nbook and kept it on his site in PDF format and released regular updates\nbased on either his additions to the book or corrections of errors and\nsuch by readers. It helped him to develop a good rapport with his\naudience. I do not believe the book being available on the web really\nimpacted the sales of the published book that much. I know I printed one\ncopy and purchased one copy.\nhttp://www.BruceEckel.com/\n\nOne thing to consider when writing the book is keeping it on target\nabout PostgreSQL. I was browsing Amazon and SQL books and was reading\nabout the MySQL book by O'Reilly. It got some really bad reviews based\nit being to general on databases and not specific about MySQL. Many said\ngreat db book, poor MySQL book.\nRead about it at, http://www.amazon.com/exec/obidos/ASIN/1565924347 .\n\nTheir are many books which cover the basics of databases and database\ndesign and you list a few below. What is needed is a PostgreSQL book.\nTheir will be overlap, but the emphasis should be on database\ndevelopment with PostgreSQL. That is what will differentiate it from\nothers.\n\nAddison Wesley is an excellent publisher as is O'Reilly. I would\nrecommend trying the above approach when writing the book and find a\npublisher friendly to the idea. Bruce's Thinking in Java was published\nby Prentice Hall. I don't know about any of the other publishers.\n\nJimmie Houchin\n\n\nBruce Momjian wrote:\n[snip]\n> FYI, I just did bibliography, and got:\n> \n> (a) The Practical SQL Handbook, Bowman et al., Addison Wesley\n> (b) Web Development with PHP and PostgreSQL, \\ldots{}, Addison Wesley\n> (c) A Guide to The SQL Standard, C.J. Date, Addison Wesley\n> (d) An Introduction to Database Systems, C.J. Date, Addison Wesley\n> (e) SQL For Smarties, Joe Celko, Morgan, Kaufmann\n> \n> Looks like Addision Wesley is the winner.\n> \n> --\n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n> \n> ************\n", "msg_date": "Wed, 13 Oct 1999 10:50:43 -0500", "msg_from": "Jimmie Houchin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Re: [HACKERS] Outline for PostgreSQL book" }, { "msg_contents": "Bruce Momjian wrote:\n> Actually, I want to make sure the book is accessible on the web. I will\n> keep your translation idea in mind. Good point.\n\nTalk to Philip Greenspun. Morgan-Kaufman cut him a deal where his book,\n\"Philip and Alex's Guide to Web Publishing\" is also available free on\nthe web (http://photo.net/wtr/thebook). His e-mail is [email protected].\n\nAnd you gotta see the quality of his book! Amazon carries it. \n(although, I must say, some of his choices for pictures were a little\nover the top....)\n\n--\nLamar Owen\nWGCR Internet Radio\nPisgah Forest, North Carolina\n1 Peter 4:11\n", "msg_date": "Wed, 13 Oct 1999 11:52:41 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Re: [HACKERS] Outline for PostgreSQL book" }, { "msg_contents": "> On Wed, 13 Oct 1999, Peter Eisentraut wrote:\n> \n> > I had the insane plan of providing a German documentation (at least the\n> > User's Guide) in time for 7.0. (big market) On the other hand, I have lots\n> > of insane plans ...\n> \n> Hi Peter,\n> \n> I translated the FAQ and Linux-FAQ to german recently. I consider to\n> translate the online doc, too. But it is so much for one person, so I\n> didn't started yet. What about a cooperation? \n> (Warning: I really need some advices to set up my sgml system ;))\n\nI got sgmltools 2.0.1 and installed it on BSDI with no problems. Only\nissue was that I needed install GNU m4. See SGML docs for sgmltools\ninstall notes.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 13 Oct 1999 12:43:45 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Re: [HACKERS] Outline for PostgreSQL book" }, { "msg_contents": "> On Wed, 13 Oct 1999, Bruce Momjian wrote:\n> \n> > > Maybe \"Advanced SQL Commands\" could also contain:\n> > > \n> > > * Outer Joins\n> > \n> > But we don't have them yet.\n> \n> As someone else mentioned previously, due to time required to write,\n> publish and get this out on the shelf, shouldn't we be revolving this\n> around the upcoming v7 release, which I believe Thomas has Outer Joins\n> target'd for?\n\nYes, for 7.0, but no sense in writing stuff until it is completed. I\ndon't even have WAL mentioned.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 13 Oct 1999 12:46:17 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Outline for PostgreSQL book" }, { "msg_contents": "Hello,\n\nI agree with your comments here. When I posted my message I had forgotten\nabout Philip's book. Excellent book and website, photos, many are great and\nmany are as you say \"a little over the top\".\n\nBased on his book which I learned about from his articles on LinuxWorld I\nhave decided to deploy my website with AOLserver. 
I currently am planning\nto use PostgreSQL for the database.\n\nHe does a great job of discussing how to develop excellent websites.\n\nJimmie Houchin\n\n\nAt 11:52 AM -0400 10/13/99, Lamar Owen wrote:\n>Bruce Momjian wrote:\n>> Actually, I want to make sure the book is accessible on the web. I will\n>> keep your translation idea in mind. Good point.\n>\n>Talk to Philip Greenspun. Morgan-Kaufman cut him a deal where his book,\n>\"Philip and Alex's Guide to Web Publishing\" is also available free on\n>the web (http://photo.net/wtr/thebook). His e-mail is [email protected].\n>\n>And you gotta see the quality of his book! Amazon carries it.\n>(although, I must say, some of his choices for pictures were a little\n>over the top....)\n>\n>--\n>Lamar Owen\n>WGCR Internet Radio\n>Pisgah Forest, North Carolina\n>1 Peter 4:11\n>\n>************\n\n", "msg_date": "Wed, 13 Oct 1999 11:48:24 -0500", "msg_from": "Jimmie Houchin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Re: [HACKERS] Outline for PostgreSQL book" }, { "msg_contents": "> Hello,\n> \n> I just joined this list and am getting into the middle of this thread,\n> but I did go back and read the archives.\n> \n> Just a few comments.\n> \n> To answer a previous question, I like PDFs.\n> \n> I like the approach that Bruce Eckel has taken to writing his last\n> couple of books. It was somewhat of an opensource approach. He wrote the\n> book and kept it on his site in PDF format and released regular updates\n> based on either his additions to the book or corrections of errors and\n> such by readers. It helped him to develop a good rapport with his\n> audience. I do not believe the book being available on the web really\n> impacted the sales of the published book that much. I know I printed one\n> copy and purchased one copy.\n> http://www.BruceEckel.com/\n\nYes, I would like to do that, and the publisher I was talking to today\nseemed to think it was fine. He said it helps sell books. I would be\nvery unhappy if the book could not be put on the web. In fact, he is\nsubscribed to the hackers list, so he may be reading this.\n\n\n> One thing to consider when writing the book is keeping it on target\n> about PostgreSQL. I was browsing Amazon and SQL books and was reading\n> about the MySQL book by O'Reilly. It got some really bad reviews based\n> it being to general on databases and not specific about MySQL. Many said\n> great db book, poor MySQL book.\n> Read about it at, http://www.amazon.com/exec/obidos/ASIN/1565924347 .\n\nGood point. I will keep it in mind.\n\n> Their are many books which cover the basics of databases and database\n> design and you list a few below. What is needed is a PostgreSQL book.\n> Their will be overlap, but the emphasis should be on database\n> development with PostgreSQL. That is what will differentiate it from\n> others.\n\n\nGood. And I will show actual PostgreSQL examples in the book, with\nPostgreSQL output from psql, etc.\n\n> Addison Wesley is an excellent publisher as is O'Reilly. I would\n> recommend trying the above approach when writing the book and find a\n> publisher friendly to the idea. Bruce's Thinking in Java was published\n> by Prentice Hall. I don't know about any of the other publishers.\n\nYes, I am a big Addison Wesley fan. I realized this when I found most\nof the books I like were from them. 
I am not a big O'Reilly fan, though\nI have a few of their books.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 13 Oct 1999 12:52:23 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Re: [HACKERS] Outline for PostgreSQL book" }, { "msg_contents": "> Bruce Momjian wrote:\n> > Actually, I want to make sure the book is accessible on the web. I will\n> > keep your translation idea in mind. Good point.\n> \n> Talk to Philip Greenspun. Morgan-Kaufman cut him a deal where his book,\n> \"Philip and Alex's Guide to Web Publishing\" is also available free on\n> the web (http://photo.net/wtr/thebook). His e-mail is [email protected].\n> \n> And you gotta see the quality of his book! Amazon carries it. \n> (although, I must say, some of his choices for pictures were a little\n> over the top....)\n\nAgain, having it on the web will be done. No better way to have it\navailable to everyone, and it helps sell books, and the publisher thinks\nthat is fine.\n\nMorgan-Kaufmann is good, but I don't have many of their books.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 13 Oct 1999 12:53:33 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Re: [HACKERS] Outline for PostgreSQL book" }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> \n> > PostgreSQL Book Proposal\n> > \n> > Bruce Momjian\n> \n> [...]\n> \n> > 4.\n> > Advanced SQL Commands\n> \n> [...]\n> \n> \n> > 6.\n> > Interfacing to the POSTGRESQL Database\n> > (a)\n> > C Language API\n> > (b)\n> > Embedded C\n> > (c)\n> > C++\n> > (d)\n> > JAVA\n> > (e)\n> > ODBC\n> > (f)\n> > PERL\n> > (g)\n> > TCL/TK\n> > (h)\n> > PYTHON\n> > (i)\n> > Web access (PHP)\n> > (j)\n> > Server-side programming (PLPGSQL and SPI)\n> \n> Isn't (j) logically part of chapter 4? (Or 5, if it's PostgreSQL\n> specific.) Or am I completely confused? (Where can I read about\n> PLPGSQL and/or SPI, other than in the forthcoming book?)\n> \n> If it came to a choice between having very short sections in chapter\n> 6, and having two or three of them covered in more depth, I'd go for\n> the latter. \n\nNot sure. They are properly 'programming' to me, so I put them there. \nThey address a similar programmatic need in the database.\n\n> (Of course, you'll inevitably choose two or three which don't match\n> what many readers will want (whichever two or three you choose), but\n> even so, I think I'd get more out of a reasonably thorough coverage of\n> a couple of languages that I won't use than superficial coverage of\n> all of them which doesn't really reveal anything useful.)\n\nI was going to do a newbie thing and show the advantages of each one. \nNot sure I want to go into great depth on any of them. Just enough to\nget people started, and using the documentation.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 13 Oct 1999 14:31:54 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [DOCS] Outline for PostgreSQL book" }, { "msg_contents": "Bruce Stephens wrote:\n> \n> Bruce Momjian <[email protected]> writes:\n> \n> > PostgreSQL Book Proposal\n> >\n> > Bruce Momjian\n> \n> [...]\n> \n> > 4.\n> > Advanced SQL Commands\n> \n> [...]\n> \n> > 6.\n> > Interfacing to the POSTGRESQL Database\n> > (a)\n> > C Language API\n> > (b)\n> > Embedded C\n> > (c)\n> > C++\n> > (d)\n> > JAVA\n> > (e)\n> > ODBC\n> > (f)\n> > PERL\n> > (g)\n> > TCL/TK\n> > (h)\n> > PYTHON\n> > (i)\n> > Web access (PHP)\n> > (j)\n> > Server-side programming (PLPGSQL and SPI)\n> \n> Isn't (j) logically part of chapter 4? (Or 5, if it's PostgreSQL\n> specific.) Or am I completely confused? (Where can I read about\n> PLPGSQL and/or SPI, other than in the forthcoming book?)\n> \n> If it came to a choice between having very short sections in chapter\n> 6, and having two or three of them covered in more depth, I'd go for\n> the latter.\n> \n> (Of course, you'll inevitably choose two or three which don't match\n> what many readers will want (whichever two or three you choose), but\n> even so, I think I'd get more out of a reasonably thorough coverage of\n> a couple of languages that I won't use than superficial coverage of\n> all of them which doesn't really reveal anything useful.)\n> \n> ************\n\n\nI second this opinion.\n", "msg_date": "Thu, 14 Oct 1999 08:24:28 +0700", "msg_from": "Chairudin Sentosa Harjo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [DOCS] Outline for PostgreSQL book" }, { "msg_contents": "\nBruce,\n\nWere you going to do this as a \"group\" project or\nis it a \"solo\" venture?\n\nIf its a group project, I assume you are the \"owner\"\nof the \"uber-contract\" until it sells to a book \npublisher. To do this right, you would divide\nthis contract into small peices, or sub-contracts.\nThen, as people contribute, their time is tracked\nagainst these components. Right now, for instance,\nyou have a book-outline project which people are \nworking on; it could be decided that the book-outline\nsub contract is worth \".5%\" of the book ownership.\nThen, as people track their time against these\nsub-contracts, a summary (once a week) is presented\nto you (the owner of the contract) for approval\nof their hours. If approved, then the contributer\nowns a portion of the sub-contract, and thus a \nportion of the overall book. How compensation is\ndone is up to you if you specify it before hand,\nor it is up to the owners if you wait until the book\nsells. The compensation schedule may be something \nlike this:\n \nUp to | Compensation\n------+-------------------\n 0.1% | Listed as \"helper\"\n 0.5% | Listed as \"contributor\", including a \n | very small biography; perhaps including\n | company name for free advertising. \n 2.0% | Listed as \"major contributor\", including\n | a full biography and a picture.\n10.0% | Listed as \"co-author\", entitiled to \n | the same percentage of royalty stream.\n30.0% | Listed as \"editor\", entitied to a \n | percentage of any book signing bonus plus\n | the same percentage of royalty stream\n\nThus, the total royalty stream allocated will\nbe less than 100%, and will be divided among,\nat most 9 people, which is managable. 
The remainder\nof the royalty stream can be donated to the\nsite maintenance or can be used to fund a bonus\npool for future projects / documentation maintance.\n\n...\n\nHowever, if it is a \"solo\" deal, then you you should \nmake it pretty clear -- otherwise I see heart-ache and\nbad feelings.\n\n...\n\nWhy do it this way? Beacuse PostgreSQL is a community\neffort, and a solo book sucks. Also, you want the\nbook done, so direct rewards for contribution work\ndoes wonders! To use parts of the documentation, \nThomas Lockheart would have to be given a percentage \nin the book, etc. Anyway, if the system is clear\nyou will get far more help than otherwise...\n\nI'll be glad to maintain the \"contract hierarchy\"\n(per your direction) and to accept timesheets\nfrom people, preparing a weelky summary for you\nto approve. Then, I will maintain a page with \nprogress on each sub-contract (or sub-sub-contract)\nso that we can go about this in a very organized way. \n\nI'm in it to see a kick ass book and to see\na collaborative effort win.\n\nWhat do you think? Thomas? \n\nClark\n\n\n", "msg_date": "Thu, 14 Oct 1999 10:04:32 -0400 (EDT)", "msg_from": "\"Clark C. Evans\" <[email protected]>", "msg_from_op": false, "msg_subject": "Business Plan for PostgreSQL book?" }, { "msg_contents": "Last night Marc and i picked up \n\nObject Relation DBMS\nMichael Stonebraker & Paul Brown\nPub: Morgan Kaufman\nISBN 1 55860 452 9\n\nIt was reccommented by a potential client, i'll put up \na review when i finish it.\n\nJeff\n\nOn Wed, 13 Oct 1999, Bruce Momjian wrote:\n\n> [Charset KOI8-R unsupported, filtering to ASCII...]\n> > \n> > On 13-Oct-99 Vince Vielhaber wrote:\n> > > \n> > > On 13-Oct-99 Bruce Momjian wrote:\n> > \n> > 2BOOK Authors:\n> > Please, try to keep rights for translating this book into another\n> > languages by you self, not by publisher.\n> > \n> > \n> > I may ask some St.Pitersburg's publishing company \n> > to make russian translation of this book, but some publishers\n> > like O'Reilly have too hard license policy \n> > and too long reaction time. \n> > \n> \n> FYI, I just did bibliography, and got:\n> \n> (a) The Practical SQL Handbook, Bowman et al., Addison Wesley\n> (b) Web Development with PHP and PostgreSQL, \\ldots{}, Addison Wesley\n> (c) A Guide to The SQL Standard, C.J. Date, Addison Wesley\n> (d) An Introduction to Database Systems, C.J. Date, Addison Wesley\n> (e) SQL For Smarties, Joe Celko, Morgan, Kaufmann\n> \n> Looks like Addision Wesley is the winner.\n> \n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ************\n> \n\nJeff MacDonald\[email protected]\n\n===================================================================\n So long as the Universe had a beginning, we can suppose it had a \ncreator, but if the Universe is completly self contained , having \nno boundry or edge, it would neither be created nor destroyed\n It would simply be.\n===================================================================\n\n\n", "msg_date": "Thu, 14 Oct 1999 11:07:52 -0300 (ADT)", "msg_from": "Jeff MacDonald <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Re: [HACKERS] Outline for PostgreSQL book" }, { "msg_contents": "At 7:04 AM -0700 10/14/99, Clark C. 
Evans wrote:\n>Bruce,\n>\n>Were you going to do this as a \"group\" project or\n>is it a \"solo\" venture?\n>\n>If its a group project, I assume you are the \"owner\"\n>of the \"uber-contract\" until it sells to a book\n>publisher. To do this right, you would divide\n>this contract into small peices, or sub-contracts.\n>Then, as people contribute, their time is tracked\n>against these components. Right now, for instance,\n>you have a book-outline project which people are\n>working on; it could be decided that the book-outline\n>sub contract is worth \".5%\" of the book ownership.\n>Then, as people track their time against these\n>sub-contracts, a summary (once a week) is presented\n>to you (the owner of the contract) for approval\n>of their hours. If approved, then the contributer\n\nThe way this is normally done is to do a page or word count of the finished\nproduct. Each author owns a \"chapter\". The editor negotiates with the\npublisher and takes a cut off the top. Counting time is a bad idea IMHO.\nToo much uncertainty and inequity.\n\nOf course in this case there is a lot of pre-existing text which makes\ncounting pages hard. Is there any way we can make this whole thing owned\nby postgresql.org and just use the proceeds for the project? I haven't\nbeen tracking the legal status.\n\nSignature failed Preliminary Design Review.\nFeasibility of a new signature is currently being evaluated.\[email protected], or [email protected]\n", "msg_date": "Thu, 14 Oct 1999 09:07:12 -0700", "msg_from": "\"Henry B. Hotz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Business Plan for PostgreSQL book?" }, { "msg_contents": "\nThanks Bruce, just thought that I'd put in\nthe suggestion. I'm sure with your experience\nit will be a fantastic book!\n\n;) Clark\n\nOn Thu, 14 Oct 1999, Bruce Momjian wrote:\n>\n> I am taking this on as a solo project.\n> \n\n", "msg_date": "Thu, 14 Oct 1999 13:51:16 -0400 (EDT)", "msg_from": "\"Clark C. Evans\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Business Plan for PostgreSQL book?" }, { "msg_contents": "> \n> Bruce,\n> \n> Were you going to do this as a \"group\" project or\n> is it a \"solo\" venture?\n\nI am taking this on as a solo project.\n\n> However, if it is a \"solo\" deal, then you you should \n> make it pretty clear -- otherwise I see heart-ache and\n> bad feelings.\n\nI have hesitated to talk about this in any more detail until I have\ndiscussed this via telephone with Marc, Thomas, and Tom Lane. I feel I\nneed to get clearance from them on this. Vadim, unfortunately, is too\nfar away to telephone.\n\n\n> Why do it this way? Beacuse PostgreSQL is a community\n> effort, and a solo book sucks. Also, you want the\n> book done, so direct rewards for contribution work\n> does wonders! To use parts of the documentation, \n> Thomas Lockheart would have to be given a percentage \n> in the book, etc. Anyway, if the system is clear\n> you will get far more help than otherwise...\n\nWell, I don't think a \"group-written\" book is going to read very well. \nEveryone has a different style, and a mis-mash of writing styles in a\nbook will not work. It also will take too long to produce a book in\nthat way.\n\nThe book will be available via the Web and in PDF format even before it\nis completed. (I am writing it using LyX/LaTeX). I have written the\nfirst two chapters, and will be putting them out for everyone to read\nand use very soon. 
This book project will clearly be a win for all\nPostgreSQL users, whether they buy the book or not.\n\nThat doesn't mean I will not be including significant amount of our\nexisting documentation. For example, I would probably include the\n'manual' pages at the end of the book, like many computer books.\n\nAs far a money, let me mention something. While making $0 with\nPostgreSQL (I don't use it in my work, or even at home to store any\ndata.), I have always offered to put money into the project because I\nthink it is only fair that the costs be born fairly by the people\ninvolved. I have sent money to support our server, I have offered to\nsend more in the past, and have offered to host the PostgreSQL Award\naround-the-world tour by including checks to pay for every leg of the\ntrip. I have done other monetary gifts for PostgreSQL.\n\nSo, if there need for some money for PostgreSQL, let me know. With or\nwithout the book, I am always interested in helping.\n\nOnce I talk to everyone, I will be saying more.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 14 Oct 1999 14:05:55 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Business Plan for PostgreSQL book?" }, { "msg_contents": "\nWell Henery, the collaborative book idea is dead.\nBut, here are my comments anyway. BTW, I didn't\nget your e-mail... is there a \"docs\" list and not\na \"pgsql-docs\" list? If so, where are the archives?\n\nOn Thu, 14 Oct 1999, Henery B. Hotz wrote:\n> >Then, as people contribute, their time is tracked\n> >against these components. Right now, for instance,\n> >you have a book-outline project which people are\n> >working on; it could be decided that the book-outline\n> >sub contract is worth \".5%\" of the book ownership.\n> >Then, as people track their time against these\n> >sub-contracts, a summary (once a week) is presented\n> >to you (the owner of the contract) for approval\n> >of their hours. \n>\n> The way this is normally done is to do a page or word count \n> of the finished product. Each author owns a \"chapter\". The \n> editor negotiates with the publisher and takes a cut off \n> the top. Counting time is a bad idea IMHO.\n> Too much uncertainty and inequity.\n\nI don't believe this is the case at all. For two reasons:\n\n1. When someone decides they want to contribute, their hours\n can be multipled by a factor commensurate with their \n relevant expeience. This multiplier can be wrong at first,\n but as the project moves along, the owner of the contract will\n have insentive to \"promote\" the multiplier for those who have\n really demonstrated that they are valid contributors.\n This will work out organically... besides most people will \n be very honest and submit a multiplier commensurate with their\n experience. The (nonexistent?) few that don't will be quickly\n identified... it really isn't all that hard.\n\n2. If someone logs a ton of hours and tries to get them approved,\n the contract owner would have a good laugh and simply deny the\n hours. 
I doubt this would happen in reality; in fact you might\n have the opposite problem -- people not turning in an adequate\n reflection of the time they have spent.\n\nAs for counting words or basing it on chapters -- this really limits \nthe ability of the vast number of contributers to feel like they \nare a part of the process. A tiny, but juicy kernel of wisdom is \nworth a ton. \n\nPity nobody want's to do it. Oh well. We all have such\nlittle faith in other people. Damn shame.\n\nClark\n\n\n\n\n", "msg_date": "Thu, 14 Oct 1999 14:12:33 -0400 (EDT)", "msg_from": "\"Clark C. Evans\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Business Plan for PostgreSQL book?" }, { "msg_contents": "> >If its a group project, I assume you are the \"owner\"\n> >of the \"uber-contract\" until it sells to a book\n> >publisher. To do this right, you would divide\n> >this contract into small peices, or sub-contracts.\n> >Then, as people contribute, their time is tracked\n> >against these components. Right now, for instance,\n> >you have a book-outline project which people are\n> >working on; it could be decided that the book-outline\n> >sub contract is worth \".5%\" of the book ownership.\n> >Then, as people track their time against these\n> >sub-contracts, a summary (once a week) is presented\n> >to you (the owner of the contract) for approval\n> >of their hours. If approved, then the contributer\n> \n> The way this is normally done is to do a page or word count of the finished\n> product. Each author owns a \"chapter\". The editor negotiates with the\n> publisher and takes a cut off the top. Counting time is a bad idea IMHO.\n> Too much uncertainty and inequity.\n> \n> Of course in this case there is a lot of pre-existing text which makes\n> counting pages hard. Is there any way we can make this whole thing owned\n> by postgresql.org and just use the proceeds for the project? I haven't\n> been tracking the legal status.\n\nThe first two chapters are Intro/History, like the one I wrote for the\nBSD magazine, and chapter 2 is using psql, which is all new.\n\nWe really don't have much hand-holding stuff. Chapters 1-4 are going to\nbe all new. The later chapters may use some existing stuff.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 14 Oct 1999 14:14:02 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Business Plan for PostgreSQL book?" }, { "msg_contents": " \nBruce,\n\nOn the slim chance that you may change your mind,\nI've decided to address some of your points.\n\nOn Thu, 14 Oct 1999, Bruce Momjian wrote:\n> \n> Well, I don't think a \"group-written\" book is going to read very well. \n\nYou already started one. It hardly looks like a \"solo\" \nproject to me. There is _nothing_ saying that it has to be\ndesign-by-committee.\n\nIn fact, I assumed that you would run the book as a \n\"dictatorship\" where you keep complete artistic\ncontrol over every element... including the voice.\n\nThere is a ton other people can do... think: \n \n \"What can I delegate\". \n\nIt will bring you amazing power -- you might end up owning only\n40% of the book.... 
however, the book will be 1000% better\nand sell thousands more copies.\n\n> Everyone has a different style, and a mis-mash of writing \n> styles in a book will not work.\n\nCompare to: Everyone has a different style, a mis-match of \n programming styles in a computer program will not work.\n\nAnswer: Modulize, design your book so that this will \n be a non-issue. 1/2 of the book will be dreaming\n up and presenting 'cool' examples. The little bit\n of text that is in a personal voice can be re-written.\n\n> It also will take too long to produce a book in that way.\n\nCompare to: It will take too long to make a computer program that way.\n \nAnswer: Once the outline is made; and the project is broken\n down into modules much of it can be done in parallel.\n Debugging can also be done in parallel (as Eric Raymond\n so clearly writes). So what if a complete re-write \n is needed at the end: Plan to throw one away.\n\n> The book will be available via the Web and in PDF format even before it\n> is completed. (I am writing it using LyX/LaTeX). I have written the\n> first two chapters, and will be putting them out for everyone to read\n\ncomment, debug, suggest improvements on, revise, help with,\n\n> and use very soon.\n\nYes, I know. You want to leverage the collaborative PostgreSQL \ncommunity process when every possible. Amazing how well it works!\n\n> This book project will clearly be a win for all\n> PostgreSQL users, whether they buy the book or not.\n\nThis much is true, but it's not the issue.\n\n> That doesn't mean I will not be including significant amount of our\n> existing documentation. For example, I would probably include the\n> 'manual' pages at the end of the book, like many computer books.\n\nThey are seveal hundred pages and will be a great\nresource when writing the book -- they must have taken\nhundreds of hours to generate. I doubt that they\nwould be all that useful verbadim at the end of the book.\n \n> As far a money, let me mention something. While making $0 with\n> PostgreSQL (I don't use it in my work, or even at home to store any\n> data.), I have always offered to put money into the project because I\n> think it is only fair that the costs be born fairly by the people\n> involved. I have sent money to support our server, I have offered to\n> send more in the past, and have offered to host the PostgreSQL Award\n> around-the-world tour by including checks to pay for every leg of the\n> trip. I have done other monetary gifts for PostgreSQL.\n\nYes, I know. Most committed free software developers are in a similar\nboat and I wish there was a more equitable way of developing software\nlike PostgreSQL.\n\n> So, if there need for some money for PostgreSQL, let me know. With or\n> without the book, I am always interested in helping.\n\nBruce, I'm not questioning your integrety; if anything it should be\nthe other way around as I am not a significant contributor and \nas an 'familar outsider' really don't have the right to question\nyour actions. Infact, I admire you a TON and that's why I'm \nspending _my_ time authoring this e-mail.\n\n...\n\n Please understand. I'm suggesting an alternative way of doing\n things; that maybye, just mabye could turn out to be bigger and \n more useful than expected. 
Consider this as a \"small project\" \n to see if the members of PostgreSQL community can deliver as\n a cohesive unit; not as an individual.\n \n If an accountable process like this were to work for a book --\n then I would bet solid money that it would work for\n application software development for profit. And this, \n being able to generate a profit should be the goal that we\n as a community should be striving for.\n\n...\n\n Imagine the book titled:\n \"PostgreSQL: The Definitive Guide, by Bruce Momjian\"\n \n Now imagine you going into Crysler corporation trying to \n bid on a production control system. Do you think you\n will get it with the Oracle representative right next to you?\n\n Imagine instead the book titled:\n\n \"PostgreSQL: The Definitive Guide\n A collaborative work by the PostgreSQL Community, \n Edited by Bruce Momjian\"\n\n Put yourself back in the board room at Crysler with\n Oracle sales person next to you. Do you think you\n are in a better position? I think so. The first book\n says you are a lone wolf. The second one shows\n you are a leader. It also demonstrates all too clearly\n that you an muster the entire PostgreSQL community \n behind you, ready to deliver on your promises.\n\n That is way powerful. Far more powerful than the \n lone wolf approach chosen by Larry Wall.\n \n...\n\nSo, you mentioned the $ word. Is this about $? Yes.\nHowever, it is not about the immediate money nor about\nyour right to profit from PostgreSQL. It is about a \nkey juncture for the PostgreSQL community; we can either\nfragment off as individuals... going Solo. Or,\nwe can develop a business model that lets us move\ntogether as a community. Oracle isn't scared about \nthe first one. Its petrified about the second.\n\nBest Wishes,\n\nClark\n\n", "msg_date": "Thu, 14 Oct 1999 15:01:55 -0400 (EDT)", "msg_from": "\"Clark C. Evans\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Business Plan for PostgreSQL book?" }, { "msg_contents": "> 2. If someone logs a ton of hours and tries to get them approved,\n> the contract owner would have a good laugh and simply deny the\n> hours. I doubt this would happen in reality; in fact you might\n> have the opposite problem -- people not turning in an adequate\n> reflection of the time they have spent.\n> \n> As for counting words or basing it on chapters -- this really limits \n> the ability of the vast number of contributers to feel like they \n> are a part of the process. A tiny, but juicy kernel of wisdom is \n> worth a ton. \n> \n> Pity nobody want's to do it. Oh well. We all have such\n> little faith in other people. Damn shame.\n\nI guess I figured that if we were going to do a collaborative book, we\nwould have done that with our documentation already. But in fact, it\nusually takes one person and lots of time. In our current Doc's case,\nit is Thomas.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 14 Oct 1999 15:34:39 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Business Plan for PostgreSQL book?" }, { "msg_contents": "> \n> Thanks Bruce, just thought that I'd put in\n> the suggestion. I'm sure with your experience\n> it will be a fantastic book!\n> \n\nIt was a good suggestion. 
I just don't think writing scales to many\npeople very well.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 14 Oct 1999 15:35:21 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Business Plan for PostgreSQL book?" }, { "msg_contents": "\n\nOn Thu, 14 Oct 1999, Bruce Momjian wrote:\n> I guess I figured that if we were going to do a collaborative book, we\n> would have done that with our documentation already. \n\nSeveal Reasons:\n\n(a) we didn't have the goal to write a book for profit, (b) we didn't \nhave a business model, (c) we didn't have a leader.\n\nSeems we have all three now:\n\n(a) Lots of people seem to have latched on to the goal, and it is \napparant that there is significant demand. (b) The business model \nis there, it may not be perfect, but we can nail that down once we \nhave experience. Everyone I've met in PostgreSQL is resonable; \nthe inital agreement will be loose, as the book progresses the \nagreement can be tightened down as needed. (c) We have a *sweet* \nleader, Bruce Momjian, he has proven to be on top of things -- a \nman with not only great vision, but the drive to make things tick.\n\nAre you ready to delegate?\n\nClark\n\n\n\n", "msg_date": "Thu, 14 Oct 1999 15:47:01 -0400 (EDT)", "msg_from": "\"Clark C. Evans\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Business Plan for PostgreSQL book?" }, { "msg_contents": "\nMy apologies. \n\nOn Thu, 14 Oct 1999, The Hermit Hacker wrote:\n> IMHO, if you feel that you need to continue along this vein, please take\n> it offlist and with Bruce privately...if someone else wishes to put the\n> time, effort and \"risk of maritial status\" up on the block and work on a\n> collaborative effort, so be it...but I personally am getting a major\n> distaste in my mouth from watching you trying to pressure Bruce into a\n> direction that he has already stated no desire to go. If ppl don't like\n> that, fine...don't buy the book when its done. Personally, I know where\n> my book buying dollars are going when the time comes...\n\n\n", "msg_date": "Thu, 14 Oct 1999 16:23:24 -0400 (EDT)", "msg_from": "\"Clark C. Evans\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Business Plan for PostgreSQL book?" }, { "msg_contents": "On Thu, 14 Oct 1999, Clark C. Evans wrote:\n\n> \n> \n> On Thu, 14 Oct 1999, Bruce Momjian wrote:\n> > I guess I figured that if we were going to do a collaborative book, we\n> > would have done that with our documentation already. \n> \n> Seveal Reasons:\n> \n> (a) we didn't have the goal to write a book for profit, (b) we didn't \n> have a business model, (c) we didn't have a leader.\n> \n> Seems we have all three now:\n> \n> (a) Lots of people seem to have latched on to the goal, and it is \n> apparant that there is significant demand. (b) The business model \n> is there, it may not be perfect, but we can nail that down once we \n> have experience. Everyone I've met in PostgreSQL is resonable; \n> the inital agreement will be loose, as the book progresses the \n> agreement can be tightened down as needed. (c) We have a *sweet* \n> leader, Bruce Momjian, he has proven to be on top of things -- a \n> man with not only great vision, but the drive to make things tick.\n> \n> Are you ready to delegate?\n\nDelegate what? 
Bruce is doing this as a solo project, what would he need\nto delegate? *confused look* Now, the way I see it, he *could* have gone\noff and written the book withuot letting any of us know about it, and\nwihtout asking any of us for input as to how we'd like to see it look...he\ndidn't. \n\nIMHO, if you feel that you need to continue along this vein, please take\nit offlist and with Bruce privately...if someone else wishes to put the\ntime, effort and \"risk of maritial status\" up on the block and work on a\ncollaborative effort, so be it...but I personally am getting a major\ndistaste in my mouth from watching you trying to pressure Bruce into a\ndirection that he has already stated no desire to go. If ppl don't like\nthat, fine...don't buy the book when its done. Personally, I know where\nmy book buying dollars are going when the time comes...\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 14 Oct 1999 17:56:07 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Business Plan for PostgreSQL book?" }, { "msg_contents": "> \n> My apologies. \n> \n> On Thu, 14 Oct 1999, The Hermit Hacker wrote:\n> > IMHO, if you feel that you need to continue along this vein, please take\n> > it offlist and with Bruce privately...if someone else wishes to put the\n> > time, effort and \"risk of maritial status\" up on the block and work on a\n> > collaborative effort, so be it...but I personally am getting a major\n> > distaste in my mouth from watching you trying to pressure Bruce into a\n> > direction that he has already stated no desire to go. If ppl don't like\n> > that, fine...don't buy the book when its done. Personally, I know where\n> > my book buying dollars are going when the time comes...\n\nThomas mentioned that I need to request more than 20 free copies from\nthe publisher. I will either get more for free, or pay for them myself\nand have them delivered to the main devlopers.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 14 Oct 1999 17:42:35 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Business Plan for PostgreSQL book?" }, { "msg_contents": "For external reasons I didn't get as much done as I wanted. Originally, I\nplanned for 4 weeks and I think I can keep that.\n\nThe source is posted at http://www.pathwaynet.com/~peter/psql3.tar.gz\n\nCHANGELOG\n* Accommodated object descriptions in \\d* commands.\n* Re-wrote \\dd\n* Re-wrote \\d \"table\"\n* \\pset command as generic way for setting printing options (e.g.,\n \\pset format html, \\pset null \"(null)\")\n* Proliferated use of const char * and enums\n* Rewrote \\di, \\dt, \\ds, \\dS. Say hello to \\dv for views, which are\n now recognized correctly. You can also call, e.g., \\dtvi for a list\n of indices, tables, and views. The possibilities are endless ...\n (where \"endless\" = 325)\n\nAlso I wrote new printing routines which are better abstracted so that one\ncould easily throw in new formats. Here are some examples. 
Let me know if\nyou hate them.\n\n(Sorry that this is a little long.)\n\npeter@localhost:5432 play=> \\pset format aligned\npeter@localhost:5432 play=> \\pset border 0\npeter@localhost:5432 play=> select * from foo; \n first second \n-------- ------------\n 2 \n 0 -----\n 0 hi\n 34 \n -1 -2\n yo &&& <> ho\n99999999 this is text\n(7 rows)\npeter@localhost:5432 play=> \\pset border 1\npeter@localhost:5432 play=> select * from foo;\n first | second \n----------+--------------\n 2 | \n 0 | -----\n 0 | hi\n 34 | \n -1 | -2\n | yo &&& <> ho\n 99999999 | this is text\n(7 rows)\npeter@localhost:5432 play=> \\pset border 2\npeter@localhost:5432 play=> select * from foo;\n+----------+--------------+\n| first | second |\n+----------+--------------+\n| 2 | |\n| 0 | ----- |\n| 0 | hi |\n| 34 | |\n| -1 | -2 |\n| | yo &&& <> ho |\n| 99999999 | this is text |\n+----------+--------------+\n(7 rows)\npeter@localhost:5432 play=> \\pset format unaligned\npeter@localhost:5432 play=> \\pset fieldsep \",,\"\npeter@localhost:5432 play=> select * from foo;\nfirst,,second\n2,,\n0,,-----\n0,,hi\n34,,\n-1,,-2\n,,yo &&& <> ho\n99999999,,this is text\n(7 rows)\npeter@localhost:5432 play=> \\t\nTurned on only tuples\npeter@localhost:5432 play=> \\x\nTurned on expanded table representation\npeter@localhost:5432 play=> select * from foo;\n\nfirst,,2\nsecond,,\n\nfirst,,0\nsecond,,-----\n\nfirst,,0\nsecond,,hi\n\nfirst,,34\nsecond,,\n\nfirst,,-1\nsecond,,-2\n\nfirst,,\nsecond,,yo &&& <> ho\n\nfirst,,99999999\nsecond,,this is text\npeter@localhost:5432 play=> \\pset border 1\npeter@localhost:5432 play=> \\pset format html\npeter@localhost:5432 play=> \\pset expanded\nTurned off expanded display\npeter@localhost:5432 play=> select * from foo;\n<table border=1>\n <tr valign=top>\n <td align=right>2</td>\n <td align=left>&nbsp;</td>\n </tr>\n <tr valign=top>\n <td align=right>0</td>\n <td align=left>-----</td>\n </tr>\n <tr valign=top>\n <td align=right>0</td>\n <td align=left>hi</td>\n </tr>\n <tr valign=top>\n <td align=right>34</td>\n <td align=left>&nbsp;</td>\n </tr>\n <tr valign=top>\n <td align=right>-1</td>\n <td align=left>-2</td>\n </tr>\n <tr valign=top>\n <td align=right>&nbsp;</td>\n <td align=left>yo &amp;&amp;&amp; &lt;&gt; ho</td>\n </tr>\n <tr valign=top>\n <td align=right>99999999</td>\n <td align=left>this is text</td>\n </tr>\n</table>\npeter@localhost:5432 play=> \\t\nTurned off only tuples\npeter@localhost:5432 play=> \\pset border 2\npeter@localhost:5432 play=> \\pset null \"(null)\"\npeter@localhost:5432 play=> \\pset format latex\npeter@localhost:5432 play=> select * from foo;\n\\begin{tabular}{|r|l|}\n\\hline\nfirst & second \\\\\n\\hline\n2 & (null) \\\\\n0 & ----- \\\\\n0 & hi \\\\\n34 & (null) \\\\\n-1 & -2 \\\\\n(null) & yo &&& <> ho \\\\ % Yes, this needs to be escaped.\n99999999 & this is text \\\\\n\\hline\n\\end{tabular}\n\n(7 rows) \\\\\npeter@localhost:5432 play=> \\pset null\nCurrent null display is \"(null)\".\npeter@localhost:5432 play=> \\pset fieldsep\nCurrent field separator is \",,\".\n\n\n(Oh, you actually read all the way down to here? Good.)\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 19 Oct 1999 21:41:55 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "psql Week 3" }, { "msg_contents": "> For external reasons I didn't get as much done as I wanted. 
Originally, I\n> planned for 4 weeks and I think I can keep that.\n> \n> The source is posted at http://www.pathwaynet.com/~peter/psql3.tar.gz\n> \n> CHANGELOG\n> * Accommodated object descriptions in \\d* commands.\n> * Re-wrote \\dd\n> * Re-wrote \\d \"table\"\n> * \\pset command as generic way for setting printing options (e.g.,\n> \\pset format html, \\pset null \"(null)\")\n> * Proliferated use of const char * and enums\n> * Rewrote \\di, \\dt, \\ds, \\dS. Say hello to \\dv for views, which are\n> now recognized correctly. You can also call, e.g., \\dtvi for a list\n> of indices, tables, and views. The possibilities are endless ...\n> (where \"endless\" = 325)\n> \n\nVery cool. Nice new formats. Man, I will have to add them to this\nbook. I am going to have to have pre-7.0 psql backslash command\nlistings, and 7.0 backslash listings. This improvement is long overdue.\npsql has always been one of our nifty features. It just got niftier.\n\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 19 Oct 1999 17:49:58 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql Week 3" }, { "msg_contents": "Bruce Momjian wrote:\n >> For external reasons I didn't get as much done as I wanted. Originally, I\n >> planned for 4 weeks and I think I can keep that.\n >> \n >> The source is posted at http://www.pathwaynet.com/~peter/psql3.tar.gz\n >> \n >> CHANGELOG\n >> * Accommodated object descriptions in \\d* commands.\n >> * Re-wrote \\dd\n >> * Re-wrote \\d \"table\"\n >> * \\pset command as generic way for setting printing options (e.g.,\n >> \\pset format html, \\pset null \"(null)\")\n >> * Proliferated use of const char * and enums\n >> * Rewrote \\di, \\dt, \\ds, \\dS. Say hello to \\dv for views, which are\n >> now recognized correctly. You can also call, e.g., \\dtvi for a list\n >> of indices, tables, and views. The possibilities are endless ...\n >> (where \"endless\" = 325)\n >> \n >\n >Very cool. Nice new formats. Man, I will have to add them to this\n >book. I am going to have to have pre-7.0 psql backslash command\n >listings, and 7.0 backslash listings. This improvement is long overdue.\n >psql has always been one of our nifty features. It just got niftier.\n \nMy previous database experience was with PICK, where the data dictionaries\ninclude an output format and the command language allows for column\nformatting 'on the fly'. This is something that I have missed with\nPostgreSQL, though I believe the SQL standard does not cover it.\n\nWhat I would like would be the ability to attach a format to a column, so\nthat text could be truncated at 25 characters or floats lined up with a\nspecified number of decimal places. (May be we can already and I've missed\nit?) I would particularly like to be able to do text wrapping within a \ncolumn and totalling of numeric columns:\n\n-----+---------------------+--------\nid | description | qty\n-----+---------------------+--------\nabc1 | text that rambles | 35.43\n | on and on about |\n | something or other |\ndef2 | more useless text | 2.00\nhgf3 | and yet more text | 355.10\n | to read |\n-----+---------------------+--------\n | | 392.53\n-----+---------------------+--------\n(3 rows)\n\nDo you think there's any place for this in PostgreSQL? 
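A rough sketch of how far plain SQL already gets toward that layout, using only substr(), a cast to numeric(16,2), and sum(); the table name (items) is only a placeholder for whatever holds the sample rows above:\n\n    select id,\n           substr(description, 1, 25) as description,\n           qty::numeric(16,2) as qty\n      from items;\n\n    select sum(qty)::numeric(16,2) as total\n      from items;\n\nThat handles the truncation and the decimal places; the wrapped text and the in-line total row are the parts that still call for a report tool.\n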
Perhaps it needs a\nseparate front-end tool.\n\n\n\n\n-- \n Vote against SPAM: http://www.politik-digital.de/spam/\n ========================================\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"Commit thy way unto the LORD; trust also in him and \n he shall bring it to pass.\" Psalms 37:5 \n\n\n", "msg_date": "Tue, 19 Oct 1999 23:52:04 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] psql Week 3 " }, { "msg_contents": "hi...\n\n> What I would like would be the ability to attach a format to a column, so\n> that text could be truncated at 25 characters or floats lined up with a\n> specified number of decimal places. (May be we can already and I've missed\n> it?) I would particularly like to be able to do text wrapping within a \n> column and totalling of numeric columns:\n\ni miss this from oracle as well...\n\n> \n> Do you think there's any place for this in PostgreSQL? Perhaps it needs a\n> separate front-end tool.\n\npsql would seem the proper place for this?\n\n-- \nAaron J. Seigo\nSys Admin\n", "msg_date": "Tue, 19 Oct 1999 17:06:12 -0600", "msg_from": "\"Aaron J. Seigo\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql Week 3" }, { "msg_contents": "> >Very cool. Nice new formats. Man, I will have to add them to this\n> >book. I am going to have to have pre-7.0 psql backslash command\n> >listings, and 7.0 backslash listings. This improvement is long overdue.\n> >psql has always been one of our nifty features. It just got niftier.\n> \n> My previous database experience was with PICK, where the data dictionaries\n> include an output format and the command language allows for column\n> formatting 'on the fly'. This is something that I have missed with\n> PostgreSQL, though I believe the SQL standard does not cover it.\n\nI had that with Progress. Yes, it was handy.\n\n> \n> What I would like would be the ability to attach a format to a column, so\n> that text could be truncated at 25 characters or floats lined up with a\n\nchar(25)?\n\n> specified number of decimal places. (May be we can already and I've missed\n\nnumeric(16,2)?\n\n> it?) I would particularly like to be able to do text wrapping within a \n> column and totalling of numeric columns:\n\n\n> \n> -----+---------------------+--------\n> id | description | qty\n> -----+---------------------+--------\n> abc1 | text that rambles | 35.43\n> | on and on about |\n> | something or other |\n> def2 | more useless text | 2.00\n> hgf3 | and yet more text | 355.10\n> | to read |\n> -----+---------------------+--------\n> | | 392.53\n> -----+---------------------+--------\n> (3 rows)\n> \n> Do you think there's any place for this in PostgreSQL? Perhaps it needs a\n> separate front-end tool.\n\nThat's a job for pgaccess. I thought it did stuff like that. If you\nwant printing like that, we need a report-writer, which is an important\napplication.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 19 Oct 1999 19:10:04 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql Week 3" }, { "msg_contents": "On Tue, 19 Oct 1999, Oliver Elphick wrote:\n\n> My previous database experience was with PICK, where the data dictionaries\n> include an output format and the command language allows for column\n> formatting 'on the fly'. This is something that I have missed with\n> PostgreSQL, though I believe the SQL standard does not cover it.\n\nHmm. Well, the idea of keeping formatting data in the backend doesn't\nsound too exciting to me. The data type already contains a fair amount of\nformatting information in itself.\n\nBut psql is not supposed to be a report generator. It's also not a\ntypesetting engine. The idea was to sort of write a shell, but one that\ndoesn't use the OS kernel but a database server.\n\n> \n> What I would like would be the ability to attach a format to a column, so\n> that text could be truncated at 25 characters or floats lined up with a\n> specified number of decimal places. (May be we can already and I've missed\n\nOkay, lining up floats is sort of on the wish list (as distinct from todo\nlist). I'll keep that in mind.\n\n> it?) I would particularly like to be able to do text wrapping within a \n> column and totalling of numeric columns:\n\nWrapping text also seemed like a nice idea to me, but psql is supposed to\nbe a query interpreter. Wrapping text is a whole different animal and\nthere are specialty programs out there for that.\n\nAnd totalling numeric columns is the job of a report generator. All psql\ndoes is send a query and output the results, and it wants to make that job\nas fun as possible along the way. It doesn't know anything about what's in\nthe query results. That is probably an important line to keep.\n\n> \n> -----+---------------------+--------\n> id | description | qty\n> -----+---------------------+--------\n> abc1 | text that rambles | 35.43\n> | on and on about |\n> | something or other |\n> def2 | more useless text | 2.00\n> hgf3 | and yet more text | 355.10\n> | to read |\n> -----+---------------------+--------\n> | | 392.53\n> -----+---------------------+--------\n> (3 rows)\n> \n> Do you think there's any place for this in PostgreSQL? Perhaps it needs a\n> separate front-end tool.\n\nI hear pg_access has a report generator. If you want a text-based report\ngenerator, then you're seemingly on your own.\n\n\n\t-Peter\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Wed, 20 Oct 1999 12:16:11 +0200 (MET DST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql Week 3 " }, { "msg_contents": "> Bruce Momjian wrote:\n> > Actually, I want to make sure the book is accessible on the web. I will\n> > keep your translation idea in mind. Good point.\n> \n> Talk to Philip Greenspun. Morgan-Kaufman cut him a deal where his book,\n> \"Philip and Alex's Guide to Web Publishing\" is also available free on\n> the web (http://photo.net/wtr/thebook). His e-mail is [email protected].\n\nI have this from Addison Wesley. It is online now.\n\n> And you gotta see the quality of his book! Amazon carries it. 
\n> (although, I must say, some of his choices for pictures were a little\n> over the top....)\n\nMuch thanks for sending me this suggestion.\n\nI read it at:\n\n\thttp://photo.net/wtr/dead-trees/story.html\n\nFirst, it is very long, so let me give you a quick summary. He did his\nfirst book for MacMillan, which is like Sams, Waite, etc. Basically your\nfat book for dummies, and had a pretty terrible experience. You have to\nread it to appreciate it. This reminds me of Thomas's comment that we\nneed a good, quality publisher for our books. If the first book that\ncame out on PostgreSQL was \"PostgreSQL for Dummies\", Marc would have a\nfit. And I have seen him in fits -- it is not pretty.\n\nHe did his second book for Morgan Kaufmann, who is a quality publisher\nlike Addison Wesley. Totally different experience, and a different\nquality book.\n\nThis is required reading for anyone who is interested in the book\nwriting experience or is considering writing a book. People, we need to\nstay with the quality publishers, if possible.\n\nI am sure there will be a \"PostgreSQL Unleashed\" book one day, with tons\nof graphic arrows and very little content, but let's put it off as long\nas we can.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 20 Oct 1999 23:49:25 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Re: [HACKERS] Outline for PostgreSQL book" }, { "msg_contents": "\nIf I may add..\n\nI would highly recommend that you all read Phil Greenspun's story about his\nexperience in publishing his book with MacMillan. Read between the lines,\nhowever. Where ever reference is made to MacMillan you could substitue\nalmost <any publisher>. Greenspun is a great story teller so the material is\na good read. I'm sure he embellished a lot on his experiences. He strikes\nme as being a puckish kind of guy, somewhat rare amongst engineers. Read\nsome of his other material. A Day at the Zoo is a good one.\n\nPaul\n\nPaul W. Becker\nAddison Wesley Longman\n671 Andover Road\nValley Cottage, NY 10989\n\n914-268-8003 (v)\n914-268-3874 (f)\n\nAssistant: Ross Venables 781-944-3700 x2501\n\n\n\n\n-----Original Message-----\nFrom: Bruce Momjian [mailto:[email protected]]\nSent: Wednesday, October 20, 1999 11:49 PM\nTo: Lamar Owen\nCc: Dmitry Samersoff; Vince Vielhaber; Oliver Elphick;\nPostgreSQL-documentation; \"PostgreSQL-development\"@candle.pha.pa.us;\[email protected]; Jan Wieck\nSubject: Re: [DOCS] Re: [HACKERS] Outline for PostgreSQL book\n\n\n> Bruce Momjian wrote:\n> > Actually, I want to make sure the book is accessible on the web. I will\n> > keep your translation idea in mind. Good point.\n>\n> Talk to Philip Greenspun. Morgan-Kaufman cut him a deal where his book,\n> \"Philip and Alex's Guide to Web Publishing\" is also available free on\n> the web (http://photo.net/wtr/thebook). His e-mail is [email protected].\n\nI have this from Addison Wesley. It is online now.\n\n> And you gotta see the quality of his book! Amazon carries it.\n> (although, I must say, some of his choices for pictures were a little\n> over the top....)\n\nMuch thanks for sending me this suggestion.\n\nI read it at:\n\n\thttp://photo.net/wtr/dead-trees/story.html\n\nFirst, it is very long, so let me give you a quick summary. 
He did his\nfirst book for MacMillan, which is like Sams, Waite, etc. Basically your\nfat book for dummies, and had a pretty terrible experience. You have to\nread it to appreciate it. This reminds me of Thomas's comment that we\nneed a good, quality publisher for our books. If the first book that\ncame out on PostgreSQL was \"PostgreSQL for Dummies\", Marc would have a\nfit. And I have seen him in fits -- it is not pretty.\n\nHe did his second book for Morgan Kaufmann, who is a quality publisher\nlike Addison Wesley. Totally different experience, and a different\nquality book.\n\nThis is required reading for anyone who is interested in the book\nwriting experience or is considering writing a book. People, we need to\nstay with the quality publishers, if possible.\n\nI am sure there will be a \"PostgreSQL Unleashed\" book one day, with tons\nof graphic arrows and very little content, but let's put it off as long\nas we can.\n\n--\n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Thu, 21 Oct 1999 10:06:16 -0400", "msg_from": "\"Paul Becker\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [DOCS] Re: [HACKERS] Outline for PostgreSQL book" }, { "msg_contents": "Paul Becker wrote:\n> a good read. I'm sure he embellished a lot on his experiences. He strikes\n> me as being a puckish kind of guy, somewhat rare amongst engineers. Read\n> some of his other material. A Day at the Zoo is a good one.\n\nAnd \"Travels with Samantha\" is a hoot. His coding style is very similar\nto his writing style -- lispish. But, then again, he's an old-hand MIT\nLISP hacker. You should read his tcl one day -- I've laughed till I've\ncried over some of his code. (I don't know if that's more of a\nstatement about his coding style, or about my reading habits.....)\n\nPhilip is certainly an interesting writer.\n\nOh, BTW, Bruce, \"You're Welcome.\"\n\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Thu, 21 Oct 1999 11:09:37 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Re: [HACKERS] Outline for PostgreSQL book" }, { "msg_contents": "> Paul Becker wrote:\n> > a good read. I'm sure he embellished a lot on his experiences. He strikes\n> > me as being a puckish kind of guy, somewhat rare amongst engineers. Read\n> > some of his other material. A Day at the Zoo is a good one.\n> \n> And \"Travels with Samantha\" is a hoot. His coding style is very similar\n> to his writing style -- lispish. But, then again, he's an old-hand MIT\n> LISP hacker. You should read his tcl one day -- I've laughed till I've\n> cried over some of his code. (I don't know if that's more of a\n> statement about his coding style, or about my reading habits.....)\n> \n> Philip is certainly an interesting writer.\n\nYes, he was interesting. He clearly caused some of his own problems\nwith MacMillan, and I agreed with MacMillan on a number of issues.\n\nWhat I found interesting is something I had suspected for quite some\ntime. I found that I have some great computer books, but when I go to\nthe computer section of a book store, most books there are junk.\n\nThen I ordered Knuth's \"Art of Computer Programming\" directly from\nAddison Wesley, and I started receiving their quarterly book bulletins. 
\nI said, \"Hey, I read that book, and that one, and that one...\" I\nrealized that most of my books are from a handful of book publishers,\nAddison Wesley being the most popular in my bookshelf.\n\nBasically, I realized that not all the publishers are the same. Some\nproduce quality, and others take a much more marketing slant in book\nproduction, usually producing poor quality books.\n\nOne of the reasons I am posting this to the list is so people\nconsidering book writing ( and I know some publishers have been lurking\nin the past months), be careful who you sign with. It can affect your\nwhole outlook on the process, and in the end, it is your name that is on\nthe cover of the book, not the publisher.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 21 Oct 1999 12:55:56 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Re: [HACKERS] Outline for PostgreSQL book" }, { "msg_contents": "I'm currently studying for a Masters at Brunel University, London. I was\nlooking for a thesis and since I had an interest in Opensource development\nand databases I thought that I would like to work on PGsql. As I also had\nan interest in trying to understand CORBA and other related technologies it\nmade sense that I try and work on a project that ties PGsql and CORBA\ntogether in some way.\n\nMarc G. Fournier and Bruce Momjian were contacted with regards to this and\nthey seemed to think that there was room for me to work on this, although at\nthis stage I hadn't gone through the mailing list.\n\nsince then I've finally got PGsql up and working and I'm now embarking on\nunderstand how to system actual works. I thought a good place to start with\nmy research would be to go through the mailing lists to try and see the\ncurrent status for development in this area. I did this for several hours\ntrying to get the current take on CORBA with regards to PGsql. It seems\nthere was a lot of initial talk back in Nov 1998 on the hackers list with\nthe first thoughts of some sort of CORBA implementation. The conversation\norientated over 2 main areas; which ORB and implementation methods, with\nsome peoples offering to work on this. It was then suggested that the\nconverstaion take place in the interfaces mailing list. I've since been over\nsome of this but find it very difficult to understand the current status\nwith regards to PGsql and CORBA. I've seen many references to people who\nhave developed\na project that allows them to work with PGsql via CORBA, but non of this\nseems to be part of the main project or system. There is some reference to\nMicheal Robinson working on an implementation but again I'm not sure how\nthis fits\nin-http://www.postgresql.org/mhonarc/pgsql-interfaces/1998-11/msg00090.html\n\nIs there room for me to work on this project in such a way that it is\nadequate for my masters. If anyone is working on this, or has a good\nknowledge of the current status of the CORBA implementation for PGsql can\nyou please let me know, so I can know whether to get started on this or not.\nThe reference thread for my initial point of contact with Marc G. 
Fournier\nand Bruce Momjian and how they think I should attack the project is -\nhttp://www.postgresql.org/mhonarc/pgsql-hackers/1999-09/msg00076.html\n\nRegards\n\nMark Proctor\nemail : [email protected]\n\n", "msg_date": "Tue, 9 Nov 1999 01:38:35 -0000", "msg_from": "\"Mark Proctor\" <[email protected]>", "msg_from_op": false, "msg_subject": "CORBA STATUS" }, { "msg_contents": "> Is there room for me to work on this project in such a way that it is\n> adequate for my masters. If anyone is working on this, or has a good\n> knowledge of the current status of the CORBA implementation for PGsql can\n> you please let me know, so I can know whether to get started on this or not.\n> The reference thread for my initial point of contact with Marc G. Fournier\n> and Bruce Momjian and how they think I should attack the project is -\n> http://www.postgresql.org/mhonarc/pgsql-hackers/1999-09/msg00076.html\n\nI know of no one working on it. 
\nOne was to replace our existing client/server communication with corba,\nand the second was to have a corba server that accepted corba requests\nand sent them to a PostgreSQL server. I prefer this second approach as\nI think CORBA may be too much overhead for people who don't need it. \nOur current client/server communication is quite efficient.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 8 Nov 1999 21:09:51 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CORBA STATUS" }, { "msg_contents": "> > Is there room for me to work on this project in such a way that it is\n> > adequate for my masters. If anyone is working on this, or has a good\n> > knowledge of the current status of the CORBA implementation for PGsql can\n> > you please let me know, so I can know whether to get started on this or not.\n> > The reference thread for my initial point of contact with Marc G. Fournier\n> > and Bruce Momjian and how they think I should attack the project is -\n> > http://www.postgresql.org/mhonarc/pgsql-hackers/1999-09/msg00076.html\n> I know of no one working on it.\n\nRight. No one is working on it, or if they are they haven't told\nanyone. It's all yours ;)\n\n> There were two ideas as I remember.\n> One was to replace our existing client/server communication with corba,\n> and the second was to have a corba server that accepted corba requests\n> and sent them to a PostgreSQL server. I prefer this second approach as\n> I think CORBA may be too much overhead for people who don't need it.\n> Our current client/server communication is quite efficient.\n\nActually, our current protocol is about the best you can do assuming\nthat you don't have something as powerful as Corba to do it right.\n\nIn the time since Corba was first brought up wrt Postgres, I have been\ninvolved with extensive Corba development for a family of systems at\nwork. It is a tremendously powerful standard, though just\nre-implementing the existing interfaces using Corba would probably\ndefeat the power and flexibility Corba can give you.\n\nPostgres currently avoids endian and other data representation issues\nbetween client and server by converting all data to strings. Corba can\nefficiently pass binary info back and forth, automatically handling\nendian issues *if necessary*. This alone should make a Corba-based\ninterface using native internal representations of data types more\nefficient both in speed and size than our current scheme.\n\nUsing Corba's DII, we might even cope with Postgres' type\nextensibility features in a transparent manner.\n\nOne trick will be to choose a Corba ORB to use on the server side. It\nshould probably be a C implementation, though Corba more naturally\nmaps to an OO language such as C++. It will be a trick to find an ORB\nwhich is supported on as many platforms as Postgres is. One of the\nmost portable ORBs is TAO, which we are using at work, but that is C++\nand involves 1.3GB of disk space to fully build!! But the runtime\nsizes are pretty reasonable, adding just a few megabytes of shared\nlibraries to a plain-vanilla client/server application.\n\nimho it will be extremely difficult to find an ORB which could be\ninjected directly into the Postgres server. 
It would likely reduce the\nnumber of platforms Postgres runs on, and would not be considered\nacceptable.\n\nFor a first cut, you might consider Bruce's \"plan B\", which involves\nwriting a new client to the Postgres backend, which would be a Corba\nserver to other clients. That would allow you to start working with\nCorba without hacking up the backend early on.\n\nAt the extreme end, fully Corba-ized Postgres server is an intriguing\nthought, allowing backend to be broken up into distributable modules.\n\nHave fun thinking about the possibilities...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 09 Nov 1999 07:55:37 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CORBA STATUS" }, { "msg_contents": "Then <[email protected]> spoke up and said:\n> imho it will be extremely difficult to find an ORB which could be\n> injected directly into the Postgres server. It would likely reduce the\n> number of platforms Postgres runs on, and would not be considered\n> acceptable.\n\nOn the other hand, this is where the power of configure comes into\nplay. Assuming we have servers for multiple ORBs, configure can look\nto see what's installed (TAO, Orbit, whatever) and then build only\nthat server.\n\n-- \n=====================================================================\n| JAVA must have been developed in the wilds of West Virginia. |\n| After all, why else would it support only single inheritance?? |\n=====================================================================\n| Finger [email protected] for my public key. |\n=====================================================================", "msg_date": "9 Nov 1999 08:15:30 -0500", "msg_from": "Brian E Gallew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CORBA STATUS" }, { "msg_contents": "On 9 Nov 1999, Brian E Gallew wrote:\n\n> Then <[email protected]> spoke up and said:\n> > imho it will be extremely difficult to find an ORB which could be\n> > injected directly into the Postgres server. It would likely reduce the\n> > number of platforms Postgres runs on, and would not be considered\n> > acceptable.\n> \n> On the other hand, this is where the power of configure comes into\n> play. Assuming we have servers for multiple ORBs, configure can look\n> to see what's installed (TAO, Orbit, whatever) and then build only\n> that server.\n\nACtually, I believe Thomas was referring to those platforms that we\ncurrently support that have no ORBs available to them...being a \"purely C\"\nserver so far, how many of our currently supported platforms are we going\nto cut off with this? \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 9 Nov 1999 10:12:56 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CORBA STATUS" }, { "msg_contents": "Then <[email protected]> spoke up and said:\n> ACtually, I believe Thomas was referring to those platforms that we\n> currently support that have no ORBs available to them...being a \"purely C\"\n> server so far, how many of our currently supported platforms are we going\n> to cut off with this? \n\nBut that's just it: we're not cutting anybody off. We may not have a\nCORBA server available for that platform, but the standard server will\ncontinue to work quite nicely. 
I guess that I'm looking at this more\nas contrib code than as mainstream, required functionality. Besides,\nonce the CORBA server is built on a platform that can support it,\nclients can run from (basically) anywhere.\n\n-- \n=====================================================================\n| JAVA must have been developed in the wilds of West Virginia. |\n| After all, why else would it support only single inheritance?? |\n=====================================================================\n| Finger [email protected] for my public key. |\n=====================================================================", "msg_date": "9 Nov 1999 09:44:27 -0500", "msg_from": "Brian E Gallew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CORBA STATUS" }, { "msg_contents": "> > > imho it will be extremely difficult to find an ORB which could be\n> > > injected directly into the Postgres server. It would likely reduce the\n> > > number of platforms Postgres runs on, and would not be considered\n> > > acceptable.\n> > On the other hand, this is where the power of configure comes into\n> > play. Assuming we have servers for multiple ORBs, configure can look\n> > to see what's installed (TAO, Orbit, whatever) and then build only\n> > that server.\n\nIn the long run, that would be neat. In the short run, the details of\neach ORB vary considerably wrt, for example, the names and numbers of\n#include files. So it would complicate the code to try bringing along\ntwo ORBs at the beginning. We might expect the ORBs to converge a bit\nover time, so this will be easier later.\n\n> ACtually, I believe Thomas was referring to those platforms that we\n> currently support that have no ORBs available to them...being a \"purely C\"\n> server so far, how many of our currently supported platforms are we going\n> to cut off with this?\n\nDon't know, and it doesn't matter (yet). We shouldn't avoid the issue\nwithout someone looking at it further just because we *might* lose\nsome platforms; better to push it farther as a demonstration at least\nbefore deciding that it isn't a possibility.\n\nAnyway, I know that at least one ORB, TAO, runs on many more types of\nplatforms than Postgres does (e.g. VxWorks, Lynx, Solaris, NT, ...),\nthough Postgres runs on more Unix-style platforms. But that particular\nORB is not a good candidate for us, for reasons I already mentioned\n(C++, large build size, poor configure support).\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 09 Nov 1999 15:06:53 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CORBA STATUS" }, { "msg_contents": "\nOn 09-Nov-99 Thomas Lockhart wrote:\n>> > > imho it will be extremely difficult to find an ORB which could be\n>> > > injected directly into the Postgres server. It would likely reduce the\n>> > > number of platforms Postgres runs on, and would not be considered\n>> > > acceptable.\n>> > On the other hand, this is where the power of configure comes into\n>> > play. Assuming we have servers for multiple ORBs, configure can look\n>> > to see what's installed (TAO, Orbit, whatever) and then build only\n>> > that server.\n> \n> In the long run, that would be neat. In the short run, the details of\n> each ORB vary considerably wrt, for example, the names and numbers of\n>#include files. So it would complicate the code to try bringing along\n> two ORBs at the beginning. 
We might expect the ORBs to converge a bit\n> over time, so this will be easier later.\n> \n>> ACtually, I believe Thomas was referring to those platforms that we\n>> currently support that have no ORBs available to them...being a \"purely C\"\n>> server so far, how many of our currently supported platforms are we going\n>> to cut off with this?\n> \n> Don't know, and it doesn't matter (yet). We shouldn't avoid the issue\n> without someone looking at it further just because we *might* lose\n> some platforms; better to push it farther as a demonstration at least\n> before deciding that it isn't a possibility.\n> \n> Anyway, I know that at least one ORB, TAO, runs on many more types of\n> platforms than Postgres does (e.g. VxWorks, Lynx, Solaris, NT, ...),\n> though Postgres runs on more Unix-style platforms. But that particular\n> ORB is not a good candidate for us, for reasons I already mentioned\n> (C++, large build size, poor configure support).\n\nIMHO, There has no ideal ORB for all platforms.\nwe use ORBacus (http://www.ooc.com), \nbecause it's the only known for me ORB, working without threads \nso its really faster and more stable than another ones under FreeBSD,\nbut it's not free.\n\nMay be it is better make directory CORBA under interfaces subtree\nand time-to-time put objects for differend ORB's inside, \ninto separate directory.\n\nProbably, It's better to make separate configure for \nsome parts of postgres distributions to allow users to build/upgrade\nparts of postgres i.e psql or perl interface \n\n\n\n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* There will come soft rains ...\n", "msg_date": "Tue, 09 Nov 1999 20:56:19 +0400 (MSK)", "msg_from": "Dmitry Samersoff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CORBA STATUS" }, { "msg_contents": "\nI have done some research into slapping\na CORBA interface onto the PgSQL server.\nI have looked at ORBit but my findings \nseems to apply to most CORBA implementations.\n\nI found that there is a fundamental problem\nconcerning the difference in process models\nin pgsql and the POA (Portable Object Adaptor)\nin CORBA implementations.\n\nAFAICS, POA assumes a threaded server while\nPgSQL uses a traditional forking model.\n\nI see 3 ways to resolve this:\n\n1. Adapt the PgSQL server to use threads instead of forking.\n I am not sure we want to do this since it is a major\n undertaking and will not help the rock solid stability\n we expect from a professional RDBMS.\n\n2. Write a Forking Object Adaptor (FOA).\n Would be a big job but it would benefit other similar\n projects. A FOA would have to be rewritten for every\n ORB. If you want to do this I suggest starting with\n ORBit since it is all C and thereby easier to slap\n onto PgSQL.\n\n3. Extend postmaster to proxy CORBA requests.\n Postmaster could fork of postgres processes\n on incoming connections. Postmaster will keep \n listening and proxying request and responses \n to/from the postgres process through some kind\n of IPC. This will make the postmaster multithreaded \n and the postgres processes still singlethreaded.\n\nIt is doubtful if the gains:\n- standard-based protocols\n- less code to maintain (?)\njustifies the amount of work required.\n\nAn easier way out is to, as suggested by others,\nimplement the CORBA-service outside the server\nitself. 
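A very rough sketch of the core of such an external service,\n
with every name below invented for illustration and the\n
ORB-specific servant/skeleton glue left out, would simply relay\n
the query text to a stock backend over an ordinary libpq\n
connection:\n\n
#include <stdlib.h>\n
#include <string.h>\n
#include <libpq-fe.h>\n\n
/* Hypothetical body of a CORBA servant operation: the ORB-generated\n
   skeleton would call this with the client's SQL text, and we just\n
   forward it to a regular backend over libpq. */\n
char *\n
bridge_exec_query(PGconn *conn, const char *sql)\n
{\n
    PGresult   *res = PQexec(conn, sql);\n
    char       *value = NULL;\n\n
    if (res && PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) > 0)\n
        value = strdup(PQgetvalue(res, 0, 0));   /* first field only here */\n\n
    if (res)\n
        PQclear(res);\n
    return value;        /* NULL on error or on an empty result set */\n
}\n\n
Mapping whole result sets and errors back into IDL types is where\n
the real work would be.\n\n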
It will in some cases give some latency\nrelated performance degrading compared to a \nin-server implementation.\n\nOne effort to do something like that\n(and more) is gnome-db. Check out\nhttp://www.gnome.org/gnome-db/\n\nregards,\n-----------------\nG�ran Thyni\nOn quiet nights you can hear Windows NT reboot!\n", "msg_date": "Tue, 09 Nov 1999 18:15:06 +0100", "msg_from": "Goran Thyni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CORBA STATUS" }, { "msg_contents": "> ACtually, I believe Thomas was referring to those platforms that we\n> currently support that have no ORBs available to them...being a \"purely C\"\n> server so far, how many of our currently supported platforms are we going\n> to cut off with this?\n\nOmniORB http://www.uk.research.att.com/omniORB/omniORBForm.html runs on\nmost\nUnix-like platforms. It is free and it is pretty fast. I used it a about\n18 months ago and it was already pretty good then.\n\nAdriaan\n", "msg_date": "Tue, 09 Nov 1999 19:15:17 +0200", "msg_from": "Adriaan Joubert <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CORBA STATUS" }, { "msg_contents": "On Tue, 9 Nov 1999, Dmitry Samersoff wrote:\n> > Anyway, I know that at least one ORB, TAO, runs on many more types of\n> > platforms than Postgres does (e.g. VxWorks, Lynx, Solaris, NT, ...),\n> > though Postgres runs on more Unix-style platforms. But that particular\n> > ORB is not a good candidate for us, for reasons I already mentioned\n> > (C++, large build size, poor configure support).\n> \n> IMHO, There has no ideal ORB for all platforms.\n> we use ORBacus (http://www.ooc.com), \n> because it's the only known for me ORB, working without threads \n> so its really faster and more stable than another ones under FreeBSD,\n> but it's not free.\n\n FNORB - http://www.dstc.edu.au/Products/Fnorb/\n Fnorb is a CORBA 2.0 ORB written in Python. Free for non-commercial\nuse.\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Tue, 9 Nov 1999 18:25:23 +0000 (GMT)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CORBA STATUS" }, { "msg_contents": "\nWhomever does this implementation, or works on this, needs to bear in mind\nthat there are several different ORBs out there, and the code should be\nwritten in such a way that *I* can use MICO, Oleg here can use Fnorb,\nsomeone else can use ORBit, etc...I personally don't want to have to go\nut and grab <INSERT ORB of the Day here> just so that I can use CORBA,\nwhen I already have Mico installed for other software...\n\n\nOn Tue, 9 Nov 1999, Oleg Broytmann wrote:\n\n> On Tue, 9 Nov 1999, Dmitry Samersoff wrote:\n> > > Anyway, I know that at least one ORB, TAO, runs on many more types of\n> > > platforms than Postgres does (e.g. VxWorks, Lynx, Solaris, NT, ...),\n> > > though Postgres runs on more Unix-style platforms. But that particular\n> > > ORB is not a good candidate for us, for reasons I already mentioned\n> > > (C++, large build size, poor configure support).\n> > \n> > IMHO, There has no ideal ORB for all platforms.\n> > we use ORBacus (http://www.ooc.com), \n> > because it's the only known for me ORB, working without threads \n> > so its really faster and more stable than another ones under FreeBSD,\n> > but it's not free.\n> \n> FNORB - http://www.dstc.edu.au/Products/Fnorb/\n> Fnorb is a CORBA 2.0 ORB written in Python. 
Free for non-commercial\n> use.\n> \n> Oleg.\n> ---- \n> Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n> Programmers don't die, they just GOSUB without RETURN.\n> \n> \n> ************\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 9 Nov 1999 17:39:20 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CORBA STATUS" }, { "msg_contents": "> Whomever does this implementation, or works on this, needs to bear in mind\n> that there are several different ORBs out there, and the code should be\n> written in such a way that *I* can use MICO, Oleg here can use Fnorb,\n> someone else can use ORBit, etc...I personally don't want to have to go\n> ut and grab <INSERT ORB of the Day here> just so that I can use CORBA,\n> when I already have Mico installed for other software...\n\nThen forget it. Corba implementations don't work this way (yet).\n\nSorry you're not interested...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 10 Nov 1999 03:01:59 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CORBA STATUS" }, { "msg_contents": "On Wed, 10 Nov 1999, Thomas Lockhart wrote:\n\n> > Whomever does this implementation, or works on this, needs to bear in mind\n> > that there are several different ORBs out there, and the code should be\n> > written in such a way that *I* can use MICO, Oleg here can use Fnorb,\n> > someone else can use ORBit, etc...I personally don't want to have to go\n> > ut and grab <INSERT ORB of the Day here> just so that I can use CORBA,\n> > when I already have Mico installed for other software...\n> \n> Then forget it. Corba implementations don't work this way (yet).\n\nWait...when we talked about this months back, I swore that one of the\nconclusions *was* that this was possible...it would involve us doing\nwrapper functions in our code that were defined in an include file based\non which ORB implementation was used...?\n\nBasically...\n\npg_<corba function> maps to <insert mico corba function here>\n\t\t\t or <insert orbit corba function here>\n\t\t \t or <insert other implementation function here>\n\nHas this ability changed? *raised eyebrow*\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 10 Nov 1999 00:07:13 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CORBA STATUS" }, { "msg_contents": "> Wait...when we talked about this months back, I swore that one of the\n> conclusions *was* that this was possible...it would involve us doing\n> wrapper functions in our code that were defined in an include file based\n> on which ORB implementation was used...?\n> Basically...\n> pg_<corba function> maps to <insert mico corba function here>\n> or <insert orbit corba function here>\n> or <insert other implementation function here>\n> Has this ability changed? *raised eyebrow*\n\nNo, this probably is not necessary since the C or C++ mappings for\nfunction calls in Corba are very well defined. 
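For instance (purely illustrative, and the interface and operation\n
names below are invented rather than anything Postgres defines), an\n
IDL operation such as\n\n
    /* IDL:  interface PgSession { string exec_query(in string sql); }; */\n\n
comes out of any conforming C stubber as essentially\n\n
    CORBA_char *PgSession_exec_query(PgSession obj,\n
                                     CORBA_char *sql,\n
                                     CORBA_Environment *ev);\n\n
so calling code written against the standard C mapping should move\n
between Orbs largely untouched.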
\n\nWhat is not fully specified in the Corba standard is, for example,\nwhich header files (and by what names) will be generated by the IDL\nstubber, so each Orb has, or might have, different conventions for\ninclude files. This probably impacts server-side code a bit more than\nclients.\n\nThere is some interest for some Orbs to try lining up the header file\nnames, but I don't know how feasible it is in the short term.\n\nWe could probably isolate this into Postgres-specific header files,\nbut there will probably be Orb-specific #ifdef blocks in those\nheaders.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 10 Nov 1999 04:25:27 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CORBA STATUS" }, { "msg_contents": "On Wed, 10 Nov 1999, Thomas Lockhart wrote:\n\n> > Wait...when we talked about this months back, I swore that one of the\n> > conclusions *was* that this was possible...it would involve us doing\n> > wrapper functions in our code that were defined in an include file based\n> > on which ORB implementation was used...?\n> > Basically...\n> > pg_<corba function> maps to <insert mico corba function here>\n> > or <insert orbit corba function here>\n> > or <insert other implementation function here>\n> > Has this ability changed? *raised eyebrow*\n> \n> No, this probably is not necessary since the C or C++ mappings for\n> function calls in Corba are very well defined. \n> \n> What is not fully specified in the Corba standard is, for example,\n> which header files (and by what names) will be generated by the IDL\n> stubber, so each Orb has, or might have, different conventions for\n> include files. This probably impacts server-side code a bit more than\n> clients.\n> \n> There is some interest for some Orbs to try lining up the header file\n> names, but I don't know how feasible it is in the short term.\n> \n> We could probably isolate this into Postgres-specific header files,\n> but there will probably be Orb-specific #ifdef blocks in those\n> headers.\n\nIs there any reason configure couldn't handle this?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 10 Nov 1999 06:22:45 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CORBA STATUS" }, { "msg_contents": "On Wed, 10 Nov 1999, Thomas Lockhart wrote:\n\n> > Wait...when we talked about this months back, I swore that one of the\n> > conclusions *was* that this was possible...it would involve us doing\n> > wrapper functions in our code that were defined in an include file based\n> > on which ORB implementation was used...?\n> > Basically...\n> > pg_<corba function> maps to <insert mico corba function here>\n> > or <insert orbit corba function here>\n> > or <insert other implementation function here>\n> > Has this ability changed? *raised eyebrow*\n> \n> No, this probably is not necessary since the C or C++ mappings for\n> function calls in Corba are very well defined. 
\n> \n> What is not fully specified in the Corba standard is, for example,\n> which header files (and by what names) will be generated by the IDL\n> stubber, so each Orb has, or might have, different conventions for\n> include files. This probably impacts server-side code a bit more than\n> clients.\n> \n> There is some interest for some Orbs to try lining up the header file\n> names, but I don't know how feasible it is in the short term.\n> \n> We could probably isolate this into Postgres-specific header files,\n> but there will probably be Orb-specific #ifdef blocks in those\n> headers.\n\nRight, which is something that I thought we had pseudo-agreed upon the\nlast time through, that we woudl most likely require this...hadn't\nrealized it was for 'Orb-header files', but, IMHO, that's no worse then\nputting in HAVE_MICO vs HAVE_ORBIT blocks and having it a configure\noption...\n\nSee...I am interested, just not interested in having us tied to one\n\"vendor\"...:)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 10 Nov 1999 10:58:12 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CORBA STATUS" }, { "msg_contents": "On Wed, 10 Nov 1999, Vince Vielhaber wrote:\n\n> On Wed, 10 Nov 1999, Thomas Lockhart wrote:\n> \n> > > Wait...when we talked about this months back, I swore that one of the\n> > > conclusions *was* that this was possible...it would involve us doing\n> > > wrapper functions in our code that were defined in an include file based\n> > > on which ORB implementation was used...?\n> > > Basically...\n> > > pg_<corba function> maps to <insert mico corba function here>\n> > > or <insert orbit corba function here>\n> > > or <insert other implementation function here>\n> > > Has this ability changed? *raised eyebrow*\n> > \n> > No, this probably is not necessary since the C or C++ mappings for\n> > function calls in Corba are very well defined. \n> > \n> > What is not fully specified in the Corba standard is, for example,\n> > which header files (and by what names) will be generated by the IDL\n> > stubber, so each Orb has, or might have, different conventions for\n> > include files. This probably impacts server-side code a bit more than\n> > clients.\n> > \n> > There is some interest for some Orbs to try lining up the header file\n> > names, but I don't know how feasible it is in the short term.\n> > \n> > We could probably isolate this into Postgres-specific header files,\n> > but there will probably be Orb-specific #ifdef blocks in those\n> > headers.\n> \n> Is there any reason configure couldn't handle this?\n\nAs simple as a '--with-corba=mico' configure option ... or, I would think?\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 10 Nov 1999 10:58:47 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CORBA STATUS" }, { "msg_contents": "The Hermit Hacker wrote:\n >> Is there any reason configure couldn't handle this?\n >\n >As simple as a '--with-corba=mico' configure option ... 
or, I would think?\n \n>From the point of view of a package maintainer, I would much prefer a\nsolution that separated the choice of Orb from the main build.\n\nIf the choice goes into configure, I will have to pick a single Orb and\nbuild the Debian package for that, or else make packages for every\nOrb that Debian supports. It would be better if I could build a generic\nOrb-enabled PostgreSQL and produce a little pg-orb connection library\nfor each Debian-supported Orb.\n\n\n-- \n Vote against SPAM: http://www.politik-digital.de/spam/\n ========================================\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"But thanks be to God, which giveth us the victory \n through our Lord Jesus Christ.\" \n I Corinthians 15:57 \n\n\n", "msg_date": "Wed, 10 Nov 1999 15:42:53 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] CORBA STATUS " }, { "msg_contents": "> See...I am interested, just not interested in having us tied to one\n> \"vendor\"...:)\n\nI know. That still doesn't keep me from being in a bad mood after\nspending 3.5 hours at the dentist yesterday :((\n\nOn the Corba fork vs thread issue:\n\nIt is true that the server process would need to be handed off to the\nclient in a different manner from the postmaster; with Corba you don't\njust fork onto a different port and be done with it.\n\nHowever, the postmaster *could* start up a server process and return\nan IOR to the client, which givs the client a direct handle to the\nserver. The client would then initiate contact directly with the\nserver, and the postmaster is no longer involved.\n\nafaik you could still fork in the postmaster, though whether our\nstreamlined tricks would work with every Orb is not certain. But that\nis an optimization for our specific forking implementation, not a\nfundamental feature.\n\nAs I mentioned, the real performance benefits come from having an\noptimized query connection which bypasses the *expensive* string\nconversions we currently use to pass data around.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 10 Nov 1999 15:43:46 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CORBA STATUS" }, { "msg_contents": "On Wed, 10 Nov 1999, Oliver Elphick wrote:\n> The Hermit Hacker wrote:\n> >> Is there any reason configure couldn't handle this?\n> >\n> >As simple as a '--with-corba=mico' configure option ... or, I would think?\n> \n> From the point of view of a package maintainer, I would much prefer a\n> solution that separated the choice of Orb from the main build.\n> \n> If the choice goes into configure, I will have to pick a single Orb and\n> build the Debian package for that, or else make packages for every\n> Orb that Debian supports. It would be better if I could build a generic\n> Orb-enabled PostgreSQL and produce a little pg-orb connection library\n> for each Debian-supported Orb.\n\nThe same issue is true for the RPM's -- which ORB? If I'm on RedHat Linux, the\nchoice of ORB is going to depend upon the choice of desktops -- KDE or GNOME. \nORBit is packaged standard for GNOME -- KDE 2 is going to use something else --\nnow, my understanding of CORBA is quite limited -- Thomas, you have far more\nexperience, as you are actively developing CORBA stuff. 
If I choose to install\njust KDE, KDE's ORB is going to be installed -- if I install GNOME, ORBit is\ngoing to be installed. If I install both (the default), both ORB's will be\nresident.\n\nI can force the installation of a particular ORB through dependencies, but\nthat seems messy to me.\n\nI CAN produce multiple sets of packages -- but that's going to cause all\nmanner of confusion for users.\n\nIs it possible in the CORBA context to do what Oliver mentioned with a 'pg_orb'\nabstraction layer to a generic CORBA-enabled postgreSQL??\n\nIt may seem like Oliver and I are getting the cart before the horse, but\nthe strategic decision of how to integrate CORBA into the system is going to\nhave wide-ranging repercussions for integrators.\n\n --\n Lamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Wed, 10 Nov 1999 10:57:51 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CORBA STATUS" }, { "msg_contents": "> The same issue is true for the RPM's -- which ORB? If I'm on RedHat Linux, the\n> choice of ORB is going to depend upon the choice of desktops -- KDE or GNOME.\n> ORBit is packaged standard for GNOME -- KDE 2 is going to use something else --\n> now, my understanding of CORBA is quite limited -- Thomas, you have far more\n> experience, as you are actively developing CORBA stuff. If I choose to install\n> just KDE, KDE's ORB is going to be installed -- if I install GNOME, ORBit is\n> going to be installed. If I install both (the default), both ORB's will be\n> resident.\n\nRight.\n\n> I can force the installation of a particular ORB through dependencies, but\n> that seems messy to me.\n\nNot so bad, but I understand your point.\n\n> I CAN produce multiple sets of packages -- but that's going to cause all\n> manner of confusion for users.\n\nRight. Not worth the effort.\n\n> Is it possible in the CORBA context to do what Oliver mentioned with a 'pg_orb'\n> abstraction layer to a generic CORBA-enabled postgreSQL??\n\nMaybe, as a second step. The first step (which we are a *long* ways\naway from; getting a consensus on how to proceed will take a miracle)\nwill be to get an implementation using one Orb with the feature set we\nwant.\n\n> It may seem like Oliver and I are getting the cart before the horse, but\n> the strategic decision of how to integrate CORBA into the system is going to\n> have wide-ranging repercussions for integrators.\n\nThis may not be what people want to hear, and may not be what turns\nout, but imho and imle (\"little experience\" ;)...\n\nCorba is really intended to allow clients and servers implemented with\ndifferent Orb products to interoperate transparently. It has very\ncarefully stayed away from over-specifying exactly *how* a particular\nclient or server would be implemented for a particular Orb.\n\nThe header file conventions, or lack thereof, is a good example of\nthis. I'm familiar with a couple of the C++ Orbs. Mico produces a\nsingle header file per interface, while TAO produces two. Hmm, it just\ndawned on me that I might be able to jigger the output file names from\nTAO's IDL compiler to make the names line up with Mico. Will get back\nto you on this detail :)\n\nAnyway, if Corba is in our future, I would think that we would work\nwith a single Orb for the server-side implementation, at least at\nfirst. 
Once we are up and running, then we talk about how to slip in\nsomeone's favorite other Orb.\n\nFor clients, we will have to pick and choose depending on the language\nand features required. For example, TAO has portability and some\nrealtime features and optimizations that make it the *only* choice for\nour realtime systems at work. But Mico has a nice TCL binding, so we\nare using that to implement some TCL GUIs for commanding and telemetry\ninterfaces.\n\nNot a big deal, and we quickly got over the *flap arms all over* \"Oh\nno! This Orb doesn't support language X!!!!\".\n\nbtw, the Orb which has more language bindings than any other is ILU.\nILU predates Corba by several years, but it has evolved to support the\nCorba standard in many areas.\n\nCorba was primarily intended to decouple clients from servers.\nInter-orb transportability at the source code level was a secondary\nconcern, though the Corba standard, or at least some of the\nconventions used in the open-source implementations, may be converging\na bit to help with the source portability. And the biggest\nsource-level portability concern, the call-level interfaces, is not a\nproblem.\n\nAs we are introducing Corba to new users at work (we've got O(20)\nprogrammers who will be using it on our testbeds and ground\nimplementations for optical interferometers), I emphasize the\nfollowing:\n\n1) Corba makes distributed computing easy, in that clients (the\ncalling programs) call servers (the subroutines) as though they were\nlocal to the client. But in fact they could reside anywhere.\n\n2) Specifying interfaces through IDL is a *great* way to design\nsystems. If you have the interface, then you know what you need to\nimplement. From then on, clients and servers, or callers and call-ees,\ncan be implemented independently. If we end up with Corba in our\nserver, then we could/should start specifying our internal interfaces\nwith IDL also.\n\n3) Since clients and servers are decoupled through well-defined\ninterfaces, and since these interfaces can be decoupled \"on the wire\",\nyou have great flexibility in how you mix and match Orb products to\nimplement clients and servers. But afaik all of the Orbs have a \"short\ncircuit\" which will allow you to build Corba-enabled routines written\nwith that Orb into the same image, without taking a hit at runtime to\nmarshall/unmarshall/network/etc.\n\nOne in an occasional series... ;)\n\n - Thomas\n\nbtw, I'm guessing that the way to get Corba going is to have a code\nfreeze/fork, and have a few people work on demonstrating Corba using\nthat frozen version. Then we re-merge later if the Corba demo was a\nsuccess *and* if Corba is what we want in the main-line product. That\ncould happen during our 7.x series of releases, and if the world wants\nCorba, we could mainstream it for v8.0.\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 10 Nov 1999 17:23:47 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CORBA STATUS" }, { "msg_contents": "\"Oliver Elphick\" <[email protected]> writes:\n\n> From the point of view of a package maintainer, I would much prefer\n> a solution that separated the choice of Orb from the main build.\n> \n> If the choice goes into configure, I will have to pick a single Orb\n> and build the Debian package for that, or else make packages for\n> every Orb that Debian supports. 
It would be better if I could build\n> a generic Orb-enabled PostgreSQL and produce a little pg-orb\n> connection library for each Debian-supported Orb.\n\nIt probably makes sense to try to get things working with one ORB, and\nthen see if it's worth generalising. I'd guess ORBit is a good one to\nstart with since it's C-based, and it's pretty small: if I have to\ninstall an extra package on my machine just to run PostgreSQL, I'm not\ngoing to mind ORBit, whereas TAO might annoy me (IIRC, TAO is quite\nbig; I may be thinking of another ORB, though). ORBit does IIOP, I\nbelieve, so that covers GNOME and KDE people.\n", "msg_date": "10 Nov 1999 23:37:38 +0000", "msg_from": "Bruce Stephens <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Re: [HACKERS] CORBA STATUS" }, { "msg_contents": "> It probably makes sense to try to get things working with one ORB, and\n> then see if it's worth generalising. I'd guess ORBit is a good one to\n> start with since it's C-based, and it's pretty small: if I have to\n> install an extra package on my machine just to run PostgreSQL, I'm not\n> going to mind ORBit, whereas TAO might annoy me (IIRC, TAO is quite\n> big; I may be thinking of another ORB, though). ORBit does IIOP, I\n> believe, so that covers GNOME and KDE people.\n\nYeah, TAO might annoy you; it takes 1.3GB of disk space to build,\nthough much less once built ;)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Thu, 11 Nov 1999 15:35:58 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Re: [HACKERS] CORBA STATUS" }, { "msg_contents": "Hello,\nI am attempting to recover a database previously secured with pg_dump,\nhowever on attempting to restore using \n\npgsql < db_security_file \n\nI get the following error(s)\n\n\nERROR: tbl_breeders relation already exists\n\nI have removed all data tables and user from the database,\nwhat am I over looking?\n\nStephen \n\n\n", "msg_date": "Thu, 11 Nov 1999 13:00:57 -0800", "msg_from": "\"Stephen Martin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Error on db recovery.." }, { "msg_contents": "\"Stephen Martin\" <[email protected]> el d�a Thu, 11 Nov 1999 \n13:00:57 -0800, escribi�:\n\n>Hello,\n>I am attempting to recover a database previously secured with pg_dump,\n>however on attempting to restore using \n>\n>pgsql < db_security_file \n\nbad, bad ...\n\ngo to /usr/doc/postgres and read ...\n\n\nSergio\n\n", "msg_date": "Thu, 11 Nov 1999 19:04:20 -0300", "msg_from": "\"Sergio A. Kessler\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Error on db recovery.." }, { "msg_contents": "> Dmitry Samersoff wrote:\n> > \n> > On 13-Oct-99 Vince Vielhaber wrote:\n> > >\n> > > On 13-Oct-99 Bruce Momjian wrote:\n> > \n> > 2BOOK Authors:\n> > Please, try to keep rights for translating this book into another\n> > languages by you self, not by publisher.\n> > \n> > I may ask some St.Pitersburg's publishing company\n> > to make russian translation of this book, but some publishers\n> > like O'Reilly have too hard license policy\n> > and too long reaction time.\n> > \n> \n> I may ask some Indonesian's publishing company to make\n> Indonesian translation of this book too.\n> I may help the translation from English to Indonesian language.\n\nAddison-Wesley does a lot of foreign rights sales. 
Let me know when you\nwant information.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 Nov 1999 20:31:46 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Re: [HACKERS] Outline for PostgreSQL book" }, { "msg_contents": "Mark Proctor wrote:\n\n> I'm currently studying for a Masters at BrunelUniversity, London. I\n> was looking for a thesis andsince I had an interest inOpensource\n> development anddatabases I thought that I would like to work onPGsql.\n> As I also had an interest in trying tounderstand CORBA and other\n> related technologies itmade sense that I try and work on a project\n> that tiesPGsql and CORBA together in some way.\n>\n> Marc G. Fournier and Bruce Momjian were contacted withregards to this\n> and they seemed to think that therewas room for me to work on this,\n> although at thisstage I hadn't gone through the mailing list.\n>\n> since then I've finally got PGsql up and working andI'm now embarking\n> on understand how to system actualworks. I thought a good place to\n> start with myresearch would be to go through the mailing lists totry\n> and see the current status for development in thisarea. I did this for\n> several hours trying to get thecurrent take on CORBA with regards to\n> PGsql. It seemsthere was a lot of initial talk back in Nov 1998 onthe\n> hackers list with the first thoughts of some sortof CORBA\n> implementation. The conversation orientatedover 2 main areas; which\n> ORB and implementationmethods, with some peoples offering to work on\n> this.It was then suggested that the converstaion take placein the\n> interfaces mailing list. I've since been oversome of this but find it\n> very difficult to understandthe current status with regards to PGsql\n> and CORBA.I've seen many references to people who have developed\n> a project that allows them to work with PGsql viaCORBA, but non of\n> this seems to be part of the mainproject or system. There is some\n> reference to Micheal Robinsonworking on an implementation but again\n> I'm not surehow this fits\n> in-http://www.postgresql.org/mhonarc/pgsql-interfaces/1998-11/msg00090.html\n>\n> Is there room for me to work on this project in such away that it is\n> adequate for my masters. If anyone isworking on this, or has a good\n> knowledge of thecurrent status of the CORBA implementation for\n> PGsqlcan you please let me know, so I can know whether toget started\n> on this or not. The reference thread formy initial point of contact\n> with Marc G. Fournier andBruce Momjian and how they think I should\n> attack theproject is\n> -http://www.postgresql.org/mhonarc/pgsql-hackers/1999-09/msg00076.html\n>\n> Regards\n>\n> Mark Proctor\n> email : [email protected]\n\nDear Mark,\n\nas this reply stuck between my brain and my fingers for two weeks now,\nit might come too late.\nBut Corba defines a Facility (or a Service) to access object oriented\ndatabases via a standardized interface. Since Postgres is an OORDBMS\n(there is a definitely shorter abbreviation), it might suit it well to\noffer such an interface. I never looked too deep into the standard\ndocument but if you are interested, I'll look up the exact location.\n\nSince this does not interfere with Postgres' internals it should be much\neasier to do. 
And since a redesign (for speed) of the backend interface\nwould not provide new features, I would suggest taking a look into this\narea.\n\nIMHO the Corba specs should get more attention. I really like these\nstandardization efforts, though they rarely affect everyday (free)\nprogramming environments.\n\nRegards\n Christof\n (which was tempted to implement them some month ago but decided to\nbuild actual programs on now-available and (to me) well known technology\n(ecpg driven Corba objects)).\n\n", "msg_date": "Tue, 30 Nov 1999 05:44:10 +0100", "msg_from": "Christof Petig <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CORBA STATUS" }, { "msg_contents": "In my digging around in initdb, I came to the --username option, which\nsupposedly allows you to initialize the database system with another\nusername. Not a bad idea, really.\n\nObviously, you'd have to be root in that case, because you want to create\nfiles in someone else's name. But if you are root, the backends will\nrefuse to execute, wisely so. So this option is totally broken.\n\nI propose that I remove it, and that it instead be possible that you can\ndo\n\nroot# su -l postgres -c 'initdb ...'\n\nif you are not a fan of logging in and out of user accounts during the\ninstall process.\n\nThat would also remove a whole bunch of the username/id checking logic,\nsince you just use the user you're logged in as ($UID, $USER), and if the\nbackend doesn't like it, it will tell you. pg_id adieu.\n\n\nAnother question: Is there a reason why the system views in initdb are all\ncreated with a CREATE TABLE and then with a CREATE RULE, instead of using\nCREATE VIEW? Is that a left over from the time before there were views as\nsuch?\n\n\nQuestion 3: Is there a reason why the template1 database is vacuumed twice\nin the process? Once before all the views are created (no analyze) and\nonce at the very end (with analyze).\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n", "msg_date": "Thu, 9 Dec 1999 01:18:37 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "More initdb follies" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> In my digging around in initdb, I came to the --username option, which\n> supposedly allows you to initialize the database system with another\n> username. Not a bad idea, really.\n>\n> Obviously, you'd have to be root in that case, because you want to create\n> files in someone else's name. But if you are root, the backends will\n> refuse to execute, wisely so. So this option is totally broken.\n>\n> I propose that I remove it,\n\nMakes sense to me --- I was saying more or less the same thing, I think,\nwhen I said that initdb should pay attention to the effective UID it's\nrun under *and nothing else* to determine the postgres user name/ID.\nIn particular, if you want to be able to run it via \"su\", it mustn't\nassume that environment variables like LOGNAME or USER are set correctly\nfor the postgres user. If it needs the user name it should look it up\nfrom the EUID.\n\n> Another question: Is there a reason why the system views in initdb are all\n> created with a CREATE TABLE and then with a CREATE RULE, instead of using\n> CREATE VIEW? Is that a left over from the time before there were views as\n> such?\n\nProbably, but it's before my time.\n\n> Question 3: Is there a reason why the template1 database is vacuumed twice\n> in the process? 
Once before all the views are created (no analyze) and\n> once at the very end (with analyze).\n\nI've wondered about that too. There are some comments in initdb\nsuggesting that other orderings might fail, but I wouldn't be surprised\nif those are obsolete. Have you tried altering the procedure?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 08 Dec 1999 20:38:30 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More initdb follies " }, { "msg_contents": "> Question 3: Is there a reason why the template1 database is vacuumed twice\n> in the process? Once before all the views are created (no analyze) and\n> once at the very end (with analyze).\n\nYes, there is a reason, though I can't remember why. We need the\nanalyze at the end so the system tables are completely optimized, but we\nneed the earlier vacuum to set some table statistics that don't get set\nby the raw load used by initdb. You can try removing the first one to\nsee if it works.\n\nSeems it works without the first initdb here. I will apply a patch to\nremove the first initdb. I think we needed it long ago for some reason.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 8 Dec 1999 21:54:16 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More initdb follies" }, { "msg_contents": "> Question 3: Is there a reason why the template1 database is vacuumed twice\n> in the process? Once before all the views are created (no analyze) and\n> once at the very end (with analyze).\n\nI see the line before the first vacuum:\n\n# If the COPY is first, the VACUUM generates an error, so we vacuum first\n\nThat may have been me adding this. Seems it is no longer an issue. \nRemoved.\n\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 8 Dec 1999 21:55:35 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More initdb follies" }, { "msg_contents": "Peter Eisentraut wrote:\n >In my digging around in initdb, I came to the --username option, which\n >supposedly allows you to initialize the database system with another\n >username. Not a bad idea, really.\n >\n >Obviously, you'd have to be root in that case, because you want to create\n >files in someone else's name. But if you are root, the backends will\n >refuse to execute, wisely so. So this option is totally broken.\n >\n >I propose that I remove it, and that it instead be possible that you can\n >do\n >\n >root# su -l postgres -c 'initdb ...'\n\nYou can already do this; this example is from the Debian package installation\n(which runs as root, of course):\n\n su postgres -c \"cd ${PGHOME}; . ./${PROFILE}; initdb -e ${Encoding} -l\n${PGLIB} -r ${PGDATA} -u postgres\"\n\nIt can be quite tricky, though, since there are a number of different su\nversions around. I would prefer it if root were able to run initdb directly\nand set the ownerships as part of the process. 
The ability to run as root\nshould only apply to initdb.\n\n-- \n Vote against SPAM: http://www.politik-digital.de/spam/\n ========================================\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"Let not sin therefore reign in your mortal body, that \n ye should obey it in the lusts thereof.\" \n Romans 6:12 \n\n\n", "msg_date": "Thu, 09 Dec 1999 11:14:28 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] More initdb follies " }, { "msg_contents": "On 1999-12-09, Oliver Elphick mentioned:\n\n> >I propose that I remove it, and that it instead be possible that you can\n> >do\n> >\n> >root# su -l postgres -c 'initdb ...'\n> \n> You can already do this; this example is from the Debian package installation\n\n> It can be quite tricky, though, since there are a number of different su\n> versions around. I would prefer it if root were able to run initdb directly\n\nOf course the user would actually invoke the su himself, so he better know\nwhat his does.\n\n> and set the ownerships as part of the process. The ability to run as root\n> should only apply to initdb.\n\nI don't think there is a good (let alone portable) way to set ownership of\na shell script at run time. While you could do all kinds of funny business\nwith file ownerships, the postgres backends will still refuse to execute.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n\n", "msg_date": "Sat, 11 Dec 1999 01:20:38 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More initdb follies " }, { "msg_contents": "On 1999-12-08, Tom Lane mentioned:\n\n> Makes sense to me --- I was saying more or less the same thing, I think,\n> when I said that initdb should pay attention to the effective UID it's\n> run under *and nothing else* to determine the postgres user name/ID.\n> In particular, if you want to be able to run it via \"su\", it mustn't\n> assume that environment variables like LOGNAME or USER are set correctly\n> for the postgres user. If it needs the user name it should look it up\n> from the EUID.\n\n(The odd thing is, that our installation instructions suggest to su as\npostgres and do the install that way, yet this seems to work anyway. On my\nsystem (GNU sh-utils), USER is only set if you 'su -', but it seems\nrelying on the user doing this right is way too much to ask.)\n\nHmm, portability issues. Can you be sure EUID is implemented everywhere?\n(I guess so.) How do you determine the user name from a UID? That would be\nthe inverse of what pg_id does now.\n\nAn idea I just had would be to use\n* First resort: `id -n -u`\n* Second resort: `whoami`\n* Third resort: --username option\n\nThe first two don't distinguish between su and su -, at least here. Not\nsure if #3 is necessary though. One of those you should have.\n\nOn the other hand, one more thing to think about is that the only place\nthe superuser's UID and name are actually used, is to initialize the\ncatalogs. All the files will happily be created under which ever user you\nrun this as (as they should). Using the above three step approach, you can\nchoose to name your _database_ superuser whatever you want, independent of\nthe Unix filesystem concerns. Of course this is nothing to encourage, but\nit's possible. 
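Just to make that lookup concrete: going from the effective UID to a name
is a single getpwuid() call, so a pg_id replacement could be as small as the
sketch below (hypothetical code, not anything that exists today):

    /* sketch: print the name of the effective user (assumes POSIX getpwuid) */
    #include <pwd.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        struct passwd *pw = getpwuid(geteuid());

        if (pw == NULL)
        {
            fprintf(stderr, "no password entry for uid %d\n", (int) geteuid());
            return 1;
        }
        printf("%s\n", pw->pw_name);
        return 0;
    }

That works anywhere the passwd database (or NIS, etc.) can answer the query,
which is the same assumption `id -n -u` makes anyway.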
I could install the system in my home directory in some\ncomputing lab under my own user name, yet still name the superuser\npostgres to have a standard environment within the database. I guess the\n--username option does kind of work, just not as expected.\n\n(The next thing would be the ability to choose the database superuser's\nusesysid as well, which might not be so far-fetched either, because if\n1) you already have 40000 users on your system when you install PostgreSQL\n2) you assign the Unix user postgres the uid 400001\n3) you add your 40000 users to the database\nyou end up with usesysid's 40000-80000 (+/- 1). Not the end of the world\nbut kind of dumb. The you could simply assign 0 (or 1 if zero is not\nallowed) to the superuser at initdb time.)\n\nThere seem to be a lot of misconceptions (possibly caused by restrictions\nin the past) about all the user names and ids of the installation files,\nthe data directory, the server process, the database users, the database\nsuper user, etc. In fact, most of these can be chosen freely, the only\nrequirement is that the server process can access the data directory.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n", "msg_date": "Sat, 11 Dec 1999 01:20:49 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More initdb follies " }, { "msg_contents": "On Wed, 08 Dec 1999, I wrote:\n> I have a table with a field varchar(5), filed with right alligned numbers.\n> Ordering was fine, just like we expected compared with nummeric.\n> \n> Starting from RedHat version 6.1 the ordening seems to remove the leading\n> blanco's, what was not for use. I try different versions of postgres, as\n> there are 6.5.2-1, 6.5.3-1, 6.5.3-2\n> I also try to change varchar in char, and remove the index on the varchar\n> field but nothing helps.\n\nToday i take up again and see the ordering for me (third time i install\nRH6.1), now \nwith postgres version 6.5.2-1\n\tthe problem is always the same\n\nWhen i do the following\nCREATE TABLE BLANK (column1 varchar(5));\nINSERT INTO BLANK (column1) VALUES (' 1');\nINSERT INTO BLANK (column1) VALUES (' 11');\nINSERT INTO BLANK (column1) VALUES (' 100');\nINSERT INTO BLANK (column1) VALUES (' 2');\n\nthen:\nSELECT * FROM BLANK order by column1;\n\nI received\n\n 1\n 100\n 11\n 2\t\t--> mark also a not aligned output.\n\nand I expected\n 1\n 2\n 11\n 100\n\n\nAnybody has an idea?? \n\nFrans\n\n", "msg_date": "Tue, 14 Dec 1999 23:46:47 +0100", "msg_from": "Frans Van Elsacker <[email protected]>", "msg_from_op": false, "msg_subject": "ordering RH6.1" }, { "msg_contents": "Frans Van Elsacker <[email protected]> writes:\n> When i do the following\n> CREATE TABLE BLANK (column1 varchar(5));\n> INSERT INTO BLANK (column1) VALUES (' 1');\n> INSERT INTO BLANK (column1) VALUES (' 11');\n> INSERT INTO BLANK (column1) VALUES (' 100');\n> INSERT INTO BLANK (column1) VALUES (' 2');\n> then:\n> SELECT * FROM BLANK order by column1;\n\n> I received\n> 1\n> 100\n> 11\n> 2\t\t--> mark also a not aligned output.\n\n> and I expected\n> 1\n> 2\n> 11\n> 100\n\n> Anybody has an idea?? \n\nBizarre. I see the expected results under both 6.5.3 and current\ndevelopment sources:\n\nplay=> SELECT * FROM BLANK order by column1;\ncolumn1\n-------\n 1\n 2\n 11\n 100\n(4 rows)\n\nI wonder if this could be a LOCALE or MULTIBYTE issue. Do you have\neither feature enabled in your copy, and if so what locale/encoding\ndo you use? 
(I'm running plain vanilla no-USE_LOCALE, no-MULTIBYTE\ncode, so that might be why I don't see anything funny...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 14 Dec 1999 18:28:57 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ordering RH6.1 " }, { "msg_contents": "Tom Lane wrote:\n >Bizarre. I see the expected results under both 6.5.3 and current\n >development sources:\n >\n >play=> SELECT * FROM BLANK order by column1;\n >column1\n >-------\n > 1\n > 2\n > 11\n > 100\n >(4 rows)\n >\n >I wonder if this could be a LOCALE or MULTIBYTE issue. Do you have\n >either feature enabled in your copy, and if so what locale/encoding\n >do you use? (I'm running plain vanilla no-USE_LOCALE, no-MULTIBYTE\n >code, so that might be why I don't see anything funny...)\n\nI've tried this with both locale and multibyte enabled and with\nLANG=en_GB. The results are correct. (Version = 6.5.3).\n-- \n Vote against SPAM: http://www.politik-digital.de/spam/\n ========================================\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"The fear of the LORD is the instruction of wisdom, and\n before honour is humility.\" Proverbs 15:33 \n\n\n", "msg_date": "Wed, 15 Dec 1999 10:29:09 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] ordering RH6.1 " }, { "msg_contents": "Tom Lane wrote:\n \n> I wonder if this could be a LOCALE or MULTIBYTE issue. Do you have\n> either feature enabled in your copy, and if so what locale/encoding\n> do you use? (I'm running plain vanilla no-USE_LOCALE, no-MULTIBYTE\n> code, so that might be why I don't see anything funny...)\n\nHe's running the RPM distribution, which at that release has\n--enable-locale but no multibyte.\n\nUsing the no-locale RPM's I last built, I can't reproduce his results.\n\nFrans, try out the no-locale rpm set and see if the result changes, if\nyou please. (using wget, you would do: wget\nhttp://www.ramifordistat.net/postgres/RPMS/redhat-6.x/postgresql*-2nl.i386.rpm\n)\n\nThis will verify whether it is locale-related or not. I would install\nthe locale RPMs and test for you right now, but my 6.1 machine is at\nhome. If it is inconvenient for you to download this, let me know, and\nI'll try to test tonight at home -- although, I've been meaning to do\njust that for nearly a week now, but I haven't even fired up the machine\nat home in the last week.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Wed, 15 Dec 1999 10:55:28 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ordering RH6.1" }, { "msg_contents": "I download all the rpm from\n(http://www.ramifordistat.net/postgres/RPMS/redhat-6.x/postgresql*-2nl.i386.\nrpm)\nand have install each of them in redhat 6.1.\n\nBut we received the same bad results as before.\n\n\t- my redhat was downloaded from a mirror site as a cd-image\n\t- and there where some older version of postgres (v 6.5.2-1) installed on\nour test system before. They were first uninstalled.\n\nThis are the only two packages that we have running on our machine. I've\nchoose the half-automatic graphic install of redhat. Selected some standard\ntools like ftp, network tool,... and choose a belgian keyboard. 
I have left\nthe timezone selection as default.\n\nStrange things!\n\nI see only the following posibilities :\n\t\t- Our redhat cd-image is different from yours (Why ??)\n\t\t- influence of the older versions \n\t\t- keyboard settings ???\n\nThis is our result :\n\n[postgres@dekatest pgsql]$ psql test\nWelcome to the POSTGRESQL interactive sql monitor:\n Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n[PostgreSQL 6.5.3 on i586-pc-linux-gnu, compiled by gcc egcs-2.91.66]\n\n type \\? for help on slash commands\n type \\q to quit\n type \\g or terminate with semicolon to execute query\n You are currently connected to the database: test\n\ntest=> CREATE TABLE BLANK (column1 varchar(5));\nCREATE\ntest=> INSERT INTO BLANK (column1) VALUES (' 1');\nINSERT 587145 1\ntest=> INSERT INTO BLANK (column1) VALUES (' 11');\nINSERT 587146 1\ntest=> INSERT INTO BLANK (column1) VALUES (' 100');\nINSERT 587147 1\ntest=> INSERT INTO BLANK (column1) VALUES (' 2');\nINSERT 587148 1\ntest=> SELECT * FROM BLANK order by column1;\ncolumn1\n-------\n 1\n 100\n 11\n 2\n(4 rows)\n \nAny Idea ?\n\ngreetings,\nFrans\n\n\nAt 10:55 15/12/99 -0500, you wrote:\n>Tom Lane wrote:\n> \n>> I wonder if this could be a LOCALE or MULTIBYTE issue. Do you have\n>> either feature enabled in your copy, and if so what locale/encoding\n>> do you use? (I'm running plain vanilla no-USE_LOCALE, no-MULTIBYTE\n>> code, so that might be why I don't see anything funny...)\n>\n>He's running the RPM distribution, which at that release has\n>--enable-locale but no multibyte.\n>\n>Using the no-locale RPM's I last built, I can't reproduce his results.\n>\n>Frans, try out the no-locale rpm set and see if the result changes, if\n>you please. (using wget, you would do: wget\n>http://www.ramifordistat.net/postgres/RPMS/redhat-6.x/postgresql*-2nl.i386.\nrpm\n>)\n>\n>This will verify whether it is locale-related or not. I would install\n>the locale RPMs and test for you right now, but my 6.1 machine is at\n>home. If it is inconvenient for you to download this, let me know, and\n>I'll try to test tonight at home -- although, I've been meaning to do\n>just that for nearly a week now, but I haven't even fired up the machine\n>at home in the last week.\n>\n>--\n>Lamar Owen\n>WGCR Internet Radio\n>1 Peter 4:11\n>\n>\n\n", "msg_date": "Thu, 16 Dec 1999 23:53:45 +0100", "msg_from": "Frans Van Elsacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ordering RH6.1" }, { "msg_contents": "[Cristian, Jeff: we have a problem here. RedHat 6.1 Install versus\nRedHat 6.0 upgraded to 6.1 behaves differently. Ideas of where to start\nlooking?]\n\nFrans Van Elsacker wrote:\n> But we received the same bad results as before.\n> column1\n> -------\n> 1\n> 100\n> 11\n> 2\n> (4 rows)\n> \n> Any Idea ?\n\nOk, I bit the bullet and spent the whole day (plus or minus a couple of\nhours) putting together a test bed. I have three machines in this\ntestbed:\n1.)\tMy production server, running Mandrake 5.3, Postgresql 6.5.3-2nl;\n2.)\tMy backup server, running RedHat 6.0, Postgresql 6.5.3-2nl;\n3.)\tMy development server, freshly installed with RH 6.1 + all updates,\nPostgreSQL 6.5.3-2nl.\n\nI have now reproduced the results. 
HOWEVER, my home machine didn't\nreproduce the earlier results, and it is RedHat 6.1 (an upgrade from RH\n6.0).\n\nFor Mandrake 5.3:\n\ncolumn1\n-------\n 1\n 2\n 11\n 100\n(4 rows)\n\nFor RH 6.0: ditto Mandrake 5.3.\n\nfor RH 6.1 (fresh install):\ncolumn1\n-------\n 1\n 100\n 11\n 2\n\nSo, I moved the physical database structure over from the 6.1 machine to\nthe 6.0 machine and redid the select: the right results.\n\nThe RedHat 6.0 machine is running the same exact postgres binaries that\nthe RedHat 6.1 machine is running -- the 6.5.3-2nl rpms were built on my\nhome RedHat 6.1 machine. The Mandrake 5.3 machine is running the RedHat\n5.2 binaries built by the alternate boot set on the development server\n(which is why it took most of the day to set things up....).\n\nOk, hackers:\n\nWhat library routine is used to do the order by in this case?\n\nI'm going to retry this exact set of queries again at home -- I wasn't\nable to reproduce the last set of results -- but we'll see what happens\nhere.\n\nStrange.\n\nI'll see what I can find -- this also explains some strange regression\nresults I was mailed awhile back. In fact, let's try regression on the\nRH 6.1 fresh install.... AND I AM GETTING FAILURES THAT I HAVE NEVER\nGOTTEN AT HOME ON MY UPGRADE REDHAT 6.1! \n\nRecap while I'm waiting for regression to finish:\nThe fresh install of RedHat 6.1 is from the exact same CD that I\nupgraded my home box from RH 6.0. The ONLY difference is the fresh\ninstall versus the upgrade -- same versions of PostgreSQL. I am going to\ndouble check regression at home, but I have not seen these results\nbefore, and I distinctly remember running regression at home. I'll keep\nyou all updated.\n\n[Nine minutes later]\n\nFailures: float8, geometry, select implicit, select having, and select\nviews. The regress.out and regression.diffs are attached. Float8 and\ngeometry are normal.\n\nLooking at the regression diffs, it is obvious that there is a collation\nproblem here. But where is this collation sequence problem coming from?\n(Note that the 6.5.3-2nl RPMs are built without locale support.)\n\nI'm going to go digging into a diff of my home machine versus this new\nRH 6.1 install.\n--\nLamar Owen\nWGCR Internet Radio", "msg_date": "Thu, 16 Dec 1999 19:20:47 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ordering RH6.1" }, { "msg_contents": "At 07:20 PM 12/16/99 -0500, Lamar Owen wrote:\n\n>I'll see what I can find -- this also explains some strange regression\n>results I was mailed awhile back. In fact, let's try regression on the\n>RH 6.1 fresh install.... AND I AM GETTING FAILURES THAT I HAVE NEVER\n>GOTTEN AT HOME ON MY UPGRADE REDHAT 6.1! \n\nI know that AOLserver works on 6.0 and has problems on 6.1 (a 2.2.12\nkernel) and works again if you upgrade your 6.1 to a 2.2.13 kernel.\n(have to use \"-i\" on 6.1 or it won't work, though I forget offhand\nwhat \"-i\" does for AOLserver)\n\nThis ain't specific help but at least PostgreSQL's not alone in \nhaving weird problems with RH 6.* releases.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Thu, 16 Dec 1999 17:06:01 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ordering RH6.1" }, { "msg_contents": "On Thu, 16 Dec 1999, Lamar Owen wrote:\n \n> [Cristian, Jeff: we have a problem here. 
RedHat 6.1 Install versus\n> RedHat 6.0 upgraded to 6.1 behaves differently. Ideas of where to start\n> looking?]\n> I'm going to retry this exact set of queries again at home -- I wasn't\n> able to reproduce the last set of results -- but we'll see what happens\n> here.\n\nOk, confirmation. On my home machine, which was upgraded to RedHat 6.1 from\nRedHat 6.0, I get the correct results:\ncolumn1\n------\n 1\n 11\n 100\n 2\n(4 rows)\n\n> Recap while I'm waiting for regression to finish:\n> The fresh install of RedHat 6.1 is from the exact same CD that I\n> upgraded my home box from RH 6.0. The ONLY difference is the fresh\n> install versus the upgrade -- same versions of PostgreSQL. I am going to\n> double check regression at home, but I have not seen these results\n> before, and I distinctly remember running regression at home. I'll keep\n> you all updated.\n\nUpdate: regression tests that fail on my 6.0-6.1 home machine: float8 and\ngeometry -- which are normal to fail on RedHat any version. IOW, no collation\nproblems at home! Oh, I'm running the exact same postgresql binary RPM's at\nhome as I am running on the fresh RH 6.1 install at work.\n\nTime to dig into date and time stamps on installed RPMs versus updated RPMs.\n\nFrans, try installing RedHat 6.0 on a box, then upgrading to RH 6.1, then rerun\nyour tests and see what happens.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Thu, 16 Dec 1999 20:53:49 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ordering RH6.1" }, { "msg_contents": "Lamar Owen <[email protected]> writes:\n> For RH 6.0: [ correct results ]\n\n> for RH 6.1 (fresh install):\n> column1\n> -------\n> 1\n> 100\n> 11\n> 2\n\n> So, I moved the physical database structure over from the 6.1 machine to\n> the 6.0 machine and redid the select: the right results.\n\n> The RedHat 6.0 machine is running the same exact postgres binaries that\n> the RedHat 6.1 machine is running -- the 6.5.3-2nl rpms were built on my\n> home RedHat 6.1 machine.\n\nWow. Same data files, same binaries, different results. Sure looks\nlike the finger is pointing at 6.1's libc. (I'm assuming that the\nbinaries make use of a shared-library libc, not statically-linked-in\nroutines, right?)\n\n> Ok, hackers:\n> What library routine is used to do the order by in this case?\n\nIf you compiled with USE_LOCALE, it's strcoll(); if not, strncmp().\nSee varstr_cmp() in src/backend/utils/adt/varlena.c.\n\n> Looking at the regression diffs, it is obvious that there is a collation\n> problem here. But where is this collation sequence problem coming from?\n> (Note that the 6.5.3-2nl RPMs are built without locale support.)\n\nOK...\n\nYour regression failures show collation problems in all three of bpchar,\nvarchar, and name. (But curiously, not for text ... hmm ...). bpchar\nand varchar both use varstr_cmp(), but namelt just calls strncmp\nunconditionally --- see adt/name.c. So the evidence is looking very\nstrong that strncmp has got some kind of problem on RH 6.1.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 16 Dec 1999 21:04:25 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ordering RH6.1 " }, { "msg_contents": "On Thu, 16 Dec 1999, Tom Lane wrote:\n> Lamar Owen <[email protected]> writes:\n> Wow. Same data files, same binaries, different results. Sure looks\n> like the finger is pointing at 6.1's libc. 
(I'm assuming that the\n> binaries make use of a shared-library libc, not statically-linked-in\n> routines, right?)\n\nRight.\n\n> Your regression failures show collation problems in all three of bpchar,\n> varchar, and name. (But curiously, not for text ... hmm ...). bpchar\n> and varchar both use varstr_cmp(), but namelt just calls strncmp\n> unconditionally --- see adt/name.c. So the evidence is looking very\n> strong that strncmp has got some kind of problem on RH 6.1.\n\nMore information: the LOCALE enabled-binaries act the same way. So, there's an\nissue with both strcoll and strncmp. What gets me is that it works perfectly\nfine on my RedHat 6.1 box that was upgraded from RedHat 6.0 -- but it does not\nwork fine at all on a box that I did a fresh install on today -- from the same\nCD I did the upgrade.\n\nHmmm....\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n\n> \n> \t\t\tregards, tom lane\n", "msg_date": "Thu, 16 Dec 1999 21:56:22 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ordering RH6.1" }, { "msg_contents": "On Thu, 16 Dec 1999, Lamar Owen wrote:\n> On Thu, 16 Dec 1999, Lamar Owen wrote:\n> \n> > [Cristian, Jeff: we have a problem here. RedHat 6.1 Install versus\n> > RedHat 6.0 upgraded to 6.1 behaves differently. Ideas of where to start\n> > looking?]\n> > I'm going to retry this exact set of queries again at home -- I wasn't\n> > able to reproduce the last set of results -- but we'll see what happens\n> > here.\n> \n> Ok, confirmation. On my home machine, which was upgraded to RedHat 6.1 from\n> RedHat 6.0, I get the correct results:\n\nMore information: it seems that the i18n support is the cause of this. If you\nremove or rename the file /etc/sysconfig/i18n and restart, then even the fresh\nRedHat 6.1 install provides the correct results for this query. This file, I\nthink, is created during a fresh installation of RH 6.1 -- it doesn't seem to\nbelong to any RPM. An upgrade wouldn't create this file..... (Jeff? Cristian?\nam I right on this?)\n\nRunning regression, I get the float8 and geometry failures, but I now get an\nopr_sanity failure -- but the other collation failures are gone. The opr_sanity\nfailure diff: \n\n*** expected/opr_sanity.out\tWed May 12 11:02:34 1999\n--- results/opr_sanity.out\tThu Dec 16 22:39:45 1999\n***************\n*** 48,56 ****\n (p1.proargtypes[0] < p2.proargtypes[0]);\n proargtypes|proargtypes\n -----------+-----------\n- 25| 1043\n 1042| 1043\n! (2 rows)\n \n QUERY: SELECT DISTINCT p1.proargtypes[1], p2.proargtypes[1]\n FROM pg_proc AS p1, pg_proc AS p2\n--- 48,55 ----\n (p1.proargtypes[0] < p2.proargtypes[0]);\n proargtypes|proargtypes\n -----------+-----------\n 1042| 1043\n! (1 row)\n \n QUERY: SELECT DISTINCT p1.proargtypes[1], p2.proargtypes[1]\n FROM pg_proc AS p1, pg_proc AS p2\n\n\nHOWEVER, after doing an initdb and rerunning regression, this test no longer\nfails. FWIW.\n\nI seems that charmap (i18n) rears its ugly head even if locale doesn't.\n\n--\nLamar Owen\nWGCR Internet Radio\n", "msg_date": "Thu, 16 Dec 1999 22:35:48 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ordering RH6.1" }, { "msg_contents": "On Thu, 16 Dec 1999, Lamar Owen wrote:\n\n> More information: the LOCALE enabled-binaries act the same way. So, there's an\n> issue with both strcoll and strncmp. 
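For what it's worth, the effect is easy to reproduce outside the backend with a
throwaway test program that calls the same two routines varstr_cmp() can end up
using. This is only a sketch, and the strcoll() result depends entirely on the
locale tables glibc has loaded, so treat the en_US behaviour as what RH 6.1
appears to do rather than a guarantee:

    /* sketch: run once with LC_ALL=C and once with LC_ALL=en_US */
    #include <stdio.h>
    #include <string.h>
    #include <locale.h>

    int
    main(void)
    {
        setlocale(LC_ALL, "");  /* honours LANG / LC_ALL from the environment */
        printf("strncmp = %d\n", strncmp("  100", "   11", 5));
        printf("strcoll = %d\n", strcoll("  100", "   11"));
        return 0;
    }

Under LC_ALL=C the two agree ("   11" sorts first, leading blanks are
significant); with the en_US tables active, strcoll() apparently treats the
blanks as ignorable and flips the sign, which is exactly the 1, 100, 11, 2
ordering Frans reported.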
What gets me is that it works perfectly\n> fine on my RedHat 6.1 box that was upgraded from RedHat 6.0 -- but it does not\n> work fine at all on a box that I did a fresh install on today -- from the same\n> CD I did the upgrade.\n\nAny differences in the environment variables maybe?\n\nCristian\n--\n----------------------------------------------------------------------\nCristian Gafton -- [email protected] -- Red Hat, Inc.\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n UNIX is user friendly. It's just selective about who its friends are.\n\n\n\n", "msg_date": "Thu, 16 Dec 1999 22:52:23 -0500 (EST)", "msg_from": "Cristian Gafton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ordering RH6.1" }, { "msg_contents": "On Thu, 16 Dec 1999, Cristian Gafton wrote:\n> On Thu, 16 Dec 1999, Lamar Owen wrote:\n> \n> > More information: the LOCALE enabled-binaries act the same way. So, there's an\n> > issue with both strcoll and strncmp. What gets me is that it works perfectly\n> > fine on my RedHat 6.1 box that was upgraded from RedHat 6.0 -- but it does not\n> > work fine at all on a box that I did a fresh install on today -- from the same\n> > CD I did the upgrade.\n> \n> Any differences in the environment variables maybe?\n\nIn a nutshell, yes. /etc/sysconfig/i18n on the fresh install sets LANG,\nLC_ALL, and LINGUAS all to be \"en_US\". The upgraded machine at home doesn't\nhave an /etc/sysconfig/i18n -- nor does the RH 6.0 box.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Thu, 16 Dec 1999 22:58:21 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ordering RH6.1" }, { "msg_contents": ">\n> Failures: float8, geometry, select implicit, select having, and select\n> views. The regress.out and regression.diffs are attached. Float8 and\n> geometry are normal.\n\nMore problem data...\n\nWe installed RH 6.1 fresh, then installed pgsql 6.5.2 from tar.gz. Failures on the following regression tests: float8, geometry, opr_sanity, sanity_check, random, and misc.\n\nOn a hunch from this thread, I then removed all postgresql-related RPM packages (with 'rpm -e'), rebuilt pgsql 6.5.2, and all regression tests passed (except float8 and geometry, which is normal).\n\nI've also noticed some new (and possibly related?) RH 6.1 wierdness with a fairly mature perl module, Date::Manip-5.35, that wasn't showing up on RH 6.0. It is now failing the first time it attempts to ascertain the timezone, and then appears to succeed everytime thereafter for a given process (and TZ is clearly set in the configuration for the module as well as showing up with the\n'date' command). The RPM removal above had no effect on that problem.\n\nCheers,\nEd Loehr\n\n\n\n\n", "msg_date": "Thu, 16 Dec 1999 23:20:05 -0600", "msg_from": "Ed Loehr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ordering RH6.1" }, { "msg_contents": "Lamar Owen wrote:\n\n> More information: it seems that the i18n support is the cause of this. If you\n> remove or rename the file /etc/sysconfig/i18n and restart, then even the fresh\n> RedHat 6.1 install provides the correct results for this query. This file, I\n> think, is created during a fresh installation of RH 6.1 -- it doesn't seem to\n> belong to any RPM. An upgrade wouldn't create this file..... (Jeff? 
Cristian?\n> am I right on this?)\n\nStill more data...\n\nAfter renaming the file /etc/sysconfig/i18n and rebooting, the perl module\nDate::Manip timezone lookup failure described previously has ceased.\n\nIt seems there may be at least two issues, possibly related. My pgsql regression\ntests were fixed by nuking the pgsql-related RPMs, but that didn't fix the\nDate::Manip perl module problem. Renaming i18n did. I didn't test the\nSELECT query in question prior to making these changes, but that SELECT query does\nindeed now return expected results.\n\nCheers,\nEd Loehr\n\n\n", "msg_date": "Thu, 16 Dec 1999 23:54:36 -0600", "msg_from": "Ed Loehr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ordering RH6.1" }, { "msg_contents": "> After renaming the file /etc/sysconfig/i18n and rebooting, the perl module\n> Date::Manip timezone lookup failure described previously has ceased.\n\nI spoke too soon. Timezone problem is not fixed by this as it first appeared.\n\nCheers,\nEd Loehr\n\n", "msg_date": "Fri, 17 Dec 1999 00:09:45 -0600", "msg_from": "Ed Loehr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ordering RH6.1" }, { "msg_contents": "May I humbly suggest a TODO item (or two)?\n\nOn the general list a while back someone wanted to implement\na view where the records visible to the client were\ndependent upon an application defined user id. I, too, am in\na similar circumstance. However, one solution would be to do\nsomething like this:\n\nSELECT authenticate(<userid>, <password>);\n\nwhere <userid> and <password> are submitted by the client\napplication as input from the user. The authenticate()\nroutine would be a 'C' language routine which would validate\nthe userid and password by checking some sort of password\ntable (the database equivalent of shadowed passwords) and\nthen either set or clear an environmental variable, say,\nCLIENTID and returning either 0 or 1. Then, one could have a\nview such as:\n\nCREATE VIEW 'mypayroll' AS SELECT * FROM payroll WHERE\nemployeeid = clientid();\n\nand the clientid() function simply returns the value of the\nenvironmental variable CLIENTID. Of course this would only\nbe useful for single-connection client applications. The\nclient application would initially connect to PostgreSQL\nusing a PostgreSQL userid/password which would only have\nSELECT privileges on the 'mypayroll' table.\n\nAt any rate. the whole point of this deal is that anyone\nwanting to write a 'C' function which needs to access tuples\nhas to use the SPI interface. And at the moment, that means\nthey need the source tree to the backend. I wrote Lamar a\nnote a while back regarding this and he said to check the\ndependencies on spi.h. Well, a 'make depend' for spi.c\nyielded 86 headers, so if that's any indication, it would\nmean a substantial number of additional headers would be\nrequired for properly allowing users with binary\ndistributions to write SPI code. Quite frankly, I don't\nknow why spi.h is even being shipped with various packages\nat the moment. As a result, no one can use the contrib code\nwithout the backend source, nor can one write 'C' functions\nthat access tables. So I was wondering if this is something\nthat might ever be addressed? 
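To make the problem concrete, even the smallest useful SPI function -- the
hypothetical sketch below just counts rows in the payroll table mentioned
above -- needs "executor/spi.h", and therefore every header that file drags
in, before it will compile against a binary installation:

    /* hypothetical user-written C function: count rows via SPI */
    #include "executor/spi.h"

    int
    count_rows(void)
    {
        int n = -1;

        if (SPI_connect() != SPI_OK_CONNECT)
            return -1;
        if (SPI_exec("SELECT 1 FROM payroll", 0) == SPI_OK_SELECT)
            n = SPI_processed;
        SPI_finish();
        return n;
    }

None of that is exotic, but without the full set of installed headers the very
first #include fails, which is the point.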
Of course, the whole issue\ndisappears if there is ever a PostgreSQL equivalent of a\n'setuid' attribute and grant/revoke privileges for\nfunctions....\n\nJust some thoughts...\n\nMike Mascari\n\n\n", "msg_date": "Fri, 17 Dec 1999 20:32:02 -0500", "msg_from": "Mike Mascari <[email protected]>", "msg_from_op": false, "msg_subject": "SPI header dependencies" }, { "msg_contents": "Mike Mascari <[email protected]> writes:\n> On the general list a while back someone wanted to implement\n> a view where the records visible to the client were\n> dependent upon an application defined user id. I, too, am in\n> a similar circumstance. However, one solution would be to do\n> something like this:\n> SELECT authenticate(<userid>, <password>);\n> where <userid> and <password> are submitted by the client\n> application as input from the user.\n\nThis seems like a completely redundant mechanism to me.\nWhat is wrong with using the *existing* user authentication\nmechanisms, and then using getpgusername() or CURRENT_USER\nin your queries?\n\n> At any rate. the whole point of this deal is that anyone\n> wanting to write a 'C' function which needs to access tuples\n> has to use the SPI interface. And at the moment, that means\n> they need the source tree to the backend.\n\nI can't really see anyone writing SPI functions without access to\na source tree; there's too much stuff that's most readily learned\nby looking at the code. But I think you are right that the installed\ninclude tree is probably insufficient to *compile* the average SPI\nfunction, and that's not good. We haven't been paying much attention\nto the question of which headers need to be installed. There are\nprobably some installed that needn't be anymore, as well as vice versa.\n\nProposed TODO:\n* Re-examine list of header files that get installed, add/delete as needed\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 18 Dec 1999 12:20:50 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] SPI header dependencies " }, { "msg_contents": "Tom Lane wrote:\n >Proposed TODO:\n >* Re-examine list of header files that get installed, add/delete as needed\n\nI attach the list of extra include files that I needed to include in Debian's\npostgresql-dev 
package\n\naccess/funcindex.h\naccess/heapam.h\naccess/htup.h\naccess/ibit.h\naccess/itup.h\naccess/relscan.h\naccess/sdir.h\naccess/skey.h\naccess/strat.h\naccess/transam.h\naccess/tupdesc.h\naccess/tupmacs.h\naccess/xact.h\ncatalog/catname.h\ncatalog/pg_am.h\ncatalog/pg_attribute.h\ncatalog/pg_class.h\ncatalog/pg_index.h\ncatalog/pg_language.h\ncatalog/pg_proc.h\ncatalog/pg_type.h\nexecutor/execdefs.h\nexecutor/execdesc.h\nexecutor/executor.h\nexecutor/hashjoin.h\nexecutor/tuptable.h\nlib/fstack.h\nnodes/execnodes.h\nnodes/memnodes.h\nnodes/nodes.h\nnodes/params.h\nnodes/parsenodes.h\nnodes/pg_list.h\nnodes/plannodes.h\nnodes/primnodes.h\nnodes/relation.h\nparser/parse_node.h\nparser/parse_type.h\nrewrite/prs2lock.h\nstorage/block.h\nstorage/buf.h\nstorage/buf_internals.h\nstorage/bufmgr.h\nstorage/bufpage.h\nstorage/fd.h\nstorage/ipc.h\nstorage/item.h\nstorage/itemid.h\nstorage/itemptr.h\nstorage/lmgr.h\nstorage/lock.h\nstorage/off.h\nstorage/page.h\nstorage/shmem.h\nstorage/sinvaladt.h\nstorage/spin.h\ntcop/dest.h\ntcop/pquery.h\ntcop/tcopprot.h\ntcop/utility.h\nutils/array.h\nutils/builtins.h\nutils/datetime.h\nutils/datum.h\nutils/dt.h\nutils/fcache.h\nutils/hsearch.h\nutils/inet.h\nutils/int8.h\nutils/mcxt.h\nutils/memutils.h\nutils/nabstime.h\nutils/numeric.h\nutils/portal.h\nutils/rel.h\nutils/syscache.h\nutils/tqual.h\n\n-- \n Vote against SPAM: http://www.politik-digital.de/spam/\n ========================================\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"In the beginning was the Word, and the Word was with \n God, and the Word was God. The same was in the \n beginning with God. All things were made by him; and \n without him was not any thing made that was made.\" \n John 1:1-3 \n\n\n", "msg_date": "Sat, 18 Dec 1999 18:31:05 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] SPI header dependencies " }, { "msg_contents": "Tom Lane wrote:\n> \n> Mike Mascari <[email protected]> writes:\n> > SELECT authenticate(<userid>, <password>);\n> > where <userid> and <password> are submitted by the client\n> > application as input from the user.\n> \n> This seems like a completely redundant mechanism to me.\n> What is wrong with using the *existing* user authentication\n> mechanisms, and then using getpgusername() or CURRENT_USER\n> in your queries?\n\nI agree. I imagine the poster's development probably took\nthe same course as mine - first he was using PostgreSQL as a\nbackend to a web server, like Apache. He then probably using\nBasic authentication with something like mod_auth_pgsql. In\norder to authenticate web pages using something like\nmod_auth_pgsql, the httpd user (www, nobody, etc.) would\nconnect to the database and check the user name and\nencrypted password submitted against a user-specified table.\nSince the only application that is going to be connecting to\nPostgreSQL is the webserver, one is tempted (including me)\nto create and manage fake webuser id's and passwords, and\nonly have a single real PostgreSQL user id connect to the\ndatabase...particularly when the webuser list numbers in the\nthousands. That's why I attributed the LRU file descriptor\nexhaustion problem I reported about a month ago to kernel\nproblems instead of the password authentication leak - 90%\nof our users use the web-server. 
The httpd process runs as a\nuser id which does not have a shell account, and can only\nconnect to the database on localhost. This whole scheme\nlooks good at first, until you find yourself developing\nWindows-based clients...You either have to shoe-horn in a\nhack (like the above) or bite the bullet and migrate your\ncore authentication mechanism to PostgreSQL's.\n\n> Proposed TODO:\n> * Re-examine list of header files that get installed, add/delete as needed\n> \n> regards, tom lane\n\nSounds great. Although hopefully not needed in the next\nrelease :-) , the most annoying thing in the past was the\ninability to build a refint.so from the various binary\ndistributions...\n\nMike Mascari\n", "msg_date": "Sat, 18 Dec 1999 18:50:57 -0500", "msg_from": "Mike Mascari <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] SPI header dependencies" }, { "msg_contents": " > Proposed TODO:\n > * Re-examine list of header files that get installed, add/delete as needed\n\n Sounds great. Although hopefully not needed in the next\n release :-) , the most annoying thing in the past was the\n inability to build a refint.so from the various binary\n distributions...\n\nAbsolutely needed! There is more to SPI than refint. The foreign\nkeys cannot remove the need for that interface to be complete as\ninstalled.\n\nSame problem happens when trying to include trigger.h, so those\ndependencies need looking at too.\n\nCheers,\nBrook\n", "msg_date": "Sat, 18 Dec 1999 17:07:25 -0700 (MST)", "msg_from": "Brook Milligan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] SPI header dependencies" }, { "msg_contents": "Oliver Elphick wrote:\n> \n> Tom Lane wrote:\n> >Proposed TODO:\n> >* Re-examine list of header files that get installed, add/delete as needed\n> \n> I attach the list of extra include files that I needed to include in Debian's\n> postgresql-dev package\n[snip]\n\nI owe you again, Oliver. This list gives the postgres-dev package\neverything needed to do SPI development?? Good thing I haven't done my\nplanned 6.5.3-3 release yet.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 20 Dec 1999 12:13:18 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] SPI header dependencies" }, { "msg_contents": "Oliver Elphick wrote:\n> \n> Tom Lane wrote:\n> >Proposed TODO:\n> >* Re-examine list of header files that get installed, add/delete as needed\n> \n> I attach the list of extra include files that I needed to include in Debian's\n> postgresql-dev package\n[snip]\n\nI owe you again, Oliver. This list gives the postgres-dev package\neverything needed to do SPI development?? 
Good thing I haven't done my\nplanned 6.5.3-3 release yet.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 20 Dec 1999 13:29:20 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] SPI header dependencies" }, { "msg_contents": "There appear to have been changes in the shared library libpq.\n\nThe default library from 6.5.3 with psql from current tree gives:\n\n olly@linda$ psql template1\n ...\n psql: error in loading shared libraries: psql: undefined symbol: \ncreatePQExpBuffer\n\n olly@linda$ LD_PRELOAD=/usr/local/pgsql/lib/libpq.so.2.0 psql template1\n ...\n template1=>\n\nSince the library has changed, it needs to have a new version number.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"Be patient therefore, brethren, unto the coming of the\n Lord...Be patient; strengthen your hearts, for \n the coming of the Lord draweth nigh.\" \n James 5:7,8 \n\n\n", "msg_date": "Mon, 10 Jan 2000 14:11:30 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Shared library version " }, { "msg_contents": "> There appear to have been changes in the shared library libpq.\n> \n> The default library from 6.5.3 with psql from current tree gives:\n> \n> olly@linda$ psql template1\n> ...\n> psql: error in loading shared libraries: psql: undefined symbol: \n> createPQExpBuffer\n> \n> olly@linda$ LD_PRELOAD=/usr/local/pgsql/lib/libpq.so.2.0 psql template1\n> ...\n> template1=>\n> \n> Since the library has changed, it needs to have a new version number.\n\nSeems I should just kick up every minor version number for 7.0 for all\ninterfaces. OK?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 Jan 2000 10:06:05 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Shared library version" }, { "msg_contents": "\"Oliver Elphick\" <[email protected]> writes:\n> There appear to have been changes in the shared library libpq.\n> Since the library has changed, it needs to have a new version number.\n\nYou're right, we need to bump the number before release (and I hope we\nremember!). Past practice has not been to bump the number during\ndevelopment cycles, since we'd shortly have ridiculously high version\nnumbers if we incremented them at every development change.\n\nlibpq++ has also had API changes requiring a new version number before\nrelease, I think --- any others?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Jan 2000 10:22:44 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Shared library version " }, { "msg_contents": "On Mon, Jan 10, 2000 at 10:22:44AM -0500, Tom Lane wrote:\n> You're right, we need to bump the number before release (and I hope we\n> remember!). Past practice has not been to bump the number during\n> development cycles, since we'd shortly have ridiculously high version\n> numbers if we incremented them at every development change.\n\nEhem... I do exactly that with libecpg. For simply changes not involving API\nchanges I increment just the patch level. That's why libecpg has major\nversion, minor version and patchlevel. 
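The mechanics, for anyone who hasn't looked: both numbers end up in the file
name while only the major number ends up in the soname, so the dynamic linker
keeps resolving to the newest minor within a major series. Roughly (a sketch
of the usual ELF convention, not a quote from the real Makefiles):

    # sketch, using libpq as the example
    SO_MAJOR_VERSION = 2
    SO_MINOR_VERSION = 1
    # file built:  libpq.so.$(SO_MAJOR_VERSION).$(SO_MINOR_VERSION)  -> libpq.so.2.1
    # soname:      libpq.so.$(SO_MAJOR_VERSION)                      -> libpq.so.2
    # so an incompatible interface change is the only thing that should
    # ever move the major number (and with it the soname)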
\n\nI find it very difficult to keep track of the changes without changing\nversion number.\n\nSo please, do not change this number upon release.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n", "msg_date": "Mon, 10 Jan 2000 21:18:17 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Shared library version" }, { "msg_contents": "Tom Lane wrote:\n >\"Oliver Elphick\" <[email protected]> writes:\n >> There appear to have been changes in the shared library libpq.\n >> Since the library has changed, it needs to have a new version number.\n >\n >You're right, we need to bump the number before release (and I hope we\n >remember!). Past practice has not been to bump the number during\n >development cycles, since we'd shortly have ridiculously high version\n >numbers if we incremented them at every development change.\n\nYes, but it should be bumped the first time it changes; I agree that it \nneed not be increased during later development of the same release.\n\nA patch would be superfluous. The necessary change is simply to\nincrement SO_MINOR_VERSION in src/interfaces/libpq/Makefile\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"Enter into his gates with thanksgiving, and into his \n courts with praise. Be thankful unto him, and bless \n his name.\" Psalms 100:4 \n\n\n", "msg_date": "Tue, 11 Jan 2000 09:40:56 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Shared library version " }, { "msg_contents": "On 2000-01-10, Tom Lane mentioned:\n\n> \"Oliver Elphick\" <[email protected]> writes:\n> > There appear to have been changes in the shared library libpq.\n> > Since the library has changed, it needs to have a new version number.\n> \n> You're right, we need to bump the number before release (and I hope we\n> remember!). Past practice has not been to bump the number during\n> development cycles, since we'd shortly have ridiculously high version\n> numbers if we incremented them at every development change.\n> \n> libpq++ has also had API changes requiring a new version number before\n> release, I think --- any others?\n\nIt would at least be fair to bump the minor version number when you do the\nbranch for a new version, so now we'd be at 2.1. IIRC the dynamic linker\nwill pick the one with the higher minor version. Since we do not have any\nincompatible changes (?) we shouldn't bump the major version.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n", "msg_date": "Tue, 11 Jan 2000 14:26:45 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Shared library version " }, { "msg_contents": "On Tue, 11 Jan 2000, Peter Eisentraut wrote:\n\n> On 2000-01-10, Tom Lane mentioned:\n> \n> > \"Oliver Elphick\" <[email protected]> writes:\n> > > There appear to have been changes in the shared library libpq.\n> > > Since the library has changed, it needs to have a new version number.\n> > \n> > You're right, we need to bump the number before release (and I hope we\n> > remember!). 
Past practice has not been to bump the number during\n> > development cycles, since we'd shortly have ridiculously high version\n> > numbers if we incremented them at every development change.\n> > \n> > libpq++ has also had API changes requiring a new version number before\n> > release, I think --- any others?\n> \n> It would at least be fair to bump the minor version number when you do the\n> branch for a new version, so now we'd be at 2.1. IIRC the dynamic linker\n> will pick the one with the higher minor version. Since we do not have any\n> incompatible changes (?) we shouldn't bump the major version.\n\nlibpq++ got a major number bump on my first sweep. This time through \nit should only need a minor since I don't forsee any operational changes,\njust additional functionality and bug fixes.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN: $24.95/mo or less - 56K Dialup: $17.95/mo or less at Pop4\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Tue, 11 Jan 2000 09:22:35 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Shared library version " }, { "msg_contents": "Using the cvs version updated this morning, this query kills the backend,\nwith no explanation in the log (-d 3):\n\n create table junk (id char(4) primary key, name text not null)\n\nIf the primary key constraint is omitted, it is OK.\n\nThis worked yesterday. Is this a solved problem, or do I need to trace it?\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"Enter into his gates with thanksgiving, and into his \n courts with praise. Be thankful unto him, and bless \n his name.\" Psalms 100:4 \n\n\n", "msg_date": "Tue, 11 Jan 2000 22:29:28 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "CREATE TABLE ... PRIMARY KEY kills backend" }, { "msg_contents": "> Using the cvs version updated this morning, this query kills the backend,\n> with no explanation in the log (-d 3):\n> \n> create table junk (id char(4) primary key, name text not null)\n> \n> If the primary key constraint is omitted, it is OK.\n> \n> This worked yesterday. Is this a solved problem, or do I need to trace it?\n> \n\nWorks for me on current sources:\n\ntest=> create table junk (id char(4) primary key, name text not null);\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'junk_pkey'\nfor table 'junk'\nCREATE\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 Jan 2000 19:01:17 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CREATE TABLE ... 
PRIMARY KEY kills backend" }, { "msg_contents": "\"Oliver Elphick\" <[email protected]> writes:\n> Using the cvs version updated this morning, this query kills the backend,\n> with no explanation in the log (-d 3):\n>\n> create table junk (id char(4) primary key, name text not null)\n\nWorks for me:\n\nregression=# create table junk (id char(4) primary key, name text not null);\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'junk_pkey' for table 'junk'\nCREATE\n\nAre you sure you have a complete update of the INDEX_MAX_KEYS changes?\nI committed the last of them about 1am EST (6am GMT) this morning, and\nit was a change to config.h.in ---- you would need to do a *full*\nconfigure, build, initdb cycle to be sure you have working code.\n\nIf that doesn't do it for you, there may be a platform-dependent bug\nstill lurking; can you provide a debugger backtrace of the crashed\nbackend?\n\nI'd also suggest running the regress tests ... they pass here, with\nthe exception of the 'array' test ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Jan 2000 19:09:00 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CREATE TABLE ... PRIMARY KEY kills backend " }, { "msg_contents": "Tom Lane wrote:\n >\"Oliver Elphick\" <[email protected]> writes:\n >> Using the cvs version updated this morning, this query kills the backend,\n >> with no explanation in the log (-d 3):\n >>\n >> create table junk (id char(4) primary key, name text not null)\n >\n >Works for me:\n >\n >regression=# create table junk (id char(4) primary key, name text not null);\n >NOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'junk_pkey' for\n > table 'junk'\n >CREATE\n >\n >Are you sure you have a complete update of the INDEX_MAX_KEYS changes?\n >I committed the last of them about 1am EST (6am GMT) this morning, and\n >it was a change to config.h.in ---- you would need to do a *full*\n >configure, build, initdb cycle to be sure you have working code.\n \n./configure --with-tcl --with-mb=UNICODE --with-odbc --enable-locale \n--with-maxbackends=64 --with-pgport=5431 --program-prefix=pg7.\n\n(The program-prefix doesn't seem to do anything.)\n\nDatabase destoyed and initdb run...\n\n >If that doesn't do it for you, there may be a platform-dependent bug\n >still lurking; can you provide a debugger backtrace of the crashed\n >backend?\n\nSegmentation fault (in the end):\n#0 0x400f068d in _IO_default_xsputn () from /lib/libc.so.6\n#1 0x400e0126 in vfprintf () from /lib/libc.so.6\n#2 0x400edf23 in vsnprintf () from /lib/libc.so.6\n#3 0x80a8e82 in appendStringInfo ()\n#4 0x80c244d in _outTypeName ()\n#5 0x80c43da in _outNode ()\n#6 0x80c2391 in _outColumnDef ()\n... 
\n#157128 0x80c23f6 in _outColumnDef ()\n#157129 0x80c43ca in _outNode ()\n#157130 0x80c407c in _outNode ()\n#157131 0x80c3f7a in _outConstraint ()\n#157132 0x80c475a in _outNode ()\n#157133 0x80c407c in _outNode ()\n#157134 0x80c23f6 in _outColumnDef ()\n#157135 0x80c43ca in _outNode ()\n#157136 0x80c407c in _outNode ()\n#157137 0x80c219c in _outCreateStmt ()\n#157138 0x80c43aa in _outNode ()\n#157139 0x80c2578 in _outQuery ()\n#157140 0x80c43fa in _outNode ()\n#157141 0x80c47fd in nodeToString ()\n#157142 0x80ed791 in pg_parse_and_plan ()\n#157143 0x80eda46 in pg_exec_query_dest ()\n#157144 0x80eda01 in pg_exec_query ()\n#157145 0x80eeb82 in PostgresMain ()\n#157146 0x80d7ee7 in DoBackend ()\n#157147 0x80d7abe in BackendStartup ()\n#157148 0x80d6cc9 in ServerLoop ()\n#157149 0x80d66ae in PostmasterMain ()\n#157150 0x80ae2cb in main ()\n#157151 0x400bc7e2 in __libc_start_main () from /lib/libc.so.6\n\nI don't have any line number info, so I'll have to rebuild in order to\ndo more detailed tracing.\n\n >I'd also suggest running the regress tests ... they pass here, with\n >the exception of the 'array' test ...\n\nThe regression test doesn't seem to work at all with multibyte enabled:\n\n=============== creating new regression database... =================\ncreatedb: \"UNICODE\" is not a valid encoding name.\ncreatedb failed\nACTUAL RESULTS OF REGRESSION TEST ARE NOW IN FILE regress.out\n\nThe reason is that regress.sh uses createdb; createdb has a bad test for\nthe encoding value (I have sent a patch separately). So I assume no-one\nhas run regression tests with multibyte-encoding enabled?\n\nI fixed the createdb bug and ran the regression test. The constraints\ntest failed when trying to create a table with a primary key. Every\ntest thereafter failed immediately [pqReadData() -- backend closed the\nchannel unexpectedly]; it appears that the primary key error messes up\nthe postmaster in some way.\n\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"For the LORD is good; his mercy is everlasting; and \n his truth endureth to all generations.\" \n Psalms 100:5 \n\n\n", "msg_date": "Wed, 12 Jan 2000 10:31:43 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] CREATE TABLE ... 
PRIMARY KEY kills backend " }, { "msg_contents": "\"Oliver Elphick\" wrote:\n >I don't have any line number info, so I'll have to rebuild in order to\n >do more detailed tracing.\n\nQUERY: create table oljunk (id char(2) primary key, name text);\n\n(gdb) n\n134 _outNode(str, node->raw_default);\n(gdb) p *str\n$25 = {\n data = 0x81dd788 \"{ QUERY :command 5 :create oljunk { CREATE :relname oljunk :istemp false \\t:columns ({ COLUMNDEF :colname id :typename { TYPENAME :name bpchar :timezone false :setof false typmod 6 :arrayBounds :arr\"..., len = 263, \n maxlen = 512}\n(gdb) n\n135 appendStringInfo(str, \" :cooked_default %s :constraints \",\n(gdb) n\n137 _outNode(str, node->constraints);\n(gdb) n\n\nProgram received signal SIGSEGV, Segmentation fault.\n0x400f068a in _IO_default_xsputn () from /lib/libc.so.6\n\nThis is the backtrace before calling _outNode() at 137:\n#0 _outColumnDef (str=0xbfffe8cc, node=0x81dd610) at outfuncs.c:137\n str = 0xbfffe8cc\n node = (ColumnDef *) 0x81dd610\n#1 0x80c43da in _outNode (str=0xbfffe8cc, obj=0x81dd610) at outfuncs.c:1355\n str = 0xbfffe8cc\n obj = (void *) 0x81dd610\n#2 0x80c408c in _outNode (str=0xbfffe8cc, obj=0x81dd970) at outfuncs.c:1336\n l = (List *) 0x81dd970\n str = 0xbfffe8cc\n obj = (void *) 0x81dd970\n#3 0x80c21ac in _outCreateStmt (str=0xbfffe8cc, node=0x81dd7b8)\n at outfuncs.c:74\n str = 0xbfffe8cc\n node = (CreateStmt *) 0x81dd7b8\n#4 0x80c43ba in _outNode (str=0xbfffe8cc, obj=0x81dd7b8) at outfuncs.c:1348\n str = 0xbfffe8cc\n obj = (void *) 0x81dd7b8\n#5 0x80c2588 in _outQuery (str=0xbfffe8cc, node=0x81dd8e8) at outfuncs.c:185\n str = 0xbfffe8cc\n node = (Query *) 0x81dd8e8\n#6 0x80c440a in _outNode (str=0xbfffe8cc, obj=0x81dd8e8) at outfuncs.c:1364\n str = 0xbfffe8cc\n obj = (void *) 0x81dd8e8\n#7 0x80c480d in nodeToString (obj=0x81dd8e8) at outfuncs.c:1570\n obj = (void *) 0x81dd8e8\n str = {\n data = 0x81ddc20 \"{ QUERY :command 5 :create oljunk { CREATE :relname oljunk :istemp false \\t:columns ({ COLUMNDEF :colname id :typename { TYPENAME :name bpchar :timezone false :setof false typmod 6 :arrayBounds :arr\"..., len = 298, \n maxlen = 512}\n#8 0x80ed7a1 in pg_parse_and_plan (\n query_string=0x8184da0 \"create table oljunk (id char(2) primary key, name text)\", typev=0x0, nargs=0, queryListP=0xbfffe97c, dest=Remote, \n aclOverride=0 '\\000') at postgres.c:435\n query_string = 0x81dd8e8 \"X\\002\"\n aclOverride = 0 '\\000'\n querytree_list = (List *) 0x81dd8e8\n plan_list = (List *) 0x0\n querytree_list_item = (List *) 0x81dda60\n querytree = (Query *) 0x81dd8e8\n plan = (Plan *) 0x81dd8e8\n new_list = (List *) 0x0\n rewritten = (List *) 0xf5\n\n\nNow we run on a bit, and we go into a recursive loop inside _outNode:\n\n_outNode (str=0xbfffe8cc, obj=0x81dd9a0) at outfuncs.c:1323\n1323 if (obj == NULL)\n(gdb) p *str\n$21 = {\n data = 0x81dde28 \"{ QUERY :command 5 :create oljunk { CREATE :relname oljunk :istemp false \\t:columns ({ COLUMNDEF :colname id :typename { TYPENAME :name bpchar :timezone false :setof false typmod 6 :arrayBounds :arrayBounds <>} :is_not_null true :is_sequence false :raw_default <> :cooked_default <> :constraints ({ oljunk_pkey :type PRIMARY KEY ({ COLUMNDEF :colname id :typename { TYPENAME :name bpchar :timezone false :setof false typmod 6 :arrayBounds :arrayBounds <>} :is_not_null true :is_sequence false :raw_default <> :cooked_default <> :constraints ({ oljunk_pkey :type PRIMARY KEY ({ COLUMNDEF :colname id :typename { TYPENAME :name bpchar :timezone false :setof false typmod 6 :arrayBounds :arrayBounds <>} 
:is_not_null true :is_sequence false :raw_default <> :cooked_default <> :constraints ({ oljunk_pkey :type PRIMARY KEY \", len = 823, maxlen = 1024}\n(gdb) \n\n(gdb) bt\n#0 _outNode (str=0xbfffe8cc, obj=0x81dd9a0) at outfuncs.c:1323\n#1 0x80c3f8a in _outConstraint (str=0xbfffe8cc, node=0x81dd5e8)\n at outfuncs.c:1283\n#2 0x80c476a in _outNode (str=0xbfffe8cc, obj=0x81dd5e8) at outfuncs.c:1528\n#3 0x80c408c in _outNode (str=0xbfffe8cc, obj=0x81dd660) at outfuncs.c:1336\n#4 0x80c2406 in _outColumnDef (str=0xbfffe8cc, node=0x81dd610)\n at outfuncs.c:137\n#5 0x80c43da in _outNode (str=0xbfffe8cc, obj=0x81dd610) at outfuncs.c:1355\n#6 0x80c408c in _outNode (str=0xbfffe8cc, obj=0x81dd9a0) at outfuncs.c:1336\n#7 0x80c3f8a in _outConstraint (str=0xbfffe8cc, node=0x81dd5e8)\n at outfuncs.c:1283\n#8 0x80c476a in _outNode (str=0xbfffe8cc, obj=0x81dd5e8) at outfuncs.c:1528\n#9 0x80c408c in _outNode (str=0xbfffe8cc, obj=0x81dd660) at outfuncs.c:1336\n#10 0x80c2406 in _outColumnDef (str=0xbfffe8cc, node=0x81dd610)\n at outfuncs.c:137\n#11 0x80c43da in _outNode (str=0xbfffe8cc, obj=0x81dd610) at outfuncs.c:1355\n#12 0x80c408c in _outNode (str=0xbfffe8cc, obj=0x81dd9a0) at outfuncs.c:1336\n#13 0x80c3f8a in _outConstraint (str=0xbfffe8cc, node=0x81dd5e8)\n at outfuncs.c:1283\n#14 0x80c476a in _outNode (str=0xbfffe8cc, obj=0x81dd5e8) at outfuncs.c:1528\n#15 0x80c408c in _outNode (str=0xbfffe8cc, obj=0x81dd660) at outfuncs.c:1336\n#16 0x80c2406 in _outColumnDef (str=0xbfffe8cc, node=0x81dd610)\n at outfuncs.c:137\n#17 0x80c43da in _outNode (str=0xbfffe8cc, obj=0x81dd610) at outfuncs.c:1355\n#18 0x80c408c in _outNode (str=0xbfffe8cc, obj=0x81dd970) at outfuncs.c:1336\n#19 0x80c21ac in _outCreateStmt (str=0xbfffe8cc, node=0x81dd7b8)\n at outfuncs.c:74\n#20 0x80c43ba in _outNode (str=0xbfffe8cc, obj=0x81dd7b8) at outfuncs.c:1348\n#21 0x80c2588 in _outQuery (str=0xbfffe8cc, node=0x81dd8e8) at outfuncs.c:185\n#22 0x80c440a in _outNode (str=0xbfffe8cc, obj=0x81dd8e8) at outfuncs.c:1364\n#23 0x80c480d in nodeToString (obj=0x81dd8e8) at outfuncs.c:1570\n#24 0x80ed7a1 in pg_parse_and_plan (\n query_string=0x8184da0 \"create table oljunk (id char(2) primary key, name text)\", typev=0x0, nargs=0, queryListP=0xbfffe97c, dest=Remote, \n aclOverride=0 '\\000') at postgres.c:435\n\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"For the LORD is good; his mercy is everlasting; and \n his truth endureth to all generations.\" \n Psalms 100:5 \n\n\n", "msg_date": "Wed, 12 Jan 2000 12:24:14 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] CREATE TABLE ... PRIMARY KEY kills backend " }, { "msg_contents": "> ./configure --with-tcl --with-mb=UNICODE --with-odbc --enable-locale \n> --with-maxbackends=64 --with-pgport=5431 --program-prefix=pg7.\n\nI didn't see your problem here. My configuratuion is:\n\n\t./configure --with-mb=EUC_JP\n\n> The reason is that regress.sh uses createdb; createdb has a bad test for\n> the encoding value (I have sent a patch separately). So I assume no-one\n> has run regression tests with multibyte-encoding enabled?\n\nThank you for the fix. 
I have just committed your changes.\n\nI already have fixed that too with my private working files, but\nforgot to commit.\n--\nTatsuo Ishii\n", "msg_date": "Wed, 12 Jan 2000 22:14:01 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CREATE TABLE ... PRIMARY KEY kills backend " }, { "msg_contents": "\"Oliver Elphick\" <[email protected]> writes:\n> Segmentation fault (in the end):\n> #0 0x400f068d in _IO_default_xsputn () from /lib/libc.so.6\n> #1 0x400e0126 in vfprintf () from /lib/libc.so.6\n> #2 0x400edf23 in vsnprintf () from /lib/libc.so.6\n> #3 0x80a8e82 in appendStringInfo ()\n> #4 0x80c244d in _outTypeName ()\n> #5 0x80c43da in _outNode ()\n> #6 0x80c2391 in _outColumnDef ()\n> ... \n> #157128 0x80c23f6 in _outColumnDef ()\n> #157129 0x80c43ca in _outNode ()\n> #157130 0x80c407c in _outNode ()\n ^^^^^^\n\nHmm, I take it this is a stack-growth-limit-exceeded failure, although\nyour system isn't reporting it that way.\n\nThis is a known bug that's been there for quite a while: PRIMARY KEY\ngenerates a parse tree with circular references, so if you have parse\ntree dumping turned on, you get an infinite recursion in the dumper\ncode. It needs to be fixed, but hasn't gotten to the top of anyone's\nto-do list (and a clean way to fix it isn't obvious).\n\n> I fixed the createdb bug and ran the regression test. The constraints\n> test failed when trying to create a table with a primary key.\n\nIf you had -d set high enough, it would...\n\n> Every\n> test thereafter failed immediately [pqReadData() -- backend closed the\n> channel unexpectedly]; it appears that the primary key error messes up\n> the postmaster in some way.\n\nThe WAL postmaster takes a few seconds to recover before it will allow\nnew connections (I have a proposal on the table that it should just\ndelay accepting the connections, instead of rejecting 'em, but there\nhasn't been any discussion about that). I usually see that the next\nthree or four regression tests fail after a crash, but if your machine\nis fast enough it might be that they all do.\n\nI have noticed that recent versions of libpq fail to display the\nconnection-refused error message that I assume the postmaster is\nreturning. That useta work ... someone broke it ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 12 Jan 2000 10:31:29 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CREATE TABLE ... 
PRIMARY KEY kills backend " }, { "msg_contents": "[Version: CVS as of yesterday]\nWhen I create a table that inherits from another table that uses foreign\nkeys, I get something like this:\n\n ERROR: cache lookup of attribute 10 in relation 124171 failed\n\nThis is happening in get_attribute_name() of backend/utils/adt/ruleutils.c\n\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"For the LORD is good; his mercy is everlasting; and \n his truth endureth to all generations.\" \n Psalms 100:5 \n\n\n", "msg_date": "Thu, 13 Jan 2000 22:14:39 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Problem with foreign keys and inheritance" }, { "msg_contents": "\"Oliver Elphick\" wrote:\n >When I create a table that inherits from another table that uses foreign\n >keys, I get something like this:\n >\n > ERROR: cache lookup of attribute 10 in relation 124171 failed\n >\n >This is happening in get_attribute_name() of backend/utils/adt/ruleutils.c\n\nI'm still trying to track this down; it seems to be happening when the\nbackend is trying to fetch details of the ancestor class, in\ndeparse_expression().\n\nHowever, I cannot find relation 124171; is there any way to find out\nwhere a relation is, given only its oid?\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"For I know that my redeemer liveth, and that he shall \n stand at the latter day upon the earth\" \n Job 19:25 \n\n\n", "msg_date": "Sat, 15 Jan 2000 21:44:42 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Problem with foreign keys and inheritance " }, { "msg_contents": "Oliver Elphick wrote:\n> \n> However, I cannot find relation 124171; is there any way to find out\n> where a relation is, given only its oid?\n\nThis might give you a pretty good hint...\n\n\tselect * from pg_attribute where attrelid = 124171;\n\nCheers,\nEd Loehr\n", "msg_date": "Sat, 15 Jan 2000 17:26:27 -0600", "msg_from": "Ed Loehr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Problem with foreign keys and inheritance" }, { "msg_contents": "Ed Loehr wrote:\n >Oliver Elphick wrote:\n >> \n >> However, I cannot find relation 124171; is there any way to find out\n >> where a relation is, given only its oid?\n >\n >This might give you a pretty good hint...\n >\n >\tselect * from pg_attribute where attrelid = 124171;\n\nNothing.\n \nI tried looking for the oid in every system table listed by \\dS - no joy :-(\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"For I know that my redeemer liveth, and that he shall \n stand at the latter day upon the earth\" \n Job 19:25 \n\n\n", "msg_date": "Sat, 15 Jan 2000 23:45:59 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Problem with foreign keys and inheritance " }, { "msg_contents": "\"Oliver Elphick\" <[email protected]> writes:\n>>> However, I cannot find relation 124171; is there any way to find out\n>>> where a relation is, given only its oid?\n>> \n>> This might give you a pretty good hint...\n>> \n>> select * from pg_attribute where attrelid = 
124171;\n\nActually, \"select * from pg_class where oid = 124171\" is the canonical\nanswer; if that doesn't produce anything, you have no such table.\n \n> I tried looking for the oid in every system table listed by \\dS - no joy :-(\n\nIs it possible that you dropped the table in question since that try?\nIf you recreated it, it wouldn't have the same OID the second time.\n\nAnother possibility is that the rule dumper is picking up a completely\nwrong number for some reason. I thought that code was pretty solid by\nnow, but it might still have some glitches left.\n\nIf you provided an SQL script that reproduces the problem, more people\nmight be motivated to look for it...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 15 Jan 2000 20:31:56 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Problem with foreign keys and inheritance " }, { "msg_contents": "\"Oliver Elphick\" wrote:\n >[Version: CVS as of yesterday]\n >When I create a table that inherits from another table that uses foreign\n >keys, I get something like this:\n >\n > ERROR: cache lookup of attribute 10 in relation 124171 failed\n >\n >This is happening in get_attribute_name() of backend/utils/adt/ruleutils.c\n\nHere is an SQL script that makes this happen:\n\n========================================================\ncreate database newj with encoding = 'SQL_ASCII';\n\\connect newj\ncreate table person\n(\n id char(10) primary key,\n name text not null,\n address int,\n salutation text default 'Dear Sir',\n envelope text,\n email text,\n www text\n);\n\ncreate table individual\n(\n gender char(1) check (gender = 'M' or gender = 'F'\n or gender is null),\n born datetime check ((born >= '1 Jan 1880'\n and born <= 'today') or born is null),\n surname text,\n forenames text,\n title text,\n old_surname text,\n mobile text,\n ni_no text,\n constraint is_named check (not (surname isnull and forenames isnull))\n)\n inherits (person);\n\ncreate table organisation\n(\n contact char(10) references individual (id) match full,\n structure char(1) check (structure='L' or structure='C'\n or structure='U' or structure='O')\n)\n inherits (person);\n\ncreate table customer\n(\n acs_code char(8),\n acs_addr int, class char(1) default '',\n type char(2),\n area char(2),\n country char(2),\n vat_class char(1),\n vat_number char(12),\n discount numeric(6,3) check (discount >= -50.0::numeric(6,3)\n and discount <= 50.0)::numeric(6,3),\n commission bool default 'f',\n status char(1) default '',\n deliver_to int,\n factor_code text\n)\n inherits (organisation);\n========================================================\n\nTable customer does not get created; instead, I get:\n\n ERROR: cache lookup of attribute 10 in relation <some_oid> failed\n\n-- \n\n\n", "msg_date": "Sun, 16 Jan 2000 15:10:34 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Problem with foreign keys and inheritance " }, { "msg_contents": "\"Oliver Elphick\" <[email protected]> writes:\n>> [Version: CVS as of yesterday]\n>> When I create a table that inherits from another table that uses foreign\n>> keys, I get something like this:\n>> \n>> ERROR: cache lookup of attribute 10 in relation 124171 failed\n\nAh, I see it. It's got nothing to do with foreign keys, just inherited\nconstraints. We're trying to deparse the inherited constraint\nexpressions at a time that the relation-in-process-of-being-created\nisn't yet officially visible. 
So trying to look up its attributes is\nfailing. Need another CommandCounterIncrement() in there to make it\nwork.\n\nThis must have been busted for a good while, I think. I rewrote that\nmodule months ago and probably broke it then. Probably should add\na regress test case that uses inherited constraints...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 16 Jan 2000 13:10:41 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Problem with foreign keys and inheritance " }, { "msg_contents": "[Using current cvs] I have a problem with COPY when called like this:\n\n psql -e bray </tmp/ol\n\nwith one particular file, containing commands and data like this:\n\n========================================\ncopy address from stdin;\n1 Some place, Regal Way Somewhere Oxon AB1 3CF [ Tel: 01367 888888 ] No\n Deliveries Pm Fridays GB \\N \\N \\N \\N \\N \\N \\N \\N \\N \\N \\N \\N\n...\n1000 73 Some Road London SW1 1ZZ GB 44 81 999 9999 \\N\n \\N \\N \\N \\N \\N \\N \\N\n\\.\n-- 1000 records written\n\nselect count(*) from address;\n\ncopy address from stdin;\n1001...\n... and so on up to 3916 records in total, divided into 1000 record chunks\n========================================\n\npsql or libpq seems to choke on the data, so that some spurious error\narises, such as null input into a non-null field. Thereafter, libpq\nseems to get stuck in a COPY state:\n\ncopy address from stdin;\n-- 1000 records written\nselect count(*) from address;\nPQexec: you gotta get out of a COPY state yourself.\n\n(The comment in PQexec says it does this to preserve backwards\ncompatibility, but getting stuck in COPY state is not backwards\ncompatible!)\n\nIf I remove the SQL commands from the input file, go into psql and do:\n\n copy address from '/tmp/ol';\n\nall 3916 records are added correctly. This seems to indicate that\nthe problem is not in the backend.\n\nI found that if I broke the first 1000 records into 2 equal parts, all\nof them were added correctly without error; so I conclude that data\nis being buffered and lost somewhere in psql or libpq, and the problem is\ndependent on the amount of data being copied.\n\nThis began to happen within the last week, but I don't know which\nrecent change is responsible.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"Neither is there salvation in any other; for there is \n none other name under heaven given among men, whereby \n we must be saved.\" Acts 4:12 \n\n\n", "msg_date": "Thu, 20 Jan 2000 15:21:24 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "COPY problems with psql / libpq" }, { "msg_contents": "\"Oliver Elphick\" <[email protected]> writes:\n> I found that if I broke the first 1000 records into 2 equal parts, all\n> of them were added correctly without error; so I conclude that data\n> is being buffered and lost somewhere in psql or libpq, and the problem is\n> dependent on the amount of data being copied.\n\nI have the following note in my (much too long) to-do list:\n\n: psql.c doesn't appear to cope correctly with quoted newlines in COPY data;\n: if one falls just after a buffer boundary, trouble!\n: Does fe-exec.c work either??\n\n(This note is some months old, and may or may not still apply since\nPeter's rework of psql.) It could be that your dataset is hitting this\nproblem or a similar one. 
A buffer-boundary problem would explain why\nthe error seems to be so dataset-specific.\n\n> copy address from stdin;\n> -- 1000 records written\n> select count(*) from address;\n> PQexec: you gotta get out of a COPY state yourself.\n\nIt sure sounds like psql is failing to recognize the trailing \\.\nof the COPY data.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 20 Jan 2000 11:02:31 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] COPY problems with psql / libpq " }, { "msg_contents": "On Thu, Jan 20, 2000 at 11:02:31AM -0500, Tom Lane wrote:\n> \n> It sure sounds like psql is failing to recognize the trailing \\.\n> of the COPY data.\n\nPrecisely what I saw yesterday (cf Subject: pg_dump disaster) - but what\ndoes one do about it?\n\nCheers,\n\nPatrick\n", "msg_date": "Thu, 20 Jan 2000 16:40:25 +0000", "msg_from": "Patrick Welche <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] COPY problems with psql / libpq" }, { "msg_contents": "* Patrick Welche <[email protected]> [000120 09:10] wrote:\n> On Thu, Jan 20, 2000 at 11:02:31AM -0500, Tom Lane wrote:\n> > \n> > It sure sounds like psql is failing to recognize the trailing \\.\n> > of the COPY data.\n> \n> Precisely what I saw yesterday (cf Subject: pg_dump disaster) - but what\n> does one do about it?\n\nIs this with a recent snapshot or 6.5.3 using libpq? \nEither way, you should check the contents of the send buffer, please let\nme know if there is data queued in it. You can include the 'internal'\nheader for libpq (libpq-int.h?) to get at the send buffer.\n\n-Alfred\n", "msg_date": "Thu, 20 Jan 2000 09:22:16 -0800", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] COPY problems with psql / libpq" }, { "msg_contents": "On Thu, Jan 20, 2000 at 09:22:16AM -0800, Alfred Perlstein wrote:\n> * Patrick Welche <[email protected]> [000120 09:10] wrote:\n> > On Thu, Jan 20, 2000 at 11:02:31AM -0500, Tom Lane wrote:\n> > > \n> > > It sure sounds like psql is failing to recognize the trailing \\.\n> > > of the COPY data.\n> > \n> > Precisely what I saw yesterday (cf Subject: pg_dump disaster) - but what\n> > does one do about it?\n> \n> Is this with a recent snapshot or 6.5.3 using libpq? \n\nFor me, it's using yesterday's cvs'd source - but I obviously can't speak\nfor Oliver.\n\n> Either way, you should check the contents of the send buffer, please let\n> me know if there is data queued in it. You can include the 'internal'\n> header for libpq (libpq-int.h?) to get at the send buffer.\n\nThat will take a while. In the meantime, just pg_dumpall something and try\nto read the output back in. I do have ^M's in some of the text columns if\nthat matters.\n\nCheers,\n\nPatrick\n", "msg_date": "Thu, 20 Jan 2000 17:26:59 +0000", "msg_from": "Patrick Welche <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] COPY problems with psql / libpq" }, { "msg_contents": "Patrick Welche wrote:\n >On Thu, Jan 20, 2000 at 09:22:16AM -0800, Alfred Perlstein wrote:\n >> * Patrick Welche <[email protected]> [000120 09:10] wrote:\n >> > On Thu, Jan 20, 2000 at 11:02:31AM -0500, Tom Lane wrote:\n >> > > \n >> > > It sure sounds like psql is failing to recognize the trailing \\.\n >> > > of the COPY data.\n >> > \n >> > Precisely what I saw yesterday (cf Subject: pg_dump disaster) - but what\n >> > does one do about it?\n >> \n >> Is this with a recent snapshot or 6.5.3 using libpq? 
\n >\n >For me, it's using yesterday's cvs'd source - but I obviously can't speak\n >for Oliver.\n \nThis morning's.\n\n >> Either way, you should check the contents of the send buffer, please let\n >> me know if there is data queued in it. You can include the 'internal'\n >> header for libpq (libpq-int.h?) to get at the send buffer.\n >\n >That will take a while. In the meantime, just pg_dumpall something and try\n >to read the output back in. I do have ^M's in some of the text columns if\n >that matters.\n\nI can't do that, because pg_dump seems to be broken if there are tables\nwith foreign key constraints. (See a separate message.)\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"Neither is there salvation in any other; for there is \n none other name under heaven given among men, whereby \n we must be saved.\" Acts 4:12 \n\n\n", "msg_date": "Thu, 20 Jan 2000 18:44:51 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] COPY problems with psql / libpq " }, { "msg_contents": "Tom Lane wrote:\n >\"Oliver Elphick\" <[email protected]> writes:\n >> I found that if I broke the first 1000 records into 2 equal parts, all\n >> of them were added correctly without error; so I conclude that data\n >> is being buffered and lost somewhere in psql or libpq, and the problem is\n >> dependent on the amount of data being copied.\n >\n >I have the following note in my (much too long) to-do list:\n >\n >: psql.c doesn't appear to cope correctly with quoted newlines in COPY data;\n >: if one falls just after a buffer boundary, trouble!\n >: Does fe-exec.c work either??\n \nNew-lines are not the problem in this particular case, since the data\ndoes not contain any.\n\n \n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"Neither is there salvation in any other; for there is \n none other name under heaven given among men, whereby \n we must be saved.\" Acts 4:12 \n\n\n", "msg_date": "Thu, 20 Jan 2000 18:55:40 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] COPY problems with psql / libpq " }, { "msg_contents": "On 2000-01-20, Tom Lane mentioned:\n\n> \"Oliver Elphick\" <[email protected]> writes:\n> > I found that if I broke the first 1000 records into 2 equal parts, all\n> > of them were added correctly without error; so I conclude that data\n> > is being buffered and lost somewhere in psql or libpq, and the problem is\n> > dependent on the amount of data being copied.\n\nThe buffering is line-based though and the default buffer is 8192. If\nyour line is longer than that you're in all sorts of other troubles.\n\n> \n> I have the following note in my (much too long) to-do list:\n> \n> : psql.c doesn't appear to cope correctly with quoted newlines in COPY data;\n\nWhat's a quoted newline?\na) \"<newline>\"\nb) \"\\n\"\nc) \\<newline>\n\nEarlier you also mentioned to me something in general about control\ncharacters messing up COPY. Could you give me some details on that so I\ncan look into it?\n\n> : if one falls just after a buffer boundary, trouble!\n> : Does fe-exec.c work either??\n> \n> (This note is some months old, and may or may not still apply since\n> Peter's rework of psql.) 
It could be that your dataset is hitting this\n\nI haven't touched that code.\n\n> problem or a similar one. A buffer-boundary problem would explain why\n> the error seems to be so dataset-specific.\n\nAt first I thunk a PQExpBuffer based solution would be the answer, but\nas I said above, if you overflow the buffer, you're in trouble anyway.\n\n> \n> > copy address from stdin;\n> > -- 1000 records written\n> > select count(*) from address;\n> > PQexec: you gotta get out of a COPY state yourself.\n> \n> It sure sounds like psql is failing to recognize the trailing \\.\n> of the COPY data.\n\nThe last call in the function psql/copy.c:handleCopyIn is\n\n\treturn !PQendcopy(conn);\n\nand there is no way it can exit earlier. Also the connection seems to be\ngood, since that's checked right after it returns. The calls to PQputvalue\nare not checked for return values, so problems might get missed there, but\nthat would in any case still point to a problem elsewhere. Gotta pass the\nbuck to libpq ...\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n\n", "msg_date": "Thu, 20 Jan 2000 22:54:56 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] COPY problems with psql / libpq " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n>> : psql.c doesn't appear to cope correctly with quoted newlines in COPY data;\n\n> What's a quoted newline?\n> a) \"<newline>\"\n> b) \"\\n\"\n> c) \\<newline>\n\n(c). That's how a newline appearing in the data is supposed to be\nrepresented. IIRC, I was worried that if the \\ falls at the end of\na bufferload and the newline at the start of the next, psql and/or\nlibpq would fail to recognize the pattern; if so, they'd probably\nthink the newline is a record boundary.\n\nPatrick could be falling victim to this, but Oliver sez he has no\nnewlines in his data, so there's at least one other problem.\n\n> that would in any case still point to a problem elsewhere. Gotta pass the\n> buck to libpq ...\n\nCould be. I think Alfred is on the hook here...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 20 Jan 2000 17:35:45 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] COPY problems with psql / libpq " }, { "msg_contents": "I would like to work on improving implementation of inheritance,\nespecially with regard to referential integrity. I suspect there are\na number of issues that may be related and will need to be done together.\nIn addition, this will be my first attempt to do anything serious in\nthe PostgreSQL code itself, so I would like to get some hints as\nto what I haven't even thought about!\n\nFirst, I would like to change the definition of the foreign key\nconstraints to allow the inheritance star to follow a table name.\nThis would mean that, for RI purposes, the named table would be\naggregated with its descendants. So \"REFERENCES tbl\" would mean that\nthe foreign key must exist in tbl, but \"REFERENCES tbl*\" would allow it\nto exist either in tbl or in any of tbl's descendants.\n\nImplications: where * is used, dropping a descendant table is OK, so\nlong as the parent continues to exist. ON DELETE actions would apply\nto all the relations in the table to be dropped; to reduce complexity,\nthis should be broken down into:\n `DELETE FROM descendant; DROP TABLE descendant'\nand the whole should be treated as atomic. 
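To make the proposal concrete, here is a sketch of how I imagine it being\nused (the table names are invented purely for illustration):\n\n  create table account (id char(10) primary key, name text);\n  create table special_account (rating int) inherits (account);\n\n  -- the foreign key may be satisfied by account or any of its descendants\n  create table invoice\n  (\n    inv_no  int primary key,\n    acct_id char(10) references account* on delete cascade\n  );\n\n  -- removes special_account's rows (firing the ON DELETE action in\n  -- referencing tables such as invoice) and then drops the relation,\n  -- the whole thing as one operation\n  drop table special_account;\n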
If any one relation could\nnot be deleted, the whole operation would fail.\n\nUse of ON DELETE or ON UPDATE implies there must be an index on the\nreferring column, to enable checking or deletion to be done speedily.\nThis doesn't seem to happen at the moment. If the reference is to\nan inheritance group, it would seem to be appropriate that all the\ntables in the group should use the same index. Similarly, where\na unique or primary key constraint is inherited, it may be desirable\nto use a single index to manage the constraint. The implication of\nthis would be that there must be a check when a table is dropped\nto make sure that a grouped index is not dropped until the last\ntable in the group is dropped.\n\nIs this feasible, or would it require too many changes elsewhere?\n\nAnother item I would like to get fixed is to make sure that all\nconstraints are inherited when a descendant table is created; this\nis a current TODO item. It will also be necessary to ensure that\nadded constraints get inherited, when ALTER TABLE ... ADD/DROP\nCONSTRAINT gets implemented.\n\n====== Design proposal =======\n\nI think that the implications of inheritance have never been fully\nexplored and I would like to establish the framework in which future\nwork that involves inheritance will be done.\n\nIt seems to me that declaring a table to inherit from another, and\nenabling both to be read together by the table* syntax, together\nimply certain things about an inheritance group:\n\n1. All tables in the group must possess all the columns of their\nancestor, and all those columns must be of the same type.\n\n2. Some constraints at least must be shared - primary key is the most\nobvious example; I think that _all_ constraints on inherited columns\nshould be shared. It is probably not practicable to force table\nconstraints to be shared upwards.\n\n3. There seems to be no need to enforce similar restrictions on\nGRANT. In fact it is quite likely that different permissions could\napply to different tables in the hierarchy.\n\n4. Dropping a table implies dropping all its descendants.\n\n==============================\n\nI would like to consider the implications of this proposal in the light\nof the ALTER TABLE commands that have recently been added.\n\nThe grammar for ALTER TABLE allows either `ALTER TABLE table ...' or\n`ALTER TABLE table* ...'. I would like to suggest that an alteration\nto a parent table must necessarily involve all its descendants and\nthat alterations to inherited columns must be done in the appropriate\nparent. So, given this hierarchy of tables:\n\n t1 (c1 char(2) primary key,\n c2 text)\n\n t2 (c3 int not null\n c4 timestamp default current_timestamp) inherits (t1)\n\n t3 (c5 text not null) inherits (t2)\n\nadding a column to t1, means the same column must be added to t2 and t3\nand must appear before any columns originating in t2; columns c1 to c4\ncannot be dropped from table t3 unless they are also dropped from the\nparents. Alterations to c2 must be done in t1, and alterations to c4\nmust be done in t2. 
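For instance, just to illustrate the intended behaviour (not what the code\ndoes today):\n\n  alter table t1 add column c6 text;\n      -- under the proposal this adds c6 to t1, t2 and t3 alike\n\n  alter table t3 rename column c2 to description;\n      -- would be rejected: c2 is inherited, so the change belongs in t1\n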
Any table constraint applied to t1 would automatically\nbe inherited by t2 and t3, a new constraint added to t2 would be\ninherited by t3 but would not affect t1.\n\nAttempts to use ALTER TABLE to bypass these restrictions should be\ndisallowed.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"If anyone has material possessions and sees his\n brother in need but has no pity on him, how can the\n love of God be in him?\"\n I John 3:17 \n\n\n", "msg_date": "Mon, 24 Jan 2000 21:52:56 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Inheritance, referential integrity and other constraints" }, { "msg_contents": "\nAs long as you're working on this area you could fix the problem where\nif you do ALTER table* ADD COLUMN ... pg_dump no longer works because\nthe column orders have changed in different inherited tables.\n\nOliver Elphick wrote:\n> \n> I would like to work on improving implementation of inheritance,\n> especially with regard to referential integrity. I suspect there are\n> a number of issues that may be related and will need to be done together.\n> In addition, this will be my first attempt to do anything serious in\n> the PostgreSQL code itself, so I would like to get some hints as\n> to what I haven't even thought about!\n> \n> First, I would like to change the definition of the foreign key\n> constraints to allow the inheritance star to follow a table name.\n> This would mean that, for RI purposes, the named table would be\n> aggregated with its descendants. So \"REFERENCES tbl\" would mean that\n> the foreign key must exist in tbl, but \"REFERENCES tbl*\" would allow it\n> to exist either in tbl or in any of tbl's descendants.\n> \n> Implications: where * is used, dropping a descendant table is OK, so\n> long as the parent continues to exist. ON DELETE actions would apply\n> to all the relations in the table to be dropped; to reduce complexity,\n> this should be broken down into:\n> `DELETE FROM descendant; DROP TABLE descendant'\n> and the whole should be treated as atomic. If any one relation could\n> not be deleted, the whole operation would fail.\n> \n> Use of ON DELETE or ON UPDATE implies there must be an index on the\n> referring column, to enable checking or deletion to be done speedily.\n> This doesn't seem to happen at the moment. If the reference is to\n> an inheritance group, it would seem to be appropriate that all the\n> tables in the group should use the same index. Similarly, where\n> a unique or primary key constraint is inherited, it may be desirable\n> to use a single index to manage the constraint. The implication of\n> this would be that there must be a check when a table is dropped\n> to make sure that a grouped index is not dropped until the last\n> table in the group is dropped.\n> \n> Is this feasible, or would it require too many changes elsewhere?\n> \n> Another item I would like to get fixed is to make sure that all\n> constraints are inherited when a descendant table is created; this\n> is a current TODO item. It will also be necessary to ensure that\n> added constraints get inherited, when ALTER TABLE ... 
ADD/DROP\n> CONSTRAINT gets implemented.\n> \n> ====== Design proposal =======\n> \n> I think that the implications of inheritance have never been fully\n> explored and I would like to establish the framework in which future\n> work that involves inheritance will be done.\n> \n> It seems to me that declaring a table to inherit from another, and\n> enabling both to be read together by the table* syntax, together\n> imply certain things about an inheritance group:\n> \n> 1. All tables in the group must possess all the columns of their\n> ancestor, and all those columns must be of the same type.\n> \n> 2. Some constraints at least must be shared - primary key is the most\n> obvious example; I think that _all_ constraints on inherited columns\n> should be shared. It is probably not practicable to force table\n> constraints to be shared upwards.\n> \n> 3. There seems to be no need to enforce similar restrictions on\n> GRANT. In fact it is quite likely that different permissions could\n> apply to different tables in the hierarchy.\n> \n> 4. Dropping a table implies dropping all its descendants.\n> \n> ==============================\n> \n> I would like to consider the implications of this proposal in the light\n> of the ALTER TABLE commands that have recently been added.\n> \n> The grammar for ALTER TABLE allows either `ALTER TABLE table ...' or\n> `ALTER TABLE table* ...'. I would like to suggest that an alteration\n> to a parent table must necessarily involve all its descendants and\n> that alterations to inherited columns must be done in the appropriate\n> parent. So, given this hierarchy of tables:\n> \n> t1 (c1 char(2) primary key,\n> c2 text)\n> \n> t2 (c3 int not null\n> c4 timestamp default current_timestamp) inherits (t1)\n> \n> t3 (c5 text not null) inherits (t2)\n> \n> adding a column to t1, means the same column must be added to t2 and t3\n> and must appear before any columns originating in t2; columns c1 to c4\n> cannot be dropped from table t3 unless they are also dropped from the\n> parents. Alterations to c2 must be done in t1, and alterations to c4\n> must be done in t2. Any table constraint applied to t1 would automatically\n> be inherited by t2 and t3, a new constraint added to t2 would be\n> inherited by t3 but would not affect t1.\n> \n> Attempts to use ALTER TABLE to bypass these restrictions should be\n> disallowed.\n> \n> --\n> Oliver Elphick [email protected]\n> Isle of Wight http://www.lfix.co.uk/oliver\n> PGP key from public servers; key ID 32B8FAA1\n> ========================================\n> \"If anyone has material possessions and sees his\n> brother in need but has no pity on him, how can the\n> love of God be in him?\"\n> I John 3:17\n> \n> ************\n", "msg_date": "Tue, 25 Jan 2000 10:33:07 +1100", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inheritance,\n referential integrity and other constraints" }, { "msg_contents": "Chris Bitmead wrote:\n >As long as you're working on this area you could fix the problem where\n >if you do ALTER table* ADD COLUMN ... pg_dump no longer works because\n >the column orders have changed in different inherited tables.\n\nIt seems that this might be quite a problem; I would not like to have\nto do a physical insert into every row in a huge table. Would it be\nfeasible to add a column order attribute to pg_attribute for tables\naltered in this way? 
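Something like a new field in pg_attribute -- call it attlognum, the name\nis invented -- giving the logical position; pg_dump and other clients would\nthen order columns with roughly\n\n  select attname,\n         case when attlognum is null then attnum else attlognum end as pos\n  from pg_attribute\n  where attrelid = (select oid from pg_class where relname = 'sometable')\n    and attnum > 0\n  order by pos;\n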
A null entry in that would indicate the table was unaltered from its creation.\n\nPerhaps this could be combined with the idea of column hiding: a zero\ncolumn number would mean it was hidden.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"My little children, let us not love in word, neither \n in tongue; but in deed and in truth.\" \n I John 3:18 \n\n\n", "msg_date": "Tue, 25 Jan 2000 08:06:34 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Inheritance, referential integrity and other\n\tconstraints" }, { "msg_contents": "I took up the issue of\n\n* Alter TABLE ADD COLUMN does not honor DEFAULT, add CONSTRAINT\n\nand chances are excellent that this will get all done in the next day or\nthree.\n\nFirst a syntactical issue. We currently allow the following statement,\nalthough it is not legal SQL:\n\ncreate table test (\n\ta int4,\n\tb int4 check (a>b)\n);\n\nIt's not legal because the column constraint for \"b\" may only reference\ncolumn \"b\". Instead you could legally write\n\ncreate table test (\n\ta int4,\n\tb int4,\n\tcheck (a>b)\n);\n\nbecause the check constraint is now a table constraint. Big deal. Now the\nproblem is that because I reuse the same syntactical elements, the\nfollowing will work:\n\ncreate table test (a int4);\nalter table test add column b int4 check (a>b);\n\nNo harm done, but how about:\n\ncreate table test (a int4, b int4);\nalter table test add column c text check (a>b);\n\nI guess this would be sort of equivalent to saying\n\nalter table test add column c text;\nalter table test add constraint check (a>b);\n\nSo, I guess what I'm saying is whether you want to allow the mentioned\nweirdness or not.\n\n\nSecondly, an internal question. If I use SearchSysCacheTuple() on a\ncondition with several potential matches, what is the defined behaviour?\nCan I call it again to get the next tuple?\n\n\nThirdly, about TODO item\n\n* ALTER TABLE ADD COLUMN to inherited table put column in wrong place\n\nActually, according to what I would expect out of the blue, it puts it\ninto the *right* place. Even good ol' SQL, although they naturally do not\nknow about inheritance, seems to agree:\n\n\"... the degree of [the table] is increased by 1 and the ordinal position\nof that [new] column is equal to the new degree of [the table] ...\"\n(11.11)\n\nWhat that says to me is that if you add a column to a table (during create\nor alter) then the new column gets placed after all the others. Thus,\nwe're in compliance without even knowing it. \n\nOr maybe look at it this way:\ncreate table test1 (a int4);\ncreate table test2 (b int4) inherits (test1);\n ^ col #1 ^ col #2\nalter table test1* add column c int4;\n ^ col #3\n\nEverything has its order and it's not like the inheritance as such is\nbroken.\n\nSurely, trying to stick the column in between is going to be three times\nas much work as dropping columns would be, whichever way you do it. (And\nmaking attributes invisible ain't gonna help you. 
;)\n\nWhat do you all say?\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Wed, 26 Jan 2000 00:15:35 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Column ADDing issues" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Or maybe look at it this way:\n> create table test1 (a int4);\n> create table test2 (b int4) inherits (test1);\n> ^ col #1 ^ col #2\n> alter table test1* add column c int4;\n> ^ col #3\n\n> Everything has its order and it's not like the inheritance as such is\n> broken.\n\nYes, a whole bunch of stuff is broken after this happens. Go back and\nconsult the archives --- or maybe Chris Bitmead will fill you in; he's\ngot plenty of scars to show for this set of problems. (All I recall\noffhand is that pg_dump and reload can fail to generate a working\ndatabase.) The bottom line is that it would be a lot nicer if column c\nhad the same column position in both the parent table and the child\ntable(s).\n\nI suggest you be very cautious about messing with ALTER TABLE until you\nunderstand why inheritance makes it such a headache ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 25 Jan 2000 19:10:04 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Column ADDing issues " }, { "msg_contents": "Peter Eisentraut wrote:\n >Thirdly, about TODO item\n >\n >* ALTER TABLE ADD COLUMN to inherited table put column in wrong place\n >\n >Actually, according to what I would expect out of the blue, it puts it\n >into the *right* place. Even good ol' SQL, although they naturally do not\n >know about inheritance, seems to agree:\n >\n >\"... the degree of [the table] is increased by 1 and the ordinal position\n >of that [new] column is equal to the new degree of [the table] ...\"\n >(11.11)\n >\n >What that says to me is that if you add a column to a table (during create\n >or alter) then the new column gets placed after all the others. Thus,\n >we're in compliance without even knowing it. \n >\n >Or maybe look at it this way:\n >create table test1 (a int4);\n >create table test2 (b int4) inherits (test1);\n > ^ col #1 ^ col #2\n >alter table test1* add column c int4;\n > ^ col #3\n >\n >Everything has its order and it's not like the inheritance as such is\n >broken.\n >\n >Surely, trying to stick the column in between is going to be three times\n >as much work as dropping columns would be, whichever way you do it. (And\n >making attributes invisible ain't gonna help you. ;)\n\nIt is:\ncreate table test1 (a int4);\ncreate table test2 (b int4) inherits (test1);\n ^ col #2 ^ col #1\nalter table test1* add column c int4;\n ^ col #3 but needs to be #2, since it _is_\n #2 of test1\n\nAs far as inheritance goes, all the descendants are treated as one table,\nincluding those created on a different branch from test2. All of them\nhave to return the right columns for a single query; the two options for\ndealing with this seem to be logical column numbering, or rewriting the\ndescendant tables. 
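To spell out why it matters: with the tables above, a query such as\n\n  select a, c from test1*\n\nhas to fetch attnums 1 and 2 from test1 but attnums 1 and 3 from test2,\nso either each relation would need to carry a mapping from logical to\nphysical column numbers, or test2 would have to be rewritten so that c\nphysically becomes its column 2.\n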
(But I haven't spent enough time in the code to be\nsure of that.)\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"My little children, let us not love in word, neither \n in tongue; but in deed and in truth.\" \n I John 3:18 \n\n\n", "msg_date": "Wed, 26 Jan 2000 00:45:20 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Column ADDing issues " }, { "msg_contents": "On 2000-01-25, Chris Bitmead mentioned:\n\n> As long as you're working on this area you could fix the problem where\n> if you do ALTER table* ADD COLUMN ... pg_dump no longer works because\n> the column orders have changed in different inherited tables.\n\nThis should be fixed in pg_dump then. As I see it, ALTER table* ADD COLUMN\ndoes exactly the right thing.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n", "msg_date": "Wed, 26 Jan 2000 19:34:29 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inheritance,\n referential integrity and other constraints" }, { "msg_contents": "On 2000-01-24, Oliver Elphick mentioned:\n\n> I would like to work on improving implementation of inheritance,\n> especially with regard to referential integrity. I suspect there are\n> a number of issues that may be related and will need to be done together.\n\nWhat I really consider a problem, and it would be great if you could\ntackle that, is that there is no real standard that all of this does or\neven could follow. For example, I wrote the other day that depending on\nwhich way you see it, the behaviour of alter table x* add colum might be\nconsidered right. Also I just looked into item 'Disallow inherited columns\nwith the same name as new columns' and it seems that someone actually made\nprovisions for this to be allowed, meaning that\ncreate table test1 (x int);\ncreate table test2 (x int) inherits (test1);\nwould result in test2 looking exactly like test1. No one knows what the\nmotivation was. (I removed it anyway.)\n\n> It will also be necessary to ensure that\n> added constraints get inherited, when ALTER TABLE ... ADD/DROP\n> CONSTRAINT gets implemented.\n\nI assume the semantics of ADD CONSTRAINT will be exactly the same as of\nall the other alter table commands, in that if you specify a star then it\ngets inherited, if not then not. But the problem with ADD CONSTRAINT is of\ncourse that the entire table needs to be verified against the constraint\nbefore allowing it to be added. This is fine if you do ADD CONSTRAINT\nUNIQUE (a, b), because the index will take care of it, but it's trickier\nif you add a trigger based constraint. The former might get into 7.0 if I\nhurry, the latter most likely not.\n\nWhat needs discussion is whether indexes should be shared between\ninherited tables, or whether each new descendant table needs a new\none. Not sure if this just made sense, though.\n\n\n> I think that the implications of inheritance have never been fully\n> explored and I would like to establish the framework in which future\n> work that involves inheritance will be done.\n\nPrecisely.\n\n \n> It seems to me that declaring a table to inherit from another, and\n> enabling both to be read together by the table* syntax, together\n> imply certain things about an inheritance group:\n> \n> 1. 
All tables in the group must possess all the columns of their\n> ancestor, and all those columns must be of the same type.\n\nIsn't it this way now?\n\n> \n> 2. Some constraints at least must be shared - primary key is the most\n> obvious example; I think that _all_ constraints on inherited columns\n> should be shared. It is probably not practicable to force table\n> constraints to be shared upwards.\n\nNot sure about this one. See the ranting about the shared indexes\nabove. Might be a great pain.\n\n> \n> 4. Dropping a table implies dropping all its descendants.\n\nActually what it does now is to refuse dropping when descendants\nexist. What seems to be the proper solution to this is to implement the\nproper DROP TABLE SQL syntax by adding a RESTRICT/CASCADE at the\nend. Restrict refuses dropping if anything (descendants, views,\netc.) references the table, cascade drops everything else as\nwell. Implementing this could be your first step to glory ;) since it\nseems it's more a matter of man hours than conceptual difficulty. Then\nagain, I could be wrong.\n\n\n> The grammar for ALTER TABLE allows either `ALTER TABLE table ...' or\n> `ALTER TABLE table* ...'. I would like to suggest that an alteration\n> to a parent table must necessarily involve all its descendants and\n> that alterations to inherited columns must be done in the appropriate\n> parent. So, given this hierarchy of tables:\n\nIt's been a while since I looked into C++, but when you alter a descendant\n(such as making a formerly public method private) you surely do not affect\nthe parents. The other way around I think the choice of star-or-not should\nbe given to the user. But this is again one of the issues that have no\npoint of reference, so I'm glad you bring it up for discussion.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Wed, 26 Jan 2000 19:35:14 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inheritance,\n referential integrity and other constraints" }, { "msg_contents": "At 07:35 PM 1/26/00 +0100, Peter Eisentraut wrote:\n>On 2000-01-24, Oliver Elphick mentioned:\n>\n>> I would like to work on improving implementation of inheritance,\n>> especially with regard to referential integrity. I suspect there are\n>> a number of issues that may be related and will need to be done together.\n>\n>What I really consider a problem, and it would be great if you could\n>tackle that, is that there is no real standard that all of this does or\n>even could follow. 
For example, I wrote the other day that depending on\n>which way you see it, the behaviour of alter table x* add colum might be\n>considered right.\n\nAre you basing this on your earlier comment:\n\n\"\nOr maybe look at it this way:\ncreate table test1 (a int4);\ncreate table test2 (b int4) inherits (test1);\n ^ col #1 ^ col #2\nalter table test1* add column c int4;\n ^ col #3\n\n\"?\n\nIf so, I thought Oliver pointed out that you had the numbering wrong.\nI thought so, too...\n\nWhich is right?\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Wed, 26 Jan 2000 11:05:39 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inheritance,\n referential integrity and other constraints" }, { "msg_contents": "On Wed, Jan 26, 2000 at 07:34:29PM +0100, Peter Eisentraut wrote:\n> On 2000-01-25, Chris Bitmead mentioned:\n> \n> > As long as you're working on this area you could fix the problem where\n> > if you do ALTER table* ADD COLUMN ... pg_dump no longer works because\n> > the column orders have changed in different inherited tables.\n> \n> This should be fixed in pg_dump then. As I see it, ALTER table* ADD COLUMN\n> does exactly the right thing.\n\nNo, the problem is that right now, the order of columns in a child table\ndepends on the exact history of how all the columns got into each table.\nIdeally, we want to be able to describe all the tables without reference\nto history, only to (meta)content. The exact order of columns in a table\nreally isn't much use to users, in any case (even though it is visible,\ntechnically. This had got to be a backward compatability feature of\nthe original SQL, isn't it?)\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n", "msg_date": "Wed, 26 Jan 2000 14:21:16 -0600", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inheritance,\n referential integrity and other constraints" }, { "msg_contents": "Peter Eisentraut wrote:\n >On 2000-01-24, Oliver Elphick mentioned:\n >\n >> I would like to work on improving implementation of inheritance,\n >> especially with regard to referential integrity. I suspect there are\n >> a number of issues that may be related and will need to be done together.\n >\n >What I really consider a problem, and it would be great if you could\n >tackle that, is that there is no real standard that all of this does or\n >even could follow.\n\nThat is the point of this thread: to settle the design.\n\n > For example, I wrote the other day that depending on\n >which way you see it, the behaviour of alter table x* add colum might be\n >considered right. Also I just looked into item 'Disallow inherited columns\n >with the same name as new columns' and it seems that someone actually made\n >provisions for this to be allowed, meaning that\n >create table test1 (x int);\n >create table test2 (x int) inherits (test1);\n >would result in test2 looking exactly like test1. No one knows what the\n >motivation was. (I removed it anyway.)\n\nThat's a relief! Unless you have actually removed the ability to do\nrepeated inheritance?\n\n >> It will also be necessary to ensure that\n >> added constraints get inherited, when ALTER TABLE ... 
ADD/DROP\n >> CONSTRAINT gets implemented.\n >\n >I assume the semantics of ADD CONSTRAINT will be exactly the same as of\n >all the other alter table commands, in that if you specify a star then it\n >gets inherited, if not then not. \n\nThis is the point of policy that needs deciding. The fact that we can\nsay `SELECT ... FROM table*' implies to me that inheritance is a\npermanent relationship between tables. That is why we cannot DROP an\ninherited column. The question is, how close is the relationship?\nWe have to decide what model of inheritance we are using, because a\nlot of design will flow automatically from that decision.\n\nWe can choose to follow a language model, but we must then decide which\nlanguage - Smalltalk, Eiffel, C++? The fact that multiple inheritance\nis possible seems to exclude Smalltalk; C++ is a conceptual mess (OK,\nyou can guess I'm an Eiffel fan!). As a matter of fact, I don't think\nthat language models are very useful to PostgreSQL - an RDBMS with\ninheritance is a unique animal! I think we must devise a consistent and\nuseful scheme and not trouble overmuch about fitting it into a theoretical\nframework, not least because the amount of work involved in a pure \nimplementation is likely to be horrendous. At present, PostgreSQL\nsupports multiple, repeated inheritance in reading tables, and partially\nsupports it in creating and altering them. This scheme needs tidying\nand completing.\n\nThe question to answer, then, is what inheritance is useful for; those are\nthe uses to be catered for. I see its main use as being in the division\nof similar data by kind. I have used it like this:\n\n /---- customer\n /------- organisation <\n / \\---- supplier\n person <\n \\ /---- staff\n \\------- individual <\n \\---- contact\n\nthe idea being that each sub-level gives a more specialised view.\n\nI want to be able to say `REFERENCES person*' to refer to the whole\ngroup, or `REFERENCES organisation*' for a sub-group, or\n`REFERENCES customer' for a single table. Each is a valid use according\nto how much specialisation is required in the individual case.\n(The data is only at the lowest level descendant tables of this group\nIn Eiffel terms, person, organisation and individual would be deferred\nclasses, that cannot be used directly but must have at least one\ndescendant.)\n\nWith this kind of scheme, some constraints can, perhaps, be allowed to\ndiffer, but I feel that PRIMARY KEY and REFERENCES, at the very least,\nshould be inherited. UNIQUE should probably be inherited, and CHECK\nconstraints, DEFAULT and NOT NULL, can quite likely be allowed to differ.\nWhat do you all think about this?\n\nIf we do allow differences, I think that they should not depend on the\nuser's remembering to add * to the table name. I think that an\nalteration to a parent table alone should require a UNINHERITED keyword\nto make the intention explicit. (After all, the user may not realise\nthat the table is a parent; I think the RDBMS should protect him against\nobvious traps.)\n\n > But the problem with ADD CONSTRAINT is of\n >course that the entire table needs to be verified against the constraint\n >before allowing it to be added. This is fine if you do ADD CONSTRAINT\n >UNIQUE (a, b), because the index will take care of it, but it's trickier\n >if you add a trigger based constraint. 
The former might get into 7.0 if I\n >hurry, the latter most likely not.\n >\n >What needs discussion is whether indexes should be shared between\n >inherited tables, or whether each new descendant table needs a new\n >one. Not sure if this just made sense, though.\n \nPerhaps we need a concept of grouped indexes to go with the grouped\ntables that inheritance creates. Clearly this is one of the issues\nthat the original designers didn't think through. If we consider the\nuses of an index, we can see that it is used first for fast access to\ntuples and second to enforce uniqueness. If (as I am suggesting)\nthe constraints that require an index (PRIMARY KEY, REFERENCES and UNIQUE)\nare forced to be group-wide, it will follow that the corresponding\nindexes should also be group-wide. On the other hand, a user-created\nindex for fast access could apply to a single table in the group.\n\n >> I think that the implications of inheritance have never been fully\n >> explored and I would like to establish the framework in which future\n >> work that involves inheritance will be done.\n >\n >Precisely.\n >\n > \n >> It seems to me that declaring a table to inherit from another, and\n >> enabling both to be read together by the table* syntax, together\n >> imply certain things about an inheritance group:\n >> \n >> 1. All tables in the group must possess all the columns of their\n >> ancestor, and all those columns must be of the same type.\n >\n >Isn't it this way now?\n\nNot if you allow columns to be dropped from or added to an individual\ntable, after it has become a parent, without enforcing the same change\non its descendants. I am suggesting that this must be disallowed. I am\nalso suggesting that adding columns to a parent requires either logical\ncolumn numbering or else physical insertion into the descendants in the\ncorrect sequence.\n \n >> 2. Some constraints at least must be shared - primary key is the most\n >> obvious example; I think that _all_ constraints on inherited columns\n >> should be shared. It is probably not practicable to force table\n >> constraints to be shared upwards.\n >\n >Not sure about this one. See the ranting about the shared indexes\n >above. Might be a great pain.\n\nI fear it will be; but I suspect it is necessary in at least some cases\n(see below).\n\n >> 4. Dropping a table implies dropping all its descendants.\n >\n >Actually what it does now is to refuse dropping when descendants\n >exist. What seems to be the proper solution to this is to implement the\n >proper DROP TABLE SQL syntax by adding a RESTRICT/CASCADE at the\n >end. Restrict refuses dropping if anything (descendants, views,\n >etc.) references the table, cascade drops everything else as\n >well. Implementing this could be your first step to glory ;) since it\n >seems it's more a matter of man hours than conceptual difficulty. Then\n >again, I could be wrong.\n\nIn this case, why not simply require `DROP TABLE table*', if table has\ndescendants? I'm not at all sure that allowing a CASCADE option for DROP\nTABLE is a good idea; someone could end up wiping out most of the database\nwith an ill-considered command; and RESTRICT should be the normal case.\n\n >> The grammar for ALTER TABLE allows either `ALTER TABLE table ...' or\n >> `ALTER TABLE table* ...'. I would like to suggest that an alteration\n >> to a parent table must necessarily involve all its descendants and\n >> that alterations to inherited columns must be done in the appropriate\n >> parent. 
So, given this hierarchy of tables:\n >\n >It's been a while since I looked into C++, but when you alter a descendant\n >(such as making a formerly public method private) you surely do not affect\n >the parents. The other way around I think the choice of star-or-not should\n >be given to the user. But this is again one of the issues that have no\n >point of reference, so I'm glad you bring it up for discussion.\n\nHere, my point is that `SELECT * FROM table*' must be able to get a\nconsistent view throughout the inheritance group. If an inherited\ncolumn is altered, the alteration may be one that would break that view.\nThe question to be decided is how far we go in enforcing similarity in\nthe columns that are shared.\n\nSome things cannot be allowed: renaming columns must only be done to\nthe group as a whole; inherited columns can only be dropped from the \nwhole group; a column cannot change its type in a descendant.\n\nHowever, some differences are going to be allowed.\nConsider this, as a case in point:\n\n a (id char2 primary key, name text not null)\n b (tp char(1) not null default 'B', supplier text) inherits (a);\n c (tp char(1) not null default 'C', customer text) inherits (a);\n\nIt seems quite a sensible use of inheritance to allow different defaults\nfor tp in tables b and c. However, we then have difficulty here:\n\n d (c1 text) inherits (b,c)\n\nWhich tp is to be inherited? At present, PostgreSQL avoids the problem\nby not inheriting any constraints. We need something like:\n\n d (c1 text) inherits (b,c) using b.tp\n\n\nNow I have finished writing this, I can see that I have changed my mind\nabout the necessity of rigorously enforcing column sharing. I think this\nshows that I am still confused about what we want from inheritance; we\nprobably need to discuss this quite a bit more thoroughly before we\ncan come up with a design that we can all be happy with and that will\nlast.\n\n\n\nFinal note: I have just realised that most of what I am using inheritance\nfor could be done with views and unions, provided that we can REFERENCE a\nview (which I haven't tested). One really radical option would be to strip\nout inheritance altogether!\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"Wash me thoroughly from mine iniquity, and cleanse me \n from my sin. For I acknowledge my transgressions; and \n my sin is ever before me. Against thee, thee only, \n have I sinned, and done this evil in thy sight...\"\n Psalms 51:2-4 \n\n\n", "msg_date": "Wed, 26 Jan 2000 23:58:35 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Inheritance, referential integrity and other\n\tconstraints" }, { "msg_contents": "Peter Eisentraut wrote:\n\n> What needs discussion is whether indexes should be shared between\n> inherited tables, or whether each new descendant table needs a new\n> one. Not sure if this just made sense, though.\n\nShared indexes should definitely be allowed, if not the default. When\nyou've got deep hierarchies of inheritance it makes queries slow to have\nto consult a whole lot of indexes.\n\n> > 1. All tables in the group must possess all the columns of their\n> > ancestor, and all those columns must be of the same type.\n> \n> Isn't it this way now?\n\nNo, because you can do an ALTER TABLE (without the *) on the base, and\nit doesn't get propagated to the descendants. 
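For instance, something along these lines (a rough sketch typed from memory, not tested just now, table names invented):\n\ncreate table parent (a int4);\ncreate table child (b int4) inherits (parent);\nalter table parent add column c int4;  -- no star, so child never gets c\nselect c from parent*;  -- presumably now breaks for child\n\n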
Possibly this should be\ndisallowed, although it needs more thought.\n\nBTW, if I remember right, Informix/Illustra has made the \"*\" also\ninclude subclasses syntax the default. In other words you DON'T use the\n*. If you only want a particular class and not sub-classes you have to\nwrite \"ONLY <classname>\", or something. IMHO this is the RIGHT THING.\nFor almost everything (eg ALTER TABLE above) you always want to include\nsubclasses. Same goes for any random query. This is the OO way, you\ndon't think about subclasses unless you are doing something strange.\n\nThis is a pretty big change, but IMHO it should be made at some time.\n\"*\" syntax should be eliminated and made default and something like ONLY\nbe added for when you really only want that one table. This won't affect\nanyone using postgres as a RDBMS, only those people using it as ORDBMS.\n\n\n> It's been a while since I looked into C++, but when you alter a descendant\n> (such as making a formerly public method private) you surely do not affect\n> the parents.\n\nI don't think we're talking about descendants. Rather parents.\n\n> The other way around I think the choice of star-or-not should\n> be given to the user.\n\nBut then you can create a hierachy using ALTER that you couldn't have\ncreated using plain CREATEs. This is bad I think and also has never been\ndone in any object system/language I've ever heard of.\n", "msg_date": "Thu, 27 Jan 2000 11:28:00 +1100", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inheritance,\n referential integrity and other constraints" }, { "msg_contents": "Oliver Elphick wrote:\n\n> If we do allow differences, I think that they should not depend on the\n> user's remembering to add * to the table name. I think that an\n> alteration to a parent table alone should require a UNINHERITED keyword\n> to make the intention explicit. (After all, the user may not realise\n> that the table is a parent; I think the RDBMS should protect him against\n> obvious traps.)\n\nI agree, and think that the \"*\" is a trap in general. Which is why I\nsuggest we go the Informix/Illustra route and dump \"*\" altogether,\nreplacing it with \"ONLY\" or some such, when you don't want inherited.\n\n> Perhaps we need a concept of grouped indexes to go with the grouped\n> tables that inheritance creates. Clearly this is one of the issues\n> that the original designers didn't think through. If we consider the\n> uses of an index, we can see that it is used first for fast access to\n> tuples and second to enforce uniqueness. If (as I am suggesting)\n> the constraints that require an index (PRIMARY KEY, REFERENCES and UNIQUE)\n> are forced to be group-wide, it will follow that the corresponding\n> indexes should also be group-wide. On the other hand, a user-created\n> index for fast access could apply to a single table in the group.\n\nI think indexes too should be inherited (physically as well as\nlogically) unless you choose the ONLY keyword.\n\n> a (id char2 primary key, name text not null)\n> b (tp char(1) not null default 'B', supplier text) inherits (a);\n> c (tp char(1) not null default 'C', customer text) inherits (a);\n> \n> It seems quite a sensible use of inheritance to allow different defaults\n> for tp in tables b and c. However, we then have difficulty here:\n> \n> d (c1 text) inherits (b,c)\n> \n> Which tp is to be inherited? At present, PostgreSQL avoids the problem\n> by not inheriting any constraints. 
We need something like:\n> \n> d (c1 text) inherits (b,c) using b.tp\n\nHmmm. I don't think that's right at all. For example tp might be a\ndifferent type in b and c, and code might depend on that. It would be\nlogically unreasonable to have an inherited \"d\" not have BOTH tp from b\nand c. I think from memory, Eiffel solves this by renaming doesn't it? I\nthink you need either renaming or scope resolving syntax. This would\nprobably get very messy, and I think it's probably quite sufficient to\nforce the user to not inherit the same name from b and C. If you want\nthat, you have to rename tp to be something else in b and/or c.\n\n> Final note: I have just realised that most of what I am using inheritance\n> for could be done with views and unions, provided that we can REFERENCE a\n> view (which I haven't tested). One really radical option would be to strip\n> out inheritance altogether!\n\nPlease no! Yep, inheritance in SELECT is actually implemented as a UNION\ninternally. But don't dump it!\n", "msg_date": "Thu, 27 Jan 2000 13:01:29 +1100", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inheritance,\n referential integrity and other constraints" }, { "msg_contents": "Chris Bitmead wrote:\n >Oliver Elphick wrote:\n...\n >> a (id char2 primary key, name text not null)\n >> b (tp char(1) not null default 'B', supplier text) inherits (a);\n >> c (tp char(1) not null default 'C', customer text) inherits (a);\n >> \n >> It seems quite a sensible use of inheritance to allow different defaults\n >> for tp in tables b and c. However, we then have difficulty here:\n >> \n >> d (c1 text) inherits (b,c)\n >> \n >> Which tp is to be inherited? At present, PostgreSQL avoids the problem\n >> by not inheriting any constraints. We need something like:\n >> \n >> d (c1 text) inherits (b,c) using b.tp\n >\n >Hmmm. I don't think that's right at all. 
For example tp might be a\n >different type in b and c, and code might depend on that.\n\nNo, the inheritance system doesn't allow them to be different types.\nYou get an error if you try to create such a table:\n\njunk=> \\d d\nTable = d\n+----------------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+----------------------------------+----------------------------------+-------+\n| id | char() | 1 |\n| words | text | var |\n| nu | float8 | 8 |\n+----------------------------------+----------------------------------+-------+\njunk=> \\d b\nTable = b\n+----------------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+----------------------------------+----------------------------------+-------+\n| id | char() | 1 |\n| words | text | var |\n| nu | numeric | 8.2 |\n+----------------------------------+----------------------------------+-------+\njunk=> \\d d\nTable = d\n+----------------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+----------------------------------+----------------------------------+-------+\n| id | char() | 1 |\n| words | text | var |\n| nu | float8 | 8 |\n+----------------------------------+----------------------------------+-------+\njunk=> create table e (x text) inherits (b,d);\nERROR: float8 and numeric conflict for nu\n\nAnd this is right because `SELECT nu FROM b*' and `SELECT nu FROM d*'\nboth need to work.\n\n > It would be\n >logically unreasonable to have an inherited \"d\" not have BOTH tp from b\n >and c.\n\nBecause the column names are identical, they are overlaid and treated\nas the same column. This is so whether or not they ultimately derive\nfrom the same parent, so it isn't strictly a case of repeated inheritance\nas in Eiffel. (There, repeatedly inherited features of the same parent\nare silently combined, but identical names from unrelated classes are\nconflicts.)\n\n > I think from memory, Eiffel solves this by renaming doesn't it? I\n >think you need either renaming or scope resolving syntax. This would\n >probably get very messy, and I think it's probably quite sufficient to\n >force the user to not inherit the same name from b and C. If you want\n >that, you have to rename tp to be something else in b and/or c.\n \nBut we do allow this at the moment; identically named and typed columns\nare taken to be the same column. This is so, even if they don't appear\nin the same order:\n\njunk=> \\d m\nTable = m\n+----------------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+----------------------------------+----------------------------------+-------+\n| c1 | char() | 2 |\n| c2 | int4 | 4 |\n| c3 | text | var |\n| c4 | numeric | 8.2 |\n+----------------------------------+----------------------------------+-------+\n\nso it looks as if the recent discussion about column ordering and\ninheritance was off the point!\n\n >> Final note: I have just realised that most of what I am using inheritance\n >> for could be done with views and unions, provided that we can REFERENCE a\n >> view (which I haven't tested). One really radical option would be to stri\n >p\n >> out inheritance altogether!\n >\n >Please no! Yep, inheritance in SELECT is actually implemented as a UNION\n >internally. 
But don't dump it!\n \nWell no; I didn't really mean it!\n\n\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"Come now, and let us reason together, saith the LORD; \n though your sins be as scarlet, they shall be as white\n as snow; though they be red like crimson, they shall \n be as wool.\" Isaiah 1:18 \n\n\n", "msg_date": "Thu, 27 Jan 2000 11:08:54 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Inheritance, referential integrity and other\n\tconstraints" }, { "msg_contents": "On 2000-01-25, Tom Lane mentioned:\n\n> > Everything has its order and it's not like the inheritance as such is\n> > broken.\n> \n> Yes, a whole bunch of stuff is broken after this happens. Go back and\n> consult the archives --- or maybe Chris Bitmead will fill you in; he's\n> got plenty of scars to show for this set of problems. (All I recall\n> offhand is that pg_dump and reload can fail to generate a working\n> database.) The bottom line is that it would be a lot nicer if column c\n> had the same column position in both the parent table and the child\n> table(s).\n\nThis should be fixed in pg_dump by infering something via the oids of the\npg_attribute entries. No need to mess up the backend for it.\n\nMaybe pg_dump should optionally dump schemas in terms of insert into\npg_something commands rather than actual DDL. ;)\n\n> \n> I suggest you be very cautious about messing with ALTER TABLE until you\n> understand why inheritance makes it such a headache ;-)\n\nI'm just trying to get the defaults and constraints working. If\ninheritance stays broken the way it previously was, it's beyond my\npowers. But I get the feeling that people rather not alter their tables\nunless they have *perfect* alter table commands. I don't feel like arguing\nwith them, they'll just have to do without then.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n", "msg_date": "Thu, 27 Jan 2000 18:41:45 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Column ADDing issues " }, { "msg_contents": "On 2000-01-26, Oliver Elphick mentioned:\n\n> As far as inheritance goes, all the descendants are treated as one table,\n> including those created on a different branch from test2. All of them\n> have to return the right columns for a single query; the two options for\n> dealing with this seem to be logical column numbering, or rewriting the\n> descendant tables. (But I haven't spent enough time in the code to be\n> sure of that.)\n\nLogical column ordering seems like a rather clean solution. The system\ncould also make educated decisions such as storing fixed size attributes\nbefore variables sized ones. Kind of like a Cluster within the table.\n\nI still think that fixing this in pg_dump might be the path of least\nresistance, but we've got until autumn(?) 
to figure it out.\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Thu, 27 Jan 2000 18:41:55 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Column ADDing issues " }, { "msg_contents": "> > I suggest you be very cautious about messing with ALTER TABLE until you\n> > understand why inheritance makes it such a headache ;-)\n> \n> I'm just trying to get the defaults and constraints working. If\n> inheritance stays broken the way it previously was, it's beyond my\n> powers. But I get the feeling that people rather not alter their tables\n> unless they have *perfect* alter table commands. I don't feel like arguing\n> with them, they'll just have to do without then.\n\nOK, so am I hearing we don't want ALTER TABLE DROP COLUMN without it\nworking for inheritance? Is this really the way we want things? May as\nwell disable ADD COLUMN too because that doesn't work for inheritance\neither.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 27 Jan 2000 12:52:43 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Column ADDing issues" }, { "msg_contents": "On Thu, Jan 27, 2000 at 12:52:43PM -0500, Bruce Momjian wrote:\n> > > I suggest you be very cautious about messing with ALTER TABLE until you\n> > > understand why inheritance makes it such a headache ;-)\n> > \n> > I'm just trying to get the defaults and constraints working. If\n> > inheritance stays broken the way it previously was, it's beyond my\n> > powers. But I get the feeling that people rather not alter their tables\n> > unless they have *perfect* alter table commands. I don't feel like arguing\n> > with them, they'll just have to do without then.\n> \n> OK, so am I hearing we don't want ALTER TABLE DROP COLUMN without it\n> working for inheritance? Is this really the way we want things? May as\n> well disable ADD COLUMN too because that doesn't work for inheritance\n> either.\n\nBruce, I hope you're playing devil's advocate here. What I'm hearing,\nfrom this discussion, is a number of people interested in getting psql's\nobject features defined in a useful way. As far as impacting Peter's work\non getting ALTER commands working, I hope he understands that getting\nthe commands working for the SQL92 case, and leaving inheritance broken\n(as it currently is) is far preferable to holding off for the *perfect*\nproblem definition. I interpreted his last sentence to mean \"they'll\njust have to do without *perfect* alter table commands\", not \"I'm not\ngoing to work on this at all anymore\". At least, I sure hope that's what\nhe means :-)\n\nIf you meant the latter, Peter, let me say that, in my opinion, very\nfew people are currently using postgres's inheritance features, and are\nalready having to manage with the broken state they're in. I'm glad to\nsee interest in improving them, but I see that as post 7.0 work. Heck,\nif Oliver & Co. come up with an interesting, consistent object model,\nthat'd be reason enough for an 8.0 release. ;-) (No, please, not another\nversion number thread!) Certainly might be worth a long range development\nfork in the CVS, at least.\n\nRoss\n\n-- \nRoss J.
Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n", "msg_date": "Thu, 27 Jan 2000 12:55:44 -0600", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Column ADDing issues" }, { "msg_contents": "> On Thu, Jan 27, 2000 at 12:52:43PM -0500, Bruce Momjian wrote:\n> > > > I suggest you be very cautious about messing with ALTER TABLE until you\n> > > > understand why inheritance makes it such a headache ;-)\n> > > \n> > > I'm just trying to get the defaults and constraints working. If\n> > > inheritance stays broken the way it previously was, it's beyond my\n> > > powers. But I get the feeling that people rather not alter their tables\n> > > unless they have *perfect* alter table commands. I don't feel like arguing\n> > > with them, they'll just have to do without then.\n> > \n> > OK, so am I hearing we don't want ALTER TABLE DROP COLUMN without it\n> > working for inhertance. Is this really the way we want things? May as\n> > well disable ADD COLUMN too because that doesn't work for inheritance\n> > either.\n> \n> Bruce, I hope your playing devil's advocate here. What I'm hearing,\n> from this discussion, is a number of people interested in getting psql's\n> object features defined in a useful way. As far as impacting Peter's work\n> on getting ALTER commands working, I hope he understands that getting\n> the commands working for the SQL92 case, and leaving inheritance broken\n> (as it currently is) is far preferable to holding off for the *perfect*\n> problem definition. I interpreted his last sentence to mean \"they'll\n> just have to do without *perfect* alter table commands\", not \"I'm not\n> going to work on this at all anymore\". At least, I sure that's what I\n> hope he means :-)\n\nI interpret it the other way. ALTER TABLE DROP is currently disabled in\ngram.y, and I believe he thinks that unless it is 100%, we don't want\nit. Now, I believe that is very wrong, and I think it is fine as it is,\nbut I can see why he would think that after the hard time he was given.\n\nThis whole thing has wrapped around, and now I am not sure what signal\nwe are sending Peter. I personally like what he has done, seeing that\nhe did exactly what I suggested when he asked on the list months ago. I\ndon't want to do a phantom attribute thing at this point with very\nlittle payback. I also am not terribly concerned about inheritance\neither as it needs work in many areas.\n\nHowever, I am only one voice, and no one is giving direction to him.\n\nWe had better decide what we want in this area.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 27 Jan 2000 17:15:20 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Column ADDing issues" }, { "msg_contents": "On 2000-01-26, Don Baccus mentioned:\n\n> If so, I thought Oliver pointed out that you had the numbering wrong.\n> I thought so, too...\n\nSomeone made the good point that, independent of which numbering you might\nprefer, using add column gives you a setup which you could not achieve\nusing only create table. That makes sense to me, so I'm withdrawing my\nargument. 
;)\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Thu, 27 Jan 2000 23:28:16 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inheritance,\n\treferential integrity and other constraints" }, { "msg_contents": "On 2000-01-26, Oliver Elphick mentioned:\n\n> >considered right. Also I just looked into item 'Disallow inherited columns\n> >with the same name as new columns' and it seems that someone actually made\n> >provisions for this to be allowed, meaning that\n> >create table test1 (x int);\n> >create table test2 (x int) inherits (test1);\n> >would result in test2 looking exactly like test1. No one knows what the\n> >motivation was. (I removed it anyway.)\n> \n> That's a relief! Unless you have actually removed the ability to do\n> repeated inheritance?\n\nUgh, I just realized that of course you _have_ to allow duplicate column\nnames, e.g. in a scheme like\n b\n a < > d\n c\n\nthe columns of \"a\" would arrive duplicated at \"d\" and the above logic\nwould merge them. I haven't finished that fix yet, so I better scrap it\nnow. Seem like this TODO item was really a non-starter.\n\n\n> Final note: I have just realised that most of what I am using inheritance\n> for could be done with views and unions, provided that we can REFERENCE a\n> view (which I haven't tested). One really radical option would be to strip\n> out inheritance altogether!\n\nSure, I could live with that. It's not like it ever worked (in its\nentirety). And any\n\ntable a (a1, a2, a3)\ntable b (b1, b2) inherits (a)\n\ncan also be implemented as\n\ntable a (a_id, a1, a2, a3)\ntable b_bare (b_id, b1, b2)\ncreate view b as\n select a1, a2, a3, b1, b2 from outer join a, b on a_id = b_id\n{or whatever that syntax was}\n\nplus an insert rule or two.\n\nIt would make the rest of the code soooo much easier. (Sarcasm intended,\nbut a glimpse of truth as well.)\n\n+++\nSlashdot: \"Self-proclaimed most advanced open-source database drops\nobject-oriented facilities to simplify code base\"\nAC reply: \"Now when's KDE moving to C?\"\n+++\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n", "msg_date": "Thu, 27 Jan 2000 23:28:24 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inheritance,\n\treferential integrity and other constraints" }, { "msg_contents": "Oliver Elphick wrote:\n\n> No, the inheritance system doesn't allow them to be different types.\n> You get an error if you try to create such a table:\n\nHmm. While it might allow it, I can't see the logic in it. Can't think\nof any OO language that thinks this way. All other languages you get\ntwo different variables either with :: scope resolution in C++ or\nrenaming in Eiffel.\n\n> Because the column names are identical, they are overlaid and treated\n> as the same column. This is so whether or not they ultimately derive\n> from the same parent, so it isn't strictly a case of repeated inheritance\n> as in Eiffel. 
(There, repeatedly inherited features of the same parent\n> are silently combined, but identical names from unrelated classes are\n> conflicts.)\n\nWhich seems like the right thing to me.\n", "msg_date": "Fri, 28 Jan 2000 11:10:39 +1100", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inheritance,\n referential integrity and other constraints" }, { "msg_contents": "\n\nBruce Momjian wrote:\n\n> > > I suggest you be very cautious about messing with ALTER TABLE until you\n> > > understand why inheritance makes it such a headache ;-)\n> >\n> > I'm just trying to get the defaults and constraints working. If\n> > inheritance stays broken the way it previously was, it's beyond my\n> > powers. But I get the feeling that people rather not alter their tables\n> > unless they have *perfect* alter table commands. I don't feel like arguing\n> > with them, they'll just have to do without then.\n>\n> OK, so am I hearing we don't want ALTER TABLE DROP COLUMN without it\n> working for inhertance. Is this really the way we want things? May as\n> well disable ADD COLUMN too because that doesn't work for inheritance\n> either.\n>\n\nI think this is not a good idea. Many of us doesn't interest inheritance.\nALTER ADD COLUMN is not complete but it is better than nothing.\n\n> --\n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n> ************\n\n--\nJose' Soares\nBologna, Italy [email protected]\n\n\n", "msg_date": "Fri, 28 Jan 2000 15:55:58 +0100", "msg_from": "Jose Soares <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Column ADDing issues" }, { "msg_contents": "Oliver Elphick wrote:\n\n> I would like to work on improving implementation of inheritance,\n> especially with regard to referential integrity. I suspect there are\n> a number of issues that may be related and will need to be done together.\n> In addition, this will be my first attempt to do anything serious in\n> the PostgreSQL code itself, so I would like to get some hints as\n> to what I haven't even thought about!\n>\n> First, I would like to change the definition of the foreign key\n> constraints to allow the inheritance star to follow a table name.\n> This would mean that, for RI purposes, the named table would be\n> aggregated with its descendants. So \"REFERENCES tbl\" would mean that\n> the foreign key must exist in tbl, but \"REFERENCES tbl*\" would allow it\n> to exist either in tbl or in any of tbl's descendants.\n\n I haven't thought about it in depth up to now, but I think\n that would cause much trouble in the RI triggers. They don't\n even have the full functionality and must be tested well for\n 7.0.\n\n Can we wait with such an issue until after 7.0.\n\n> Use of ON DELETE or ON UPDATE implies there must be an index on the\n> referring column, to enable checking or deletion to be done speedily.\n> This doesn't seem to happen at the moment. If the reference is to\n> an inheritance group, it would seem to be appropriate that all the\n> tables in the group should use the same index. Similarly, where\n> a unique or primary key constraint is inherited, it may be desirable\n> to use a single index to manage the constraint. 
The implication of\n> this would be that there must be a check when a table is dropped\n> to make sure that a grouped index is not dropped until the last\n> table in the group is dropped.\n\n Yes and yes. I thought about checking if there is a unique\n index at the time, the referencing table is created (or later\n the constraint added). But there is no way, except blowing\n up the DROP INDEX, to prevent someone from removing it later.\n And doing so would prevent then from fixing a corrupted\n index, so I'd be the first to vote against.\n\n> Another item I would like to get fixed is to make sure that all\n> constraints are inherited when a descendant table is created; this\n> is a current TODO item. It will also be necessary to ensure that\n> added constraints get inherited, when ALTER TABLE ... ADD/DROP\n> CONSTRAINT gets implemented.\n\n Yepp.\n\n But please don't start on it before 7.0. I would expect\n touching it right now could become a showstopper.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Fri, 28 Jan 2000 18:26:16 +0100 (CET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inheritance,\n referential integrity and other constraints" }, { "msg_contents": "I'm down to the point where the parallel tests mostly work with a small\nSI buffer --- but they do still sometimes fail. I've realized that\nthere is a whole class of bugs along the following lines:\n\nThere are plenty of routines that do two or more SearchSysCacheTuple\ncalls to get the information they need. As the code stands, it is\nunsafe to continue accessing the tuple returned by SearchSysCacheTuple\nafter making a second such call, because the second call could possibly\ncause an SI cache reset message to be processed, thereby flushing the\ncontents of the caches.\n\nheap_open and CommandCounterIncrement are other routines that could\ncause cache entries to be dropped.\n\nThis is a very insidious kind of bug because the probability of\noccurrence is very low (at normal SI buffer size a reset is unlikely,\nand even if it happens, you won't observe a failure unless the\npfree'd tuple is actually overwritten before you're done with it).\nSo we cannot hope to catch these things by testing.\n\nI am not sure what to do about it. One solution path is to make\nall the potential trouble spots do SearchSysCacheTupleCopy and then\npfree the copied tuple when done. However, that adds a nontrivial\namount of overhead, and it'd be awfully easy to miss some trouble\nspots or to introduce new ones in the future.\n\nAnother possibility is to introduce some sort of notion of a reference\ncount, and to make the standard usage pattern be\n\ttuple = SearchSysCacheTuple(...);\n\t... use tuple ...\n\tReleaseSysCacheTuple(tuple);\nThe idea here is that a tuple with positive refcount would not be\ndeleted during a cache reset, but would simply be removed from its\ncache, and then finally deleted when released (or during elog\nrecovery).\n\nThis might allow us to get rid of SearchSysCacheTupleCopy, too,\nsince the refcount should be just as good as palloc'ing one's own\ncopy for most purposes.\n\nI haven't looked at the callers of SearchSysCacheTuple to see whether\nthis would be a practical change to make. 
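Spelled out a bit more, a caller might look about like this (just a sketch; the cache name and key are placeholders, not a final API):\n\n\ttuple = SearchSysCacheTuple(TYPEOID, ObjectIdGetDatum(typid), 0, 0, 0);\n\tif (!HeapTupleIsValid(tuple))\n\t\telog(ERROR, \"cache lookup failed\");\n\t/* tuple stays usable here, even across other cache lookups or a\n\t * heap_open, because the positive refcount pins it */\n\tReleaseSysCacheTuple(tuple);\n\n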
I was wondering if anyone\nhad any comments or better ideas...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 30 Jan 2000 10:41:13 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Another nasty cache problem" }, { "msg_contents": "> I'm down to the point where the parallel tests mostly work with a small\n> SI buffer --- but they do still sometimes fail. I've realized that\n> there is a whole class of bugs along the following lines:\n> \n> There are plenty of routines that do two or more SearchSysCacheTuple\n> calls to get the information they need. As the code stands, it is\n> unsafe to continue accessing the tuple returned by SearchSysCacheTuple\n> after making a second such call, because the second call could possibly\n> cause an SI cache reset message to be processed, thereby flushing the\n> contents of the caches.\n\nYes, I have always been aware of this problem. The issue is that since\ncache entries are removed on a oldest-removed-first basis, I never\nthought that several cache lookups would be a problem. If you do many\ncache lookups and expect very old ones to still exist, that could be a\nproblem.\n\nHowever, a full reset of the cache could cause major problems. Is there\na way to re-load the cache after the reset with the most recently cached\nentries? Seems that would be easier. However, your issue is probably\nthat the new cache entries would have different locations from the old\nentries. Is it possible to delay the cache reset of the five most\nrecent cache entries, and do them later? I don't see many places where\nmore than 2-3 cache entries are kept. Maybe we need to keep them around\nsomehow during cache reset.\n\n> I am not sure what to do about it. One solution path is to make\n> all the potential trouble spots do SearchSysCacheTupleCopy and then\n> pfree the copied tuple when done. However, that adds a nontrivial\n> amount of overhead, and it'd be awfully easy to miss some trouble\n> spots or to introduce new ones in the future.\n\nSounds like a lot of overhead to do the copy.\n\n> \n> Another possibility is to introduce some sort of notion of a reference\n> count, and to make the standard usage pattern be\n> \ttuple = SearchSysCacheTuple(...);\n> \t... use tuple ...\n> \tReleaseSysCacheTuple(tuple);\n> The idea here is that a tuple with positive refcount would not be\n> deleted during a cache reset, but would simply be removed from its\n> cache, and then finally deleted when released (or during elog\n> recovery).\n\nIf you can do that, can't you just keep the few most recent ones by\ndefault. Seems that would be very clean.\n\n> This might allow us to get rid of SearchSysCacheTupleCopy, too,\n> since the refcount should be just as good as palloc'ing one's own\n> copy for most purposes.\n\nYes, that would be nice.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 30 Jan 2000 12:57:58 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Another nasty cache problem" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Yes, I have always been aware of this problem. 
The issue is that since\n> cache entries are removed on a oldest-removed-first basis, I never\n> thought that several cache lookups would be a problem.\n\nThey're not, under normal circumstances...\n\n> However, a full reset of the cache could cause major problems. Is there\n> a way to re-load the cache after the reset with the most recently cached\n> entries? Seems that would be easier. However, your issue is probably\n> that the new cache entries would have different locations from the old\n> entries. Is it possible to delay the cache reset of the five most\n> recent cache entries, and do them later?\n\nI don't think that's a good answer; what if one of those entries is the\none that the SI messages wanted us to update? With a scheme like that,\nyou might be protecting a cache entry that actually isn't being used\nanymore. With a refcount you'd at least know whether it was safe to\nthrow the entry away.\n\nOf course this just begs the question of what to do when an SI update\nmessage arrives for a tuple that is locked down by refcount. Maybe\nwe have to kick out an elog(ERROR) then. Could be messy.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 30 Jan 2000 13:22:05 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Another nasty cache problem " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Yes, I have always been aware of this problem. The issue is that since\n> > cache entries are removed on a oldest-removed-first basis, I never\n> > thought that several cache lookups would be a problem.\n> \n> They're not, under normal circumstances...\n> \n> > However, a full reset of the cache could cause major problems. Is there\n> > a way to re-load the cache after the reset with the most recently cached\n> > entries? Seems that would be easier. However, your issue is probably\n> > that the new cache entries would have different locations from the old\n> > entries. Is it possible to delay the cache reset of the five most\n> > recent cache entries, and do them later?\n> \n> I don't think that's a good answer; what if one of those entries is the\n> one that the SI messages wanted us to update? With a scheme like that,\n> you might be protecting a cache entry that actually isn't being used\n> anymore. With a refcount you'd at least know whether it was safe to\n> throw the entry away.\n> \n> Of course this just begs the question of what to do when an SI update\n> message arrives for a tuple that is locked down by refcount. Maybe\n> we have to kick out an elog(ERROR) then. Could be messy.\n\nYep, that was my question. You can re-load it, but if it is the one\nthat just got invalidated, what do you reload? My guess is that you\nkeep using the same cache entry until the current transaction finishes,\nat which point you can throw it away.\n\nNow, if we did proper locking, no SI message could arrive for such an\nentry.\n\nMy assumption is that these are mostly system cache entries, and they\nrarely change, right? If someone is operating on a table that gets an\nSI entry, odds are that later on the system will fail because the table\nis changed in some way, right?\n\nActually, don't we have a transaction id for the transaction that loaded\nthat cache entry. We can add a transaction id to the cache record that\nshows the transaction that last accessed that cache entry. 
Then we can\nsay if any SI message comes in for a cache entry that was accessed by\nthe current transaction, we throw an elog.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 30 Jan 2000 14:25:29 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Another nasty cache problem" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Now, if we did proper locking, no SI message could arrive for such an\n> entry.\n\n> My assumption is that these are mostly system cache entries, and they\n> rarely change, right? If someone is operating on a table that gets an\n> SI entry, odds are that later on the system will fail because the table\n> is changed in some way, right?\n\nIf the tuple is actually *changed* then that's true (and locking should\nhave prevented it anyway). But we also issue cache flushes against\nwhole system tables in order to handle VACUUM of a system table. There,\nthe only thing that's actually been modified is the tuple's physical\nlocation (ctid). We don't want to blow away transactions that are just\nlooking at cache entries when a VACUUM happens.\n\nPerhaps the caches shouldn't store ctid? Not sure.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 30 Jan 2000 16:54:14 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Another nasty cache problem " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Now, if we did proper locking, no SI message could arrive for such an\n> > entry.\n> \n> > My assumption is that these are mostly system cache entries, and they\n> > rarely change, right? If someone is operating on a table that gets an\n> > SI entry, odds are that later on the system will fail because the table\n> > is changed in some way, right?\n> \n> If the tuple is actually *changed* then that's true (and locking should\n> have prevented it anyway). But we also issue cache flushes against\n> whole system tables in order to handle VACUUM of a system table. There,\n> the only thing that's actually been modified is the tuple's physical\n> location (ctid). We don't want to blow away transactions that are just\n> looking at cache entries when a VACUUM happens.\n> \n> Perhaps the caches shouldn't store ctid? Not sure.\n\nI am guilt of that. There are a few place where I grab the tuple from\nthe cache, then use that to update the heap. I thought it was a nifty\nsolution at the time. I thought I used the CacheCopy calls for that,\nbut I am not positive. Even if I did, that doesn't help because the\ncopy probably has an invalid tid at that point, thought I have opened\nthe table. Maybe I have to make sure I open the table before geting the\ntid from the cache.\n\nIs it only the tid that is of concern. If so, that can probably be\nfixed somehow.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 30 Jan 2000 16:56:14 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Another nasty cache problem" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Perhaps the caches shouldn't store ctid? 
Not sure.\n\n> I am guilt of that. There are a few place where I grab the tuple from\n> the cache, then use that to update the heap. I thought it was a nifty\n> solution at the time. I thought I used the CacheCopy calls for that,\n> but I am not positive. Even if I did, that doesn't help because the\n> copy probably has an invalid tid at that point, thought I have opened\n> the table. Maybe I have to make sure I open the table before geting the\n> tid from the cache.\n\nI believe we worked that out and fixed it a few months ago: it's safe\nto use the cache to find a tuple you want to update, if you open and\nlock the containing table *before* doing the cache lookup. Then you\nknow VACUUM's not running on that table (since you have it locked)\nand you have an up-to-date TID for the tuple (since the open+lock\nwould have processed any pending shared-inval messages). I went\naround and made sure that's true everywhere.\n\nWhat I was thinking about was adding code to the caches that would\n(a) maintain refcounts on cached tuples, (b) reread rather than\ndiscard a tuple if it is invalidated while refcount > 0, and (c)\nkick out an error if the reread shows that the tuple has in fact\nchanged. It seems that we would need to ignore the TID when deciding\nif a tuple has changed, however.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 30 Jan 2000 20:52:22 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Another nasty cache problem " }, { "msg_contents": "> I believe we worked that out and fixed it a few months ago: it's safe\n> to use the cache to find a tuple you want to update, if you open and\n> lock the containing table *before* doing the cache lookup. Then you\n> know VACUUM's not running on that table (since you have it locked)\n> and you have an up-to-date TID for the tuple (since the open+lock\n> would have processed any pending shared-inval messages). I went\n> around and made sure that's true everywhere.\n\nGood.\n\n> What I was thinking about was adding code to the caches that would\n> (a) maintain refcounts on cached tuples, (b) reread rather than\n> discard a tuple if it is invalidated while refcount > 0, and (c)\n> kick out an error if the reread shows that the tuple has in fact\n> changed. It seems that we would need to ignore the TID when deciding\n> if a tuple has changed, however.\n\nYes, that is one solution. We can do it the same way heap_fetch works.\nIt requires a Buffer pointer which it uses to return a value that calls\nReleaseBuffer() when completed.\n\nHowever, would just throwing an elog on any cache invalidate on a cache\nrow looked up in the current transaction/command counter make more\nsense? Sometimes you are using that cache oid in some later actions\nthat really can't be proper unlocked at the end? Would be less code.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 30 Jan 2000 21:18:51 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Another nasty cache problem" }, { "msg_contents": "On Sun, 30 Jan 2000, Tom Lane wrote:\n\n> There are plenty of routines that do two or more SearchSysCacheTuple\n> calls to get the information they need. 
As the code stands, it is\n> unsafe to continue accessing the tuple returned by SearchSysCacheTuple\n> after making a second such call, because the second call could possibly\n> cause an SI cache reset message to be processed, thereby flushing the\n> contents of the caches.\n> \n> heap_open and CommandCounterIncrement are other routines that could\n> cause cache entries to be dropped.\n\nThis sort of thing should be documented, at least in the comment on top of\nthe function. From the developer's FAQ I gathered something like that\nthese tuples can be used for a short while, which is of course very exact.\n\nAnyway, I just counted 254 uses of SearchSysCacheTuple in the backend tree\nand a majority of these are probably obviously innocent. Since I don't\nhave any more developing planned, I would volunteer to take a look at all\nof those and look for violations of second cache look up, heap_open, and\nCommandCounterIncrement, fixing them where possible, or at least pointing\nthem out to more experienced people. That might save you from going out of\nyour way and instituting some reference count or whatever, and it would be\nan opportunity for me to read some code.\n\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Mon, 31 Jan 2000 13:55:15 +0100 (MET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Another nasty cache problem" }, { "msg_contents": "On Sun, 30 Jan 2000, Bruce Momjian wrote:\n\n> > Perhaps the caches shouldn't store ctid? Not sure.\n> \n> I am guilt of that. There are a few place where I grab the tuple from\n> the cache, then use that to update the heap. I thought it was a nifty\n> solution at the time. I thought I used the CacheCopy calls for that,\n> but I am not positive. Even if I did, that doesn't help because the\n> copy probably has an invalid tid at that point, thought I have opened\n> the table. Maybe I have to make sure I open the table before geting the\n> tid from the cache.\n\nUrgh, I better check my code for that as well ... :(\n\n> \n> Is it only the tid that is of concern. If so, that can probably be\n> fixed somehow.\n> \n> \n> \n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Mon, 31 Jan 2000 13:57:48 +0100 (MET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Another nasty cache problem" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> This sort of thing should be documented,\n\n... or changed ...\n\n> Anyway, I just counted 254 uses of SearchSysCacheTuple in the backend tree\n> and a majority of these are probably obviously innocent. Since I don't\n> have any more developing planned, I would volunteer to take a look at all\n> of those and look for violations of second cache look up, heap_open, and\n> CommandCounterIncrement, fixing them where possible, or at least pointing\n> them out to more experienced people. That might save you from going out of\n> your way and instituting some reference count or whatever, and it would be\n> an opportunity for me to read some code.\n\nI appreciate the offer, but I don't really want to fix it that way.\nIf that's how things have to work, then the code will be *extremely*\nfragile --- any routine that opens a relation or looks up a cache tuple\nwill potentially break its callers as well as itself. 
And since the\nprobability of failure is so low, we'll never find it; we'll just keep\ngetting the occasional irreproducible failure report from the field.\nI think we need a designed-in solution rather than a restrictive coding\nrule.\n\nAlso, I am not sure that the existing uses are readily fixable. For\nexample, I saw a number of crashes in the parser last night, most of\nwhich traced to uses of Operator or Type pointers --- which are really\nSearchSysCacheTuple results, but the parser passes them around with wild\nabandon. I don't see any easy way of restructuring that code to avoid\nthis.\n\nI am starting to think that Bruce's idea might be the way to go: lock\ndown any cache entry that's been referenced since the last transaction\nstart or CommandCounterIncrement, and elog() if it's changed by\ninvalidation. Then the only coding rule needed is \"cached tuples don't\nstay valid across CommandCounterIncrement\", which is relatively\nsimple to check for.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 31 Jan 2000 10:24:23 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Another nasty cache problem " }, { "msg_contents": "Tom Lane wrote:\n> I'm down to the point where the parallel tests mostly work with a small\n> SI buffer --- but they do still sometimes fail.\n\nHave you committed your changes? I tried the parallel tests with cvs of around\n5pm GMT 31 Jan, and they were all fine (I just ran out of procs at one point).\nThis is much better than last week! Thanks! I also tried that nonsensical\njoin from the other day, and it failed in the same way again:\n\nnewnham=# select * from crsids,\"tblPerson\" where\nnewnham-# crsids.crsid != \"tblPerson\".\"CRSID\";\nBackend sent B message without prior T\nD21Enter data to be copied followed by a newline.\nEnd with a backslash and a period on a line by itself.\n\nAfter \\. :\n\nUnknown protocol character 'M' read from backend. (The protocol character is the first character the backend sends in response to a query it receives).\nPQendcopy: resetting connection\nAsynchronous NOTIFY 'ndropoulou' from backend with pid '1818589281' received.\nAsynchronous NOTIFY 'ndropoulou' from backend with pid '1818589281' received.\n\n\npq_flush: send() failed: Broken pipe\nFATAL: pq_endmessage failed: errno=32\n\nbut no NOTICEs about SI anywhere any more, in fact no messages at all until\nthe \"Unknown protocol character\" bit above. The psql frontend process grows to\nabout 120Mb in size before this if that matters (200Mb swap free).\n\n(Looking at why pg_dumpall creates unique indices for each different type\nof index at the moment...)\n\nCheers,\n\nPatrick\n", "msg_date": "Mon, 31 Jan 2000 19:13:56 +0000", "msg_from": "Patrick Welche <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Another nasty cache problem" }, { "msg_contents": "Does anyone know what is happening here? There is no other user; the\nCREATE TABLE is part of a much larger script that was working OK last\nThursday. 
The script drops and recreates an entire database.\n\nbray=# \\d country\n Table \"country\"\n Attribute | Type | Modifier \n-----------+---------+----------\n id | char(2) | not null\n name | text | not null\n region | text | \n telcode | text | \nIndex: country_pkey\nConstraint: (id ~ '[A-Z]{2}'::text)\n\nbray=# create table country_ccy\n(\n country char(2) references country (id) match full,\n ccy char(3) references currency (symbol) match full,\n primary key (country, ccy)\n)\n;\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index \n'country_ccy_pkey'\nfor table 'country_ccy'\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: RelationClearRelation: relation 21645 modified while in use\n\nbray=# select relname from pg_class where oid = 21645;\n relname \n---------\n country\n(1 row)\n\ncountry was referenced in a previous table's foreign key. In view of\nthe notes on RelationClearRelation, I am wondering if refcount wasn't \ndecremented after that table was created.\n\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"My son, if sinners entice thee, consent thou not.\" \n Proverbs 1:10 \n\n\n", "msg_date": "Mon, 31 Jan 2000 22:18:19 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Recent RI changes have broken something" }, { "msg_contents": "> I am starting to think that Bruce's idea might be the way to go: lock\n> down any cache entry that's been referenced since the last transaction\n> start or CommandCounterIncrement, and elog() if it's changed by\n> invalidation. Then the only coding rule needed is \"cached tuples don't\n> stay valid across CommandCounterIncrement\", which is relatively\n> simple to check for.\n\nYea, I had a good idea ...\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 31 Jan 2000 20:54:27 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Another nasty cache problem" }, { "msg_contents": "Patrick Welche <[email protected]> writes:\n> Tom Lane wrote:\n>> I'm down to the point where the parallel tests mostly work with a small\n>> SI buffer --- but they do still sometimes fail.\n\n> Have you committed your changes? I tried the parallel tests with cvs\n> of around 5pm GMT 31 Jan, and they were all fine (I just ran out of\n> procs at one point). This is much better than last week! Thanks!\n\nYes, I committed what I had last night (about 04:35 GMT 1/31).\n\nThere are cache-flush-related bugs still left to deal with, but they\nseem to be far lower in probability than the ones squashed so far.\nI'm finding that even with MAXNUMMESSAGES set to 8, the parallel tests\nusually pass; so it seems we need some other way of testing to nail down\nthe remaining problems.\n\n> I also tried that nonsensical join from the other day, and it failed in\n> the same way again:\n> newnham=# select * from crsids,\"tblPerson\" where\n> newnham-# crsids.crsid != \"tblPerson\".\"CRSID\";\n> Backend sent B message without prior T\n\nHmm. 
Can you provide a self-contained test case (a script to build the\nfailing tables, preferably)?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 31 Jan 2000 21:02:30 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Another nasty cache problem " }, { "msg_contents": "\"Oliver Elphick\" <[email protected]> writes:\n> ERROR: RelationClearRelation: relation 21645 modified while in use\n\nThis is probably my fault. Can you provide a simple test case?\nThe two table declarations might be enough.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 31 Jan 2000 21:06:45 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Recent RI changes have broken something " }, { "msg_contents": "Tom Lane wrote:\n >\"Oliver Elphick\" <[email protected]> writes:\n >> ERROR: RelationClearRelation: relation 21645 modified while in use\n >\n >This is probably my fault. Can you provide a simple test case?\n >The two table declarations might be enough.\n \nThey seem to need to have the data loaded too. The attached gzipped\ntar is the minimum extract that will trigger the bug. It contains\none example script (psql -d template1 -e <example) and 2 data files.\nYou will have to amend the script to put in the correct path of the\ndata files for your backend to find them.\n\n\n\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"And be not conformed to this world; but be ye \n transformed by the renewing of your mind, that ye may \n prove what is that good, and acceptable, and perfect, \n will of God.\" Romans 12:2", "msg_date": "Tue, 01 Feb 2000 09:29:31 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Recent RI changes have broken something " }, { "msg_contents": ">> \"Oliver Elphick\" <[email protected]> writes:\n>>> ERROR: RelationClearRelation: relation 21645 modified while in use\n>> \n>> This is probably my fault. Can you provide a simple test case?\n>> The two table declarations might be enough.\n\nI think this is fixed now. Hopefully I didn't break SELECT FOR UPDATE\nwhile I was at it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 02 Feb 2000 19:04:39 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Recent RI changes have broken something " }, { "msg_contents": "On Mon, Jan 31, 2000 at 09:02:30PM -0500, Tom Lane wrote:\n> Patrick Welche <[email protected]> writes:\n> > Tom Lane wrote:\n> \n> There are cache-flush-related bugs still left to deal with, but they\n> seem to be far lower in probability than the ones squashed so far.\n> I'm finding that even with MAXNUMMESSAGES set to 8, the parallel tests\n> usually pass; so it seems we need some other way of testing to nail down\n> the remaining problems.\n> \n> > I also tried that nonsensical join from the other day, and it failed in\n> > the same way again:\n> > newnham=# select * from crsids,\"tblPerson\" where\n> > newnham-# crsids.crsid != \"tblPerson\".\"CRSID\";\n> > Backend sent B message without prior T\n> \n> Hmm. Can you provide a self-contained test case (a script to build the\n> failing tables, preferably)?\n\nIt seems this is a memory exhaustion thing: I have 128Mb real memory.\nAttached below is the C program used to create some random data in\ntables test and test2 of database test (which needs to exist). 
Executing\nthe non-sensical query\n\n select * from test,test2 where test.i!=test2.i;\n\nshould result in 2600*599=1557400 (ie lots of) rows to be returned.\nThe process's memory consumption during this select grows to 128Mb, and after\na moment or two:\n\nBackend sent D message without prior T\nBackend sent D message without prior T\n...\n\nWhich isn't quite the same message as before, but is of the same type.\n\n 59 processes: 2 running, 57 sleeping\nCPU states: 2.3% user, 86.4% nice, 9.3% system, 0.0% interrupt, 1.9% idle\nMemory: 74M Act, 37M Inact, 184K Wired, 364K Free, 95M Swap, 262M Swap free\n\n PID USERNAME PRI NICE SIZE RES STATE TIME WCPU CPU COMMAND\n 1547 prlw1 50 0 128M 516K run 1:28 59.28% 59.28% psql\n 1552 postgres 50 0 1920K 632K run 1:37 24.32% 24.32% postgres\n\nlater, while the \"Backend sent...\" messages appear\n\n 1547 prlw1 -5 0 128M 68M sleep 1:41 23.00% 23.00% psql\n 1552 postgres 2 0 1920K 4K sleep 1:41 141.00% 6.88% <postgres>\n\nNote that there is still plenty of swap space. The 128Mb number seems to be\nmore than a coincidence (how to prove?)\n\nSo, is this only happening to me? How can lack of real memory affect timing\nof interprocess communication?\n\nCheers,\n\nPatrick\n\n==========================================================================\n\n#include <ctype.h>\n#include <stdio.h>\n#include <stdlib.h>\n\n#include \"libpq-fe.h\"\n\nconst char *progname;\n\nPGresult *send_query(PGconn *db, const char *query)\n{\n PGresult *res;\n\n res=PQexec(db,query);\n switch(PQresultStatus(res))\n {\n case PGRES_EMPTY_QUERY:\n printf(\"PGRES_EMPTY_QUERY: %s\\n\",query);\n break;\n case PGRES_COMMAND_OK:\n printf(\"PGRES_COMMAND_OK: %s\\n\",query);\n break;\n case PGRES_TUPLES_OK:\n printf(\"PGRES_TUPLES_OK: %s\\n\",query);\n break;\n case PGRES_COPY_OUT:\n printf(\"PGRES_COPY_OUT: %s\\n\",query);\n break;\n case PGRES_COPY_IN:\n printf(\"PGRES_COPY_IN: %s\\n\",query);\n break;\n case PGRES_BAD_RESPONSE:\n printf(\"PGRES_BAD_RESPONSE: %s\\n\",query);\n exit(1);\n break;\n case PGRES_NONFATAL_ERROR:\n printf(\"PGRES_NONFATAL_ERROR: %s\\n\",query);\n break;\n case PGRES_FATAL_ERROR:\n printf(\"PGRES_FATAL_ERROR: %s\\n\",query);\n exit(1);\n break;\n default:\n fprintf(stderr,\"Error from %s: Unknown response from \"\\\n \"PQresultStatus()\\n\",progname);\n exit(1);\n break;\n }\n\n return res;\n}\n\nchar get_letter(void)\n{\n int c;\n\n do c=(int)random()%128;\n while(!(isascii(c)&&isalpha(c)));\n\n return (char)tolower(c);\n}\n\nunsigned int get_num(void)\n{\n return random()%100;\n}\n\nint main(int argc, char* argv[])\n{\n char id[7],query[2048];\n int i;\n PGconn *db;\n PGresult *res;\n\n progname=argv[0];\n\n srandom(42); /* same data each time hopefully */\n\n db=PQconnectdb(\"dbname=test\");\n if(PQstatus(db)==CONNECTION_BAD)\n {\n fprintf(stderr,\"Error from %s: Unable to connect to database \\\"test\\\".\\n\",\n progname);\n exit(1);\n }\n\n res=send_query(db,\"create table test (txt text,var varchar(7),i integer)\");\n PQclear(res);\n res=send_query(db,\"create table test2(txt text,var varchar(7),i integer)\");\n PQclear(res);\n\n for(i=1;i<=2600;++i)\n {\n sprintf(id,\"%c%c%c%c%03u\",get_letter(),get_letter(),get_letter(),\n get_letter(),get_num());\n\n sprintf(query,\"insert into test values ('%s','%s','%i')\",id,id,i);\n res=send_query(db,query);\n PQclear(res);\n }\n\n for(i=1;i<=600;++i)\n {\n sprintf(id,\"%c%c%c%c%03u\",get_letter(),get_letter(),get_letter(),\n get_letter(),get_num());\n\n sprintf(query,\"insert into test2 values 
('%s','%s','%i')\",id,id,i);\n res=send_query(db,query);\n PQclear(res);\n }\n\n PQfinish(db);\n\n return 0;\n}\n", "msg_date": "Thu, 3 Feb 2000 11:24:34 +0000", "msg_from": "Patrick Welche <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Another nasty cache problem" }, { "msg_contents": "On Thu, Feb 03, 2000 at 11:24:34AM +0000, Patrick Welche wrote:\n> char id[7],query[2048];\n ^\n should be 8 to be safe..\n", "msg_date": "Thu, 3 Feb 2000 12:14:26 +0000", "msg_from": "Patrick Welche <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Another nasty cache problem" }, { "msg_contents": "Tom Lane wrote:\n >>> \"Oliver Elphick\" <[email protected]> writes:\n >>>> ERROR: RelationClearRelation: relation 21645 modified while in use\n >>> \n >>> This is probably my fault. Can you provide a simple test case?\n >>> The two table declarations might be enough.\n >\n >I think this is fixed now. Hopefully I didn't break SELECT FOR UPDATE\n >while I was at it.\n \nYes, it is fixed from my point of view. I can't say about SELECT FOR\nUPDATE...\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"O come, let us worship and bow down; let us kneel \n before the LORD our maker.\" Psalms 95:6 \n\n\n", "msg_date": "Thu, 03 Feb 2000 15:06:42 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Recent RI changes have broken something " }, { "msg_contents": "Patrick Welche <[email protected]> writes:\n> Note that there is still plenty of swap space. The 128Mb number seems to be\n> more than a coincidence (how to prove?)\n\nThe majority of Unix systems have a process size limit kernel parameter,\nwhich is normally set to less than the amount of available swap space\n(you don't want a single process running wild to chew up all your swap\nand make other stuff start failing for lack of swap...) Check your\nkernel parameters.\n\nIt sounds to me like the backend has hit the size limit and is not\nreacting gracefully to failure of malloc() to allocate more space.\nIt ought to exit with an elog(FATAL), probably. Sigh, time to take\nanother pass through the code to cast a suspicious eye on everyplace\nthat calls malloc() directly.\n\nThere's a separate question about *why* such a simple query is chewing\nup so much memory. What query plan does EXPLAIN show for your test\nquery?\n\nYou said this was with current sources, right?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 03 Feb 2000 12:00:21 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Another nasty cache problem " }, { "msg_contents": "Tom Lane wrote:\n >There's a separate question about *why* such a simple query is chewing\n >up so much memory. What query plan does EXPLAIN show for your test\n >query?\n \nI can show a similar problem.\n\n >You said this was with current sources, right?\n \nThis is with current sources: I managed to kill the backend before\nit had used up all swap. 
If left to run on 6.5.3 or CVS as of 2\nweeks back it would kill the whole machine; I haven't let it get that\nfar today.\n\nbray=# explain select * from pg_operator as a, pg_operator as b;\nNOTICE: QUERY PLAN:\n\nNested Loop (cost=12604.88 rows=258064 width=162)\n -> Seq Scan on pg_operator b (cost=24.76 rows=508 width=81)\n -> Seq Scan on pg_operator a (cost=24.76 rows=508 width=81)\n\nEXPLAIN\n\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"O come, let us worship and bow down; let us kneel \n before the LORD our maker.\" Psalms 95:6 \n\n\n", "msg_date": "Thu, 03 Feb 2000 18:41:28 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Another nasty cache problem " }, { "msg_contents": "On Thu, Feb 03, 2000 at 12:00:21PM -0500, Tom Lane wrote:\n>\n> The majority of Unix systems have a process size limit kernel parameter,\n> which is normally set to less than the amount of available swap space\n> (you don't want a single process running wild to chew up all your swap\n> and make other stuff start failing for lack of swap...) Check your\n> kernel parameters.\n\nProbably to do with the shell limit:\n\nmemoryuse 125460 kbytes\n \n> There's a separate question about *why* such a simple query is chewing\n> up so much memory. What query plan does EXPLAIN show for your test\n> query?\n\ntest=# explain select * from test,test2 where test.i!=test2.i;\nNOTICE: QUERY PLAN:\n\nNested Loop (cost=64104.80 rows=1559400 width=56)\n -> Seq Scan on test2 (cost=24.80 rows=600 width=28)\n -> Seq Scan on test (cost=106.80 rows=2600 width=28)\n\nEXPLAIN\n\n> You said this was with current sources, right?\n\nThey're about 2 days old now. (Well, after your SI buffer overrun fixes)\n\nCheers,\n\nPatrick\n", "msg_date": "Thu, 3 Feb 2000 18:47:07 +0000", "msg_from": "Patrick Welche <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Another nasty cache problem" }, { "msg_contents": "\"Oliver Elphick\" <[email protected]> writes:\n> Tom Lane wrote:\n>> There's a separate question about *why* such a simple query is chewing\n>> up so much memory. What query plan does EXPLAIN show for your test\n>> query?\n \n> I can show a similar problem.\n\n> bray=# explain select * from pg_operator as a, pg_operator as b;\n> NOTICE: QUERY PLAN:\n\n> Nested Loop (cost=12604.88 rows=258064 width=162)\n> -> Seq Scan on pg_operator b (cost=24.76 rows=508 width=81)\n> -> Seq Scan on pg_operator a (cost=24.76 rows=508 width=81)\n\nOK, I sussed this one --- there's a (longstanding) memory leak in\ncatcache.c. When entering a system-table tuple into the cache,\nit forgot to free the copy of the tuple that had been created in\ntransaction-local memory context. Cause enough cache entries to\nbe created within one transaction, and you'd start to notice the\nleak. The above query exhibits the problem because it produces\nabout 250K tuples each with six regproc columns, and each regprocout\ncall does a cache lookup to convert regproc OID to procedure name.\nSince you're cycling through 500-plus different procedure names,\nand the cache only keeps ~ 300 entries, there's going to be a\nfresh cache entry made every time :-(\n\nWith the fix I just committed, current sources execute the above query\nin constant backend memory space. 
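A schematic sketch of the leak pattern just described (illustrative helper names only, not the actual catcache.c code): the cache keeps its own long-lived copy of the tuple, so the transaction-local copy handed in by the caller has to be released once the entry is built.

static void
cache_insert(Cache *cache, HeapTuple txn_local_tuple)
{
	HeapTuple	cached;

	/* copy into cache-lifetime storage (hypothetical helper) */
	cached = copy_into_cache_memory(txn_local_tuple);
	add_entry(cache, cached);		/* hypothetical helper */

	/*
	 * This is the call that was missing: without it, every new cache
	 * entry leaks one transaction-local tuple copy until end of xact.
	 */
	pfree(txn_local_tuple);
}
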
psql's space usage still goes to the\nmoon, of course, since it's trying to buffer the whole query result :-(\n... but there's no way around that short of a major redesign of libpq's\nAPI. When and if we switch over to CORBA, we really need to rethink\nthe client access API so that buffering the query result in the client-\nside library is an option not a requirement.\n\nI do not think this is the same problem that Patrick Welche is\ncomplaining of, unfortunately.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 03 Feb 2000 23:17:03 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Another nasty cache problem " }, { "msg_contents": "Tom Lane wrote:\n\n> With the fix I just committed, current sources execute the above query\n> in constant backend memory space. psql's space usage still goes to the\n> moon, of course, since it's trying to buffer the whole query result :-(\n> ... but there's no way around that short of a major redesign of libpq's\n> API. When and if we switch over to CORBA, we really need to rethink\n> the client access API so that buffering the query result in the client-\n> side library is an option not a requirement.\n\nWhat about portals? Doesn't psql use portals?\n", "msg_date": "Fri, 04 Feb 2000 15:39:59 +1100", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Another nasty cache problem" }, { "msg_contents": "Patrick Welche <[email protected]> writes:\n> PID USERNAME PRI NICE SIZE RES STATE TIME WCPU CPU COMMAND\n> 1547 prlw1 50 0 128M 516K run 1:28 59.28% 59.28% psql\n> 1552 postgres 50 0 1920K 632K run 1:37 24.32% 24.32% postgres\n\nSigh, I shoulda read this closely enough to notice that you were\ncomplaining of psql memory overrun, not backend memory overrun :-(\n\nThe major problem here is that libpq's API is designed on the assumption\nthat libpq will buffer the whole query result in application memory\nbefore letting the app see any of it. I see no way around that without\na fundamental redesign of the API. Which will happen someday, but not\ntoday.\n\nThe minor problem is that libpq doesn't react very gracefully to running\nout of memory. It detects it OK, but then aborts query processing,\nwhich means it gets out of step with the backend. It needs to be fixed\nso that it continues to absorb tuples (but drops them on the floor)\nuntil the backend is done. I've known of this problem for some time,\nbut have had many higher-priority problems to worry about. Perhaps\nsomeone else would like to take it on...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Feb 2000 00:26:00 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Another nasty cache problem " }, { "msg_contents": "Chris Bitmead <[email protected]> writes:\n> What about portals? Doesn't psql use portals?\n\nNo ... portals are a backend concept ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Feb 2000 00:42:58 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Another nasty cache problem " }, { "msg_contents": "Tom Lane wrote:\n> \n> Chris Bitmead <[email protected]> writes:\n> > What about portals? Doesn't psql use portals?\n> \n> No ... 
portals are a backend concept ...\n\nSince when?\n\nAccording to the old doco you do...\n\nselect portal XX * from table_name where ...;\n\nfetch 20 into XX.\n\nIf the PQexec() is called with \"fetch 20\" at a time\nwouldn't this mean that you wouldn't exhaust front-end\nmemory with a big query?\n", "msg_date": "Fri, 04 Feb 2000 16:57:54 +1100", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Another nasty cache problem" }, { "msg_contents": "Chris Bitmead <[email protected]> writes:\n>> No ... portals are a backend concept ...\n\n> Since when?\n\n> According to the old doco you do...\n\n> select portal XX * from table_name where ...;\n\n> fetch 20 into XX.\n\nThat still works if you spell it in the SQL-approved way,\nDECLARE CURSOR followed by FETCH.\n\n> If the PQexec() is called with \"fetch 20\" at a time\n> wouldn't this mean that you wouldn't exhaust front-end\n> memory with a big query?\n\nSure, and that's how you work around the problem. Nonetheless\nthis requires the user to structure his queries to avoid sucking\nup a lot of data in a single query. If the user doesn't have any\nparticular reason to need random access into a query result, it'd\nbe nicer to be able to read the result in a streaming fashion\nwithout buffering it anywhere *or* making arbitrary divisions in it.\n\nIn any case, psql doesn't (and IMHO shouldn't) convert a SELECT\ninto a series of FETCHes for you.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Feb 2000 01:33:45 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Another nasty cache problem " }, { "msg_contents": "It seems that I am still tracking problems, but each time they turn out to\nhave a different cause: A slight variant on the select that caused memory\nto run out gives\n\n\nnewnham=# select crsids.surname, \"tblPerson\".\"Surname\" from crsids,\"tblPerson\" where crsids.usn=\"tblPerson\".\"USN\"::int4;\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n\nNested Loop (cost=66496.62 rows=34359 width=40)\n -> Seq Scan on tblPerson (cost=157.62 rows=2625 width=24)\n -> Seq Scan on crsids (cost=25.27 rows=584 width=16)\n\nthis is the table I based the memory hog on (2600*600). The backend closes\ninstantly ie., no memory usage! And, as before, it is hard to find a test case\nthat will do the same as repeatably (ie., test case never crashes, the\nabove case crashes every single time). 
\"tblPerson\", as its strange\ncapitalisation suggests, was imported from M$ access via ODBC.\n\nselect test.txt,test2.var from test,test2 where test2.i=test.var::int4;\n\nNested Loop (cost=63504.80 rows=2600 width=40)\n -> Seq Scan on test2 (cost=24.80 rows=600 width=16)\n -> Seq Scan on test (cost=105.80 rows=2600 width=24)\n\nworks fine.\n\nAny thoughts on where to look?\n\nCheers,\n\nPatrick\n", "msg_date": "Fri, 4 Feb 2000 17:11:53 +0000", "msg_from": "Patrick Welche <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Another nasty cache problem" }, { "msg_contents": "Patrick Welche <[email protected]> writes:\n> newnham=# select crsids.surname, \"tblPerson\".\"Surname\" from crsids,\"tblPerson\" where crsids.usn=\"tblPerson\".\"USN\"::int4;\n> pqReadData() -- backend closed the channel unexpectedly.\n> This probably means the backend terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. Attempting reset: Failed.\n\n> Any thoughts on where to look?\n\nIs there anything in the postmaster log? Is there a core file (look\nin the database subdirectory, ie .../data/base/yourdatabase/core)?\nIf so, compiling the backend with -g and extracting a backtrace from\nthe resulting corefile with gdb would be very useful info.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Feb 2000 15:58:57 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Another nasty cache problem " }, { "msg_contents": "On Fri, Feb 04, 2000 at 03:58:57PM -0500, Tom Lane wrote:\n> \n> > Any thoughts on where to look?\n> \n> Is there anything in the postmaster log?\n\nDEBUG: Data Base System is in production state at Fri Feb 4 17:11:05 2000\nServer process (pid 3588) exited with status 11 at Fri Feb 4 17:14:57 2000\nTerminating any active server processes...\nServer processes were terminated at Fri Feb 4 17:14:57 2000\nReinitializing shared memory and semaphores\n\n> Is there a core file (look\n> in the database subdirectory, ie .../data/base/yourdatabase/core)?\n\nBut no core file ... so who knows what the sigsegv comes from. (don't worry\ncoredumpsize unlimited)\n\n\n> If so, compiling the backend with -g and extracting a backtrace from\n> the resulting corefile with gdb would be very useful info.\n\n(already have the -g..)\n\nStill looking...\n\nCheers,\n\nPatrick\n", "msg_date": "Sat, 5 Feb 2000 14:35:15 +0000", "msg_from": "Patrick Welche <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Another nasty cache problem" }, { "msg_contents": "Patrick Welche <[email protected]> writes:\n>> Is there anything in the postmaster log?\n\n> DEBUG: Data Base System is in production state at Fri Feb 4 17:11:05 2000\n> Server process (pid 3588) exited with status 11 at Fri Feb 4 17:14:57 2000\n\n> But no core file ... so who knows what the sigsegv comes from. (don't worry\n> coredumpsize unlimited)\n\nThere sure oughta be a corefile after a SIGSEGV. Hmm. How are you\nstarting the postmaster --- is it from a system startup script?\nIt might work better to start it from an ordinary user process.\nI discovered the other day on a Linux box that the system just plain\nwould not dump a core file from a process started by root, even though\nthe process definitely had nonzero \"ulimit -c\" and had set its euid\nto a nonprivileged userid. But start the same process by hand from an\nunprivileged login, and it would dump a core file. Weird. 
Dunno if\nyour platform behaves the same way, but it's worth trying.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 05 Feb 2000 12:18:29 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Another nasty cache problem " }, { "msg_contents": "Someone mentioned recently that a timezone style of \"GMT+0800\" was on\ntheir FreeBSD machine as an allowed time zone, that its behavior was\nthe same as the usual ISO8601 timezone of \"-0800\", and that this\nconformed to some sort of Posix standard.\n\nI had posted patches for this, and have just modified the patch to be\ncleaner and more robust.\n\nBefore committing this (or at least before completing our upcoming\nbeta period), I'd like confirmation that this actually matches\nexpected behavior for a machine implementing a \"GMT+0800\" (or similar)\ntime zone, and that it is indeed a Posix standard? Anyone??\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Sun, 06 Feb 2000 22:24:55 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Need confirmation of \"Posix time standard\" on FreeBSD" }, { "msg_contents": "Thomas Lockhart wrote:\n >Someone mentioned recently that a timezone style of \"GMT+0800\" was on\n >their FreeBSD machine as an allowed time zone, that its behavior was\n >the same as the usual ISO8601 timezone of \"-0800\", and that this\n >conformed to some sort of Posix standard.\n >\n >I had posted patches for this, and have just modified the patch to be\n >cleaner and more robust.\n >\n >Before committing this (or at least before completing our upcoming\n >beta period), I'd like confirmation that this actually matches\n >expected behavior for a machine implementing a \"GMT+0800\" (or similar)\n >time zone, and that it is indeed a Posix standard? Anyone??\n >\nThis seems to be the case for me.\n\nDebian GNU/Linux, using libc6 version 2.1.2:\n\nlinda:olly $ date\nSun Feb 6 23:55:52 GMT 2000\nlinda:olly $ TZ=GMT+8\nlinda:olly $ date\nSun Feb 6 15:56:26 GMT 2000\nlinda:olly $ TZ=posix/Etc/GMT+8\nlinda:olly $ date\nSun Feb 6 15:59:22 GMT+8 2000\n\n\n \n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"Lift up your heads, O ye gates; and be ye lift up, ye \n everlasting doors; and the King of glory shall come \n in. Who is this King of glory? The LORD strong and \n mighty, the LORD mighty in battle.\" \n Psalms 24:7,8 \n\n\n", "msg_date": "Mon, 07 Feb 2000 00:00:19 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Need confirmation of \"Posix time standard\" on FreeBSD " }, { "msg_contents": "Thomas Lockhart writes:\n> Someone mentioned recently that a timezone style of \"GMT+0800\" was on\n> their FreeBSD machine as an allowed time zone, that its behavior was\n> the same as the usual ISO8601 timezone of \"-0800\", and that this\n> conformed to some sort of Posix standard.\n> \n> I had posted patches for this, and have just modified the patch to be\n> cleaner and more robust.\n> \n> Before committing this (or at least before completing our upcoming\n> beta period), I'd like confirmation that this actually matches\n> expected behavior for a machine implementing a \"GMT+0800\" (or similar)\n> time zone, and that it is indeed a Posix standard? Anyone??\n\nI can confirm that it is a POSIX standard. 
Section 8.1.1 \"Extensions\nto Time Functions\" of POSIX 1003.1-1988 says TZ can be of the form\n :characters\nfor implementation-defined behaviour or else\n std offset [dst [offset][,start[/time],end[/time]]]\n(spaces for readability only) where std is three or more bytes\ndesignating the standard time zone (any characters except a leading\ncolon, digits, comma, minus, plus or NUL allowed) and offset is the\nvalue one must add to the local time to arrive at Coordinated\nUniversal Time. offset is of the form hh[:mm[:ss]] with hh required\nand may be a single digit. Followed by gory details about the rest of\nthe fields. Phew.\n\n--Malcolm\n\n-- \nMalcolm Beattie <[email protected]>\nUnix Systems Programmer\nOxford University Computing Services\n", "msg_date": "Mon, 7 Feb 2000 12:24:48 +0000 (GMT)", "msg_from": "Malcolm Beattie <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Need confirmation of \"Posix time standard\" on FreeBSD" }, { "msg_contents": "> I can confirm that it is a POSIX standard. Section 8.1.1 \"Extensions\n> to Time Functions\" of POSIX 1003.1-1988 says TZ can be of the form\n> :characters\n> for implementation-defined behaviour or else\n> std offset [dst [offset][,start[/time],end[/time]]]\n> (spaces for readability only) where std is three or more bytes\n> designating the standard time zone (any characters except a leading\n> colon, digits, comma, minus, plus or NUL allowed) and offset is the\n> value one must add to the local time to arrive at Coordinated\n> Universal Time. offset is of the form hh[:mm[:ss]] with hh required\n> and may be a single digit. Followed by gory details about the rest of\n> the fields. Phew.\n\nThanks for the info. How do they define \"the standard time zone\"? Is\nit *any* time zone, or \"GMT\", or some other set of choices?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Mon, 07 Feb 2000 14:59:45 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Need confirmation of \"Posix time standard\" on FreeBSD" }, { "msg_contents": "Thomas Lockhart writes:\n> > I can confirm that it is a POSIX standard. Section 8.1.1 \"Extensions\n> > to Time Functions\" of POSIX 1003.1-1988 says TZ can be of the form\n> > :characters\n> > for implementation-defined behaviour or else\n> > std offset [dst [offset][,start[/time],end[/time]]]\n> > (spaces for readability only) where std is three or more bytes\n> > designating the standard time zone (any characters except a leading\n> > colon, digits, comma, minus, plus or NUL allowed) and offset is the\n> > value one must add to the local time to arrive at Coordinated\n> > Universal Time. offset is of the form hh[:mm[:ss]] with hh required\n> > and may be a single digit. Followed by gory details about the rest of\n> > the fields. Phew.\n> \n> Thanks for the info. How do they define \"the standard time zone\"? Is\n> it *any* time zone, or \"GMT\", or some other set of choices?\n\nIt's \"standard\" in the sense of not-summer/not-daylight-savings rather\nthan in the \"POSIX compliance\" sense. In other words, std can be any\nthree bytes you like subject to the not-leading-colon, not-digits etc.\nconstraints above. Later in the section it says that summer time is\nassumed to be one hour ahead of standard time if no offset follows\ndst. 
Also,\n If [an offset is] preceded by a \"-\"; the time zone shall be east of\n the Prime Meridian; otherwise it shall be west (which may be\n indicated by an optional preceding \"+\").\nThat's the bit which shows that the \"+\" is OK. Aha, but I've just\nlooked back at your original message and it refers to \"GMT+0800\"\nwhereas POSIX requires a \":\" between the hours and minutes. So in\nfact, \"GMT+0800\" is *not* legal and it should be \"GMT+08:00\" or\n\"GMT+08\" or \"GMT+8\" (single digit hours are allowed).\n\n--Malcolm\n\n-- \nMalcolm Beattie <[email protected]>\nUnix Systems Programmer\nOxford University Computing Services\n", "msg_date": "Mon, 7 Feb 2000 15:12:08 +0000 (GMT)", "msg_from": "Malcolm Beattie <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Need confirmation of \"Posix time standard\" on FreeBSD" }, { "msg_contents": "From: \"Malcolm Beattie\" <[email protected]>\n> Thomas Lockhart writes:\n> > Before committing this (or at least before completing our upcoming\n> > beta period), I'd like confirmation that this actually matches\n> > expected behavior for a machine implementing a \"GMT+0800\" (or similar)\n> > time zone, and that it is indeed a Posix standard? Anyone??\n>\n> I can confirm that it is a POSIX standard. Section 8.1.1 \"Extensions\n> to Time Functions\" of POSIX 1003.1-1988 says TZ can be of the form\n> :characters\n> for implementation-defined behaviour or else\n> std offset [dst [offset][,start[/time],end[/time]]]\n\nIt probably won't affect anything, but some implementations (FreeBSD most\nnotably) have a bug in parsing TZ light savings string. The M notation gives\none day off for switching to/from light savings. Actually, it incorrectly\nassumes Sunday as 0 for Zeller Congruence when it's Saturday.\n\nGene Sokolov.\n\n\n", "msg_date": "Mon, 7 Feb 2000 19:07:12 +0300", "msg_from": "\"Gene Sokolov\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Need confirmation of \"Posix time standard\" on FreeBSD" }, { "msg_contents": "> > Thanks for the info. How do they define \"the standard time zone\"? Is\n> > it *any* time zone, or \"GMT\", or some other set of choices?\n> It's \"standard\" in the sense of not-summer/not-daylight-savings rather\n> than in the \"POSIX compliance\" sense. In other words, std can be any\n> three bytes you like subject to the not-leading-colon, not-digits etc.\n> constraints above. Later in the section it says that summer time is\n> assumed to be one hour ahead of standard time if no offset follows\n> dst. Also,\n> If [an offset is] preceded by a \"-\"; the time zone shall be east of\n> the Prime Meridian; otherwise it shall be west (which may be\n> indicated by an optional preceding \"+\").\n> That's the bit which shows that the \"+\" is OK. Aha, but I've just\n> looked back at your original message and it refers to \"GMT+0800\"\n> whereas POSIX requires a \":\" between the hours and minutes. So in\n> fact, \"GMT+0800\" is *not* legal and it should be \"GMT+08:00\" or\n> \"GMT+08\" or \"GMT+8\" (single digit hours are allowed).\n\nOK. I'll need to generalize the current code, which looks specifically\nfor \"gmt\". 
Possibly, we'll have just the \"GMT+/-####\" case handled for\n7.0, but if I get time.\n\nAnd we'll allow a superset of the Posix standard, so \"GMT+0800\" will\nbe legal (otherwise, it would disallow the ISO8601 standard which imho\nshould take precedence).\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 08 Feb 2000 18:24:44 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Need confirmation of \"Posix time standard\" on FreeBSD" }, { "msg_contents": "On 2000-02-08, Thomas Lockhart mentioned:\n\n> OK. I'll need to generalize the current code, which looks specifically\n> for \"gmt\". Possibly, we'll have just the \"GMT+/-####\" case handled for\n> 7.0, but if I get time.\n\nI believe it should preferrably be called \"UTC\".\n\n> And we'll allow a superset of the Posix standard, so \"GMT+0800\" will\n\nHe mentioned earlier that it has to be GMT+08:00 or GMT+8.\n\n> be legal (otherwise, it would disallow the ISO8601 standard which imho\n> should take precedence).\n\nOh please, it should. Is it just me or is this notation not making any\nsense? If GMT+08:00 means \"you need to add 8 hours to your local time zone\nto get to GMT\", then x = a + b means \"you need to add 'b' to 'x' in order\nto get 'a'\". Darn standards. How about NOON+01:30 to indicate 10:30(am)?\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Thu, 10 Feb 2000 02:14:30 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Need confirmation of \"Posix time standard\" on FreeBSD" }, { "msg_contents": "> > OK. I'll need to generalize the current code, which looks specifically\n> > for \"gmt\". Possibly, we'll have just the \"GMT+/-####\" case handled for\n> > 7.0, but if I get time.\n> I believe it should preferrably be called \"UTC\".\n\nThe specific case we are solving is an interaction with the zinc\ntimezone database available on (at least) Linux and FreeBSD. Both\nplatforms have \"GMT+/-n\" zones defined, and we'll want to correctly\nparse them.\n\n> > And we'll allow a superset of the Posix standard, so \"GMT+0800\" will\n> He mentioned earlier that it has to be GMT+08:00 or GMT+8.\n\nRight. That's why I thought I'd mention that we'll do a superset.\n\n> > be legal (otherwise, it would disallow the ISO8601 standard which imho\n> > should take precedence).\n> Oh please, it should. Is it just me or is this notation not making any\n> sense? If GMT+08:00 means \"you need to add 8 hours to your local time zone\n> to get to GMT\", then x = a + b means \"you need to add 'b' to 'x' in order\n> to get 'a'\". Darn standards. How about NOON+01:30 to indicate 10:30(am)?\n\nNot in this lifetime. afaict there is a common thread to date/time\nrepresentation in these two standards, as they both involve using a\n\"+/-\" notation to represent time zones.\n\nIt is certainly annoying that there is a sign flip on the numeric\nfields for the two standards (Posix and ISO8601). 
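As a quick stand-alone illustration of that sign flip (a sketch only, assuming a libc that honours POSIX TZ strings; not backend code): under POSIX rules "GMT+8" should come out eight hours *behind* UTC, i.e. ISO-8601 "-08:00".

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
	time_t now = time(NULL);
	struct tm utc, local;

	gmtime_r(&now, &utc);

	setenv("TZ", "GMT+8", 1);	/* POSIX: positive offsets are west of Greenwich */
	tzset();
	localtime_r(&now, &local);

	/* expect the local hour to trail UTC by 8 */
	printf("UTC %02d:%02d  ->  TZ=GMT+8 local %02d:%02d\n",
	       utc.tm_hour, utc.tm_min, local.tm_hour, local.tm_min);
	return 0;
}
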
But only one, Posix,\nhas the preceding alpha time zone, so I should be able to figure it\nout.\n\nThe annoying thing is that my token parsing is getting trickier, since\nI also allow \"January-1 2000\" as a date specification, which from a\n*token* standpoint is pretty similar to \"gmt+8\"...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Thu, 10 Feb 2000 06:52:10 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Need confirmation of \"Posix time standard\" on FreeBSD" }, { "msg_contents": "The killer query was:\n\nselect crsids.surname,\"tblPerson\".\"Surname\" from crsids,\"tblPerson\" where crsids.usn=\"tblPerson\".\"USN\"::int4;\n\nand the reason for the SIGSEGV, is that somehow, text_int4(text *string) in\nsrc/backend/utils/adt/int.c is called with string=(text *)0x0, so obviously\nthis is a problem!\n\ncrsids.usn is integer, \"tblPerson\".\"USN\" is varchar(9).\n\nOddly enough, text_int4 is called from fmgr.c:136 which is in the case\nstatement for n_arguments=2, yet that should be 1\n\n(gdb) print {FmgrInfo}0x8221a30\n$4 = {fn_addr = 0x80f9dbc <text_int4>, fn_plhandler = 0, fn_oid = 1620, \n fn_nargs = 1}\n\nunless gdb is reporting the wrong line number. values->data[0]=0=string.\n\nI have a backtrace and a pretty printed copy of the query tree if useful...\n\nStill trying to make a small test case...\n\nAny suggestions appreciated!\n\nCheers,\n\nPatrick\n\n(source of 31st Jan)\n\nOn Sat, Feb 05, 2000 at 12:18:29PM -0500, Tom Lane wrote:\n> Patrick Welche <[email protected]> writes:\n> >> Is there anything in the postmaster log?\n> \n> > DEBUG: Data Base System is in production state at Fri Feb 4 17:11:05 2000\n> > Server process (pid 3588) exited with status 11 at Fri Feb 4 17:14:57 2000\n> \n> > But no core file ... so who knows what the sigsegv comes from. (don't worry\n> > coredumpsize unlimited)\n> \n> There sure oughta be a corefile after a SIGSEGV. Hmm. How are you\n> starting the postmaster --- is it from a system startup script?\n> It might work better to start it from an ordinary user process.\n> I discovered the other day on a Linux box that the system just plain\n> would not dump a core file from a process started by root, even though\n> the process definitely had nonzero \"ulimit -c\" and had set its euid\n> to a nonprivileged userid. But start the same process by hand from an\n> unprivileged login, and it would dump a core file. Weird. Dunno if\n> your platform behaves the same way, but it's worth trying.\n> \n> \t\t\tregards, tom lane\n", "msg_date": "Fri, 11 Feb 2000 21:04:42 +0000", "msg_from": "Patrick Welche <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Another nasty cache problem" }, { "msg_contents": "Patrick Welche <[email protected]> writes:\n> and the reason for the SIGSEGV, is that somehow, text_int4(text *string) in\n> src/backend/utils/adt/int.c is called with string=(text *)0x0, so obviously\n> this is a problem!\n\nUm. 
Probably you have a NULL value in \"tblPerson\".\"USN\" somewhere?\n\nThere are a lot of functions without adequate defenses against NULL\ninputs :-( --- we've been cleaning them up slowly, but evidently you\nfound another one.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 11 Feb 2000 18:18:32 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Another nasty cache problem " }, { "msg_contents": "On Fri, Feb 11, 2000 at 06:18:32PM -0500, Tom Lane wrote:\n> Patrick Welche <[email protected]> writes:\n> > and the reason for the SIGSEGV, is that somehow, text_int4(text *string) in\n> > src/backend/utils/adt/int.c is called with string=(text *)0x0, so obviously\n> > this is a problem!\n> \n> Um. Probably you have a NULL value in \"tblPerson\".\"USN\" somewhere?\n\nYes of course! Naturally I was looking for something far too complicated and\nthe trees got in the way.. And that's why my test case didn't work.\n\n> There are a lot of functions without adequate defenses against NULL\n> inputs :-( --- we've been cleaning them up slowly, but evidently you\n> found another one.\n\nSo the trouble is, if the function returns and int, and you want to say\nreturn null, there really isn't a value that can be stuck into the int\nthat represents null?\n\nIn the meantime, I think this might help, so I would have seen:\n\nnewnham=# select crsids.surname,\"tblPerson\".\"Surname\" from crsids,\"tblPerson\" where crsids.usn=\"tblPerson\".\"USN\"::int4;\nERROR: Trying to convert NULL text to integer (int4)\n\nCheers,\n\nPatrick\n\n\n\nIndex: int.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/utils/adt/int.c,v\nretrieving revision 1.32\ndiff -c -r1.32 int.c\n*** int.c 2000/01/26 05:57:14 1.32\n--- int.c 2000/02/14 11:22:32\n***************\n*** 277,282 ****\n--- 277,285 ----\n int len;\n char *str;\n \n+ if (!string)\n+ elog(ERROR, \"Trying to convert NULL text to integer (int2)\");\n+ \n len = (VARSIZE(string) - VARHDRSZ);\n \n str = palloc(len + 1);\n***************\n*** 317,322 ****\n--- 320,328 ----\n \n int len;\n char *str;\n+ \n+ if (!string)\n+ elog(ERROR, \"Trying to convert NULL text to integer (int4)\");\n \n len = (VARSIZE(string) - VARHDRSZ);\n\n", "msg_date": "Mon, 14 Feb 2000 11:23:56 +0000", "msg_from": "Patrick Welche <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Another nasty cache problem" }, { "msg_contents": "Patrick Welche <[email protected]> writes:\n \n> + if (!string)\n> + elog(ERROR, \"Trying to convert NULL text to integer (int2)\");\n\nThis is unreasonable behavior. The correct patch is just\n\n\tif (!string)\n\t\treturn 0;\n\nwhich will allow the function manager to plow ahead with returning the\nNULL that it's going to return anyway. See the past pghackers threads\nabout redesigning the function manager interface if you don't understand\nwhat's going on here.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 Feb 2000 11:00:16 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Another nasty cache problem " }, { "msg_contents": "On Mon, Feb 14, 2000 at 11:00:16AM -0500, Tom Lane wrote:\n> Patrick Welche <[email protected]> writes:\n> \n> > + if (!string)\n> > + elog(ERROR, \"Trying to convert NULL text to integer (int2)\");\n> \n> This is unreasonable behavior. 
The correct patch is just\n> \n> \tif (!string)\n> \t\treturn 0;\n> \n> which will allow the function manager to plow ahead with returning the\n> NULL that it's going to return anyway. See the past pghackers threads\n> about redesigning the function manager interface if you don't understand\n> what's going on here.\n\nOff top of head, that means that null and the string \"0\" both return 0..\nOK - I'll look for the mail thread.\n\nCheers,\n\nPatrick\n", "msg_date": "Mon, 14 Feb 2000 16:14:44 +0000", "msg_from": "Patrick Welche <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Another nasty cache problem" }, { "msg_contents": "I've just committed changes to \"reunify\" the date/time types.\n\"timestamp\" and \"interval\" are now the two primary date/time types for\nusers. Also, I've changed the default date style to \"ISO\" (not just in\ntime for Y2K, but we'll be ready for \"Y3K\").\n\nAlso, I made some changes to have NUMERIC be a \"known\" type for\npurposes of implicit type coersion, but have not tested to see if the\nunderlying conversion functions are available.\n\ninitdb required (and enforced by a catalog version change).\n\nRegression tests pass, except for the rules test due to ongoing rules\nformatting work.\n\n - Thomas\n\nThe detailed change log:\n\nMake NUMERIC a known native type for purposes of type coersion. Not\ntested.\nMake ISO date style (e.g. \"2000-02-16 09:33\") the default.\nImplement \"date/time grand unification\".\n Transform datetime and timespan into timestamp and interval.\n Deprecate datetime and timespan, though translate to new types in\ngram.y.\n Transform all datetime and timespan catalog entries into new types.\n Make \"INTERVAL\" reserved word allowed as a column identifier in\ngram.y.\n Remove dt.h, dt.c files, and retarget datetime.h, datetime.c as\nutility\n routines for all date/time types.\n date.{h,c} now deals with date, time types.\n timestamp.{h,c} now deals with timestamp, interval types.\n nabstime.{h,c} now deals with abstime, reltime, tinterval types.\nAll regression tests pass except for rules.sql (unrelated).\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 16 Feb 2000 17:41:23 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Date/time types: big change" }, { "msg_contents": "On Wed, 16 Feb 2000, Thomas Lockhart wrote:\n\n> I've just committed changes to \"reunify\" the date/time types.\n> \"timestamp\" and \"interval\" are now the two primary date/time types for\n> users. Also, I've changed the default date style to \"ISO\" (not just in\n> time for Y2K, but we'll be ready for \"Y3K\").\n\nI still don't like our Y2038 status. ;)\n\nAnyway, the question I have is what did you do with functions such as\ndatetimein() or comparison functions and such for the old types? Did you\nremove them? What if some, say, user-defined trigger function uses them?\n\nThe reason I'm asking is that I would like to see the floating point types\nconverted to SQL in a similar fashion, but when I rename, say, float4eq to\nrealeq it might break user applications. Or not? 
This is all hypothetical\nof course.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Wed, 16 Feb 2000 19:37:57 +0100 (MET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Date/time types: big change" }, { "msg_contents": "> I've just committed changes to \"reunify\" the date/time types.\n> \"timestamp\" and \"interval\" are now the two primary date/time types for\n> users. Also, I've changed the default date style to \"ISO\" (not just in\n> time for Y2K, but we'll be ready for \"Y3K\").\n> \n\nI think we need a consensus on this. I think this may be a problem for\nsome people. Comments?\n\n\ttest=> create table x ( y date);\n\tCREATE\n\ttest=> insert into x values ('02/01/99');\n\tINSERT 18697 1\n\ttest=> select * from x;\n\t y \n\t------------\n\t 02-01-1999\n\t(1 row)\n\t\n\ttest=> set datestyle to 'iso';\n\tSET VARIABLE\n\ttest=> select * from x;\n\t y \n\t------------\n\t 1999-02-01\n\t(1 row)\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 16 Feb 2000 13:55:14 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Date/time types: big changeu" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Also, I've changed the default date style to \"ISO\" (not just in\n>> time for Y2K, but we'll be ready for \"Y3K\").\n\n> I think we need a consensus on this. I think this may be a problem for\n> some people. Comments?\n\nGood point. Perhaps there should be a way to select the default date\nstyle at configure or initdb time. I don't mind if the \"default default\"\nis ISO, but if I had apps that were dependent on the old default setting\nI'd sure be annoyed by this change...\n\nHas anyone thought much about the fact that beginning next year,\nheuristics to guess which field is the year will become nearly useless?\nQuick, when is '01/02/03'? I suspect a lot of people who got away with\nnot thinking hard about datestyles will suddenly realize that they need\nto set the default datestyle to whatever they are accustomed to using.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 16 Feb 2000 17:09:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Date/time types: big changeu " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> Also, I've changed the default date style to \"ISO\" (not just in\n> >> time for Y2K, but we'll be ready for \"Y3K\").\n> \n> > I think we need a consensus on this. I think this may be a problem for\n> > some people. Comments?\n> \n> Good point. Perhaps there should be a way to select the default date\n> style at configure or initdb time. I don't mind if the \"default default\"\n> is ISO, but if I had apps that were dependent on the old default setting\n> I'd sure be annoyed by this change...\n> \n> Has anyone thought much about the fact that beginning next year,\n> heuristics to guess which field is the year will become nearly useless?\n> Quick, when is '01/02/03'? I suspect a lot of people who got away with\n> not thinking hard about datestyles will suddenly realize that they need\n> to set the default datestyle to whatever they are accustomed to using.\n\nWow, that is an excellent point. 
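To make the ambiguity concrete, a quick SQL sketch (indicative only; which field ends up being taken as the year is exactly the heuristic under discussion):

set datestyle to 'US';
select '01/02/03'::date;	-- parsed month-first
set datestyle to 'European';
select '01/02/03'::date;	-- parsed day-first
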
I was doing it for 2000, and was\nthinking, gee, that's not too hard. I can see it getting much more\nconfusing next year, as you said.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 16 Feb 2000 17:13:42 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Date/time types: big changeu" }, { "msg_contents": "Tom Lane wrote:\n >Bruce Momjian <[email protected]> writes:\n >>> Also, I've changed the default date style to \"ISO\" (not just in\n >>> time for Y2K, but we'll be ready for \"Y3K\").\n >\n >> I think we need a consensus on this. I think this may be a problem for\n >> some people. Comments?\n >\n >Good point. Perhaps there should be a way to select the default date\n >style at configure or initdb time. I don't mind if the \"default default\"\n >is ISO, but if I had apps that were dependent on the old default setting\n >I'd sure be annoyed by this change...\n >\n >Has anyone thought much about the fact that beginning next year,\n >heuristics to guess which field is the year will become nearly useless?\n >Quick, when is '01/02/03'? I suspect a lot of people who got away with\n >not thinking hard about datestyles will suddenly realize that they need\n >to set the default datestyle to whatever they are accustomed to using.\n \nI have code to let the installer choose the default datestyle in Debian's installation script for PostgreSQL. It makes its own best guess on\nthe basis of the timezone and then asks the user with its own guess as\nthe presented default.\n\nSee the attached script; I don't know how generalisable the timezone\nguessing would be.\n\n\n\n\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"But as many as received him, to them gave he power to \n become the sons of God, even to them that believe on \n his name\" John 1:12", "msg_date": "Wed, 16 Feb 2000 23:20:26 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Date/time types: big changeu " }, { "msg_contents": "> > I've just committed changes to \"reunify\" the date/time types.\n> > \"timestamp\" and \"interval\" are now the two primary date/time types for\n> > users. Also, I've changed the default date style to \"ISO\" (not just in\n> > time for Y2K, but we'll be ready for \"Y3K\").\n> I still don't like our Y2038 status. ;)\n\nYeah. Like the ntp rfc doc says: \"we'll expect the solution to appear\nbefore it is needed\" or something to that effect.\n\n> Anyway, the question I have is what did you do with functions such as\n> datetimein() or comparison functions and such for the old types? Did you\n> remove them? What if some, say, user-defined trigger function uses them?\n\nThen they are SOL. I had originally implemented datetime and timespan\nas an experiment to see if a floating point number could behave well\nenough to represent dates (I was worried about rounding and the\n.999999 problem, especially with the wide range of platforms we\nsupport).\n\nSo it turns out that they work. 
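For flavour, the sort of rounding care a float-of-seconds representation needs when splitting out a fractional second (a toy sketch, not the backend's actual conversion code):

#include <stdio.h>
#include <math.h>

int main(void)
{
	double	t = 31622400.0 + 0.1;	/* some epoch offset plus 100 ms, in seconds */
	double	sec = floor(t);
	int	usec = (int) rint((t - sec) * 1e6);	/* round, don't truncate */

	if (usec >= 1000000)	/* guard against the ".999999" case rounding up */
	{
		sec += 1;
		usec -= 1000000;
	}
	printf("%.0f sec + %06d usec\n", sec, usec);
	return 0;
}
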
In the meantime, someone contributed a\ntimestamp type, but did not fully implement it and chose a 4 byte\nrepresentation, which is fundamentally flawed imho.\n\nI've been waiting a year or two to do this upgrade, and the major rev\nbump is the time and place to do it.\n\nOne reason why I didn't carry along both datetime *and* timestamp is\nthe large number of related functions and operators. It would have\nsignificantly increased the size of the catalogs (mostly because\ntimestamp didn't have much to start with).\n\n> The reason I'm asking is that I would like to see the floating point types\n> converted to SQL in a similar fashion, but when I rename, say, float4eq to\n> realeq it might break user applications. Or not? This is all hypothetical\n> of course.\n\nLots of work for not much gain imho. For the date/time stuff, it made\nsense because timestamp needed to be replaced. There isn't the same\nunderlying need for the floating point types afaik.\n\nOn the other hand, 7.0 (or 8.0, but that may be another 4 years ;) is\nthe time to do it. Does anyone else see this as an issue?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Thu, 17 Feb 2000 06:04:45 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Date/time types: big change" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n> But, I'd have no objection to a configure or initdb option; I *would*\n> suggest that the old default (and it is the default mostly because\n> original Postgres95 had no other styles implemented) is a relatively\n> poor choice, and that ISO should be the default choice in the absence\n> of an explicit configure or initdb switch.\n\nAs I said, I have no objection to making ISO the new \"standard default\";\nI just think some people will need a way to change the default in their\ninstallations.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 17 Feb 2000 01:12:14 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Date/time types: big changeu " }, { "msg_contents": "> >> Also, I've changed the default date style to \"ISO\" (not just in\n> >> time for Y2K, but we'll be ready for \"Y3K\").\n> > I think we need a consensus on this. I think this may be a problem for\n> > some people. Comments?\n> Good point. Perhaps there should be a way to select the default date\n> style at configure or initdb time. I don't mind if the \"default default\"\n> is ISO, but if I had apps that were dependent on the old default setting\n> I'd sure be annoyed by this change...\n\nI've been talking about this for quite some time, but there *really*\nis no excuse to not go to the ISO date/time standard. Every other date\nstyle is prone to misinterpretation, and the ISO standard is commonly\nused in other instances where reliable date reporting is needed.\n\nI've waited until a major rev to do this, and the groundwork has been\nthere for a year or two. 
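The difference is easiest to see on a full timestamp (illustrative output
only; exact field widths and zone abbreviations will vary):

    set datestyle to 'Postgres';
    select 'now'::timestamp;   -- something like: Wed Feb 16 19:37:57 2000 PST
    set datestyle to 'ISO';
    select 'now'::timestamp;   -- something like: 2000-02-16 19:37:57-08

The ISO form is fixed-width, unambiguous about field order, and sorts
correctly as plain text.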
There are some good summaries of the issues\non the web.\n\nBut, I'd have no objection to a configure or initdb option; I *would*\nsuggest that the old default (and it is the default mostly because\noriginal Postgres95 had no other styles implemented) is a relatively\npoor choice, and that ISO should be the default choice in the absence\nof an explicit configure or initdb switch.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Thu, 17 Feb 2000 06:14:23 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Date/time types: big changeu" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> The reason I'm asking is that I would like to see the floating point types\n>> converted to SQL in a similar fashion, but when I rename, say, float4eq to\n>> realeq it might break user applications. Or not? This is all hypothetical\n>> of course.\n\n> Lots of work for not much gain imho. For the date/time stuff, it made\n> sense because timestamp needed to be replaced. There isn't the same\n> underlying need for the floating point types afaik.\n\n> On the other hand, 7.0 (or 8.0, but that may be another 4 years ;) is\n> the time to do it. Does anyone else see this as an issue?\n\nI think it's too late in the 7.0 cycle to start thinking about renaming\nthe numeric types. While you implemented the date/time changes at\nalmost the last minute, the changes had been discussed and agreed to\nlong ago, and you knew exactly what you needed to do. I don't think\nthat constitutes a precedent for a hurried revision of the numeric\ntypes...\n\nWe've already postponed 7.0 beta twice. Seems to me it's time to\nstart raising the bar for what we will accept into this revision.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 17 Feb 2000 01:29:40 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Date/time types: big change " }, { "msg_contents": "On Thu, 17 Feb 2000, Tom Lane wrote:\n\n> I think it's too late in the 7.0 cycle to start thinking about renaming\n> the numeric types.\n\nI didn't mean that this should happen now or even soon. It was more of a\npolicy/practice inquiry.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Thu, 17 Feb 2000 13:00:17 +0100 (MET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Date/time types: big change " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On Thu, 17 Feb 2000, Tom Lane wrote:\n>> I think it's too late in the 7.0 cycle to start thinking about renaming\n>> the numeric types.\n\n> I didn't mean that this should happen now or even soon. It was more of a\n> policy/practice inquiry.\n\nOK, fair enough. What I'm thinking at the moment is let's wait and see\nhow painful or painless the transition is for the date/time types.\nThe number of squawks we hear about that should give us a clue whether\nwe want to be in a hurry to rename the numeric types...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 17 Feb 2000 11:08:13 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Date/time types: big change " }, { "msg_contents": "> I've been talking about this for quite some time, but there *really*\n> is no excuse to not go to the ISO date/time standard. 
Every other date\n> style is prone to misinterpretation, and the ISO standard is commonly\n> used in other instances where reliable date reporting is needed.\n> \n> I've waited until a major rev to do this, and the groundwork has been\n> there for a year or two. There are some good summaries of the issues\n> on the web.\n> \n> But, I'd have no objection to a configure or initdb option; I *would*\n> suggest that the old default (and it is the default mostly because\n> original Postgres95 had no other styles implemented) is a relatively\n> poor choice, and that ISO should be the default choice in the absence\n> of an explicit configure or initdb switch.\n\nWell, no one is objecting to this yet, so it may be a good choice.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 17 Feb 2000 11:58:10 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Date/time types: big changeu" }, { "msg_contents": "On Thu, Feb 17, 2000 at 06:14:23AM +0000, Thomas Lockhart wrote:\n> I've been talking about this for quite some time, but there *really*\n> is no excuse to not go to the ISO date/time standard. Every other date\n\nYes, please let's go to this standard. It's an awful lot of work to fix apps\njust because they expect US notation and most people in Germany cannot even\nthink about typing it that way.\n\n> poor choice, and that ISO should be the default choice in the absence\n> of an explicit configure or initdb switch.\n\nCompletely agree.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n", "msg_date": "Thu, 17 Feb 2000 19:41:07 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Date/time types: big changeu" }, { "msg_contents": "On Wed, Feb 16, 2000 at 11:20:26PM +0000, Oliver Elphick wrote:\n> I have code to let the installer choose the default datestyle in Debian's installation script for PostgreSQL. It makes its own best guess on\n> the basis of the timezone and then asks the user with its own guess as\n> the presented default.\n\nYes, Oliver's script works nicely on all Debian machines.\n\nMichael\n\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n", "msg_date": "Thu, 17 Feb 2000 19:42:33 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Date/time types: big changeu" }, { "msg_contents": "> >>> Also, I've changed the default date style to \"ISO\" (not just in\n> >>> time for Y2K, but we'll be ready for \"Y3K\").\n> >... Perhaps there should be a way to select the default date\n> >style at configure or initdb time. I don't mind if the \"default default\"\n> >is ISO, but if I had apps that were dependent on the old default setting\n> >I'd sure be annoyed by this change...\n\nWell, that is the joy of a major release; not all backward\ncompatibility is guaranteed. 
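For an application that really does depend on the old output format, the
per-session escape hatch is already there (a sketch; PGDATESTYLE in the
client's or postmaster's environment does the same thing):

    set datestyle to 'Postgres';
    set datestyle to 'US';
    select '2000-02-21'::date;      -- back to the old 02-21-2000 output
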
This has been a *documented change* for\nat least a year or two; check the chapter on date/time data types for\nmore info.\n\nHowever, istm that we could/should have more \"default settings\"\ntraveling in the pg_database table. We've got the encoding, which if\nset for template1 will be set for every db. We've got the database\nlocation, which can point to an alternate location.\n\nWouldn't it be reasonable to allow a \"default datestyle\", or something\nmore general to help with other defaults? Hmm, could be a text field\nwhich allows something like \"PGDATESTYLE='ISO',LANGUAGE='french',...\"\nso that it is extensible, but maybe that detail is a bad idea because\nit is a bit fragile.\n\nWhat fields would be appropriate for v7.0? The datestyle and timezone\nare two obvious candidates, and if we add them now then we could make\nuse of them later.\n\nLater, we can get things like \n\n ALTER DATABASE SET DEFAULT DATESTYLE='ISO';\n\netc etc.\n\nFor v7.1, I'm hoping to work with Tatsuo and others to get closer to\nthe general character sets and collation sequences allowed by SQL92.\nAt that point, the MULTIBYTE hardcoded differences in the backend\nmight go away and we will need these configurable default values.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Thu, 17 Feb 2000 18:48:11 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Date/time types: big changeu" }, { "msg_contents": "> > >>> Also, I've changed the default date style to \"ISO\" (not just in\n> > >>> time for Y2K, but we'll be ready for \"Y3K\").\n> > >... Perhaps there should be a way to select the default date\n> > >style at configure or initdb time. I don't mind if the \"default default\"\n> > >is ISO, but if I had apps that were dependent on the old default setting\n> > >I'd sure be annoyed by this change...\n> \n> Well, that is the joy of a major release; not all backward\n> compatibility is guaranteed. This has been a *documented change* for\n> at least a year or two; check the chapter on date/time data types for\n> more info.\n\n> \n> However, istm that we could/should have more \"default settings\"\n> traveling in the pg_database table. We've got the encoding, which if\n> set for template1 will be set for every db. We've got the database\n> location, which can point to an alternate location.\n\nBut we have to store this information in the database because it is\nrelated to how the data is stored. Do the date/time fields also have\nthat assumption _in_ that stored data? If so, we need it stored in the\ndatabase, if not, it seems some SET command or psql startup file setting\nis enough. Many people work on the same database from different\nlocations and may need different settings. I would only store database\nsettings that relate to the data, not how the data is displayed. That\nstuff belongs outside the database, I think.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 17 Feb 2000 23:52:39 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Date/time types: big changeu" }, { "msg_contents": "-----------------------------------------------------------------------\n\nOn Wed, 16 Feb 2000, Thomas Lockhart wrote:\n\n> I've just committed changes to \"reunify\" the date/time types.\n> \"timestamp\" and \"interval\" are now the two primary date/time types for\n> users. Also, I've changed the default date style to \"ISO\" (not just in\n> time for Y2K, but we'll be ready for \"Y3K\").\n> \n> Also, I made some changes to have NUMERIC be a \"known\" type for\n> purposes of implicit type coersion, but have not tested to see if the\n> underlying conversion functions are available.\n> \n> initdb required (and enforced by a catalog version change).\n> \n> Regression tests pass, except for the rules test due to ongoing rules\n> formatting work.\n\n\nGreat, you fix my formatting code for timestamp. Thanks Thomas!\n\nBut conversion timestam to 'tm' struct is not Y2038 ready \n(POSIX 'tm' limitation?):\n\ntest=# select to_char('Fri Feb 18 11:57:47 2038 CET'::timestamp, 'HH:MI:SS YYYY');\n to_char\n---------------\n 10:57:47 2038\n(1 row)\n\nOr simple:\n\ntest=# select 'Fri Feb 18 11:57:47 2038 CET'::timestamp;\n ?column?\n--------------------------\n Thu Feb 18 10:57:47 2038\n(1 row)\n\n\nOr am I something leave out?\n\n\t\t\t\t\t\t\tKarel\n\n\n\n", "msg_date": "Fri, 18 Feb 2000 12:27:04 +0100 (CET)", "msg_from": "Karel Zak - Zakkr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Date/time types: big change" }, { "msg_contents": "On Thu, 17 Feb 2000, Thomas Lockhart wrote:\n\n> However, istm that we could/should have more \"default settings\"\n> traveling in the pg_database table. We've got the encoding, which if\n> set for template1 will be set for every db. We've got the database\n> location, which can point to an alternate location.\n\nI don't think this should be a per database setting. Why not use an\nenvironment variable PGDATESTYLE for it. That's easy enough for now.\nBefore we throw all kinds of per database defaults around, I'd like to see\nsome sort of a concept where exactly a \"database\" stands versus \"schema\",\netc. What happens if one day queries across databases are allowed?\n\n> For v7.1, I'm hoping to work with Tatsuo and others to get closer to\n> the general character sets and collation sequences allowed by SQL92.\n\nExcellent.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Fri, 18 Feb 2000 15:29:34 +0100 (MET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Date/time types: big changeu" }, { "msg_contents": "> But conversion timestam to 'tm' struct is not Y2038 ready\n> (POSIX 'tm' limitation?):\n> to_char\n> ---------------\n> 10:57:47 2038\n> (1 row)\n> Or am I something leave out?\n\nNo, that is the expected behavior. In most of the world (certainly in\nthe US), time zones and daylight savings time were both very nebulous\nthings until around the turn of the century. 
I recall reading that in\nthe US building the continental railroads in the 1860's provoked\nthinking about standardizing time zone.\n\nThere are also minor changes in time zone and DST behavior in recent\nhistory; in the US we had a year or two in the 1970's which ran DST\nyear round due to the oil shortage.\n\nSo, since the actual time zone behavior for years past 2038 is\nuncertain, and since the Unix time support routines don't support\nanything past 2038 anyway, I omit time zone calculations after\n2038-01-18 and before 1901-12-14. Everything is carried as equivalent\nto GMT, but no time zone adjustment is carried out.\n\nbtw, there *may* be some edge effects which are, um, unexpected; e.g.\nhaving a time zone adjustment as you enter a date w/o an explicit tz\ninto the database, to which the same adjustment is *not* applied as\nthe date is read back out. Feel free to test it out...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Fri, 18 Feb 2000 14:48:46 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Date/time types: big change" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> I don't think this should be a per database setting. Why not use an\n> environment variable PGDATESTYLE for it.\n\nWe already have that, and I wouldn't have brought up the issue if I\nthought it was sufficient. The case where you really want to be able\nto set a default at the database or installation level is when you have\na ton of client apps running on a bunch of different machines, and you\ncan't or don't want to fix them all at once. A client-side fix doesn't\nget the job done for a dbadmin faced with that kind of situation.\n\nOr were you talking about a server-side env variable? That could work\nI guess, but I thought you were intent on eliminating env-var\ndependencies in initdb and postmaster startup ... for good reasons ...\n\n> Before we throw all kinds of per database defaults around, I'd like to see\n> some sort of a concept where exactly a \"database\" stands versus \"schema\",\n> etc. What happens if one day queries across databases are allowed?\n\nPresumably a client doing that would make sure to request the same\ndatestyle (or whatever) from each database. You're right though\nthat we could use some global thinking about what parameters need\nto be settable from where, and what their scopes need to be.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 Feb 2000 10:35:01 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Date/time types: big changeu " }, { "msg_contents": "> Or were you talking about a server-side env variable?\n\nfwiw, we've already got that one...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Fri, 18 Feb 2000 15:48:29 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Date/time types: big changeu" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> Or were you talking about a server-side env variable?\n\n> fwiw, we've already got that one...\n\nWe do? (... examines code ...) 
By golly, you're right.\n\nOK, I agree with Peter: this is enough to save the day for a dbadmin\nwho really really doesn't want to switch to default ISO datestyle,\nso I withdraw my complaint.\n\nI do, however, suggest that the backend env var needs to be documented\nmore prominently than it is now. One might also ask why its set of\nallowed values is inconsistent with the SET command's (probably\npostgres.c ought to just call a routine in variable.c, rather than\nhaving its own parsing code)?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 Feb 2000 10:52:48 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Date/time types: big changeu " }, { "msg_contents": "> I do, however, suggest that the backend env var needs to be documented\n> more prominently than it is now.\n\nHmm, I thought it was in the Admin Guide, but in fact it shows up only\nin the Data Types chapter and in the release notes. Should be added to\nruntime.sgml just before (?) \"Starting postmaster\".\n\n> One might also ask why its set of\n> allowed values is inconsistent with the SET command's (probably\n> postgres.c ought to just call a routine in variable.c, rather than\n> having its own parsing code)?\n\nI'm vaguely recalling that there was a \"chicken and egg\" problem with\nthe backend firing up... Ah, in fact I think (still from my sometimes\nfaulty memory) that it had to do with whether the Postgres memory\nmanagement stuff (palloc et al) was available at the time postgres.c\nneeded to make the call.\n\nFeel free to review it though and make sweeping or small changes.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Fri, 18 Feb 2000 16:28:37 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Date/time types: big changeu" }, { "msg_contents": "On 2000-02-18, Tom Lane mentioned:\n\n> Or were you talking about a server-side env variable? That could work\n> I guess, but I thought you were intent on eliminating env-var\n> dependencies in initdb and postmaster startup ... for good reasons ...\n\nYes, as you noticed.\n\nI don't mind postmaster startup environment variables that much. The ones\nfor initdb were much more evil. This really seems to be an item for the\nGrand Unified Configuration File, but until that happens it's easier to\nhave a dozen of orthogonal environment variables than having to reorganize\nthis whole thing later on.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n", "msg_date": "Sat, 19 Feb 2000 15:12:44 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Date/time types: big changeu " }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> One might also ask why its set of\n>> allowed values is inconsistent with the SET command's (probably\n>> postgres.c ought to just call a routine in variable.c, rather than\n>> having its own parsing code)?\n\n> I'm vaguely recalling that there was a \"chicken and egg\" problem with\n> the backend firing up... 
Ah, in fact I think (still from my sometimes\n> faulty memory) that it had to do with whether the Postgres memory\n> management stuff (palloc et al) was available at the time postgres.c\n> needed to make the call.\n\nYup, your memory is still working...\n\n> Feel free to review it though and make sweeping or small changes.\n\nOK, I tweaked the code in variable.c to not depend on palloc(), and\nmade the change. In the course of doing so, I noticed what I assume\nis a bug: RESET DateStyle and SET DateStyle = 'DEFAULT' were still\nsetting to Postgres style. Presumably they should reset to ISO style\nin the brave new world, no?\n\nWhat I actually did was to make them reset to whatever the backend's\nstartup setting is. Thus, if a postmaster PGDATESTYLE environment\nvariable exists, it will determine the result of RESET DateStyle as\nwell as the state of a new backend. (A client-side PGDATESTYLE setting\ncannot affect RESET, of course, since it just causes a SET command to\nbe issued.) I think this is appropriate behavior, but it might be open\nto debate.\n\nBTW, here is an interesting corner case for you: what happens when\nthe postmaster is started with PGDATESTYLE=\"DEFAULT\"? You get ISO\nnow, but I almost committed code that would have gone into infinite\nrecursion...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 19 Feb 2000 17:25:35 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Date/time types: big change" }, { "msg_contents": "Does anybody know how to disable \"pager\" of psql? It's really annoying\nwhen I use psql in my emacs's shell buffer...\n--\nTatsuo Ishii\n\n", "msg_date": "Tue, 22 Feb 2000 15:57:30 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "psql and pager" }, { "msg_contents": "Tatsuo Ishii wrote:\n> \n> Does anybody know how to disable \"pager\" of psql? It's really annoying\n> when I use psql in my emacs's shell buffer...\n\nOne way is to unset the PAGER environment variable.\n\n-- \nChris Bitmead\nmailto:[email protected]\n", "msg_date": "Tue, 22 Feb 2000 21:09:55 +1100", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql and pager" }, { "msg_contents": "> > Does anybody know how to disable \"pager\" of psql? It's really annoying\n> > when I use psql in my emacs's shell buffer...\n> \n> One way is to unset the PAGER environment variable.\n\nNew psql seems to try to use more even PAGER is not set.\n--\nTatsuo Ishii\n", "msg_date": "Tue, 22 Feb 2000 19:13:25 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql and pager" }, { "msg_contents": "Tatsuo Ishii wrote:\n >Does anybody know how to disable \"pager\" of psql? 
It's really annoying\n >when I use psql in my emacs's shell buffer...\n \nBefore you start psql, do `export PAGER=cat'\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"The LORD bless thee, and keep thee; The LORD make his\n face shine upon thee, and be gracious unto thee; The \n LORD lift up his countenance upon thee, and give thee \n peace.\" Numbers 6:24-26 \n\n\n", "msg_date": "Tue, 22 Feb 2000 10:32:05 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] psql and pager " }, { "msg_contents": "On Tue, 22 Feb 2000, Tatsuo Ishii wrote:\n\n> Does anybody know how to disable \"pager\" of psql? It's really annoying\n> when I use psql in my emacs's shell buffer...\n\n\\pset pager\n\nPut it in .psqlrc if you like. (Or in .psqlrc-7.0.0 to not affect the old\none.)\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 22 Feb 2000 13:20:07 +0100 (MET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql and pager" }, { "msg_contents": "> > Does anybody know how to disable \"pager\" of psql? It's really annoying\n> > when I use psql in my emacs's shell buffer...\n> \n> \\pset pager\n> \n> Put it in .psqlrc if you like. (Or in .psqlrc-7.0.0 to not affect the old\n> one.)\n\nThansk. I have read the man page of psql. However I misunderstood:\n\n>Toggles the list of a pager to do table output. If the environment\n>variable PAGER is set, the output is piped to the specified program.\n>Otherwise more is used. \n>\n>In any case, psql only uses the pager if it seems appropriate. \n\nI thought psql may use a pager even if \\pset toggles off a pager. I\nneed to learn English more:-)\n--\nTatsuo Ishii\n\n", "msg_date": "Tue, 22 Feb 2000 21:30:07 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql and pager" }, { "msg_contents": "On Tue, 22 Feb 2000, Tatsuo Ishii wrote:\n\n> I thought psql may use a pager even if \\pset toggles off a pager. \n\nNo, only if it's toggled on.\n\nIn the old version the use of the pager was determined by whether or not\nthe environment variable PAGER was set. This behaviour is not very\ncooperative with other programs that might use the same variable.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 22 Feb 2000 13:46:51 +0100 (MET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql and pager" }, { "msg_contents": "\n-----Pďż˝vodnďż˝ zprďż˝va-----\nOd: Tom Lane <[email protected]>\nKomu: Oliver Elphick <[email protected]>\nKopie: [email protected] <[email protected]>;\[email protected] <[email protected]>\nDatum: 22. 
ďż˝nora 2000 18:06\nPďż˝edmďż˝t: Re: [HACKERS] Out of memory problem (forwarded bug report)\n\n\n>\"Oliver Elphick\" <[email protected]> writes:\n>> Can someone advise, please, how to deal with this problem in 6.5.3?\n>\n\n>My guess is that the cause is memory leaks during expression evaluation;\n>but without seeing the complete view definitions and underlying table\n>definitions, it's impossible to know what processing is being invoked\n>by this query...\n>\n> regards, tom lane\n\n\n\n Well, I will append views and underlying table definition:\n\n1) Once again - failure query:\nselect comm_type,name,tot_bytes,tot_packets\nfrom flow_sums_days_send_200002_view\nwhere day='2000-02-21' and name not like '@%'\nunion all\nselect comm_type,name,tot_bytes,tot_packets\nfrom flow_sums_days_receive_200002_view\nwhere day='2000-02-21' and name not like '@%'\n\n2) views definition:\ncreate view flow_sums_days_send_200002_view as\nselect\n 'send'::varchar as comm_type, date_trunc('day',start) as day,\n src_name as name, sum(bytes) as tot_bytes, sum(packets) as tot_packets\nfrom flow_sums_200002\ngroup by day, src_name\n\ncreate view flow_sums_days_receive_200002_view as\nselect\n 'receive'::varchar as comm_type, date_trunc('day',start) as day,\n dst_name as name, sum(bytes) as tot_bytes, sum(packets) as tot_packets\nfrom flow_sums_200002\ngroup by day, dst_name\n\n\nI wanted create only one usefull view:\n\ncreate view flow_sums_days_200002_view as\nselect\n 'send'::varchar as comm_type, date_trunc('day',start) as day,\n src_name as name, sum(bytes) as tot_bytes, sum(packets) as tot_packets\nfrom flow_sums_200002\ngroup by day, src_name\nUNION ALL\nselect\n 'receive'::varchar as comm_type, date_trunc('day',start) as day,\n dst_name as name, sum(bytes) as tot_bytes, sum(packets) as tot_packets\nfrom flow_sums_200002\ngroup by day, dst_name\n\n...but Postgres cann't use clause UNION ALL at view definition. So I created\ntwo views mentioned above and I wanted use this ones with UNION ALL clause\nonly.\n\n3) underlaying table definition:\ncreate table flow_sums_200002 (\n primary_collector varchar(50) not null,\n start datetime not null,\n end_period datetime not null,\n dead_time_rel float4 not null,\n src_name varchar(50) not null,\n dst_name varchar(50) not null,\n bytes int8 not null,\n packets int4 not null\n)\n\n Today this table has about 3 000 000 rows and the select command\nmentioned above returns 190 + 255 rows.\n\n\n Now I don't use clause \"UNION ALL\" and the program executes two queryes\nand then adds both result to new result. I reduced time increment of number\nrows to flow_sums_200002 table (three times less). This table contains data\nof February 2000 and the program will create table flow_sums_200003 with\nrelevant views next month.\n Well, now this solution solve my problem but always depends on number of\nrows - I only moved limit of rows count.\n\n\n Thank You, V. Benes\n\nP.S.: I append part of top on my system while the query is running:\n\nCPU states: 98.6% user, 1.3% system, 0.0% nice, 0.0% idle\nMem: 127256K av, 124316K used, 2940K free, 29812K shrd, 2620K buff\nSwap: 128516K av, 51036K used, 77480K free 7560K cached\n\n PID USER PRI NI SIZE RSS SHARE STAT LIB %CPU %MEM TIME COMMAND\n2942 postgres 20 0 141M 99M 17348 R 0 99.0 80.4 1:22 postmaster\n\n=> postmaster later took 80 - 95% of memory, free memory decressed to 2 MB,\nCPU was overloaded (0% idle and 99% by user process of postmaster). 
Have You\never seen something similar :-) ?\n\n\n", "msg_date": "Wed, 23 Feb 2000 08:26:11 +0100", "msg_from": "\"=?iso-8859-2?B?VmxhZGlt7XIgQmVuZbk=?=\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Out of memory problem (forwarded bug report) " }, { "msg_contents": "Vladimir,\n Thanks for the details. I think you are undoubtedly running into\nexpression evaluation memory leaks. Basically, any expression that\nyields a non-pass-by-value data type consumes memory that is not\nreclaimed until end of statement --- so when you process a few million\nrows, that memory starts to add up. (Yes, I realize this is a horrible\nmisfeature. It's on our TO-DO list to fix it, but it probably won't\nhappen until 7.1 or 7.2.) In the meantime the best I can offer you\nis workarounds.\n\n I think the major problems here are coming from the\n\"date_trunc('day',start)\" calculation (because its datetime result is\npass-by-reference) and to a lesser extent from the sum(bytes)\ncalculation (because int8 is pass-by-reference). You could easily\nreplace \"date_trunc('day',start)\" with \"date(start)\"; since date is\na pass-by-value type, that won't leak memory, and it should give\nequivalent results. The int8 sum is not quite so easy to fix.\nI assume you can't get away with switching to int4 --- probably\nyour sum would overflow an int4? It may be that just fixing the\ninefficient date_trunc calc will reduce your memory requirements\nenough to get by. If not, the only good news I have is that release\n7.0 does fix the memory-leak problem for internal calculations of\naggregate functions like sum(). You can get the first beta release\nfor 7.0 now.\n\n\t\t\tregards, tom lane\n\n\n\"=?iso-8859-2?B?VmxhZGlt7XIgQmVuZbk=?=\" <[email protected]> writes:\n> -----P�vodn� zpr�va-----\n> Od: Tom Lane <[email protected]>\n> Komu: Oliver Elphick <[email protected]>\n> Kopie: [email protected] <[email protected]>;\n> [email protected] <[email protected]>\n> Datum: 22. 
�nora 2000 18:06\n> P�edm�t: Re: [HACKERS] Out of memory problem (forwarded bug report)\n\n\n>> \"Oliver Elphick\" <[email protected]> writes:\n>>> Can someone advise, please, how to deal with this problem in 6.5.3?\n>> \n\n>> My guess is that the cause is memory leaks during expression evaluation;\n>> but without seeing the complete view definitions and underlying table\n>> definitions, it's impossible to know what processing is being invoked\n>> by this query...\n>> \n>> regards, tom lane\n\n\n\n> Well, I will append views and underlying table definition:\n\n> 1) Once again - failure query:\n> select comm_type,name,tot_bytes,tot_packets\n> from flow_sums_days_send_200002_view\n> where day='2000-02-21' and name not like '@%'\n> union all\n> select comm_type,name,tot_bytes,tot_packets\n> from flow_sums_days_receive_200002_view\n> where day='2000-02-21' and name not like '@%'\n\n> 2) views definition:\n> create view flow_sums_days_send_200002_view as\n> select\n> 'send'::varchar as comm_type, date_trunc('day',start) as day,\n> src_name as name, sum(bytes) as tot_bytes, sum(packets) as tot_packets\n> from flow_sums_200002\n> group by day, src_name\n\n> create view flow_sums_days_receive_200002_view as\n> select\n> 'receive'::varchar as comm_type, date_trunc('day',start) as day,\n> dst_name as name, sum(bytes) as tot_bytes, sum(packets) as tot_packets\n> from flow_sums_200002\n> group by day, dst_name\n\n\n> I wanted create only one usefull view:\n\n> create view flow_sums_days_200002_view as\n> select\n> 'send'::varchar as comm_type, date_trunc('day',start) as day,\n> src_name as name, sum(bytes) as tot_bytes, sum(packets) as tot_packets\n> from flow_sums_200002\n> group by day, src_name\n> UNION ALL\n> select\n> 'receive'::varchar as comm_type, date_trunc('day',start) as day,\n> dst_name as name, sum(bytes) as tot_bytes, sum(packets) as tot_packets\n> from flow_sums_200002\n> group by day, dst_name\n\n> ...but Postgres cann't use clause UNION ALL at view definition. So I created\n> two views mentioned above and I wanted use this ones with UNION ALL clause\n> only.\n\n> 3) underlaying table definition:\n> create table flow_sums_200002 (\n> primary_collector varchar(50) not null,\n> start datetime not null,\n> end_period datetime not null,\n> dead_time_rel float4 not null,\n> src_name varchar(50) not null,\n> dst_name varchar(50) not null,\n> bytes int8 not null,\n> packets int4 not null\n> )\n\n> Today this table has about 3 000 000 rows and the select command\n> mentioned above returns 190 + 255 rows.\n\n\n> Now I don't use clause \"UNION ALL\" and the program executes two queryes\n> and then adds both result to new result. I reduced time increment of number\n> rows to flow_sums_200002 table (three times less). This table contains data\n> of February 2000 and the program will create table flow_sums_200003 with\n> relevant views next month.\n> Well, now this solution solve my problem but always depends on number of\n> rows - I only moved limit of rows count.\n\n\n> Thank You, V. 
Benes\n\n> P.S.: I append part of top on my system while the query is running:\n\n> CPU states: 98.6% user, 1.3% system, 0.0% nice, 0.0% idle\n> Mem: 127256K av, 124316K used, 2940K free, 29812K shrd, 2620K buff\n> Swap: 128516K av, 51036K used, 77480K free 7560K cached\n\n> PID USER PRI NI SIZE RSS SHARE STAT LIB %CPU %MEM TIME COMMAND\n> 2942 postgres 20 0 141M 99M 17348 R 0 99.0 80.4 1:22 postmaster\n\n> => postmaster later took 80 - 95% of memory, free memory decressed to 2 MB,\n> CPU was overloaded (0% idle and 99% by user process of postmaster). Have You\n> ever seen something similar :-) ?\n\n", "msg_date": "Thu, 24 Feb 2000 00:45:56 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Out of memory problem (forwarded bug report) " }, { "msg_contents": "Tom Lane wrote:\n...\n >your sum would overflow an int4? It may be that just fixing the\n >inefficient date_trunc calc will reduce your memory requirements\n >enough to get by. If not, the only good news I have is that release\n >7.0 does fix the memory-leak problem for internal calculations of\n >aggregate functions like sum(). You can get the first beta release\n >for 7.0 now.\n\nI'm putting together a Debian release of the beta at the moment. \n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"Thy word is a lamp unto my feet, and a light unto my \n path.\" Psalms 119:105 \n\n\n", "msg_date": "Thu, 24 Feb 2000 11:22:38 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Out of memory problem (forwarded bug report) " }, { "msg_contents": "Don't know if you know this already, but since april 23, you've been\non SecurityFocus.com for the cleartext passwords in pg_shadow:\n\n http://www.securityfocus.com/bid/1139\n\nI know it has been discussed at least a couple of times before, but in\nmy opinion this is an issue that needs a solution.\n\nThe problem with cleartext passwords is not just that root, postgres\nsuper user or anyone who has legally or illegally got access to the\nsystem can see the passwords a user uses to log in to PostgreSQL. The\nproblem lies in the well known fact that we tend to use the same\npassword several places, if not everywhere. With all the passwords\nneeded these days, that is how it _has_ to be.\n\nThe first PostgreSQL based site that gets cracked, will make headlines\nstating that passwords have got into the wrong hands. Do we (or you)\nwant that?\n\n\nSverre.\n\n-- \n<URL:mailto:[email protected]>\n<URL:http://home.sol.no/~sverrehu/> Echelon bait: semtex, bin Laden,\n plutonium, North Korea, nuclear bomb\n", "msg_date": "Sat, 6 May 2000 00:40:24 +0200", "msg_from": "\"Sverre H. Huseby\" <[email protected]>", "msg_from_op": false, "msg_subject": "You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "On Sat, 6 May 2000, Sverre H. 
Huseby wrote:\n\n> Don't know if you know this already, but since april 23, you've been\n> on SecurityFocus.com for the cleartext passwords in pg_shadow:\n> \n> http://www.securityfocus.com/bid/1139\n> \n> I know it has been discussed at least a couple of times before, but in\n> my opinion this is an issue that needs a solution.\n> \n> The problem with cleartext passwords is not just that root, postgres\n> super user or anyone who has legally or illegally got access to the\n> system can see the passwords a user uses to log in to PostgreSQL. The\n> problem lies in the well known fact that we tend to use the same\n> password several places, if not everywhere. With all the passwords\n> needed these days, that is how it _has_ to be.\n> \n> The first PostgreSQL based site that gets cracked, will make headlines\n> stating that passwords have got into the wrong hands. Do we (or you)\n> want that?\n\nYou've lost me here ... the only person(s) that can get at those passwords\nare those that have compromised the system already. Even if the passwords\n*weren't* in cleartext, there is nothing that stops me from downloading\nthe data/* directory down to my computer and running pg_upgrade to \"make\nit my own\", removing the passwords ... \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 5 May 2000 20:25:10 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "On Fri, 5 May 2000, The Hermit Hacker wrote:\n> You've lost me here ... the only person(s) that can get at those passwords\n> are those that have compromised the system already. Even if the passwords\n> *weren't* in cleartext, there is nothing that stops me from downloading\n> the data/* directory down to my computer and running pg_upgrade to \"make\n> it my own\", removing the passwords ... \n\nYou don't get it. Its one of most basic things about security of the\npassword databases: Cleartext must not be available for anyone, not even\nthe administrators. The damage one can do with list of 10000 passwords\nfar exceeds damage you can do to the database which contain these\npasswords. Why? Because people tend to use same password everywhere. \n\n(Yes, I know that they shouldn't, however, you must take good care of\npasswords users entrusted to you). \n\nThere is no excuse for not storing it as a hash or at least in crypt(3)\nway.\n\n-alex\n\n", "msg_date": "Fri, 5 May 2000 19:39:15 -0400 (EDT)", "msg_from": "Alex Pilosov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "On Fri, 5 May 2000, The Hermit Hacker wrote:\n\n> On Sat, 6 May 2000, Sverre H. Huseby wrote:\n> \n> > Don't know if you know this already, but since april 23, you've been\n> > on SecurityFocus.com for the cleartext passwords in pg_shadow:\n> > \n> > http://www.securityfocus.com/bid/1139\n> > \n> > I know it has been discussed at least a couple of times before, but in\n> > my opinion this is an issue that needs a solution.\n> > \n> > The problem with cleartext passwords is not just that root, postgres\n> > super user or anyone who has legally or illegally got access to the\n> > system can see the passwords a user uses to log in to PostgreSQL. 
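(To make the exposure concrete -- illustrative names and values only:

    select usename, passwd from pg_shadow;
    --  usename | passwd
    -- ---------+--------
    --  sverre  | secret

any database superuser, or anyone who can read the files under data/, sees
the very string the user probably types at half a dozen other sites.)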
The\n> > problem lies in the well known fact that we tend to use the same\n> > password several places, if not everywhere. With all the passwords\n> > needed these days, that is how it _has_ to be.\n> > \n> > The first PostgreSQL based site that gets cracked, will make headlines\n> > stating that passwords have got into the wrong hands. Do we (or you)\n> > want that?\n> \n> You've lost me here ... the only person(s) that can get at those passwords\n> are those that have compromised the system already. Even if the passwords\n> *weren't* in cleartext, there is nothing that stops me from downloading\n> the data/* directory down to my computer and running pg_upgrade to \"make\n> it my own\", removing the passwords ... \n\nSame defense I used when I responded to the BugTRAQ post. Even tho I \nunderstand the possible ramifications of cleartext passwords, I still\nstand by my previous comments, an admin needs to properly maintain and\nprotect the systems they're entrusted to. However after reading about\nthe www.apache.org compromise details earlier today I'm of the opinion\nnow that we should look into encrypting the passwords. I'm also of the\nopinion that I should volunteer to at least help in the fixing of it.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Fri, 5 May 2000 21:12:56 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "Vince Vielhaber <[email protected]> writes:\n> ... I'm of the opinion now that we should look into encrypting the\n> passwords.\n\nI think it'd be a reasonable thing to work on. I don't particularly\nintend to be stampeded into doing something about it by \"public\nrelations\" pressure from people who would rather make inflated claims\nthan get their hands dirty by contributing a solution ;-). (And, yes,\nthese claims are inflated. If you don't trust your dbadmin, the\nsecurity of your password is the least of your worries --- the data\nin your database may well be far more critical info than anything the\ndbadmin could find in your personal account. The general opinion on\nthe pghackers list has been that password-based security is the least\ndesirable of the authentication options we offer, anyway. A security-\nconscious site wouldn't even be using database passwords.)\n\nThe main potential hazard I see is portability. Is crypt(3) available\non *all* the platforms Postgres runs on? Does it give the same answers\non all those platforms? If not, what shall we use instead? Don't\nforget that the frontend libraries have to have it too (or are you going\nto keep transmitting passwords in cleartext?). So that means you'd\nbetter have it for Win, Mac, BeOS, etc, not just for dozens of Unix\nvariants --- and they *must* all give the same results.\n\nThere are also lesser worries about patents and US export regulations.\nIf we include an encryption package in the distribution we could\neliminate the portability problem, only to find ourselves facing\nheadaches in those departments :-(\n\nSo, by all means let's look for a solution ... 
but I suspect that\nthe cost/benefit ratio of fixing this is a lot higher than is being\nclaimed in some quarters.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 06 May 2000 01:47:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "I wrote:\n> The main potential hazard I see is portability. Is crypt(3) available\n> on *all* the platforms Postgres runs on?\n\nWaitasec, what am I saying? We already *do* have crypt password\nsupport, at least on those platforms where crypt(3) is available.\n\nAs near as I can tell, the crypt option transmits an encrypted password\nacross the wire (good), but the comparison at the server end is done by\ntaking the cleartext password stored in pg_shadow, crypt()ing it on\nthe fly, and comparing that to what was sent by the client.\n\nThis does have the advantage that the same pg_shadow entry will support\nboth cleartext-password and crypted-password connections, but we could\nget that another way. Assuming that the server has crypt(), the\npassword could be stored always encrypted instead of always not.\nCleartext-password connections would be handled just by crypting the\nreceived password before comparing. (Before you ask, no I don't think\nwe should remove the option of cleartext-password connections. What of\nclients running on platforms with neither crypt() nor anything better\nlike Kerberos? Should they be forced to drop down to no security at\nall? I think not.)\n\nThis'd take some rejiggering in (at least) CREATE USER and ALTER USER,\nbut it seems doable. I withdraw the complaint about portability...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 06 May 2000 03:09:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords. " }, { "msg_contents": "[Tom Lane]\n\n| If you don't trust your dbadmin, the security of your password is\n| the least of your worries --- the data in your database may well\n| be far more critical info than anything the dbadmin could find in\n| your personal account.\n\nIt may, and then again, it may not. There are lots of databases out\nthere that do not contain secret or critical data. All databases I\nhave made fall into this category. But the password I use on my\nPostgreSQL account is (or used to be, until I discovered the cleartext\npasswords) the same password I use most other places. I don't care if\nanyone reads the data, as long as they don't start testing my password\non all other sites they may guess I have access to. I have my\nPostgreSQL database on an ISP on the other side of the globe. Why\nshould I trust those people more than, say, my neighbour?\n\n| The main potential hazard I see is portability. Is crypt(3) available\n| on *all* the platforms Postgres runs on? Does it give the same answers\n| on all those platforms? If not, what shall we use instead?\n\nI implemented MD5 in Java a couple of years ago. I'm sure me or\nsomeone else will be able to convert it to C. I'll make the license\nanything you want it to be if you care to use it.\n\n| There are also lesser worries about patents and US export regulations.\n| If we include an encryption package in the distribution we could\n| eliminate the portability problem, only to find ourselves facing\n| headaches in those departments :-(\n\nAFAIK, MD5 is not restricted, as it can't be used for\nencryption/decryption. 
It is a one way hashing function only. Please\ncorrect me if I am wrong, I never understood those stupid export\nregulations anyway.\n\n\nSverre - who really do not want _anyone_ to know his passwords.\n\n-- \n<URL:mailto:[email protected]>\n<URL:http://home.sol.no/~sverrehu/> Echelon bait: semtex, bin Laden,\n plutonium, North Korea, nuclear bomb\n", "msg_date": "Sat, 6 May 2000 09:09:33 +0200", "msg_from": "\"Sverre H. Huseby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "> I wrote:\n> > The main potential hazard I see is portability. Is crypt(3) available\n> > on *all* the platforms Postgres runs on?\n> \n> Waitasec, what am I saying? We already *do* have crypt password\n> support, at least on those platforms where crypt(3) is available.\n> \n> As near as I can tell, the crypt option transmits an encrypted password\n> across the wire (good), but the comparison at the server end is done by\n> taking the cleartext password stored in pg_shadow, crypt()ing it on\n> the fly, and comparing that to what was sent by the client.\n> \n> This does have the advantage that the same pg_shadow entry will support\n> both cleartext-password and crypted-password connections, but we could\n> get that another way. Assuming that the server has crypt(), the\n> password could be stored always encrypted instead of always not.\n> Cleartext-password connections would be handled just by crypting the\n> received password before comparing. (Before you ask, no I don't think\n> we should remove the option of cleartext-password connections. What of\n> clients running on platforms with neither crypt() nor anything better\n> like Kerberos? Should they be forced to drop down to no security at\n> all? I think not.)\n> \n> This'd take some rejiggering in (at least) CREATE USER and ALTER USER,\n> but it seems doable. I withdraw the complaint about portability...\n\nYes, agreed. Doing it in the backend only is the way to go. We already\nhave wire crypting.\n\nI think the only problem is moving dumps from on machine to another. \nThe crypt version may not exist or be different on different machines.\n\nHowever, I now remember there was a bigger issue. I think the actual\npassword has to be crypted based on the salt used supplied to the\nclient. We can't do that based on the crypted version because we don't\nknow the client can generate that version.\n\nNow, at the time, we were looking at Unix-style crypting of the\npassword, which is one-way. This will not work. We need something that\nwe can uncrypt in the backend before applying the client-supplied salt\nto see if the passwords match.\n\nThe goal here was to make wire sniffing unproductive, and because the\nserver supplied the salt to be used by the client, you can't just\nre-use a sniffed password you saw on the wire.\n\nAt least this is my recollection of the problem.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 6 May 2000 10:25:38 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "On Sat, 6 May 2000, Bruce Momjian wrote:\n\n> > This'd take some rejiggering in (at least) CREATE USER and ALTER USER,\n> > but it seems doable. 
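[To visualize the user-visible end of that rejiggering -- illustrative
values only, not what happens today:

    create user webuser with password 'secret';
    select usename, passwd from pg_shadow where usename = 'webuser';
    --  usename | passwd
    -- ---------+---------------
    --  webuser | xyEkorzUybzqc   -- crypt('secret', 'xy'), not 'secret'

and an incoming cleartext password would simply be crypt()ed with the salt
of the stored value before the comparison.]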
I withdraw the complaint about portability...\n> \n> Yes, agreed. Doing it in the backend only is the way to go. We already\n> have wire crypting.\n> \n> I think the only problem is moving dumps from on machine to another. \n> The crypt version may not exist or be different on different machines.\n> \n> However, I now remember there was a bigger issue. I think the actual\n> password has to be crypted based on the salt used supplied to the\n> client. We can't do that based on the crypted version because we don't\n> know the client can generate that version.\n> \n> Now, at the time, we were looking at Unix-style crypting of the\n> password, which is one-way. This will not work. We need something that\n> we can uncrypt in the backend before applying the client-supplied salt\n> to see if the passwords match.\n> \n> The goal here was to make wire sniffing unproductive, and because the\n> server supplied the salt to be used by the client, you can't just\n> re-use a sniffed password you saw on the wire.\n> \n> At least this is my recollection of the problem.\n> \n> \n\nWe can do it with MD5. Sverre has offered up a java version of it\nthat he wrote, I can convert it to C and make sure it at least runs\non FreeBSD, IRIX, DOS/Windows, and HPUX 8-10. If it runs in unix then\nit should also run in OS/2. If we roll our own we should be safe. I\ncan even include a simple test to make sure it works for all platforms\nwe support.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Sat, 6 May 2000 10:53:25 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "> > The goal here was to make wire sniffing unproductive, and because the\n> > server supplied the salt to be used by the client, you can't just\n> > re-use a sniffed password you saw on the wire.\n> > \n> > At least this is my recollection of the problem.\n> > \n> > \n> \n> We can do it with MD5. Sverre has offered up a java version of it\n> that he wrote, I can convert it to C and make sure it at least runs\n> on FreeBSD, IRIX, DOS/Windows, and HPUX 8-10. If it runs in unix then\n> it should also run in OS/2. If we roll our own we should be safe. I\n> can even include a simple test to make sure it works for all platforms\n> we support.\n\nYes, I seem to remember that was the issue. If we only did crypting on\nthe server, and allowed passwords to come cleartext from clients, then\nwe only needed crypting on the server. 
If we crypt in a one-way fashion\non the client before coming to the server using a random salt, we have\nto do the other part of the crypting on the client too.\n\nIn other words, it is the one-way nature of the password crypt we used\non the client that caused us to need the _exact_ same input string to\ngo into that crypt on the client and server, so we would need the same\ncrypt process in both places.\n\nNow, let me ask another, better question:\n\nRight now the password receives a random salt from the server, it uses\nthat salt to crypt the password, then send that back for comparison with\nthe clear-text password we store in the system.\n\nWhat if we:\n\tstore the password in pg_shadow like a unix-style password with salt\n\tpass the random salt and the salt from pg_shadow to the client\n\tclient crypts the password twice through the routine:\n\t\tonce using the pg_shadow salt\n\t\tanother time using the random salt\n\nand passes that back to the server. The server can use the pg_shadow\ncopy of the password, use the random salt make a new version, and\ncompare the result.\n\nThis has the huge advantage of not requiring any new crypting methods on\nthe client. It only requires the crypt to happen twice using two\ndifferent salts.\n\nSounds like a winner. Comments?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 6 May 2000 11:17:52 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "On Sat, 6 May 2000, Bruce Momjian wrote:\n\n> > We can do it with MD5. Sverre has offered up a java version of it\n> > that he wrote, I can convert it to C and make sure it at least runs\n> > on FreeBSD, IRIX, DOS/Windows, and HPUX 8-10. If it runs in unix then\n> > it should also run in OS/2. If we roll our own we should be safe. I\n> > can even include a simple test to make sure it works for all platforms\n> > we support.\n> \n> Yes, I seem to remember that was the issue. If we only did crypting on\n> the server, and allowed passwords to come cleartext from clients, then\n> we only needed crypting on the server. If we crypt in a one-way fashion\n> on the client before coming to the server using a random salt, we have\n> to do the other part of the crypting on the client too.\n> \n> In other words, it is the one-way nature of the password crypt we used\n> on the client that caused us to need the _exact_ same input string to\n> go into that crypt on the client and server, so we would need the same\n> crypt process in both places.\n> \n> Now, let me ask another, better question:\n> \n> Right now the password receives a random salt from the server, it uses\n> that salt to crypt the password, then send that back for comparison with\n> the clear-text password we store in the system.\n> \n> What if we:\n> \tstore the password in pg_shadow like a unix-style password with salt\n> \tpass the random salt and the salt from pg_shadow to the client\n> \tclient crypts the password twice through the routine:\n> \t\tonce using the pg_shadow salt\n> \t\tanother time using the random salt\n> \n> and passes that back to the server. 
The server can use the pg_shadow\n> copy of the password, use the random salt make a new version, and\n> compare the result.\n> \n> This has the huge advantage of not requiring any new crypting methods on\n> the client. It only requires the crypt to happen twice using two\n> different salts.\n> \n> Sounds like a winner. Comments?\n\nOverlycomplicated?\n\nWhat was your objection to MD5 again?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Sat, 6 May 2000 11:38:03 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "> > This has the huge advantage of not requiring any new crypting methods on\n> > the client. It only requires the crypt to happen twice using two\n> > different salts.\n> > \n> > Sounds like a winner. Comments?\n> \n> Overlycomplicated?\n> \n> What was your objection to MD5 again?\n\nNot really. We are using our same unix crypt code, which works on all\nplatforms. The change to each interface is minimal. Not much testing\nwill be required.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 6 May 2000 11:48:41 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "> > Sounds like a winner. Comments?\n> \n> Overlycomplicated?\n> \n> What was your objection to MD5 again?\n\nAlso, MD5 is not ideal for passwords. Seems the standard unix-style\npassword crypting is the standard, so it should be used to crypt our own\npasswords in pg_shadow. I am sure someone would find some problem with\nus using md5 for password storage.\n\nWe already use the unix-style password crypt to send passwords over the\nwire. Why not use it for storage too?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 6 May 2000 11:54:03 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "On Sat, 6 May 2000, Bruce Momjian wrote:\n\n> > > Sounds like a winner. Comments?\n> > \n> > Overlycomplicated?\n> > \n> > What was your objection to MD5 again?\n> \n> Also, MD5 is not ideal for passwords. Seems the standard unix-style\n> password crypting is the standard, so it should be used to crypt our own\n> passwords in pg_shadow. I am sure someone would find some problem with\n> us using md5 for password storage.\n\nFreeBSD uses MD5 by default since at least ver 2.2, possibly earlier.\n \n> We already use the unix-style password crypt to send passwords over the\n> wire. Why not use it for storage too?\n\nCan ALL clients we support use it over the wire? 
\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Sat, 6 May 2000 12:09:13 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "> On Sat, 6 May 2000, Bruce Momjian wrote:\n> \n> > > > Sounds like a winner. Comments?\n> > > \n> > > Overlycomplicated?\n> > > \n> > > What was your objection to MD5 again?\n> > \n> > Also, MD5 is not ideal for passwords. Seems the standard unix-style\n> > password crypting is the standard, so it should be used to crypt our own\n> > passwords in pg_shadow. I am sure someone would find some problem with\n> > us using md5 for password storage.\n> \n> FreeBSD uses MD5 by default since at least ver 2.2, possibly earlier.\n\nOh, I didn't know that. Interesting.\n\n> \n> > We already use the unix-style password crypt to send passwords over the\n> > wire. Why not use it for storage too?\n> \n> Can ALL clients we support use it over the wire? \n\nThat is an excellent question. Any client that can use passwords has to\ndo this, so yes, I think they all do. I can say for sure Java has it,\nand that is usually the hardest.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 6 May 2000 12:10:54 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "> > Also, MD5 is not ideal for passwords. Seems the standard unix-style\n> > password crypting is the standard, so it should be used to crypt our own\n> > passwords in pg_shadow. I am sure someone would find some problem with\n> > us using md5 for password storage.\n> \n> FreeBSD uses MD5 by default since at least ver 2.2, possibly earlier.\n> \n> > We already use the unix-style password crypt to send passwords over the\n> > wire. Why not use it for storage too?\n> \n> Can ALL clients we support use it over the wire? \n\nYes, I think so. Java has its own, and the others use libpq do to it. \nThe beauty of my suggesting is that all we have to do is pass the\npg_shadow salt along with the random salt, and call the crypt code\ntwice, first with the pg_shadow salt, then with the random salt.\n\nThe server pass the pg_shadow version through the random salt crypt, and\ncompares.\n\nNow, I we want to move all the stuff to use MD5 rather than the standard\nunix password crypt, that is another option, though I am not sure what\nvalue it would have.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 6 May 2000 12:28:53 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." 
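
A minimal C sketch (not actual libpq or backend code) of the two-salt crypt() exchange Bruce describes in the message above: the client crypts the cleartext password first with the salt stored in pg_shadow and then with the random per-connection salt, while the server repeats only the second step on the value it already has stored. The names client_response() and server_check() are invented for illustration, and error handling is reduced to a NULL check:

    #define _XOPEN_SOURCE 600          /* for crypt() in <unistd.h> on many systems */
    #include <string.h>
    #include <unistd.h>                /* some platforms declare crypt() in <crypt.h> instead */

    /* client side: the server has sent both the pg_shadow salt and a random salt */
    static const char *
    client_response(const char *cleartext,
                    const char *shadow_salt,   /* salt stored with the password */
                    const char *random_salt)   /* per-connection challenge */
    {
        char        first[64];
        const char *tmp = crypt(cleartext, shadow_salt);

        if (tmp == NULL)
            return NULL;
        strncpy(first, tmp, sizeof(first) - 1); /* copy: crypt() reuses a static buffer */
        first[sizeof(first) - 1] = '\0';
        /* note: DES-style crypt() looks only at the first 8 bytes of its key argument */
        return crypt(first, random_salt);       /* this is what goes over the wire */
    }

    /* server side: pg_shadow already holds crypt(cleartext, shadow_salt) */
    static int
    server_check(const char *pg_shadow_value,
                 const char *random_salt,
                 const char *wire_response)
    {
        const char *expected = crypt(pg_shadow_value, random_salt);

        return expected != NULL && wire_response != NULL &&
               strcmp(expected, wire_response) == 0;
    }

The only purpose of the second, random salt is that a sniffed wire value cannot be replayed against a different challenge.
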
}, { "msg_contents": "[Bruce Momjian]\n\n| \tstore the password in pg_shadow like a unix-style password with salt\n| \tpass the random salt and the salt from pg_shadow to the client\n| \tclient crypts the password twice through the routine:\n| \t\tonce using the pg_shadow salt\n| \t\tanother time using the random salt\n\nThat's close to what I thought of a couple of days ago too, except I\nwould have used MD5, since I already have that implemented. :) (It\nseems you already have crypt, so you wouldn't need MD5.)\n\nDoes anyone here really _know_ (and I mean KNOW)\nsecurity/cryptography? If so, could you please comment on this\nscheme? And while you're at it, whats better of MD5 and Unix crypt\n(triple DES ++, isn't it?) from a security perspective?\n\n\nSverre.\n\n-- \n<URL:mailto:[email protected]>\n<URL:http://home.sol.no/~sverrehu/> Echelon bait: semtex, bin Laden,\n plutonium, North Korea, nuclear bomb\n", "msg_date": "Sat, 6 May 2000 18:45:26 +0200", "msg_from": "\"Sverre H. Huseby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "On Sat, 6 May 2000, Bruce Momjian wrote:\n\n> > > Also, MD5 is not ideal for passwords. Seems the standard unix-style\n> > > password crypting is the standard, so it should be used to crypt our own\n> > > passwords in pg_shadow. I am sure someone would find some problem with\n> > > us using md5 for password storage.\n> > \n> > FreeBSD uses MD5 by default since at least ver 2.2, possibly earlier.\n> > \n> > > We already use the unix-style password crypt to send passwords over the\n> > > wire. Why not use it for storage too?\n> > \n> > Can ALL clients we support use it over the wire? \n> \n> Yes, I think so. Java has its own, and the others use libpq do to it. \n> The beauty of my suggesting is that all we have to do is pass the\n> pg_shadow salt along with the random salt, and call the crypt code\n> twice, first with the pg_shadow salt, then with the random salt.\n> \n> The server pass the pg_shadow version through the random salt crypt, and\n> compares.\n> \n> Now, I we want to move all the stuff to use MD5 rather than the standard\n> unix password crypt, that is another option, though I am not sure what\n> value it would have.\n> \n> \n\nHow about ODBC? This is from the ODBC driver source connection.c:\n \n self->errormsg = \"Password crypt authentication not supported\";\n\nIs that because of the platform it's running on or what it's talking\nto?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Sat, 6 May 2000 12:56:57 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "> > Now, I we want to move all the stuff to use MD5 rather than the standard\n> > unix password crypt, that is another option, though I am not sure what\n> > value it would have.\n> > \n> > \n> \n> How about ODBC? 
This is from the ODBC driver source connection.c:\n> \n> self->errormsg = \"Password crypt authentication not supported\";\n> \n> Is that because of the platform it's running on or what it's talking\n> to?\n\nSeems we don't have crypt support, so you can't send crypt passwords\nfrom an ODBC client. That is news to me.\n\n From looking there, and looking at pg_hba.conf, we have both 'password'\nand 'crypt' authentication in there. \n\nHowever, this is not a problem because we can still do backend-only\ncrypting when comparing client-sent cleartext passwords to pg_shadow\npasswords.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 6 May 2000 13:02:47 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "Vince Vielhaber <[email protected]> writes:\n\n> FreeBSD uses MD5 by default since at least ver 2.2, possibly\n> earlier.\n\nSo does Red Hat Linux, and probably Linux distributions as well. \n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n", "msg_date": "06 May 2000 13:03:10 -0400", "msg_from": "[email protected] (Trond Eivind=?iso-8859-1?q?_Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "on 5/6/00 12:45 PM, Sverre H. Huseby at [email protected] wrote:\n\n> Does anyone here really _know_ (and I mean KNOW)\n> security/cryptography? If so, could you please comment on this\n> scheme? And while you're at it, whats better of MD5 and Unix crypt\n> (triple DES ++, isn't it?) from a security perspective?\n\nFinally something I can comment on with a tiny bit of authority :)\n\nThe unix crypt command is a sneaky version of DES (I've never heard of\nTriple-DES being used for this). Your password is transformed into a DES key\nwhich is then used to encrypt a block of 0's. The result is what's stored in\nthe password file. Poor Man's Hash, in a sense :)\n\nMD5 is quite standard (as hashing algs go) and much more secure. It allows\nfor longer passwords, and it's quite fast (easily tens of thousands of MD5\nhashes per second on today's midlevel processors). I strongly recommend you\nuse that.\n\n| store the password in pg_shadow like a unix-style password with salt\n| pass the random salt and the salt from pg_shadow to the client\n| client crypts the password twice through the routine:\n| once using the pg_shadow salt\n| another time using the random salt\n\nMy first impression of this scheme is that it's quite good. Use MD5 instead\nof crypt, and it's great. You've got a good challenge-response setup here,\nand with MD5 you can even make your salt much longer than the 2 bytes of\nunix crypt salt, thus much more secure.\n\nI like it!\n\n-Ben\n\n", "msg_date": "Sat, 06 May 2000 13:17:22 -0400", "msg_from": "Benjamin Adida <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext\n\tpasswords." }, { "msg_contents": "On Sat, 6 May 2000, Bruce Momjian wrote:\n\n> > > Now, I we want to move all the stuff to use MD5 rather than the standard\n> > > unix password crypt, that is another option, though I am not sure what\n> > > value it would have.\n> > > \n> > > \n> > \n> > How about ODBC? 
This is from the ODBC driver source connection.c:\n> > \n> > self->errormsg = \"Password crypt authentication not supported\";\n> > \n> > Is that because of the platform it's running on or what it's talking\n> > to?\n> \n> Seems we don't have crypt support, so you can't send crypt passwords\n> from an ODBC client. That is news to me.\n> \n> >From looking there, and looking at pg_hba.conf, we have both 'password'\n> and 'crypt' authentication in there. \n> \n> However, this is not a problem because we can still do backend-only\n> crypting when comparing client-sent cleartext passwords to pg_shadow\n> passwords.\n\nBut what I'm proposing will let ALL clients send an encrypted password\nover the wire and we can also store them encrypted. By comparing twice\nwe can maintain backward compatibility. The backend would compare the\npassword received with the stored md5 password and compare the received\npassword after md5ing it in case it was sent clear-text.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Sat, 6 May 2000 13:19:16 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "> But what I'm proposing will let ALL clients send an encrypted password\n> over the wire and we can also store them encrypted. By comparing twice\n> we can maintain backward compatibility. The backend would compare the\n> password received with the stored md5 password and compare the received\n> password after md5ing it in case it was sent clear-text.\n\nBut you can do that with our current system. Store them in pg_shadow\nusing unix password format. If a cleartext password comes in, crypt it\nusing the pg_shadow salt and compare them.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 6 May 2000 13:21:16 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "> on 5/6/00 12:45 PM, Sverre H. Huseby at [email protected] wrote:\n> \n> > Does anyone here really _know_ (and I mean KNOW)\n> > security/cryptography? If so, could you please comment on this\n> > scheme? And while you're at it, whats better of MD5 and Unix crypt\n> > (triple DES ++, isn't it?) from a security perspective?\n> \n> Finally something I can comment on with a tiny bit of authority :)\n> \n> The unix crypt command is a sneaky version of DES (I've never heard of\n> Triple-DES being used for this). Your password is transformed into a DES key\n> which is then used to encrypt a block of 0's. The result is what's stored in\n> the password file. Poor Man's Hash, in a sense :)\n> \n> MD5 is quite standard (as hashing algs go) and much more secure. It allows\n> for longer passwords, and it's quite fast (easily tens of thousands of MD5\n> hashes per second on today's midlevel processors). 
I strongly recommend you\n> use that.\n> \n> | store the password in pg_shadow like a unix-style password with salt\n> | pass the random salt and the salt from pg_shadow to the client\n> | client crypts the password twice through the routine:\n> | once using the pg_shadow salt\n> | another time using the random salt\n> \n> My first impression of this scheme is that it's quite good. Use MD5 instead\n> of crypt, and it's great. You've got a good challenge-response setup here,\n> and with MD5 you can even make your salt much longer than the 2 bytes of\n> unix crypt salt, thus much more secure.\n> \n> I like it!\n> \n\nGood. I only recommend our current setup because we already have code\nin most interfaces to handle it. I have no problem moving to md5, but\nthis should be done for _all_ crypting. I just see no reason to mix\nstandard password crypt with md5 and try to keep two crypts working on\nall interfaces. The easy way would be to use our current crypt stuff to\nget it working, then move to md5 if we can get it working on all our\ninterfaces.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 6 May 2000 13:23:52 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "On Sat, 6 May 2000, Bruce Momjian wrote:\n\n> > But what I'm proposing will let ALL clients send an encrypted password\n> > over the wire and we can also store them encrypted. By comparing twice\n> > we can maintain backward compatibility. The backend would compare the\n> > password received with the stored md5 password and compare the received\n> > password after md5ing it in case it was sent clear-text.\n> \n> But you can do that with our current system. Store them in pg_shadow\n> using unix password format. If a cleartext password comes in, crypt it\n> using the pg_shadow salt and compare them.\n\nYou missed half of it. Platforms that don't have crypt would use our\nMD5 so eventually all of them would be sending encrypted passwords \nover the wire. I'm trying to accomplish two things here.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Sat, 6 May 2000 13:25:18 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>>>> The goal here was to make wire sniffing unproductive, and because the\n>>>> server supplied the salt to be used by the client, you can't just\n>>>> re-use a sniffed password you saw on the wire.\n\nGood point. Are we trying to make the password challenge proof against\nwire sniffing? I think that might be an unreasonable goal. People who\nare concerned about sniffing attacks ought to be using something like\nSSL to encrypt the *entire* connection (and/or a better auth protocol\nthan passwords, in the first place...). 
I repeat my prior comment that\nthe data in the database is likely to be just as valuable as the\npassword.\n\n> Right now the client receives a random salt from the server, it uses\n> that salt to crypt the password, then send that back for comparison with\n> the clear-text password we store in the system.\n\n> What if we:\n> \tstore the password in pg_shadow like a unix-style password with salt\n> \tpass the random salt and the salt from pg_shadow to the client\n> \tclient crypts the password twice through the routine:\n> \t\tonce using the pg_shadow salt\n> \t\tanother time using the random salt\n\n> and passes that back to the server. The server can use the pg_shadow\n> copy of the password, use the random salt make a new version, and\n> compare the result.\n\nUse the random salt to make a new version of what, exactly? If the\nserver doesn't have the cleartext password, it can't make a crypted\npassword that will correspond to any randomly chosen salt either.\n\n> This has the huge advantage of not requiring any new crypting methods on\n> the client. It only requires the crypt to happen twice using two\n> different salts.\n\nA serious objection to both this and the MD5 proposal is that it'd\ncreate a cross-version incompatibility between clients and servers.\nWe were just fending off complaints about 6.5-to-7.0 incompatibilities\nthat were considerably less serious than being unable to connect at all,\nso how well do you think this'd go over?\n\nI think we should try to stick to the current protocol: one salt sent\nby the server, one crypted password sent back. The costs of changing\nthe protocol will probably outweigh any real-world security gain.\n\nWe could get some of the benefits of a random salt if we were willing\nto enlarge the pg_shadow entry a little bit. Suppose that we allow\nN different salt values to be sent by the server (crypt(3) only allows\n4096 possible salts anyway, but I'm thinking N in the range of 100).\nWhen a password is set, crypt the password with each of these and store\n*all* the results in pg_shadow. Then we have the right crypted password\navailable to compare to the client response, and an attacker still has\na relatively low probability of having sniffed the right password.\n\nBTW, I hear \"MD5\" being chanted like a mantra, but someone will have\nto explain to me what it does that avoids the sniffed-crypted-password\nproblem...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 06 May 2000 13:29:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords. " }, { "msg_contents": "> On Sat, 6 May 2000, Bruce Momjian wrote:\n> \n> > > But what I'm proposing will let ALL clients send an encrypted password\n> > > over the wire and we can also store them encrypted. By comparing twice\n> > > we can maintain backward compatibility. The backend would compare the\n> > > password received with the stored md5 password and compare the received\n> > > password after md5ing it in case it was sent clear-text.\n> > \n> > But you can do that with our current system. Store them in pg_shadow\n> > using unix password format. If a cleartext password comes in, crypt it\n> > using the pg_shadow salt and compare them.\n> \n> You missed half of it. Platforms that don't have crypt would use our\n> MD5 so eventually all of them would be sending encrypted passwords \n> over the wire. 
I'm trying to accomplish two things here.\n\nThat is fine: We need crypted passwords in pg_shadow, and MD5 is\nprobably better than our current setup.\n\nBut we have tons of interfaces, all of which use the old stuff. If you\nthink you can do both at the same time, go ahead. MD5 has salt\ncapability, so you can move it right into our current client dialog\nsetup, and do double-MD5 as I suggested.\n\nYou still need double-MD5 because you have to crypt the password based\non the random salt passed to the client by the server. If you can make\nthe salt larger than 2 bytes at the same time, so much the better.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 6 May 2000 13:30:24 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "I said:\n> I think we should try to stick to the current protocol: one salt sent\n> by the server, one crypted password sent back. The costs of changing\n> the protocol will probably outweigh any real-world security gain.\n\nActually, since libpq handles the authentication phase of connection\nvia a state-machine, it'd be possible for the postmaster to send two\nsuccessive authentication challenge packets with different salts, and\nlibpq would respond correctly to each one. This is a little bit shaky\nbecause the current protocol document does not say that clients should\nloop at the challenge point of the protocol, so there might be non-libpq\nclients that wouldn't cope. But it's possible we could do it without\nbreaking compatibility with old clients.\n\nHowever, I still fail to see what it buys us to challenge the frontend\nwith two salts. If the password is stored crypted, the *only* thing\nwe can validate is that password with the same salt it was stored\nwith. It doesn't sound like MD5 changes this at all.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 06 May 2000 14:14:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords. " }, { "msg_contents": "on 5/6/00 2:14 PM, Tom Lane at [email protected] wrote:\n\n> However, I still fail to see what it buys us to challenge the frontend\n> with two salts. If the password is stored crypted, the *only* thing\n> we can validate is that password with the same salt it was stored\n> with. It doesn't sound like MD5 changes this at all.\n\nThe MD5 definitely doesn't change anything except overall security strength\nof the algorithm. The additional random salt prevents someone from sniffing\nthe communication between client and server and then simply log in by\nsending the known hash of the password. The challenge-response means that\nsniffing one login doesn't allow you to fake the next one.\n\n-Ben\n\n", "msg_date": "Sat, 06 May 2000 14:21:10 -0400", "msg_from": "Benjamin Adida <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext\n\tpasswords." }, { "msg_contents": "On Sat, 06 May 2000, Trond Eivind Glomsr�d wrote:\n> Vince Vielhaber <[email protected]> writes:\n> \n> > FreeBSD uses MD5 by default since at least ver 2.2, possibly\n> > earlier.\n> \n> So does Red Hat Linux, and probably Linux distributions as well. 
\n> \n> -- \n> Trond Eivind Glomsr�d\n> Red Hat, Inc.\n\nThere is some good information about crypt and MD5 at www.php.net in the\ndocumentation: String Functions/crypt\n\nhttp://www.php.net/manual/function.crypt.php\n\nIt explains that many systems have updated crypt() to use MD5 and how to\ncheck what hash algorithm your system's crypt() actually uses.\n\n-- \nRobert B. Easter\[email protected]\n", "msg_date": "Sat, 6 May 2000 14:26:09 -0400", "msg_from": "\"Robert B. Easter\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "Benjamin Adida <[email protected]> writes:\n>> It doesn't sound like MD5 changes this at all.\n\n> The MD5 definitely doesn't change anything except overall security strength\n> of the algorithm.\n\nOK, understood. So it seems that switching to MD5 would offer (a) more\nportability to platforms without crypt(3), and (b) better security,\nat the costs of (a) implementation effort and (b) cross-version\ncompatibility problems. We probably ought to keep that discussion\nseparate from the one about how the challenge protocol works.\n\n> The additional random salt prevents someone from sniffing\n> the communication between client and server and then simply log in by\n> sending the known hash of the password. The challenge-response means that\n> sniffing one login doesn't allow you to fake the next one.\n\nHow so? The server sends out one fixed salt (the one stored for that\nuser's password in pg_shadow) and one randomly-chosen salt. The client\nsends back two crypted passwords. The server can check one of them.\nWhat can it do with the other? Nothing that I can see, so where is the\nsecurity gain? A sniffer can still get in by sending back the same\npair of crypted passwords next time, no matter what random salt is\npresented.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 06 May 2000 14:29:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords. " }, { "msg_contents": "on 5/6/00 2:29 PM, Tom Lane at [email protected] wrote:\n\n> How so? The server sends out one fixed salt (the one stored for that\n> user's password in pg_shadow) and one randomly-chosen salt. The client\n> sends back two crypted passwords. The server can check one of them.\n> What can it do with the other? Nothing that I can see, so where is the\n> security gain? A sniffer can still get in by sending back the same\n> pair of crypted passwords next time, no matter what random salt is\n> presented.\n\nOkay, my understanding was that the protocol would work as follows:\n\n- client requests login\n- server sends stored salt c1, and random salt c2.\n- client performs hash_c2(hash_c1(password)) and sends result to server.\n- server performs hash_c2(stored_pg_shadow) and compares with client\nsubmission.\n- if there's a match, there's successful login.\n\nThis protocol will truly create a challenge-response where the communication\nis different at each login, and where sniffing one\nhash_c2(hash_c1(password)) doesn't give you any way to log in with a\ndifferent c2.\n\n-Ben\n\n", "msg_date": "Sat, 06 May 2000 14:32:24 -0400", "msg_from": "Benjamin Adida <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext\n\tpasswords." 
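
A hedged C sketch of the flow enumerated just above, i.e. hash_c2(hash_c1(password)). OpenSSL's MD5() is used here purely as a stand-in for whichever digest routine is finally adopted, and prepending the salt to the data is an assumed convention for illustration, not a settled format:

    #include <stdio.h>
    #include <string.h>
    #include <openssl/md5.h>           /* stand-in digest; link with -lcrypto */

    /* hash salt || data with MD5 and write 32 hex characters plus NUL into out */
    static void
    md5_salted_hex(const char *salt, const char *data, char out[33])
    {
        unsigned char digest[MD5_DIGEST_LENGTH];
        char          buf[512];
        int           i;

        snprintf(buf, sizeof(buf), "%s%s", salt, data);
        MD5((const unsigned char *) buf, strlen(buf), digest);
        for (i = 0; i < MD5_DIGEST_LENGTH; i++)
            sprintf(out + i * 2, "%02x", digest[i]);
    }

    /* client: c1 is the long-term salt from pg_shadow, c2 the random challenge */
    static void
    client_answer(const char *password, const char *c1, const char *c2,
                  char answer[33])
    {
        char inner[33];

        md5_salted_hex(c1, password, inner);   /* hash_c1(password), same as the stored form */
        md5_salted_hex(c2, inner, answer);     /* hash_c2(hash_c1(password)) goes on the wire */
    }

    /* server: pg_shadow holds hash_c1(password); only the second hash is repeated */
    static int
    server_accepts(const char *stored_hash_c1, const char *c2, const char *answer)
    {
        char expected[33];

        md5_salted_hex(c2, stored_hash_c1, expected);
        return strcmp(expected, answer) == 0;
    }

Each login sees a fresh c2, so a captured answer is useless against the next connection's challenge.
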
}, { "msg_contents": "On Sat, 6 May 2000, Tom Lane wrote:\n\n> Benjamin Adida <[email protected]> writes:\n> >> It doesn't sound like MD5 changes this at all.\n> \n> > The MD5 definitely doesn't change anything except overall security strength\n> > of the algorithm.\n> \n> OK, understood. So it seems that switching to MD5 would offer (a) more\n> portability to platforms without crypt(3), and (b) better security,\n> at the costs of (a) implementation effort and (b) cross-version\n> compatibility problems. We probably ought to keep that discussion\n> separate from the one about how the challenge protocol works.\n\nI agree.\n \n> > The additional random salt prevents someone from sniffing\n> > the communication between client and server and then simply log in by\n> > sending the known hash of the password. The challenge-response means that\n> > sniffing one login doesn't allow you to fake the next one.\n> \n> How so? The server sends out one fixed salt (the one stored for that\n> user's password in pg_shadow) and one randomly-chosen salt. The client\n> sends back two crypted passwords. The server can check one of them.\n> What can it do with the other? Nothing that I can see, so where is the\n> security gain? A sniffer can still get in by sending back the same\n> pair of crypted passwords next time, no matter what random salt is\n> presented.\n\nOff hand here is the only way I can see that this can work.\n\n1) client gets password from user and md5's it.\n2) upon connecting, the client receives a random salt from the server.\n3) the client md5's the already md5'd password with this new salt.\n4) the client sends the resulting hash to the server.\n5) the server takes the md5'd password from pg_shadow and md5's it\n with the same random salt it sent to the client.\n6) if it matches, the server sends yet another salt to the client.\n7) repeat steps 3, 4 and 5.\n8) if it matches the client's in.\n\nWhy should this work? Because the next time the client tries to connect\nit will be given a different salt. But why twice? It seems that once\nwould be enough since it's a random salt to begin with and the client\nshould never be getting that salt twice.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Sat, 6 May 2000 14:40:41 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "on 5/6/00 2:40 PM, Vince Vielhaber at [email protected] wrote:\n\n> Why should this work? Because the next time the client tries to connect\n> it will be given a different salt. But why twice? It seems that once\n> would be enough since it's a random salt to begin with and the client\n> should never be getting that salt twice.\n\nNo, the reason why you would have \"two\" hashes is so that the server doesn't\nhave to store the cleartext password. 
The server stores an already-hashed\nversion of the password, so the client must hash the cleartext twice, once\nwith a long-term salt, once with a random, one-time salt.\n\n-Ben\n\n", "msg_date": "Sat, 06 May 2000 14:41:57 -0400", "msg_from": "Benjamin Adida <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext\n\tpasswords." }, { "msg_contents": "Benjamin Adida <[email protected]> writes:\n> Okay, my understanding was that the protocol would work as follows:\n\n> - client requests login\n> - server sends stored salt c1, and random salt c2.\n> - client performs hash_c2(hash_c1(password)) and sends result to server.\n> - server performs hash_c2(stored_pg_shadow) and compares with client\n> submission.\n> - if there's a match, there's successful login.\n\nOh, now I see. OK, that looks like it would work. It would definitely\nmean a change of algorithm on the client side.\n\nProbably the way to attack this would be to combine MD5 and this double\npassword-munging algorithm as a new authentication protocol type to add\nto the ones we already support. That way old clients don't have to be\nupdated instantly.\n\nOTOH, if the password stored in pg_shadow is MD5-encrypted, then we lose\nthe ability to support the old crypt-based auth method, don't we?\nOld clients could be successfully authenticated with cleartext password\nchallenge (server MD5's the transmitted password and compares to\npg_shadow), but we couldn't do anything with a crypt()-encrypted\npassword. Is that enough reason to stay with crypt() as the underlying\nhashing engine? Maybe not, but we gotta consider the tradeoffs...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 06 May 2000 14:43:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords. " }, { "msg_contents": "> > The additional random salt prevents someone from sniffing\n> > the communication between client and server and then simply log in by\n> > sending the known hash of the password. The challenge-response means that\n> > sniffing one login doesn't allow you to fake the next one.\n> \n> How so? The server sends out one fixed salt (the one stored for that\n> user's password in pg_shadow) and one randomly-chosen salt. The client\n> sends back two crypted passwords. The server can check one of them.\n> What can it do with the other? Nothing that I can see, so where is the\n> security gain? A sniffer can still get in by sending back the same\n> pair of crypted passwords next time, no matter what random salt is\n> presented.\n\nNo, you crypt the user-supplied password twice.\n\n\t'fred' -> crypt with fixed -> crypt with random\n\nServer does:\n\n\tpg_shadow password -> crypt with random\n\nThen check to see they match.\n\nDoes that help?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 6 May 2000 14:50:59 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." 
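
For completeness, a similarly hedged sketch of the cleartext fallback discussed later in the thread: a client that cannot hash at all sends the password in the clear, and the server performs the long-term-salt hash itself before comparing with what is kept in pg_shadow. Again, OpenSSL's MD5() and the salt-prepending rule are assumptions for illustration only:

    #include <stdio.h>
    #include <string.h>
    #include <openssl/md5.h>           /* stand-in digest; link with -lcrypto */

    /* accepts an old-style client that sent the password unhashed */
    static int
    cleartext_fallback_ok(const char *cleartext,     /* as received from the client */
                          const char *c1,            /* long-term salt kept with the entry */
                          const char *stored_hex)    /* hash_c1(password) from pg_shadow */
    {
        unsigned char digest[MD5_DIGEST_LENGTH];
        char          buf[512];
        char          hex[MD5_DIGEST_LENGTH * 2 + 1];
        int           i;

        snprintf(buf, sizeof(buf), "%s%s", c1, cleartext);   /* same salting rule the client would use */
        MD5((const unsigned char *) buf, strlen(buf), digest);
        for (i = 0; i < MD5_DIGEST_LENGTH; i++)
            sprintf(hex + i * 2, "%02x", digest[i]);
        return strcmp(hex, stored_hex) == 0;
    }
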
}, { "msg_contents": "> Okay, my understanding was that the protocol would work as follows:\n> \n> - client requests login\n> - server sends stored salt c1, and random salt c2.\n> - client performs hash_c2(hash_c1(password)) and sends result to server.\n> - server performs hash_c2(stored_pg_shadow) and compares with client\n> submission.\n> - if there's a match, there's successful login.\n> \n> This protocol will truly create a challenge-response where the communication\n> is different at each login, and where sniffing one\n> hash_c2(hash_c1(password)) doesn't give you any way to log in with a\n> different c2.\n\nYes.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 6 May 2000 14:51:25 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "Let me comment:\n\n> > How so? The server sends out one fixed salt (the one stored for that\n> > user's password in pg_shadow) and one randomly-chosen salt. The client\n> > sends back two crypted passwords. The server can check one of them.\n> > What can it do with the other? Nothing that I can see, so where is the\n> > security gain? A sniffer can still get in by sending back the same\n> > pair of crypted passwords next time, no matter what random salt is\n> > presented.\n> \n> Off hand here is the only way I can see that this can work.\n> \n> 1) client gets password from user and md5's it.\n\nNo, no md5 yet.\n\n> 2) upon connecting, the client receives a random salt from the server.\n> 3) the client md5's the already md5'd password with this new salt.\n\nmd5's plaintext password using pg_shadow salt, and random salt.\n\n> 4) the client sends the resulting hash to the server.\n> 5) the server takes the md5'd password from pg_shadow and md5's it\n> with the same random salt it sent to the client.\n\nYes.\n\n> 6) if it matches, the server sends yet another salt to the client.\n> 7) repeat steps 3, 4 and 5.\n> 8) if it matches the client's in.\n> \n> Why should this work? Because the next time the client tries to connect\n> it will be given a different salt. But why twice? It seems that once\n> would be enough since it's a random salt to begin with and the client\n> should never be getting that salt twice.\n\nNo, once with pg_shadow salt, then random salt.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 6 May 2000 14:54:22 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "> on 5/6/00 2:40 PM, Vince Vielhaber at [email protected] wrote:\n> \n> > Why should this work? Because the next time the client tries to connect\n> > it will be given a different salt. But why twice? It seems that once\n> > would be enough since it's a random salt to begin with and the client\n> > should never be getting that salt twice.\n> \n> No, the reason why you would have \"two\" hashes is so that the server doesn't\n> have to store the cleartext password. 
The server stores an already-hashed\n> version of the password, so the client must hash the cleartext twice, once\n> with a long-term salt, once with a random, one-time salt.\n> \n\nYeah, right!\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 6 May 2000 14:54:52 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "\"Robert B. Easter\" <[email protected]> writes:\n> http://www.php.net/manual/function.crypt.php\n> It explains that many systems have updated crypt() to use MD5 and how to\n> check what hash algorithm your system's crypt() actually uses.\n\nOh, that's interesting. If that's correct, we *already* have a cross-\nplatform compatibility problem: a client compiled on a machine with\nDES-derived crypt() will be unable to authenticate itself under \"crypt\"\nprotocol to a server using MD5-based crypt(), or vice versa, because the\nwrong hashed password will be sent. Can someone with access to two such\nmachines check this?\n\nIf that's true, it seriously weakens the backwards-compatibility\nargument for sticking with crypt(), IMHO. Old clients on another\nplatform may already fail to talk to your server...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 06 May 2000 14:55:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords. " }, { "msg_contents": "> Probably the way to attack this would be to combine MD5 and this double\n> password-munging algorithm as a new authentication protocol type to add\n> to the ones we already support. That way old clients don't have to be\n> updated instantly.\n\nNot sure that will work because once we use md5 on the server side for\npg_shadow, we have to be able to do md5 on the client, I think, for\ncrypting because the md5 has to be done _before_ the random salt crypt.\n\n> \n> OTOH, if the password stored in pg_shadow is MD5-encrypted, then we lose\n> the ability to support the old crypt-based auth method, don't we?\n\nYes.\n\n> Old clients could be successfully authenticated with cleartext password\n> challenge (server MD5's the transmitted password and compares to\n> pg_shadow), but we couldn't do anything with a crypt()-encrypted\n> password. Is that enough reason to stay with crypt() as the underlying\n> hashing engine? Maybe not, but we gotta consider the tradeoffs...\n\nNot sure.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 6 May 2000 14:57:38 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Probably the way to attack this would be to combine MD5 and this double\n>> password-munging algorithm as a new authentication protocol type to add\n>> to the ones we already support. 
That way old clients don't have to be\n>> updated instantly.\n\n> Not sure that will work because once we use md5 on the server side for\n> pg_shadow, we have to be able to do md5 on the client, I think, for\n> crypting because the md5 has to be done _before_ the random salt crypt.\n\nWe can still support old clients under the cleartext-password protocol:\nclient sends password in clear, server MD5's it using salt from\npg_shadow and compares result. This is vulnerable to sniffing but no\nmore so than it was before. What we would lose is backwards\ncompatibility to the crypt-password protocol. We should still choose\na new Authentication typecode for the MD5/double-hash protocol, just to\nmake sure no one gets confused about which protocol is being requested.\n\nIf these reports are correct that some platforms already have MD5, not\nDES, inside crypt(3) then I'm definitely leaning towards going with MD5.\nThe best reason to stick with crypt as the hash engine would be to\npreserve support for the existing crypt-based protocol, but if that's\nalready broken cross-platform then the value of continuing to support it\nlooks pretty dubious. (After all, the clients on your own box are\nprobably getting updated at the same time as the server --- it's clients\non other boxes that you're really worried about backwards compatibility\nfor.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 06 May 2000 15:06:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords. " }, { "msg_contents": "\nWould public/private key pair authentication (like GPG) or SSL-like solutions\nwork? If the backend could use SSL, it would have the ability to protect\npasswords and all data too from being seen on the network. Somekind of SSL\nability would solve all security problems. Can't OpenSSL be used on top of the\nclient/backend connection? This way, the backend/database could just store\nhashed passwords anyway it wants and the client only needs to support the\nSSL-like layer in the connection. I guess this would mean adding SSL into the\nlibraries (libpq etc) for the functions that make the backend connection. It\nwould be nice if after user authentication, that the protocol could optionally\nrenegotiate back to an unencrypted connection for speed. A security option\ncould be added to databases that allows the DBA to specify whether or not\naccess to the database requires a secure connection to protect the sensitive\ninfo.\n\nI'm probably not understanding everything here, but if system crypt() is used,\nit looks like you have to go with the least common denominator algorithm\nthat is on all platforms, which might be the old 2-byte salt DES. But if you\nembed all this in the libpq etc, then you use whatever you want.\n\nAlso, isn't the salt the first x bytes of the hashed string, x depending on\nwhich algorithm used? Wouldn't things work like this:\n\n1. Server sends the first 2 (or x bytes) of the hashed password (e.g., the salt\nused to make the hashed password.\n2. The client hashes the password with the salt and sends it back to the server.\n3. The server compares what the client sent with the hash it has stored. If\nthey match the user is let in. 
I didn't think there was any need for this\nrandom salt and double hashing thats been discussed.\n\nIf you have to implement something into the backend and the client libraries,\nwhy not go for an SSL type solution?\n\n(people might say, it sounds fine, why don't YOU do it :)\n\n Robert Easter\n\n", "msg_date": "Sat, 6 May 2000 15:18:35 -0400", "msg_from": "\"Robert B. Easter\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Now, I we want to move all the stuff to use MD5 rather than the standard\n> unix password crypt,\n\nAFAIK, MD5 is one of \"the standard password crypt\"'s ;)\n\n> that is another option, though I am not sure what\n> value it would have.\n\nOne advantage would be passwords with more than 8 characters that\nmatter.\n\nIMO the salt part in the \"old\" crypt code is there only to make it \nharder for people to accidentally discover that other people have \nthe same password with them, which could easily be avoided by \nincluding the username as kind of supersalt in the md5 string, \nso the value passed over wire (and stored in DB would be\nMD5('<username>:<passwd>'). \nIf we want to make password hijacking real hard, we could store \nthe above but ask the client for \nMD5(<server-supplied-salt>+MD5(<username>+':'+<passwd>))\nand compare that\n\n-------------\nHannu\n", "msg_date": "Sat, 06 May 2000 22:36:21 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I think the only problem is moving dumps from on machine to another. \n> The crypt version may not exist or be different on different machines.\n\nThis is something I was going to bring up, but I see Bruce already did.\nHow will dump/restore and upgrades cope with crypted passwords?\n\nIf we standardize on MD5 encryption then there shouldn't be a cross-\nplatform problem with incompatible hashes, but we've still got troubles.\n\nCurrently, pg_dumpall handles saving and loading of pg_shadow as a\nsimple table COPY operation. That'd work OK once you have the passwords\nstored in MD5-encrypted form, but it will surely not work for upgrading\nfrom 7.0 to 7.1 (assume 7.1 is when we'll have this stuff ready).\n\nI've never cared for dumping pg_shadow that way anyway, since it makes\ncross-version changes in the format of pg_shadow nearly impossible;\nthis password storage issue is just one case of a larger problem. What\npg_dumpall should really be doing is emitting CREATE USER commands to\nreload the contents of pg_shadow.\n\nTo do it that way, we'd need two variants of CREATE USER:\n\nCREATE USER ... WITH PASSWORD 'foo'\n\n(the existing syntax). 'foo' is cleartext and it gets hashed on its\nway into the table. The hashing step includes choosing a random salt.\n\nCREATE USER ... 
WITH ENCRYPTED PASSWORD 'bar'\n\nHere 'bar' is the already-encrypted password form (with salt value\nembedded); it'd be dropped into pg_shadow unchanged, although the\ncommand ought to do whatever it can to check the validity of the\nencryption format.\n\npg_dumpall would generate this second form, inserting the crypted\npassword it had read from the pg_shadow table being dumped.\n\n(Probably, there should be an ALTER USER SET ENCRYPTED PASSWORD as\nwell, for transferring already-crypted passwords, but that's not\nessential for the purpose at hand.)\n\nThat solves our problem going forward, but we're still stuck for\nhow to get 7.0 password data into 7.1. One possible avenue is to make\nsure that it is possible to distinguish whether an existing database\ncontains crypted or cleartext passwords (maybe this comes for free,\nor maybe we have to change the name of the pg_shadow password column\nor some such). Then pg_dumpall could be made to dump out either\nWITH PASSWORD 'foo' or WITH ENCRYPTED PASSWORD 'foo' depending on\nwhether it sees that it is reading cleartext or crypted passwords\nfrom the source database. Then we tell people that they have to\nuse 7.1's pg_dumpall to dump a 7.0 database in preparation for\nupdating to 7.1, or else expect to have to reset all their passwords.\n\nIs there a better way?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 06 May 2000 16:09:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "Vince Vielhaber <[email protected]> writes:\n> How about ODBC? This is from the ODBC driver source connection.c:\n> self->errormsg = \"Password crypt authentication not supported\";\n> Is that because of the platform it's running on or what it's talking\n> to?\n\nI think the ODBC authors didn't want to assume that libcrypt() is\navailable on the client side (which is probably right for Windows and\nMac at least). Standardizing on our own implementation of MD5 would\nsidestep that problem quite neatly.\n\nDepending on libcrypt is pretty painful even in Unix environments;\nhave you seen what we have to do to get it to work in shared-library\ncontexts, on machines where libcrypt is a separate shlib and not part of\nlibc? Yech. We could get rid of a bunch of cruft in the makefiles by\nabandoning crypt() ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 06 May 2000 16:15:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords. " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> Probably the way to attack this would be to combine MD5 and this double\n> >> password-munging algorithm as a new authentication protocol type to add\n> >> to the ones we already support. That way old clients don't have to be\n> >> updated instantly.\n> \n> > Not sure that will work because once we use md5 on the server side for\n> > pg_shadow, we have to be able to do md5 on the client, I think, for\n> > crypting because the md5 has to be done _before_ the random salt crypt.\n> \n> We can still support old clients under the cleartext-password protocol:\n> client sends password in clear, server MD5's it using salt from\n> pg_shadow and compares result. This is vulnerable to sniffing but no\n> more so than it was before. What we would lose is backwards\n> compatibility to the crypt-password protocol. 
We should still choose\n> a new Authentication typecode for the MD5/double-hash protocol, just to\n> make sure no one gets confused about which protocol is being requested.\n\nYes, got it. I was confused.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 6 May 2000 16:19:14 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "> Depending on libcrypt is pretty painful even in Unix environments;\n> have you seen what we have to do to get it to work in shared-library\n> contexts, on machines where libcrypt is a separate shlib and not part of\n> libc? Yech. We could get rid of a bunch of cruft in the makefiles by\n> abandoning crypt() ...\n\nAgreed.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 6 May 2000 16:22:03 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > I think the only problem is moving dumps from on machine to another. \n> > The crypt version may not exist or be different on different machines.\n> \n> This is something I was going to bring up, but I see Bruce already did.\n> How will dump/restore and upgrades cope with crypted passwords?\n> \n> If we standardize on MD5 encryption then there shouldn't be a cross-\n> platform problem with incompatible hashes, but we've still got troubles.\n> \n> Currently, pg_dumpall handles saving and loading of pg_shadow as a\n> simple table COPY operation. That'd work OK once you have the passwords\n> stored in MD5-encrypted form, but it will surely not work for upgrading\n> from 7.0 to 7.1 (assume 7.1 is when we'll have this stuff ready).\n> \n> I've never cared for dumping pg_shadow that way anyway, since it makes\n> cross-version changes in the format of pg_shadow nearly impossible;\n> this password storage issue is just one case of a larger problem. What\n> pg_dumpall should really be doing is emitting CREATE USER commands to\n> reload the contents of pg_shadow.\n\nHey, pg_dumpall is only a shell script. It does what it can. :-)\n\n> \n> To do it that way, we'd need two variants of CREATE USER:\n> \n> CREATE USER ... WITH PASSWORD 'foo'\n> \n> (the existing syntax). 'foo' is cleartext and it gets hashed on its\n> way into the table. The hashing step includes choosing a random salt.\n> \n> CREATE USER ... 
WITH ENCRYPTED PASSWORD 'bar'\n> \n> Here 'bar' is the already-encrypted password form (with salt value\n> embedded); it'd be dropped into pg_shadow unchanged, although the\n> command ought to do whatever it can to check the validity of the\n> encryption format.\n> \n> pg_dumpall would generate this second form, inserting the crypted\n> password it had read from the pg_shadow table being dumped.\n> \n> (Probably, there should be an ALTER USER SET ENCRYPTED PASSWORD as\n> well, for transferring already-crypted passwords, but that's not\n> essential for the purpose at hand.)\n> \n> That solves our problem going forward, but we're still stuck for\n> how to get 7.0 password data into 7.1. One possible avenue is to make\n> sure that it is possible to distinguish whether an existing database\n> contains crypted or cleartext passwords (maybe this comes for free,\n> or maybe we have to change the name of the pg_shadow password column\n> or some such). Then pg_dumpall could be made to dump out either\n> WITH PASSWORD 'foo' or WITH ENCRYPTED PASSWORD 'foo' depending on\n> whether it sees that it is reading cleartext or crypted passwords\n> from the source database. Then we tell people that they have to\n> use 7.1's pg_dumpall to dump a 7.0 database in preparation for\n> updating to 7.1, or else expect to have to reset all their passwords.\n> \n> Is there a better way?\n\nIf we add this WITH PASSWORD 'foo' to 7.0.X, then 7.1 can read that\nformat, know it is in cleartext, hash it, and load it in. 7.1 can dump\nits table out as WITH ENCRYPTED PASSWORD 'foo'.\n\nWith 6.5.X and earlier, we can just tell people they have to manually\nupdate them. We can emit a warning if the pg_shadow password field does\ncontain an md5 format password.\n\nAnother idea is to add code to 7.1 to convert non-md5 shadow password\nfields to md5 format. Since we already have special handling to do\npg_pwd, we could do it there. Seems like a plan. MD5 format is all\nhex digits of a specific length. No way to get that confused with a\nreal password.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 6 May 2000 16:34:47 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "on 5/6/00 3:18 PM, Robert B. Easter at [email protected] wrote:\n\n> \n> Would public/private key pair authentication (like GPG) or SSL-like solutions\n> work? If the backend could use SSL, it would have the ability to protect\n> passwords and all data too from being seen on the network. Somekind of SSL\n> ability would solve all security problems. Can't OpenSSL be used on top of\n> the\n> client/backend connection?\n\nWhile SSL could probably be an option for people dealing with tremendously\nsensitive data that shouldn't go in the clear over their internal network\n(we're not talking about passwords here, just the SQL queries and\nresponses), I think it's overkill to impose SSL for everything.\n\nThe key exchange and constant encryption overhead would significantly affect\nperformance, so this doesn't seem like something to impose on everyone.\n\n-Ben\n\n", "msg_date": "Sat, 06 May 2000 17:50:46 -0400", "msg_from": "Benjamin Adida <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext\n\tpasswords." 
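
Bruce's observation just above, that an MD5 digest is nothing but hex digits of a fixed length, suggests the simple test CREATE USER, ALTER USER or a COPY into pg_shadow could apply to decide whether an incoming value still needs hashing. A rough sketch, purely illustrative:

#include <ctype.h>
#include <string.h>

#define MD5_HEX_LEN 32          /* 16 digest bytes, two hex characters each */

/*
 * Return 1 if the string already looks like an MD5 hex digest (leave it
 * alone), 0 if it looks like a cleartext password that still needs hashing.
 */
int looks_like_md5(const char *pw)
{
    int i;

    if (strlen(pw) != MD5_HEX_LEN)
        return 0;
    for (i = 0; i < MD5_HEX_LEN; i++)
        if (!isxdigit((unsigned char) pw[i]))
            return 0;
    return 1;
}

If the per-password salt ends up embedded in the same field the length test would have to allow for that, but the idea stays the same; a user who deliberately picks a 32-hex-digit cleartext password would be treated as pre-hashed, which seems an acceptable corner case.
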
}, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Another idea is to add code to 7.1 to convert non-md5 shadow password\n> fields to md5 format. Since we already have special handling to do\n> pg_pwd, we could do it there. Seems like a plan. MD5 format is all\n> hex digits of a specific length. No way to get that confused with a\n> real password.\n\nWell, if you're willing to depend on that, then there's no need for the\nWITH ENCRYPTED PASSWORD variant syntax: the existing syntax WITH\nPASSWORD could do it all, just by checking to see if the supplied\npassword string looks like it's already been md5-ified.\n\nThe real trick would be to get this to happen during a COPY into\npg_shadow --- if we did that, then dumps generated by 7.0 pg_dumpall\nwould still work. Perhaps a trigger on pg_shadow insert/update is\nthe right place to check and md5-ify the password? (If that's in place\nthen neither CREATE nor ALTER USER would need to do anything special!)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 06 May 2000 17:57:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords. " }, { "msg_contents": "\"Robert B. Easter\" <[email protected]> writes:\n> Would public/private key pair authentication (like GPG) or SSL-like solutions\n> work?\n\nWe already have SSL support --- that's why I wasn't especially excited\nabout making the password challenge itself be proof against sniffing.\nAnybody who's afraid of sniffing attacks ought to be SSL-ifying his\nentire database connection, not just trying to protect the password.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 06 May 2000 18:02:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords. " }, { "msg_contents": "Benjamin Adida <[email protected]> writes:\n> I think it's overkill to impose SSL for everything.\n\nAgreed, and in any case we are not going to require people to install\nSSL before they can use Postgres. It's an appropriate tool for some\npeople to use depending on what their security situation is.\n\nI think we are converging on a plan that involves switching from crypt\nto MD5 as our password-hashing algorithm, so given that we are going to\nneed a client upgrade anyway, we can throw in the double hashing (two\nsalt) method you proposed without any extra pain. Might as well protect\nthe password against sniffing if we can...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 06 May 2000 18:12:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords. " }, { "msg_contents": "On Sat, 06 May 2000, Benjamin Adida wrote:\n> While SSL could probably be an option for people dealing with tremendously\n> sensitive data that shouldn't go in the clear over their internal network\n> (we're not talking about passwords here, just the SQL queries and\n> responses), I think it's overkill to impose SSL for everything.\n> \n> The key exchange and constant encryption overhead would significantly affect\n> performance, so this doesn't seem like something to impose on everyone.\n> \n> -Ben\n\nI agree that it should not be active all the time. Just active for databases\nthat have been setup to require it if the dba sets the option for it. My idea\nis that it would work like this:\n\n1. Client connects to server. The initial connection is automatically SSL.\n2. 
The user is authenticated.\n3. The client and server renegotiate the connection to drop out of SSL and to a\nnormal unencrpyted connection by default. However, if the database has been set\nto require a secure connection by the database owner, then the SSL connection\nwill remain. This adds some overhead to connecting to the server, but when\npeople need performance, they use persistent connections.\n\nThe dba would have to set the database to require the SSL connection to remain\nby running commands something like:\n\nCREATE DATABASE mydb SECURE; -- creates it initially secure.\nALTER DATABASE mydb ADD|DROP SECURE; -- alters the secure option.\n(some proposed Postgres extensions:)\n\nOpenSSL is under the BSD license (www.openssl.org). Its source code can be\nintegrated into the PostgreSQL source code so that users need know nothing\nabout it. It would just get used internal to Postgres and the client\nlibraries.\n\nOpenSSL also contains an MD5 routine that can be used on the passwords.\n\nSo far, no one is excited about this so I will not push it anymore.\n\n-- \nRobert B. Easter\[email protected]\n", "msg_date": "Sat, 6 May 2000 22:02:04 -0400", "msg_from": "\"Robert B. Easter\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "On Sat, 6 May 2000, Robert B. Easter wrote:\n\n> OpenSSL is under the BSD license (www.openssl.org). Its source code can be\n> integrated into the PostgreSQL source code so that users need know nothing\n> about it. It would just get used internal to Postgres and the client\n> libraries.\nPlease do not 'integrate' code from OpenSSL into the tree. Its huge (2M\ncompressed source tree).\n\nNegotiating security protocols and reconnecting seems like a hassle, just\nhaving autoconf detect presence of openssl libraries (automatically or\n--with-openssl) is perfect. The best (as in, simplest and most\ntransparent) way to integrate SSL support is to do it like http/https:\nprovide another port on which connections will be only accepted using SSL\nprotocol. Security-minded administrators should have an option of\ndisabling non-encrypted port. On client side, use fairly simple (to my\nmemory, you use {tls|ssl}_connect instead of connect)\n\nA flag for databases that would disallow their usage if the connection is\nunencrypted would be nice though, for those people who wish to have both\nencrypted and unencrypted connections.\n\n> OpenSSL also contains an MD5 routine that can be used on the passwords.\nMD5 is extremely simple, about 50 lines of code.\n\n-alex\n\n\n", "msg_date": "Sat, 6 May 2000 22:41:09 -0400 (EDT)", "msg_from": "Alex Pilosov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "\nSo we're in agreement on using MD5. Sverre, is the offer still open\nfor the java MD5 you wrote? I'll translate it to C and make sure it\nwill compile/run/give-correct-results on as many platforms as possible\nincluding DOS/Windows, hpux, FreeBSD and IRIX. 
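
Whatever MD5 source finally gets adopted (Sverre's translation, or the RFC 1321 reference code Tom points to below), the wrapper that both libpq and the backend would call could look roughly like this. OpenSSL's one-shot MD5() stands in for the portable implementation here (link with -lcrypto if it is used), and the small main() makes it easy to compare the output with md5sum on each platform, the same test Sverre describes below.

#include <stdio.h>
#include <string.h>
#include <openssl/md5.h>   /* stand-in; replace with the project's own MD5 code */

/* Turn an arbitrary text string into a 32-character lowercase hex digest. */
void md5_hex(const char *input, char out[33])
{
    unsigned char digest[MD5_DIGEST_LENGTH];
    int i;

    MD5((const unsigned char *) input, strlen(input), digest);
    for (i = 0; i < MD5_DIGEST_LENGTH; i++)
        sprintf(out + i * 2, "%02x", digest[i]);
    out[32] = '\0';
}

int main(void)
{
    char hex[33];

    /* Should agree with: printf '%s' secret | md5sum */
    md5_hex("secret", hex);
    printf("%s\n", hex);
    return 0;
}
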
\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Sat, 6 May 2000 23:06:42 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "So we're in agreement.... " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Another idea is to add code to 7.1 to convert non-md5 shadow password\n> > fields to md5 format. Since we already have special handling to do\n> > pg_pwd, we could do it there. Seems like a plan. MD5 format is all\n> > hex digits of a specific length. No way to get that confused with a\n> > real password.\n> \n> Well, if you're willing to depend on that, then there's no need for the\n> WITH ENCRYPTED PASSWORD variant syntax: the existing syntax WITH\n> PASSWORD could do it all, just by checking to see if the supplied\n> password string looks like it's already been md5-ified.\n\nYes, I would like that. Our syntax for CREATE USER is already pretty\nlarge. Why not do it automatically?\n\n> The real trick would be to get this to happen during a COPY into\n> pg_shadow --- if we did that, then dumps generated by 7.0 pg_dumpall\n> would still work. Perhaps a trigger on pg_shadow insert/update is\n> the right place to check and md5-ify the password? (If that's in place\n> then neither CREATE nor ALTER USER would need to do anything special!)\n\nWe already have a trigger for pg_shadow updates to recreate pg_pwd. \nNot sure if that happens from COPY however. I sure hope it does.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 6 May 2000 23:10:47 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "> Benjamin Adida <[email protected]> writes:\n> > I think it's overkill to impose SSL for everything.\n> \n> Agreed, and in any case we are not going to require people to install\n> SSL before they can use Postgres. It's an appropriate tool for some\n> people to use depending on what their security situation is.\n> \n> I think we are converging on a plan that involves switching from crypt\n> to MD5 as our password-hashing algorithm, so given that we are going to\n> need a client upgrade anyway, we can throw in the double hashing (two\n> salt) method you proposed without any extra pain. Might as well protect\n> the password against sniffing if we can...\n\nThat was my logic. Pretty cheap to do it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 6 May 2000 23:11:35 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "> OpenSSL is under the BSD license (www.openssl.org). 
Its source code can be\n> integrated into the PostgreSQL source code so that users need know nothing\n> about it. It would just get used internal to Postgres and the client\n> libraries.\n> \n> OpenSSL also contains an MD5 routine that can be used on the passwords.\n> \n> So far, no one is excited about this so I will not push it anymore.\n\nSeems like that could be a solution, but we are trying not to bloat the\nPostgreSQL tarball. Shipping SSH would certainly do that.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 6 May 2000 23:14:52 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "> \n> So we're in agreement on using MD5. Sverre, is the offer still open\n> for the java MD5 you wrote? I'll translate it to C and make sure it\n> will compile/run/give-correct-results on as many platforms as possible\n> including DOS/Windows, hpux, FreeBSD and IRIX. \n\nYes, MD5, double-crypt with pg_shadow salt and random salt. Sounds like\na winner all around.\n\nAnd finally, we need a trigger to somehow update non-md5 strings in the\npg_shadow password field. No one is sure how to do that yet.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 6 May 2000 23:21:11 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: So we're in agreement...." }, { "msg_contents": "\nSo, we're going to go with less security then is available on most Unix\nOSs? \n\nif we are going to do this, *please* just use the regular system\ncrypt() function ... for those that are using MD5 for their passwords, at\nleast as it is under FreeBSD, crypt() does either MD5 or DES depending on\nthe system ...\n\n\n\nOn Sat, 6 May 2000, Bruce Momjian wrote:\n\n> > \n> > So we're in agreement on using MD5. Sverre, is the offer still open\n> > for the java MD5 you wrote? I'll translate it to C and make sure it\n> > will compile/run/give-correct-results on as many platforms as possible\n> > including DOS/Windows, hpux, FreeBSD and IRIX. \n> \n> Yes, MD5, double-crypt with pg_shadow salt and random salt. Sounds like\n> a winner all around.\n> \n> And finally, we need a trigger to somehow update non-md5 strings in the\n> pg_shadow password field. No one is sure how to do that yet.\n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 7 May 2000 00:28:17 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: So we're in agreement...." }, { "msg_contents": "On Sat, 06 May 2000, Benjamin Adida wrote:\n> on 5/6/00 2:14 PM, Tom Lane at [email protected] wrote:\n> \n> > However, I still fail to see what it buys us to challenge the frontend\n> > with two salts. 
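
Before the two-salt question gets answered below, it may help to pin down the first salt Bruce mentions, the one kept in pg_shadow. The password-set path might look like the following sketch; the names, the storage layout and the rand()-based salt are all invented for illustration, and a real version would want a better entropy source.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <openssl/md5.h>   /* stand-in digest routine */

void md5_hex(const char *input, char out[33])
{
    unsigned char d[MD5_DIGEST_LENGTH];
    int i;

    MD5((const unsigned char *) input, strlen(input), d);
    for (i = 0; i < MD5_DIGEST_LENGTH; i++)
        sprintf(out + i * 2, "%02x", d[i]);
    out[32] = '\0';
}

/*
 * Pick a per-password salt and compute the value to keep in pg_shadow:
 * MD5(password || salt).  The cleartext itself is never stored.
 */
void set_password(const char *cleartext, char salt_out[9], char hash_out[33])
{
    static const char chars[] =
        "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";
    char buf[256];
    int i;

    srand((unsigned) time(NULL));           /* sketch only; not good entropy */
    for (i = 0; i < 8; i++)
        salt_out[i] = chars[rand() % (int) (sizeof(chars) - 1)];
    salt_out[8] = '\0';

    snprintf(buf, sizeof(buf), "%s%s", cleartext, salt_out);
    md5_hex(buf, hash_out);
}
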
If the password is stored crypted, the *only* thing\n> > we can validate is that password with the same salt it was stored\n> > with. It doesn't sound like MD5 changes this at all.\n> \n> The MD5 definitely doesn't change anything except overall security strength\n> of the algorithm. The additional random salt prevents someone from sniffing\n> the communication between client and server and then simply log in by\n> sending the known hash of the password. The challenge-response means that\n> sniffing one login doesn't allow you to fake the next one.\n> \n> -Ben\n\nI see. This protects the hash, which is an effective password, from being\ngotten by sniffers. But a cracker who has stolen the hashes out of Postgres can\nstill get in no matter what until you change the passwords.\n\nI guess hashed password authentication is really not designed for use over an\nuntrusted connection. You get the hash becomes effective password problem. \nIts very important that the hashed passwords stored in Postgres cannot be read\nby anyone except the Postgres superuser.\n\nI'm I getting this right?\n\nCrypto 101 - I'm learning. :)\n-- \nRobert B. Easter\[email protected]\n", "msg_date": "Sat, 6 May 2000 23:29:05 -0400", "msg_from": "\"Robert B. Easter\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "On Sun, 7 May 2000, The Hermit Hacker wrote:\n\n> \n> So, we're going to go with less security then is available on most Unix\n> OSs? \n\nWhoever said MD5 is less secure than standard DES has handed you a line\nso long.... \n\nIf you want the entire story, go back and reread the thread. If you want\nit in a nutshell, MD5 isn't reversable so it's not considered encryption\nas defined in the US Govt's eyes. DES has been broken more than once. I\ndon't know what you're using on hub, but MD5 is the default in FreeBSD\nunless you actually choose DES. Also the FreeBSD folks feel that MD5 is\nthe better choice.\n\n> if we are going to do this, *please* just use the regular system\n> crypt() function ... for those that are using MD5 for their passwords, at\n> least as it is under FreeBSD, crypt() does either MD5 or DES depending on\n> the system ...\n\ncrypt() requires libcrypt which isn't available on all platforms. I \nthink we all feel we can do better.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Sat, 6 May 2000 23:42:01 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: So we're in agreement...." }, { "msg_contents": "> \n> So, we're going to go with less security then is available on most Unix\n> OSs? \n> \n> if we are going to do this, *please* just use the regular system\n> crypt() function ... for those that are using MD5 for their passwords, at\n> least as it is under FreeBSD, crypt() does either MD5 or DES depending on\n> the system ...\n> \n\nWe have to use MD5 with double-encription. 
We have to do that because\nall clients need MD5 too, and we can't assume they all have DES.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 6 May 2000 23:43:06 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: So we're in agreement...." }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> So, we're going to go with less security then is available on most Unix\n> OSs? \n\nWhat's your evidence for that assertion? Garfinkel & Spafford's\n_Practical Unix and Internet Security_ recommends MD5 as a *more*\nsecure method for storing passwords than crypt() (page 720 in my\ncopy). DES is almost 20 years older than MD5, so I'm not sure\nwhy you'd assume that it must be more secure.\n\n> if we are going to do this, *please* just use the regular system\n> crypt() function\n\nHalf of the argument for touching the issue at all is that we have a\nlot of problems with crypt() --- not available on some platforms,\ninconsistent results across platforms (not proven yet, but seems likely)\nand a serious pain in the neck for our shared libraries to boot.\nIf we have to stick with crypt I'm not sure it's worth doing anything.\n\n\nBTW, Vince, I see no need to reverse-engineer a Java implementation\ninto C. The original spec includes a C implementation ... and it\nlooks to have a reasonably BSDish license. See RFC 1321, eg at \nhttp://www.faqs.org/rfcs/rfc1321.html\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 07 May 2000 00:03:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: So we're in agreement.... " }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> So far, no one is excited about this so I will not push it anymore.\n\n> Seems like that could be a solution, but we are trying not to bloat the\n> PostgreSQL tarball. Shipping SSH would certainly do that.\n\nWe already have the ability to work with an externally provided SSL\nlibrary. Right at the moment I don't see the merit of folding SSL\ninto the Postgres distribution instead.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 07 May 2000 00:10:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords. " }, { "msg_contents": "> I see. This protects the hash, which is an effective password, from being\n> gotten by sniffers. But a cracker who has stolen the hashes out of Postgres can\n> still get in no matter what until you change the passwords.\n> \n> I guess hashed password authentication is really not designed for use over an\n> untrusted connection. You get the hash becomes effective password problem. \n> Its very important that the hashed passwords stored in Postgres cannot be read\n> by anyone except the Postgres superuser.\n> \n> I'm I getting this right?\n\nGood point. Though they can't see the original password, they can have\na pgsql client use it to connect to the database.\n\nAnyone have a fix for that one?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 7 May 2000 00:17:28 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "On Sun, 7 May 2000, Tom Lane wrote:\n\n> BTW, Vince, I see no need to reverse-engineer a Java implementation\n> into C. The original spec includes a C implementation ... and it\n> looks to have a reasonably BSDish license. See RFC 1321, eg at \n> http://www.faqs.org/rfcs/rfc1321.html\n\nGot it!! Thanks!!\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Sun, 7 May 2000 00:18:08 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: So we're in agreement.... " }, { "msg_contents": ">> I see. This protects the hash, which is an effective password, from\n>> being gotten by sniffers. But a cracker who has stolen the hashes\n>> out of Postgres can still get in no matter what until you change the\n>> passwords.\n\nWhat's your point? Stealing a password is stealing a password,\nwhatever form it's represented in. More to the point, a cracker\nwho can get to the stored passwords in Postgres has already\nthoroughly broken the database's security; he doesn't need any\nmore access to the db than he's already got.\n\n>> Its very important that the hashed passwords stored in Postgres\n>> cannot be read by anyone except the Postgres superuser.\n\nNo different from the current system, where the cleartext passwords\nmustn't be readable by anyone except the superuser, either. That's\nnot the objective of this exercise. The objective is to ensure that\ngetting hold of the (hashed) Postgres passwords doesn't let you into\n*other* systems that a database user might have used the same\n(cleartext) password for. We're trying to provide some security\nfor other people's barns in the event that our own horses have already\nbeen stolen.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 07 May 2000 00:50:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords. " }, { "msg_contents": "On Sun, 07 May 2000, Tom Lane wrote:\n> not the objective of this exercise. The objective is to ensure that\n> getting hold of the (hashed) Postgres passwords doesn't let you into\n> *other* systems that a database user might have used the same\n> (cleartext) password for. We're trying to provide some security\n> for other people's barns in the event that our own horses have already\n> been stolen.\n\nOk, now I feel like this discussion has gone full circle and it is all very\nclear to me. The objectives are satisfied by the MD5 double salt scheme. The\npassword itself is protected. That doesn't protect the user.\n\nI guess my point is about protecting your login and keeping others from getting\nin there and doing things as you, in short, protecting the user. Sure, if the\ncracker got the hashes, he got everything, but the accountability/blame\ndoesn't lie with one of the database users. 
The blame lies on the poor security\nof the postgres superuser only. When hashes alone are used for authentication,\nit should not matter if someone gets the hashes - the hashes alone shouldn't be\nenough let them in. Ordinarily, a cleartext password would go straight\nthrough to the other end over a path that isn't sniffable or isn't even over a\nnetwork, like on a unix host when you login to a console. But in Postgres'\ncase, you have the hash sent as an effective password from the client - it\ndoesn't send the actual password. Anytime people really want a secure network\nlogin, they have to use ssh, ssl, or pgp (pub/private key) software. Those are\nthe only things that really protect user and keeps in control of their login. \nYour password might be secure, but your login still isn't. Someone could get\nthe hashes and you might not ever know it. They could login to the database\nsystem as you for a long long time stealing your information and doing things\nas you, but they'd never get your password. The user could end up getting\nblamed for some serious stuff that is not his fault. The user doesn't have\ncontrol of his login. The dba might realize that he had a security breach and\nBob's hash was stolen and used by a cracker to login as Bob and do serious\ndamage. The db just shrugs it off and blames it on Bob, saying that Bob must\nhave compromised his clear text password.\n\nI'd say under the scheme proposed, you really have to trust your dba and change\nyour password frequently. Anyone with access to the hashes can login as you and\nmake you look bad.\n\n-- \nRobert B. Easter\[email protected]\n", "msg_date": "Sun, 7 May 2000 01:17:50 -0400", "msg_from": "\"Robert B. Easter\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "\"Robert B. Easter\" <[email protected]> writes:\n> I'd say under the scheme proposed, you really have to trust your dba\n> and change your password frequently. Anyone with access to the hashes\n> can login as you and make you look bad.\n\nAgain, what's your point? The dbadmin can do whatever he wants *inside\nthe database*, including altering data that you might nominally be\nresponsible for. He doesn't need your password for that, any more than\nyour local Unix sysadmin needs anything but root privileges to alter\nyour files.\n\nThe point of this change is to make sure that the dbadmin can't get\nat your cleartext password, which might allow him to pose as you for\nnon-database purposes (if you are so foolish as to use that same\ncleartext password for non-database purposes).\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 07 May 2000 02:32:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords. " }, { "msg_contents": "[Vince Vielhaber]\n\n| So we're in agreement on using MD5. Sverre, is the offer still open\n| for the java MD5 you wrote?\n\nYes, of course!\n\n| I'll translate it to C and make sure it will\n| compile/run/give-correct-results on as many platforms as possible\n| including DOS/Windows, hpux, FreeBSD and IRIX.\n\nI started translating it yesterday. I'll do some commenting and\ntesting today, if I get the time I need from my family. :) It's been\nthree years since I last wrote a C program, so someone should probably\npeek thru it looking for pointer problems. 
:)\n\nIf I don't finish it today, I'll send you the half finished stuff\ntonight (it's morning here now). OK?\n\n\nSverre.\n\n-- \n<URL:mailto:[email protected]>\n<URL:http://home.sol.no/~sverrehu/> Echelon bait: semtex, bin Laden,\n plutonium, North Korea, nuclear bomb\n", "msg_date": "Sun, 7 May 2000 08:34:20 +0200", "msg_from": "\"Sverre H. Huseby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: So we're in agreement...." }, { "msg_contents": "On Sun, 7 May 2000, Sverre H. Huseby wrote:\n\n> [Vince Vielhaber]\n> \n> | So we're in agreement on using MD5. Sverre, is the offer still open\n> | for the java MD5 you wrote?\n> \n> Yes, of course!\n> \n> | I'll translate it to C and make sure it will\n> | compile/run/give-correct-results on as many platforms as possible\n> | including DOS/Windows, hpux, FreeBSD and IRIX.\n> \n> I started translating it yesterday. I'll do some commenting and\n> testing today, if I get the time I need from my family. :) It's been\n> three years since I last wrote a C program, so someone should probably\n> peek thru it looking for pointer problems. :)\n> \n> If I don't finish it today, I'll send you the half finished stuff\n> tonight (it's morning here now). OK?\n\nWorks for me.\n\nBTW, you may want to look at your email configuration. The BCC was \nleaking thru. \n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Sun, 7 May 2000 02:41:10 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: So we're in agreement...." }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Another idea is to add code to 7.1 to convert non-md5 shadow password\n> fields to md5 format. Since we already have special handling to do\n> pg_pwd, we could do it there. Seems like a plan. MD5 format is all\n> hex digits of a specific length. No way to get that confused with a\n> real password.\n\nOne way to approach it in a semi-transparent way would be to add a\ncolumn md5passwd to pg_shadow and set up a trigger to automatically \nupdate it whenever passwd is inserted/updated (and for \nsecurity-concious people the same trigger would empty the passwd \nfield itself, or set it to some special value that disables \ncrypt/cleartext logins)\n\nthe WITH ENCRYPTED PASSWORD would then update md5passwd directly, and \nreset the passwd field.\n\nI still think that the easiest way to get unique hashes would be to use \nthe username as salt when generating the value for md5passwd .\n\n----------------\nHannu\n", "msg_date": "Sun, 07 May 2000 10:11:21 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "Bruce Momjian wrote:\n> \n> >\n> > So we're in agreement on using MD5. Sverre, is the offer still open\n> > for the java MD5 you wrote? I'll translate it to C and make sure it\n> > will compile/run/give-correct-results on as many platforms as possible\n> > including DOS/Windows, hpux, FreeBSD and IRIX.\n> \n> Yes, MD5, double-crypt with pg_shadow salt and random salt. 
Sounds like\n> a winner all around.\n\nwhy pg_shadow salt ? for md5 we will need to store it separately anyway.\nwhy not MD5(<server-supplied-random-salt> || MD5(<username> ||\n<password>))\nthat way we would overcome the original need for salt (accidental\ndiscovery \nof similar passwords) and would have no need for storing the salt.\n\nactually we would probably need some kind of separator as well to avoid\nthe scenario of <user>+<password> and <userpa>+<ssword> being the same \nand thus having the same md5 hash. so the escheme could be\n\nMD5(<server-supplied-random-salt> || '\\n' || MD5(<username> || '\\n' ||\n<password>))\n\nAFAIK there is no easy way to have a newline inside password. \n\n> And finally, we need a trigger to somehow update non-md5 strings in the\n> pg_shadow password field. No one is sure how to do that yet.\n\nsee my separate mail which I was unable to send yesterday as my phone \nline went down ;(\n\n--------------\nHannu\n", "msg_date": "Sun, 07 May 2000 10:21:56 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: So we're in agreement...." }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> So, we're going to go with less security then is available on most Unix\n> OSs?\n\nNo, the general consensus seems to be using MD5, not the weaker DES\ncrypt.\n\n> if we are going to do this, *please* just use the regular system\n> crypt() function ... for those that are using MD5 for their passwords, at\n> least as it is under FreeBSD, crypt() does either MD5 or DES depending on\n> the system ...\n\nWhat does it do to portbility ?\n\n-----------\nHannu\n", "msg_date": "Sun, 07 May 2000 10:27:01 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: So we're in agreement...." }, { "msg_contents": "[Vince Vielhaber]\n\n| Works for me.\n\nOk, here's the source. Please note the following:\n\n* You should change the typedefs in md5.c. I guess the PostgreSQL\n source has types that are correct for various platforms. I've run\n this on Linux (x86) only.\n\n* I use the Artistic License. Feel free to replace it with the\n license used in the rest of PostgreSQL.\n\n* I tested my implementation (or rather, the translation to C) by\n generating MD5 sums for all files in my /usr/local/bin, and\n comparing the sums with those generated by GNU's md5sum program.\n You may want to do further tests.\n\n* Although I've written tens of thousands of lines of C code in\n the past, it is almost three years since the last time I did. Java\n programming has made me rather relaxed when it comes to memory\n handling, so you should look for pointer problems and failure to\n free allocated memory. :-)\n\n* Salting is left to the caller. You will probably want to build\n convenience functions on top of md5_hash.\n\nI really hope you will find the code useful, as it is my secret wish\nto contribute to PostgreSQL. :-)\n\nPlease keep me informed on what you do with the code, even if you\nchoose to throw it in /dev/null.\n\n\nSverre.\n\n-- \n<URL:mailto:[email protected]>\n<URL:http://home.sol.no/~sverrehu/> Echelon bait: semtex, bin Laden,\n plutonium, North Korea, nuclear bomb", "msg_date": "Sun, 7 May 2000 14:39:32 +0200", "msg_from": "\"Sverre H. Huseby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: So we're in agreement...." }, { "msg_contents": "> > Yes, MD5, double-crypt with pg_shadow salt and random salt. Sounds like\n> > a winner all around.\n> \n> why pg_shadow salt ? 
for md5 we will need to store it separately anyway.\n> why not MD5(<server-supplied-random-salt> || MD5(<username> ||\n> <password>))\n> that way we would overcome the original need for salt (accidental\n> discovery \n> of similar passwords) and would have no need for storing the salt.\n> \n> actually we would probably need some kind of separator as well to avoid\n> the scenario of <user>+<password> and <userpa>+<ssword> being the same \n> and thus having the same md5 hash. so the escheme could be\n> \n> MD5(<server-supplied-random-salt> || '\\n' || MD5(<username> || '\\n' ||\n> <password>))\n> \n> AFAIK there is no easy way to have a newline inside password. \n\nWell, unix passwords don't use the username as salt, so why should we?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 7 May 2000 09:08:45 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: So we're in agreement...." }, { "msg_contents": "On Sun, 07 May 2000, you wrote:\n> So, if someone can see those hashes, why don't they just create\n> themselves a new user, grant it full privileges to the database and\n> play?\n\nI know, they can do anything. But creating a new user is something very\nobvious that the admin will see. The breach of security would be detectable. \nIf they can get in with the hashes, they can be very sneaky and it would take a\nlong time to detect. The cracker shouldn't able to compromise a current\nusers account without having to even change the password on it. Its better to\nforce the cracker have to create an account than to let him do bad things as\nyou whenever he wants. Would you like the feeling of never knowing that maybe\nsomeone has your hash and is able to get in without you knowing? Your\npassword becomes useless. Really, sensitive information in the database could\nbe insecure over a long period of time and it would never be detectable. You'd\njust have to change your password frequently to ensure that you are the only\none that can get in. Its better to make a security system where the alarm will\ngo off.\n\nIf your competitor is able to get into the database as you, because he got your\nhash after hiring some cracker to get it, he can learn all your trade secrets\nand always find a way to have the advantage. You and your company might have a\nhard time figuring out whats going on because, so to speak, the security on the\ndatabase has no alarm.\n\nI agree that the MD5 double hash solution fixes the immediate problem. Its\njust not going to be a complete security solution.\n\n-- \nRobert B. Easter\[email protected]\n", "msg_date": "Sun, 7 May 2000 10:37:28 -0400", "msg_from": "\"Robert B. Easter\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." 
}, { "msg_contents": "Hannu Krosing <[email protected]> writes:\n> One way to approach it in a semi-transparent way would be to add a\n> column md5passwd to pg_shadow and set up a trigger to automatically \n> update it whenever passwd is inserted/updated (and for \n> security-concious people the same trigger would empty the passwd \n> field itself, or set it to some special value that disables \n> crypt/cleartext logins)\n\nI don't think it's optional to get rid of the cleartext password;\nkindly recall the original complaint we are trying to address\n(see subject line of this thread ;-)). So there's little value in\nstoring two columns.\n\nAlso, by having just one password field we can deal with either\ncleartext or pre-encrypted incoming passwords fairly easily.\nThe trigger either reformats the field, or not; no upstream code\nneeds to worry about whether the password is already encrypted.\nSo we don't need the \"WITH ENCRYPTED PASSWORD\" variant syntax,\nwhich is a good thing IMHO.\n\n> I still think that the easiest way to get unique hashes would be to use \n> the username as salt when generating the value for md5passwd .\n\nNo, I don't think that's an improvement. Please recall that the\noriginal reason for inventing salt was to make sure that it wouldn't be\nobvious whether the same user was using the same password on multiple\nmachines.\n\nSince MD5 can take an arbitrarily long input phrase, we could possibly\nrun the calculation as MD5(password || username || random salt), but\nthere *must* be some randomness in there.\n\nI doubt that it'd be all that great an idea to include the username.\nThe biggest objection to it is that renaming a user would break his\npassword (nonobviously, too). The only reason in favor of it is that\nit wouldn't be apparent when two different users share the same password\n--- but the random salt covers that problem and does more too.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 07 May 2000 11:28:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords. " }, { "msg_contents": "On Sun, 7 May 2000, Bruce Momjian wrote:\n\n> > > Yes, MD5, double-crypt with pg_shadow salt and random salt. Sounds like\n> > > a winner all around.\n> > \n> > why pg_shadow salt ? for md5 we will need to store it separately anyway.\n> > why not MD5(<server-supplied-random-salt> || MD5(<username> ||\n> > <password>))\n> > that way we would overcome the original need for salt (accidental\n> > discovery \n> > of similar passwords) and would have no need for storing the salt.\n> > \n> > actually we would probably need some kind of separator as well to avoid\n> > the scenario of <user>+<password> and <userpa>+<ssword> being the same \n> > and thus having the same md5 hash. so the escheme could be\n> > \n> > MD5(<server-supplied-random-salt> || '\\n' || MD5(<username> || '\\n' ||\n> > <password>))\n> > \n> > AFAIK there is no easy way to have a newline inside password. \n> \n> Well, unix passwords don't use the username as salt, so why should we?\n\nIt could add a level of security. The client knows the username. If\nthe client were to only send LOGIN or something like that to the server\nwithout sending the username and the server only replied with the random\nsalt, the client would know that the username was the fixed salt and could\nuse that with random salt received from the server. 
So it's really a\nhidden salt.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Sun, 7 May 2000 12:16:45 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: So we're in agreement...." }, { "msg_contents": "Vince Vielhaber <[email protected]> writes:\n> It could add a level of security. The client knows the username. If\n> the client were to only send LOGIN or something like that to the server\n> without sending the username and the server only replied with the random\n> salt, the client would know that the username was the fixed salt and could\n> use that with random salt received from the server. So it's really a\n> hidden salt.\n\nHidden from whom? The client *must* send the username to the server,\nso a sniffer who is able to see both sides of the conversation will\nstill have all the same pieces. If the sniffer only sees one side of\nthe conversation, he's still in trouble: he'll get the random salt, or\nthe hashed password, but not both. So I still don't see what the\nusername is adding to the process that will make up for rendering it\nmuch more difficult to rename users.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 07 May 2000 13:15:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: So we're in agreement.... " }, { "msg_contents": "On Sun, 7 May 2000, Tom Lane wrote:\n\n> Vince Vielhaber <[email protected]> writes:\n> > It could add a level of security. The client knows the username. If\n> > the client were to only send LOGIN or something like that to the server\n> > without sending the username and the server only replied with the random\n> > salt, the client would know that the username was the fixed salt and could\n> > use that with random salt received from the server. So it's really a\n> > hidden salt.\n> \n> Hidden from whom? The client *must* send the username to the server,\n> so a sniffer who is able to see both sides of the conversation will\n> still have all the same pieces. If the sniffer only sees one side of\n> the conversation, he's still in trouble: he'll get the random salt, or\n> the hashed password, but not both. So I still don't see what the\n> username is adding to the process that will make up for rendering it\n> much more difficult to rename users.\n\nMy intent was not to send the username, but let the server figure it \nout by the response.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Sun, 7 May 2000 13:21:54 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: So we're in agreement.... 
" }, { "msg_contents": "Vince Vielhaber <[email protected]> writes:\n> My intent was not to send the username, but let the server figure it \n> out by the response.\n\nThat would be a neat trick. How will you do it? MD5 is not reversible.\n\nActually you could do it, but *not* by folding the username into the\npassword. Instead try this:\n\n1. Client chooses random saltC, sends saltC and MD5(username, saltC)\nto server in initial request.\n\n2. Server runs through all available usernames, computing\nMD5(thisusername, saltC) for each one and looking for a match.\n\n3. Server chooses random saltS, sends it to client along with saltP\nstored in pg_shadow entry for user.\n\n4. Client computes MD5(MD5(password, saltP), saltS) and sends to server.\nServer compares this against MD5(stored hashed password, saltS).\n\nThis would prevent a sniffer from finding out anything about the valid\nusernames, which'd be a useful improvement, but I'm not convinced that\nit's worth breaking the initial protocol for. (Not to mention the\nadditional postmaster CPU time --- step 2 above is not cheap if there\nare lots of usernames.) So far we have just been talking about adding\na different type of authentication challenge to the set we already\nsupport; that doesn't break old clients as long as they aren't\nchallenged that way. Modifying the initial connection request packet\nis a different story.\n\nI'm still of the opinion that anyone who is really concerned about\nsniffing attacks ought to be using SSL, because protecting just their\npassword and not the data that will be exchanged later in the session is\nunwise. So I'm not really excited about adding anti-sniffing frammishes\nlike this one. We've got a good scheme for the password; let's be\ncareful about adding \"improvements\" that won't carry their weight in\nthe real world. There's no such thing as a single security scheme that\naddresses every possible vulnerability. Extending one part of your\nsecurity arsenal to partially solve problems that are better solved\nby a different tool is just wasting time.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 07 May 2000 13:40:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: So we're in agreement.... " }, { "msg_contents": "On Sun, 7 May 2000, Tom Lane wrote:\n\n> Vince Vielhaber <[email protected]> writes:\n> > My intent was not to send the username, but let the server figure it \n> > out by the response.\n> \n> That would be a neat trick. How will you do it? MD5 is not reversible.\n\nCLIENT: md5(salt_from_server + md5(username + md5(password)))\n\nSERVER: md5(salt_from_server + md5(username + stored_password))\n\nThe server runs thru all available usernames using the above algorithm.\n\n> I'm still of the opinion that anyone who is really concerned about\n> sniffing attacks ought to be using SSL, because protecting just their\n> password and not the data that will be exchanged later in the session is\n> unwise. So I'm not really excited about adding anti-sniffing frammishes\n> like this one. We've got a good scheme for the password; let's be\n> careful about adding \"improvements\" that won't carry their weight in\n> the real world. There's no such thing as a single security scheme that\n> addresses every possible vulnerability. 
Extending one part of your\n> security arsenal to partially solve problems that are better solved\n> by a different tool is just wasting time.\n\nAgreed.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Sun, 7 May 2000 14:45:44 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: So we're in agreement.... " }, { "msg_contents": "Vince Vielhaber <[email protected]> writes:\n>>>> My intent was not to send the username, but let the server figure it \n>>>> out by the response.\n>> \n>> That would be a neat trick. How will you do it? MD5 is not reversible.\n\n> CLIENT: md5(salt_from_server + md5(username + md5(password)))\n\n> SERVER: md5(salt_from_server + md5(username + stored_password))\n\n> The server runs thru all available usernames using the above algorithm.\n\nNo, that doesn't work unless stored passwords contain no random salt\nat all (you could use the username alone, but as I previously said\nthat's no substitute for random salt, and of dubious value anyway).\nThat'd be a distinct *loss* in security, not an improvement.\n\nTo have salt in the stored passwords, the server must receive the\nusername first so that it can look up the pg_shadow entry and find\nwhich stored salt to send to the client (along with the randomly\ngenerated per-transaction salt). You could cloak the username as\nI suggested before, but there have to be two messages.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 07 May 2000 15:16:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: So we're in agreement.... " }, { "msg_contents": "On Sun, 7 May 2000, Tom Lane wrote:\n\n> Vince Vielhaber <[email protected]> writes:\n> >>>> My intent was not to send the username, but let the server figure it \n> >>>> out by the response.\n> >> \n> >> That would be a neat trick. How will you do it? MD5 is not reversible.\n> \n> > CLIENT: md5(salt_from_server + md5(username + md5(password)))\n> \n> > SERVER: md5(salt_from_server + md5(username + stored_password))\n> \n> > The server runs thru all available usernames using the above algorithm.\n> \n> No, that doesn't work unless stored passwords contain no random salt\n> at all (you could use the username alone, but as I previously said\n> that's no substitute for random salt, and of dubious value anyway).\n> That'd be a distinct *loss* in security, not an improvement.\n> \n> To have salt in the stored passwords, the server must receive the\n> username first so that it can look up the pg_shadow entry and find\n> which stored salt to send to the client (along with the randomly\n> generated per-transaction salt). You could cloak the username as\n> I suggested before, but there have to be two messages.\n\nYou're right, it wouldn't work. It should've been like this:\n\nCLIENT: md5(salt_from_server + md5(username + password)))\n\nSERVER: md5(salt_from_server + stored_password) \n\nThe \"salt_from_server\" is your random salt. 
The fixed salt is the\nusername.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Sun, 7 May 2000 15:23:37 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: So we're in agreement.... " }, { "msg_contents": "Tom Lane wrote:\n> \n> Hannu Krosing <[email protected]> writes:\n> > One way to approach it in a semi-transparent way would be to add a\n> > column md5passwd to pg_shadow and set up a trigger to automatically\n> > update it whenever passwd is inserted/updated (and for\n> > security-concious people the same trigger would empty the passwd\n> > field itself, or set it to some special value that disables\n> > crypt/cleartext logins)\n> \n> I don't think it's optional to get rid of the cleartext password;\n> kindly recall the original complaint we are trying to address\n> (see subject line of this thread ;-)). So there's little value in\n> storing two columns.\n\nthere is some value in\n\nA. a smooth transition path via dump/restore\nB. backwards compatibility for those that need it more than security -\n we can still do DES-crypt authentication if we choose to\n\nlater (say in 7.2) we can drop password and have only md5password\n\n> Also, by having just one password field we can deal with either\n> cleartext or pre-encrypted incoming passwords fairly easily.\n> The trigger either reformats the field, or not; no upstream code\n> needs to worry about whether the password is already encrypted.\n> So we don't need the \"WITH ENCRYPTED PASSWORD\" variant syntax,\n> which is a good thing IMHO.\n\nBut how will you know if the data in the field is md5 hashed ?\n\nI know no way to tell for sure if an arbitrary string is a md5 \nhash or not. \n\nOf course we could choose a format like {MD5}:hashedstring\nand disallow cleartext passwords which start with {MD5}: like Zope does\n\n> > I still think that the easiest way to get unique hashes would be to use\n> > the username as salt when generating the value for md5passwd .\n> \n> No, I don't think that's an improvement. Please recall that the\n> original reason for inventing salt was to make sure that it wouldn't be\n> obvious whether the same user was using the same password on multiple\n> machines.\n\nOk. My previous impression was that it was in order to make sure that \ntwo users on the same machine would not find out theat the other is \nusing the same passord as she is\n\n> Since MD5 can take an arbitrarily long input phrase, we could possibly\n> run the calculation as MD5(password || username || random salt), but\n> there *must* be some randomness in there.\n> \n> I doubt that it'd be all that great an idea to include the username.\n> The biggest objection to it is that renaming a user would break his\n> password (nonobviously, too). 
\n\nCan we really rename users ?\n\nhannu=# create user foo;\nCREATE USER\nhannu=# alter user foo rename to bar;\nERROR: parser: parse error at or near \"rename\"\n\nWhy would one do it in the first place ?\n\n> The only reason in favor of it is that\n> it wouldn't be apparent when two different users share the same password\n> --- but the random salt covers that problem and does more too.\n\nTrue. I did not think of the one user/multiple computers scenario.\n\n-----------\nHannu\n", "msg_date": "Sun, 07 May 2000 22:29:39 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "Vince Vielhaber <[email protected]> writes:\n> You're right, it wouldn't work. It should've been like this:\n\n> CLIENT: md5(salt_from_server + md5(username + password)))\n\n> SERVER: md5(salt_from_server + stored_password) \n\n> The \"salt_from_server\" is your random salt. The fixed salt is the\n> username.\n\nYou're still not getting the point. I refer you to Ben Adida's\noriginal, correct description of the way to do this:\n\n> - client requests login\n> - server sends stored salt c1, and random salt c2.\n> - client performs hash_c2(hash_c1(password)) and sends result to server.\n> - server performs hash_c2(stored_pg_shadow) and compares with client\n> submission.\n> - if there's a match, there's successful login.\n\nThere have to be *two* random salts involved, one chosen when the\npassword was set (and used to cloak the stored password against people\nwith access to pg_shadow) and one chosen for the duration of this\npassword challenge (and used to cloak the challenge transaction against\npeople sniffing the packet traffic). If you give up either one of those\nbits of randomness then you lose a great deal.\n\nUsing the username instead of an independent random value to salt the\nstored password is not a small change, it is a fundamental weakening of\nthe security system. If you don't see that this is so then you don't\nunderstand anything about cryptography.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 07 May 2000 15:34:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: So we're in agreement.... " }, { "msg_contents": "Tom Lane wrote:\n> \n> Vince Vielhaber <[email protected]> writes:\n> > You're right, it wouldn't work. It should've been like this:\n> \n> > CLIENT: md5(salt_from_server + md5(username + password)))\n> \n> > SERVER: md5(salt_from_server + stored_password)\n> \n> > The \"salt_from_server\" is your random salt. The fixed salt is the\n> > username.\n> \n> You're still not getting the point. I refer you to Ben Adida's\n> original, correct description of the way to do this:\n> \n> > - client requests login\n> > - server sends stored salt c1, and random salt c2.\n> > - client performs hash_c2(hash_c1(password)) and sends result to server.\n> > - server performs hash_c2(stored_pg_shadow) and compares with client\n> > submission.\n> > - if there's a match, there's successful login.\n> \n> There have to be *two* random salts involved, one chosen when the\n> password was set (and used to cloak the stored password against people\n> with access to pg_shadow) and one chosen for the duration of this\n> password challenge (and used to cloak the challenge transaction against\n> people sniffing the packet traffic). 
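A minimal sketch of the arithmetic behind those two salts -- pg_shadow holding MD5(password, saltP), and only MD5(MD5(password, saltP), saltS) ever crossing the wire. It assumes OpenSSL's legacy one-shot MD5() helper (link with -lcrypto); every name below is illustrative, not actual backend or libpq code.

/* Sketch only: pg_shadow would hold MD5(password || saltP); the wire
 * would carry MD5(MD5(password || saltP) || saltS).  Neither the
 * plaintext password nor the stored hash itself is ever transmitted. */
#include <stdio.h>
#include <string.h>
#include <openssl/md5.h>        /* legacy one-shot MD5(); link with -lcrypto */

/* hex-encoded MD5 of the concatenation a || b; out must hold 33 bytes */
static void md5_hex(const char *a, const char *b, char *out)
{
    unsigned char msg[256];     /* plenty for these short illustrative strings */
    unsigned char digest[MD5_DIGEST_LENGTH];
    size_t la = strlen(a), lb = strlen(b);
    int i;

    memcpy(msg, a, la);
    memcpy(msg + la, b, lb);
    MD5(msg, la + lb, digest);
    for (i = 0; i < MD5_DIGEST_LENGTH; i++)
        sprintf(out + 2 * i, "%02x", digest[i]);
}

int main(void)
{
    const char *password = "guess-me";  /* known only to the client         */
    const char *saltP = "23B3C990";     /* chosen when the password was set */
    const char *saltS = "7F3A99C1";     /* fresh salt for this connection   */
    char stored[33], inner[33], answer[33], check[33];

    md5_hex(password, saltP, stored);   /* what pg_shadow would contain     */

    md5_hex(password, saltP, inner);    /* client: rebuild the inner hash   */
    md5_hex(inner, saltS, answer);      /* ...then fold in the session salt */

    md5_hex(stored, saltS, check);      /* server: same outer step, taken   */
                                        /* straight from pg_shadow          */

    printf("login %s\n", strcmp(answer, check) == 0 ? "accepted" : "rejected");
    return 0;
}

A sniffer sees only saltS and the outer hash, while a reader of pg_shadow sees only the inner hash; keeping the two salts independent is what makes both views useless on their own.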
If you give up either one of those\n> bits of randomness then you lose a great deal.\n> \n> Using the username instead of an independent random value to salt the\n> stored password is not a small change, it is a fundamental weakening of\n> the security system. \n\nIt allows one of the salts to never be sent, thereby strengthening that \npart against _anyone_sniffing_the_traffic_ (just a little) as he sees \nonly one hash, different each time.\n\nIt allows _a_user_with_access_to_ pg_shadow _on_two_or_more_machines_ \nsee the fact that a user has the same password on both of them (which \ninfo he can then useto guess the password in two tries, as often seen \nin movies ;)\n\n> If you don't see that this is so then you don't understand anything \n> about cryptography.\n\nIt is too easy to think that you do ;).\n\nBTW, I don't claim to \"understand cryptography\" . \nWhat I said above is just plain common sense ;)\n\nAnd you will never get good security by cryptography only, not even \nby using SSL or SSH which are the right way to go if you want to protect \nagainst sniffing. \n\nThe current thread started from a simple the need to hide passwords \nfrom PG superusers and system ROOT's. For that we have two schemes:\n\nstore MD5(username+passwd)\n - hidden from sniffing but easily guessable salt (as most users are\ncalled 'bob')\n\nstore MD5(random_salt+password) , or more likely\nrandom_salt+MD5(random_salt+passord) \n or we will never find out the salt again ;)\n - both salts are known to sniffer who is still unable to do anything\nwith them\n\nthe difference between the two in mainly the fact that in first case the\nuser \nalready knows the salt and in the second case it must be transferred to\nher \nover fire - this makes the first one stronger, at least in case when the \nusername is chosen as creatively as the password ;)\n\notoh, in the second case three things should match username, salt and\nhash\nwhich of course makes the second case stronger. \n\nIf you understand which one is stronger you are smarter than I am ;)\n\n--------------\nHannu\n", "msg_date": "Sun, 07 May 2000 23:11:34 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: So we're in agreement...." }, { "msg_contents": "\"Robert B. Easter\" wrote:\n> \n> On Sun, 07 May 2000, Hannu Krosing wrote:\n> >\n> > But how will you know if the data in the field is md5 hashed ?\n> \n> I think they begin with $1$ and that the salt in the hashed string is like this:\n\nhow do you distinguish it from a plaintext password thet starts with $1$\n?\n\n> $1$<salt>$ -- a total of 12 characters of salt if you include the $1$$\n> characters. <salt> is 9 characters. Someone can correct me if this is not\n> true. I'm not an expert. :)\n\nWell in Zope they begin with {MD5} for MD5 hash. The md5 hash itself\nknows \nnothing about salt - it is just fed to the function before the password.\nAnd the digest can begin with anything, possibly even \\0 if not\n{uu|base64}encoded\n\n------------\nHannu\n", "msg_date": "Sun, 07 May 2000 23:19:04 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "On Sun, 07 May 2000, Hannu Krosing wrote:\n> \n> But how will you know if the data in the field is md5 hashed ?\n\nI think they begin with $1$ and that the salt in the hashed string is like this:\n\n$1$<salt>$ -- a total of 12 characters of salt if you include the $1$$\ncharacters. 
<salt> is 9 characters. Someone can correct me if this is not\ntrue. I'm not an expert. :)\n\nRobert B. Easter\[email protected]\n", "msg_date": "Sun, 7 May 2000 17:04:52 -0400", "msg_from": "\"Robert B. Easter\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "Hannu Krosing <[email protected]> writes:\n> Tom Lane wrote:\n>> I don't think it's optional to get rid of the cleartext password;\n>> kindly recall the original complaint we are trying to address\n>> (see subject line of this thread ;-)). So there's little value in\n>> storing two columns.\n\n> there is some value in\n\n> A. a smooth transition path via dump/restore\n\nTrue, but if we set up the translation to occur in a trigger that\nchecks for already-hashed data, then we've got that licked.\n\n> B. backwards compatibility for those that need it more than security -\n> we can still do DES-crypt authentication if we choose to\n\nThat is a definite loss, but on the other hand I now realize that the\ncrypt-based authentication has its own compatibility problems: it may\nfail with a client running on another machine already! Not to mention\nthat some popular client platforms and/or interfaces (think ODBC) never\nhave supported crypt authentication. So it's questionable that there\nare very many people using it in cross-platform situations where\nbackwards compatibility with old clients is critical. Furthermore,\nit's not like the people who do get burnt have no option at all: they\ncan switch to cleartext password authentication or one of the other\nalready-supported methods until they can bring their clients up to\nspeed. On the whole, I think the advantages of moving to MD5 clearly\noutweigh this single disadvantage.\n\n>> Also, by having just one password field we can deal with either\n>> cleartext or pre-encrypted incoming passwords fairly easily.\n>> The trigger either reformats the field, or not; no upstream code\n>> needs to worry about whether the password is already encrypted.\n>> So we don't need the \"WITH ENCRYPTED PASSWORD\" variant syntax,\n>> which is a good thing IMHO.\n\n> But how will you know if the data in the field is md5 hashed ?\n\nEasily. We will need, say, 32 bits of random salt plus the 128-bit\nMD5 hash value. Represent these in a string that consists of, say,\n8 hex digits, a '/' sign, and 32 more hex digits; use only uppercase\nA-F, not lowercase, as hex digits. Now, are you going to say that\n\t23B3C990/652B8383A48348CDEF57298289A882DD\nis plausible as a cleartext password? I don't agree...\n\n>> No, I don't think that's an improvement. Please recall that the\n>> original reason for inventing salt was to make sure that it wouldn't be\n>> obvious whether the same user was using the same password on multiple\n>> machines.\n\n> Ok. My previous impression was that it was in order to make sure that \n> two users on the same machine would not find out theat the other is \n> using the same passord as she is\n\nIt does that too. But remember that the point of not storing cleartext\npasswords is to prevent using a password stolen from User A to break\ninto User A's other accounts, not User B's accounts. 
So same username,\ndifferent machine is definitely a case we should consider.\n\n> Can we really rename users ?\n\nupdate pg_shadow set usename = 'bar' where usename = 'foo';\n\nAn ALTER USER syntax would be handier, but you don't really need it...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 07 May 2000 17:28:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords. " }, { "msg_contents": "Tom Lane writes:\n\n> How will dump/restore and upgrades cope with crypted passwords?\n\nWe could distribute a sed or awk script that you have to run on the dumped\nfile to convert the copy to create user commands. Shouldn't be hard to\nwrite, it's just a question of whether people want to put up with it. It\nseems cleaner than any of the \"magic hooks\" that have been proposed.\n\nActually, I have some ideas in the pipe that would indeed change the\nlayout of pg_shadow slightly, so this might have to happen anyway.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sun, 7 May 2000 23:34:58 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Tom Lane writes:\n\n>> How will dump/restore and upgrades cope with crypted passwords?\n\n> We could distribute a sed or awk script that you have to run on the dumped\n> file to convert the copy to create user commands. Shouldn't be hard to\n> write, it's just a question of whether people want to put up with it. It\n> seems cleaner than any of the \"magic hooks\" that have been proposed.\n\nTo my mind the real advantage of doing it in a trigger is that\nCREATE USER WITH PASSWORD and ALTER USER SET PASSWORD can accept\n*either* cleartext or already-hashed password data. That seems\nnicer than forcing the user to deal with two syntaxes, upgrade\nscripts, etc.\n\n> Actually, I have some ideas in the pipe that would indeed change the\n> layout of pg_shadow slightly, so this might have to happen anyway.\n\nHow far down the pipe? It'd be nice if we could fix pg_dumpall to\ndump CREATE USER commands a version before we actually need it ;-).\nI'd like to change the script for 7.1 (or maybe even 7.0.1) but keep\nbackwards compatibility for the old-style dump scripts until 7.2.\n(At the moment I'm kind of kicking myself for not having fixed the\nproblem when I saw it, but there was no talk of pg_shadow changes\nin the air at the time.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 07 May 2000 17:53:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords. " }, { "msg_contents": "On Sun, 7 May 2000, Tom Lane wrote:\n\n> Using the username instead of an independent random value to salt the\n> stored password is not a small change, it is a fundamental weakening of\n> the security system. \n\nThat's what I was doing, substituting the original random salt for the \nusername. 
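Returning to the earlier question of how a trigger (or the CREATE USER path) could tell pre-hashed input from cleartext, here is a small sketch of the check implied by the 8-hex-digits, '/', 32-hex-digits layout suggested above. It is purely illustrative, not actual catalog or trigger code.

/* Sketch only: recognizing the "8 hex digits, '/', 32 hex digits" layout
 * proposed above for pre-hashed passwords (uppercase A-F only). */
#include <stdio.h>
#include <string.h>

static int looks_like_md5_entry(const char *s)
{
    static const char hex[] = "0123456789ABCDEF";

    return strlen(s) == 41 &&
           strspn(s, hex) == 8 &&          /* random salt part  */
           s[8] == '/' &&
           strspn(s + 9, hex) == 32;       /* MD5 digest part   */
}

int main(void)
{
    const char *samples[] = {
        "23B3C990/652B8383A48348CDEF57298289A882DD",  /* already hashed */
        "my cleartext password",                      /* needs hashing  */
    };
    int i;

    for (i = 0; i < 2; i++)
        printf("%-45s -> %s\n", samples[i],
               looks_like_md5_entry(samples[i]) ? "already hashed" : "cleartext");
    return 0;
}

Anything failing the test would be treated as cleartext and hashed with a freshly generated salt before being stored, which is how a single password column can accept either form.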
\n\n> If you don't see that this is so then you don't\n> understand anything about cryptography.\n\nWas this smartass comment really necessary, Tom?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Sun, 7 May 2000 18:56:09 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: So we're in agreement.... " }, { "msg_contents": "Vince Vielhaber <[email protected]> writes:\n>> If you don't see that this is so then you don't\n>> understand anything about cryptography.\n\n> Was this smartass comment really necessary, Tom?\n\nProbably not. My sincerest apologies.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 07 May 2000 18:58:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: So we're in agreement.... " }, { "msg_contents": "My understanding is that what you get from crypt(pw, salt) =\n\n\t$1$<salt>$<hashed password>\n\nPlease correct me if I wrong. Again, not an expert.\n\nOn Sun, 07 May 2000, you wrote:\n> \"Robert B. Easter\" wrote:\n> > \n> > On Sun, 07 May 2000, Hannu Krosing wrote:\n> > >\n> > > But how will you know if the data in the field is md5 hashed ?\n> > \n> > I think they begin with $1$ and that the salt in the hashed string is like this:\n> \n> how do you distinguish it from a plaintext password thet starts with $1$\n> ?\n> \n> > $1$<salt>$ -- a total of 12 characters of salt if you include the $1$$\n> > characters. <salt> is 9 characters. Someone can correct me if this is not\n> > true. I'm not an expert. :)\n> \n> Well in Zope they begin with {MD5} for MD5 hash. The md5 hash itself\n> knows \n> nothing about salt - it is just fed to the function before the password.\n> And the digest can begin with anything, possibly even \\0 if not\n> {uu|base64}encoded\n> \n> ------------\n> Hannu\n-- \nRobert B. Easter\[email protected]\n", "msg_date": "Sun, 7 May 2000 19:08:33 -0400", "msg_from": "\"Robert B. Easter\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] You're on SecurityFocus.com for the cleartext\n passwords." }, { "msg_contents": "At 07:08 PM 5/7/00 -0400, Robert B. Easter wrote:\n>My understanding is that what you get from crypt(pw, salt) =\n>\n> $1$<salt>$<hashed password>\n\nI thought it was only $<salt - 2 chars>$<hashed password>\n\n\nRegards,\nStephan\n--\nStephan Richter - (901) 573-3308 - [email protected]\nCBU - Physics & Chemistry; Framework Web - Web Design & Development\nPGP Key: 735E C61E 5C64 F430 4F9C 798E DCA2 07E3 E42B 5391\n\n", "msg_date": "Sun, 07 May 2000 18:21:15 -0500", "msg_from": "Stephan Richter <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] You're on SecurityFocus.com for\n\tthe cleartext passwords." }, { "msg_contents": "At 15:34 7/05/00 -0400, Tom Lane wrote:\n>\n>You're still not getting the point. 
I refer you to Ben Adida's\n>original, correct description of the way to do this:\n>\n>> - client requests login\n>> - server sends stored salt c1, and random salt c2.\n>> - client performs hash_c2(hash_c1(password)) and sends result to server.\n>> - server performs hash_c2(stored_pg_shadow) and compares with client\n>> submission.\n>> - if there's a match, there's successful login.\n>\n\nI may well have missed something here, but it seems to me that the above\nscheme is also not particularly secure since someone who has managed to get\naccess to the pg_shadow file will be able to fake a login by using a custom\nclient. ie:\n\n - client requests login\n - server sends stored salt c1, and random salt c2.\n - client ignores c1 and performs hash_c2(some_hash_from_the_file) and\nsends result to server.\n - server performs hash_c2(stored_pg_shadow) and compares with client\n submission.\n\nHave I missed somehting here? Obviously it depends on getting a copy of\npg_shadow.\n\nIt seems that there are at least two problems to solve in the whole\npassword & authentication problem:\n\n1. How to store passwords so they can't be decrypted.\n\n2. How to perform a secure handshake with a client.\n\nYou have already solved (1) by using MD5 (or even SHA, which is faster to\ncompute).\n\nTo solve (2) it seems to me that a slightly more complex interaction must\nbe undertaken using a public key algorithm:\n\n - Client sends [username] to server\n - Server sends [public key] to client\n - Client sends [enc(public key, password)] to server. \n - server uses dec(secret key,enc) and computes MD5 hash of password,\ncomparing it to pg_shadow.\n\nThis would require a decent large integer library (which certainly exist).\nFor speed, a key pair would need to be stored on the server, since key\ngeneration is quite slow.\n\nIn order for a man-in-the-middle attack to work, the attacker would also\nneed the secret key off the server. The risk could be reduced, at the\nexpense of computation, by generating a new key pair for each client,\nalthough that would be *very* expensive.\n\nAdditionally, it may be good to allow the entire client/server comms to be\ndone as an encrypted interaction, since a man-in-the-middle may not be able\nto read the password, but they will be able to read the data...\n\nFWIW, I'd be willing to write the password and handshaking code, if no-one\nelse were interested.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 09 May 2000 01:38:40 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: So we're in agreement.... " }, { "msg_contents": "on 5/8/00 11:38 AM, Philip Warner at [email protected] wrote:\n\n> I may well have missed something here, but it seems to me that the above\n> scheme is also not particularly secure since someone who has managed to get\n> access to the pg_shadow file will be able to fake a login by using a custom\n> client. ie:\n\nYes, absolutely, but someone who gets the pg_shadow file can also alter the\ndatabase however he/she wants. 
The protocol defined prevents any knowledge\ngained from sniffing, and prevents discovery of plaintext passwords from\nlooking at pg_shadow.\n\nHowever, it does not prevent logins, as you mention, because once you have\nthe pg_shadow file, you've got everything anyways.\n\n-Ben\n\n", "msg_date": "Mon, 08 May 2000 11:59:10 -0400", "msg_from": "Benjamin Adida <[email protected]>", "msg_from_op": false, "msg_subject": "Re: So we're in agreement.... " }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> To solve (2) it seems to me that a slightly more complex interaction must\n> be undertaken using a public key algorithm:\n\n> - Client sends [username] to server\n> - Server sends [public key] to client\n> - Client sends [enc(public key, password)] to server. \n> - server uses dec(secret key,enc) and computes MD5 hash of password,\n> comparing it to pg_shadow.\n\nHmm. The main problem with this is that once we get into having actual\nencryption/decryption code in Postgres, we are going to run afoul of US\nexport regulations and other headaches. MD5 doesn't pose that problem\nbecause it's only a hashing algorithm not an encryptor. I see your\npoint though, that requiring the client to send something one step\nupstream from what's stored in pg_shadow would make it harder to do\nanything useful by stealing pg_shadow. Can we get the same result with\njust MD5 operations?\n\nOne possibility that comes to mind is that we store MD5(MD5(password))\nin pg_shadow, and expect the client to transmit MD5(password).\nOf course that needs a cloaking scheme if you want to protect against\npassword sniffing, but offhand it seems that the same scheme Ben Adida\nproposed should still work...\n\n> Additionally, it may be good to allow the entire client/server comms to be\n> done as an encrypted interaction, since a man-in-the-middle may not be able\n> to read the password, but they will be able to read the data...\n\nWe have SSL capability already. I don't feel an urge to reinvent SSL.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 08 May 2000 12:02:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: So we're in agreement.... " }, { "msg_contents": "At 12:02 8/05/00 -0400, Tom Lane wrote:\n>Philip Warner <[email protected]> writes:\n>\n>Hmm. The main problem with this is that once we get into having actual\n>encryption/decryption code in Postgres, we are going to run afoul of US\n>export regulations and other headaches. \n\nI thought that they had been relaxed recently, but I take the point: it's\nprobably not been relaxed enough. Then again, maybe 48 bit (or 56 or 112 or\nwhatever) is sufficient.\n\n\n>One possibility that comes to mind is that we store MD5(MD5(password))\n>in pg_shadow, and expect the client to transmit MD5(password).\n>Of course that needs a cloaking scheme if you want to protect against\n>password sniffing, but offhand it seems that the same scheme Ben Adida\n>proposed should still work...\n\nYes. This seems like a much simpler solution - getting the pg_shadow file\nis almost useless, as long as the effectiveness of dictionary attacks is\nreduced by a sufficiently large salt. \n\nIt still does not protect against a man-in-the-middle attack, since the new\n'password' is just a 160 bit value, rather than clear text. Unless SSL is\nused, of course.\n\n>\n>We have SSL capability already. I don't feel an urge to reinvent SSL.\n>\n\nSounds pretty reasonable to me. 
But if SSL is already there, don't you have\nimport/export problems?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 09 May 2000 02:34:43 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: So we're in agreement.... " }, { "msg_contents": "Philip Warner <[email protected]> writes:\n>> We have SSL capability already. I don't feel an urge to reinvent SSL.\n\n> Sounds pretty reasonable to me. But if SSL is already there, don't you have\n> import/export problems?\n\nNo, because we don't include SSL in the distribution. There are just\nhooks to call it.\n\n(At one time there were export regs against even having hooks for crypto\n:-(, but I believe they finally agreed that was pretty silly...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 08 May 2000 12:40:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: So we're in agreement.... " }, { "msg_contents": "On Tue, 9 May 2000, Philip Warner wrote:\n\n> At 12:02 8/05/00 -0400, Tom Lane wrote:\n> >Philip Warner <[email protected]> writes:\n> >\n> >Hmm. The main problem with this is that once we get into having actual\n> >encryption/decryption code in Postgres, we are going to run afoul of US\n> >export regulations and other headaches. \n> \n> I thought that they had been relaxed recently, but I take the point: it's\n> probably not been relaxed enough. Then again, maybe 48 bit (or 56 or 112 or\n> whatever) is sufficient.\n\nBeing a Canadian based project, like OpenBSD, I do not believe that these\nissues apply ... even FreeBSD, based in California, appears to be getting\naround them now ... something about making you download rsaref seperately,\nthey can now ship with OpenSSH included as part of the system?\n\n\n", "msg_date": "Mon, 8 May 2000 13:54:26 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: So we're in agreement.... " }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> Being a Canadian based project, like OpenBSD, I do not believe that these\n> issues apply ...\n\nIf we had to do it that way we could, but I'd just as soon not create\nany questions for US mirror sites.\n\n> even FreeBSD, based in California, appears to be getting\n> around them now ... something about making you download rsaref seperately,\n\nYes, I think this is the cleanest current answer: you can connect to a\nseparately-distributed crypto engine and then it's not your problem.\n\nMD5 (or SHA1 if people like that better) isn't a crypto engine according\nto the rules, so including that in our distribution is a non-issue,\nbut a reversible encryptor might be an issue.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 08 May 2000 13:08:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: So we're in agreement.... " }, { "msg_contents": "Stephan Richter wrote:\n> \n> At 07:08 PM 5/7/00 -0400, Robert B. 
Easter wrote:\n> >My understanding is that what you get from crypt(pw, salt) =\n> >\n> > $1$<salt>$<hashed password>\n\nThat's for DES crypt (and without the $$)\n\n---------\nHannu\n", "msg_date": "Mon, 08 May 2000 22:48:35 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] You're on SecurityFocus.com forthe cleartext\n\tpasswords." }, { "msg_contents": "Tom Lane writes:\n\n> We already have the ability to work with an externally provided SSL\n> library.\n\nDoes it actually work? Has anybody tried it? Is it documented anywhere?\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Mon, 8 May 2000 23:37:25 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Tom Lane writes:\n>> We already have the ability to work with an externally provided SSL\n>> library.\n\n> Does it actually work? Has anybody tried it? Is it documented anywhere?\n\nPicky, picky ;-)\n\nIt looks like you compile with USE_SSL (which ought to be listed as an\navailable option in config.h.in, but isn't; someday it should be a\nconfigure option, perhaps) and then add \"-l\" to the postmaster switches.\nAt least \"-l\" is documented.\n\nThere are some interactions between SSL-client-and-non-SSL-server, etc,\nwhich you can read about in the pghackers archives from last year, if\nnot in the docs. Also, I thought there was supposed to be a postmaster\noption to refuse non-SSL connections, but I don't see it now...\n\nAs for whether it works, damifino; I don't have SSL installed here.\nI do have a note in my todo list that speculates that the recent changes\nfor non-blocking connect in libpq may have broken its SSL support, and I\nasked the pghackers list about that in January. But I didn't get around\nto looking at it, and no one else picked up on it either.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 08 May 2000 18:50:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords. " }, { "msg_contents": "Tom Lane wrote:\n\n> One possibility that comes to mind is that we store MD5(MD5(password))\n> in pg_shadow, and expect the client to transmit MD5(password).\n> Of course that needs a cloaking scheme if you want to protect against\n> password sniffing, but offhand it seems that the same scheme Ben Adida\n> proposed should still work...\n\nThat would be pretty close to the RFC 2617 Digest Authentication. Why\ndon't we use that? Using a existing, widespread standard is good in\nterms of portability, and saves on validating the principal algorithm.\n\nSevo\n\n-- \[email protected]\n", "msg_date": "Tue, 09 May 2000 13:44:39 +0200", "msg_from": "Sevo Stille <[email protected]>", "msg_from_op": false, "msg_subject": "Re: So we're in agreement...." }, { "msg_contents": "Tom Lane writes:\n\n> > Actually, I have some ideas in the pipe that would indeed change the\n> > layout of pg_shadow slightly, so this might have to happen anyway.\n> \n> How far down the pipe?\n\nIt would have to do with the access control work which I had planned to\nlook at. With the summer and all coming up and the hopefully shorter\nrelease cycle I'm not sure whether I'm going to get to it. 
The\nconfiguration and build clean-up should happen first anyway.\n\n> It'd be nice if we could fix pg_dumpall to dump CREATE USER commands a\n> version before we actually need it ;-).\n\nThe problem is that CREATE USER doesn't cover all fields of pg_shadow, in\nparticular usecatupd. Though perhaps this field is obscure enough to not\nbother. Also this will be pretty tricky to get to work for groups. (That\npg_group table really needs a redesign.)\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 9 May 2000 22:51:52 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n>> It'd be nice if we could fix pg_dumpall to dump CREATE USER commands a\n>> version before we actually need it ;-).\n\n> The problem is that CREATE USER doesn't cover all fields of pg_shadow, in\n> particular usecatupd. Though perhaps this field is obscure enough to not\n> bother.\n\nWell, we'd also want to make sure that CREATE and/or ALTER USER can be\nused to set everything in pg_shadow. A few more optional clauses\nshouldn't be a big deal ...\n\n> Also this will be pretty tricky to get to work for groups. (That\n> pg_group table really needs a redesign.)\n\nTrue. I'm inclined to think that should be looked at in the context\nof the schema support that people have been muttering about --- maybe\ngroups can be replaced by schemas somehow? (Just a thought, maybe a\nhalf-baked one.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 09 May 2000 16:56:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords. " }, { "msg_contents": "If I understand the original objection it's that passwords are stored \nin cleartext on the postmaster machine. That's not much of an \nobjection since you have to have your secrets available in the clear \non both ends of a connection if you want the traffic on the \nconnection secured. Both the Solaris and NetBSD ppp implementations \nhave this characteristic. Kerberos IV encrypts the passwords on the \nserver, but the master password for that encryption is still stored \nin \"/.k\" so it's the same in principle.\n\nIf I understand the current password implementation the postmaster \nchooses the salt, sends it to the client which does a unix crypt and \nreturns the encrypted password for verification. This is a nice, \nsimple system which provides what I would consider a reasonable \nminimum of security.\n\nBut it's not *really* secure. For one thing a bad guy could \nintercept the encrypted password and feed it to one of the \npassword-guessing programs, like crack. It's not very robust to \nman-in-the-middle attacks, either. Do we know how predictable the \nsalt-choosing algorithm is? What if a counterfeit server requested \nauthentication with a carefully-chosen salt (like 0)?\n\nWe are not in the business of creating security protocols. IMHO we \nshould leave that to the people who are. If we want something better \nthan the password scheme we have then we should adopt an existing \nstandard. Chap, as used with ppp, comes to mind. It might be a good \nsubstitute for the password protocol.\n\nRemember that we already support kerberos. Also there is no reason \nyou can't run the connection over an ssh tunnel. 
That solution would \nprotect the data as well as the passwords. There are applications \nwhere the important information is not what's in the database, but \nwhat information did someone want from it and when did they ask.\n\nOut of curiosity does SecurityFocus.com also criticise pppd for the \nsame \"problem\"? The problem is not us storing the passwords in the \nclear, but rather are we careful about the access permissions on the \nfile that contains them, and are we careful about how they get passed \naround.\n\nSignature failed Preliminary Design Review.\nFeasibility of a new signature is currently being evaluated.\[email protected], or [email protected]\n", "msg_date": "Tue, 9 May 2000 14:02:17 -0700", "msg_from": "\"Henry B. Hotz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext\n passwords." }, { "msg_contents": "\"Henry B. Hotz\" <[email protected]> writes:\n> But it's not *really* secure. For one thing a bad guy could \n> intercept the encrypted password and feed it to one of the \n> password-guessing programs, like crack. It's not very robust to \n> man-in-the-middle attacks, either. Do we know how predictable the \n> salt-choosing algorithm is? What if a counterfeit server requested \n> authentication with a carefully-chosen salt (like 0)?\n\nI doubt the latter is a problem; AFAIK there are no weak salt values\nin crypt() --- remember the salt is not a key.\n\nHowever, the relatively small number of legal salt values (4096 IIRC)\nis a weakness; an attacker who'd sniffed one encrypted password could\nhope to get in by repeatedly connecting until he's challenged with\nthat same salt, and then he just gives the captured encrypted password.\nIf the salt-choosing code has any predictability then it might take\nmuch less than ~4K tries, but that number is too small anyway.\n\nI thought one of the major reasons for switching to a new protocol\nis that we could include much wider random salt values in it, so\nas to render that sort of attack impractical.\n\nAs for man-in-the-middle attacks, stealing passwords is the least\nof our worries in that scenario --- the attacker could just wait\nfor login to complete and then insert his own queries into the\nconversation. I think we have to rely on end-to-end encryption\nlike SSH or SSL to defend against that sort of thing.\n\n> We are not in the business of creating security protocols. IMHO we \n> should leave that to the people who are. If we want something better \n> than the password scheme we have then we should adopt an existing \n> standard.\n\nAw, that's no fun :-). But you're right, we should look to see if there\nare existing standards that meet all the criteria we are looking for.\n\n> Out of curiosity does SecurityFocus.com also criticise pppd for the \n> same \"problem\"?\n\nI checked and in fact there is nothing official about this \"criticism\";\nit's just one message posted on a web bbs by someone with no obvious\ncredentials. Still, given the other headaches that reliance on crypt()\ncauses us, it seems to make sense to work on a replacement password\nscheme that deals with more problems than just cleartext password\nstorage.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 09 May 2000 17:27:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: You're on SecurityFocus.com for the cleartext passwords." }, { "msg_contents": "Tom Lane wrote:\n >> Also this will be pretty tricky to get to work for groups. 
(That\n >> pg_group table really needs a redesign.)\n >\n >True. I'm inclined to think that should be looked at in the context\n >of the schema support that people have been muttering about --- maybe\n >groups can be replaced by schemas somehow? (Just a thought, maybe a\n >half-baked one.)\n\nPlease don't lose group support; it is a separate issue from schemas.\n\nIf schemas were available, I would use them to separate or subdivide\nprojects. For example, a financial accounting system would logically\ndivide into modules, some of which would be universally required\n(general ledger, base support, etc.) and some of which would be \nof interest to smaller groups of customers (sales ledger, order processing,\ninventory control, payroll). Each of these modules would contain a\nlogically separable set of data elements (tables, views, etc.).\nHowever, add-on modules are very likely to need to access data from\nother modules. I would use schemas to implement these modules and\nthey could cross-refer to each other with the specification \n`schema.table.column'.\n\nIt should be possible for separate schemas to be housed on different\nmachines, linked by networking.\n\nDifferent sets of users need to access different parts of the data, in\ndifferent ways. This is not necessarily related to the separation of\nmodules. For example, the directors of a company might have private\nloan accounts in the general ledger module, and their pay would be\nhandled by the payroll module; it is likely that access to their \nrecords would be heavily restricted. This is more easily done by \ngroup privileges, by giving restricted access to a \"directors\" group,\nthan by trying to split the database schema down further. Similarly,\nwhere there is extensive separation of duties, one group may\nhave write access for payments but not for invoices.\n\nFinally, use of groups makes it far easier to handle the privileges of\nnew employees. It is much easier to make a new employee a member of\nthe \"buyers\" group than to grant access rights individually to each of\na number of tables, spread across several schemas.\n\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"Therefore, my beloved brethren, be ye stedfast, \n unmoveable, always abounding in the work of the Lord, \n forasmuch as ye know that your labour is not in vain \n in the Lord.\" I Corinthians 15:58 \n\n\n", "msg_date": "Tue, 09 May 2000 22:33:36 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Groups and schemas (was: You're on SecurityFocus.com...)" }, { "msg_contents": "When creating a child (through CREATE TABLE ... INHERIT (parent)) it\nseems the child gets all of the parent's contraints _except_ its PRIMARY\nKEY. Is this normal? Should I add a PRIMARY KEY(id) statement each time\nI create an inherited table?\n\nCheers,\n\n-- \nLouis-David Mitterrand - [email protected] - http://www.apartia.fr\n\nIf at first you don't succeed, redefine success.\n", "msg_date": "Sat, 3 Jun 2000 17:22:56 +0200", "msg_from": "Louis-David Mitterrand <[email protected]>", "msg_from_op": false, "msg_subject": "child table doesn't inherit PRIMARY KEY?" }, { "msg_contents": "On Sat, Jun 03, 2000 at 05:22:56PM +0200, Louis-David Mitterrand wrote:\n> When creating a child (through CREATE TABLE ... INHERIT (parent)) it\n> seems the child gets all of the parent's contraints _except_ its PRIMARY\n> KEY. 
Is this normal? Should I add a PRIMARY KEY(id) statement each time\n> I create an inherited table?\n\nFollowing up to my previous message, I found that one can't explicitely\nadd a PRIMARY KEY on child table referencing a field on the parent\ntable, for instance:\n\n CREATE TABLE auction (\n id SERIAL PRIMARY KEY,\n title text,\n ... etc...\n );\n\nthen \n\n CREATE TABLE auction_dvd (\n zone int4,\n PRIMARY KEY(\"id\")\n ) inherits(\"auction\");\n\ndoesn't work:\n ERROR: CREATE TABLE: column 'id' named in key does not exist\n\nBut the aution_dvd table doesn't inherit the auction table's PRIMARY\nKEY, so I can insert duplicates.\n\nSolutions:\n\n1) don't use PRIMARY KEY, use UNIQUE NOT NULL (which will be inherited?)\nbut the I lose the index,\n\n2) use the OID field, but it's deprecated by PG developers?\n\nWhat would be the best solution?\n\nTIA\n\n-- \nLouis-David Mitterrand - [email protected] - http://www.apartia.fr\n\nVeni, Vidi, VISA.\n", "msg_date": "Sat, 3 Jun 2000 17:51:56 +0200", "msg_from": "Louis-David Mitterrand <[email protected]>", "msg_from_op": false, "msg_subject": "Re: child table doesn't inherit PRIMARY KEY?" }, { "msg_contents": "Louis-David Mitterrand writes:\n\n> When creating a child (through CREATE TABLE ... INHERIT (parent)) it\n> seems the child gets all of the parent's contraints _except_ its PRIMARY\n> KEY. Is this normal?\n\nIt's kind of a bug.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sun, 4 Jun 2000 03:46:53 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: child table doesn't inherit PRIMARY KEY?" }, { "msg_contents": "On Sun, Jun 04, 2000 at 03:46:53AM +0200, Peter Eisentraut wrote:\n> Louis-David Mitterrand writes:\n> \n> > When creating a child (through CREATE TABLE ... INHERIT (parent)) it\n> > seems the child gets all of the parent's contraints _except_ its PRIMARY\n> > KEY. Is this normal?\n> \n> It's kind of a bug.\n\nIs it a well-known bug or have I discovered it? ;-)\n\n(I am sending a copy of the bug report to -hackers)\n\nThanks,\n\n-- \nLouis-David Mitterrand - [email protected] - http://www.apartia.fr\n\n \"God is a mathematician of very high order, and he used very\n advanced mathematics in constructing the universe.\" (Dirac)\n", "msg_date": "Sun, 4 Jun 2000 09:20:16 +0200", "msg_from": "Louis-David Mitterrand <[email protected]>", "msg_from_op": false, "msg_subject": "Re: child table doesn't inherit PRIMARY KEY?" }, { "msg_contents": "\nThis is purely a 'we missed some stuff' release, with the only change\nbeing:\n\nChanges\n-------\nAdded documentation to tarball.\n\n\nSo, if you are already running, you don't need this ... this is just to\nhelp Lamar out with the RPMs more then anything ...\n\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 5 Jun 2000 15:23:47 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "[v7.0.2] Very small cleanup release ..." }, { "msg_contents": "The Hermit Hacker wrote:\n >\n >This is purely a 'we missed some stuff' release, with the only change\n >being:\n \nbut where is it? 
ftp.postgresql.org/pub/latest still points to 7.0.1\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Let your conversation be without covetousness; and be \n content with such things as ye have. For he hath said,\n I will never leave thee, nor forsake thee.\" \n Hebrews 13:5 \n\n\n", "msg_date": "Mon, 05 Jun 2000 21:42:48 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] [v7.0.2] Very small cleanup release ... " }, { "msg_contents": "Oliver Elphick wrote:\n> \n> The Hermit Hacker wrote:\n> >\n> >This is purely a 'we missed some stuff' release, with the only change\n> >being:\n \n> but where is it? ftp.postgresql.org/pub/latest still points to 7.0.1\n\nftp.postgresql.org/pub/source/v7.0.2 was where I pulled it from.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 05 Jun 2000 16:51:59 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [v7.0.2] Very small cleanup release ..." }, { "msg_contents": "\nfixed ...\n\nOn Mon, 5 Jun 2000, Oliver Elphick wrote:\n\n> The Hermit Hacker wrote:\n> >\n> >This is purely a 'we missed some stuff' release, with the only change\n> >being:\n> \n> but where is it? ftp.postgresql.org/pub/latest still points to 7.0.1\n> \n> -- \n> Oliver Elphick [email protected]\n> Isle of Wight http://www.lfix.co.uk/oliver\n> PGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\n> GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n> ========================================\n> \"Let your conversation be without covetousness; and be \n> content with such things as ye have. For he hath said,\n> I will never leave thee, nor forsake thee.\" \n> Hebrews 13:5 \n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 6 Jun 2000 00:50:26 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [v7.0.2] Very small cleanup release ... " }, { "msg_contents": "Hi folks,\n\nThis is my first post to your list. I've been reading it for about a week. I like the quality of the developers here and think \nthis portends well for the future of Postgres.\n\nAnyway, an idea. Not sure if RDBMSs internally already implement this technique. But in case them don't and in case \nyou've never thought of it here something I just thought of:\n\nCHAR fields have different sorting (aka collation) rules for each code page. eg the very fact that A comes before B is \nsomething that the collation info for a given code page has to specify. Well, just because a character has a lower value \nthan another character in its encoding in a given code page doesn't mean it gets sorted first. \n\nSo let me cut to the chase: I'm thinking that rather than store the actual character sequence of each field (or some \nsubset of a field) in an index why not translate the characters into their collation sequence values and store _those_ in \nthe index? \n\nThe idea is to reduce the number of times that string has to be converted to its mathematical sorting order representation. \nDon't do it every time two strings get compared. 
Do it when a record is inserted or that field is updated.\n\nIs this already done? Or is it not such a good idea for some reason? \n\nI'd consider this idea of greater value in something like Unicode. For 16 bit Unicode the lookup table to find each \ncharacter's ordinal value (or sorting value, whatever its called) is 128k, right? Doing a bunch of look-ups into that has to \nnot be good for L1 and L2 cache in a processor. \n\n\n\n\n", "msg_date": "Wed, 21 Jun 2000 13:14:58 -0700", "msg_from": "\"Randall Parker\" <[email protected]>", "msg_from_op": false, "msg_subject": "An idea on faster CHAR field indexing" }, { "msg_contents": "\n> So let me cut to the chase: I'm thinking that rather than store the\n> actual character sequence of each field (or some subset of a field)\n> in an index why not translate the characters into their collation\n> sequence values and store _those_ in the index?\n\nThis is not an obvious win, since:\n\n1. some collations rules require multiple passes over the data\n\n2. POSIX strxfrm() will convert strings of characters to a form that\n can be compared by strcmp() [i.e. single pass] but tends to greatly\n increase memory requirements\n\n I've only data for one implementation of strxfrm(), but the memory\n usage startled me. In my application it was faster to use\n strcoll() directly for collation than to pre-expand the data with\n strxfrm().\n\nRegards,\n\nGiles\n\n", "msg_date": "Thu, 22 Jun 2000 06:59:06 +1000", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: An idea on faster CHAR field indexing " }, { "msg_contents": "Giles,\n\nI'm curious as to why the need for multiple passes. Is that true even in Latin 1 code pages? If not, this optimization \ncould at least be used for code pages that don't require multiple passes.\n\nAs for memory usage: I don't see the issue here. The translation to some collation sequence has to be done anyhow. \nWriting one's own routine to do look-ups into a collation sequence table is a fairly \ntrivial exercise. \n\nOne would have the option with SBCS code pages to either translate to 8 bit collation values or to translate them into \nmaster Unicode collation values. Not sure what the advantage would be of doing the \nlatter. I only see it as useful if you have different rows storing text in different code pages and then only if the RDBMS \ncan know for a given field on a per row basis what its code page is.\n\nOn Thu, 22 Jun 2000 06:59:06 +1000, Giles Lean wrote:\n\n>\n>> So let me cut to the chase: I'm thinking that rather than store the\n>> actual character sequence of each field (or some subset of a field)\n>> in an index why not translate the characters into their collation\n>> sequence values and store _those_ in the index?\n>\n>This is not an obvious win, since:\n>\n>1. some collations rules require multiple passes over the data\n>\n>2. POSIX strxfrm() will convert strings of characters to a form that\n> can be compared by strcmp() [i.e. single pass] but tends to greatly\n> increase memory requirements\n>\n> I've only data for one implementation of strxfrm(), but the memory\n> usage startled me. 
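For concreteness, a small sketch of the trade-off being discussed: pre-expanding keys once with strxfrm() so that later comparisons are plain strcmp(), versus calling strcoll() at comparison time. The strings and locale are illustrative, and the printed key sizes are the expansion being referred to.

/* Sketch only: the two approaches being compared above.  Real index code
 * would do the strxfrm() step once, when the row is inserted or updated,
 * and store the transformed key; here both are simply shown side by side. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <locale.h>

int main(void)
{
    const char *a = "apple";
    const char *b = "Banana";
    size_t la, lb;
    char *ka, *kb;

    setlocale(LC_COLLATE, "");      /* whatever locale the server runs in */

    /* option 1: pay for strcoll() on every comparison */
    int direct = strcoll(a, b);

    /* option 2: transform once, then later comparisons are plain strcmp() */
    la = strxfrm(NULL, a, 0) + 1;   /* transformed keys are often much     */
    lb = strxfrm(NULL, b, 0) + 1;   /* longer than the source strings      */
    ka = malloc(la);
    kb = malloc(lb);
    strxfrm(ka, a, la);
    strxfrm(kb, b, lb);
    int viakey = strcmp(ka, kb);

    printf("strcoll(): %d   strcmp() on strxfrm() keys: %d\n", direct, viakey);
    printf("key bytes: %zu and %zu   source bytes: %zu and %zu\n",
           la, lb, strlen(a) + 1, strlen(b) + 1);

    free(ka);
    free(kb);
    return 0;
}

In an en_US-style locale the two comparisons agree in sign (which is the whole point of strxfrm()), but the key sizes show why pre-expansion is not automatically a win for an index.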
In my application it was faster to use\n> strcoll() directly for collation than to pre-expand the data with\n> strxfrm().\n>\n>Regards,\n>\n>Giles\n>\n\n\n\n", "msg_date": "Wed, 21 Jun 2000 14:15:30 -0700", "msg_from": "\"Randall Parker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: An idea on faster CHAR field indexing" }, { "msg_contents": "I've seen a number of RDBMSs that require that an entire database (whatever they call a database) \nhas to be in the same code page. There are disadvantages in this. With those disadvantages in mind \nand with the idea that it would be worth examining options other than Unicode I typed up a list of \nsome of the ways that databases could handle lots of different national languages and code pages. \n\nSuppose you have to develop a database that will store text strings for a large number of languages \ncutting across several code pages. Well, you have a few choices:\n\n1) Make the entire database Unicode (providing that your RDBMS supports Unicode).\n At least this way you know that you can store any text that you have. Whether it is Swedish, \nRussian, Japanese, German, and English is not a problem. \n The problem is that all CHAR fields in all tables are now Unicode. That makes all text storage take \nup more room. It also makes sorting and indexing take more time. It also requires translation between \nUnicode and the Code Page that any particular client PC is using. \n Aside: DB2 has a feature whereby the client PC can declare itself to the RDBMS as being in a \nparticular code page and then DB2 does all the back-end to client-end code page translation going in \nboth directions. \n\n2) Allow each table to be in a particular code page.\n This is not too satisfying. How can you do foreign keys between tables when two tables are in two \ndifferent code pages? You'd have to ignore the differences and assume that only the first 127 chars \nare used and that they are the same in the code pages of two different code pages. \n\n3) Allow individual columns to have code pages to be declared for them. \n This is okay. But it has downsides:\n A) For every national language or at least for every code page you want to support you have to \ncreate a new column. \n Picture a translation table. You might have 30 languages you translate into. So what do you do? \nMake 30 columns? You'd have a column that was a TokenID that describes what phrase or word the \nrow represents. Then all the translation columns. \n Having a column per language results in more columns than if you have a column per code page. \nBut if you have a column per code page that results in more rows. The reason is that you might put \nFrench, German, English, and several other languages in ISO 8859-1. Well, they each need to go on \na different line. But if only Japanese goes into the SHIFT-JIS (eg CP932 or CP942) column then only \none of those rows has Japanese in it. Do you put the Japanese on the same row as the English \ntranslation? Or do you put it on its own row? \n You end up with a sparse matrix appearance if you do one column for each code page. But you \nend up with a lot more columns if you do one column for each language. Then you run into limits of \nhow many columns and how many bytes a particular RDBMS can support. \n \n4) Mix code pages in a single column. Different rows may have different code pages. \n I've done this in DB2. One gives up the ability to do indexing of that column. After all, whose code \npage collation rules do you use? 
\n That limitation is okay if the table is a translation table for token ids that are used in the \ndevelopment of software. Basically, the programmers writing some app use Token ID constants in \ntheir code to specify what text gets displayed. Then the database contains the mappings from those \ntoken ids to the various national languages. In the case where I did this the database is populated \nfrom some outside source of data that comes from translators who don't even have direct access to \nthe database. The table then is just a storehouse that is indexed on the token id and national \nlanguage fields. \n Of course, one could come up with some scheme whereby the RDBMS would somehow know for \neach row and field what its code page is. One would need to have a way to declare a field as having a \nvariable code page and then to have a way to set a code page value along with the text for a given \nfield. I'm not arguing for this approach. Just pointing it out for completeness. \n\nNote that in order to support the mixing of code pages in a column you have to have one of two \nconditions: \n A) The column is declared in some code page. But the RDBMS can not enforce the requirement \nthat all text going into that column be in the set of encodings that are legal in that code page. Some \nof the code pages that will be put in rows in that column may use encodings that are not legal in the \ndeclared code page.\n B) Have a binary character type that is not in any code page. eg see DB2's CLOB, DBCLOB, \nVARGRAPHIC and other similar fields. I forget which one of those I used but I have used one of them. \n\n\n\n\n", "msg_date": "Wed, 21 Jun 2000 14:25:44 -0700", "msg_from": "\"Randall Parker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Thoughts on multiple simultaneous code page support" }, { "msg_contents": "\n> I'm curious as to why the need for multiple passes. Is that true\n> even in Latin 1 code pages?\n\nYes. Some locales want strings to be ordered first by ignoring any\naccents on characters, then using a tie-break on equal strings by doing\na comparison that includes the accents.\n\nTo take another of your points out of order: this is an obstacle that\nUnicode doesn't resolve. Unicode gives you a character set capable of\nrepresenting characters from many different locales, but collation\norder will remain locale specific.\n\n> If not, this optimization could at least\n> be used for code pages that don't require multiple passes.\n\n... but due to the increased memory/disk space, this is likely not an\noptimisation. Measurements needed, I'd suggest.\n\nMy only experience of this was tuning a sort utility, where the extra\ntime to convert the strings with strxfrm() and the large additional\nmemory requirement killed any advantage strcmp() had over strcoll().\nWhether this would be the case for database indexes in general or\nindeed ever I don't know.\n\n> As for memory usage: I don't see the issue here. The translation to\n> some collation sequence has to be done anyhow. \n\nNo; you can do the comparisons in multiple passes instead without\nextra storage allocation. Using multiple passes will be efficient if\nthe comparisons mostly don't need the second pass, which I suspect is\ntypical.\n\n> Writing one's own routine to do look-ups into a collation sequence\n> table is a fairly trivial exercise.\n\nTrue. 
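The simplest form of that multi-pass comparison is something like the sketch below; the primary[] and accent[] weight tables are hypothetical stand-ins for whatever per-character data a real locale would need:

    #include <stddef.h>

    /*
     * Hypothetical weight tables: primary[] gives every form of a base
     * letter the same value (accents ignored), accent[] distinguishes
     * them.  Both are assumed to map '\0' to 0 so that a prefix sorts
     * before a longer string.
     */
    extern const unsigned char primary[256];
    extern const unsigned char accent[256];

    static int
    pass_compare(const unsigned char *a, const unsigned char *b,
                 const unsigned char weight[256])
    {
        size_t i = 0;

        while (a[i] != '\0' && b[i] != '\0')
        {
            int d = (int) weight[a[i]] - (int) weight[b[i]];

            if (d != 0)
                return d;
            i++;
        }
        return (int) weight[a[i]] - (int) weight[b[i]];
    }

    /*
     * Pass one orders by base letters only; the accents are consulted as
     * a tie-break, and only when pass one finds the strings equal.  No
     * per-comparison storage is allocated, unlike the strxfrm() approach.
     */
    int
    collate_two_pass(const unsigned char *a, const unsigned char *b)
    {
        int d = pass_compare(a, b, primary);

        return d != 0 ? d : pass_compare(a, b, accent);
    }
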
But if you can't do character-by-character comparisons then\nsuch a simplistic implementation will fail.\n\nI hadn't mentioned this time around (but see the archives for the\nrecent discussion of LIKE) that there are locales with 2:1 and 1:2\nmappings of characters too.\n\nRegards,\n\nGiles\n\n", "msg_date": "Thu, 22 Jun 2000 11:12:54 +1000", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: An idea on faster CHAR field indexing " }, { "msg_contents": "\n> 1) Make the entire database Unicode\n> ...\n> It also makes sorting and indexing take more time.\n\nMentioned in my other email, but what collation order were you\nproposing to use? Binary might be OK for unique keys but that doesn't\nhelp you for '<', '>' etc.\n\nMy expectation (not the same as I'd like to see, necessarily, and not\nthat my opinion counts -- I'm not a developer) would be that each\ndatabase have a locale, and that this locale's collation order be used\nfor indexing, LIKE, '<', '>' etc. If you want to store data from\nmultiple human languages using a locale that has Unicode for its\ncharacter set would be appropriate/necessary.\n\nRegards,\n\nGiles\n\n", "msg_date": "Thu, 22 Jun 2000 11:17:14 +1000", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Thoughts on multiple simultaneous code page support " }, { "msg_contents": "Giles,\n\nOn Thu, 22 Jun 2000 11:12:54 +1000, Giles Lean wrote:\n\n>Yes. Some locales want strings to be ordered first by ignoring any\n>accents on chracters, then using a tie-break on equal strings by doing\n>a comparison that includes the accents.\n\nI guess I don't see how this is really any different. Why order first by the character and second by the accent? For instance, \nif you know the relative order of the various forms of \"o\" then just give them all successive numbers and do a single pass \nsort. You just have to make sure that all the numbers in that set of numbers are greater than the number you assign to \"m\" \nand less than the number you assign to \"p\".\n\n>To take another of your points out of order: this is an obstacle that\n>Unicode doesn't resolve. Unicode gives you a character set capable of\n>representing characters from many different locales, but collation\n>order will remain locale specific.\n\nWith Unicode you have to have a collation order that cuts across what use to be separate character sets in separate code \npages. \n\n>... but due to the increased memory/disk space, this is likely not an\n>optimisation. Measurements needed, I'd suggest.\n\nBut why is there increased memory and disk space? Do the fields that go into an index not now already get stored twice? \nDoes the index just contain a series of references to records and that is it? \n\n\n\n\n\n", "msg_date": "Wed, 21 Jun 2000 18:45:20 -0700", "msg_from": "\"Randall Parker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: An idea on faster CHAR field indexing" }, { "msg_contents": "\"Randall Parker\" <[email protected]> writes:\n> On Thu, 22 Jun 2000 11:12:54 +1000, Giles Lean wrote:\n>> Yes. Some locales want strings to be ordered first by ignoring any\n>> accents on chracters, then using a tie-break on equal strings by doing\n>> a comparison that includes the accents.\n\n> I guess I don't see how this is really any different. Why order first\n> by the character and second by the accent? 
For instance, if you know\n> the relative order of the various forms of \"o\" then just give them all\n> successive numbers and do a single pass sort. You just have to make\n> sure that all the numbers in that set of numbers are greater than the\n> number you assign to \"m\" and less than the number you assign to \"p\".\n\nNope. Would it were that easy. I don't have a keyboard that will\nlet me type a proper example, but consider\n\n1.\ta o-with-type-1-accent c\n\n2.\ta o-with-type-2-accent b\n\nIf type-1 accent sorts before type-2 then your proposal will consider\nstring 1 less than string 2. But the correct answer (in these locales)\nis the other way round, because you mustn't look at the accents at all\nunless you discover that the strings are otherwise equal. The\ndetermining comparison here is that b < c, therefore string 2 < string 1.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Jun 2000 23:01:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: An idea on faster CHAR field indexing " }, { "msg_contents": "Giles Lean <[email protected]> writes:\n> My only experience of this was tuning a sort utility, where the extra\n> time to convert the strings with strxfrm() and the large additional\n> memory requirement killed any advantage strcmp() had over strcoll().\n> Whether this would be the case for database indexes in general or\n> indeed ever I don't know.\n\nInteresting. That certainly suggests strxfrm could be a loser for\na database index too, but I agree it'd be nice to see some actual\nmeasurements rather than speculation.\n\nWhat locale(s) were you using when testing your sort code? I suspect\nthe answers might depend on locale quite a bit...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Jun 2000 23:18:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: An idea on faster CHAR field indexing " }, { "msg_contents": "\n> Interesting. That certainly suggests strxfrm could be a loser for\n> a database index too, but I agree it'd be nice to see some actual\n> measurements rather than speculation.\n> \n> What locale(s) were you using when testing your sort code? I suspect\n> the answers might depend on locale quite a bit...\n\nI did a little more measurement today. It's still only anecdotal\nevidence -- I wasn't terribly rigorous -- but here are my results.\n\nMy data file consisted of ~660,000 lines and a total size of ~200MB.\nEach line had part descriptions in German and some uninteresting\nfields. I stripped out the uninteresting fields and read the file\ncalling strxfrm() for each line. I recorded the total input\nbytes and the total bytes returned by strxfrm().\n\nHP-UX 11.00 de_DE.roman8 locale:\ninput bytes: 179647811\nresult bytes: 1447833496 (increase factor 8.05)\n\nSolaris 2.6 de_CH locale:\ninput bytes: 179647811 \nresult bytes: 1085875122 (increase factor 6.04)\n\nI didn't time the test program on Solaris, but on HP-UX this program\ntook longer to run than a simplistic qsort() using strcoll() does, and\nmy comparison sort program has to write the data out as well, which\nthe strxfrm() calling program didn't do.\n\nRegards,\n\nGiles\n", "msg_date": "Thu, 22 Jun 2000 18:47:43 +1000", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: An idea on faster CHAR field indexing " }, { "msg_contents": "\n[ I hope this is useful for the archives, even though it's pretty much\n been covered already. 
--giles ]\n\n> With Unicode you have to have a collation order that cuts across\n> what use to be separate character sets in separate code pages.\n\n From the glossary at http://www.unicode.org/glossary/index.html:\n\nCollation\n The process of ordering units of textual information. Collation is\n usually specific to a particular language. Also known as\n alphabetizing or alphabetic sorting. Unicode Technical Report #10,\n \"Unicode Collation Algorithm,\" defines a complete, unambiguous,\n specified ordering for all characters in the Unicode Standard.\n\nThe \"Unicode Technical Report #10\" is accessible at:\n\nhttp://www.unicode.org/unicode/reports/tr10/\n\nThis technical report provides a way to specify \"the tailoring for any\nparticular country\" which together with the standard ordering will\nallow different Unicode implementations to sort any particular input\nidentically.\n\nSummary is that Unicode still has locales and we still have to know\nnot only the character set/code page but also the locale (\"country\nspecific tailoring\") an index was created with to use it.\n\nRegards,\n\nGiles\n\n", "msg_date": "Thu, 22 Jun 2000 21:41:40 +1000", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: An idea on faster CHAR field indexing " }, { "msg_contents": "Are we interested in adding Try/Catch exception code to PostgreSQL. \nThis looks interesting:\n\n\thttp://www.cs.berkeley.edu/~amc/cexcept/\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 26 Jun 2000 13:02:01 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "C exception code" }, { "msg_contents": "Oh, well. At least I asked. :-)\n\n\n> \n> > Are we interested in adding Try/Catch exception code to PostgreSQL. \n> > This looks interesting:\n> > \n> > \thttp://www.cs.berkeley.edu/~amc/cexcept/\n> \n> IMHO using the C pre-processor to make C look like some other language:\n> \n> - makes the code harder to read as readers have to learn the dialect\n> first\n> \n> - makes the code harder to debug, since debugging tools don't know the\n> dialect but only the C it is translated into\n> \n> This exception implementation has the obvious(?) problem of using\n> setjump()/longjmp() where sigsetjmp()/siglongjmp() would probably be\n> necessary for postgresql.\n> \n> There are places too where this implementation would just plain not\n> work and so couldn't be used: setjmp(), longjmp(), sigsetjump(), and\n> siglongjmp() are not async safe signal functions and so can't be\n> called in signal handlers, for a start.\n> \n> Regards,\n> \n> Giles\n> \n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 26 Jun 2000 17:34:48 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: C exception code" }, { "msg_contents": "\n> Are we interested in adding Try/Catch exception code to PostgreSQL. 
\n> This looks interesting:\n> \n> \thttp://www.cs.berkeley.edu/~amc/cexcept/\n\nIMHO using the C pre-processor to make C look like some other language:\n\n- makes the code harder to read as readers have to learn the dialect\n first\n\n- makes the code harder to debug, since debugging tools don't know the\n dialect but only the C it is translated into\n\nThis exception implementation has the obvious(?) problem of using\nsetjump()/longjmp() where sigsetjmp()/siglongjmp() would probably be\nnecessary for postgresql.\n\nThere are places too where this implementation would just plain not\nwork and so couldn't be used: setjmp(), longjmp(), sigsetjump(), and\nsiglongjmp() are not async safe signal functions and so can't be\ncalled in signal handlers, for a start.\n\nRegards,\n\nGiles\n\n\n\n", "msg_date": "Tue, 27 Jun 2000 07:36:53 +1000", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: C exception code " }, { "msg_contents": "I just spent a rather frustrating hour trying to debug a backend startup\nfailure --- and getting nowhere because I couldn't catch the failure in\na debugger, or even step to where I thought it might be. I've seen this\nsort of difficulty before, and always had to resort to expedients like\nputting in printf's. But tonight I finally realized what the problem is.\n\nThe early stages of startup are run under signal mask BlockSig, which we\ninitialize to include *EVERY SIGNAL* (except SIGUSR1 for some reason).\nIn particular SIGTRAP is blocked, which prevents debugger breakpoints\nfrom working. Even sillier, normally-fatal signals like SIGSEGV are\nblocked. I now know by observation that HPUX, at least, takes this\nliterally: for example, if you've blocked SEGV you don't hear about bus\nerrors, you just keep going. Possibly rather slowly, if every attempted\ninstruction execution causes the hardware to fault to the kernel, but\nby golly the system will keep trying to run your code.\n\nNeedless to say I find this braindead in the extreme. Will anyone\nobject if I change the signal masks so that we never ever block\nSIGABRT, SIGILL, SIGSEGV, SIGBUS, SIGTRAP, SIGCONT, SIGSYS? Any\nother candidates? Are there any systems that do not define all\nof these signal names?\n\nBTW, once I turned this silliness off, I was able to home in on\nmy bug within minutes...\n\n\t\t\tregards, tom lane\n\nPS: The postmaster spends most of its time running under BlockSig too.\nGood thing we haven't had many postmaster bugs lately.\n", "msg_date": "Mon, 26 Jun 2000 20:44:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Is *that* why debugging backend startup is so hard!?" }, { "msg_contents": "> Needless to say I find this braindead in the extreme. Will anyone\n> object if I change the signal masks so that we never ever block\n> SIGABRT, SIGILL, SIGSEGV, SIGBUS, SIGTRAP, SIGCONT, SIGSYS? Any\n> other candidates? Are there any systems that do not define all\n> of these signal names?\n> \n> BTW, once I turned this silliness off, I was able to home in on\n> my bug within minutes...\n\nGo ahead. Current setup sound very broken. Why do they even bother\ndoing all this. Seems we should identify the signals we want to block,\nand just block those.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 26 Jun 2000 23:47:33 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is *that* why debugging backend startup is so hard!?" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Are we interested in adding Try/Catch exception code to PostgreSQL.\n> This looks interesting:\n> \n> http://www.cs.berkeley.edu/~amc/cexcept/\n\nHow tricky is the error handling in Postgres?\n\nAs an aside, I have just started working on a Java project, nearly done\nfor a company where they have not used the Java exception model. I.e.\nthere are error codes, setErrorCode, and ifError everywhere. A bigger\nmess you will not see. So I'm partial to a decent exception model, and\nmight even use the above in a project of my own.\n", "msg_date": "Tue, 27 Jun 2000 16:45:35 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: C exception code" }, { "msg_contents": "\n> Needless to say I find this braindead in the extreme.\n\nWow, definitely braindead. Trapping some of them on systems that can\nprogrammatically generate a stack backtrace might be useful -- it\nwould help reporting what happened.\n\nBlocking them and continuing seems about the most dangerous thing that\ncould be done; if we've just got SIGSEGV or similar the code is\nconfused isn't to be trusted to safely modify data!\n\n> Will anyone object if I change the signal masks so that we never\n> ever block SIGABRT, SIGILL, SIGSEGV, SIGBUS, SIGTRAP, SIGCONT,\n> SIGSYS? Any other candidates? Are there any systems that do not\n> define all of these signal names?\n\nI'd expect these everywhere; certainly they're all defined in the\n\"Single Unix Specification, version 2\". Some of them don't exist in\nANSI C, if that matters.\n\nUsually it's easy enough to wrap code that cares in\n\n#ifdef SIGABRT\n...\n#endif\n\nso when/if a platform shows up that lacks one or more it's easy to\nfix.\n\nPotential additions to your list:\n\nSIGFPE\nSIGSTOP (can't be blocked)\n\nRegards,\n\nGiles\n\n", "msg_date": "Tue, 27 Jun 2000 18:27:26 +1000", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is *that* why debugging backend startup is so hard!? " }, { "msg_contents": "Tom Lane writes:\n\n> I just spent a rather frustrating hour trying to debug a backend startup\n> failure --- and getting nowhere because I couldn't catch the failure in\n> a debugger, or even step to where I thought it might be. I've seen this\n> sort of difficulty before, and always had to resort to expedients like\n> putting in printf's. But tonight I finally realized what the problem is.\n\nCould that be contributing to the Heisenbug I decribed on Sunday in \"Pid\nfile magically disappears\"?\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 27 Jun 2000 20:11:59 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is *that* why debugging backend startup is so hard!?" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Tom Lane writes:\n>> I just spent a rather frustrating hour trying to debug a backend startup\n>> failure --- and getting nowhere because I couldn't catch the failure in\n>> a debugger, or even step to where I thought it might be. I've seen this\n>> sort of difficulty before, and always had to resort to expedients like\n>> putting in printf's. 
But tonight I finally realized what the problem is.\n\n> Could that be contributing to the Heisenbug I decribed on Sunday in \"Pid\n> file magically disappears\"?\n\nHm. Maybe. I haven't tried to reproduce the pid-file issue here\n(I'm up to my eyebrows in memmgr at the moment). But the blocking\nof SEGV and friends could certainly lead to some odd behavior, due\nto code plowing on after getting an error that should have crashed it.\n\nDepending on how robust your local implementation of abort(3) is,\nit's even possible that the code would fall through a failed\nAssert() test...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Jun 2000 17:05:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is *that* why debugging backend startup is so hard!? " }, { "msg_contents": "Locale does not work on BSD/OS because it does not have LC_MESSAGES. I\njust got this back from BSDI technical support. Comments?\n\n---------------------------------------------------------------------------\n\nIn message <[email protected]>, Bruce Momjian writes:\n>I assume the symbol is supposed to be defined in locale.h. HP-UX and\n>other Unix's have this symbol, but BSDI doesn't. It is only used when\n>PostgreSQL is compile with locale enabled.\n\n>I see no mention of LC_MESSAGES in the include file or manual page.\n\nLC_MESSAGES is apparently an extension; it's not in the POSIX spec, and it's\nnot in the C spec I have handy. (Admittedly, that's C99, but I don't think\nanything like this was removed since C89.) So, recommend you #ifdef that;\n\n\t#ifdef LC_MESSAGES\n\t...[interact with LC_MESSAGES]\n\t#endif\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 28 Jun 2000 10:58:22 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "LC_MESSAGES and BSD/OS" }, { "msg_contents": "\n> Locale does not work on BSD/OS because it does not have LC_MESSAGES. I\n> just got this back from BSDI technical support. Comments?\n\n...\n\n| LC_MESSAGES is apparently an extension; it's not in the POSIX spec, and it's\n| not in the C spec I have handy. (Admittedly, that's C99, but I don't think\n| anything like this was removed since C89.) So, recommend you #ifdef that;\n| \n| \t#ifdef LC_MESSAGES\n| \t...[interact with LC_MESSAGES]\n| \t#endif\n\nPOSIX and ANSI C have the following categories:\n\nLC_ALL\nLC_COLLATE\nLC_CTYPE\nLC_MONETARY\nLC_NUMERIC\nLC_TIME\n\n\"Implementation defined additional categories\" are allowed, and it\nlooks like many (most?) implementations use LC_MESSAGES, but they\ndon't have to, I guess.\n\nRegards,\n\nGiles\n", "msg_date": "Thu, 29 Jun 2000 04:43:26 +1000", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LC_MESSAGES and BSD/OS " }, { "msg_contents": "OK, I am willing to make them #ifdef'ed, if someone would explain what\nLC_MESSAGES is used for.\n\n\n> \n> > Locale does not work on BSD/OS because it does not have LC_MESSAGES. I\n> > just got this back from BSDI technical support. Comments?\n> \n> ...\n> \n> | LC_MESSAGES is apparently an extension; it's not in the POSIX spec, and it's\n> | not in the C spec I have handy. (Admittedly, that's C99, but I don't think\n> | anything like this was removed since C89.) 
So, recommend you #ifdef that;\n> | \n> | \t#ifdef LC_MESSAGES\n> | \t...[interact with LC_MESSAGES]\n> | \t#endif\n> \n> POSIX and ANSI C have the following categories:\n> \n> LC_ALL\n> LC_COLLATE\n> LC_CTYPE\n> LC_MONETARY\n> LC_NUMERIC\n> LC_TIME\n> \n> \"Implementation defined additional categories\" are allowed, and it\n> looks like many (most?) implementations use LC_MESSAGES, but they\n> don't have to, I guess.\n> \n> Regards,\n> \n> Giles\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 28 Jun 2000 14:46:13 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LC_MESSAGES and BSD/OS" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> OK, I am willing to make them #ifdef'ed, if someone would explain what\n> LC_MESSAGES is used for.\n\nThe HPUX locale man pages say:\n\n LC_CTYPE determines the interpretation of text as single and/or\n multi-byte characters, the classification of characters as printable,\n and the characters matched by character class expressions in regular\n expressions.\n\n LC_COLLATE provides collation sequence definition\n for relative ordering between collating elements (single- and\n multi-character collating elements) in the locale.\n\n LC_MESSAGES determines the locale that should be used to affect the\n format and content of diagnostic messages written to standard error,\n and informative messages written to standard output.\n\n LC_MONETARY defines the rules and symbols used to\n format monetary numeric information.\n\n LC_NUMERIC defines rules and symbols used to format\n non-monetary numeric information.\n\n LC_TIME defines the rules for generating locale-specific formatted\n date strings.\n\n LC_ALL, when set to a non-empty string value, overrides the values of\n all other internationalization variables.\n\nDunno which LC_foo variable corresponds to LC_MESSAGES on BSDI.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Jun 2000 21:13:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LC_MESSAGES and BSD/OS " }, { "msg_contents": "OK, I have put #ifdef around the LC_MESSAGES to BSDI will work with\nlocale. 
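The guarded calls end up looking roughly like this (a sketch of the shape of it, with an invented function name, rather than the exact code as committed):

	#include <locale.h>

	/*
	 * Set every category from the environment.  LC_MESSAGES is an
	 * extension that BSD/OS does not define, so it is referenced only
	 * when the preprocessor symbol exists; the other categories are in
	 * ANSI C / POSIX and need no guard.
	 */
	static void
	set_locale_from_environment(void)
	{
		setlocale(LC_COLLATE, "");
		setlocale(LC_CTYPE, "");
		setlocale(LC_MONETARY, "");
		setlocale(LC_NUMERIC, "");
		setlocale(LC_TIME, "");
	#ifdef LC_MESSAGES
		setlocale(LC_MESSAGES, "");
	#endif
	}
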
Thanks.\n\n\n> Bruce Momjian <[email protected]> writes:\n> > OK, I am willing to make them #ifdef'ed, if someone would explain what\n> > LC_MESSAGES is used for.\n> \n> The HPUX locale man pages say:\n> \n> LC_CTYPE determines the interpretation of text as single and/or\n> multi-byte characters, the classification of characters as printable,\n> and the characters matched by character class expressions in regular\n> expressions.\n> \n> LC_COLLATE provides collation sequence definition\n> for relative ordering between collating elements (single- and\n> multi-character collating elements) in the locale.\n> \n> LC_MESSAGES determines the locale that should be used to affect the\n> format and content of diagnostic messages written to standard error,\n> and informative messages written to standard output.\n> \n> LC_MONETARY defines the rules and symbols used to\n> format monetary numeric information.\n> \n> LC_NUMERIC defines rules and symbols used to format\n> non-monetary numeric information.\n> \n> LC_TIME defines the rules for generating locale-specific formatted\n> date strings.\n> \n> LC_ALL, when set to a non-empty string value, overrides the values of\n> all other internationalization variables.\n> \n> Dunno which LC_foo variable corresponds to LC_MESSAGES on BSDI.\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 28 Jun 2000 21:19:04 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LC_MESSAGES and BSD/OS" }, { "msg_contents": "Tim Perdue wrote:\n\n> On wednesday or thursday, I'm going to be publishing my article on MySQL\n> vs. Postgres on PHPBuilder.com.\n>\n> Before I do that I want to confirm the major problem I had w/postgres:\n> the 8K tuple limit. When trying to import some tables from MySQL,\n> postgres kept choking because MySQL has no such limit on the size of a\n> row in the database (text fields on MySQL can be multi-megabyte).\n\nThis is beeing fixed: http://www.postgresql.org/projects/devel-toast.html\n\n>\n>\n> Is it even possible to import large text fields into postgres? If not,\n> how in the world can anyone use this to store message board posts,\n> resumes, etc? Do you have to use pgsql-specific large-object\n> import/export commands?\n\nI'm currently building a newspaper system and I just split the articles into\n8K sections. This is just a workaround until the TOAST project is finished.\n\n>\n>\n> I actually intended the article to be a win for Postgres, as I've used\n> it and had good luck with it for such a long time, but if you look at\n> the results below, it seems very positive for MySQL.\n>\n> Performace/Scalability:\n>\n> MySQL was About 50-60% faster in real-world web serving, but it crumbles\n> under a real load. Postgres on the other hand scaled 3x higher than\n> MySQL before it started to crumble on the same machine. Unfortunately,\n> Postgres would probably still lose on a high-traffic website because\n> MySQL can crank out the pages so much faster, number of concurrent\n> connections is hard to compare. MySQL also seems to make better use of\n> multiple-processor machines like the quad-xeon I tested on. 
Postgres\n> never saturated all 4 processors as MySQL did.\n>\n> Tools:\n> MySQL has some nice admin tools that allow you to watch individual\n> connections and queries as they progress and tools to recover from\n> corruption. I haven't seem any similar tools for postgres.\n\nHave you looked at pgAdmin? http://www.pgadmin.freeserve.co.uk/\nThere is also a tool called pgAccess.\n\n>\n> Long-term stability:\n> Postgres is undoubtably the long-run winner in stability, whereas MySQL\n> will freak out or die when left running for more than a month at a time.\n> But if you ever do have a problem with postgres, you generally have to\n> nuke the database and recover from a backup, as there are no known tools\n> to fix index and database corruption. For a long-running postgres\n> database, you will occasionally have to drop indexes and re-create them,\n> causing downtime.\n>\n> Usability:\n> Both databases use a similar command-line interface. Postgres uses\n> \"slash commands\" to help you view database structures. MySQL uses a more\n> memorable, uniform syntax like \"Show Tables; Show Databases; Describe\n> table_x;\" and has better support for altering/changing tables, columns,\n> and even databases.\n>\n> Features:\n> Postgres is undoubtedly far, far more advanced than MySQL is. Postgres\n> now supports foreign keys, which can help with referential integrity.\n> Postgres supports subselects and better support for creating tables as\n> the result of queries. The \"transaction\" support that MySQL lacks is\n> included in Postgres, although you'll never miss it on a website, unless\n> you're building something for a bank, and if you're doing that, you'll\n> use oracle.\n\nNot true. Transactions are used to make atomic database operations. We use\ntransactions more than 60 times in our application (we use Cold Fusion).\n\n>\n>\n> Tim\n>\n> --\n> Founder - PHPBuilder.com / Geocrawler.com\n> Lead Developer - SourceForge\n> VA Linux Systems\n> 408-542-5723\n\nPoul L. Christiansen\nDynamic Paper\n\n", "msg_date": "Tue, 04 Jul 2000 21:15:22 +0200", "msg_from": "\"Poul L. Christiansen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. Postgres" }, { "msg_contents": "On wednesday or thursday, I'm going to be publishing my article on MySQL\nvs. Postgres on PHPBuilder.com.\n\nBefore I do that I want to confirm the major problem I had w/postgres:\nthe 8K tuple limit. When trying to import some tables from MySQL,\npostgres kept choking because MySQL has no such limit on the size of a\nrow in the database (text fields on MySQL can be multi-megabyte).\n\nIs it even possible to import large text fields into postgres? If not,\nhow in the world can anyone use this to store message board posts,\nresumes, etc? Do you have to use pgsql-specific large-object\nimport/export commands?\n\nI actually intended the article to be a win for Postgres, as I've used\nit and had good luck with it for such a long time, but if you look at\nthe results below, it seems very positive for MySQL.\n\nPerformace/Scalability:\n\nMySQL was About 50-60% faster in real-world web serving, but it crumbles\nunder a real load. Postgres on the other hand scaled 3x higher than\nMySQL before it started to crumble on the same machine. Unfortunately,\nPostgres would probably still lose on a high-traffic website because\nMySQL can crank out the pages so much faster, number of concurrent\nconnections is hard to compare. MySQL also seems to make better use of\nmultiple-processor machines like the quad-xeon I tested on. 
Postgres\nnever saturated all 4 processors as MySQL did.\n\nTools:\nMySQL has some nice admin tools that allow you to watch individual\nconnections and queries as they progress and tools to recover from\ncorruption. I haven't seen any similar tools for postgres.\n\nLong-term stability:\nPostgres is undoubtedly the long-run winner in stability, whereas MySQL\nwill freak out or die when left running for more than a month at a time.\nBut if you ever do have a problem with postgres, you generally have to\nnuke the database and recover from a backup, as there are no known tools\nto fix index and database corruption. For a long-running postgres\ndatabase, you will occasionally have to drop indexes and re-create them,\ncausing downtime.\n\nUsability:\nBoth databases use a similar command-line interface. Postgres uses\n\"slash commands\" to help you view database structures. MySQL uses a more\nmemorable, uniform syntax like \"Show Tables; Show Databases; Describe\ntable_x;\" and has better support for altering/changing tables, columns,\nand even databases.\n\nFeatures:\nPostgres is undoubtedly far, far more advanced than MySQL is. Postgres\nnow supports foreign keys, which can help with referential integrity.\nPostgres supports subselects and better support for creating tables as\nthe result of queries. The \"transaction\" support that MySQL lacks is\nincluded in Postgres, although you'll never miss it on a website, unless\nyou're building something for a bank, and if you're doing that, you'll\nuse oracle.\n\nTim\n\n-- \nFounder - PHPBuilder.com / Geocrawler.com\nLead Developer - SourceForge\nVA Linux Systems\n408-542-5723\n", "msg_date": "Tue, 04 Jul 2000 12:42:31 -0700", "msg_from": "Tim Perdue <[email protected]>", "msg_from_op": false, "msg_subject": "Article on MySQL vs. Postgres" }, { "msg_contents": "Tim Perdue wrote:\n> \n> On wednesday or thursday, I'm going to be publishing my article on MySQL\n> vs. Postgres on PHPBuilder.com.\n\nCool!\n\n> Features:\n> Postgres is undoubtedly far, far more advanced than MySQL is. Postgres\n> now supports foreign keys, which can help with referential integrity.\n> Postgres supports subselects and better support for creating tables as\n> the result of queries. The \"transaction\" support that MySQL lacks is\n> included in Postgres, although you'll never miss it on a website, unless\n> you're building something for a bank, and if you're doing that, you'll\n> use oracle.\n\nSince MySQL version 3.23.16 it supports transactions with Sleepycat's DB3\nand since version 3.23.19 it is under the GPL.\n\n-Egon\n\n-- \nSIX Offene Systeme GmbH · Stuttgart - Berlin - New York\nSielminger Straße 63 · D-70771 Leinfelden-Echterdingen\nFon +49 711 9909164 · Fax +49 711 9909199 http://www.six.de\nPHP-Stand auf Europas größter Linux-Messe: 'LinuxTag 2001'\nweitere Infos @ http://www.dynamic-webpages.de/\n", "msg_date": "Tue, 04 Jul 2000 21:51:11 +0200", "msg_from": "Egon Schmid <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. Postgres" }, { "msg_contents": "Tim Perdue wrote:\n> On wednesday or thursday, I'm going to be publishing my article on MySQL\n> vs. Postgres on PHPBuilder.com.\n>\n> Before I do that I want to confirm the major problem I had w/postgres:\n> the 8K tuple limit. When trying to import some tables from MySQL,\n> postgres kept choking because MySQL has no such limit on the size of a\n> row in the database (text fields on MySQL can be multi-megabyte).\n\n    I just committed the first portion of TOAST. 
Enabling lztext\n fields to hold multi-megabytes too. But it's not the answer\n to such big objects. I have plans to add an Oracle like\n large object handling in a future version.\n\n> I actually intended the article to be a win for Postgres, as I've used\n> it and had good luck with it for such a long time, but if you look at\n> the results below, it seems very positive for MySQL.\n\n It's never a good plan to have an initial intention which of\n the competitors should finally look good. It's visible\n between the lines.\n\n> Performace/Scalability:\n>\n> MySQL was About 50-60% faster in real-world web serving, but it crumbles\n> under a real load. Postgres on the other hand scaled 3x higher than\n> MySQL before it started to crumble on the same machine. Unfortunately,\n> Postgres would probably still lose on a high-traffic website because\n> MySQL can crank out the pages so much faster, number of concurrent\n> connections is hard to compare. MySQL also seems to make better use of\n> multiple-processor machines like the quad-xeon I tested on. Postgres\n> never saturated all 4 processors as MySQL did.\n\n The question in this case is \"what is real-world web\n serving\"? To spit out static HTML pages loaded into a\n database? To handle discussion forums like OpenACS with high\n concurrency and the need for transactions?\n\n Web applications differ in database usage as much as any\n other type of application. From huge amounts of static, never\n changing data to complex data structures with many\n dependencies constantly in motion. There is no such one\n \"real world web scenario\".\n\n> Tools:\n> MySQL has some nice admin tools that allow you to watch individual\n> connections and queries as they progress and tools to recover from\n> corruption. I haven't seem any similar tools for postgres.\n\n Yepp, we need alot more nice tools.\n\n> Long-term stability:\n> Postgres is undoubtably the long-run winner in stability, whereas MySQL\n> will freak out or die when left running for more than a month at a time.\n> But if you ever do have a problem with postgres, you generally have to\n> nuke the database and recover from a backup, as there are no known tools\n> to fix index and database corruption. For a long-running postgres\n> database, you will occasionally have to drop indexes and re-create them,\n> causing downtime.\n\n Not true IMHO. We had some problems with indices in the past.\n But you can drop/recreate them online and someone running a\n query concurrently might just use a sequential scan during\n that time. All other corruptions need backup and recovery.\n WAL is on it's way.\n\n> Usability:\n> Both databases use a similar command-line interface. Postgres uses\n> \"slash commands\" to help you view database structures. MySQL uses a more\n> memorable, uniform syntax like \"Show Tables; Show Databases; Describe\n> table_x;\" and has better support for altering/changing tables, columns,\n> and even databases.\n\n Since professional application development starts with a data\n design, such \"describe\" commands and \"alter\" features are\n unimportant. The more someone needs them, the more I know\n that he isn't well educated.\n\n Productional installations don't need any \"alter\" command at\n all. 
New features are developed in the development area,\n tested with real life data in the test environment and moved\n to the production server including a maybe required data\n conversion step during a downtime.\n\n 24/7 scenarios require hot standby, online synchronized\n databases with hardware takeover. All that is far away from\n our scope by now.\n\n> Features:\n> Postgres is undoubtedly far, far more advanced than MySQL is. Postgres\n> now supports foreign keys, which can help with referential integrity.\n> Postgres supports subselects and better support for creating tables as\n> the result of queries. The \"transaction\" support that MySQL lacks is\n> included in Postgres, although you'll never miss it on a website, unless\n> you're building something for a bank, and if you're doing that, you'll\n> use oracle.\n\n FOREIGN KEY doesn't help with referential integrity, it\n guarantees it. No application must ever worry if it will\n find the customer when it has a problem report. It does a\n SELECT and has it or it would've never found the problem\n report first - period.\n\n And for big, functional expanding web sites, it does so even\n if one of a dozen programmers forgot it once. If the\n constraint says you cannot delete a customer who payed until\n end of the year, the database won't let you, even if one of\n the 7 CGI programs that can delete customers doesn't check.\n\n Transactions are the base for any data integrity. Especially\n in the web environment. Almost every web server I've seen has\n some timeout for CGI, ADP, ASP or whatever they call it. As\n soon as your page needs to update more than one table, you\n run the risk of getting aborted just between, leaving the\n current activity half done. No matter if a database supports\n FOREIGN KEY. I could live without it, but transactions are\n essential.\n\n Fortunately the MySQL team has changed it's point of view on\n that detail and made some noticeable advantage into that area\n by integrating BDB. The lates BETA does support transactions\n including rollback as they announced. As far as I see it, the\n integration of BDB only buys them transactions, on the cost\n of performance and maintainence efford. So the need for it\n cannot be that small as you think.\n\n Final notes:\n\n I hate these \"MySQL\" vs. \"PostgreSQL\" articles that want to\n say \"this one is the better\". Each one has it's advantages\n and disadvantages. Both have a long TODO.\n\n Your article might better analyze a couple of different\n \"real-world web services\", telling what DB usage profile they\n have and then suggesting which of the two databases is the\n better choice in each case.\n\n MySQL is a tool and PostgreSQL is a tool. But as with other\n tools, a hammer doesn't help if you need a screw driver.\n\n Please don't intend to tell anyone either of these databases\n is \"the best\". You'd do both communities a bad job. Help\n people to choose the right database for their current needs\n and tell them to reevaluate their choice for the next project\n instead of blindly staying with the same database. We'll end\n up with alot of customers using both databases parallel for\n different needs.\n\n At the bottom line both teams share the same idea, open\n source. Anyone who pays a license fee is a loss (looser?) for\n all of us.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== [email protected] #\n\n\n", "msg_date": "Wed, 5 Jul 2000 00:39:04 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. Postgres" }, { "msg_contents": "on 7/4/00 3:42 PM, Tim Perdue at [email protected] wrote:\n\n> Before I do that I want to confirm the major problem I had w/postgres:\n> the 8K tuple limit. When trying to import some tables from MySQL,\n> postgres kept choking because MySQL has no such limit on the size of a\n> row in the database (text fields on MySQL can be multi-megabyte).\n\nIt's possible in the current version to up your tuple limit to 16K before\ncompilation, and you can use lztext, the compressed text type, which should\ngive you up to 32K of storage. Netscape's textarea limit is 32K, so that's a\ngood basis for doing a number of web-based things. Anything that is\nmulti-megabyte is really not something I'd want to store in an RDBMS.\n\n> I actually intended the article to be a win for Postgres, as I've used\n> it and had good luck with it for such a long time, but if you look at\n> the results below, it seems very positive for MySQL.\n\nJan said that each tool has its value, and that's true. I recommend you\ndefine your evaluation context before you write this. Is this for running a\nserious mission-critical web site? Is it for logging web site hits with\ntolerance for data loss and a need for doing simple reporting?\n\n> Performace/Scalability:\n> \n> MySQL was About 50-60% faster in real-world web serving, but it crumbles\n> under a real load. Postgres on the other hand scaled 3x higher than\n> MySQL before it started to crumble on the same machine. Unfortunately,\n> Postgres would probably still lose on a high-traffic website because\n> MySQL can crank out the pages so much faster, number of concurrent\n> connections is hard to compare. MySQL also seems to make better use of\n> multiple-processor machines like the quad-xeon I tested on. Postgres\n> never saturated all 4 processors as MySQL did.\n\nWhat kind of queries did you perform? Did you use connection pooling (a lot\nof PHP apps don't, from what I've seen)? How does the performance get\naffected when a query in Postgres with subselects has to be split into 4\ndifferent queries in MySQL? Postgres is process-based, each connection\nresulting in one process. If you use connection pooling with at least as\nmany connections as you have processors, you should see it scale quite well.\nIn fact, for serious load-testing, you should have 10-15 pooled connections.\n\nI *strongly* question your intuition on Postgres running web sites. MySQL's\nwrite performance is very poor, which forces excessive caching (see sites\nlike Slashdot) to prevent updates from blocking entire web site serving.\nYes, the BDB addition might be useful. Let's see some performance tests\nusing BDB tables.\n\n> Postgres is undoubtably the long-run winner in stability, whereas MySQL\n> will freak out or die when left running for more than a month at a time.\n> But if you ever do have a problem with postgres, you generally have to\n> nuke the database and recover from a backup, as there are no known tools\n> to fix index and database corruption. For a long-running postgres\n> database, you will occasionally have to drop indexes and re-create them,\n> causing downtime.\n\nDropping indexes and recreating them does not cause downtime. I've run a\ncouple of postgres-backed web sites for months on end with no issues. 
I've\nsurvived a heavy slashdotting on my dual Pentium II-400, with Postgres\nWRITES and READS on every Slashdot-referred hit, resulting in perfectly\nrespectable serving times (less than 3-4 seconds to serve > 20K of data on\neach hit). No caching optimization of any kind on the app layer. And I'd\nforgotten to vacuum my database for a few days.\n\n> Features:\n> Postgres is undoubtedly far, far more advanced than MySQL is. Postgres\n> now supports foreign keys, which can help with referential integrity.\n> Postgres supports subselects and better support for creating tables as\n> the result of queries. The \"transaction\" support that MySQL lacks is\n> included in Postgres, although you'll never miss it on a website, unless\n> you're building something for a bank, and if you're doing that, you'll\n> use oracle.\n\nI'm just shocked at this. Where did this \"transactions aren't necessary\"\nschool of thinking originate? I've been developing database-backed web sites\nfor 5 years now, and I can't conceive of building a serious web site without\ntransactions. How do you guarantee that a record and its children records\nare all stored together successfully? Do you run on a magic power grid that\nnever fails? Do you never have code-related error conditions that require\nrolling back a series of database edits?\n\nOne quick point: while you may well be personally unbiased, VA Linux just\nendorsed and funded MySQL. SourceForge uses MySQL. How do you expect to\nconvince readers that you're being objective in this comparison?\n\n-Ben\n\n", "msg_date": "Tue, 04 Jul 2000 19:37:25 -0400", "msg_from": "Benjamin Adida <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. Postgres" }, { "msg_contents": "Tim Perdue writes:\n\n> the 8K tuple limit.\n\nBLCKSZ in src/include/config.h -- But it's being worked on these very\ndays.\n\n> Postgres never saturated all 4 processors as MySQL did.\n\nBlame that on your operating system?\n\n> MySQL has some nice admin tools that allow you to watch individual\n> connections and queries as they progress\n\nps\ntail -f <serverlog>\n\n> and tools to recover from corruption. I haven't seem any similar tools\n> for postgres.\n\nI always like this one -- \"tools to recover from corruption\". If your\ndatabase is truly corrupted then there's nothing you can do about it, you\nneed a backup. If your database engine just creates garbage once in a\nwhile then the solution is to fix the database engine, not to provide\nexternal tools to clean up after it.\n\n> as there are no known tools to fix index\n\nREINDEX\n\n> Both databases use a similar command-line interface. Postgres uses\n> \"slash commands\" to help you view database structures. MySQL uses a more\n> memorable, uniform syntax like \"Show Tables; Show Databases; Describe\n> table_x;\"\n\nYeah, but once you have memorized ours then it will be shorter to type. :)\nAnd you get tab completion. And what's so non-uniform about ours?\n\n> The \"transaction\" support that MySQL lacks is included in Postgres,\n> although you'll never miss it on a website,\n\nThink again. Transactions and multi-version concurrency control are\nessential for any multi-user web site that expects any writes at all. I'll\nreiterate the old Bugzilla bug: User A issues a search that \"takes\nforever\". User B wants to update some information in the database, waits\nfor user A. Now *every* user in the system, reading or writing, is blocked\nwaiting for A (and B).\n\nBut you don't even have to go that far. 
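Concretely, the minimum any write path wants is the all-or-nothing block below -- a libpq sketch; the two statements are parameters because any table or column names here would only be invented anyway:

    #include <stdio.h>
    #include <libpq-fe.h>

    /*
     * Run two data-modifying statements as a single unit: either both
     * take effect or neither does.  Without BEGIN/COMMIT there is no way
     * to guarantee the second statement lands whenever the first one does.
     */
    static int
    update_atomically(PGconn *conn, const char *stmt1, const char *stmt2)
    {
        const char *steps[] = { "BEGIN", stmt1, stmt2, "COMMIT" };
        int         i;

        for (i = 0; i < 4; i++)
        {
            PGresult *res = PQexec(conn, steps[i]);

            if (res == NULL || PQresultStatus(res) != PGRES_COMMAND_OK)
            {
                fprintf(stderr, "step %d failed: %s", i,
                        PQerrorMessage(conn));
                if (res != NULL)
                    PQclear(res);
                res = PQexec(conn, "ROLLBACK");   /* undo the partial work */
                if (res != NULL)
                    PQclear(res);
                return -1;
            }
            PQclear(res);
        }
        return 0;
    }
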
What if you just update two\nseparate tables at once?\n\nIf your web site is truly read only, yes, you don't need transactions. But\nthen you don't need a database either. If your web site does writes, you\nneed transactions, or you're really not trying hard enough.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Wed, 5 Jul 2000 02:12:41 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. Postgres" }, { "msg_contents": "Benjamin Adida wrote:\n> Jan said that each tool has its value, and that's true. I recommend you\n> define your evaluation context before you write this. Is this for running a\n> serious mission-critical web site? Is it for logging web site hits with\n> tolerance for data loss and a need for doing simple reporting?\n\nThis is for what most people do with PHP and databases - run\nsemi-critical medium-traffic sites. Anyone running a mission-critical\nsite would have to look elsewhere for true robustness. I would not at\nthis time recommend any serious, life-threatening app run On either\ndatabase.\n\n> > Performace/Scalability:\n> >\n> > MySQL was About 50-60% faster in real-world web serving, but it crumbles\n> > under a real load. Postgres on the other hand scaled 3x higher than\n> > MySQL before it started to crumble on the same machine. Unfortunately,\n> > Postgres would probably still lose on a high-traffic website because\n> > MySQL can crank out the pages so much faster, number of concurrent\n> > connections is hard to compare. MySQL also seems to make better use of\n> > multiple-processor machines like the quad-xeon I tested on. Postgres\n> > never saturated all 4 processors as MySQL did.\n> \n> What kind of queries did you perform? \n\nI took a real-world page from our site\n<http://sourceforge.net/forum/forum.php?forum_id=1> and made it portable\nto both databases. Of course, I could not import the \"body\" of the\nmessage into postgres because of the 8k limitation, so the body had to\nbe dropped from both databases.\n\nThe \"nested\" view of this page requires joins against three tables and\nsome recursion to show submessages.\n\nThe test was conducted with \"ab\" (apache benchmark software) using\nvarying numbers of concurrent connections and 1000 total page views.\n\nThe \"10% inserts\" test is most realistic, as about 10% of all page views\nin a discussion forum involve posting to the database. I used a\nrandom-number generator in the PHP script to insert a row into the table\n10% of the time. 
If you look at the results, you'll see that MySQL was\nactually harmed somewhat more by the writes than postgres was.\n\nHere are the actual results I saw on my quad-xeon machine:\n\npostgres:\n\nconcurrency w/pconnects:\n10 cli - 10.27 pg/sec 333.69 kb/s\n20 cli - 10.24 pg/sec 332.86 kb/s\n30 cli - 10.25 pg/sec 333.01 kb/s\n40 cli - 10.0 pg/sec 324.78 kb/s\n50 cli - 10.0 pg/sec 324.84 kb/s\n75 cli - 9.58 pg/sec 311.43 kb/s\n90 cli - 9.48 pg/sec 307.95 kb/s\n100 cli - 9.23 pg/sec 300.00 kb/s\n110 cli - 9.09 pg/sec 295.20 kb/s\n120 cli - 9.28 pg/sec 295.02 kb/s (2.2% failure)\n\nconcurrency w/10% inserts & pconnects:\n30 cli - 9.97 pg/sec 324.11 kb/s\n40 cli - 10.08 pg/sec 327.40 kb/s\n75 cli - 9.51 pg/sec 309.13 kb/s\n\nMySQL:\n\nConcurrency Tests w/pconnects:\n30 cli - 16.03 pg/sec 521.01 kb/s\n40 cli - 15.64 pg/sec 507.18 kb/s *failures\n50 cli - 15.43 pg/sec 497.88 kb/s *failures\n75 cli - 14.70 pg/sec 468.64 kb/s *failures\n90 - mysql dies\n110 - mysql dies\n120 - mysql dies\n\nConcurrency Tests w/o pconnects:\n10 cli - 16.55 pg/sec 537.63 kb/s\n20 cli - 15.99 pg/sec 519/51 kb/s\n30 cli - 15.55 pg/sec 505.19 kb/s\n40 cli - 15.46 pg/sec 490.01 kb/s 4.7% failure\n50 cli - 15.59 pg/sec 482.24 kb/s 8.2% failure\n75 cli - 17.65 pg/sec 452.08 kb/s 36.3% failure\n90 cli - mysql dies\n\nconcurrency w/10% inserts & pconnects:\n20 cli - 16.37 pg/sec 531.79 kb/s\n30 cli - 16.15 pg/sec 524.64 kb/s\n40 cli - 22.04 pg/sec 453.82 kb/sec 37.8% failure\n\n\n> Did you use connection pooling (a lot\n\nI used persistent connections, yes. Without them, Postgres' showing was\nfar poorer, with mysql showing about 2x the performance.\n\n\n\n> of PHP apps don't, from what I've seen)? How does the performance get\n> affected when a query in Postgres with subselects has to be split into 4\n> different queries in MySQL?\n\nI'd really love to see a case where a real-world page view requires 4x\nthe queries on MySQL. If you are doing subselects like that on a website\nin real-time you've got serious design problems and postgres would\nfold-up and quit under the load anyway.\n\n\n> Postgres is process-based, each connection\n> resulting in one process. If you use connection pooling with at least as\n> many connections as you have processors, you should see it scale quite well.\n> In fact, for serious load-testing, you should have 10-15 pooled connections.\n> \n> I *strongly* question your intuition on Postgres running web sites. MySQL's\n\nSpecifically, what is the problem with my \"intuition\"? All I did in the\nprior message was report my results and ask for feedback before I post\nit.\n\n\n> write performance is very poor, which forces excessive caching (see sites\n> like Slashdot) to prevent updates from blocking entire web site serving.\n> Yes, the BDB addition might be useful. Let's see some performance tests\n> using BDB tables.\n\nI wouldn't use BDB tables as MySQL 3.23.x isn't stable and I wouldn't\nuse it until it is.\n\n\n> > Postgres is undoubtably the long-run winner in stability, whereas MySQL\n> > will freak out or die when left running for more than a month at a time.\n> > But if you ever do have a problem with postgres, you generally have to\n> > nuke the database and recover from a backup, as there are no known tools\n> > to fix index and database corruption. For a long-running postgres\n> > database, you will occasionally have to drop indexes and re-create them,\n> > causing downtime.\n> \n> Dropping indexes and recreating them does not cause downtime. 
I've run a\n> couple of postgres-backed web sites for months on end with no issues. I've\n> survived a heavy slashdotting on my dual Pentium II-400, with Postgres\n> WRITES and READS on every Slashdot-referred hit, resulting in perfectly\n> respectable serving times (less than 3-4 seconds to serve > 20K of data on\n> each hit). No caching optimization of any kind on the app layer. And I'd\n> forgotten to vacuum my database for a few days.\n\nNot sure why you're arguing with this as this was a clear win for\npostgres.\n\n\n> Do you run on a magic power grid that\n> never fails?\n\nReality is that postgres is as likely - or more likely - to wind up with\ncorrupted data than MySQL. I'm talking physical corruption where I have\nto destroy the database and recover from a dump. Just a couple months\nago I sent a message about \"Eternal Vacuuming\", in which case I had to\ndestroy and recover a multi-gigabyte database.\n\nFurther, I have had situations where postgres actually had DUPLICATE ids\nin a primary key field, probably due to some abort or other nasty\nsituation in the middle of a commit. How did I recover from That? Well,\nI had to run a count(*) next to each ID and select out the rows where\nthere was more than one of each \"unique\" id, then reinsert those rows\nand drop and rebuild the indexes and reset the sequences.\n\nI've only been using MySQL for about a year (as compared to 2 years for\npostgres), but I have never seen either of those problems with MySQL.\n\n\n> Do you never have code-related error conditions that require\n> rolling back a series of database edits?\n\nPersonally, I check every query in my PHP code. On the rare occasion\nthat it fales, I show an error and get out. Even with postgres, I have\nalways checked success or failure of a query and shown an appropriate\nerror. Never in two years of programming PHP/postgres have I ever used\ncommit/rollback, and I have written some extremely complex web apps\n(sourceforge being a prime example). Geocrawler.com runs on postgres and\nagain, I NEVER saw any need for any kind of rollback at all.\n\nThe statelessness of the web pretty much obviates the needs for\nlocks/rollbacks as each process is extremely quick and runs from start\nto finish instantly. It's not like the old days where you pull data down\ninto a local application, work on it, then upload it again.\n\nOnly now, with some extremely complex stuff that we're doing on\nSourceForge would I like to see locks and rollbacks (hence my recent\ninterest in benchmarking and comparing the two). Your average web\nprogrammer will almost never run into that in the short term.\n\n \n> One quick point: while you may well be personally unbiased, VA Linux just\n> endorsed and funded MySQL. SourceForge uses MySQL. How do you expect to\n> convince readers that you're being objective in this comparison?\n\nYour own strong biases are shown in your message. I do this stuff\nbecause I'm curious and want to find out for myself. Most readers will\nfind it interesting as I did. Few will switch from MySQL to postgres or\nvice versa because of it.\n\nAnother clarification: PHPBuilder is owned by internet.com, a competitor\nof VA Linux/Andover.\n\nTim\n\n-- \nFounder - PHPBuilder.com / Geocrawler.com\nLead Developer - SourceForge\nVA Linux Systems\n408-542-5723\n", "msg_date": "Tue, 04 Jul 2000 17:30:51 -0700", "msg_from": "Tim Perdue <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. 
Postgres" }, { "msg_contents": "Tim Perdue wrote:\n> \n> This is for what most people do with PHP and databases - run\n> semi-critical medium-traffic sites. Anyone running a mission-critical\n> site would have to look elsewhere for true robustness. I would not at\n> this time recommend any serious, life-threatening app run On either\n> database.\n> \n\nI've seen problems with block read errors in large Oracle\ndatabases which fail their alleged CRC check -- intermittent core\ndumps which required a dump/restore of 25 years of insurance\nclaims data (40 gig - it was a lot at the time). After being down\nfor days and restoring on a new box, the same errors occured. \n\n> \n> I'd really love to see a case where a real-world page view requires 4x\n> the queries on MySQL. If you are doing subselects like that on a website\n> in real-time you've got serious design problems and postgres would\n> fold-up and quit under the load anyway.\n\nThis can be true for Internet sites, of course. But with\ncorporate Intranet sites that dish-out and process ERP data, the\nqueries can become quite complex while concurrency is limited to\n< 1000 users.\n\n> Further, I have had situations where postgres actually had DUPLICATE ids\n> in a primary key field, probably due to some abort or other nasty\n> situation in the middle of a commit. How did I recover from That? Well,\n> I had to run a count(*) next to each ID and select out the rows where\n> there was more than one of each \"unique\" id, then reinsert those rows\n> and drop and rebuild the indexes and reset the sequences.\n\nUmm...\n\nDELETE FROM foo WHERE EXISTS \n(SELECT f.key FROM foo f WHERE f.key = foo.key AND f.oid >\nfoo.oid);\n\nI believe there's even a purely SQL (non-oid) method of doing\nthis as well.\n\n> Personally, I check every query in my PHP code. On the rare occasion\n> that it fales, I show an error and get out. Even with postgres, I have\n> always checked success or failure of a query and shown an appropriate\n> error. Never in two years of programming PHP/postgres have I ever used\n> commit/rollback, and I have written some extremely complex web apps\n> (sourceforge being a prime example). Geocrawler.com runs on postgres and\n> again, I NEVER saw any need for any kind of rollback at all.\n\nThis is the nature of the application. In the same example above,\nhow can I \"charge\" a cost center for the purchase of products in\nan in-house distribution center and \"deduct\" the resulting\nquantity from the distribution center's on-hand inventory sanely\nwithout transactions?\n\nMike Mascari\n", "msg_date": "Tue, 04 Jul 2000 20:51:30 -0400", "msg_from": "Mike Mascari <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. Postgres" }, { "msg_contents": "Tim Perdue wrote:\n\n> I'd really love to see a case where a real-world page view requires 4x\n> the queries on MySQL. If you are doing subselects like that on a website\n> in real-time you've got serious design problems and postgres would\n> fold-up and quit under the load anyway.\n\nWhy? There are some subselect queries that have no problems running in\nreal-time. There are some non-subselect queries which one should never\nattempt in real time. There is nothing fundamentally wrong with using\nsubselects for page views if it works for you. 
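\n\nFor example, something along these lines (the table and column names are\nhypothetical) pulls back every forum together with the date of its latest\nmessage in one statement, where a database without subselects forces an\nextra query per forum:\n\n  SELECT f.forum_id, f.forum_name,\n         (SELECT max(post_date) FROM forum_message m\n           WHERE m.forum_id = f.forum_id) AS last_post\n    FROM forum f;\n\n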
Nor is there anything\nnecessarily wrong with a design that requires subselects.\n\n> > Do you run on a magic power grid that\n> > never fails?\n> \n> Reality is that postgres is as likely - or more likely - to wind up with\n> corrupted data than MySQL.\n\nWhat do you base this statement on? With your sample size of one\ncorrupted postgres database? Also do you include inconsistent data in\nyour definition of corrupted data?\n\n> Never in two years of programming PHP/postgres have I ever used\n> commit/rollback, and I have written some extremely complex web apps\n> (sourceforge being a prime example). \n\nI would humbly suggest that you are doing it wrong then.\n\n> Geocrawler.com runs on postgres and\n> again, I NEVER saw any need for any kind of rollback at all.\n\nSo what do you do when you get an error and \"get out\" as you put it?\nLeave the half-done work in the database?\n\n> The statelessness of the web pretty much obviates the needs for\n> locks/rollbacks as each process is extremely quick and runs from start\n> to finish instantly. It's not like the old days where you pull data down\n> into a local application, work on it, then upload it again.\n\nEven in the \"old days\" you should never keep a transaction open while\nyou \"work on it\". Transactions should *always* be short, and the web\nchanges nothing.\n\nReally, REALLY there is nothing different about the web to traditional\napplications as far as the db is concerned.\n", "msg_date": "Wed, 05 Jul 2000 12:24:12 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. Postgres" }, { "msg_contents": "On Tue, 4 Jul 2000, Tim Perdue wrote:\n\n> Performace/Scalability:\n> \n> MySQL was About 50-60% faster in real-world web serving, but it\n> crumbles under a real load. Postgres on the other hand scaled 3x\n> higher than MySQL before it started to crumble on the same machine.\n> Unfortunately, Postgres would probably still lose on a high-traffic\n> website because MySQL can crank out the pages so much faster\n\n\tActually, this one depends alot on how the site is\nsetup/programmed. I did work with a friend several months ago using the\nnewest released versions of MySQL and PostgreSQL ... we loaded (with some\nmassaging) the exact same data/tables onto both on the *exact* same\nmachine, and the exact same operating system. When we ran their existing\nweb site, without modifications, on both MySQL and PgSQL, the MySQL was\nsubstantially faster ... when we spent a little bit of time looking at the\nqueries used, we found that due to MySQLs lack of sub-queries, each page\nbeing loaded had to do multiple queries to get the same information that\nwe could get out of PgSQL using one. Once we optimized the queries, our\ntimings to load the page went from something like 3sec for MySQL and 1sec\nfor PgSQL ... (vs something like, if I recall correctly, 19sec for\nPgSQL) ...\n\n\tSame with some recent work I did with UDMSearch ... by default,\nUDMSearch does 2+n queries to the database to get the information it\nrequires ... by re-writing the 'n' queries that are performed as an IN\nquery, I was able to cut down searches from taking ~1sec*n queries down to\na 3sec query ...\n\n\tThe point being that if you do a 1:1 comparison, MySQL will be\nfaster ... 
if you use features in PgSQL that don't exist in MySQL, you can\nknock that speed difference down considerably, if not surpass MySQL,\ndepending on the circumstance ...\n\n\n", "msg_date": "Tue, 4 Jul 2000 23:31:32 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. Postgres" }, { "msg_contents": "> Final notes:\n> \n> I hate these \"MySQL\" vs. \"PostgreSQL\" articles that want to\n> say \"this one is the better\". Each one has it's advantages\n> and disadvantages. Both have a long TODO.\n\nAlso, none of the 'comparisons' take the time to deal with the fact that\nones \"disadvantages\" can generally be overcome using its\n\"advantages\" (ie. speed issues with PostgreSQL can generally be overcome\nby making use of its high end features (ie. subselects)) ...\n\n\n", "msg_date": "Wed, 5 Jul 2000 00:19:45 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. Postgres" }, { "msg_contents": "On Tue, 4 Jul 2000, Benjamin Adida wrote:\n\n> Dropping indexes and recreating them does not cause downtime. I've run a\n\nJust got hit with a 'bits moved;recreate index' on the PostgreSQL search\nengine ... drop'd and re-created index on the fly, no server shut down ...\n\n> couple of postgres-backed web sites for months on end with no issues. I've\n> survived a heavy slashdotting on my dual Pentium II-400, with Postgres\n> WRITES and READS on every Slashdot-referred hit, resulting in perfectly\n> respectable serving times (less than 3-4 seconds to serve > 20K of data on\n> each hit). No caching optimization of any kind on the app layer. And I'd\n> forgotten to vacuum my database for a few days.\n\nWe had a *very* old version of PostgreSQL running on a Pentium acting as\nan accounting/authentication backend to a RADIUS server for an ISP\n... uptime for the server itself was *almost* 365 days (someone hit the\npower switch by accident, meaning to power down a different machine\n*sigh*) ... PostgreSQL server had been up for something like 6 months\nwithout any problems, with the previous downtime being to upgrade the\nserver ...\n\n > > > Features:\n> > Postgres is undoubtedly far, far more advanced than MySQL is. Postgres\n> > now supports foreign keys, which can help with referential integrity.\n> > Postgres supports subselects and better support for creating tables as\n> > the result of queries. The \"transaction\" support that MySQL lacks is\n> > included in Postgres, although you'll never miss it on a website, unless\n> > you're building something for a bank, and if you're doing that, you'll\n> > use oracle.\n> \n> I'm just shocked at this. Where did this \"transactions aren't necessary\"\n> school of thinking originate? \n\nUmmm, hate to disparage someone else, and I may actually be incorrect, but\nI'm *almost* certain that MySQL docs, at one time, had this in it\n... where they were explaining why they didn't have and never would have\ntransaction support. Obviously this mentality has changed since, with\nthe recent addition of transactions through a third-party database product\n(re: Berkeley DB) ...\n\n> I've been developing database-backed web sites for 5 years now, and I\n> can't conceive of building a serious web site without transactions.\n> How do you guarantee that a record and its children records are all\n> stored together successfully? Do you run on a magic power grid that\n> never fails? 
Do you never have code-related error conditions that\n> require rolling back a series of database edits?\n\nActually, hate to admit it, but it wasn't until recently that I clued into\nwhat transaction were for and how they wre used :( I now use them for\njust about everything I do, and couldn't imagine doing without them ...\n\n\n", "msg_date": "Wed, 5 Jul 2000 00:28:36 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. Postgres" }, { "msg_contents": "On Wed, 5 Jul 2000, Peter Eisentraut wrote:\n\n> If your web site is truly read only, yes, you don't need transactions. But\n> then you don't need a database either. If your web site does writes, you\n> need transactions, or you're really not trying hard enough.\n\n\t... or not popular enough :)\n\n\n", "msg_date": "Wed, 5 Jul 2000 00:30:19 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. Postgres" }, { "msg_contents": "On Tue, 4 Jul 2000, Tim Perdue wrote:\n\n> Benjamin Adida wrote:\n> > Jan said that each tool has its value, and that's true. I recommend you\n> > define your evaluation context before you write this. Is this for running a\n> > serious mission-critical web site? Is it for logging web site hits with\n> > tolerance for data loss and a need for doing simple reporting?\n> \n> This is for what most people do with PHP and databases - run\n> semi-critical medium-traffic sites. Anyone running a mission-critical\n> site would have to look elsewhere for true robustness. I would not at\n> this time recommend any serious, life-threatening app run On either\n> database.\n\nSomeone want to give me an example of something that would be\nlife-threatening that would run on a database? I can think of loads of\nmission critical stuff, but life threatening? As for mission critical,\nmission critical is in the eye of the end-user ... all my clients run\nPostgreSQL for their backend needs, and I can guarantee you that each and\nevery one of them considers it a mission critical element to their sites\n... then again, I have 3+ years of personal experience with PostgreSQL to\nback me up ..\n\n> I took a real-world page from our site\n> <http://sourceforge.net/forum/forum.php?forum_id=1> and made it portable\n> to both databases. Of course, I could not import the \"body\" of the\n\ndid you take the time to optimize the queries to take advantage of\nfeatures that MySQL doesn't have, or just straight plug-n-play?\n\n> > of PHP apps don't, from what I've seen)? How does the performance get\n> > affected when a query in Postgres with subselects has to be split into 4\n> > different queries in MySQL?\n> \n> I'd really love to see a case where a real-world page view requires 4x\n> the queries on MySQL. If you are doing subselects like that on a website\n> in real-time you've got serious design problems and postgres would\n> fold-up and quit under the load anyway.\n\nOdd, I'll have to let one of my clients know that their site has design\nflaws ... wait, no, they had 3x the queries in MySQL as in PgSQL, so that\nprobably doesnt' apply ...\n\n> > Do you run on a magic power grid that\n> > never fails?\n> \n> Reality is that postgres is as likely - or more likely - to wind up with\n> corrupted data than MySQL. I'm talking physical corruption where I have\n> to destroy the database and recover from a dump. 
\n\nOdd, in my 3+ years of PostgreSQL development, I've yet to have a\nproject/database corrupt such that I had to restore from backups *knock on\nwood* INDEX corruption, yup ... 'DROP INDEX/CREATE INDEX' fixes that\nthough. Physical database corruption, nope ...\n\n> Further, I have had situations where postgres actually had DUPLICATE\n> ids in a primary key field, probably due to some abort or other nasty\n> situation in the middle of a commit. How did I recover from That?\n> Well, I had to run a count(*) next to each ID and select out the rows\n> where there was more than one of each \"unique\" id, then reinsert those\n> rows and drop and rebuild the indexes and reset the sequences.\n\nOdd, were you using transactions here, or transactionless?\n\n> > Do you never have code-related error conditions that require\n> > rolling back a series of database edits?\n> \n> Personally, I check every query in my PHP code. On the rare occasion\n> that it fales, I show an error and get out. Even with postgres, I have\n> always checked success or failure of a query and shown an appropriate\n> error. Never in two years of programming PHP/postgres have I ever used\n> commit/rollback, and I have written some extremely complex web apps\n> (sourceforge being a prime example). Geocrawler.com runs on postgres and\n> again, I NEVER saw any need for any kind of rollback at all.\n\nWait ... how does checking every query help if QUERY2 fails after QUERY1\nis sent, and you aren't using transactions?\n\n> Only now, with some extremely complex stuff that we're doing on\n> SourceForge would I like to see locks and rollbacks (hence my recent\n> interest in benchmarking and comparing the two). Your average web\n> programmer will almost never run into that in the short term.\n\nCool, at least I'm not considered average :) I *always* use transactions\nin my scripts ... *shrug* then again, I'm heavily into 'the rules of\nnormalization', so tend to not crowd everything into one table. \n\n\n", "msg_date": "Wed, 5 Jul 2000 00:40:18 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. Postgres" }, { "msg_contents": "The Hermit Hacker wrote:\n> > Further, I have had situations where postgres actually had DUPLICATE\n> > ids in a primary key field, probably due to some abort or other nasty\n> > situation in the middle of a commit. How did I recover from That?\n> > Well, I had to run a count(*) next to each ID and select out the rows\n> > where there was more than one of each \"unique\" id, then reinsert those\n> > rows and drop and rebuild the indexes and reset the sequences.\n> \n> Odd, were you using transactions here, or transactionless?\n\nDoes it matter? I suppose it was my programming error that somehow I got\nduplicate primary keys in a table in the database where that should be\ntotally impossible under any circumstance? Another stupid\ntransactionless program I'm sure.\n\nAt any rate, it appears that the main problem I had with postgres (the\n8K tuple limit) is being fixed and I will mention that in my writeup.\n\nTim\n\n-- \nFounder - PHPBuilder.com / Geocrawler.com\nLead Developer - SourceForge\nVA Linux Systems\n408-542-5723\n", "msg_date": "Tue, 04 Jul 2000 21:08:36 -0700", "msg_from": "Tim Perdue <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. 
Postgres" }, { "msg_contents": "----- Original Message -----\nFrom: \"Tim Perdue\" <[email protected]>\n\n> Before I do that I want to confirm the major problem I had w/postgres:\n> the 8K tuple limit.\n\n Just wanted to point out that this is not *exactly* true. While the\ndefault limit is 8k, all that is required to change it to 32k is to change\none line of text in config.h (blcksz from 8k to 32k). This is pointed out\nin the FAQ. So I would really consider the *default* to be 8k and the\n*limit* to be 32k. IMHO 32k is good enough for 99% of tuples in a typical\nbulletin-board-like application. It is not unreasonable to reject posts >\n32k in size. Though you might want to evaluate performance using the 32k\ntuples; might increase or decrease depending on application.\n\n -Mike\n\n", "msg_date": "Wed, 5 Jul 2000 00:50:06 -0400", "msg_from": "\"Michael Mayo\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. Postgres" }, { "msg_contents": "Tim Perdue wrote:\n> \n> MySQL was About 50-60% faster in real-world web serving ...\n\nSorry if I didn't noticed, but I searched all the messages in the thread\nfor an information about the PostgreSQL version used in the test and\ndidn't found anything.\n\nTim, what version of PostgreSQL did you used? Hope it's 7.x.\n\nConstantin Teodorescu\nFLEX Consulting Braila, ROMANIA\n", "msg_date": "Wed, 05 Jul 2000 09:11:11 +0300", "msg_from": "Constantin Teodorescu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. Postgres" }, { "msg_contents": "Constantin Teodorescu wrote:\n> Tim, what version of PostgreSQL did you used? Hope it's 7.x.\n\nYes, 7.0.2\n\nTim\n\n-- \nFounder - PHPBuilder.com / Geocrawler.com\nLead Developer - SourceForge\nVA Linux Systems\n408-542-5723\n", "msg_date": "Tue, 04 Jul 2000 23:40:44 -0700", "msg_from": "Tim Perdue <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. Postgres" }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> On Tue, 4 Jul 2000, Tim Perdue wrote:\n> \n> > Further, I have had situations where postgres actually had DUPLICATE\n> > ids in a primary key field, probably due to some abort or other nasty\n> > situation in the middle of a commit. How did I recover from That?\n> > Well, I had to run a count(*) next to each ID and select out the rows\n> > where there was more than one of each \"unique\" id, then reinsert those\n> > rows and drop and rebuild the indexes and reset the sequences.\n> \n> Odd, were you using transactions here, or transactionless?\n\nActully I think I remember a recent bug report about some condition that \nfailed the uniqueness check when inside a transaction ;(\n\nI think the report came with a fix ;)\n\n------------\nHannu\n", "msg_date": "Wed, 05 Jul 2000 11:39:39 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. Postgres" }, { "msg_contents": "Tim Perdue wrote:\n> \n> The Hermit Hacker wrote:\n> > > Further, I have had situations where postgres actually had DUPLICATE\n> > > ids in a primary key field, probably due to some abort or other nasty\n> > > situation in the middle of a commit. 
How did I recover from That?\n> > > Well, I had to run a count(*) next to each ID and select out the rows\n> > > where there was more than one of each \"unique\" id, then reinsert those\n> > > rows and drop and rebuild the indexes and reset the sequences.\n> >\n> > Odd, were you using transactions here, or transactionless?\n> \n> Does it matter? I suppose it was my programming error that somehow I got\n> duplicate primary keys in a table in the database where that should be\n> totally impossible under any circumstance? Another stupid\n> transactionless program I'm sure.\n> \n> At any rate, it appears that the main problem I had with postgres (the\n> 8K tuple limit) is being fixed and I will mention that in my writeup.\n\nCurrently (as of 7.0.x) you could use BLKSIZE=32K + lztext datatype and \nget text fields about 64-128K depending on data if you are desperately \nafter big textfields.\n\n-----------\nHannu\n", "msg_date": "Wed, 05 Jul 2000 11:46:52 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. Postgres" }, { "msg_contents": "The Hermit Hacker wrote:\n> On Tue, 4 Jul 2000, Tim Perdue wrote:\n>\n> > I took a real-world page from our site\n> > <http://sourceforge.net/forum/forum.php?forum_id=1> and made it portable\n> > to both databases. Of course, I could not import the \"body\" of the\n>\n> did you take the time to optimize the queries to take advantage of\n> features that MySQL doesn't have, or just straight plug-n-play?\n>\n\n What a \"real-world\", one single URL, whow.\n\n The \"made it portable to both\" lets me think it is stripped\n down to the common denominator that both databases support.\n That is no transactions, no subqueries, no features.\n\n That's no \"comparision\", it's BS - sorry. If you want to\n write a good article, take a couple of existing web\n applications and analyze the complexity of their underlying\n data model, what features are important/unimportant for them\n and what could be done better in them with each database.\n Then make suggestions which application should use which\n database and explain why you think so.\n\n> > Further, I have had situations where postgres actually had DUPLICATE\n> > ids in a primary key field, probably due to some abort or other nasty\n> > situation in the middle of a commit. How did I recover from That?\n> > Well, I had to run a count(*) next to each ID and select out the rows\n> > where there was more than one of each \"unique\" id, then reinsert those\n> > rows and drop and rebuild the indexes and reset the sequences.\n>\n> Odd, were you using transactions here, or transactionless?\n\n Mark, you cannot use Postgres transactionless. Each single\n statement run outside of a transaction block has it's own\n transaction.\n\n Anyway, what version of Postgres was it? How big was the\n indexed field?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Wed, 5 Jul 2000 10:55:11 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. Postgres" }, { "msg_contents": "The Hermit Hacker wrote:\n> Someone want to give me an example of something that would be\n> life-threatening that would run on a database? 
I can think of loads of\n> mission critical stuff, but life threatening? \n\nHow soon we forget the Y2K horror story warnings....\n\nPharmacy dispensary systems <-No meds, folks die.\nMedical needs supply chain systems <- No Meds or Gauze or Tools, folks die.\nSurgery scheduling systems <- No doctors or rooms for surgery, folks die\nMilitary bombing flight-path systems <-Bad data for bomb location...\nWeapons Design specifications storage <- Poorly designed systems killing\nthe testers and military users\nPowergrid billing info <-No power, on assisted living (life support)\nBanking/Financial account data <-No money, slow death of hunger\nFood Shipping systems <- No food\nWater distribution/management systems <- No water (I live in a desert)\n\nJust off of the top of my head, yes, it's possible to kill people\nwith bad data.\n\n-Bop\n\n--\nBrought to you from iBop the iMac, a MacOS, Win95, Win98, LinuxPPC machine,\nwhich is currently in MacOS land. Your bopping may vary.\n", "msg_date": "Wed, 05 Jul 2000 02:02:36 -0700", "msg_from": "Ron Chmara <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. Postgres" }, { "msg_contents": "\n> Someone want to give me an example of something that would be\n> life-threatening that would run on a database?\n\nMedical records: I've stored blood type, HIV status, general pathology\nresults, and radiology results in a database.\n\nA government site I know about stores court records about domestic\nviolence orders. Access to this information is required on short\nnotice and its absence can definitely be life threatening.\n\nLife-threatening doesn't have to be realtime.\n\nRegards,\n\nGiles\n\n\n\n", "msg_date": "Wed, 05 Jul 2000 19:41:20 +1000", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. Postgres " }, { "msg_contents": "Hannu Krosing wrote:\n> \n> Tim Perdue wrote:\n> >\n> > The Hermit Hacker wrote:\n> > > > Further, I have had situations where postgres actually had DUPLICATE\n> > > > ids in a primary key field, probably due to some abort or other nasty\n> > > > situation in the middle of a commit. How did I recover from That?\n> > > > Well, I had to run a count(*) next to each ID and select out the rows\n> > > > where there was more than one of each \"unique\" id, then reinsert those\n> > > > rows and drop and rebuild the indexes and reset the sequences.\n\nThere a bug report that allowed tuplicate ids in an uniqe field when \nSELECT FOR UPDATE was used. 
Could this be your case ?\n\n---8<-------8<-------8<-------8<-------8<-------8<-------8<-------8<----\ngamer=# create table test(i int primary key);\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'test_pkey'\nfor table 'test'\nCREATE\ngamer=# insert into test values(1);\nINSERT 18860 1\ngamer=# begin;\nBEGIN\ngamer=# select * from test for update;\n i \n---\n 1\n(1 row)\n\ngamer=# insert into test values(1);\nINSERT 18861 1\ngamer=# commit;\nCOMMIT\ngamer=# select * from test;\n i \n---\n 1\n 1\n(2 rows)\n\ngamer=# insert into test values(1);\nERROR: Cannot insert a duplicate key into unique index test_pkey\n---8<-------8<-------8<-------8<-------8<-------8<-------8<-------8<----\n\nIIRC the fix was also provided, so it could be fixed in current CVS (the\nabove \nis from 7.0.2, worked the same in 6.5.3)\n\n> > > Odd, were you using transactions here, or transactionless?\n\nIronically the above has to be using transactions as select for update\nworks \nlike this only inside transactions and is thus ineffectif if \ntransaction=statement;\n\nAs multi-command statements are run as a single transaction \n(which can't be done from psql as it does its own splittng ;()\nso a command like 'select * from test for update;insert into test\nvalues(1);'\nhas the same effect \n\n> > Does it matter? I suppose it was my programming error that somehow I got\n> > duplicate primary keys in a table in the database where that should be\n> > totally impossible under any circumstance? Another stupid\n> > transactionless program I'm sure.\n\nconstraints and transactions are quite different (though connected)\nthings.\n\nlack of some types of constraints (not null, in (1,2,3)) can be overcome \nwith careful programming, others like foreign keys or unique can't\nunless \ntransactions are used)\n\nno amount of careful programming will overcome lack of transactions\n(except \nimplementing transactions yourself ;)\n\n \n-----------\nHannu\n", "msg_date": "Wed, 05 Jul 2000 14:24:42 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. Postgres" }, { "msg_contents": "\"Robert B. Easter\" wrote:\n> \n> \n> While it is slow, I've been able to store unlimited amounts of text into\n> the database by using the following code. \n\nThanks for a really nice exaple !\n\n> I've tested inserting over 4\n> megabytes from a TEXTAREA web form using PHP. When inserting such massive\n> amounts of text, you will have to wait a while, but it will eventually succeed\n> if you don't run out of memory. If you do run out of memory, the backend\n> terminates gracefully and the transaction aborts/rollsback.\n> \n> -- Load the PGSQL procedural language\n> -- This could also be done with the createlang script/program.\n> -- See man createlang.\n> CREATE FUNCTION plpgsql_call_handler()\n> RETURNS OPAQUE AS '/usr/local/pgsql/lib/plpgsql.so'\n> LANGUAGE 'C';\n> \n> CREATE TRUSTED PROCEDURAL LANGUAGE 'plpgsql'\n> HANDLER plpgsql_call_handler\n> LANCOMPILER 'PL/pgSQL';\n\nYou probably meant pl/tcl as all your code is using that ?\n\n---------\nHannu\n", "msg_date": "Wed, 05 Jul 2000 14:51:39 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. Postgres" }, { "msg_contents": "On Wed, 5 Jul 2000, Jan Wieck wrote:\n\n> > Odd, were you using transactions here, or transactionless?\n> \n> Mark, you cannot use Postgres transactionless. 
Each single\n> statement run outside of a transaction block has it's own\n> transaction.\n\nSorry, but 'transactionless' I mean no BEGIN/END ... from what I've been\ngathering from Tim, his code goes something like:\n\ndo query 1\ndo query 2\nif query 2 fails \"oops\"\n\nvs\n\ndo query 1\ndo query 2\nif query 2 fails, abort and auto-rollback query 1\n\nThen again, Tim might be being even more simple then that:\n\ndo query 1\nexit\n\n\n\n", "msg_date": "Wed, 5 Jul 2000 08:54:35 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. Postgres" }, { "msg_contents": "On Wed, 05 Jul 2000, Hannu Krosing wrote:\n> Tim Perdue wrote:\n> > \n> > The Hermit Hacker wrote:\n> > > > Further, I have had situations where postgres actually had DUPLICATE\n> > > > ids in a primary key field, probably due to some abort or other nasty\n> > > > situation in the middle of a commit. How did I recover from That?\n> > > > Well, I had to run a count(*) next to each ID and select out the rows\n> > > > where there was more than one of each \"unique\" id, then reinsert those\n> > > > rows and drop and rebuild the indexes and reset the sequences.\n> > >\n> > > Odd, were you using transactions here, or transactionless?\n> > \n> > Does it matter? I suppose it was my programming error that somehow I got\n> > duplicate primary keys in a table in the database where that should be\n> > totally impossible under any circumstance? Another stupid\n> > transactionless program I'm sure.\n> > \n> > At any rate, it appears that the main problem I had with postgres (the\n> > 8K tuple limit) is being fixed and I will mention that in my writeup.\n> \n> Currently (as of 7.0.x) you could use BLKSIZE=32K + lztext datatype and \n> get text fields about 64-128K depending on data if you are desperately \n> after big textfields.\n> \n> -----------\n> Hannu\n\nWhile it is slow, I've been able to store unlimited amounts of text into\nthe database by using the following code. I've tested inserting over 4\nmegabytes from a TEXTAREA web form using PHP. When inserting such massive\namounts of text, you will have to wait a while, but it will eventually succeed\nif you don't run out of memory. 
If you do run out of memory, the backend\nterminates gracefully and the transaction aborts/rollsback.\n\n-- Load the PGSQL procedural language\n-- This could also be done with the createlang script/program.\n-- See man createlang.\nCREATE FUNCTION plpgsql_call_handler()\n\tRETURNS OPAQUE AS '/usr/local/pgsql/lib/plpgsql.so'\n\tLANGUAGE 'C';\n\nCREATE TRUSTED PROCEDURAL LANGUAGE 'plpgsql'\n\tHANDLER plpgsql_call_handler\n\tLANCOMPILER 'PL/pgSQL';\n \t\n--------------------------------------------------------------------------------\n--\n-- Large Text storage\n--\n\n\n-- \tputlgtext -\tgeneric function to store text into the\n--\t\t\tspecified text storage table.\n--\t\tThe table specified in $1 should have the following\n--\t\tfields:\n--\t\t\tid, text_seq, text_block\n--\n-- $1 is the name of the table into which $3 is stored\n-- $2 is the id of the text and references id in another table\n-- $3 is the text to store, which is broken into chunks.\n-- returns 0 on success\n-- nonzero otherwise\nCREATE FUNCTION putlgtext (TEXT, INTEGER, TEXT) RETURNS INTEGER AS '\n\tset i_table $1\n\tset i_id $2\n\tset i_t {}\n\tregsub -all {([\\\\''\\\\\\\\])} $3 {\\\\\\\\\\\\1} i_t\n\tset i_seq 0\n\twhile { $i_t != {} } {\n\t\tset i_offset 0\t\n\t\tset tblock [string range $i_t 0 [expr 7000 + $i_offset]]\n\t\t# Do not split string at a backslash\n\t\twhile { [string range $tblock end end] == \"\\\\\\\\\" && $i_offset < 1001 } {\n\t\t\tset i_offset [expr $i_offset + 1]\n\t\t\tset tblock [string range $i_t 0 [expr 7000 + $i_offset]]\n\t\t}\n\t\tset i_t [string range $i_t [expr 7000 + [expr $i_offset + 1]] end]\n\t\tspi_exec \"INSERT INTO $i_table (id, text_seq, text_block) VALUES ( $i_id , $i_seq , ''$tblock'' )\"\n\t\tincr i_seq\n\t}\n\treturn 0\n' LANGUAGE 'pltcl';\n\n-- \t\tgetlgtext - like putlgtext, this is a generic\n--\t\t\t\tfunction that does the opposite of putlgtext\n-- $1 is the table from which to get TEXT\n-- $2 is the id of the text to get\n-- returns the text concatenated from one or more rows\nCREATE FUNCTION getlgtext(TEXT, INTEGER) RETURNS TEXT AS '\n\tset o_text {}\n\tspi_exec -array q_row \"SELECT text_block FROM $1 WHERE id = $2 ORDER BY text_seq\" {\n\t\tappend o_text $q_row(text_block)\n\t}\n\treturn $o_text\n' LANGUAGE 'pltcl';\n\n-- largetext exists just to hold an id and a dummy 'lgtext' attribute.\n-- This table's trigger function provides for inserting and updating\n-- into largetext_block. 
The text input to lgtext actually gets\n-- broken into chunks and stored in largetext_block.\n-- Deletes to this table will chain to largetext_block automatically\n-- by referential integrity on the id attribute.\n-- Selects have to be done using the getlgtext function.\nCREATE TABLE largetext (\n\tid\t\t\t\tINTEGER PRIMARY KEY,\n\tlgtext\t\tTEXT -- dummy field\n);\nCOMMENT ON TABLE largetext IS 'Holds large text';\n\n-- This table must have the field names as they are.\n-- These attribute names are expected by put/getlgtext.\nCREATE TABLE largetext_block (\n\tid\t\t\t\t\tINTEGER NOT NULL\n\t\t\t\t\t\tREFERENCES largetext\n\t\t\t\t\t\tON DELETE CASCADE,\n\t\t\t\t\t\t\n\ttext_seq\t\t\tINTEGER NOT NULL,\n\t\n\ttext_block\t\tTEXT,\n\t\n\tPRIMARY KEY (id, text_seq)\n);\nCOMMENT ON TABLE largetext_block IS 'Holds blocks of text for table largetext';\nCREATE SEQUENCE largetext_seq;\n\n-- SELECT:\n-- SELECT id AS the_id FROM largetext;\n-- SELECT getlgtext('largetext_block', id) FROM largetext WHERE id = the_id;\n\n-- INSERT:\n-- INSERT INTO largetext (lgtext) values ('.......');\n\n-- DELETE:\n-- DELETE FROM largetext WHERE id = someid;\n-- deletes from largetext and by referential\n-- integrity, from largetext_text all associated block rows.\nCREATE FUNCTION largetext_trigfun() RETURNS OPAQUE AS '\n\tset i_t {}\n\tregsub -all {([\\\\''\\\\\\\\])} $NEW($2) {\\\\\\\\\\\\1} i_t\n\tswitch $TG_op {\n\t\tINSERT {\n\t\t\tspi_exec \"SELECT nextval(''largetext_seq'') AS new_id\"\n\t\t\tset NEW($1) $new_id\n\t\t\tspi_exec \"SELECT putlgtext(''largetext_block'', $new_id, ''$i_t'') AS rcode\"\n\t\t\tif { $rcode != 0 } then { return SKIP }\n\t\t}\n\t\tUPDATE {\n\t\t\tif { $NEW($2) != {} } then {\n\t\t\t\tspi_exec \"DELETE FROM largetext_text WHERE id = $OLD($1)\"\n\t\t\t\tspi_exec \"SELECT putlgtext(''largetext_block'', $OLD($1), ''$NEW($2)'') AS rcode\"\n\t\t\t\tif { $rcode != 0 } then { return SKIP }\n\t\t\t}\n\t\t}\n\t}\n\tset NEW($2) \"ok\"\n\treturn [array get NEW]\n' LANGUAGE 'pltcl';\n\n-- Set the function as trigger for table largetext\nCREATE TRIGGER largetext_trig BEFORE INSERT OR UPDATE\nON largetext FOR EACH ROW EXECUTE\nPROCEDURE largetext_trigfun(id,lgtext);\n\n\n\nI had to use the regsub function calls to replace the \\ escaping on literal\n'\\'s. What a pain! If anyone can try this code and suggest ways to improve\nits speed, I'd be happy.\n\n -- \n\t\t\tRobert\n", "msg_date": "Wed, 5 Jul 2000 08:27:07 -0400", "msg_from": "\"Robert B. Easter\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. Postgres" }, { "msg_contents": "What ? sleepycat DB3 is now GPL ? That would be a change of philosophy.\n\nPeter\n\n----- Original Message -----\nFrom: \"Egon Schmid\" <[email protected]>\nTo: \"Tim Perdue\" <[email protected]>\nCc: <[email protected]>\nSent: Tuesday, July 04, 2000 7:51 PM\nSubject: Re: [HACKERS] Article on MySQL vs. Postgres\n\n\nTim Perdue wrote:\n>\n> On wednesday or thursday, I'm going to be publishing my article on MySQL\n> vs. Postgres on PHPBuilder.com.\n\nCool!\n\n> Features:\n> Postgres is undoubtedly far, far more advanced than MySQL is. Postgres\n> now supports foreign keys, which can help with referential integrity.\n> Postgres supports subselects and better support for creating tables as\n> the result of queries. 
The \"transaction\" support that MySQL lacks is\n> included in Postgres, although you'll never miss it on a website, unless\n> you're building something for a bank, and if you're doing that, you'll\n> use oracle.\n\nSince MySQL version 3.23.16 it supports transactions with sleepycats DB3\nand since version 3.23.19 it is under the GPL.\n\n-Egon\n\n--\nSIX Offene Systeme GmbH � Stuttgart - Berlin - New York\nSielminger Stra�e 63 � D-70771 Leinfelden-Echterdingen\nFon +49 711 9909164 � Fax +49 711 9909199 http://www.six.de\nPHP-Stand auf Europas gr�sster Linux-Messe: 'LinuxTag 2001'\nweitere Infos @ http://www.dynamic-webpages.de/\n\n\n", "msg_date": "Wed, 5 Jul 2000 13:00:53 -0000", "msg_from": "\"Peter Galbavy\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. Postgres" }, { "msg_contents": "Peter Galbavy wrote:\n> \n> What ? sleepycat DB3 is now GPL ? That would be a change of philosophy.\n> \n> Peter\n\nNot to my understanding. If you sell a commercial solution\ninvolving MySQL, you have to pay Sleepycat a licensing fee. For\nnon-commercial use, its free. Oh, what a tangled web we weave\nwhen we bail from BSD.\n\nMike Mascari\n", "msg_date": "Wed, 05 Jul 2000 09:06:16 -0400", "msg_from": "Mike Mascari <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. Postgres" }, { "msg_contents": "On Wed, 05 Jul 2000, Hannu Krosing wrote:\n> \"Robert B. Easter\" wrote:\n> > -- Load the PGSQL procedural language\n> > -- This could also be done with the createlang script/program.\n> > -- See man createlang.\n> > CREATE FUNCTION plpgsql_call_handler()\n> > RETURNS OPAQUE AS '/usr/local/pgsql/lib/plpgsql.so'\n> > LANGUAGE 'C';\n> > \n> > CREATE TRUSTED PROCEDURAL LANGUAGE 'plpgsql'\n> > HANDLER plpgsql_call_handler\n> > LANCOMPILER 'PL/pgSQL';\n> \n> You probably meant pl/tcl as all your code is using that ?\n\nYes, I mean't to say this:\n\n-- Load the TCL procedural language\n-- This could also be done with the createlang script/program.\n-- See man createlang.\nCREATE FUNCTION pltcl_call_handler()\n\tRETURNS OPAQUE AS '/usr/local/pgsql/lib/pltcl.so'\n\tLANGUAGE 'C';\n\t\nCREATE TRUSTED PROCEDURAL LANGUAGE 'pltcl'\n\tHANDLER pltcl_call_handler\n\tLANCOMPILER 'PL/tcl';\n\n\n-- \n\t\t\tRobert\n", "msg_date": "Wed, 5 Jul 2000 09:14:48 -0400", "msg_from": "\"Robert B. Easter\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. Postgres" }, { "msg_contents": "on 7/4/00 8:30 PM, Tim Perdue at [email protected] wrote:\n\n> This is for what most people do with PHP and databases - run\n> semi-critical medium-traffic sites. Anyone running a mission-critical\n> site would have to look elsewhere for true robustness. I would not at\n> this time recommend any serious, life-threatening app run On either\n> database.\n\nTo the person who owns the web site, data is always critical. Does\nwww.yahoo.com store \"life-threatening\" information? Not really, but if you\nlose your yahoo.com email, the \"oh sorry, our database doesn't support\ntransactions\" excuse doesn't cut it.\n\n> I took a real-world page from our site\n> <http://sourceforge.net/forum/forum.php?forum_id=1> and made it portable\n> to both databases. Of course, I could not import the \"body\" of the\n> message into postgres because of the 8k limitation, so the body had to\n> be dropped from both databases.\n> \n> The \"nested\" view of this page requires joins against three tables and\n> some recursion to show submessages.\n\nSome recursion? 
That is interesting. Do you mean multiple queries to the\ndatabase? I don't see any reason to have multiple queries to the database to\nshow nested messages in a forum. Using stored procedures to create sort keys\nat insertion or selection time is the efficient way to do this. Ah, but\nMySQL doesn't have stored procedures.\n\n>> Did you use connection pooling (a lot\n> \n> I used persistent connections, yes. Without them, Postgres' showing was\n> far poorer, with mysql showing about 2x the performance.\n\nWell, there must be some issue with your setup, because 10 requests per\nsecond on Postgres on reads only is far from what I've seen on much wimpier\nboxes than yours. Maybe I should look some more into how pconnect really\nhandles connection pooling, I have heard bad things that need to be\nverified.\n\n> I'd really love to see a case where a real-world page view requires 4x\n> the queries on MySQL. If you are doing subselects like that on a website\n> in real-time you've got serious design problems and postgres would\n> fold-up and quit under the load anyway.\n\nI believe the \"design problems\" come up if you need subselects and you're\nusing MySQL. I've used Illustra/Informix, Oracle, and now Postgres to build\ndatabase-backed web sites, and subselects are a vital part of any\nsomewhat-complex web app. How exactly do subselects constitute a design\nproblem in your opinion?\n\n> Specifically, what is the problem with my \"intuition\"? All I did in the\n> prior message was report my results and ask for feedback before I post\n> it.\n\nYour intuition is that Postgres will be slower because it is slower than\nMySQL at reads. I contend that:\n - Postgres 7.0 is much faster at reads than the numbers you've shown.\nI've seen it be much faster on smaller boxes.\n - The slowdown you're seeing is probably due in no small part to the\nimplementation of pconnect(), the number of times it actually connects vs.\nthe number of times it goes to the pool, how large that pool gets, etc...\n - The write-inefficiencies of MySQL will, on any serious web site, cut\nperformance so significantly that it is simply not workable. I'm thinking of\nthe delayed updates on Slashdot, the 20-25 second page loads on SourceForge\nfor permission updating and such...\n\n> Personally, I check every query in my PHP code. On the rare occasion\n> that it fales, I show an error and get out. Even with postgres, I have\n> always checked success or failure of a query and shown an appropriate\n> error. Never in two years of programming PHP/postgres have I ever used\n> commit/rollback, and I have written some extremely complex web apps\n> (sourceforge being a prime example). Geocrawler.com runs on postgres and\n> again, I NEVER saw any need for any kind of rollback at all.\n\nGeez. So you never have two inserts or updates you need to perform at once?\n*ever*? What happens if your second one fails? Do you manually attempt to\nbacktrack on the changes you've made?\n\n> The statelessness of the web pretty much obviates the needs for\n> locks/rollbacks as each process is extremely quick and runs from start\n> to finish instantly. It's not like the old days where you pull data down\n> into a local application, work on it, then upload it again.\n> \n> Only now, with some extremely complex stuff that we're doing on\n> SourceForge would I like to see locks and rollbacks (hence my recent\n> interest in benchmarking and comparing the two). 
Your average web\n> programmer will almost never run into that in the short term.\n\nThis is simply false. If you're not using commit/rollbacks, you're either\ncutting back on the functionality of your site, creating potential error\nsituations by the dozen, or you've got some serious design issues in your\nsystem. Commit/Rollback is not an \"advanced\" part of building web sites. It\nis a basic building block.\n\nTelling your \"average web programmer\" to ignore transactions is like telling\nyour programmers not to free memory in your C programs because, hey, who\ncares, you've got enough RAM for small programs, and they can learn to clean\nup memory when they build \"real\" systems!\n\nOf all things, this is precisely the type of thinking that crushes the\ncredibility of the open-source community. Enterprise IT managers understand\nin great detail the need for transactions. Web sites actually need *more*\nreliable technology, because you don't have that stateful session: you\nsometimes need to recreate rollback mechanisms across pages by having\ncleanup processes. Building this on a substrate that doesn't support the\nbasic transaction construct is impossible and irresponsible.\n\n> Your own strong biases are shown in your message. I do this stuff\n> because I'm curious and want to find out for myself. Most readers will\n> find it interesting as I did. Few will switch from MySQL to postgres or\n> vice versa because of it.\n\nMy bias? Well, my company doesn't have a vested interest in promoting\nPostgres or MySQL. Before I started using Postgres, I looked into MySQL.\nYou're right if you think my evaluation didn't take too long. If I have\npreferences, they're based purely on engineering decisions. That's not the\nsame as \"my company just publicly endorsed MySQL, and check it out, we think\nMySQL is better than Postgres.\"\n\nNote that I am *not* saying that you're doing this on purpose, I'm just\nsaying that you're going to have a really hard time proving your\nobjectivity.\n\n> Another clarification: PHPBuilder is owned by internet.com, a competitor\n> of VA Linux/Andover.\n\nPHP folks have a bias, too: PHP was built with MySQL in mind, it even ships\nwith MySQL drivers (and not Postgres). PHP's mediocre connection pooling\nlimits Postgres performance.\n\nI'm happy to continue this discussion, but here's what I've noticed from\nhaving had this argument many many times: if you don't believe that\ntransactions are useful or necessary, that subselects and enforced foreign\nkey constraints are hugely important, then this discussion will lead\nnowhere. We simply begin with different assumptions.\n\nI only suggest that you begin your evaluation article by explaining:\n - your assumptions\n - the fact that the page you used for benchmarking was originally built\nfor MySQL, and thus makes no use of more advanced Postgres features.\n\n-Ben\n\n", "msg_date": "Wed, 05 Jul 2000 10:48:01 -0400", "msg_from": "Benjamin Adida <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. Postgres" }, { "msg_contents": "Tim Perdue wrote:\n> \n> Benjamin Adida wrote:\n> \n> ...useless rant about all MySQL users being stupid inept programmers\n> deleted....\n> \n> > PHP folks have a bias, too: PHP was built with MySQL in mind, it even ships\n> > with MySQL drivers (and not Postgres). PHP's mediocre connection pooling\n> > limits Postgres performance.\n> \n> Well the point of this article is obviously in relation to PHP. 
Yes,\n> Rasmus Lerdorf himself uses MySQL and I'm sure Jan would say he's a\n> \"wannabee\", not a \"real developer\".\n\nRather he is probably a _web_ developer and not a _database_ developer, as \nmost developers with DB background abhor lack of transactions, as you have \nsurely noticed by now, and would not use MySQL fro R/W access ;)\n\n> Yes I'm sure that PHP was designed to make Postgres look bad. All\n> benchmarks are designed to make postgres look bad. All web designers\n> build everything in just that special way that makes postgres look bad,\n> and they all do it because they're inept and stupid,\n\nOr just irresponsible. \n\nThat's how most websites grow -\n at first no writes - MySQL is great (a filesystem with SQL interface\nperformance-wize)\n then some writes in real time, when popularity grows bad things start to\nhappen.\n then delayed writes a la Slashdot to keep the performance and integrity of\ndatabase.\n\n> unlike the small crowd of postgres users.\n\nThat could be part of the problem ;)\n\nSQL is a complex beast and a programmer experienced in procedural languages \ntakes some time to learn to use it effectively. Until then he just tries to \nuse his C/Pascal/java/whatever knowledge and simple selects - and this is\nwhere MySQL excels.\n\n----------------\nHannu\n", "msg_date": "Wed, 05 Jul 2000 18:13:02 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. Postgres" }, { "msg_contents": "Benjamin Adida wrote:\n\n...useless rant about all MySQL users being stupid inept programmers\ndeleted....\n\n\n> PHP folks have a bias, too: PHP was built with MySQL in mind, it even ships\n> with MySQL drivers (and not Postgres). PHP's mediocre connection pooling\n> limits Postgres performance.\n\nWell the point of this article is obviously in relation to PHP. Yes,\nRasmus Lerdorf himself uses MySQL and I'm sure Jan would say he's a\n\"wannabee\", not a \"real developer\". \n\nYes I'm sure that PHP was designed to make Postgres look bad. All\nbenchmarks are designed to make postgres look bad. All web designers\nbuild everything in just that special way that makes postgres look bad,\nand they all do it because they're inept and stupid, unlike the small\ncrowd of postgres users.\n\nTim\n\n-- \nFounder - PHPBuilder.com / Geocrawler.com\nLead Developer - SourceForge\nVA Linux Systems\n408-542-5723\n", "msg_date": "Wed, 05 Jul 2000 08:37:58 -0700", "msg_from": "Tim Perdue <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. Postgres" }, { "msg_contents": "> Yes I'm sure that PHP was designed to make Postgres look bad. All\n> benchmarks are designed to make postgres look bad. All web designers\n> build everything in just that special way that makes postgres look bad,\n> and they all do it because they're inept and stupid, unlike the small\n> crowd of postgres users.\n\nI don't believe that your sarcasm is unwarranted, BUT, and this is a big but\n(just like mine :), I have found that the popularity of free software is\nsometimes iversly proportional to it's complexity. Complexity in turn\nsometimes, but not always, implies that the software has more features and\nis better thought out. There are exceptions to this, but it has proven true\nfor many of the packages I have worked with.\n\nMySQL is used by Linux folks (generalising), probably because the learning\ncurve is not too steep. 
And the otherway round for other DB + OS\ncombinations.\n\nThe problem I think that many folk have with printed benchmarks is the\napples to oranges comparisons. To make the comparison look valid, you have\nto either reduce or ignore the differences of the fruit and just look at a\nlimited set of values. In the case of the apples and oranges, \"average\ndiameter\" may be valid, while \"green-ness\" is not. The eater of the fruit\nactually wanted to know \"which tastes better\".\n\nPeter\n\n", "msg_date": "Wed, 5 Jul 2000 15:55:13 -0000", "msg_from": "\"Peter Galbavy\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. Postgres" }, { "msg_contents": "on 7/5/00 11:37 AM, Tim Perdue at [email protected] wrote:\n\n> ...useless rant about all MySQL users being stupid inept programmers\n> deleted....\n\nHmmm, okay, well, I guess my invitation to continue the conversation while\nadmitting a difference in assumptions is declined. Yes, my response was\nharsh, but harsh on MySQL. I didn't attack MySQL programmers. I attacked the\nproduct.\n\nIs there a way to do this without incurring the wrath of MySQL users? If you\nlook at the Postgres mailing list, your worries (the duplicate key thing)\nwere addressed immediately by Postgres programmers, because they (the\nPostgres team, which *doesn't* include me) understand the need to improve\nthe product.\n\nAnd no, benchmarks aren't built to make Postgres look bad. But PHP is built\naround an inefficient connection pooling system, which doesn't appear much\nunder MySQL because MySQL has extremely fast connection setup, while every\nother RDBMS on the market (Oracle, Sybase, Informix, Postgres) does not.\nThat's the cost of setting up a transaction environment, it takes a bit of\ntime. Thus, PHP's pconnect() crushes performance on all databases except\nMySQL.\n\nBut anyhow, I've clearly hit a nerve. You asked a question, I answered\ntruthfully, honestly, and logically. And you're absolutely right that I come\nout strongly against MySQL. Proceed with this information as you see fit...\n\n-Ben\n\n", "msg_date": "Wed, 05 Jul 2000 11:56:45 -0400", "msg_from": "Benjamin Adida <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. Postgres" }, { "msg_contents": "> Yes I'm sure that PHP was designed to make Postgres look bad. All\n> benchmarks are designed to make postgres look bad. All web designers\n> build everything in just that special way that makes postgres look bad,\n> and they all do it because they're inept and stupid, unlike the small\n> crowd of postgres users.\n\nAnother happy customer... ;)\n\nTim, one of the apparent \"discriminators\" between typical MySQL users\nand typical Postgres users is their perception of the importance of\ntransactions and its relevance in application design.\n\nFor myself, coming from other commercial databases and having built\nlarge data handling systems using those, doing without transactions is\ndifficult to accept. And we'd like for others to see the light too.\nHopefully the light will be a bit closer soon, since, apparently,\ntransactions are coming to the MySQL feature set.\n\nYou mentioned a speed difference in Postgres vs MySQL. The anecdotal\nreports are quite often in this direction, but we typically see\ncomparable or better performance with Postgres when we actually look at\nthe app or benchmark. 
Would it be possible to see the test case and to\nreproduce it here?\n\nRegards.\n\n - Thomas\n", "msg_date": "Wed, 05 Jul 2000 16:03:22 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. Postgres" }, { "msg_contents": "Tim Perdue wrote:\n> \n> Yes I'm sure that PHP was designed to make Postgres look bad. All\n> benchmarks are designed to make postgres look bad. All web designers\n> build everything in just that special way that makes postgres look bad,\n> and they all do it because they're inept and stupid, unlike the small\n> crowd of postgres users.\n\nTim, don't be so upset.\n\nI'm not an english fluently speaker so I hope I can make myself clearly\nunderstood.\nNoone wants you to write a good article for PostgreSQL just because they\nare developing PostgreSQL.\nNoone hates MySQL.\nNoone tries to make PostgreSQL look better as it is. We don't sell it\n:-)\nIt's just a couple of things that are important in database benchmarks\nand the PostgreSQL developers knows them better.\nThat's why I consider that you have done a good thing telling us about\nyour article and I sincerely hope that you don't feel sorry for that.\nI agree with you that they were some replies to your message rather ...\nviolent I can say.\nDefinitely, MySQL and PostgreSQL has their own application preferences\nand they are making a good job each of them.\nIt's so difficult to compare them as it would be comparing two cars\n(theyu have 4 wheels, 4 doors, an engine) and we could pick for example\nthe VW Beetle and a Mercedes A-class.\n\nSo, I would say to write your article about using MySQL or PostgreSQL on\nPHP applications and let other know your results. Now, when MySQL is\nGPL, it's a good thing to make such a comparisson. But please, don't pe\nangry and upset on the PostgreSQL developers and community. They just\ntried to give a hand of help revealing some important features of\nPostgreSQL.\n\nhope it helps,\nBest regards,\nConstantin Teodorescu\nFLEX Consulting Braila, ROMANIA\n", "msg_date": "Wed, 05 Jul 2000 19:04:18 +0300", "msg_from": "Constantin Teodorescu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. Postgres" }, { "msg_contents": "The Hermit Hacker wrote:\n\n> We had a *very* old version of PostgreSQL running on a Pentium acting as\n> an accounting/authentication backend to a RADIUS server for an ISP\n> ... uptime for the server itself was *almost* 365 days (someone hit the\n> power switch by accident, meaning to power down a different machine\n> *sigh*) ... PostgreSQL server had been up for something like 6 months\n> without any problems, with the previous downtime being to upgrade the\n> server ...\n\nAt a previous employer, there is still a database running that has not\nseen a crash downtime ever since early 1996 - the only few downtimes it\never saw were for a rare few postgres, OS and hardware upgrades. As\nthere have been no cries for help on any database or reboot issue ever\nsince I left (I still am appointed as the DB admin in case of any\ntrouble), it must be getting close to two years uptime by now, and that\nliterally unattended. \n\nSevo\n\n-- \[email protected]\n", "msg_date": "Wed, 05 Jul 2000 18:13:51 +0200", "msg_from": "Sevo Stille <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. 
Postgres" }, { "msg_contents": "Hannu Krosing <[email protected]> writes:\n> There a bug report that allowed tuplicate ids in an uniqe field when \n> SELECT FOR UPDATE was used. Could this be your case ?\n> [snip]\n> IIRC the fix was also provided, so it could be fixed in current CVS (the\n> above is from 7.0.2, worked the same in 6.5.3)\n\nIt does seem to be fixed in current CVS:\n\nregression=# create table test(i int primary key);\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'test_pkey' for table 'test'\nCREATE\nregression=# insert into test values(1);\nINSERT 145215 1\nregression=# begin;\nBEGIN\nregression=# select * from test for update;\n i\n---\n 1\n(1 row)\n\nregression=# insert into test values(1);\nERROR: Cannot insert a duplicate key into unique index test_pkey\nregression=#\n\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 05 Jul 2000 12:16:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. Postgres " }, { "msg_contents": "On Wed, 5 Jul 2000, Benjamin Adida wrote:\n\n> > Another clarification: PHPBuilder is owned by internet.com, a competitor\n> > of VA Linux/Andover.\n> \n> PHP folks have a bias, too: PHP was built with MySQL in mind, it even ships\n> with MySQL drivers (and not Postgres). PHP's mediocre connection pooling\n> limits Postgres performance.\n\nCareful here ... PHP was not built with MySQL in mind ... hell, I used PHP\nages before it even *had* MySQL support (hell, before I even know about\nPostgres95 *or* MySQL) ... also, if I recall reading on the PHP site, the\nMySQL support that is included is limited, but I don't recall where I read\nit. There is a recommendation *somewhere* that if you want to use all the\nfeatures, you ahve to install the MySQL libraries first ...\n\nJust to defend PHP, cause, well ... I like it :)\n\n", "msg_date": "Wed, 5 Jul 2000 13:30:51 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. Postgres" }, { "msg_contents": "Thomas Lockhart wrote:\n> You mentioned a speed difference in Postgres vs MySQL. The anecdotal\n> reports are quite often in this direction, but we typically see\n> comparable or better performance with Postgres when we actually look at\n> the app or benchmark. Would it be possible to see the test case and to\n> reproduce it here?\n\nFinally a sensible reply from one of the core guys.\n\nhttp://www.perdue.net/benchmarks.tar.gz\n\nTo switch between postgres and mysql, copy postgres.php to database.php,\nchange the line of SQL with the LIMIT statement in forum.php. \n\nTo move to mysql, copy mysql.php to database.php and change the line of\nSQL in forum.php\n\nNo bitching about the \"bad design\" of the forum using recursion to show\nsubmessages. It can be done in memory in PHP, but I chose to hit the\ndatabase instead. This page is a good example of one that hits the\ndatabase hard. It's one of the worst on our site.\n\nAt any rate, I wish someone would write an article that explains what\nthe benefits of transactions are, and how to use them effectively in a\nweb app, skipping the religious fervor surrounding pgsql vs. 
myql.\nThere's a lot of people visiting PHPBuilder who just want to expand\ntheir knowledge of web development, and many of them would find that\ninteresting.\n\nTim\n\n-- \nFounder - PHPBuilder.com / Geocrawler.com\nLead Developer - SourceForge\nVA Linux Systems\n408-542-5723\n", "msg_date": "Wed, 05 Jul 2000 09:34:05 -0700", "msg_from": "Tim Perdue <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. Postgres" }, { "msg_contents": "On Wed, 5 Jul 2000, Tim Perdue wrote:\n\n> Benjamin Adida wrote:\n> \n> ...useless rant about all MySQL users being stupid inept programmers\n> deleted....\n> \n> \n> > PHP folks have a bias, too: PHP was built with MySQL in mind, it even ships\n> > with MySQL drivers (and not Postgres). PHP's mediocre connection pooling\n> > limits Postgres performance.\n> \n> Well the point of this article is obviously in relation to PHP. Yes,\n> Rasmus Lerdorf himself uses MySQL and I'm sure Jan would say he's a\n> \"wannabee\", not a \"real developer\". \n\nI would seriously doubt that Jan wuld consider Rasmus a 'wannabee'\n... Rasmus essentially built a Web optimized, HTML embedded language that\nI imagine a *large* percentage of the sites on the 'Net rely on. My\nexperience with the language is that it is clean and *very* easy to pick\nup for simple stuff, with some nice, advanced tools for the more complex\nissues ...\n\nI use PHP with PgSQL almost exclusively now for my frontends, since its\ngot some *nice* features for retrieving the results of queries (ie. I love\nbeing able to do a SELECT * and being able to retrive the results by the\nfield name instead of having to know the ordering) ...\n\n\n", "msg_date": "Wed, 5 Jul 2000 13:35:16 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. Postgres" }, { "msg_contents": "On Wed, 5 Jul 2000, Tim Perdue wrote:\n\n> At any rate, I wish someone would write an article that explains what\n> the benefits of transactions are, and how to use them effectively in a\n> web app, skipping the religious fervor surrounding pgsql vs. myql.\n> There's a lot of people visiting PHPBuilder who just want to expand\n> their knowledge of web development, and many of them would find that\n> interesting.\n\nI couldn't write to save my life, but if you want to try and co-write\nsomething, I'm more then willing to try and provide required input ... \n\n\n", "msg_date": "Wed, 5 Jul 2000 13:40:02 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. Postgres" }, { "msg_contents": "\nTim,\n\nI'm sorry if I came off harsh in my previous comments. I'm a fervent\nsupporter of open-source software, and have hit massive pushback from\nenterprise people because they see all the open-source sites using MySQL,\nand that is outrageous to them. Although MySQL has a few, important niches\nto fill, it's been used in places where I think it's hurt the credibility of\nopen-source web developers. I've been trying to talk to MySQL\ndeveloper/users about how we got to where we are, but with little success\n(and what I've told you is by far the nastiest I've ever been in this\nrespect).\n\nI hope that we can have a meaningful exchange about these issues. I'm a fan\nof Postgres, but by no means a religious supporter of it. 
I *am* a religious\nsupporter of transactions, subselects, and such.\n\nIf you'd like to find out more about transactions, you can check out Philip\nGreenspun's http://www.arsdigita.com/asj/aolserver/introduction-2.html which\nhas a paragraph about \"Why Oracle?\" which explains the reasons for choosing\nan ACID-compliant RDBMS.\n\nI'm also happy to write up a \"why transactions are good\" article.\n\n-Ben\n\non 7/5/00 12:34 PM, Tim Perdue at [email protected] wrote:\n\n> Thomas Lockhart wrote:\n>> You mentioned a speed difference in Postgres vs MySQL. The anecdotal\n>> reports are quite often in this direction, but we typically see\n>> comparable or better performance with Postgres when we actually look at\n>> the app or benchmark. Would it be possible to see the test case and to\n>> reproduce it here?\n> \n> Finally a sensible reply from one of the core guys.\n> \n> http://www.perdue.net/benchmarks.tar.gz\n> \n> To switch between postgres and mysql, copy postgres.php to database.php,\n> change the line of SQL with the LIMIT statement in forum.php.\n> \n> To move to mysql, copy mysql.php to database.php and change the line of\n> SQL in forum.php\n> \n> No bitching about the \"bad design\" of the forum using recursion to show\n> submessages. It can be done in memory in PHP, but I chose to hit the\n> database instead. This page is a good example of one that hits the\n> database hard. It's one of the worst on our site.\n> \n> At any rate, I wish someone would write an article that explains what\n> the benefits of transactions are, and how to use them effectively in a\n> web app, skipping the religious fervor surrounding pgsql vs. myql.\n> There's a lot of people visiting PHPBuilder who just want to expand\n> their knowledge of web development, and many of them would find that\n> interesting.\n> \n> Tim\n\n", "msg_date": "Wed, 05 Jul 2000 12:43:05 -0400", "msg_from": "Benjamin Adida <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. Postgres" }, { "msg_contents": "I have been looking at using the lztext type and I have some\nquestions/observations. Most of my experience comes from attempting to\ncompress text records in a different database (CTREE), but I think the\nexperience is transferable.\n\nMy typical table consists of variable length text records. The average\nlength record is around 1K bytes. I would like to compress my records\nto save space and improve I/O performance (smaller records means more\nrecords fit into the file system cache which means less I/O - or so the\ntheory goes). I am not too concerned about CPU as we are using a 4-way\nSun Enterprise class server. So compress seems like a good idea to me.\n\nMy experience with attempting to compress such a relatively small\n(around 1K) text string is that the compression ration is not very\ngood. This is because the string is not long enough for the LZ\ncompression algorithm to establish really good compression patterns and\nthe fact that the de-compression table has to be built into each\nrecord. What I have done in the past to get around these problems is\nthat I have \"taught\" the compression algorithm the patterns ahead of\ntime and stored the de-compression patterns in an external table. 
Using\nthis technique, I have achieved *much* better compression ratios.\n\nSo my questions/comments are:\n\n - What are the typical compression rations on relatively small (i.e.\naround 1K) strings seen with lztext?\n - Does anyone see a need/use for a generalized string compression\ntype that can be \"trained\" external to the individual records?\n - Am I crazy in even attempting to compress strings of this relative\nsize? My largest table correct contains about 2 million entries of\nroughly 1k size strings or about 2Gig of data. If I could compress this\nto about 33% of it's original size (not unreasonable with a trained LZ\ncompression), I would save a lot of disk space (not really important)\nand a lot of file system cache space (very important) and be able to fit\nthe entire table into memory (very, very important).\n\nThank you,\nJeff\n\n\n", "msg_date": "Wed, 05 Jul 2000 12:59:12 -0400", "msg_from": "Jeffery Collins <[email protected]>", "msg_from_op": false, "msg_subject": "lztext and compression ratios..." }, { "msg_contents": "The Hermit Hacker wrote:\n> On Wed, 5 Jul 2000, Tim Perdue wrote:\n>\n> > Benjamin Adida wrote:\n> >\n> > ...useless rant about all MySQL users being stupid inept programmers\n> > deleted....\n> >\n> >\n> > > PHP folks have a bias, too: PHP was built with MySQL in mind, it even ships\n> > > with MySQL drivers (and not Postgres). PHP's mediocre connection pooling\n> > > limits Postgres performance.\n> >\n> > Well the point of this article is obviously in relation to PHP. Yes,\n> > Rasmus Lerdorf himself uses MySQL and I'm sure Jan would say he's a\n> > \"wannabee\", not a \"real developer\".\n>\n> I would seriously doubt that Jan wuld consider Rasmus a 'wannabee'\n> .... Rasmus essentially built a Web optimized, HTML embedded language that\n> I imagine a *large* percentage of the sites ...\n\n NEVER!\n\n Once I've built a PG based middle tear with an apache module\n that could in cooperation be a complete virtual host inside\n of a DB. Including inbound Tcl scripting, DB-access, dynamic\n images and whatnot. Never finished that work until AOL-Server\n 3.0 appeared, at which point I considered my product\n \"trashwork\".\n\n Some of the sources I looked at (and learned alot from) was\n the PHP module. So I know what kind of programmer built that.\n\n Maybe someone of the PG community should spend some time\n building a better PHP coupling and contribute to that\n project. And there are more such projects out that need a\n helping hand from our side.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Wed, 5 Jul 2000 20:01:54 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. Postgres" }, { "msg_contents": "Jeffery Collins <[email protected]> writes:\n> My experience with attempting to compress such a relatively small\n> (around 1K) text string is that the compression ration is not very\n> good. This is because the string is not long enough for the LZ\n> compression algorithm to establish really good compression patterns and\n> the fact that the de-compression table has to be built into each\n> record. 
What I have done in the past to get around these problems is\n> that I have \"taught\" the compression algorithm the patterns ahead of\n> time and stored the de-compression patterns in an external table. Using\n> this technique, I have achieved *much* better compression ratios.\n\n(Puts on compression-guru hat...)\n\nThere is much in what you say. Perhaps we should consider keeping the\nlztext type around (currently it's slated for doom in 7.1, since the\nTOAST feature will make plain text do everything lztext does and more)\nand having it be different from text in that a training sample is\nsupplied when the column is defined. Not quite sure how that should\nlook or where to store the sample, but it could be a big win for tables\nhaving a large number of moderate-sized text entries.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 05 Jul 2000 14:12:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: lztext and compression ratios... " }, { "msg_contents": "I was sitting here reading about TOAST and 7.1 and another question came\ninto my mind.. I looked on the web page for some information about features\nto be added in 7.1 and an approximate 7.1 release date.. I found information\non the TOAST feature as well as Referential Integrity but nothing else.. If\nI missed it please just point me in the right direction, if not I'd like to\nas about the approximate time of 7.1 arriving in a stable form and if full\ntext indexing is going to be a part of 7.1..\n\nI ask because if 7.1 is going to include full text indexing natively and is\ngoing to arrive pretty soon, I might not continue on this project as I have\nbeen (the new full text index trigger)..\n\nThanks!!!\n\n-Mitch\n\n", "msg_date": "Wed, 5 Jul 2000 14:39:17 -0400", "msg_from": "\"Mitch Vincent\" <[email protected]>", "msg_from_op": false, "msg_subject": "PostgreSQL 7.1 " }, { "msg_contents": "\nfigure sometime in October before v7.1 is released ... i believe the last\ntalk talked about a beta starting sometime in September, but nothing is\never \"set in stone\" until we actually do it :)\n\n\nOn Wed, 5 Jul 2000, Mitch Vincent wrote:\n\n> I was sitting here reading about TOAST and 7.1 and another question came\n> into my mind.. I looked on the web page for some information about features\n> to be added in 7.1 and an approximate 7.1 release date.. I found information\n> on the TOAST feature as well as Referential Integrity but nothing else.. If\n> I missed it please just point me in the right direction, if not I'd like to\n> as about the approximate time of 7.1 arriving in a stable form and if full\n> text indexing is going to be a part of 7.1..\n> \n> I ask because if 7.1 is going to include full text indexing natively and is\n> going to arrive pretty soon, I might not continue on this project as I have\n> been (the new full text index trigger)..\n> \n> Thanks!!!\n> \n> -Mitch\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 5 Jul 2000 15:49:59 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 7.1 " }, { "msg_contents": "\"Mitch Vincent\" <[email protected]> writes:\n> I was sitting here reading about TOAST and 7.1 and another question came\n> into my mind.. I looked on the web page for some information about features\n> to be added in 7.1 and an approximate 7.1 release date.. 
I found information\n> on the TOAST feature as well as Referential Integrity but nothing else.. If\n> I missed it please just point me in the right direction, if not I'd like to\n> as about the approximate time of 7.1 arriving in a stable form and if full\n> text indexing is going to be a part of 7.1..\n\nAFAIK there are no plans on the table to do more with full text\nindexing; at least none of the core developers are working on it.\n(If someone else is and I've forgotten, my apologies.)\n\nCurrently the plans for 7.1 look like this:\n\n\t* WAL (assuming Vadim gets it done soon)\n\t* TOAST (pretty far along already)\n\t* new fmgr (fmgr itself done, still need to turn crank on\n\t\tconverting built-in functions)\n\t* better memory management, no more intraquery memory leaks\n\t (about half done)\n\t* some other stuff, but I think those are all the \"big features\"\n\t that anyone has committed to get done\n\t* whatever else gets done meanwhile\n\nWe were shooting for going beta around Aug 1 with release around Sep 1,\nbut don't hold us to that ;-).\n\nNotably missing from this list is OUTER JOIN and other things that\ndepend on querytree redesign (such as fixing all the restrictions on\nviews). We've agreed to put that off till 7.2 in hopes of keeping the\n7.1 development cycle reasonably short. Not sure what else will be\nin 7.2, but the querytree work will be.\n\n> I ask because if 7.1 is going to include full text indexing natively and is\n> going to arrive pretty soon, I might not continue on this project as I have\n> been (the new full text index trigger)..\n\nSeems like you should keep at it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 05 Jul 2000 16:02:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 7.1 " }, { "msg_contents": "Mitch Vincent wrote:\n> I was sitting here reading about TOAST and 7.1 and another question came\n> into my mind.. I looked on the web page for some information about features\n> to be added in 7.1 and an approximate 7.1 release date.. I found information\n> on the TOAST feature as well as Referential Integrity but nothing else.. If\n> I missed it please just point me in the right direction, if not I'd like to\n> as about the approximate time of 7.1 arriving in a stable form and if full\n> text indexing is going to be a part of 7.1..\n\n The project page you've seen was an idea I had several month\n ago. I had a big success with it since I got co-developers\n for FOREIGN KEY. It was the first time I something in an\n open source project with related developers, and the result\n was stunning - even for me.\n\n Unfortunately, none of the other \"inner-circle\" developers\n catched up that idea. Maybe all they're doing is not of the\n nature that they would gain anything from co-\n developers/volunteers.\n\n Not an answer to your real question, more like a kick in the\n A.. of my colleagues.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Wed, 5 Jul 2000 22:41:18 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 7.1" }, { "msg_contents": "Jeffery Collins wrote:\n> I have been looking at using the lztext type and I have some\n> questions/observations. 
Most of my experience comes from attempting to\n> compress text records in a different database (CTREE), but I think the\n> experience is transferable.\n\n First of all I welcome any suggestions/input to the\n compression code I've implemented. Seems you're experienced\n in this area so keep going and hammer down any of my points\n below.\n\n You should know that the \"lztext\" type will disappear soon\n and be replaced with the general \"all varlena types are\n potentially compressable\" approach of TOAST.\n\n> My typical table consists of variable length text records. The average\n> length record is around 1K bytes. I would like to compress my records\n> to save space and improve I/O performance (smaller records means more\n> records fit into the file system cache which means less I/O - or so the\n> theory goes). I am not too concerned about CPU as we are using a 4-way\n> Sun Enterprise class server. So compress seems like a good idea to me.\n>\n> My experience with attempting to compress such a relatively small\n> (around 1K) text string is that the compression ration is not very\n> good. This is because the string is not long enough for the LZ\n> compression algorithm to establish really good compression patterns and\n> the fact that the de-compression table has to be built into each\n> record. What I have done in the past to get around these problems is\n> that I have \"taught\" the compression algorithm the patterns ahead of\n> time and stored the de-compression patterns in an external table. Using\n> this technique, I have achieved *much* better compression ratios.\n\n The compression algorithm used in \"lztext\" (and so in TOAST\n in the future) doesn't have a de-compression table at all.\n It's based on Adisak Pochanayon's SLZ algorithm, using\n <literal_char> or <token>. A <token> just tells how far to\n go back in the OUTPUT-buffer and how many bytes to copy from\n OUTPUT to OUTPUT. Look at the code for details.\n\n My design rules for the compression inside of Postgres have\n been\n\n - beeing fast on decompression\n - beeing good for relatively small values\n - beeing fast on compression\n\n The first rule is met by the implementation itself. Don't\n underestimate this design rule! Usually you don't update as\n often as you query. And the implementation of TOAST requires\n a super fast decompression.\n\n The second rule is met by not needing any initial\n decompression table inside of the stored value.\n\n The third rule is controlled by the default strategy of the\n algorithm, (unfortunately) hardcoded into\n utils/adt/pg_lzcompress.c. It'll never try to compress items\n smaller than 256 bytes. It'll fallback to plain storage (for\n speed advantage while decompressing a value) if less than 20%\n of compression is gained. It'll stop match loookup if a\n backward match of 128 or more bytes is found.\n\n> So my questions/comments are:\n>\n> - What are the typical compression rations on relatively small (i.e.\n> around 1K) strings seen with lztext?\n\n Don't have that small items handy. But a table containing a\n file path and it's content. 
All files where HTML files.\n\n From - To | count(*) | avg(length) | avg(octet_length)\n ---------------+----------+-------------+------------------\n 1024 - 2047 | 14 | 1905 | 1470\n 2048 - 4095 | 67 | 3059 | 1412\n 4096 - 8191 | 45 | 5384 | 2412\n 8192 - | 25 | 17200 | 6323\n ---------------+----------+-------------+------------------\n all | 151 | 5986 | 2529\n\n> - Does anyone see a need/use for a generalized string compression\n> type that can be \"trained\" external to the individual records?\n\n Yes, of course. Maybe \"lztext\" can be a framework for you and\n we just tell the toaster \"never apply your lousy compression\n on that\" (it's prepared for).\n\n> - Am I crazy in even attempting to compress strings of this relative\n> size? My largest table correct contains about 2 million entries of\n> roughly 1k size strings or about 2Gig of data. If I could compress this\n> to about 33% of it's original size (not unreasonable with a trained LZ\n> compression), I would save a lot of disk space (not really important)\n> and a lot of file system cache space (very important) and be able to fit\n> the entire table into memory (very, very important).\n\n Noone is crazy attempting to improve something. It might turn\n out not to work well, or beeing brain damaged from the start.\n But someone who never tries will miss all his chances.\n\n Final note:\n\n One thing to keep in mind is that the LZ algorithm you're\n thinking of must be distributable under the terms of the BSD\n license. If it's copyrighted or patented by any third party,\n not agreeing to these terms, it's out of discussion and will\n never appear in the Postgres source tree. Especially the LZ\n algorithm used in GIF is one of these show-stoppers.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Thu, 6 Jul 2000 00:16:43 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: lztext and compression ratios..." }, { "msg_contents": "[email protected] (Jan Wieck) writes:\n> One thing to keep in mind is that the LZ algorithm you're\n> thinking of must be distributable under the terms of the BSD\n> license. If it's copyrighted or patented by any third party,\n> not agreeing to these terms, it's out of discussion and will\n> never appear in the Postgres source tree. Especially the LZ\n> algorithm used in GIF is one of these show-stoppers.\n\nAs long as you brought it up: how sure are you that the method you've\nused is not subject to any patents? The mere fact that you learned\nit from someone who didn't patent it does not guarantee anything ---\nsomeone else could have invented it independently and filed for a\npatent.\n\nIf you can show that this method uses no ideas not found in zlib,\nthen I'll feel reassured --- a good deal of legal research went into\nzlib to make sure it didn't fall foul of any patents, and zlib has\nnow been around long enough that it'd be tough for anyone to get a\nnew patent on one of its techniques. But if SLZ has any additional\nideas in it, then we could be in trouble. There are an awful lot of\ncompression patents.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 05 Jul 2000 18:40:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: lztext and compression ratios... 
" }, { "msg_contents": "Benjamin Adida wrote:\n\n> Some recursion? That is interesting. Do you mean multiple queries to the\n> database? I don't see any reason to have multiple queries to the database to\n> show nested messages in a forum. Using stored procedures to create sort keys\n> at insertion or selection time is the efficient way to do this. Ah, but\n> MySQL doesn't have stored procedures.\n\nCan you be more specific on how you would support arbitrary nesting and\ncorrect sorting of a threaded discussion in postgres? I've thought about\nthis problem but didn't come up with anything except to re-implement the\nold recursive \" retrieve* \" from the old postgres.\n", "msg_date": "Thu, 06 Jul 2000 09:47:31 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Article on MySQL vs. Postgres" }, { "msg_contents": "Jan,\n\nThank you for your comments on lztext. They were very informative. I hope\nyou (and the rest of the list) don't mind my ramblings on compression, but I\nthink there is some performance/space advantage if compression can integrated\ninto a table. I admit to not being a compression expert. All that I know\nhas been learn from reading some zlib source.\n\nThe compression that I have been working with (and am more familiar with) is\nHuffman compression. You are probably familiar with it and may have even\nevaluated it when you were looking at compression techniques. In my limited\nunderstanding of LZ compression, there are some advantages and disadvantages\nto Huffman compression.\n\nThe disadvantages seem to be:\n - In normal Huffman compression, the string to be compressed is examined\ntwice. Once to build the translation table and once to do the translation.\n - The translation table must be written with the string. This makes the\nresult larger and limits it's effectiveness with small strings.\n - I don't know for sure, but I wouldn't be surprised if uncompression is\nslower with Huffman than LZ compression.\n\nTo get around these disadvantages, I modified my Huffman support in the\nfollowing way:\n - I trained the translation table and stored the table separately. I was\nable to do this because I knew before hand the type of text that would be in\nmy columns.\n - I \"improved\" the compression by giving the translation table the\nability to look for well known substrings in addition to byte values. For\nexample the string \"THE\" in an English text might have it's own Huffman\ntranslation value instead of relying on the individual T, H and E\ntranslations. Again I was able to do this because I knew the type of text I\nwould be storing in the column.\n\nBecause the translation table is external to the string, it is no longer\nincluded in the string. This improves the compression ratio and helps the\nuncompression as the table does not need to be read and interpreted. This\napproach also allows for and takes advantage of cross-row commonalities.\nAfter doing all of this, I was able to get the average compression ratio to\nabout 3.5 (i.e. orig size / new size) on text columns of 1K or less.\n\nOf course the disadvantage to this is that if the dataset changes overtime,\nthe compression translations to not change with it and the compression ratio\nwill get worse. 
In the very worst case, where the dataset totally changes\n(say the language changed from English to Russian) the \"compressed\" string\ncould actually get larger than the original string because everything the\ncompression algorithm thought it knew was totally wrong.\n\nAs I said, I did this with a different database (CTREE). My company is now\nconverting from CTREE to postgresql and I would like to find some fairly\nelegant way to include this type of compression support. My CTREE\nimplementation was rather a hack, but now that we are moving to a better\ndatabase, I would like to have a better solution for compression.\n\nOriginally I latched onto the idea of using lztext or something like lztext,\nbut I agree it is better if the column type is something standard and the\ncompression is hidden by the backend. I guess my vision would be to be able\nto define either a column type or a column attribute that would allow me to\nindicate the type of compression to have and possibly the compression\nalgorithm and translation set to use. Actually the real vision is to have it\nall hidden and have the database learn about the column as rows are added and\nmodify the compression transalation to adapt to the changing column data.\n\nJeff\n\n\n\n", "msg_date": "Wed, 05 Jul 2000 20:07:54 -0400", "msg_from": "Jeffery Collins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: lztext and compression ratios..." }, { "msg_contents": "Tom Lane wrote:\n> [email protected] (Jan Wieck) writes:\n> > One thing to keep in mind is that the LZ algorithm you're\n> > thinking of must be distributable under the terms of the BSD\n> > license. If it's copyrighted or patented by any third party,\n> > not agreeing to these terms, it's out of discussion and will\n> > never appear in the Postgres source tree. Especially the LZ\n> > algorithm used in GIF is one of these show-stoppers.\n>\n> As long as you brought it up: how sure are you that the method you've\n> used is not subject to any patents? The mere fact that you learned\n> it from someone who didn't patent it does not guarantee anything ---\n> someone else could have invented it independently and filed for a\n> patent.\n\n Now that you ask for it: I'm not sure. Could be.\n\n> If you can show that this method uses no ideas not found in zlib,\n> then I'll feel reassured --- a good deal of legal research went into\n> zlib to make sure it didn't fall foul of any patents, and zlib has\n> now been around long enough that it'd be tough for anyone to get a\n> new patent on one of its techniques. But if SLZ has any additional\n> ideas in it, then we could be in trouble. There are an awful lot of\n> compression patents.\n\n To do so I don't know enough about the algorithms used in\n zlib. Is there someone out here who could verify that if I\n detailed enough describe what our compression code does?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Thu, 6 Jul 2000 10:47:24 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: lztext and compression ratios..." 
}, { "msg_contents": "[email protected] (Jan Wieck) writes:\n>> As long as you brought it up: how sure are you that the method you've\n>> used is not subject to any patents?\n\n> Now that you ask for it: I'm not sure. Could be.\n\n>> If you can show that this method uses no ideas not found in zlib,\n>> then I'll feel reassured\n\n> To do so I don't know enough about the algorithms used in\n> zlib. Is there someone out here who could verify that if I\n> detailed enough describe what our compression code does?\n\nAfter a quick look at the code, I don't think there is anything\nproblematic about the data representation or the decompression\nalgorithm. The compression algorithm is another story, and it's\nnot real well commented :-(. The important issues are how you\nsearch for matches in the past text and how you decide which match\nis the best one to use. Please update the code comments to describe\nthat, and I'll take another look.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Jul 2000 10:22:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: lztext and compression ratios... " }, { "msg_contents": "Tom Lane wrote:\n> [email protected] (Jan Wieck) writes:\n> >> As long as you brought it up: how sure are you that the method you've\n> >> used is not subject to any patents?\n>\n> > Now that you ask for it: I'm not sure. Could be.\n>\n> >> If you can show that this method uses no ideas not found in zlib,\n> >> then I'll feel reassured\n>\n> > To do so I don't know enough about the algorithms used in\n> > zlib. Is there someone out here who could verify that if I\n> > detailed enough describe what our compression code does?\n>\n> After a quick look at the code, I don't think there is anything\n> problematic about the data representation or the decompression\n> algorithm. The compression algorithm is another story, and it's\n> not real well commented :-(. The important issues are how you\n> search for matches in the past text and how you decide which match\n> is the best one to use. Please update the code comments to describe\n> that, and I'll take another look.\n\n Done. You'll find a new section in the top comments.\n\n While writing it I noticed that the algorithm is really\n expensive for big items. The history lookup table allocated\n is 8 times (on 32 bit architectures) the size of the input.\n So if you want to have 1MB compressed, it'll allocate 8MB for\n the history. It hit me when I was hunting a bug in the\n toaster earlier today. Doing an update to a toasted item of\n 5MB, resulting in a new value of 10MB, the backend blew up to\n 290MB of virtual memory - oh boy. I definitely need to make\n that smarter.\n\n When I wrote it I never thought about items that big. It was\n before we had the idea of TOAST.\n\n This all might open another discussion I'll start in a\n separate thread.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Thu, 6 Jul 2000 23:09:27 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [SQL] Re: lztext and compression ratios..." }, { "msg_contents": "> \n> figure sometime in October before v7.1 is released ... 
i believe the last\n> talk talked about a beta starting sometime in September, but nothing is\n> ever \"set in stone\" until we actually do it :)\n\nI agree. August is a very useful month because a lot of people are slow\nat work during that month, and that gives them time to work on\nPostgreSQL.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 Jul 2000 17:35:13 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 7.1" }, { "msg_contents": "[email protected] (Jan Wieck) writes:\n> Tom Lane wrote:\n>> After a quick look at the code, I don't think there is anything\n>> problematic about the data representation or the decompression\n>> algorithm. The compression algorithm is another story, and it's\n>> not real well commented :-(. The important issues are how you\n>> search for matches in the past text and how you decide which match\n>> is the best one to use. Please update the code comments to describe\n>> that, and I'll take another look.\n\n> Done. You'll find a new section in the top comments.\n\nI think we are probably OK. Anyone who wants to check the issue might\nwant to start with http://www.faqs.org/faqs/compression-faq/,\nparticularly part 1 item 8. The two patents I'm most concerned about\nare Waterworth's (4701745) and Stac's (5016009) which both cover basic\nvariants of LZ77 (for full text of these or any other US patent consult\nhttp://patent.womplex.ibm.com/). But it looks to me like this\nparticular variant can be argued not to fall under either patent.\nFor example, Waterworth's patent specifies using 3-byte hash values\nwhile you use 4-byte, and stupid as that is it's sufficient to get\nyou out from under. (A more technically valid reason is that he\ndoesn't use hash collision chains.) The Stac patent also specifies\na data structure considerably different from your hash chains.\n\nTo give you an idea of what's involved here, I attach an old message\nfrom Jean-loup Gailly, author of gzip and zlib, explaining why he\nbelieves gzip doesn't fall foul of these two patents. Not all of\nhis reasons apply to your method, but I think enough do. (BTW,\nGailly's a good guy --- I worked with him on the PNG project.\nI think I'll write to him and see if he's got time to review this\ncode for patent problems.)\n\n> While writing it I noticed that the algorithm is really\n> expensive for big items. The history lookup table allocated\n> is 8 times (on 32 bit architectures) the size of the input.\n> So if you want to have 1MB compressed, it'll allocate 8MB for\n> the history. It hit me when I was hunting a bug in the\n> toaster earlier today. Doing an update to a toasted item of\n> 5MB, resulting in a new value of 10MB, the backend blew up to\n> 290MB of virtual memory - oh boy. I definitely need to make\n> that smarter.\n\nYes, you need some smarter method for choosing the size of the\nhash table... a braindead but possibly sufficient answer is to\nhave some hard limit on the size... or you could just use a\nhard-wired constant size to begin with. 
I think it's usually\nconsidered wise to use a prime for the table size and reduce\nthe raw input bits modulo the prime.\n\n\t\t\tregards, tom lane\n\n\nArticle: 14604 of gnu.misc.discuss\nPath: chorus!octave.chorus.fr!jloup\nFrom: [email protected] (Jean-loup Gailly)\nNewsgroups: gnu.misc.discuss\nSubject: Re: LPF, NY Times, GNU and compression algorithms\nMessage-ID: <[email protected]>\nDate: 10 Mar 94 10:10:03 GMT\nReferences: <[email protected]> <[email protected]> <[email protected]>\nSender: [email protected]\nDistribution: gnu\nLines: 201\n\nMarc Auslander <[email protected]> writes:\n\n> Two Stac patents in the case are 4701745 and 5016009.\n>\n> From 4601745: [typo, meant: 4701745]\n>\n> the data processing means including circuit means operable to check\n> whether a sequence of successive bytes to be processed identical with\n> a sequence of bytes already processed, and including a hash generating\n> means responsive to the application of a predetermined number of bytes ...\n>\n> From 5016009\n>\n> in order to perform the hashing function, a data compression system\n> includes certain hash data structures including a history array\n> pointer ...\n>\n> also:\n>\n> ... is found ..., encoding said matching string ... a variable length\n> indicator ..., said predetermined strategy ensuring that a matching\n> string of two characters of said input data stream is compressed to\n> less than said two characters of said input data stream.\n>\n> So - on the face of it, why isn't gzip covered by these patents?\n\nLet's take each patent in turn. A clarification first: the Stac patent\n4,701,745 was not invented by Stac. It was initially owned by Ferranti\n(UK) and only recently bought by Stac. This was a very profitable\nacquisition (assuming it cost Stac far less than the $120 million they\nwon by using this patent against Microsoft).\n\n(a) 4,701,745\n\nThis algorithm is now known as LZRW1, because Ross Williams reinvented\nit independently later and posted it on comp.compression on April 22,\n1991. Exactly the same algorithm has also been patented by Gibson and\nGraybill (5,049,881). The patent office failed to recognize that\nit was the same algorithm.\n\nThis algorithm uses LZ77 with hashing but no collision chains and\noutputs unmatched bytes directly without further compression. gzip\nuses collisions chains of arbitrary length, and uses Huffman encoding\non the unmatched bytes:\n\n- Claim 1 of the patent is restricted to (emphasis added by me):\n\n output means operable to APPLY to a transfer medium each byte of data\n not forming part of such an identical sequence; and\n\n ENCODING means responsive to the identification of such a sequence\n to APPLY to the transfer medium an identification signal which\n identifies both the location in the input store of the previous\n occurrence of the sequence of bytes and the number of bytes\n contained in the sequence.\n\n The claim thus makes a clear distinction between \"encoding\" and\n \"applying to the transfer medium\". A system which compresses the\n unmatched bytes does not infringe this patent.\n\n- The description of the patent and claim 2 make clear that the check\n for identity of the sequences of bytes is to be made only once (no\n hash collision chains). Gzip performs an arbitrary number of such\n checks. 
The \"means\" enumerated in claim 1 specify what the hash\n table consists of, and this does not include any means for storing\n hash collision chains.\n\n- Claim 2 also requires that *all* bytes participating in the hash\n function should be compared:\n\n A system as claimed in claim 1 in which the circuit means also\n includes check means operable to check for identity between EACH\n of the said predetermined number of bytes in sequence and EACH of\n a similar sequence of bytes contained in the input store at a\n location defined by a pointer read out from the temporary store at\n said address\n\n [in plain English, this is the check for hash collision]\n\n and to check whether identity exists between succeeding bytes in\n each sequence of bytes, and a byte counter operable to count the\n number of identical bytes in each sequence.\n\n [this is the determination of the match length]\n\n Gzip never checks for equality of the third byte used in the hash\n function. The hash function is such that on a hash hit with equality\n of the first two bytes, the third byte necessarily matches.\n\n- In addition, gzip uses a \"lazy\" evaluation of string matches. Even\n when a match is found, gzip may encode (with Huffman coding) a single\n unmatched byte. This is done when gzip determines that it is more\n beneficial to parse the input string differently because a longer\n match follows. In the Waterworth patent, a string match is always\n encoded as a (length, pointer) pair.\n\nAll other claims of the patent are dependent on claim 1\n(\"a system as claimed in claim 1 in which ...\"). Since gzip\ndoes not infringe claim 1 it does not infringe the other claims.\nIn particular, claim 6 explicitly states that unmatched strings\nare not compressed:\n\n A system as claimed in claim 5 in which the data receiving means\n includes decoder means operable to separate UNCOMPRESSED bytes of\n data from identification signals.\n\nThe gzip decoder never receives uncompressed bytes since all input is\ncompressed with Huffman coding [both literals and (length, offset) pairs].\n\n\nThe only \"invention\" in the Waterworth patent is the absence of hash\ncollision chains. The original description of the LZ77 algorithm\nrequired searching for the true longest match, not just checking the\nlength of one particular match. Using hashing for string searching\nwas very well known at the time of the patent application (March 86).\nThe patent description specifies explicitly that \"Hash techniques are\nwell known and many differents forms of hash will be suitable\".\n\nThe --fast option of gzip was on purpose made slower than possible\nprecisely because of the existence of the Waterworth patent.\nSee in particular the following part of the gzip TODO file:\n\n Add a super-fast compression method, suitable for implementing\n file systems with transparent compression. One problem is that the\n best candidate (lzrw1) is patented twice (Waterworth 4,701,745\n and Gibson & Graybill 5,049,881).\n\n\n(b) 5,016,009\n\nThis is standard LZ77 with hashing, and collisions resolved using\nlinked lists. There are several important restrictions which let gzip\nescape from the patent:\n\n- the linked lists are implemented only with offsets. The end of\n a chain is detected by adding together all offsets, until the\n sum becomes greater than the size of the history buffer. gzip uses\n direct indices, and so detects the end of the chains differently.\n The exact wording of claim 1 of the patent is:\n\n ... 
said data compression system comprising ... an offset array means\n ... said method comprising the steps of ...\n\n calculating a difference between said history array pointer\n and said pointer obtained from said hash table means,\n storing said difference into said offset array means entry\n pointed to by said history array pointer, ...\n\n gzip never calculates such a difference and does not have any offset\n array.\n\n- unmatched strings are emitted as literal bytes without any\n compression. gzip uses Huffman encoding on the unmatched strings.\n This is the same argument as for the Waterworth patent.\n\n- unmatched strings are preceded by\n ... a \"raw\" data tag indicating that no matching data string was found\n\n gzip does not use such a tag because it uses a single Huffman table for\n both string literals and match lengths. It is only the prefix\n property of Huffman codes which allows the decoder to distinguish\n the two cases. So there is not a unique \"raw\" tag preceding all\n literals. This is not a minor point. It is one of the reasons\n giving gzip superior compression to that obtained with the Stac\n algorithm.\n\n- a string match is always encoded as a (length, pointer) pair.\n Gzip uses a \"lazy\" evaluation of string matches as described\n above for the Waterworth patent.\n\nAll other claims of the patent are dependent on claim 1 (\"the method\nof claim 1 wherein ...\"). Since gzip does not infringe claim 1 it does\nnot infringe the other claims. In any case, I have studied in detail\nall the 77 claims to make sure that gzip does not infringe.\n\nUnrelated note: this Stac patent is the only one where I found an\noriginal and non obvious idea. The hash table is refreshed using a\nincremental mechanism, so that the refresh overhead is distributed\namong all input bytes. This allows the real response time necessary in\ndisk compressors such as Stacker (the Stac product). gzip does not use\nthis idea, and refreshes the hash table in a straightforward manner\nevery 32K bytes.\n\n\nOne final comment: I estimate that I have now spent more time studying\ndata compression patents than actually implementing data compression\nalgorithms. I have a partial list of 318 data compression patents,\neven though for the moment I restrict myself mostly to lossless\nalgorithms (ignoring most image compression patents). Richard\nStallman has been *extremely* careful before accepting gzip as the GNU\ncompressor. I continue to study new patents regularly. I would of\ncourse very much prefer spending what's left of my spare time\nimproving the gzip compression algorithm instead of studying patents.\nSome improvements that I would have liked to put in gzip have not been\nincorporated because of patents.\n\nIn short, every possible precaution has been taken to make sure that\ngzip isn't covered by patents.\n\nJean-loup Gailly, author of gzip.\[email protected]\n", "msg_date": "Fri, 07 Jul 2000 01:30:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: lztext and compression ratios... " }, { "msg_contents": "Maybe you just want to use zlib. Let other guys hammer out the details.\n\nOn Thu, 6 Jul 2000, Jan Wieck wrote:\n\n> While writing it I noticed that the algorithm is really\n> expensive for big items. The history lookup table allocated\n> is 8 times (on 32 bit architectures) the size of the input.\n> So if you want to have 1MB compressed, it'll allocate 8MB for\n> the history. It hit me when I was hunting a bug in the\n> toaster earlier today. 
Doing an update to a toasted item of\n> 5MB, resulting in a new value of 10MB, the backend blew up to\n> 290MB of virtual memory - oh boy. I definitely need to make\n> that smarter.\n> \n> When I wrote it I never thought about items that big. It was\n> before we had the idea of TOAST.\n> \n> This all might open another discussion I'll start in a\n> separate thread.\n> \n> \n> Jan\n> \n> --\n> \n> #======================================================================#\n> # It's easier to get forgiveness for being wrong than for being right. #\n> # Let's break this rule - forgive me. #\n> #================================================== [email protected] #\n> \n> \n> \n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Fri, 7 Jul 2000 09:14:07 -0400 (EDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Re: [SQL] Re: [GENERAL] lztext and compression ratios..." }, { "msg_contents": "[email protected] writes:\n> Maybe you just want to use zlib. Let other guys hammer out the details.\n\nI've been wondering about that myself. Jan, have you actually\nbenchmarked decompression speed on this method vs. stock zlib?\nUnless the differential is huge we might want to just skip the\npatent worries.\n\nzlib has a BSD-style license, so we could include it in our\ndistribution to avoid worries about whether it's installed.\n\nI've written to Jean-loup for his advice. He's pretty busy these\ndays (CTO at some startup, I believe) but we'll see if he has time\nto comment.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Jul 2000 11:56:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [SQL] Re: [GENERAL] lztext and compression ratios... " }, { "msg_contents": "> [email protected] writes:\n> > Maybe you just want to use zlib. Let other guys hammer out the details.\n> \n> I've been wondering about that myself. Jan, have you actually\n> benchmarked decompression speed on this method vs. stock zlib?\n> Unless the differential is huge we might want to just skip the\n> patent worries.\n\nIf we found later that there was a patent problem, could we just issue a\nnew release with a new compression algorithm?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Jul 2000 12:02:30 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [SQL] Re: [GENERAL] lztext and compression ratios..." }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Unless the differential is huge we might want to just skip the\n>> patent worries.\n\n> If we found later that there was a patent problem, could we just issue a\n> new release with a new compression algorithm?\n\nYeah, we could, and it could presumably even be a fully compatible\ndot-release with no change to the on-disk representation. That\nrepresentation and consequently the decompression algorithm are safe\nenough, it's the details of the compressor's search for matching\npatterns that are a patent minefield.\n\nHowever, changing the code after-the-fact might not be enough to keep us\nfrom being sued :-(. 
I'd rather use something that's pretty well\nestablished as being in the clear...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Jul 2000 12:52:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [SQL] Re: [GENERAL] lztext and compression ratios... " }, { "msg_contents": "[email protected] wrote:\n> Maybe you just want to use zlib. Let other guys hammer out the details.\n>\n\n We cannot assume that zlib is available everywhere. Thus it\n must be determined during configure and where it isn't, TOAST\n can only move off values to make tuples fit into blocks.\n Since decompression of already in memory items is alot faster\n than doing an index scan on the TOAST table, I expect this to\n make installations without zlib damned slow.\n\n And how should binary distributions like RPM's handle it? I\n assume that this problem is already on it's way because of\n the integration of zlib into pg_dump. The only way I see is\n having different RPM's for each possible combination of\n available helper libs. Or is there another way to work\n around?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Fri, 7 Jul 2000 21:47:10 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: Re: [SQL] Re: [GENERAL] lztext and compression ratios..." }, { "msg_contents": "[email protected] (Jan Wieck) writes:\n> [email protected] wrote:\n>> Maybe you just want to use zlib. Let other guys hammer out the details.\n\n> We cannot assume that zlib is available everywhere.\n\nWe can if we include it in our distribution --- which we could; it's\npretty small and uses a BSD-style license. I can assure you the zlib\nguys would be happy with that. And it's certainly as portable as our\nown code. The real question is, is a custom compressor enough better\nthan zlib for our purposes to make it worth taking any patent risks?\n\nWe could run zlib at a low compression setting (-z1 to -z3 maybe)\nto make compression relatively fast, and since that also doesn't\ngenerate a custom Huffman tree, the overhead in the compressed data\nis minor even for short strings. And its memory footprint is\ncertainly no worse than Jan's method...\n\nThe real question is whether zlib decompression is markedly slower\nthan Jan's code. Certainly Jan's method is a lot simpler and *should*\nbe faster --- but on the other hand, zlib has had a heck of a lot\nof careful performance tuning put into it over the years. The speed\ndifference might not be as bad as all that.\n\nI think it's worth taking a look at the option.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Jul 2000 17:56:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [SQL] Re: [GENERAL] lztext and compression ratios... " }, { "msg_contents": "Jan Wieck writes:\n\n> > Maybe you just want to use zlib. 
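To make that option concrete, the one-shot zlib entry points keep it to a few calls. The sketch below is just an illustration, not proposed patch code; compress_datum and decompress_datum are invented names, error handling is minimal, and per the zlib documentation the output buffer is sized about 0.1% plus 12 bytes larger than the input.

    #include <stdlib.h>
    #include <zlib.h>

    /* illustration only: one-shot compression at a low level such as 3 */
    static int
    compress_datum(const Bytef *src, uLong srclen, Bytef **dst, uLongf *dstlen)
    {
        uLong       bound = srclen + srclen / 1000 + 12;
        Bytef      *buf = malloc(bound);

        if (buf == NULL)
            return Z_MEM_ERROR;

        *dstlen = bound;
        if (compress2(buf, dstlen, src, srclen, 3) != Z_OK)
        {
            free(buf);
            return Z_BUF_ERROR;
        }
        *dst = buf;
        return Z_OK;
    }

    static int
    decompress_datum(const Bytef *src, uLong srclen, Bytef *dst, uLong rawsize)
    {
        uLongf      dstlen = rawsize;       /* caller must know the raw size */

        return uncompress(dst, &dstlen, src, srclen);
    }

Note that uncompress() needs the raw size up front, which is why the compressed on-disk format has to record the uncompressed length whichever compressor ends up being used.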
Let other guys hammer out the details.\n\n> And how should binary distributions like RPM's handle it?\n\nThey got all their fancy dependency mechanisms for that (which never work\nwell, but anyway).\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sat, 8 Jul 2000 02:16:18 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [SQL] Re: [GENERAL] lztext and compression ratios..." }, { "msg_contents": "At 21:47 7/07/00 +0200, Jan Wieck wrote:\n>\n> And how should binary distributions like RPM's handle it? I\n> assume that this problem is already on it's way because of\n> the integration of zlib into pg_dump. The only way I see is\n> having different RPM's for each possible combination of\n> available helper libs. Or is there another way to work\n> around?\n\nThis remoinded me of a question I wanted to ask Unix people: other OSs I\nuse allow for dynamic linking, at runtime and in code, against shared\nlibraries, and I know Unix must allow this. The places where zlib is used\nare pretty limited, so it might be worth considering doing the 'HAVE_ZLIB'\nkinds of checks at runtime. Then one binary fits all...\n\nIs this hard or easy - at least on machines with a libz.so?\n\nIs it worth doing?\n\nI guess the alternative on rpm is to create both: pg_dump.zlib and\npg_dump.nozlib, and install the right one?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sat, 08 Jul 2000 11:16:36 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [SQL] Re: [GENERAL] lztext and compression\n ratios..." }, { "msg_contents": "> This remoinded me of a question I wanted to ask Unix people: other OSs I\n> use allow for dynamic linking, at runtime and in code, against shared\n> libraries, and I know Unix must allow this. The places where zlib is used\n> are pretty limited, so it might be worth considering doing the 'HAVE_ZLIB'\n> kinds of checks at runtime. Then one binary fits all...\n> \n> Is this hard or easy - at least on machines with a libz.so?\n> \n> Is it worth doing?\n> \n> I guess the alternative on rpm is to create both: pg_dump.zlib and\n> pg_dump.nozlib, and install the right one?\n\nWe do dynamic loading for functions. Not sure if we want to load zlib\ndynamically if we can help it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Jul 2000 21:19:50 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [SQL] Re: [GENERAL] lztext and compression ratios..." }, { "msg_contents": ">> This remoinded me of a question I wanted to ask Unix people: other OSs I\n>> use allow for dynamic linking, at runtime and in code, against shared\n>> libraries, and I know Unix must allow this. The places where zlib is used\n>> are pretty limited, so it might be worth considering doing the 'HAVE_ZLIB'\n>> kinds of checks at runtime. 
Then one binary fits all...\n\n> We do dynamic loading for functions. Not sure if we want to load zlib\n> dynamically if we can help it.\n\nNot bloody likely! Do you want to be in a position where you restart\nyour postmaster and suddenly chunks of your database are inaccessible?\nThat's what could happen to you if someone moves or deletes libz.so.\n\nI don't mind being dynamically linked to standard system shared libs;\nif libc.so is busted then whether Postgres launches is the least of\nyour worries. But dynamic dependence on an optional package that's\nprobably living in /usr/local strikes me as exceedingly risky.\n\nIf we do go with using zlib instead of homegrown code, I would recommend\nbuilding and statically linking to our own copy even if there is a copy\navailable on the system. This will prevent cross-version compatibility\nproblems as well as where'd-my-library-go syndrome. We cannot afford\nthose sorts of risks for something that could prevent us from reading\nour database.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Jul 2000 21:49:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [SQL] Re: [GENERAL] lztext and compression ratios..." }, { "msg_contents": "> If we do go with using zlib instead of homegrown code, I would recommend\n> building and statically linking to our own copy even if there is a copy\n> available on the system. This will prevent cross-version compatibility\n> problems as well as where'd-my-library-go syndrome. We cannot afford\n> those sorts of risks for something that could prevent us from reading\n> our database.\n\nI don't know much about zlib, but it is hard to imagine it would have\nthe flexibility and optimization tuning of Jan's code.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Jul 2000 21:54:54 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [SQL] Re: [GENERAL] lztext and compression ratios..." }, { "msg_contents": "At 21:49 7/07/00 -0400, Tom Lane wrote:\n>\n>Not bloody likely! Do you want to be in a position where you restart\n>your postmaster and suddenly chunks of your database are inaccessible?\n>That's what could happen to you if someone moves or deletes libz.so.\n\nMy question was limited to it's use in pg_dump; rather than basing\npg_dump's compression bahaviour on configure, base it on it's runtime\nenvironment. But my guess is you still would be inclined, rather strongly,\nagainst it.\n\n\n>If we do go with using zlib instead of homegrown code\n\nThis begs the obvious question: should pg_dump be using Jan's compression\ncode? In all cases/when zlib is not available?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sat, 08 Jul 2000 12:00:31 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [SQL] Re: [GENERAL] lztext and compression\n ratios..." 
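For what it's worth, the runtime check being described would look roughly like the sketch below: probe for libz with dlopen() and fall back to plain, uncompressed output when it is absent. The names are invented and the prototype is abbreviated (real code would take it from zlib.h); link with -ldl on most systems. The objection above is precisely that the backend should not gamble on such a library still being present after a restart, so a probe like this is really only plausible for a client program such as pg_dump.

    #include <dlfcn.h>

    /* illustration only: invented names, abbreviated compress2 prototype */
    typedef int (*compress2_fn) (unsigned char *dest, unsigned long *destlen,
                                 const unsigned char *src, unsigned long srclen,
                                 int level);

    static compress2_fn zlib_compress2 = NULL;

    static int
    probe_for_zlib(void)
    {
        void       *handle = dlopen("libz.so", RTLD_NOW);

        if (handle == NULL)
            return 0;                       /* no zlib: write plain output */

        zlib_compress2 = (compress2_fn) dlsym(handle, "compress2");
        if (zlib_compress2 == NULL)
        {
            dlclose(handle);
            return 0;
        }
        return 1;                           /* zlib usable at runtime */
    }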
}, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I don't know much about zlib, but it is hard to imagine it would have\n> the flexibility and optimization tuning of Jan's code.\n\nUh, I think you have that backwards ...\n\n\t\t\tregards, tom lane\n\nPS: I do know a few things about zlib.\n", "msg_date": "Fri, 07 Jul 2000 22:18:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [SQL] Re: [GENERAL] lztext and compression ratios... " }, { "msg_contents": "Philip Warner <[email protected]> writes:\n>> Not bloody likely! Do you want to be in a position where you restart\n>> your postmaster and suddenly chunks of your database are inaccessible?\n>> That's what could happen to you if someone moves or deletes libz.so.\n\n> My question was limited to it's use in pg_dump; rather than basing\n> pg_dump's compression bahaviour on configure, base it on it's runtime\n> environment. But my guess is you still would be inclined, rather strongly,\n> against it.\n\npg_dump is rather a different case. I would want to see a runtime\noption *not* to use compression, in case you know you are going to\nneed to restore on another system where zlib (or whatever) isn't\navailable. But making an uncompressed dump today doesn't invalidate\nthe compressed dump you made yesterday, nor vice versa.\n\n>> If we do go with using zlib instead of homegrown code\n\n> This begs the obvious question: should pg_dump be using Jan's compression\n> code? In all cases/when zlib is not available?\n\nGood point. If we include zlib in the distribution it would be pretty\nsilly for pg_dump not to use it. If we don't, then Peter's remarks\nabout not liking an environment-determined feature set are relevant.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Jul 2000 22:28:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [SQL] Re: [GENERAL] lztext and compression ratios... " }, { "msg_contents": "At 22:28 7/07/00 -0400, Tom Lane wrote:\n>\n>pg_dump is rather a different case. I would want to see a runtime\n>option *not* to use compression\n\nThat's there already; using -Z0 produces the same output as a mchine\nwithout zlib.\n\n>But making an uncompressed dump today doesn't invalidate\n>the compressed dump you made yesterday, nor vice versa.\n\nNot sure what you mean here.\n\n\n>Good point. If we include zlib in the distribution it would be pretty\n>silly for pg_dump not to use it. If we don't, then Peter's remarks\n>about not liking an environment-determined feature set are relevant.\n\nzlib seems to have been ported to zillions of architectures (must be a good\ndesign ;-}), so it's probably pretty safe to put in the source tree, and\nI'm sure any porting problems we encounter would be useful to the zlib\nmaintainers.\n\nIf we don't want to use zlib, then fefore using Jan's compression code in\npg_dump, I suppose I'd need to know that the output is compatible across\n32/64 bit architectures and is not sensitive to other issues like endian-ness.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 
008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sat, 08 Jul 2000 12:56:46 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [SQL] Re: [GENERAL] lztext and compression\n ratios..." }, { "msg_contents": "I get the following when building cvs:\n\ngcc -I../../include -O2 -g -Wall -Wmissing-prototypes\n-Wmissing-declarations -c pqformat.c -o pqformat.o\npqformat.c: In function `pq_endmessage':\npqformat.c:219: `errno' undeclared (first use in this function)\npqformat.c:219: (Each undeclared identifier is reported only once\npqformat.c:219: for each function it appears in.)\nmake: *** [pqformat.o] Error 1\n\nAny ideas?\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sat, 08 Jul 2000 14:44:39 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "'errno' undefined?" }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> I get the following when building cvs:\n\nIs that with the be-pqexec removal I just did? I pulled out a couple\nof header inclusions that I thought were no longer necessary --- and\nthey weren't on my platform. But maybe something in there was needed\non yours. Where does errno get declared in your system's headers?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 08 Jul 2000 00:57:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 'errno' undefined? " }, { "msg_contents": "At 00:57 8/07/00 -0400, Tom Lane wrote:\n>Philip Warner <[email protected]> writes:\n>> I get the following when building cvs:\n>\n>Is that with the be-pqexec removal I just did? I pulled out a couple\n>of header inclusions that I thought were no longer necessary --- and\n>they weren't on my platform. But maybe something in there was needed\n>on yours. Where does errno get declared in your system's headers?\n\nerrno.h\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sat, 08 Jul 2000 15:02:05 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 'errno' undefined? " }, { "msg_contents": "Philip Warner <[email protected]> writes:\n>> they weren't on my platform. But maybe something in there was needed\n>> on yours. Where does errno get declared in your system's headers?\n\n> errno.h\n\nHmm. The stuff I just removed doesn't look like it would cause an\ninclusion of errno.h. 
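The failure above usually just means the translation unit never saw <errno.h>: ANSI C declares errno there, and whether some other system header happens to drag it in varies from platform to platform, which is exactly how the same file can build on one box and break on another. A minimal illustration of the portable form (not the actual pqformat.c change):

    #include <errno.h>      /* declares errno; don't rely on other headers */
    #include <stdio.h>
    #include <string.h>

    static void
    report_failure(const char *what)
    {
        fprintf(stderr, "%s failed: %s\n", what, strerror(errno));
    }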
Which other system headers include errno.h\non your box?\n\nThe answer probably is that pqformat.h needs to include <errno.h>,\nbut I'm just curious to understand why it didn't before ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 08 Jul 2000 01:17:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 'errno' undefined? " }, { "msg_contents": "At 01:17 8/07/00 -0400, Tom Lane wrote:\n>Hmm. The stuff I just removed doesn't look like it would cause an\n>inclusion of errno.h. Which other system headers include errno.h\n>on your box?\n>\n>The answer probably is that pqformat.h needs to include <errno.h>,\n>but I'm just curious to understand why it didn't before ...\n>\n\npjw@Cerberus2:~/work/postgresql-cvs/pgsql > grep errno.h /usr/include/*.h\n/usr/include/argz.h:#include <errno.h>\n/usr/include/envz.h:#include <errno.h>\n/usr/include/errno.h: * ISO C Standard: 4.1.3 Errors <errno.h>\n/usr/include/errno.h:#endif /* errno.h */\n/usr/include/errnos.h:# include <linux/errno.h>\n/usr/include/pthread.h:#include <errno.h>\n\npjw@Cerberus2:~/work/postgresql-cvs/pgsql > grep errno.h /usr/include/*/*.h\n/usr/include/X11/Xlibint.h:#include <errno.h>\n/usr/include/X11/Xos.h:#include <errno.h>\n/usr/include/asm/unistd.h:/* user-visible error numbers are in the range -1\n- -122: see <asm-i386/errno.h> */\n/usr/include/linux/errno.h:#include <asm/errno.h>\n/usr/include/linux/isdn.h:#include <linux/errno.h>\n/usr/include/linux/mm.h:#include <linux/errno.h>\n/usr/include/linux/notifier.h:#include <linux/errno.h>\n/usr/include/linux/quota.h:#include <linux/errno.h>\n/usr/include/mysql/my_sys.h:#include <errno.h> /* errno is\na define */\n/usr/include/python1.5/Python.h:#include <errno.h>\n/usr/include/rpcsvc/bootparam.h:#include <sys/errno.h>\n/usr/include/rpcsvc/bootparam_prot.h:#include <sys/errno.h>\n/usr/include/sys/errno.h:#include <errno.h>\n\nno matches in local/include\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sat, 08 Jul 2000 15:21:58 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 'errno' undefined? " }, { "msg_contents": ">> The answer probably is that pqformat.h needs to include <errno.h>,\n>> but I'm just curious to understand why it didn't before ...\n\nWell, I noticed that <stdlib.h> pulls in <errno.h> on my box, so that\nexplains why I didn't see a problem. I still don't see the connection\nbetween the includes I killed and <errno.h> on your box, though...\ncurious. Anyway I committed the change to pqformat.c.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 08 Jul 2000 01:36:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 'errno' undefined? 
" }, { "msg_contents": "Peter Eisentraut wrote:\n >> And how should binary distributions like RPM's handle it?\n >\n >They got all their fancy dependency mechanisms for that (which never work\n >well, but anyway).\n\nDebian's work very well, because the Debian repository is centralised\nunder uniform quality control.\n\nAs a package maintainer, I can allow a package to depend on zlib -- in\nwhich case the dependency will be satisfied by any version -- or I can\nspecify a versioned dependency. I might have to provide the correct\nlibrary myself, if the zlib maintainer didn't want to maintain an old\nversion. At the moment Debian provides zlib1g; if the major version\nchanges, I would expect to see this package remain and a new zlib2\npackage be created.\n\nI would prefer a solution that did not _require_ me to maintain zlib\nas well, if the dependency can be satisfied from another package.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Ask, and it shall be given you; seek, and ye shall\n find; knock, and it shall be opened unto you.\" \n Matthew 7:7 \n\n\n", "msg_date": "Sat, 08 Jul 2000 07:43:19 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: [SQL] Re: [GENERAL] lztext and compression\n ratios..." }, { "msg_contents": "Philip Warner wrote:\n> At 21:49 7/07/00 -0400, Tom Lane wrote:\n> >\n> >Not bloody likely! Do you want to be in a position where you restart\n> >your postmaster and suddenly chunks of your database are inaccessible?\n> >That's what could happen to you if someone moves or deletes libz.so.\n>\n> My question was limited to it's use in pg_dump; rather than basing\n> pg_dump's compression bahaviour on configure, base it on it's runtime\n> environment. But my guess is you still would be inclined, rather strongly,\n> against it.\n>\n>\n> >If we do go with using zlib instead of homegrown code\n>\n> This begs the obvious question: should pg_dump be using Jan's compression\n> code? In all cases/when zlib is not available?\n\n It can't. My code is designed for in-memory attribute values.\n It doesn't support streaming - which I assume is required for\n pg_dump.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Sat, 8 Jul 2000 13:29:17 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: Re: [SQL] Re: [GENERAL] lztext and compression ratios...." }, { "msg_contents": "\n> Where does errno get declared in your system's headers?\n\nThe ANSI C standard says errno is declared in <errno.h>. Since ANSI C\nalso says that the standard header files are independent, it is poor\nform for your system to have included <errno.h> via <stdlib.h>.\n\n(Yeah, since you've already added <errno.h> this is a bit pedantic.)\n\nCiao,\n\nGiles\n\n", "msg_date": "Sun, 09 Jul 2000 08:38:07 +1000", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 'errno' undefined? 
" }, { "msg_contents": "Hi!\n\nHow can I count the number of rows affected by UPDATE or DELETE?\n\nPrzem\n\n", "msg_date": "Mon, 10 Jul 2000 09:51:28 +0200", "msg_from": "\"Przem Kowalczyk\" <[email protected]>", "msg_from_op": false, "msg_subject": "Counting affected rows" }, { "msg_contents": "\n> How can I count the number of rows affected by UPDATE or DELETE?\n\nThis depends on the interface you are using.\n\n* The psql SQL monitor always reports the number of affected rows\n (it's the 2nd number)\n\n* In DBI \"do\" returns the number of affected rows.\n\n* In JDBC \"executeUpdate\" does the same. (By the way, is there a way\n to get the number of rows from a executeQuery without counting them?).\n\nRegards,\nMit freundlichem Gru�,\n\tHolger Klawitter\n--\nHolger Klawitter +49 (0)251 484 0637\[email protected] http://www.klawitter.de/\n\n", "msg_date": "Mon, 10 Jul 2000 11:36:19 +0200", "msg_from": "Holger Klawitter <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Counting affected rows" }, { "msg_contents": "Tom Lane wrote:\n> [email protected] (Jan Wieck) writes:\n> > [email protected] wrote:\n> >> Maybe you just want to use zlib. Let other guys hammer out the details.\n>\n> > We cannot assume that zlib is available everywhere.\n>\n> We can if we include it in our distribution --- which we could; it's\n> pretty small and uses a BSD-style license. I can assure you the zlib\n> guys would be happy with that. And it's certainly as portable as our\n> own code. The real question is, is a custom compressor enough better\n> than zlib for our purposes to make it worth taking any patent risks?\n\n Good, we shouldn't worry about that anymore. If we want to\n use zlib, I vote for including it into our distribution and\n link static against the one shipped with our code.\n\n If we want to ...\n\n> We could run zlib at a low compression setting (-z1 to -z3 maybe)\n> to make compression relatively fast, and since that also doesn't\n> generate a custom Huffman tree, the overhead in the compressed data\n> is minor even for short strings. And its memory footprint is\n> certainly no worse than Jan's method...\n\n Definitely not, it's memory footprint is actually much\n smaller. Thus, I need to recreate the comparision below\n again after making the history table of fixed size with a\n wrap around mechanism to get a small footprint on multi-MB\n inputs too.\n\n> The real question is whether zlib decompression is markedly slower\n> than Jan's code. Certainly Jan's method is a lot simpler and *should*\n> be faster --- but on the other hand, zlib has had a heck of a lot\n> of careful performance tuning put into it over the years. The speed\n> difference might not be as bad as all that.\n>\n> I think it's worth taking a look at the option.\n\n Some quick numbers though:\n\n I simply stripped down pg_lzcompress.c to call compress2()\n and uncompress() instead of doing anything itself (what a\n nice, small source file :-). There might be some room for\n improvement using static zlib stream allocaions and\n deflateReset(), inflateReset() or the like. 
But I don't\n expect a significant difference from that.\n\n The test is a Tcl (pgtclsh) script doing the following:\n\n - Loading 151 HTML files into a table t1 of structure (path\n text, content lztext).\n\n - SELECT * FROM t1 and checking for correct result set.\n Each file is read again during the check.\n\n - UPDATE t1 SET content = upper(content).\n\n � SELECT * FROM t1 and checking for correct result set.\n Each file is read again, converted to upper case using\n Tcl's \"string toupper\" function for comparision.\n\n - SELECT path FROM t1. Loop over result set to UPDATE t1\n SET content = <value> WHERE path = <path>. All files are\n read again and converted to lower case before UPDATE.\n\n - SELECT * FROM t1 and check for correct result set. Files\n are again reread and lower case converted in Tcl for\n comparision.\n\n - Doing 20 SELECT * FROM t1 to have alot more decompress\n than compress cycles.\n\n Of course, there's an index on path. Here are the timings and\n sizes:\n\n Compressor | level | heap size | toastrel | toastidx | seconds\n | | | size | size |\n -----------+-------+-----------+----------+----------+--------\n PGLZ | - | 425,984 | 950,272 | 32,768 | 5.20\n zlib | 1 | 499,712 | 614,400 | 16,384 | 6.85\n zlib | 3 | 499,712 | 557,056 | 16,384 | 6.75\n zlib | 6 | 491,520 | 524,288 | 16,384 | 7.10\n zlib | 9 | 491,520 | 524,288 | 16,384 | 7.21\n\n Seconds is an average over multiple runs. Interesting is that\n compression level 3 seems to be faster than 1. I double\n checked it because it was so surprising.\n\n Also, increasing the number of SELECT * at the end increases\n the difference. So the PGLZ decompressor does a perfect job.\n\n And what must be taken into account too is that the script,\n running on the same processor and doing all the overhead\n (reading files, doing case conversions, quoting values with\n regsub and comparisions), along with the normal Postgres\n query execution (parsing, planning, optimizing, execution)\n occupies a substantial portion of the bare runtime. Still\n PGLZ is about 25% faster than the best zlib compression level\n I'm seeing, while zlib gains a much better compression ratio\n (factor 1.7 at least).\n\n As I see it:\n\n If replacing the compressor/decompressor can cause a runtime\n difference of 25% in such a scenario, the pure difference\n between the two methods must be alot.\n\n PGLZ is what I mentioned in the comments. Optimized for speed\n on the cost of compression ratio.\n\n What I suggest:\n\n Leave PGLZ in place as the default compressor for toastable\n types. Speed is what all benchmarks talk about - on disk\n storage size is seldom a minor note.\n\n Fix it's history allocation for huge values and have someone\n (PgSQL Inc.?) patenting the compression algorithm, so we're\n safe at some point in the future. If there's a patent problem\n in it, we are already running the risk to get sued, the PGLZ\n code got shipped with 7.0, used in lztext.\n\n We can discuss about enabling zlib as a per attribute\n configurable alternative further. But is the confusion this\n might cause worth it all?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== [email protected] #\n\n\n", "msg_date": "Mon, 10 Jul 2000 17:11:51 +0200 (MEST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: Re: [SQL] Re: [GENERAL] lztext and compression ratios..." }, { "msg_contents": "I've been collecting a few ideas for integrating the distribution making\nprocess into the build system. (I did take a look at the mk-release and\nmk-snapshot scripts on hub.org as well.) I have a trial implementation\nwhich works well, except that it doesn't build the documentation.\n\n1. Instead of the release_prep script we have a new target `distprep'.\nThose makefiles that want to build something for the distribution can\nsimply implement this target. This is nice because you have the distprep\nand the correspondng distclean target in the same file. To make a\ndistribution-ready tree, you do\n\n./configure\nmake distprep\nmake distclean\n\n2. In order to get rid of the CVS directories (and anything else you don't\nwant to ship, like _deadcode) and get the directory name straight,\n*without* clobbering your checked out tree, I create a directory\n\"postgresql-$(VERSION)\" and copy the files I want to distribute in there.\nThe conceptual procedure is\n\n# use your already configured tree\nmake distprep\nmkdir postgresql-$(VERSION)\ncopy all files from . into postgresql-$(VERSION), expect those you don't want\nmake -C postgresql-$(VERSION) distclean\n\nThis is done by `make distdir'.\n\n3. `make dist' depends on distdir, tars up the prepared tree, and leaves a\nfile postgresql-$(VERSION).tar.gz, which you can give to your friends.\n\n4. If you got an extra half hour you can run `make distcheck', which:\n\n* makes a distribution, using make dist\n* unpacks the distribution\n* runs configure\n* runs make -q distprep, to check whether the files you just prepared for\n distribution are really still up to date\n* builds and installs everything\n* runs make uninstall and checks whether it really uninstalled everything\n* makes another distribution from this test tree\n* checks whether this distribution is sufficiently similar to the previous\n one (i.e., same files, same size)\n\nThis approach should guard against the common tarball making problems:\nmisnamed top-level directory, missing files, funky timestamps, etc.\n\n\nIf there's any interest in this I can commit it so you can take a look. It\ndoesn't affect anything else (including release_prep).\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Mon, 10 Jul 2000 20:18:01 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Distribution making" }, { "msg_contents": "\nOn Mon, 10 Jul 2000, Peter Eisentraut wrote:\n\n> 2. In order to get rid of the CVS directories (and anything else you don't\n> want to ship, like _deadcode) and get the directory name straight,\n\n Need we still \"_deadcode\", if we have CVS? BTW. --- who compile backend\nwith \"#define NOT_USED\"?\n\n> 3. `make dist' depends on distdir, tars up the prepared tree, and leaves a\n> file postgresql-$(VERSION).tar.gz, which you can give to your friends.\n\n And what \".orig.tar.gz\"? We Debian's user love it :-) (and IMHO it is \nreally not bad idea.)\n\n\t \t\t\t\tKarel\n\nPS. 
not flame please...\n\n", "msg_date": "Mon, 10 Jul 2000 20:33:37 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Distribution making" }, { "msg_contents": "> Compressor | level | heap size | toastrel | toastidx | seconds\n> | | | size | size |\n> -----------+-------+-----------+----------+----------+--------\n> PGLZ | - | 425,984 | 950,272 | 32,768 | 5.20\n> zlib | 1 | 499,712 | 614,400 | 16,384 | 6.85\n> zlib | 3 | 499,712 | 557,056 | 16,384 | 6.75\n> zlib | 6 | 491,520 | 524,288 | 16,384 | 7.10\n> zlib | 9 | 491,520 | 524,288 | 16,384 | 7.21\n\nConsider that the 25% slowness gets us a 35% disk reduction, and that\ntranslates to fewer buffer blocks and disk accesses. Seems there is a\nclear tradeoff there.\n\n> If replacing the compressor/decompressor can cause a runtime\n> difference of 25% in such a scenario, the pure difference\n> between the two methods must be alot.\n> \n> Leave PGLZ in place as the default compressor for toastable\n> types. Speed is what all benchmarks talk about - on disk\n> storage size is seldom a minor note.\n\nTrue, disk storage is not the issue, but disk access are an issue.\n\n> We can discuss about enabling zlib as a per attribute\n> configurable alternative further. But is the confusion this\n> might cause worth it all?\n\nI think we have to choose one proposal.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 Jul 2000 17:32:35 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [SQL] Re: [GENERAL] lztext and compression ratios..." }, { "msg_contents": "Sounds pretty good to me... but Marc's the guy who would normally be\nusing it, so it's his wishes you gotta cater to...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Jul 2000 18:26:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Distribution making " }, { "msg_contents": "Karel Zak writes:\n\n> > 3. `make dist' depends on distdir, tars up the prepared tree, and leaves a\n> > file postgresql-$(VERSION).tar.gz, which you can give to your friends.\n> \n> And what \".orig.tar.gz\"? We Debian's user love it :-) (and IMHO it is \n> really not bad idea.)\n\nWhat's that?\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 11 Jul 2000 00:26:47 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Distribution making" }, { "msg_contents": "\nSounds great to me ... what we have now is a major kludge ...\n\nBasically, what I'd have to do for a release is CVSUP the latest code on\nthat branch, do a 'make dist' and that's it? \n\nIs this something that can be 'extended' to do the RPM also? :)\n\nOn Mon, 10 Jul 2000, Peter Eisentraut wrote:\n\n> I've been collecting a few ideas for integrating the distribution making\n> process into the build system. (I did take a look at the mk-release and\n> mk-snapshot scripts on hub.org as well.) I have a trial implementation\n> which works well, except that it doesn't build the documentation.\n> \n> 1. Instead of the release_prep script we have a new target `distprep'.\n> Those makefiles that want to build something for the distribution can\n> simply implement this target. 
This is nice because you have the distprep\n> and the correspondng distclean target in the same file. To make a\n> distribution-ready tree, you do\n> \n> ./configure\n> make distprep\n> make distclean\n> \n> 2. In order to get rid of the CVS directories (and anything else you don't\n> want to ship, like _deadcode) and get the directory name straight,\n> *without* clobbering your checked out tree, I create a directory\n> \"postgresql-$(VERSION)\" and copy the files I want to distribute in there.\n> The conceptual procedure is\n> \n> # use your already configured tree\n> make distprep\n> mkdir postgresql-$(VERSION)\n> copy all files from . into postgresql-$(VERSION), expect those you don't want\n> make -C postgresql-$(VERSION) distclean\n> \n> This is done by `make distdir'.\n> \n> 3. `make dist' depends on distdir, tars up the prepared tree, and leaves a\n> file postgresql-$(VERSION).tar.gz, which you can give to your friends.\n> \n> 4. If you got an extra half hour you can run `make distcheck', which:\n> \n> * makes a distribution, using make dist\n> * unpacks the distribution\n> * runs configure\n> * runs make -q distprep, to check whether the files you just prepared for\n> distribution are really still up to date\n> * builds and installs everything\n> * runs make uninstall and checks whether it really uninstalled everything\n> * makes another distribution from this test tree\n> * checks whether this distribution is sufficiently similar to the previous\n> one (i.e., same files, same size)\n> \n> This approach should guard against the common tarball making problems:\n> misnamed top-level directory, missing files, funky timestamps, etc.\n> \n> \n> If there's any interest in this I can commit it so you can take a look. It\n> doesn't affect anything else (including release_prep).\n> \n> -- \n> Peter Eisentraut Sernanders v�g 10:115\n> [email protected] 75262 Uppsala\n> http://yi.org/peter-e/ Sweden\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 10 Jul 2000 20:00:13 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Distribution making" }, { "msg_contents": "The Hermit Hacker wrote:\n> Is this something that can be 'extended' to do the RPM also? :)\n\nYes. Do you want that to happen? It won't be too hard -- the\nrequirements would be simply having an 'RPM' directory in the tarball\nthat would contain the spec file and the required patches and ancilliary\nprograms. Then, you simply issue (on a RedHat system :-)) the\nappropriate 'rpm -tb' (or 'rpm -ta') command. The binary RPM's will be\nbuilt to the usual place, and all is well with the world.\n\nAlthough the spec file might need to be in the tarball's top-level\ndirectory to work -- I'll have to investigate this.\n\nSo, the RPM-building would go like:\ndownload postgresql-$version.tar.gz to %{_topdir}/SOURCES\n(in %{_topdir}/SOURCES, execute 'rpm -tb' (or -ta).\nPick up your RPM's in %{_topdir}/RPMS/%{_arch}\nInstall them with rpm -Uvh.\n\nNo sweat. 
See, the RPM building process is already highly automated --\nonce the patches are made, that is.\n\nAlthough, I must admit -- it would be very nice to not have those\npatches.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 10 Jul 2000 20:09:43 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Distribution making" }, { "msg_contents": "On Mon, 10 Jul 2000, Lamar Owen wrote:\n\n> The Hermit Hacker wrote:\n> > Is this something that can be 'extended' to do the RPM also? :)\n> \n> Yes. Do you want that to happen? It won't be too hard -- the\n> requirements would be simply having an 'RPM' directory in the tarball\n> that would contain the spec file and the required patches and ancilliary\n> programs. Then, you simply issue (on a RedHat system :-)) the\n> appropriate 'rpm -tb' (or 'rpm -ta') command. The binary RPM's will be\n> built to the usual place, and all is well with the world.\n> \n> Although the spec file might need to be in the tarball's top-level\n> directory to work -- I'll have to investigate this.\n> \n> So, the RPM-building would go like:\n> download postgresql-$version.tar.gz to %{_topdir}/SOURCES\n> (in %{_topdir}/SOURCES, execute 'rpm -tb' (or -ta).\n> Pick up your RPM's in %{_topdir}/RPMS/%{_arch}\n> Install them with rpm -Uvh.\n> \n> No sweat. See, the RPM building process is already highly automated --\n> once the patches are made, that is.\n> \n> Although, I must admit -- it would be very nice to not have those\n> patches.\n\nIf this can be worked in along side/on top of what Peter is doing, I think\nthat would be great ...\n\n\n", "msg_date": "Mon, 10 Jul 2000 23:30:14 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Distribution making" }, { "msg_contents": "\nOn Tue, 11 Jul 2000, Peter Eisentraut wrote:\n\n> Karel Zak writes:\n> \n> > > 3. `make dist' depends on distdir, tars up the prepared tree, and leaves a\n> > > file postgresql-$(VERSION).tar.gz, which you can give to your friends.\n> > \n> > And what \".orig.tar.gz\"? We Debian's user love it :-) (and IMHO it is \n> > really not bad idea.)\n> \n> What's that?\n\n This is the original source package. But 'tar.gz' is some common archive\nfile. See any debian's ftp....\n\n\t \t\t\t\t\tKarel\n\n", "msg_date": "Tue, 11 Jul 2000 10:10:12 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Distribution making" }, { "msg_contents": "On Mon, 10 Jul 2000, The Hermit Hacker wrote:\n\n> Is this something that can be 'extended' to do the RPM also? :)\n\n Into standard *source* tree? I was long time glad of PG's devevelopment,\nthat create dateless and OS independent source.\n\n Well, we will have support for RPM, do you want support for Debian too?\nIf yes, do you accept /Debian directory in top source tree? \nDo you accept different directory philosophy for several dists.?\n\n...etc.\n\n IMHO to *source* tree belong to matter for binary making only. A\ndistribution must be out of source.\n\n Karel\n\nPS. is it this equation right: \"OS = Linux = RH\"? I hope that not.\n\n", "msg_date": "Tue, 11 Jul 2000 11:05:56 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Distribution making" }, { "msg_contents": "Peter Eisentraut wrote:\n >Karel Zak writes:\n >\n >> > 3. 
`make dist' depends on distdir, tars up the prepared tree, and leaves\n > a\n >> > file postgresql-$(VERSION).tar.gz, which you can give to your friends.\n >> \n >> And what \".orig.tar.gz\"? We Debian's user love it :-) (and IMHO it is \n >> really not bad idea.)\n >\n >What's that?\n\nA Debian source package consists of the upstream source (named\npackage.orig.tar.gz) together with a diff file and a control file. These\nshould be all that are needed to rebuild the binary package from the source.\n\nSome upstream developers make it hard for us, though. :-(\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"I waited patiently for the LORD; and he inclined unto \n me, and heard my cry. He brought me up also out of an \n horrible pit, out of the miry clay, and set my feet \n upon a rock, and established my goings. And he hath \n put a new song in my mouth, even praise unto our God.\n Many shall see it, and fear, and shall trust in the \n LORD.\" Psalms 40:1-3 \n\n\n", "msg_date": "Tue, 11 Jul 2000 11:30:04 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Distribution making " }, { "msg_contents": "The Hermit Hacker wrote:\n> On Mon, 10 Jul 2000, Lamar Owen wrote:\n> > The Hermit Hacker wrote:\n> > > Is this something that can be 'extended' to do the RPM also? :)\n\n> > Yes. Do you want that to happen? It won't be too hard -- the\n\n> If this can be worked in along side/on top of what Peter is doing, I think\n> that would be great ...\n\nWell, it can be done. However, the RPM' have historically had a tighter\ncycle than the official release -- We have 7.0.2-1 and -2 RPM's (soon to\nbe up to -6 or -7) a problems are found in the packaging. Of course, as\nI try to synchronize things, maybe that cycle time can go away. :-)\n\nNow, I know that, in the past, feature patches were released on\npostgresql.org (as well as bugfix patches) that were inbetween releases\n-- that could be done for the RPM stuff easily enough. But, I'd want\nsome comments from RPM distribution users before going that route.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 11 Jul 2000 11:10:01 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Distribution making" }, { "msg_contents": "Karel Zak wrote:\n> On Mon, 10 Jul 2000, The Hermit Hacker wrote:\n> > Is this something that can be 'extended' to do the RPM also? :)\n \n> Into standard *source* tree? I was long time glad of PG's devevelopment,\n> that create dateless and OS independent source.\n \n> Well, we will have support for RPM, do you want support for Debian too?\n> If yes, do you accept /Debian directory in top source tree?\n> Do you accept different directory philosophy for several dists.?\n\nI don't mind one bit to have multiple distribution support in the\ntarball. \n \n> ...etc.\n \n> IMHO to *source* tree belong to matter for binary making only. 
A\n> distribution must be out of source.\n\nThere are more files in the RPM set, for instance, than just the\nbinaries -- there is the spec file, which controls the building of the\nRPM's; there is a patch file (to patch around some madness in the source\nthat breaks the RPM package in some way); there are a couple of scripts\n(startup and upgrade); there is a man page for the upgrade script; as\nwell as other things. Now, getting the *source* of the RPM distribution\npackaging into the tarball might be OK -- I'm certainly not advocating\npackaging binaries in the tarball!\n\nRPM is used on many more systems than just RedHat Linux -- and I am\nactively trying to get the *source* RPM to be buildable on as many\nRPM-based systems as possible -- including being buildable on a Solaris\nsystem that might have RPM installed, or a Tru64 system with RPM\ninstalled (they DO exist). Unfortunately there are a number of\nRedHat-isms in the current RPMset that are having to be worked around --\nalthough, this situation promises to improve in the near future, if I\nhave anything to do with it.\n\nTherefore, a separate source RPM would not need to be distributed -- a\nperson can just download the tarball, execute a single command, and have\nproperly built RPM's for their system ready to install. Can't get much\neasier than that!\n\n> PS. is it this equation right: \"OS = Linux = RH\"? I hope that not.\n\nNo, and neither is RPM == \"RedHat Linux Only.\"\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 11 Jul 2000 11:58:25 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Distribution making" }, { "msg_contents": "\nOn Tue, 11 Jul 2000, Lamar Owen wrote:\n\n> > IMHO to *source* tree belong to matter for binary making only. A\n> > distribution must be out of source.\n> \n> There are more files in the RPM set, for instance, than just the\n> binaries -- there is the spec file, which controls the building of the\n> RPM's; there is a patch file (to patch around some madness in the source\n> that breaks the RPM package in some way); there are a couple of scripts\n> (startup and upgrade); there is a man page for the upgrade script; as\n> well as other things. Now, getting the *source* of the RPM distribution\n> packaging into the tarball might be OK -- I'm certainly not advocating\n> packaging binaries in the tarball!\n\n I good known how act RH packaging. And I probably understant you.\n\n> Therefore, a separate source RPM would not need to be distributed -- a\n> person can just download the tarball, execute a single command, and have\n> properly built RPM's for their system ready to install. Can't get much\n> easier than that!\n\nYes, I know. But it expect that in the *common-original-source* must be\n.spec file. Or not? \n\n I'm not enemy of RH, I only not sure if is good \"foul\" original source.\n\n\t\t\t\t\tKarel \n\n\n", "msg_date": "Tue, 11 Jul 2000 18:22:30 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Distribution making" }, { "msg_contents": "The Hermit Hacker writes:\n\n> Basically, what I'd have to do for a release is CVSUP the latest code on\n> that branch, do a 'make dist' and that's it? \n\nYes. You'll just have to copy the documentation files in there first. Of\ncourse eventually that should work better as well.\n\n\n> Is this something that can be 'extended' to do the RPM also? 
:)\n\nI'd have to look at how the RPMs are made, but in theory why not.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Wed, 12 Jul 2000 02:23:48 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Distribution making" }, { "msg_contents": " From: Karel Zak <[email protected]>\n Date: Tue, 11 Jul 2000 18:22:30 +0200 (CEST)\n\nHi Karel,\n\n > Yes, I know. But it expect that in the *common-original-source* must\n > be .spec file. Or not?\n\nnot, of course. But if it is there, you can easily built RPM via rpm -ta\nsource*tar.gz. Only Source RPMS (aka SRPM) must have .spec file built in.\n\n > I'm not enemy of RH, I only not sure if is good \"foul\" original\n > source.\n\nThe same for me. I do hate packagers who just rename my package.tar.gz to\npackage.orig.tar.gz and built tens of patches into one big .diff.gz which\nis crazy to maintain :-)) No flamewars please.\n\nBTW - Searching of mailing-lists is not working on postgresql.org. Try this\nvery long link :-))\n\nhttp://www.postgresql.org/mhonarc/pgsql-hackers/search.cgi?ps=10&ul=http%3A%2F%2Fwww.postgresql.org%2Fmhonarc%2Fpgsql-hackers%2F%25&q=CREATE+SNAPSHOT&ps=10&o=0&m=all\n-- \nPavel Janďż˝k ml.\[email protected]\n", "msg_date": "Wed, 12 Jul 2000 12:42:47 +0200", "msg_from": "[email protected] (Pavel =?iso-8859-2?q?Jan=EDk?= ml.)", "msg_from_op": false, "msg_subject": "Re: Distribution making" }, { "msg_contents": "[email protected] (Jan Wieck) writes:\n> Some quick numbers though:\n> I simply stripped down pg_lzcompress.c to call compress2()\n> and uncompress() instead of doing anything itself (what a\n> nice, small source file :-).\n\nI went at it in a different way: pulled out pg_lzcompress into a\nstandalone source program that could also call zlib. These numbers\nrepresent the pure compression or decompression time for memory-to-\nmemory processing, no other overhead at all. 
Each run was iterated\n1000 times to make it long enough to time accurately (so you can\nread the times as \"milliseconds per operation\", though they're\nreally seconds).\n\nA small text file of about 1K:\n\nLZ compressed 990 bytes to 717 bytes in 0.35 u 0.00 sys sec\nLZ decompressed 717 bytes to 990 bytes in 0.04 u 0.00 sys sec\nzlib(3) compressed 990 bytes to 527 bytes in 0.73 u 0.00 sys sec\nzlib uncompressed 527 bytes to 990 bytes in 0.21 u 0.00 sys sec\nzlib(6) compressed 990 bytes to 513 bytes in 0.86 u 0.00 sys sec\nzlib uncompressed 513 bytes to 990 bytes in 0.21 u 0.00 sys sec\nzlib(9) compressed 990 bytes to 513 bytes in 0.86 u 0.00 sys sec\nzlib uncompressed 513 bytes to 990 bytes in 0.20 u 0.00 sys sec\n\nA larger text file, a bit under 100K:\n\nLZ compressed 92343 bytes to 54397 bytes in 49.00 u 0.05 sys sec\nLZ decompressed 54397 bytes to 92343 bytes in 3.74 u 0.00 sys sec\nzlib(3) compressed 92343 bytes to 40524 bytes in 38.48 u 0.02 sys sec\nzlib uncompressed 40524 bytes to 92343 bytes in 8.58 u 0.00 sys sec\nzlib(6) compressed 92343 bytes to 38002 bytes in 68.04 u 0.03 sys sec\nzlib uncompressed 38002 bytes to 92343 bytes in 8.13 u 0.00 sys sec\nzlib(9) compressed 92343 bytes to 37961 bytes in 71.99 u 0.03 sys sec\nzlib uncompressed 37961 bytes to 92343 bytes in 8.14 u 0.00 sys sec\n\nIt looks like the ultimate speed difference is roughly 2:1 on\ndecompression and less than that on compression, though zlib suffers\nfrom greater setup overhead that makes it look worse on short strings.\n\nHere is a much different example, a small JPEG image:\n\nLZ compressed 5756 bytes to 5764 bytes in 1.48 u 0.00 sys sec\nLZ decompressed 5764 bytes to 5756 bytes in 0.01 u 0.00 sys sec\nzlib(3) compressed 5756 bytes to 5629 bytes in 3.18 u 0.00 sys sec\nzlib uncompressed 5629 bytes to 5756 bytes in 0.75 u 0.00 sys sec\nzlib(6) compressed 5756 bytes to 5625 bytes in 3.78 u 0.00 sys sec\nzlib uncompressed 5625 bytes to 5756 bytes in 0.75 u 0.00 sys sec\nzlib(9) compressed 5756 bytes to 5625 bytes in 3.78 u 0.00 sys sec\nzlib uncompressed 5625 bytes to 5756 bytes in 0.75 u 0.00 sys sec\n\nThe PG-LZ code realizes it cannot compress this file and falls back to\nstraight bytewise copy, yielding very fast \"decompress\". zlib manages\nto eke out a few bytes' savings, so it goes ahead and compresses\ninstead of just storing literally, resulting in a big loss on\ndecompression speed for just a fractional space savings. If we use\nzlib, we'd probably be well advised to accept a compressed result only\nif it's at least, say, 25% smaller than uncompressed to avoid this\nlow-return-on-investment situation.\n\n> There might be some room for\n> improvement using static zlib stream allocaions and\n> deflateReset(), inflateReset() or the like. But I don't\n> expect a significant difference from that.\n\nIt turns out you can get a noticeable win from overriding zlib's\ndefault memory allocator, which calls calloc() even though it has\nno need for pre-zeroed storage. Just making it use malloc() instead\nsaves about 25% of the runtime on 1K-sized strings (the above numbers\ninclude this improvement). We'd probably want to make it use palloc\ninstead of malloc anyway, so this won't cost us any extra work. I\nhave not pushed on it to see what else might be gained by tweaking.\n\n> What I suggest:\n\n> Leave PGLZ in place as the default compressor for toastable\n> types. 
Speed is what all benchmarks talk about - on disk\n> storage size is seldom a minor note.\n\nTrue, but the other side of the coin is that better compression and\nsmaller disk space will mean less I/O, better use of shared-memory\ndisk buffers, etc. So to some extent, better compression helps pay\nfor itself.\n\n> Fix it's history allocation for huge values and have someone\n> (PgSQL Inc.?) patenting the compression algorithm, so we're\n> safe at some point in the future.\n\nThat would be a really *bad* idea. What will people say if we say\n\"Postgres contains patented algorithms, but we'll let you use them\nfor free\" ? They'll say \"no thanks, I remember Unisys' repeatedly\nbroken promises about the GIF patent\" and stay away in droves.\nThere is a *lot* of bad blood in the air about compression patents\nof any sort. We mustn't risk tainting Postgres' reputation with\nthat mess.\n\n(In any case, one would hope you couldn't get a patent on this\nmethod, though I suppose it never pays to overestimate the competence\nof the USPTO...)\n\n> If there's a patent problem\n> in it, we are already running the risk to get sued, the PGLZ\n> code got shipped with 7.0, used in lztext.\n\nBut it hasn't been documented or advertised. If we take it out\nagain in 7.1, I think our exposure to potential lawsuits from it is\nnegligible. Not that I think there is any big risk there anyway,\nbut we ought to consider the possibility.\n\nMy feeling is that going with zlib is probably the right choice.\nThe technical case for using a homebrew compressor instead isn't\nvery compelling, and the advantages of using a standardized,\nknown-patent-free library are not to be ignored.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Jul 2000 03:07:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: lztext and compression ratios... " }, { "msg_contents": "Tom Lane wrote:\n> \n> [email protected] (Jan Wieck) writes:\n> > Some quick numbers though:\n> > I simply stripped down pg_lzcompress.c to call compress2()\n> > and uncompress() instead of doing anything itself (what a\n> > nice, small source file :-).\n> \n> I went at it in a different way: pulled out pg_lzcompress into a\n> standalone source program that could also call zlib. These numbers\n> represent the pure compression or decompression time for memory-to-\n> memory processing, no other overhead at all. Each run was iterated\n> 1000 times to make it long enough to time accurately (so you can\n> read the times as \"milliseconds per operation\", though they're\n> really seconds).\n\nWe could just make this part extensible as well, like the rest of \npostgres, so we would have directory tree like \n\n/compressors\n /nullcompressor\n /lzcompress\n /zlib\n /lzo\n /bzip2\n /my_new_supercompressor\n /classic_huffman_for_uppercase_american_english\n\nand select the desired compressor at compilt time, or even better, on \nfield by field basis at runtime, so that field that stores mainly \ntar.gz-s at compression level 9 will use nullcompressor, and others \nwill use what is best for them.\n\n> \n> > Fix it's history allocation for huge values and have someone\n> > (PgSQL Inc.?) patenting the compression algorithm, so we're\n> > safe at some point in the future.\n> \n> That would be a really *bad* idea. What will people say if we say\n> \"Postgres contains patented algorithms, but we'll let you use them\n> for free\" ? 
They'll say \"no thanks, I remember Unisys' repeatedly\n> broken promises about the GIF patent\" and stay away in droves.\n> There is a *lot* of bad blood in the air about compression patents\n> of any sort. We mustn't risk tainting Postgres' reputation with\n> that mess.\n> (In any case, one would hope you couldn't get a patent on this\n> method, though I suppose it never pays to overestimate the competence\n> of the USPTO...)\n\nAnd AFAIK (IANAL ;) you can only patent previously _unpublished_ work,\neven by the patent applicant.\n\n> \n> > If there's a patent problem\n> > in it, we are already running the risk to get sued, the PGLZ\n> > code got shipped with 7.0, used in lztext.\n> \n> But it hasn't been documented or advertised. If we take it out\n> again in 7.1, I think our exposure to potential lawsuits from it is\n> negligible. Not that I think there is any big risk there anyway,\n> but we ought to consider the possibility.\n> \n> My feeling is that going with zlib is probably the right choice.\n> The technical case for using a homebrew compressor instead isn't\n> very compelling,\n\nSpeed seems to be a good reason, if we can keep it up.\n\n> and the advantages of using a standardized,\n> known-patent-free library are not to be ignored.\n\nOTOH, there are possibly patents on other part of postgres, \nlike indexing, storage methods, the mere fact that something is \nstored in another relation, using 'Z' as a protocol character, etc. \netc. So using a patent-free compression library does not help much.\n\nSo if PgSQL Inc. has lots of lawyers with nothing to do, they could do\nsome patent research and scare all developers away with their findings\n;)\n\n-------------\nHannu\n", "msg_date": "Thu, 13 Jul 2000 10:39:36 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: lztext and compression ratios..." }, { "msg_contents": "Some platforms (OSF/cc, HPUX) are already using -rpath or equivalent, so\nyou don't have to specify a shared library search path at runtime. I think\nthat a lot more platforms could use this. Can people comment on whether\nand how it works on their platform? Essentially,\n\nLDFLAGS+=-rpath '$(libdir)'\n\nmight do the trick for most.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 18 Jul 2000 20:18:14 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Shared library search paths" }, { "msg_contents": "For the SGI Irix 6.5, \"man ld\" gives:\n\n ....\n\n -rpath library_path\n Adds the library_path to the search path for DSOs. Each\n library path is appended to the list of directories at the\n time the executable or DSO is loaded. This option directs\n rld(5) to look in the named directories, but to look only\n for DSOs, and to stop looking when the correct one is found.\n\n This option can be specified only when the -shared or\n -call_shared options are also in effect. For more\n information, see the rld(5) man page. (C, C++, F77, F90)\n\n ....\n\n -shared Produces a DSO, creates all of the tables for run-time\n linking, and resolves references to other specified shared\n objects. The object created can be used by the linker to\n make dynamic executables. (C, C++, F77, F90)\n\n ....\n\nHope this helps.\nMark\n\nPeter Eisentraut wrote:\n\n> Some platforms (OSF/cc, HPUX) are already using -rpath or equivalent, so\n> you don't have to specify a shared library search path at runtime. 
I think\n> that a lot more platforms could use this. Can people comment on whether\n> and how it works on their platform? Essentially,\n>\n> LDFLAGS+=-rpath '$(libdir)'\n>\n> might do the trick for most.\n>\n> --\n> Peter Eisentraut Sernanders v�g 10:115\n> [email protected] 75262 Uppsala\n> http://yi.org/peter-e/ Sweden\n\n--\nMark Dalphin email: [email protected]\nMail Stop: 29-2-A phone: +1-805-447-4951 (work)\nOne Amgen Center Drive +1-805-375-0680 (home)\nThousand Oaks, CA 91320 fax: +1-805-499-9955 (work)\n\n\n\n", "msg_date": "Tue, 18 Jul 2000 14:55:35 -0700", "msg_from": "Mark Dalphin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shared library search paths" }, { "msg_contents": "Peter Eisentraut wrote:\n >Some platforms (OSF/cc, HPUX) are already using -rpath or equivalent, so\n >you don't have to specify a shared library search path at runtime. I think\n >that a lot more platforms could use this. Can people comment on whether\n >and how it works on their platform? Essentially,\n >\n >LDFLAGS+=-rpath '$(libdir)'\n >\n >might do the trick for most.\n\nAs far as Debian is concerned, use of rpath is a bug. Here's a quote from\nsome Debian system documentation:\n\n libtool automatically inserts `-rpath' settings when compiling your\n program. But `-rpath' can cause big problems if the referenced\n libraries get updated. Therefore, no Debian package should use the\n `-rpath' option.\n\n libtool also refuses to link shared libraries against other shared\n libraries. Debian packages have to at least link against libc (with\n \"-lc\"), so that the dynamic linker knows whether to use the\n libc5-compat libraries or not.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"For God so loved the world, that he gave his only \n begotten Son, that whosoever believeth in him should \n not perish, but have everlasting life.\" John 3:16 \n\n\n", "msg_date": "Tue, 18 Jul 2000 23:47:38 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Shared library search paths " }, { "msg_contents": "\nfor all the stuff I'm doign lately, I just do:\n\nsetenv LDFLAGS \"-R/usr/local/pgsql/lib -R/usr/local/lib\"\n\nand let configure handle the rest ...\n\n\nOn Tue, 18 Jul 2000, Mark Dalphin wrote:\n\n> For the SGI Irix 6.5, \"man ld\" gives:\n> \n> ....\n> \n> -rpath library_path\n> Adds the library_path to the search path for DSOs. Each\n> library path is appended to the list of directories at the\n> time the executable or DSO is loaded. This option directs\n> rld(5) to look in the named directories, but to look only\n> for DSOs, and to stop looking when the correct one is found.\n> \n> This option can be specified only when the -shared or\n> -call_shared options are also in effect. For more\n> information, see the rld(5) man page. (C, C++, F77, F90)\n> \n> ....\n> \n> -shared Produces a DSO, creates all of the tables for run-time\n> linking, and resolves references to other specified shared\n> objects. The object created can be used by the linker to\n> make dynamic executables. (C, C++, F77, F90)\n> \n> ....\n> \n> Hope this helps.\n> Mark\n> \n> Peter Eisentraut wrote:\n> \n> > Some platforms (OSF/cc, HPUX) are already using -rpath or equivalent, so\n> > you don't have to specify a shared library search path at runtime. 
I think\n> > that a lot more platforms could use this. Can people comment on whether\n> > and how it works on their platform? Essentially,\n> >\n> > LDFLAGS+=-rpath '$(libdir)'\n> >\n> > might do the trick for most.\n> >\n> > --\n> > Peter Eisentraut Sernanders v�g 10:115\n> > [email protected] 75262 Uppsala\n> > http://yi.org/peter-e/ Sweden\n> \n> --\n> Mark Dalphin email: [email protected]\n> Mail Stop: 29-2-A phone: +1-805-447-4951 (work)\n> One Amgen Center Drive +1-805-375-0680 (home)\n> Thousand Oaks, CA 91320 fax: +1-805-499-9955 (work)\n> \n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 18 Jul 2000 22:48:33 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PORTS] Shared library search paths" }, { "msg_contents": "> Some platforms (OSF/cc, HPUX) are already using -rpath or equivalent, so\n> you don't have to specify a shared library search path at runtime. I think\n> that a lot more platforms could use this. Can people comment on whether\n> and how it works on their platform? Essentially,\n> LDFLAGS+=-rpath '$(libdir)'\n\nFor linux (at least gcc 2.7.x and 2.95.2 systems):\n\nif specified in the compilation step,\n\n -Wl,-rpath $(libdir)\n\nor if specified directly to the linker\n\n -rpath $(libdir)\n\n - Thomas\n", "msg_date": "Wed, 19 Jul 2000 01:52:10 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PORTS] Shared library search paths" }, { "msg_contents": "Oliver Elphick writes:\n\n> As far as Debian is concerned, use of rpath is a bug. Here's a quote from\n> some Debian system documentation:\n> \n> libtool automatically inserts `-rpath' settings when compiling your\n> program.\n\nI don't think so.\n\n> But `-rpath' can cause big problems if the referenced\n> libraries get updated. Therefore, no Debian package should use the\n> `-rpath' option.\n\nI'm not sure I buy that. All -rpath does is add a directory to the search\npath that the program consults at runtime for its shared libraries. So\nit's just an alternative in place of\n\nhard-coded into dynamic linker\n/etc/ld.so.conf\nLD_LIBRARY_PATH\n\nbut it's the terminally accurate alternative.\n\nWhat does happen if the referenced library gets updated? Nothing. -rpath\ndoesn't reference any libraries, it just suggests to the runtime linker\nwhere it might look for one. I don't want to use it to find system\nlibraries, I just want psql to find libpq, and the right libpq, and I want\nto relieve installers from having to fiddle around with these settings.\n\n> libtool also refuses to link shared libraries against other shared\n> libraries.\n\nI don't think so.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Thu, 20 Jul 2000 00:46:01 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shared library search paths " }, { "msg_contents": "Peter Eisentraut wrote:\n >Oliver Elphick writes:\n >\n >> As far as Debian is concerned, use of rpath is a bug. Here's a quote from\n >> some Debian system documentation:\n >> \n >> libtool automatically inserts `-rpath' settings when compiling your\n >> program.\n >\n >I don't think so.\n >\n >> But `-rpath' can cause big problems if the referenced\n >> libraries get updated. 
Therefore, no Debian package should use the\n >> `-rpath' option.\n >\n >I'm not sure I buy that. All -rpath does is add a directory to the search\n >path that the program consults at runtime for its shared libraries. \n\nI'm referring this back to the Debian mailing lists; perhaps this\ndocumentation is out of date.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Behold, what manner of love the Father hath bestowed \n upon us, that we should be called the sons of God...\" \n I John 3:1 \n\n\n", "msg_date": "Thu, 20 Jul 2000 08:53:27 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Shared library search paths " }, { "msg_contents": "\"Oliver Elphick\" wrote:\n >I'm referring this back to the Debian mailing lists; perhaps this\n >documentation is out of date.\n \nHere's some responses. It seems that I shall probably have to disable\nuse of -rpath for the Debian build.\n\n=========================================================================\nDate: Fri, 21 Jul 2000 09:09:32 +0200\nFrom: \"J.H.M. Dassen (Ray)\" <[email protected]>\nTo: [email protected]\nSubject: Re: Use of -rpath\n\nIn my experience, -rpath is a big PITA. For example, Red Hat used to ship a\nlibc5 CDE package that was linked -rpath /usr/X11R6/lib. In that time,\nDebian's X was already libc6, and libc5 versions of the X libraries were in\n/usr/lib/libc5-compat. This directory was in /etc/ld.so.conf, and the\ndynamic loader was smart enough to know which library to use, /except/ when\n-rpath was used. The result was that a libc5 binary, compiled for libc5 X\nlibraris, got loaded against libc6 X libraries, and segfaulted.\n\nNow if -rpath's semantics were changed to it adding _to the end_ of the\ndynamic loader's directory search path, it might actually be useful.\n\nRay\n=========================================================================\n\nTo: [email protected]\nSubject: Re: Use of -rpath\nFrom: Brian May <[email protected]>\nDate: 21 Jul 2000 17:35:39 +1000\n\n >> From: Peter Eisentraut <[email protected]>\n Mike> * * *\n >> > libtool also refuses to link shared libraries against other\n >> shared > libraries.\n >> \n >> I don't think so.\n\n Mike> Peter seems clearly right on this:\n\n Mike> guardian:~ ldd /usr/lib/libpq.so.2.0 libc.so.6 =>\n Mike> /lib/libc.so.6 (0x40011000) libcrypt.so.1 =>\n Mike> /lib/libcrypt.so.1 (0x400ee000) /lib/ld-linux.so.2 =>\n Mike> /lib/ld-linux.so.2 (0x80000000)\n\nThat should read \"libtool 1.3 refuses to link shared libraries against\nother *uninstalled* shared libraries\".\n\nie libtool 1.3 will only allow it if both libraries are installed.\n\nlibtool 1.4 will fix this serious limitation. At least, it is serious\nfor packages like Kerberos (both implementations), which come with\nmany libraries that must (read: should) link to each other in the same\nsource structure.\n\nHowever, AFAIK, this has nothing to do with -rpath??????\n=========================================================================\nDate: Fri, 21 Jul 2000 09:39:43 +0200\nFrom: \"Marcelo E. Magallon\" <[email protected]>\nTo: [email protected]\nSubject: Re: Use of -rpath\n\n>> Peter Eisentraut <[email protected]> writes:\n\n > > As far as Debian is concerned, use of rpath is a bug. 
Here's a quote from\n > > some Debian system documentation:\n > > \n > > libtool automatically inserts `-rpath' settings when compiling your\n > > program.\n > \n > I don't think so.\n \n Nowadays the situation might have changed... let me check... Nope,\n 1.3.5 still uses -rpath whenever shared libraries are going to be\n used.\n\n > > But `-rpath' can cause big problems if the referenced\n > > libraries get updated. Therefore, no Debian package should use the\n > > `-rpath' option.\n > \n > I'm not sure I buy that. All -rpath does is add a directory to the search\n > path that the program consults at runtime for its shared libraries. So\n > it's just an alternative in place of\n\n The rpath in the library tells the linker where the library is\n installed. This is hardcoded into the linked program. From ld.info:\n\n > Add a directory to the runtime library search path. This is used\n > when linking an ELF executable with shared objects. All `-rpath'\n > arguments are concatenated and passed to the runtime linker, which\n > uses them to locate shared objects at runtime. The `-rpath'\n > option is also used when locating shared objects which are needed\n > by shared objects explicitly included in the link; see the\n > description of the `-rpath-link' option. If `-rpath' is not used\n > when linking an ELF executable, the contents of the environment\n > variable `LD_RUN_PATH' will be used if it is defined.\n\n This is particularly nasty when you -rpath things like /usr/lib.\n\n > > libtool also refuses to link shared libraries against other shared\n > > libraries.\n > \n > I don't think so.\n\n He's right. See (libtool.info)Inter-library dependencies. Note\n particularly:\n\n > The simple-minded inter-library dependency tracking code of libtool\n > releases prior to 1.2 was disabled because it was not clear when it\n > was possible to link one library with another, and complex failures\n > would occur. A more complex implementation of this concept was\n > re-introduced before release 1.3, but it has not been ported to all\n > platforms that libtool supports. The default, conservative behavior\n > is to avoid linking one library with another, introducing their\n > inter-dependencies only when a program is linked with them. 
\n\n i.e., you have to ask libtool to do this, else it won't.\n\n Greetings,\n\n Marcelo\n=========================================================================\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Greater love hath no man than this, that a man lay \n down his life for his friends.\" John 15:13 \n\n\n", "msg_date": "Fri, 21 Jul 2000 16:51:20 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Shared library search paths " }, { "msg_contents": "I noticed that INHERITS doesn't propogate indecies, It'd be nice\nif there was an toption to do so.\n\nthanks,\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Mon, 28 Aug 2000 12:00:07 -0700", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "INHERITS doesn't offer enough functionality" }, { "msg_contents": "Alfred Perlstein wrote:\n> \n> I noticed that INHERITS doesn't propogate indecies, It'd be nice\n> if there was an toption to do so.\n\nYep it would. Are you volunteering?\n", "msg_date": "Tue, 29 Aug 2000 11:09:18 +1100", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: INHERITS doesn't offer enough functionality" }, { "msg_contents": "* Chris <[email protected]> [000828 17:15] wrote:\n> Alfred Perlstein wrote:\n> > \n> > I noticed that INHERITS doesn't propogate indecies, It'd be nice\n> > if there was an toption to do so.\n> \n> Yep it would. Are you volunteering?\n\nHa, right now I'm doing a major rewrite of my own code to hack\naround vacuum, it'll have to wait but thanks for the vote of\nconfidence. :-)\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Mon, 28 Aug 2000 17:20:38 -0700", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: INHERITS doesn't offer enough functionality" }, { "msg_contents": "I've brought this up before, but maybe if I sell it as a bug someone will\nagree. :-)\n\nIt would be desirable for several reasons that PostgreSQL can be installed\nsafely with --prefix=/usr/local, or some other such shared location. \n(One reason: no PATH, MANPATH, LD_LIBRARY_PATH to set up.)\n\nThe main problem is that we'd clutter the include/ subtree hopelessly with\nfiles such as <c.h>, <os.h>, <config.h>, which has a serious chance of\nbreaking other autoconfiscated packages. Not to mention the other dozen\nor so files that will be spread out evenly.\n\nMy proposal is to set includedir=${prefix}/include/postgresql (instead of\n${prefix}/include) in such cases where the prefix is shared, i.e., it does\nnot contain something like \"pgsql\" already. (precise pattern t.b.d.) \nThis is similar to the existing RPM's, and compatible with FHS, GNU fs\nstd., and BSD hier(7). Apache and Perl also have a similar behaviour in\ntheir installation process. Additionally, one can now use `pg-config\n--includedir` to find the right include directory anywhere.\n\nMarc objected that he liked \"everything in one place\". But doing that is\nexactly what's causing the problem here. 
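As a concrete illustration of why headers under include/postgresql need not
inconvenience anyone: a client only has to pick up the include path from
`pg-config --includedir` at build time, and its source never mentions the
location.  A minimal, hypothetical client along those lines (nothing here is
prescribed; it just uses the ordinary libpq calls):

    /* build with something like:
     *   cc -I`pg-config --includedir` -o testconn testconn.c -lpq
     */
    #include <stdio.h>
    #include <libpq-fe.h>       /* found via pg-config --includedir */

    int
    main(void)
    {
        PGconn     *conn = PQconnectdb("dbname=template1");

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return 1;
        }
        printf("connected to %s\n", PQdb(conn));
        PQfinish(conn);
        return 0;
    }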
Additionally I ask why then the\ndefault is prefix is /usr/local/pgsql in the first place, which is\ncertainly not in support of that notion. I think having a cooperative\ninstallation layout should be prioritized.\n\nSecondly, I'd like to do the same thing with the share/ subtree. That\ntree is only read by the PG programs anyway so no one has to know about\nit. It's common practice for every package to have its own tree under\nshare and not to write into it directly. That would also help associate a\nfile like `global.description' to where it belongs.\n\nFinally, the doc tree should also get this treatment. Otherwise we'd get\nfiles like /usr/local/doc/{admin,tutorial}/index.html, which do not\nindicate at all what package they belong to and they could be confused\nwith documentation of the operating system proper. Users would only have\nto update their bookmarks, but I doubt that installations into shared\nprefixes are large scale anyway.\n\nComments? Better objections? :-)\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Wed, 27 Sep 2000 12:43:23 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Installation layout is still hazardous for shared prefixes" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> It would be desirable for several reasons that PostgreSQL can be installed\n> safely with --prefix=/usr/local, or some other such shared location. \n> ...\n> My proposal is to set includedir=${prefix}/include/postgresql (instead of\n> ${prefix}/include) in such cases where the prefix is shared, i.e., it does\n> not contain something like \"pgsql\" already. (precise pattern t.b.d.) \n\nHmm, so basically you propose an install setup whereby 'bin' and 'lib'\nfiles can go directly into /usr/local/bin and /usr/local/lib, but\neverything else still lives in postgres-specific directories?\n\nTo do that without creating problems, we'd have to go back to making\nsure that all the programs we install have 'pg'-prefixed names. The\nscripts (createdb and so forth) don't at the moment, and names like\n'createuser' clearly have potential for confusion if they are in non-\nPG-specific directories.\n\nI think it would be a real bad idea to put the postmaster and postgres\nexecutables right in /usr/local/bin. Perhaps it is time to think about\na separate 'sbin' directory for programs that aren't supposed to be\ninvoked by normal users. Those two, initdb, initlocation, and ipcclean\ncould certainly go to sbin, also pg_id, maybe the create/drop scripts\nif you feel those are admin-only. Perhaps using a private sbin directory\ncould eliminate the issue of needing to rename stuff.\n\nThe stuff that's going into lib doesn't look like it'd cause any big\nconflicts, and I agree that not having to run ldconfig or equivalent\nwould eliminate a lot of install headaches.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Sep 2000 10:53:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Installation layout is still hazardous for shared prefixes " }, { "msg_contents": "On Wed, Sep 27, 2000 at 10:53:43AM -0400, Tom Lane wrote:\n> I think it would be a real bad idea to put the postmaster and postgres\n> executables right in /usr/local/bin. Perhaps it is time to think about\n> a separate 'sbin' directory for programs that aren't supposed to be\n> invoked by normal users. 
Those two, initdb, initlocation, and ipcclean\n> could certainly go to sbin, also pg_id, maybe the create/drop scripts\n> if you feel those are admin-only. Perhaps using a private sbin directory\n> could eliminate the issue of needing to rename stuff.\n\ngenerally there is a /usr/local/sbin or /usr/local/libexec for things like\npostgres and postmaster.\n\nat least on freebsd.\n\nif we are gonna go this route, i would prefer that we not have any \"special\"\ndirectories for postgres, other than the data directory.\n\na layout of:\n\n/usr/local/bin\n/usr/local/include/pgsql\n/usr/local/lib\n/usr/local/libexec\n/usr/local/pgsql/data or /usr/local/pgsql\n\nthe last one could also be /home/pgsql or whatever.\n\n-- \n[ Jim Mercer [email protected] +1 416 410-5633 ]\n[ Reptilian Research -- Longer Life through Colder Blood ]\n[ Don't be fooled by cheap Finnish imitations; BSD is the One True Code. ]\n", "msg_date": "Wed, 27 Sep 2000 11:02:06 -0400", "msg_from": "Jim Mercer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Installation layout is still hazardous for shared prefixes" }, { "msg_contents": "Tom Lane writes:\n\n> Hmm, so basically you propose an install setup whereby 'bin' and 'lib'\n> files can go directly into /usr/local/bin and /usr/local/lib, but\n> everything else still lives in postgres-specific directories?\n\nYes.\n\nIn detail, for those who cry \"do it like Debian\", we have these categories\nof installation directories (not all actually used by PostgreSQL):\n\nbin\t- user programs\nsbin\t- administrator programs\nlibexec\t- programs used only by other programs\n\nlib\t- libraries\ninclude\t- headers\n\nshare\t- architecture independent read-only data\netc\t- single-host read-only data\ncom\t- architecture independent writeable data\nvar\t- single-host writeable data\n\ndoc\t- documentation\nman\t- manual pages\n\nThese do not actually have to exist with these names (although they\nusually do), but the point is that each directory has a fairly orthogonal\npurpose, thus enabling the local installer to easily adopt the layout to\nhis local convention.\n\nNow the convention is that if you have a lot of files to put into one of\nthese directories you create a private subdirectory, which is what I'm\nproposing here. However, the bin, sbin, lib, and man directories are\nobviously exempted from this convention, because otherwise the\nshell/linker/man system won't find the files without the same contortions\nwe're trying to avoid here.\n\n> To do that without creating problems, we'd have to go back to making\n> sure that all the programs we install have 'pg'-prefixed names.\n\nYeah, that again. That would be a true incompatibility, though, so it's\nmore complex. But let's consider that complementary, not prerequisite to\nthe issue at hand.\n\n> I think it would be a real bad idea to put the postmaster and postgres\n> executables right in /usr/local/bin.\n\nWhy?\n\n> Perhaps it is time to think about a separate 'sbin' directory for\n> programs that aren't supposed to be invoked by normal users.\n\nGood idea as well, but again a compatibility break that needs to be\nthought about.\n\n> Perhaps using a private sbin directory could eliminate the issue of\n> needing to rename stuff.\n\nI don't think private sbin directories are conventional. 
But again,\nthat's really a complementary issue.\n\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Thu, 28 Sep 2000 15:08:28 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Installation layout is still hazardous for shared\n prefixes" }, { "msg_contents": "Tom Lane wrote:\n> Peter Eisentraut <[email protected]> writes:\n> > My proposal is to set includedir=${prefix}/include/postgresql (instead of\n> > ${prefix}/include) in such cases where the prefix is shared, i.e., it does\n> > not contain something like \"pgsql\" already. (precise pattern t.b.d.)\n \n> Hmm, so basically you propose an install setup whereby 'bin' and 'lib'\n> files can go directly into /usr/local/bin and /usr/local/lib, but\n> everything else still lives in postgres-specific directories?\n\nPeter: Please do this....\n \n> To do that without creating problems, we'd have to go back to making\n> sure that all the programs we install have 'pg'-prefixed names. The\n> scripts (createdb and so forth) don't at the moment, and names like\n> 'createuser' clearly have potential for confusion if they are in non-\n> PG-specific directories.\n\nRedHat includes PostgreSQL, with executables in /usr/bin. There have\nbeen no namespace collisions as yet, with as many packages as RedHat\nships.\n \n> I think it would be a real bad idea to put the postmaster and postgres\n> executables right in /usr/local/bin. Perhaps it is time to think about\n> a separate 'sbin' directory for programs that aren't supposed to be\n> invoked by normal users. Those two, initdb, initlocation, and ipcclean\n\nThis is doable, but not really necessary. However, if this is the\ndirection things are going..... I can certainly work with it. In fact,\nI may go ahead with 7.1's RPMset and do that, popping those executables\nin /usr/sbin -- not a big change, by any means, except to the scripts\nthat are bundled with the RPM.\n\nA good, usable, shared prefix would make my job much easier. Great gobs\nof code in the spec file would go away as PostgreSQL loses the\n'/usr/local/pgsql'-centric thinking and gets more in the step of what is\nstandard for packaging. And this would help even on system other than\nLinux FHS-compliant distributions. And it would not cause any problems\nfor those who still want to use a prefix of /usr/local/pgsql.\n\nThanks for the thinking, Peter.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Thu, 28 Sep 2000 12:44:54 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Installation layout is still hazardous for shared prefixes" }, { "msg_contents": "Lamar Owen wrote:\n >Tom Lane wrote:\n >> To do that without creating problems, we'd have to go back to making\n >> sure that all the programs we install have 'pg'-prefixed names. The\n >> scripts (createdb and so forth) don't at the moment, and names like\n >> 'createuser' clearly have potential for confusion if they are in non-\n >> PG-specific directories.\n >\n >RedHat includes PostgreSQL, with executables in /usr/bin. There have\n >been no namespace collisions as yet, with as many packages as RedHat\n >ships.\n \nThe same applies to Debian, with something like 4000 binary packages in\nthe current development tree.\n\n >> I think it would be a real bad idea to put the postmaster and postgres\n >> executables right in /usr/local/bin. 
Perhaps it is time to think about\n >> a separate 'sbin' directory for programs that aren't supposed to be\n >> invoked by normal users. Those two, initdb, initlocation, and ipcclean\n >\n >This is doable, but not really necessary. However, if this is the\n >direction things are going..... I can certainly work with it. In fact,\n >I may go ahead with 7.1's RPMset and do that, popping those executables\n >in /usr/sbin -- not a big change, by any means, except to the scripts\n >that are bundled with the RPM.\n \nIn the Debian package, I have put the administrator programs in\n/usr/lib/postgresql/bin. The postgres user has that directory in its path\nso that all works properly. Since root cannot run these, I don't think it\nappropriate to put them in /usr/sbin.\n\n >A good, usable, shared prefix would make my job much easier. Great gobs\n >of code in the spec file would go away as PostgreSQL loses the\n >'/usr/local/pgsql'-centric thinking and gets more in the step of what is\n >standard for packaging. And this would help even on system other than\n >Linux FHS-compliant distributions. And it would not cause any problems\n >for those who still want to use a prefix of /usr/local/pgsql.\n\nAgreed.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Let not your heart be troubled; ye believe in God, \n believe also in me.\" John 14:1 \n\n\n", "msg_date": "Fri, 29 Sep 2000 10:29:48 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Installation layout is still hazardous for shared\n prefixes" }, { "msg_contents": "\"Oliver Elphick\" <[email protected]> writes:\n>> Tom Lane wrote:\n>>> I think it would be a real bad idea to put the postmaster and postgres\n>>> executables right in /usr/local/bin. Perhaps it is time to think about\n>>> a separate 'sbin' directory for programs that aren't supposed to be\n>>> invoked by normal users. Those two, initdb, initlocation, and ipcclean\n \n> In the Debian package, I have put the administrator programs in\n> /usr/lib/postgresql/bin. The postgres user has that directory in its path\n> so that all works properly. Since root cannot run these, I don't think it\n> appropriate to put them in /usr/sbin.\n\nThat seems like a good compromise to me. From a more general\nperspective I guess that install would see it as two separate target\ndirectories for executables, which I suppose we'd describe as \"user\"\nand \"dbadmin\" bin directories. When installing one should have a\nchoice of a \"traditional\" setup (both user and admin exes go into\na single directory, eg /usr/local/pgsql/bin) or a \"shared\" setup\n(user exes to a shared dir like /usr/local/bin, admin exes still go\nto a private dir like /usr/local/pgsql/bin).\n\nOffhand I'd make the division be user:\n\ncreatedb dropdb ecpg pg-config pgaccess pgtclsh pgtksh psql\n\nand admin:\n\ncreatelang createuser droplang dropuser initdb initlocation ipcclean\npg_ctl pg_dump pg_dumpall pg_id pg_passwd pg_restore pg_upgrade postgres\npostmaster vacuumdb\n\n(Not sure about pg_dump/pg_dumpall/pg_restore; are these of any\nsignificant use to non-superusers?) 
This would keep createuser/dropuser\nout of the shared bin directory, which certainly seem like the names\nmost likely to cause conflicts.\n\nThe man pages probably need to adopt the same division as the exes,\nie some to /usr/local/man and some to /usr/local/pgsql/man.\n\nNote that it'd be a real bad idea to abandon the option of the\n\"traditional\" install-tree configuration. For people like me, with\nthree or four versions of Postgres hanging around on the same machine,\nit's critical to be able to install everything into a single private\ndirectory tree.\n\nComments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 29 Sep 2000 10:15:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Installation layout is still hazardous for shared prefixes " }, { "msg_contents": "Tom Lane wrote:\n\n[useful an complete discussion of sbin-style programs and their place\nsnipped]\n\n> (Not sure about pg_dump/pg_dumpall/pg_restore; are these of any\n> significant use to non-superusers?) This would keep createuser/dropuser\n> out of the shared bin directory, which certainly seem like the names\n> most likely to cause conflicts.\n\npg_dump, yes, as a user might want to dump his own database.\n \n> The man pages probably need to adopt the same division as the exes,\n> ie some to /usr/local/man and some to /usr/local/pgsql/man.\n\nCurrently, since there is no collision in the executables there have\nbeen no collisions in the man pages. But, I had a radical idea about\nthe man pages -- why not package a 'man database' as a dump, let someone\nrestore that dump into a database, then you can use SQL to access your\nman pages. Of course, you still need docs outside the database, but,\nwith TOAST, this is possible.\n\nComments?\n \n> Note that it'd be a real bad idea to abandon the option of the\n> \"traditional\" install-tree configuration. For people like me, with\n> three or four versions of Postgres hanging around on the same machine,\n> it's critical to be able to install everything into a single private\n> directory tree.\n\nNo one is advocating removing the 'traditional' packaging from the\noptions -- least of all me. Choice and flexibility are my bywords. \nCurrently, the PostgreSQL installation is very inflexible WRT the\ndirectories under the installation dir.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Fri, 29 Sep 2000 10:47:33 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Installation layout is still hazardous for shared prefixes" }, { "msg_contents": "Tom Lane writes:\n\n> > In the Debian package, I have put the administrator programs in\n> > /usr/lib/postgresql/bin. The postgres user has that directory in its path\n> > so that all works properly. Since root cannot run these, I don't think it\n> > appropriate to put them in /usr/sbin.\n\nThat's okay as far as Debian goes, because users do not *ever* have to run\ninitdb, postmaster, etc. 
because that's done by the package tool and/or\nthe rc.* stuff, but for a PostgreSQL source distribution (which is what\nwe're talking about here, binary packagers can always make their own\narrangements) initdb and postmaster do get run by the user so they have to\nbe in some kind of obvious place.\n\n> choice of a \"traditional\" setup (both user and admin exes go into\n> a single directory, eg /usr/local/pgsql/bin) or a \"shared\" setup\n> (user exes to a shared dir like /usr/local/bin, admin exes still go\n> to a private dir like /usr/local/pgsql/bin).\n\nUm, now that would go against the \"meta\" file system standard that\nAutoconf gives us, and I think it's very important to adhere by that\nbecause users expect a configure script to behave in certain ways.\n\nAn installation directory looks like PREFIX/SUBTREE/OPT-PRIVATE, where\nprefix would be /usr/local/pgsql or /usr/local, subtree is bin, man,\ninclude, etc., and OPT-PRIVATE would be \"postgresql\". But you can only\nhave one prefix, if it's /usr/local then you can't use /usr/local/pgsql\nfor some other part.\n\nWhat we could compatibly do is\n\n/usr/local/bin/psql\n/usr/local/sbin/initdb\n\nor \n\n/usr/local/bin/psql\n/usr/local/sbin/postgresql/initdb\n\nbut the latter is going to be pretty annoying because I've never seen\nsomebody make subdirectories in bin or sbin.\n\n \n> Offhand I'd make the division be user:\n> \n> createdb dropdb ecpg pg-config pgaccess pgtclsh pgtksh psql\n> \n> and admin:\n> \n> createlang createuser droplang dropuser initdb initlocation ipcclean\n> pg_ctl pg_dump pg_dumpall pg_id pg_passwd pg_restore pg_upgrade postgres\n> postmaster vacuumdb\n\nHmm, I'd rather see createuser, dropuser, vacuumdb, pg_dump, pg_restore in\nthe first category because that's the client/server split -- the second\ncategory only needs to be installed on the server, the first on clients\nand servers. That might be more useful in more ways and does not make too\nmany presumptions about usage pattern.\n\n> This would keep createuser/dropuser out of the shared bin directory,\n> which certainly seem like the names most likely to cause conflicts.\n\nI'm not so concerned about that anymore, considering that RPM and Debian\nare not concerned. Hiding away executables is probably not the terminally\nelegant plan, because users will put them in their PATH anyway before\nlong. If someone else can show some other package having a createuser\nprogram, then we need to act, but probably rather by renaming ours.\n\n\n> Note that it'd be a real bad idea to abandon the option of the\n> \"traditional\" install-tree configuration. For people like me, with\n> three or four versions of Postgres hanging around on the same machine,\n> it's critical to be able to install everything into a single private\n> directory tree.\n\nThat's definitely not going to happen of course. 
To make clear what I\nwant, it's this:\n\n--prefix=/usr/local/pgsql (default)\n\n/usr/local/pgsql/bin/psql ...\n/usr/local/pgsql/lib/libpq.a ...\n/usr/local/pgsql/include/libpq-fe.h ...\n/usr/local/pgsql/share/template1.bki ...\n\n--prefix=/usr/local\t!~ /(postgres)|(pgsql)/\n\n/usr/local/bin/psql ...\n/usr/local/lib/libpq.a ...\n/usr/local/include/postgresql/libpq-fe.h ...\n/usr/local/share/postgresql/template1.bki ...\n\nWhether or not we want to have a separate sbin tree is independent of\nthat, but let's not invent a completely new file system standard for that.\n\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Fri, 29 Sep 2000 18:24:16 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Installation layout is still hazardous for shared\n prefixes" }, { "msg_contents": "Can someone comment on this?\n\n[ Charset ISO-8859-1 unsupported, converting... ]\n> Louis-David Mitterrand writes:\n> \n> > When creating a child (through CREATE TABLE ... INHERIT (parent)) it\n> > seems the child gets all of the parent's contraints _except_ its PRIMARY\n> > KEY. Is this normal?\n> \n> It's kind of a bug.\n> \n> \n> -- \n> Peter Eisentraut Sernanders v?g 10:115\n> [email protected] 75262 Uppsala\n> http://yi.org/peter-e/ Sweden\n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 2 Oct 2000 12:29:34 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: child table doesn't inherit PRIMARY KEY?" }, { "msg_contents": "I wrote:\n\n> My proposal is to set includedir=${prefix}/include/postgresql (instead of\n> ${prefix}/include) in such cases where the prefix is shared, i.e., it does\n> not contain something like \"pgsql\" already. (precise pattern t.b.d.) \n\nI think that most people agreed to this step, or at least no one\nexplicitly disagreed. Could we move ahead with this?\n\nThe default prefix will stay at /usr/local/pgsql so if you are concerned\nabout createuser, etc. clashing, you don't have to use it. The decision\nabout sbin or no sbin can be considered separately.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Wed, 4 Oct 2000 22:13:46 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Installation layout is still hazardous for shared prefixes" }, { "msg_contents": "> I have created the following patch with applies to the current source\n> tree. However, there have been significant changes since 7.0, and even\n> my patch does not incorporate all the other changes.\n> \n> I am not sure what to recommend. It would be optimal if you could take\n> your patch, grab our current snapshot, and submit a new patch that works\n> with our current code. Another option if you are too busy is to have\n> one of our folks massage my patch to more properly merge into the\n> existing code. \n> \n> Please let me know how you would like to proceed. My version of your patch\n> is attached.\n\nFirst of all, thank you for taking the time to work on my submission!\nI would prefer if someone who's already familiar with what's changed since\n7.0 would make whatever adjustments are needed. I'm occupied with\nother projects at the moment. 
If it turns out that no one else has time\nto work on it, I'll be the fallback.\n", "msg_date": "Thu, 12 Oct 2000 14:09:03 -0400", "msg_from": "\"David J. MacKenzie\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] PostgreSQL virtual hosting support " }, { "msg_contents": "> Alfred Perlstein wrote:\n> > \n> > I noticed that INHERITS doesn't propogate indecies, It'd be nice\n> > if there was an toption to do so.\n> \n> Yep it would. Are you volunteering?\n> \n\nAdded to TODO:\n\n\t* Allow inherited tables to inherit index\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 16 Oct 2000 11:55:14 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: INHERITS doesn't offer enough functionality" }, { "msg_contents": "* Bruce Momjian <[email protected]> [001016 08:55] wrote:\n> > Alfred Perlstein wrote:\n> > > \n> > > I noticed that INHERITS doesn't propogate indecies, It'd be nice\n> > > if there was an toption to do so.\n> > \n> > Yep it would. Are you volunteering?\n> > \n> \n> Added to TODO:\n> \n> \t* Allow inherited tables to inherit index\n\nThank you, it's not a big problem that this doesn't happen, but it'd\nbe nice to see it as an option when creating a table via inheritance.\n\nWhat about RULEs? I wouldn't really have a use for that but others\nmight.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Mon, 16 Oct 2000 09:42:23 -0700", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: INHERITS doesn't offer enough functionality" }, { "msg_contents": "Alfred Perlstein wrote:\n\n> Thank you, it's not a big problem that this doesn't happen, but it'd\n> be nice to see it as an option when creating a table via inheritance.\n> \n> What about RULEs? I wouldn't really have a use for that but others\n> might.\n\nActually it's a reasonably big deal. Apart from the obvious performance\npenalty for a deep inheritance hierarchy it affects the implementation\nof unique keys, referential integrity for inheritance and the\ntransparancy of extending an inheritance hierarchy.\n", "msg_date": "Tue, 17 Oct 2000 21:21:44 +1100", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: INHERITS doesn't offer enough functionality" }, { "msg_contents": "Bruce Momjian wrote:\n >> Alfred Perlstein wrote:\n >> > \n >> > I noticed that INHERITS doesn't propogate indecies, It'd be nice\n >> > if there was an toption to do so.\n >> \n >> Yep it would. Are you volunteering?\n >> \n >\n >Added to TODO:\n >\n >\t* Allow inherited tables to inherit index\n\nWhat is the spec for this? \n\nDo you mean that inheriting tables should share a single index with their\nancestors, or that each descendant should get a separate index on the\nsame pattern as its ancestors'? \n\nWith the former, the inherited index could be used to enforce a primary\nkey over a whole inheritance hierarchy, and would presumable make it\neasier to implement RI against an inheritance hierarchy. 
Is this what\nyou have in mind?\n\n\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Delight thyself also in the LORD; and he shall give \n thee the desires of thine heart.\" Psalms 37:4\n\n\n", "msg_date": "Wed, 18 Oct 2000 12:57:40 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: INHERITS doesn't offer enough functionality " }, { "msg_contents": "* Oliver Elphick <[email protected]> [001018 04:59] wrote:\n> Bruce Momjian wrote:\n> >> Alfred Perlstein wrote:\n> >> > \n> >> > I noticed that INHERITS doesn't propogate indecies, It'd be nice\n> >> > if there was an toption to do so.\n> >> \n> >> Yep it would. Are you volunteering?\n> >> \n> >\n> >Added to TODO:\n> >\n> >\t* Allow inherited tables to inherit index\n> \n> What is the spec for this? \n> \n> Do you mean that inheriting tables should share a single index with their\n> ancestors, or that each descendant should get a separate index on the\n> same pattern as its ancestors'? \n> \n> With the former, the inherited index could be used to enforce a primary\n> key over a whole inheritance hierarchy, and would presumable make it\n> easier to implement RI against an inheritance hierarchy. Is this what\n> you have in mind?\n\nNot really, it's more of a convience issue for me, a 'derived table'\nshould inherit the attributes of the 'base table' (including indecies),\nhaving an index shared between two tables is an interesting idea but\nnot what I had in mind.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Wed, 18 Oct 2000 08:40:31 -0700", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: INHERITS doesn't offer enough functionality" }, { "msg_contents": "Alfred Perlstein wrote:\n >* Oliver Elphick <[email protected]> [001018 04:59] wrote:\n >> Do you mean that inheriting tables should share a single index with their\n >> ancestors, or that each descendant should get a separate index on the\n >> same pattern as its ancestors'? \n >> \n >> With the former, the inherited index could be used to enforce a primary\n >> key over a whole inheritance hierarchy, and would presumable make it\n >> easier to implement RI against an inheritance hierarchy. Is this what\n >> you have in mind?\n >\n >Not really, it's more of a convience issue for me, a 'derived table'\n >should inherit the attributes of the 'base table' (including indecies),\n >having an index shared between two tables is an interesting idea but\n >not what I had in mind.\n\nWell then, what will happen if I do\n\n SELECT * FROM table* WHERE inherited_unique_indexed_field = some_value;\n\nwould I expect to get back multiple rows? Are all the separate indexes\ncandidates for use in the selection?\n\nI think you are highlighting the fact that we still haven't satisfactorily\ndefined the semantics of inheritance in PostgreSQL; is it merely a\ntemplate system or is it something more meaningful? 
What inheritance\nspecifications are we going to work towards?\n\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Delight thyself also in the LORD; and he shall give \n thee the desires of thine heart.\" Psalms 37:4\n\n\n", "msg_date": "Wed, 18 Oct 2000 20:46:34 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: INHERITS doesn't offer enough functionality " }, { "msg_contents": "It's pretty clear to me that an inherited index should be only one\nindex. There may be a case for optional non-inherited indexes (CREATE\nINDEX ON ONLY foobar), but if the index is inherited, it is just one\nindex.\n\nAt the end of the day though, the reason is only performance. The\nsemantics should be the same no matter whether implemented as multiple\nindexes or not. Performance is much better with one index though.(*)\n\n(*) Assuming you use inheritance in the queries, which I have found is\nthe most common thing. That's reflected in the 7.1 defaults where\ninheritance is the default.\n\nOliver Elphick wrote:\n> \n> Alfred Perlstein wrote:\n> >* Oliver Elphick <[email protected]> [001018 04:59] wrote:\n> >> Do you mean that inheriting tables should share a single index with their\n> >> ancestors, or that each descendant should get a separate index on the\n> >> same pattern as its ancestors'?\n> >>\n> >> With the former, the inherited index could be used to enforce a primary\n> >> key over a whole inheritance hierarchy, and would presumable make it\n> >> easier to implement RI against an inheritance hierarchy. Is this what\n> >> you have in mind?\n> >\n> >Not really, it's more of a convience issue for me, a 'derived table'\n> >should inherit the attributes of the 'base table' (including indecies),\n> >having an index shared between two tables is an interesting idea but\n> >not what I had in mind.\n> \n> Well then, what will happen if I do\n> \n> SELECT * FROM table* WHERE inherited_unique_indexed_field = some_value;\n> \n> would I expect to get back multiple rows? Are all the separate indexes\n> candidates for use in the selection?\n> \n> I think you are highlighting the fact that we still haven't satisfactorily\n> defined the semantics of inheritance in PostgreSQL; is it merely a\n> template system or is it something more meaningful? What inheritance\n> specifications are we going to work towards?\n> \n> --\n> Oliver Elphick [email protected]\n> Isle of Wight http://www.lfix.co.uk/oliver\n> PGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\n> GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n> ========================================\n> \"Delight thyself also in the LORD; and he shall give\n> thee the desires of thine heart.\" Psalms 37:4\n\n-- \nChris Bitmead\nmailto:[email protected]\n", "msg_date": "Thu, 19 Oct 2000 18:33:24 +1100", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: INHERITS doesn't offer enough functionality" }, { "msg_contents": "\n\nChris wrote:\n\n> It's pretty clear to me that an inherited index should be only one\n> index. There may be a case for optional non-inherited indexes (CREATE\n> INDEX ON ONLY foobar), but if the index is inherited, it is just one\n> index.\n>\n> At the end of the day though, the reason is only performance. 
The\n> semantics should be the same no matter whether implemented as multiple\n> indexes or not. Performance is much better with one index though.(*)\n>\n\nIs it true ?\nHow to guarantee the uniqueness using multiple indexes ?\n\nRegards.\nHiroshi Inoue\n\n", "msg_date": "Thu, 19 Oct 2000 18:38:47 +0900", "msg_from": "Hiroshi Inoue <[email protected]>", "msg_from_op": false, "msg_subject": "Re: INHERITS doesn't offer enough functionality" }, { "msg_contents": "Hiroshi Inoue wrote:\n\n> > At the end of the day though, the reason is only performance. The\n> > semantics should be the same no matter whether implemented as multiple\n> > indexes or not. Performance is much better with one index though.(*)\n> >\n> \n> Is it true ?\n> How to guarantee the uniqueness using multiple indexes ?\n\nWell you'd have to check every index in the hierarchy. As I said,\ninefficient.\n", "msg_date": "Thu, 19 Oct 2000 22:58:14 +1100", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: INHERITS doesn't offer enough functionality" }, { "msg_contents": "\n\nHiroshi Inoue schrieb:\n> \n> Chris wrote:\n> \n> > It's pretty clear to me that an inherited index should be only one\n> > index. There may be a case for optional non-inherited indexes (CREATE\n> > INDEX ON ONLY foobar), but if the index is inherited, it is just one\n> > index.\n> >\n> > At the end of the day though, the reason is only performance. The\n> > semantics should be the same no matter whether implemented as multiple\n> > indexes or not. Performance is much better with one index though.(*)\n> >\n> \n> Is it true ?\n> How to guarantee the uniqueness using multiple indexes ?\n> \n\n Sorry to say, but you all should really think about, what inheritance\nshould mean !!!!\n\n In the classic mapping strategy (OO-rdbms mapping) it's said, that \neach class is mapped to ONE table ! This is the classic mapping\nstrategy, which is mentioned in every literature.\n\n The point is: this is classic, but noone does it like this if\nyour really have a larger hierarchy of classes. You'll not get \nany good performance, when solving an association in your oo\nprogram, because the framework has to query against each \ntable: 6 tables - 6 queries !!! :-(((((\n\n With the PostgreSQL approach one can send ONE query against\nthe tables and one would get one result ... which will be\nmuch faster (I hope so ... that has to be prooved ..).\n\n--\n\n I'm not sure, that inherited indices should be really ONE\nindex. There are very well reasons NOT to build ONE larger\nindex.\n\n Actually one should think about: why do I really want to \nhave inheritance in the oo-rdbms ? Actually I could put\nall columns (of all classes in this hierarchy into one table \nand that's it). I would like to have inheritance in this\ndatabase system, because the tables are getting smaller\nand queries against special classes (eh tables) are becoming\nfaster.\n\n Actually the inserts will be much faster also because you\nhave several smaller indices.\n\n I've run tests here with ONE large table (5 columns and \n5 indices) holding data for about 17 classes and the result \nis: the insert/update path is the problem and not the \nselect-path. insert-performance is decreasing in a \nlinear fashon ... 
very, very bad.\n\n\n Marten\n", "msg_date": "Thu, 19 Oct 2000 18:48:50 +0100", "msg_from": "[email protected] (Marten Feldtmann)", "msg_from_op": false, "msg_subject": "Re: INHERITS doesn't offer enough functionality" }, { "msg_contents": "\n> The point is: this is classic, but noone does it \n> like this if your really have a larger hierarchy of \n> classes. You'll not get any good performance, when \n> solving an association in your oo\n> program, because the framework has to query against \n> each table: 6 tables - 6 queries !!! :-(((((\n> \n> With the PostgreSQL approach one can send ONE query \n> against the tables and one would get one result ... \n> which will be much faster (I hope so ... that has to \n> be prooved ..).=\n\nYou'll still have to do 6 queries in postgres because it does not return\nfields in sub-classes. Imagine the root of the hierarchy is abstract\nwith no fields. You query this class and you get 100 tuples with no\ncolumns! This is the aspect I'm hoping to fix but I'm waiting for Tom to\nre-do the query data structures before I do changes that are thrown\naway.\n\n> Actually one should think about: why do I really want to\n> have inheritance in the oo-rdbms ? Actually I could put\n> all columns (of all classes in this hierarchy into one table\n> and that's it).\n\nOuch. That way lies madness.\n", "msg_date": "Fri, 20 Oct 2000 18:35:41 +1100", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: INHERITS doesn't offer enough functionality" }, { "msg_contents": "\n\nChris schrieb:\n> \n> > The point is: this is classic, but noone does it\n> > like this if your really have a larger hierarchy of\n> > classes. You'll not get any good performance, when\n> > solving an association in your oo\n> > program, because the framework has to query against\n> > each table: 6 tables - 6 queries !!! :-(((((\n> >\n> > With the PostgreSQL approach one can send ONE query\n> > against the tables and one would get one result ...\n> > which will be much faster (I hope so ... that has to\n> > be prooved ..).=\n> \n> You'll still have to do 6 queries in postgres because it does not return\n> fields in sub-classes. \n\n Practically this is not such a big problem as one might think.\n\n WHEN you have a persistance framework you tell your framework, \nthat every attribut is located (mapped or stored or however you \nmay see it) in the superclass and then your top class (table)\nhelds all attributes your \"lowest\" subclass has.\n\n But that puts another question to be answered: are the defined\ncontrained also inheritate ??? Actually I would say: no and\ntherefore we have the same handling as with indices.\n\n Most of the attributes may have NULL, but who cares ? The \nframework actually has to interpret the data coming from\nthe database and will throw him away.\n\n Therefore I can get around the limitations of PostgreSQL\nin this case. If PostgreSQL can handle this in addition\nthis would be very nice ... but before the basic stuff has\nto be fixed and it has to be very solid.\n\n But I have to admit: my point is a viewpoint from a programmer\nusing an object oriented language and I only want to store\nmy objects into a database. 
People using PHP, Perl or\nother \"low-level\" languages may have a different view or\nneed, because they do not have a framework doing the work\nfor them.\n\n I can only tell you, as a persistence framework programmer, what\nwill be an improvement for me and what will not help me.\n\n What will not help me:\n\n * that the database generates OID\n\n * that the database generates \"class\" OIDs (one may want to\n have that, in order to recognize which table the data\n comes from..)\n\n * special features to solve very special problems\n\n What will help me:\n\n * all the stuff to reduce the number (!) of queries sent \n to the database to get my data\n\n * a way to insert VERY quickly a larger amount of data \n into a table.\n\n * a good, speedy database\n \nMarten\n", "msg_date": "Fri, 20 Oct 2000 19:02:47 +0100", "msg_from": "[email protected] (Marten Feldtmann)", "msg_from_op": false, "msg_subject": "Re: INHERITS doesn't offer enough functionality" }, { "msg_contents": "Marten Feldtmann wrote:\n\n> > You'll still have to do 6 queries in postgres because it does not return\n> > fields in sub-classes.\n> \n> Practically this is not such a big problem as one might think.\n\n> WHEN you have a persistence framework you tell your framework\n> that every attribute is located (mapped or stored or however you\n> may see it) in the superclass and then your top class (table)\n> holds all attributes your \"lowest\" subclass has.\n\nI don't understand what you're saying. There is no query which will\nbring back a set of objects of different types without truncating the\nsub-class fields. Therefore it's a big problem for persistence\nframeworks that use inheritance.\n\n> I can only tell you, as a persistence framework programmer, what\n> will be an improvement for me and what will not help me.\n> \n> What will not help me:\n> \n> * that the database generates OID\n> \n> * that the database generates \"class\" OIDs (one may want to\n> have that, in order to recognize which table the data\n> comes from..)\n\nYou don't seem to be thinking much in terms of an Object Data Management\nGroup style persistence framework. That's a shame since it's becoming\nincreasingly important. Sun seems to be endorsing it for Java in some\nway too.\n\n> \n> * special features to solve very special problems\n> \n> What will help me:\n> \n> * all the stuff to reduce the number (!) of queries sent\n> to the database to get my data\n> \n> * a way to insert VERY quickly a larger amount of data\n> into a table.\n> \n> * a good, speedy database\n> \n> Marten\n", "msg_date": "Tue, 24 Oct 2000 08:13:28 +1100", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: INHERITS doesn't offer enough functionality" }, { "msg_contents": "\nTried --enable-multibyte for grins. Bad Move. \n\nTop part of config.status:\n#! 
/bin/sh\n# Generated automatically by configure.\n# Run this file to recreate the current configuration.\n# This directory was configured as follows,\n# on host lerami.lerctr.org:\n#\n# ./configure --prefix=/home/ler/pg-test --enable-syslog --with-CXX --with-perl --enable-multibyte --with-includes=/usr/local/include --with-libs=/usr/local/lib\n#\n# Compiler output produced by configure, useful for debugging\n# configure, is in ./config.log if it exists.\n\n\nBottom of make output...\nUX:acomp: WARNING: \"mbutils.c\", line 178: argument #1 incompatible with prototype: strlen()\nUX:acomp: WARNING: \"mbutils.c\", line 195: argument #1 incompatible with prototype: strlen()\ncc -c -I/usr/local/include -I../../../../src/include -O -K inline -o wchar.o wchar.c\ncc -c -I/usr/local/include -I../../../../src/include -O -K inline -o wstrcmp.o wstrcmp.c\ncc -c -I/usr/local/include -I../../../../src/include -O -K inline -o wstrncmp.o wstrncmp.c\ncc -c -I/usr/local/include -I../../../../src/include -O -K inline -o big5.o big5.c\n/bin/ld -r -o SUBSYS.o common.o conv.o mbutils.o wchar.o wstrcmp.o wstrncmp.o big5.o\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/utils/mb'\n/bin/ld -r -o SUBSYS.o fmgrtab.o adt/SUBSYS.o cache/SUBSYS.o error/SUBSYS.o fmgr/SUBSYS.o hash/SUBSYS.o init/SUBSYS.o misc/SUBSYS.o mmgr/SUBSYS.o sort/SUBSYS.o time/SUBSYS.o mb/SUBSYS.o\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/utils'\ncc -O -K inline -o postgres access/SUBSYS.o bootstrap/SUBSYS.o catalog/SUBSYS.o parser/SUBSYS.o commands/SUBSYS.o executor/SUBSYS.o lib/SUBSYS.o libpq/SUBSYS.o main/SUBSYS.o nodes/SUBSYS.o optimizer/SUBSYS.o port/SUBSYS.o postmaster/SUBSYS.o regex/SUBSYS.o rewrite/SUBSYS.o storage/SUBSYS.o tcop/SUBSYS.o utils/SUBSYS.o -L/usr/local/lib -lz -lgen -lld -lnsl -lsocket -ldl -lm -lreadline -ltermcap -lcurses -Wl,-Bexport\nUndefined\t\t\tfirst referenced\nsymbol \t\t\t in file\nreset_client_encoding tcop/SUBSYS.o\nUX:ld: ERROR: Symbol referencing errors. No output written to postgres\ngmake[2]: *** [postgres] Error 1\ngmake[2]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend'\ngmake[1]: *** [all] Error 2\ngmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/src'\ngmake: *** [all] Error 2\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Thu, 26 Oct 2000 07:07:53 -0500", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": false, "msg_subject": "--enable-multibyte dies (UnixWare 7.1.1)/Current Sources" }, { "msg_contents": "Larry Rosenman <[email protected]> writes:\n> Tried --enable-multibyte for grins. Bad Move. \n\n> Undefined\t\t\tfirst referenced\n> symbol \t\t\t in file\n> reset_client_encoding tcop/SUBSYS.o\n\nOoops, I guess I broke that yesterday :-(. Will fix.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 26 Oct 2000 10:51:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: --enable-multibyte dies (UnixWare 7.1.1)/Current Sources " }, { "msg_contents": "Tom Lane wrote:\n >Larry Rosenman <[email protected]> writes:\n >> Tried --enable-multibyte for grins. Bad Move. \n >\n >> Undefined\t\t\tfirst referenced\n >> symbol \t\t\t in file\n >> reset_client_encoding tcop/SUBSYS.o\n >\n >Ooops, I guess I broke that yesterday :-(. 
Will fix.\n\nIt hit me as well (on Linux).\n\nreset_client_encoding is wrongly declared as static in \nsrc/backend/commands/variable.c\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Whosoever therefore shall be ashamed of me and of my \n words in this adulterous and sinful generation; of him\n also shall the Son of man be ashamed, when he cometh \n in the glory of his Father with the holy angels.\" \n Mark 8:38 \n\n\n", "msg_date": "Thu, 26 Oct 2000 22:17:49 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: --enable-multibyte dies (UnixWare 7.1.1)/Current\n Sources" }, { "msg_contents": "ive got the backend stuff to compile on sco with the sdk had to add\n-lsocket \nto get rid of unresolved var gethostbyaddress. made it as far as\ncompiling \nepcg compiles but fails with unresolved var\nnocachegetattr in pgc.o\nis this a yacc/lex issue if so what would be min version requirements\nfor \nbison/flex replacments, easiest to port to sco 5.0.5\n\n-- \nMy opinions are my own and not that of my employer even if I am self\nemployed\n", "msg_date": "Thu, 30 Nov 2000 07:04:34 -0600", "msg_from": "\"Arno A. Karner\" <[email protected]>", "msg_from_op": false, "msg_subject": "compiling pg 7.0.3 on sco 5.0.5" }, { "msg_contents": "\"Arno A. Karner\" <[email protected]> writes:\n> epcg compiles but fails with unresolved var\n> nocachegetattr in pgc.o\n\nThis is a header bug (there's a backend header file that some bright\nsoul put a static function declaration into :-( ... and the function\ncan't link outside the backend ... and ecpg includes that header,\neven though it has no use for the particular function).\n\nI'd suggest trying to remove the #define DISABLE_COMPLEX_MACRO from\nport/sco.h. If it compiles and passes regress tests that way, you're\nbetter off without the #define anyhow.\n\nThere was another discussion about this on pghackers just recently...\nsee the archives.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 30 Nov 2000 10:15:31 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: compiling pg 7.0.3 on sco 5.0.5 " }, { "msg_contents": "Tom Lane wrote:\n> This is a header bug (there's a backend header file that some bright\n> soul put a static function declaration into :-( ... and the function\n\nActually, it's a static function, not a declaration. The DISABLE_COMPLEX_MACRO\ndefinition was originally put in to work around a macro size limitation of the \nUnixWare 2.1 C compiler (and later the SCO UDK (Universal Development Kit)). \nIf the gnu C compiler is being used it should not be defined. The function \nused to replace the macro was placed in the header and defined as static so \nthat the UnixWare compiler would compile the function in-line where ever it \nwas used.\n\n> can't link outside the backend ... and ecpg includes that header,\n> even though it has no use for the particular function).\n> \n> I'd suggest trying to remove the #define DISABLE_COMPLEX_MACRO from\n> port/sco.h. If it compiles and passes regress tests that way, you're\n> better off without the #define anyhow.\n\n-- \n____ | Billy G. 
Allie | Domain....: [email protected]\n| /| | 7436 Hartwell | Compuserve: 76337,2061\n|-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n|/ |LLIE | (313) 582-1540 |", "msg_date": "Sun, 03 Dec 2000 02:25:26 -0500", "msg_from": "\"Billy G. Allie\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: compiling pg 7.0.3 on sco 5.0.5 " }, { "msg_contents": "\"Billy G. Allie\" <[email protected]> writes:\n> ... The DISABLE_COMPLEX_MACRO definition was originally put in to work\n> around a macro size limitation of the UnixWare 2.1 C compiler (and\n> later the SCO UDK (Universal Development Kit)). If the gnu C compiler\n> is being used it should not be defined.\n\nHm. Is anyone likely to still be using a version of that compiler that\nstill has such limitations?\n\nI ask because we recently pulled \"#define DISABLE_COMPLEX_MACRO\" from\nport/sco.h, on the grounds that various people were seeing more harm\nthan good from it. But I'm suddenly wondering whether those people\nmight've been using gcc. I wonder if\n\n\t#ifndef __GNUC__\n\t#define DISABLE_COMPLEX_MACRO\n\t#endif\n\nin port/sco.h would be the smart way to go.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 04 Dec 2000 01:00:30 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: compiling pg 7.0.3 on sco 5.0.5 " }, { "msg_contents": "* Tom Lane <[email protected]> [001204 09:27]:\n> \"Billy G. Allie\" <[email protected]> writes:\n> > ... The DISABLE_COMPLEX_MACRO definition was originally put in to work\n> > around a macro size limitation of the UnixWare 2.1 C compiler (and\n> > later the SCO UDK (Universal Development Kit)). If the gnu C compiler\n> > is being used it should not be defined.\n> \n> Hm. Is anyone likely to still be using a version of that compiler that\n> still has such limitations?\n> \n> I ask because we recently pulled \"#define DISABLE_COMPLEX_MACRO\" from\n> port/sco.h, on the grounds that various people were seeing more harm\n> than good from it. But I'm suddenly wondering whether those people\n> might've been using gcc. I wonder if\n> \n> \t#ifndef __GNUC__\n> \t#define DISABLE_COMPLEX_MACRO\n> \t#endif\n> \n> in port/sco.h would be the smart way to go.\nBased on my running both CURRENT UDK and GCC on my UnixWare 7 boxes\nwith CURRENT sources, I think we may need to see if anyone complains. \n\nLER\n> \n> \t\t\tregards, tom lane\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Mon, 4 Dec 2000 09:33:58 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: compiling pg 7.0.3 on sco 5.0.5" }, { "msg_contents": "Tom Lane wrote:\n\n> \"Billy G. Allie\" <[email protected]> writes:\n> \n>> ... The DISABLE_COMPLEX_MACRO definition was originally put in to work\n>> around a macro size limitation of the UnixWare 2.1 C compiler (and\n>> later the SCO UDK (Universal Development Kit)). If the gnu C compiler\n>> is being used it should not be defined.\n> \n> \n> Hm. Is anyone likely to still be using a version of that compiler that\n> still has such limitations?\n> \n> I ask because we recently pulled \"#define DISABLE_COMPLEX_MACRO\" from\n> port/sco.h, on the grounds that various people were seeing more harm\n> than good from it. But I'm suddenly wondering whether those people\n> might've been using gcc. 
I wonder if\n> \n> \t#ifndef __GNUC__\n> \t#define DISABLE_COMPLEX_MACRO\n> \t#endif\n> \n> in port/sco.h would be the smart way to go.\n> \n> \t\t\tregards, tom lane\n\nWell I recompilied with the stock cc shipped in the SCO development \npackage for OpenServer 5. It was released in 97'.\n\n", "msg_date": "Mon, 04 Dec 2000 10:36:35 -0500", "msg_from": "Dave Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: compiling pg 7.0.3 on sco 5.0.5" }, { "msg_contents": "Tom Lane writes:\n\n> I ask because we recently pulled \"#define DISABLE_COMPLEX_MACRO\" from\n> port/sco.h, on the grounds that various people were seeing more harm\n> than good from it. But I'm suddenly wondering whether those people\n> might've been using gcc.\n\nWe can be fairly certain that they weren't, unless GCC started accepting\nSCO's compiler flags (or someone altered the compiler flags and filed a\n*very* incomplete bug report).\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Mon, 4 Dec 2000 18:31:17 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: compiling pg 7.0.3 on sco 5.0.5 " }, { "msg_contents": "In the creation below, the column parent_id has ended up with a NOT NULL\nconstraint that I didn't ask for and don't want.\n\nThis is 7.1, updated today from cvs.\n\n=======================================================================\n\n[... other tables created ...]\n\nCREATE TABLE person\n(\n ptype SMALLINT,\n id CHAR(10),\n name TEXT NOT NULL,\n address INTEGER,\n salutation TEXT DEFAULT 'Dear Sir',\n envelope TEXT,\n email TEXT,\n www TEXT,\n CONSTRAINT person_ptype CHECK (ptype >= 0 AND ptype <= 8)\n,\nPRIMARY KEY (id),\nFOREIGN KEY (id, address) REFERENCES address(person, id)\n ON UPDATE CASCADE\n ON DELETE RESTRICT\n)\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'person_pkey' for \ntable 'person'\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nCREATE\n\n[... other tables created ...]\n\nCREATE TABLE organisation\n(\n contact CHAR(10) REFERENCES individual (id)\n ON UPDATE CASCADE\n ON DELETE NO ACTION,\n structure CHAR(1) CHECK (structure='L' OR\n structure='C' OR\n structure='U' OR\n structure='O'),\n department TEXT,\n parent_id CHAR(10),\n CONSTRAINT dept_parent CHECK ((department IS NULL AND parent_id IS NULL) OR\n (department > '' AND parent_id > '')),\n CONSTRAINT organisation_ptype CHECK ((ptype >= 2 AND ptype <=4) OR ptype = \n8)\n,\nPRIMARY KEY (id),\nFOREIGN KEY (id, address) REFERENCES address(person, id)\n ON UPDATE CASCADE\n ON DELETE RESTRICT\n)\n INHERITS (person)\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index \n'organisation_pkey' for table 'organisation'\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nCREATE\n\n[... 
skip various COMMENT declarations ...]\n\n\\d+ organisation\n Table \"organisation\"\n Attribute | Type | Modifier | Description\n------------+---------------+--------------------+-----------------------------\n-\n ptype | smallint | |\n id | character(10) | not null | Identifier\n name | text | not null | Name\n address | integer | | Primary address id\n salutation | text | default 'Dear Sir' | Salutation in a letter\n envelope | text | | Form of name on envelope\n email | text | | Email address\n www | text | | Web URL\n contact | character(10) | | Id of primary contact person\n structure | character(1) | | Legal structure code\n department | text | | Name of this department\n parent_id | character(10) | not null | Parent organisation id\nIndex: organisation_pkey\nConstraints: ((ptype >= 0) AND (ptype <= 8))\n ((((structure = 'L'::bpchar) OR (structure = 'C'::bpchar)) OR \n(structure = 'U'::bpchar)) OR (structure = 'O'::bpchar))\n (((department ISNULL) AND (parent_id ISNULL)) OR (((department > \n''::text) AND (parent_id NOTNULL)) AND (parent_id > ''::bpchar)))\n (((ptype >= 2) AND (ptype <= 4)) OR (ptype = 8))\n\n=======================================================================\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"The fear of the LORD is the instruction of wisdom, and\n before honour is humility.\" Proverbs 15:33 \n\n\n", "msg_date": "Fri, 15 Dec 2000 17:53:13 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "7.1 (current) unwanted NOT NULL constraint inserted" }, { "msg_contents": "\"Oliver Elphick\" wrote:\n >In the creation below, the column parent_id has ended up with a NOT NULL\n >constraint that I didn't ask for and don't want.\n \nIf I change the order of the columns, the NOT NULL constraint always ends\nup on the last column.\n\nThis is not the only table to be affected.\n\nEvery table that is affected has the last column incorrectly set to\nNOT NULL. Every such table is inherited; however, not every inherited\ntable is affected.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"The fear of the LORD is the instruction of wisdom, and\n before honour is humility.\" Proverbs 15:33 \n\n\n", "msg_date": "Fri, 15 Dec 2000 18:43:01 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "7.1 (current) unwanted NOT NULL constraint inserted (more)" }, { "msg_contents": "I can't reproduce this --- I get\n\nConstraints: ((ptype >= 0) AND (ptype <= 8))\n ((((structure = 'L'::bpchar) OR (structure = 'C'::bpchar)) OR (structure = 'U'::bpchar)) OR (structure = 'O'::bpchar))\n (((department ISNULL) AND (parent_id ISNULL)) OR ((department > ''::text) AND (parent_id > ''::bpchar)))\n (((ptype >= 2) AND (ptype <= 4)) OR (ptype = 8))\n\nHowever, I had to guess about the referenced tables, and possibly I\nguessed wrong. 
Could you supply their declarations too?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 15 Dec 2000 15:41:54 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.1 (current) unwanted NOT NULL constraint inserted " }, { "msg_contents": "OK, I see the problem. You have:\n\nCREATE TABLE person (\n id CHAR(10)\n);\n\nCREATE TABLE organisation (\n ...,\nPRIMARY KEY (id)\n)\n INHERITS (person);\n\nie, a PRIMARY KEY declaration on an inherited column. Normally a\nPRIMARY KEY declaration causes the key column to become marked\nNOT NULL --- but if the key column is an inherited one then the\ncode misapplies the mark to the last non-inherited column ... or\ncoredumps if there are no non-inherited columns :-(. See line 995\nin parse/analyze.c.\n\nWhile it's easy enough to avoid the mis-marking of the last column,\ncausing the right thing to happen instead is much less easy. What\nwe really want is for the key column to be marked NOT NULL,\nbut during analyze.c there isn't a set of ColumnDefs for the inherited\ncolumns, and so there's no place to put the mark.\n\nShort of a major restructuring of inherited-column creation, I see\nno good solution to this. I see two bad solutions:\n\n1. Require that the referenced column be marked NOT NULL already,\nso that the constraint will be inherited properly from the parent.\nIn other words you couldn't say PRIMARY KEY for an inherited column\nunless it is NOT NULL (or a fortiori, PRIMARY KEY) in the parent table.\n\n2. Do nothing, in effect silently dropping the NOT NULL constraint\nfor such a column. (Actually we don't have to be silent about it;\nwe could emit a NOTICE when the parent doesn't have NOT NULL.)\n\nIMHO, #1 is a little less bad, but I'm not firmly committed to it.\nComments anyone?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 15 Dec 2000 16:56:53 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.1 (current) unwanted NOT NULL constraint inserted " }, { "msg_contents": "Tom Lane wrote:\n...\n >Short of a major restructuring of inherited-column creation, I see\n >no good solution to this. I see two bad solutions:\n >\n >1. Require that the referenced column be marked NOT NULL already,\n >so that the constraint will be inherited properly from the parent.\n >In other words you couldn't say PRIMARY KEY for an inherited column\n >unless it is NOT NULL (or a fortiori, PRIMARY KEY) in the parent table.\n >\n >2. Do nothing, in effect silently dropping the NOT NULL constraint\n >for such a column. (Actually we don't have to be silent about it;\n >we could emit a NOTICE when the parent doesn't have NOT NULL.)\n >\n >IMHO, #1 is a little less bad, but I'm not firmly committed to it.\n >Comments anyone?\n\nIn the absence of properly working inheritance, I would vote for 1. (I\nam only declaring PRIMARY KEY on the inherited column because that\nconstraint doesn't get inherited as (I think) it should.) 
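\n\nFor illustration, a cut-down sketch of what option 1 would require, reusing\nthe table names from the example earlier in this thread (just a sketch of\nthe proposal, not something tested against current sources):\n\n  CREATE TABLE person (\n      id   CHAR(10) NOT NULL,  -- option 1: the parent must already say NOT NULL\n      name TEXT NOT NULL\n  );\n\n  CREATE TABLE organisation (\n      department TEXT,\n      PRIMARY KEY (id)         -- PRIMARY KEY on the inherited column is then allowed\n  ) INHERITS (person);\n\n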
Option 2 would\ngive a wrongly-defined table.\n\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"The fear of the LORD is the instruction of wisdom, and\n before honour is humility.\" Proverbs 15:33 \n\n\n", "msg_date": "Fri, 15 Dec 2000 23:31:41 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 7.1 (current) unwanted NOT NULL constraint inserted " }, { "msg_contents": "Peter, I thought I saw a recent CVS commit from you stating that you now\ncan generate the HISTORY file from the SGML source. Have you tested it?\nIs it being done automatically?\n\nAlso, I would like to get the release dates into the HISTORY file. Do\nyou know an easy way to do that?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 16 Dec 2000 15:15:26 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Generating HISTORY file" }, { "msg_contents": "Bruce Momjian writes:\n\n> Peter, I thought I saw a recent CVS commit from you stating that you now\n> can generate the HISTORY file from the SGML source. Have you tested it?\n\nYes.\n\n> Is it being done automatically?\n\nNo.\n\nIt's basically a semi-simplified way of creating an HTML page with the\nrelease notes and saving it with Netscape. It was more or less a\nby-product of automating the INSTALL build. It's still nothing to get\nexcited about as you would still have to manually polish the resulting\ntext file in some ways.\n\nIf you want to try it out run gmake HISTORY in doc/src/sgml. I've also\ntried lynx and w3m for HTML to text conversions but their results where\ninferior in several ways.\n\n> Also, I would like to get the release dates into the HISTORY file. Do\n> you know an easy way to do that?\n\nI can't think of anything more particular than just writing it in there\nsomewhere.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Wed, 20 Dec 2000 00:12:11 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Generating HISTORY file" }, { "msg_contents": "> Bruce Momjian writes:\n> \n> > Peter, I thought I saw a recent CVS commit from you stating that you now\n> > can generate the HISTORY file from the SGML source. Have you tested it?\n> \n> Yes.\n> \n> > Is it being done automatically?\n> \n> No.\n> \n> It's basically a semi-simplified way of creating an HTML page with the\n> release notes and saving it with Netscape. It was more or less a\n> by-product of automating the INSTALL build. It's still nothing to get\n> excited about as you would still have to manually polish the resulting\n> text file in some ways.\n> \n> If you want to try it out run gmake HISTORY in doc/src/sgml. I've also\n> tried lynx and w3m for HTML to text conversions but their results where\n> inferior in several ways.\n\nOK, same old way I did it long ago.\n\n> \n> > Also, I would like to get the release dates into the HISTORY file. Do\n> > you know an easy way to do that?\n> \n> I can't think of anything more particular than just writing it in there\n> somewhere.\n\nI have it in the SGML. 
I just need it transfered into the HISTORY file.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 19 Dec 2000 22:20:35 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Generating HISTORY file" }, { "msg_contents": "> > I can't think of anything more particular than just writing it in there\n> > somewhere.\n> I have it in the SGML. I just need it transfered into the HISTORY file.\n\nYou might recall Bruce that I've done this with Applixware for the last\nseveral releases...\n\n - Thomas\n", "msg_date": "Wed, 20 Dec 2000 07:17:00 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Generating HISTORY file" }, { "msg_contents": "> > > I can't think of anything more particular than just writing it in there\n> > > somewhere.\n> > I have it in the SGML. I just need it transfered into the HISTORY file.\n> \n> You might recall Bruce that I've done this with Applixware for the last\n> several releases...\n\nSo you regenerate the HISTORY file using Applixware. I thought it\nlooked poor, and I had to re-do it, or am I remembering something else.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 20 Dec 2000 12:41:49 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Generating HISTORY file" }, { "msg_contents": "> > So you regenerate the HISTORY file using Applixware. I thought it\n> > looked poor, and I had to re-do it, or am I remembering something else.\n> \n> *shrug*\n\nI just looked at the CVS logs and I think you fixed Applix to look\nbetter. Please update HISTORY as part of your documentation changes. \nThanks. I added release dates to release.sgml, so it should have the\ndates when you are done.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 20 Dec 2000 13:02:10 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Generating HISTORY file" }, { "msg_contents": "> So you regenerate the HISTORY file using Applixware. 
I thought it\n> looked poor, and I had to re-do it, or am I remembering something else.\n\n*shrug*\n\n - Thomas\n", "msg_date": "Wed, 20 Dec 2000 18:18:01 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Generating HISTORY file" }, { "msg_contents": "Some SQL92 functionality is missing from the BIT and VARBIT types.\n\nIt should be possible to enter hexadecimal values as:\n\n B'[<bit>...]'[{<separator>...'[<bit...]'}...]\n X'[<hexdigit>...]'[{<separator>...'[<hexdigit...]'}...]\n\n(Cannan and Otten: SQL - The Standard Handbook, p.38)\n\nbut the hexadeximal form is not accepted.\n\nFor example:\n\nbray=# \\d junk\n Table \"junk\"\n Attribute | Type | Modifier \n-----------+--------------+----------\n id | character(4) | not null\n flag1 | bit(1) | \n flags | bit(8) | \nIndex: junk_pkey\n\nbray=# insert into junk values ('BBBB',B'0',X'FF');\nERROR: Attribute 'flags' is of type 'bit' but expression is of type 'int4'\n\tYou will need to rewrite or cast the expression\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"For a child will be born to us, a son will be given to\n us; And the government will rest on His shoulders; And\n His name will be called Wonderful Counsellor, Mighty \n God, Eternal Father, Prince of Peace.\" \n Isaiah 9:6 \n\n\n", "msg_date": "Thu, 21 Dec 2000 21:33:51 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "(7.1) BIT datatype" }, { "msg_contents": "Oliver Elphick writes:\n\n> Some SQL92 functionality is missing from the BIT and VARBIT types.\n>\n> It should be possible to enter hexadecimal values as:\n>\n> B'[<bit>...]'[{<separator>...'[<bit...]'}...]\n> X'[<hexdigit>...]'[{<separator>...'[<hexdigit...]'}...]\n>\n> (Cannan and Otten: SQL - The Standard Handbook, p.38)\n>\n> but the hexadeximal form is not accepted.\n\nThis was omitted because in SQL99 the X'1001' notation also serves as a\nbinary large object value under certain circumstances. Unfortunately,\nit's not exactly known what those circumstances are.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Thu, 21 Dec 2000 23:11:25 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: (7.1) BIT datatype" }, { "msg_contents": "Peter Eisentraut wrote:\n >Oliver Elphick writes:\n >\n >> Some SQL92 functionality is missing from the BIT and VARBIT types.\n >>\n >> It should be possible to enter hexadecimal values as:\n...\n >This was omitted because in SQL99 the X'1001' notation also serves as a\n >binary large object value under certain circumstances. 
Unfortunately,\n >it's not exactly known what those circumstances are.\n\nIt seems a pity that we don't support the SQL92 version at least.\nHow much more convenient it is to enter X'FE1D' than B'1111111000011101'\n(did I get that right?)\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"For a child will be born to us, a son will be given to\n us; And the government will rest on His shoulders; And\n His name will be called Wonderful Counsellor, Mighty \n God, Eternal Father, Prince of Peace.\" \n Isaiah 9:6 \n\n\n", "msg_date": "Thu, 21 Dec 2000 22:29:45 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: (7.1) BIT datatype " }, { "msg_contents": "> Some SQL92 functionality is missing from the BIT and VARBIT types.\n>\n> It should be possible to enter hexadecimal values as:\n>\n> B'[<bit>...]'[{<separator>...'[<bit...]'}...]\n> X'[<hexdigit>...]'[{<separator>...'[<hexdigit...]'}...]\n>\n> (Cannan and Otten: SQL - The Standard Handbook, p.38)\n>\n> but the hexadeximal form is not accepted.\n\n\nI have been using the BIT and VARBIT types in Postgres 7.0.3 (undocumented I\nbelieve), and I note that the _input_ format is as follows:\n\nupdate blah set flags='b101001'; -- Binary\nupdate blah set flags='xff45'; -- Hex\n\nBut the _output_ format (for varbit) is always:\n\nB'1010110'\n\nHas any of this changed in 7.1?\n\nChris\n\n", "msg_date": "Fri, 22 Dec 2000 09:18:47 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: (7.1) BIT datatype" }, { "msg_contents": "\"Christopher Kings-Lynne\" wrote:\n >I have been using the BIT and VARBIT types in Postgres 7.0.3 (undocumented I\n >believe), and I note that the _input_ format is as follows:\n >\n >update blah set flags='b101001'; -- Binary\n\nThat is still accepted.\n\n >update blah set flags='xff45'; -- Hex\n \nThat is not.\n\n >But the _output_ format (for varbit) is always:\n >\n >B'1010110'\n\n\nbray=# select * from junk;\n id | flag1 | flags | flags2 \n------+-------+----------+--------\n AAAA | 1 | 11000101 | \n BBBB | 0 | 00111010 | \n cccc | 0 | 01101100 | 11001\n dddd | 0 | 01100000 | \n\n >Has any of this changed in 7.1?\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"And there were in the same country shepherds abiding \n in the field, keeping watch over their flock by night.\n And, lo, the angel of the Lord came upon them, and the\n glory of the Lord shone around them; and they were \n sore afraid. And the angel said unto them, \" Fear not;\n for behold I bring you good tidings of great joy which\n shall be to all people. For unto you is born this day \n in the city of David a Saviour, which is Christ the \n Lord.\" Luke 2:8-11 \n\n\n", "msg_date": "Fri, 22 Dec 2000 07:15:28 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: (7.1) BIT datatype " }, { "msg_contents": "Hi,\n I saw the thread from a few days ago about Linux/Alpha and 7.1. 
I\nbelieve I'm seeing the same problems with DEC/Alpha (Tru64Unix 4.0D).\n\nI noticed the following in the postmaster.log, which occurs, as the\nLinux/Alpha bug report states, during the misc regression test.\n\n DEBUG: copy: line 293, XLogWrite: had to create new log file - you probably should do checkpoints more often\n Server process (pid 24954) exited with status 139 at Fri Dec 22 17:15:48 2000\n Terminating any active server processes...\n Server processes were terminated at Fri Dec 22 17:15:48 2000\n Reinitializing shared memory and semaphores\n DEBUG: starting up\n DEBUG: database system was interrupted at 2000-12-22 17:15:47\n DEBUG: CheckPoint record at (0, 316624)\n DEBUG: Redo record at (0, 316624); Undo record at (0, 0); Shutdown TRUE\n\nthe full src/test/regress/log/postmaster.log can be snagged from\nhttp://www.rcfile.org/postmaster.log\n\nin addition to this, compiling on DEC/Alpha with gcc does not work,\nwithout some shameful hackery :) as __INTERLOCKED_TESTBITSS_QUAD() is \na builtin that gcc does not know about. The DEC cc builds pg properly.\neither way pg is built the test results are much the same, esp the\nFAILURE of misc regression test.\n\nIf there is anything else I can do to help get this working, please\nlet me know.\n\n Brent Verner\n", "msg_date": "Fri, 22 Dec 2000 20:27:10 -0500", "msg_from": "Brent Verner <[email protected]>", "msg_from_op": false, "msg_subject": "7.1 on DEC/Alpha" }, { "msg_contents": "On 22 Dec 2000 at 20:27 (-0500), Brent Verner wrote:\n\nobservation:\n\n commenting out the queries with 'FROM person* p' causes the misc\n regression test to pass.\n\n SELECT p.name, p.hobbies.name FROM person* p;\n\n Brent\n\n| Hi,\n| I saw the thread from a few days ago about Linux/Alpha and 7.1. I\n| believe I'm seeing the same problems with DEC/Alpha (Tru64Unix 4.0D).\n| \n| I noticed the following in the postmaster.log, which occurs, as the\n| Linux/Alpha bug report states, during the misc regression test.\n| \n| DEBUG: copy: line 293, XLogWrite: had to create new log file - you probably should do checkpoints more often\n| Server process (pid 24954) exited with status 139 at Fri Dec 22 17:15:48 2000\n| Terminating any active server processes...\n| Server processes were terminated at Fri Dec 22 17:15:48 2000\n| Reinitializing shared memory and semaphores\n| DEBUG: starting up\n| DEBUG: database system was interrupted at 2000-12-22 17:15:47\n| DEBUG: CheckPoint record at (0, 316624)\n| DEBUG: Redo record at (0, 316624); Undo record at (0, 0); Shutdown TRUE\n| \n| the full src/test/regress/log/postmaster.log can be snagged from\n| http://www.rcfile.org/postmaster.log\n| \n| in addition to this, compiling on DEC/Alpha with gcc does not work,\n| without some shameful hackery :) as __INTERLOCKED_TESTBITSS_QUAD() is \n| a builtin that gcc does not know about. The DEC cc builds pg properly.\n| either way pg is built the test results are much the same, esp the\n| FAILURE of misc regression test.\n| \n| If there is anything else I can do to help get this working, please\n| let me know.\n| \n| Brent Verner\n", "msg_date": "Fri, 22 Dec 2000 21:58:35 -0500", "msg_from": "Brent Verner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.1 on DEC/Alpha" }, { "msg_contents": "On 22 Dec 2000 at 21:58 (-0500), Brent Verner wrote:\n| On 22 Dec 2000 at 20:27 (-0500), Brent Verner wrote:\n| \n| observation:\n| \n| commenting out the queries with 'FROM person* p' causes the misc\n| regression test to pass.\n\nthat's not what I meant to say. 
the misc test still FAILS, but it \nno longer causes pg to die.\n\n b\n", "msg_date": "Fri, 22 Dec 2000 22:02:51 -0500", "msg_from": "Brent Verner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.1 on DEC/Alpha" }, { "msg_contents": "\nhere's a post-mortem.\n\n#0 0x1200ce58c in ExecEvalFieldSelect (fselect=0x1401615c0, \n econtext=0x14016a030, isNull=0x14016ab31 \"\", isDone=0x0) at execQual.c:1096\n#1 0x1200ceafc in ExecEvalExpr (expression=0x1401615f0, econtext=0x0, \n isNull=0x14016ab31 \"\", isDone=0x0) at execQual.c:1234\n#2 0x1200cdd74 in ExecEvalFuncArgs (fcache=0x14016aa70, argList=0x14016a030, \n econtext=0x14016a030) at execQual.c:603\n#3 0x1200cde54 in ExecMakeFunctionResult (fcache=0x14016aa70, \n arguments=0x1401616d0, econtext=0x14016a030, isNull=0x11fffdf88 \"\", \n isDone=0x0) at execQual.c:654\n#4 0x1200ce224 in ExecEvalOper (opClause=0x1401615f0, econtext=0x14016a030, \n isNull=0x11fffdf88 \"\", isDone=0x0) at execQual.c:841\n#5 0x1200cea24 in ExecEvalExpr (expression=0x1401615f0, econtext=0x0, \n isNull=0x14016ab31 \"\", isDone=0x0) at execQual.c:1204\n#6 0x1200cec54 in ExecQual (qual=0x14016a1a0, econtext=0x14016a030)\n at execQual.c:1356\n#7 0x1200cf2a8 in ExecScan (node=0x14016a1d0, accessMtd=0x1200d8320 <SeqNext>)\n at execScan.c:129\n#8 0x1200d846c in ExecSeqScan (node=0x1401615f0) at nodeSeqscan.c:138\n#9 0x1200cc280 in ExecProcNode (node=0x14016a1d0, parent=0x14016a1d0)\n at execProcnode.c:284\n#10 0x1200ca8c0 in ExecutePlan (estate=0x14016a310, plan=0x14016a1d0, \n numberTuples=1, direction=ForwardScanDirection, destfunc=0x140020c20)\n at execMain.c:959\n#11 0x1200c9b50 in ExecutorRun (queryDesc=0x1401615f0, estate=0x14016a310, \n count=0) at execMain.c:199\n#12 0x1200d1140 in postquel_getnext (es=0x140160630) at functions.c:324\n#13 0x1200d1300 in postquel_execute (es=0x140160630, fcinfo=0x1401604a0, \n fcache=0x140160590) at functions.c:417\n#14 0x1200d14d8 in fmgr_sql (fcinfo=0x1401604a0) at functions.c:542\n#15 0x1200ce09c in ExecMakeFunctionResult (fcache=0x140160480, \n arguments=0x14015e810, econtext=0x140119cd0, isNull=0x140160350 \"\", \n isDone=0x11fffe258) at execQual.c:712\n#16 0x1200ce2c4 in ExecEvalFunc (funcClause=0x1401615f0, econtext=0x140119cd0, \n isNull=0x140160350 \"\", isDone=0x11fffe258) at execQual.c:883\n#17 0x1200cea3c in ExecEvalExpr (expression=0x1401615f0, econtext=0x0, \n isNull=0x14016ab31 \"\", isDone=0x0) at execQual.c:1208\n#18 0x1200c8e10 in ExecEvalIter (iterNode=0x1401615f0, econtext=0x14016a030, \n isNull=0x1 <Error reading address 0x1: Invalid argument>, isDone=0x0)\n at execFlatten.c:56\n#19 0x1200ce9b0 in ExecEvalExpr (expression=0x1401615f0, econtext=0x0, \n isNull=0x14016ab31 \"\", isDone=0x0) at execQual.c:1183\n#20 0x1200cdd74 in ExecEvalFuncArgs (fcache=0x140160290, argList=0x14016a030, \n econtext=0x140119cd0) at execQual.c:603\n#21 0x1200cde54 in ExecMakeFunctionResult (fcache=0x140160290, \n arguments=0x14015e840, econtext=0x140119cd0, isNull=0x11fffe3a0 \"\", \n isDone=0x11fffe468) at execQual.c:654\n#22 0x1200ce2c4 in ExecEvalFunc (funcClause=0x1401615f0, econtext=0x140119cd0, \n isNull=0x11fffe3a0 \"\", isDone=0x11fffe468) at execQual.c:883\n#23 0x1200cea3c in ExecEvalExpr (expression=0x1401615f0, econtext=0x0, \n isNull=0x14016ab31 \"\", isDone=0x0) at execQual.c:1208\n#24 0x1200ce574 in ExecEvalFieldSelect (fselect=0x14015e720, \n econtext=0x14016a030, isNull=0x11fffe3a0 \"\", isDone=0x0) at execQual.c:1091\n#25 0x1200ceafc in ExecEvalExpr (expression=0x1401615f0, econtext=0x0, \n 
isNull=0x14016ab31 \"\", isDone=0x0) at execQual.c:1234\n#26 0x1200c8e10 in ExecEvalIter (iterNode=0x1401615f0, econtext=0x14016a030, \n isNull=0x1 <Error reading address 0x1: Invalid argument>, isDone=0x0)\n at execFlatten.c:56\n#27 0x1200ce9b0 in ExecEvalExpr (expression=0x1401615f0, econtext=0x0, \n isNull=0x14016ab31 \"\", isDone=0x0) at execQual.c:1183\n#28 0x1200ceea4 in ExecTargetList (targetlist=0x14015e870, \n targettype=0x140160000, values=0x140160260, econtext=0x140119cd0, \n isDone=0x11fffe5a8) at execQual.c:1528\n#29 0x1200cf1a8 in ExecProject (projInfo=0x0, isDone=0x1) at execQual.c:1751\n#30 0x1200d8074 in ExecResult (node=0x14015e5b0) at nodeResult.c:167\n#31 0x1200cc238 in ExecProcNode (node=0x14015e5b0, parent=0x14015e5b0)\n at execProcnode.c:272\n#32 0x1200ca8c0 in ExecutePlan (estate=0x14015eab0, plan=0x14015e5b0, \n numberTuples=0, direction=ForwardScanDirection, destfunc=0x1401603a0)\n at execMain.c:959\n#33 0x1200c9b50 in ExecutorRun (queryDesc=0x1401615f0, estate=0x14015eab0, \n count=0) at execMain.c:199\n#34 0x12013e5c0 in ProcessQuery (parsetree=0x14015ea80, plan=0x140160000)\n at pquery.c:305\n#35 0x12013c568 in pg_exec_query_string (\n query_string=0x140115310 \"SELECT p.hobbies.equipment.name, p.hobbies.name, p.name FROM person* p;\", parse_context=0x1400c5c60) at postgres.c:817\n#36 0x12013dd10 in PostgresMain (argv=0x11fffe9a8, real_argv=0x11ffffae8, \n username=0x1400b72f9 \"pgadmin\") at postgres.c:1827\n#37 0x12011aef0 in DoBackend (port=0x1400b7080) at postmaster.c:2021\n#38 0x12011a888 in BackendStartup (port=0x1400b7080) at postmaster.c:1798\n#39 0x12011938c in ServerLoop () at postmaster.c:957\n#40 0x120118c10 in PostmasterMain (argv=0x11ffffae8) at postmaster.c:664\n#41 0x1200e5980 in main (argv=0x11ffffae8) at main.c:138\n\n", "msg_date": "Sat, 23 Dec 2000 23:49:40 -0500", "msg_from": "Brent Verner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.1 on DEC/Alpha" }, { "msg_contents": "Brent Verner <[email protected]> writes:\n> here's a post-mortem.\n\n> #0 0x1200ce58c in ExecEvalFieldSelect (fselect=0x1401615c0, \n> econtext=0x14016a030, isNull=0x14016ab31 \"\", isDone=0x0) at execQual.c:1096\n\nLooks reasonable as far as it goes. Evidently the crash is in the\nheap_getattr macro call at line 1096 of src/backend/executor/execQual.c.\nWe need to look at the data structures that macro uses.\nWhat do you get from\n\np *fselect\n\np *econtext\n\np *resSlot->val\n\np *resSlot->ttc_tupleDescriptor\n\nBTW, if you didn't configure with --enable-cassert, it'd be a good idea\nto go back and try it that way...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 24 Dec 2000 01:00:32 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: 7.1 on DEC/Alpha " }, { "msg_contents": "On 24 Dec 2000 at 01:00 (-0500), Tom Lane wrote:\n| Brent Verner <[email protected]> writes:\n| > here's a post-mortem.\n| \n| > #0 0x1200ce58c in ExecEvalFieldSelect (fselect=0x1401615c0, \n| > econtext=0x14016a030, isNull=0x14016ab31 \"\", isDone=0x0) at execQual.c:1096\n| \n| Looks reasonable as far as it goes. 
Evidently the crash is in the\n| heap_getattr macro call at line 1096 of src/backend/executor/execQual.c.\n| We need to look at the data structures that macro uses.\n| What do you get from\n| \n| p *fselect\n\n$1 = {type = T_FieldSelect, arg = 0x140169d40, fieldnum = 1, resulttype = 25, \n resulttypmod = -1}\n\n| p *econtext\n\n$2 = {type = T_ExprContext, ecxt_scantuple = 0x14016a568, \n ecxt_innertuple = 0x0, ecxt_outertuple = 0x0, \n ecxt_per_query_memory = 0x1400c5df0, ecxt_per_tuple_memory = 0x1400c6670, \n ecxt_param_exec_vals = 0x0, ecxt_param_list_info = 0x140141760, \n ecxt_aggvalues = 0x0, ecxt_aggnulls = 0x0}\n\n| p *resSlot->val\n\nError accessing memory address 0x40141838: Invalid argument.\n \n| p *resSlot->ttc_tupleDescriptor\n\nError accessing memory address 0x40141848: Invalid argument.\n\n\nadditionally:\n\n(gdb) p result\n$4 = 1075058736\n\n(gdb) p *resSlot\nError accessing memory address 0x40141830: Invalid argument.\n\n\n| BTW, if you didn't configure with --enable-cassert, it'd be a good idea\n| to go back and try it that way...\n\nwill reconfig/rebuild shortly.\n\n brent\n", "msg_date": "Sun, 24 Dec 2000 01:13:19 -0500", "msg_from": "Brent Verner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: 7.1 on DEC/Alpha" }, { "msg_contents": "Brent Verner <[email protected]> writes:\n> (gdb) p *resSlot\n> Error accessing memory address 0x40141830: Invalid argument.\n\nOooh. resSlot has been truncated to 32 bits --- judging by the other\nnearby pointer values, it almost certainly should have been 0x140141830.\nNow we have a lead.\n\nI am guessing that the truncation happened somewhere in\nexecutor/functions.c, but don't see it right away...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 24 Dec 2000 01:19:34 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.1 on DEC/Alpha " }, { "msg_contents": "On 24 Dec 2000 at 00:47 (-0500), Tom Lane wrote:\n| \n| > I'll send the patch that allows me to\n| > cleanly build with gcc. right now, s_lock.h does the wrong thing\n| > when compiling on Alpha/OSF with gcc.\n| \n| Roger, we want to build with either.\n\nThe attached patch _seems_ to do the right thing. could someone\nwho knows Alpha assembly check it out (please).\n\nfor more info on Alpha assembly, this link may help.\nhttp://tru64unix.compaq.com/faqs/publications/base_doc/DOCUMENTATION/V40D_HTML/APS31DTE/TITLE.HTM\n\n brent 'who learned too much today'", "msg_date": "Sun, 24 Dec 2000 15:14:19 -0500", "msg_from": "Brent Verner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: 7.1 on DEC/Alpha" }, { "msg_contents": "On 24 Dec 2000 at 01:19 (-0500), Tom Lane wrote:\n| Brent Verner <[email protected]> writes:\n| > (gdb) p *resSlot\n| > Error accessing memory address 0x40141830: Invalid argument.\n| \n| Oooh. resSlot has been truncated to 32 bits --- judging by the other\n| nearby pointer values, it almost certainly should have been 0x140141830.\n| Now we have a lead.\n\nFWIW, saying 'set econtext->ecxt_param_list_info->value 0x14014183' in\ngeb allows the process to not SEGV where it _was_ destined to do so, \nthough it does SEGV in a later return to the function. I've tried to\ndetermine where this value is originating, and where it is subsequently\nmodified, but have not been able to do so. lost in gdb. 
\n\nQ: I tried doing 'watch <address>', but this (appeared) to just hang.\n is there some trick to using 'watch' on addresses that I might be\n overlooking?\n\n| I am guessing that the truncation happened somewhere in\n| executor/functions.c, but don't see it right away...\n\nmore observations WRT sql that blows up postgres on Alpha.\n\nworks:\n SELECT p.hobbies.equipment.name, p.hobbies.name, p.name \n FROM ONLY person p;\n\nbreaks:\n SELECT p.hobbies.equipment.name, p.hobbies.name, p.name \n FROM person p;\n SELECT p.hobbies.equipment.name, p.hobbies.name, p.name \n FROM person* p;\n\nwhatever it is that ONLY causes, avoids the breakage. I've spent the\npast two days in a gdb-hole, going in circles. I just think don't know \nenough (about gdb or postgres) to make any further progress. anyway, \nif someone could tell me what difference the ONLY keyword makes WRT\npg internally, it might help me quit running in circles.\n\nthanks.\n brent\n\n", "msg_date": "Mon, 25 Dec 2000 21:01:06 -0500", "msg_from": "Brent Verner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.1 on DEC/Alpha" }, { "msg_contents": "Brent Verner <[email protected]> writes:\n> more observations WRT sql that blows up postgres on Alpha.\n> works:\n> SELECT p.hobbies.equipment.name, p.hobbies.name, p.name \n> FROM ONLY person p;\n> breaks:\n> SELECT p.hobbies.equipment.name, p.hobbies.name, p.name \n> FROM person p;\n> SELECT p.hobbies.equipment.name, p.hobbies.name, p.name \n> FROM person* p;\n\nOK, I see the problem. The breakage actually is present in 7.0.* and\nprior versions as well, it just doesn't happen to be exposed by the\nregress tests --- until now.\n\nThe trouble is the way that entire-tuple function arguments are handled.\nTuple types are declared in pg_type as being the same size as Oid, ie,\n4 bytes. This reflects situations where a tuple value is represented by\nan Oid reference to a row in a table. (I am not sure whether there is\nany code left that depends on that ... in any case I'm nervous about\nchanging it during beta.) But the expression evaluator's implementation\nof a tuple argument is that the Datum value contains a pointer to a\nTupleTableSlot. This works fine as long as the Datum is just passed\naround as a Datum, but if anyone tries to form a tuple containing that\nDatum, only 4 bytes get stored into the tuple. Result: failure on\nmachines where pointers are wider than 4 bytes.\n\nThe reason this shows up in this particular regression test now, and\nnot before, is that 7.1 does the function evaluations at the top of\nthe Append plan that implements inheritance union, whereas 7.0 did it\nat the bottom. That means that in 7.1, the TupleTableSlot Datum gets\ninserted into a tuple that becomes part of the Append output before\nit gets to the function execution. 7.0 would still show the bug\nunder the right circumstances --- a join would do it, for example.\n\nI think that there may still be cases where an Oid is the correct\nrepresentation of a tuple type; anyway I'm afraid to foreclose that\npossibility. What I'm thinking about doing is setting typmod of\nan entire-tuple function argument to sizeof(Pointer), rather than\nthe default -1, to indicate that a pointer representation is being\nused. Comments, hackers?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 26 Dec 2000 11:26:14 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Tuple-valued datums on Alpha (was Re: 7.1 on DEC/Alpha)" }, { "msg_contents": "I wrote:\n> ... 
What I'm thinking about doing is setting typmod of\n> an entire-tuple function argument to sizeof(Pointer), rather than\n> the default -1, to indicate that a pointer representation is being\n> used. Comments, hackers?\n\nHere is a patch to current sources along this line. I have not\ncommitted it, since I'm not sure it does the job. It doesn't break\nthe regress tests on my machine, but does it fix them on Alphas?\nPlease apply it locally and let me know what you find.\n\n\t\t\tregards, tom lane\n\n\n*** src/backend/parser/parse_func.c.orig\tFri Dec 15 14:22:03 2000\n--- src/backend/parser/parse_func.c\tTue Dec 26 14:17:09 2000\n***************\n*** 442,451 ****\n \n \t\t\t/*\n \t\t\t * for func(relname), the param to the function is the tuple\n! \t\t\t * under consideration. we build a special VarNode to reflect\n \t\t\t * this -- it has varno set to the correct range table entry,\n \t\t\t * but has varattno == 0 to signal that the whole tuple is the\n! \t\t\t * argument.\n \t\t\t */\n \t\t\tif (rte->relname == NULL)\n \t\t\t\telog(ERROR,\n--- 442,453 ----\n \n \t\t\t/*\n \t\t\t * for func(relname), the param to the function is the tuple\n! \t\t\t * under consideration. We build a special VarNode to reflect\n \t\t\t * this -- it has varno set to the correct range table entry,\n \t\t\t * but has varattno == 0 to signal that the whole tuple is the\n! \t\t\t * argument. Also, it has typmod set to sizeof(Pointer) to\n! \t\t\t * signal that the runtime representation will be a pointer\n! \t\t\t * not an Oid.\n \t\t\t */\n \t\t\tif (rte->relname == NULL)\n \t\t\t\telog(ERROR,\n***************\n*** 453,459 ****\n \t\t\ttoid = typenameTypeId(rte->relname);\n \n \t\t\t/* replace it in the arg list */\n! \t\t\tlfirst(i) = makeVar(vnum, 0, toid, -1, sublevels_up);\n \t\t}\n \t\telse if (!attisset)\n \t\t\ttoid = exprType(arg);\n--- 455,465 ----\n \t\t\ttoid = typenameTypeId(rte->relname);\n \n \t\t\t/* replace it in the arg list */\n! \t\t\tlfirst(i) = makeVar(vnum,\n! \t\t\t\t\t\t\t\tInvalidAttrNumber,\n! \t\t\t\t\t\t\t\ttoid,\n! \t\t\t\t\t\t\t\tsizeof(Pointer),\n! \t\t\t\t\t\t\t\tsublevels_up);\n \t\t}\n \t\telse if (!attisset)\n \t\t\ttoid = exprType(arg);\n*** src/backend/access/common/tupdesc.c.orig\tThu Nov 16 17:30:15 2000\n--- src/backend/access/common/tupdesc.c\tTue Dec 26 14:16:06 2000\n***************\n*** 352,358 ****\n \n \tAssertArg(!PointerIsValid(desc->attrs[attributeNumber - 1]));\n \n- \n \t/* ----------------\n \t *\tallocate storage for this attribute\n \t * ----------------\n--- 352,357 ----\n***************\n*** 362,368 ****\n \tdesc->attrs[attributeNumber - 1] = att;\n \n \t/* ----------------\n! \t *\tinitialize some of the attribute fields\n \t * ----------------\n \t */\n \tatt->attrelid = 0;\t\t\t/* dummy value */\n--- 361,367 ----\n \tdesc->attrs[attributeNumber - 1] = att;\n \n \t/* ----------------\n! 
\t *\tinitialize the attribute fields\n \t * ----------------\n \t */\n \tatt->attrelid = 0;\t\t\t/* dummy value */\n***************\n*** 372,378 ****\n \telse\n \t\tMemSet(NameStr(att->attname), 0, NAMEDATALEN);\n \n- \n \tatt->attdispersion = 0;\t\t/* dummy value */\n \tatt->attcacheoff = -1;\n \tatt->atttypmod = typmod;\n--- 371,376 ----\n***************\n*** 414,421 ****\n \t\tatt->atttypid = InvalidOid;\n \t\tatt->attlen = (int16) 0;\n \t\tatt->attbyval = (bool) 0;\n- \t\tatt->attstorage = 'p';\n \t\tatt->attalign = 'i';\n \t\treturn false;\n \t}\n \n--- 412,419 ----\n \t\tatt->atttypid = InvalidOid;\n \t\tatt->attlen = (int16) 0;\n \t\tatt->attbyval = (bool) 0;\n \t\tatt->attalign = 'i';\n+ \t\tatt->attstorage = 'p';\n \t\treturn false;\n \t}\n \n***************\n*** 427,468 ****\n \ttypeForm = (Form_pg_type) GETSTRUCT(tuple);\n \n \tatt->atttypid = tuple->t_data->t_oid;\n- \tatt->attalign = typeForm->typalign;\n \n! \t/* ------------------------\n! \t If this attribute is a set, what is really stored in the\n! \t attribute is the OID of a tuple in the pg_proc catalog.\n! \t The pg_proc tuple contains the query string which defines\n! \t this set - i.e., the query to run to get the set.\n! \t So the atttypid (just assigned above) refers to the type returned\n! \t by this query, but the actual length of this attribute is the\n! \t length (size) of an OID.\n! \n! \t Why not just make the atttypid point to the OID type, instead\n! \t of the type the query returns? Because the executor uses the atttypid\n! \t to tell the front end what type will be returned (in BeginCommand),\n! \t and in the end the type returned will be the result of the query, not\n! \t an OID.\n! \n! \t Why not wait until the return type of the set is known (i.e., the\n! \t recursive call to the executor to execute the set has returned)\n! \t before telling the front end what the return type will be? Because\n! \t the executor is a delicate thing, and making sure that the correct\n! \t order of front-end commands is maintained is messy, especially\n! \t considering that target lists may change as inherited attributes\n! \t are considered, etc. Ugh.\n! \t -----------------------------------------\n! \t */\n \tif (attisset)\n \t{\n \t\tatt->attlen = sizeof(Oid);\n \t\tatt->attbyval = true;\n \t\tatt->attstorage = 'p';\n \t}\n \telse\n \t{\n \t\tatt->attlen = typeForm->typlen;\n \t\tatt->attbyval = typeForm->typbyval;\n \t\tatt->attstorage = typeForm->typstorage;\n \t}\n \n--- 425,487 ----\n \ttypeForm = (Form_pg_type) GETSTRUCT(tuple);\n \n \tatt->atttypid = tuple->t_data->t_oid;\n \n! \t/*------------------------\n! \t * There are a couple of cases where we must override the information\n! \t * stored in pg_type.\n! \t *\n! \t * First: if this attribute is a set, what is really stored in the\n! \t * attribute is the OID of a tuple in the pg_proc catalog.\n! \t * The pg_proc tuple contains the query string which defines\n! \t * this set - i.e., the query to run to get the set.\n! \t * So the atttypid (just assigned above) refers to the type returned\n! \t * by this query, but the actual length of this attribute is the\n! \t * length (size) of an OID.\n! \t *\n! \t * (Why not just make the atttypid point to the OID type, instead\n! \t * of the type the query returns? Because the executor uses the atttypid\n! \t * to tell the front end what type will be returned (in BeginCommand),\n! \t * and in the end the type returned will be the result of the query, not\n! \t * an OID.)\n! \t *\n! 
\t * (Why not wait until the return type of the set is known (i.e., the\n! \t * recursive call to the executor to execute the set has returned)\n! \t * before telling the front end what the return type will be? Because\n! \t * the executor is a delicate thing, and making sure that the correct\n! \t * order of front-end commands is maintained is messy, especially\n! \t * considering that target lists may change as inherited attributes\n! \t * are considered, etc. Ugh.)\n! \t *\n! \t * Second: if we are dealing with a complex type (a tuple type), then\n! \t * pg_type will say that the representation is the same as Oid. But\n! \t * if typmod is sizeof(Pointer) then the internal representation is\n! \t * actually a pointer to a TupleTableSlot, and we have to substitute\n! \t * that information.\n! \t *\n! \t * A set of complex type is first and foremost a set, so its\n! \t * representation is Oid not pointer. So, test that case first.\n! \t *-----------------------------------------\n! \t */\n \tif (attisset)\n \t{\n \t\tatt->attlen = sizeof(Oid);\n \t\tatt->attbyval = true;\n+ \t\tatt->attalign = 'i';\n+ \t\tatt->attstorage = 'p';\n+ \t}\n+ \telse if (typeForm->typtype == 'c' && typmod == sizeof(Pointer))\n+ \t{\n+ \t\tatt->attlen = sizeof(Pointer);\n+ \t\tatt->attbyval = true;\n+ \t\tatt->attalign = 'd';\t/* kluge to work with 8-byte pointers */\n+ \t\t/* XXX ought to have a separate attalign value for pointers ... */\n \t\tatt->attstorage = 'p';\n \t}\n \telse\n \t{\n \t\tatt->attlen = typeForm->typlen;\n \t\tatt->attbyval = typeForm->typbyval;\n+ \t\tatt->attalign = typeForm->typalign;\n \t\tatt->attstorage = typeForm->typstorage;\n \t}\n \n*** src/backend/executor/execTuples.c.orig\tSat Nov 11 19:36:57 2000\n--- src/backend/executor/execTuples.c\tTue Dec 26 14:17:30 2000\n***************\n*** 835,841 ****\n \treturn tupType;\n }\n \n! /*\n TupleDesc\n ExecCopyTupType(TupleDesc td, int natts)\n {\n--- 835,841 ----\n \treturn tupType;\n }\n \n! #ifdef NOT_USED\n TupleDesc\n ExecCopyTupType(TupleDesc td, int natts)\n {\n***************\n*** 852,881 ****\n \t\t}\n \treturn newTd;\n }\n! */\n \n /* ----------------------------------------------------------------\n *\t\tExecTypeFromTL\n *\n *\t\tCurrently there are about 4 different places where we create\n *\t\tTupleDescriptors. They should all be merged, or perhaps\n *\t\tbe rewritten to call BuildDesc().\n- *\n- *\told comments\n- *\t\tForms attribute type info from the target list in the node.\n- *\t\tIt assumes all domains are individually specified in the target list.\n- *\t\tIt fails if the target list contains something like Emp.all\n- *\t\twhich represents all the attributes from EMP relation.\n- *\n- *\t\tConditions:\n- *\t\t\tThe inner and outer subtrees should be initialized because it\n- *\t\t\tmight be necessary to know the type infos of the subtrees.\n * ----------------------------------------------------------------\n */\n TupleDesc\n ExecTypeFromTL(List *targetList)\n {\n! \tList\t *tlcdr;\n \tTupleDesc\ttypeInfo;\n \tResdom\t *resdom;\n \tOid\t\t\trestype;\n--- 852,874 ----\n \t\t}\n \treturn newTd;\n }\n! #endif\n \n /* ----------------------------------------------------------------\n *\t\tExecTypeFromTL\n *\n+ *\t\tGenerate a tuple descriptor for the result tuple of a targetlist.\n+ *\t\tNote that resjunk columns, if any, are included in the result.\n+ *\n *\t\tCurrently there are about 4 different places where we create\n *\t\tTupleDescriptors. 
They should all be merged, or perhaps\n *\t\tbe rewritten to call BuildDesc().\n * ----------------------------------------------------------------\n */\n TupleDesc\n ExecTypeFromTL(List *targetList)\n {\n! \tList\t *tlitem;\n \tTupleDesc\ttypeInfo;\n \tResdom\t *resdom;\n \tOid\t\t\trestype;\n***************\n*** 897,910 ****\n \ttypeInfo = CreateTemplateTupleDesc(len);\n \n \t/* ----------------\n! \t * notes: get resdom from (resdom expr)\n! \t *\t\t get_typbyval comes from src/lib/l-lisp/lsyscache.c\n \t * ----------------\n \t */\n! \ttlcdr = targetList;\n! \twhile (tlcdr != NIL)\n \t{\n! \t\tTargetEntry *tle = lfirst(tlcdr);\n \n \t\tif (tle->resdom != NULL)\n \t\t{\n--- 890,901 ----\n \ttypeInfo = CreateTemplateTupleDesc(len);\n \n \t/* ----------------\n! \t * scan list, generate type info for each entry\n \t * ----------------\n \t */\n! \tforeach(tlitem, targetList)\n \t{\n! \t\tTargetEntry *tle = lfirst(tlitem);\n \n \t\tif (tle->resdom != NULL)\n \t\t{\n***************\n*** 920,926 ****\n \t\t\t\t\t\t\t 0,\n \t\t\t\t\t\t\t false);\n \n! /*\n \t\t\tExecSetTypeInfo(resdom->resno - 1,\n \t\t\t\t\t\t\ttypeInfo,\n \t\t\t\t\t\t\t(Oid) restype,\n--- 911,917 ----\n \t\t\t\t\t\t\t 0,\n \t\t\t\t\t\t\t false);\n \n! #ifdef NOT_USED\n \t\t\tExecSetTypeInfo(resdom->resno - 1,\n \t\t\t\t\t\t\ttypeInfo,\n \t\t\t\t\t\t\t(Oid) restype,\n***************\n*** 929,941 ****\n \t\t\t\t\t\t\tNameStr(*resdom->resname),\n \t\t\t\t\t\t\tget_typbyval(restype),\n \t\t\t\t\t\t\tget_typalign(restype));\n! */\n \t\t}\n \t\telse\n \t\t{\n \t\t\tResdom\t *fjRes;\n \t\t\tList\t *fjTlistP;\n! \t\t\tList\t *fjList = lfirst(tlcdr);\n \n #ifdef SETS_FIXED\n \t\t\tTargetEntry *tle;\n--- 920,933 ----\n \t\t\t\t\t\t\tNameStr(*resdom->resname),\n \t\t\t\t\t\t\tget_typbyval(restype),\n \t\t\t\t\t\t\tget_typalign(restype));\n! #endif\n \t\t}\n \t\telse\n \t\t{\n+ \t\t\t/* XXX this branch looks fairly broken ... tgl 12/2000 */\n \t\t\tResdom\t *fjRes;\n \t\t\tList\t *fjTlistP;\n! \t\t\tList\t *fjList = lfirst(tlitem);\n \n #ifdef SETS_FIXED\n \t\t\tTargetEntry *tle;\n***************\n*** 953,959 ****\n \t\t\t\t\t\t\t fjRes->restypmod,\n \t\t\t\t\t\t\t 0,\n \t\t\t\t\t\t\t false);\n! /*\n \t\t\tExecSetTypeInfo(fjRes->resno - 1,\n \t\t\t\t\t\t\ttypeInfo,\n \t\t\t\t\t\t\t(Oid) restype,\n--- 945,951 ----\n \t\t\t\t\t\t\t fjRes->restypmod,\n \t\t\t\t\t\t\t 0,\n \t\t\t\t\t\t\t false);\n! #ifdef NOT_USED\n \t\t\tExecSetTypeInfo(fjRes->resno - 1,\n \t\t\t\t\t\t\ttypeInfo,\n \t\t\t\t\t\t\t(Oid) restype,\n***************\n*** 962,968 ****\n \t\t\t\t\t\t\t(char *) fjRes->resname,\n \t\t\t\t\t\t\tget_typbyval(restype),\n \t\t\t\t\t\t\tget_typalign(restype));\n! */\n \n \t\t\tforeach(fjTlistP, lnext(fjList))\n \t\t\t{\n--- 954,960 ----\n \t\t\t\t\t\t\t(char *) fjRes->resname,\n \t\t\t\t\t\t\tget_typbyval(restype),\n \t\t\t\t\t\t\tget_typalign(restype));\n! #endif\n \n \t\t\tforeach(fjTlistP, lnext(fjList))\n \t\t\t{\n***************\n*** 978,984 ****\n \t\t\t\t\t\t\t\t 0,\n \t\t\t\t\t\t\t\t false);\n \n! /*\n \t\t\t\tExecSetTypeInfo(fjRes->resno - 1,\n \t\t\t\t\t\t\t\ttypeInfo,\n \t\t\t\t\t\t\t\t(Oid) fjRes->restype,\n--- 970,976 ----\n \t\t\t\t\t\t\t\t 0,\n \t\t\t\t\t\t\t\t false);\n \n! #ifdef NOT_USED\n \t\t\t\tExecSetTypeInfo(fjRes->resno - 1,\n \t\t\t\t\t\t\t\ttypeInfo,\n \t\t\t\t\t\t\t\t(Oid) fjRes->restype,\n***************\n*** 987,997 ****\n \t\t\t\t\t\t\t\t(char *) fjRes->resname,\n \t\t\t\t\t\t\t\tget_typbyval(fjRes->restype),\n \t\t\t\t\t\t\t\tget_typalign(fjRes->restype));\n! 
*/\n \t\t\t}\n \t\t}\n- \n- \t\ttlcdr = lnext(tlcdr);\n \t}\n \n \treturn typeInfo;\n--- 979,987 ----\n \t\t\t\t\t\t\t\t(char *) fjRes->resname,\n \t\t\t\t\t\t\t\tget_typbyval(fjRes->restype),\n \t\t\t\t\t\t\t\tget_typalign(fjRes->restype));\n! #endif\n \t\t\t}\n \t\t}\n \t}\n \n \treturn typeInfo;\n*** src/backend/executor/execUtils.c.orig\tThu Nov 16 17:30:20 2000\n--- src/backend/executor/execUtils.c\tTue Dec 26 14:17:26 2000\n***************\n*** 274,289 ****\n {\n \tList\t *targetList;\n \tTupleDesc\ttupDesc;\n- \tint\t\t\tlen;\n \n \ttargetList = node->targetlist;\n \ttupDesc = ExecTypeFromTL(targetList);\n! \tlen = ExecTargetListLength(targetList);\n! \n! \tif (len > 0)\n! \t\tExecAssignResultType(commonstate, tupDesc);\n! \telse\n! \t\tExecAssignResultType(commonstate, (TupleDesc) NULL);\n }\n \n /* ----------------\n--- 274,283 ----\n {\n \tList\t *targetList;\n \tTupleDesc\ttupDesc;\n \n \ttargetList = node->targetlist;\n \ttupDesc = ExecTypeFromTL(targetList);\n! \tExecAssignResultType(commonstate, tupDesc);\n }\n \n /* ----------------\n***************\n*** 582,589 ****\n }\n \n /* ----------------\n! *\t\tExecFreeTypeInfo frees the array of attrbutes\n! *\t\tcreated by ExecMakeTypeInfo and returned by ExecTypeFromTL...\n * ----------------\n */\n void\n--- 576,583 ----\n }\n \n /* ----------------\n! *\t\tExecFreeTypeInfo frees the array of attributes\n! *\t\tcreated by ExecMakeTypeInfo and returned by ExecTypeFromTL\n * ----------------\n */\n void", "msg_date": "Tue, 26 Dec 2000 14:41:33 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuple-valued datums on Alpha (was Re: 7.1 on DEC/Alpha) " }, { "msg_contents": "On 26 Dec 2000 at 14:41 (-0500), Tom Lane wrote:\n| I wrote:\n| > ... What I'm thinking about doing is setting typmod of\n| > an entire-tuple function argument to sizeof(Pointer), rather than\n| > the default -1, to indicate that a pointer representation is being\n| > used. Comments, hackers?\n| \n| Here is a patch to current sources along this line. I have not\n| committed it, since I'm not sure it does the job. It doesn't break\n| the regress tests on my machine, but does it fix them on Alphas?\n| Please apply it locally and let me know what you find.\n\nresults _look_ the same from 'make check'. I'm gonna get back into\nthe debugger on this (I've learned a few tricks that I didn't know\nwhen last I gdb'd on the Alpha).\n\n brent\n", "msg_date": "Tue, 26 Dec 2000 15:47:37 -0500", "msg_from": "Brent Verner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuple-valued datums on Alpha (was Re: 7.1 on DEC/Alpha)" }, { "msg_contents": "On 26 Dec 2000 at 14:41 (-0500), Tom Lane wrote:\n| I wrote:\n| > ... What I'm thinking about doing is setting typmod of\n| > an entire-tuple function argument to sizeof(Pointer), rather than\n| > the default -1, to indicate that a pointer representation is being\n| > used. Comments, hackers?\n| \n| Here is a patch to current sources along this line. I have not\n| committed it, since I'm not sure it does the job. It doesn't break\n| the regress tests on my machine, but does it fix them on Alphas?\n| Please apply it locally and let me know what you find.\n\nwhat I'm seeing now is much the same. 
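For anyone following along: the failure mode being chased here is the usual 64-bit one. On Alpha a pointer and a Datum are both 8 bytes, so any code path that stores or copies a whole-tuple argument under a 4-byte assumption quietly truncates the pointer, and inside a tuple it also throws off the offsets of every attribute after it. A standalone sketch of the idea -- illustrative C only, not backend source, and the typedef/variable names are invented for the example:

    #include <stdio.h>
    #include <string.h>

    typedef unsigned long MyDatum;           /* 8 bytes on Alpha */

    int
    main(void)
    {
        int     the_tuple = 42;              /* stand-in for a TupleTableSlot */
        MyDatum d = (MyDatum) &the_tuple;    /* pointer carried as a Datum */
        MyDatum copy = 0;

        memcpy(&copy, &d, 4);                /* 32-bit-era copy: loses the top half */
        /* dereferencing (int *) copy here would read from a bogus address */

        memcpy(&copy, &d, sizeof(MyDatum));  /* width taken from the type: OK */
        printf("%d\n", *(int *) copy);       /* prints 42 */
        return 0;
    }

The patch above attacks exactly that by flagging the pointer representation in typmod, so the tuple descriptor can carry an 8-byte attlen instead of assuming sizeof(Oid).
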
FWIW, it looks like we're picking\nup the cruft around \n \n functions.c:354 paramLI->value = fcinfo->arg[paramLI->id - 1];\n\n(both of which are type Datum)\n\ni've been in circles trying to figure out where fcinfo->arg is filled.\ncan you point me toward that?\n\nthanks for your help.\n brent\n", "msg_date": "Tue, 26 Dec 2000 18:03:42 -0500", "msg_from": "Brent Verner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuple-valued datums on Alpha (was Re: 7.1 on DEC/Alpha)" }, { "msg_contents": "Brent Verner <[email protected]> writes:\n> | Please apply it locally and let me know what you find.\n\n> what I'm seeing now is much the same.\n\nDrat. More to do, then.\n\n> i've been in circles trying to figure out where fcinfo->arg is filled.\n> can you point me toward that?\n\nSee src/backend/utils/fmgr/README and src/backend/utils/fmgr/fmgr.c.\nBut fmgr is probably only the carrier of disease, not the source...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 26 Dec 2000 23:41:20 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Tuple-valued datums on Alpha (was Re: 7.1 on\n\tDEC/Alpha)" }, { "msg_contents": "On 26 Dec 2000 at 23:41 (-0500), Tom Lane wrote:\n| Brent Verner <[email protected]> writes:\n| > | Please apply it locally and let me know what you find.\n| \n| > what I'm seeing now is much the same.\n\nsorry, I sent the previous email w/o the details of the different \nbehavior. Inside ExecEvalFieldSelect(), result is now 303, instead\nof 110599844 (...or whatever is was). I'm not sure if this gives \nyou any additional clues.\n\nthanks.\n brent\n", "msg_date": "Wed, 27 Dec 2000 00:28:49 -0500", "msg_from": "Brent Verner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Tuple-valued datums on Alpha (was Re: 7.1 on DEC/Alpha)" }, { "msg_contents": "On 26 Dec 2000 at 23:41 (-0500), Tom Lane wrote:\n| Brent Verner <[email protected]> writes:\n| > | Please apply it locally and let me know what you find.\n| \n| > what I'm seeing now is much the same.\n| \n| Drat. 
More to do, then.\n| \n| > i've been in circles trying to figure out where fcinfo->arg is filled.\n| > can you point me toward that?\n| \n| See src/backend/utils/fmgr/README and src/backend/utils/fmgr/fmgr.c.\n| But fmgr is probably only the carrier of disease, not the source...\n\nok, I've tracked this further (in the right direction I hope:).\n\nthese are the steps leading up the the assignment of the fscked\nfcache->fcinfo.arg[i] at execQual.c:603, which is what will eventually\nblow up ExecEvalFieldSelect.\n\n\nBreakpoint 4, ExecMakeFunctionResult (fcache=0x14014e700, \n arguments=0x14014c850, econtext=0x140127ae0, isNull=0x14014e390 \"\", \n isDone=0x11fffde78) at execQual.c:652\n652 if (fcache->fcinfo.nargs > 0 && !fcache->argsValid)\n(gdb) print fcache->fcinfo\n$56 = {flinfo = 0x14014e700, context = 0x0, resultinfo = 0x14014e7d0, \n isnull = 0 '\\000', nargs = 1, arg = {0 <repeats 16 times>}, \n argnull = '\\000' <repeats 15 times>}\n(gdb) cont\nBreakpoint 6, ExecEvalVar (variable=0x14014c820, econtext=0x140127ae0, \n isNull=0x14014e7c0 \"\") at execQual.c:298\n298 switch (variable->varno)\n(gdb) print *variable\n$57 = {type = T_Var, varno = 65001, varattno = 1, vartype = 21220, \n vartypmod = 8, varlevelsup = 0, varnoold = 1, varoattno = 0}\n(gdb) print *econtext\n$58 = {type = T_ExprContext, ecxt_scantuple = 0x14014cc58, \n ecxt_innertuple = 0x0, ecxt_outertuple = 0x14014cc58, \n ecxt_per_query_memory = 0x1400e6370, ecxt_per_tuple_memory = 0x1400e66a0, \n ecxt_param_exec_vals = 0x0, ecxt_param_list_info = 0x0, \n ecxt_aggvalues = 0x0, ecxt_aggnulls = 0x0}\n(gdb) break 313\n(gdb) cont\n(gdb) print *slot\n$60 = {type = T_TupleTableSlot, val = 0x14014e430, ttc_shouldFree = 0 '\\000', \n ttc_descIsNew = 1 '\\001', ttc_tupleDescriptor = 0x14014ded0, ttc_buffer = 0}\n(gdb) break 353\n(gdb) cont\n(gdb) print *heapTuple\n$73 = {t_len = 48, t_self = {ip_blkid = {bi_hi = 65535, bi_lo = 65535}, \n ip_posid = 0}, t_tableOid = 0, t_datamcxt = 0x1400e6370, \n t_data = 0x14014e450}\n(gdb) print attnum\n$74 = 1\n(gdb) print *tuple_type\n$75 = {natts = 2, attrs = 0x14014df00, constr = 0x0}\n(gdb) print isNull\n$76 = (bool *) 0x14014e7c0 \"\"\n(gdb) break 359\n(gdb) cont\n# after heap_getattr, we have the smashed value.\n(gdb) print result\n$79 = 303\n\n\nis this nearing the problem, or still simply witnessing symptoms?\n\n brent 'delirious from sleep dep.'\n\n", "msg_date": "Wed, 27 Dec 2000 04:06:11 -0500", "msg_from": "Brent Verner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Tuple-valued datums on Alpha (was Re: 7.1 on DEC/Alpha)" }, { "msg_contents": "On 26 Dec 2000 at 23:41 (-0500), Tom Lane wrote:\n| Brent Verner <[email protected]> writes:\n| > | Please apply it locally and let me know what you find.\n| \n| > what I'm seeing now is much the same.\n| \n| Drat. More to do, then.\n\nafter hours in the gdb-hole, I see this... maybe a clue? 
:)\n\nsrc/include/access/common/heaptuple.c:\n\n450 {\n451 \n452 /*\n453 * Fix me when going to a machine with more than a four-byte\n454 * word!\n455 */\n456 off = att_align(off, att[j]->attlen, att[j]->attalign);\n457 \n458 att[j]->attcacheoff = off;\n459 \n460 off = att_addlength(off, att[j]->attlen, tp + off);\n461 }\n\nI'm pretty sure I don't know best how to fix this, but I've got some\nrandomly entered code compiling now :) If it passes the regression \ntests I'll send it along.\n\n brent 'glad the coffee shop in the backyard is open now :)'\n\n", "msg_date": "Wed, 27 Dec 2000 06:29:16 -0500", "msg_from": "Brent Verner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Tuple-valued datums on Alpha (was Re: 7.1 on\n\tDEC/Alpha)" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> \n> > Some SQL92 functionality is missing from the BIT and VARBIT types.\n> >\n> > It should be possible to enter hexadecimal values as:\n> >\n> > B'[<bit>...]'[{<separator>...'[<bit...]'}...]\n> > X'[<hexdigit>...]'[{<separator>...'[<hexdigit...]'}...]\n> >\n> > (Cannan and Otten: SQL - The Standard Handbook, p.38)\n> >\n> > but the hexadeximal form is not accepted.\n\nAs Peter noted: the standard does not say whether X'..' should be a\nblob, a bit or a varbit type. Converting it into an integer seems to me\nto be the least reasonable solution, albeit the historical one, as\nlarger bitmasks will not fit. With TOAST the bit type can contain quite\nlarge bit strings, so a case could be made for converting to bit\n(especially as the blob implementation has reputedly got some problems). \n> \n> I have been using the BIT and VARBIT types in Postgres 7.0.3 (undocumented I\n> believe), and I note that the _input_ format is as follows:\n> \n> update blah set flags='b101001'; -- Binary\n> update blah set flags='xff45'; -- Hex\n\nYes, that was done due to limitations in the parser. These have been\nfixed and this format should not be used any longer.\n\n> \n> But the _output_ format (for varbit) is always:\n> \n> B'1010110'\n\nThe SQL standard says nothing about the output of the BIT datatypes. The\nC-routines to interpret both the B'..' and X'..' formats, as well as\noutput routines to generate both are implemented and included. The\nproblem is that a default had to be chosen, and the B'..' format seemed\nmore useful for people using small bit masks. \n\nI don't know whether a function was defined to return an X'..' string of\na bit mask. I don't have one of the more recent Postgres snapshots down\nat the moment. Peter E. may know, as he did all the integration.\n\nAn alternative may be to add a 'SET variable' to psql to govern the\noutput format, but there seem to be too many of those already.\n\nAdriaan\n", "msg_date": "Wed, 27 Dec 2000 16:11:30 +0200", "msg_from": "Adriaan Joubert <[email protected]>", "msg_from_op": false, "msg_subject": "Re: (7.1) BIT datatype" }, { "msg_contents": "Brent Verner <[email protected]> writes:\n> after hours in the gdb-hole, I see this... maybe a clue? :)\n\nI don't think that comment means anything. Possibly it's a leftover\nfrom a time when there was something unportable there. 
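att_align itself is nothing more exotic than "round this offset up to the next multiple of the type's alignment". A rough sketch of the arithmetic -- not the actual macro from the headers, name invented for the example:

    /* illustration only: round off up to a multiple of alignval */
    #define ALIGN_UP(off, alignval) \
        (((long) (off) + ((long) (alignval) - 1)) & ~((long) (alignval) - 1))

    /* ALIGN_UP(5, 4) == 8    int-aligned ('i') field after a 5-byte one
     * ALIGN_UP(5, 8) == 8    double/pointer-aligned ('d') field, same spot
     * ALIGN_UP(9, 8) == 16                                                */

The 'd' alignment the patch assigns to the pointer case is what keeps an 8-byte pointer datum off a 4-byte boundary.
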
But if att_align\nwere broken on Alphas, you'd have a lot worse problems than what you're\nseeing.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Dec 2000 10:08:29 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Tuple-valued datums on Alpha (was Re: 7.1 on\n\tDEC/Alpha)" }, { "msg_contents": "Brent Verner <[email protected]> writes:\n> these are the steps leading up the the assignment of the fscked\n> fcache->fcinfo.arg[i] at execQual.c:603, which is what will eventually\n> blow up ExecEvalFieldSelect.\n\nThat looks OK as far as it goes. Inside ExecEvalVar, you need to look\nat the tuple_type data structure in more detail, specifically\n\tp *tuple_type->attrs[0]\n\tp *tuple_type->attrs[1]\n(I think the leading * is correct here, try omitting it if gdb gets\nunhappy.)\n\n> (gdb) print *variable\n> $57 = {type = T_Var, varno = 65001, varattno = 1, vartype = 21220, \n> vartypmod = 8, varlevelsup = 0, varnoold = 1, varoattno = 0}\n\nThat part looks promising --- vartypmod is sizeof(Pointer) not -1,\nso the front-end part of my patch seems to be working. What I suspect\nwe'll find is that the tupledesc doesn't show sizeof the first field to\nbe 8 the way we want. Which would imply that I missed a place (or\nmultiple places :-() that needs to know about the convention for typmod\nof a tuple datatype.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Dec 2000 10:18:45 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Tuple-valued datums on Alpha (was Re: 7.1 on DEC/Alpha) " }, { "msg_contents": "After further study, I realized that fetchatt() and a number of other\nplaces were not prepared to cope with 8-byte pass-by-value datatypes.\nMost of them weren't checking for cases they couldn't handle, either.\n\nHere is a revised patch for you to try (this includes yesterday's patch\nplus more changes, so you'll need to reverse out the prior patch before\napplying this one). NOTE you will need to do a full reconfigure and\nrebuild to make this fly --- I'd suggest \"make distclean\" to start.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 27 Dec 2000 16:50:10 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Tuple-valued datums on Alpha (was Re: 7.1 on\n\tDEC/Alpha)" }, { "msg_contents": "On 27 Dec 2000 at 16:50 (-0500), Tom Lane wrote:\n| After further study, I realized that fetchatt() and a number of other\n| places were not prepared to cope with 8-byte pass-by-value datatypes.\n| Most of them weren't checking for cases they couldn't handle, either.\n| \n| Here is a revised patch for you to try (this includes yesterday's patch\n| plus more changes, so you'll need to reverse out the prior patch before\n| applying this one). NOTE you will need to do a full reconfigure and\n| rebuild to make this fly --- I'd suggest \"make distclean\" to start.\n\nexcellent!\n\nthis patch fixes the SEGV problem in the regression tests. the only \nremaining failures, which are not due to SEGV, are:\n\n oid ... FAILED\n float8 ... FAILED\n geometry ... FAILED\n\ninitial comments WRT failures:\n float8 fails only when building with gcc.\n oid recall seeing one-liner change to correct this. 
will try.\n\nmany thanks,\n brent\n\n", "msg_date": "Wed, 27 Dec 2000 18:05:55 -0500", "msg_from": "Brent Verner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Tuple-valued datums on Alpha (was Re: 7.1 on\n\tDEC/Alpha)" }, { "msg_contents": "Brent Verner <[email protected]> writes:\n> this patch fixes the SEGV problem in the regression tests. the only \n> remaining failures, which are not due to SEGV, are:\n> oid ... FAILED\n> float8 ... FAILED\n> geometry ... FAILED\n\nWhat are the regression diffs, exactly?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Dec 2000 18:10:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Tuple-valued datums on Alpha (was Re: 7.1 on\n\tDEC/Alpha)" }, { "msg_contents": "On 27 Dec 2000 at 18:10 (-0500), Tom Lane wrote:\n| Brent Verner <[email protected]> writes:\n| > this patch fixes the SEGV problem in the regression tests. the only \n| > remaining failures, which are not due to SEGV, are:\n| > oid ... FAILED\n| > float8 ... FAILED\n| > geometry ... FAILED\n| \n| What are the regression diffs, exactly?\n\nsee attachment.\n\n brent", "msg_date": "Wed, 27 Dec 2000 18:35:30 -0500", "msg_from": "Brent Verner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Tuple-valued datums on Alpha (was Re: 7.1 on\n\tDEC/Alpha)" }, { "msg_contents": "Brent Verner <[email protected]> writes:\n> INSERT INTO OID_TBL(f1) VALUES ('-1040');\n> + ERROR: oidin: error reading \"-1040\": Error 0 occurred.\n\nHm. I thought I'd fixed that. Are you up to date on\nsrc/backend/utils/adt/oid.c ? Current CVS has rev 1.42.\n\n \n> *** ./expected/float8-fp-exception.out\tThu Mar 30 02:46:00 2000\n> --- ./results/float8.out\tWed Dec 27 18:27:15 2000\n> ***************\n> *** 214,220 ****\n> SET f1 = FLOAT8_TBL.f1 * '-1'\n> WHERE FLOAT8_TBL.f1 > '0.0';\n> SELECT '' AS bad, f.f1 * '1e200' from FLOAT8_TBL f;\n> ! ERROR: floating point exception! The last floating point operation either exceeded legal ranges or was a divide by zero\n> SELECT '' AS bad, f.f1 ^ '1e200' from FLOAT8_TBL f;\n> ERROR: pow() result is out of range\n> SELECT '' AS bad, ln(f.f1) from FLOAT8_TBL f where f.f1 = '0.0' ;\n> --- 214,220 ----\n> SET f1 = FLOAT8_TBL.f1 * '-1'\n> WHERE FLOAT8_TBL.f1 > '0.0';\n> SELECT '' AS bad, f.f1 * '1e200' from FLOAT8_TBL f;\n> ! ERROR: Bad float8 input format -- overflow\n> SELECT '' AS bad, f.f1 ^ '1e200' from FLOAT8_TBL f;\n> ERROR: pow() result is out of range\n> SELECT '' AS bad, ln(f.f1) from FLOAT8_TBL f where f.f1 = '0.0' ;\n\nIt would appear that Alpha no longer needs the special\nfloat8-fp-exception.out comparison file. Try removing the line\n\n\tfloat8/alpha.*-dec-osf=float8-fp-exception\n\nfrom src/test/regress/resultmap.\n\nThe geometry diffs also look like Alpha may be more nearly in sync with\nthe rest of the world than it used to be. Do any of the other geometry\ncomparison files match what you are getting as results/geometry.out?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Dec 2000 18:44:42 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Tuple-valued datums on Alpha (was Re: 7.1 on\n\tDEC/Alpha)" }, { "msg_contents": "On 27 Dec 2000 at 18:44 (-0500), Tom Lane wrote:\n| Brent Verner <[email protected]> writes:\n| > INSERT INTO OID_TBL(f1) VALUES ('-1040');\n| > + ERROR: oidin: error reading \"-1040\": Error 0 occurred.\n| \n| Hm. I thought I'd fixed that. 
Are you up to date on\n| src/backend/utils/adt/oid.c ? Current CVS has rev 1.42.\n\nyup. got that version -- 1.42 2000/12/22 21:36:09 tgl\n\n| It would appear that Alpha no longer needs the special\n| float8-fp-exception.out comparison file. Try removing the line\n| \n| \tfloat8/alpha.*-dec-osf=float8-fp-exception\n\n cc w/o line above: FAIL\n cc w/ line above: ok\n gcc w/o line above: ??? (will retest later)\n gcc w/ line above: FAIL\n\n| The geometry diffs also look like Alpha may be more nearly in sync with\n| the rest of the world than it used to be. Do any of the other geometry\n| comparison files match what you are getting as results/geometry.out?\n\nnone match.\n\n| \t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Dec 2000 19:38:05 -0500", "msg_from": "Brent Verner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Tuple-valued datums on Alpha (was Re: 7.1 on\n\tDEC/Alpha)" }, { "msg_contents": "Brent Verner <[email protected]> writes:\n> | float8-fp-exception.out comparison file. Try removing the line\n> | \n> | \tfloat8/alpha.*-dec-osf=float8-fp-exception\n\n> cc w/o line above: FAIL\n> cc w/ line above: ok\n> gcc w/o line above: ??? (will retest later)\n> gcc w/ line above: FAIL\n\nOK, then it should work for both cases if you do\n\nfloat8/alpha.*-dec-osf.*:cc=float8-fp-exception\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Dec 2000 20:02:27 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] Re: Tuple-valued datums on Alpha (was Re: 7.1 on\n\tDEC/Alpha)" }, { "msg_contents": "Brent Verner <[email protected]> writes:\n> | Hm. I thought I'd fixed that. Are you up to date on\n> | src/backend/utils/adt/oid.c ? Current CVS has rev 1.42.\n\n> yup. got that version -- 1.42 2000/12/22 21:36:09 tgl\n\nYou're right, it was still broken :-(. I think I've got it now, though.\n\nOliver Elphick was kind enough to arrange access to an Alpha running\nDebian Linux, and I find that current-as-of-this-moment sources pass\nall regression tests in either serial or parallel test mode on that\nsystem. Curiously, however, the system fails when you try to shut\nit down:\n\nSmart Shutdown request at Thu Dec 28 02:41:49 2000\nDEBUG: shutting down\nFATAL 2: Checkpoint lock is busy while data base is shutting down\nShutdown failed - abort\n\nI have no idea why this should be. Evidently there's something wrong\nwith the TAS() macro --- yet it seems to work fine elsewhere. Ideas\nanyone?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Dec 2000 21:45:06 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Re: Re: Tuple-valued datums on Alpha (was Re: 7.1 on\n\tDEC/Alpha)" }, { "msg_contents": "On 27 Dec 2000 at 21:45 (-0500), Tom Lane wrote:\n| Brent Verner <[email protected]> writes:\n| > | Hm. I thought I'd fixed that. Are you up to date on\n| > | src/backend/utils/adt/oid.c ? Current CVS has rev 1.42.\n| \n| > yup. got that version -- 1.42 2000/12/22 21:36:09 tgl\n| \n| You're right, it was still broken :-(. I think I've got it now, though.\n\ni'll check it tomorrow.\n\n| Oliver Elphick was kind enough to arrange access to an Alpha running\n| Debian Linux, and I find that current-as-of-this-moment sources pass\n| all regression tests in either serial or parallel test mode on that\n| system. Curiously, however, the system fails when you try to shut\n| it down:\n\ngood. 
I'm glad you guys linked up :)\n\n| Smart Shutdown request at Thu Dec 28 02:41:49 2000\n| DEBUG: shutting down\n| FATAL 2: Checkpoint lock is busy while data base is shutting down\n| Shutdown failed - abort\n\nI'm not seeing this with my latest revision of the TAS() asm.\n\nSmart Shutdown request at Wed Dec 27 19:25:45 2000\nDEBUG: shutting down\nDEBUG: MoveOfflineLogs: remove 0000000000000000\nDEBUG: database system is shut down\n\n| I have no idea why this should be. Evidently there's something wrong\n| with the TAS() macro --- yet it seems to work fine elsewhere. Ideas\n| anyone?\n\nre-evaluating the asm stuff now.\n\nthanks.\n brent\n", "msg_date": "Thu, 28 Dec 2000 00:05:42 -0500", "msg_from": "Brent Verner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Re: Re: Tuple-valued datums on Alpha (was Re: 7.1 on\n\tDEC/Alpha)" }, { "msg_contents": "Tom Lane wrote:\n...\n >system. Curiously, however, the system fails when you try to shut\n >it down:\n >\n >Smart Shutdown request at Thu Dec 28 02:41:49 2000\n >DEBUG: shutting down\n >FATAL 2: Checkpoint lock is busy while data base is shutting down\n >Shutdown failed - abort\n >\n >I have no idea why this should be. Evidently there's something wrong\n >with the TAS() macro --- yet it seems to work fine elsewhere. Ideas\n >anyone?\n \nIt's not just on Alpha; I've seen that on my i386 Linux system.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"For God shall bring every work into judgment, \n with every secret thing, whether it be good, or \n whether it be evil.\" Ecclesiastes 12:14 \n\n\n", "msg_date": "Thu, 28 Dec 2000 07:59:06 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] Re: Re: Tuple-valued datums on Alpha (was Re:\n\t7.1 on DEC/Alpha)" }, { "msg_contents": "\"Oliver Elphick\" <[email protected]> writes:\n>> Smart Shutdown request at Thu Dec 28 02:41:49 2000\n>> DEBUG: shutting down\n>> FATAL 2: Checkpoint lock is busy while data base is shutting down\n>> Shutdown failed - abort\n \n> It's not just on Alpha; I've seen that on my i386 Linux system.\n\nOooh, that's interesting. I was just blindly assuming that it was\na problem with the Alpha spinlock code (we've sure heard plenty of\ndiscussion of same). But maybe there's an actual logic bug in the\ncheckpoint code. I don't see one in a quick scan though.\n\nFWIW, I do *not* see this behavior on HPUX. It seems perfectly\nreproducible on the Debian Alpha box. Is it reproducible on your\ni386 box, or only sometimes?\n\nVadim, any ideas?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 28 Dec 2000 03:10:11 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Re: Re: Tuple-valued datums on Alpha (was Re: 7.1 on\n\tDEC/Alpha)" }, { "msg_contents": "Tom Lane wrote:\n >\"Oliver Elphick\" <[email protected]> writes:\n >>> FATAL 2: Checkpoint lock is busy while data base is shutting down\n\n >> It's not just on Alpha; I've seen that on my i386 Linux system.\n\n >FWIW, I do *not* see this behavior on HPUX. It seems perfectly\n >reproducible on the Debian Alpha box. Is it reproducible on your\n >i386 box, or only sometimes?\n\n\nHmm. I'm just waking up a bit more. 
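Back on the TAS() question above: the contract the macro has to meet is just an atomic "set the lock word and report whether it was already set", with the right barrier behaviour -- on Alpha that means a load-locked/store-conditional loop plus a memory barrier. A rough illustration of the contract using a GCC builtin instead of the real assembly; the names are invented and this is not the backend's s_lock code:

    typedef volatile int my_slock_t;

    static int
    my_tas(my_slock_t *lock)
    {
        /* atomically store 1, return the previous value (acquire semantics) */
        return __sync_lock_test_and_set(lock, 1);
    }

    static void
    my_unlock(my_slock_t *lock)
    {
        __sync_lock_release(lock);           /* store 0 with release semantics */
    }

    /* callers spin:  while (my_tas(&lk)) ;  ...critical section...  my_unlock(&lk); */

If either half of that is subtly wrong on a given port, a lock can look permanently held -- which is what a stuck "Checkpoint lock is busy" at shutdown smells like.
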
Now I'm thinking slightly more\nclearly, I saw the problem yesterday when I was doing an Alpha build\non faure.debian.org; so I think it was actually on Alpha, not i386 after\nall. Sorry for the red herring.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"For God shall bring every work into judgment, \n with every secret thing, whether it be good, or \n whether it be evil.\" Ecclesiastes 12:14 \n\n\n", "msg_date": "Thu, 28 Dec 2000 08:24:56 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] Re: Re: Tuple-valued datums on Alpha (was Re:\n\t7.1 on DEC/Alpha)" }, { "msg_contents": "On Wed, 27 Dec 2000, Tom Lane wrote:\n\n> After further study, I realized that fetchatt() and a number of other\n> places were not prepared to cope with 8-byte pass-by-value datatypes.\n> Most of them weren't checking for cases they couldn't handle, either.\n> \n> Here is a revised patch for you to try (this includes yesterday's patch\n> plus more changes, so you'll need to reverse out the prior patch before\n> applying this one). NOTE you will need to do a full reconfigure and\n> rebuild to make this fly --- I'd suggest \"make distclean\" to start.\n\n\tGood news is that it solves the 'misc' regression test failure. It\nnow passes with flying colors! The bad news is that the 'oid' regression\ntest is still broken (with the exact same problem as before). I think\nBrent hit the same problem... I guess, verify that your oid fix actually\nhit the CVS tree, and if it did, rethink the solution. :(\n\tFor testing I used the snapshot from ftp.postgresql.org:/pub/dev/\ndated yesterday on my Alpha XLT366 running Debian GNU/Linux 2.2r0, kernel\n2.2.17. Though I found the 'configure' file actually a copy of 'config.in'\nand had to run the latter file through autoconf to get the correct version\nof the former file. Weird.\n\tAlso, I tested a patches source tree on an Linux/x86 box, and it\npassed all regression tests w/o problems. I can test the patched source\ntree on a Linux/Sparc machine if you want (bit more effort required to do\nso). \n\tOverall, it looks like we are making progress! Thanks to both you\nand Brent for looking deeper into these problems. TTYL.\n\n---------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n---------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | http://www.rkirkpat.net/ |\n---------------------------------------------------------------------------\n\n", "msg_date": "Thu, 28 Dec 2000 18:55:30 -0700 (MST)", "msg_from": "Ryan Kirkpatrick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Tuple-valued datums on Alpha (was Re: 7.1 on\n\tDEC/Alpha)" }, { "msg_contents": "Ryan Kirkpatrick <[email protected]> writes:\n> \tGood news is that it solves the 'misc' regression test failure. It\n> now passes with flying colors! The bad news is that the 'oid' regression\n> test is still broken (with the exact same problem as before). I think\n> Brent hit the same problem... I guess, verify that your oid fix actually\n> hit the CVS tree, and if it did, rethink the solution. 
:(\n\nI believe that is fixed as of src/backend/utils/adt/oid.c v 1.43,\ncommitted at Thu Dec 28 01:51:15 2000 UTC. It should have been in\nThursday morning's snapshot. If you've got 1.43 and it still fails\nthe regress test, let me know.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 28 Dec 2000 21:47:25 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Tuple-valued datums on Alpha (was Re: 7.1 on\n\tDEC/Alpha)" }, { "msg_contents": "pg_restore has some options that are supposed to allow restoring some or\nall indexes/tables/triggers/etc. For example\n\npg_restore --table\n\nrestores all tables and\n\npg_restore --table=my_table\n\nrestores only the named table. The equivalent short option is -t but it\ndoes not allow restoring all tables, since it requires an argument. I\nsuggest that an argument of '*' also means to restore all tables.\n\nAlso, if you just call pg_restore without arguments it hangs waiting for\ninput from stdin. This is a bit confusing. I suggest that stdin is used\nif and only if the file name argument is '-'.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Sat, 6 Jan 2001 22:05:08 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "pg_restore options issues" }, { "msg_contents": "At 22:05 6/01/01 +0100, Peter Eisentraut wrote:\n>\n>restores only the named table. The equivalent short option is -t but it\n>does not allow restoring all tables, since it requires an argument. I\n>suggest that an argument of '*' also means to restore all tables.\n\nSounds fine. \n\n\n>Also, if you just call pg_restore without arguments it hangs waiting for\n>input from stdin. This is a bit confusing. I suggest that stdin is used\n>if and only if the file name argument is '-'.\n\nAlso sounds reasonable.\n\n\nI'm working on pg_dump at the moment, so if there is anything else, let me\nknow.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sun, 07 Jan 2001 11:48:05 +1100", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_restore options issues" }, { "msg_contents": "Peter Eisentraut wrote:\n >pg_restore has some options that are supposed to allow restoring some or\n >all indexes/tables/triggers/etc. For example\n >\n >pg_restore --table\n >\n >restores all tables and\n >\n >pg_restore --table=my_table\n >\n >restores only the named table. The equivalent short option is -t but it\n >does not allow restoring all tables, since it requires an argument. 
I\n >suggest that an argument of '*' also means to restore all tables.\n\nI don't like that; it will need to be escaped with \\ or the shell will\nsubstitute the contents of the current directory for the *.\n\nWhy not use `-t all' or `-A'?\n\nYou should also change the help text a bit:\n\n -t [table], --table[=table] \t dump for this table only\n\ngives no hint that omitting `=table' will restore all tables.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Blessed are the pure in heart, for they shall see \n God.\" Matthew 5:8 \n\n\n", "msg_date": "Sun, 07 Jan 2001 13:42:16 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_restore options issues " }, { "msg_contents": "Oliver Elphick writes:\n\n> I don't like that; it will need to be escaped with \\ or the shell will\n> substitute the contents of the current directory for the *.\n>\n> Why not use `-t all'\n\nBecause there might be a table called \"all\". (Okay, there could be a\ntable called \"*\", but really...)\n\n> or `-A'?\n\nWe'd need an option letter for tables, triggers, indexes, functions, etc.\n\n> You should also change the help text a bit:\n>\n> -t [table], --table[=table] \t dump for this table only\n>\n> gives no hint that omitting `=table' will restore all tables.\n\nOkay, when we decide how to handle it.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Sun, 7 Jan 2001 17:02:53 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_restore options issues " }, { "msg_contents": "Ok, I have a first set of 7.1beta3 RPMs uploading now. These RPMs pass\nregression on my home RedHat 6.2 machine, which has all locale environment\nvariables disabled (/etc/sysconfig/i18n deleted and a reboot).\n\nIt may take a few minutes to a few hours for the changes I uploaded to\npropagate to the public ftp server.\n\nNOTE: BETA RPMS ARE JUST EXACTLY THAT! These RPM's are not in a finished state\n-- they are for the express purpose of allowing people to test 7.1beta in an\nRPM setup. Upgrading from a prior version is explicitly NOT supported at this\ntime.\n\nThere ARE KNOWN packaging problems -- I am working on them.\n\nNOTE: there have been some changes in the RPM install structure. \n/usr/lib/pgsql is gone, replaced by /usr/share/postgresql. See the file lists\nfrom 'rpm -qlp postgresql*7.1beta3-1.i386.rpm' for more details.\n\nREADME.rpm-dist has not been updated for the 7.1beta3 distribution yet.\n\nTo run regression tests, make sure the postgresql-test RPM is installed, then\nmake sure that the locale environment is set properly. Regression tests may be\nrun by first, as root, starting a postmaster: /etc/rc.d/init.d/postgresql start\n (if there are any errors, make sure you are not upgrading!)\n\nThen, as root:\nchown -R postgres.postgres /usr/share/postgresql/test/regress\n\n(yes, I know the RPMset should do this for you. It is being fixed.)\n\nThen, su to postgres and cd to /usr/share/postgresql/test/regress. 
Execute:\n./pg_regress --schedule=parallel_schedule\n\nAny failures indicate an error somewhere -- most likely your locale settings.\n\nLook for another release this week, with things streamlined somewhat.\n\nAlso, if you wish to rebuild from the source RPM, please read the comments and\nconditionals in the spec file BEFORE rebuilding -- you can conditionally\nrebuild just a few of the packages instead of having to rebuild everything.\n\nPlease check all the client interfaces you are equipped to check. I need\nsomeone to work over the perl, python, tcl, and tk stuff.\n\nIf someone wants to contribute built jars of the newest JDBC, I would be most\ngrateful, as I don't have a JDK on my devel machine yet. The jars distributed\nare the 7.0 JDBC jars -- which are known to need some recent patches.\n\npgaccess currently will not run unless you reconfigure to use -i in the\nstartup. This is also being fixed in the RPMset -- there is a change necessary\nin postgresql.config, I just have to do the change.\n\nPL/perl is being worked on -- many thanks to Karl DeBisschop for his\nperserverence in this.\n\nRPMS are in at:\nftp.postgresql.org/pub/dev/test-rpms\nand are built for RedHat 6.2/Intel ONLY.\n\nOnce again, at the risk of sounding like a broken record -- these are BETA\nQUALITY RPMS -- BY REQUEST. Please don't make me regret not waiting until a\nrelease candidate :-).\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 15 Jan 2001 02:24:19 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "RPMS for 7.1beta3 being uploaded." }, { "msg_contents": "Lamar Owen wrote:\n >Ok, I have a first set of 7.1beta3 RPMs uploading now. \n... >\n >pgaccess currently will not run unless you reconfigure to use -i in the\n >startup. This is also being fixed in the RPMset -- there is a change necess\n >ary\n >in postgresql.config, I just have to do the change.\n \nIn my experience, pgaccess will use the Unix socket if the hostname is\nleft blank. Is this not the case with your RPMs?\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"For I know that my redeemer liveth, and that he shall \n stand at the latter day upon the earth\" \n Job 19:25 \n\n\n", "msg_date": "Mon, 15 Jan 2001 12:31:07 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RPMS for 7.1beta3 being uploaded. " }, { "msg_contents": "Lamar Owen writes:\n\n> Ok, I have a first set of 7.1beta3 RPMs uploading now. 
These RPMs pass\n> regression on my home RedHat 6.2 machine, which has all locale environment\n> variables disabled (/etc/sysconfig/i18n deleted and a reboot).\n\nSome thoughts:\n\nRe: rpm-pgsql-7.1beta3.patch\n\n| diff -uNr postgresql-7.1beta3.orig/src/Makefile.shlib postgresql-7.1beta3/src/Makefile.shlib\n| --- postgresql-7.1beta3.orig/src/Makefile.shlib Wed Dec 6 14:37:08 2000\n| +++ postgresql-7.1beta3/src/Makefile.shlib Mon Jan 15 01:50:04 2001\n|@@ -160,7 +160,7 @@\n|\n| ifeq ($(PORTNAME), linux)\n| shlib := lib$(NAME)$(DLSUFFIX).$(SO_MAJOR_VERSION).$(SO_MINOR_VERSION)\n| - LINK.shared = $(COMPILER) -shared -Wl,-soname,$(soname)\n| + LINK.shared = $(COMPILER) -shared -Wl\n| endif\n\nThis cannot possibly be right.\n\n| diff -uNr postgresql-7.1beta3.orig/src/Makefile.shlib~ postgresql-7.1beta3/src/Makefile.shlib~\n\n???\n\n| -#!/usr/local/bin/perl -w\n| +#!/usr/bin/perl -w\n(and more of these for Python)\n\nI think this should be fixed to read\n\n#! /usr/bin/env perl\n\nAny comments?\n\n\nRe: spec file\n\n| # I hope this works....\n|\n| %ifarch ia64\n| ln -s linux_i386 src/template/linux\n| %endif\n\nIt definitely won't...\n\n| # If libtool installed, copy some files....\n| if [ -d /usr/share/libtool ]\n| then\n| cp /usr/share/libtool/config.* .\n| fi\n\nThis is useless (because the config.* files are not in src/ anymore) and\n(if it were fixed) not recommendable because config.{guess,sub} is not\ncompatible to itself, *especially* in terms of Linux recognition. You\nreally should use the ones PostgreSQL comes with.\n\n| %ifarch ppc\n| NEW_CFLAGS=`echo $CFLAGS|xargs -n 1|grep -v \"\\-O\"|xargs -n 100`\n| NEW_CFLAGS=\"$NEW_CFLAGS -O0\"\n\nThis is no longer necessary.\n\n| ./configure --enable-hba --enable-locale --with-CXX --prefix=/usr\\\n\nThere is no option called '--enable-hba'. And I think you're supposed to\nuse %{configure}.\n\n| %if %tkpkg\n| --with-tk --with-x \\\n| %endif\n\nThere is no '--with-x'. '--with-tk' is the default if '--with-tcl' was\ngiven; you should use '--without-tk' if you don't want it.\n\n| %if %jdbc\n| --with-java \\\n| %endif\n\nThere is no such option.\n\n| %ifarch alpha\n| --with-template=linux_alpha \\\n| %endif\n\nThis won't work and is not necessary.\n\n| make COPT=\"$NEW_CFLAGS\" DESTDIR=$RPM_BUILD_ROOT/usr all\n\nYou should set CFLAGS when you run configure. (%{configure} will do that.)\nDESTDIR is only useful when you run 'make install'. And DESTDIR should\nnot include /usr.\n\n| make all PGDOCS=unpacked -C doc\n\nNot sure what this is supposed to do, but I don't think it will do what\nyou expect. The docs are installed automatically.\n\n| mkdir -p $RPM_BUILD_ROOT/usr/{include/pgsql,lib,bin}\n| mkdir -p $RPM_BUILD_ROOT%{_mandir}\n\nYou don't need that, the directories are made automatically.\n\n| make DESTDIR=$RPM_BUILD_ROOT -C src install\n\nNo '-C src'.\n\n| # copy over the includes needed for SPI development.\n| pushd src/include\n| /lib/cpp -M -I. -I../backend executor/spi.h | \\\n| xargs -n 1| \\\n| grep \\\\W| \\\n| grep -v ^/| \\\n| grep -v spi.o | \\\n| grep -v spi.h | \\\n| sort | \\\n| cpio -pdu $RPM_BUILD_ROOT/usr/include/pgsql\n\nI think the standard installed set of headers is sufficient.\n\n| %if %pgaccess\n| # pgaccess installation\n\nPgaccess is installed automatically when you configure --with-tcl.\n\n| # Move the PL's to the right place\n| mv $RPM_BUILD_ROOT/usr/lib/pl*.so $RPM_BUILD_ROOT/usr/share/postgresql\n\nYou should not put architecture specific files into share/. I'm sure this\nis in violation of FHS. 
(I'm amazed createlang finds it there.)\n\n\nRe: sub-packages\n\n* pg_id should be in server\n\n* What were the last thoughts about renaming the <nothing> package to -clients?\n\n* pg_upgrade won't work, so you might as well not install it. It will\n probably be disabled before we release.\n\n* You're missing pg_config in the -devel package.\n\n\nThese are the things I could find at first glance. ;-)\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Mon, 15 Jan 2001 18:50:39 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RPMS for 7.1beta3 being uploaded." }, { "msg_contents": "Peter Eisentraut wrote:\n> \n> Lamar Owen writes:\n> \n> > Ok, I have a first set of 7.1beta3 RPMs uploading now. These RPMs pass\n> > regression on my home RedHat 6.2 machine, which has all locale environment\n> > variables disabled (/etc/sysconfig/i18n deleted and a reboot).\n> \n> Some thoughts:\n\nsnip\n\n> | -#!/usr/local/bin/perl -w\n> | +#!/usr/bin/perl -w\n> (and more of these for Python)\n> \n> I think this should be fixed to read\n> \n> #! /usr/bin/env perl\n> \n> Any comments?\n\nWhat assurance do you have that 'env' is in /usr/bin? Linux FHS\n(http://www.pathname.com/fhs/2.1/fhs-4.2.html) says that when perl\nis in some location other than /usr/bin, then symlinks should be\nprovided to /usr/bin/perl. You don't have this assurance with\n/usr/bin/env. Maybe there are some unices that do not have perl in\n/usr/bin - but maybe there are unices that put 'env' into /bin\ninstead of /usr/bin. If you follow the current FHS, /usr/bin/perl\nmay be better -- but I'm not sure if that is entirely true of the\nreal world of linux boxes out there.\n\n> | # copy over the includes needed for SPI development.\n> | pushd src/include\n> | /lib/cpp -M -I. -I../backend executor/spi.h | \\\n> | xargs -n 1| \\\n> | grep \\\\W| \\\n> | grep -v ^/| \\\n> | grep -v spi.o | \\\n> | grep -v spi.h | \\\n> | sort | \\\n> | cpio -pdu $RPM_BUILD_ROOT/usr/include/pgsql\n> \n> I think the standard installed set of headers is sufficient.\n\nI haven't installed the RPM yet. But I dissent unless things have\nchanged from 7.0 in that respect. I copmile against executor/spi.h,\nbased on the examples in the docs. Should I be using something else?\n(I'll have to go look at the 7.1 source, but I just wanted to\nregister at least some comfusion, if not dissent) \n\n-- \nKarl DeBisschop \[email protected]\nLearning Network Reference http://www.infoplease.com\nNetsaint Plugin Developer \[email protected]\n", "msg_date": "Mon, 15 Jan 2001 13:37:58 -0500", "msg_from": "Karl DeBisschop <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RPMS for 7.1beta3 being uploaded." }, { "msg_contents": "(Sorry of this double posts - I'm having alias troubles on the\nlist.)\n\nPeter Eisentraut wrote:\n> \n> Lamar Owen writes:\n> \n> > Ok, I have a first set of 7.1beta3 RPMs uploading now. These RPMs pass\n> > regression on my home RedHat 6.2 machine, which has all locale environment\n> > variables disabled (/etc/sysconfig/i18n deleted and a reboot).\n> \n> Some thoughts:\n\nsnip\n\n> | -#!/usr/local/bin/perl -w\n> | +#!/usr/bin/perl -w\n> (and more of these for Python)\n> \n> I think this should be fixed to read\n> \n> #! /usr/bin/env perl\n> \n> Any comments?\n\nWhat assurance do you have that 'env' is in /usr/bin? 
Linux FHS\n(http://www.pathname.com/fhs/2.1/fhs-4.2.html) says that when perl\nis in some location other than /usr/bin, then symlinks should be\nprovided to /usr/bin/perl. You don't have this assurance with\n/usr/bin/env. Maybe there are some unices that do not have perl in\n/usr/bin - but maybe there are unices that put 'env' into /bin\ninstead of /usr/bin. If you follow the current FHS, /usr/bin/perl\nmay be better -- but I'm not sure if that is entirely true of the\nreal world of linux boxes out there.\n\n> | # copy over the includes needed for SPI development.\n> | pushd src/include\n> | /lib/cpp -M -I. -I../backend executor/spi.h | \\\n> | xargs -n 1| \\\n> | grep \\\\W| \\\n> | grep -v ^/| \\\n> | grep -v spi.o | \\\n> | grep -v spi.h | \\\n> | sort | \\\n> | cpio -pdu $RPM_BUILD_ROOT/usr/include/pgsql\n> \n> I think the standard installed set of headers is sufficient.\n\nI haven't installed the RPM yet. But I dissent unless things have\nchanged from 7.0 in that respect. I copmile against executor/spi.h,\nbased on the examples in the docs. Should I be using something else?\n(I'll have to go look at the 7.1 source, but I just wanted to\nregister at least some comfusion, if not dissent) \n\n-- \nKarl DeBisschop [email protected]\nLearning Network/Reference http://www.infoplease.com\nNetsaint Plugin Developer [email protected]\n", "msg_date": "Mon, 15 Jan 2001 13:54:24 -0500", "msg_from": "Karl DeBisschop <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RPMS for 7.1beta3 being uploaded." }, { "msg_contents": "Peter Eisentraut wrote:\n> Lamar Owen writes:\n> > Ok, I have a first set of 7.1beta3 RPMs uploading now. These RPMs pass\n> > regression on my home RedHat 6.2 machine, which has all locale environment\n> > variables disabled (/etc/sysconfig/i18n deleted and a reboot).\n \n> Some thoughts:\n \n> Re: rpm-pgsql-7.1beta3.patch\n \n> | diff -uNr postgresql-7.1beta3.orig/src/Makefile.shlib postgresql-7.1beta3/src/Makefile.shlib\n> | - LINK.shared = $(COMPILER) -shared -Wl,-soname,$(soname)\n> | + LINK.shared = $(COMPILER) -shared -Wl\n> | endif\n \n> This cannot possibly be right.\n\nIt's what you recommended a while back. See the discussions on -soname\nfrom the libpq.so.2.1 versus libpq.so.2.0 thread awhile back.\n \n> | diff -uNr postgresql-7.1beta3.orig/src/Makefile.shlib~ postgresql-7.1beta3/src/Makefile.shlib~\n \n> ???\n\nLeftover Kedit baggage. Those are easy to forget to rm during patch\nbuild, particularly at 2:00AM -- but there's no harm in it being there\nfor now. And it won't be there in -2.\n\n> | -#!/usr/local/bin/perl -w\n> | +#!/usr/bin/perl -w\n> (and more of these for Python)\n \n> I think this should be fixed to read\n> \n> #! /usr/bin/env perl\n\nNo, for a RedHat or any other Linux distribution, /usr/bin is where perl\nand python (or their symlinks) will always live.\n\nAlthough you missed the redundant patches in the regression tree :-).\n\n> Re: spec file\n \n> | # I hope this works....\n> |\n> | %ifarch ia64\n> | ln -s linux_i386 src/template/linux\n> | %endif\n> \n> It definitely won't...\n\n7.0 required it. Baggage from the previous build.\n \n> | # If libtool installed, copy some files....\n> | if [ -d /usr/share/libtool ]\n> | then\n> | cp /usr/share/libtool/config.* .\n> | fi\n \n> This is useless (because the config.* files are not in src/ anymore) and\n> (if it were fixed) not recommendable because config.{guess,sub} is not\n> compatible to itself, *especially* in terms of Linux recognition. 
You\n> really should use the ones PostgreSQL comes with.\n\nTrond can answer that one more effectively than can I, as that's his\ninsertion.\n\nOf course, I've got to reorg the destination to match the source tree's\nreorg.\n \n> | %ifarch ppc\n> | NEW_CFLAGS=`echo $CFLAGS|xargs -n 1|grep -v \"\\-O\"|xargs -n 100`\n> | NEW_CFLAGS=\"$NEW_CFLAGS -O0\"\n \n> This is no longer necessary.\n\nDepends on the convolutions the particular build of rpm itself is\ndoing. This is a fix for the broken rpm setup found on Linux-PPC, as\nfound by Tom Lane. It would be marvelous if this would be expendable at\nthis juncture.\n \n> | ./configure --enable-hba --enable-locale --with-CXX --prefix=/usr\\\n \n> There is no option called '--enable-hba'. And I think you're supposed to\n> use %{configure}.\n\nIf it works now, I'll use it. Version 7.0 and prior wouldn't work with\n%{configure}.\n\nAnd --enable-hba has existed at some point in time.\n \n> | %if %tkpkg\n> | --with-tk --with-x \\\n> | %endif\n> \n> There is no '--with-x'. '--with-tk' is the default if '--with-tcl' was\n> given; you should use '--without-tk' if you don't want it.\n\nThere was in the past a --with-x. So I need to change that to check for\nthe negation of tkpkg and use --without-tk if so.\n \n> | %if %jdbc\n> | --with-java \\\n> | %endif\n \n> There is no such option.\n\nHmmm. I don't remember when that one got placed there....\n \n> | %ifarch alpha\n> | --with-template=linux_alpha \\\n> | %endif\n \n> This won't work and is not necessary.\n\nMore 7.0 and prior baggage. The patches for alpha at one point (6.5\nthrough 7.0.3) have required this -- of course, with the need for the\nalpha patches gone, the need for the special config step is also gone. \nOne more piece of baggage I missed.\n \n> | make COPT=\"$NEW_CFLAGS\" DESTDIR=$RPM_BUILD_ROOT/usr all\n \n> You should set CFLAGS when you run configure. (%{configure} will do that.)\n> DESTDIR is only useful when you run 'make install'. And DESTDIR should\n> not include /usr.\n\nYes, if you'll notice I fixed the DESTDIR in the install. But, of\ncourse, it's not needed (nor is it used) in the build itself. Again,\nI'll use %{configure} when I verify that it works properly (if it does,\nthat will be a very Good Thing for all involved). But, again, you're\nseeing baggage that 7.0 and prior required in order to build (well,\nexcept DESTDIR).\n \n> | make all PGDOCS=unpacked -C doc\n \n> Not sure what this is supposed to do, but I don't think it will do what\n> you expect. The docs are installed automatically.\n\nWell, they are _now_. But 7.0 and prior..... again, more old baggage.\n \n> | mkdir -p $RPM_BUILD_ROOT/usr/{include/pgsql,lib,bin}\n> | mkdir -p $RPM_BUILD_ROOT%{_mandir}\n \n> You don't need that, the directories are made automatically.\n\nThey are _now_. But before, when the make install put things in the\n'wrong' place, it was required to make the directories before doing the\ncopies and moves necessary.\n \n> | make DESTDIR=$RPM_BUILD_ROOT -C src install\n \n> No '-C src'.\n\nNot anymore, at least. *sigh* There has been alot of baggage\naccumulate in the spec due to 7.0 and prior's slightly brain-dead\nbuild. I got rid of a lot of it -- but obviously I missed some.\n \n> | # copy over the includes needed for SPI development.\n> | pushd src/include\n> | /lib/cpp -M -I. 
-I../backend executor/spi.h | \\\n> | xargs -n 1| \\\n> | grep \\\\W| \\\n> | grep -v ^/| \\\n> | grep -v spi.o | \\\n> | grep -v spi.h | \\\n> | sort | \\\n> | cpio -pdu $RPM_BUILD_ROOT/usr/include/pgsql\n \n> I think the standard installed set of headers is sufficient.\n\nIs it? It _wasn't_ sufficient for SPI development at 7.0. Have the\nheaders and the headers install been fixed to install _all_ necessary\ndevelopment headers, SPI included? If so, this step won't add any\nheaders anyway -- but it would be nice to see the difference.\n \n> | %if %pgaccess\n> | # pgaccess installation\n \n> Pgaccess is installed automatically when you configure --with-tcl.\n\nIt is now. It wasn't previously. At least not to the proper place.\n \n> | # Move the PL's to the right place\n> | mv $RPM_BUILD_ROOT/usr/lib/pl*.so $RPM_BUILD_ROOT/usr/share/postgresql\n \n> You should not put architecture specific files into share/. I'm sure this\n> is in violation of FHS. (I'm amazed createlang finds it there.)\n\nIt finds it there because I tell it to :-P. No, this objection is\ndefinitely correct -- /usr/lib is the right place, and is where it will\nlive in the future. Previously, that line moved them to /usr/lib/pgsql.\n \n> Re: sub-packages\n \n> * pg_id should be in server\n \n> * What were the last thoughts about renaming the <nothing> package to -clients?\n\nMy thoughts are to leave it the way it is. I am not alone. But it's\nnot set in stone.\n \n> * pg_upgrade won't work, so you might as well not install it. It will\n> probably be disabled before we release.\n\nYes, this one needs to not be installed for this release.\n \n> * You're missing pg_config in the -devel package.\n\nYes, I am. I noticed that just after I uploaded the rpms from a 28.8\ndialup at home. So, I'll be releasing a -2 set this week to fix that\nproblem, amongst others.\n \n> These are the things I could find at first glance. ;-)\n\nSome first glance :-)....\n\nThanks for the feedback.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 15 Jan 2001 15:50:04 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RPMS for 7.1beta3 being uploaded." }, { "msg_contents": "On older versions of PG, 7.0 included, in the $PDGATA/base folder you\ncould see the names of the databases for that $PGDATA. Now all I see is:\n\n$ ls -l\ntotal 16\ndrwx------ 2 postgres wheel 1536 Jan 12 15:42 1\ndrwx------ 2 postgres wheel 1536 Jan 12 15:41 18719\ndrwx------ 2 postgres wheel 1536 Jan 12 15:42 18720\ndrwx------ 2 postgres wheel 1536 Jan 15 15:59 18721\n\nIs there a way to relate this to the names of the databases? Why the\nchange? Or am I missing something key here..\n\n- Brandon\n\nb. palmer, [email protected]\npgp: www.crimelabs.net/bpalmer.pgp5\n\n", "msg_date": "Mon, 15 Jan 2001 16:01:47 -0500 (EST)", "msg_from": "bpalmer <[email protected]>", "msg_from_op": false, "msg_subject": "$PGDATA/base/???" }, { "msg_contents": "Lamar Owen <[email protected]> writes:\n\n> Peter Eisentraut wrote:\n> > Lamar Owen writes:\n> > > Ok, I have a first set of 7.1beta3 RPMs uploading now. 
These RPMs pass\n> > > regression on my home RedHat 6.2 machine, which has all locale environment\n> > > variables disabled (/etc/sysconfig/i18n deleted and a reboot).\n> \n> > Some thoughts:\n> \n> > Re: rpm-pgsql-7.1beta3.patch\n> \n> > | diff -uNr postgresql-7.1beta3.orig/src/Makefile.shlib postgresql-7.1beta3/src/Makefile.shlib\n> > | - LINK.shared = $(COMPILER) -shared -Wl,-soname,$(soname)\n> > | + LINK.shared = $(COMPILER) -shared -Wl\n> > | endif\n> \n> > This cannot possibly be right.\n> \n> It's what you recommended a while back. See the discussions on -soname\n> from the libpq.so.2.1 versus libpq.so.2.0 thread awhile back.\n\nNot having a soname doesn't really solve the problem, though... it\nshould be used, but correctly (major, minor)\n\n \n> > Re: spec file\n> \n> > | # I hope this works....\n> > |\n> > | %ifarch ia64\n> > | ln -s linux_i386 src/template/linux\n> > | %endif\n> > \n> > It definitely won't...\n\nIt did... we've successfully run queries etc on it.\n\n> > | # If libtool installed, copy some files....\n> > | if [ -d /usr/share/libtool ]\n> > | then\n> > | cp /usr/share/libtool/config.* .\n> > | fi\n> \n> > This is useless (because the config.* files are not in src/ anymore) and\n> > (if it were fixed) not recommendable because config.{guess,sub} is not\n> > compatible to itself, *especially* in terms of Linux recognition. You\n> > really should use the ones PostgreSQL comes with.\n\nWe have a libtool tuned to work with lots of platforms, like ia64,\ns390 etc... this makes sure it's used.\n\n> \n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n", "msg_date": "15 Jan 2001 16:03:43 -0500", "msg_from": "[email protected] (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: RPMS for 7.1beta3 being uploaded." }, { "msg_contents": "bpalmer wrote:\n> \n> On older versions of PG, 7.0 included, in the $PDGATA/base folder you\n> could see the names of the databases for that $PGDATA. Now all I see is:\n\nNo longer.\n \n> Is there a way to relate this to the names of the databases? Why the\n> change? Or am I missing something key here..\n\nSee the thread on the renaming in the archives. In short, this is part\nof Vadim's work on WAL -- the new naming makes certain things easier for\nWAL.\n\nUtilities to relate the new names to the actual database/table names\n_do_ need to be written, however. The information exists in one of the\nsystem catalogs now -- it just has to be made accessible.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 15 Jan 2001 16:04:48 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: $PGDATA/base/???" }, { "msg_contents": "Trond Eivind Glomsr�d wrote:\n> Lamar Owen <[email protected]> writes:\n> > Peter Eisentraut wrote:\n> > > Re: rpm-pgsql-7.1beta3.patch\n> > > | diff -uNr postgresql-7.1beta3.orig/src/Makefile.shlib postgresql-7.1beta3/src/Makefile.shlib\n> > > | - LINK.shared = $(COMPILER) -shared -Wl,-soname,$(soname)\n> > > | + LINK.shared = $(COMPILER) -shared -Wl\n> > > | endif\n\n> > > This cannot possibly be right.\n\n> > It's what you recommended a while back. See the discussions on -soname\n> > from the libpq.so.2.1 versus libpq.so.2.0 thread awhile back.\n \n> Not having a soname doesn't really solve the problem, though... it\n> should be used, but correctly (major, minor)\n\nOk, what's your recommendation for the patch? I'll look again. 7.0.3-2\nshipped with the equivalent patch. 
I think I know what you have in\nmind, but it will be later tonight before I have time to try it.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 15 Jan 2001 16:08:20 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RPMS for 7.1beta3 being uploaded." }, { "msg_contents": "Lamar Owen <[email protected]> writes:\n>> | %ifarch ppc\n>> | NEW_CFLAGS=`echo $CFLAGS|xargs -n 1|grep -v \"\\-O\"|xargs -n 100`\n>> | NEW_CFLAGS=\"$NEW_CFLAGS -O0\"\n \n>> This is no longer necessary.\n\n> Depends on the convolutions the particular build of rpm itself is\n> doing. This is a fix for the broken rpm setup found on Linux-PPC, as\n> found by Tom Lane. It would be marvelous if this would be expendable at\n> this juncture.\n\nIt is. 7.1 builds cleanly on PPC without any CFLAGS hackery. I think\nwe can even survive the -fsigned-char stupidity now ;-)\n\n\n>> I think the standard installed set of headers is sufficient.\n\n> Is it? It _wasn't_ sufficient for SPI development at 7.0. Have the\n> headers and the headers install been fixed to install _all_ necessary\n> development headers, SPI included?\n\nNo, nothing's been done about that AFAIK. I'm not sure the RPMs should\nbe taking it on themselves to solve the problem, however.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Jan 2001 16:11:57 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RPMS for 7.1beta3 being uploaded. " }, { "msg_contents": "Tom Lane wrote:\n> Lamar Owen <[email protected]> writes:\n> > doing. This is a fix for the broken rpm setup found on Linux-PPC, as\n> > found by Tom Lane. It would be marvelous if this would be expendable at\n> > this juncture.\n \n> It is. 7.1 builds cleanly on PPC without any CFLAGS hackery. I think\n> we can even survive the -fsigned-char stupidity now ;-)\n\nOh, good. Makes it much cleaner. Care to test that theory? :-)\n \n> >> I think the standard installed set of headers is sufficient.\n \n> > Is it? It _wasn't_ sufficient for SPI development at 7.0. Have the\n> > headers and the headers install been fixed to install _all_ necessary\n> > development headers, SPI included?\n \n> No, nothing's been done about that AFAIK. I'm not sure the RPMs should\n> be taking it on themselves to solve the problem, however.\n\nJust trying to make the postgresql-devel rpm complete, as per request. \nSince the folk who build from source usually still have the source tree\naround to do SPI development in, it's not as big of an issue for that\ninstall.\n\nOf course, if the consensus is that the RPM's simply track what the\nsource tarball does, then that can also be arranged. But, the precedent\nof the RPMset fixing difficulties with the source install has already\nbeen set with the upgrading procedure.\n\nArguments about why those wishing to do SPI development should install a\nfull source tree aside, I'm simply providing an RPM-specific requested\nfeature.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 15 Jan 2001 16:21:10 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RPMS for 7.1beta3 being uploaded." }, { "msg_contents": "Lamar Owen <[email protected]> writes:\n>> It is. 7.1 builds cleanly on PPC without any CFLAGS hackery. I think\n>> we can even survive the -fsigned-char stupidity now ;-)\n\n> Oh, good. Makes it much cleaner. Care to test that theory? 
:-)\n\nI already did, I believe, but just in plain builds from source.\n\nLet me know when you think the 7.1 RPM specfile is stable enough to be\nworth testing, and I'll try to build PPC RPMs.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Jan 2001 16:26:30 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RPMS for 7.1beta3 being uploaded. " }, { "msg_contents": "Tom Lane wrote:\n> Let me know when you think the 7.1 RPM specfile is stable enough to be\n> worth testing, and I'll try to build PPC RPMs.\n\nOk. Should be coincident with -2. I'm planning to have a -2 out later\nthis week.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 15 Jan 2001 16:29:12 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RPMS for 7.1beta3 being uploaded." }, { "msg_contents": "> > Is there a way to relate this to the names of the databases? Why the\n> > change? Or am I missing something key here..\n> \n> See the thread on the renaming in the archives. In short, this is part\n> of Vadim's work on WAL -- the new naming makes certain things easier for\n> WAL.\n> \n> Utilities to relate the new names to the actual database/table names\n> _do_ need to be written, however. The information exists in one of the\n> system catalogs now -- it just has to be made accessible.\n\nYes, I am hoping to write this utility before 7.1 final. Maybe it will\nhave to be in /contrib.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Jan 2001 16:55:51 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: $PGDATA/base/???" }, { "msg_contents": "Trond Eivind Glomsr�d writes:\n\n> We have a libtool tuned to work with lots of platforms, like ia64,\n> s390 etc... this makes sure it's used.\n\nWe don't use libtool. Nor does libtool care about the processor.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Mon, 15 Jan 2001 23:02:54 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RPMS for 7.1beta3 being uploaded." }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n\n> Trond Eivind Glomsr�d writes:\n> \n> > We have a libtool tuned to work with lots of platforms, like ia64,\n> > s390 etc... this makes sure it's used.\n> \n> We don't use libtool. \n\nDoing so would be a good thing.\n\n> Nor does libtool care about the processor.\n\nAs you can see from the actual code segment, only the\nconfig.{guess,sub} files are copied.\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n", "msg_date": "15 Jan 2001 17:05:14 -0500", "msg_from": "[email protected] (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: RPMS for 7.1beta3 being uploaded." }, { "msg_contents": "Peter Eisentraut wrote:\n> The patch I recommended was\n \n> - LDFLAGS_SL := -Bdynamic -shared -soname $(shlib)\n> + LDFLAGS_SL := -Bdynamic -shared -soname lib$(NAME)$(DLSUFFIX).$(SO_MAJOR_VERSION)\n\nAnd that is what was in 7.0.3.\n \n> but that's not what your patch does. The issue is fixed, you shouldn't\n> patch anything.\n\nOh, OK. Easy enough to delete that section and see where the chips\nfall.\n\n> > > #! 
/usr/bin/env perl\n\n> > No, for a RedHat or any other Linux distribution, /usr/bin is where perl\n> > and python (or their symlinks) will always live.\n \n> I was thinking in terms of fixing this in the source tree.\n\nOh.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 15 Jan 2001 17:15:14 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RPMS for 7.1beta3 being uploaded." }, { "msg_contents": "Lamar Owen writes:\n\n> > | diff -uNr postgresql-7.1beta3.orig/src/Makefile.shlib postgresql-7.1beta3/src/Makefile.shlib\n> > | - LINK.shared = $(COMPILER) -shared -Wl,-soname,$(soname)\n> > | + LINK.shared = $(COMPILER) -shared -Wl\n> > | endif\n>\n> > This cannot possibly be right.\n>\n> It's what you recommended a while back. See the discussions on -soname\n> from the libpq.so.2.1 versus libpq.so.2.0 thread awhile back.\n\nThe patch I recommended was\n\n- LDFLAGS_SL := -Bdynamic -shared -soname $(shlib)\n+ LDFLAGS_SL := -Bdynamic -shared -soname lib$(NAME)$(DLSUFFIX).$(SO_MAJOR_VERSION)\n\nbut that's not what your patch does. The issue is fixed, you shouldn't\npatch anything.\n\n> > I think this should be fixed to read\n> >\n> > #! /usr/bin/env perl\n>\n> No, for a RedHat or any other Linux distribution, /usr/bin is where perl\n> and python (or their symlinks) will always live.\n\nI was thinking in terms of fixing this in the source tree.\n\n> > There is no '--with-x'. '--with-tk' is the default if '--with-tcl' was\n> > given; you should use '--without-tk' if you don't want it.\n>\n> There was in the past a --with-x.\n\nBut it never did anything.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Mon, 15 Jan 2001 23:16:19 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RPMS for 7.1beta3 being uploaded." }, { "msg_contents": "Peter Eisentraut wrote:\n> Trond Eivind Glomsr�d writes:\n> > We have a libtool tuned to work with lots of platforms, like ia64,\n> > s390 etc... this makes sure it's used.\n \n> We don't use libtool. Nor does libtool care about the processor.\n\n'We' has a lot of different meanings. In your sentence, of course, 'We'\nis 'the PostgreSQL Global Development Group.' In my case, I have to use\na 'We' comprised of the PGDG, RedHat, SuSE, TurboLinux, Mandrake,\nCaldera, GreatBridge, etc.\n\nThe RPM-based Linux distributions have their own consistency\nrequirements that may at times conflict with 'our' own. In the case of\npackages the are being distributed with these distributions, it behooves\n'us' to play by the distribution's rules when those rules don't\nnegatively affect 'our' primary source distribution.\n\nBy having this only in the RPM spec file, it obviates the need for 'us'\nto change anything, and it allows 'our' package to be more easily\nintegrated in the distribution in question.\n\nIn particular, this was and is a RedHat-made change. It does not break\nanything that I am aware of, and allows the distributions to do their\nthing as well.\n\nTherefore, the conditional. If the distribution doesn't care, then the\nfiles won't exist, and they won't overwrite ours. RedHat 6.1, for\ninstance.\n\nIs there a technical reason that this would cause a failed or even\nnon-optimal build? Non-functional binaries? 
Problems at the user\nlevel?\n\nThe PPC shenanigans will be gone -- but the libtool stuff -- until\nlibtoolize works properly on 'our' build (I seem to recall a\ndistribution that successfully libtollized PostgreSQL -- I'll have to\ncheck.....), this seems reasonable to me in my limited knowledge.\n\nSo, Trond, what sort of tunings have been performed that make the files\nin question need to be copied? I'm sure you have a good reason; I am\njust curious as to what it is.\n\nLikewise, Peter, I'm sure that from your point of view you have good\nreasons -- I'd like to see them as well, for the same reasons as I'd\nlike to see Trond's.\n\nWe're early in the RPM beta cycle here -- many things can and will\nchange before final.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 15 Jan 2001 17:16:35 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RPMS for 7.1beta3 being uploaded." }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n\n> Trond Eivind Glomsr�d writes:\n> \n> > > We don't use libtool.\n> >\n> > Doing so would be a good thing.\n> \n> Not if our code is more portable than libtool's.\n\nAnd this is the case? libtool covers pretty much everything... and you\ndon't need to use it for every target.\n \n> > > Nor does libtool care about the processor.\n> >\n> > As you can see from the actual code segment, only the\n> > config.{guess,sub} files are copied.\n> \n> But you argued that this is because your config.guess supports s390 and\n> ia64 (which ours does as well)\n\nIt may do so now, but I'm pretty sure it hasn't always done so... and\neven if it does, it doesn't hurt.\n> \n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n", "msg_date": "15 Jan 2001 17:17:37 -0500", "msg_from": "[email protected] (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: RPMS for 7.1beta3 being uploaded." }, { "msg_contents": "Trond Eivind Glomsr�d writes:\n\n> > We don't use libtool.\n>\n> Doing so would be a good thing.\n\nNot if our code is more portable than libtool's.\n\n> > Nor does libtool care about the processor.\n>\n> As you can see from the actual code segment, only the\n> config.{guess,sub} files are copied.\n\nBut you argued that this is because your config.guess supports s390 and\nia64 (which ours does as well), but that cannot possibly be libtool\nrelated.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Mon, 15 Jan 2001 23:18:27 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RPMS for 7.1beta3 being uploaded." }, { "msg_contents": "Lamar Owen <[email protected]> writes:\n\n> Peter Eisentraut wrote:\n> > Trond Eivind Glomsr�d writes:\n> > > We have a libtool tuned to work with lots of platforms, like ia64,\n> > > s390 etc... this makes sure it's used.\n> \n> > We don't use libtool. Nor does libtool care about the processor.\n>\n> In particular, this was and is a RedHat-made change. It does not break\n> anything that I am aware of, and allows the distributions to do their\n> thing as well.\n\nNote that this wasn't included in Red Hat Linux 7... it's been done\nsince then, and I don't remember doing it myself (which of course\ndoesn't mean I didn't do it :) - it might have been done for the S/390\nport, by the people working on that. \n \n> So, Trond, what sort of tunings have been performed that make the files\n> in question need to be copied? 
I'm sure you have a good reason; I am\n> just curious as to what it is.\n\nFor most apps, it's just a question of configure working vs. configure\nfailing on IA64 (there is no \"tuning\" as such, my choice of words\nwasn't too good). There may be something similar for S/390.\n\n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n", "msg_date": "15 Jan 2001 17:25:50 -0500", "msg_from": "[email protected] (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: RPMS for 7.1beta3 being uploaded." }, { "msg_contents": "Trond Eivind Glomsr�d wrote:\n> Lamar Owen <[email protected]> writes:\n> > In particular, this was and is a RedHat-made change. It does not break\n> > anything that I am aware of, and allows the distributions to do their\n> > thing as well.\n \n> Note that this wasn't included in Red Hat Linux 7... it's been done\n> since then, and I don't remember doing it myself (which of course\n> doesn't mean I didn't do it :) - it might have been done for the S/390\n> port, by the people working on that.\n\nA non-conditional version (the conditional is my change) was included as\nfar back as RedHat 6.2.\n \n> For most apps, it's just a question of configure working vs. configure\n> failing on IA64 (there is no \"tuning\" as such, my choice of words\n> wasn't too good). There may be something similar for S/390.\n\nCan we test both ways (after your current project is done)?\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 15 Jan 2001 17:32:26 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RPMS for 7.1beta3 being uploaded." }, { "msg_contents": "Lamar Owen writes:\n\n> Likewise, Peter, I'm sure that from your point of view you have good\n> reasons -- I'd like to see them as well, for the same reasons as I'd\n> like to see Trond's.\n\nTwo points of view here:\n\n1. The config.* files were specifically updated because the old ones did\nnot recognize certain Linux(!) setups correctly. This probably affects\nall config.*'s before August 2000. This will simply cause the build to\nfail.\n\n2. If you use the %{configure} macro then this will be pointless because\nthat macro selects the host system type itself:\n\n| %configure \\\n| CFLAGS=\"${CFLAGS:-%optflags}\" ; export CFLAGS ; \\\n| CXXFLAGS=\"${CXXFLAGS:-%optflags}\" ; export CXXFLAGS ; \\\n| FFLAGS=\"${FFLAGS:-%optflags}\" ; export FFLAGS ; \\\n| %{?__libtoolize:[ -f configure.in ] && %{__libtoolize} --copy --force} ; \\\n| ./configure %{_target_platform} \\\\\\\n...\n\n(This is subtly wrong as well, but that's something for the RPM folks to\ndeal with.)\n\nSo in short, don't do it.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Tue, 16 Jan 2001 17:42:06 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RPMS for 7.1beta3 being uploaded." }, { "msg_contents": "Hello,\n\nI've selected postgresql 7.0.3 for our (critical) application and while\ndoing my first experiments I've found a bug which makes me worry very\nmuch.\n\nThe problem is that a SELECT with a certain LIKE condition in combination\nwith a GROUP BY does not find the proper records when there is an index on\nthe particular column present. 
When the index is removed the SELECT *does*\nreturn the right answer.\n\nFortunately I managed to strip down our database and create a simple\nsingle table with which the bug can be easily reproduced.\n\nI've been searching in the Postgres bug-database and this problem\nmight be related to this report:\n\n http://www.postgresql.org/bugs/bugs.php?4~111\n\nBelow you find a psql-session that demonstrates the bug.\n\nI've made a dump of the test-database available as:\n \n http://dutepp0.et.tudelft.nl/~robn/demo.dump.bz2\n\n(it is 46100 bytes long in compressed form but 45 MB when uncompressed,\n I tried to trim it down but then the bug isn't reproducable anymore !)\n\nThe table is filled with all Spaces execpt for the \"town\" column.\n\n\nSysinfo: \n--------\n - well-maintained Linux Red Hat 6.2\n\t- kernel 2.2.18\n - Intel Pentium III\n - postgresql-7.0.3-2 RPMs from the Postgresql site\n (the problem also occurs with locally rebuilt Source RPM)\n\nAny help is much appreciated !\n \n Friendly greetings,\n Rob van Nieuwkerk\n\n\npsql session:\n***********************************************************************\ndemo=> \\d \n List of relations\n Name | Type | Owner \n------------+-------+-------\n demo_table | table | robn\n(1 row)\n\ndemo=> \\d demo_table \n Table \"demo_table\"\n Attribute | Type | Modifier \n-----------+----------+----------\n postcode | char(7) |\n odd_even | char(1) |\n low | char(5) |\n high | char(5) |\n street | char(24) | \n town | char(24) | \n area | char(1) |\n\ndemo=> \\di\nNo relations found.\ndemo=> SELECT town FROM demo_table WHERE town LIKE 'ZWO%' GROUP BY town;\n town \n--------------------------\n ZWOLLE\n(1 row)\n\ndemo=> SELECT town FROM demo_table WHERE town LIKE 'Z%' GROUP BY town;\n\n <<<<<< here 86 towns are correctly found (output removed) >>>>>>\n\ndemo=> CREATE INDEX demo_table_town_idx ON demo_table(town);\nCREATE\ndemo=> SELECT town FROM demo_table WHERE town LIKE 'Z%' GROUP BY town;\n town\n------\n(0 rows)\n <<<<<< This is wrong !!!!!! >>>>>>>\n\ndemo=> SELECT town FROM demo_table WHERE town LIKE 'ZWO%' GROUP BY town;\n town\n--------------------------\n ZWOLLE\n(1 row)\n\ndemo=> DROP INDEX demo_table_town_idx;\nDROP\ndemo=> SELECT town FROM demo_table WHERE town LIKE 'Z%' GROUP BY town;\n\n <<<<<< here 86 towns are correctly found again >>>>>>\n***********************************************************************\n", "msg_date": "Thu, 18 Jan 2001 15:13:02 +0000 (UTC)", "msg_from": "[email protected] (Rob van Nieuwkerk)", "msg_from_op": false, "msg_subject": "7.0.3 reproduceable serious select error" }, { "msg_contents": "[email protected] (Rob van Nieuwkerk) writes:\n> The problem is that a SELECT with a certain LIKE condition in combination\n> with a GROUP BY does not find the proper records when there is an index on\n> the particular column present. When the index is removed the SELECT *does*\n> return the right answer.\n\nAre you running the postmaster in a non-ASCII locale? This sounds like\nthe old LIKE index optimization problem that we've struggled with for\nquite a while now. 
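\n\n(As a quick aside -- just a sketch, assuming a Linux-style /proc, with 12345 standing in for the real postmaster PID -- the locale environment the running postmaster actually inherited can be checked with something like:\n\n\tstrings /proc/12345/environ | egrep '^(LANG|LC_)'\n\nAn empty result, or only LANG=C / LANG=POSIX, means no non-ASCII locale is in effect; anything like LANG=en_US makes the LIKE index optimization suspect.)\n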
7.1 works around it by disabling the optimization\nin non-ASCII locales, which is unpleasant but at least it gives right\nanswers ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Jan 2001 16:13:17 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0.3 reproduceable serious select error " }, { "msg_contents": "Tom Lane wrote:\n> \n> [email protected] (Rob van Nieuwkerk) writes:\n> > The problem is that a SELECT with a certain LIKE condition in combination\n> > with a GROUP BY does not find the proper records when there is an index on\n> > the particular column present. When the index is removed the SELECT *does*\n> > return the right answer.\n> \n> Are you running the postmaster in a non-ASCII locale? This sounds like\n> the old LIKE index optimization problem that we've struggled with for\n> quite a while now. 7.1 works around it by disabling the optimization\n> in non-ASCII locales, which is unpleasant but at least it gives right\n> answers ...\n\nHi Tom,\n\nI don't think I'm running postmaster in a non-ASCII locale.\nAt least I did not explicitly do anything to accomplish it.\nI'm running with the default settings from the RPMs and didn't\nchange any default setting.\n\nI peeked in some manual pages but couldn't find info quickly about\nthis setting. Please tell me how to check it if you want to know !\n\nThank you for your reaction.\n\n\tgreetings,\n\tRob van Nieuwkerk\n", "msg_date": "Thu, 18 Jan 2001 22:23:26 +0100 (CET)", "msg_from": "Rob van Nieuwkerk <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0.3 reproduceable serious select error" }, { "msg_contents": "Rob van Nieuwkerk wrote:\nI tried to reproduce this bug on 7.0.2 and 7.0.3 with both 8K and 32K block\nsizes, and could not reproduce the error.\n\nI am running RedHat 6.2 kernel 2.2.16.\n\nI don't know enough to even be close, but I wonder if there are any subtle\ndifferences between the way characters are treated for indexes vs the way they\nare treated for table scans? If there are even slight differences in the way\nthis happens, a misinterpretation of ascii conversions for instance, (I am\nassuming you may be using ascii characters above 0x7F), it could behave\nsomething like this, and explain why I wouldn't see it. .Like I said, however,\nI don't know much so don't read too much into what I say.\n\n\n\n> Hello,\n>\n> I've selected postgresql 7.0.3 for our (critical) application and while\n> doing my first experiments I've found a bug which makes me worry very\n> much.\n>\n> The problem is that a SELECT with a certain LIKE condition in combination\n> with a GROUP BY does not find the proper records when there is an index on\n> the particular column present. 
When the index is removed the SELECT *does*\n> return the right answer.\n>\n> Fortunately I managed to strip down our database and create a simple\n> single table with which the bug can be easily reproduced.\n>\n> I've been searching in the Postgres bug-database and this problem\n> might be related to this report:\n>\n> http://www.postgresql.org/bugs/bugs.php?4~111\n>\n> Below you find a psql-session that demonstrates the bug.\n>\n> I've made a dump of the test-database available as:\n>\n> http://dutepp0.et.tudelft.nl/~robn/demo.dump.bz2\n>\n> (it is 46100 bytes long in compressed form but 45 MB when uncompressed,\n> I tried to trim it down but then the bug isn't reproducable anymore !)\n>\n> The table is filled with all Spaces execpt for the \"town\" column.\n>\n> Sysinfo:\n> --------\n> - well-maintained Linux Red Hat 6.2\n> - kernel 2.2.18\n> - Intel Pentium III\n> - postgresql-7.0.3-2 RPMs from the Postgresql site\n> (the problem also occurs with locally rebuilt Source RPM)\n>\n> Any help is much appreciated !\n>\n> Friendly greetings,\n> Rob van Nieuwkerk\n>\n> psql session:\n> ***********************************************************************\n> demo=> \\d\n> List of relations\n> Name | Type | Owner\n> ------------+-------+-------\n> demo_table | table | robn\n> (1 row)\n>\n> demo=> \\d demo_table\n> Table \"demo_table\"\n> Attribute | Type | Modifier\n> -----------+----------+----------\n> postcode | char(7) |\n> odd_even | char(1) |\n> low | char(5) |\n> high | char(5) |\n> street | char(24) |\n> town | char(24) |\n> area | char(1) |\n>\n> demo=> \\di\n> No relations found.\n> demo=> SELECT town FROM demo_table WHERE town LIKE 'ZWO%' GROUP BY town;\n> town\n> --------------------------\n> ZWOLLE\n> (1 row)\n>\n> demo=> SELECT town FROM demo_table WHERE town LIKE 'Z%' GROUP BY town;\n>\n> <<<<<< here 86 towns are correctly found (output removed) >>>>>>\n>\n> demo=> CREATE INDEX demo_table_town_idx ON demo_table(town);\n> CREATE\n> demo=> SELECT town FROM demo_table WHERE town LIKE 'Z%' GROUP BY town;\n> town\n> ------\n> (0 rows)\n> <<<<<< This is wrong !!!!!! >>>>>>>\n>\n> demo=> SELECT town FROM demo_table WHERE town LIKE 'ZWO%' GROUP BY town;\n> town\n> --------------------------\n> ZWOLLE\n> (1 row)\n>\n> demo=> DROP INDEX demo_table_town_idx;\n> DROP\n> demo=> SELECT town FROM demo_table WHERE town LIKE 'Z%' GROUP BY town;\n>\n> <<<<<< here 86 towns are correctly found again >>>>>>\n> ***********************************************************************\n\n", "msg_date": "Thu, 18 Jan 2001 16:24:48 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0.3 reproduceable serious select error" }, { "msg_contents": "> I don't think I'm running postmaster in a non-ASCII locale.\n> At least I did not explicitly do anything to accomplish it.\n\nDid you have LANG, LOCALE, or any of the LC_xxx family of\nenvironment variables set when you started the postmaster?\nSome Linux distros tend to set those in system profile scripts ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Jan 2001 16:50:19 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0.3 reproduceable serious select error " }, { "msg_contents": "\nHi Mark,\n\nI just checked: the \"demo.dump\" file does not contain any characters\nabove 0x7F; it's just plain ASCII. So that can't be the reason.\n\n greetings,\n Rob van Nieuwkerk\n\n\n> Rob van Nieuwkerk wrote:\n\nEhm .., *you* wrote this ! 
:-)\n\n> I tried to reproduce this bug on 7.0.2 and 7.0.3 with both 8K and 32K block\n> sizes, and could not reproduce the error.\n> \n> I am running RedHat 6.2 kernel 2.2.16.\n> \n> I don't know enough to even be close, but I wonder if there are any subtle\n> differences between the way characters are treated for indexes vs the way they\n> are treated for table scans? If there are even slight differences in the way\n> this happens, a misinterpretation of ascii conversions for instance, (I am\n> assuming you may be using ascii characters above 0x7F), it could behave\n> something like this, and explain why I wouldn't see it. .Like I said, however,\n> I don't know much so don't read too much into what I say.\n\n\n> > Hello,\n> >\n> > I've selected postgresql 7.0.3 for our (critical) application and while\n> > doing my first experiments I've found a bug which makes me worry very\n> > much.\n> >\n> > The problem is that a SELECT with a certain LIKE condition in combination\n> > with a GROUP BY does not find the proper records when there is an index on\n> > the particular column present. When the index is removed the SELECT *does*\n> > return the right answer.\n> >\n> > Fortunately I managed to strip down our database and create a simple\n> > single table with which the bug can be easily reproduced.\n> >\n> > I've been searching in the Postgres bug-database and this problem\n> > might be related to this report:\n> >\n> > http://www.postgresql.org/bugs/bugs.php?4~111\n> >\n> > Below you find a psql-session that demonstrates the bug.\n> >\n> > I've made a dump of the test-database available as:\n> >\n> > http://dutepp0.et.tudelft.nl/~robn/demo.dump.bz2\n> >\n> > (it is 46100 bytes long in compressed form but 45 MB when uncompressed,\n> > I tried to trim it down but then the bug isn't reproducable anymore !)\n> >\n> > The table is filled with all Spaces execpt for the \"town\" column.\n> >\n> > Sysinfo:\n> > --------\n> > - well-maintained Linux Red Hat 6.2\n> > - kernel 2.2.18\n> > - Intel Pentium III\n> > - postgresql-7.0.3-2 RPMs from the Postgresql site\n> > (the problem also occurs with locally rebuilt Source RPM)\n> >\n> > Any help is much appreciated !\n> >\n> > Friendly greetings,\n> > Rob van Nieuwkerk\n> >\n> > psql session:\n> > ***********************************************************************\n> > demo=> \\d\n> > List of relations\n> > Name | Type | Owner\n> > ------------+-------+-------\n> > demo_table | table | robn\n> > (1 row)\n> >\n> > demo=> \\d demo_table\n> > Table \"demo_table\"\n> > Attribute | Type | Modifier\n> > -----------+----------+----------\n> > postcode | char(7) |\n> > odd_even | char(1) |\n> > low | char(5) |\n> > high | char(5) |\n> > street | char(24) |\n> > town | char(24) |\n> > area | char(1) |\n> >\n> > demo=> \\di\n> > No relations found.\n> > demo=> SELECT town FROM demo_table WHERE town LIKE 'ZWO%' GROUP BY town;\n> > town\n> > --------------------------\n> > ZWOLLE\n> > (1 row)\n> >\n> > demo=> SELECT town FROM demo_table WHERE town LIKE 'Z%' GROUP BY town;\n> >\n> > <<<<<< here 86 towns are correctly found (output removed) >>>>>>\n> >\n> > demo=> CREATE INDEX demo_table_town_idx ON demo_table(town);\n> > CREATE\n> > demo=> SELECT town FROM demo_table WHERE town LIKE 'Z%' GROUP BY town;\n> > town\n> > ------\n> > (0 rows)\n> > <<<<<< This is wrong !!!!!! 
>>>>>>>\n> >\n> > demo=> SELECT town FROM demo_table WHERE town LIKE 'ZWO%' GROUP BY town;\n> > town\n> > --------------------------\n> > ZWOLLE\n> > (1 row)\n> >\n> > demo=> DROP INDEX demo_table_town_idx;\n> > DROP\n> > demo=> SELECT town FROM demo_table WHERE town LIKE 'Z%' GROUP BY town;\n> >\n> > <<<<<< here 86 towns are correctly found again >>>>>>\n> > ***********************************************************************\n> \n\n\n", "msg_date": "Thu, 18 Jan 2001 22:56:51 +0100", "msg_from": "Rob van Nieuwkerk <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0.3 reproduceable serious select error " }, { "msg_contents": "\nTom Lane wrote:\n\n> > I don't think I'm running postmaster in a non-ASCII locale.\n> > At least I did not explicitly do anything to accomplish it.\n> \n> Did you have LANG, LOCALE, or any of the LC_xxx family of\n> environment variables set when you started the postmaster?\n> Some Linux distros tend to set those in system profile scripts ...\n\nChecking with ps and looking in /proc revealed that postmaster indeed\nhad LANG set to \"en_US\" in its environment. I disabled the system script\nthat makes this setting, restarted postgres/postmaster and reran my tests.\n\nThe problem query returns the *right* answer now !\nTurning LANG=en_US back on gives the old buggy behaviour.\n\nI know very little about this LANG, LOCALE etc. stuff.\nBut for our application it is very important to support \"weird\" characters\nlike \"éõåÊ ...\" etc. for names. Basically we need all letter symbols\nin ISO-8859-1 (Latin 1). A quick experiment shows that without the\nLANG setting I can still insert & select strings containing these\nsymbols.\n\nDo I lose any postgresql functionality by just getting rid of the LANG\nenvironment variable ? Will I be able to use full ISO-8859-1 in table\nfields without problems ?\n\nPlease tell if you want me to do any other tests !\n\n\tgreetings,\n\tRob van Nieuwkerk\n", "msg_date": "Fri, 19 Jan 2001 00:15:10 +0100", "msg_from": "Rob van Nieuwkerk <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0.3 reproduceable serious select error " }, { "msg_contents": "Rob van Nieuwkerk <[email protected]> writes:\n\n> I know very little about this LANG, LOCALE etc. stuff.\n> But for our application it is very important to support \"weird\" characters\n> like \"éõåÊ ...\" etc. for names. Basically we need all letter symbols\n> in ISO-8859-1 (Latin 1). \n\nen_US is latin1 - this is what distinguishes it from POSIX/C.\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n", "msg_date": "18 Jan 2001 18:41:43 -0500", "msg_from": "[email protected] (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: 7.0.3 reproduceable serious select error" }, { "msg_contents": "Rob van Nieuwkerk <[email protected]> writes:\n> Checking with ps and looking in /proc revealed that postmaster indeed\n> had LANG set to \"en_US\" in its environment. I disabled the system script\n> that makes this setting, restarted postgres/postmaster and reran my tests.\n\n> The problem query returns the *right* answer now !\n> Turning LANG=en_US back on gives the old buggy behaviour.\n\nCaution: you can't just change the locale willy-nilly, because doing so\ninvalidates the sort ordering of btree indexes. An index built under\none sort order is effectively corrupt under another. 
I recommend that\nyou dumpall, then initdb under the desired LANG setting, then reload,\nand be careful always to start the postmaster under that same setting\nhenceforth.\n\n(BTW, 7.1 prevents this type of index screwup by locking down the\ndatabase's locale at initdb time --- the ONLY way to change sort order\nin 7.1 is to initdb with the right locale environment variables. But in\n7.0 you gotta be careful about keeping the locale consistent.)\n\n> I know very little about this LANG, LOCALE etc. stuff.\n> But for our application it is very important to support \"weird\" characters\n> like \"���� ...\" etc. for names. Basically we need all\n> letter symbols in ISO-8859-1 (Latin 1).\n\nAs long as you are not expecting things to sort in any particular order,\nit really doesn't matter what locale you run Postgres in. If you do\ncare about sort order of characters that aren't bog-standard USASCII,\nthen you may have a problem. But you can store 'em in any case.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Jan 2001 20:57:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0.3 reproduceable serious select error " }, { "msg_contents": "I meant to ask this the last time this came up on the list, but now is a\ngood time. Given what Tom describes below as the behavior in 7.1\n(initdb stores the locale info), how do you determine what locale a\ndatabase is running in in 7.1 after initdb? Is there some file to look\nat? Is there some sql statement that can be used to select the setting\nfrom the DB?\n\nthanks,\n--Barry\n\n\nTom Lane wrote:\n> \n> Rob van Nieuwkerk <[email protected]> writes:\n> > Checking whith ps and looking in /proc reveiled that postmaster indeed\n> > had LANG set to \"en_US\" in its environment. I disabled the system script\n> > that makes this setting, restarted postgres/postmaster and reran my tests.\n> \n> > The problem query returns the *right* answer now !\n> > Turning LANG=en_US back on gives the old buggy behaviour.\n> \n> Caution: you can't just change the locale willy-nilly, because doing so\n> invalidates the sort ordering of btree indexes. An index built under\n> one sort order is effectively corrupt under another. I recommend that\n> you dumpall, then initdb under the desired LANG setting, then reload,\n> and be careful always to start the postmaster under that same setting\n> henceforth.\n> \n> (BTW, 7.1 prevents this type of index screwup by locking down the\n> database's locale at initdb time --- the ONLY way to change sort order\n> in 7.1 is to initdb with the right locale environment variables. But in\n> 7.0 you gotta be careful about keeping the locale consistent.)\n> \n> > I know very little about this LANG, LOCALE etc. stuff.\n> > But for our application it is very important to support \"weird\" characters\n> > like \"éõåÊ ...\" etc. for names. Basically we need all\n> > letter symbols in ISO-8859-1 (Latin 1).\n> \n> As long as you are not expecting things to sort in any particular order,\n> it really doesn't matter what locale you run Postgres in. If you do\n> care about sort order of characters that aren't bog-standard USASCII,\n> then you may have a problem. 
But you can store 'em in any case.\n> \n> regards, tom lane\n", "msg_date": "Thu, 18 Jan 2001 18:30:25 -0800", "msg_from": "Barry Lind <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0.3 reproduceable serious select error" }, { "msg_contents": "Barry Lind <[email protected]> writes:\n> I meant to ask this the last time this came up on the list, but now is a\n> good time. Given what Tom describes below as the behavior in 7.1\n> (initdb stores the locale info), how do you determine what locale a\n> database is running in in 7.1 after initdb?\n\nHm. There probably ought to be an inquiry function or SHOW variable\nfor that, but at the moment there's not. Offhand I can't think of any\ndirect way except to paw through the pg_control file looking for the\nlocale name (at least it's stored there in ASCII ;-)).\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Jan 2001 21:38:13 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0.3 reproduceable serious select error " }, { "msg_contents": "Added to TODO:\n\n\t* Add SHOW command to see locale\n\n> Barry Lind <[email protected]> writes:\n> > I meant to ask this the last time this came up on the list, but now is a\n> > good time. Given what Tom describes below as the behavior in 7.1\n> > (initdb stores the locale info), how do you determine what locale a\n> > database is running in in 7.1 after initdb?\n> \n> Hm. There probably ought to be an inquiry function or SHOW variable\n> for that, but at the moment there's not. Offhand I can't think of any\n> direct way except to paw through the pg_control file looking for the\n> locale name (at least it's stored there in ASCII ;-)).\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Jan 2001 21:53:37 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0.3 reproduceable serious select error" }, { "msg_contents": "Does this bug still exist?\n\n[ Charset ISO-8859-1 unsupported, converting... ]\n> Louis-David Mitterrand writes:\n> \n> > When creating a child (through CREATE TABLE ... INHERIT (parent)) it\n> > seems the child gets all of the parent's contraints _except_ its PRIMARY\n> > KEY. Is this normal?\n> \n> It's kind of a bug.\n> \n> \n> -- \n> Peter Eisentraut Sernanders v?g 10:115\n> [email protected] 75262 Uppsala\n> http://yi.org/peter-e/ Sweden\n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 Jan 2001 23:57:07 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: child table doesn't inherit PRIMARY KEY?" }, { "msg_contents": "Rob van Nieuwkerk wrote:\n> \n> \n> The problem query returns the *right* answer now !\n> Turning LANG=en_US back on gives the old buggy behaviour.\n> \n> I know very little about this LANG, LOCALE etc. stuff.\n> But for our application it is very important to support \"weird\" characters\n> like \"���� ...\" etc. for names. Basically we need all letter symbols\n> in ISO-8859-1 (Latin 1). 
A quick experiment shows that without the\n> LANG setting I can still insert & select strings containing these\n> symbols.\n> \n> Do I lose any postgresql functionality by just getting rid of the LANG\n> environment variable ? Will I be able to use full ISO-8859-1 in table\n> fields without problems ?\n\nYou should, except that upper() and lower() will not give you the right\nanswers \nfor char>128 and order by orders in ASCII (i.e. character code value)\norder.\n\nI would suggest that instead you keep the en_US locale (or some nl\nlocale \nif you need the right ordering from DB), but do _not_ create a b-tree \n(the default) index on your text fields. If you need the index for \nexact lookup (field=const) a hash index will do fine and I'm pretty sure \nthat LIKE optimisations will not use them to spoil searches ;).\n\n-------------------\nHannu\n", "msg_date": "Fri, 19 Jan 2001 10:51:31 +0000", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0.3 reproduceable serious select error" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Added to TODO:\n> \n> * Add SHOW command to see locale\n\nI'd rather like it to be a function, as version() is, because SHOW\ncommands may not \nplay nice with other interfaces than psql. \n(and it can first be included in ./contrib if it's too late for a\n\"feature\" <grin>)\n\nJust make sure we will not conflict with SQL standard in naming the\nfunction.\n\n-------------------\nHannu\n", "msg_date": "Fri, 19 Jan 2001 10:56:25 +0000", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0.3 reproduceable serious select error" }, { "msg_contents": "Rob van Nieuwkerk wrote:\n\n> Hi Mark,\n>\n> I just checked: the \"demo.dump\" file does not contain any characters\n> above 0x7F; it's just plain ASCII. So that can't be the reason.\n>\n> greetings,\n> Rob van Nieuwkerk\n>\n> > Rob van Nieuwkerk wrote:\n\nI think I was close. ;-)\n\nIf I have followed the thread correctly, it is because of a language setting.\n\n\n\n", "msg_date": "Fri, 19 Jan 2001 11:33:01 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0.3 reproduceable serious select error" }, { "msg_contents": "\nProbably, since I see it in near recent sources (and it affects\nUNIQUE as well. As I remember it, the last discussion on this couldn't\ndetermine what the correct behavior for unique/primary key constraints\nwas in the inheritance case (is it a single unique hierarchy through\nall the tables [would be needed for fk to inheritance trees] or\nseparate unique constraints for each table [which would be similar\nto how many people seem to currently use postgres inheritance as a \nshortcut]). \n\nOn Thu, 18 Jan 2001, Bruce Momjian wrote:\n\n> Does this bug still exist?\n> \n> [ Charset ISO-8859-1 unsupported, converting... ]\n> > Louis-David Mitterrand writes:\n> > \n> > > When creating a child (through CREATE TABLE ... INHERIT (parent)) it\n> > > seems the child gets all of the parent's constraints _except_ its PRIMARY\n> > > KEY. Is this normal?\n\n", "msg_date": "Fri, 19 Jan 2001 09:37:02 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: child table doesn't inherit PRIMARY KEY?" }, { "msg_contents": "\nOK, what do people want to do with this item? 
Add to TODO list?\n\nSeems making a separate unique constraint would be easy to do and be of\nvalue to most users.\n\n\n> \n> Probably, since I see it in near recent sources (and it affects\n> UNIQUE as well. As I remember it, the last discussion on this couldn't\n> determine what the correct behavior for unique/primary key constraints\n> was in the inheritance case (is it a single unique hierarchy through\n> all the tables [would be needed for fk to inheritance trees] or\n> separate unique constraints for each table [which would be similar\n> to how many people seem to currently use postgres inheritance as a \n> shortcut]). \n> \n> On Thu, 18 Jan 2001, Bruce Momjian wrote:\n> \n> > Does this bug still exist?\n> > \n> > [ Charset ISO-8859-1 unsupported, converting... ]\n> > > Louis-David Mitterrand writes:\n> > > \n> > > > When creating a child (through CREATE TABLE ... INHERIT (parent)) it\n> > > > seems the child gets all of the parent's constraints _except_ its PRIMARY\n> > > > KEY. Is this normal?\n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 24 Jan 2001 08:44:16 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] child table doesn't inherit PRIMARY KEY?" }, { "msg_contents": "On Wed, 24 Jan 2001, Bruce Momjian wrote:\n\n> \n> OK, what do people want to do with this item? Add to TODO list?\n> \n> Seems making a separate unique constraint would be easy to do and be of\n> value to most users.\n\nThe problem is that doing that will pretty much guarantee that we won't\nbe doing foreign keys to inheritance trees without changing that behavior\nand we've seen people asking about adding that too. I think that this\nfalls into the general category of \"Make inheritance make sense\" (Now \nthere's a todo item :) ) Seriously, I think the work on how inheritance\nis going to work will decide this, maybe we end up with a real inheritance\ntree system and something that works like the current stuff in which case\nI'd say it's probably one unique for the former and one per for the\nlatter.\n\n\n\n", "msg_date": "Wed, 24 Jan 2001 11:25:35 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] child table doesn't inherit PRIMARY KEY?" }, { "msg_contents": "> On Wed, 24 Jan 2001, Bruce Momjian wrote:\n> \n> > \n> > OK, what do people want to do with this item? Add to TODO list?\n> > \n> > Seems making a separate unique constraint would be easy to do and be of\n> > value to most users.\n> \n> The problem is that doing that will pretty much guarantee that we won't\n> be doing foreign keys to inheritance trees without changing that behavior\n> and we've seen people asking about adding that too. I think that this\n> falls into the general category of \"Make inheritance make sense\" (Now \n> there's a todo item :) ) Seriously, I think the work on how inheritance\n> is going to work will decide this, maybe we end up with a real inheritance\n> tree system and something that works like the current stuff in which case\n> I'd say it's probably one unique for the former and one per for the\n> latter.\n\nI smell TODO item. 
In fact, I now see a TODO item:\n\n* Unique index on base column not honored on inserts from inherited table\n INSERT INTO inherit_table (unique_index_col) VALUES (dup) should fail\n [inherit]\n\nSo it seems the fact the UNIQUE doesn't apply to the new table is just a\nmanifestion of the fact that people expect UNIQUE to span the entire\ninheritance tree. I will add the emails to [inherit] and mark it as\nresolved.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 24 Jan 2001 14:31:29 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] child table doesn't inherit PRIMARY KEY?" }, { "msg_contents": "Bruce Momjian wrote:\n >> On Wed, 24 Jan 2001, Bruce Momjian wrote:\n\n >I smell TODO item. In fact, I now see a TODO item:\n >\n >* Unique index on base column not honored on inserts from inherited table\n > INSERT INTO inherit_table (unique_index_col) VALUES (dup) should fail\n > [inherit]\n >\n >So it seems the fact the UNIQUE doesn't apply to the new table is just a\n >manifestion of the fact that people expect UNIQUE to span the entire\n >inheritance tree. I will add the emails to [inherit] and mark it as\n >resolved.\n\nBruce, could you add this text to TODO.detail on the subject of \ninherited constraints. I first sent it on Christmas Eve, and I \nthink most people were too busy holidaying to comment.\n\n=================================================================\nTom Lane wrote:\n >Hm. The short-term answer seems to be to modify the queries generated\n >by the RI triggers to say \"ONLY foo\". I am not sure whether we\n >understand the semantics involved in allowing a REFERENCES target to be\n >taken as an inheritance tree rather than just one table, but certainly\n >the current implementation won't handle that correctly.\n\nMay I propose these semantics as a basis for future development:\n\n1. An inheritance hierarchy (starting at any point in a tree) should be\nequivalent to an updatable view of all the tables at the point of\nreference and below. By default, all descendant tables are combined\nwith the ancestor for all purposes. The keyword ONLY must be used to\nalter this behaviour. Only inherited columns of descendant tables are\nvisible from higher in the tree. Columns may not be dropped in descendants.\nIf columns are added to ancestors, they must be inserted correctly in\ndescendants so as to preserve column ordering and inheritance. If\na column is dropped in an ancestor, it is dropped in all descendants.\n\n2. Insertion into a hierarchy means insertion into the table named in\nthe INSERT statement; updating or deletion affects whichever table(s)\nthe affected rows are found in. Updating cannot move a row from one\ntable to another.\n\n3. Inheritance of a table implies inheriting all its constraints unless\nONLY is used or the constraints are subsequently dropped; again, dropping\noperates through all descendant tables. A primary key, foreign key or\nunique constraint cannot be dropped or modified for a descendant. A\nunique index on a column is shared by all tables below the table for\nwhich it is declared. It cannot be dropped for any descendant.\n\nIn other words, only NOT NULL and CHECK constraints can be dropped in\ndescendants.\n\nIn multiple inheritance, a column may inherit multiple unique indices\nfrom its several ancestors. 
All inherited constraints must be satisfied\ntogether (though check constraints may be dropped).\n\n4. RI to a table implies the inclusion of all its descendants in the\ncheck. Since a referenced column may be uniquely indexed further up\nthe hierarchy than in the table named, the check must ensure that\nthe referenced value occurs in the right segment of the hierarchy. RI\nto one particular level of the hierarchy, excluding descendants, requires\nthe use of ONLY in the constraint.\n\n5. Dropping a table implies dropping all its descendants.\n\n6. Changes of permissions on a table propagate to all its descendants.\nPermissions on descendants may be looser than those on ancestors; they\nmay not be more restrictive.\n\n\nThis scheme is a lot more restrictive than C++'s or Eiffel's definition\nof inheritance, but it seems to me to make the concept truly useful,\nwithout introducing excessive complexity.\n\n============================================================\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"If anyone has material possessions and sees his\n brother in need but has no pity on him, how can the\n love of God be in him?\"\n I John 3:17 \n\n\n", "msg_date": "Wed, 24 Jan 2001 21:41:39 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: [GENERAL] child table doesn't inherit PRIMARY KEY? " }, { "msg_contents": "Contrary to what the submitted documentation claims, there is no\npermission checking done on the CHECKPOINT command. Should there be?\n\nBtw., is there any normal usage application of this command? This relates\nto the previous paragraph somewhat.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Wed, 24 Jan 2001 23:02:09 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Permissions on CHECKPOINT" }, { "msg_contents": "Peter Eisentraut wrote:\n >Contrary to what the submitted documentation claims, there is no\n >permission checking done on the CHECKPOINT command. Should there be?\n \nVadim seemed to indicate that he was going to make that restriction.\nPerhaps I misunderstood.\n\nIf it's too late to make the change for 7.1, the fact should be\ndocumented in a Bug section of the man page.\n\n >Btw., is there any normal usage application of this command? This relates\n >to the previous paragraph somewhat.\n >\n >-- \n >Peter Eisentraut [email protected] http://yi.org/peter-e/\n >\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"If anyone has material possessions and sees his\n brother in need but has no pity on him, how can the\n love of God be in him?\"\n I John 3:17 \n\n\n", "msg_date": "Wed, 24 Jan 2001 22:14:53 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Permissions on CHECKPOINT " }, { "msg_contents": "\nThanks. Done.\n\n> Bruce Momjian wrote:\n> >> On Wed, 24 Jan 2001, Bruce Momjian wrote:\n> \n> >I smell TODO item. 
In fact, I now see a TODO item:\n> >\n> >* Unique index on base column not honored on inserts from inherited table\n> > INSERT INTO inherit_table (unique_index_col) VALUES (dup) should fail\n> > [inherit]\n> >\n> >So it seems the fact the UNIQUE doesn't apply to the new table is just a\n> >manifestion of the fact that people expect UNIQUE to span the entire\n> >inheritance tree. I will add the emails to [inherit] and mark it as\n> >resolved.\n> \n> Bruce, could you add this text to TODO.detail on the subject of \n> inherited constraints. I first sent it on Christmas Eve, and I \n> think most people were too busy holidaying to comment.\n> \n> =================================================================\n> Tom Lane wrote:\n> >Hm. The short-term answer seems to be to modify the queries generated\n> >by the RI triggers to say \"ONLY foo\". I am not sure whether we\n> >understand the semantics involved in allowing a REFERENCES target to be\n> >taken as an inheritance tree rather than just one table, but certainly\n> >the current implementation won't handle that correctly.\n> \n> May I propose these semantics as a basis for future development:\n> \n> 1. An inheritance hierarchy (starting at any point in a tree) should be\n> equivalent to an updatable view of all the tables at the point of\n> reference and below. By default, all descendant tables are combined\n> with the ancestor for all purposes. The keyword ONLY must be used to\n> alter this behaviour. Only inherited columns of descendant tables are\n> visible from higher in the tree. Columns may not be dropped in descendants.\n> If columns are added to ancestors, they must be inserted correctly in\n> descendants so as to preserve column ordering and inheritance. If\n> a column is dropped in an ancestor, it is dropped in all descendants.\n> \n> 2. Insertion into a hierarchy means insertion into the table named in\n> the INSERT statement; updating or deletion affects whichever table(s)\n> the affected rows are found in. Updating cannot move a row from one\n> table to another.\n> \n> 3. Inheritance of a table implies inheriting all its constraints unless\n> ONLY is used or the constraints are subsequently dropped; again, dropping\n> operates through all descendant tables. A primary key, foreign key or\n> unique constraint cannot be dropped or modified for a descendant. A\n> unique index on a column is shared by all tables below the table for\n> which it is declared. It cannot be dropped for any descendant.\n> \n> In other words, only NOT NULL and CHECK constraints can be dropped in\n> descendants.\n> \n> In multiple inheritance, a column may inherit multiple unique indices\n> from its several ancestors. All inherited constraints must be satisfied\n> together (though check constraints may be dropped).\n> \n> 4. RI to a table implies the inclusion of all its descendants in the\n> check. Since a referenced column may be uniquely indexed further up\n> the hierarchy than in the table named, the check must ensure that\n> the referenced value occurs in the right segment of the hierarchy. RI\n> to one particular level of the hierarchy, excluding descendants, requires\n> the use of ONLY in the constraint.\n> \n> 5. Dropping a table implies dropping all its descendants.\n> \n> 6. 
Changes of permissions on a table propagate to all its descendants.\n> Permissions on descendants may be looser than those on ancestors; they\n> may not be more restrictive.\n> \n> \n> This scheme is a lot more restrictive than C++'s or Eiffel's definition\n> of inheritance, but it seems to me to make the concept truly useful,\n> without introducing excessive complexity.\n> \n> ============================================================\n> \n> -- \n> Oliver Elphick [email protected]\n> Isle of Wight http://www.lfix.co.uk/oliver\n> PGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\n> GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n> ========================================\n> \"If anyone has material possessions and sees his\n> brother in need but has no pity on him, how can the\n> love of God be in him?\"\n> I John 3:17 \n> \n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 24 Jan 2001 18:56:06 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [GENERAL] child table doesn't inherit PRIMARY KEY?" }, { "msg_contents": "> >Contrary to what the submitted documentation claims, there is no\n> >permission checking done on the CHECKPOINT command. \n> Should there be?\n> \n> Vadim seemed to indicate that he was going to make that restriction.\n> Perhaps I misunderstood.\n\nYes, there should be permission checking - I'll add it later (in 7.1)\nif no one else.\n\nVadim\n", "msg_date": "Wed, 24 Jan 2001 16:03:23 -0800", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Permissions on CHECKPOINT " }, { "msg_contents": "Mikheev, Vadim writes:\n\n> Yes, there should be permission checking - I'll add it later (in 7.1)\n> if no one else.\n\nShould be simple enough. Is this okay:\n\nIndex: utility.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/tcop/utility.c,v\nretrieving revision 1.105\ndiff -c -r1.105 utility.c\n*** utility.c 2001/01/05 06:34:20 1.105\n--- utility.c 2001/01/25 16:40:40\n***************\n*** 18,23 ****\n--- 18,24 ----\n\n #include \"access/heapam.h\"\n #include \"catalog/catalog.h\"\n+ #include \"catalog/pg_shadow.h\"\n #include \"commands/async.h\"\n #include \"commands/cluster.h\"\n #include \"commands/command.h\"\n***************\n*** 851,856 ****\n--- 852,859 ----\n {\n set_ps_display(commandTag = \"CHECKPOINT\");\n\n+ if (!superuser())\n+ elog(ERROR, \"permission denied\");\n CreateCheckPoint(false);\n }\n break;\n\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Thu, 25 Jan 2001 17:49:51 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Permissions on CHECKPOINT " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Mikheev, Vadim writes:\n>> Yes, there should be permission checking - I'll add it later (in 7.1)\n>> if no one else.\n\n> Should be simple enough. Is this okay:\n\nActually, I think a more interesting question is \"should CHECKPOINT\nhave permission restrictions? 
If so, what should they be?\"\n\nA quite relevant precedent is that Unix systems (at least the ones\nI've used) do not restrict who can call sync().\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 26 Jan 2001 00:08:25 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Permissions on CHECKPOINT " }, { "msg_contents": "\n Hi all,\n\n Re-posting this to -hackers. Will PQprint() remain/disappear/be replaced\nin the future?\n\n Thx\n\n Ed\n\n\n--\n\n ������������������������������������\n\n", "msg_date": "Fri, 26 Jan 2001 05:20:01 +0000", "msg_from": "KuroiNeko <[email protected]>", "msg_from_op": false, "msg_subject": "PQprint" }, { "msg_contents": "Tom Lane wrote:\n >Peter Eisentraut <[email protected]> writes:\n >> Mikheev, Vadim writes:\n >>> Yes, there should be permission checking - I'll add it later (in 7.1)\n >>> if no one else.\n >\n >> Should be simple enough. Is this okay:\n >\n >Actually, I think a more interesting question is \"should CHECKPOINT\n >have permission restrictions? If so, what should they be?\"\n >\n >A quite relevant precedent is that Unix systems (at least the ones\n >I've used) do not restrict who can call sync().\n\nWhat about DoS attacks? What would be the effect of someone's setting\noff an infinite loop of CHECKPOINTs?\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Wash me thoroughly from mine iniquity, and cleanse me \n from my sin. For I acknowledge my transgressions; and \n my sin is ever before me. Against thee, thee only, \n have I sinned, and done this evil in thy sight...\"\n Psalms 51:2-4 \n\n\n", "msg_date": "Fri, 26 Jan 2001 05:41:41 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Permissions on CHECKPOINT " }, { "msg_contents": "The 'tmpwatch' program on Red Hat will remove the /tmp/.s.PGSQL.5432.lock\nfile after the server has run 6 days. This will be a problem.\n\nWe could touch (open) the file once every time the ServerLoop() runs\naround. It's not perfect but it should work in practice.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Fri, 26 Jan 2001 21:17:32 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Sure enough, the lock file is gone" }, { "msg_contents": "> The 'tmpwatch' program on Red Hat will remove the /tmp/.s.PGSQL.5432.lock\n> file after the server has run 6 days. This will be a problem.\n> \n> We could touch (open) the file once every time the ServerLoop() runs\n> around. It's not perfect but it should work in practice.\n\nIf we have to do it, let's make it an #ifdef __linux__ option.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 26 Jan 2001 15:18:13 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sure enough, the lock file is gone" }, { "msg_contents": "On Fri, Jan 26, 2001 at 03:18:13PM -0500, Bruce Momjian wrote:\n> > The 'tmpwatch' program on Red Hat will remove the /tmp/.s.PGSQL.5432.lock\n> > file after the server has run 6 days. 
This will be a problem.\n> > \n> > We could touch (open) the file once every time the ServerLoop() runs\n> > around. It's not perfect but it should work in practice.\n> \n> If we have to do it, let's make it an #ifdef __linux__ option.\n\n#ifdef BRAINDAMAGED_TMP_CLEANER ?\n\nISTR mention of non-linux platforms that do this.\n\nRoss\n", "msg_date": "Fri, 26 Jan 2001 14:24:34 -0600", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sure enough, the lock file is gone" }, { "msg_contents": "* Peter Eisentraut <[email protected]> [010126 12:11] wrote:\n> The 'tmpwatch' program on Red Hat will remove the /tmp/.s.PGSQL.5432.lock\n> file after the server has run 6 days. This will be a problem.\n> \n> We could touch (open) the file once every time the ServerLoop() runs\n> around. It's not perfect but it should work in practice.\n\nWhy not have the RPM/configure scripts stick it in where ever redhat\nsays it's safe to?\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Fri, 26 Jan 2001 12:49:30 -0800", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sure enough, the lock file is gone" }, { "msg_contents": "Ross J. Reedstrom wrote:\n> On Fri, Jan 26, 2001 at 03:18:13PM -0500, Bruce Momjian wrote:\n> > > The 'tmpwatch' program on Red Hat will remove the /tmp/.s.PGSQL.5432.lock\n> > > file after the server has run 6 days. This will be a problem.\n> > >\n> > > We could touch (open) the file once every time the ServerLoop() runs\n> > > around. It's not perfect but it should work in practice.\n> >\n> > If we have to do it, let's make it an #ifdef __linux__ option.\n>\n> #ifdef BRAINDAMAGED_TMP_CLEANER ?\n>\n> ISTR mention of non-linux platforms that do this.\n\n Exactly the way you want it to do (open(2) and close(2) of a\n UNIX domain socket) was what I had to do to get an old\n Mach3-4.3BSD combo into a kernel-panic.\n\n Better use utime(2) or the like for it.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Fri, 26 Jan 2001 15:50:39 -0500 (EST)", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sure enough, the lock file is gone" }, { "msg_contents": "Bruce Momjian writes:\n\n> If we have to do it, let's make it an #ifdef __linux__ option.\n\nWhat does Linux have to do with it? FreeBSD does the same thing, only\nevery three days. I dont' know whether it's not enabled on a fresh\ninstall, but it's there, you only need to flip the switch. 
I doubt /tmp\ncleaning is such an unusual thing, especially on large sites.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Fri, 26 Jan 2001 22:04:42 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sure enough, the lock file is gone" }, { "msg_contents": "Jan Wieck writes:\n\n> Exactly the way you want it to do (open(2) and close(2) of a\n> UNIX domain socket) was what I had to do to get an old\n> Mach3-4.3BSD combo into a kernel-panic.\n\nThe lock file is an ordinary file.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Fri, 26 Jan 2001 22:05:11 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sure enough, the lock file is gone" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Bruce Momjian writes:\n>> If we have to do it, let's make it an #ifdef __linux__ option.\n\n> What does Linux have to do with it? FreeBSD does the same thing, only\n> every three days. I dont' know whether it's not enabled on a fresh\n> install, but it's there, you only need to flip the switch. I doubt /tmp\n> cleaning is such an unusual thing, especially on large sites.\n\nYes, there are lots of systems that will clean /tmp --- and since the\nlock file is an ordinary file (not a socket) pretty much any tmp-cleaner\nis going to decide to remove it. I think that I had intended to insert\na periodic touch of the lockfile and forgot to.\n\nTouching it every time through ServerLoop is an overreaction though.\nI'd suggest touching it in the checkpoint-process-firing code, which\nruns every five minutes (or so?) by default.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 26 Jan 2001 16:49:12 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sure enough, the lock file is gone " }, { "msg_contents": "Peter Eisentraut wrote:\n> Jan Wieck writes:\n>\n> > Exactly the way you want it to do (open(2) and close(2) of a\n> > UNIX domain socket) was what I had to do to get an old\n> > Mach3-4.3BSD combo into a kernel-panic.\n>\n> The lock file is an ordinary file.\n\n So the crazy-temp-vacuum-cleaner on linux doesn't touch the\n sockets?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Fri, 26 Jan 2001 17:06:24 -0500 (EST)", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sure enough, the lock file is gone" }, { "msg_contents": "> Tom Lane wrote:\n> >Peter Eisentraut <[email protected]> writes:\n> >> Mikheev, Vadim writes:\n> >>> Yes, there should be permission checking - I'll add it later (in 7.1)\n> >>> if no one else.\n> >\n> >> Should be simple enough. Is this okay:\n> >\n> >Actually, I think a more interesting question is \"should CHECKPOINT\n> >have permission restrictions? If so, what should they be?\"\n> >\n> >A quite relevant precedent is that Unix systems (at least the ones\n> >I've used) do not restrict who can call sync().\n> \n> What about DoS attacks? 
What would be the effect of someone's setting\n> off an infinite loop of CHECKPOINTs?\n\nDon't we have bigger DoS attacks? Certainly SELECT cash_out(1) is a\nmuch bigger one.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 26 Jan 2001 18:17:28 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Permissions on CHECKPOINT" }, { "msg_contents": "I said:\n> Yes, there are lots of systems that will clean /tmp --- and since the\n> lock file is an ordinary file (not a socket) pretty much any tmp-cleaner\n> is going to decide to remove it. I think that I had intended to insert\n> a periodic touch of the lockfile and forgot to.\n\nDone now.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 26 Jan 2001 19:06:29 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sure enough, the lock file is gone " }, { "msg_contents": "> \n> Hi all,\n> \n> Re-posting this to -hackers. Will PQprint() remain/disappear/be replaced\n> in the future?\n\nNo idea. We are not sure who uses it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 26 Jan 2001 23:28:01 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PQprint" }, { "msg_contents": "> On Fri, Jan 26, 2001 at 03:18:13PM -0500, Bruce Momjian wrote:\n> > > The 'tmpwatch' program on Red Hat will remove the /tmp/.s.PGSQL.5432.lock\n> > > file after the server has run 6 days. This will be a problem.\n> > > \n> > > We could touch (open) the file once every time the ServerLoop() runs\n> > > around. It's not perfect but it should work in practice.\n> > \n> > If we have to do it, let's make it an #ifdef __linux__ option.\n> \n> #ifdef BRAINDAMAGED_TMP_CLEANER ?\n> \n> ISTR mention of non-linux platforms that do this.\n\nYes, thank you.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 26 Jan 2001 23:52:51 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sure enough, the lock file is gone" }, { "msg_contents": "> I said:\n> > Yes, there are lots of systems that will clean /tmp --- and since the\n> > lock file is an ordinary file (not a socket) pretty much any tmp-cleaner\n> > is going to decide to remove it. I think that I had intended to insert\n> > a periodic touch of the lockfile and forgot to.\n> \n> Done now.\n\nYes, checkpoint is a good place to put it. Thanks. I still liked the\nBRAINDAMAGED_TMP_CLEANER though.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 26 Jan 2001 23:55:16 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sure enough, the lock file is gone" }, { "msg_contents": "On Fri, Jan 26, 2001 at 05:06:24PM -0500, Jan Wieck wrote:\n> So the crazy-temp-vacuum-cleaner on linux doesn't touch the\n> sockets?\n\nThe tmpwatch program that comes with many Linux distributions will only\nunlink regular files and empty directories by default.\n-- \nBruce Guenter <[email protected]> http://em.ca/~bruceg/", "msg_date": "Fri, 26 Jan 2001 23:45:08 -0600", "msg_from": "Bruce Guenter <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sure enough, the lock file is gone" }, { "msg_contents": "On Fri, 26 Jan 2001, Peter Eisentraut wrote:\n>Bruce Momjian writes:\n>\n>> If we have to do it, let's make it an #ifdef __linux__ option.\n>\n>What does Linux have to do with it? FreeBSD does the same thing, only\n>every three days. I dont' know whether it's not enabled on a fresh\n>install, but it's there, you only need to flip the switch. I doubt /tmp\n>cleaning is such an unusual thing, especially on large sites.\n>\n\nOnly on a poorly configured FreeBSD box. You do have to turn it on first.\nFreeBSD (This is a 4.2-Stable box) will only delete files that have not been\nmodified within the set number of days. This amount is variable. You can also\ntell clean_tmp to ignore any files you wish. This is all configurable via\nrc.conf and friends.\n\nGB\n\n-- \nGB Clark II | Roaming FreeBSD Admin\[email protected] | General Geek \n CTHULU for President - Why choose the lesser of two evils?\n", "msg_date": "Sat, 27 Jan 2001 05:07:49 -0600", "msg_from": "GB Clark II <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sure enough, the lock file is gone" }, { "msg_contents": "Jan Wieck wrote:\n >Peter Eisentraut wrote:\n >> Jan Wieck writes:\n >>\n >> > Exactly the way you want it to do (open(2) and close(2) of a\n >> > UNIX domain socket) was what I had to do to get an old\n >> > Mach3-4.3BSD combo into a kernel-panic.\n >>\n >> The lock file is an ordinary file.\n >\n > So the crazy-temp-vacuum-cleaner on linux doesn't touch the\n > sockets?\n \ntmpreaper does - that's why I moved the socket in Debian.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Come now, and let us reason together, saith the LORD; \n though your sins be as scarlet, they shall be as white\n as snow; though they be red like crimson, they shall \n be as wool.\" Isaiah 1:18 \n\n\n", "msg_date": "Sat, 27 Jan 2001 12:29:28 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sure enough, the lock file is gone " }, { "msg_contents": "/tmp is for *temporary* files. Such a lock is not a temporary file, it\nshould go somewhere in /var, why not in /var/lib/pgsql/data ?\n\n\n> The 'tmpwatch' program on Red Hat will remove the /tmp/.s.PGSQL.5432.lock\n> file after the server has run 6 days. This will be a problem.\n> \n> We could touch (open) the file once every time the ServerLoop() runs\n> around. 
It's not perfect but it should work in practice.\n", "msg_date": "Sat, 27 Jan 2001 18:34:26 +0100", "msg_from": "Florent Guillaume <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sure enough, the lock file is gone" }, { "msg_contents": "Florent Guillaume:\n\n> /tmp is for *temporary* files. Such a lock is not a temporary file,\n> it should go somewhere in /var, why not in /var/lib/pgsql/data ?\n\n/var/run ?\n\n-- \nAlessio F. Bragadini\t\[email protected]\n", "msg_date": "Sat, 27 Jan 2001 19:51:54 +0200", "msg_from": "Alessio Bragadini <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone" }, { "msg_contents": "> Jan Wieck wrote:\n> >Peter Eisentraut wrote:\n> >> Jan Wieck writes:\n> >>\n> >> > Exactly the way you want it to do (open(2) and close(2) of a\n> >> > UNIX domain socket) was what I had to do to get an old\n> >> > Mach3-4.3BSD combo into a kernel-panic.\n> >>\n> >> The lock file is an ordinary file.\n> >\n> > So the crazy-temp-vacuum-cleaner on linux doesn't touch the\n> > sockets?\n> \n> tmpreaper does - that's why I moved the socket in Debian.\n\nBut you have complete control over the OS, while we don't. The problem\nI see of moving it is that only Debian-compiled clients will work on\nDebian systems.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 27 Jan 2001 13:17:01 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sure enough, the lock file is gone" }, { "msg_contents": "Florent Guillaume wrote:\n> \n> /tmp is for *temporary* files. Such a lock is not a temporary file, it\n> should go somewhere in /var, why not in /var/lib/pgsql/data ?\n\n/var/lib is also not for locks, per FHS.\n\n/var/lock/pgsql (or /var/lock/postgresql....) would be the FHS-mandated\nplace for such a file.\n\nComments? _Why_ is the lock in /tmp? Won't the lock always be put into\nplace by the uid used to run postmaster? Is a _world_ writeable\ntemporary directory the right place?\n\n7.2 discussion, however, IMHO.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sat, 27 Jan 2001 14:09:11 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone" }, { "msg_contents": "\nfirst off, the lock file is put in by an unprivileged user, so /tmp works\non all systems ...\n\nsecond, /tmp on a large portion of systems gets cleaned out after a\nreboot, so there are no 'stray locks' to generally worry about...\n\n\nOn Sat, 27 Jan 2001, Lamar Owen wrote:\n\n> Florent Guillaume wrote:\n> >\n> > /tmp is for *temporary* files. Such a lock is not a temporary file, it\n> > should go somewhere in /var, why not in /var/lib/pgsql/data ?\n>\n> /var/lib is also not for locks, per FHS.\n>\n> /var/lock/pgsql (or /var/lock/postgresql....) would be the FHS-mandated\n> place for such a file.\n>\n> Comments? _Why_ is the lock in /tmp? Won't the lock always be put into\n> place by the uid used to run postmaster? Is a _world_ writeable\n> temporary directory the right place?\n>\n> 7.2 discussion, however, IMHO.\n> --\n> Lamar Owen\n> WGCR Internet Radio\n> 1 Peter 4:11\n>\n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n\n", "msg_date": "Sat, 27 Jan 2001 15:14:13 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone" }, { "msg_contents": "The Hermit Hacker wrote:\n> On Sat, 27 Jan 2001, Lamar Owen wrote:\n> > Comments? _Why_ is the lock in /tmp? Won't the lock always be put into\n> > place by the uid used to run postmaster? Is a _world_ writeable\n> > temporary directory the right place?\n \n> first off, the lock file is put in by an unprivileged user, so /tmp works\n> on all systems ...\n\nIf /usr/local/pgsql (to use the default) is owned by the user running\npostmaster, then the postmaster has privileges to put the lockfile in,\nsay, /usr/local/pgsql/lock/....., right? Or am I missing something basic\nhere? Is this lock placed by postmaster, or by something else? My\n7.1beta3 installation shows two files in /tmp:\nsrwxrwxrwx 1 postgres postgres 0 Jan 27 14:25 .s.PGSQL.5432\n-rw------- 1 postgres postgres 25 Jan 27 14:25\n.s.PGSQL.5432.lock\n\nI understand why the socket needs to be in /tmp, but why the lockfile? \nWhat or who is using the lockfile (which contains the pid of postmaster\nand the path to PGDATA for the postmaster)?\n \n> second, /tmp on a large portion of systems gets cleaned out after a\n> reboot, so there are no 'stray locks' to generally worry about...\n\nIronic that RedHat, which can clean /tmp out on a cron basis would be\none that doesn't clean it out by default on reboot.\n\nLock file cleanup should be the responsibility of the script that starts\npostmaster -- or the responsibility of the DBA who manually starts and\nrestarts postmasters, after crashes or at other times.\n\nNot a big issue, by any means. Just attempting to understand. \n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sat, 27 Jan 2001 14:28:32 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone" }, { "msg_contents": "Peter Eisentraut wrote:\n> Lamar Owen writes:\n> > I understand why the socket needs to be in /tmp, but why the lockfile?\n \n> The lock file protects the Unix domain socket. Consequently, the name of\n> the lock file needs to be derivable from the name of the socket file, and\n> vice versa. Also, the name of the socket file must not vary with other\n> parameters such as installation layout.\n\nI see.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sat, 27 Jan 2001 14:56:36 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone" }, { "msg_contents": "Lamar Owen writes:\n\n> I understand why the socket needs to be in /tmp, but why the lockfile?\n\nThe lock file protects the Unix domain socket. Consequently, the name of\nthe lock file needs to be derivable from the name of the socket file, and\nvice versa. Also, the name of the socket file must not vary with other\nparameters such as installation layout.\n\n> Lock file cleanup should be the responsibility of the script that starts\n> postmaster\n\nThe postmaster does that itself. 
That's why the pid is in there.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Sat, 27 Jan 2001 20:58:59 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone" }, { "msg_contents": "> I understand why the socket needs to be in /tmp, but why the lockfile?\n\nIt would probably be better if the socket files weren't in /tmp but in\na postgres-owned directory. However, at this point we have a huge\nbackwards compatibility problem to overcome if we want to move the\nsocket files. The location of the socket files is essentially a core\npart of the frontend-backend protocol, because both client and server\nmust know it ab initio. Move the socket, break your clients.\n\nThere is an option in 7.1 to support defining a different directory\nfor the socket files, but I doubt very many people will use it.\n\nI see no real good reason to keep the lockfiles in a different place\nfrom the sockets themselves, however. Doing so would just complicate\nthings even more, without adding any real safety or security.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 27 Jan 2001 15:25:31 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone " }, { "msg_contents": "Bruce Momjian wrote:\n >> Note: programs that run as non-root users may be unable to create files un\n >der \n >> /var/run and therefore need a subdirectory owned by the appropriate user.\n >\n >This is the killer. We can't require root. Seems we are stuck with\n >/tmp.\n \nI'd be surprised to learn that non-admin users are allowed to write\nin /usr/local, either. If, on some machine, PostgreSQL is such an\nunoffical project that the admin won't agree to create /var/run/postgresql,\nthe user can define his own temporary directory using the method we have\nalready included in 7.1; if, on the other hand, he is able to create\n/usr/local/pgsql, he will also be able to create /var/run/postgresql.\n\nReally, how many users do we have who can't get their admin to do\nthis for them?\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Come now, and let us reason together, saith the LORD; \n though your sins be as scarlet, they shall be as white\n as snow; though they be red like crimson, they shall \n be as wool.\" Isaiah 1:18 \n\n\n", "msg_date": "Sat, 27 Jan 2001 22:05:05 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sure enough, the lock file is gone " }, { "msg_contents": "> It would probably be better if the socket files weren't in /tmp but in\n> a postgres-owned directory. However, at this point we have a huge\n> backwards compatibility problem to overcome if we want to move the\n> socket files. The location of the socket files is essentially a core\n> part of the frontend-backend protocol, because both client and server\n> must know it ab initio. Move the socket, break your clients.\n\nOk, fair enough.\n\nBut sometimes unix sucks, don't you think, having to use /tmp as a\ncentral place for inter-process communication... 
blech.\n\nFlorent\n\n-- \n<[email protected]>\n", "msg_date": "Sun, 28 Jan 2001 00:30:15 +0100", "msg_from": "Florent Guillaume <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sure enough, the lock file is gone" }, { "msg_contents": "\"Oliver Elphick\" <[email protected]> writes:\n> /var/run/postgresql\n\nThere's another reason why the standard socket directory is /tmp,\nand that's that it exists everywhere. Not all Unix systems even *have*\na /var hierarchy, let alone one that the admin will let you have a\nplaypen in.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 27 Jan 2001 19:05:35 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sure enough, the lock file is gone " }, { "msg_contents": "Tom Lane <[email protected]> writes:\n\n> It would probably be better if the socket files weren't in /tmp but in\n> a postgres-owned directory. However, at this point we have a huge\n> backwards compatibility problem to overcome if we want to move the\n> socket files.\n\nNot to sound scheptical, but since when did postgresql care about\nbackwards compatiblity? Upgrading is already demanding a lot of\nknowledge from the user (including needing such information, which\nalmost no other package do), this is just a minor change (the files\nare mostly used by bundled tools - any exceptions?)\n\n> There is an option in 7.1 to support defining a different directory\n> for the socket files, but I doubt very many people will use it.\n\nI intend to, for the RPMs we ship.\n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n", "msg_date": "28 Jan 2001 12:49:20 -0500", "msg_from": "[email protected] (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone" }, { "msg_contents": "[email protected] (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=) writes:\n> Not to sound scheptical, but since when did postgresql care about\n> backwards compatiblity? Upgrading is already demanding a lot of\n> knowledge from the user (including needing such information, which\n> almost no other package do), this is just a minor change (the files\n> are mostly used by bundled tools - any exceptions?)\n\nIt is not a minor change. Pay attention now: moving the socket files\nBREAKS THE CLIENT/SERVER PROTOCOL. Got that? If you do this, then\nno old client program will work against your new server until you\nrecompile/relink it.\n\nDon't forget, also, that clients built using the standard distribution\nwon't talk to your server either. Unless you have *sole* control of\nPostgres and Postgres clients on your machine, moving the socket files\nis going to create problems.\n\nWe have seen enough complaints in the past due to changes in the\nprotocol (all of which were backwards-compatible, BTW; this isn't)\nthat I'm pretty hesitant to do another one on such minor grounds as\ngetting the socket file out of /tmp.\n\n>> There is an option in 7.1 to support defining a different directory\n>> for the socket files, but I doubt very many people will use it.\n\n> I intend to, for the RPMs we ship.\n\nExpect complaints.\n\nI wouldn't be so annoyed at this, if I didn't know very well that the\nresulting questions/complaints will come to the Postgres team, and not\nto the perpetrator of the incompatibility. 
Shall I forward all future\n\"can't connect to server\" bug reports to you to answer?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 28 Jan 2001 13:22:26 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone " }, { "msg_contents": "Trond Eivind Glomsr�d wrote:\n> \n> Tom Lane <[email protected]> writes:\n> \n> > It would probably be better if the socket files weren't in /tmp but in\n> > a postgres-owned directory. However, at this point we have a huge\n> > backwards compatibility problem to overcome if we want to move the\n> > socket files.\n> \n> Not to sound scheptical, but since when did postgresql care about\n> backwards compatiblity? Upgrading is already demanding a lot of\n> knowledge from the user (including needing such information, which\n> almost no other package do), this is just a minor change (the files\n> are mostly used by bundled tools - any exceptions?)\n\nUpgrading is only one facet of backwards compatibility. When the fe-be\nprotocol was changed for 6.4.2 is a good example. The SQL itself is\nkept very backwards-compatible, on purpose. Things for\nbackwards-compatibility are not as bad as the upgrading situation would\nseem to imply.\n\n> > There is an option in 7.1 to support defining a different directory\n> > for the socket files, but I doubt very many people will use it.\n \n> I intend to, for the RPMs we ship.\n\nOk, why not fix tmpwatch instead? Only the lock files break FHS -- the\nsockets can go there by FHS, right? Of course, our requirement that\nthey be in the same location sort of forces the issue. But, 7.1 now\ntouches the locks enought to keep tmpwatch from blowing them out.\n\nTo where do you intend to move them to? /var/lock/pgsql? \n/var/run/pgsql? (Or postgresql... I'm still not happy with that change\n-- the configuration is much nicer, but now the 'postgresql' suffix is\nfixed -- I'm probably going to have to patch that to pgsql, as I'm\nalready changing many things that I'd prefer to leave closer to what 7.0\nhad).\n\nThe change in question is the use of '/usr/share/postgresql' and\n'/usr/include/postgresql' as part of the installation, rather than\nallowing '/usr/share/pgsql' and '/usr/include/pgsql' .\n\nO well -- I'm just going to have to see how it distills. I've not\nreceived any complaints yet, but I expect many after final. :-(\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sun, 28 Jan 2001 15:54:00 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone" }, { "msg_contents": "Lamar Owen <[email protected]> writes:\n\n> Trond Eivind Glomsr�d wrote:\n> > \n> > Tom Lane <[email protected]> writes:\n> > \n> > > It would probably be better if the socket files weren't in /tmp but in\n> > > a postgres-owned directory. However, at this point we have a huge\n> > > backwards compatibility problem to overcome if we want to move the\n> > > socket files.\n> > \n> > Not to sound scheptical, but since when did postgresql care about\n> > backwards compatiblity? Upgrading is already demanding a lot of\n> > knowledge from the user (including needing such information, which\n> > almost no other package do), this is just a minor change (the files\n> > are mostly used by bundled tools - any exceptions?)\n> \n> Upgrading is only one facet of backwards compatibility.\n\nI know. 
I just mentioned one consistently painful aspect of it.\n\n> > > There is an option in 7.1 to support defining a different directory\n> > > for the socket files, but I doubt very many people will use it.\n> \n> > I intend to, for the RPMs we ship.\n> \n> Ok, why not fix tmpwatch instead?\n\nBecause it wouldn't be a fix, it would be a \"lets workaround one\nspecific app which does things in a bad way\"-hack. /tmp isn't supposed\nbe more than that... storing things there for more than than 10 days?\nOuch. \n\n> Only the lock files break FHS \n\nExplictly, yes. However, FHS says /tmp is for temporary files. Also,\nit says programs shouldn't count on data to be stored there between\ninvocations. 10+ days isn't temporary...\n \n> To where do you intend to move them to? /var/lock/pgsql? \n> /var/run/pgsql? \n\nIdeally, the locks should be in /var/lock/pgsql and the socket\nsomewhere else - like /var/lib/pgsql (our mysql packages do this, and\nboth of them are specified in /etc/my.cnf).\n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n", "msg_date": "28 Jan 2001 16:07:09 -0500", "msg_from": "[email protected] (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone" }, { "msg_contents": "Peter Eisentraut wrote [re using rules to guard against unprivileged\ntable creation]:\n >It couldn't, because the CREATE TABLE code does not go through the rule\n >system.\n\nCould it not be done by enforcing access control on system tables? At\npresent this is partially supported. Perversely, I can deny select\nprivilege to pg_class but cannot deny insert privilege:\n\n\njunk=# revoke all on pg_class from public;\nCHANGE\njunk=# \\d \n List of relations\n Name | Type | Owner \n------------------+----------+-------\n a | table | olly\n...\n(14 rows)\njunk=# \\c - ruth\nYou are now connected as new user ruth.\njunk=> \\d\nERROR: pg_class: Permission denied.\njunk=> create table xx (id int);\nCREATE\njunk=> \\c - olly\nYou are now connected as new user olly.\njunk=# \\d\n List of relations\n Name | Type | Owner \n------------------+----------+-------\n a | table | olly\n...\n xx | table | ruth\n(15 rows)\n\n\nIf the denial of write privilege were enforced, it would not be possible\nfor an unprivileged user to create tables. When a database is created,\nall the system tables should be made read only for PUBLIC. As a corollary,\nwhen a write privilege is granted on a table, it may be necessary to\ngive concomitant privilege on tables needed to update sequences and other\nsuch items (I can't think of any others, at the moment), or else by-pass\nprivilege checking on these.\n\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Many are the afflictions of the righteous; but the \n LORD delivereth him out of them all.\" \n Psalm 34:19 \n\n\n", "msg_date": "Sun, 28 Jan 2001 21:25:53 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [ADMIN] Controlling user table creation " }, { "msg_contents": "Lamar Owen wrote:\n >Ok, why not fix tmpwatch instead? Only the lock files break FHS -- the\n >sockets can go there by FHS, right? 
\n\nNo, UNIX sockets are specifically mentioned as belonging under /var/run.\nIn section 5.10 \"/var/run : Run-time variable data\", it says: \"Programs\nthat maintain transient UNIX-domain sockets should place them in this \ndirectory.\"\n\nSo what ever the outcome for the wider PostgreSQL community, I must make\nthe change to conform to Debian policy.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Many are the afflictions of the righteous; but the \n LORD delivereth him out of them all.\" \n Psalm 34:19 \n\n\n", "msg_date": "Sun, 28 Jan 2001 21:38:38 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: Sure enough, the lock file is gone " }, { "msg_contents": "Trond Eivind Glomsr�d wrote:\n> Lamar Owen <[email protected]> writes:\n> > Trond Eivind Glomsr�d wrote:\n> > > Not to sound scheptical, but since when did postgresql care about\n> > > backwards compatiblity? Upgrading is already demanding a lot of\n\n> > Upgrading is only one facet of backwards compatibility.\n \n> I know. I just mentioned one consistently painful aspect of it.\n\nYour pain is evident from the hyperbolic statement. I share your pain.\n\n> However, FHS says /tmp is for temporary files. Also,\n> it says programs shouldn't count on data to be stored there between\n> invocations. 10+ days isn't temporary...\n\nBut postmaster _doesn't_ expect the files to stay _between_\ninvocations.... :-) But your point is understood.\n\nWhat about the X sockets, then? X might stay up 10+ days. X doesn't\njust put one file there, either -- there's a whole directory there in\n/tmp.\n \n> > To where do you intend to move them to? /var/lock/pgsql?\n> > /var/run/pgsql?\n \n> Ideally, the locks should be in /var/lock/pgsql and the socket\n> somewhere else - like /var/lib/pgsql (our mysql packages do this, and\n> both of them are specified in /etc/my.cnf).\n\nAccording to what Peter said, that could be difficult. \n\nBut, let me ask this: is it a good thing for PostgreSQL clients to have\nhard-coded socket locations? (Good thing or not, it exists already, and\nI know it does....)\n\nI have another question of Peter, Tom, Bruce, or anyone -- is the\nhard-coded socket location in libpq? If so, wouldn't a dynamically\nloaded libpq.so bring this location in for _any_ precompiled, not\nstatically-linked, client? Or am I missing something else?\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sun, 28 Jan 2001 16:39:55 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone" }, { "msg_contents": "Oliver Elphick writes:\n\n> Could it not be done by enforcing access control on system tables?\n\nNo, because CREATE TABLE does not go through access control either.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Sun, 28 Jan 2001 23:11:03 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] Controlling user table creation " }, { "msg_contents": "Lamar Owen writes:\n\n> What about the X sockets, then?\n\nSockets are not the problem, regular files are. (At least for tmpwatch.)\n\n> But, let me ask this: is it a good thing for PostgreSQL clients to have\n> hard-coded socket locations? 
(Good thing or not, it exists already, and\n> I know it does....)\n\nPerhaps there could be some sort of /etc/postgresql.conf file that is read\nby both client and server that can control these sort of aspects. But I\ndon't see much use in it besides port number and socket location.\nBecause those are, by definition, the only parameters in common to client\nand server.\n\n> I have another question of Peter, Tom, Bruce, or anyone -- is the\n> hard-coded socket location in libpq? If so, wouldn't a dynamically\n> loaded libpq.so bring this location in for _any_ precompiled, not\n> statically-linked, client?\n\nYes. Good point.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Sun, 28 Jan 2001 23:21:04 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone" }, { "msg_contents": "Peter Eisentraut wrote:\n> Lamar Owen writes:\n> > But, let me ask this: is it a good thing for PostgreSQL clients to have\n> > hard-coded socket locations? (Good thing or not, it exists already, and\n> > I know it does....)\n \n> Perhaps there could be some sort of /etc/postgresql.conf file that is read\n> by both client and server that can control these sort of aspects. But I\n> don't see much use in it besides port number and socket location.\n> Because those are, by definition, the only parameters in common to client\n> and server.\n\nOf course, -i and TCP/IP to localhost obviate all of this.\n\nHow about an environment variable? PGSOCKLOC? Use the hard-coded\ndefault if the envvar not set? This way multiple postmasters running on\nmultiple sockets can be smoothly supported.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sun, 28 Jan 2001 17:24:40 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone" }, { "msg_contents": "[email protected] (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=) writes:\n> Ideally, the locks should be in /var/lock/pgsql and the socket\n> somewhere else - like /var/lib/pgsql (our mysql packages do this, and\n> both of them are specified in /etc/my.cnf).\n\nThat is not \"ideal\", in fact it would break one of the specific features\nthat UUNET asked us for. Namely, to be able to have noninterfering\nsets of socket files in different explicitly-specified directories.\nIf the lock files don't live where the sockets do, then this doesn't work.\n\n> Explictly, yes. However, FHS says /tmp is for temporary files. Also,\n> it says programs shouldn't count on data to be stored there between\n> invocations. 10+ days isn't temporary...\n\nWe aren't counting on data to be stored in /tmp \"between invocations\".\nThe socket and lock file live only as long as the postmaster runs.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 28 Jan 2001 17:47:23 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone " }, { "msg_contents": "Lamar Owen <[email protected]> writes:\n> I have another question of Peter, Tom, Bruce, or anyone -- is the\n> hard-coded socket location in libpq? If so, wouldn't a dynamically\n> loaded libpq.so bring this location in for _any_ precompiled, not\n> statically-linked, client? 
Or am I missing something else?\n\nAs the 7.1 code presently stands, libpq contains a compiled-in default\nsocketfile location, which the client code hopefully doesn't know about.\nSo, yes, if an old client has a dynamically linked libpq.so then\nreplacing the .so would bring that client into sync with a nonstandard\nserver. However, the pitfalls should be obvious: independently built\nclients, statically linked libraries, differing .so version numbers\nto name three risk areas.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 28 Jan 2001 17:53:10 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Perhaps there could be some sort of /etc/postgresql.conf file that is read\n> by both client and server that can control these sort of aspects.\n\nMaybe ... but it seems to me that still leaves us with the issue of\na single pathname that must be known a-priori to both client and server.\nYou've just changed that path from \"/tmp/...\" to \"/etc/...\".\n\nMoreover, such a setup would make it substantially more painful to run\nmultiple versions of Postgres on a single machine. Right now, as long\nas each version has a different default port number, it works great.\nTry to put the default port number in /etc/postgresql.conf, and you've\ngot a problem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 28 Jan 2001 17:57:54 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone " }, { "msg_contents": "Tom Lane <[email protected]> writes:\n\n> [email protected] (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=) writes:\n> > Ideally, the locks should be in /var/lock/pgsql and the socket\n> > somewhere else - like /var/lib/pgsql (our mysql packages do this, and\n> > both of them are specified in /etc/my.cnf).\n\n(note that you AFAIK (I don't use mysql much, I prefer postgresql) can\nhave multiple sections if you want want to have multiple backends\nrunning. \n\n \n> That is not \"ideal\", in fact it would break one of the specific features\n> that UUNET asked us for. Namely, to be able to have noninterfering\n> sets of socket files in different explicitly-specified directories.\n> If the lock files don't live where the sockets do, then this doesn't\n> work.\n\nI don't see why this must be so...\n \n> > Explictly, yes. However, FHS says /tmp is for temporary files. Also,\n> > it says programs shouldn't count on data to be stored there between\n> > invocations. 10+ days isn't temporary...\n> \n> We aren't counting on data to be stored in /tmp \"between invocations\".\n\nBetween invocations of client programs. You're using /tmp as a shared\nof stored data.\n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n", "msg_date": "28 Jan 2001 17:59:08 -0500", "msg_from": "[email protected] (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone" }, { "msg_contents": "Lamar Owen <[email protected]> writes:\n> How about an environment variable? PGSOCKLOC? Use the hard-coded\n> default if the envvar not set? This way multiple postmasters running on\n> multiple sockets can be smoothly supported.\n\nIt's spelled PGHOST as of 7.1 ... but the discussion here is about what\nthe default behavior of an installation will be, not what you can\noverride it to do. 
\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 28 Jan 2001 17:59:42 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone " }, { "msg_contents": "Tom Lane wrote:\n> So, yes, if an old client has a dynamically linked libpq.so then\n> replacing the .so would bring that client into sync with a nonstandard\n> server.\n\nOf course, with the server and client on the same machine, the server\nand the client dynamic libs are very likely to follow the same\n'non-standard' as the libpq.so is likely to be from the same build or\npackage as the server is.\n\n> However, the pitfalls should be obvious: independently built\n> clients, statically linked libraries, differing .so version numbers\n> to name three risk areas.\n\nThese are real risks, of course. I have personal experience with the\nstatically linked client and differing so version number cases.\n\nAnd, yes, to echo your previous sentiment, if it breaks, the\ndistributor/packager is not the one that gets the compliants -- the\nPostgreSQL community does.\n\nSo, for future discussion, a compromise will have to be arranged -- but\nthis really isn't a 7.1 issue, as this isn't a 'bugfix' per se -- you\nhave fixed the immediate problem. But this is something to consider for\n7.2 or later, as priorities are shuffled.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sun, 28 Jan 2001 18:00:35 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone" }, { "msg_contents": "[email protected] (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=) writes:\n> Explictly, yes. However, FHS says /tmp is for temporary files. Also,\n> it says programs shouldn't count on data to be stored there between\n> invocations. 10+ days isn't temporary...\n>> \n>> We aren't counting on data to be stored in /tmp \"between invocations\".\n\n> Between invocations of client programs. You're using /tmp as a shared\n> of stored data.\n\nHuh? The socket and lockfile are created and held open by the\npostmaster for the duration of its run. Client programs don't even know\nthat the lockfile is there, in fact. How can you argue that client\nprogram lifespan has anything to do with it?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 28 Jan 2001 18:06:39 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone " }, { "msg_contents": "Tom Lane wrote:\n> Lamar Owen <[email protected]> writes:\n> > How about an environment variable? PGSOCKLOC? Use the hard-coded\n> > default if the envvar not set? This way multiple postmasters running on\n> > multiple sockets can be smoothly supported.\n \n> It's spelled PGHOST as of 7.1 ... but the discussion here is about what\n> the default behavior of an installation will be, not what you can\n> override it to do.\n\nI'm talking about Unix domain socket location, not TCP/IP hostname,\nwhich PGHOST is, right?\n\nBut you are very right -- this doesn't help the default. The\nFHS-mandated place for such a configuration file detailing such settings\nis in /etc -- but, of course, we support installations that have been\ninstalled by a non-root user. 
ISTM a 'pg_config --default-socket'\ncommand could be used to find the location, assuming pg_config is on the\nPATH.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sun, 28 Jan 2001 18:13:30 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone" }, { "msg_contents": "Oliver Elphick wrote:\n> No, UNIX sockets are specifically mentioned as belonging under /var/run.\n> In section 5.10 \"/var/run : Run-time variable data\", it says: \"Programs\n> that maintain transient UNIX-domain sockets should place them in this\n> directory.\"\n> \n> So what ever the outcome for the wider PostgreSQL community, I must make\n> the change to conform to Debian policy.\n\nSo, if PostgreSQL is a part of Debian, then there will be problems if\nthe client-server situation isn't somehow fixed to allow robust\nlocation-independent socket finding.\n\nLooks like the same thing is going to happen with RedHat's\ndistribution. So, if this is going to occur, let's get a consensus as\nto where that alternate location (barring some other solution) is going\nto be, so that there are the fewest variants out there.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sun, 28 Jan 2001 18:19:15 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone" }, { "msg_contents": "Lamar Owen wrote:\n> Tom Lane wrote:\n> > It's spelled PGHOST as of 7.1 ... but the discussion here is about what\n> > the default behavior of an installation will be, not what you can\n> > override it to do.\n \n> I'm talking about Unix domain socket location, not TCP/IP hostname,\n> which PGHOST is, right?\n\nFound the code in fe-connect.c that changes that assumption.....sorry\nfor my high density :-).\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sun, 28 Jan 2001 18:28:44 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone" }, { "msg_contents": "Lamar Owen <[email protected]> writes:\n\n> Oliver Elphick wrote:\n> > No, UNIX sockets are specifically mentioned as belonging under /var/run.\n> > In section 5.10 \"/var/run : Run-time variable data\", it says: \"Programs\n> > that maintain transient UNIX-domain sockets should place them in this\n> > directory.\"\n> > \n> > So what ever the outcome for the wider PostgreSQL community, I must make\n> > the change to conform to Debian policy.\n> \n> So, if PostgreSQL is a part of Debian, then there will be problems if\n> the client-server situation isn't somehow fixed to allow robust\n> location-independent socket finding.\n> \n> Looks like the same thing is going to happen with RedHat's\n> distribution. So, if this is going to occur, let's get a consensus as\n> to where that alternate location (barring some other solution) is going\n> to be, so that there are the fewest variants out there.\n\nFHS is a good starting (and end-) point.\n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n", "msg_date": "28 Jan 2001 18:37:39 -0500", "msg_from": "[email protected] (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone" }, { "msg_contents": "Tom Lane <[email protected]> writes:\n\n> [email protected] (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=) writes:\n> > Explictly, yes. However, FHS says /tmp is for temporary files. 
Also,\n> > it says programs shouldn't count on data to be stored there between\n> > invocations. 10+ days isn't temporary...\n> >> \n> >> We aren't counting on data to be stored in /tmp \"between invocations\".\n> \n> > Between invocations of client programs. You're using /tmp as a shared\n> > of stored data.\n> \n> Huh? The socket and lockfile are created and held open by the\n> postmaster for the duration of its run. Client programs don't even know\n> that the lockfile is there, in fact. How can you argue that client\n> program lifespan has anything to do with it?\n\nNothing but the postmaster uses it? If so, there shouldn't be a\nproblem moving it.\n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n", "msg_date": "28 Jan 2001 18:43:51 -0500", "msg_from": "[email protected] (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone" }, { "msg_contents": "> Peter Eisentraut <[email protected]> writes:\n> > Perhaps there could be some sort of /etc/postgresql.conf file that is read\n> > by both client and server that can control these sort of aspects.\n> \n> Maybe ... but it seems to me that still leaves us with the issue of\n> a single pathname that must be known a-priori to both client and server.\n> You've just changed that path from \"/tmp/...\" to \"/etc/...\".\n\nAnd writing to /etc requires root permissions. That is the restriction\nthat got us into /tmp in the first place.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 28 Jan 2001 18:54:18 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone" }, { "msg_contents": "Lamar Owen <[email protected]> writes:\n> Tom Lane wrote:\n>> Lamar Owen <[email protected]> writes:\n> How about an environment variable? PGSOCKLOC?\n \n>> It's spelled PGHOST as of 7.1 ...\n\n> I'm talking about Unix domain socket location, not TCP/IP hostname,\n> which PGHOST is, right?\n\nNo, in 7.1 PGHOST serves a dual purpose. If a hostname beginning with\n\"/\" is given, it's taken to specify Unix-socket communication using a\nsocketfile in the directory whose absolute path is PGHOST. A tad crocky\nbut it avoided having to add an additional parameter to the PQconnect\nfamily of functions ...\n\nAlso, on the postmaster side, there is a postmaster commandline\nparameter to set the directory containing the socket files (and\nlockfiles). So it's possible for a given installation to configure\nthe socketfiles anywhere without modifying the binaries at all.\nBut you do need to set PGHOST on the client side to make this work.\nIt all comes back to what the default is.\n\nBasically, what's bothering me is the idea that the RPM distribution\nwill have a different default socket location than the regular source\ndistribution. 
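(An illustrative aside: the host-as-directory convention described above can be exercised from a C client with stock libpq calls. A minimal sketch, assuming a postmaster whose socket directory has been pointed at /var/run/postgresql -- that path is purely a made-up example of a nonstandard location, not a recommendation:)

#include <stdio.h>
#include "libpq-fe.h"

int
main(void)
{
	/*
	 * A "host" value beginning with '/' is taken as the Unix-socket
	 * directory rather than a TCP hostname (7.1 behaviour); setting
	 * PGHOST=/var/run/postgresql in the environment has the same effect.
	 * The directory shown is an assumed, nonstandard example.
	 */
	PGconn *conn = PQconnectdb("host=/var/run/postgresql port=5432 dbname=template1");

	if (PQstatus(conn) != CONNECTION_OK)
	{
		fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
		PQfinish(conn);
		return 1;
	}
	printf("connected over the Unix socket in the given directory\n");
	PQfinish(conn);
	return 0;
}

(The point of the thread, of course, is what happens when the client is told nothing and falls back on its compiled-in default.)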
I think that will cause a lot more problems than it\nsolves.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 28 Jan 2001 20:00:32 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone " }, { "msg_contents": "\"Oliver Elphick\" <[email protected]> writes:\n> No, UNIX sockets are specifically mentioned as belonging under /var/run.\n> In section 5.10 \"/var/run : Run-time variable data\", it says: \"Programs\n> that maintain transient UNIX-domain sockets should place them in this \n> directory.\"\n\n> So what ever the outcome for the wider PostgreSQL community, I must make\n> the change to conform to Debian policy.\n\nJust out of curiosity, does Debian enforce a nonstandard location for\nX sockets as well?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 28 Jan 2001 20:11:34 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone " }, { "msg_contents": "On Sun, 28 Jan 2001, Tom Lane wrote:\n\n> \"Oliver Elphick\" <[email protected]> writes:\n> > No, UNIX sockets are specifically mentioned as belonging under /var/run.\n> > In section 5.10 \"/var/run : Run-time variable data\", it says: \"Programs\n> > that maintain transient UNIX-domain sockets should place them in this\n> > directory.\"\n>\n> > So what ever the outcome for the wider PostgreSQL community, I must make\n> > the change to conform to Debian policy.\n>\n> Just out of curiosity, does Debian enforce a nonstandard location for\n> X sockets as well?\n\nJust curious here ... there seems to have been *alot* of energy expended\non this ... is there any reason why we don't just have a configuration\noption like other software has, that defaults to /tmp like we have it now,\nbut that makes it easier for others to change it for their installs, as\nrequired?\n\n\n", "msg_date": "Sun, 28 Jan 2001 21:37:56 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone " }, { "msg_contents": "The Hermit Hacker wrote:\n> On Sun, 28 Jan 2001, Tom Lane wrote:\n> > \"Oliver Elphick\" <[email protected]> writes:\n> > > So what ever the outcome for the wider PostgreSQL community, I must make\n> > > the change to conform to Debian policy.\n\n> > Just out of curiosity, does Debian enforce a nonstandard location for\n> > X sockets as well?\n \n> Just curious here ... there seems to have been *alot* of energy expended\n> on this ... is there any reason why we don't just have a configuration\n> option like other software has, that defaults to /tmp like we have it now,\n> but that makes it easier for others to change it for their installs, as\n> required?\n\nIt has touched a nerve, hasn't it? I like your solution -- but, to\nreiterate, this IMHO is 7.2 material, unless we want to go the feature\npatch route, or someone considers this a 'bugfix' (I don't). Unless it\nis a trivial change, that is.\n\nTom fixed the bug with a slight kludge -- by touching the lock\nperiodically, the problem is ameliorated for now. But as long as we\nhave a persistent file in /tmp we will run into OS-dependent problems. 
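(For the record, the "periodic touch" kludge referred to here is nothing more exotic than bumping the lockfile's timestamps so a sweeper sees recent activity. A rough sketch of the idea -- not the actual backend code, and the lockfile name below is only an assumption for a stock port-5432 setup:)

#include <stdio.h>
#include <sys/types.h>
#include <utime.h>

/* Bump atime/mtime to "now"; utime() with a NULL buffer does exactly that. */
static void
touch_socket_lockfile(const char *path)
{
	if (utime(path, NULL) != 0)
		perror(path);
}

int
main(void)
{
	/*
	 * In the real server this would hang off a periodic task such as the
	 * checkpoint cycle; here it is called once purely for illustration.
	 */
	touch_socket_lockfile("/tmp/.s.PGSQL.5432.lock");
	return 0;
}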
\nI can see now a bug report that PostgreSQL is unreliable because it\nkeeps crashing every x days (due to a tmpreaper-like program the hapless\nuser doesn't know is running in cron....).\n\nSince pg_config already reports what configure options were provided, if\nthis is a configure option then the end user can easily find it with\npg_config, if a static linkage or binary-only custom client that\ndirectly accesses the fe-be protocol (are there any that we know\nabout?).\n\nBut we don't need to spend a great deal of time on it, regardless.\nSpeaking of time to spend, are we a 'go' for beta4 yet? ETA? (so I can\nbudget time to rebuild RPM's).\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sun, 28 Jan 2001 21:48:47 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone" }, { "msg_contents": "> It has touched a nerve, hasn't it? I like your solution -- but, to\n> reiterate, this IMHO is 7.2 material, unless we want to go the feature\n> patch route, or someone considers this a 'bugfix' (I don't). Unless it\n> is a trivial change, that is.\n> \n> Tom fixed the bug with a slight kludge -- by touching the lock\n> periodically, the problem is ameliorated for now. But as long as we\n> have a persistent file in /tmp we will run into OS-dependent problems. \n> I can see now a bug report that PostgreSQL is unreliable because it\n> keeps crashing every x days (due to a tmpreaper-like program the hapless\n> user doesn't know is running in cron....).\n> \n> Since pg_config already reports what configure options were provided, if\n> this is a configure option then the end user can easily find it with\n> pg_config, if a static linkage or binary-only custom client that\n> directly accesses the fe-be protocol (are there any that we know\n> about?).\n\nNo one has suggested a location non-root people can put the socket/lock\nfile, except /tmp, and IMHO, until we find one, the default stays in\n/tmp.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 28 Jan 2001 22:04:00 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone" }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> Just curious here ... there seems to have been *alot* of energy expended\n> on this ... is there any reason why we don't just have a configuration\n> option like other software has, that defaults to /tmp like we have it now,\n> but that makes it easier for others to change it for their installs, as\n> required?\n\nWe *have* such a configuration option, both compile-time and run-time,\nas of 7.1. The argument is about what the default should be, and in\nparticular whether it's a good idea for certain binary distributions to\nhave a different default than other distributions ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 28 Jan 2001 22:46:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone " }, { "msg_contents": "Lamar Owen <[email protected]> writes:\n> Tom fixed the bug with a slight kludge -- by touching the lock\n> periodically, the problem is ameliorated for now. 
But as long as we\n> have a persistent file in /tmp we will run into OS-dependent problems. \n> I can see now a bug report that PostgreSQL is unreliable because it\n> keeps crashing every x days\n\nKindly don't exaggerate the importance of this problem. We've been\nrunning systems with the socketfiles in /tmp for years now, and we\nknow quite well what the downsides of that are. They're not that\ndrastic (unless you run a tmp-sweeper too stupid to distinguish socket\nfiles from plain files, and even then you only see failure to connect).\n\nAddition of the socket lockfile to the mix isn't increasing the risks\nmeasurably that I can see. Even if a tmp-sweeper is enthusiastic enough\nto remove a lockfile that the postmaster touches every few minutes, so\nwhat? Client connections don't depend on the lockfile. The lockfile\nonly exists to protect against admin error, ie, starting a second\npostmaster on the same socket --- which in routine operation won't\nhappen anyway.\n\nThe bottom line: yes, /tmp was a poor choice of place to put the\nsocket files. But no, it is not so poor as to be worth creating a\ncompatibility problem to fix it. Perhaps someday we will switch to\na whole new interface protocol (CORBA or whatever floats your boat),\nand then we can let the whole mess drift off into the sunset. But\nright now, there is almost nothing about our FE/BE protocol that\n*isn't* a legacy decision --- and fixing it piecemeal at the cost of\na flag day for each fix is not worthwhile. IMHO anyway.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 28 Jan 2001 23:11:19 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone " }, { "msg_contents": "> The bottom line: yes, /tmp was a poor choice of place to put the\n> socket files. But no, it is not so poor as to be worth creating a\n\nWas it really a poor choice. Where else can we put it as non-root?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 28 Jan 2001 23:24:03 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone" }, { "msg_contents": "On Sun, 28 Jan 2001, Bruce Momjian wrote:\n\n> > The bottom line: yes, /tmp was a poor choice of place to put the\n> > socket files. But no, it is not so poor as to be worth creating a\n>\n> Was it really a poor choice. Where else can we put it as non-root?\n\n~pgsql/var/run? everything else was under that directory structure ...\n\n\n", "msg_date": "Mon, 29 Jan 2001 00:34:10 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone" }, { "msg_contents": "> On Sun, 28 Jan 2001, Bruce Momjian wrote:\n> \n> > > The bottom line: yes, /tmp was a poor choice of place to put the\n> > > socket files. But no, it is not so poor as to be worth creating a\n> >\n> > Was it really a poor choice. Where else can we put it as non-root?\n> \n> ~pgsql/var/run? 
everything else was under that directory structure ...\n\nSure, but how do we hard-code where ~pgsql is located, unless we somehow\ncheck the home directory of pgsql, but then again, they really don't\nhave to use the home directory of pgsql for the pgsql directory.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 28 Jan 2001 23:39:56 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> The bottom line: yes, /tmp was a poor choice of place to put the\n>> socket files. But no, it is not so poor as to be worth creating a\n\n> Was it really a poor choice. Where else can we put it as non-root?\n\nI would've favored something like /tmp/.postgres/socketfile,\nwhich is comparable to typical X11 setups. But it's moot, since\nwe can't change the default path even that much without creating a\ncompatibility headache ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 28 Jan 2001 23:43:11 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> The bottom line: yes, /tmp was a poor choice of place to put the\n> >> socket files. But no, it is not so poor as to be worth creating a\n> \n> > Was it really a poor choice. Where else can we put it as non-root?\n> \n> I would've favored something like /tmp/.postgres/socketfile,\n> which is comparable to typical X11 setups. But it's moot, since\n> we can't change the default path even that much without creating a\n> compatibility headache ...\n\nAgreed, that would have been better, but I don't see that buys us\nanything against the grim reaper.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 28 Jan 2001 23:43:55 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone" }, { "msg_contents": "Bruce Momjian wrote:\n> No one has suggested a location non-root people can put the socket/lock\n> file, except /tmp, and IMHO, until we find one, the default stays in\n> /tmp.\n\nSince RPM's _must_ be installed by root, that doesn't affect them. The\ndebian packages are the same way. As long as the RPM contains the\nstructures set as owned by 'postgres' (the default run user for\npostmaster), and the appropriate permissions are set, the directory\ncould be anywhere, such as /var/run/pgsql. But there are of course\nproblems with that approach, because libpq makes some hard-coded\nassumptions about where to look.\n\nI have no problem with the default being in /tmp, just like I have no\nproblem with the default source installation being in /usr/local. But I\ndo think that the code should be smart enough to handle non-default\nsettings without major problems.\n\nAnd I'll kindly not exaggerate the importance -- but I would have seen\nreports had the simple fix not been applied.\n\nBut I'm not going to spend any more time arguing about it, that much is\ncertain. 
I've got other fish to fry, like beta4 RPM's.....\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sun, 28 Jan 2001 23:44:50 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone" }, { "msg_contents": "> Bruce Momjian wrote:\n> > No one has suggested a location non-root people can put the socket/lock\n> > file, except /tmp, and IMHO, until we find one, the default stays in\n> > /tmp.\n> \n> Since RPM's _must_ be installed by root, that doesn't affect them. The\n> debian packages are the same way. As long as the RPM contains the\n> structures set as owned by 'postgres' (the default run user for\n> postmaster), and the appropriate permissions are set, the directory\n> could be anywhere, such as /var/run/pgsql. But there are of course\n> problems with that approach, because libpq makes some hard-coded\n> assumptions about where to look.\n\nThe issue we have is that we don't assume root installs. Any root\nrequirement is going to be RPM-specific.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 Jan 2001 00:57:02 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone" }, { "msg_contents": "Tom Lane wrote:\n >Just out of curiosity, does Debian enforce a nonstandard location for\n >X sockets as well?\n\nX has a big exception made for it in the FHS for /usr/X11R6; I don't see\nany mention of how /tmp/.X11/ is to be treated, but X isn't my problem!\nI suspect no-one has ever thought to ask the question! You may be\npleased to know that I have now stirred the coals and asked that\nquestion, which applies not only to X but also to several other programs\nthat I have running.\n\nIt seems to me that the main reason for the problem is the need to\ncater for non-root installs. I would really like to know what\nproportion of total PostgreSQL installs they now are. We are\na long way now from the days when Postgres was a research database with\na poor reputation for reliability. It is now becoming a serious\ncompetitor to major commercial databases and will in many cases be\nused as a major application on the machines where it is installed.\n\nAre there still people who can only install PostgreSQL as a private \napplication?\n\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"My son, if sinners entice thee, consent thou not.\" \n Proverbs 1:10 \n\n\n", "msg_date": "Mon, 29 Jan 2001 06:29:17 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: Sure enough, the lock file is gone " }, { "msg_contents": "Bruce Momjian wrote:\n> Lamar Owen wrote: \n> > Bruce Momjian wrote:\n> > > No one has suggested a location non-root people can put the socket/lock\n> > Since RPM's _must_ be installed by root, that doesn't affect them. The\n> The issue we have is that we don't assume root installs. Any root\n> requirement is going to be RPM-specific.\n\n[waiting on another RPM build cycle to finish]\n\nI understand that issue. 
The RPMset is just not affected by that issue,\nas, for an RPM to be installed, you must be root. No ifs ands or buts\n-- an RPM installation assumes root, and can do anything along those\nlines it needs to do.\n\nOf course, that means I have to be extra careful -- people installing\nRPM's I build are going to be running my %pre, %post, %preun, and\n%postun scripts _as_root_. RPM's are quite capable of royally hosing a\nsystem, if the packager hasn't done his homework. But that also means\nmalicious RPMs are possible (horrors.) -- one header in an RPM, hidden\nfrom view, could render your system totally useless. Yes, it's true.\nNo, I won't package an RPM 'bomb' -- but it could be done, easily\nenough.\n\nNow, you can _build_ an RPM (which will be installed as root) as a\nnon-root user, and that invokes the 'make install' as part of its\nprocess, but I digress (badly, at that).\n\nBut my issue is that libpq or any other client should be smart enough to\nnot have to assume the location.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 29 Jan 2001 01:30:35 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone" }, { "msg_contents": "On Mon, 29 Jan 2001, Oliver Elphick wrote:\n\n> It seems to me that the main reason for the problem is the need to\n> cater for non-root installs. I would really like to know what\n\nPostgreSQL will _not_ run as root. Just try... :)\n\nmorannon:~>/usr/local/pgsql/bin/postmaster\n\n\"root\" execution of the PostgreSQL backend is not permitted.\n\nThe backend must be started under its own userid to prevent\na possible system security compromise. See the INSTALL file for\nmore information on how to properly start the postmaster.\n\nmorannon:~>\n\n> proportion of total PostgreSQL installs they now are. We are\n\n100%\n\n> Are there still people who can only install PostgreSQL as a private \n> application?\n\nPrivate application - or non-root application?\n\n\n-- \nDominic J. Eidson\n \"Baruk Khazad! Khazad ai-menu!\" - Gimli\n-------------------------------------------------------------------------------\nhttp://www.the-infinite.org/ http://www.the-infinite.org/~dominic/\n\n", "msg_date": "Mon, 29 Jan 2001 00:34:29 -0600 (CST)", "msg_from": "\"Dominic J. Eidson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone " }, { "msg_contents": "Lamar Owen <[email protected]> writes:\n> But my issue is that libpq or any other client should be smart enough to\n> not have to assume the location.\n\nEr, how do you propose to do that? The client cannot learn the correct\nlocation from the postmaster --- it must figure out *on its own* where\nthe socket file is. AFAICS you can't avoid having hardwired knowledge\nabout how to do that in the client.\n\nYou or somebody else previously suggested hardwiring the location of\na configuration file, rather than the socketfile itself, but I can't\nsee that that really improves matters in this context. In particular,\nchanging to such a method would still break backwards compatibility.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 29 Jan 2001 01:44:41 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone " }, { "msg_contents": "\"Dominic J. 
Eidson\" wrote:\n >On Mon, 29 Jan 2001, Oliver Elphick wrote:\n >\n >> It seems to me that the main reason for the problem is the need to\n >> cater for non-root installs. I would really like to know what\n >\n >PostgreSQL will _not_ run as root. Just try... :)\n \nI'm talking about the installation, not execution.\n\nBy default, PostgreSQL will install in /usr/local/pgsql; most people\nwill need root privilege to create that directory.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"My son, if sinners entice thee, consent thou not.\" \n Proverbs 1:10 \n\n\n", "msg_date": "Mon, 29 Jan 2001 06:50:03 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: Sure enough, the lock file is gone " }, { "msg_contents": "Tom Lane wrote:\n> Lamar Owen <[email protected]> writes:\n> > But my issue is that libpq or any other client should be smart enough to\n> > not have to assume the location.\n> Er, how do you propose to do that? The client cannot learn the correct\n> location from the postmaster --- it must figure out *on its own* where\n> the socket file is. AFAICS you can't avoid having hardwired knowledge\n> about how to do that in the client.\n\nHow does netstat find out? A simple \nnetstat -a --unix|grep \\.s\\.PGSQL\nwill get the full list of all postmaster sockets. A little 'cut' or\n'awk' work is simple enough.\n\nI realize, of course, that netstat (and the underlying, on Linux,\n/proc/net/unix file) is not portable. On Linux, simply grep\n/proc/net/unix for the .s.PGSQL pattern and get the last column (or the\ncolumn before that, with the inode information).\n\nIs there a portable way of listing the open unix domain sockets in this\nmanner, then deducing from the socket path what you need to know?\n\n> You or somebody else previously suggested hardwiring the location of\n> a configuration file, rather than the socketfile itself, but I can't\n> see that that really improves matters in this context. In particular,\n> changing to such a method would still break backwards compatibility.\n\nNot me. The less hardwiring, the better, IMHO. And I'm glad you pointed\nme to the new (undocumented that I could find) usage of PGHOST. A\ndynamic socket finder (assuming no specific socket path has been passed)\nwould not break backwards compatibility, as it would find the default\n/tmp case.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 29 Jan 2001 02:08:01 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone" }, { "msg_contents": "Lamar Owen <[email protected]> writes:\n> How does netstat find out?\n\nnetstat burrows around in kernel datastructures, is how.\n\nI don't see invoking netstat as a solution anyway. For one thing,\nit's drastically nonstandard; even if available, it varies in parameters\nand output format (your \"simple example\" draws a usage complaint on my\nbox). 
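(To make the "ask the kernel" idea quoted above concrete: on Linux the same information netstat prints can be read straight from /proc/net/unix. A hedged, Linux-only sketch -- it inherits every caveat raised here: not portable, possibly restricted by the admin, and no way to choose among several postmasters:)

#include <stdio.h>
#include <string.h>

int
main(void)
{
	FILE   *fp = fopen("/proc/net/unix", "r");
	char	line[1024];

	if (fp == NULL)
	{
		perror("/proc/net/unix");
		return 1;
	}
	while (fgets(line, sizeof(line), fp) != NULL)
	{
		/* the bound socket path, if any, is the trailing field */
		char   *path = strchr(line, '/');

		if (path != NULL && strstr(path, ".s.PGSQL.") != NULL)
			fputs(path, stdout);	/* line already ends with '\n' */
	}
	fclose(fp);
	return 0;
}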
Furthermore, a moderately security-conscious admin would probably\nchoose to make netstat unavailable to unprivileged users, since it\nreveals an uncomfortably large amount about the activity of other users.\nA final complaint is that netstat would actually reveal *too much*\ninformation, since a netstat-based client would have no way to choose\namong multiple postmasters. (Please recall that one of the motivations\nfor the UUNET patch was to allow multiple postmasters running with the\nsame port number in different subdirectories. Hmm, I wonder how netstat\nshows socketfiles that are in chroot'd subtrees, or outside your own\nchroot ...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 29 Jan 2001 02:34:29 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone " }, { "msg_contents": "Tom Lane wrote:\n> Lamar Owen <[email protected]> writes:\n> > How does netstat find out?\n \n> netstat burrows around in kernel datastructures, is how.\n \n> I don't see invoking netstat as a solution anyway. For one thing,\n> it's drastically nonstandard; even if available, it varies in parameters\n\nI said as much as it wasn't portable. But I asked also if a portable\nway was available -- I do not currently know that answer to that, but I\nwill be investigating such.\n\n> (Please recall that one of the motivations\n> for the UUNET patch was to allow multiple postmasters running with the\n> same port number in different subdirectories. Hmm, I wonder how netstat\n> shows socketfiles that are in chroot'd subtrees, or outside your own\n> chroot ...)\n\nWhen were these 'UUNET' patches issued? I like the idea, but just\ncurious. I don't recall them, in fact -- nor do I recall the\ndiscussion. I'll look it up in the archives later. Going to bed after a\nnight of RPM'ing.\n\nAs to the chroot vs netstat question, that is a good one. I have no\nchroot's in effect, so I can't test that one.\n\nSo, if multiple postmasters are running on the same port in different\ndirs, it would be somewhat difficult to determine which should be the\n'default' in the list. However, one would think an admin who has set up\na multiple postmaster system of that sort wouldn't be relying on a\ndefault anyway -- but that is a dangerous assumption.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 29 Jan 2001 02:43:55 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Sure enough, the lock file is gone" }, { "msg_contents": "On Fri, Jan 26, 2001 at 11:55:16PM -0500, Bruce Momjian wrote:\n> > I said:\n> > > Yes, there are lots of systems that will clean /tmp --- and since the\n> > > lock file is an ordinary file (not a socket) pretty much any tmp-cleaner\n> > > is going to decide to remove it. I think that I had intended to insert\n> > > a periodic touch of the lockfile and forgot to.\n> > \n> > Done now.\n> \n> Yes, checkpoint is a good place to put it. Thanks. I still liked the\n> BRAINDAMAGED_TMP_CLEANER though.\n> \n\nWell, from reading the rest of the thread, it seems that, while tmpreaper\n_is_, in fact, brain damaged (it'll eat socket files) that wasn't what we\nwere talking about: removal of apparently stale ordinary files, like our\nlockfile, is what tmp cleaners are all about, even if it is problematic.\n\nPersonally, I've always thought tmp cleaners are a bad idea, a bandaid\napproach to cleaning up after poorly written software: i.e. broken by\ndesign. 
Well, it's resolved now, for pgsql. Can't fix all the software\nout there, I guess.\n\nRoss\n", "msg_date": "Mon, 29 Jan 2001 10:43:52 -0600", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sure enough, the lock file is gone" }, { "msg_contents": "I finally percated that when data contains '�' or '�' it's impossible to\nparse trought:\n\nCOPY products FROM '/var/lib/postgres/dadesi.txt' USING DELIMITERS '|' \\g\n\nit causes:\n\nSELECT edicion FROM products;\n edicion \n-----------------\n Espa�a|Nacional <-------puts on the same cell either there's an '|' in\nthe middle!!!\n\n\nbut changing '�' for n\n\nSELECT edicion FROM products;\n edicion \n-----------------\n Espana <---------------it separates cells ok\n\n\nso what's my solution for a text to COPY containing such characters? \n\n\nbest regards,\njaume\n", "msg_date": "Mon, 26 Feb 2001 14:28:58 +0100", "msg_from": "Jaume Teixi <[email protected]>", "msg_from_op": false, "msg_subject": "COPY doesn't works when containing '���' or '���' characters on db" }, { "msg_contents": "\nMorning all ...\n\n\tAre there any major outstandings that ppl have on their plates,\nthat should prevent a release? I'd like to put out an RC1 by Friday this\nweek, with a full release schedualed for March 15th ... this would give\nThomas his two weeks for the docs freeze ...\n\n\tBasically, RC1 would say to ppl that we're ready to release, there\nwill be no more core changes that will require an initdb ... feel\ncomfortable using this version in production, with the only major changes\nbetween now and release being docs related ...\n\n\tDoes this work? Or is there something earth-shattering that still\nhas to be done?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n\n", "msg_date": "Mon, 26 Feb 2001 11:52:33 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Release in 2 weeks ... " }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> Morning all ...\n> \n> Are there any major outstandings that ppl have on their plates,\n> that should prevent a release? I'd like to put out an RC1 by Friday this\n> week, with a full release schedualed for March 15th ... this would give\n> Thomas his two weeks for the docs freeze ...\n> \n> Basically, RC1 would say to ppl that we're ready to release, there\n> will be no more core changes that will require an initdb ... feel\n> comfortable using this version in production, with the only major changes\n> between now and release being docs related ...\n> \n> Does this work? Or is there something earth-shattering that still\n> has to be done?\n\n\nYep ! As of beta4, the ODBC driver is still seriously broken (the\noriginal libpsqlodbc.so.0.26 doesn't even connect. A version patched by\nNick Gorham allows some connectivity (you can query the DB), but still\nhas some serious breakage (i. e. no \"obvious\" ways to see views from\nStarOffice or MS-Access)).\n\nAnd I have not yet had any opportunity to test the JDBC driver.\n\n[ Explanation : I follow the Debian packages prepared by Oliver Elphick,\nI'm not versed enough in Debian to recreate those packages myself, and I\ndo *not* want to break Debian dependencies by installing Postgres \"The\nWrong Way (TM)\". Hence, I'm stuck with beta4, a broken ODBC and no JDBC.\nUnless some kind soul can send me a JD. 
1.1 .jar file ...\n\nFurthermore, I've had some serious hardware troubles (a dying IDE disk).\nI wasn't even able to fulfill Tom Lane's suggestion to try to add -d2 to\nmy postmaster to debug the ODBC connection. I'll try to do that Real\nSoon Now (TM, again), but not for now : my day-work backlog is ...\nimpressive. ]\n\nThese issues might seem small change to you die-hard plpgsql hackers. To\na lmot of people using Postgres for everyday office work through \"nice\"\ninterface, it's bread-and-butter, and these issues *should* be fixed\n*before* release ...\n\n[ crawling back under my rock ... ]\n\n\t\t\t\t\tEmmanuel Charpentier\n", "msg_date": "Mon, 26 Feb 2001 20:24:01 +0100", "msg_from": "Emmanuel Charpentier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Release in 2 weeks ..." }, { "msg_contents": "Jaume Teixi <[email protected]> writes:\n> I finally percated that when data contains '�' or '�' it's impossible to\n> parse trought:\n\n> COPY products FROM '/var/lib/postgres/dadesi.txt' USING DELIMITERS '|' \\g\n\n> it causes:\n\n> SELECT edicion FROM products;\n> edicion \n> -----------------\n> Espa�a|Nacional <-------puts on the same cell either there's an '|' in\n> the middle!!!\n\nVery odd. What LOCALE and multibyte encodings are you using, if any?\nThis seems like it must be a multibyte issue, but I can't guess what.\n\nAlso, which Postgres version are you running? If you said, I missed it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Feb 2001 22:16:35 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY doesn't works when containing ' ' or ' ' characters on db " }, { "msg_contents": "On Mon, 26 Feb 2001 22:16:35 -0500 Tom Lane <[email protected]> wrote:\n\n> Jaume Teixi <[email protected]> writes:\n> > I finally percated that when data contains '�' or '�' it's impossible\nto\n> > parse trought:\n> \n> > COPY products FROM '/var/lib/postgres/dadesi.txt' USING DELIMITERS '|'\n\\g\n> \n> > it causes:\n> \n> > SELECT edicion FROM products;\n> > edicion \n> > -----------------\n> > Espa�a|Nacional <-------puts on the same cell either there's an '|'\nin\n> > the middle!!!\n\n\nI finally, thanks to Oliver Elphick,\n\nmanaged to create database with:\n\tCREATE DATABASE \"demo\" WITH ENCODING = 'SQL_ASCII'\n\nand data was imported OK, great, thanks!\n\n", "msg_date": "Tue, 27 Feb 2001 10:19:12 +0100", "msg_from": "Jaume Teixi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SOLVED: COPY doesn't works when containing ' ' or ' ' characters\n\ton db" }, { "msg_contents": "Tom Lane wrote:\n >Jaume Teixi <[email protected]> writes:\n >> I finally percated that when data contains '' or '' it's impossible to\n >> parse trought:\n >\n >> COPY products FROM '/var/lib/postgres/dadesi.txt' USING DELIMITERS '|' \\g\n >\n >> it causes:\n >\n >> SELECT edicion FROM products;\n >> edicion \n >> -----------------\n >> Espaa|Nacional <-------puts on the same cell either there's an '|' in\n >> the middle!!!\n >\n >Very odd. What LOCALE and multibyte encodings are you using, if any?\n >This seems like it must be a multibyte issue, but I can't guess what.\n >\n >Also, which Postgres version are you running? If you said, I missed it.\n\nI think this happens when the front-end encoding is SQL_ASCII and the\ndatabase is using UNICODE. 
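(A small sketch of that mixup, assuming the raw data is Latin-1, where 0xF1 is the n-tilde in "España": stored verbatim in a UNICODE database, the high byte is read back as a multibyte lead and the bytes after it -- including the '|' delimiter -- get swallowed. The length test below mirrors the simplified rule quoted later in this thread, not a complete UTF-8 decoder:)

#include <stdio.h>

/* simplified lead-byte test: & 0xe0 -> 3 bytes, & 0xc0 -> 2 bytes, else 1 */
static int
utf8_seq_len(unsigned char b)
{
	if ((b & 0xe0) == 0xe0)
		return 3;
	if ((b & 0xc0) == 0xc0)
		return 2;
	return 1;
}

int
main(void)
{
	/*
	 * Latin-1 bytes for "España|Nacional"; the literal is split so the
	 * \xf1 escape does not absorb the following 'a'.
	 */
	const unsigned char field[] = "Espa\xf1" "a|Nacional";
	int		i = 0;

	while (field[i] != '\0')
	{
		int		len = utf8_seq_len(field[i]);

		printf("byte 0x%02x taken as a %d-byte sequence\n", field[i], len);
		i += len;		/* at 0xf1 this steps over 'a' and '|' too */
	}
	return 0;
}

(Creating the database with an encoding that matches the data, as Jaume ends up doing above with SQL_ASCII, sidesteps the problem.)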
Then, there are misunderstandings between\nfront-end and back-end, so that a single character with the eighth bit\nset may be sent by the front-end and interpreted by the back-end as the\nfirst half of a UNICODE two-byte character.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"If we confess our sins, he is faithful and just to \n forgive us our sins, and to cleanse us from all \n unrighteousness.\" I John 1:9 \n\n\n", "msg_date": "Tue, 27 Feb 2001 15:38:15 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: COPY doesn't works when containing ' ' or ' ' characters\n on db" }, { "msg_contents": "The Hermit Hacker writes:\n\n> \tAre there any major outstandings that ppl have on their plates,\n> that should prevent a release? I'd like to put out an RC1 by Friday this\n> week, with a full release schedualed for March 15th ... this would give\n> Thomas his two weeks for the docs freeze ...\n\nI'm interested to know what exactly takes two weeks with the docs and what\ncould be done to speed it up.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Tue, 27 Feb 2001 17:54:18 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Release in 2 weeks ... " }, { "msg_contents": "\"Oliver Elphick\" <[email protected]> writes:\n> I think this happens when the front-end encoding is SQL_ASCII and the\n> database is using UNICODE. Then, there are misunderstandings between\n> front-end and back-end, so that a single character with the eighth bit\n> set may be sent by the front-end and interpreted by the back-end as the\n> first half of a UNICODE two-byte character.\n\nI wondered about that, but his examples had one or more characters\nbetween the eighth-bit-set character and the '|', so this doesn't seem\nto explain the problem.\n\nStill, if it went away after moving to ASCII encoding, it clearly is\na multibyte issue of some sort.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Feb 2001 12:19:01 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY doesn't works when containing ' ' or ' ' characters on db " }, { "msg_contents": "Emmanuel Charpentier <[email protected]> writes:\n> Yep ! As of beta4, the ODBC driver is still seriously broken (the\n> original libpsqlodbc.so.0.26 doesn't even connect. A version patched by\n> Nick Gorham allows some connectivity (you can query the DB), but still\n> has some serious breakage (i. e. no \"obvious\" ways to see views from\n> StarOffice or MS-Access)).\n\nI'd be willing to work harder on ODBC if I had any way to test it ;-).\n\nI have a copy of OpenOffice for LinuxPPC but have not figured out how to\ntell it to connect to Postgres. If someone can slip me a clue on how to\nconfigure it and do simple database stuff with it, I'll try to clean up\nthe most pressing ODBC problems before we release.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Feb 2001 14:21:22 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Release in 2 weeks ... " }, { "msg_contents": "Tom Lane wrote:\n> \n> Emmanuel Charpentier <[email protected]> writes:\n> > Yep ! 
As of beta4, the ODBC driver is still seriously broken (the\n> > original libpsqlodbc.so.0.26 doesn't even connect. A version patched by\n> > Nick Gorham allows some connectivity (you can query the DB), but still\n> > has some serious breakage (i. e. no \"obvious\" ways to see views from\n> > StarOffice or MS-Access)).\n> \n\nI think I've fixed this bug at least for MS-Access.\nYou could get the latest win32 driver from\n ftp://ftp.greatbridge.org/pub/pgadmin/stable/psqlodbc.zip .\nPlease try it.\n\nHowever I'm not sure about unixODBC.\n\nRegards,\nHiroshi Inoue\n", "msg_date": "Wed, 28 Feb 2001 08:53:31 +0900", "msg_from": "Hiroshi Inoue <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ODBC] Re: Release in 2 weeks ..." }, { "msg_contents": "> \"Oliver Elphick\" <[email protected]> writes:\n> > I think this happens when the front-end encoding is SQL_ASCII and the\n> > database is using UNICODE. Then, there are misunderstandings between\n> > front-end and back-end, so that a single character with the eighth bit\n> > set may be sent by the front-end and interpreted by the back-end as the\n> > first half of a UNICODE two-byte character.\n> \n> I wondered about that, but his examples had one or more characters\n> between the eighth-bit-set character and the '|', so this doesn't seem\n> to explain the problem.\n\nNo.\n\n From Jaume's example:\n\n> SELECT edicion FROM products;\n> edicion \n> -----------------\n> Espa���a|Nacional <-------puts on the same cell either there's an '|' in\n> the middle!!!\n\n\\361 == 0xf1. UTF-8 assumes that:\n\n if (the first byte) & 0xe0 == 0xe0, then the letter consists of 3\n bytes.\n\nSo PostgreSQL believes that \"���a|\" is one UTF-8 letter and eat up\n'|'.\n\nMy guess is Jaume made an UNICODE database but provided it ISO 8859-1\nor that kind of single-byte latin encoding data.\n\nI'm wondering why so many people are using UTF-8 database even he does\nnot understand what UTF-8 is:-) I hope 7.1 would solve this kind of\nconfusion by enabling an automatic encoding conversion between UTF-8\nand others.\n--\nTatsuo Ishii\n", "msg_date": "Wed, 28 Feb 2001 10:01:34 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: COPY doesn't works when containing ' '\n\tor ' ' characters on db" }, { "msg_contents": "> > Are there any major outstandings that ppl have on their plates,\n> > that should prevent a release? I'd like to put out an RC1 by Friday this\n> > week, with a full release schedualed for March 15th ... this would give\n> > Thomas his two weeks for the docs freeze ...\n> I'm interested to know what exactly takes two weeks with the docs and what\n> could be done to speed it up.\n\nThe \"official\" version of the story is that it takes ~10-20 hours for me\nto work through the docs to format them for hardcopy with ApplixWare,\nprimarily because something in the jade RTF tickles a bug in the page\nformatting with Applix. (This round, I'll resort even to M$Word to avoid\nthat time sink, since I just don't have the time.)\n\nThe reality is that it is a two week quiet time for us to get the last\nbugs out and to get the last platform-specific reports. At this moment\nwe have not started the \"report now or risk having a broken platform\"\nthreats that help iron out the last problems.\n\nScrappy has proposed that we start that period now. 
Were the concerns\nabout WAL etc enough to hold off on that, or are we counting down from\nnow?\n\n - Thomas\n", "msg_date": "Wed, 28 Feb 2001 03:07:06 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Release in 2 weeks ..." }, { "msg_contents": "> I have a copy of OpenOffice for LinuxPPC but have not figured out how to\n> tell it to connect to Postgres. If someone can slip me a clue on how to\n> configure it and do simple database stuff with it, I'll try to clean up\n> the most pressing ODBC problems before we release.\n\nI've got a clue for ApplixWare, if you happen to have that package\n(US$90).\n\n - Thomas\n", "msg_date": "Wed, 28 Feb 2001 03:21:16 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ODBC] Re: Release in 2 weeks ..." }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> I'm interested to know what exactly takes two weeks with the docs and what\n>> could be done to speed it up.\n\n> The \"official\" version of the story is that it takes ~10-20 hours for me\n> to work through the docs to format them for hardcopy with ApplixWare,\n> primarily because something in the jade RTF tickles a bug in the page\n> formatting with Applix.\n\nI'm sure anything that could be done to eliminate this formatting\nmake-work would be just fine with Thomas ;-). However, it probably\nwouldn't really change the release scheduling much, since as he points\nout it's partially an excuse for clamping down:\n\n> The reality is that it is a two week quiet time for us to get the last\n> bugs out and to get the last platform-specific reports.\n\nIn short, now is our \"okay people, let's get *serious*\" phase.\nNo features, no trivial stuff, just get the critical bugs out.\n\n> Scrappy has proposed that we start that period now. Were the concerns\n> about WAL etc enough to hold off on that, or are we counting down from\n> now?\n\nI'm pretty concerned about WAL, but have no good reason not to start\nthe release countdown.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Feb 2001 23:19:00 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Release in 2 weeks ... " }, { "msg_contents": "On Tue, 27 Feb 2001, Tom Lane wrote:\n\n> > Scrappy has proposed that we start that period now. Were the concerns\n> > about WAL etc enough to hold off on that, or are we counting down from\n> > now?\n>\n> I'm pretty concerned about WAL, but have no good reason not to start\n> the release countdown.\n\nFiguring a 15th of March release right now, Vadim is back on the 6th (or\nso), so that would essentially be the last 'critical bug' ...\n\nJust curious ... Vadim posted yesterday about 'fixes' for WAL related\nstuff ... stuff he wanted to ppl to try out ... has anyone? I didn't see\nanyone respond to his post, so am wondering if nobody but myself saw it\n...\n\n\n", "msg_date": "Wed, 28 Feb 2001 00:30:39 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Release in 2 weeks ... " }, { "msg_contents": "Thomas Lockhart wrote:\n >> I have a copy of OpenOffice for LinuxPPC but have not figured out how to\n >> tell it to connect to Postgres. 
If someone can slip me a clue on how to\n >> configure it and do simple database stuff with it, I'll try to clean up\n >> the most pressing ODBC problems before we release.\n >\n >I've got a clue for ApplixWare, if you happen to have that package\n >(US$90).\n \nPlease post it, Thomas.\n\nI got nowhere following their instructions.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"These things have I written unto you that believe on \n the name of the Son of God; that ye may know that ye \n have eternal life, and that ye may believe on the name\n of the Son of God.\" I John 5:13 \n\n\n", "msg_date": "Wed, 28 Feb 2001 06:57:50 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [ODBC] Re: Release in 2 weeks ... " }, { "msg_contents": "> >I've got a clue for ApplixWare, if you happen to have that package\n> >(US$90).\n> Please post it, Thomas.\n> I got nowhere following their instructions.\n\nUh, who's instructions? We have a writeup on Applix and ODBC in the\ndocs. Have you found those, or are those falling short of helpful?\n\n - Thomas\n", "msg_date": "Wed, 28 Feb 2001 07:26:43 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ODBC] Re: Release in 2 weeks ..." }, { "msg_contents": "Thomas Lockhart writes:\n\n> The \"official\" version of the story is that it takes ~10-20 hours for me\n> to work through the docs to format them for hardcopy with ApplixWare,\n\nOkay, I just kept hearing the \"give Thomas 2 weeks for the docs\" theme...\n\n> primarily because something in the jade RTF tickles a bug in the page\n> formatting with Applix. (This round, I'll resort even to M$Word to avoid\n> that time sink, since I just don't have the time.)\n\nIs that the same MS Word that generates Postscript files as a big bitmap?\n\nI suppose by the time we release the 10th anniversary edition, the XML/XSL\narchitecture will be mature enough to produce printable files that way,\nbut until then -- whatever works.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Wed, 28 Feb 2001 21:13:40 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Release in 2 weeks ..." }, { "msg_contents": "> > primarily because something in the jade RTF tickles a bug in the page\n> > formatting with Applix. (This round, I'll resort even to M$Word to avoid\n> > that time sink, since I just don't have the time.)\n> Is that the same MS Word that generates Postscript files as a big bitmap?\n> I suppose by the time we release the 10th anniversary edition, the XML/XSL\n> architecture will be mature enough to produce printable files that way,\n> but until then -- whatever works.\n\nI'm not counting on it even then. Some \"last minute markup\" will always\nbe required imho. But I dream about it ;)\n\n - Thomas\n", "msg_date": "Wed, 28 Feb 2001 22:07:31 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Release in 2 weeks ..." }, { "msg_contents": "I haven't been following this thread very carefully but I just remembered a\nsimilar problem we had that is probably related. We did a dump from a UTF-8\ndb containind English, Japanese, and Korean data. 
When the dump was done in\nthe default mode (e.g., via COPY statements) then we could not restore it. It\nwould die on certain characters. We then tried dumping with the -nd flags.\nThis fixed the problem for us although the restore is a lot slower.\n\n--Rainer\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Tom Lane\n> Sent: Tuesday, February 27, 2001 12:16 PM\n> To: Jaume Teixi\n> Cc: [email protected]; [email protected]; Richard T.\n> Robino; Stefan Huber\n> Subject: Re: [ADMIN] COPY doesn't works when containing ' ' or ' '\n> characters on db\n>\n>\n> Jaume Teixi <[email protected]> writes:\n> > I finally percated that when data contains '�' or '�' it's impossible to\n> > parse trought:\n>\n> > COPY products FROM '/var/lib/postgres/dadesi.txt' USING\n> DELIMITERS '|' \\g\n>\n> > it causes:\n>\n> > SELECT edicion FROM products;\n> > edicion\n> > -----------------\n> > Espa�a|Nacional <-------puts on the same cell either there's an '|' in\n> > the middle!!!\n>\n> Very odd. What LOCALE and multibyte encodings are you using, if any?\n> This seems like it must be a multibyte issue, but I can't guess what.\n>\n> Also, which Postgres version are you running? If you said, I missed it.\n>\n> \t\t\tregards, tom lane\n", "msg_date": "Thu, 1 Mar 2001 08:01:32 +0900", "msg_from": "\"Rainer Mager\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: COPY doesn't works when containing ' ' or ' ' characters on db " }, { "msg_contents": "On Wed, Feb 28, 2001 at 08:53:31AM +0900, Hiroshi Inoue wrote:\n... \n> I think I've fixed this bug at least for MS-Access.\n> You could get the latest win32 driver from\n> ftp://ftp.greatbridge.org/pub/pgadmin/stable/psqlodbc.zip .\n> Please try it.\n\nHow can I just install that file? (ie., M$ Access -> psqlodbc.dll -> real OS)\n\n===== aside:\n\nI just tried installing pgAdmin - the installer says:\n\nThis setup requires at least version 2.5 of the Microsoft Data Access\nComponents (MDAC) to be installed first. If the MDAC installer\n(mdac_typ.exe) is not provided with this setup, you can find it on the\nMicrosoft web site (www.microsoft.com)\n\nAnd after searching said website,\nhttp://www.microsoft.com/data/download2.htm\nshows:\n\nMicrosoft Data Access Components MDAC 2.1.1.3711.11 < 2.5...\n\nCheers,\n\nPatrick\n", "msg_date": "Thu, 1 Mar 2001 00:55:29 +0000", "msg_from": "Patrick Welche <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] Release in 2 weeks ..." }, { "msg_contents": "Patrick Welche wrote:\n> \n> On Wed, Feb 28, 2001 at 08:53:31AM +0900, Hiroshi Inoue wrote:\n> ...\n> > I think I've fixed this bug at least for MS-Access.\n> > You could get the latest win32 driver from\n> > ftp://ftp.greatbridge.org/pub/pgadmin/stable/psqlodbc.zip .\n> > Please try it.\n> \n> How can I just install that file? (ie., M$ Access -> psqlodbc.dll -> real OS)\n> \n\nI don't know if M$-access requires MDAC now(it didn't require\nMDAC before). I use ADO and don't use M$-access other than\ntesting. ADO requires MDAC and pgAdmin uses ADO AFAIK.\n\n> ===== aside:\n> \n> I just tried installing pgAdmin - the installer says:\n> \n> This setup requires at least version 2.5 of the Microsoft Data Access\n> Components (MDAC) to be installed first. 
If the MDAC installer\n> (mdac_typ.exe) is not provided with this setup, you can find it on the\n> Microsoft web site (www.microsoft.com)\n> \n> And after searching said website,\n> http://www.microsoft.com/data/download2.htm\n> shows:\n> \n> Microsoft Data Access Components MDAC 2.1.1.3711.11 < 2.5...\n> \n\nI can see the following at http://www.microsoft.com/data/download.htm\n\nData Access Components (MDAC) redistribution releases.\nFive releases of MDAC are available here: The new MDAC\n2.6, two of MDAC 2.5, and two of MDAC 2.1. You can\n\nRegards,\nHiroshi Inoue\n", "msg_date": "Thu, 01 Mar 2001 11:05:02 +0900", "msg_from": "Hiroshi Inoue <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ODBC] Re: Release in 2 weeks ..." }, { "msg_contents": "At 11:52 26/02/01 -0400, The Hermit Hacker wrote:\n\n>Morning all ...\n>\n> Are there any major outstandings that ppl have on their plates,\n>that should prevent a release? I'd like to put out an RC1 by Friday this\n>week, with a full release schedualed for March 15th ... this would give\n>Thomas his two weeks for the docs freeze ...\n\nIt will also give me a little extra time. This week has been a tad busy \nwork wise for me I've not been able to do anything at all (only now have I \nbeen able to find time to download the 800 emails that were waiting for me \n:-( )\n\n\n> Basically, RC1 would say to ppl that we're ready to release, there\n>will be no more core changes that will require an initdb ... feel\n>comfortable using this version in production, with the only major changes\n>between now and release being docs related ...\n>\n> Does this work? Or is there something earth-shattering that still\n>has to be done?\n\nNot on my front except:\n\nJDBC1.2 driver needs testing (still can't get JDK1.1.8 to install here).\nThe JDBC 2.1 Enterprise Edition driver also needs some testing.\n\nThe JDBC2.1 Standard Edition driver is ready. Some new patches to look at.\n\nPS: Did you know we are only 1 thing short of being JDBC compliant with the \nJDBC2.1 SE driver?\n\nThe other not implemented bits are extras not technically (according to the \nspec) needed for compliance. But then there's not many of them either \n(about 11 at last count excluding CallableStatement - which isn't required \nwhich I was surprised about when I check it last weekend).\n\nPeter\n\n", "msg_date": "Thu, 01 Mar 2001 20:13:47 +0000", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Release in 2 weeks ... " }, { "msg_contents": "At 17:54 27/02/01 +0100, Peter Eisentraut wrote:\n>The Hermit Hacker writes:\n>\n> > Are there any major outstandings that ppl have on their plates,\n> > that should prevent a release? I'd like to put out an RC1 by Friday this\n> > week, with a full release schedualed for March 15th ... this would give\n> > Thomas his two weeks for the docs freeze ...\n>\n>I'm interested to know what exactly takes two weeks with the docs and what\n>could be done to speed it up.\n\n\nIsn't it the typsetting for the postscript/pdf docs? docbook doesn't handle \ntables too well in those cases and its easier to do them by hand?\n\nPeter\n\n\n>--\n>Peter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Thu, 01 Mar 2001 20:18:37 +0000", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Release in 2 weeks ... 
" }, { "msg_contents": "> >I've got a clue for ApplixWare, if you happen to have that package\n> >(US$90).\n> Please post it, Thomas.\n> I got nowhere following their instructions.\n\nHave you looked at *our* instructions in the chapter on ODBC? I haven't\ndone much with it in quite a while, but afaik it all should still work.\n\nI would have expected Cary O'Brien (sp? name?? Done from memory: sorry\n\"aka Cary\" :/ to have spoken up if things have broken, so the\ninstructions should still be good.\n\n - Thomas\n", "msg_date": "Fri, 02 Mar 2001 16:28:39 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] Release in 2 weeks ..." }, { "msg_contents": "First, a warning is in order. This will modify your registry. I have no\nidea how it might behave on win2000, but should work for win9x. As\nalways, it's wise to backup your registry first. Use at your own risk.\n\nIf you already have some version of the PG odbc driver installed, you can\njust copy psqlodbc.dll over the old one. If not, you have to \"install\"\nit. There is an installer if you want to use it, but I think it only has\nthe old version so you will have to copy the new psqlodbc.dll over this\nold one. Or, if you want to, you can use the attached .reg file to modify\nyour registry. I'm told that this is all the installer does anyway. \nGive it a try.\n\n-Cedar\n\n\nOn Thu, 1 Mar 2001, Patrick Welche wrote:\n\n> On Wed, Feb 28, 2001 at 08:53:31AM +0900, Hiroshi Inoue wrote:\n> ... \n> > I think I've fixed this bug at least for MS-Access.\n> > You could get the latest win32 driver from\n> > ftp://ftp.greatbridge.org/pub/pgadmin/stable/psqlodbc.zip .\n> > Please try it.\n> \n> How can I just install that file? (ie., M$ Access -> psqlodbc.dll -> real OS)", "msg_date": "Fri, 2 Mar 2001 21:37:35 +0200 (IST)", "msg_from": "Cedar Cox <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] Release in 2 weeks ..." }, { "msg_contents": "On Thu, Mar 01, 2001 at 11:05:02AM +0900, Hiroshi Inoue wrote:\n> Patrick Welche wrote:\n> > \n> > On Wed, Feb 28, 2001 at 08:53:31AM +0900, Hiroshi Inoue wrote:\n> > ...\n> > > I think I've fixed this bug at least for MS-Access.\n> > > You could get the latest win32 driver from\n> > > ftp://ftp.greatbridge.org/pub/pgadmin/stable/psqlodbc.zip .\n> > > Please try it.\n> > \n> > How can I just install that file? (ie., M$ Access -> psqlodbc.dll -> real OS)\n> > \n> \n> I don't know if M$-access requires MDAC now(it didn't require\n> MDAC before). I use ADO and don't use M$-access other than\n> testing. ADO requires MDAC and pgAdmin uses ADO AFAIK.\n\nIndeed M$-access doesn't need it. Thanks to Emmanuel and Cedar for the\nexplanation (also my fault for having searched for psqlodbc with \"partial\nmatch\" to find it in c:\\winnt\\system32)\n\n...\n> > And after searching said website,\n> > http://www.microsoft.com/data/download2.htm\n> > shows:\n> > \n> > Microsoft Data Access Components MDAC 2.1.1.3711.11 < 2.5...\n> > \n> \n> I can see the following at http://www.microsoft.com/data/download.htm\n\nNow how come you found download.htm and I got download2.htm?! Thanks a lot!\n\nCheers,\n\nPatrick\n", "msg_date": "Mon, 5 Mar 2001 11:41:40 +0000", "msg_from": "Patrick Welche <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] Release in 2 weeks ..." 
}, { "msg_contents": "I really feel that translated error messages need to happen soon.\nManaging translated message catalogs can be done easily with available\nAPIs. However, translatable messages really require an error code\nmechanism (otherwise it's completely impossible for programs to interpret\nerror messages reliably). I've been thinking about this for much too long\nnow and today I finally settled to the simplest possible solution.\n\nLet the actual method of allocating error codes be irrelevant for now,\nalthough the ones in the SQL standard are certainly to be considered for a\nstart. Essentially, instead of writing\n\n elog(ERROR, \"disaster struck\");\n\nyou'd write\n\n elog(ERROR, \"XYZ01\", \"disaster struck\");\n\nNow you'll notice that this approach doesn't make the error message text\nfunctionally dependend on the error code. The alternative would have been\nto write\n\n elog(ERROR, \"XYZ01\");\n\nwhich makes the code much less clear. Additonally, most of the elog()\ncalls use printf style variable argument lists. So maybe\n\n elog(ERROR, \"XYZ01\", (arg + 1), foo);\n\nThis is not only totally obscure, but also incredibly cumbersome to\nmaintain and very error prone. One earlier idea was to make the \"XYZ01\"\nthing a macro instead that expands to a string with % arguments, that GCC\ncan check as it does now. But I don't consider this a lot better, because\nthe initial coding is still obscured, and additonally the list of those\nmacros needs to be maintained. (The actual error codes might still be\nprovided as readable macro names similar to the errno codes, but I'm not\nsure if we should share these between server and client.)\n\nFinally, there might also be legitimate reasons to have different error\nmessage texts for the same error code. For example, \"type errors\" (don't\nknow if this is an official code) can occur in a number of places that\nmight warrant different explanations. Indeed, this approach would\npreserve \"artistic freedom\" to some extent while still maintaining some\nstructure alongside. And it would be rather straightforward to implement,\ntoo. Those who are too bored to assign error codes to new code can simply\npick some \"zero\" code as default.\n\nOn the protocol front, this could be pretty easy to do. Instead of\n\"message text\" we'd send a string \"XYZ01: message text\". Worst case, we\npass this unfiltered to the client and provide an extra function that\nreturns only the first five characters. Alternatively we could strip off\nthe prefix when returning the message text only.\n\nAt the end, the i18n part would actually be pretty easy, e.g.,\n\n elog(ERROR, \"XYZ01\", gettext(\"stuff happened\"));\n\n\nComments? Better ideas?\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Thu, 8 Mar 2001 23:49:50 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Internationalized error messages" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n\n> Let the actual method of allocating error codes be irrelevant for now,\n> although the ones in the SQL standard are certainly to be considered for a\n> start. Essentially, instead of writing\n> \n> elog(ERROR, \"disaster struck\");\n> \n> you'd write\n> \n> elog(ERROR, \"XYZ01\", \"disaster struck\");\n\nI like this approach. One of the nice things about Oracle is that\nthey have an error manual. All Oracle errors have an associated\nnumber. 
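To make the "first five characters" idea from the proposal concrete, here is a minimal client-side sketch. PQerrorMessage() is the existing libpq call; the prefixed wire format and the helper itself are assumptions for illustration, not anything that exists today:

    #include <string.h>
    #include <libpq-fe.h>

    /* Sketch only: assumes the server prefixes every error with a
     * five-character code, e.g. "XYZ01: disaster struck". */
    static int
    get_error_code(PGconn *conn, char code[6])
    {
        const char *msg = PQerrorMessage(conn);   /* existing libpq call */

        if (strlen(msg) >= 6 && msg[5] == ':')
        {
            memcpy(code, msg, 5);
            code[5] = '\0';
            return 1;                             /* code present */
        }
        code[0] = '\0';
        return 0;                                 /* old-style message */
    }

A dumb frontend can ignore the prefix entirely; a smarter one can branch on the code without ever parsing the human-readable part.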
You can look up that number in the error manual to find a\nparagraph giving details and workarounds. Admittedly, sometimes the\nfurther details are not helpful, but sometimes they are. The basic\nidea of being able to look up an error lets programmers balance the\nneed for a terse error message with the need for a fuller explanation.\n\nIan\n\n---------------------------(end of broadcast)---------------------------\nTIP 32: I just know I'm a better manager when I have Joe DiMaggio in center field.\n\t\t-- Casey Stengel\n", "msg_date": "08 Mar 2001 16:16:17 -0800", "msg_from": "Ian Lance Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Internationalized error messages" }, { "msg_contents": "On Thu, Mar 08, 2001 at 11:49:50PM +0100, Peter Eisentraut wrote:\n> I really feel that translated error messages need to happen soon.\n> Managing translated message catalogs can be done easily with available\n> APIs. However, translatable messages really require an error code\n> mechanism (otherwise it's completely impossible for programs to interpret\n> error messages reliably). I've been thinking about this for much too long\n> now and today I finally settled to the simplest possible solution.\n> \n> Let the actual method of allocating error codes be irrelevant for now,\n> although the ones in the SQL standard are certainly to be considered for a\n> start. Essentially, instead of writing\n> \n> elog(ERROR, \"disaster struck\");\n> \n> you'd write\n> \n> elog(ERROR, \"XYZ01\", \"disaster struck\");\n> \n> Now you'll notice that this approach doesn't make the error message text\n> functionally dependend on the error code. The alternative would have been\n> to write\n> \n> elog(ERROR, \"XYZ01\");\n> \n> which makes the code much less clear. Additonally, most of the elog()\n> calls use printf style variable argument lists. So maybe\n> \n> elog(ERROR, \"XYZ01\", (arg + 1), foo);\n> \n> This is not only totally obscure, but also incredibly cumbersome to\n> maintain and very error prone. One earlier idea was to make the \"XYZ01\"\n> thing a macro instead that expands to a string with % arguments, that GCC\n> can check as it does now. But I don't consider this a lot better, because\n> the initial coding is still obscured, and additonally the list of those\n> macros needs to be maintained. (The actual error codes might still be\n> provided as readable macro names similar to the errno codes, but I'm not\n> sure if we should share these between server and client.)\n> \n> Finally, there might also be legitimate reasons to have different error\n> message texts for the same error code. For example, \"type errors\" (don't\n> know if this is an official code) can occur in a number of places that\n> might warrant different explanations. Indeed, this approach would\n> preserve \"artistic freedom\" to some extent while still maintaining some\n> structure alongside. And it would be rather straightforward to implement,\n> too. Those who are too bored to assign error codes to new code can simply\n> pick some \"zero\" code as default.\n> \n> On the protocol front, this could be pretty easy to do. Instead of\n> \"message text\" we'd send a string \"XYZ01: message text\". Worst case, we\n> pass this unfiltered to the client and provide an extra function that\n> returns only the first five characters. 
Alternatively we could strip off\n> the prefix when returning the message text only.\n> \n> At the end, the i18n part would actually be pretty easy, e.g.,\n> \n> elog(ERROR, \"XYZ01\", gettext(\"stuff happened\"));\n\nSimilar approaches have been tried frequently, and even enshrined \nin standards (e.g. POSIX catgets), but have almost always proven too\ncumbersome. The problem is that keeping programs that interpret the \nnumeric code in sync with the program they monitor is hard, and trying \nto avoid breaking all those secondary programs hinders development on \nthe primary program. Furthermore, assigning code numbers is a nuisance,\nand they add uninformative clutter. \n\nIt's better to scan the program for elog() arguments, and generate\na catalog by using the string itself as the index code. Those \nmaintaining the secondary programs can compare catalogs to see what \nhas been broken by changes and what new messages to expect. elog()\nitself can (optionally) invent tokens (e.g. catalog indices) to help \nout those programs.\n\nNathan Myers\[email protected]\n", "msg_date": "Thu, 8 Mar 2001 16:42:22 -0800", "msg_from": "[email protected] (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: Internationalized error messages" }, { "msg_contents": "> On Thu, Mar 08, 2001 at 11:49:50PM +0100, Peter Eisentraut wrote:\n>> I really feel that translated error messages need to happen soon.\n\nAgreed.\n\[email protected] (Nathan Myers) writes:\n> Similar approaches have been tried frequently, and even enshrined \n> in standards (e.g. POSIX catgets), but have almost always proven too\n> cumbersome. The problem is that keeping programs that interpret the \n> numeric code in sync with the program they monitor is hard, and trying \n> to avoid breaking all those secondary programs hinders development on \n> the primary program. Furthermore, assigning code numbers is a nuisance,\n> and they add uninformative clutter. \n\nThere's a difficult tradeoff to make here, but I think we do want to\ndistinguish between the \"official error code\" --- the thing that has\ntranslations into various languages --- and what the backend is actually\nallowed to print out. It seems to me that a fairly large fraction of\nthe unique messages found in the backend can all be lumped under the\ncategory of \"internal error\", and that we need to have only one official\nerror code and one user-level translated message for the lot of them.\nBut we do want to be able to print out different detail messages for\neach of those internal errors. There are other categories that might be\nlumped together, but that one alone is sufficiently large to force us\nto recognize it. This suggests a distinction between a \"primary\" or\n\"user-level\" error message, which we catalog and provide translations\nfor, and a \"secondary\", \"detail\", or \"wizard-level\" error message that\nexists only in the backend source code, and only in English, and so\ncan be made up on the spur of the moment.\n\nAnother thing that's bothered me for a long time is our inconsistent\napproach to determining where in the code a message comes from. A lot\nof the messages currently embed the name of the generating routine right\ninto the error text. Again, we ought to separate the functionality:\nthe source-code location is valuable but ought not form part of the\nprimary error message. 
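As a purely illustrative before/after sketch of that convention (the message text and names here are invented, not quoted from the sources):

    /* today: the routine name is baked into the message text */
    elog(ERROR, "heap_open: relation \"%s\" does not exist", relname);

    /* if the call site travels separately, as proposed next, the text
     * itself no longer needs to carry it */
    elog(ERROR, "relation \"%s\" does not exist", relname);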
I would like to see elog() become a macro that\ninvokes __FILE__ and __LINE__ to automatically make the *exact* source\ncode location become part of the secondary error information, and then\ndrop the convention of using the routine name in the message text.\n\nSomething else we have talked about off-and-on is providing locator\ninformation for errors that can be associated with a particular point in\nthe query string (lexical and syntactic errors). This would probably be\nbest returned as a character index.\n\nAnother thing that I missed in Peter's proposal is how we are going to\ncope with messages that include parameters. Surely we do not expect\ngettext to start with 'Attribute \"foo\" not found' and distinguish fixed\nfrom variable parts of that string?\n\nSo it's clear that we need to devise a way of breaking an \"error\nmessage\" into multiple portions, including:\n\n\tPrimary error message (localizable)\n\tParameters to insert into error message (user identifiers, etc)\n\tSecondary (wizard) error message (optional)\n\tSource code location\n\tQuery text location (optional)\n\nand perhaps others that I have forgotten about. One of the key things\nto think about is whether we can, or should try to, transmit all this\nstuff in a backwards-compatible protocol. That would mean we'd have\nto dump all the info into a single string, which is doable but would\nperhaps look pretty ugly:\n\n\tERROR: Attribute \"foo\" not found -- basic message for dumb frontends\n\tERRORCODE: UNREC_IDENT\t\t-- key for finding localized message\n\tPARAM1: foo\t-- something to embed in the localized message\n\tMESSAGE: Attribute or table name not known within context of query\n\tCODELOC: src/backend/parser/parse_clause.c line 345\n\tQUERYLOC: 22\n\nAlternatively we could suppress most of this stuff unless the frontend\nspecifically asks for it (and presumably is willing to digest it for\nthe user).\n\nBottom line for me is that if we are going to go to the trouble of\nexamining and changing every single elog() in the system, we should\ntry to get all of these issues cleaned up at once. Let's not have to\ngo back and do it again later.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Mar 2001 21:00:09 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Internationalized error messages " }, { "msg_contents": "On Thu, Mar 08, 2001 at 09:00:09PM -0500, Tom Lane wrote:\n> [email protected] (Nathan Myers) writes:\n> > Similar approaches have been tried frequently, and even enshrined \n> > in standards (e.g. POSIX catgets), but have almost always proven too\n> > cumbersome. The problem is that keeping programs that interpret the \n> > numeric code in sync with the program they monitor is hard, and trying \n> > to avoid breaking all those secondary programs hinders development on \n> > the primary program. Furthermore, assigning code numbers is a nuisance,\n> > and they add uninformative clutter. \n> \n> There's a difficult tradeoff to make here, but I think we do want to\n> distinguish between the \"official error code\" --- the thing that has\n> translations into various languages --- and what the backend is actually\n> allowed to print out. 
It seems to me that a fairly large fraction of\n> the unique messages found in the backend can all be lumped under the\n> category of \"internal error\", and that we need to have only one official\n> error code and one user-level translated message for the lot of them.\n> But we do want to be able to print out different detail messages for\n> each of those internal errors. There are other categories that might be\n> lumped together, but that one alone is sufficiently large to force us\n> to recognize it. This suggests a distinction between a \"primary\" or\n> \"user-level\" error message, which we catalog and provide translations\n> for, and a \"secondary\", \"detail\", or \"wizard-level\" error message that\n> exists only in the backend source code, and only in English, and so\n> can be made up on the spur of the moment.\n\nI suggest using different named functions/macros for different \ncategories of message, rather than arguments to a common function. \n(I.e. \"elog(ERROR, ...)\" Considered Harmful.) \n\nYou might even have more than one call at a site, one for the official\nmessage and another for unofficial or unstable informative details.\n\n> Another thing that I missed in Peter's proposal is how we are going to\n> cope with messages that include parameters. Surely we do not expect\n> gettext to start with 'Attribute \"foo\" not found' and distinguish fixed\n> from variable parts of that string?\n\nThe common way to deal with this is to catalog the format string itself,\nwith its embedded % directives. The tricky bit, and what the printf \nfamily has had to be extended to handle, is that the order of the formal \narguments varies with the target language. The original string is an \nordinary printf string, but the translations may have to refer to the \nsubstitution arguments by numeric position (as well as type).\n\nThere is probably Free code to implement that.\n\nAs much as possible, any compile-time annotations should be extracted \ninto the catalog and filtered out of the source, to be reunited only\nwhen you retrieve the catalog entry. \n\n\n> So it's clear that we need to devise a way of breaking an \"error\n> message\" into multiple portions, including:\n> \n> \tPrimary error message (localizable)\n> \tParameters to insert into error message (user identifiers, etc)\n> \tSecondary (wizard) error message (optional)\n> \tSource code location\n> \tQuery text location (optional)\n> \n> and perhaps others that I have forgotten about. One of the key things\n> to think about is whether we can, or should try to, transmit all this\n> stuff in a backwards-compatible protocol. That would mean we'd have\n> to dump all the info into a single string, which is doable but would\n> perhaps look pretty ugly:\n> \n> \tERROR: Attribute \"foo\" not found -- basic message for dumb frontends\n> \tERRORCODE: UNREC_IDENT\t\t-- key for finding localized message\n> \tPARAM1: foo\t-- something to embed in the localized message\n> \tMESSAGE: Attribute or table name not known within context of query\n> \tCODELOC: src/backend/parser/parse_clause.c line 345\n> \tQUERYLOC: 22\n\nWhitespace can be used effectively. E.g. only primary messages appear\nin column 0. 
PG might emit this, which is easily filtered:\n\n Attribute \"foo\" not found\n severity: cannot proceed\n explain: An attribute or table was name not known within\n explain: the context of the query.\n index: 237 Attribute \\\"%s\\\" not found\n location: src/backend/parser/parse_clause.c line 345\n query_position: 22\n\nHere the first line is the localized replacement of what appears in the \ncode, with arguments substituted in. The other stuff comes from the\ncatalog\n\nThe call looks like\n\n elog_query(\"Attribute \\\"%s\\\" not found\", foo);\n elog_explain(\"An attribute or table was name not known within\"\n \"the context of the query.\");\n elog_severity(ERROR);\n\nwhich might gets expanded (squeezed) by the preprocessor to\n\n _elog(current_query_position, \"Attribute \\\"%s\\\" not found\", foo);\n\nwhile a separate tool scans the sources and builds the catalog,\nannotating it with severity, line number, etc. Human translators\nmay edit copies of the resulting catalog. The call to _elog looks up\nthe string in the catalog, substitutes arguments into the translation,\nand emits it along with the catalog index number and whatever else\nhas been requested in the config file. Alternatively, any other program \ncan use the number to pull the annotations out of the catalog given\njust the index.\n\n> Alternatively we could suppress most of this stuff unless the frontend\n> specifically asks for it (and presumably is willing to digest it for\n> the user).\n> \n> Bottom line for me is that if we are going to go to the trouble of\n> examining and changing every single elog() in the system, we should\n> try to get all of these issues cleaned up at once. Let's not have to\n> go back and do it again later.\n\nThe more complex it is, the more likely that will need to be redone.\nThe simpler the calls look, the more likely that you can automate\n(or implement invisibly) any later improvements. \n\nNathan Myers\[email protected]\n", "msg_date": "Thu, 8 Mar 2001 19:30:41 -0800", "msg_from": "[email protected] (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: Internationalized error messages" }, { "msg_contents": "> I like this approach. One of the nice things about Oracle is that\n> they have an error manual. All Oracle errors have an associated\n> number. You can look up that number in the error manual to find a\n> paragraph giving details and workarounds. Admittedly, sometimes the\n> further details are not helpful, but sometimes they are. The basic\n> idea of being able to look up an error lets programmers balance the\n> need for a terse error message with the need for a fuller explanation.\n\nOne of the examples when you need exact error message code is when you want \nto separate unique index violations from other errors. 
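A short sketch of the client logic this would allow (the error-code accessor and the constant are assumptions; today the only real option is matching on the message text):

    /* try the insert and let the unique index do the checking */
    PGresult *res = PQexec(conn, "INSERT INTO visits (id, cnt) VALUES (42, 1)");

    if (PQresultStatus(res) != PGRES_COMMAND_OK &&
        strcmp(PQresultErrorCode(res), ERRCODE_UNIQUE_VIOLATION) == 0)
    {
        /* duplicate key: fall back to an update instead of failing */
        PQclear(res);
        res = PQexec(conn, "UPDATE visits SET cnt = cnt + 1 WHERE id = 42");
    }

PQexec, PQresultStatus and PQclear are the existing libpq calls; PQresultErrorCode() and ERRCODE_UNIQUE_VIOLATION are made up here only to show the shape of the thing.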
This often needed when \nyou want just do insert, and leave all constraint checking to database...\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: [email protected]\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n", "msg_date": "Fri, 9 Mar 2001 11:34:42 +0600", "msg_from": "Denis Perchine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Internationalized error messages" }, { "msg_contents": "On Thu, Mar 08, 2001 at 09:00:09PM -0500, Tom Lane wrote:\n> > On Thu, Mar 08, 2001 at 11:49:50PM +0100, Peter Eisentraut wrote:\n> >> I really feel that translated error messages need to happen soon.\n> \n> Agreed.\n\n Yes, error codes is *very* wanted feature.\n\n> \n> \tERROR: Attribute \"foo\" not found -- basic message for dumb frontends\n> \tERRORCODE: UNREC_IDENT\t\t-- key for finding localized message\n> \tPARAM1: foo\t-- something to embed in the localized message\n> \tMESSAGE: Attribute or table name not known within context of query\n> \tCODELOC: src/backend/parser/parse_clause.c line 345\n> \tQUERYLOC: 22\n\n Great idea! I agree that we need some powerful Error protocol instead \ncurrect string based messages.\n \n For transaltion to other languages I not sure with gettext() stuff on\nbackend -- IMHO better (faster) solution will postgres system catalog\nwith it.\n\n May be add new command too: SET MESSAGE_LANGUAGE TO <xxx>, because\nwanted language not must be always same as locale setting.\n\n Something like elog(ERROR, gettext(...)); is usable, but not sounds good \nfor me.\n\n\t\t\tKarel\n\n-- \n Karel Zak <[email protected]>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Fri, 9 Mar 2001 08:53:20 +0100", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Internationalized error messages" }, { "msg_contents": "> For transaltion to other languages I not sure with gettext() stuff on\n> backend -- IMHO better (faster) solution will postgres system catalog\n> with it.\n> \n> May be add new command too: SET MESSAGE_LANGUAGE TO <xxx>, because\n> wanted language not must be always same as locale setting.\n\nIn the multibyte enabled environment, that kind of command would not\nbe necessary except UNICODE and MULE_INTERNAL, since they are\nmulti-lingual encoding. For them, we might need something like:\n\nSET LANGUAGE_PREFERENCE TO 'Japanese';\n\nFor the long term solutuon, this kind of problem should be solved in\nthe implemetaion of SQL-92/99 i18n features.\n--\nTatsuo Ishii\n", "msg_date": "Fri, 09 Mar 2001 20:42:26 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Internationalized error messages" }, { "msg_contents": "Nathan Myers writes:\n\n> > elog(ERROR, \"XYZ01\", gettext(\"stuff happened\"));\n>\n> Similar approaches have been tried frequently, and even enshrined\n> in standards (e.g. POSIX catgets), but have almost always proven too\n> cumbersome. 
The problem is that keeping programs that interpret the\n> numeric code in sync with the program they monitor is hard, and trying\n> to avoid breaking all those secondary programs hinders development on\n> the primary program.\n\nThat's why no one uses catgets and everyone uses gettext.\n\n> Furthermore, assigning code numbers is a nuisance, and they add\n> uninformative clutter.\n\nThe error codes are exactly what we want, to allow client programs (as\nopposed to humans) to identify the errors. The code in my example has\nnothing to do with the message id in the catgets interface.\n\n> It's better to scan the program for elog() arguments, and generate\n> a catalog by using the string itself as the index code. Those\n> maintaining the secondary programs can compare catalogs to see what\n> has been broken by changes and what new messages to expect. elog()\n> itself can (optionally) invent tokens (e.g. catalog indices) to help\n> out those programs.\n\nThat's what gettext does for you.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Fri, 9 Mar 2001 16:45:54 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Internationalized error messages" }, { "msg_contents": "Tom Lane writes:\n\n> There's a difficult tradeoff to make here, but I think we do want to\n> distinguish between the \"official error code\" --- the thing that has\n> translations into various languages --- and what the backend is actually\n> allowed to print out. It seems to me that a fairly large fraction of\n> the unique messages found in the backend can all be lumped under the\n> category of \"internal error\", and that we need to have only one official\n> error code and one user-level translated message for the lot of them.\n\nThat's exactly what I was trying to avoid. You'd still be allowed to\nchoose the error message text freely, but client programs will be able to\nmake sense of them by looking at the code only, as opposed to parsing the\nmessage text. I'm trying to avoid making the message text to be computed\nfrom the error code, because that obscures the source code.\n\n> Another thing that's bothered me for a long time is our inconsistent\n> approach to determining where in the code a message comes from. A lot\n> of the messages currently embed the name of the generating routine right\n> into the error text. Again, we ought to separate the functionality:\n> the source-code location is valuable but ought not form part of the\n> primary error message. I would like to see elog() become a macro that\n> invokes __FILE__ and __LINE__ to automatically make the *exact* source\n> code location become part of the secondary error information, and then\n> drop the convention of using the routine name in the message text.\n\nThese sort of things have been on my mind as well, but they're really\nindependent of my issue. We can easily have runtime options to append or\nnot additional things to the error string. I don't see this as part of my\nproposal.\n\n> Another thing that I missed in Peter's proposal is how we are going to\n> cope with messages that include parameters. 
Surely we do not expect\n> gettext to start with 'Attribute \"foo\" not found' and distinguish fixed\n> >from variable parts of that string?\n\nSure we do.\n\n> That would mean we'd have to dump all the info into a single string,\n> which is doable but would perhaps look pretty ugly:\n>\n> \tERROR: Attribute \"foo\" not found -- basic message for dumb frontends\n> \tERRORCODE: UNREC_IDENT\t\t-- key for finding localized message\n\nThere should not be a \"key\" to look up localized messages. Remember that\nthe localization will also have to be done in all the front-end programs.\nSurely we do not wish to make a list of messages that pg_dump or psql\nprint out. Gettext takes care of this stuff. The only reason why we need\nerror codes is for the sake of ease of interpreting by programs.\n\n> \tPARAM1: foo\t-- something to embed in the localized message\n\nNot necessary.\n\n> \tMESSAGE: Attribute or table name not known within context of query\n\nHow's that different from ERROR:?\n\n> \tCODELOC: src/backend/parser/parse_clause.c line 345\n\nCan be appended to ERROR (or MESSAGE) depending on configuration setting.\n\n> \tQUERYLOC: 22\n\nNot all errors are related to a query.\n\nThe general problem here is also that this would introduce a client\nincompatibility. Older clients that do not expect this amount of detail\nwill print all this garbage to the screen?\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Fri, 9 Mar 2001 16:57:18 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Internationalized error messages " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> That's exactly what I was trying to avoid. You'd still be allowed to\n> choose the error message text freely, but client programs will be able to\n> make sense of them by looking at the code only, as opposed to parsing the\n> message text. I'm trying to avoid making the message text to be computed\n> from the error code, because that obscures the source code.\n\nI guess I don't understand what you have in mind, because this seems\nself-contradictory. If \"client programs can look at the code only\",\nthen how can the error message text be chosen independently of the code?\n\n>> Surely we do not expect gettext to start with 'Attribute \"foo\" not\n>> found' and distinguish fixed from variable parts of that string?\n\n> Sure we do.\n\nHow does that work exactly? You're assuming an extremely intelligent\nlocalization mechanism, I guess, which I was not. I think it makes more\nsense to work a little harder in the backend to avoid requiring AI\nsoftware in every frontend.\n\n>> MESSAGE: Attribute or table name not known within context of query\n\n> How's that different from ERROR:?\n\nSorry, I meant that as an example of the \"secondary message string\", but\nit's a pretty lame example...\n\n> The general problem here is also that this would introduce a client\n> incompatibility. Older clients that do not expect this amount of detail\n> will print all this garbage to the screen?\n\nYes, if we send it to them. It would make sense to control the amount\nof detail presented via some option (a GUC variable, probably). 
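A minimal sketch of how the macro and the detail setting could fit together (assumes a compiler with variadic macros; the function, setting and constant names are invented):

    /* call sites keep their present shape; the macro adds the location */
    #define elog(level, ...)  elog_real(__FILE__, __LINE__, (level), __VA_ARGS__)

    /* inside elog_real(): how much reaches the client becomes a run-time choice */
    if (Error_verbosity >= PGERROR_VERBOSE)
        appendStringInfo(&buf, " (%s line %d)", filename, lineno);

appendStringInfo() is the existing backend StringInfo helper; everything else above is illustrative only.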
For\nbackwards compatibility reasons we'd want the default to correspond to\nroughly the existing amount of detail.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 09 Mar 2001 11:00:53 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Internationalized error messages " }, { "msg_contents": "Tom Lane writes:\n\n> I guess I don't understand what you have in mind, because this seems\n> self-contradictory. If \"client programs can look at the code only\",\n> then how can the error message text be chosen independently of the code?\n\nLet's say \"type mismatch error\", code 2200G acc. to SQL. At one place in\nthe source you write\n\nelog(ERROR, \"2200G\", \"type mismatch in CASE expression (%s vs %s)\", ...);\n\nElsewhere you'd write\n\nelog(ERROR, \"2200G\", \"type mismatch in argument %d of function %s,\n expected %s, got %s\", ...);\n\nHumans can look at this and have a fairly good idea what they'd need to\nfix. However, a client program currently only has the option of failing\nor not failing. In this example case it would probably better for it to\nfail, but someone else already put forth the example of constraint\nviolation. In this case the program might want to do something else.\n\n> >> Surely we do not expect gettext to start with 'Attribute \"foo\" not\n> >> found' and distinguish fixed from variable parts of that string?\n>\n> > Sure we do.\n>\n> How does that work exactly? You're assuming an extremely intelligent\n> localization mechanism, I guess, which I was not. I think it makes more\n> sense to work a little harder in the backend to avoid requiring AI\n> software in every frontend.\n\nGettext takes care of this. In the source you'd write\n\nelog(ERROR, \"2200G\", gettext(\"type mismatch in CASE expression (%s vs %s)\"),\n string, string);\n\nWhen you run the xgettext utility program it scans the source for cases of\ngettext(...) and creates message catalogs for the translators. When it\nfinds printf arguments it automatically includes marks in the message,\nsuch as\n\n\"type mismatch in CASE expression (%1$s vs %2$s)\"\n\nwhich the translator better keep in his version. This also handles the\ncase where the arguments might have to appear in a different order in a\ndifferent language.\n\n> Sorry, I meant that as an example of the \"secondary message string\", but\n> it's a pretty lame example...\n\nI guess I'm not sold on the concept of primary and secondary message\nstrings. If the primary message isn't good enough you better fix that.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Fri, 9 Mar 2001 17:50:28 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Internationalized error messages " }, { "msg_contents": "Karel Zak writes:\n\n> For transaltion to other languages I not sure with gettext() stuff on\n> backend -- IMHO better (faster) solution will postgres system catalog\n> with it.\n\nelog(ERROR, \"cannot open message catalog table\");\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Fri, 9 Mar 2001 17:57:13 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Internationalized error messages" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Let's say \"type mismatch error\", code 2200G acc. to SQL. 
At one place in\n> the source you write\n> elog(ERROR, \"2200G\", \"type mismatch in CASE expression (%s vs %s)\", ...);\n> Elsewhere you'd write\n> elog(ERROR, \"2200G\", \"type mismatch in argument %d of function %s,\n> expected %s, got %s\", ...);\n\nOkay, so your notion of an error code is not a localizable entity at\nall, it's something for client programs to look at. Now I get it.\n\nI object to writing \"2200G\" however, because that has no mnemonic value\nwhatever, and is much too easy to get wrong. How about\n\nelog(ERROR, ERR_TYPE_MISMATCH, \"type mismatch in argument %d of function %s,\n expected %s, got %s\", ...);\n\nwhere ERR_TYPE_MISMATCH is #defined as \"2200G\" someplace? Or for that\nmatter #defined as \"TYPE_MISMATCH\"? Content-free numeric codes are no\nfun to use on the client side either...\n\n> Gettext takes care of this. In the source you'd write\n\n> elog(ERROR, \"2200G\", gettext(\"type mismatch in CASE expression (%s vs %s)\"),\n> string, string);\n\nDuh. For some reason I was envisioning the localization substitution as\noccurring on the client side, but of course we'd want to do it on the\nserver side, and before parameters are substituted into the message.\nSorry for the noise.\n\nI am not sure we can/should use gettext (possible license problems?),\nbut certainly something like this could be cooked up.\n\n>> Sorry, I meant that as an example of the \"secondary message string\", but\n>> it's a pretty lame example...\n\n> I guess I'm not sold on the concept of primary and secondary message\n> strings. If the primary message isn't good enough you better fix that.\n\nThe motivation isn't so much to improve on the primary message as to\nreduce the number of distinct strings that really need to be translated.\nRemember all those internal \"can't happen\" errors. If we have only one\nmessage component then the translator is faced with a huge pile of\ninternal messages and not a lot of gain from translating them. If\nthere's a primary and secondary component then all the internal messages\ncan share the same primary component (\"Internal error, please file a bug\nreport\"). Now the translator translates that one message, and can\nignore the many secondary-component messages with a clear conscience.\n(Of course, he can translate those too if he really wants to, but the\npoint is that he doesn't *have* to do it to attain reasonably friendly\nbehavior.)\n\nPerhaps another way to look at it is that we have a bunch of errors that\nare user-oriented (ie, relate pretty directly to something the user did\nwrong) and another bunch that are system-oriented (relate to internal\nproblems, such as consistency check failures or violations of internal\nAPIs). We want to provide localized translations of the first set, for\nsure. I don't think we need localized translations of the second set,\nso long as we have some sort of \"covering message\" that can be localized\nfor them. Maybe instead of \"primary\" and \"secondary\" strings for a\nsingle error, we ought to distinguish these two categories of error and\nplan different localization strategies for them.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 09 Mar 2001 12:05:22 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Internationalized error messages " }, { "msg_contents": "> Peter Eisentraut <[email protected]> writes:\n> > Let's say \"type mismatch error\", code 2200G acc. to SQL. 
At one place in\n> > the source you write\n> > elog(ERROR, \"2200G\", \"type mismatch in CASE expression (%s vs %s)\", ...);\n\nTom Lane <[email protected]> spake:\n> I object to writing \"2200G\" however, because that has no mnemonic value\n> whatever, and is much too easy to get wrong. How about\n> \n> elog(ERROR, ERR_TYPE_MISMATCH, \"type mismatch in argument %d of function %s,\n> expected %s, got %s\", ...);\n> \n> where ERR_TYPE_MISMATCH is #defined as \"2200G\" someplace? Or for that\n> matter #defined as \"TYPE_MISMATCH\"? Content-free numeric codes are no\n> fun to use on the client side either...\n\nThis is one thing I think VMS does well. All error messages are a\ncomposite of the subsystem where they originated, the severity of the\nerror, and the actual error itself. Internally this is stored in a\n32-bit word. It's been a long time, so I don't recall how many bits\nthey allocated for each component. The human-readable representation\nlooks like \"<subsystem>-<severity>-<error>\".\n\n--\nAndrew Evans\n", "msg_date": "Fri, 9 Mar 2001 11:43:20 -0800", "msg_from": "Andrew Evans <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Internationalized error messages" }, { "msg_contents": "On Fri, Mar 09, 2001 at 12:05:22PM -0500, Tom Lane wrote:\n> > Gettext takes care of this. In the source you'd write\n> \n> > elog(ERROR, \"2200G\", gettext(\"type mismatch in CASE expression (%s vs %s)\"),\n> > string, string);\n> \n> Duh. For some reason I was envisioning the localization substitution as\n> occurring on the client side, but of course we'd want to do it on the\n> server side, and before parameters are substituted into the message.\n> Sorry for the noise.\n> \n> I am not sure we can/should use gettext (possible license problems?),\n> but certainly something like this could be cooked up.\n\nI've been assuming that PG's needs are specialized enough that the\nproject wouldn't use gettext directly, but instead something inspired \nby it. \n\nIf you look at my last posting on the subject, by the way, you will see \nthat it could work without a catalog underneath; integrating a catalog \nwould just require changes in a header file (and the programs to generate \nthe catalog, of course). That quality seems to me essential to allow the \nchangeover to be phased in gradually, and to allow different underlying \ncatalog implementations to be tried out.\n\nNathan\nncm\n", "msg_date": "Fri, 9 Mar 2001 11:49:20 -0800", "msg_from": "[email protected] (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: Internationalized error messages" }, { "msg_contents": "Tom Lane writes:\n\n> I object to writing \"2200G\" however, because that has no mnemonic value\n> whatever, and is much too easy to get wrong. How about\n>\n> elog(ERROR, ERR_TYPE_MISMATCH, \"type mismatch in argument %d of function %s,\n> expected %s, got %s\", ...);\n>\n> where ERR_TYPE_MISMATCH is #defined as \"2200G\" someplace? Or for that\n> matter #defined as \"TYPE_MISMATCH\"? Content-free numeric codes are no\n> fun to use on the client side either...\n\nWell, SQL defines these. Do we want to make our own list? However,\nnumeric codes also have the advantage that some hierarchy is possible.\nE.g., the \"22\" in \"2200G\" is actually the category code \"data exception\".\nPersonally, I would stick to the SQL codes but make some readable macro\nname for backend internal use.\n\n> I am not sure we can/should use gettext (possible license problems?),\n\nGettext is an open standard, invented at Sun IIRC. 
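For what it's worth, the readable-macro-name-over-an-SQL-code idea above might look roughly like this (the names and the particular values are illustrative only, not an agreed list):

    #define ERRCODE_TYPE_MISMATCH     "2200G"   /* SQL class 22: data exception */
    #define ERRCODE_UNIQUE_VIOLATION  "23505"   /* SQL class 23: integrity constraint */
    #define ERRCODE_INTERNAL          "XX000"   /* made-up PostgreSQL-specific class */

    elog(ERROR, ERRCODE_TYPE_MISMATCH,
         gettext("type mismatch in argument %d of function %s"),
         argnum, funcname);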
There is also an\nindependent implementation for BSDs in the works. On GNU/Linux system\nit's in the C library. I don't see any license problems that way. Is has\nbeen used widely for free software and so far I haven't seen any real\nalternative.\n\n> but certainly something like this could be cooked up.\n\nWell, I'm trying to avoid having to do the cooking. ;-)\n\n> Perhaps another way to look at it is that we have a bunch of errors that\n> are user-oriented (ie, relate pretty directly to something the user did\n> wrong) and another bunch that are system-oriented (relate to internal\n> problems, such as consistency check failures or violations of internal\n> APIs). We want to provide localized translations of the first set, for\n> sure. I don't think we need localized translations of the second set,\n> so long as we have some sort of \"covering message\" that can be localized\n> for them.\n\nI'm sure this can be covered in some macro way. A random idea:\n\nelog(ERROR, INTERNAL_ERROR(\"text\"), ...)\n\nexpands to\n\nelog(ERROR, gettext(\"Internal error: %s\"), ...)\n\nOTOH, we should not yet make presumptions about what dedicated translators\ncan be capable of. :-)\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Fri, 9 Mar 2001 21:45:14 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Internationalized error messages " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Well, SQL defines these. Do we want to make our own list? However,\n> numeric codes also have the advantage that some hierarchy is possible.\n> E.g., the \"22\" in \"2200G\" is actually the category code \"data exception\".\n> Personally, I would stick to the SQL codes but make some readable macro\n> name for backend internal use.\n\nWe will probably find cases where we need codes not defined by SQL\n(since we have non-SQL features). If there is room to invent our\nown codes then I have no objection to this.\n\n>> I am not sure we can/should use gettext (possible license problems?),\n\n> Gettext is an open standard, invented at Sun IIRC. There is also an\n> independent implementation for BSDs in the works. On GNU/Linux system\n> it's in the C library. I don't see any license problems that way.\n\nUnless that BSD implementation is ready to go, I think we'd be talking\nabout relying on GPL'd (not LGPL'd) code for an essential component of\nthe system functionality. Given RMS' recent antics I am much less\ncomfortable with that than I might once have been.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 09 Mar 2001 15:48:33 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Internationalized error messages " }, { "msg_contents": "\nTom Lane wrote:\n\n> I am not sure we can/should use gettext (possible license problems?),\n> but certainly something like this could be cooked up.\n\nhttp://citrus.bsdclub.org/index-en.html\n\nI'm not sure of the current status of the code.\n\nRegards,\n\nGiles\n\n", "msg_date": "Sun, 11 Mar 2001 09:02:16 +1100", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Internationalized error messages " }, { "msg_contents": "On Fri, Mar 09, 2001 at 03:48:33PM -0500, Tom Lane wrote:\n> Peter Eisentraut <[email protected]> writes:\n> > Well, SQL defines these. Do we want to make our own list? 
However,\n> > numeric codes also have the advantage that some hierarchy is possible.\n> > E.g., the \"22\" in \"2200G\" is actually the category code \"data exception\".\n> > Personally, I would stick to the SQL codes but make some readable macro\n> > name for backend internal use.\n> \n> We will probably find cases where we need codes not defined by SQL\n> (since we have non-SQL features). If there is room to invent our\n> own codes then I have no objection to this.\n> \n> >> I am not sure we can/should use gettext (possible license problems?),\n> \n> > Gettext is an open standard, invented at Sun IIRC. There is also an\n> > independent implementation for BSDs in the works. On GNU/Linux system\n> > it's in the C library. I don't see any license problems that way.\n> \n> Unless that BSD implementation is ready to go, I think we'd be talking\n> about relying on GPL'd (not LGPL'd) code for an essential component of\n> the system functionality. Given RMS' recent antics I am much less\n> comfortable with that than I might once have been.\n\ncf. http://citrus.bsdclub.org/\n\nand the libintl in NetBSD, at least NetBSD-current, works. The hard part\nwas eg convincing gmake's configure to use it as there are bits like\n\n#if __USE_GNU_GETTEXT\n\nrather than just checking for the existence of the functions (as well as\nthe internal symbol _nl_msg_cat_cntr).\n\nSo yes it's ready to go, but please don't use the same m4 in configure.in as\nfor GNU gettext.\n\nCheers,\n\nPatrick\n", "msg_date": "Sun, 11 Mar 2001 18:11:02 +0000", "msg_from": "Patrick Welche <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Internationalized error messages" }, { "msg_contents": "On Fri, Mar 09, 2001 at 05:57:13PM +0100, Peter Eisentraut wrote:\n> Karel Zak writes:\n> \n> > For transaltion to other languages I not sure with gettext() stuff on\n> > backend -- IMHO better (faster) solution will postgres system catalog\n> > with it.\n> \n> elog(ERROR, \"cannot open message catalog table\");\n\n Sure, and what:\n\nelog(ERROR, gettext(\"can't set LC_MESSAGES\"));\n\n We can generate our system catalog for this by simular way as gettext, it's \nmeans all messages can be in sources in English too.\n\n But this is reflexion, performance test show more.\n\n\t\t\tKarel\n\n-- \n Karel Zak <[email protected]>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Mon, 12 Mar 2001 10:32:33 +0100", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Internationalized error messages" }, { "msg_contents": "At 23:49 08/03/01 +0100, Peter Eisentraut wrote:\n>I really feel that translated error messages need to happen soon.\n>Managing translated message catalogs can be done easily with available\n>APIs. However, translatable messages really require an error code\n>mechanism (otherwise it's completely impossible for programs to interpret\n>error messages reliably). I've been thinking about this for much too long\n>now and today I finally settled to the simplest possible solution.\n>\n>Let the actual method of allocating error codes be irrelevant for now,\n>although the ones in the SQL standard are certainly to be considered for a\n>start. Essentially, instead of writing\n\nsnip\n\n>On the protocol front, this could be pretty easy to do. Instead of\n>\"message text\" we'd send a string \"XYZ01: message text\". 
Worst case, we\n>pass this unfiltered to the client and provide an extra function that\n>returns only the first five characters. Alternatively we could strip off\n>the prefix when returning the message text only.\n\nMost other DB's (I'm thinking of Oracle here) pass the code unfiltered to \nthe client anyhow. Saying that, it's not impossible to get psql and other \ninteractive clients to strip the error code anyhow.\n\n\n>At the end, the i18n part would actually be pretty easy, e.g.,\n>\n> elog(ERROR, \"XYZ01\", gettext(\"stuff happened\"));\n>\n>\n>Comments? Better ideas?\n\nA couple of ideas. One, if we have a master list of error codes, we need to \nhave this in an independent format (ie not a .h file). However the other \nidea is to expand on the JDBC's errors.properties files. Being \nascii/unicode, the format will work with just some extra code to implement \nthem in C.\n\nBrief description:\n------------------------\n\nThe ResourceBundle's handle one language per file. From a base filename, \neach different language has a file based on:\n\n filename_la_ct.properties\n\nwhere la is the ISO 2 character language, and ct is the ISO 2 character \ncountry code.\n\nFor example:\n\nmessages_en_GB.properties\nmessages_en_US.properties\nmessages_en.properties\nmessages_fr.properties\nmessages.properties\n\nNow, here for the english locale for England it checks in this order: \nmessages_en_GB.properties messages_en.properties messages.properties.\n\nIn each file, a message is of the format:\n\nkey=message, and each parameter passed into the message written like {1} \n{2} etc, so for example:\n\nfathom=Unable to fathom update count {0}\n\nNow apart from the base file (messages.properties in this case), the other \nfiles are optional, and an entry only needs to be in there if they are \npresent in that language.\n\nSo, in french, fathom may be translated, but then again it may not (in JDBC \nit isn't). Then it's not included in the file. Any new messages can be \nadded to the base language, but only included as and when they are translated.\n\nPeter\n\n", "msg_date": "Mon, 12 Mar 2001 15:09:53 +0000", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Internationalized error messages" }, { "msg_contents": "Karel Zak writes:\n\n> > > For transaltion to other languages I not sure with gettext() stuff on\n> > > backend -- IMHO better (faster) solution will postgres system catalog\n> > > with it.\n> >\n> > elog(ERROR, \"cannot open message catalog table\");\n>\n> Sure, and what:\n>\n> elog(ERROR, gettext(\"can't set LC_MESSAGES\"));\n>\n> We can generate our system catalog for this by simular way as gettext, it's\n> means all messages can be in sources in English too.\n\nWhen there is an error condition in the backend, the last thing you want\nto do (and are allowed to do) is accessing tables. Also keep in mind that\nwe want to internationalize other parts of the system as well, such as\npg_dump and psql.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Mon, 12 Mar 2001 20:15:02 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Internationalized error messages" }, { "msg_contents": "> > to re-write smgr. I don't know how useful is second sync() call, but\n> > on Solaris (and I believe on many other *NIXes) rc0 calls it\n> > three times, -:) Why?\n> \n> The idea is, that by the time the last sync has run, the \n> first sync will be done flushing the buffers to disk. 
- this is what\n> we were told by the IBM engineers when I worked tier-2/3 AIX support\n> at IBM.\n\nI was told the same a long ago about FreeBSD. How much can we count on\nthis undocumented sync() feature?\n\nVadim\n", "msg_date": "Mon, 12 Mar 2001 21:13:44 -0800", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: RE: xlog checkpoint depends on sync() ... seems uns\n\tafe" }, { "msg_contents": "\"Mikheev, Vadim\" <[email protected]> writes:\n>> The idea is, that by the time the last sync has run, the \n>> first sync will be done flushing the buffers to disk. - this is what\n>> we were told by the IBM engineers when I worked tier-2/3 AIX support\n>> at IBM.\n\n> I was told the same a long ago about FreeBSD. How much can we count on\n> this undocumented sync() feature?\n\nSounds quite unreliable to me. Unless there's some interlock ... like,\nsay, the second sync not being able to advance past a buffer page that's\nas yet unwritten by the first sync. But would all Unixen share such a\nstrange detail of implementation?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Mar 2001 00:22:27 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RE: xlog checkpoint depends on sync() ... seems uns afe " }, { "msg_contents": "Tom Lane <[email protected]> writes:\n\n> \"Mikheev, Vadim\" <[email protected]> writes:\n> >> The idea is, that by the time the last sync has run, the \n> >> first sync will be done flushing the buffers to disk. - this is what\n> >> we were told by the IBM engineers when I worked tier-2/3 AIX support\n> >> at IBM.\n> \n> > I was told the same a long ago about FreeBSD. How much can we count on\n> > this undocumented sync() feature?\n> \n> Sounds quite unreliable to me. Unless there's some interlock ... like,\n> say, the second sync not being able to advance past a buffer page that's\n> as yet unwritten by the first sync. But would all Unixen share such a\n> strange detail of implementation?\n\nI'm pretty sure it has no basis in fact, it's just one of these habits \nthat gives sysadmins a warm fuzzy feeling. ;) It's apparently been\naround a long time, though I don't remember where I read about it--it\nwas quite a few years ago.\n\n-Doug\n\n", "msg_date": "13 Mar 2001 00:42:58 -0500", "msg_from": "Doug McNaught <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RE: xlog checkpoint depends on sync() ... seems uns afe" }, { "msg_contents": "\n> Sounds quite unreliable to me. Unless there's some interlock ... like,\n> say, the second sync not being able to advance past a buffer page that's\n> as yet unwritten by the first sync. But would all Unixen share such a\n> strange detail of implementation?\n\nI heard Kirk McKusick tell this story in a 4.4BSD internals class.\nHis explanation was that having an *operator* type 'sync' three times\nprovided enough time for the first sync to do the work before the\noperator powered the system down or reset it or whatever.\n\nI've not heard of any filesystem implementation where the number of\nsync() system calls issued makes a difference, and imagine that any\nprogrammer who has written code to call sync three times has only\nheard part of the story. :-)\n\nRegards,\n\nGiles\n\n", "msg_date": "Tue, 13 Mar 2001 17:47:33 +1100", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RE: xlog checkpoint depends on sync() ... 
seems uns afe " }, { "msg_contents": "On Mon, Mar 12, 2001 at 08:15:02PM +0100, Peter Eisentraut wrote:\n> Karel Zak writes:\n> \n> > > > For transaltion to other languages I not sure with gettext() stuff on\n> > > > backend -- IMHO better (faster) solution will postgres system catalog\n> > > > with it.\n> > >\n> > > elog(ERROR, \"cannot open message catalog table\");\n> >\n> > Sure, and what:\n> >\n> > elog(ERROR, gettext(\"can't set LC_MESSAGES\"));\n> >\n> > We can generate our system catalog for this by simular way as gettext, it's\n> > means all messages can be in sources in English too.\n> \n> When there is an error condition in the backend, the last thing you want\n> to do (and are allowed to do) is accessing tables. Also keep in mind that\n> we want to internationalize other parts of the system as well, such as\n> pg_dump and psql.\n\n Agree, the pg_xxxx application are good adepts for POSIX locales, all my\nprevious notes are about backend error/notice messages, but forget it --\nafter implementation we will more judicious.\n\n-- \n Karel Zak <[email protected]>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Tue, 13 Mar 2001 08:30:59 +0100", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Internationalized error messages" }, { "msg_contents": "On Tue, 13 Mar 2001, Tom Lane wrote:\n\n> > I was told the same a long ago about FreeBSD. How much can we count on\n> > this undocumented sync() feature?\n> \n> Sounds quite unreliable to me. Unless there's some interlock ...\n> like, say, the second sync not being able to advance past a buffer\n> page that's as yet unwritten by the first sync. But would all Unixen\n> share such a strange detail of implementation?\n\nThe Linux manpage says:\n\nNAME\n sync - commit buffer cache to disk.\n[..]\n\nDESCRIPTION\n sync first commits inodes to buffers, and then buffers to\n disk.\n[..]\n\nCONFORMING TO\n SVr4, SVID, X/OPEN, BSD 4.3\n\nBUGS\n According to the standard specification (e.g., SVID),\n sync() schedules the writes, but may return before the\n actual writing is done. However, since version 1.3.20\n Linux does actually wait. (This still does not guarantee\n data integrity: modern disks have large caches.)\n\n\nAnd it's still true. On a fast system, if you do:\n\n$ cp /dev/zero /tmp & sleep 1; sync\n\nthe sync will often never finish. (Of course, that's\njust an implementation detail really.)\n\nMatthew.\n\n", "msg_date": "Tue, 13 Mar 2001 10:49:00 +0000 (GMT)", "msg_from": "Matthew Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RE: xlog checkpoint depends on sync() ... seems uns\n afe" }, { "msg_contents": "Just a quick delurk to pass along this tidbit from linux-kernel on\nLinux *sync() behavior, since we've been talking about it a lot...\n\n-Doug\n\n\nHi,\n\nOn Wed, Mar 14, 2001 at 10:26:42PM -0500, Tom Vier wrote:\n> fdatasync() is the same as fsync(), in linux.\n\nNo, in 2.4 fdatasync does the right thing and skips the inode flush if\nonly the timestamps have changed.\n\n> until fdatasync() is\n> implimented (ie, syncs the data only)\n\nfdatasync is required to sync more than just the data: it has to sync\nthe inode too if any fields other than the timestamps have changed.\nSo, for appending to files or writing new files from scratch, fsync ==\nfdatasync (because each write also changes the inode size). 
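A minimal C sketch of the distinction being described (error checking omitted; fdatasync() and O_DSYNC are not available, or not honoured, everywhere; and none of this helps if the drive's own write cache lies, as the manpage excerpt above already warns):

    #include <unistd.h>

    /* Appending: every write grows the file, so the size change forces an
     * inode flush anyway -- fsync() and fdatasync() cost about the same here. */
    void append_and_flush(int fd, const char *buf, size_t len)
    {
        write(fd, buf, len);
        fdatasync(fd);
    }

    /* Overwriting a pre-allocated block in place: only the timestamps change,
     * so fdatasync() may skip the inode write a full fsync() would force.     */
    void overwrite_and_flush(int fd, const char *buf, size_t len, off_t off)
    {
        pwrite(fd, buf, len, off);
        fdatasync(fd);
    }

Unlike sync(), which per the standards text quoted earlier only schedules the flush, these calls do not return until the file's own blocks have been handed to the device; opening with O_DSYNC attaches the same guarantee to each write() call, which is where the pre-allocated-log-file idea further down is heading.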
Only for\nupdating existing files in place does fdatasync behave differently.\n\n> #ifndef O_DSYNC\n> # define O_DSYNC O_SYNC\n> #endif\n\n2.4's O_SYNC actually does a fdatasync internally. This is also the\ndefault behaviour of HPUX, which requires you to set a sysctl variable\nif you want O_SYNC to flush timestamp changes to disk.\n\nCheers,\n Stephen", "msg_date": "16 Mar 2001 10:15:28 -0500", "msg_from": "Doug McNaught <[email protected]>", "msg_from_op": false, "msg_subject": "[\"Stephen C. Tweedie\" <[email protected]>] Re: O_DSYNC flag for open" }, { "msg_contents": "Doug McNaught <[email protected]> forwards:\n>> 2.4's O_SYNC actually does a fdatasync internally. This is also the\n>> default behaviour of HPUX, which requires you to set a sysctl variable\n>> if you want O_SYNC to flush timestamp changes to disk.\n\nWell, that guy might know all about Linux, but he doesn't know anything\nabout HPUX (at least not any version I've ever run). O_SYNC is\ndistinctly different from O_DSYNC around here.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Mar 2001 10:55:53 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [\"Stephen C. Tweedie\" <[email protected]>] Re: O_DSYNC flag for open" }, { "msg_contents": "Tom Lane <[email protected]> writes:\n\n> Doug McNaught <[email protected]> forwards:\n> >> 2.4's O_SYNC actually does a fdatasync internally. This is also the\n> >> default behaviour of HPUX, which requires you to set a sysctl variable\n> >> if you want O_SYNC to flush timestamp changes to disk.\n> \n> Well, that guy might know all about Linux, but he doesn't know anything\n> about HPUX (at least not any version I've ever run). O_SYNC is\n> distinctly different from O_DSYNC around here.\n\nY'know, I figured that might be the case. ;) He's a well-respected\nLinux filesystem hacker, so I trust him on the Linux stuff. \n\nSo are we still thinking about preallocating log files as a\nperformance hack? It does seem that using preallocated files along\nwith O_DATASYNC will eliminate pretty much all metadata writes under\nLinux in future...\n\n[NOT suggesting we try to add anything to 7.1, I'm eagerly awaiting RC1]\n\n-Doug\n", "msg_date": "16 Mar 2001 12:03:46 -0500", "msg_from": "Doug McNaught <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [\"Stephen C. Tweedie\" <[email protected]>] Re: O_DSYNC flag for open" }, { "msg_contents": "Doug McNaught <[email protected]> writes:\n> So are we still thinking about preallocating log files as a\n> performance hack?\n\nWe're not just thinking about it, we're doing it in current sources ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Mar 2001 12:29:59 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [\"Stephen C. Tweedie\" <[email protected]>] Re: O_DSYNC flag for open" }, { "msg_contents": "> So are we still thinking about preallocating log files as a\n> performance hack? It does seem that using preallocated files along\n> with O_DATASYNC will eliminate pretty much all metadata writes under\n> Linux in future...\n> \n> [NOT suggesting we try to add anything to 7.1, I'm eagerly awaiting RC1]\n\nI am pretty sure that is done.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 16 Mar 2001 12:50:41 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [\"Stephen C. Tweedie\" <[email protected]>] Re: O_DSYNC flag for open" }, { "msg_contents": "\n[ Drifting off topic ... ]\n\n> Well, that guy might know all about Linux, but he doesn't know anything\n> about HPUX (at least not any version I've ever run). O_SYNC is\n> distinctly different from O_DSYNC around here.\n\nThere is a HP_UX kernel flag 'o_sync_is_o_dsync' which will cause\nO_DSYNC to be treated as O_SYNC. It defaults to being off -- it\nis/was a backward compatibility \"feature\" since HP-UX 9.X (which is\nhistory now) had implemented O_SYNC as O_DSYNC.\n\nhttp://docs.hp.com/cgi-bin/otsearch/getfile?id=/hpux/onlinedocs/os/KCparam.OsyncIsOdsync.html\n\nRegards,\n\nGiles\n\n\n\n", "msg_date": "Sat, 17 Mar 2001 08:50:58 +1100", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [\"Stephen C. Tweedie\" <[email protected]>] Re: O_DSYNC flag for open" }, { "msg_contents": "\n> There is a HP_UX kernel flag 'o_sync_is_o_dsync' which will cause\n> O_DSYNC to be treated as O_SYNC. It defaults to being off -- it\n\n... other way around there, of course. Trying to clarify and\nadding confusion instead. :-(\n\n> is/was a backward compatibility \"feature\" since HP-UX 9.X (which is\n> history now) had implemented O_SYNC as O_DSYNC.\n\nMuttering,\n\nGiles\n", "msg_date": "Sat, 17 Mar 2001 09:33:05 +1100", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [\"Stephen C. Tweedie\" <[email protected]>] Re: O_DSYNC flag for open" }, { "msg_contents": "\nOkay folks ...\n\n\tWe'd like to wrap up an RC1 and get this release happening this\nyear sometime :) Tom mentioned to me that he has no outstandings left on\nhis plate ... does anyone else have any *show stoppers* left that need to\nbe addressed, or can I package things up?\n\n\tSpeak now, or forever hold your piece (where forever is the time\nbetween now and RC1 is packaged) ...\n\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n\n", "msg_date": "Tue, 20 Mar 2001 14:11:18 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Final Call: RC1 about to go out the door ..." }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> \tSpeak now, or forever hold your piece (where forever is the time\n> between now and RC1 is packaged) ...\n\nI rather hope it's *NOT* ....\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 20 Mar 2001 13:17:16 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Final Call: RC1 about to go out the door ... " }, { "msg_contents": "* Tom Lane <[email protected]> [010320 10:21] wrote:\n> The Hermit Hacker <[email protected]> writes:\n> > \tSpeak now, or forever hold your piece (where forever is the time\n> > between now and RC1 is packaged) ...\n> \n> I rather hope it's *NOT* ....\n\nAnd still no LAZY vacuum. *sigh*\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\n", "msg_date": "Tue, 20 Mar 2001 10:35:38 -0800", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Final Call: RC1 about to go out the door ..." 
}, { "msg_contents": "The Hermit Hacker writes:\n\n> \tWe'd like to wrap up an RC1 and get this release happening this\n> year sometime :) Tom mentioned to me that he has no outstandings left on\n> his plate ... does anyone else have any *show stoppers* left that need to\n> be addressed, or can I package things up?\n\nI just uploaded new man pages. I'll probably do them once more in a few\ndays to catch all the changes.\n\nWe need a supported platform list. Let's hear it.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Tue, 20 Mar 2001 20:11:21 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Final Call: RC1 about to go out the door ..." }, { "msg_contents": "UnixWare 7, Rel 7.1.1, using UDK FS Compiler\nFreeBSD 4.[23]\n\nLER\n\n>>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n\nOn 3/20/01, 1:11:21 PM, Peter Eisentraut <[email protected]> wrote regarding \nRe: [HACKERS] Final Call: RC1 about to go out the door ...:\n\n\n> The Hermit Hacker writes:\n\n> > We'd like to wrap up an RC1 and get this release happening this\n> > year sometime :) Tom mentioned to me that he has no outstandings left on\n> > his plate ... does anyone else have any *show stoppers* left that need to\n> > be addressed, or can I package things up?\n\n> I just uploaded new man pages. I'll probably do them once more in a few\n> days to catch all the changes.\n\n> We need a supported platform list. Let's hear it.\n\n> --\n> Peter Eisentraut [email protected] http://yi.org/peter-e/\n\n\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n", "msg_date": "Tue, 20 Mar 2001 19:30:03 GMT", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Final Call: RC1 about to go out the door ..." }, { "msg_contents": "BSDI 4.01.\n\n[ Charset ISO-8859-1 unsupported, converting... ]\n> UnixWare 7, Rel 7.1.1, using UDK FS Compiler\n> FreeBSD 4.[23]\n> \n> LER\n> \n> >>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n> \n> On 3/20/01, 1:11:21 PM, Peter Eisentraut <[email protected]> wrote regarding \n> Re: [HACKERS] Final Call: RC1 about to go out the door ...:\n> \n> \n> > The Hermit Hacker writes:\n> \n> > > We'd like to wrap up an RC1 and get this release happening this\n> > > year sometime :) Tom mentioned to me that he has no outstandings left on\n> > > his plate ... does anyone else have any *show stoppers* left that need to\n> > > be addressed, or can I package things up?\n> \n> > I just uploaded new man pages. I'll probably do them once more in a few\n> > days to catch all the changes.\n> \n> > We need a supported platform list. Let's hear it.\n> \n> > --\n> > Peter Eisentraut [email protected] http://yi.org/peter-e/\n> \n> \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 20 Mar 2001 14:36:55 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Final Call: RC1 about to go out the door ..." }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> We need a supported platform list. Let's hear it.\n\nHPUX 10.20\t(HP-PA architecture)\nLinux/PPC\t(LinuxPPC 2000 Q4 distro tested here; 2.2.18 kernel I think)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 20 Mar 2001 16:10:21 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Final Call: RC1 about to go out the door ... " }, { "msg_contents": "\nGo here to report or to see the list.\n\nhttp://www.postgresql.org/~vev/regress/\n\nVince.\n\n\n\nOn Tue, 20 Mar 2001, Larry Rosenman wrote:\n\n> UnixWare 7, Rel 7.1.1, using UDK FS Compiler\n> FreeBSD 4.[23]\n>\n> LER\n>\n> >>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n>\n> On 3/20/01, 1:11:21 PM, Peter Eisentraut <[email protected]> wrote regarding\n> Re: [HACKERS] Final Call: RC1 about to go out the door ...:\n>\n>\n> > The Hermit Hacker writes:\n>\n> > > We'd like to wrap up an RC1 and get this release happening this\n> > > year sometime :) Tom mentioned to me that he has no outstandings left on\n> > > his plate ... does anyone else have any *show stoppers* left that need to\n> > > be addressed, or can I package things up?\n>\n> > I just uploaded new man pages. I'll probably do them once more in a few\n> > days to catch all the changes.\n>\n> > We need a supported platform list. Let's hear it.\n>\n> > --\n> > Peter Eisentraut [email protected] http://yi.org/peter-e/\n>\n>\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://www.postgresql.org/search.mpl\n>\n\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Tue, 20 Mar 2001 19:04:12 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Final Call: RC1 about to go out the door ..." }, { "msg_contents": "> HPUX 10.20 (HP-PA architecture)\n\nTime to drop 9.2 from the list?\n\n> Linux/PPC (LinuxPPC 2000 Q4 distro tested here; 2.2.18 kernel I think)\n\nWhat processor? Tatsuo had tested on a 603...\n\n - Thomas\n", "msg_date": "Wed, 21 Mar 2001 01:00:32 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Final Call: RC1 about to go out the door ..." }, { "msg_contents": "OK, here is my current platform list taken from the -hackers list and\nfrom Vince's web page. I'm sure I've missed at least a few reports, but\nplease confirm that platforms are actually running and passing\nregression tests with recent betas or the latest release candidate.\n\nIf a platform you are running on is not listed, make sure it gets\nincluded! 
Platforms with reports for 7.0 risk being demoted to the \"used\nto be supported list\", and platforms with reports for only 6.5 are on a\ndeathwatch, so be sure to speak up! Also, I've included names below to\nremind us who helped last time, but feel free to report even if your\nname is not already listed.\n\nI've separated out recent reports and put them at the end of the list.\nThanks in advance.\n\n - Thomas\n\nAIX 4.3.2 RS6000 7.0 2000-04-05, Andreas Zeugswetter\nCompaq Tru64 5.0 Alpha 7.0 2000-04-11, Andrew McMurry\nIRIX 6.5.6f MIPS 6.5.3 2000-02-18, Kevin Wheatley\nLinux 2.2.x armv4l 7.0 2000-04-17, Mark Knox\nLinux 2.0.x MIPS 7.0 2000-04-13, Tatsuo Ishii\nmklinux PPC750 7.0 2000-04-13, Tatsuo Ishii\nNetBSD 1.4 arm32 7.0 2000-04-08, Patrick Welche\nNetBSD 1.4U x86 7.0 2000-03-26, Patrick Welche\nNetBSD m68k 7.0 2000-04-10, Henry B. Hotz\nNetBSD Sparc 7.0 2000-04-13, Tom I. Helbekkmo\nQNX 4.25 x86 7.0 2000-04-01, Dr. Andreas Kardos\nSCO OpenServer 5 x86 6.5 1999-05-25, Andrew Merrill\nSolaris x86 7.0 2000-04-12, Marc Fournier\nSolaris 2.5.1-2.7 Sparc 7.0 2000-04-12, Peter Eisentraut\nSunOS 4.1.4 Sparc 7.0 2000-04-13, Tatsuo Ishii\nWindows/Win32 x86 7.0 2000-04-02, Magnus Hagander (clients only)\nWinNT/Cygwin x86 7.0 2000-03-30, Daniel Horak\n\nBeOS 5.0.3 x86 7.1 2000-12-18, Cyril Velter\nBSDI 4.01 x86 7.1 2001-03-19, Bruce Momjian\nFreeBSD 4.2 x86 7.1 2001-03-19, Vince Vielhaber\nHPUX 10.20 PA-RISC 7.1 2001-03-19, Tom Lane\nIBM S/390 7.1 2000-11-17, Neale Ferguson\nLinux 2.2.x Alpha 7.1 2001-01-23, Ryan Kirkpatrick\nLinux 2.2.16 x86 7.1 2001-03-19, Thomas Lockhart\nLinux 2.2.15 Sparc 7.1 2001-01-30, Ryan Kirkpatrick\nLinuxPPC G3 7.1 2001-03-19, Tom Lane\nSCO UnixWare 7.1.1 x86 7.1 2001-03-19, Larry Rosenman\nMacOS-X Darwin PowerPC 7.1 2000-12-11, Peter Bierman\n", "msg_date": "Wed, 21 Mar 2001 01:05:01 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Call for platforms" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> HPUX 10.20 (HP-PA architecture)\n\n> Time to drop 9.2 from the list?\n\nI don't have it running here anymore. Is there anyone on the list\nwho can test on HPUX 9?\n\n>> Linux/PPC (LinuxPPC 2000 Q4 distro tested here; 2.2.18 kernel I think)\n\n> What processor? Tatsuo had tested on a 603...\n\nIt's a Powerbook G3 (FireWire model), but I'm not sure which chip is\ninside (and Apple's spec sheet isn't too helpful)...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 20 Mar 2001 20:13:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Final Call: RC1 about to go out the door ... " }, { "msg_contents": "> >> Linux/PPC (LinuxPPC 2000 Q4 distro tested here; 2.2.18 kernel I think)\n> \n> > What processor? Tatsuo had tested on a 603...\n> It's a Powerbook G3 (FireWire model), but I'm not sure which chip is\n> inside (and Apple's spec sheet isn't too helpful)...\n\n From what I can tell (which isn't much ;) Apple at least calls the\nprocessor a \"G3\". Which accounts for why we can't find another\ndesignation.\n\nNot sure where it fits into the lineup I *used* to know about (for\nembedded systems) but I don't care. Will refer to it as a \"G3\" for\nnow...\n\n - Thomas\n", "msg_date": "Wed, 21 Mar 2001 01:20:21 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Final Call: RC1 about to go out the door ..." 
}, { "msg_contents": "> > >> Linux/PPC (LinuxPPC 2000 Q4 distro tested here; 2.2.18 kernel I think)\n> > > What processor? Tatsuo had tested on a 603...\n> > It's a Powerbook G3 (FireWire model), but I'm not sure which chip is\n> > inside (and Apple's spec sheet isn't too helpful)...\n> From what I can tell (which isn't much ;) Apple at least calls the\n> processor a \"G3\". Which accounts for why we can't find another\n> designation.\n\nTatsuo, I have a separate listing for \"mklinux\" for the 7.0 release. Is\nthat distro still valid and unique? Or is there a better way to\nrepresent the PPC options under Linux?\n\n - Thomas\n", "msg_date": "Wed, 21 Mar 2001 01:25:17 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Final Call: RC1 about to go out the door ..." }, { "msg_contents": "On Wed, 21 Mar 2001, Thomas Lockhart wrote:\n\n> > > >> Linux/PPC (LinuxPPC 2000 Q4 distro tested here; 2.2.18 kernel I think)\n> > > > What processor? Tatsuo had tested on a 603...\n> > > It's a Powerbook G3 (FireWire model), but I'm not sure which chip is\n> > > inside (and Apple's spec sheet isn't too helpful)...\n> > From what I can tell (which isn't much ;) Apple at least calls the\n> > processor a \"G3\". Which accounts for why we can't find another\n> > designation.\n> \n> Tatsuo, I have a separate listing for \"mklinux\" for the 7.0 release. Is\n> that distro still valid and unique? Or is there a better way to\n> represent the PPC options under Linux?\n\nmklinux is older Motorola 68k-based systems\nLinuxPPC is the newer powerPC-based systems\n\n\n-- \nDominic J. Eidson\n \"Baruk Khazad! Khazad ai-menu!\" - Gimli\n-------------------------------------------------------------------------------\nhttp://www.the-infinite.org/ http://www.the-infinite.org/~dominic/\n\n", "msg_date": "Tue, 20 Mar 2001 19:42:22 -0600 (CST)", "msg_from": "\"Dominic J. Eidson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Final Call: RC1 about to go out the door ..." }, { "msg_contents": "> mklinux is older Motorola 68k-based systems\n> LinuxPPC is the newer powerPC-based systems\n\nHmm. I have mklinux listed as being on the 750. My vague recollection is\nthat the distinction is between NuBus and PCI machines (not necessarily\nin that order), but...\n\nI also vaguely recalled that the distros had merged (or something :/\n\n - Thomas\n", "msg_date": "Wed, 21 Mar 2001 01:51:26 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Final Call: RC1 about to go out the door ..." }, { "msg_contents": "* Thomas Lockhart <[email protected]> [010320 20:04]:\n> OK, here is my current platform list taken from the -hackers list and\n> from Vince's web page. I'm sure I've missed at least a few reports, but\n> please confirm that platforms are actually running and passing\n> regression tests with recent betas or the latest release candidate.\n> \n> If a platform you are running on is not listed, make sure it gets\n> included! Platforms with reports for 7.0 risk being demoted to the \"used\n> to be supported list\", and platforms with reports for only 6.5 are on a\n> deathwatch, so be sure to speak up! Also, I've included names below to\n> remind us who helped last time, but feel free to report even if your\n> name is not already listed.\nFreeBSD 4.3-BETA (will be -RELEASE by the time we release) works too.\n\nI reported FreeBSD 4.[23]. 
\n\nLER\n\n> \n> I've separated out recent reports and put them at the end of the list.\n> Thanks in advance.\n> \n> - Thomas\n> \n> AIX 4.3.2 RS6000 7.0 2000-04-05, Andreas Zeugswetter\n> Compaq Tru64 5.0 Alpha 7.0 2000-04-11, Andrew McMurry\n> IRIX 6.5.6f MIPS 6.5.3 2000-02-18, Kevin Wheatley\n> Linux 2.2.x armv4l 7.0 2000-04-17, Mark Knox\n> Linux 2.0.x MIPS 7.0 2000-04-13, Tatsuo Ishii\n> mklinux PPC750 7.0 2000-04-13, Tatsuo Ishii\n> NetBSD 1.4 arm32 7.0 2000-04-08, Patrick Welche\n> NetBSD 1.4U x86 7.0 2000-03-26, Patrick Welche\n> NetBSD m68k 7.0 2000-04-10, Henry B. Hotz\n> NetBSD Sparc 7.0 2000-04-13, Tom I. Helbekkmo\n> QNX 4.25 x86 7.0 2000-04-01, Dr. Andreas Kardos\n> SCO OpenServer 5 x86 6.5 1999-05-25, Andrew Merrill\n> Solaris x86 7.0 2000-04-12, Marc Fournier\n> Solaris 2.5.1-2.7 Sparc 7.0 2000-04-12, Peter Eisentraut\n> SunOS 4.1.4 Sparc 7.0 2000-04-13, Tatsuo Ishii\n> Windows/Win32 x86 7.0 2000-04-02, Magnus Hagander (clients only)\n> WinNT/Cygwin x86 7.0 2000-03-30, Daniel Horak\n> \n> BeOS 5.0.3 x86 7.1 2000-12-18, Cyril Velter\n> BSDI 4.01 x86 7.1 2001-03-19, Bruce Momjian\n> FreeBSD 4.2 x86 7.1 2001-03-19, Vince Vielhaber\n> HPUX 10.20 PA-RISC 7.1 2001-03-19, Tom Lane\n> IBM S/390 7.1 2000-11-17, Neale Ferguson\n> Linux 2.2.x Alpha 7.1 2001-01-23, Ryan Kirkpatrick\n> Linux 2.2.16 x86 7.1 2001-03-19, Thomas Lockhart\n> Linux 2.2.15 Sparc 7.1 2001-01-30, Ryan Kirkpatrick\n> LinuxPPC G3 7.1 2001-03-19, Tom Lane\n> SCO UnixWare 7.1.1 x86 7.1 2001-03-19, Larry Rosenman\n> MacOS-X Darwin PowerPC 7.1 2000-12-11, Peter Bierman\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Tue, 20 Mar 2001 20:09:11 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms" }, { "msg_contents": "> SCO OpenServer 5 x86...\n\nOK, I see that Billy Allie recently updated FAQ_SCO to indicate\ndemonstrated (?) support for OpenServer. I will reflect that in the\nplatform support info.\n\n - Thomas\n", "msg_date": "Wed, 21 Mar 2001 02:39:49 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms" }, { "msg_contents": "> Tatsuo, I have a separate listing for \"mklinux\" for the 7.0 release. Is\n> that distro still valid and unique? Or is there a better way to\n> represent the PPC options under Linux?\n\nI think MkLinux is completely different from Linux/PPC. Will test RC1\non my MkLiux box soon...\n--\nTatsuo Ishii\n", "msg_date": "Wed, 21 Mar 2001 12:20:50 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Final Call: RC1 about to go out the door ..." }, { "msg_contents": "> > Tatsuo, I have a separate listing for \"mklinux\" for the 7.0 release. Is\n> > that distro still valid and unique? Or is there a better way to\n> > represent the PPC options under Linux?\n> \n> mklinux is older Motorola 68k-based systems\n\nNo. MkLinux runs on Power PC based system also. I believe there is a\nx86 based MkLinux exists somewhere.\n--\nTatsuo Ishii\n", "msg_date": "Wed, 21 Mar 2001 12:24:48 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Final Call: RC1 about to go out the door ..." 
}, { "msg_contents": "On Wed, 21 Mar 2001, Thomas Lockhart wrote:\n\n> > > >> Linux/PPC (LinuxPPC 2000 Q4 distro tested here; 2.2.18 kernel I think)\n> > > > What processor? Tatsuo had tested on a 603...\n> > > It's a Powerbook G3 (FireWire model), but I'm not sure which chip is\n> > > inside (and Apple's spec sheet isn't too helpful)...\n> > From what I can tell (which isn't much ;) Apple at least calls the\n> > processor a \"G3\". Which accounts for why we can't find another\n> > designation.\n\n The G3s are the MPC7XX family and the G4s are the MPC7XXX family.\n All of the ones Mac is currently releasing are MPC7450s. See\n\n http://e-www.motorola.com/webapp/sps/prod_cat/taxonomy.jsp?catId=M934309493763\n\n for more details.\n\nFWIW.\n\n-- \nDominic J. Eidson\n \"Baruk Khazad! Khazad ai-menu!\" - Gimli\n-------------------------------------------------------------------------------\nhttp://www.the-infinite.org/ http://www.the-infinite.org/~dominic/\n\n\n", "msg_date": "Tue, 20 Mar 2001 21:59:36 -0600 (CST)", "msg_from": "\"Dominic J. Eidson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Final Call: RC1 about to go out the door ..." }, { "msg_contents": "> mklinux PPC750 7.0 2000-04-13, Tatsuo Ishii\n\nI got core dump while running the parallel regression test of beta6.\nWill look at...\n--\nTatsuo Ishii\n", "msg_date": "Wed, 21 Mar 2001 15:36:07 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms" }, { "msg_contents": "> Compaq Tru64 5.0 Alpha 7.0 2000-04-11, Andrew McMurry\n\nWe've got 7.0.3 and 7.1b4 running on \n\nCompaq Tru64 4.0G Alpha\n\nWill do the regression test once RC1 is out.\n\nAdriaan\n", "msg_date": "Wed, 21 Mar 2001 08:46:48 +0200", "msg_from": "Adriaan Joubert <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms" }, { "msg_contents": "> > mklinux PPC750 7.0 2000-04-13, Tatsuo Ishii\n> \n> I got core dump while running the parallel regression test of beta6.\n> Will look at...\n> --\n> Tatsuo Ishii\n\n VACUUM;\n! FATAL 2: ZeroFill(logfile 0 seg 1) failed: No such file or directory\n! pqReadData() -- backend closed the channel unexpectedly.\n\nmaybe a bug related to Tom recently fixed?\nIf so, I will try RC1...\n--\nTatsuo Ishii\n", "msg_date": "Wed, 21 Mar 2001 16:30:33 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms" }, { "msg_contents": "\n> Thomas Lockhart <[email protected]> writes:\n> >> HPUX 10.20 (HP-PA architecture)\n> \n> > Time to drop 9.2 from the list?\n> \n> I don't have it running here anymore. Is there anyone on the list\n> who can test on HPUX 9?\n\nHP haven't supported 9.X since the end of 1999 on servers, and since\nearlier than that on workstations. I doubt anyone will expect to see\nit listed on the PostgreSQL list of supported platforms for 7.1.\n\nRegards,\n\nGiles\n\n", "msg_date": "Wed, 21 Mar 2001 19:27:53 +1100", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Final Call: RC1 about to go out the door ... " }, { "msg_contents": "Quoting The Hermit Hacker <[email protected]>:\n\n> \n> Okay folks ...\n> \n> \tWe'd like to wrap up an RC1 and get this release happening this\n> year sometime :) Tom mentioned to me that he has no outstandings left\n> on his plate ... 
does anyone else have any *show stoppers* left that need\n> to be addressed, or can I package things up?\n\nNothing that would stop RC1 (I've still got some testing which I'm doing later \ntonight).\n\n> \tSpeak now, or forever hold your piece (where forever is the time\n> between now and RC1 is packaged) ...\n\nI'm surprised it hasn't been out already - being worked to death the last two \nweeks, I'm still catching up with developments :( ...\n\nPeter\n\n-- \nPeter Mount [email protected]\nPostgreSQL JDBC Driver: http://www.retep.org.uk/postgres/\nRetepPDF PDF library for Java: http://www.retep.org.uk/pdf/\n", "msg_date": "Wed, 21 Mar 2001 04:16:26 -0500 (EST)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Final Call: RC1 about to go out the door ..." }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> ! FATAL 2: ZeroFill(logfile 0 seg 1) failed: No such file or directory\n> ! pqReadData() -- backend closed the channel unexpectedly.\n\nIs it possible you ran out of disk space?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Mar 2001 06:13:46 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms " }, { "msg_contents": "> Tatsuo Ishii <[email protected]> writes:\n> > ! FATAL 2: ZeroFill(logfile 0 seg 1) failed: No such file or directory\n> > ! pqReadData() -- backend closed the channel unexpectedly.\n> \n> Is it possible you ran out of disk space?\n\nProbably not.\n--\nTatsuo Ishii\n", "msg_date": "Wed, 21 Mar 2001 23:04:12 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms " }, { "msg_contents": "Thomas Lockhart writes:\n\n> > HPUX 10.20 (HP-PA architecture)\n>\n> Time to drop 9.2 from the list?\n>\n> > Linux/PPC (LinuxPPC 2000 Q4 distro tested here; 2.2.18 kernel I think)\n>\n> What processor? Tatsuo had tested on a 603...\n\nGiven that we list \"x86\", I think we wouldn't care.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Wed, 21 Mar 2001 19:19:02 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Final Call: RC1 about to go out the door ..." }, { "msg_contents": "Hi,\n\nI reported Linux RedHat 6.2 - 2.2.14-5.0smp #1 SMP Tue Mar 7 21:01:40 EST\n2000 i686\n2 cpu - 1Go RAM\n\nGilles DAROLD\n\n", "msg_date": "Wed, 21 Mar 2001 19:20:06 +0100", "msg_from": "Gilles DAROLD <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms" }, { "msg_contents": "Tatsuo Ishii writes:\n\n> > > Tatsuo, I have a separate listing for \"mklinux\" for the 7.0 release. Is\n> > > that distro still valid and unique? Or is there a better way to\n> > > represent the PPC options under Linux?\n> >\n> > mklinux is older Motorola 68k-based systems\n>\n> No. MkLinux runs on Power PC based system also. I believe there is a\n> x86 based MkLinux exists somewhere.\n\nmkLinux is \"micro-kernel\" Linux, on top of Mach. Consequentially, it is\nnon-trivially different from any other Linux.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Wed, 21 Mar 2001 19:21:15 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Final Call: RC1 about to go out the door ..." }, { "msg_contents": "Thomas Lockhart writes:\n\n> > SCO OpenServer 5 x86...\n>\n> OK, I see that Billy Allie recently updated FAQ_SCO to indicate\n> demonstrated (?) 
support for OpenServer. I will reflect that in the\n> platform support info.\n\nThe last FAQ_SCO update was by me, and it was rather the consequence of\nsome implementational developments and not a good indicator of any\nactually working platform. (I do have access to a Unixware box, but that\nwas already reported.)\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Wed, 21 Mar 2001 19:24:17 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "\nHi,\n\nI am currently testing beta6 on AIX 4.3.3 on a RS6000 H80 with 4 cpu and 4\nGo RAM\nI use :\n\n./configure\n --with-CC=/usr/local/bin/gcc\n --with-includes=/usr/local/include\n --with-libraries=/usr/local/lib\n\nAll seem to be ok, There just the geometry failure in regression test\n(following the AIX FAQ\nit's normal ?)\n\nBut when I configure with --with-perl I have the following error :\n\nmake[4]: cc : Command not found\n\nAny idea ?\n\n\nGilles DAROLD\n\n> Hi,\n>\n> I reported Linux RedHat 6.2 - 2.2.14-5.0smp #1 SMP Tue Mar 7 21:01:40 EST\n> 2000 i686\n> 2 cpu - 1Go RAM\n>\n> Gilles DAROLD\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://www.postgresql.org/search.mpl\n\n", "msg_date": "Wed, 21 Mar 2001 20:45:03 +0100", "msg_from": "Gilles DAROLD <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms" }, { "msg_contents": "Gilles DAROLD wrote:\n\n> Hi,\n>\n> I am currently testing beta6 on AIX 4.3.3 on a RS6000 H80 with 4 cpu and 4\n> Go RAM\n> I use :\n>\n> ./configure\n> --with-CC=/usr/local/bin/gcc\n> --with-includes=/usr/local/include\n> --with-libraries=/usr/local/lib\n>\n> All seem to be ok, There just the geometry failure in regression test\n>\n> But when I configure with --with-perl I have the following error :\n\nOk symbolic link between cc and gcc seem to be the better fix.\n\nI have now tested the --with-CXX option to compile libpq++ and it\nreally don't work, here are the output :\n\nld: 0711-319 WARNING: Exported symbol not defined:\nPgConnection::CloseConnection\nld: 0711-319 WARNING: Exported symbol not defined: PgConnection::Connect\nld: 0711-319 WARNING: Exported symbol not defined: PgConnection::ConnectionBad\n\nld: 0711-319 WARNING: Exported symbol not defined: PgConnection::DBName\nld: 0711-319 WARNING: Exported symbol not defined: PgConnection::ErrorMessage\nld: 0711-319 WARNING: Exported symbol not defined: PgConnection::Exec\nld: 0711-319 WARNING: Exported symbol not defined: PgConnection::ExecCommandOk\n\nld: 0711-319 WARNING: Exported symbol not defined: PgConnection::ExecTuplesOk\nld: 0711-319 WARNING: Exported symbol not defined: PgConnection::IntToString\n\n....\n\nld: 0711-317 ERROR: Undefined symbol: basic_string<char,\nstring_char_traits<char>, __default_alloc_template<false, 0> >::nilRep\nld: 0711-317 ERROR: Undefined symbol:\n__malloc_alloc_template<0>::__malloc_alloc_oom_handler\nld: 0711-317 ERROR: Undefined symbol: endl(ostream &)\nld: 0711-317 ERROR: Undefined symbol: cerr\nld: 0711-317 ERROR: Undefined symbol: .ostream::operator<<(char const *)\nld: 0711-317 ERROR: Undefined symbol: __default_alloc_template<false,\n0>::_S_end_free\nld: 0711-317 ERROR: Undefined symbol: __default_alloc_template<false,\n0>::_S_start_free\nld: 0711-317 ERROR: Undefined symbol: __default_alloc_template<false,\n0>::_S_heap_size\nld: 0711-317 ERROR: Undefined symbol: 
__default_alloc_template<false,\n0>::_S_free_list\nld: 0711-317 ERROR: Undefined symbol: .__out_of_range(char const *)\nld: 0711-317 ERROR: Undefined symbol: .__length_error(char const *)\ncollect2: ld returned 8 exit status\nmake[3]: *** [libpq++.so] Error 1\nmake[3]: Leaving directory\n`/home/darold/postgresql-7.1beta6/src/interfaces/libpq++'\nmake[2]: *** [all] Error 2\n\nI have change the Makefile.global and replace c++ by g++ but it the same\noutput.\n\nCould you tell me what going wrong ? Is my GNU install not fully functionnal ?\n\nI use :\ngcc version 2.95.2.1 19991024 (release) libs for powerpc-ibm-aix4.3.2.0\n\nRegards\n\nGilles DAROLD\n\n", "msg_date": "Wed, 21 Mar 2001 21:53:46 +0100", "msg_from": "Gilles DAROLD <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Call for platforms AIX 4.3.3 Failed" }, { "msg_contents": "Gilles DAROLD writes:\n\n> I have now tested the --with-CXX option to compile libpq++ and it\n> really don't work, here are the output :\n\n> ld: 0711-317 ERROR: Undefined symbol: __default_alloc_template<false,\n> 0>::_S_start_free\n> ld: 0711-317 ERROR: Undefined symbol: __default_alloc_template<false,\n> 0>::_S_heap_size\n> ld: 0711-317 ERROR: Undefined symbol: __default_alloc_template<false,\n> 0>::_S_free_list\n> ld: 0711-317 ERROR: Undefined symbol: .__out_of_range(char const *)\n> ld: 0711-317 ERROR: Undefined symbol: .__length_error(char const *)\n> collect2: ld returned 8 exit status\n> make[3]: *** [libpq++.so] Error 1\n> make[3]: Leaving directory\n> `/home/darold/postgresql-7.1beta6/src/interfaces/libpq++'\n> make[2]: *** [all] Error 2\n\nThis could be a name mangling problem. Maybe the linker needs to be\ninvoked specially when building C++ libraries. Maybe the C++ compiler\ndriver needs to be invoked directly. This could especially be a problem\nif you're using the GNU compiler with system libraries, since those are\nusually compiled by the system compiler.\n\n> I have change the Makefile.global and replace c++ by g++ but it the same\n> output.\n\nI think the good C++ compiler on AIX is called xlC. In any case, make\nsure that you don't mix different C++ compilers. You need to do 'gmake\nclean' at least in the libpq++ directory if you're switching.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Wed, 21 Mar 2001 22:25:55 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] Call for platforms AIX 4.3.3 Failed" }, { "msg_contents": "Peter Eisentraut wrote:\n\n> This could be a name mangling problem. Maybe the linker needs to be\n> invoked specially when building C++ libraries. Maybe the C++ compiler\n> driver needs to be invoked directly. This could especially be a problem\n> if you're using the GNU compiler with system libraries, since those are\n> usually compiled by the system compiler.\n>\n> > I have change the Makefile.global and replace c++ by g++ but it the same\n> > output.\n>\n> I think the good C++ compiler on AIX is called xlC. In any case, make\n> sure that you don't mix different C++ compilers. 
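For what it's worth, every symbol in that list -- cerr, endl, the basic_string and __default_alloc_template internals -- lives in the C++ runtime library, which is linked in automatically when g++ drives the final link but not when gcc or ld is invoked directly. A sketch of the difference, leaving the AIX export-list machinery aside for clarity (whether this is what is happening here is a guess; the link command shown further down does appear to go through gcc rather than g++):

    # C driver doing the link: the C++ runtime is never pulled in
    gcc -shared -o libpq++.so pgconnection.o pgdatabase.o ... -lpq      # undefined: cerr, endl, ...

    # C++ driver doing the link: libstdc++ is added implicitly
    g++ -shared -o libpq++.so pgconnection.o pgdatabase.o ... -lpq

    # or, if the C driver must be used, name the runtime explicitly
    gcc -shared -o libpq++.so pgconnection.o pgdatabase.o ... -lpq -lstdc++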
You need to do 'gmake\n> clean' at least in the libpq++ directory if you're switching.\n\nAIX faq said that xlC compiler don't work with libpq++ but with g++ it\nmay works :\n\n> libpq++ does not work because xlC does not have the string and bool\nclasses.\n> compiling the few files, that fail, with g++ does work.\n\nHumm, I have no xlC compiler installed.Here are the compilation lines :\n\nmake[3]: Entering directory\n`/home/darold/postgresql-7.1beta6/src/interfaces/libpq++'\ng++ -O2 -Wall -I../../../src/interfaces/libpq -I../../../src/include\n-I/usr/local/include -c -o pgconnection.o p\ngconnection.cc\ng++ -O2 -Wall -I../../../src/interfaces/libpq -I../../../src/include\n-I/usr/local/include -c -o pgdatabase.o pgd\natabase.cc\ng++ -O2 -Wall -I../../../src/interfaces/libpq -I../../../src/include\n-I/usr/local/include -c -o pgtransdb.o pgtr\nansdb.cc\ng++ -O2 -Wall -I../../../src/interfaces/libpq -I../../../src/include\n-I/usr/local/include -c -o pgcursordb.o pgc\nursordb.cc\ng++ -O2 -Wall -I../../../src/interfaces/libpq -I../../../src/include\n-I/usr/local/include -c -o pglobject.o pglo\nbject.cc\nar crs libpq++.a pgconnection.o pgdatabase.o pgtransdb.o pgcursordb.o\npglobject.o\ntouch libpq++.a\n../../../src/backend/port/aix/mkldexport.sh libpq++.a > libpq++.exp\n/usr/local/bin/gcc -Wl,-H512 -Wl,-bM:SRE\n-Wl,-bI:../../../src/backend/postgres.imp -Wl,-bE:libpq++.exp -o libpq++.\nso libpq++.a -L/usr/local/lib -L../../../src/interfaces/libpq -lpq -lc\nld: 0711-224 WARNING: Duplicate symbol: __start\n\nAll work until it want to link with libraries, it seems to want the libc\n(-lc) and I don't\nhave it installed. Is it normal or this realease is not portable with AIX\n4.3.3 ?\n\n", "msg_date": "Wed, 21 Mar 2001 22:37:39 +0100", "msg_from": "Gilles DAROLD <[email protected]>", "msg_from_op": false, "msg_subject": "Re:Call for platforms AIX 4.3.3 Failed" }, { "msg_contents": "\n$ uname -a\nOpenBSD mizer 2.8 a#0 i386\n\nP3, default 2.8 install. Problems w/ TCL, but I think it's a local\nproblem.\n\nSystem needs kernel changes as noted at www.crimelabs.net. (shared mem\nstuff).\n\nOBSD-sparc comming soon.\n\n- b\n\n\nb. palmer, [email protected]\npgp: www.crimelabs.net/bpalmer.pgp5\n\n", "msg_date": "Wed, 21 Mar 2001 17:21:30 -0500 (EST)", "msg_from": "bpalmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Final Call: RC1 about to go out the door ..." }, { "msg_contents": "I see nobody did a test of 7.1 on Linux 2.4.x ?\n\nWould be nice to certify it is running on kernel 2.4.x as they claim this\nis entreprise strength kernel...\n\nCheers.\n\nThomas Lockhart wrote:\n\n>\n>\n> AIX 4.3.2 RS6000 7.0 2000-04-05, Andreas Zeugswetter\n> Compaq Tru64 5.0 Alpha 7.0 2000-04-11, Andrew McMurry\n> IRIX 6.5.6f MIPS 6.5.3 2000-02-18, Kevin Wheatley\n> Linux 2.2.x armv4l 7.0 2000-04-17, Mark Knox\n> Linux 2.0.x MIPS 7.0 2000-04-13, Tatsuo Ishii\n> mklinux PPC750 7.0 2000-04-13, Tatsuo Ishii\n> NetBSD 1.4 arm32 7.0 2000-04-08, Patrick Welche\n> NetBSD 1.4U x86 7.0 2000-03-26, Patrick Welche\n> NetBSD m68k 7.0 2000-04-10, Henry B. Hotz\n> NetBSD Sparc 7.0 2000-04-13, Tom I. Helbekkmo\n> QNX 4.25 x86 7.0 2000-04-01, Dr. 
Andreas Kardos\n> SCO OpenServer 5 x86 6.5 1999-05-25, Andrew Merrill\n> Solaris x86 7.0 2000-04-12, Marc Fournier\n> Solaris 2.5.1-2.7 Sparc 7.0 2000-04-12, Peter Eisentraut\n> SunOS 4.1.4 Sparc 7.0 2000-04-13, Tatsuo Ishii\n> Windows/Win32 x86 7.0 2000-04-02, Magnus Hagander (clients only)\n> WinNT/Cygwin x86 7.0 2000-03-30, Daniel Horak\n>\n> BeOS 5.0.3 x86 7.1 2000-12-18, Cyril Velter\n> BSDI 4.01 x86 7.1 2001-03-19, Bruce Momjian\n> FreeBSD 4.2 x86 7.1 2001-03-19, Vince Vielhaber\n> HPUX 10.20 PA-RISC 7.1 2001-03-19, Tom Lane\n> IBM S/390 7.1 2000-11-17, Neale Ferguson\n> Linux 2.2.x Alpha 7.1 2001-01-23, Ryan Kirkpatrick\n> Linux 2.2.16 x86 7.1 2001-03-19, Thomas Lockhart\n> Linux 2.2.15 Sparc 7.1 2001-01-30, Ryan Kirkpatrick\n> LinuxPPC G3 7.1 2001-03-19, Tom Lane\n> SCO UnixWare 7.1.1 x86 7.1 2001-03-19, Larry Rosenman\n> MacOS-X Darwin PowerPC 7.1 2000-12-11, Peter Bierman\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n", "msg_date": "Thu, 22 Mar 2001 12:31:03 +1200", "msg_from": "Franck Martin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms (linux 2.4.x ?)" }, { "msg_contents": "Franck Martin <[email protected]> writes:\n\n> Would be nice to certify it is running on kernel 2.4.x as they claim this\n> is entreprise strength kernel...\n\nLamar, if you send me your SRPM I can do that...\n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n", "msg_date": "21 Mar 2001 19:31:13 -0500", "msg_from": "[email protected] (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: Call for platforms (linux 2.4.x ?)" }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n>> Tatsuo Ishii <[email protected]> writes:\n> ! FATAL 2: ZeroFill(logfile 0 seg 1) failed: No such file or directory\n> ! pqReadData() -- backend closed the channel unexpectedly.\n>> \n>> Is it possible you ran out of disk space?\n\n> Probably not.\n\nThe reason I was speculating that was that it seems pretty unlikely\nthat a write() call could return ENOENT, as the above appears to\nsuggest. I think that the errno = ENOENT value was not set by write(),\nbut is leftover from the expected failure of BasicOpenFile earlier in\nXLogFileInit. Probably write() returned some value less than BLCKSZ\nbut more than zero, and so did not set errno.\n\nOffhand the only reason I can think of for a write to a disk file\nto terminate after a partial transfer is a full disk. What do you\nthink?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Mar 2001 22:29:05 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms " }, { "msg_contents": "* Tom Lane <[email protected]> [010321 21:29]:\n> Tatsuo Ishii <[email protected]> writes:\n> >> Tatsuo Ishii <[email protected]> writes:\n> > ! FATAL 2: ZeroFill(logfile 0 seg 1) failed: No such file or directory\n> > ! pqReadData() -- backend closed the channel unexpectedly.\n> >> \n> >> Is it possible you ran out of disk space?\n> \n> > Probably not.\n> \n> The reason I was speculating that was that it seems pretty unlikely\n> that a write() call could return ENOENT, as the above appears to\n> suggest. I think that the errno = ENOENT value was not set by write(),\n> but is leftover from the expected failure of BasicOpenFile earlier in\n> XLogFileInit. 
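In other words, errno is only trustworthy when the call itself reports failure. A sketch of the defensive pattern that follows from that observation (illustrative only, not the actual XLogFileInit code):

    #include <errno.h>
    #include <unistd.h>

    /* A short write() reports success without touching errno, so a stale
     * value left over from an earlier, expected failure (the open() probe)
     * can leak into the error message.  Clear it first, and supply a
     * plausible value for the partial-transfer case.                      */
    int write_block(int fd, const char *buf, size_t len)
    {
        ssize_t written;

        errno = 0;
        written = write(fd, buf, len);
        if (written != (ssize_t) len)
        {
            if (written >= 0)
                errno = ENOSPC;   /* partial transfer: disk full is the usual suspect */
            return -1;            /* caller reports strerror(errno) */
        }
        return 0;
    }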
Probably write() returned some value less than BLCKSZ\n> but more than zero, and so did not set errno.\n> \n> Offhand the only reason I can think of for a write to a disk file\n> to terminate after a partial transfer is a full disk. What do you\n> think?\nWhat about hitting a quota?\n\nLER\n\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Wed, 21 Mar 2001 21:31:09 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms" }, { "msg_contents": "On Thu, Mar 22, 2001 at 12:31:03PM +1200, Franck Martin wrote:\n> I see nobody did a test of 7.1 on Linux 2.4.x ?\n> \n> Would be nice to certify it is running on kernel 2.4.x as they claim this\n> is entreprise strength kernel...\n\n\tI've been running the 7.1 betas on 2.4 for weeks without any problems.\nI replied to the \"call for platforms\" e-mail, but it looks like it got\nlost in the avalanche.\n\tI'll run the regression tests with the latest CVS snapshot and submit\na report to the list.\n\n\t-Roberto\n-- \n+----| http://fslc.usu.edu USU Free Software & GNU/Linux Club|------+\n Roberto Mello - Computer Science, USU - http://www.brasileiro.net \n http://www.sdl.usu.edu - Space Dynamics Lab, Web Developer \n", "msg_date": "Wed, 21 Mar 2001 22:48:36 -0700", "msg_from": "Roberto Mello <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms (linux 2.4.x ?)" }, { "msg_contents": "\nResults of 'make check':\n\nNetBSD-1.5/i386\t one spurious floating point test failure\n\t\t (mail sent to postgresql-bugs with details)\n\nNetBSD_1.5/alpha all tests passed\n\nNetBSD-1.4.2/i386 four tests fail\n\t\t timestamp ... FAILED\n\t\t\t abstime ... FAILED\n\t\t\t tinterval ... FAILED\n\t\t test horology ... FAILED\n\nI'll look into the 1.4.2 failures when/if I get time. If anyone wants\nthe test output to examine please ask.\n\nRegards,\n\nGiles\n\n", "msg_date": "Thu, 22 Mar 2001 20:08:37 +1100", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Call for platforms" }, { "msg_contents": "OK: Linux 2.4.2 i686 / gcc 2.95.2 / Debian testing/unstable\n\nno problems.\n\nOK?: NetBSD 1.5 i586 / egcs 2.91.66 / (netbsd-1-5 from Jan)\n\nnetbsd FAILED the geometry test, diff attached, dunno if its\ncritical or not.\n\n-- \nmarko", "msg_date": "Thu, 22 Mar 2001 12:02:09 +0200", "msg_from": "Marko Kreen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms" }, { "msg_contents": "> NetBSD-1.5/i386 one spurious floating point test failure\n> (mail sent to postgresql-bugs with details)\n> NetBSD_1.5/alpha all tests passed\n> NetBSD-1.4.2/i386 four tests fail\n\nThanks! I'm not too worried about 1.4.2, but be sure to let us know what\nthe problem was; it may help out someone else...\n\n - Thomas\n", "msg_date": "Thu, 22 Mar 2001 13:44:07 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms" }, { "msg_contents": "Here is the current scorecard. We have a couple of new platforms\nreported (yeaaa!):\n\nNetBSD 2.8 alpha 7.1 2001-03-22, Giles Lean\nOpenBSD 2.8 x86 7.1 2001-03-22, B. Palmer (first name?)\n\nAny more OpenBSD architectures out there running PostgreSQL? 
Here are\nthe remaining (unreported, or unnoted) platforms:\n\nIRIX 6.5.6f MIPS 6.5.3 2000-02-18, Kevin Wheatley\n\nAnyone running IRIX? It may be on the unsupported list for 7.1 :(\n\nLinux 2.2.x armv4l 7.0 2000-04-17, Mark Knox\nLinux 2.0.x MIPS 7.0 2000-04-13, Tatsuo Ishii\nmklinux PPC750 7.0 2000-04-13, Tatsuo Ishii\n\nTatsuo, do you still have access to the MIPS box?\n\nNetBSD m68k 7.0 2000-04-10, Henry B. Hotz\nNetBSD Sparc 7.0 2000-04-13, Tom I. Helbekkmo\nQNX 4.25 x86 7.0 2000-04-01, Dr. Andreas Kardos\n\ncvs shows that there were patches from Maurizio in February, and he said\nthat ecpg worked for him. Bruce applied the patches, but I'm not certain\nthat this was done on the 7.1 code tree? Bruce, do you recall anything?\n\nSolaris x86 7.0 2000-04-12, Marc Fournier\n\nscrappy, do you still have this machine?\n\nSolaris 2.5.1-2.7 Sparc 7.0 2000-04-12, Peter Eisentraut\n\nI'll bet that someone already has Solaris covered. Report?\n\nSunOS 4.1.4 Sparc 7.0 2000-04-13, Tatsuo Ishii\n\nTatsuo, I vaguely recall that you reported trouble recently. Is this\nworth continuing as a supported platform?\n\nWindows/Win32 x86 7.0 2000-04-02, Magnus Hagander (clients only)\n\nAnyone compiled for Win32 recently?\n\nAnd here are the up-to-date platforms; thanks for the reports:\n\nAIX 4.3.3 RS6000 7.1 2001-03-21, Gilles Darold\nBeOS 5.0.3 x86 7.1 2000-12-18, Cyril Velter\nBSDI 4.01 x86 7.1 2001-03-19, Bruce Momjian\nCompaq Tru64 5.0 Alpha 7.0 2000-04-11, Andrew McMurry\nFreeBSD 4.2 x86 7.1 2001-03-19, Vince Vielhaber\nHPUX 10.20 PA-RISC 7.1 2001-03-19, Tom Lane\nIBM S/390 7.1 2000-11-17, Neale Ferguson\nLinux 2.2.x Alpha 7.1 2001-01-23, Ryan Kirkpatrick\nLinux 2.2.16 x86 7.1 2001-03-19, Thomas Lockhart\nLinux 2.2.15 Sparc 7.1 2001-01-30, Ryan Kirkpatrick\nLinuxPPC G3 7.1 2001-03-19, Tom Lane\nNetBSD 1.5E arm32 7.1 2001-03-21, Patrick Welche\nNetBSD 1.5S x86 7.1 2001-03-21, Patrick Welche\nSCO OpenServer 5 x86 7.1 2001-03-13, Billy Allie\nSCO UnixWare 7.1.1 x86 7.1 2001-03-19, Larry Rosenman\nMacOS-X Darwin PowerPC 7.1 2000-12-11, Peter Bierman\nWinNT/Cygwin x86 7.1 2001-03-16, Jason Tishler\n", "msg_date": "Thu, 22 Mar 2001 14:29:59 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms" }, { "msg_contents": "On Thu, 22 Mar 2001, Thomas Lockhart wrote:\n\n> Solaris x86 7.0 2000-04-12, Marc Fournier\n>\n> scrappy, do you still have this machine?\n\nDoing tests on Solaris x86/7 right now, will report as soon as they are\ndone ...\n\n> Solaris 2.5.1-2.7 Sparc 7.0 2000-04-12, Peter Eisentraut\n>\n> I'll bet that someone already has Solaris covered. Report?\n\nWill do up Sparc/7 also this morning ...\n\n> AIX 4.3.3 RS6000 7.1 2001-03-21, Gilles Darold\n> BeOS 5.0.3 x86 7.1 2000-12-18, Cyril Velter\n> BSDI 4.01 x86 7.1 2001-03-19, Bruce Momjian\n> Compaq Tru64 5.0 Alpha 7.0 2000-04-11, Andrew McMurry\n> FreeBSD 4.2 x86 7.1 2001-03-19, Vince Vielhaber\n\nFreeBSD 4.3-BETA is good to go also ...\n\n\n", "msg_date": "Thu, 22 Mar 2001 10:39:33 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms" }, { "msg_contents": "> > FreeBSD 4.2 x86 7.1 2001-03-19, Vince Vielhaber\n> FreeBSD 4.3-BETA is good to go also ...\n\nYeah, I'm not sure how to list that, or whether to bother. It is beta,\n4.2 works fine (and nothing had to change for 4.3, right?) so maybe we\njust list it when 4.3 goes stable? 
Or is 4.3 sufficiently different that\nit would be good to list in the comments for the platform?\n\n - Thomas\n", "msg_date": "Thu, 22 Mar 2001 14:50:26 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms" }, { "msg_contents": "\nHow much 'diviation' are we allowing for?\n\nSolaris x86/7 results, for example, in geometry.out, show a difference of:\n\n1.53102359078377e-11,3 (expected)\n1.53102359017709e-11,3 (results)\n\nor\n\n3,-3.06204718156754e-11 (expected)\n3,-3.06204718035418e-11 (results)\n\nacceptable diviation?\n\nOn Thu, 22 Mar 2001, The Hermit Hacker wrote:\n\n> On Thu, 22 Mar 2001, Thomas Lockhart wrote:\n>\n> > Solaris x86 7.0 2000-04-12, Marc Fournier\n> >\n> > scrappy, do you still have this machine?\n>\n> Doing tests on Solaris x86/7 right now, will report as soon as they are\n> done ...\n>\n> > Solaris 2.5.1-2.7 Sparc 7.0 2000-04-12, Peter Eisentraut\n> >\n> > I'll bet that someone already has Solaris covered. Report?\n>\n> Will do up Sparc/7 also this morning ...\n>\n> > AIX 4.3.3 RS6000 7.1 2001-03-21, Gilles Darold\n> > BeOS 5.0.3 x86 7.1 2000-12-18, Cyril Velter\n> > BSDI 4.01 x86 7.1 2001-03-19, Bruce Momjian\n> > Compaq Tru64 5.0 Alpha 7.0 2000-04-11, Andrew McMurry\n> > FreeBSD 4.2 x86 7.1 2001-03-19, Vince Vielhaber\n>\n> FreeBSD 4.3-BETA is good to go also ...\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n\n", "msg_date": "Thu, 22 Mar 2001 11:06:31 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "> Solaris x86/7 results, for example, in geometry.out, show a difference of:\n> 3,-3.06204718156754e-11 (expected)\n> 3,-3.06204718035418e-11 (results)\n> acceptable diviation?\n\nThat sort of deviation is well within bounds, particularly for geometry\ntests which might have some transcendental math involved.\n\nIs Solaris-x86 ready to go then?\n\n - Thomas\n", "msg_date": "Thu, 22 Mar 2001 15:15:40 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "\n4.3 is in RELEASE CANDIDATE right now. By the time we release, it should \nbe -RELEASE or -STABLE. \n\nI'd include it as just 4.3. \n\nIt will be the -RELEASE at the time we are.\n\nLER\n\n>>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n\nOn 3/22/01, 8:50:26 AM, Thomas Lockhart <[email protected]> wrote \nregarding [HACKERS] Re: Call for platforms:\n\n\n> > > FreeBSD 4.2 x86 7.1 2001-03-19, Vince Vielhaber\n> > FreeBSD 4.3-BETA is good to go also ...\n\n> Yeah, I'm not sure how to list that, or whether to bother. It is beta,\n> 4.2 works fine (and nothing had to change for 4.3, right?) so maybe we\n> just list it when 4.3 goes stable? 
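Tom's accept-queue theory a few messages up is easy to picture: the backlog passed to listen() is silently clamped to the kernel's SOMAXCONN, and once that queue fills, further connection attempts are refused rather than queued. A sketch, not the postmaster's actual code:

    #include <sys/socket.h>

    int start_listening(int sock)
    {
        /* Ask for a generous queue; the kernel quietly clamps the value to
         * SOMAXCONN.  If that is as low as 5, a burst of parallel
         * regression-test connections can overflow the queue and the extra
         * clients see "Connection refused".                                */
        return listen(sock, 128);
    }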
Or is 4.3 sufficiently different that\n> it would be good to list in the comments for the platform?\n\n> - Thomas\n\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n", "msg_date": "Thu, 22 Mar 2001 15:23:04 GMT", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "On Thu, 22 Mar 2001, Thomas Lockhart wrote:\n\n> > Solaris x86/7 results, for example, in geometry.out, show a difference of:\n> > 3,-3.06204718156754e-11 (expected)\n> > 3,-3.06204718035418e-11 (results)\n> > acceptable diviation?\n>\n> That sort of deviation is well within bounds, particularly for geometry\n> tests which might have some transcendental math involved.\n>\n> Is Solaris-x86 ready to go then?\n\nNope, still working through some things ... the select_implicit test\nfailed completely:\n\ndragon:/home/centre/marc/src/postgresql-7.1RC1/src/test/regress> more results/select_implicit.out\npsql: connectDBStart() -- connect() failed: Connection refused\n Is the postmaster running locally\n and accepting connections on Unix socket '/tmp/.s.PGSQL.65432'?\n\nI'm going to re-run the test(s) and see if its an isolated thing or not ...\n\n", "msg_date": "Thu, 22 Mar 2001 11:31:07 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> Nope, still working through some things ... the select_implicit test\n> failed completely:\n\n> dragon:/home/centre/marc/src/postgresql-7.1RC1/src/test/regress> more results/select_implicit.out\n> psql: connectDBStart() -- connect() failed: Connection refused\n> Is the postmaster running locally\n> and accepting connections on Unix socket '/tmp/.s.PGSQL.65432'?\n\n> I'm going to re-run the test(s) and see if its an isolated thing or not ...\n\nTransient overflow of the postmaster socket's accept queue, maybe? How\nbig is SOMAXCONN on your box?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 Mar 2001 10:52:10 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms " }, { "msg_contents": "On Thu, 22 Mar 2001, Thomas Lockhart wrote:\n\n> OpenBSD 2.8 x86 7.1 2001-03-22, B. Palmer (first name?)\n\nThough it does work, like FBSD, there are some changes that need to be\nmade to the system. Need max proc / files changes and a kernel recompile\nwith SEMMNI and SEMMNS changes. Anywhere special to note this?\n\n\nb. palmer, [email protected]\npgp: www.crimelabs.net/bpalmer.pgp5\n\n", "msg_date": "Thu, 22 Mar 2001 10:53:16 -0500 (EST)", "msg_from": "bpalmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "> > OpenBSD 2.8 x86 7.1 2001-03-22, B. Palmer (first name?)\n> Though it does work, like FBSD, there are some changes that need to be\n> made to the system. Need max proc / files changes and a kernel recompile\n> with SEMMNI and SEMMNS changes. Anywhere special to note this?\n\nSo more-or-less the *same* configuration as is required for FBSD? If so,\nI could note that in the comments part of the platform support table.\n\nI'm not sure if either one (OBSD, FBSD) is actually explicitly\ndocumented for PostgreSQL (I don't see a FAQ, and am not sure if there\nis something in the sgml docs). 
Does anyone know if and where these\nthings are noted?\n\n - Thomas\n", "msg_date": "Thu, 22 Mar 2001 15:54:56 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "On Thu, 22 Mar 2001, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > Nope, still working through some things ... the select_implicit test\n> > failed completely:\n>\n> > dragon:/home/centre/marc/src/postgresql-7.1RC1/src/test/regress> more results/select_implicit.out\n> > psql: connectDBStart() -- connect() failed: Connection refused\n> > Is the postmaster running locally\n> > and accepting connections on Unix socket '/tmp/.s.PGSQL.65432'?\n>\n> > I'm going to re-run the test(s) and see if its an isolated thing or not ...\n>\n> Transient overflow of the postmaster socket's accept queue, maybe? How\n> big is SOMAXCONN on your box?\n\nOkay, for me, solaris has always been a nemesis as I can never find\nanything on this box :( But, looking through the header files, I find:\n\n/usr/include/sys/socket.h:#define SOMAXCONN 5\n\nI reran the tests two more times since the above ... first time went\nthrough clean as could be, with the geometry test failing (forgot to\nupdate my expected/resultmaps file(s) in my compile tree), the second time\nfailed *totally* different tests then the first run:\n\ndragon:/home/centre/marc/src/postgresql-7.1RC1/src/test/regress> grep\nFAILED regression.out\n opr_sanity ... FAILED\n join ... FAILED\n aggregates ... FAILED\n arrays ... FAILED\n\nI\n\n", "msg_date": "Thu, 22 Mar 2001 11:56:34 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms " }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> the second time\n> failed *totally* different tests then the first run:\n\n> dragon:/home/centre/marc/src/postgresql-7.1RC1/src/test/regress> grep\n> FAILED regression.out\n> opr_sanity ... FAILED\n> join ... FAILED\n> aggregates ... FAILED\n> arrays ... FAILED\n\nThese are parallel tests right? What's the failure diffs?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 Mar 2001 11:08:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms " }, { "msg_contents": "On Thu, 22 Mar 2001, Thomas Lockhart wrote:\n\n> > > OpenBSD 2.8 x86 7.1 2001-03-22, B. Palmer (first name?)\n> > Though it does work, like FBSD, there are some changes that need to be\n> > made to the system. Need max proc / files changes and a kernel recompile\n> > with SEMMNI and SEMMNS changes. Anywhere special to note this?\n>\n> So more-or-less the *same* configuration as is required for FBSD? If so,\n> I could note that in the comments part of the platform support table.\n\nThe kernel changes are the same, but OBSD needs the max proc, max open\nfile settings changes (no reboot required).\n\n>\n> I'm not sure if either one (OBSD, FBSD) is actually explicitly\n> documented for PostgreSQL (I don't see a FAQ, and am not sure if there\n> is something in the sgml docs). Does anyone know if and where these\n> things are noted?\n\nhttp://www.postgresql.org/devel-corner/docs/postgres/kernel-resources.html\n\nThis is the closest thing to docs. kernel-resources for specific OSs.\n\n- b\n\n\nb. 
palmer, [email protected]\npgp: www.crimelabs.net/bpalmer.pgp5\n\n", "msg_date": "Thu, 22 Mar 2001 11:08:11 -0500 (EST)", "msg_from": "bpalmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "The Hermit Hacker writes:\n\n> > Is Solaris-x86 ready to go then?\n>\n> Nope, still working through some things ... the select_implicit test\n> failed completely:\n>\n> dragon:/home/centre/marc/src/postgresql-7.1RC1/src/test/regress> more results/select_implicit.out\n> psql: connectDBStart() -- connect() failed: Connection refused\n> Is the postmaster running locally\n> and accepting connections on Unix socket '/tmp/.s.PGSQL.65432'?\n>\n> I'm going to re-run the test(s) and see if its an isolated thing or not ...\n\nSolaris is known to have trouble with Unix domain sockets.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Thu, 22 Mar 2001 17:20:03 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "Marko Kreen writes:\n\n> OK: Linux 2.4.2 i686 / gcc 2.95.2 / Debian testing/unstable\n>\n> no problems.\n>\n> OK?: NetBSD 1.5 i586 / egcs 2.91.66 / (netbsd-1-5 from Jan)\n>\n> netbsd FAILED the geometry test, diff attached, dunno if its\n> critical or not.\n\nCan you check whether it matches any of the other possible geometry\nresults? See\n\nhttp://www.postgresql.org/devel-corner/docs/postgres/regress-platform.html\n\nabout the mechanisms.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Thu, 22 Mar 2001 17:25:01 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms" }, { "msg_contents": "Thomas Lockhart writes:\n\n> Here is the current scorecard. We have a couple of new platforms\n> reported (yeaaa!):\n\n> QNX 4.25 x86 7.0 2000-04-01, Dr. Andreas Kardos\n\nThis one is getting a \"no good\", as of latest reports. There are some\nissues to be worked out in the dreaded spin lock area, which will probably\nnot happen between now and next week.\n\n> And here are the up-to-date platforms; thanks for the reports:\n\n> IBM S/390 7.1 2000-11-17, Neale Ferguson\n ^^^\n should be \"Linux\"\n\n> LinuxPPC G3 7.1 2001-03-19, Tom Lane\n\nThe kernel is called \"Linux\", the processor is called \"PowerPC G3\". But\n\"PowerPC\" is probably enough, given that we list \"x86\". Compare to...\n\n> MacOS-X Darwin PowerPC 7.1 2000-12-11, Peter Bierman\n\n...this. There's a space, no dash, before the \"X\".\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Thu, 22 Mar 2001 17:29:27 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms" }, { "msg_contents": "The Hermit Hacker writes:\n\n> How much 'diviation' are we allowing for?\n>\n> Solaris x86/7 results, for example, in geometry.out, show a difference of:\n>\n> 1.53102359078377e-11,3 (expected)\n> 1.53102359017709e-11,3 (results)\n>\n> or\n>\n> 3,-3.06204718156754e-11 (expected)\n> 3,-3.06204718035418e-11 (results)\n>\n> acceptable diviation?\n\nPractically yes, technically not. Check if the geometry results match any\nof the other \"expected\" files so we can update the \"resultmap\". 
See\n\nhttp://www.postgresql.org/devel-corner/docs/postgres/regress-platform.html\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Thu, 22 Mar 2001 17:30:46 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "Thomas Lockhart writes:\n\n> > > OpenBSD 2.8 x86 7.1 2001-03-22, B. Palmer (first name?)\n> > Though it does work, like FBSD, there are some changes that need to be\n> > made to the system. Need max proc / files changes and a kernel recompile\n> > with SEMMNI and SEMMNS changes. Anywhere special to note this?\n>\n> So more-or-less the *same* configuration as is required for FBSD? If so,\n> I could note that in the comments part of the platform support table.\n\nQuite a few platforms will need some tuning in that area before production\nuse. This is all documented.\n\n> I'm not sure if either one (OBSD, FBSD) is actually explicitly\n> documented for PostgreSQL (I don't see a FAQ, and am not sure if there\n> is something in the sgml docs). Does anyone know if and where these\n> things are noted?\n>\n> - Thomas\n>\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Thu, 22 Mar 2001 17:33:38 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "On Thu, Mar 22, 2001 at 05:25:01PM +0100, Peter Eisentraut wrote:\n> Marko Kreen writes:\n> >\n> > OK?: NetBSD 1.5 i586 / egcs 2.91.66 / (netbsd-1-5 from Jan)\n> >\n> > netbsd FAILED the geometry test, diff attached, dunno if its\n> > critical or not.\n> \n> Can you check whether it matches any of the other possible geometry\n> results? See\n\nYes, it matches geometry-positive-zeros-bsd.out. There is\nanother report about NetBSD 1.5/i386 which has comment:\n\n> one spurious floating point test failure\n> (mail sent to postgresql-bugs with details)\n\nBut I could not find it in archive page. (reporter Giles Lean\n<[email protected]>) Perhaps same thing?\n\n-- \nmarko\n\n", "msg_date": "Thu, 22 Mar 2001 19:12:39 +0200", "msg_from": "Marko Kreen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms" }, { "msg_contents": "On Thu, 22 Mar 2001, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > the second time\n> > failed *totally* different tests then the first run:\n>\n> > dragon:/home/centre/marc/src/postgresql-7.1RC1/src/test/regress> grep\n> > FAILED regression.out\n> > opr_sanity ... FAILED\n> > join ... FAILED\n> > aggregates ... FAILED\n> > arrays ... FAILED\n>\n> These are parallel tests right? What's the failure diffs?\n\nsame as last time:\n\ndragon:/home/centre/marc/src/postgresql-7.1RC1/src/test/regress> more\nresults/opr_sanity.out\npsql: connectDBStart() -- connect() failed: Connection refused\n Is the postmaster running locally\n and accepting connections on Unix socket '/tmp/.s.PGSQL.65432'?\n\nand yet another run (and different results):\n\n============== shutting down postmaster ==============\n\n=================================================\n 1 of 76 tests failed, 1 failed test(s) ignored.\n=================================================\n\nThe differences that caused some tests to fail can be viewed in the\nfile `./regression.diffs'. 
A copy of the test summary that you see\nabove is saved in the file `./regression.out'.\n\nmake: *** [check] Error 1\ndragon:/home/centre/marc/src/postgresql-7.1RC1/src/test/regress> grep\nFAILED regression.out\ntest misc ... FAILED\ndragon:/home/centre/marc/src/postgresql-7.1RC1/src/test/regress>\n\n\n\n", "msg_date": "Thu, 22 Mar 2001 13:29:47 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms " }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n>> These are parallel tests right? What's the failure diffs?\n\n> same as last time:\n\n> dragon:/home/centre/marc/src/postgresql-7.1RC1/src/test/regress> more\n> results/opr_sanity.out\n> psql: connectDBStart() -- connect() failed: Connection refused\n> Is the postmaster running locally\n> and accepting connections on Unix socket '/tmp/.s.PGSQL.65432'?\n\nSee Peter's comment elsewhere that he doesn't think Solaris handles\nUnix socket connections very well. Try patching pg_regress to force\nunix_sockets=no.\n\n\n> and yet another run (and different results):\n\n> =================================================\n> 1 of 76 tests failed, 1 failed test(s) ignored.\n> =================================================\n\nThat's just ye olde random \"random\" failure ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 Mar 2001 12:49:39 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms " }, { "msg_contents": "Just a data point on the geometry test under NetBSD/i386 issue:\n\n/etc/ld.so.conf by default now contains:\nlibm.so.0 machdep.fpu_present 1:libm387.so.0,libm.so.0\n\nwhich means that if the sysctl machdep.fpu_present returns 1, load the\nshared library libm387 to make use of the fpu.\n\nIf you remove /etc/ld.so.conf, so that ldd `which psql` does not show\n\n -lm.0 => /usr/lib/libm387.so.0\n -lm.0 => /usr/lib/libm.so.0\n\nbut only the libm.so.0 line\n\n======================\n All 76 tests passed. \n======================\n\nIf you replace the /etc/ld.so.conf file and have an fpu, then the geometry\ntest will fail with slightly different rounding.\n\nDo we want a specific geometry-netbsd-i386-with-fpu.out where you must\nalso test\n\n% sysctl machdep.fpu_present\nmachdep.fpu_present = 1\n\n?\n\nCheers,\n\nPatrick\n\nPS: AFAIK geometry-positive-zeros-bsd works for all NetBSD platforms - the\nabove difference is only for i386 + fpu.\n\nOn Thu, Mar 22, 2001 at 07:12:39PM +0200, Marko Kreen wrote:\n> On Thu, Mar 22, 2001 at 05:25:01PM +0100, Peter Eisentraut wrote:\n> > Marko Kreen writes:\n> > >\n> > > OK?: NetBSD 1.5 i586 / egcs 2.91.66 / (netbsd-1-5 from Jan)\n> > >\n> > > netbsd FAILED the geometry test, diff attached, dunno if its\n> > > critical or not.\n> > \n> > Can you check whether it matches any of the other possible geometry\n> > results? See\n> \n> Yes, it matches geometry-positive-zeros-bsd.out. There is\n> another report about NetBSD 1.5/i386 which has comment:\n> \n> > one spurious floating point test failure\n> > (mail sent to postgresql-bugs with details)\n> \n> But I could not find it in archive page. 
(reporter Giles Lean\n> <[email protected]>) Perhaps same thing?\n> \n> -- \n> marko\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n", "msg_date": "Thu, 22 Mar 2001 17:54:18 +0000", "msg_from": "Patrick Welche <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms" }, { "msg_contents": "\n> PS: AFAIK geometry-positive-zeros-bsd works for all NetBSD platforms - the\n> above difference is only for i386 + fpu.\n\nIt doesn't on NetBSD-1.5/alpha -- there geometry-positive-zeros is\ncorrect.\n\nRegards,\n\nGiles\n", "msg_date": "Fri, 23 Mar 2001 06:25:50 +1100", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms " }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> On Thu, 22 Mar 2001, Thomas Lockhart wrote:\n> \n> > Solaris x86 7.0 2000-04-12, Marc Fournier\n> >\n> > scrappy, do you still have this machine?\n> \n> Doing tests on Solaris x86/7 right now, will report as soon as they are\n> done ...\n> \n> > Solaris 2.5.1-2.7 Sparc 7.0 2000-04-12, Peter Eisentraut\n> >\n> > I'll bet that someone already has Solaris covered. Report?\n> \n> Will do up Sparc/7 also this morning ...\n\nIn my tests on sparc/7 my compile died at line 3088 of\npostgresql-7.1beta6/src/interfaces/python/pgmodule.c:\n\n./pgmodule.c:3088: parse error before `init_pg'\n\nThat's line 3137 of today's (22Mar) snapshot, which reads:\n\n/* Initialization function for the module */\nDL_EXPORT(void)\ninit_pg(void)\n{\n\nI'm not a C expert by any means, but I can't figure how that is valid. \n\nGiven my ignorance, I don't want to call it a bug. Plus we don't use the\npython module in production, nor the sparc platform. But it seemed worth\npointing out.\n\n-- \nKarl\n", "msg_date": "Thu, 22 Mar 2001 14:43:27 -0500", "msg_from": "Karl DeBisschop <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "\n> NetBSD 2.8 alpha 7.1 2001-03-22, Giles Lean\n\nCorrection: NetBSD-1.5/alpha.\n\nCiao,\n\nGiles\n", "msg_date": "Fri, 23 Mar 2001 06:45:02 +1100", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms " }, { "msg_contents": "> > NetBSD 2.8 alpha 7.1 2001-03-22, Giles Lean\n> Correction: NetBSD-1.5/alpha.\n\nRight. That was a typo in transcribing my online copy of the sgml docs\nto the email. 
I was hoping no one caught it, and didn't bother sending a\ncorrection ;)\n\n - Thomas\n", "msg_date": "Thu, 22 Mar 2001 19:57:05 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "On Fri, Mar 23, 2001 at 06:25:50AM +1100, Giles Lean wrote:\n> \n> > PS: AFAIK geometry-positive-zeros-bsd works for all NetBSD platforms - the\n> > above difference is only for i386 + fpu.\n> \n> It doesn't on NetBSD-1.5/alpha -- there geometry-positive-zeros is\n> correct.\n\nSorry, that should have read:\n\nAFAIK geometry-positive-zeros works for all NetBSD platforms - the\nabove difference is only for i386 + fpu.\n\n(-bsd is for bsdi)\n\nThanks for the correction,\n\nPatrick\n", "msg_date": "Thu, 22 Mar 2001 19:58:04 +0000", "msg_from": "Patrick Welche <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms" }, { "msg_contents": "On Thu, Mar 22, 2001 at 07:58:04PM +0000, Patrick Welche wrote:\n> On Fri, Mar 23, 2001 at 06:25:50AM +1100, Giles Lean wrote:\n> > \n> > > PS: AFAIK geometry-positive-zeros-bsd works for all NetBSD platforms - the\n> > > above difference is only for i386 + fpu.\n> > \n> > It doesn't on NetBSD-1.5/alpha -- there geometry-positive-zeros is\n> > correct.\n> \n> Sorry, that should have read:\n> \n> AFAIK geometry-positive-zeros works for all NetBSD platforms - the\n> above difference is only for i386 + fpu.\n\nSeems that following patch is needed. Now It Works For Me (tm).\nGiles, does the regress test now succed for you?\n\n-- \nmarko\n\n\nIndex: src/test/regress/resultmap\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/test/regress/resultmap,v\nretrieving revision 1.45\ndiff -u -r1.45 resultmap\n--- src/test/regress/resultmap\t2001/03/22 15:13:18\t1.45\n+++ src/test/regress/resultmap\t2001/03/22 17:29:49\n@@ -17,6 +17,7 @@\n geometry/.*-openbsd=geometry-positive-zeros-bsd\n geometry/.*-irix6=geometry-irix\n geometry/.*-netbsd=geometry-positive-zeros\n+geometry/i.86-.*-netbsdelf1.5=geometry-positive-zeros-bsd\n geometry/.*-sysv5uw7.*:cc=geometry-uw7-cc\n geometry/.*-sysv5uw7.*:gcc=geometry-uw7-gcc\n geometry/alpha.*-dec-osf=geometry-alpha-precision\n", "msg_date": "Thu, 22 Mar 2001 22:27:44 +0200", "msg_from": "Marko Kreen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Call for platforms" }, { "msg_contents": "On Thu, Mar 22, 2001 at 10:27:44PM +0200, Marko Kreen wrote:\n> On Thu, Mar 22, 2001 at 07:58:04PM +0000, Patrick Welche wrote:\n> > \n> > AFAIK geometry-positive-zeros works for all NetBSD platforms - the\n> > above difference is only for i386 + fpu.\n> \n> Seems that following patch is needed. Now It Works For Me (tm).\n> Giles, does the regress test now succed for you?\n\nYour patch works for me (i386) - I'd just like to point out that it's\nbecause we are both running on peecees with fpus and thus with libm387\nloaded (else works without patch)\n\nBTW\nNetBSD 2.8 alpha 7.1 2001-03-22, Giles Lean\n\nShouldn't that be 1.5?\n\nCheers,\n\nPatrick\n", "msg_date": "Thu, 22 Mar 2001 20:30:36 +0000", "msg_from": "Patrick Welche <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Call for platforms" }, { "msg_contents": "On Thu, Mar 22, 2001 at 12:49:39PM -0500, Tom Lane wrote:\n> The Hermit Hacker <[email protected]> writes:\n> >> These are parallel tests right? 
What's the failure diffs?\n> \n> > same as last time:\n> \n> > dragon:/home/centre/marc/src/postgresql-7.1RC1/src/test/regress> more\n> > results/opr_sanity.out\n> > psql: connectDBStart() -- connect() failed: Connection refused\n> > Is the postmaster running locally\n> > and accepting connections on Unix socket '/tmp/.s.PGSQL.65432'?\n> \n> See Peter's comment elsewhere that he doesn't think Solaris handles\n> Unix socket connections very well. Try patching pg_regress to force\n> unix_sockets=no.\n> \n> \n> > and yet another run (and different results):\n> \n> > =================================================\n> > 1 of 76 tests failed, 1 failed test(s) ignored.\n> > =================================================\n> \n> That's just ye olde random \"random\" failure ...\n\nFunny, I get the more optimistic:\n\n==================================================\n 75 of 76 tests passed, 1 failed test(s) ignored. \n==================================================\n\nDifferent version? PostgreSQL 7.1RC1\n", "msg_date": "Thu, 22 Mar 2001 20:33:55 +0000", "msg_from": "Patrick Welche <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "\n> Seems that following patch is needed. Now It Works For Me (tm).\n> Giles, does the regress test now succed for you?\n\nYes, but I don't like that it is 1.5 specific. I expect that later\nNetBSD/i386 releases will also have the \"new\" floating point behaviour\nby default, subject to /etc/ld.so.conf setting as Patrick Welche\ndiscovered.\n\nBTW NetBSD just uses \"i386\" for any x86. It's not necessary to allow\nfor i486, i586 etc.\n\nPerhaps the resultmap format could be enhanced to allow wildcarding of\nthe result files, and just accept either match?\n\ngeometry/.*-netbsd=geometry-positive-zeros*\n\nRegards,\n\nGiles\n\n> Index: src/test/regress/resultmap\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/test/regress/resultmap,v\n> retrieving revision 1.45\n> diff -u -r1.45 resultmap\n> --- src/test/regress/resultmap\t2001/03/22 15:13:18\t1.45\n> +++ src/test/regress/resultmap\t2001/03/22 17:29:49\n> @@ -17,6 +17,7 @@\n> geometry/.*-openbsd=geometry-positive-zeros-bsd\n> geometry/.*-irix6=geometry-irix\n> geometry/.*-netbsd=geometry-positive-zeros\n> +geometry/i.86-.*-netbsdelf1.5=geometry-positive-zeros-bsd\n> geometry/.*-sysv5uw7.*:cc=geometry-uw7-cc\n> geometry/.*-sysv5uw7.*:gcc=geometry-uw7-gcc\n> geometry/alpha.*-dec-osf=geometry-alpha-precision\n\n", "msg_date": "Fri, 23 Mar 2001 07:48:08 +1100", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Call for platforms " }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n\n> If a platform you are running on is not listed, make sure it gets\n> included! \n\nRed Hat Linux, Wolverine Beta (and some updates) - glibc 2.2.2,\n2.4.2ish kernel (read: lots of fixes), gcc 2.96RH: All 76 tests passed\nwith 7.1beta6 (parallel_schedule).\n\nI'll update this info when we do our next release. 
\n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n", "msg_date": "22 Mar 2001 16:02:11 -0500", "msg_from": "[email protected] (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: Call for platforms" }, { "msg_contents": "On Thu, 22 Mar 2001, Patrick Welche wrote:\n\n> On Thu, Mar 22, 2001 at 12:49:39PM -0500, Tom Lane wrote:\n> > The Hermit Hacker <[email protected]> writes:\n> > >> These are parallel tests right? What's the failure diffs?\n> >\n> > > same as last time:\n> >\n> > > dragon:/home/centre/marc/src/postgresql-7.1RC1/src/test/regress> more\n> > > results/opr_sanity.out\n> > > psql: connectDBStart() -- connect() failed: Connection refused\n> > > Is the postmaster running locally\n> > > and accepting connections on Unix socket '/tmp/.s.PGSQL.65432'?\n> >\n> > See Peter's comment elsewhere that he doesn't think Solaris handles\n> > Unix socket connections very well. Try patching pg_regress to force\n> > unix_sockets=no.\n> >\n> >\n> > > and yet another run (and different results):\n> >\n> > > =================================================\n> > > 1 of 76 tests failed, 1 failed test(s) ignored.\n> > > =================================================\n> >\n> > That's just ye olde random \"random\" failure ...\n>\n> Funny, I get the more optimistic:\n>\n> ==================================================\n> 75 of 76 tests passed, 1 failed test(s) ignored.\n> ==================================================\n>\n> Different version? PostgreSQL 7.1RC1\n\n7.1RC1 (the not released yet version) :)\n\n\n", "msg_date": "Thu, 22 Mar 2001 17:44:47 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "[email protected] (Trond Eivind Glomsr�d) writes:\n\n> Thomas Lockhart <[email protected]> writes:\n> \n> > If a platform you are running on is not listed, make sure it gets\n> > included! 
\n> \n> Red Hat Linux, Wolverine Beta (and some updates) - glibc 2.2.2,\n> 2.4.2ish kernel (read: lots of fixes), gcc 2.96RH: All 76 tests passed\n> with 7.1beta6 (parallel_schedule).\n\nForgot to mention: This is x86.\n\n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n", "msg_date": "22 Mar 2001 17:46:18 -0500", "msg_from": "[email protected] (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: Call for platforms" }, { "msg_contents": "On 22 Mar 2001, Trond Eivind [iso-8859-1] Glomsr�d wrote:\n\n> [email protected] (Trond Eivind Glomsr�d) writes:\n>\n> > Thomas Lockhart <[email protected]> writes:\n> >\n> > > If a platform you are running on is not listed, make sure it gets\n> > > included!\n> >\n> > Red Hat Linux, Wolverine Beta (and some updates) - glibc 2.2.2,\n> > 2.4.2ish kernel (read: lots of fixes), gcc 2.96RH: All 76 tests passed\n> > with 7.1beta6 (parallel_schedule).\n>\n> Forgot to mention: This is x86.\n\nForget to enter it into the regresstest database?\n\n http://www.postgresql.org/~vev/regress/\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Thu, 22 Mar 2001 20:29:20 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\n\nOn 22 Mar 2001, at 14:29, Thomas Lockhart wrote:\n\n> Linux 2.2.x armv4l 7.0 2000-04-17, Mark Knox\n\nCompiled and tested 7.1beta6 tonight. All the regression tests passed \nexcept two - the usual minor differences in geometry (rounding on the \nfinal digit) and this rather troubling output from type_sanity. I'm \nnot altogether sure what impact this has. Everything seems to run \njust fine.\n\n\n*** ./expected/type_sanity.out Tue Sep 12 00:49:16 2000\n- --- ./results/type_sanity.out Thu Mar 22 21:42:49 2001\n***************\n*** 172,177 ****\n p1.attalign != p2.typalign OR\n p1.attbyval != p2.typbyval);\n oid | attname | oid | typname \n! -----+---------+-----+---------\n! (0 rows)\n \n- --- 172,239 ----\n p1.attalign != p2.typalign OR\n p1.attbyval != p2.typbyval);\n oid | attname | oid | typname \n! -------+---------+-----+---------\n! 16572 | ctid | 27 | tid\n! 16593 | ctid | 27 | tid\n! 16610 | ctid | 27 | tid\n! 16635 | ctid | 27 | tid\n! 16646 | ctid | 27 | tid\n! 16678 | ctid | 27 | tid\n! 16691 | ctid | 27 | tid\n! 16873 | ctid | 27 | tid\n! 16941 | ctid | 27 | tid\n! 16953 | ctid | 27 | tid\n! 16970 | ctid | 27 | tid\n! 17038 | ctid | 27 | tid\n! 17051 | ctid | 27 | tid\n! 17067 | ctid | 27 | tid\n! 17079 | ctid | 27 | tid\n! 17090 | ctid | 27 | tid\n! 17206 | ctid | 27 | tid\n! 17221 | ctid | 27 | tid\n! 17236 | ctid | 27 | tid\n! 17251 | ctid | 27 | tid\n! 17266 | ctid | 27 | tid\n! 17281 | ctid | 27 | tid\n! 17301 | ctid | 27 | tid\n! 17314 | ctid | 27 | tid\n! 17327 | ctid | 27 | tid\n! 17342 | ctid | 27 | tid\n! 17355 | ctid | 27 | tid\n! 18792 | ctid | 27 | tid\n! 18820 | ctid | 27 | tid\n! 18832 | ctid | 27 | tid\n! 18845 | ctid | 27 | tid\n! 18857 | ctid | 27 | tid\n! 18869 | ctid | 27 | tid\n! 18888 | ctid | 27 | tid\n! 18922 | ctid | 27 | tid\n! 18937 | ctid | 27 | tid\n! 
18967 | ctid | 27 | tid\n! 18990 | ctid | 27 | tid\n! 19005 | ctid | 27 | tid\n! 19019 | ctid | 27 | tid\n! 19031 | ctid | 27 | tid\n! 19042 | ctid | 27 | tid\n! 19053 | ctid | 27 | tid\n! 19069 | ctid | 27 | tid\n! 19080 | ctid | 27 | tid\n! 19103 | ctid | 27 | tid\n! 20617 | ctid | 27 | tid\n! 20633 | ctid | 27 | tid\n! 20643 | ctid | 27 | tid\n! 20655 | ctid | 27 | tid\n! 20677 | ctid | 27 | tid\n! 20689 | ctid | 27 | tid\n! 20702 | ctid | 27 | tid\n! 20716 | ctid | 27 | tid\n! 20726 | ctid | 27 | tid\n! 20766 | ctid | 27 | tid\n! 20784 | ctid | 27 | tid\n! 20794 | ctid | 27 | tid\n! 20804 | ctid | 27 | tid\n! 20836 | ctid | 27 | tid\n! 20860 | ctid | 27 | tid\n! 20879 | ctid | 27 | tid\n! (62 rows)\n\n\n-----BEGIN PGP SIGNATURE-----\nVersion: N/A\n\niQCVAwUBOrrHrv+IdJuhyV9xAQGemgQApLVZS9xWQAtIzfgw3ILQThtEdftUBH20\nFCoNqod++HunTazDwQZo6Msbunlvb8cJmSXg/kRkUmN6FQ39RtK9XEWsvFUy1+Nx\neJCHiHyIMZBmmXNK1eiK0AyxFSqD8MKtgSuKvprXWNzTD4+NVZzWy9h1cONhZviN\nKEj9thVXQDc=\n=TG7n\n-----END PGP SIGNATURE-----\n", "msg_date": "Thu, 22 Mar 2001 22:49:02 -0500", "msg_from": "\"Mark Knox\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms" }, { "msg_contents": "I have tested today's snap shot on SunOS4.\n\n% uname -a\nSunOS srashd 4.1.4-JL 1 sun4m\n\nThere's a minor portability problem in\nsrc/bin/pg_encoding/Makefile.\n\n*** Makefile Fri Mar 23 11:53:49 2001\n--- Makefile.orig Wed Feb 21 18:05:21 2001\n***************\n*** 16,28 ****\n \n all: submake pg_encoding\n \n- ifdef STRTOUL\n- OBJS+=$(top_builddir)/src/backend/port/strtoul.o\n- \n- $(top_builddir)/src/backend/port/strtoul.o:\n- $(MAKE) -C $(top_builddir)/src/backend/port strtoul.o\n- endif\n- \n pg_encoding: $(OBJS)\n $(CC) $(CFLAGS) $^ $(libpq) $(LDFLAGS) $(LIBS) -o $@\n \nI'm going to check in this correction soon.\n\nFor the regression test, I got 7 failures, most of them seem harmless,\nthe only concern I have is bit test though.\n--\nTatsuo Ishii\n\nP.S. I'm going to test Linux/MIPS (Cobalt RaQ2) soon...\n--\nTatsuo Ishii", "msg_date": "Fri, 23 Mar 2001 14:31:27 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms" }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> For the regression test, I got 7 failures, most of them seem harmless,\n> the only concern I have is bit test though.\n\nMost of the diffs derive from what I recall to be a known SunOS problem,\nthat strtol fails to notice overflow. A value that should be rejected\nis getting inserted into int4_tbl (mod 2^32 of course).\n\nThe bit test diffs seem to indicate that bit_cmp is messed up. That\ndepends on memcmp. I seem to recall something about memcmp not being\n8-bit-clean on SunOS ... does that ring a bell with anyone?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Mar 2001 01:08:06 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms " }, { "msg_contents": "> I have tested today's snap shot on SunOS4.\n> For the regression test, I got 7 failures, most of them seem harmless,\n> the only concern I have is bit test though.\n> P.S. I'm going to test Linux/MIPS (Cobalt RaQ2) soon...\n\nGreat! 
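(A stand-alone illustration of the two libc quirks Tom points at above -- strtol overflow detection and memcmp signedness. This is only a sketch with made-up helper names, not the backend's actual code: a conforming strtol flags overflow by setting errno to ERANGE, so the caller must check it, and a hand-rolled byte comparison must go through unsigned char, which is how the C standard says memcmp and strcmp already behave.)\n\n#include <errno.h>\n#include <limits.h>\n#include <stdlib.h>\n\n/* Reject out-of-range or trailing-garbage input instead of letting it wrap;\n   a libc whose strtol never sets ERANGE is what lets a bogus value slip\n   into int4_tbl mod 2^32. */\nstatic int\nparse_int4(const char *s, long *result)\n{\n    char *end;\n\n    errno = 0;\n    *result = strtol(s, &end, 10);\n    if (errno == ERANGE || *result > INT_MAX || *result < INT_MIN)\n        return -1;              /* overflow */\n    if (end == s || *end != 0)\n        return -1;              /* empty, or not entirely numeric */\n    return 0;\n}\n\n/* Compare bytes explicitly as unsigned char, so the sign of the result\n   does not depend on whether plain char happens to be signed. */\nstatic int\nbyte_cmp(const void *a, const void *b, size_t n)\n{\n    const unsigned char *pa = a;\n    const unsigned char *pb = b;\n    size_t i;\n\n    for (i = 0; i < n; i++)\n    {\n        if (pa[i] != pb[i])\n            return (pa[i] < pb[i]) ? -1 : 1;\n    }\n    return 0;\n}\n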
I'll update info for SunOS4 (individual problems will be fixed or\n\"known features\" ;) and look forward to seeing the Linux/MIPS results.\n\n - Thomas\n", "msg_date": "Fri, 23 Mar 2001 06:31:43 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms" }, { "msg_contents": "> > I have tested today's snap shot on SunOS4.\n> > For the regression test, I got 7 failures, most of them seem harmless,\n> > the only concern I have is bit test though.\n> > P.S. I'm going to test Linux/MIPS (Cobalt RaQ2) soon...\n> \n> Great! I'll update info for SunOS4 (individual problems will be fixed or\n> \"known features\" ;) and look forward to seeing the Linux/MIPS results.\n\nSorry, after moving of my office, this is the first time to boot RaQ2\nbut it won't boot anymore. Seems there are some severe hardware\ntroubles with it. Can anyone else do the testing instead of me?\n--\nTatsuo Ishii\n", "msg_date": "Fri, 23 Mar 2001 16:11:18 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "> Tatsuo Ishii <[email protected]> writes:\n> > For the regression test, I got 7 failures, most of them seem harmless,\n> > the only concern I have is bit test though.\n> \n> Most of the diffs derive from what I recall to be a known SunOS problem,\n> that strtol fails to notice overflow. A value that should be rejected\n> is getting inserted into int4_tbl (mod 2^32 of course).\n> \n> The bit test diffs seem to indicate that bit_cmp is messed up. That\n> depends on memcmp. I seem to recall something about memcmp not being\n> 8-bit-clean on SunOS ... does that ring a bell with anyone?\n\nGood point. From the man page of memcmp(3) on this machine:\n\nBUGS\n memcmp() uses native character comparison, which is signed\n on some machines and unsigned on other machines. Thus the\n sign of the value returned when one of the characters has\n its high-order bit set is implementation-dependent.\n--\nTatsuo Ishii\n", "msg_date": "Fri, 23 Mar 2001 17:19:17 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms " }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n>> The bit test diffs seem to indicate that bit_cmp is messed up. That\n>> depends on memcmp. I seem to recall something about memcmp not being\n>> 8-bit-clean on SunOS ... does that ring a bell with anyone?\n\n> Good point. From the man page of memcmp(3) on this machine:\n\n> BUGS\n> memcmp() uses native character comparison, which is signed\n> on some machines and unsigned on other machines. Thus the\n> sign of the value returned when one of the characters has\n> its high-order bit set is implementation-dependent.\n\nEeek.\n\nThe C spec documents I have at hand all agree that memcmp, strcmp,\netc shall interpret their arguments as unsigned char. I hope Sun\nwere the only ones who took the above more liberal interpretation...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Mar 2001 03:33:03 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms " }, { "msg_contents": "> Tatsuo Ishii <[email protected]> writes:\n> >> Tatsuo Ishii <[email protected]> writes:\n> > ! FATAL 2: ZeroFill(logfile 0 seg 1) failed: No such file or directory\n> > ! 
pqReadData() -- backend closed the channel unexpectedly.\n> >> \n> >> Is it possible you ran out of disk space?\n> \n> > Probably not.\n> \n> The reason I was speculating that was that it seems pretty unlikely\n> that a write() call could return ENOENT, as the above appears to\n> suggest. I think that the errno = ENOENT value was not set by write(),\n> but is leftover from the expected failure of BasicOpenFile earlier in\n> XLogFileInit. Probably write() returned some value less than BLCKSZ\n> but more than zero, and so did not set errno.\n> \n> Offhand the only reason I can think of for a write to a disk file\n> to terminate after a partial transfer is a full disk. What do you\n> think?\n\nSorry, I was wrong. I accidentaly ran out the disk space.\n\nBTW, I got segfault when I first try beta6 on this platform. To\ninvestigae it, I recompiled with -g (without -O2) and now the problem\nhas gone. It sems there's something wrong with the compiler (gcc\nversion egcs-2.90.25 980302 (egcs-1.0.2 prerelease)) or potential bug\nin 7.1, I don't know.\n\nAnyway, the platform is too old now, and I would like to try it\nanother day with newer MkLinux version installed. I don't want to make\nthis as a show stopper for 7.1...\n-- \nTatsuo Ishii\n\n*** ./expected/oid.out\tTue Nov 21 12:23:20 2000\n--- ./results/oid.out\tThu Mar 22 15:58:56 2001\n***************\n*** 6,11 ****\n--- 6,12 ----\n INSERT INTO OID_TBL(f1) VALUES ('1235');\n INSERT INTO OID_TBL(f1) VALUES ('987');\n INSERT INTO OID_TBL(f1) VALUES ('-1040');\n+ ERROR: oidin: error in \"-1040\": can't parse \"-1040\"\n INSERT INTO OID_TBL(f1) VALUES ('99999999');\n INSERT INTO OID_TBL(f1) VALUES ('');\n -- bad inputs \n***************\n*** 15,28 ****\n ERROR: oidin: error in \"99asdfasd\": can't parse \"asdfasd\"\n SELECT '' AS six, OID_TBL.*;\n six | f1 \n! -----+------------\n | 1234\n | 1235\n | 987\n- | 4294966256\n | 99999999\n | 0\n! (6 rows)\n \n SELECT '' AS one, o.* FROM OID_TBL o WHERE o.f1 = 1234;\n one | f1 \n--- 16,28 ----\n ERROR: oidin: error in \"99asdfasd\": can't parse \"asdfasd\"\n SELECT '' AS six, OID_TBL.*;\n six | f1 \n! -----+----------\n | 1234\n | 1235\n | 987\n | 99999999\n | 0\n! (5 rows)\n \n SELECT '' AS one, o.* FROM OID_TBL o WHERE o.f1 = 1234;\n one | f1 \n***************\n*** 32,44 ****\n \n SELECT '' AS five, o.* FROM OID_TBL o WHERE o.f1 <> '1234';\n five | f1 \n! ------+------------\n | 1235\n | 987\n- | 4294966256\n | 99999999\n | 0\n! (5 rows)\n \n SELECT '' AS three, o.* FROM OID_TBL o WHERE o.f1 <= '1234';\n three | f1 \n--- 32,43 ----\n \n SELECT '' AS five, o.* FROM OID_TBL o WHERE o.f1 <> '1234';\n five | f1 \n! ------+----------\n | 1235\n | 987\n | 99999999\n | 0\n! (4 rows)\n \n SELECT '' AS three, o.* FROM OID_TBL o WHERE o.f1 <= '1234';\n three | f1 \n***************\n*** 57,75 ****\n \n SELECT '' AS four, o.* FROM OID_TBL o WHERE o.f1 >= '1234';\n four | f1 \n! ------+------------\n | 1234\n | 1235\n- | 4294966256\n | 99999999\n! (4 rows)\n \n SELECT '' AS three, o.* FROM OID_TBL o WHERE o.f1 > '1234';\n three | f1 \n! -------+------------\n | 1235\n- | 4294966256\n | 99999999\n! (3 rows)\n \n DROP TABLE OID_TBL;\n--- 56,72 ----\n \n SELECT '' AS four, o.* FROM OID_TBL o WHERE o.f1 >= '1234';\n four | f1 \n! ------+----------\n | 1234\n | 1235\n | 99999999\n! (3 rows)\n \n SELECT '' AS three, o.* FROM OID_TBL o WHERE o.f1 > '1234';\n three | f1 \n! -------+----------\n | 1235\n | 99999999\n! 
(2 rows)\n \n DROP TABLE OID_TBL;\n\n======================================================================\n\n*** ./expected/geometry-powerpc-linux-gnulibc1.out\tWed Sep 13 06:07:16 2000\n--- ./results/geometry.out\tThu Mar 22 16:01:20 2001\n***************\n*** 445,451 ****\n -----+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n | ((-3,0),(-2.59807621135076,1.50000000000442),(-1.49999999999116,2.59807621135842),(1.53102359017709e-11,3),(1.50000000001768,2.59807621134311),(2.59807621136607,1.4999999999779),(3,-3.06204718035418e-11),(2.59807621133545,-1.50000000003094),(1.49999999996464,-2.59807621137373),(-4.59307077053127e-11,-3),(-1.5000000000442,-2.5980762113278),(-2.59807621138138,-1.49999999995138))\n | ((-99,2),(-85.6025403783588,52.0000000001473),(-48.9999999997054,88.602540378614),(1.00000000051034,102),(51.0000000005893,88.6025403781036),(87.6025403788692,51.9999999992634),(101,1.99999999897932),(87.6025403778485,-48.0000000010313),(50.9999999988214,-84.6025403791243),(0.999999998468976,-98),(-49.0000000014732,-84.6025403775933),(-85.6025403793795,-47.9999999983795))\n! | ((-4,3),(-3.33012701891794,5.50000000000737),(-1.49999999998527,7.3301270189307),(1.00000000002552,8),(3.50000000002946,7.33012701890518),(5.33012701894346,5.49999999996317),(6,2.99999999994897),(5.33012701889242,0.499999999948437),(3.49999999994107,-1.33012701895622),(0.999999999923449,-2),(-1.50000000007366,-1.33012701887966),(-3.33012701896897,0.500000000081027))\n | ((-2,2),(-1.59807621135076,3.50000000000442),(-0.499999999991161,4.59807621135842),(1.00000000001531,5),(2.50000000001768,4.59807621134311),(3.59807621136607,3.4999999999779),(4,1.99999999996938),(3.59807621133545,0.499999999969062),(2.49999999996464,-0.598076211373729),(0.999999999954069,-1),(-0.500000000044197,-0.598076211327799),(-1.59807621138138,0.500000000048616))\n | ((90,200),(91.3397459621641,205.000000000015),(95.0000000000295,208.660254037861),(100.000000000051,210),(105.000000000059,208.66025403781),(108.660254037887,204.999999999926),(110,199.999999999898),(108.660254037785,194.999999999897),(104.999999999882,191.339745962088),(99.9999999998469,190),(94.9999999998527,191.339745962241),(91.3397459620621,195.000000000162))\n | ((0,0),(13.3974596216412,50.0000000001473),(50.0000000002946,86.602540378614),(100.00000000051,100),(150.000000000589,86.6025403781036),(186.602540378869,49.9999999992634),(200,-1.02068239345139e-09),(186.602540377848,-50.0000000010313),(149.999999998821,-86.6025403791243),(99.999999998469,-100),(49.9999999985268,-86.6025403775933),(13.3974596206205,-49.9999999983795))\n--- 445,451 ----\n -----+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n | 
((-3,0),(-2.59807621135076,1.50000000000442),(-1.49999999999116,2.59807621135842),(1.53102359017709e-11,3),(1.50000000001768,2.59807621134311),(2.59807621136607,1.4999999999779),(3,-3.06204718035418e-11),(2.59807621133545,-1.50000000003094),(1.49999999996464,-2.59807621137373),(-4.59307077053127e-11,-3),(-1.5000000000442,-2.5980762113278),(-2.59807621138138,-1.49999999995138))\n | ((-99,2),(-85.6025403783588,52.0000000001473),(-48.9999999997054,88.602540378614),(1.00000000051034,102),(51.0000000005893,88.6025403781036),(87.6025403788692,51.9999999992634),(101,1.99999999897932),(87.6025403778485,-48.0000000010313),(50.9999999988214,-84.6025403791243),(0.999999998468976,-98),(-49.0000000014732,-84.6025403775933),(-85.6025403793795,-47.9999999983795))\n! | ((-4,3),(-3.33012701891794,5.50000000000737),(-1.49999999998527,7.3301270189307),(1.00000000002552,8),(3.50000000002946,7.33012701890518),(5.33012701894346,5.49999999996317),(6,2.99999999994897),(5.33012701889242,0.499999999948437),(3.49999999994107,-1.33012701895622),(0.999999999923449,-2),(-1.50000000007366,-1.33012701887967),(-3.33012701896897,0.500000000081028))\n | ((-2,2),(-1.59807621135076,3.50000000000442),(-0.499999999991161,4.59807621135842),(1.00000000001531,5),(2.50000000001768,4.59807621134311),(3.59807621136607,3.4999999999779),(4,1.99999999996938),(3.59807621133545,0.499999999969062),(2.49999999996464,-0.598076211373729),(0.999999999954069,-1),(-0.500000000044197,-0.598076211327799),(-1.59807621138138,0.500000000048616))\n | ((90,200),(91.3397459621641,205.000000000015),(95.0000000000295,208.660254037861),(100.000000000051,210),(105.000000000059,208.66025403781),(108.660254037887,204.999999999926),(110,199.999999999898),(108.660254037785,194.999999999897),(104.999999999882,191.339745962088),(99.9999999998469,190),(94.9999999998527,191.339745962241),(91.3397459620621,195.000000000162))\n | ((0,0),(13.3974596216412,50.0000000001473),(50.0000000002946,86.602540378614),(100.00000000051,100),(150.000000000589,86.6025403781036),(186.602540378869,49.9999999992634),(200,-1.02068239345139e-09),(186.602540377848,-50.0000000010313),(149.999999998821,-86.6025403791243),(99.999999998469,-100),(49.9999999985268,-86.6025403775933),(13.3974596206205,-49.9999999983795))\n\n======================================================================\n\n", "msg_date": "Fri, 23 Mar 2001 17:45:28 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms " }, { "msg_contents": "Vince Vielhaber <[email protected]> writes:\n\n> On 22 Mar 2001, Trond Eivind [iso-8859-1] Glomsr�d wrote:\n> \n> > [email protected] (Trond Eivind Glomsr�d) writes:\n> >\n> > > Thomas Lockhart <[email protected]> writes:\n> > >\n> > > > If a platform you are running on is not listed, make sure it gets\n> > > > included!\n> > >\n> > > Red Hat Linux, Wolverine Beta (and some updates) - glibc 2.2.2,\n> > > 2.4.2ish kernel (read: lots of fixes), gcc 2.96RH: All 76 tests passed\n> > > with 7.1beta6 (parallel_schedule).\n> >\n> > Forgot to mention: This is x86.\n> \n> Forget to enter it into the regresstest database?\n> \n> http://www.postgresql.org/~vev/regress/\n\nI was planning on waiting with that until I test it on an official release.\n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n", "msg_date": "23 Mar 2001 10:35:34 -0500", "msg_from": "[email protected] (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: Call for platforms" }, { "msg_contents": "On 23 Mar 2001, Trond Eivind 
[iso-8859-1] Glomsr�d wrote:\n\n> Vince Vielhaber <[email protected]> writes:\n>\n> > On 22 Mar 2001, Trond Eivind [iso-8859-1] Glomsr�d wrote:\n> >\n> > > [email protected] (Trond Eivind Glomsr�d) writes:\n> > >\n> > > > Thomas Lockhart <[email protected]> writes:\n> > > >\n> > > > > If a platform you are running on is not listed, make sure it gets\n> > > > > included!\n> > > >\n> > > > Red Hat Linux, Wolverine Beta (and some updates) - glibc 2.2.2,\n> > > > 2.4.2ish kernel (read: lots of fixes), gcc 2.96RH: All 76 tests passed\n> > > > with 7.1beta6 (parallel_schedule).\n> > >\n> > > Forgot to mention: This is x86.\n> >\n> > Forget to enter it into the regresstest database?\n> >\n> > http://www.postgresql.org/~vev/regress/\n>\n> I was planning on waiting with that until I test it on an official release.\n\nI figured that, it was my just smartass way of reminding EVERYONE to put\ntheir data in the database. I saw a few reports of things working yet\nthere was nothing in the database saying so, it was only posted here.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Fri, 23 Mar 2001 10:48:24 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms" }, { "msg_contents": "> OpenBSD 2.8 x86 7.1 2001-03-22, Brandon. Palmer\n\nOBSD checks out for sparc and i386. We did need to make a change to the\nresultmap file to make the regression tests clean for the sparc. I have\nattached the diff.\n\n\n\n\n\nAlso, on the sparc that i'm using (sparc4/110), make check takes 1950\nseconds. Most of the time is spent in this test:\n\nparallel group (13 tests): float4 int2 int4 text name varchar oid boolean\nchar float8 int8 bit numeric\n\nThere is a long pause between 'bit' and 'numeric'. Same with on i386. Is\nthis a problem that is local to obsd? Is it an expected delay? It works,\nbut seems like a real perf problem.\n\n\n\n\n\n\n\n\nAnyway:\n\n++++++++++++++++\n\nSparc 4/110, 64M, SCSI disk, OBSD 2.8 virgin\n\n======================\n All 76 tests passed.\n======================\n\n 1941.34s real 130.23s user 93.77s system\n\n$ uname -a\nOpenBSD azreal 2.8 GENERIC#0 sparc\n\n++++++++++++++++\n\nP2 300, 96M, IDE Disk, OBSD 2.8 virgin\n\n======================\n All 76 tests passed.\n======================\n\n 262.67s real 21.84s user 13.56s system\n\n$ uname -a\nOpenBSD orion 2.8 GENERIC#0 i386\n\n++++++++++++++++\n\nI can't get tcl/tk working to same my life, but that's not too important\nfor a release, just a config pain in the rear for obsd I guess.\n\n- brandon\n\nb. palmer, [email protected]\npgp: www.crimelabs.net/bpalmer.pgp5", "msg_date": "Fri, 23 Mar 2001 12:38:16 -0500 (EST)", "msg_from": "bpalmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "bpalmer <[email protected]> writes:\n> seconds. Most of the time is spent in this test:\n\n> parallel group (13 tests): float4 int2 int4 text name varchar oid boolean\n> char float8 int8 bit numeric\n\n> There is a long pause between 'bit' and 'numeric'. Same with on i386. Is\n> this a problem that is local to obsd? 
Is it an expected delay?\n\nYes, that's the expected behavior. The 'numeric' test runs considerably\nlonger than most of the others. (It used to be even slower, but I made\nJan trim it down ;-))\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Mar 2001 15:08:27 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms " }, { "msg_contents": "Hi all,\n\nI've built 7.1beta6 on a number of different HP-UX platforms (11.00 32\nbit, 11.00 64 bit, 11i 32 bit).\n\n1. On all these platforms 'make check' hung. Since that's not\n critical to whether PostgreSQL works or not I worked around it by\n using a different shell:\n\n gmake SHELL=$HOME/bin/pdksh check\n\n I'll look at this next week. If someone can confirm that\n /usr/bin/sh works for make check on HP-UX 10.20 that would be\n useful.\n\n2. I saw two different sets of output for geometry.out. These seem to\n relate to the processor level:\n\n (a) on PA-RISC 1.1 some of the zero values are negative\n\n (b) on PA-RISC 2.0 the negative zeros were produced as on PA-RISC\n 1.1, plus about three results varied in the least significant\n digit.\n\n The PA-RISC 2.0 values were identical on two platforms:\n\n (i) PA8000 running 32 bit 11i\n (ii) PA8500 running 64 bit 11.00\n\n If these results are OK (I assumed they were for the purposes\n of Vince's database, so I hope they are :-) then perhaps the\n attached outputs can be added to the expected results and\n resultmap updated for HP-UX 11?\n\nRegards,\n\nGiles", "msg_date": "Sat, 24 Mar 2001 14:45:54 +1100", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Call for platforms (HP-UX)" }, { "msg_contents": "Giles Lean <[email protected]> writes:\n> I'll look at this next week. If someone can confirm that\n> /usr/bin/sh works for make check on HP-UX 10.20 that would be\n> useful.\n\nIt does not work. See FAQ_HPUX.\n\n> 2. I saw two different sets of output for geometry.out. These seem to\n> relate to the processor level:\n\nI think it depends more on what software you use. The existing HPUX\nresultmap (geometry-positive-zeros) works on my usual platform (C180,\nPA8000 chip I think) when using gcc. Compile with cc and you get one\ndifferent lowest-order digit in two lines, IIRC. I have not tried it\nlately on a 1.1 chip.\n\n> (a) on PA-RISC 1.1 some of the zero values are negative\n\nHmm, so does it match any of the existing geometry files? That would\nsuggest that HPUX 11 has started adhering more closely to the IEEE rules\nabout negative zeroes ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Mar 2001 23:14:37 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms (HP-UX) " }, { "msg_contents": "\n> > I'll look at this next week. If someone can confirm that\n> > /usr/bin/sh works for make check on HP-UX 10.20 that would be\n> > useful.\n> \n> It does not work. See FAQ_HPUX.\n\nI'm confused: I don't see anything about shells or make check hanging\nin doc/FAQ_HPUX. There is clear instruction to use GNU make, which I\nam doing.\n\nI'll look into the problem anyway.\n\n> > (a) on PA-RISC 1.1 some of the zero values are negative\n> \n> Hmm, so does it match any of the existing geometry files?\n\nNo ... 
I was hoping for that, but not.\n\nRegards,\n\nGiles\n\n", "msg_date": "Sat, 24 Mar 2001 15:32:29 +1100", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms (HP-UX) " }, { "msg_contents": "Giles Lean <[email protected]> writes:\n>> It does not work. See FAQ_HPUX.\n\n> I'm confused: I don't see anything about shells or make check hanging\n> in doc/FAQ_HPUX. There is clear instruction to use GNU make, which I\n> am doing.\n\nHm, I thought I had updated that before beta6. What it has now is\n\nThe parallel regression test script (gmake check) is known to lock up\nwhen run under HP's default Bourne shell, at least in HPUX 10.20. This\nappears to be a shell bug, not the fault of the script. If you see that\nthe tests have stopped making progress and only a shell process is\nconsuming CPU, kill the shell and start over with\n\tgmake SHELL=/bin/ksh check\nto use ksh instead.\n\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Mar 2001 23:36:06 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms (HP-UX) " }, { "msg_contents": "\n> Hm, I thought I had updated that before beta6. What it has now is\n\n<grin>\n\n> The parallel regression test script (gmake check) is known to lock up\n> when run under HP's default Bourne shell, at least in HPUX 10.20. This\n> appears to be a shell bug, not the fault of the script. If you see that\n> the tests have stopped making progress and only a shell process is\n> consuming CPU, kill the shell and start over with\n> \tgmake SHELL=/bin/ksh check\n> to use ksh instead.\n\nInterestingly, ksh didn't work for me either, but didn't try it on all\nthe platforms I built on. I'll make a note to check ksh when I'm\ninvestigating the problem next week though, and let you know what I\nfind.\n\nRegards,\n\nGiles\n\n", "msg_date": "Sat, 24 Mar 2001 15:41:55 +1100", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms (HP-UX) " }, { "msg_contents": "Giles Lean <[email protected]> writes:\n> 2. I saw two different sets of output for geometry.out. These seem to\n> relate to the processor level:\n\nOkay, here are my results:\n\nBox 1: C180 (2.0 PA8000), HPUX 10.20\n\nCompile with gcc: all tests pass\nCompile with cc: two lines of diffs in geometry (attached)\n\nBox 2: 715/75 (1.1 PA7100LC), HPUX 10.20\n\nCompile with gcc: all tests pass\nCompile with cc: all tests pass\n\n\nBox 1 is more up-to-date on HP patches than box 2, so I wouldn't\nnecessarily attribute the difference to the processor.\n\n\t\t\tregards, tom lane\n\n*** ./expected/geometry-positive-zeros.out\tMon Sep 11 23:21:06 2000\n--- ./results/geometry.out\tSat Mar 24 02:45:35 2001\n***************\n*** 127,133 ****\n | (-5,-12) | [(10,-10),(-3,-4)] | (-1.60487804878049,-4.64390243902439)\n | (10,10) | [(10,-10),(-3,-4)] | (2.39024390243902,-6.48780487804878)\n | (0,0) | [(-1000000,200),(300000,-40)] | (0.0028402365895872,15.384614860264)\n! 
| (-10,0) | [(-1000000,200),(300000,-40)] | (-9.99715942258202,15.3864610140472)\n | (-3,4) | [(-1000000,200),(300000,-40)] | (-2.99789812267519,15.3851688427303)\n | (5.1,34.5) | [(-1000000,200),(300000,-40)] | (5.09647083221496,15.3836744976925)\n | (-5,-12) | [(-1000000,200),(300000,-40)] | (-4.99494420845634,15.3855375281616)\n--- 127,133 ----\n | (-5,-12) | [(10,-10),(-3,-4)] | (-1.60487804878049,-4.64390243902439)\n | (10,10) | [(10,-10),(-3,-4)] | (2.39024390243902,-6.48780487804878)\n | (0,0) | [(-1000000,200),(300000,-40)] | (0.0028402365895872,15.384614860264)\n! | (-10,0) | [(-1000000,200),(300000,-40)] | (-9.99715942258202,15.3864610140473)\n | (-3,4) | [(-1000000,200),(300000,-40)] | (-2.99789812267519,15.3851688427303)\n | (5.1,34.5) | [(-1000000,200),(300000,-40)] | (5.09647083221496,15.3836744976925)\n | (-5,-12) | [(-1000000,200),(300000,-40)] | (-4.99494420845634,15.3855375281616)\n***************\n*** 445,451 ****\n -----+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n | ((-3,0),(-2.59807621135076,1.50000000000442),(-1.49999999999116,2.59807621135842),(1.53102359017709e-11,3),(1.50000000001768,2.59807621134311),(2.59807621136607,1.4999999999779),(3,-3.06204718035418e-11),(2.59807621133545,-1.50000000003094),(1.49999999996464,-2.59807621137373),(-4.59307077053127e-11,-3),(-1.5000000000442,-2.5980762113278),(-2.59807621138138,-1.49999999995138))\n | ((-99,2),(-85.6025403783588,52.0000000001473),(-48.9999999997054,88.602540378614),(1.00000000051034,102),(51.0000000005893,88.6025403781036),(87.6025403788692,51.9999999992634),(101,1.99999999897932),(87.6025403778485,-48.0000000010313),(50.9999999988214,-84.6025403791243),(0.999999998468976,-98),(-49.0000000014732,-84.6025403775933),(-85.6025403793795,-47.9999999983795))\n! 
| ((-4,3),(-3.33012701891794,5.50000000000737),(-1.49999999998527,7.3301270189307),(1.00000000002552,8),(3.50000000002946,7.33012701890518),(5.33012701894346,5.49999999996317),(6,2.99999999994897),(5.33012701889242,0.499999999948437),(3.49999999994107,-1.33012701895622),(0.999999999923449,-2),(-1.50000000007366,-1.33012701887967),(-3.33012701896897,0.500000000081028))\n | ((-2,2),(-1.59807621135076,3.50000000000442),(-0.499999999991161,4.59807621135842),(1.00000000001531,5),(2.50000000001768,4.59807621134311),(3.59807621136607,3.4999999999779),(4,1.99999999996938),(3.59807621133545,0.499999999969062),(2.49999999996464,-0.598076211373729),(0.999999999954069,-1),(-0.500000000044197,-0.598076211327799),(-1.59807621138138,0.500000000048616))\n | ((90,200),(91.3397459621641,205.000000000015),(95.0000000000295,208.660254037861),(100.000000000051,210),(105.000000000059,208.66025403781),(108.660254037887,204.999999999926),(110,199.999999999898),(108.660254037785,194.999999999897),(104.999999999882,191.339745962088),(99.9999999998469,190),(94.9999999998527,191.339745962241),(91.3397459620621,195.000000000162))\n | ((0,0),(13.3974596216412,50.0000000001473),(50.0000000002946,86.602540378614),(100.00000000051,100),(150.000000000589,86.6025403781036),(186.602540378869,49.9999999992634),(200,-1.02068239345139e-09),(186.602540377848,-50.0000000010313),(149.999999998821,-86.6025403791243),(99.999999998469,-100),(49.9999999985268,-86.6025403775933),(13.3974596206205,-49.9999999983795))\n--- 445,451 ----\n -----+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n | ((-3,0),(-2.59807621135076,1.50000000000442),(-1.49999999999116,2.59807621135842),(1.53102359017709e-11,3),(1.50000000001768,2.59807621134311),(2.59807621136607,1.4999999999779),(3,-3.06204718035418e-11),(2.59807621133545,-1.50000000003094),(1.49999999996464,-2.59807621137373),(-4.59307077053127e-11,-3),(-1.5000000000442,-2.5980762113278),(-2.59807621138138,-1.49999999995138))\n | ((-99,2),(-85.6025403783588,52.0000000001473),(-48.9999999997054,88.602540378614),(1.00000000051034,102),(51.0000000005893,88.6025403781036),(87.6025403788692,51.9999999992634),(101,1.99999999897932),(87.6025403778485,-48.0000000010313),(50.9999999988214,-84.6025403791243),(0.999999998468976,-98),(-49.0000000014732,-84.6025403775933),(-85.6025403793795,-47.9999999983795))\n! 
| ((-4,3),(-3.33012701891794,5.50000000000737),(-1.49999999998527,7.3301270189307),(1.00000000002552,8),(3.50000000002946,7.33012701890518),(5.33012701894346,5.49999999996317),(6,2.99999999994897),(5.33012701889242,0.499999999948437),(3.49999999994107,-1.33012701895622),(0.999999999923449,-2),(-1.50000000007366,-1.33012701887966),(-3.33012701896897,0.500000000081027))\n | ((-2,2),(-1.59807621135076,3.50000000000442),(-0.499999999991161,4.59807621135842),(1.00000000001531,5),(2.50000000001768,4.59807621134311),(3.59807621136607,3.4999999999779),(4,1.99999999996938),(3.59807621133545,0.499999999969062),(2.49999999996464,-0.598076211373729),(0.999999999954069,-1),(-0.500000000044197,-0.598076211327799),(-1.59807621138138,0.500000000048616))\n | ((90,200),(91.3397459621641,205.000000000015),(95.0000000000295,208.660254037861),(100.000000000051,210),(105.000000000059,208.66025403781),(108.660254037887,204.999999999926),(110,199.999999999898),(108.660254037785,194.999999999897),(104.999999999882,191.339745962088),(99.9999999998469,190),(94.9999999998527,191.339745962241),(91.3397459620621,195.000000000162))\n | ((0,0),(13.3974596216412,50.0000000001473),(50.0000000002946,86.602540378614),(100.00000000051,100),(150.000000000589,86.6025403781036),(186.602540378869,49.9999999992634),(200,-1.02068239345139e-09),(186.602540377848,-50.0000000010313),(149.999999998821,-86.6025403791243),(99.999999998469,-100),(49.9999999985268,-86.6025403775933),(13.3974596206205,-49.9999999983795))\n\n======================================================================\n\n", "msg_date": "Sat, 24 Mar 2001 02:59:15 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms (HP-UX) " }, { "msg_contents": "Tom Lane writes:\n\n> The bit test diffs seem to indicate that bit_cmp is messed up. That\n> depends on memcmp. I seem to recall something about memcmp not being\n> 8-bit-clean on SunOS ... does that ring a bell with anyone?\n\nSure enough:\n\n - Macro: AC_FUNC_MEMCMP\n If the `memcmp' function is not available, or does not work on\n 8-bit data (like the one on SunOS 4.1.3), add `memcmp.o' to output\n variable `LIBOBJS'.\n\nWe could try to mangle this into doing the right thing for us.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Sat, 24 Mar 2001 10:38:04 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms " }, { "msg_contents": "Tom Lane writes:\n\n> > and yet another run (and different results):\n>\n> > =================================================\n> > 1 of 76 tests failed, 1 failed test(s) ignored.\n> > =================================================\n>\n> That's just ye olde random \"random\" failure ...\n\nActually, this is one real failed test plus the \"random\" failure.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Sat, 24 Mar 2001 11:18:11 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Tom Lane writes:\n> =================================================\n> 1 of 76 tests failed, 1 failed test(s) ignored.\n> =================================================\n>> \n>> That's just ye olde random \"random\" failure ...\n\n> Actually, this is one real failed test plus the \"random\" failure.\n\n(Checks code...) Hm, you're right. 
May I suggest that this is a rather\nconfusing wording? Perhaps\n\n 1 of 76 tests failed, plus 1 failed test(s) ignored.\n\nwould be less likely to mislead people.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 24 Mar 2001 11:18:57 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Tom Lane writes:\n>> The bit test diffs seem to indicate that bit_cmp is messed up. That\n>> depends on memcmp. I seem to recall something about memcmp not being\n>> 8-bit-clean on SunOS ... does that ring a bell with anyone?\n\n> Sure enough:\n> - Macro: AC_FUNC_MEMCMP\n> If the `memcmp' function is not available, or does not work on\n> 8-bit data (like the one on SunOS 4.1.3), add `memcmp.o' to output\n> variable `LIBOBJS'.\n> We could try to mangle this into doing the right thing for us.\n\nNot sure if it's worth the trouble. That would be an AC_TRY_RUN test,\nwhich you've been trying to move away from, no? It doesn't seem like\nanyone still cares about SunOS 4.1.*, so ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 24 Mar 2001 13:31:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms " }, { "msg_contents": "Hi all.\n\nSuddenly I obtain access to \nULTRIX black 4.3 1 RISC\n\nI don't shure is it supported, but I see /src/include/port/ultrix4.h file\nso my guess is `yes, at least was'. I got last version from CVS and try\nconfigure && gmake\nit results in\n\ngcc -Wall -Wmissing-prototypes -Wmissing-declarations\n-I../../../../src/include -c xlog.c -o xlog.o\nIn file included from xlog.c:36:\n../../../../src/include/storage/s_lock.h:88: warning: type defaults to\n`int' in declaration of `slock_t'\n../../../../src/include/storage/s_lock.h:88: parse error before `*'\n../../../../src/include/storage/s_lock.h:91: warning: type defaults to\n`int' in declaration of `slock_t'\n../../../../src/include/storage/s_lock.h:91: parse error before `*'\ngmake[4]: *** [xlog.o] Error 1\n\ngrep of .h files shows that slock_t usualy defined in\n/src/include/port/*.h, but it is not defined in ultrix4.h\nand I can't find it in system includes.\n\nRegards,\nASK\n\n", "msg_date": "Sun, 25 Mar 2001 18:57:24 +0200 (IST)", "msg_from": "Alexander Klimov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms" }, { "msg_contents": "Alexander Klimov <[email protected]> writes:\n> Suddenly I obtain access to \n> ULTRIX black 4.3 1 RISC\n\nUh ... what kind of processor is that? Offhand I don't see any\nindication that any of the entries in s_lock.h are supposed to work\nfor Ultrix.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 25 Mar 2001 12:24:10 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms " }, { "msg_contents": "Tom Lane <[email protected]> writes:\n\n> Alexander Klimov <[email protected]> writes:\n> > Suddenly I obtain access to \n> > ULTRIX black 4.3 1 RISC\n> \n> Uh ... what kind of processor is that? Offhand I don't see any\n> indication that any of the entries in s_lock.h are supposed to work\n> for Ultrix.\n\nThe RISC/Ultrix machines ran (older) MIPS chips. 
Ultrix also ran\n(amazingly slowly) on the VAX architecture.\n\n[Fond memories of my first sysadmin job...]\n\n-Doug\n", "msg_date": "25 Mar 2001 13:25:00 -0500", "msg_from": "Doug McNaught <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "> Alexander Klimov <[email protected]> writes:\n>> Suddenly I obtain access to \n>> ULTRIX black 4.3 1 RISC\n\n> Uh ... what kind of processor is that? Offhand I don't see any\n> indication that any of the entries in s_lock.h are supposed to work\n> for Ultrix.\n\nOn closer look I notice that the putative support for machines without\na TEST_AND_SET implementation got broken by careless rearrangement of\nthe declarations in s_lock.h :-(. I have repaired this, and if you\nupdate from CVS you should find that things compile.\n\nHowever, you don't really want to run without TEST_AND_SET support;\nit'll be dog-slow. Furthermore, the support for machines without\nTEST_AND_SET is fairly new. I doubt it existed when the Ultrix port\nwas last reported to work. So the question above still stands.\n\nI suspect that some one of the implementations in s_lock.h was intended\nto be usable on Ultrix, and we've somehow dropped the declarations\nneeded to make it go. You might want to pull down an old tarball (6.3\nor before) and look at how it compiles the s_lock support on Ultrix.\n\nPlease send in a patch if you find that one is necessary for s_lock\nsupport on Ultrix.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 25 Mar 2001 13:26:47 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms " }, { "msg_contents": "\"Mark Knox\" <[email protected]> writes:\n>> Linux 2.2.x armv4l 7.0 2000-04-17, Mark Knox\n\n> Compiled and tested 7.1beta6 tonight. All the regression tests passed \n> except two - the usual minor differences in geometry (rounding on the \n> final digit) and this rather troubling output from type_sanity.\n\nMost bizarre --- and definitely indicative of trouble. Would you send\nalong the output of this query in that database:\n\nselect p1.oid,attrelid,relname,attname,attlen,attalign,attbyval\nfrom pg_attribute p1, pg_class p2\nwhere atttypid = 27 and p2.oid = attrelid\norder by 1;\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 25 Mar 2001 15:02:42 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms " }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\n\nOn 25 Mar 2001, at 15:02, Tom Lane wrote:\n\n> > (rounding on the final digit) and this rather troubling output from\n> > type_sanity.\n> \n> Most bizarre --- and definitely indicative of trouble. Would you send\n> along the output of this query in that database:\n> \n> select p1.oid,attrelid,relname,attname,attlen,attalign,attbyval\n> from pg_attribute p1, pg_class p2\n> where atttypid = 27 and p2.oid = attrelid\n> order by 1;\n\nI was afraid of that ;) Here's the output:\n\n[PostgreSQL 7.1beta6 on armv4l-unknown-linux-gnuoldld, compiled by \nGCC 2.95.1]\n\n type \\? 
for help on slash commands\n type \\q to quit\n type \\g or terminate with semicolon to execute query\n You are currently connected to the database: postgres\n\npostgres=> select \np1.oid,attrelid,relname,attname,attlen,attalign,attbyval from \npg_attribute p1, pg_class p2 where atttypid = 27 and p2.oid = \nattrelid order by 1;\n oid|attrelid|relname |attname|attlen|attalign|attbyval\n- -----+--------+--------------+-------+------+--------+--------\n16401| 1247|pg_type |ctid | 6|i |f \n16415| 1262|pg_database |ctid | 6|i |f \n16439| 1255|pg_proc |ctid | 6|i |f \n16454| 1260|pg_shadow |ctid | 6|i |f \n16464| 1261|pg_group |ctid | 6|i |f \n16486| 1249|pg_attribute |ctid | 6|i |f \n16515| 1259|pg_class |ctid | 6|i |f \n16526| 1215|pg_attrdef |ctid | 6|i |f \n16537| 1216|pg_relcheck |ctid | 6|i |f \n16557| 1219|pg_trigger |ctid | 6|i |f \n16572| 16567|pg_inherits |ctid | 8|i |f \n16593| 16579|pg_index |ctid | 8|i |f \n16610| 16600|pg_statistic |ctid | 8|i |f \n16635| 16617|pg_operator |ctid | 8|i |f \n16646| 16642|pg_opclass |ctid | 8|i |f \n16678| 16653|pg_am |ctid | 8|i |f \n16691| 16685|pg_amop |ctid | 8|i |f \n16873| 16867|pg_amproc |ctid | 8|i |f \n16941| 16934|pg_language |ctid | 8|i |f \n16953| 16948|pg_largeobject|ctid | 8|i |f \n16970| 16960|pg_aggregate |ctid | 8|i |f \n17038| 17033|pg_ipl |ctid | 8|i |f \n17051| 17045|pg_inheritproc|ctid | 8|i |f \n17067| 17058|pg_rewrite |ctid | 8|i |f \n17079| 17074|pg_listener |ctid | 8|i |f \n17090| 17086|pg_description|ctid | 8|i |f \n17206| 17201|pg_toast_1215 |ctid | 8|i |f \n17221| 17216|pg_toast_17086|ctid | 8|i |f \n17236| 17231|pg_toast_1255 |ctid | 8|i |f \n17251| 17246|pg_toast_1216 |ctid | 8|i |f \n17266| 17261|pg_toast_17058|ctid | 8|i |f \n17281| 17276|pg_toast_16600|ctid | 8|i |f \n17301| 17291|pg_user |ctid | 8|i |f \n17314| 17309|pg_rules |ctid | 8|i |f \n17327| 17322|pg_views |ctid | 8|i |f \n17342| 17335|pg_tables |ctid | 8|i |f \n17355| 17350|pg_indexes |ctid | 8|i |f \n(37 rows)\n\n\n-----BEGIN PGP SIGNATURE-----\nVersion: N/A\n\niQCVAwUBOr5XA/+IdJuhyV9xAQGfOgP6ApV6ia44bxCo/KyIE20knn/1FTysECW9\nRq9mLDhpYKHYtTWz1cgGtxzCEiRAMN+ZuO7u5nydy6TB8dp8iCd9eLAro4GAzqYM\naD9V9S3nK3YwV9RaKBWJqHXNPI5enp19YS74GxN0f9VIw/4PXlYVm2tQJLVWNGs+\nlFfQnraYEZQ=\n=Cj2l\n-----END PGP SIGNATURE-----\n", "msg_date": "Sun, 25 Mar 2001 15:37:23 -0500", "msg_from": "\"Mark Knox\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms " }, { "msg_contents": "Does that database have any user-created relations in it, or is it\njust a virgin database? It seems that the wrong attlen is being\ncomputed for ctid fields during bootstrap, but the regression test\noutput (if it was complete) implies that the value inserted for\nuser-created fields was OK. This doesn't make a lot of sense since\nit's the same code...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 25 Mar 2001 16:07:13 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms " }, { "msg_contents": "On Wed, 21 Mar 2001, Thomas Lockhart wrote:\n\n> OK, here is my current platform list taken from the -hackers list and\n> from Vince's web page. 
I'm sure I've missed at least a few reports, but\n> please confirm that platforms are actually running and passing\n> regression tests with recent betas or the latest release candidate.\n\n\tUpdates...\n\n> Linux 2.2.x Alpha 7.1 2001-01-23, Ryan Kirkpatrick\n\nTested RC1 with 2.2.17 on my XLT366 Alpha, all regression tests passed.\n\n> Linux 2.2.15 Sparc 7.1 2001-01-30, Ryan Kirkpatrick\n\nTested RC1 with 2.2.18 on my Sparc 20 (SM51), all regression tests passed. \n\n\tBoth have been entered into the regression database on the website\nas well. TTYL.\n\n---------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n---------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | http://www.rkirkpat.net/ |\n---------------------------------------------------------------------------\n\n", "msg_date": "Sun, 25 Mar 2001 20:49:56 -0700 (MST)", "msg_from": "Ryan Kirkpatrick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms" }, { "msg_contents": "Two more for the list (not a single regression test failing, which is a\nfirst on Alpha!)\n\nTru64 4.0G Alpha cc-v6.3-129 7.1 2001-03-28 \nTru64 4.0G Alpha gcc-2.95.1 7.1 2001-03-28\n\nI updated the regression test database as well.\n\nAdriaan\n", "msg_date": "Mon, 26 Mar 2001 09:24:36 +0300", "msg_from": "Adriaan Joubert <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms" }, { "msg_contents": "> >> Suddenly I obtain access to\n> >> ULTRIX black 4.3 1 RISC\n> > Uh ... what kind of processor is that? Offhand I don't see any\n> > indication that any of the entries in s_lock.h are supposed to work\n> > for Ultrix.\n\nAs mentioned earlier, Ultrix on RISC means that it is a MIPS processor.\nDEC implemented OSF-1 for their Alpha processors.\n\n> I suspect that some one of the implementations in s_lock.h was intended\n> to be usable on Ultrix, and we've somehow dropped the declarations\n> needed to make it go. You might want to pull down an old tarball (6.3\n> or before) and look at how it compiles the s_lock support on Ultrix.\n\nAny hints for Alexander on how to do it *if* it is a MIPS processor?\n\n - Thomas\n", "msg_date": "Mon, 26 Mar 2001 16:18:33 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "The list of unreported or \"in progress\" platforms has gotten much\nshorter. If anyone can help on the remaining problems, we'll be able to\nmove closer to release status, which is A Good Thing (tm) ;)\n\nbtw, if we get most of these qualified, we will be on around 30\nplatforms!!!!\n\n - Thomas\n\nUnreported or problem platforms:\n\nLinux 2.0.x MIPS 7.0 2000-04-13, Tatsuo Ishii\n\nTatsuo's machine has died. Anyone else with a Cobalt?\n\nmklinux PPC750 7.0 2000-04-13, Tatsuo Ishii\n\nAny luck with RC1?\n\nNetBSD m68k 7.0 2000-04-10, Henry B. Hotz\nNetBSD Sparc 7.0 2000-04-13, Tom I. Helbekkmo\n\nWe need some NetBSD folks to speak up! Also, there are several flavors\nof OpenBSD which are not represented in our list, but which probably are\nalready running PostgreSQL. Anyone?\n\nQNX 4.25 x86 7.0 2000-04-01, Dr. Andreas Kardos\n\nDoes QNX get demoted to the \"unsupported list\"? 
It is known to have\nproblems with 7.1, right?\n\nSolaris x86 7.0 2000-04-12, Marc Fournier\n\nscrappy, did you work through the tests yet?\n\nUltrix MIPS 7.1 2001-??-??, Alexander Klimov\n\nAny possibilities here?\n\n\nAnd here are the up-to-date platforms; thanks for the reports:\n\nAIX 4.3.3 RS6000 7.1 2001-03-21, Gilles Darold\nBeOS 5.0.3 x86 7.1 2000-12-18, Cyril Velter\nBSDI 4.01 x86 7.1 2001-03-19, Bruce Momjian\nCompaq Tru64 4.0g Alpha 7.1 2001-03-19, Brent Verner\nFreeBSD 4.3 x86 7.1 2001-03-19, Vince Vielhaber\nHPUX PA-RISC 7.1 2001-03-19, 10.20 Tom Lane, 11.00 Giles Lean\nIRIX 6.5.11 MIPS 7.1 2001-03-22, Robert Bruccoleri\nLinux 2.2.x Alpha 7.1 2001-01-23, Ryan Kirkpatrick\nLinux 2.2.x armv4l 7.1 2001-03-22, Mark Knox\nLinux 2.2.18 PPC750 7.1 2001-03-19, Tom Lane\nLinux 2.2.x S/390 7.1 2000-11-17, Neale Ferguson\nLinux 2.2.15 Sparc 7.1 2001-01-30, Ryan Kirkpatrick\nLinux 2.2.16 x86 7.1 2001-03-19, Thomas Lockhart\nMacOS X Darwin PPC 7.1 2000-12-11, Peter Bierman\nNetBSD 1.5 alpha 7.1 2001-03-22, Giles Lean\nNetBSD 1.5E arm32 7.1 2001-03-21, Patrick Welche\nNetBSD 1.5S x86 7.1 2001-03-21, Patrick Welche\nOpenBSD 2.8 x86 7.1 2001-03-22, Brandon Palmer\nSCO OpenServer 5 x86 7.1 2001-03-13, Billy Allie\nSCO UnixWare 7.1.1 x86 7.1 2001-03-19, Larry Rosenman\nSolaris 2.7 Sparc 7.1 2001-03-22, Marc Fournier\nSunOS 4.1.4 Sparc 7.1 2001-03-23, Tatsuo Ishii\nWindows/Win32 x86 7.1 2001-03-26, Magnus Hagander (clients only)\nWinNT/Cygwin x86 7.1 2001-03-16, Jason Tishler\n", "msg_date": "Mon, 26 Mar 2001 17:14:17 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms" }, { "msg_contents": "Thomas Lockhart writes:\n\n> SCO OpenServer 5 x86 7.1 2001-03-13, Billy Allie\n\nWhere did you see this? I don't find it in the archives or in Vince's\ndatabase.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Mon, 26 Mar 2001 20:01:47 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "> > SCO OpenServer 5 x86 7.1 2001-03-13, Billy Allie\n> Where did you see this? I don't find it in the archives or in Vince's\n> database.\n\nIn FAQ_SCO. I was looking to try to figure out what the differences were\nbetween the SCO products :)\n\n - Thomas\n", "msg_date": "Mon, 26 Mar 2001 18:05:55 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "\nSince the SCO UDK works on both UnixWare and OpenServer, I think we are \npretty safe. Also, there was a post to -HACKERS about the accept bug and \nwe changed the workaround to include OSR5. \n\nI'd leave it until disproved. I don't have a OSR5 installation to check \nit with, however. \n\nLER\n\n>>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n\nOn 3/26/01, 12:05:55 PM, Thomas Lockhart <[email protected]> \nwrote regarding Re: [HACKERS] Re: Call for platforms:\n\n\n> > > SCO OpenServer 5 x86 7.1 2001-03-13, Billy Allie\n> > Where did you see this? I don't find it in the archives or in Vince's\n> > database.\n\n> In FAQ_SCO. 
I was looking to try to figure out what the differences were\n> between the SCO products :)\n\n> - Thomas\n\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n", "msg_date": "Mon, 26 Mar 2001 18:09:05 GMT", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n> Linux 2.2.18 PPC750 7.1 2001-03-19, Tom Lane\n\n\"PPC750\"? What's that? \"PPC G3\" might be more likely to mean something\nto onlookers ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Mar 2001 13:22:56 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms " }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n> As mentioned earlier, Ultrix on RISC means that it is a MIPS processor.\n\n>> I suspect that some one of the implementations in s_lock.h was intended\n>> to be usable on Ultrix, and we've somehow dropped the declarations\n>> needed to make it go. You might want to pull down an old tarball (6.3\n>> or before) and look at how it compiles the s_lock support on Ultrix.\n\n> Any hints for Alexander on how to do it *if* it is a MIPS processor?\n\nNot sure. The only info I see in s_lock.h is in the \"SGI\" section:\n\n * This stuff may be supplemented in the future with Masato Kataoka's MIPS-II\n * assembly from his NECEWS SVR4 port, but we probably ought to retain this\n * for the R3000 chips out there.\n\nThat name doesn't ring a bell with me --- anyone remember what code is\nbeing referred to here, or where we might find Masato Kataoka?\n\nMIPS-II code may or may not be compatible with Alexander's machine\nanyway, but it's the only starting point I see.\n\nAnyway, the last CVS update to port/ultrix.h that appears to have come\nfrom someone actually using Ultrix was rev 1.2 on 7-May-97, which\npredates the very existence of s_lock.h as a separate file. So I'd\ndefinitely advise Alexander to find a tarball from that era and look at\nhow Ultrix was handled then.\n\nI dunno if we even have tarballs from that far back on-line ... I\nsuppose another possibility is a date-based pull from the CVS server.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Mar 2001 13:30:44 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms " }, { "msg_contents": "Thomas Lockhart writes:\n\n> > > SCO OpenServer 5 x86 7.1 2001-03-13, Billy Allie\n> > Where did you see this? I don't find it in the archives or in Vince's\n> > database.\n>\n> In FAQ_SCO. I was looking to try to figure out what the differences were\n> between the SCO products :)\n\nI wouldn't necessarily count something dated Oct 9, 2000. That was half a\nyear ago, and even two months before beta. And the message doesn't\nactually say it worked.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Mon, 26 Mar 2001 21:03:19 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "> \"PPC750\"? What's that? \"PPC G3\" might be more likely to mean something\n> to onlookers ...\n\nActually \"G3\" means nothing outside of Apple afaict. The 750 series is a\nfollow-on to the 60x series, and there is a 7xxx series also. 
From my\npov, using an accepted label, rather than a marketing (re)label, better\nindicates *what* this actually can run on. I'm not sure that I have it\nlabeled correctly yet, but \"G3\" is not a step in the right direction.\n\nAs we both found, it is difficult to wade through Apple's own docs to\ndecipher which processor is actually built into the system.\n\nShould I put \"Mac G3\" in the comment section?\n\n - Thomas\n", "msg_date": "Mon, 26 Mar 2001 20:36:19 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "> > As mentioned earlier, Ultrix on RISC means that it is a MIPS processor.\n> > Any hints for Alexander on how to do it *if* it is a MIPS processor?\n> Not sure. The only info I see in s_lock.h is in the \"SGI\" section:\n> * This stuff may be supplemented in the future with Masato Kataoka's MIPS-II\n> * assembly from his NECEWS SVR4 port, but we probably ought to retain this\n> * for the R3000 chips out there.\n> That name doesn't ring a bell with me --- anyone remember what code is\n> being referred to here, or where we might find Masato Kataoka?\n\nI'm not remembering either...\n\n> MIPS-II code may or may not be compatible with Alexander's machine\n> anyway, but it's the only starting point I see.\n\nThe Ultrix machine is more likely to be a 2000- or 3000-series (older)\nprocessor.\n\n> Anyway, the last CVS update to port/ultrix.h that appears to have come\n> from someone actually using Ultrix was rev 1.2 on 7-May-97, which\n> predates the very existence of s_lock.h as a separate file. So I'd\n> definitely advise Alexander to find a tarball from that era and look at\n> how Ultrix was handled then.\n> I dunno if we even have tarballs from that far back on-line ... I\n> suppose another possibility is a date-based pull from the CVS server.\n\nWhat can we help with Alex?\n\n - Thomas\n", "msg_date": "Mon, 26 Mar 2001 20:39:10 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "> > > > SCO OpenServer 5 x86 7.1 2001-03-13, Billy Allie\n> > > Where did you see this? I don't find it in the archives or in Vince's\n> > > database.\n> > In FAQ_SCO. I was looking to try to figure out what the differences were\n> > between the SCO products :)\n> I wouldn't necessarily count something dated Oct 9, 2000. That was half a\n> year ago, and even two months before beta. And the message doesn't\n> actually say it worked.\n\n?? I can see I was thrown off by the \"last updated:\" line near the top\nof the file. It actually comes from a CVS commit, not from an explicit\nupdate of the info in the file.\n\nVery bad :(\n\n - Thomas\n", "msg_date": "Mon, 26 Mar 2001 20:43:11 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "> Did you get the message from Trond about Linux 2.4 x86? I can also\n> verify all tests passed on a RedHat Public Beta installation with kernel\n> 2.4.\n\nTrond had indicated that it was a 2.4.2 kernel with lots 'o patches, so\nI figured I'd show the released stuff for now. I mentioned 2.4.2 in the\ncomments section.\n\n - Thomas\n", "msg_date": "Mon, 26 Mar 2001 21:39:38 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "I would..... 
\n\nLER\n\n-- \nLarry Rosenman \n http://www.lerctr.org/~ler/\nPhone: +1 972 414 9812 \n E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749 US\n\n>>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n\nOn 3/26/01, 2:36:19 PM, Thomas Lockhart <[email protected]> wrote \nregarding Re: [HACKERS] Re: Call for platforms:\n\n\n> > \"PPC750\"? What's that? \"PPC G3\" might be more likely to mean something\n> > to onlookers ...\n\n> Actually \"G3\" means nothing outside of Apple afaict. The 750 series is a\n> follow-on to the 60x series, and there is a 7xxx series also. From my\n> pov, using an accepted label, rather than a marketing (re)label, better\n> indicates *what* this actually can run on. I'm not sure that I have it\n> labeled correctly yet, but \"G3\" is not a step in the right direction.\n\n> As we both found, it is difficult to wade through Apple's own docs to\n> decipher which processor is actually built into the system.\n\n> Should I put \"Mac G3\" in the comment section?\n\n> - Thomas\n\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n", "msg_date": "Mon, 26 Mar 2001 21:41:28 GMT", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "Mathijs Brands wrote:\n> \n> Hi\n> \n> Is there a list somewhere listing the platforms 7.1 is being\n> tested on right now? I'd be able to run regression tests on\n> the following platforms, if necessary:\n\nhttp://www.postgresql.org/devel-corner/docs/admin/supported-platforms.html\n\nis close to up to date (I made a few changes this morning).\n\nhttp://www.postgresql.org/~vev/regress/\n\nis an on-line display and data entry page, which I have not managed yet\nto use in my development workflow. So I hope to look at it occasionally,\nbut the -hackers mailing list is where I get most of my info.\n\n> FreeBSD 3.3 (x86)\n> FreeBSD 4.2 (x86)\n\n4.3 (and I think 4.2) is covered already.\n\n> Linux (x86 - 2.2 & 2.4 kernels, Redhat & Debian distro's)\n\nLinux on x86 is pretty well covered, but we welcome additional tests and\ntests on as many variants as possible.\n\n> Solaris 7 (SPARC)\n> Solaris 8 (x86)\n> Solaris 8 (SPARC)\n\nTests on Solaris 8, both Sparc and x86, would be very helpful. No\nreports so far, afaik.\n\n> IRIX 6.2\n> IRIX 6.5\n\nIrix 6.5.11 has been reported recently. But additional tests and testers\nwould be a good thing, since there aren't that many in the -hacker\ncommunity at the moment.\n\n> If I can get the box back to working order, Alpha Linux is\n> also an option. I'd be willing to build binary packages for\n> Solaris and IRIX.\n\nAlpha Linux is covered at the moment. Binary packages would be great.\n\nThanks for the help! 
Regards.\n\n - Thomas\n", "msg_date": "Mon, 26 Mar 2001 21:50:17 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms" }, { "msg_contents": "Karl DeBisschop <[email protected]> writes:\n> In my tests on sparc/7 my compile died at line 3088 of\n> postgresql-7.1beta6/src/interfaces/python/pgmodule.c:\n\n> ./pgmodule.c:3088: parse error before `init_pg'\n\n> That's line 3137 of today's (22Mar) snapshot, which reads:\n\n> /* Initialization function for the module */\n> DL_EXPORT(void)\n> init_pg(void)\n> {\n\nWhat version of Python are you using? In Python 1.5, I find this\nin Python.h:\n\n#ifndef DL_EXPORT\t/* declarations for DLL import/export */\n#define DL_EXPORT(RTYPE) RTYPE\n#endif\n\nwhich should make the above work.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Mar 2001 17:15:12 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms " }, { "msg_contents": "Hi\n\nIs there a list somewhere listing the platforms 7.1 is being\ntested on right now? I'd be able to run regression tests on\nthe following platforms, if necessary:\n\n FreeBSD 3.3 (x86)\n FreeBSD 4.2 (x86)\n Linux (x86 - 2.2 & 2.4 kernels, Redhat & Debian distro's)\n Solaris 7 (SPARC)\n Solaris 8 (x86)\n Solaris 8 (SPARC)\n IRIX 6.2\n IRIX 6.5\n\nIf I can get the box back to working order, Alpha Linux is\nalso an option. I'd be willing to build binary packages for\nSolaris and IRIX.\n\nRegards,\n\nMathijs\n-- \n\"Books constitute capital.\" \n Thomas Jefferson \n", "msg_date": "Tue, 27 Mar 2001 00:26:13 +0200", "msg_from": "Mathijs Brands <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "Thomas Lockhart wrote:\n> Linux 2.2.x Alpha 7.1 2001-01-23, Ryan Kirkpatrick\n> Linux 2.2.x armv4l 7.1 2001-03-22, Mark Knox\n> Linux 2.2.18 PPC750 7.1 2001-03-19, Tom Lane\n> Linux 2.2.x S/390 7.1 2000-11-17, Neale Ferguson\n> Linux 2.2.15 Sparc 7.1 2001-01-30, Ryan Kirkpatrick\n> Linux 2.2.16 x86 7.1 2001-03-19, Thomas Lockhart\n\nDid you get the message from Trond about Linux 2.4 x86? I can also\nverify all tests passed on a RedHat Public Beta installation with kernel\n2.4.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 26 Mar 2001 17:27:04 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "Lamar Owen <[email protected]> writes:\n\n> Thomas Lockhart wrote:\n> > Linux 2.2.x Alpha 7.1 2001-01-23, Ryan Kirkpatrick\n> > Linux 2.2.x armv4l 7.1 2001-03-22, Mark Knox\n> > Linux 2.2.18 PPC750 7.1 2001-03-19, Tom Lane\n> > Linux 2.2.x S/390 7.1 2000-11-17, Neale Ferguson\n> > Linux 2.2.15 Sparc 7.1 2001-01-30, Ryan Kirkpatrick\n> > Linux 2.2.16 x86 7.1 2001-03-19, Thomas Lockhart\n> \n> Did you get the message from Trond about Linux 2.4 x86? I can also\n> verify all tests passed on a RedHat Public Beta installation with kernel\n> 2.4.\n\nI haven't put those in the list yet... I'll wait until we release a\nproduct, and test it on that.\n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n", "msg_date": "26 Mar 2001 17:31:42 -0500", "msg_from": "[email protected] (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "\nOn Tue, 27 Mar 2001, Mathijs Brands wrote:\n\n> Hi\n>\n> Is there a list somewhere listing the platforms 7.1 is being\n> tested on right now? 
I'd be able to run regression tests on\n> the following platforms, if necessary:\n>\n> FreeBSD 3.3 (x86)\n> FreeBSD 4.2 (x86)\n> Linux (x86 - 2.2 & 2.4 kernels, Redhat & Debian distro's)\n> Solaris 7 (SPARC)\n> Solaris 8 (x86)\n> Solaris 8 (SPARC)\n> IRIX 6.2\n> IRIX 6.5\n>\n> If I can get the box back to working order, Alpha Linux is\n> also an option. I'd be willing to build binary packages for\n> Solaris and IRIX.\n\nCheck out the Developer's Corner on the website. It's at the\ntop of the page.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Mon, 26 Mar 2001 17:41:31 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "On Mon, Mar 26, 2001 at 05:41:31PM -0500, Vince Vielhaber allegedly wrote:\n> On Tue, 27 Mar 2001, Mathijs Brands wrote:\n> > Hi\n> >\n> > Is there a list somewhere listing the platforms 7.1 is being\n> > tested on right now? I'd be able to run regression tests on\n> > the following platforms, if necessary:\n> >\n> > FreeBSD 3.3 (x86)\n> > FreeBSD 4.2 (x86)\n> > Linux (x86 - 2.2 & 2.4 kernels, Redhat & Debian distro's)\n> > Solaris 7 (SPARC)\n> > Solaris 8 (x86)\n> > Solaris 8 (SPARC)\n> > IRIX 6.2\n> > IRIX 6.5\n> >\n> > If I can get the box back to working order, Alpha Linux is\n> > also an option. I'd be willing to build binary packages for\n> > Solaris and IRIX.\n> \n> Check out the Developer's Corner on the website. It's at the\n> top of the page.\n> \n> Vince.\n\nI had a look at it, but that surely can't be the complete list.\nThere are only 20 results listed...\n\nMathijs\n-- \n\"Where is human nature so weak as in a bookstore!\" \n Henry Ward Beecher (1813-1887) \n", "msg_date": "Tue, 27 Mar 2001 00:46:15 +0200", "msg_from": "Mathijs Brands <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "Trond Eivind Glomsr�d wrote:\n> Lamar Owen <[email protected]> writes:\n> > Did you get the message from Trond about Linux 2.4 x86? I can also\n> > verify all tests passed on a RedHat Public Beta installation with kernel\n> > 2.4.\n \n> I haven't put those in the list yet... I'll wait until we release a\n> product, and test it on that.\n\nAh. Ok.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 26 Mar 2001 17:48:21 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "At 5:14 PM +0000 3/26/01, Thomas Lockhart wrote:\n>NetBSD m68k 7.0 2000-04-10, Henry B. Hotz\n\nI no longer have a 68k machine that's fast enough to reasonably test \nPG on. I have a IIcx that sometimes serves as a router, but I'm \nusing some second-generation powermac's mostly now. (You still have \nthat Centris in your closet Tom?)\n\nI *did* just get MacOS X this weekend though and if I get it working \non my work G4 maybe I could give it a try there.\n\n\nSignature held pending an ISO 9000 compliant\nsignature design and approval process.\[email protected], or [email protected]\n", "msg_date": "Mon, 26 Mar 2001 15:00:27 -0800", "msg_from": "\"Henry B. 
Hotz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms" }, { "msg_contents": "> >NetBSD m68k 7.0 2000-04-10, Henry B. Hotz\n> I no longer have a 68k machine that's fast enough to reasonably test\n> PG on. I have a IIcx that sometimes serves as a router, but I'm\n> using some second-generation powermac's mostly now. (You still have\n> that Centris in your closet Tom?)\n\nOof. With its giant 250MB hard disk. I'm not likely to ever get that\ngoing ;)\n\n> I *did* just get MacOS X this weekend though and if I get it working\n> on my work G4 maybe I could give it a try there.\n\nIt will require at least the second Darwin beta release to work.\n\n - Thomas\n", "msg_date": "Mon, 26 Mar 2001 23:06:18 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms" }, { "msg_contents": "> The non-test-and-set case should work again in current CVS, and I'd\n> appreciate it if Alexander would verify that. But as far as getting\n> some test-and-set support for MIPS goes, it looks like the only way\n> is for someone to sit down with a MIPS assembly manual. I haven't\n> got one, nor access to a machine to test on...\n\nThat is not already available from the Irix support code?\n\n - Thomas\n", "msg_date": "Mon, 26 Mar 2001 23:08:27 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> Anyway, the last CVS update to port/ultrix.h that appears to have come\n>> from someone actually using Ultrix was rev 1.2 on 7-May-97, which\n>> predates the very existence of s_lock.h as a separate file. So I'd\n>> definitely advise Alexander to find a tarball from that era and look at\n>> how Ultrix was handled then.\n>> I dunno if we even have tarballs from that far back on-line ... I\n>> suppose another possibility is a date-based pull from the CVS server.\n\n> What can we help with Alex?\n\nAfter digging around in the old code I have to retract my opinion that\na test-and-set implementation used to exist for MIPS. The code did\nhave SysV-semaphore-based support for machines without test-and-set,\nand undoubtedly that's what was used on the old Ultrix port. (The\nnon-test-and-set code was broken for awhile, but I'd forgotten that\nit formerly worked.)\n\nThe non-test-and-set case should work again in current CVS, and I'd\nappreciate it if Alexander would verify that. But as far as getting\nsome test-and-set support for MIPS goes, it looks like the only way\nis for someone to sit down with a MIPS assembly manual. I haven't\ngot one, nor access to a machine to test on...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Mar 2001 18:35:59 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms " }, { "msg_contents": "On Mon, Mar 26, 2001 at 06:35:59PM -0500, Tom Lane allegedly wrote:\n> Thomas Lockhart <[email protected]> writes:\n> >> Anyway, the last CVS update to port/ultrix.h that appears to have come\n> >> from someone actually using Ultrix was rev 1.2 on 7-May-97, which\n> >> predates the very existence of s_lock.h as a separate file. So I'd\n> >> definitely advise Alexander to find a tarball from that era and look at\n> >> how Ultrix was handled then.\n> >> I dunno if we even have tarballs from that far back on-line ... 
I\n> >> suppose another possibility is a date-based pull from the CVS server.\n> \n> > What can we help with Alex?\n> \n> After digging around in the old code I have to retract my opinion that\n> a test-and-set implementation used to exist for MIPS. The code did\n> have SysV-semaphore-based support for machines without test-and-set,\n> and undoubtedly that's what was used on the old Ultrix port. (The\n> non-test-and-set code was broken for awhile, but I'd forgotten that\n> it formerly worked.)\n> \n> The non-test-and-set case should work again in current CVS, and I'd\n> appreciate it if Alexander would verify that. But as far as getting\n> some test-and-set support for MIPS goes, it looks like the only way\n> is for someone to sit down with a MIPS assembly manual. I haven't\n> got one, nor access to a machine to test on...\n\nI've got access to an Indigo� (IRIX 6.5, MIPS R10000), another Indigo�\n(IRIX 6.2, MIPS R4400) and a DECStation (NetBSD 1.?, MIPS R3000). The\nDECStation (also known as PMAX) originally ran Ultrix. If anybody has\nsome code that needs testing, I'd be more than willing. However, if\ntest-and-set works anything like I imagine, we really need to test it\non a multi-cpu MIPS machine. A good starting point might be the\ntest-and-set code in the NetBSD and Linux MIPS kernels.\n\nBtw. Everything you never wanted to know about the MIPS architecture:\n http://www.mips.com/Documentation/\n\nCheers,\n\nMathijs\n-- \n\"A book is a fragile creature. It suffers the wear of time,\n it fears rodents, the elements, clumsy hands.\" \n Umberto Eco \n", "msg_date": "Tue, 27 Mar 2001 02:05:34 +0200", "msg_from": "Mathijs Brands <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n> That is not already available from the Irix support code?\n\nWhat we have for IRIX is\n\n#if defined(__sgi)\n/*\n * SGI IRIX 5\n * slock_t is defined as a unsigned long. We use the standard SGI\n * mutex API.\n *\n * The following comment is left for historical reasons, but is probably\n * not a good idea since the mutex ABI is supported.\n *\n * This stuff may be supplemented in the future with Masato Kataoka's MIPS-II\n * assembly from his NECEWS SVR4 port, but we probably ought to retain this\n * for the R3000 chips out there.\n */\n#include \"mutex.h\"\n#define TAS(lock)\t(test_and_set(lock,1))\n#define S_UNLOCK(lock)\t(test_then_and(lock,0))\n#define S_INIT_LOCK(lock)\t(test_then_and(lock,0))\n#define S_LOCK_FREE(lock)\t(test_then_add(lock,0) == 0)\n#endif\t /* __sgi */\n\nDoesn't look to me like it's likely to work on anything but IRIX ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Mar 2001 19:09:38 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms " }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> \"PPC750\"? What's that? \"PPC G3\" might be more likely to mean something\n>> to onlookers ...\n\n> Actually \"G3\" means nothing outside of Apple afaict. The 750 series is a\n> follow-on to the 60x series, and there is a 7xxx series also. From my\n> pov, using an accepted label, rather than a marketing (re)label, better\n> indicates *what* this actually can run on. 
I'm not sure that I have it\n> labeled correctly yet, but \"G3\" is not a step in the right direction.\n\nI found an apparently current \"PowerPC CPU Summary\" at\nhttp://e-www.motorola.com/webapp/sps/technology/tech_tutorial.jsp?catId=M943030621280\n\nIf accurate, the chip in this PowerBook is *not* a 750, since that tops\nout at 400 MHz. Apple offered this model in 400 and 500 MHz speeds,\nwhich makes it either a 7400 or 7410 chip ...\n\n> Should I put \"Mac G3\" in the comment section?\n\nYes, if you won't put it where it should be ;-). I'm still of the\nopinion that \"G3\" will mean something to a vastly larger population\nthan \"750\" or \"7400\" would. The latter are \"marketing relabels\" too\nyou know; Motorola's internal designation would probably be something\nelse entirely.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Mar 2001 19:53:44 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms " }, { "msg_contents": "On Mon, Mar 26, 2001 at 07:09:38PM -0500, Tom Lane allegedly wrote:\n> Thomas Lockhart <[email protected]> writes:\n> > That is not already available from the Irix support code?\n> \n> What we have for IRIX is\n> \n> #if defined(__sgi)\n> /*\n> * SGI IRIX 5\n> * slock_t is defined as a unsigned long. We use the standard SGI\n> * mutex API.\n> *\n> * The following comment is left for historical reasons, but is probably\n> * not a good idea since the mutex ABI is supported.\n> *\n> * This stuff may be supplemented in the future with Masato Kataoka's MIPS-II\n> * assembly from his NECEWS SVR4 port, but we probably ought to retain this\n> * for the R3000 chips out there.\n> */\n> #include \"mutex.h\"\n> #define TAS(lock)\t(test_and_set(lock,1))\n> #define S_UNLOCK(lock)\t(test_then_and(lock,0))\n> #define S_INIT_LOCK(lock)\t(test_then_and(lock,0))\n> #define S_LOCK_FREE(lock)\t(test_then_add(lock,0) == 0)\n> #endif\t /* __sgi */\n> \n> Doesn't look to me like it's likely to work on anything but IRIX ...\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n\nI just tried to compile 7.1RC1 on my IRIX 6.5 box using gcc 2.95.2.\nAppearently gcc chokes on some assembly in src/backend/storage/buffer/s_lock.c\n(tas_dummy on line 235 to be precise).\n\nHere's the offending code:\n\n#if defined(__mips__)\nstatic void\ntas_dummy()\n{\n __asm__ _volatile__(\n \"\\\n.global tas \\n\\\ntas: \\n\\\n .frame $sp, 0, $31 \\n\\\n ll $14, 0($4) \\n\\\n or $15, $14, 1 \\n\\\n sc $15, 0($4) \\n\\\n beq $15, 0, fail\\n\\\n bne $14, 0, fail\\n\\\n li $2, 0 \\n\\\n .livereg 0x2000FF0E,0x00000FFF \\n\\\n j $31 \\n\\\nfail: \\n\\\n li $2, 1 \\n\\\n j $31 \\n\\\n\");\n}\n\nNotice the single underscore before volatile. I just checked the CVS\nversion of s_lock.c and this is still not fixed. Fixing this causes\nas (the SGI version, not GNU as) to choke on the '.global tas' statement.\n\ns_lock.c: At top level:\ns_lock.c:234: warning: `tas_dummy' defined but not used\nas: Error: /var/tmp/ccoUdrOb.s, line 421: undefined assembler operation: .global\n .global tas\ngmake[4]: *** [s_lock.o] Error 1\n\nCommenting out the .global statements does produce a binary, but it can't\ncomplete the regression test due to other problems.\n\nIpcSemaphoreCreate: semctl(id=0, 0, SETALL, ...) failed: Bad address\n\nI'll see if I can come up with a solution for the .global and the\nsemaphore problem. 
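For what it's worth, the first thing I plan to try is simply respelling the
two constructs the tools complain about: __volatile__ with double underscores,
and .globl instead of .global (the latter is only a guess at what the SGI
assembler wants). That would make the wrapper look like the sketch below --
essentially the existing block with just those two spellings changed, and
untested so far:

#if defined(__mips__)
static void
tas_dummy()
{
	__asm__ __volatile__(
		"\
.globl tas              \n\
tas:                    \n\
	.frame $sp, 0, $31  \n\
	ll   $14, 0($4)     \n\
	or   $15, $14, 1    \n\
	sc   $15, 0($4)     \n\
	beq  $15, 0, fail   \n\
	bne  $14, 0, fail   \n\
	li   $2, 0          \n\
	.livereg 0x2000FF0E,0x00000FFF \n\
	j    $31            \n\
fail:                   \n\
	li   $2, 1          \n\
	j    $31            \n\
");
}
#endif

Even if that builds, it would still leave the semctl Bad address failure,
which looks like a separate problem from the spinlock code.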
I'll check wether pgsql 7.0 does run on this box too.\nOne wonders how Robert Bruccoleri did get 7.1RC1 to work properly. I'll\ncheck the archive for clues.\n\nOn my FreeBSD 4.2 box 7.1RC1 runs flawlessly. I've also tested the CVS\nversion a few days ago on a 4.1.1 box without any problems.\n\nFreeBSD 3.3 however does have some problems.\n\n*** ./expected/float8-small-is-zero.out Fri Mar 31 07:30:31 2000\n--- ./results/float8.out Tue Mar 27 02:28:07 2001\n***************\n*** 214,220 ****\n SET f1 = FLOAT8_TBL.f1 * '-1'\n WHERE FLOAT8_TBL.f1 > '0.0';\n SELECT '' AS bad, f.f1 * '1e200' from FLOAT8_TBL f;\n! ERROR: Bad float8 input format -- overflow\n SELECT '' AS bad, f.f1 ^ '1e200' from FLOAT8_TBL f;\n ERROR: pow() result is out of range\n SELECT '' AS bad, ln(f.f1) from FLOAT8_TBL f where f.f1 = '0.0' ;\n--- 214,220 ----\n SET f1 = FLOAT8_TBL.f1 * '-1'\n WHERE FLOAT8_TBL.f1 > '0.0';\n SELECT '' AS bad, f.f1 * '1e200' from FLOAT8_TBL f;\n! ERROR: floating point exception! The last floating point operation either exceeded legal ranges or was a divide by zero\n SELECT '' AS bad, f.f1 ^ '1e200' from FLOAT8_TBL f;\n ERROR: pow() result is out of range\n SELECT '' AS bad, ln(f.f1) from FLOAT8_TBL f where f.f1 = '0.0' ;\n\nSome geometry tests also fail. I'll check those tomorrow, erm, today. The\nsame goes for 7.1RC1 on Solaris 8 (Intel and Sparc).\n\nCheers,\n\nMathijs\n-- \nIt's not that perl programmers are idiots, it's that the language\nrewards idiotic behavior in a way that no other language or tool has\never done.\n Erik Naggum\n", "msg_date": "Tue, 27 Mar 2001 03:04:50 +0200", "msg_from": "Mathijs Brands <[email protected]>", "msg_from_op": false, "msg_subject": "Regression test on FBSD 3.3 & 4.2,\n\tIRIX 6.5 (was Re: Re: Call for platforms)" }, { "msg_contents": "On Tue, Mar 27, 2001 at 12:01:24PM +1000, Justin Clift allegedly wrote:\n> I know that Sourceforge has been adding all sorts of machines to their\n> compile farm.\n> \n> Maybe it would be worthwhile taking a look if they have platforms we\n> don't?\n> \n> Regards and best wishes,\n> \n> Justin Clift\n\nCompaq also still hands out free test accounts on Digital servers\nrunning Linux, Tru64 and FreeBSD... I think it's called the Testdrive\nprogram.\n\nCheers,\n\nMathijs\n-- \nIt's not that perl programmers are idiots, it's that the language\nrewards idiotic behavior in a way that no other language or tool has\never done.\n Erik Naggum\n", "msg_date": "Tue, 27 Mar 2001 03:08:11 +0200", "msg_from": "Mathijs Brands <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "At 7:53 PM -0500 3/26/01, Tom Lane wrote:\n>Thomas Lockhart <[email protected]> writes:\n>>> \"PPC750\"? What's that? \"PPC G3\" might be more likely to mean something\n>>> to onlookers ...\n>\n>> Actually \"G3\" means nothing outside of Apple afaict. The 750 series is a\n>> follow-on to the 60x series, and there is a 7xxx series also. From my\n>> pov, using an accepted label, rather than a marketing (re)label, better\n>> indicates *what* this actually can run on. I'm not sure that I have it\n>> labeled correctly yet, but \"G3\" is not a step in the right direction.\n>\n>I found an apparently current \"PowerPC CPU Summary\" at\n>http://e-www.motorola.com/webapp/sps/technology/tech_tutorial.jsp?catId=M943030621280\n>\n>If accurate, the chip in this PowerBook is *not* a 750, since that tops\n>out at 400 MHz. 
Apple offered this model in 400 and 500 MHz speeds,\n>which makes it either a 7400 or 7410 chip ...\n>\n>> Should I put \"Mac G3\" in the comment section?\n>\n>Yes, if you won't put it where it should be ;-). I'm still of the\n>opinion that \"G3\" will mean something to a vastly larger population\n>than \"750\" or \"7400\" would. The latter are \"marketing relabels\" too\n>you know; Motorola's internal designation would probably be something\n>else entirely.\n\n\nA \"Me Too\" from the peanut gallery.\n\nThere are probably 1000x as many users that will recognize that they have a PowerPC G3 than will know they have a PPC750 or PPC7400.\n\n-pmb\n\n\n", "msg_date": "Mon, 26 Mar 2001 17:08:38 -0800", "msg_from": "Peter Bierman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "Mathijs Brands <[email protected]> writes:\n> Notice the single underscore before volatile.\n\nThat's definitely wrong --- fixed.\n\n> Fixing this causes\n> as (the SGI version, not GNU as) to choke on the '.global tas' statement.\n\n> s_lock.c: At top level:\n> s_lock.c:234: warning: `tas_dummy' defined but not used\n> as: Error: /var/tmp/ccoUdrOb.s, line 421: undefined assembler operation: .global\n> .global tas\n> gmake[4]: *** [s_lock.o] Error 1\n\nPerhaps it should be \".globl\"? That's another common spelling.\n\nDo you know whether anyone uses the GNU assembler on this platform,\nor is it always SGI's? I'm wondering if we need two versions of the\nassembly code ...\n\n\nI had missed the fact that s_lock.c contains some MIPS code. Anyone\nhave any idea what versions of the MIPS series this code runs on?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Mar 2001 20:20:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression test on FBSD 3.3 & 4.2,\n\tIRIX 6.5 (was Re: Re: Call for platforms)" }, { "msg_contents": "> Do you know whether anyone uses the GNU assembler on this platform,\n> or is it always SGI's? I'm wondering if we need two versions of the\n> assembly code ...\n\nSure. Both compilers are available, with SGI's, uh, unique approach, and\nwith GNU's well understood assembler.\n\n> I had missed the fact that s_lock.c contains some MIPS code. Anyone\n> have any idea what versions of the MIPS series this code runs on?\n\nThere is a chance it is from the Ultrix days (very pre-1998 afaicr). Or\nis it the *only* MIPS code in our tree? If so, then it probably supports\nTatsuo's dead Cobalt server box, which is fairly recent vintage.\n\n - Thomas\n", "msg_date": "Tue, 27 Mar 2001 01:35:01 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression test on FBSD 3.3 & 4.2,\n IRIX 6.5 (was Re: Re: Call for\n\tplatforms)" }, { "msg_contents": "On Mon, Mar 26, 2001 at 07:09:38PM -0500, Tom Lane wrote:\n> Thomas Lockhart <[email protected]> writes:\n> > That is not already available from the Irix support code?\n> \n> What we have for IRIX is\n> ... \n> Doesn't look to me like it's likely to work on anything but IRIX ...\n\nI have attached linuxthreads/sysdeps/mips/pt-machine.h from glibc-2.2.2\nbelow. (Glibc linuxthreads has alpha, arm, hppa, i386, ia64, m68k, mips,\npowerpc, s390, SH, and SPARC support, at least in some degree.)\n\nSince the actual instruction sequence is probably lifted from the \nMIPS manual, it's probably much freer than GPL. 
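If someone wants to try wiring that into s_lock.h, a gcc-flavoured tas()
built around the same ll/sc loop might look roughly like the sketch below.
Take it with a grain of salt: it assumes an int-sized slock_t, assumes the
usual TAS() convention that a nonzero return means the lock was already
held, and I have not run it on real MIPS hardware. Like the glibc routine,
it also needs a MIPS-II or later CPU for ll/sc, so R3000-class boxes would
still have to fall back to something else (the SysV-semaphore path,
presumably).

static __inline__ int
tas(volatile slock_t *lock)
{
	int ret, tmp;

	__asm__ __volatile__(
		"1:	ll	%0,%3	\n"	/* read the current lock value */
		"	bnez	%0,2f	\n"	/* nonzero means somebody holds it */
		"	li	%1,1	\n"
		"	sc	%1,%2	\n"	/* try to store a 1 */
		"	beqz	%1,1b	\n"	/* store-conditional lost the race: retry */
		"2:\n"
		: "=&r" (ret), "=&r" (tmp), "=m" (*lock)
		: "m" (*lock)
		: "memory");

	return ret;		/* zero means we acquired the lock */
}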
For the paranoid,\nthe actual instructions, extracted, are just\n\n 1:\n ll %0,%3\n bnez %0,2f\n li %1,1\n sc %1,%2\n beqz %1,1b\n 2:\n\nNathan Myers\[email protected]\n\n-----------------------------------\n/* Machine-dependent pthreads configuration and inline functions.\n\n Copyright (C) 1996, 1997, 1998, 2000 Free Software Foundation, Inc.\n This file is part of the GNU C Library.\n Contributed by Ralf Baechle <[email protected]>.\n Based on the Alpha version by Richard Henderson <[email protected]>.\n\n The GNU C Library is free software; you can redistribute it and/or\n modify it under the terms of the GNU Library General Public License as\n published by the Free Software Foundation; either version 2 of the\n License, or (at your option) any later version.\n\n The GNU C Library is distributed in the hope that it will be useful,\n but WITHOUT ANY WARRANTY; without even the implied warranty of\n MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU\n Library General Public License for more details.\n\n You should have received a copy of the GNU Library General Public\n License along with the GNU C Library; see the file COPYING.LIB. If\n not, write to the Free Software Foundation, Inc.,\n 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. */\n\n#include <sgidefs.h>\n#include <sys/tas.h>\n\n#ifndef PT_EI\n# define PT_EI extern inline\n#endif\n\n/* Memory barrier. */\n#define MEMORY_BARRIER() __asm__ (\"\" : : : \"memory\")\n\n\n/* Spinlock implementation; required. */\n\n#if (_MIPS_ISA >= _MIPS_ISA_MIPS2)\n\nPT_EI long int\ntestandset (int *spinlock)\n{\n long int ret, temp;\n\n __asm__ __volatile__\n (\"/* Inline spinlock test & set */\\n\\t\"\n \"1:\\n\\t\"\n \"ll\t%0,%3\\n\\t\"\n \".set\tpush\\n\\t\"\n \".set\tnoreorder\\n\\t\"\n \"bnez\t%0,2f\\n\\t\"\n \" li\t%1,1\\n\\t\"\n \".set\tpop\\n\\t\"\n \"sc\t%1,%2\\n\\t\"\n \"beqz\t%1,1b\\n\"\n \"2:\\n\\t\"\n \"/* End spinlock test & set */\"\n : \"=&r\" (ret), \"=&r\" (temp), \"=m\" (*spinlock)\n : \"m\" (*spinlock)\n : \"memory\");\n\n return ret;\n}\n\n#else /* !(_MIPS_ISA >= _MIPS_ISA_MIPS2) */\n\nPT_EI long int\ntestandset (int *spinlock)\n{\n return _test_and_set (spinlock, 1);\n}\n#endif /* !(_MIPS_ISA >= _MIPS_ISA_MIPS2) */\n\n\n/* Get some notion of the current stack. Need not be exactly the top\n of the stack, just something somewhere in the current frame. */\n#define CURRENT_STACK_FRAME stack_pointer\nregister char * stack_pointer __asm__ (\"$29\");\n\n\n/* Compare-and-swap for semaphores. */\n\n#if (_MIPS_ISA >= _MIPS_ISA_MIPS2)\n\n#define HAS_COMPARE_AND_SWAP\nPT_EI int\n__compare_and_swap (long int *p, long int oldval, long int newval)\n{\n long int ret;\n\n __asm__ __volatile__\n (\"/* Inline compare & swap */\\n\\t\"\n \"1:\\n\\t\"\n \"ll\t%0,%4\\n\\t\"\n \".set\tpush\\n\"\n \".set\tnoreorder\\n\\t\"\n \"bne\t%0,%2,2f\\n\\t\"\n \" move\t%0,%3\\n\\t\"\n \".set\tpop\\n\\t\"\n \"sc\t%0,%1\\n\\t\"\n \"beqz\t%0,1b\\n\"\n \"2:\\n\\t\"\n \"/* End compare & swap */\"\n : \"=&r\" (ret), \"=m\" (*p)\n : \"r\" (oldval), \"r\" (newval), \"m\" (*p)\n : \"memory\");\n\n return ret;\n}\n\n#endif /* (_MIPS_ISA >= _MIPS_ISA_MIPS2) */\n", "msg_date": "Mon, 26 Mar 2001 17:41:34 -0800", "msg_from": "[email protected] (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: MIPS test-and-set" }, { "msg_contents": "Hi,\n\nI tested Solaris 8 SPARC (32 bit) over the weekend, and can test Solaris\n8 INTEL this week/weekend.\n\nThe results of Solaris 8 SPARC were in Vince's database last time I\nchecked. 
???\n\n+ Justin\n\nMathijs Brands wrote:\n> \n> Hi\n> \n> Is there a list somewhere listing the platforms 7.1 is being\n> tested on right now? I'd be able to run regression tests on\n> the following platforms, if necessary:\n> \n> FreeBSD 3.3 (x86)\n> FreeBSD 4.2 (x86)\n> Linux (x86 - 2.2 & 2.4 kernels, Redhat & Debian distro's)\n> Solaris 7 (SPARC)\n> Solaris 8 (x86)\n> Solaris 8 (SPARC)\n> IRIX 6.2\n> IRIX 6.5\n> \n> If I can get the box back to working order, Alpha Linux is\n> also an option. I'd be willing to build binary packages for\n> Solaris and IRIX.\n> \n> Regards,\n> \n> Mathijs\n> --\n> \"Books constitute capital.\"\n> Thomas Jefferson\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n", "msg_date": "Tue, 27 Mar 2001 11:59:36 +1000", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "I know that Sourceforge has been adding all sorts of machines to their\ncompile farm.\n\nMaybe it would be worthwhile taking a look if they have platforms we\ndon't?\n\nRegards and best wishes,\n\nJustin Clift\n\nThomas Lockhart wrote:\n> \n> > The non-test-and-set case should work again in current CVS, and I'd\n> > appreciate it if Alexander would verify that. But as far as getting\n> > some test-and-set support for MIPS goes, it looks like the only way\n> > is for someone to sit down with a MIPS assembly manual. I haven't\n> > got one, nor access to a machine to test on...\n> \n> That is not already available from the Irix support code?\n> \n> - Thomas\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n", "msg_date": "Tue, 27 Mar 2001 12:01:24 +1000", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n\n> NetBSD Sparc 7.0 2000-04-13, Tom I. 
Helbekkmo\n\nFetching the latest source kit now -- hope to have regression tests\nrun and a report back to you within a day or two.\n\n> We need some NetBSD folks to speak up!\n\nI've once again got a VAX that should be able to run PostgreSQL on\nNetBSD/vax, so I hope to be able to help revitalize that port soon...\n\n-tih\n-- \nThe basic difference is this: hackers build things, crackers break them.\n", "msg_date": "27 Mar 2001 04:38:50 +0200", "msg_from": "Tom Ivar Helbekkmo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms" }, { "msg_contents": "One that didn't compilei RC1:\n\n BIGBOY 71# uname -a\nIRIX BIGBOY 6.5 05190003 IP22\n\nOn an Indigo2 (R4000), gcc 2.95.2 , with the following error:\n\ngcc -Wall -Wmissing-prototypes -Wmissing-declarations\n-I../../../../src/include -U_NO_XOPEN4 -c s_lock.c -o s_lock.o\ns_lock.c: In function `s_lock':\ns_lock.c:134: warning: passing arg 1 of pointer to function discards\nqualifiers from pointer target type\ns_lock.c: In function `tas_dummy':\ns_lock.c:235: parse error before `_volatile__'\ns_lock.c: At top level:\ns_lock.c:234: warning: `tas_dummy' defined but not used\ngmake[4]: *** [s_lock.o] Error 1\ngmake[4]: Leaving directory\n`/usr/people/telmnstr/pg/postgresql-7.1RC1/src/backend/storage/buffer'\ngmake[3]: *** [buffer-recursive] Error 2\ngmake[3]: Leaving directory\n`/usr/people/telmnstr/pg/postgresql-7.1RC1/src/backend/storage'\ngmake[2]: *** [storage-recursive] Error 2\ngmake[2]: Leaving directory\n`/usr/people/telmnstr/pg/postgresql-7.1RC1/src/backend'\ngmake[1]: *** [all] Error 2\ngmake[1]: Leaving directory\n`/usr/people/telmnstr/pg/postgresql-7.1RC1/src'\ngmake: *** [all] Error 2\n*** Error code 2 (bu21)\n\nJeff\n\n\n", "msg_date": "Mon, 26 Mar 2001 22:01:19 -0500 (EST)", "msg_from": "Jeff Duffy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\n\nOn 25 Mar 2001, at 16:07, Tom Lane wrote:\n\n> Does that database have any user-created relations in it, or is it\n> just a virgin database? It seems that the wrong attlen is being\n> computed for ctid fields during bootstrap, but the regression test\n> output (if it was complete) implies that the value inserted for\n> user-created fields was OK. This doesn't make a lot of sense since\n> it's the same code...\n\nTotally virgin. I created it just for that select you wanted. The \n7.1beta6 I built was installed in /usr/pgsql so as to be entirely \nseparate from any other running parts of the system. Like I said, the \ntest failed, but it seems to *work* just fine...\n\nIf you want the complete regress output, I'll send it as well. The \nonly failures were the type_sanity and geometry though, and the \ngeometry was just fluctuations on the final digit of a few numbers.\n\nI suspect it might be an alignment problem (ARM needs word or dword \nalignment on data access.. our kernel has an alignment trap handler \nthat does fixups in 'broken' code) or something related to signedness \n(ARM has default signed char) but I don't know enough about postgres \ninternals to really debug it. However, I'm certainly willing to \nlearn.. 
:) \n\n\n-----BEGIN PGP SIGNATURE-----\nVersion: N/A\n\niQCVAwUBOsAFXf+IdJuhyV9xAQFpZgP8C7g9dqlh9Qd/wVwJn2jquVh+X3gBWBZ5\nUMHx43tPfYE7xJvHl3XH/z+mg/POyzgFMCF+5USO2jzbPMDiS2OtJbp+1NvP2FHA\nuuY1ra5o8WKWW/7ZrfaO5edC5e1OsKbhGsXugRIyBwFkzz28blt6gongUdio0nC3\nTd8Fm3GUKNk=\n=+//W\n-----END PGP SIGNATURE-----\n", "msg_date": "Mon, 26 Mar 2001 22:13:33 -0500", "msg_from": "\"Mark Knox\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms " }, { "msg_contents": "\"Mark Knox\" <[email protected]> writes:\n> On 25 Mar 2001, at 16:07, Tom Lane wrote:\n>> Does that database have any user-created relations in it, or is it\n>> just a virgin database?\n\n> Totally virgin. I created it just for that select you wanted.\n\nOkay. Would you create a couple of random tables in it and do the\nselect again? I want to see what ctid looks like in a user-created\ntable.\n\n> I suspect it might be an alignment problem\n\nSort of. I am suspicious that sizeof(ItemPointerData) is returning 8\nrather than 6 as one might expect.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Mar 2001 23:14:42 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms " }, { "msg_contents": "[email protected] (Nathan Myers) writes:\n\n> Since the actual instruction sequence is probably lifted from the \n> MIPS manual, it's probably much freer than GPL. For the paranoid,\n> the actual instructions, extracted, are just\n> \n> 1:\n> ll %0,%3\n> bnez %0,2f\n> li %1,1\n> sc %1,%2\n> beqz %1,1b\n> 2:\n\nBut note that the ll instruction is MIPS ISA II, which means that it\nis not supported by the R3000, which means that it will not work on\nmost DECstations.\n\nI don't think there is any way to do a reliable test-and-set sequence\nin user mode on an R3000.\n\nIan\n", "msg_date": "26 Mar 2001 21:18:43 -0800", "msg_from": "Ian Lance Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MIPS test-and-set" }, { "msg_contents": "> mklinux PPC750 7.0 2000-04-13, Tatsuo Ishii\n> \n> Any luck with RC1?\n\nI will try today or tomorrow...\n--\nTatsuo Ishii\n", "msg_date": "Tue, 27 Mar 2001 17:57:24 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms" }, { "msg_contents": "\nOn Tue, 27 Mar 2001 09:57:45 -0500 (EST), Bruce Momjian alluded:\n\n> \n> We just fixed that yesterday. Can you grab the most recent CVS and give\n> it a try?\n\n Yep. We have many other MIPS (ONYX Crimson, , ONYX, Challenge, Indy w/ IRIX\n6.2, 6.5, etc.), Alpha and Sparc platforms if there are some others that need\ntesting (How about NetBSD on NeXT?).\n\nJeff\n\n-- \n Jeff Duffy\n [email protected]\n\n", "msg_date": "27 Mar 2001 10:42:59 EST", "msg_from": "\"Local\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "Thus spake Tom Ivar Helbekkmo\n> > We need some NetBSD folks to speak up!\n\nI have successfully compiled it from CVS sources on my NetBSD -current but\nI can't find the tar file for RC1 to try it with the package system. Can\nsomeone point me to it please.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Tue, 27 Mar 2001 06:36:37 -0500 (EST)", "msg_from": "[email protected] (D'Arcy J.M. 
Cain)", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "On Tue, Mar 27, 2001 at 06:36:37AM -0500, D'Arcy J.M. Cain allegedly wrote:\n> Thus spake Tom Ivar Helbekkmo\n> > > We need some NetBSD folks to speak up!\n> \n> I have successfully compiled it from CVS sources on my NetBSD -current but\n> I can't find the tar file for RC1 to try it with the package system. Can\n> someone point me to it please.\n\nIt's probably in /pub/dev (or something similar) on the ftp\nserver...\n\nMathijs\n-- \nIt's not that perl programmers are idiots, it's that the language\nrewards idiotic behavior in a way that no other language or tool has\never done.\n Erik Naggum\n", "msg_date": "Tue, 27 Mar 2001 13:56:02 +0200", "msg_from": "Mathijs Brands <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "\nWe just fixed that yesterday. Can you grab the most recent CVS and give\nit a try?\n\n> One that didn't compilei RC1:\n> \n> BIGBOY 71# uname -a\n> IRIX BIGBOY 6.5 05190003 IP22\n> \n> On an Indigo2 (R4000), gcc 2.95.2 , with the following error:\n> \n> gcc -Wall -Wmissing-prototypes -Wmissing-declarations\n> -I../../../../src/include -U_NO_XOPEN4 -c s_lock.c -o s_lock.o\n> s_lock.c: In function `s_lock':\n> s_lock.c:134: warning: passing arg 1 of pointer to function discards\n> qualifiers from pointer target type\n> s_lock.c: In function `tas_dummy':\n> s_lock.c:235: parse error before `_volatile__'\n> s_lock.c: At top level:\n> s_lock.c:234: warning: `tas_dummy' defined but not used\n> gmake[4]: *** [s_lock.o] Error 1\n> gmake[4]: Leaving directory\n> `/usr/people/telmnstr/pg/postgresql-7.1RC1/src/backend/storage/buffer'\n> gmake[3]: *** [buffer-recursive] Error 2\n> gmake[3]: Leaving directory\n> `/usr/people/telmnstr/pg/postgresql-7.1RC1/src/backend/storage'\n> gmake[2]: *** [storage-recursive] Error 2\n> gmake[2]: Leaving directory\n> `/usr/people/telmnstr/pg/postgresql-7.1RC1/src/backend'\n> gmake[1]: *** [all] Error 2\n> gmake[1]: Leaving directory\n> `/usr/people/telmnstr/pg/postgresql-7.1RC1/src'\n> gmake: *** [all] Error 2\n> *** Error code 2 (bu21)\n> \n> Jeff\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 27 Mar 2001 09:57:45 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "Jeff Duffy <[email protected]> writes:\n> s_lock.c:235: parse error before `_volatile__'\n\nThat typo is fixed in current sources (should be OK in last night's\nsnapshot) but there's still some doubt as to how well the MIPS assembly\ncode works ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Mar 2001 10:36:26 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms " }, { "msg_contents": "Mathijs Brands writes:\n\n> I just tried to compile 7.1RC1 on my IRIX 6.5 box using gcc 2.95.2.\n\nAccording to the information at\nhttp://freeware.sgi.com/shared/howto.html#b1 it probably won't work to\ncompile PostgreSQL with GCC on Irix. Or it might work and crash when run.\nBe warned. 
(I think it is not accidental that no one ever successfully\nused a PostgreSQL/GCC/Irix combo.)\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Tue, 27 Mar 2001 17:42:03 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression test on FBSD 3.3 & 4.2, IRIX 6.5 (was Re:\n\tRe: Call for platforms)" }, { "msg_contents": "On Tue, Mar 27, 2001 at 09:57:45AM -0500, Bruce Momjian allegedly wrote:\n> We just fixed that yesterday. Can you grab the most recent CVS and give\n> it a try?\n\nEven if you fix this it won't work (I tried it). Robert mailed why. Check the URL below\nfor more information. It crashes on semctl :(\n\nhttp://freeware.sgi.com/shared/howto.html#b1\n\nCheers,\n\nMathijs\n-- \nIt's not that perl programmers are idiots, it's that the language\nrewards idiotic behavior in a way that no other language or tool has\never done.\n Erik Naggum\n", "msg_date": "Tue, 27 Mar 2001 18:06:17 +0200", "msg_from": "Mathijs Brands <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "I wrote:\n\n> > NetBSD Sparc 7.0 2000-04-13, Tom I. Helbekkmo\n> \n> Fetching the latest source kit now -- hope to have regression tests\n> run and a report back to you within a day or two.\n\nHmm. No go here: everything looks peachy until I've started the\npostmaster, and attempt to connect to it:\n\nbarsoom:postgres> psql template1\n/usr/local/pgsql/lib/libpq.so.2: Undefined symbol \"\" (reloc type = 12, symnum = 4)\nbarsoom:postgres> _\n\nI've never seen this happen before... (For what it's worth, I use\nKerberos IV authentication here, so that's what I've configured on\nthis box as well. I notice that psql does not get as far as aquiring\na service key for the database access.)\n\nAny quick hints?\n\n-tih\n-- \nThe basic difference is this: hackers build things, crackers break them.\n", "msg_date": "27 Mar 2001 22:27:15 +0200", "msg_from": "Tom Ivar Helbekkmo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms" }, { "msg_contents": "Mathijs Brands <[email protected]> writes:\n> Even if you fix this it won't work (I tried it). Robert mailed\n> why. Check the URL below for more information. It crashes on semctl :(\n\n> http://freeware.sgi.com/shared/howto.html#b1\n\nUgh. Given the semctl compatibility problem, I suspect we'd better note\nin the platform list that IRIX is only supported for cc, not gcc.\n\nThe other uncomfy-looking thing on that page is the very first item,\nabout configure scripts picking up libraries that they'd best not.\n(I have seen similar issues on HPUX, although they were relatively\neasy to get around.) We might need to do some more hacking on our\nconfigure script to make it play nice on IRIX.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Mar 2001 15:44:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms " }, { "msg_contents": "Tom Ivar Helbekkmo <[email protected]> wrote:\n> Thomas Lockhart <[email protected]> writes:\n\n>> NetBSD Sparc 7.0 2000-04-13, Tom I. 
Helbekkmo\n\n> Fetching the latest source kit now -- hope to have regression tests\n> run and a report back to you within a day or two.\n\n>> We need some NetBSD folks to speak up!\n\n> I've once again got a VAX that should be able to run PostgreSQL on\n> NetBSD/vax, so I hope to be able to help revitalize that port soon...\n\nit might also be a good idea to ask on the NetBSD ports lists - i\nthink there will most probably some people trying things out - the\nname of the list is\n\n [email protected]\n\nwhere arch is the corresponding NetBSD port name (pmax, macppc, sparc,\ni386, arm32, ...)\n\nthis might also be a good idea for the mips test-and-set thing (on\nthe port-pmax list - there are a lot of people knowing all that\nstuff very well)\n\nalso it might be worth to eventually ask on the [email protected]\nlist for someone willing to play with PostgreSQL on FreeBSD/alpha\n\njust some ideas ...\n\nt\n\n-- \nthomas graichen <[email protected]> ... perfection is reached, not\nwhen there is no longer anything to add, but when there is no\nlonger anything to take away. --- antoine de saint-exupery\n", "msg_date": "Tue, 27 Mar 2001 23:00:40 +0200", "msg_from": "thomas graichen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "> > mklinux PPC750 7.0 2000-04-13, Tatsuo Ishii\n> > \n> > Any luck with RC1?\n> \n> I will try today or tomorrow...\n\nIn summary no, improvemnets seen.\n\nIf compiled with -O2 or -O2 -g, I got 10 tests FAILED. misc test\nfailed due to a backend crash. The SQL caused the crash was:\n\nselect i, length(t), octet_length(t), oldstyle_length(i,t) from\noldstyle_test;\n\n#0 ExecReplace (slot=0x1a4a7d0, tupleid=0x0, estate=0x1a4a708)\n at execMain.c:1408\n1408 resultRelationDesc = resultRelInfo->ri_RelationDesc;\n(gdb) where\n#0 ExecReplace (slot=0x1a4a7d0, tupleid=0x0, estate=0x1a4a708)\n at execMain.c:1408\n#1 0x188471c in ExecutePlan (estate=0x0, plan=0x1a4a410, \n operation=CMD_SELECT, numberTuples=0, direction=27567836, \n destfunc=0x1a4adf8) at execMain.c:1127\n#2 0x188471c in ExecutePlan (estate=0x0, plan=0x1a4a410, \n operation=CMD_SELECT, numberTuples=0, direction=27567836, \n destfunc=0x1a4adf8) at execMain.c:1127\n#3 0x18838b8 in ExecutorRun (queryDesc=0x1a4a7d0, estate=0x1a4a708, \n feature=27567784, count=0) at execMain.c:233\n#4 0x18e7784 in ProcessQuery (parsetree=0x1a4a708, plan=0x1a4a6a8, dest=None)\n at pquery.c:295\n#5 0x18e5c38 in pg_exec_query_string (query_string=0x1a4a410 \"\", dest=None, \n parse_context=0x1) at postgres.c:806\n#6 0x18e70b8 in PostgresMain (argc=1, argv=0x0, real_argc=4, real_argv=0x0, \n username=0x0) at postgres.c:1902\n#7 0x18c92ec in DoBackend (port=0x1a4a6a8) at postmaster.c:2111\n#8 0x18c8e10 in BackendStartup (port=0x1a4a708) at postmaster.c:1894\n#9 0x18c7c08 in ServerLoop () at postmaster.c:992\n#10 0x18c74f4 in PostmasterMain (argc=0, argv=0x1a4a6a8) at postmaster.c:682\n#11 0x1899a5c in main (argc=27567784, argv=0x1a4a708) at main.c:147\n#12 0x181c400 in _start ()\n(gdb) \n", "msg_date": "Wed, 28 Mar 2001 10:39:51 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\n\nOn 26 Mar 2001, at 23:14, Tom Lane wrote:\n\n> \"Mark Knox\" <[email protected]> writes:\n> > On 25 Mar 2001, at 16:07, Tom Lane wrote:\n> >> Does that database have any user-created relations in it, or is it\n> >> just a virgin database?\n> \n> > Totally virgin. 
I created it just for that select you wanted.\n> \n> Okay. Would you create a couple of random tables in it and do the\n> select again? I want to see what ctid looks like in a user-created\n> table.\n\nSure. I created two tables called 'test1' and 'test2'. Test1 has a \nsingle field 'field1' of type int4. Test2 has two fields 'field1' and \n'field2' of types char(200) and int4 respectively. \n\nHere are the results:\n\npostgres=> select \np1.oid,attrelid,relname,attname,attlen,attalign,attbyval from \npg_attribute p1, pg_class p2 where atttypid = 27 and p2.oid = \nattrelid order by 1;\n oid|attrelid|relname |attname|attlen|attalign|attbyval\n- -----+--------+--------------+-------+------+--------+--------\n16401| 1247|pg_type |ctid | 6|i |f \n16415| 1262|pg_database |ctid | 6|i |f \n16439| 1255|pg_proc |ctid | 6|i |f \n16454| 1260|pg_shadow |ctid | 6|i |f \n16464| 1261|pg_group |ctid | 6|i |f \n16486| 1249|pg_attribute |ctid | 6|i |f \n16515| 1259|pg_class |ctid | 6|i |f \n16526| 1215|pg_attrdef |ctid | 6|i |f \n16537| 1216|pg_relcheck |ctid | 6|i |f \n16557| 1219|pg_trigger |ctid | 6|i |f \n16572| 16567|pg_inherits |ctid | 8|i |f \n16593| 16579|pg_index |ctid | 8|i |f \n16610| 16600|pg_statistic |ctid | 8|i |f \n16635| 16617|pg_operator |ctid | 8|i |f \n16646| 16642|pg_opclass |ctid | 8|i |f \n16678| 16653|pg_am |ctid | 8|i |f \n16691| 16685|pg_amop |ctid | 8|i |f \n16873| 16867|pg_amproc |ctid | 8|i |f \n16941| 16934|pg_language |ctid | 8|i |f \n16953| 16948|pg_largeobject|ctid | 8|i |f \n16970| 16960|pg_aggregate |ctid | 8|i |f \n17038| 17033|pg_ipl |ctid | 8|i |f \n17051| 17045|pg_inheritproc|ctid | 8|i |f \n17067| 17058|pg_rewrite |ctid | 8|i |f \n17079| 17074|pg_listener |ctid | 8|i |f \n17090| 17086|pg_description|ctid | 8|i |f \n17206| 17201|pg_toast_1215 |ctid | 8|i |f \n17221| 17216|pg_toast_17086|ctid | 8|i |f \n17236| 17231|pg_toast_1255 |ctid | 8|i |f \n17251| 17246|pg_toast_1216 |ctid | 8|i |f \n17266| 17261|pg_toast_17058|ctid | 8|i |f \n17281| 17276|pg_toast_16600|ctid | 8|i |f \n17301| 17291|pg_user |ctid | 8|i |f \n17314| 17309|pg_rules |ctid | 8|i |f \n17327| 17322|pg_views |ctid | 8|i |f \n17342| 17335|pg_tables |ctid | 8|i |f \n17355| 17350|pg_indexes |ctid | 8|i |f \n18724| 18721|test1 |ctid | 8|i |f \n18735| 18731|test2 |ctid | 8|i |f \n(39 rows)\n\n> > I suspect it might be an alignment problem\n> \n> Sort of. I am suspicious that sizeof(ItemPointerData) is returning 8\n> rather than 6 as one might expect.\n\nMaybe it's padding the structure to a dword boundary? ARM is \nnotorious for such things.. I will rebuild it with \n__attribute__((packed)) on the struct and see if the size changes..\n\n\n-----BEGIN PGP SIGNATURE-----\nVersion: N/A\n\niQCVAwUBOsFDJP+IdJuhyV9xAQEKxQP/YJXTxZppLd7ECk4BSwDZaStP4+bE6acc\nStT//i/drdPC53DDWqiXLGA0bS384EXxyjvvaO1bTXzVFU/3+X/pY6YN/G3HMoah\ndbCXRli2Y57yansf1WaVmK1lhiAqLy3iGYFp2nZvO1Sl1u+ba89HtV+G+iaKZSTr\nU+HWTU3nnOM=\n=vkY+\n-----END PGP SIGNATURE-----\n", "msg_date": "Tue, 27 Mar 2001 20:49:24 -0500", "msg_from": "\"Mark Knox\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms " }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\n\nOn 27 Mar 2001, at 20:49, Mark Knox wrote:\n\n> > > I suspect it might be an alignment problem\n> > \n> > Sort of. I am suspicious that sizeof(ItemPointerData) is returning\n> > 8 rather than 6 as one might expect.\n> \n> Maybe it's padding the structure to a dword boundary? ARM is \n> notorious for such things.. 
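A quick sanity check, outside the backend, is a throwaway test program\n
built with the same gcc (the struct here just mirrors ItemPointerData's\n
three uint16 fields; it is only an illustration, not code from the tree):\n
\n
#include <stdio.h>\n
\n
typedef struct { unsigned short bi_hi; unsigned short bi_lo; } BlockIdData;\n
typedef unsigned short OffsetNumber;\n
\n
typedef struct { BlockIdData ip_blkid; OffsetNumber ip_posid; } Plain;\n
typedef struct { BlockIdData ip_blkid; OffsetNumber ip_posid; }\n
__attribute__((packed)) Packed;\n
\n
int\n
main(void)\n
{\n
    /* most ABIs print 6 for both; if arm-linux pads structs out to a\n
       word boundary, Plain comes back as 8, matching the attlen above */\n
    printf(\"plain  %u\\n\", (unsigned) sizeof(Plain));\n
    printf(\"packed %u\\n\", (unsigned) sizeof(Packed));\n
    return 0;\n
}\n
\n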
I will rebuild it with \n> __attribute__((packed)) on the struct and see if the size changes..\n\nAha, progress! The packed directive gives attlen of 6 across the \nboard! Type_sanity test passes now too, so the only failing \nregression test is geometry and that is easily dismissed. The \nvariation is in the last decimal place and probably due to emulated \nfloating point (ARM has no FPU).\n\nThe patch is attached.. it's tiny but seems to be effective.\n\n\n\n-----BEGIN PGP SIGNATURE-----\nVersion: N/A\n\niQCVAwUBOsFeUv+IdJuhyV9xAQE2XAP9FF93ew+6Ml5iZ1jWjcGrs+3zaXIeWef6\nSytNtIfyJqmcnyWnMaxBTlChIvBO5A2HVnBkCydM5BjUXdW1eWsEynrd+U79Yc+e\nyVDGo30CK3lAkTLH3Fo6jR3YZe/TsIyr80WlDeqJiWvDmHTfqvo50jRiDq2h1OL/\nLmI4YIQM0rQ=\n=Vwwp\n-----END PGP SIGNATURE-----\n", "msg_date": "Tue, 27 Mar 2001 22:45:22 -0500", "msg_from": "\"Mark Knox\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms " }, { "msg_contents": "The following section of this message contains a file attachment\nprepared for transmission using the Internet MIME message format.\nIf you are using Pegasus Mail, or any another MIME-compliant system,\nyou should be able to save it or view it from within your mailer.\nIf you cannot, please ask your system administrator for assistance.\n\n ---- File information -----------\n File: arm-alignment.patch\n Date: 27 Mar 2001, 21:26\n Size: 533 bytes.\n Type: Unknown", "msg_date": "Tue, 27 Mar 2001 22:45:22 -0500", "msg_from": "\"Mark Knox\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms " }, { "msg_contents": "\"Mark Knox\" <[email protected]> writes:\n> {\n> \tBlockIdData ip_blkid;\n> \tOffsetNumber ip_posid;\n> +#ifdef __arm__\n> +} __attribute__((packed)) ItemPointerData;\n> +#else\n> }\n> +#endif\n\nThat would fix it for ARM but not for anyplace else with similar\nalignment behavior. Would you try this patch instead to see what\nhappens?\n\n\t\t\tregards, tom lane\n\n\n*** src/backend/catalog/heap.c.orig\tThu Mar 22 09:50:36 2001\n--- src/backend/catalog/heap.c\tWed Mar 28 00:24:45 2001\n***************\n*** 103,109 ****\n */\n \n static FormData_pg_attribute a1 = {\n! \t0xffffffff, {\"ctid\"}, TIDOID, 0, sizeof(ItemPointerData),\n \tSelfItemPointerAttributeNumber, 0, -1, -1, '\\0', 'p', '\\0', 'i', '\\0', '\\0'\n };\n \n--- 103,109 ----\n */\n \n static FormData_pg_attribute a1 = {\n! \t0xffffffff, {\"ctid\"}, TIDOID, 0, SizeOfIptrData,\n \tSelfItemPointerAttributeNumber, 0, -1, -1, '\\0', 'p', '\\0', 'i', '\\0', '\\0'\n };\n \n", "msg_date": "Wed, 28 Mar 2001 00:27:51 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms " }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> > mklinux PPC750 7.0 2000-04-13, Tatsuo Ishii\n\n> If compiled with -O2 or -O2 -g, I got 10 tests FAILED. misc test\n> failed due to a backend crash. 
The SQL caused the crash was:\n\n> select i, length(t), octet_length(t), oldstyle_length(i,t) from\n> oldstyle_test;\n\n> #0 ExecReplace (slot=0x1a4a7d0, tupleid=0x0, estate=0x1a4a708)\n> at execMain.c:1408\n> 1408 resultRelationDesc = resultRelInfo->ri_RelationDesc;\n> (gdb) where\n> #0 ExecReplace (slot=0x1a4a7d0, tupleid=0x0, estate=0x1a4a708)\n> at execMain.c:1408\n> #1 0x188471c in ExecutePlan (estate=0x0, plan=0x1a4a410, \n> operation=CMD_SELECT, numberTuples=0, direction=27567836, \n> destfunc=0x1a4adf8) at execMain.c:1127\n> #2 0x188471c in ExecutePlan (estate=0x0, plan=0x1a4a410, \n> operation=CMD_SELECT, numberTuples=0, direction=27567836, \n> destfunc=0x1a4adf8) at execMain.c:1127\n> #3 0x18838b8 in ExecutorRun (queryDesc=0x1a4a7d0, estate=0x1a4a708, \n> feature=27567784, count=0) at execMain.c:233\n\nI think you've got a badly broken compiler there. There's no way that\nExecReplace should be entered for a SELECT. The backtrace is wrong on\nits face anyway --- ExecutePlan does not call itself.\n\nWhat gcc version does that platform have?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Mar 2001 00:51:55 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms " }, { "msg_contents": "> I think you've got a badly broken compiler there. There's no way that\n> ExecReplace should be entered for a SELECT. The backtrace is wrong on\n> its face anyway --- ExecutePlan does not call itself.\n\nYes, I have suspected that.\n\n> What gcc version does that platform have?\n\ngcc version egcs-2.90.25 980302 (egcs-1.0.2 prerelease)\n--\nTatsuo Ishii\n", "msg_date": "Wed, 28 Mar 2001 14:53:35 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms " }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n>> What gcc version does that platform have?\n\n> gcc version egcs-2.90.25 980302 (egcs-1.0.2 prerelease)\n\nCan you try a known-stable gcc version? 2.95.2 say?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Mar 2001 01:08:45 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms " }, { "msg_contents": "> Tatsuo Ishii <[email protected]> writes:\n> >> What gcc version does that platform have?\n> \n> > gcc version egcs-2.90.25 980302 (egcs-1.0.2 prerelease)\n> \n> Can you try a known-stable gcc version? 2.95.2 say?\n\nI don't have time right know. Will do maybe for 7.1.1 or 7.2..\n--\nTatsuo Ishii\n", "msg_date": "Wed, 28 Mar 2001 15:11:59 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms " }, { "msg_contents": "I just had a discussion with a user who doesn't want to update from\n6.4.something to 7.0.* because 7.0 broke a feature he likes, namely\nthe ability to change the default value of a column inherited from\na parent table. 
It seems that in pre-7.0 Postgres, this works:\n\ncreate table one(id int default 1, descr text);\ncreate table two(id int default 2, tag text) inherits (one);\n\nwith the net effect that table \"two\" has just one \"id\" column with\ndefault value 2.\n\nI can recall a number of requests from users to be able to change\nthe default value when inheriting a column, but I had not realized\nthat it was actually possible to do this in older Postgres releases.\n\nAfter digging into the CVS logs and mail archives, I find that Peter E.\nchanged the behavior in January 2000, apparently without realizing that\nhe was disabling a feature that some considered useful. Here's his\ncomment in pghackers, 26 Jan 2000 19:35:14 +0100 (CET):\n\n> ... I just looked into item 'Disallow inherited columns\n> with the same name as new columns' and it seems that someone actually made\n> provisions for this to be allowed, meaning that\n> create table test1 (x int);\n> create table test2 (x int) inherits (test1);\n> would result in test2 looking exactly like test1. No one knows what the\n> motivation was. (I removed it anyway.)\n\nGiven that Peter was responding to a TODO item, evidently someone had\ncomplained about the lack of any complaint for this construction, but\nI wonder whether the someone really understood all the implications.\nAllowing this construction allows one to change the default, or add\n(but not remove) column constraints, and in general it seems kinda\nuseful.\n\nThe question of the day: should we put this back the way it was?\nIf so, should we try to squeeze it into 7.1, or wait another release\ncycle? (I can see about equally good arguments for considering this\na feature addition or a bug fix...) Should there be a NOTICE about\nthe duplicated column name, or is the old silent treatment okay?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Mar 2001 13:30:30 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Changing the default value of an inherited column" }, { "msg_contents": "Tom Lane writes:\n\n> It seems that in pre-7.0 Postgres, this works:\n>\n> create table one(id int default 1, descr text);\n> create table two(id int default 2, tag text) inherits (one);\n>\n> with the net effect that table \"two\" has just one \"id\" column with\n> default value 2.\n\nAlthough the liberty to do anything you want seems appealing at first, I\nwould think that allowing this is not correct from an OO point of view.\nBut given that our inheritance system actually has conceivably little\nresemblance to real OO, I don't really care.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Wed, 28 Mar 2001 23:15:12 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Changing the default value of an inherited column" }, { "msg_contents": "At 12:27 AM 3/28/01 -0500, Tom Lane wrote:\n\n>That would fix it for ARM but not for anyplace else with similar\n>alignment behavior. Would you try this patch instead to see what\n>happens?\n\nI don't think this solution would be valid on many other platforms. It forces the structure to not be padded, and assumes that the cpu will be able to fetch from unaligned boundaries. The only reason this works is that the arm linux kernel contains an alignment trap handler that catches the fault and does a fixup on the access. Otherwise it would crash with SIGBUS.\n\n> static FormData_pg_attribute a1 = {\n>! 
0xffffffff, {\"ctid\"}, TIDOID, 0, SizeOfIptrData,\n> SelfItemPointerAttributeNumber, 0, -1, -1, '\\0', 'p', '\\0', 'i', '\\0', '\\0'\n> }; \n\nWell, this patch seems to produce attlens of 6 as desired, but it causes many (13) of the regression tests to fail. Do you want to see the regression.diffs? \n\n", "msg_date": "Wed, 28 Mar 2001 21:16:32 -0500", "msg_from": "Mark Knox <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms " }, { "msg_contents": "Mark Knox <[email protected]> writes:\n> I don't think this solution would be valid on many other platforms.\n\nAu contraire --- the ARM is the first platform I've heard of that does\nnot think sizeof(ItemPointerData) is 6. Else we'd have seen this\nregress test fail before.\n\n> Well, this patch seems to produce attlens of 6 as desired, but it\n> causes many (13) of the regression tests to fail. Do you want to see\n> the regression.diffs?\n\nPlease.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Mar 2001 23:06:42 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms " }, { "msg_contents": "Tom Lane wrote:\n >I just had a discussion with a user who doesn't want to update from\n >6.4.something to 7.0.* because 7.0 broke a feature he likes, namely\n >the ability to change the default value of a column inherited from\n >a parent table. It seems that in pre-7.0 Postgres, this works:\n >\n >create table one(id int default 1, descr text);\n >create table two(id int default 2, tag text) inherits (one);\n >\n >with the net effect that table \"two\" has just one \"id\" column with\n >default value 2.\n ...\n >The question of the day: should we put this back the way it was?\n >If so, should we try to squeeze it into 7.1, or wait another release\n >cycle? (I can see about equally good arguments for considering this\n >a feature addition or a bug fix...) 
Should there be a NOTICE about\n >the duplicated column name, or is the old silent treatment okay?\n \nI would very much like to have this feature restored; I think there should\nbe a NOTICE, just in case the duplication is caused by mistyping.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Trust in the Lord with all your heart and lean not on \n your own understanding; in all your ways acknowledge \n him, and he will direct your paths.\" Proverbs 3:5,6 \n\n\n", "msg_date": "Thu, 29 Mar 2001 14:27:03 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Changing the default value of an inherited column " }, { "msg_contents": "Peter Eisentraut wrote:\n >Tom Lane writes:\n >\n >> It seems that in pre-7.0 Postgres, this works:\n >>\n >> create table one(id int default 1, descr text);\n >> create table two(id int default 2, tag text) inherits (one);\n >>\n >> with the net effect that table \"two\" has just one \"id\" column with\n >> default value 2.\n >\n >Although the liberty to do anything you want seems appealing at first, I\n >would think that allowing this is not correct from an OO point of view.\n\nI don't agree; this is equivalent to redefinition of a feature (=method) in\na descendant class, which is perfectly acceptable so long as the feature's\nsignature (equivalent to column type) remains unchanged.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Trust in the Lord with all your heart and lean not on \n your own understanding; in all your ways acknowledge \n him, and he will direct your paths.\" Proverbs 3:5,6 \n\n\n", "msg_date": "Thu, 29 Mar 2001 14:29:38 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: Changing the default value of an inherited column " }, { "msg_contents": "Oliver Elphick writes:\n\n> Peter Eisentraut wrote:\n> >Tom Lane writes:\n> >\n> >> It seems that in pre-7.0 Postgres, this works:\n> >>\n> >> create table one(id int default 1, descr text);\n> >> create table two(id int default 2, tag text) inherits (one);\n> >>\n> >> with the net effect that table \"two\" has just one \"id\" column with\n> >> default value 2.\n> >\n> >Although the liberty to do anything you want seems appealing at first, I\n> >would think that allowing this is not correct from an OO point of view.\n>\n> I don't agree; this is equivalent to redefinition of a feature (=method) in\n> a descendant class, which is perfectly acceptable so long as the feature's\n> signature (equivalent to column type) remains unchanged.\n\nThe SQL equivalent of redefining a method would the redefinition of a\nmethod [sic]. 
But since we don't have anything close to that, feel\nfree...\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Thu, 29 Mar 2001 18:53:15 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Changing the default value of an inherited column" }, { "msg_contents": "\"Oliver Elphick\" <[email protected]> writes:\n>>>> Although the liberty to do anything you want seems appealing at first, I\n>>>> would think that allowing this is not correct from an OO point of view.\n\n> I don't agree; this is equivalent to redefinition of a feature (=method) in\n> a descendant class, which is perfectly acceptable so long as the feature's\n> signature (equivalent to column type) remains unchanged.\n\nWell, that does bring up the question of exactly what is signature and\nexactly what is implementation. Clearly we cannot allow the column type\nto be redefined. But what about typmod? Is it OK to replace char(32)\nwith char(64)? How about vice versa? How about replacing numeric(9,0)\nwith numeric(7,2)?\n\nThe pre-7.0 code only checked that the type ID is the same, but I wonder\nwhether it wouldn't be a good idea to demand typmod the same as well.\nFor the existing types that use typmod I don't think this is absolutely\nnecessary (ie, I don't think the system might crash if typmods are\ninconsistent in inherited tables) ... but I'm not comfortable about it\neither.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Mar 2001 11:57:20 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Changing the default value of an inherited column " }, { "msg_contents": "On Thu, Mar 29, 2001 at 02:29:38PM +0100, Oliver Elphick wrote:\n> Peter Eisentraut wrote:\n> >Tom Lane writes:\n> >\n> >> It seems that in pre-7.0 Postgres, this works:\n> >>\n> >> create table one(id int default 1, descr text);\n> >> create table two(id int default 2, tag text) inherits (one);\n> >>\n> >> with the net effect that table \"two\" has just one \"id\" column with\n> >> default value 2.\n> >\n> >Although the liberty to do anything you want seems appealing at first, I\n> >would think that allowing this is not correct from an OO point of view.\n> \n> I don't agree; this is equivalent to redefinition of a feature (=method) in\n> a descendant class, which is perfectly acceptable so long as the feature's\n> signature (equivalent to column type) remains unchanged.\n\nThe O-O principle involved here is Liskov Substitution: if the derived\ntable is used in the context of code that thinks it's looking at the\nbase table, does anything break?\n\nChanging the default value of a column should not break anything, \nbecause the different default value could as well have been entered \nin the column manually.\n\nNathan Myers\[email protected]\n", "msg_date": "Thu, 29 Mar 2001 12:46:47 -0800", "msg_from": "[email protected] (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: Re: Changing the default value of an inherited column" }, { "msg_contents": "At 11:06 PM 3/28/01 -0500, Tom Lane wrote:\n>Mark Knox <[email protected]> writes:\n>> I don't think this solution would be valid on many other platforms.\n>\n>Au contraire --- the ARM is the first platform I've heard of that does\n>not think sizeof(ItemPointerData) is 6. Else we'd have seen this\n>regress test fail before.\n\nI meant I don't think *my* solution (ie packing the struct) would be valid anywhere else. 
It seems to be an arm-specific problem so maybe it needs an arm-specific patch? I've had to do this type of thing many times to get packages working properly in arm linux. It's a quirky platform.\n\n>> Well, this patch seems to produce attlens of 6 as desired, but it\n>> causes many (13) of the regression tests to fail. Do you want to see\n>> the regression.diffs?\n>\n>Please.\n\nSee attached.", "msg_date": "Thu, 29 Mar 2001 22:33:42 -0500", "msg_from": "Mark Knox <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms " }, { "msg_contents": "Mark Knox <[email protected]> writes:\n> Well, this patch seems to produce attlens of 6 as desired, but it\n> causes many (13) of the regression tests to fail. Do you want to see\n> the regression.diffs?\n>> \n>> Please.\n\n> See attached.\n\nDoes look pretty broken, but I don't see how my idea would have led to\nall this other stuff failing. Anyway, I guess the path of least\nresistance is to install your ARM-specific packing patch. It's\nimportant to make sure that sizeof(ItemPointerData) is 6 if at all\npossible, since it will cost you four or so wasted bytes in every\ntuple header if it's not. Will take care of it for RC2.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Mar 2001 23:58:25 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms " }, { "msg_contents": "> Yep. We have many other MIPS (ONYX Crimson, , ONYX, Challenge, Indy w/ IRIX\n> 6.2, 6.5, etc.), Alpha and Sparc platforms if there are some others that need\n> testing (How about NetBSD on NeXT?).\n\nAll of these are interesting to help others decide whether their\nparticular machine is supported. For my narrow purposes of documenting\nwhich kinds of platforms are supported for the upcoming release, I'm\nfocused on processor/OS combinations. So the following already seem to\nbe covered:\n\nMIPS/IRIX (32 bit compilation only- try 64 bit compilation?)\nAlpha/Linux\nAlpha/Tru64\nSparc/Solaris\nSparc/Linux\nx86/NetBSD (need all other NetBSD architectures!)\nx86/OpenBSD (need all other archs!)\n\nIf you have other combinations (I've forgotten what NeXT is; we need 68k\nand 88k architectures tested; our NetBSD/68k guy no longer has that\nmachine) they would be particularly helpful.\n\nTIA\n\n - Thomas\n", "msg_date": "Fri, 30 Mar 2001 15:04:25 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms" }, { "msg_contents": "[email protected] (Nathan Myers) writes:\n> The O-O principle involved here is Liskov Substitution: if the derived\n> table is used in the context of code that thinks it's looking at the\n> base table, does anything break?\n\nGood point. That answers my concern about how to handle typmod: an\napplication *could* be broken by a change in typmod (eg, suppose it's\nallocated a buffer just big enough for a char(N) attribute, using the N\nof the parent table). Therefore we must disallow changes in typmod in\nchild tables.\n\nFurther study of creatinh.c shows that we have inconsistent behavior at\nthe moment, as it will allow columns of the same name to be inherited\nfrom multiple parents and (silently) combined --- how is that really\ndifferent from combining with an explicit specification?\n\n\nI propose the following behavior:\n\n1. A table can have only one column of a given name. 
If the same\ncolumn name occurs in multiple parent tables and/or in the explicitly\nspecified column list, these column specifications are combined to\nproduce a single column specification. A NOTICE will be emitted to\nwarn the user that this has happened. The ordinal position of the\nresulting column is determined by its first appearance.\n\n2. An error will be reported if columns to be combined do not all have\nthe same datatype and typmod value.\n\n3. The new column will have a default value if any of the combined\ncolumn specifications have one. The last-specified default (the one\nin the explicitly given column list, or the rightmost parent table\nthat gives a default) will be used.\n\n4. All relevant constraints from all the column specifications will\nbe applied. In particular, if any of the specifications includes NOT\nNULL, the resulting column will be NOT NULL. (But the current\nimplementation does not support inheritance of UNIQUE or PRIMARY KEY\nconstraints, and I do not have time to add that now.)\n\nThis behavior differs from prior versions as follows:\n\n1. We return to the pre-7.0 behavior of allowing an explicit\nspecification of a column name that is also inherited (7.0 rejects this,\nthereby preventing the default from being changed in the child).\nHowever, we will now issue a warning NOTICE, to answer the concern that\nprompted this change of behavior.\n\n2. We will now enforce uniformity of typmod as well as type OID when\ncombining columns.\n\n3. In both 7.0 and prior versions, if a column appeared in multiple\nparents but not in the explicit column list, the first parent's default\nvalue (if any) and NOT NULL state would be used, ignoring those of later\nparents. Failing to \"or\" together the NOT NULL flags is clearly wrong,\nand I believe it's inconsistent to use an earlier rather than later\nparent's default value when we want an explicitly-specified default to\nwin out over all of them. The explicit column specifications are\ntreated as coming after the last parent for other purposes, so we should\ndefine the default to use as the last one reading left-to-right.\n\nComments? I'm going to implement and commit this today unless I hear\nloud squawks ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Mar 2001 12:10:59 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Changing the default value of an inherited column " }, { "msg_contents": "On Fri, Mar 30, 2001 at 12:10:59PM -0500, Tom Lane wrote:\n> Comments? I'm going to implement and commit this today unless I hear\n> loud squawks ...\n\nI like it in general and I think it opens some interesting\npossibilities. I don't know much about how the inheritance system is\nimplemented, so I will put out this scenario in case it makes a\ndifference.\n\nWe recently decided to refactor our schema a bit, using inheritance.\nAll of our tables have a primary key called \"seq\" along with some\nother common fields such as entry time, etc. We realized that moving\nthem into a \"base\" table allowed us to create functions on \"base\"\nthat would work on every derived table. The main problem was that\nwe needed fields like \"seq\" to have distinct sequences, which was\nnot possible without the ability to override the default value in\neach derived table. 
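Roughly this sort of arrangement -- the child table and sequence names\n
here are made up purely for illustration:\n
\n
create table base (seq int not null, entered timestamp default now());\n
\n
create sequence foo_seq;\n
create table foo (\n
    seq int default nextval('foo_seq')\n
) inherits (base);\n
\n
with each child keeping the inherited NOT NULL on \"seq\" but drawing its\n
values from its own sequence. 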
It seems like this would be easily doable with\nthis change.\n\nAnother thing that seems kind of interesting would be to have:\n\nCREATE TABLE base (table_id CHAR(8) NOT NULL [, etc.]);\nCREATE TABLE foo (table_id CHAR(8) NOT NULL DEFAULT 'foo');\nCREATE TABLE bar (table_id CHAR(8) NOT NULL DEFAULT 'foo');\n\nThen a function on \"base\" could look at table_id and know which\ntable it's working on. A waste of space, but I can think of\nuses for it.\n-- \nChristopher Masto Senior Network Monkey NetMonger Communications\[email protected] [email protected] http://www.netmonger.net\n\nFree yourself, free your machine, free the daemon -- http://www.freebsd.org/\n", "msg_date": "Fri, 30 Mar 2001 13:07:39 -0500", "msg_from": "Christopher Masto <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Changing the default value of an inherited column" }, { "msg_contents": "Tom Lane writes:\n\n> 3. The new column will have a default value if any of the combined\n> column specifications have one. The last-specified default (the one\n> in the explicitly given column list, or the rightmost parent table\n> that gives a default) will be used.\n\nThis seems pretty random. It would be more reasonable if multiple\n(default) inheritance weren't allowed unless you explicitly specify a new\ndefault for the new column, but we don't have a syntax for this.\n\n> 4. All relevant constraints from all the column specifications will\n> be applied. In particular, if any of the specifications includes NOT\n> NULL, the resulting column will be NOT NULL. (But the current\n> implementation does not support inheritance of UNIQUE or PRIMARY KEY\n> constraints, and I do not have time to add that now.)\n\nThis is definitely a violation of that Liskov Substitution. If a context\nexpects a certain table and gets a more restricted table, it will\ncertainly notice.\n\n> Comments? I'm going to implement and commit this today unless I hear\n> loud squawks ...\n\nIf we're going to make changes to the inheritance logic, we could\ncertainly use some more thought than a few hours. If you want to revert\nthe patch that was installed in 7.0 then ok, but the rest is not\nappropriate right now, IMHO.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Fri, 30 Mar 2001 23:05:53 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Changing the default value of an inherited column" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n>> 4. All relevant constraints from all the column specifications will\n>> be applied. In particular, if any of the specifications includes NOT\n>> NULL, the resulting column will be NOT NULL. (But the current\n>> implementation does not support inheritance of UNIQUE or PRIMARY KEY\n>> constraints, and I do not have time to add that now.)\n\n> This is definitely a violation of that Liskov Substitution. If a context\n> expects a certain table and gets a more restricted table, it will\n> certainly notice.\n\nAu contraire --- I'd say that if the child table fails to adhere to the\nconstraints set for the parent table, *that* is a violation of\ninheritance. In particular, a table containing NULLs that is a child of\na table in which the same column is marked NOT NULL is likely to blow up\nan application that is not expecting to get any nulls back.\n\nIn any case, we have already been inheriting general constraints from\nparent tables. Relaxing that would be a change of behavior. 
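To spell out the failure mode (table names here are only for\n
illustration):\n
\n
create table p (f1 int not null);\n
create table c (f1 int) inherits (p);\n
insert into c values (null);\n
\n
If c's f1 does not pick up the NOT NULL, that insert succeeds and a later\n
\"select f1 from p*\" hands a NULL back to an application that was promised\n
none.\n
\n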
The\nfailure to inherit NOT NULL constraints some of the time (in some cases\nthey were inherited, in some cases not) cannot be construed as anything\nbut a bug.\n\n> If we're going to make changes to the inheritance logic, we could\n> certainly use some more thought than a few hours.\n\nThe primary issue here is to revert the 7.0 behavior to what it had been\nfor many years before that, and secondarily to make NOT NULL inheritance\nbehave consistently with itself and with other constraints. It doesn't\ntake hours of thought to justify either.\n\nI will agree that left-to-right vs. right-to-left precedence of\ninherited default values is pretty much a random choice, but it's\ndoubtful that anyone is really depending on that. The existing behavior\nwas not self-consistent anyway, since it was actually not \"the first\nspecified default\" but \"the default or lack of same attached to the\nfirst parent containing such a field\". For example, if we do not change\nthis behavior then\n\n\tcreate table p1 (f1 int);\n\tcreate table p2 (f1 int default 1) inherits(p1);\n\nresults in p2.f1 having a default, while\n\n\tcreate table p1 (f1 int);\n\tcreate table p2 (f1 int default 1, f2 int);\n\tcreate table p3 (f3 int) inherits(p1, p2);\n\nresults in p3.f1 not having a default. I don't think that can be argued\nto be anything but a bug either (consider what happens if p2 also says\nNOT NULL for f1).\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Mar 2001 16:15:36 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Changing the default value of an inherited column " }, { "msg_contents": "On Fri, Mar 30, 2001 at 12:10:59PM -0500, Tom Lane wrote:\n> [email protected] (Nathan Myers) writes:\n> > The O-O principle involved here is Liskov Substitution: if the derived\n> > table is used in the context of code that thinks it's looking at the\n> > base table, does anything break?\n> \n> I propose the following behavior:\n> \n> 1. A table can have only one column of a given name. If the same\n> column name occurs in multiple parent tables and/or in the explicitly\n> specified column list, these column specifications are combined to\n> produce a single column specification. A NOTICE will be emitted to\n> warn the user that this has happened. The ordinal position of the\n> resulting column is determined by its first appearance.\n\nTreatment of like-named members of multiple base types is not done\nconsistently in the various O-O languages. It's really a snakepit, and \nanything you do automatically will cause terrible problems for somebody. \nNonetheless, for any given circumstances some possible approaches are \nclearly better than others.\n\nIn C++, as in most O-O languages, the like-named members are kept \ndistinct. When referred to in the context of a base type, the member \nchosen is the \"right one\". Used in the context of the multiply-derived \ntype, the compiler reports an ambiguity, and you are obliged to qualify \nthe name explicitly to identify which among the like-named inherited \nmembers you meant. You can declare which one is \"really inherited\". \nSome other languages presume to choose automatically which one they \nthink you meant. The real danger is from members inherited from way\nback up the trees, which you might not know one are there.\n\nOf course PG is different from any O-O language. I don't know if PG \nhas an equivalent to the \"base-class context\". 
I suppose PG has a long \nhistory of merging like-named members, and that the issue is just of \nthe details of how the merge happens. \n\n> 4. All relevant constraints from all the column specifications will\n> be applied. In particular, if any of the specifications includes NOT\n> NULL, the resulting column will be NOT NULL. (But the current\n> implementation does not support inheritance of UNIQUE or PRIMARY KEY\n> constraints, and I do not have time to add that now.)\n\nSounds like a TODO item...\n\nDo all the triggers of the base tables get applied, to be run one after \nanother?\n\n--\nNathan Myers\[email protected]\n", "msg_date": "Fri, 30 Mar 2001 13:30:35 -0800", "msg_from": "[email protected] (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: Re: Changing the default value of an inherited column" }, { "msg_contents": "On Fri, Mar 30, 2001 at 11:05:53PM +0200, Peter Eisentraut wrote:\n> Tom Lane writes:\n> \n> > 3. The new column will have a default value if any of the combined\n> > column specifications have one. The last-specified default (the one\n> > in the explicitly given column list, or the rightmost parent table\n> > that gives a default) will be used.\n> \n> This seems pretty random. It would be more reasonable if multiple\n> (default) inheritance weren't allowed unless you explicitly specify a new\n> default for the new column, but we don't have a syntax for this.\n\nI agree, but I thought the original issue was that PG _does_ now have \nsyntax for it. Any conflict in default values should result in either \na failure, or \"no default\". Choosing a default randomly, or according \nto an arbitrary and complicated rule (same thing), is a source of bugs.\n\n> > 4. All relevant constraints from all the column specifications will\n> > be applied. In particular, if any of the specifications includes NOT\n> > NULL, the resulting column will be NOT NULL. (But the current\n> > implementation does not support inheritance of UNIQUE or PRIMARY KEY\n> > constraints, and I do not have time to add that now.)\n> \n> This is definitely a violation of that Liskov Substitution. If a context\n> expects a certain table and gets a more restricted table, it will\n> certainly notice.\n\nNot so. The rule is that the base-table code only has to understand\nthe derived table. The derived table need not be able to represent\nall values possible in the base table. \n\nNathan Myers\[email protected]\n", "msg_date": "Fri, 30 Mar 2001 13:40:25 -0800", "msg_from": "[email protected] (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: Re: Changing the default value of an inherited column" }, { "msg_contents": "At 12:10 30/03/01 -0500, Tom Lane wrote:\n>\n>Comments? I'm going to implement and commit this today unless I hear\n>loud squawks ...\n>\n\nNot a squawk as such, but does this have implications for pg_dump?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 
75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sat, 31 Mar 2001 15:06:33 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Changing the default value of an inherited\n column" }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> Not a squawk as such, but does this have implications for pg_dump?\n\nGood point. With recently-committed changes, try:\n\nregression=# create table p1 (f1 int default 42 not null, f2 int);\nCREATE\nregression=# create table c1 (f1 int, f2 int default 7) inherits (p1);\nNOTICE: CREATE TABLE: merging attribute \"f1\" with inherited definition\nNOTICE: CREATE TABLE: merging attribute \"f2\" with inherited definition\nCREATE\nregression=# create table c2 (f1 int default 43, f2 int not null) inherits (p1);\nNOTICE: CREATE TABLE: merging attribute \"f1\" with inherited definition\nNOTICE: CREATE TABLE: merging attribute \"f2\" with inherited definition\nCREATE\n\npg_dump dumps both c1 and c2 like this:\n\nCREATE TABLE \"c2\" (\n\n)\ninherits (\"p1\");\n\nwhich is OK as far as the field set goes, but it loses the additional\nDEFAULT and NOT NULL information for the child table. Any thoughts on\nthe best way to fix this?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 31 Mar 2001 01:36:22 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Changing the default value of an inherited column " }, { "msg_contents": "At 01:36 31/03/01 -0500, Tom Lane wrote:\n>\n>which is OK as far as the field set goes, but it loses the additional\n>DEFAULT and NOT NULL information for the child table. Any thoughts on\n>the best way to fix this?\n>\n\nCan pg_dump easily detect overridden attrs? If so, we just treat them as\ntable attrs and let the backend do it's stuff.\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sat, 31 Mar 2001 16:41:24 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Changing the default value of an inherited\n column" }, { "msg_contents": "[email protected] (Nathan Myers) writes:\n> Of course PG is different from any O-O language. I don't know if PG \n> has an equivalent to the \"base-class context\". I suppose PG has a long \n> history of merging like-named members, and that the issue is just of \n> the details of how the merge happens. \n\nYes; AFAICT that behavior goes back to PostQUEL. It was partially\ndisabled (without adequate discussion I guess) in 7.0, but it's been\naround for a long time.\n\n>> 4. All relevant constraints from all the column specifications will\n>> be applied. In particular, if any of the specifications includes NOT\n>> NULL, the resulting column will be NOT NULL. (But the current\n>> implementation does not support inheritance of UNIQUE or PRIMARY KEY\n>> constraints, and I do not have time to add that now.)\n\n> Sounds like a TODO item...\n\nThere's something about it in TODO already. 
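(For concreteness -- an invented example of the gap: the index behind a PRIMARY KEY or UNIQUE constraint is created on the parent alone, so once child tables enter the picture nothing enforces uniqueness across the hierarchy.)\n\n\tcreate table parent (id int primary key);\n\tcreate table child () inherits (parent);\n\tinsert into parent values (1);\n\tinsert into child values (1);\t-- accepted: the unique index exists only on \"parent\"\n\n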
There are some definitional\nissues though (should uniqueness be across ALL tables of the inheritance\nhierarchy, or per-table? If the former, how would we implement it?).\nI believe you can find past discussions about this in the archives.\n\n> Do all the triggers of the base tables get applied, to be run one after \n> another?\n\nTriggers aren't inherited either. Possibly they should be, but again\nI think some forethought is needed...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 31 Mar 2001 19:36:09 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Changing the default value of an inherited column " }, { "msg_contents": "[email protected] (Nathan Myers) writes:\n>> This seems pretty random. It would be more reasonable if multiple\n>> (default) inheritance weren't allowed unless you explicitly specify a new\n>> default for the new column, but we don't have a syntax for this.\n\n> I agree, but I thought the original issue was that PG _does_ now have \n> syntax for it. Any conflict in default values should result in either \n> a failure, or \"no default\". Choosing a default randomly, or according \n> to an arbitrary and complicated rule (same thing), is a source of\n> bugs.\n\nWell, we *do* have a syntax for specifying a new default (the same one\nthat worked pre-7.0 and does now again). I guess what you are proposing\nis the rule \"If conflicting default values are inherited from multiple\nparents that each define the same column name, then an error is reported\nunless the child table redeclares the column and specifies a new default\nto override the inherited ones\".\n\nThat is:\n\n\tcreate table p1 (f1 int default 1);\n\tcreate table p2 (f1 int default 2);\n\tcreate table c1 (f2 float) inherits(p1, p2);\n\nwould draw an error about conflicting defaults for c1.f1, but\n\n\tcreate table c1 (f1 int default 3, f2 float) inherits(p1, p2);\n\nwould be accepted (and 3 would become the default for c1.f1).\n\nThis would take a few more lines of code, but I'm willing to do it if\npeople think it's a safer behavior than picking one of the inherited\ndefault values.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 31 Mar 2001 19:44:30 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Changing the default value of an inherited column " }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> At 01:36 31/03/01 -0500, Tom Lane wrote:\n>> which is OK as far as the field set goes, but it loses the additional\n>> DEFAULT and NOT NULL information for the child table. Any thoughts on\n>> the best way to fix this?\n\n> Can pg_dump easily detect overridden attrs? If so, we just treat them as\n> table attrs and let the backend do it's stuff.\n\nWell, it's already detecting inherited attrs so it can suppress them\nfrom the explicit column list. 
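(Roughly the shape of output being asked for, reusing Tom's p1/c2 example from above -- a sketch, not what any pg_dump version actually prints: the child column is spelled out again only so that its own DEFAULT and NOT NULL survive a dump and reload, with everything else still coming through the inheritance merge.)\n\n\tCREATE TABLE \"c2\" (\n\t    \"f1\" integer DEFAULT 43,\n\t    \"f2\" integer NOT NULL\n\t)\n\tinherits (\"p1\");\n\n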
Perhaps we should just hack that code\nto not suppress inherited attrs when they have default values and/or\nNOT NULL that's not in the parent.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 31 Mar 2001 20:02:25 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Changing the default value of an inherited column " }, { "msg_contents": "At 20:02 31/03/01 -0500, Tom Lane wrote:\n>\n>Perhaps we should just hack that code\n>to not suppress inherited attrs when they have default values and/or\n>NOT NULL that's not in the parent.\n\nThat's what I meant; can we easily do the 'not in the parent' part, since\nwe may have to go up a long hierarchy to find the parent?\n\n \n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sun, 01 Apr 2001 11:21:59 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Changing the default value of an inherited\n column" }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> At 20:02 31/03/01 -0500, Tom Lane wrote:\n>> Perhaps we should just hack that code\n>> to not suppress inherited attrs when they have default values and/or\n>> NOT NULL that's not in the parent.\n\n> That's what I meant; can we easily do the 'not in the parent' part, since\n> we may have to go up a long hierarchy to find the parent?\n\npg_dump must already contain code to traverse the inheritance hierarchy\n(I haven't looked to see where). Couldn't we just extend it to believe\nthat it's found a match only if the default value and NOT NULL state\nmatch, as well as the column name?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 31 Mar 2001 20:25:44 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Changing the default value of an inherited column " }, { "msg_contents": "At 20:25 31/03/01 -0500, Tom Lane wrote:\n>\n>> That's what I meant; can we easily do the 'not in the parent' part, since\n>> we may have to go up a long hierarchy to find the parent?\n>\n>pg_dump must already contain code to traverse the inheritance hierarchy\n>(I haven't looked to see where). Couldn't we just extend it to believe\n>that it's found a match only if the default value and NOT NULL state\n>match, as well as the column name?\n>\n\nYou are correct; flagInhAttrs in common.c does the work, and it should be\neasy to change. At the moment it extracts all tables attrs then looks for\nan attr with the same name in any parent table. We can extend this to check\nNOT NULL and DEFAULT. Should I also check TYPEDEFN - can that be changed?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 
75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sun, 01 Apr 2001 11:37:54 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Changing the default value of an inherited\n column" }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> You are correct; flagInhAttrs in common.c does the work, and it should be\n> easy to change. At the moment it extracts all tables attrs then looks for\n> an attr with the same name in any parent table. We can extend this to check\n> NOT NULL and DEFAULT. Should I also check TYPEDEFN - can that be changed?\n\nWe presently disallow change of type in child tables, but you might as\nwell check that too, if it's just one more strcmp ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 31 Mar 2001 20:40:22 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Changing the default value of an inherited column " }, { "msg_contents": "At 20:40 31/03/01 -0500, Tom Lane wrote:\n>Philip Warner <[email protected]> writes:\n>> You are correct; flagInhAttrs in common.c does the work, and it should be\n>> easy to change. At the moment it extracts all tables attrs then looks for\n>> an attr with the same name in any parent table. We can extend this to check\n>> NOT NULL and DEFAULT. Should I also check TYPEDEFN - can that be changed?\n>\n>We presently disallow change of type in child tables, but you might as\n>well check that too, if it's just one more strcmp ...\n\nLooks like it; and just to confirm, based on previous messages, I assume I\nshould look at the parents from right to left?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sun, 01 Apr 2001 11:50:06 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Changing the default value of an inherited\n column" }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> Looks like it; and just to confirm, based on previous messages, I assume I\n> should look at the parents from right to left?\n\nAt the moment that would be the right thing to do.\n\nIf we change the code again based on the latest discussion, then pg_dump\nwould have to detect whether there are conflicting defaults, which would\nmean looking at all the parents not just the rightmost one. Ugh. That\nmight be a good reason not to change...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 31 Mar 2001 21:00:20 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Changing the default value of an inherited column " }, { "msg_contents": "At 21:00 31/03/01 -0500, Tom Lane wrote:\n>\n>If we change the code again based on the latest discussion, then pg_dump\n>would have to detect whether there are conflicting defaults, which would\n>mean looking at all the parents not just the rightmost one. Ugh. 
That\n>might be a good reason not to change...\n>\n\nShall I hold off on this for a day or two to let the other discussion\nsettle down? It seems whatever happens, we should check NOT NULL.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sun, 01 Apr 2001 12:09:36 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Changing the default value of an inherited\n column" }, { "msg_contents": "Tom Ivar Helbekkmo <[email protected]> writes:\n\n> > We need some NetBSD folks to speak up!\n> \n> I've once again got a VAX that should be able to run PostgreSQL on\n> NetBSD/vax, so I hope to be able to help revitalize that port soon...\n\nIt still works. RC1 configures, compiles and runs on my VAX 4000/500\nwith NetBSD-current -- but the regression tests give a lot of failures\nbecause the VAX doesn't have IEEE math, leading to different rounding\nand erroneous assumptions about the limits of floating point values.\nI'll be looking at this more closely.\n\nAlso, dynamic loading now works on NetBSD/vax, so my old #ifdef for\nthat in the backend/port/bsd.c file, which has since propagated into\nthe new *bsd.c files, can go away (actually, I'm suspicious of the\nMIPS part of those, too, but I didn't put that in, and I don't have \nany MIPS-based machines):\n\nIndex: src/backend/port/dynloader/freebsd.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/port/dynloader/freebsd.c,v\nretrieving revision 1.9\ndiff -c -r1.9 freebsd.c\n*** src/backend/port/dynloader/freebsd.c\t2001/02/10 02:31:26\t1.9\n--- src/backend/port/dynloader/freebsd.c\t2001/04/01 08:01:20\n***************\n*** 63,69 ****\n void *\n BSD44_derived_dlopen(const char *file, int num)\n {\n! #if defined(__mips__) || (defined(__NetBSD__) && defined(__vax__))\n \tsprintf(error_message, \"dlopen (%s) not supported\", file);\n \treturn NULL;\n #else\n--- 63,69 ----\n void *\n BSD44_derived_dlopen(const char *file, int num)\n {\n! #if defined(__mips__)\n \tsprintf(error_message, \"dlopen (%s) not supported\", file);\n \treturn NULL;\n #else\n***************\n*** 78,84 ****\n void *\n BSD44_derived_dlsym(void *handle, const char *name)\n {\n! #if defined(__mips__) || (defined(__NetBSD__) && defined(__vax__))\n \tsprintf(error_message, \"dlsym (%s) failed\", name);\n \treturn NULL;\n #else\n--- 78,84 ----\n void *\n BSD44_derived_dlsym(void *handle, const char *name)\n {\n! #if defined(__mips__)\n \tsprintf(error_message, \"dlsym (%s) failed\", name);\n \treturn NULL;\n #else\n***************\n*** 101,107 ****\n void\n BSD44_derived_dlclose(void *handle)\n {\n! #if defined(__mips__) || (defined(__NetBSD__) && defined(__vax__))\n #else\n \tdlclose(handle);\n #endif\n--- 101,107 ----\n void\n BSD44_derived_dlclose(void *handle)\n {\n! 
#if defined(__mips__)\n #else\n \tdlclose(handle);\n #endif\nIndex: src/backend/port/dynloader/netbsd.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/port/dynloader/netbsd.c,v\nretrieving revision 1.3\ndiff -c -r1.3 netbsd.c\n*** src/backend/port/dynloader/netbsd.c\t2001/02/10 02:31:26\t1.3\n--- src/backend/port/dynloader/netbsd.c\t2001/04/01 08:01:20\n***************\n*** 63,69 ****\n void *\n BSD44_derived_dlopen(const char *file, int num)\n {\n! #if defined(__mips__) || (defined(__NetBSD__) && defined(__vax__))\n \tsprintf(error_message, \"dlopen (%s) not supported\", file);\n \treturn NULL;\n #else\n--- 63,69 ----\n void *\n BSD44_derived_dlopen(const char *file, int num)\n {\n! #if defined(__mips__)\n \tsprintf(error_message, \"dlopen (%s) not supported\", file);\n \treturn NULL;\n #else\n***************\n*** 78,84 ****\n void *\n BSD44_derived_dlsym(void *handle, const char *name)\n {\n! #if defined(__mips__) || (defined(__NetBSD__) && defined(__vax__))\n \tsprintf(error_message, \"dlsym (%s) failed\", name);\n \treturn NULL;\n #elif defined(__ELF__)\n--- 78,84 ----\n void *\n BSD44_derived_dlsym(void *handle, const char *name)\n {\n! #if defined(__mips__)\n \tsprintf(error_message, \"dlsym (%s) failed\", name);\n \treturn NULL;\n #elif defined(__ELF__)\n***************\n*** 101,107 ****\n void\n BSD44_derived_dlclose(void *handle)\n {\n! #if defined(__mips__) || (defined(__NetBSD__) && defined(__vax__))\n #else\n \tdlclose(handle);\n #endif\n--- 101,107 ----\n void\n BSD44_derived_dlclose(void *handle)\n {\n! #if defined(__mips__)\n #else\n \tdlclose(handle);\n #endif\nIndex: src/backend/port/dynloader/openbsd.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/port/dynloader/openbsd.c,v\nretrieving revision 1.3\ndiff -c -r1.3 openbsd.c\n*** src/backend/port/dynloader/openbsd.c\t2001/02/10 02:31:26\t1.3\n--- src/backend/port/dynloader/openbsd.c\t2001/04/01 08:01:20\n***************\n*** 63,69 ****\n void *\n BSD44_derived_dlopen(const char *file, int num)\n {\n! #if defined(__mips__) || (defined(__NetBSD__) && defined(__vax__))\n \tsprintf(error_message, \"dlopen (%s) not supported\", file);\n \treturn NULL;\n #else\n--- 63,69 ----\n void *\n BSD44_derived_dlopen(const char *file, int num)\n {\n! #if defined(__mips__)\n \tsprintf(error_message, \"dlopen (%s) not supported\", file);\n \treturn NULL;\n #else\n***************\n*** 78,84 ****\n void *\n BSD44_derived_dlsym(void *handle, const char *name)\n {\n! #if defined(__mips__) || (defined(__NetBSD__) && defined(__vax__))\n \tsprintf(error_message, \"dlsym (%s) failed\", name);\n \treturn NULL;\n #elif defined(__ELF__)\n--- 78,84 ----\n void *\n BSD44_derived_dlsym(void *handle, const char *name)\n {\n! #if defined(__mips__)\n \tsprintf(error_message, \"dlsym (%s) failed\", name);\n \treturn NULL;\n #elif defined(__ELF__)\n***************\n*** 101,107 ****\n void\n BSD44_derived_dlclose(void *handle)\n {\n! #if defined(__mips__) || (defined(__NetBSD__) && defined(__vax__))\n #else\n \tdlclose(handle);\n #endif\n--- 101,107 ----\n void\n BSD44_derived_dlclose(void *handle)\n {\n! 
#if defined(__mips__)\n #else\n \tdlclose(handle);\n #endif\n\n\n-tih\n-- \nThe basic difference is this: hackers build things, crackers break them.\n", "msg_date": "01 Apr 2001 10:16:56 +0200", "msg_from": "Tom Ivar Helbekkmo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms" }, { "msg_contents": "Christopher Masto <[email protected]> writes:\n> Another thing that seems kind of interesting would be to have:\n> CREATE TABLE base (table_id CHAR(8) NOT NULL [, etc.]);\n> CREATE TABLE foo (table_id CHAR(8) NOT NULL DEFAULT 'foo');\n> CREATE TABLE bar (table_id CHAR(8) NOT NULL DEFAULT 'bar');\n> Then a function on \"base\" could look at table_id and know which\n> table it's working on. A waste of space, but I can think of\n> uses for it.\n\nThis particular need is superseded in 7.1 by the 'tableoid'\npseudo-column. However you can certainly imagine variants of this\nthat tableoid doesn't handle, for example columns where the subtable\ncreator can provide a useful-but-not-always-correct default value.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 01 Apr 2001 15:15:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Changing the default value of an inherited column " }, { "msg_contents": "Tom Ivar Helbekkmo <[email protected]> writes:\n> Also, dynamic loading now works on NetBSD/vax, so my old #ifdef for\n> that in the backend/port/bsd.c file, which has since propagated into\n> the new *bsd.c files, can go away.\n\nPatch applied, thanks.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 01 Apr 2001 23:09:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms " }, { "msg_contents": "> > I've once again got a VAX that should be able to run PostgreSQL on\n> > NetBSD/vax, so I hope to be able to help revitalize that port soon...\n> It still works. RC1 configures, compiles and runs on my VAX 4000/500\n> with NetBSD-current -- but the regression tests give a lot of failures\n> because the VAX doesn't have IEEE math, leading to different rounding\n> and erroneous assumptions about the limits of floating point values.\n> I'll be looking at this more closely.\n\nGreat! Will put it on the list :)\n\n - Thomas\n", "msg_date": "Mon, 02 Apr 2001 06:49:53 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms" }, { "msg_contents": "Tom Lane writes:\n\n> Well, we *do* have a syntax for specifying a new default (the same one\n> that worked pre-7.0 and does now again). I guess what you are proposing\n> is the rule \"If conflicting default values are inherited from multiple\n> parents that each define the same column name, then an error is reported\n> unless the child table redeclares the column and specifies a new default\n> to override the inherited ones\".\n\nThis was the idea. If it's to complicated to do now, let's at least keep\nit in mind.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Mon, 2 Apr 2001 18:34:26 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Changing the default value of an inherited column" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Tom Lane writes:\n>> Well, we *do* have a syntax for specifying a new default (the same one\n>> that worked pre-7.0 and does now again). 
I guess what you are proposing\n>> is the rule \"If conflicting default values are inherited from multiple\n>> parents that each define the same column name, then an error is reported\n>> unless the child table redeclares the column and specifies a new default\n>> to override the inherited ones\".\n\n> This was the idea. If it's to complicated to do now, let's at least keep\n> it in mind.\n\nYou and Nathan appear to like it, and no one else has objected.\nI shall make it so.\n\nPhilip: the rule that pg_dump needs to apply w.r.t. defaults for\ninherited fields is that if an inherited field has a default and\neither (a) no parent table supplies a default, or (b) any parent\ntable supplies a default different from the child's, then pg_dump\nhad better emit the child field explicitly.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 02 Apr 2001 13:27:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Changing the default value of an inherited column " }, { "msg_contents": "On Sat, Mar 31, 2001 at 07:44:30PM -0500, Tom Lane wrote:\n> [email protected] (Nathan Myers) writes:\n> >> This seems pretty random. It would be more reasonable if multiple\n> >> (default) inheritance weren't allowed unless you explicitly specify a new\n> >> default for the new column, but we don't have a syntax for this.\n> \n> > I agree, but I thought the original issue was that PG _does_ now have \n> > syntax for it. Any conflict in default values should result in either \n> > a failure, or \"no default\". Choosing a default randomly, or according \n> > to an arbitrary and complicated rule (same thing), is a source of\n> > bugs.\n> \n> Well, we *do* have a syntax for specifying a new default (the same one\n> that worked pre-7.0 and does now again). I guess what you are proposing\n> is the rule \"If conflicting default values are inherited from multiple\n> parents that each define the same column name, then an error is reported\n> unless the child table redeclares the column and specifies a new default\n> to override the inherited ones\".\n> \n> That is:\n> \n> \tcreate table p1 (f1 int default 1);\n> \tcreate table p2 (f1 int default 2);\n> \tcreate table c1 (f2 float) inherits(p1, p2); # XXX\n> \n> would draw an error about conflicting defaults for c1.f1, but\n> \n> \tcreate table c1 (f1 int default 3, f2 float) inherits(p1, p2);\n> \n> would be accepted (and 3 would become the default for c1.f1).\n> \n> This would take a few more lines of code, but I'm willing to do it if\n> people think it's a safer behavior than picking one of the inherited\n> default values.\n\nI do. \n\nAllowing the line marked XXX above, but asserting no default for \nc1.f1 in that case, would be equally safe. (A warning would be \npolite, anyhow.) User code that doesn't rely on the default wouldn't \nnotice. 
You only need to choose a default if somebody adding rows to \nc1 uses it.\n\nNathan Myers\[email protected]\n", "msg_date": "Mon, 2 Apr 2001 14:22:05 -0700", "msg_from": "[email protected] (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: Re: Changing the default value of an inherited column" }, { "msg_contents": "On Sun, Apr 01, 2001 at 03:15:56PM -0400, Tom Lane wrote:\n> Christopher Masto <[email protected]> writes:\n> > Another thing that seems kind of interesting would be to have:\n> > CREATE TABLE base (table_id CHAR(8) NOT NULL [, etc.]);\n> > CREATE TABLE foo (table_id CHAR(8) NOT NULL DEFAULT 'foo');\n> > CREATE TABLE bar (table_id CHAR(8) NOT NULL DEFAULT 'bar');\n> > Then a function on \"base\" could look at table_id and know which\n> > table it's working on. A waste of space, but I can think of\n> > uses for it.\n> \n> This particular need is superseded in 7.1 by the 'tableoid'\n> pseudo-column. However you can certainly imagine variants of this\n> that tableoid doesn't handle, for example columns where the subtable\n> creator can provide a useful-but-not-always-correct default value.\n\nA bit of O-O doctrine... when you find yourself tempted to do something \nlike the above, it usually means you're trying to do the wrong thing. \nYou may not have a choice, in some cases, but you should know you are \non the way to architecture meltdown. \"She'll blow, Cap'n!\"\n\nNathan Myers\[email protected]\n", "msg_date": "Mon, 2 Apr 2001 14:37:13 -0700", "msg_from": "[email protected] (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: Re: Changing the default value of an inherited column" }, { "msg_contents": "On Mon, Apr 02, 2001 at 01:27:06PM -0400, Tom Lane wrote:\n> Philip: the rule that pg_dump needs to apply w.r.t. defaults for\n> inherited fields is that if an inherited field has a default and\n> either (a) no parent table supplies a default, or (b) any parent\n> table supplies a default different from the child's, then pg_dump\n> had better emit the child field explicitly.\n\nThe rule above appears to work even if inherited-default conflicts \nare not taken as an error, but just result in a derived-table column \nwith no default.\n\nNathan Myers\[email protected]\n", "msg_date": "Mon, 2 Apr 2001 14:46:16 -0700", "msg_from": "[email protected] (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: Re: Changing the default value of an inherited column" }, { "msg_contents": "[email protected] (Nathan Myers) writes:\n> On Sat, Mar 31, 2001 at 07:44:30PM -0500, Tom Lane wrote:\n>> That is:\n>> \n>> create table p1 (f1 int default 1);\n>> create table p2 (f1 int default 2);\n>> create table c1 (f2 float) inherits(p1, p2); # XXX\n>> \n>> would draw an error about conflicting defaults for c1.f1, but\n>> \n>> create table c1 (f1 int default 3, f2 float) inherits(p1, p2);\n>> \n>> would be accepted (and 3 would become the default for c1.f1).\n>> \n>> This would take a few more lines of code, but I'm willing to do it if\n>> people think it's a safer behavior than picking one of the inherited\n>> default values.\n\n> Allowing the line marked XXX above, but asserting no default for \n> c1.f1 in that case, would be equally safe. 
(A warning would be \n> polite, anyhow.)\n\nThe trouble with that is that we don't have such a concept as \"no\ndefault\", if by that you mean \"INSERTs *must* specify a value\".\nWhat would really happen would be that the effective default would\nbe NULL, which I think would be fairly surprising behavior, since\nnone of the three tables involved asked for that.\n\nI have committed code that raises an error in cases such as XXX above.\nLet's try it like that for awhile and see if anyone complains ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 02 Apr 2001 18:26:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Changing the default value of an inherited column " }, { "msg_contents": "At 13:27 2/04/01 -0400, Tom Lane wrote:\n>\n>Philip: the rule that pg_dump needs to apply w.r.t. defaults for\n>inherited fields is that if an inherited field has a default and\n>either (a) no parent table supplies a default, or (b) any parent\n>table supplies a default different from the child's, then pg_dump\n>had better emit the child field explicitly.\n>\n\nWhat is happening with IS NULL constraints (and type names)? I presume the\nabove rule should be applied to each of these fields?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 03 Apr 2001 11:57:45 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Changing the default value of an inherited\n column" }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> At 13:27 2/04/01 -0400, Tom Lane wrote:\n>> Philip: the rule that pg_dump needs to apply w.r.t. defaults for\n>> inherited fields is that if an inherited field has a default and\n>> either (a) no parent table supplies a default, or (b) any parent\n>> table supplies a default different from the child's, then pg_dump\n>> had better emit the child field explicitly.\n\n> What is happening with IS NULL constraints (and type names)?\n\nNOT NULL on a child field would only force it to be dumped if none\nof the parents say NOT NULL. Type name really is not an issue since\nit will have to be the same in all the tables anyway; I wouldn't bother\nexpending any code there.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 02 Apr 2001 23:57:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Changing the default value of an inherited column " }, { "msg_contents": "At 23:57 2/04/01 -0400, Tom Lane wrote:\n>\n>NOT NULL on a child field would only force it to be dumped if none\n>of the parents say NOT NULL. Type name really is not an issue since\n>it will have to be the same in all the tables anyway; I wouldn't bother\n>expending any code there.\n>\n\nI've made tha changes and it all seems to work, bu there is a minor\ninconsistency:\n\n create table p3_def1(f1 int default 1, f2 int);\n create table c5(f1 int not null, f3 int) inherits(p3_def1);\n\nc5 gets dumped as:\n\n CREATE TABLE \"c5\" (\n \"f1\" integer DEFAULT 1 NOT NULL,\n \"f3\" integer\n )\n inherits (\"p3_def1\");\n\nsince the NOT NULL forces the field to dump, and it is dumps as though it\nwere a real field. 
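(A sketch of the alternative argued for just below -- not actual pg_dump output: only the child's own NOT NULL would be spelled out, leaving DEFAULT 1 to arrive from p3_def1 through inheritance.)\n\n\tCREATE TABLE \"c5\" (\n\t    \"f1\" integer NOT NULL,\n\t    \"f3\" integer\n\t)\n\tinherits (\"p3_def1\");\n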
\n\nSimilarly,\n\n create table p2_nn(f1 int not null, f2 int not null);\n create table c6(f1 int default 2, ,f3 int) inherits(p2_nn);\n\nresults in C6 being dumped as:\n\n CREATE TABLE \"c6\" (\n \"f1\" integer DEFAULT 2 NOT NULL,\n \"f3\" integer\n )\n inherits (\"p2_nn\");\n\nI think it needs to dump ONLY the overridden settigns, since a change to\nthe overriding behaviour of a child seems like a bad thing.\n\nWhat do you think?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 03 Apr 2001 16:01:31 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Changing the default value of an inherited\n column" }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> I think it needs to dump ONLY the overridden settigns, since a change to\n> the overriding behaviour of a child seems like a bad thing.\n\nI was about to say it's not worth the trouble, but I see you already\ndid it ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Apr 2001 10:11:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Changing the default value of an inherited column " }, { "msg_contents": "\nFor those that want to get in before the rush, I'm going to do an announce\nthis evenin to -general and -announce ...\n\nVince, can you make appropriate changes to the WebSite as far as linking\nto it is concerned, so that the mirrors pick up the new links also?\n\nThanks ..\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n\n", "msg_date": "Fri, 6 Apr 2001 11:55:21 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "RC3 ..." }, { "msg_contents": "It looks like you wrapped the intermediate (broken) state of\ninterfaces/odbc/convert.c that Hiroshi had in there for a few hours.\nDunno if this is important enough to re-wrap RC3 for; it might affect\na few ODBC users ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 06 Apr 2001 12:21:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RC3 ... " }, { "msg_contents": "Tom Lane wrote:\n> \n> It looks like you wrapped the intermediate (broken) state of\n> interfaces/odbc/convert.c that Hiroshi had in there for a few hours.\n> Dunno if this is important enough to re-wrap RC3 for; it might affect\n> a few ODBC users ...\n\nJust as I was getting ready to upload a quickie RC3 RPMset... :-)\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Fri, 06 Apr 2001 12:28:17 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RC3 ..." }, { "msg_contents": "On Fri, 6 Apr 2001, The Hermit Hacker wrote:\n\n>\n> For those that want to get in before the rush, I'm going to do an announce\n> this evenin to -general and -announce ...\n>\n> Vince, can you make appropriate changes to the WebSite as far as linking\n> to it is concerned, so that the mirrors pick up the new links also?\n\nDoes it look like this it gonna be the one? 
It's stable and all that?\nOnce it's on the website no matter what moniker it's got (up to and\nincluding \"DANGER THIS IS BROKEN SO DON'T USE IT\") it will be viewed\nas the golden apple so I want to avoid another full mailbox. I'll\nprobably wait till tomorrow evening (if you announce tonite) just to\nbe sure.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Fri, 6 Apr 2001 12:58:29 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RC3 ..." }, { "msg_contents": "On Fri, 6 Apr 2001, Vince Vielhaber wrote:\n\n> On Fri, 6 Apr 2001, The Hermit Hacker wrote:\n>\n> >\n> > For those that want to get in before the rush, I'm going to do an announce\n> > this evenin to -general and -announce ...\n> >\n> > Vince, can you make appropriate changes to the WebSite as far as linking\n> > to it is concerned, so that the mirrors pick up the new links also?\n>\n> Does it look like this it gonna be the one? It's stable and all that?\n> Once it's on the website no matter what moniker it's got (up to and\n> including \"DANGER THIS IS BROKEN SO DON'T USE IT\") it will be viewed\n> as the golden apple so I want to avoid another full mailbox. I'll\n> probably wait till tomorrow evening (if you announce tonite) just to\n> be sure.\n\nbaring any major blow ups, the only thing we are waiting on is docs ...\n\n\n", "msg_date": "Fri, 6 Apr 2001 14:19:23 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RC3 ..." }, { "msg_contents": "The Hermit Hacker writes:\n\n> baring any major blow ups, the only thing we are waiting on is docs ...\n\nThe docs are ready for shipment.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Fri, 6 Apr 2001 19:38:05 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RC3 ..." }, { "msg_contents": "On Fri, 6 Apr 2001, Peter Eisentraut wrote:\n\n> The Hermit Hacker writes:\n>\n> > baring any major blow ups, the only thing we are waiting on is docs ...\n>\n> The docs are ready for shipment.\n\nEven better ...\n\nOkay, let's let this sit as RC3 for the next week, as soon as someone\nmakes a change, I'll do up a new RC# that night ... if we can get a nice\nquiet period where nobody pops up with \"just one more thing\", let's try\nfor a release for next Friday ... *cross fingers* *grin*\n\n", "msg_date": "Fri, 6 Apr 2001 14:45:56 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RC3 ..." }, { "msg_contents": "On Fri, 6 Apr 2001, The Hermit Hacker wrote:\n\n> On Fri, 6 Apr 2001, Peter Eisentraut wrote:\n>\n> > The Hermit Hacker writes:\n> >\n> > > baring any major blow ups, the only thing we are waiting on is docs ...\n> >\n> > The docs are ready for shipment.\n>\n> Even better ...\n>\n> Okay, let's let this sit as RC3 for the next week, as soon as someone\n> makes a change, I'll do up a new RC# that night ... if we can get a nice\n> quiet period where nobody pops up with \"just one more thing\", let's try\n> for a release for next Friday ... 
*cross fingers* *grin*\n\nSo does RC3 have the docs and the odbc thing mentioned earlier rolled\nin?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Fri, 6 Apr 2001 14:09:54 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RC3 ..." }, { "msg_contents": "On Fri, 6 Apr 2001, Vince Vielhaber wrote:\n\n> On Fri, 6 Apr 2001, The Hermit Hacker wrote:\n>\n> > On Fri, 6 Apr 2001, Peter Eisentraut wrote:\n> >\n> > > The Hermit Hacker writes:\n> > >\n> > > > baring any major blow ups, the only thing we are waiting on is docs ...\n> > >\n> > > The docs are ready for shipment.\n> >\n> > Even better ...\n> >\n> > Okay, let's let this sit as RC3 for the next week, as soon as someone\n> > makes a change, I'll do up a new RC# that night ... if we can get a nice\n> > quiet period where nobody pops up with \"just one more thing\", let's try\n> > for a release for next Friday ... *cross fingers* *grin*\n>\n> So does RC3 have the docs and the odbc thing mentioned earlier rolled\n> in?\n\nYes, just re-bundled it ...\n\n\n", "msg_date": "Fri, 6 Apr 2001 15:41:57 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RC3 ..." }, { "msg_contents": "On Fri, 6 Apr 2001, The Hermit Hacker wrote:\n\n> On Fri, 6 Apr 2001, Vince Vielhaber wrote:\n>\n> > On Fri, 6 Apr 2001, The Hermit Hacker wrote:\n> >\n> > > On Fri, 6 Apr 2001, Peter Eisentraut wrote:\n> > >\n> > > > The Hermit Hacker writes:\n> > > >\n> > > > > baring any major blow ups, the only thing we are waiting on is docs ...\n> > > >\n> > > > The docs are ready for shipment.\n> > >\n> > > Even better ...\n> > >\n> > > Okay, let's let this sit as RC3 for the next week, as soon as someone\n> > > makes a change, I'll do up a new RC# that night ... if we can get a nice\n> > > quiet period where nobody pops up with \"just one more thing\", let's try\n> > > for a release for next Friday ... *cross fingers* *grin*\n> >\n> > So does RC3 have the docs and the odbc thing mentioned earlier rolled\n> > in?\n>\n> Yes, just re-bundled it ...\n\nCool!\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Fri, 6 Apr 2001 15:40:05 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RC3 ..." }, { "msg_contents": "> > The docs are ready for shipment.\n> Even better ...\n> Okay, let's let this sit as RC3 for the next week...\n\nI'll go ahead and start generating hardcopy, though I understand that it\nis no longer allowed into the shipping tarball :(\n\nLamar, do you plan to continue to package the hardcopy somewhere in the\nRPMs? 
If so, I'll have them ready soon.\n\n - Thomas\n", "msg_date": "Fri, 06 Apr 2001 20:45:45 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RC3 ..." }, { "msg_contents": "Thomas Lockhart wrote:\n> I'll go ahead and start generating hardcopy, though I understand that it\n> is no longer allowed into the shipping tarball :(\n \n> Lamar, do you plan to continue to package the hardcopy somewhere in the\n> RPMs? If so, I'll have them ready soon.\n\nI didn't for 7.0, IIRC. Or maybe I did for 7.0, but then didn't for\n7.0.2? I'll have to go back to the changelog.... \n\nI am open to suggestion -- should it be part of the main postgresql RPM\nwith the source and html docs, or should it be a separate package, such\nas postgresql-hardcopy-docs? Ideas? Comments?\n\nThe 'Internals' document is still in the main package, FWIW.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Fri, 06 Apr 2001 17:51:21 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: RC3 ..." }, { "msg_contents": "> On Fri, 6 Apr 2001, The Hermit Hacker wrote:\n> \n> >\n> > For those that want to get in before the rush, I'm going to do an announce\n> > this evenin to -general and -announce ...\n> >\n> > Vince, can you make appropriate changes to the WebSite as far as linking\n> > to it is concerned, so that the mirrors pick up the new links also?\n> \n> Does it look like this it gonna be the one? It's stable and all that?\n> Once it's on the website no matter what moniker it's got (up to and\n> including \"DANGER THIS IS BROKEN SO DON'T USE IT\") it will be viewed\n> as the golden apple so I want to avoid another full mailbox. I'll\n> probably wait till tomorrow evening (if you announce tonite) just to\n> be sure.\n\nSmart man, that Vince. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 6 Apr 2001 17:56:20 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RC3 ..." }, { "msg_contents": "Thomas Lockhart wrote:\n >> > The docs are ready for shipment.\n >> Even better ...\n >> Okay, let's let this sit as RC3 for the next week...\n >\n >I'll go ahead and start generating hardcopy, though I understand that it\n >is no longer allowed into the shipping tarball :(\n >\n >Lamar, do you plan to continue to package the hardcopy somewhere in the\n >RPMs? If so, I'll have them ready soon.\n \nThomas, will you be doing .pdf files? I have had requests to put that\nin the Debian documentation package.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"A gentle answer turns away wrath, but a harsh word \n stirs up anger.\" Proverbs 15:1 \n\n\n", "msg_date": "Fri, 06 Apr 2001 23:16:54 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: RC3 ... 
" }, { "msg_contents": "On Fri, 6 Apr 2001, Thomas Lockhart wrote:\n\n> > > The docs are ready for shipment.\n> > Even better ...\n> > Okay, let's let this sit as RC3 for the next week...\n>\n> I'll go ahead and start generating hardcopy, though I understand that it\n> is no longer allowed into the shipping tarball :(\n\nAt 2Meg, is there a reason why we include any of the docs as part of the\nstandard tar ball? It shouldn't be required to compile, so should be able\nto be left out of the main tar ball and downloaded seperately as required\n.. thereby shrinking the distribution to <6Meg from its current 8 ...\n\n", "msg_date": "Fri, 6 Apr 2001 20:19:00 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RC3 ..." }, { "msg_contents": "On Fri, 6 Apr 2001, The Hermit Hacker wrote:\n\n> On Fri, 6 Apr 2001, Thomas Lockhart wrote:\n>\n> > > > The docs are ready for shipment.\n> > > Even better ...\n> > > Okay, let's let this sit as RC3 for the next week...\n> >\n> > I'll go ahead and start generating hardcopy, though I understand that it\n> > is no longer allowed into the shipping tarball :(\n>\n> At 2Meg, is there a reason why we include any of the docs as part of the\n> standard tar ball? It shouldn't be required to compile, so should be able\n> to be left out of the main tar ball and downloaded seperately as required\n> .. thereby shrinking the distribution to <6Meg from its current 8 ...\n\nWe may already do so, but include the upgrade instructions, first time\ninstall instructions and basic startup. Then bundle the docs as they\nnormally come in the tarball as a separate set (alone). A number of\nother packages are done this way. If we want to include a be-all-end-all\nwe can do that too. Of course other doc formats/sets will also be\navailable.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Fri, 6 Apr 2001 19:41:25 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RC3 ..." }, { "msg_contents": "> On Fri, 6 Apr 2001, Thomas Lockhart wrote:\n> \n> > > > The docs are ready for shipment.\n> > > Even better ...\n> > > Okay, let's let this sit as RC3 for the next week...\n> >\n> > I'll go ahead and start generating hardcopy, though I understand that it\n> > is no longer allowed into the shipping tarball :(\n> \n> At 2Meg, is there a reason why we include any of the docs as part of the\n> standard tar ball? It shouldn't be required to compile, so should be able\n> to be left out of the main tar ball and downloaded seperately as required\n> .. thereby shrinking the distribution to <6Meg from its current 8 ...\n\nCan we drop TODO.detail from the tarball too? No need to include that,\nI think. The web site has nice links to it now. Uncompressed it is\n1.314 megs.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 6 Apr 2001 19:43:32 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: RC3 ..." }, { "msg_contents": "> > OTOH, if Marc was only thinking of removing the pre-built docs from the\n> > tarball, I don't object to that. I'm not sure why those weren't\n> > distributed as separate tarballs from the get-go. I just say that the\n> > doc sources are part of the source distribution...\n\n From the get-go, the docs were not, uh, useful docs. They have grown\nquite a bit from 1996 (with sources and formatting, probably by orders\nof magnitude).\n\n - Thomas\n", "msg_date": "Fri, 06 Apr 2001 23:49:59 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RC3 ..." }, { "msg_contents": "> Thomas, will you be doing .pdf files? I have had requests to put that\n> in the Debian documentation package.\n\nafaik, I don't have the means to generate pdf directly. Pointers would\nbe appreciated, if there are mechanisms available on Linux boxes. \n\nWe have had lots of offers of help for these conversions, so when the\nhardcopy is ready we can ask someone to convert from there. OK?\n\n - Thomas\n", "msg_date": "Fri, 06 Apr 2001 23:52:38 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: RC3 ..." }, { "msg_contents": "On Fri, 6 Apr 2001, Bruce Momjian wrote:\n\n> > On Fri, 6 Apr 2001, Thomas Lockhart wrote:\n> >\n> > > > > The docs are ready for shipment.\n> > > > Even better ...\n> > > > Okay, let's let this sit as RC3 for the next week...\n> > >\n> > > I'll go ahead and start generating hardcopy, though I understand that it\n> > > is no longer allowed into the shipping tarball :(\n> >\n> > At 2Meg, is there a reason why we include any of the docs as part of the\n> > standard tar ball? It shouldn't be required to compile, so should be able\n> > to be left out of the main tar ball and downloaded seperately as required\n> > .. thereby shrinking the distribution to <6Meg from its current 8 ...\n>\n> Can we drop TODO.detail from the tarball too? No need to include that,\n> I think. The web site has nice links to it now. Uncompressed it is\n> 1.314 megs.\n\nDefinitely, I think TODO.detail should be refer'd to by the TODO file, but\nnot included in the distribution itself ...\n\n\n", "msg_date": "Fri, 6 Apr 2001 21:01:30 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: RC3 ..." }, { "msg_contents": ">> At 2Meg, is there a reason why we include any of the docs as part of the\n>> standard tar ball? It shouldn't be required to compile, so should be able\n>> to be left out of the main tar ball and downloaded seperately as required\n>> .. thereby shrinking the distribution to <6Meg from its current 8 ...\n\n> Can we drop TODO.detail from the tarball too? No need to include that,\n> I think. The web site has nice links to it now. Uncompressed it is\n> 1.314 megs.\n\nThat strikes me as an awfully web-centric view of things. Not everyone\nhas an always-on high-speed Internet link.\n\nIf you want to make the docs and TODO.detail be a separate chunk of the\nsplit distribution, that's fine with me. But I don't agree with\nremoving them from the full tarball.\n\nOTOH, if Marc was only thinking of removing the pre-built docs from the\ntarball, I don't object to that. I'm not sure why those weren't\ndistributed as separate tarballs from the get-go. 
I just say that the\ndoc sources are part of the source distribution...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 06 Apr 2001 20:09:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: RC3 ... " }, { "msg_contents": "> > Can we drop TODO.detail from the tarball too? No need to include that,\n> > I think. The web site has nice links to it now. Uncompressed it is\n> > 1.314 megs.\n> \n> That strikes me as an awfully web-centric view of things. Not everyone\n> has an always-on high-speed Internet link.\n> \n> If you want to make the docs and TODO.detail be a separate chunk of the\n> split distribution, that's fine with me. But I don't agree with\n> removing them from the full tarball.\n\nBut isn't TODO.detail mostly of interest to people who use CVS?\nI see your point, though.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 6 Apr 2001 20:19:47 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: RC3 ..." }, { "msg_contents": "On Fri, 6 Apr 2001, Tom Lane wrote:\n\n> >> At 2Meg, is there a reason why we include any of the docs as part of the\n> >> standard tar ball? It shouldn't be required to compile, so should be able\n> >> to be left out of the main tar ball and downloaded seperately as required\n> >> .. thereby shrinking the distribution to <6Meg from its current 8 ...\n>\n> > Can we drop TODO.detail from the tarball too? No need to include that,\n> > I think. The web site has nice links to it now. Uncompressed it is\n> > 1.314 megs.\n>\n> That strikes me as an awfully web-centric view of things. Not everyone\n> has an always-on high-speed Internet link.\n>\n> If you want to make the docs and TODO.detail be a separate chunk of the\n> split distribution, that's fine with me. But I don't agree with\n> removing them from the full tarball.\n>\n> OTOH, if Marc was only thinking of removing the pre-built docs from the\n> tarball, I don't object to that. I'm not sure why those weren't\n> distributed as separate tarballs from the get-go. I just say that the\n> doc sources are part of the source distribution...\n\nBut, why? That sounds like a highly DSL-centric view of things *grin* If\nsomeone really wants docs, what hurts a second GET ftp call?\n\n\n", "msg_date": "Fri, 6 Apr 2001 21:40:50 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: RC3 ... " }, { "msg_contents": "> > Thomas, will you be doing .pdf files? I have had requests to put that\n> > in the Debian documentation package.\n> \n> afaik, I don't have the means to generate pdf directly. Pointers would\n> be appreciated, if there are mechanisms available on Linux boxes. \n> \n> We have had lots of offers of help for these conversions, so when the\n> hardcopy is ready we can ask someone to convert from there. OK?\n\nCan you use ps2pdf to generate PDF? It is a utility that comes with\nghostscript. I know versions >= 6.0 are fine.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 6 Apr 2001 21:23:35 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: RC3 ..." 
}, { "msg_contents": "> > That strikes me as an awfully web-centric view of things. Not everyone\n> > has an always-on high-speed Internet link.\n> >\n> > If you want to make the docs and TODO.detail be a separate chunk of the\n> > split distribution, that's fine with me. But I don't agree with\n> > removing them from the full tarball.\n> >\n> > OTOH, if Marc was only thinking of removing the pre-built docs from the\n> > tarball, I don't object to that. I'm not sure why those weren't\n> > distributed as separate tarballs from the get-go. I just say that the\n> > doc sources are part of the source distribution...\n> \n> But, why? That sounds like a highly DSL-centric view of things *grin* If\n> someone really wants docs, what hurts a second GET ftp call?\n\nA major issue is that we don't regenerate docs for 7.1.1 or later, so\nthe 7.1 docs carry for all the 7.1.X releases. That would seem to argue\nfor a separate tarball for docs so people don't redownload the docs\nagain for 7.1.1.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 6 Apr 2001 21:25:15 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: RC3 ..." }, { "msg_contents": "On Fri, 6 Apr 2001, Bruce Momjian wrote:\n\n> > > That strikes me as an awfully web-centric view of things. Not everyone\n> > > has an always-on high-speed Internet link.\n> > >\n> > > If you want to make the docs and TODO.detail be a separate chunk of the\n> > > split distribution, that's fine with me. But I don't agree with\n> > > removing them from the full tarball.\n> > >\n> > > OTOH, if Marc was only thinking of removing the pre-built docs from the\n> > > tarball, I don't object to that. I'm not sure why those weren't\n> > > distributed as separate tarballs from the get-go. I just say that the\n> > > doc sources are part of the source distribution...\n> >\n> > But, why? That sounds like a highly DSL-centric view of things *grin* If\n> > someone really wants docs, what hurts a second GET ftp call?\n>\n> A major issue is that we don't regenerate docs for 7.1.1 or later, so\n> the 7.1 docs carry for all the 7.1.X releases. That would seem to argue\n> for a separate tarball for docs so people don't redownload the docs\n> again for 7.1.1.\n\nOkay, unless someone can come up with a really good argument *for* why\ndocs has to be included as part of the main tar file, I'm going to change\nthe distributin generating script so that it generates a .src.tar.gz file\nseperate from the .doc.tar.gz file, which will make .src.tar.gz ~6Meg\ninstead of the 8meg we are currently forcing ppl to download ...\n\nPeter E, is there anything part of the configure/make procedure that\n*requires* pgsql/doc to be there else it will break? If so, can you\npossibly put it as a test \"if docs exists, deal with it, else ignore\"?\n\n\n", "msg_date": "Fri, 6 Apr 2001 23:08:24 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: RC3 ..." }, { "msg_contents": "On Fri, Apr 06, 2001 at 09:23:35PM -0400, Bruce Momjian allegedly wrote:\n> > > Thomas, will you be doing .pdf files? I have had requests to put that\n> > > in the Debian documentation package.\n> > \n> > afaik, I don't have the means to generate pdf directly. Pointers would\n> > be appreciated, if there are mechanisms available on Linux boxes. 
\n> > \n> > We have had lots of offers of help for these conversions, so when the\n> > hardcopy is ready we can ask someone to convert from there. OK?\n> \n> Can you use ps2pdf to generate PDF? It is a utility that comes with\n> ghostscript. I know versions >= 6.0 are fine.\n\nPDF files generated from postscript with Adobe Acrobat are usually of\nmuch higher quality than those generated by ghostscript. It seems that\nghostscript encodes rendered (bitmaps) documents, while Acrobat generates\nPDF files of a quality similar to the original postscript documents.\n\nYou would definately have much hihger quality PDF files if someone with\naccess to Acrobat would step forward. Too bad Acrobat is soo expensive :(\n\nRegards,\n\nMathijs\n-- \nIt's not that perl programmers are idiots, it's that the language\nrewards idiotic behavior in a way that no other language or tool has\never done.\n Erik Naggum\n", "msg_date": "Sat, 7 Apr 2001 04:26:46 +0200", "msg_from": "Mathijs Brands <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: RC3 ..." }, { "msg_contents": "> > Can you use ps2pdf to generate PDF? It is a utility that comes with\n> > ghostscript. I know versions >= 6.0 are fine.\n> \n> PDF files generated from postscript with Adobe Acrobat are usually of\n> much higher quality than those generated by ghostscript. It seems that\n> ghostscript encodes rendered (bitmaps) documents, while Acrobat generates\n> PDF files of a quality similar to the original postscript documents.\n> \n> You would definately have much hihger quality PDF files if someone with\n> access to Acrobat would step forward. Too bad Acrobat is soo expensive :(\n\nThis is only true of ghostscript version <6.0. Pre-6.0 could only\nencode non-bitmapped fonts if they were the standard Adobe 35. 6.0 and\nlater do full curve rendering for all fonts, at least they should. My\nbook PDF's that were used to print certainly were not bitmapped fonts. \nI have tons of PDF's on my web site, and none use bitmapped fonts. All\nused ghoscript 6.01.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 6 Apr 2001 22:46:07 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: RC3 ..." }, { "msg_contents": "I have no idea if what I say is true about the PG distribution by PG people, but\nI have noticed than in the rpms of other distros the postgresql-devel rpms do not\ninclude all the .h files necessary to build PG extensions. For instance the\nrtree.h and itup.h and gist.h headers are missing. Could you please ensure that\nall the headers are taken into account when you write your spec file.\n\nMay be also in the tar.gz or tar.bz2 distribution (bz2 is more effective than gz\nand available on all platforms) you add a developer file that list all the\nrequired headers, so that package builders know which files to include.\n\nIt seems that the rpm distributions will go as:\npostgresql\npostgresql-docs (user and manager docs)\npostgresql-devel (header files and developper docs)\n\nCheers.\[email protected]\n\n", "msg_date": "Sat, 07 Apr 2001 14:47:27 +1200", "msg_from": "Franck Martin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: RC3 ... and rpms..." }, { "msg_contents": "\n> Thanks! 
I'm not too worried about 1.4.2, but be sure to let us know what\n> the problem was; it may help out someone else...\n\nNetBSD-1.4.2/i386 passes all tests with 7.1RC3.\n\nMy previous test failure on this platform was due to the timezone\ninformation on the test system not being standard; once that was\ncorrected all tests pass.\n\nIt is still necessary to add -ltermcap after -ledit in\nsrc/Makefile.global to have functional history editing in psql.\n\nRegards,\n\nGiles\n\n\n", "msg_date": "Sat, 07 Apr 2001 14:43:39 +1000", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms " }, { "msg_contents": "\n> Okay, here are my results:\n> \n> Box 1: C180 (2.0 PA8000), HPUX 10.20\n> \n> Compile with gcc: all tests pass\n> Compile with cc: two lines of diffs in geometry (attached)\n> \n> Box 2: 715/75 (1.1 PA7100LC), HPUX 10.20\n> \n> Compile with gcc: all tests pass\n> Compile with cc: all tests pass\n\nI haven't had time to look at this further yet, except to build 7.1RC3\na couple of times with the HP ANSI C compiler today:\n\nPA-RISC 1.1 code (-Ae +O2 +DAportable): all tests pass\nPA-RISC 2.0 code (-Ae +O2 +DA2.0 +DS2.0): geometry failures\n\nI'm not sure how interesting these differences are anymore -- is there\nanyone familiar enough with floating point to determine if the results\nare acceptable (although currently unexpected :-) or not?\n\nRegards,\n\nGiles\n\n\n\n\n\n", "msg_date": "Sat, 07 Apr 2001 15:24:05 +1000", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms (HP-UX) " }, { "msg_contents": "Giles Lean <[email protected]> writes:\n> I'm not sure how interesting these differences are anymore -- is there\n> anyone familiar enough with floating point to determine if the results\n> are acceptable (although currently unexpected :-) or not?\n\nDifferences in the last couple of decimal places in the geometry test\nare definitely not a cause for worry. Although we've tried to create\nexact-match reference files for the most popular platforms, I think\nthat's largely an exercise in time-wasting. Eventually we will figure\nout a way to make the geometry output round off a few digits, and then\nthe cross-platform differences should mostly vanish.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 07 Apr 2001 01:32:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms (HP-UX) " }, { "msg_contents": "Hi\n\nI've been running RC3 regression tests, starting with a FreeBSD 4.2-STABLE\nand a Solaris 7 Sparc box. Both tests ran without any problems. I tried\nSolaris 8 Sparc next: it still suffered from the same unix socket problems.\nI had a look at the code and it seems to me that the use of unix sockets\nin RC3 is still enabled, even though it (appearently) doesn't work reliably\non Solaris.\n\nSince it was rather strange that RC3 did work correctly on Solaris 7 but\nnot 8, I also ran regression tests on another Solaris 7 and another 8 box,\nwith the same results. Since I still didn't trust it, I also ran RC1 again\non both Solaris 7 and 8; same result. And now things start getting weird.\nA little more than a week ago the RC1 regression tests ran with on average\n10-15 tests randomly failing. Now, however, I can run the regression several\ntimes without any test failing. But if I run the regression test enough\ntimes (4-6 times), I do have tests that fail (about 2-5). 
The configuration\nof these servers hasn't changed in the last months and I used the same RC1\nsource and binaries.\n\nCan somebody confirm whether pgsql Solaris does or does not work correctly\nout-of-the-box? Disabling unix sockets will probably fix all these problems,\nso I'm naturally wondering whether unix socket will or will not be disabled\nin pgsql 7.1...\n\nRegards,\n\nMathijs\n\nPs. Vince, could you remove test results 46 and 47? I don't trust them\nanymore.\n-- \n\"A book is a fragile creature. It suffers the wear of time,\n it fears rodents, the elements, clumsy hands.\" \n Umberto Eco \n", "msg_date": "Sat, 7 Apr 2001 07:54:16 +0200", "msg_from": "Mathijs Brands <[email protected]>", "msg_from_op": false, "msg_subject": "Call for platforms (Solaris)" }, { "msg_contents": "Franck Martin wrote:\n> I have no idea if what I say is true about the PG distribution by PG people, but\n> I have noticed than in the rpms of other distros the postgresql-devel rpms do not\n> include all the .h files necessary to build PG extensions. For instance the\n> rtree.h and itup.h and gist.h headers are missing. Could you please ensure that\n> all the headers are taken into account when you write your spec file.\n\nThe RPMs now (as of 7.1beta4) use the 'make install-all-headers'\nincantation to generate the development headers. If this doesn't get\nthe headers you need, then install-all-headers needsto be modified to\nreally install ALL headers.\n\nWith my latest RC3 RPM's (which I am preparing to upload to the\nPostgreSQL ftp server sometime this morning, once I get some other\nreorganization done and some contrib stuff built), I get the following\nresults:\n\n[root@utility i386]# rpm -ql postgresql-devel|grep gist\n/usr/include/pgsql/access/gist.h\n/usr/include/pgsql/access/gistscan.h\n/usr/include/pgsql/access/giststrat.h\n[root@utility i386]# rpm -ql postgresql-devel|grep rtree\n/usr/include/pgsql/access/rtree.h\n[root@utility i386]# rpm -ql postgresql-devel|grep itup\n/usr/include/pgsql/access/itup.h\n[root@utility i386]#\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sat, 07 Apr 2001 07:59:39 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: RC3 ... and rpms..." }, { "msg_contents": "Thomas Lockhart wrote:\n> \n> > > OTOH, if Marc was only thinking of removing the pre-built docs from the\n> > > tarball, I don't object to that. I'm not sure why those weren't\n> > > distributed as separate tarballs from the get-go. I just say that the\n> > > doc sources are part of the source distribution...\n> \n> >From the get-go, the docs were not, uh, useful docs. They have grown\n> quite a bit from 1996 (with sources and formatting, probably by orders\n> of magnitude).\n\nToday's docs make even the docs of version 6.1.1 look pretty puny.\n\nI'm looking at the package reorganization for the RPM's this morning --\nwe'll see what I find in a few hours.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sat, 07 Apr 2001 08:04:30 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: RC3 ..." 
}, { "msg_contents": "The Hermit Hacker wrote:\n> Okay, unless someone can come up with a really good argument *for* why\n> docs has to be included as part of the main tar file, I'm going to change\n> the distributin generating script so that it generates a .src.tar.gz file\n> seperate from the .doc.tar.gz file, which will make .src.tar.gz ~6Meg\n> instead of the 8meg we are currently forcing ppl to download ...\n\n> Peter E, is there anything part of the configure/make procedure that\n> *requires* pgsql/doc to be there else it will break? If so, can you\n> possibly put it as a test \"if docs exists, deal with it, else ignore\"?\n\nWe're going to do this at this point in the release cycle? IOW, is\nthere going to be an RC4 with this new packaging, or is the first-off\ntarball with new packaging going to be the *final* 7.1 release *raised\neyebrow*?\n\nI am certainly NOT opposed to doing this -- just questioning the timing.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sat, 07 Apr 2001 08:09:55 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: RC3 ..." }, { "msg_contents": "On Sat, 7 Apr 2001, Lamar Owen wrote:\n\n> The Hermit Hacker wrote:\n> > Okay, unless someone can come up with a really good argument *for* why\n> > docs has to be included as part of the main tar file, I'm going to change\n> > the distributin generating script so that it generates a .src.tar.gz file\n> > seperate from the .doc.tar.gz file, which will make .src.tar.gz ~6Meg\n> > instead of the 8meg we are currently forcing ppl to download ...\n>\n> > Peter E, is there anything part of the configure/make procedure that\n> > *requires* pgsql/doc to be there else it will break? If so, can you\n> > possibly put it as a test \"if docs exists, deal with it, else ignore\"?\n>\n> We're going to do this at this point in the release cycle? IOW, is\n> there going to be an RC4 with this new packaging, or is the first-off\n> tarball with new packaging going to be the *final* 7.1 release *raised\n> eyebrow*?\n\nthere will be an RC4, I'm just waiting to hear back from Peter E as to\nwhether there is anything in the build process we even risk breaking ...\nwe've been doing the whole split thing for the past release or two as it\nis (the FreeBSD ports collection using the individual packages instead of\nthe great big one) so from a packaging perspective, its well tested ...\n\n\n", "msg_date": "Sat, 7 Apr 2001 10:39:46 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: RC3 ..." }, { "msg_contents": "Franck Martin wrote:\n> \n> I have no idea if what I say is true about the PG distribution by PG people, but\n> I have noticed than in the rpms of other distros the postgresql-devel rpms do not\n> include all the .h files necessary to build PG extensions. For instance the\n> rtree.h and itup.h and gist.h headers are missing. Could you please ensure that\n> all the headers are taken into account when you write your spec file.\n> \n> May be also in the tar.gz or tar.bz2 distribution (bz2 is more effective than gz\n> and available on all platforms) you add a developer file that list all the\n> required headers, so that package builders know which files to include.\n\nIn my experience so far, it is also noticably slower than gzip. It does\nwork, and it is available. I have not yet been convinced that the space\nsavings is worth the time lost. 
But ISTM this is a minor point.\n\n> It seems that the rpm distributions will go as:\n> postgresql\n> postgresql-docs (user and manager docs)\n> postgresql-devel (header files and developper docs)\n\nActually, since you can suppress installation of the docs with --nodocs,\nI would very much prefer to keep the html and text docs in the main RPM.\nOtherwise I have two directories in /usr/doc for one software suite.\n\nThe 'hard copy' docs can go whereever they want as far as I'm concerned,\nsince I typically have little use for paper these days.\n\nOf course, these are only my preferences, but it seems unlikely that the\nassertions above are universally accepted either.\n\n-- \nKarl DeBisschop\n", "msg_date": "Sat, 07 Apr 2001 09:49:03 -0400", "msg_from": "Karl DeBisschop <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: RC3 ... and rpms..." }, { "msg_contents": "The Hermit Hacker wrote:\n> there will be an RC4, I'm just waiting to hear back from Peter E as to\n\nGood.\n\n> whether there is anything in the build process we even risk breaking ...\n> we've been doing the whole split thing for the past release or two as it\n> is (the FreeBSD ports collection using the individual packages instead of\n> the great big one) so from a packaging perspective, its well tested ...\n\nJust not well-tested for the RPM build environment :-).\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sat, 07 Apr 2001 09:56:04 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: RC3 ..." }, { "msg_contents": "Karl DeBisschop wrote:\n> In my experience so far, it is also noticably slower than gzip. It does\n> work, and it is available. I have not yet been convinced that the space\n> savings is worth the time lost. But ISTM this is a minor point.\n\nThe official tarball is gzipped -- the RPM will use that until bzipped\ntarballs are official.\n \n> Actually, since you can suppress installation of the docs with --nodocs,\n> I would very much prefer to keep the html and text docs in the main RPM.\n> Otherwise I have two directories in /usr/doc for one software suite.\n\nThe html docs at the very least will remain in the main package.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sat, 07 Apr 2001 10:03:39 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: RC3 ... and rpms..." }, { "msg_contents": "On Sat, 7 Apr 2001, Lamar Owen wrote:\n\n> The Hermit Hacker wrote:\n> > there will be an RC4, I'm just waiting to hear back from Peter E as to\n>\n> Good.\n>\n> > whether there is anything in the build process we even risk breaking ...\n> > we've been doing the whole split thing for the past release or two as it\n> > is (the FreeBSD ports collection using the individual packages instead of\n> > the great big one) so from a packaging perspective, its well tested ...\n>\n> Just not well-tested for the RPM build environment :-).\n\nYa, but you could concievably test that now, without us doign an RC4 ..\nthe files are all there :)\n\n\n", "msg_date": "Sat, 7 Apr 2001 11:11:17 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: RC3 ..." 
}, { "msg_contents": "The Hermit Hacker wrote:\n> On Sat, 7 Apr 2001, Lamar Owen wrote:\n> > Just not well-tested for the RPM build environment :-).\n \n> Ya, but you could concievably test that now, without us doign an RC4 ..\n> the files are all there :)\n\nSo the structure isn't going to change -- just there's not going to be a\n'whole thing' tarball anymore? I am now confyoozzled.... :-) If\nthere's still going to be a 'whole thing' tarball, I don't need to\nchange much.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sat, 07 Apr 2001 10:44:53 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: RC3 ..." }, { "msg_contents": "\"Oliver Elphick\" wrote:\n >Debian packages of 7.1RC3 have been uploaded to the Debian experimental\n >distribution and are also available at http://www.debian.org/~elphick/postgr\n >esql\n >\n >These packages are built for sid (Debian unstable); I am currently\n >trying to build a set for potato (stable).\n \n\nPackages for potato are now available from the same URL\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"I sought the Lord, and he answered me; he delivered me\n from all my fears.\" Psalm 34:4 \n\n\n", "msg_date": "Sat, 07 Apr 2001 17:18:18 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Debian packages of 7.1RC3 " }, { "msg_contents": "Thomas Lockhart writes:\n\n> > > The docs are ready for shipment.\n> > Even better ...\n> > Okay, let's let this sit as RC3 for the next week...\n>\n> I'll go ahead and start generating hardcopy, though I understand that it\n> is no longer allowed into the shipping tarball :(\n\nI'm not speaking about \"allowed\", I'm merely talking about the state of\naffairs since 7.0. If people think that the postscript format should be\nin the main tarball, then why not, but IIRC this question was raised last\ntime around and the decision went the other way.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Sat, 7 Apr 2001 19:18:25 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RC3 ..." }, { "msg_contents": "The Hermit Hacker writes:\n\n> At 2Meg, is there a reason why we include any of the docs as part of the\n> standard tar ball? It shouldn't be required to compile, so should be able\n> to be left out of the main tar ball and downloaded seperately as required\n> .. thereby shrinking the distribution to <6Meg from its current 8 ...\n\nFor that purpose you introduced the split distribution. If there is any\ngood reason for it, it's this. Currently, the .docs sub-tarball contains\nthe entire doc/ subtree, the consequence of which is that this tarball is\nrequired for a functioning installation. If we were to change this split\nso that doc/src/ is a separate sub-tarball, then that one could be purely\noptional and you could tell people that they don't need it unless they\nwant to write documentation.\n\nHowever, removing any part of the documentation, built or source, from the\nfull tarball seems like a really bad idea. It breaks the fundamental\nprinciple behind a \"full tarball\". The resulting confusion would be\nenormous. 
Especially now that we seems to start getting some outside\ndocumentation contributors.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Sat, 7 Apr 2001 19:24:10 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RC3 ..." }, { "msg_contents": "Bruce Momjian writes:\n\n> Can we drop TODO.detail from the tarball too? No need to include that,\n> I think. The web site has nice links to it now. Uncompressed it is\n> 1.314 megs.\n\nYou see where this discussion goes? Do we want to go through each file\nand argue whether it needs to be distributed? If you're in that kind of\nmood then you should use a binary package.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Sat, 7 Apr 2001 19:25:22 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: RC3 ..." }, { "msg_contents": "Tom Lane writes:\n\n> OTOH, if Marc was only thinking of removing the pre-built docs from the\n> tarball, I don't object to that. I'm not sure why those weren't\n> distributed as separate tarballs from the get-go. I just say that the\n> doc sources are part of the source distribution...\n\nWhy would you want to remove the pre-built docs from the tarball and ship\nthe sources?\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Sat, 7 Apr 2001 19:26:37 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: RC3 ... " }, { "msg_contents": "The Hermit Hacker writes:\n\n> Okay, unless someone can come up with a really good argument *for* why\n> docs has to be included as part of the main tar file,\n\nBecause people want to read the documentation.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Sat, 7 Apr 2001 19:29:24 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: RC3 ..." }, { "msg_contents": "Bruce Momjian writes:\n\n> A major issue is that we don't regenerate docs for 7.1.1 or later, so\n\nSure we do.\n\n> the 7.1 docs carry for all the 7.1.X releases. That would seem to argue\n> for a separate tarball for docs so people don't redownload the docs\n> again for 7.1.1.\n>\n>\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Sat, 7 Apr 2001 19:30:37 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: RC3 ..." }, { "msg_contents": "Lamar Owen writes:\n\n> We're going to do this at this point in the release cycle? IOW, is\n> there going to be an RC4 with this new packaging, or is the first-off\n> tarball with new packaging going to be the *final* 7.1 release *raised\n> eyebrow*?\n>\n> I am certainly NOT opposed to doing this -- just questioning the timing.\n\nI'm also questioning the timing and I am undoubtedly opposed to this.\nWe're talking about butchering up the released distribution less than a\nweek before publication. We had a year to discuss this and various\nquestions about how to handle documentation building, distributions, and\ninstallation were raised during that time.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Sat, 7 Apr 2001 19:33:38 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: RC3 ..." 
}, { "msg_contents": "On Sat, 7 Apr 2001, Peter Eisentraut wrote:\n\n> The Hermit Hacker writes:\n>\n> > Okay, unless someone can come up with a really good argument *for* why\n> > docs has to be included as part of the main tar file,\n>\n> Because people want to read the documentation.\n\nget postgresql.src.tar.gz\nget postgresql.docs.tar.gz\n\ninstead of just\n\nget postgresql.tar.gz\n\nfor those that want to download the docs, same amount of time ... for\nthose that don't want it, it sames them 2meg of download time ...\n\nI'm curious as to how many ppl would actually download those docs ... I\nknow that I'd never do so, as I'm never on the same machine that the\nserver is running from, so just hit the web site ...\n\nso, for those that do, we are giving them one extra step, and for those\nthat don't, saving them time and bandwidth ...\n\n", "msg_date": "Sat, 7 Apr 2001 14:34:28 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: RC3 ..." }, { "msg_contents": "Giles Lean <[email protected]> writes:\n\n> It is still necessary to add -ltermcap after -ledit in\n> src/Makefile.global to have functional history editing in psql.\n\nThis is a weakness in the configure script: it goes through a loop\nwhere it tries to link a program that calls readline() with, in order,\n\"-lreadline\", \"-lreadline -ltermcap\", \"-lreadline -lncurses\",\n\"-lreadline -lcurses\", \"-ledit\", \"-ledit -ltermcap\", \"-ledit\n-lncurses\" and \"-ledit -lcurses\". The first link that succeeds wil\ndetermine which libraries are used. However, on some platforms with\ndynamic libraries, the link will succeed as soon as readline() is\npresent -- but the shared library that contains it doesn't contain a\ncomplete specification of all other libraries it needs at run-time\n(the executable is expected to hold this information), and the program\nfails at run-time even though it linked without any error message.\n\nI don't know how the situation could best be improved, though...\n\n-tih\n-- \nThe basic difference is this: hackers build things, crackers break them.\n", "msg_date": "07 Apr 2001 20:09:57 +0200", "msg_from": "Tom Ivar Helbekkmo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "The Hermit Hacker writes:\n\n> On Sat, 7 Apr 2001, Peter Eisentraut wrote:\n>\n> > The Hermit Hacker writes:\n> >\n> > > Okay, unless someone can come up with a really good argument *for* why\n> > > docs has to be included as part of the main tar file,\n> >\n> > Because people want to read the documentation.\n>\n> get postgresql.src.tar.gz\n> get postgresql.docs.tar.gz\n>\n> instead of just\n>\n> get postgresql.tar.gz\n\nBut we already have a set of split distributions. If you want to split it\nin a different way, why not, but abolishing the full distribution is going\nto seriously alienate use from the conventions used in open source land.\n\n> I'm curious as to how many ppl would actually download those docs ...\n\nThis is the wrong question to ask. The real question to me is: If we do\nthis, how many people won't download the documentation, don't read it,\ndon't find it, spread the word that PostgreSQL is poorly documented, don't\nuse it correctly, spread the word that PostgreSQL isn't easy to use, and\ntake our time with avoidable mailing list traffic? Also, how many people\nwill consequently not even get a chance to contribute to the\ndocumentation? 
People won't do two downloads for marginal benefit.\n\nBut let me ask you this: If we split out the documentation, why stop\nthere? Why not leave out pgtclsh, how many people need that? Or what\nabout the JDBC driver sources? People can download the pre-compiled jar\nfile. These are in fact valid concerns, and they are adressed by the\nsplit tarballs that we offer. But there *must* be a full source tarball.\nI cannot count on two hands the occasion where I had a source package\nwhere people left out \"non-essential\" source files. They lost me as a\ncontributor. I'm not going to set up a CVS pull to fix a sentence.\n\n> I\n> know that I'd never do so, as I'm never on the same machine that the\n> server is running from, so just hit the web site ...\n\nI think you would be glad if more people read them locally and less people\nhit the web. ;-)\n\n> so, for those that do, we are giving them one extra step, and for those\n> that don't, saving them time and bandwidth ...\n\nPeople that want to save time and bandwidth use binary packages or the\nsplit tarballs that we already have.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Sat, 7 Apr 2001 20:10:05 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: RC3 ..." }, { "msg_contents": "The Hermit Hacker writes:\n\n> those that don't want it, it sames them 2meg of download time ...\n\nAnother way to save at least 1 MB of download time would be bzip2'ed\ntarballs.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Sat, 7 Apr 2001 20:19:32 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: RC3 ..." }, { "msg_contents": "> Bruce Momjian writes:\n> \n> > A major issue is that we don't regenerate docs for 7.1.1 or later, so\n> \n> Sure we do.\n> \n> > the 7.1 docs carry for all the 7.1.X releases. That would seem to argue\n> > for a separate tarball for docs so people don't redownload the docs\n> > again for 7.1.1.\n\nI didn't know that. I thought we genarated postscript only major\nreleases. Do we regenerate HTML for subreleases?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 7 Apr 2001 14:25:29 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: RC3 ..." }, { "msg_contents": "On Sat, 7 Apr 2001, Peter Eisentraut wrote:\n\n> Thomas Lockhart writes:\n> \n> > > > The docs are ready for shipment.\n> > > Even better ...\n> > > Okay, let's let this sit as RC3 for the next week...\n> >\n> > I'll go ahead and start generating hardcopy, though I understand that it\n> > is no longer allowed into the shipping tarball :(\n> \n> I'm not speaking about \"allowed\", I'm merely talking about the state of\n> affairs since 7.0. 
If people think that the postscript format should be\n> in the main tarball, then why not, but IIRC this question was raised last\n> time around and the decision went the other way.\n\nHaving had to d/l PG many times on many different machines, I'd be\ndelighted if it came w/o .ps docs, and w/o the doc sources (the number of\npeople who seem to be able to turn docbook into useful stuff seems to be\n<< than people who can successful compile PG!).\n\nIt sounds like the separate-tgz for docs and for Postscript makes perfect\nsense. Just make sure that it's *very* obvious where/how to get these, so\nthat the mailing lists are deluged w/ 'where are the docs'?\n\nJust my .02,\n-- \nJoel Burton <[email protected]>\nDirector of Information Systems, Support Center of Washington\n\n", "msg_date": "Sat, 7 Apr 2001 14:30:36 -0400 (EDT)", "msg_from": "Joel Burton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RC3 ..." }, { "msg_contents": "Tom Ivar Helbekkmo writes:\n\n> Giles Lean <[email protected]> writes:\n>\n> > It is still necessary to add -ltermcap after -ledit in\n> > src/Makefile.global to have functional history editing in psql.\n>\n> This is a weakness in the configure script: it goes through a loop\n> where it tries to link a program that calls readline() with, in order,\n> \"-lreadline\", \"-lreadline -ltermcap\", \"-lreadline -lncurses\",\n> \"-lreadline -lcurses\", \"-ledit\", \"-ledit -ltermcap\", \"-ledit\n> -lncurses\" and \"-ledit -lcurses\". The first link that succeeds wil\n> determine which libraries are used. However, on some platforms with\n> dynamic libraries, the link will succeed as soon as readline() is\n> present -- but the shared library that contains it doesn't contain a\n> complete specification of all other libraries it needs at run-time\n> (the executable is expected to hold this information), and the program\n> fails at run-time even though it linked without any error message.\n\nOn such a platform it would hardly be possible to detect anything with any\nreliably. A linker that links a program \"succesfully\" while the program\nreally needs more libraries to be runnable isn't very useful.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Sat, 7 Apr 2001 21:01:37 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n\n> On such a platform it would hardly be possible to detect anything with any\n> reliably. A linker that links a program \"succesfully\" while the program\n> really needs more libraries to be runnable isn't very useful.\n\nYou're right, of course -- it's a bug in the linkage loader on the\nplatform in question. 
NetBSD/vax has it:\n\n$ uname -a\nNetBSD varg.i.eunet.no 1.5T NetBSD 1.5T (VARG) #4: Thu Apr 5 23:38:04 CEST 2001\n [email protected]:/usr/src/sys/arch/vax/compile/VARG vax\n$ cat > foo.c\nint main (int argc, char **argv) { readline(); }\n$ cc -o foo foo.c\n/tmp/ccFTO4Mu.o: Undefined symbol `_readline'referenced from text segment\ncollect2: ld returned 1 exit status\n$ cc -o foo foo.c -ledit\n$ echo $?\n0\n$ ./foo\n/usr/libexec/ld.so: Undefined symbol \"_tputs\"in foo:/usr/lib/libedit.so.2.5\n$ echo $?\n1\n$ ldd foo\nfoo:\n -ledit.2 => /usr/lib/libedit.so.2.5 (0x181b000)\n -lc.12 => /usr/lib/libc.so.12.74 (0x182d000)\n$\n\n-tih\n-- \nThe basic difference is this: hackers build things, crackers break them.\n", "msg_date": "07 Apr 2001 21:41:04 +0200", "msg_from": "Tom Ivar Helbekkmo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Call for platforms" }, { "msg_contents": "One quick note -- since 'R' < 'b', the RC RPM's must be forced to\ninstall with --oldpackage, as RPM does a simple strcmp of version\nnumbers -- 7.1RC3 < 7.1beta1, for instance. Just force it with\n--oldpackage if you have a 7.1beta RPM already installed.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sat, 07 Apr 2001 23:30:07 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "RPM upgrade caveats going from a beta version to RC" }, { "msg_contents": "Karl DeBisschop wrote:\n> Actually, since you can suppress installation of the docs with --nodocs,\n> I would very much prefer to keep the html and text docs in the main RPM.\n> Otherwise I have two directories in /usr/doc for one software suite.\n\nI'm researching how to get a subpackage to place docs in the main\npackage %doc.\n\nHTML docs and man pages are with the main package; SGML source and any\nhardcopy docs will go into the docs subpackage. The contrib tree is\ngetting its own subpackage.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sat, 07 Apr 2001 23:43:00 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: RC3 ... and rpms..." }, { "msg_contents": "On Sat, 7 Apr 2001, Lamar Owen wrote:\n\n> One quick note -- since 'R' < 'b', the RC RPM's must be forced to\n> install with --oldpackage, as RPM does a simple strcmp of version\n> numbers -- 7.1RC3 < 7.1beta1, for instance. Just force it with\n> --oldpackage if you have a 7.1beta RPM already installed.\n\nHuh? I always thought that ASCII R was greater then b ... *confused* in\nthe future, would it help to have 7.2Beta? Or am I missing something? :)\n\n\n", "msg_date": "Sun, 8 Apr 2001 03:08:32 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RPM upgrade caveats going from a beta version to RC" }, { "msg_contents": "The Hermit Hacker wrote:\n >On Sat, 7 Apr 2001, Lamar Owen wrote:\n >\n >> One quick note -- since 'R' < 'b', the RC RPM's must be forced to\n >> install with --oldpackage, as RPM does a simple strcmp of version\n >> numbers -- 7.1RC3 < 7.1beta1, for instance. Just force it with\n >> --oldpackage if you have a 7.1beta RPM already installed.\n >\n >Huh? I always thought that ASCII R was greater then b ... *confused* in\n >the future, would it help to have 7.2Beta? Or am I missing something? 
:)\n\nR = 82\nb = 98\n\nso b comes after R, and `blank' comes before either!\n\nTherefore 7.1 < 7.1RC < 7.1beta !\n\nAs I suggested in another mail, let us switch to using even minor\nnumbers for releases and odd ones for development:\n\nThat means the final release of 7.1 will be called 7.2. Bugfix releases\nwill then be 7.2.x. Meanwhile new development versions will be 7.3.x\nwhich will finally be released as 7.4, and so on...\n\nFor current 7.1, the Debian releases are 7.1beta, 7.1cRC, 7.1final,\nwhich is both cumbersome and confusing to those who are looking for\nan exact match.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Do not be anxious about anything, but in everything, \n by prayer and supplication, with thanksgiving, present\n your requests to God. And the peace of God, which \n transcends all understanding, will guard your hearts \n and your minds in Christ Jesus.\" Philippians 4:6,7 \n\n\n", "msg_date": "Sun, 08 Apr 2001 08:43:17 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RPM upgrade caveats going from a beta version to RC " }, { "msg_contents": "The Hermit Hacker writes:\n\n> On Sat, 7 Apr 2001, Lamar Owen wrote:\n>\n> > One quick note -- since 'R' < 'b', the RC RPM's must be forced to\n> > install with --oldpackage, as RPM does a simple strcmp of version\n> > numbers -- 7.1RC3 < 7.1beta1, for instance. Just force it with\n> > --oldpackage if you have a 7.1beta RPM already installed.\n>\n> Huh? I always thought that ASCII R was greater then b ... *confused* in\n> the future, would it help to have 7.2Beta? Or am I missing something? :)\n\nHow about 7.2rc1, which is greater than 7.2beta1.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Sun, 8 Apr 2001 12:21:23 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RPM upgrade caveats going from a beta version to RC" }, { "msg_contents": "On Sun, 8 Apr 2001, Oliver Elphick wrote:\n\n> The Hermit Hacker wrote:\n> >On Sat, 7 Apr 2001, Lamar Owen wrote:\n> >\n> >> One quick note -- since 'R' < 'b', the RC RPM's must be forced to\n> >> install with --oldpackage, as RPM does a simple strcmp of version\n> >> numbers -- 7.1RC3 < 7.1beta1, for instance. Just force it with\n> >> --oldpackage if you have a 7.1beta RPM already installed.\n> >\n> >Huh? I always thought that ASCII R was greater then b ... *confused* in\n> >the future, would it help to have 7.2Beta? Or am I missing something? :)\n>\n> R = 82\n> b = 98\n>\n> so b comes after R, and `blank' comes before either!\n>\n> Therefore 7.1 < 7.1RC < 7.1beta !\n>\n> As I suggested in another mail, let us switch to using even minor\n> numbers for releases and odd ones for development:\n>\n> That means the final release of 7.1 will be called 7.2. Bugfix releases\n> will then be 7.2.x. Meanwhile new development versions will be 7.3.x\n> which will finally be released as 7.4, and so on...\n\nNot in this life time ... we are not going to move to a Linux-like\ndevelopment cycle ... 
*groan*\n\n", "msg_date": "Sun, 8 Apr 2001 09:27:13 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RPM upgrade caveats going from a beta version to RC " }, { "msg_contents": "The Hermit Hacker wrote:or development:\n >>\n >> That means the final release of 7.1 will be called 7.2. Bugfix releases\n >> will then be 7.2.x. Meanwhile new development versions will be 7.3.x\n >> which will finally be released as 7.4, and so on...\n >\n >Not in this life time ... we are not going to move to a Linux-like\n >development cycle ... *groan*\n >\n\nHarrumph!!\n\nWell, pick some scheme that gives a rational set of numbers for\ndistributions. The current one is only good for installation\nby hand!\n\n(Mind you, my other major package is progressing from -1.00 to 0,\nso that -0.76 is followed by -0.75. Not that I recommend you to \nfollow _that_ example.)\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Do not be anxious about anything, but in everything, \n by prayer and supplication, with thanksgiving, present\n your requests to God. And the peace of God, which \n transcends all understanding, will guard your hearts \n and your minds in Christ Jesus.\" Philippians 4:6,7 \n\n\n", "msg_date": "Sun, 08 Apr 2001 15:14:49 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RPM upgrade caveats going from a beta version to RC " }, { "msg_contents": "Bruce Momjian writes:\n\n> I didn't know that. I thought we genarated postscript only major\n> releases. Do we regenerate HTML for subreleases?\n\nThe HTML is generated every 12 hours, and whenever a distribution is\nwrapped up it picks up the latest bundle. This will probably have to be\nsorted out again when a branch is made so the paths are set correctly, but\nin principle it is trivial to arrange. Not that the documentation ever\nchanges for minor releases.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Sun, 8 Apr 2001 18:58:57 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: RC3 ..." }, { "msg_contents": "On Sun, 8 Apr 2001, Oliver Elphick wrote:\n\n> The Hermit Hacker wrote:or development:\n> >>\n> >> That means the final release of 7.1 will be called 7.2. Bugfix releases\n> >> will then be 7.2.x. Meanwhile new development versions will be 7.3.x\n> >> which will finally be released as 7.4, and so on...\n> >\n> >Not in this life time ... we are not going to move to a Linux-like\n> >development cycle ... *groan*\n> >\n>\n> Harrumph!!\n>\n> Well, pick some scheme that gives a rational set of numbers for\n> distributions. The current one is only good for installation by hand!\n\nWe do, we follow the scheme as used by ... the BSD camp :) Be thankful we\ndon't go all the way and use 7.2-RELEASE too :)\n\n\n", "msg_date": "Sun, 8 Apr 2001 14:41:05 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RPM upgrade caveats going from a beta version to RC " }, { "msg_contents": "The Hermit Hacker wrote:\n> We do, we follow the scheme as used by ... 
the BSD camp :) Be thankful we\n> don't go all the way and use 7.2-RELEASE too :)\n\nIf we had 7.1-CURRENT, 7.1-RELEASE, and 7.1-STABLE, the versioning\ncomparision would be just fine -- better than now. As it stands, an\nupgrade from 7.1beta6 to 7.1RC4 and from 7.1RC4 to 7.1 is in the eyes of\nat least two packaging systems a downgrade.\n\nHowever, 7.1beta6 to 7.1rc4 to 7.1.0 would be an ok progression, as 7.1\n< 7.1.0, I think (saying that without having tested it could be\ndangerous.... :-)).\n\nAlthough I must observe that if RPM used the system's locale in\ndetermining version collation, 7.1RC4 would be greater than 7.1beta6 --\nwhich collation breaks our indexing and our LIKE optimizations, and\nbreaks our regression tests. :-) But 7.1 would still be a downgrade\nbased on that.\n\nRed Hat uses a different system for their betas -- which I'm not\nnecessarily advocating, just presenting:\n\nThe public RedHat beta, IIRC, for what may or may not become Red Hat 7.1\ncarried a version of 7.0.91.\n\nBut then again the Linux kernel did that as well, going from 0.13 or so\nto 0.97 (and various pl numbers) before hitting 1.0.\n\nAnd just _why_ are you so adversarial to the Linux version numbering? \nAfter all, it's just another system..... (Rhetorical question -- I\nalready know the answer) :-)\n\nPersonally, I think the Linux versioning is overkill, and prefer the BSD\nway of labeling versions. But that is just my personal opinion.\n\nBut even at that, the Linux and BSD versioning is designed more for\ncarrying concurrent STABLE and CURRENT versions -- we don't really have\n_that_ much version overlap to deal with, do we? Debian does as much --\nbut it is again a matter of version concurrency -- we're not likely to\nrelease a 7.1.0 then a 7.0.4 that fixes bugs in the STABLE branch,\nwhereas at one point Linux 2.0.39, a 2.2.x, and 2.4.0 were being\nreleased concurrently. The same happens with FreeBSDand others -- but\nnot us.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sun, 08 Apr 2001 20:55:06 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RPM upgrade caveats going from a beta version to RC" }, { "msg_contents": "Oliver Elphick wrote:\n\n> R = 82\n> b = 98\n\nThis is a very small problem of having capital R and lowercase b that I\nbelieve can be taken into account in the development of 7.2.\n\n> As I suggested in another mail, let us switch to using even minor\n> numbers for releases and odd ones for development:\n\nIt's a Linux-ism that personally I don't like. You have to be familiar\nwith the project to understand that 8.3.3 is not better for general use\nthan 8.2.4 because 8.2 is \"stable\" and 8.3 is \"development\".\n\n-- \nAlessio F. Bragadini\t\[email protected]\nAPL Financial Services\t\thttp://village.albourne.com\nNicosia, Cyprus\t\t \tphone: +357-2-755750\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n", "msg_date": "Mon, 09 Apr 2001 09:56:57 +0300", "msg_from": "Alessio Bragadini <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RPM upgrade caveats going from a beta version to RC" }, { "msg_contents": "Lamar Owen writes:\n\n> One quick note -- since 'R' < 'b', the RC RPM's must be forced to\n> install with --oldpackage, as RPM does a simple strcmp of version\n> numbers -- 7.1RC3 < 7.1beta1, for instance. 
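[A minimal C sketch of the byte-wise ordering just quoted; it is offered only as an
illustration of why 'R' collates before 'b' in ASCII/the C locale, and it does not
reproduce rpm's actual version-comparison routine, which is more involved than a bare
strcmp.]

#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *rc   = "7.1RC3";
    const char *beta = "7.1beta1";

    /* In ASCII 'R' is 82 and 'b' is 98, so a byte-wise comparison
     * ranks the release candidate below the earlier beta. */
    printf("'R' = %d, 'b' = %d\n", 'R', 'b');

    if (strcmp(rc, beta) < 0)
        printf("%s sorts before %s\n", rc, beta);
    else
        printf("%s sorts before %s\n", beta, rc);

    return 0;
}

(Compiled with any C compiler this reports that 7.1RC3 sorts before 7.1beta1, which is
exactly the apparent "downgrade" the packagers are working around.)
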
Just force it with\n> --oldpackage if you have a 7.1beta RPM already installed.\n\nBtw., are you aware of the 'serial' tag to override the version guessing\nmechanism?\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Wed, 11 Apr 2001 19:43:44 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RPM upgrade caveats going from a beta version to RC" }, { "msg_contents": "Peter Eisentraut wrote:\n> \n> Lamar Owen writes:\n> \n> > One quick note -- since 'R' < 'b', the RC RPM's must be forced to\n> > install with --oldpackage, as RPM does a simple strcmp of version\n> > numbers -- 7.1RC3 < 7.1beta1, for instance. Just force it with\n> > --oldpackage if you have a 7.1beta RPM already installed.\n> \n> Btw., are you aware of the 'serial' tag to override the version guessing\n> mechanism?\n\nYes, I am, actually. But it seems a broken way of dealing with it. \nAlthough I do have another idea, thanks to Trond. Rather than package\n'7.1RC4-1' I could package '7.1-0.1RC4' -- giving a straight\nversioning. I could progress from '7.1-0.1beta1.1' through\n'7.1-0.1beta6.2' through '7.1-0.2RC1.1' to '7.1-1'.\n\nLast time I looked at the documentation for the serial tag, its use was\nstrongly discouraged. But that _has_ been awhile -- maybe it could be\nuseful. But I would prefer the whole version numbering thingtobe fixed,\nas the Debian packages have the same issue -- and I don't know if .deb\nhas an analog to Serial:.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Wed, 11 Apr 2001 16:30:02 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RPM upgrade caveats going from a beta version to RC" }, { "msg_contents": "Lamar Owen wrote:\n >Last time I looked at the documentation for the serial tag, its use was\n >strongly discouraged. But that _has_ been awhile -- maybe it could be\n >useful. But I would prefer the whole version numbering thingtobe fixed,\n >as the Debian packages have the same issue -- and I don't know if .deb\n >has an analog to Serial:.\n\nWe have epochs, that is, the package version is preceded by an integer\nand a colon, which overrides every other part of the version and release\nnumber. However, if I ever use an epoch, I will be stuck with epochs for ever;\nso I don't want to start.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Is any one of you in trouble? He should pray. Is\n anyone happy? Let him sing songs of praise. Is any one\n of you sick? He should call the elders of the church\n to pray over him...The prayer of a righteous man is\n powerful and effective.\" James 5:13,14,16 \n\n\n", "msg_date": "Wed, 11 Apr 2001 21:55:46 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RPM upgrade caveats going from a beta version to RC " }, { "msg_contents": "Oliver Elphick wrote:\n> Lamar Owen wrote:\n> >as the Debian packages have the same issue -- and I don't know if .deb\n> >has an analog to Serial:.\n \n> We have epochs, that is, the package version is preceded by an integer\n> and a colon, which overrides every other part of the version and release\n> number. 
However, if I ever use an epoch, I will be stuck with epochs for ever;\n> so I don't want to start.\n\nRPM also has the epoch mechanism -- and it sounds just like what you\nhave just described. Not something I want to start using, either. It's\nmore like a 'super-major' version number than the RPM serial mechanism,\nwhich works in a more broken fashion.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Wed, 11 Apr 2001 17:01:46 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RPM upgrade caveats going from a beta version to RC" }, { "msg_contents": "Lamar Owen writes:\n\n> Yes, I am, actually. But it seems a broken way of dealing with it.\n> Although I do have another idea, thanks to Trond. Rather than package\n> '7.1RC4-1' I could package '7.1-0.1RC4' -- giving a straight\n> versioning. I could progress from '7.1-0.1beta1.1' through\n> '7.1-0.1beta6.2' through '7.1-0.2RC1.1' to '7.1-1'.\n\nJust name them\n\n7.1betax\n7.1rcx\n7.1.0\n7.1.1\netc.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Thu, 12 Apr 2001 00:18:56 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RPM upgrade caveats going from a beta version to RC" }, { "msg_contents": "Peter Eisentraut wrote:\n> Lamar Owen writes:\n> > Yes, I am, actually. But it seems a broken way of dealing with it.\n> > Although I do have another idea, thanks to Trond. Rather than package\n> > '7.1RC4-1' I could package '7.1-0.1RC4' -- giving a straight\n> > versioning. I could progress from '7.1-0.1beta1.1' through\n> > '7.1-0.1beta6.2' through '7.1-0.2RC1.1' to '7.1-1'.\n\n> Just name them\n\n> 7.1betax\n> 7.1rcx\n> 7.1.0\n> 7.1.1\n\nAnd I like that -- but that would be Marc's (and the core group)\ndecision to make not mine. If the current schema is continued, I can\nwork around it --but it would be nice if the version numbering could be\nmore packager-friendly. I have a real aversion to naming the RPM\nversion number differently from the main package version.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Wed, 11 Apr 2001 20:34:44 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RPM upgrade caveats going from a beta version to RC" }, { "msg_contents": "Did we decide that \"most NetBSD/i386 users have fpus\" in which case Marko's\npatch should be applied?\n\nCheers,\n\nPatrick\n(just checked, it isn't in today's cvs)\n\n\nOn Thu, Mar 22, 2001 at 10:27:44PM +0200, Marko Kreen wrote:\n> On Thu, Mar 22, 2001 at 07:58:04PM +0000, Patrick Welche wrote:\n> > On Fri, Mar 23, 2001 at 06:25:50AM +1100, Giles Lean wrote:\n> > > \n> > > > PS: AFAIK geometry-positive-zeros-bsd works for all NetBSD platforms - the\n> > > > above difference is only for i386 + fpu.\n> > > \n> > > It doesn't on NetBSD-1.5/alpha -- there geometry-positive-zeros is\n> > > correct.\n> > \n> > Sorry, that should have read:\n> > \n> > AFAIK geometry-positive-zeros works for all NetBSD platforms - the\n> > above difference is only for i386 + fpu.\n> \n> Seems that following patch is needed. 
Now It Works For Me (tm).\n> Giles, does the regress test now succed for you?\n> \n> -- \n> marko\n> \n> \n> Index: src/test/regress/resultmap\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/test/regress/resultmap,v\n> retrieving revision 1.45\n> diff -u -r1.45 resultmap\n> --- src/test/regress/resultmap\t2001/03/22 15:13:18\t1.45\n> +++ src/test/regress/resultmap\t2001/03/22 17:29:49\n> @@ -17,6 +17,7 @@\n> geometry/.*-openbsd=geometry-positive-zeros-bsd\n> geometry/.*-irix6=geometry-irix\n> geometry/.*-netbsd=geometry-positive-zeros\n> +geometry/i.86-.*-netbsdelf1.5=geometry-positive-zeros-bsd\n> geometry/.*-sysv5uw7.*:cc=geometry-uw7-cc\n> geometry/.*-sysv5uw7.*:gcc=geometry-uw7-gcc\n> geometry/alpha.*-dec-osf=geometry-alpha-precision\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n", "msg_date": "Thu, 12 Apr 2001 17:15:52 +0100", "msg_from": "Patrick Welche <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms" }, { "msg_contents": "> Did we decide that \"most NetBSD/i386 users have fpus\" in which case Marko's\n> patch should be applied?\n\nI'm unclear on what y'all mean by \"i386 + fpu\", especially since NetBSD\nseems to insist on calling every Intel processor a \"i386\". In this case,\nare you suggesting that this patch covers all NetBSD installations on\nevery Intel processor from i386 + fpu forward to i486, i586, etc etc? Or\nis this specifically for the i386 with the 80387 coprocessor which is\nhow any reasonable person would interpret \"i386+fpu\"? ;)\n\n - Thomas\n\n> > Index: src/test/regress/resultmap\n> > ===================================================================\n> > RCS file: /home/projects/pgsql/cvsroot/pgsql/src/test/regress/resultmap,v\n> > retrieving revision 1.45\n> > diff -u -r1.45 resultmap\n> > --- src/test/regress/resultmap 2001/03/22 15:13:18 1.45\n> > +++ src/test/regress/resultmap 2001/03/22 17:29:49\n> > @@ -17,6 +17,7 @@\n> > geometry/.*-openbsd=geometry-positive-zeros-bsd\n> > geometry/.*-irix6=geometry-irix\n> > geometry/.*-netbsd=geometry-positive-zeros\n> > +geometry/i.86-.*-netbsdelf1.5=geometry-positive-zeros-bsd\n> > geometry/.*-sysv5uw7.*:cc=geometry-uw7-cc\n> > geometry/.*-sysv5uw7.*:gcc=geometry-uw7-gcc\n> > geometry/alpha.*-dec-osf=geometry-alpha-precision\n", "msg_date": "Fri, 13 Apr 2001 13:25:45 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms" }, { "msg_contents": "On Fri, Apr 13, 2001 at 01:25:45PM +0000, Thomas Lockhart wrote:\n> > Did we decide that \"most NetBSD/i386 users have fpus\" in which case Marko's\n> > patch should be applied?\n> \n> I'm unclear on what y'all mean by \"i386 + fpu\", especially since NetBSD\n> seems to insist on calling every Intel processor a \"i386\".\n\nHistory ;-)\n\n> In this case,\n> are you suggesting that this patch covers all NetBSD installations on\n> every Intel processor from i386 + fpu forward to i486, i586, etc etc?\n\nYes! It's simply, if the peecee type thing has a fpu (as in the sysctl\nmachdep.fpu_present returns 1), then libm387.so is used, and you get\ndifferences in the (from memory 44th insignificant figure?) 
otherwise it\njust uses libm.so and you get what is currently correct in resultmap.\n\nCheers,\n\nPatrick\n", "msg_date": "Fri, 13 Apr 2001 17:48:51 +0100", "msg_from": "Patrick Welche <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call for platforms" }, { "msg_contents": "my original CIDR type implementation used BIND's inet_ntop() and inet_pton()\nwhich therefore included latent support for ipv6. it wouldn't take a huge\namount of effort to bring this back, would it?\n\n(the user below is using VARCHAR for his ip addresses for this reason.)\n\n------- Forwarded Message\n\nDate: Fri, 20 Apr 2001 08:37:22 -0700 (PDT)\nMessage-Id: <[email protected]>\nTo: Paul A Vixie <[email protected]>\nSubject: Re: Appliance caching server configuration database schema \nIn-Reply-To: <[email protected]>\nFrom: [email protected] (Andreas Gustafsson)\n\nPaul A. Vixie writes:\n> you can use INET or CIDR for your addresses since this is postgres.\n\nI would if it supported IPv6 addresses.\n\n------- End of Forwarded Message\n\n", "msg_date": "Sat, 21 Apr 2001 10:27:19 -0700", "msg_from": "Paul A Vixie <[email protected]>", "msg_from_op": false, "msg_subject": "well, now i wish we hadn't gutted the ipv6 support" }, { "msg_contents": "Paul A Vixie <[email protected]> writes:\n> my original CIDR type implementation used BIND's inet_ntop() and inet_pton()\n> which therefore included latent support for ipv6. it wouldn't take a huge\n> amount of effort to bring this back, would it?\n\nAFAIK we never actually *had* IPV6 support in those datatypes, only\nstubs for it. A patch to bring it up to full speed would be gladly\naccepted...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 21 Apr 2001 23:28:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: well, now i wish we hadn't gutted the ipv6 support " }, { "msg_contents": "> AFAIK we never actually *had* IPV6 support in those datatypes, only\n> stubs for it.\n\nthe inet_net_pton implementation that was brought in from BIND had its\nIPv6 portions scrubbed. micro-over-optimization of the contributed\n\"bitncmp\" caused the \"ipv4 as int\" assumption to reoccur. i'm going to\nhave to put it back to BIND-standard as much as possible. presumably\nas long as the on-disk format is compatible (such that old databases can\nbe both read and written by the new code) none of that will be objectionable.\n\n> A patch to bring it up to full speed would be gladly accepted...\n\nthanks for the invitation, i'll start work on it right now.\n\n", "msg_date": "Sat, 21 Apr 2001 20:48:13 -0700", "msg_from": "Paul A Vixie <[email protected]>", "msg_from_op": false, "msg_subject": "Re: well, now i wish we hadn't gutted the ipv6 support " }, { "msg_contents": "[apologies if this appears twice; I thought I had sent it but it hasn't\nappeared anywhere]\nThe attached patch implements a method of connection authentication for\nUnix sockets that support SCM_CREDENTIALS. This includes Linux kernels\n2.2 and 2.4 at least; I don't know what other implementations support\nit.\n\nSince it is not universally supported, I have included a configure test. 
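\n\nFor anyone curious what the credential lookup itself looks like, here is a rough,\nuntested sketch of the Linux-style variant based on SO_PEERCRED; the patch itself\nmay use the SCM_CREDENTIALS ancillary-message form instead, so treat the function\nname below as illustrative only:\n\n    #define _GNU_SOURCE                /* struct ucred needs this on glibc */\n    #include <sys/types.h>\n    #include <sys/socket.h>\n\n    static int\n    peer_uid(int sock, uid_t *uid)\n    {\n        struct ucred cred;\n        socklen_t    len = sizeof(cred);\n\n        /* ask the kernel for the uid/gid/pid of the peer that connected\n         * to this AF_UNIX socket */\n        if (getsockopt(sock, SOL_SOCKET, SO_PEERCRED, &cred, &len) < 0)\n            return -1;\n\n        *uid = cred.uid;\n        return 0;\n    }\n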
\nautoconf needs to be run after installing the patch.\n\nThis patch provides a new authentication method \"peer\" for use with\n\"local\" connections; otherwise it works exactly like the \"ident\" method.\n\nPlease consider including this in PostgreSQL.\n\n\n\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Rejoice with them that do rejoice, and weep with them \n that weep.\" Romans 12:15", "msg_date": "Thu, 03 May 2001 10:28:39 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Unix sockets connection authentication - patch" }, { "msg_contents": "Oliver Elphick writes:\n\n> Since it is not universally supported, I have included a configure test.\n> autoconf needs to be run after installing the patch.\n\nYou don't need Autoconf tests for cpp symbols. You can just write #ifdef\nWEIRD_SYMBOL in the code.\n\nBtw., never ever use AC_EGREP_*.\n\n-- \nPeter Eisentraut [email protected] http://funkturm.homeip.net/~peter\n\n", "msg_date": "Thu, 3 May 2001 16:04:59 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unix sockets connection authentication - patch" }, { "msg_contents": "> [apologies if this appears twice; I thought I had sent it but it hasn't\n> appeared anywhere]\n> The attached patch implements a method of connection authentication for\n> Unix sockets that support SCM_CREDENTIALS. This includes Linux kernels\n> 2.2 and 2.4 at least; I don't know what other implementations support\n> it.\n\nAre SCM_CREDENTIALS supported by some standard?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 3 May 2001 11:19:29 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unix sockets connection authentication - patch" }, { "msg_contents": "Bruce Momjian wrote:\n >> The attached patch implements a method of connection authentication for\n >> Unix sockets that support SCM_CREDENTIALS. This includes Linux kernels\n >> 2.2 and 2.4 at least; I don't know what other implementations support\n >> it.\n >\n >Are SCM_CREDENTIALS supported by some standard?\n\nI don't know if there is a standard. I've done a search on Google - it\nseems to have been invented by Sun and implemented in newer BSD as well\nas Linux.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Rejoice with them that do rejoice, and weep with them \n that weep.\" Romans 12:15 \n\n\n", "msg_date": "Thu, 03 May 2001 17:22:08 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unix sockets connection authentication - patch " }, { "msg_contents": "Hi all,\n\n\tIs there anyway to get the debug (-d2) log files to mark each transaction\nwith a unique ID. 
We're trying to debug dead locks and the transactions seem\nto be mixed together somewhat.\n\nThanks,\n\n--Rainer\n\n", "msg_date": "Fri, 4 May 2001 14:16:39 +0900", "msg_from": "\"Rainer Mager\" <[email protected]>", "msg_from_op": false, "msg_subject": "log files" }, { "msg_contents": "\"Rainer Mager\" <[email protected]> writes:\n> \tIs there anyway to get the debug (-d2) log files to mark each transaction\n> with a unique ID.\n\nNot per-transaction, but there's an option to include the backend PID,\nwhich should help.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 May 2001 09:51:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: log files " }, { "msg_contents": "\nNot sure what to do with this. Our authentication options are already\npretty complicated, and I hate to add a new one that no one is really\nsure about its portability or usefulness.\n\n\n> [apologies if this appears twice; I thought I had sent it but it hasn't\n> appeared anywhere]\n> The attached patch implements a method of connection authentication for\n> Unix sockets that support SCM_CREDENTIALS. This includes Linux kernels\n> 2.2 and 2.4 at least; I don't know what other implementations support\n> it.\n> \n> Since it is not universally supported, I have included a configure test. \n> autoconf needs to be run after installing the patch.\n> \n> This patch provides a new authentication method \"peer\" for use with\n> \"local\" connections; otherwise it works exactly like the \"ident\" method.\n> \n> Please consider including this in PostgreSQL.\n> \n\nContent-Description: p.diff\n\n[ Attachment, skipping... ]\n\n> Oliver Elphick [email protected]\n> Isle of Wight http://www.lfix.co.uk/oliver\n> PGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\n> GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n> ========================================\n> \"Rejoice with them that do rejoice, and weep with them \n> that weep.\" Romans 12:15 \n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 8 May 2001 14:10:12 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unix sockets connection authentication - patch" }, { "msg_contents": "Hi all,\n\n\tWe have an application that runs on both Postgres and Oracle. One problem\nwe've been facing as maintaining the the installed/default database for the\napplication. Once it is up and running, things are fine, but since we\nprimarily develop on Postgres we sometimes hit problems when it is time to\nconvert all of our work to Oracle. 
I was wondering if anyone knows of any\ntools that take a Postgres dump and convert it to something Oracle can\naccept?\n\n\tThanks,\n\n--Rainer\n\n", "msg_date": "Mon, 14 May 2001 07:55:38 +0900", "msg_from": "\"Rainer Mager\" <[email protected]>", "msg_from_op": false, "msg_subject": "Postgres <-> Oracle" }, { "msg_contents": "I can't see any reason in the code why this should be happening.\n\n------- Forwarded Message\n\nDate: Thu, 24 May 2001 00:25:32 -0400\nFrom: Marc Sherman <[email protected]>\nTo: Debian Bug Tracking System <[email protected]>\nSubject: Bug#98565: postgresql logs notices with GMT timestamps in syslog\n\nPackage: postgresql\nVersion: 7.1.1-3\nSeverity: normal\n\nNotices are being timestamped in GMT in the syslog, instead of local\ntime like all other log entries. Here's a fragment from my syslog:\n\nMay 23 23:17:30 projectile postgres[1035]: [1] DEBUG: connection: host=[local]\nuser=www-data database=inwopc\nMay 24 03:17:30 projectile postgres[1035]: [2] NOTICE: Adding missing FROM-cla\nu\nse entry for table \"games\"\nMay 23 23:17:30 projectile apache: NOTICE: Adding missing FROM-clause entry fo\nr\n table \"games\"\n\nAll three entries are from the same connection. The second entry is\nthe notice being logged by postgres, while the third entry (with the\nlocal timestamp) is that same notice being logged by apache (php).\n\n- - Marc\n\n- -- System Information\nDebian Release: testing/unstable\nArchitecture: i386\nKernel: Linux projectile 2.2.19 #1 Mon May 7 09:24:37 EDT 2001 i586\n\nVersions of packages postgresql depends on:\nii debconf 0.9.41 Debian configuration management sy\nii debianutils 1.15 Miscellaneous utilities specific t\nii libc6 2.2.3-1 GNU C Library: Shared libraries an\nii libpgsql2.1 7.1.1-3 Shared library libpq.so.2.1 for Po\nii libreadline4 4.2-3 GNU readline and history libraries\nii libssl0.9.6 0.9.6-2 SSL shared libraries \nii postgresql-client 7.1.1-3 Front-end programs for PostgreSQL \nii procps 1:2.0.7-4 The /proc file system utilities. \nii zlib1g 1:1.1.3-15 compression library - runtime \n\n\n------- End of Forwarded Message\n\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"I will praise thee; for I am fearfully and wonderfully \n made...\" Psalms 139:14 \n\n\n", "msg_date": "Thu, 24 May 2001 09:30:05 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Bug#98565: postgresql logs notices with GMT timestamps in syslog\n (fwd)" }, { "msg_contents": "Since there is no plan yet how to do a wholesale overhaul of the ACL\nsystem, I'd like to stick a few improvements into the current\nimplementation:\n\n* Make DELETE distinct from UPDATE privilege\n\n* rename the internal representation: s = select, i = insert, u = update,\n d = delete, R = rules\n\n* LOCK > AccessShare will require UPDATE or DELETE. This is not a change\n in effect.\n\n* Sequence nextval and setval will require UPDATE; DELETE won't do any\n longer.\n\n* COPY FROM will require INSERT privilege. It used to require\n UPDATE/DELETE, it think that is not correct..\n\n* INSERT (the command) will require INSERT privilege. UPDATE/DELETE won't\n do any longer. 
(Why was this there?)\n\n* Implement SQL REFERENCES privilege: grant references on A to B will\n allow user B to create a foreign key referencing table A as primary key.\n\nI'd also like to create a regression test. That will require creating\nsome global users and groups in the installation where the test runs. I\nthink as long as we name them \"regressuser1\", \"regressgroup2\", etc. this\nwon't harm anyone.\n\nComments?\n\n-- \nPeter Eisentraut [email protected] http://funkturm.homeip.net/~peter\n\n", "msg_date": "Thu, 24 May 2001 13:07:24 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Smaller access privilege changes" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> * Make DELETE distinct from UPDATE privilege\n\nOkay.\n\n> * rename the internal representation: s = select, i = insert, u = update,\n> d = delete, R = rules\n\nSince the internal representation is visible to users, I fear that a\nwholesale renaming will break existing applications. Can we make this\npart of the change less intrusive?\n\n> * COPY FROM will require INSERT privilege. It used to require\n> UPDATE/DELETE, it think that is not correct..\n> * INSERT (the command) will require INSERT privilege. UPDATE/DELETE won't\n> do any longer. (Why was this there?)\n\nBoth of these are basically there because the underlying model is \"read\nand write\", with \"append\" as a limited form of \"write\"; so \"write\"\nallows everything that \"append\" does. But if we are switching to a full\n\"insert/update/delete\" model then this behavior should go away.\n\n> * Implement SQL REFERENCES privilege: grant references on A to B will\n> allow user B to create a foreign key referencing table A as primary key.\n\nWhich privilege will SELECT FOR UPDATE require, and how do you plan to\nget the system to distinguish users' SELECT FOR UPDATE from the commands\nissued by the foreign key triggers?\n\n> I'd also like to create a regression test. That will require creating\n> some global users and groups in the installation where the test runs. I\n> think as long as we name them \"regressuser1\", \"regressgroup2\", etc. this\n> won't harm anyone.\n\nSeems reasonable, but be careful to cope with the case where these\nobjects already exist from a prior regression run.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 24 May 2001 07:49:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Smaller access privilege changes " }, { "msg_contents": "\"Oliver Elphick\" <[email protected]> writes:\n> Notices are being timestamped in GMT in the syslog, instead of local\n> time like all other log entries. Here's a fragment from my syslog:\n\nCurious. I always assumed that syslog timestamps were supplied by the\nsyslog daemon, but to make this happen they'd have to be supplied in the\nsyslog client process (viz. the Postgres process). 
What timezone is the\nPostgres backend being run in, and is it different from all the other\nsyslog clients on the system?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 24 May 2001 08:03:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug#98565: postgresql logs notices with GMT timestamps in syslog\n\t(fwd)" }, { "msg_contents": "Tom Lane writes:\n\n> > * rename the internal representation: s = select, i = insert, u = update,\n> > d = delete, R = rules\n>\n> Since the internal representation is visible to users, I fear that a\n> wholesale renaming will break existing applications. Can we make this\n> part of the change less intrusive?\n\nI guess so. I could make r=select, a=insert, w=update, d=delete, R=rules,\nx=reference. Of course we will have to break this eventually, but we\nmight as well put it off until then.\n\n> > * Implement SQL REFERENCES privilege: grant references on A to B will\n> > allow user B to create a foreign key referencing table A as primary key.\n>\n> Which privilege will SELECT FOR UPDATE require, and how do you plan to\n> get the system to distinguish users' SELECT FOR UPDATE from the commands\n> issued by the foreign key triggers?\n\nThe REFERENCES privilege will be checked by CREATE TABLE and ALTER TABLE\n(where it currently says \"must be owner\"). SELECT FOR UPDATE is not\naffected by this and will stay the way it is.\n\n> > I'd also like to create a regression test. That will require creating\n> > some global users and groups in the installation where the test runs. I\n> > think as long as we name them \"regressuser1\", \"regressgroup2\", etc. this\n> > won't harm anyone.\n>\n> Seems reasonable, but be careful to cope with the case where these\n> objects already exist from a prior regression run.\n\nI drop them at the end of the test.\n\n-- \nPeter Eisentraut [email protected] http://funkturm.homeip.net/~peter\n\n", "msg_date": "Thu, 24 May 2001 14:37:09 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Smaller access privilege changes " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Tom Lane writes:\n>>> * rename the internal representation: s = select, i = insert, u = update,\n>>> d = delete, R = rules\n>> \n>> Since the internal representation is visible to users, I fear that a\n>> wholesale renaming will break existing applications. Can we make this\n>> part of the change less intrusive?\n\n> I guess so. I could make r=select, a=insert, w=update, d=delete, R=rules,\n> x=reference. Of course we will have to break this eventually, but we\n> might as well put it off until then.\n\nMy thought exactly. If we were going straight to full SQL compliance\nthen I wouldn't worry, but I don't like the idea of breaking apps now\nand then breaking them some more later.\n\nA different tack is to go ahead and make the change now, but try to\nensure we won't have to change the coding again when we do the rest of\nthe SQL protection model. Do you know what is still missing given this\nchange?\n\n>> Seems reasonable, but be careful to cope with the case where these\n>> objects already exist from a prior regression run.\n\n> I drop them at the end of the test.\n\nWhat if the prior test crashed or was aborted by the user midway\nthrough? 
Cleaning up at the end of the test is good, but I think\nit'd be wise for pg_regress to also drop these users/groups before\nit starts the run.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 24 May 2001 08:42:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Smaller access privilege changes " }, { "msg_contents": "> Peter Eisentraut <[email protected]> writes:\n> > * Make DELETE distinct from UPDATE privilege\n> \n> Okay.\n> \n> > * rename the internal representation: s = select, i = insert, u = update,\n> > d = delete, R = rules\n> \n> Since the internal representation is visible to users, I fear that a\n> wholesale renaming will break existing applications. Can we make this\n> part of the change less intrusive?\n\nIf we are voting, I kind of like the newer letters. The old ones made\nlittle sense except to QUEL users.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 24 May 2001 10:38:18 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Smaller access privilege changes" }, { "msg_contents": "Tom Lane wrote:\n >\"Oliver Elphick\" <[email protected]> writes:\n >> Notices are being timestamped in GMT in the syslog, instead of local\n >> time like all other log entries. Here's a fragment from my syslog:\n >\n >Curious. I always assumed that syslog timestamps were supplied by the\n >syslog daemon, but to make this happen they'd have to be supplied in the\n >syslog client process (viz. the Postgres process). What timezone is the\n >Postgres backend being run in, and is it different from all the other\n >syslog clients on the system?\n\nI just got this extra information from the reporter.\n\n------- Forwarded Message\n\nDate: Thu, 24 May 2001 08:57:49 -0400\nFrom: \"Marc Sherman\" <[email protected]>\nTo: <[email protected]>\nSubject: Bug#98565: More info on this bug\n\nThe connection that is creating this NOTICE is executing\n\"set time zone 'GMT';\" immediately after connection; I\nsuspect that what's happening is that postmaster is setting the\nlibc timezone (by changing the environment and calling tzset)\nwhen is executes that query, causing subsequent system calls\n(including syslog) to use the new timezone.\n\n- - Marc\n\n\n\n------- End of Forwarded Message\n\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"I will praise thee; for I am fearfully and wonderfully \n made...\" Psalms 139:14 \n\n\n", "msg_date": "Thu, 24 May 2001 18:31:14 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bug#98565: postgresql logs notices with GMT timestamps\n\tin syslog (fwd)" }, { "msg_contents": "Peter Eisentraut wrote:\n >Since there is no plan yet how to do a wholesale overhaul of the ACL\n >system, I'd like to stick a few improvements into the current\n >implementation:\n \n >* COPY FROM will require INSERT privilege. It used to require\n > UPDATE/DELETE, it think that is not correct..\n\nCOPY FROM should require either INSERT or UPDATE or both according to\nwhat rows are being copied. If a copied primary key already exists,\nthat will be an update. 
I don't see any reason to give COPY FROM any\nspecial treatment - surely it should succeed or fail according to \nwhether what it is trying to do is within the user's privileges.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"I will praise thee; for I am fearfully and wonderfully \n made...\" Psalms 139:14 \n\n\n", "msg_date": "Thu, 24 May 2001 18:57:50 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Smaller access privilege changes " }, { "msg_contents": "Tom Lane <[email protected]> writes:\n\n> \"Oliver Elphick\" <[email protected]> writes:\n> > Notices are being timestamped in GMT in the syslog, instead of local\n> > time like all other log entries. Here's a fragment from my syslog:\n> \n> Curious. I always assumed that syslog timestamps were supplied by the\n> syslog daemon, but to make this happen they'd have to be supplied in the\n> syslog client process (viz. the Postgres process).\n\nThat is correct. The syslog(3) function puts a timestamp in front of\nthe message, and writes it to the syslog daemon. The string written\nto the daemon starts with <N>, where N is the priority and facility\nor'ed together.\n\nIan\n", "msg_date": "24 May 2001 11:07:49 -0700", "msg_from": "Ian Lance Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug#98565: postgresql logs notices with GMT timestamps in syslog\n\t(fwd)" }, { "msg_contents": "Oliver Elphick writes:\n\n> COPY FROM should require either INSERT or UPDATE or both according to\n> what rows are being copied. If a copied primary key already exists,\n> that will be an update.\n\nNo, it will be an error.\n\n-- \nPeter Eisentraut [email protected] http://funkturm.homeip.net/~peter\n\n", "msg_date": "Thu, 24 May 2001 20:41:16 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Smaller access privilege changes " }, { "msg_contents": "\"Oliver Elphick\" <[email protected]> writes:\n> I just got this extra information from the reporter.\n\n> The connection that is creating this NOTICE is executing\n> \"set time zone 'GMT';\" immediately after connection; I\n> suspect that what's happening is that postmaster is setting the\n> libc timezone (by changing the environment and calling tzset)\n> when is executes that query, causing subsequent system calls\n> (including syslog) to use the new timezone.\n\nUh-huh. This is not a bug, or at least it's not our bug.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 24 May 2001 14:58:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug#98565: postgresql logs notices with GMT timestamps in syslog\n\t(fwd)" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Peter Eisentraut <[email protected]> writes:\n> * rename the internal representation: s = select, i = insert, u = update,\n> d = delete, R = rules\n\n> If we are voting, I kind of like the newer letters.\n\nActually, I like 'em too ... *if* they're the final set. I'm just\nconcerned about the idea of breaking user applications for an\nintermediate stop on the way to full SQL92 privilege specs. 
It'd be\nespecially nasty if we found ourselves using non-intuitive coding for\nthe full SQL specs so as to stay compatible with an intermediate stop.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 24 May 2001 15:02:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Smaller access privilege changes " }, { "msg_contents": "Tom Lane writes:\n\n> A different tack is to go ahead and make the change now, but try to\n> ensure we won't have to change the coding again when we do the rest of\n> the SQL protection model. Do you know what is still missing given this\n> change?\n\nI don't think this will end up being the user-visible interface. We\nshould have a view, perhaps as part of the standard information_schema,\nthat users can query for privilege information. So for now I'll leave the\nold letters the way they are and just add a few new ones.\n\n-- \nPeter Eisentraut [email protected] http://funkturm.homeip.net/~peter\n\n", "msg_date": "Thu, 24 May 2001 22:05:44 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Smaller access privilege changes " }, { "msg_contents": "\nI hate the Australian configure option because it means that you can't use the\npre-built postgres\nthat comes with RedHat or whatever. Surely the correct solution is to have a\nconfig file somewhere\nthat gets read on startup? That way us Australians don't have to be the only\nones in the world\nthat need a custom built postgres.\n\n\n\n", "msg_date": "Tue, 12 Jun 2001 15:00:07 +1000", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Australian timezone configure option" }, { "msg_contents": "I'd just like to ask, will making USE_AUSTRALIAN_RULES as a GUC option mean \nthe regression tests work?\n\nAnd DST changes will work fine too (although I think that's more Linux system \nrelated).\n\netc.\n\nAs in, I'm in favour of a GUC option, but if it breaks regressions tests or \nother stuff, then I'd have second thoughts.\n\nRegards and best wishes,\n\nJustin Clift\n\nOn Tuesday 12 June 2001 15:00, [email protected] wrote:\n> I hate the Australian configure option because it means that you can't use\n> the pre-built postgres\n> that comes with RedHat or whatever. Surely the correct solution is to have\n> a config file somewhere\n> that gets read on startup? That way us Australians don't have to be the\n> only ones in the world\n> that need a custom built postgres.\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://www.postgresql.org/search.mpl\n", "msg_date": "Wed, 13 Jun 2001 11:18:09 +1000", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Australian timezone configure option" }, { "msg_contents": "\nMy patch allows the regression tests to pass too.\n\n> I'd just like to ask, will making USE_AUSTRALIAN_RULES as a GUC option mean \n> the regression tests work?\n> \n> And DST changes will work fine too (although I think that's more Linux system \n> related).\n> \n> etc.\n> \n> As in, I'm in favour of a GUC option, but if it breaks regressions tests or \n> other stuff, then I'd have second thoughts.\n> \n> Regards and best wishes,\n> \n> Justin Clift\n> \n> On Tuesday 12 June 2001 15:00, [email protected] wrote:\n> > I hate the Australian configure option because it means that you can't use\n> > the pre-built postgres\n> > that comes with RedHat or whatever. 
Surely the correct solution is to have\n> > a config file somewhere\n> > that gets read on startup? That way us Australians don't have to be the\n> > only ones in the world\n> > that need a custom built postgres.\n> >\n> >\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> >\n> > http://www.postgresql.org/search.mpl\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 12 Jun 2001 21:30:32 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Australian timezone configure option" }, { "msg_contents": "> Surely the correct solution is to have a config file somewhere\n> that gets read on startup? That way us Australians don't have to be the only\n> ones in the world that need a custom built postgres.\n\nI will point out that \"you Australians\", and, well, \"us 'mericans\", are\nthe only countries without the sense to choose unique conventions for\ntime zone names.\n\nIt sounds like having a second lookup table for the Australian rules is\na possibility, and this sounds fairly reasonable to me. Btw, is there an\nAustralian convention for referring to North American time zones for\nthose zones with naming conflicts?\n\n - Thomas\n", "msg_date": "Thu, 14 Jun 2001 00:23:22 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Australian timezone configure option" }, { "msg_contents": "On Thu, Jun 14, 2001 at 12:23:22AM +0000, Thomas Lockhart wrote:\n> > Surely the correct solution is to have a config file somewhere\n> > that gets read on startup? That way us Australians don't have to be the only\n> > ones in the world that need a custom built postgres.\n> \n> I will point out that \"you Australians\", and, well, \"us 'mericans\", are\n> the only countries without the sense to choose unique conventions for\n> time zone names.\n> \n> It sounds like having a second lookup table for the Australian rules is\n> a possibility, and this sounds fairly reasonable to me. Btw, is there an\n> Australian convention for referring to North American time zones for\n> those zones with naming conflicts?\n\nFor years I've been on the TZ list, the announcement list for a \ncommunity-maintained database of time zones. One point they have \nfirmly established is that there is no reasonable hope of making \nanything like a standard system of time zone name abbreviations work. \nLegislators and dictators compete for arbitrariness in their time\nzone manipulations.\n\nEven if you assign, for your own use, an abbreviation to a particular\nadministrative region, you still need a history of legislation for that \nregion to know what any particular time record (particularly and April \nor September) really means.\n\nThe \"best practice\" for annotating times is to tag them with the numeric\noffset from UTC at the time the sample is formed. If the time sample is\nthe present time, you don't have to know very much make or use it. If \nit's in the past, you have to know the legislative history of the place \nto form a proper time record, but not to use it. 
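\n(Concretely, that means recording a value such as 2001-06-14 19:54:01+10:00,\nwith the offset spelled out, rather than a bare zone abbreviation. The sample\nvalue is made up, but anyone can use such a record later without knowing which\nlegislature's rules were in force when it was written.)\n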
If the time is in the \nfuture, you cannot know what offset will be in popular use at that time, \nbut at least you can be precise about what actual time you really mean,\neven if you can't be sure about what the wall clock says. (Actual wall \nclock times are not reliably predictable, a fact that occasionally makes \nthings tough on airline passengers.)\n\nThings are a little more stable in some places (e.g. in Europe it is\nimproving) but worldwide all is chaos.\n\nAssigning some country's current abbreviations at compile time is madness.\n\nNathan Myers\[email protected]\n", "msg_date": "Wed, 13 Jun 2001 18:05:42 -0700", "msg_from": "[email protected] (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: Australian timezone configure option" }, { "msg_contents": "\n> It sounds like having a second lookup table for the Australian rules is\n> a possibility, and this sounds fairly reasonable to me. Btw, is there an\n> Australian convention for referring to North American time zones for\n> those zones with naming conflicts?\n\nNo.\n\nNamed timezones just lose. For those of us who use machines on both\nthe Australian and USA East coasts it's nearly useless to see \"EST\":\n\n Thu Jun 14 19:54:01 EST 2001\n\nRegards,\n\nGiles\n", "msg_date": "Thu, 14 Jun 2001 19:55:22 +1000", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [PATCHES] Australian timezone configure option " }, { "msg_contents": "Bruce Momjian wrote:\n >> > I think our current idea is to have people run local ident servers to\n >> > handle this. We don't have any OS-specific stuff in pg_hba.conf and I\n >> > am not sure if we want to add that complexity. What do others think?\n >> \n >> This is not any less \"specific\" than SSL or Kerberos. Note that opening a\n >> TCP/IP socket already opens a theoretical hole to the world. Unix domain\n >> is much safer.\n >\n >You can install SSL/Kerberos on any Unix, and many come pre-installed. \n >You can't add unix-domain socket user authentication to any OS.\n >\n >I assume most OS's have 127.0.0.1 set as loopback so there shouldn't be\n >a hole:\n >\n >127 127.0.0.1 UGRS 4352 lo0\n >127.0.0.1 127.0.0.1 UH 4352 lo0\n >\n >However, the security issue may make it worthwhile. Which OS's support\n >user authentication again, and can we test via configure? Maybe we can\n >strip out the mention in the pg_hba.conf file if it is not supported on\n >that OS.\n \nThe security issue is why I developed it. There were complaints from people \nwho did not want to have identd running at all.\n\nI think the feature is available in Linux, Solaris and some BSD. It can be\ntested for by whether SO_PEERCRED is defined in sys/socket.h.\n\nI don't see the need to strip mention from the comments in pg_hba.conf. The\nsituation is no different from those systems which do not have Kerberos or\nSSL available.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"I waited patiently for the LORD; and he inclined unto \n me, and heard my cry. He brought me up also out of an \n horrible pit, out of the miry clay, and set my feet \n upon a rock, and established my goings. 
And he hath \n put a new song in my mouth, even praise unto our God.\n Many shall see it, and fear, and shall trust in the \n LORD.\" Psalms 40:1-3 \n\n\n", "msg_date": "Thu, 12 Jul 2001 03:37:43 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: Debian's PostgreSQL packages " }, { "msg_contents": "> The security issue is why I developed it. There were complaints from people \n> who did not want to have identd running at all.\n> \n> I think the feature is available in Linux, Solaris and some BSD. It can be\n> tested for by whether SO_PEERCRED is defined in sys/socket.h.\n\nYes, I see something similar in BSD/OS. Manual page attached.\n\n> \n> I don't see the need to strip mention from the comments in pg_hba.conf. The\n> situation is no different from those systems which do not have Kerberos or\n> SSL available.\n\nYea, I guess.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nRECV(2)\t\t\tBSD Programmer's Manual\t\t RECV(2)\n\nNAME\n recv, recvfrom, recvmsg - receive a message from a socket\n\nSYNOPSIS\n #include <sys/types.h>\n #include <sys/socket.h>\n\n ssize_t\n recv(int s, void *buf, size_t len, int flags);\n\n ssize_t\n recvfrom(int s, void *buf, size_t len, int flags, struct sockaddr *from,\n\t socklen_t *fromlen);\n\n ssize_t\n recvmsg(int s, struct msghdr *msg, int flags);\n\nDESCRIPTION\n The recvfrom() and recvmsg() calls are used to receive messages from a\n socket, and may be used to receive data on a socket whether or not it is\n connection-oriented.\n\n If from is non-null, and the socket is not connection-oriented, the\n source address of the message is filled in. The fromlen pointer refers\n to a value-result parameter; it should initially contain the amount of\n space pointed to by from; on return that location will contain the actual\n length (in bytes) of the address returned.\tIf the buffer provided is too\n small, the name is truncated and the full size is returned in the loca-\n tion to which fromlen points. If from is null, the value pointed to by\n fromlen is not modified. Otherwise, if the socket is connection-orient-\n ed, the address buffer will not be modified, and the value pointed to by\n fromlen will be set to zero.\n\n The recv() call is normally used only on a connected socket (see\n connect(2)) and is identical to recvfrom() with a nil from parameter.\n As it is redundant, it may not be supported in future releases.\n\n All three routines return the length of the message on successful comple-\n tion. If a message is too long to fit in the supplied buffer, excess\n bytes may be discarded depending on the type of socket the message is re-\n ceived from (see socket(2)).\n\n If no messages are available at the socket, the receive call waits for a\n message to arrive, unless the socket is nonblocking (see fcntl(2))\tin\n which case the value -1 is returned and the external variable errno set\n to EAGAIN. 
The receive calls normally return any data available, up to\n the requested amount, rather than waiting for receipt of the full amount\n requested; this behavior is affected by the socket-level options\n SO_RCVLOWAT and SO_RCVTIMEO described in getsockopt(2).\n\n The select(2) call may be used to determine when more data arrive.\n\n The flags argument to a recv call is formed by or'ing one or more of the\n values:\n\n\t MSG_OOB\tprocess out-of-band data\n\t MSG_PEEK\tpeek at incoming message\n\t MSG_WAITALL\twait for full request or error\n\n The MSG_OOB flag requests receipt of out-of-band data that would not be\n received in the normal data stream. Some protocols place expedited data\n at the head of the normal data queue, and thus this flag cannot be used\n with such protocols. The MSG_PEEK flag causes the receive operation to\n return data from the beginning of the receive queue without removing that\n data from the queue. Thus, a subsequent receive call will return the\n same data.\tThe MSG_WAITALL flag requests that the operation block until\n the full request is satisfied. However, the call may still return less\n data than requested if a signal is caught, an error or disconnect occurs,\n or the next data to be received is of a different type than that re-\n turned.\n\n The recvmsg() call uses a msghdr structure to minimize the number of di-\n rectly supplied parameters. This structure has the following form, as\n defined in <sys/socket.h>:\n\n struct msghdr {\n\t caddr_t msg_name;\t/* optional address */\n\t u_int msg_namelen; /* size of address */\n\t struct iovec *msg_iov; /* scatter/gather array */\n\t u_int msg_iovlen; /* # elements in msg_iov */\n\t caddr_t msg_control; /* ancillary data, see below */\n\t u_int msg_controllen; /* ancillary data buffer len */\n\t int msg_flags;\t/* flags on received message */\n };\n\n If msg_name is non-null, and the socket is not connection-oriented, the\n source address of the message is filled in. The amount of space avail-\n able for the address is provided by msg_namelen, which is modified on re-\n turn to reflect the length of the stored address.\tIf the buffer is too\n small, the address is truncated; this is indicated when msg_namelen is\n less than the length embedded in the address (sa_len). If msg_name is\n null, msg_namelen is not modified.\tOtherwise, if the socket is connec-\n tion-oriented, the address buffer will not be modified, and msg_namelen\n will be set to zero.\n\n Msg_iov and msg_iovlen describe scatter gather locations, as discussed in\n read(2). Msg_control, which has length msg_controllen, points to a\n buffer for other protocol control related messages or other miscellaneous\n ancillary data. 
The messages are of the form:\n\n struct cmsghdr {\n\t u_int cmsg_len;\t/* data byte count, including hdr */\n\t int cmsg_level; /* originating protocol */\n\t int cmsg_type;\t/* protocol-specific type */\n /* followed by\n\t u_char cmsg_data[]; */\n };\n\n As an example, one could use this to learn of changes in the data-stream\n in XNS/SPP, or in ISO, to obtain user-connection-request data by request-\n ing a recvmsg with no data buffer provided immediately after an accept()\n call.\n\n Open file descriptors are now passed as ancillary data for AF_LOCAL do-\n main sockets, with cmsg_level set to SOL_SOCKET and cmsg_type set to\n SCM_RIGHTS.\n\n The msg_flags field is set on return according to the message received.\n MSG_EOR indicates end-of-record; the data returned completed a record\n (generally used with sockets of type SOCK_SEQPACKET). MSG_TRUNC indicates\n that the trailing portion of a datagram was discarded because the data-\n gram was larger than the buffer supplied.\tMSG_CTRUNC indicates that some\n control data were discarded due to lack of space in the buffer for ancil-\n lary data.\tMSG_OOB is returned to indicate that expedited or out-of-band\n data were received.\n\nRETURN VALUES\n These calls return the number of bytes received, or -1 if an error oc-\n curred.\n\nEXAMPLES\n The following code is an example of parsing the control information re-\n turned in the msg_control field. This example shows how to parse the\n control messages for a localdomain(4) socket to obtain passed file de-\n scriptors and the sender's credentials.\n\n #include <sys/param.h>\n #include <sys/socket.h>\n #include <sys/ucred.h>\n\n struct msghdr msghdr;\n struct cmsghdr *cm;\n struct fcred *fc;\t/* Pointer to the credentials */\n int fdcnt;\t\t/* The number of file descriptors passed */\n int *fds;\t\t/* The passed array of file descriptors */\n\n #define ENOUGH_CMSG(p, size) ((p)->cmsg_len >= ((size) + sizeof(*(p))))\n\n fc = NULL;\n fdcnt = 0;\n fds = NULL;\n\n if (msghdr.msg_controllen >= sizeof (struct cmsghdr) &&\n\t (msghdr.msg_flags & MSG_CTRUNC) == 0) {\n\n\t for (cm = CMSG_FIRSTHDR(&msghdr);\n\t\t cm != NULL && cm->cmsg_len >= sizeof(*cm);\n\t\t cm = CMSG_NXTHDR(&msghdr, cm)) {\n\n\t\t if (cm->cmsg_level != SOL_SOCKET)\n\t\t\t continue;\n\n\t\t switch (cm->cmsg_type) {\n\t\t case SCM_RIGHTS:\n\t\t\t fdcnt = (cm->cmsg_len - sizeof(*cm)) / sizeof(int);\n\t\t\t fds = (int *)CMSG_DATA(cm);\n\t\t\t break;\n\n\t\t case SCM_CREDS:\n\t\t\t if (ENOUGH_CMSG(cm, sizeof(*fc)))\n\t\t\t\t fc = (struct fcred *)CMSG_DATA(cm);\n\t\t\t break;\n\t\t }\n\t }\n }\n\nERRORS\n The calls fail if:\n\n [EBADF]\tThe argument s is an invalid descriptor.\n\n [ENOTCONN]\tThe socket is associated with a connection-oriented protocol\n\t\t and has not been connected (see connect(2) and accept(2)).\n\n [ENOTSOCK]\tThe argument s does not refer to a socket.\n\n [EAGAIN]\tThe socket is marked non-blocking, and the receive operation\n\t\t would block, or a receive timeout had been set, and the time-\n\t\t out expired before data were received.\n\n [EINTR]\tThe receive was interrupted by delivery of a signal before\n\t\t any data were available.\n\n [EFAULT]\tThe receive buffer pointer(s) point outside the process's ad-\n\t\t dress space.\n\nSEE ALSO\n fcntl(2),\tread(2), select(2), getsockopt(2), socket(2), ip(4), lo-\n cal(4)\n\nHISTORY\n The recv function call appeared in 4.2BSD.\n\n4.3-Reno Berkeley Distribution February 21, 1994\t\t\t 4", "msg_date": "Wed, 11 Jul 2001 23:48:42 -0400 (EDT)", "msg_from": "Bruce Momjian 
<[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Re: Debian's PostgreSQL packages" }, { "msg_contents": "I was wondering if some of you Postgres hackers could advise me on the\nsafety of the following. I have written a postgres C function that\nuses a popen linux system call. Orginally when I first tried it I kept\ngetting an ECHILD. I read a little bit more on the pclose function\nand the wait system calls and discoverd that on LINUX if the signal\nhandler for SIGCHLD is set to SIG_IGN you will get the ECHILD error\non pclose(or wait4 for that matter). So I did some snooping around in\nthe postgres backend code and found that in the traffic cop that the\nSIGCHLD signal handler is set to SIG_IGN. So in my C function right\nbefore the popen call I set the signal handler for SIGCHLD to SIG_DFL\nand right after the pclose I set it back to SIG_IGN. I tested this\nand it seems to solve my problem. Not knowing much about the\ninternals of the postgres backend I would like to know... Is setting\nthe signal handler to SIG_IGN temorarily going to do anything funky\nwith by database or the backend?\n\nThanks in advance for your insights,\nScott Shealy\n", "msg_date": "11 Jul 2001 21:40:55 -0700", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "SIGCHLD handler in Postgres C function." }, { "msg_contents": "[email protected] writes:\n> I have written a postgres C function that\n> uses a popen linux system call. Orginally when I first tried it I kept\n> getting an ECHILD. I read a little bit more on the pclose function\n> and the wait system calls and discoverd that on LINUX if the signal\n> handler for SIGCHLD is set to SIG_IGN you will get the ECHILD error\n> on pclose(or wait4 for that matter). So I did some snooping around in\n> the postgres backend code and found that in the traffic cop that the\n> SIGCHLD signal handler is set to SIG_IGN. So in my C function right\n> before the popen call I set the signal handler for SIGCHLD to SIG_DFL\n> and right after the pclose I set it back to SIG_IGN. I tested this\n> and it seems to solve my problem.\n\nHmm. A possibly related bit of ugliness can be found in\nsrc/backend/commands/dbcommands.c, where we ignore ECHILD after\na system() call:\n\n ret = system(buf);\n /* Some versions of SunOS seem to return ECHILD after a system() call */\n if (ret != 0 && errno != ECHILD)\n {\n\nInteresting, no? I wonder whether we could get rid of that kluge\nif the signal handler was SIG_DFL rather than SIG_IGN. Can anyone\ntry this on one of the affected versions of SunOS? (Tatsuo, you\nseem to have added the ECHILD exception on May 25 2000; the commit\nmessage mentions Solaris but not which version. Could you try it?)\n\nWhat I'd be inclined to do, rather than swapping the handlers around\nwhile running, is to just have backend startup (tcop/postgres.c) set\nthe handler to SIG_DFL not SIG_IGN in the first place. That *should*\nproduce the identical results according to my man pages, but evidently\nit's not quite the same thing on some systems.\n\nChanging this might be a zero-cost solution to a portability glitch.\nComments anyone?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 17 Jul 2001 13:21:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SIGCHLD handler in Postgres C function. " }, { "msg_contents": "> [email protected] writes:\n> > I have written a postgres C function that\n> > uses a popen linux system call. 
Orginally when I first tried it I kept\n> > getting an ECHILD. I read a little bit more on the pclose function\n> > and the wait system calls and discoverd that on LINUX if the signal\n> > handler for SIGCHLD is set to SIG_IGN you will get the ECHILD error\n> > on pclose(or wait4 for that matter). So I did some snooping around in\n> > the postgres backend code and found that in the traffic cop that the\n> > SIGCHLD signal handler is set to SIG_IGN. So in my C function right\n> > before the popen call I set the signal handler for SIGCHLD to SIG_DFL\n> > and right after the pclose I set it back to SIG_IGN. I tested this\n> > and it seems to solve my problem.\n> \n> Hmm. A possibly related bit of ugliness can be found in\n> src/backend/commands/dbcommands.c, where we ignore ECHILD after\n> a system() call:\n> \n> ret = system(buf);\n> /* Some versions of SunOS seem to return ECHILD after a system() call */\n> if (ret != 0 && errno != ECHILD)\n> {\n> \n> Interesting, no? I wonder whether we could get rid of that kluge\n> if the signal handler was SIG_DFL rather than SIG_IGN. Can anyone\n> try this on one of the affected versions of SunOS? (Tatsuo, you\n> seem to have added the ECHILD exception on May 25 2000; the commit\n> message mentions Solaris but not which version. Could you try it?)\n\nIt was Solaris 2.6.\n\n>Subject: [HACKERS] Solaris 2.6 problems\n>From: Tatsuo Ishii <[email protected]>\n>To: [email protected]\n>Cc: [email protected]\n>Date: Wed, 24 May 2000 18:28:25 +0900\n>X-Mailer: Mew version 1.93 on Emacs 19.34 / Mule 2.3 (SUETSUMUHANA)\n>\n>Hi, I have encountered a really strange problem with PostgreSQL 7.0 on\n>Solaris 2.6/Sparc. The problem is that createdb command or create\n>database SQL always fails. Inspecting the output of truss shows that\n>system() call in createdb() (commands/dbcomand.c) fails because\n>waitid() system call in system() returns error no. 10 (ECHILD).\n>\n>This problem was not in 6.5.3, so I checked the source of it. The\n>reason why 6.5.3's createdb worked was that it just ignored the return\n>code of system()!\n>\n>It seems that we need to ignore an error from system() if the error is\n>ECHILD on Solaris.\n>\n>Any idea?\n>\n>BTW, I have compiled PostgreSQL with egcs 2.95 with/without\n>optimization.\n>--\n>Tatsuo Ishii\n>\n", "msg_date": "Sun, 22 Jul 2001 09:28:03 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SIGCHLD handler in Postgres C function. " }, { "msg_contents": "On Sun, 22 Jul 2001, Tatsuo Ishii wrote:\n\n> > [email protected] writes:\n> > > I have written a postgres C function that\n> > > uses a popen linux system call. Orginally when I first tried it I kept\n> > > getting an ECHILD. I read a little bit more on the pclose function\n> > > and the wait system calls and discoverd that on LINUX if the signal\n> > > handler for SIGCHLD is set to SIG_IGN you will get the ECHILD error\n> > > on pclose(or wait4 for that matter). So I did some snooping around in\n> > > the postgres backend code and found that in the traffic cop that the\n> > > SIGCHLD signal handler is set to SIG_IGN. So in my C function right\n> > > before the popen call I set the signal handler for SIGCHLD to SIG_DFL\n> > > and right after the pclose I set it back to SIG_IGN. I tested this\n> > > and it seems to solve my problem.\n\nJust ignore ECHILD. It's not messy at all. :-) It sounds like your kernel\nis using SIG_IGN to do the same thing as the SA_NOCLDWAIT flag in *BSD\n(well NetBSD at least). 
When a child dies, it gets re-parrented to init\n(which is wait()ing). init does the child-died cleanup, rather than the\nparent needing to. That way when the parent runs wait(), there is no\nchild, so you get an ECHILD.\n\nAll ECHILD is doing is saying there was no child. Since we aren't really\nwaiting for the child, I don't see how that's a problem.\n\nTake care,\n\nBill\n\n", "msg_date": "Mon, 30 Jul 2001 10:37:16 -0700 (PDT)", "msg_from": "Bill Studenmund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SIGCHLD handler in Postgres C function. " }, { "msg_contents": "Bill Studenmund <[email protected]> writes:\n> All ECHILD is doing is saying there was no child. Since we aren't really\n> waiting for the child, I don't see how that's a problem.\n\nYou're missing the point: on some platforms the system() call is\nreturning a failure indication because of ECHILD. It's system() that's\nbroken, not us, and the issue is how to work around its brokenness\nwithout sacrificing more error detection than we have to.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 30 Jul 2001 15:00:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SIGCHLD handler in Postgres C function. " }, { "msg_contents": "On Mon, 30 Jul 2001, Tom Lane wrote:\n\n> Bill Studenmund <[email protected]> writes:\n> > All ECHILD is doing is saying there was no child. Since we aren't really\n> > waiting for the child, I don't see how that's a problem.\n>\n> You're missing the point: on some platforms the system() call is\n> returning a failure indication because of ECHILD. It's system() that's\n> broken, not us, and the issue is how to work around its brokenness\n> without sacrificing more error detection than we have to.\n\nI think I do get the point. But perhaps I didn't make my point well. :-)\n\nI think the problem is that on some OSs, setting SIGCHLD to SIG_IGN\nactually triggers automatic child reaping. So the problem is that we are:\n1) setting SIGCHLD to SIG_IGN, 2) Calling system(), and 3) thinking ECHILD\nmeans something was really wrong.\n\nI think 4.4BSD systems will do what we expect (as the NO_CHLDWAIT flag\nrequests child reaping), but linux systems will give us the ECHILD.\nLooking at source on the web, I found:\n\nkernel/signal.c:1042\n\n* Note the silly behaviour of SIGCHLD: SIG_IGN means that the\n* signal isn't actually ignored, but does automatic child\n* reaping, while SIG_DFL is explicitly said by POSIX to force\n* the signal to be ignored.\n\nSo we get automatic reaping on Linux systems (which isn't bad).\n\nIf automatic reaping happens, system will give us an ECHILD as the waitpid\n(or equivalent) will not have found a child. :-)\n\nMy suggestion is just leave the ifs as \"if ((error == 0) || (error ==\nECHLD))\" (or the inverse).\n\nTake care,\n\nBill\n\n", "msg_date": "Mon, 30 Jul 2001 15:15:16 -0700 (PDT)", "msg_from": "Bill Studenmund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SIGCHLD handler in Postgres C function. " }, { "msg_contents": "Bill Studenmund <[email protected]> writes:\n> Looking at source on the web, I found:\n\n> kernel/signal.c:1042\n\n> * Note the silly behaviour of SIGCHLD: SIG_IGN means that the\n> * signal isn't actually ignored, but does automatic child\n> * reaping, while SIG_DFL is explicitly said by POSIX to force\n> * the signal to be ignored.\n\nHmm, interesting. 
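\n\nA trivial test program shows the difference (just a throwaway sketch to\nexperiment with, nothing meant for the tree):\n\n    #include <sys/types.h>\n    #include <sys/wait.h>\n    #include <errno.h>\n    #include <signal.h>\n    #include <unistd.h>\n\n    int\n    main(void)\n    {\n        /* With SIG_DFL the parent reaps the child and exits 0.  With\n         * SIG_IGN, kernels that auto-reap (Linux, and apparently some\n         * SysV derivatives such as Solaris) make wait() fail with ECHILD,\n         * so we exit 1; kernels with the BSD behaviour still exit 0. */\n        signal(SIGCHLD, SIG_IGN);    /* flip to SIG_DFL to compare */\n\n        if (fork() == 0)\n            _exit(0);                /* child exits at once */\n        sleep(1);                    /* give it time to do so */\n\n        return (wait(NULL) < 0 && errno == ECHILD) ? 1 : 0;\n    }\n\nRun it under each setting and check the exit status.\n\n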
If you'll recall, the start of this thread was a\nproposal to change our backends' handling of SIGCHLD from SIG_IGN to\nSIG_DFL (and get rid of explicit tests for ECHILD). I didn't quite see\nwhy changing the handler should make a difference, but above we seem to\nhave the smoking gun.\n\nWhich kernel, and which version, is the above quote from?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 30 Jul 2001 18:40:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SIGCHLD handler in Postgres C function. " }, { "msg_contents": "On Mon, 30 Jul 2001, Tom Lane wrote:\n\n> Bill Studenmund <[email protected]> writes:\n> > Looking at source on the web, I found:\n>\n> > kernel/signal.c:1042\n>\n> > * Note the silly behaviour of SIGCHLD: SIG_IGN means that the\n> > * signal isn't actually ignored, but does automatic child\n> > * reaping, while SIG_DFL is explicitly said by POSIX to force\n> > * the signal to be ignored.\n>\n> Hmm, interesting. If you'll recall, the start of this thread was a\n> proposal to change our backends' handling of SIGCHLD from SIG_IGN to\n> SIG_DFL (and get rid of explicit tests for ECHILD). I didn't quite see\n> why changing the handler should make a difference, but above we seem to\n> have the smoking gun.\n>\n> Which kernel, and which version, is the above quote from?\n\nLinux kernel source, 2.4.3, I think i386 version (though it should be the\nsame for this bit, it's supposed to be MI). Check out\nhttp://lxr.linux.no/source/\n\nI do recall the reason for the thread. :-) I see three choices:\n\n1) Change back to SIG_DFL for normal behavior. I think this will be fine\n\tas we run w/o problem on systems that lack this behavior. If\n\tturning off automatic child reaping would cause a problem, we'd\n\thave seen it already on the OSs which don't automatically reap\n\tchildren. Will a backend ever fork after it's started?\n\n2) Change to DFL around system() and then change back.\n\n3) Realize that ECHILD means that the child was auto-reaped (which is an\n\tok think and, I think, will only happen if the child exited w/o\n\terror).\n\nTake care,\n\nBill\n\n", "msg_date": "Mon, 30 Jul 2001 16:09:03 -0700 (PDT)", "msg_from": "Bill Studenmund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SIGCHLD handler in Postgres C function. " }, { "msg_contents": "Bill Studenmund <[email protected]> writes:\n> I see three choices:\n\n> 1) Change back to SIG_DFL for normal behavior. I think this will be fine\n> \tas we run w/o problem on systems that lack this behavior. If\n> \tturning off automatic child reaping would cause a problem, we'd\n> \thave seen it already on the OSs which don't automatically reap\n> \tchildren. Will a backend ever fork after it's started?\n\nBackends never fork more backends --- but there are some places that\nlaunch transient children and wait for them to finish. A non-transient\nsubprocess should always be launched by the postmaster, never by a\nbackend, IMHO.\n\n> 2) Change to DFL around system() and then change back.\n\nI think this is pretty ugly, and unnecessary.\n\n> 3) Realize that ECHILD means that the child was auto-reaped (which is an\n> \tok think and, I think, will only happen if the child exited w/o\n> \terror).\n\nThat's the behavior that's in place now, but I do not like it. 
We\nshould not need to code an assumption that \"this error isn't really\nan error\" --- especially when it only happens on some platforms.\nOn a non-Linux kernel, an ECHILD failure really would be a failure,\nand the existing code would fail to detect that there was a problem.\n\nBottom line: I like solution #1. Does anyone have an objection to it?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 30 Jul 2001 19:14:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SIGCHLD handler in Postgres C function. " }, { "msg_contents": "> Bill Studenmund <[email protected]> writes:\n> > Looking at source on the web, I found:\n> \n> > kernel/signal.c:1042\n> \n> > * Note the silly behaviour of SIGCHLD: SIG_IGN means that the\n> > * signal isn't actually ignored, but does automatic child\n> > * reaping, while SIG_DFL is explicitly said by POSIX to force\n> > * the signal to be ignored.\n> \n> Hmm, interesting. If you'll recall, the start of this thread was a\n> proposal to change our backends' handling of SIGCHLD from SIG_IGN to\n> SIG_DFL (and get rid of explicit tests for ECHILD). I didn't quite see\n> why changing the handler should make a difference, but above we seem to\n> have the smoking gun.\n> \n> Which kernel, and which version, is the above quote from?\n\nThe auto-reaping is standard SysV behavior, while BSD is really ignore. \nSee the Steven's Unix Programming book for more info.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 30 Jul 2001 19:41:06 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SIGCHLD handler in Postgres C function." }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> The auto-reaping is standard SysV behavior, while BSD is really ignore. \n\nYou'll recall the ECHILD exception was installed by Tatsuo after seeing\nproblems on Solaris. Evidently Solaris uses the auto-reap behavior too.\n\nI'm somewhat surprised that HPUX does not --- it tends to follow its\nSysV heritage when there's a conflict between that and BSD practice.\nGuess they went BSD on this one.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 30 Jul 2001 19:45:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SIGCHLD handler in Postgres C function. " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > The auto-reaping is standard SysV behavior, while BSD is really ignore. \n> \n> You'll recall the ECHILD exception was installed by Tatsuo after seeing\n> problems on Solaris. Evidently Solaris uses the auto-reap behavior too.\n\nSVr4/Solaris took the SysV behavior. Steven's didn't like it. :-)\n\n\n> I'm somewhat surprised that HPUX does not --- it tends to follow its\n> SysV heritage when there's a conflict between that and BSD practice.\n> Guess they went BSD on this one.\n\nI thought HPUX was mostly SysV tools on BSD kernel.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 30 Jul 2001 19:47:41 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SIGCHLD handler in Postgres C function." 
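The behaviour at issue is easy to reproduce outside the backend. The following throwaway test program (a sketch for illustration, not PostgreSQL code) calls system() twice, once with SIGCHLD at its default disposition and once with it set to SIG_IGN. On kernels with the SysV-style auto-reap semantics (Linux, Solaris) the second call should come back as -1 with errno set to ECHILD even though the command succeeded, because the wait() inside system() no longer finds a child to collect; on BSD-style systems both calls should report the normal exit status:

    #include <errno.h>
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static void
    try_system(const char *label)
    {
        int rc;

        errno = 0;
        rc = system("exit 0");          /* the command itself always succeeds */
        if (rc == -1)
            printf("%s: system() = -1, errno = %d (%s)\n",
                   label, errno, strerror(errno));
        else
            printf("%s: system() = %d (normal wait status)\n", label, rc);
    }

    int
    main(void)
    {
        signal(SIGCHLD, SIG_DFL);       /* default: the child stays a zombie
                                         * until system()'s wait() reaps it */
        try_system("SIG_DFL");

        signal(SIGCHLD, SIG_IGN);       /* SysV-style kernels auto-reap the
                                         * child, so the wait() inside
                                         * system() fails with ECHILD */
        try_system("SIG_IGN");
        return 0;
    }
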
}, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> I'm somewhat surprised that HPUX does not --- it tends to follow its\n>> SysV heritage when there's a conflict between that and BSD practice.\n>> Guess they went BSD on this one.\n\n> I thought HPUX was mostly SysV tools on BSD kernel.\n\nNo, it was all SysV (or maybe even older) to start with, and later on\nthey adopted BSD features wholesale. But where there's a conflict, it's\nstill mostly SysV.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 30 Jul 2001 20:53:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SIGCHLD handler in Postgres C function. " }, { "msg_contents": "[ redirected to pgsql-hackers for comment ]\n\nHelge Bahmann <[email protected]> writes:\n> On Tue, 31 Jul 2001, Tom Lane wrote:\n>> There is a more complete version of this capability in the Debian patch\n>> set. I think we've been waiting for Oliver to pull it out and submit it\n>> as a patch...\n\n> Ok found it; uses \"peer\" as a keyword instead of \"ident\" but basically\n> does the same thing. I think you can discard my patch then.\n\nWell, we need to talk about that. I like your idea of making ident auth\n\"just work\" on local connections better than Oliver's approach of\ninventing a separate auth-type keyword. So some kind of merger of the\ntwo patches seems attractive to me. But Oliver may feel that he has to\ncontinue to support the \"peer\" keyword on Debian anyway, for backwards\ncompatibility. If so, do we want different ways of doing the same thing\non different distros, or should we just follow the Debian precedent to\nkeep things ugly-but-consistent?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 31 Jul 2001 11:44:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Allow IDENT authentication on local connections (Linux\n\tonly)" }, { "msg_contents": "> [ redirected to pgsql-hackers for comment ]\n> \n> Helge Bahmann <[email protected]> writes:\n> > On Tue, 31 Jul 2001, Tom Lane wrote:\n> >> There is a more complete version of this capability in the Debian patch\n> >> set. I think we've been waiting for Oliver to pull it out and submit it\n> >> as a patch...\n> \n> > Ok found it; uses \"peer\" as a keyword instead of \"ident\" but basically\n> > does the same thing. I think you can discard my patch then.\n> \n> Well, we need to talk about that. I like your idea of making ident auth\n> \"just work\" on local connections better than Oliver's approach of\n> inventing a separate auth-type keyword. So some kind of merger of the\n> two patches seems attractive to me. But Oliver may feel that he has to\n> continue to support the \"peer\" keyword on Debian anyway, for backwards\n> compatibility. If so, do we want different ways of doing the same thing\n> on different distros, or should we just follow the Debian precedent to\n> keep things ugly-but-consistent?\n\nWe could easily just accept peer as a synonym for ident for a few\nreleases, because it fact our ident will become something that is used\nbeyond the identd server.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 31 Jul 2001 12:00:32 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Allow IDENT authentication on local connections (Linux\n\tonly)" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> ... But Oliver may feel that he has to\n>> continue to support the \"peer\" keyword on Debian anyway, for backwards\n>> compatibility. If so, do we want different ways of doing the same thing\n>> on different distros, or should we just follow the Debian precedent to\n>> keep things ugly-but-consistent?\n\n> We could easily just accept peer as a synonym for ident for a few\n> releases,\n\nOr let Oliver patch the Debian package to accept peer as a synonym for\nident. I don't see any real need to encourage the use of that keyword\nby non-Debianers...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 31 Jul 2001 12:22:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Allow IDENT authentication on local connections (Linux\n\tonly)" }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> ... But Oliver may feel that he has to\n> >> continue to support the \"peer\" keyword on Debian anyway, for backwards\n> >> compatibility. If so, do we want different ways of doing the same thing\n> >> on different distros, or should we just follow the Debian precedent to\n> >> keep things ugly-but-consistent?\n> \n> > We could easily just accept peer as a synonym for ident for a few\n> > releases,\n> \n> Or let Oliver patch the Debian package to accept peer as a synonym for\n> ident. I don't see any real need to encourage the use of that keyword\n> by non-Debianers...\n\nGood idea. I was hoping to reduce his patching but this way he can\ncontrol how long he keeps it active. However, the text is only one line\nin hba.c. Either way is fine.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 31 Jul 2001 12:24:30 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Allow IDENT authentication on local connections (Linux\n\tonly)" }, { "msg_contents": "Tom Lane wrote:\n >[ redirected to pgsql-hackers for comment ]\n >\n >Helge Bahmann <[email protected]> writes:\n >> On Tue, 31 Jul 2001, Tom Lane wrote:\n >>> There is a more complete version of this capability in the Debian patch\n >>> set. I think we've been waiting for Oliver to pull it out and submit it\n >>> as a patch...\n >\n >> Ok found it; uses \"peer\" as a keyword instead of \"ident\" but basically\n >> does the same thing. I think you can discard my patch then.\n >\n >Well, we need to talk about that. I like your idea of making ident auth\n >\"just work\" on local connections better than Oliver's approach of\n >inventing a separate auth-type keyword. So some kind of merger of the\n >two patches seems attractive to me. But Oliver may feel that he has to\n >continue to support the \"peer\" keyword on Debian anyway, for backwards\n >compatibility. If so, do we want different ways of doing the same thing\n >on different distros, or should we just follow the Debian precedent to\n >keep things ugly-but-consistent?\n\nThis change has only been made in the unstable release; so I don't mind\nif peer and ident are folded together. 
Anyone running unstable knows\nthe world may turn upside down beneath him!\n\nSo if you have a patch to do that, go ahead.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Have not I commanded thee? Be strong and of a good \n courage; be not afraid, neither be thou dismayed; for \n the LORD thy God is with thee whithersoever thou \n goest.\" Joshua 1:9 \n\n\n", "msg_date": "Tue, 31 Jul 2001 20:45:58 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] Allow IDENT authentication on local connections (Linux\n\tonly)" }, { "msg_contents": "\"Oliver Elphick\" <[email protected]> writes:\n> This change has only been made in the unstable release; so I don't mind\n> if peer and ident are folded together. Anyone running unstable knows\n> the world may turn upside down beneath him!\n\n> So if you have a patch to do that, go ahead.\n\nSounds great. Helge, the main things your patch was missing were\nautoconf support and documentation fixes. Do you want to add those\n(possibly stealing liberally from the Debian patches) and resubmit?\n\nBTW, Bruce has recently committed some wholesale changes in hba.c, so a\npatch against 7.1.2 likely won't apply cleanly. If you could do your\npatch as a diff against CVS tip, it'd ease applying it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 31 Jul 2001 16:00:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Allow IDENT authentication on local connections (Linux\n\tonly)" }, { "msg_contents": "> \"Oliver Elphick\" <[email protected]> writes:\n> > This change has only been made in the unstable release; so I don't mind\n> > if peer and ident are folded together. Anyone running unstable knows\n> > the world may turn upside down beneath him!\n> \n> > So if you have a patch to do that, go ahead.\n> \n> Sounds great. Helge, the main things your patch was missing were\n> autoconf support and documentation fixes. Do you want to add those\n> (possibly stealing liberally from the Debian patches) and resubmit?\n> \n> BTW, Bruce has recently committed some wholesale changes in hba.c, so a\n> patch against 7.1.2 likely won't apply cleanly. If you could do your\n> patch as a diff against CVS tip, it'd ease applying it.\n\nI can merge into hba.conf if needed.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 31 Jul 2001 17:53:05 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Allow IDENT authentication on local connections (Linux\n\tonly)" }, { "msg_contents": "\nCan you send over your version for review. We can edit the 'peer' part.\n\n\n> Tom Lane wrote:\n> >[ redirected to pgsql-hackers for comment ]\n> >\n> >Helge Bahmann <[email protected]> writes:\n> >> On Tue, 31 Jul 2001, Tom Lane wrote:\n> >>> There is a more complete version of this capability in the Debian patch\n> >>> set. I think we've been waiting for Oliver to pull it out and submit it\n> >>> as a patch...\n> >\n> >> Ok found it; uses \"peer\" as a keyword instead of \"ident\" but basically\n> >> does the same thing. 
I think you can discard my patch then.\n> >\n> >Well, we need to talk about that. I like your idea of making ident auth\n> >\"just work\" on local connections better than Oliver's approach of\n> >inventing a separate auth-type keyword. So some kind of merger of the\n> >two patches seems attractive to me. But Oliver may feel that he has to\n> >continue to support the \"peer\" keyword on Debian anyway, for backwards\n> >compatibility. If so, do we want different ways of doing the same thing\n> >on different distros, or should we just follow the Debian precedent to\n> >keep things ugly-but-consistent?\n> \n> This change has only been made in the unstable release; so I don't mind\n> if peer and ident are folded together. Anyone running unstable knows\n> the world may turn upside down beneath him!\n> \n> So if you have a patch to do that, go ahead.\n> \n> -- \n> Oliver Elphick [email protected]\n> Isle of Wight http://www.lfix.co.uk/oliver\n> PGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\n> GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n> ========================================\n> \"Have not I commanded thee? Be strong and of a good \n> courage; be not afraid, neither be thou dismayed; for \n> the LORD thy God is with thee whithersoever thou \n> goest.\" Joshua 1:9 \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 31 Jul 2001 19:11:34 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [PATCHES] Allow IDENT authentication on local connections\n\t(Linux only)" }, { "msg_contents": "BTW, while digging through my mail archives I discovered that Oliver\n*did* already extract his \"peer\" auth patch and submit it as a proposed\npatch --- see the pghackers archives for 3-May-2001. At the time I\nthink we were concerned about portability issues, but as long as it's\nappropriately autoconf'd and documented, I see no real objection to\nsupporting SO_PEERCRED authentication.\n\nI do still like Helge's API (use \"ident\") better than adding another\nauth keyword, though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 31 Jul 2001 19:39:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Allow IDENT authentication on local connections (Linux\n\tonly)" }, { "msg_contents": "> BTW, while digging through my mail archives I discovered that Oliver\n> *did* already extract his \"peer\" auth patch and submit it as a proposed\n> patch --- see the pghackers archives for 3-May-2001. At the time I\n> think we were concerned about portability issues, but as long as it's\n> appropriately autoconf'd and documented, I see no real objection to\n> supporting SO_PEERCRED authentication.\n> \n> I do still like Helge's API (use \"ident\") better than adding another\n> auth keyword, though.\n\nThere is a Solaris API someone submitted a a month ago that was sort of\nrejected too. I will have to dig that one up.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 31 Jul 2001 19:42:37 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Allow IDENT authentication on local connections (Linux\n\tonly)" }, { "msg_contents": "> BTW, while digging through my mail archives I discovered that Oliver\n> *did* already extract his \"peer\" auth patch and submit it as a proposed\n> patch --- see the pghackers archives for 3-May-2001. At the time I\n> think we were concerned about portability issues, but as long as it's\n> appropriately autoconf'd and documented, I see no real objection to\n> supporting SO_PEERCRED authentication.\n> \n> I do still like Helge's API (use \"ident\") better than adding another\n> auth keyword, though.\n\nCan someone find the Solaris patch submitted a few months ago that did a\nsimilar thing? I can't seem to find it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 1 Aug 2001 22:44:54 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Allow IDENT authentication on local connections (Linux\n\tonly)" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Can someone find the Solaris patch submitted a few months ago that did a\n> similar thing? I can't seem to find it.\n\nI couldn't find one either. I found a couple of unsupported assertions\nthat Solaris and *BSD had SO_PEERCRED, so the Linux patch might work\nfor them. We'll find out soon enough, I suppose.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 02 Aug 2001 00:24:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Allow IDENT authentication on local connections (Linux\n\tonly)" }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Can someone find the Solaris patch submitted a few months ago that did a\n> > similar thing? I can't seem to find it.\n> \n> I couldn't find one either. I found a couple of unsupported assertions\n> that Solaris and *BSD had SO_PEERCRED, so the Linux patch might work\n> for them. We'll find out soon enough, I suppose.\n\nNot here on BSD/OS. I know I saw a Solaris patch that did exactly this\nand I questioned it because it was only for Solaris. Now that I\nresearch and I see different OS's doing this different ways, and I have\nmucked up hba.c already, it seemed like a good patch.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 2 Aug 2001 06:22:20 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [PATCHES] Allow IDENT authentication on local connections\n\t(Linux only)" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Not here on BSD/OS. I know I saw a Solaris patch that did exactly this\n> and I questioned it because it was only for Solaris. 
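For reference, the mechanism these patches rely on is quite small. Below is a sketch of the Linux side only (the helper name is hypothetical; this is not the actual hba.c code): on a connected AF_UNIX socket, getsockopt(SO_PEERCRED) returns the pid, effective uid and gid the peer had at connect() time, and the uid can then be mapped to a user name with getpwuid():

    #define _GNU_SOURCE             /* needed for struct ucred on Linux */
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <pwd.h>
    #include <stdio.h>

    /* "sock" is assumed to be a connected AF_UNIX socket descriptor */
    static int
    peer_user_name(int sock, char *name, size_t namelen)
    {
        struct ucred cred;
        socklen_t len = sizeof(cred);
        struct passwd *pw;

        if (getsockopt(sock, SOL_SOCKET, SO_PEERCRED, &cred, &len) < 0)
            return -1;              /* not supported, or not a unix socket */

        pw = getpwuid(cred.uid);    /* peer's effective uid at connect() time */
        if (pw == NULL)
            return -1;

        snprintf(name, namelen, "%s", pw->pw_name);
        return 0;
    }

No identd server is involved at all; the kernel answers directly, which is what makes this usable for local connections.
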
Now that I\n> research and I see different OS's doing this different ways, and I have\n> mucked up hba.c already, it seemed like a good patch.\n\nWell, if someone can come up with a way to do the same thing on other\nplatforms, we can easily fold it in.\n\nNow that I think about it, it's silly to #ifdef SO_PEERCRED in three\nplaces. We can reduce that to one place: make ident_unix always exist,\nand have it do the test for supported-or-not:\n\n\t#ifdef SO_PEERCRED\n\t\tdo it the Linux way\n\t#else\n\t\treport error \"IDENT not supported on local connections\"\n\t#endif\n\nThen adding variants for other platforms is just a matter of more ifdefs\nin the one place. I'll take care of doing this in a little bit...\n\nBTW, a question for Linuxers: Oliver's older patch did\nsetsockopt(SO_PASSCRED) before getsockopt(SO_PEERCRED), whereas Helge's\nversion did not. I included the PASSCRED step in what I committed,\nbecause the Linux docs I had at hand implied it was needed. But\nevidently it worked without it for Helge. Is there some variation among\nLinux versions as to whether PASSCRED is enabled by default?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 02 Aug 2001 09:00:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [PATCHES] Allow IDENT authentication on local connections\n\t(Linux only)" }, { "msg_contents": "\n> You'll recall the ECHILD exception was installed by Tatsuo after seeing\n> problems on Solaris. Evidently Solaris uses the auto-reap behavior too.\n> \n> I'm somewhat surprised that HPUX does not --- it tends to follow its\n> SysV heritage when there's a conflict between that and BSD practice.\n> Guess they went BSD on this one.\n\nIf the SysV behaviour of automatically reaping child processes is\nrequired on HP-UX the handler for SIGCHLD can be set to SIG_IGN.\nWhen the handler is SIG_DFL the signal will be ignored but child\nprocesses won't be reaped automatically. This is the same behaviour\nthat Stevens describes for SysVr4. (\"Advanced Programming in the Unix\nEnvironment\", section 10.7.)\n\nWhat different implementations of system(3) with different settings of\nSIGCHLD is another can of worms, and one I've not investigated. :-)\n\nRegards,\n\nGiles\n\n\n\n\n\n\n\n", "msg_date": "Sat, 04 Aug 2001 10:56:57 +1000", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SIGCHLD handler in Postgres C function. " }, { "msg_contents": "Tom Lane writes:\n\n> Well, we need to talk about that. I like your idea of making ident auth\n> \"just work\" on local connections better than Oliver's approach of\n> inventing a separate auth-type keyword.\n\nThis is exactly what I would not like to see. \"ident\" defines a specific\nprotocol, with an ident server. ident over something not TCP/IP doesn't\nmake sense, it could confuse admins. Just because it works similar\ndoesn't mean it is the same. In particular, the security issues are\ncompletely different.\n\n-- \nPeter Eisentraut [email protected] http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sun, 5 Aug 2001 22:00:53 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Allow IDENT authentication on local connections (Linux\n\tonly)" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n>> Well, we need to talk about that. 
I like your idea of making ident auth\n>> \"just work\" on local connections better than Oliver's approach of\n>> inventing a separate auth-type keyword.\n\n> This is exactly what I would not like to see. \"ident\" defines a specific\n> protocol, with an ident server. ident over something not TCP/IP doesn't\n> make sense, it could confuse admins. Just because it works similar\n> doesn't mean it is the same. In particular, the security issues are\n> completely different.\n\nWell, ISTM this is a documentation issue. We've already committed the\npatch using \"ident\" as the keyword, so I'd prefer to leave it that way\nand improve the docs as necessary.\n\n\t\t\tregards, tom lane\n\nPS: welcome back! Hope you had a pleasant vacation.\n", "msg_date": "Sun, 05 Aug 2001 16:09:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Allow IDENT authentication on local connections (Linux\n\tonly)" }, { "msg_contents": "I have had an idea how the LIKE optimization problem could be solved.\nThe problem was that given the expression\n\n A LIKE (P || '%')\n\nwhere A is a column reference and P is a constant not containing\nwildcards, we wanted to find X and Y such that\n\n X <= A and A <= Y\n\nwhere X and Y are calculated from P and the bound is usefully tight.\n\nNote that X <= A is really strcoll(X, A) <= 0 and strcoll(X, A) is really\nstrcmp(strxfrm(X), strxfrm(A)) (the prototype of strxfrm() is different in\nC, of course).\n\nLet $<=$ be a non-locale based comparison (i.e., plain strcmp()).\n\nThe we can say that\n\n strxfrm(P) $<=$ strxfrm(A) and\n strxfrm(A) $<=$ (strxfrm(P) + 1)\n\nwhere \"+ 1\" means adding 1 to the last character and accounting for\noverflow (like considering the string a base-256 number). Basically, this\nis the funny-collation-safe equivalent to A LIKE 'foo%' yielding 'foo' <=\nA <= 'fop'.\n\nWe'd need to implement the strxfrm() function in SQL and the $<=$\noperator, both of which are trivial. The index would have to be in terms\nof strxfrm(). There might be other issues, but they could be solved\nalgorithmically, I suppose.\n\nWhat do you think?\n\n-- \nPeter Eisentraut [email protected] http://funkturm.homeip.net/~peter\n\n", "msg_date": "Mon, 6 Aug 2001 00:28:20 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Possible solution for LIKE optimization" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> I have had an idea how the LIKE optimization problem could be solved.\n\nHmm ... so in a non-ASCII locale, we'd have to look for an index on\nstrxfrm(A) rather than directly on A. And the index would need to\nuse a nonstandard operator set --- ie, *non* locale aware comparison\noperators (which might be useful for other purposes anyway).\n\nInteresting thought. 
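The premise here, that strcoll() on the original strings and strcmp() on their strxfrm() images order the same way, is what the C library guarantees, and it is easy to sanity-check in a particular locale with a throwaway program such as the sketch below (not proposed code; the buffer sizes are arbitrary):

    #include <locale.h>
    #include <stdio.h>
    #include <string.h>

    int
    main(int argc, char **argv)
    {
        const char *a = (argc > 1) ? argv[1] : "foo";
        const char *b = (argc > 2) ? argv[2] : "fop";
        char xa[1024], xb[1024];
        size_t la, lb;

        setlocale(LC_COLLATE, "");      /* collate per the environment */

        la = strxfrm(xa, a, sizeof(xa));
        lb = strxfrm(xb, b, sizeof(xb));
        if (la >= sizeof(xa) || lb >= sizeof(xb))
        {
            fprintf(stderr, "buffers too small for strxfrm output\n");
            return 1;
        }

        /* only the signs are expected to agree, not the exact values */
        printf("strcoll(a,b)              = %d\n", strcoll(a, b));
        printf("strcmp(strxfrm a, b)      = %d\n", strcmp(xa, xb));
        printf("strxfrm sizes: %lu -> %lu, %lu -> %lu bytes\n",
               (unsigned long) strlen(a), (unsigned long) la,
               (unsigned long) strlen(b), (unsigned long) lb);
        return 0;
    }

Running it in a few locales also shows how much larger the strxfrm() output tends to be than the input, which matters if the transformed value is what would end up in the index.
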
I'm not entirely sure how we'd teach the planner\nto do this, but that's probably solvable.\n\nA more significant problem is that I'm still not convinced this gets the\njob done, because of the problem of multi-character collation elements.\nIf \"A LIKE 'FOOS%'\" should match FOOSS, but SS is treated specially by\nthe collation rules, does this scheme work?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 05 Aug 2001 19:01:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible solution for LIKE optimization " }, { "msg_contents": "Tom Lane writes:\n\n> Peter Eisentraut <[email protected]> writes:\n> > I have had an idea how the LIKE optimization problem could be solved.\n>\n> Hmm ... so in a non-ASCII locale, we'd have to look for an index on\n> strxfrm(A) rather than directly on A. And the index would need to\n> use a nonstandard operator set --- ie, *non* locale aware comparison\n> operators (which might be useful for other purposes anyway).\n\nWait, why isn't that the solution in the first place. Let's build the\nindex with an opclass that uses plain strcmp comparison. Then you can\ncompute the bounds using the method 'foo' <= 'foo%' <= 'fop'. We don't\nneed to trick the locale facilities, we just avoid using them. LIKE is\ndefined in terms of character elements, not collation elements, so that's\nokay.\n\n-- \nPeter Eisentraut [email protected] http://funkturm.homeip.net/~peter\n\n", "msg_date": "Mon, 6 Aug 2001 03:45:14 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible solution for LIKE optimization " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Wait, why isn't that the solution in the first place. Let's build the\n> index with an opclass that uses plain strcmp comparison.\n\nBy George, I think you've got it! All we need is comparison ops and\nan opclass that use strcmp, even when USE_LOCALE is defined. Then we\ndocument \"here's how you make a LIKE-compatible index in non-ASCII\nlocales\", and away we go.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 05 Aug 2001 21:52:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible solution for LIKE optimization " }, { "msg_contents": "> Tom Lane writes:\n> \n> > Well, we need to talk about that. I like your idea of making ident auth\n> > \"just work\" on local connections better than Oliver's approach of\n> > inventing a separate auth-type keyword.\n> \n> This is exactly what I would not like to see. \"ident\" defines a specific\n> protocol, with an ident server. ident over something not TCP/IP doesn't\n> make sense, it could confuse admins. Just because it works similar\n> doesn't mean it is the same. In particular, the security issues are\n> completely different.\n\nPeter has a point here. The only way to save the 'ident' keyword is to\nmake it mean 'auto-identify' rather than identd.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 5 Aug 2001 22:55:48 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [PATCHES] Allow IDENT authentication on local connections\n\t(Linux only)" }, { "msg_contents": "Tom Lane wrote:\n> \n> Peter Eisentraut <[email protected]> writes:\n> > Wait, why isn't that the solution in the first place. Let's build the\n> > index with an opclass that uses plain strcmp comparison.\n> \n> By George, I think you've got it! All we need is comparison ops and\n> an opclass that use strcmp, even when USE_LOCALE is defined. Then we\n> document \"here's how you make a LIKE-compatible index in non-ASCII\n> locales\", and away we go.\n> \n\nDo we have to make 2 indexes for non_ASCII text field ?\n\nregards,\nHiroshi Inoue\n", "msg_date": "Mon, 06 Aug 2001 16:19:19 +0900", "msg_from": "Hiroshi Inoue <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible solution for LIKE optimization" }, { "msg_contents": "Hiroshi Inoue <[email protected]> writes:\n>> Peter Eisentraut <[email protected]> writes:\n> Wait, why isn't that the solution in the first place. Let's build the\n> index with an opclass that uses plain strcmp comparison.\n\n> Do we have to make 2 indexes for non_ASCII text field ?\n\nYou would if you want to use indexscans for both LIKE and \"x < 'FOO'\"\n(ie, locale-aware comparisons). Which is not great, but I think we've\nfinally seen the light: a locale-sorted index is just plain not useful\nfor LIKE.\n\nThis discussion has restored my faith in index opclasses; finally we\nhave a real-world application of 'em that we can point to ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 06 Aug 2001 10:16:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible solution for LIKE optimization " }, { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> Hiroshi Inoue <[email protected]> writes:\n> >> Peter Eisentraut <[email protected]> writes:\n> > Wait, why isn't that the solution in the first place. Let's build the\n> > index with an opclass that uses plain strcmp comparison.\n> \n> > Do we have to make 2 indexes for non_ASCII text field ?\n> \n> You would if you want to use indexscans for both LIKE and \"x < 'FOO'\"\n> (ie, locale-aware comparisons). \n\nAnd ORDER BY ?\n\n> Which is not great, but I think we've\n> finally seen the light: a locale-sorted index is just plain not useful\n> for LIKE.\n\nI'm not familiar with non_ASCII locale.\nIs 'ss' always guaranteed to be LIKE 's%' for example ?\n\nregards,\nHiroshi Inoue\n", "msg_date": "Tue, 7 Aug 2001 02:47:10 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Possible solution for LIKE optimization " }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n>>> Do we have to make 2 indexes for non_ASCII text field ?\n>> \n>> You would if you want to use indexscans for both LIKE and \"x < 'FOO'\"\n>> (ie, locale-aware comparisons). \n\n> And ORDER BY ?\n\nWell, the assumption is that we'd make a second set of string comparison\noperators that are defined as not-locale-aware. Names still to be\nchosen, but let's suppose they're $=$, $<$, $<=$, etc. Then\n\n\tSELECT ... ORDER BY foo;\n\nwould want to use a plain index (defined using locale-aware comparison),\nwhereas\n\n\tSELECT ... ORDER BY foo USING $<$;\n\nwould want to use a non-locale-aware index. 
So you could use such an\nindex for sorting, as long as you were content to have non-locale-aware\noutput ordering.\n\n> I'm not familiar with non_ASCII locale.\n> Is 'ss' always guaranteed to be LIKE 's%' for example ?\n\nI'd assume so, but I'd be interested to hear whether native speakers\nof German think that that's appropriate in their locale ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 06 Aug 2001 14:31:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible solution for LIKE optimization " }, { "msg_contents": "Hiroshi Inoue writes:\n\n> I'm not familiar with non_ASCII locale.\n> Is 'ss' always guaranteed to be LIKE 's%' for example ?\n\nYes. LIKE doesn't use any collation rules, since it doesn't do any\ncollating.\n\n-- \nPeter Eisentraut [email protected] http://funkturm.homeip.net/~peter\n\n", "msg_date": "Mon, 6 Aug 2001 21:07:09 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Possible solution for LIKE optimization " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Hiroshi Inoue writes:\n>> I'm not familiar with non_ASCII locale.\n>> Is 'ss' always guaranteed to be LIKE 's%' for example ?\n\n> Yes. LIKE doesn't use any collation rules, since it doesn't do any\n> collating.\n\nOn the other hand, LIKE *is* multibyte aware. So the hypothetical\nnon-locale-aware comparison operators would need to be aware of\nmultibyte character sets even though not aware of locale. And the\n\"add one\" operator that we postulated for the LIKE index optimization\nneeds to be able to increment a multibyte character.\n\nThis seems doable, but the sort order of such a comparison function\nmight not be very pleasant, depending on what character set you are\nusing.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 06 Aug 2001 15:15:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible solution for LIKE optimization " }, { "msg_contents": "Tom Lane writes:\n\n> On the other hand, LIKE *is* multibyte aware. So the hypothetical\n> non-locale-aware comparison operators would need to be aware of\n> multibyte character sets even though not aware of locale. And the\n> \"add one\" operator that we postulated for the LIKE index optimization\n> needs to be able to increment a multibyte character.\n\nBoth of these are not hard if you know how strcmp operates. It would also\nbe irrelevant whether \"add one\" to a multibyte character yields another\nvalid multibyte character. strcmp doesn't care.\n\n-- \nPeter Eisentraut [email protected] http://funkturm.homeip.net/~peter\n\n", "msg_date": "Mon, 6 Aug 2001 21:47:01 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible solution for LIKE optimization " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Tom Lane writes:\n>> On the other hand, LIKE *is* multibyte aware. So the hypothetical\n>> non-locale-aware comparison operators would need to be aware of\n>> multibyte character sets even though not aware of locale. And the\n>> \"add one\" operator that we postulated for the LIKE index optimization\n>> needs to be able to increment a multibyte character.\n\n> Both of these are not hard if you know how strcmp operates. It would also\n> be irrelevant whether \"add one\" to a multibyte character yields another\n> valid multibyte character. 
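Under plain byte-wise (strcmp) ordering the "add one" step really is simple. A hypothetical helper along these lines (a sketch only, not backend code) produces, for a given LIKE prefix, the smallest string greater than everything that starts with that prefix; trailing 0xFF bytes have to be dropped first, and it does not matter whether the result is a valid multibyte character, since the comparisons never interpret the bytes:

    #include <stdlib.h>
    #include <string.h>

    /*
     * Smallest string greater than every string starting with "prefix",
     * under byte-wise ordering.  Returns a malloc'd result, or NULL if no
     * finite bound exists (empty prefix or all 0xFF bytes) or malloc fails;
     * in that case only the lower bound "s >= prefix" can be used.
     */
    static char *
    prefix_upper_bound(const char *prefix)
    {
        size_t len = strlen(prefix);
        char *bound = malloc(len + 1);

        if (bound == NULL)
            return NULL;
        memcpy(bound, prefix, len + 1);

        while (len > 0)
        {
            if ((unsigned char) bound[len - 1] != 0xFF)
            {
                bound[len - 1] = (char) ((unsigned char) bound[len - 1] + 1);
                bound[len] = '\0';   /* e.g. "foo" becomes "fop" */
                return bound;
            }
            len--;                   /* cannot bump 0xFF, shorten instead */
        }
        free(bound);
        return NULL;
    }

With comparison operators and an opclass based on strcmp(), a clause like col LIKE 'foo%' could then be rewritten as col >= 'foo' AND col < 'fop' for the index scan.
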
strcmp doesn't care.\n\nI imagine we can come up with a definition that makes LIKE optimization\nwork. I was wondering more about Hiroshi's implied question: would such\nan index have any use for sorting? Or would the induced sort order (on\nmultibyte character sets) look too weird to be of any use?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 06 Aug 2001 15:49:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible solution for LIKE optimization " }, { "msg_contents": "\n[ I realise the discussion has left strxfrm(), but for the archives\n if nothing else ... ]\n\nPeter Eisentraut <[email protected]> wrote:\n\n> We'd need to implement the strxfrm() function in SQL and the $<=$\n> operator, both of which are trivial. The index would have to be in terms\n> of strxfrm(). There might be other issues, but they could be solved\n> algorithmically, I suppose.\n\nImplementations of strxfrm() that I've looked at have had result data\nthat is three or four times larger than then input string -- quite a\npenalty in some situations.\n\nRegards,\n\nGiles\n\n\n", "msg_date": "Tue, 07 Aug 2001 07:46:48 +1000", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible solution for LIKE optimization " }, { "msg_contents": "Giles Lean <[email protected]> writes:\n> Implementations of strxfrm() that I've looked at have had result data\n> that is three or four times larger than then input string -- quite a\n> penalty in some situations.\n\nEspecially so given that we don't have TOAST for indexes, so the indexed\nvalue can't exceed about 2700 bytes (for btree and an 8K block size).\nYou are allowed to compress first, so that's not a hard limit, but it\ncould still be a problem.\n\nI like the non-locale-aware-opclass idea much better than the original.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 06 Aug 2001 18:21:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible solution for LIKE optimization " }, { "msg_contents": "Peter Eisentraut wrote:\n> \n> Hiroshi Inoue writes:\n> \n> > I'm not familiar with non_ASCII locale.\n> > Is 'ss' always guaranteed to be LIKE 's%' for example ?\n> \n> Yes. 
LIKE doesn't use any collation rules, since it doesn't do any\n> collating.\n> \n\nHmm I see the description like the following in SQL99 though I\ndon't understand the meaning.\n\ni) If <escape character> is not specified, then the collating\n sequence used for the <like predicate> is determined by Table 3,\n ‘‘Collating sequence usage for comparisons’’, taking <character\n match value> as comparand 1 (one) and <character pattern> as\n comparand 2.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Tue, 07 Aug 2001 08:40:46 +0900", "msg_from": "Hiroshi Inoue <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible solution for LIKE optimization" }, { "msg_contents": "On Mon, 6 Aug 2001, Tom Lane wrote:\n\n> Giles Lean <[email protected]> writes:\n> > Implementations of strxfrm() that I've looked at have had result data\n> > that is three or four times larger than then input string -- quite a\n> > penalty in some situations.\n>\n> Especially so given that we don't have TOAST for indexes, so the indexed\n> value can't exceed about 2700 bytes (for btree and an 8K block size).\n> You are allowed to compress first, so that's not a hard limit, but it\n> could still be a problem.\n>\n> I like the non-locale-aware-opclass idea much better than the original.\n\nDoes this means that if implemented we could create indexes for\ndifferent columns with/without locale support ? It's pain currently\nthat I had to enable locale support while actually I need\nit only for several columns. Gnu sort on Linux 10 times slow if\nI use LC_ALL other than C !\n\n\tRegards,\n\n\t\tOleg\n\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Tue, 7 Aug 2001 22:39:33 +0300 (GMT)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible solution for LIKE optimization " }, { "msg_contents": "\n\nOn 30 Jul 2001 Tom Lane wrote:\n\n> [email protected] writes:\n> > gcc2.95; hppa2.0-hp-hpux10.20, postgresql 7.1.2\n> > make check checks the first group all right (tests check out ok and\n> > a process \"postgres:\" shows up in the process list.\n> > When \"make check\" displays \"parallel group (18 tests): point lseg\"\n> > the \"postgres:\" process disappears from the process list and\n> > the test does not complete. Or, rather, not until I lose\n> > patience (15min). Incidentally the shell process that make started\n> > keeps running and eats up all the CPU time.\n> \n> Known bug in HPUX's Bourne shell --- evidently it can't cope with so\n> many children in parallel. It works if you do\n> \n> \tgmake SHELL=/bin/ksh check\n> \n> or if you run the non-parallel \"installcheck\". 
See FAQ_HPUX.\n> \n> If I still had a support contract in force with HP, I'd file a bug\n> report...\n\nI reduced the pg_regress script to a test case and submitted a CR:\n\n JAGad84609 /usr/bin/sh hang on PostgreSQL regression test\n\nThe defect is filed against HP-UX 11i, but I noted that the defect is\npresent on 11.00 and 10.20 as well.\n\nRegards,\n\nGiles\n", "msg_date": "Wed, 15 Aug 2001 08:24:33 +1000", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUGS] pg_regress fails at \"point\" test " }, { "msg_contents": "Well, ability to lock only unlocked rows in select for update is useful,\nof course. But uniq features of user'locks are:\n\n1. They don't interfere with normal locks hold by session/transaction.\n2. Share lock is available.\n3. User can lock *and unlock objects* inside transaction, which is not\n (and will not be) available with locks held by transactions.\n\nThey are interesting too and proposed implementation will not impact lock\nmanager (just additional 4 bytes in LOCKTAG => same size of LOCKTAG\non machines with 8 bytes alignment).\n\n> An interesting method would be to allow users to simply avoid locked\n> rows:\n>\n> SELECT * FROM queue FOR UPDATE LIMIT 1 UNLOCKED;\n>\n> Unlocked, return immediately, whatever could be used as a keyword to\n> avoid rows that are locked (skipping over them).\n>\n> For update locks the row of course. Currently for the above type of\n> thing I issue an ORDER BY random() which avoids common rows enough,\n> the queue agent dies if queries start taking too long (showing it's\n> waiting for other things) and tosses up new copies if it goes a while\n> without waiting at all (showing increased load).\n>\n> --\n> Rod Taylor\n>\n> This message represents the official view of the voices in my head\n>\n> ----- Original Message -----\n> From: \"Mikheev, Vadim\" <[email protected]>\n> To: <[email protected]>\n> Sent: Friday, August 17, 2001 2:48 PM\n> Subject: [HACKERS] User locks code\n>\n>\n> > 1. Just noted this in contrib/userlock/README.user_locks:\n> >\n> > > User locks, by Massimo Dal Zotto <[email protected]>\n> > > Copyright (C) 1999, Massimo Dal Zotto <[email protected]>\n> > >\n> > > This software is distributed under the GNU General Public License\n> > > either version 2, or (at your option) any later version.\n> >\n> > Well, anyone can put code into contrib with whatever license\n> > he/she want but \"user locks\" package includes interface\n> > functions in contrib *and* changes in our lock manager, ie\n> > changes in backend code. I wonder if backend' part of package\n> > is covered by the same license above? And is it good if yes?\n> >\n> > 2. Not good implementation, imho.\n> >\n> > It's too complex (separate lock method table, etc). Much cleaner\n> > would be implement this feature the same way as transactions\n> > wait other transaction commit/abort: by locking objects in\n> > pseudo table. We could get rid of offnum and lockmethod from\n> > LOCKTAG and add\n> >\n> > struct\n> >\n\n> > Oid RelId;\n> > Oid ObjId;\n> > } userObjId;\n> >\n> > to objId union of LOCKTAG.\n> >\n> > This way user could lock whatever object he/she want in specified\n> > table and note that we would be able to use table access rights to\n> > control if user allowed to lock objects in table - missed in 1.\n> >\n> > One could object that 1. 
is good because user locks never wait.\n> > I argue that \"never waiting\" for lock is same bad as \"always\n> waiting\".\n> > Someday we'll have time-wait etc features for general lock method\n> > and everybody will be happy -:)\n> >\n> > Comments?\n> >\n> > Vadim\n> > P.S. I could add 2. very fast, no matter if we'll keep 1. or not.\n> >\n> > ---------------------------(end of\n> broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to [email protected] so that your\n> > message can get through to the mailing list cleanly\n> >\n>\n\n\n", "msg_date": "Sun, 19 Aug 2001 11:20:12 -0700", "msg_from": "\"Vadim Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: User locks code" }, { "msg_contents": "> Well, ability to lock only unlocked rows in select for update is useful,\n> of course. But uniq features of user'locks are:\n> \n> 1. They don't interfere with normal locks hold by session/transaction.\n> 2. Share lock is available.\n> 3. User can lock *and unlock objects* inside transaction, which is not\n> (and will not be) available with locks held by transactions.\n> \n> They are interesting too and proposed implementation will not impact lock\n> manager (just additional 4 bytes in LOCKTAG => same size of LOCKTAG\n> on machines with 8 bytes alignment).\n> \n> > An interesting method would be to allow users to simply avoid locked\n> > rows:\n> >\n> > SELECT * FROM queue FOR UPDATE LIMIT 1 UNLOCKED;\n> >\n> > Unlocked, return immediately, whatever could be used as a keyword to\n> > avoid rows that are locked (skipping over them).\n> >\n> > For update locks the row of course. Currently for the above type of\n> > thing I issue an ORDER BY random() which avoids common rows enough,\n> > the queue agent dies if queries start taking too long (showing it's\n> > waiting for other things) and tosses up new copies if it goes a while\n> > without waiting at all (showing increased load).\n> >\n> > --\n> > Rod Taylor\n> >\n> > This message represents the official view of the voices in my head\n> >\n> > ----- Original Message -----\n> > From: \"Mikheev, Vadim\" <[email protected]>\n> > To: <[email protected]>\n> > Sent: Friday, August 17, 2001 2:48 PM\n> > Subject: [HACKERS] User locks code\n> >\n> >\n> > > 1. Just noted this in contrib/userlock/README.user_locks:\n> > >\n> > > > User locks, by Massimo Dal Zotto <[email protected]>\n> > > > Copyright (C) 1999, Massimo Dal Zotto <[email protected]>\n> > > >\n> > > > This software is distributed under the GNU General Public License\n> > > > either version 2, or (at your option) any later version.\n> > >\n> > > Well, anyone can put code into contrib with whatever license\n> > > he/she want but \"user locks\" package includes interface\n> > > functions in contrib *and* changes in our lock manager, ie\n> > > changes in backend code. I wonder if backend' part of package\n> > > is covered by the same license above? And is it good if yes?\n> > >\n> > > 2. Not good implementation, imho.\n> > >\n> > > It's too complex (separate lock method table, etc). Much cleaner\n> > > would be implement this feature the same way as transactions\n> > > wait other transaction commit/abort: by locking objects in\n> > > pseudo table. 
We could get rid of offnum and lockmethod from\n> > > LOCKTAG and add\n> > >\n> > > struct\n> > >\n> \n> > > Oid RelId;\n> > > Oid ObjId;\n> > > } userObjId;\n> > >\n> > > to objId union of LOCKTAG.\n> > >\n> > > This way user could lock whatever object he/she want in specified\n> > > table and note that we would be able to use table access rights to\n> > > control if user allowed to lock objects in table - missed in 1.\n> > >\n> > > One could object that 1. is good because user locks never wait.\n> > > I argue that \"never waiting\" for lock is same bad as \"always\n> > waiting\".\n> > > Someday we'll have time-wait etc features for general lock method\n> > > and everybody will be happy -:)\n> > >\n> > > Comments?\n> > >\n> > > Vadim\n> > > P.S. I could add 2. very fast, no matter if we'll keep 1. or not.\n> > >\n\n4. Most important: user locks are retained across transaction, which is\n not possible with ordinary locks.\n\n5. User locks semantic is defined entirely by the application and is not\n related to rows in the database.\n\nI wrote the user locks code because I needed a method to mark items as\n`busy' for very long time to avoid more users modifying the same object\nand overwriting each one's changes. This requires two features:\n\n 1.\tthey must survive transaction boundary. The typical use of user\n\tlocks is:\n\n\t\ttransaction 1:\tselect object,user_lock(object);\n\n\t\t... work on object for long time\n\n\t\ttransaction 2: update object,user_unlock(object);\n\n 2.\tthey must not block if the object is already locked, so that the\n\tprogram doesn't freeze and the user simply knows it can't use that\n\tobject.\n\nWhen I wrote the code the only way to do this was to add a separate lock\ntable and use the same machinery of ordinary locks. I agree that the code\nis complex and should probably be rewritten.\n\nIf you think there is a better way to implement this feature go ahead,\nbetter code is always welcome.\n\nThe only problem I have found with user locks is that if a backend crashes\nwithout releasing a lock there is no way to relase it except restarting\nthe whole postgres (I don't remember exactly why, I forgot the details).\n\nRegarding the licencing of the code, I always release my code under GPL,\nwhich is the licence I prefer, but my code in the backend is obviously\nreleased under the original postgres licence. Since the module is loaded\ndynamically and not linked into the backend I don't see a problem here.\nIf the licence becomes a problem I can easily change it, but I prefer the\nGPL if possible.\n\n-- \nMassimo Dal Zotto\n\n+----------------------------------------------------------------------+\n| Massimo Dal Zotto email: [email protected] |\n| Via Marconi, 141 phone: ++39-0461534251 |\n| 38057 Pergine Valsugana (TN) www: http://www.cs.unitn.it/~dz/ |\n| Italy pgp: see my www home page |\n+----------------------------------------------------------------------+\n", "msg_date": "Sun, 19 Aug 2001 23:15:54 +0200 (MEST)", "msg_from": "Massimo Dal Zotto <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User locks code" }, { "msg_contents": "At 11:20 AM 8/19/01 -0700, Vadim Mikheev wrote:\n>Well, ability to lock only unlocked rows in select for update is useful,\n>of course. But uniq features of user'locks are:\n>\n>1. They don't interfere with normal locks hold by session/transaction.\n>2. Share lock is available.\n>3. 
User can lock *and unlock objects* inside transaction, which is not\n> (and will not be) available with locks held by transactions.\n\nWould your suggested implementation allow locking on an arbitrary string?\n\nIf it does then one of the things I'd use it for is to insert unique data\nwithout having to lock the table or rollback on failed insert (unique index\nstill kept as a guarantee).\n\nCheerio,\nLink.\n\n\n\n\n", "msg_date": "Mon, 20 Aug 2001 23:52:03 +0800", "msg_from": "Lincoln Yeoh <[email protected]>", "msg_from_op": false, "msg_subject": "RE: User locks code" }, { "msg_contents": "> Regarding the licencing of the code, I always release my code under GPL,\n> which is the licence I prefer, but my code in the backend is obviously\n> released under the original postgres licence. Since the module is loaded\n> dynamically and not linked into the backend I don't see a problem here.\n> If the licence becomes a problem I can easily change it, but I prefer the\n> GPL if possible.\n\nWe just wanted to make sure the backend changes were not under the GPL.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 23 Aug 2001 16:45:09 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User locks code" }, { "msg_contents": "> > Application would explicitly call user_lock() functions in\n> > queries, so issue is still not clear for me. And once again -\n> \n> Well, yes, it calls user_lock(), but the communication is not\n> OS-linked, it is linked over a network socket, so I don't think\n> the GPL spreads over a socket. Just as telnet'ing somewhere an\n> typing 'bash' doesn't make your telnet GPL'ed, so I think the\n> client code is safe. To the client, the backend is just\n> returning information. You don't really link to the query\n> results.\n\nAh, ok.\n\n> > compare complexities of contrib/userlock and backend' userlock\n> > code: what's reason to cover contrib/userlock by GPL?\n> \n> Only because Massimo prefers it. I can think of no other reason.\n> It clearly GPL-stamps any backend that links it in.\n\nOk, let's do one step back - you wrote:\n\n> My assumption is that once you link that code into the backend,\n> the entire backend is GPL'ed and any other application code\n> you link into it is also (stored procedures, triggers, etc.)\n\nSo, one would have to open-source and GPL all procedures/triggers\nused by application just because of application uses user_lock()\nin queries?! Is it good?\n\nVadim\n", "msg_date": "Thu, 23 Aug 2001 16:55:29 -0700", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: User locks code" }, { "msg_contents": "> > Well, yes, it calls user_lock(), but the communication is not\n> > OS-linked, it is linked over a network socket, so I don't think\n> > the GPL spreads over a socket. Just as telnet'ing somewhere an\n> > typing 'bash' doesn't make your telnet GPL'ed, so I think the\n> > client code is safe. To the client, the backend is just\n> > returning information. You don't really link to the query\n> > results.\n> \n> Ah, ok.\n\nYes, kind of tricky. I am no expert in this but I have had the usual\ndiscussions.\n\n> > > compare complexities of contrib/userlock and backend' userlock\n> > > code: what's reason to cover contrib/userlock by GPL?\n> > \n> > Only because Massimo prefers it. 
I can think of no other reason.\n> > It clearly GPL-stamps any backend that links it in.\n> \n> Ok, let's do one step back - you wrote:\n> \n> > My assumption is that once you link that code into the backend,\n> > the entire backend is GPL'ed and any other application code\n> > you link into it is also (stored procedures, triggers, etc.)\n> \n> So, one would have to open-source and GPL all procedures/triggers\n> used by application just because of application uses user_lock()\n> in queries?! Is it good?\n\nYep. Is it good? Well, if you like the GPL, I guess so. If you don't,\nthen it isn't good. \n\nOf course, if you want to try and make money selling your program, it\nisn't good whether you like the GPL or not. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 23 Aug 2001 20:11:38 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User locks code" }, { "msg_contents": "I definitely agree with Vadim here: it's fairly silly that the\ncontrib userlock code is GPL'd, when it consists only of a few dozen\nlines of wrapper for the real functionality that's in the main backend.\nThe only thing this licensing setup can accomplish is to discourage\npeople from using the userlock code; what's the value of that?\n\nBesides, anyone who actually wanted to use the userlock code would need\nonly to write their own wrapper functions to get around the GPL license.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 23 Aug 2001 20:14:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User locks code " }, { "msg_contents": "> I definitely agree with Vadim here: it's fairly silly that the\n> contrib userlock code is GPL'd, when it consists only of a few dozen\n> lines of wrapper for the real functionality that's in the main backend.\n> The only thing this licensing setup can accomplish is to discourage\n> people from using the userlock code; what's the value of that?\n> \n> Besides, anyone who actually wanted to use the userlock code would need\n> only to write their own wrapper functions to get around the GPL license.\n\nHey, I agree with Vadim too. The GPL license is just a roadblock, but I\ncan't tell Massimo what to do with his code if it is not in the backend\nproper.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 23 Aug 2001 20:31:56 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User locks code" }, { "msg_contents": "Tom Lane wrote:\n> \n> I definitely agree with Vadim here: it's fairly silly that the\n> contrib userlock code is GPL'd, when it consists only of a few dozen\n> lines of wrapper for the real functionality that's in the main backend.\n\nAs it seems a generally useful feature, it could at least be LGPL'd so \nthat linking to it won't force the whole backend under GPL.\n\n> The only thing this licensing setup can accomplish is to discourage\n> people from using the userlock code; what's the value of that?\n\nMaybe it makes Massimo feel good ? 
It seems a worhty reason to me, as \nhe has contributed a lot of useful stuff over the time.\n\nI really think that mixing licences inside one program is bad, if not\nfor \nany other reason then for confusing people and making them have\ndiscussions \nlike this.\n\n> Besides, anyone who actually wanted to use the userlock code would need\n> only to write their own wrapper functions to get around the GPL license.\n\nThis is a part of copyright law that eludes me - can i write a\nreplacement\nfunction for something so simple that it can essentially be done in one \nway only (like incrementing a value by one) ?\n\n-----------------\nHannu\n", "msg_date": "Fri, 24 Aug 2001 07:25:06 +0500", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User locks code" }, { "msg_contents": "> Tom Lane wrote:\n> > \n> > I definitely agree with Vadim here: it's fairly silly that the\n> > contrib userlock code is GPL'd, when it consists only of a few dozen\n> > lines of wrapper for the real functionality that's in the main backend.\n> \n\nI was incorrect in something I said to Vadim. I said stored procedures\nwould have to be released if linked against a GPL'ed backend. They have\nto be released only if they are in C or another object file linked into\nthe backend. PlpgSQL or SQL functions don't have to be released because\ntheir code is \"loaded\" into the backend as a script, not existing in the\nbackend binary or required for the backend to run.\n\n> Maybe it makes Massimo feel good ? It seems a worhty reason to me, as \n> he has contributed a lot of useful stuff over the time.\n\nYes, that is probably it. The GPL doesn't give anything to users, it\ntakes some control away from users and gives it to the author of the\ncode.\n\n> I really think that mixing licences inside one program is bad, if not\n> for \n> any other reason then for confusing people and making them have\n> discussions \n> like this.\n\nYes, the weird part is that the BSD license is so lax (don't sue us)\nthat it is the addition of the GPL that changes the affect of the\nlicense. If you added a BSD license to a GPL'ed piece of code, the\neffect would be near zero.\n\n> > Besides, anyone who actually wanted to use the userlock code would need\n> > only to write their own wrapper functions to get around the GPL license.\n> \n> This is a part of copyright law that eludes me - can i write a\n> replacement\n> function for something so simple that it can essentially be done in one \n> way only (like incrementing a value by one) ?\n\nSure, if you don't cut and paste the code line by line, or retype the\ncode while staring at the previous version. That is how Berkeley got\nunix-free version of the BSD operating system. However, the few places\nwhere they lazily copied got them in trouble.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 24 Aug 2001 10:42:48 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User locks code" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> >\n> > This is a part of copyright law that eludes me - can i write a\n> > replacement\n> > function for something so simple that it can essentially be done in one\n> > way only (like incrementing a value by one) ?\n> \n> Sure, if you don't cut and paste the code line by line, or retype the\n> code while staring at the previous version. That is how Berkeley got\n> unix-free version of the BSD operating system. However, the few places\n> where they lazily copied got them in trouble.\n>\n\nI can imagine that when writing a trivial code for performing a trivial\nand \nwell-known function it is quite possible to arrive at a result that is \nvirtually indistinguishable from the original.\n\nI know that Compaq was forced to do a clean-room re-engineering of PC\nBIOS \n(two teams - the dirti one with access to real bios athat does\ndescription \nand testin and the clean team to write the actual code so that they can \nprove they did not \"steal\" even if the result is byte-by-byte simila)\nfor \nsimilar reasons\n\nI guess we dont have enough provably clean developers to do it ;)\n\nBTW, teher seems to be some problem with mailing list - I get very few \nmessages from the list that are not CC:d to me too\n\n------------------\nHannu\n", "msg_date": "Fri, 24 Aug 2001 17:33:43 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User locks code" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n\n> > Tom Lane wrote:\n> > > \n> > > I definitely agree with Vadim here: it's fairly silly that the\n> > > contrib userlock code is GPL'd, when it consists only of a few dozen\n> > > lines of wrapper for the real functionality that's in the main backend.\n> > \n> \n> I was incorrect in something I said to Vadim. I said stored procedures\n> would have to be released if linked against a GPL'ed backend. \n\nOnly to those you actually distribute this product to. If you're using\nit internally, you have no obligations to release it to anyone, to\ngive one example.\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n", "msg_date": "24 Aug 2001 12:02:54 -0400", "msg_from": "[email protected] (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: User locks code" }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> \n> > > Tom Lane wrote:\n> > > > \n> > > > I definitely agree with Vadim here: it's fairly silly that the\n> > > > contrib userlock code is GPL'd, when it consists only of a few dozen\n> > > > lines of wrapper for the real functionality that's in the main backend.\n> > > \n> > \n> > I was incorrect in something I said to Vadim. I said stored procedures\n> > would have to be released if linked against a GPL'ed backend. \n> \n> Only to those you actually distribute this product to. If you're using\n> it internally, you have no obligations to release it to anyone, to\n> give one example.\n\nYes, I was speaking only of selling the software. Good point.\n \n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 24 Aug 2001 12:03:34 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User locks code" }, { "msg_contents": "Uh, guys? The last thing I can find that Massimo says about the license\nis this, from Sunday:\n\nOn Sun, Aug 19, 2001 at 11:15:54PM +0200, Massimo Dal Zotto wrote:\n> \n> Regarding the licencing of the code, I always release my code under GPL,\n> which is the licence I prefer, but my code in the backend is obviously\n> released under the original postgres licence. Since the module is loaded\n> dynamically and not linked into the backend I don't see a problem here.\n> If the licence becomes a problem I can easily change it, but I prefer the\n> GPL if possible.\n> \n\nSo, rather than going over everone's IANAL opinons about mixing\nlicenses, let's just let Massimo know that it'd just be a lot easier to\nPostgreSQL/BSD license the whole thing, if he doesn't mind too much.\n\nRoss\n\nOn Fri, Aug 24, 2001 at 10:42:48AM -0400, Bruce Momjian wrote:\n<I'm not sure who wrote this, Bruce trimmed the attribution>\n> > Maybe it makes Massimo feel good ? It seems a worhty reason to me, as \n> > he has contributed a lot of useful stuff over the time.\n> \n> Yes, that is probably it. The GPL doesn't give anything to users, it\n> takes some control away from users and gives it to the author of the\n> code.\n> \n", "msg_date": "Fri, 24 Aug 2001 11:06:56 -0500", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User locks code" }, { "msg_contents": "----- Original Message ----- \nFrom: Bruce Momjian <[email protected]>\nSent: Friday, August 24, 2001 10:42 AM\n\n\n> > I really think that mixing licences inside one program is bad, if not\n> > for \n> > any other reason then for confusing people and making them have\n> > discussions \n> > like this.\n> \n> Yes, the weird part is that the BSD license is so lax (don't sue us)\n> that it is the addition of the GPL that changes the affect of the\n> license. If you added a BSD license to a GPL'ed piece of code, the\n> effect would be near zero.\n\nSorry for asking this off-topic question, but I'm not sure I completely\nunderstand this license issue... How GPL, LGPL, and BSD are conflicting\nand or overlap, so that it causes such problems? AFAIK with the GPL\none has to ship the source code along with the product every time, but\nunder BSD it can be shipped without the source (that's why M$ doesn't attack\nBSD as it does for GPL), and why the PostgreSQL project originally is being\nreleased under the BSD-like license? Just curious...\n\nSerguei\n\n\n", "msg_date": "Fri, 24 Aug 2001 12:07:04 -0400", "msg_from": "\"Serguei Mokhov\" <[email protected]>", "msg_from_op": false, "msg_subject": "[OT] Re: User locks code" }, { "msg_contents": "----- Original Message ----- \nFrom: Bruce Momjian <[email protected]>\nSent: Friday, August 24, 2001 10:42 AM\n\n\n> > I really think that mixing licences inside one program is bad, if not\n> > for \n> > any other reason then for confusing people and making them have\n> > discussions \n> > like this.\n> \n> Yes, the weird part is that the BSD license is so lax (don't sue us)\n> that it is the addition of the GPL that changes the affect of the\n> license. If you added a BSD license to a GPL'ed piece of code, the\n> effect would be near zero.\n\nSorry for asking this off-topic question, but I'm not sure I completely\nunderstand this license issue... 
How GPL, LGPL, and BSD are conflicting\nand or overlap, so that it causes such problems? AFAIK with the GPL\none has to ship the source code along with the product every time, but\nunder BSD it can be shipped without the source (that's why M$ doesn't attack\nBSD as it does for GPL), and why the PostgreSQL project originally is being\nreleased under the BSD-like license? Just curious...\n\nSerguei\n\n\n", "msg_date": "Fri, 24 Aug 2001 12:07:04 -0400", "msg_from": "\"Serguei Mokhov\" <[email protected]>", "msg_from_op": false, "msg_subject": "[OT] Re: User locks code" }, { "msg_contents": "> > Yes, the weird part is that the BSD license is so lax (don't sue us)\n> > that it is the addition of the GPL that changes the affect of the\n> > license. If you added a BSD license to a GPL'ed piece of code, the\n> > effect would be near zero.\n> \n> Sorry for asking this off-topic question, but I'm not sure I completely\n> understand this license issue... How GPL, LGPL, and BSD are conflicting\n> and or overlap, so that it causes such problems? AFAIK with the GPL\n> one has to ship the source code along with the product every time, but\n> under BSD it can be shipped without the source (that's why M$ doesn't attack\n> BSD as it does for GPL), and why the PostgreSQL project originally is being\n> released under the BSD-like license? Just curious...\n\nBecause the code we got from Berkeley was BSD licensed, we can't change\nit, and because many of us like the BSD license better because we don't\nwant to require them to release the source code, we just want them to\nuse PostgreSQL. And we think they will release the source code\neventually anyway.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 24 Aug 2001 12:16:18 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [OT] Re: User locks code" }, { "msg_contents": "Bruce Momjian wrote:\n...\n >Yes, that is probably it. The GPL doesn't give anything to users, it\n >takes some control away from users and gives it to the author of the\n >code.\n\nCorrection - it takes away from the *distributor* of binaries the right to\ngive users fewer rights than he has. If he doesn't distribute, he can do what \nhe likes.\n\nI'm sorry to be pedantic! We need to be clear about that because Microsoft\nare trying to spread FUD about it. \n\n\n From the project's point of view, it is probably a bad idea to accept code\nunder any licence other than BSD. That can only lead to confusion among\nusers and distributors alike and may led to inadvertent violation of the\nGPL by those who don't notice that it is has been used.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"I saw in the night visions, and, behold, one like the \n Son of man came with the clouds of heaven, and came to\n the Ancient of days, and they brought him near before \n him. 
And there was given him dominion, and glory, and \n a kingdom, that all people, nations, and languages, \n should serve him; his dominion is an everlasting \n dominion, which shall not pass away, and his kingdom \n that which shall not be destroyed.\" \n Daniel 7:13,14\n\n\n", "msg_date": "Fri, 24 Aug 2001 17:33:17 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: User locks code " }, { "msg_contents": "> Bruce Momjian wrote:\n> ...\n> > >Yes, that is probably it. The GPL doesn't give anything to\n> > users, it takes some control away from users and gives it to\n> > the author of the code.\n> \n> Correction - it takes away from the *distributor* of binaries\n> the right to give users fewer rights than he has. If he doesn't\n> distribute, he can do what he likes.\n\nThat is not totally true. While it prevents him from distributing code\nwith fewer rights than the GPL code he received, he loses distribution\nrights on the code he writes as well. That is the _viral_ nature of the\nGPL that MS was talking about.\n\nI know it is not fun to hear these GPL issues, but it is a valid\nconcern.\n\n> I'm sorry to be pedantic! We need to be clear about that because\n> Microsoft are trying to spread FUD about it.\n\nI was even more pedantic. :-)\n\n> >From the project's point of view, it is probably a bad idea to accept code\n> under any license other than BSD. That can only lead to confusion\n> among users and distributors alike and may led to inadvertent\n> violation of the GPL by those who don't notice that it is has\n> been used.\n\nConsidering that we link the backend against libreadline if it exists\n(even though it isn't needed by the backend), we already have quite a\nbit of confusion. Only libedit can be used freely for line editing,\ni.e. psql.\n\n--\n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 24 Aug 2001 15:19:28 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User locks code" }, { "msg_contents": "> > Tom Lane wrote:\n> > > \n> > > I definitely agree with Vadim here: it's fairly silly that the\n> > > contrib userlock code is GPL'd, when it consists only of a few dozen\n> > > lines of wrapper for the real functionality that's in the main backend.\n> > \n> \n> I was incorrect in something I said to Vadim. I said stored procedures\n> would have to be released if linked against a GPL'ed backend. They have\n> to be released only if they are in C or another object file linked into\n> the backend. PlpgSQL or SQL functions don't have to be released because\n> their code is \"loaded\" into the backend as a script, not existing in the\n> backend binary or required for the backend to run.\n> \n> > Maybe it makes Massimo feel good ? It seems a worhty reason to me, as \n> > he has contributed a lot of useful stuff over the time.\n> \n> Yes, that is probably it. The GPL doesn't give anything to users, it\n> takes some control away from users and gives it to the author of the\n> code.\n\nI have to disagree here. 
GPL gives users the assurance that code can't be\nmodified and distributed under a more restrictive licence, for example with\nno source code available or restrictions on commercial use.\n\nOn the contrary BSD doesn't guarantee almost anything on this sense.\nThink about what happened with the original Kerberos and the microsoft\nversion which was deliberately modified to prevent compatibilty with\nnon microsoft implementations. This has been possible only because it\nwas released under BSD licence. This why I prefer GPL over BSD.\n\nI choosed to release my code under GPL because I'm more concerned with\nfreedom of software than with commercial issues of it.\n\nAnyway, please stop this thread. I will change the licence of my contrib\ncode and make it compatible with postgres licence.\n\nAfter all, as someone poited out, it is a very trivial code and I don't\nreally care about what people is doing with it.\n\n> > I really think that mixing licences inside one program is bad, if not\n> > for \n> > any other reason then for confusing people and making them have\n> > discussions \n> > like this.\n> \n> Yes, the weird part is that the BSD license is so lax (don't sue us)\n> that it is the addition of the GPL that changes the affect of the\n> license. If you added a BSD license to a GPL'ed piece of code, the\n> effect would be near zero.\n> \n> > > Besides, anyone who actually wanted to use the userlock code would need\n> > > only to write their own wrapper functions to get around the GPL license.\n> > \n> > This is a part of copyright law that eludes me - can i write a\n> > replacement\n> > function for something so simple that it can essentially be done in one \n> > way only (like incrementing a value by one) ?\n> \n> Sure, if you don't cut and paste the code line by line, or retype the\n> code while staring at the previous version. That is how Berkeley got\n> unix-free version of the BSD operating system. However, the few places\n> where they lazily copied got them in trouble.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \nMassimo Dal Zotto\n\n+----------------------------------------------------------------------+\n| Massimo Dal Zotto email: [email protected] |\n| Via Marconi, 141 phone: ++39-0461534251 |\n| 38057 Pergine Valsugana (TN) www: http://www.cs.unitn.it/~dz/ |\n| Italy pgp: see my www home page |\n+----------------------------------------------------------------------+\n", "msg_date": "Fri, 24 Aug 2001 21:33:53 +0200 (MEST)", "msg_from": "Massimo Dal Zotto <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User locks code" }, { "msg_contents": "Hannu Krosing <[email protected]> writes:\n\n> I know that Compaq was forced to do a clean-room re-engineering of PC\n> BIOS \n> (two teams - the dirti one with access to real bios athat does\n> description \n> and testin and the clean team to write the actual code so that they can \n> prove they did not \"steal\" even if the result is byte-by-byte simila)\n> for \n> similar reasons\n\nCompaq wasn't forced to do this. They did it on the basis that\nfollowing this complex procedure would guarantee that they would win\nif it came to a court case (which it did not). 
But there is no way to\ntell whether a simpler procedure would not win in court.\n\nThe GPL is a copyright license. Copyrights, unlike patents, only put\nlimitations on derived works. If you independently write the same\nnovel, and you can prove in court that you never saw the original\nnovel, you are not guily of violating copyright. That's why Compaq\nfollowed the procedure they did (and it's why Pierre Menard was not\nguilty of copyright infringement).\n\nBut that's novels. As far as I know, there is no U.S. law, and there\nare no U.S. court decisions, determining when one program is a\nderivative of another. If you read a novel, and write a novel with\nsimilar themes or characters, you've infringed. However, there is\nless protection for functional specifications, in which there may be\nonly one way to do something. When does a computer program infringe?\nNobody knows.\n\nIan\n", "msg_date": "24 Aug 2001 14:38:53 -0700", "msg_from": "Ian Lance Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User locks code" }, { "msg_contents": "Serguei Mokhov wrote:\n> \n> and why the PostgreSQL project originally is being\n> released under the BSD-like license? Just curious...\n\nBerkeley usually releases their free projects under BSD licence ;)\n\nThere have been some discussion about changing it, but it has never got \nenough support.\n\n--------------\nHannu\n", "msg_date": "Fri, 31 Aug 2001 12:15:47 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [OT] Re: User locks code" }, { "msg_contents": "I'm running on Red Hat Linux 7.1 with all official updates.\n\nI have a problem or two with PostgreSQL and thought to test against the \nCVS version so as to avoid the suggestion (as happened recently) to \n\"try this later version.\"\n\nI particularly want to install the package in some out-of-the-way place \nso\na) It doesn't replace the existing production version\nb) I don't need root privilege.\n\nTo my dismay some components don't honour the \"--prefix=/tmp/postgresql\"\n specification and try to install in some other location.\n\nI'd much prefer for the perl and python components to install into the \nlocation I specified, and to leave me to discuss with Perl and Python \nthe question of how to make sure I get the right versions (or even \nbetter, offer a handy hint).\n\n\n\n-- \nCheers\nJohn Summerfield\n\nMicrosoft's most solid OS: http://www.geocities.com/rcwoolley/\n\nNote: mail delivered to me is deemed to be intended for me, for my \ndisposition.\n\n\n\n", "msg_date": "Sun, 02 Sep 2001 10:27:52 +0800", "msg_from": "John Summerfield <[email protected]>", "msg_from_op": false, "msg_subject": "Build problem with CVS version " }, { "msg_contents": "John Summerfield writes:\n\n> To my dismay some components don't honour the \"--prefix=/tmp/postgresql\"\n> specification and try to install in some other location.\n>\n> I'd much prefer for the perl and python components to install into the\n> location I specified, and to leave me to discuss with Perl and Python\n> the question of how to make sure I get the right versions (or even\n> better, offer a handy hint).\n\nThis is a very valid concern, and it's been bugging us, too. The problem\nis that by default, the majority of users would probably want the Perl and\nPython modules to be put in the default place where they're easy to find\nfor the interpreter. (This is pure speculation. 
Personally, I certainly\nwouldn't do this, in the same way as I don't install libraries in /usr/lib\nbecause it makes it easier for the linker to find.)\n\nWhat we probably want is some configure switch that switches between the\ncurrent behaviour and the behaviour you want.\n\nIncidentally, some work toward this goal has already been done in the CVS\ntip. Basically, all I was missing is a good name for the option.\n\nFor you to proceed you could try the following (completely untested):\n\n# for local Python install\nmake install python_moduledir='$(pkglibdir)' python_moduleexecdir='$(pkglibdir)'\n# (yes, single quotes)\n\n# for local Perl install\nmake install mysterious_feature=yes\n# (seriously)\n\n-- \nPeter Eisentraut [email protected] http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 4 Sep 2001 14:58:16 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Build problem with CVS version " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> This is a very valid concern, and it's been bugging us, too. The problem\n> is that by default, the majority of users would probably want the Perl and\n> Python modules to be put in the default place where they're easy to find\n> for the interpreter. (This is pure speculation. Personally, I certainly\n> wouldn't do this, in the same way as I don't install libraries in /usr/lib\n> because it makes it easier for the linker to find.)\n\nI agree that that's the right place to put the perl & python modules\nwhen doing a pure-default configure: it's reasonable to assume we are\ninstalling a production system, and so we should install these modules\nin the default places. But it's a lot harder to make that argument when\ndoing a configure with a non-default --prefix: we may well be building a\nplaypen installation. In any case there should be a way to suppress\nautomatic installation of these modules.\n\n> What we probably want is some configure switch that switches between the\n> current behaviour and the behaviour you want.\n\nI'd suggest --prefix-like options to determine installation locations\nfor the perl and python modules, plus options on the order of\n--no-install-perl (ie, build it, but don't install it).\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 Sep 2001 10:51:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Build problem with CVS version " }, { "msg_contents": "With all the great work put into allowing true 24/7 operation, as\ndistributed we're still unable to rotate the log file. While the log file\ntends to be smaller than the database system as a whole, this is still\ngoing to annoy people because they can't control disk usage without taking\nthe server down.\n\nI've been playing with a little program I wrote whose sole purpose is to\nwrite its stdin to a file and close and reopen that file when it receives\na signal. I figured this could work well when integrated transparently\ninto pg_ctl.\n\nSo, is log rotation a concern? Is this a reasonable solution? Other\nideas?\n\n(No Grand Unified Logging Solutions please. And no, \"use syslog\" doesn't\ncount.)\n\n-- \nPeter Eisentraut [email protected] http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 5 Sep 2001 03:12:28 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Log rotation?" }, { "msg_contents": "\nFunny, I found this going through my mailbox. 
Seems I was going to\nreturn to this SO_PEERCRED anyway.\n\n> Bruce Momjian wrote:\n> >> > I think our current idea is to have people run local ident servers to\n> >> > handle this. We don't have any OS-specific stuff in pg_hba.conf and I\n> >> > am not sure if we want to add that complexity. What do others think?\n> >> \n> >> This is not any less \"specific\" than SSL or Kerberos. Note that opening a\n> >> TCP/IP socket already opens a theoretical hole to the world. Unix domain\n> >> is much safer.\n> >\n> >You can install SSL/Kerberos on any Unix, and many come pre-installed. \n> >You can't add unix-domain socket user authentication to any OS.\n> >\n> >I assume most OS's have 127.0.0.1 set as loopback so there shouldn't be\n> >a hole:\n> >\n> >127 127.0.0.1 UGRS 4352 lo0\n> >127.0.0.1 127.0.0.1 UH 4352 lo0\n> >\n> >However, the security issue may make it worthwhile. Which OS's support\n> >user authentication again, and can we test via configure? Maybe we can\n> >strip out the mention in the pg_hba.conf file if it is not supported on\n> >that OS.\n> \n> The security issue is why I developed it. There were complaints from people \n> who did not want to have identd running at all.\n> \n> I think the feature is available in Linux, Solaris and some BSD. It can be\n> tested for by whether SO_PEERCRED is defined in sys/socket.h.\n> \n> I don't see the need to strip mention from the comments in pg_hba.conf. The\n> situation is no different from those systems which do not have Kerberos or\n> SSL available.\n> \n> -- \n> Oliver Elphick [email protected]\n> Isle of Wight http://www.lfix.co.uk/oliver\n> PGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\n> GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n> ========================================\n> \"I waited patiently for the LORD; and he inclined unto \n> me, and heard my cry. He brought me up also out of an \n> horrible pit, out of the miry clay, and set my feet \n> upon a rock, and established my goings. And he hath \n> put a new song in my mouth, even praise unto our God.\n> Many shall see it, and fear, and shall trust in the \n> LORD.\" Psalms 40:1-3 \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 5 Sep 2001 00:48:35 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Re: Debian's PostgreSQL packages" }, { "msg_contents": "> So, is log rotation a concern? Is this a reasonable solution? Other\n> ideas?\n>\n> (No Grand Unified Logging Solutions please. And no, \"use syslog\" doesn't\n> count.)\n\nWhat's the problem with using newsyslog or logrotate at the moment? (ie.\nuse the system log rotator)\n\nChris\n\n", "msg_date": "Thu, 6 Sep 2001 10:39:56 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Log rotation?" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> With all the great work put into allowing true 24/7 operation, as\n> distributed we're still unable to rotate the log file. 
While the log file\n> tends to be smaller than the database system as a whole, this is still\n> going to annoy people because they can't control disk usage without taking\n> the server down.\n\n> I've been playing with a little program I wrote whose sole purpose is to\n> write its stdin to a file and close and reopen that file when it receives\n> a signal. I figured this could work well when integrated transparently\n> into pg_ctl.\n\nAren't there log-rotation utilities out there already? (I seem to\nrecall mention that Apache has one, for instance.) Seems like this\nis a wheel we shouldn't have to reinvent.\n\nAlso, I kinda thought the long-range solution was to encourage everyone\nto migrate to syslog logging ...\n\n> And no, \"use syslog\" doesn't count.\n\nWhy not?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 05 Sep 2001 23:19:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Log rotation? " }, { "msg_contents": "Tom Lane <[email protected]> writes:\n\n> > And no, \"use syslog\" doesn't count.\n> \n> Why not?\n\nThe standard implementations of syslog lose log entries under heavy\nload, because they rely on a daemon which reads from a named pipe with\na limited buffer space. This is not acceptable in a production\nsystem, since heavy load is often just the time you need to see the\nlog entries.\n\nIt would be possible to implement the syslog(3) interface in a\ndifferent way, of course, which did not use syslogd. I don't know of\nany such implementation.\n\n(My personal preference these days is an approach like DJB's\ndaemontools, which separates the handling of log entries from the\nprogram doing the logging.)\n\nIan\n", "msg_date": "05 Sep 2001 20:54:44 -0700", "msg_from": "Ian Lance Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Log rotation?" }, { "msg_contents": "Ian Lance Taylor <[email protected]> writes:\n> Tom Lane <[email protected]> writes:\n>>> And no, \"use syslog\" doesn't count.\n>> \n>> Why not?\n\n> The standard implementations of syslog lose log entries under heavy\n> load,\n\nOkay, that's a sufficient answer for that point.\n\n> (My personal preference these days is an approach like DJB's\n> daemontools, which separates the handling of log entries from the\n> program doing the logging.)\n\nThat still leads back to my first question, which is whether we can't\nrely on someone else's logrotation code.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Sep 2001 00:04:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Log rotation? " }, { "msg_contents": "Tom Lane writes:\n\n> Aren't there log-rotation utilities out there already? (I seem to\n> recall mention that Apache has one, for instance.) Seems like this\n> is a wheel we shouldn't have to reinvent.\n\nI'm aware of the Apache rotatelogs utility, but I'm not completely\nsatisfied with it.\n\n1. It tries to do the rotating itself. I'd rather rely on the OS'\nrotating and archiving facilities.\n\n2. Only offers a time-based rotate, no manual intervention possible (via\nsignal).\n\n3. We don't want to have to tell people to install Apache and patch their\npg_ctl.\n\n4. We don't want to include it in our distribution because the license\ncontains an advertisement clause.\n\nIt's not like what I wrote is going to look wildly different than theirs.\nThere's only so much variation you can put into 100 lines of code.\n\n> > And no, \"use syslog\" doesn't count.\n>\n> Why not?\n\n1. 
Might not be available (?)\n\n2. Might not be reliable\n\n3. Might not have root access\n\n4. Not all messages will go through elog. This is a bug, but not trivial\nto fix.\n\n-- \nPeter Eisentraut [email protected] http://funkturm.homeip.net/~peter\n\n", "msg_date": "Thu, 6 Sep 2001 12:02:55 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Log rotation? " }, { "msg_contents": "Christopher Kings-Lynne writes:\n\n> What's the problem with using newsyslog or logrotate at the moment? (ie.\n> use the system log rotator)\n\nThe postmaster will never close the output file, so you can rotate all you\nwant, the original file will never be abandoned.\n\n-- \nPeter Eisentraut [email protected] http://funkturm.homeip.net/~peter\n\n", "msg_date": "Thu, 6 Sep 2001 12:04:03 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Log rotation?" }, { "msg_contents": "Thus spake Tom Lane\n> Also, I kinda thought the long-range solution was to encourage everyone\n> to migrate to syslog logging ...\n> \n> > And no, \"use syslog\" doesn't count.\n> \n> Why not?\n\nWell, one \"why not\" might be that syslog is not a guaranteed delivery\nlogging system. It might be good enough for some applications but I\ndon't think that it should be forced on everyone.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Thu, 6 Sep 2001 08:33:22 -0400 (EDT)", "msg_from": "[email protected] (D'Arcy J.M. Cain)", "msg_from_op": false, "msg_subject": "Re: Log rotation?" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Tom Lane writes:\n>> Aren't there log-rotation utilities out there already? (I seem to\n>> recall mention that Apache has one, for instance.) Seems like this\n>> is a wheel we shouldn't have to reinvent.\n\n> I'm aware of the Apache rotatelogs utility, but I'm not completely\n> satisfied with it.\n\nOkay, those are reasonable points. Given that it's only ~100 lines of\ncode, I'll withdraw my objection to rolling our own. Let's just do it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Sep 2001 09:51:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Log rotation? " }, { "msg_contents": "At 08:54 PM 9/5/2001 -0700, Ian Lance Taylor wrote:\n>Tom Lane <[email protected]> writes:\n>\n> > > And no, \"use syslog\" doesn't count.\n> >\n> > Why not?\n>\n>The standard implementations of syslog lose log entries under heavy\n>load, because they rely on a daemon which reads from a named pipe with\n>a limited buffer space. This is not acceptable in a production\n>system, since heavy load is often just the time you need to see the\n>log entries.\n>\n>It would be possible to implement the syslog(3) interface in a\n>different way, of course, which did not use syslogd. I don't know of\n>any such implementation.\n>\n>(My personal preference these days is an approach like DJB's\n>daemontools, which separates the handling of log entries from the\n>program doing the logging.)\n>\n>Ian\n\nGreetings,\n\nKind of ironic, I have been working on a similar logging system for Apache \nthat works with PostgreSQL, and I just released 2.0-beta last night. 
My \npost to announcements was delayed, but you can check it out here: \nhttp://www.digitalstratum.com/pglogd/\n\nIf pgLOGd looks like something similar to what you are looking for, I could \nprobably modify it to log for PostgreSQL. Two of its requirements during \ndevelopment were fast and robust, and similar to what you described above \nit does not \"process\" the entries, that is done later. You also got me \nthinking that maybe syslogd needs an overhaul too...\n\nMatthew\n\n", "msg_date": "Thu, 06 Sep 2001 10:08:21 -0400", "msg_from": "Matthew Hagerty <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Log rotation?" }, { "msg_contents": "You may be interested in\nhttp://www.ietf.org/internet-drafts/draft-ietf-syslog-reliable-12.txt which\nbuilds a reliable syslog protocol on top of BEEP. There are free\nimplementations of BEEP in C and Java at http://beepcore.org\n\n----- Original Message -----\nFrom: \"Matthew Hagerty\" <[email protected]>\nTo: \"Ian Lance Taylor\" <[email protected]>; \"Tom Lane\" <[email protected]>\nCc: \"Peter Eisentraut\" <[email protected]>; \"PostgreSQL Development\"\n<[email protected]>\nSent: Thursday, September 06, 2001 10:08 AM\nSubject: Re: [HACKERS] Log rotation?\n\n\n> At 08:54 PM 9/5/2001 -0700, Ian Lance Taylor wrote:\n> >Tom Lane <[email protected]> writes:\n> >\n> > > > And no, \"use syslog\" doesn't count.\n> > >\n> > > Why not?\n> >\n> >The standard implementations of syslog lose log entries under heavy\n> >load, because they rely on a daemon which reads from a named pipe with\n> >a limited buffer space. This is not acceptable in a production\n> >system, since heavy load is often just the time you need to see the\n> >log entries.\n> >\n> >It would be possible to implement the syslog(3) interface in a\n> >different way, of course, which did not use syslogd. I don't know of\n> >any such implementation.\n> >\n> >(My personal preference these days is an approach like DJB's\n> >daemontools, which separates the handling of log entries from the\n> >program doing the logging.)\n> >\n> >Ian\n>\n> Greetings,\n>\n> Kind of ironic, I have been working on a similar logging system for Apache\n> that works with PostgreSQL, and I just released 2.0-beta last night. My\n> post to announcements was delayed, but you can check it out here:\n> http://www.digitalstratum.com/pglogd/\n>\n> If pgLOGd looks like something similar to what you are looking for, I\ncould\n> probably modify it to log for PostgreSQL. Two of its requirements during\n> development were fast and robust, and similar to what you described above\n> it does not \"process\" the entries, that is done later. You also got me\n> thinking that maybe syslogd needs an overhaul too...\n>\n> Matthew\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n\n", "msg_date": "Thu, 6 Sep 2001 10:48:52 -0400", "msg_from": "\"Ken Hirsch\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Log rotation?" }, { "msg_contents": "Tom Lane writes:\n\n> > What we probably want is some configure switch that switches between the\n> > current behaviour and the behaviour you want.\n>\n> I'd suggest --prefix-like options to determine installation locations\n> for the perl and python modules,\n\nBasically, I was envisioning some option like\n\n--enable-local-installation-layout\n--enable-playpen-installation\n\njust something, um, better. 
(If we name them --with-perldir, then 51% of\nthe users will think that's the location where Perl itself is installed.)\n\nActually, if you do opt for the \"playpen\" version then the actual choice\nof installation directory shouldn't be so interesting. The only\nreasonable place is ${libdir}/postgresql, unless you want to make up your\nown file system standard.\n\n> plus options on the order of --no-install-perl (ie, build it, but\n> don't install it).\n\nThis is currently the default behaviour, if you recall. ;-)\n\n-- \nPeter Eisentraut [email protected] http://funkturm.homeip.net/~peter\n\n", "msg_date": "Thu, 6 Sep 2001 17:45:25 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Build problem with CVS version " }, { "msg_contents": "> Tom Lane writes:\n> \n> > > What we probably want is some configure switch that switches between the\n> > > current behaviour and the behaviour you want.\n> >\n> > I'd suggest --prefix-like options to determine installation locations\n> > for the perl and python modules,\n> \n> Basically, I was envisioning some option like\n> \n> --enable-local-installation-layout\n> --enable-playpen-installation\n\n\nI'd point out this from the INSTALL document:\n --prefix=PREFIX\n\n Install all files under the directory PREFIX instead of\n /usr/local/pgsql. The actual files will be installed into \nvarious\n subdirectories; no files will ever be installed directly into \nthe\n PREFIX directory.\n\n If you have special needs, you can also customize the \nindividual\n subdirectories with the following options.\n\nThis is entirely consistent with the way other software that uses the \nsame configuration procedure.\n\nI contend that if a user wants different behaviour the onus is on the \nuser to specify that.\n\nI've no argument with those who'd make it easy to specify that \ndifferent behaviour with, perhaps, --disable-perl-install as a \nconfigure option.\n\n\nInstalling everything under --prefix (as the document says) would also \nhelp package builders; the current rpm looks pretty horrible (and \nthat's why I didn't pursue THAT path).\n\n\n> \n> just something, um, better. (If we name them --with-perldir, then 51% of\n> the users will think that's the location where Perl itself is installed.)\n> \n> Actually, if you do opt for the \"playpen\" version then the actual choice\n> of installation directory shouldn't be so interesting. The only\n> reasonable place is ${libdir}/postgresql, unless you want to make up your\n> own file system standard.\n> \n> > plus options on the order of --no-install-perl (ie, build it, but\n> > don't install it).\n> \n> This is currently the default behaviour, if you recall. ;-)\n\nActually the reason it gave for not installing the perl bits is that I \ndidn't have the authority. 
It would have been completely happy if I'd \nbeen root.\n\nAnd I wouldn't have.\n\n\n\n-- \nCheers\nJohn Summerfield\n\nMicrosoft's most solid OS: http://www.geocities.com/rcwoolley/\n\nNote: mail delivered to me is deemed to be intended for me, for my \ndisposition.\n\n\n\n", "msg_date": "Fri, 07 Sep 2001 07:46:30 +0800", "msg_from": "John Summerfield <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Build problem with CVS version " }, { "msg_contents": "Yeah, I use FreeBSD's wonderful newsyslog utility, and I do my logging like\nthis:\n\nsu -l pgsql -c '[ -d ${PGDATA} ] && exec /usr/local/bin/pg_ctl\nstart -s -w -o \"-i\" -l /var/log/pgsql.log'\n\nAnd my /etc/newsyslog.conf entry:\n\n/var/log/pgsql.log pgsql:pgsql 600 3 4096 * Z\n\nChris\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Peter Eisentraut\n> Sent: Thursday, 6 September 2001 6:04 PM\n> To: Christopher Kings-Lynne\n> Cc: PostgreSQL Development\n> Subject: Re: [HACKERS] Log rotation?\n>\n>\n> Christopher Kings-Lynne writes:\n>\n> > What's the problem with using newsyslog or logrotate at the\n> moment? (ie.\n> > use the system log rotator)\n>\n> The postmaster will never close the output file, so you can rotate all you\n> want, the original file will never be abandoned.\n>\n> --\n> Peter Eisentraut [email protected] http://funkturm.homeip.net/~peter\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://www.postgresql.org/search.mpl\n>\n\n", "msg_date": "Fri, 7 Sep 2001 10:06:19 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Log rotation?" }, { "msg_contents": "Christopher Kings-Lynne writes:\n\n> Yeah, I use FreeBSD's wonderful newsyslog utility, and I do my logging like\n> this:\n>\n> su -l pgsql -c '[ -d ${PGDATA} ] && exec /usr/local/bin/pg_ctl\n> start -s -w -o \"-i\" -l /var/log/pgsql.log'\n>\n> And my /etc/newsyslog.conf entry:\n>\n> /var/log/pgsql.log pgsql:pgsql 600 3 4096 * Z\n\nSorry, this does not convey any information to me. Does this newsyslog\nthing do anything that's so smart that we should know about it?\n\n-- \nPeter Eisentraut [email protected] http://funkturm.homeip.net/~peter\n\n", "msg_date": "Fri, 7 Sep 2001 16:04:24 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Log rotation?" }, { "msg_contents": "\nHi Peter,\n\n> I've been playing with a little program I wrote whose sole purpose is to\n> write its stdin to a file and close and reopen that file when it receives\n> a signal. I figured this could work well when integrated transparently\n> into pg_ctl.\n> \n> So, is log rotation a concern? Is this a reasonable solution? Other\n> ideas?\n\nThere was a discussion of this over a year ago. After I contributed\nto the discussion Tom Lane suggested I write something, and in the\ntradition of software development I did so but never quite finished it\n... I got to the point that it works for me, then got distracted\nbefore completing a test suite.\n\nYou may well prefer your own code, but this one supports rotation on\nsize and/or time basis as well as on receipt of SIGHUP, and places a\ntimestamp on each line written. 
It's also pretty careful about errors,\nwhich is one of the things that was disliked about the Apache program\nlast time it was discussed.\n\nI am happy to contribute the code using the standard PostgreSQL\nlicense if it's wanted. (If anyone wants it under a BSD license or\nGPL for another purpose that's fine too.)\n\nI use the code on HP-UX, and developed it on NetBSD. There shouldn't\nbe too many portability problems lurking, other than the usual hassles\nof what % escape to use in printf() for off_t. I doubt anyone wants\nlog files larger than a couple of GB anyway? :-)\n\nftp://ftp.nemeton.com.au/pub/src/logwrite-1.0alpha.tar.gz\n\n> (No Grand Unified Logging Solutions please. And no, \"use syslog\" doesn't\n> count.)\n\n<grin>\n\nOne improvement I suggest is that the postmaster be taught to start\n(and restart if necessary) the log program. This avoids fragile\nstartup scripts and also avoids taking down PostgreSQL if someone\nsends the wrong signal to the log program.\n\nCheers,\n\nGiles\n\n", "msg_date": "Sat, 08 Sep 2001 07:21:19 +1000", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Log rotation? " }, { "msg_contents": "On Thursday 06 September 2001 07:46 pm, John Summerfield wrote:\n> I'd point out this from the INSTALL document:\n> --prefix=PREFIX\n[snip]\n> Installing everything under --prefix (as the document says) would also\n> help package builders; the current rpm looks pretty horrible (and\n> that's why I didn't pursue THAT path).\n\nBlame the Linux Filesystem Hierarchy Standard and the Linux Standards Base \nfor the directory structure of the RPMset. Things have to be in specified \nlocations to be FHS compliant -- and the current RPMset is, to the best of my \nknowledge, FHS compliant.\n\nHorrible looking is in the eye of the beholder, BTW. Some think that a \nseparate prefix for each software package looks horrible and that the \nintermingled nature of the FHS looks cleaner (which it does from a PATH point \nof view, for sure). \n\nIt just comes down to the OS philosophy you deal with. BSD ports, for \ninstance, has a very different package philosophy from Solaris, which is \ndifferent yet from Domain/OS, which is different yet from the LSB. \nPostgreSQL is very BSDish -- and, while I have to cut cartwheels to finagle \nit into an LSB mold, being BSDish is not a bad thing. Neither is being \nLSBish -- both have their place. The is no One True Filesystem spec...\n\nAs to the can't install perl client issue, the install can fail, even if done \nas root, if you are building for installation at some other location in the \nfilesystem.\n\nThe perl client install phase fails every time I build an RPMset -- the RPM \nspec file has to go through some steps to make it work.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 10 Sep 2001 11:17:28 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Build problem with CVS version" }, { "msg_contents": "John Summerfield writes:\n\n> I'd point out this from the INSTALL document:\n> --prefix=PREFIX\n>\n> Install all files under the directory PREFIX instead of\n> /usr/local/pgsql. 
The actual files will be installed into various\n> subdirectories; no files will ever be installed directly into the\n> PREFIX directory.\n>\n> If you have special needs, you can also customize the individual\n> subdirectories with the following options.\n\nBut there are also exceptions listed at --with-perl and --with-python.\n\n> This is entirely consistent with the way other software that uses the\n> same configuration procedure.\n\nI am not aware of a package that installs general-purpose Perl/Python\nmodules as well as items from outside of those environments.\n\n> I contend that if a user wants different behaviour the onus is on the\n> user to specify that.\n\nYou're probably right, but I suspect that the majority of users rightly\nexpect the Perl module to go where Perl modules usually go. This wouldn't\nbe such an interesting question if Perl provided a way to add to the\nmodule search path (cf. LD_LIBRARY_PATH and such), but to my knowledge\nthere isn't a generally accepted one, so this issue would introduce an\nannoyance for users.\n\nSurely the current behaviour is an annoyance too, making the whole issue a\nrather unpleasant subject. ;-)\n\nWell, as soon as I come up with a name for the install-where-I-tell-you-to\noption, I'll implement it.\n\n-- \nPeter Eisentraut [email protected] http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 11 Sep 2001 02:25:06 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUGS] Build problem with CVS version " }, { "msg_contents": "There was a discussion about --enable-syslog by default. What was the\nconsensus? I think this is a good one.\n--\nTatsuo Ishii\n", "msg_date": "Tue, 11 Sep 2001 13:29:58 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "syslog by default?" }, { "msg_contents": "> There was a discussion about --enable-syslog by default. What was the\n> consensus? I think this is a good one.\n\nYes, I thought we decided it should be the default too.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 Sep 2001 01:07:33 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: syslog by default?" }, { "msg_contents": "On Tue, 11 Sep 2001, Bruce Momjian wrote:\n\n> > There was a discussion about --enable-syslog by default. What was the\n> > consensus? I think this is a good one.\n>\n> Yes, I thought we decided it should be the default too.\n\nI know it can have an adverse effect on a mail server, is syslog going\nto give us any performance hits?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Tue, 11 Sep 2001 06:18:43 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: syslog by default?" 
}, { "msg_contents": "> John Summerfield writes:\n> \n> > I'd point out this from the INSTALL document:\n> > --prefix=PREFIX\n> >\n> > Install all files under the directory PREFIX instead of\n> > /usr/local/pgsql. The actual files will be installed into various\n> > subdirectories; no files will ever be installed directly into the\n> > PREFIX directory.\n> >\n> > If you have special needs, you can also customize the individual\n> > subdirectories with the following options.\n> \n> But there are also exceptions listed at --with-perl and --with-python.\n> \n> > This is entirely consistent with the way other software that uses the\n> > same configuration procedure.\n> \n> I am not aware of a package that installs general-purpose Perl/Python\n> modules as well as items from outside of those environments.\n> \n> > I contend that if a user wants different behaviour the onus is on the\n> > user to specify that.\n> \n> You're probably right, but I suspect that the majority of users rightly\n> expect the Perl module to go where Perl modules usually go. This wouldn't\n\nthat isn't reasonable if the installer's not root, and I wasn't, for \nthe precise reason I didn't want to update the system.\n\n> be such an interesting question if Perl provided a way to add to the\n> module search path (cf. LD_LIBRARY_PATH and such), but to my knowledge\n> there isn't a generally accepted one, so this issue would introduce an\n> annoyance for users.\n\nfrom 'man perlrun'\n PERL5LIB A colon-separated list of directories in which to look \nfor Perl library files\n before looking in the standard library and the \ncurrent directory. Any architec-\n ture-specific directories under the specified \nlocations are automatically\n included if they exist. If PERL5LIB is not defined, \nPERLLIB is used.\n \n When running taint checks (either because the \nprogram was running setuid or set-\n gid, or the -T switch was used), neither variable is \nused. The program should\n instead say:\n\nThere are several other environment variables.\n\n\n\n\n> \n> Surely the current behaviour is an annoyance too, making the whole issue a\n> rather unpleasant subject. ;-)\n> \n> Well, as soon as I come up with a name for the install-where-I-tell-you-to\n> option, I'll implement it.\n\n\n\nThe current behaviour makes it difficult to have two versions on one \ncomputer.\n-- \nCheers\nJohn Summerfield\n\nMicrosoft's most solid OS: http://www.geocities.com/rcwoolley/\n\nNote: mail delivered to me is deemed to be intended for me, for my \ndisposition.\n\n\n\n", "msg_date": "Tue, 11 Sep 2001 21:08:36 +0800", "msg_from": "John Summerfield <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUGS] Build problem with CVS version " }, { "msg_contents": "Tatsuo Ishii writes:\n\n> There was a discussion about --enable-syslog by default. What was the\n> consensus? I think this is a good one.\n\nIt would be a good one if we make the blind assumption that syslog()\nexists on all platforms. That is possible, but not guaranteed. (BeOS,\nQNX, Cygwin?)\n\nThe alternative suggestion to turn it off if syslog() is not found makes\nme wary. These schemes have invariably lead to problems in the past.\nRecall libpq++ being missed because of false test results, readline\nsupport mysteriously disappearing and nobody noticing until Mandrake had\nshipped their CDs. 
There are possible scenarios where syslog support\ncould be missed by configure, such as when you need some compat or bsd\nlibrary.\n\nAn alternative scheme I wanted to implement for readline is\n\n--enable-foo => force feature to be used\n--disable-foo => force feature not to be used\n<nothing> => use feature if available\n\nbut I'm afraid that this would create more confusion than it's worth\nbecause a prudent user would specify --enable-foo anyway.\n\nI'd rather type a few more things and get predictable behavior from\nconfigure rather than relying on it to pick the features for me.\n\n-- \nPeter Eisentraut [email protected] http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 11 Sep 2001 15:10:42 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: syslog by default?" }, { "msg_contents": "\n> I know it can have an adverse effect on a mail server, is syslog going\n> to give us any performance hits?\n\nYes. On some platforms (HP-UX at least) applications can stall ~2s\nretrying if syslogd is not reading the messages written to its pipe.\n\nsyslogd also has a reputation for using too much CPU under load, but\nthis is anecdotal only -- I've not seen such a situation myself.\n\nRegards,\n\nGiles\n\n", "msg_date": "Wed, 12 Sep 2001 07:56:11 +1000", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: syslog by default? " }, { "msg_contents": "On Mar 11 Sep 2001 02:07, Bruce Momjian wrote:\n> > There was a discussion about --enable-syslog by default. What was the\n> > consensus? I think this is a good one.\n>\n> Yes, I thought we decided it should be the default too.\n\nThere was a discussion about log rotation last week, so, where are we going? \nPipe the output of postmaster to a log rotator like Apache's logrotate, or are \nwe going to use syslog and have the syslog log rotator do the rotation?\n\nJust a doubt I had.\n\nSaludos... :-)\n\n-- \nPorqué usar una base de datos relacional cualquiera,\nsi podés usar PostgreSQL?\n-----------------------------------------------------------------\nMartín Marqués | [email protected]\nProgramador, Administrador, DBA | Centro de Telematica\n Universidad Nacional\n del Litoral\n-----------------------------------------------------------------\n", "msg_date": "Tue, 11 Sep 2001 19:14:54 -0300", "msg_from": "=?iso-8859-1?q?Mart=EDn=20Marqu=E9s?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: syslog by default?" }, { "msg_contents": "On Tuesday 11 September 2001 06:14 pm, Martín Marqués wrote:\n> There was a discussion about log rotation last week, so, where are we going?\n> Pipe the output of postmaster to a log rotator like Apache's logrotate, or\n> are we going to use syslog and have the syslog log rotator do the rotation?\n\nBoth have their place.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 11 Sep 2001 19:11:52 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: syslog by default?" }, { "msg_contents": "> Tatsuo Ishii writes:\n> \n> > There was a discussion about --enable-syslog by default. What was the\n> > consensus? I think this is a good one.\n> \n> It would be a good one if we make the blind assumption that syslog()\n> exists on all platforms. That is possible, but not guaranteed. (BeOS,\n> QNX, Cygwin?)\n> \n> The alternative suggestion to turn it off if syslog() is not found makes\n> me wary. 
These schemes have invariably lead to problems in the past.\n> Recall libpq++ being missed because of false test results, readline\n> support mysteriously disappearing and nobody noticing until Mandrake had\n> shipped their CDs. There are possible scenarios where syslog support\n> could be missed by configure, such as when you need some compat or bsd\n> library.\n\nWhy are you so worrying about finding syslog() in configure? We have\nalready done lots of function testings. Is there anything special with\nsyslog()?\n--\nTatsuo Ishii\n\n", "msg_date": "Wed, 12 Sep 2001 09:59:50 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: syslog by default?" }, { "msg_contents": "Tatsuo Ishii writes:\n\n> Why are you so worrying about finding syslog() in configure? We have\n> already done lots of function testings. Is there anything special with\n> syslog()?\n\nAll the other functions we test for come with a replacement plan. Either\nwe choose between several similar alternatives (atexit/on_exit), or we\nlink in our own function.o file. But we don't shut off a whole piece of\nfunctionality when one goes missing (except in the cases I mentioned).\n\nI'm probably being paranoid. But you did ask what became of the idea, and\nthat's what did, as far as I'm concerned...\n\n-- \nPeter Eisentraut [email protected] http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 12 Sep 2001 14:07:40 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: syslog by default?" }, { "msg_contents": "> Tatsuo Ishii writes:\n> \n> > Why are you so worrying about finding syslog() in configure? We have\n> > already done lots of function testings. Is there anything special with\n> > syslog()?\n> \n> All the other functions we test for come with a replacement plan. Either\n> we choose between several similar alternatives (atexit/on_exit), or we\n> link in our own function.o file. But we don't shut off a whole piece of\n> functionality when one goes missing (except in the cases I mentioned).\n> \n> I'm probably being paranoid. But you did ask what became of the idea, and\n> that's what did, as far as I'm concerned...\n\nOK, that makes sense. My only question is how many platforms _don't_\nhave syslog. If it is only NT and QNX, I think we can live with using\nit by default if it exists.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 12 Sep 2001 10:20:38 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: syslog by default?" }, { "msg_contents": "On Sep 12, [email protected] contorted a few electrons to say...\nBruce> OK, that makes sense. My only question is how many platforms _don't_\nBruce> have syslog. If it is only NT and QNX, I think we can live with using\nBruce> it by default if it exists.\n\nperhaps you could take some code from\n\n\thttp://freshmeat.net/projects/cpslapi/\n\nwhich implements a syslog-api that writes to NT's eventlog.\n\ni'd be glad to change the license if it is useful.\n\njr\n\n-- \n------------------------------------------------------------------------\nJoel W. Reed 412-257-3881\n--------------All the simple programs have been written.----------------", "msg_date": "Fri, 14 Sep 2001 09:25:27 -0400", "msg_from": "\"Joel W. 
Reed\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: syslog by default?" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> OK, that makes sense. My only question is how many platforms _don't_\n> have syslog. If it is only NT and QNX, I think we can live with using\n> it by default if it exists.\n\nThere seems to be a certain amount of confusion here. The proposal at\nhand was to make configure set up to *compile* the syslog support\nwhenever possible. Not to *use* syslog by default. Unless we change\nthe default postgresql.conf --- which I would be against --- we will\nstill log to stderr by default.\n\nGiven that, I'm not sure that Peter's argument about losing\nfunctionality is right; the analogy to readline support isn't exact.\nPerhaps what we should do is (a) always build syslog support if\npossible, and (b) at runtime, complain if syslog logging is requested\nbut we don't have it available.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 14 Sep 2001 18:05:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: syslog by default? " }, { "msg_contents": "At the just-past OSDN database conference, Bruce and I were annoyed by\nsome benchmark results showing that Postgres performed poorly on an\n8-way SMP machine. Based on past discussion, it seems likely that the\nculprit is the known inefficiency in our spinlock implementation.\nAfter chewing on it for awhile, we came up with an idea for a solution.\n\nThe following proposal should improve performance substantially when\nthere is contention for a lock, but it creates no portability risks\nbecause it uses the same system facilities (TAS and SysV semaphores)\nthat we have always relied on. Also, I think it'd be fairly easy to\nimplement --- I could probably get it done in a day.\n\nComments anyone?\n\n\t\t\tregards, tom lane\n\n\nPlan:\n\nReplace most uses of spinlocks with \"lightweight locks\" (LW locks)\nimplemented by a new lock manager. The principal remaining use of true\nspinlocks (TAS locks) will be to provide mutual exclusion of access to\nLW lock structures. Therefore, we can assume that spinlocks are never\nheld for more than a few dozen instructions --- and never across a kernel\ncall.\n\nIt's pretty easy to rejigger the spinlock code to work well when the lock\nis never held for long. We just need to change the spinlock retry code\nso that it does a tight spin (continuous retry) for a few dozen cycles ---\nideally, the total delay should be some small multiple of the max expected\nlock hold time. If lock still not acquired, yield the CPU via a select()\ncall (10 msec minimum delay) and repeat. Although this looks inefficient,\nit doesn't matter on a uniprocessor because we expect that backends will\nonly rarely be interrupted while holding the lock, so in practice a held\nlock will seldom be encountered. On SMP machines the tight spin will win\nsince the lock will normally become available before we give up and yield\nthe CPU.\n\nDesired properties of the LW lock manager include:\n\t* very fast fall-through when no contention for lock\n\t* waiting proc does not spin\n\t* support both exclusive and shared (read-only) lock modes\n\t* grant lock to waiters in arrival order (no starvation)\n\t* small lock structure to allow many LW locks to exist.\n\nProposed contents of LW lock structure:\n\n\tspinlock mutex (protects LW lock state and PROC queue links)\n\tcount of exclusive holders (always 0 or 1)\n\tcount of shared holders (0 .. 
MaxBackends)\n\tqueue head pointer (NULL or ptr to PROC object)\n\tqueue tail pointer (could do without this to save space)\n\nIf a backend sees it must wait to acquire the lock, it adds its PROC\nstruct to the end of the queue, releases the spinlock mutex, and then\nsleeps by P'ing its per-backend wait semaphore. A backend releasing the\nlock will check to see if any waiter should be granted the lock. If so,\nit will update the lock state, release the spinlock mutex, and finally V\nthe wait semaphores of any backends that it decided should be released\n(which it removed from the lock's queue while holding the sema). Notice\nthat no kernel calls need be done while holding the spinlock. Since the\nwait semaphore will remember a V occurring before P, there's no problem\nif the releaser is fast enough to release the waiter before the waiter\nreaches its P operation.\n\nWe will need to add a few fields to PROC structures:\n\t* Flag to show whether PROC is waiting for an LW lock, and if so\n\t whether it waits for read or write access\n\t* Additional PROC queue link field.\nWe can't reuse the existing queue link field because it is possible for a\nPROC to be waiting for both a heavyweight lock and a lightweight one ---\nthis will occur when HandleDeadLock or LockWaitCancel tries to acquire\nthe LockMgr module's lightweight lock (formerly spinlock).\n\nIt might seem that we also need to create a second wait semaphore per\nbackend, one to wait on HW locks and one to wait on LW locks. But I\nbelieve we can get away with just one, by recognizing that a wait for an\nLW lock can never be interrupted by a wait for a HW lock, only vice versa.\nAfter being awoken (V'd), the LW lock manager must check to see if it was\nactually granted the lock (easiest way: look at own PROC struct to see if\nLW lock wait flag has been cleared). If not, the V must have been to\ngrant us a HW lock --- but we still have to sleep to get the LW lock. So\nremember this happened, then loop back and P again. When we finally get\nthe LW lock, if there was an extra P operation then V the semaphore once\nbefore returning. This will allow ProcSleep to exit the wait for the HW\nlock when we return to it.\n\nFine points:\n\nWhile waiting for an LW lock, we need to show in our PROC struct whether\nwe are waiting for read or write access. But we don't need to remember\nthis after getting the lock; if we know we have the lock, it's easy to\nsee by inspecting the lock whether we hold read or write access.\n\nProcStructLock cannot be replaced by an LW lock, since a backend cannot\nuse an LW lock until it has obtained a PROC struct and a semaphore,\nboth of which are protected by this lock. It seems okay to use a plain\nspinlock for this purpose. NOTE: it's okay for SInvalLock to be an LW\nlock, as long as the LW mgr does not depend on accessing the SI array\nof PROC objects, but only chains through the PROCs themselves.\n\nAnother tricky point is that some of the setup code executed by the\npostmaster may try to to grab/release LW locks. Here, we can probably\nallow a special case for MyProc=NULL. It's likely that we should never\nsee a block under these circumstances anyway, so finding MyProc=NULL when\nwe need to block may just be a fatal error condition.\n\nA nastier case is checkpoint processes; these expect to grab BufMgr and\nWAL locks. Perhaps okay for them to do plain sleeps in between attempts\nto grab the locks? 
This says that the MyProc=NULL case should release\nthe spinlock mutex, sleep 10 msec, try again, rather than any sort of error\nor expectation of no conflict. Are there any cases where this represents\na horrid performance loss? Checkpoint itself seems noncritical.\n\nAlternative is for checkpoint to be allowed to create a PROC struct (but\nnot to enter it in SI list) so's it can participate normally in LW lock\noperations. That seems a good idea anyway, actually, so that the PROC\nstruct's facility for releasing held LW locks at elog time will work\ninside the checkpointer. (But that means we need an extra sema too?\nOkay, but don't want an extra would-be backend to obtain the extra sema\nand perhaps cause a checkpoint proc to fail. So must allocate the PROC\nand sema for checkpoint process separately from those reserved for\nbackends.)\n", "msg_date": "Wed, 26 Sep 2001 12:10:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Spinlock performance improvement proposal" }, { "msg_contents": "\nSounds cool to me ... definitely something to fix before v7.2, if its as\n\"easy\" as you make it sound ... I'm expecting the new drive to be\ninstalled today (if all goes well ... Thomas still has his date/time stuff\nto finish off, now that CVSup is fixed ...\n\nLet''s try and target Monday for Beta then? I think the only two\noutstaandings are you and Thomas right now?\n\nBruce, that latest rtree patch looks intriguing also ... can anyone\ncomment positive/negative about it, so that we can try and get that in\nbefore Beta?\n\nOn Wed, 26 Sep 2001, Tom Lane wrote:\n\n> At the just-past OSDN database conference, Bruce and I were annoyed by\n> some benchmark results showing that Postgres performed poorly on an\n> 8-way SMP machine. Based on past discussion, it seems likely that the\n> culprit is the known inefficiency in our spinlock implementation.\n> After chewing on it for awhile, we came up with an idea for a solution.\n>\n> The following proposal should improve performance substantially when\n> there is contention for a lock, but it creates no portability risks\n> because it uses the same system facilities (TAS and SysV semaphores)\n> that we have always relied on. Also, I think it'd be fairly easy to\n> implement --- I could probably get it done in a day.\n>\n> Comments anyone?\n>\n> \t\t\tregards, tom lane\n>\n>\n> Plan:\n>\n> Replace most uses of spinlocks with \"lightweight locks\" (LW locks)\n> implemented by a new lock manager. The principal remaining use of true\n> spinlocks (TAS locks) will be to provide mutual exclusion of access to\n> LW lock structures. Therefore, we can assume that spinlocks are never\n> held for more than a few dozen instructions --- and never across a kernel\n> call.\n>\n> It's pretty easy to rejigger the spinlock code to work well when the lock\n> is never held for long. We just need to change the spinlock retry code\n> so that it does a tight spin (continuous retry) for a few dozen cycles ---\n> ideally, the total delay should be some small multiple of the max expected\n> lock hold time. If lock still not acquired, yield the CPU via a select()\n> call (10 msec minimum delay) and repeat. Although this looks inefficient,\n> it doesn't matter on a uniprocessor because we expect that backends will\n> only rarely be interrupted while holding the lock, so in practice a held\n> lock will seldom be encountered. 
On SMP machines the tight spin will win\n> since the lock will normally become available before we give up and yield\n> the CPU.\n>\n> Desired properties of the LW lock manager include:\n> \t* very fast fall-through when no contention for lock\n> \t* waiting proc does not spin\n> \t* support both exclusive and shared (read-only) lock modes\n> \t* grant lock to waiters in arrival order (no starvation)\n> \t* small lock structure to allow many LW locks to exist.\n>\n> Proposed contents of LW lock structure:\n>\n> \tspinlock mutex (protects LW lock state and PROC queue links)\n> \tcount of exclusive holders (always 0 or 1)\n> \tcount of shared holders (0 .. MaxBackends)\n> \tqueue head pointer (NULL or ptr to PROC object)\n> \tqueue tail pointer (could do without this to save space)\n>\n> If a backend sees it must wait to acquire the lock, it adds its PROC\n> struct to the end of the queue, releases the spinlock mutex, and then\n> sleeps by P'ing its per-backend wait semaphore. A backend releasing the\n> lock will check to see if any waiter should be granted the lock. If so,\n> it will update the lock state, release the spinlock mutex, and finally V\n> the wait semaphores of any backends that it decided should be released\n> (which it removed from the lock's queue while holding the sema). Notice\n> that no kernel calls need be done while holding the spinlock. Since the\n> wait semaphore will remember a V occurring before P, there's no problem\n> if the releaser is fast enough to release the waiter before the waiter\n> reaches its P operation.\n>\n> We will need to add a few fields to PROC structures:\n> \t* Flag to show whether PROC is waiting for an LW lock, and if so\n> \t whether it waits for read or write access\n> \t* Additional PROC queue link field.\n> We can't reuse the existing queue link field because it is possible for a\n> PROC to be waiting for both a heavyweight lock and a lightweight one ---\n> this will occur when HandleDeadLock or LockWaitCancel tries to acquire\n> the LockMgr module's lightweight lock (formerly spinlock).\n>\n> It might seem that we also need to create a second wait semaphore per\n> backend, one to wait on HW locks and one to wait on LW locks. But I\n> believe we can get away with just one, by recognizing that a wait for an\n> LW lock can never be interrupted by a wait for a HW lock, only vice versa.\n> After being awoken (V'd), the LW lock manager must check to see if it was\n> actually granted the lock (easiest way: look at own PROC struct to see if\n> LW lock wait flag has been cleared). If not, the V must have been to\n> grant us a HW lock --- but we still have to sleep to get the LW lock. So\n> remember this happened, then loop back and P again. When we finally get\n> the LW lock, if there was an extra P operation then V the semaphore once\n> before returning. This will allow ProcSleep to exit the wait for the HW\n> lock when we return to it.\n>\n> Fine points:\n>\n> While waiting for an LW lock, we need to show in our PROC struct whether\n> we are waiting for read or write access. But we don't need to remember\n> this after getting the lock; if we know we have the lock, it's easy to\n> see by inspecting the lock whether we hold read or write access.\n>\n> ProcStructLock cannot be replaced by an LW lock, since a backend cannot\n> use an LW lock until it has obtained a PROC struct and a semaphore,\n> both of which are protected by this lock. It seems okay to use a plain\n> spinlock for this purpose. 
NOTE: it's okay for SInvalLock to be an LW\n> lock, as long as the LW mgr does not depend on accessing the SI array\n> of PROC objects, but only chains through the PROCs themselves.\n>\n> Another tricky point is that some of the setup code executed by the\n> postmaster may try to to grab/release LW locks. Here, we can probably\n> allow a special case for MyProc=NULL. It's likely that we should never\n> see a block under these circumstances anyway, so finding MyProc=NULL when\n> we need to block may just be a fatal error condition.\n>\n> A nastier case is checkpoint processes; these expect to grab BufMgr and\n> WAL locks. Perhaps okay for them to do plain sleeps in between attempts\n> to grab the locks? This says that the MyProc=NULL case should release\n> the spinlock mutex, sleep 10 msec, try again, rather than any sort of error\n> or expectation of no conflict. Are there any cases where this represents\n> a horrid performance loss? Checkpoint itself seems noncritical.\n>\n> Alternative is for checkpoint to be allowed to create a PROC struct (but\n> not to enter it in SI list) so's it can participate normally in LW lock\n> operations. That seems a good idea anyway, actually, so that the PROC\n> struct's facility for releasing held LW locks at elog time will work\n> inside the checkpointer. (But that means we need an extra sema too?\n> Okay, but don't want an extra would-be backend to obtain the extra sema\n> and perhaps cause a checkpoint proc to fail. So must allocate the PROC\n> and sema for checkpoint process separately from those reserved for\n> backends.)\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n>\n\n", "msg_date": "Wed, 26 Sep 2001 13:18:38 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal" }, { "msg_contents": "\"Marc G. Fournier\" <[email protected]> writes:\n> Let''s try and target Monday for Beta then?\n\nSounds like a plan.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 26 Sep 2001 13:22:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal " }, { "msg_contents": "\nThe plan for the new spinlocks does look like it has some potential. My \nonly comment in regards to permformance when we start looking at SMP \nmachines is ... it is my belief that getting a true threaded backend may \nbe the only way to get the full potential out of SMP machines. I see that \nis one of the things to experiment with on the TODO list and I have seen \nsome people have messed around already with this using Solaris threads. \nIt should probably be attempted with pthreads if PostgreSQL is going to \nkeep some resemblance of cross-platform compatibility. At that time, it \nwould probably be easier to go in and clean up some stuff for the \nimplementation of other TODO items (put in the base framework for more \ncomplex future items) as threading the backend would take a little bit of \nideology shift.\n\nOf course, it is much easier to stand back and talk about this then \nactually do it - especially comming from someone who has only tried to \ncontribute a few pieces of code. Keep up the good work.\n\n\nOn Wed, 26 Sep 2001, Tom Lane wrote:\n\n> At the just-past OSDN database conference, Bruce and I were annoyed by\n> some benchmark results showing that Postgres performed poorly on an\n> 8-way SMP machine. 
Based on past discussion, it seems likely that the\n> culprit is the known inefficiency in our spinlock implementation.\n> After chewing on it for awhile, we came up with an idea for a solution.\n> \n> The following proposal should improve performance substantially when\n> there is contention for a lock, but it creates no portability risks\n> because it uses the same system facilities (TAS and SysV semaphores)\n> that we have always relied on. Also, I think it'd be fairly easy to\n> implement --- I could probably get it done in a day.\n> \n> Comments anyone?\n> \n> \t\t\tregards, tom lane\n\n-- \n//========================================================\\\\\n|| D. Hageman <[email protected]> ||\n\\\\========================================================//\n\n\n\n", "msg_date": "Wed, 26 Sep 2001 13:40:46 -0500 (CDT)", "msg_from": "\"D. Hageman\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal" }, { "msg_contents": "Tom Lane wrote:\n> \n> At the just-past OSDN database conference, Bruce and I were annoyed by\n> some benchmark results showing that Postgres performed poorly on an\n> 8-way SMP machine. Based on past discussion, it seems likely that the\n> culprit is the known inefficiency in our spinlock implementation.\n> After chewing on it for awhile, we came up with an idea for a solution.\n> \n> The following proposal should improve performance substantially when\n> there is contention for a lock, but it creates no portability risks\n> because it uses the same system facilities (TAS and SysV semaphores)\n> that we have always relied on. Also, I think it'd be fairly easy to\n> implement --- I could probably get it done in a day.\n> \n> Comments anyone?\n\n\nWe have been doing some scalability testing just recently here at Red\nHat. The machine I was using was a 4-way 550 MHz Xeon SMP machine, I\nalso ran the machine in uniprocessor mode to make some comparisons. All\nruns were made on Red Hat Linux running 2.4.x series kernels. I've\nexamined a number of potentially interesting cases -- I'm still\nanalyzing the results, but some of the initial results might be\ninteresting:\n\n- We have tried benchmarking the following: TAS spinlocks (existing\nimplementation), SysV semaphores (existing implementation), Pthread\nMutexes. Pgbench runs were conducted for 1 to 512 simultaneous backends.\n\n For these three cases we found:\n - TAS spinlocks fared the best of all three lock types, however above\n100 clients the Pthread mutexes were lock step in performance. I expect\nthis is due to the cost of any system calls being negligible\nrelative to lock wait time.\n - SysV semaphore implementation faired terribly as expected. However,\nit is worse, relative to the TAS spinlocks on SMP than on uniprocessor.\n\n- Since the above seemed to indicate that the lock implementation may\nnot be the problem (Pthread mutexes are supposed to be implemented to be\nless bang-bang than the Postgres TAS spinlocks, IIRC), I decided to\nprofile Postgres. After much trouble, I got results for it using\noprofile, a kernel profiler for Linux. Unfortunately, I can only profile\nfor uniprocessor right now using oprofile, as it doesn't support SMP\nboxes yet. 
(soon, I hope.)\n\nInitial results (top five -- if you would like a complete profile, let\nme know):\nEach sample counts as 1 samples.\n % cumulative self self total \n time samples samples calls T1/call T1/call name \n 26.57 42255.02 42255.02 \nFindLockCycleRecurse\n 5.55 51081.02 8826.00 s_lock_sleep\n 5.07 59145.03 8064.00 heapgettup\n 4.48 66274.03 7129.00 hash_search\n 4.48 73397.03 7123.00 s_lock\n 2.85 77926.03 4529.00 \nHeapTupleSatisfiesSnapshot\n 2.07 81217.04 3291.00 SHMQueueNext\n 1.85 84154.04 2937.00 AllocSetAlloc\n 1.84 87085.04 2931.00 fmgr_isbuiltin\n 1.64 89696.04 2611.00 set_ps_display\n 1.51 92101.04 2405.00 FunctionCall2\n 1.47 94442.04 2341.00 XLogInsert\n 1.39 96649.04 2207.00 _bt_compare\n 1.22 98597.04 1948.00 SpinAcquire\n 1.22 100544.04 1947.00 LockBuffer\n 1.21 102469.04 1925.00 tag_hash\n 1.01 104078.05 1609.00 LockAcquire\n.\n.\n.\n\n(The samples are proportional to execution time.)\n\nThis would seem to point to the deadlock detector. (Which some have\nfingered as a possible culprit before, IIRC.)\n\nHowever, this seems to be a red herring. Removing the deadlock detector\nhad no effect. In fact, benchmarking showed removing it yielded no\nimprovement in transaction processing rate on uniprocessor or SMP\nsystems. Instead, it seems that the deadlock detector simply amounts to\n\"something to do\" for the blocked backend while it waits for lock\nacquisition. \n\nProfiling bears this out:\n\nFlat profile:\n\nEach sample counts as 1 samples.\n % cumulative self self total \n time samples samples calls T1/call T1/call name \n 12.38 14112.01 14112.01 s_lock_sleep\n 10.18 25710.01 11598.01 s_lock\n 6.47 33079.01 7369.00 hash_search\n 5.88 39784.02 6705.00 heapgettup\n 5.32 45843.02 6059.00 \nHeapTupleSatisfiesSnapshot \n 2.62 48830.02 2987.00 AllocSetAlloc\n 2.48 51654.02 2824.00 fmgr_isbuiltin\n 1.89 53813.02 2159.00 XLogInsert\n 1.86 55938.02 2125.00 _bt_compare\n 1.72 57893.03 1955.00 SpinAcquire\n 1.61 59733.03 1840.00 LockBuffer\n 1.60 61560.03 1827.00 FunctionCall2\n 1.56 63339.03 1779.00 tag_hash\n 1.46 65007.03 1668.00 set_ps_display\n 1.20 66372.03 1365.00 SearchCatCache\n 1.14 67666.03 1294.00 LockAcquire\n. \n.\n.\n\nOur current suspicion isn't that the lock implementation is the only\nproblem (though there is certainly room for improvement), or perhaps\nisn't even the main problem. For example, there has been some suggestion\nthat perhaps some component of the database is causing large lock\ncontention. My opinion is that rather than guessing and taking stabs in\nthe dark, we need to take a more reasoned approach to these things.\nIMHO, the next step should be to apply instrumentation (likely via some\nneat macros) to all lock acquires / releases. Then, it will be possible\nto determine what components are the greatest consumers of locks, and to\ndetermine whether it is a component problem or a systemic problem. (i.e.\nsome component vs. simply just the lock implementation.)\n\nNeil\n\n-- \nNeil Padgett\nRed Hat Canada Ltd. E-Mail: [email protected]\n2323 Yonge Street, Suite #300, \nToronto, ON M4P 2C9\n", "msg_date": "Wed, 26 Sep 2001 14:46:16 -0400", "msg_from": "Neil Padgett <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal" }, { "msg_contents": "\"D. Hageman\" <[email protected]> writes:\n\n> The plan for the new spinlocks does look like it has some potential. My \n> only comment in regards to permformance when we start looking at SMP \n> machines is ... 
it is my belief that getting a true threaded backend may \n> be the only way to get the full potential out of SMP machines.\n\nDepends on what you mean. For scaling well with many connections and\nsimultaneous queries, there's no reason IMHO that the current\nprocess-per-backend model won't do, assuming the locking issues are\naddressed. \n\nIf you're talking about making a single query use multiple CPUs, then\nyes, we're probably talking about a fundamental rewrite to use threads \nor some other mechanism.\n\n-Doug\n-- \nIn a world of steel-eyed death, and men who are fighting to be warm,\nCome in, she said, I'll give you shelter from the storm. -Dylan\n", "msg_date": "26 Sep 2001 15:18:18 -0400", "msg_from": "Doug McNaught <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal" }, { "msg_contents": "On 26 Sep 2001, Doug McNaught wrote:\n\n> \"D. Hageman\" <[email protected]> writes:\n> \n> > The plan for the new spinlocks does look like it has some potential. My \n> > only comment in regards to performance when we start looking at SMP \n> > machines is ... it is my belief that getting a true threaded backend may \n> > be the only way to get the full potential out of SMP machines.\n> \n> Depends on what you mean. For scaling well with many connections and\n> simultaneous queries, there's no reason IMHO that the current\n> process-per-backend model won't do, assuming the locking issues are\n> addressed. \n\nWell, I know the current process-per-backend model does quite well. My \nargument is not that it fails to do as intended. My original argument is \nthat it is my belief (at the moment, with the knowledge I have) that to get the \nfull potential out of SMP machines, threads might be the way to go. The \ndata from RedHat is quite interesting, so my feelings on this might \nchange or could be reinforced. I watch anxiously ;-)\n\n> If you're talking about making a single query use multiple CPUs, then\n> yes, we're probably talking about a fundamental rewrite to use threads \n> or some other mechanism.\n\nWell, we have several thread model ideologies that we could choose from. \nOnly experimentation would let us determine the proper path to follow and \nthen it wouldn't be ideal for everyone. You kinda just have to take the \nbest scenario and run with it. My first inclination would be something \nlike a thread per connection (to reduce connection overhead), but then we \ncould run into limits on different platforms (threads per process). I \nkinda like the idea of using a thread for replication purposes ... lots \nof interesting possibilities exist and I will be the first to admit that I \ndon't have all the answers. \n\n-- \n//========================================================\\\\\n|| D. Hageman <[email protected]> ||\n\\\\========================================================//\n\n", "msg_date": "Wed, 26 Sep 2001 15:03:11 -0500 (CDT)", "msg_from": "\"D. Hageman\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal" }, { "msg_contents": "Neil Padgett <[email protected]> writes:\n> Initial results (top five -- if you would like a complete profile, let\n> me know):\n> Each sample counts as 1 samples.\n> % cumulative self self total \n> time samples samples calls T1/call T1/call name \n> 26.57 42255.02 42255.02 FindLockCycleRecurse\n\nYipes. It would be interesting to know more about the locking pattern\nof your benchmark --- are there long waits-for chains, or not? 
The\npresent deadlock detector was certainly written with an eye to \"get it\nright\" rather than \"make it fast\", but I wonder whether this shows a\nperformance problem in the detector, or just too many executions because\nyou're waiting too long to get locks.\n\n> However, this seems to be a red herring. Removing the deadlock detector\n> had no effect. In fact, benchmarking showed removing it yielded no\n> improvement in transaction processing rate on uniprocessor or SMP\n> systems. Instead, it seems that the deadlock detector simply amounts to\n> \"something to do\" for the blocked backend while it waits for lock\n> acquisition. \n\nDo you have any idea about the typical lock-acquisition delay in this\nbenchmark? Our docs advise trying to set DEADLOCK_TIMEOUT higher than\nthe typical acquisition delay, so that the deadlock detector does not\nrun unnecessarily.\n\n> For example, there has been some suggestion\n> that perhaps some component of the database is causing large lock\n> contention.\n\nMy thought as well. I would certainly recommend that you use more than\none test case while looking at these things.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 26 Sep 2001 16:05:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal " }, { "msg_contents": "\"D. Hageman\" wrote:\n\n> The plan for the new spinlocks does look like it has some potential. My\n> only comment in regards to performance when we start looking at SMP\n> machines is ... it is my belief that getting a true threaded backend may\n> be the only way to get the full potential out of SMP machines. I see that\n> is one of the things to experiment with on the TODO list and I have seen\n> some people have messed around already with this using Solaris threads.\n> It should probably be attempted with pthreads if PostgreSQL is going to\n> keep some resemblance of cross-platform compatibility. At that time, it\n> would probably be easier to go in and clean up some stuff for the\n> implementation of other TODO items (put in the base framework for more\n> complex future items) as threading the backend would take a little bit of\n> ideology shift.\n\nI can only think of two objectives for threading. (1) running the various\nconnections in their own thread instead of their own process. (2) running\ncomplex queries across multiple threads.\n\nFor item (1) I see no value to this. It is a lot of work with no tangible\nbenefit. If you have an old fashion pthreads implementation, it will hurt\nperformance because threads are scheduled within the single process's time slice. If\nyou have a newer kernel scheduled implementation, then you will have the same\nscheduling as separate processes. The only thing you will need to do is\nswitch your brain from figuring out how to share data, to trying to figure\nout how to isolate data. 
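\n\n(To be concrete about what isolating data means here: under POSIX threads every\nformer per-process global is visible to all connections unless you explicitly hang\nit off a thread-specific key. A minimal sketch, purely illustrative -- the names\nare made up and are not anything in the backend:)\n\n#include <pthread.h>\n#include <stdlib.h>\n\n/* one key per piece of formerly-global, per-connection state */\nstatic pthread_key_t my_state_key;\nstatic pthread_once_t key_once = PTHREAD_ONCE_INIT;\n\nstatic void\nmake_key(void)\n{\n\t/* the destructor (free) runs once per thread, at thread exit */\n\tpthread_key_create(&my_state_key, free);\n}\n\n/* each connection thread gets and sees only its own private copy */\nstatic int *\nget_my_state(void)\n{\n\tint *state;\n\n\tpthread_once(&key_once, make_key);\n\tstate = pthread_getspecific(my_state_key);\n\tif (state == NULL)\n\t{\n\t\tstate = calloc(1, sizeof(int));\n\t\tpthread_setspecific(my_state_key, state);\n\t}\n\treturn state;\n}\n\nEvery access to state kept this way now pays for a key lookup, which is part of\nthe cost being traded against.\n\n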
A multithreaded implementation lacks many of the\nbenefits and robustness of a multiprocess implementation.\n\nFor item (2) I can see how that could speed up queries in a low utilization\nsystem, and that would be cool, but in a server that is under load, threading\nthe queries probably be less efficient.\n\n", "msg_date": "Wed, 26 Sep 2001 16:43:02 -0400", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal" }, { "msg_contents": "Tom Lane wrote:\n> \n> Neil Padgett <[email protected]> writes:\n> > Initial results (top five -- if you would like a complete profile, let\n> > me know):\n> > Each sample counts as 1 samples.\n> > % cumulative self self total\n> > time samples samples calls T1/call T1/call name\n> > 26.57 42255.02 42255.02 FindLockCycleRecurse\n> \n> Yipes. It would be interesting to know more about the locking pattern\n> of your benchmark --- are there long waits-for chains, or not? The\n> present deadlock detector was certainly written with an eye to \"get it\n> right\" rather than \"make it fast\", but I wonder whether this shows a\n> performance problem in the detector, or just too many executions because\n> you're waiting too long to get locks.\n> \n> > However, this seems to be a red herring. Removing the deadlock detector\n> > had no effect. In fact, benchmarking showed removing it yielded no\n> > improvement in transaction processing rate on uniprocessor or SMP\n> > systems. Instead, it seems that the deadlock detector simply amounts to\n> > \"something to do\" for the blocked backend while it waits for lock\n> > acquisition.\n> \n> Do you have any idea about the typical lock-acquisition delay in this\n> benchmark? Our docs advise trying to set DEADLOCK_TIMEOUT higher than\n> the typical acquisition delay, so that the deadlock detector does not\n> run unnecessarily.\n\nWell. Currently the runs are the typical pg_bench runs. This was useful\nsince it was a handy benchmark that was already done, and I was hoping\nit might be useful for comparison since it seems to be popular. More\nbenchmarks of different types would of course be useful though. \n\nI think the large time consumed by the deadlock detector in the profile\nis simply due to too many executions while waiting to acquire to\ncontended locks. But, I agree that it seems DEADLOCK_TIMEOUT was set too\nlow, since it appears from the profile output that the deadlock detector\nwas running unnecessarily. But the deadlock detector isn't causing the\nSMP performance hit right now, since the throughput is the same with it\nin place or with it removed completely. I therefore didn't make any\nattempt to tune DEADLOCK_TIMEOUT. As I mentioned before, it apparently\njust gives the backend \"something\" to do while it waits for a lock. \n\nI'm thinking that the deadlock detector unnecessarily has no effect on\nperformance since the shared memory is causing some level of\nserialization. So, one CPU (or two, or three, but not all) is doing\nuseful work, while the others are idle (that is to say, doing no useful\nwork). If they are idle spinning, or idle running the deadlock detector\nthe net throughput is still the same. (This might also indicate that\nimproving the lock design won't help here.) Of course, another\npossibility is that you spend so long spinning simply because you do\nspin (rather than sleep), and this is wasting much CPU time so the\nuseful work backends take longer to get things done. 
Either is just\nspeculation right now without any data to back things up.\n\n> \n> > For example, there has been some suggestion\n> > that perhaps some component of the database is causing large lock\n> > contention.\n> \n> My thought as well. I would certainly recommend that you use more than\n> one test case while looking at these things.\n\nYes. That is another suggestion for a next step. Several cases might\nserve to better expose the path causing the slowdown. I think that\nseveral test cases of varying usage patterns, coupled with hold time\ninstrumentation (which can tell what routine acquired the lock and how\nlong it held it, and yield wait-for data in the analysis), are the right\nway to go about attacking SMP performance. Any other thoughts?\n\nNeil\n\n-- \nNeil Padgett\nRed Hat Canada Ltd. E-Mail: [email protected]\n2323 Yonge Street, Suite #300, \nToronto, ON M4P 2C9\n", "msg_date": "Wed, 26 Sep 2001 16:53:08 -0400", "msg_from": "Neil Padgett <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal" }, { "msg_contents": "On Wed, 26 Sep 2001, mlw wrote:\n> \n> I can only think of two objectives for threading. (1) running the various\n> connections in their own thread instead of their own process. (2) running\n> complex queries across multiple threads.\n> \n> For item (1) I see no value to this. It is a lot of work with no tangible\n> benefit. If you have an old fashion pthreads implementation, it will hurt\n> performance because are scheduled within the single process's time slice..\n\nOld fashion ... as in a userland library that implements POSIX threads? \nWell, I would agree. However, most *modern* implementations are done in \nthe kernel or kernel and userland coop model and don't have this \nlimitation (as you mention later in your e-mail). You have kinda hit on \none of my gripes about computers in general. At what point in time does \none say something is obsolete or too old to support anymore - that it \nhinders progress instead of adding a \"feature\"?\n\n> you have a newer kernel scheduled implementation, then you will have the same\n> scheduling as separate processes. The only thing you will need to do is\n> switch your brain from figuring out how to share data, to trying to figure\n> out how to isolate data. A multithreaded implementation lacks many of the\n> benefits and robustness of a multiprocess implementation.\n\nSave for the fact that the kernel can switch between threads faster then \nit can switch processes considering threads share the same address space, \nstack, code, etc. If need be sharing the data between threads is much \neasier then sharing between processes. \n\nI can't comment on the \"isolate data\" line. I am still trying to figure \nthat one out.\n\nThat last line is a troll if I every saw it ;-) I will agree that threads \nisn't for everything and that it has costs just like everything else. Let \nme stress that last part - like everything else. Certain costs exist in \nthe present model, nothing is - how should we say ... perfect.\n\n> For item (2) I can see how that could speed up queries in a low utilization\n> system, and that would be cool, but in a server that is under load, threading\n> the queries probably be less efficient.\n\nWell, I don't follow your logic and you didn't give any substance to back \nup your claim. I am willing to listen.\n\nAnother thought ... Oracle uses threads doesn't it or at least it has a \nsingle processor and multi-processor version last time I knew ... 
which do \nthey claim is better? (Not saying that Oracle's proclimation of what is \ngood and what is not matters, but it is good for another view point).\n\n-- \n//========================================================\\\\\n|| D. Hageman <[email protected]> ||\n\\\\========================================================//\n\n", "msg_date": "Wed, 26 Sep 2001 16:14:22 -0500 (CDT)", "msg_from": "\"D. Hageman\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal" }, { "msg_contents": "Neil Padgett <[email protected]> writes:\n> Well. Currently the runs are the typical pg_bench runs.\n\nWith what parameters? If you don't initialize the pg_bench database\nwith \"scale\" proportional to the number of clients you intend to use,\nthen you'll naturally get huge lock contention. For example, if you\nuse scale=1, there's only one \"branch\" in the database. Since every\ntransaction wants to update the branch's balance, every transaction\nhas to write-lock that single row, and so everybody serializes on that\none lock. Under these conditions it's not surprising to see lots of\nlock waits and lots of useless runs of the deadlock detector ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 26 Sep 2001 17:17:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal " }, { "msg_contents": "\n\nOn Wed, 26 Sep 2001, mlw wrote:\n\n> I can only think of two objectives for threading. (1) running the various\n> connections in their own thread instead of their own process. (2) running\n> complex queries across multiple threads.\n> \n\nI did a multi-threaded version of 7.0.2 using Solaris threads about a year\nago in order to try\nand get multiple backend connections working under one java process using\njni. I used the thread per connection model.\n\nI eventually got it working, but it was/is very messy ( there were global\nvariables everywhere! ). Anyway, I was able to get a pretty good speed up\non inserts by scheduling buffer writes from multiple connections on one\ncommon writing thread. \n\nI also got some other features that were important to me at the time.\n\n1. True prepared statements under java with bound input and output\nvariables\n2. Better system utilization \n\ta. fewer Solaris lightweight processes mapped to threads.\n\tb. Fewer open files per postgres installation \n3. Automatic vacuums when system activity is low by a daemon thread.\n\nbut there were some drawbacks... One rogue thread or bad user \nfunction could take down all connections for that process. This\nwas and seems to still be the major drawback to using threads.\n\n\nMyron Scott\[email protected]\n\n", "msg_date": "Wed, 26 Sep 2001 15:03:00 -0700 (PDT)", "msg_from": "Myron Scott <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal" }, { "msg_contents": "\"D. Hageman\" <[email protected]> writes:\n\n> > you have a newer kernel scheduled implementation, then you will have the same\n> > scheduling as separate processes. The only thing you will need to do is\n> > switch your brain from figuring out how to share data, to trying to figure\n> > out how to isolate data. 
A multithreaded implementation lacks many of the\n> > benefits and robustness of a multiprocess implementation.\n> \n> Save for the fact that the kernel can switch between threads faster then \n> it can switch processes considering threads share the same address space, \n> stack, code, etc. If need be sharing the data between threads is much \n> easier then sharing between processes. \n\nWhen using a kernel threading model, it's not obvious to me that the\nkernel will switch between threads much faster than it will switch\nbetween processes. As far as I can see, the only potential savings is\nnot reloading the pointers to the page tables. That is not nothing,\nbut it is also not a lot.\n\n> I can't comment on the \"isolate data\" line. I am still trying to figure \n> that one out.\n\nSometimes you need data which is specific to a particular thread.\nBasically, you have to look at every global variable in the Postgres\nbackend, and determine whether to share it among all threads or to\nmake it thread-specific. In other words, you have to take extra steps\nto isolate the data within the thread. This is the reverse of the\ncurrent situation, in which you have to take extra steps to share data\namong all backend processes.\n\n> That last line is a troll if I every saw it ;-) I will agree that threads \n> isn't for everything and that it has costs just like everything else. Let \n> me stress that last part - like everything else. Certain costs exist in \n> the present model, nothing is - how should we say ... perfect.\n\nWhen writing in C, threading inevitably loses robustness. Erratic\nbehaviour by one thread, perhaps in a user defined function, can\nsubtly corrupt the entire system, rather than just that thread. Part\nof defensive programming is building barriers between different parts\nof a system. Process boundaries are a powerful barrier.\n\n(Actually, though, Postgres is already vulnerable to erratic behaviour\nbecause any backend process can corrupt the shared buffer pool.)\n\nIan\n", "msg_date": "26 Sep 2001 15:04:41 -0700", "msg_from": "Ian Lance Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal" }, { "msg_contents": "\"D. Hageman\" <[email protected]> writes:\n\n> Save for the fact that the kernel can switch between threads faster then \n> it can switch processes considering threads share the same address space, \n> stack, code, etc. If need be sharing the data between threads is much \n> easier then sharing between processes. \n\nThis depends on your system. Solaris has a huge difference between\nthread and process context switch times, whereas Linux has very little \ndifference (and in fact a Linux process context switch is about as\nfast as a Solaris thread switch on the same hardware--Solaris is just\na pig when it comes to process context switching). \n\n> I can't comment on the \"isolate data\" line. I am still trying to figure \n> that one out.\n\nI think his point is one of clarity and maintainability. When a\ntask's data is explicitly shared (via shared memory of some sort) it's\nfairly clear when you're accessing shared data and need to worry about\nlocking. Whereas when all data is shared by default (as with threads)\nit's very easy to miss places where threads can step on each other.\n\n-Doug\n-- \nIn a world of steel-eyed death, and men who are fighting to be warm,\nCome in, she said, I'll give you shelter from the storm. 
-Dylan\n", "msg_date": "26 Sep 2001 18:39:44 -0400", "msg_from": "Doug McNaught <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal" }, { "msg_contents": "On 26 Sep 2001, Ian Lance Taylor wrote:\n>\n> > Save for the fact that the kernel can switch between threads faster then \n> > it can switch processes considering threads share the same address space, \n> > stack, code, etc. If need be sharing the data between threads is much \n> > easier then sharing between processes. \n> \n> When using a kernel threading model, it's not obvious to me that the\n> kernel will switch between threads much faster than it will switch\n> between processes. As far as I can see, the only potential savings is\n> not reloading the pointers to the page tables. That is not nothing,\n> but it is also not a lot.\n\nIt is my understanding that avoiding a full context switch of the \nprocessor can be of a significant advantage. This is especially important \non processor architectures that can be kinda slow at doing it (x86). I \nwill admit that most modern kernels have features that assist software \npackages utilizing the forking model (copy on write for instance). It is \nalso my impression that these do a good job. I am the kind of guy that \nlooks towards the future (as in a year, year and half or so) and say that \nprocessors will hopefully get faster at context switching and more and \nmore kernels will implement these algorithms to speed up the forking \nmodel. At the same time, I see more and more processors being shoved into \na single box and it appears that the threads model works better on these \ntype of systems. \n\n> > I can't comment on the \"isolate data\" line. I am still trying to figure \n> > that one out.\n> \n> Sometimes you need data which is specific to a particular thread.\n\nWhen you need data that is specific to a thread you use a TSD (Thread \nSpecific Data). \n\n> Basically, you have to look at every global variable in the Postgres\n> backend, and determine whether to share it among all threads or to\n> make it thread-specific.\n\nYes, if one was to implement threads into PostgreSQL I would think that \nsome re-writing would be in order of several areas. Like I said before, \ngive a person a chance to restructure things so future TODO items wouldn't \nbe so hard to implement. Personally, I like to stay away from global \nvariables as much as possible. They just get you into trouble.\n\n> > That last line is a troll if I every saw it ;-) I will agree that threads \n> > isn't for everything and that it has costs just like everything else. Let \n> > me stress that last part - like everything else. Certain costs exist in \n> > the present model, nothing is - how should we say ... perfect.\n> \n> When writing in C, threading inevitably loses robustness. Erratic\n> behaviour by one thread, perhaps in a user defined function, can\n> subtly corrupt the entire system, rather than just that thread. Part\n> of defensive programming is building barriers between different parts\n> of a system. Process boundaries are a powerful barrier.\n\nI agree with everything you wrote above except for the first line. My \nonly comment is that process boundaries are only *truely* a powerful \nbarrier if the processes are different pieces of code and are not \ndependent on each other in crippling ways. 
Forking the same code with the \nbug in it - and only 1 in 5 die - is still 4 copies of buggy code running \non your system ;-) \n\n> (Actually, though, Postgres is already vulnerable to erratic behaviour\n> because any backend process can corrupt the shared buffer pool.)\n\nI appreciate your total honest view of the situation. \n\n-- \n//========================================================\\\\\n|| D. Hageman <[email protected]> ||\n\\\\========================================================//\n\n\n", "msg_date": "Wed, 26 Sep 2001 18:18:08 -0500 (CDT)", "msg_from": "\"D. Hageman\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal" }, { "msg_contents": "On 26 Sep 2001, Doug McNaught wrote:\n\n> This depends on your system. Solaris has a huge difference between\n> thread and process context switch times, whereas Linux has very little \n> difference (and in fact a Linux process context switch is about as\n> fast as a Solaris thread switch on the same hardware--Solaris is just\n> a pig when it comes to process context switching). \n\nYeah, I kinda commented on this in another e-mail. Linux has some nice \ntweaks for software using the forking model, but I am sure a couple of \nSolaris admins out there like to run PostgreSQL. ;-) You are right in \nthat it is very system dependent. I should have prefaced it with \"In \ngeneral ...\"\n\n> > I can't comment on the \"isolate data\" line. I am still trying to figure \n> > that one out.\n> \n> I think his point is one of clarity and maintainability. When a\n> task's data is explicitly shared (via shared memory of some sort) it's\n> fairly clear when you're accessing shared data and need to worry about\n> locking. Whereas when all data is shared by default (as with threads)\n> it's very easy to miss places where threads can step on each other.\n\nWell, I understand what you are saying and you are correct. The situation \nis that when you implement anything using pthreads you lock your \nvariables (which is where the major performance penalty comes into play \nwith threads). Now, the kicker is how you lock them. Depending on how \nyou do it (as per discussion earlier on this list concerning threads) it \ncan be faster or slower. It all depends on what model you use. \n\nData is not explicitely shared between threads unless you make it so. The \nthreads just share the same stack and all of that, but you can't \n(shouldn't is probably a better word) really access anything you don't have \nan address for. Threads just makes it easier to share if you want to. \nAlso, see my other e-mail to the list concerning TSDs.\n\n-- \n//========================================================\\\\\n|| D. Hageman <[email protected]> ||\n\\\\========================================================//\n\n", "msg_date": "Wed, 26 Sep 2001 18:32:32 -0500 (CDT)", "msg_from": "\"D. Hageman\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal" }, { "msg_contents": "Ian Lance Taylor <[email protected]> writes:\n> (Actually, though, Postgres is already vulnerable to erratic behaviour\n> because any backend process can corrupt the shared buffer pool.)\n\nNot to mention the other parts of shared memory.\n\nNonetheless, our experience has been that cross-backend failures due to\nmemory clobbers in shared memory are very infrequent --- certainly far\nless often than we see localized-to-a-backend crashes. 
Probably this is\nbecause the shared memory is (a) small compared to the rest of the\naddress space and (b) only accessed by certain specific modules within\nPostgres.\n\nI'm convinced that switching to a thread model would result in a\nsignificant degradation in our ability to recover from coredump-type\nfailures, even given the (implausible) assumption that we introduce no\nnew bugs during the conversion. I'm also *un*convinced that such a\nconversion will yield significant performance benefits, unless we\nintroduce additional cross-thread dependencies (and more fragility\nand lock contention) by tactics such as sharing catalog caches across\nthreads.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 26 Sep 2001 20:46:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal " }, { "msg_contents": "> ... Thomas still has his date/time stuff\n> to finish off, now that CVSup is fixed ...\n\nI'm now getting clean runs through the regression tests on a freshly\nmerged cvs tree. I'd like to look at it a little more to adjust\npg_proc.h attributes before I commit the changes.\n\nThere was a bit of a hiccup when merging since there was some bytea\nstuff added to the catalogs over the last couple of weeks. Could folks\nhold off on claiming new OIDs until I get this stuff committed? TIA\n\nI expect to be able to merge this stuff by Friday at the latest, more\nlikely tomorrow.\n\n - Thomas\n", "msg_date": "Thu, 27 Sep 2001 02:45:06 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal" }, { "msg_contents": "On Wed, 26 Sep 2001, D. Hageman wrote:\n\n> > > Save for the fact that the kernel can switch between threads faster then \n> > > it can switch processes considering threads share the same address space, \n> > > stack, code, etc. If need be sharing the data between threads is much \n> > > easier then sharing between processes. \n> > \n> > When using a kernel threading model, it's not obvious to me that the\n> > kernel will switch between threads much faster than it will switch\n> > between processes. As far as I can see, the only potential savings is\n> > not reloading the pointers to the page tables. That is not nothing,\n> > but it is also\n<major snippage>\n> > > I can't comment on the \"isolate data\" line. I am still trying to figure \n> > > that one out.\n> > \n> > Sometimes you need data which is specific to a particular thread.\n> \n> When you need data that is specific to a thread you use a TSD (Thread \n> Specific Data). \nWhich Linux does not support with a vengeance, to my knowledge.\n\nAs a matter of fact, quote from Linus on the matter was something like\n\"Solution to slow process switching is fast process switching, not another\nkernel abstraction [referring to threads and TSD]\". TSDs make\nimplementation of thread switching complex, and fork() complex.\n\nThe question about threads boils down to: Is there far more data that is\nshared than unshared? If yes, threads are better, if not, you'll be\nabusing TSD and slowing things down. \n\nI believe right now, postgresql' model of sharing only things that need to\nbe shared is pretty damn good. 
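That explicit-sharing model is essentially the SysV shared memory pattern: the postmaster sets up one well-known segment, each forked backend ends up attached to it, and everything else stays private copy-on-write memory. Very roughly (a sketch only; real keys, sizing and error reporting omitted):

    #include <stddef.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    /* Create (or find) the shared segment and map it into this process.
     * Forked children inherit the mapping; all other memory stays private. */
    void *
    attach_shared_segment(key_t key, size_t size)
    {
        int   id;
        void *addr;

        id = shmget(key, size, IPC_CREAT | 0600);
        if (id < 0)
            return NULL;

        addr = shmat(id, NULL, 0);
        return (addr == (void *) -1) ? NULL : addr;
    }

Anything a backend scribbles on outside that one mapping can only hurt itself, which is the isolation being argued for here.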
The only slight problem is overhead of\nforking another backend, but its still _fast_.\n\nIMHO, threads would not bring large improvement to postgresql.\n\n Actually, if I remember, there was someone who ported postgresql (I think\nit was 6.5) to be multithreaded with major pain, because the requirement\nwas to integrate with CORBA. I believe that person posted some benchmarks\nwhich were essentially identical to non-threaded postgres...\n\n-alex\n\n", "msg_date": "Wed, 26 Sep 2001 22:58:41 -0400 (EDT)", "msg_from": "Alex Pilosov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal" }, { "msg_contents": "On Wed, 26 Sep 2001, Alex Pilosov wrote:\n\n> On Wed, 26 Sep 2001, D. Hageman wrote:\n> \n> > When you need data that is specific to a thread you use a TSD (Thread \n> > Specific Data). \n\n> Which Linux does not support with a vengeance, to my knowledge.\n\nI am not sure what that means. If it works it works. \n\n> As a matter of fact, quote from Linus on the matter was something like\n> \"Solution to slow process switching is fast process switching, not another\n> kernel abstraction [referring to threads and TSD]\". TSDs make\n> implementation of thread switching complex, and fork() complex.\n\nLinus does have some interesting ideas. I always like to hear his \nperspective on matters, but just like the government - I don't always \nagree with him. I don't see why TSDs would make the implementation of \nthread switching complex - seems to me that would be something that is \nimplemented in the userland side part of the pthreads implemenation and \nnot the kernel side. I don't really like to talk specifics, but both the \nlightweight process and the system call fork() are implemented using the \n__clone kernel function with the parameters slightly different (This is \nin the Linux kernel, btw since you wanted to use that as an example). The \nspeed improvements the kernel has given the fork() command (like copy on \nwrite) only lasts until the process writes to memmory. The next time it \ncomes around - it is for all intents and purposes a full context switch \nagain. With threads ... the cost is relatively consistant.\n\n> The question about threads boils down to: Is there far more data that is\n> shared than unshared? If yes, threads are better, if not, you'll be\n> abusing TSD and slowing things down. \n\nI think the question about threads boils down to if the core members of \nthe PostgreSQL team want to try it or not. At this time, I would have to \nsay they pretty much agree they like things the way they are now, which is \ncompletely fine. They are the ones that spend most of the time on it and \nwant to support it.\n\n> I believe right now, postgresql' model of sharing only things that need to\n> be shared is pretty damn good. The only slight problem is overhead of\n> forking another backend, but its still _fast_.\n\nOh, man ... am I reading stuff into what you are writing or are you \nreading stuff into what I am writing? Maybe a little bit of both? My \noriginal contention is that I think that the best way to get the full \npotential out of SMP machines is to use a threads model. I didn't say the \npresent way wasn't fast. \n\n> Actually, if I remember, there was someone who ported postgresql (I think\n> it was 6.5) to be multithreaded with major pain, because the requirement\n> was to integrate with CORBA. 
I believe that person posted some benchmarks\n> which were essentially identical to non-threaded postgres...\n\nActually, it was 7.0.2 and the performance gain was interesting. The \nposting can be found at:\n\nhttp://candle.pha.pa.us/mhonarc/todo.detail/thread/msg00007.html\n\nThe results are:\n\n20 clients, 900 inserts per client, 1 insert per transaction, 4 different\ntables.\n\n7.0.2 About 10:52 average completion\nmulti-threaded 2:42 average completion\n7.1beta3 1:13 average completion\n\nIf the multi-threaded version was 7.0.2 and threads increased performance \nthat much - I would have to say that was a bonus. However, the \nperformance increases that the PostgreSQL team implemented later ... \npushed the regular version ahead again. That kinda says to me that \npotential is there.\n\nIf you look at Myron Scott's post today you will see that it had other \nadvantages going for it (like auto-vacuum!) and disadvantages ... rogue \nthread corruption (already debated today).\n\n-- \n//========================================================\\\\\n|| D. Hageman <[email protected]> ||\n\\\\========================================================//\n\n\n\n", "msg_date": "Wed, 26 Sep 2001 22:41:39 -0500 (CDT)", "msg_from": "\"D. Hageman\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal" }, { "msg_contents": "On Wed, 26 Sep 2001, D. Hageman wrote:\n\n> Oh, man ... am I reading stuff into what you are writing or are you \n> reading stuff into what I am writing? Maybe a little bit of both? My \n> original contention is that I think that the best way to get the full \n> potential out of SMP machines is to use a threads model. I didn't say the \n> present way wasn't fast. \nOr alternatively, that the current inter-process locking is a bit\ninefficient. Its possible to have inter-process locks that are as fast as\ninter-thread locks.\n\n> > Actually, if I remember, there was someone who ported postgresql (I think\n> > it was 6.5) to be multithreaded with major pain, because the requirement\n> > was to integrate with CORBA. I believe that person posted some benchmarks\n> > which were essentially identical to non-threaded postgres...\n> \n> Actually, it was 7.0.2 and the performance gain was interesting. The \n> posting can be found at:\n> \n> 7.0.2 About 10:52 average completion\n> multi-threaded 2:42 average completion\n> 7.1beta3 1:13 average completion\n> \n> If the multi-threaded version was 7.0.2 and threads increased performance \n> that much - I would have to say that was a bonus. However, the \n> performance increases that the PostgreSQL team implemented later ... \n> pushed the regular version ahead again. That kinda says to me that \n> potential is there.\nAlternatively, you could read that 7.1 took the wind out of threaded\nsails. :) But I guess we won't know until the current version is ported to\nthreads...\n\n-alex\n\n", "msg_date": "Thu, 27 Sep 2001 00:08:51 -0400 (EDT)", "msg_from": "Alex Pilosov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal" }, { "msg_contents": "\"D. Hageman\" <[email protected]> writes:\n> If you look at Myron Scott's post today you will see that it had other \n> advantages going for it (like auto-vacuum!) and disadvantages ... 
rogue \n> thread corruption (already debated today).\n\nBut note that Myron did a number of things that are (IMHO) orthogonal\nto process-to-thread conversion, such as adding prepared statements,\na separate thread/process/whateveryoucallit for buffer writing, ditto\nfor vacuuming, etc. I think his results cannot be taken as indicative\nof the benefits of threads per se --- these other things could be\nimplemented in a pure process model too, and we have no data with which\nto estimate which change bought how much.\n\nThreading certainly should reduce the context switch time, but this\ncomes at the price of increased overhead within each context (since\naccess to thread-local variables is not free). It's by no means\nobvious that there's a net win there.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 27 Sep 2001 00:19:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal " }, { "msg_contents": "\n \n> But note that Myron did a number of things that are (IMHO) orthogonal\n\nyes, I did :)\n\n> to process-to-thread conversion, such as adding prepared statements,\n> a separate thread/process/whateveryoucallit for buffer writing, ditto\n> for vacuuming, etc. I think his results cannot be taken as indicative\n> of the benefits of threads per se --- these other things could be\n> implemented in a pure process model too, and we have no data with which\n> to estimate which change bought how much.\n> \n\nIf you are comparing just process vs. thread, I really don't think I\ngained much for performance and ended up with some pretty unmanageable\ncode.\n\nThe one thing that led to most of the gains was scheduling all the writes\nto one thread which, as noted by Tom, you could do on the process model.\nBesides, Most of the advantage in doing this was taken away with the\naddition of WAL in 7.1.\n\nThe other real gain that I saw with threading was limiting the number of\nopen files but\nthat led me to alter much of the file manager in order to synchronize\naccess to the files which probably slowed things a bit.\n\nTo be honest, I don't think I, personally,\nwould try this again. I went pretty far off\nthe beaten path with this thing. It works well for what I am doing \n( a limited number of SQL statements run many times over ) but there\nprobably was a better way. I'm thinking now that I should have tried to \nadd a CORBA interface for connections. I would have been able to \naccomplish my original goals without creating a deadend for myself.\n\n\nThanks all for a great project,\n\nMyron\[email protected]\n\n", "msg_date": "Wed, 26 Sep 2001 22:24:29 -0700 (PDT)", "msg_from": "Myron Scott <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal " }, { "msg_contents": "\"D. Hageman\" wrote:\n\n> On 26 Sep 2001, Ian Lance Taylor wrote:\n> >\n> > > Save for the fact that the kernel can switch between threads faster then\n> > > it can switch processes considering threads share the same address space,\n> > > stack, code, etc. If need be sharing the data between threads is much\n> > > easier then sharing between processes.\n> >\n> > When using a kernel threading model, it's not obvious to me that the\n> > kernel will switch between threads much faster than it will switch\n> > between processes. As far as I can see, the only potential savings is\n> > not reloading the pointers to the page tables. 
That is not nothing,\n> > but it is also not a lot.\n>\n> It is my understanding that avoiding a full context switch of the\n> processor can be of a significant advantage. This is especially important\n> on processor architectures that can be kinda slow at doing it (x86). I\n> will admit that most modern kernels have features that assist software\n> packages utilizing the forking model (copy on write for instance). It is\n> also my impression that these do a good job. I am the kind of guy that\n> looks towards the future (as in a year, year and half or so) and say that\n> processors will hopefully get faster at context switching and more and\n> more kernels will implement these algorithms to speed up the forking\n> model. At the same time, I see more and more processors being shoved into\n> a single box and it appears that the threads model works better on these\n> type of systems.\n\n\"context\" switching happens all the time on a multitasking system. On the x86\nprocessor, a context switch happens when you call into the kernel. You have to go\nthrough a call-gate to get to a lower privilege ring. \"context\" switching is very\nfast. The operating system dictates how heavy or light a process switch is. Under\nLinux (and I believe FreeBSD with Linux threads, or version 4.x ) threads and\nprocesses are virtually identical. The only difference is that the virtual memory\npages are not \"copy on write.\" Process vs thread scheduling is also virtually\nidentical.\n\nIf you look to the future, then you should accept that process switching should\nbecome more efficient as the operating systems improve.\n\n>\n> > > I can't comment on the \"isolate data\" line. I am still trying to figure\n> > > that one out.\n> >\n> > Sometimes you need data which is specific to a particular thread.\n>\n> When you need data that is specific to a thread you use a TSD (Thread\n> Specific Data).\n\nYes, but Postgres has many global variables. The assumption has always been that\nit is a stand-alone process with an explicitly shared paradigm, not implicitly.\n\n>\n> > Basically, you have to look at every global variable in the Postgres\n> > backend, and determine whether to share it among all threads or to\n> > make it thread-specific.\n>\n> Yes, if one was to implement threads into PostgreSQL I would think that\n> some re-writing would be in order of several areas. Like I said before,\n> give a person a chance to restructure things so future TODO items wouldn't\n> be so hard to implement. Personally, I like to stay away from global\n> variables as much as possible. They just get you into trouble.\n\nIn real live software, software which lives from year to year with active\ndevelopment, things do get messy. There are always global variables involved in a\nprogram. Efforts, of course, should be made to keep them to a minimum, but the\nreality is that they always happen.\n\nAlso, the very structure of function calls may need to change when going from a\nprocess model to a threaded model. Functions never before reentrant are now be\nreentrant, think about that. That is a huge undertaking. Every single function\nmay need to be examined for thread safety, with little benefit.\n\n>\n> > > That last line is a troll if I every saw it ;-) I will agree that threads\n> > > isn't for everything and that it has costs just like everything else. Let\n> > > me stress that last part - like everything else. Certain costs exist in\n> > > the present model, nothing is - how should we say ... 
perfect.\n> >\n> > When writing in C, threading inevitably loses robustness. Erratic\n> > behaviour by one thread, perhaps in a user defined function, can\n> > subtly corrupt the entire system, rather than just that thread. Part\n> > of defensive programming is building barriers between different parts\n> > of a system. Process boundaries are a powerful barrier.\n>\n> I agree with everything you wrote above except for the first line. My\n> only comment is that process boundaries are only *truely* a powerful\n> barrier if the processes are different pieces of code and are not\n> dependent on each other in crippling ways. Forking the same code with the\n> bug in it - and only 1 in 5 die - is still 4 copies of buggy code running\n> on your system ;-)\n\nThis is simply not true. All software has bugs, it is an undeniable fact. Some\nbugs are more likely to be hit than others. 5 processes , when one process hits a\nbug, that does not mean the other 4 will hit the same bug. Obscure bugs kill\nsoftware all the time, the trick is to minimize the impact. Software is not\nperfect, assuming it can be is a mistake.\n\n\n\n\n", "msg_date": "Thu, 27 Sep 2001 10:02:05 -0400", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal" }, { "msg_contents": "Tom Lane wrote:\n> \n> Neil Padgett <[email protected]> writes:\n> > Well. Currently the runs are the typical pg_bench runs.\n> \n> With what parameters? If you don't initialize the pg_bench database\n> with \"scale\" proportional to the number of clients you intend to use,\n> then you'll naturally get huge lock contention. For example, if you\n> use scale=1, there's only one \"branch\" in the database. Since every\n> transaction wants to update the branch's balance, every transaction\n> has to write-lock that single row, and so everybody serializes on that\n> one lock. Under these conditions it's not surprising to see lots of\n> lock waits and lots of useless runs of the deadlock detector ...\n\nThe results you saw with the large number of useless runs of the\ndeadlock detector had a scale factor of 2. With a scale factor 2, the\nperformance fall-off began at about 100 clients. So, I reran the 512\nclient profiling run with a scale factor of 12. (2:100 as 10:500 -- so\n12 might be an appropriate scale factor with some cushion?) This does,\nof course, reduce the contention. However, the throughput is still only\nabout twice as much, which sounds good, but is still a small fraction of\nthe throughput realized on the same machine with a small number of\nclients. 
(This is the uniprocessor machine.)\n\nThe new profile looks like this (uniprocessor machine):\nFlat profile:\n\nEach sample counts as 1 samples.\n % cumulative self self total \n time samples samples calls T1/call T1/call name \n 9.44 10753.00 10753.00 pg_fsync (I'd\nattribute this to the slow disk in the machine -- scale 12 yields a lot\nof tuples.)\n 6.63 18303.01 7550.00 s_lock_sleep\n 6.56 25773.01 7470.00 s_lock\n 5.88 32473.01 6700.00 heapgettup\n 5.28 38487.02 6014.00 \nHeapTupleSatisfiesSnapshot\n 4.83 43995.02 5508.00 hash_destroy\n 2.77 47156.02 3161.00 load_file\n 1.90 49322.02 2166.00 XLogInsert\n 1.86 51436.02 2114.00 _bt_compare\n 1.82 53514.02 2078.00 AllocSetAlloc\n 1.72 55473.02 1959.00 LockBuffer\n 1.50 57180.02 1707.00 init_ps_display\n 1.40 58775.03 1595.00 \nDirectFunctionCall9\n 1.26 60211.03 1436.00 hash_search\n 1.14 61511.03 1300.00 GetSnapshotData\n 1.11 62780.03 1269.00 SpinAcquire\n 1.10 64028.03 1248.00 LockAcquire\n 1.04 70148.03 1190.00 heap_fetch\n 0.91 71182.03 1034.00 _bt_orderkeys\n 0.89 72201.03 1019.00 LockRelease\n 0.75 73058.03 857.00 \nInitBufferPoolAccess\n.\n.\n.\n\nI reran the benchmarks on the SMP machine with a scale of 12 instead of\n2. The numbers still show a clear performance drop off at approximately\n100 clients, albeit not as sharp. (But still quite pronounced.) In terms\nof raw performance, the numbers are comparable. The scale factor\ncertainly helped -- but it still seems that we might have a problem\nhere.\n\nThoughts?\n\nNeil\n\n-- \nNeil Padgett\nRed Hat Canada Ltd. E-Mail: [email protected]\n2323 Yonge Street, Suite #300, \nToronto, ON M4P 2C9\n", "msg_date": "Thu, 27 Sep 2001 14:42:42 -0400", "msg_from": "Neil Padgett <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal" }, { "msg_contents": "* Doug McNaught <[email protected]> wrote:\n|\n| Depends on what you mean. For scaling well with many connections and\n| simultaneous queries, there's no reason IMHO that the current\n| process-per-backend model won't do, assuming the locking issues are\n| addressed. \n\nWouldn't a threading model allow you to share more data across different\nconnections ? I'm thinking in terms of introducing more cache functionality\nto improve performance. What is shared memory used for today ?\n\n-- \nGunnar R�nning - [email protected]\nSenior Consultant, Polygnosis AS, http://www.polygnosis.com/\n", "msg_date": "28 Sep 2001 05:03:00 +0200", "msg_from": "Gunnar =?iso-8859-1?q?R=F8nning?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal" }, { "msg_contents": "> \n> Sounds cool to me ... definitely something to fix before v7.2, if its as\n> \"easy\" as you make it sound ... I'm expecting the new drive to be\n> installed today (if all goes well ... Thomas still has his date/time stuff\n> to finish off, now that CVSup is fixed ...\n> \n> Let''s try and target Monday for Beta then? I think the only two\n> outstaandings are you and Thomas right now?\n> \n> Bruce, that latest rtree patch looks intriguing also ... can anyone\n> comment positive/negative about it, so that we can try and get that in\n> before Beta?\n\nI put it in the queue and will apply in a day or two.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 28 Sep 2001 00:13:48 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal" }, { "msg_contents": "At 10:02 AM 9/27/01 -0400, mlw wrote:\n>\"D. Hageman\" wrote:\n>> I agree with everything you wrote above except for the first line. My\n>> only comment is that process boundaries are only *truely* a powerful\n>> barrier if the processes are different pieces of code and are not\n>> dependent on each other in crippling ways. Forking the same code with the\n>> bug in it - and only 1 in 5 die - is still 4 copies of buggy code running\n>> on your system ;-)\n>\n>This is simply not true. All software has bugs, it is an undeniable fact.\nSome\n>bugs are more likely to be hit than others. 5 processes , when one process\nhits a\n>bug, that does not mean the other 4 will hit the same bug. Obscure bugs kill\n>software all the time, the trick is to minimize the impact. Software is not\n>perfect, assuming it can be is a mistake.\n\nA bit off topic, but that really reminded me of how Microsoft does their\nforking in hardware.\n\nBasically they \"fork\" (cluster) FIVE windows machines to run the same buggy\ncode all on the same IP. That way if one process (machine) goes down, the\nother 4 stay running, thus minimizing the impact ;).\n\nThey have many of these clusters put together.\n\nSee: http://www.microsoft.com/backstage/column_T2_1.htm\n>From Microsoft.com Backstage [1]\n\nOK so it's old (1998), but from their recent articles I believe they're\nstill using the same method of achieving \"100% availability\". And they brag\nabout it like it's a good thing...\n\nWhen I first read it I didn't know whether to laugh or get disgusted or\nwhatever.\n\nCheerio,\nLink.\n\n[1]\nhttp://www.microsoft.com/backstage/\nhttp://www.microsoft.com/backstage/archives.htm\n\n\n", "msg_date": "Fri, 28 Sep 2001 23:32:32 +0800", "msg_from": "Lincoln Yeoh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal" }, { "msg_contents": "\nGood summary. I agree checkpoint should look like as normal a Proc as\npossible.\n\n\n> At the just-past OSDN database conference, Bruce and I were annoyed by\n> some benchmark results showing that Postgres performed poorly on an\n> 8-way SMP machine. Based on past discussion, it seems likely that the\n> culprit is the known inefficiency in our spinlock implementation.\n> After chewing on it for awhile, we came up with an idea for a solution.\n> \n> The following proposal should improve performance substantially when\n> there is contention for a lock, but it creates no portability risks\n> because it uses the same system facilities (TAS and SysV semaphores)\n> that we have always relied on. Also, I think it'd be fairly easy to\n> implement --- I could probably get it done in a day.\n> \n> Comments anyone?\n> \n> \t\t\tregards, tom lane\n> \n> \n> Plan:\n> \n> Replace most uses of spinlocks with \"lightweight locks\" (LW locks)\n> implemented by a new lock manager. The principal remaining use of true\n> spinlocks (TAS locks) will be to provide mutual exclusion of access to\n> LW lock structures. Therefore, we can assume that spinlocks are never\n> held for more than a few dozen instructions --- and never across a kernel\n> call.\n> \n> It's pretty easy to rejigger the spinlock code to work well when the lock\n> is never held for long. 
We just need to change the spinlock retry code\n> so that it does a tight spin (continuous retry) for a few dozen cycles ---\n> ideally, the total delay should be some small multiple of the max expected\n> lock hold time. If lock still not acquired, yield the CPU via a select()\n> call (10 msec minimum delay) and repeat. Although this looks inefficient,\n> it doesn't matter on a uniprocessor because we expect that backends will\n> only rarely be interrupted while holding the lock, so in practice a held\n> lock will seldom be encountered. On SMP machines the tight spin will win\n> since the lock will normally become available before we give up and yield\n> the CPU.\n> \n> Desired properties of the LW lock manager include:\n> \t* very fast fall-through when no contention for lock\n> \t* waiting proc does not spin\n> \t* support both exclusive and shared (read-only) lock modes\n> \t* grant lock to waiters in arrival order (no starvation)\n> \t* small lock structure to allow many LW locks to exist.\n> \n> Proposed contents of LW lock structure:\n> \n> \tspinlock mutex (protects LW lock state and PROC queue links)\n> \tcount of exclusive holders (always 0 or 1)\n> \tcount of shared holders (0 .. MaxBackends)\n> \tqueue head pointer (NULL or ptr to PROC object)\n> \tqueue tail pointer (could do without this to save space)\n> \n> If a backend sees it must wait to acquire the lock, it adds its PROC\n> struct to the end of the queue, releases the spinlock mutex, and then\n> sleeps by P'ing its per-backend wait semaphore. A backend releasing the\n> lock will check to see if any waiter should be granted the lock. If so,\n> it will update the lock state, release the spinlock mutex, and finally V\n> the wait semaphores of any backends that it decided should be released\n> (which it removed from the lock's queue while holding the sema). Notice\n> that no kernel calls need be done while holding the spinlock. Since the\n> wait semaphore will remember a V occurring before P, there's no problem\n> if the releaser is fast enough to release the waiter before the waiter\n> reaches its P operation.\n> \n> We will need to add a few fields to PROC structures:\n> \t* Flag to show whether PROC is waiting for an LW lock, and if so\n> \t whether it waits for read or write access\n> \t* Additional PROC queue link field.\n> We can't reuse the existing queue link field because it is possible for a\n> PROC to be waiting for both a heavyweight lock and a lightweight one ---\n> this will occur when HandleDeadLock or LockWaitCancel tries to acquire\n> the LockMgr module's lightweight lock (formerly spinlock).\n> \n> It might seem that we also need to create a second wait semaphore per\n> backend, one to wait on HW locks and one to wait on LW locks. But I\n> believe we can get away with just one, by recognizing that a wait for an\n> LW lock can never be interrupted by a wait for a HW lock, only vice versa.\n> After being awoken (V'd), the LW lock manager must check to see if it was\n> actually granted the lock (easiest way: look at own PROC struct to see if\n> LW lock wait flag has been cleared). If not, the V must have been to\n> grant us a HW lock --- but we still have to sleep to get the LW lock. So\n> remember this happened, then loop back and P again. When we finally get\n> the LW lock, if there was an extra P operation then V the semaphore once\n> before returning. 
This will allow ProcSleep to exit the wait for the HW\n> lock when we return to it.\n> \n> Fine points:\n> \n> While waiting for an LW lock, we need to show in our PROC struct whether\n> we are waiting for read or write access. But we don't need to remember\n> this after getting the lock; if we know we have the lock, it's easy to\n> see by inspecting the lock whether we hold read or write access.\n> \n> ProcStructLock cannot be replaced by an LW lock, since a backend cannot\n> use an LW lock until it has obtained a PROC struct and a semaphore,\n> both of which are protected by this lock. It seems okay to use a plain\n> spinlock for this purpose. NOTE: it's okay for SInvalLock to be an LW\n> lock, as long as the LW mgr does not depend on accessing the SI array\n> of PROC objects, but only chains through the PROCs themselves.\n> \n> Another tricky point is that some of the setup code executed by the\n> postmaster may try to to grab/release LW locks. Here, we can probably\n> allow a special case for MyProc=NULL. It's likely that we should never\n> see a block under these circumstances anyway, so finding MyProc=NULL when\n> we need to block may just be a fatal error condition.\n> \n> A nastier case is checkpoint processes; these expect to grab BufMgr and\n> WAL locks. Perhaps okay for them to do plain sleeps in between attempts\n> to grab the locks? This says that the MyProc=NULL case should release\n> the spinlock mutex, sleep 10 msec, try again, rather than any sort of error\n> or expectation of no conflict. Are there any cases where this represents\n> a horrid performance loss? Checkpoint itself seems noncritical.\n> \n> Alternative is for checkpoint to be allowed to create a PROC struct (but\n> not to enter it in SI list) so's it can participate normally in LW lock\n> operations. That seems a good idea anyway, actually, so that the PROC\n> struct's facility for releasing held LW locks at elog time will work\n> inside the checkpointer. (But that means we need an extra sema too?\n> Okay, but don't want an extra would-be backend to obtain the extra sema\n> and perhaps cause a checkpoint proc to fail. So must allocate the PROC\n> and sema for checkpoint process separately from those reserved for\n> backends.)\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 28 Sep 2001 13:11:29 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal" }, { "msg_contents": "> Save for the fact that the kernel can switch between threads faster then \n> it can switch processes considering threads share the same address space, \n> stack, code, etc. If need be sharing the data between threads is much \n> easier then sharing between processes. \n\nJust a clarification but because we fork each backend, don't they share\nthe same code space? Data/stack is still separate.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 28 Sep 2001 14:52:59 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal" }, { "msg_contents": "\nFYI, I have added a number of these emails to the 'thread' TODO.detail list.\n\n> On Wed, 26 Sep 2001, D. Hageman wrote:\n> \n> > > > Save for the fact that the kernel can switch between threads faster then \n> > > > it can switch processes considering threads share the same address space, \n> > > > stack, code, etc. If need be sharing the data between threads is much \n> > > > easier then sharing between processes. \n> > > \n> > > When using a kernel threading model, it's not obvious to me that the\n> > > kernel will switch between threads much faster than it will switch\n> > > between processes. As far as I can see, the only potential savings is\n> > > not reloading the pointers to the page tables. That is not nothing,\n> > > but it is also\n> <major snippage>\n> > > > I can't comment on the \"isolate data\" line. I am still trying to figure \n> > > > that one out.\n> > > \n> > > Sometimes you need data which is specific to a particular thread.\n> > \n> > When you need data that is specific to a thread you use a TSD (Thread \n> > Specific Data). \n> Which Linux does not support with a vengeance, to my knowledge.\n> \n> As a matter of fact, quote from Linus on the matter was something like\n> \"Solution to slow process switching is fast process switching, not another\n> kernel abstraction [referring to threads and TSD]\". TSDs make\n> implementation of thread switching complex, and fork() complex.\n> \n> The question about threads boils down to: Is there far more data that is\n> shared than unshared? If yes, threads are better, if not, you'll be\n> abusing TSD and slowing things down. \n> \n> I believe right now, postgresql' model of sharing only things that need to\n> be shared is pretty damn good. The only slight problem is overhead of\n> forking another backend, but its still _fast_.\n> \n> IMHO, threads would not bring large improvement to postgresql.\n> \n> Actually, if I remember, there was someone who ported postgresql (I think\n> it was 6.5) to be multithreaded with major pain, because the requirement\n> was to integrate with CORBA. I believe that person posted some benchmarks\n> which were essentially identical to non-threaded postgres...\n> \n> -alex\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 28 Sep 2001 15:07:12 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal" }, { "msg_contents": "> We have been doing some scalability testing just recently here at Red\n> Hat. The machine I was using was a 4-way 550 MHz Xeon SMP machine, I\n> also ran the machine in uniprocessor mode to make some comparisons. All\n> runs were made on Red Hat Linux running 2.4.x series kernels. I've\n> examined a number of potentially interesting cases -- I'm still\n> analyzing the results, but some of the initial results might be\n> interesting:\n\nLet me add a little historical information here. 
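For anyone who hasn't read s_lock.c, the problem code was basically a loop of this shape (a from-memory sketch using the backend's TAS()/slock_t names, not the literal source):

    #include <sys/time.h>           /* struct timeval */
    #include <unistd.h>             /* select() */
    /* slock_t and TAS() come from the backend's s_lock.h */

    static void
    spin_with_backoff(volatile slock_t *lock)
    {
        struct timeval delay;

        while (TAS(lock))           /* nonzero means someone else holds it */
        {
            delay.tv_sec  = 0;
            delay.tv_usec = 100;    /* ask for a tenth of a millisecond ... */
            select(0, NULL, NULL, NULL, &delay);
            /* ... but any nonzero timeout sleeps at least one clock tick */
        }
    }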
I think the first\nreport of bad performance on SMP machines was from Tatsuo, where he had\n1000 backends running in pgbench. He was seeing poor\ntransactions/second with little CPU or I/O usage. It was clear\nsomething was wrong.\n\nLooking at the code, it was easy to see that on SMP machines, the\nspinlock select() was a problem. Later tests on various OS's found that\nno matter how small your select interval was, select() couldn't sleep\nfor less than one cpu tick, which is tyically 100Hz or 10ms. At that\npoint we knew that the spinlock backoff code was a serious problem. On\nmulti-processor machines that could hit the backoff code on lock\nfailure, there where hudreds of threads sleeping for 10ms, then all\nwaking up, one gets the lock, and the others sleep again.\n\nOn single-cpu machines, the backoff code doesn't get hit too much, but\nit is still a problem. Tom's implementation changes backoffs in all\ncases by placing them in a semaphore queue and reducing the amount of\ncode protected by the spinlock.\n\nWe have these TODO items out of this:\n\n* Improve spinlock code [performance]\n o use SysV semaphores or queue of backends waiting on the lock\n o wakeup sleeper or sleep for less than one clock tick\n o spin for lock on multi-cpu machines, yield on single cpu machines\n o read/write locks\n\n\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 28 Sep 2001 15:21:24 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal" }, { "msg_contents": "Lincoln Yeoh wrote:\n> \n> At 10:02 AM 9/27/01 -0400, mlw wrote:\n> >\"D. Hageman\" wrote:\n> >> I agree with everything you wrote above except for the first line. My\n> >> only comment is that process boundaries are only *truely* a powerful\n> >> barrier if the processes are different pieces of code and are not\n> >> dependent on each other in crippling ways. Forking the same code with the\n> >> bug in it - and only 1 in 5 die - is still 4 copies of buggy code running\n> >> on your system ;-)\n> >\n> >This is simply not true. All software has bugs, it is an undeniable fact.\n> Some\n> >bugs are more likely to be hit than others. 5 processes , when one process\n> hits a\n> >bug, that does not mean the other 4 will hit the same bug. Obscure bugs kill\n> >software all the time, the trick is to minimize the impact. Software is not\n> >perfect, assuming it can be is a mistake.\n> \n> A bit off topic, but that really reminded me of how Microsoft does their\n> forking in hardware.\n> \n> Basically they \"fork\" (cluster) FIVE windows machines to run the same buggy\n> code all on the same IP. That way if one process (machine) goes down, the\n> other 4 stay running, thus minimizing the impact ;).\n> \n> They have many of these clusters put together.\n> \n> See: http://www.microsoft.com/backstage/column_T2_1.htm\n> >From Microsoft.com Backstage [1]\n> \n> OK so it's old (1998), but from their recent articles I believe they're\n> still using the same method of achieving \"100% availability\". 
And they brag\n> about it like it's a good thing...\n> \n> When I first read it I didn't know whether to laugh or get disgusted or\n> whatever.\n\nBelieve me don't think anyone should be shipping software with serious bugs in\nit, and I deplore Microsoft's complete lack of accountability when it comes to\nquality, but come on now, lets not lie to ourselves. No matter which god you\nmay pray to, you have to accept that people are not perfect and mistakes will\nbe made.\n\nAt issue is how well programs are isolated from one another (one of the\npurposes of operating systems) and how to deal with programmatic errors. I am\nnot advocating releasing bad software, I am just saying that you must code\ndefensively, assume a caller may pass the wrong parameters, don't trust that\nmalloc worked, etc. Stuff happens in the real world. Code to deal with it. \n\nIn the end, no matter what you do, you will have a crash at some point. (The\ntao of programming) accept it. Just try to make the damage as minimal as\npossible.\n", "msg_date": "Fri, 28 Sep 2001 23:13:59 -0400", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > Save for the fact that the kernel can switch between threads faster then\n> > it can switch processes considering threads share the same address space,\n> > stack, code, etc. If need be sharing the data between threads is much\n> > easier then sharing between processes.\n> \n> Just a clarification but because we fork each backend, don't they share\n> the same code space? Data/stack is still separate.\n\nIn Linux and many modern UNIX programs, you share everything at fork time. The\ndata and stack pages are marked \"copy on write\" which means that if you touch\nit, the processor traps and drops into the memory manager code. A new page is\ncreated and replaced into your address space where the page, to which you were\ngoing to write, was.\n", "msg_date": "Fri, 28 Sep 2001 23:26:32 -0400", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal" }, { "msg_contents": "> Bruce Momjian wrote:\n> > \n> > > Save for the fact that the kernel can switch between threads faster then\n> > > it can switch processes considering threads share the same address space,\n> > > stack, code, etc. If need be sharing the data between threads is much\n> > > easier then sharing between processes.\n> > \n> > Just a clarification but because we fork each backend, don't they share\n> > the same code space? Data/stack is still separate.\n> \n> In Linux and many modern UNIX programs, you share everything at fork time. The\n> data and stack pages are marked \"copy on write\" which means that if you touch\n> it, the processor traps and drops into the memory manager code. A new page is\n> created and replaced into your address space where the page, to which you were\n> going to write, was.\n\nYes, very true. My point was that backends already share code space and\nnon-modified data space. It is just modified data and stack that is\nnon-shared, but then again, they would have to be non-shared in a\nthreaded backend too.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 28 Sep 2001 23:41:18 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > Bruce Momjian wrote:\n> > >\n> > > > Save for the fact that the kernel can switch between threads faster then\n> > > > it can switch processes considering threads share the same address space,\n> > > > stack, code, etc. If need be sharing the data between threads is much\n> > > > easier then sharing between processes.\n> > >\n> > > Just a clarification but because we fork each backend, don't they share\n> > > the same code space? Data/stack is still separate.\n> >\n> > In Linux and many modern UNIX programs, you share everything at fork time. The\n> > data and stack pages are marked \"copy on write\" which means that if you touch\n> > it, the processor traps and drops into the memory manager code. A new page is\n> > created and replaced into your address space where the page, to which you were\n> > going to write, was.\n> \n> Yes, very true. My point was that backends already share code space and\n> non-modified data space. It is just modified data and stack that is\n> non-shared, but then again, they would have to be non-shared in a\n> threaded backend too.\n\nIn a threaded system everything would be shared, depending on the OS, even the\nstacks. The stacks could be allocated out of the same global pool.\n\nYou would need something like thread local storage to deal with isolating\naviables from one thread to another. That always seemed more trouble that it\nwas worth. Either that or go through each and every global variable in\nPostgreSQL and make it a member of a structure, and create an instance of this\nstructure for each new thread.\n\nIMHO once you go down the road of using Thread local memory, you are getting to\nthe same level of difficulty (for the OS) in task switching as just switching\nprocesses. The exception to this is Windows where tasks are such a big hit.\n\nI think threaded software is quite usefull, and I have a number of thread based\nservers in production. However, my experience tells me that the work trying to\nmove PostgreSQL to a threaded ebvironment would be extensive and have little or\nno tangable benefit.\n\nI would rather see stuff like 64bit OIDs, three options for function definition\n(short cache, nocache, long cache), etc. than to waste time making PostgreSQL\nthreaded. That's just my opinion.\n", "msg_date": "Sat, 29 Sep 2001 00:28:43 -0400", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal" }, { "msg_contents": "I wrote:\n> The following proposal should improve performance substantially when\n> there is contention for a lock, but it creates no portability risks\n> ...\n\nI have committed changes to implement this proposal. I'm not seeing\nany significant performance difference on pgbench on my single-CPU\nsystem ... but pgbench is I/O bound anyway on this hardware, so that's\nnot very surprising. I'll be interested to see what other people\nobserve. (Tatsuo, care to rerun that 1000-client test?)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 29 Sep 2001 01:37:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal " }, { "msg_contents": "> I have committed changes to implement this proposal. 
I'm not seeing\n> any significant performance difference on pgbench on my single-CPU\n> system ... but pgbench is I/O bound anyway on this hardware, so that's\n> not very surprising. I'll be interested to see what other people\n> observe. (Tatsuo, care to rerun that 1000-client test?)\n\nWhat is your system? CPU, memory, IDE/SCSI, OS?\nScaling factor and # of clients?\n\nBTW1 - shouldn't we rewrite pgbench to use threads instead of\n\"libpq async queries\"? At least as option. I'd say that with 1000\nclients current pgbench implementation is very poor.\n\nBTW2 - shouldn't we learn if there are really portability/performance\nissues in using POSIX mutex-es (and cond. variables) in place of\nTAS (and SysV semaphores)?\n\nVadim\n\n\n", "msg_date": "Sat, 29 Sep 2001 01:45:52 -0700", "msg_from": "\"Vadim Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal " }, { "msg_contents": "On Thursday 27 September 2001 04:09, you wrote:\n> This depends on your system. Solaris has a huge difference between\n> thread and process context switch times, whereas Linux has very little\n> difference (and in fact a Linux process context switch is about as\n> fast as a Solaris thread switch on the same hardware--Solaris is just\n> a pig when it comes to process context switching).\n\nI have never worked on any big systems but from what (little) I have seen, I \nthink there should be a hybrid model.\n\nThis whole discussion started off, from poor performance on SMP machines. If \nI am getting this correctly, threads can be spread on multiple CPUs if \navailable but process can not.\n\nSo I would suggest to have threaded approach for intensive tasks such as \nsorting/searching etc. IMHO converting entire paradigm to thread based is a \nhuge task and may not be required in all cases. \n\nI think of an approach. Threads are created when they are needed but they \nare kept dormant when not needed. So that there is no recreation overhead(if \nthat's a concern). So at any given point of time, one back end connection has \nas many threads as number of CPUs. More than that may not yield much of \nperformance improvement. Say a big task like sorting is split and given to \ndifferent threads so that it can use them all.\n\nIt should be easy to switch the threading function and arguments on the fly, \nrestricting number of threads and there will not be much of thread switching \nas each thread handles different parts of task and later the results are \nmerged.\n\nNumber of threads should be equal to or twice that of number of CPUs. I don't \nthink more than those many threads would yield any performance improvement.\n\nAnd with this approach we can migrate one functionality at a time to threaded \none, thus avoiding big effort at any given time.\n\nJust a suggestion.\n\n Shridhar\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Sat, 29 Sep 2001 18:48:56 +0530", "msg_from": "Chamanya <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal" }, { "msg_contents": "\"Vadim Mikheev\" <[email protected]> writes:\n>> I have committed changes to implement this proposal. I'm not seeing\n>> any significant performance difference on pgbench on my single-CPU\n>> system ... but pgbench is I/O bound anyway on this hardware, so that's\n>> not very surprising. I'll be interested to see what other people\n>> observe. 
(Tatsuo, care to rerun that 1000-client test?)\n\n> What is your system? CPU, memory, IDE/SCSI, OS?\n> Scaling factor and # of clients?\n\nHP C180, SCSI-2 disks, HPUX 10.20. I used scale factor 10 and between\n1 and 10 clients. Now that I think about it, I was running with the\ndefault NBuffers (64), which probably constrained performance too.\n\n> BTW1 - shouldn't we rewrite pgbench to use threads instead of\n> \"libpq async queries\"? At least as option. I'd say that with 1000\n> clients current pgbench implementation is very poor.\n\nWell, it uses select() to wait for activity, so as long as all query\nresponses arrive as single packets I don't see the problem. Certainly\nrewriting pgbench without making libpq thread-friendly won't help a bit.\n\n> BTW2 - shouldn't we learn if there are really portability/performance\n> issues in using POSIX mutex-es (and cond. variables) in place of\n> TAS (and SysV semaphores)?\n\nSure, that'd be worth looking into on a long-term basis.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 29 Sep 2001 10:25:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal " }, { "msg_contents": "Chamanya wrote:\n> \n> On Thursday 27 September 2001 04:09, you wrote:\n> > This depends on your system. Solaris has a huge difference between\n> > thread and process context switch times, whereas Linux has very little\n> > difference (and in fact a Linux process context switch is about as\n> > fast as a Solaris thread switch on the same hardware--Solaris is just\n> > a pig when it comes to process context switching).\n> \n> I have never worked on any big systems but from what (little) I have seen, I\n> think there should be a hybrid model.\n> \n> This whole discussion started off, from poor performance on SMP machines. If\n> I am getting this correctly, threads can be spread on multiple CPUs if\n> available but process can not.\n\nDifferent processes will be on handled evenly across all CPUs in an SMP\nmachine, unless you set process affinity for a process and a CPU.\n> \n> So I would suggest to have threaded approach for intensive tasks such as\n> sorting/searching etc. IMHO converting entire paradigm to thread based is a\n> huge task and may not be required in all cases.\n\nDividing a query into multiple threads is an amazing task. I wish I had a\ncouple years and someone willing to pay me to try it.\n\n> \n> I think of an approach. Threads are created when they are needed but they\n> are kept dormant when not needed. So that there is no recreation overhead(if\n> that's a concern). So at any given point of time, one back end connection has\n> as many threads as number of CPUs. More than that may not yield much of\n> performance improvement. Say a big task like sorting is split and given to\n> different threads so that it can use them all.\n\nThis is a huge undertaking, and quite frankly, if I understand PostgreSQL, a\ncomplete redesign of the entire system.\n> \n> It should be easy to switch the threading function and arguments on the fly,\n> restricting number of threads and there will not be much of thread switching\n> as each thread handles different parts of task and later the results are\n> merged.\n\nThat is not what I would consider easy.\n\n> \n> Number of threads should be equal to or twice that of number of CPUs. 
I don't\n> think more than those many threads would yield any performance improvement.\n\nThat isn't true at all.\n\nOne of the problems I see when when people discuss performance on an SMP\nmachine, is that they usually think from the perspective of a single task. If\nyou are doing data mining, one sql query may take a very long time. Which may\nbe a problem, but in the grander scheme of things there are usually multiple\nconcurrent performance issues to be considered. Threading the back end for\nparallel query processing will probably not help this. More often than not a\ndatabase has much more to do than one thing at a time.\n\nAlso, if you are threading query processing, you have to analyze what your\nquery needs to do with the threads. If your query is CPU bound, then you will\nwant to use fewer threads, if your query is I/O bound, you should have as many\nthreads as you have I/O requests, and have each thread block on the I/O.\n\n> \n> And with this approach we can migrate one functionality at a time to threaded\n> one, thus avoiding big effort at any given time.\n\nPerhaps I am being over dramatic, but I have moved a number of systems from\nfork() to threaded (for ports to Windows NT from UNIX), and if my opinion means\nanything on this mailing list, I STRONGLY urge against it. PostgreSQL is a huge\nsystem, over a decade old. The original developers are no longer working on it,\nand in fact, probably wouldn't recognize it. There are nooks and crannys that\nno one knows about.\n\nIt has also been my experience going from separate processes to separate\nthreads does not do much for performance, simply because the operation of your\nsystem does not change, only the methods by which you share memory. If you want\nto multithread a single query, that's a different story and a good R&D project\nin itself.\n", "msg_date": "Sat, 29 Sep 2001 11:00:06 -0400", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal" }, { "msg_contents": "> I wrote:\n> > The following proposal should improve performance substantially when\n> > there is contention for a lock, but it creates no portability risks\n> > ...\n> \n> I have committed changes to implement this proposal. I'm not seeing\n> any significant performance difference on pgbench on my single-CPU\n> system ... but pgbench is I/O bound anyway on this hardware, so that's\n> not very surprising. I'll be interested to see what other people\n> observe. (Tatsuo, care to rerun that 1000-client test?)\n\nI ran with 20 clients:\n\n\t$ pgbench -i test\n\t$ pgbench -c 20 -t 100 test\n\nand see no difference in tps performance between the two lock\nimplementations. I have a Dual PIII 550MHz i386 BSD/OS machine with\nSCSI disks.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 29 Sep 2001 14:37:49 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I ran with 20 clients:\n\nWhat scale factor? 
How many buffers?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 29 Sep 2001 14:39:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > I ran with 20 clients:\n> \n> What scale factor? How many buffers?\n\nNo scale factor, as I illustrated from the initialization command I\nused. Standard buffers too. Let me know what values I should use for\ntesting.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 29 Sep 2001 14:43:58 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> No scale factor, as I illustrated from the initialization command I\n> used. Standard buffers too. Let me know what values I should use for\n> testing.\n\nScale factor has to be >= max number of clients you use, else you're\njust measuring serialization on the \"branch\" rows.\n\nI think the default NBuffers (64) is too low to give meaningful\nperformance numbers, too. I've been thinking that maybe we should\nraise it to 1000 or so by default. This would trigger startup failures\non platforms with small SHMMAX, but we could tell people to use -B until\nthey get around to fixing their kernel settings. It's been a long time\nsince we fit into a 1-MB shared memory segment at the default settings\nanyway, so maybe it's time to select somewhat-realistic defaults.\nWhat we have now is neither very useful nor the lowest common\ndenominator...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 29 Sep 2001 14:59:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal " }, { "msg_contents": "\nOK, testing now with 1000 backends and 2000 buffers. Will report.\n\n> Bruce Momjian <[email protected]> writes:\n> > No scale factor, as I illustrated from the initialization command I\n> > used. Standard buffers too. Let me know what values I should use for\n> > testing.\n> \n> Scale factor has to be >= max number of clients you use, else you're\n> just measuring serialization on the \"branch\" rows.\n> \n> I think the default NBuffers (64) is too low to give meaningful\n> performance numbers, too. I've been thinking that maybe we should\n> raise it to 1000 or so by default. This would trigger startup failures\n> on platforms with small SHMMAX, but we could tell people to use -B until\n> they get around to fixing their kernel settings. It's been a long time\n> since we fit into a 1-MB shared memory segment at the default settings\n> anyway, so maybe it's time to select somewhat-realistic defaults.\n> What we have now is neither very useful nor the lowest common\n> denominator...\n> \n> \t\t\tregards, tom lane\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 29 Sep 2001 15:32:49 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal" }, { "msg_contents": "Vadim Mikheev wrote:\n> \n> > I have committed changes to implement this proposal. I'm not seeing\n> > any significant performance difference on pgbench on my single-CPU\n> > system ... but pgbench is I/O bound anyway on this hardware, so that's\n> > not very surprising. I'll be interested to see what other people\n> > observe. (Tatsuo, care to rerun that 1000-client test?)\n> \n> What is your system? CPU, memory, IDE/SCSI, OS?\n> Scaling factor and # of clients?\n> \n> BTW1 - shouldn't we rewrite pgbench to use threads instead of\n> \"libpq async queries\"? At least as option. I'd say that with 1000\n> clients current pgbench implementation is very poor.\n\nWould it be useful to run a test like the AS3AP benchmark on this to\nlook for performance measurements?\n\nOn linux the Open Source Database Benchmark (osdb.sf.net) does this, and\nit's multi-threaded to simulate multiple clients hitting the database at\nonce. The only inconvenience is having to download a separate program\nto generate the test data, as OSDB doesn't generate this itself yet. I\ncan supply the test program (needs to be run through Wine) and a script\nif anyone wants.\n\n???\n\n> \n> BTW2 - shouldn't we learn if there are really portability/performance\n> issues in using POSIX mutex-es (and cond. variables) in place of\n> TAS (and SysV semaphores)?\n> \n> Vadim\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Sun, 30 Sep 2001 12:31:20 +1000", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal" }, { "msg_contents": "On Sat, Sep 29, 2001 at 06:48:56PM +0530, Chamanya wrote:\n> \n> Number of threads should be equal to or twice that of number of CPUs. I don't \n> think more than those many threads would yield any performance improvement.\n> \n \n This expects that thread still runnig, but each process (thread) sometime\nwaiting for disk, net etc. During this time can runs some other thread.\n Performance of program not directly depends on number of CPU, but on \ntype of a work that execute thread. The important thing is how you can \nsplit a work to small and independent parts. \n\n\tKarel\n\n-- \n Karel Zak <[email protected]>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Mon, 1 Oct 2001 10:23:13 +0200", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal" }, { "msg_contents": "Tom Lane wrote:\n> \n<snip>\n> I think the default NBuffers (64) is too low to give meaningful\n> performance numbers, too. I've been thinking that maybe we should\n> raise it to 1000 or so by default. 
This would trigger startup failures\n> on platforms with small SHMMAX, but we could tell people to use -B until\n> they get around to fixing their kernel settings. It's been a long time\n> since we fit into a 1-MB shared memory segment at the default settings\n> anyway, so maybe it's time to select somewhat-realistic defaults.\n> What we have now is neither very useful nor the lowest common\n> denominator...\n\nHow about a startup error message which gets displayed when used with\nuntuned settings (i.e. the default settings), maybe unless an option\nlike -q (quiet) is given?\n\nMy thought is the server should operate, but let the new/novice admin\nknow they need to configure PostgreSQL properly. Would probably be a\ngood reminder for experienced admins if they forget too.\n\nMaybe something simple like pg_ctl shell script message, or something\nproper like a postmaster start-up check.\n\nThis wouldn't break anything would it?\n\nRegards and best wishes,\n\nJustin Clift\n\n> \n> regards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Mon, 01 Oct 2001 23:48:19 +1000", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal" }, { "msg_contents": "> Tom Lane wrote:\n> > \n> <snip>\n> > I think the default NBuffers (64) is too low to give meaningful\n> > performance numbers, too. I've been thinking that maybe we should\n> > raise it to 1000 or so by default. This would trigger startup failures\n> > on platforms with small SHMMAX, but we could tell people to use -B until\n> > they get around to fixing their kernel settings. It's been a long time\n> > since we fit into a 1-MB shared memory segment at the default settings\n> > anyway, so maybe it's time to select somewhat-realistic defaults.\n> > What we have now is neither very useful nor the lowest common\n> > denominator...\n> \n> How about a startup error message which gets displayed when used with\n> untuned settings (i.e. the default settings), maybe unless an option\n> like -q (quiet) is given?\n> \n> My thought is the server should operate, but let the new/novice admin\n> know they need to configure PostgreSQL properly. Would probably be a\n> good reminder for experienced admins if they forget too.\n> \n> Maybe something simple like pg_ctl shell script message, or something\n> proper like a postmaster start-up check.\n\nYes, this seems like the way to go, probably something in the postmaster\nlog file. For single-user developers, we want it to start but we want\nproduction machines to tune it. In fact, picking a higher number for\nthese values may be almost as far off as our defaults.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 1 Oct 2001 11:41:44 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Tom Lane wrote:\n> I think the default NBuffers (64) is too low to give meaningful\n> performance numbers, too. I've been thinking that maybe we should\n> raise it to 1000 or so by default.\n\n>> Maybe something simple like pg_ctl shell script message, or something\n>> proper like a postmaster start-up check.\n\n> Yes, this seems like the way to go, probably something in the postmaster\n> log file.\n\nExcept that a lot of people send postmaster stderr to /dev/null.\nI think bleating about untuned parameters in the postmaster log will be\nnext to useless, because it won't do a thing except for people who are\nclueful enough to (a) direct the log someplace useful and (b) look at it\ncarefully. Those folks are not the ones who need help about tuning.\n\nWe already have quite detailed error messages for shmget/semget\nfailures, eg\n\n$ postmaster -B 200000\nIpcMemoryCreate: shmget(key=5440001, size=1668366336, 03600) failed: Invalid argument\n\nThis error can be caused by one of three things:\n\n1. The maximum size for shared memory segments on your system was\n exceeded. You need to raise the SHMMAX parameter in your kernel\n to be at least 4042162176 bytes.\n\n2. The requested shared memory segment was too small for your system.\n You need to lower the SHMMIN parameter in your kernel.\n\n3. The requested shared memory segment already exists but is of the\n wrong size. This can occur if some other application on your system\n is also using shared memory.\n\nThe PostgreSQL Administrator's Guide contains more information about\nshared memory configuration.\n\n\nThis is still missing a bet since it fails to mention the option of\nadjusting -B and -N instead of changing kernel parameters, but that's\neasily fixed. I propose that we reword this message and the semget\none to mention first the option of changing -B/-N and second the option\nof changing kernel parameters. Then we could consider raising the\ndefault -B setting to something more realistic.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 01 Oct 2001 12:19:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal " }, { "msg_contents": "> This is still missing a bet since it fails to mention the option of\n> adjusting -B and -N instead of changing kernel parameters, but that's\n> easily fixed. I propose that we reword this message and the semget\n> one to mention first the option of changing -B/-N and second the option\n> of changing kernel parameters. Then we could consider raising the\n> default -B setting to something more realistic.\n\nYes, we could do that but it makes things harder for newbies and really\nisn't the right numbers for production use anyway. I think anyone using\ndefault values should see a message asking them to tune it. Can we\nthrow a message during initdb? Of course, we don't have a running\nbackend at that point so you would always throw a message.\n\n From postmaster startup, by default, could we try larger amounts of\nbuffer memory until it fails then back off and allocate that? 
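To make that concrete, here is a minimal sketch of such a probe-and-back-off
loop -- an illustration of the idea only, not actual backend code; the
function name and the halving policy are hypothetical, and only the standard
System V shmget()/shmctl() calls are assumed:

#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>

/*
 * Hypothetical sketch: ask for the desired number of buffers and, if the
 * kernel refuses the shared memory segment, halve the request and retry
 * until something fits or we reach a floor.
 */
static int
probe_shared_buffers(key_t key, size_t buffer_size, int desired, int minimum)
{
	int			nbuffers;

	for (nbuffers = desired; nbuffers >= minimum; nbuffers /= 2)
	{
		int			shmid = shmget(key, (size_t) nbuffers * buffer_size,
								   IPC_CREAT | 0600);

		if (shmid >= 0)
		{
			/* probe only: release the segment and report what fit */
			shmctl(shmid, IPC_RMID, NULL);
			return nbuffers;
		}
	}
	return -1;					/* even the minimum request failed */
}

The same loop could just as well log a warning each time it backs off, so the
admin still learns that the kernel limits are low.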
Seems\nlike a nice default to me.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 1 Oct 2001 14:49:45 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> From postmaster startup, by default, could we try larger amounts of\n> buffer memory until it fails then back off and allocate that? Seems\n> like a nice default to me.\n\nChewing all available memory is the very opposite of a nice default,\nI'd think.\n\nThe real problem here is that some platforms will let us have huge shmem\nsegments, and some will only let us have tiny ones, and neither of those\nis a reasonable default behavior. Allowing the platform to determine\nour sizing is the wrong way round IMHO; the dbadmin should have a clear\nidea of what he's getting, and silent adjustment of the B/N parameters\nwill not give him that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 01 Oct 2001 14:55:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal " }, { "msg_contents": "\nBruce Momjian <[email protected]> wrote:\n\n> From postmaster startup, by default, could we try larger amounts of\n> buffer memory until it fails then back off and allocate that? Seems\n> like a nice default to me.\n\nSo performance would vary depending on the amount of shared memory\nthat could be allocated at startup? Not a good idea IMHO.\n\nRegards,\n\nGiles\n\n", "msg_date": "Tue, 02 Oct 2001 07:57:10 +1000", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlock performance improvement proposal " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > OK, that makes sense. My only question is how many platforms _don't_\n> > have syslog. If it is only NT and QNX, I think we can live with using\n> > it by default if it exists.\n> \n> There seems to be a certain amount of confusion here. The proposal at\n> hand was to make configure set up to *compile* the syslog support\n> whenever possible. Not to *use* syslog by default. Unless we change\n> the default postgresql.conf --- which I would be against --- we will\n> still log to stderr by default.\n> \n> Given that, I'm not sure that Peter's argument about losing\n> functionality is right; the analogy to readline support isn't exact.\n> Perhaps what we should do is (a) always build syslog support if\n> possible, and (b) at runtime, complain if syslog logging is requested\n> but we don't have it available.\n\nDid we decide to compile in syslog support by default? I thought so.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Oct 2001 12:39:01 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: syslog by default?" 
}, { "msg_contents": "\"Thomas Yackel\" <[email protected]> writes:\n> I got the error: \"Bad abstime external representation ''\" when attempted to start psql as a particular user and the postmaster shutdown.\n\n> The problem, we discovered, is that this user had a carriage return contained within his password. Changing the password to remove the CR avoided the system shutdown.\n\nHmm. I can see how a linefeed in a password would create a problem (it\nbreaks the line-oriented formatting of the pg_pwd file). However, I\ncan't reproduce a postmaster crash here. Either I'm not testing the\nright combination of circumstances, or current sources are more robust\nabout this than 7.1. That's not unlikely given that Bruce rewrote the\npassword-file-parsing code a couple months ago.\n\nIn any case it seems like it'd be a good idea to forbid nonprinting\ncharacters in passwords. Comments anyone?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 01 Nov 2001 00:28:25 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: user authentication crash by Erik Luke (20-08-2001; 1.3kb) " }, { "msg_contents": "\n when doing some works with views, I faced the following problem :\n\n consider the following schema :\n\n create table A (v1 int4,v2 int4);\n create table B (v1 int4,v2 int4);\n create view C as select v1,v2 from A union all select v1,v2 from B;\n\n populate A and B with several thousands records\n\n select v1 from c where v2=1000; give the following plan :\n\nSubquery Scan c (cost=0.00..4544.12 rows=294912 width=8)\n -> Append (cost=0.00..4544.12 rows=294912 width=8)\n -> Subquery Scan *SELECT* 1 (cost=0.00..252.84 rows=16384 width=8)\n -> Seq Scan on a (cost=0.00..252.84 rows=16384 width=8)\n -> Subquery Scan *SELECT* 2 (cost=0.00..4291.28 rows=278528\nwidth=8)\n -> Seq Scan on b (cost=0.00..4291.28 rows=278528 width=8)\n\n\n select v1 from a where v2=5 union all select v1 from b where v2=1000;\ngive the following plan :\n\nAppend (cost=0.00..217.88 rows=83 width=4)\n -> Subquery Scan *SELECT* 1 (cost=0.00..2.02 rows=1 width=4)\n -> Index Scan using idx1 on a (cost=0.00..2.02 rows=1 width=4)\n -> Subquery Scan *SELECT* 2 (cost=0.00..215.86 rows=82 width=4)\n -> Index Scan using idx2 on b (cost=0.00..215.86 rows=82 width=4)\n\n Is there a way for the optimizer to move the view \"where\" clause in the\nelementary union queries in order to use an index scan instead of the Seq\nscan ?\n\n I'm using 7.1.3\n\n\n cyril\n\n", "msg_date": "Thu, 1 Nov 2001 17:40:46 +0100", "msg_from": "\"Cyril VELTER\" <[email protected]>", "msg_from_op": false, "msg_subject": "Union View Optimization" }, { "msg_contents": "Tom Lane wrote:\n >Hmm. I can see how a linefeed in a password would create a problem (it\n >breaks the line-oriented formatting of the pg_pwd file). \n...\n >In any case it seems like it'd be a good idea to forbid nonprinting\n >characters in passwords. Comments anyone?\n\nThat sounds too restrictive; allowing non-printing characters should\nimprove password security. Why not simply exclude linefeed and\ncarriage return? 
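For illustration, the check being suggested here is tiny -- this is only a
sketch of the idea, not the fix that actually went in, and the function name
is made up:

/*
 * Illustrative sketch: reject password characters that would corrupt a
 * newline/tab-delimited flat file such as pg_pwd.  Returns 1 if the
 * password is acceptable, 0 otherwise.
 */
static int
password_chars_ok(const char *password)
{
	const char *p;

	for (p = password; *p != '\0'; p++)
	{
		if (*p == '\n' || *p == '\r' || *p == '\t')
			return 0;
	}
	return 1;
}

Whether carriage return really needs to be on that list is exactly the
judgement call here.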
(And possibly ctrl-Q and ctrl-S as well, in case there\nis still anyone running a terminal with XON/XOFF flow control.)\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n\n \"But they that wait upon the LORD shall renew their \n strength; they shall mount up with wings as eagles; \n they shall run, and not be weary; and they shall walk,\n and not faint.\" Isaiah 40:31 \n\n\n", "msg_date": "Thu, 01 Nov 2001 17:34:01 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [BUGS] user authentication crash by Erik Luke " }, { "msg_contents": "\"Oliver Elphick\" <[email protected]> writes:\n> Tom Lane wrote:\n>>>> Hmm. I can see how a linefeed in a password would create a problem (it\n>>>> breaks the line-oriented formatting of the pg_pwd file). \n> ...\n>>>> In any case it seems like it'd be a good idea to forbid nonprinting\n>>>> characters in passwords. Comments anyone?\n\n> That sounds too restrictive; allowing non-printing characters should\n> improve password security. Why not simply exclude linefeed and\n> carriage return?\n\nActually it seems that linefeed and tab are the minimum set of\ncharacters that must be excluded to avoid breaking pg_pwd.\nWorking on it now ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 01 Nov 2001 12:52:28 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUGS] user authentication crash by Erik Luke (20-08-2001;\n 1.3kb) " }, { "msg_contents": "\"Cyril VELTER\" <[email protected]> writes:\n> Is there a way for the optimizer to move the view \"where\" clause in the\n> elementary union queries in order to use an index scan instead of the Seq\n> scan ?\n\nThis is on the \"to look at\" list. It's not immediately clear to me\nwhether there are any restrictions on when the system can safely make\nsuch a transformation, nor whether there are cases where it would be\na pessimization rather than an optimization.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 01 Nov 2001 13:45:05 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Union View Optimization " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > OK, that makes sense. My only question is how many platforms _don't_\n> > have syslog. If it is only NT and QNX, I think we can live with using\n> > it by default if it exists.\n> \n> There seems to be a certain amount of confusion here. The proposal at\n> hand was to make configure set up to *compile* the syslog support\n> whenever possible. Not to *use* syslog by default. Unless we change\n> the default postgresql.conf --- which I would be against --- we will\n> still log to stderr by default.\n> \n> Given that, I'm not sure that Peter's argument about losing\n> functionality is right; the analogy to readline support isn't exact.\n> Perhaps what we should do is (a) always build syslog support if\n> possible, and (b) at runtime, complain if syslog logging is requested\n> but we don't have it available.\n\nIs this idea dead for 7.2? Should I add it to the TODO list?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 5 Nov 2001 11:01:51 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: syslog by default?" }, { "msg_contents": "Are we doing beta3 soon? I thought we were going to do it yesterday,\nthough it seems like we have a few packaging problems to iron out first.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 17 Nov 2001 13:19:33 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "beta3" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Are we doing beta3 soon? I thought we were going to do it yesterday,\n> though it seems like we have a few packaging problems to iron out first.\n\nThe packaging problems need to be solved before we can build a release\ncandidate --- but I don't see why they should hold up beta3. beta2 has\nthe same problems, so beta3 wouldn't be a regression.\n\nI don't have any pending work that ought to go in beta3, but I think\nThomas is sitting on some datetime patches ... probably should wait\nfor him.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 17 Nov 2001 14:07:18 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3 " }, { "msg_contents": "Tom Lane writes:\n\n> The packaging problems need to be solved before we can build a release\n> candidate --- but I don't see why they should hold up beta3. beta2 has\n> the same problems, so beta3 wouldn't be a regression.\n\nConsidering the history of how-do-we-get-the-docs-in-the-tarball problem,\nI'd like to see at least one beta where this works before shipping a\nrelease candidate. Otherwise I'll have no confidence at all that we're\nnot going to ship the final release with last year's docs and no man\npages.\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Sun, 18 Nov 2001 18:17:41 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3 " }, { "msg_contents": "\nOkay, how do we want to address/fix/test this?\n\nOn Sun, 18 Nov 2001, Peter Eisentraut wrote:\n\n> Tom Lane writes:\n>\n> > The packaging problems need to be solved before we can build a release\n> > candidate --- but I don't see why they should hold up beta3. beta2 has\n> > the same problems, so beta3 wouldn't be a regression.\n>\n> Considering the history of how-do-we-get-the-docs-in-the-tarball problem,\n> I'd like to see at least one beta where this works before shipping a\n> release candidate. Otherwise I'll have no confidence at all that we're\n> not going to ship the final release with last year's docs and no man\n> pages.\n>\n> --\n> Peter Eisentraut [email protected]\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n", "msg_date": "Sun, 18 Nov 2001 22:42:43 -0500 (EST)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3 " }, { "msg_contents": "Marc G. Fournier writes:\n\n> Okay, how do we want to address/fix/test this?\n\nAssuming that we don't want to make sweeping changes to the overall\nprocedure, I suggest this:\n\n1. 
The postgres.tar.gz and man.tar.gz documentation tarballs required for\na release used to be at ftp://ftp.postgresql.org/pub/dev/doc/. The new\nfile system location of this directory needs to be determined.\n\n2. Some postgres.tar.gz and man.tar.gz files need to be put there.\n\n3. The currently commented out portion in the mk-release script needs\nto be altered to pick up these files.\n\n(I also suggest 'set -e' in this script so it doesn't ignore errors.)\n\n4. The HTML-doc-build cron job needs to be reinstated on postgresql.org\nand put its output file at the new location.\n\n5. When I'm done with the new manpages I'll upload them to the\nappropriate location.\n\nI think the answer to #1 is /var/spool/ftp or something. For #2, use the\npostgres.tar.gz files that Bruce currently builds, and use the old\nman.tar.gz file from 7.1.\n\nI suggest when #3 is done we can ship a beta3.\n\nFor #4, I don't know who wants to do it. If it's me then it might take a\nfew days. #5 is well under way. When #4 and #5 are done we can ship rc1\nor beta4, depending on the other issues.\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Mon, 19 Nov 2001 15:39:52 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3 " }, { "msg_contents": "On Mon, 19 Nov 2001, Peter Eisentraut wrote:\n\n> Marc G. Fournier writes:\n>\n> > Okay, how do we want to address/fix/test this?\n>\n> Assuming that we don't want to make sweeping changes to the overall\n> procedure, I suggest this:\n>\n> 1. The postgres.tar.gz and man.tar.gz documentation tarballs required for\n> a release used to be at ftp://ftp.postgresql.org/pub/dev/doc/. The new\n> file system location of this directory needs to be determined.\n\nShould simply be ~ftp/pub/dev/doc\n\n> 2. Some postgres.tar.gz and man.tar.gz files need to be put there.\n>\n> 3. The currently commented out portion in the mk-release script needs\n> to be altered to pick up these files.\n>\n> (I also suggest 'set -e' in this script so it doesn't ignore errors.)\n\nOkay, just added 'set -e', commented out, to the script ... actually, just\nuncommented it, for, even without the docs, it probably shouldn't be\nignored, eh? :)\n\n> 4. The HTML-doc-build cron job needs to be reinstated on\n> postgresql.org and put its output file at the new location.\n>\n> 5. When I'm done with the new manpages I'll upload them to the\n> appropriate location.\n>\n> I think the answer to #1 is /var/spool/ftp or something. For #2, use the\n> postgres.tar.gz files that Bruce currently builds, and use the old\n> man.tar.gz file from 7.1.\n>\n> I suggest when #3 is done we can ship a beta3.\n\nOkay, will bundle up Beta3 in the morning then ...\n\n> For #4, I don't know who wants to do it. If it's me then it might\n> take a few days. #5 is well under way. When #4 and #5 are done we\n> can ship rc1 or beta4, depending on the other issues.\n\nWould #4 be these, from thomas old cron jobs:\n\n# Run at 3:05 local time (EST5EDT?) every day\n 5 3 * * * $HOME/CURRENT/docbuild |& tee $HOME/CURRENT/docbuild.log\n# Twice a day during beta 2000-03-28\n25 11 * * * $HOME/CURRENT/docbuild |& tee $HOME/CURRENT/docbuild.log\n\n", "msg_date": "Mon, 19 Nov 2001 19:32:04 -0500 (EST)", "msg_from": "\"Marc G. 
Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3 " }, { "msg_contents": "\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Peter Eisentraut\n> Sent: Monday, 19 November 2001 10:40 PM\n> To: Marc G. Fournier\n> Cc: Tom Lane; Bruce Momjian; PostgreSQL-development; Thomas Lockhart\n> Subject: Re: [HACKERS] beta3\n>\n>\n> Marc G. Fournier writes:\n>\n> > Okay, how do we want to address/fix/test this?\n>\n> Assuming that we don't want to make sweeping changes to the overall\n> procedure, I suggest this:\n>\n> 1. The postgres.tar.gz and man.tar.gz documentation tarballs required for\n> a release used to be at ftp://ftp.postgresql.org/pub/dev/doc/. The new\n> file system location of this directory needs to be determined.\n>\n> 2. Some postgres.tar.gz and man.tar.gz files need to be put there.\n\nJust a thought: Doesn't everyone have bzip2 these days? Many large\nprojects come bz2-only even these days. Is there a good reason to stick\nwith the extra-large tar.gz format?\n\nChris\n\n", "msg_date": "Tue, 20 Nov 2001 09:42:01 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3 " }, { "msg_contents": "On Tue, 20 Nov 2001, Christopher Kings-Lynne wrote:\n\n>\n>\n> > -----Original Message-----\n> > From: [email protected]\n> > [mailto:[email protected]]On Behalf Of Peter Eisentraut\n> > Sent: Monday, 19 November 2001 10:40 PM\n> > To: Marc G. Fournier\n> > Cc: Tom Lane; Bruce Momjian; PostgreSQL-development; Thomas Lockhart\n> > Subject: Re: [HACKERS] beta3\n> >\n> >\n> > Marc G. Fournier writes:\n> >\n> > > Okay, how do we want to address/fix/test this?\n> >\n> > Assuming that we don't want to make sweeping changes to the overall\n> > procedure, I suggest this:\n> >\n> > 1. The postgres.tar.gz and man.tar.gz documentation tarballs required for\n> > a release used to be at ftp://ftp.postgresql.org/pub/dev/doc/. The new\n> > file system location of this directory needs to be determined.\n> >\n> > 2. Some postgres.tar.gz and man.tar.gz files need to be put there.\n>\n> Just a thought: Doesn't everyone have bzip2 these days? Many large\n> projects come bz2-only even these days. Is there a good reason to stick\n> with the extra-large tar.gz format?\n\nCause not everyone has bzip2 ... Solaris comes with gzip, doesn't come\nwith bzip ... not sure how many other OS, but even FreeBSD requiresyou to\ncompile bzip from ports, it isn't part of hte operating system ...\n\n\n", "msg_date": "Mon, 19 Nov 2001 20:49:20 -0500 (EST)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3 " }, { "msg_contents": "> Cause not everyone has bzip2 ... Solaris comes with gzip, doesn't come\n> with bzip ... 
not sure how many other OS, but even FreeBSD requiresyou to\n> compile bzip from ports, it isn't part of hte operating system ...\n\nFreeBSD 4.4 has it as part of the OS...\n\nFor example, the entire KDE project is bz2, and people use it on Solaris.\n\nBut anyway, fair enough - I guess it's an extra step that would inhibit the\nuse of Postgres.\n\nChris\n\n", "msg_date": "Tue, 20 Nov 2001 10:39:48 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3 " }, { "msg_contents": "On Lun 19 Nov 2001 22:49, you wrote:\n> On Tue, 20 Nov 2001, Christopher Kings-Lynne wrote:\n> > > -----Original Message-----\n> > > From: [email protected]\n> > > [mailto:[email protected]]On Behalf Of Peter\n> > > Eisentraut Sent: Monday, 19 November 2001 10:40 PM\n> > > To: Marc G. Fournier\n> > > Cc: Tom Lane; Bruce Momjian; PostgreSQL-development; Thomas Lockhart\n> > > Subject: Re: [HACKERS] beta3\n> > >\n> > > Marc G. Fournier writes:\n> > > > Okay, how do we want to address/fix/test this?\n> > >\n> > > Assuming that we don't want to make sweeping changes to the overall\n> > > procedure, I suggest this:\n> > >\n> > > 1. The postgres.tar.gz and man.tar.gz documentation tarballs required\n> > > for a release used to be at ftp://ftp.postgresql.org/pub/dev/doc/. The\n> > > new file system location of this directory needs to be determined.\n> > >\n> > > 2. Some postgres.tar.gz and man.tar.gz files need to be put there.\n> >\n> > Just a thought: Doesn't everyone have bzip2 these days? Many large\n> > projects come bz2-only even these days. Is there a good reason to stick\n> > with the extra-large tar.gz format?\n>\n> Cause not everyone has bzip2 ... Solaris comes with gzip, doesn't come\n> with bzip ... not sure how many other OS, but even FreeBSD requiresyou to\n> compile bzip from ports, it isn't part of hte operating system ...\n\nGo to www.sunfreeware.com and get bzip2 for Solaris.\nI even recompiled the new tar with support for bzip2! :-)\n\nSaludos... :-)\n\nP.D.: bzip2 is slow, but you can get a real small package with it, even \nthough PostgreSQL isn't that big, if we compare it with KDE or Mozilla.\n\n-- \nPorqué usar una base de datos relacional cualquiera,\nsi podés usar PostgreSQL?\n-----------------------------------------------------------------\nMartín Marqués | [email protected]\nProgramador, Administrador, DBA | Centro de Telematica\n Universidad Nacional\n del Litoral\n-----------------------------------------------------------------\n", "msg_date": "Tue, 20 Nov 2001 08:51:13 -0300", "msg_from": "=?iso-8859-1?q?Mart=EDn=20Marqu=E9s?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3" }, { "msg_contents": "=?iso-8859-1?q?Mart=EDn=20Marqu=E9s?= <[email protected]> writes:\n> P.D.: bzip2 is slow, but you can get a real small package with it, even \n> though PostgreSQL isn't that big, if we compare it with KDE or Mozilla.\n\nAs an experiment, I zipped my current PG source tree with both. (This\nisn't an exact test of the distribution size, because I didn't bother\nto get rid of the CVS control files, but it's pretty close.)\n\nOriginal tar file: 37089280 bytes\ngzip -9:\t\t 8183182 bytes\nbzip2:\t\t\t 6762638 bytes\n\nor slightly less than a 20% savings for bzip over gzip. That's useful,\nbut not exactly compelling. 
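As a rough, illustrative calculation of what that 1.4 MB buys on the download
side (the two compressed sizes are the ones measured above; the link speeds
are assumptions, and this little program is not from the original test):

#include <stdio.h>

int
main(void)
{
	const double gz_bytes = 8183182.0;	/* gzip -9 size from above */
	const double bz2_bytes = 6762638.0;	/* bzip2 size from above */
	/* assumed effective download rates, in bytes per second */
	const double rates[] = {3500.0, 16000.0, 125000.0};
	const char *labels[] = {"33.6k modem", "ISDN", "1 Mbit DSL"};
	int			i;

	for (i = 0; i < 3; i++)
		printf("%-12s saves about %.0f seconds of download\n",
			   labels[i], (gz_bytes - bz2_bytes) / rates[i]);
	return 0;
}

At modem speeds the smaller file is worth several minutes; at broadband speeds
it is worth ten seconds or so, and decompression time starts to matter more
than transfer time.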
A comparison of unzip runtime also seems\nrelevant:\n\n$ time gunzip pgsql.tar.gz\n\nreal 0m5.48s\nuser 0m4.46s\nsys 0m0.62s\n\n$ time bunzip2 pgsql.tar.bz2\n\nreal 0m27.77s\nuser 0m26.50s\nsys 0m0.92s\n\nIf I'd downloaded this thing over a decent DSL or cable modem line,\nbzip2 would actually be a net loss in total download + uncompress time.\n\n<editorial>\nThe reason bzip is still an also-ran is that it's not enough better\nthan gzip to have persuaded people to switch over. My bet is that\nbzip will always be an also-ran, and that gzip will remain the de\nfacto standard until something comes along that's really significantly\nbetter, like a factor of 2 better. I've watched this sort of game\nplay out before, and I know you don't take over the world with a 20%\nimprovement over the existing standard. At least not without other\ncompelling reasons, like speed (oops) or patent freedom (no win there\neither).\n</editorial>\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 20 Nov 2001 10:16:57 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3 " }, { "msg_contents": "On Mar 20 Nov 2001 12:16, Tom Lane wrote:\n>\n> As an experiment, I zipped my current PG source tree with both. (This\n> isn't an exact test of the distribution size, because I didn't bother\n> to get rid of the CVS control files, but it's pretty close.)\n>\n> Original tar file: 37089280 bytes\n> gzip -9:\t\t 8183182 bytes\n> bzip2:\t\t\t 6762638 bytes\n>\n> or slightly less than a 20% savings for bzip over gzip. That's useful,\n> but not exactly compelling. A comparison of unzip runtime also seems\n> relevant:\n>\n> $ time gunzip pgsql.tar.gz\n>\n> real 0m5.48s\n> user 0m4.46s\n> sys 0m0.62s\n>\n> $ time bunzip2 pgsql.tar.bz2\n>\n> real 0m27.77s\n> user 0m26.50s\n> sys 0m0.92s\n>\n> If I'd downloaded this thing over a decent DSL or cable modem line,\n> bzip2 would actually be a net loss in total download + uncompress time.\n\nThat would be if I have a decent DSL or cable modem. We have a dedicated line \nof 756 kb, but we also have 3000 users. Maybe postgreSQL isn't that big, but \nMozilla or KDE are, and waiting for la large download isn't what I recommend \nfor an internet experience. :-)\n\n> <editorial>\n> The reason bzip is still an also-ran is that it's not enough better\n> than gzip to have persuaded people to switch over. My bet is that\n> bzip will always be an also-ran, and that gzip will remain the de\n> facto standard until something comes along that's really significantly\n> better, like a factor of 2 better. I've watched this sort of game\n> play out before, and I know you don't take over the world with a 20%\n> improvement over the existing standard. At least not without other\n> compelling reasons, like speed (oops) or patent freedom (no win there\n> either).\n> </editorial>\n\nI think it comes to what CPU I'm using. If I'm on an old Pentium, I may not \nbe happy seeing unbzip2 grabbing all my CPU, but if I'm on a last generation \nPIV of, lets say 1000Mhz, I may not feel it.\n\nSaludos... 
:-)\n\n-- \nPorqué usar una base de datos relacional cualquiera,\nsi podés usar PostgreSQL?\n-----------------------------------------------------------------\nMartín Marqués | [email protected]\nProgramador, Administrador, DBA | Centro de Telematica\n Universidad Nacional\n del Litoral\n-----------------------------------------------------------------\n", "msg_date": "Tue, 20 Nov 2001 19:51:14 -0300", "msg_from": "=?iso-8859-1?q?Mart=EDn=20Marqu=E9s?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3" }, { "msg_contents": "\nOkay, tried today what you suggested, about set -e and all that ... take a\nlook at ~pgsql/bin/mk-beta, to see if maybe I missed something ... but,\ntrying to do the packaging, I'm getting an error about 'build.xml'?\n\n/usr/bin/tar cf postgresql-opt-7.2b3.tar postgresql-7.2b3/src/backend/utils/mb postgresql-7.2b3/contrib/retep postgresql-7.2b3/build.xml postgresql-7.2b3/src/tools postgresql-7.2b3/src/corba postgresql-7.2b3/src/data postgresql-7.2b3/src/tutorial postgresql-7.2b3/src/bin/pgaccess postgresql-7.2b3/src/bin/pgtclsh postgresql-7.2b3/src/bin/pg_encoding postgresql-7.2b3/src/interfaces/odbc postgresql-7.2b3/src/interfaces/libpq++ postgresql-7.2b3/src/interfaces/libpgtcl postgresql-7.2b3/src/interfaces/perl5 postgresql-7.2b3/src/interfaces/python postgresql-7.2b3/src/interfaces/jdbc postgresql-7.2b3/src/pl/plperl postgresql-7.2b3/src/pl/tcl\n/usr/bin/tar: can't add file postgresql-7.2b3/build.xml : No such file or directory\ngmake: *** [postgresql-opt-7.2b3.tar] Error 1\ngmake: *** Deleting file `postgresql-opt-7.2b3.tar'\n%\n\nLooking in the wrong directory?\n\n%find . -name build.xml -print\n./contrib/retep/build.xml\n./src/interfaces/jdbc/build.xml\n%\n\n\nOn Mon, 19 Nov 2001, Peter Eisentraut wrote:\n\n> Marc G. Fournier writes:\n>\n> > Okay, how do we want to address/fix/test this?\n>\n> Assuming that we don't want to make sweeping changes to the overall\n> procedure, I suggest this:\n>\n> 1. The postgres.tar.gz and man.tar.gz documentation tarballs required for\n> a release used to be at ftp://ftp.postgresql.org/pub/dev/doc/. The new\n> file system location of this directory needs to be determined.\n>\n> 2. Some postgres.tar.gz and man.tar.gz files need to be put there.\n>\n> 3. The currently commented out portion in the mk-release script needs\n> to be altered to pick up these files.\n>\n> (I also suggest 'set -e' in this script so it doesn't ignore errors.)\n>\n> 4. The HTML-doc-build cron job needs to be reinstated on postgresql.org\n> and put its output file at the new location.\n>\n> 5. When I'm done with the new manpages I'll upload them to the\n> appropriate location.\n>\n> I think the answer to #1 is /var/spool/ftp or something. For #2, use the\n> postgres.tar.gz files that Bruce currently builds, and use the old\n> man.tar.gz file from 7.1.\n>\n> I suggest when #3 is done we can ship a beta3.\n>\n> For #4, I don't know who wants to do it. If it's me then it might take a\n> few days. #5 is well under way. When #4 and #5 are done we can ship rc1\n> or beta4, depending on the other issues.\n>\n> --\n> Peter Eisentraut [email protected]\n>\n>\n\n\n", "msg_date": "Tue, 20 Nov 2001 23:09:11 -0500 (EST)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3 " }, { "msg_contents": "> \n> Okay, tried today what you suggested, about set -e and all that ... take a\n> look at ~pgsql/bin/mk-beta, to see if maybe I missed something ... 
but,\n> trying to do the packaging, I'm getting an error about 'build.xml'?\n> \n> /usr/bin/tar cf postgresql-opt-7.2b3.tar postgresql-7.2b3/src/backend/utils/mb postgresql-7.2b3/contrib/retep postgresql-7.2b3/build.xml postgresql-7.2b3/src/tools postgresql-7.2b3/src/corba postgresql-7.2b3/src/data postgresql-7.2b3/src/tutorial postgresql-7.2b3/src/bin/pgaccess postgresql-7.2b3/src/bin/pgtclsh postgresql-7.2b3/src/bin/pg_encoding postgresql-7.2b3/src/interfaces/odbc postgresql-7.2b3/src/interfaces/libpq++ postgresql-7.2b3/src/interfaces/libpgtcl postgresql-7.2b3/src/interfaces/perl5 postgresql-7.2b3/src/interfaces/python postgresql-7.2b3/src/interfaces/jdbc postgresql-7.2b3/src/pl/plperl postgresql-7.2b3/src/pl/tcl\n> /usr/bin/tar: can't add file postgresql-7.2b3/build.xml : No such file or directory\n> gmake: *** [postgresql-opt-7.2b3.tar] Error 1\n> gmake: *** Deleting file `postgresql-opt-7.2b3.tar'\n> %\n> \n> Looking in the wrong directory?\n> \n> %find . -name build.xml -print\n> ./contrib/retep/build.xml\n> ./src/interfaces/jdbc/build.xml\n> %\n\nOK, patch applied. There was a space where there should have been a\nslash.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: GNUmakefile.in\n===================================================================\nRCS file: /cvsroot/pgsql/GNUmakefile.in,v\nretrieving revision 1.19\ndiff -c -r1.19 GNUmakefile.in\n*** GNUmakefile.in\t2001/09/17 23:00:27\t1.19\n--- GNUmakefile.in\t2001/11/21 05:40:51\n***************\n*** 65,71 ****\n $(distdir).tar: distdir\n \t$(TAR) chf $@ $(distdir)\n \n! opt_files := src/backend/utils/mb contrib/retep build.xml \\\n \tsrc/tools src/corba src/data src/tutorial \\\n \t$(addprefix src/bin/, pgaccess pgtclsh pg_encoding) \\\n \t$(addprefix src/interfaces/, odbc libpq++ libpgtcl perl5 python jdbc) \\\n--- 65,71 ----\n $(distdir).tar: distdir\n \t$(TAR) chf $@ $(distdir)\n \n! opt_files := src/backend/utils/mb contrib/retep/build.xml \\\n \tsrc/tools src/corba src/data src/tutorial \\\n \t$(addprefix src/bin/, pgaccess pgtclsh pg_encoding) \\\n \t$(addprefix src/interfaces/, odbc libpq++ libpgtcl perl5 python jdbc) \\", "msg_date": "Wed, 21 Nov 2001 00:41:49 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3" }, { "msg_contents": "> > \n> > Okay, tried today what you suggested, about set -e and all that ... take a\n> > look at ~pgsql/bin/mk-beta, to see if maybe I missed something ... 
but,\n> > trying to do the packaging, I'm getting an error about 'build.xml'?\n> > \n> > /usr/bin/tar cf postgresql-opt-7.2b3.tar postgresql-7.2b3/src/backend/utils/mb postgresql-7.2b3/contrib/retep postgresql-7.2b3/build.xml postgresql-7.2b3/src/tools postgresql-7.2b3/src/corba postgresql-7.2b3/src/data postgresql-7.2b3/src/tutorial postgresql-7.2b3/src/bin/pgaccess postgresql-7.2b3/src/bin/pgtclsh postgresql-7.2b3/src/bin/pg_encoding postgresql-7.2b3/src/interfaces/odbc postgresql-7.2b3/src/interfaces/libpq++ postgresql-7.2b3/src/interfaces/libpgtcl postgresql-7.2b3/src/interfaces/perl5 postgresql-7.2b3/src/interfaces/python postgresql-7.2b3/src/interfaces/jdbc postgresql-7.2b3/src/pl/plperl postgresql-7.2b3/src/pl/tcl\n> > /usr/bin/tar: can't add file postgresql-7.2b3/build.xml : No such file or directory\n> > gmake: *** [postgresql-opt-7.2b3.tar] Error 1\n> > gmake: *** Deleting file `postgresql-opt-7.2b3.tar'\n> > %\n> > \n> > Looking in the wrong directory?\n> > \n> > %find . -name build.xml -print\n> > ./contrib/retep/build.xml\n> > ./src/interfaces/jdbc/build.xml\n> > %\n> \n> OK, patch applied. There was a space where there should have been a\n> slash.\n\nOK, I found another problem:\n\n gzip -f --best postgresql-base-7.2b3.tar\n /usr/bin/tar cf postgresql-docs-7.2b3.tar\n postgresql-7.2b3/doc/postgres.tar.gz postgresql-7.2b3/doc/src\n\n ^^^^^^^^^^^^^^^^^^^\n postgresql-7.2b3/doc/TODO.detail postgresql-7.2b3/doc/internals.ps\n /usr/bin/tar: can't add file postgresql-7.2b3/doc/postgres.tar.gz : No\n\n ^^^^^^^^^^^^^^^^^^^\n\n such file or directory\n\nThe problem is this line in pgsql/GNUmakefile.in:\n\n docs_files := doc/postgres.tar.gz doc/src doc/TODO.detail doc/internals.ps\n\nWhy is doc/postgres.tar.gz used. It normally is in doc/src, and why\nonly that doc tarball.\n\nAnd again, there is that intenrals.ps file that doesn't really belong\nthere, I think.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 21 Nov 2001 01:21:52 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3" }, { "msg_contents": "On Tue, Nov 20, 2001 at 10:16:57AM -0500, Tom Lane wrote:\n> Original tar file: 37089280 bytes\n> gzip -9:\t\t 8183182 bytes\n> bzip2:\t\t\t 6762638 bytes\n\nmaybe 1.420.544 it's not a big difference for you, but here in italy\nthere's still a lot of people using 33.6/55Kb telephone line modem.\nso, this difference means, downloading at 3kb/s, seven minutes...\nalso:\n\na) cpu utilization has no cost... 
band cost a lot;\nb) if i have to download 5 files, and i can avoid 1 MB of download for each\n one...\n\nthanks for your time,\nandrea gelmini\n", "msg_date": "Wed, 21 Nov 2001 12:02:11 +0100", "msg_from": "andrea gelmini <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3" }, { "msg_contents": "> OK, I found another problem:\n> \n> gzip -f --best postgresql-base-7.2b3.tar\n> /usr/bin/tar cf postgresql-docs-7.2b3.tar\n> postgresql-7.2b3/doc/postgres.tar.gz postgresql-7.2b3/doc/src\n> \n> ^^^^^^^^^^^^^^^^^^^\n> postgresql-7.2b3/doc/TODO.detail postgresql-7.2b3/doc/internals.ps\n> /usr/bin/tar: can't add file postgresql-7.2b3/doc/postgres.tar.gz : No\n> \n> ^^^^^^^^^^^^^^^^^^^\n> \n> such file or directory\n> \n> The problem is this line in pgsql/GNUmakefile.in:\n> \n> docs_files := doc/postgres.tar.gz doc/src doc/TODO.detail doc/internals.ps\n> \n> Why is doc/postgres.tar.gz used. It normally is in doc/src, and why\n> only that doc tarball.\n\nOK, I updated this to point to doc/src. Marc, I have a\ndoc/postgres.tar.gz here that you can use to make the tarball. It is\nnow at ftp://candle.pha.pa.us/pub/postgresql.\n\n> And again, there is that intenrals.ps file that doesn't really belong\n> there, I think.\n\nTom agrees so I will move it to the web site. Shame we don't have the\nsource, which was originally TeX.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 21 Nov 2001 10:30:54 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3" }, { "msg_contents": "On Wed, Nov 21, 2001 at 10:30:54AM -0500, Bruce Momjian wrote:\n> \n> > And again, there is that intenrals.ps file that doesn't really belong\n> > there, I think.\n> \n> Tom agrees so I will move it to the web site. Shame we don't have the\n> source, which was originally TeX.\n> \n\nDigging in the archives, I find that the author of this doc offered\nthe source:\n\n> Date: Wed, 13 Jan 1999 06:04:15 +0000\n> From: Clark Evans <[email protected]>\n> Subject: Re: [HACKERS] Re: EXCEPT/INTERSECT for v6.4\n> \n> Thomas,\n> \n> The documentation Stefan wrote for his Master's thesis\n> is great stuff. Perhaps some of could be incorporated?\n> \n> Stefan Simkovics wrote:\n> > > I also included the text of my Master's Thesis. (a postscript\n> > > version). I hope that you find something of it useful and would be\n> > > happy if parts of it find their way into the PostgreSQL documentation\n> > > project (If so, tell me, then I send the sources of the document!)\n\n\nIf we can track down his current email address, perhaps he still has it?\nI'm copying this to his @ag.or.at address, since the mailer there still\nrecognizes it. Let's see if Stefan's around. ;-)\n\nRoss (musing if he could find the source docs to his Ph.D. thesis ... and\nif so, if they'd be readable)\n\n-- \nRoss Reedstrom, Ph.D. [email protected]\nExecutive Director phone: 713-348-6166\nGulf Coast Consortium for Bioinformatics fax: 713-348-6182\nRice University MS-39\nHouston, TX 77005\n", "msg_date": "Wed, 21 Nov 2001 10:14:39 -0600", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3" }, { "msg_contents": "> > Stefan Simkovics wrote:\n> > > > I also included the text of my Master's Thesis. (a postscript\n> > > > version). 
I hope that you find something of it useful and would be\n> > > > happy if parts of it find their way into the PostgreSQL documentation\n> > > > project (If so, tell me, then I send the sources of the document!)\n> \n> \n> If we can track down his current email address, perhaps he still has it?\n> I'm copying this to his @ag.or.at address, since the mailer there still\n> recognizes it. Let's see if Stefan's around. ;-)\n> \n> Ross (musing if he could find the source docs to his Ph.D. thesis ... and\n> if so, if they'd be readable)\n\nThe thesis was in LaTeX. I think Thomas may have copy, but I am not sure.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 21 Nov 2001 11:47:40 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3" }, { "msg_contents": "Hi!\n\nAs you can see, the address is still valid :-)\nCould you please let me know exactly , who is looking for what?\n\nThen I can send you a LaTeX/PDF/PS Version of my thesis.\n\nbest regards\n\n Stefan Simkovics\n\n> > > Stefan Simkovics wrote:\n> > > > > I also included the text of my Master's Thesis. (a postscript\n> > > > > version). I hope that you find something of it useful and would be\n> > > > > happy if parts of it find their way into the PostgreSQL documentation\n> > > > > project (If so, tell me, then I send the sources of the document!)\n> > \n> > \n> > If we can track down his current email address, perhaps he still has it?\n> > I'm copying this to his @ag.or.at address, since the mailer there still\n> > recognizes it. Let's see if Stefan's around. ;-)\n> > \n> > Ross (musing if he could find the source docs to his Ph.D. thesis ... and\n> > if so, if they'd be readable)\n> \n> The thesis was in LaTeX. I think Thomas may have copy, but I am not sure.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\n-- \n=====================================================================\nDI Stefan Simkovics\nKPNQwest Austria GmbH\nA-1150 Wien, Diefenbachgasse 35\nTel.: +43 (1) 899 33 - 162, Fax: +43 (1) 899 33 - 10 162\nMob.: +43 (0) 664 818 91 41, e-Mail: [email protected]\nhttp://www.kpnqwest.at, http://www.kpnqwest.com\n\n\n", "msg_date": "Wed, 21 Nov 2001 19:13:05 +0100", "msg_from": "Stefan Simkovics <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3 " }, { "msg_contents": "> Hi!\n> \n> As you can see, the address is still valid :-)\n> Could you please let me know exactly , who is looking for what?\n> \n> Then I can send you a LaTeX/PDF/PS Version of my thesis.\n\nI think we wanted the Latex source.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 21 Nov 2001 13:14:56 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3" }, { "msg_contents": "Hi!\n\nCould you please let me know the reason you're looking for it?\n(I'm just curious :-))\n\nbest regards\n\n Stefan\n\n\n> > Hi!\n> > \n> > As you can see, the address is still valid :-)\n> > Could you please let me know exactly , who is looking for what?\n> > \n> > Then I can send you a LaTeX/PDF/PS Version of my thesis.\n> \n> I think we wanted the Latex source.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\n-- \n=====================================================================\nDI Stefan Simkovics\nKPNQwest Austria GmbH\nA-1150 Wien, Diefenbachgasse 35\nTel.: +43 (1) 899 33 - 162, Fax: +43 (1) 899 33 - 10 162\nMob.: +43 (0) 664 818 91 41, e-Mail: [email protected]\nhttp://www.kpnqwest.at, http://www.kpnqwest.com\n\n\n", "msg_date": "Wed, 21 Nov 2001 19:18:35 +0100", "msg_from": "Stefan Simkovics <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3 " }, { "msg_contents": "> Hi!\n> \n> Could you please let me know the reason you're looking for it?\n> (I'm just curious :-))\n\nThat's an excellent question. We moved the Postscript file out of the\nmain CVS into the web page CVS because it wasn't really source code. \n\nOf course, if we had the LaTeX, it would be documentation source code. \nHowever, we have no intention of modifying it, so am not sure what value\nthe LaTeX would be to us.\n\nGuys, why did we want the source, exactly? I think just putting the\nPostscript on the web site is enough.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 21 Nov 2001 13:21:36 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Guys, why did we want the source, exactly?\n\nI just said that source for the document would be appropriate to keep\nin CVS whereas the PS output was not. I'm not sure that we really want\nthe document, though, since we're not using it as actual documentation\n(ie, it's not being updated to reflect changes in the system).\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Nov 2001 13:36:15 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3 " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Guys, why did we want the source, exactly?\n> \n> I just said that source for the document would be appropriate to keep\n> in CVS whereas the PS output was not. I'm not sure that we really want\n> the document, though, since we're not using it as actual documentation\n> (ie, it's not being updated to reflect changes in the system).\n\nYea, that was my reading too. I will convert the PS to PDF and add it\nto the web site. I think that's the way to go.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 21 Nov 2001 13:47:25 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3" }, { "msg_contents": "On Wed, Nov 21, 2001 at 01:36:15PM -0500, Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > Guys, why did we want the source, exactly?\n> \n> I just said that source for the document would be appropriate to keep\n> in CVS whereas the PS output was not. I'm not sure that we really want\n> the document, though, since we're not using it as actual documentation\n> (ie, it's not being updated to reflect changes in the system).\n\nPerhaps because it _can't_ be updated, without the source? A number of\npeople have commented on how it's a good write up of internals of query\nprocessing. Unfortunately, it's circa v. 6.3, and starting to show it's\nage. If the source were available, pulling out sections and updating them\nis at least a possibility. Otherwise, it's just a historical document:\ninteresting, but fading.\n\nStefan, could you see your way to putting it under an Open Content\nlicense of some sort, so it could be used in this way?\n\nRoss\n\n", "msg_date": "Wed, 21 Nov 2001 13:29:10 -0600", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3" }, { "msg_contents": "OK, here is where I think we are on beta3 packaging.\n\nLast night, I fixed the error Marc was seeing in the build. I have also\nput my HTML version of the docs at:\n\n\tftp://candle.pha.pa.us/pub/postgresql/postgres-docs.tar.gz.\n\nIf that is copied to doc/src and mk-beta is run, it should work fine.\n\nAlso, can the beta directory be removed from the snapshot tar file?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 21 Nov 2001 15:22:31 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3" }, { "msg_contents": "On Wed, 21 Nov 2001, Bruce Momjian wrote:\n\n> OK, here is where I think we are on beta3 packaging.\n>\n> Last night, I fixed the error Marc was seeing in the build. I have also\n> put my HTML version of the docs at:\n>\n> \tftp://candle.pha.pa.us/pub/postgresql/postgres-docs.tar.gz.\n>\n> If that is copied to doc/src and mk-beta is run, it should work fine.\n\nOkay, we have a ~ftp/pub/doc/current, but no src ... what is the\ndifference between current and src?\n\nAlso, its reporting that internals.ps is missing ...\n\n", "msg_date": "Wed, 21 Nov 2001 16:14:01 -0500 (EST)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3" }, { "msg_contents": "> On Wed, 21 Nov 2001, Bruce Momjian wrote:\n> \n> > OK, here is where I think we are on beta3 packaging.\n> >\n> > Last night, I fixed the error Marc was seeing in the build. I have also\n> > put my HTML version of the docs at:\n> >\n> > \tftp://candle.pha.pa.us/pub/postgresql/postgres-docs.tar.gz.\n> >\n> > If that is copied to doc/src and mk-beta is run, it should work fine.\n> \n> Okay, we have a ~ftp/pub/doc/current, but no src ... what is the\n> difference between current and src?\n\nI meant copy into pgsql/doc/src or wherever directory tree you are using\nto make the beta. Does that make sense?\n\n> Also, its reporting that internals.ps is missing ...\n\nOK, sorry, fixed. 
Please rerun configure to get the new code.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 21 Nov 2001 16:18:20 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3" }, { "msg_contents": "\"Marc G. Fournier\" <[email protected]> writes:\n> Also, its reporting that internals.ps is missing ...\n\nLooks like Bruce forgot to remove the reference in the toplevel\nGNUmakefile.in.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Nov 2001 16:19:37 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3 " }, { "msg_contents": "> \"Marc G. Fournier\" <[email protected]> writes:\n> > Also, its reporting that internals.ps is missing ...\n> \n> Looks like Bruce forgot to remove the reference in the toplevel\n> GNUmakefile.in.\n\nYea, I removed it only from GNUmakefile the first time. Got it now.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 21 Nov 2001 16:25:47 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3" }, { "msg_contents": "> \n> Okay, how do we want to address/fix/test this?\n> \n> On Sun, 18 Nov 2001, Peter Eisentraut wrote:\n> \n> > Tom Lane writes:\n> >\n> > > The packaging problems need to be solved before we can build a release\n> > > candidate --- but I don't see why they should hold up beta3. beta2 has\n> > > the same problems, so beta3 wouldn't be a regression.\n> >\n> > Considering the history of how-do-we-get-the-docs-in-the-tarball problem,\n> > I'd like to see at least one beta where this works before shipping a\n> > release candidate. Otherwise I'll have no confidence at all that we're\n> > not going to ship the final release with last year's docs and no man\n> > pages.\n\nThe best we can do at this point is to package my HTML files for beta3\nand see if we can test this for RC1. At least the tar.gz file itself\nwill be added by the script, just not built on the same machine.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 21 Nov 2001 16:39:48 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3" }, { "msg_contents": "Bruce Momjian writes:\n\n> > The problem is this line in pgsql/GNUmakefile.in:\n> >\n> > docs_files := doc/postgres.tar.gz doc/src doc/TODO.detail doc/internals.ps\n> >\n> > Why is doc/postgres.tar.gz used. 
It normally is in doc/src, and why\n> > only that doc tarball.\n>\n> OK, I updated this to point to doc/src.\n\nNooooooooooooooooooooooooooooo!\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Wed, 21 Nov 2001 23:14:20 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3" }, { "msg_contents": "Bruce Momjian writes:\n\n> OK, I found another problem:\n>\n> gzip -f --best postgresql-base-7.2b3.tar\n> /usr/bin/tar cf postgresql-docs-7.2b3.tar\n> postgresql-7.2b3/doc/postgres.tar.gz postgresql-7.2b3/doc/src\n>\n> ^^^^^^^^^^^^^^^^^^^\n> postgresql-7.2b3/doc/TODO.detail postgresql-7.2b3/doc/internals.ps\n> /usr/bin/tar: can't add file postgresql-7.2b3/doc/postgres.tar.gz : No\n>\n> ^^^^^^^^^^^^^^^^^^^\n>\n> such file or directory\n\nWell, I sent you the list with items 1 through 5 and if you process them\nin a different order then all bets are off.\n\n>\n> The problem is this line in pgsql/GNUmakefile.in:\n>\n> docs_files := doc/postgres.tar.gz doc/src doc/TODO.detail doc/internals.ps\n>\n> Why is doc/postgres.tar.gz used. It normally is in doc/src, and why\n> only that doc tarball.\n\nNo, this is correct.\n\n>\n> And again, there is that intenrals.ps file that doesn't really belong\n> there, I think.\n>\n>\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Wed, 21 Nov 2001 23:14:35 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n>> OK, I updated this to point to doc/src.\n\n> Nooooooooooooooooooooooooooooo!\n\nIf it's wrong, please fix it.\n\n*Somebody's* got to take responsibility for making the doc build process\nwork again on postgresql.org. I don't have the foggiest idea what's wrong\nor where to look. You and Marc (and Thomas, except he's gone for awhile)\nare the only candidate fixers in sight.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Nov 2001 17:22:28 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3 " }, { "msg_contents": "> Bruce Momjian writes:\n> \n> > > The problem is this line in pgsql/GNUmakefile.in:\n> > >\n> > > docs_files := doc/postgres.tar.gz doc/src doc/TODO.detail doc/internals.ps\n> > >\n> > > Why is doc/postgres.tar.gz used. It normally is in doc/src, and why\n> > > only that doc tarball.\n> >\n> > OK, I updated this to point to doc/src.\n> \n> Nooooooooooooooooooooooooooooo!\n\nOK, I put it back. I don't know how this is supposed to work. Marc, if\nyou put the postgresql.tar.gz in /doc instead of doc/src as I said\nbefore, it should work.\n\nAgain, I am just trying to jump start this thing so we can get on to beta3.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 21 Nov 2001 18:20:26 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3" }, { "msg_contents": "On Wed, 21 Nov 2001, Bruce Momjian wrote:\n\n> >\n> > Okay, how do we want to address/fix/test this?\n> >\n> > On Sun, 18 Nov 2001, Peter Eisentraut wrote:\n> >\n> > > Tom Lane writes:\n> > >\n> > > > The packaging problems need to be solved before we can build a release\n> > > > candidate --- but I don't see why they should hold up beta3. 
beta2 has\n> > > > the same problems, so beta3 wouldn't be a regression.\n> > >\n> > > Considering the history of how-do-we-get-the-docs-in-the-tarball problem,\n> > > I'd like to see at least one beta where this works before shipping a\n> > > release candidate. Otherwise I'll have no confidence at all that we're\n> > > not going to ship the final release with last year's docs and no man\n> > > pages.\n>\n> The best we can do at this point is to package my HTML files for beta3\n> and see if we can test this for RC1. At least the tar.gz file itself\n> will be added by the script, just not built on the same machine.\n\nIs there a reason why it can't be built on the same machine, by the script\nthat builds the package itself? Is there something remaining missing on\nthe server that prevents this from happening?\n\n", "msg_date": "Wed, 21 Nov 2001 22:18:20 -0500 (EST)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3" }, { "msg_contents": "On Wed, 21 Nov 2001, Tom Lane wrote:\n\n> Peter Eisentraut <[email protected]> writes:\n> >> OK, I updated this to point to doc/src.\n>\n> > Nooooooooooooooooooooooooooooo!\n>\n> If it's wrong, please fix it.\n>\n> *Somebody's* got to take responsibility for making the doc build process\n> work again on postgresql.org. I don't have the foggiest idea what's wrong\n> or where to look. You and Marc (and Thomas, except he's gone for awhile)\n> are the only candidate fixers in sight.\n\nand, right now, I don't know what is \"broken\" that needs to be fixed ...\nPeter, my fingers are at y our disposal, what haven't I installed yet? :(\n\n", "msg_date": "Wed, 21 Nov 2001 22:19:42 -0500 (EST)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3 " }, { "msg_contents": "> > The best we can do at this point is to package my HTML files for beta3\n> > and see if we can test this for RC1. At least the tar.gz file itself\n> > will be added by the script, just not built on the same machine.\n> \n> Is there a reason why it can't be built on the same machine, by the script\n> that builds the package itself? Is there something remaining missing on\n> the server that prevents this from happening?\n\nUh, I don't know. It needs the SGML tools stuff. Is that working?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 21 Nov 2001 22:34:38 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3" }, { "msg_contents": "\nPeter and I are discussing this off list and will let y'all know once we\nget it working again ... once we do, beta3 will come out ...\n\nOn Wed, 21 Nov 2001, Bruce Momjian wrote:\n\n> > > The best we can do at this point is to package my HTML files for beta3\n> > > and see if we can test this for RC1. At least the tar.gz file itself\n> > > will be added by the script, just not built on the same machine.\n> >\n> > Is there a reason why it can't be built on the same machine, by the script\n> > that builds the package itself? Is there something remaining missing on\n> > the server that prevents this from happening?\n>\n> Uh, I don't know. It needs the SGML tools stuff. 
Is that working?\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n\n", "msg_date": "Wed, 21 Nov 2001 22:49:30 -0500 (EST)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3" }, { "msg_contents": "\nIs he awake? It is Thu Nov 22 03:50:21 GMT 2001.\n\n---------------------------------------------------------------------------\n\n> \n> Peter and I are discussing this off list and will let y'all know once we\n> get it working again ... once we do, beta3 will come out ...\n> \n> On Wed, 21 Nov 2001, Bruce Momjian wrote:\n> \n> > > > The best we can do at this point is to package my HTML files for beta3\n> > > > and see if we can test this for RC1. At least the tar.gz file itself\n> > > > will be added by the script, just not built on the same machine.\n> > >\n> > > Is there a reason why it can't be built on the same machine, by the script\n> > > that builds the package itself? Is there something remaining missing on\n> > > the server that prevents this from happening?\n> >\n> > Uh, I don't know. It needs the SGML tools stuff. Is that working?\n> >\n> > --\n> > Bruce Momjian | http://candle.pha.pa.us\n> > [email protected] | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> >\n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 21 Nov 2001 22:50:44 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3" }, { "msg_contents": "Tom Lane wrote:\n> <editorial>\n> The reason bzip is still an also-ran is that it's not enough better\n> than gzip to have persuaded people to switch over. My bet is that\n> bzip will always be an also-ran, and that gzip will remain the de\n> facto standard until something comes along that's really significantly\n> better, like a factor of 2 better. I've watched this sort of game\n> play out before, and I know you don't take over the world with a 20%\n> improvement over the existing standard. At least not without other\n> compelling reasons, like speed (oops) or patent freedom (no win there\n> either).\n> </editorial>\n\nWhile agree in principle with your view on bzip2, I think there is a strong\nreason why you should use it, 20%\n\nThat 20% is quite valuable. Just by switching to bzip2, the hosting companies\ncan deliver 20% more downloads with the same equipment and bandwidth cost. The\npeople with slow connections can get it 20% faster.\n\nWill bzip2 become the standard? Probably not in general use, but for\ndownloadable tarballs it is rapidly becoming the standard. Those who pay for\nbandwidth (server or client) welcome any improvement possible. 
\n\nI would switch the argument around, time how long it takes to do:\n\nncftpget postgresql-xxxx.tar.gz\ntar xpzvf postgresql-xxxx.tar.gz\ncd postgresql-xxxx\n./configure --option\nmake\nmake install\n\nvs\n\nncftpget postgresql-xxxx.tar.bz2\nbunzip2 -c postgresql-xxxx.tar.bz2 | tar xpzv\ncd postgresql-xxxx\n./configure --option\nmake\nmake install\n\n\nThe total time involved is almost identical, plus or minus a few seconds, but\non slow connections: users get postgresql faster and have to tie up a phone\nline for a shorter amount of time.\n\nThe hosting company can serve 20% less bandwidth.\nThe archive takes up 20% less space on the mirrors.\nThe archive takes up 20% less space on server backups.\n\nI don't think there is a compelling reason to keep a gzipped version.\n", "msg_date": "Fri, 23 Nov 2001 11:10:33 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3" }, { "msg_contents": "On Vie 23 Nov 2001 13:10, mlw wrote:\n>\n> While agree in principle with your view on bzip2, I think there is a strong\n> reason why you should use it, 20%\n>\n> That 20% is quite valuable. Just by switching to bzip2, the hosting\n> companies can deliver 20% more downloads with the same equipment and\n> bandwidth cost. The people with slow connections can get it 20% faster.\n>\n> Will bzip2 become the standard? Probably not in general use, but for\n> downloadable tarballs it is rapidly becoming the standard. Those who pay\n> for bandwidth (server or client) welcome any improvement possible.\n>\n> I would switch the argument around, time how long it takes to do:\n>\n> ncftpget postgresql-xxxx.tar.gz\n> tar xpzvf postgresql-xxxx.tar.gz\n> cd postgresql-xxxx\n> ./configure --option\n> make\n> make install\n>\n> vs\n>\n> ncftpget postgresql-xxxx.tar.bz2\n> bunzip2 -c postgresql-xxxx.tar.bz2 | tar xpzv\n\nNew versions of tar would do this:\n\ntar xpjvf postgresql-xxxx.tar.bz2\n\nIt comes with bzip2 support.\n\nSaludos... :-)\n\n-- \nPorqué usar una base de datos relacional cualquiera,\nsi podés usar PostgreSQL?\n-----------------------------------------------------------------\nMartín Marqués | [email protected]\nProgramador, Administrador, DBA | Centro de Telematica\n Universidad Nacional\n del Litoral\n-----------------------------------------------------------------\n", "msg_date": "Fri, 23 Nov 2001 16:06:54 -0300", "msg_from": "=?iso-8859-1?q?Mart=EDn=20Marqu=E9s?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3" }, { "msg_contents": "...\n> Perhaps because it _can't_ be updated, without the source? A number of\n> people have commented on how it's a good write up of internals of query\n> processing. Unfortunately, it's circa v. 6.3, and starting to show it's\n> age. If the source were available, pulling out sections and updating them\n> is at least a possibility. Otherwise, it's just a historical document:\n> interesting, but fading.\n> Stefan, could you see your way to putting it under an Open Content\n> license of some sort, so it could be used in this way?\n\nStefan already authorized us to use it anyway that we see fit, but we\n*do* need to preserve the attribution for himself and his academic\nadvisor and institution.\n\nSome/much of the source is commented out in cvs source code (latex\nmarkup and all) in the repository afaicr (don't have my laptop fired up\nat the moment) because I was worried about losing it otherwise. 
Since\nthen, I've of course lost the original sources afaict.\n\n - Thomas\n", "msg_date": "Mon, 26 Nov 2001 17:29:08 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta3" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\nI'm trying to write C functions to handle some of the number crunching\nthat I have been doing via backend processing. Specifically, I want\nto be able to construct a function such that a query like:\n\n\tselect crunch_number(foo) from bar where [some condition];\n\n...where `foo' is the name of some column and `bar' is some table\nname. The function needs to create some ornate data structures (i.e.,\ndoubly linked lists), and outputs some summary statistic.\n\nIf my data types were simpler, I could simply use an AGGREGATE function.\nUnfortunately, I don't know of any way to schlep something as complex\nas a doubly-linked list of arrays of arbitrary precision numbers.\n\nI suppose ideally I'd like some way of either:\n\n\t-Being able to call a function on each row (like most user-defined\n\t functions) which only returns a result on the last row; or\n\t-Being able to pass the table name, column name, and selection\n\t conditions to the function, and walk through the matching rows\n\t inside the function, returning a single result upon completion\n\nIn terms of logical structure, this looks similar to functions to do\nthings like compute means or standard deviations. The complication (as\nfar as I can tell) is because I can't get by with a simple accumulator\nvariable/transform function.\n\nIs there any clean way to accomplish this in Postgres? Any pointers\nor suggestions would be appreciated.\n\n\n\n\n\n\n- -Steve\n\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.3 (GNU/Linux)\nComment: For info see http://www.gnupg.org\n\niD8DBQE8SLyyG3kIaxeRZl8RAsNNAKCy8YDnMMZCIGrMYT6pt2IxqxtCJwCgxFp2\nHFlA8B9X5BJRfnMDmQSh8Ss=\n=Kx46\n-----END PGP SIGNATURE-----\n", "msg_date": "Fri, 18 Jan 2002 16:24:33 -0800", "msg_from": "\"Stephen P. Berry\" <[email protected]>", "msg_from_op": false, "msg_subject": "Functions in C with Ornate Data Structures" }, { "msg_contents": "\"Stephen P. Berry\" <[email protected]> writes:\n> If my data types were simpler, I could simply use an AGGREGATE function.\n> Unfortunately, I don't know of any way to schlep something as complex\n> as a doubly-linked list of arrays of arbitrary precision numbers.\n\nYou could, but the amount of data copying needed would be annoying.\nHowever, there's no law that says you can't cheat. 
I'd suggest that\nyou build this as an aggregate function whose nominal state value is\nsimply a pointer to data structures that are off somewhere else.\n\nFor example, assuming that you are willing to cheat to the extent of\nassuming sizeof(pointer) = sizeof(integer), try something like this:\n\nCREATE AGGREGATE crunch_number (\n basetype = float8, -- or whatever the input column type is\n sfunc = crunch_func,\n stype = integer,\n ffunc = crunch_finish,\n initcond = 0);\n\nwhere crunch_func(integer) returns integer is your data accumulation\nfunction, and it looks like\n\n\tdatstruct *ptr = (datstruct *) PG_GET_INT32(0);\n\tdouble newdataval = PG_GET_FLOAT8(1);\n\n\tif (ptr == NULL)\n\t{\n\t\t/* first call of query; initialize datastructures */\n\t}\n\n\t/* update datastructures using newdataval */\n\n\tPG_RETURN_INT32((int32) ptr);\n\nFinally, crunch_finish(integer) returns float8 (or whatever is needed)\ncontains your code to compute the final result and release the working\ndatastructure.\n\nNow, the important detail: you can't allocate your working\ndatastructures with a simple palloc(), because these functions will be\ncalled in a short-lived memory context. What I'd suggest is that in\nyour setup step, you create a private memory context that is a child\nof TransactionCommandContext; then allocate all your datastructures in\nthat. Then in the crunch_finish step, you needn't bother with retail\nreleasing of the data structures, just destroy the private context\nand you're done.\n\n\t\t\tregards, tom lane\n\nPS: this is not a novice-level question ;-). You should be asking this\nkind of stuff on pgsql-hackers, methinks. There really isn't any other\nlist that discusses C coding inside the backend.\n", "msg_date": "Fri, 18 Jan 2002 20:30:56 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Functions in C with Ornate Data Structures " }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\nIn message <[email protected]>, Tom Lane writes:\n\n>You could, but the amount of data copying needed would be annoying.\n>However, there's no law that says you can't cheat. I'd suggest that\n>you build this as an aggregate function whose nominal state value is\n>simply a pointer to data structures that are off somewhere else.\n>For example, assuming that you are willing to cheat to the extent of\n>assuming sizeof(pointer) = sizeof(integer), try something like this:\n\nI'd actually thought of doing something like this, but couldn't find\nan actual explicit argument type for pointers[0], and I can't make\nthe assumption you describe for portability reasons (my three main\ntest platforms are alpha, sparc64, and x86).\n\nI was also hoping that I could get away with not passing the problem\ndata structures internally at all...i.e., have a crunch_input() function\nthat initialises the linked list I need and populates it,\nthen a crunch_result() function that spits out the result. The\naccumulator is just a dummy variable, and the interesting data structure(s)\naren't in the argument list for any of the functions. I tried\ndoing such a thing with an aggregate, but it didn't work---although,\ninterestingly, invoking the input and result functions manually\nin a single session did. 
I took this to mean that I really didn't\nunderstand aggregates, so I was assuming it was a novice-level question.\nI was sorta hoping this would turn out to be a standard question (although\nI couldn't find any useful references in the mailing list archives or\nvia web searches).\n\n>Now, the important detail: you can't allocate your working\n>datastructures with a simple palloc(), because these functions will be\n>called in a short-lived memory context. What I'd suggest is that in\n>your setup step, you create a private memory context that is a child\n>of TransactionCommandContext; then allocate all your datastructures in\n>that. Then in the crunch_finish step, you needn't bother with retail\n>releasing of the data structures, just destroy the private context\n>and you're done.\n\nIs there any way to keep `intermediate' data used by user-defined\nfunctions around indefinitely? I.e., have some sort of crunch_init()\nfunction that creates a bunch of in-memory data structures, which\ncan then be used by subsequent (and independent) queries? I'm\nassuming not...and if I want to do that sort of thing I should populate\na temporary table with the data from these `intermediate' results.\nOr keep all this fancy stuff in standalone applications rather than\nin user-defined functions.\n\nIt seems like the general class of thing I'm trying to accomplish\nisn't that esoteric. Imagine trying to write a function to compute\nthe standard deviation of arbitrary precision numbers using the GMP\nlibrary or some such. Note that I'm not saying that that's what I'm\ntrying to do...I'm just offering it as a simple sample problem in\nwhich one can't pass everything as an argument in an aggregate. How\ndoes one set about doing such a thing in Postgres?\n\n\n\n\n\n\n\n- -Steve\n\n- -----\n0\tI was hoping that there would be, since the macro widgetry in\n\tthe version 1 function semantics clearly includes the concept\n\tof pointers as a distinct type.\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.3 (GNU/Linux)\nComment: For info see http://www.gnupg.org\n\niD8DBQE8SN5tG3kIaxeRZl8RAmPmAJ4ilTeyoC//MRG5JHf7AmNuR7oW/QCdHHqw\nRoE/GplKts1rxNO85ADEebk=\n=Oedz\n-----END PGP SIGNATURE-----\n", "msg_date": "Fri, 18 Jan 2002 19:01:10 -0800", "msg_from": "\"Stephen P. Berry\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Functions in C with Ornate Data Structures " }, { "msg_contents": "\"Stephen P. Berry\" <[email protected]> writes:\n>> For example, assuming that you are willing to cheat to the extent of\n>> assuming sizeof(pointer) = sizeof(integer), try something like this:\n\n> I'd actually thought of doing something like this, but couldn't find\n> an actual explicit argument type for pointers[0], and I can't make\n> the assumption you describe for portability reasons (my three main\n> test platforms are alpha, sparc64, and x86).\n\nFair enough. I had actually thought better of that shortly after writing,\nso here's how I'd really do it:\n\nStill make the declaration of the state datatype be \"integer\" at the SQL\nlevel, and say initcond = 0. (If you don't do this, you have to fight\nnodeAgg.c's ideas about what to do with a pass-by-reference datatype,\nand it ain't worth the trouble.) 
But in the C code, write acquisition\nand return of the state value as\n\n\tdatstruct *ptr = (datstruct *) PG_GETARG_POINTER(0);\n\n\t...\n\n\tPG_RETURN_POINTER(ptr);\n\nThis relies on the fact that what you are *really* passing and returning\nis not an int but a Datum, and Datum is by definition large enough for\npointers. The only part of the above that's even slightly dubious is\nthe assumption that a Datum created from an int32 zero will read as a\npointer NULL --- but I am not aware of any platform where a zero bit\npattern doesn't read as a pointer NULL (and lots of pieces of Postgres\nwould break on such a platform). You could get around that too by\nmaking the initial state condition be a SQL NULL instead of a zero, but\nI don't see the point. Unless you need to treat NULL input values as\nsomething other than \"ignores\", you really want to declare the sfunc as\nstrict, and that gets in the way of using a NULL initcond.\n\n> Is there any way to keep `intermediate' data used by user-defined\n> functions around indefinitely? I.e., have some sort of crunch_init()\n> function that creates a bunch of in-memory data structures, which\n> can then be used by subsequent (and independent) queries?\n\nYou can if you can figure out how to find them again. However, the\nonly obvious answer to that is to use static variables, which falls\ndown miserably if someone tries to run two independent instances of\nyour aggregate in one query. I'd suggest hewing closely to the external\nbehavior of standard aggregates --- ie, each one is an independent\ncalculation. You can use the above techniques to build an efficient\nimplementation. If you instead build something that has an API\ninvolving state that persists across queries, I'm pretty sure you'll\nregret it in the long run.\n\n> It seems like the general class of thing I'm trying to accomplish\n> isn't that esoteric. Imagine trying to write a function to compute\n> the standard deviation of arbitrary precision numbers using the GMP\n> library or some such. Note that I'm not saying that that's what I'm\n> trying to do...I'm just offering it as a simple sample problem in\n> which one can't pass everything as an argument in an aggregate. How\n> does one set about doing such a thing in Postgres?\n\nI blink not an eye to say that I'd do it exactly as described above.\nStick all the intermediate state into a data structure that's referenced\nby a single master pointer, and pass the pointer as the \"state value\"\nof the aggregate.\n\nBTW, mlw posted some contrib code on pghackers just a day or two back\nthat does something similar to this. He did some details differently\nthan I would've, notably this INT32-vs-POINTER business; but it's a\nworking example.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 Jan 2002 22:28:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Functions in C with Ornate Data Structures " }, { "msg_contents": "Tom Lane wrote:\n> \n> \"Stephen P. Berry\" <[email protected]> writes:\n> > Is there any way to keep `intermediate' data used by user-defined\n> > functions around indefinitely? I.e., have some sort of crunch_init()\n> > function that creates a bunch of in-memory data structures, which\n> > can then be used by subsequent (and independent) queries?\n\nI have had to deal with this problem. I implemented a small version of Oracle's\n\"contains()\" API call. 
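Pulling Tom's two messages together, a minimal sketch of such an aggregate
could look like the following. The crunch names and the sum/count payload
are only placeholders for whatever ornate structures the real problem needs,
the SQL-level declaration is assumed to be the CREATE AGGREGATE Tom gave
above (stype = integer, initcond = 0), and the memory-context calls follow
the 7.2-era backend API:

#include "postgres.h"
#include "fmgr.h"
#include "utils/memutils.h"

/* Sketch only: stand-in for the real working data structures. */
typedef struct
{
	MemoryContext	cxt;	/* private context holding everything below */
	double		sum;
	int		count;
} crunch_state;

PG_FUNCTION_INFO_V1(crunch_func);
PG_FUNCTION_INFO_V1(crunch_finish);

Datum
crunch_func(PG_FUNCTION_ARGS)
{
	crunch_state *state = (crunch_state *) PG_GETARG_POINTER(0);
	double		newval = PG_GETARG_FLOAT8(1);

	if (state == NULL)
	{
		/* first call of the query: build a private context that is
		 * a child of TransactionCommandContext, per Tom's advice */
		MemoryContext cxt = AllocSetContextCreate(TransactionCommandContext,
												  "crunch working storage",
												  ALLOCSET_DEFAULT_MINSIZE,
												  ALLOCSET_DEFAULT_INITSIZE,
												  ALLOCSET_DEFAULT_MAXSIZE);

		state = (crunch_state *) MemoryContextAlloc(cxt, sizeof(crunch_state));
		state->cxt = cxt;
		state->sum = 0.0;
		state->count = 0;
	}

	/* update the working data structures from the new input value */
	state->sum += newval;
	state->count++;

	PG_RETURN_POINTER(state);
}

Datum
crunch_finish(PG_FUNCTION_ARGS)
{
	crunch_state *state = (crunch_state *) PG_GETARG_POINTER(0);
	double		result;

	if (state == NULL)
		PG_RETURN_NULL();	/* no input rows at all */

	/* compute whatever the real summary statistic is */
	result = state->sum / state->count;

	/* one context delete releases everything the aggregate allocated */
	MemoryContextDelete(state->cxt);

	PG_RETURN_FLOAT8(result);
}

With that in place, a query like Stephen's original

	select crunch_number(foo) from bar where [some condition];

drives crunch_func once per qualifying row and crunch_finish once at the end.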
The API is something like this:\n\nselect score(1), score(2) from table where contains(cola, 'bla bla bal', 1) >0\nand contains(colb, 'fubar', 2) > 1;\n\nOn the first call I parse the search string and store it in a hash table based\non the number passed to both contains() and score(). The number passed is an\narbitrary bookmark which separates the various result sets for a single query.\n\nThe hash table is static data allocated with malloc(), although I have been\nthinking I should use MemoryContextAlloc with the right context, but malloc\nseems to work.\n\nOn subsequent queries, if the bookmark numer is is found, but the string for\nthe contains function differes, then I delete the old entry and reparse and\nstore the new one.\n\n> \n> > It seems like the general class of thing I'm trying to accomplish\n> > isn't that esoteric. Imagine trying to write a function to compute\n> > the standard deviation of arbitrary precision numbers using the GMP\n> > library or some such. Note that I'm not saying that that's what I'm\n> > trying to do...I'm just offering it as a simple sample problem in\n> > which one can't pass everything as an argument in an aggregate. How\n> > does one set about doing such a thing in Postgres?\n> \n> I blink not an eye to say that I'd do it exactly as described above.\n> Stick all the intermediate state into a data structure that's referenced\n> by a single master pointer, and pass the pointer as the \"state value\"\n> of the aggregate.\n> \n> BTW, mlw posted some contrib code on pghackers just a day or two back\n> that does something similar to this. He did some details differently\n> than I would've, notably this INT32-vs-POINTER business; but it's a\n> working example.\n\nThe sizeof(int32) == sizeof(void *) is a problem, and I am not happy with it,\nalthough I will look into your (Tom) recommendations.\n", "msg_date": "Sat, 19 Jan 2002 09:02:43 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [NOVICE] Functions in C with Ornate Data Structures" }, { "msg_contents": "I'd like to implement *something* to help us collect information on what\nplatforms actually have what features. This would be useful, for\nexample, for figuring out whether any platforms are lacking 8 byte\nintegers or are missing timezone infrastructure.\n\nI was thinking about something like \"make report\" which would mail the\nresults of ./configure to, say, the ports mailing list. We could mention\nit in the text message printed at the end of the make cycle.\n\nComments? Suggestions?\n\n - Thomas\n", "msg_date": "Mon, 22 Apr 2002 22:20:15 -0700", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "\"make report\"" }, { "msg_contents": "> I'd like to implement *something* to help us collect information on what\n> platforms actually have what features. This would be useful, for\n> example, for figuring out whether any platforms are lacking 8 byte\n> integers or are missing timezone infrastructure.\n> \n> I was thinking about something like \"make report\" which would mail the\n> results of ./configure to, say, the ports mailing list. We could mention\n> it in the text message printed at the end of the make cycle.\n> \n> Comments? 
Suggestions?\n\nSuggestion: Why not embed this information into the binary, and provide some \nway of extracting it.\n\n(There's a Linux kernel option that allows something similar, so it wouldn't \nbe something unprecedented.)\n\nIf all the config information is embedded in the binary, automatically, at \ncompile time, then this allows the ability to be _certain_ that:\n\n- \"Oh, that was compiled with a really stupid set of compiler options; you'll \nhave to recompile!\"\n\n- \"That was compiled without support for FOO, but with support for BAR.\"\n\n- \"Announcement, people: Look out for whether or not your distribution \ncompiled PostgreSQL with proper support for 64 bit integers. Several \ndistributions got this wrong with the 7.4.17 release, and you can see if it's \nOK by looking for LONG_LONG_REVISED in the embedded configuration information.\"\n\n[Downside: \"Announcement, script kiddies: If you find option \nUPDATE_DESCR_TABS=1 in the configuration information, then there's a very easy \nroot exploit...\"]\n--\n(reverse (concatenate 'string \"gro.gultn@\" \"enworbbc\"))\nhttp://www3.sympatico.ca/cbbrowne/x.html\nRules of the Evil Overlord #176. \"I will add indelible dye to the\nmoat. It won't stop anyone from swimming across, but even dim-witted\nguards should be able to figure out when someone has entered in this\nfashion.\" <http://www.eviloverlord.com/>\n\n-- \n(concatenate 'string \"cbbrowne\" \"@acm.org\")\nhttp://www3.sympatico.ca/cbbrowne/spiritual.html\nIncluding a destination in the CC list that will cause the recipients'\nmailer to blow out is a good way to stifle dissent.\n-- from the Symbolics Guidelines for Sending Mail", "msg_date": "Tue, 23 Apr 2002 01:39:25 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: \"make report\" " }, { "msg_contents": "On Tue, 23 Apr 2002 [email protected] wrote:\n\n> Suggestion: Why not embed this information into the binary, and\n> provide some way of extracting it.\n\nI like this!\n\n> [Downside: \"Announcement, script kiddies: If you find option\n> UPDATE_DESCR_TABS=1 in the configuration information, then there's a\n> very easy root exploit...\"]\n\nThat's not a downside at all. If an exploit exists, you need only\ntry it, and it works or it doesn't.\n\nIn fact, it's an upside becuase it allows someone who doesn't have\nexploit code more easily to determine whether or not he might be\nvulnerable.\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Tue, 23 Apr 2002 17:18:14 +0900 (JST)", "msg_from": "Curt Sampson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"make report\" " }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n> I was thinking about something like \"make report\" which would mail the\n> results of ./configure to, say, the ports mailing list. We could mention\n> it in the text message printed at the end of the make cycle.\n\nI think it'd be a bad idea to encourage people to send mail to the ports\nlist. For one thing, a pile of nonsubscriber mail would make more work\nfor Marc, who has quite enough already. If you want to do this, set up\na dedicated mail alias to receive such reports.\n\n(Possibly my thoughts on soliciting mass email are a tad colored by the\namount of virus traffic I've been seeing lately :-( ... 
if the Klez\nepidemic gets any worse I will be forced to shut down jpeg-info, which\nis currently seeing upwards of 1000 virus mails/day...)\n\nI do like Joe's idea of embedding a complete configuration report\nright into the backend executable, where it can be retrieved long\nafter config.log has been thrown away.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 23 Apr 2002 12:02:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"make report\" " }, { "msg_contents": "Thomas Lockhart writes:\n\n> I'd like to implement *something* to help us collect information on what\n> platforms actually have what features. This would be useful, for\n> example, for figuring out whether any platforms are lacking 8 byte\n> integers or are missing timezone infrastructure.\n\nI don't think that's very useful. Most configuration checks are there\nbecause some platform needed them at one point. (Some are not -- that's a\ndifferent story.) Those platforms do not go away, and receiving thousands\nof reports about \"I have feature X\" won't prove that there are no systems\nwithout feature X.\n\nIf you want to collect information about what features are portable you\ncan check other software packages, product manuals, ports trees, etc.\nMost issues are documented someplace.\n\nBtw., yes, 8 byte integers are missing on some platforms.\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Tue, 23 Apr 2002 19:30:06 -0400 (EDT)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"make report\"" }, { "msg_contents": "> If you want to collect information about what features are portable you\n> can check other software packages, product manuals, ports trees, etc.\n> Most issues are documented someplace.\n\nOh goodness. Thanks for offering me a new hobby ;)\n\n> Btw., yes, 8 byte integers are missing on some platforms.\n\nRight. The two areas which come to mind are integer availability and the\ntimezone support (as you might know we support *three* different time\nzone models). At the moment, none of the developers know the features\nsupported on the platforms we claim to support. Which platforms do not\nhave int8 support still? Which do not have time zone interfaces fitting\ninto the two \"zonefull\" styles? I'd like to know, but istm that the\npeople *with* the platforms could do this much more easily than those\nwithout. What am I missing here??\n\n - Thomas\n", "msg_date": "Wed, 24 Apr 2002 06:55:20 -0700", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"make report\"" }, { "msg_contents": "Thomas Lockhart writes:\n\n> Right. The two areas which come to mind are integer availability and the\n> timezone support (as you might know we support *three* different time\n> zone models). At the moment, none of the developers know the features\n> supported on the platforms we claim to support. Which platforms do not\n> have int8 support still?\n\n\"Still\" is the wrong word. There used to be platforms with certain areas\nof trouble, and those platforms don't go away.\n\nBut since you asked: QNX 4 and SCO OpenServer are known to lack 8 byte\nintegers.\n\n> Which do not have time zone interfaces fitting\n> into the two \"zonefull\" styles? I'd like to know, but istm that the\n> people *with* the platforms could do this much more easily than those\n> without. What am I missing here??\n\nI don't think polling users this way will yield reliable results. 
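As a concrete illustration, the sort of probe a configure script can compile
and run to answer the int8 question is tiny; this is only a sketch of the
idea, not the actual configure test:

#include <stdio.h>

int
main(void)
{
	/* does the compiler have a 64-bit type that really holds 64 bits? */
	long long int	x = 4000000000LL;	/* needs more than 32 bits */

	printf("sizeof(long long int) = %d\n", (int) sizeof(long long int));
	return (sizeof(long long int) >= 8 && x == 4000000000LL) ? 0 : 1;
}

A complete check would also want to verify that snprintf can actually format
the type (%lld versus %qd), which is where some otherwise-capable platforms
fall down.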
If you\nreally want to find out, break something and see if someone complains.\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Wed, 24 Apr 2002 14:14:29 -0400 (EDT)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"make report\"" }, { "msg_contents": "It depends. QNX4 may be used with GCC, in which case it does have long long.\nI am not sure if that combination will play along with Postgres, but it\nshould not be assumed impossible.\n\n----- Original Message -----\nFrom: \"Peter Eisentraut\" <[email protected]>\nTo: \"Thomas Lockhart\" <[email protected]>\nCc: \"PostgreSQL Hackers\" <[email protected]>\nSent: Wednesday, April 24, 2002 1:14 PM\nSubject: Re: [HACKERS] \"make report\"\n\n\n> Thomas Lockhart writes:\n>\n> > Right. The two areas which come to mind are integer availability and the\n> > timezone support (as you might know we support *three* different time\n> > zone models). At the moment, none of the developers know the features\n> > supported on the platforms we claim to support. Which platforms do not\n> > have int8 support still?\n>\n> \"Still\" is the wrong word. There used to be platforms with certain areas\n> of trouble, and those platforms don't go away.\n>\n> But since you asked: QNX 4 and SCO OpenServer are known to lack 8 byte\n> integers.\n>\n> > Which do not have time zone interfaces fitting\n> > into the two \"zonefull\" styles? I'd like to know, but istm that the\n> > people *with* the platforms could do this much more easily than those\n> > without. What am I missing here??\n>\n> I don't think polling users this way will yield reliable results. If you\n> really want to find out, break something and see if someone complains.\n>\n> --\n> Peter Eisentraut [email protected]\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\n", "msg_date": "Wed, 24 Apr 2002 13:19:22 -0500", "msg_from": "\"Igor Kovalenko\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"make report\"" }, { "msg_contents": "Igor Kovalenko writes:\n\n> It depends. QNX4 may be used with GCC, in which case it does have long long.\n> I am not sure if that combination will play along with Postgres, but it\n> should not be assumed impossible.\n\nThe point is, it should not be assumed possible.\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Wed, 24 Apr 2002 14:34:00 -0400 (EDT)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"make report\"" }, { "msg_contents": "Just out of curiosity, does PostgreSQL have a mission statement?\n\nIf so, where could I find it?\n\nIf not, does anyone see a need?\n\n(No, I am not some rabid MBA, but it may be useful to have for those rabid MBAs\nwith whom I must deal.)\n", "msg_date": "Wed, 01 May 2002 14:24:30 -0400", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "PostgreSQL mission statement?" }, { "msg_contents": "Dear Team,\n\nI have been monitoring this list for quite some time now and have been studying PostGreSQL for a while. I also did some internet research on the subject of \"multi valued\" database theory. I know that this is the basis for the \"Pick\" database system, FileMaker Pro, \"D3\", and a few other database systems. 
After carefully reviewing the theoretical arguments in favor of this type of database, I am thoroughly convinced that there are certain advantages to it that will never be matched by a traditional \"relational database\".\n\nI won't waste your time in reviewing the technical advantages here, because you can do your own research. However, I will say that it is obvious to me that an mV database will be an integral part of any truly practical AI robotics system. It will probably be necessary to \"marry\" the technologies of both relational databases and mV databases in such a system.\n\nIMHO, this is something that you, as the leaders in the most advanced database system ever developed, should carefully consider. The Linux community needs to be aware of the special advantages that an mV database offers, the way to interface an mV system with a traditional RDBMS, and the potential application theory as it relates to AI systems.\n\nWe, as a community of leaders in GPL'd software need to make sure that this technology is part of the \"knowledge base\" of our community. Thanks for listening.\n\nArthur\n", "msg_date": "Wed, 1 May 2002 11:37:26 -0700", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": false, "msg_subject": "mV database tools" }, { "msg_contents": "On Wed, 2002-05-01 at 19:37, [email protected] wrote:\n\n> I also did some internet research on the subject of \"multi valued\"\ndatabase theory. I know that this is the basis for the \"Pick\" database\nsystem\n\nFor those who aren't familiar with PICK, it is an untyped database\n(apart from weak types provided by a separate dictionary - advisory but\nnot enforced). All records must have a single key, by which the record\nis retrieved. Database records are divided into \"attributes\" by\nCHAR(254), fields are subdivided into \"values\" by CHAR(253) and values\ncan be further sub-divided into \"subvalues\" by CHAR(252). When records\nare listed, second and subsequent values are presented on separate lines\nwithin their columns. 
\n\nIn a PICK application, it would be common to have a set up such as:\n\nCUSTOMERS file:\n key=id\n record=name|address1^address2^...|...|ordernumber1^ordernumber2^...\n(using | for CHAR(254) and ^ for CHAR(253))\n\nwhere the whole address is in one field, with each address line in a\nseparate value, and there is another field listing the order numbers\n(record keys in CUST_ORDERS) of all outstanding orders, again as\nseparate values. \n\nThen you could use the following commands (this is not SQL, of course):\n\n SELECT CUSTOMERS WITH ID = \"C23\" SAVING ORDNOS\n SORT CUST_ORDERS\n\nto list all the outstanding orders for custoemr C23.\n\nThe advantages of Pick are that it is very easy to program; the\ncorresponding disadvantage is that it is a very undisciplined\nenvironment. It is necessary for the programmer to remember to update\nthat list of order keys whenever an order is created or deleted. (Some\nrecent implementations now support triggers, I think.)\n\nI suppose arrays are PostgreSQL's equivalent of multi-valued data (is it\npossible to have arrays of arrays?) So it could be argued that\nPostgreSQL already provides part of what Arthur wants.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n\n \"For if ye forgive men their trespasses, your heavenly \n Father will also forgive you; But if ye forgive not \n men their trespasses, neither will your Father forgive\n your trespasses.\" Matthew 6:14,15", "msg_date": "01 May 2002 21:46:21 +0100", "msg_from": "Oliver Elphick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: mV database tools" }, { "msg_contents": "mlw <[email protected]> writes:\n> Just out of curiosity, does PostgreSQL have a mission statement?\n\nNope. Given the wide variety of views among the developer community,\nI think we'd have a tough time agreeing on a mission statement, unless\nit was so generic as to be meaningless ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 01 May 2002 19:05:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL mission statement? " }, { "msg_contents": "> I suppose arrays are PostgreSQL's equivalent of multi-valued data (is it\n> possible to have arrays of arrays?) So it could be argued that\n> PostgreSQL already provides part of what Arthur wants.\n\nIt seems to me that there would be a whopping lot of value to the exercise of \nfiguring out some way of \"layering\" MVD on top of a relational database, even \nif only to provide something sufficiently analytical to cope with the \nperpetual claims of:\n\n \"MultiValued Databases are Vastly, Spectacularly, the Bestest Kind of\n Database ever imagined in the universe! No, really they are!\"\n\nIt might not be necessary to go all the way to fully layering such a thing atop PostgreSQL, although it would be a nice riposte to be able to respond with:\n\n \"Been there, done that. 
Of _COURSE_ PostgreSQL supports MultiValue.\"\n--\n(reverse (concatenate 'string \"gro.gultn@\" \"enworbbc\"))\nhttp://www.cbbrowne.com/info/lisp.html\nIncluding a destination in the CC list that will cause the recipients'\nmailer to blow out is a good way to stifle dissent.\n-- from the Symbolics Guidelines for Sending Mail\n\n-- \n(reverse (concatenate 'string \"ac.notelrac.teneerf@\" \"454aa\"))\nhttp://www.cbbrowne.com/info/spreadsheets.html\n\"Starting a project in C/C++ is a premature optimization.\"\n-- Peter Jensen", "msg_date": "Wed, 01 May 2002 19:08:28 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: mV database tools " }, { "msg_contents": "> mlw <[email protected]> writes:\n> > Just out of curiosity, does PostgreSQL have a mission statement?\n\n> Nope. Given the wide variety of views among the developer community,\n> I think we'd have a tough time agreeing on a mission statement, unless\n> it was so generic as to be meaningless ...\n\nWell, I think one of the things that has been agreed on that _isn't_\nthat generic is:\n \"We use a Berkeley style license, and prefer it that way.\"\n\n:-)\n--\n(concatenate 'string \"cbbrowne\" \"@ntlug.org\")\nhttp://www.ntlug.org/~cbbrowne/rdbms.html\nTo err is human, to moo bovine. \n", "msg_date": "Wed, 01 May 2002 19:25:12 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: PostgreSQL mission statement? " }, { "msg_contents": "On Wed, May 01, 2002 at 02:24:30PM -0400, mlw wrote:\n> Just out of curiosity, does PostgreSQL have a mission statement?\n> \n> If so, where could I find it?\n> \n> If not, does anyone see a need?\n\n\"Provide a really good database and have fun doing it\"\n\n-- \nDavid Terrell | \"War is peace, \nPrime Minister, Nebcorp | freedom is slavery, \[email protected] | ignorance is strength \nhttp://wwn.nebcorp.com/ | Dishes are clean.\" - Chris Fester\n", "msg_date": "Wed, 1 May 2002 16:59:34 -0700", "msg_from": "David Terrell <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL mission statement?" }, { "msg_contents": "[email protected] wrote:\n> > mlw <[email protected]> writes:\n> > > Just out of curiosity, does PostgreSQL have a mission statement?\n>\n> > Nope. Given the wide variety of views among the developer community,\n> > I think we'd have a tough time agreeing on a mission statement, unless\n> > it was so generic as to be meaningless ...\n>\n> Well, I think one of the things that has been agreed on that _isn't_\n> that generic is:\n> \"We use a Berkeley style license, and prefer it that way.\"\n\n Does that now count as a mission? Whow, didn't know that I am\n on a mission! What in our mission is mission critical? What\n are our defined mission goals?\n\n\nJan :-)\n\n>\n> :-)\n> --\n> (concatenate 'string \"cbbrowne\" \"@ntlug.org\")\n> http://www.ntlug.org/~cbbrowne/rdbms.html\n> To err is human, to moo bovine.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Wed, 1 May 2002 20:31:05 -0400 (EDT)", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL mission statement?" 
}, { "msg_contents": "[email protected] wrote:\n> \n> > mlw <[email protected]> writes:\n> > > Just out of curiosity, does PostgreSQL have a mission statement?\n> \n> > Nope. Given the wide variety of views among the developer community,\n> > I think we'd have a tough time agreeing on a mission statement, unless\n> > it was so generic as to be meaningless ...\n> \n> Well, I think one of the things that has been agreed on that _isn't_\n> that generic is:\n> \"We use a Berkeley style license, and prefer it that way.\"\n\nNo! no! no! Don't even kid like that. EVERY time that debate is even mentioned,\nit goes on for days.\n", "msg_date": "Wed, 01 May 2002 21:05:31 -0400", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL mission statement?" }, { "msg_contents": "Le Jeudi 2 Mai 2002 01:59, David Terrell a écrit :\n> \"Provide a really good database and have fun doing it\"\n\nPostgreSQL Community is commited to providing Humanity with the best \nmulti-purpose, reliable, open-source and free database system.\n", "msg_date": "Thu, 2 May 2002 11:20:58 +0200", "msg_from": "Jean-Michel POURE <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL mission statement?" }, { "msg_contents": "> Le Jeudi 2 Mai 2002 01:59, David Terrell a écrit :\n> > \"Provide a really good database and have fun doing it\"\n> \n> PostgreSQL Community is commited to providing Humanity with the best\n> multi-purpose, reliable, open-source and free database system.\n> \n\nHow about \"We can store your data\" ?\n\nJust a late night thought...\n\ndali\n\n", "msg_date": "Thu, 2 May 2002 21:42:39 +1200", "msg_from": "\"Dalibor Andzakovic\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL mission statement?" }, { "msg_contents": "> \"[email protected]\" wrote:\n> \n> Dear Team,\n> \n> I have been monitoring this list for quite some time now and have been\n> studying PostGreSQL for a while. I also did some internet research on the\n> subject of \"multi valued\" database theory. I know that this is the basis for\n> the \"Pick\" database system, FileMaker Pro, \"D3\", and a few other database\n> systems. After carefully reviewing the theoretical arguments in favor of\n> this type of database, I am thoroughly convinced that there are certain\n> advantages to it that will never be matched by a traditional \"relational\n> database\".\n> \n> I won't waste your time in reviewing the technical advantages here, because\n> you can do your own research. However, I will say that it is obvious to me\n> that an mV database will be an integral part of any truly practical AI\n> robotics system. It will probably be necessary to \"marry\" the technologies\n> of both relational databases and mV databases in such a system.\n\nThe idea of a multi-view database is interesting and all, but hardly a next\nleap in database theory. It is nothing more than using an addressable array\nwithin PostgreSQL, and PostgreSQL has contributed functions for just these\nsorts of operations.\n\nThe syntax may be different than you would wish, but with the same result. It\nseems that mu\n\n> \n> IMHO, this is something that you, as the leaders in the most advanced\n> database system ever developed, should carefully consider. 
The Linux\n> community needs to be aware of the special advantages that an mV database\n> offers, the way to interface an mV system with a traditional RDBMS, and the\n> potential application theory as it relates to AI systems.\n> \n> We, as a community of leaders in GPL'd software need to make sure that this\n> technology is part of the \"knowledge base\" of our community. Thanks for\n> listening.\n> \n> Arthur\n", "msg_date": "Thu, 02 May 2002 08:01:34 -0400", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: mV database tools" }, { "msg_contents": "mlw wrote:\n> \n> > \"[email protected]\" wrote:\n> >\n> > Dear Team,\n> >\n> > I have been monitoring this list for quite some time now and have been\n> > studying PostGreSQL for a while. I also did some internet research on the\n> > subject of \"multi valued\" database theory. I know that this is the basis for\n> > the \"Pick\" database system, FileMaker Pro, \"D3\", and a few other database\n> > systems. After carefully reviewing the theoretical arguments in favor of\n> > this type of database, I am thoroughly convinced that there are certain\n> > advantages to it that will never be matched by a traditional \"relational\n> > database\".\n> >\n> > I won't waste your time in reviewing the technical advantages here, because\n> > you can do your own research. However, I will say that it is obvious to me\n> > that an mV database will be an integral part of any truly practical AI\n> > robotics system. It will probably be necessary to \"marry\" the technologies\n> > of both relational databases and mV databases in such a system.\n> \n> The idea of a multi-view database is interesting and all, but hardly a next\n> leap in database theory. It is nothing more than using an addressable array\n> within PostgreSQL, and PostgreSQL has contributed functions for just these\n> sorts of operations.\n> \n> The syntax may be different than you would wish, but with the same result. It\n> seems that mu\n Doh!! pressed send!!\n\nIt seems that a multivalue database can be implemented on top of PostgreSQL,\nwhere as a full relational database can not be implemented on top of an MVDB.\n", "msg_date": "Thu, 02 May 2002 08:06:28 -0400", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: mV database tools" }, { "msg_contents": "Jean-Michel POURE wrote:\n> \n> Le Jeudi 2 Mai 2002 01:59, David Terrell a �crit :\n> > \"Provide a really good database and have fun doing it\"\n> \n> PostgreSQL Community is commited to providing Humanity with the best\n> multi-purpose, reliable, open-source and free database system.\n\n\nThe PostgreSQL community is committed to creating and maintaining the best,\nmost reliable, open-source multi-purpose standards based database, and with it,\npromote free and open source software world wide.\n\nWho's that? Anyone disagree?\n", "msg_date": "Thu, 02 May 2002 08:15:15 -0400", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL mission statement?" 
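Returning for a moment to the point made above in the mV thread (that a
multivalued attribute is essentially an addressable array column in
PostgreSQL), a small, purely illustrative libpq program might look like the
sketch below. The table, column and connection parameters are invented for
the example; the array literal and 1-based subscript syntax is ordinary
PostgreSQL array support, and the contributed array functions mentioned above
layer search operators on top of it. Error checking is mostly omitted for
brevity.

/*
 * Illustrative only: a "multivalued" attribute (several phone numbers per
 * person) stored as an ordinary PostgreSQL array column, with one element
 * read back by position.
 */
#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn     *conn = PQconnectdb("dbname=test");
    PGresult   *res;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* set up example data; result status checks omitted for brevity */
    PQclear(PQexec(conn, "CREATE TABLE person (name text, phones text[])"));
    PQclear(PQexec(conn,
        "INSERT INTO person VALUES ('arthur', '{\"555-1234\",\"555-5678\"}')"));

    /* array subscripts are 1-based: fetch the second phone number */
    res = PQexec(conn, "SELECT phones[2] FROM person WHERE name = 'arthur'");
    if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) == 1)
        printf("second phone: %s\n", PQgetvalue(res, 0, 0));

    PQclear(res);
    PQfinish(conn);
    return 0;
}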
}, { "msg_contents": "On Thu, May 02, 2002 at 08:15:15AM -0400, mlw wrote:\n> Jean-Michel POURE wrote:\n> > Le Jeudi 2 Mai 2002 01:59, David Terrell a �crit :\n> > > \"Provide a really good database and have fun doing it\"\n> > \n> > PostgreSQL Community is commited to providing Humanity with the best\n> > multi-purpose, reliable, open-source and free database system.\n> \n> \n> The PostgreSQL community is committed to creating and maintaining the best,\n> most reliable, open-source multi-purpose standards based database, and with\n> it, promote free and open source software world wide.\n> \n> Who's that? Anyone disagree?\n\nwhy does it have to be THE BEST ? that is insulting to the other projects\nlike MySQL which while \"competitors\" are also a valid and useful benchmark\nfor features, performance and keeping the postgresql community on its\ncollective toes.\n\npostgresql is not THE BEST in all applications, so calling it that is inviting\nderision and pointless arguments.\n\ni'd go with:\n\nThe PostgreSQL community is committed to creating and maintaining a very\nreliable, open-source, multi-purpose, standards-based database, and\nencouraging participation in open-source usage and development world wide.\n\n-- \n[ Jim Mercer [email protected] +1 416 410-5633 ]\n[ I want to live forever, or die trying. ]\n", "msg_date": "Thu, 2 May 2002 08:37:06 -0400", "msg_from": "Jim Mercer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL mission statement?" }, { "msg_contents": "Jim Mercer wrote:\n> \n> On Thu, May 02, 2002 at 08:15:15AM -0400, mlw wrote:\n> > Jean-Michel POURE wrote:\n> > > Le Jeudi 2 Mai 2002 01:59, David Terrell a �crit :\n> > > > \"Provide a really good database and have fun doing it\"\n> > >\n> > > PostgreSQL Community is commited to providing Humanity with the best\n> > > multi-purpose, reliable, open-source and free database system.\n> >\n> >\n> > The PostgreSQL community is committed to creating and maintaining the best,\n> > most reliable, open-source multi-purpose standards based database, and with\n> > it, promote free and open source software world wide.\n> >\n> > Who's that? Anyone disagree?\n> \n> why does it have to be THE BEST ? that is insulting to the other projects\n> like MySQL which while \"competitors\" are also a valid and useful benchmark\n> for features, performance and keeping the postgresql community on its\n> collective toes.\n\nThis is interesting, a mission statement isn't necessarily about \"what is\" but\nabout what we \"want to do,\" what it is that we \"intend to do,\" i.e. \"our\nmission.\" It is vital that a mission statement contain the superlatives.\nMediocrity has no place here.\n\nI don't know about you, but I want PostgreSQL to be the best, be THE most\nreliable. Omitting \"best\" or \"most\" from the statement means that we should all\njust give up now, because PostgreSQL is pretty damn good already.\n", "msg_date": "Thu, 02 May 2002 08:43:04 -0400", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL mission statement?" }, { "msg_contents": "\n\nIs this an indication of a need for [email protected]? 
:)\n\nI like:\n\nWe'll store your data; if we think it'll be an interesting enough\ndiversion for us.\n\n\n--\nNigel Andrews\n\n\nOn Thu, 2 May 2002, mlw wrote:\n\n> Jean-Michel POURE wrote:\n> > \n> > Le Jeudi 2 Mai 2002 01:59, David Terrell a écrit :\n> > > \"Provide a really good database and have fun doing it\"\n> > \n> > PostgreSQL Community is commited to providing Humanity with the best\n> > multi-purpose, reliable, open-source and free database system.\n> \n> \n> The PostgreSQL community is committed to creating and maintaining the best,\n> most reliable, open-source multi-purpose standards based database, and with it,\n> promote free and open source software world wide.\n> \n> Who's that? Anyone disagree?\n\n", "msg_date": "Thu, 2 May 2002 13:43:39 +0100 (BST)", "msg_from": "\"Nigel J. Andrews\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL mission statement?" }, { "msg_contents": "On Thu, May 02, 2002 at 08:43:04AM -0400, mlw wrote:\n> Jim Mercer wrote:\n> > why does it have to be THE BEST ? that is insulting to the other projects\n> > like MySQL which while \"competitors\" are also a valid and useful benchmark\n> > for features, performance and keeping the postgresql community on its\n> > collective toes.\n> \n> This is interesting, a mission statement isn't necessarily about \"what is\" but\n> about what we \"want to do,\" what it is that we \"intend to do,\" i.e. \"our\n> mission.\" It is vital that a mission statement contain the superlatives.\n> Mediocrity has no place here.\n> \n> I don't know about you, but I want PostgreSQL to be the best, be THE most\n> reliable. Omitting \"best\" or \"most\" from the statement means that we should\n> all just give up now, because PostgreSQL is pretty damn good already.\n\ni think a mission statement full of boastfulness is just a sound bite, and\nwill be dismissed as such.\n\nif you want the mission statement to have an impact, then it needs to be\nacceptable not only to those who fully embrace it, but also acceptable to\nthose who will respect the project from a distance.\n\notherwise its not a mission statement, its akin to a corporate cheer.\n( i'm picturing Steve Balmer's superlative exhaltations to the converted\n http://www.ntk.net/ballmer/mirrors.html )\n\n-- \n[ Jim Mercer [email protected] +1 416 410-5633 ]\n[ I want to live forever, or die trying. ]\n", "msg_date": "Thu, 2 May 2002 08:56:44 -0400", "msg_from": "Jim Mercer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL mission statement?" }, { "msg_contents": "On Thu, May 02, 2002 at 01:43:39PM +0100, Nigel J. Andrews wrote:\n> I like:\n> \n> We'll store your data; if we think it'll be an interesting enough\n> diversion for us.\n\ngets my vote.\n\n8^)\n\n-- \n[ Jim Mercer [email protected] +1 416 410-5633 ]\n[ I want to live forever, or die trying. ]\n", "msg_date": "Thu, 2 May 2002 08:59:19 -0400", "msg_from": "Jim Mercer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL mission statement?" }, { "msg_contents": "> > I don't know about you, but I want PostgreSQL to be the best, be\nTHE most\n> > reliable. Omitting \"best\" or \"most\" from the statement means that\nwe should\n> > all just give up now, because PostgreSQL is pretty damn good\nalready.\n>\n> i think a mission statement full of boastfulness is just a sound\nbite, and\n> will be dismissed as such.\n\nTheres no reason Postgresql can't be the best in a good majority if\nnot all of the fields. 
Yeah, a few things are needed to accomplish\nthis -- but theres no reason it can't happen.\n\nAnyway, most companies do something like 'Postgresql will become the\nchoice database'. That is, majority market share.\n\nThat said, they're dumb. They need to be changed once you meet the\ngoal.\n\nA good mission statement should last the lifetime of the company /\ndepartment / project. WalMart actually has one of the better ones,\nwhere their mission is to beat last years sales by x%. 3M will\ninnovate. HP would not release a product unless it offered the market\nsomething new or better (they don't [ didn't for years anyway ] clone\nothers stuff and undercut them in price).\n\nPerhaps Postgresql should have the mission of handling twice the\namount of data this year than last. That is, all collective\ninstallations will maintain more stuff. Difficult to measure, but\nwould ensure we create / maintain the features required by big and\nsmall databases. Currently those features appear to be relational\ndesign, reliability, etc. Should the data storage requirements\nchange, Postgresql would have to follow in order to maintain it's\nmission statement which is a good thing.\n\nGoals should change, (being most reliable, or ANSI compliant) but\npurpose should be consistent.\n\n\nThat said, skip the whole thing. I don't think we need something to\nrally behind as it's kinda self explanatory why you'd donate time or\nmoney to the project. It needs to meet your needs first, which\naveraged out between all of the developers will meet those of most\nothers. DBs work that way, desktops often don't ;)\n\n\n\n\n\n", "msg_date": "Thu, 2 May 2002 09:44:53 -0400", "msg_from": "\"Rod Taylor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL mission statement?" }, { "msg_contents": "On Thu, 2002-05-02 at 14:37, Jim Mercer wrote:\n> On Thu, May 02, 2002 at 08:15:15AM -0400, mlw wrote:\n> > Who's that? Anyone disagree?\n> \n> why does it have to be THE BEST ? that is insulting to the other projects\n> like MySQL which while \"competitors\" are also a valid and useful benchmark\n> for features, performance and keeping the postgresql community on its\n> collective toes.\n> \n> postgresql is not THE BEST in all applications, so calling it that is inviting\n> derision and pointless arguments.\n> \n\nThe Politically Correct mission statement follows:\n\nThe PostgreSQL community is committed to creating and maintaining a good \nbut not the best, mostly reliable, open-source multi-purpose standards \nbased database, and with it, promote free and open source software and \nother worthy causes world wide and to not hurting anyones feelings in doing so. \nWe are also committed to not cheating our SOs, not charging too much for\nour services nor eating too much and to recommending products of our\ncommercial competitors before ours in order to help them fullfil their\nobligations to their stockholders. \n\nI was hoping to fit in something about being a good\nChristian/Muslim/Atheist but was unable to do it in an universally\nacceptable way. \nThere may be other points that are not valid everywhere though.\n\nBTW, I think PostgreSQL does _not_ need any mission statement.\n\n-------------\nHannu\n\n\n\n", "msg_date": "02 May 2002 16:07:54 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL mission statement?" 
}, { "msg_contents": "The PostgreSQL community is committed to creating and maintaining the best,\nmost reliable, open-source multi-purpose standards based database, and with\nit, promote free(dom) and open source software world wide.\n\nI hope you don't mind writing \"free(dom)\" with the idea of fighting patent \nabuses.\n\nCheers,\nJean-Michel\n", "msg_date": "Thu, 2 May 2002 16:29:34 +0200", "msg_from": "Jean-Michel POURE <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL mission statement?" }, { "msg_contents": "On Thursday 02 May 2002 08:56 am, Jim Mercer wrote:\n> i think a mission statement full of boastfulness is just a sound bite, and\n> will be dismissed as such.\n\n> if you want the mission statement to have an impact, then it needs to be\n> acceptable not only to those who fully embrace it, but also acceptable to\n> those who will respect the project from a distance.\n\n> otherwise its not a mission statement, its akin to a corporate cheer.\n> ( i'm picturing Steve Balmer's superlative exhaltations to the converted\n> http://www.ntk.net/ballmer/mirrors.html )\n\nIn the corporate world a mission statement is often the 'sound bite' and a \n'corporate cheer'.\n\nI personally think\n\"To have fun making and improving the most extensible, robust, ACID-compliant \nFree database system on the planet\" wraps up at least why I think we're all \nhere. s/Free/Open Source/g if you'd rather not invoke a stallmanism. Or \neven s/Free/BSD-licensed/g if you want to really state the obvious. :-)\n\nIf other projects' members are insulted by that, then they're just too \nsensitive.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Thu, 2 May 2002 10:44:45 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL mission statement?" }, { "msg_contents": "Jean-Michel POURE wrote:\n> \n> The PostgreSQL community is committed to creating and maintaining the best,\n> most reliable, open-source multi-purpose standards based database, and with\n> it, promote free(dom) and open source software world wide.\n> \n> I hope you don't mind writing \"free(dom)\" with the idea of fighting patent\n> abuses.\n\nNo, the mission statement is about what the postgresql group, as a whole, is\nall about. \n\nI know it seems silly to have such a thing, but really, the more I read on this\ndiscussion, the more it seems like it is a useful \"call to arms\" for developers\nand users alike.\n\nNow, I do not wish to have a manifesto, but a short and sweet \"this is who we\nare, and this is what we do\" could be a positive thing.\n\nP.S. I think every software engineer worth anything should fight software\npatents. If Donald Knuth didn't patent his algorithms, practically none of us\ndeserve patents. I mean seriously, most of the software patents are trivial and\nobvious. Knuth did something, most of us only build on his work.\n", "msg_date": "Thu, 02 May 2002 11:44:34 -0400", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL mission statement?" 
}, { "msg_contents": "On Thu, 2 May 2002, mlw wrote:\n\n> Jim Mercer wrote:\n> >\n> > On Thu, May 02, 2002 at 08:15:15AM -0400, mlw wrote:\n> > > Jean-Michel POURE wrote:\n> > > > Le Jeudi 2 Mai 2002 01:59, David Terrell a �crit :\n> > > > > \"Provide a really good database and have fun doing it\"\n> > > >\n> > > > PostgreSQL Community is commited to providing Humanity with the best\n> > > > multi-purpose, reliable, open-source and free database system.\n> > >\n> > >\n> > > The PostgreSQL community is committed to creating and maintaining the best,\n> > > most reliable, open-source multi-purpose standards based database, and with\n> > > it, promote free and open source software world wide.\n> > >\n> > > Who's that? Anyone disagree?\n> >\n> > why does it have to be THE BEST ? that is insulting to the other projects\n> > like MySQL which while \"competitors\" are also a valid and useful benchmark\n> > for features, performance and keeping the postgresql community on its\n> > collective toes.\n>\n> This is interesting, a mission statement isn't necessarily about \"what is\" but\n> about what we \"want to do,\" what it is that we \"intend to do,\" i.e. \"our\n> mission.\" It is vital that a mission statement contain the superlatives.\n> Mediocrity has no place here.\n>\n> I don't know about you, but I want PostgreSQL to be the best, be THE most\n> reliable. Omitting \"best\" or \"most\" from the statement means that we should all\n> just give up now, because PostgreSQL is pretty damn good already.\n\naltho in most contexts, I would agree with Jim as to the use of 'The\nBest', for any mission statement to say anything other then that, IMHO,\nshows a lack of commitment ... I agree with mlw on this one, the mission\nstatement is what we are *striving* for ... where we eventually want to\nget to ... if we aren't \"The Best\", then there is someone better then us\nthat we have to work that much harder to become better then ...\n\nI personally like mlw's wording ...\n\n\n", "msg_date": "Thu, 2 May 2002 13:11:41 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL mission statement?" }, { "msg_contents": "On 2 May 2002, Hannu Krosing wrote:\n\n> On Thu, 2002-05-02 at 14:37, Jim Mercer wrote:\n> > On Thu, May 02, 2002 at 08:15:15AM -0400, mlw wrote:\n> > > Who's that? Anyone disagree?\n> >\n> > why does it have to be THE BEST ? that is insulting to the other projects\n> > like MySQL which while \"competitors\" are also a valid and useful benchmark\n> > for features, performance and keeping the postgresql community on its\n> > collective toes.\n> >\n> > postgresql is not THE BEST in all applications, so calling it that is inviting\n> > derision and pointless arguments.\n> >\n>\n> The Politically Correct mission statement follows:\n>\n> The PostgreSQL community is committed to creating and maintaining a good\n> but not the best, mostly reliable, open-source multi-purpose standards\n\nOkay, so there now has to be someone always better then us, since we don't\nwant to be the best? *confused look*\n\n> BTW, I think PostgreSQL does _not_ need any mission statement.\n\nNope, it doesn't ... never did before, don't know why it does suddenly ...\ndo any other open source projects have one? Its kinda fun to see what ppl\nbanter around, but I can't see it being useful to adopt any single one,\nconsidering I can't see *everyone* agreeing with it ...\n\n", "msg_date": "Thu, 2 May 2002 13:14:34 -0300 (ADT)", "msg_from": "\"Marc G. 
Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL mission statement?" }, { "msg_contents": "On Thu, 2 May 2002, Nigel J. Andrews wrote:\n\n>\n>\n> Is this an indication of a need for [email protected]? :)\n\nalready exists as pgsql-advocacy :)\n\n\n", "msg_date": "Thu, 2 May 2002 13:14:58 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL mission statement?" }, { "msg_contents": "\"Marc G. Fournier\" wrote:\n \n> > BTW, I think PostgreSQL does _not_ need any mission statement.\n> \n> Nope, it doesn't ... never did before, don't know why it does suddenly ...\n> do any other open source projects have one? Its kinda fun to see what ppl\n> banter around, but I can't see it being useful to adopt any single one,\n> considering I can't see *everyone* agreeing with it ...\n\nWe as developers do not need mission statements, per se' but it is often useful\nas something to point to.\n\nI am writing a business plan for my company and I was looking for PostgreSQL's\nmission statement. Of course I did not see one. It is very interesting seeing\nwhat people are coming up with.\n\nIMHO, if we can come up with a strong, positive statement, it would help MBA\ntrained CIOs and CTOs choose PostgreSQL. To them, it will show a professional\nminded development group, it will be recognizable to them.\n", "msg_date": "Thu, 02 May 2002 12:25:50 -0400", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL mission statement?" }, { "msg_contents": "\n[ please try not to take this too seriously ]\n\nOn Thu, May 02, 2002 at 01:11:41PM -0300, Marc G. Fournier wrote:\n> altho in most contexts, I would agree with Jim as to the use of 'The\n> Best', for any mission statement to say anything other then that, IMHO,\n> shows a lack of commitment ... I agree with mlw on this one, the mission\n> statement is what we are *striving* for ... where we eventually want to\n> get to ... if we aren't \"The Best\", then there is someone better then us\n> that we have to work that much harder to become better then ...\n\nthat's right, we have to work harder to STOMP OUT ALL COMPETITION!\n\nin fact, we should stop giving free access to the source, as our competitors\nmight use the code to make their product better than ours.\n\nif we aren't THE BEST, then all those stock options are worthless!\n\nwe could increase our chances of being the best by infiltrating the CVS\ntree of MySQL and the other projects stealing our thunder and injecting\nbugs into their code.\n\ni mean, if we want to be THE BEST, why should we stop at mere rhetoric?\n\nto be THE BEST, you need to dominate, good quality design and code are not\nthe complete recipe for being THE BEST.\n\nTHE BEST implies that no-one else compares, and we can completely demoralize\nthe competition, thereby eliminating the competition.\n\nin all seriousness, i think that this attitude of being THE BEST goes against\nthe philosophy of Open Source.\n\nif your source is open and available for modification/improvement/localization,\nthen there will always be a chance for someone to run with it and make\nimprovements.\n\nwhat is THE BEST Unix-based system? Linux? Debian? RedHat? FreeBSD? NetBSD?\nOpenBSD? Solaris? Minix? 
Qnix?\n\nif any of them claimed to be THE BEST Unix, we'd all laugh.\n\nnone of them are THE BEST, but all of them strive to be as good as they\ncan make them, and all of them borrow from each other in order to make\ntheir version better in the eyes of their audience.\n\n-- \n[ Jim Mercer [email protected] +1 416 410-5633 ]\n[ I want to live forever, or die trying. ]\n", "msg_date": "Thu, 2 May 2002 12:26:06 -0400", "msg_from": "Jim Mercer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL mission statement?" }, { "msg_contents": "On Wed, 1 May 2002, David Terrell wrote:\n\n> On Wed, May 01, 2002 at 02:24:30PM -0400, mlw wrote:\n> > Just out of curiosity, does PostgreSQL have a mission statement?\n> > \n> > If so, where could I find it?\n> > \n> > If not, does anyone see a need?\n> \n> \"Provide a really good database and have fun doing it\"\n\nMotto: \"The best damned database money can't buy\"\n\n:-)\n\n", "msg_date": "Thu, 2 May 2002 11:56:05 -0600 (MDT)", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL mission statement?" }, { "msg_contents": "mlw wrote:\n> \"Marc G. Fournier\" wrote:\n>\n> > > BTW, I think PostgreSQL does _not_ need any mission statement.\n> >\n> > Nope, it doesn't ... never did before, don't know why it does suddenly ...\n> > do any other open source projects have one? Its kinda fun to see what ppl\n> > banter around, but I can't see it being useful to adopt any single one,\n> > considering I can't see *everyone* agreeing with it ...\n>\n> We as developers do not need mission statements, per se' but it is often useful\n> as something to point to.\n\n Yupp. As PostgreSQL developers we are simply committed to\n make commit what others need to commit to PostgreSQL.\n\n If we ever define something like a mission statement, it\n should be named \"The 10 commitments\" anyway, no?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Thu, 2 May 2002 15:55:49 -0400 (EDT)", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL mission statement?" }, { "msg_contents": "On 2 May 2002, Hannu Krosing wrote:\n\n> The Politically Correct mission statement follows:\n> \n> The PostgreSQL community is committed to creating and maintaining a good \n> but not the best, mostly reliable, open-source multi-purpose standards \n> based database, and with it, promote free and open source software and \n> other worthy causes world wide and to not hurting anyones feelings in doing so. \n> We are also committed to not cheating our SOs, not charging too much for\n> our services nor eating too much and to recommending products of our\n> commercial competitors before ours in order to help them fullfil their\n> obligations to their stockholders. \n\nAs a practicing polyamorist, I find the part about not cheating on our SOs \nhighly offensive. :-)\n\n", "msg_date": "Thu, 2 May 2002 14:44:45 -0600 (MDT)", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL mission statement?" 
}, { "msg_contents": "\n> We as developers do not need mission statements, per se' but it is\noften useful\n> as something to point to.\n\nIt's comforting and useful to point to; in addition, developers\nwork on something because of personal \"itches\" (to coin a phrase)\n that happen to broadly overlap with the group\n\n> IMHO, if we can come up with a strong, positive statement, it would\nhelp MBA\n> trained CIOs and CTOs choose PostgreSQL. To them, it will show a\nprofessional\n> minded development group, it will be recognizable to them.\n\nI think this is an excellent point, especially since I'd say that one of\nthe\nimplicit goals of the PG project is for the database to be *used* ;)\n- and the corporate world in some form or another represents probably\nthe\nlargest user base.\n\nThis reasoning is a bit dicey b/c playing PR games really isn't fun\nafter\nthe initial rush, and I don't think anyone really wants catering to the\ncorporate world to be first and foremost in their minds.\n\nTo this end, if a mission statement is adopted, it should probably be\na very dynamic document that remains capable of both engaging CIOs/CTOs\nand intriguing developers as the vision and the landscape change.\nA mission statement that is agonized over (and takes time away from\ndevelopment),\nfinally adopted, and then allowed to become obsolete does PG no good.\n\nShould you guys hold a vote to see who wants a mission statement (and\nwho\nwants to write one or compile all the suggestions here into a nice form)\nand then work from there? I'm not exactly familiar with the procedures\nhere.\n\nthanks for listening to my rambling,\nMichael Locasto\n\n", "msg_date": "Thu, 2 May 2002 17:11:30 -0400", "msg_from": "\"Michael E. Locasto\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL mission statement?" }, { "msg_contents": "Scott Marlowe <[email protected]> writes:\n\n> On Wed, 1 May 2002, David Terrell wrote:\n> \n> > On Wed, May 01, 2002 at 02:24:30PM -0400, mlw wrote:\n> > > Just out of curiosity, does PostgreSQL have a mission statement?\n> > > \n> > > If so, where could I find it?\n> > > \n> > > If not, does anyone see a need?\n> > \n> > \"Provide a really good database and have fun doing it\"\n> \n> Motto: \"The best damned database money can't buy\"\n\nI don't think that any of the PostgreSQL developers would want, in any\nway shape or form, to suggest that you can't pay money for PostgreSQL.\nNor are they likely to limit themselves to competing with free\n(libre/gratis) databases.\n\nSome of them might even object to the use of the word \"damn\" :).\n\nJason\n", "msg_date": "02 May 2002 16:01:56 -0600", "msg_from": "Jason Earl <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL mission statement?" }, { "msg_contents": "On Fri, 2002-05-03 at 04:25, mlw wrote:\n\n> \n> IMHO, if we can come up with a strong, positive statement, it would help MBA\n> trained CIOs and CTOs choose PostgreSQL. To them, it will show a professional\n> minded development group, it will be recognizable to them.\n\nI am not so sure about that -\n\nIn my experience the things that those guys use to decide on products\nare :\n\n1) reference sites\n2) official support\n\n(and they like to pay for a product 'cause they are used to doing\nit...but lets not go there...)\n\nPersonally I find the lack of \"business-speak\" things (like mission\nstatements) refeshing, and I see it as part of the \"values\" that\ndifferentiate open source / community based products from commercial\nones. 
\n\njust my NZ4c (=US2c)\n\nMark\n\n\n", "msg_date": "03 May 2002 10:09:49 +1200", "msg_from": "Mark kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL mission statement?" }, { "msg_contents": "On 2 May 2002, Jason Earl wrote:\n\n> Scott Marlowe <[email protected]> writes:\n> \n> > On Wed, 1 May 2002, David Terrell wrote:\n> > \n> > > On Wed, May 01, 2002 at 02:24:30PM -0400, mlw wrote:\n> > > > Just out of curiosity, does PostgreSQL have a mission statement?\n> > > > \n> > > > If so, where could I find it?\n> > > > \n> > > > If not, does anyone see a need?\n> > > \n> > > \"Provide a really good database and have fun doing it\"\n> > \n> > Motto: \"The best damned database money can't buy\"\n> \n> I don't think that any of the PostgreSQL developers would want, in any\n> way shape or form, to suggest that you can't pay money for PostgreSQL.\n> Nor are they likely to limit themselves to competing with free\n> (libre/gratis) databases.\n\nTrue, but my point wasn't that you could pay for it, but that it couldn't \nbe \"bought\" like so many other things (think politicians, OEMs, judges, \netc...) But I was pretty much just foolin' around. :-)\n\nSo how about:\n\n\"Postgresql: Open Source, Open Standards, Open Development, Open Minds\"\n\n", "msg_date": "Thu, 2 May 2002 16:14:45 -0600 (MDT)", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL mission statement?" }, { "msg_contents": "...\n> Now, I do not wish to have a manifesto, but a short and sweet \"this is who we\n> are, and this is what we do\" could be a positive thing.\n\n\"PostgreSQL is the most advanced open-source database available\nanywhere\"\n\nhas appeared in the docs for quite some time, and has appeared in other\nmention of PostgreSQL (release announcements etc).\n\nA short and sweet mission statement could describe what we do to make\nthis true, and what we must continue to do to stay on top. Or we could\nleave it general; for example, \"most advanced\" could mean wrt standards\ncompliance, or wrt leading edge features, or wrt performance, or ?? Let\nus decide what those specifics are as we go along...\n\n\"PostgreSQL is and will be the most advanced open-source database\navailable anywhere.\"\n\nNo different in substance from what we've said all along. Leave out the\nspecifics, because the next generations of developers will have specific\ndirections to go :)\n\n - Thomas\n", "msg_date": "Thu, 02 May 2002 17:22:25 -0700", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL mission statement?" }, { "msg_contents": "Mark kirkwood wrote:\n> \n> On Fri, 2002-05-03 at 04:25, mlw wrote:\n> \n> >\n> > IMHO, if we can come up with a strong, positive statement, it would help MBA\n> > trained CIOs and CTOs choose PostgreSQL. 
To them, it will show a professional\n> > minded development group, it will be recognizable to them.\n> \n> I am not so sure about that -\n> \n> In my experience the things that those guys use to decide on products\n> are :\n> \n> 1) reference sites\n> 2) official support\n> \n> (and they like to pay for a product 'cause they are used to doing\n> it...but lets not go there...)\n> \n> Personally I find the lack of \"business-speak\" things (like mission\n> statements) refeshing, and I see it as part of the \"values\" that\n> differentiate open source / community based products from commercial\n> ones.\n\nA mission statement is like a tie.\n", "msg_date": "Thu, 02 May 2002 20:41:30 -0400", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL mission statement?" }, { "msg_contents": "On Thu, May 02, 2002 at 08:41:30PM -0400, mlw wrote:\n> A mission statement is like a tie.\n\nstraw vote!\n\nwho on the list wears ties?\n\n-- \n[ Jim Mercer [email protected] +1 416 410-5633 ]\n[ I want to live forever, or die trying. ]\n", "msg_date": "Thu, 2 May 2002 20:47:31 -0400", "msg_from": "Jim Mercer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL mission statement?" }, { "msg_contents": "Jim Mercer wrote:\n> \n> On Thu, May 02, 2002 at 08:41:30PM -0400, mlw wrote:\n> > A mission statement is like a tie.\n> \n> straw vote!\n> \n> who on the list wears ties?\n\nHow many people who make IT decisions wear ties?\n", "msg_date": "Thu, 02 May 2002 21:14:03 -0400", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL mission statement?" }, { "msg_contents": "On Thu, May 02, 2002 at 09:14:03PM -0400, mlw wrote:\n> Jim Mercer wrote:\n> > \n> > On Thu, May 02, 2002 at 08:41:30PM -0400, mlw wrote:\n> > > A mission statement is like a tie.\n> > \n> > straw vote!\n> > \n> > who on the list wears ties?\n> \n> How many people who make IT decisions wear ties?\n\ntoo many.\n\n-- \n[ Jim Mercer [email protected] +1 416 410-5633 ]\n[ I want to live forever, or die trying. ]\n", "msg_date": "Thu, 2 May 2002 21:37:18 -0400", "msg_from": "Jim Mercer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL mission statement?" }, { "msg_contents": "Jim Mercer wrote:\n> \n> On Thu, May 02, 2002 at 09:14:03PM -0400, mlw wrote:\n> > Jim Mercer wrote:\n> > >\n> > > On Thu, May 02, 2002 at 08:41:30PM -0400, mlw wrote:\n> > > > A mission statement is like a tie.\n> > >\n> > > straw vote!\n> > >\n> > > who on the list wears ties?\n> >\n> > How many people who make IT decisions wear ties?\n> \n> too many.\n\nI'm sorry I started this thread.\n", "msg_date": "Thu, 02 May 2002 21:45:45 -0400", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL mission statement?" 
}, { "msg_contents": "On Thu, May 02, 2002 at 09:45:45PM -0400, mlw wrote:\n> Jim Mercer wrote:\n> > On Thu, May 02, 2002 at 09:14:03PM -0400, mlw wrote:\n> > > Jim Mercer wrote:\n> > > > On Thu, May 02, 2002 at 08:41:30PM -0400, mlw wrote:\n> > > > > A mission statement is like a tie.\n> > > > who on the list wears ties?\n> > > How many people who make IT decisions wear ties?\n> > too many.\n> I'm sorry I started this thread.\n\ndon't be sorry.\n\ni'm not big on wearing the corporate suit, be it physically, or figuratively.\n\nthat's my opinion, and i'm stating it.\n\nyour opinion differs, and that's fine.\n\ni've had to do the corporate \"mission statement\" dance, as well as a bunch\nof other hokey crap that didn't matter squat to the bottom line due to the\nfact that the execs read some magazine article or attended some Tony Robbins\n-esque motivational session.\n\nwhen i hear \"mission statement\" and \"quality circle\" and \"internal customer\",\ni cringe.\n\nif the corporate management doesn't want to buy into the Open Source concept,\nfuck 'em.\n\ni've had a number of installations where due to management panic to get\nsomething working, it was implemented using Open Source. Only to have\na perfectly good system replaced with \"real software\" when management\nfinds out 6 months later that it is using Open Source.\n\ni have had successes in getting Open Source into corporate environments,\nbut only after battling mega-politics with CIO/CFO and MSCE IT managers\nwho only want to see Microsoft or Sun Solaris solutions.\n\nwe did a project using FreeBSD and Samba to replace a number of highly\nunstable NT file/print servers.\n\nrecently, some consultants (friends of the managing partners) said that\nit was a bad idea to use \"Public Domain software that was full of bugs\nand highly insecure\".\n\nwhen we pointed out that the servers hadn't rebooted in 160 days, and\nthat they were protected by both RFC1918 addressing and a firewall, the\nconsultants backed off a bit.\n\nthen they returned spouting the same \"full of bugs and highly insecure\" crap.\n\nnow management is going to have them re-implement the network using the\nlatest NT stuff.\n\nthis is a long winded way of saying that my feeling is the type of MBA\nCFO/CIO that is impressed by a mission statement, is probably not going\nto buy into technology that isn't listed on NASDAQ.\n\nso, what's the harm in having one?\n\nprobably not much, but to me it smells of corporate bullshit.\n\n-- \n[ Jim Mercer [email protected] +1 416 410-5633 ]\n[ I want to live forever, or die trying. ]\n", "msg_date": "Thu, 2 May 2002 22:29:17 -0400", "msg_from": "Jim Mercer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL mission statement?" }, { "msg_contents": "Jim Mercer wrote:\n> \n> On Thu, May 02, 2002 at 09:45:45PM -0400, mlw wrote:\n> > Jim Mercer wrote:\n> > > On Thu, May 02, 2002 at 09:14:03PM -0400, mlw wrote:\n> > > > Jim Mercer wrote:\n> > > > > On Thu, May 02, 2002 at 08:41:30PM -0400, mlw wrote:\n> > > > > > A mission statement is like a tie.\n> > > > > who on the list wears ties?\n> > > > How many people who make IT decisions wear ties?\n> > > too many.\n> > I'm sorry I started this thread.\n> \n> don't be sorry.\n> \n> i'm not big on wearing the corporate suit, be it physically, or figuratively.\n\nTrust me, nor am I. 
I haven't warn a suit professionally since the '80s.\n\n> \n> that's my opinion, and i'm stating it.\n> \n> your opinion differs, and that's fine.\n\nDon't be so sure.\n\n> \n> i've had to do the corporate \"mission statement\" dance, as well as a bunch\n> of other hokey crap that didn't matter squat to the bottom line due to the\n> fact that the execs read some magazine article or attended some Tony Robbins\n> -esque motivational session.\n\nYes I know. Been there done that.\n\n> \n> when i hear \"mission statement\" and \"quality circle\" and \"internal customer\",\n> i cringe.\n\nDitto.\n\n> \n> if the corporate management doesn't want to buy into the Open Source concept,\n> fuck 'em.\n\nHe is where we differ. There is merit in displaying an amount of understanding\nof the corporate personality. I am not a corporate type, but I understand that\nthere are people that are, and to promote PostgreSQL, we need to reach them.\n\nIt is not selling out to use chopsticks at a chinese dinner, it is following\ncustom. A mission statement is similar. These people are brainwashed to look at\nthe mission statement. Having one for them to look at is not a bad idea.\n> \n> i've had a number of installations where due to management panic to get\n> something working, it was implemented using Open Source. Only to have\n> a perfectly good system replaced with \"real software\" when management\n> finds out 6 months later that it is using Open Source.\n\nPresenting a corporate aware culture will help them.\n \n> i have had successes in getting Open Source into corporate environments,\n> but only after battling mega-politics with CIO/CFO and MSCE IT managers\n> who only want to see Microsoft or Sun Solaris solutions.\n\nBeen there done that.\n \n> we did a project using FreeBSD and Samba to replace a number of highly\n> unstable NT file/print servers.\n\nAgain, been there, done that.\n \n> recently, some consultants (friends of the managing partners) said that\n> it was a bad idea to use \"Public Domain software that was full of bugs\n> and highly insecure\".\n\nPure FUD, of course.\n \n> when we pointed out that the servers hadn't rebooted in 160 days, and\n> that they were protected by both RFC1918 addressing and a firewall, the\n> consultants backed off a bit.\n\nMost consultants (not me :-) are idiots.\n\n> then they returned spouting the same \"full of bugs and highly insecure\" crap.\n\nThey are uninformed, or worse, believe what Microsoft says.\n \n> now management is going to have them re-implement the network using the\n> latest NT stuff.\n\nFight it! Is there any evidence that will help you?\n \n> this is a long winded way of saying that my feeling is the type of MBA\n> CFO/CIO that is impressed by a mission statement, is probably not going\n> to buy into technology that isn't listed on NASDAQ.\n\nThat is where the fight lies!! We have to make believers out of them! We have\nsuperior technology, we have superior quality. \n\nMicrosoft and Oracle have billions of dollars in marketing, all we have is\nourselves. \n> \n> so, what's the harm in having one?\n> \n> probably not much, but to me it smells of corporate bullshit.\n\nCorporate bullshit or not, it is a fact of life and a custom that we open\nsource people need to accept. We write the best shit, we do the best work. We\nare \"more professional\" and dedicated than most professionals. Our quality is\nusually much better than proprietary our counterparts. Unfortunately business\ntypes do not understand us. 
If we are unable to reach the people who would\ndecide to use our stuff, then it is our fault for failure.\n", "msg_date": "Thu, 02 May 2002 22:46:21 -0400", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL mission statement?" }, { "msg_contents": "Le Vendredi 3 Mai 2002 04:46, mlw a écrit :\n> Corporate bullshit or not, it is a fact of life and a custom that we open\n> source people need to accept. We write the best shit, we do the best work.\n> We are \"more professional\" and dedicated than most professionals. Our\n> quality is usually much better than proprietary our counterparts.\n> Unfortunately business types do not understand us. If we are unable to\n> reach the people who would decide to use our stuff, then it is our fault\n> for failure.\n\nTherefore we need a moto.\n", "msg_date": "Fri, 3 May 2002 14:34:08 +0200", "msg_from": "Jean-Michel POURE <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL mission statement?" }, { "msg_contents": "Le Vendredi 3 Mai 2002 02:22, Thomas Lockhart a écrit :\n> \"PostgreSQL is and will be the most advanced open-source database\n> available anywhere.\"\n\n*******************************************************************************************\nThe PostgreSQL community is committed to creating and maintaining the best, \nmost reliable, open-source multi-purpose standards based database, and with \nit, promote free and open source software world wide.\n\nUltimately, PostgreSQL database is a gift to Humanity serving freedom, \nknowledge and equal access to information, and as such belongs to every \nHuman.\n*******************************************************************************************\n\nPostgreSQL is \"transcendantal\", which means it goes \"beyond\" the original \nconcept of its creators. People began working on it for various reasons (for \nprofessional needs, because of open-source code, to have fun) ... and \nultimately it becomes a gift to Humanity.\n\nMy feeling is that the PostgreSQL community is making history without even \nnoticing it. You are all heroes my friends...\n\nCheers,\nJean-Michel POURE\n\n", "msg_date": "Fri, 3 May 2002 15:12:46 +0200", "msg_from": "Jean-Michel POURE <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL mission statement?" }, { "msg_contents": "On Thu, 2 May 2002, Scott Marlowe wrote:\n\n> On 2 May 2002, Hannu Krosing wrote:\n>\n> > The Politically Correct mission statement follows:\n> >\n> > The PostgreSQL community is committed to creating and maintaining a good\n> > but not the best, mostly reliable, open-source multi-purpose standards\n> > based database, and with it, promote free and open source software and\n> > other worthy causes world wide and to not hurting anyones feelings in doing so.\n> > We are also committed to not cheating our SOs, not charging too much for\n> > our services nor eating too much and to recommending products of our\n> > commercial competitors before ours in order to help them fullfil their\n> > obligations to their stockholders.\n>\n> As a practicing polyamorist, I find the part about not cheating on our SOs\n> highly offensive. :-)\n\n\"not cheating our SOs\", !\"not cheating on our SOs\" ... you add an 'on'\nthere that wasn't actually there :)\n\n\n", "msg_date": "Fri, 3 May 2002 10:24:01 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL mission statement?" 
}, { "msg_contents": "Mark kirkwood <[email protected]> writes:\n\n> On Fri, 2002-05-03 at 04:25, mlw wrote:\n> \n> > \n> > IMHO, if we can come up with a strong, positive statement, it\n> > would help MBA trained CIOs and CTOs choose PostgreSQL. To them,\n> > it will show a professional minded development group, it will be\n> > recognizable to them.\n> \n> I am not so sure about that -\n> \n> In my experience the things that those guys use to decide on\n> products are :\n> \n> 1) reference sites\n> 2) official support\n> \n> (and they like to pay for a product 'cause they are used to doing\n> it...but lets not go there...)\n> \n> Personally I find the lack of \"business-speak\" things (like mission\n> statements) refeshing, and I see it as part of the \"values\" that\n> differentiate open source / community based products from commercial\n> ones.\n> \n> just my NZ4c (=US2c)\n> \n> Mark\n\nAs a developer and systems administrator my favorite thing about\nPostgreSQL is the fact that I can get the straight dope on what works\nand what doesn't. The PostgreSQL developers are quite candid, and are\nmore than willing to tell you which bits of PostgreSQL are dicey.\nThat's a huge bonus to someone creating and maintaining an\napplication.\n\nHowever, it's not exactly the sort of salesmanship that my boss is\nlooking for.\n\nJason\n", "msg_date": "03 May 2002 11:01:30 -0600", "msg_from": "Jason Earl <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL mission statement?" }, { "msg_contents": "On Thu, 2 May 2002, Jim Mercer wrote:\n\n> On Thu, May 02, 2002 at 08:41:30PM -0400, mlw wrote:\n> > A mission statement is like a tie.\n> \n> straw vote!\n> \n> who on the list wears ties?\n\nDoes a skinny black tie count if I'm only wearing it to go out to a jazz \nclub? :-) Not at work though. I think ties are designed to slow the flow \nof blood to the head so you'll think slow enough for marketeers to \nunderstand you.\n\n", "msg_date": "Fri, 3 May 2002 11:03:07 -0600 (MDT)", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL mission statement?" }, { "msg_contents": "Jim Mercer wrote: \n> On Thu, May 02, 2002 at 09:45:45PM -0400, mlw wrote:\n> > Jim Mercer wrote:\n> > > On Thu, May 02, 2002 at 09:14:03PM -0400, mlw wrote:\n> > > > Jim Mercer wrote:\n> > > > > On Thu, May 02, 2002 at 08:41:30PM -0400, mlw wrote:\n> > > > > > A mission statement is like a tie.\n> > > > > who on the list wears ties?\n> > > > How many people who make IT decisions wear ties?\n> > > too many.\n> > I'm sorry I started this thread.\n> when i hear \"mission statement\" and \"quality circle\" and \"internal customer\",\n> i cringe.\n> if the corporate management doesn't want to buy into the Open Source concept,\n> fuck 'em.\n\n<trench warfare snippage>\n\nLet's see... open source philosophy applied *into* corporate-speak should\nbe doable....\n1. If you have an itch, scratch it.\n2. If you want to know what's going on, use the source, luke!\n3. More eyeballs = less bugs.\n4. Software should be free (insert debates on speech, beer, use, licence\nXYZ vs. ABC, etc., I'm not going to bother).\n\nHm...firing up my geekspeak->corporate BS translator. :-)\n\nHow about:\n\"PostgreSQL creates a dynamic environment to ensure that all customers\ncan effectly create highly customized solutions specific to their needs. We\nshare and collaborate on both problems and solutions by making all\ninformation about our products available. 
By using this open and exciting\nenvironment, we increase the amount of successful software releases\nusing advanced concepts of peer review and peer enhancement. We ensure our\nongoing enhancement and improvement through our community, because\nour customers are also our creators.\"\n\nNow, if you'll excuse me, I have to go wash my mouth out with soap.\n\n-Ronabop\n", "msg_date": "Fri, 03 May 2002 13:46:18 -0700", "msg_from": "Ron Chmara <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL mission statement?" }, { "msg_contents": "I mentioned in another thread, Windows does not support \"fork().\" PostgreSQL\nseems irrevocably tied to using fork(). Without a drastic rewrite of how\npostmaster works, I don't see a way to make a pure Windows version.\n\nThe big trick to cygwin is its implementation of fork(). It represents a very\nimportant and fairly mature technique. It can be written for PostgreSQL but it\nwould require a fair amount of development time and testing.\n\nThen we would need to be able to trace all the native API calls made so that\nthings like file handles are dealt with correctly for the child process.\n\nI see cygwin as a portability layer or subsystem, as such, it should be able to\nemulate foreign operating system constructs. A native application should, on\nthe other hand, not attempt to do so.\n\nThere is a strategy PostgreSQL could use:\n\nPut all global variables which need to be duplicated in a single place, perhaps\na struct, which can be copied into the child process. On systems without\nfork(), the memory can be duplicated or passed around using a shared memory\nblock, on a system with fork(), nothing extra would need to be done. This could\nbe implemented using \"standard\" APIs, with little or no specialized OS\nknowledge. \n\nThis represents a lot of reworking of code, but should not affect much in the\nway of operation, but would require a lot of typing and testing. It would also\nforce restrictions on module static and global variables.\n\nI will sign up to do the Windows stuff to get this to work, but it will take a\nlot of postgres internal reworking that I am not up for doing.\n\nThe other alternative, is to profile PostgreSQL running in the cygwin\nenvironment and try to assess where any bottlenecks are, and if there are any\nspot optimizations which can be applied.\n", "msg_date": "Tue, 07 May 2002 13:16:01 -0400", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "How much work is a native Windows application?" }, { "msg_contents": "mlw <[email protected]> writes:\n> There is a strategy PostgreSQL could use:\n\n> Put all global variables which need to be duplicated in a single\n> place, perhaps a struct, which can be copied into the child\n> process. On systems without fork(), the memory can be duplicated or\n> passed around using a shared memory block, on a system with fork(),\n> nothing extra would need to be done. This could be implemented using\n> \"standard\" APIs, with little or no specialized OS knowledge.\n\n> This represents a lot of reworking of code, but should not affect much\n> in the way of operation, but would require a lot of typing and\n> testing. It would also force restrictions on module static and global\n> variables.\n\nYeah. 
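To make the quoted strategy concrete, a minimal, purely illustrative C sketch
follows. All of the names in it (BackendParameters, start_backend,
backend_main, the --forkbackend switch, the parameter file) are invented for
the example and are not taken from the PostgreSQL sources. On a platform with
fork() the child simply inherits the copied state; without fork() the struct
is written out and a fresh copy of the executable is started with
CreateProcess() and told where to re-read it. A real port would also have to
duplicate resources such as the client socket and use a per-child parameter
location rather than a fixed file name.

/*
 * Purely illustrative sketch of the "single parameter struct" idea quoted
 * above; none of these names come from the PostgreSQL sources.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct BackendParameters    /* every global the child must inherit */
{
    int         shmem_key;          /* how to locate the shared memory block */
    int         client_sock;        /* client connection (socket/handle) */
    char        data_dir[256];      /* value of -D */
    int         debug_level;
    /* ... any other formerly global/static variables ... */
} BackendParameters;

/* Stub standing in for the real per-connection backend loop. */
static void
backend_main(const BackendParameters *params)
{
    printf("backend starting in \"%s\", debug level %d\n",
           params->data_dir, params->debug_level);
}

#ifndef WIN32
#include <sys/types.h>
#include <unistd.h>

/* With fork(), nothing special is needed: the child inherits everything. */
static int
start_backend(const BackendParameters *params)
{
    pid_t       pid = fork();

    if (pid == 0)
    {
        backend_main(params);       /* the child already has its own copy */
        exit(0);
    }
    return (int) pid;
}

#else   /* no fork(): hand the state to the child explicitly */
#include <windows.h>

/*
 * Write the parameter block somewhere the child can read it (a file here,
 * for simplicity; a named shared memory segment would do as well), then
 * start a fresh copy of this executable.  The child, seeing --forkbackend,
 * would re-read the block and enter backend_main() with it.  A real port
 * would need a unique location per child and would also have to duplicate
 * handles such as the client socket for the new process.
 */
static int
start_backend(const BackendParameters *params)
{
    STARTUPINFOA        si;
    PROCESS_INFORMATION pi;
    char                cmdline[512];
    FILE               *fp = fopen("backend.params", "wb");

    if (fp == NULL)
        return -1;
    fwrite(params, sizeof(*params), 1, fp);
    fclose(fp);

    memset(&si, 0, sizeof(si));
    si.cb = sizeof(si);
    strcpy(cmdline, "postgres.exe --forkbackend backend.params");

    if (!CreateProcessA(NULL, cmdline, NULL, NULL,
                        TRUE,       /* child inherits inheritable handles */
                        0, NULL, NULL, &si, &pi))
        return -1;
    return (int) pi.dwProcessId;
}
#endif

int
main(void)
{
    BackendParameters   params = { 42, 7, "/usr/local/pgsql/data", 1 };

    start_backend(&params);
    return 0;
}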
The real problem with it in my eyes is that it'd be a continuing\nmaintenance headache: straightforward programming techniques that work\nfine on all the Unix ports would fail (perhaps in nonobvious ways) when\nmoved to Windows, should you forget to put a variable in the right\nplace.\n\nA lesser objection is that variables that can currently be \"static\" in\na single module would become exposed to the world, which again is a\nmaintenance problem.\n\n> The other alternative, is to profile PostgreSQL running in the cygwin\n> environment and try to assess where any bottlenecks are, and if there\n> are any spot optimizations which can be applied.\n\nIt'd be worth trying to understand cygwin issues in detail before we\nsign up to do and support a native Windows port. I understand the\nuser-friendliness objection to cygwin (though one would think proper\npackaging might largely hide cygwin from naive Windows users). What\nI don't understand is whether there are any serious performance lossages\nfrom it, and if so whether we could work around them.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 07 May 2002 13:49:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much work is a native Windows application? " }, { "msg_contents": "\nhttp://www.mkssoftware.com/docs/man3/fork.3.asp\n\nhttp://www.computing.net/programming/wwwboard/forum/60.html\n\nhttp://www.research.att.com/sw/tools/uwin (Semaphores & Fork)\n\nOn Tue, 7 May 2002, mlw wrote:\n\n> I mentioned in another thread, Windows does not support \"fork().\" PostgreSQL\n> seems irrevocably tied to using fork(). Without a drastic rewrite of how\n> postmaster works, I don't see a way to make a pure Windows version.\n>\n> The big trick to cygwin is its implementation of fork(). It represents a very\n> important and fairly mature technique. It can be written for PostgreSQL but it\n> would require a fair amount of development time and testing.\n>\n> Then we would need to be able to trace all the native API calls made so that\n> things like file handles are dealt with correctly for the child process.\n>\n> I see cygwin as a portability layer or subsystem, as such, it should be able to\n> emulate foreign operating system constructs. A native application should, on\n> the other hand, not attempt to do so.\n>\n> There is a strategy PostgreSQL could use:\n>\n> Put all global variables which need to be duplicated in a single place, perhaps\n> a struct, which can be copied into the child process. On systems without\n> fork(), the memory can be duplicated or passed around using a shared memory\n> block, on a system with fork(), nothing extra would need to be done. This could\n> be implemented using \"standard\" APIs, with little or no specialized OS\n> knowledge.\n>\n> This represents a lot of reworking of code, but should not affect much in the\n> way of operation, but would require a lot of typing and testing. It would also\n> force restrictions on module static and global variables.\n>\n> I will sign up to do the Windows stuff to get this to work, but it will take a\n> lot of postgres internal reworking that I am not up for doing.\n>\n> The other alternative, is to profile PostgreSQL running in the cygwin\n> environment and try to assess where any bottlenecks are, and if there are any\n> spot optimizations which can be applied.\n>\n\n\n", "msg_date": "Tue, 7 May 2002 14:52:26 -0300 (ADT)", "msg_from": "\"Marc G. 
Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much work is a native Windows application?" }, { "msg_contents": "\"Marc G. Fournier\" wrote:\n> \n> http://www.mkssoftware.com/docs/man3/fork.3.asp\n> \n> http://www.computing.net/programming/wwwboard/forum/60.html\n> \n> http://www.research.att.com/sw/tools/uwin (Semaphores & Fork)\n\nThese are pretty much what I have been saying. \n\nIs PostgreSQL going to implement its own fork()? If so, what's the point? Just\nuse cygwin.\n\nWithout trying to sound conceited, I can write a fork() call, that's not the\nproblem. How much time will it take to do and get right? What about all the\ninfrastructure? Tracking file handles and resources allocated so that they can\nbe properly duplicated for the child process, etc. It is a lot of work, and to\ndo it for a BSD license, I shouldn't reference the cygwin code to do so.\n\nThe semaphore, shared memory, file API, etc. all these are straight forward.\nThey can be handled with a set of macros and some thin functions.\n\nThe problems of a native PostgreSQL on Windows is fork(), and all the\nsubtleties that go with it like ownership of system resources allocated by the\nparent and passed to the child and initialization of global and static\nvariables.\n\nAdding fork() to postgres seems silly. Cygwin does it already, and it seems\nlike it is outside the scope of what should be supported by PostgreSQL.\n\nSince RedHat owns cygwin and they want RedHat database to be a success, maybe\nthey can make an exception to the GNU license for PostgreSQL. \n\nDoes anyone think it is a good idea for PostgreSQL to implement it's own\nversion of fork()?\n", "msg_date": "Tue, 07 May 2002 14:32:04 -0400", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much work is a native Windows application?" }, { "msg_contents": "On Tue, 7 May 2002, Tom Lane wrote:\n\n> It'd be worth trying to understand cygwin issues in detail before we\n> sign up to do and support a native Windows port. I understand the\n> user-friendliness objection to cygwin (though one would think proper\n> packaging might largely hide cygwin from naive Windows users). What I\n> don't understand is whether there are any serious performance lossages\n> from it, and if so whether we could work around them.\n\nActually, there are licensing issues involved ... we could never put a\n'windows binary' up for anon-ftp, since to distribute it would require the\ncygwin.dll to be distributed, and to do that, there is a licensing cost\n... of course, I guess we could require ppl to download cygwin seperately,\ninstall that, then install the binary over top of that ...\n\n", "msg_date": "Wed, 8 May 2002 01:03:37 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much work is a native Windows application? " }, { "msg_contents": "\"Marc G. Fournier\" <[email protected]> writes:\n> On Tue, 7 May 2002, Tom Lane wrote:\n>> It'd be worth trying to understand cygwin issues in detail before we\n>> sign up to do and support a native Windows port.\n\n> Actually, there are licensing issues involved ... we could never put a\n> 'windows binary' up for anon-ftp, since to distribute it would require the\n> cygwin.dll to be distributed, and to do that, there is a licensing cost\n> ... 
of course, I guess we could require ppl to download cygwin seperately,\n> install that, then install the binary over top of that ...\n\n<<itch>> And how much development time are we supposed to expend to\navoid that?\n\nGive me a technical case for avoiding Cygwin, and maybe I can get\nexcited about it. I'm not planning to lift a finger on the basis\nof licensing though... after all, Windows users are accustomed to\npaying for software, no?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 08 May 2002 00:49:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much work is a native Windows application? " }, { "msg_contents": "...\n> Give me a technical case for avoiding Cygwin, and maybe I can get\n> excited about it. I'm not planning to lift a finger on the basis\n> of licensing though... after all, Windows users are accustomed to\n> paying for software, no?\n\n<evil grin>\n\nYou tell us: RH sells our database and sells cygwin, so might have an\nopinion on the subject. Perhaps they would like to contribute back some\nno cost package licensing terms for cygwin if used for PostgreSQL?? ;)\n\n - Thomas\n", "msg_date": "Tue, 07 May 2002 22:08:59 -0700", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much work is a native Windows application?" }, { "msg_contents": "Le Mardi 7 Mai 2002 20:32, mlw a écrit :\n> Since RedHat owns cygwin and they want RedHat database to be a success,\n> maybe they can make an exception to the GNU license for PostgreSQL.\n\nCygwin received contributions from several authors. To leave a GNU licence, \nyou need to get the agreement of all authors, which is not possible for a big \nproject like Cygwin.\n\nAlternatively, you could pick-up the old static version of Cygwin and release \nit as a minimal Cygwin dll. This idea is probably ***stupid***.\n\nAnother possible solution is http://debian-cygwin.sourceforge.net/ project, \nwhich tries to port dpkg to Windows. If it was possible to release Cygwin \nusing dpkg, we could create a comprehensive Cygwin + PostgreSQL on-line \ninstaller.\n\nWhy look for complicated solutions when the only real issue for users is the \nCygwin installer?\n\nCheers,\nJean-Michel POURE\n", "msg_date": "Wed, 8 May 2002 11:02:58 +0200", "msg_from": "Jean-Michel POURE <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much work is a native Windows application?" }, { "msg_contents": "Le Mercredi 8 Mai 2002 06:03, Marc G. Fournier a écrit :\n> Actually, there are licensing issues involved ... we could never put a\n> 'windows binary' up for anon-ftp, since to distribute it would require the\n> cygwin.dll to be distributed, and to do that, there is a licensing cost\n> ... of course, I guess we could require ppl to download cygwin seperately,\n> install that, then install the binary over top of that ...\n\nThis is an installer problem. Let's port dpkg to Windows, create a minimal \nCygwin dpkg distribution .. et voilà.\n", "msg_date": "Wed, 8 May 2002 11:05:40 +0200", "msg_from": "Jean-Michel POURE <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much work is a native Windows application?" }, { "msg_contents": "On Wed, 2002-05-08 at 14:02, Jean-Michel POURE wrote:\n> Le Mardi 7 Mai 2002 20:32, mlw a écrit :\n> > Since RedHat owns cygwin and they want RedHat database to be a success,\n> > maybe they can make an exception to the GNU license for PostgreSQL.\n> \n> Cygwin received contributions from several authors. 
To leave a GNU licence, \n> you need to get the agreement of all authors, which is not possible for a big \n> project like Cygwin.\n> \n> Alternatively, you could pick-up the old static version of Cygwin and release \n> it as a minimal Cygwin dll. This idea is probably ***stupid***.\n> \n> Another possible solution is http://debian-cygwin.sourceforge.net/ project, \n> which tries to port dpkg to Windows. If it was possible to release Cygwin \n> using dpkg, we could create a comprehensive Cygwin + PostgreSQL on-line \n> installer.\n> \n> Why look for complicated solutions when the only real issue for users is the \n> Cygwin installer?\n\nIIRC the initial issue was bad performance, which was attributed to\nwin32/cygwin fork() behaviour.\n\nThat was long before this thread started.\n\nBTW, does anyone know how other real databases (Oracle, DB2,\nInterbase/Firebird, Infomix) do it on Windows.\n\n-------------\nHannu\n\n\n", "msg_date": "08 May 2002 15:12:28 +0500", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much work is a native Windows application?" }, { "msg_contents": "On Wed, 8 May 2002 01:03:37 -0300 (ADT)\n\"Marc G. Fournier\" <[email protected]> wrote:\n> On Tue, 7 May 2002, Tom Lane wrote:\n> \n> > It'd be worth trying to understand cygwin issues in detail before we\n> > sign up to do and support a native Windows port. I understand the\n> > user-friendliness objection to cygwin (though one would think proper\n> > packaging might largely hide cygwin from naive Windows users). What I\n> > don't understand is whether there are any serious performance lossages\n> > from it, and if so whether we could work around them.\n> \n> Actually, there are licensing issues involved ... we could never put a\n> 'windows binary' up for anon-ftp, since to distribute it would require the\n> cygwin.dll to be distributed, and to do that, there is a licensing cost\n\nWhy? Isn't Cyygwin GPL'd? From http://cygwin.com/licensing.html I don't\nsee anything that would require licensing fees for OSD-compliant software.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <[email protected]>\nPGP Key ID: DB3C29FC\n", "msg_date": "Wed, 8 May 2002 08:15:09 -0400", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much work is a native Windows application?" }, { "msg_contents": "On Wed, 8 May 2002, Neil Conway wrote:\n\n> On Wed, 8 May 2002 01:03:37 -0300 (ADT)\n> \"Marc G. Fournier\" <[email protected]> wrote:\n> > On Tue, 7 May 2002, Tom Lane wrote:\n> >\n> > > It'd be worth trying to understand cygwin issues in detail before we\n> > > sign up to do and support a native Windows port. I understand the\n> > > user-friendliness objection to cygwin (though one would think proper\n> > > packaging might largely hide cygwin from naive Windows users). What I\n> > > don't understand is whether there are any serious performance lossages\n> > > from it, and if so whether we could work around them.\n> >\n> > Actually, there are licensing issues involved ... we could never put a\n> > 'windows binary' up for anon-ftp, since to distribute it would require the\n> > cygwin.dll to be distributed, and to do that, there is a licensing cost\n>\n> Why? Isn't Cyygwin GPL'd? From http://cygwin.com/licensing.html I don't\n> see anything that would require licensing fees for OSD-compliant software.\n\nI may be wrong about this ... 
this was prior to Redhat buying it out,\nwhich I totally forgot about ...\n\n", "msg_date": "Wed, 8 May 2002 09:28:38 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much work is a native Windows application?" }, { "msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Tom Lane\n> Sent: Tuesday, May 07, 2002 1:49 PM\n> To: mlw\n> Cc: Marc G. Fournier; PostgreSQL-development\n> Subject: Re: [HACKERS] How much work is a native Windows application?\n>\n>\n> It'd be worth trying to understand cygwin issues in detail before we\n> sign up to do and support a native Windows port. I understand the\n> user-friendliness objection to cygwin (though one would think proper\n> packaging might largely hide cygwin from naive Windows users). What\n> I don't understand is whether there are any serious performance lossages\n> from it, and if so whether we could work around them.\n\nI've sent others to do a cygwin install of PG; it's not at all obvious to\nthem how much of cygwin they need & they end up installing ALL of cygwin (a\nton of devel tools, obscure unix utils, etc.) just to get PG working.\n\nIt would seem not too difficult to package up the cygwin.dll, the handful of\nshell utils (sh, rm, etc.) req'd by PG, and perhaps even give it a standard\nWindows installer.\n\nWould this be a worthwhile move?\n\nJoel BURTON | [email protected] | joelburton.com | aim: wjoelburton\nKnowledge Management & Technology Consultant\n\n", "msg_date": "Wed, 8 May 2002 10:23:05 -0400", "msg_from": "\"Joel Burton\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much work is a native Windows application? " }, { "msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Marc G. Fournier\n> Sent: Wednesday, May 08, 2002 12:04 AM\n> To: Tom Lane\n> Cc: mlw; PostgreSQL-development\n> Subject: Re: [HACKERS] How much work is a native Windows application?\n>\n>\n> On Tue, 7 May 2002, Tom Lane wrote:\n>\n> > It'd be worth trying to understand cygwin issues in detail before we\n> > sign up to do and support a native Windows port. I understand the\n> > user-friendliness objection to cygwin (though one would think proper\n> > packaging might largely hide cygwin from naive Windows users). What I\n> > don't understand is whether there are any serious performance lossages\n> > from it, and if so whether we could work around them.\n>\n> Actually, there are licensing issues involved ... we could never put a\n> 'windows binary' up for anon-ftp, since to distribute it would require the\n> cygwin.dll to be distributed, and to do that, there is a licensing cost\n> ... of course, I guess we could require ppl to download cygwin seperately,\n> install that, then install the binary over top of that ...\n\n\n>From http://cygwin.com/licensing.html:\n\n\"\"\"\nIn accordance with section 10 of the GPL, Red Hat permits programs whose\nsources are distributed under a license that complies with the Open Source\ndefinition to be linked with libcygwin.a without libcygwin.a itself causing\nthe resulting program to be covered by the GNU GPL.\n\nThis means that you can port an Open Source(tm) application to cygwin, and\ndistribute that executable as if it didn't include a copy of libcygwin.a\nlinked into it. Note that this does not apply to the cygwin DLL itself. 
If\nyou distribute a (possibly modified) version of the DLL you must adhere to\nthe terms of the GPL, i.e. you must provide sources for the cygwin DLL.\n\nSee http://www.opensource.org/osd.html for the precise Open Source\nDefinition referenced above.\n\"\"\"\n\nNot following this exactly, but would this give PG the exception it needs\n(my eyes start to glaze over on stuff like this)? Anyone from RedHat still\non this list?\n\nIn any event, if PG can't release a PG+Cygwin in one package, we could\nmaintain a official web page about how to get PG running under Cygwin that\nwalks through exactly what to install, how to install, and how to set up.\n\nThere are some notes at http://www.ca.postgresql.org/docs/faq-mswin.html,\nbut these are assuming that you want to build PG, rather than simply install\nPG from the cygwin packages.\n\nI'd be very willing to help with this effort, once there's some consensus on\nwhat direction we want to head.\n\n- J.\n\nJoel BURTON | [email protected] | joelburton.com | aim: wjoelburton\nKnowledge Management & Technology Consultant\n\n", "msg_date": "Wed, 8 May 2002 10:42:07 -0400", "msg_from": "\"Joel Burton\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much work is a native Windows application? " }, { "msg_contents": "\"Marc G. Fournier\" <[email protected]> writes:\n\n> On Tue, 7 May 2002, Tom Lane wrote:\n> \n> > It'd be worth trying to understand cygwin issues in detail before we\n> > sign up to do and support a native Windows port. I understand the\n> > user-friendliness objection to cygwin (though one would think proper\n> > packaging might largely hide cygwin from naive Windows users). What I\n> > don't understand is whether there are any serious performance lossages\n> > from it, and if so whether we could work around them.\n> \n> Actually, there are licensing issues involved ... we could never put\n> a 'windows binary' up for anon-ftp, since to distribute it would\n> require the cygwin.dll to be distributed, and to do that, there is a\n> licensing cost ... of course, I guess we could require ppl to\n> download cygwin seperately, install that, then install the binary\n> over top of that ...\n\n From the Cygwin FAQ:\n\n Is it free software?\n\n Yes. Parts are GNU software (gcc, gas, ld, etc...), parts are\n covered by the standard X11 license, some of it is public\n domain, some of it was written by Cygnus and placed under the\n GPL. None of it is shareware. You don't have to pay anyone to\n use it but you should be sure to read the copyright section of\n the FAQ more more information on how the GNU General Public\n License may affect your use of these tools.\n\nThere is even a clause allowing you to link to the cygwin dll without\nyour software falling under the GPL if your software is released under\na license that complies with the Open Source Definition.\n\n *** NOTE ***\n\n In accordance with section 10 of the GPL, Red Hat permits\n programs whose sources are distributed under a license that\n complies with the Open Source definition to be linked with\n libcygwin.a without libcygwin.a itself causing the resulting\n program to be covered by the GNU GPL.\n\n This means that you can port an Open Source(tm) application to\n cygwin, and distribute that executable as if it didn't include\n a copy of libcygwin.a linked into it. Note that this does not\n apply to the cygwin DLL itself. If you distribute a (possibly\n modified) version of the DLL you must adhere to the terms of\n the GPL, i.e. 
you must provide sources for the cygwin DLL.\n\n See http://www.opensource.org/osd.html for the precise Open\n Source Definition referenced above.\n\n Red Hat sells a special Cygwin License for customers who are\n unable to provide their application in open source code\n form. For more information, please see:\n http://www.redhat.com/software/tools/cygwin/, or call\n 866-2REDHAT ext. 3007\n\nIn other words, you could even distribute a BSD licensed PostgreSQL\nthat that ran on Windows. Not that such a loophole is particularly\nuseful. GPL projects regularly include BSD code, this doesn't make\nthe BSD version GPLed. The GPL might be viral in nature, but it's not\nthat viral.\n\nNow, I can understand why the PostgreSQL mirrors might be a little bit\nconcerned about distributing GPLed software, because of the legal\nramifications, but they could leave the distribution of Cygwin up to\nRedHat, and simply distribute a BSD-licensed PostgreSQL Windows\nbinary.\n\nJason\n", "msg_date": "08 May 2002 13:18:30 -0600", "msg_from": "Jason Earl <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much work is a native Windows application?" }, { "msg_contents": "On Tue, May 07, 2002 at 01:16:01PM -0400, mlw wrote:\n> I mentioned in another thread, Windows does not support \"fork().\" PostgreSQL\n> seems irrevocably tied to using fork(). Without a drastic rewrite of how\n> postmaster works, I don't see a way to make a pure Windows version.\n\n I watch this discussion and only one question is still in my head:\n how much people use Windows for server side part of stable application \n based Oracle or DB2? Why my employer spend a lot of money with\n SGI cluster + IRIX?\n\n _IMHO_ if you want support Windows, please, write good tools for admins, \n DB designers and developers (forms?). The server is really not a problem if \n you think about real DB application. There is more important things in our\n TODO than support GUI-OS for server running... (IMHO:-)\n \n Karel\n\n-- \n Karel Zak <[email protected]>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Thu, 9 May 2002 09:25:04 +0200", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much work is a native Windows application?" }, { "msg_contents": "Le Jeudi 9 Mai 2002 09:25, Karel Zak a écrit :\n> _IMHO_ if you want support Windows, please, write good tools for admins,\n> DB designers and developers (forms?). The server is really not a problem\n> if you think about real DB application. There is more important things in\n> our TODO than support GUI-OS for server running... (IMHO:-)\nTry pgAdmin2 (http://pgadmin.postgresql.org. Other project are in the hub. \nMono port for example.\n", "msg_date": "Thu, 9 May 2002 10:10:44 +0200", "msg_from": "Jean-Michel POURE <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much work is a native Windows application?" }, { "msg_contents": "Tom Lane wrote:\n> \"Marc G. Fournier\" <[email protected]> writes:\n> > On Tue, 7 May 2002, Tom Lane wrote:\n> >> It'd be worth trying to understand cygwin issues in detail before we\n> >> sign up to do and support a native Windows port.\n>\n> > Actually, there are licensing issues involved ... we could never put a\n> > 'windows binary' up for anon-ftp, since to distribute it would require the\n> > cygwin.dll to be distributed, and to do that, there is a licensing cost\n> > ... 
of course, I guess we could require ppl to download cygwin seperately,\n> > install that, then install the binary over top of that ...\n>\n> <<itch>> And how much development time are we supposed to expend to\n> avoid that?\n>\n> Give me a technical case for avoiding Cygwin, and maybe I can get\n> excited about it. I'm not planning to lift a finger on the basis\n> of licensing though... after all, Windows users are accustomed to\n> paying for software, no?\n\n Nobody asked you to lift any of your fingers. A few people\n (including me) just see value in a native Windows port,\n kicking out the Cygwin requirement.\n\n I have the impression you never did use Cygwin. I did, thanks\n but no thanks.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Thu, 9 May 2002 09:44:11 -0400 (EDT)", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much work is a native Windows application?" }, { "msg_contents": "Jan Wieck wrote:\n> \n> Tom Lane wrote:\n> > \"Marc G. Fournier\" <[email protected]> writes:\n> > > On Tue, 7 May 2002, Tom Lane wrote:\n> > >> It'd be worth trying to understand cygwin issues in detail before we\n> > >> sign up to do and support a native Windows port.\n> >\n> > > Actually, there are licensing issues involved ... we could never put a\n> > > 'windows binary' up for anon-ftp, since to distribute it would require the\n> > > cygwin.dll to be distributed, and to do that, there is a licensing cost\n> > > ... of course, I guess we could require ppl to download cygwin seperately,\n> > > install that, then install the binary over top of that ...\n> >\n> > <<itch>> And how much development time are we supposed to expend to\n> > avoid that?\n> >\n> > Give me a technical case for avoiding Cygwin, and maybe I can get\n> > excited about it. I'm not planning to lift a finger on the basis\n> > of licensing though... after all, Windows users are accustomed to\n> > paying for software, no?\n> \n> Nobody asked you to lift any of your fingers. A few people\n> (including me) just see value in a native Windows port,\n> kicking out the Cygwin requirement.\n> \n> I have the impression you never did use Cygwin. I did, thanks\n> but no thanks.\n\nI have used the cygwin version too. It is a waste of time. No Windows user will\never accept it. No windows-only user is going to use the cygwin tools. From a\nproduction stand point, would anyone reading this trust their data to\nPostgreSQL running on cygwin? Think about it, if you wouldn't, why would anyone\nelse.\n\nI think, and I know people are probably sick of me spouting opinions, that if\nyou want a Windows presence for PostgreSQL, then we should write a real Win32\nversion.\n\nIf the global/static variables which are initialized by the postmaster are\nmoved to a structure, we can should be able to remove the fork() requirement\nand port to a Win32 native system.\n", "msg_date": "Thu, 09 May 2002 10:05:03 -0400", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much work is a native Windows application?" }, { "msg_contents": "Jan Wieck writes:\n > Tom Lane wrote:\n > > Give me a technical case for avoiding Cygwin, and maybe I can get\n > > excited about it. I'm not planning to lift a finger on the basis\n > > of licensing though... 
after all, Windows users are accustomed to\n > > paying for software, no?\n > Nobody asked you to lift any of your fingers. A few people\n > (including me) just see value in a native Windows port,\n > kicking out the Cygwin requirement.\n > I have the impression you never did use Cygwin. I did, thanks\n > but no thanks.\n\nI think the crux of the the problem is that a native Windows port\nwould require a LOT of changes in the source (switching over to API\nwrappers, adding compatibility layers). Obviously this has the\npossibility of introducing a lot of bugs with zero gain for the folk\nwho are already happily running PostgreSQL on UNIX-like systems. And\nwhat of performance?\n\nSure It'd be nice to have a native PostgreSQL on XP Server (I don't\nsee the point in consumer level Microsoft OSs) but how high is the\ndemand? What's the prize? What are the current limitations - fork,\nsemaphores, ugly interface...?\n\nLee.\n", "msg_date": "Thu, 9 May 2002 15:13:27 +0100", "msg_from": "Lee Kindness <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much work is a native Windows application?" }, { "msg_contents": "Lee Kindness wrote:\n\n> \n> Sure It'd be nice to have a native PostgreSQL on XP Server (I don't\n> see the point in consumer level Microsoft OSs) but how high is the\n> demand? What's the prize? What are the current limitations - fork,\n> semaphores, ugly interface...?\n\nThe demand for PostgreSQL on Windows is currently as near to zero as you can\nimagine. This is probably because there is no viable PostgreSQL on Windows.\n\nIf written correctly, a Win32 version of PostgreSQL would rock the Windows\nworld. I see no reason why it would be limted to the \"professional\" version.\nHell, it could even run on Windows 98.\n\nRight now, in the small to medium space, there is only one choice for Windows,\nMSSQL. It requires the \"professional\" or server versions of the Microsoft\nplatforms. PostgreSQL could come in and run on all of them. \n\nPostgreSQL's feature set and price ($0), with a good installer, would do VERY\nwell.\n", "msg_date": "Thu, 09 May 2002 10:23:41 -0400", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much work is a native Windows application?" }, { "msg_contents": "mlw <[email protected]> writes:\n> I have used the cygwin version too. It is a waste of time. No Windows user will\n> ever accept it. No windows-only user is going to use the cygwin tools.\n\nWith decent packaging, no windows-only user would even know we have\ncygwin in there. The above argument is just plain irrelevant. The real\npoint is that we need a nice clean friendly GUI for both installation\nand administration --- and AFAICS that will take about the same amount of\nwork to write whether the server requires cygwin internally or not.\n\nRather than expending largely-pointless work on internal rewrites of\nthe server, people who care about this issue ought to be thinking about\nthe GUI problems.\n\n> From a production stand point, would anyone reading this trust their\n> data to PostgreSQL running on cygwin?\n\nI wouldn't trust my data to *any* database running on a Microsoft OS.\nPeriod. The above argument thus doesn't impress me at all, especially\nwhen it's being made without offering a shred of evidence that cygwin\ncontributes any major degree of instability.\n\nI am especially unhappy about the prospect of major code revisions\nand development time spent on chasing this rather than improving our\nperformance and stability on Unix-type OSes. 
I agree with the comment\nsomeone else made: that's just playing Microsoft's game.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 09 May 2002 10:25:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much work is a native Windows application? " }, { "msg_contents": "Tom Lane wrote:\n> With decent packaging, no windows-only user would even know we have\n> cygwin in there. The above argument is just plain irrelevant. The real\n> point is that we need a nice clean friendly GUI for both installation\n> and administration --- and AFAICS that will take about the same amount of\n> work to write whether the server requires cygwin internally or not.\n\nCan a cygwin version of PostgreSQL see the native file system, like: C:\\My\nDatabase, D:\\postgres?\n\n> > From a production stand point, would anyone reading this trust their\n> > data to PostgreSQL running on cygwin?\n> \n> I wouldn't trust my data to *any* database running on a Microsoft OS.\n\nThat is a prejudice that is affecting your judgment. Many people do trust\nWindows, do I? No, but a lot of people do. People have trusted their businesses\non Windows NT/2K/XP, many are still doing so. We want these people to use\nPostgreSQL, so when they see the error in their ways, they have a way out.\n\n\n> The above argument thus doesn't impress me at all, especially\n> when it's being made without offering a shred of evidence that cygwin\n> contributes any major degree of instability.\n\n From a software development standpoint, I am VERY uncomfortable with the\ntechnique of a user space program copying its writeable memory to another\nprocess's. It may work until Microsoft changes something with the next version\nof IE. What about anti-virus software, cygwin has problems with them, and you\nhave to have anti-virus software on Windows.\n\nOn top of that, the time spent copying the whole process is too long, and it\nforces real memory to be allocated and initialized at process startup. \n\nSo, the cygwin fork() will cause PostgreSQL to be slower and use more memory\nthan a native version, and will not co-exist well with anti-virus software.\n\n> \n> I am especially unhappy about the prospect of major code revisions\n> and development time spent on chasing this rather than improving our\n> performance and stability on Unix-type OSes. I agree with the comment\n> someone else made: that's just playing Microsoft's game.\n\nMaybe is is playing \"Microsoft's Game\" but the end result will be a program\nthat can seriously compete with MSSQL on Windows, and provide a REAL migration\npath to UNIX. \n\nMany developers use MSSQL because they \"have it\" in MSDN, so to them, it is\nfree. Once they develop something using it, they are tied to Windows. When it\ncomes time to deploy their pet project, the company has to cough up the price\nof the server.\n\nA native, friendly, Win32 PostgreSQL that works the same on Windows as it does\non FreeBSD, Linux, Solaris, etc. Will offer the developer real options away\nfrom Windows.\n\nAlso: I don't think it needs to be a major rewrite, no strategy needs to\nchange, it is basically renaming variables, i.e. my_global_var becomes\npg_globals.my_global_var.\n\nOnce that is done, a port writer can do what ever they need to do to get that\nstructure to the child correctly. As an exercise, I bet if we did this, we\nwould find bugs which are lurking, as yet unfound.\n\nBesides, the discipline of using a globals structure will improve the code\nbase. 
Don't you agree?\n", "msg_date": "Thu, 09 May 2002 10:55:01 -0400", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much work is a native Windows application?" }, { "msg_contents": "...\n> PostgreSQL's feature set and price ($0), with a good installer, would do VERY\n> well.\n\nThat may be (I'd like to think so!).\n\nWe've identified at least a couple of barriers to folks running\nPostgreSQL on Windows. The installer and GUI issue needs to be solved no\nmatter what, and we *could* have a version running on Windows with just\nthose things in place.\n\nimho if we are going down the path, we need to take the first steps. And\nthose do *not* require code rewrites to do so (or at least don't appear\nto).\n\nIf we had a package available for Windows -- with some developers such\nas yourself supporting it -- then we could talk about putting more\nresources into supporting that platform better. But the perception of at\nleast some of the key developers (including myself) is that *if* we did\nthe code rewrite, and *if* we spent the effort to end up as a native on\nWindows, then we *very well might* be an unreliable database on an\nunreliable platform.\n\nistm that getting a well packaged system running now, then being able to\nidentify *only cygwin* as the barrier to better reliability would get\nmore support for changes in the backend code.\n\nAnd if we were working toward some ability to do threading anyway (I\ndon't see that in the near future, but we've talked in the past about\nstructuring the query engine around \"tuple sources\" which could then be\ndistributed across threads or across machines) then maybe the next step\nis easier.\n\nMy 2c...\n\n - Thomas\n", "msg_date": "Thu, 09 May 2002 08:13:17 -0700", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much work is a native Windows application?" }, { "msg_contents": "There are some issues that the whole idea of a win32 port should bring up. \nOne of them is whether or not postgresql should be rewritten as a \nmulti-threaded app.\n\nIf postgresql will never be rewritten as a multi-threaded app, then \nperformance under Windows is likely to ALWAYS be slow, since that \nmulti-thread is the preferred model for good performance on W32. note \nthat many Unixes prefer multi-threaded models as well (Solaris comes to \nmind) so there's the possibility that a multi-threaded postgresql could \nenjoy better performance on more than just windows.\n\nIf postgresql IS going to eventually be multi-threaded, then the whole \nwin32 port should probably be delayed until then, since it would solve \nmany of the issues of fork() versus createprocess().\n\nJust my thoughts on it.\n\nScott\n\n", "msg_date": "Thu, 9 May 2002 10:34:58 -0600 (MDT)", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Issues tangential to win32 support" }, { "msg_contents": "mlw wrote:\n> I think, and I know people are probably sick of me spouting opinions, that if\n> you want a Windows presence for PostgreSQL, then we should write a real Win32\n> version.\n>\n> If the global/static variables which are initialized by the postmaster are\n> moved to a structure, we can should be able to remove the fork() requirement\n> and port to a Win32 native system.\n\n My opinion here is that until May 1998 Postgres did exec(),\n so it was clean and okay for CreateProcess() up to then. 
Just\n because we optimized it for the copy-on-write behaviour,\n modern Unix kernels do with fork() only, is NO reason to\n accept sloppy coding. The Postmaster and the backend have\n different responsibilities. In fact, I still consider them\n beeing different programs even if they reside in one\n executable. Mixing global variables of one with the other is\n wrong.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Thu, 9 May 2002 12:37:16 -0400 (EDT)", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much work is a native Windows application?" }, { "msg_contents": "\n\nmlw wrote:\n> Can a cygwin version of PostgreSQL see the native file system, like: C:\\My\n> Database, D:\\postgres?\n\nSort of. C:\\My Database\nbecomes /cygdrive/c/My Database\nunder cygwin. So the path names need to be munged, but you can access \nthe entire windows filesystem from within cygwin.\n\n--Barry\n\n\n", "msg_date": "Thu, 09 May 2002 09:44:14 -0700", "msg_from": "Barry Lind <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much work is a native Windows application?" }, { "msg_contents": "> Lee Kindness wrote:\n>> Sure It'd be nice to have a native PostgreSQL on XP Server (I don't\n>> see the point in consumer level Microsoft OSs) but how high is the\n>> demand? What's the prize? What are the current limitations - fork,\n>> semaphores, ugly interface...?\n\n> The demand for PostgreSQL on Windows is currently as near to zero\n> as you can imagine. This is probably because there is no viable\n> PostgreSQL on Windows.\n>\n> If written correctly, a Win32 version of PostgreSQL would rock\n> the Windows world. I see no reason why it would be limted to the\n> \"professional\" version. Hell, it could even run on Windows 98.\n>\n> Right now, in the small to medium space, there is only one choice for\n> Windows, MSSQL. It requires the \"professional\" or server versions of\n> the Microsoft platforms. PostgreSQL could come in and run on all of\n> them.\n>\n> PostgreSQL's feature set and price ($0), with a good installer, would\n> do VERY well.\n\nIf \"fixing\" PostgreSQL to \"work\" on Win32 caused a whole lot of breakage on the Unix side, that would _not_ be a \"win.\" It might do well on Win32, but breakage could lead to a LOSS of interest on Unix, as people decided to take the point of view that the developers considered it more important to toady to Win-Needs than to improve how it works on Unix.\n\nIs that a totally \"fair\" point of view? No, but in a world where New York office buildings get bombed resulting in absolutely bizarre combinations of cheering and jeering, an expectation of \"fairness\" is definitely too much to ask. (You won't get anything resembling \"fairness\" at the airport, that's for sure...)\n\nHow about a totally different perspective: \n Why not let someone else go after the Windows \"market\"?\n\nPeople are actively working on SAP-DB and Firebird, and putting pretty serious efforts into the Win32 side of those database systems. How outrageous an idea is it to say: \"Let them deal with that set of headaches?\"\n\nAside from that, there's also the \"Show me the patch\" option. 
If someone is excited about porting PostgreSQL to Win32, nothing is stopping them from doing so, and contributing patches back. There seem to be several such efforts out there; if one becomes mature enough, it may represent a useful basis to port in to make the main codebase \"more portable.\"\n\nThere are at least the two clear options: \n- Other DBs;\n- Volunteers porting PostgreSQL to Win32.\n\nIf a \"winner\" emerges, that would surely be useful to guide later PostgreSQL efforts.\n--\n(reverse (concatenate 'string \"gro.mca@\" \"enworbbc\"))\nhttp://www.cbbrowne.com/info/wp.html\n\"What you end up with, after running an operating system concept\nthrough these many marketing coffee filters, is something not unlike\nplain hot water.\" -- Matt Welsh\n-- \n(reverse (concatenate 'string \"moc.enworbbc@\" \"enworbbc\"))\nhttp://www3.sympatico.ca/cbbrowne/sap.html\n\"I'm not switching from slrn. I'm quite confident that anything that\n*needs* to be posted in HTML is fatuous garbage not worth my time.\"\n-- David M. Cook <[email protected]>", "msg_date": "Thu, 09 May 2002 13:00:31 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much work is a native Windows application? " }, { "msg_contents": "> I think, and I know people are probably sick of me spouting opinions,\n> that if you want a Windows presence for PostgreSQL, then we should\n> write a real Win32 version.\n\nThe crucial wrong word is the word \"we.\"\n\nIf _you_ want a Windows presence, then _you_ should write a real Win32 \nversion. That clearly attaches responsibility to someone who is interested.\n--\n(reverse (concatenate 'string \"gro.mca@\" \"enworbbc\"))\nhttp://www.cbbrowne.com/info/unix.html\n\"I'm not switching from slrn. I'm quite confident that anything that\n*needs* to be posted in HTML is fatuous garbage not worth my time.\"\n-- David M. Cook <[email protected]>\n\n-- \n(reverse (concatenate 'string \"moc.enworbbc@\" \"enworbbc\"))\nhttp://www.ntlug.org/~cbbrowne/spreadsheets.html\nRules of the Evil Overlord #166. \"If the rebels manage to trick me, I\nwill make a note of what they did so that I do not keep falling for\nthe same trick over and over again.\" <http://www.eviloverlord.com/>", "msg_date": "Thu, 09 May 2002 13:02:33 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: How much work is a native Windows application? " }, { "msg_contents": "[email protected] wrote:\n> \n> > I think, and I know people are probably sick of me spouting opinions,\n> > that if you want a Windows presence for PostgreSQL, then we should\n> > write a real Win32 version.\n> \n> The crucial wrong word is the word \"we.\"\n> \n> If _you_ want a Windows presence, then _you_ should write a real Win32\n> version. That clearly attaches responsibility to someone who is interested.\n\nI have already said that I am willing to write the pieces for a Windows port.\nThe issue is changes in PostgreSQL required to do it.\n", "msg_date": "Thu, 09 May 2002 13:24:03 -0400", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much work is a native Windows application?" 
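

To make the proposal under discussion easier to picture -- collect the per-backend globals that the postmaster currently hands down via fork() into one structure, publish that structure in shared memory, and start the backend with CreateProcess() -- here is a minimal, self-contained sketch. It is not PostgreSQL code: PgGlobals, BackendMain, the mapping name and the --backend switch are all invented for illustration, and a real port would additionally have to manage inheritable handles, security attributes, Unicode builds and error cleanup.

/*
 * Hypothetical sketch of the "globals struct + CreateProcess" pattern the
 * thread is debating.  Names are invented; this is not PostgreSQL code.
 */
#include <windows.h>
#include <stdio.h>
#include <string.h>

typedef struct PgGlobals		/* stand-in for the proposed "pg_globals" */
{
	int			backend_id;
	char		data_dir[260];
	/* ... every variable the postmaster would have passed via fork() ... */
} PgGlobals;

static PgGlobals pg_globals;	/* each process keeps its own copy */

static int
BackendMain(const char *map_name)
{
	HANDLE		map = OpenFileMapping(FILE_MAP_READ, FALSE, map_name);
	PgGlobals  *view;

	if (map == NULL)
		return 1;
	view = (PgGlobals *) MapViewOfFile(map, FILE_MAP_READ, 0, 0, sizeof(PgGlobals));
	if (view == NULL)
		return 1;

	pg_globals = *view;			/* the "copy into the child" step */
	UnmapViewOfFile(view);
	CloseHandle(map);

	printf("backend %d starting in %s\n", pg_globals.backend_id, pg_globals.data_dir);
	return 0;
}

int
main(int argc, char **argv)
{
	const char *map_name = "pg_backend_globals";
	HANDLE		map;
	PgGlobals  *view;
	STARTUPINFO si;
	PROCESS_INFORMATION pi;
	char		cmdline[512];

	if (argc > 1 && strcmp(argv[1], "--backend") == 0)
		return BackendMain(map_name);

	/* "postmaster" side: publish the globals in a named mapping ... */
	map = CreateFileMapping(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE,
							0, sizeof(PgGlobals), map_name);
	if (map == NULL)
		return 1;
	view = (PgGlobals *) MapViewOfFile(map, FILE_MAP_ALL_ACCESS, 0, 0, sizeof(PgGlobals));
	if (view == NULL)
		return 1;

	view->backend_id = 42;
	strcpy(view->data_dir, "C:\\pgdata");

	/* ... then start the child where fork() would have been used */
	memset(&si, 0, sizeof(si));
	si.cb = sizeof(si);
	snprintf(cmdline, sizeof(cmdline), "\"%s\" --backend", argv[0]);
	if (!CreateProcess(NULL, cmdline, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi))
		return 1;

	WaitForSingleObject(pi.hProcess, INFINITE);
	CloseHandle(pi.hThread);
	CloseHandle(pi.hProcess);
	UnmapViewOfFile(view);
	CloseHandle(map);
	return 0;
}

On Unix the same structure would simply be inherited through fork(), which is why the struct-of-globals discipline, if adopted, would cost the existing ports nothing at run time.
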
}, { "msg_contents": "Christopher Browne wrote:\n> \n> If \"fixing\" PostgreSQL to \"work\" on Win32 caused a whole lot of\n> breakage on the Unix side, that would _not_ be a \"win.\" It might\n> do well on Win32, but breakage could lead to a LOSS of interest\n> on Unix, as people decided to take the point of view that the\n> developers considered it more important to toady to Win-Needs\n> than to improve how it works on Unix.\n \nAs a PostgreSQL user, I *wholeheartedly* agree. I have no need nor\ninterest in a Win32 solution. Period. If I perceive that an effort\nto add a Win32 postgresql is adversely impacting the ongoing\ndevelopment of Unix-based PostgreSQL then I will start looking at\nother solutions.\n\nIn fact, if you folks could find additional resources that would\nsupport Win32 development, it still seems to me that perhaps those\nresources could be better spent improving the Unix version.\n\n-- \nSteve Wampler -- [email protected]\nO sibile, si ergo. Fortibus es enaro.\nNobile, demis trux. Demis phulla causan dux.\n", "msg_date": "Thu, 09 May 2002 10:29:29 -0700", "msg_from": "Steve Wampler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much work is a native Windows application?" }, { "msg_contents": "Scott Marlowe wrote:\n> \n> There are some issues that the whole idea of a win32 port should bring up.\n> One of them is whether or not postgresql should be rewritten as a\n> multi-threaded app.\n\nPerhaps.\n\n> \n> If postgresql will never be rewritten as a multi-threaded app, then\n> performance under Windows is likely to ALWAYS be slow, since that\n> multi-thread is the preferred model for good performance on W32.\n\nThere are methods for reducing process creation load on Windows. One way is to\nmake PostgreSQL one big .DLL and just spin off a small program. A windows .DLL\nis different than a UNIX shared library, in some ways better, in other ways\nworse, either way, it is a usefull tool.\n\n\n> note\n> that many Unixes prefer multi-threaded models as well (Solaris comes to\n> mind) so there's the possibility that a multi-threaded postgresql could\n> enjoy better performance on more than just windows.\n\nThe isolation of a process is very important to reliable operation. Going\nthreaded usually means allowing a single connection to bring down the whole\nserver.\n", "msg_date": "Thu, 09 May 2002 13:36:05 -0400", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issues tangential to win32 support" }, { "msg_contents": "Le Jeudi 9 Mai 2002 16:55, mlw a écrit :\n> Can a cygwin version of PostgreSQL see the native file system, like: C:\\My\n> Database, D:\\postgres?\n\nYou have the choice to keep Windows or Unix paths. Both are supported.\n/Jean-Michel POURE\n", "msg_date": "Thu, 9 May 2002 19:42:10 +0200", "msg_from": "Jean-Michel POURE <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much work is a native Windows application?" }, { "msg_contents": "> [email protected] wrote:\n>>> I think, and I know people are probably sick of me spouting\n>>> opinions, that if you want a Windows presence for PostgreSQL, then\n>>> we should write a real Win32 version.\n>>\n>> The crucial wrong word is the word \"we.\"\n\n>> If _you_ want a Windows presence, then _you_ should write a real\n>> Win32 version. That clearly attaches responsibility to someone who\n>> is interested.\n\n> I have already said that I am willing to write the pieces for a\n> Windows port. 
The issue is changes in PostgreSQL required to do it.\n\nNo, I don't think you understand.\n\nIf you're planning to do a port, then _all_ changes are your\nresponsibility. Nobody ought to need to change PostgreSQL in order for\nyou to write a Windows port; that, in fact, would be a waste of time,\nhaving several people working on something that should probably be done\nby one person.\n--\n(concatenate 'string \"aa454\" \"@freenet.carleton.ca\")\nhttp://www.ntlug.org/~cbbrowne/x.html\nWhy are they called apartments, when they're all stuck together? \n", "msg_date": "Thu, 09 May 2002 13:46:04 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: How much work is a native Windows application? " }, { "msg_contents": "Scott Marlowe wrote:\n> There are some issues that the whole idea of a win32 port should bring up.\n> One of them is whether or not postgresql should be rewritten as a\n> multi-threaded app.\n\n Please, don't add this one to it.\n\n I'm all for the native Windows port, yes, but I've discussed\n the multi-thread questions for days at Great Bridge, then\n again with my new employer, with people on shows and whatnot.\n\n Anything in the whole backend is designed with a multi-\n process model in mind. You'll not do that in any reasonable\n amount of time.\n\n> If postgresql will never be rewritten as a multi-threaded app, then\n> performance under Windows is likely to ALWAYS be slow, since that\n> multi-thread is the preferred model for good performance on W32. note\n> that many Unixes prefer multi-threaded models as well (Solaris comes to\n> mind) so there's the possibility that a multi-threaded postgresql could\n> enjoy better performance on more than just windows.\n\n As soon as you switch to a multi-threaded middle tier, that\n uses connection pools and avoids constant process creation, I\n think you don't have much of a problem any more. The high\n connect/disconnect ratio, acceptable for threaded databases,\n is what kills PostgreSQL.\n\n> If postgresql IS going to eventually be multi-threaded, then the whole\n> win32 port should probably be delayed until then, since it would solve\n> many of the issues of fork() versus createprocess().\n\n If multi-threading is the plan, then there is light at the\n end of the tunnel ... the upcoming train...\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Thu, 9 May 2002 13:51:29 -0400 (EDT)", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issues tangential to win32 support" }, { "msg_contents": "On Thu, 9 May 2002, Jan Wieck wrote:\n\n> > If postgresql IS going to eventually be multi-threaded, then the whole\n> > win32 port should probably be delayed until then, since it would solve\n> > many of the issues of fork() versus createprocess().\n> \n> If multi-threading is the plan, then there is light at the\n> end of the tunnel ... the upcoming train...\n\nThat's a bit extreme don't you think? I'm not fan of multi-threading as \nthe one true way, and since I use linux as my server for postgresql, there \nis no gain for me in a multi-threaded model. 
In fact, I'd prefer \npostgresql stay multi-process for robustness.\n\nBUT, if there are plans to go multi-thread, or could be plans, then those \nshould take priority over how to port to windows, since making postgresql \nmulti-threaded will change it so much as to make the current \"how do we \nport to windows\" thread meaningless.\n\nOne of the primary reasons given for avoiding cygwin is that postgresql \nruns so slowly under it on windows, but I submit that it probably won't \nrun a heck of a lot faster if it was written as a native app as long as \nit's running as a multi-process design. Since there's probably no great \ngain to be had from moving it out from under cygwin, why bother?\n\nMy vote would be to stay multi-process and Unix compatible. I have no \nreal use for windows as a server, and do NOT want to sacrifice the \nperformance / reliability I have with postgresql under Linux for a windows \nport.\n\n\n\n", "msg_date": "Thu, 9 May 2002 12:13:59 -0600 (MDT)", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issues tangential to win32 support" }, { "msg_contents": "[email protected] wrote:\n> \n> > [email protected] wrote:\n> >>> I think, and I know people are probably sick of me spouting\n> >>> opinions, that if you want a Windows presence for PostgreSQL, then\n> >>> we should write a real Win32 version.\n> >>\n> >> The crucial wrong word is the word \"we.\"\n> \n> >> If _you_ want a Windows presence, then _you_ should write a real\n> >> Win32 version. That clearly attaches responsibility to someone who\n> >> is interested.\n> \n> > I have already said that I am willing to write the pieces for a\n> > Windows port. The issue is changes in PostgreSQL required to do it.\n> \n> No, I don't think you understand.\n> \n> If you're planning to do a port, then _all_ changes are your\n> responsibility. Nobody ought to need to change PostgreSQL in order for\n> you to write a Windows port; that, in fact, would be a waste of time,\n> having several people working on something that should probably be done\n> by one person.\n\nWithout buy-in from the group, there is no point in me wasting my time doing\nall the work necessary. I'm not interested in making Mark's special version of\nPostgreSQL.\n\nIf we can agree on a strategy and a course, then it is worth doing. If all the\nchanges made fall on the floor because the group does not like them, then I\nwasted my time. Got it?\n\nAlso, doing the Windows portions of the code will represent a significant\ninvestment of my time. I'm not interested in doing a lot of work on a shoddy\nproject. If you ask the core group to put out a crappy version of PostgreSQL\nfor a UNIX, they would fight long and hard against it. Why should we be willing\nto produce a crappy version for Windows, just because the people here don't\nlike Windows.\n\nI don't care about Solaris, but I understand WHY it is important to make\nPostgreSQL work well on it. I don't understand why the people in this group\ndon't see the same purpose for a Windows port. To be honest, I think a good\nWindows port will do wonders for PostgreSQL's acceptance.\n", "msg_date": "Thu, 09 May 2002 14:23:02 -0400", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much work is a native Windows application?" 
}, { "msg_contents": "[email protected] wrote:\n> > [email protected] wrote:\n> >>> I think, and I know people are probably sick of me spouting\n> >>> opinions, that if you want a Windows presence for PostgreSQL, then\n> >>> we should write a real Win32 version.\n> >>\n> >> The crucial wrong word is the word \"we.\"\n>\n> >> If _you_ want a Windows presence, then _you_ should write a real\n> >> Win32 version. That clearly attaches responsibility to someone who\n> >> is interested.\n>\n> > I have already said that I am willing to write the pieces for a\n> > Windows port. The issue is changes in PostgreSQL required to do it.\n>\n> No, I don't think you understand.\n>\n> If you're planning to do a port, then _all_ changes are your\n> responsibility. Nobody ought to need to change PostgreSQL in order for\n> you to write a Windows port; that, in fact, would be a waste of time,\n> having several people working on something that should probably be done\n> by one person.\n\n Who said PostgreSQL shall not support any other OS than *NIX\n or things that look alike?\n\n When I first used Postgres, it still had a VMS port. Well,\n we dropped the VMS port at some point, when we where pretty\n sure it was broken and nobody complained.\n\n Now we face the fact that Microsoft managed to create\n something useful (other than a joystick or optical mouse, I\n mean Win2K). And as a logical consequence more and more\n people ask for support of their OS.\n\n A good deal of effort in the original Postgres was about\n portability. I hope we've not become a UNIX-only show.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Thu, 9 May 2002 14:40:56 -0400 (EDT)", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much work is a native Windows application?" }, { "msg_contents": "On Thu, 2002-05-09 at 22:36, mlw wrote:\n> Scott Marlowe wrote:\n> > note\n> > that many Unixes prefer multi-threaded models as well (Solaris comes to\n> > mind) so there's the possibility that a multi-threaded postgresql could\n> > enjoy better performance on more than just windows.\n> \n> The isolation of a process is very important to reliable operation. Going\n> threaded usually means allowing a single connection to bring down the whole\n> server.\n\nAFAIK we do that already in forked model - any time postmaster thinks\nthat a dying child has corrupted shared memory it kills all its\nchildren.\n\n--------\nHannu\n\n\n\n", "msg_date": "10 May 2002 00:06:48 +0500", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issues tangential to win32 support" }, { "msg_contents": "On Thu, 2002-05-09 at 22:51, Jan Wieck wrote:\n> Scott Marlowe wrote:\n> > There are some issues that the whole idea of a win32 port should bring up.\n> > One of them is whether or not postgresql should be rewritten as a\n> > multi-threaded app.\n> \n> Please, don't add this one to it.\n> \n> I'm all for the native Windows port, yes, but I've discussed\n> the multi-thread questions for days at Great Bridge, then\n> again with my new employer, with people on shows and whatnot.\n> \n> Anything in the whole backend is designed with a multi-\n> process model in mind. 
You'll not do that in any reasonable\n> amount of time.\n\nIIRC you are replying to the man who _has_ actually don this ?\n\nPerhaps using an unreasonable amount of time but still ... :)\n\n----------\nHannu\n\n\n", "msg_date": "10 May 2002 00:09:58 +0500", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issues tangential to win32 support" }, { "msg_contents": "On Fri, 2002-05-10 at 00:09, Hannu Krosing wrote:\n> On Thu, 2002-05-09 at 22:51, Jan Wieck wrote:\n> > Scott Marlowe wrote:\n> > > There are some issues that the whole idea of a win32 port should bring up.\n> > > One of them is whether or not postgresql should be rewritten as a\n> > > multi-threaded app.\n> > \n> > Please, don't add this one to it.\n> > \n> > I'm all for the native Windows port, yes, but I've discussed\n> > the multi-thread questions for days at Great Bridge, then\n> > again with my new employer, with people on shows and whatnot.\n> > \n> > Anything in the whole backend is designed with a multi-\n> > process model in mind. You'll not do that in any reasonable\n> > amount of time.\n> \n> IIRC you are replying to the man who _has_ actually don this ?\n\nSorry, mistaken identity. I meant Myron scott ( [email protected] )\n\nwho has made this http://sourceforge.net/projects/mtpgsql\n\n> Perhaps using an unreasonable amount of time but still ... :)\n> \n> ----------\n> Hannu\n> \n\n\n", "msg_date": "10 May 2002 00:17:24 +0500", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issues tangential to win32 support" }, { "msg_contents": "On Thu, 2002-05-09 at 19:23, mlw wrote:\n> Lee Kindness wrote:\n> \n> > \n> > Sure It'd be nice to have a native PostgreSQL on XP Server (I don't\n> > see the point in consumer level Microsoft OSs) but how high is the\n> > demand? What's the prize? What are the current limitations - fork,\n> > semaphores, ugly interface...?\n> \n> The demand for PostgreSQL on Windows is currently as near to zero as you can\n> imagine. This is probably because there is no viable PostgreSQL on Windows.\n> \n> If written correctly, a Win32 version of PostgreSQL would rock the Windows\n> world. I see no reason why it would be limted to the \"professional\" version.\n> Hell, it could even run on Windows 98.\n\nPerhaps we could simpultaneously solve another problem - creating a\nsinglethreaded embeddable version of postgresql \"engine\"\n\n-------------\nHannu\n\n\n", "msg_date": "10 May 2002 00:31:24 +0500", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much work is a native Windows application?" }, { "msg_contents": "On Thu, 2002-05-09 at 19:25, Tom Lane wrote:\n> mlw <[email protected]> writes:\n> > I have used the cygwin version too. It is a waste of time. No Windows user will\n> > ever accept it. No windows-only user is going to use the cygwin tools.\n> \n> With decent packaging, no windows-only user would even know we have\n> cygwin in there. The above argument is just plain irrelevant. 
The real\n> point is that we need a nice clean friendly GUI for both installation\n> and administration --- and AFAICS that will take about the same amount of\n> work to write whether the server requires cygwin internally or not.\n\n<evil grin>\nWe can go the Oracle way and write a 200MB cross-platform java installer\nrequiring and exact version of java runtime \n</evil grin>\n\n> Rather than expending largely-pointless work on internal rewrites of\n> the server, people who care about this issue ought to be thinking about\n> the GUI problems.\n\npgAccess is quite nice (Disclaimer: I'm not a windows weenie, I run it\ninside vmware/win98 IE browser test environment on my Linux workstation\n;). \n\nWhy not just bundle what we've got ?\n\n> > From a production stand point, would anyone reading this trust their\n> > data to PostgreSQL running on cygwin?\n> \n> I wouldn't trust my data to *any* database running on a Microsoft OS.\n> Period. \n\nDo we support Xenix and SCO ?\n\n> The above argument thus doesn't impress me at all, especially\n> when it's being made without offering a shred of evidence that cygwin\n> contributes any major degree of instability.\n\n From the comments here it seems to be either cygwin or more likely\ncygipc\n\n> I am especially unhappy about the prospect of major code revisions\n> and development time spent on chasing this rather than improving our\n> performance and stability on Unix-type OSes. I agree with the comment\n> someone else made: that's just playing Microsoft's game.\n\nNot!\n\nI think that this thread is mostly about coordinating code and interface\ncleanups that are likely beneficial for both *NIX and non-*NIX platforms\nmainly\n * cleaner support for semaphores\n * separating shared and per-process data\n * process creation\n * (file operations)\n * (init and service scripts)\nif done properly none of these will degrade code quality nor\nperformance.\n\nAlso, having a clean interface for those will not only enable any\ninterested party to make windows/BeOS/OSX/QNX binaries with less effort,\nit will most likely make it easier make use of advances in *NIX world\nlike AIO, multiprocessor systems, NUMA and distributed systems, and just\nmake things more robust and reliable by making code inspection easier.\n\n---------------\nHannu\n\n\n", "msg_date": "10 May 2002 00:53:27 +0500", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much work is a native Windows application?" }, { "msg_contents": "Hannu Krosing wrote:\n> \n> On Thu, 2002-05-09 at 22:36, mlw wrote:\n> > Scott Marlowe wrote:\n> > > note\n> > > that many Unixes prefer multi-threaded models as well (Solaris comes to\n> > > mind) so there's the possibility that a multi-threaded postgresql could\n> > > enjoy better performance on more than just windows.\n> >\n> > The isolation of a process is very important to reliable operation. Going\n> > threaded usually means allowing a single connection to bring down the whole\n> > server.\n> \n> AFAIK we do that already in forked model - any time postmaster thinks\n> that a dying child has corrupted shared memory it kills all its\n> children.\n\nI know there are cases when postmaster will kill all its children, but take the\ncase of a faulty user function that gets a segfault. That process dies and the\nothers continue. 
Without a lot of OS specific crap in postgres, that sort of\nbehavior would be difficult to have with a threaded server.\n", "msg_date": "Thu, 09 May 2002 17:09:56 -0400", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issues tangential to win32 support" }, { "msg_contents": "Tom Lane <[email protected]> writes:\n\n> mlw <[email protected]> writes:\n> > I have used the cygwin version too. It is a waste of time. No Windows\nuser will\n> > ever accept it. No windows-only user is going to use the cygwin tools.\n>\n> With decent packaging, no windows-only user would even know we have\n> cygwin in there. The above argument is just plain irrelevant. The real\n> point is that we need a nice clean friendly GUI for both installation\n> and administration --- and AFAICS that will take about the same amount of\n> work to write whether the server requires cygwin internally or not.\n\nI'm afraid I agree with mlw, Tom. I don't think the problem ends at the GUI,\nalthough for many people it would. The issue extends at least also to\nsupport and troubleshooting. In a production environment, I have a better\nchance of figuring out what's going wrong with an application written\nnatively for an operating system dealing directly with that operating\nsystem. I would take a dim view of using PostgreSQL running on cygwin unless\nI had extensive experience doing it, or if there were no other alternative.\n\n> > From a production stand point, would anyone reading this trust their\n> > data to PostgreSQL running on cygwin?\n>\n> I wouldn't trust my data to *any* database running on a Microsoft OS.\n> Period. The above argument thus doesn't impress me at all, especially\n> when it's being made without offering a shred of evidence that cygwin\n> contributes any major degree of instability.\n\nIf you could prove to me that cygwin doesn't contribute *any* instability,\nI'd still be pretty worried, probably for the same reasons that you don't\ntrust any Microsoft OS. There are increased chances that something could go\ncritically wrong, particularly in an environment fundamentally different. I\nthink mlw's basic point is quite valid, that PG+cygwin will not ever find\nfavor with decision-makers who are used to Windows systems. Suspicion of\nthe other environment's foibles is common, and goes both ways.\n\n> I am especially unhappy about the prospect of major code revisions\n> and development time spent on chasing this rather than improving our\n> performance and stability on Unix-type OSes. I agree with the comment\n> someone else made: that's just playing Microsoft's game.\n\nThere I don't deny you may be right.\n\nErnie Gutierrez\nWalnut Creek, CA\n\n", "msg_date": "Thu, 9 May 2002 17:59:55 -0700", "msg_from": "\"Ernesto Gutierrez\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much work is a native Windows application? " }, { "msg_contents": "On Sat, 2002-05-11 at 02:25, mlw wrote:\n> A binary version of PostgreSQL for Windows should not use the cygwin dll. I\n> know and understand there is some disagreement with this position, but in this\n> I'm sure about this.\n \n...\n\n> I believe we can use the cygwin development environment, and direct gcc not to\n> link with the cygwin dll. Last time I looked it was a command line option. This\n> will produce a native windows application. No emulation, just a standard C\n> runtime.\n\nIt seems that mingw (http://www.mingw.org/) does exactly this and\nprovides needed headers/libs. 
And they have also non-cycwin minimal\nbuild environment (MSYS) that supplies make,sh and other stuff we might\nuse for running initdb and other shell scripts.\n\n> Some of the hits will be file path manipulation, '/' vs '\\', the notion of\n> drive letters, and case insensitivity in file names. \n> \n> Unicode may be an issue, I haven't looked at that yet. Is that a must for the\n> initial release?\n\nProbably not.\n\n>> \n> A couple simple programs can be written using msvc to monitor, start and stop\n> PostgreSQL. The programs will be simple using the application wizard, just make\n> a small dialog box application.\n\ndev-c++ has also wizards for easy making of trivial user interfaces\n\nhttp://sourceforge.net/projects/dev-cpp/\n\n--------------\nHannu\n\n\n", "msg_date": "11 May 2002 00:48:12 +0500", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Native Win32, How about this?" }, { "msg_contents": "A binary version of PostgreSQL for Windows should not use the cygwin dll. I\nknow and understand there is some disagreement with this position, but in this\nI'm sure about this.\n\nThe tools used to create the binary need not be Microsoft, many venders have\nused Borland or Watcom, the run of the mill user/developer does not care. The\ndevelopers who do care won't mind the cygwin development environment as long as\nit produces a native Windows binary that does not play tricks such as fork().\n\nWindows developers don't care too much about source code. The build environment\nwill not be a problem.\n\nThe issue is that the system must perform well and must be stable. I do not\nbelieve that cygwin can meet this requirement. Having done some research for\nthese discussions, I think we know it has startup performance issues and\nunknown operational issues.\n\nFYI: My PHP project msession, can produce both a Windows version and a Cygwin\nversion. It is threaded C++, but I have measured a performance improvements\nusing the native Windows version over the cygwin version. I have since\nabandoned the cygwin version.\n\nI believe we can use the cygwin development environment, and direct gcc not to\nlink with the cygwin dll. Last time I looked it was a command line option. This\nwill produce a native windows application. No emulation, just a standard C\nruntime.\n\nSome of the hits will be file path manipulation, '/' vs '\\', the notion of\ndrive letters, and case insensitivity in file names. \n\nUnicode may be an issue, I haven't looked at that yet. Is that a must for the\ninitial release?\n\nThere will be a need for some emulation/api specification of things like\nsemaphores, shared memory, file API (I would like to use Windows native\nCreateFile routines, as these should be pretty fast.), and so on.\n\nWe will also have to breakup postgres and postmaster, and for the Windows\nversion use CreateProcess. There are a number of ways to attack this, globals\nin a structure based in shared memory, globals in a .DLL exported to processes\nand shared, and so on.\n\nI think a huge time savings can be had by avoiding rewriting everything for the\nMicrosoft build environment. As far as I know, and please correct me if I'm\nwrong, code produced by the cygwin gcc is freely distributable and need not be\nGPL.\n\nOnce we have it working without fork() using the cygwin build environment, we\nwill have a native Windows application, we can then further evaluate whether or\nnot we want to expend the work to make a MSC version. 
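\nTo make the fork()-versus-CreateProcess() point concrete, here is a minimal, purely illustrative sketch of how a postmaster-style parent could spawn a backend on Win32; the executable name, the command-line convention and the handle handling are assumptions made for the example, not a statement of how it would actually be done:\n\n#include <windows.h>\n#include <stdio.h>\n\n/* Hypothetical sketch: launch one backend process without fork().\n   Backend state has to travel via the command line, inherited handles,\n   or a shared memory segment, since there is no copy-on-write address\n   space as with fork(). */\nstatic int launch_backend(const char *port_str)\n{\n    STARTUPINFO si;\n    PROCESS_INFORMATION pi;\n    char cmdline[256];\n\n    ZeroMemory(&si, sizeof(si));\n    si.cb = sizeof(si);\n    ZeroMemory(&pi, sizeof(pi));\n\n    snprintf(cmdline, sizeof(cmdline), \"postgres.exe -p %s\", port_str);\n\n    if (!CreateProcess(NULL, cmdline, NULL, NULL,\n                       TRUE,    /* inherit handles, e.g. the client socket */\n                       0, NULL, NULL, &si, &pi))\n    {\n        fprintf(stderr, \"CreateProcess failed: %lu\\n\", GetLastError());\n        return -1;\n    }\n    CloseHandle(pi.hThread);\n    CloseHandle(pi.hProcess);   /* or keep it to wait on backend exit */\n    return 0;\n}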
\n\nOnce the backend and most of the tools are built without requiring the\ncygwin.dll, installation is a breeze. Just dump it somewhere and run it.\n\nA couple simple programs can be written using msvc to monitor, start and stop\nPostgreSQL. The programs will be simple using the application wizard, just make\na small dialog box application.\n\nPgaccess will provide all the GUI stuff, and we may even be able to wrap the\nmonitor code into pgaccess.\n\nThe server install can be done with install shield.\n\nThere is code that will run any program as an NT service. We can use that for\nserver installations. We can use the MSVC wizard application to pop-up in the\ntool bar.\n\nHave I missed anything?\nIs this a realistic and attainable plan?\n", "msg_date": "Fri, 10 May 2002 17:25:02 -0400", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Native Win32, How about this?" }, { "msg_contents": "> A binary version of PostgreSQL for Windows should not use the cygwin\n> dll. I know and understand there is some disagreement with this\n> position, but in this I'm sure about this.\n\nThat may ultimately be desirable.\n\nIn the short term, it is likely preferable to use cygwin.\n\nIt is only necessary to point at MySQL for an example. Cygwin is used there.\n<http://www.mysql.com/downloads/mysql-3.23.html> It is being used widely, \n\"crap\" or not.\n\nCygwin may not ultimately be the ideal thing to use; we don't yet live in \nPangloss' \"Best of All Possible Worlds,\" and thus have to live with some \nthings not being ideal.\n\nIf having the installer install Cygwin as well as the DBMS makes it easy to \nhave something usable soon, and this allows 100,000 WinFolk to try out \nPostgreSQL, then that's a Big Win. Out of 100K users, surely two or three may \nbe attracted into working on a more Panglossian solution.\n\nIt may be fair to say that none of those 100K folk would be using PostgreSQL \nto support HA applications involving hundreds of GB of data. That's _fine_.\n\nIf there are new 100K folk using PostgreSQL/cygwin, _some_ of them will \noutgrow its capabilities, and come looking for improvements.\n\nAnd as they're Windows users, accustomed to having to pay hefty amounts to \nMicrosoft to get support no better than that provided by the Psychic Friends \nNetwork (see <http://www.bmug.org/news/articles/MSvsPF.html>), they'll \ndoubtless be prepared to have to pay _something_ in order for \n\"PostgreSQL/Win3K-Enterprise Edition\" to become available.\n\nThat seems a not too unreasonable path towards the \"Best of All Possible \nWorlds.\" There may be a bit of hyperbole in the above, but any time Voltaire \ngets quoted, that's likely to happen :-).\n--\n(reverse (concatenate 'string \"gro.gultn@\" \"enworbbc\"))\nhttp://www.cbbrowne.com/info/wp.html\nEagles may soar, but weasels don't get sucked into jet engines.\n\n-- \n(concatenate 'string \"cbbrowne\" \"@ntlug.org\")\nhttp://www.cbbrowne.com/info/multiplexor.html\nIt's a little known fact that the Dark Ages were caused by the Y1K\nproblem.", "msg_date": "Sat, 11 May 2002 11:26:18 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Native Win32, How about this? " }, { "msg_contents": "On Thu, May 02, 2002 at 01:14:34PM -0300, Marc G. Fournier wrote:\n> On 2 May 2002, Hannu Krosing wrote:\n...\n> > BTW, I think PostgreSQL does _not_ need any mission statement.\n> \n> Nope, it doesn't ... never did before, don't know why it does suddenly ...\n> do any other open source projects have one? 
Its kinda fun to see what ppl\n> banter around, but I can't see it being useful to adopt any single one,\n> considering I can't see *everyone* agreeing with it ...\n\nQuick - get out the Dilbert!\n\nPatick\n", "msg_date": "Sun, 12 May 2002 18:45:11 +0100", "msg_from": "Patrick Welche <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL mission statement?" }, { "msg_contents": "mlw,\n\nOn Fri, May 10, 2002 at 05:25:02PM -0400, mlw wrote:\n> A binary version of PostgreSQL for Windows should not use the cygwin\n> dll. I know and understand there is some disagreement with this position,\n> but in this I'm sure about this.\n\nSorry, but I'm not going to touch the above -- even with a ten foot pole.\nOr, at least try not to... :,)\n\n> I believe we can use the cygwin development environment, and direct gcc\n> not to link with the cygwin dll. Last time I looked it was a command line\n> option. This will produce a native windows application. No emulation,\n> just a standard C runtime.\n\nYes, the above mentioned option is \"-mno-cygwin\".\n\n> Some of the hits will be file path manipulation, '/' vs '\\', the notion of\n> drive letters, and case insensitivity in file names. \n\nCase insensitivity is typically \"enabled\" regardless. Unless you are\nreferring to CYGWIN=check_case:strict, but almost no one uses this setting.\n\nJust to be explicit, another hit will be the loss of Posix.\n\n> [snip]\n> \n> I think a huge time savings can be had by avoiding rewriting everything\n> for the Microsoft build environment.\n\nYes, you should use Cygwin and gcc -mno-cygwin or MSYS and Mingw.\n\n> As far as I know, and please correct me if I'm wrong, code produced by\n> the cygwin gcc is freely distributable and need not be GPL.\n\nThe above is true only with gcc -mno-cygwin or Mingw code. Any code\nproduced by the normal Cygwin gcc (and hence, linked against cygwin1.dll)\nis effectively GPL'd or at least required to be open source.\n\n> [snip]\n\nJason\n", "msg_date": "Mon, 13 May 2002 09:59:04 -0400", "msg_from": "Jason Tishler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Native Win32, How about this?" }, { "msg_contents": "Hi, \n\nSomething is pretty broken in redhat 7.3 but I'm not sure what and I\ndon't have time to dig further\n\nmasm@test=# select cast('1967-04-18' as timestamptz);\n timestamptz\n------------------------\n 1967-04-17 18:00:00-06\n(1 row)\n\nmasm@test=# select cast(cast('1967-04-18' as date) as timestamp);\nERROR: Unable to convert date to tm\nmasm@test=#\n\nBoth cases works correctly in Redhat 7.2. Sorry if this is not the\ncorrect forum however an alert is nice for people planning an Redhat\nupgrade. I hope to see pretty soon an update since I don't want to\ndowngrade my server.\n\nRegards,\nManuel.\n", "msg_date": "17 May 2002 23:34:42 -0500", "msg_from": "Manuel Sugawara <[email protected]>", "msg_from_op": false, "msg_subject": "Redhat 7.3 time manipulation bug" }, { "msg_contents": "On Saturday 18 May 2002 12:34 am, Manuel Sugawara wrote:\n> Something is pretty broken in redhat 7.3 but I'm not sure what and I\n> don't have time to dig further\n\n> Both cases works correctly in Redhat 7.2. Sorry if this is not the\n> correct forum however an alert is nice for people planning an Redhat\n> upgrade. 
I hope to see pretty soon an update since I don't want to\n> downgrade my server.\n\nFiled on Red Hat's Bugzilla system as bug# 65227, as I can reliably reproduce \nthis bug here, and PostgreSQL 7.2.1 on Red Hat 6.2 on SPARC does not exhibit \nthe bug.\n\nTom (or Thomas):\n\nWhere would we go to ferret out the source of this bug? More to the point: we \nneed a test case in C that could expose this as a glibc bug. Methinks Red \nHat would want this bug ferreted out, as it would likely cause problems with \nRedHat Database on RH 7.3's glibc.....\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 20 May 2002 13:58:12 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "Lamar Owen <[email protected]> writes:\n\n> Filed on Red Hat's Bugzilla system as bug# 65227, as I can reliably\n> reproduce this bug here, and PostgreSQL 7.2.1 on Red Hat 6.2 on\n> SPARC does not exhibit the bug.\n\nThanks for filing that report. I couldn't remember what I had forgotten\n;-)\n\nRegards,\nManuel.\n", "msg_date": "20 May 2002 13:22:14 -0500", "msg_from": "Manuel Sugawara <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "Lamar Owen <[email protected]> writes:\n\n> Where would we go to ferret out the source of this bug? More to the\n> point: we need a test case in C that could expose this as a glibc\n> bug. \n\nSeems like mktime(3) is having problems with dates before the\nepoch. Attached is a program to test this. The glibc source is now\ndownloading; I will try to hunt down this bug but not until the next\nweek.\n\nRegards,\nManuel.", "msg_date": "20 May 2002 19:08:07 -0500", "msg_from": "Manuel Sugawara <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "On Monday 20 May 2002 08:08 pm, Manuel Sugawara wrote:\n> > Where would we go to ferret out the source of this bug? More to the\n> > point: we need a test case in C that could expose this as a glibc\n> > bug.\n\n> Seems like mktime(3) is having problems with dates before the\n> epoch. Attached is a program to test this. The glibc source is now\n> downloading; I will try to hunt down this bug but not until the next\n> week.\n\nIt's not a bug. At least not according to the ISO C standard. See \nhttp://www.opengroup.org/onlinepubs/007904975/basedefs/xbd_chap04.html#tag_04_14 \nfor the definition of 'Seconds Since the Epoch', then cross-reference to the \nman page of mktime.\n\nI don't like it any more than you do, but that is the letter of the standard. \n\nThomas, any comments?\n\nOur implementation is broken, then. Thomas, is this fixable for a 7.2.x \nrelease, or something for 7.3?\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 20 May 2002 23:12:21 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "...\n> Our implementation is broken, then. Thomas, is this fixable for a 7.2.x\n> release, or something for 7.3?\n\n\"Our implementation is broken, then\" is really not a conclusion to be\nreached. The de facto behavior of mktime() on all world-class\nimplementations is to support pre-1970 times. 
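\nFor anyone who wants to check their own libc: the test program attached earlier in this thread is not reproduced in the archive, so the following is only a guess at its shape (a minimal pre-1970 mktime() probe, not the actual attachment):\n\n#include <stdio.h>\n#include <time.h>\n\nint main(void)\n{\n    struct tm tm = {0};\n    time_t t;\n\n    tm.tm_year = 67;    /* 1967, i.e. well before the epoch */\n    tm.tm_mon = 3;      /* April */\n    tm.tm_mday = 18;\n    tm.tm_isdst = -1;   /* let the library decide about DST */\n\n    t = mktime(&tm);\n    printf(\"mktime() for 1967-04-18 gives %ld\\n\", (long) t);\n    /* Traditional libcs return a large negative number of seconds here;\n       a strictly ISO-reading mktime(), like the glibc under discussion,\n       returns -1 instead. */\n    return 0;\n}\n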
This has been true forever\nafaik, certainly far longer than PostgreSQL (or Postgres) has been in\nexistance.\n\nAny standard which chooses to ignore pre-1970 dates is fundamentally\nbroken imho, and I'm really ticked off that the glibc folks have so\nglibly introduced a change of this nature and magnitude without further\ndiscussion.\n\n - Thomas\n", "msg_date": "Tue, 21 May 2002 06:30:06 -0700", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "Lamar Owen <[email protected]> writes:\n\n> On Monday 20 May 2002 08:08 pm, Manuel Sugawara wrote:\n> > > Where would we go to ferret out the source of this bug? More to the\n> > > point: we need a test case in C that could expose this as a glibc\n> > > bug.\n> \n> > Seems like mktime(3) is having problems with dates before the\n> > epoch. Attached is the a program to test this. The glibc source is now\n> > downloading I will try to hunt down this bug but not until the next\n> > week.\n> \n> It's not a bug. At least not according to the ISO C standard. See\n> http://www.opengroup.org/onlinepubs/007904975/basedefs/xbd_chap04.html#tag_04_14\n> for the definition of 'Seconds Since the Epoch', then\n> cross-reference to the man page of mktime.\n\nI see. This behavior is consistent with the fact that mktime is\nsupposed to return -1 on error, but then is broken in every other Unix\nimplementation that I know.\n\nAny other workaround than downgrade or install FreeBSD?\n\nRegards,\nManuel.\n", "msg_date": "21 May 2002 10:04:00 -0500", "msg_from": "Manuel Sugawara <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "On Tuesday 21 May 2002 11:04 am, Manuel Sugawara wrote:\n> I see. This behavior is consistent with the fact that mktime is\n> supposed to return -1 on error, but then is broken in every other Unix\n> implementation that I know.\n\n> Any other workaround than downgrade or install FreeBSD?\n\nComplain to Red Hat. Loudly. However, as this is a glibc change, other \ndistributors are very likely to fold in this change sooner rather than later.\n\nTry using timestamp without timezone?\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 21 May 2002 12:29:45 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "On Tue, 21 May 2002, Lamar Owen wrote:\n\n> On Tuesday 21 May 2002 11:04 am, Manuel Sugawara wrote:\n> > I see. This behavior is consistent with the fact that mktime is\n> > supposed to return -1 on error, but then is broken in every other Unix\n> > implementation that I know.\n> \n> > Any other workaround than downgrade or install FreeBSD?\n> \n> Complain to Red Hat. Loudly. However, as this is a glibc change, other \n> distributors are very likely to fold in this change sooner rather than \n> later. \n\nRelying on nonstandardized/nondocumented behaviour is a program bug, not a \nglibc bug. PostgreSQL needs fixing. 
Since we ship both, we're looking at \nit, but glibc is not the component with a problem.\n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n\n", "msg_date": "Tue, 21 May 2002 12:31:14 -0400 (EDT)", "msg_from": "=?ISO-8859-1?Q?Trond_Eivind_Glomsr=F8d?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "On Tuesday 21 May 2002 12:31 pm, Trond Eivind Glomsr�d wrote:\n> On Tue, 21 May 2002, Lamar Owen wrote:\n> > However, as this is a glibc change, other\n> > distributors are very likely to fold in this change sooner rather than\n> > later.\n\n> Relying on nonstandardized/nondocumented behaviour is a program bug, not a\n> glibc bug. PostgreSQL needs fixing. Since we ship both, we're looking at\n> it, but glibc is not the component with a problem.\n\nIn your opinion. Not everyone agrees with the new glibc behavior. In fact, \nsome here are rather livid about it. But that's a digression. The matter at \nhand is making it work right again. \n\nIt seems to me someone should have thought about this before making the glibc \nchange that had the potential for breaking a large application -- regardless \nof who is at fault, it's egg in Red Hat's face for not catching it sooner \n(and egg in my face as well, as I must admit that I of all people should have \ncaught this earlier). \n\nIs the change in glibc behavior noted in the release notes? The man page \nisn't changed either, from Red Hat 6.2, in fact. The only change is adhering \nto the ISO definition of 'Seconds Since the Epoch' rather than the defacto \nindustry-accepted definition that has been with us a very long time. \n\nLike PostgreSQL's refusal to upgrade in a sane manner, this should have \nreceived release notes billing, IMHO. Then I, nor anyone else, could \nreasonably complain.\n\nBut this does show the need for the regression testing packge, no? :-) And it \nalso shows the danger in becoming too familiar with certain regression tests \nfailing due to locale -- to the extent that a locale issue was the first \nthing thought of. To the extent that I'm going to change my build process to \ninclude regression testing as a part of the build -- and any failure will \nabort the build. Maybe that will get my attention. And anyone else's \nrebuilding from the source RPM. SuSE already does this. I wonder how \nthey've handled this issue with 8.0? \n\nIn any case, this isn't just a Red Hat problem, as it's going to cause \nproblems with the use of timestamps on ANY glibc 2.2.5 dist. That's more \nthan Red Hat, by a large margin.\n\nAnd I think that it is naive to think that PostgreSQL is the only program that \nhas used mktime's behavior in the negative-time_t zone. Other packages are \nlikely impacted, to a greater or lesser extent.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 21 May 2002 13:24:41 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "On Tue, 2002-05-21 at 21:31, Trond Eivind Glomsrød wrote:\n> On Tue, 21 May 2002, Lamar Owen wrote:\n> \n> > On Tuesday 21 May 2002 11:04 am, Manuel Sugawara wrote:\n> > > I see. This behavior is consistent with the fact that mktime is\n> > > supposed to return -1 on error, but then is broken in every other Unix\n> > > implementation that I know.\n> > \n> > > Any other workaround than downgrade or install FreeBSD?\n> > \n> > Complain to Red Hat. Loudly. 
However, as this is a glibc change, other \n> > distributors are very likely to fold in this change sooner rather than \n> > later. \n> \n> Relying on nonstandardized/nondocumented behaviour is a program bug, not a \n> glibc bug. PostgreSQL needs fixing. Since we ship both, we're looking at \n> it, but glibc is not the component with a problem.\n\nStill it seems kind of silly to have a function that works differently\nfrom all other implementations and forces people to use their own\nfunction of the same name (lifted from BSD and also compliant).\n\nSpeaking of nonstandardized/nondocumented behaviour, I read from \"The\nOpen Sources\" book that if you implement TCP/IP strictly from the RFCs\nthen it won't interoperate with any other TCP/IP stack. \n\nI hope that Red Hat is not going to be \"standards compliant\" here ;)\n\n--------------\nHannu\n\n\n", "msg_date": "21 May 2002 23:54:06 +0500", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "Trond Eivind Glomsrød <[email protected]> writes:\n\n> Relying on nonstandardized/nondocumented behaviour is a program bug,\n> not a glibc bug.\n\nThe question is: how this thing didn't show up before? ISTM that\nsomeone is not doing his work correctly.\n\n> PostgreSQL needs fixing.\n\nArguably, however, right now is *a lot easier* to fix glibc, and it's\nreally needed for production systems using postgreSQL and working on\nRedHat. But redhat users don't matter, the most important thing is\n*strict* conformance to standards, right?\n\n> Since we ship both, we're looking at it, but glibc is not the\n ^^^^^^^^^^^^^^^^^^^\nThe sad truth is: you only answered when the 'Complain to Red Hat'\nstatement appeared, not a single word before and not a single word\nwhen the bug report was closed. I'm really disappointed.\n\nThe nice thing is: glibc is free software and we don't have to wait or\nrely on some of the redhat staff members (thank god) for this to get\nfixed or say: for the standard to get extended again. The patch to\nglibc is pretty straightforward and attached.\n\nRegards,\nManuel.", "msg_date": "21 May 2002 13:59:39 -0500", "msg_from": "Manuel Sugawara <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "On 21 May 2002, Manuel Sugawara wrote:\n\n> Trond Eivind Glomsrød <[email protected]> writes:\n> \n> > Relying on nonstandardized/nondocumented behaviour is a program bug,\n> > not a glibc bug.\n> \n> The question is: how this thing didn't show up before? ISTM that\n> someone is not doing his work correctly.\n\nFWIW, I ran the regression tests some time ago (probably before that \nchange to glibc). Since the tests are known \nto be broken wrt. time issues anyway (as well as currency, math and sorting), \nit's easy to overlook.\n\n> > PostgreSQL needs fixing.\n> \n> Arguably, however, right now is *a lot easier* to fix glibc, and it's\n> really needed for production systems using postgreSQL and working on\n> RedHat. \n\nYou're not \"fixing\" glibc, you're reintroducing non-standardized, upstream \nremoved behaviour. That's typically a very bad thing. If anything, it \ndemonstrates the importance of not using or relying on \nunstandardized/undocumented behaviour (and given that time_t is pretty \nrestrictive anyway, you'll need something else to keep dates. 
It doesn't \neven cover all living people, and definitely not historical dates).\n\n> > Since we ship both, we're looking at it, but glibc is not the\n> ^^^^^^^^^^^^^^^^^^^\n> The sad true is: you only answered when the 'Complain to Red Hat'\n> statement appeared, not a single word before and not a single word\n> when the bug report were closed. I'm really disappointed.\n\nThe bug wasn't open for long, and was closed by someone else.\n\n> The nice thing is: glibc is free software \n\nAlso, notice that this was where the fix came from: The upstream \nmaintainers (some of whom work for us, others don't).\n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n\n", "msg_date": "Tue, 21 May 2002 15:09:34 -0400 (EDT)", "msg_from": "=?ISO-8859-1?Q?Trond_Eivind_Glomsr=F8d?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "On Tuesday 21 May 2002 03:09 pm, Trond Eivind Glomsr�d wrote:\n> FWIW, I ran the regressions tests some time ago(probably before that\n> change to glibc) . Since the tests are known\n> to be broken wrt. time issues anyway (as well as currency, math and\n> sorting), it's easy to overlook.\n\nThe time tests have never broken in this manner before on Red Hat. When the \noriginal regression failure report was posted, I saw right away that this was \nnot the run of the mill locale issue -- this was a real problem. Regression \ntesting must become a regularly scheduled activity, methinks. In the RPM \nbuild process, we can control the locale to the extent that the tests will \npass (except on DST days) reliably. I am going to implement this for my next \nRPM set. Along with a patch to this problem -- we _can_ patch around this, I \nbelieve, but it's not likely going to be an easy one.\n\nWe have gotten blind to the regular locale-induced failures -- this is not a \ngood thing.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 21 May 2002 15:44:40 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "=?ISO-8859-1?Q?Trond_Eivind_Glomsr=F8d?= <[email protected]> writes:\n> Relying on nonstandardized/nondocumented behaviour is a program bug, not a \n> glibc bug. PostgreSQL needs fixing. Since we ship both, we're looking at \n> it, but glibc is not the component with a problem.\n\nA library that can no longer cope with dates before 1970 is NOT my idea\nof a component without a problem. We will be looking at ways to get\naround glibc's breakage at the application level, since we have little\nalternative other than to declare Linux an unsupported platform;\nbut it's still glibc (and the ISO spec:-() that are broken.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 21 May 2002 17:14:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug " }, { "msg_contents": "On Tue, 2002-05-21 at 18:24, Lamar Owen wrote:\n> In any case, this isn't just a Red Hat problem, as it's going to cause \n> problems with the use of timestamps on ANY glibc 2.2.5 dist. That's more \n> than Red Hat, by a large margin.\n\nI'm running glibc 2.2.5 on Debian and all regression tests pass OK (with\nmake check). I don't see any note in the glibc Debian changelog about\nreversing an upstream change to mktime().\n\nI missed the first messages in this thread and I can't find them in the\narchive. 
What should I be looking for to see if I have the problem you\nhave encountered or to see why I don't have it if I ought to have?\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n\n \"O come, let us worship and bow down; let us kneel\n before the LORD our maker.\" Psalms 95:6", "msg_date": "21 May 2002 23:09:58 +0100", "msg_from": "Oliver Elphick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "On Tuesday 21 May 2002 06:09 pm, Oliver Elphick wrote:\n> On Tue, 2002-05-21 at 18:24, Lamar Owen wrote:\n> > In any case, this isn't just a Red Hat problem, as it's going to cause\n> > problems with the use of timestamps on ANY glibc 2.2.5 dist. That's more\n> > than Red Hat, by a large margin.\n\n> I'm running glibc 2.2.5 on Debian and all regression tests pass OK (with\n> make check). I don't see any note in the glibc Debian changelog about\n> reversing an upstream change to mktime().\n\n> I missed the first messages in this thread and I can't find them in the\n> archive. What should I be looking for to see if I have the problem you\n> have encountered or to see why I don't have it if I ought to have?\n\nHmmm. Compile and run the attached program. If you get -1, it's the new \nbehavior. It might be interesting to see the differences here.....\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11", "msg_date": "Tue, 21 May 2002 18:47:05 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "Manuel Sugawara <[email protected]> writes:\n \n> +#if 0\n> /* Only years after 1970 are defined.\n> If year is 69, it might still be representable due to\n> timezone differnces. */\n> if (year < 69)\n> return -1;\n> +#endif\n\nHm. If that fixes it, it implies that all the other support for\npre-1970 dates is still there (notably, entries in the timezone tables).\n\nShould we assume that future glibc releases will rip out the timezone\ndatabase entries and other support for pre-1970 dates? Or is the\nbreakage going to stop with mktime?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 21 May 2002 19:33:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug " }, { "msg_contents": "Lamar Owen writes:\n\n> SuSE already does this. I wonder how they've handled this issue with\n> 8.0?\n\nTheir glibc doesn't have that problem.\n\nPersonally, I think if you need time (zone) support before 1970, obtain\none of the various operating systems that support it. There's little\nvalue in hacking around it in PostgreSQL, since the rest of your system\nwill be broken as well.\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Wed, 22 May 2002 03:09:11 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "> > SuSE already does this. I wonder how they've handled this issue with\n> > 8.0?\n> Their glibc doesn't have that problem.\n\nMy strong recollection is that a SuSE guy was the one applying the\nchange. So this is coming to those systems too. I may not remember that\ncorrectly though...\n\n> Personally, I think if you need time (zone) support before 1970, obtain\n> one of the various operating systems that support it. 
There's little\n> value in hacking around it in PostgreSQL, since the rest of your system\n> will be broken as well.\n\nYes, I'm afraid I agree. In practice, maybe most applications won't\nnotice. But after getting the Linux time zone databases set up to be\nbetter than most (Solaris has the best I've found for fidelity to\npre-1970 year-to-year conventions) throwing that work away is just plain\nsilly. I consider this a major gaff on the part of the commercial Linux\nhouses to not see this coming and to contribute to a better solution.\n\n - Thomas\n", "msg_date": "Tue, 21 May 2002 19:01:55 -0700", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "On Tue, 2002-05-21 at 23:47, Lamar Owen wrote:\n> Hmmm. Compile and run the attached program. If you get -1, it's the new \n> behavior. It might be interesting to see the differences here.....\n\n$ a.out\nThe system thinks 11/30/1969 is a timestamp of -176400 \n$ dpkg -l libc6\n...\n||/ Name Version Description\n+++-==============-==============-============================================\nii libc6 2.2.5-6 GNU C Library: Shared libraries and Timezone\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n\n \"We are troubled on every side, yet not distressed; we \n are perplexed, but not in despair; persecuted, but not\n forsaken; cast down, but not destroyed; Always bearing\n about in the body the dying of the Lord Jesus, that \n the life also of Jesus might be made manifest in our \n body.\" II Corinthians 4:8-10", "msg_date": "22 May 2002 05:18:43 +0100", "msg_from": "Oliver Elphick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "On Wed, 2002-05-22 at 02:14, Tom Lane wrote:\n> =?ISO-8859-1?Q?Trond_Eivind_Glomsr=F8d?= <[email protected]> writes:\n> > Relying on nonstandardized/nondocumented behaviour is a program bug, not a \n> > glibc bug. PostgreSQL needs fixing. Since we ship both, we're looking at \n> > it, but glibc is not the component with a problem.\n> \n> A library that can no longer cope with dates before 1970 is NOT my idea\n> of a component without a problem. We will be looking at ways to get\n> around glibc's breakage at the application level, since we have little\n> alternative other than to declare Linux an unsupported platform;\n> but it's still glibc (and the ISO spec:-() that are broken.\n\nIIRC the spec is not _really_ broken - it still allows the correct\nbehaviour :)\n\nThe fact the ISO spec is broken usually means that at least one of the\nbig vendors involved in ISO spec creation must have had a broken\nimplementation at that time.\n\nMost likely they have fixed it by now ...\n\nDoes anyone know _any_ other libc that has this behaviour ?\n\n--------------\nHannu\n\n\n", "msg_date": "22 May 2002 09:33:45 +0500", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "> IIRC the spec is not _really_ broken - it still allows the correct\n> behaviour :)\n\nYes.\n\n> The fact the ISO spec is broken usually means that at least one of the\n> big vendors involved in ISO spec creation must have had a broken\n> implementation at that time.\n\nRight. IBM.\n\n> Most likely they have fixed it by now ...\n\nNope, though I don't know for sure. 
Anyone here have a recent AIX\nmachine to test?\n\n> Does anyone know _any_ other libc that has this behaviour ?\n\nAIX and (I think) Irix.\n\nTrond, do you have a suggestion on how to get this addressed at the\nglibc level? Does someone within RH participate in glibc development? If\nso, can we get them to champion changes which would comply with the\nstandard but remove this arbitrary breakage?\n\nThe changes would involve returning -1 from mktime() for dates before\n1970, and using the tm_isdst flag to indicate whether a time zone\ntranslation was not possible.\n\n - Thomas\n", "msg_date": "Wed, 22 May 2002 06:30:05 -0700", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "Thomas Lockhart writes:\n > Right. IBM.\n > > Most likely they have fixed it by now ...\n > Nope, though I don't know for sure. Anyone here have a recent AIX\n > machine to test?\n\nWell on AIX 4.3.3 the output from Lamar's earlier test program is:\n\n The system thinks 11/30/1969 is a timestamp of -1\n\nand tm_isdst is left at -1...\n\nI could boot the machine into 5.0 too, but going by the AIX 5L\nmanpages it still returns -1:\n\n Note: The mktime subroutine cannot convert time values before\n 00:00:00 UTC, January 1, 1970 and after 03:14:07 UTC, January 19,\n 2038.\n\nAnd getting an Irix 5.3 box up and running would be a chore!\n\nLee.\n", "msg_date": "Wed, 22 May 2002 14:57:25 +0100", "msg_from": "Lee Kindness <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "...\n> > AIX and (I think) Irix.\n> How do we currently support AIX/Irix ?\n\nDates and times prior to 1970 have no time zone (they are in GMT, as is\nthe case for all platforms on dates before 1903 and after 2038). We have\nseparate regression test results for those platforms.\n\n - Tom\n", "msg_date": "Wed, 22 May 2002 06:58:06 -0700", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "On Wed, 22 May 2002, Thomas Lockhart wrote:\n\n> > IIRC the spec is not _really_ broken - it still allows the correct\n> > behaviour :)\n> \n> Yes.\n> \n> > The fact the ISO spec is broken usually means that at least one of the\n> > big vendors involved in ISO spec creation must have had a broken\n> > implementation at that time.\n> \n> Right. IBM.\n> \n> > Most likely they have fixed it by now ...\n> \n> Nope, though I don't know for sure. Anyone here have a recent AIX\n> machine to test?\n> \n> > Does anyone know _any_ other libc that has this behaviour ?\n> \n> AIX and (I think) Irix.\n> \n> Trond, do you have a suggestion on how to get this addressed at the\n> glibc level? Does someone within RH participate in glibc development?\n\nJakub Jelinek, Ulrich Drepper and others.\n\n> If so, can we get them to champion changes which would comply with the\n> standard but remove this arbitrary breakage?\n\nUnlikely. They already saw (and participated, at least Ulrich) a thread on \nthis with Lamar. 
Their take is \"this is the standard, you should do what \nthe standard says and not rely on \nundocumented, non-standardized sideeffects.\n\n> The changes would involve returning -1 from mktime() for dates before\n> 1970, and using the tm_isdst flag to indicate whether a time zone\n> translation was not possible.\n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n\n", "msg_date": "Wed, 22 May 2002 10:25:02 -0400 (EDT)", "msg_from": "=?ISO-8859-1?Q?Trond_Eivind_Glomsr=F8d?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "On Wed, 2002-05-22 at 15:30, Thomas Lockhart wrote:\n> > IIRC the spec is not _really_ broken - it still allows the correct\n> > behaviour :)\n> \n> Yes.\n> \n> > The fact the ISO spec is broken usually means that at least one of the\n> > big vendors involved in ISO spec creation must have had a broken\n> > implementation at that time.\n> \n> Right. IBM.\n> \n> > Most likely they have fixed it by now ...\n> \n> Nope, though I don't know for sure. Anyone here have a recent AIX\n> machine to test?\n> \n> > Does anyone know _any_ other libc that has this behaviour ?\n> \n> AIX and (I think) Irix.\n\nHow do we currently support AIX/Irix ?\n\n----------------\nHannu\n\n\n", "msg_date": "22 May 2002 16:48:54 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "On Wed, 2002-05-22 at 08:00, Thomas Lockhart wrote:\n> ...\n> > > If so, can we get them to champion changes which would comply with the\n> > > standard but remove this arbitrary breakage?\n> > Unlikely. They already saw (and participated, at least Ulrich) a thread on\n> > this with Lamar. Their take is \"this is the standard, you should do what\n> > the standard says and not rely on undocumented, non-standardized sideeffects.\n> \n> OK. They must be new guys.\n\n:-) Very funny.\n\n-- \n---------------. ,-. 1325 Chesapeake Terrace\nUlrich Drepper \\ ,-------------------' \\ Sunnyvale, CA 94089 USA\nRed Hat `--' drepper at redhat.com `------------------------", "msg_date": "22 May 2002 10:12:31 -0700", "msg_from": "Ulrich Drepper <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "On Wednesday 22 May 2002 01:12 pm, Ulrich Drepper wrote:\n> On Wed, 2002-05-22 at 08:00, Thomas Lockhart wrote:\n> > > > If so, can we get them to champion changes which would comply with\n> > > > the standard but remove this arbitrary breakage?\n\n> > > Unlikely. They already saw (and participated, at least Ulrich) a thread\n> > > on this with Lamar. Their take is \"this is the standard, you should do\n> > > what the standard says and not rely on undocumented, non-standardized\n> > > sideeffects.\n\n> > OK. They must be new guys.\n\n> :-) Very funny.\n\nWhat isn't funny is Oliver Elphick's results on Debian, running glibc 2.2.5 \n(same as Red Hat 7.3's version). They are different. 
And, IMO, those \nresults are the 'expected' results on a unixoid system, ISO or no ISO.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Wed, 22 May 2002 13:51:20 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "On Wed, 2002-05-22 at 10:51, Lamar Owen wrote:\n\n> What isn't funny is Oliver Elphick's results on Debian, running glibc 2.2.5 \n> (same as Red Hat 7.3's version).\n\nThis is a completely different version. Once Debian updates (in a few\nyears) they'll get the same result.\n\nIf you are misusing interfaces you get what you deserve. At no time was\nit correct to use these functions for general date manipulation. It\nalways only was allowed to use them to represent system times and there\nwas no Unix system before the epoch. Therefore you argumentation is\ncompletely wrong.\n\nIf you need date manipulation write your own code which work for all the\ntimes you want to represent.\n\n-- \n---------------. ,-. 1325 Chesapeake Terrace\nUlrich Drepper \\ ,-------------------' \\ Sunnyvale, CA 94089 USA\nRed Hat `--' drepper at redhat.com `------------------------", "msg_date": "22 May 2002 10:58:15 -0700", "msg_from": "Ulrich Drepper <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "Ulrich Drepper <[email protected]> writes:\n> If you are misusing interfaces you get what you deserve. At no time was\n> it correct to use these functions for general date manipulation. It\n> always only was allowed to use them to represent system times and there\n> was no Unix system before the epoch. Therefore you argumentation is\n> completely wrong.\n\n> If you need date manipulation write your own code which work for all the\n> times you want to represent.\n\nWe may indeed end up doing that, if glibc fails to provide the\nfunctionality we need; but your argument is nonsense. Unix systems have\n*always* interpreted time_t as a signed offset from the epoch. Do you\nreally think that when Unixen were first built in the early 70s, there\nwas no interest in working with pre-1970 dates? Hardly likely.\n\nglibc has just taken a large step backwards by comparison to every other\nlibc on the planet. Claiming that you are okay because you conform to\na lowest-common-denominator ISO spec is beside the point. ISO specs are\nonly pieces of paper. You have removed functionality that is de facto\nstandard, important in practice, and *provided by most of your\ncompetition*. People will start going to the competition. Or perhaps\nthere will be a glibc fork.\n\nPostgres is not the only application that will be complaining loudly\nabout this change; anyone who has to deal with historical information\nis going to be unhappy. We just happen to be the earliest bearers of\nthe bad news. But you will end up reverting this change due to pushback\nfrom users. Want to make a side bet?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 22 May 2002 14:23:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug " }, { "msg_contents": "On Wed, 2002-05-22 at 11:23, Tom Lane wrote:\n\n> Unix systems have\n> *always* interpreted time_t as a signed offset from the epoch.\n\nNo. This always was an accident if it happens.\n\n> Do you\n> really think that when Unixen were first built in the early 70s, there\n> was no interest in working with pre-1970 dates? 
Hardly likely.\n\nThere never were files or any system events with these dates. Yes.\n\nAnd just to educate you and your likes: the majority of systems on this\nplanet use mktime this way. I hate using this as an argument, but\nbeside major Unixes M$ systems also do this.\n\n> But you will end up reverting this change due to pushback\n> from users. Want to make a side bet?\n\nSure. Especially not everybody is that stubborn.\n\n-- \n---------------. ,-. 1325 Chesapeake Terrace\nUlrich Drepper \\ ,-------------------' \\ Sunnyvale, CA 94089 USA\nRed Hat `--' drepper at redhat.com `------------------------", "msg_date": "22 May 2002 11:40:58 -0700", "msg_from": "Ulrich Drepper <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "On Wednesday 22 May 2002 01:58 pm, Ulrich Drepper wrote:\n> On Wed, 2002-05-22 at 10:51, Lamar Owen wrote:\n> > What isn't funny is Oliver Elphick's results on Debian, running glibc\n> > 2.2.5 (same as Red Hat 7.3's version).\n\n> This is a completely different version. Once Debian updates (in a few\n> years) they'll get the same result.\n\nA completely different version with the same version number? Or is this a \ncase of a Red Hat version number really meaning something different \nShouldn't glibc 2.2.5 be the same as glibc 2.2.5 regardless of distribution?\n\nAnd who's to stop them patching out the new stuff and reinstating the old \nbehavior? :-)\n\n> If you are misusing interfaces you get what you deserve. At no time was\n> it correct to use these functions for general date manipulation. It\n> always only was allowed to use them to represent system times and there\n> was no Unix system before the epoch. Therefore you argumentation is\n> completely wrong.\n\nIf it is completely wrong, then tell Sun, HP, and all the rest of the Unix \nvendors, including the authors of the original AT&T code as lifted by \nBerkeley, that they're wrong and you're right. They'll laugh you to scorn.\n\nAnd just which 'major Unixen' other than AIX and Irix that follow the letter \nof the 'seconds since the epoch' definition of the ISO standard?\n\n> If you need date manipulation write your own code which work for all the\n> times you want to represent.\n\nThe mktime bug doesn't effect our representation of dates and times other than \nthe timezone at this time. What's aggravating to me is the surprise factor.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Wed, 22 May 2002 14:58:38 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "This thread is getting pretty annoying rather than constructive. By\nthe mean time I can see the users of many db's running under linux\nloudly complaining.\n\nAs a user of both products (glibc and postgres), I would like to see a\ngood compromise in both sides. For instance: postgreSQL will implement\nit's own time engine in one or to releases (In a few months I can do\nit if no one else volunteers) and glibc will revert this change for\none or two releases (I can do it right now). That way everyone will be\nhappy; particularly me and many others using glibc and postgres.\n\nRegards,\nManuel.\n", "msg_date": "22 May 2002 14:09:18 -0500", "msg_from": "Manuel Sugawara <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "> > OK. They must be new guys.\n> :-) Very funny.\n\nHey, it worked. 
Got you out of the woodwork. :))\n\n - Thomas\n", "msg_date": "Wed, 22 May 2002 17:20:21 -0700", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "> On Wed, 2002-05-22 at 15:30, Thomas Lockhart wrote:\n> > > IIRC the spec is not _really_ broken - it still allows the correct\n> > > behaviour :)\n> > \n> > Yes.\n> > \n> > > The fact the ISO spec is broken usually means that at least one of the\n> > > big vendors involved in ISO spec creation must have had a broken\n> > > implementation at that time.\n> > \n> > Right. IBM.\n> > \n> > > Most likely they have fixed it by now ...\n> > \n> > Nope, though I don't know for sure. Anyone here have a recent AIX\n> > machine to test?\n> > \n> > > Does anyone know _any_ other libc that has this behaviour ?\n> > \n> > AIX and (I think) Irix.\n> \n> How do we currently support AIX/Irix ?\n\nWhy should we rely on broken glibc and the standard? Why don't we make\nour own mktime() and use it on all platforms.\n--\nTatsuo Ishii\n", "msg_date": "Thu, 23 May 2002 10:14:01 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "...\n> Why should we rely on broken glibc and the standard? Why don't we make\n> our own mktime() and use it on all platforms.\n\nThe downside to doing that is that we then take over maintenance of the\ncode and, more importantly, the timezone database.\n\nBut it might be the best thing to do. It looks like the zic package\npointed to the other day could be used, though we may have to change\nsome of the variables and entry points to avoid library conflicts. But\nwe could also blow past the y2038 limit afaict which would be nice.\n\n - Thomas\n", "msg_date": "Wed, 22 May 2002 18:51:07 -0700", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> Why should we rely on broken glibc and the standard? Why don't we make\n>> our own mktime() and use it on all platforms.\n\n> The downside to doing that is that we then take over maintenance of the\n> code and, more importantly, the timezone database.\n\n> But it might be the best thing to do.\n\nI've been sorta thinking the same thing. We could get out from under\nthe Y2038 issue, and also eliminate a whole lot of platform\ndependencies. Not to mention sillinesses like being unable to recognize\na bad timezone name when it's fed to us.\n\nExactly how much work (and code bulk) would we be taking on? I've\nnever looked at how big the timezone databases are...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 22 May 2002 22:19:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug " }, { "msg_contents": "Tom Lane <[email protected]> wrote:\r\n[snip]\r\n> \r\n> Exactly how much work (and code bulk) would we be taking on? 
I've\r\n> never looked at how big the timezone databases are...\r\n> \r\n\r\nSome answers on database sizes, if this is any help...\r\nI did \"du -sh /usr/share/zoneinfo/\" on them all.\r\n\r\nOpenBSD 3.1, sparc64:\r\n1.3M /usr/share/zoneinfo/\r\n\r\nLinux, i686, oldish mandrake (6.x?), glibc 2.1.3:\r\n478k /usr/share/zoneinfo\r\n\r\nLinux, i686, newish redhat 7.2, glibc 2.2.4:\r\n4.9M /usr/share/zoneinfo\r\n\r\nLinux, alpha EV56, oldish redhat 6.2, glibc 2.1.3\r\n1.4M /usr/share/zoneinfo\r\n\r\n> regards, tom lane\r\n\r\nMagnus\r\n\r\n-- \r\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\r\n Programmer/Networker [|] Magnus Naeslund\r\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\r\n\r\n", "msg_date": "Thu, 23 May 2002 04:42:13 +0200", "msg_from": "\"Magnus Naeslund(f)\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug " }, { "msg_contents": "On Thu, May 23, 2002 at 04:42:13AM +0200, Magnus Naeslund(f) wrote:\n> Some answers on database sizes, if this is any help...\n> I did \"du -sh /usr/share/zoneinfo/\" on them all.\n> \n> OpenBSD 3.1, sparc64:\n> 1.3M /usr/share/zoneinfo/\n> \n> Linux, i686, oldish mandrake (6.x?), glibc 2.1.3:\n> 478k /usr/share/zoneinfo\n> \n> Linux, i686, newish redhat 7.2, glibc 2.2.4:\n> 4.9M /usr/share/zoneinfo\n ^^^^\n\nWhat do they do with that di?\n\n> Linux, alpha EV56, oldish redhat 6.2, glibc 2.1.3\n> 1.4M /usr/share/zoneinfo\n\nHere's what my Debian Woody system says:\n\n1.5M /usr/share/zoneinfo\n\nAnd this is with glibc 2.2.5. Of course this wouldn't be the first time\nthat RedHat uses the same version number for a different version.\n\nMichael\n-- \nMichael Meskes\[email protected]\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Thu, 23 May 2002 10:44:04 +0200", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "On Wed, May 22, 2002 at 10:58:15AM -0700, Ulrich Drepper wrote:\n> On Wed, 2002-05-22 at 10:51, Lamar Owen wrote:\n> \n> > What isn't funny is Oliver Elphick's results on Debian, running glibc 2.2.5 \n> > (same as Red Hat 7.3's version).\n> \n> This is a completely different version. Once Debian updates (in a few\n> years) they'll get the same result.\n\nUlrich, how shall I understand this? I'm pretty sure Oliver\ndoes not use a Debian 2.2 system with glibc-2.1.3 but a pretty\nup-to-date one. The glibc version in the soon to be released Woody\nrelease is 2.2.5. \n\nThis seems to be the very same version that RedHat uses. So what\ncould/should Debian update? Besides, the \"in a few years\" comment looks\nlike FUD to me. It may be a few years since we talked the last time, but\nI cannot imagine you changed that much that you spread FUD\nnowadays. So I probably misunderstood this sentence, but nevertheless\nwould like to know what Debian should update.\n\nOr do you mean that once Debian updates to glibc 2.3 (or whatever the\nnext release will be) it will show the same results? Does RedHat 7.3\nalready run on that new release? But then I would think they changed the\nversion number.\n\nMichael\n\n-- \nMichael Meskes\[email protected]\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! 
Use PostgreSQL!\n", "msg_date": "Thu, 23 May 2002 16:20:22 +0200", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "On 22 May 2002, Ulrich Drepper wrote:\n\n> On Wed, 2002-05-22 at 10:51, Lamar Owen wrote:\n>\n> > What isn't funny is Oliver Elphick's results on Debian, running glibc 2.2.5\n> > (same as Red Hat 7.3's version).\n>\n> This is a completely different version. Once Debian updates (in a few\n> years) they'll get the same result.\n>\n> If you are misusing interfaces you get what you deserve. At no time was\n> it correct to use these functions for general date manipulation. It\n> always only was allowed to use them to represent system times and there\n> was no Unix system before the epoch. Therefore you argumentation is\n> completely wrong.\n>\n> If you need date manipulation write your own code which work for all the\n> times you want to represent.\n\nWe are Redhat, you will be assimilated\n\n\n", "msg_date": "Thu, 23 May 2002 11:36:35 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "On 22 May 2002, Ulrich Drepper wrote:\n\n> On Wed, 2002-05-22 at 11:23, Tom Lane wrote:\n>\n> > Unix systems have\n> > *always* interpreted time_t as a signed offset from the epoch.\n>\n> No. This always was an accident if it happens.\n>\n> > Do you\n> > really think that when Unixen were first built in the early 70s, there\n> > was no interest in working with pre-1970 dates? Hardly likely.\n>\n> There never were files or any system events with these dates. Yes.\n>\n> And just to educate you and your likes: the majority of systems on this\n> planet use mktime this way. I hate using this as an argument, but\n> beside major Unixes M$ systems also do this.\n\nM$ systems crashes regularly too ... is Redhat going to adopt that too?\n\n", "msg_date": "Thu, 23 May 2002 11:39:38 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "> --=-Z1lifK4QZqKV8kHqHfYT\n> Content-Type: text/plain\n> Content-Transfer-Encoding: quoted-printable\n> \n> On Wed, 2002-05-22 at 10:51, Lamar Owen wrote:\n> \n> > What isn't funny is Oliver Elphick's results on Debian, running glibc 2.2=\n> .5=20\n> > (same as Red Hat 7.3's version).\n> \n> This is a completely different version. Once Debian updates (in a few\n> years) they'll get the same result.\n> \n> If you are misusing interfaces you get what you deserve. At no time was\n> it correct to use these functions for general date manipulation. It\n> always only was allowed to use them to represent system times and there\n> was no Unix system before the epoch. 
Therefore you argumentation is\n> completely wrong.\n> \n> If you need date manipulation write your own code which work for all the\n> times you want to represent.\n\nThis is indeed a problem with dates on LIBC, because even if it is \ntheoretically satisfactory to describe dates within some range within 2^31 \nseconds of 1970, that is certainly NOT satisfactory for describing all dates \nof interest unless you're being terribly parochial about what is to be \nconsidered \"of interest.\"\n\nMy father's birth falls within 2^31 seconds of 1970, but the Battle of \nAgincourt doesn't.\n\nAny backup of any Unix system in history falls within 2^31 seconds of 1970, \nbut there are people alive whose births don't.\n\nPeople get away with using Unix dates as a \"general\" date type when they don't \nhave to work outside a limited range. If/when there is a move to 64 bit \nvalues, that will provide something with enough range to cover history back to \nridiculously early times, relieving the limit.\n\nBut anybody using Unix dates as \"general dates\" has leaped into exactly the \nsame sort of trap that caused people to get so paranoid about Y2K.\n--\n(concatenate 'string \"cbbrowne\" \"@acm.org\")\nhttp://www.cbbrowne.com/info/oses.html\nDo Roman paramedics refer to IV's as \"4's\"? \n\n-- \n(concatenate 'string \"aa454\" \"@freenet.carleton.ca\")\nhttp://www.ntlug.org/~cbbrowne/linuxxian.html\n\"So, when you typed in the date, it exploded into a sheet of blue\nflame and burned the entire admin wing to the ground? Yes, that's a\nknown bug. We'll be fixing it in the next release. Until then, try not\nto use European date format, and keep an extinguisher handy.\"\n-- [email protected] (Tequila Rapide) \n\n\n", "msg_date": "Thu, 23 May 2002 11:02:03 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug " }, { "msg_contents": "> On 22 May 2002, Ulrich Drepper wrote:\n> \n> > On Wed, 2002-05-22 at 11:23, Tom Lane wrote:\n> >\n> > > Unix systems have\n> > > *always* interpreted time_t as a signed offset from the epoch.\n> >\n> > No. This always was an accident if it happens.\n> >\n> > > Do you\n> > > really think that when Unixen were first built in the early 70s, there\n> > > was no interest in working with pre-1970 dates? Hardly likely.\n> >\n> > There never were files or any system events with these dates. Yes.\n> >\n> > And just to educate you and your likes: the majority of systems on this\n> > planet use mktime this way. I hate using this as an argument, but\n> > beside major Unixes M$ systems also do this.\n> \n> M$ systems crashes regularly too ... is Redhat going to adopt that too?\n\nHarbison and Steele indicates that:\n\n \"Although the traditional return type of time is long, the value returned is \nusually of type unsigned long.\"\n\nThat is NOT a \"Linux\" reference; it was published before Linus had started \nworking on his kernel project.\n\nANSI does not indicate that time_t is a signed int, signed long, or whatever. \nIt is only defined to be an arithmetic type.\n\nIt would not be a bug for GLIBC to define time_t to be an unsigned int, with \nan epoch beginning of January 1, 1999. 
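(To make the portability point above concrete: the following is a stand-alone
probe in plain ISO C, written here purely for illustration -- it is not from
the tz package, glibc, or the PostgreSQL sources. It reports whether the local
time_t is signed and whether the local mktime() will accept a pre-1970 date
at all.)

    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    int
    main(void)
    {
        struct tm tm;
        time_t t;

        /* ANSI C only promises that time_t is an arithmetic type,
         * so check what this platform actually chose. */
        printf("time_t: %lu bytes, %s\n",
               (unsigned long) sizeof(time_t),
               ((time_t) -1 < (time_t) 0) ? "signed" : "unsigned");

        /* Does this mktime() accept a pre-1970 date, or reject it? */
        memset(&tm, 0, sizeof(tm));
        tm.tm_year = 60;            /* i.e. 1960 */
        tm.tm_mday = 1;             /* January 1st, noon */
        tm.tm_hour = 12;
        tm.tm_isdst = -1;
        t = mktime(&tm);
        printf("mktime(1960-01-01 12:00) = %ld\n", (long) t);
        return 0;
    }

(On a library that rejects pre-epoch dates the second value comes back as -1;
on one that treats time_t as a signed offset from 1970 it is a large negative
number.)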
That definition would conform to ANSI \nC.\n\nSince that definition can conform to ANSI C, and Unix systems have more \nparticularly been known to use unsigned ints with epoch of 1970, it is NOT \nreasonable to assume that time_t can be used to express dates that are at all \nancient in the past.\n\nIndeed, it is fairly _useful_ for different libc implementations to make \ndifferent assumptions about things whose definitions are permitted to vary, as \nthis makes it easier to call out as WRONG those systems that make up their own \ndefinitions.\n\nPeople will no doubt get defensive about their own non-standard \nimplementations of things; it is certainly far easier to cry \"They're trying \nto play Microsoft!\" than it is to be honest and actually look at the standards.\n--\n(concatenate 'string \"cbbrowne\" \"@ntlug.org\")\nhttp://www.cbbrowne.com/info/advocacy.html\nWhen confronted by a difficult problem, solve it by reducing it to the\nquestion, \"How would the Lone Ranger handle this?\"\n\n-- \n(reverse (concatenate 'string \"gro.mca@\" \"enworbbc\"))\nhttp://www.cbbrowne.com/info/linuxxian.html\nAs of next Monday, COMSAT will be flushed in favor of a string and two tin\ncans. Please update your software.\n\n\n", "msg_date": "Thu, 23 May 2002 12:17:09 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug " }, { "msg_contents": "On Thu, 2002-05-23 at 07:20, Michael Meskes wrote:\n\n> The glibc version in the soon to be released Woody\n> release is 2.2.5. \n\nThe version in RHL7.3 is 2.2.5-34. This is not what Debian uses. Maybe\nyou should read the changelog for the version.\n\n-- \n---------------. ,-. 1325 Chesapeake Terrace\nUlrich Drepper \\ ,-------------------' \\ Sunnyvale, CA 94089 USA\nRed Hat `--' drepper at redhat.com `------------------------", "msg_date": "23 May 2002 09:29:06 -0700", "msg_from": "Ulrich Drepper <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "On Thu, 23 May 2002 [email protected] wrote:\n\n> > On 22 May 2002, Ulrich Drepper wrote:\n> >\n> > > On Wed, 2002-05-22 at 11:23, Tom Lane wrote:\n> > >\n> > > > Unix systems have\n> > > > *always* interpreted time_t as a signed offset from the epoch.\n> > >\n> > > No. This always was an accident if it happens.\n> > >\n> > > > Do you\n> > > > really think that when Unixen were first built in the early 70s, there\n> > > > was no interest in working with pre-1970 dates? Hardly likely.\n> > >\n> > > There never were files or any system events with these dates. Yes.\n> > >\n> > > And just to educate you and your likes: the majority of systems on this\n> > > planet use mktime this way. I hate using this as an argument, but\n> > > beside major Unixes M$ systems also do this.\n> >\n> > M$ systems crashes regularly too ... is Redhat going to adopt that too?\n\n< stuff deleted >\n\n> People will no doubt get defensive about their own non-standard\n> implementations of things; it is certainly far easier to cry \"They're trying\n> to play Microsoft!\" than it is to be honest and actually look at the standards.\n\nJust to clarify, if this was directed at my comment, I wasn't the one that\nbrought up the fact that \"Redhat is trying to play Microsoft\", Ulrich was\nthe one that brought it into the argument ... I was just curious as to how\nfar they planned on getting to what M$ systems do ...\n\n", "msg_date": "Thu, 23 May 2002 21:12:26 -0300 (ADT)", "msg_from": "\"Marc G. 
Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug " }, { "msg_contents": "On Thu, 2002-05-23 at 15:20, Michael Meskes wrote:\n> On Wed, May 22, 2002 at 10:58:15AM -0700, Ulrich Drepper wrote:\n> > On Wed, 2002-05-22 at 10:51, Lamar Owen wrote:\n> > \n> > > What isn't funny is Oliver Elphick's results on Debian, running glibc 2.2.5 \n> > > (same as Red Hat 7.3's version).\n> > \n> > This is a completely different version. Once Debian updates (in a few\n> > years) they'll get the same result.\n> \n> Ulrich, how shall I understand this? I'm pretty sure Oliver\n> does not use a Debian 2.2 system with glibc-2.1.3 but a pretty\n> up-to-date one. The glibc version in the soon to be released Woody\n> release is 2.2.5. \n\nIn fact, I run \"unstable\" and the version is 2.2.5-6. I couldn't see\nany reference in the Debian changelog, but I'm wondering if the change\nwas commented out. I haven't had the time to download the source and go\nlooking.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n\n \"I will praise thee; for I am fearfully and wonderfully \n made...\" Psalms 139:14", "msg_date": "24 May 2002 07:25:53 +0100", "msg_from": "Oliver Elphick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "On Thu, May 23, 2002 at 09:29:06AM -0700, Ulrich Drepper wrote:\n> On Thu, 2002-05-23 at 07:20, Michael Meskes wrote:\n> \n> > The glibc version in the soon to be released Woody\n> > release is 2.2.5. \n> \n> The version in RHL7.3 is 2.2.5-34. This is not what Debian uses. Maybe\n> you should read the changelog for the version.\n\nSo with your arithmetics 2.2.5 != 2.2.5. Hey that's great. How about\nnaming the next Linux kernel release 2.0 just to confuse some people?\n:-)\n\nSeriously though, while this is offtopic on this list, I wonder what's\ngoing on. If I have a system with glibc 2.2.5, I cannot expect this to\nbe the same version as on another system with glibc 2.2.5. Is this the\ncorrect understanding?\n\nAnd which changelog should I read? The RedHat one, the Debian one or the\nglibc one? \n\nOr does the -34 mean more than just the RedHat version number? The\nDebian version is correctly named 2.2.5-6 where the -6 means that this\nis the 6th release of glibc 2.2.5 for Debian,\n\nMichael\n\n-- \nMichael Meskes\[email protected]\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Fri, 24 May 2002 09:10:44 +0200", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "...\n> But anybody using Unix dates as \"general dates\" has leaped into exactly the\n> same sort of trap that caused people to get so paranoid about Y2K.\n\nCertainly true. We don't use Unix dates as \"general dates\", we use the\nUnix time zone database and API for dates and times within the year\nrange of 1903 to 2038. Well, up until now anyway...\n\nPrior to the 1900s, the concept of time zones was more localized and not\nuniversally adopted. 
In the US, a first round of time zone\nstandardization came with the transcontinental railroads in the 1860s.\nAfter 2038, it is a good bet that time zones will resemble those in use\ntoday, but they are as much a political construct as a physical one so\nthe details are subject to change.\n\n - Thomas\n", "msg_date": "Fri, 24 May 2002 06:10:39 -0700", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "On Fri, 24 May 2002 06:10:39 PDT, the world broke into rejoicing as\nThomas Lockhart <[email protected]> said:\n> ...\n> > But anybody using Unix dates as \"general dates\" has leaped into exactly the\n> > same sort of trap that caused people to get so paranoid about Y2K.\n\n> Certainly true. We don't use Unix dates as \"general dates\", we use the\n> Unix time zone database and API for dates and times within the year\n> range of 1903 to 2038. Well, up until now anyway...\n\nI don't think going past 1970 is particularly safe; it certainly doesn't\nseem to fit with ANSI...\n\nBy the way, the seemingly relevant link to look at for TZ info is \nhttp://www.twinsun.com/tz/tz-link.htm, linking to the data used by\nvarious Unix implementations.\n\n> Prior to the 1900s, the concept of time zones was more localized and\n> not universally adopted. In the US, a first round of time zone\n> standardization came with the transcontinental railroads in the 1860s.\n> After 2038, it is a good bet that time zones will resemble those in\n> use today, but they are as much a political construct as a physical\n> one so the details are subject to change.\n\nSome of the zones are quite peculiar if you head to Africa and Asia;\nthere are some sitting on 15 minute intervales, rather than the usual 1h\nintervals.\n\n(The classic Canadian timezone joke is \"World ends at 9:00; 9:30 in\nNewfoundland\". For more information, see TZ='America/St_Johns')\n--\n(concatenate 'string \"chris\" \"@cbbrowne.com\")\nhttp://www.ntlug.org/~cbbrowne/spreadsheets.html\n\"Heuristics (from the French heure, \"hour\") limit the amount of time\nspent executing something. [When using heuristics] it shouldn't take\nlonger than an hour to do something.\"\n", "msg_date": "Fri, 24 May 2002 09:47:52 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug " }, { "msg_contents": "[email protected] writes:\n> By the way, the seemingly relevant link to look at for TZ info is \n> http://www.twinsun.com/tz/tz-link.htm, linking to the data used by\n> various Unix implementations.\n\nOh, this is interesting: it claims that\n\n: This database (often called tz or zoneinfo) is used by several\n: implementations, including GNU/Linux, FreeBSD, NetBSD, OpenBSD, DJGPP,\n: HP-UX, IRIX, Open UNIX, Solaris, and Tru64.\n\nThe actual timezone database seems to consist of about half a meg of\nheavily commented text data files. The accompanying code (probably\nproviding far more functionality than we actually need) is under 400k.\n(Both figures are for uncompressed source.) Not large at all.\n\nI cannot find any sign of a copyright or license in the files; I think\nit is intended to be public domain, and in any case if the *BSDs are\nusing it then it must have a BSD-compatible license.\n\nIt seems to me that it'd be really practical to just take what we need\nout of this distribution and forgo all dependency on system-provided\ntimezone databases. 
And, since there's a mailing list maintaining it,\nwe could expect someone else to handle updates ;-) ... we'd just have\nto be careful to use the database files unmodified, so that we could\ndrop in new releases from time to time.\n\nComments? Anyone want to do the legwork?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 24 May 2002 10:17:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug " }, { "msg_contents": "Tom Lane wrote:\n > It seems to me that it'd be really practical to just take what we\n > need out of this distribution and forgo all dependency on\n > system-provided timezone databases. And, since there's a mailing\n > list maintaining it, we could expect someone else to handle updates\n > ;-) ... we'd just have to be careful to use the database files\n > unmodified, so that we could drop in new releases from time to time.\n >\n > Comments? Anyone want to do the legwork?\n >\n\nI don't understand precisely what need to be done, but I'll give it a \nshot if you get me pointed in the right direction.\n\n<downloads and looks at code>\nI see that tzcode2002c.tar.gz includes a mktime() function. Is the idea \nto pull this out (with just whatever support it needs), incorporate it \ninto PostgreSQL source (perhaps in a new src/backend/utils/tz directory) \nand use this in place of the system provided mktime()?\n\nJoe\n\n\n", "msg_date": "Fri, 24 May 2002 10:27:18 -0700", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "Joe Conway <[email protected]> writes:\n> I don't understand precisely what need to be done, but I'll give it a \n> shot if you get me pointed in the right direction.\n> <downloads and looks at code>\n> I see that tzcode2002c.tar.gz includes a mktime() function. Is the idea \n> to pull this out (with just whatever support it needs), incorporate it \n> into PostgreSQL source (perhaps in a new src/backend/utils/tz directory) \n> and use this in place of the system provided mktime()?\n\nWell, that's the zeroth-order approximation. We should take the\nopportunity to get out from under the mktime()/tzset() API. The real\nidea here is to make use of the timezone database info in the ways that\nPostgres needs. Some things that are not good about mktime()/tzset():\n\n* Arbitrary restrictions on range of dates. We certainly don't want to\nbe limited by a 32-bit time_t, whether you think it's signed or not.\nThe APIs should be recast in terms of PG's preferred internal\nrepresentations. (Lockhart would be the man to point you in the right\ndirection here, not me.)\n\n* No way to tell whether a user-provided timezone name is actually good.\n\n* No support for concurrent access to multiple zones, short of flushing\nall memory of one zone to load the next. Although we do not really need\nthis now, I can foresee wanting it. 
I'd be inclined to store all the\ninfo about a particular zone in some struct that can be referenced by\na pointer; that would give us the flexibility to have multiple such\nstructs floating around a backend in the future (perhaps living in a\nhashtable indexed by timezone name?)\n\nMy guess is that we want to borrow the parts of the tzcode library that\nare associated with reading a tz database file and loading it into some\nsuitable internal representation; there's probably not a lot else that\nwe'll want to use as-is.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 24 May 2002 13:51:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug " }, { "msg_contents": "Michael Meskes writes:\n\n> Or does the -34 mean more than just the RedHat version number? The\n> Debian version is correctly named 2.2.5-6 where the -6 means that this\n> is the 6th release of glibc 2.2.5 for Debian,\n\nJust for general amusement: I run SuSE's glibc 2.2.5-38 which contains\nneither the questionable code in the original sources nor is there any\nreference to it in the patch set. Go figure.\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Fri, 24 May 2002 21:03:56 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "On Fri, 2002-05-24 at 12:03, Peter Eisentraut wrote:\n\n> > Or does the -34 mean more than just the RedHat version number? The\n> > Debian version is correctly named 2.2.5-6 where the -6 means that this\n> > is the 6th release of glibc 2.2.5 for Debian,\n> \n> Just for general amusement: I run SuSE's glibc 2.2.5-38 which contains\n> neither the questionable code in the original sources nor is there any\n> reference to it in the patch set. Go figure.\n\nThis is getting silly. Does nobody here understand that the release\nnumber is local for each distribution. Comparing them does not lead to\nanything. If you want to find out run\n\n rpm -q --changelog glibc | less\n\non a RH system. Don't know what other systems provide in this\ndirection. You'll see that the glibc in RHL7.3 contains a lot of the\ncode from the glibc 2.3 branch. It's not named 2.2.90 because major\npieces are missing.\n\nIf you still don't know that version numbers are meaningless for\ndetermining feature lists you might want to consider going back to your\nCS101 class and revisit software configuration management.\n\n-- \n---------------. ,-. 1325 Chesapeake Terrace\nUlrich Drepper \\ ,-------------------' \\ Sunnyvale, CA 94089 USA\nRed Hat `--' drepper at redhat.com `------------------------", "msg_date": "24 May 2002 12:15:47 -0700", "msg_from": "Ulrich Drepper <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "Tom Lane wrote:\n> Well, that's the zeroth-order approximation. We should take the\n> opportunity to get out from under the mktime()/tzset() API. The real\n> idea here is to make use of the timezone database info in the ways that\n> Postgres needs. Some things that are not good about mktime()/tzset():\n> \n> * Arbitrary restrictions on range of dates. We certainly don't want to\n> be limited by a 32-bit time_t, whether you think it's signed or not.\n> The APIs should be recast in terms of PG's preferred internal\n> representations. 
(Lockhart would be the man to point you in the right\n> direction here, not me.)\n> \n> * No way to tell whether a user-provided timezone name is actually good.\n> \n> * No support for concurrent access to multiple zones, short of flushing\n> all memory of one zone to load the next. Although we do not really need\n> this now, I can foresee wanting it. I'd be inclined to store all the\n> info about a particular zone in some struct that can be referenced by\n> a pointer; that would give us the flexibility to have multiple such\n> structs floating around a backend in the future (perhaps living in a\n> hashtable indexed by timezone name?)\n> \n> My guess is that we want to borrow the parts of the tzcode library that\n> are associated with reading a tz database file and loading it into some\n> suitable internal representation; there's probably not a lot else that\n> we'll want to use as-is.\n\nWell, this does sound a bit more involved than I was envisioning. There \nare a few items wrt SRFs that I should finish first, but I'll come back \nto this afterward if no one else does first.\n\nJoe\n\n", "msg_date": "Fri, 24 May 2002 15:30:24 -0700", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "...\n> Well, this does sound a bit more involved than I was envisioning. There\n> are a few items wrt SRFs that I should finish first, but I'll come back\n> to this afterward if no one else does first.\n\nThe first cut might be to reproduce the functionality we already have.\nThat would allow us to (optionally) use this internal code *or* the\nsystem-provided code with a configure-time switch. We could then strip\nout two of the three time zone interface styles we support. And we could\n(possibly) allow folks to use their built-in time zone databases if they\nwant, to minimize inconsistancies between PostgreSQL and other programs\non the system. We might need to modify function and variable signatures\nto avoid conflicts with system-supplied libraries.\n\nThe next step would be to see how to generalize this past Y2038 (as\nmentioned previously, time zone info for pre-1900 is not likely to be\ninteresting). If it involves mass substitution of time_t for, say,\npg_time_t, then that might be all we need for the second phase, at which\ntime we've broken the y2038 limit by moving to 64-bit integer time.\n\nThe last phase could be extending the API to allow multiple simultaneous\ntime zones, detection of bad time zones, etc etc. This would involve API\nchanges or extensions, and breaks compatibility with system-supplied\ninfrastructure.\n\n - Thomas\n", "msg_date": "Fri, 24 May 2002 16:35:03 -0700", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n> The last phase could be extending the API to allow multiple simultaneous\n> time zones, detection of bad time zones, etc etc. 
This would involve API\n> changes or extensions, and breaks compatibility with system-supplied\n> infrastructure.\n\nOne thing that wasn't clear to me, but could use investigation: if so\nmany systems are using the same underlying timezone database info, maybe\nthere is some commonality at a level below the ISO mktime/tzset/etc API.\nIf we could make use of the system-provided TZ database at a lower level\nwhile still using our own APIs not tied to time_t, it'd answer the issue\nof compatibility with the surrounding system. (Which is a real issue,\nI agree --- we should be able to accept the system's standard TZ setting\nif possible.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 24 May 2002 19:41:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug " }, { "msg_contents": "> > The last phase could be extending the API to allow multiple simultaneous\n> > time zones, detection of bad time zones, etc etc. This would involve API\n> > changes or extensions, and breaks compatibility with system-supplied\n> > infrastructure.\n> One thing that wasn't clear to me, but could use investigation: if so\n> many systems are using the same underlying timezone database info, maybe\n> there is some commonality at a level below the ISO mktime/tzset/etc API.\n> If we could make use of the system-provided TZ database at a lower level\n> while still using our own APIs not tied to time_t, it'd answer the issue\n> of compatibility with the surrounding system. (Which is a real issue,\n> I agree --- we should be able to accept the system's standard TZ setting\n> if possible.)\n\nThe fundamental problem (which of course can have a fundamental solution\n;) is that a time zone database built with a 32-bit time_t will have\ntime zone info through 2038 only (it is a binary file with 32-bit time\nfields -- almost certainly anyway). 
So if we have an extended time zone\ninfrastructure using something different for time_t we would need to\nhandle the case of reading non-extended time zones databases, which puts\nus back to having limitations.\n\nI'm guessing that a better approach might be to have our time zone stuff\ninside our own API, which then could choose to call, for example,\nmktime() or pg_mktime(), which could each have different signatures.\nThen the heuristics for matching one to the other are isolated to our\nthin API implementation, not to the underlying system- or pg-provided\nlibraries.\n\nafaik there is no API provision for the \"inverse time zone\" problem of\nmatching \"stringy time zones\" to numeric offsets for input date/times.\nThe time zone databases themselves don't lend themselves to this, since\nthe tables have those stringy zones somewhere on the right hand side of\neach row of information and the fields can change from year to year.\n\n - Thomas\n", "msg_date": "Fri, 24 May 2002 17:04:11 -0700", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n> The fundamental problem (which of course can have a fundamental solution\n> ;) is that a time zone database built with a 32-bit time_t will have\n> time zone info through 2038 only (it is a binary file with 32-bit time\n> fields -- almost certainly anyway).\n\nI'm not sure that the time zone database tables store time_t's at all.\nThey certainly could be coded not to use 'em; but I do not know exactly\nhow various vendors have chosen to represent the data.\n\nA random extract from the tzdata2002c files looks like:\n\nZone America/Chicago\t-5:50:36 -\tLMT\t1883 Nov 18 12:00\n\t\t\t-6:00\tUS\tC%sT\t1920\n\t\t\t-6:00\tChicago\tC%sT\t1936 Mar 1 2:00\n\t\t\t-5:00\t-\tEST\t1936 Nov 15 2:00\n\t\t\t-6:00\tChicago\tC%sT\t1942\n\t\t\t-6:00\tUS\tC%sT\t1946\n\t\t\t-6:00\tChicago\tC%sT\t1967\n\t\t\t-6:00\tUS\tC%sT\n\nwhich might well be represented with separate y/m/d/h/m fields...\ncertainly we'd choose some such thing if we have to implement it\nourselves.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 24 May 2002 20:09:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug " }, { "msg_contents": "> > > The last phase could be extending the API to allow multiple simultaneous\n> > > time zones, detection of bad time zones, etc etc. This would involve API\n> > > changes or extensions, and breaks compatibility with system-supplied\n> > > infrastructure.\n> > One thing that wasn't clear to me, but could use investigation: if so\n> > many systems are using the same underlying timezone database info, maybe\n> > there is some commonality at a level below the ISO mktime/tzset/etc API.\n> > If we could make use of the system-provided TZ database at a lower level\n> > while still using our own APIs not tied to time_t, it'd answer the issue\n> > of compatibility with the surrounding system. (Which is a real issue,\n> > I agree --- we should be able to accept the system's standard TZ setting\n> > if possible.)\n\n> The fundamental problem (which of course can have a fundamental\n> solution ;) is that a time zone database built with a 32-bit time_t\n> will have time zone info through 2038 only (it is a binary file with\n> 32-bit time fields -- almost certainly anyway). 
So if we have an\n> extended time zone infrastructure using something different for time_t\n> we would need to handle the case of reading non-extended time zones\n> databases, which puts us back to having limitations.\n\nAh, but the database in question _doesn't_ consist of 32 bit time_t\nvalues.\n\nIt consists of things like:\n\n# @(#)zone.tab\t1.26\n#\n# TZ zone descriptions\n#\n# From Paul Eggert <[email protected]> (1996-08-05):\n#\n# This file contains a table with the following columns:\n# 1. ISO 3166 2-character country code. See the file `iso3166.tab'.\n# 2. Latitude and longitude of the zone's principal location\n# in ISO 6709 sign-degrees-minutes-seconds format,\n# either +-DDMM+-DDDMM or +-DDMMSS+-DDDMMSS,\n# first latitude (+ is north), then longitude (+ is east).\n# 3. Zone name used in value of TZ environment variable.\n# 4. Comments; present if and only if the country has multiple rows.\n#\n# Columns are separated by a single tab.\n# The table is sorted first by country, then an order within the country that\n# (1) makes some geographical sense, and\n# (2) puts the most populous zones first, where that does not contradict (1).\n#\n# Lines beginning with `#' are comments.\n#\n#country-\n#code\tcoordinates\tTZ\t\t\tcomments\nAD\t+4230+00131\tEurope/Andorra\nAE\t+2518+05518\tAsia/Dubai\nAF\t+3431+06912\tAsia/Kabul\nAG\t+1703-06148\tAmerica/Antigua\nAI\t+1812-06304\tAmerica/Anguilla\nAL\t+4120+01950\tEurope/Tirane\nAM\t+4011+04430\tAsia/Yerevan\nAN\t+1211-06900\tAmerica/Curacao\nAO\t-0848+01314\tAfrica/Luanda\n\nThen a \"leapseconds\" table, looking like:\n# The correction (+ or -) is made at the given time, so lines\n# will typically look like:\n#\tLeap\tYEAR\tMON\tDAY\t23:59:60\t+\tR/S\n# or\n#\tLeap\tYEAR\tMON\tDAY\t23:59:59\t-\tR/S\n\n# If the leapsecond is Rolling (R) the given time is local time\n# If the leapsecond is Stationary (S) the given time is UTC\n\n# Leap\tYEAR\tMONTH\tDAY\tHH:MM:SS\tCORR\tR/S\nLeap\t1972\tJun\t30\t23:59:60\t+\tS\nLeap\t1972\tDec\t31\t23:59:60\t+\tS\nLeap\t1973\tDec\t31\t23:59:60\t+\tS\nLeap\t1974\tDec\t31\t23:59:60\t+\tS\nLeap\t1975\tDec\t31\t23:59:60\t+\tS\nLeap\t1976\tDec\t31\t23:59:60\t+\tS\n\nAnd then a set of rules about timezone adjustments for all sorts of\nlocalities, including the following:\n\n# Rule\tNAME\tFROM\tTO\tTYPE\tIN\tON\tAT\tSAVE\tLETTER/S\n# Summer Time Act, 1916\nRule\tGB-Eire\t1916\tonly\t-\tMay\t21\t2:00s\t1:00\tBST\nRule\tGB-Eire\t1916\tonly\t-\tOct\t 1\t2:00s\t0\tGMT\n# S.R.&O. 1917, No. 358\nRule\tGB-Eire\t1917\tonly\t-\tApr\t 8\t2:00s\t1:00\tBST\nRule\tGB-Eire\t1917\tonly\t-\tSep\t17\t2:00s\t0\tGMT\n\n\n# Zone\tNAME\t\tGMTOFF\tRULES\tFORMAT\t[UNTIL]\nZone Antarctica/Casey\t0\t-\tzzz\t1969\n\t\t\t8:00\t-\tWST\t# Western (Aus) Standard Time\nZone Antarctica/Davis\t0\t-\tzzz\t1957 Jan 13\n\t\t\t7:00\t-\tDAVT\t1964 Nov # Davis Time\n\t\t\t0\t-\tzzz\t1969 Feb\n\t\t\t7:00\t-\tDAVT\nZone Antarctica/Mawson\t0\t-\tzzz\t1954 Feb 13\n\t\t\t6:00\t-\tMAWT\t# Mawson Time\n\n> I'm guessing that a better approach might be to have our time zone\n> stuff inside our own API, which then could choose to call, for\n> example, mktime() or pg_mktime(), which could each have different\n> signatures. 
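(A rough sketch of what that thin layer might look like, purely for
illustration: pg_time_t and pg_mktime are the names used in the quoted text
above, while the wrapper itself and the USE_PG_TZ switch are invented here
and are not actual PostgreSQL code.)

    #include <time.h>

    typedef long long pg_time_t;     /* wide enough to outlive y2038 */

    /* the pg-provided implementation, fed from our own zone tables */
    extern pg_time_t pg_mktime(struct tm *tm);

    static pg_time_t
    thin_mktime(struct tm *tm)
    {
    #ifdef USE_PG_TZ
        return pg_mktime(tm);            /* our own time zone code */
    #else
        return (pg_time_t) mktime(tm);   /* system library, its range only */
    #endif
    }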
Then the heuristics for matching one to the other are\n> isolated to our thin API implementation, not to the underlying system-\n> or pg-provided libraries.\n\n> matching \"stringy time zones\" to numeric offsets for input date/times.\n> The time zone databases themselves don't lend themselves to this,\n> since the tables have those stringy zones somewhere on the right hand\n> side of each row of information and the fields can change from year to\n> year.\n\nThe ultimate goal would seem likely to be to store dates internally in\nsome form like UTC, with some reasonably huge dynamic range, that is,\nnot limited to 32 bit timestamps, but rather using something like a\nproleptic Gregorian calendar (per _Calendrical Calculations_, page 50).\n\nSome reasonable treatments would include:\n\n - 32 bits is an signed int indicating number of days since GREG_EPOCH,\n where logical epochs would include January 1, 1, January 1, 1900, or\n perhaps even something actually proleptic (proleptic indicates\n \"future\"), such as January 1, 2038.\n\n - 8 bits indicating the month; 8 bits indicating the day of month;\n 16 bits providing a range of years from -32767 to 32768.\n\nBoth have merits...\n\nTimestamps would then forcibly expand things by _at least_ 22 bits, the\nminimum needed to express 1/100ths of seconds. Might as well head on to\n32 bits for the time and so have something that can easily represent\nvalues down to well below a millisecond.\n\nThe \"stringy stuff\" indicates how values are to be displayed or parsed.\nIt does nothing about what is stored internally, or at least shouldn't.\n--\n(reverse (concatenate 'string \"gro.gultn@\" \"enworbbc\"))\nhttp://www.cbbrowne.com/info/emacs.html\nIn the name of the Lord-High mutant, we sacrifice this suburban girl\n-- `Future Schlock'\n", "msg_date": "Fri, 24 May 2002 20:37:24 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug " }, { "msg_contents": "On Fri, 24 May 2002, Peter Eisentraut wrote:\n\n> Michael Meskes writes:\n> \n> > Or does the -34 mean more than just the RedHat version number? The\n> > Debian version is correctly named 2.2.5-6 where the -6 means that this\n> > is the 6th release of glibc 2.2.5 for Debian,\n> \n> Just for general amusement: I run SuSE's glibc 2.2.5-38 which contains\n> neither the questionable code in the original sources nor is there any\n> reference to it in the patch set. Go figure.\n\nYou've got to remember that you're talking about systems where, a long time ago\nnow, certain groups felt it necessary to supply nonstandard versions of the\ncore component (the kernel). Sure they helped development of the kernel but\nonly through bastardisation of version numbers where 2.0.1 didn't really mean \na Linux 2.0.1 kernel. Is it really surprising the system support stuff has been\nmangled beyond sense?\n\nAnyway, I've composed several and aborted all but this message on this subject\nand I'm not going to persue it. I have my own views on the right and wrongs off\nthe change in glibc but they wouldn't have advanced anything so I'm keeping\nquiet on it, still. It seems there is a solution forming. Plus, I'd hate to\nside with the baddies from the first paragraph :)\n\n\n-- \nNigel J. Andrews\nDirector\n\n---\nLogictree Systems Limited\nComputer Consultants\n\n", "msg_date": "Sat, 25 May 2002 02:55:58 +0100 (BST)", "msg_from": "\"Nigel J. 
Andrews\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "On Fri, May 24, 2002 at 12:15:47PM -0700, Ulrich Drepper wrote:\n> This is getting silly. Does nobody here understand that the release\n\nYes, but I'm not sure on which side.\n\n> number is local for each distribution. Comparing them does not lead to\n\nNo, this is simply not true. The version number is what the upstream\ngives its release. No more no less. What RH does is becoming as subtly\nincompatible a possible. If that's the goal, it doesn't look like free\nsoftware for me. Sure all changes are published, but why forcing this\nkind of difference between linux distributions? Why not calling it 2.2.6\nor something if there has to be some changes that are not compatible?\n\nMichael\n-- \nMichael Meskes\[email protected]\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Sat, 25 May 2002 21:55:01 +0200", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "On Sat, 25 May 2002, Michael Meskes wrote:\n\n> No, this is simply not true. The version number is what the upstream\n> gives its release. No more no less. What RH does is becoming as subtly\n> incompatible a possible. If that's the goal, it doesn't look like free\n> software for me. Sure all changes are published, but why forcing this\n> kind of difference between linux distributions? Why not calling it 2.2.6\n> or something if there has to be some changes that are not compatible?\n\nOr rlibc? :)\n\n", "msg_date": "Sun, 26 May 2002 00:32:40 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "On Friday 24 May 2002 03:15 pm, Ulrich Drepper wrote:\n> This is getting silly.\n\nYes, Ulrich, it is. Very silly. And Red Hat's stance is one of the silliest, \nIMHO.\n\n>You'll see that the glibc in RHL7.3 contains a lot of the\n> code from the glibc 2.3 branch. It's not named 2.2.90 because major\n> pieces are missing.\n\n> If you still don't know that version numbers are meaningless for\n> determining feature lists you might want to consider going back to your\n> CS101 class and revisit software configuration management.\n\nIOW, Red Hat's glibc 2.2.5 isn't really pristine glibc 2.2.5 as found straight \nfrom the GNU repository. In fact, Red Hat glibc 2.2.5 isn't really 2.2.5 -- \nhow about 2.2.96? :-) .96 was good enough for gcc....\n\nFurthermore, Red Hat glibc 2.2.5 isn't even fully compatible with GNU glibc \n2.2.5 -- at least in the area of time_t stuff.\n\nIn the open source world, version numbers are actually supposed to mean \nsomething -- at least for package dependencies. Of course, I also have read \nthe kernel-2.4.18 source RPM and its 21.8MB 'ac-bits' patch.\n\nYou do realize that this sort of thing doesn't help Red Hat's PR state amongst \nthe greater open source community, right? Nor would it help Mandrake, SuSE, \nor any other Linux distributor (I specifically excluded Debian due to its \nunique community supported state). But, if you don't care about the greater \nopen source community, well...\n\nAnd I say all of that while running and enjoying the greater part of Red Hat \n7.3. For the most part it is extraordinarily stable. And I know that that \n21.8MB kernel patch is one of the reasons it is so stable. 
But I still \nquestion the versioning of glibc.\n\nSo, in summary, the glibc version number in any particular linux distribution \nis meaningless because the distributor is free to patch the bloody daylights \nout of it at any time. Sweet. And so standard.\n\nBut, if glibc 2.3 is where this bit came from, it is just a matter of time \nbefore all Linux distributions (that aren't willing to patch away) get this \nbraindead behavior. Oh well. The general solution will happen.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 27 May 2002 22:18:17 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "Tom Lane wrote:\n> Thomas Lockhart <[email protected]> writes:\n> >> Why should we rely on broken glibc and the standard? Why don't we make\n> >> our own mktime() and use it on all platforms.\n> \n> > The downside to doing that is that we then take over maintenance of the\n> > code and, more importantly, the timezone database.\n> \n> > But it might be the best thing to do.\n> \n> I've been sorta thinking the same thing. We could get out from under\n> the Y2038 issue, and also eliminate a whole lot of platform\n> dependencies. Not to mention sillinesses like being unable to recognize\n> a bad timezone name when it's fed to us.\n> \n> Exactly how much work (and code bulk) would we be taking on? I've\n> never looked at how big the timezone databases are...\n\nI am not really excited about distributing a timezone database as part\nof PostgreSQL, and it wouldn't match the OS's timezone. (We do need a\n64-time time_t, but I think we can wait to get closer to 2038.) Can we\ndetect if glibc is being used for the compile (easy), and substitute a\nnon-broken mktime in the link path ahead of glibc's mktime? Seems that\nwould be the easiest solution.\n\nOf course, pre-1970 dates then wouldn't match the OS on glibc systems,\nbut that seems like a win. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Jun 2002 01:04:30 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "Trond Eivind Glomsr�d wrote:\n> On Tue, 21 May 2002, Lamar Owen wrote:\n> \n> > On Tuesday 21 May 2002 11:04 am, Manuel Sugawara wrote:\n> > > I see. This behavior is consistent with the fact that mktime is\n> > > supposed to return -1 on error, but then is broken in every other Unix\n> > > implementation that I know.\n> > \n> > > Any other workaround than downgrade or install FreeBSD?\n> > \n> > Complain to Red Hat. Loudly. However, as this is a glibc change, other \n> > distributors are very likely to fold in this change sooner rather than \n> > later. \n> \n> Relying on nonstandardized/nondocumented behaviour is a program bug, not a \n> glibc bug. PostgreSQL needs fixing. 
Since we ship both, we're looking at \n> it, but glibc is not the component with a problem.\n\nNo one has really answered the question --- if the way PostgreSQL is\nusing mktime() for pre-1970 dates is wrong, why do timezone databases\nhave pre-1970 timezone information?\n\nI assume Linux does or the old mktime() wouldn't have worked for\npre-1970 dates.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Jun 2002 01:11:22 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redhat 7.3 time manipulation bug" }, { "msg_contents": "Anyone mind if we bump the DTD version to Docbook 4.2?\n\nThis consists on all users who wish to build docs on installing the 4.2\nDTD set, and updating some depreciated tags within the sgml files.\n\ncomment -> remark\ndocinfo -> appendixinfo, chapterinfo, bookinfo, etc.\n\n\nWhat it buys is a number of useful tags, SVGs and probably more\nimportantly for the future, xsl and fop support which will probably be\nimportant in the future. OpenJade hasn't had a new release in quite a\nlong time -- not to say work isn't needed.\n\nYes, after updating docs to the newer DTD I intend to make them XML\ncompliant to ensure they work with v5 of docbook in the future.\n\n\n", "msg_date": "14 Aug 2002 23:14:03 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Documentation DTD" }, { "msg_contents": "Reading about the pgmonitor thread and mention of gborg made me wonder\nabout replication and ready ability to uniformly monitor it. Just as\npg_stat* tables exist to allow for statistic gathering and monitoring in\na uniform fashion, it occurred to me that a predefined set of views\nand/or tables for all replication implementations may be worthwhile. \nThat way, no matter what replication method/tool is being used, as long\nas it conforms to the defined replication interfaces, generic monitoring\ntools can be used to keep an eye on things.\n\nThink this has any merit?\n\nGreg Copeland", "msg_date": "14 Aug 2002 22:15:32 -0500", "msg_from": "Greg Copeland <[email protected]>", "msg_from_op": false, "msg_subject": "Standard replication interface?" }, { "msg_contents": "Greg Copeland <[email protected]> writes:\n> ... it occurred to me that a predefined set of views\n> and/or tables for all replication implementations may be worthwhile.\n\nDo we understand replication well enough to define such a set of views?\nI sure don't ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 14 Aug 2002 23:47:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Standard replication interface? " }, { "msg_contents": "Rod Taylor <[email protected]> writes:\n> Anyone mind if we bump the DTD version to Docbook 4.2?\n\nPeter E. is the gatekeeper on that, I think --- he pushed us to 4.1\nnot long ago.\n\nIf Peter's okay with 4.2, then full speed ahead ...\n\n\t\t\tregards, tom lane\n\nPS: pgsql-docs is probably the more appropriate forum for this\ndiscussion.\n", "msg_date": "Thu, 15 Aug 2002 00:12:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Documentation DTD " }, { "msg_contents": "Well, that's a different issue. 
;)\n\nI initially wanted to get feedback to see if anyone else thought the\nconcept might hold some merit.\n\nI take it from your answer you think it might...but are scratching your\nhead wondering exactly what it entails...\n\nGreg\n\n\nOn Wed, 2002-08-14 at 22:47, Tom Lane wrote:\n> Greg Copeland <[email protected]> writes:\n> > ... it occurred to me that a predefined set of views\n> > and/or tables for all replication implementations may be worthwhile.\n> \n> Do we understand replication well enough to define such a set of views?\n> I sure don't ...\n> \n> \t\t\tregards, tom lane", "msg_date": "15 Aug 2002 09:05:47 -0500", "msg_from": "Greg Copeland <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Standard replication interface?" }, { "msg_contents": "On Wed, Aug 14, 2002 at 10:15:32PM -0500, Greg Copeland wrote:\n\n> Reading about the pgmonitor thread and mention of gborg made me wonder\n> about replication and ready ability to uniformly monitor it. Just as\n> pg_stat* tables exist to allow for statistic gathering and monitoring in\n> a uniform fashion, it occurred to me that a predefined set of views\n> and/or tables for all replication implementations may be worthwhile. \n> That way, no matter what replication method/tool is being used, as long\n> as it conforms to the defined replication interfaces, generic monitoring\n> tools can be used to keep an eye on things.\n\nThat sounds like the cart is before the horse. You need to know what\nsort of replication scheme you might ever have before you could\nknow the statistics that you might want to know.\n\nThere are different sorts of replication schemes under consideration. \nFor instance, rserv uses an asynchronous master/slave approach, which\nrelies on slaves that are almost dumb as chickens. (Not quite. \nThere is some data about the state of replication in the slave\ndatabase; but most of it is in the master.) Postgres-R, on the other\nhand, contemplates a distributed model wherein different database\nmachines participate in a pool.\n\nSo for rserv-style replication, you want to know (for instance)\naverage slave-update times, and whether slaves are getting behind,\nand by how much, and such. Balancing of inserts, however, is not\nrelevant, because you can't do that.\n\nPostgres-R will have the opposite need: you'll want to know what sort\nof load balancing you're getting, but time-to-replicate is not\nrelevant, because a commit on one machine is necessarily a commit\neverywhere (that's why it's \"eager\" replication).\n\nYou probably could design a set of statistics that would cover all\ncases, but only after you know what the cases were.\n\nA\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M6K 3E3\n +1 416 646 3304 x110\n\n", "msg_date": "Thu, 15 Aug 2002 10:47:56 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Standard replication interface?" }, { "msg_contents": "Andrew Sullivan <[email protected]> writes:\n> On Wed, Aug 14, 2002 at 10:15:32PM -0500, Greg Copeland wrote:\n> > Reading about the pgmonitor thread and mention of gborg made me wonder\n> > about replication and ready ability to uniformly monitor it. Just as\n> > pg_stat* tables exist to allow for statistic gathering and monitoring in\n> > a uniform fashion, it occurred to me that a predefined set of views\n> > and/or tables for all replication implementations may be worthwhile. 
\n> > That way, no matter what replication method/tool is being used, as long\n> > as it conforms to the defined replication interfaces, generic monitoring\n> > tools can be used to keep an eye on things.\n> \n> That sounds like the cart is before the horse.\n\nThat's exactly what I was going to say -- I'd prefer that any\ninterested parties concentrate on producing a *really good*\nreplication implementation, which might eventually be integrated into\nPostgreSQL itself.\n\nProducing a \"generic API\" for something that really doesn't need\ngenericity sounds like a waste of time, IMHO.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <[email protected]>\nPGP Key ID: DB3C29FC\n\n", "msg_date": "15 Aug 2002 10:53:15 -0400", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Standard replication interface?" }, { "msg_contents": "On Thu, 2002-08-15 at 09:47, Andrew Sullivan wrote:\n> On Wed, Aug 14, 2002 at 10:15:32PM -0500, Greg Copeland wrote:\n> > That way, no matter what replication method/tool is being used, as long\n> > as it conforms to the defined replication interfaces, generic monitoring\n> > tools can be used to keep an eye on things.\n> \n> That sounds like the cart is before the horse. You need to know what\n> sort of replication scheme you might ever have before you could\n> know the statistics that you might want to know.\n\nHmmm. Never heard of an inquiry for interest in a concept as putting\nthe cart before the horse. Considering this is pretty much how things\nget developed in the real world, I'm not sure what you feel is so\nspecial about replication.\n\nFirst step is always identify the need. I'm attempting to do so. Not\nsure what you'd consider the first step to be but I can assure you,\nregardless of this concept seeing the light of day, it is the first\nstep. The horse is correctly positioned in front of the cart.\n\nI also stress that I'm talking about a statistical replication\ninterface. It occurred to me that you might of been confused on this\nmatter. That is, a set of tables and views will allow for the\nreplication process to be uniformly *monitored*. I am not talking about\na set of interfaces which all manner of replication much perform its job\nthrough (interface with databases for replication).\n\n> \n> There are different sorts of replication schemes under consideration. \n\nYep. Thus it would seemingly be ideal to have a specification which\ndifferent implementations would seek to implement. Off of the top of my\nhead and for starters, a table and/or view which could can queried that\nreturns the tables that are being replicated sounds good to me. Same\nthing for the list of databases, the servers involved and their\nassociated role (master, slave, peer).\n\nWithout such a concept, there will be no standardized way to monitor\nyour replication. As such, chances are one of two things will happen. \nOne, a single replication method will be championed and fair tools will\ndevelop to support where all others are bastards. Two, quality tools to\nmonitor replication will never materialize because each method for\nmonitoring is specific to the different types of implementations. \nResources will constantly be spread amongst a variety of well meaning\nprojects.\n\n\n--Greg", "msg_date": "15 Aug 2002 12:08:54 -0500", "msg_from": "Greg Copeland <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Standard replication interface?" 
}, { "msg_contents": "On Thu, 2002-08-15 at 09:53, Neil Conway wrote:\n> That's exactly what I was going to say -- I'd prefer that any\n> interested parties concentrate on producing a *really good*\n> replication implementation, which might eventually be integrated into\n> PostgreSQL itself.\n> \n> Producing a \"generic API\" for something that really doesn't need\n> genericity sounds like a waste of time, IMHO.\n> \n> Cheers,\n> \n> Neil\n\n\nSome how I get the impression that I've been completely misunderstood. \nSomehow, people seem to of only read the subject and skipped the body\nexplaining the concept.\n\nIn what way would providing a generic interface to *monitor* be a \"waste\nof time\"? In what way would that prevent someone from \"producing a\n*readlly good* replication implementation\"? I utterly fail to see the\nconnection.\n\nRegards,\n\tGreg Copeland", "msg_date": "15 Aug 2002 12:50:59 -0500", "msg_from": "Greg Copeland <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Standard replication interface?" }, { "msg_contents": "Greg Copeland <[email protected]> writes:\n> In what way would providing a generic interface to *monitor* be a\n> \"waste of time\"?\n\nAs I said -- I don't really see the need for a bunch of replication\nimplementations, and therefore I don't see the need for a generic API\nto make the whole mess (slightly) more manageable.\n\n> In what way would that prevent someone from \"producing a readlly\n> good* replication implementation\"?\n\nIt wouldn't -- it's just that if/when such an implementation exists\nand everyone who needs replication is using it, a \"generic monitoring\nAPI\" would be pointless.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <[email protected]>\nPGP Key ID: DB3C29FC\n\n", "msg_date": "15 Aug 2002 13:57:08 -0400", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Standard replication interface?" }, { "msg_contents": "> As I said -- I don't really see the need for a bunch of replication\n> implementations, and therefore I don't see the need for a generic API\n> to make the whole mess (slightly) more manageable.\n\nI see. So the intension of the core developers is to have one and only\none replication solution?\n\nGreg", "msg_date": "15 Aug 2002 13:03:53 -0500", "msg_from": "Greg Copeland <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Standard replication interface?" }, { "msg_contents": "Greg Copeland <[email protected]> writes:\n> > As I said -- I don't really see the need for a bunch of replication\n> > implementations, and therefore I don't see the need for a generic API\n> > to make the whole mess (slightly) more manageable.\n> \n> I see. So the intension of the core developers is to have one and only\n> one replication solution?\n\nNot being a core developer, I can't comment on their intentions.\n\nThat said, I _personally_ don't see the need for more than one or two\nreplication implementations. You might need more than one if you\nwanted to do both lazy and eager replication, for example. But you\ncertainly don't need 5 or 6 or however many implementations exist at\nthe moment.\n\nI think the reason there are a lot of different implementations at the\nmoment is that each one has some pretty serious problems. 
So rather\nthan trying to reduce the problem by making it slightly easier for the\ndifferent replication solutions to inter-operate, I think it's a\nbetter idea to solve the problem outright by improving one of the\nexisting replication projects to the point at which it is ready for\nwidespread production usage.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <[email protected]>\nPGP Key ID: DB3C29FC\n\n", "msg_date": "15 Aug 2002 14:18:15 -0400", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Standard replication interface?" }, { "msg_contents": "On Thu, 2002-08-15 at 13:18, Neil Conway wrote:\n> That said, I _personally_ don't see the need for more than one or two\n> replication implementations. You might need more than one if you\n> wanted to do both lazy and eager replication, for example. But you\n> certainly don't need 5 or 6 or however many implementations exist at\n> the moment.\n\nFair enough. Thank you for offering a complete explanation.\n\nYou're argument certainly made sense. I wasn't aware of any single\nserious effort underway which sought to finally put replication to bed,\nlet alone integrated into the core code base.\n\nSign,\n\n\tGreg Copeland", "msg_date": "15 Aug 2002 13:37:59 -0500", "msg_from": "Greg Copeland <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Standard replication interface?" }, { "msg_contents": "Rod Taylor writes:\n\n> Anyone mind if we bump the DTD version to Docbook 4.2?\n\nNot sure if we should do this now. We're approaching the time where\npeople should be writing documentation, not having to refiddle their\ncarefully crafted DocBook installations. We're not going to realize any\nimmediate benefits anyway.\n\n> What it buys is a number of useful tags, SVGs and probably more\n> importantly for the future, xsl and fop support which will probably be\n> important in the future. OpenJade hasn't had a new release in quite a\n> long time -- not to say work isn't needed.\n\nThe last release was in January.\n\n> Yes, after updating docs to the newer DTD I intend to make them XML\n> compliant to ensure they work with v5 of docbook in the future.\n\nAh, an XML vs. SGML debate. I look forward to it.\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Thu, 15 Aug 2002 21:30:38 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Documentation DTD" }, { "msg_contents": "\n> > Yes, after updating docs to the newer DTD I intend to make them XML\n> > compliant to ensure they work with v5 of docbook in the future.\n> \n> Ah, an XML vs. SGML debate. I look forward to it.\n\nThis one is pretty simple. It's been announced that the docbook group\nisn't looking to continue with SGML. This is shown on the oasis-open\npages as well as their discussion in the mailing lists (xsltproc and fop\nrather than jade and dsssl).\n\nI prefer working with SGML, but not enough to try hacking away at\nopenjade to finish it off :)\n\n\nAnyway, you're right about the patch. 
Lets apply it to the 7.4 tree \nafter branching.\n\n", "msg_date": "15 Aug 2002 15:33:11 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Documentation DTD" }, { "msg_contents": "> --=-QQHYShMlxI2BY71i6NiO\n> Content-Type: text/plain\n> Content-Transfer-Encoding: quoted-printable\n> \n> > As I said -- I don't really see the need for a bunch of replication\n> > implementations, and therefore I don't see the need for a generic API\n> > to make the whole mess (slightly) more manageable.\n> \n> I see. So the intension of the core developers is to have one and only\n> one replication solution?\n\nIf the various \"solutions\" may be folded down into a smaller set of programs, \nperhaps, ultimately, into _one_ program, that would surely be easier to \nmanage, in the codebase, than having five or six such programs.\n\nIf one program can do the job that needs to be done, and it has not been \n_clearly_ established that that is _not_ possible, then I'd think it rather \nsilly to have a bunch of \"replication solutions\" that need to be updated any \ntime a relevant change goes into the database engine.\n\nI'd be surprised if, in the end, there truly _needed_ to be more than about \ntwo approaches.\n\nShould the team plan to _have_ a mess? I'd think not.\n--\n(concatenate 'string \"cbbrowne\" \"@ntlug.org\")\nhttp://cbbrowne.com/info/linuxdistributions.html\n\"We don't understand the software, and sometimes we don't understand\nthe hardware, but we can *see* the blinking lights!\" -- Unknown\n\n\n", "msg_date": "Thu, 15 Aug 2002 16:01:04 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Standard replication interface? " }, { "msg_contents": "Neil Conway <[email protected]> writes:\n> Greg Copeland <[email protected]> writes:\n>> I see. So the intension of the core developers is to have one and only\n>> one replication solution?\n\n> Not being a core developer, I can't comment on their intentions.\n\nWell, I am, but I'm only speaking for myself here:\n\nI think there's definitely a need for at least two replication\nimplementations: sync and async. The space of requirements is wide\nenough that there's not a one-size-fits-all solution. You might care\nto look at Darren Johnson's OSCON slides for more about this:\nhttp://conferences.oreillynet.com/cs/os2002/view/e_sess/3280\nI think there is room for several replication solutions for Postgres\n(three or four, maybe).\n\nIt's difficult to say what will wind up in our core distribution.\nA tightly linked implementation like Postgres-R is really impractical\nas an add-on: you need enough mods of the core code that it'd be a\nnightmare to try to maintain if it's not integrated into the regular\nCVS tree. So assuming that the Postgres-R project gets to the point\nof usefulness, I'd vote in favor of integrating it. On the other hand,\nit's possible to do good stuff without touching the core code at all\n(cf. PostgreSQL Inc's rserv) and in that case there may or may not be\nany interest in integrating the code. It's really gonna depend mostly\non the wishes of the people who develop the replication solutions,\nI think.\n\nI can foresee a time when there are one or two replication solutions\nthat are included in the base distribution and others are available\nseparately. In fact, counting contrib/rserv that more or less describes\nthe state of affairs today. 
What we need is more work on the available\nsolutions to improve their quality and general usefulness.\n\nAs for the point at hand: I'm fairly dubious that a common monitoring\nAPI will be very useful, considering how different the possible\nreplication approaches are. If Greg can prove me wrong, fine. But\nI don't want to see us artificially constraining replication solutions\nby insisting that they meet some prespecified API.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Aug 2002 16:36:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Standard replication interface? " }, { "msg_contents": "> Rod Taylor writes:\n> \n> > Anyone mind if we bump the DTD version to Docbook 4.2?\n> \n> Not sure if we should do this now. We're approaching the time where\n> people should be writing documentation, not having to refiddle their\n> carefully crafted DocBook installations. We're not going to realize any\n> immediate benefits anyway.\n\nIndeed.\n\n> > What it buys is a number of useful tags, SVGs and probably more\n> > importantly for the future, xsl and fop support which will probably be\n> > important in the future. OpenJade hasn't had a new release in quite a\n> > long time -- not to say work isn't needed.\n> \n> The last release was in January.\n> \n> > Yes, after updating docs to the newer DTD I intend to make them XML\n> > compliant to ensure they work with v5 of docbook in the future.\n> \n> Ah, an XML vs. SGML debate. I look forward to it.\n\nPlease no!\n\nIf and when it becomes forcibly preferable to use XML, there's a\ntool called sgml2xml that is part of the \"sp\" package (which includes nsgmls \nand sgmlnorm) that does a Perfectly Good Job of this. Totally automated.\n\nPossible exception: sgml2xml capitalizes all the tags, and it looks like the \nXML DTD wants MixedCaseTagging, which is a rather irritating thing about XML; \nin any case, that's something that should be fixed up in one fell swoop in a \n\"normalize it all and make it into XML\" process LATER.\n\nIt would make sense to fix use of any deprecated elements, but \"fixing\" any \nXML aspects of it now is pretty much a senseless exercise.\n--\n(reverse (concatenate 'string \"moc.enworbbc@\" \"enworbbc\"))\nhttp://www.ntlug.org/~cbbrowne/emacs.html\n\"Computers in the future may weigh no more than 1.5 tons\". -- POPULAR\nMECHANICS magazine forecasting the \"relentless march of science\" 1955\n\n\n", "msg_date": "Thu, 15 Aug 2002 18:33:41 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Documentation DTD " }, { "msg_contents": "On Thu, 2002-08-15 at 15:36, Tom Lane wrote:\n> Well, I am, but I'm only speaking for myself here:\n> \n\nFair enough.\n\n> I think there is room for several replication solutions for Postgres\n> (three or four, maybe).\n\nIf the ideal solution count is merely one with a maybe on two then I\ntend to concur that any specification along these lines would *mostly*\nbe a waste. On the other hand, if we can count three or more possible\nreplication solutions, IMHO, there seemingly would be merit is providing\nsome sort of defacto monitoring interface.\n\nSeems the current difficulty is forecasting the future in this regard. \nPerhaps other core developers would care to chime in and share their\nvision?\n\n> CVS tree. So assuming that the Postgres-R project gets to the point\n> of usefulness, I'd vote in favor of integrating it. On the other hand,\n\nI guess I should ask. 
Do the developers foresee immediate usability\nfrom this project or are we looking at something that's a year+ away? I\ndon't think I have a problem helping guide what could be an interim\nsolution if the interim window were large enough. In theory, monitoring\ntools developed between now and the closing of the window could largely\ncontinue to function without change. That, of course, assumes that even\nthe end-run solutions would implement the interface as well.\n\nThe return on such a concept is that it allows generic monitoring tools\nto mature while providing value now and in the future. The end result\nshould be a stronger, more powerful tool base which matures while other\ntechnologies are still being developed.\n\nAnother question along this line is, once something rolls into a core\nposition, does that obsolete all other existing implementations or\nmerely become the defacto in a bag of solutions? Tom seems to hint at\nthe later. If the answer is the former then that seemingly argues not\nto worry about this...unless the window for usefulness and/or inclusion\nis rather large.\n\n> As for the point at hand: I'm fairly dubious that a common monitoring\n> API will be very useful, considering how different the possible\n\nWell, all replication scenarios have a lot in common. They should, \nafter all, they are all doing the same thing. Since the different\nstrategies for accomplishing replication are well understood, it seems\nwell within reason to assume that someone can put their brain around\nthis.\n\nI can also imagine that the specification includes requirements as well\nas optional facilities. Certainly capability queries would further iron\nout any gaps between differing solutions/implementations.\n\n> replication approaches are. If Greg can prove me wrong, fine. But\n> I don't want to see us artificially constraining replication solutions\n> by insisting that they meet some prespecified API.\n\nHmmm. I'm not sure how it would act as a constraining force. To me,\nthis implies that any such specification would fail to evolve and could\nnot be revised based on feedback. IMO, most specifications are regarded\nas living documents. While I can see that some specifications are set\nin stone, I certainly am not so bold as to assert my crystal ball even\ncame with batteries. ;) That is, I assume some level of revision to an\ninitial specification would be required following real-world use.\n\n\nRegards,\n\n\tGreg Copeland", "msg_date": "16 Aug 2002 09:20:11 -0500", "msg_from": "Greg Copeland <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Standard replication interface?" }, { "msg_contents": "Greg Copeland <[email protected]> writes:\n> I guess I should ask. Do the developers foresee immediate usability\n> from [Postgres-R] or are we looking at something that's a year+ away?\n\nDarren Johnson would be the man to answer that, but from what he said\nat OSCON it sounded like we'd be seeing something useful by the end of\nthe year, with all the usual caveats about time actually being available\nto work on it.\n\n>> As for the point at hand: I'm fairly dubious that a common monitoring\n>> API will be very useful, considering how different the possible\n\n> Well, all replication scenarios have a lot in common. 
They should,=20\n> after all, they are all doing the same thing.\n\nThe end goal is approximately the same, but the mechanisms are totally\ndifferent, and that means that what you want to monitor is totally\ndifferent.\n\nPerhaps the problem is that you're using the wrong word, and that what\nyou would like to standardize is not monitoring but administrative\nfunctions. For example, I'd classify selecting tables to be replicated\nas an admin task. Monitoring to me means something like \"how much data\nis in the queue to be pushed out to slave X?\", which is a question that\nalready presupposes a heck of a lot about the implementation.\n\nI could agree with a set of guidelines that say stuff like \"if your\nmechanism is capable of selecting individual tables to replicate,\nthen here's the preferred way to control that feature.\" But I'm not\nsure that there's enough common functionality for monitoring (in the\nabove sense) to be worth standardizing.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Aug 2002 10:35:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Standard replication interface? " }, { "msg_contents": "Rod Taylor writes:\n\n> This one is pretty simple. It's been announced that the docbook group\n> isn't looking to continue with SGML.\n\nI don't know where you got this from, but it's not true. DocBook 5 will\nsupport SGML. And as long as they publish DTDs you can use them with SGML\ntools anyway.\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Sat, 17 Aug 2002 17:47:59 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Documentation DTD" }, { "msg_contents": "On Sat, 2002-08-17 at 11:47, Peter Eisentraut wrote:\n> Rod Taylor writes:\n> \n> > This one is pretty simple. It's been announced that the docbook group\n> > isn't looking to continue with SGML.\n> \n> I don't know where you got this from, but it's not true. DocBook 5 will\n> support SGML. And as long as they publish DTDs you can use them with SGML\n> tools anyway.\n\nYes, jade and friends will work. But Fop is quickly catching up to the\ndsssl abilities and can already do some things much cleaner (title\npages, headers and footers).\n\nAnyway, XML or SGML doesn't really matter. There are a number of\nenhancements I'd like to make to the doc process which won't be affected\neither way. Auto-generated example output, and others to help things\nstay in sync.\n\n", "msg_date": "17 Aug 2002 13:56:03 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Documentation DTD" }, { "msg_contents": "Rod Taylor writes:\n\n> Yes, jade and friends will work. But Fop is quickly catching up to the\n> dsssl abilities and can already do some things much cleaner (title\n> pages, headers and footers).\n\nThe real concern is that the XSLT stylesheets aren't anywhere near the\nmaturity of the DSSSL releases. I occasionally build the PostgreSQL\ndocumentation with various combinations of XSL tools and the results are\nbasically too ugly to look at -- if you get anything to look at in the\nfirst place.\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Sun, 18 Aug 2002 23:35:20 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Documentation DTD" }, { "msg_contents": "\nwill announce it on -announce tomorrow, if ppl want to take a quick look\nat it ... 
man pages weren't included, but I did regenerate the docs per\nPeter's suggested commands ...\n\nScary, even with removing a load of stuff over to gborg, its still gotten\nbigger then the last release :)\n\n%ls -lt ~ftp/pub/source/v7.3beta\ntotal 21125\n-rw-r--r-- 1 pgsql pgsql 70 Sep 4 22:28 postgresql-test-7.3b1.tar.gz.md5\n-rw-r--r-- 1 pgsql pgsql 65 Sep 4 22:28 postgresql-7.3b1.tar.gz.md5\n-rw-r--r-- 1 pgsql pgsql 70 Sep 4 22:28 postgresql-base-7.3b1.tar.gz.md5\n-rw-r--r-- 1 pgsql pgsql 70 Sep 4 22:28 postgresql-docs-7.3b1.tar.gz.md5\n-rw-r--r-- 1 pgsql pgsql 69 Sep 4 22:28 postgresql-opt-7.3b1.tar.gz.md5\n-rw-r--r-- 1 pgsql pgsql 1070154 Sep 4 22:28 postgresql-test-7.3b1.tar.gz\n-rw-r--r-- 1 pgsql pgsql 2629533 Sep 4 22:28 postgresql-opt-7.3b1.tar.gz\n-rw-r--r-- 1 pgsql pgsql 2577818 Sep 4 22:28 postgresql-docs-7.3b1.tar.gz\n-rw-r--r-- 1 pgsql pgsql 4505929 Sep 4 22:28 postgresql-base-7.3b1.tar.gz\n-rw-r--r-- 1 pgsql pgsql 10783992 Sep 4 22:27 postgresql-7.3b1.tar.gz\n\n\n", "msg_date": "Wed, 4 Sep 2002 23:39:33 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "beta1 packaged" }, { "msg_contents": "On Wed, 4 Sep 2002, Marc G. Fournier wrote:\n\n> %ls -lt ~ftp/pub/source/v7.3beta\n\nIs this where you're putting it this time? Last time was ~ftp/pub/beta.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n http://www.camping-usa.com http://www.cloudninegifts.com\n http://www.meanstreamradio.com http://www.unknown-artists.com\n==========================================================================\n\n\n\n\n", "msg_date": "Thu, 5 Sep 2002 05:51:51 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta1 packaged" }, { "msg_contents": "On Thu, 5 Sep 2002, Vince Vielhaber wrote:\n\n> On Wed, 4 Sep 2002, Marc G. Fournier wrote:\n>\n> > %ls -lt ~ftp/pub/source/v7.3beta\n>\n> Is this where you're putting it this time? Last time was ~ftp/pub/beta.\n\nactually, should be a symlink, but until I know the packaging and all is\nwell, I'm avoiding put it in there ...\n\n>\n\n", "msg_date": "Thu, 5 Sep 2002 09:16:07 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta1 packaged" }, { "msg_contents": "On Thu, 5 Sep 2002, Marc G. Fournier wrote:\n\n> On Thu, 5 Sep 2002, Vince Vielhaber wrote:\n>\n> > On Wed, 4 Sep 2002, Marc G. Fournier wrote:\n> >\n> > > %ls -lt ~ftp/pub/source/v7.3beta\n> >\n> > Is this where you're putting it this time? Last time was ~ftp/pub/beta.\n>\n> actually, should be a symlink, but until I know the packaging and all is\n> well, I'm avoiding put it in there ...\n\nOk, I'll leave the script as is then.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n http://www.camping-usa.com http://www.cloudninegifts.com\n http://www.meanstreamradio.com http://www.unknown-artists.com\n==========================================================================\n\n\n\n", "msg_date": "Thu, 5 Sep 2002 08:19:10 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta1 packaged" }, { "msg_contents": "\"Marc G. 
Fournier\" <[email protected]> writes:\n> actually, should be a symlink, but until I know the packaging and all is\n> well, I'm avoiding put it in there ...\n\nI pulled down the main tarball -- looks good AFAICT.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 05 Sep 2002 10:09:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta1 packaged " }, { "msg_contents": "On Wed, 2002-09-04 at 22:39, Marc G. Fournier wrote:\n> \n> will announce it on -announce tomorrow, if ppl want to take a quick look\n> at it ... man pages weren't included, but I did regenerate the docs per\n> Peter's suggested commands ...\n\n'./configure && make check' passes on i386 FreeBSD.\n\nSunOS control.shared2 5.7 Generic_106541-20 sun4u sparc SUNW,Ultra-5_10\nshows an error in ALTER TABLE tests:\n\n\nc> cat src/test/regress/regression.diffs \n*** ./expected/alter_table.out Fri Aug 30 12:23:20 2002\n--- ./results/alter_table.out Thu Sep 5 07:44:18 2002\n***************\n*** 367,374 ****\n -- As should this\n ALTER TABLE FKTABLE ADD FOREIGN KEY(ftest1) references\npktable(ptest1);\n NOTICE: ALTER TABLE will create implicit trigger(s) for FOREIGN KEY\ncheck(s)\n DROP TABLE pktable cascade;\n- NOTICE: Drop cascades to constraint $2 on table fktable\n NOTICE: Drop cascades to constraint $1 on table fktable\n DROP TABLE fktable;\n CREATE TEMP TABLE PKTABLE (ptest1 int, ptest2 inet,\n--- 367,374 ----\n -- As should this\n ALTER TABLE FKTABLE ADD FOREIGN KEY(ftest1) references\npktable(ptest1);\n NOTICE: ALTER TABLE will create implicit trigger(s) for FOREIGN KEY\ncheck(s)\n+ ERROR: Relation \"pg_temp_5\".\"\" does not exist\n DROP TABLE pktable cascade;\n NOTICE: Drop cascades to constraint $1 on table fktable\n DROP TABLE fktable;\n CREATE TEMP TABLE PKTABLE (ptest1 int, ptest2 inet,\n\n======================================================================\n\n\n\n", "msg_date": "05 Sep 2002 10:47:07 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta1 packaged" }, { "msg_contents": "Rod Taylor <[email protected]> writes:\n> SunOS control.shared2 5.7 Generic_106541-20 sun4u sparc SUNW,Ultra-5_10\n> shows an error in ALTER TABLE tests:\n\n> ALTER TABLE FKTABLE ADD FOREIGN KEY(ftest1) references\n> pktable(ptest1);\n> NOTICE: ALTER TABLE will create implicit trigger(s) for FOREIGN KEY\n> check(s)\n> + ERROR: Relation \"pg_temp_5\".\"\" does not exist\n\nThat's pretty bizarre. Is it reproducible? Can you get in there with a\ndebugger and try to figure out what's going wrong?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 05 Sep 2002 11:19:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta1 packaged " }, { "msg_contents": "\nGuys,\n\npostgresql7.3b1 does not build :-(, seems like a missing multibyte\ndirectory\n\n'----------------------------------------\n| make[4]: Entering directory `/home/masm/download/postgresql-7.3b1/src/backend/utils/time'\n| gcc -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o tqual.o tqual.c\n| /usr/bin/ld -r -o SUBSYS.o tqual.o\n| make[4]: Leaving directory `/home/masm/download/postgresql-7.3b1/src/backend/utils/time'\n| make -C mb SUBSYS.o\n| make: Entering an unknown directory\n| make: *** mb: No such file or directory. 
Stop.\n| make: Leaving an unknown directory\n| make[3]: *** [mb-recursive] Error 2\n| make[3]: Leaving directory `/home/masm/download/postgresql-7.3b1/src/backend/utils'\n| make[2]: *** [utils-recursive] Error 2\n| make[2]: Leaving directory `/home/masm/download/postgresql-7.3b1/src/backend'\n| make[1]: *** [all] Error 2\n| make[1]: Leaving directory `/home/masm/download/postgresql-7.3b1/src'\n| make: *** [all] Error 2\n`----------------------------------------\n\nor I'm missing something?\n\nRegards,\nManuel.\n", "msg_date": "05 Sep 2002 11:08:02 -0500", "msg_from": "Manuel Sugawara <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta1 packaged" }, { "msg_contents": "On Thu, 2002-09-05 at 11:19, Tom Lane wrote:\n> Rod Taylor <[email protected]> writes:\n> > SunOS control.shared2 5.7 Generic_106541-20 sun4u sparc SUNW,Ultra-5_10\n> > shows an error in ALTER TABLE tests:\n> \n> > ALTER TABLE FKTABLE ADD FOREIGN KEY(ftest1) references\n> > pktable(ptest1);\n> > NOTICE: ALTER TABLE will create implicit trigger(s) for FOREIGN KEY\n> > check(s)\n> > + ERROR: Relation \"pg_temp_5\".\"\" does not exist\n> \n> That's pretty bizarre. Is it reproducible? Can you get in there with a\n> debugger and try to figure out what's going wrong?\n\nNo, I've been unable to reproduce.\n\n\n", "msg_date": "05 Sep 2002 12:22:22 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta1 packaged" }, { "msg_contents": "Manuel Sugawara <[email protected]> writes:\n> or I'm missing something?\n\nSo it would seem. The utils/mb directory is certainly there in the full\ntarball that I pulled from ftp.us.postgresql.org this morning. How did\nyou acquire your source tree, exactly?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 05 Sep 2002 12:33:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta1 packaged " }, { "msg_contents": "Tom Lane <[email protected]> writes:\n\n> So it would seem. The utils/mb directory is certainly there in the full\n> tarball that I pulled from ftp.us.postgresql.org this morning. How did\n> you acquire your source tree, exactly?\n\nThe file is postgresql-base-7.3b1.tar.gz from\nftp://ftp.postgresql.org/pub/source/v7.3beta/\n\nmay be I need postgresql-7.3b1.tar.gz?\n\nRegards,\nManuel.\n\n-- \nNo es que no puedan hallar la solución: es que no ven el problema.\nG.K. Chesterson\n", "msg_date": "05 Sep 2002 11:46:25 -0500", "msg_from": "Manuel Sugawara <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta1 packaged" }, { "msg_contents": "\"Marc G. Fournier\" <[email protected]> writes:\n\n> You need either the 7.3b1.tar.gz (which is everything), or you need to get\n> all the various -*- parts (which are more manageable)\n\nOh, well. Thanks\n\nRegards,\nManuel.\n-- \nNo es que no puedan hallar la solución: es que no ven el problema.\nG.K. Chesterson\n", "msg_date": "05 Sep 2002 11:48:50 -0500", "msg_from": "Manuel Sugawara <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta1 packaged" }, { "msg_contents": "On 5 Sep 2002, Manuel Sugawara wrote:\n\n> Tom Lane <[email protected]> writes:\n>\n> > So it would seem. The utils/mb directory is certainly there in the full\n> > tarball that I pulled from ftp.us.postgresql.org this morning. 
How did\n> > you acquire your source tree, exactly?\n>\n> The file is postgresql-base-7.3b1.tar.gz from\n> ftp://ftp.postgresql.org/pub/source/v7.3beta/\n>\n> may be I need postgresql-7.3b1.tar.gz?\n\nYou need either the 7.3b1.tar.gz (which is everything), or you need to get\nall the various -*- parts (which are more manageable)\n\n\n", "msg_date": "Thu, 5 Sep 2002 13:51:26 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta1 packaged" }, { "msg_contents": "Marc G. Fournier wrote:\n> On 5 Sep 2002, Manuel Sugawara wrote:\n> \n> > Tom Lane <[email protected]> writes:\n> >\n> > > So it would seem. The utils/mb directory is certainly there in the full\n> > > tarball that I pulled from ftp.us.postgresql.org this morning. How did\n> > > you acquire your source tree, exactly?\n> >\n> > The file is postgresql-base-7.3b1.tar.gz from\n> > ftp://ftp.postgresql.org/pub/source/v7.3beta/\n> >\n> > may be I need postgresql-7.3b1.tar.gz?\n> \n> You need either the 7.3b1.tar.gz (which is everything), or you need to get\n> all the various -*- parts (which are more manageable)\n\nI am confused. Are you saying the base file isn't compilable?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 5 Sep 2002 13:24:38 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta1 packaged" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n\n> > You need either the 7.3b1.tar.gz (which is everything), or you need to get\n> > all the various -*- parts (which are more manageable)\n> \n> I am confused. Are you saying the base file isn't compilable?\n\nMy idea was that it is.\n\nRegards,\nManuel.\n", "msg_date": "05 Sep 2002 12:50:25 -0500", "msg_from": "Manuel Sugawara <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta1 packaged" }, { "msg_contents": "On Thu, 5 Sep 2002, Bruce Momjian wrote:\n\n> Marc G. Fournier wrote:\n> > On 5 Sep 2002, Manuel Sugawara wrote:\n> >\n> > > Tom Lane <[email protected]> writes:\n> > >\n> > > > So it would seem. The utils/mb directory is certainly there in the full\n> > > > tarball that I pulled from ftp.us.postgresql.org this morning. How did\n> > > > you acquire your source tree, exactly?\n> > >\n> > > The file is postgresql-base-7.3b1.tar.gz from\n> > > ftp://ftp.postgresql.org/pub/source/v7.3beta/\n> > >\n> > > may be I need postgresql-7.3b1.tar.gz?\n> >\n> > You need either the 7.3b1.tar.gz (which is everything), or you need to get\n> > all the various -*- parts (which are more manageable)\n>\n> I am confused. Are you saying the base file isn't compilable?\n\nHrmm ... that is odd, now that you mention it ... but the file\n'distributions' between v7.2 and v7.3beta appear to be the same, so -base-\nwas broken in the old one too?\n\n\n", "msg_date": "Thu, 5 Sep 2002 15:15:39 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta1 packaged" }, { "msg_contents": "Bruce Momjian writes:\n\n> I am confused. Are you saying the base file isn't compilable?\n\nThe mb stuff is missing because it used to be optional in the old\nsplitting scheme. 
Needs to be rethought.\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Thu, 5 Sep 2002 20:27:33 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta1 packaged" }, { "msg_contents": "Marc G. Fournier writes:\n\n> Scary, even with removing a load of stuff over to gborg, its still gotten\n> bigger then the last release :)\n\nNot hard to find the culprit:\n\n7.2:\n\n3.4M src/backend/utils/mb\n\n7.3:\n\n9.6M src/backend/utils/mb\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Thu, 5 Sep 2002 20:28:28 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta1 packaged" }, { "msg_contents": "\"Marc G. Fournier\" <[email protected]> writes:\n> Hrmm ... that is odd, now that you mention it ... but the file\n> 'distributions' between v7.2 and v7.3beta appear to be the same, so -base-\n> was broken in the old one too?\n\nIt was never intended that the \"base\" tarfile was alone sufficient to do\nanything, was it?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 05 Sep 2002 14:47:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta1 packaged " }, { "msg_contents": "Tom Lane wrote:\n> \"Marc G. Fournier\" <[email protected]> writes:\n> > Hrmm ... that is odd, now that you mention it ... but the file\n> > 'distributions' between v7.2 and v7.3beta appear to be the same, so -base-\n> > was broken in the old one too?\n> \n> It was never intended that the \"base\" tarfile was alone sufficient to do\n> anything, was it?\n\nOK, so if base isn't compilable, then what is it good for? I don't see\nany add-on packages that would make it usable.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 5 Sep 2002 14:49:43 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta1 packaged" }, { "msg_contents": "Peter Eisentraut wrote:\n> Marc G. Fournier writes:\n> \n> > Scary, even with removing a load of stuff over to gborg, its still gotten\n> > bigger then the last release :)\n> \n> Not hard to find the culprit:\n> \n> 7.2:\n> \n> 3.4M src/backend/utils/mb\n> \n> 7.3:\n> \n> 9.6M src/backend/utils/mb\n\nWow. Just checking my CVS, the /mb stuff appears to be 17% of our\nsource tree. Now that they are loadable, can we get enough of it in the\nbase to make it compilable and leave the rest for the full tarball.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 5 Sep 2002 14:56:06 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta1 packaged" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> OK, so if base isn't compilable, then what is it good for? I don't see\n> any add-on packages that would make it usable.\n\nAFAIR, the only reason for having the split packaging is to accommodate\npeople who are downloading across flaky connections --- less to retry if\nyour connection drops. 
You still have to download all the data if you\nwant to have a useful package, no?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 05 Sep 2002 16:13:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta1 packaged " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > OK, so if base isn't compilable, then what is it good for? I don't see\n> > any add-on packages that would make it usable.\n> \n> AFAIR, the only reason for having the split packaging is to accommodate\n> people who are downloading across flaky connections --- less to retry if\n> your connection drops. You still have to download all the data if you\n> want to have a useful package, no?\n\nThat was not my understanding. If that was the issue, we would release\npackages numbered 1-5. Marc splits them up saying if you don't want the\ndocs, don't download them. I assume opt is /contrib. My guess is he\nhad the multibyte stuff out in /opt but now they are required.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 5 Sep 2002 16:38:03 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta1 packaged" }, { "msg_contents": "On Fri, 2002-09-06 at 03:14, Marc G. Fournier wrote:\n> 0On Thu, 5 Sep 2002, Tom Lane wrote:\n> \n> > Bruce Momjian <[email protected]> writes:\n> > > OK, so if base isn't compilable, then what is it good for? I don't see\n> > > any add-on packages that would make it usable.\n> >\n> > AFAIR, the only reason for having the split packaging is to accommodate\n> > people who are downloading across flaky connections --- less to retry if\n> > your connection drops. You still have to download all the data if you\n> > want to have a useful package, no?\n> \n> Correct ... even on high speed connections, I've had problems in the past\n> getting large files to download, so it makes it easier if you only have to\n> retry a part, instead of hte whole thing ...\n\nMost modern ftp servers and clients support resuming aborted downloads,\nno ?\n\n\n[hannu@rh72 DL]$ ncftp ftp://ftp.postgresql.org/pub/source/v7.3beta/\nNcFTP 3.1.3 (Mar 27, 2002) by Mike Gleason ([email protected]).\nConnecting to\n64.49.215.8... \npostgresql.org FTP server (lukemftpd 1.2 beta 1) ready.\nLogging\nin... \nGuest login ok, access restrictions apply.\nLogged in to\nftp.postgresql.org. \nCurrent remote directory is /pub/source/v7.3beta.\nncftp /pub/source/v7.3beta > get postgresql-test-7.3b1.tar.gz\npostgresql-test-7.3b1.tar.gz: ETA: 0:03 0.63/ 1.02 MB 131.44 kB/s \n\n*** NB! I pressed ^C here\n\nselect: Interrupted system call\npostgresql-test-7.3b1.tar.gz: 0.72/ 1.02 MB 112.17 kB/s \nget postgresql-test-7.3b1.tar.gz: data transfer aborted by local user.\nncftp /pub/source/v7.3beta > get postgresql-test-7.3b1.tar.gz\n\nThe local file \"postgresql-test-7.3b1.tar.gz\" already exists.\n Local: 753768 bytes, dated Thu Sep 05 07:28:40 GMT-5 2002.\n Remote: 1070154 bytes, dated Thu Sep 05 07:28:40 GMT-5 2002.\n\n [O]verwrite? [R]esume? [A]ppend to? [S]kip? [N]ew Name?\n [O!]verwrite all? [R!]esume all? [S!]kip all? 
[C]ancel > R\npostgresql-test-7.3b1.tar.gz: 1.02 MB 112.59\nkB/s \nncftp /pub/source/v7.3beta >\n\n\n-----------\nHannu\n\n\n\n", "msg_date": "06 Sep 2002 01:59:13 +0500", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta1 packaged" }, { "msg_contents": "0On Thu, 5 Sep 2002, Tom Lane wrote:\n\n> Bruce Momjian <[email protected]> writes:\n> > OK, so if base isn't compilable, then what is it good for? I don't see\n> > any add-on packages that would make it usable.\n>\n> AFAIR, the only reason for having the split packaging is to accommodate\n> people who are downloading across flaky connections --- less to retry if\n> your connection drops. You still have to download all the data if you\n> want to have a useful package, no?\n\nCorrect ... even on high speed connections, I've had problems in the past\ngetting large files to download, so it makes it easier if you only have to\nretry a part, instead of hte whole thing ...\n\n\n", "msg_date": "Thu, 5 Sep 2002 19:14:07 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta1 packaged " }, { "msg_contents": "Marc G. Fournier wrote:\n> 0On Thu, 5 Sep 2002, Tom Lane wrote:\n> \n> > Bruce Momjian <[email protected]> writes:\n> > > OK, so if base isn't compilable, then what is it good for? I don't see\n> > > any add-on packages that would make it usable.\n> >\n> > AFAIR, the only reason for having the split packaging is to accommodate\n> > people who are downloading across flaky connections --- less to retry if\n> > your connection drops. You still have to download all the data if you\n> > want to have a useful package, no?\n> \n> Correct ... even on high speed connections, I've had problems in the past\n> getting large files to download, so it makes it easier if you only have to\n> retry a part, instead of hte whole thing ...\n\nThen why do we mark them with stuff. Isn't it easier to just label them\n1-5? I can see the docs being split out, but I don't understand the\nother splits.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 5 Sep 2002 18:15:14 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta1 packaged" }, { "msg_contents": "On Thu, 5 Sep 2002, Bruce Momjian wrote:\n\n> Tom Lane wrote:\n> > Bruce Momjian <[email protected]> writes:\n> > > OK, so if base isn't compilable, then what is it good for? I don't see\n> > > any add-on packages that would make it usable.\n> >\n> > AFAIR, the only reason for having the split packaging is to accommodate\n> > people who are downloading across flaky connections --- less to retry if\n> > your connection drops. You still have to download all the data if you\n> > want to have a useful package, no?\n>\n> That was not my understanding. If that was the issue, we would release\n> packages numbered 1-5. Marc splits them up saying if you don't want the\n> docs, don't download them. I assume opt is /contrib. My guess is he\n> had the multibyte stuff out in /opt but now they are required.\n\nActually, I just asked for the split, I think it was peter that actually\ndid it ... :)\n\n", "msg_date": "Thu, 5 Sep 2002 19:15:21 -0300 (ADT)", "msg_from": "\"Marc G. 
Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta1 packaged" }, { "msg_contents": "\nTom Lane writes:\n> Rod Taylor <[email protected]> writes:\n> > SunOS control.shared2 5.7 Generic_106541-20 sun4u sparc SUNW,Ultra-5_10\n> > shows an error in ALTER TABLE tests:\n> \n> > ALTER TABLE FKTABLE ADD FOREIGN KEY(ftest1) references\n> > pktable(ptest1);\n> > NOTICE: ALTER TABLE will create implicit trigger(s) for FOREIGN KEY\n> > check(s)\n> > + ERROR: Relation \"pg_temp_5\".\"\" does not exist\n> \n> That's pretty bizarre. Is it reproducible? Can you get in there with a\n> debugger and try to figure out what's going wrong?\n\nI saw a similar error on a NetBSD-1.5.1/i386 box, but have not been\nable to reproduce it. Subsequent runs of 'gmake check' have all\npassed.\n\nUntil I saw Rod's message I was thinking it was more evidence of\nhardware flakiness with this particular machine, but perhaps not.\n\n*** ./expected/alter_table.out Sat Aug 31 05:23:20 2002\n--- ./results/alter_table.out Fri Sep 6 16:54:35 2002\n***************\n*** 332,337 ****\n--- 332,338 ----\n -- Try (and succeed)\n ALTER TABLE tmp3 add constraint tmpconstr foreign key (a) references tmp2 matc\nh full;\n NOTICE: ALTER TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\n+ ERROR: Relation \"public\".\"^B^U&W<88><F0>0}\" does not exist\n -- Try (and fail) to create constraint from tmp5(a) to tmp4(a) - unique constr\naint on\n -- tmp4 is a,b\n ALTER TABLE tmp5 add constraint tmpconstr foreign key(a) references tmp4(a) ma\ntch full;\n\nRegards,\n\nGiles\n", "msg_date": "Sat, 07 Sep 2002 00:02:27 +1000", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta1 packaged " }, { "msg_contents": "Marc G. Fournier writes:\n\n> Actually, I just asked for the split, I think it was peter that actually\n> did it ... :)\n\nI recall that you thought of the split in order to save bandwidth for\nthose who didn't need everything. It was expressedly intended that the\n-base tarball was usable by itself and that you only needed the others if\nyou wanted any of the optional features (--with-* etc.).\n\nBut now that the optional stuff has mostly either gone away or isn't\noptional anymore a revised split would come out pretty skewed:\n\n-rw-r--r-- 1 peter users 10824414 Sep 6 23:21 postgresql-7.3b1.tar.gz\n-rw-r--r-- 1 peter users 6675930 Sep 6 23:25 postgresql-base-7.3b1.tar.gz\n-rw-r--r-- 1 peter users 2585621 Sep 6 23:30 postgresql-docs-7.3b1.tar.gz\n-rw-r--r-- 1 peter users 485095 Sep 6 23:30 postgresql-opt-7.3b1.tar.gz\n-rw-r--r-- 1 peter users 1072069 Sep 6 23:30 postgresql-test-7.3b1.tar.gz\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Sat, 7 Sep 2002 00:01:10 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta1 packaged" }, { "msg_contents": "\nRod, are you still seeing this failure?\n\n---------------------------------------------------------------------------\n\nRod Taylor wrote:\n> On Wed, 2002-09-04 at 22:39, Marc G. Fournier wrote:\n> > \n> > will announce it on -announce tomorrow, if ppl want to take a quick look\n> > at it ... 
man pages weren't included, but I did regenerate the docs per\n> > Peter's suggested commands ...\n> \n> './configure && make check' passes on i386 FreeBSD.\n> \n> SunOS control.shared2 5.7 Generic_106541-20 sun4u sparc SUNW,Ultra-5_10\n> shows an error in ALTER TABLE tests:\n> \n> \n> c> cat src/test/regress/regression.diffs \n> *** ./expected/alter_table.out Fri Aug 30 12:23:20 2002\n> --- ./results/alter_table.out Thu Sep 5 07:44:18 2002\n> ***************\n> *** 367,374 ****\n> -- As should this\n> ALTER TABLE FKTABLE ADD FOREIGN KEY(ftest1) references\n> pktable(ptest1);\n> NOTICE: ALTER TABLE will create implicit trigger(s) for FOREIGN KEY\n> check(s)\n> DROP TABLE pktable cascade;\n> - NOTICE: Drop cascades to constraint $2 on table fktable\n> NOTICE: Drop cascades to constraint $1 on table fktable\n> DROP TABLE fktable;\n> CREATE TEMP TABLE PKTABLE (ptest1 int, ptest2 inet,\n> --- 367,374 ----\n> -- As should this\n> ALTER TABLE FKTABLE ADD FOREIGN KEY(ftest1) references\n> pktable(ptest1);\n> NOTICE: ALTER TABLE will create implicit trigger(s) for FOREIGN KEY\n> check(s)\n> + ERROR: Relation \"pg_temp_5\".\"\" does not exist\n> DROP TABLE pktable cascade;\n> NOTICE: Drop cascades to constraint $1 on table fktable\n> DROP TABLE fktable;\n> CREATE TEMP TABLE PKTABLE (ptest1 int, ptest2 inet,\n> \n> ======================================================================\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 10 Sep 2002 22:50:20 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta1 packaged" }, { "msg_contents": "I've not been able to reproduce it, so no. But there was another report\nfrom someone else about the same failure on another platform.\n\nOn Tue, 2002-09-10 at 22:50, Bruce Momjian wrote:\n> \n> Rod, are you still seeing this failure?\n> \n> ---------------------------------------------------------------------------\n> \n> Rod Taylor wrote:\n> > On Wed, 2002-09-04 at 22:39, Marc G. Fournier wrote:\n> > > \n> > > will announce it on -announce tomorrow, if ppl want to take a quick look\n> > > at it ... 
man pages weren't included, but I did regenerate the docs per\n> > > Peter's suggested commands ...\n> > \n> > './configure && make check' passes on i386 FreeBSD.\n> > \n> > SunOS control.shared2 5.7 Generic_106541-20 sun4u sparc SUNW,Ultra-5_10\n> > shows an error in ALTER TABLE tests:\n> > \n> > \n> > c> cat src/test/regress/regression.diffs \n> > *** ./expected/alter_table.out Fri Aug 30 12:23:20 2002\n> > --- ./results/alter_table.out Thu Sep 5 07:44:18 2002\n> > ***************\n> > *** 367,374 ****\n> > -- As should this\n> > ALTER TABLE FKTABLE ADD FOREIGN KEY(ftest1) references\n> > pktable(ptest1);\n> > NOTICE: ALTER TABLE will create implicit trigger(s) for FOREIGN KEY\n> > check(s)\n> > DROP TABLE pktable cascade;\n> > - NOTICE: Drop cascades to constraint $2 on table fktable\n> > NOTICE: Drop cascades to constraint $1 on table fktable\n> > DROP TABLE fktable;\n> > CREATE TEMP TABLE PKTABLE (ptest1 int, ptest2 inet,\n> > --- 367,374 ----\n> > -- As should this\n> > ALTER TABLE FKTABLE ADD FOREIGN KEY(ftest1) references\n> > pktable(ptest1);\n> > NOTICE: ALTER TABLE will create implicit trigger(s) for FOREIGN KEY\n> > check(s)\n> > + ERROR: Relation \"pg_temp_5\".\"\" does not exist\n> > DROP TABLE pktable cascade;\n> > NOTICE: Drop cascades to constraint $1 on table fktable\n> > DROP TABLE fktable;\n> > CREATE TEMP TABLE PKTABLE (ptest1 int, ptest2 inet,\n> > \n> > ======================================================================\n> > \n> > \n> > \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> > \n> > http://www.postgresql.org/users-lounge/docs/faq.html\n> > \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n> \n-- \n Rod Taylor\n\n", "msg_date": "10 Sep 2002 22:52:36 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta1 packaged" }, { "msg_contents": "\nYep, and he couldn't reproduce it either, and on a different platform. \nI think that indicates we do have a problem in there, it just doesn't\nshow very often. He even got ASCII garbage in the error message.\n\n---------------------------------------------------------------------------\n\nRod Taylor wrote:\n> I've not been able to reproduce it, so no. But there was another report\n> from someone else about the same failure on another platform.\n> \n> On Tue, 2002-09-10 at 22:50, Bruce Momjian wrote:\n> > \n> > Rod, are you still seeing this failure?\n> > \n> > ---------------------------------------------------------------------------\n> > \n> > Rod Taylor wrote:\n> > > On Wed, 2002-09-04 at 22:39, Marc G. Fournier wrote:\n> > > > \n> > > > will announce it on -announce tomorrow, if ppl want to take a quick look\n> > > > at it ... 
man pages weren't included, but I did regenerate the docs per\n> > > > Peter's suggested commands ...\n> > > \n> > > './configure && make check' passes on i386 FreeBSD.\n> > > \n> > > SunOS control.shared2 5.7 Generic_106541-20 sun4u sparc SUNW,Ultra-5_10\n> > > shows an error in ALTER TABLE tests:\n> > > \n> > > \n> > > c> cat src/test/regress/regression.diffs \n> > > *** ./expected/alter_table.out Fri Aug 30 12:23:20 2002\n> > > --- ./results/alter_table.out Thu Sep 5 07:44:18 2002\n> > > ***************\n> > > *** 367,374 ****\n> > > -- As should this\n> > > ALTER TABLE FKTABLE ADD FOREIGN KEY(ftest1) references\n> > > pktable(ptest1);\n> > > NOTICE: ALTER TABLE will create implicit trigger(s) for FOREIGN KEY\n> > > check(s)\n> > > DROP TABLE pktable cascade;\n> > > - NOTICE: Drop cascades to constraint $2 on table fktable\n> > > NOTICE: Drop cascades to constraint $1 on table fktable\n> > > DROP TABLE fktable;\n> > > CREATE TEMP TABLE PKTABLE (ptest1 int, ptest2 inet,\n> > > --- 367,374 ----\n> > > -- As should this\n> > > ALTER TABLE FKTABLE ADD FOREIGN KEY(ftest1) references\n> > > pktable(ptest1);\n> > > NOTICE: ALTER TABLE will create implicit trigger(s) for FOREIGN KEY\n> > > check(s)\n> > > + ERROR: Relation \"pg_temp_5\".\"\" does not exist\n> > > DROP TABLE pktable cascade;\n> > > NOTICE: Drop cascades to constraint $1 on table fktable\n> > > DROP TABLE fktable;\n> > > CREATE TEMP TABLE PKTABLE (ptest1 int, ptest2 inet,\n> > > \n> > > ======================================================================\n> > > \n> > > \n> > > \n> > > \n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 5: Have you checked our extensive FAQ?\n> > > \n> > > http://www.postgresql.org/users-lounge/docs/faq.html\n> > > \n> > \n> > -- \n> > Bruce Momjian | http://candle.pha.pa.us\n> > [email protected] | (610) 359-1001\n> > + If your life is a hard drive, | 13 Roberts Road\n> > + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n> > \n> -- \n> Rod Taylor\n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 10 Sep 2002 22:55:48 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta1 packaged" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Yep, and he couldn't reproduce it either, and on a different platform. \n> I think that indicates we do have a problem in there, it just doesn't\n> show very often.\n\nI agree, this looks a lot like a low-probability bug. But how to attack\nit when we can't reproduce it with even small probability? 
We need the\nreporters to try to figure out what environment made it happen for them.\nI can chase a bug if I can make it happen one-time-in-ten, or even\none-time-in-a-hundred, but I can't do much with a bug that I've only\nheard secondhand reports of.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 10 Sep 2002 23:53:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta1 packaged " }, { "msg_contents": "Giles Lean <[email protected]> writes:\n>> Rod Taylor <[email protected]> writes:\n>>> ALTER TABLE FKTABLE ADD FOREIGN KEY(ftest1) references\n>>> pktable(ptest1);\n>>> NOTICE: ALTER TABLE will create implicit trigger(s) for FOREIGN KEY\n>>> check(s)\n>>> + ERROR: Relation \"pg_temp_5\".\"\" does not exist\n>> \n>> That's pretty bizarre. Is it reproducible? Can you get in there with a\n>> debugger and try to figure out what's going wrong?\n\n> I saw a similar error on a NetBSD-1.5.1/i386 box, but have not been\n> able to reproduce it. Subsequent runs of 'gmake check' have all\n> passed.\n\n> Until I saw Rod's message I was thinking it was more evidence of\n> hardware flakiness with this particular machine, but perhaps not.\n\n> NOTICE: ALTER TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\n> + ERROR: Relation \"public\".\"^B^U&W<88><F0>0}\" does not exist\n\n\nI've applied the attached patch, which I think may cure these failures.\n\n\t\t\tregards, tom lane\n\n\n*** src/backend/commands/tablecmds.c.orig\tWed Sep 4 17:30:18 2002\n--- src/backend/commands/tablecmds.c\tThu Sep 12 17:06:58 2002\n***************\n*** 2920,2926 ****\n \t * unfortunately).\n \t */\n \tmyRel = makeRangeVar(get_namespace_name(RelationGetNamespace(rel)),\n! \t\t\t\t\t\t RelationGetRelationName(rel));\n \n \t/*\n \t * Preset objectAddress fields\n--- 2920,2926 ----\n \t * unfortunately).\n \t */\n \tmyRel = makeRangeVar(get_namespace_name(RelationGetNamespace(rel)),\n! \t\t\t\t\t\t pstrdup(RelationGetRelationName(rel)));\n \n \t/*\n \t * Preset objectAddress fields\n", "msg_date": "Thu, 12 Sep 2002 17:18:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta1 packaged " }, { "msg_contents": "I have the beginnings of an idea about improving our interlock logic\nfor postmaster startup. The existing method is pretty good, but we\nhave had multiple reports that it can fail during system boot if the\nold postmaster wasn't given a chance to shut down cleanly: there's\na fair-sized chance that the old postmaster PID will have been assigned\nto some other process, and that fools the interlock check.\n\nI think we can improve matters by combining the existing checks for\nold-postmaster-PID and old-shared-memory-segment into one cohesive\nentity. To do this, we must abandon the existing special case for\n\"private memory\" when running a bootstrap or standalone backend.\nEven a standalone backend will be required to get a shmem segment\njust like a postmaster would. This ensures that we can use both\nparts of the safety check, even when the old holder of the data\ndirectory interlock was a standalone backend.\n\nHere's a sketch of the improved startup procedure:\n\n1. Try to open and read the $PGDATA/postmaster.pid file. If we fail\nbecause it's not there, okay to continue, because old postmaster must\nhave shut down cleanly; skip to step 8. If we fail for any other reason\n(eg, permissions failure), complain and abort startup. 
(Because we\nwrite the postmaster.pid file mode 600, getting past this step\nguarantees we are either the same UID as the old postmaster or root;\nelse we'd have failed to read the old file. This fact justifies some\nassumptions below.)\n\n2. Extract old postmaster PID and old shared memory key from file.\n(Both will now always be there, per above; abort if file contents are\nnot as expected.) We do not bother with trying kill(PID, 0) anymore,\nbecause it doesn't prove anything.\n\n3. Try to attach to the old shared memory segment using the old key.\nThere are three possible outcomes:\nA: fail because it's not there. Then we know the old postmaster\n (or standalone backend) is gone, and so are all its children.\n Okay to skip to step 7.\nB: fail for some other reason, eg permissions violation. Because\n we know we are the same UID (or root) as before, this must indicate\n that the \"old\" shmem segment actually belongs to someone else;\n so we have a chance collision with someone else's shmem key.\n Ignore the shmem segment, skip to step 7. (In short,\n we can treat all failures alike, which is a Good Thing.)\nC: attach succeeds. Continue to step 4.\n\n4. Examine header of old shmem segment to see if it contains the right\n magic number *and* old postmaster PID. If not, it isn't really\n a Postgres shmem segment, so ignore it; detach and skip to step 7.\n\n5. If old shmem segment still has other processes attached to it,\n abort: these must be an old postmaster and/or old backends still\n alive. (We can check nattach > 1 in the SysV case, or just assume\n they are there in the hugepages-segment case that Neil wants to add.)\n\n6. Detach from and delete the old shmem segment. (Deletion isn't\n strictly necessary, but we should do it to avoid sucking resources.)\n\n7. Delete the old postmaster.pid file. If this fails for any reason,\n abort. (Either we've got permissions problems or a race condition\n with someone else trying to start up.)\n\n8. Create a shared memory segment.\n\n9. Create a new postmaster.pid file and record my PID and segment key.\n If we fail to do this (with O_EXCL create), abort; someone else\n must be trying to start up at the same time. Be careful to create\n the lockfile mode 600, per notes above.\n\n\nThis is not quite ready for prime time yet, because it's not very\nbulletproof against the scenario where two would-be postmasters are\nstarting concurrently. The first one might get all the way through the\nsequence before the second one arrives at step 7 --- in which case the\nsecond one will be deleting the first one's lockfile. Oops. A possible\nanswer is to create a second lockfile that only exists for the duration\nof the startup sequence, and use that to ensure that only one process is\ntrying this sequence at a time. This reintroduces the same problem\nwe're trying to get away from (must rely on kill(PID, 0) to determine\nvalidity of the lock file), but at least the window of vulnerability is\nmuch smaller than before. Does anyone see a better way?\n\nA more general objection is that this approach will hardwire, even more\nsolidly than before, the assumption that we are using a shared-memory\nAPI that provides identifiable shmem segments (ie, something we can\nrecord a key for and later try to attach to). I think some people\nwanted to get away from that. 
But so far I've not seen any proposal\nfor an alternative startup interlock that doesn't require attachable\nshared memory.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 28 Sep 2002 12:01:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Improving backend startup interlock" }, { "msg_contents": "I have seen no discussion on whether to go ahead with a 7.2.3 to add\nseveral serious fixes Tom has made to the code in the past few days. \n\nAre we too close to 7.3 for this to be worthwhile? Certainly there will\nbe people distributing 7.2.X for some time as 7.3 stabilizes.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 28 Sep 2002 14:36:40 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "7.2.3?" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> I have seen no discussion on whether to go ahead with a 7.2.3 to add\n> several serious fixes Tom has made to the code in the past few days.\n\nThis will allow production sites to run the 7.2 series and also do\nVACUUM FULL won't it?\n\nIf so, then the idea is already pretty good. :-)\n\nWhich other fixes would be included?\n\nRegards and best wishes,\n\nJustin Clift\n\n \n> Are we too close to 7.3 for this to be worthwhile? Certainly there will\n> be people distributing 7.2.X for some time as 7.3 stabilizes.\n> \n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Sun, 29 Sep 2002 04:45:40 +1000", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.2.3?" }, { "msg_contents": "On Saturday 28 September 2002 02:36 pm, Bruce Momjian wrote:\n> I have seen no discussion on whether to go ahead with a 7.2.3 to add\n> several serious fixes Tom has made to the code in the past few days.\n\n> Are we too close to 7.3 for this to be worthwhile? Certainly there will\n> be people distributing 7.2.X for some time as 7.3 stabilizes.\n\nIMHO, I believe a 7.2.3 is worthwhile. It isn't _that_ much effort, is it? I \nam most certainly of the school of thought that backporting serious issues \ninto the last stable release is a Good Thing. I don't think a released 7.3 \nshould prevent us from a 7.2.4 down the road, either -- or even a 7.1.4 if a \nserious security issue were to be found there. Probably not a 7.0.4, though. \nAnd definitely not a 6.5.4. Some people can have great difficulty migrating \n-- if we're not going to make it easy for people to migrate, we should \nsupport older versions with fixes. IMHO, of course.\n\nIf it hasn't already, a fix for the Red Hat 7.3/glibc mktime(3) issue \n(workaround really) would be nice, as I understand the 7.3 branch has one.\n\nRPM's will take me all of an hour if I'm at work when it's released. 
That is \nif my wife doesn't go into labor first (she's at 37 weeks and having \nBraxton-Hicks already). #4.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sat, 28 Sep 2002 14:47:28 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.2.3?" }, { "msg_contents": "Justin Clift dijo: \n\n> Bruce Momjian wrote:\n>\n> > I have seen no discussion on whether to go ahead with a 7.2.3 to add\n> > several serious fixes Tom has made to the code in the past few days.\n> \n> This will allow production sites to run the 7.2 series and also do\n> VACUUM FULL won't it?\n> \n> If so, then the idea is already pretty good. :-)\n> \n> Which other fixes would be included?\n\nAt least the VACUUM code should prevent VACUUM from running inside a\nfunction. At least one user has been bitten by it.\n\nMemory leaks and such in the PL modules should be backported also.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"El sentido de las cosas no viene de las cosas, sino de\nlas inteligencias que las aplican a sus problemas diarios\nen busca del progreso.\" (Ernesto Hern�ndez-Novich)\n\n", "msg_date": "Sat, 28 Sep 2002 15:19:44 -0400 (CLT)", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.2.3?" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Memory leaks and such in the PL modules should be backported also.\n\nThis is getting out of hand :-(\n\n7.2 is in maintenance status at this point. I'm willing to do backports\nfor bugs that cause data loss, like this VACUUM/CLOG issue.\nPerformance problems are not on the radar screen at all (especially\nnot when the putative fixes for them haven't received much of any\ntesting, and are barely worthy to be called beta status).\n\nWe do not have either the developer manpower or the testing resources\nto do more than the most minimal maintenance on back versions. Major\nback-port efforts just aren't going to happen. If they did, they would\nsignificantly impact our ability to work on 7.3 and up; does that seem\nlike a good tradeoff to you?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 28 Sep 2002 16:14:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.2.3? " }, { "msg_contents": "Tom Lane dijo: \n\n> Alvaro Herrera <[email protected]> writes:\n> > Memory leaks and such in the PL modules should be backported also.\n> \n> This is getting out of hand :-(\n\nYes, I agree with you.\n\n> Major back-port efforts just aren't going to happen. If they did,\n> they would significantly impact our ability to work on 7.3 and up;\n> does that seem like a good tradeoff to you?\n\nI understand the issue. I also understand that is very nice for\nPostgreSQL to advance very quickly, and requiring backports (and\nsubsequent slowdown) is not nice at all. However, for users it's very\nimportant to have the fixes present in newer versions... _without_ the\nburden of having to upgrade!\n\nI agree with Lamar that upgrading is a very difficult process right now.\nRequiring huge amounts of disk space and database downtime to do\ndump/restore is in some cases too high a price to pay. 
So maybe the\nupgrading process should be observed instead of wasting time on people\ntrying to stay behind because of the price of that process.\n\nMaybe there is some way of making the life easier for the upgrader.\nLet's see, when you upgrade there are basically two things that change:\n\na) system catalogs\n Going from one version to another requires a number of changes: new\n tuples, deleted tuples, new attributes, deleted attributes. On-line\n transforming syscatalogs for the three first types seems easy. The\n last one may be difficult, but it also may not be, I'm not sure. It\n will require a standalone backend for shared relations and such, but\n hey, it's much cheaper than the process that's required now.\n\nb) on-disk representation of user data\n This is not easy. Upgrading means changing each filenode from one\n version to another; it requires a tool that understands both (and\n more than two) versions. It also requires a backend that is able to\n detect that a page is not the version it should, and either abort or\n convert it on the fly (this last possibility seems very nice).\n\n Note that only tables should be converted: other objects (indexes)\n should just be rebuilt.\n\nThere are other things that change. For example, dependencies are new\nin 7.3; building them without the explicit schema construction seems\ndifficult, but it's certainly possible. The implicit/explicit cast\nsystem is also new, but it doesn't depend on user data (except for user\ndefined datatypes, and that should be done manually by the user), so\nshould just be created from scratch.\n\nIs this at least remotely possible to do?\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"La fuerza no est� en los medios f�sicos\nsino que reside en una voluntad indomable\" (Gandhi)\n\n", "msg_date": "Sat, 28 Sep 2002 16:42:43 -0400 (CLT)", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.2.3? " }, { "msg_contents": "On Sat, 28 Sep 2002, Bruce Momjian wrote:\n\n> I have seen no discussion on whether to go ahead with a 7.2.3 to add\n> several serious fixes Tom has made to the code in the past few days.\n>\n> Are we too close to 7.3 for this to be worthwhile? Certainly there will\n> be people distributing 7.2.X for some time as 7.3 stabilizes.\n\nThe vacuum thing is big enough that there should be since as always people\naren't going to move immediately forward with a major version change.\n\n\n", "msg_date": "Sat, 28 Sep 2002 13:43:55 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.2.3?" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Maybe there is some way of making the life easier for the upgrader.\n> Let's see, when you upgrade there are basically two things that change:\n> a) system catalogs\n> b) on-disk representation of user data\n> [much snipped]\n\nYup. I see nothing wrong with the pg_upgrade process that we've\npreviously used for updating the system catalogs, however. Trying to\ndo it internally in some way will be harder and more dangerous (ie,\nmuch less reliable) than relying on schema-only dump and restore\nfollowed by moving the physical data.\n\nUpdates that change the on-disk representation of user data are much\nharder, as you say. 
But I think they can be made pretty infrequent.\nWe've only had two such updates that I know of in Postgres' history:\nadding WAL in 7.1 forced some additions to page headers, and now in\n7.3 we've changed tuple headers for space-saving reasons, and fixed\nsome problems with alignment in array data.\n\npg_upgrade could have worked for the 7.2 cycle, but it wasn't done,\nmostly for lack of effort.\n\nGoing forward I think we should try to maintain compatibility of on-disk\nuser data and ensure that pg_upgrade works.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 28 Sep 2002 16:57:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Upgrade process (was Re: 7.2.3?)" }, { "msg_contents": "On Saturday 28 September 2002 04:14 pm, Tom Lane wrote:\n> 7.2 is in maintenance status at this point. I'm willing to do backports\n> for bugs that cause data loss, like this VACUUM/CLOG issue.\n> Performance problems are not on the radar screen at all (especially\n> not when the putative fixes for them haven't received much of any\n> testing, and are barely worthy to be called beta status).\n\nA fix that is beta-quality for a non-serious issue (serious issues being of \nthe level of the VACUUM/CLOG issue) is, in my mind at least, not for \ninclusion into a _stable_ release. Simple fixes (the localtime versus mktime \nfix) might be doable, but might not depending upon the particular fix, how \ndifficult the packport, etc. But 7.2 is considered _stable_ -- and I agree \nthat this means maintenance mode only. Only the most trivial or the most \nserious problems should be tackled here.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sat, 28 Sep 2002 17:00:31 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.2.3?" }, { "msg_contents": "On Saturday 28 September 2002 04:57 pm, Tom Lane wrote:\n> 7.3 we've changed tuple headers for space-saving reasons, and fixed\n> some problems with alignment in array data.\n\n> Going forward I think we should try to maintain compatibility of on-disk\n> user data and ensure that pg_upgrade works.\n\nThis is of course a two-edged sword.\n\n1.)\tKeeping pg_upgrade working, which depends upon pg_dump working;\n2.)\tMaintaining security fixes for 7.2 for a good period of time to come, \nsince migration from 7.2 to >7.2 isn't easy.\n\nIf pg_upgrade is going to be the cookie, then let's all try to test the \ncookie. I'll certainly try to do my part.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sat, 28 Sep 2002 17:05:58 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade process (was Re: 7.2.3?)" }, { "msg_contents": "Lamar Owen <[email protected]> writes:\n> This is of course a two-edged sword.\n\n> 1.)\tKeeping pg_upgrade working, which depends upon pg_dump working;\n\n... which we have to have anyway, of course ...\n\n> 2.)\tMaintaining security fixes for 7.2 for a good period of time to come, \n> since migration from 7.2 to >7.2 isn't easy.\n\nTrue, but I think we'll have to deal with that anyway. Even if the\nphysical database upgrade were trivial, people are going to find\napplication compatibility problems due to schemas and other 7.3 changes.\nSo we're going to have to expend at least some work on fixing critical\n7.2.* problems. 
(I just want to keep a tight rein on how much.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 28 Sep 2002 17:22:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade process (was Re: 7.2.3?) " }, { "msg_contents": "Alvaro Herrera wrote:\n<snip>\n> I agree with Lamar that upgrading is a very difficult process right now.\n> Requiring huge amounts of disk space and database downtime to do\n> dump/restore is in some cases too high a price to pay. So maybe the\n> upgrading process should be observed instead of wasting time on people\n> trying to stay behind because of the price of that process.\n\nAs a \"simple for the user approach\", would it be\ntoo-difficult-to-bother-with to add to the postmaster an ability to\nstart up with the data files from the previous version, for it to\nrecognise an old data format automatically, then for it to do the\nconversion process of the old data format to the new one before going\nany further?\n\nSounds like a pain to create initially, but nifty in the end.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\n<snip>\n> --\n> Alvaro Herrera (<alvherre[a]atentus.com>)\n> \"La fuerza no est� en los medios f�sicos\n> sino que reside en una voluntad indomable\" (Gandhi)\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Sun, 29 Sep 2002 07:30:35 +1000", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.2.3?" }, { "msg_contents": "\nTom Lane wrote:\n\n[ discussion of new startup interlock ]\n\n> This is not quite ready for prime time yet, because it's not very\n> bulletproof against the scenario where two would-be postmasters are\n> starting concurrently.\n\nA solution to this is to require would-be postmasters to obtain an\nexclusive lock on a lock file before touching the pid file. (The lock\nfile perhaps could be the pid file, but it doesn't have to be.)\n\nIs there some reason that file locking is not acceptable? Is there\nany platform or filesystem supported for use with PostgreSQL which\ndoesn't have working exclusive file locking?\n\n> A possible answer is to create a second lockfile that only exists\n> for the duration of the startup sequence, and use that to ensure\n> that only one process is trying this sequence at a time.\n> ...\n> This reintroduces the same problem\n> we're trying to get away from (must rely on kill(PID, 0) to determine\n> validity of the lock file), but at least the window of vulnerability is\n> much smaller than before.\n\nA lock file locked for the whole time the postmaster is running can be\nresponsible for preventing multiple postmasters running without\nrelying on pids. All that is needed is that the OS drop exclusive\nfile locks on process exit and that locks not survive across reboots.\n\nThe checks of the shared memory segment (number of attachements etc)\nlook after orphaned back end processes, per the proposal.\n\nRegards,\n\nGiles\n", "msg_date": "Sun, 29 Sep 2002 08:37:20 +1000", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving backend startup interlock " }, { "msg_contents": "\nTom lane wrote:\n\n> True, but I think we'll have to deal with that anyway. 
Even if the\n> physical database upgrade were trivial, people are going to find\n> application compatibility problems due to schemas and other 7.3 changes.\n\nMore reasons:\n\na) learning curve -- I want to use 7.3 and gain some experience with\n 7.2.x -> 7.3 migration before rolling out 7.3 to my users.\n\nb) change control and configuration freezes sometimes dictate when\n upgrades may be done. A 7.2.2 -> 7.2.3 upgrade for bug fixes is\n much less intrusive than an upgrade to 7.3.\n\n> So we're going to have to expend at least some work on fixing critical\n> 7.2.* problems. (I just want to keep a tight rein on how much.)\n\nNo argument here. Supporting multiple versions eats resources and\neventually destabilises the earlier releases, so critial fixes only,\nplease. New features and non-critical fixes however minor are\nactually unhelpful.\n\nSince PostgreSQL is open source, anyone who \"just has\" to have some\nminor new feature back ported can do it, or pay for it to be done.\nBut this doesn't have to effect all users.\n\nRegards,\n\nGiles\n", "msg_date": "Sun, 29 Sep 2002 08:58:29 +1000", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade process (was Re: 7.2.3?) " }, { "msg_contents": "Justin Clift wrote:\n> Alvaro Herrera wrote:\n> <snip>\n> > I agree with Lamar that upgrading is a very difficult process right now.\n> > Requiring huge amounts of disk space and database downtime to do\n> > dump/restore is in some cases too high a price to pay. So maybe the\n> > upgrading process should be observed instead of wasting time on people\n> > trying to stay behind because of the price of that process.\n> \n> As a \"simple for the user approach\", would it be\n> too-difficult-to-bother-with to add to the postmaster an ability to\n> start up with the data files from the previous version, for it to\n> recognise an old data format automatically, then for it to do the\n> conversion process of the old data format to the new one before going\n> any further?\n> \n> Sounds like a pain to create initially, but nifty in the end.\n\nYes, we could, but if we are going to do that, we may as well just\nautomate the dump/reload.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 28 Sep 2002 21:23:31 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.2.3?" }, { "msg_contents": "Giles Lean <[email protected]> writes:\n> Is there some reason that file locking is not acceptable? Is there\n> any platform or filesystem supported for use with PostgreSQL which\n> doesn't have working exclusive file locking?\n\nHow would we know? We have never tried to use such a feature.\n\nFor sure I would not trust it on an NFS filesystem. 
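For reference, the kind of non-blocking exclusive lock Giles is
suggesting might be taken with fcntl() roughly as below; an untested
sketch, with the function name and error handling purely illustrative.

#include <fcntl.h>
#include <unistd.h>
#include <string.h>

/*
 * Try to take an exclusive advisory lock on the lock file without
 * blocking.  On success the descriptor is returned and must stay open
 * for the postmaster's lifetime, since closing it drops the lock.
 * Returns -1 if another live process already holds the lock.
 */
static int
acquire_data_dir_lock(const char *lockFilePath)
{
    int         fd;
    struct flock lk;

    fd = open(lockFilePath, O_RDWR | O_CREAT, 0600);
    if (fd < 0)
        return -1;

    memset(&lk, 0, sizeof(lk));
    lk.l_type = F_WRLCK;        /* exclusive */
    lk.l_whence = SEEK_SET;
    lk.l_start = 0;
    lk.l_len = 0;               /* lock the whole file */

    if (fcntl(fd, F_SETLK, &lk) < 0)
    {
        close(fd);              /* somebody else (an old postmaster?) holds it */
        return -1;
    }
    return fd;
}

Such locks are per-process and are not inherited across fork(), so the
lock goes away with the postmaster itself, and orphaned backends would
still have to be caught by the shared-memory check.  And over NFS the
lock is only as good as the remote lock daemon.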
(Although we\ndisparage running an NFS-mounted database, people do it anyway.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 28 Sep 2002 21:47:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving backend startup interlock " }, { "msg_contents": "On Saturday 28 September 2002 09:23 pm, Bruce Momjian wrote:\n> Justin Clift wrote:\n> > Alvaro Herrera wrote:\n> > > I agree with Lamar that upgrading is a very difficult process right\n\n> > As a \"simple for the user approach\", would it be\n> > too-difficult-to-bother-with to add to the postmaster an ability to\n> > start up with the data files from the previous version, for it to\n> > recognise an old data format automatically, then for it to do the\n> > conversion process of the old data format to the new one before going\n> > any further?\n\n> > Sounds like a pain to create initially, but nifty in the end.\n\n> Yes, we could, but if we are going to do that, we may as well just\n> automate the dump/reload.\n\nAutomating the dump/reload is fraught with pitfalls. Been there; done that; \ngot the t-shirt. The dump from the old version many times requires \nhand-editing for cases where the complexity is above a certain threshold. \nThe 7.2->7.3 threshold is just a little lower than normal. \n\nOur whole approach to the system catalog is wrong for what Justin (and many \nothers would like to see).\n\nWith MySQL, for instance, one can migrate on a table-by-table basis from one \ntable type to another. As older table types are continuously supported, one \ncan upgrade each table in turn as you need the featureset supported by that \ntabletype.\n\nYes, I know that doesn't fit our existing model of 'all in one' system \ncatalogs. And the solution doesn't present itself readily -- but one day \nsomeone will see the way to do this, and it will be good. It _will_ involve \nrefactoring the system catalog schema so that user 'system catalog' metadata \nand system 'system catalog' data aren't codependent. A more modular data \nstorage approach at a level above the existing broken storage manager \nmodularity will result, and things will be different.\n\nHowever, the number of messages on this subject has increased; one day it will \nbecome an important feature worthy of core developer attention. That will be \na happy day for me, as well as many others. I have not the time to do it \nmyself; but I can be a gadfly, at least. In the meantime we have pg_upgrade \nfor the future 7.3 -> 7.4 upgrade.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sat, 28 Sep 2002 22:19:29 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.2.3?" }, { "msg_contents": "\nTom Lane wrote:\n\n> Giles Lean <[email protected]> writes:\n> > Is there some reason that file locking is not acceptable? Is there\n> > any platform or filesystem supported for use with PostgreSQL which\n> > doesn't have working exclusive file locking?\n> \n> How would we know? We have never tried to use such a feature.\n\nI asked because I've not been following this project long enough to\nknow if it had been tried and rejected previously. Newcomers being\nprone to making silly suggestions and all that. :-)\n\n> For sure I would not trust it on an NFS filesystem. 
(Although we\n> disparage running an NFS-mounted database, people do it anyway.)\n\n<scratches head>\n\nI can't work out if that's an objection or not.\n\nI'm certainly no fan of NFS locking, but if someone trusts their NFS\nclient and server implementations enough to put their data on, they\nmight as well trust it to get a single lock file for startup right\ntoo. IMHO. Your mileage may vary.\n\nRegards,\n\nGiles\n", "msg_date": "Sun, 29 Sep 2002 12:58:53 +1000", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving backend startup interlock " }, { "msg_contents": "Giles Lean <[email protected]> writes:\n> I'm certainly no fan of NFS locking, but if someone trusts their NFS\n> client and server implementations enough to put their data on, they\n> might as well trust it to get a single lock file for startup right\n> too. IMHO. Your mileage may vary.\n\nWell, my local man page for lockf() sez\n\n The advisory record-locking capabilities of lockf() are implemented\n throughout the network by the ``network lock daemon'' (see lockd(1M)).\n If the file server crashes and is rebooted, the lock daemon attempts\n to recover all locks associated with the crashed server. If a lock\n cannot be reclaimed, the process that held the lock is issued a\n SIGLOST signal.\n\nand the lockd man page mentions that not only lockd but statd have to be\nrunning locally *and* at the NFS server.\n\nThis sure sounds like file locking on NFS introduces additional\nfailure modes above and beyond what we have already.\n\nSince the entire point of this locking exercise is to improve PG's\nrobustness, solutions that depend on other daemons not crashing\ndon't sound like a step forward to me. I'm willing to trust the local\nkernel, but I get antsy if I have to trust more than that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 28 Sep 2002 23:06:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving backend startup interlock " }, { "msg_contents": "Bruce Momjian dijo: \n\n> Justin Clift wrote:\n> > Alvaro Herrera wrote:\n\n> > As a \"simple for the user approach\", would it be\n> > too-difficult-to-bother-with to add to the postmaster an ability to\n> > start up with the data files from the previous version, for it to\n> > recognise an old data format automatically, then for it to do the\n> > conversion process of the old data format to the new one before going\n> > any further?\n> \n> Yes, we could, but if we are going to do that, we may as well just\n> automate the dump/reload.\n\nI don't think that's an acceptable solution. It requires too much free\ndisk space and too much time. On-line upgrading, meaning altering the\ndatabases on a table-by-table basis (or even page-by-page) solves both\nproblems (binary conversion sure takes less than converting to text\nrepresentation and parsing it to binary again).\n\nI think a converting postmaster would be a waste, because it's unneeded\nfunctionality 99.999% of the time. I'm leaning towards an external\nprogram doing the conversion, and the backend just aborting if it finds\nold or in-conversion data. The converter should be able to detect that\nit has aborted and resume conversion.\n\nWhat would that converter need:\n- the old system catalog (including user defined data)\n- the new system catalog (ditto, including the schema)\n- the storage manager subsystem\n\nI think that should be enough for converting table files. 
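To make that concrete, the outer loop of such a converter could be as
simple as the sketch below.  Everything release-specific (how to
recognise an old page and how to rewrite it) is hidden behind two
callbacks, because those details are exactly what changes between
versions; names and structure here are illustrative only.

#include <stdio.h>

#define BLCKSZ 8192             /* assumed default page size */

typedef int (*page_is_old_fn) (const char *page);  /* nonzero if page needs converting */
typedef int (*page_convert_fn) (char *page);       /* rewrite page in place, 0 on success */

/*
 * Walk one heap segment file page by page, rewriting only the pages
 * that are still in the old format.  Segments are at most 1 GB, so a
 * long block number is enough.
 */
static int
convert_segment(const char *path,
                page_is_old_fn page_is_old,
                page_convert_fn convert)
{
    FILE       *fp = fopen(path, "r+b");
    char        page[BLCKSZ];
    long        blkno = 0;

    if (fp == NULL)
        return -1;

    while (fread(page, 1, BLCKSZ, fp) == BLCKSZ)
    {
        if (page_is_old(page))
        {
            if (convert(page) != 0)
            {
                fclose(fp);
                return -1;
            }
            /*
             * Write the converted page back, then reposition; a seek is
             * required between a write and the next read on an update
             * stream.
             */
            if (fseek(fp, blkno * (long) BLCKSZ, SEEK_SET) != 0 ||
                fwrite(page, 1, BLCKSZ, fp) != BLCKSZ ||
                fseek(fp, (blkno + 1) * (long) BLCKSZ, SEEK_SET) != 0)
            {
                fclose(fp);
                return -1;
            }
        }
        blkno++;
    }
    fclose(fp);
    return 0;
}

Because only pages still in the old format are touched, an interrupted
run can simply be restarted, which is what lets the converter detect
that it has aborted and resume the conversion.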
I'd like to\nexperiment with something like this when I have some free time. Maybe\nnext year...\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"I think my standards have lowered enough that now I think 'good design'\nis when the page doesn't irritate the living fuck out of me.\" (JWZ)\n\n", "msg_date": "Sun, 29 Sep 2002 00:31:38 -0400 (CLT)", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.2.3?" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> What would that converter need:\n> [snip]\n> I think that should be enough for converting table files. I'd like to\n> experiment with something like this when I have some free time. Maybe\n> next year...\n\nIt's difficult to say anything convincing on this topic without a\nspecific conversion requirement in mind.\n\nLocalized conversions like 7.3's tuple header change could be done on a\npage-by-page basis as you suggest. (In fact, one reason I insisted on\nputting in a page header version number was to leave the door open for\nsuch a converter, if someone wants to do one.)\n\nBut one likely future format change for user data is combining parent\nand child tables into a single physical table, per recent inheritance\nthread. (I'm not yet convinced that that's feasible or desirable,\nI'm just using it as an example of a possible conversion requirement.)\nYou can't very well do that page-by-page; it'd require a completely\ndifferent approach.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 29 Sep 2002 00:47:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.2.3? " }, { "msg_contents": "On Sun, 2002-09-29 at 07:19, Lamar Owen wrote:\n> On Saturday 28 September 2002 09:23 pm, Bruce Momjian wrote:\n> > Justin Clift wrote:\n> > > Alvaro Herrera wrote:\n> > > > I agree with Lamar that upgrading is a very difficult process right\n> \n> > > As a \"simple for the user approach\", would it be\n> > > too-difficult-to-bother-with to add to the postmaster an ability to\n> > > start up with the data files from the previous version, for it to\n> > > recognise an old data format automatically, then for it to do the\n> > > conversion process of the old data format to the new one before going\n> > > any further?\n> \n> > > Sounds like a pain to create initially, but nifty in the end.\n> \n> > Yes, we could, but if we are going to do that, we may as well just\n> > automate the dump/reload.\n> \n> Automating the dump/reload is fraught with pitfalls. Been there; done that; \n> got the t-shirt. The dump from the old version many times requires \n> hand-editing for cases where the complexity is above a certain threshold. \n> The 7.2->7.3 threshold is just a little lower than normal. \n> \n> Our whole approach to the system catalog is wrong for what Justin (and many \n> others would like to see).\n> \n> With MySQL, for instance, one can migrate on a table-by-table basis from one \n> table type to another. 
As older table types are continuously supported, one \n> can upgrade each table in turn as you need the featureset supported by that \n> tabletype.\n\nThe initial Postgres design had a notion of StorageManager's, which\nshould make this very easy indeed, if it had been kept working .\n\nIIRC the black box nature of storage manager interface was broken at\nlatest when adding WAL (if it had really been there in the first place).\n\n----------------------\nHannu\n\n\n", "msg_date": "29 Sep 2002 12:50:47 +0500", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.2.3?" }, { "msg_contents": "On Sun, 2002-09-29 at 09:47, Tom Lane wrote:\n> Alvaro Herrera <[email protected]> writes:\n> > What would that converter need:\n> > [snip]\n> > I think that should be enough for converting table files. I'd like to\n> > experiment with something like this when I have some free time. Maybe\n> > next year...\n> \n> It's difficult to say anything convincing on this topic without a\n> specific conversion requirement in mind.\n> \n> Localized conversions like 7.3's tuple header change could be done on a\n> page-by-page basis as you suggest. (In fact, one reason I insisted on\n> putting in a page header version number was to leave the door open for\n> such a converter, if someone wants to do one.)\n> \n> But one likely future format change for user data is combining parent\n> and child tables into a single physical table, per recent inheritance\n> thread. (I'm not yet convinced that that's feasible or desirable,\n> I'm just using it as an example of a possible conversion requirement.)\n> You can't very well do that page-by-page; it'd require a completely\n> different approach.\n\nI started to think about possible upgrade strategy for this scenario and\ncame up with a whole new way for the whole storage :\n\nWe could extend our current way of 1G split files for inheritance, so\nthat each inherited table is in its own (set of) physical files which\nrepresent a (set of) 1G segment(s) for the logical file definition of\nall parent. This would even work for both single and multiple\ninheritance !\n\nIn this case the indexes (which enforce the uniquenaess and are required\nfor RI) would see the thing as a single file and can use plain TIDs. The\nprocess of mapping from TID.PAGENR to actual file will happen below the\nlevel visible to executor. It would also naturally cluster similar\ntuples.\n\nAa an extra bonus migration can be done only by changing system catalogs\nand recreating indexes.\n\nIt will limit the size of inherited structure to at most 16K different\ntables (max unsigned int/pagesize), but I don't think this will be a\nreal limit anytime soon.\n\n---------------------\nHannu\n\n\n", "msg_date": "29 Sep 2002 13:08:35 +0500", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.2.3?" }, { "msg_contents": "Hannu Krosing <[email protected]> writes:\n> The initial Postgres design had a notion of StorageManager's, which\n> should make this very easy indeed, if it had been kept working .\n\nBut the storage manager interface was never built to hide issues like\ntuple representation --- storage managers just deal in raw pages.\nI doubt it would have helped in the least for anything we've been\nconcerned about.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 29 Sep 2002 10:28:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.2.3? 
" }, { "msg_contents": "On Sun, 2002-09-29 at 19:28, Tom Lane wrote:\n> Hannu Krosing <[email protected]> writes:\n> > The initial Postgres design had a notion of StorageManager's, which\n> > should make this very easy indeed, if it had been kept working .\n> \n> But the storage manager interface was never built to hide issues like\n> tuple representation --- storage managers just deal in raw pages.\n\nI had an impression that SM was meant to be a little higher-level. IIRC\nthe original Berkeley Postgres had at one point a storage manager for\nwrite-once storage on CDWr jukeboxes.\n\nthe README in src/backend/storage/smgr still contains mentions about\nSony jukebox drivers.\n\nhttp://www.ndim.edrc.cmu.edu/postgres95/www/pglite1.html also claims\nthis:\n\nVersion 3 appeared in 1991 and added support for multiple storage\nmanagers, an improved query executor and a rewritten rewrite rule\nsystem. For the most part, releases since then have focused on\nportability and reliability. \n\n> I doubt it would have helped in the least for anything we've been\n> concerned about.\n\nYes, it seems that we do not have a SM in the semse I hoped.\n\nStill, if we could use a clean SM interface over old page format, then\nthe tuple conversion could be done there.\n\nThat of course would need the storage manager to be aware of old/new\ntuple structures ;(\n\n-----------------\nHannu\n\n\n\n", "msg_date": "29 Sep 2002 22:05:55 +0500", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.2.3?" }, { "msg_contents": "Should an advisory be issued for production sites to not perform a\nvacuum full with a notice that a bug fix will be coming shortly?\n\nGreg\n\n\n\nOn Sat, 2002-09-28 at 13:45, Justin Clift wrote:\n> Bruce Momjian wrote:\n> > \n> > I have seen no discussion on whether to go ahead with a 7.2.3 to add\n> > several serious fixes Tom has made to the code in the past few days.\n> \n> This will allow production sites to run the 7.2 series and also do\n> VACUUM FULL won't it?\n> \n> If so, then the idea is already pretty good. :-)\n> \n> Which other fixes would be included?\n> \n> Regards and best wishes,\n> \n> Justin Clift\n> \n> \n> > Are we too close to 7.3 for this to be worthwhile? Certainly there will\n> > be people distributing 7.2.X for some time as 7.3 stabilizes.\n> > \n> > --\n> > Bruce Momjian | http://candle.pha.pa.us\n> > [email protected] | (610) 359-1001\n> > + If your life is a hard drive, | 13 Roberts Road\n> > + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> > \n> > http://www.postgresql.org/users-lounge/docs/faq.html\n> \n> -- \n> \"My grandfather once told me that there are two kinds of people: those\n> who work and those who take the credit. He told me to try to be in the\n> first group; there was less competition there.\"\n> - Indira Gandhi\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]", "msg_date": "30 Sep 2002 10:39:28 -0500", "msg_from": "Greg Copeland <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.2.3?" }, { "msg_contents": "Greg Copeland <[email protected]> writes:\n> Should an advisory be issued for production sites to not perform a\n> vacuum full with a notice that a bug fix will be coming shortly?\n\nPeople seem to be misunderstanding the bug. 
Whether your vacuum is FULL\nor not (or VERBOSE or not, or ANALYZE or not) is not relevant. The\ndangerous thing is to execute a VACUUM that's not a single-table VACUUM\n*as a non-superuser*. The options don't matter. If you see any notices\nabout \"skipping tables\" out of VACUUM, then you are at risk.\n\nI'm not averse to issuing an announcement, but let's be sure we have\nthe details straight.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 30 Sep 2002 14:11:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.2.3? " }, { "msg_contents": "Hi everyone,\n\nHave just put together a prototype page to show off the multi-lingual\ncapabilities that the Advocacy sites' infrastructure has:\n\nhttp://advocacy.postgresql.org/?lang=de\n\nThe text was translated to german via Altavista's Babelfish, so it's\nprobably only about 80% accurate, but it conveys the concept.\n\nIs anyone interested in translating the English version to other\nlanguages? All Latin based languages should be fine (German, French,\nItalian, Spanish, Portuguese, Turkish, Greek, etc).\n\nIf there's strong interest, then an interface to let volunteers\ntranslators do it easily can be constructed over the next fortnight or\nso.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Thu, 03 Oct 2002 08:53:07 +1000", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Anyone want to assist with the translation of the Advocacy site?" }, { "msg_contents": "Justin Clift <[email protected]> wrote:\n\n> Hi everyone,\n>\n> Have just put together a prototype page to show off the multi-lingual\n> capabilities that the Advocacy sites' infrastructure has:\n>\n> http://advocacy.postgresql.org/?lang=de\n>\n> The text was translated to german via Altavista's Babelfish, so it's\n> probably only about 80% accurate, but it conveys the concept.\n>\n> Is anyone interested in translating the English version to other\n> languages? All Latin based languages should be fine (German, French,\n> Italian, Spanish, Portuguese, Turkish, Greek, etc).\n>\n> If there's strong interest, then an interface to let volunteers\n> translators do it easily can be constructed over the next fortnight or\n> so.\n\nHi Justin,\n\nI am from Austria, and I would like to help. I could provide a German\ntranslation. The Babelfish's translation is really funny. Machine\ntranslation is readable, but it is no advocacy. ;-) I do not really nead an\ninterface, but just tell me in what way you want the texts.\n\nBest Regards,\nMichael Paesold\n\n", "msg_date": "Thu, 3 Oct 2002 01:21:00 +0200", "msg_from": "\"Michael Paesold\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Anyone want to assist with the translation of the\n\tAdvocacy site?" }, { "msg_contents": "Hi Michael,\n\nMichael Paesold wrote:\n<snip> \n> Hi Justin,\n> \n> I am from Austria, and I would like to help. I could provide a German\n> translation. The Babelfish's translation is really funny. Machine\n> translation is readable, but it is no advocacy. ;-) I do not really nead an\n> interface, but just tell me in what way you want the texts.\n\nCool. 
Could you deal with an OpenOffice Calc or M$ Excel file having\nthe lines of English text in one column, and doing the German\ntranslation into a second column?\n\nThat might be easiest, and will allow a cut-n-paste of the German\nversion straight into the database backend.\n\nSound workable to you?\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n> Best Regards,\n> Michael Paesold\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Thu, 03 Oct 2002 09:23:28 +1000", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Anyone want to assist with the translation of the\n\tAdvocacy" }, { "msg_contents": "Justin Clift <[email protected]> wrote:\n\n\n> Hi Michael,\n>\n> Michael Paesold wrote:\n> <snip>\n> > Hi Justin,\n> >\n> > I am from Austria, and I would like to help. I could provide a German\n> > translation. The Babelfish's translation is really funny. Machine\n> > translation is readable, but it is no advocacy. ;-) I do not really nead\nan\n> > interface, but just tell me in what way you want the texts.\n>\n> Cool. Could you deal with an OpenOffice Calc or M$ Excel file having\n> the lines of English text in one column, and doing the German\n> translation into a second column?\n>\n> That might be easiest, and will allow a cut-n-paste of the German\n> version straight into the database backend.\n>\n> Sound workable to you?\n\nSpreadsheet sounds great. I use M$.\nPerhaps you can group the items in categories, at least navigation and text.\nSo I know where the text will be put on the website. The translation could\nbe different depending on how a word is used. E.g. it is quite common on\nGerman websites to use the same English word \"Home\" for the main page; but\nyou would not use \"Home\" in a different context. The exceptable length of a\ntranslation depends on the context, too.\n\nBest Regards,\nMichael Paesold\n\n\n", "msg_date": "Thu, 3 Oct 2002 01:36:30 +0200", "msg_from": "\"Michael Paesold\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Anyone want to assist with the translation of the\n\tAdvocacy site?" }, { "msg_contents": "The lock on the relation owning the rule wasn't unlocked.\n\nOn Wed, 2002-10-02 at 19:36, Michael Paesold wrote:\n> Justin Clift <[email protected]> wrote:\n> \n> \n> > Hi Michael,\n> >\n> > Michael Paesold wrote:\n> > <snip>\n> > > Hi Justin,\n> > >\n> > > I am from Austria, and I would like to help. I could provide a German\n> > > translation. The Babelfish's translation is really funny. Machine\n> > > translation is readable, but it is no advocacy. ;-) I do not really nead\n> an\n> > > interface, but just tell me in what way you want the texts.\n> >\n> > Cool. Could you deal with an OpenOffice Calc or M$ Excel file having\n> > the lines of English text in one column, and doing the German\n> > translation into a second column?\n> >\n> > That might be easiest, and will allow a cut-n-paste of the German\n> > version straight into the database backend.\n> >\n> > Sound workable to you?\n> \n> Spreadsheet sounds great. I use M$.\n> Perhaps you can group the items in categories, at least navigation and text.\n> So I know where the text will be put on the website. The translation could\n> be different depending on how a word is used. E.g. 
it is quite common on\n> German websites to use the same English word \"Home\" for the main page; but\n> you would not use \"Home\" in a different context. The exceptable length of a\n> translation depends on the context, too.\n> \n> Best Regards,\n> Michael Paesold\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n-- \n Rod Taylor", "msg_date": "02 Oct 2002 20:14:00 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Anyone want to assist with the translation of the" }, { "msg_contents": "Justin wrote:\n> Hi Michael,\n> Michael Paesold wrote:\n> <snip> \n> > Hi Justin,\n> > I am from Austria, and I would like to help. I could provide a German\n> > translation. The Babelfish's translation is really funny. Machine\n> > translation is readable, but it is no advocacy. ;-) I do not really nead an\n> > interface, but just tell me in what way you want the texts.\n> \n> Cool. Could you deal with an OpenOffice Calc or M$ Excel file having\n> the lines of English text in one column, and doing the German\n> translation into a second column?\n\nIsn't this, um, the sort of thing you might want to put into, um, a, um, \ndatabase?\n--\n(concatenate 'string \"aa454\" \"@freenet.carleton.ca\")\nhttp://cbbrowne.com/info/internet.html\n\"you can obvioulsy understand what i'm saying. you're just being\npendantic.\" -- [email protected]\n\n\n", "msg_date": "Wed, 02 Oct 2002 20:58:25 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Anyone want to assist with the translation of the " }, { "msg_contents": "[email protected] wrote:\n<snip>\n> > Cool. Could you deal with an OpenOffice Calc or M$ Excel file having\n> > the lines of English text in one column, and doing the German\n> > translation into a second column?\n> \n> Isn't this, um, the sort of thing you might want to put into, um, a, um,\n> database?\n\nSure is. Are there any good options apart from?\n\na) Build an interface for people to translate through\nb) Allow selected people to connect directly to the database\n\nFor the present b) is not an option as I don't have the needed access to\nthe postgresql.org database server to be able to adjust the pg_hba.conf\nfile, and a) Would take some decent time and effort to get up and\nrunning. A lot longer than cut and pasting into an Excel document then\nback out again. :-/\n\nRegards and best wishes,\n\nJustin Clift\n\n\n> --\n> (concatenate 'string \"aa454\" \"@freenet.carleton.ca\")\n> http://cbbrowne.com/info/internet.html\n> \"you can obvioulsy understand what i'm saying. you're just being\n> pendantic.\" -- [email protected]\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. 
He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Thu, 03 Oct 2002 11:09:18 +1000", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone want to assist with the translation of the Advocacy" }, { "msg_contents": "Rod Taylor <[email protected]> writes:\n> \t/* Call CreateComments() to create/drop the comments */\n> \tCreateComments(ruleoid, classoid, 0, comment);\n> +\n> + \theap_close(relation, AccessShareLock);\n> }\n> \n> /*\n\nOoops.\n\nI think though that this should read\n\n+\theap_close(relation, NoLock);\n\nIn general, we hold locks on user relations we are modifying until end\nof transaction. This is different from the rule for system catalogs\n(eg, it's okay to drop the AccessShareLock on pg_rewrite a few lines\nabove this). The reason for the distinction is that we want to be\nsure that the user relation won't get DROPped by someone else before\nwe've committed our changes. (If someone else did try to drop it in\nthat interval, they'd not delete the pg_description row we just added,\nbecause they couldn't see it.) On the other hand, system catalogs such\nas pg_rewrite are not going to go away, by definition, and so it's okay\nto drop their locks early. The only reason we lock system catalogs at\nall is to allow VACUUM FULL to nail down exclusive access to a catalog\nwhile it vacuums it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 02 Oct 2002 23:32:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Anyone want to assist with the translation of the " }, { "msg_contents": "> Cool. Could you deal with an OpenOffice Calc or M$ Excel file having\n> the lines of English text in one column, and doing the German\n> translation into a second column?\n> \n> That might be easiest, and will allow a cut-n-paste of the German\n> version straight into the database backend.\n\nHi Justin,\n\nI would gladly do the translation in French. I don't have too much time\nbeing both a freelancer and a young daddy, but there isn't that much content\nso I should be able to manage.\n\nI don't have OpenOffice, so an Excel worksheet would do fine.\n\nCheers.\n\n--------\nFrançois\n\nHome page: http://www.monpetitcoin.com/\n\"A fox is a wolf who sends flowers\"\n\n", "msg_date": "Thu, 03 Oct 2002 09:42:38 +0200", "msg_from": "Francois Suter <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone want to assist with the translation of the Advocacy " }, { "msg_contents": ">\n>\n>Hi everyone,\n>\n>Have just put together a prototype page to show off the multi-lingual\n>capabilities that the Advocacy sites' infrastructure has:\n>\n>http://advocacy.postgresql.org/?lang=de\n>\n>The text was translated to german via Altavista's Babelfish, so it's\n>probably only about 80% accurate, but it conveys the concept.\n>\n>Is anyone interested in translating the English version to other\n>languages? 
All Latin based languages should be fine (German, French,\n>Italian, Spanish, Portuguese, Turkish, Greek, etc).\n>\n>If there's strong interest, then an interface to let volunteers\n>translators do it easily can be constructed over the next fortnight or\n>so.\n>\n>:-)\n>\n>Regards and best wishes,\n>\n>Justin Clift\n>\n> \n>\nJustin,\n\nI would be glad to translate it to brazilian portuguese.\nHere we have a lot of companies starting to use PostgreSQL, including \nthe one where I work and some of our clients.\nIt would be very nice to have this site translated.\n\nCheers,\n\n-- \nDiogo de Oliveira Biazus\[email protected]\nIkono Sistemas e Automa��o\nhttp://www.ikono.com.br\n\n\n", "msg_date": "Thu, 03 Oct 2002 09:35:19 -0300", "msg_from": "Diogo Biazus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone want to assist with the translation of the Advocacy" }, { "msg_contents": "> Is anyone interested in translating the English version to other\n> languages?\n\nI don't have time for the translation, unfortunately, but i would \nsuggest changing \"worlds\" to \"world's\" on the main page.\n\n-tfo\n", "msg_date": "Thu, 03 Oct 2002 10:48:31 -0500", "msg_from": "Thomas O'Connell <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone want to assist with the translation of the Advocacy site?" }, { "msg_contents": "Thomas O'Connell wrote:\n> \n> > Is anyone interested in translating the English version to other\n> > languages?\n> \n> I don't have time for the translation, unfortunately, but i would\n> suggest changing \"worlds\" to \"world's\" on the main page.\n\nUm, doesn't \"world's\" mean \"world is\" ?\n\nThat wouldn't make sense then though. ?\n\nRegards and best wishes,\n\nJustin Clift\n\n \n> -tfo\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Fri, 04 Oct 2002 01:51:31 +1000", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone want to assist with the translation of the Advocacy " }, { "msg_contents": "> Um, doesn't \"world's\" mean \"world is\" ?\n\nIn this situation, the \"'s\" denotes possession, as in \"the most advanced \nopen source database of the world\".\n\n\"worlds\" here is basically saying \"every world most advanced open source \ndatabase\" and does not, in any case, connote possession.\n\n-tfo\n\n", "msg_date": "Thu, 3 Oct 2002 10:54:57 -0500", "msg_from": "Thomas F.O'Connell <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone want to assist with the translation of the Advocacy site?" }, { "msg_contents": "\"Thomas F.O'Connell\" wrote:\n> \n> > Um, doesn't \"world's\" mean \"world is\" ?\n> \n> In this situation, the \"'s\" denotes possession, as in \"the most advanced\n> open source database of the world\".\n> \n> \"worlds\" here is basically saying \"every world most advanced open source\n> database\" and does not, in any case, connote possession.\n\nOk, updating it now. Thanks heaps Thomas.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n \n> -tfo\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. 
He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Fri, 04 Oct 2002 01:57:23 +1000", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone want to assist with the translation of the " }, { "msg_contents": "> Um, doesn't \"world's\" mean \"world is\" ?\n\ni forgot to provide a real-world example:\n\nhttp://www.amazon.com/\n\n\"Earth's Biggest Selection\"\n\n-tfo\n\n", "msg_date": "Thu, 3 Oct 2002 10:57:45 -0500", "msg_from": "Thomas F.O'Connell <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone want to assist with the translation of the Advocacy site?" }, { "msg_contents": "Hi Justin,\n\nyou want probably use the language-negotiation\nrather then a query variable :-)\n\nRegards\nTino\n\n--On Donnerstag, 3. Oktober 2002 08:53 +1000 Justin Clift \n<[email protected]> wrote:\n\n> Hi everyone,\n>\n> Have just put together a prototype page to show off the multi-lingual\n> capabilities that the Advocacy sites' infrastructure has:\n>\n> http://advocacy.postgresql.org/?lang=de\n>\n> The text was translated to german via Altavista's Babelfish, so it's\n> probably only about 80% accurate, but it conveys the concept.\n>\n> Is anyone interested in translating the English version to other\n> languages? All Latin based languages should be fine (German, French,\n> Italian, Spanish, Portuguese, Turkish, Greek, etc).\n>\n> If there's strong interest, then an interface to let volunteers\n> translators do it easily can be constructed over the next fortnight or\n> so.\n>\n> :-)\n>\n> Regards and best wishes,\n>\n> Justin Clift\n>\n> --\n> \"My grandfather once told me that there are two kinds of people: those\n> who work and those who take the credit. He told me to try to be in the\n> first group; there was less competition there.\"\n> - Indira Gandhi\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n\n", "msg_date": "Thu, 03 Oct 2002 19:05:33 +0200", "msg_from": "Tino Wildenhain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone want to assist with the translation of the" }, { "msg_contents": "Hi Tino,\n\nTino Wildenhain wrote:\n> \n> Hi Justin,\n> \n> you want probably use the language-negotiation\n> rather then a query variable :-)\n\nUm, language-negotiation in good in theory, but there are real world\nscenarios it doesn't take into account. :(\n\nHowever, the query variable is an override, and if one isn't present\nthen the backend is supposed to use other means to determine the\nappropriate language, including the browsers preferred language. It's\njust that the code to do this bit hasn't been written yet. :-)\n\nIf all else fails it falls back to a default language, English for this\nsite.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n \n> Regards\n> Tino\n<snip>\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Fri, 04 Oct 2002 03:06:53 +1000", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone want to assist with the translation of theAdvocacy " }, { "msg_contents": "Hi Justin,\n\n--On Donnerstag, 3. 
Oktober 2002 09:23 +1000 Justin Clift \n<[email protected]> wrote:\n\n> Hi Michael,\n>\n> Michael Paesold wrote:\n> <snip>\n>> Hi Justin,\n>>\n>> I am from Austria, and I would like to help. I could provide a German\n>> translation. The Babelfish's translation is really funny. Machine\n>> translation is readable, but it is no advocacy. ;-) I do not really nead\n>> an interface, but just tell me in what way you want the texts.\n>\n> Cool. Could you deal with an OpenOffice Calc or M$ Excel file having\n> the lines of English text in one column, and doing the German\n> translation into a second column?\n\n> That might be easiest, and will allow a cut-n-paste of the German\n> version straight into the database backend.\n\nHaha cut&paste ;-) Ever heard of csv? :-))\n\nHowever, I can also have a look at it, if desired.\n\nRegards\nTino\n\n> Sound workable to you?\n>\n> :-)\n>\n> Regards and best wishes,\n>\n> Justin Clift\n>\n>> Best Regards,\n>> Michael Paesold\n>\n> --\n> \"My grandfather once told me that there are two kinds of people: those\n> who work and those who take the credit. He told me to try to be in the\n> first group; there was less competition there.\"\n> - Indira Gandhi\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n\n\n", "msg_date": "Thu, 03 Oct 2002 19:14:13 +0200", "msg_from": "Tino Wildenhain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Anyone want to assist with the translation" }, { "msg_contents": "Tino Wildenhain wrote:\n<snip> \n> Haha cut&paste ;-) Ever heard of csv? :-))\n> \n> However, I can also have a look at it, if desired.\n\nHeh Heh Heh\n\nGood point. For the moment we've whipped up that MS Excel document\n(created in OpenOffice of course) of all the English text strings in the\nsite and emailed it to the volunteers. :)\n\nSo far community members have volunteered for German, Turkish, French,\nSpanish, Brazilian Portuguese, and Polish.\n\nCool. :)\n\nWant to co-ordinate with the other two German language volunteers?\n\nRegards and best wishes,\n\nJustin Clift\n\n\n> Regards\n> Tino\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Fri, 04 Oct 2002 03:28:39 +1000", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Anyone want to assist with the translationof the " }, { "msg_contents": "Hi Justin,\n\n> Good point. For the moment we've whipped up that MS Excel document\n> (created in OpenOffice of course) of all the English text strings in the\n> site and emailed it to the volunteers. :)\n\nBtw. did you ever unzip the native OpenOffice (aka StarOffice)\nfile?\n\n>\n> So far community members have volunteered for German, Turkish, French,\n> Spanish, Brazilian Portuguese, and Polish.\n>\n> Cool. :)\n>\n> Want to co-ordinate with the other two German language volunteers?\n\nSure. So I'm here :-)\n\nRegards\nTino\n", "msg_date": "Thu, 03 Oct 2002 21:34:22 +0200", "msg_from": "Tino Wildenhain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Anyone want to assist with the" }, { "msg_contents": "Tino Wildenhain <[email protected]> wrote:\n\n> Hi Justin,\n>\n> > Good point. 
For the moment we've whipped up that MS Excel document\n> > (created in OpenOffice of course) of all the English text strings in the\n> > site and emailed it to the volunteers. :)\n>\n> Btw. did you ever unzip the native OpenOffice (aka StarOffice)\n> file?\n>\n> >\n> > So far community members have volunteered for German, Turkish, French,\n> > Spanish, Brazilian Portuguese, and Polish.\n> >\n> > Cool. :)\n> >\n> > Want to co-ordinate with the other two German language volunteers?\n>\n> Sure. So I'm here :-)\n>\n> Regards\n> Tino\n\nYou should have already got at least two mails, haven't you?\n\nMichael\n\n", "msg_date": "Thu, 3 Oct 2002 21:54:39 +0200", "msg_from": "\"Michael Paesold\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Anyone want to assist with the translationof the\n\tAdvocacy" }, { "msg_contents": "Hi Michael,\n\nyeah, I got :-)\n\nI'm busy reviewing :-)\n\nRegards\nTino\n\n--On Donnerstag, 3. Oktober 2002 21:54 +0200 Michael Paesold \n<[email protected]> wrote:\n\n> Tino Wildenhain <[email protected]> wrote:\n>\n>> Hi Justin,\n>>\n>> > Good point. For the moment we've whipped up that MS Excel document\n>> > (created in OpenOffice of course) of all the English text strings in\n>> > the site and emailed it to the volunteers. :)\n>>\n>> Btw. did you ever unzip the native OpenOffice (aka StarOffice)\n>> file?\n>>\n>> >\n>> > So far community members have volunteered for German, Turkish, French,\n>> > Spanish, Brazilian Portuguese, and Polish.\n>> >\n>> > Cool. :)\n>> >\n>> > Want to co-ordinate with the other two German language volunteers?\n>>\n>> Sure. So I'm here :-)\n>>\n>> Regards\n>> Tino\n>\n> You should have already got at least two mails, haven't you?\n>\n> Michael\n>\n\n\n", "msg_date": "Thu, 03 Oct 2002 22:35:08 +0200", "msg_from": "Tino Wildenhain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Anyone want to assist with the" }, { "msg_contents": "\nHave people considered flock (advisory locking) on the postmaster.pid\nfile for backend detection? It has a nonblocking option. Don't most\nOS's support it?\n\nI can't understand why we can't get an easier solution to postmaster\ndetection than shared memory.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Giles Lean <[email protected]> writes:\n> > I'm certainly no fan of NFS locking, but if someone trusts their NFS\n> > client and server implementations enough to put their data on, they\n> > might as well trust it to get a single lock file for startup right\n> > too. IMHO. Your mileage may vary.\n> \n> Well, my local man page for lockf() sez\n> \n> The advisory record-locking capabilities of lockf() are implemented\n> throughout the network by the ``network lock daemon'' (see lockd(1M)).\n> If the file server crashes and is rebooted, the lock daemon attempts\n> to recover all locks associated with the crashed server. If a lock\n> cannot be reclaimed, the process that held the lock is issued a\n> SIGLOST signal.\n> \n> and the lockd man page mentions that not only lockd but statd have to be\n> running locally *and* at the NFS server.\n> \n> This sure sounds like file locking on NFS introduces additional\n> failure modes above and beyond what we have already.\n> \n> Since the entire point of this locking exercise is to improve PG's\n> robustness, solutions that depend on other daemons not crashing\n> don't sound like a step forward to me. 
I'm willing to trust the local\n> kernel, but I get antsy if I have to trust more than that.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 3 Oct 2002 21:09:41 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving backend startup interlock" }, { "msg_contents": "\nTom's change added to Rod's patch:\n\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nTom Lane wrote:\n> Rod Taylor <[email protected]> writes:\n> > \t/* Call CreateComments() to create/drop the comments */\n> > \tCreateComments(ruleoid, classoid, 0, comment);\n> > +\n> > + \theap_close(relation, AccessShareLock);\n> > }\n> > \n> > /*\n> \n> Ooops.\n> \n> I think though that this should read\n> \n> +\theap_close(relation, NoLock);\n> \n> In general, we hold locks on user relations we are modifying until end\n> of transaction. This is different from the rule for system catalogs\n> (eg, it's okay to drop the AccessShareLock on pg_rewrite a few lines\n> above this). The reason for the distinction is that we want to be\n> sure that the user relation won't get DROPped by someone else before\n> we've committed our changes. (If someone else did try to drop it in\n> that interval, they'd not delete the pg_description row we just added,\n> because they couldn't see it.) On the other hand, system catalogs such\n> as pg_rewrite are not going to go away, by definition, and so it's okay\n> to drop their locks early. The only reason we lock system catalogs at\n> all is to allow VACUUM FULL to nail down exclusive access to a catalog\n> while it vacuums it.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 3 Oct 2002 22:29:47 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Anyone want to assist with the translation of" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Have people considered flock (advisory locking) on the postmaster.pid\n> file for backend detection?\n\n$ man flock\nNo manual entry for flock.\n$\n\nHPUX has generally taken the position of adopting both BSD and SysV\nfeatures, so if it doesn't exist here, it's not portable to older\nUnixen ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 03 Oct 2002 22:45:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving backend startup interlock " }, { "msg_contents": "\nTom Lane writes:\n\n> $ man flock\n> No manual entry for flock.\n> $\n> \n> HPUX has generally taken the position of adopting both BSD and SysV\n> features, so if it doesn't exist here, it's not portable to older\n> Unixen ...\n\nIf only local locking is at issue then finding any one of fcntl()\nlocking, flock(), or lockf() would do. All Unixen will have one or\nmore of these and autoconf machinery exists to find them.\n\nThe issue Tom raised about NFS support remains: locking over NFS\nintroduces new failure modes. It also only works for NFS clients\nthat support NFS locking, which not all do.\n\nMind you NFS users are currently entirely unprotected from someone\nstarting a postmaster on a different NFS client using the same data\ndirectory right now, which file locking would prevent. So there is\nsome win for NFS users as well as local filesystem users. (Anyone\nusing NFS care to put their hand up? Maybe nobody does?)\n\nIs the benefit of better local filesystem behaviour plus multiple\nclient protection for NFS users who have file locking enough to\noutweigh the drawbacks? My two cents says it is, but my two cents are\nworth approximately USD$0.01, which is to say not very much ...\n\nRegards,\n\nGiles\n", "msg_date": "Sat, 05 Oct 2002 09:24:55 +1000", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving backend startup interlock " }, { "msg_contents": "Giles Lean <[email protected]> wrote:\n\n> Tom Lane writes:\n> \n> > $ man flock\n> > No manual entry for flock.\n> > $\n> > \n> > HPUX has generally taken the position of adopting both BSD and SysV\n> > features, so if it doesn't exist here, it's not portable to older\n> > Unixen ...\n> \n> If only local locking is at issue then finding any one of fcntl()\n> locking, flock(), or lockf() would do. All Unixen will have one or\n> more of these and autoconf machinery exists to find them.\n> \n> The issue Tom raised about NFS support remains: locking over NFS\n> introduces new failure modes. It also only works for NFS clients\n> that support NFS locking, which not all do.\n> \n> Mind you NFS users are currently entirely unprotected from someone\n> starting a postmaster on a different NFS client using the same data\n> directory right now, which file locking would prevent. So there is\n> some win for NFS users as well as local filesystem users. (Anyone\n> using NFS care to put their hand up? Maybe nobody does?)\n> \n> Is the benefit of better local filesystem behaviour plus multiple\n> client protection for NFS users who have file locking enough to\n> outweigh the drawbacks? 
My two cents says it is, but my two cents are\n> worth approximately USD$0.01, which is to say not very much ...\n\nWell, I am going to do some tests with postgresql and our netapp\nfiler later in October. If that setup proves to work fast and reliable\nI would also be interested in such a locking. I don't care about\nthe feature if I find the postgresql/NFS/netapp-filer setup to be\nunreliable or bad performing.\n\nI'll see.\n\nRegards,\nMichael Paesold\n\n", "msg_date": "Sat, 5 Oct 2002 01:42:08 +0200", "msg_from": "\"Michael Paesold\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving backend startup interlock " }, { "msg_contents": "Michael Paesold wrote:\n> Giles Lean <[email protected]> wrote:\n>>Mind you NFS users are currently entirely unprotected from someone\n>>starting a postmaster on a different NFS client using the same data\n>>directory right now, which file locking would prevent. So there is\n>>some win for NFS users as well as local filesystem users. (Anyone\n>>using NFS care to put their hand up? Maybe nobody does?)\n>>\n>>Is the benefit of better local filesystem behaviour plus multiple\n>>client protection for NFS users who have file locking enough to\n>>outweigh the drawbacks? My two cents says it is, but my two cents are\n>>worth approximately USD$0.01, which is to say not very much ...\n> \n> \n> Well, I am going to do some tests with postgresql and our netapp\n> filer later in October. If that setup proves to work fast and reliable\n> I would also be interested in such a locking. I don't care about\n> the feature if I find the postgresql/NFS/netapp-filer setup to be\n> unreliable or bad performing.\n> \n\nWe have multiple Oracle databases running over NFS from an HPUX server to a \nnetapp and have been pleased with the performance overall. It does require \nsome tuning to get it right, and it hasn't been entirely without issues, but I \ndon't see us going back to local storage. We also just recently set up a Linux \nbox running Oracle against an NFS mounted netapp. Soon I'll be adding Postgres \non the same machine, initially using locally attached storage, but at some \npoint I may need to shift to the netapp due to data volume.\n\nIf you do try Postgres on the netapp, please post your results/experience and \nI'll do the same.\n\nAnyway, I guess I qualify as interested in an NFS safe locking method.\n\nJoe\n\n", "msg_date": "Fri, 04 Oct 2002 18:14:58 -0700", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving backend startup interlock" }, { "msg_contents": "Hi Adrian,\n\nWow. That's pretty cool. :)\n\nNo-one has offered to do Romanian yet, so you're very welcome to.\n\nFirst things first:\n\n - What is the two letter language identifier most often used for\nRomanian? i.e. fr = Franch, de = German, etc. ro?\n - What is the character set that should be used to send out Romanian\npages? i.e. for English, French, German it's iso-8859-1, for Turkish\nit's iso-8859-9, Romanian = ?\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\[email protected] wrote:\n> \n> Hello !\n> \n> I'd like to translate the advocacy site to Romanian,\n> as long as nobody else has already offered himself/herself\n> to do it.\n> \n> Just tell me, please, what should i do.\n> \n> Adrian Maier\n> ([email protected])\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. 
He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Tue, 15 Oct 2002 05:07:46 +1000", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone want to assist with the translation of the Advocacy " }, { "msg_contents": "Hello !\n\nI'd like to translate the advocacy site to Romanian,\nas long as nobody else has already offered himself/herself\nto do it.\n\nJust tell me, please, what should i do.\n\n\nAdrian Maier\n([email protected])\n\n", "msg_date": "Tue, 15 Oct 2002 00:13:58 +0300", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Anyone want to assist with the translation of the Advocacy site?" }, { "msg_contents": "On Tue, Oct 15, 2002 at 05:07:46AM +1000, Justin Clift wrote:\n> Hi Adrian,\n> \n> Wow. That's pretty cool. :)\n> \n> No-one has offered to do Romanian yet, so you're very welcome to.\n> \n> First things first:\n> \n> - What is the two letter language identifier most often used for\n> Romanian? i.e. fr = Franch, de = German, etc. ro?\n\nro = Romanian\n\n> - What is the character set that should be used to send out Romanian\n> pages? i.e. for English, French, German it's iso-8859-1, for Turkish\n> it's iso-8859-9, Romanian = ?\n\niso-8859-2\n", "msg_date": "Tue, 15 Oct 2002 20:38:21 +0300", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Anyone want to assist with the translation of the Advocacy site?" }, { "msg_contents": "\nI have put the latest patch at:\n\n http://downloads.rhyme.com.au/postgresql/pg_dump/\n\nalong with two dump files of the regression DB, one with 4 byte\nand the other with 8 byte offsets. I can read/restore each from\nthe other, so it looks pretty good. Once the endianness is tested,\nwe should be OK.\n\nKnown problems:\n\n- will not cope with > 4GB files and size_t not 64 bit.\n- when printing data position, it is assumed that off_t is UINT64\n (we could remove this entirely - it's just for display)\n- if seek is not supported, then an intXX is assigned to off_t\n when file offsets are needed. This *should* not cause a problem\n since without seek, the offsets will not be written to the file.\n\nChanges from Prior Version:\n\n- No longer stores or outputs data length\n- Assumes result of ftello is correct if it disagrees with internally\n kept tally.\n- 'pg_restore -l' now shows sizes of int and offset.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n", "msg_date": "Sat, 19 Oct 2002 17:08:09 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://momjian.postgresql.org/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nPhilip Warner wrote:\n> \n> I have put the latest patch at:\n> \n> http://downloads.rhyme.com.au/postgresql/pg_dump/\n> \n> along with two dump files of the regression DB, one with 4 byte\n> and the other with 8 byte offsets. 
I can read/restore each from\n> the other, so it looks pretty good. Once the endianness is tested,\n> we should be OK.\n> \n> Known problems:\n> \n> - will not cope with > 4GB files and size_t not 64 bit.\n> - when printing data position, it is assumed that off_t is UINT64\n> (we could remove this entirely - it's just for display)\n> - if seek is not supported, then an intXX is assigned to off_t\n> when file offsets are needed. This *should* not cause a problem\n> since without seek, the offsets will not be written to the file.\n> \n> Changes from Prior Version:\n> \n> - No longer stores or outputs data length\n> - Assumes result of ftello is correct if it disagrees with internally\n> kept tally.\n> - 'pg_restore -l' now shows sizes of int and offset.\n> \n> \n> ----------------------------------------------------------------\n> Philip Warner | __---_____\n> Albatross Consulting Pty. Ltd. |----/ - \\\n> (A.B.N. 75 008 659 498) | /(@) ______---_\n> Tel: (+61) 0500 83 82 81 | _________ \\\n> Fax: (+61) 0500 83 82 82 | ___________ |\n> Http://www.rhyme.com.au | / \\|\n> | --________--\n> PGP key available upon request, | /\n> and from pgp5.ai.mit.edu:11371 |/\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 20 Oct 2002 21:18:49 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "At 09:18 PM 20/10/2002 -0400, Bruce Momjian wrote:\n>I will try to apply it within the next 48 hours.\n\nI'm happy to apply it when necessary; but I wouldn't do it until we've from \nsome someone with a big-endian machine...\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n", "msg_date": "Mon, 21 Oct 2002 11:47:40 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "Philip Warner wrote:\n> At 09:18 PM 20/10/2002 -0400, Bruce Momjian wrote:\n> >I will try to apply it within the next 48 hours.\n> \n> I'm happy to apply it when necessary; but I wouldn't do it until we've from \n> some someone with a big-endian machine...\n\nWell, I think Tom was going to try it on his HPUX machine. However, it\nis on the open items list, so we are going to need to get it in there\nsoon anyway, or yank it all out. If no big endian people want to test\nit, we will have to ship and then I am sure some big-ending testing will\nhappen. ;-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 20 Oct 2002 21:50:30 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" 
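A note for readers following this sub-thread: the patch stores each file offset in the archive as a sequence of individual bytes, so the point of finding a big-endian tester is to confirm that a dump written on one byte order restores correctly on the other. A tiny, hypothetical probe (not part of the patch, and essentially the same trick Philip later floats with "int i = 256") that tells a tester which kind of host they are on:

#include <stdio.h>

int
main(void)
{
    /* The first stored byte of an int reveals the host byte order. */
    int     probe = 1;

    if (*(char *) &probe == 1)
        printf("little-endian host\n");
    else
        printf("big-endian host\n");
    return 0;
}

Dumping the regression database on one kind of host and restoring the resulting file on the other exercises exactly the offset-handling code under discussion.
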
}, { "msg_contents": "At 09:50 PM 20/10/2002 -0400, Bruce Momjian wrote:\n>Well, I think Tom was going to try it on his HPUX machine.\n\nIt might be good if someone who knows a little more than me about\nendianness etc has a look at the patch - specifically this bit of code:\n\n#if __BYTE_ORDER == __LITTLE_ENDIAN\n for (off = 0 ; off < sizeof(off_t) ; off++) {\n#else\n for (off = sizeof(off_t) -1 ; off >= 0 ; off--) {\n#endif\n i = *(char*)(ptr+off);\n (*AH->WriteBytePtr) (AH, i);\n }\n\nIt is *intended* to write the data such that the least significant byte\nis written first to the file, but the dump Giles put on his FTP site\nis not correct - it's written msb->lsb.\n\nThere seem to be two possibilities (a) I am an idiot and there is something\nwrong with the code above that I can not see, or (b) the test:\n\n #if __BYTE_ORDER == __LITTLE_ENDIAN\n\nis not the right thing to do. Any insights would be appreciated.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n", "msg_date": "Mon, 21 Oct 2002 23:15:00 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> It might be good if someone who knows a little more than me about\n> endianness etc has a look at the patch - specifically this bit of code:\n\n> #if __BYTE_ORDER == __LITTLE_ENDIAN\n\nWell, the main problem with that is there's no such symbol as\n__BYTE_ORDER ...\n\nI'd prefer not to introduce one, either, if we can possibly avoid it.\nI know that we have BYTE_ORDER defined in the port header files, but\nI think it's quite untrustworthy, since there is no other place in the\nmain distribution that uses it anymore (AFAICS only contrib/pgcrypto\nuses it at all).\n\nThe easiest way to write and reassemble an arithmetic value in a\nplatform-independent order is via shifting. For instance,\n\n\t// write, LSB first\n\tfor (i = 0; i < sizeof(off_t); i++)\n\t{\n\t\twritebyte(val & 0xFF);\n\t\tval >>= 8;\n\t}\n\n\t// read, LSB first\n\tval = 0;\n\tshift = 0;\n\tfor (i = 0; i < sizeof(off_t); i++)\n\t{\n\t\tval |= (readbyte() << shift);\n\t\tshift += 8;\n\t}\n\n(This assumes readbyte delivers an unsigned byte, else you might need to\nmask it with 0xFF before shifting.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 21 Oct 2002 09:47:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem? " }, { "msg_contents": "At 09:47 AM 21/10/2002 -0400, Tom Lane wrote:\n>Well, the main problem with that is there's no such symbol as\n>__BYTE_ORDER ...\n\nWhat about just:\n\n int i = 256;\n\nthen checking the first byte? This should give me the endianness, and makes \na non-destructive write (not sure it it's important). Currently the \ncommonly used code does not rely on off_t arithmetic, so if possible I'd \nlike to avoid shift. Does that sound reasonable? Or overly cautious?\n\n\n\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 
75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n", "msg_date": "Tue, 22 Oct 2002 00:10:22 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem? " }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> then checking the first byte? This should give me the endianness, and makes \n> a non-destructive write (not sure it it's important). Currently the \n> commonly used code does not rely on off_t arithmetic, so if possible I'd \n> like to avoid shift. Does that sound reasonable? Or overly cautious?\n\nI think it's pointless. Let's assume off_t is not an arithmetic type\nbut some weird struct dreamed up by a crazed kernel hacker. What are\nthe odds that dumping the bytes in it, in either order, will produce\nsomething that's compatible with any other platform? There could be\npadding, or the fields might be in an order that doesn't match the\nbyte order within the fields, or something else.\n\nThe shift method requires *no* directly endian-dependent code,\nand I think it will work on any platform where you have any hope of\nportability anyway.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 21 Oct 2002 10:16:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem? " }, { "msg_contents": "\nHere is a modified version of Philip's patch that has the changes Tom\nsuggested; treating off_t as an integral type. I did light testing on\nmy BSD/OS machine that has 8-byte off_t but I don't have 4 gigs of free\nspace to test larger files. \n\n\tftp://candle.pha.pa.us/pub/postgresql/mypatches/pg_dump\n\nCan others test?\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Philip Warner <[email protected]> writes:\n> > then checking the first byte? This should give me the endianness, and makes \n> > a non-destructive write (not sure it it's important). Currently the \n> > commonly used code does not rely on off_t arithmetic, so if possible I'd \n> > like to avoid shift. Does that sound reasonable? Or overly cautious?\n> \n> I think it's pointless. Let's assume off_t is not an arithmetic type\n> but some weird struct dreamed up by a crazed kernel hacker. What are\n> the odds that dumping the bytes in it, in either order, will produce\n> something that's compatible with any other platform? There could be\n> padding, or the fields might be in an order that doesn't match the\n> byte order within the fields, or something else.\n> \n> The shift method requires *no* directly endian-dependent code,\n> and I think it will work on any platform where you have any hope of\n> portability anyway.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 21 Oct 2002 21:47:37 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "On Mon, 2002-10-21 at 20:47, Bruce Momjian wrote:\n> \n> Here is a modified version of Philip's patch that has the changes Tom\n> suggested; treating off_t as an integral type. I did light testing on\n> my BSD/OS machine that has 8-byte off_t but I don't have 4 gigs of free\n> space to test larger files. \nI can make an account for anyone that wants to play on UnixWare 7.1.3.\n\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n", "msg_date": "21 Oct 2002 20:50:07 -0500", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "Larry Rosenman wrote:\n> On Mon, 2002-10-21 at 20:47, Bruce Momjian wrote:\n> > \n> > Here is a modified version of Philip's patch that has the changes Tom\n> > suggested; treating off_t as an integral type. I did light testing on\n> > my BSD/OS machine that has 8-byte off_t but I don't have 4 gigs of free\n> > space to test larger files. \n> I can make an account for anyone that wants to play on UnixWare 7.1.3.\n\nIf you have 7.3, you can just test this way:\n\t\n\t1) apply the patch\n\t2) run the regression tests\n\t3) pg_dump -Fc regression >/tmp/x\n\t4) pg_restore -Fc </tmp/x\n\nThat's all I did and it worked.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 21 Oct 2002 21:52:13 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "On Mon, 2002-10-21 at 20:52, Bruce Momjian wrote:\n> Larry Rosenman wrote:\n> > On Mon, 2002-10-21 at 20:47, Bruce Momjian wrote:\n> > > \n> > > Here is a modified version of Philip's patch that has the changes Tom\n> > > suggested; treating off_t as an integral type. I did light testing on\n> > > my BSD/OS machine that has 8-byte off_t but I don't have 4 gigs of free\n> > > space to test larger files. \n> > I can make an account for anyone that wants to play on UnixWare 7.1.3.\n> \n> If you have 7.3, you can just test this way:\nI haven't had the time to play with 7.3 (busy on a NUMBER of other\nthings). \n\nI'm more than willing to supply resources, just my time is short right\nnow. \n\n\n> \t\n> \t1) apply the patch\n> \t2) run the regression tests\n> \t3) pg_dump -Fc regression >/tmp/x\n> \t4) pg_restore -Fc </tmp/x\n> \n> That's all I did and it worked.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n", "msg_date": "21 Oct 2002 21:05:09 -0500", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" 
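Before running those steps, a tester might also want to confirm whether the platform being tested has the wide-offset problem this thread is about at all. A hypothetical one-off check (not part of the patch):

#include <stdio.h>
#include <sys/types.h>

int
main(void)
{
    /* Report the sizes that matter for >2GB archives. */
    printf("sizeof(long)  = %u\n", (unsigned int) sizeof(long));
    printf("sizeof(off_t) = %u\n", (unsigned int) sizeof(off_t));

    if (sizeof(off_t) > sizeof(long))
        printf("off_t is wider than long; plain fseek()/ftell() cannot address the whole file\n");
    else
        printf("off_t fits in a long; plain fseek()/ftell() suffice on this platform\n");
    return 0;
}

Either result is a useful data point, since Philip's two sample dumps (4-byte and 8-byte offsets) are meant to be readable from either kind of platform.
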
}, { "msg_contents": "At 09:52 PM 21/10/2002 -0400, Bruce Momjian wrote:\n> 4) pg_restore -Fc </tmp/x\n\npg_restore /tmp/x\n\nis enough; it will determine the file type, and by avoiding the pipe, you \nallow it to do seeks which are not much use here, but are usefull when you \nonly restore one table in a very large backup.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n", "msg_date": "Tue, 22 Oct 2002 17:52:37 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "At 10:16 AM 21/10/2002 -0400, Tom Lane wrote:\n>What are\n>the odds that dumping the bytes in it, in either order, will produce\n>something that's compatible with any other platform?\n\nNone, but it will be compatible with itself (the most we can hope for), and \nwill work even if shifting is not supported for off_t (how likely is \nthat?). I agree shift is definitely the way to go if it works on arbitrary \ndata - ie. it does not rely on off_t being an integer. Can I shift a struct?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n", "msg_date": "Tue, 22 Oct 2002 21:46:25 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem? " }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> None, but it will be compatible with itself (the most we can hope for), and \n> will work even if shifting is not supported for off_t (how likely is \n> that?). I agree shift is definitely the way to go if it works on arbitrary \n> data - ie. it does not rely on off_t being an integer. Can I shift a struct?\n\nYou can't. If there are any platforms where in fact off_t isn't an\narithmetic type, then shifting code would break there. I am not sure\nthere are any; can anyone provide a counterexample?\n\nIt would be simple enough to add a configure test to see whether off_t\nis arithmetic (just try to compile \"off_t x; x <<= 8;\"). How about\n\t#ifdef OFF_T_IS_ARITHMETIC_TYPE\n\t\t// cross-platform compatible\n\t\tuse shifting method\n\t#else\n\t\t// not cross-platform compatible\n\t\tread or write bytes of struct in storage order\n\t#endif\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 22 Oct 2002 09:29:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem? " }, { "msg_contents": "Tom Lane wrote:\n> Philip Warner <[email protected]> writes:\n> > None, but it will be compatible with itself (the most we can hope for), and \n> > will work even if shifting is not supported for off_t (how likely is \n> > that?). I agree shift is definitely the way to go if it works on arbitrary \n> > data - ie. it does not rely on off_t being an integer. Can I shift a struct?\n> \n> You can't. 
If there are any platforms where in fact off_t isn't an\n> arithmetic type, then shifting code would break there. I am not sure\n> there are any; can anyone provide a counterexample?\n> \n> It would be simple enough to add a configure test to see whether off_t\n> is arithmetic (just try to compile \"off_t x; x <<= 8;\"). How about\n> \t#ifdef OFF_T_IS_ARITHMETIC_TYPE\n> \t\t// cross-platform compatible\n> \t\tuse shifting method\n> \t#else\n> \t\t// not cross-platform compatible\n> \t\tread or write bytes of struct in storage order\n> \t#endif\n\nIt is my understanding that off_t is an integral type and fpos_t is\nperhaps a struct. My fgetpos manual page says:\n\n The fgetpos() and fsetpos() functions are alternate interfaces equivalent\n to ftell() and fseek() (with whence set to SEEK_SET ), setting and stor-\n ing the current value of the file offset into or from the object refer-\n enced by pos. On some (non-UNIX) systems an ``fpos_t'' object may be a\n complex object and these routines may be the only way to portably reposi-\n tion a text stream.\n\nI poked around and found this Usenet posting:\n\n\thttp://groups.google.com/groups?q=C+off_t+standard+integral&hl=en&lr=&ie=UTF-8&oe=UTF-8&selm=E958tG.8tH%40root.co.uk&rnum=1\n\nstating that while off_t must be arithmetic, it doesn't have to be\nintegral, meaning it could be float or double, which can't be shifted.\n\nHowever, since we don't know if we support any non-integral off_t\nplatforms, and because a configure test would require us to have two\ncode paths for with/without integral off_t, I suggest we apply my\nversion of Philip's patch and let's see if everyone can compile it\ncleanly. It does have the advantage of being more portable on systems\nthat do have integral off_t, which I think is most/all of our supported\nplatforms.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 22 Oct 2002 12:00:01 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "At 12:00 PM 22/10/2002 -0400, Bruce Momjian wrote:\n>It does have the advantage of being more portable on systems\n>that do have integral off_t\n\nI suspect it is no more portable than determining storage order by using \n'int i = 256', then writing in storage order, and has the disadvantage that \nit may break as discussed.\n\nAFAICT, using storage order will not break under any circumstances within \none OS/architecture (unlike using shift), and will not break any more often \nthan using shift in cases where off_t is integral.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n", "msg_date": "Wed, 23 Oct 2002 02:20:45 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" 
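For reference, the shift-based scheme Tom is arguing for amounts to the following self-contained sketch; provided off_t is an integral type, it emits the same byte sequence (least significant byte first) on every host, no matter how the host lays the value out in memory. The function names here are illustrative stand-ins, not the actual pg_dump archiver routines:

#include <sys/types.h>

/* Stand-ins for the archiver's single-byte I/O routines. */
extern void WriteByte(unsigned char b);
extern unsigned char ReadByte(void);

void
WriteOffset(off_t val)
{
    unsigned int i;

    /* Emit the value LSB first, using arithmetic only. */
    for (i = 0; i < sizeof(off_t); i++)
    {
        WriteByte((unsigned char) (val & 0xFF));
        val >>= 8;
    }
}

off_t
ReadOffset(void)
{
    off_t        val = 0;
    unsigned int i;

    /* Reassemble in the same LSB-first order, again without touching memory layout. */
    for (i = 0; i < sizeof(off_t); i++)
        val |= ((off_t) ReadByte()) << (i * 8);

    return val;
}

The storage-order alternative Philip describes would instead copy the bytes of the off_t as they sit in memory: self-consistent on any single platform, but with no guarantee that two platforms produce comparable archives.
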
}, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> However, since we don't know if we support any non-integral off_t\n> platforms, and because a configure test would require us to have two\n> code paths for with/without integral off_t, I suggest we apply my\n> version of Philip's patch and let's see if everyone can compile it\n> cleanly.\n\nActually, it looks to me like configure will spit up if off_t is not\nan integral type:\n\n /* Check that off_t can represent 2**63 - 1 correctly.\n We can't simply define LARGE_OFF_T to be 9223372036854775807,\n since some C++ compilers masquerading as C compilers\n incorrectly reject 9223372036854775807. */\n#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))\n int off_t_is_large[(LARGE_OFF_T % 2147483629 == 721\n\t\t && LARGE_OFF_T % 2147483647 == 1)\n\t\t ? 1 : -1];\n\nSo I think we're wasting our time to debate whether we need to support\nnon-integral off_t ... let's just apply Bruce's version and wait to\nsee if anyone has a problem before doing more work.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 22 Oct 2002 12:22:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem? " }, { "msg_contents": "Philip Warner wrote:\n> At 12:00 PM 22/10/2002 -0400, Bruce Momjian wrote:\n> >It does have the advantage of being more portable on systems\n> >that do have integral off_t\n> \n> I suspect it is no more portable than determining storage order by using \n> 'int i = 256', then writing in storage order, and has the disadvantage that \n> it may break as discussed.\n> \n> AFAICT, using storage order will not break under any circumstances within \n> one OS/architecture (unlike using shift), and will not break any more often \n> than using shift in cases where off_t is integral.\n\nYour version will break more often because we are assuming we can\ndetermine the endian-ness of the OS, _and_ for quad off_t types,\nassuming we know that is stored the same too. While we have ending for\nint's, I have no idea if quads are always stored the same. By accessing\nit as an integral type, we make certain it is output the same way every\ntime for every OS.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 22 Oct 2002 12:26:30 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > However, since we don't know if we support any non-integral off_t\n> > platforms, and because a configure test would require us to have two\n> > code paths for with/without integral off_t, I suggest we apply my\n> > version of Philip's patch and let's see if everyone can compile it\n> > cleanly.\n> \n> Actually, it looks to me like configure will spit up if off_t is not\n> an integral type:\n> \n> /* Check that off_t can represent 2**63 - 1 correctly.\n> We can't simply define LARGE_OFF_T to be 9223372036854775807,\n> since some C++ compilers masquerading as C compilers\n> incorrectly reject 9223372036854775807. */\n> #define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))\n> int off_t_is_large[(LARGE_OFF_T % 2147483629 == 721\n> \t\t && LARGE_OFF_T % 2147483647 == 1)\n> \t\t ? 
1 : -1];\n> \n> So I think we're wasting our time to debate whether we need to support\n> non-integral off_t ... let's just apply Bruce's version and wait to\n> see if anyone has a problem before doing more work.\n\nI am concerned about one more thing. On BSD/OS, we have off_t of quad\n(8 byte), but we don't have fseeko, so this call looks questionable:\n\n\tif (fseeko(AH->FH, tctx->dataPos, SEEK_SET) != 0)\n\nIn this case, dataPos is off_t (8 bytes), while fseek only accepts long\nin that parameter (4 bytes). When this code is hit, a file > 4 gigs\nwill seek to the wrong offset, I am afraid. Also, I don't understand\nwhy the compiler doesn't produce a warning.\n\nI wonder if I should add a conditional test so this code is hit only if\nHAVE_FSEEKO is defined. There is alternative code for all the non-zero\nfseeks.\n\nComments?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 22 Oct 2002 12:46:55 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Your version will break more often because we are assuming we can\n> determine the endian-ness of the OS, _and_ for quad off_t types,\n> assuming we know that is stored the same too. While we have ending for\n> int's, I have no idea if quads are always stored the same.\n\nThere is precedent for problems of that ilk, too, cf PDP_ENDIAN: years\nago someone made double-word-integer software routines and did not\nthink twice about which word should appear first in storage, with the\nconsequence that the storage order was neither little-endian nor\nbig-endian. (We have exactly the same issue with our CRC routines for\ncompilers without int64: the two-int32 struct is defined in a way that's\ncompatible with little-endian storage, and on a big-endian machine it'll\nproduce a funny storage order.)\n\nUnless someone can point to a supported (or potentially interesting)\nplatform on which off_t is indeed not integral, I think the shift-based\ncode is our safest bet. (The precedent of the off_t checking code in\nconfigure makes me really doubt that there are any platforms with\nnon-integral off_t.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 22 Oct 2002 13:15:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem? " }, { "msg_contents": "\nPatch applied with shift <</>> changes by me. Thanks.\n\n---------------------------------------------------------------------------\n\n\nPhilip Warner wrote:\n> \n> I have put the latest patch at:\n> \n> http://downloads.rhyme.com.au/postgresql/pg_dump/\n> \n> along with two dump files of the regression DB, one with 4 byte\n> and the other with 8 byte offsets. I can read/restore each from\n> the other, so it looks pretty good. Once the endianness is tested,\n> we should be OK.\n> \n> Known problems:\n> \n> - will not cope with > 4GB files and size_t not 64 bit.\n> - when printing data position, it is assumed that off_t is UINT64\n> (we could remove this entirely - it's just for display)\n> - if seek is not supported, then an intXX is assigned to off_t\n> when file offsets are needed. 
This *should* not cause a problem\n> since without seek, the offsets will not be written to the file.\n> \n> Changes from Prior Version:\n> \n> - No longer stores or outputs data length\n> - Assumes result of ftello is correct if it disagrees with internally\n> kept tally.\n> - 'pg_restore -l' now shows sizes of int and offset.\n> \n> \n> ----------------------------------------------------------------\n> Philip Warner | __---_____\n> Albatross Consulting Pty. Ltd. |----/ - \\\n> (A.B.N. 75 008 659 498) | /(@) ______---_\n> Tel: (+61) 0500 83 82 81 | _________ \\\n> Fax: (+61) 0500 83 82 82 | ___________ |\n> Http://www.rhyme.com.au | / \\|\n> | --________--\n> PGP key available upon request, | /\n> and from pgp5.ai.mit.edu:11371 |/\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 22 Oct 2002 15:16:18 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "Bruce Momjian wrote:\n> > So I think we're wasting our time to debate whether we need to support\n> > non-integral off_t ... let's just apply Bruce's version and wait to\n> > see if anyone has a problem before doing more work.\n> \n> I am concerned about one more thing. On BSD/OS, we have off_t of quad\n> (8 byte), but we don't have fseeko, so this call looks questionable:\n> \n> \tif (fseeko(AH->FH, tctx->dataPos, SEEK_SET) != 0)\n> \n> In this case, dataPos is off_t (8 bytes), while fseek only accepts long\n> in that parameter (4 bytes). When this code is hit, a file > 4 gigs\n> will seek to the wrong offset, I am afraid. Also, I don't understand\n> why the compiler doesn't produce a warning.\n> \n> I wonder if I should add a conditional test so this code is hit only if\n> HAVE_FSEEKO is defined. There is alternative code for all the non-zero\n> fseeks.\n\nHere is a patch that I think fixes the problem I outlined above. If\nthere is no fseeko(), it will not call fseek with a non-zero offset\nunless sizeof(off_t) <= sizeof(long).\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\nIndex: src/bin/pg_dump/pg_backup_custom.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/bin/pg_dump/pg_backup_custom.c,v\nretrieving revision 1.22\ndiff -c -c -r1.22 pg_backup_custom.c\n*** src/bin/pg_dump/pg_backup_custom.c\t22 Oct 2002 19:15:23 -0000\t1.22\n--- src/bin/pg_dump/pg_backup_custom.c\t22 Oct 2002 21:36:30 -0000\n***************\n*** 431,437 ****\n \tif (tctx->dataState == K_OFFSET_NO_DATA)\n \t\treturn;\n \n! \tif (!ctx->hasSeek || tctx->dataState == K_OFFSET_POS_NOT_SET)\n \t{\n \t\t/* Skip over unnecessary blocks until we get the one we want. */\n \n--- 431,441 ----\n \tif (tctx->dataState == K_OFFSET_NO_DATA)\n \t\treturn;\n \n! \tif (!ctx->hasSeek || tctx->dataState == K_OFFSET_POS_NOT_SET\n! #if !defined(HAVE_FSEEKO)\n! \t\t|| sizeof(off_t) > sizeof(long)\n! #endif\n! \t\t)\n \t{\n \t\t/* Skip over unnecessary blocks until we get the one we want. 
*/\n \n***************\n*** 809,815 ****\n \t\t * be ok to just use the existing self-consistent block\n \t\t * formatting.\n \t\t */\n! \t\tif (ctx->hasSeek)\n \t\t{\n \t\t\tfseeko(AH->FH, tpos, SEEK_SET);\n \t\t\tWriteToc(AH);\n--- 813,823 ----\n \t\t * be ok to just use the existing self-consistent block\n \t\t * formatting.\n \t\t */\n! \t\tif (ctx->hasSeek\n! #if !defined(HAVE_FSEEKO)\n! \t\t\t&& sizeof(off_t) <= sizeof(long)\n! #endif\n! \t\t\t)\n \t\t{\n \t\t\tfseeko(AH->FH, tpos, SEEK_SET);\n \t\t\tWriteToc(AH);", "msg_date": "Tue, 22 Oct 2002 17:37:39 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "Bruce Momjian writes:\n\n> I am concerned about one more thing. On BSD/OS, we have off_t of quad\n> (8 byte), but we don't have fseeko, so this call looks questionable:\n>\n> \tif (fseeko(AH->FH, tctx->dataPos, SEEK_SET) != 0)\n\nMaybe you want to ask your OS provider how the heck this is supposed to\nwork. I mean, it's great to have wide types, but what's the point if the\nAPI can't handle them?\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Wed, 23 Oct 2002 00:20:53 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > I am concerned about one more thing. On BSD/OS, we have off_t of quad\n> > (8 byte), but we don't have fseeko, so this call looks questionable:\n> >\n> > \tif (fseeko(AH->FH, tctx->dataPos, SEEK_SET) != 0)\n> \n> Maybe you want to ask your OS provider how the heck this is supposed to\n> work. I mean, it's great to have wide types, but what's the point if the\n> API can't handle them?\n\nExcellent question. They do have fsetpos/fgetpos, and I think they\nthink you are supposed to use those. However, they don't do seek from\ncurrent position, and they don't take an off_t, so I am confused myself.\n\nI did ask on the mailing list and everyone kind of agreed it was a\nmissing feature. However, because of the way we call fseeko not knowing\nif it is a quad or a long, I think we have to add the checks to prevent\nsuch wild seeks from happening.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 22 Oct 2002 21:13:19 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "At 05:37 PM 22/10/2002 -0400, Bruce Momjian wrote:\n>! if (ctx->hasSeek\n>! #if !defined(HAVE_FSEEKO)\n>! && sizeof(off_t) <= sizeof(long)\n>! #endif\n>! 
)\n\nJust to clarify my understanding:\n\n- HAVE_FSEEKO is tested & defined in configure\n- If it is not defined, then all calls to fseeko will magically be \ntranslated to fseek calls, and use the 'long' parameter type.\n\nIs that right?\n\nIf so, why don't we:\n\n#if defined(HAVE_FSEEKO)\n#define FILE_OFFSET off_t\n#define FSEEK fseeko\n#else\n#define FILE_OFFSET long\n#define FSEEK fseek\n#end if\n\nthen replace all refs to off_t with FILE_OFFSET, and fseeko with FSEEK.\n\nExisting checks etc will then refuse to load file offsets with significant \nbytes after the 4th byte, we will still use fseek/o in broken OS \nimplementations of off_t.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n", "msg_date": "Wed, 23 Oct 2002 12:39:23 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "Philip Warner wrote:\n> At 05:37 PM 22/10/2002 -0400, Bruce Momjian wrote:\n> >! if (ctx->hasSeek\n> >! #if !defined(HAVE_FSEEKO)\n> >! && sizeof(off_t) <= sizeof(long)\n> >! #endif\n> >! )\n> \n> Just to clarify my understanding:\n> \n> - HAVE_FSEEKO is tested & defined in configure\n> - If it is not defined, then all calls to fseeko will magically be \n> translated to fseek calls, and use the 'long' parameter type.\n> \n> Is that right?\n> \n> If so, why don't we:\n> \n> #if defined(HAVE_FSEEKO)\n> #define FILE_OFFSET off_t\n> #define FSEEK fseeko\n> #else\n> #define FILE_OFFSET long\n> #define FSEEK fseek\n> #end if\n> \n> then replace all refs to off_t with FILE_OFFSET, and fseeko with FSEEK.\n> \n> Existing checks etc will then refuse to load file offsets with significant \n> bytes after the 4th byte, we will still use fseek/o in broken OS \n> implementations of off_t.\n\nUh, not exactly. I have off_t as a quad, and I don't have fseeko, so\nthe above conditional doesn't work. I want to use off_t, but can't use\nfseek(). As it turns out, the code already has options to handle no\nfseek, so it seems to work anyway. I think what you miss may be the\ntable of contents in the archive, if I am reading the code correctly.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 22 Oct 2002 22:46:17 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "At 10:46 PM 22/10/2002 -0400, Bruce Momjian wrote:\n>Uh, not exactly. I have off_t as a quad, and I don't have fseeko, so\n>the above conditional doesn't work. I want to use off_t, but can't use\n>fseek().\n\nThen when you create dumps, they will be invalid since I assume that ftello \nis also broken in the same way. You need to fix _getFilePos as well. And \nany other place that uses an off_t needs to be looked at very carefully. 
\nThe code was written assuming that if 'hasSeek' was set, then we could \ntrust it.\n\nGiven that you say you do have support for some kind of 64 bt offset, I \nwould be a lot happier with these changes if you did something akin to my \noriginal sauggestion:\n\n#if defined(HAVE_FSEEKO)\n#define FILE_OFFSET off_t\n#define FSEEK fseeko\n#elseif defined(HAVE_SOME_OTHER_FSEEK)\n#define FILE_OFFSET some_other_offset\n#define FSEEK some_other_fseek\n#else\n#define FILE_OFFSET long\n#define FSEEK fseek\n#end if\n\n...assuming you have a non-broken 64 bit fseek/tell pair, then this will \nwork in all cases, and make the code a lot less ugly (assuming of course \nthe non-broken version can be shifted).\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n", "msg_date": "Wed, 23 Oct 2002 13:06:08 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "\nSounds messy. Let me see if I can code up an fseeko/ftello for BSD/OS\nand add that to /port. No reason to hold up beta for that, though.\n\nI wonder if any other platforms have this limitation. I think we need\nto add some type of test for no-fseeko()/ftello() and sizeof(off_t) >\nsizeof(long). This fseeko/ftello/off_t is just too fluid, and the\nfailure modes too serious.\n\n---------------------------------------------------------------------------\n\nPhilip Warner wrote:\n> At 10:46 PM 22/10/2002 -0400, Bruce Momjian wrote:\n> >Uh, not exactly. I have off_t as a quad, and I don't have fseeko, so\n> >the above conditional doesn't work. I want to use off_t, but can't use\n> >fseek().\n> \n> Then when you create dumps, they will be invalid since I assume that ftello \n> is also broken in the same way. You need to fix _getFilePos as well. And \n> any other place that uses an off_t needs to be looked at very carefully. \n> The code was written assuming that if 'hasSeek' was set, then we could \n> trust it.\n> \n> Given that you say you do have support for some kind of 64 bt offset, I \n> would be a lot happier with these changes if you did something akin to my \n> original sauggestion:\n> \n> #if defined(HAVE_FSEEKO)\n> #define FILE_OFFSET off_t\n> #define FSEEK fseeko\n> #elseif defined(HAVE_SOME_OTHER_FSEEK)\n> #define FILE_OFFSET some_other_offset\n> #define FSEEK some_other_fseek\n> #else\n> #define FILE_OFFSET long\n> #define FSEEK fseek\n> #end if\n> \n> ...assuming you have a non-broken 64 bit fseek/tell pair, then this will \n> work in all cases, and make the code a lot less ugly (assuming of course \n> the non-broken version can be shifted).\n> \n> \n> \n> ----------------------------------------------------------------\n> Philip Warner | __---_____\n> Albatross Consulting Pty. Ltd. |----/ - \\\n> (A.B.N. 
75 008 659 498) | /(@) ______---_\n> Tel: (+61) 0500 83 82 81 | _________ \\\n> Fax: (+61) 0500 83 82 82 | ___________ |\n> Http://www.rhyme.com.au | / \\|\n> | --________--\n> PGP key available upon request, | /\n> and from pgp5.ai.mit.edu:11371 |/\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 23 Oct 2002 00:29:39 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I wonder if any other platforms have this limitation. I think we need\n> to add some type of test for no-fseeko()/ftello() and sizeof(off_t) >\n> sizeof(long). This fseeko/ftello/off_t is just too fluid, and the\n> failure modes too serious.\n\nI am wondering why pg_dump has to depend on either fseek or ftell.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 23 Oct 2002 00:32:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem? " }, { "msg_contents": "At 12:32 AM 23/10/2002 -0400, Tom Lane wrote:\n>I am wondering why pg_dump has to depend on either fseek or ftell.\n\nIt doesn't - it just works better and has more features if they are \navailable, much like zlib etc.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n", "msg_date": "Wed, 23 Oct 2002 14:36:06 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem? " }, { "msg_contents": "At 12:29 AM 23/10/2002 -0400, Bruce Momjian wrote:\n>This fseeko/ftello/off_t is just too fluid, and the\n>failure modes too serious.\n\nI agree. Can you think of a better solution than the one I suggested???\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n", "msg_date": "Wed, 23 Oct 2002 14:38:18 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "\nOK, you are saying if we don't have fseeko(), there is no reason to use\noff_t, and we may as well use long. What limitations does that impose,\nand are the limitations clear to the user.\n\nWhat has me confused is that I only see two places that use a non-zero\nfseeko, and in those cases, there is a non-fseeko code path that does\nthe same thing, or the call isn't actually required. 
Both cases are in\npg_dump/pg_dump_custom.c. It appears seeking in the file is an\noptimization that prevents all the blocks from being read. That is\nfine, but we shouldn't introduce failure cases to do that.\n\nIf BSD/OS is the only problem OS, I can deal with that, but I have no\nidea if other OS's have the same limitation, and because of the way our\ncode exists now, we are not even checking to see if there is a problem.\n\nI did some poking around, and on BSD/OS, fgetpos/fsetpos use fpos_t,\nwhich is actually off_t, and interestingly, lseek() uses off_t too. \nSeems only fseek/ftell is limited to long. I can easily implemnt\nfseeko/ftello using fgetpos/fsetpos, but that is only one OS.\n\nOne idea would be to patch up BSD/OS in backend/port/bsdi and add a\nconfigure tests that actually fails if fseeko doesn't exist _and_\nsizeof(off_t) > sizeof(long). That would at least catch OS's before\nthey make >2gig backups that can't be restored.\n\n---------------------------------------------------------------------------\n\nPhilip Warner wrote:\n> At 10:46 PM 22/10/2002 -0400, Bruce Momjian wrote:\n> >Uh, not exactly. I have off_t as a quad, and I don't have fseeko, so\n> >the above conditional doesn't work. I want to use off_t, but can't use\n> >fseek().\n> \n> Then when you create dumps, they will be invalid since I assume that ftello \n> is also broken in the same way. You need to fix _getFilePos as well. And \n> any other place that uses an off_t needs to be looked at very carefully. \n> The code was written assuming that if 'hasSeek' was set, then we could \n> trust it.\n> \n> Given that you say you do have support for some kind of 64 bt offset, I \n> would be a lot happier with these changes if you did something akin to my \n> original sauggestion:\n> \n> #if defined(HAVE_FSEEKO)\n> #define FILE_OFFSET off_t\n> #define FSEEK fseeko\n> #elseif defined(HAVE_SOME_OTHER_FSEEK)\n> #define FILE_OFFSET some_other_offset\n> #define FSEEK some_other_fseek\n> #else\n> #define FILE_OFFSET long\n> #define FSEEK fseek\n> #end if\n> \n> ...assuming you have a non-broken 64 bit fseek/tell pair, then this will \n> work in all cases, and make the code a lot less ugly (assuming of course \n> the non-broken version can be shifted).\n> \n> \n> \n> ----------------------------------------------------------------\n> Philip Warner | __---_____\n> Albatross Consulting Pty. Ltd. |----/ - \\\n> (A.B.N. 75 008 659 498) | /(@) ______---_\n> Tel: (+61) 0500 83 82 81 | _________ \\\n> Fax: (+61) 0500 83 82 82 | ___________ |\n> Http://www.rhyme.com.au | / \\|\n> | --________--\n> PGP key available upon request, | /\n> and from pgp5.ai.mit.edu:11371 |/\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 23 Oct 2002 01:02:22 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "At 01:02 AM 23/10/2002 -0400, Bruce Momjian wrote:\n\n>OK, you are saying if we don't have fseeko(), there is no reason to use\n>off_t, and we may as well use long. 
What limitations does that impose,\n>and are the limitations clear to the user.\n\nWhat I'm saying is that if we have not got fseeko then we should use any \n'seek-class' function that returns a 64 bit value. We have already made the \nassumption that off_t is an integer; the same logic that came to that \nconclusion, applies just as validly to the other seek functions.\n\nSecondly, if there is no 64 bit 'seek-class' function, then we should \nprobably use a size_t, but a long would probably be fine too. I am not \nparticularly attached to this part; long, int etc etc. Whatever is most \nlikely to return an integer and work with whatever function we choose.\n\nAs to implications: assuming they are all integers (which as you know I \ndon't like), we should have no problems.\n\nIf a system does not have any function to access 64 bit file offsets, then \nI'd say they are pretty unlikely to have files > 2GB.\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n", "msg_date": "Wed, 23 Oct 2002 15:41:57 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "Philip Warner wrote:\n> At 01:02 AM 23/10/2002 -0400, Bruce Momjian wrote:\n> \n> >OK, you are saying if we don't have fseeko(), there is no reason to use\n> >off_t, and we may as well use long. What limitations does that impose,\n> >and are the limitations clear to the user.\n> \n> What I'm saying is that if we have not got fseeko then we should use any \n> 'seek-class' function that returns a 64 bit value. We have already made the \n> assumption that off_t is an integer; the same logic that came to that \n> conclusion, applies just as validly to the other seek functions.\n\nOh, I see, so try to use fsetpos/fgetpos? I can write wrappers for\nthose to look like fgetpos/fsetpos and put it in /port.\n\n> Secondly, if there is no 64 bit 'seek-class' function, then we should \n> probably use a size_t, but a long would probably be fine too. I am not \n> particularly attached to this part; long, int etc etc. Whatever is most \n> likely to return an integer and work with whatever function we choose.\n> \n> As to implications: assuming they are all integers (which as you know I \n> don't like), we should have no problems.\n> \n> If a system does not have any function to access 64 bit file offsets, then \n> I'd say they are pretty unlikely to have files > 2GB.\n\nOK, my OS can handle 64-bit files, but has only fgetpos/fsetpos, so I\ncould get that working. The bigger question is what about OS's that\nhave 64-bit off_t/files but don't have any seek-type functions. I did\nresearch to find mine, but what about others that may have other\nvariants?\n\nI think you are right that we have to not use off_t and use long if we\ncan't find a proper 64-bit seek function, but what are the failure modes\nof doing this? Exactly what happens for larger files?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 23 Oct 2002 10:24:58 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "Philip Warner wrote:\n> At 01:02 AM 23/10/2002 -0400, Bruce Momjian wrote:\n> \n> >OK, you are saying if we don't have fseeko(), there is no reason to use\n> >off_t, and we may as well use long. What limitations does that impose,\n> >and are the limitations clear to the user.\n> \n> What I'm saying is that if we have not got fseeko then we should use any \n> 'seek-class' function that returns a 64 bit value. We have already made the \n> assumption that off_t is an integer; the same logic that came to that \n> conclusion, applies just as validly to the other seek functions.\n> \n> Secondly, if there is no 64 bit 'seek-class' function, then we should \n> probably use a size_t, but a long would probably be fine too. I am not \n> particularly attached to this part; long, int etc etc. Whatever is most \n> likely to return an integer and work with whatever function we choose.\n> \n> As to implications: assuming they are all integers (which as you know I \n> don't like), we should have no problems.\n> \n> If a system does not have any function to access 64 bit file offsets, then \n> I'd say they are pretty unlikely to have files > 2GB.\n\nLet me see if I can be clearer. With shifting off_t, if that fails, we\nwill find out right away, at compile time. I think that is acceptable.\n\nWhat I am concerned about are cases that fail at runtime, specifically\nduring a restore of a >2gig file. In my reading of the code, those\nfailures will be silent or will produce unusual error messages. I don't\nthink we can ship code that has strange failure modes for data restore.\n\nNow, if someone knows those failure cases, I would love to hear about\nit. If not, I will dig into the code today and find out where they are.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 23 Oct 2002 10:42:03 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "Bruce Momjian writes:\n\n> I think you are right that we have to not use off_t and use long if we\n> can't find a proper 64-bit seek function, but what are the failure modes\n> of doing this? Exactly what happens for larger files?\n\nFirst we need to decide what we want to happen and after that think about\nhow to implement it. Given sizeof(off_t) > sizeof(long) and no fseeko(),\nwe have the following options:\n\n1. Disable access to large files.\n\n2. Seek in some other way.\n\nWhat's it gonna be?\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Wed, 23 Oct 2002 23:50:34 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > I think you are right that we have to not use off_t and use long if we\n> > can't find a proper 64-bit seek function, but what are the failure modes\n> > of doing this? Exactly what happens for larger files?\n> \n> First we need to decide what we want to happen and after that think about\n> how to implement it. 
Given sizeof(off_t) > sizeof(long) and no fseeko(),\n> we have the following options:\n> \n> 1. Disable access to large files.\n> \n> 2. Seek in some other way.\n> \n> What's it gonna be?\n\nOK, well BSD/OS now works, but I wonder if there are any other quad\noff_t OS's out there without fseeko.\n\nHow would we disable access to large files? Do we fstat the file and\nsee if it is too large? I suppose we are looking for cases where the\nfile system has large files, but fseeko doesn't allow us to access them.\nShould we leave this issue alone and wait to find another OS with this\nproblem, and we can then rejigger fseeko.c to handle that OS too?\n\nLooking at the pg_dump code, it seems the fseeks are optional in there\nanyway because it already has code to read the file sequentially rather\nthan use fseek, and the TOC case in pg_backup_custom.c says that is\noptional too.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 23 Oct 2002 17:50:50 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> First we need to decide what we want to happen and after that think about\n> how to implement it. Given sizeof(off_t) > sizeof(long) and no fseeko(),\n> we have the following options:\n\nIt seems obvious to me that there are no platforms that offer\nsizeof(off_t) > sizeof(long) but have no API for doing seeks with off_t.\nThat would be just plain silly. IMHO it's acceptable for us to fail at\nconfigure time if we can't figure out how to seek.\n\nThe question is *which* seek APIs we need to support. Are there any\nbesides fseeko() and fgetpos()?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 23 Oct 2002 17:52:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem? " }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> How would we disable access to large files?\n\nI think configure should fail if it can't find a way to seek.\nWorkaround for anyone in that situation is configure --disable-largefile.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 23 Oct 2002 17:55:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem? " }, { "msg_contents": "Tom Lane wrote:\n> Peter Eisentraut <[email protected]> writes:\n> > First we need to decide what we want to happen and after that think about\n> > how to implement it. Given sizeof(off_t) > sizeof(long) and no fseeko(),\n> > we have the following options:\n> \n> It seems obvious to me that there are no platforms that offer\n> sizeof(off_t) > sizeof(long) but have no API for doing seeks with off_t.\n> That would be just plain silly. IMHO it's acceptable for us to fail at\n> configure time if we can't figure out how to seek.\n\nI would certainly be happy failing at configure time, so we know at the\nstart what is broken, rather than failures during restore.\n\n> The question is *which* seek APIs we need to support. Are there any\n> besides fseeko() and fgetpos()?\n\nWhat I have added is BSD/OS specific because only on BSD/OS do I know\nfpos_t and off_t are the same type. 
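For readers without the CVS tree handy, a simplified sketch of the approach (not the committed src/port/fseeko.c itself, and valid only under the stated assumption that fpos_t and off_t are the same integral type; my_fseeko/my_ftello are illustrative names):

#include <stdio.h>
#include <sys/types.h>

/*
 * Sketch of fseeko()/ftello() built on fsetpos()/fgetpos().  This only
 * works where fpos_t is the same 64-bit integer as off_t.
 */
int
my_fseeko(FILE *stream, off_t offset, int whence)
{
	fpos_t	pos;

	switch (whence)
	{
		case SEEK_SET:
			pos = (fpos_t) offset;
			return fsetpos(stream, &pos);
		case SEEK_CUR:
			if (fgetpos(stream, &pos) != 0)
				return -1;
			pos = (fpos_t) ((off_t) pos + offset);
			return fsetpos(stream, &pos);
		case SEEK_END:
			if (fseek(stream, 0, SEEK_END) != 0 ||
				fgetpos(stream, &pos) != 0)
				return -1;
			pos = (fpos_t) ((off_t) pos + offset);
			return fsetpos(stream, &pos);
		default:
			return -1;
	}
}

off_t
my_ftello(FILE *stream)
{
	fpos_t	pos;

	if (fgetpos(stream, &pos) != 0)
		return (off_t) -1;
	return (off_t) pos;
}
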
If we come up with other platforms,\nwe will have to deal with it then.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 23 Oct 2002 18:01:57 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "\nBruce Momjian <[email protected]> writes:\n\n> OK, well BSD/OS now works, but I wonder if there are any other quad\n> off_t OS's out there without fseeko.\n\nNetBSD prior to 1.6, released September 14, 2002. (Source: CVS logs.)\n\nOpenBSD prior to 2.7, released June 15, 2000. (Source: release notes.)\n\nFreeBSD has had fseeko() for some time, but I'm not sure which release\nintroduced it -- perhaps 3.2.0, released May, 1999. (Source: CVS logs.)\n\nRegards,\n\nGiles\n\n\n\n\n", "msg_date": "Thu, 24 Oct 2002 11:14:19 +1000", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem? " }, { "msg_contents": "At 05:50 PM 23/10/2002 -0400, Bruce Momjian wrote:\n>Looking at the pg_dump code, it seems the fseeks are optional in there\n>anyway because it already has code to read the file sequentially rather\n\nBut there are features that are not available if it can't seek: eg. it will \nnot restore in a different order to that in which it was written; it will \nnot dump data offsets in the TOC so dump files can not be restored in \nalternate orders; restore times will be large for a single table (it has to \nread the entire file potentially).\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n", "msg_date": "Thu, 24 Oct 2002 11:33:50 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "Philip Warner wrote:\n> At 05:50 PM 23/10/2002 -0400, Bruce Momjian wrote:\n> >Looking at the pg_dump code, it seems the fseeks are optional in there\n> >anyway because it already has code to read the file sequentially rather\n> \n> But there are features that are not available if it can't seek: eg. it will \n> not restore in a different order to that in which it was written; it will \n> not dump data offsets in the TOC so dump files can not be restored in \n> alternate orders; restore times will be large for a single table (it has to \n> read the entire file potentially).\n\nOK, that helps. We just got a list of 2 other OS's without fseeko and\nwith large file support. Any NetBSD before Auguest 2002 has that\nproblem. We are going to need to either get fseeko workarounds for\nthose, or disable those features in a meaningful way.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 23 Oct 2002 21:36:35 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "Giles Lean wrote:\n> \n> Bruce Momjian <[email protected]> writes:\n> \n> > OK, well BSD/OS now works, but I wonder if there are any other quad\n> > off_t OS's out there without fseeko.\n> \n> NetBSD prior to 1.6, released September 14, 2002. (Source: CVS logs.)\n\nOK, does pre-1.6 NetBSD have fgetpos/fsetpos that is off_t/quad?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 23 Oct 2002 21:37:08 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "At 10:42 AM 23/10/2002 -0400, Bruce Momjian wrote:\n>What I am concerned about are cases that fail at runtime, specifically\n>during a restore of a >2gig file.\n\nPlease give an example that would still apply assuming we get a working \nseek/tell pair that works with whatever we use as an offset?\n\nIf you are concerned about reading a dump file with 8 byte offsets on a \nmachine with 4 byte off_t, that case and it's permutations are already covered.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n", "msg_date": "Thu, 24 Oct 2002 11:37:20 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "Philip Warner wrote:\n> At 10:42 AM 23/10/2002 -0400, Bruce Momjian wrote:\n> >What I am concerned about are cases that fail at runtime, specifically\n> >during a restore of a >2gig file.\n> \n> Please give an example that would still apply assuming we get a working \n> seek/tell pair that works with whatever we use as an offset?\n\nIf we get this, everything is fine. I have done that for BSD/OS today. \nI may need to do the same for NetBSD/OpenBSD too.\n\n> If you are concerned about reading a dump file with 8 byte offsets on a \n> machine with 4 byte off_t, that case and it's permutations are already covered.\n\nNo, I know that is covered because it will report a proper error message\non the restore on the 4-byte off_t machine.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 23 Oct 2002 21:41:45 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "At 11:50 PM 23/10/2002 +0200, Peter Eisentraut wrote:\n\n>1. Disable access to large files.\n>\n>2. 
Seek in some other way.\n\nThis gets my vote, but I would like to see a clean implementation (not huge \nquantities if ifdefs every time we call fseek); either we write our own \nfseek as Bruce seems to be suggesting, or we have a single header file that \ndefines the FSEEK/FTELL/OFF_T to point to the 'right' functions, where \n'right' is defined as 'most likely to generate an integer and which makes \nuse of the largest number of bytes'.\n\nThe way the code is currently written it does not matter if this is a 16 or \n3 byte value - so long as it is an integer.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n", "msg_date": "Thu, 24 Oct 2002 11:43:14 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "Philip Warner wrote:\n> At 11:50 PM 23/10/2002 +0200, Peter Eisentraut wrote:\n> \n> >1. Disable access to large files.\n> >\n> >2. Seek in some other way.\n> \n> This gets my vote, but I would like to see a clean implementation (not huge \n> quantities if ifdefs every time we call fseek); either we write our own \n> fseek as Bruce seems to be suggesting, or we have a single header file that \n> defines the FSEEK/FTELL/OFF_T to point to the 'right' functions, where \n> 'right' is defined as 'most likely to generate an integer and which makes \n> use of the largest number of bytes'.\n\nWe have to write another function because fsetpos doesn't do SEEK_CUR so\nyou have to implement it with more complex code. It isn't a drop in\nplace thing.\n\n> The way the code is currently written it does not matter if this is a 16 or \n> 3 byte value - so long as it is an integer.\n\nRight. What we are assuming now is that off_t can be seeked using\nwhatever we defined for fseeko, which is incorrect in one, and now I\nhear more than one OS.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 23 Oct 2002 21:45:55 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "At 09:36 PM 23/10/2002 -0400, Bruce Momjian wrote:\n>We are going to need to either get fseeko workarounds for\n>those, or disable those features in a meaningful way.\n\n????? if we have not got a 64 bit seek function of any kind, then use a 32 \nbit seek - the features don't need to be disabled. AFAICT, this is a \nnon-issue: no 64 bit seek means no large files.\n\nI'm not sure we should even worry about it, but if you are genuinely \nconcerned that we have no 64 bit seek call, but we do have files > 4GB, \nthen If you really want to disable seek, just modify the code that sets \n'hasSeek' - don't screw around with every seek call. But only modify clear \nit if the file is > 4GB.\n\n\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 
75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n", "msg_date": "Thu, 24 Oct 2002 11:50:47 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "At 09:41 PM 23/10/2002 -0400, Bruce Momjian wrote:\n>If we get this, everything is fine. I have done that for BSD/OS today.\n>I may need to do the same for NetBSD/OpenBSD too.\n\nWhat did you do to achieve this?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n", "msg_date": "Thu, 24 Oct 2002 11:51:42 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "Philip Warner wrote:\n> At 09:41 PM 23/10/2002 -0400, Bruce Momjian wrote:\n> >If we get this, everything is fine. I have done that for BSD/OS today.\n> >I may need to do the same for NetBSD/OpenBSD too.\n> \n> What did you do to achieve this?\n\nSee src/port/fseeko.c in current CVS, with some configure.in glue.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 23 Oct 2002 21:52:55 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "At 09:45 PM 23/10/2002 -0400, Bruce Momjian wrote:\n>We have to write another function because fsetpos doesn't do SEEK_CUR so\n>you have to implement it with more complex code. It isn't a drop in\n>place thing.\n\nThe only code that uses SEEK_CUR is the code to check if seek is available \n- I am ver happy to change that to SEEK_SET - I can't even recall why I \nused SEEK_CUR. The code that does the real seeks uses SEEK_SET.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n", "msg_date": "Thu, 24 Oct 2002 11:55:17 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "At 11:55 AM 24/10/2002 +1000, Philip Warner wrote:\n\n>The only code that uses SEEK_CUR is the code to check if seek is available \n>- I am ver happy to change that to SEEK_SET - I can't even recall why I \n>used SEEK_CUR. 
The code that does the real seeks uses SEEK_SET.\n\nCome to think of it:\n\n ctx->hasSeek = (fseeko(AH->FH, 0, SEEK_CUR) == 0);\n\nshould be replaced by:\n\n#ifdef HAS_FSEEK[O]\n ctx->hasSeek = TRUE;\n#else\n ctx->hasSeek = FALSE;\n#endif\n\nSince we're now checking for it in configure, we should remove the checks \nfrom the pg_dump code.\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n", "msg_date": "Thu, 24 Oct 2002 12:02:14 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "Philip Warner wrote:\n> At 09:45 PM 23/10/2002 -0400, Bruce Momjian wrote:\n> >We have to write another function because fsetpos doesn't do SEEK_CUR so\n> >you have to implement it with more complex code. It isn't a drop in\n> >place thing.\n> \n> The only code that uses SEEK_CUR is the code to check if seek is available \n> - I am ver happy to change that to SEEK_SET - I can't even recall why I \n> used SEEK_CUR. The code that does the real seeks uses SEEK_SET.\n\nThere are other problems. fgetpos() expects a pointer to an fpos_t,\nwhile ftello just returns off_t, so you need a local variable in the\nfunction to pass to fgetpos() and they return that from the function.\n\nIt is much cleaner to just duplicate the entire API so you don't have\nany limitations or failure cases.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 23 Oct 2002 22:03:17 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "\nWell, that certainly changes the functionality of the code. I thought\nthat fseeko test was done so that things that couldn't be seeked on were\ndetected. Not sure what isn't seek-able, maybe named pipes. I thought\nit was testing that so I didn't touch that variable.\n\nThis was my original thought, that we have non-fseeko code in place. \nCan we just trigger the non-fseeko code on HAS_FSEEKO. The code would\nbe something like:\n\t\n\tif (sizeof(long) >= sizeof(off_t))\n\t\tctx->hasSeek = TRUE;\n\telse\n\t#ifdef HAVE_FSEEKO\n\t ctx->hasSeek = TRUE;\n\t#else\n\t ctx->hasSeek = FALSE;\n\t#endif\n\n---------------------------------------------------------------------------\n\nPhilip Warner wrote:\n> At 11:55 AM 24/10/2002 +1000, Philip Warner wrote:\n> \n> >The only code that uses SEEK_CUR is the code to check if seek is available \n> >- I am ver happy to change that to SEEK_SET - I can't even recall why I \n> >used SEEK_CUR. 
The code that does the real seeks uses SEEK_SET.\n> \n> Come to think of it:\n> \n> ctx->hasSeek = (fseeko(AH->FH, 0, SEEK_CUR) == 0);\n> \n> should be replaced by:\n> \n> #ifdef HAS_FSEEK[O]\n> ctx->hasSeek = TRUE;\n> #else\n> ctx->hasSeek = FALSE;\n> #endif\n> \n> Since we're now checking for it in configure, we should remove the checks \n> from the pg_dump code.\n> \n> \n> \n> \n> ----------------------------------------------------------------\n> Philip Warner | __---_____\n> Albatross Consulting Pty. Ltd. |----/ - \\\n> (A.B.N. 75 008 659 498) | /(@) ______---_\n> Tel: (+61) 0500 83 82 81 | _________ \\\n> Fax: (+61) 0500 83 82 82 | ___________ |\n> Http://www.rhyme.com.au | / \\|\n> | --________--\n> PGP key available upon request, | /\n> and from pgp5.ai.mit.edu:11371 |/\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 23 Oct 2002 22:08:15 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "At 10:08 PM 23/10/2002 -0400, Bruce Momjian wrote:\n>Well, that certainly changes the functionality of the code. I thought\n>that fseeko test was done so that things that couldn't be seeked on were\n>detected.\n\nYou are quite correct. It should read:\n\n #ifdef HAVE_FSEEKO\n ctx->hasSeek = fseeko(...,SEEK_SET);\n #else\n ctx->hasSeek = FALSE;\n #endif\n\npipes are the main case for which we are checking.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n", "msg_date": "Thu, 24 Oct 2002 12:14:35 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "At 10:03 PM 23/10/2002 -0400, Bruce Momjian wrote:\n>It is much cleaner to just duplicate the entire API so you don't have\n>any limitations or failure cases.\n\nWe may still end up using macros in pg_dump to cope with cases where off_t \n& fseeko are not defined - if there are any. I presume we would then just \nrevert to calling fseek/ftell etc.\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n", "msg_date": "Thu, 24 Oct 2002 12:36:20 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" 
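A rough sketch of the macro fallback just described, assuming nothing beyond the configure-supplied HAVE_FSEEKO (the pgdump_off_t and PGDUMP_* names are illustrative only, not anything in the pg_dump sources):

#include <stdio.h>
#include <sys/types.h>

/* Use the widest seek interface available; otherwise fall back to the
 * plain long-based fseek()/ftell(). */
#ifdef HAVE_FSEEKO
typedef off_t pgdump_off_t;
#define PGDUMP_FSEEK(fp, off, whence)	fseeko((fp), (off), (whence))
#define PGDUMP_FTELL(fp)				ftello(fp)
#else
typedef long pgdump_off_t;
#define PGDUMP_FSEEK(fp, off, whence)	fseek((fp), (off), (whence))
#define PGDUMP_FTELL(fp)				ftell(fp)
#endif
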
}, { "msg_contents": "Philip Warner wrote:\n> At 10:03 PM 23/10/2002 -0400, Bruce Momjian wrote:\n> >It is much cleaner to just duplicate the entire API so you don't have\n> >any limitations or failure cases.\n> \n> We may still end up using macros in pg_dump to cope with cases where off_t \n> & fseeko are not defined - if there are any. I presume we would then just \n> revert to calling fseek/ftell etc.\n\nWell, we have fseeko falling back to fseek already, so that is working\nfine. I don't think we will find any OS's without off_t. We just need\na little smarts. Let me see if I can work on it now.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 23 Oct 2002 22:37:49 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "\n> OK, does pre-1.6 NetBSD have fgetpos/fsetpos that is off_t/quad?\n\nYes:\n\n int\n fgetpos(FILE *stream, fpos_t *pos);\n\n int\n fsetpos(FILE *stream, const fpos_t *pos);\n\nPer comments in <stdio.h> fpos_t is the same format as off_t, and\noff_t and fpos_t have been 64 bit since 1994.\n\n http://cvsweb.netbsd.org/bsdweb.cgi/basesrc/include/stdio.h\n\nRegards,\n\nGiles\n\n\n\n\n", "msg_date": "Thu, 24 Oct 2002 12:54:50 +1000", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem? " }, { "msg_contents": "\nLooks like I have some more work to do. Thanks.\n\n---------------------------------------------------------------------------\n\nGiles Lean wrote:\n> \n> > OK, does pre-1.6 NetBSD have fgetpos/fsetpos that is off_t/quad?\n> \n> Yes:\n> \n> int\n> fgetpos(FILE *stream, fpos_t *pos);\n> \n> int\n> fsetpos(FILE *stream, const fpos_t *pos);\n> \n> Per comments in <stdio.h> fpos_t is the same format as off_t, and\n> off_t and fpos_t have been 64 bit since 1994.\n> \n> http://cvsweb.netbsd.org/bsdweb.cgi/basesrc/include/stdio.h\n> \n> Regards,\n> \n> Giles\n> \n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 23 Oct 2002 22:56:01 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "\nOK, NetBSD added. \n\nAny other OS's need this? Is it safe for me to code something that\nassumes fpos_t and off_t are identical? I can't think of a good way to\ntest if two data types are identical. 
I don't think sizeof is enough.\n\n---------------------------------------------------------------------------\n\nGiles Lean wrote:\n> \n> > OK, does pre-1.6 NetBSD have fgetpos/fsetpos that is off_t/quad?\n> \n> Yes:\n> \n> int\n> fgetpos(FILE *stream, fpos_t *pos);\n> \n> int\n> fsetpos(FILE *stream, const fpos_t *pos);\n> \n> Per comments in <stdio.h> fpos_t is the same format as off_t, and\n> off_t and fpos_t have been 64 bit since 1994.\n> \n> http://cvsweb.netbsd.org/bsdweb.cgi/basesrc/include/stdio.h\n> \n> Regards,\n> \n> Giles\n> \n> \n> \n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 23 Oct 2002 23:11:44 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "Bruce Momjian writes:\n\n> OK, NetBSD added.\n>\n> Any other OS's need this? Is it safe for me to code something that\n> assumes fpos_t and off_t are identical? I can't think of a good way to\n> test if two data types are identical. I don't think sizeof is enough.\n\nNo, you can't assume that fpos_t and off_t are identical.\n\nBut you can simulate a long fseeko() by calling fseek() multiple times, so\nit should be possible to write a replacement that works on all systems.\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Fri, 25 Oct 2002 00:12:56 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > OK, NetBSD added.\n> >\n> > Any other OS's need this? Is it safe for me to code something that\n> > assumes fpos_t and off_t are identical? I can't think of a good way to\n> > test if two data types are identical. I don't think sizeof is enough.\n> \n> No, you can't assume that fpos_t and off_t are identical.\n\nI was wondering --- if fpos_t and off_t are identical sizeof, and fpos_t\ncan do shift << or >>, that means fpos_t is also integral like off_t.\nCan I then assume they are the same?\n\n> But you can simulate a long fseeko() by calling fseek() multiple times, so\n> it should be possible to write a replacement that works on all systems.\n\nYes, but I can't simulate ftello, so I then can't do SEEK_CUR. and if I\ncan't duplicate the entire API, I don't want to try.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 24 Oct 2002 19:43:00 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "Philip Warner wrote:\n> At 10:08 PM 23/10/2002 -0400, Bruce Momjian wrote:\n> >Well, that certainly changes the functionality of the code. I thought\n> >that fseeko test was done so that things that couldn't be seeked on were\n> >detected.\n> \n> You are quite correct. It should read:\n> \n> #ifdef HAVE_FSEEKO\n> ctx->hasSeek = fseeko(...,SEEK_SET);\n> #else\n> ctx->hasSeek = FALSE;\n> #endif\n> \n> pipes are the main case for which we are checking.\n\nOK, I have applied the following patch to set hasSeek only if\nfseek/fseeko is reliable. 
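A sketch of the repeated-fseek() idea mentioned above (illustrative only; as noted it can emulate absolute seeks, but not ftello(), so relative positioning stays unavailable):

#include <stdio.h>
#include <limits.h>
#include <sys/types.h>

/*
 * Emulate fseeko(fp, offset, SEEK_SET) using only fseek(), stepping
 * forward in chunks that fit in a long.
 */
static int
seek_set_big(FILE *fp, off_t offset)
{
	if (offset < 0)
		return -1;
	if (fseek(fp, 0L, SEEK_SET) != 0)
		return -1;
	while (offset > (off_t) LONG_MAX)
	{
		if (fseek(fp, LONG_MAX, SEEK_CUR) != 0)
			return -1;
		offset -= (off_t) LONG_MAX;
	}
	return fseek(fp, (long) offset, SEEK_CUR);
}
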
This takes care of the random failure case\nfor large files. Now I need to see if I can get the custom fseeko\nworking for more platforms.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\nIndex: src/bin/pg_dump/common.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/bin/pg_dump/common.c,v\nretrieving revision 1.71\ndiff -c -c -r1.71 common.c\n*** src/bin/pg_dump/common.c\t9 Oct 2002 16:20:25 -0000\t1.71\n--- src/bin/pg_dump/common.c\t25 Oct 2002 01:30:51 -0000\n***************\n*** 290,296 ****\n \t\t * attr with the same name, then only dump it if:\n \t\t *\n \t\t * - it is NOT NULL and zero parents are NOT NULL\n! \t\t * OR \n \t\t * - it has a default value AND the default value does not match\n \t\t * all parent default values, or no parents specify a default.\n \t\t *\n--- 290,296 ----\n \t\t * attr with the same name, then only dump it if:\n \t\t *\n \t\t * - it is NOT NULL and zero parents are NOT NULL\n! \t\t * OR\n \t\t * - it has a default value AND the default value does not match\n \t\t * all parent default values, or no parents specify a default.\n \t\t *\nIndex: src/bin/pg_dump/pg_backup_archiver.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/bin/pg_dump/pg_backup_archiver.c,v\nretrieving revision 1.59\ndiff -c -c -r1.59 pg_backup_archiver.c\n*** src/bin/pg_dump/pg_backup_archiver.c\t22 Oct 2002 19:15:23 -0000\t1.59\n--- src/bin/pg_dump/pg_backup_archiver.c\t25 Oct 2002 01:30:57 -0000\n***************\n*** 2338,2343 ****\n--- 2338,2369 ----\n }\n \n \n+ /*\n+ * checkSeek\n+ *\t check to see if fseek can be performed.\n+ */\n+ \n+ bool\n+ checkSeek(FILE *fp)\n+ {\n+ \n+ \tif (fseek(fp, 0, SEEK_CUR) != 0)\n+ \t\treturn false;\n+ \telse if (sizeof(off_t) > sizeof(long))\n+ \t/*\n+ \t *\tAt this point, off_t is too large for long, so we return\n+ \t *\tbased on whether an off_t version of fseek is available.\n+ \t */\n+ #ifdef HAVE_FSEEKO\n+ \t\treturn true;\n+ #else\n+ \t\treturn false;\n+ #endif\n+ \telse\n+ \t\treturn true;\n+ }\n+ \n+ \n static void\n _SortToc(ArchiveHandle *AH, TocSortCompareFn fn)\n {\nIndex: src/bin/pg_dump/pg_backup_archiver.h\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/bin/pg_dump/pg_backup_archiver.h,v\nretrieving revision 1.48\ndiff -c -c -r1.48 pg_backup_archiver.h\n*** src/bin/pg_dump/pg_backup_archiver.h\t22 Oct 2002 19:15:23 -0000\t1.48\n--- src/bin/pg_dump/pg_backup_archiver.h\t25 Oct 2002 01:30:58 -0000\n***************\n*** 27,32 ****\n--- 27,33 ----\n \n #include \"postgres_fe.h\"\n \n+ #include <stdio.h>\n #include <time.h>\n #include <errno.h>\n \n***************\n*** 284,289 ****\n--- 285,291 ----\n extern void WriteDataChunks(ArchiveHandle *AH);\n \n extern int\tTocIDRequired(ArchiveHandle *AH, int id, RestoreOptions *ropt);\n+ extern bool checkSeek(FILE *fp);\n \n /*\n * Mandatory routines for each supported format\nIndex: src/bin/pg_dump/pg_backup_custom.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/bin/pg_dump/pg_backup_custom.c,v\nretrieving revision 1.22\ndiff -c -c -r1.22 pg_backup_custom.c\n*** src/bin/pg_dump/pg_backup_custom.c\t22 Oct 2002 19:15:23 -0000\t1.22\n--- src/bin/pg_dump/pg_backup_custom.c\t25 Oct 2002 01:31:01 
-0000\n***************\n*** 179,185 ****\n \t\tif (!AH->FH)\n \t\t\tdie_horribly(AH, modulename, \"could not open archive file %s: %s\\n\", AH->fSpec, strerror(errno));\n \n! \t\tctx->hasSeek = (fseeko(AH->FH, 0, SEEK_CUR) == 0);\n \t}\n \telse\n \t{\n--- 179,185 ----\n \t\tif (!AH->FH)\n \t\t\tdie_horribly(AH, modulename, \"could not open archive file %s: %s\\n\", AH->fSpec, strerror(errno));\n \n! \t\tctx->hasSeek = checkSeek(AH->FH);\n \t}\n \telse\n \t{\n***************\n*** 190,196 ****\n \t\tif (!AH->FH)\n \t\t\tdie_horribly(AH, modulename, \"could not open archive file %s: %s\\n\", AH->fSpec, strerror(errno));\n \n! \t\tctx->hasSeek = (fseeko(AH->FH, 0, SEEK_CUR) == 0);\n \n \t\tReadHead(AH);\n \t\tReadToc(AH);\n--- 190,196 ----\n \t\tif (!AH->FH)\n \t\t\tdie_horribly(AH, modulename, \"could not open archive file %s: %s\\n\", AH->fSpec, strerror(errno));\n \n! \t\tctx->hasSeek = checkSeek(AH->FH);\n \n \t\tReadHead(AH);\n \t\tReadToc(AH);\nIndex: src/bin/pg_dump/pg_backup_files.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/bin/pg_dump/pg_backup_files.c,v\nretrieving revision 1.20\ndiff -c -c -r1.20 pg_backup_files.c\n*** src/bin/pg_dump/pg_backup_files.c\t22 Oct 2002 19:15:23 -0000\t1.20\n--- src/bin/pg_dump/pg_backup_files.c\t25 Oct 2002 01:31:01 -0000\n***************\n*** 129,135 ****\n \t\tif (AH->FH == NULL)\n \t\t\tdie_horribly(NULL, modulename, \"could not open output file: %s\\n\", strerror(errno));\n \n! \t\tctx->hasSeek = (fseeko(AH->FH, 0, SEEK_CUR) == 0);\n \n \t\tif (AH->compression < 0 || AH->compression > 9)\n \t\t\tAH->compression = Z_DEFAULT_COMPRESSION;\n--- 129,135 ----\n \t\tif (AH->FH == NULL)\n \t\t\tdie_horribly(NULL, modulename, \"could not open output file: %s\\n\", strerror(errno));\n \n! \t\tctx->hasSeek = checkSeek(AH->FH);\n \n \t\tif (AH->compression < 0 || AH->compression > 9)\n \t\t\tAH->compression = Z_DEFAULT_COMPRESSION;\n***************\n*** 147,153 ****\n \t\tif (AH->FH == NULL)\n \t\t\tdie_horribly(NULL, modulename, \"could not open input file: %s\\n\", strerror(errno));\n \n! \t\tctx->hasSeek = (fseeko(AH->FH, 0, SEEK_CUR) == 0);\n \n \t\tReadHead(AH);\n \t\tReadToc(AH);\n--- 147,153 ----\n \t\tif (AH->FH == NULL)\n \t\t\tdie_horribly(NULL, modulename, \"could not open input file: %s\\n\", strerror(errno));\n \n! \t\tctx->hasSeek = checkSeek(AH->FH);\n \n \t\tReadHead(AH);\n \t\tReadToc(AH);\nIndex: src/bin/pg_dump/pg_backup_tar.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/bin/pg_dump/pg_backup_tar.c,v\nretrieving revision 1.31\ndiff -c -c -r1.31 pg_backup_tar.c\n*** src/bin/pg_dump/pg_backup_tar.c\t22 Oct 2002 19:15:23 -0000\t1.31\n--- src/bin/pg_dump/pg_backup_tar.c\t25 Oct 2002 01:31:04 -0000\n***************\n*** 190,196 ****\n \t\t */\n \t\t/* setvbuf(ctx->tarFH, NULL, _IONBF, 0); */\n \n! \t\tctx->hasSeek = (fseeko(ctx->tarFH, 0, SEEK_CUR) == 0);\n \n \t\tif (AH->compression < 0 || AH->compression > 9)\n \t\t\tAH->compression = Z_DEFAULT_COMPRESSION;\n--- 190,196 ----\n \t\t */\n \t\t/* setvbuf(ctx->tarFH, NULL, _IONBF, 0); */\n \n! \t\tctx->hasSeek = checkSeek(ctx->tarFH);\n \n \t\tif (AH->compression < 0 || AH->compression > 9)\n \t\t\tAH->compression = Z_DEFAULT_COMPRESSION;\n***************\n*** 227,233 ****\n \n \t\tctx->tarFHpos = 0;\n \n! 
\t\tctx->hasSeek = (fseeko(ctx->tarFH, 0, SEEK_CUR) == 0);\n \n \t\t/*\n \t\t * Forcibly unmark the header as read since we use the lookahead\n--- 227,233 ----\n \n \t\tctx->tarFHpos = 0;\n \n! \t\tctx->hasSeek = checkSeek(ctx->tarFH);\n \n \t\t/*\n \t\t * Forcibly unmark the header as read since we use the lookahead", "msg_date": "Thu, 24 Oct 2002 21:32:12 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "\nThe patch will not work. Please reread my quoted email.\n\nAt 09:32 PM 24/10/2002 -0400, Bruce Momjian wrote:\n>Philip Warner wrote:\n> >\n> > You are quite correct. It should read:\n> >\n> > #ifdef HAVE_FSEEKO\n> > ctx->hasSeek = fseeko(...,SEEK_SET);\n> > #else\n> > ctx->hasSeek = FALSE;\n> > #endif\n> >\n> > pipes are the main case for which we are checking.\n>\n>OK, I have applied the following patch to set hasSeek only if\n>fseek/fseeko is reliable.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n", "msg_date": "Fri, 25 Oct 2002 11:51:48 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "\nYou are going to have to be more specific than that.\n\n---------------------------------------------------------------------------\n\nPhilip Warner wrote:\n> \n> The patch will not work. Please reread my quoted email.\n> \n> At 09:32 PM 24/10/2002 -0400, Bruce Momjian wrote:\n> >Philip Warner wrote:\n> > >\n> > > You are quite correct. It should read:\n> > >\n> > > #ifdef HAVE_FSEEKO\n> > > ctx->hasSeek = fseeko(...,SEEK_SET);\n> > > #else\n> > > ctx->hasSeek = FALSE;\n> > > #endif\n> > >\n> > > pipes are the main case for which we are checking.\n> >\n> >OK, I have applied the following patch to set hasSeek only if\n> >fseek/fseeko is reliable.\n> \n> \n> \n> ----------------------------------------------------------------\n> Philip Warner | __---_____\n> Albatross Consulting Pty. Ltd. |----/ - \\\n> (A.B.N. 75 008 659 498) | /(@) ______---_\n> Tel: (+61) 0500 83 82 81 | _________ \\\n> Fax: (+61) 0500 83 82 82 | ___________ |\n> Http://www.rhyme.com.au | / \\|\n> | --________--\n> PGP key available upon request, | /\n> and from pgp5.ai.mit.edu:11371 |/\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 24 Oct 2002 21:56:04 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "At 09:56 PM 24/10/2002 -0400, Bruce Momjian wrote:\n> > > > You are quite correct. 
It should read:\n> > > >\n> > > > #ifdef HAVE_FSEEKO\n> > > > ctx->hasSeek = fseeko(...,SEEK_SET);\n\n ^^^^^^^^^^^^^^^^^^^^^^\n\n\n> > > > #else\n> > > > ctx->hasSeek = FALSE;\n> > > > #endif\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n", "msg_date": "Fri, 25 Oct 2002 12:09:25 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "\nOK, finally figured it out. I had used fseek instead of fseeko.\n\n---------------------------------------------------------------------------\n\nPhilip Warner wrote:\n> At 09:56 PM 24/10/2002 -0400, Bruce Momjian wrote:\n> > > > > You are quite correct. It should read:\n> > > > >\n> > > > > #ifdef HAVE_FSEEKO\n> > > > > ctx->hasSeek = fseeko(...,SEEK_SET);\n> \n> ^^^^^^^^^^^^^^^^^^^^^^\n> \n> \n> > > > > #else\n> > > > > ctx->hasSeek = FALSE;\n> > > > > #endif\n> \n> ----------------------------------------------------------------\n> Philip Warner | __---_____\n> Albatross Consulting Pty. Ltd. |----/ - \\\n> (A.B.N. 75 008 659 498) | /(@) ______---_\n> Tel: (+61) 0500 83 82 81 | _________ \\\n> Fax: (+61) 0500 83 82 82 | ___________ |\n> Http://www.rhyme.com.au | / \\|\n> | --________--\n> PGP key available upon request, | /\n> and from pgp5.ai.mit.edu:11371 |/\n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 24 Oct 2002 23:47:20 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "Hi All,\n\nThis relates to the recent discussion about floating point output format.\n\nThe discussion was at a point where one parameter would be added to \nspecify the number of extra digits used in fp output formatting.\nThe parameter would have a 0 default value, a maximum of 2 and the \nminimum remained open for discussion.\n\nIn a previous message I proposed that for double precision numbers a \nminimum value of -13 would be usefful. For single precision numbers this \ncorresponds to a value of -4.\n\nI downloaded the PG sources and added two parameters (as PGC_USERSET):\n\nint extra_float4_digits, default 0, min -4, max 2\nint extra_float8_digits, defualt 0, min -13, max 2\n\nCompiled and tested for these functionalities. It is ok.\n\nThe afected files are:\n\nsrc/backend/utils/adt/float.c\nsrc/backend/utils/misc/guc.c\nsrc/bin/psql/tab-complete.c\nsrc/backend/utils/misc/postgresql.conf.sample\n\nI used sources from Debian source package, postgresql_7.2.1-2woody2.\n\nDiff's produced with diff -u are enclosed as attachments.\nCan you comment on this (particularly the min values) ?\nAlso, if we concluded that there is a need of 2 more digits, should'nt \nthis be the default ?\n\nBest regards,\nPedro M. 
Ferreira\n-- \n----------------------------------------------------------------------\nPedro Miguel Frazao Fernandes Ferreira\nUniversidade do Algarve\nFaculdade de Ciencias e Tecnologia\nCampus de Gambelas\n8000-117 Faro\nPortugal\nTel./Fax: (+351) 289 800950 / 289 819403\nhttp://w3.ualg.pt/~pfrazao", "msg_date": "Mon, 04 Nov 2002 13:50:21 +0000", "msg_from": "\"Pedro M. Ferreira\" <[email protected]>", "msg_from_op": false, "msg_subject": "Float output formatting options" }, { "msg_contents": "\"Pedro M. Ferreira\" <[email protected]> writes:\n> int extra_float4_digits, default 0, min -4, max 2\n> int extra_float8_digits, defualt 0, min -13, max 2\n\nI think a single setting extra_float_digits would be sufficient.\n\n> Also, if we concluded that there is a need of 2 more digits, should'nt \n> this be the default ?\n\nNo. pg_dump would want to bump it up on-the-fly.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 04 Nov 2002 09:46:43 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Float output formatting options " }, { "msg_contents": "Tom Lane wrote:\n> \"Pedro M. Ferreira\" <[email protected]> writes:\n> \n>>int extra_float4_digits, default 0, min -4, max 2\n>>int extra_float8_digits, defualt 0, min -13, max 2\n> \n> \n> I think a single setting extra_float_digits would be sufficient.\n\nOk. Assuming,\n\nint extra_float_digits, default 0, min -13, max 2\n\nIf extra_float_digits==-13 and we are outputing a float4 this results in \na negative value for FLT_DIG+extra_float_digits. I dont know if \nsprintf's behaviour is the same across different libraries for this \nsituation.\n\nShould I include the following to handle this case ?\n\nif(extra_float_digits<-4)\n sprintf(ascii, \"%.*g\", FLT_DIG-4, num);\nelse\n sprintf(ascii, \"%.*g\", FLT_DIG+extra_float_digits, num);\n\n> \n> \n>>Also, if we concluded that there is a need of 2 more digits, should'nt \n>>this be the default ?\n> \n> \n> No. pg_dump would want to bump it up on-the-fly.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n> \n\n\n-- \n----------------------------------------------------------------------\nPedro Miguel Frazao Fernandes Ferreira\nUniversidade do Algarve\nFaculdade de Ciencias e Tecnologia\nCampus de Gambelas\n8000-117 Faro\nPortugal\nTel./Fax: (+351) 289 800950 / 289 819403\nhttp://w3.ualg.pt/~pfrazao\n\n", "msg_date": "Mon, 04 Nov 2002 15:27:10 +0000", "msg_from": "\"Pedro M. Ferreira\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Float output formatting options" }, { "msg_contents": "\"Pedro M. Ferreira\" <[email protected]> writes:\n> Tom Lane wrote:\n>> I think a single setting extra_float_digits would be sufficient.\n\n> Ok. 
Assuming,\n\n> int extra_float_digits, default 0, min -13, max 2\n\n> If extra_float_digits==-13 and we are outputing a float4 this results in \n> a negative value for FLT_DIG+extra_float_digits.\n\nYou would want to clamp the values passed to %g to not less than 1.\nI'd favor code like\n\tint\tndig = FLT_DIG + extra_float_digits;\n\tif (ndig < 1)\n\t\tndig = 1;\n\tsprintf(ascii, \"%.*g\", ndig, num);\n\nProbably best to do it this way with float8 too; otherwise we're\nessentially wiring in the assumption that we know what DBL_DIG is.\nWhich is exactly what we're trying to avoid doing.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 04 Nov 2002 10:33:09 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Float output formatting options " }, { "msg_contents": "Tom Lane wrote:\n> \"Pedro M. Ferreira\" <[email protected]> writes:\n >>\n>>If extra_float_digits==-13 and we are outputing a float4 this results in \n>>a negative value for FLT_DIG+extra_float_digits.\n> \n> You would want to clamp the values passed to %g to not less than 1.\n> I'd favor code like\n> \tint\tndig = FLT_DIG + extra_float_digits;\n> \tif (ndig < 1)\n> \t\tndig = 1;\n> \tsprintf(ascii, \"%.*g\", ndig, num);\n> \n> Probably best to do it this way with float8 too; otherwise we're\n> essentially wiring in the assumption that we know what DBL_DIG is.\n> Which is exactly what we're trying to avoid doing.\n\nGood.\nCorrected this, compiled and tested it. Works fine.\n\nI am attaching the diff's made with diff -u. Sources were from Debian \nsource package Version 7.2.1-2woody2.\n\nBest regards,\nPedro\n\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \n----------------------------------------------------------------------\nPedro Miguel Frazao Fernandes Ferreira\nUniversidade do Algarve\nFaculdade de Ciencias e Tecnologia\nCampus de Gambelas\n8000-117 Faro\nPortugal\nTel./Fax: (+351) 289 800950 / 289 819403\nhttp://w3.ualg.pt/~pfrazao", "msg_date": "Mon, 04 Nov 2002 16:51:10 +0000", "msg_from": "\"Pedro M. Ferreira\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Float output formatting options" }, { "msg_contents": "\"Pedro M. Ferreira\" <[email protected]> writes:\n> I am attaching the diff's made with diff -u. 
Sources were from Debian \n> source package Version 7.2.1-2woody2.\n\nLooks good, will keep to apply after we branch for 7.4 development.\n\nBTW, did you check to see if this affects the geometric types or not?\nI am not sure that they go through float8out; they may need similar\nadjustments in their output routines.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 04 Nov 2002 11:56:41 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Float output formatting options " }, { "msg_contents": "Tom Lane wrote:\n\n> Looks good, will keep to apply after we branch for 7.4 development.\n> \n> BTW, did you check to see if this affects the geometric types or not?\n> I am not sure that they go through float8out; they may need similar\n> adjustments in their output routines.\n\nOnly checked arrays.\nI will check this and get you posted about it ASAP.\n\nBest regards,\nPedro\n\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \n----------------------------------------------------------------------\nPedro Miguel Frazao Fernandes Ferreira\nUniversidade do Algarve\nFaculdade de Ciencias e Tecnologia\nCampus de Gambelas\n8000-117 Faro\nPortugal\nTel./Fax: (+351) 289 800950 / 289 819403\nhttp://w3.ualg.pt/~pfrazao\n\n", "msg_date": "Mon, 04 Nov 2002 17:02:46 +0000", "msg_from": "\"Pedro M. Ferreira\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Float output formatting options" }, { "msg_contents": "Tom Lane wrote:\n\n> BTW, did you check to see if this affects the geometric types or not?\n> I am not sure that they go through float8out; they may need similar\n> adjustments in their output routines.\n\nIn fact they need adjustments.\n\nThe *_out routines (in src/backend/utils/adt/geo_ops.c) for the \ngeometric types rely on two functions to output data:\n\nstatic int pair_encode(float8 x, float8 y, char *str);\nstatic int single_encode(float8 x, char *str);\n\nThese functions produce output with (for pair_encode):\n\nsprintf(str, \"%.*g,%.*g\", digits8, x, digits8, y);\n\ndigits8 is defined as ,\n\n#define P_MAXDIG DBL_DIG\nstatic int digits8 = P_MAXDIG;\n\nI think it would be done the same way as for float4_out and float8_out:\n\nextern int extra_float_digits;\n\n\nint \nndig = digits8 + extra_float_digits;\n\nif (ndig < 1)\n\tndig = 1;\n\nsprintf(str, \"%.*g,%.*g\", ndig, x, ndig, y);\n\nThere a bunch of other places where output is produced. They are all \nwithin #ifdef GEODEBUG / #enfif blocks. Should these be corrected the \nsame way ?\n\nRegards,\nPedro\n\n", "msg_date": "Mon, 04 Nov 2002 18:10:12 +0000", "msg_from": "\"Pedro M. Ferreira\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Float output formatting options" }, { "msg_contents": "\"Pedro M. Ferreira\" <[email protected]> writes:\n> These functions produce output with (for pair_encode):\n\n> sprintf(str, \"%.*g,%.*g\", digits8, x, digits8, y);\n\n> digits8 is defined as ,\n\n> #define P_MAXDIG DBL_DIG\n> static int digits8 = P_MAXDIG;\n\n> I think it would be done the same way as for float4_out and float8_out:\n\nYeah. In fact I'd be inclined to remove the static variable and make\nthe code match float8out exactly (do \"DBL_DIG + extra_float_digits\").\n\n> There a bunch of other places where output is produced. They are all \n> within #ifdef GEODEBUG / #enfif blocks. Should these be corrected the \n> same way ?\n\nUp to you. 
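A minimal standalone sketch of the clamping technique discussed above. It assumes only the names quoted in the thread (extra_float_digits, DBL_DIG, the "%.*g" output style); the actual float.c and geo_ops.c code differs in detail, and the GUC variable is replaced here by an ordinary int so the example compiles on its own.

#include <stdio.h>
#include <float.h>

/* Stand-in for the extra_float_digits GUC (proposed range -13 .. 2). */
static int	extra_float_digits = 0;

/*
 * Clamp the precision handed to %g so that a strongly negative setting
 * cannot push it below 1 (important for float4, where FLT_DIG is only 6).
 */
static void
float8_to_text(char *ascii, double num)
{
	int			ndig = DBL_DIG + extra_float_digits;

	if (ndig < 1)
		ndig = 1;
	sprintf(ascii, "%.*g", ndig, num);
}

/*
 * pair_encode() in geo_ops.c would apply the same clamp to both
 * coordinates:  sprintf(str, "%.*g,%.*g", ndig, x, ndig, y);
 */

int
main(void)
{
	char		buf[64];

	extra_float_digits = 2;		/* what a dump would request for exactness */
	float8_to_text(buf, 0.1);
	printf("%s\n", buf);		/* 0.1 rendered with DBL_DIG + 2 digits */
	return 0;
}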
Personally I'd just leave them alone...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 04 Nov 2002 13:14:48 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Float output formatting options " }, { "msg_contents": "Tom Lane wrote:\n> \n> Yeah. In fact I'd be inclined to remove the static variable and make\n> the code match float8out exactly (do \"DBL_DIG + extra_float_digits\").\n\nP_MAXDIG is only used in the lines below:\n\n#define P_MAXDIG DBL_DIG\n#define P_MAXLEN (2*(P_MAXDIG+7)+1)\nstatic int digits8 = P_MAXDIG;\n\nIs it ok to remove #define P_MAXDIG DBL_DIG,\nchange P_MAXLEN to 2*(DBL_DIG+7)+1) and\nremove the line 'static int digits8 = P_MAXDIG;' ?\n\nWould then change the two geo output functions and replace digits8 by \nDBL_DIG in the #ifdef GEODEBUG / #enfif output stuff.\n\n>>There a bunch of other places where output is produced. They are all \n>>within #ifdef GEODEBUG / #enfif blocks. Should these be corrected the \n>>same way ?\n\n", "msg_date": "Mon, 04 Nov 2002 18:28:28 +0000", "msg_from": "\"Pedro M. Ferreira\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Float output formatting options" }, { "msg_contents": "\"Pedro M. Ferreira\" <[email protected]> writes:\n> Is it ok to remove #define P_MAXDIG DBL_DIG,\n> change P_MAXLEN to 2*(DBL_DIG+7)+1) and\n> remove the line 'static int digits8 = P_MAXDIG;' ?\n\nPerhaps P_MAXLEN now needs to be (2*(DBL_DIG+2+7)+1), considering\nthat we'll allow extra_float_digits to be up to 2. What's it used for?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 04 Nov 2002 13:38:54 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Float output formatting options " }, { "msg_contents": "Tom Lane wrote:\n> \"Pedro M. Ferreira\" <[email protected]> writes:\n> \n>>Is it ok to remove #define P_MAXDIG DBL_DIG,\n>>change P_MAXLEN to 2*(DBL_DIG+7)+1) and\n>>remove the line 'static int digits8 = P_MAXDIG;' ?\n> \n> Perhaps P_MAXLEN now needs to be (2*(DBL_DIG+2+7)+1), considering\n> that we'll allow extra_float_digits to be up to 2. What's it used for?\n\nYes. I guess so, because it is used in what I think is a memory \nallocation function. P_MAXLEN is only used twice:\n\n* 1st use in path_encode() (allmost all the geo *_out functions use \npath_encode):\n\nint size = npts * (P_MAXLEN + 3) + 2;\n\n/* Check for integer overflow */\nif ((size - 2) / npts != (P_MAXLEN + 3))\n elog(ERROR, \"Too many points requested\");\n\n\n* 2nd use in circle_out(PG_FUNCTION_ARGS):\n\nresult = palloc(3 * (P_MAXLEN + 1) + 3);\n\nI will do the changes tomorrow and send in the appropriate diff's.\n\nRegards,\nPedro\n\n", "msg_date": "Mon, 04 Nov 2002 18:51:42 +0000", "msg_from": "\"Pedro M. Ferreira\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Float output formatting options" }, { "msg_contents": "> > Perhaps P_MAXLEN now needs to be (2*(DBL_DIG+2+7)+1), considering\n> > that we'll allow extra_float_digits to be up to 2. What's it used for?\n> \n> Yes. I guess so, because it is used in what I think is a memory \n> allocation function. 
P_MAXLEN is only used twice:\n\nCurse my slowness, but what's the actual problem being fixed here?\n\nChris\n\n", "msg_date": "Tue, 5 Nov 2002 09:43:26 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Float output formatting options" }, { "msg_contents": "\"Christopher Kings-Lynne\" <[email protected]> writes:\n> Curse my slowness, but what's the actual problem being fixed here?\n\nTwo things:\n\n* allow pg_dump to accurately dump and restore float quantities (setting\nfloat_extra_digits to 2 during the dump will accomplish this, at least\non systems with reasonable float I/O routines).\n\n* allow us to get out from under the geometry regression test's platform\ndependency problems (setting float_extra_digits to -2 or so during the\ntest should make most or all of the variations go away).\n\nThis proposal is the first one I've seen that solves both these problems\nwithout introducing any compatibility issues of its own.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 04 Nov 2002 23:16:40 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Float output formatting options " }, { "msg_contents": "Pedro M. Ferreira wrote:\n> Tom Lane wrote:\n>> Perhaps P_MAXLEN now needs to be (2*(DBL_DIG+2+7)+1), considering\n>> that we'll allow extra_float_digits to be up to 2. What's it used for?\n> \n> Yes. I guess so, because it is used in what I think is a memory \n> allocation function. P_MAXLEN is only used twice:\n<...>\n> \n> I will do the changes tomorrow and send in the appropriate diff's.\n\nOk. Its done now.\nOnly one file changed: src/backend/utils/adt/geo_ops.c\n\nAll the geometric types should now account for float_extra_digits on output.\n\nA diff -u is attached.\n\nBest reagards,\nPedro\n\n> Regards,\n> Pedro\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n> \n\n\n-- \n----------------------------------------------------------------------\nPedro Miguel Frazao Fernandes Ferreira\nUniversidade do Algarve\nFaculdade de Ciencias e Tecnologia\nCampus de Gambelas\n8000-117 Faro\nPortugal\nTel./Fax: (+351) 289 800950 / 289 819403\nhttp://w3.ualg.pt/~pfrazao", "msg_date": "Tue, 05 Nov 2002 11:11:25 +0000", "msg_from": "\"Pedro M. Ferreira\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Float output formatting options" }, { "msg_contents": "I am including a set of 4 small patches that enable PostgreSQL 7.3b3 to build\nsuccessfully on OpenUnix 8.0. These same patches should also work for UnixWare\n7.x. I will confirm that tomorrow (Nov 7, 2002).\n\nHere is an explanation of the patches:\n\n1. An update of the FAQ_SCO file.\n\n2. This patch removes a static declaration of a in-line function in\n src/backend/utils/sort/tuplesort.c \n\n3. This patch to src/makefiles/Makefile.unixware, together with the patch to\n src/Makefile.global.in allows any addition library search directories (added\n with the configure --with-libraries option) to be added to the rpath option \n sent to the linker. The use of a different variable to pass the addition \n search paths was necessary to avoid a circular reference to LDFLAGS.\n\n4. This patch creates the variable (trpath) used by the patch to \n Makefile.unixware. 
This patch would also be for other platforms that would \n have to add the additional library search paths to the rpath linker option.\n See Makefile.unixware for an example of how to do this.\n\nAfter applying these patches, PostgreSQL successfully compiled on OpenUnix 8 \nand it passed all the regression tests.\n\n\n\n____ | Billy G. Allie | Domain....: [email protected]\n| /| | 7436 Hartwell | MSN.......: [email protected]\n|-/-|----- | Dearborn, MI 48126|\n|/ |LLIE | (313) 582-1540 |", "msg_date": "Wed, 06 Nov 2002 22:57:26 -0500", "msg_from": "\"Billy G. Allie\" <[email protected]>", "msg_from_op": false, "msg_subject": "PostgreSQL supported platform report and a patch." }, { "msg_contents": "\nI am fine with this because it only touches unixware-specific stuff,\nexcept the change to Tom's inline function:\n\n [static] inline Datum\n myFunctionCall2(FmgrInfo *flinfo, Datum arg1, Datum arg2)\n\nTom will have to comment on that.\n\n---------------------------------------------------------------------------\n\nBilly G. Allie wrote:\n-- Start of PGP signed section.\n> I am including a set of 4 small patches that enable PostgreSQL 7.3b3 to build\n> successfully on OpenUnix 8.0. These same patches should also work for UnixWare\n> 7.x. I will confirm that tomorrow (Nov 7, 2002).\n> \n> Here is an explanation of the patches:\n> \n> 1. An update of the FAQ_SCO file.\n> \n> 2. This patch removes a static declaration of a in-line function in\n> src/backend/utils/sort/tuplesort.c \n> \n> 3. This patch to src/makefiles/Makefile.unixware, together with the patch to\n> src/Makefile.global.in allows any addition library search directories (added\n> with the configure --with-libraries option) to be added to the rpath option \n> sent to the linker. The use of a different variable to pass the addition \n> search paths was necessary to avoid a circular reference to LDFLAGS.\n> \n> 4. This patch creates the variable (trpath) used by the patch to \n> Makefile.unixware. This patch would also be for other platforms that would \n> have to add the additional library search paths to the rpath linker option.\n> See Makefile.unixware for an example of how to do this.\n> \n> After applying these patches, PostgreSQL successfully compiled on OpenUnix 8 \n> and it passed all the regression tests.\n> \n\nContent-Description: ou8.patch.20021106\n\n[ Attachment, skipping... ]\n\n> ____ | Billy G. Allie | Domain....: [email protected]\n> | /| | 7436 Hartwell | MSN.......: [email protected]\n> |-/-|----- | Dearborn, MI 48126|\n> |/ |LLIE | (313) 582-1540 |\n-- End of PGP section, PGP failed!\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 6 Nov 2002 23:03:18 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL supported platform report and a patch." }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I am fine with this because it only touches unixware-specific stuff,\n> except the change to Tom's inline function:\n> [static] inline Datum\n> myFunctionCall2(FmgrInfo *flinfo, Datum arg1, Datum arg2)\n> Tom will have to comment on that.\n\nThat change would actively break some platforms (see C99 inline\nspecifications). Why is it necessary for SCO? 
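For readers not steeped in the inline rules being cited, a minimal example of the pattern at issue, using hypothetical names (the real functions are myFunctionCall2 and ApplySortFunction in tuplesort.c). The point is that "helper" has internal linkage while "caller" is an inline definition with external linkage, and C99's inline rules forbid the latter from referring to the former -- which appears to be roughly what the UDK compiler is enforcing:

/* static inline => internal linkage, like myFunctionCall2 */
static inline int
helper(int x)
{
	return x + 1;
}

/* Plain inline at file scope => an inline definition with external
 * linkage, like ApplySortFunction. */
inline int
caller(int x)
{
	/*
	 * Under C99, an inline definition of an external-linkage function may
	 * not reference an identifier with internal linkage; strict compilers
	 * reject this call.
	 */
	return helper(x);
}

Dropping the "static" from helper silences that particular diagnostic but changes helper's linkage everywhere, which is the portability objection raised here; the other way out is to stop declaring caller inline.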
We certainly have\nplenty of other static inline functions ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 06 Nov 2002 23:38:28 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL supported platform report and a patch. " }, { "msg_contents": "We already have success messages from Olivier Prenant for 7.3B4 on 8.0.0, \nand me for 7.1.3.\n\nI don't believe your changes are necessary.\n\n\n\n--On Wednesday, November 06, 2002 22:57:26 -0500 \"Billy G. Allie\" \n<[email protected]> wrote:\n\n> I am including a set of 4 small patches that enable PostgreSQL 7.3b3 to\n> build successfully on OpenUnix 8.0. These same patches should also work\n> for UnixWare 7.x. I will confirm that tomorrow (Nov 7, 2002).\n>\n> Here is an explanation of the patches:\n>\n> 1. An update of the FAQ_SCO file.\n>\n> 2. This patch removes a static declaration of a in-line function in\n> src/backend/utils/sort/tuplesort.c\n>\n> 3. This patch to src/makefiles/Makefile.unixware, together with the patch\n> to src/Makefile.global.in allows any addition library search\n> directories (added with the configure --with-libraries option) to be\n> added to the rpath option sent to the linker. The use of a different\n> variable to pass the addition search paths was necessary to avoid a\n> circular reference to LDFLAGS.\n>\n> 4. This patch creates the variable (trpath) used by the patch to\n> Makefile.unixware. This patch would also be for other platforms that\n> would have to add the additional library search paths to the rpath\n> linker option. See Makefile.unixware for an example of how to do this.\n>\n> After applying these patches, PostgreSQL successfully compiled on\n> OpenUnix 8 and it passed all the regression tests.\n>\n\n\n\n-- \nLarry Rosenman, Sr. Network Engineer, Internet America, Inc.\nE-Mail: [email protected]\nPhone: +1 214-861-2571, Fax: 214-861-2663\nUS Mail: 350 N. St. Paul, Suite 3000, Dallas, TX 75201\n", "msg_date": "Wed, 06 Nov 2002 23:27:31 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL supported platform report and a patch." }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > I am fine with this because it only touches unixware-specific stuff,\n> > except the change to Tom's inline function:\n> > [static] inline Datum\n> > myFunctionCall2(FmgrInfo *flinfo, Datum arg1, Datum arg2)\n> > Tom will have to comment on that.\n> \n> That change would actively break some platforms (see C99 inline\n> specifications). Why is it necessary for SCO? 
We certainly have\n> plenty of other static inline functions ...\n> \n> \t\t\tregards, tom lane\n\nHere is the error messages generated during the compile:\n\ncc -K pentium_pro,host,inline,loop_unroll -I../../../../src/include \n-I/usr/local/include -I/usr/local/ssl/include -c -o tuplesort.o tuplesort.c\nUX:acomp: ERROR: \"tuplesort.c\", line 1854: \"inline\" functions cannot use \n\"static\" identifier: myFunctionCall2\nUX:acomp: ERROR: \"tuplesort.c\", line 1856: \"inline\" functions cannot use \n\"static\" identifier: myFunctionCall2\nUX:acomp: ERROR: \"tuplesort.c\", line 1870: \"inline\" functions cannot use \n\"static\" identifier: myFunctionCall2\nUX:acomp: ERROR: \"tuplesort.c\", line 1872: \"inline\" functions cannot use \n\"static\" identifier: myFunctionCall2\nUX:acomp: ERROR: \"tuplesort.c\", line 1885: \"inline\" functions cannot use \n\"static\" identifier: myFunctionCall2\nUX:acomp: ERROR: \"tuplesort.c\", line 1897: \"inline\" functions cannot use \n\"static\" identifier: myFunctionCall2\ngmake[4]: *** [tuplesort.o] Error 1\n\nThe problem only occurs in tuplesort.c. It does not occur in pg_lzcompress.c \nor aset.c, which are the only other source files that contain static inline \nfunction definitions that get compiled. The rest are IF DEFed out.\n\nI think the problem is that myFunctionCall2 is called by a non-static inline \nfunction, ApplySortFunction. If I make ApplySortFunction static, it compiles \n(but break the link phase). If I remove the inline from ApplySortFunction, it \ncompiles and builds. In order for tuplesort.c to compile on OpenUNIX the code \nmust be changed to either:\n\n1. Remove the static modifier from myFuntionCall2\n or\n2. Remove the inline from ApplySortFunction\n or\n3. Wrap the static modifier for myFunctionCall2 with an IF DEF so it's not\n there when USE_UNIVEL_CC is defined.\n\nI think that option 2 is the best choice, but it's your call.\n-- \n____ | Billy G. Allie | Domain....: [email protected]\n| /| | 7436 Hartwell | MSN.......: [email protected]\n|-/-|----- | Dearborn, MI 48126|\n|/ |LLIE | (313) 582-1540 |", "msg_date": "Thu, 07 Nov 2002 01:11:13 -0500", "msg_from": "\"Billy G. Allie\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL supported platform report and a patch. " }, { "msg_contents": "\"Billy G. Allie\" <[email protected]> writes:\n> Here is the error messages generated during the compile:\n\n> cc -K pentium_pro,host,inline,loop_unroll -I../../../../src/include \n> -I/usr/local/include -I/usr/local/ssl/include -c -o tuplesort.o tuplesort.c\n> UX:acomp: ERROR: \"tuplesort.c\", line 1854: \"inline\" functions cannot use \n> \"static\" identifier: myFunctionCall2\n\nUh, what version are you testing exactly? I thought we'd resolved that\nas of a week or so back (certainly in 7.3b4).\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 07 Nov 2002 01:16:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL supported platform report and a patch. " }, { "msg_contents": "Larry Rosenman wrote:\n> We already have success messages from Olivier Prenant for 7.3B4 on 8.0.0, \n> and me for 7.1.3.\n> \n> I don't believe your changes are necessary.\n> \n\nWas that using gcc or the native compiler?\nWas it in linux kernel personality mode or OpenUNIX mode.\n\nI was compiling using the native (UDK) compiler. 
and it failed in tuplesort.c.\nIt was also unable to file the readline shared libraries without the changes to the makefiles or setting LD_RUN_PATH (which is a pain and is depreciated in OpenUNIX 8 and UnixWare 7).\n\n-- \n____ | Billy G. Allie | Domain....: [email protected]\n| /| | 7436 Hartwell | MSN.......: [email protected]\n|-/-|----- | Dearborn, MI 48126|\n|/ |LLIE | (313) 582-1540 |", "msg_date": "Thu, 07 Nov 2002 01:17:55 -0500", "msg_from": "\"Billy G. Allie\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL supported platform report and a patch. " }, { "msg_contents": "Tom Lane wrote:\n> \"Billy G. Allie\" <[email protected]> writes:\n> > Here is the error messages generated during the compile:\n> \n> > cc -K pentium_pro,host,inline,loop_unroll -I../../../../src/include \n> > -I/usr/local/include -I/usr/local/ssl/include -c -o tuplesort.o tuplesort.\n> c\n> > UX:acomp: ERROR: \"tuplesort.c\", line 1854: \"inline\" functions cannot use \n> > \"static\" identifier: myFunctionCall2\n> \n> Uh, what version are you testing exactly? I thought we'd resolved that\n> as of a week or so back (certainly in 7.3b4).\n\nIt was 7.3b3. I've just downloaded 7.3b5 and will re-test.\nIf that's the case then ignore the patch to tuplesort.c The rest of the \npatches are still valid though.\n-- \n____ | Billy G. Allie | Domain....: [email protected]\n| /| | 7436 Hartwell | MSN.......: [email protected]\n|-/-|----- | Dearborn, MI 48126|\n|/ |LLIE | (313) 582-1540 |", "msg_date": "Thu, 07 Nov 2002 01:22:05 -0500", "msg_from": "\"Billy G. Allie\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL supported platform report and a patch. " }, { "msg_contents": "Larry Rosenman <[email protected]> writes:\n> I don't believe your changes are necessary.\n\nThe static-inline change was obsoleted by a recent fix, per discussion.\nBut the rpath changes seem possibly useful (or maybe my thoughts are\njust colored by the fact that I'm currently trying to persuade OpenSSL\nto build with a non-broken rpath setup on HPUX...) Do you have an\nobjection to the rpath part of Billy's patch?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 07 Nov 2002 02:42:47 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL supported platform report and a patch. " }, { "msg_contents": "That's true!\n\nBut I had to export CFLAGS=-Xb to compile (this should be in port Makefile\nIMHO)\nAlso, I think the Setting of LD_LIBRARY_PATH could be a win to. Although I\ndoubt anyone would run uw whith at least\nLD_LIBRARY_PATH=/lib:/usr/local/lib, setting LD_LIBRARY_PATH and includes\nin the port makefile could ease the configure process as readline is not\nfound if you don't add --with-includes ans --with-libs on configure\ncommand.\n\nReagrds\n On Wed, 6 Nov 2002, Larry Rosenman wrote:\n\n> Date: Wed, 06 Nov 2002 23:27:31 -0600\n> From: Larry Rosenman <[email protected]>\n> To: Billy G. Allie <[email protected]>, [email protected]\n> Cc: [email protected]\n> Subject: Re: [PORTS] [HACKERS] PostgreSQL supported platform report and a\n> patch.\n> \n> We already have success messages from Olivier Prenant for 7.3B4 on 8.0.0, \n> and me for 7.1.3.\n> \n> I don't believe your changes are necessary.\n> \n> \n> \n> --On Wednesday, November 06, 2002 22:57:26 -0500 \"Billy G. 
Allie\" \n> <[email protected]> wrote:\n> \n> > I am including a set of 4 small patches that enable PostgreSQL 7.3b3 to\n> > build successfully on OpenUnix 8.0. These same patches should also work\n> > for UnixWare 7.x. I will confirm that tomorrow (Nov 7, 2002).\n> >\n> > Here is an explanation of the patches:\n> >\n> > 1. An update of the FAQ_SCO file.\n> >\n> > 2. This patch removes a static declaration of a in-line function in\n> > src/backend/utils/sort/tuplesort.c\n> >\n> > 3. This patch to src/makefiles/Makefile.unixware, together with the patch\n> > to src/Makefile.global.in allows any addition library search\n> > directories (added with the configure --with-libraries option) to be\n> > added to the rpath option sent to the linker. The use of a different\n> > variable to pass the addition search paths was necessary to avoid a\n> > circular reference to LDFLAGS.\n> >\n> > 4. This patch creates the variable (trpath) used by the patch to\n> > Makefile.unixware. This patch would also be for other platforms that\n> > would have to add the additional library search paths to the rpath\n> > linker option. See Makefile.unixware for an example of how to do this.\n> >\n> > After applying these patches, PostgreSQL successfully compiled on\n> > OpenUnix 8 and it passed all the regression tests.\n> >\n> \n> \n> \n> \n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: [email protected]\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n", "msg_date": "Thu, 7 Nov 2002 13:32:18 +0100 (MET)", "msg_from": "Olivier PRENANT <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL supported platform report and a" }, { "msg_contents": "\n\n--On Thursday, November 07, 2002 01:17:55 -0500 \"Billy G. Allie\" \n<[email protected]> wrote:\n\n> Larry Rosenman wrote:\n>> We already have success messages from Olivier Prenant for 7.3B4 on\n>> 8.0.0, and me for 7.1.3.\n>>\n>> I don't believe your changes are necessary.\n>>\n>\n> Was that using gcc or the native compiler?\nNative.\n> Was it in linux kernel personality mode or OpenUNIX mode.\n>\nNative.\n> I was compiling using the native (UDK) compiler. and it failed in\n> tuplesort.c. It was also unable to file the readline shared libraries\n> without the changes to the makefiles or setting LD_RUN_PATH (which is a\n> pain and is depreciated in OpenUNIX 8 and UnixWare 7).\nTom fixed the tuplesort.c issue with some help from the Caldera/SCO \nCompiler team\n(I'm on the 7.1.3 BETA).\n\n\nMy system has always found the readline stuff (I use the skunkware \nreadline).\n\nIt hasn't been an issue on my system.\n\n\n>\n> --\n> ____ | Billy G. Allie | Domain....: [email protected]\n>| /| | 7436 Hartwell | MSN.......: [email protected]\n>| -/-|----- | Dearborn, MI 48126|\n>| / |LLIE | (313) 582-1540 |\n>\n>\n\n\n\n-- \nLarry Rosenman, Sr. Network Engineer, Internet America, Inc.\nE-Mail: [email protected]\nPhone: +1 214-861-2571, Fax: 214-861-2663\nUS Mail: 350 N. St. Paul, Suite 3000, Dallas, TX 75201\n", "msg_date": "Thu, 07 Nov 2002 06:34:18 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL supported platform report and a patch. 
" }, { "msg_contents": "\n\n--On Thursday, November 07, 2002 02:42:47 -0500 Tom Lane \n<[email protected]> wrote:\n\n> Larry Rosenman <[email protected]> writes:\n>> I don't believe your changes are necessary.\n>\n> The static-inline change was obsoleted by a recent fix, per discussion.\n> But the rpath changes seem possibly useful (or maybe my thoughts are\n> just colored by the fact that I'm currently trying to persuade OpenSSL\n> to build with a non-broken rpath setup on HPUX...) Do you have an\n> objection to the rpath part of Billy's patch?\nNot necessarily. I was just concerned about the tuplesort one, and the fact\nthat mine builds and passes without the changes.\n\n\n>\n> \t\t\tregards, tom lane\n\n\n\n-- \nLarry Rosenman, Sr. Network Engineer, Internet America, Inc.\nE-Mail: [email protected]\nPhone: +1 214-861-2571, Fax: 214-861-2663\nUS Mail: 350 N. St. Paul, Suite 3000, Dallas, TX 75201\n", "msg_date": "Thu, 07 Nov 2002 06:35:31 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL supported platform report and a patch. " }, { "msg_contents": "\n\n--On Thursday, November 07, 2002 13:32:18 +0100 Olivier PRENANT \n<[email protected]> wrote:\n\n> That's true!\n>\n> But I had to export CFLAGS=-Xb to compile (this should be in port Makefile\n> IMHO)\nTom fixed that with a later tuplesort.c fix (per a discussion with the \nCaldera/SCO\ncompiler guys).\n\n> Also, I think the Setting of LD_LIBRARY_PATH could be a win to. Although I\n> doubt anyone would run uw whith at least\n> LD_LIBRARY_PATH=/lib:/usr/local/lib, setting LD_LIBRARY_PATH and includes\n> in the port makefile could ease the configure process as readline is not\n> found if you don't add --with-includes ans --with-libs on configure\n> command.\nNot a problem here. (the change that is).\n\n\n>\n> Reagrds\n> On Wed, 6 Nov 2002, Larry Rosenman wrote:\n>\n>> Date: Wed, 06 Nov 2002 23:27:31 -0600\n>> From: Larry Rosenman <[email protected]>\n>> To: Billy G. Allie <[email protected]>, [email protected]\n>> Cc: [email protected]\n>> Subject: Re: [PORTS] [HACKERS] PostgreSQL supported platform report and a\n>> patch.\n>>\n>> We already have success messages from Olivier Prenant for 7.3B4 on\n>> 8.0.0, and me for 7.1.3.\n>>\n>> I don't believe your changes are necessary.\n>>\n>>\n>>\n>> --On Wednesday, November 06, 2002 22:57:26 -0500 \"Billy G. Allie\"\n>> <[email protected]> wrote:\n>>\n>> > I am including a set of 4 small patches that enable PostgreSQL 7.3b3 to\n>> > build successfully on OpenUnix 8.0. These same patches should also\n>> > work for UnixWare 7.x. I will confirm that tomorrow (Nov 7, 2002).\n>> >\n>> > Here is an explanation of the patches:\n>> >\n>> > 1. An update of the FAQ_SCO file.\n>> >\n>> > 2. This patch removes a static declaration of a in-line function in\n>> > src/backend/utils/sort/tuplesort.c\n>> >\n>> > 3. This patch to src/makefiles/Makefile.unixware, together with the\n>> > patch to src/Makefile.global.in allows any addition library search\n>> > directories (added with the configure --with-libraries option) to be\n>> > added to the rpath option sent to the linker. The use of a\n>> > different variable to pass the addition search paths was necessary\n>> > to avoid a circular reference to LDFLAGS.\n>> >\n>> > 4. This patch creates the variable (trpath) used by the patch to\n>> > Makefile.unixware. 
This patch would also be for other platforms\n>> > that would have to add the additional library search paths to\n>> > the rpath linker option. See Makefile.unixware for an example of\n>> > how to do this.\n>> >\n>> > After applying these patches, PostgreSQL successfully compiled on\n>> > OpenUnix 8 and it passed all the regression tests.\n>> >\n>>\n>>\n>>\n>>\n>\n> --\n> Olivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\n> Quartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n> 31190 AUTERIVE +33-6-07-63-80-64 (GSM)\n> FRANCE Email: [email protected]\n> -------------------------------------------------------------------------\n> ----- Make your life a dream, make your dream a reality. (St Exupery)\n\n\n\n-- \nLarry Rosenman, Sr. Network Engineer, Internet America, Inc.\nE-Mail: [email protected]\nPhone: +1 214-861-2571, Fax: 214-861-2663\nUS Mail: 350 N. St. Paul, Suite 3000, Dallas, TX 75201\n", "msg_date": "Thu, 07 Nov 2002 06:41:02 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL supported platform report and a" }, { "msg_contents": "On Thu, 7 Nov 2002, Larry Rosenman wrote:\n\n> Date: Thu, 07 Nov 2002 06:41:02 -0600\n> From: Larry Rosenman <[email protected]>\n> To: [email protected]\n> Cc: Billy G. Allie <[email protected]>, [email protected],\n> [email protected]\n> Subject: Re: [PORTS] [HACKERS] PostgreSQL supported platform report and a\n> \n> \n> \n> --On Thursday, November 07, 2002 13:32:18 +0100 Olivier PRENANT \n> <[email protected]> wrote:\n> \n> > That's true!\n> >\n> > But I had to export CFLAGS=-Xb to compile (this should be in port Makefile\n> > IMHO)\n> Tom fixed that with a later tuplesort.c fix (per a discussion with the \n> Caldera/SCO\n> compiler guys).\nHuh! I just tried to compile 7.3b5 without CFLAGS=-Xb, it still bugs the\ncompiler...\n> \n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: [email protected]\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n", "msg_date": "Thu, 7 Nov 2002 14:23:43 +0100 (MET)", "msg_from": "Olivier PRENANT <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL supported platform report and a" }, { "msg_contents": "\n\n--On Thursday, November 07, 2002 14:23:43 +0100 Olivier PRENANT \n<[email protected]> wrote:\n\n> On Thu, 7 Nov 2002, Larry Rosenman wrote:\n>\n>> Date: Thu, 07 Nov 2002 06:41:02 -0600\n>> From: Larry Rosenman <[email protected]>\n>> To: [email protected]\n>> Cc: Billy G. Allie <[email protected]>, [email protected],\n>> [email protected]\n>> Subject: Re: [PORTS] [HACKERS] PostgreSQL supported platform report and a\n>>\n>>\n>>\n>> --On Thursday, November 07, 2002 13:32:18 +0100 Olivier PRENANT\n>> <[email protected]> wrote:\n>>\n>> > That's true!\n>> >\n>> > But I had to export CFLAGS=-Xb to compile (this should be in port\n>> > Makefile IMHO)\n>> Tom fixed that with a later tuplesort.c fix (per a discussion with the\n>> Caldera/SCO\n>> compiler guys).\n> Huh! I just tried to compile 7.3b5 without CFLAGS=-Xb, it still bugs the\n> compiler...\n>>\nDidn't for me.... 
:-(\n\nWierd.\n\n\n>\n> --\n> Olivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\n> Quartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n> 31190 AUTERIVE +33-6-07-63-80-64 (GSM)\n> FRANCE Email: [email protected]\n> -------------------------------------------------------------------------\n> ----- Make your life a dream, make your dream a reality. (St Exupery)\n\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n", "msg_date": "Thu, 07 Nov 2002 08:41:58 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL supported platform report and a" }, { "msg_contents": "On Thu, 7 Nov 2002, Larry Rosenman wrote:\n\n> Date: Thu, 07 Nov 2002 08:41:58 -0600\n> From: Larry Rosenman <[email protected]>\n> To: [email protected]\n> Cc: Billy G. Allie <[email protected]>, [email protected],\n> [email protected]\n> Subject: Re: [PORTS] [HACKERS] PostgreSQL supported platform report and a\n> \n> \n> \n> >> Tom fixed that with a later tuplesort.c fix (per a discussion with the\n> >> Caldera/SCO\n> >> compiler guys).\n> > Huh! I just tried to compile 7.3b5 without CFLAGS=-Xb, it still bugs the\n> > compiler...\n> >>\n> Didn't for me.... :-(\n> \n> Wierd.\nBTW, this is on 7.1.1 not (yet) on 8.0.0\nI'll let you know hopefully today.\n\n(How did you get 713 when it's due for december?) Can I have a copy?\n> \n> \n> >\n> > --\n> > Olivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\n> > Quartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n> > 31190 AUTERIVE +33-6-07-63-80-64 (GSM)\n> > FRANCE Email: [email protected]\n> > -------------------------------------------------------------------------\n> > ----- Make your life a dream, make your dream a reality. (St Exupery)\n> \n> \n> \n> \n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: [email protected]\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n", "msg_date": "Thu, 7 Nov 2002 15:44:37 +0100 (MET)", "msg_from": "Olivier PRENANT <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL supported platform report and a" }, { "msg_contents": "\n\n--On Thursday, November 07, 2002 15:44:37 +0100 Olivier PRENANT \n<[email protected]> wrote:\n\n> On Thu, 7 Nov 2002, Larry Rosenman wrote:\n>\n>> Date: Thu, 07 Nov 2002 08:41:58 -0600\n>> From: Larry Rosenman <[email protected]>\n>> To: [email protected]\n>> Cc: Billy G. Allie <[email protected]>, [email protected],\n>> [email protected]\n>> Subject: Re: [PORTS] [HACKERS] PostgreSQL supported platform report and a\n>>\n>>\n>>\n>> >> Tom fixed that with a later tuplesort.c fix (per a discussion with the\n>> >> Caldera/SCO\n>> >> compiler guys).\n>> > Huh! I just tried to compile 7.3b5 without CFLAGS=-Xb, it still bugs\n>> > the compiler...\n>> >>\n>> Didn't for me.... :-(\n>>\n>> Wierd.\n> BTW, this is on 7.1.1 not (yet) on 8.0.0\n> I'll let you know hopefully today.\n>\n> (How did you get 713 when it's due for december?) Can I have a copy?\nI'm on the Beta. No, I can't give it to you. 
You might want to sign up\non http://www.caldera.com/beta/ to get in on the next one.\n\n\n>>\n>>\n>> >\n>> > --\n>> > Olivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\n>> > Quartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n>> > 31190 AUTERIVE +33-6-07-63-80-64 (GSM)\n>> > FRANCE Email: [email protected]\n>> > ----------------------------------------------------------------------\n>> > --- ----- Make your life a dream, make your dream a reality. (St\n>> > Exupery)\n>>\n>>\n>>\n>>\n>\n> --\n> Olivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\n> Quartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n> 31190 AUTERIVE +33-6-07-63-80-64 (GSM)\n> FRANCE Email: [email protected]\n> -------------------------------------------------------------------------\n> ----- Make your life a dream, make your dream a reality. (St Exupery)\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n", "msg_date": "Thu, 07 Nov 2002 08:47:33 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL supported platform report and a" }, { "msg_contents": "Olivier PRENANT <[email protected]> writes:\n> Huh! I just tried to compile 7.3b5 without CFLAGS=-Xb, it still bugs the\n> compiler...\n\nIt won't get better if you don't show any details...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 07 Nov 2002 10:21:25 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL supported platform report and a " }, { "msg_contents": "On Thu, 7 Nov 2002, Tom Lane wrote:\n\n> Date: Thu, 07 Nov 2002 10:21:25 -0500\n> From: Tom Lane <[email protected]>\n> To: [email protected]\n> Cc: Larry Rosenman <[email protected]>, Billy G. Allie <[email protected]>,\n> [email protected], [email protected]\n> Subject: Re: [PORTS] [HACKERS] PostgreSQL supported platform report and a \n> \n> Olivier PRENANT <[email protected]> writes:\n> > Huh! I just tried to compile 7.3b5 without CFLAGS=-Xb, it still bugs the\n> > compiler...\n> \n> It won't get better if you don't show any details...\nOk... 
(sorry) this is on UW 711 WITHOUT CFLAGS=-Xb:\nScript started on Thu Nov 7 16:57:05 2002\n$ cd postgresql*5\n$ make\nUsing GNU make found at /usr/local/bin/gmake\n/usr/local/bin/gmake -C doc all\ngmake[1]: Entering directory `/home/postgres/postgresql-7.3b5/doc'\ngmake[1]: Nothing to be done for `all'.\ngmake[1]: Leaving directory `/home/postgres/postgresql-7.3b5/doc'\n/usr/local/bin/gmake -C src all\ngmake[1]: Entering directory `/home/postgres/postgresql-7.3b5/src'\n/usr/local/bin/gmake -C port all\ngmake[2]: Entering directory `/home/postgres/postgresql-7.3b5/src/port'\ngmake[2]: Nothing to be done for `all'.\ngmake[2]: Leaving directory `/home/postgres/postgresql-7.3b5/src/port'\n/usr/local/bin/gmake -C backend all\ngmake[2]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend'\n/usr/local/bin/gmake -C ../../src/port all\ngmake[3]: Entering directory `/home/postgres/postgresql-7.3b5/src/port'\ngmake[3]: Nothing to be done for `all'.\ngmake[3]: Leaving directory `/home/postgres/postgresql-7.3b5/src/port'\n/usr/local/bin/gmake -C access all\ngmake[3]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/access'\n/usr/local/bin/gmake -C common SUBSYS.o\ngmake[4]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/access/common'\ngmake[4]: `SUBSYS.o' is up to date.\ngmake[4]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/access/common'\n/usr/local/bin/gmake -C gist SUBSYS.o\ngmake[4]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/access/gist'\ngmake[4]: `SUBSYS.o' is up to date.\ngmake[4]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/access/gist'\n/usr/local/bin/gmake -C hash SUBSYS.o\ngmake[4]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/access/hash'\ngmake[4]: `SUBSYS.o' is up to date.\ngmake[4]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/access/hash'\n/usr/local/bin/gmake -C heap SUBSYS.o\ngmake[4]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/access/heap'\ngmake[4]: `SUBSYS.o' is up to date.\ngmake[4]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/access/heap'\n/usr/local/bin/gmake -C index SUBSYS.o\ngmake[4]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/access/index'\ngmake[4]: `SUBSYS.o' is up to date.\ngmake[4]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/access/index'\n/usr/local/bin/gmake -C nbtree SUBSYS.o\ngmake[4]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/access/nbtree'\ngmake[4]: `SUBSYS.o' is up to date.\ngmake[4]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/access/nbtree'\n/usr/local/bin/gmake -C rtree SUBSYS.o\ngmake[4]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/access/rtree'\ngmake[4]: `SUBSYS.o' is up to date.\ngmake[4]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/access/rtree'\n/usr/local/bin/gmake -C transam SUBSYS.o\ngmake[4]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/access/transam'\ngmake[4]: `SUBSYS.o' is up to date.\ngmake[4]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/access/transam'\ngmake[3]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/access'\n/usr/local/bin/gmake -C bootstrap all\ngmake[3]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/bootstrap'\ngmake[3]: Nothing to be done for `all'.\ngmake[3]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/bootstrap'\n/usr/local/bin/gmake -C catalog 
all\ngmake[3]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/catalog'\ngmake[3]: Nothing to be done for `all'.\ngmake[3]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/catalog'\n/usr/local/bin/gmake -C parser all\ngmake[3]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/parser'\ngmake[3]: Nothing to be done for `all'.\ngmake[3]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/parser'\n/usr/local/bin/gmake -C commands all\ngmake[3]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/commands'\ngmake[3]: Nothing to be done for `all'.\ngmake[3]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/commands'\n/usr/local/bin/gmake -C executor all\ngmake[3]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/executor'\ngmake[3]: Nothing to be done for `all'.\ngmake[3]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/executor'\n/usr/local/bin/gmake -C lib all\ngmake[3]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/lib'\ngmake[3]: Nothing to be done for `all'.\ngmake[3]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/lib'\n/usr/local/bin/gmake -C libpq all\ngmake[3]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/libpq'\ngmake[3]: Nothing to be done for `all'.\ngmake[3]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/libpq'\n/usr/local/bin/gmake -C main all\ngmake[3]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/main'\ngmake[3]: Nothing to be done for `all'.\ngmake[3]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/main'\n/usr/local/bin/gmake -C nodes all\ngmake[3]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/nodes'\ngmake[3]: Nothing to be done for `all'.\ngmake[3]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/nodes'\n/usr/local/bin/gmake -C optimizer all\ngmake[3]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/optimizer'\n/usr/local/bin/gmake -C geqo SUBSYS.o\ngmake[4]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/optimizer/geqo'\ngmake[4]: `SUBSYS.o' is up to date.\ngmake[4]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/optimizer/geqo'\n/usr/local/bin/gmake -C path SUBSYS.o\ngmake[4]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/optimizer/path'\ngmake[4]: `SUBSYS.o' is up to date.\ngmake[4]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/optimizer/path'\n/usr/local/bin/gmake -C plan SUBSYS.o\ngmake[4]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/optimizer/plan'\ngmake[4]: `SUBSYS.o' is up to date.\ngmake[4]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/optimizer/plan'\n/usr/local/bin/gmake -C prep SUBSYS.o\ngmake[4]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/optimizer/prep'\ngmake[4]: `SUBSYS.o' is up to date.\ngmake[4]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/optimizer/prep'\n/usr/local/bin/gmake -C util SUBSYS.o\ngmake[4]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/optimizer/util'\ngmake[4]: `SUBSYS.o' is up to date.\ngmake[4]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/optimizer/util'\ngmake[3]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/optimizer'\n/usr/local/bin/gmake -C port all\ngmake[3]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/port'\ngmake[3]: Nothing to be done for 
`all'.\ngmake[3]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/port'\n/usr/local/bin/gmake -C postmaster all\ngmake[3]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/postmaster'\ngmake[3]: Nothing to be done for `all'.\ngmake[3]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/postmaster'\n/usr/local/bin/gmake -C regex all\ngmake[3]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/regex'\ngmake[3]: Nothing to be done for `all'.\ngmake[3]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/regex'\n/usr/local/bin/gmake -C rewrite all\ngmake[3]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/rewrite'\ngmake[3]: Nothing to be done for `all'.\ngmake[3]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/rewrite'\n/usr/local/bin/gmake -C storage all\ngmake[3]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/storage'\n/usr/local/bin/gmake -C buffer SUBSYS.o\ngmake[4]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/storage/buffer'\ngmake[4]: `SUBSYS.o' is up to date.\ngmake[4]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/storage/buffer'\n/usr/local/bin/gmake -C file SUBSYS.o\ngmake[4]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/storage/file'\ngmake[4]: `SUBSYS.o' is up to date.\ngmake[4]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/storage/file'\n/usr/local/bin/gmake -C freespace SUBSYS.o\ngmake[4]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/storage/freespace'\ngmake[4]: `SUBSYS.o' is up to date.\ngmake[4]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/storage/freespace'\n/usr/local/bin/gmake -C ipc SUBSYS.o\ngmake[4]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/storage/ipc'\ngmake[4]: `SUBSYS.o' is up to date.\ngmake[4]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/storage/ipc'\n/usr/local/bin/gmake -C large_object SUBSYS.o\ngmake[4]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/storage/large_object'\ngmake[4]: `SUBSYS.o' is up to date.\ngmake[4]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/storage/large_object'\n/usr/local/bin/gmake -C lmgr SUBSYS.o\ngmake[4]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/storage/lmgr'\ngmake[4]: `SUBSYS.o' is up to date.\ngmake[4]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/storage/lmgr'\n/usr/local/bin/gmake -C page SUBSYS.o\ngmake[4]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/storage/page'\ngmake[4]: `SUBSYS.o' is up to date.\ngmake[4]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/storage/page'\n/usr/local/bin/gmake -C smgr SUBSYS.o\ngmake[4]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/storage/smgr'\ngmake[4]: `SUBSYS.o' is up to date.\ngmake[4]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/storage/smgr'\ngmake[3]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/storage'\n/usr/local/bin/gmake -C tcop all\ngmake[3]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/tcop'\ngmake[3]: Nothing to be done for `all'.\ngmake[3]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/tcop'\n/usr/local/bin/gmake -C utils all\ngmake[3]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/utils'\n/usr/local/bin/gmake -C adt SUBSYS.o\ngmake[4]: Entering directory 
`/home/postgres/postgresql-7.3b5/src/backend/utils/adt'\ngmake[4]: `SUBSYS.o' is up to date.\ngmake[4]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/utils/adt'\n/usr/local/bin/gmake -C cache SUBSYS.o\ngmake[4]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/utils/cache'\ngmake[4]: `SUBSYS.o' is up to date.\ngmake[4]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/utils/cache'\n/usr/local/bin/gmake -C error SUBSYS.o\ngmake[4]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/utils/error'\ngmake[4]: `SUBSYS.o' is up to date.\ngmake[4]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/utils/error'\n/usr/local/bin/gmake -C fmgr SUBSYS.o\ngmake[4]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/utils/fmgr'\ngmake[4]: `SUBSYS.o' is up to date.\ngmake[4]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/utils/fmgr'\n/usr/local/bin/gmake -C hash SUBSYS.o\ngmake[4]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/utils/hash'\ngmake[4]: `SUBSYS.o' is up to date.\ngmake[4]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/utils/hash'\n/usr/local/bin/gmake -C init SUBSYS.o\ngmake[4]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/utils/init'\ngmake[4]: `SUBSYS.o' is up to date.\ngmake[4]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/utils/init'\n/usr/local/bin/gmake -C misc SUBSYS.o\ngmake[4]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/utils/misc'\ngmake[4]: `SUBSYS.o' is up to date.\ngmake[4]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/utils/misc'\n/usr/local/bin/gmake -C mmgr SUBSYS.o\ngmake[4]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/utils/mmgr'\ngmake[4]: `SUBSYS.o' is up to date.\ngmake[4]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/utils/mmgr'\n/usr/local/bin/gmake -C sort SUBSYS.o\ngmake[4]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend/utils/sort'\ncc -O -K inline -I../../../../src/include -I/usr/local/include -c tuplesort.c -o tuplesort.o\nUX:acomp: ERREUR: \"tuplesort.c\", ligne 1854: \"inline\" functions cannot use \"static\" identifier: myFunctionCall2\nUX:acomp: ERREUR: \"tuplesort.c\", ligne 1856: \"inline\" functions cannot use \"static\" identifier: myFunctionCall2\nUX:acomp: ERREUR: \"tuplesort.c\", ligne 1870: \"inline\" functions cannot use \"static\" identifier: myFunctionCall2\nUX:acomp: ERREUR: \"tuplesort.c\", ligne 1872: \"inline\" functions cannot use \"static\" identifier: myFunctionCall2\nUX:acomp: ERREUR: \"tuplesort.c\", ligne 1885: \"inline\" functions cannot use \"static\" identifier: myFunctionCall2\nUX:acomp: ERREUR: \"tuplesort.c\", ligne 1897: \"inline\" functions cannot use \"static\" identifier: myFunctionCall2\ngmake[4]: *** [tuplesort.o] Error 1\ngmake[4]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/utils/sort'\ngmake[3]: *** [sort-recursive] Error 2\ngmake[3]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/utils'\ngmake[2]: *** [utils-recursive] Error 2\ngmake[2]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend'\ngmake[1]: *** [all] Error 2\ngmake[1]: Leaving directory `/home/postgres/postgresql-7.3b5/src'\ngmake: *** [all] Error 2\n*** Code d'erreur 2 (bu21)\nUX:make: ERREUR: erreur irr�m�diable.\n\nscript done on Thu Nov 7 16:57:29 2002\nIt works OK with -Xb...\n\nRegards\n> \n> \t\t\tregards, tom lane\n> \n\n-- \nOlivier PRENANT 
\tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: [email protected]\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n", "msg_date": "Thu, 7 Nov 2002 17:00:21 +0100 (MET)", "msg_from": "Olivier PRENANT <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL supported platform report and a " }, { "msg_contents": "It looks like you do **NOT** have B4 or B5....\n\nLER\n\n\n--On Thursday, November 07, 2002 17:00:21 +0100 Olivier PRENANT \n<[email protected]> wrote:\n\n> On Thu, 7 Nov 2002, Tom Lane wrote:\n>\n>> Date: Thu, 07 Nov 2002 10:21:25 -0500\n>> From: Tom Lane <[email protected]>\n>> To: [email protected]\n>> Cc: Larry Rosenman <[email protected]>, Billy G. Allie <[email protected]>,\n>> [email protected], [email protected]\n>> Subject: Re: [PORTS] [HACKERS] PostgreSQL supported platform report and\n>> a\n>>\n>> Olivier PRENANT <[email protected]> writes:\n>> > Huh! I just tried to compile 7.3b5 without CFLAGS=-Xb, it still bugs\n>> > the compiler...\n>>\n>> It won't get better if you don't show any details...\n> Ok... (sorry) this is on UW 711 WITHOUT CFLAGS=-Xb:\n> Script started on Thu Nov 7 16:57:05 2002\n> $ cd postgresql*5\n> $ make\n> Using GNU make found at /usr/local/bin/gmake\n> /usr/local/bin/gmake -C doc all\n> gmake[1]: Entering directory `/home/postgres/postgresql-7.3b5/doc'\n> gmake[1]: Nothing to be done for `all'.\n> gmake[1]: Leaving directory `/home/postgres/postgresql-7.3b5/doc'\n> /usr/local/bin/gmake -C src all\n> gmake[1]: Entering directory `/home/postgres/postgresql-7.3b5/src'\n> /usr/local/bin/gmake -C port all\n> gmake[2]: Entering directory `/home/postgres/postgresql-7.3b5/src/port'\n> gmake[2]: Nothing to be done for `all'.\n> gmake[2]: Leaving directory `/home/postgres/postgresql-7.3b5/src/port'\n> /usr/local/bin/gmake -C backend all\n> gmake[2]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend'\n> /usr/local/bin/gmake -C ../../src/port all\n> gmake[3]: Entering directory `/home/postgres/postgresql-7.3b5/src/port'\n> gmake[3]: Nothing to be done for `all'.\n> gmake[3]: Leaving directory `/home/postgres/postgresql-7.3b5/src/port'\n> /usr/local/bin/gmake -C access all\n> gmake[3]: Entering directory\n> `/home/postgres/postgresql-7.3b5/src/backend/access' /usr/local/bin/gmake\n> -C common SUBSYS.o\n> gmake[4]: Entering directory\n> `/home/postgres/postgresql-7.3b5/src/backend/access/common' gmake[4]:\n> `SUBSYS.o' is up to date.\n> gmake[4]: Leaving directory\n> `/home/postgres/postgresql-7.3b5/src/backend/access/common'\n> /usr/local/bin/gmake -C gist SUBSYS.o\n> gmake[4]: Entering directory\n> `/home/postgres/postgresql-7.3b5/src/backend/access/gist' gmake[4]:\n> `SUBSYS.o' is up to date.\n> gmake[4]: Leaving directory\n> `/home/postgres/postgresql-7.3b5/src/backend/access/gist'\n> /usr/local/bin/gmake -C hash SUBSYS.o\n> gmake[4]: Entering directory\n> `/home/postgres/postgresql-7.3b5/src/backend/access/hash' gmake[4]:\n> `SUBSYS.o' is up to date.\n> gmake[4]: Leaving directory\n> `/home/postgres/postgresql-7.3b5/src/backend/access/hash'\n> /usr/local/bin/gmake -C heap SUBSYS.o\n> gmake[4]: Entering directory\n> `/home/postgres/postgresql-7.3b5/src/backend/access/heap' gmake[4]:\n> `SUBSYS.o' is up to date.\n> gmake[4]: Leaving directory\n> `/home/postgres/postgresql-7.3b5/src/backend/access/heap'\n> /usr/local/bin/gmake -C index 
SUBSYS.o\n> gmake[4]: Entering directory\n> `/home/postgres/postgresql-7.3b5/src/backend/access/index' gmake[4]:\n> `SUBSYS.o' is up to date.\n> gmake[4]: Leaving directory\n> `/home/postgres/postgresql-7.3b5/src/backend/access/index'\n> /usr/local/bin/gmake -C nbtree SUBSYS.o\n> gmake[4]: Entering directory\n> `/home/postgres/postgresql-7.3b5/src/backend/access/nbtree' gmake[4]:\n> `SUBSYS.o' is up to date.\n> gmake[4]: Leaving directory\n> `/home/postgres/postgresql-7.3b5/src/backend/access/nbtree'\n> /usr/local/bin/gmake -C rtree SUBSYS.o\n> gmake[4]: Entering directory\n> `/home/postgres/postgresql-7.3b5/src/backend/access/rtree' gmake[4]:\n> `SUBSYS.o' is up to date.\n> gmake[4]: Leaving directory\n> `/home/postgres/postgresql-7.3b5/src/backend/access/rtree'\n> /usr/local/bin/gmake -C transam SUBSYS.o\n> gmake[4]: Entering directory\n> `/home/postgres/postgresql-7.3b5/src/backend/access/transam' gmake[4]:\n> `SUBSYS.o' is up to date.\n> gmake[4]: Leaving directory\n> `/home/postgres/postgresql-7.3b5/src/backend/access/transam' gmake[3]:\n> Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/access'\n> /usr/local/bin/gmake -C bootstrap all\n> gmake[3]: Entering directory\n> `/home/postgres/postgresql-7.3b5/src/backend/bootstrap' gmake[3]: Nothing\n> to be done for `all'.\n> gmake[3]: Leaving directory\n> `/home/postgres/postgresql-7.3b5/src/backend/bootstrap'\n> /usr/local/bin/gmake -C catalog all\n> gmake[3]: Entering directory\n> `/home/postgres/postgresql-7.3b5/src/backend/catalog' gmake[3]: Nothing\n> to be done for `all'.\n> gmake[3]: Leaving directory\n> `/home/postgres/postgresql-7.3b5/src/backend/catalog'\n> /usr/local/bin/gmake -C parser all\n> gmake[3]: Entering directory\n> `/home/postgres/postgresql-7.3b5/src/backend/parser' gmake[3]: Nothing to\n> be done for `all'.\n> gmake[3]: Leaving directory\n> `/home/postgres/postgresql-7.3b5/src/backend/parser' /usr/local/bin/gmake\n> -C commands all\n> gmake[3]: Entering directory\n> `/home/postgres/postgresql-7.3b5/src/backend/commands' gmake[3]: Nothing\n> to be done for `all'.\n> gmake[3]: Leaving directory\n> `/home/postgres/postgresql-7.3b5/src/backend/commands'\n> /usr/local/bin/gmake -C executor all\n> gmake[3]: Entering directory\n> `/home/postgres/postgresql-7.3b5/src/backend/executor' gmake[3]: Nothing\n> to be done for `all'.\n> gmake[3]: Leaving directory\n> `/home/postgres/postgresql-7.3b5/src/backend/executor'\n> /usr/local/bin/gmake -C lib all\n> gmake[3]: Entering directory\n> `/home/postgres/postgresql-7.3b5/src/backend/lib' gmake[3]: Nothing to be\n> done for `all'.\n> gmake[3]: Leaving directory\n> `/home/postgres/postgresql-7.3b5/src/backend/lib' /usr/local/bin/gmake -C\n> libpq all\n> gmake[3]: Entering directory\n> `/home/postgres/postgresql-7.3b5/src/backend/libpq' gmake[3]: Nothing to\n> be done for `all'.\n> gmake[3]: Leaving directory\n> `/home/postgres/postgresql-7.3b5/src/backend/libpq' /usr/local/bin/gmake\n> -C main all\n> gmake[3]: Entering directory\n> `/home/postgres/postgresql-7.3b5/src/backend/main' gmake[3]: Nothing to\n> be done for `all'.\n> gmake[3]: Leaving directory\n> `/home/postgres/postgresql-7.3b5/src/backend/main' /usr/local/bin/gmake\n> -C nodes all\n> gmake[3]: Entering directory\n> `/home/postgres/postgresql-7.3b5/src/backend/nodes' gmake[3]: Nothing to\n> be done for `all'.\n> gmake[3]: Leaving directory\n> `/home/postgres/postgresql-7.3b5/src/backend/nodes' /usr/local/bin/gmake\n> -C optimizer all\n> gmake[3]: Entering directory\n> 
`/home/postgres/postgresql-7.3b5/src/backend/optimizer'\n> /usr/local/bin/gmake -C geqo SUBSYS.o\n> gmake[4]: Entering directory\n> `/home/postgres/postgresql-7.3b5/src/backend/optimizer/geqo' gmake[4]:\n> `SUBSYS.o' is up to date.\n> gmake[4]: Leaving directory\n> `/home/postgres/postgresql-7.3b5/src/backend/optimizer/geqo'\n> /usr/local/bin/gmake -C path SUBSYS.o\n> gmake[4]: Entering directory\n> `/home/postgres/postgresql-7.3b5/src/backend/optimizer/path' gmake[4]:\n> `SUBSYS.o' is up to date.\n> gmake[4]: Leaving directory\n> `/home/postgres/postgresql-7.3b5/src/backend/optimizer/path'\n> /usr/local/bin/gmake -C plan SUBSYS.o\n> gmake[4]: Entering directory\n> `/home/postgres/postgresql-7.3b5/src/backend/optimizer/plan' gmake[4]:\n> `SUBSYS.o' is up to date.\n> gmake[4]: Leaving directory\n> `/home/postgres/postgresql-7.3b5/src/backend/optimizer/plan'\n> /usr/local/bin/gmake -C prep SUBSYS.o\n> gmake[4]: Entering directory\n> `/home/postgres/postgresql-7.3b5/src/backend/optimizer/prep' gmake[4]:\n> `SUBSYS.o' is up to date.\n> gmake[4]: Leaving directory\n> `/home/postgres/postgresql-7.3b5/src/backend/optimizer/prep'\n> /usr/local/bin/gmake -C util SUBSYS.o\n> gmake[4]: Entering directory\n> `/home/postgres/postgresql-7.3b5/src/backend/optimizer/util' gmake[4]:\n> `SUBSYS.o' is up to date.\n> gmake[4]: Leaving directory\n> `/home/postgres/postgresql-7.3b5/src/backend/optimizer/util' gmake[3]:\n> Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/optimizer'\n> /usr/local/bin/gmake -C port all\n> gmake[3]: Entering directory\n> `/home/postgres/postgresql-7.3b5/src/backend/port' gmake[3]: Nothing to\n> be done for `all'.\n> gmake[3]: Leaving directory\n> `/home/postgres/postgresql-7.3b5/src/backend/port' /usr/local/bin/gmake\n> -C postmaster all\n> gmake[3]: Entering directory\n> `/home/postgres/postgresql-7.3b5/src/backend/postmaster' gmake[3]:\n> Nothing to be done for `all'.\n> gmake[3]: Leaving directory\n> `/home/postgres/postgresql-7.3b5/src/backend/postmaster'\n> /usr/local/bin/gmake -C regex all\n> gmake[3]: Entering directory\n> `/home/postgres/postgresql-7.3b5/src/backend/regex' gmake[3]: Nothing to\n> be done for `all'.\n> gmake[3]: Leaving directory\n> `/home/postgres/postgresql-7.3b5/src/backend/regex' /usr/local/bin/gmake\n> -C rewrite all\n> gmake[3]: Entering directory\n> `/home/postgres/postgresql-7.3b5/src/backend/rewrite' gmake[3]: Nothing\n> to be done for `all'.\n> gmake[3]: Leaving directory\n> `/home/postgres/postgresql-7.3b5/src/backend/rewrite'\n> /usr/local/bin/gmake -C storage all\n> gmake[3]: Entering directory\n> `/home/postgres/postgresql-7.3b5/src/backend/storage'\n> /usr/local/bin/gmake -C buffer SUBSYS.o\n> gmake[4]: Entering directory\n> `/home/postgres/postgresql-7.3b5/src/backend/storage/buffer' gmake[4]:\n> `SUBSYS.o' is up to date.\n> gmake[4]: Leaving directory\n> `/home/postgres/postgresql-7.3b5/src/backend/storage/buffer'\n> /usr/local/bin/gmake -C file SUBSYS.o\n> gmake[4]: Entering directory\n> `/home/postgres/postgresql-7.3b5/src/backend/storage/file' gmake[4]:\n> `SUBSYS.o' is up to date.\n> gmake[4]: Leaving directory\n> `/home/postgres/postgresql-7.3b5/src/backend/storage/file'\n> /usr/local/bin/gmake -C freespace SUBSYS.o\n> gmake[4]: Entering directory\n> `/home/postgres/postgresql-7.3b5/src/backend/storage/freespace' gmake[4]:\n> `SUBSYS.o' is up to date.\n> gmake[4]: Leaving directory\n> `/home/postgres/postgresql-7.3b5/src/backend/storage/freespace'\n> /usr/local/bin/gmake -C ipc SUBSYS.o\n> gmake[4]: Entering directory\n> 
`/home/postgres/postgresql-7.3b5/src/backend/storage/ipc' gmake[4]:\n> `SUBSYS.o' is up to date.\n> gmake[4]: Leaving directory\n> `/home/postgres/postgresql-7.3b5/src/backend/storage/ipc'\n> /usr/local/bin/gmake -C large_object SUBSYS.o\n> gmake[4]: Entering directory\n> `/home/postgres/postgresql-7.3b5/src/backend/storage/large_object'\n> gmake[4]: `SUBSYS.o' is up to date.\n> gmake[4]: Leaving directory\n> `/home/postgres/postgresql-7.3b5/src/backend/storage/large_object'\n> /usr/local/bin/gmake -C lmgr SUBSYS.o\n> gmake[4]: Entering directory\n> `/home/postgres/postgresql-7.3b5/src/backend/storage/lmgr' gmake[4]:\n> `SUBSYS.o' is up to date.\n> gmake[4]: Leaving directory\n> `/home/postgres/postgresql-7.3b5/src/backend/storage/lmgr'\n> /usr/local/bin/gmake -C page SUBSYS.o\n> gmake[4]: Entering directory\n> `/home/postgres/postgresql-7.3b5/src/backend/storage/page' gmake[4]:\n> `SUBSYS.o' is up to date.\n> gmake[4]: Leaving directory\n> `/home/postgres/postgresql-7.3b5/src/backend/storage/page'\n> /usr/local/bin/gmake -C smgr SUBSYS.o\n> gmake[4]: Entering directory\n> `/home/postgres/postgresql-7.3b5/src/backend/storage/smgr' gmake[4]:\n> `SUBSYS.o' is up to date.\n> gmake[4]: Leaving directory\n> `/home/postgres/postgresql-7.3b5/src/backend/storage/smgr' gmake[3]:\n> Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/storage'\n> /usr/local/bin/gmake -C tcop all\n> gmake[3]: Entering directory\n> `/home/postgres/postgresql-7.3b5/src/backend/tcop' gmake[3]: Nothing to\n> be done for `all'.\n> gmake[3]: Leaving directory\n> `/home/postgres/postgresql-7.3b5/src/backend/tcop' /usr/local/bin/gmake\n> -C utils all\n> gmake[3]: Entering directory\n> `/home/postgres/postgresql-7.3b5/src/backend/utils' /usr/local/bin/gmake\n> -C adt SUBSYS.o\n> gmake[4]: Entering directory\n> `/home/postgres/postgresql-7.3b5/src/backend/utils/adt' gmake[4]:\n> `SUBSYS.o' is up to date.\n> gmake[4]: Leaving directory\n> `/home/postgres/postgresql-7.3b5/src/backend/utils/adt'\n> /usr/local/bin/gmake -C cache SUBSYS.o\n> gmake[4]: Entering directory\n> `/home/postgres/postgresql-7.3b5/src/backend/utils/cache' gmake[4]:\n> `SUBSYS.o' is up to date.\n> gmake[4]: Leaving directory\n> `/home/postgres/postgresql-7.3b5/src/backend/utils/cache'\n> /usr/local/bin/gmake -C error SUBSYS.o\n> gmake[4]: Entering directory\n> `/home/postgres/postgresql-7.3b5/src/backend/utils/error' gmake[4]:\n> `SUBSYS.o' is up to date.\n> gmake[4]: Leaving directory\n> `/home/postgres/postgresql-7.3b5/src/backend/utils/error'\n> /usr/local/bin/gmake -C fmgr SUBSYS.o\n> gmake[4]: Entering directory\n> `/home/postgres/postgresql-7.3b5/src/backend/utils/fmgr' gmake[4]:\n> `SUBSYS.o' is up to date.\n> gmake[4]: Leaving directory\n> `/home/postgres/postgresql-7.3b5/src/backend/utils/fmgr'\n> /usr/local/bin/gmake -C hash SUBSYS.o\n> gmake[4]: Entering directory\n> `/home/postgres/postgresql-7.3b5/src/backend/utils/hash' gmake[4]:\n> `SUBSYS.o' is up to date.\n> gmake[4]: Leaving directory\n> `/home/postgres/postgresql-7.3b5/src/backend/utils/hash'\n> /usr/local/bin/gmake -C init SUBSYS.o\n> gmake[4]: Entering directory\n> `/home/postgres/postgresql-7.3b5/src/backend/utils/init' gmake[4]:\n> `SUBSYS.o' is up to date.\n> gmake[4]: Leaving directory\n> `/home/postgres/postgresql-7.3b5/src/backend/utils/init'\n> /usr/local/bin/gmake -C misc SUBSYS.o\n> gmake[4]: Entering directory\n> `/home/postgres/postgresql-7.3b5/src/backend/utils/misc' gmake[4]:\n> `SUBSYS.o' is up to date.\n> gmake[4]: Leaving directory\n> 
`/home/postgres/postgresql-7.3b5/src/backend/utils/misc'\n> /usr/local/bin/gmake -C mmgr SUBSYS.o\n> gmake[4]: Entering directory\n> `/home/postgres/postgresql-7.3b5/src/backend/utils/mmgr' gmake[4]:\n> `SUBSYS.o' is up to date.\n> gmake[4]: Leaving directory\n> `/home/postgres/postgresql-7.3b5/src/backend/utils/mmgr'\n> /usr/local/bin/gmake -C sort SUBSYS.o\n> gmake[4]: Entering directory\n> `/home/postgres/postgresql-7.3b5/src/backend/utils/sort' cc -O -K inline\n> -I../../../../src/include -I/usr/local/include -c tuplesort.c -o\n> tuplesort.o UX:acomp: ERREUR: \"tuplesort.c\", ligne 1854: \"inline\"\n> functions cannot use \"static\" identifier: myFunctionCall2 UX:acomp:\n> ERREUR: \"tuplesort.c\", ligne 1856: \"inline\" functions cannot use \"static\"\n> identifier: myFunctionCall2 UX:acomp: ERREUR: \"tuplesort.c\", ligne 1870:\n> \"inline\" functions cannot use \"static\" identifier: myFunctionCall2\n> UX:acomp: ERREUR: \"tuplesort.c\", ligne 1872: \"inline\" functions cannot\n> use \"static\" identifier: myFunctionCall2 UX:acomp: ERREUR: \"tuplesort.c\",\n> ligne 1885: \"inline\" functions cannot use \"static\" identifier:\n> myFunctionCall2 UX:acomp: ERREUR: \"tuplesort.c\", ligne 1897: \"inline\"\n> functions cannot use \"static\" identifier: myFunctionCall2 gmake[4]: ***\n> [tuplesort.o] Error 1\n> gmake[4]: Leaving directory\n> `/home/postgres/postgresql-7.3b5/src/backend/utils/sort' gmake[3]: ***\n> [sort-recursive] Error 2\n> gmake[3]: Leaving directory\n> `/home/postgres/postgresql-7.3b5/src/backend/utils' gmake[2]: ***\n> [utils-recursive] Error 2\n> gmake[2]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend'\n> gmake[1]: *** [all] Error 2\n> gmake[1]: Leaving directory `/home/postgres/postgresql-7.3b5/src'\n> gmake: *** [all] Error 2\n> *** Code d'erreur 2 (bu21)\n> UX:make: ERREUR: erreur irrémédiable.\n>\n> script done on Thu Nov 7 16:57:29 2002\n> It works OK with -Xb...\n>\n> Regards\n>>\n>> \t\t\tregards, tom lane\n>>\n>\n> --\n> Olivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\n> Quartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n> 31190 AUTERIVE +33-6-07-63-80-64 (GSM)\n> FRANCE Email: [email protected]\n> -------------------------------------------------------------------------\n> ----- Make your life a dream, make your dream a reality. (St Exupery)\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n", "msg_date": "Thu, 07 Nov 2002 10:07:37 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL supported platform report and a " }, { "msg_contents": "*WHAT??**\n\nthis directory has hust been created by de-taring postgresql-7.3b5.tar.gz\nfrom my own mirror dated nov 6 20:04...\n\n\nOn Thu, 7 Nov 2002, Larry Rosenman wrote:\n\n> Date: Thu, 07 Nov 2002 10:07:37 -0600\n> From: Larry Rosenman <[email protected]>\n> To: [email protected], Tom Lane <[email protected]>\n> Cc: Billy G. 
Allie <[email protected]>, [email protected],\n> [email protected]\n> Subject: Re: [PORTS] [HACKERS] PostgreSQL supported platform report and a \n> \n> It looks like you do **NOT** have B4 or B5....\n> \n> LER\n> \n> \n> --On Thursday, November 07, 2002 17:00:21 +0100 Olivier PRENANT \n> <[email protected]> wrote:\n> \n> > On Thu, 7 Nov 2002, Tom Lane wrote:\n> >\n> >> Date: Thu, 07 Nov 2002 10:21:25 -0500\n> >> From: Tom Lane <[email protected]>\n> >> To: [email protected]\n> >> Cc: Larry Rosenman <[email protected]>, Billy G. Allie <[email protected]>,\n> >> [email protected], [email protected]\n> >> Subject: Re: [PORTS] [HACKERS] PostgreSQL supported platform report and\n> >> a\n> >>\n> >> Olivier PRENANT <[email protected]> writes:\n> >> > Huh! I just tried to compile 7.3b5 without CFLAGS=-Xb, it still bugs\n> >> > the compiler...\n> >>\n> >> It won't get better if you don't show any details...\n> > Ok... (sorry) this is on UW 711 WITHOUT CFLAGS=-Xb:\n> > Script started on Thu Nov 7 16:57:05 2002\n> > $ cd postgresql*5\n> > $ make\n> > Using GNU make found at /usr/local/bin/gmake\n> > /usr/local/bin/gmake -C doc all\n> > gmake[1]: Entering directory `/home/postgres/postgresql-7.3b5/doc'\n> > gmake[1]: Nothing to be done for `all'.\n> > gmake[1]: Leaving directory `/home/postgres/postgresql-7.3b5/doc'\n> > /usr/local/bin/gmake -C src all\n> > gmake[1]: Entering directory `/home/postgres/postgresql-7.3b5/src'\n> > /usr/local/bin/gmake -C port all\n> > gmake[2]: Entering directory `/home/postgres/postgresql-7.3b5/src/port'\n> > gmake[2]: Nothing to be done for `all'.\n> > gmake[2]: Leaving directory `/home/postgres/postgresql-7.3b5/src/port'\n> > /usr/local/bin/gmake -C backend all\n> > gmake[2]: Entering directory `/home/postgres/postgresql-7.3b5/src/backend'\n> > /usr/local/bin/gmake -C ../../src/port all\n> > gmake[3]: Entering directory `/home/postgres/postgresql-7.3b5/src/port'\n> > gmake[3]: Nothing to be done for `all'.\n> > gmake[3]: Leaving directory `/home/postgres/postgresql-7.3b5/src/port'\n> > /usr/local/bin/gmake -C access all\n> > gmake[3]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/access' /usr/local/bin/gmake\n> > -C common SUBSYS.o\n> > gmake[4]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/access/common' gmake[4]:\n> > `SUBSYS.o' is up to date.\n> > gmake[4]: Leaving directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/access/common'\n> > /usr/local/bin/gmake -C gist SUBSYS.o\n> > gmake[4]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/access/gist' gmake[4]:\n> > `SUBSYS.o' is up to date.\n> > gmake[4]: Leaving directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/access/gist'\n> > /usr/local/bin/gmake -C hash SUBSYS.o\n> > gmake[4]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/access/hash' gmake[4]:\n> > `SUBSYS.o' is up to date.\n> > gmake[4]: Leaving directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/access/hash'\n> > /usr/local/bin/gmake -C heap SUBSYS.o\n> > gmake[4]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/access/heap' gmake[4]:\n> > `SUBSYS.o' is up to date.\n> > gmake[4]: Leaving directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/access/heap'\n> > /usr/local/bin/gmake -C index SUBSYS.o\n> > gmake[4]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/access/index' gmake[4]:\n> > `SUBSYS.o' is up to date.\n> > gmake[4]: Leaving directory\n> > 
`/home/postgres/postgresql-7.3b5/src/backend/access/index'\n> > /usr/local/bin/gmake -C nbtree SUBSYS.o\n> > gmake[4]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/access/nbtree' gmake[4]:\n> > `SUBSYS.o' is up to date.\n> > gmake[4]: Leaving directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/access/nbtree'\n> > /usr/local/bin/gmake -C rtree SUBSYS.o\n> > gmake[4]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/access/rtree' gmake[4]:\n> > `SUBSYS.o' is up to date.\n> > gmake[4]: Leaving directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/access/rtree'\n> > /usr/local/bin/gmake -C transam SUBSYS.o\n> > gmake[4]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/access/transam' gmake[4]:\n> > `SUBSYS.o' is up to date.\n> > gmake[4]: Leaving directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/access/transam' gmake[3]:\n> > Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/access'\n> > /usr/local/bin/gmake -C bootstrap all\n> > gmake[3]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/bootstrap' gmake[3]: Nothing\n> > to be done for `all'.\n> > gmake[3]: Leaving directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/bootstrap'\n> > /usr/local/bin/gmake -C catalog all\n> > gmake[3]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/catalog' gmake[3]: Nothing\n> > to be done for `all'.\n> > gmake[3]: Leaving directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/catalog'\n> > /usr/local/bin/gmake -C parser all\n> > gmake[3]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/parser' gmake[3]: Nothing to\n> > be done for `all'.\n> > gmake[3]: Leaving directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/parser' /usr/local/bin/gmake\n> > -C commands all\n> > gmake[3]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/commands' gmake[3]: Nothing\n> > to be done for `all'.\n> > gmake[3]: Leaving directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/commands'\n> > /usr/local/bin/gmake -C executor all\n> > gmake[3]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/executor' gmake[3]: Nothing\n> > to be done for `all'.\n> > gmake[3]: Leaving directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/executor'\n> > /usr/local/bin/gmake -C lib all\n> > gmake[3]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/lib' gmake[3]: Nothing to be\n> > done for `all'.\n> > gmake[3]: Leaving directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/lib' /usr/local/bin/gmake -C\n> > libpq all\n> > gmake[3]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/libpq' gmake[3]: Nothing to\n> > be done for `all'.\n> > gmake[3]: Leaving directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/libpq' /usr/local/bin/gmake\n> > -C main all\n> > gmake[3]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/main' gmake[3]: Nothing to\n> > be done for `all'.\n> > gmake[3]: Leaving directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/main' /usr/local/bin/gmake\n> > -C nodes all\n> > gmake[3]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/nodes' gmake[3]: Nothing to\n> > be done for `all'.\n> > gmake[3]: Leaving directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/nodes' /usr/local/bin/gmake\n> > -C optimizer all\n> > gmake[3]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/optimizer'\n> > 
/usr/local/bin/gmake -C geqo SUBSYS.o\n> > gmake[4]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/optimizer/geqo' gmake[4]:\n> > `SUBSYS.o' is up to date.\n> > gmake[4]: Leaving directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/optimizer/geqo'\n> > /usr/local/bin/gmake -C path SUBSYS.o\n> > gmake[4]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/optimizer/path' gmake[4]:\n> > `SUBSYS.o' is up to date.\n> > gmake[4]: Leaving directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/optimizer/path'\n> > /usr/local/bin/gmake -C plan SUBSYS.o\n> > gmake[4]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/optimizer/plan' gmake[4]:\n> > `SUBSYS.o' is up to date.\n> > gmake[4]: Leaving directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/optimizer/plan'\n> > /usr/local/bin/gmake -C prep SUBSYS.o\n> > gmake[4]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/optimizer/prep' gmake[4]:\n> > `SUBSYS.o' is up to date.\n> > gmake[4]: Leaving directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/optimizer/prep'\n> > /usr/local/bin/gmake -C util SUBSYS.o\n> > gmake[4]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/optimizer/util' gmake[4]:\n> > `SUBSYS.o' is up to date.\n> > gmake[4]: Leaving directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/optimizer/util' gmake[3]:\n> > Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/optimizer'\n> > /usr/local/bin/gmake -C port all\n> > gmake[3]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/port' gmake[3]: Nothing to\n> > be done for `all'.\n> > gmake[3]: Leaving directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/port' /usr/local/bin/gmake\n> > -C postmaster all\n> > gmake[3]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/postmaster' gmake[3]:\n> > Nothing to be done for `all'.\n> > gmake[3]: Leaving directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/postmaster'\n> > /usr/local/bin/gmake -C regex all\n> > gmake[3]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/regex' gmake[3]: Nothing to\n> > be done for `all'.\n> > gmake[3]: Leaving directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/regex' /usr/local/bin/gmake\n> > -C rewrite all\n> > gmake[3]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/rewrite' gmake[3]: Nothing\n> > to be done for `all'.\n> > gmake[3]: Leaving directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/rewrite'\n> > /usr/local/bin/gmake -C storage all\n> > gmake[3]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/storage'\n> > /usr/local/bin/gmake -C buffer SUBSYS.o\n> > gmake[4]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/storage/buffer' gmake[4]:\n> > `SUBSYS.o' is up to date.\n> > gmake[4]: Leaving directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/storage/buffer'\n> > /usr/local/bin/gmake -C file SUBSYS.o\n> > gmake[4]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/storage/file' gmake[4]:\n> > `SUBSYS.o' is up to date.\n> > gmake[4]: Leaving directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/storage/file'\n> > /usr/local/bin/gmake -C freespace SUBSYS.o\n> > gmake[4]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/storage/freespace' gmake[4]:\n> > `SUBSYS.o' is up to date.\n> > gmake[4]: Leaving directory\n> > 
`/home/postgres/postgresql-7.3b5/src/backend/storage/freespace'\n> > /usr/local/bin/gmake -C ipc SUBSYS.o\n> > gmake[4]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/storage/ipc' gmake[4]:\n> > `SUBSYS.o' is up to date.\n> > gmake[4]: Leaving directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/storage/ipc'\n> > /usr/local/bin/gmake -C large_object SUBSYS.o\n> > gmake[4]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/storage/large_object'\n> > gmake[4]: `SUBSYS.o' is up to date.\n> > gmake[4]: Leaving directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/storage/large_object'\n> > /usr/local/bin/gmake -C lmgr SUBSYS.o\n> > gmake[4]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/storage/lmgr' gmake[4]:\n> > `SUBSYS.o' is up to date.\n> > gmake[4]: Leaving directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/storage/lmgr'\n> > /usr/local/bin/gmake -C page SUBSYS.o\n> > gmake[4]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/storage/page' gmake[4]:\n> > `SUBSYS.o' is up to date.\n> > gmake[4]: Leaving directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/storage/page'\n> > /usr/local/bin/gmake -C smgr SUBSYS.o\n> > gmake[4]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/storage/smgr' gmake[4]:\n> > `SUBSYS.o' is up to date.\n> > gmake[4]: Leaving directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/storage/smgr' gmake[3]:\n> > Leaving directory `/home/postgres/postgresql-7.3b5/src/backend/storage'\n> > /usr/local/bin/gmake -C tcop all\n> > gmake[3]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/tcop' gmake[3]: Nothing to\n> > be done for `all'.\n> > gmake[3]: Leaving directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/tcop' /usr/local/bin/gmake\n> > -C utils all\n> > gmake[3]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/utils' /usr/local/bin/gmake\n> > -C adt SUBSYS.o\n> > gmake[4]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/utils/adt' gmake[4]:\n> > `SUBSYS.o' is up to date.\n> > gmake[4]: Leaving directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/utils/adt'\n> > /usr/local/bin/gmake -C cache SUBSYS.o\n> > gmake[4]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/utils/cache' gmake[4]:\n> > `SUBSYS.o' is up to date.\n> > gmake[4]: Leaving directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/utils/cache'\n> > /usr/local/bin/gmake -C error SUBSYS.o\n> > gmake[4]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/utils/error' gmake[4]:\n> > `SUBSYS.o' is up to date.\n> > gmake[4]: Leaving directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/utils/error'\n> > /usr/local/bin/gmake -C fmgr SUBSYS.o\n> > gmake[4]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/utils/fmgr' gmake[4]:\n> > `SUBSYS.o' is up to date.\n> > gmake[4]: Leaving directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/utils/fmgr'\n> > /usr/local/bin/gmake -C hash SUBSYS.o\n> > gmake[4]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/utils/hash' gmake[4]:\n> > `SUBSYS.o' is up to date.\n> > gmake[4]: Leaving directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/utils/hash'\n> > /usr/local/bin/gmake -C init SUBSYS.o\n> > gmake[4]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/utils/init' gmake[4]:\n> > `SUBSYS.o' is up to date.\n> > gmake[4]: Leaving directory\n> > 
`/home/postgres/postgresql-7.3b5/src/backend/utils/init'\n> > /usr/local/bin/gmake -C misc SUBSYS.o\n> > gmake[4]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/utils/misc' gmake[4]:\n> > `SUBSYS.o' is up to date.\n> > gmake[4]: Leaving directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/utils/misc'\n> > /usr/local/bin/gmake -C mmgr SUBSYS.o\n> > gmake[4]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/utils/mmgr' gmake[4]:\n> > `SUBSYS.o' is up to date.\n> > gmake[4]: Leaving directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/utils/mmgr'\n> > /usr/local/bin/gmake -C sort SUBSYS.o\n> > gmake[4]: Entering directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/utils/sort' cc -O -K inline\n> > -I../../../../src/include -I/usr/local/include -c tuplesort.c -o\n> > tuplesort.o UX:acomp: ERREUR: \"tuplesort.c\", ligne 1854: \"inline\"\n> > functions cannot use \"static\" identifier: myFunctionCall2 UX:acomp:\n> > ERREUR: \"tuplesort.c\", ligne 1856: \"inline\" functions cannot use \"static\"\n> > identifier: myFunctionCall2 UX:acomp: ERREUR: \"tuplesort.c\", ligne 1870:\n> > \"inline\" functions cannot use \"static\" identifier: myFunctionCall2\n> > UX:acomp: ERREUR: \"tuplesort.c\", ligne 1872: \"inline\" functions cannot\n> > use \"static\" identifier: myFunctionCall2 UX:acomp: ERREUR: \"tuplesort.c\",\n> > ligne 1885: \"inline\" functions cannot use \"static\" identifier:\n> > myFunctionCall2 UX:acomp: ERREUR: \"tuplesort.c\", ligne 1897: \"inline\"\n> > functions cannot use \"static\" identifier: myFunctionCall2 gmake[4]: ***\n> > [tuplesort.o] Error 1\n> > gmake[4]: Leaving directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/utils/sort' gmake[3]: ***\n> > [sort-recursive] Error 2\n> > gmake[3]: Leaving directory\n> > `/home/postgres/postgresql-7.3b5/src/backend/utils' gmake[2]: ***\n> > [utils-recursive] Error 2\n> > gmake[2]: Leaving directory `/home/postgres/postgresql-7.3b5/src/backend'\n> > gmake[1]: *** [all] Error 2\n> > gmake[1]: Leaving directory `/home/postgres/postgresql-7.3b5/src'\n> > gmake: *** [all] Error 2\n> > *** Code d'erreur 2 (bu21)\n> > UX:make: ERREUR: erreur irr�m�diable.\n> >\n> > script done on Thu Nov 7 16:57:29 2002\n> > It works OK with -Xb...\n> >\n> > Regards\n> >>\n> >> \t\t\tregards, tom lane\n> >>\n> >\n> > --\n> > Olivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\n> > Quartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n> > 31190 AUTERIVE +33-6-07-63-80-64 (GSM)\n> > FRANCE Email: [email protected]\n> > -------------------------------------------------------------------------\n> > ----- Make your life a dream, make your dream a reality. (St Exupery)\n> \n> \n> \n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: [email protected]\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n", "msg_date": "Thu, 7 Nov 2002 17:15:08 +0100 (MET)", "msg_from": "Olivier PRENANT <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL supported platform report and a " }, { "msg_contents": "Olivier PRENANT <[email protected]> writes:\n> *WHAT??**\n> this directory has hust been created by de-taring postgresql-7.3b5.tar.gz\n> from my own mirror dated nov 6 20:04...\n\nWell, there's something darn weird here. Is\nsrc/backend/utils/sort/tuplesort.c version 1.28 or 1.29? 
At line 1838,\ndo you see\n\tinline int32\n\tApplySortFunction(FmgrInfo *sortFunction, SortFunctionKind kind,\nor\n\tstatic inline int32\n\tinlineApplySortFunction(FmgrInfo *sortFunction, SortFunctionKind kind,\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 07 Nov 2002 11:26:42 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL supported platform report and a " }, { "msg_contents": "I see the latter in both b4 and b5!\n\nI've just relaunched my mirroring procedure and it did'nt pick another b4\nor b5!\n\nWhat happens??\n\nRegards,\nOn Thu, 7 Nov 2002, Tom Lane wrote:\n\n> Date: Thu, 07 Nov 2002 11:26:42 -0500\n> From: Tom Lane <[email protected]>\n> To: [email protected]\n> Cc: Larry Rosenman <[email protected]>, Billy G. Allie <[email protected]>,\n> [email protected], [email protected]\n> Subject: Re: [PORTS] [HACKERS] PostgreSQL supported platform report and a \n> \n> Olivier PRENANT <[email protected]> writes:\n> > *WHAT??**\n> > this directory has hust been created by de-taring postgresql-7.3b5.tar.gz\n> > from my own mirror dated nov 6 20:04...\n> \n> Well, there's something darn weird here. Is\n> src/backend/utils/sort/tuplesort.c version 1.28 or 1.29? At line 1838,\n> do you see\n> \tinline int32\n> \tApplySortFunction(FmgrInfo *sortFunction, SortFunctionKind kind,\n> or\n> \tstatic inline int32\n> \tinlineApplySortFunction(FmgrInfo *sortFunction, SortFunctionKind kind,\n> \n> \t\t\tregards, tom lane\n> \n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: [email protected]\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n", "msg_date": "Thu, 7 Nov 2002 17:34:49 +0100 (MET)", "msg_from": "Olivier PRENANT <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL supported platform report and a " }, { "msg_contents": "It **LOOKS** right. I'm about to double check it on 7.1.3.\n\nOlivier, is this the 7.1.1b FS Compiler?\n\nI wonder if a bug fix made it in...\n\nWierd.\n\n\n\n--On Thursday, November 07, 2002 17:34:49 +0100 Olivier PRENANT \n<[email protected]> wrote:\n\n> I see the latter in both b4 and b5!\n>\n> I've just relaunched my mirroring procedure and it did'nt pick another b4\n> or b5!\n>\n> What happens??\n>\n> Regards,\n> On Thu, 7 Nov 2002, Tom Lane wrote:\n>\n>> Date: Thu, 07 Nov 2002 11:26:42 -0500\n>> From: Tom Lane <[email protected]>\n>> To: [email protected]\n>> Cc: Larry Rosenman <[email protected]>, Billy G. Allie <[email protected]>,\n>> [email protected], [email protected]\n>> Subject: Re: [PORTS] [HACKERS] PostgreSQL supported platform report and\n>> a\n>>\n>> Olivier PRENANT <[email protected]> writes:\n>> > *WHAT??**\n>> > this directory has hust been created by de-taring\n>> > postgresql-7.3b5.tar.gz from my own mirror dated nov 6 20:04...\n>>\n>> Well, there's something darn weird here. Is\n>> src/backend/utils/sort/tuplesort.c version 1.28 or 1.29? 
At line 1838,\n>> do you see\n>> \tinline int32\n>> \tApplySortFunction(FmgrInfo *sortFunction, SortFunctionKind kind,\n>> or\n>> \tstatic inline int32\n>> \tinlineApplySortFunction(FmgrInfo *sortFunction, SortFunctionKind kind,\n>>\n>> \t\t\tregards, tom lane\n>>\n>\n> --\n> Olivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\n> Quartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n> 31190 AUTERIVE +33-6-07-63-80-64 (GSM)\n> FRANCE Email: [email protected]\n> -------------------------------------------------------------------------\n> ----- Make your life a dream, make your dream a reality. (St Exupery)\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n", "msg_date": "Thu, 07 Nov 2002 10:37:36 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL supported platform report and a " }, { "msg_contents": "What's FS, it(s the 7.1.1b compiler yes.\n\nI don't mind having CFLAGS=-Xb though, done it for php already...\nBy everyone says this should go off so..\n\nRegards\nOn Thu, 7 Nov 2002, Larry Rosenman wrote:\n\n> Date: Thu, 07 Nov 2002 10:37:36 -0600\n> From: Larry Rosenman <[email protected]>\n> To: [email protected], Tom Lane <[email protected]>\n> Cc: Billy G. Allie <[email protected]>, [email protected],\n> [email protected]\n> Subject: Re: [PORTS] [HACKERS] PostgreSQL supported platform report and a \n> \n> It **LOOKS** right. I'm about to double check it on 7.1.3.\n> \n> Olivier, is this the 7.1.1b FS Compiler?\n> \n> I wonder if a bug fix made it in...\n> \n> Wierd.\n> \n> \n> \n> --On Thursday, November 07, 2002 17:34:49 +0100 Olivier PRENANT \n> <[email protected]> wrote:\n> \n> > I see the latter in both b4 and b5!\n> >\n> > I've just relaunched my mirroring procedure and it did'nt pick another b4\n> > or b5!\n> >\n> > What happens??\n> >\n> > Regards,\n> > On Thu, 7 Nov 2002, Tom Lane wrote:\n> >\n> >> Date: Thu, 07 Nov 2002 11:26:42 -0500\n> >> From: Tom Lane <[email protected]>\n> >> To: [email protected]\n> >> Cc: Larry Rosenman <[email protected]>, Billy G. Allie <[email protected]>,\n> >> [email protected], [email protected]\n> >> Subject: Re: [PORTS] [HACKERS] PostgreSQL supported platform report and\n> >> a\n> >>\n> >> Olivier PRENANT <[email protected]> writes:\n> >> > *WHAT??**\n> >> > this directory has hust been created by de-taring\n> >> > postgresql-7.3b5.tar.gz from my own mirror dated nov 6 20:04...\n> >>\n> >> Well, there's something darn weird here. Is\n> >> src/backend/utils/sort/tuplesort.c version 1.28 or 1.29? At line 1838,\n> >> do you see\n> >> \tinline int32\n> >> \tApplySortFunction(FmgrInfo *sortFunction, SortFunctionKind kind,\n> >> or\n> >> \tstatic inline int32\n> >> \tinlineApplySortFunction(FmgrInfo *sortFunction, SortFunctionKind kind,\n> >>\n> >> \t\t\tregards, tom lane\n> >>\n> >\n> > --\n> > Olivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\n> > Quartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n> > 31190 AUTERIVE +33-6-07-63-80-64 (GSM)\n> > FRANCE Email: [email protected]\n> > -------------------------------------------------------------------------\n> > ----- Make your life a dream, make your dream a reality. 
(St Exupery)\n> \n> \n> \n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: [email protected]\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n", "msg_date": "Thu, 7 Nov 2002 17:40:24 +0100 (MET)", "msg_from": "Olivier PRENANT <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL supported platform report and a " }, { "msg_contents": "FS== Feature Supplement.\n\nI've got a compile running with the 7.3b5 tarball on my 7.1.3 system (with \nthe newer\ncompiler).\n\nWe'll see. :-)\n\n\n\n--On Thursday, November 07, 2002 17:40:24 +0100 Olivier PRENANT \n<[email protected]> wrote:\n\n> What's FS, it(s the 7.1.1b compiler yes.\n>\n> I don't mind having CFLAGS=-Xb though, done it for php already...\n> By everyone says this should go off so..\n>\n> Regards\n> On Thu, 7 Nov 2002, Larry Rosenman wrote:\n>\n>> Date: Thu, 07 Nov 2002 10:37:36 -0600\n>> From: Larry Rosenman <[email protected]>\n>> To: [email protected], Tom Lane <[email protected]>\n>> Cc: Billy G. Allie <[email protected]>, [email protected],\n>> [email protected]\n>> Subject: Re: [PORTS] [HACKERS] PostgreSQL supported platform report and\n>> a\n>>\n>> It **LOOKS** right. I'm about to double check it on 7.1.3.\n>>\n>> Olivier, is this the 7.1.1b FS Compiler?\n>>\n>> I wonder if a bug fix made it in...\n>>\n>> Wierd.\n>>\n>>\n>>\n>> --On Thursday, November 07, 2002 17:34:49 +0100 Olivier PRENANT\n>> <[email protected]> wrote:\n>>\n>> > I see the latter in both b4 and b5!\n>> >\n>> > I've just relaunched my mirroring procedure and it did'nt pick another\n>> > b4 or b5!\n>> >\n>> > What happens??\n>> >\n>> > Regards,\n>> > On Thu, 7 Nov 2002, Tom Lane wrote:\n>> >\n>> >> Date: Thu, 07 Nov 2002 11:26:42 -0500\n>> >> From: Tom Lane <[email protected]>\n>> >> To: [email protected]\n>> >> Cc: Larry Rosenman <[email protected]>, Billy G. Allie\n>> >> <[email protected]>, [email protected],\n>> >> [email protected]\n>> >> Subject: Re: [PORTS] [HACKERS] PostgreSQL supported platform report\n>> >> and a\n>> >>\n>> >> Olivier PRENANT <[email protected]> writes:\n>> >> > *WHAT??**\n>> >> > this directory has hust been created by de-taring\n>> >> > postgresql-7.3b5.tar.gz from my own mirror dated nov 6 20:04...\n>> >>\n>> >> Well, there's something darn weird here. Is\n>> >> src/backend/utils/sort/tuplesort.c version 1.28 or 1.29? At line\n>> >> 1838, do you see\n>> >> \tinline int32\n>> >> \tApplySortFunction(FmgrInfo *sortFunction, SortFunctionKind kind,\n>> >> or\n>> >> \tstatic inline int32\n>> >> \tinlineApplySortFunction(FmgrInfo *sortFunction, SortFunctionKind\n>> >> \tkind,\n>> >>\n>> >> \t\t\tregards, tom lane\n>> >>\n>> >\n>> > --\n>> > Olivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\n>> > Quartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n>> > 31190 AUTERIVE +33-6-07-63-80-64 (GSM)\n>> > FRANCE Email: [email protected]\n>> > ----------------------------------------------------------------------\n>> > --- ----- Make your life a dream, make your dream a reality. 
(St\n>> > Exupery)\n>>\n>>\n>>\n>\n> --\n> Olivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\n> Quartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n> 31190 AUTERIVE +33-6-07-63-80-64 (GSM)\n> FRANCE Email: [email protected]\n> -------------------------------------------------------------------------\n> ----- Make your life a dream, make your dream a reality. (St Exupery)\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n", "msg_date": "Thu, 07 Nov 2002 10:42:53 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL supported platform report and a " }, { "msg_contents": "cc -O -g -I../../../../src/include -I/usr/local/include -c -o tuplesort.o \ntuplesort.c\nUX:cc: WARNING: debugging and optimization mutually exclusive; -O disabled\n\nIt still passes here. I really wonder if they fixed something in the 4.1 \ncompiler. IIRC\nthe 7.1.1b compiler is 4.0.\n\nTruly wierd.\n\nLER\n\n\n--On Thursday, November 07, 2002 17:40:24 +0100 Olivier PRENANT \n<[email protected]> wrote:\n\n> What's FS, it(s the 7.1.1b compiler yes.\n>\n> I don't mind having CFLAGS=-Xb though, done it for php already...\n> By everyone says this should go off so..\n>\n> Regards\n> On Thu, 7 Nov 2002, Larry Rosenman wrote:\n>\n>> Date: Thu, 07 Nov 2002 10:37:36 -0600\n>> From: Larry Rosenman <[email protected]>\n>> To: [email protected], Tom Lane <[email protected]>\n>> Cc: Billy G. Allie <[email protected]>, [email protected],\n>> [email protected]\n>> Subject: Re: [PORTS] [HACKERS] PostgreSQL supported platform report and\n>> a\n>>\n>> It **LOOKS** right. I'm about to double check it on 7.1.3.\n>>\n>> Olivier, is this the 7.1.1b FS Compiler?\n>>\n>> I wonder if a bug fix made it in...\n>>\n>> Wierd.\n>>\n>>\n>>\n>> --On Thursday, November 07, 2002 17:34:49 +0100 Olivier PRENANT\n>> <[email protected]> wrote:\n>>\n>> > I see the latter in both b4 and b5!\n>> >\n>> > I've just relaunched my mirroring procedure and it did'nt pick another\n>> > b4 or b5!\n>> >\n>> > What happens??\n>> >\n>> > Regards,\n>> > On Thu, 7 Nov 2002, Tom Lane wrote:\n>> >\n>> >> Date: Thu, 07 Nov 2002 11:26:42 -0500\n>> >> From: Tom Lane <[email protected]>\n>> >> To: [email protected]\n>> >> Cc: Larry Rosenman <[email protected]>, Billy G. Allie\n>> >> <[email protected]>, [email protected],\n>> >> [email protected]\n>> >> Subject: Re: [PORTS] [HACKERS] PostgreSQL supported platform report\n>> >> and a\n>> >>\n>> >> Olivier PRENANT <[email protected]> writes:\n>> >> > *WHAT??**\n>> >> > this directory has hust been created by de-taring\n>> >> > postgresql-7.3b5.tar.gz from my own mirror dated nov 6 20:04...\n>> >>\n>> >> Well, there's something darn weird here. Is\n>> >> src/backend/utils/sort/tuplesort.c version 1.28 or 1.29? At line\n>> >> 1838, do you see\n>> >> \tinline int32\n>> >> \tApplySortFunction(FmgrInfo *sortFunction, SortFunctionKind kind,\n>> >> or\n>> >> \tstatic inline int32\n>> >> \tinlineApplySortFunction(FmgrInfo *sortFunction, SortFunctionKind\n>> >> \tkind,\n>> >>\n>> >> \t\t\tregards, tom lane\n>> >>\n>> >\n>> > --\n>> > Olivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\n>> > Quartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n>> > 31190 AUTERIVE +33-6-07-63-80-64 (GSM)\n>> > FRANCE Email: [email protected]\n>> > ----------------------------------------------------------------------\n>> > --- ----- Make your life a dream, make your dream a reality. 
(St\n>> > Exupery)\n>>\n>>\n>>\n>\n> --\n> Olivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\n> Quartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n> 31190 AUTERIVE +33-6-07-63-80-64 (GSM)\n> FRANCE Email: [email protected]\n> -------------------------------------------------------------------------\n> ----- Make your life a dream, make your dream a reality. (St Exupery)\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n", "msg_date": "Thu, 07 Nov 2002 10:45:13 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL supported platform report and a " }, { "msg_contents": "Haha!!!\n\nIt passes (b4) on 800 and not on uw 711..\n\nLarry, should I install 800 SDK on 711?\n\nOn Thu, 7 Nov 2002, Larry Rosenman wrote:\n\n> Date: Thu, 07 Nov 2002 10:45:13 -0600\n> From: Larry Rosenman <[email protected]>\n> To: [email protected]\n> Cc: Tom Lane <[email protected]>, Billy G. Allie <[email protected]>,\n> [email protected], [email protected]\n> Subject: Re: [PORTS] [HACKERS] PostgreSQL supported platform report and a \n> \n> cc -O -g -I../../../../src/include -I/usr/local/include -c -o tuplesort.o \n> tuplesort.c\n> UX:cc: WARNING: debugging and optimization mutually exclusive; -O disabled\n> \n> It still passes here. I really wonder if they fixed something in the 4.1 \n> compiler. IIRC\n> the 7.1.1b compiler is 4.0.\n> \n> Truly wierd.\n> \n> LER\n> \n> \n> --On Thursday, November 07, 2002 17:40:24 +0100 Olivier PRENANT \n> <[email protected]> wrote:\n> \n> > What's FS, it(s the 7.1.1b compiler yes.\n> >\n> > I don't mind having CFLAGS=-Xb though, done it for php already...\n> > By everyone says this should go off so..\n> >\n> > Regards\n> > On Thu, 7 Nov 2002, Larry Rosenman wrote:\n> >\n> >> Date: Thu, 07 Nov 2002 10:37:36 -0600\n> >> From: Larry Rosenman <[email protected]>\n> >> To: [email protected], Tom Lane <[email protected]>\n> >> Cc: Billy G. Allie <[email protected]>, [email protected],\n> >> [email protected]\n> >> Subject: Re: [PORTS] [HACKERS] PostgreSQL supported platform report and\n> >> a\n> >>\n> >> It **LOOKS** right. I'm about to double check it on 7.1.3.\n> >>\n> >> Olivier, is this the 7.1.1b FS Compiler?\n> >>\n> >> I wonder if a bug fix made it in...\n> >>\n> >> Wierd.\n> >>\n> >>\n> >>\n> >> --On Thursday, November 07, 2002 17:34:49 +0100 Olivier PRENANT\n> >> <[email protected]> wrote:\n> >>\n> >> > I see the latter in both b4 and b5!\n> >> >\n> >> > I've just relaunched my mirroring procedure and it did'nt pick another\n> >> > b4 or b5!\n> >> >\n> >> > What happens??\n> >> >\n> >> > Regards,\n> >> > On Thu, 7 Nov 2002, Tom Lane wrote:\n> >> >\n> >> >> Date: Thu, 07 Nov 2002 11:26:42 -0500\n> >> >> From: Tom Lane <[email protected]>\n> >> >> To: [email protected]\n> >> >> Cc: Larry Rosenman <[email protected]>, Billy G. Allie\n> >> >> <[email protected]>, [email protected],\n> >> >> [email protected]\n> >> >> Subject: Re: [PORTS] [HACKERS] PostgreSQL supported platform report\n> >> >> and a\n> >> >>\n> >> >> Olivier PRENANT <[email protected]> writes:\n> >> >> > *WHAT??**\n> >> >> > this directory has hust been created by de-taring\n> >> >> > postgresql-7.3b5.tar.gz from my own mirror dated nov 6 20:04...\n> >> >>\n> >> >> Well, there's something darn weird here. Is\n> >> >> src/backend/utils/sort/tuplesort.c version 1.28 or 1.29? 
At line\n> >> >> 1838, do you see\n> >> >> \tinline int32\n> >> >> \tApplySortFunction(FmgrInfo *sortFunction, SortFunctionKind kind,\n> >> >> or\n> >> >> \tstatic inline int32\n> >> >> \tinlineApplySortFunction(FmgrInfo *sortFunction, SortFunctionKind\n> >> >> \tkind,\n> >> >>\n> >> >> \t\t\tregards, tom lane\n> >> >>\n> >> >\n> >> > --\n> >> > Olivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\n> >> > Quartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n> >> > 31190 AUTERIVE +33-6-07-63-80-64 (GSM)\n> >> > FRANCE Email: [email protected]\n> >> > ----------------------------------------------------------------------\n> >> > --- ----- Make your life a dream, make your dream a reality. (St\n> >> > Exupery)\n> >>\n> >>\n> >>\n> >\n> > --\n> > Olivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\n> > Quartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n> > 31190 AUTERIVE +33-6-07-63-80-64 (GSM)\n> > FRANCE Email: [email protected]\n> > -------------------------------------------------------------------------\n> > ----- Make your life a dream, make your dream a reality. (St Exupery)\n> \n> \n> \n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: [email protected]\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n", "msg_date": "Thu, 7 Nov 2002 18:02:51 +0100 (MET)", "msg_from": "Olivier PRENANT <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL supported platform report and a " }, { "msg_contents": "\n\n--On Thursday, November 07, 2002 18:02:51 +0100 Olivier PRENANT \n<[email protected]> wrote:\n\n> Haha!!!\n>\n> It passes (b4) on 800 and not on uw 711..\n>\n> Larry, should I install 800 SDK on 711?\nYes.\n\n\n>\n> On Thu, 7 Nov 2002, Larry Rosenman wrote:\n>\n>> Date: Thu, 07 Nov 2002 10:45:13 -0600\n>> From: Larry Rosenman <[email protected]>\n>> To: [email protected]\n>> Cc: Tom Lane <[email protected]>, Billy G. Allie <[email protected]>,\n>> [email protected], [email protected]\n>> Subject: Re: [PORTS] [HACKERS] PostgreSQL supported platform report and\n>> a\n>>\n>> cc -O -g -I../../../../src/include -I/usr/local/include -c -o\n>> tuplesort.o tuplesort.c\n>> UX:cc: WARNING: debugging and optimization mutually exclusive; -O\n>> disabled\n>>\n>> It still passes here. I really wonder if they fixed something in the\n>> 4.1 compiler. IIRC\n>> the 7.1.1b compiler is 4.0.\n>>\n>> Truly wierd.\n>>\n>> LER\n>>\n>>\n>> --On Thursday, November 07, 2002 17:40:24 +0100 Olivier PRENANT\n>> <[email protected]> wrote:\n>>\n>> > What's FS, it(s the 7.1.1b compiler yes.\n>> >\n>> > I don't mind having CFLAGS=-Xb though, done it for php already...\n>> > By everyone says this should go off so..\n>> >\n>> > Regards\n>> > On Thu, 7 Nov 2002, Larry Rosenman wrote:\n>> >\n>> >> Date: Thu, 07 Nov 2002 10:37:36 -0600\n>> >> From: Larry Rosenman <[email protected]>\n>> >> To: [email protected], Tom Lane <[email protected]>\n>> >> Cc: Billy G. Allie <[email protected]>, [email protected],\n>> >> [email protected]\n>> >> Subject: Re: [PORTS] [HACKERS] PostgreSQL supported platform report\n>> >> and a\n>> >>\n>> >> It **LOOKS** right. 
I'm about to double check it on 7.1.3.\n>> >>\n>> >> Olivier, is this the 7.1.1b FS Compiler?\n>> >>\n>> >> I wonder if a bug fix made it in...\n>> >>\n>> >> Wierd.\n>> >>\n>> >>\n>> >>\n>> >> --On Thursday, November 07, 2002 17:34:49 +0100 Olivier PRENANT\n>> >> <[email protected]> wrote:\n>> >>\n>> >> > I see the latter in both b4 and b5!\n>> >> >\n>> >> > I've just relaunched my mirroring procedure and it did'nt pick\n>> >> > another b4 or b5!\n>> >> >\n>> >> > What happens??\n>> >> >\n>> >> > Regards,\n>> >> > On Thu, 7 Nov 2002, Tom Lane wrote:\n>> >> >\n>> >> >> Date: Thu, 07 Nov 2002 11:26:42 -0500\n>> >> >> From: Tom Lane <[email protected]>\n>> >> >> To: [email protected]\n>> >> >> Cc: Larry Rosenman <[email protected]>, Billy G. Allie\n>> >> >> <[email protected]>, [email protected],\n>> >> >> [email protected]\n>> >> >> Subject: Re: [PORTS] [HACKERS] PostgreSQL supported platform report\n>> >> >> and a\n>> >> >>\n>> >> >> Olivier PRENANT <[email protected]> writes:\n>> >> >> > *WHAT??**\n>> >> >> > this directory has hust been created by de-taring\n>> >> >> > postgresql-7.3b5.tar.gz from my own mirror dated nov 6 20:04...\n>> >> >>\n>> >> >> Well, there's something darn weird here. Is\n>> >> >> src/backend/utils/sort/tuplesort.c version 1.28 or 1.29? At line\n>> >> >> 1838, do you see\n>> >> >> \tinline int32\n>> >> >> \tApplySortFunction(FmgrInfo *sortFunction, SortFunctionKind kind,\n>> >> >> or\n>> >> >> \tstatic inline int32\n>> >> >> \tinlineApplySortFunction(FmgrInfo *sortFunction, SortFunctionKind\n>> >> >> \tkind,\n>> >> >>\n>> >> >> \t\t\tregards, tom lane\n>> >> >>\n>> >> >\n>> >> > --\n>> >> > Olivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\n>> >> > Quartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n>> >> > 31190 AUTERIVE +33-6-07-63-80-64 (GSM)\n>> >> > FRANCE Email: [email protected]\n>> >> > -------------------------------------------------------------------\n>> >> > --- --- ----- Make your life a dream, make your dream a reality. (St\n>> >> > Exupery)\n>> >>\n>> >>\n>> >>\n>> >\n>> > --\n>> > Olivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\n>> > Quartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n>> > 31190 AUTERIVE +33-6-07-63-80-64 (GSM)\n>> > FRANCE Email: [email protected]\n>> > ----------------------------------------------------------------------\n>> > --- ----- Make your life a dream, make your dream a reality. (St\n>> > Exupery)\n>>\n>>\n>>\n>\n> --\n> Olivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\n> Quartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n> 31190 AUTERIVE +33-6-07-63-80-64 (GSM)\n> FRANCE Email: [email protected]\n> -------------------------------------------------------------------------\n> ----- Make your life a dream, make your dream a reality. 
(St Exupery)\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n", "msg_date": "Thu, 07 Nov 2002 11:23:13 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL supported platform report and a " }, { "msg_contents": "\n\n--On Thursday, November 07, 2002 22:21:56 +0100 Peter Eisentraut \n<[email protected]> wrote:\n\n> Olivier PRENANT writes:\n>\n>> I don't mind having CFLAGS=-Xb though, done it for php already...\n>> By everyone says this should go off so..\n>\n> The idea of the platform testing is not to determine whether you can\n> compile PostgreSQL after performing a secret dance. If it doesn't compile\n> with the default options, please don't report it as supported.\nIt DOES compile out of the box now on 8.0.0(7.1.2) and 7.1.3. Apparently a \ncompiler\nfix between the 7.1.1b FS and 7.1.2.\n\nThe -Xb switch is NOT a secret dance. It's needed for LOTS of open source \nstuff.\n\nSee the discussion from the Caldera folks last week.\n\nTom's fix fixed the defaults for 7.1.2+\n\n\n>\n> --\n> Peter Eisentraut [email protected]\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n", "msg_date": "Thu, 07 Nov 2002 15:17:28 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL supported platform report and a " }, { "msg_contents": "Olivier PRENANT writes:\n\n> I don't mind having CFLAGS=-Xb though, done it for php already...\n> By everyone says this should go off so..\n\nThe idea of the platform testing is not to determine whether you can\ncompile PostgreSQL after performing a secret dance. If it doesn't compile\nwith the default options, please don't report it as supported.\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Thu, 7 Nov 2002 22:21:56 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL supported platform report and a " }, { "msg_contents": "Bruce Momjian writes:\n\n> I am fine with this because it only touches unixware-specific stuff,\n\nThis is an entirely new feature, so it's inappropriate to do now. And if\nwe do it, we should do it for all platforms.\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Thu, 7 Nov 2002 22:22:26 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL supported platform report and a" }, { "msg_contents": "Larry Rosenman <[email protected]> writes:\n> It DOES compile out of the box now on 8.0.0(7.1.2) and 7.1.3. Apparently a \n> compiler\n> fix between the 7.1.1b FS and 7.1.2.\n\nWell, this is what the REMARKS column is for in the supported-platform\nlist. Seems we need a comment like \"for older compiler versions, you\nmay need to add -Xb to CFLAGS\". 
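
In practice the workaround amounts to no more than this (a sketch only,
assuming one of the older compilers such as the 7.1.1b Feature Supplement
cc; per the thread the flag can equally well be put into the CC
environment variable instead of CFLAGS):

    CFLAGS=-Xb
    export CFLAGS
    ./configure

With the cc shipped in OpenUNIX 8.0.0 (UnixWare 7.1.2) and later, the flag
should not be needed at all.
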
Can anyone provide a short and accurate\ndescription of when to do this?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 07 Nov 2002 16:34:12 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL supported platform report and a " }, { "msg_contents": "For compilers earlier than the one released with OpenUNIX 8.0.0(UnixWare \n7.1.2), Including\nthe 7.1.1b Feature Supplement, you may need to specify -Xb in CFLAGS or the \nCC environment\nvariable.\n\n\n\n--On Thursday, November 07, 2002 16:34:12 -0500 Tom Lane \n<[email protected]> wrote:\n\n> Larry Rosenman <[email protected]> writes:\n>> It DOES compile out of the box now on 8.0.0(7.1.2) and 7.1.3.\n>> Apparently a compiler\n>> fix between the 7.1.1b FS and 7.1.2.\n>\n> Well, this is what the REMARKS column is for in the supported-platform\n> list. Seems we need a comment like \"for older compiler versions, you\n> may need to add -Xb to CFLAGS\". Can anyone provide a short and accurate\n> description of when to do this?\n>\n> \t\t\tregards, tom lane\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n", "msg_date": "Thu, 07 Nov 2002 15:41:53 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL supported platform report and a " }, { "msg_contents": "\nWe do point to the FAQ_SCO file for specifics. Would you send a diff\nfor that file?\n\n---------------------------------------------------------------------------\n\nLarry Rosenman wrote:\n> For compilers earlier than the one released with OpenUNIX 8.0.0(UnixWare \n> 7.1.2), Including\n> the 7.1.1b Feature Supplement, you may need to specify -Xb in CFLAGS or the \n> CC environment\n> variable.\n> \n> \n> \n> --On Thursday, November 07, 2002 16:34:12 -0500 Tom Lane \n> <[email protected]> wrote:\n> \n> > Larry Rosenman <[email protected]> writes:\n> >> It DOES compile out of the box now on 8.0.0(7.1.2) and 7.1.3.\n> >> Apparently a compiler\n> >> fix between the 7.1.1b FS and 7.1.2.\n> >\n> > Well, this is what the REMARKS column is for in the supported-platform\n> > list. Seems we need a comment like \"for older compiler versions, you\n> > may need to add -Xb to CFLAGS\". Can anyone provide a short and accurate\n> > description of when to do this?\n> >\n> > \t\t\tregards, tom lane\n> \n> \n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 E-Mail: [email protected]\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 7 Nov 2002 17:08:04 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL supported platform report and a" }, { "msg_contents": "With or withou Billie's update(s)?\n\n(I haven't looked at them)....\n\nLER\n\n\n--On Thursday, November 07, 2002 17:08:04 -0500 Bruce Momjian \n<[email protected]> wrote:\n\n>\n> We do point to the FAQ_SCO file for specifics. Would you send a diff\n> for that file?\n>\n> -------------------------------------------------------------------------\n> --\n>\n> Larry Rosenman wrote:\n>> For compilers earlier than the one released with OpenUNIX 8.0.0(UnixWare\n>> 7.1.2), Including\n>> the 7.1.1b Feature Supplement, you may need to specify -Xb in CFLAGS or\n>> the CC environment\n>> variable.\n>>\n>>\n>>\n>> --On Thursday, November 07, 2002 16:34:12 -0500 Tom Lane\n>> <[email protected]> wrote:\n>>\n>> > Larry Rosenman <[email protected]> writes:\n>> >> It DOES compile out of the box now on 8.0.0(7.1.2) and 7.1.3.\n>> >> Apparently a compiler\n>> >> fix between the 7.1.1b FS and 7.1.2.\n>> >\n>> > Well, this is what the REMARKS column is for in the supported-platform\n>> > list. Seems we need a comment like \"for older compiler versions, you\n>> > may need to add -Xb to CFLAGS\". Can anyone provide a short and\n>> > accurate description of when to do this?\n>> >\n>> > \t\t\tregards, tom lane\n>>\n>>\n>> --\n>> Larry Rosenman http://www.lerctr.org/~ler\n>> Phone: +1 972-414-9812 E-Mail: [email protected]\n>> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n>>\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 3: if posting/reading through Usenet, please send an appropriate\n>> subscribe-nomail command to [email protected] so that your\n>> message can get through to the mailing list cleanly\n>>\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania\n> 19073\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n", "msg_date": "Thu, 07 Nov 2002 16:10:44 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL supported platform report and a" }, { "msg_contents": "\nHis updates deal only with the LDLIBRARY path issue, which I think we\nare keeping for 7.4:\n\n*** ./doc/FAQ_SCO.orig Wed Nov 6 21:35:46 2002\n--- ./doc/FAQ_SCO Wed Nov 6 21:40:44 2002\n***************\n*** 71,76 ****\n--- 71,79 ----\n \n configure --with-libs=/usr/local/lib --with-includes=/usr/local/include\n \n+ You will also need to set LD_LIBRARY_PATH to '/usr/local/lib' (or add it to\n+ LD_LIBRARY_PATH, if LD_LIBRARY_PATH already exists) before running config-\n+ ure or the test for readline will fail.\n \n\n---------------------------------------------------------------------------\n\nLarry Rosenman wrote:\n> With or withou Billie's update(s)?\n> \n> (I haven't looked at them)....\n> \n> LER\n> \n> \n> --On Thursday, November 07, 2002 17:08:04 -0500 Bruce Momjian \n> <[email protected]> wrote:\n> \n> >\n> > We do point to the FAQ_SCO file for specifics. 
Would you send a diff\n> > for that file?\n> >\n> > -------------------------------------------------------------------------\n> > --\n> >\n> > Larry Rosenman wrote:\n> >> For compilers earlier than the one released with OpenUNIX 8.0.0(UnixWare\n> >> 7.1.2), Including\n> >> the 7.1.1b Feature Supplement, you may need to specify -Xb in CFLAGS or\n> >> the CC environment\n> >> variable.\n> >>\n> >>\n> >>\n> >> --On Thursday, November 07, 2002 16:34:12 -0500 Tom Lane\n> >> <[email protected]> wrote:\n> >>\n> >> > Larry Rosenman <[email protected]> writes:\n> >> >> It DOES compile out of the box now on 8.0.0(7.1.2) and 7.1.3.\n> >> >> Apparently a compiler\n> >> >> fix between the 7.1.1b FS and 7.1.2.\n> >> >\n> >> > Well, this is what the REMARKS column is for in the supported-platform\n> >> > list. Seems we need a comment like \"for older compiler versions, you\n> >> > may need to add -Xb to CFLAGS\". Can anyone provide a short and\n> >> > accurate description of when to do this?\n> >> >\n> >> > \t\t\tregards, tom lane\n> >>\n> >>\n> >> --\n> >> Larry Rosenman http://www.lerctr.org/~ler\n> >> Phone: +1 972-414-9812 E-Mail: [email protected]\n> >> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> >>\n> >>\n> >> ---------------------------(end of broadcast)---------------------------\n> >> TIP 3: if posting/reading through Usenet, please send an appropriate\n> >> subscribe-nomail command to [email protected] so that your\n> >> message can get through to the mailing list cleanly\n> >>\n> >\n> > --\n> > Bruce Momjian | http://candle.pha.pa.us\n> > [email protected] | (610) 359-1001\n> > + If your life is a hard drive, | 13 Roberts Road\n> > + Christ can be your backup. | Newtown Square, Pennsylvania\n> > 19073\n> \n> \n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 E-Mail: [email protected]\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
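
Taken together with the configure invocation already given in FAQ_SCO, the
addition above boils down to something like this (a sketch; it assumes
readline was installed under /usr/local and that LD_LIBRARY_PATH was not
already set, otherwise /usr/local/lib should be appended to the existing
value instead):

    LD_LIBRARY_PATH=/usr/local/lib
    export LD_LIBRARY_PATH
    ./configure --with-libs=/usr/local/lib --with-includes=/usr/local/include

Without the library path, configure's test for readline fails even though
the headers and libraries are present.
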
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 7 Nov 2002 17:44:37 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL supported platform report and a" }, { "msg_contents": "OK, I'll try and do up a diff to B5's tonite (probably late, my Daughter's \nelementary school honors choir has a performance tonite).\n\nLER\n\n\n--On Thursday, November 07, 2002 17:44:37 -0500 Bruce Momjian \n<[email protected]> wrote:\n\n>\n> His updates deal only with the LDLIBRARY path issue, which I think we\n> are keeping for 7.4:\n>\n> *** ./doc/FAQ_SCO.orig Wed Nov 6 21:35:46 2002\n> --- ./doc/FAQ_SCO Wed Nov 6 21:40:44 2002\n> ***************\n> *** 71,76 ****\n> --- 71,79 ----\n>\n> configure --with-libs=/usr/local/lib --with-includes=/usr/local/include\n>\n> + You will also need to set LD_LIBRARY_PATH to '/usr/local/lib' (or add\n> it to + LD_LIBRARY_PATH, if LD_LIBRARY_PATH already exists) before\n> running config- + ure or the test for readline will fail.\n>\n>\n> -------------------------------------------------------------------------\n> --\n>\n> Larry Rosenman wrote:\n>> With or withou Billie's update(s)?\n>>\n>> (I haven't looked at them)....\n>>\n>> LER\n>>\n>>\n>> --On Thursday, November 07, 2002 17:08:04 -0500 Bruce Momjian\n>> <[email protected]> wrote:\n>>\n>> >\n>> > We do point to the FAQ_SCO file for specifics. Would you send a diff\n>> > for that file?\n>> >\n>> > ----------------------------------------------------------------------\n>> > --- --\n>> >\n>> > Larry Rosenman wrote:\n>> >> For compilers earlier than the one released with OpenUNIX\n>> >> 8.0.0(UnixWare 7.1.2), Including\n>> >> the 7.1.1b Feature Supplement, you may need to specify -Xb in CFLAGS\n>> >> or the CC environment\n>> >> variable.\n>> >>\n>> >>\n>> >>\n>> >> --On Thursday, November 07, 2002 16:34:12 -0500 Tom Lane\n>> >> <[email protected]> wrote:\n>> >>\n>> >> > Larry Rosenman <[email protected]> writes:\n>> >> >> It DOES compile out of the box now on 8.0.0(7.1.2) and 7.1.3.\n>> >> >> Apparently a compiler\n>> >> >> fix between the 7.1.1b FS and 7.1.2.\n>> >> >\n>> >> > Well, this is what the REMARKS column is for in the\n>> >> > supported-platform list. Seems we need a comment like \"for older\n>> >> > compiler versions, you may need to add -Xb to CFLAGS\". Can anyone\n>> >> > provide a short and accurate description of when to do this?\n>> >> >\n>> >> > \t\t\tregards, tom lane\n>> >>\n>> >>\n>> >> --\n>> >> Larry Rosenman http://www.lerctr.org/~ler\n>> >> Phone: +1 972-414-9812 E-Mail: [email protected]\n>> >> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n>> >>\n>> >>\n>> >> ---------------------------(end of\n>> >> broadcast)--------------------------- TIP 3: if posting/reading\n>> >> through Usenet, please send an appropriate subscribe-nomail command\n>> >> to [email protected] so that your message can get through to\n>> >> the mailing list cleanly\n>> >>\n>> >\n>> > --\n>> > Bruce Momjian | http://candle.pha.pa.us\n>> > [email protected] | (610) 359-1001\n>> > + If your life is a hard drive, | 13 Roberts Road\n>> > + Christ can be your backup. 
| Newtown Square, Pennsylvania\n>> > 19073\n>>\n>>\n>> --\n>> Larry Rosenman http://www.lerctr.org/~ler\n>> Phone: +1 972-414-9812 E-Mail: [email protected]\n>> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n>>\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 4: Don't 'kill -9' the postmaster\n>>\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania\n> 19073\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n", "msg_date": "Thu, 07 Nov 2002 16:50:26 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL supported platform report and a" }, { "msg_contents": "\nI am confused about this patch. I don't see extra_float_digits defined\nanywhere. Am I missing a patch?\n\n---------------------------------------------------------------------------\n\nPedro M. Ferreira wrote:\n> Pedro M. Ferreira wrote:\n> > Tom Lane wrote:\n> >> Perhaps P_MAXLEN now needs to be (2*(DBL_DIG+2+7)+1), considering\n> >> that we'll allow extra_float_digits to be up to 2. What's it used for?\n> > \n> > Yes. I guess so, because it is used in what I think is a memory \n> > allocation function. P_MAXLEN is only used twice:\n> <...>\n> > \n> > I will do the changes tomorrow and send in the appropriate diff's.\n> \n> Ok. Its done now.\n> Only one file changed: src/backend/utils/adt/geo_ops.c\n> \n> All the geometric types should now account for float_extra_digits on output.\n> \n> A diff -u is attached.\n> \n> Best reagards,\n> Pedro\n> \n> > Regards,\n> > Pedro\n> > \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to [email protected])\n> > \n> > \n> \n> \n> -- \n> ----------------------------------------------------------------------\n> Pedro Miguel Frazao Fernandes Ferreira\n> Universidade do Algarve\n> Faculdade de Ciencias e Tecnologia\n> Campus de Gambelas\n> 8000-117 Faro\n> Portugal\n> Tel./Fax: (+351) 289 800950 / 289 819403\n> http://w3.ualg.pt/~pfrazao\n\n> --- old/postgresql-7.2.1/src/backend/utils/adt/geo_ops.c\tMon Nov 4 12:01:39 2002\n> +++ postgresql-7.2.1/src/backend/utils/adt/geo_ops.c\tTue Nov 5 10:47:56 2002\n> @@ -80,11 +80,11 @@\n> #define RDELIM_C\t\t'>'\n> \n> /* Maximum number of output digits printed */\n> -#define P_MAXDIG DBL_DIG\n> -#define P_MAXLEN (2*(P_MAXDIG+7)+1)\n> -\n> -static int\tdigits8 = P_MAXDIG;\n> +/* ...+2+7 : 2 accounts for extra_float_digits max value */\n> +#define P_MAXLEN (2*(DBL_DIG+2+7)+1)\n> \n> +/* Extra digits in float output formatting (in float.c) */\n> +extern int extra_float_digits;\n> \n> /*\n> * Geometric data types are composed of points.\n> @@ -139,7 +139,12 @@\n> static int\n> single_encode(float8 x, char *str)\n> {\n> -\tsprintf(str, \"%.*g\", digits8, x);\n> +\tint\tndig = DBL_DIG + extra_float_digits;\n> +\n> +\tif (ndig < 1)\n> +\t\tndig=1;\n> +\n> +\tsprintf(str, \"%.*g\", ndig, x);\n> \treturn TRUE;\n> }\t/* single_encode() */\n> \n> @@ -190,7 +195,12 @@\n> static int\n> pair_encode(float8 x, float8 y, char *str)\n> {\n> -\tsprintf(str, \"%.*g,%.*g\", digits8, x, digits8, y);\n> +\tint\tndig = DBL_DIG + 
extra_float_digits;\n> +\n> +\tif (ndig < 1)\n> +\t\tndig=1;\n> +\n> +\tsprintf(str, \"%.*g,%.*g\", ndig, x, ndig, y);\n> \treturn TRUE;\n> }\n> \n> @@ -974,7 +984,7 @@\n> #endif\n> #ifdef GEODEBUG\n> \t\tprintf(\"line_construct_pts- line is neither vertical nor horizontal (diffs x=%.*g, y=%.*g\\n\",\n> -\t\t\t digits8, (pt2->x - pt1->x), digits8, (pt2->y - pt1->y));\n> +\t\t\t DBL_DIG, (pt2->x - pt1->x), DBL_DIG, (pt2->y - pt1->y));\n> #endif\n> \t}\n> }\n> @@ -1180,8 +1190,8 @@\n> \n> #ifdef GEODEBUG\n> \tprintf(\"line_interpt- lines are A=%.*g, B=%.*g, C=%.*g, A=%.*g, B=%.*g, C=%.*g\\n\",\n> -\t\t digits8, l1->A, digits8, l1->B, digits8, l1->C, digits8, l2->A, digits8, l2->B, digits8, l2->C);\n> -\tprintf(\"line_interpt- lines intersect at (%.*g,%.*g)\\n\", digits8, x, digits8, y);\n> +\t\t DBL_DIG, l1->A, DBL_DIG, l1->B, DBL_DIG, l1->C, DBL_DIG, l2->A, DBL_DIG, l2->B, DBL_DIG, l2->C);\n> +\tprintf(\"line_interpt- lines intersect at (%.*g,%.*g)\\n\", DBL_DIG, x, DBL_DIG, y);\n> #endif\n> \n> \treturn result;\n> @@ -2390,14 +2400,14 @@\n> \tp = line_interpt_internal(&tmp, line);\n> #ifdef GEODEBUG\n> \tprintf(\"interpt_sl- segment is (%.*g %.*g) (%.*g %.*g)\\n\",\n> -\t\t digits8, lseg->p[0].x, digits8, lseg->p[0].y, digits8, lseg->p[1].x, digits8, lseg->p[1].y);\n> +\t\t DBL_DIG, lseg->p[0].x, DBL_DIG, lseg->p[0].y, DBL_DIG, lseg->p[1].x, DBL_DIG, lseg->p[1].y);\n> \tprintf(\"interpt_sl- segment becomes line A=%.*g B=%.*g C=%.*g\\n\",\n> -\t\t digits8, tmp.A, digits8, tmp.B, digits8, tmp.C);\n> +\t\t DBL_DIG, tmp.A, DBL_DIG, tmp.B, DBL_DIG, tmp.C);\n> #endif\n> \tif (PointerIsValid(p))\n> \t{\n> #ifdef GEODEBUG\n> -\t\tprintf(\"interpt_sl- intersection point is (%.*g %.*g)\\n\", digits8, p->x, digits8, p->y);\n> +\t\tprintf(\"interpt_sl- intersection point is (%.*g %.*g)\\n\", DBL_DIG, p->x, DBL_DIG, p->y);\n> #endif\n> \t\tif (on_ps_internal(p, lseg))\n> \t\t{\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 7 Nov 2002 23:50:44 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Float output formatting options" }, { "msg_contents": "\nThis has been saved for the 7.4 release:\n\n\thttp:/momjian.postgresql.org/cgi-bin/pgpatches2\n\nI will not apply the _inline_ part.\n\n---------------------------------------------------------------------------\n\nBilly G. Allie wrote:\n-- Start of PGP signed section.\n> I am including a set of 4 small patches that enable PostgreSQL 7.3b3 to build\n> successfully on OpenUnix 8.0. These same patches should also work for UnixWare\n> 7.x. I will confirm that tomorrow (Nov 7, 2002).\n> \n> Here is an explanation of the patches:\n> \n> 1. An update of the FAQ_SCO file.\n> \n> 2. This patch removes a static declaration of a in-line function in\n> src/backend/utils/sort/tuplesort.c \n> \n> 3. This patch to src/makefiles/Makefile.unixware, together with the patch to\n> src/Makefile.global.in allows any addition library search directories (added\n> with the configure --with-libraries option) to be added to the rpath option \n> sent to the linker. 
The use of a different variable to pass the addition \n> search paths was necessary to avoid a circular reference to LDFLAGS.\n> \n> 4. This patch creates the variable (trpath) used by the patch to \n> Makefile.unixware. This patch would also be for other platforms that would \n> have to add the additional library search paths to the rpath linker option.\n> See Makefile.unixware for an example of how to do this.\n> \n> After applying these patches, PostgreSQL successfully compiled on OpenUnix 8 \n> and it passed all the regression tests.\n> \n\nContent-Description: ou8.patch.20021106\n\n[ Attachment, skipping... ]\n\n> ____ | Billy G. Allie | Domain....: [email protected]\n> | /| | 7436 Hartwell | MSN.......: [email protected]\n> |-/-|----- | Dearborn, MI 48126|\n> |/ |LLIE | (313) 582-1540 |\n-- End of PGP section, PGP failed!\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 7 Nov 2002 23:57:31 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL supported platform report and a patch." }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I am confused about this patch. I don't see extra_float_digits defined\n> anywhere. Am I missing a patch?\n\nEvidently. I have the patch and was planning to apply it myself as soon\nas Pedro does something with the geometry types...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 08 Nov 2002 00:20:51 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Float output formatting options " }, { "msg_contents": "HI All,\n\nTom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> \n>>I am confused about this patch. I don't see extra_float_digits defined\n>>anywhere. Am I missing a patch ?\n\nProbably! :)\n\nThe definition of extra_float_digits was made in float.c and the diff was sent \nto the list and Tom before. I think its the one Tom refers below.\nThere were diff's to four files:\n\nsrc/backend/utils/adt/float.c\nsrc/backend/utils/misc/guc.c\nsrc/bin/psql/tab-complete.c\nsrc/backend/utils/misc/postgresql.conf.sample\n\n> \n> \n> Evidently. I have the patch and was planning to apply it myself as soon\n> as Pedro does something with the geometry types...\n\nLots of mail in the list! ;)\nIts done. The diff file to src/backend/utils/adt/geo_ops.c (to handle the \ngemetry types) was sent some days ago. Its the one in the message where Bruce \nsays that extra_float_digits is not defined anywhere.\n\nIf you want I can send in the diff's again.\n\nBest regards,\nPedro\n> \n> \t\t\tregards, tom lane\n\n\n-- \n----------------------------------------------------------------------\nPedro Miguel Frazao Fernandes Ferreira\nUniversidade do Algarve\nFaculdade de Ciencias e Tecnologia\nCampus de Gambelas\n8000-117 Faro\nPortugal\nTel./Fax: (+351) 289 800950 / 289 819403\nhttp://w3.ualg.pt/~pfrazao\n\n", "msg_date": "Fri, 08 Nov 2002 10:20:39 +0000", "msg_from": "\"Pedro M. Frazao F. 
Ferreira\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Float output formatting options" }, { "msg_contents": "Many thanks Larry,\n\nSorry I had omited the -Xb (no secret dance) in my report...\n\nOn Thu, 7 Nov 2002, Larry Rosenman wrote:\n\n> Date: Thu, 07 Nov 2002 15:17:28 -0600\n> From: Larry Rosenman <[email protected]>\n> To: Peter Eisentraut <[email protected]>, Olivier PRENANT <[email protected]>\n> Cc: Tom Lane <[email protected]>, Billy G. Allie <[email protected]>,\n> [email protected], [email protected]\n> Subject: Re: [PORTS] [HACKERS] PostgreSQL supported platform report and a \n> \n> \n> \n> --On Thursday, November 07, 2002 22:21:56 +0100 Peter Eisentraut \n> <[email protected]> wrote:\n> \n> > Olivier PRENANT writes:\n> >\n> >> I don't mind having CFLAGS=-Xb though, done it for php already...\n> >> By everyone says this should go off so..\n> >\n> > The idea of the platform testing is not to determine whether you can\n> > compile PostgreSQL after performing a secret dance. If it doesn't compile\n> > with the default options, please don't report it as supported.\n> It DOES compile out of the box now on 8.0.0(7.1.2) and 7.1.3. Apparently a \n> compiler\n> fix between the 7.1.1b FS and 7.1.2.\n> \n> The -Xb switch is NOT a secret dance. It's needed for LOTS of open source \n> stuff.\n> \n> See the discussion from the Caldera folks last week.\n> \n> Tom's fix fixed the defaults for 7.1.2+\n> \n> \n> >\n> > --\n> > Peter Eisentraut [email protected]\n> \n> \n> \n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: [email protected]\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n", "msg_date": "Fri, 8 Nov 2002 12:57:28 +0100 (MET)", "msg_from": "Olivier PRENANT <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL supported platform report and a " }, { "msg_contents": "Here is diff. Please let me know if you need better wording.\n\n\n\n--On Thursday, November 07, 2002 16:50:26 -0600 Larry Rosenman \n<[email protected]> wrote:\n\n> OK, I'll try and do up a diff to B5's tonite (probably late, my\n> Daughter's elementary school honors choir has a performance tonite).\n>\n> LER\n>\n>\n> --On Thursday, November 07, 2002 17:44:37 -0500 Bruce Momjian\n> <[email protected]> wrote:\n>\n>>\n>> His updates deal only with the LDLIBRARY path issue, which I think we\n>> are keeping for 7.4:\n>>\n>> *** ./doc/FAQ_SCO.orig Wed Nov 6 21:35:46 2002\n>> --- ./doc/FAQ_SCO Wed Nov 6 21:40:44 2002\n>> ***************\n>> *** 71,76 ****\n>> --- 71,79 ----\n>>\n>> configure --with-libs=/usr/local/lib --with-includes=/usr/local/include\n>>\n>> + You will also need to set LD_LIBRARY_PATH to '/usr/local/lib' (or add\n>> it to + LD_LIBRARY_PATH, if LD_LIBRARY_PATH already exists) before\n>> running config- + ure or the test for readline will fail.\n>>\n>>\n>> -------------------------------------------------------------------------\n>> --\n>>\n>> Larry Rosenman wrote:\n>>> With or withou Billie's update(s)?\n>>>\n>>> (I haven't looked at them)....\n>>>\n>>> LER\n>>>\n>>>\n>>> --On Thursday, November 07, 2002 17:08:04 -0500 Bruce Momjian\n>>> <[email protected]> wrote:\n>>>\n>>> >\n>>> > We do point to the FAQ_SCO file for specifics. 
Would you send a diff\n>>> > for that file?\n>>> >\n>>> > ----------------------------------------------------------------------\n>>> > --- --\n>>> >\n>>> > Larry Rosenman wrote:\n>>> >> For compilers earlier than the one released with OpenUNIX\n>>> >> 8.0.0(UnixWare 7.1.2), Including\n>>> >> the 7.1.1b Feature Supplement, you may need to specify -Xb in CFLAGS\n>>> >> or the CC environment\n>>> >> variable.\n>>> >>\n>>> >>\n>>> >>\n>>> >> --On Thursday, November 07, 2002 16:34:12 -0500 Tom Lane\n>>> >> <[email protected]> wrote:\n>>> >>\n>>> >> > Larry Rosenman <[email protected]> writes:\n>>> >> >> It DOES compile out of the box now on 8.0.0(7.1.2) and 7.1.3.\n>>> >> >> Apparently a compiler\n>>> >> >> fix between the 7.1.1b FS and 7.1.2.\n>>> >> >\n>>> >> > Well, this is what the REMARKS column is for in the\n>>> >> > supported-platform list. Seems we need a comment like \"for older\n>>> >> > compiler versions, you may need to add -Xb to CFLAGS\". Can anyone\n>>> >> > provide a short and accurate description of when to do this?\n>>> >> >\n>>> >> > \t\t\tregards, tom lane\n>>> >>\n>>> >>\n>>> >> --\n>>> >> Larry Rosenman http://www.lerctr.org/~ler\n>>> >> Phone: +1 972-414-9812 E-Mail: [email protected]\n>>> >> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n>>> >>\n>>> >>\n>>> >> ---------------------------(end of\n>>> >> broadcast)--------------------------- TIP 3: if posting/reading\n>>> >> through Usenet, please send an appropriate subscribe-nomail command\n>>> >> to [email protected] so that your message can get through to\n>>> >> the mailing list cleanly\n>>> >>\n>>> >\n>>> > --\n>>> > Bruce Momjian | http://candle.pha.pa.us\n>>> > [email protected] | (610) 359-1001\n>>> > + If your life is a hard drive, | 13 Roberts Road\n>>> > + Christ can be your backup. | Newtown Square, Pennsylvania\n>>> > 19073\n>>>\n>>>\n>>> --\n>>> Larry Rosenman http://www.lerctr.org/~ler\n>>> Phone: +1 972-414-9812 E-Mail: [email protected]\n>>> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n>>>\n>>>\n>>> ---------------------------(end of broadcast)---------------------------\n>>> TIP 4: Don't 'kill -9' the postmaster\n>>>\n>>\n>> --\n>> Bruce Momjian | http://candle.pha.pa.us\n>> [email protected] | (610) 359-1001\n>> + If your life is a hard drive, | 13 Roberts Road\n>> + Christ can be your backup. | Newtown Square, Pennsylvania\n>> 19073\n>\n>\n> --\n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 E-Mail: [email protected]\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749", "msg_date": "Fri, 08 Nov 2002 06:43:15 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL supported platform report and a" }, { "msg_contents": "\"Pedro M. Frazao F. Ferreira\" <[email protected]> writes:\n> Tom Lane wrote:\n>> Evidently. I have the patch and was planning to apply it myself as soon\n>> as Pedro does something with the geometry types...\n\n> Its done.\n\nSo it is, dunno what I was thinking. 
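
For anyone who wants to poke at it by hand once the patch is in, the new
setting can be exercised straight from psql, along these lines (just a
sketch; the database name is arbitrary and the exact digits printed will
vary by platform):

    psql -d test -c "SET extra_float_digits = 2; SELECT point '(0.1, 0.2)'"
    psql -d test -c "SET extra_float_digits = -3; SELECT point '(0.1, 0.2)'"

The first widens float output past DBL_DIG (2 is the most the geometry
output code allows for); a negative value clips digits off, which is what
makes the output platform-independent.
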
Will work on getting it applied.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 08 Nov 2002 09:43:30 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Float output formatting options " }, { "msg_contents": "Larry Rosenman <[email protected]> writes:\n> +For compilers earlier than the one released with OpenUNIX 8.0.0(UnixWare\n> +7.1.2), Including the 7.1.1b Feature Supplement, you may need to specify -Xb\n> + in CFLAGS or the CC environment variable. The indication of this is an\n> +error in compiling tuplesort.c referencing inline parameters. \n\ns/inline parameters/inline functions/. Otherwise looks good.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 08 Nov 2002 09:45:44 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL supported platform report and a " }, { "msg_contents": "Do y'all want a new diff, or can you deal with it when you do the patch?\n\nLER\n\n\n--On Friday, November 08, 2002 09:45:44 -0500 Tom Lane <[email protected]> \nwrote:\n\n> Larry Rosenman <[email protected]> writes:\n>> +For compilers earlier than the one released with OpenUNIX 8.0.0(UnixWare\n>> +7.1.2), Including the 7.1.1b Feature Supplement, you may need to\n>> specify -Xb + in CFLAGS or the CC environment variable. The indication\n>> of this is an +error in compiling tuplesort.c referencing inline\n>> parameters.\n>\n> s/inline parameters/inline functions/. Otherwise looks good.\n>\n> \t\t\tregards, tom lane\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n", "msg_date": "Fri, 08 Nov 2002 09:07:51 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL supported platform report and a " }, { "msg_contents": "\nI will deal with it when I apply.\n\n---------------------------------------------------------------------------\n\nLarry Rosenman wrote:\n> Do y'all want a new diff, or can you deal with it when you do the patch?\n> \n> LER\n> \n> \n> --On Friday, November 08, 2002 09:45:44 -0500 Tom Lane <[email protected]> \n> wrote:\n> \n> > Larry Rosenman <[email protected]> writes:\n> >> +For compilers earlier than the one released with OpenUNIX 8.0.0(UnixWare\n> >> +7.1.2), Including the 7.1.1b Feature Supplement, you may need to\n> >> specify -Xb + in CFLAGS or the CC environment variable. The indication\n> >> of this is an +error in compiling tuplesort.c referencing inline\n> >> parameters.\n> >\n> > s/inline parameters/inline functions/. Otherwise looks good.\n> >\n> > \t\t\tregards, tom lane\n> \n> \n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 E-Mail: [email protected]\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 8 Nov 2002 11:40:58 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL supported platform report and a" }, { "msg_contents": "\nPatch applied to 7.3 and CVS, with Tom's correction.\n\n---------------------------------------------------------------------------\n\nLarry Rosenman wrote:\n> Here is diff. 
Please let me know if you need better wording.\n> \n> \n> \n> --On Thursday, November 07, 2002 16:50:26 -0600 Larry Rosenman \n> <[email protected]> wrote:\n> \n> > OK, I'll try and do up a diff to B5's tonite (probably late, my\n> > Daughter's elementary school honors choir has a performance tonite).\n> >\n> > LER\n> >\n> >\n> > --On Thursday, November 07, 2002 17:44:37 -0500 Bruce Momjian\n> > <[email protected]> wrote:\n> >\n> >>\n> >> His updates deal only with the LDLIBRARY path issue, which I think we\n> >> are keeping for 7.4:\n> >>\n> >> *** ./doc/FAQ_SCO.orig Wed Nov 6 21:35:46 2002\n> >> --- ./doc/FAQ_SCO Wed Nov 6 21:40:44 2002\n> >> ***************\n> >> *** 71,76 ****\n> >> --- 71,79 ----\n> >>\n> >> configure --with-libs=/usr/local/lib --with-includes=/usr/local/include\n> >>\n> >> + You will also need to set LD_LIBRARY_PATH to '/usr/local/lib' (or add\n> >> it to + LD_LIBRARY_PATH, if LD_LIBRARY_PATH already exists) before\n> >> running config- + ure or the test for readline will fail.\n> >>\n> >>\n> >> -------------------------------------------------------------------------\n> >> --\n> >>\n> >> Larry Rosenman wrote:\n> >>> With or withou Billie's update(s)?\n> >>>\n> >>> (I haven't looked at them)....\n> >>>\n> >>> LER\n> >>>\n> >>>\n> >>> --On Thursday, November 07, 2002 17:08:04 -0500 Bruce Momjian\n> >>> <[email protected]> wrote:\n> >>>\n> >>> >\n> >>> > We do point to the FAQ_SCO file for specifics. Would you send a diff\n> >>> > for that file?\n> >>> >\n> >>> > ----------------------------------------------------------------------\n> >>> > --- --\n> >>> >\n> >>> > Larry Rosenman wrote:\n> >>> >> For compilers earlier than the one released with OpenUNIX\n> >>> >> 8.0.0(UnixWare 7.1.2), Including\n> >>> >> the 7.1.1b Feature Supplement, you may need to specify -Xb in CFLAGS\n> >>> >> or the CC environment\n> >>> >> variable.\n> >>> >>\n> >>> >>\n> >>> >>\n> >>> >> --On Thursday, November 07, 2002 16:34:12 -0500 Tom Lane\n> >>> >> <[email protected]> wrote:\n> >>> >>\n> >>> >> > Larry Rosenman <[email protected]> writes:\n> >>> >> >> It DOES compile out of the box now on 8.0.0(7.1.2) and 7.1.3.\n> >>> >> >> Apparently a compiler\n> >>> >> >> fix between the 7.1.1b FS and 7.1.2.\n> >>> >> >\n> >>> >> > Well, this is what the REMARKS column is for in the\n> >>> >> > supported-platform list. Seems we need a comment like \"for older\n> >>> >> > compiler versions, you may need to add -Xb to CFLAGS\". Can anyone\n> >>> >> > provide a short and accurate description of when to do this?\n> >>> >> >\n> >>> >> > \t\t\tregards, tom lane\n> >>> >>\n> >>> >>\n> >>> >> --\n> >>> >> Larry Rosenman http://www.lerctr.org/~ler\n> >>> >> Phone: +1 972-414-9812 E-Mail: [email protected]\n> >>> >> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> >>> >>\n> >>> >>\n> >>> >> ---------------------------(end of\n> >>> >> broadcast)--------------------------- TIP 3: if posting/reading\n> >>> >> through Usenet, please send an appropriate subscribe-nomail command\n> >>> >> to [email protected] so that your message can get through to\n> >>> >> the mailing list cleanly\n> >>> >>\n> >>> >\n> >>> > --\n> >>> > Bruce Momjian | http://candle.pha.pa.us\n> >>> > [email protected] | (610) 359-1001\n> >>> > + If your life is a hard drive, | 13 Roberts Road\n> >>> > + Christ can be your backup. 
| Newtown Square, Pennsylvania\n> >>> > 19073\n> >>>\n> >>>\n> >>> --\n> >>> Larry Rosenman http://www.lerctr.org/~ler\n> >>> Phone: +1 972-414-9812 E-Mail: [email protected]\n> >>> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> >>>\n> >>>\n> >>> ---------------------------(end of broadcast)---------------------------\n> >>> TIP 4: Don't 'kill -9' the postmaster\n> >>>\n> >>\n> >> --\n> >> Bruce Momjian | http://candle.pha.pa.us\n> >> [email protected] | (610) 359-1001\n> >> + If your life is a hard drive, | 13 Roberts Road\n> >> + Christ can be your backup. | Newtown Square, Pennsylvania\n> >> 19073\n> >\n> >\n> > --\n> > Larry Rosenman http://www.lerctr.org/~ler\n> > Phone: +1 972-414-9812 E-Mail: [email protected]\n> > US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to [email protected] so that your\n> > message can get through to the mailing list cleanly\n> >\n> \n> \n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 E-Mail: [email protected]\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 8 Nov 2002 11:49:13 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL supported platform report and a" }, { "msg_contents": "\"Pedro M. Ferreira\" <[email protected]> writes:\n> [ patch for extra_float_digits ]\n\nI've applied this patch along with followup changes to pg_dump (it sets\nextra_float_digits to 2 to allow accurate dump/reload) and the geometry\nregression test (it sets extra_float_digits to -3).\n\nI find that two geometry 'expected' files are now sufficient to cover\nall the platforms I have available to test. (We'd only need one, if\neveryone displayed minus zero as '-0', but some platforms print '0'.)\nI tested on HPUX 10.20 (HPPA), Red Hat Linux 8.0 (Intel), Mac OS X 10.2.1\nand LinuxPPC (PPC).\n\nI'd be interested to hear results of testing CVS tip (now 7.4devel)\non other platforms. Does geometry pass cleanly for you?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 08 Nov 2002 15:33:30 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Geometry regression tests (was Re: Float output formatting options)" }, { "msg_contents": "\n> I'd be interested to hear results of testing CVS tip (now 7.4devel)\n> on other platforms. Does geometry pass cleanly for you?\n\nYes for NetBSD-1.5.1/i386, where it previously didn't due to processor\nspecific math libraries on this platform.\n\nGiles\n\n\n", "msg_date": "Sat, 09 Nov 2002 13:10:45 +1100", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Geometry regression tests (was Re: Float output formatting\n\toptions)" }, { "msg_contents": "Tom Lane wrote:\n> \"Pedro M. 
Ferreira\" <[email protected]> writes:\n> \n>>[ patch for extra_float_digits ]\n> \n> \n> I've applied this patch along with followup changes to pg_dump (it sets\n> extra_float_digits to 2 to allow accurate dump/reload) and the geometry\n> regression test (it sets extra_float_digits to -3).\n> \n> I find that two geometry 'expected' files are now sufficient to cover\n> all the platforms I have available to test. (We'd only need one, if\n> everyone displayed minus zero as '-0', but some platforms print '0'.)\n> I tested on HPUX 10.20 (HPPA), Red Hat Linux 8.0 (Intel), Mac OS X 10.2.1\n> and LinuxPPC (PPC).\n> \n> I'd be interested to hear results of testing CVS tip (now 7.4devel)\n> on other platforms. Does geometry pass cleanly for you?\n\nYes! :)\nAll tests passed on a dual AMD Athlon MP with Debian GNU/Linux 3.0 (Woody), \nkernel 2.4.18-5.\n\nTested with a snapshot downloaded yesterday.\n\nBest regards,\nPedro M. Ferreira\n\n\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n", "msg_date": "Tue, 12 Nov 2002 10:35:03 +0000", "msg_from": "\"Pedro M. Frazao F. Ferreira\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Geometry regression tests (was Re: Float output formatting" }, { "msg_contents": "Tom Lane writes:\n\n> I find that two geometry 'expected' files are now sufficient to cover\n> all the platforms I have available to test. (We'd only need one, if\n> everyone displayed minus zero as '-0', but some platforms print '0'.)\n\nJudging from the platforms affected by this, I would suspect that this is\na (mis-)feature of the snprintf() implementation rather than compiler or\nprocessor. Would it make sense to provide a fixed version of snprintf()\nand get rid of these differences?\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Fri, 15 Nov 2002 18:35:40 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Geometry regression tests (was Re: Float output" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Tom Lane writes:\n>> I find that two geometry 'expected' files are now sufficient to cover\n>> all the platforms I have available to test. (We'd only need one, if\n>> everyone displayed minus zero as '-0', but some platforms print '0'.)\n\n> Judging from the platforms affected by this, I would suspect that this is\n> a (mis-)feature of the snprintf() implementation rather than compiler or\n> processor. Would it make sense to provide a fixed version of snprintf()\n> and get rid of these differences?\n\nCertainly it's a library issue on most of these platforms --- AFAIK,\nall these machines have IEEE-compliant float hardware, so it must be\nsprintf's fault and not a matter of not getting the minus zero in the\nfirst place.\n\nI wouldn't want to write a float converter from scratch, but maybe we\ncould add a few lines in src/port/snprintf.c to patch up a wrong result?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 15 Nov 2002 12:46:12 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Geometry regression tests (was Re: Float output formatting\n\toptions)" }, { "msg_contents": "\n\n> -----Original Message-----\n> From: Marc G. 
Fournier [mailto:[email protected]] \n> Sent: 03 December 2002 19:12\n> To: Bruce Momjian\n> Cc: PostgreSQL-development\n> Subject: Re: [HACKERS] [GENERAL] PostgreSQL Global \n> Development Group Announces\n> \n> \n> On Thu, 28 Nov 2002, Bruce Momjian wrote:\n> \n> > Wow, this sounds great.\n> >\n> > Where can I get a copy? Why would anyone use anything else? ;-)\n> \n> Well, if you read the announcement in its entirety, you would have\n> noticed:\n> \n> \"Source for this release is available at:\n> http://advocacy.postgresql.org/download/\n>\n\nI could have sworn we used to have a bunch of ftp mirrors for downloads.\nCome to think of it I rewrote/stole a load of Vince's PHP code to allow\nyou to select one from the portal recently. Are we not using them\nanymore?\n\n:-)\n\nRegards, Dave.\n", "msg_date": "Tue, 3 Dec 2002 20:29:48 -0000", "msg_from": "\"Dave Page\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group Announces" }, { "msg_contents": "On Tue, 3 Dec 2002, Dave Page wrote:\n\n>\n>\n> > -----Original Message-----\n> > From: Marc G. Fournier [mailto:[email protected]]\n> > Sent: 03 December 2002 19:12\n> > To: Bruce Momjian\n> > Cc: PostgreSQL-development\n> > Subject: Re: [HACKERS] [GENERAL] PostgreSQL Global\n> > Development Group Announces\n> >\n> >\n> > On Thu, 28 Nov 2002, Bruce Momjian wrote:\n> >\n> > > Wow, this sounds great.\n> > >\n> > > Where can I get a copy? Why would anyone use anything else? ;-)\n> >\n> > Well, if you read the announcement in its entirety, you would have\n> > noticed:\n> >\n> > \"Source for this release is available at:\n> > http://advocacy.postgresql.org/download/\n> >\n>\n> I could have sworn we used to have a bunch of ftp mirrors for downloads.\n> Come to think of it I rewrote/stole a load of Vince's PHP code to allow\n> you to select one from the portal recently. Are we not using them\n> anymore?\n\nHaven't you been paying attention? There's this new advocacy and suit\nmarketing thing going on that makes all of that irrelevant. It's just\nthere for show now.\n\n:)\n\nVince.\n-- \n http://www.meanstreamradio.com http://www.unknown-artists.com\n Internet radio: It's not file sharing, it's just radio.\n\n", "msg_date": "Tue, 3 Dec 2002 15:48:16 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group Announces" }, { "msg_contents": "On Tue, 3 Dec 2002, Dave Page wrote:\n\n> I could have sworn we used to have a bunch of ftp mirrors for downloads.\n> Come to think of it I rewrote/stole a load of Vince's PHP code to allow\n> you to select one from the portal recently. Are we not using them\n> anymore?\n\nYup, as with doing anything for the firs ttime, the press release itself\nhad its 'bugs' ... considering how many times Josh asked for comments on\nit, I'm surprised that nobody picked up on it *shrug*\n\nWe are looking at some improvements to the download stuff ... Greg(?)\nsuggested a layout that I really liked for a web based version that would\nhave to tie into the main mirror database ... one that provided a wee bit\nmore information then just the directory listings ... but, with that\nthought, isn't there a file you can put into an ftp directory that, when\nyou web into that directory, i gives you the listings with various\ncomments, or is that just using the .messages file?\n\n\n", "msg_date": "Tue, 3 Dec 2002 17:17:15 -0400 (AST)", "msg_from": "\"Marc G. 
Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group Announces" }, { "msg_contents": "On Tue, 3 Dec 2002, Vince Vielhaber wrote:\n\n> Haven't you been paying attention? There's this new advocacy and suit\n> marketing thing going on that makes all of that irrelevant. It's just\n> there for show now.\n\n\n", "msg_date": "Tue, 3 Dec 2002 17:18:38 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group Announces" }, { "msg_contents": "On Tue, 3 Dec 2002, Marc G. Fournier wrote:\n\n> On Tue, 3 Dec 2002, Dave Page wrote:\n>\n> > I could have sworn we used to have a bunch of ftp mirrors for downloads.\n> > Come to think of it I rewrote/stole a load of Vince's PHP code to allow\n> > you to select one from the portal recently. Are we not using them\n> > anymore?\n>\n> Yup, as with doing anything for the firs ttime, the press release itself\n> had its 'bugs' ... considering how many times Josh asked for comments on\n> it, I'm surprised that nobody picked up on it *shrug*\n\nI understood it was intentional so comments wouldn't have done any good.\n\n> We are looking at some improvements to the download stuff ... Greg(?)\n> suggested a layout that I really liked for a web based version that would\n> have to tie into the main mirror database ... one that provided a wee bit\n> more information then just the directory listings ... but, with that\n> thought, isn't there a file you can put into an ftp directory that, when\n> you web into that directory, i gives you the listings with various\n> comments, or is that just using the .messages file?\n\nAll of them I've seen had an index.html in it.\n\nVince.\n-- \n http://www.meanstreamradio.com http://www.unknown-artists.com\n Internet radio: It's not file sharing, it's just radio.\n\n", "msg_date": "Tue, 3 Dec 2002 16:40:15 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group Announces" }, { "msg_contents": "Dave Page wrote:\n<snip>\n> I could have sworn we used to have a bunch of ftp mirrors for downloads.\n> Come to think of it I rewrote/stole a load of Vince's PHP code to allow\n> you to select one from the portal recently. Are we not using them\n> anymore?\n\nOf course we are, it's just that we're also trying to direct people to \nthe Advocacy site where there is a lot more info, in a lot more languages.\n\nThe only reason for the download page not having a list of mirrors is \ndue to not having done it yet.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\n> :-)\n> \n> Regards, Dave.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n- Indira Gandhi\n\n", "msg_date": "Wed, 04 Dec 2002 10:14:47 +1100", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group Announces" }, { "msg_contents": "Marc G. Fournier writes:\n\n> Yup, as with doing anything for the firs ttime, the press release itself\n> had its 'bugs' ... 
considering how many times Josh asked for comments on\n> it, I'm surprised that nobody picked up on it *shrug*\n\nAnd how should we have guessed that release management is now done by the\n\"advocacy\" group? While you're out advocating, don't forget the existing\nusers.\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Wed, 4 Dec 2002 00:29:23 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group Announces" }, { "msg_contents": "Peter Eisentraut wrote:\n> Marc G. Fournier writes:\n> \n> \n>>Yup, as with doing anything for the firs ttime, the press release itself\n>>had its 'bugs' ... considering how many times Josh asked for comments on\n>>it, I'm surprised that nobody picked up on it *shrug*\n> \n> \n> And how should we have guessed that release management is now done by the\n> \"advocacy\" group? While you're out advocating, don't forget the existing\n> users.\n\nSorry Peter.\n\nRegards and best wishes,\n\nJustin Clift\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n- Indira Gandhi\n\n", "msg_date": "Wed, 04 Dec 2002 10:29:34 +1100", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group Announces" }, { "msg_contents": "Peter Eisentraut wrote:\n> Justin Clift writes:\n> \n> \n>>Of course we are, it's just that we're also trying to direct people to\n>>the Advocacy site where there is a lot more info, in a lot more languages.\n> \n> \n> Why don't we just shut down the regular web site. Clearly it's not\n> considered adequate anymore.\n\nWell, qe're trying to move the new \"portal\" side of things into place \n(presently at wwwdevel.postgresql.org), so that all of the different \nPostgreSQL pieces are more easily accessible.\n\nRegards and best wishes,\n\nJustin Clift\n\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n- Indira Gandhi\n\n", "msg_date": "Wed, 04 Dec 2002 10:31:51 +1100", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group Announces" }, { "msg_contents": "Justin Clift writes:\n\n> Of course we are, it's just that we're also trying to direct people to\n> the Advocacy site where there is a lot more info, in a lot more languages.\n\nWhy don't we just shut down the regular web site. Clearly it's not\nconsidered adequate anymore.\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Wed, 4 Dec 2002 00:33:44 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group Announces" }, { "msg_contents": "On Tue, 3 Dec 2002, Vince Vielhaber wrote:\n\n> > Yup, as with doing anything for the firs ttime, the press release itself\n> > had its 'bugs' ... considering how many times Josh asked for comments on\n> > it, I'm surprised that nobody picked up on it *shrug*\n>\n> I understood it was intentional so comments wouldn't have done any good.\n\nAnything is only as intentional as nobody making constructive critisms of\nit ... ewwww, that was major bad english ... 
not part of solution, you are\npart of problem sort of thing...\n\n\n", "msg_date": "Wed, 4 Dec 2002 09:32:27 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group Announces" }, { "msg_contents": "On Wed, 4 Dec 2002, Peter Eisentraut wrote:\n\n> Marc G. Fournier writes:\n>\n> > Yup, as with doing anything for the firs ttime, the press release itself\n> > had its 'bugs' ... considering how many times Josh asked for comments on\n> > it, I'm surprised that nobody picked up on it *shrug*\n>\n> And how should we have guessed that release management is now done by the\n> \"advocacy\" group? While you're out advocating, don't forget the existing\n> users.\n\nIt isn't, but those working on -advocacy were asked to help come up with a\nstronger release *announcement* then we've had in the past ...\n\n\n", "msg_date": "Wed, 4 Dec 2002 09:33:24 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group Announces" }, { "msg_contents": "On Wed, 4 Dec 2002, Justin Clift wrote:\n\n> Dave Page wrote:\n> <snip>\n> > I could have sworn we used to have a bunch of ftp mirrors for downloads.\n> > Come to think of it I rewrote/stole a load of Vince's PHP code to allow\n> > you to select one from the portal recently. Are we not using them\n> > anymore?\n>\n> Of course we are, it's just that we're also trying to direct people to\n> the Advocacy site where there is a lot more info, in a lot more languages.\n>\n> The only reason for the download page not having a list of mirrors is\n> due to not having done it yet.\n\nSo as to not recreate the wheel, or, at least, get the wheel properly\nrolling, can we get that download page redirected to the one that does\nlist the mirrors? :)\n\nI liked Greg(?)'s ideas, but I don't see it as being implemented overnight\n:)\n\n\n", "msg_date": "Wed, 4 Dec 2002 09:35:29 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group Announces" }, { "msg_contents": "On Wed, 4 Dec 2002, Peter Eisentraut wrote:\n\n> Justin Clift writes:\n>\n> > Of course we are, it's just that we're also trying to direct people to\n> > the Advocacy site where there is a lot more info, in a lot more languages.\n>\n> Why don't we just shut down the regular web site. Clearly it's not\n> considered adequate anymore.\n\nAs of yet, the new portal isn't ready yet ... and the adequacy of the\nexisting site isn't so much a problem, but maintainability of it ...\naccording to Vince, trying to add anything to it is virtually impossible\n:(\n\n\n\n", "msg_date": "Wed, 4 Dec 2002 09:37:33 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group Announces" }, { "msg_contents": "On Wed, 4 Dec 2002, Marc G. Fournier wrote:\n\n> On Tue, 3 Dec 2002, Vince Vielhaber wrote:\n>\n> > > Yup, as with doing anything for the firs ttime, the press release itself\n> > > had its 'bugs' ... considering how many times Josh asked for comments on\n> > > it, I'm surprised that nobody picked up on it *shrug*\n> >\n> > I understood it was intentional so comments wouldn't have done any good.\n>\n> Anything is only as intentional as nobody making constructive critisms of\n> it ... ewwww, that was major bad english ... 
not part of solution, you are\n> part of problem sort of thing...\n\nThat may be how you understood it, but not how I understood it. There\nappears to be an incremental takeover occurring.\n\nVince.\n-- \n http://www.meanstreamradio.com http://www.unknown-artists.com\n Internet radio: It's not file sharing, it's just radio.\n\n", "msg_date": "Wed, 4 Dec 2002 08:41:43 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group Announces" }, { "msg_contents": "Marc G. Fournier wrote:\n<snip>\n> So as to not recreate the wheel, or, at least, get the wheel properly\n> rolling, can we get that download page redirected to the one that does\n> list the mirrors? :)\n\nYep.\n\nWould the best way to do this be changing the wording to say something like:\n\n\"PostgreSQL can be downloaded as source code from any of the many mirror \nsites:\"\n\nWith a link after it directing to somewhere that gives the list. The \npresent \"www.postgresql.org\" with the list of mirrors would probably be \nadequate, but it'll need to be a different url than the straight \n\"www.postgresql.org\" as that's going to change as soon as the new portal \nis in place.\n\nDoes this sound like a workable approach for now?\n\nRegards and best wishes,\n\nJustin Clift\n\n\n> I liked Greg(?)'s ideas, but I don't see it as being implemented overnight\n> :)\n> \n> \n\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n- Indira Gandhi\n\n", "msg_date": "Thu, 05 Dec 2002 00:42:11 +1100", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [GENERAL] PostgreSQL Global Development Group Announces" }, { "msg_contents": "On Wed, 4 Dec 2002, Marc G. Fournier wrote:\n\n> On Wed, 4 Dec 2002, Peter Eisentraut wrote:\n>\n> > Marc G. Fournier writes:\n> >\n> > > Yup, as with doing anything for the firs ttime, the press release itself\n> > > had its 'bugs' ... considering how many times Josh asked for comments on\n> > > it, I'm surprised that nobody picked up on it *shrug*\n> >\n> > And how should we have guessed that release management is now done by the\n> > \"advocacy\" group? While you're out advocating, don't forget the existing\n> > users.\n>\n> It isn't, but those working on -advocacy were asked to help come up with a\n> stronger release *announcement* then we've had in the past ...\n\nThat wasn't stronger, it was fluffier. It was full of buzzwords that were\nmasking the actual content. Are you trying to hide the accomplishments or\npromote them? If you're trying to hide them like in this announcement you\nmay want to try using this tool: http://www.dack.com/web/bullshit.html\nThe stored phrases are much more refined and better paired.\n\nVince.\n-- \n http://www.meanstreamradio.com http://www.unknown-artists.com\n Internet radio: It's not file sharing, it's just radio.\n\n", "msg_date": "Wed, 4 Dec 2002 08:47:53 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group Announces" }, { "msg_contents": "On Wed, 4 Dec 2002, Marc G. 
Fournier wrote:\n\n> On Wed, 4 Dec 2002, Peter Eisentraut wrote:\n>\n> > Justin Clift writes:\n> >\n> > > Of course we are, it's just that we're also trying to direct people to\n> > > the Advocacy site where there is a lot more info, in a lot more languages.\n> >\n> > Why don't we just shut down the regular web site. Clearly it's not\n> > considered adequate anymore.\n>\n> As of yet, the new portal isn't ready yet ... and the adequacy of the\n> existing site isn't so much a problem, but maintainability of it ...\n> according to Vince, trying to add anything to it is virtually impossible\n> :(\n\nI have a new design for it, now it's just getting the time to implement\nit. It's easy to add to and looks a lot nicer.\n\nVince.\n-- \n http://www.meanstreamradio.com http://www.unknown-artists.com\n Internet radio: It's not file sharing, it's just radio.\n\n", "msg_date": "Wed, 4 Dec 2002 08:58:23 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group Announces" }, { "msg_contents": "On Wed, 4 Dec 2002, Vince Vielhaber wrote:\n\n> That wasn't stronger, it was fluffier. It was full of buzzwords that\n> were masking the actual content. Are you trying to hide the\n> accomplishments or promote them? If you're trying to hide them like in\n> this announcement you may want to try using this tool:\n> http://www.dack.com/web/bullshit.html The stored phrases are much more\n> refined and better paired.\n\nBookmark'd for the next release ... thanks for the suggestion ...\n\n\n", "msg_date": "Wed, 4 Dec 2002 13:22:20 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group Announces" }, { "msg_contents": "On Wed, 4 Dec 2002, Vince Vielhaber wrote:\n\n> I have a new design for it, now it's just getting the time to implement\n> it. It's easy to add to and looks a lot nicer.\n\nCool, I think the only beef I ever had with it was the way the results\nwere presented, but loved the whole annotated aspects ...\n\n\n", "msg_date": "Wed, 4 Dec 2002 13:23:32 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group Announces" }, { "msg_contents": "Marc G. Fournier wrote:\n> On Wed, 4 Dec 2002, Vince Vielhaber wrote:\n> \n> > That wasn't stronger, it was fluffier. It was full of buzzwords that\n> > were masking the actual content. Are you trying to hide the\n> > accomplishments or promote them? If you're trying to hide them like in\n> > this announcement you may want to try using this tool:\n> > http://www.dack.com/web/bullshit.html The stored phrases are much more\n> > refined and better paired.\n> \n> Bookmark'd for the next release ... thanks for the suggestion ...\n\nI was hoping for something that would take existing text and *Bullshit*\nit. Bummer.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 4 Dec 2002 12:57:05 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group Announces" }, { "msg_contents": "On Wed, 4 Dec 2002, Bruce Momjian wrote:\n\n> Marc G. 
Fournier wrote:\n> > On Wed, 4 Dec 2002, Vince Vielhaber wrote:\n> >\n> > > That wasn't stronger, it was fluffier. It was full of buzzwords that\n> > > were masking the actual content. Are you trying to hide the\n> > > accomplishments or promote them? If you're trying to hide them like in\n> > > this announcement you may want to try using this tool:\n> > > http://www.dack.com/web/bullshit.html The stored phrases are much more\n> > > refined and better paired.\n> >\n> > Bookmark'd for the next release ... thanks for the suggestion ...\n>\n> I was hoping for something that would take existing text and *Bullshit*\n> it. Bummer.\n\nClick on it a few times. You'll get the text you need. I've actually\nused it for real things with excellent results (I'm not going to\nelaborate).\n\nVince.\n-- \n http://www.meanstreamradio.com http://www.unknown-artists.com\n Internet radio: It's not file sharing, it's just radio.\n\n", "msg_date": "Wed, 4 Dec 2002 13:01:19 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group Announces" }, { "msg_contents": "On Wed, 4 Dec 2002, Bruce Momjian wrote:\n\n> Marc G. Fournier wrote:\n> > On Wed, 4 Dec 2002, Vince Vielhaber wrote:\n> >\n> > > That wasn't stronger, it was fluffier. It was full of buzzwords that\n> > > were masking the actual content. Are you trying to hide the\n> > > accomplishments or promote them? If you're trying to hide them like in\n> > > this announcement you may want to try using this tool:\n> > > http://www.dack.com/web/bullshit.html The stored phrases are much more\n> > > refined and better paired.\n> >\n> > Bookmark'd for the next release ... thanks for the suggestion ...\n>\n> I was hoping for something that would take existing text and *Bullshit*\n> it. Bummer.\n\nNo, but I figure that at least it will give me a good site to give me BS\nfodder from ... man, just wait for the next release announcement :)\n\n\n", "msg_date": "Wed, 4 Dec 2002 16:03:59 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group Announces" }, { "msg_contents": "Marc G. Fournier writes:\n\n> It isn't, but those working on -advocacy were asked to help come up with a\n> stronger release *announcement* then we've had in the past ...\n\nConsider that a failed experiment. PostgreSQL is driven by the\ndevelopment group and, to some extent, by the existing user base. The\nlast thing we need is a marketing department in that mix.\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Thu, 5 Dec 2002 01:01:16 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group Announces" }, { "msg_contents": "Peter Eisentraut wrote:\n> Marc G. Fournier writes:\n> \n> > It isn't, but those working on -advocacy were asked to help come up with a\n> > stronger release *announcement* then we've had in the past ...\n> \n> Consider that a failed experiment. PostgreSQL is driven by the\n> development group and, to some extent, by the existing user base. The\n> last thing we need is a marketing department in that mix.\n\nPeter, I understand your perspective, but I think you are in the\nminority on this one.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 4 Dec 2002 19:23:52 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group Announces" }, { "msg_contents": "> > It isn't, but those working on -advocacy were asked to help come up with\na\n> > stronger release *announcement* then we've had in the past ...\n>\n> Consider that a failed experiment. PostgreSQL is driven by the\n> development group and, to some extent, by the existing user base. The\n> last thing we need is a marketing department in that mix.\n\nUmmm...I disagree. Lack of marketing is one of Postgres's major problems.\nParticularly when you compare against similar efforts from MySQL, Oracle,\netc.\n\nChris\n\n", "msg_date": "Wed, 4 Dec 2002 17:48:37 -0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group Announces" }, { "msg_contents": "> > > It isn't, but those working on -advocacy were asked to help come up with\n> a\n> > > stronger release *announcement* then we've had in the past ...\n> >\n> > Consider that a failed experiment. PostgreSQL is driven by the\n> > development group and, to some extent, by the existing user base. The\n> > last thing we need is a marketing department in that mix.\n> \n> Ummm...I disagree. Lack of marketing is one of Postgres's major problems.\n> Particularly when you compare against similar efforts from MySQL, Oracle,\n> etc.\n\nYes, indeed.\n\nThe _prime_ reason for the fact that MySQL is the \"M\" in \"LAMP\" is that there \nis a steady, intent set of efforts going into marketing the \"M.\" People think \nthat MySQL is faster, easier to use and \"more standard\" than its alternatives, \nand that is certainly the result of marketing.\n\nThe /real/ technical merit of MySQL has been that there are some integrated \ntools for ISPs like CPANEL that make it easy for ISPs that don't know \n/anything/ about DBMSes to provide MySQL for their customers. CPANEL doesn't \nsupport PostgreSQL, and historically, it has been somewhat more difficult to \nsupport large numbers of PostgreSQL instances on a web server. Some of that \nhas changed, though CPANEL /still/ doesn't support PostgreSQL.\n\nIf any of you consider these \"technical\" issues to be small and petty, I'm \nafraid I don't /care/. More importantly, the hundreds of ISPs licensing \nCPANEL don't care. /They/ are the ones that would need convincing, and I \ndon't think there's any real route to convince them that they should be \npounding down CPANEL's door asking for a PostgreSQL front end and to convince \nthem that they have to tell their customers:\n\n \"We sold you MySQL, telling you it was good for you to use. We were\n wrong, and our new story is that you should convert your databases over\n to use PostgreSQL.\"\n\nAnyone consider that a likely scenario? Anyone?\n\nIt's fair to say that PostgreSQL doesn't need the likes of the \"Database \nHOWTO\" that gives a sales job that's so blindly enthusiastic as to be, well, \nblind.\n\nBut an organization that has /no/ \"marketing department\" is at a severe \ndisadvantage, like it or not.\n\nIt is unfortunate that it is almost impossible to have a marketing group \nwithout there being some wilful blinders involved; it's vital for there to be \nsome technical involvement in the marketing group to pop whatever bubbles they \ngrow that are woefully wrong. 
But even if it operates with some occasional \nlack of /real/ vision, it's necessary to have a marketing group...\n--\n(reverse (concatenate 'string \"moc.enworbbc@\" \"sirhc\"))\nhttp://cbbrowne.com/info/advocacy.html\nRules of the Evil Overlord #106. \"If my supreme command center comes\nunder attack, I will immediately flee to safety in my prepared escape\npod and direct the defenses from there. I will not wait until the\ntroops break into my inner sanctum to attempt this.\"\n<http://www.eviloverlord.com/>\n\n\n", "msg_date": "Wed, 04 Dec 2002 22:43:40 -0500", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group Announces " }, { "msg_contents": "At 05:48 PM 4/12/2002 -0800, Christopher Kings-Lynne wrote:\n>Lack of marketing is one of Postgres's major problems.\n\nWhat are the consequences of the problem?\n\n\n>Particularly when you compare against similar efforts from MySQL, Oracle,\n>etc.\n\nYou could even include Microsoft here - they do a lot of database \nmarketing. I am not at all sure the fact that a lot of large companies with \ndubious products engage in extensive marketing is a reason for *us* to \nengage in extensive marketing.\n\nWe already have a substantial following, and our clients have direct access \nto the developers, so any marketing group is pretty irrelevant for existing \nclients. So the only place I can see for a marketing group is in building \nour market share by bringing in new clients.\n\nIf that is what we want, then fine. But I don't want to see any part of the \ndevelopment effort distorted or the existing user base inconvenienced in an \neffort to purely gain that market share. I usually associate increased \nmarketing with decreased quality, and I think the causality works *both* ways.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 03 5330 3172 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n", "msg_date": "Thu, 05 Dec 2002 14:52:56 +1100", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group" }, { "msg_contents": "[cc: list trimmed]\n\nOn Wednesday 04 December 2002 22:52, Philip Warner wrote:\n> At 05:48 PM 4/12/2002 -0800, Christopher Kings-Lynne wrote:\n> >Lack of marketing is one of Postgres's major problems.\n\n> What are the consequences of the problem?\n\nActually, lack of easy upgrading is one of PostgreSQL's major problems....\n\nBut lack of focused marketing -- truthful, not, as has been said, like the \n'Database HOWTO' -- is a real problem. It would be nice to increase our \nusage.\n\n> If that is what we want, then fine. But I don't want to see any part of the\n> development effort distorted or the existing user base inconvenienced in an\n> effort to purely gain that market share. I usually associate increased\n> marketing with decreased quality, and I think the causality works *both*\n> ways.\n\nISTM there's a separate, non-code-developer group doing this. It doesn't seem \nto take away _any_ developer resources to do an advocacy site.\n\nHowever, I seriously question the need in the long term for our sites to be as \nfractured as they are. Good grief! 
We've got advocacy.postgresql.org, \ntechdocs.postgresql.org, odbc.postgresql.org, gborg.postgresql.org, \ndeveloper.postgresql.org, jdbc.postgresql.org, etc. Oh, and we also have \nwww.postgresql.org on the side? I think not. Oh, and they are fractured in \ntheir styles -- really, guys, we need a unified style here.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Wed, 4 Dec 2002 23:22:41 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group" }, { "msg_contents": "On Wed, 04 Dec 2002 22:54:37 -0500, Philip Warner wrote:\n> At 05:48 PM 4/12/2002 -0800, Christopher Kings-Lynne wrote:\n>>Lack of marketing is one of Postgres's major problems.\n> \n> What are the consequences of the problem?\n> \n\nOne consequence that probably hits home for everyone here is it makes it\nextremely hard to make a living working with postgresql. A quick search on\nmonster.com gives me 17 jobs mentioning postgresql, with none listed in the\nlast week. A search on mysql gives me 100 jobs, with 3 filed just today. \nI won't even go into the numbers for Oracle, DB2, and M$. We all have to \npay the bills and I think we'd like to do it working with postgresql.\n\n>>Particularly when you compare against similar efforts from MySQL,\n>>Oracle, etc.\n> \n> You could even include Microsoft here - they do a lot of database\n> marketing. I am not at all sure the fact that a lot of large companies\n> with dubious products engage in extensive marketing is a reason for *us*\n> to engage in extensive marketing.\n> \n\nYou can't win marketshare on technology alone, so unless you think we\ndon't need to increase our market share, that is reason enough to do more\nmarketing.\n\n> We already have a substantial following, and our clients have direct\n> access to the developers, so any marketing group is pretty irrelevant\n> for existing clients. So the only place I can see for a marketing group\n> is in building our market share by bringing in new clients.\n> \n\nWell, my previous employer uses postgresql, but they were under constant\nassault from their clients to use oracle or db2. Technically there was no\nreason to switch, but if your choice is switch databases or go out of \nbusiness, there really isn't much choice. \n\nIn the company I work for now we use at least 4 different\ndatabase systems. We could probably switch all of these to postgresql,\nbut it probably be one heck of a battle to convince people of that. A\nsimple argument that could be raised is that several of the database\ndevelopers use ERWin from computer associates. ERWin's postgresql support\nis spotty compared to its support of oracle, and unless there is a\ngroundswell of demand for better postgresql support, that's not going to\nchange. If postgresql can gain a larger market share, computer associates\nmight improve their postgresql support, and we, existing clients that we\nare, will be able to use postgresql in more areas. \n\nMarketing is very relevant to existing customers.\n\n> If that is what we want, then fine. But I don't want to see any part of\n> the development effort distorted or the existing user base\n> inconvenienced in an effort to purely gain that market share. I usually\n> associate increased marketing with decreased quality, and I think the\n> causality works *both* ways.\n> \n\nAren't most development efforts made simply to gain market share? 
After\nall, I don't think we added schema support to get *less* people to use\npostgresql.\n\nRobert Treat\n", "msg_date": "Thu, 05 Dec 2002 00:12:04 -0500", "msg_from": "\"Robert Treat\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group" }, { "msg_contents": "Robert Treat wrote:\n> On Wed, 04 Dec 2002 22:54:37 -0500, Philip Warner wrote:\n> > At 05:48 PM 4/12/2002 -0800, Christopher Kings-Lynne wrote:\n> >>Lack of marketing is one of Postgres's major problems.\n> > \n> > What are the consequences of the problem?\n> > \n> \n> One consequence that probably hits home for everyone here is it makes it\n> extremely hard to make a living working with postgresql. A quick search on\n> monster.com gives me 17 jobs mentioning postgresql, with none listed in the\n> last week. A search on mysql gives me 100 jobs, with 3 filed just today. \n> I won't even go into the numbers for Oracle, DB2, and M$. We all have to \n> pay the bills and I think we'd like to do it working with postgresql.\n\nOne other thing marketing does is attracting developers, including\n_paid_ developers, to work on PostgreSQL. Fortunately PostgreSQL is a\nbig hit in Japan, so SRA can pay me to work on PostgreSQL. If we can\nincrease PostgreSQL's popularity, we will get more people working to\nimprove PostgreSQL, both paid and volunteers.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 5 Dec 2002 00:20:20 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group" }, { "msg_contents": "\nOn Thu, 5 Dec 2002, Philip Warner wrote:\n\n> What are the consequences of the problem?\n\nSpeaking from the perspective of a long time postgresql user, who\ncurrently has several very mission critical applications using postgresql\non the back end, at a very large company...\n\nI can say the one consequence of the problem that I have run into\npersonally, is convincing management to allow me to use postgresql for my\nprojects to begin with. Fortunately, where I am currently employed, I was\nable to bash my head against the brick wall until they got tired of\nhearing from me, and allowed me to go with postgresql instead of sybase\n(which was their first choice, as the corporation already has a sybase\nsite license).\n\nThe lack of name recognition was a factor that contributed to the\ndifficulty of getting postgresql accepted. 
The last thing a non technical\nmiddle manager wants to tell his or her manager is that some mission\ncritical application that just crashed was running on some database he had\nnever heard of before that he gave the go ahead to use.\n\nAnyway, this probably doesn't belong on this mailing list, but I saw the\nquestion and figured I'd answer :)\n\nBy the way, I'm happy to report that after a year of absolutely flawless\nperformance ( except the day the raid array imploded, which was hardly\npostgres's fault ), postgresql has a very good reputation in my\ndepartment.\n\nBrian Knox\nSystems Programmer\n", "msg_date": "Thu, 5 Dec 2002 00:34:19 -0500 (EST)", "msg_from": "Brian Knox <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group" }, { "msg_contents": "Lamar Owen wrote:\n> However, I seriously question the need in the long term for our sites to be as \n> fractured as they are. Good grief! We've got advocacy.postgresql.org, \n> techdocs.postgresql.org, odbc.postgresql.org, gborg.postgresql.org, \n> developer.postgresql.org, jdbc.postgresql.org, etc. Oh, and we also have \n> www.postgresql.org on the side? I think not. Oh, and they are fractured in \n> their styles -- really, guys, we need a unified style here.\n\nI'd love to see this happen. From reading the messages here, it sounds \nlike the perception is that marketing == spouting bullshit. I don't \nbelieve that's true. I think having an informative, up-to-date, \nstylistically consistent website would do a tremendous amount of good.\n\nThe JDBC one is a particularly bad example right now - it doesn't fit in \nwith any of the rest of the site and its most prominent link is to a \ncompletely out-of-date list of compliance tests the driver fails. The \ndriver may have its flaws but it's a lot better than presented there.\n\nIMHO these things make a difference to technical people as well as \nsuits. If that site and the MySQL JDBC driver's site were my first \nimpressions, I would be using MySQL.\n\nThe JDBC site is certainly not the only one with flaws. The main website \nhas this paragraph in <http://www15.us.postgresql.org/related.html>:\n\n For encrypted postgresql connections, Brett McCormick\n ([email protected]) has made a patch for PostgreSQL\n version 6.3.2 using SSL. Visit his info page for more information.\n\nThat's horribly obsolete. In fact, I think a lot of the related projects \nare. That's only two clicks away from the main page.\n\nI'm volunteering to do work here. I could at the very least go through \nthe sites and make a longer list of things like this that I notice. If \nthey are public CVS somewhere, I can send patches. I saw that there's a \n<http://wwwdevel.postgresql.org/>. What's going on with that? Is there \nanything I can do to speed up its adoption? How will it affect the rest \nof the sites?\n\nIs this list the appropriate place to discuss the websites? or should I \ntake it to -advocacy? My impression here is that the two sites are \nmaintained separately and the people involved haven't interacted very \nmuch. 
Is that accurate or no?\n\nThanks,\nScott\n\n", "msg_date": "Thu, 05 Dec 2002 00:37:09 -0600", "msg_from": "Scott Lamb <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group" }, { "msg_contents": "On Wed, 4 Dec 2002 [email protected] wrote:\n\n> It is unfortunate that it is almost impossible to have a marketing group\n> without there being some wilful blinders involved; it's vital for there to be\n> some technical involvement in the marketing group to pop whatever bubbles they\n> grow that are woefully wrong. But even if it operates with some occasional\n> lack of /real/ vision, it's necessary to have a marketing group...\n\nAnd, for the most part, those that are -advocacy are techies that wish to\ncontribute as they can, but don't have the knowledge/time to dedicate to\nactual code ...\n\nBruce is kinda quiet, but both he and I are on that list, and I read (and\nimagine Bruce does to) pretty much everything that goes through ...\nbut, again, these aren't 'marketing droids' we have over there, but\ntechies that are using the software and have an idea of her limitations\nand benefits ...\n\n", "msg_date": "Thu, 5 Dec 2002 10:31:11 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group Announces" }, { "msg_contents": "On Thu, 5 Dec 2002, Philip Warner wrote:\n\n> At 05:48 PM 4/12/2002 -0800, Christopher Kings-Lynne wrote:\n> >Lack of marketing is one of Postgres's major problems.\n>\n> What are the consequences of the problem?\n\nWell, I'd have to say the major one is a difficulty in increasing our user\nbase, as ppl like MySQL are making sure they are heard whenever they\nadd something new that we've had for years ...\n\n> If that is what we want, then fine. But I don't want to see any part of\n> the development effort distorted or the existing user base\n> inconvenienced in an effort to purely gain that market share. I usually\n> associate increased marketing with decreased quality, and I think the\n> causality works *both* ways.\n\nThat is what we want, and the efforts in no way are meant to\nundermine/distort anything ... go to archives.postgresql.org and read\nthrough the threads to get a feel ... it's not a closed/hidden list by any\nmeans ...\n\n\n", "msg_date": "Thu, 5 Dec 2002 10:34:05 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group Announces" }, { "msg_contents": "On Wed, 4 Dec 2002, Lamar Owen wrote:\n\n> However, I seriously question the need in the long term for our sites to be as\n> fractured as they are. Good grief! We've got advocacy.postgresql.org,\n> techdocs.postgresql.org, odbc.postgresql.org, gborg.postgresql.org,\n> developer.postgresql.org, jdbc.postgresql.org, etc. Oh, and we also have\n> www.postgresql.org on the side? I think not. Oh, and they are fractured in\n> their styles -- really, guys, we need a unified style here.\n\nUmmm, actually, we have:\n\nadvocacy, techdocs, gborg, developer, archives, jobs\n\nnote that altho they are separate URLs, the end result is going to be that\nhttp://www.postgresql.org will become the \"town square\" of sorts, which\nshould be \"real soon now\" ...\n\njdbc/odbc are 'project sites' off of gborg, similar to what sourceforge\nprovides ...\n\n\n", "msg_date": "Thu, 5 Dec 2002 10:37:48 -0400 (AST)", "msg_from": "\"Marc G. 
Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group" }, { "msg_contents": "On Thu, 5 Dec 2002, Scott Lamb wrote:\n\n> Is this list the appropriate place to discuss the websites? or should I\n> take it to -advocacy? My impression here is that the two sites are\n> maintained separately and the people involved haven't interacted very\n> much. Is that accurate or no?\n\nExpect some major changes coming down the pipe ...\nhttp://www.postgresql.org is in its final stages of a major face lift ...\nthe information that is currently on that site, Vince is in the process of\ndoing a major face lift on, but as it is now, I guess it's been a veritable\nnightmare for him to really add anything to it ...\n\nOnce we announce the new http://www.postgresql.org (hopefully this coming\nweek *cross fingers*), then start bombarding us with problems :)\n\nNote that for the web site development effort itself, there is a\nclosed list with about a dozen or so of us on it ... the -advocacy list is\nmeant to be open, with its focus reflected on the advocacy web site itself\n...\n\n\n\n", "msg_date": "Thu, 5 Dec 2002 10:42:25 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group" }, { "msg_contents": "On Thursday 05 December 2002 09:37, Marc G. Fournier wrote:\n> On Wed, 4 Dec 2002, Lamar Owen wrote:\n> > However, I seriously question the need in the long term for our sites to\n> > be as fractured as they are. Good grief! We've got\n\n> note that altho they are separate URLs, the end result is going to be that\n> http://www.postgresql.org will become the \"town square\" of sorts, which\n> should be \"real soon now\" ...\n\n> jdbc/odbc are 'project sites' off of gborg, similar to what sourceforge\n> provides ...\n\nGlad to hear this.\n\nOne question: is there any particular reason the www list is closed? Just \ncurious -- reading archives of this list, or getting a digest of this list, \neven in a read-only manner, might alleviate some misconceptions. Those who \ncare can at least read what's planned for the web site.\n\nAs far as advocacy is concerned, I made a conscious decision to not read that \nlist -- I don't need to be convinced to use PostgreSQL. :-). Nor am I \nnecessarily a good 'advocacy' person......my 'convincing' many times comes \nacross much different from what I meant. So I don't read that list.\n\nCan you (or Vince) distill a roadmap for the website and post here, on \nhackers?\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Thu, 5 Dec 2002 12:01:43 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group" }, { "msg_contents": "Marc G. Fournier wrote:\n> On Wed, 4 Dec 2002 [email protected] wrote:\n> \n> > It is unfortunate that it is almost impossible to have a marketing group\n> > without there being some wilful blinders involved; it's vital for there to be\n> > some technical involvement in the marketing group to pop whatever bubbles they\n> > grow that are woefully wrong. 
But even if it operates with some occasional\n> > lack of /real/ vision, it's necessary to have a marketing group...\n> \n> And, for the most part, those that are -advocacy are techies that wish to\n> contribute as they can, but don't have the knowledge/time to dedicate to\n> actual code ...\n> \n> Bruce is kinda quiet, but both he and I are on that list, and I read (and\n> imagine Bruce does to) pretty much everything that goes through ...\n> but, again, these aren't 'marketing droids' we have over there, but\n> techies that are using the software and have an idea of her limitations\n> and benefits ...\n\nYes, I have been way too quiet. I am trying to carve out time before\nstarting on 7.4 work, but it seems stuff keeps coming up. I have\nupdated the developers page with company names, and Vince is going to\nintegrate that. My next step is to split out my advocacy mailbox and\nstart shooting out content for the advocacy site.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 5 Dec 2002 13:54:46 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group Announces" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Peter Eisentraut wrote:\n>> Marc G. Fournier writes:\n>>> It isn't, but those working on -advocacy were asked to help come up with a\n>>> stronger release *announcement* then we've had in the past ...\n>> \n>> Consider that a failed experiment. PostgreSQL is driven by the\n>> development group and, to some extent, by the existing user base. The\n>> last thing we need is a marketing department in that mix.\n\n> Peter, I understand your perspective, but I think you are in the\n> minority on this one.\n\nI tend to agree with Peter. Not that we don't need a marketing\npresence; we do (I think Great Bridge's marketing efforts are sorely\nmissed). But the point he is making is that the pgsql mailing lists\ngo to people who are generally unimpressed by marketing fluff. And\nthey're already \"sold\" on PG anyway.\n\nThe right way to handle this next time is to generate a PR-style\npress release to send to outside contacts, but to do our more\ntraditional, technically-oriented announcement on the mailing lists.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 05 Dec 2002 14:17:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group Announces " }, { "msg_contents": "At 12:12 AM 5/12/2002 -0500, Robert Treat wrote:\n> >\n> > What are the consequences of the problem?\n> >\n>\n>One consequence that probably hits home for everyone here is it makes it\n>extremely hard to make a living working with postgresql.\n...\n>You can't win marketshare on technology alone\n\nI am happy with increasing market share so long a development is not \ndistorted or current users inconvenienced. We have seen the latter with the \nmisplaced announcements. And the former because I am writing this on \n-hackers, rather than implementing dependency-tracking in pg_dump ;-).\n\n\n>...lots of stuff deleted...\n>Marketing is very relevant to existing customers.\n\nGood point. 
Market Share -> Influence ->Corprate Support -> more features \n-> market share.\n\nGaining market share *is* a natural consequence of improving the product; \nmarketing is about convincing people a product has improved, even if it \nhasn't. Advocacy is about telling people about the product as it is - and I \nhave no problem with that, with the above proviso.\n\n\n>Aren't most development efforts made simply to gain market share?\n\n<diatribe>\nI seriously hope not - in fact I would find that very depressing.\n\nIn my opinion, anyone who devotes their personal free time to an open \nsource development project probably has a slew of complex motivations that \nhave little to do with market share. Perhaps the closest they would come \nwould be to say \"I want to make it better\", and in some peoples minds, \n\"better\" is measured by market share.\n\nIn my case, development I did on other open source projects (libgd) was \ndriven by a philosophical objection to application of patents to software \nin the US, and to a need for particular features (gd2 format, & gif \nsupport). My work on PG is driven by a desire to make the product more \nuseful (to me), more usable (for me), and by a philosophical belief in the \nimportance of free & open software. The fact that other people (& I) profit \nfrom this work is great. In any case, market share, for me, is at best a \nthird order influence - and I assume that's true for most people who \ncontribute to OS software. Although I do admit that there is a natural \ntendency to want \"your team\" to win.\n</diatribe>\n\n\n>After\n>all, I don't think we added schema support to get *less* people to use\n>postgresql.\n\nI am not sure why it was added, and it's sufficiently esoteric and large \nthat I doubt market share was an issue. If we wanted market share, then \nonline-vacuum and online-upgrade would have been the big-hitters.\n\nMy guess is that it was done because we did not support it, it is in the \nSQL standard, and it solved a number of issues that caused existing users & \ndevelopers problems. It was probably also an interesting project. Maybe I'm \nwrong...\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 03 5330 3172 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n", "msg_date": "Fri, 06 Dec 2002 12:31:05 +1100", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group" }, { "msg_contents": "On Thu, 05 Dec 2002 21:26:13 -0500, Philip Warner wrote:\n> At 12:12 AM 5/12/2002 -0500, Robert Treat wrote:\n> I am happy with increasing market share so long a development is not\n> distorted or current users inconvenienced. We have seen the latter with\n> the misplaced announcements.\n\nIt seems to me that people were inconvenienced solely because Mark forgot\nto CC the right groups and he didn't put the word \"7.3\" in the right\nplace in his subject line. 
Oh, and guess it was disruptive for people who\nkillfile any piece of email that has quoted text in it...\n\n> And the former because I am writing this on\n> -hackers, rather than implementing dependency-tracking in pg_dump ;-).\n> \n\nso get back to coding already...\n\n>>...lots of stuff deleted...\n>>Marketing is very relevant to existing customers.\n> \n> Good point. Market Share -> Influence ->Corprate Support -> more\n> features -> market share.\n> \n> Gaining market share *is* a natural consequence of improving the\n> product; \n\nreally? postgresql has been improving by leaps and bounds of the last few\nyears, but I guarantee you it's been losing market share, and it's losing\nthat market share to databases without half the features.\n\n> marketing is about convincing people a product has improved,\n> even if it hasn't. Advocacy is about telling people about the product as\n> it is - and I have no problem with that, with the above proviso.\n> \n\n<snip lots more stuff that basically says marketing isn't all bad, it's\nirrelevant too>\n\nwell, i think any more discussion at this point becomes a semantical\nargument or a flame war, and I've time for neither. \n\nRobert Treat\n", "msg_date": "Thu, 05 Dec 2002 23:13:39 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group" }, { "msg_contents": "As someone who exists mainly as an active user (and part-time \nadvocate/documentation tweaker), I have found the release of PostgreSQL \n7.3 to be disappointing. The ensuing pseudo-flamewar on the various \nlists has been similarly disappointing.\n\nI was surprised, for instance, to receive a non-list email announcing \nthe release of the software but then to have to wait for days actually \nto see it show up on the official (or even the advocacy) website in a \nnews item. Even now it is not listed at PostgreSQL, Inc.\n\nConsider the pieces of the puzzle here:\n\n1) an official website (http://www.postgresql.org/)\n2) an advocacy website (http://advocacy.postgresql.org/)\n3) official mailing lists\n4) a separate email database\n5) a developers' website (http://developers.postgresql.org/)\n6) an official ftp site (ftp://ftp.postgresql.org/)\n7) mirror websites\n8) mirror ftp sites\n9) a corporate website (http://www.pgsql.com/)\n\nWhile I have remained impressed with the software itself, the \norganization of these pieces has left much to be desired for the \nduration of my involvement as an end user.\n\nAs someone who works in a small startup company, I am a frequent witness \nto both the advantages and disadvantages of the lack of a strong \nbenevolent dictatorship in the form of management. I think one of the \ncore problems with the advocacy and presentation of the PostgreSQL \nproject is the fact that it has been a developer-centric project for \nquite some time, and that process, while there are drivers, does not \ntend to affect much other than the code. There does not seem to be a \nsingle, driving vision (or even a Board or consensus-based vision) \nbehind the public face of PostgreSQL. Granted, when a project is \nentirely volunteer-based, the management and development are loose. I've \nnoticed that in many such projects, web design and maintenance become \nvery low priority, especially when left to groups of hackers. 
Witness \nGNU, Debian, and, I would say PostgreSQL: extremely spare official \nwebsites often intimidating and/or difficult for the newbie.\n\nI've wanted to see a bit more structure given to the PostgreSQL website, \nthe release process, and various other portions of the project for quite \nsome time, but often it seems as though such a structure would not even \nbe welcome. As someone who has not had time to be a true developer on \nthe project, I'm content to wait for the missing features I'd like to \nsee.\n\nStill, I'm hoping that developers and advocates alike realize that the \nrelease process and these lists are in the public domain, and the way \nbusiness is conducted affects the perceptions of users as much as the \nquality of the software or any amount of marketing.\n\nIn any case, thanks for all the hard work. I actually thought the text \nof the email release I received was good and am working on the upgrade \nprocess now in my own environment.\n\n-tfo\n\nIn article <[email protected]>,\n [email protected] (Tom Lane) wrote:\n\n> Bruce Momjian <[email protected]> writes:\n> > Peter Eisentraut wrote:\n> >> Marc G. Fournier writes:\n> >>> It isn't, but those working on -advocacy were asked to help come up with a\n> >>> stronger release *announcement* then we've had in the past ...\n> >> \n> >> Consider that a failed experiment. PostgreSQL is driven by the\n> >> development group and, to some extent, by the existing user base. The\n> >> last thing we need is a marketing department in that mix.\n> \n> > Peter, I understand your perspective, but I think you are in the\n> > minority on this one.\n> \n> I tend to agree with Peter. Not that we don't need a marketing\n> presence; we do (I think Great Bridge's marketing efforts are sorely\n> missed). But the point he is making is that the pgsql mailing lists\n> go to people who are generally unimpressed by marketing fluff. And\n> they're already \"sold\" on PG anyway.\n> \n> The right way to handle this next time is to generate a PR-style\n> press release to send to outside contacts, but to do our more\n> traditional, technically-oriented announcement on the mailing lists.\n", "msg_date": "Fri, 06 Dec 2002 19:14:12 -0600", "msg_from": "Thomas O'Connell <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group Announces" }, { "msg_contents": "On Wed, 4 Dec 2002, Bruce Momjian wrote:\n\n> Peter Eisentraut wrote:\n> > Marc G. Fournier writes:\n> >\n> > > It isn't, but those working on -advocacy were asked to help come up with a\n> > > stronger release *announcement* then we've had in the past ...\n> >\n> > Consider that a failed experiment. PostgreSQL is driven by the\n> > development group and, to some extent, by the existing user base. The\n> > last thing we need is a marketing department in that mix.\n>\n> Peter, I understand your perspective, but I think you are in the\n> minority on this one.\n\nKinda depends who you're asking now, doesn't it? I happen to agree with\nhim, but as long as you're only going to involve a selected few in the\nopinion gathering you can pretty much get the answer you want to get. I\ncan survey 100 people and get the opposite result putting you in the\nminority.\n\nVince.\n-- \n Fast, inexpensive internet service 56k and beyond! 
http://www.pop4.net/\n http://www.meanstreamradio.com http://www.unknown-artists.com\n Internet radio: It's not file sharing, it's just radio.\n\n", "msg_date": "Sat, 7 Dec 2002 19:20:19 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group Announces" }, { "msg_contents": "On Thu, 5 Dec 2002, Robert Treat wrote:\n\n> Well, my previous employer uses postgresql, but they were under constant\n> assault from their clients to use oracle or db2. Technically there was no\n> reason to switch, but if your choice is switch databases or go out of\n> business, there really isn't much choice.\n\nThat tells me their clients wanted a commercial database, not one that's\nopen source. All the marketing in the world won't change that.\n\nVince.\n-- \n Fast, inexpensive internet service 56k and beyond! http://www.pop4.net/\n http://www.meanstreamradio.com http://www.unknown-artists.com\n Internet radio: It's not file sharing, it's just radio.\n\n", "msg_date": "Sat, 7 Dec 2002 20:53:20 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group" }, { "msg_contents": "Vince Vielhaber wrote:\n> On Thu, 5 Dec 2002, Robert Treat wrote:\n> \n> \n>>Well, my previous employer uses postgresql, but they were under constant\n>>assault from their clients to use oracle or db2. Technically there was no\n>>reason to switch, but if your choice is switch databases or go out of\n>>business, there really isn't much choice.\n> \n> \n> That tells me their clients wanted a commercial database, not one that's\n> open source. All the marketing in the world won't change that.\n\nReally?\n\nWhy do you say that?\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\n> Vince.\n\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n- Indira Gandhi\n\n", "msg_date": "Sun, 08 Dec 2002 12:58:41 +1100", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group" }, { "msg_contents": "On Thu, 5 Dec 2002, Brian Knox wrote:\n\n> Speaking from the perspective of a long time postgresql user, who\n> currently has several very mission critical applications using postgresql\n> on the back end, at a very large company...\n>\n> I can say the one consequence of the problem that I have run into\n> personally, is convincing management to allow me to use postgresql for my\n> projects to begin with. Fortunately, where I am currently employed, I was\n> able to bash my head against the brick wall until they got tired of\n> hearing from me, and allowed me to go with postgresql instead of sybase\n> (which was their first choice, as the corporation already has a sybase\n> site license).\n>\n> The lack of name recognition was a factor that contributed to the\n> difficulty of getting postgresql accepted. The last thing a non technical\n> middle manager wants to tell his or her manager is that some mission\n> critical application that just crashed was running on some database he had\n> never heard of before that he gave the go ahead to use.\n\nNot name recognition, but it'd be nice to think that's the reason.\nMysql has alot of name recognition but you didn't mention them. You\nmentioned sybase and having a sybase site license. 
Marketing wouldn't\nhelp here, they want a commercial database used that they've already\npaid for.\n\nWhat too many people fail to realize is that in a commercial environment\nmany companies want another company to point the finger at in case of\ndisaster. Sybase failed, or HP failed, or IBM failed, or Microsoft\nfailed. They feel they can do something about that. If they lose a\nfew million they have someone they can go after, who are they going to\ngo after if PostgreSQL fails them? Marc? Bruce?\n\nVince.\n-- \n Fast, inexpensive internet service 56k and beyond! http://www.pop4.net/\n http://www.meanstreamradio.com http://www.unknown-artists.com\n Internet radio: It's not file sharing, it's just radio.\n\n", "msg_date": "Sat, 7 Dec 2002 21:13:12 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group" }, { "msg_contents": "> What too many people fail to realize is that in a commercial environment\n> many companies want another company to point the finger at in case of\n> disaster. Sybase failed, or HP failed, or IBM failed, or Microsoft\n> failed. They feel they can do something about that. If they lose a\n> few million they have someone they can go after, who are they going to\n> go after if PostgreSQL fails them? Marc? Bruce?\n\nThis is when you start to shout that RedHat offers commercial support,\nlicencing, etc. INCLUDING a free, non-restrictive source licence to the\ncore components of RHDB.\n\n-- \nRod Taylor <[email protected]>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "07 Dec 2002 22:11:18 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group" }, { "msg_contents": "On Sun, 8 Dec 2002, Justin Clift wrote:\n\n> Vince Vielhaber wrote:\n> > On Thu, 5 Dec 2002, Robert Treat wrote:\n> >\n> >\n> >>Well, my previous employer uses postgresql, but they were under constant\n> >>assault from their clients to use oracle or db2. Technically there was no\n> >>reason to switch, but if your choice is switch databases or go out of\n> >>business, there really isn't much choice.\n> >\n> >\n> > That tells me their clients wanted a commercial database, not one that's\n> > open source. All the marketing in the world won't change that.\n>\n> Really?\n>\n> Why do you say that?\n\nBecause of this taken from the above quoted text:\n\n\"they were under constant assault from their clients to use oracle or db2\"\n\nLast I looked neither Oracle or DB2 were open source, but they both just\nhappen to be commercial and I don't see mysql mentioned.\n\nAnything else you don't understand about that?\n\nVince.\n-- \n Fast, inexpensive internet service 56k and beyond! http://www.pop4.net/\n http://www.meanstreamradio.com http://www.unknown-artists.com\n Internet radio: It's not file sharing, it's just radio.\n\n", "msg_date": "Sun, 8 Dec 2002 15:52:10 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group" }, { "msg_contents": "On 7 Dec 2002, Rod Taylor wrote:\n\n>\n> > What too many people fail to realize is that in a commercial environment\n> > many companies want another company to point the finger at in case of\n> > disaster. Sybase failed, or HP failed, or IBM failed, or Microsoft\n> > failed. They feel they can do something about that. 
If they lose a\n> > few million they have someone they can go after, who are they going to\n> > go after if PostgreSQL fails them? Marc? Bruce?\n>\n> This is when you start to shout that RedHat offers commercial support,\n> licencing, etc. INCLUDING a free, non-restrictive source licence to the\n> core components of RHDB.\n\nI had considered mentioning redhat but didn't want to blur things. Red\nhat markets PostgreSQL under a different name and they're offering a\ncomplete package (including support as you note). The PGDG isn't doing\nthat and they shouldn't be.\n\nVince.\n-- \n Fast, inexpensive internet service 56k and beyond! http://www.pop4.net/\n http://www.meanstreamradio.com http://www.unknown-artists.com\n Internet radio: It's not file sharing, it's just radio.\n\n", "msg_date": "Sun, 8 Dec 2002 15:57:03 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group" }, { "msg_contents": "Vince Vielhaber wrote:\n> Because of this taken from the above quoted text:\n> \n> \"they were under constant assault from their clients to use oracle or db2\"\n> \n> Last I looked neither Oracle or DB2 were open source, but they both just\n> happen to be commercial and I don't see mysql mentioned.\n\nAnd.... ?\n\nRegards and best wishes,\n\nJustin Clift\n\n\n> Anything else you don't understand about that?\n> \n> Vince.\n\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n- Indira Gandhi\n\n", "msg_date": "Mon, 09 Dec 2002 08:19:13 +1100", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group" }, { "msg_contents": "On Sun, 2002-12-08 at 20:52, Vince Vielhaber wrote:\n> > Why do you say that?\n> \n> Because of this taken from the above quoted text:\n> \n> \"they were under constant assault from their clients to use oracle or db2\"\n> \n> Last I looked neither Oracle or DB2 were open source, but they both just\n> happen to be commercial and I don't see mysql mentioned.\n\nThis is a reason to increase marketing effort. I know the word has\npejorative overtones in our community, but it means talking about\nPostgreSQL so that the PHBs hear about it and therefore begin to feel\ncomfortable about using it.\n\nIf something is familiar, it feels safe. We need to make PostgreSQL\nfamiliar. That's why we need marketing.\n\n-- \nOliver Elphick <[email protected]>\nLFIX Limited\n\n", "msg_date": "08 Dec 2002 22:01:00 +0000", "msg_from": "Oliver Elphick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group" }, { "msg_contents": "On Mon, 9 Dec 2002, Justin Clift wrote:\n\n> Vince Vielhaber wrote:\n> > Because of this taken from the above quoted text:\n> >\n> > \"they were under constant assault from their clients to use oracle or db2\"\n> >\n> > Last I looked neither Oracle or DB2 were open source, but they both just\n> > happen to be commercial and I don't see mysql mentioned.\n>\n> And.... ?\n\nAnd what? If you can't understand the above you're in the wrong business.\n\nVince.\n-- \n Fast, inexpensive internet service 56k and beyond! 
http://www.pop4.net/\n http://www.meanstreamradio.com http://www.unknown-artists.com\n Internet radio: It's not file sharing, it's just radio.\n\n", "msg_date": "Sun, 8 Dec 2002 17:07:55 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group" }, { "msg_contents": "Vince Vielhaber wrote:\n> On Mon, 9 Dec 2002, Justin Clift wrote:\n> \n> \n>>Vince Vielhaber wrote:\n>>\n>>>Because of this taken from the above quoted text:\n>>>\n>>>\"they were under constant assault from their clients to use oracle or db2\"\n>>>\n>>>Last I looked neither Oracle or DB2 were open source, but they both just\n>>>happen to be commercial and I don't see mysql mentioned.\n>>\n>>And.... ?\n> \n> \n> And what? If you can't understand the above you're in the wrong business.\n\nAnd.... ?\n\n\nRegards and best wishes,\n\nJustin Clift\n\n\n> Vince.\n\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n- Indira Gandhi\n\n", "msg_date": "Mon, 09 Dec 2002 09:18:27 +1100", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group" }, { "msg_contents": "On 8 Dec 2002, Oliver Elphick wrote:\n\n> On Sun, 2002-12-08 at 20:52, Vince Vielhaber wrote:\n> > > Why do you say that?\n> >\n> > Because of this taken from the above quoted text:\n> >\n> > \"they were under constant assault from their clients to use oracle or db2\"\n> >\n> > Last I looked neither Oracle or DB2 were open source, but they both just\n> > happen to be commercial and I don't see mysql mentioned.\n>\n> This is a reason to increase marketing effort. I know the word has\n> pejorative overtones in our community, but it means talking about\n> PostgreSQL so that the PHBs hear about it and therefore begin to feel\n> comfortable about using it.\n>\n> If something is familiar, it feels safe. We need to make PostgreSQL\n> familiar. That's why we need marketing.\n\nThen why wasn't mysql in the list? It's familiar.\n\nVince.\n-- \n Fast, inexpensive internet service 56k and beyond! http://www.pop4.net/\n http://www.meanstreamradio.com http://www.unknown-artists.com\n Internet radio: It's not file sharing, it's just radio.\n\n", "msg_date": "Sun, 8 Dec 2002 17:27:40 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group" }, { "msg_contents": "On Mon, 9 Dec 2002, Justin Clift wrote:\n\n> Vince Vielhaber wrote:\n> > On Mon, 9 Dec 2002, Justin Clift wrote:\n> >\n> >\n> >>Vince Vielhaber wrote:\n> >>\n> >>>Because of this taken from the above quoted text:\n> >>>\n> >>>\"they were under constant assault from their clients to use oracle or db2\"\n> >>>\n> >>>Last I looked neither Oracle or DB2 were open source, but they both just\n> >>>happen to be commercial and I don't see mysql mentioned.\n> >>\n> >>And.... ?\n> >\n> >\n> > And what? If you can't understand the above you're in the wrong business.\n>\n> And.... ?\n\nThat's what I thought. You have no argument so your just typing.\n\nVince.\n-- \n Fast, inexpensive internet service 56k and beyond! 
http://www.pop4.net/\n http://www.meanstreamradio.com http://www.unknown-artists.com\n Internet radio: It's not file sharing, it's just radio.\n\n", "msg_date": "Sun, 8 Dec 2002 17:30:08 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group" }, { "msg_contents": "On Sun, 2002-12-08 at 22:27, Vince Vielhaber wrote:\n> On 8 Dec 2002, Oliver Elphick wrote:\n\n> > If something is familiar, it feels safe. We need to make PostgreSQL\n> > familiar. That's why we need marketing.\n> \n> Then why wasn't mysql in the list? It's familiar.\n\nTo PHBs?\n\nMySQL doesn't have anything like the marketing clout of Oracle and IBM. \nBe thankful it isn't in the list; it would make it a hell of a lot more\ndifficult to dislodge it.\n\nIf we want people to use PostgreSQL in preference to anything else, we\nhave to make it known. That is marketing. If we believe we have a good\nproduct we need to say so and say why and how it's better, cheaper and\npurer than anything else. If there's no good marketing, bad marketing\nwill rule the world for sure.\n\nIf we don't care, we can retreat into a pure technological huddle and\ndisappear up our own navels. The rest of the world won't even notice. \nSuch purity will eventually destroy the project because it will lose the\nmomentum for growth through a lack of new input. You can grow or you\ncan decline; a steady state is almost impossible to achieve.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight, UK http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"For I am the LORD your God; ye shall therefore \n sanctify yourselves, and ye shall be holy; for I am \n holy.\" Leviticus 11:44 \n\n", "msg_date": "08 Dec 2002 22:59:09 +0000", "msg_from": "Oliver Elphick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group" }, { "msg_contents": "On 8 Dec 2002, Oliver Elphick wrote:\n\n> On Sun, 2002-12-08 at 22:27, Vince Vielhaber wrote:\n> > On 8 Dec 2002, Oliver Elphick wrote:\n>\n> > > If something is familiar, it feels safe. We need to make PostgreSQL\n> > > familiar. That's why we need marketing.\n> >\n> > Then why wasn't mysql in the list? It's familiar.\n>\n> To PHBs?\n\nI would argue yes. Everywhere you turn you see \"Powered by MySQL\".\nIf years of working on it isn't getting them the familiarity to overcome\nthe PHBs then the PHBs are either not considering open source or the\nmarketing attempts aren't strong or capable enough to penetrate.\n\nVince.\n-- \n Fast, inexpensive internet service 56k and beyond! http://www.pop4.net/\n http://www.meanstreamradio.com http://www.unknown-artists.com\n Internet radio: It's not file sharing, it's just radio.\n\n", "msg_date": "Sun, 8 Dec 2002 18:14:53 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group" }, { "msg_contents": "Vince Vielhaber wrote:\n >\n> That's what I thought. You have no argument so your just typing.\n\nHi Vince,\n\nWas more hoping you'd care to share your basis for stating Robert's \nemployers clients wanted a \"commercial database\", after he mentioned \nspecifically DB2 and Oracle. 
Knowing one of the obvious common factors \nthey have and then stating it was definitely the reason - not having \nsought clarification nor confirmation from Robert - and then further \nstating that the PG Advocacy and Marketing group wouldn't be able to \nassist even if that were the case, is extremely bad form coming from \nanyone, let alone you.\n\nPlease consider the statements you make by a more accurate approach in \nthe future.\n\nRegards and best wishes,\n\nJustin Clift\n\n\n> Vince.\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n- Indira Gandhi\n\n", "msg_date": "Mon, 09 Dec 2002 10:52:56 +1100", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group" }, { "msg_contents": "On Mon, 9 Dec 2002, Justin Clift wrote:\n\n> Vince Vielhaber wrote:\n> >\n> > That's what I thought. You have no argument so your just typing.\n>\n> Hi Vince,\n>\n> Was more hoping you'd care to share your basis for stating Robert's\n> employers clients wanted a \"commercial database\", after he mentioned\n> specifically DB2 and Oracle. Knowing one of the obvious common factors\n> they have and then stating it was definitely the reason - not having\n> sought clarification nor confirmation from Robert - and then further\n> stating that the PG Advocacy and Marketing group wouldn't be able to\n> assist even if that were the case, is extremely bad form coming from\n> anyone, let alone you.\n\nThen they come with the insults. Justin, I'm finished discussing this\nwith you. You're obviously not capable of understanding it and you're\nsimply wasting my time - like usual.\n\nVince.\n-- \n Fast, inexpensive internet service 56k and beyond! http://www.pop4.net/\n http://www.meanstreamradio.com http://www.unknown-artists.com\n Internet radio: It's not file sharing, it's just radio.\n\n", "msg_date": "Sun, 8 Dec 2002 19:14:12 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group" }, { "msg_contents": "Oliver Elphick wrote:\n\n> If we want people to use PostgreSQL in preference to anything else, we\n> have to make it known. That is marketing. If we believe we have a good\n> product we need to say so and say why and how it's better, cheaper and\n> purer than anything else. If there's no good marketing, bad marketing\n> will rule the world for sure.\n> \n> If we don't care, we can retreat into a pure technological huddle and\n> disappear up our own navels. The rest of the world won't even notice. \n> Such purity will eventually destroy the project because it will lose the\n> momentum for growth through a lack of new input. You can grow or you\n> can decline; a steady state is almost impossible to achieve.\n\nCouldn't agree more with that last point.\n\nI've had the perspective of working in big companies using various database software, a company specifically focused on PostgreSQL (Great Bridge), and now a new ISV with PostgreSQL underneath a vertical application (OpenMFG). I can tell you that even though the pgsql-hacker community is as strong as it's ever been, I think there's a serious danger of the larger world passing PostgreSQL by.\n\nOracle and DB2 continue to get better and - significantly - cheaper, and SQL Server ... well, Oracle and DB2 are getting better. 
MySQL, even though it's an inferior product for most real database work, has always had a significantly larger installed base than PostgreSQL - and it's less controversial for people like Sun (who have deep relationships with Oracle) to get involved with. And despite the productizing of RHDB, Red Hat doesn't seem interested in making a real push for PostgreSQL either. While there are a number of smaller companies trying to help out, I think it's clear that the burden for helping PostgreSQL to find wider acceptance in the marketplace will be on the pgsql-hacker community for some time to come.\n\nI applaud the efforts of the advocacy group, and encourage others here not to look at the marketing as somehow dirty or beneath the dignity of the project.\n\nKeep up the good work,\nNed\n\n", "msg_date": "Sun, 8 Dec 2002 20:53:17 -0500", "msg_from": "\"Ned Lilly\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group" }, { "msg_contents": "On Sunday 08 December 2002 06:14 pm, Vince Vielhaber wrote:\n> On 8 Dec 2002, Oliver Elphick wrote:\n> > On Sun, 2002-12-08 at 22:27, Vince Vielhaber wrote:\n> > > On 8 Dec 2002, Oliver Elphick wrote:\n> > > > If something is familiar, it feels safe.  We need to make PostgreSQL\n> > > > familiar.  That's why we need marketing.\n> > >\n> > > Then why wasn't mysql in the list?  It's familiar.\n> >\n> > To PHBs?\n>\n> I would argue yes.  Everywhere you turn you see \"Powered by MySQL\".\n> If years of working on it isn't getting them the familiarity to overcome\n> the PHBs then the PHBs are either not considering open source or the\n> marketing attempts aren't strong or capable enough to penetrate.\n>\n\nI don't think mysql has penetrated the \"enterprise class/ mission critical\" \nmindset, which is the level at which our service had to be provided. To be \nhonest, it was tough to argue PostgreSQL belonged in that group, though we \nhad a good 2 years worth of history in actually running the business on \nPostgreSQL which couldn't be dismissed. Of course, some of these companies \nweren't too happy things were running on linux, and not aix or solaris; are \nwe seeing a pointy haired trend here? Personally I never understood why our \nsales guys didn't just tell them \"ok we'll port the service to oracle/solaris \nfor you, but it's going to cost you at least twice what it does now, if not \nthree times. Oh, and you won't see any better performance.\"\n\nRobert Treat\n\n", "msg_date": "Sun, 8 Dec 2002 22:48:54 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group" }, { "msg_contents": "On Thursday, 5 December 2002 05:22, Lamar Owen wrote:\n> [cc: list trimmed]\n>\n> On Wednesday 04 December 2002 22:52, Philip Warner wrote:\n> > At 05:48 PM 4/12/2002 -0800, Christopher Kings-Lynne wrote:\n> > >Lack of marketing is one of Postgres's major problems.\n> >\n> > What are the consequences of the problem?\n>\n> Actually, lack of easy upgrading is one of PostgreSQL's major problems....\n>\n> But lack of focused marketing -- truthful, not, as has been said, like the\n> 'Database HOWTO' -- is a real problem.  It would be nice to increase our\n> usage.\n>\n> > If that is what we want, then fine. But I don't want to see any part of\n> > the development effort distorted or the existing user base inconvenienced\n> > in an effort to purely gain that market share. 
I usually associate\n> > increased marketing with decreased quality, and I think the causality\n> > works *both* ways.\n>\n> ISTM there's a separate, non-code-developer group doing this.  It doesn't\n> seem to take away _any_ developer resources to do an advocacy site.\n>\n> However, I seriously question the need in the long term for our sites to be\n> as fractured as they are.  Good grief!  We've got advocacy.postgresql.org,\n> techdocs.postgresql.org, odbc.postgresql.org, gborg.postgresql.org,\n> developer.postgresql.org, jdbc.postgresql.org, etc.  Oh, and we also have\n> www.postgresql.org on the side?  I think not.  Oh, and they are fractured\n> in their styles -- really, guys, we need a unified style here.\n\nHi,\n\nthere are lots of sites talking about postgresql. But if someone hears about \npostgresql he will surely try www.postgresql.org. There he just gets a list of \nmirrors. Not really a good start. But worse: there are no links to gborg, \nadvocacy, techdocs, ... Advocacy should be found at www.postgresql.org and \nhave links to the other pages. I found gborg when reading the mailing lists. \nIt is something like an insider tip.\n\nwww.apache.org has a much better structure. You go to www.apache.org and get a \nwelcome message and links to subprojects such as the web server.\n\nAnother point that comes to my mind is design. I'm not a designer, but I like \nthe design of www.postgresql.org but not advocacy.postgresql.org.\n\n\nTommi\n\n-- \nDr. Eckhardt + Partner GmbH\nhttp://www.epgmbh.de\n", "msg_date": "Mon, 9 Dec 2002 09:03:44 +0100", "msg_from": "Tommi Maekitalo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group" }, { "msg_contents": "Hi Tommi,\n\nTommi Maekitalo wrote:\n<snip>\n> Hi,\n> \n> there are lots of sites talking about postgresql. But if someone hears about \n> postgresql he will surely try www.postgresql.org. There he just gets a list of \n> mirrors. Not really a good start. But worse: there are no links to gborg, \n> advocacy, techdocs, ... Advocacy should be found at www.postgresql.org and \n> have links to the other pages. I found gborg when reading the mailing lists. \n> It is something like an insider tip.\n\nThere is a new front page for the www.postgresql.org site that was \nrecently finished, and will be moved into the correct place soon.  You \ncan view it for now at wwwdevel.postgresql.org.\n\nThe new front page has links to the other main websites, so it should \nhelp people find the information they need in a much easier way.  :-)\n\nHope that's helpful to know.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\n<snip>\n> Tommi\n> \n\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n- Indira Gandhi\n\n", "msg_date": "Mon, 09 Dec 2002 19:16:01 +1100", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group" }, { "msg_contents": "Vince Vielhaber wrote:\n> On Sun, 8 Dec 2002, Justin Clift wrote:\n> \n> > Vince Vielhaber wrote:\n> > > On Thu, 5 Dec 2002, Robert Treat wrote:\n> > >\n> > >\n> > >>Well, my previous employer uses postgresql, but they were under constant\n> > >>assault from their clients to use oracle or db2. 
Technically there was no\n> > >>reason to switch, but if your choice is switch databases or go out of\n> > >>business, there really isn't much choice.\n> > >\n> > >\n> > > That tells me their clients wanted a commercial database, not one that's\n> > > open source. All the marketing in the world won't change that.\n> >\n> > Really?\n> >\n> > Why do you say that?\n> \n> Because of this taken from the above quoted text:\n> \n> \"they were under constant assault from their clients to use oracle or db2\"\n> \n> Last I looked neither Oracle or DB2 were open source, but they both just\n> happen to be commercial and I don't see mysql mentioned.\n> \n> Anything else you don't understand about that?\n\nThere are a number of reasons their clients could have been clamoring\nfor DB2 or Oracle, only some of which are related to the fact that\nthey're commercial, closed-source databases:\n\n1. They already have significant in-house expertise with one or the\n other product.\n\n2. They need 24x7 support, and are convinced that they'll get better\n support for Oracle or DB2 than anything else.\n\n3. They want a company to blame in case things go wrong.\n\n4. They require certain capabilities that they believe only DB2 or\n Oracle can provide.\n\n5. They have an established partnership with IBM or Oracle.\n\n6. Some combination of the above.\n\n\nSome of those reasons are such that it might be possible (depending on\nthe specifics of the situation) to successfully market PostgreSQL (or\neven MySQL) to them, and some of them aren't. It just depends.\n\nAnd that's why it's a bad idea to simply discard that situation as one\nin which it would be impossible to market PostgreSQL.\n\n\nMarketing is the art of convincing someone that they want your\nproduct. Since the keyword here is \"want\", it's an art that combines\nreason and emotion. Even if the situation seems logically hopeless\n(that is, there's no logical reason for the customer to prefer your\nproduct over another), you may still manage to successfully market\nyour product to them by appealing to their emotions. Happens all the\ntime.\n\nMy personal feeling is that in the case of PostgreSQL, it should be\nmarketed primarily using reason. More precisely, it should *not* be\nmarketed to someone for whom a different product would better suit\nthem. That, to me, would be shady at best and would eventually become\na blemish on the reputation of the PostgreSQL community. But it\ndoesn't mean giving up just because the client thinks he wants a\ncommercial database: he may well want something else that a commercial\ndatabase just happens to provide.\n\nIf you're trying to sell someone on PostgreSQL, it behooves you to\nfigure out what their real needs are first. Their actual needs may be\nsignificantly different from what they tell you they want.\n\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Mon, 9 Dec 2002 01:20:45 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group" }, { "msg_contents": "On 9 Dec 2002 at 1:20, Kevin Brown wrote:\n\n> 2. They need 24x7 support, and are convinced that they'll get better\n> support for Oracle or DB2 than anything else.\n\nI have experienced what oracle support means for 24x7. I wouldn't even wish \nthat penalty for my worst enemy.\n\nI can tell a story about it but I digress. Details aren't important though \ntrue. \n\nWhat really matters is how kindly and dearly you stand by your product. 
That is \nwhere all support originates.. Rest is marketing..\n\nBye\n Shridhar\n\n--\nI have never understood the female capacity to avoid a direct answer toany \nquestion.\t\t-- Spock, \"This Side of Paradise\", stardate 3417.3\n\n", "msg_date": "Mon, 09 Dec 2002 15:13:12 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group" }, { "msg_contents": "On Thu, 5 Dec 2002, Tom Lane wrote:\n\n> I tend to agree with Peter. Not that we don't need a marketing\n> presence; we do (I think Great Bridge's marketing efforts are sorely\n> missed). But the point he is making is that the pgsql mailing lists go\n> to people who are generally unimpressed by marketing fluff. And they're\n> already \"sold\" on PG anyway.\n>\n> The right way to handle this next time is to generate a PR-style\n> press release to send to outside contacts, but to do our more\n> traditional, technically-oriented announcement on the mailing lists.\n\nAgreed ... we tried to do 'two-in-one' on this one and it didn't quite\nwork out as well as it could have ... next time, we'll go with both\nmethods ...\n\n", "msg_date": "Mon, 9 Dec 2002 18:15:21 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group Announces" }, { "msg_contents": "On Fri, 6 Dec 2002, Thomas O'Connell wrote:\n\n> I was surprised, for instance, to receive a non-list email announcing\n> the release of the software but then to have to wait for days actually\n> to see it show up on the official (or even the advocacy) website in a\n> news item. Even now it is not listed at PostgreSQL, Inc.\n\nack, an oversight, I can assure you ... I have proded the apporpriate ppl\nfor this one :(\n\n", "msg_date": "Mon, 9 Dec 2002 18:18:16 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [GENERAL] PostgreSQL Global Development Group Announces" }, { "msg_contents": "On Sat, 7 Dec 2002, Vince Vielhaber wrote:\n\n> On Wed, 4 Dec 2002, Bruce Momjian wrote:\n>\n> > Peter Eisentraut wrote:\n> > > Marc G. Fournier writes:\n> > >\n> > > > It isn't, but those working on -advocacy were asked to help come up with a\n> > > > stronger release *announcement* then we've had in the past ...\n> > >\n> > > Consider that a failed experiment. PostgreSQL is driven by the\n> > > development group and, to some extent, by the existing user base. The\n> > > last thing we need is a marketing department in that mix.\n> >\n> > Peter, I understand your perspective, but I think you are in the\n> > minority on this one.\n>\n> Kinda depends who you're asking now, doesn't it? I happen to agree with\n> him, but as long as you're only going to involve a selected few in the\n> opinion gathering you can pretty much get the answer you want to get. I\n> can survey 100 people and get the opposite result putting you in the\n> minority.\n\nMe, I think Peter went to the 'far left', while the press release went to\nthe 'far right' (or vice versa) ... i think Tom sum'd it up best that we\nshould have had one for each 'market' we were trying to address ...\ndefinitely something to keep in mind and strive for for the next release\n...\n\n", "msg_date": "Mon, 9 Dec 2002 18:21:29 -0400 (AST)", "msg_from": "\"Marc G. 
Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group Announces" }, { "msg_contents": ">\n>\n>>>Peter Eisentraut wrote:\n>>> \n>>>\n>>>>Marc G. Fournier writes:\n>>>>\n>>>> \n>>>>\n>>>>>It isn't, but those working on -advocacy were asked to help come up with a\n>>>>>stronger release *announcement* then we've had in the past ...\n>>>>> \n>>>>>\n>>>>Consider that a failed experiment. PostgreSQL is driven by the\n>>>>development group and, to some extent, by the existing user base. The\n>>>>last thing we need is a marketing department in that mix.\n>>>>\nThen you will have what you want. You will be used by a limited number \nof developers who understand the idea. And you will have ugly dialogues \nlike that. This sounds a bit like 'what would happen if all population \nof the world were male'. Or all were developers. You should accept the \nfact that you never have developers on the front line. Even if you take \nMicrosoft - I even do not know the name of the chief software engineer \n(do not tell me this is Mr. Gates, he is not - there is a guy with a \nbeard, the third richest man in the world or so). Or if you take Oracle \n- you have Larry. Larry is not a developer. Or even with MySQL - you see \nthe marketing machine. Even with Linux - I have not seen Linus in the \npress for ages. Or Alan. All 'gurus' are hidden. You take the hype - the \nhype of Bill or the hype of Linus. Or the charming and successfully \narogant Lary. And make a product out of it and a market. As long as the \ndevelopers of PostgreSQL want to be on the front line - it will be what \nit is - a fine database used by the people who have the clue to talk to \nand understand developers. An uncut diamond.\n\nI actually do not understand why is the whole cry - why not somebody who \nhas REALLY the marketing in his/her heart - does not make an open source \namazingly beautiful and powerful web site. You do not have to ask Bruce \nfor that. You get BRICOLAGE - it is free, and it is good - salon.com \nruns on it. You inspire some great designer to do the desing (do not ask \na developer to do that, otherwise a designer might want to do some code \nand PostgreSQL is lost). Call Mario Garcia (www.mariogarcia.com) - he \nwill be proud to help. And you take ten fanatic advocacy people to fill \nin success stories and case studies. News. Whatever.\n\nIt does not take that much. It take strong individuals that lead. \nHowever, some people on HACKERS find special pleasure to kill all \ninitiative. I do not see this for first time...\n\nIavor\n\nwww.pgaccess.org\n\n\n", "msg_date": "Sat, 14 Dec 2002 01:44:57 +0100", "msg_from": "Iavor Raytchev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group Announces" }, { "msg_contents": "Iavor Raytchev wrote:\n> I actually do not understand why is the whole cry - why not somebody who \n> has REALLY the marketing in his/her heart - does not make an open source \n> amazingly beautiful and powerful web site. You do not have to ask Bruce \n> for that. You get BRICOLAGE - it is free, and it is good - salon.com \n> runs on it. You inspire some great designer to do the desing (do not ask \n> a developer to do that, otherwise a designer might want to do some code \n> and PostgreSQL is lost). Call Mario Garcia (www.mariogarcia.com) - he \n> will be proud to help. And you take ten fanatic advocacy people to fill \n> in success stories and case studies. News. 
Whatever.\n> \n> It does not take that much. It take strong individuals that lead. \n> However, some people on HACKERS find special pleasure to kill all \n> initiative. I do not see this for first time...\n\nI think we have gotten over that hurdle and _most_ agree marketing is a\npriority.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 13 Dec 2002 19:50:25 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group Announces" }, { "msg_contents": "Bruce Momjian wrote:\n\n>Iavor Raytchev wrote:\n> \n>\n>>I actually do not understand why is the whole cry - why not somebody who \n>>has REALLY the marketing in his/her heart - does not make an open source \n>>amazingly beautiful and powerful web site. You do not have to ask Bruce \n>>for that. You get BRICOLAGE - it is free, and it is good - salon.com \n>>runs on it. You inspire some great designer to do the desing (do not ask \n>>a developer to do that, otherwise a designer might want to do some code \n>>and PostgreSQL is lost). Call Mario Garcia (www.mariogarcia.com) - he \n>>will be proud to help. And you take ten fanatic advocacy people to fill \n>>in success stories and case studies. News. Whatever.\n>>\n>>It does not take that much. It take strong individuals that lead. \n>>However, some people on HACKERS find special pleasure to kill all \n>>initiative. I do not see this for first time...\n>> \n>>\n>\n>I think we have gotten over that hurdle and _most_ agree marketing is a\n>priority.\n>\nI am sorry. Seems I came too late. I did it out of my good feelings.\n\nIavor\n\n\n\n", "msg_date": "Sat, 14 Dec 2002 02:16:18 +0100", "msg_from": "Iavor Raytchev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group Announces" }, { "msg_contents": "Peter Eisentraut wrote:\n\n>Marc G. Fournier writes:\n>\n> \n>\n>>It isn't, but those working on -advocacy were asked to help come up with a\n>>stronger release *announcement* then we've had in the past ...\n>> \n>>\n>\n>Consider that a failed experiment. PostgreSQL is driven by the\n>development group and, to some extent, by the existing user base. The\n>last thing we need is a marketing department in that mix.\n>\n\nI am a long term user of PostgreSQL and I think it suffers from a lack \nof a marketing department.\n\nIf you have the best restaurant in town, but no one eats there, what's \nthe point?\n\nWe all correspond and work on PostgreSQL to make it the best we can. To \ncreate something \"good\" that people can use. One of the prime parts of \nthat sentence is \"people can use.\" Like it or not, that means getting \nthe word out.\n\nMySQL is an appalling database, but people use it, a lot! Why? Because \nthey really market it. They push it. They craft deceptive benchmarks \nwhich show it is better. PostgreSQL doesn't even need to be deceptive.\n\nMy company is working on a Suite of applications and PostgreSQL is a key \ncomponent. We will be doing our own local marketing, but it it would \nhelp if the PostgreSQL core understood that a clean professional looking \nwebsite, geared toward end users would make a big difference.\n\nFurthermore, I think it would be very rewarding for everyone involved if \nwe could get some of the \"street cred\" that MySQL has. 
PostgreSQL *is* a \nbetter database in almost every way. If MySQL virtually owns the open \nsource mind share for SQL databases, it is our fault.\n\nPeter, Tom, Bruce, et al., you guys do a great job. IMHO PostgreSQL isn't \nlacking in anything technical; as of 7.2, with non-locking vacuum, I \nwould consider it a viable database with no caveats. 7.3 is superior. A \npure Win32 version would be awesome.\n\nI just think that if we could get people equally talented at spreading \nthe word and making the noise, it would make a big difference in the \nnumber of users. More users eventually translate to more funding or \ndevelopment.\n\nWouldn't you like to say to someone: \"I contribute to the PostgreSQL \nproject\" and have them say \"Cool\" instead of \"What's that?\"\n\n\n\n\n\n", "msg_date": "Sat, 14 Dec 2002 08:26:51 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group Announces" }, { "msg_contents": "\n----- Original Message -----\nFrom: \"Devrim GÜNDÜZ\" <[email protected]>\nTo: \"PostgreSQL-development\" <[email protected]>\nSent: Saturday, December 14, 2002 4:58 PM\nSubject: Re: [HACKERS] [GENERAL] PostgreSQL Global Development Group\n> Also, I have something to say about win32 port.\n>\n> I'm a Linux user. I'm happy that PostgreSQL does not have win32 version.\n> If someone wants to use a real database server, then they should install\n> Linux (or *bsd,etc). This is what Oracle offers,too. Native Windows\n> support will cause some problems; such as some dummy windows users will\n> begin using it. I do not believe that PostgreSQL needs native windowz\n> support.\n\nOoops.\nI'm a Linux user too, but I have a SCO Openserver, UnixWare, Netware and a lot\nof windows boxes in my office.\nAlso I have Informix, Sybase ... etc.\nThis isn't for my entertainment.\nOur customers need to \"use a real database server\".\nBut what about small business?\nA lot of our small customers can't spend money for a dedicated linux box :(((\n\nI spent 2 months trying open source databases (PostgreSQL, SAP DB,\nInterbase/Firebird);\nfinally I chose PostgreSQL. Now we port one of our products from Sybase SQL\nAnywhere to PostgreSQL.\n\nWe have more than 100 customers with small networks (2-10). Most of them\ncan't afford a dedicated linux box.\nAnother situation: DHL Bulgaria and TNT Worldwide Express Bulgaria are our\ncustomers too.\nIn HQ they chose windows nt (I won't comment on how \"smart\" this decision is),\npay a lot of money to Mr. Gates and now what - we say PostgreSQL is great,\nbut ......\n(and I have personal contacts with their sysadmins; I don't believe they are\n\"dummy windows users\")\n\nSo if you don't want windows support, just don't use it!!!!!\n\n\n\n\n", "msg_date": "Sat, 14 Dec 2002 16:31:58 +0100", "msg_from": "\"Igor Georgiev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group" }, { "msg_contents": "Hi,\n\nOn Sat, 2002-12-14 at 13:26, mlw wrote:\n\n> MySQL is an appalling database, but people use it, a lot! Why? Because \n> they really market it. They push it. They craft deceptive benchmarks \n> which show it is better. PostgreSQL doesn't even need to be deceptive.\n> \n<snip>\n> Furthermore, I think it would be very rewarding for everyone involved if \n> we could get some of the \"street cred\" that MySQL has. PostgreSQL *is* a \n> better database in almost every way. 
If MySQL virtually owns the open \n> source mind share for SQL databases, it is our fault.\n\nI do NOT like hearing about MySQL in this (these) list(s).\n\nPostgreSQL is not in the same category with MySQL. MySQL is for\n*dummies*, not database admins. I do not even call it a database. I\nhave never forgotten my data loss 2,5 years ago; when I used MySQL for\njust 2 months!!! \n\nIf we want to \"sell\" PostgreSQL, we should talk about, maybe, Oracle.\nI have never took care of MySQL said. I just know that I'm running\nPostgreSQL since 2,5 years and I only stopped it \"JUST\" before upgrades\nof PostgreSQL. It's just *working*; which is unfamiliar to MySQL users. \n\nI've presented about 28 seminars in last 12 months on PostgreSQL... In\nall of them, I always tried to avoid talking about MySQL. But always\n\"hit\" Oracle. I'm sick of hearing such sentences : \"We paid $$$$ to\nOracle, we hold 1 GB of data!\". Even MySQL can hold that amount of data\n:-) \n\nAlso, I have something to say about win32 port.\n\nI'm a Linux user. I'm happy that PostgreSQL does not have win32 version.\nIf someone wants to use a real database server, then they should install\nLinux (or *bsd,etc). This is what Oracle offers,too. Native Windows\nsupport will cause some problems; such as some dummy windows users will\nbegin using it. I do not believe that PostgreSQL needs native windowz\nsupport. \n\nSo, hackers (I'm not a hacker) should decide whether PostgreSQL should\nbe used widely in real database apps, or it should be used even by dummy\nusers?\n\nI prefer the first one, if we want to compete with Oracle; not MySQL.\n\nBest regards,\n\n--\nDevrim GUNDUZ\nTR.NET System Support Specialist\[email protected]\n\n", "msg_date": "14 Dec 2002 15:58:29 +0000", "msg_from": "Devrim =?ISO-8859-1?Q?G=DCND=DCZ?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group" }, { "msg_contents": "Hi,\n\nOn Sat, 2002-12-14 at 15:31, Igor Georgiev wrote:\n\n<snip>\n> In HQ they choose windows nt (i don't comment how \"smart\" is this decision),\n> pay a lot of money to mr.Gates and now what - we say PostgreSQL is great ,\n> but ......\n> ( and i have personal contacts with their sysadmins i don't believe they are\n> \"dummy windows users\")\n\nHey, I did not say that \"any windowz user is dummy\". If you read my\nprevious post from the beginning; you'll see that my target is MySQL\nusers on Windows...\n\nWhat I've been trying to say that is: If we have a chance to choose, I'd\nprefer using PostgreSQL in *nix systems. This is what I've been doing\nsince 2,5 years. \n\n> So if you don't want windows support just don't use it!!!!!\n\nI can't, even if I want it; since I do not have a windows installed\ncomputer. ;-)\n\nAnyway, this will be a \"windows-linux\" discussion; which is offtopic for\nthis list.\n\nBest regards,\n--\nDevrim GUNDUZ\nTR.NET System Support Specialist\[email protected]\n\n", "msg_date": "14 Dec 2002 16:56:24 +0000", "msg_from": "Devrim =?ISO-8859-1?Q?G=DCND=DCZ?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group" }, { "msg_contents": "Devrim G?ND?Z wrote:\n> I do NOT like hearing about MySQL in this (these) list(s).\n> \n> PostgreSQL is not in the same category with MySQL. MySQL is for\n> *dummies*, not database admins. I do not even call it a database. I\n> have never forgotten my data loss 2,5 years ago; when I used MySQL for\n> just 2 months!!! 
\n\nI think you're on to something here, but it's obscured by the way you\nsaid it.\n\nThere's no question in my mind that PostgreSQL is superior in almost\nevery way to MySQL. For those of us who are technically minded, it\nboggles the mind that people would choose MySQL over PostgreSQL. Yet\nthey do. And it's important to understand why.\n\nSimply saying \"MySQL has better marketing\" isn't enough. It's too\nsimple an answer and obscures some issues that should probably be\naddressed.\n\nPeople use MySQL because it's very easy to set up, relatively easy to\nmaintain (when something doesn't go wrong, that is), is very well\ndocumented and supported, and is initially adequate for the task they\nhave in mind (that the task may change significantly such that MySQL\nis no longer adequate is something only those with experience will\nconsider).\n\nPostgreSQL has come a long way and, with the exception of a few minor\nthings (the need to VACUUM, for instance. The current version makes\nthe VACUUM requirement almost a non-issue as regards performance and\navailability, but it really should be something that the database\ntakes care of itself), is equivalent to MySQL in the above things\nexcept for documentation and support.\n\nMySQL's documentation is very, very good. My experience with it is\nthat it's possible, and relatively easy, to find information about\nalmost anything you might need to know.\n\nPostgreSQL's documentation is good, but not quite as good as MySQL's.\nIt's not quite as complete. For instance, I didn't find any\ndocumentation at all in the User's Guide or Administrator's Guide on\ncreating tables (if I missed it, then that might illustrate that the\ndocumentation needs to be organized slightly differently). I did find\na little in the tutorial (about the amount that you'd want in a\ntutorial), but to find out more I had to go to the SQL statement\nreference (in my case I was looking for the means by which one could\ncreate a constraint on a column during table creation time).\n\nThe reason this is important is that the documentation is *the* way\npeople are going to learn the database. If it's too sparse or too\ndisorganized, people who don't have a lot of time to spend searching\nthrough the documentation for something may well decide that a\ndifferent product (such as MySQL) would suit their needs better.\n\nThe documentation for PostgreSQL improves all the time, largely in\nresponse to comments such as this one, and that's a very good thing.\nMy purpose in bringing this up is to show you what PostgreSQL is up\nagainst in terms of widespread adoption.\n\n> If we want to \"sell\" PostgreSQL, we should talk about, maybe, Oracle.\n> I have never took care of MySQL said. I just know that I'm running\n> PostgreSQL since 2,5 years and I only stopped it \"JUST\" before upgrades\n> of PostgreSQL. It's just *working*; which is unfamiliar to MySQL\n> users. \n\nThe experience people have with MySQL varies a lot, and much of it has\nto do with the load people put on it. If MySQL were consistently bad\nand unreliable it would have a much smaller following (since it's not\nin a monopoly position the way Microsoft is).\n\nBut you're mistaken if you believe that MySQL isn't competition for\nPostgreSQL. 
It is, because it serves the same purpose: a means of\nstoring information in an easily retrievable way.\n\nSelling potential MySQL users on PostgreSQL should be easier than\ndoing the same for Oracle users because potential MySQL users have at\nleast already decided that a free database is worthy of consideration.\nAs their needs grow beyond what MySQL offers, they'll look for a more\ncapable database engine. It's a target market that we'd be idiots to\nignore, and we do so at our peril (the more people out there using\nMySQL, the fewer there are using PostgreSQL).\n\n> I'm a Linux user. I'm happy that PostgreSQL does not have win32 version.\n> If someone wants to use a real database server, then they should install\n> Linux (or *bsd,etc). This is what Oracle offers,too. Native Windows\n> support will cause some problems; such as some dummy windows users will\n> begin using it. I do not believe that PostgreSQL needs native windowz\n> support. \n\nI hate to break it to you (assuming that I didn't misunderstand what\nyou said), but Oracle offers a native Windows port of their database\nengine, and has done so for some time. It's *stupid* to ignore the\nnative Windows market. There are a lot of people who need a database\nengine to store their data and who would benefit from a native Windows\nimplementation of PostgreSQL, but aren't interested in the additional\nburden of setting up a Linux server because they lack the money, time,\nor expertise.\n\n> So, hackers (I'm not a hacker) should decide whether PostgreSQL should\n> be used widely in real database apps, or it should be used even by dummy\n> users?\n\nWhat makes you think we can't meet the needs of both groups? The\ncapabilities of PostgreSQL are (with very few exceptions) a superset\nof MySQL's, which means that wherever someone deploys a MySQL server,\nthey could probably have deployed a PostgreSQL server in its place.\nIt should be an easy sell: they get a database engine that is\nsignificantly more capable than MySQL for the same low price!\n\nSelling to the Oracle market is going to be harder. The capabilities\nof Oracle are a superset of those of PostgreSQL. Shops which plan to\ndeploy a database server and who need the capabilities of PostgreSQL\nat a minimum are going to look at Oracle for the same reason that\nshops which at a minimum need the capabilities of MySQL would be smart\nto look at PostgreSQL: their needs may grow over time and changing the\ndatabase mid-project is difficult and time-consuming. The difference\nis that the prices of MySQL and PostgreSQL are the same, while the\nprices of PostgreSQL and Oracle are vastly different.\n\nThat's not to say that going after the Oracle market shouldn't be done\n(quite the opposite, provided it's done honestly), only that *not*\ngoing after the MySQL market is folly.\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Sat, 14 Dec 2002 22:02:47 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL Global Development Group" }, { "msg_contents": "Kevin Brown wrote:\n> Devrim G?ND?Z wrote:\n> > I do NOT like hearing about MySQL in this (these) list(s).\n> > \n> > PostgreSQL is not in the same category with MySQL. MySQL is for\n> > *dummies*, not database admins. I do not even call it a database. I\n> > have never forgotten my data loss 2,5 years ago; when I used MySQL for\n> > just 2 months!!! 
\n> \n> I think you're on to something here, but it's obscured by the way you\n> said it.\n> \n> There's no question in my mind that PostgreSQL is superior in almost\n> every way to MySQL. For those of us who are technically minded, it\n> boggles the mind that people would choose MySQL over PostgreSQL. Yet\n> they do. And it's important to understand why.\n> \n> Simply saying \"MySQL has better marketing\" isn't enough. It's too\n> simple an answer and obscures some issues that should probably be\n> addressed.\n\nI think it /is/ a significant factor, the point being that the MySQL company \nhas been quite activist in pressing MySQL as \"the answer,\" to the point to \nwhich there's a development strategy called \"LAMP\" (Linux + Apache + MySQL + \n(Perl|Python|PHP)).\n\n> People use MySQL because it's very easy to set up, relatively easy to\n> maintain (when something doesn't go wrong, that is), is very well\n> documented and supported, and is initially adequate for the task they\n> have in mind (that the task may change significantly such that MySQL\n> is no longer adequate is something only those with experience will\n> consider).\n\n... And the consistent marketing pressure that in essence claims:\n\n - It's easier to use than any alternative;\n - It's much faster than any other DBMS;\n - It's plenty powerful and robust enough.\n\nAs near as I can tell, /none/ of these things are true outside of very \ncarefully selected application domains. But the claims have been presented \nenough times that people actually believe them to be true.\n\n> PostgreSQL has come a long way and, with the exception of a few minor\n> things (the need to VACUUM, for instance. The current version makes\n> the VACUUM requirement almost a non-issue as regards performance and\n> availability, but it really should be something that the database\n> takes care of itself), is equivalent to MySQL in the above things\n> except for documentation and support.\n\nI would point to a third thing: Tools to support \"hands-off administration.\" \nMy web hosting provider has a set of tools to let me administer various \naspects of my site complete with \"pretty GUI\" that covers:\n - Configuring email accounts, including mailing lists, Spam Assassin, and \nsuch;\n - Configuring subdomains;\n - Managing files/directories, doing backups;\n - Apache configuration;\n - Cron jobs;\n - A couple of \"shopping cart\" systems;\n - A \"chat room system;\"\n - Last, but certainly not least, the ability to manage MySQL databases.\n\nThere is no \"canned\" equivalent for PostgreSQL, which means that ISPs that \ndon't have people with DBMS expertise will be inclined to prefer MySQL. It's \na better choice for them.\n\n> MySQL's documentation is very, very good. My experience with it is\n> that it's possible, and relatively easy, to find information about\n> almost anything you might need to know.\n> \n> PostgreSQL's documentation is good, but not quite as good as MySQL's.\n> It's not quite as complete. For instance, I didn't find any\n> documentation at all in the User's Guide or Administrator's Guide on\n> creating tables (if I missed it, then that might illustrate that the\n> documentation needs to be organized slightly differently). 
I did find\n> a little in the tutorial (about the amount that you'd want in a\n> tutorial), but to find out more I had to go to the SQL statement\n> reference (in my case I was looking for the means by which one could\n> create a constraint on a column during table creation time).\n> \n> The reason this is important is that the documentation is *the* way\n> people are going to learn the database. If it's too sparse or too\n> disorganized, people who don't have a lot of time to spend searching\n> through the documentation for something may well decide that a\n> different product (such as MySQL) would suit their needs better.\n> \n> The documentation for PostgreSQL improves all the time, largely in\n> response to comments such as this one, and that's a very good thing.\n> My purpose in bringing this up is to show you what PostgreSQL is up\n> against in terms of widespread adoption.\n\nThat's probably pretty fair. I'm using the word \"fair\" advisedly, too.\n\nIf someone objects, saying that PostgreSQL docs /are/ good, keep in mind that \nnew users are not mandated to be \"fair\" about this. If they have trouble \nfinding what they were looking for, they couldn't care less that you think the \ndocs are pretty good: /they/ didn't find what /they/ were looking for, and \nthat's all they care about.\n\n> > If we want to \"sell\" PostgreSQL, we should talk about, maybe, Oracle.\n> > I have never took care of MySQL said. I just know that I'm running\n> > PostgreSQL since 2,5 years and I only stopped it \"JUST\" before upgrades\n> > of PostgreSQL. It's just *working*; which is unfamiliar to MySQL\n> > users. \n> \n> The experience people have with MySQL varies a lot, and much of it has\n> to do with the load people put on it. If MySQL were consistently bad\n> and unreliable it would have a much smaller following (since it's not\n> in a monopoly position the way Microsoft is).\n> \n> But you're mistaken if you believe that MySQL isn't competition for\n> PostgreSQL. It is, because it serves the same purpose: a means of\n> storing information in an easily retrievable way.\n\nIndeed. People with modest data storage requirements that came in with /no/ \ncomprehension of what a \"relational\" database is may find the limited \nfunctionality of MySQL perfectly reasonable for their purposes.\n\nAnd I'll pull in a quote I saw on comp.databases this week that I think is \nquite fabulous:\n\n-------------------------------------------------------------\n\n>>if you mean by \"ideal\" that it runs on Unix and crashes all the time\n>>and needs a bazillion DBA's to keep them running and you want to\n>>constantly recover your database and your data files, then you can\n>>have ideal.\n\n> A little background on my original comment might be in order. I\n> don't tend to use the term \"ideal\" myself, much. I was referring to\n> a comment made fairly frequently in this forum, to the effect that\n> \"A commercial Relational Databse system has never been built.\" These\n> people exclude Oracle, SQL Server, DB2, Informix, Interbase, yada\n> yada, because all of them fail, in one way or another to live up to\n> the \"ideal\" of a truly relational system. I have a hard time with such\n> terminological rigidity, myself. One can say that all those products\n> aren't perfect relational products, but one shouldn't, in my view, say\n> that they \"aren't even relational\".\n\nWhy do you think that they are \"relational\" ? Do they operate on relations ? I \ndon't think so. 
If their primary business is not to operate on relations but \non bags of rows, calling them relational is misleading.\n\nJust like ODBMS are often database construction kits or persistence libraries, \nSQL DBMSes are a real DBMS (they do provide transactions, recovery, \nconcurrency control, some data integrity) + a *relational construction kit*. \nMeaning that by a skillful use of SQL one can come somewhere close to a \nrelational database.\n\nBut the complexity is left on the user to shoulder, and it is very difficult \nto stretch SQL so that you are still in the realm of relational model. And \nguess what: most users don't and most users suffer as a consequence.\n\nIt's even worse than that : very often product documentation and books \nsponsored by the vendors (Oracle press: anyone there ?) simply lie to the \nusers by defining relational model in the most ridiculous terms. Actually they \nscrewed up their products, they built a multi-billion dollars industry by \ntaking agressive shortcuts on the implementation side and transfering the \ncomplexity to the user and now they try to lie and cheat by proclaiming their \nversion of \"relational\" (not long ago the auto industry maintained seat belts \nand airbags were unnecessarily expensive and not needed).\n\nBest regards,\nCostin Cozianu\n-------------------------------------------------------------\n\nThe interesting argument that Costin makes is that SQL databases are /not/ \n\"relational databases,\" but rather that they are tools that can be used to \nconstruct relational database systems.\n\nPostgreSQL has enough decent constructs, what with mature implementations of \nforeign keys, views, and constraints that it is fairly easy to build \nrelational systems using PostgreSQL. In contrast, the paucity of supportive \nconstructs in MySQL means that neither the database nor the resulting \napplications are likely to be terribly \"relational\" in the senses intended by \nCodd and Date.\n\n> Selling potential MySQL users on PostgreSQL should be easier than\n> doing the same for Oracle users because potential MySQL users have at\n> least already decided that a free database is worthy of consideration.\n> As their needs grow beyond what MySQL offers, they'll look for a more\n> capable database engine. It's a target market that we'd be idiots to\n> ignore, and we do so at our peril (the more people out there using\n> MySQL, the fewer there are using PostgreSQL).\n\nThe unfortunate part is that those that outgrow MySQL are likely to have /two/ \nmisconceptions:\n\n1. That the only /real/ reliability improvement will come in moving to \nsomething like Oracle;\n\n2. That PostgreSQL will be a huge step backwards into performance problems \nbecause it is \"so much slower.\"\n\nThat these are misconceptions does not prevent people from believing them. \n(The third deceptive misconception I see is that MySQL is somehow \"more \nstandard\" than some of its competitors.)\n\n> > I'm a Linux user. I'm happy that PostgreSQL does not have win32 version.\n> > If someone wants to use a real database server, then they should install\n> > Linux (or *bsd,etc). This is what Oracle offers,too. Native Windows\n> > support will cause some problems; such as some dummy windows users will\n> > begin using it. I do not believe that PostgreSQL needs native windowz\n> > support. \n> \n> I hate to break it to you (assuming that I didn't misunderstand what\n> you said), but Oracle offers a native Windows port of their database\n> engine, and has done so for some time. 
It's *stupid* to ignore the\n> native Windows market. There are a lot of people who need a database\n> engine to store their data and who would benefit from a native Windows\n> implementation of PostgreSQL, but aren't interested in the additional\n> burden of setting up a Linux server because they lack the money, time,\n> or expertise.\n\nI think it would be a Bad Thing if making PostgreSQL support Windows better \nwere to compromise how well it works on Unix, but I haven't seen evidence of \nanyone actually proposing patches that would have that result.\n\n> > So, hackers (I'm not a hacker) should decide whether PostgreSQL should\n> > be used widely in real database apps, or it should be used even by dummy\n> > users?\n> \n> What makes you think we can't meet the needs of both groups? The\n> capabilities of PostgreSQL are (with very few exceptions) a superset\n> of MySQL's, which means that wherever someone deploys a MySQL server,\n> they could probably have deployed a PostgreSQL server in its place.\n> It should be an easy sell: they get a database engine that is\n> significantly more capable than MySQL for the same low price!\n\nYou can't sell into the \"ISP appliance market\" until there's something as \nubiquitous as \"PHPMyAdmin\" for PostgreSQL. And note that the \"ISP appliance \nmarket\" only cares about this in a very indirect way. They don't actually use \nthe database; their /customers/ do. And their customers are likely to be \nfairly unsophisticated souls who will use whatever database is given to them.\n\n> Selling to the Oracle market is going to be harder. The capabilities\n> of Oracle are a superset of those of PostgreSQL. Shops which plan to\n> deploy a database server and who need the capabilities of PostgreSQL\n> at a minimum are going to look at Oracle for the same reason that\n> shops which at a minimum need the capabilities of MySQL would be smart\n> to look at PostgreSQL: their needs may grow over time and changing the\n> database mid-project is difficult and time-consuming. The difference\n> is that the prices of MySQL and PostgreSQL are the same, while the\n> prices of PostgreSQL and Oracle are vastly different.\n\nThere are Oracle markets /not/ worth going after, at this point. You /don't/ \ngo after the \"ERP\" markets or the data center markets where license budgets \nare in millions of dollars, and where it's going to be tough to take \nPostgreSQL seriously when Oracle is entirely prepared to send in a group of 10 \ntechnical marketing people to swamp the customer with marketing information.\n\nWhat /is/ worth going after is the \"small server\" market, for departmental \napplications. It's not \"big bucks;\" in the Oracle realm, it might lead to a \nlicensing fee of $20K. For $20K, they aren't going to send in a swarm of \nmarketers to fight for the account.\n\n> That's not to say that going after the Oracle market shouldn't be done\n> (quite the opposite, provided it's done honestly), only that *not*\n> going after the MySQL market is folly.\n\nIndeed.\n\nIt is almost a \"necessary defense\" to counter the deceptive claims that are \nmade. 
If nobody says anything, people may actually /believe/ that PostgreSQL \nis vastly slower.\n--\n(reverse (concatenate 'string \"gro.gultn@\" \"enworbbc\"))\nhttp://cbbrowne.com/info/nonrdbms.html\n\"Power tends to corrupt and absolute power corrupts absolutely.\" \n-- First Baron Acton, 1834 - 1902\n\n\n", "msg_date": "Sun, 15 Dec 2002 16:40:33 -0500", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group " }, { "msg_contents": "> You can't sell into the \"ISP appliance market\" until there's something as\n> ubiquitous as \"PHPMyAdmin\" for PostgreSQL. And note that the \"ISP\nappliance\n> market\" only cares about this in a very indirect way. They don't actually\nuse\n> the database; their /customers/ do. And their customers are likely to be\n> fairly unsophisticated souls who will use whatever database is given to\nthem.\n\nHey! What about phpPgAdmin?\n\nWe're actually working on a next generation version atm which is a total\nrewrite with:\n\n1. modern php\n2. register_globals off, full error checking\n3. themeable\n4. Easily supports all versions\n5. etc.\n\nHowever, even with repeated calls for developers, it's just me and Rob\nTreat!\n\nphpPgAdmin does not work with 7.3, so this is an increasingly important\nproject.\n\nAnyone wanna help? :)\n\nhttp://phppgadmin.sourceforge.net/\n\nMaybe we should move to gborg?\n\nChris\n\n\n", "msg_date": "Mon, 16 Dec 2002 10:23:27 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PostgreSQL Global Development Group " }, { "msg_contents": "[email protected] wrote:\n> Kevin Brown wrote:\n> > Simply saying \"MySQL has better marketing\" isn't enough.  It's too\n> > simple an answer and obscures some issues that should probably be\n> > addressed.\n> \n> I think it /is/ a significant factor, the point being that the MySQL company \n> has been quite activist in pressing MySQL as \"the answer,\" to the point to \n> which there's a development strategy called \"LAMP\" (Linux + Apache + MySQL + \n> (Perl|Python|PHP)).\n\nOh, I'll certainly not dispute that marketing has had a significant\neffect, but I don't think it's the only reason for MySQL's success.\n\nHistory has a lot to do with it, because it's through history that\nmomentum gets built up, as it has with MySQL.\n\n> > People use MySQL because it's very easy to set up, relatively easy to\n> > maintain (when something doesn't go wrong, that is), is very well\n> > documented and supported, and is initially adequate for the task they\n> > have in mind (that the task may change significantly such that MySQL\n> > is no longer adequate is something only those with experience will\n> > consider).\n> \n> ... And the consistent marketing pressure that in essence claims:\n> \n>  - It's easier to use than any alternative;\n>  - It's much faster than any other DBMS;\n>  - It's plenty powerful and robust enough.\n> \n> As near as I can tell, /none/ of these things are true outside of very \n> carefully selected application domains.  But the claims have been presented \n> enough times that people actually believe them to be true.\n\nI agree with you -- now.  But the situation as it is now has not\nalways been.  Consider where PostgreSQL was 4 years ago.  I believe it\nwas at version 6 at that time, if I remember correctly.  And as I\nrecall, many people had very significant issues with it in the key\nareas of performance and reliability. 
Now, I didn't experience these\nthings firsthand because I wasn't using it at the time, but it is the\ngeneral impression I got when reading the accounts of people who\n*were* using it.\n\nMySQL at the time wasn't necessarily any more reliable, but it had one\nthing going for it that PostgreSQL didn't: myisamchk. Even if the\ndatabase crashed, you stood a very good chance of being able to\nrecover your data without having to restore from backups. PostgreSQL\ndidn't have this at all: either you had to be a guru with the\nPostgreSQL database format or you had to restore from backups. That\nmeant that *in practice* MySQL was easier to maintain, even it crashed\nmore often as PostgreSQL, because the amount of administrative effort\nto deal with a MySQL crash was so much less.\n\n> > PostgreSQL has come a long way and, with the exception of a few minor\n> > things (the need to VACUUM, for instance. The current version makes\n> > the VACUUM requirement almost a non-issue as regards performance and\n> > availability, but it really should be something that the database\n> > takes care of itself), is equivalent to MySQL in the above things\n> > except for documentation and support.\n> \n> I would point to a third thing: Tools to support \"hands-off\n> administration.\" My web hosting provider has a set of tools to let\n> me administer various aspects of my site complete with \"pretty GUI\"\n> that covers:\n>\n> - Configuring email accounts, including mailing lists, Spam\n> Assassin, and such;\n> - Configuring subdomains;\n> - Managing files/directories, doing backups;\n> - Apache configuration;\n> - Cron jobs;\n> - A couple of \"shopping cart\" systems;\n> - A \"chat room system;\"\n> - Last, but certainly not least, the ability to manage MySQL\n> databases.\n> \n> There is no \"canned\" equivalent for PostgreSQL, which means that\n> ISPs that don't have people with DBMS expertise will be inclined to\n> prefer MySQL. It's a better choice for them.\n\nThis is true, but the only way to combat that is to get PostgreSQL\nmore widely deployed. Network effects such as that are common in the\ncomputing world, so it doesn't come as much surprise that the most\npopular database engine in the webhosting world is the best supported\none for that role.\n\nIt's only because of the relative popularity of MySQL that it has so\nmuch support. The only way to grow PostgreSQL's popularity is to get\nit deployed in situations where the tools available for it are\nsufficient.\n\n> > But you're mistaken if you believe that MySQL isn't competition for\n> > PostgreSQL. It is, because it serves the same purpose: a means of\n> > storing information in an easily retrievable way.\n> \n> Indeed. People with modest data storage requirements that came in\n> with /no/ comprehension of what a \"relational\" database is may find\n> the limited functionality of MySQL perfectly reasonable for their\n> purposes.\n\nThis is true, but the biggest problem is that the requirements of a\nproject often balloon over time, and the demands on the database\nbackend will also tend to increase. Because MySQL is rather limited\nin its functionality, it doesn't take much until you'll be forced to\nuse a different database backend.\n\nThis is why I view PostgreSQL as a much wiser choice in almost all\ncases where you need a database engine. 
Your needs will have to be\nquite considerable before PostgreSQL's capabilities are no longer\nenough.\n\n> PostgreSQL has enough decent constructs, what with mature\n> implementations of foreign keys, views, and constraints that it is\n> fairly easy to build relational systems using PostgreSQL. In\n> contrast, the paucity of supportive constructs in MySQL means that\n> neither the database nor the resulting applications are likely to be\n> terribly \"relational\" in the senses intended by Codd and Date.\n\nThis is true, but what everyone fails to ask is whether or not any\nparticular customer really *cares* about that. The customer isn't\ninterested in whether or not an application is \"relational\", they care\nwhether or not the application does the job it's supposed to. How\n\"relational\" it is is an implementation detail to them.\n\nThe reason that PostgreSQL wins over MySQL is not so much that it's\neasier to build relational systems with it, but that it's easier to\nbuild *reliable* systems with it. That building the system in a\nrelational way is one way to achieve that is, again, an implementation\ndetail.\n\n> > Selling potential MySQL users on PostgreSQL should be easier than\n> > doing the same for Oracle users because potential MySQL users have at\n> > least already decided that a free database is worthy of consideration.\n> > As their needs grow beyond what MySQL offers, they'll look for a more\n> > capable database engine. It's a target market that we'd be idiots to\n> > ignore, and we do so at our peril (the more people out there using\n> > MySQL, the fewer there are using PostgreSQL).\n> \n> The unfortunate part is that those that outgrow MySQL are likely to\n> have /two/ misconceptions:\n> \n> 1. That the only /real/ reliability improvement will come in moving to \n> something like Oracle;\n>\n> 2. That PostgreSQL will be a huge step backwards into performance problems \n> because it is \"so much slower.\"\n\nThis is because people lack familiarity with PostgreSQL. That's where\nmarketing PostgreSQL well comes in.\n\nThe performance misconception is the result of history. At one time\nPostgreSQL *was* much slower than MySQL. People need to be informed\nof the current state of affairs.\n\n> That these are misconceptions does not prevent people from believing them. \n> (The third deceptive misconception I see is that MySQL is somehow \"more \n> standard\" than some of its competitors.)\n\nThe third misconception happens because most people equate \"standard\"\nwith \"popular\". And in the real world, they're not entirely wrong to\ndo so, unfortunately.\n\n> I think it would be a Bad Thing if making PostgreSQL support Windows\n> better were to compromise how well it works on Unix, but I haven't\n> seen evidence of anyone actually proposing patches that would have\n> that result.\n\nI agree, and I also believe that the maintainers would not accept a\npatch that compromised the performance under Unix for the sake of\nsupporting Windows. And rightly so: such a patch would indicate that\nthe people doing the Windows port haven't solved the problem properly.\n\n> You can't sell into the \"ISP appliance market\" until there's\n> something as ubiquitous as \"PHPMyAdmin\" for PostgreSQL.\n\nBut there is: PHPPgAdmin (or whatever it's called these days. I seem\nto remember that they changed the name of it). 
Unfortunately it's not\nas well known, largely because PostgreSQL itself isn't as well known.\n\n> And note that the \"ISP appliance market\" only cares about this in a\n> very indirect way. They don't actually use the database; their\n> /customers/ do. And their customers are likely to be fairly\n> unsophisticated souls who will use whatever database is given to\n> them.\n\nAnd if that's *really* true, then providers will do just as well to\nprovide PostgreSQL as they would MySQL (since their customers will\njust use whatever database they're given). So it's really a question\nof selling the providers on it, which (as you mentioned earlier) is in\npart a matter of giving them the tools they need to make managing a\nPostgreSQL installation easy.\n\n> There are Oracle markets /not/ worth going after, at this point.\n> You /don't/ go after the \"ERP\" markets or the data center markets\n> where license budgets are in millions of dollars, and where it's\n> going to be tough to take PostgreSQL seriously when Oracle is\n> entirely prepared to send in a group of 10 technical marketing\n> people to swamp the customer with marketing information.\n\nThis is why marketing PostgreSQL *honestly* is so important. If it\nwon't do the ERP job well, then it behooves those who are promoting it\nto realize that and restrain themselves appropriately.\n\n> What /is/ worth going after is the \"small server\" market, for\n> departmental applications. It's not \"big bucks;\" in the Oracle\n> realm, it might lead to a licensing fee of $20K. For $20K, they\n> aren't going to send in a swarm of marketers to fight for the\n> account.\n\nAnd this is exactly one of the markets that MySQL is currently\ntargeting. Of course, MS-SQL is *also* targeting this market, with a\nreasonable amount of success. PostgreSQL is a *perfect* fit for this\nkind of operation, and it's one of the reasons that it really *is*\nimportant to have a native Windows port.\n\n> > That's not to say that going after the Oracle market shouldn't be done\n> > (quite the opposite, provided it's done honestly), only that *not*\n> > going after the MySQL market is folly.\n> \n> Indeed.\n> \n> It is almost a \"necessary defense\" to counter the deceptive claims\n> that are made. If nobody says anything, people may actually\n> /believe/ that PostgreSQL is vastly slower.\n\nThe way you counter such deceptive claims is to provide proof that\nthose claims are wrong. Point them at the head-to-head comparison on\nthe PHPBuilder site. Prove to them that PostgreSQL is in the same\nleague (if not better) as MySQL in the performance arena. And for\ndeity's sake, show them how much *less* work they'd have to do under\nPostgreSQL because of its referential integrity features. 
I really\nthink most people would be willing to sacrifice a small bit of speed\nif it meant doing a whole lot less work.\n\n\nCopied to the advocacy group because of the relevance.\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Sun, 15 Dec 2002 23:15:49 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [GENERAL] PostgreSQL Global Development Group" }, { "msg_contents": "Something I have done at little cost was to submit a request for a few \nbooks on\nPostgreSQL to my local library and I check them out once in while and \nsee that\nothers are also checking them out.....\n\nI agree with poor level of documentation....the skeleton is \nthere....perhpas we could\nhave some volunteers write up some parts (or add more....)...\n\nKevin Brown wrote:\n\n>Devrim G?ND?Z wrote:\n> \n>\n>>I do NOT like hearing about MySQL in this (these) list(s).\n>>\n>>PostgreSQL is not in the same category with MySQL. MySQL is for\n>>*dummies*, not database admins. I do not even call it a database. I\n>>have never forgotten my data loss 2,5 years ago; when I used MySQL for\n>>just 2 months!!! \n>> \n>>\n>\n>I think you're on to something here, but it's obscured by the way you\n>said it.\n>\n>There's no question in my mind that PostgreSQL is superior in almost\n>every way to MySQL. For those of us who are technically minded, it\n>boggles the mind that people would choose MySQL over PostgreSQL. Yet\n>they do. And it's important to understand why.\n>\n>Simply saying \"MySQL has better marketing\" isn't enough. It's too\n>simple an answer and obscures some issues that should probably be\n>addressed.\n>\n>People use MySQL because it's very easy to set up, relatively easy to\n>maintain (when something doesn't go wrong, that is), is very well\n>documented and supported, and is initially adequate for the task they\n>have in mind (that the task may change significantly such that MySQL\n>is no longer adequate is something only those with experience will\n>consider).\n>\n>PostgreSQL has come a long way and, with the exception of a few minor\n>things (the need to VACUUM, for instance. The current version makes\n>the VACUUM requirement almost a non-issue as regards performance and\n>availability, but it really should be something that the database\n>takes care of itself), is equivalent to MySQL in the above things\n>except for documentation and support.\n>\n>MySQL's documentation is very, very good. My experience with it is\n>that it's possible, and relatively easy, to find information about\n>almost anything you might need to know.\n>\n>PostgreSQL's documentation is good, but not quite as good as MySQL's.\n>It's not quite as complete. For instance, I didn't find any\n>documentation at all in the User's Guide or Administrator's Guide on\n>creating tables (if I missed it, then that might illustrate that the\n>documentation needs to be organized slightly differently). I did find\n>a little in the tutorial (about the amount that you'd want in a\n>tutorial), but to find out more I had to go to the SQL statement\n>reference (in my case I was looking for the means by which one could\n>create a constraint on a column during table creation time).\n>\n>The reason this is important is that the documentation is *the* way\n>people are going to learn the database. 
If it's too sparse or too\n>disorganized, people who don't have a lot of time to spend searching\n>through the documentation for something may well decide that a\n>different product (such as MySQL) would suit their needs better.\n>\n>The documentation for PostgreSQL improves all the time, largely in\n>response to comments such as this one, and that's a very good thing.\n>My purpose in bringing this up is to show you what PostgreSQL is up\n>against in terms of widespread adoption.\n>\n> \n>\n>>If we want to \"sell\" PostgreSQL, we should talk about, maybe, Oracle.\n>>I have never took care of MySQL said. I just know that I'm running\n>>PostgreSQL since 2,5 years and I only stopped it \"JUST\" before upgrades\n>>of PostgreSQL. It's just *working*; which is unfamiliar to MySQL\n>>users. \n>> \n>>\n>\n>The experience people have with MySQL varies a lot, and much of it has\n>to do with the load people put on it. If MySQL were consistently bad\n>and unreliable it would have a much smaller following (since it's not\n>in a monopoly position the way Microsoft is).\n>\n>But you're mistaken if you believe that MySQL isn't competition for\n>PostgreSQL. It is, because it serves the same purpose: a means of\n>storing information in an easily retrievable way.\n>\n>Selling potential MySQL users on PostgreSQL should be easier than\n>doing the same for Oracle users because potential MySQL users have at\n>least already decided that a free database is worthy of consideration.\n>As their needs grow beyond what MySQL offers, they'll look for a more\n>capable database engine. It's a target market that we'd be idiots to\n>ignore, and we do so at our peril (the more people out there using\n>MySQL, the fewer there are using PostgreSQL).\n>\n> \n>\n>>I'm a Linux user. I'm happy that PostgreSQL does not have win32 version.\n>>If someone wants to use a real database server, then they should install\n>>Linux (or *bsd,etc). This is what Oracle offers,too. Native Windows\n>>support will cause some problems; such as some dummy windows users will\n>>begin using it. I do not believe that PostgreSQL needs native windowz\n>>support. \n>> \n>>\n>\n>I hate to break it to you (assuming that I didn't misunderstand what\n>you said), but Oracle offers a native Windows port of their database\n>engine, and has done so for some time. It's *stupid* to ignore the\n>native Windows market. There are a lot of people who need a database\n>engine to store their data and who would benefit from a native Windows\n>implementation of PostgreSQL, but aren't interested in the additional\n>burden of setting up a Linux server because they lack the money, time,\n>or expertise.\n>\n> \n>\n>>So, hackers (I'm not a hacker) should decide whether PostgreSQL should\n>>be used widely in real database apps, or it should be used even by dummy\n>>users?\n>> \n>>\n>\n>What makes you think we can't meet the needs of both groups? The\n>capabilities of PostgreSQL are (with very few exceptions) a superset\n>of MySQL's, which means that wherever someone deploys a MySQL server,\n>they could probably have deployed a PostgreSQL server in its place.\n>It should be an easy sell: they get a database engine that is\n>significantly more capable than MySQL for the same low price!\n>\n>Selling to the Oracle market is going to be harder. The capabilities\n>of Oracle are a superset of those of PostgreSQL. 
Shops which plan to\n>deploy a database server and who need the capabilities of PostgreSQL\n>at a minimum are going to look at Oracle for the same reason that\n>shops which at a minimum need the capabilities of MySQL would be smart\n>to look at PostgreSQL: their needs may grow over time and changing the\n>database mid-project is difficult and time-consuming. The difference\n>is that the prices of MySQL and PostgreSQL are the same, while the\n>prices of PostgreSQL and Oracle are vastly different.\n>\n>That's not to say that going after the Oracle market shouldn't be done\n>(quite the opposite, provided it's done honestly), only that *not*\n>going after the MySQL market is folly.\n>\n>\n> \n>\n\n\n\n", "msg_date": "Mon, 16 Dec 2002 13:15:46 -0800", "msg_from": "Medi Montaseri <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL Global Development Group" }, { "msg_contents": "\nLast night, we packaged up v7.3.1 of PostgreSQL, our latest stable\nrelease.\n\nPurely meant to be a bug fix release, this one does have one major change,\nin that the major number of the libpq library was increased, which means\nthat everyone is encouraged to recompile their clients along with this\nupgrade.\n\nThis release can be found on all the mirrors, and on the root ftp server,\nunder:\n\n\t\tftp://ftp.postgresql.org/pub/source/v7.3.1\n\nPlease report all bugs to [email protected].\n\n\n\n", "msg_date": "Sun, 22 Dec 2002 15:12:05 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "v7.3.1 Bundled and Released ..." }, { "msg_contents": "On Sun, 2002-12-22 at 13:12, Marc G. Fournier wrote:\n> Last night, we packaged up v7.3.1 of PostgreSQL, our latest stable\n> release.\n> \n> Purely meant to be a bug fix release, this one does have one major change,\n> in that the major number of the libpq library was increased, which means\n> that everyone is encouraged to recompile their clients along with this\n> upgrade.\n> \n> This release can be found on all the mirrors, and on the root ftp server,\n> under:\n> \n> \t\tftp://ftp.postgresql.org/pub/source/v7.3.1\n> \n> Please report all bugs to [email protected].\n> \n\n\nHmm. For some reason I'm not seeing a 7.3.1 tag in CVS. Do you guys do\nsomething else for sub-releases? Case in point:\ncvs [server aborted]: no such tag REL7_3_1_STABLE\nIt's still early here so I may be suffering from early morning brain\nrot. ;)\n\n\nRegards,\n\n\tGreg\n\n\n\n-- \nGreg Copeland <[email protected]>\nCopeland Computer Consulting\n\n", "msg_date": "23 Dec 2002 07:07:07 -0600", "msg_from": "Greg Copeland <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] v7.3.1 Bundled and Released ..." }, { "msg_contents": "Greg Copeland wrote:\n> On Sun, 2002-12-22 at 13:12, Marc G. Fournier wrote:\n> > Last night, we packaged up v7.3.1 of PostgreSQL, our latest stable\n> > release.\n> > \n> > Purely meant to be a bug fix release, this one does have one major change,\n> > in that the major number of the libpq library was increased, which means\n> > that everyone is encouraged to recompile their clients along with this\n> > upgrade.\n> > \n> > This release can be found on all the mirrors, and on the root ftp server,\n> > under:\n> > \n> > \t\tftp://ftp.postgresql.org/pub/source/v7.3.1\n> > \n> > Please report all bugs to [email protected].\n> > \n> \n> \n> Hmm. For some reason I'm not seeing a 7.3.1 tag in CVS. Do you guys do\n> something else for sub-releases? 
Case in point:\n> cvs [server aborted]: no such tag REL7_3_1_STABLE\n> It's still early here so I may be suffering from early morning brain\n> rot. ;)\n\nThere should be a 7.3.1 tag, but you can use the 7_3 branch to pull\n7.3.1. Of course, that will shift as we patch for 7.3.2.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 23 Dec 2002 10:57:52 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] v7.3.1 Bundled and Released ..." }, { "msg_contents": "On Sunday 22 December 2002 14:12, Marc G. Fournier wrote:\n> Last night, we packaged up v7.3.1 of PostgreSQL, our latest stable\n> release.\n\nRPMs are available for RedHat8 + tcl8.4.1 at \nftp://ftp.postgresql.org/pub/binary/v7.3.1/RPMS (once mirrors propagate it \nwill be at the mirrors)\n\nYes, Red Hat 8.0 _+_ tcl8.4.1. If you wish to use the pl subpackage without \npl/tcl, you need to add --nodeps to the rpm command line when installing the \npl subpackage. The postgresql-tcl subpackage was built against tcl 8.4.1 -- \nRed Hat 8 ships with an old tcl (8.3.3) which is not even installed by \ndefault. If this is too much of a problem for people, please let me know, \nand I'l rebuild for Red Hat 8 without any tcl support, or with the older tcl \nsupport. Otherwise obtain tcl 8.4.1. I have to have it installed here for \nwork purposes. This is the only exception to the 'pristine Red Hat install' \nI have made in a very long time. Rebuilding from source RPM will require an \ninstalled tcl of some version.\n\nSource RPMS in SRPMS, Red Hat 8 binaries in redhat-8.0\n\nIf installing the contrib subpackage, you must use --nodeps due to a broken \ndependency upon the now removed postgresql-perl subpackage. There will be \nsoon a postgresql-perl RPM -- I have to get the new gborg code and build it, \nI guess.\n\nChangelog since 7.3-2:\n* Mon Dec 23 2002 Lamar Owen <[email protected]>\n- 7.3.1-1PGDG\n- Fix dependency order for test and pl subpackages.\n- Fixed a bug in the initscript for echo_success\n- Fixed the bug in .bashprofile (argh).\n\n(quick note: [email protected] does work. I typically post from \[email protected]; both addresses come to me....)\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n\n", "msg_date": "Mon, 23 Dec 2002 14:12:16 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] v7.3.1 Bundled and Released ..." }, { "msg_contents": "Just a reminder, there still doesn't appear to be a 7.3.1 tag.\n\nThis is from the \"HISTORY\" file.\n\nsymbolic names:\n REL7_3_STABLE: 1.182.0.2\n REL7_2_3: 1.153.2.8\n REL7_2_STABLE: 1.153.0.2\n REL7_2: 1.153\n\n\nNotice 7.3 stable but nothing about 7.3.x! I also see a 7.2.3, etc.,\njust as one would expect but nothing about 7.3 dot releases.\n\nI'm still getting, \"cvs [server aborted]: no such tag REL7_3_1_STABLE\". \nSomething overlooked here?\n\n\nRegards,\n\n\tGreg Copeland\n\n\nOn Mon, 2002-12-23 at 09:57, Bruce Momjian wrote:\n> Greg Copeland wrote:\n> > On Sun, 2002-12-22 at 13:12, Marc G. 
Fournier wrote:\n> > > Last night, we packaged up v7.3.1 of PostgreSQL, our latest stable\n> > > release.\n> > > \n> > > Purely meant to be a bug fix release, this one does have one major change,\n> > > in that the major number of the libpq library was increased, which means\n> > > that everyone is encouraged to recompile their clients along with this\n> > > upgrade.\n> > > \n> > > This release can be found on all the mirrors, and on the root ftp server,\n> > > under:\n> > > \n> > > \t\tftp://ftp.postgresql.org/pub/source/v7.3.1\n> > > \n> > > Please report all bugs to [email protected].\n> > > \n> > \n> > \n> > Hmm. For some reason I'm not seeing a 7.3.1 tag in CVS. Do you guys do\n> > something else for sub-releases? Case in point:\n> > cvs [server aborted]: no such tag REL7_3_1_STABLE\n> > It's still early here so I may be suffering from early morning brain\n> > rot. ;)\n> \n> There should be a 7.3.1 tag, but you can use the 7_3 branch to pull\n> 7.3.1. Of course, that will shift as we patch for 7.3.2.\n-- \nGreg Copeland <[email protected]>\nCopeland Computer Consulting\n\n", "msg_date": "29 Dec 2002 21:51:46 -0600", "msg_from": "Greg Copeland <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] v7.3.1 Bundled and Released ..." }, { "msg_contents": "Greg Copeland writes:\n\n> Just a reminder, there still doesn't appear to be a 7.3.1 tag.\n\nThere is a long tradition of systematically failing to tag releases in\nthis project. Don't expect it to improve.\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Sat, 4 Jan 2003 11:27:53 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] v7.3.1 Bundled and Released ..." }, { "msg_contents": "On Sat, 4 Jan 2003, Peter Eisentraut wrote:\n\n> Greg Copeland writes:\n>\n> > Just a reminder, there still doesn't appear to be a 7.3.1 tag.\n>\n> There is a long tradition of systematically failing to tag releases in\n> this project. Don't expect it to improve.\n\nIt was I who suggested that a release team would be a good idea. I think\nthat was soundly rejected. I still think it's a good idea. If only to\nensure that things are properly tagged, the right annoucements go out at\nthe right times, that a code freeze goes into effect, etc. These concepts\nare not new. A release is an important step in the life cycle.\n\nI volunteered to document the release procedure as it resides only within\nlore and a couple of heads. I have yet to start.\n\n", "msg_date": "Sat, 4 Jan 2003 06:44:18 -0500 (EST)", "msg_from": "Dan Langille <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] v7.3.1 Bundled and Released ..." }, { "msg_contents": "Dan Langille <[email protected]> writes:\n> On Sat, 4 Jan 2003, Peter Eisentraut wrote:\n>> There is a long tradition of systematically failing to tag releases in\n>> this project. Don't expect it to improve.\n\n> It was I who suggested that a release team would be a good idea.\n\nWe *have* a release team. Your problem is that Marc, who is the man who\nwould need to do this, doesn't appear to consider it an important thing\nto do. Try to convince him to put it on his checklist.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 04 Jan 2003 11:08:05 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] v7.3.1 Bundled and Released ... 
" }, { "msg_contents": "On Sat, 2003-01-04 at 04:27, Peter Eisentraut wrote:\n> Greg Copeland writes:\n> \n> > Just a reminder, there still doesn't appear to be a 7.3.1 tag.\n> \n> There is a long tradition of systematically failing to tag releases in\n> this project. Don't expect it to improve.\n\nWell, I thought I remembered from the \"release team\" thread that it was\nsaid there was a \"punch list\" of things that are done prior to actually\nreleasing. If not, it certainly seems like we need one. If there is\none, tagging absolutely needs to be on it. If we have one and this is\nalready on the list, seems we need to be eating our own food. ;)\n\n\n-- \nGreg Copeland <[email protected]>\nCopeland Computer Consulting\n\n", "msg_date": "04 Jan 2003 12:13:17 -0600", "msg_from": "Greg Copeland <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] v7.3.1 Bundled and Released ..." }, { "msg_contents": "msg resent because I incorrectly copied/pasted some addresses. \nSorry.\n\nOn 4 Jan 2003 at 11:08, Tom Lane wrote:\n\n> Dan Langille <[email protected]> writes:\n> > On Sat, 4 Jan 2003, Peter Eisentraut wrote:\n> >> There is a long tradition of systematically failing to tag releases\n> >> in this project. Don't expect it to improve.\n> \n> > It was I who suggested that a release team would be a good idea.\n> \n> We *have* a release team.\n\nI have a suggestion. Let us document who is the release team and who \nis responsible for each step of the release. Perhaps that is the \nproblem: a lack of process.\n\nI'll add that to my list of things I've promised to do.\n-- \nDan Langille : http://www.langille.org/\n\n", "msg_date": "Sat, 04 Jan 2003 15:56:41 -0500", "msg_from": "\"Dan Langille\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] v7.3.1 Bundled and Released ... " }, { "msg_contents": "msg resent because I incorrectly copied/pasted some addresses. Sorry.\n\nOn 4 Jan 2003 at 11:08, Tom Lane wrote:\n\n> Dan Langille <[email protected]> writes:\n> > On Sat, 4 Jan 2003, Peter Eisentraut wrote:\n> >> There is a long tradition of systematically failing to tag releases\n> >> in this project. Don't expect it to improve.\n> \n> > It was I who suggested that a release team would be a good idea.\n> \n> We *have* a release team. Your problem is that Marc, who is the man\n> who would need to do this, doesn't appear to consider it an important\n> thing to do. Try to convince him to put it on his checklist.\n\nMarc? Is this true? You don't consider it important to tag the \nrelease? I'm quite sure that's not the case and that Marc does \nconsider it important. It's just something which he forgot to do.\n\nA recent post by Greg Copeland implies this item is on his checklist.\n\nIMHO, it is vital that the tree is properly tagged for each release. \nAFAIK, a tag can be laid with with respect to timestamp value. So \nwhy don't we just do it?\n-- \nDan Langille : http://www.langille.org/\n\n", "msg_date": "Sat, 04 Jan 2003 15:56:56 -0500", "msg_from": "\"Dan Langille\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] v7.3.1 Bundled and Released ... " }, { "msg_contents": "On Sat, 4 Jan 2003, Tom Lane wrote:\n\n> Dan Langille <[email protected]> writes:\n> > On Sat, 4 Jan 2003, Peter Eisentraut wrote:\n> >> There is a long tradition of systematically failing to tag releases in\n> >> this project. Don't expect it to improve.\n>\n> > It was I who suggested that a release team would be a good idea.\n>\n> We *have* a release team. 
Your problem is that Marc, who is the man who\n> would need to do this, doesn't appear to consider it an important thing\n> to do. Try to convince him to put it on his checklist.\n\nI never considered tag'ng for minor releases as having any importance,\nsince the tarball's themselves provide the 'tag' ... branches give us the\nability to back-patch, but tag's don't provide us anything ... do they?\n\nThat said, I can back-tag the whole source tree for past releases if ppl\ndo think it is important, its just a matter of knowing the 'timestamp' to\nbase it on, which I can do based on the dates of the tar files ...\n\nIts not like tag'ng is hard to do ...\n", "msg_date": "Sat, 4 Jan 2003 21:04:32 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] v7.3.1 Bundled and Released ... " }, { "msg_contents": "\n\n--On Saturday, January 04, 2003 21:04:32 -0400 \"Marc G. Fournier\" \n<[email protected]> wrote:\n\n> On Sat, 4 Jan 2003, Tom Lane wrote:\n>\n>> Dan Langille <[email protected]> writes:\n>> > On Sat, 4 Jan 2003, Peter Eisentraut wrote:\n>> >> There is a long tradition of systematically failing to tag releases in\n>> >> this project. Don't expect it to improve.\n>>\n>> > It was I who suggested that a release team would be a good idea.\n>>\n>> We *have* a release team. Your problem is that Marc, who is the man who\n>> would need to do this, doesn't appear to consider it an important thing\n>> to do. Try to convince him to put it on his checklist.\n>\n> I never considered tag'ng for minor releases as having any importance,\n> since the tarball's themselves provide the 'tag' ... branches give us the\n> ability to back-patch, but tag's don't provide us anything ... do they?\n>\n> That said, I can back-tag the whole source tree for past releases if ppl\n> do think it is important, its just a matter of knowing the 'timestamp' to\n> base it on, which I can do based on the dates of the tar files ...\nIt's useful for those using the CVS files to RECREATE a version based on \nthe TAG\nto checkout something (without pulling the whole tarball).\n\nLER\n\n>\n> Its not like tag'ng is hard to do ...\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n\n\n", "msg_date": "Sat, 04 Jan 2003 19:08:20 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] v7.3.1 Bundled and Released ... " }, { "msg_contents": "\"Marc G. Fournier\" <[email protected]> writes:\n> I never considered tag'ng for minor releases as having any importance,\n> since the tarball's themselves provide the 'tag' ... branches give us the\n> ability to back-patch, but tag's don't provide us anything ... do they?\n\nWell, a tag makes it feasible for someone else to recreate the tarball,\ngiven access to the CVS server. Dunno how important that is in the real\nworld --- but I have seen requests before for us to tag release points.\n\nAny other arguments out there?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 04 Jan 2003 20:10:19 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] v7.3.1 Bundled and Released ... 
" }, { "msg_contents": "\nTom Lane wrote:\n\n> Any other arguments out there?\n\nPer-release tags make it easier to see quickly if some code has\nchanged in -current or not. As the CVS tree is available via anoymous\nCVS (I think?), CVSup, and via the web so there are many potential\nusers who are not active developers and who probably run releases\nrather than -current.\n\nRegards,\n\nGiles\n\n\n", "msg_date": "Sun, 05 Jan 2003 14:05:45 +1100", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] v7.3.1 Bundled and Released ... " }, { "msg_contents": "On Sat, 4 Jan 2003, Tom Lane wrote:\n\n> \"Marc G. Fournier\" <[email protected]> writes:\n> > I never considered tag'ng for minor releases as having any importance,\n> > since the tarball's themselves provide the 'tag' ... branches give us the\n> > ability to back-patch, but tag's don't provide us anything ... do they?\n>\n> Well, a tag makes it feasible for someone else to recreate the tarball,\n> given access to the CVS server. Dunno how important that is in the real\n> world --- but I have seen requests before for us to tag release points.\n\nFWIW, in the real world, a release doesn't happen if it's not taqged.\n\n", "msg_date": "Sun, 5 Jan 2003 07:41:37 -0500 (EST)", "msg_from": "Dan Langille <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] v7.3.1 Bundled and Released ... " }, { "msg_contents": "On Sun, 2003-01-05 at 06:41, Dan Langille wrote:\n> On Sat, 4 Jan 2003, Tom Lane wrote:\n> \n> > \"Marc G. Fournier\" <[email protected]> writes:\n> > > I never considered tag'ng for minor releases as having any importance,\n> > > since the tarball's themselves provide the 'tag' ... branches give us the\n> > > ability to back-patch, but tag's don't provide us anything ... do they?\n> >\n> > Well, a tag makes it feasible for someone else to recreate the tarball,\n> > given access to the CVS server. Dunno how important that is in the real\n> > world --- but I have seen requests before for us to tag release points.\n> \n> FWIW, in the real world, a release doesn't happen if it's not taqged.\n\nAgreed! Any tarballs, rpms, etc., should be made from the tagged\nsource. Period. If rpm's are made from a tarball that is made from\ntagged source, that's fine. Nonetheless, any official release (major or\nminor) should always be made from the resulting tagged source. This\ndoes two things. First, it validates that everything has been properly\ntagged. Two, it ensures that there are not any localized files or\nchanges which might become part of a tarball/release which are not\nofficially part of the repository.\n\nI can't stress enough that a release should never happen unless source\nhas been tagged. Releases should ALWAYS be made from a checkout based\non tags.\n\n\n-- \nGreg Copeland <[email protected]>\nCopeland Computer Consulting\n\n", "msg_date": "05 Jan 2003 08:53:25 -0600", "msg_from": "Greg Copeland <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] v7.3.1 Bundled and Released ..." }, { "msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\n> Well, a tag makes it feasible for someone else to recreate the tarball,\n> given access to the CVS server. Dunno how important that is in the real\n> world --- but I have seen requests before for us to tag release points.\n>\n> Any other arguments out there?\n\nFWIW, I use the tags often in some scripts that rely on the output \nof 'cvs status -v'. 
Seeing REL7_3_STABLE at the top of the \n\"Existing Tags\" list is a bit disconcerting when you know that \nit's not true. My scripts assume that the latest release should \nalways be tagged.\n\nGreg Sabino Mullane [email protected]\nPGP Key: 0x14964AC8 200208\n\n-----BEGIN PGP SIGNATURE-----\nComment: http://www.turnstep.com/pgp.html\n\niD8DBQE+GE1NvJuQZxSWSsgRAh/1AKCPEKQeQ3OnKzbeSl5DXstnwwiFPQCfQ2mn\nKplkOouzodJqZvQNN2tk8Fk=\n=OaZY\n-----END PGP SIGNATURE-----\n\n\n\n", "msg_date": "Sun, 5 Jan 2003 15:10:15 -0000", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [GENERAL] v7.3.1 Bundled and Released ... " }, { "msg_contents": "I noticed sync() is used in PostgreSQL.\n\nCHECKPOINT -> FlushBufferPool() -> smgrsync() -> mdsync() -> sync()\n\nCan someone tell me why we need sync() here?\n--\nTatsuo Ishii\n", "msg_date": "Wed, 08 Jan 2003 15:08:49 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "sync()" }, { "msg_contents": "Tatsuo Ishii wrote:\n> I noticed sync() is used in PostgreSQL.\n> \n> CHECKPOINT -> FlushBufferPool() -> smgrsync() -> mdsync() -> sync()\n> \n> Can someone tell me why we need sync() here?\n\nAs part of checkpoint, we discard some WAL files. To do that, we must\nfirst be sure that all the dirty buffers we have written to the kernel\nare actually on the disk. That is why the sync() is required.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 8 Jan 2003 01:18:35 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sync()" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Tatsuo Ishii wrote:\n>> Can someone tell me why we need sync() here?\n\n> As part of checkpoint, we discard some WAL files. To do that, we must\n> first be sure that all the dirty buffers we have written to the kernel\n> are actually on the disk. That is why the sync() is required.\n\nWhat we really need is something better than sync(), viz flush all dirty\nbuffers to disk *and* wait till they're written. But sync() and sleep\nfor awhile is the closest portable approximation.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 08 Jan 2003 01:34:49 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sync() " }, { "msg_contents": "> Tatsuo Ishii wrote:\n> > I noticed sync() is used in PostgreSQL.\n> > \n> > CHECKPOINT -> FlushBufferPool() -> smgrsync() -> mdsync() -> sync()\n> > \n> > Can someone tell me why we need sync() here?\n> \n> As part of checkpoint, we discard some WAL files. To do that, we must\n> first be sure that all the dirty buffers we have written to the kernel\n> are actually on the disk. That is why the sync() is required.\n\n?? I thought WAL files are synced by pg_fsync() (if needed).\n--\nTatsuo Ishii\n", "msg_date": "Wed, 08 Jan 2003 15:36:44 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sync()" }, { "msg_contents": "> > As part of checkpoint, we discard some WAL files. To do that, we must\n> > first be sure that all the dirty buffers we have written to the kernel\n> > are actually on the disk. That is why the sync() is required.\n> \n> What we really need is something better than sync(), viz flush all dirty\n> buffers to disk *and* wait till they're written. 
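To make the checkpoint ordering described above concrete, here is a rough sketch (not the actual PostgreSQL source; FlushDirtyBuffers() and WriteCheckpointRecord() are invented stand-in names) of the sequence being discussed, in which sync() only schedules the writes and so can only be "waited for" by pausing:

    #include <unistd.h>

    extern void FlushDirtyBuffers(void);      /* hypothetical stand-in name */
    extern void WriteCheckpointRecord(void);  /* hypothetical stand-in name */

    static void
    CheckPointSketch(void)
    {
        FlushDirtyBuffers();      /* hand all dirty data/index pages to the kernel */
        sync();                   /* schedule every dirty kernel buffer for I/O */
        sleep(2);                 /* sync() may return before the I/O completes,
                                   * so "sync and sleep for awhile" is the closest
                                   * portable approximation of waiting */
        WriteCheckpointRecord();  /* only after this is it safe to recycle old WAL */
    }
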
But sync() and sleep\n> for awhile is the closest portable approximation.\n\nAre you saying that fsync() might not wait untill the IO completes?\n--\nTatsuo Ishii\n", "msg_date": "Wed, 08 Jan 2003 15:39:17 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sync() " }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> Can someone tell me why we need sync() here?\n\n> ?? I thought WAL files are synced by pg_fsync() (if needed).\n\nThey are. But to write a checkpoint record --- which implies that the\nWAL records before it need no longer be replayed --- we have to ensure\nthat all the changes-so-far in the regular database files are written\ndown to disk. That is what we need sync() for.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 08 Jan 2003 01:42:42 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sync() " }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n>> What we really need is something better than sync(), viz flush all dirty\n>> buffers to disk *and* wait till they're written. But sync() and sleep\n>> for awhile is the closest portable approximation.\n\n> Are you saying that fsync() might not wait untill the IO completes?\n\nNo, I said that sync() might not. Read the man pages. HPUX's man\npage for sync(2) says\n\n sync() causes all information in memory that should be on disk to be\n written out.\n ...\n The writing, although scheduled, is not necessarily complete upon\n return from sync.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 08 Jan 2003 01:46:57 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sync() " }, { "msg_contents": "> > Are you saying that fsync() might not wait untill the IO completes?\n> \n> No, I said that sync() might not. Read the man pages. HPUX's man\n> page for sync(2) says\n> \n> sync() causes all information in memory that should be on disk to be\n> written out.\n> ...\n> The writing, although scheduled, is not necessarily complete upon\n> return from sync.\n\nI'm just wondering why we do not use fsync() to flush data/index\npages.\n--\nTatsuo Ishii\n", "msg_date": "Wed, 08 Jan 2003 15:51:52 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sync() " }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> I'm just wondering why we do not use fsync() to flush data/index\n> pages.\n\nThere isn't any efficient way to do that AFAICS. The process that wants\nto do the checkpoint hasn't got any way to know just which files need to\nbe sync'd. Even if it did know, it's not clear to me that we can\nportably assume that process A issuing an fsync on a file descriptor F\nit's opened for file X will force to disk previous writes issued against\nthe same physical file X by a different process B using a different file\ndescriptor G.\n\nsync() is surely overkill, in that it writes out dirty kernel buffers\nthat might have nothing at all to do with Postgres. But I don't see how\nto do better.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 08 Jan 2003 02:02:46 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sync() " }, { "msg_contents": "> Tatsuo Ishii <[email protected]> writes:\n> > I'm just wondering why we do not use fsync() to flush data/index\n> > pages.\n> \n> There isn't any efficient way to do that AFAICS. 
The process that wants\n> to do the checkpoint hasn't got any way to know just which files need to\n> be sync'd. Even if it did know, it's not clear to me that we can\n> portably assume that process A issuing an fsync on a file descriptor F\n> it's opened for file X will force to disk previous writes issued against\n> the same physical file X by a different process B using a different file\n> descriptor G.\n> \n> sync() is surely overkill, in that it writes out dirty kernel buffers\n> that might have nothing at all to do with Postgres. But I don't see how\n> to do better.\n\nThanks for a good summary. Maybe this is yet another reason to have\na separate IO process like Oracle...\n--\nTatsuo Ishii\n", "msg_date": "Wed, 08 Jan 2003 16:17:09 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sync() " }, { "msg_contents": "Tom Lane wrote:\n> Tatsuo Ishii <[email protected]> writes:\n> >> What we really need is something better than sync(), viz flush all dirty\n> >> buffers to disk *and* wait till they're written. But sync() and sleep\n> >> for awhile is the closest portable approximation.\n> \n> > Are you saying that fsync() might not wait untill the IO completes?\n> \n> No, I said that sync() might not. Read the man pages. HPUX's man\n> page for sync(2) says\n> \n> sync() causes all information in memory that should be on disk to be\n> written out.\n> ...\n> The writing, although scheduled, is not necessarily complete upon\n> return from sync.\n\nYep, BSD/OS says:\n\n\tBUGS\n\t Sync() may return before the buffers are completely flushed.\n\nAt least they classify it as a bug.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 8 Jan 2003 09:51:01 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sync()" }, { "msg_contents": "Tom Lane wrote:\n> Tatsuo Ishii <[email protected]> writes:\n> > I'm just wondering why we do not use fsync() to flush data/index\n> > pages.\n> \n> There isn't any efficient way to do that AFAICS. The process that wants\n> to do the checkpoint hasn't got any way to know just which files need to\n> be sync'd.\n\nSo the backends have to keep a common list of all the files they\ntouch. 
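As a minimal sketch of the fsync()-per-touched-file alternative being floated here (the list structure and function name below are invented for illustration; whether an fsync() issued through a freshly opened descriptor also flushes writes made through other backends' descriptors is exactly the portability question debated elsewhere in this thread):

    #include <fcntl.h>
    #include <unistd.h>

    typedef struct TouchedFile
    {
        const char         *path;   /* file modified since the last checkpoint */
        struct TouchedFile *next;
    } TouchedFile;

    static int
    FsyncTouchedFiles(const TouchedFile *list)
    {
        const TouchedFile *f;

        for (f = list; f != NULL; f = f->next)
        {
            int fd = open(f->path, O_RDWR);

            if (fd < 0)
                return -1;
            if (fsync(fd) != 0)      /* unlike sync(), fsync() waits for completion */
            {
                close(fd);
                return -1;
            }
            close(fd);
        }
        return 0;
    }
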
Admittedly, that could be a problem if it means using a bunch\nof shared memory, and it may have additional performance implications\ndepending on the implementation ...\n\n> Even if it did know, it's not clear to me that we can\n> portably assume that process A issuing an fsync on a file descriptor F\n> it's opened for file X will force to disk previous writes issued against\n> the same physical file X by a different process B using a different file\n> descriptor G.\n\nIf the manpages are to be believed, then under FreeBSD, Linux, and\nHP-UX, calling fsync() will force to disk *all* unwritten buffers\nassociated with the file pointed to by the filedescriptor.\n\nSadly, however, the Solaris and IRIX manpages suggest that only\nbuffers associated with the specific file descriptor itself are\nwritten, not necessarily all buffers associated with the file pointed\nat by the file descriptor (and interestingly, the Solaris version\nappears to be implemented as a library function and not a system call,\nif the manpage's section is any indication).\n\n> sync() is surely overkill, in that it writes out dirty kernel buffers\n> that might have nothing at all to do with Postgres. But I don't see how\n> to do better.\n\nIt's obvious to me that sync() can have some very significant\nperformance implications on a system that is acting as more than just\na database server. So it should probably be used only when there's no\ngood alternative.\n\nSo: this is probably one of those cases where it's important to\ndistinguish between operating systems and use the sync() approach only\nwhen it's uncertain that fsync() will do the job. So FreeBSD (and\nprobably all the other BSD derivatives) definitely should use fsync()\nsince they have known-good implementations. Linux and HP-UX 11 (if\nthe manpage's wording can be trusted. Not sure about earlier\nversions) should use fsync() as well. Solaris and IRIX should use\nsync() since their manpages indicate that only data associated with\nthe filedescriptor will be written to disk.\n\nUnder Linux (and perhaps HP-UX), it may be necessary to fsync() the\ndirectories leading to the file as well, so that the state of the\nfilesystem on disk is consistent and safe in the event that the files\nin question are newly-created. Whether that's truly necessary or not\nappears to be filesystem-dependent. A quick perusal of the Linux\nsource shows that ext2 appears to only sync the data and metadata\nassociated with the inode of the specific file and not any parent\ndirectories, so it's probably a safe bet to fsync() any ancestor\ndirectories that matter as well as the file even if the system is\nrunning on top of a journalled filesystem. Since all the files in\nquestion probably reside in the same set of directories, the directory\nfsync()s can be deferred until the very end.\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Sun, 12 Jan 2003 21:31:02 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sync()" }, { "msg_contents": "Kevin Brown <[email protected]> writes:\n> So the backends have to keep a common list of all the files they\n> touch. Admittedly, that could be a problem if it means using a bunch\n> of shared memory, and it may have additional performance implications\n> depending on the implementation ...\n\nIt would have to be a list of all files that have been touched since the\nlast checkpoint. 
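The "fsync() the containing directory" point made a little earlier can be illustrated with a short sketch (again an invented example, not PostgreSQL code): on Linux a directory can be opened read-only and fsync()ed to force its entries -- for example, entries for newly created segment files -- out to disk, and this can be deferred until the end as suggested above:

    #include <fcntl.h>
    #include <unistd.h>

    static int
    fsync_directory(const char *dirpath)
    {
        int fd = open(dirpath, O_RDONLY);   /* a directory can be opened read-only */
        int rc;

        if (fd < 0)
            return -1;
        rc = fsync(fd);                     /* flush the directory entries themselves */
        close(fd);
        return rc;
    }
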
That's a serious problem for storage in shared memory,\nwhich is by definition fixed-size.\n\n>> Even if it did know, it's not clear to me that we can\n>> portably assume that process A issuing an fsync on a file descriptor F\n>> it's opened for file X will force to disk previous writes issued against\n>> the same physical file X by a different process B using a different file\n>> descriptor G.\n\n> If the manpages are to be believed, then under FreeBSD, Linux, and\n> HP-UX, calling fsync() will force to disk *all* unwritten buffers\n> associated with the file pointed to by the filedescriptor.\n\n> Sadly, however, the Solaris and IRIX manpages suggest that only\n> buffers associated with the specific file descriptor itself are\n> written, not necessarily all buffers associated with the file pointed\n> at by the file descriptor (and interestingly, the Solaris version\n> appears to be implemented as a library function and not a system call,\n> if the manpage's section is any indication).\n\nRight. \"Portably\" was the key word in my comment (sorry for not\nemphasizing this more clearly). The real problem here is how to know\nwhat is the actual behavior of each platform? I'm certainly not\nprepared to trust reading-between-the-lines-of-some-man-pages. And I\ncan't think of a simple yet reliable direct test. You'd really have to\ninvest detailed study of the kernel source code to know for sure ...\nand many of our platforms don't have open-source kernels.\n\n> Under Linux (and perhaps HP-UX), it may be necessary to fsync() the\n> directories leading to the file as well, so that the state of the\n> filesystem on disk is consistent and safe in the event that the files\n> in question are newly-created.\n\nAFAIK, all Unix implementations are paranoid about consistency of\nfilesystem metadata, including directory contents. So fsync'ing\ndirectories from a user process strikes me as a waste of time, even\nassuming that it were portable, which I doubt. What we need to worry\nabout is whether fsync'ing a bunch of our own data files is a practical\nsubstitute for a global sync() call.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 13 Jan 2003 00:52:43 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sync() " }, { "msg_contents": "Tom Lane wrote:\n> Kevin Brown <[email protected]> writes:\n> > So the backends have to keep a common list of all the files they\n> > touch. Admittedly, that could be a problem if it means using a bunch\n> > of shared memory, and it may have additional performance implications\n> > depending on the implementation ...\n> \n> It would have to be a list of all files that have been touched since the\n> last checkpoint. That's a serious problem for storage in shared memory,\n> which is by definition fixed-size.\n\nOf course, the file list needn't be stored in SysV shared memory. It\ncould be stored in a file that's later read by the checkpointing\nprocess. The backends could serialize their writes via fcntl() or\nioctl() style locks, whichever is appropriate. Locking might even be\navoided entirely if the individual writes are small enough.\n\n> Right. \"Portably\" was the key word in my comment (sorry for not\n> emphasizing this more clearly). The real problem here is how to know\n> what is the actual behavior of each platform? I'm certainly not\n> prepared to trust reading-between-the-lines-of-some-man-pages. \n\nReading between the lines isn't necessarily required, just literal\ninterpretation. 
:-)\n\n> And I can't think of a simple yet reliable direct test. You'd\n> really have to invest detailed study of the kernel source code to\n> know for sure ... and many of our platforms don't have open-source\n> kernels.\n\nLinux appears to do the right thing with the file data itself, even if\nit doesn't handle the directory entry simultaneously. Others claim,\nin messages written to pgsql-general and elsewhere (via Google\nsearch), that FreeBSD does the right thing for sure.\n\nI certainly agree that non-open-source kernels are uncertain. That's\nwhy it wouldn't be a bad idea to control this via a GUC variable.\n\n> > Under Linux (and perhaps HP-UX), it may be necessary to fsync() the\n> > directories leading to the file as well, so that the state of the\n> > filesystem on disk is consistent and safe in the event that the files\n> > in question are newly-created.\n> \n> AFAIK, all Unix implementations are paranoid about consistency of\n> filesystem metadata, including directory contents. \n\nNot ext2 under Linux! By default, it writes everything\nasynchronously. I don't know how many people use ext2 to do serious\ntasks under Linux, so this may not be that much of an issue.\n\n> So fsync'ing directories from a user process strikes me as a waste\n> of time, even assuming that it were portable, which I doubt. What\n> we need to worry about is whether fsync'ing a bunch of our own data\n> files is a practical substitute for a global sync() call.\n\nI'm positive that under certain operating systems, fsyncing the data\nis a better option than a global sync(), especially since sync() isn't\nguaranteed to wait until the buffers are flushed. Right now the state\nof the data on disk immediately after a checkpoint is just a guess\nbecause of that. I don't see that using fsync() would introduce\nsignificantly more uncertainty on systems where the manpage explicitly\nsays that the buffers associated with the file referenced by the file\ndescriptor are the ones written to disk. For instance, the FreeBSD\nmanpage says:\n\n Fsync() causes all modified data and attributes of fd to be moved\n to a permanent storage device. This normally results in all\n in-core modified copies of buffers for the associated file to be\n written to a disk.\n\n Fsync() should be used by programs that require a file to be in a\n known state, for example, in building a simple transaction\n facility.\n\nand the Linux manpage says:\n\n fsync copies all in-core parts of a file to disk, and waits until\n the device reports that all parts are on stable storage. It also\n updates metadata stat information. It does not necessarily ensure\n that the entry in the directory containing the file has also\n reached disk. For that an explicit fsync on the file descriptor\n of the directory is also needed.\n\nBoth are rather unambiguous, and a cursory review of the Linux source\nconfirms what its manpage says, at least. The FreeBSD manpage might\nbe ambiguous, but the fact that they also have an fsync command line\nutility essentially proves that FreeBSD's fsync() flushes all buffers\nassociated with the file.\n\nConversely, the Solaris manpage says:\n\n The fsync() function moves all modified data and attributes of the\n file descriptor fildes to a storage device. 
When fsync() returns,\n all in-memory modified copies of buffers associated with fildes\n have been written to the physical medium.\n\nIt's pretty clear from the Solaris description that its fsync()\nconcerns itself only with the buffers associated with a file\ndescriptor and not with the file itself. The fact that it's\nimplemented as a library call (the manpage is in section 3 instead of\nsection 2) convinces me further that its fsync() implementation is as\ndescribed.\n\n\nThe PostgreSQL default for checkpoints should probably be sync(), but\nI think fsync() should be an available option, just as it's possible\nto control whether or not synchronous writes are used for the\ntransaction log as well as the type of synchronization mechanism used\nfor it. Yes, it's another parameter for the administrator to concern\nhimself with, but it seems to me that a significant amount of speed\ncould be gained under certain (perhaps quite common) circumstances\nwith such a mechanism.\n\n\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Sun, 12 Jan 2003 23:32:07 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sync()" }, { "msg_contents": "\nTom Lane writes:\n\n> Right. \"Portably\" was the key word in my comment (sorry for not\n> emphasizing this more clearly). The real problem here is how to know\n> what is the actual behavior of each platform? I'm certainly not\n> prepared to trust reading-between-the-lines-of-some-man-pages. And I\n> can't think of a simple yet reliable direct test.\n\nIs the \"Single Unix Standard, version 2\" (aka UNIX98) any better?\nIt says for fsync():\n\n \"The fsync() function forces all currently queued I/O operations\n associated with the file indicated by file descriptor fildes to\n the synchronised I/O completion state. All I/O operations are\n completed as defined for synchronised I/O file integrity\n completion.\"\n\nThis to me clearly says that changes to the file must be written,\nnot just changes made via this file descriptor.\n\nI did have to test this behaviour once (for a customer, strange\nsituation) but I couldn't find a portable way to do it, either.\n\nWhat I did was read the appropriate disk block from the raw device to\nbypass the buffer cache. As this required low level knowledge of the\non-disk filesystem layout it was not very portable. For anyone\ninterested Tom Christiansen's \"icat\" program can be ported to UFS\nderived filesystems fairly easily:\n\n http://www.rosat.mpe-garching.mpg.de/mailing-lists/perl5-porters/1997-04/msg00487.html\n\n> AFAIK, all Unix implementations are paranoid about consistency of\n> filesystem metadata, including directory contents. So fsync'ing\n> directories from a user process strikes me as a waste of time, ...\n\nThere is one variant where this is not the case: Linux using ext2fs\nand possibly other filesystems.\n\nThere was a flame fest of great entertainment value a few years ago\nbetween Linus Torvalds and Dan Bernstein. Of course, neither was able\nto influence the opinion of the other to any noticible degree, but it\nmade fun reading. I think this might be a starting point:\n\n http://www.ornl.gov/cts/archives/mailing-lists/qmail/1998/05/msg00667.html\n\nA more recent posting from Linus where he continues to recommend\nfsync() is this:\n\n http://www.cs.helsinki.fi/linux/linux-kernel/2001-29/0659.html\n\nI've not heard that any other Unix-like OS has abandoned the\ntraditional and POSIX semantic.\n\n> assuming that it were portable, which I doubt. 
What we need to worry\n> about is whether fsync'ing a bunch of our own data files is a practical\n> substitute for a global sync() call.\n\nI wish that it were. There are situations (serveral GB buffer caches,\nfor example) where I mistrust the current use of sync() to have all\nwrites completed before the sleep() returns. My concern is\ntheoretical at the moment -- I never get to play with machines that\nlarge!\n\nRegards,\n\nGiles\n\n\n", "msg_date": "Mon, 13 Jan 2003 19:31:08 +1100", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sync() " }, { "msg_contents": "tom lane wrote:\n> <flame on>\n> In all honesty, I do not *want* Windows people to think that\n> they're not running on the \"poor stepchild\" platform.\n\nWe should distinguish between \"poor stepchild\" from a client support\nperspective and a production environment perspective.\n\nWhat is the downside to supporting development of client products\nbetter? That is what I am really suggesting.\n\nIf people are deciding what open-source database server they want to\nuse, Linux or FreeBSD is the obvious choice for the server OS. The kind\nof people who are inclined to use PostgreSQL or MySQL will mostly NOT be\nconsidering Windows servers.\n\n\n> I have no objection to there being a Windows port that people\n> can use to do SQL-client development on their laptops. But \n> let us please not confuse this with an industrial-strength \n> solution; nor give any level of support that might lead \n> others to make such confusion.\n\nAll we can do is simply to make it clear that Windows\nis not recommended for production server use and outline\nall the reasons why Windows sucks for that purpose.\n\nBeyond that, if people want to shoot themselves in the head, they will\ndo so and I don't see much point in trying to stop them.\n\n\n> The MySQL guys made the right choice here: they don't want to\n> buy into making Windows a grade-A platform, either. <flame off>\n\n<flame retardent on>\nHow does providing a native Windows executable that doesn't require\nCygwin accomplish your objective. It seems to me that you are going to\nhave the problem if you release a native version irrespective of the\nissue at hand (Visual C++ project support).\n\nI don't see how making it easier to build adds to this problem.\n\nI also don't see how making it harder for Windows client developer to\nadopt PostgreSQL helps anyone. <flame retardent off>\n\nI hate Microsoft and I don't like Windows, but I am forced to use it\nbecause the software we need to run our business runs only on \nWindows. I use Unix whenever possible and whenever reliability is\nrequired.\n\n- Curtis\n\nP.S. The lack of a real C++ client library that supports the most common\ndevelopment environment out there is another problem that seriously\nimpedes Windows client developers.\n\nI like libpqxx, Jeroen did a find job. However, one needs to \njump through hoops to get it to run on Visual C++ 6.0 at\nthe moment.\n\n\n", "msg_date": "Wed, 29 Jan 2003 12:54:51 -0400", "msg_from": "\"Curtis Faith\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System " }, { "msg_contents": "Curtis Faith wrote:\n<snip>\n > If people are deciding what open-source database server they want to\n> use, Linux or FreeBSD is the obvious choice for the server OS. 
The kind\n> of people who are inclined to use PostgreSQL or MySQL will mostly NOT be\n> considering Windows servers.\n\nFor another perspective, we've been getting a few requests per day \nthrough the PostgreSQL Advocacy and Marketing site's request form along \nthe lines of:\n\n\"Is there a license fee for using PostgreSQL? We'd like to distribute \nit with our XYZ product that needs a database.\"\n\nProbably about 4 or so per day like this at present. A lot of the \npeople sending these emails appear to have windows based products that \nneed a database, and have heard of PostgreSQL being a database that they \ndon't need to pay license fee's for. They've kind of missed the point \nof Open Source from the purist point of view, but it's still working for \nthem. ;-)\n\nRegards and best wishes,\n\nJustin Clift\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n- Indira Gandhi\n\n", "msg_date": "Thu, 30 Jan 2003 03:56:52 +1030", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System" }, { "msg_contents": "*sigh*\n\nOften there isn't a choice of OS. If I am selling to a large enterprise\nwhose corporate standards say they will only run Windows in their data\ncenter, my chances of getting them to make an exception are none. But my\nchances of getting them to install Pg just for my application are far\ngreater. Would I prefer *nix? You betcha. Would I break a deal over it? No.\nWould I prefer to be able to recommend Pg over, say, Oracle, or MS-SQL?\nAbsolutely. I'm not alone.\n\nI don't care how it's built. I have a lot of sympathy for the folks saying\nmake the build process universal, rather than having a special one for\nWindows. Requiring cygwin shouldn't be a big deal. You aren't going to get a\nsudden flood of *nix-ignorant windows developers rushing in, no matter what\nyou do.\n\nI've been mildly surprised and disappointed by the venom I detect in this\nthread. I want to be able to recommend a single Db to my customers no matter\nwhat OS they run. MySQL just doesn't do it, SAPdB is a nightmare, Pg is my\nlast hope other than a proprietary system. If you are an OpenSource zealot,\nthink of this as an opportunity to get some into places where it is often\nanaethema.\n\ncheers\n\nandrew\n\n----- Original Message -----\nFrom: \"Curtis Faith\" <[email protected]>\nTo: <[email protected]>\nSent: Wednesday, January 29, 2003 11:54 AM\nSubject: Re: [mail] Re: [HACKERS] Windows Build System\n\n\n> tom lane wrote:\n> > <flame on>\n> > In all honesty, I do not *want* Windows people to think that\n> > they're not running on the \"poor stepchild\" platform.\n>\n> We should distinguish between \"poor stepchild\" from a client support\n> perspective and a production environment perspective.\n>\n> What is the downside to supporting development of client products\n> better? That is what I am really suggesting.\n>\n> If people are deciding what open-source database server they want to\n> use, Linux or FreeBSD is the obvious choice for the server OS. The kind\n> of people who are inclined to use PostgreSQL or MySQL will mostly NOT be\n> considering Windows servers.\n>\n>\n> > I have no objection to there being a Windows port that people\n> > can use to do SQL-client development on their laptops. 
But\n> > let us please not confuse this with an industrial-strength\n> > solution; nor give any level of support that might lead\n> > others to make such confusion.\n>\n> All we can do is simply to make it clear that Windows\n> is not recommended for production server use and outline\n> all the reasons why Windows sucks for that purpose.\n>\n> Beyond that, if people want to shoot themselves in the head, they will\n> do so and I don't see much point in trying to stop them.\n>\n>\n> > The MySQL guys made the right choice here: they don't want to\n> > buy into making Windows a grade-A platform, either. <flame off>\n>\n> <flame retardent on>\n> How does providing a native Windows executable that doesn't require\n> Cygwin accomplish your objective. It seems to me that you are going to\n> have the problem if you release a native version irrespective of the\n> issue at hand (Visual C++ project support).\n>\n> I don't see how making it easier to build adds to this problem.\n>\n> I also don't see how making it harder for Windows client developer to\n> adopt PostgreSQL helps anyone. <flame retardent off>\n>\n> I hate Microsoft and I don't like Windows, but I am forced to use it\n> because the software we need to run our business runs only on\n> Windows. I use Unix whenever possible and whenever reliability is\n> required.\n>\n> - Curtis\n>\n> P.S. The lack of a real C++ client library that supports the most common\n> development environment out there is another problem that seriously\n> impedes Windows client developers.\n>\n> I like libpqxx, Jeroen did a find job. However, one needs to\n> jump through hoops to get it to run on Visual C++ 6.0 at\n> the moment.\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n", "msg_date": "Wed, 29 Jan 2003 12:54:09 -0500", "msg_from": "\"Andrew Dunstan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System " }, { "msg_contents": "Justin Clift wrote:\n> For another perspective, we've been getting a few requests per day \n> through the PostgreSQL Advocacy and Marketing site's request form along \n> the lines of:\n> \n> \"Is there a license fee for using PostgreSQL? We'd like to distribute \n> it with our XYZ product that needs a database.\"\n> \n> Probably about 4 or so per day like this at present. A lot of the \n> people sending these emails appear to have windows based products that \n> need a database, and have heard of PostgreSQL being a database that they \n> don't need to pay license fee's for. They've kind of missed the point \n> of Open Source from the purist point of view, but it's still working for \n> them. 
;-)\n\nIf they are:\n a) not clueful enough to actually look at the license, and\n b) looking at it from the purely selfish perspective of \"not having to\n pay license fees,\"\nthen are they /truly/ people where it is useful to put effort into being\nhelpful?\n\nFurthermore, if their lawyers are incapable of reading the license and\nexplaining to them \"You don't have to pay,\" I'd suggest the thought that\nmaybe they have bigger problems than you can possibly solve for them.\n\nThe great security quote of recent days is thus:\n \"If you spend more on coffee than on IT security, then you will be\n hacked.\" -- Richard Clarke\n\nThe analagous thing might be:\n\n\"If you spend more on coffee than you do on getting proper legal advice\nabout software licenses, then it's just possible that you might do\nsomething DOWNRIGHT STUPID and get yourself in a whole barrel of legal\nhot water.\"\n\nIf these people are incapable of reading software licenses, and haven't\nany competent legal counsel to to do it for them, you've got to wonder\nif they are competent to sell licenses to their own software. I\nseriously doubt that they are.\n\nFurthermore, I'm not at all sure that it is wise for you to even /try/\nto give them any guidance in this, beyond giving them a URL to the\nlicense, and saying \"Have your lawyer read this.\" If you start giving\nthem interpretations of the license, that smacks of \"giving legal\nadvice,\" and bar associations tend to frown on that.\n--\nIf this was helpful, <http://svcs.affero.net/rm.php?r=cbbrowne> rate me\nhttp://cbbrowne.com/info/\n\"Interfaces keep things tidy, but don't accelerate growth: functions\ndo.\" -- Alan Perlis\n", "msg_date": "Wed, 29 Jan 2003 18:15:19 -0500", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System " }, { "msg_contents": "\n\n> -----Original Message-----\n> From: Vince Vielhaber [mailto:[email protected]] \n> Sent: 30 January 2003 09:17\n> To: Ron Mayer\n> Cc: [email protected]\n> Subject: Re: [mail] Re: [HACKERS] Windows Build System\n> \n> \n> On Wed, 29 Jan 2003, Ron Mayer wrote:\n> \n> >\n> > Cool irony in the automated .sig on the mailinglist software...\n> >\n> > On Wed, 29 Jan 2003, Vince Vielhaber wrote:\n> > > ...\n> > > hammering the betas is a far cry from an \"industrial-strength \n> > > solution\". ... TIP 4: Don't 'kill -9' the postmaster\n> >\n> > Sounds like you're basically saying is\n> >\n> > _do_ 'kill -9' the postmaster...\n> >\n> > and make sure it recovers gracefully when testing for an \n> \"industrial- \n> > strength solution\".\n> \n> Not what I said at all.\n\nIt's not far off, but it's quite amusing none the less.\n\nWhat I read from your postings it that you are demanding more rigourous\ntesting for a new major feature *prior* to it being comitted to CVS in a\ndev cycle than I think we ever gave any previous new feature even in the\nbeta test phase. I don't object to testing, and have been thinking about\ncoding something to address Tom's concerns, but let's demand heavy\ntesting for the right reasons, not just to try to justify not doing a\nWin32 port.\n\nI would also point out that we already list the Cygwin port of\nPostgreSQL as supported. Who ever gave that the kind of testing people\nare demanding now? 
I think the worst case scenario will be that our\nWin32 port is far better than the existing 'supported' solution.\n\nRegards, Dave.\n", "msg_date": "Thu, 30 Jan 2003 13:08:58 -0000", "msg_from": "\"Dave Page\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System" }, { "msg_contents": "On Thu, 30 Jan 2003, Dave Page wrote:\n\n> > On Wed, 29 Jan 2003, Ron Mayer wrote:\n> >\n> > >\n> > > Cool irony in the automated .sig on the mailinglist software...\n> > >\n> > > On Wed, 29 Jan 2003, Vince Vielhaber wrote:\n> > > > ...\n> > > > hammering the betas is a far cry from an \"industrial-strength\n> > > > solution\". ... TIP 4: Don't 'kill -9' the postmaster\n> > >\n> > > Sounds like you're basically saying is\n> > >\n> > > _do_ 'kill -9' the postmaster...\n> > >\n> > > and make sure it recovers gracefully when testing for an\n> > \"industrial-\n> > > strength solution\".\n> >\n> > Not what I said at all.\n>\n> It's not far off, but it's quite amusing none the less.\n\nI agree with Tom on yanking the plug while it's operating. Do you\nknow the difference between kill -9 and yanking the plug?\n\n> What I read from your postings it that you are demanding more rigourous\n> testing for a new major feature *prior* to it being comitted to CVS in a\n> dev cycle than I think we ever gave any previous new feature even in the\n> beta test phase. I don't object to testing, and have been thinking about\n> coding something to address Tom's concerns, but let's demand heavy\n> testing for the right reasons, not just to try to justify not doing a\n> Win32 port.\n\nNice try. I've demanded nothing, quit twisting my words to fit your\nargument. If you're going to test and call it conclusive, do some\nconclusive testing or call it something else. But I suspect that since\nyou don't know the difference between yanking the plug and kill -9 this\nconversation is a waste of time.\n\nVince.\n-- \n Fast, inexpensive internet service 56k and beyond! http://www.pop4.net/\n http://www.meanstreamradio.com http://www.unknown-artists.com\n Internet radio: It's not file sharing, it's just radio.\n\n", "msg_date": "Thu, 30 Jan 2003 08:24:50 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System" }, { "msg_contents": "On Thu, 2003-01-30 at 13:24, Vince Vielhaber wrote:\n> On Thu, 30 Jan 2003, Dave Page wrote:\n> \n> > > On Wed, 29 Jan 2003, Ron Mayer wrote:\n> > >\n> > > >\n> > > > Cool irony in the automated .sig on the mailinglist software...\n> > > >\n> > > > On Wed, 29 Jan 2003, Vince Vielhaber wrote:\n> > > > > ...\n> > > > > hammering the betas is a far cry from an \"industrial-strength\n> > > > > solution\". ... TIP 4: Don't 'kill -9' the postmaster\n> > > >\n> > > > Sounds like you're basically saying is\n> > > >\n> > > > _do_ 'kill -9' the postmaster...\n> > > >\n> > > > and make sure it recovers gracefully when testing for an\n> > > \"industrial-\n> > > > strength solution\".\n> > >\n> > > Not what I said at all.\n> >\n> > It's not far off, but it's quite amusing none the less.\n> \n> I agree with Tom on yanking the plug while it's operating. Do you\n> know the difference between kill -9 and yanking the plug?\n\nKill -9 seems to me _less_ severe than yanking the plug but much easier\nto automate, so that could be the first thing to test. 
You have no hope\nof passing the pull-the-plug test if you can't survive even kill -9.\n\nPerhaps we could have a special \"reliability-regression\" test that does\n\"kill -9 postmaster\", repeatedly, at random intervals, and checks for\nconsistency ?\n\nMaybe we will find even some options for some OS'es to \"force-unmount\"\ndisks. I guess that setting IDE disk's to read-only with hdparm could\npossibly achieve something like that on Linux. \n\n> > What I read from your postings it that you are demanding more rigourous\n> > testing for a new major feature *prior* to it being comitted to CVS in a\n> > dev cycle than I think we ever gave any previous new feature even in the\n> > beta test phase. I don't object to testing, and have been thinking about\n> > coding something to address Tom's concerns, but let's demand heavy\n> > testing for the right reasons, not just to try to justify not doing a\n> > Win32 port.\n> \n> Nice try. I've demanded nothing, quit twisting my words to fit your\n> argument. If you're going to test and call it conclusive, do some\n> conclusive testing or call it something else. \n\nSo we have no conclusive testing done that /proves/ postgres to be\nreliable ? I guess that such thing (positive conclusive reliability\ntest) is impossible even in theory. \n\nBut Dave has done some testing that could not prove the opposite and\nconcluded that it is good enough for him. So I guess that his test were\nif fact \"conclusive\", if only just for him ;)\n\nSometimes it is very hard to do the pull-the-plug test - I've seen\npeople pondering over a HP server they could not switch off after\naccidentally powering it up. Pulling the plug just made it beep, but did\nnot switch it off ;)\n\n> But I suspect that since\n> you don't know the difference between yanking the plug and kill -9 this\n> conversation is a waste of time.\n\nI assume you realize that U can't \"kill -9\" the plug ;)\n\n-- \nHannu Krosing <[email protected]>\n", "msg_date": "30 Jan 2003 15:44:19 +0000", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System" }, { "msg_contents": "Hannu Krosing <[email protected]> writes:\n> Kill -9 seems to me _less_ severe than yanking the plug but much easier\n> to automate, so that could be the first thing to test. You have no hope\n> of passing the pull-the-plug test if you can't survive even kill -9.\n\nActually, they're two orthogonal issues.\n\nIn the pull-the-plug case you have to worry about what is on disk at any\ngiven instant and whether you can make all the bits on disk consistent\nagain. (And also about whether your filesystem can perform the\nequivalent exercise for its own metadata; which is why we are\nquestioning Windows here. Oracle's Windows port may have an advantage,\nif they bypass the OS to do raw disk I/O as they do on other platforms.)\n\nIn the kill -9 case there is no risk of losing data consistency on disk,\nbecause the OS isn't crashing; whatever we last wrote we can expect to\nread. The issue for kill -9 is whether we can deal with leftover\ndynamic state, like pre-existing shared memory segments, pre-existing\nSysV semaphores, TCP port numbers that the kernel won't reassign until\nsome timeout expires, that kind of fun stuff. The reason the TIP is\nstill there is that there are platforms on which that stuff doesn't work\nvery nicely. It's better to let the postmaster exit cleanly so that\nthat state gets cleaned up. 
I have no idea what the comparable issues\nare for a native Windows port, but I bet there are some...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 30 Jan 2003 10:56:24 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System " }, { "msg_contents": "\"Dave Page\" <[email protected]> writes:\n> I would also point out that we already list the Cygwin port of\n> PostgreSQL as supported. Who ever gave that the kind of testing people\n> are demanding now? I think the worst case scenario will be that our\n> Win32 port is far better than the existing 'supported' solution.\n\nA good point --- but what this is really about is expectations. If we\nsupport a native Windows port then people will probably think that it's\nokay to run production databases on that setup; whereas I doubt many\npeople would think that about the Cygwin-based port. So what we need to\nknow is whether the platform is actually stable enough that that's a\nreasonable thing to do; so that we can plaster the docs with appropriate\ndisclaimers if necessary. Windows, unlike the other OSes mentioned in\nthis thread, has a long enough and sorry enough track record that it\nseems appropriate to run such tests ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 30 Jan 2003 11:12:10 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System " }, { "msg_contents": "On Thursday 30 January 2003 11:12, Tom Lane wrote:\n> A good point --- but what this is really about is expectations. If we\n> support a native Windows port then people will probably think that it's\n> okay to run production databases on that setup; whereas I doubt many\n> people would think that about the Cygwin-based port. So what we need to\n> know is whether the platform is actually stable enough that that's a\n> reasonable thing to do; so that we can plaster the docs with appropriate\n> disclaimers if necessary. Windows, unlike the other OSes mentioned in\n> this thread, has a long enough and sorry enough track record that it\n> seems appropriate to run such tests ...\n\nI think it's just developer backlash to Win32. I am on record (see the \narchives) as not wanting the Win32 port -- but the vitriol I've seen in this \nthread from several people is entirely uncalled for and is sickening.\n\nDave appears to have tested this Win32 beta at least as much as a regular \nPostgreSQL release would be tested. These tests are being held to \nartificially high standards, simply because it's native Win32. That is \ndisgusting. And poor Katie just got _slammed_ -- and she's the lead \ndeveloper.\n\nVince, I would say that we, the developers of PostgreSQL, are then not \nqualified to test our own releases for the reasons you mentioned that Katie \nshould not test her own releases. Of course that's ridiculous -- often the \ndevelopers can do a better job of testing because they know better than the \nregular user would about what conditions can cause crashes.\n\nI don't like the thoughts of native Win32 either. I think Win32 should die a \nlong horrible death. But that doesn't give me the right to publicly ridicule \nthe folks that want to use PostgreSQL, even if it's in an 'industrial \nstrength setting,' on Win32. The BSD license indemnifies us anyway. So \nwhat's the problem.\n\nThe developers don't like Win32. That's the problem.\n\nBut as to 'industrial strength testing' -- do ANY of our releases get this \nsort of testing on ANY platform? 
No, typically it's 'regression passed' 'Ok, \nit's supported on that platform.'\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n\n", "msg_date": "Thu, 30 Jan 2003 13:03:00 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System" }, { "msg_contents": "Hi,\n\nOn Thursday 30 January 2003 17:12, you wrote:\n> \"Dave Page\" <[email protected]> writes:\n> > I would also point out that we already list the Cygwin port of\n> > PostgreSQL as supported. Who ever gave that the kind of testing people\n> > are demanding now? I think the worst case scenario will be that our\n> > Win32 port is far better than the existing 'supported' solution.\n>\n> A good point --- but what this is really about is expectations. If we\n> support a native Windows port then people will probably think that it's\n> okay to run production databases on that setup; whereas I doubt many\n> people would think that about the Cygwin-based port. So what we need to\n> know is whether the platform is actually stable enough that that's a\n> reasonable thing to do; so that we can plaster the docs with appropriate\n> disclaimers if necessary. Windows, unlike the other OSes mentioned in\n> this thread, has a long enough and sorry enough track record that it\n> seems appropriate to run such tests ...\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\nAh, well - I wanted to hold off but could not.\n\nFirst, a disclaimer: I don't like Windows at all. There, you got it.\n\nBut: it's actually quite stable if you configure it well, and don't run the 3 \nmillion available 'dang, this looks nice' tools on it. Place it in the \ncorner, let it run only server apps, and it serves well and stable. In my \nexperience (and I have quite some experience in letting Win machines run in \nheavy-duty 24/7 production floors) they will happily run and not eat data \nuntil the some hardware breaks or disks overflow, just like any OS.\n\nSo, please, don't let a 'I don't like it' kind of flamewar hinder a native \nport. And please no more 'not for production use' warnings - see above.\nMake this 'not for production use on workstations'.\n\nGreetings,\n\tJoerg\n-- \nLeading SW developer - S.E.A GmbH\nMail: [email protected]\nWWW: http://www.sea-gmbh.com\n", "msg_date": "Thu, 30 Jan 2003 19:11:04 +0100", "msg_from": "Joerg Hessdoerfer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System" }, { "msg_contents": "On Thu, 30 Jan 2003, Lamar Owen wrote:\n\n> Vince, I would say that we, the developers of PostgreSQL, are then not\n> qualified to test our own releases for the reasons you mentioned that Katie\n> should not test her own releases. Of course that's ridiculous -- often the\n> developers can do a better job of testing because they know better than the\n> regular user would about what conditions can cause crashes.\n\nDon't twist what I said. My statement about Katie was that she has a\nknowledge of the port and the OS to the point where there are things\nthat she knows are wrong to do and would avoid doing it. In the case\nof this port the idea is to make sure that those things that may cause\nthe backend to close are something that SHOULD be tested. By their own\nadmission they haven't been doing that. All they've done is loaded it\ndown and made sure it continued to work. The other ports have a long\nhistory, the windows port has ZERO history. 
If you're being sickened\nnow, how sick would you be if something went wrong and you started seeing\nthings all over /. and other sites going on about how PG crashed and\nblew away some corporation's data and half the OS away on something\nthat at worse should have only caused the backend to close? It won't\nmatter that it was running on windows, it would have been a native\nport that was blessed by the PGDG.\n\nIf anything, the resistance to this testing should sicken you.\n\nVince.\n-- \n Fast, inexpensive internet service 56k and beyond! http://www.pop4.net/\n http://www.meanstreamradio.com http://www.unknown-artists.com\n Internet radio: It's not file sharing, it's just radio.\n\n", "msg_date": "Thu, 30 Jan 2003 13:17:34 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System" }, { "msg_contents": "Lamar Owen <[email protected]> writes:\n> And poor Katie just got _slammed_ -- and she's the lead developer.\n\nWe could definitely do without the vitriol. I'd like to apologize if\nanyone took anything I said as a personal attack. It wasn't meant that\nway.\n\n> The developers don't like Win32. That's the problem.\n\nSure, we're on record as not liking Windows. But:\n\n> But as to 'industrial strength testing' -- do ANY of our releases get this \n> sort of testing on ANY platform? No, typically it's 'regression passed' 'Ok, \n> it's supported on that platform.'\n\nMost variants of Unix are known to be pretty stable. Most variants of\nUnix are known to follow the Unix standard semantics for sync() and\nfsync(). I think we are entirely justified in doubting whether Windows\nis a suitable platform for PG, and in wanting to run tests to find out.\nYes, we are holding Windows to a higher standard than we would for a\nUnix variant.\n\nPartly this is a matter of wanting to protect Postgres' reputation.\nJust on sheer numbers, if there is a native Windows port then there are\nlikely to be huge numbers of people using Postgres on Windows. If\nthat's not going to be a reliable combination, we need to know it and\ntell them so up-front. Otherwise, people will be blaming Postgres, not\nWindows, when they lose data. It's an entirely different situation from\nwhether Postgres-on-Joe-Blow's-Unix-Variant loses data, first because of\nvisibility, and second because of the different user base. Am I being\nparanoid to suspect that the average Postgres-on-Windows user will be\nless clueful than the average Postgres-on-Unix user? I don't think so.\n\nBetween the population factors and Windows' hard-earned reputation for\nunreliability, we would be irresponsible not to be asking tough\nquestions here. If the Windows partisans don't think Windows should be\nheld to a higher standard than the platforms we already deal with,\nwhy not? Are they afraid that their platform won't pass the scrutiny?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 30 Jan 2003 13:34:32 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System " }, { "msg_contents": "On Thursday 30 January 2003 13:17, Vince Vielhaber wrote:\n> On Thu, 30 Jan 2003, Lamar Owen wrote:\n> > Vince, I would say that we, the developers of PostgreSQL, are then not\n> > qualified to test our own releases for the reasons you mentioned that\n> > Katie should not test her own releases.\n\n> Don't twist what I said. 
My statement about Katie was that she has a\n> knowledge of the port and the OS to the point where there are things\n> that she knows are wrong to do and would avoid doing it.\n\nThen she would not be honestly testing, would she?\n\n> admission they haven't been doing that. All they've done is loaded it\n> down and made sure it continued to work. The other ports have a long\n> history, the windows port has ZERO history.\n\nDo we do powerfail testing on a unix-type port now? That's not testing the \nport, incidentally, it's testing the OS, sync semantics aside. Do we hold \nthe other ports to the same standards? Yes, the Win32 port is a substantial \nchange from the Unix ports. Yes, it needs robust testing. But all the ports \nneed that same grade of testing, not just Win32. And that type of testing is \nnot being rigorously done on any port now, unless it is being done by a few \nthat aren't announcing that they are doing it.\n\nAnd thanks to hardware write-back caching on many hard drives, powerfail \ntesting may be moot regardless of OS or filesystem type.\n\n> If you're being sickened\n> now, how sick would you be if something went wrong and you started seeing\n> things all over /. and other sites going on about how PG crashed and\n> blew away some corporation's data and half the OS away on something\n> that at worse should have only caused the backend to close?\n\nSick enough. But that applies to all our supported platforms, not just Win32. \n>From what I've seen and heard the 'supported' Cygwin port will barf all over \nitself under high load. So, the first thing I personally would test for a \nWin32 native port is 'how well is it performing under load?' -- after it \npasses that I would then throw the more pathological cases at it.\n\n> It won't\n> matter that it was running on windows, it would have been a native\n> port that was blessed by the PGDG.\n\nSo? How many users out there actually know about the PGDG? How many users \nhave gotten PostgreSQL from their distributor of choice (whether a Linux \ndistribution, the Cygwin distribution, FreeBSD ports, or wherever) and know \nnothing of PGDG or even postgresql.org? We make ourselves too important.\n\nI know enough to take all those sites with a shakerful of salt. But then \nagain I know enough to know that the batboy didn't help Clinton or Bush do \nanything, 'Weekly World News' aside. We can't prevent the tabloid mentality \nregardless of what we do. Or don't do. \n\nThe point being that if any release of anything labeled 'PostgreSQL', \nregardless of its status as blessed or not blessed (or even cursed) by the \nPGDG, does what you've said, PostgreSQL as a whole will suffer. Our blessing \nor cursing is meaningless to most users. Or, in slightly different words, if \nthey can't be bothered to care that it's on Windows then they aren't going to \ncare whether we gave it the Royal Seal of PGDG either.\n\nHowever, I'm sure the folks that are wanting to sell this Win32 native port \ncare a whole lot about how much return business they get -- so I'm sure they \ncare more about whether it is robustly tested than you give them credit.\n\n> If anything, the resistance to this testing should sicken you.\n\nThere isn't any resistance to this testing that I've seen. ISTM that the \nresistance is to the idea of a 'supported' WIn32 native port. So, let's test \nthe Win32 native beta using your scheme, and see what falls down. And let's \ntest Linux, *BSD, HP-UX, and AIX using the same scheme and see if it falls \ndown. 
Let's just be fair about the testing. The Win32 stuff is being \nproclaimed as beta already -- so none are being misled into thinking it's \nproduction grade right now. But it is passing those tests that hitherto have \nbeen thrown at it -- and it seems to be passing them well.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n\n", "msg_date": "Thu, 30 Jan 2003 13:41:50 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System" }, { "msg_contents": "On Thu, 30 Jan 2003, Lamar Owen wrote:\n\n> On Thursday 30 January 2003 13:17, Vince Vielhaber wrote:\n> > On Thu, 30 Jan 2003, Lamar Owen wrote:\n> > > Vince, I would say that we, the developers of PostgreSQL, are then not\n> > > qualified to test our own releases for the reasons you mentioned that\n> > > Katie should not test her own releases.\n>\n> > Don't twist what I said. My statement about Katie was that she has a\n> > knowledge of the port and the OS to the point where there are things\n> > that she knows are wrong to do and would avoid doing it.\n>\n> Then she would not be honestly testing, would she?\n\nShe consider herself testing to her own standards as a windows user/\ndeveloper. Is that enough? IMO, No. I've been on both sides know\nthat the windows user/developer doesn't hold things to the same standards\nas the unix user/developer.\n\n> > admission they haven't been doing that. All they've done is loaded it\n> > down and made sure it continued to work. The other ports have a long\n> > history, the windows port has ZERO history.\n>\n> Do we do powerfail testing on a unix-type port now? That's not testing the\n> port, incidentally, it's testing the OS, sync semantics aside. Do we hold\n> the other ports to the same standards? Yes, the Win32 port is a substantial\n> change from the Unix ports. Yes, it needs robust testing. But all the ports\n> need that same grade of testing, not just Win32. And that type of testing is\n> not being rigorously done on any port now, unless it is being done by a few\n> that aren't announcing that they are doing it.\n\nSince you're pretty much ignoring my reasoning, I'll give you the same\nconsideration. The history of windows as a platform has shown itself\nto be rather fragile compared to unix.\n\nBefore you respond to this, read Tom Lane's response and reply to that.\n\nVince.\n-- \n Fast, inexpensive internet service 56k and beyond! http://www.pop4.net/\n http://www.meanstreamradio.com http://www.unknown-artists.com\n Internet radio: It's not file sharing, it's just radio.\n\n", "msg_date": "Thu, 30 Jan 2003 14:20:26 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System" }, { "msg_contents": "On Thursday 30 January 2003 13:34, Tom Lane wrote:\n> anyone took anything I said as a personal attack. It wasn't meant that\n> way.\n\nWith a <flame on> tag? Flames are by long tradition personal. But I \nunderstand that that wasn't the intent -- the <flame on> was more of a \n<emphasis> tag.\n\n> Sure, we're on record as not liking Windows. But:\n> > But as to 'industrial strength testing' -- do ANY of our releases get\n> > this sort of testing on ANY platform? No, typically it's 'regression\n> > passed' 'Ok, it's supported on that platform.'\n\n> Most variants of Unix are known to be pretty stable. Most variants of\n> Unix are known to follow the Unix standard semantics for sync() and\n> fsync(). 
I think we are entirely justified in doubting whether Windows\n> is a suitable platform for PG, and in wanting to run tests to find out.\n\nTesting is being done. Those who are testing it are comfortable so far in its \ncapabilities. We will hear about it, loadly, when that changes, I'm sure.\n\n> Yes, we are holding Windows to a higher standard than we would for a\n> Unix variant.\n\nWhich is pretty ironic, given Win's reputation, right?\n\n> Partly this is a matter of wanting to protect Postgres' reputation.\n\nAnd here's where the rubber meets the road. We, like many developers of \nsoftware (open source and otherwise) have worked on this for so long and so \nhard that we have personified the program and it has become our child, so to \nspeak. As a father of four, I know what that can do. We will protect our \nchild at any cost, vehemently so. I for one can recognize this, and further \nrecognize that _it's_just_a_program_ (!!!!!) and not my child. This is hard \nto do. We're seeing our child experiment with what we consider to be bad \ncompany, and the defense mechanism is kicking in.\n\n> Just on sheer numbers, if there is a native Windows port then there are\n> likely to be huge numbers of people using Postgres on Windows. If\n\nWhile I understand (and agree with) your (and Vince's) reasoning on why \nWindows should be considered less reliable, neither of you have provided a \nsound technical basis for why we should not hold the other ports to the same \nstandards. I believe we should test every release as pathologically as Vince \nhas stated for Win32. The more reliable we become, the worse our test cases \nshould become. Across the board, and not just on Win32. \n\nDo we want to encourage Win32? (some obviously do, but I don't) Well, telling \npeople that we have tested PostgreSQL on Win32 much more thoroughly than on \nUnix is in a way telling them that we think it is _better_ than the \ntime-tested Unix ports ('It passed a harder test on Win32. Are we afraid the \nUnix ports won't pass those same tests?'). I for one don't want that to be a \nconclusion -- but the 'suits' will see it that way, rest assured.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n\n", "msg_date": "Thu, 30 Jan 2003 14:48:50 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System" }, { "msg_contents": "\nFrom: \"Tom Lane\" <[email protected]>\n>\n> Most variants of Unix are known to be pretty stable. Most variants of\n> Unix are known to follow the Unix standard semantics for sync() and\n> fsync(). I think we are entirely justified in doubting whether Windows\n> is a suitable platform for PG, and in wanting to run tests to find out.\n> Yes, we are holding Windows to a higher standard than we would for a\n> Unix variant.\n\nThe patches that were released implement fsync() by a call to _commit(),\nwhich is what I expected to see after a brief tour of the M$ support site.\nIs there any reason to think this won't have the desired effect? 
IANAWD, but\nmy reading suggests these should be pretty much equivalent.\n\nandrew\n\n", "msg_date": "Thu, 30 Jan 2003 14:55:02 -0500", "msg_from": "\"Andrew Dunstan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System " }, { "msg_contents": "Lamar Owen <[email protected]> writes:\n> While I understand (and agree with) your (and Vince's) reasoning on why \n> Windows should be considered less reliable, neither of you have provided a \n> sound technical basis for why we should not hold the other ports to the same \n> standards.\n\nThe point here is that Windows is virgin territory for us. We know\nabout Unix. When we port to a new Unix variant, we are dealing with the\nsame system APIs, and in many cases large chunks of the same system\ncode, that we've dealt with before. It's reasonable for us to have\nconfidence that Postgres will work the same on such a platform as it\ndoes on other Unix variants. And the track record of reliability that\nwe have built up across a bunch of Unix variants gives us\ncross-pollinating confidence in all of them.\n\nWindows shares none of that heritage. It is the first truly new port,\nonto a system without any Unix background, that we have ever done AFAIK.\nClaiming that it doesn't require an increased level of testing is\nsomewhere between ridiculous and irresponsible.\n\n> I believe we should test every release as pathologically as Vince \n> has stated for Win32.\n\nGreat, go to it. That does not alter the fact that today, with our\nexisting port history, Windows has to be treated with extra suspicion.\n\nI do not buy the argument you are making that we should treat all\nplatforms alike. If we had a ten-year-old Windows port, we could\nconsider it as stable as all our other ten-year-old Unix ports.\nWe don't. Given that we don't have infinite resources for testing,\nit's simple rationality to put more testing emphasis on the places\nthat we suspect there will be problems. And if you don't suspect\nthere will be problems on Windows, you are being way too naive :-(\n\n> Do we want to encourage Win32? (some obviously do, but I don't) Well, telling \n> people that we have tested PostgreSQL on Win32 much more thoroughly than on \n> Unix is in a way telling them that we think it is _better_ than the \n> time-tested Unix ports ('It passed a harder test on Win32. Are we afraid the \n> Unix ports won't pass those same tests?').\n\nIf it passes the tests, good for it. I honestly do not expect that it\nwill. My take on this is that we want to be able to document the\nproblems in advance, rather than be blindsided.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 30 Jan 2003 15:29:50 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System " }, { "msg_contents": "\nDave, Lamar and Katie can cheer now 'cuze this is the last comment\nI'm going to make on this. All others will be ignored, probably.\n\nThe one thing I haven't seen from Dave, Lamar or Katie on this is\nreputation. You're all for the PostgreSQL name going on it but I\nhave yet to see any of you so sure of yourselves that you'd put\nyour own name on it. The license allows it. Red Hat did it. I\nsee no \"PageSQL\" or \"KatieSQL\" or even an \"Oh-Win SQL\" being offered\nup. 
Yet all three of you are advocating that the PostgreSQL stamp\nof approval should be immediately placed on it (ok, Lamar may not\nbe as in favor as the Dave and Katie).\n\nWithout documented testing and sufficient warnings until enough\nhistory is banked, I don't think a native windows port should be\ngiven any kind of seal of approval. After that, what about keeping\nthe code current? In a year or so will it suffer from bit-rot and\nbe the source of complaints? Are there going to be security concerns\nsurrounding it? Is there going to be a bunch of scrambling going on\nto put out a patch when the latest active-x bug hoses the data dir?\n\nVince.\n-- \n Fast, inexpensive internet service 56k and beyond! http://www.pop4.net/\n http://www.meanstreamradio.com http://www.unknown-artists.com\n Internet radio: It's not file sharing, it's just radio.\n\n", "msg_date": "Thu, 30 Jan 2003 16:01:22 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Windows Build System - My final thoughts" }, { "msg_contents": "Vince Vielhaber <[email protected]> writes:\n> Without documented testing and sufficient warnings until enough\n> history is banked, I don't think a native windows port should be\n> given any kind of seal of approval.\n\nThat was my last point also: we have years of track record on most of\nour Unix ports, and none yet on Windows. Even several months of\nintensive testing by a small number of people will hardly level the\nplaying field.\n\n> After that, what about keeping the code current?\n\nI don't think that's an issue. We are not blessing anything based on\n7.2 ;-). The objective is to merge the changes into CVS tip and have\na first \"official\" Windows port as part of the 7.4 release. After that,\nit'll stay as current as any other port that's being actively used.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 30 Jan 2003 16:12:21 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Windows Build System - My final thoughts " }, { "msg_contents": "On Thursday 30 January 2003 15:29, Tom Lane wrote:\n> Lamar Owen <[email protected]> writes:\n> > While I understand (and agree with) your (and Vince's) reasoning on why\n> > Windows should be considered less reliable, neither of you have provided\n\n> Windows shares none of that heritage. It is the first truly new port,\n> onto a system without any Unix background, that we have ever done AFAIK.\n> Claiming that it doesn't require an increased level of testing is\n> somewhere between ridiculous and irresponsible.\n\nI am saying that as we mature we need increased testing across the board. And \nit is a very low percentage of code that is tied into the OS API, right? The \nmajority of the code (the vast majority) isn't touched by it. \n\n> that we suspect there will be problems. And if you don't suspect\n> there will be problems on Windows, you are being way too naive :-(\n\nReread my statement above. I _agree_ with the rationale -- but I fear it will \nhave the opposite impact. And I am not convinced that just because we have \ngood history with the unixoid ports means that we can slack on them -- Linux, \n*BSD, etc all change. The strftime(3) breakage with RedHat of a cycle ago \nshould show us that much.\n\nI suspect there will be problems on Win32 -- it is, after all, a new port. \nBut if we're going to immediately throw pathological test cases at it that \nwe're not even bothering to test against now, that immediately throws up a \nflag to me. 
And TESTING IS BEING DONE on the Win32 port, nobody is yet \ntrying to put the PGDG blessing on it as yet, and progress is being made by \nthose who wish to see it made. It is still being touted as beta software, \nright? The patches from Jan are very preliminary still, correct? Katie \nhasn't issued a press release saying that it's not beta, right?\n\n<hyperbole>\nI don't see what the uproar is about, other than 'Win32 is so unstable that it \ncan't possibly work as well as you are seeing it work -- you must be doing \nsomething wrong. Test it harder. Pull the plug repeatedly!! Test it until \nit breaks! HA! Told you it would break! (yeah, firing up the old \noxyacetlyene torch and hitting the hard drive with a 6,000 degree flame did \nthe trick -- this has got to be a bad operating system!)'\n</hyperbole>\n\nAnd, by the way, who in their right mind tests a database server by repeated \nyanking of the AC power? To go to that extreme for Win32 when we caution \nagainst something as mundane as a kill -9 of postmaster on Unix is absurd. \nAnd, yes, I know the difference. I also know that the AC power pull has \nnothing to do with PostgreSQL, but it has to do with the OS under it. \nAlthough a kill -9, from the point of view of the running process, is \nidentical to a power failure. It simply dies (unless it becomes a zombie, in \nwhich case it is undead) either way. The effects of a kill -9 shouldn't be \nas severe as a power fail, since the OS can properly flush written buffers \neven after the process writing them has died.\n\nAnd I also can point the finger at some Unix swervers (spelling intentional) \nthat would fail that test in a miserable way. I can also point at a few VMS \nmachines that couldn't pass that test. I've even seen machines blow up due \nto improper power cycling. \n\nAnd I've seen Win2k machines come right up after repeated power blips (I've \nalso seen them not come up). \n\nIt really depends upon what the hard disk is doing at the instant the \nregulators drop out the 5 and 12V supplies (and which supply goes out first, \nwhich can depend upon the respective loads -- for modern Pentium 4 systems \nthe 12V will probably go down first since it is more heavily loaded than the \n5V supply in these systems). Under certain conditions where the 12V goes \ndown before the 5V does, the head might still be writing as the servo spirals \ntowards park, causing all manner of damage (maybe even to servo information, \nwhich normally cannot be written). So the power cycle becomes a test of \nhardware, too, played Russian Roulette-style.\n\nTalk about an unscientific test.\n\nA database server that needs that kind of testing is going to be hardened \nhardware on a doubly redundant UPS anyway.\n\nBut, then again I've seen a Linux server survive a power cycle with no lost \ndata (ext3 filesystem -- I've seen lost data with ext2). And I've seen the \nsame server barf all over itself due to a single bit error in memory. Blew \nout the entire root filesystem, which was journaled and residing on a RAID 1 \npartition (the corruption was perfectly mirrored, by the way). Serves me \nright for not having ECC RAM installed at the time.\n\n> If it passes the tests, good for it. I honestly do not expect that it\n> will. 
My take on this is that we want to be able to document the\n> problems in advance, rather than be blindsided.\n\nI fully expect that Katie, Jan, Dave, and all the others working on this share \nyour concerns and want the Win32 port to be as solid as is possible on that \nOS.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n\n", "msg_date": "Thu, 30 Jan 2003 16:29:15 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System" }, { "msg_contents": "On Thursday 30 January 2003 16:01, Vince Vielhaber wrote:\n> Dave, Lamar and Katie can cheer now 'cuze this is the last comment\n> I'm going to make on this. All others will be ignored, probably.\n\n> up. Yet all three of you are advocating that the PostgreSQL stamp\n> of approval should be immediately placed on it (ok, Lamar may not\n> be as in favor as the Dave and Katie).\n\nFor the record, again, I am not at all in favor of a Win32 native port. I \nhave never been in favor of a Win32 native port (see the archives -- it's in \nthere). I am in favor of fair testing for all ports, and less of an \nemotional response to the idea of a Win32 port. It's going to happen; we \ncan't stop it; we might as well see how best to handle it. \n\nAnd I am definitely not in favor of putting the Royal Seal of PGDG on the code \nthat is out there now. It _isn't_ proven. And, as Tom just said, it's 7.2, \nand we're not due to make an Officially Stamped Win32 native port until 7.4. \n\nBut it doesn't take AC power cycling to prove it, either. And so I objected \nto the tone and to the extremity of the proposed testing, relative to the \ntesting we do now for the other ports.\n\nBut I also see the futility of withholding the Official Stamp of Approval -- \nif Win32 PostgreSQL is out there (and it will be, whether we like it or not), \nthen we will get flak over it if it breaks. Logically we should do \neverything we can to make sure the port is as stable as possible for Win32 -- \nand power cycle testing ain't the right way. ISTM that Dave, Katie, Jan, et \nall are doing this. They even seem to know what they are talking about, \nwhich is better than most Win32 partisans. There actually _can_ be \nreasonable people who use an unreasonable OS, for whatever reasons they may \nhave.\n\nDo I like it? No. Can I change it? No. Can I help test the Win32 port? Yes, \neven though I don't want to do so. Can I be reasonable and patient with \nthose who are doing the work on the Win32 port? Yes, I can. Do I need to \nsling the napalm because I don't like it? Not on the mailing lists (hmm, \nneed to get some naptha, some palmitic acid....might be fun to sling some \nnapalm in the back yard to rid the place of weeds, and get some relaxation to \nboot....).\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n\n", "msg_date": "Thu, 30 Jan 2003 16:42:11 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Windows Build System - My final thoughts" }, { "msg_contents": "Lamar Owen <[email protected]> writes:\n> And, by the way, who in their right mind tests a database server by repeated \n> yanking of the AC power?\n\nAnybody who would like their data to survive a power outage.\n\n> To go to that extreme for Win32 when we caution \n> against something as mundane as a kill -9 of postmaster on Unix is absurd. \n> And, yes, I know the difference. I also know that the AC power pull has \n> nothing to do with PostgreSQL, but it has to do with the OS under it. 
\n> Although a kill -9, from the point of view of the running process, is \n> identical to a power failure.\n\nNo, it is not. Did you not read my comments earlier today? The reasons\nwhy we are concerned about kill -9 have *nothing* to do with whether the\ndatabase can survive system crashes. Rather, the issues created by kill\n-9 have to do with coping with leftover state from a previous postmaster\nin the same system lifecycle. I forgot to mention one of the biggest\nheadaches, which is that kill -9 the postmaster doesn't kill the child\nbackends. We've got an interlock that tries to prevent starting a new\npostmaster when there are still old children around, but it's one of the\nthings that I think is most likely to break on any new port. (And I'm\ndead certain that that code doesn't work on Windows.) It's that sort of\nthing that we have painfully worked out on Unix-based systems, and are\ngoing to have to do over again for Windows. In many places we are\nprobably not even going to realize that we have to do something over\nagain, until someone gets bitten.\n\nThe fact that Postgres is reliable does not come (only) from the code\nbeing \"right\" in some abstract sense that will carry over to a new\nplatform. A big reason it's reliable is that we have painfully learned\nabout Unix-ish failure modes and put in defenses against them. Windows\nis going to bring a whole new set of failure modes that we don't have\ndefenses for. (Yet.) *That* is what we need extensive testing to learn\nabout, and claiming that we are discriminating against Windows just\nbecause it's Windows misses the point completely.\n\nOr, if you prefer, we can ship Postgres 7.4 for Windows with no more\ntesting than we need for any of the existing, long-since-well-tested\nports. But I'll bet a great deal that our reputation will go down the\ndrain (along with many people's data) if we do that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 30 Jan 2003 16:54:39 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System " }, { "msg_contents": "On Thursday 30 January 2003 16:54, Tom Lane wrote:\n> Lamar Owen <[email protected]> writes:\n> > And, by the way, who in their right mind tests a database server by\n> > repeated yanking of the AC power?\n\n> Anybody who would like their data to survive a power outage.\n\nI don't buy that. That's why I have $36,000 worth of lead acid in the room \nnext door, with $5,000 of inverters and chargers in the server room. Until I \nhad to upgrade RAM I had 240+ days of uptime on one box. The longest power \ninterruption was 28 hours. The battery held the whole time. There was never \nmore than 30 days between interruptions. The last time I had the server \nactually power down was during a maintenance run on the inverter/charge \nsystem, and I had to transfer power to the servers onto another branch, \nnecessitating two power cycles, which were clean shutdown/reboots. I haven't \nhad an unscheduled dirty powerdown in two years.\n\nWe cannot on any system guarantee the data surviving a sudden power outage. \nUntil we can be certain the write-back cache on that high performance drive \n(or NAS array using iSCSI, perhaps) flushes we cannot know the data hit the \ndisks.\n\n> > To go to that extreme for Win32 when we caution\n> > against something as mundane as a kill -9 of postmaster on Unix is\n> > absurd. And, yes, I know the difference. 
I also know that the AC power\n> > pull has nothing to do with PostgreSQL, but it has to do with the OS\n> > under it. Although a kill -9, from the point of view of the running\n> > process, is identical to a power failure.\n\n> No, it is not. Did you not read my comments earlier today?\n\nOf course I did -- I'm not daft. And that's why I specified 'from the point \nof view of the running process' -- that is, the process you are SIGKILLing \ncannot itself determine the difference between the power cycle and SIGKILL. \nIt just simply goes down, hard. Of course there is:\n\n> I forgot to mention one of the biggest\n> headaches, which is that kill -9 the postmaster doesn't kill the child\n> backends.\n\nThis is a real difference, and one that I forgot as well. So SIGKILL is \ndifferent to the whole backend system, but not to the singular process that \nis being SIGKILL'd. Suppose I issue a SIGKILL to postmaster and all forked \nbackends simultaneously? Where does SIGKILL differ from a power failure from \nthe point of view of the database system in that scenario? This is also \nassuming that you clean reboot the OS after the SIGKILL to postmaster, as \nthere is that dynamic state you mentioned to worry about. I probably should \nhave mentioned that before.\n\n> Windows\n> is going to bring a whole new set of failure modes that we don't have\n> defenses for. (Yet.) *That* is what we need extensive testing to learn\n> about, and claiming that we are discriminating against Windows just\n> because it's Windows misses the point completely.\n\nAnd ISTM that an experienced Windows developer, such as Katie or Dave, would \nknow to do this, would know how to do this, and would know the best way of \ndoing this. And I wasn't singling you out, Tom. It was the whole thread and \nthe turns it took that got me rather upset. \n\n> Or, if you prefer, we can ship Postgres 7.4 for Windows with no more\n> testing than we need for any of the existing, long-since-well-tested\n> ports. But I'll bet a great deal that our reputation will go down the\n> drain (along with many people's data) if we do that.\n\nWe don't have a standard testing methodology for any of our ports. We need \none for all of our ports. I fully expect the Win32 port to need a different \nmethodology than the FreeBSD port or the Linux port. And I expect we have \nenough experienced Win32 developers (which I am not) here that can provide \ninsight into how the methodologies should differ.\n\nI prefer more extensive testing for all of our ports. You did read that when \nI wrote it, right? (When I wrote it multiple times....) Just saying 'it \npassed regression' shouldn't be enough -- but we should really spend some \ncycles thinking about what the test suite really should be. For all \nplatforms.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n\n", "msg_date": "Thu, 30 Jan 2003 17:30:54 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System" }, { "msg_contents": "Tom Lane wrote:\n> \"Dave Page\" <[email protected]> writes:\n> > I would also point out that we already list the Cygwin port of\n> > PostgreSQL as supported. Who ever gave that the kind of testing people\n> > are demanding now? I think the worst case scenario will be that our\n> > Win32 port is far better than the existing 'supported' solution.\n> \n> A good point --- but what this is really about is expectations. 
If we\n> support a native Windows port then people will probably think that it's\n> okay to run production databases on that setup; \n\nSure. But it's only common sense that a piece of software is only as\nreliable as the platform it's running on.\n\nPeople run production databases under MS-SQL all the time. Has MS-SQL\nitself gained a reputation for being an unreliable piece of junk?\nPerhaps. But if so, that obviously hasn't stopped people from putting\ntheir production databases on it!\n\nIs MS-SQL's reputation for unreliability, if any, because of MS-SQL\nitself or the platform it's operating on? The way to answer that is\nto ask the same question of Oracle and DB/2 under Windows. And\ntherefore, the answer seems to be that the platform is a minor\ndeterminant, if any.\n\n> whereas I doubt many\n> people would think that about the Cygwin-based port. \n\nWhy not? Seriously, if the people in question are the simpletons that\nyou appear to be expecting them to be, then wouldn't they have that\nsame expectation of the Cygwin based port? Why not?\n\n> So what we need to\n> know is whether the platform is actually stable enough that that's a\n> reasonable thing to do; so that we can plaster the docs with appropriate\n> disclaimers if necessary. \n\nWell, shouldn't we do that anyway, then, until we know otherwise?\nShouldn't we do that with *any* new port?\n\n> Windows, unlike the other OSes mentioned in\n> this thread, has a long enough and sorry enough track record that it\n> seems appropriate to run such tests ...\n\nWith this I agree, but before you start thinking that Windows is the\nonly OS that qualifies, consider this: I've run the \"pull the plug\"\ntest under early Linux 2.4 kernels running with ReiserFS. I'd start a\nmake of a large project, pull the power, bring the system back up, and\nrestart the build. And the end result was that some of the files\nfiles in the build directory were corrupted, such that the build could\nnot continue. I haven't tried this under current versions of the\nkernel, so I don't know if things have improved or not.\n\nDoesn't that -- shouldn't that -- give you pause about declaring\n*Linux* an industrial-strength solution?\n\n\nMy point: if you're going to hold *one* OS to a given standard, you\nshould hold *all* of them to that same standard.\n\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Thu, 30 Jan 2003 14:39:59 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System" }, { "msg_contents": "On Thu, Jan 30, 2003 at 02:39:59PM -0800, Kevin Brown wrote:\n> \n> With this I agree, but before you start thinking that Windows is the\n> only OS that qualifies, consider this: I've run the \"pull the plug\"\n> test under early Linux 2.4 kernels running with ReiserFS. I'd start a\n> make of a large project, pull the power, bring the system back up, and\n> restart the build. And the end result was that some of the files\n> files in the build directory were corrupted, such that the build could\n> not continue.\n\nAfaik, ReiserFS does not guarantee data consistency, only meta\ndata. 
As in, the file system itself will be consistent, and an\nfsck shouldn't find a problem.\n\n\nKurt\n\n", "msg_date": "Fri, 31 Jan 2003 00:09:05 +0100", "msg_from": "Kurt Roeckx <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System" }, { "msg_contents": "Kurt Roeckx wrote:\n> On Thu, Jan 30, 2003 at 02:39:59PM -0800, Kevin Brown wrote:\n> > \n> > With this I agree, but before you start thinking that Windows is the\n> > only OS that qualifies, consider this: I've run the \"pull the plug\"\n> > test under early Linux 2.4 kernels running with ReiserFS. I'd start a\n> > make of a large project, pull the power, bring the system back up, and\n> > restart the build. And the end result was that some of the files\n> > files in the build directory were corrupted, such that the build could\n> > not continue.\n> \n> Afaik, ReiserFS does not guarantee data consistency, only meta\n> data. As in, the file system itself will be consistent, and an\n> fsck shouldn't find a problem.\n\nExactly. Does NTFS? Not as far as I know. Why should we hold NTFS\nto a standard that ReiserFS doesn't meet?\n\nThat said, I do agree with Tom that the Windows port is basically\nvirgin territory and needs to be approached with caution. But we\nshouldn't be so cautious that we hesitate to release the port to the\nworld (sufficient disclaimers are appropriate, as with any new\nport)...\n\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Thu, 30 Jan 2003 15:16:36 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System" }, { "msg_contents": "Lamar Owen <[email protected]> writes:\n> On Thursday 30 January 2003 16:54, Tom Lane wrote:\n>> Lamar Owen <[email protected]> writes:\n>>> And, by the way, who in their right mind tests a database server by\n>>> repeated yanking of the AC power?\n\n>> Anybody who would like their data to survive a power outage.\n\n> I don't buy that. That's why I have $36,000 worth of lead acid in the room \n> next door, with $5,000 of inverters and chargers in the server room.\n\nWell, great; you're probably proof against misfeasance of your local\npower company. But how about someone tripping over the power cord?\nOr a blowout in the server's internal power supply? Or a kernel crash?\nPulling the power plug is just a convenient way of (approximately)\nmodeling a whole class of unpleasant events. I don't think the fact\nthat you can afford to spend that much on batteries makes it\nuninteresting to test such scenarios.\n\nBut we're pretty much talking at cross-purposes here. The real issue\nIMHO is that the Windows port needs a lot of testing because it is a\nnew platform (for us), and one not like the platforms we've used before.\nIt is faulty to equate the amount of testing required to gain confidence\nin that port with the amount of testing required to gain confidence that\nPG 7.4 will run reliably on, say, HPUX 10.20, when we already know that\nevery PG back to 6.4 has run reliably on HPUX 10.20. You're attacking a\nstraw man you have set up, namely the idea that only specific testing\nproduces confidence in a port. 
In my mind past track record has a lot\nmore to do with confidence than whatever testing we do for an individual\nrelease.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 30 Jan 2003 18:39:51 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System " }, { "msg_contents": "On Thursday 30 January 2003 18:39, Tom Lane wrote:\n> Well, great; you're probably proof against misfeasance of your local\n> power company. But how about someone tripping over the power cord?\n\nTwistlok.\n\n> Or a blowout in the server's internal power supply?\n\nRedundant supplies.\n\n> Or a kernel crash?\n\nDifferent from pulling the plug.\n\n> It is faulty to equate the amount of testing required to gain confidence\n> in that port with the amount of testing required to gain confidence that\n> PG 7.4 will run reliably on, say, HPUX 10.20, when we already know that\n> every PG back to 6.4 has run reliably on HPUX 10.20.\n\nBut does the fact that PG 6.4 ran reliably on HP-UX 10 mean PG 7.4 will run as \nreliably on HP-UX 11? Does the fact that PG 6.2.1 ran well on Linux kernel \n2.0.30 with libc 5.3.12 mean PG 7.4 will run well on Linux 2.6.x with glibc \n2.4.x? The OS is also a moving target. Hmph. PG 7.3 won't even build on \nRed Hat 5.2, for instance. So much for track record.\n\n> You're attacking a\n> straw man you have set up, namely the idea that only specific testing\n> produces confidence in a port. In my mind past track record has a lot\n> more to do with confidence than whatever testing we do for an individual\n> release.\n\nTrack record means nothing if sufficient items have changed in the underlying \nOS. I remember the Linux fiasco with PostgreSQL 6.3.1. It was so bad that \nRed Hat was considering releasing Red Hat 5.1 with a CVS checkout of \npre-6.3.2. That is not Red Hat's normal policy.\n\nAlso, between major versions enough may have changed to make it necessary to \ntest thoroughly -- WAL, for instance. MVCC for another instance. PITR is \ngoing to be another instance requiring a different test methodology. One \nwill indeed be required to blow down the whole system to properly test PITR, \non all platforms.\n\nTrack record indicates that all of our x.y.1 releases are typically hosed in \nsome fashion. 7.3.1 proved that wrong. Track record only requires a single \nfailure to invalidate -- and we should test for those failures across the \nboard, regardless of track record. Records are meant to be broken.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n\n", "msg_date": "Thu, 30 Jan 2003 18:57:27 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System" }, { "msg_contents": "Tom Lane wrote:\n> Most variants of Unix are known to be pretty stable. Most variants of\n> Unix are known to follow the Unix standard semantics for sync() and\n> fsync(). I think we are entirely justified in doubting whether Windows\n> is a suitable platform for PG, and in wanting to run tests to find out.\n> Yes, we are holding Windows to a higher standard than we would for a\n> Unix variant.\n> \n> Partly this is a matter of wanting to protect Postgres' reputation.\n> Just on sheer numbers, if there is a native Windows port then there are\n> likely to be huge numbers of people using Postgres on Windows. If\n> that's not going to be a reliable combination, we need to know it and\n> tell them so up-front. Otherwise, people will be blaming Postgres, not\n> Windows, when they lose data. 
It's an entirely different situation from\n> whether Postgres-on-Joe-Blow's-Unix-Variant loses data, first because of\n> visibility, and second because of the different user base. Am I being\n> paranoid to suspect that the average Postgres-on-Windows user will be\n> less clueful than the average Postgres-on-Unix user? I don't think so.\n\nAssuming all your assumptions are right, why the hell is Oracle's and MS\nSQL-Server's reputation that bloody good? And what about MySQL? They all\nhave a native Windows (sup)port for some time ... didn't harm their\nreputation. I think that we got in bed with this ugly Cybill ... er ...\nCygwin thing had cost us more reputation than the sucking performance of\npre-7 releases all together.\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n", "msg_date": "Thu, 30 Jan 2003 22:06:36 -0500", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System" }, { "msg_contents": "\nOn Thu, 30 Jan 2003, Dave Page wrote:\n> > On Wed, 29 Jan 2003, Ron Mayer wrote:\n> > >\n> > > Cool irony in the automated .sig on the mailinglist software...\n> > > [...]\n> > > Sounds like you're basically saying is\n> > > _do_ 'kill -9' the postmaster...\n> > > and make sure it recovers gracefully...\n> > ... \n> It's not far off, but it's quite amusing none the less.\n\nSorry it looks like I should have added \"50% :-), 50% :-|\". I was\njust amused by the irony of having the admonition against the \nstandard linux-low-memory condition. [ 90% :-) ]\n\n\n\nMore constructively, I think it'd be best for everyone if\n\n a) postgresql does have a native windows port, and\n\n b) it's positioned as a \"well-tested beta\" rather than \"production\n ready\", with documentation saying what part people have\n confidence in (the engine? operation in non-failure modes),\n and what part is still in beta (OS failure modes).\n\nIMHO this would have the advantages of\n\n Even if it works better than many expect, it shows...\n \n ... that postgresql has a high standard even for beta releases,\n where this community thinks about even broader system issues.\n\n ... corporations using it what aspects they need to focus on in\n internal testing before using it in production environments.\n\n ... that we're interested in reaching a broader community.\n\n ... that postgresql is quite portable across platforms.\n\nNote that I don't even care about running the windows version. 
I just think \nthat such a release can be positioned to _strengthen_ PostgreSQL's brand image \nrather than weaken it.\n\n\n Ron\n\n", "msg_date": "Thu, 30 Jan 2003 19:20:15 -0800 (PST)", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System" }, { "msg_contents": "I've been looking at the PID file creation mechanism we currently use.\nIt goes through a loop in an attempt to create the PID file, and if\none is there it attempts to remove it if the PID it contains no longer\nexists (there are checks for shared memory usage as well).\n\nThis could be cleaned up rather dramatically if we were to use one of\nthe file locking primitives supplied by the OS to grab an exclusive\nlock on the file, and the upside is that, when the locking code is\nused, the postmaster would *know* whether or not there's another\npostmaster running, but the price for that is that we'd have to eat a\nfile descriptor (closing the file means losing the lock), and we'd\nstill have to retain the old code anyway in the event that there is no\nsuitable file locking mechanism to use on the platform in question.\n\nThe first question for the group is: is it worth doing that?\n\nThe second question for the group is: if we do indeed decide to do\nfile locking in that manner, what *other* applications of the OS-level\nfile locking mechanism will we have? Some of them allow you to lock\nsections of a file, for instance, while others apply a lock on the\nentire file. It's not clear to me that the former will be available\non all the platforms we're interested in, so locking the entire file\nis probably the only thing we can really count on (and keep in mind\nthat even if an API to lock sections of a file is available, it may\nwell be that it's implemented by locking the entire file anyway).\n\nWhat I had in mind was implementation of a file locking function that\nwould take a file descriptor and a file range. If the underlying OS\nmechanism supported it, it would lock that range. The interesting\ncase is when the underlying OS mechanism did *not* support it. Would\nit be more useful in that case to return an error indication? Would\nit be more useful to simply lock the entire file? If no underlying\nfile locking mechanism is available, it seems obvious to me that the\nfunction would have to always return an error.\n\n\nThoughts?\n\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Thu, 30 Jan 2003 19:23:54 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "On file locking" }, { "msg_contents": "Mmy problem is freebsd getting totally loaded at which point it sends kills\nto various processes. 
This sometime seems to end up with several actual\npostmasters running, and none of them working.\n\nBetter existing process detection would help that greatly I'm sure.\n\nChris\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Kevin Brown\n> Sent: Friday, 31 January 2003 11:24 AM\n> To: PostgreSQL Development\n> Subject: [HACKERS] On file locking\n>\n>\n> I've been looking at the PID file creation mechanism we currently use.\n> It goes through a loop in an attempt to create the PID file, and if\n> one is there it attempts to remove it if the PID it contains no longer\n> exists (there are checks for shared memory usage as well).\n>\n> This could be cleaned up rather dramatically if we were to use one of\n> the file locking primitives supplied by the OS to grab an exclusive\n> lock on the file, and the upside is that, when the locking code is\n> used, the postmaster would *know* whether or not there's another\n> postmaster running, but the price for that is that we'd have to eat a\n> file descriptor (closing the file means losing the lock), and we'd\n> still have to retain the old code anyway in the event that there is no\n> suitable file locking mechanism to use on the platform in question.\n>\n> The first question for the group is: is it worth doing that?\n>\n> The second question for the group is: if we do indeed decide to do\n> file locking in that manner, what *other* applications of the OS-level\n> file locking mechanism will we have? Some of them allow you to lock\n> sections of a file, for instance, while others apply a lock on the\n> entire file. It's not clear to me that the former will be available\n> on all the platforms we're interested in, so locking the entire file\n> is probably the only thing we can really count on (and keep in mind\n> that even if an API to lock sections of a file is available, it may\n> well be that it's implemented by locking the entire file anyway).\n>\n> What I had in mind was implementation of a file locking function that\n> would take a file descriptor and a file range. If the underlying OS\n> mechanism supported it, it would lock that range. The interesting\n> case is when the underlying OS mechanism did *not* support it. Would\n> it be more useful in that case to return an error indication? Would\n> it be more useful to simply lock the entire file? If no underlying\n> file locking mechanism is available, it seems obvious to me that the\n> function would have to always return an error.\n>\n>\n> Thoughts?\n>\n>\n>\n> --\n> Kevin Brown\t\t\t\t\t [email protected]\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n", "msg_date": "Fri, 31 Jan 2003 11:54:57 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On file locking" }, { "msg_contents": "Jan Wieck <[email protected]> writes:\n> Assuming all your assumptions are right, why the hell is Oracle's and MS\n> SQL-Server's reputation that bloody good?\n\nThey have marketing departments.\n\n> And what about MySQL?\n\nWhat about it? Someone claimed in this thread that MySQL's Windows port\nrequires Cygwin. 
Is that true or not?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 30 Jan 2003 23:08:41 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System " }, { "msg_contents": "Kevin Brown <[email protected]> writes:\n> This could be cleaned up rather dramatically if we were to use one of\n> the file locking primitives supplied by the OS to grab an exclusive\n> lock on the file, and the upside is that, when the locking code is\n> used, the postmaster would *know* whether or not there's another\n> postmaster running, but the price for that is that we'd have to eat a\n> file descriptor (closing the file means losing the lock),\n\nYeah, I was just thinking about that this morning. Eating one file\ndescriptor in the postmaster is absolutely no problem --- the postmaster\ndoesn't have all that many files open anyhow. What I was wondering was\nwhether it was worth eating an FD for every backend process, by holding\nopen the file inherited from the postmaster. If we did that, we would\nhave a reliable way of detecting that the old postmaster died but left\nsurviving child backends. (As I mentioned in a nearby flamefest, the\nexisting interlock for this situation strikes me as mighty fragile.)\n\nBut this only wins if a child process inheriting an open file also\ninherits copies of any locks held by the parent. If not, then the\nissue is moot. Anybody have any idea if file locks work that way?\nIs it portable??\n\n> The second question for the group is: if we do indeed decide to do\n> file locking in that manner, what *other* applications of the OS-level\n> file locking mechanism will we have?\n\nI can't see any use in partial-file locks for us, and would not want\nto design an internal API that expects them to work.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 30 Jan 2003 23:26:12 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On file locking " }, { "msg_contents": "Tom Lane wrote:\n> \n> Lamar Owen <[email protected]> writes:\n> > And, by the way, who in their right mind tests a database server by repeated\n> > yanking of the AC power?\n> \n> Anybody who would like their data to survive a power outage.\n\n... has UPS, ECC Ram on quality boards and storage subsystems that\nguarantee the data to hit \"some\" surface after it passed the interface\n... what's your point? Are you telling me that the reliability of an\nEMC2 system depends on which OS it is receiving the bits from? Is SuSE\nas reliable as TurboLinux? Or do I have to buy AIX to get the best\nresult? \n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n", "msg_date": "Thu, 30 Jan 2003 23:37:17 -0500", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System" }, { "msg_contents": "> file descriptor (closing the file means losing the lock), and we'd\n> still have to retain the old code anyway in the event that there is no\n> suitable file locking mechanism to use on the platform in question.\n\nWhat is the gain given the above statement? 
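\n\nFor reference, the mechanism being debated would look roughly like the sketch below. This is only an illustration of the idea, not actual postmaster code: it assumes a platform with working fcntl()-style advisory locking, acquire_pidfile_lock is a made-up name, and most error handling is skipped.\n\n#include <fcntl.h>\n#include <stdio.h>\n#include <string.h>\n#include <unistd.h>\n\n/*\n * Take an exclusive advisory lock on the PID file and write our PID\n * into it.  Returns the still-open descriptor on success, or -1 on\n * failure (typically because a live process already holds the lock).\n * The descriptor has to stay open for the life of the postmaster:\n * closing it drops the lock.\n */\nint\nacquire_pidfile_lock(const char *path)\n{\n    struct flock lk;\n    char buf[32];\n    int fd;\n\n    fd = open(path, O_RDWR | O_CREAT, 0600);\n    if (fd < 0)\n        return -1;\n\n    memset(&lk, 0, sizeof(lk));\n    lk.l_type = F_WRLCK;        /* exclusive lock ...      */\n    lk.l_whence = SEEK_SET;\n    lk.l_start = 0;\n    lk.l_len = 0;               /* ... over the whole file */\n\n    if (fcntl(fd, F_SETLK, &lk) < 0)\n    {\n        close(fd);              /* somebody live already holds it */\n        return -1;\n    }\n\n    snprintf(buf, sizeof(buf), \"%d\", (int) getpid());\n    ftruncate(fd, 0);\n    write(fd, buf, strlen(buf));\n    return fd;                  /* the lock dies when we do */\n}\n\nThe attractive property is that the lock evaporates the moment the holding process exits, however it exits, so there is no stale-PID-file cleanup problem; the unattractive ones are the burned descriptor and, as discussed elsewhere in the thread, the open question of what child processes inherit.\n\n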
If what we currently do can\ncause issues (fail), then beefing it up where available may be useful --\nbut otherwise it's just additional code.\n-- \nRod Taylor <[email protected]>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "30 Jan 2003 23:42:03 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On file locking" }, { "msg_contents": "Hannu Krosing wrote:\n> > I agree with Tom on yanking the plug while it's operating. Do you\n> > know the difference between kill -9 and yanking the plug?\n> \n> Kill -9 seems to me _less_ severe than yanking the plug but much easier\n> to automate, so that could be the first thing to test. You have no hope\n> of passing the pull-the-plug test if you can't survive even kill -9.\n> \n> Perhaps we could have a special \"reliability-regression\" test that does\n> \"kill -9 postmaster\", repeatedly, at random intervals, and checks for\n> consistency ?\n> \n> Maybe we will find even some options for some OS'es to \"force-unmount\"\n> disks. I guess that setting IDE disk's to read-only with hdparm could\n> possibly achieve something like that on Linux.\n\nGet VMWare for Linux, run whatever OS you like in it and \"kill -9\" the\nvirtual machine. That's as close as you can get to \"yanking\" without\nwearing out your power plugs.\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n", "msg_date": "Thu, 30 Jan 2003 23:54:17 -0500", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System" }, { "msg_contents": "\n> This could be cleaned up rather dramatically if we were to use one of\n> the file locking primitives supplied by the OS to grab an exclusive\n> lock on the file, ...\n> ...\n> The first question for the group is: is it worth doing that?\n\nIn the past it has been proposed and declined -- there is some stuff\nin the archives. While it would be beneficial to installations using\nlocal data it would introduce new failure modes for installations\nusing NFS.\n\nRegards,\n\nGiles\n\n", "msg_date": "Fri, 31 Jan 2003 16:03:22 +1100", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On file locking " }, { "msg_contents": "> What about it? Someone claimed in this thread that MySQL's Windows port\n> requires Cygwin. Is that true or not?\n\nIt's been a while, but I know I've installed MySQL on windows without any \nseparate step of installing Cygwin (I can't say 100% for sure that it didn't \ninstall some part of Cygwin transparently to me).\n\nRegards,\n\tJeff Davis\n", "msg_date": "Thu, 30 Jan 2003 22:27:15 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System" }, { "msg_contents": "On Friday 31 Jan 2003 9:56 am, you wrote:\n> Kevin Brown <[email protected]> writes:\n> But this only wins if a child process inheriting an open file also\n> inherits copies of any locks held by the parent. If not, then the\n> issue is moot. Anybody have any idea if file locks work that way?\n> Is it portable??\n\nIn my experience of HP-UX and linux, they do differ. How much, I don't \nremember.\n\nI have a stupid proposal. Keep file lock aside. I think shared memory can be \nkept alive even after process dies. 
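\n\nChecking whether a segment left over from a previous run still exists, and whether anything is still attached to it, is cheap. Roughly like this, as a sketch only (SysV shared memory assumed, shm_probe is an invented name, and the error cases are glossed over):\n\n#include <sys/types.h>\n#include <sys/ipc.h>\n#include <sys/shm.h>\n#include <errno.h>\n\n/*\n * Probe a shared memory id saved by a previous postmaster.\n * Returns 0 if the segment is gone, 1 if it still exists but nothing\n * is attached any more, 2 if some process is still attached to it.\n */\nint\nshm_probe(int shmid)\n{\n    struct shmid_ds buf;\n\n    if (shmctl(shmid, IPC_STAT, &buf) < 0)\n    {\n        if (errno == EINVAL || errno == EIDRM)\n            return 0;           /* id no longer exists */\n        return 0;               /* other errors: call it gone, for this sketch */\n    }\n    return (buf.shm_nattch > 0) ? 2 : 1;\n}\n\nThe awkward part is making sure the id read back from the file really belonged to a postmaster for this data directory, and not to some unrelated segment that happened to reuse the same id.\n\n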
Why not write a shared memory segment id \nto a file and let postmaster check that segment. That would be much easier.\n\nBesides file locking is implemented using setgid bit on most unices. And \neverybody is free to do what he/she thinks right with it.\n\nMay be stupid but just a thought..\n\n Shridhar\n\n", "msg_date": "Fri, 31 Jan 2003 12:20:41 +0530", "msg_from": "\"Shridhar Daithankar<[email protected]>\"\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: On file locking" }, { "msg_contents": "Jeff Davis wrote:\n>>What about it? Someone claimed in this thread that MySQL's Windows port\n>>requires Cygwin. Is that true or not?\n> \n> It's been a while, but I know I've installed MySQL on windows without any \n> separate step of installing Cygwin (I can't say 100% for sure that it didn't \n> install some part of Cygwin transparently to me).\n\n From the MySQL site's page about MySQL vs PostgreSQL:\nhttp://www.mysql.com/doc/en/MySQL-PostgreSQL_features.html\n\n\"MySQL Server works better on Windows than PostgreSQL does. MySQL Server \nruns as a native Windows application (a service on NT/2000/XP), while \nPostgreSQL is run under the Cygwin emulation.\"\n\nThat seems pretty straightforward.\n\nRegards and best wishes,\n\nJustin Clift\n\n\n> Regards,\n> \tJeff Davis\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n- Indira Gandhi\n\n", "msg_date": "Fri, 31 Jan 2003 18:19:23 +1030", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System" }, { "msg_contents": "Man, I go away for one day, and look what you guys get into. :-)\n\nLet me shoot out some comments on this.\n\nFirst, clearly the Win32 port is going to have more port-specific code\npaths than any other port, so it is going to require extra testing even\nif it wasn't our first non-Unix port. You can expect it to take some\nextra effort even after the port has stabilized because when we add\nsomething that works only on Unix, we will need to code some workaround\nin Win32.\n\nSecond, there are going to be new error cases on this platform that we\ncan't anticipate, and some of that isn't going to show until we get it\nreleased. Documenting those pitfalls, like only using NTFS, is a good\nstart.\n\nThird, I suspect folks running Win32 aren't as particular about\nstability/reliability, or they would have left MS products already.\n\nFourth, some say Win32 isn't an acceptable platform. It may or may not\nbe for specific people, but Linux may be an unacceptable platform for\nsome people too. I don't think we can second guess the users. We will\ndo our best and see how it goes.\n\nAlso, I have heard from several people that XP is the first OS MS got\nright. That may or may not be true, but some feel things are getting\nbetter. It is all a continuum with these OS's. Some are great, some\nmediocre, some really bad, but people make decisions and choose bad ones\nall the time. PostgreSQL just needs to be there, if only to migrate\nthem to a better platform later. If we aren't there, we can't show them\nhow good we are.\n\nAs for build environment, we have two audiences --- those using\nbinaries, and those compiling from source. Clearly we are going to have\nmore binary users vs. 
source users on Win32 than on any other platform,\nso at this stage I think making thing easier for the majority of our\nUnix developers is the priority, meaning we should use our existing\nMakefiles and cygwin to compile. Later, if things warrant it, we can do\nVC++ project files somehow.\n\nLastly, SRA just released _today_ their first Win32 port of PostgreSQL,\nand it is _threaded_:\n\n\thttp://osb.sra.co.jp/PowerGres/\n\nNow, that's a port!\n\nAlso, when I am back home for an extended period starting in March, I\nwill going through Jan's patch (if no one does it first) and\nsubmit/apply it in pieces that address specific Win32 issues, like path\nnames or carriage returns. Once those are in, we can look at the more\ncomplex issues of build handling.\n\nSo, as far as I am concerned, we will have a Win32 port in 7.4. It will\nnot be perfect, but it will be as good as we can do. We are also\ngetting point-in-time recovery in 7.4, so that may help us with Win32\nport failures too.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 31 Jan 2003 03:21:19 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Windows Build System - My final thoughts" }, { "msg_contents": "Tom Lane wrote:\n> But this only wins if a child process inheriting an open file also\n> inherits copies of any locks held by the parent. If not, then the\n> issue is moot. Anybody have any idea if file locks work that way?\n> Is it portable??\n\nAn alternate way might be to use semaphores, but I can't see how to do\nthat using the standard PGSemaphores implementation: it appears to\ndepend on cooperating processes inheriting a copy of the postmaster's\nheap.\n\nAnd since the POSIX semaphores default to unnamed ones, it appears\nthis idea is also a dead end unless my impressions are dead wrong...\n\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Fri, 31 Jan 2003 00:46:39 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On file locking" }, { "msg_contents": "On Friday 31 January 2003 05:08, Tom Lane wrote:\n> Jan Wieck <[email protected]> writes:\n>\n> > And what about MySQL?\n>\n> What about it? Someone claimed in this thread that MySQL's Windows port\n> requires Cygwin. Is that true or not?\n\nFor reference, from the INSTALL-SOURCE file included in \nthe MySQL sources which I have lying about [*]:\n\n[*] danged legacy applications ;-)\n\n--QUOTE START--\n\nWindows Source Distribution\n---------------------------\n\nYou will need the following:\n\n * VC++ 6.0 compiler (updated with 4 or 5 SP and Pre-processor\n package) The Pre-processor package is necessary for the macro\n assembler. More details at:\n `http://msdn.microsoft.com/vstudio/sp/vs6sp5/faq.asp'.\n\n * The MySQL source distribution for Windows, which can be downloaded\n from `http://www.mysql.com/downloads/'.\n\nBuilding MySQL\n\n 1. Create a work directory (e.g., workdir).\n\n 2. Unpack the source distribution in the aforementioned directory.\n\n 3. Start the VC++ 6.0 compiler.\n\n 4. In the `File' menu, select `Open Workspace'.\n\n 5. Open the `mysql.dsw' workspace you find on the work directory.\n\n 6. From the `Build' menu, select the `Set Active Configuration' menu.\n\n 7. Click over the screen selecting `mysqld - Win32 Debug' and click\n OK.\n\n 8. 
Press `F7' to begin the build of the debug server, libs, and some\n client applications.\n\n 9. When the compilation finishes, copy the libs and the executables\n to a separate directory.\n\n 10. Compile the release versions that you want, in the same way.\n\n 11. Create the directory for the MySQL stuff: e.g., `c:\\mysql'\n\n 12. From the workdir directory copy for the c:\\mysql directory the\n following directories:\n\n * Data\n\n * Docs\n\n * Share\n\n 13. Create the directory `c:\\mysql\\bin' and copy all the servers and\n clients that you compiled previously.\n\n 14. If you want, also create the `lib' directory and copy the libs\n that you compiled previously.\n\n 15. Do a clean using Visual Studio.\n\nSet up and start the server in the same way as for the binary Windows\ndistribution. *Note Windows prepare environment::.\n\n--QUOTE END--\n\nIan Barwick\[email protected]\n\n", "msg_date": "Fri, 31 Jan 2003 09:55:09 +0100", "msg_from": "Ian Barwick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System" }, { "msg_contents": "On Thu, 2003-01-30 at 15:56, Tom Lane wrote:\n> The reason the TIP is\n> still there is that there are platforms on which that stuff doesn't work\n> very nicely. It's better to let the postmaster exit cleanly so that\n> that state gets cleaned up. I have no idea what the comparable issues\n> are for a native Windows port, but I bet there are some...\n\nThat's why I proposed an automated test for this too. It is mostly\nimportant when conquering new OS'es, but could also be nice to have when\ntesting if changes to storage manager or some other important subsystem\nwill break anything.\n\n> \t\t\tregards, tom lane\n-- \nHannu Krosing <[email protected]>\n", "msg_date": "31 Jan 2003 11:42:58 +0000", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System" }, { "msg_contents": "\n> But this only wins if a child process inheriting an open file also\n> inherits copies of any locks held by the parent. If not, then the\n> issue is moot. Anybody have any idea if file locks work that way?\n> Is it portable??\n\n From RedHat 8.0 manages fork(2):\n\nSYNOPSIS\n #include <sys/types.h>\n #include <unistd.h>\n\n pid_t fork(void);\n\nDESCRIPTION\n fork creates a child process that differs from the parent process only\n in its PID and PPID, and in the fact that resource utilizations are set\n to 0. File locks and pending signals are not inherited.\n ^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^\n\nAnd from SunOS 5.8 flock\n Locks are on files, not file descriptors. That is, file\n descriptors duplicated through dup(2) or fork(2) do not\n result in multiple instances of a lock, but rather multiple\n references to a single lock. If a process holding a lock on\n a file forks and the child explicitly unlocks the file, the\n parent will lose its lock. 
Locks are not inherited by a\n      child process.\n\nIf I understand correctly it says that if parent dies, file is unlocked no\nmatter if there's children still running?\n\n-- \nAntti Haapala\n\n", "msg_date": "Fri, 31 Jan 2003 14:33:39 +0200 (EET)", "msg_from": "Antti Haapala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On file locking " }, { "msg_contents": "On Thu, 2003-01-30 at 20:29, Tom Lane wrote:\n> Lamar Owen <[email protected]> writes:\n> > While I understand (and agree with) your (and Vince's) reasoning on why \n> > Windows should be considered less reliable, neither of you have provided a \n> > sound technical basis for why we should not hold the other ports to the same \n> > standards.\n> \n> The point here is that Windows is virgin territory for us. We know\n> about Unix. When we port to a new Unix variant, we are dealing with the\n> same system APIs, and in many cases large chunks of the same system\n> code, that we've dealt with before. It's reasonable for us to have\n> confidence that Postgres will work the same on such a platform as it\n> does on other Unix variants. And the track record of reliability that\n> we have built up across a bunch of Unix variants gives us\n> cross-pollinating confidence in all of them.\n> \n> Windows shares none of that heritage. It is the first truly new port,\n> onto a system without any Unix background, that we have ever done AFAIK.\n\nI don't know how much Unix background BeOS has. It does have a better \nPOSIX support than Win32, but I don't know how much of it is really from\nUnix.\n\n> Claiming that it doesn't require an increased level of testing is\n> somewhere between ridiculous and irresponsible.\n\nWe should have at least _some_ platforms (besides Win32) that we could\nclaim to have run thorough tests on. \n\nI suspect that RedHat does some (perhaps even severe) testing for\nRHAS/RHDB, but I don't know of any other thorough testing. \n\nOr should reliability testing actually be something left for commercial\nentities ? \n\n> > I believe we should test every release as pathologically as Vince \n> > has stated for Win32.\n> \n> Great, go to it. That does not alter the fact that today, with our\n> existing port history, Windows has to be treated with extra suspicion.\n\nI don't think that the pull-the-plug scenario happens enough in the wild\nthat even our seven-year track record can prove anything conclusive about\nthe reliability. I have not found instructions about providing that kind\nof reliability in the docs either - things like what filesystems to use\non what OSes and with which mount options. \n\nWe just mention -f as a way to get non-reliable system ;)\n\n> I do not buy the argument you are making that we should treat all\n> platforms alike. If we had a ten-year-old Windows port, we could\n> consider it as stable as all our other ten-year-old Unix ports.\n> We don't. Given that we don't have infinite resources for testing,\n> it's simple rationality to put more testing emphasis on the places\n> that we suspect there will be problems. And if you don't suspect\n> there will be problems on Windows, you are being way too naive :-(\n\n\"We\" don't have that old windows port, but I guess that there are native\nwindows ports at least a few years old.\n\n> > Do we want to encourage Win32? 
(some obviously do, but I don't) Well, telling \n> > people that we have tested PostgreSQL on Win32 much more thoroughly than on \n> > Unix is in a way telling them that we think it is _better_ than the \n> > time-tested Unix ports ('It passed a harder test on Win32. Are we afraid the \n> > Unix ports won't pass those same tests?').\n> \n> If it passes the tests, good for it. I honestly do not expect that it\n> will. My take on this is that we want to be able to document the\n> problems in advance, rather than be blindsided.\n\nWhere can I read such documentations for *nix ports ?\n\nWhat I have read in this list is that losing different voltages in wrong\norder can just write over any sectors on a disk, and that power-cycling\ncan blow up computers. I don't expect even Unix to survive that!\n\n-- \nHannu Krosing <[email protected]>\n", "msg_date": "31 Jan 2003 12:44:32 +0000", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System" }, { "msg_contents": "> Jan Wieck <[email protected]> writes:\n> > Assuming all your assumptions are right, why the hell is Oracle's and MS\n> > SQL-Server's reputation that bloody good?\n> \n> They have marketing departments.\n\n... As well as sizable systems integration departments devoted to the \nplatforms in question. PostgreSQL doesn't have the latter, although the \nrecent efforts make a move towards it.\n\n> > And what about MySQL?\n> \n> What about it? Someone claimed in this thread that MySQL's Windows port\n> requires Cygwin. Is that true or not?\n\nhttp://www.mysql.com/downloads/mysql-3.23.html\n\"Windows downloads\n\nThe Windows binaries use the Cygwin library. Source code for the version of \nCygwin we have used is available on this page.\"\nhttp://www.mysql.com/downloads/cygwin.html\n--\n(reverse (concatenate 'string \"gro.gultn@\" \"enworbbc\"))\nhttp://www.ntlug.org/~cbbrowne/spiritual.html\n\"When you have eliminated the impossible, whatever remains, however\nimprobable, must be the truth.\" -- Sir Arthur Conan Doyle (1859-1930),\nEnglish author. Sherlock Holmes, in The Sign of Four, ch. 6 (1889).\n[...but see the Holmesian Fallacy, due to Bob Frankston...\n<http://www.frankston.com/public/Essays/Holmesian%20Fallacy.asp>]\n\n\n", "msg_date": "Fri, 31 Jan 2003 08:12:14 -0500", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System " }, { "msg_contents": "Jeff Davis wrote:\n> > What about it? Someone claimed in this thread that MySQL's Windows port\n> > requires Cygwin. Is that true or not?\n> \n> It's been a while, but I know I've installed MySQL on windows without any \n> separate step of installing Cygwin (I can't say 100% for sure that it didn't \n> install some part of Cygwin transparently to me).\n\nThat may have involved \"not being sufficiently observant,\" because the company \nquite clearly documents Cygwin as a dependancy.\nhttp://www.mysql.com/downloads/cygwin.html\n--\noutput = (\"aa454\" \"@freenet.carleton.ca\")\nhttp://www3.sympatico.ca/cbbrowne/linuxxian.html\nChange is inevitable, except from a vending machine. \n\n\n", "msg_date": "Fri, 31 Jan 2003 08:16:24 -0500", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System " }, { "msg_contents": "> Jeff Davis wrote:\n> >>What about it? Someone claimed in this thread that MySQL's Windows port\n> >>requires Cygwin. 
Is that true or not?\n> > \n> > It's been a while, but I know I've installed MySQL on windows without any \n> > separate step of installing Cygwin (I can't say 100% for sure that it didn'\nt \n> > install some part of Cygwin transparently to me).\n> \n> From the MySQL site's page about MySQL vs PostgreSQL:\n> http://www.mysql.com/doc/en/MySQL-PostgreSQL_features.html\n> \n> \"MySQL Server works better on Windows than PostgreSQL does. MySQL Server \n> runs as a native Windows application (a service on NT/2000/XP), while \n> PostgreSQL is run under the Cygwin emulation.\"\n> \n> That seems pretty straightforward.\n\nBut it's not /nearly/ that straightforward.\n\nIf you look at the downloads that MySQL AB provides, they point you to a link \nthat says \"Windows binaries use the Cygwin library.\"\n\nWhich apparently means that this \"feature\" is not actually a feature. Unlike \nPostgreSQL, which \"is run under the Cygwin emulation,\" MySQL runs as a native \nWindows application (with Cygwin emulation). Apparently those are not at all \nthe same thing, even though they are both using Cygwin...\n--\nIf this was helpful, <http://svcs.affero.net/rm.php?r=cbbrowne> rate me\nhttp://cbbrowne.com/info/linuxdistributions.html\n(1) Sigs are preceded by the \"sigdashes\" line, ie \"\\n-- \\n\" (dash-dash-space).\n(2) Sigs contain at least the name and address of the sender in the first line.\n(3) Sigs are at most four lines and at most eighty characters per line.\n\n\n", "msg_date": "Fri, 31 Jan 2003 08:22:47 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System " }, { "msg_contents": "Christopher Browne wrote:\n<snip>\n>> From the MySQL site's page about MySQL vs PostgreSQL:\n>>http://www.mysql.com/doc/en/MySQL-PostgreSQL_features.html\n>>\n>>\"MySQL Server works better on Windows than PostgreSQL does. MySQL Server \n>>runs as a native Windows application (a service on NT/2000/XP), while \n>>PostgreSQL is run under the Cygwin emulation.\"\n>>\n>>That seems pretty straightforward.\n> \n> But it's not /nearly/ that straightforward.\n> \n> If you look at the downloads that MySQL AB provides, they point you to a link \n> that says \"Windows binaries use the Cygwin library.\"\n> \n> Which apparently means that this \"feature\" is not actually a feature. Unlike \n> PostgreSQL, which \"is run under the Cygwin emulation,\" MySQL runs as a native \n> Windows application (with Cygwin emulation). Apparently those are not at all \n> the same thing, even though they are both using Cygwin...\n\nHmm... wonder if they're meaning that MySQL compiles and executes as a \nTrue native windows application (skipping any unix compatibility calls), \nand it's just some of the support utils that use cygwin, or if they're \ntrying to say that PostgreSQL has to operate entirely in the cygwin \nenvironment, whereas they don't?\n\nRegards and best wishes,\n\nJustin Clift\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n- Indira Gandhi\n\n", "msg_date": "Sat, 01 Feb 2003 00:22:00 +1030", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System" }, { "msg_contents": "On Thu, 2003-01-30 at 16:01, Vince Vielhaber wrote:\n> \n> Dave, Lamar and Katie can cheer now 'cuze this is the last comment\n> I'm going to make on this. 
All others will be ignored, probably.\n> \n> The one thing I haven't seen from Dave, Lamar or Katie on this is\n> reputation. You're all for the PostgreSQL name going on it but I\n> have yet to see any of you so sure of yourselves that you'd put\n> your own name on it. The license allows it. Red Hat did it. I\n> see no \"PageSQL\" or \"KatieSQL\" or even an \"Oh-Win SQL\" being offered\n> up. Yet all three of you are advocating that the PostgreSQL stamp\n> of approval should be immediately placed on it (ok, Lamar may not\n> be as in favor as the Dave and Katie).\n> \n\nOh-win SQL! Man that was great :-) If only all of your posts were so\nwitty...\n\n> Without documented testing and sufficient warnings until enough\n> history is banked, I don't think a native windows port should be\n> given any kind of seal of approval. After that, what about keeping\n> the code current? In a year or so will it suffer from bit-rot and\n> be the source of complaints? Are there going to be security concerns\n> surrounding it? Is there going to be a bunch of scrambling going on\n> to put out a patch when the latest active-x bug hoses the data dir?\n> \n\nWe already support postgresql on cygwin, and we know that's crap. Having\na native emulation can only improve that situation, so I don't see any\nreason not to move in that direction. All of this \"stamp of approval\"\ntalk is really pointless at this juncture; no matter how much testing\nhas been done, none of it means a lick until the code is integrated into\nthe 7.4 branch. In the mean time, if some of the unix oriented guys want\nto devise a suggested test plan that can be used to determine if we are\ngoing to call the native windows support \"production grade\" or merely a\nvast improvement over the cygwin developers version, well I bet the\nwindows folks would appreciate that. Even more so if someone runs those\ntests against a linux box so that we have actual statistics to compare\nagainst. \n\nRobert Treat\n\n", "msg_date": "31 Jan 2003 09:48:25 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Windows Build System - My final thoughts" }, { "msg_contents": "Antti Haapala <[email protected]> writes:\n> And from SunOS 5.8 flock\n> Locks are on files, not file descriptors. That is, file\n> descriptors duplicated through dup(2) or fork(2) do not\n> result in multiple instances of a lock, but rather multiple\n> references to a single lock. If a process holding a lock on\n> a file forks and the child explicitly unlocks the file, the\n> parent will lose its lock. Locks are not inherited by a\n> child process.\n\nThat seems self-contradictory. If the fork results in multiple\nreferences to the open file, then I should think that if the parent\ndies but the child still holds the file open, then the lock still\nexists. Seems that some experimentation is called for ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 31 Jan 2003 10:34:52 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On file locking " }, { "msg_contents": "On Fri, 2003-01-31 at 07:22, Christopher Browne wrote:\n> But it's not /nearly/ that straightforward.\n> \n> If you look at the downloads that MySQL AB provides, they point you to a link \n> that says \"Windows binaries use the Cygwin library.\"\n> \n> Which apparently means that this \"feature\" is not actually a feature. 
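\n\nPicking up the lock-inheritance question from a few messages back (\"seems that some experimentation is called for\"): a throwaway test along the following lines answers it for any one platform. This is only a sketch, and only for fcntl()-style locks; flock() can behave differently, which is rather the point of testing both.\n\n#include <sys/types.h>\n#include <fcntl.h>\n#include <stdio.h>\n#include <unistd.h>\n\n/*\n * The parent locks a scratch file and exits right after forking.\n * The child, which inherited the open descriptor, then asks whether\n * any conflicting lock is still present.\n */\nint\nmain(void)\n{\n    struct flock lk;\n    pid_t pid;\n    int fd;\n\n    fd = open(\"/tmp/lock-test\", O_RDWR | O_CREAT, 0600);\n    if (fd < 0)\n    {\n        perror(\"open\");\n        return 1;\n    }\n\n    lk.l_type = F_WRLCK;\n    lk.l_whence = SEEK_SET;\n    lk.l_start = 0;\n    lk.l_len = 0;\n    if (fcntl(fd, F_SETLK, &lk) < 0)\n    {\n        perror(\"F_SETLK\");\n        return 1;\n    }\n\n    pid = fork();\n    if (pid == 0)\n    {\n        sleep(2);               /* give the parent time to die */\n        lk.l_type = F_WRLCK;    /* would this lock conflict?   */\n        fcntl(fd, F_GETLK, &lk);\n        if (lk.l_type == F_UNLCK)\n            puts(\"child: no lock left -- it died with the parent\");\n        else\n            printf(\"child: lock still held by pid %d\", (int) lk.l_pid);\n        return 0;\n    }\n    return 0;                   /* parent exits at once */\n}\n\nOn a system with textbook fcntl() semantics the child should report that the lock vanished with the parent, which would be bad news for the inherited-descriptor interlock; flock()-style locks ride on the open file itself and may well survive, so both variants are worth trying.\n\n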
Unlike \n> PostgreSQL, which \"is run under the Cygwin emulation,\" MySQL runs as a native \n> Windows application (with Cygwin emulation). Apparently those are not at all \n> the same thing, even though they are both using Cygwin...\n\nI'm confused as to whether you are being sarcastic or truly seem to\nthink there is a distinction here. Simple question, does MySQL require\nthe cygwin dll's (or statically linked to) to run?\n\nIf the answer is yes, then there is little question that they are as\n\"emulated\" as is the current PostgreSQL/Win32 effort.\n\nCare to expand on exactly what you believe the distinction is? ...or\ndid I miss the humor boat? :(\n\n\nRegards,\n\n-- \nGreg Copeland <[email protected]>\nCopeland Computer Consulting\n\n", "msg_date": "31 Jan 2003 12:20:00 -0600", "msg_from": "Greg Copeland <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System" }, { "msg_contents": "\n----- Original Message -----\nFrom: \"Greg Copeland\" <[email protected]>\n> I'm confused as to whether you are being sarcastic or truly seem to\n> think there is a distinction here. Simple question, does MySQL require\n> the cygwin dll's (or statically linked to) to run?\n>\n> If the answer is yes, then there is little question that they are as\n> \"emulated\" as is the current PostgreSQL/Win32 effort.\n>\n> Care to expand on exactly what you believe the distinction is? ...or\n> did I miss the humor boat? :(\n\nI just installed it (their latest gama), to see what was there (and\nuninstalled it straight away ;-). There was a cygwinb19.dll (I think that's\nwhat it was called) installed.\n\nIn any case, if we are talking about \"industrial strength\", is this the\ncomparison we should be using? ;-)\n\nandrew\n\n", "msg_date": "Fri, 31 Jan 2003 14:08:04 -0500", "msg_from": "\"Andrew Dunstan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System" }, { "msg_contents": "Christopher Browne wrote:\n> <snip>\n> >> From the MySQL site's page about MySQL vs PostgreSQL: \n> >>http://www.mysql.com/doc/en/MySQL-PostgreSQL_features.html\n> >>\n> >>\"MySQL Server works better on Windows than PostgreSQL does. MySQL \n> >>Server runs as a native Windows application (a service on \n> >>NT/2000/XP), while PostgreSQL is run under the Cygwin emulation.\"\n> >>\n> >>That seems pretty straightforward.\n> > \n> > But it's not /nearly/ that straightforward.\n> > \n> > If you look at the downloads that MySQL AB provides, they \n> point you to a link that says \"Windows binaries use the Cygwin\nlibrary.\"\n> > \n> > Which apparently means that this \"feature\" is not actually \n> a feature. \n> > Unlike\n> > PostgreSQL, which \"is run under the Cygwin emulation,\" \n> MySQL runs as a native \n> > Windows application (with Cygwin emulation). Apparently \n> those are not at all \n> > the same thing, even though they are both using Cygwin...\n\nJustin Clift replied:\n> Hmm... wonder if they're meaning that MySQL compiles and \n> executes as a True native windows application (skipping any unix \n> compatibility calls), and it's just some of the support utils that\n> use cygwin, or if they're trying to say that PostgreSQL has to\n> operate entirely in the cygwin environment, whereas they don't?\n\nI just downloaded the latest productin source (3.3.55) and it appears to\nme that:\n\n1) It uses Cygwin emulation via a dll.\n\n2) It uses Visual Studio C++ 6.0 for the primary build environment. 
It\ncompiles out of the box without having to learn Unix-style build\nsystems, config, make, etc. No warnings, no errors, it just builds out\nof the box. If I did not have a lot of experience building databases I\ncertainly would have found their support for Windows compelling. This is\na big reason why they are #1.\n\n3) The statement by the MySQL folks above that MySQL runs as a native\nWindows application (a service on NT/2000/XP) is indicative of why MySQL\nis kicking PostgreSQL's butt in terms of popularity. It is \"marketing\nspeak\" at its best. It is technically true, MySQL runs as a service. As\nChristopher Browne points out, they still use the Cygwin Emulation\nlayer. The statement is misleading, however, as it implies that they\ndon't use any emulation but they do.\n\nThe salient points:\n \n a) Running as a service is important as this the way NT/2000\nadministrators manage server tasks. The fact that PostgreSQL's Cygwin\nemulation doesn't do this is very indicative of inferior Windows\nsupport.\n\n b) MySQL recognizes that the important issue is to appear to be a\nwell supported Windows application rather than to actually be one.\n\n c) It is probably much easier to add the support for running as an NT\nservice than it is to write a true native port with no Cygwin\ndependency. NT Service support is basically a single funtion wrapper for\ncertain API calls (startup, shutdown, etc.) that enable the Windows\nadministration tools to deal with all servers in a similar manner. \n\nThey have worked on that which makes them look better, makes their\nprospective customers happier, and makes it easier to support. Exactly\nwhat any good product development organization that listens to their\ncustomers would have done.\n\n<flame on>\nIMHO, PostgreSQL will never have the same level of use in the field as\nMySQL currently does as long as there is the kind \"head in the sand\"\nattitude about Windows that I've seen here on the hackers list,\nespecially as evidenced by the recent outright attacks against those who\nare simply trying to port PostgreSQL to the largest platform out there\ntoday.\n\nThere have been some very legitimate points about Windows being a new\nplatform, one that will likely see a lot of users, and therefore one\nthat should be more thoroughly tested before release than the typical\nport to another flavor of *nix. \n\nHowever, the way the conversation started reminds me of some of the chat\ndiscussions I've seen between young teens.\n\nI was a Mac developer way, way back and long ago realized that the best\noften loses and that better marketing beats better engineering every\nsingle time.\n<\\flame off>\n\nDISCLAIMER: I hate Microsoft and Windows drives me nuts.\n\n- Curtis\n\n\n\n\n\n", "msg_date": "Fri, 31 Jan 2003 15:18:29 -0400", "msg_from": "\"Curtis Faith\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System" }, { "msg_contents": "On Mon, Jan 13, 2003 at 07:31:08PM +1100, Giles Lean wrote:\n> \n> Is the \"Single Unix Standard, version 2\" (aka UNIX98) any better?\n> It says for fsync():\n> \n> \"The fsync() function forces all currently queued I/O operations\n> associated with the file indicated by file descriptor fildes to\n> the synchronised I/O completion state. 
All I/O operations are\n> completed as defined for synchronised I/O file integrity\n> completion.\"\n\nIn version 3 it says:\n\n The fsync() function shall request that all data for the open file\n descriptor named by fildes is to be transferred to the storage\n device associated with the file described by fildes in an\n implementation-defined manner. The fsync() function shall not\n return until the system has completed that action or until an error\n is detected.\n\n [SIO] [Option Start] If _POSIX_SYNCHRONIZED_IO is defined, the\n fsync() function shall force all currently queued I/O operations\n associated with the file indicated by file descriptor fildes to the\n synchronized I/O completion state. All I/O operations shall be\n completed as defined for synchronized I/O file integrity\n completion. [Option End]\n\n\nKurt\n\n", "msg_date": "Fri, 31 Jan 2003 22:11:47 +0100", "msg_from": "Kurt Roeckx <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sync()" }, { "msg_contents": "> On Fri, 2003-01-31 at 07:22, Christopher Browne wrote:\n> > But it's not /nearly/ that straightforward.\n\n>> If you look at the downloads that MySQL AB provides, they point you\n>> to a link that says \"Windows binaries use the Cygwin library.\"\n\n>> Which apparently means that this \"feature\" is not actually a feature.\n>> Unlike PostgreSQL, which \"is run under the Cygwin emulation,\" MySQL\n>> runs as a native Windows application (with Cygwin emulation).\n>> Apparently those are not at all the same thing, even though they are\n>> both using Cygwin...\n \n> I'm confused as to whether you are being sarcastic or truly seem to\n> think there is a distinction here. Simple question, does MySQL require\n> the cygwin dll's (or statically linked to) to run?\n\nI don't know if there's a distinction; read in whatever sarcasm is\ndeserved by the reality of things.\n\n> If the answer is yes, then there is little question that they are as\n> \"emulated\" as is the current PostgreSQL/Win32 effort.\n\nJust so. If the answer is yes, then the MySQL folk are claiming an\nadvantage that has no reality to it, in effect, \"We aren't using Cygwin\nemulation, so we're better... (Whoops, we're actually /using/ Cygwin\nemulation.)\n\n> Care to expand on exactly what you believe the distinction is? ...or\n> did I miss the humor boat? :(\n\nI'm making the generous assumption that since /they/ claim that there is\nsome distinction, that there perhaps is one.\n--\n(concatenate 'string \"cbbrowne\" \"@cbbrowne.com\")\nhttp://cbbrowne.com/info/oses.html\n\"All language designers are arrogant. Goes with the territory...\" \n-- Larry Wall\n", "msg_date": "Fri, 31 Jan 2003 17:07:34 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System " }, { "msg_contents": "On Friday 31 January 2003 03:21, Bruce Momjian wrote:\n> Man, I go away for one day, and look what you guys get into. :-)\n\nNo duh. 
Whew.\n\n> Lastly, SRA just released _today_ their first Win32 port of PostgreSQL,\n> and it is _threaded_:\n\n> \thttp://osb.sra.co.jp/PowerGres/\n\nIs there an English translation of the site so one who doesn't speak or write \nJapanese can try it out?\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n\n", "msg_date": "Fri, 31 Jan 2003 19:26:27 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Windows Build System - My final thoughts" }, { "msg_contents": "On Fri, 31 Jan 2003, Shridhar Daithankar<[email protected]> wrote:\n\n> Besides file locking is implemented using setgid bit on most unices. And\n> everybody is free to do what he/she thinks right with it.\n\nI don't believe it's implemented with the setgid bit on most Unices. As\nI recall, it's certainly not on Xenix, SCO Unix, any of the BSDs, Linux,\nSunOS, Solaris, and Tru64 Unix.\n\n(I'm talking about the flock system call, here.)\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n", "msg_date": "Sat, 1 Feb 2003 09:26:37 +0900 (JST)", "msg_from": "Curt Sampson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On file locking" }, { "msg_contents": "Curtis Faith writes:\n\n> a) Running as a service is important as this the way NT/2000\n> administrators manage server tasks. The fact that PostgreSQL's Cygwin\n> emulation doesn't do this is very indicative of inferior Windows\n> support.\n\nNo, it is indicative of the inability to read the documentation.\nPostgreSQL on Cygwin runs as a service if and only if you ask it to.\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Sat, 1 Feb 2003 02:06:48 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System" }, { "msg_contents": "> As for build environment, we have two audiences --- those using\n> binaries, and those compiling from source. Clearly we are going to have\n> more binary users vs. source users on Win32 than on any other platform,\n> so at this stage I think making thing easier for the majority of our\n> Unix developers is the priority, meaning we should use our existing\n> Makefiles and cygwin to compile. Later, if things warrant it, we can do\n> VC++ project files somehow.\n\nI'm ignorant when it comes to build environments on windows, but I was under \nthe impression that DJGPP was mostly a complete environment. Are there any \nplans to support it, or is it even possible?\n\n> So, as far as I am concerned, we will have a Win32 port in 7.4. It will\n> not be perfect, but it will be as good as we can do. We are also\n> getting point-in-time recovery in 7.4, so that may help us with Win32\n> port failures too.\n\nInteresting consolation :)\n\nRegards,\n\tJeff Davis\n\n\n", "msg_date": "Fri, 31 Jan 2003 17:35:54 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Windows Build System - My final thoughts" }, { "msg_contents": "Bruce Momjian wrote:\n<snip>\n> \n> So, as far as I am concerned, we will have a Win32 port in 7.4. It will\n> not be perfect, but it will be as good as we can do. 
We are also\n> getting point-in-time recovery in 7.4, so that may help us with Win32\n> port failures too.\n\nIf anyone's interested, the \"PostgreSQL 7.3.1 Proof of Concept for \nWindows Alpha 1\" (yes the warnings are even built into the name) \neasy-installer that I whipped up using Inno Setup was quietly uploaded \nto the pgsql project on Sourceforge the other night. It's using \nPostgreSQL + cygwin, pretty much stock standard but pre-installed and \nwrapped up into a single installable.\n\nAs an indicater, having made no release annoucement, and only having put \na one paragraph small mention with a link to it on the Techdocs \n\"Installing On Windows\" page (with warnings), over 1,600 people \ndownloaded it in the first 24 hours (that's about 17.1 GB of bandwidth).\n\nThis was just a version so that I could practise some windows packaging \nand see what kind of things we'd need to address. Dave has already \npointed out that we're probably going to need to do this so it can be \nmade into a \"Merge Module\" and other things.\n\nA couple of bits of interest turned up whilst packaging:\n\n + There are unix command line tools that PostgreSQL relies on. For \nexample, when running initdb, it errors out if some tools aren't \npresent. i.e. sed, grep, ash (cygwin's \"/bin/sh\"), and from memory a \nfew others\n\n\n + GPL licensing issues. Am trying to get my head around the \nimplications - with regards to licensing - if we released a proper \nversion with some of the cygwin tools included... i.e. grep, sed, etc. \nDon't think that places could use it embedded with their products and \nnot at least have source available, but still haven't totally grokked \nthis all completely yet. Not going to commit any code to the GBorg \nproject that was setup the other day until this is sorted out. \nPostgreSQL 7.4 on Win32 should be properly BSD too.\n\n\n + Aside from all this, it might be nice to have a few Win32 specific \ngui pieces in place at the time that PostgreSQL 7.4 Win32 is released. \nAm sure they'll develop over time, but was thinking we should at least \nmake a good impression with the first release. Hey, if we make a really \nbad impression with the first release, then there might not be the \nquadruple-zillion Windows PG users after all. If that sounds like a \ngood idea, maybe adding the GUC variables \"random_query_delay\" \n(minutes), \"crash_how_often\" (seconds), and \"reboot_plus_corrupt_please\" \n(true/false)?\n\nRegards and best wishes,\n\nJustin Clift\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n- Indira Gandhi\n\n", "msg_date": "Sat, 01 Feb 2003 15:38:31 +1030", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Windows Build System - My final thoughts" }, { "msg_contents": "On Sat, 1 Feb 2003, Peter Eisentraut wrote:\n\n> Curtis Faith writes:\n>\n> > a) Running as a service is important as this the way NT/2000\n> > administrators manage server tasks. 
The fact that PostgreSQL's Cygwin\n> > emulation doesn't do this is very indicative of inferior Windows\n> > support.\n>\n> No, it is indicative of the inability to read the documentation.\n> PostgreSQL on Cygwin runs as a service if and only if you ask it to.\n\nI would say that not supporting those who have an inability to read\ndocumentation would count as \"inferior Windows support.\" :-)\n\nWhat I'm hearing here is that all we really need to do to \"compete\" with\nMySQL on Windows is to make the UI a bit slicker. So what's the problem\nwith someone building, for each release, a set of appropriate binaries, and\nsomeone making a slick install program that will install postgres,\ninstall parts of cygwin if necessary, and set up postgres as a service?\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n", "msg_date": "Sat, 1 Feb 2003 15:06:29 +0900 (JST)", "msg_from": "Curt Sampson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System" }, { "msg_contents": "On Fri, 31 Jan 2003, Tom Lane wrote:\n\n> Antti Haapala <[email protected]> writes:\n> > And from SunOS 5.8 flock\n> > Locks are on files, not file descriptors. That is, file\n> > descriptors duplicated through dup(2) or fork(2) do not\n> > result in multiple instances of a lock, but rather multiple\n> > references to a single lock. If a process holding a lock on\n> > a file forks and the child explicitly unlocks the file, the\n> > parent will lose its lock. Locks are not inherited by a\n> > child process.\n>\n> That seems self-contradictory.\n\nYes. I note that in NetBSD, that paragraph of the manual page is\nidentical except that the last sentence has been removed.\n\nAt any rate, it seems to me highly unlikely that, since the child has\nthe *same* descriptor as the parent had, that the lock would disappear.\n\nThe other option would be that the lock belongs to the process, in which\ncase one would think that a child doing an unlock should not affect the\nparent, because it's a different process....\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n", "msg_date": "Sat, 1 Feb 2003 15:11:28 +0900 (JST)", "msg_from": "Curt Sampson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On file locking " }, { "msg_contents": "On Fri, 2003-01-31 at 16:07, Christopher Browne wrote:\n\n> I'm making the generous assumption that since /they/ claim that there is\n> some distinction, that there perhaps is one.\n\nI've used the cygwin environment enough to know that there isn't any. \nIf it's linked against the cygwin dll, the application runs in an\n\"emulated unix environment.\" To say it's emulated is really too strong\nbut to say it adds *tons* of overhead certainly won't make you a lair. \n;)\n\n\n-- \nGreg Copeland <[email protected]>\nCopeland Computer Consulting\n\n", "msg_date": "01 Feb 2003 00:33:58 -0600", "msg_from": "Greg Copeland <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System" }, { "msg_contents": "On Saturday 01 February 2003 01:26, Lamar Owen wrote:\n> On Friday 31 January 2003 03:21, Bruce Momjian wrote:\n> > Man, I go away for one day, and look what you guys get into. :-)\n>\n> No duh. 
Whew.\n>\n> > Lastly, SRA just released _today_ their first Win32 port of PostgreSQL,\n> > and it is _threaded_:\n> >\n> > \thttp://osb.sra.co.jp/PowerGres/\n>\n> Is there an English translation of the site so one who doesn't speak or\n> write Japanese can try it out?\n\nCan't see one, but here is a summarized translation of the relevant\nparts as I understand them.\n\nHTH\n\nIan Barwick\[email protected]\n\n\nhttp://osb.sra.co.jp/PowerGres/\n-------------------------------\n\n\"Announcement about Powergres\"\n\n* Release of [ Beta download ] of PowerGres (31.1.2003)\n* [Press release] (27.11.2002)\n\n\nhttp://osb.sra.co.jp/PowerGres/introduction.php\n-----------------------------------------------\n\n\"PowerGres (PostgreSQL on Windows)\"\n\"The standard open source database 'PostgreSQL' on Windows\"\n\nPowerGres is a DBMS which has been developed on the basis\nof PostgreSQL and ported to Windows (2000 / XP).\n\nPowerGres' features:\n Port of Postgres to Windows\n The popular Unix/Linux OS Database \"postgresql\"\n becomes more accessible\n\n Optimised for the Windows environment\n A thread model enabling effective processing of\n multiple transactions is used. This enables\n maximum performance in a windows environment.\n\n Web back end DB at low cost\n There is no limit to the number of users who\n can connect concurrently, making (PowerGres)\n suitable as a low cost web app backend DB\n\n GUI admin tool\n A GUI admin tool is packaged with powergres. This\n enables beginners to perform database management \n visually / per point and click\n \n Japanese manual provided\n (translation of original Postgres manuals)\n\n C, Java Interface spport\n API for C and Java provided\n\n (pretty pictures, presumably of GUI admin tool)\n\n\nhttp://osb.sra.co.jp/PowerGres/function.php\n===========================================\n\n\"Table of PowerGres functions\"\n\n(comparision with \"other DBs for windows\", \nseems a bit pointless, left out)\n\n\nhttp://osb.sra.co.jp/PowerGres/catalog.php\n==========================================\n\n\"Product catalogue\"\n(more: overview)\n\nEnvironment:\n CPU: Pentiium or compat, min 300Mhz\n OS: Windows 2000 (SP2 or later), XP\n Memory: 128MB (rec: 256MB +)\n Drive space: 100MB+\n\nProduct:\n - 1 CD ROM\n - PowerGres installer\n - PowerGres\n . PowerGres GUI admin tol\n - PostgreSQL 7.3 Japanese documentation\n - also:\n - PowerGres handbook\n - user registration\n - misc\n\nInstallation support\n Free support by email and fax for 30 days after registration\n\nPrice\n 48,000 Yen + tax (probably 5% sales tax; we're talking roughly\n total US$ 500 or about the same in Euros)\n Available from March 2003 (scheduled)\n Beta download available from Jan 2003\n\n\nhttp://powergres.sra.co.jp/\n===========================\n\n(Beta dowload)\n\nThankyou for your interest in PowerGres.\nA free beta version of PowerGres is available.\nCurrently 1.0b s available for download.\nIt can be evaluated for 30 days. Please\ndo not hesitate to try before you buy.\nWe cannot offer any support for this software.\nUse at own risk (blah blah). Also , be aware\nthe Beta version has some restrictions /\nlack of features compared to the release version, see\nhere:\n http://osb.sra.co.jp/PowerGres/beta_restriction.php\n\n(list of things, mainly command line tools with certain\n options not working properly)\n\nDOWNLOAD FORM:\n\nName*\nEmail*\nCompany\nDept.\nPostal code*\nAddress*\n\n* required. 
(Note: Japanese postal code are like 111-2222 ).\n\n( There follows a select box clicked by default enabling\nSRA to send you emails... The button is \"Send\".)\n\n(following that, privacy info boilerplate).\n\n\n\nhttp://osb.sra.co.jp/PowerGres/faq.php\n======================================\n\n(not a complete translation, only the interesting points \nfrom this page)\n\n\"FAQ\"\n\n\n- License: one license per server; client software\n is unrestricted ,though no free support available.\n- it is possible to transfer data from a PostgreSQL \n installation to PowerGres, though some\n restrictions apply;\n- restrictions are among others:\n - max simultaneous connections 50 users\n (seems to contradict a previous statement...)\n - User-definable functions only in C, SQL, PL/pgSQL\n - No UNIX domain socket support\n - authentication only trust/reject/md5\n \n\n\n", "msg_date": "Sat, 1 Feb 2003 11:08:53 +0100", "msg_from": "Ian Barwick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Windows Build System - My final thoughts" }, { "msg_contents": "Kurt Roeckx wrote:\n> [SIO] [Option Start] If _POSIX_SYNCHRONIZED_IO is defined, the\n> fsync() function shall force all currently queued I/O operations\n> associated with the file indicated by file descriptor fildes to the\n> synchronized I/O completion state. All I/O operations shall be\n> completed as defined for synchronized I/O file integrity\n> completion. [Option End]\n\nHmmm....so if I consistently want these semantics out of fsync() I\nhave to #define _POSIX_SYNCHRONIZED_IO? Or does the above mean that\nyou'll get those semantics if and only if the OS defines the above for\nyou?\n\nI certainly hope the former is the case, because the newer semantics\nwhich you mentioned in the section I cut don't do us any good at all\nand we can't rely on the OS to define something like\n_POSIX_SYNCHRONIZED_IO for us...\n\nBeing able to open a file, do an fsync(), and have the kernel actually\nwrite all the buffers associated with that file to disk could be, I\nthink, a significant performance win compared with the \"flush\neverything known to the kernel\" approach we take now, at least on\nsystems that do something other than PostgreSQL...\n\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Sat, 1 Feb 2003 08:15:17 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sync()" }, { "msg_contents": "Curt Sampson wrote:\n> On Fri, 31 Jan 2003, Shridhar Daithankar<[email protected]> wrote:\n> \n> > Besides file locking is implemented using setgid bit on most unices. And\n> > everybody is free to do what he/she thinks right with it.\n> \n> I don't believe it's implemented with the setgid bit on most Unices. As\n> I recall, it's certainly not on Xenix, SCO Unix, any of the BSDs, Linux,\n> SunOS, Solaris, and Tru64 Unix.\n> \n> (I'm talking about the flock system call, here.)\n\nLinux, at least, supports mandatory file locks. The Linux kernel\ndocumentation mentions that you're supposed to use fcntl() or lockf()\n(the latter being a library wrapper around the former) to actually\nlock the file but, when those operations are applied to a file that\nhas the setgid bit set but without the group execute bit set, the\nkernel enforces it as a mandatory lock. 
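Very roughly, and only as a sketch (the file name is made up, and if I
remember right the filesystem also has to be mounted with the "mand"
option before the kernel will actually enforce anything):

    #include <fcntl.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Mark a file for mandatory locking (setgid on, group execute off)
     * and take an exclusive fcntl() lock on the whole file.  Returns the
     * open descriptor; closing it releases the lock. */
    int
    lock_file_mandatory(const char *path)
    {
        struct flock fl;
        int fd;

        fd = open(path, O_RDWR | O_CREAT, 0600);
        if (fd < 0)
            return -1;

        if (fchmod(fd, S_IRUSR | S_IWUSR | S_ISGID) < 0)
        {
            close(fd);
            return -1;
        }

        fl.l_type = F_WRLCK;        /* exclusive */
        fl.l_whence = SEEK_SET;
        fl.l_start = 0;
        fl.l_len = 0;               /* zero length means the whole file */

        if (fcntl(fd, F_SETLK, &fl) < 0)
        {
            close(fd);
            return -1;
        }
        return fd;
    }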
That means that operations\nlike open(), read(), and write() initiated by other processes on the\nsame file will block (or return EAGAIN, if O_NONBLOCK was used to open\nit) if that's what the lock on the file calls for.\n\nThat same documentation mentions that locks acquired using flock()\nwill *not* invoke the mandatory lock semantics even if on a file\nmarked for it, so I guess flock() isn't implemented on top of fcntl()\nin Linux.\n\nSo if we wanted to make use of mandatory locks, we'd have to refrain\nfrom using flock().\n\n\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Sat, 1 Feb 2003 08:31:04 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On file locking" }, { "msg_contents": "Kevin Brown <[email protected]> writes:\n> So if we wanted to make use of mandatory locks, we'd have to refrain\n> from using flock().\n\nWe have no need for mandatory locks; the advisory style will do fine.\nThis is true because we have no desire to interoperate with any\nnon-Postgres code ... everyone else is supposed to stay the heck out of\n$PGDATA.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 01 Feb 2003 11:52:28 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On file locking " }, { "msg_contents": "On Sat, Feb 01, 2003 at 08:15:17AM -0800, Kevin Brown wrote:\n> Kurt Roeckx wrote:\n> > [SIO] [Option Start] If _POSIX_SYNCHRONIZED_IO is defined, the\n> > fsync() function shall force all currently queued I/O operations\n> > associated with the file indicated by file descriptor fildes to the\n> > synchronized I/O completion state. All I/O operations shall be\n> > completed as defined for synchronized I/O file integrity\n> > completion. [Option End]\n> \n> Hmmm....so if I consistently want these semantics out of fsync() I\n> have to #define _POSIX_SYNCHRONIZED_IO? Or does the above mean that\n> you'll get those semantics if and only if the OS defines the above for\n> you?\n\nIt's something that will be defined in unistd.h. Depending on\nthe value you know if the system supports it always, you can turn\nit on per application, or it's always on.\n\nYou know that this standard is freely available on internet?\n(http://www.unix-systems.org/version3/online.html)\n\nThere are other comments in about the usage of it.\n\nNote that there also is a function call fdatasync() in the\nSynchronized IO extention.\n\n\nKurt\n\n", "msg_date": "Sat, 1 Feb 2003 17:56:21 +0100", "msg_from": "Kurt Roeckx <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sync()" }, { "msg_contents": "Tom Lane wrote:\n> Kevin Brown <[email protected]> writes:\n> > So if we wanted to make use of mandatory locks, we'd have to refrain\n> > from using flock().\n> \n> We have no need for mandatory locks; the advisory style will do fine.\n> This is true because we have no desire to interoperate with any\n> non-Postgres code ... everyone else is supposed to stay the heck out of\n> $PGDATA.\n\nTrue. But, of course, mandatory locks could be used to *make*\neveryone else stay out of $PGDATA. 
:-)\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Sat, 1 Feb 2003 09:11:45 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On file locking" }, { "msg_contents": "Kurt Roeckx wrote:\n> On Sat, Feb 01, 2003 at 08:15:17AM -0800, Kevin Brown wrote:\n> > Kurt Roeckx wrote:\n> > > [SIO] [Option Start] If _POSIX_SYNCHRONIZED_IO is defined, the\n> > > fsync() function shall force all currently queued I/O operations\n> > > associated with the file indicated by file descriptor fildes to the\n> > > synchronized I/O completion state. All I/O operations shall be\n> > > completed as defined for synchronized I/O file integrity\n> > > completion. [Option End]\n> > \n> > Hmmm....so if I consistently want these semantics out of fsync() I\n> > have to #define _POSIX_SYNCHRONIZED_IO? Or does the above mean that\n> > you'll get those semantics if and only if the OS defines the above for\n> > you?\n> \n> It's something that will be defined in unistd.h. Depending on\n> the value you know if the system supports it always, you can turn\n> it on per application, or it's always on.\n> \n> You know that this standard is freely available on internet?\n> (http://www.unix-systems.org/version3/online.html)\n> \n> There are other comments in about the usage of it.\n> \n> Note that there also is a function call fdatasync() in the\n> Synchronized IO extention.\n\nAh, excellent, thank you. Yes, fdatasync() is *exactly* what we need,\nsince it's defined thusly: \"The functionality shall be equivalent to\nfsync() with the symbol _POSIX_SYNCHRONIZED_IO defined, with the\nexception that all I/O operations shall be completed as defined for\nsynchronized I/O data integrity completion\".\n\nLooks to me like we have a winner. Question is, can we bank on its\nexistence and, if so, is it properly implemented on all platforms that\nsupport it?\n\n\nSince we've been talking about porting to rather different platforms\n(win32 in particular), it seems logical to build a PGFileSync()\nfunction or something (perhaps a single PGSync() which synchronizes\nall relevant PG files to disk, with sync() if necessary) and which\nwould thus use fdatasync() or its equivalent.\n\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Sat, 1 Feb 2003 09:20:36 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sync()" }, { "msg_contents": "Curt Sampson wrote:\n<snip>\n > What I'm hearing here is that all we really need to do to \"compete\" with\n > MySQL on Windows is to make the UI a bit slicker. So what's the problem\n > with someone building, for each release, a set of appropriate \nbinaries, and\n > someone making a slick install program that will install postgres,\n > install parts of cygwin if necessary, and set up postgres as a service?\n\nThe non-code related parts of the Win32 port of PostgreSQL that are \nbeing looked at:\n\n + Working on the packaging bits (slick install program) already. Have \ncreated a project - pgsqlwin - on GBorg to hold any specific bits we need.\n\n http://gborg.postgresql.org/project/pgsqlwin/projdisplay.php\n\n First release of the *extremely alpha* \"Proof of Concept\" version is at:\n\n http://prdownloads.sourceforge.net/pgsql/PgSQL731wina1.exe?download\n\n\n + Concerned about including GPL stuff without having 100% totally \ninvestigated the ramifications for people including the Win32 version of \nPostgreSQL as a built-in part of their applications. 
Not going to \ncommit anything even slightly GPL related to that GBorg project until it \n100% safe to do so without affect our ability to release it as BSD. \nHave some preliminary information regarding this, but just need to wrap \nmy head around it properly. Not going to look at it closely for another \nweek or so.\n\n + It would be greatly helpful to have some way for the install program \nto automatically add the \"Log in as a service\" Win32 priviledge to the \n\"postgres\" user without having to instruct the user to do so. We can \ncreate the user automatically through a shell command, but no idea how \nto add that permission. If someone could do some Win32 API stuff to do \nit behind the scenes without a shell command even, that would be great.\n\n + The WinMaster project is a first go at creating a Win32 GUI command \nconsole for controlling the PostgreSQL service. It's still a bit too \nbasic for real use though:\n\n http://gborg.postgresql.org/project/winmaster/projdisplay.php\n\nFurther suggestions, volunteers, etc are totally welcome.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\n > cjs\n\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n- Indira Gandhi\n\n", "msg_date": "Sun, 02 Feb 2003 14:12:49 +1030", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System" }, { "msg_contents": "\nCurt Sampson <[email protected]> wrote:\n\n> At any rate, it seems to me highly unlikely that, since the child has\n> the *same* descriptor as the parent had, that the lock would\n> disappear.\n\nIt depends on the lock function. After fork():\n\n o with flock() the lock continues to be held, but will be unlocked\n if any child process explicitly unlocks it\n\n o with fcntl() the lock is not inherited in the child\n\n o with lockf() the standards and manual pages don't say\n\nBoring reference material follows.\n\nflock\n===== \n\n From the NetBSD manual page:\n\nNOTES\n Locks are on files, not file descriptors. That is, file descriptors du-\n plicated through dup(2) or fork(2) do not result in multiple instances of\n a lock, but rather multiple references to a single lock. If a process\n holding a lock on a file forks and the child explicitly unlocks the file,\n the parent will lose its lock.\n\nThe Red Hat Linux 8.0 manual page has similar wording. 
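If anyone wants to check that last claim on their own platform, a
throwaway sketch (nothing clever, most error handling omitted; whichever
message it prints is the answer):

    #include <sys/file.h>
    #include <sys/wait.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int
    main(void)
    {
        const char *path = "/tmp/flocktest";
        int fd = open(path, O_RDWR | O_CREAT, 0600);

        if (fd < 0 || flock(fd, LOCK_EX) < 0)
            exit(1);

        if (fork() == 0)                /* child unlocks and exits */
        {
            flock(fd, LOCK_UN);
            _exit(0);
        }
        wait(NULL);

        if (fork() == 0)                /* probe from a fresh descriptor */
        {
            int fd2 = open(path, O_RDWR);

            if (flock(fd2, LOCK_EX | LOCK_NB) == 0)
                printf("child's unlock released the parent's lock\n");
            else
                printf("parent still holds the lock\n");
            _exit(0);
        }
        wait(NULL);
        return 0;
    }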
(No standards\nto check here -- flock() is not standardised in POSIX, X/Open, Single\nUnix Standard, ...)\n\nfcntl\n=====\n\nThe NetBSD manual page notes that these locks are not inherited by\nchild processes:\n\n Another minor semantic problem with this interface is that locks\n are not inherited by a child process created using the fork(2)\n function.\n\nDitto the Single Unix Standard versions 2 and 3.\n\nlockf()\n=======\n\nThe standards and manual pages that I've checked don't discuss\nfork() in relation to lockf(), which seems a peculiar ommission\nand makes me suspect that behaviour has varied historically.\n\nIn practice I would expect lockf() semantics to be the same as\nfcntl().\n\nRegards,\n\nGiles\n \n\n\n\n\n\n\n", "msg_date": "Sun, 02 Feb 2003 16:17:22 +1100", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On file locking " }, { "msg_contents": "Giles Lean <[email protected]> writes:\n> Boring reference material follows.\n\nCouldn't help noticing that you omitted HPUX ;-)\n\nOn HPUX 10.20, flock doesn't seem to exist (hasn't got a man page nor\nany mention in /usr/include). lockf says\n\n All locks for a process are released upon\n the first close of the file, even if the process still has the file\n opened, and all locks held by a process are released when the process\n terminates.\n\nand\n\n When a file descriptor is closed, all locks on the file from the\n calling process are deleted, even if other file descriptors for that\n file (obtained through dup() or open(), for example) still exist.\n\nwhich seems to imply (but doesn't actually say) that HPUX keeps track of\nexactly which process took out the lock, even if the file is held open\nby multiple processes.\n\nThis all doesn't look good for using file locks in the way I had in\nmind :-( ... but considering that all these man pages seem pretty vague,\nmaybe some direct experimentation is called for.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 02 Feb 2003 02:31:47 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On file locking " }, { "msg_contents": "Lamar Owen wrote:\n> On Friday 31 January 2003 03:21, Bruce Momjian wrote:\n> > Man, I go away for one day, and look what you guys get into. :-)\n> \n> No duh. Whew.\n> \n> > Lastly, SRA just released _today_ their first Win32 port of PostgreSQL,\n> > and it is _threaded_:\n> \n> > \thttp://osb.sra.co.jp/PowerGres/\n> \n> Is there an English translation of the site so one who doesn't speak or write \n> Japanese can try it out?\n\nNo, sorry. Tatsuo mentioned that. However, Babelfish will do the\ntranslation:\n\n\thttp://world.altavista.com/\n\nPut in the URL, and choose translate Japanese to English.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 2 Feb 2003 07:17:17 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Windows Build System - My final thoughts" }, { "msg_contents": "Jeff Davis wrote:\n> > As for build environment, we have two audiences --- those using\n> > binaries, and those compiling from source. Clearly we are going to have\n> > more binary users vs. 
source users on Win32 than on any other platform,\n> > so at this stage I think making thing easier for the majority of our\n> > Unix developers is the priority, meaning we should use our existing\n> > Makefiles and cygwin to compile. Later, if things warrant it, we can do\n> > VC++ project files somehow.\n> \n> I'm ignorant when it comes to build environments on windows, but I was under \n> the impression that DJGPP was mostly a complete environment. Are there any \n> plans to support it, or is it even possible?\n\nI don't think we want to throw our Unix folks into culture shock. Let's\npick one build environment and go from there, either cygwin or something\nelse. Once the patches are in, folks can test the various build options.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 2 Feb 2003 07:19:24 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Windows Build System - My final thoughts" }, { "msg_contents": "Justin Clift wrote:\n> + Aside from all this, it might be nice to have a few Win32 specific \n> gui pieces in place at the time that PostgreSQL 7.4 Win32 is released. \n> Am sure they'll develop over time, but was thinking we should at least \n> make a good impression with the first release. Hey, if we make a really \n> bad impression with the first release, then there might not be the \n> quadruple-zillion Windows PG users after all. If that sounds like a \n> good idea, maybe adding the GUC variables \"random_query_delay\" \n> (minutes), \"crash_how_often\" (seconds), and \"reboot_plus_corrupt_please\" \n> (true/false)?\n\nWhat we need is for the backend to query postgresql.org to set those\nparameters, so we can control how many Win32 users adopt PostgreSQL. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 2 Feb 2003 07:21:58 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Windows Build System - My final thoughts" }, { "msg_contents": "Bruce Momjian wrote:\n> Justin Clift wrote:\n> \n>> + Aside from all this, it might be nice to have a few Win32 specific \n>>gui pieces in place at the time that PostgreSQL 7.4 Win32 is released. \n>>Am sure they'll develop over time, but was thinking we should at least \n>>make a good impression with the first release. Hey, if we make a really \n>>bad impression with the first release, then there might not be the \n>>quadruple-zillion Windows PG users after all. If that sounds like a \n>>good idea, maybe adding the GUC variables \"random_query_delay\" \n>>(minutes), \"crash_how_often\" (seconds), and \"reboot_plus_corrupt_please\" \n>>(true/false)?\n> \n> \n> What we need is for the backend to query postgresql.org to set those\n> parameters, so we can control how many Win32 users adopt PostgreSQL. :-)\n\n\"All your [data] base belong to us\" ?\n\n;-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. 
He told me to try to be in the\nfirst group; there was less competition there.\"\n- Indira Gandhi\n\n", "msg_date": "Mon, 03 Feb 2003 01:31:26 +1030", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Windows Build System - My final thoughts" }, { "msg_contents": "On Sun, 2 Feb 2003, Tom Lane wrote:\n\n> This all doesn't look good for using file locks in the way I had in\n> mind :-( ... but considering that all these man pages seem pretty vague,\n> maybe some direct experimentation is called for.\n\nDefinitely. I wonder about the NetBSD manpage quotes in the post you\nfollowed up to, given that last time I checked flock() was implmented,\nin the kernel, using fcntl(). Either that's changed, or the manpages\nare unclear or lying.\n\nThis has been my experience in the past; locking semantics are subtle\nand unclear enough that you really need to test for exactly what you\nwant at build time on every system, and you've got to do this testing\non the filesystem you intend to put the locks on. (So you don't, e.g.,\ntest a local filesystem but end up with data on an NFS filesystem with\ndifferent locking semantics.) That's what procmail does.\n\nGiven this, I'm not even sure the whole idea is worth persuing. (Though\nI guess I should find out what NetBSD is really doing, and fix the\nmanual pages correspond to reality.)\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n", "msg_date": "Mon, 3 Feb 2003 04:27:28 +0900 (JST)", "msg_from": "Curt Sampson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On file locking " }, { "msg_contents": "\nCurt Sampson <[email protected]> wrote:\n\n> On Sun, 2 Feb 2003, Tom Lane wrote:\n> \n> > This all doesn't look good for using file locks in the way I had in\n> > mind :-( ... but considering that all these man pages seem pretty vague,\n> > maybe some direct experimentation is called for.\n> \n> Definitely. I wonder about the NetBSD manpage quotes in the post you\n> followed up to, given that last time I checked flock() was implmented,\n> in the kernel, using fcntl(). Either that's changed, or the manpages\n> are unclear or lying.\n\nUsing the same kernel code != same semantics.\n\nI think the NetBSD manual pages are trying to say that it's \"safe\" to\nhave lockf(), fcntl(), and flock() locking playing together. That\nneedn't be the case on all operating systems and the standards don't\nrequire it.\n\n> This has been my experience in the past; locking semantics are subtle\n> and unclear enough that you really need to test for exactly what you\n> want at build time on every system, and you've got to do this testing\n> on the filesystem you intend to put the locks on.\n\nWhat he said ...\n\nGiles\n", "msg_date": "Mon, 03 Feb 2003 09:08:38 +1100", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On file locking " }, { "msg_contents": "\nTom Lane wrote:\n\n> On HPUX 10.20, flock doesn't seem to exist (hasn't got a man page nor\n> any mention in /usr/include).\n\nCorrect. 
Still isn't there in later releases.\n\n> lockf says\n> \n> All locks for a process are released upon\n> the first close of the file, even if the process still has the file\n> opened, and all locks held by a process are released when the process\n> terminates.\n> \n> and\n> \n> When a file descriptor is closed, all locks on the file from the\n> calling process are deleted, even if other file descriptors for that\n> file (obtained through dup() or open(), for example) still exist.\n> \n> which seems to imply (but doesn't actually say) that HPUX keeps track of\n> exactly which process took out the lock, even if the file is held open\n> by multiple processes.\n\nHaving done some testing today, I now understand what the standards\nare trying to say when they talk about locks being \"inherited\". Or at\nleast I think I understand: standards are tricky, locking is subtle,\nand I'm prepared to be corrected if I'm wrong!\n\nAll of these lock functions succeed when the same process asks for a\nlock that it already has. That is:\n\n fcntl(fd, ...);\n fcntl(fd, ...); /* success -- no error returned */\n\nFor flock() only, the lock is inherited by a child process along\nwith the file descriptor so the child can re-issue the flock()\ncall and that will pass, too:\n\n flock(fd, ...);\n pid = fork();\n if (pid == 0)\n flock(fd, ...); /* success -- no error returned */\n\nFor fcntl() and lockf() the locks are not inherited, and the\ncall in a child fails:\n\n fcntl(fd, ...);\n pid = fork();\n if (pid == 0)\n fcntl(fd, ...); /* will fail and return -1 */\n\nIn no case does just closing the file descriptor in the child lose\nthe parent's lock. I rationalise this as follows:\n\n1. flock() is using a \"last close\" semantic, so closing the file\n descriptor is documented not to lose the lock\n\n2. lockf() and fcntl() use a \"first close\", but because the locks\n are not inherited by the child process the child can't unlock\n them\n\n> This all doesn't look good for using file locks in the way I had in\n> mind :-( ... but considering that all these man pages seem pretty vague,\n> maybe some direct experimentation is called for.\n\nI conjecture that Tom was looking for a facility to lock a file and\nhave it stay locked if the postmaster or any child process was still\nrunning. flock() fits the bill, but it's not portable everywhere.\n\nOne additional warning: this stuff *is* potentially filesystem\ndependent, per the source code I looked at, which would call\nfilesystem specific routines.\n\nI tested with HP-UX 11.00 (VxFS), NetBSD (FFS) and Linux (ext3). I've\nput the rough and ready test code up for FTP, if anyone wants to check\nmy working:\n\n ftp://ftp.nemeton.com.au/pub/pgsql/\n\nLimitations in the testing:\n\nI only used whole file locking (no byte ranges) and didn't prove that\na lock taken by flock() is still held after a child calls close() as\nit is documented to be.\n\nRegards,\n\nGiles\n", "msg_date": "Mon, 03 Feb 2003 19:19:39 +1100", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On file locking " }, { "msg_contents": "\n> That same documentation mentions that locks acquired using flock()\n> will *not* invoke the mandatory lock semantics even if on a file\n> marked for it, so I guess flock() isn't implemented on top of fcntl()\n> in Linux.\n\nThey're not. 
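A quick way to convince yourself (throwaway sketch, file name made up):
take an exclusive flock() lock in one process and try an fcntl() write
lock from another. If the two really are separate lock types the fcntl()
lock should be granted in spite of the flock() lock.

    #include <sys/file.h>
    #include <sys/wait.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int
    main(void)
    {
        const char *path = "/tmp/locktypes";
        int fd = open(path, O_RDWR | O_CREAT, 0600);

        if (fd < 0 || flock(fd, LOCK_EX) < 0)   /* BSD-style lock */
            exit(1);

        if (fork() == 0)
        {
            struct flock fl = {0};              /* POSIX-style lock, new process */
            int fd2 = open(path, O_RDWR);

            fl.l_type = F_WRLCK;
            fl.l_whence = SEEK_SET;             /* start 0, len 0: whole file */

            if (fcntl(fd2, F_SETLK, &fl) == 0)
                printf("fcntl lock granted despite the flock lock\n");
            else
                printf("fcntl lock refused because of the flock lock\n");
            _exit(0);
        }
        wait(NULL);
        return 0;
    }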
And there's another difference between fcntl and flock in\nLinux: although fork(2) states that file locks are not inherited, locks\nmade by flock are inherited to children and they keep the lock even when\nthe parent process is killed with SIGKILL. Tested this.\n\nJust see man syscall, there exists both\n\tflock(2)\nand\n\tfcntl(2)\n\n\n\n-- \nAntti Haapala\n+358 50 369 3535\nICQ: #177673735\n\n", "msg_date": "Mon, 3 Feb 2003 12:29:47 +0200 (EET)", "msg_from": "Antti Haapala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On file locking" }, { "msg_contents": "\n----- Original Message -----\nFrom: \"Justin Clift\" <[email protected]>\nTo: \"Curt Sampson\" <[email protected]>\nCc: \"Peter Eisentraut\" <[email protected]>; \"Curtis Faith\"\n<[email protected]>; <[email protected]>\nSent: Sunday, February 02, 2003 4:42 AM\nSubject: Re: [mail] Re: [HACKERS] Windows Build System\n> + It would be greatly helpful to have some way for the install program\n> to automatically add the \"Log in as a service\" Win32 priviledge to the\n> \"postgres\" user without having to instruct the user to do so. We can\n> create the user automatically through a shell command, but no idea how\n> to add that permission. If someone could do some Win32 API stuff to do\n> it behind the scenes without a shell command even, that would be great.\n>\n> + The WinMaster project is a first go at creating a Win32 GUI command\n> console for controlling the PostgreSQL service. It's still a bit too\n> basic for real use though:\n> http://gborg.postgresql.org/project/winmaster/projdisplay.php>\n> Further suggestions, volunteers, etc are totally welcome.\n> :-)\n> Regards and best wishes,\n> Justin Clift\n\n It's still a bit too basic for real use though:\nYeah i know. I write this for my internal use.\nInitial purpose of this stuff is only to avoid teaching of an old lady with\nminimum computer skills to use bash and hide this ugly dos box :)\nMark L. Woodward (mlw) anounce few monts ago a self installing PostgreSQL\nfor Windows so\ni write him about this console. He do a lof job to.\nSpecial thanks Mark.\n\nOK, now how to make WinMaster more usefull ?\nIt's open source so if any1 want use it he/she may help to\ndevelop it.\n\nI. Install as a service feature for winmaster are included in my plans\nfor future.\nII. I'm thinking about direct link to PostgreSQL server instead usung\nCreateProcess,\n but this is unclear idea at present time. Any suggestions will be\nwelcome.\nIII Please add any feature rquests to\nhttp://gborg.postgresql.org/project/winmaster/bugs/buglist.php?fr=yes\n and ideas to mailto:[email protected]\n\nJustin you are right !!!\nFurther suggestions, volunteers, etc are totally welcome!!!\n Further suggestions, volunteers, etc are totally welcome!!!\n Further suggestions, volunteers, etc are totally welcome!!!\n\n\n\n---\nOutgoing mail is certified Virus Free.\nChecked by AVG anti-virus system (http://www.grisoft.com).\nVersion: 6.0.449 / Virus Database: 251 - Release Date: 27/01/2003\n\n", "msg_date": "Mon, 3 Feb 2003 12:29:27 +0100", "msg_from": "\"Igor Georgiev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System" }, { "msg_contents": "\n> All of these lock functions succeed when the same process asks for a\n> lock that it already has. 
That is:\n>\n> fcntl(fd, ...);\n> fcntl(fd, ...); /* success -- no error returned */\n>\n> For flock() only, the lock is inherited by a child process along\n> with the file descriptor so the child can re-issue the flock()\n> call and that will pass, too:\n>\n> flock(fd, ...);\n> pid = fork();\n> if (pid == 0)\n> flock(fd, ...); /* success -- no error returned */\n\nTrue...\n\n> For fcntl() and lockf() the locks are not inherited, and the\n> call in a child fails:\n>\n> fcntl(fd, ...);\n> pid = fork();\n> if (pid == 0)\n> fcntl(fd, ...); /* will fail and return -1 */\n>\n> In no case does just closing the file descriptor in the child lose\n> the parent's lock. I rationalise this as follows:\n>\n> 1. flock() is using a \"last close\" semantic, so closing the file\n> descriptor is documented not to lose the lock\n\nYep.\n\n> 2. lockf() and fcntl() use a \"first close\", but because the locks\n> are not inherited by the child process the child can't unlock\n> them\n\nAnd at least old linux system call manuals seems to reflect this\n(incorrectly) when they state that file locks are not inherited (should\nbe \"record locks obtained by fcntl\").\n\n> One additional warning: this stuff *is* potentially filesystem\n> dependent, per the source code I looked at, which would call\n> filesystem specific routines.\n>\n> I only used whole file locking (no byte ranges) and didn't prove that\n> a lock taken by flock() is still held after a child calls close() as\n> it is documented to be.\n\nI tested this on Linux 2.4.x ext2 fs and it seems to follow the spec\nexactly. If child is forked and it closes the file, parent still has the\nlock until it's killed or it has also closed the file.\n\nWhat about having two different lock files: one that would indicate that\nthere are some child processes still running and another which would\nindicate that there's postmaster's parent process running? - Using flock\nand fcntl semantics respectively (or flock semantics with children\nimmediately closing their fds).\n\nAnd of course locking is file system dependant, just think NFS on linux\nwhere virtually no locking semantics actually work :)\n\n-- \nAntti Haapala\n\n", "msg_date": "Mon, 3 Feb 2003 14:10:49 +0200 (EET)", "msg_from": "Antti Haapala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On file locking " }, { "msg_contents": "Hannu Krosing wrote:\n> \n> On Thu, 2003-01-30 at 20:29, Tom Lane wrote:\n> > Claiming that it doesn't require an increased level of testing is\n> > somewhere between ridiculous and irresponsible.\n> \n> We should have at least _some_ platforms (besides Win32) that we could\n> clain to have run thorough test on.\n> \n> I suspect that RedHat does some (perhaps even severe) testing for\n> RHAS/RHDB, but I don't know of any other thorough testing.\n> \n> Or should reliability testing actually be something left for commercial\n> entities ?\n\nThe testing has to be done before we make anything available as an\nofficial release. As of now, the status of this project is at the\nbeginning of incorporating a 7.2.1 based patch into CVS HEAD.\n\nAsking for exzessive tests at this stage of development and (ab)using\nthe absence of 100% proof of rock solid reliability as an excuse to\nreject the entire aproach would be ridiculous.\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== [email protected] #\n", "msg_date": "Mon, 03 Feb 2003 16:20:09 -0500", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Windows Build System" }, { "msg_contents": "Lamar Owen wrote:\n> \n> On Friday 31 January 2003 03:21, Bruce Momjian wrote:\n> > Man, I go away for one day, and look what you guys get into. :-)\n> \n> No duh. Whew.\n> \n> > Lastly, SRA just released _today_ their first Win32 port of PostgreSQL,\n> > and it is _threaded_:\n> \n> > http://osb.sra.co.jp/PowerGres/\n> \n> Is there an English translation of the site so one who doesn't speak or write\n> Japanese can try it out?\n\nBruce, better be careful!\n\nIf SRA hasn't done exzessive power-off and other crash testing, beware\nof the dogs :-)\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n", "msg_date": "Mon, 03 Feb 2003 16:59:53 -0500", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Windows Build System - My final thoughts" }, { "msg_contents": "The debate on the configuration file sparked a memory of an old patch I \nsubmitted in 7.1 days.\n\nOne of the things I do not like about PostgreSQL is, IMHO, is a \nbackwards configuration process. Rather than specify a data directory, \nthe administrator should specify a database configuration file. Within \nthe configuration file is the location and names of the data directory \nand other information. Most admins want a central location for this \ninformation.\n\nOne of the things that is frustrating to me, is to have to hunt down \nwhere the data directory is so that I can administrate a DB. It can be \nanywhere, in any directory on any volume. If you had, say, a \n/usr/local/pgsql/admin, then you could have mydb.conf which could then \nbe checked in to CVS. A standard location for configuration files is a \nmore normal process as the location of the data directory is less so. I \njust don't think the PG data directory should not contain configuration \ninformation.\n\nThe original patch allowed a user to specify the location of the \npostgresql.conf file, rather than assuming it lived in $PGDATA\nAlso included in that patch, was the ability to specify the location of \nthe PGDATA directory as well as the names of the pg_hba.conf and other \nconfiguration files.\n\nIt also allowed the user to use PG as it has always worked, The patch \nwas not applied because a better solution was supposedly coming, but \nthat was two major revisions ago. I would still like to see this \nfunctionality. Would anyone else?\n\n", "msg_date": "Tue, 11 Feb 2003 13:44:15 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "location of the configuration files" }, { "msg_contents": "On Tue, 2003-02-11 at 13:44, mlw wrote:\n> The debate on the configuration file sparked a memory of an old patch I \n> submitted in 7.1 days.\n> \n> One of the things I do not like about PostgreSQL is, IMHO, is a \n> backwards configuration process. Rather than specify a data directory, \n> the administrator should specify a database configuration file. Within \n> the configuration file is the location and names of the data directory \n> and other information. 
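Something along these lines, purely to illustrate the shape of it (the
option names are invented for this example, they are not existing
settings):

    # /usr/local/pgsql/admin/mydb.conf  -- one file per cluster
    data_directory = '/vol02/pgdata/mydb'     # where the data actually lives
    hba_file       = '/usr/local/pgsql/admin/mydb_hba.conf'
    ident_file     = '/usr/local/pgsql/admin/mydb_ident.conf'
    port           = 5432
    # ...plus everything postgresql.conf carries today

You point the postmaster at the file, and the file knows where everything
else is, instead of the other way around.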
Most admins want a central location for this \n> information.\n> \n> One of the things that is frustrating to me, is to have to hunt down \n> where the data directory is so that I can administrate a DB. It can be \n> anywhere, in any directory on any volume. If you had, say, a \n> /usr/local/pgsql/admin, then you could have mydb.conf which could then \n> be checked in to CVS. A standard location for configuration files is a \n> more normal process as the location of the data directory is less so. I \n> just don't think the PG data directory should not contain configuration \n> information.\n> \n> The original patch allowed a user to specify the location of the \n> postgresql.conf file, rather than assuming it lived in $PGDATA\n> Also included in that patch, was the ability to specify the location of \n> the PGDATA directory as well as the names of the pg_hba.conf and other \n> configuration files.\n> \n> It also allowed the user to use PG as it has always worked, The patch \n> was not applied because a better solution was supposedly coming, but \n> that was two major revisions ago. I would still like to see this \n> functionality. Would anyone else?\n> \n\nI'm going to be lazy and ask if you can post what the better solution\nthat was coming was (or a link to the thread). While I'll grant you that\nthe \"it's coming\" argument is pretty weak after two releases, that fact\nthat it may have been a better solution could still hold up.\n\nRobert Treat\n\n\n\n", "msg_date": "11 Feb 2003 13:55:08 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "\nRobert Treat wrote:\n\n>I'm going to be lazy and ask if you can post what the better solution\n>that was coming was (or a link to the thread). While I'll grant you that\n>the \"it's coming\" argument is pretty weak after two releases, that fact\n>that it may have been a better solution could still hold up.\n>\n>Robert Treat\n> \n>\nAFAIK it wasn't actually done. It was more of a, \"we should do something \ndifferent\" argument. At one point it was talked about rewriting the \nconfiguration system to allow \"include\" and other things.\n\n\n", "msg_date": "Tue, 11 Feb 2003 14:10:34 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "\nI, personally, also think it makes more sense to pass to the postmaster\na configuration file that contains all the rest of the information about\nthe database system, including the disk locations of the various data\ndirectories and whatnot.\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n", "msg_date": "Wed, 12 Feb 2003 13:53:17 +0900 (JST)", "msg_from": "Curt Sampson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "mlw wrote:\n> AFAIK it wasn't actually done. It was more of a, \"we should do something \n> different\" argument. At one point it was talked about rewriting the \n> configuration system to allow \"include\" and other things.\n\nThat seems like extreme overkill. 
The PostgreSQL configuration\nmechanism doesn't seem to me to be anywhere near complicated enough to\njustify an \"include\" mechanism.\n\nI agree with you: you should be able to specify all of the base\nconfiguration information (including the location of the data\ndirectories) in one file, and it makes perfect sense to me for the\nlocation of the data directory to be a GUC variable.\n\nI'd say the only thing the postmaster needs to know prior to startup\nis the directory containing the postgresql.conf file. An\nadministrator who wishes to set up multiple independent databases can\neasily do so by using multiple config file directories. When\nconsistency is required, he can easily use symlinks to point to master\nconfig files where appropriate.\n\n\nI assume $PGDATA was around long before GUC?\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Tue, 11 Feb 2003 22:43:00 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Kevin Brown <[email protected]> writes:\n> I assume $PGDATA was around long before GUC?\n\nYes, it was. But I have not yet seen an argument here that justifies\nwhy $SOMECONFIGDIRECTORY/postgresql.conf is better than\n$PGDATA/postgresql.conf. The latter keeps all the related files\ntogether. The former seems only to introduce unnecessary complexity.\nYou can only justify it as simpler if you propose hardwiring a value for\n$SOMECONFIGDIRECTORY ... which is a proposal that will not fly with any\nof the core developers, because we all run multiple versions of Postgres\non our machines so that we can deal with back-version bug reports,\ntest installations, etc. It is unlikely to fly with any of the RPM\npackagers either, due to the wildly varying ideas out there about the\nOne True Place where applications should put their config files.\n\n(This point was pretty much why mlw's previous proposal was rejected,\nIIRC.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 12 Feb 2003 02:22:50 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files " }, { "msg_contents": "Tom Lane wrote:\n> Kevin Brown <[email protected]> writes:\n> > I assume $PGDATA was around long before GUC?\n> \n> Yes, it was. But I have not yet seen an argument here that justifies\n> why $SOMECONFIGDIRECTORY/postgresql.conf is better than\n> $PGDATA/postgresql.conf. \n\nOkay, here's one: most Unix systems store all of the configuration\nfiles in a well known directory: /etc. These days it's a hierarchy of\ndirectories with /etc as the root of the hierarchy. When an\nadministrator is looking for configuration files, the first place he's\ngoing to look is in /etc and its subdirectories. After that, he's\nforced to look through the startup scripts to figure out where things\nare located. And if those aren't revealing, then he has to read\nmanpages and hope they're actually useful. :-) And if that doesn't\nwork, then he has to resort to tricks like doing \"strings\" on the\nbinaries (he doesn't necessarily have access to the sources that the\nbinaries were compiled from, which is all that matters here).\n\n> The latter keeps all the related files together. 
The former seems\n> only to introduce unnecessary complexity.\n\nWell, I'd say it's \"unnecessary\" only when you already know where the\ndata files are located -- which is true when you're a developer or\nsomeone who is already familiar with the installation you're working\nwith. But if you're just getting started and installed it from a\npackage like an RPM file, then you have to look in the package to see\nwhere it created the data file areas, or look at the startup scripts,\netc.\n\n> You can only justify it as simpler if you propose hardwiring a value\n> for $SOMECONFIGDIRECTORY ...\n\nMaking things simpler from the standpoint of the code isn't the point.\nMaking things simpler for the DBA and/or Unix sysadmin is.\n\nI'd say $SOMECONFIGDIRECTORY should be a hardwired default with a\ncommand line override.\n\nI doubt you'll get a whole lot of argument from the general user\ncommunity if you say that the hard wired default should be\n/etc/postgresql.\n\n> which is a proposal that will not fly with any of the core\n> developers, because we all run multiple versions of Postgres on our\n> machines so that we can deal with back-version bug reports, test\n> installations, etc.\n\nI absolutely agree that the config directory to use should be\nsomething that can be controlled with a command line option.\n\n> It is unlikely to fly with any of the RPM packagers either, due to\n> the wildly varying ideas out there about the One True Place where\n> applications should put their config files.\n\nThere seems to be substantial agreement among the distribution\nmaintainers that config files belong somewhere in /etc. At least,\nI've seen very little disagreement with that idea except from people\nwho believe that each package should have its own, separate directory\nhierarchy. And the fact that the vast majority of packages put their\nconfig files somewhere in /etc supports this.\n\nDebian, for instance, actually *does* put the PostgreSQL config files\nin /etc/postgresql and creates symlinks in the data directory that\npoint to them. This works, but it's a kludge.\n\nThere are highly practical reasons for putting all the config files\nunder /etc, not the least of which is that it makes backing up files\nthat are *very* likely to change from the default, and which are also\nhighly critical to the operation of the system, very easy.\n\nYou'll get A LOT more disagreement about where to put data files than\nconfig files, as standards go. And in the case of PostgreSQL, where\nyou put your data files is especially important for performance\nreasons, so it therefore makes even less sense to put the config files\nin the same location: it means that the config files could literally\nbe anywhere, and any administrator who is unfamiliar with the system\nwill have to dig through startup scripts (or worse!) to figure it out.\n\n\nOh, here's another reason $SOMECONFIGDIRECTORY is better than $PGDATA:\nit allows much more appropriate separation of concern by default.\n\nMost installations of PostgreSQL start the database from a startup\nscript that's run at boot time. 
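(To make that concrete: this is an untested sketch, not any particular
distribution's script, but such an rc script usually boils down to a
root-owned file with the data location hardwired into it, e.g.

    PGDATA=/var/lib/pgsql/data
    su - postgres -c \"pg_ctl start -D $PGDATA\"

and nothing in it that the postgres user can edit.)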
With $PGDATA, changing the target\ndata directory requires changing the startup script, which requires\nroot access to the system -- if it didn't require root access then the\nentire system is open to the possibility of a world of hurt because\nthe DBA isn't necessarily the same guy as the Unix sysadmin and\ntherefore doesn't necessarily know his way around shell scripts in\ngeneral, and rc scripts in particular, the way the Unix admin will.\nThe possibility of hurt comes from the fact that the rc script runs at\nroot, at a time that the system is hardest to work with in the event\nof a failure (many systems haven't even put up any console login\nprompts and may not have even started any remote login facilities\nbefore the PostgreSQL startup script runs). A sufficiently bad\nscrewup on the part of the DBA with that kind of setup will require\nthe Unix sysadmin to go to single user mode or worse to fix it. So\nunless the Unix sysadmin really trusts the DBA, he's not going to\nallow the DBA that kind of access. Instead he'll kludge something\ntogether so that the DBA can make the appropriate kinds of changes\nwithout compromising the system. But every shop will do this a\ndifferent way. Even Debian, which usually is pretty good about\ndealing with issues like these, doesn't address this.\n\nBut it shouldn't even be necessary for a shop to kludge around the\nproblem: with $SOMECONFIGDIRECTORY, the Unix sysadmin can safely give\nwrite permissions to the DBA on the config files (and even on the\nentire directory they reside in), the DBA can point the database at\nwhatever directory he wants the data to reside in, and the Unix admin\nonly has to set up the storage areas, set the permissions, and let the\nDBA loose on it -- he doesn't have to touch the startup scripts at\nall, the DBA doesn't have to be much of a shell script wizard, and\neveryone is (relatively) happy. 
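(Sketch only, with hypothetical option and parameter names: under that
scheme the root-owned rc script would never need to change,

    su - postgres -c \"pg_ctl start -D /etc/postgresql\"

while /etc/postgresql/postgresql.conf, writable by the DBA, would carry
something like

    data_directory = '/raid0/pgsql/data'

so relocating the data means editing a file the DBA already owns.)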
And, even better, the DBA will be\nable to do this at most shops he works for because his knowledge will\nbe based on the standard PostgreSQL install.\n\nIf we want to get wider acceptance amongst the heavy database users\n(which often *do* separate DBAs from Unix sysadmins), we may want to\nthink about things like this from time to time...\n\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Wed, 12 Feb 2003 05:24:30 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Wed, 2003-02-12 at 08:24, Kevin Brown wrote:\n> Tom Lane wrote:\n> > You can only justify it as simpler if you propose hardwiring a value\n> > for $SOMECONFIGDIRECTORY ...\n> \n> Making things simpler from the standpoint of the code isn't the point.\n> Making things simpler for the DBA and/or Unix sysadmin is.\n> \n> I'd say $SOMECONFIGDIRECTORY should be a hardwired default with a\n> command line override.\n> \n> I doubt you'll get a whole lot of argument from the general user\n> community if you say that the hard wired default should be\n> /etc/postgresql.\n> \n> > which is a proposal that will not fly with any of the core\n> > developers, because we all run multiple versions of Postgres on our\n> > machines so that we can deal with back-version bug reports, test\n> > installations, etc.\n> \n> I absolutely agree that the config directory to use should be\n> something that can be controlled with a command line option.\n> \n> > It is unlikely to fly with any of the RPM packagers either, due to\n> > the wildly varying ideas out there about the One True Place where\n> > applications should put their config files.\n> \n> There seems to be substantial agreement among the distribution\n> maintainers that config files belong somewhere in /etc. At least,\n> I've seen very little disagreement with that idea except from people\n> who believe that each package should have its own, separate directory\n> hierarchy. And the fact that the vast majority of packages put their\n> config files somewhere in /etc supports this.\n> \n> Debian, for instance, actually *does* put the PostgreSQL config files\n> in /etc/postgresql and creates symlinks in the data directory that\n> point to them. This works, but it's a kludge.\n> \n\nSeems like a good compromise would be to make the hard wired default\n$SOMECONFIGDIRECTORY be $PGDATA; this makes each version of the software\nmore self contained/ less likely to interfere with another installation.\n(This becomes really handy when doing major upgrades). If you really\nhave a strong desire to change this, you can.\n\nAs I see it, this change would (should?) need to be something that could\nbe changed in the configure script when building postgresql, as well\nchangeable via a command line option, any other places?\n\nRobert Treat\n\n\n", "msg_date": "12 Feb 2003 10:39:24 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "> Okay, here's one: most Unix systems store all of the configuration\n> files in a well known directory: /etc. These days it's a hierarchy of\n> directories with /etc as the root of the hierarchy. When an\n> administrator is looking for configuration files, the first place he's\n> going to look is in /etc and its subdirectories. After that, he's\n> forced to look through the startup scripts to figure out where things\n> are located. 
And if those aren't revealing, then he has to read\n> manpages and hope they're actually useful. :-) And if that doesn't\n> work, then he has to resort to tricks like doing \"strings\" on the\n> binaries (he doesn't necessarily have access to the sources that the\n> binaries were compiled from, which is all that matters here).\n\nNo goddammit - /usr/local/etc. Why can't the Linux community respect\nhistory!!!!\n\nIt is the ONE TRUE PLACE dammit!!!\n\nChris\n\n(btw, there is humour + seriousness in above post...)\n\n", "msg_date": "Thu, 13 Feb 2003 09:37:16 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "> > binaries (he doesn't necessarily have access to the sources that the\n> > binaries were compiled from, which is all that matters here).\n> \n> No goddammit - /usr/local/etc. Why can't the Linux community respect\n> history!!!!\n\nHistory? It's the only way to make a read-only / (enforced by\nsecure-level) and still be able to change the user applications.\n\nI don't mind /usr/X11R6/etc either, but it's not exactly appropriate for\nPostgreSQL ;)\n\n-- \nRod Taylor <[email protected]>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "12 Feb 2003 21:17:01 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "\n\"Christopher Kings-Lynne\" <[email protected]> wrote: \n\n> > Okay, here's one: most Unix systems store all of the configuration\n> > files in a well known directory: /etc. These days it's a hierarchy of\n> > directories with /etc as the root of the hierarchy. When an\n> > administrator is looking for configuration files, the first place he's\n> > going to look is in /etc and its subdirectories. \n\n> No goddammit - /usr/local/etc. Why can't the Linux community respect\n> history!!!!\n> \n> It is the ONE TRUE PLACE dammit!!!\n\nWell, to the extent that you're serious, you understand that \na lot of people feel that /usr/local should be reserved for \nstuff that's installed by the local sysadmin, and your\nvendor/distro isn't supposed to be messing with it. \n\nWhich means if the the vendor installed Postgresql (say, the\nRed Hat Database) you'd expect config files to be in /etc.\nIf the postgresql is compiled from source by local admin, \nyou might look somewhere in /usr/local.\n\nI've got the vauge feeling that this is all more than a\nlittle silly... directory locations floating about depending\non who did what, as thought it were such a radical thing \nto do a ./configure, make & make install. But this is a \npretty common feeling among the unix world (more wide spread\nthan just in the Linux world). \n\n\n", "msg_date": "Wed, 12 Feb 2003 19:03:33 -0800", "msg_from": "\"J. M. Brenner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files " }, { "msg_contents": "On Wednesday 12 February 2003 20:37, Christopher Kings-Lynne wrote:\n> > Okay, here's one: most Unix systems store all of the configuration\n> > files in a well known directory: /etc. These days it's a hierarchy of\n\n> No [snip] - /usr/local/etc. 
Why can't the Linux community respect\n> history!!!!\n\n> It is the ONE TRUE PLACE [snip]\n\nIf PostgreSQL is supported as a part of the base operating system in a Linux \ndistribution, and that distribution wishes to be Linux Standards Base \ncompliant (most do), then PostgreSQL cannot go in /usr/local -- period.\n\nIDIC at work.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n\n", "msg_date": "Wed, 12 Feb 2003 22:08:23 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Tom Lane wrote:\n\n>Kevin Brown <[email protected]> writes:\n> \n>\n>>I assume $PGDATA was around long before GUC?\n>> \n>>\n>\n>Yes, it was. But I have not yet seen an argument here that justifies\n>why $SOMECONFIGDIRECTORY/postgresql.conf is better than\n>$PGDATA/postgresql.conf. The latter keeps all the related files\n>together. The former seems only to introduce unnecessary complexity.\n>You can only justify it as simpler if you propose hardwiring a value for\n>$SOMECONFIGDIRECTORY ... which is a proposal that will not fly with any\n>of the core developers, because we all run multiple versions of Postgres\n>on our machines so that we can deal with back-version bug reports,\n>test installations, etc. It is unlikely to fly with any of the RPM\n>packagers either, due to the wildly varying ideas out there about the\n>One True Place where applications should put their config files.\n>\n>(This point was pretty much why mlw's previous proposal was rejected,\n>IIRC.)\n> \n>\nI wasn't talking about a \"default directory\" I was talking about \nconfiguring a database in a configuration file.\n\nWhile I accept that the PostgreSQL group can not be playing catch-up \nwith other databases, this does not preclude the notion accepting common \npractices and adopting them.\n\nUnderstand, I really like PostgreSQL. I like it better than Oracle, and \nit is my DB of choice. That being said, I see what other DBs do right. \nPutting the configuration in the data directory is \"wrong,\" no other \ndatabase or service under UNIX or Windows does this, Period.\n\nDoes the PostgreSQL team know better than the rest of the world?\n\nThe idea that a, more or less, arbitrary data location determines the \ndatabase configuration is wrong. It should be obvious to any \nadministrator that a configuration file location which controls the \nserver is the \"right\" way to do it. Regardless of where ever you choose \nto put the default configuration file, it is EASIER to configure a \ndatabase by using a file in a standard configuration directory (/etc, \n/usr/etc, /usr/local/etc, /usr/local/pgsql/conf or what ever). The data \ndirectory should not contain configuration data as it is typically \ndependent on where the admin chooses to mount storage.\n\nI am astounded that this point of view is missed by the core group.\n\n\nMark.\n\n\n\n\n\n\n\n\nTom Lane wrote:\n\nKevin Brown <[email protected]> writes:\n \n\nI assume $PGDATA was around long before GUC?\n \n\n\nYes, it was. But I have not yet seen an argument here that justifies\nwhy $SOMECONFIGDIRECTORY/postgresql.conf is better than\n$PGDATA/postgresql.conf. The latter keeps all the related files\ntogether. The former seems only to introduce unnecessary complexity.\nYou can only justify it as simpler if you propose hardwiring a value for\n$SOMECONFIGDIRECTORY ... 
which is a proposal that will not fly with any\nof the core developers, because we all run multiple versions of Postgres\non our machines so that we can deal with back-version bug reports,\ntest installations, etc. It is unlikely to fly with any of the RPM\npackagers either, due to the wildly varying ideas out there about the\nOne True Place where applications should put their config files.\n\n(This point was pretty much why mlw's previous proposal was rejected,\nIIRC.)\n \n\nI wasn't talking about a \"default directory\" I was talking about configuring\na database in a configuration file.\n\nWhile I accept that the PostgreSQL group can not be playing catch-up with\nother databases, this does not preclude the notion accepting common practices\nand adopting them.\n\nUnderstand, I really like PostgreSQL. I like it better than Oracle, and it\nis my DB of choice.  That being said, I see what other DBs do right. Putting\nthe configuration in the data directory is \"wrong,\" no other database or\nservice under UNIX or Windows does this, Period.\n\nDoes the PostgreSQL team know better than the rest of the world?\n\nThe idea that a, more or less, arbitrary data location determines the database\nconfiguration is wrong. It should be obvious to any administrator that a\nconfiguration file location which controls the server is the \"right\" way\nto do it.  Regardless of where ever you choose to put the default configuration\nfile, it is EASIER to configure a database by using a file in a standard\nconfiguration directory (/etc, /usr/etc, /usr/local/etc, /usr/local/pgsql/conf\nor what ever). The data directory should not contain configuration data as\nit is typically dependent on where the admin chooses to mount storage.\n\nI am astounded that this point of view is missed by the core group.\n\n\nMark.", "msg_date": "Thu, 13 Feb 2003 00:31:01 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "mlw <[email protected]> writes:\n> The idea that a, more or less, arbitrary data location determines the \n> database configuration is wrong. It should be obvious to any \n> administrator that a configuration file location which controls the \n> server is the \"right\" way to do it.\n\nI guess I'm just dense, but I entirely fail to see why this is the One\nTrue Way To Do It. What you seem to be proposing (ignoring\nsyntactic-sugar issues) is that we replace \"postmaster -D\n/some/data/dir\" by \"postmaster -config /some/config/file\". I am not\nseeing the nature of the improvement. It looks to me like the sysadmin\nmust now grant the Postgres DBA write access on *two* directories, viz\n/some/config/ and /wherever/the/data/directory/is. How is that better\nthan granting write access on one directory? Given that we can't manage\nto standardize the data directory location across multiple Unixen, how\nis it that we will be more successful at standardizing a config file\nlocation?\n\nAll I see here is an arbitrary break with our past practice. I do not\nsee any net improvement.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Feb 2003 00:51:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files " }, { "msg_contents": "Tom Lane wrote:\n\n>mlw <[email protected]> writes:\n> \n>\n>>The idea that a, more or less, arbitrary data location determines the \n>>database configuration is wrong. 
It should be obvious to any \n>>administrator that a configuration file location which controls the \n>>server is the \"right\" way to do it.\n>> \n>>\n>\n>I guess I'm just dense, but I entirely fail to see why this is the One\n>True Way To Do It. What you seem to be proposing (ignoring\n>syntactic-sugar issues) is that we replace \"postmaster -D\n>/some/data/dir\" by \"postmaster -config /some/config/file\". I am not\n>seeing the nature of the improvement. It looks to me like the sysadmin\n>must now grant the Postgres DBA write access on *two* directories, viz\n>/some/config/ and /wherever/the/data/directory/is. How is that better\n>than granting write access on one directory? Given that we can't manage\n>to standardize the data directory location across multiple Unixen, how\n>is it that we will be more successful at standardizing a config file\n>location?\n>\n>All I see here is an arbitrary break with our past practice. I do not\n>see any net improvement.\n>\n> \n>\nThere is a pretty well understood convention that a configuration file \nwill be located in some standard location depending on your distro. \nWould you disagree with that?\n\nThere is also a convention that most servers are configured by a \nconfiguration file, located in a central location. Look at sendmail, \nnamed,, et al.\n\nHere is the test, configure a server, with sendmail, named, apache, and \nPostgreSQL. Tell me which of these systems doesn't configure right.\n\n\n\n\n\n\n\n\nTom Lane wrote:\n\nmlw <[email protected]> writes:\n \n\nThe idea that a, more or less, arbitrary data location determines the \ndatabase configuration is wrong. It should be obvious to any \nadministrator that a configuration file location which controls the \nserver is the \"right\" way to do it.\n \n\n\nI guess I'm just dense, but I entirely fail to see why this is the One\nTrue Way To Do It. What you seem to be proposing (ignoring\nsyntactic-sugar issues) is that we replace \"postmaster -D\n/some/data/dir\" by \"postmaster -config /some/config/file\". I am not\nseeing the nature of the improvement. It looks to me like the sysadmin\nmust now grant the Postgres DBA write access on *two* directories, viz\n/some/config/ and /wherever/the/data/directory/is. How is that better\nthan granting write access on one directory? Given that we can't manage\nto standardize the data directory location across multiple Unixen, how\nis it that we will be more successful at standardizing a config file\nlocation?\n\nAll I see here is an arbitrary break with our past practice. I do not\nsee any net improvement.\n\n \n\nThere is a pretty well understood convention that a configuration file will\nbe located in some standard location depending on your distro. Would you\ndisagree with that?\n\nThere is also a convention that most servers are configured by a configuration\nfile, located in a central location. Look at sendmail, named,, et al. \n\nHere is the test, configure a server, with sendmail, named, apache, and PostgreSQL.\nTell me which of these systems doesn't configure right.", "msg_date": "Thu, 13 Feb 2003 01:10:41 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "At 12:31 AM -0500 2/13/03, mlw wrote:\n>The idea that a, more or less, arbitrary data location determines \n>the database configuration is wrong. 
It should be obvious to any \n>administrator that a configuration file location which controls the \n>server is the \"right\" way to do it.\n\n\nIsn't the database data itself a rather significant portion of the \n'configuration' of the database?\n\nWhat do you gain by having the postmaster config and the database \ndata live in different locations?\n\n-pmb\n", "msg_date": "Wed, 12 Feb 2003 22:14:50 -0800", "msg_from": "Peter Bierman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Peter Bierman wrote:\n\n> At 12:31 AM -0500 2/13/03, mlw wrote:\n>\n>> The idea that a, more or less, arbitrary data location determines the \n>> database configuration is wrong. It should be obvious to any \n>> administrator that a configuration file location which controls the \n>> server is the \"right\" way to do it.\n>\n>\n>\n> Isn't the database data itself a rather significant portion of the \n> 'configuration' of the database?\n>\n> What do you gain by having the postmaster config and the database data \n> live in different locations? \n\nWhile I don't like to use another product as an example, I think amongst \nthe number of things Oracle does right is that it has a fairly standard \nway for an admin to find everything. All one needs to do is find the \n\"ORACLE_HOME\" directory, and everything can be found from there.\n\nIf, assume, PostgreSQL worked like every other system. It would have \neither an entry in /etc or some other directory specified by configure.\n\nSomene please tell me how what I'm proposing differs from things like \nsendmail, named, or anyother standards based UNIX server?\n\n", "msg_date": "Thu, 13 Feb 2003 01:31:47 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "\nBefore I get started, I should note that it may be a good compromise\nto have the data directory be the same as the config file directory,\nwhen neither the config file nor the command line specify something\ndifferent. So the changes I think may make the most sense are:\n\n1. We add a new GUC variable which specifies where the data is.\n The data is assumed to reside in the same place the config files\n reside unless the GUC variable is defined (either in\n postgresql.conf or on the command line, as usual for a GUC\n variable). Both -D and $PGDATA therefore retain their current\n semantics unless overridden by the GUC variable, in which case\n they fall back to the new semantics of specifying only where the\n config files can be found.\n\n2. We add a configure option that specifies what the hardcoded\n fallback directory should be when neither -D nor $PGDATA are\n specified: /etc/postgresql when the option isn't specified to\n configure.\n\n3. We supply a different default startup script and a different\n default configuration file (but can make the older versions\n available in the distribution as well if we wish). The former\n uses neither $PGDATA nor -D (or uses /etc/postgresql for them),\n and the latter uses the new GUC variable to specify a data\n directory location (/var/lib/postgres by default?)\n\nThis combination should work nicely for transitioning and for package\nbuilders. It accomplishes all of the goals mentioned in this thread\nand will cause minimal pain for developers, since they can use their\ncurrent methods. Sounds like it'll make Tom happy, at least. 
:-)\n\n\nTom Lane wrote:\n> mlw <[email protected]> writes:\n> > The idea that a, more or less, arbitrary data location determines the \n> > database configuration is wrong. It should be obvious to any \n> > administrator that a configuration file location which controls the \n> > server is the \"right\" way to do it.\n> \n> I guess I'm just dense, but I entirely fail to see why this is the One\n> True Way To Do It. \n\nBut we're not saying it's the One True Way, just saying that it's a\nway that has very obvious benefits over the way we're using now, if\nyour job is to manage a system that someone else set up.\n\n> What you seem to be proposing (ignoring syntactic-sugar issues) is\n> that we replace \"postmaster -D /some/data/dir\" by \"postmaster\n> -config /some/config/file\". I am not seeing the nature of the\n> improvement.\n\nThe nature of the improvement is that the configuration of a\nPostgreSQL install will becomes obvious to anyone who looks in the\nobvious places. Remember, the '-D ...' is optional! The PGDATA\nenvironment variable can be used instead, and *is* used in what few\ninstallations I've seen. That's not something that shows up on the\ncommand line when looking at the process list, which forces the\nadministrator to hunt down the data directory through other means.\n\n> It looks to me like the sysadmin must now grant the Postgres DBA\n> write access on *two* directories, viz /some/config/ and\n> /wherever/the/data/directory/is. How is that better than granting\n> write access on one directory?\n\nThe difference in where you grant write access isn't a benefit to be\ngained here. The fact that you no longer have to give root privileges\nto the DBA so that he can change the data directory as needed is the\nbenefit (well, one of them, at least). A standard packaged install\ncan easily set the /etc/postgresql directory up with write permissions\nfor the postgres user by default, so the sysadmin won't even have to\ntouch it if he doesn't want to.\n\nA big production database box is usually managed by one or more system\nadministrators and one or more DBAs. Their roles are largely\northogonal. The sysadmins have the responsibility of keeping the\nboxes up and making sure they don't fall over or crawl to a\nstandstill. The DBAs have the responsibility of maximizing the\nperformance and availability of the database and *that's all*. Giving\nthe DBAs root privileges means giving them the power to screw up the\nsystem in ways that they can't recover from and might not even know\nabout. The ways you can take down a system by misconfiguring the\ndatabase are bad enough. No sane sysadmin is going to give the DBA\nthe power to run an arbitrary script as root at a time during the boot\ncycle that the system is the most difficult to manage unless he thinks\nthe DBA is *really* good at system administration tasks, too. And\nthat's assuming the sysadmin even *has* the authority to grant the DBA\nthat kind of access. 
Many organizations keep a tight rein on who can\ndo what in an effort to minimize the damage from screwups.\n\nThe point is that the DBA isn't likely to have root access to the box.\nWhen the DBA lacks that ability, the way we currently do things places\ngreater demand on the sysadmin than is necessary, because root access\nis required to change the startup scripts, as it should be, and the\nlocation of the data, as it should *not* be.\n\n> Given that we can't manage to standardize the data directory\n> location across multiple Unixen, how is it that we will be more\n> successful at standardizing a config file location?\n\nA couple of ways.\n\nFirstly, as we mentioned before, just about every other daemon that\nruns on a Unix system has its configuration file somewhere in the /etc\nhierarchy. By putting our config files in that same hierarchy we'll\nbe *adhering* to a standard. We don't have to worry about\n\"standardizing\" that config file location because it's *already* a\nstandard that we're currently ignoring.\n\nSecondly, standards arise as a result of being declared standards and\nby most people using them. So simply by making /etc/postgresql the\ndefault configuration directory, *that* will become the standard\nplace. Most people won't mess with the default install if they don't\nhave to.\n\nRight now they almost *have to* mess with the default install, because\nthere is no standard place on a Unix system for high speed, highly\nreliable disk access. And that means that, right now, there *is* no\nstandard place for our config files -- it's wherever the person who\nconfigured the database decided the data should be, and he made that\ndecision based on performance and reliability considerations, not on\nany standards.\n\n> All I see here is an arbitrary break with our past practice. I do not\n> see any net improvement.\n\nThat's probably because you're looking at this from the point of view\nof a developer. From that standpoint there really isn't any net\nimprovement, because *you* still have to specify something on the\ncommand line to get your test databases going. As a developer you\n*always* install and manage your own database installations, so *of\ncourse* you'll always know where the config files are. But that's not\nhow it works in the production world.\n\nThe break we'd be making is *not* arbitrary, and that's much of the\npoint: it's a break towards existing standards, and there are good\nreasons for doing it, benefits to be had by adhering to those\nstandards.\n\n\nThe way we currently handle configuration files is fine for research\nand development use -- the environment from which PostgreSQL sprang.\nBut now we're talking about getting it used in production\nenvironments, and their requirements are very different.\n\nSince it is *we* who are not currently adhering to the standard,\nshouldn't the burden of proof (so to speak) be on those who wish to\nkeep things as they are?\n\n\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Wed, 12 Feb 2003 23:36:45 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "\n> Well, to the extent that you're serious, you understand that \n> a lot of people feel that /usr/local should be reserved for \n> stuff that's installed by the local sysadmin, and your\n> vendor/distro isn't supposed to be messing with it. 
\n> \n> Which means if the the vendor installed Postgresql (say, the\n> Red Hat Database) you'd expect config files to be in /etc.\n> If the postgresql is compiled from source by local admin, \n> you might look somewhere in /usr/local.\n\nIndeed. For better or worse, there is a Filesystem Hierarcy Standard,\nand most of the important Linux distros, BSDs and some legacy Unixen\nstick to it, so so should we.\n\nConfiguration files should be in /etc/postgresql/, or at the very least\nsymlinked from there.\n\nMartin\n\n", "msg_date": "13 Feb 2003 11:20:46 +0000", "msg_from": "Martin Coxall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Wed, 12 Feb 2003, J. M. Brenner wrote:\n\n>\n> \"Christopher Kings-Lynne\" <[email protected]> wrote:\n>\n> > > Okay, here's one: most Unix systems store all of the configuration\n> > > files in a well known directory: /etc. These days it's a hierarchy of\n> > > directories with /etc as the root of the hierarchy. When an\n> > > administrator is looking for configuration files, the first place he's\n> > > going to look is in /etc and its subdirectories.\n>\n> > No goddammit - /usr/local/etc. Why can't the Linux community respect\n> > history!!!!\n> >\n> > It is the ONE TRUE PLACE dammit!!!\n>\n> Well, to the extent that you're serious, you understand that\n> a lot of people feel that /usr/local should be reserved for\n> stuff that's installed by the local sysadmin, and your\n> vendor/distro isn't supposed to be messing with it.\n>\n> Which means if the the vendor installed Postgresql (say, the\n> Red Hat Database) you'd expect config files to be in /etc.\n> If the postgresql is compiled from source by local admin,\n> you might look somewhere in /usr/local.\n\nThen why not ~postgres/etc ?? Or substitute ~postgres with the\ndb admin user you (or the distro) decided on at installation time.\nGives a common location no matter who installed it or where it was\ninstalled.\n\nVince.\n-- \n Fast, inexpensive internet service 56k and beyond! http://www.pop4.net/\n http://www.meanstreamradio.com http://www.unknown-artists.com\n Internet radio: It's not file sharing, it's just radio.\n\n", "msg_date": "Thu, 13 Feb 2003 07:00:11 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files " }, { "msg_contents": "On Wed, 12 Feb 2003, Peter Bierman wrote:\n\n> What do you gain by having the postmaster config and the database\n> data live in different locations?\n\nYou can then standardize a location for the configuration files.\n\nEverybody has room in /etc for another 10K of data. Where you have\nroom for something that might potentially be a half terrabyte of\ndata, and is not infrequently several gigabytes or more, is pretty\nsystem-depenendent.\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. 
--XTC\n", "msg_date": "Thu, 13 Feb 2003 22:09:41 +0900 (JST)", "msg_from": "Curt Sampson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "In the last exciting episode, [email protected] (Curt Sampson) wrote:\n> On Wed, 12 Feb 2003, Peter Bierman wrote:\n>\n>> What do you gain by having the postmaster config and the database\n>> data live in different locations?\n>\n> You can then standardize a location for the configuration files.\n>\n> Everybody has room in /etc for another 10K of data. Where you have\n> room for something that might potentially be a half terrabyte of\n> data, and is not infrequently several gigabytes or more, is pretty\n> system-depenendent.\n\nAh, but this has two notable problems:\n\n1. It assumes that there is \"a location\" for \"the configuration files\n for /the single database instance./\"\n\n If I have a second database instance, that may conflict.\n\n2. It assumes I have write access to /etc\n\n If I'm a Plain Old User, as opposed to root, I may only have\n read-only access to /etc.\n\nThese conditions have both been known to occur...\n-- \nIf this was helpful, <http://svcs.affero.net/rm.php?r=cbbrowne> rate me\nhttp://www.ntlug.org/~cbbrowne/rdbms.html\n\"Some people, when confronted with a Unix problem, think \"I know, I'll\nuse sed.\" Now they have two problems.\" -- Jamie Zawinski\n", "msg_date": "Thu, 13 Feb 2003 08:32:51 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Thu, 13 Feb 2003, Christopher Browne wrote:\n\n> 1. It assumes that there is \"a location\" for \"the configuration files\n> for /the single database instance./\"\n\nNo; it assumes that there's a location for \"the default instance.\" If\nyou have more than one, you could have one default and one elsewhere, or\njust do what I often do, which is put in an empty config file except for\na comment saying \"we have several instances of <xxx> on this machine; look\nin <yyy> for them.\"\n\n> 2. It assumes I have write access to /etc\n>\n> If I'm a Plain Old User, as opposed to root, I may only have\n> read-only access to /etc.\n\nRight. It's dependent on the sysadmin to create /etc/postgres/ and make\nit writeable, or set up proper symlinks, or whatever.\n\nFortunately, the files in /etc are only the defaults, to be used if\nthey're not overridden on the command line. If you're in a situation\nlike #2, you're basically stuck where we are now all the time: you have\nto just put it somewhere and hope that, if someone else needs to find\nit, they can.\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n", "msg_date": "Thu, 13 Feb 2003 23:18:00 +0900 (JST)", "msg_from": "Curt Sampson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Christopher Browne wrote:\n\n>In the last exciting episode, [email protected] (Curt Sampson) wrote:\n> \n>\n>>On Wed, 12 Feb 2003, Peter Bierman wrote:\n>>\n>> \n>>\n>>>What do you gain by having the postmaster config and the database\n>>>data live in different locations?\n>>> \n>>>\n>>You can then standardize a location for the configuration files.\n>>\n>>Everybody has room in /etc for another 10K of data. 
Where you have\n>>room for something that might potentially be a half terrabyte of\n>>data, and is not infrequently several gigabytes or more, is pretty\n>>system-depenendent.\n>> \n>>\n>\n>Ah, but this has two notable problems:\n>\n>1. It assumes that there is \"a location\" for \"the configuration files\n> for /the single database instance./\"\n>\n> If I have a second database instance, that may conflict.\n>\n>2. It assumes I have write access to /etc\n>\n> If I'm a Plain Old User, as opposed to root, I may only have\n> read-only access to /etc.\n>\n>These conditions have both been known to occur...\n> \n>\nThese are not issues at all. You could put the configuration file \nanywhere, just as you can for any UNIX service.\n\npostmaster --config=/home/myhome/mydb.conf\n\nI deal with a number of PG databases on a number of sites, and it is a \nreal pain in the ass to get to a PG box and hunt around for data \ndirectory so as to be able to administer the system. What's really \nannoying is when you have to find the data directory when someone else \nset up the system.\n\nConfiguring postgresql via a configuration file which specifies all the \ndata, i.e. data directory, name of other configuration files, etc. is \nthe right way to do it. Even if you have reasons against it, even if you \nthink it is a bad idea, a bad standard is almost always a better \nsolution than an arcane work of perfection.\n\nPersonally, however, I think the configuration issue is a no-brainer and \nI am amazed that people are balking. EVERY other service on a UNIX box \nis configured in this way, why not do it this way in PostgreSQL? The \npatch I submitted allowed the configuration to work as it currently \ndoes, but allowed for the more standard configuration file methodology.\n\nI just don't understand what the resistance is, it makes no sense.\n\n\n\n\n\n\n\n\n\n\n\n\n\nChristopher Browne wrote:\n\nIn the last exciting episode, [email protected] (Curt Sampson) wrote:\n \n\nOn Wed, 12 Feb 2003, Peter Bierman wrote:\n\n \n\nWhat do you gain by having the postmaster config and the database\ndata live in different locations?\n \n\nYou can then standardize a location for the configuration files.\n\nEverybody has room in /etc for another 10K of data. Where you have\nroom for something that might potentially be a half terrabyte of\ndata, and is not infrequently several gigabytes or more, is pretty\nsystem-depenendent.\n \n\n\nAh, but this has two notable problems:\n\n1. It assumes that there is \"a location\" for \"the configuration files\n for /the single database instance./\"\n\n If I have a second database instance, that may conflict.\n\n2. It assumes I have write access to /etc\n\n If I'm a Plain Old User, as opposed to root, I may only have\n read-only access to /etc.\n\nThese conditions have both been known to occur...\n \n\nThese are not issues at all. You could put the configuration file anywhere,\njust as you can for any UNIX service.\n\npostmaster --config=/home/myhome/mydb.conf\n\nI deal with a number of PG databases on a number of sites, and it is a real\npain in the ass to get to a PG box and hunt around for data directory so\nas to be able to administer the system. What's really annoying is when you\nhave to find the data directory when someone else set up the system.\n\nConfiguring postgresql via a configuration file which specifies all the data,\ni.e. data directory, name of other configuration files, etc. is the right\nway to do it. 
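(Purely as an illustration, and not necessarily the exact syntax of the
patch I submitted: think of a file holding entries along the lines of

    data_dir = '/RAID0/postgres'
    hba_file = '/etc/postgres/pg_hba.conf'
    port = 5432

so that everything the postmaster needs to find at startup is named in
one well-known place.)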
Even if you have reasons against it, even if you think it is\na bad idea, a bad standard is almost always a better solution than an arcane\nwork of perfection.\n\nPersonally, however, I think the configuration issue is a no-brainer and\nI am amazed that people are balking. EVERY other service on a UNIX box is\nconfigured in this way, why not do it this way in PostgreSQL? The patch I\nsubmitted allowed the configuration to work as it currently does, but allowed\nfor the more standard configuration file methodology.\n\nI just don't understand what the resistance is, it makes no sense.", "msg_date": "Thu, 13 Feb 2003 09:23:20 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Thursday 13 February 2003 08:32, Christopher Browne wrote:\n> In the last exciting episode, [email protected] (Curt Sampson) wrote:\n> > Everybody has room in /etc for another 10K of data. Where you have\n> > room for something that might potentially be a half terrabyte of\n> > data, and is not infrequently several gigabytes or more, is pretty\n> > system-depenendent.\n\n> 1. It assumes that there is \"a location\" for \"the configuration files\n> for /the single database instance./\"\n\n> If I have a second database instance, that may conflict.\n\nIf you run multiple servers of any kind, the second and subsequent servers \nmust have a command line switch specifying the location of the config file. \nThis is the way named, sendmail, et al do it. I have run multiple nameds on \na single box, by using alternate config file locations.\n\n> 2. It assumes I have write access to /etc\n\n> If I'm a Plain Old User, as opposed to root, I may only have\n> read-only access to /etc.\n\nSo you start postmaster with a config file switch pointing to your config file \nin your tree. Or you specify the default location with a configure switch at \ncompile time. Or you do it the same way you would run any other typical \ndaemon as a regular user. How does Apache, AOLserver (my favorite), \nsendmail, jabberd, named, or any other typical daemon do it? \n\nFor example, AOLserver can easily be installed and run as a plain user (just \nnot on port 80). The command line switch '-t' specifies the tcl \nconfiguration script's location. There is no default. The configuration \nscript then specifies pageroot and the like -- and a webserver is very much \nlike running PostgreSQL -- you have configuration, you have logs, and you \nhave the spool (database or pageroot). All can be in different locations at \nthe discretion of the admin. And hardcoding dependencies like this stifles \nthe discretion of the admin.\n\n> These conditions have both been known to occur...\n\nJust because the situation is known to occur doesn't mean the whole direction \nof a project should hinge on those corner cases. They should be allowed but \nnot forced.\n\nFor better or for worse, thanks to Karl DeBisschop, the latest RPMs have the \nability to start multiple postmasters with their data trees and \nconfigurations in different places. This is all done in the startup script, \nand required no new functionality from postmaster. 
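(In rough outline, not the literal script: it loops over the configured
instances and starts each one against its own tree,

    for PGDATA in /var/lib/pgsql/data /var/lib/pgsql/data2
    do
        su - postgres -c \"pg_ctl start -D $PGDATA\"
    done

with each tree's own postgresql.conf supplying its port and the rest.)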
I personally think it is \nfor the better; YMMV.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n\n", "msg_date": "Thu, 13 Feb 2003 09:50:22 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Thu, 13 Feb 2003, Curt Sampson wrote:\n\n> On Thu, 13 Feb 2003, Christopher Browne wrote:\n> \n> > 1. It assumes that there is \"a location\" for \"the configuration files\n> > for /the single database instance./\"\n> \n> No; it assumes that there's a location for \"the default instance.\" If\n> you have more than one, you could have one default and one elsewhere, or\n> just do what I often do, which is put in an empty config file except for\n> a comment saying \"we have several instances of <xxx> on this machine; look\n> in <yyy> for them.\"\n> \n> > 2. It assumes I have write access to /etc\n> >\n> > If I'm a Plain Old User, as opposed to root, I may only have\n> > read-only access to /etc.\n> \n> Right. It's dependent on the sysadmin to create /etc/postgres/ and make\n> it writeable, or set up proper symlinks, or whatever.\n> \n> Fortunately, the files in /etc are only the defaults, to be used if\n> they're not overridden on the command line. If you're in a situation\n> like #2, you're basically stuck where we are now all the time: you have\n> to just put it somewhere and hope that, if someone else needs to find\n> it, they can.\n\nIt doesn't follow this line of argument directly but it's to do with this\nthread...\n\nIs everyone forgetting that wherever the configuration file is stored and\nwhether or not it needs a command line argument to specify it the database is\nnot going to start up automatically unless at least part of the installation is\ndone as root anyway?\n\nAs I like to install software as a non root user normally anyway I am happy\nthat the config file lives somewhere not requiring write access by the\ninstaller. However, I think having it in an etc directory is a good thing\n(tm). So, colour me an uncommited, fence sitter :)\n\nI'm not talking distribution/package installation here but just plain system\nadministration. Being an untrusting soul I do _not_ want to type make install\nas root and find things installed outside of where I say I want things placed.\nThat includes configuration files. Doing this as a normal user protects the\nsystem from bad software which assumes things about the host system. It also\nsimplifies switching between versions of software, try doing that if your\nconfig is /etc/postgresql/postgres.conf.\n\n\n-- \nNigel J. Andrews\n\n", "msg_date": "Thu, 13 Feb 2003 15:03:09 +0000 (GMT)", "msg_from": "\"Nigel J. Andrews\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "mlw <[email protected]> writes:\n> Here is the test, configure a server, with sendmail, named, apache, and \n> PostgreSQL. Tell me which of these systems doesn't configure right.\n\nAFAIK, only one of those four is designed to support multiple instances\nrunning on a single machine. This is not unrelated.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Feb 2003 10:03:42 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files " }, { "msg_contents": "Tom Lane wrote:\n\n>mlw <[email protected]> writes:\n> \n>\n>>Here is the test, configure a server, with sendmail, named, apache, and \n>>PostgreSQL. 
Tell me which of these systems doesn't configure right.\n>> \n>>\n>\n>AFAIK, only one of those four is designed to support multiple instances\n>running on a single machine. This is not unrelated.\n>\n> \n>\nWhile I will agree with you on sendmail and named, Apache is often run \nmore than once with different options.\nFurthermore, I hate to keep bringing it up, Oracle does use the \nconfiguration file methodology.\n\nTom, I just don't understand why this is being resisted so vigorously. \nWhat is wrong with starting PostgreSQL as:\n\npostmaster -C /etc/postgresql.conf\n\nUNIX admins would love to have this as a methodology, I don't think you \ncan deny this, can you? I, as a long term PG user, really really want \nthis, because in the long run, it makes PostgreSQL easier to administer.\n\nIf a patch allows PG to function as it does, but also allows a \nconfiguration file methodology, why not?\n\n\n\n\n\n\n\n\nTom Lane wrote:\n\nmlw <[email protected]> writes:\n \n\nHere is the test, configure a server, with sendmail, named, apache, and \nPostgreSQL. Tell me which of these systems doesn't configure right.\n \n\n\nAFAIK, only one of those four is designed to support multiple instances\nrunning on a single machine. This is not unrelated.\n\n \n\nWhile I will agree with you on sendmail and named, Apache is often run more\nthan once with different options.\nFurthermore, I hate to keep bringing it up, Oracle does use the configuration\nfile methodology.\n\nTom, I just don't understand why this is being resisted so vigorously. What\nis wrong with starting PostgreSQL as:\n\npostmaster -C /etc/postgresql.conf\n\nUNIX admins would love to have this as a methodology, I don't think you can\ndeny this, can you? I, as a long term PG user, really really want this, because\nin the long run, it makes PostgreSQL easier to administer.\n\nIf a patch allows PG to function as it does, but also allows a configuration\nfile methodology, why not?", "msg_date": "Thu, 13 Feb 2003 10:19:11 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Thu, 13 Feb 2003, mlw wrote:\n\n> Tom, I just don't understand why this is being resisted so vigorously. \n> What is wrong with starting PostgreSQL as:\n> \n> postmaster -C /etc/postgresql.conf\n> \n> UNIX admins would love to have this as a methodology, I don't think you \n> can deny this, can you? I, as a long term PG user, really really want \n> this, because in the long run, it makes PostgreSQL easier to administer.\n> \n> If a patch allows PG to function as it does, but also allows a \n> configuration file methodology, why not?\n\nI forgot to say that I don't see why this facility can't be included in\naddition to the existing scheme.\n\n\n-- \nNigel J. Andrews\n\n", "msg_date": "Thu, 13 Feb 2003 15:22:39 +0000 (GMT)", "msg_from": "\"Nigel J. Andrews\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Thursday 13 February 2003 10:03, Tom Lane wrote:\n> mlw <[email protected]> writes:\n> > Here is the test, configure a server, with sendmail, named, apache, and\n> > PostgreSQL. Tell me which of these systems doesn't configure right.\n\n> AFAIK, only one of those four is designed to support multiple instances\n> running on a single machine. 
This is not unrelated.\n\nOne can run many nameds on a single machine by specifying '-c \nalternate_named.conf' , which then points to a different set of zone files, \nlistens to either a different port or address, etc. I have personally done \nthis, and it worked as if it were designed to do so.\n\nApache can easily have multiple instances by passing the location of \nhttpd.conf on the command line. Everything else comes from that. Although \nApache's virtual hosting is typically use instead, it may be necessary for \nlarge sites to run multiple instances with degrees of separation at the \nconfig file level.\n\nI use AOLserver, which in version 3 is designed from the ground up for many \n(even thousands) of instances on a single box. You access this capability \nwith the '-t' switch (it stands for 'tcl config script' -- previous versions \nof AOLserver had an 'ini' file accessed with '-i', and version 3 added the \ntcl config script and deprecated the ini file). In fact, since there is no \ndefault, you MUST use -t. The tcl config script specifies all the parameters \nthat instance needs (with the exception of the user and group ID the server \nshould run as, if started as root (for port 80 access) -- but that doesn't \neffect PostgreSQL since our port is above 1024). Two instances can even \nshare a tcl config script, as long as the virtual server name is specified on \nthe command line, and the tcl config has multiple virtual server sections. \n\nI personally only lightly use this feature, running a mere half dozen \nAOLserver's on one of my production servers. All of which share a single \nPostgreSQL instance; but that's a different story. \n\nAOLserver is an excellent example here, as everything that has a location is \nconfigurable. During ./configure, you can specify the prefix and the other \nstandard autoconf type options. This includes the location of the \n--enable-thread built Tcl 8.4 that you have to have first. I have two \nversions of AOLserver on that machine, and they coexist very well, because I \n_can_ be so specific in where everything lies. I run OpenACS on two of those \ninstances, and, due to the size of that install I have those two pageroots on \na separate filesystem from the binaries and config script. It was just a \nsingle tcl config entry. No biggie.\n\nEven sendmail has a -c switch to specify the location of sendmail.cf: so, yes, \nyou can run multiple instances, although it could be argued that it wasn't \ndesigned in.\n\nNext?\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n\n", "msg_date": "Thu, 13 Feb 2003 10:32:19 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Tom Lane wrote:\n\n>mlw <[email protected]> writes:\n> \n>\n>>Here is the test, configure a server, with sendmail, named, apache, and \n>>PostgreSQL. Tell me which of these systems doesn't configure right.\n>> \n>>\n>\n>AFAIK, only one of those four is designed to support multiple instances\n>running on a single machine. 
This is not unrelated.\n> \n>\nAlso, using an explicit configuration file will also help you run \nmultiple postgresql's on the same machien in a consistent manner, for \ninstance:\n\npostmaster -C /etc/postgres/common.conf -D /RAID0/postgres -p 5432\npostmaster -C /etc/postgres/common.conf -D /RAID1/postgres -p 5433\n\nPlease, Tom, tell me why this is such a bad idea?\n\nI will make the patch, I will submit it, will you guys put it in?\n\nIf not, why?\n\n> \n>\n\n\n\n\n\n\n\n\n\nTom Lane wrote:\n\nmlw <[email protected]> writes:\n \n\nHere is the test, configure a server, with sendmail, named, apache, and \nPostgreSQL. Tell me which of these systems doesn't configure right.\n \n\n\nAFAIK, only one of those four is designed to support multiple instances\nrunning on a single machine. This is not unrelated.\n \n\nAlso, using an explicit configuration file will also help you run multiple\npostgresql's on the same machien in a consistent manner, for instance:\n\npostmaster -C /etc/postgres/common.conf -D /RAID0/postgres -p 5432\npostmaster -C /etc/postgres/common.conf -D /RAID1/postgres -p 5433\n\nPlease, Tom, tell me why this is such a bad idea?\n\nI will make the patch, I will submit it, will you guys put it in?\n\nIf not, why?", "msg_date": "Thu, 13 Feb 2003 10:59:11 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Thu, Feb 13, 2003 at 15:03:09 +0000,\n \"Nigel J. Andrews\" <[email protected]> wrote:\n> \n> Is everyone forgetting that wherever the configuration file is stored and\n> whether or not it needs a command line argument to specify it the database is\n> not going to start up automatically unless at least part of the installation is\n> done as root anyway?\n\nUsers can use cron to start there own instance. Your cron script can check\nif the server is running every (say) 15 minutes and start the server\nif it isn't.\n", "msg_date": "Thu, 13 Feb 2003 10:28:52 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Thu, 2003-02-13 at 09:23, mlw wrote: \n> I deal with a number of PG databases on a number of sites, and it is a\n> real pain in the ass to get to a PG box and hunt around for data\n> directory so as to be able to administer the system. What's really\n> annoying is when you have to find the data directory when someone else\n> set up the system.\n> \n\nfind / -name postgresql.conf -print\n\nyou now know where all of your configuration files are and where the\ndata for each of those servers is as well. \n\n(Not I'm not against the idea...)\n\nRobert Treat\n\n", "msg_date": "13 Feb 2003 11:32:24 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Thu, Feb 13, 2003 at 09:23:20 -0500,\n mlw <[email protected]> wrote:\n> \n> Personally, however, I think the configuration issue is a no-brainer and \n> I am amazed that people are balking. EVERY other service on a UNIX box \n> is configured in this way, why not do it this way in PostgreSQL? 
The \n> patch I submitted allowed the configuration to work as it currently \n> does, but allowed for the more standard configuration file methodology.\n\nIf you are interested in reading a contrary position, you can read\nBerstein's arguments for his recommended way to install services at:\nhttp://cr.yp.to/unix.html\n", "msg_date": "Thu, 13 Feb 2003 10:36:51 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Robert Treat wrote:\n\n>On Thu, 2003-02-13 at 09:23, mlw wrote: \n> \n>\n>>I deal with a number of PG databases on a number of sites, and it is a\n>>real pain in the ass to get to a PG box and hunt around for data\n>>directory so as to be able to administer the system. What's really\n>>annoying is when you have to find the data directory when someone else\n>>set up the system.\n>>\n>> \n>>\n>\n>find / -name postgresql.conf -print\n>\n\nLOL, That is NOT an option. It can take hours on some systems. \nSpecifically, one of the systems is freedb server, it has over 300,000 \nfiles in a directory tree.\n\n\n\n\n\n\n\n\n\nRobert Treat wrote:\n\nOn Thu, 2003-02-13 at 09:23, mlw wrote: \n \n\nI deal with a number of PG databases on a number of sites, and it is a\nreal pain in the ass to get to a PG box and hunt around for data\ndirectory so as to be able to administer the system. What's really\nannoying is when you have to find the data directory when someone else\nset up the system.\n\n \n\n\nfind / -name postgresql.conf -print\n\n\nLOL, That is NOT an option.  It can take hours on some systems. Specifically,\none of the systems is  freedb server, it has over 300,000 files in a directory\ntree.", "msg_date": "Thu, 13 Feb 2003 11:47:22 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Thu, 13 Feb 2003, mlw wrote:\n\n>\n>\n> Robert Treat wrote:\n>\n> >On Thu, 2003-02-13 at 09:23, mlw wrote:\n> >\n> >\n> >>I deal with a number of PG databases on a number of sites, and it is a\n> >>real pain in the ass to get to a PG box and hunt around for data\n> >>directory so as to be able to administer the system. What's really\n> >>annoying is when you have to find the data directory when someone else\n> >>set up the system.\n\nYou realize that the actual code feature doesn't necessarily help this\ncase, right? Putting configuration in /etc and having a configuration file\noption on the command line are separate concepts.\n\nI think the feature is worthwhile, but I have some initial condition\nfunctionality questions that may have been answered in the previous patch,\nbut I don't remember at this point.\n\nMostly these have to deal with initial creation. Does the user specify an\noutput location to initdb, do they just specify a data dir as now where\nthe configuration goes but then they need to move it somewhere, does\ninitdb now do nothing relating to configuration file and the user should\nmake one on his own. 
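To make that concrete (the switch below is purely hypothetical, nothing like it exists in initdb today), do we expect an invocation along the lines of\n\ninitdb -D /usr/local/pgsql/data --config-dir=/etc/postgresql\n\nto write postgresql.conf into the second location, or does initdb keep writing it into the data directory and the admin moves it wherever he likes afterwards?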
Related, is the admin expected to have already made\n(say) /etc/postgresql to stick the config in and set the permissions\ncorrectly (since initdb doesn't run as root)?\n\n\n", "msg_date": "Thu, 13 Feb 2003 08:49:21 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Bruno Wolff III wrote:\n\n>On Thu, Feb 13, 2003 at 09:23:20 -0500,\n> mlw <[email protected]> wrote:\n> \n>\n>>Personally, however, I think the configuration issue is a no-brainer and \n>>I am amazed that people are balking. EVERY other service on a UNIX box \n>>is configured in this way, why not do it this way in PostgreSQL? The \n>>patch I submitted allowed the configuration to work as it currently \n>>does, but allowed for the more standard configuration file methodology.\n>> \n>>\n>\n>If you are interested in reading a contrary position, you can read\n>Berstein's arguments for his recommended way to install services at:\n>http://cr.yp.to/unix.html\n>\n> \n>\nWhere, specificaly are his arguements against a configuration file \nmethodology?\n\n> \n>\n\n\n\n\n\n\n\n\n\nBruno Wolff III wrote:\n\nOn Thu, Feb 13, 2003 at 09:23:20 -0500,\n mlw <[email protected]> wrote:\n \n\nPersonally, however, I think the configuration issue is a no-brainer and \nI am amazed that people are balking. EVERY other service on a UNIX box \nis configured in this way, why not do it this way in PostgreSQL? The \npatch I submitted allowed the configuration to work as it currently \ndoes, but allowed for the more standard configuration file methodology.\n \n\n\nIf you are interested in reading a contrary position, you can read\nBerstein's arguments for his recommended way to install services at:\nhttp://cr.yp.to/unix.html\n\n \n\nWhere, specificaly are his arguements against a configuration file methodology?", "msg_date": "Thu, 13 Feb 2003 11:53:26 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Stephan Szabo wrote:\n\n>On Thu, 13 Feb 2003, mlw wrote:\n>\n> \n>\n>>Robert Treat wrote:\n>>\n>> \n>>\n>>>On Thu, 2003-02-13 at 09:23, mlw wrote:\n>>>\n>>>\n>>> \n>>>\n>>>>I deal with a number of PG databases on a number of sites, and it is a\n>>>>real pain in the ass to get to a PG box and hunt around for data\n>>>>directory so as to be able to administer the system. What's really\n>>>>annoying is when you have to find the data directory when someone else\n>>>>set up the system.\n>>>> \n>>>>\n>\n>You realize that the actual code feature doesn't necessarily help this\n>case, right? Putting configuration in /etc and having a configuration file\n>option on the command line are separate concepts.\n>\n>I think the feature is worthwhile, but I have some initial condition\n>functionality questions that may have been answered in the previous patch,\n>but I don't remember at this point.\n>\n>Mostly these have to deal with initial creation. Does the user specify an\n>output location to initdb, do they just specify a data dir as now where\n>the configuration goes but then they need to move it somewhere, does\n>initdb now do nothing relating to configuration file and the user should\n>make one on his own. Related, is the admin expected to have already made\n>(say) /etc/postgresql to stick the config in and set the permissions\n>correctly (since initdb doesn't run as root)?\n>\nMy patch only works on the PostgreSQL server code. 
No changes have been \nmade to the initialization scripts.\n\nThe patch declares three extra configuration file parameters:\nhbafile= '/etc/postgres/pg_hba.conf'\nidentfile='/etc/postgres/pg_ident.conf'\ndatadir='/RAID0/postgres'\n\nThe command line option is a capital 'C,' as in:\npostmaster -C /etc/postgresql.conf\n\nI have no problem leaving the default configuration files remaining in \nthe data directory as sort of a maintenance / boot strap sort of thing, \nso I don't see any reason to alter the installation.\n\n\nAs for this feature helping or not, I think it will. I think it \naccomplishes two things:\n(1) Separates configuration from data.\n(2) Allows an administrator to create a convention across multiple \nsystems regardless of the location and mount points of the database storage.\n(3) Lastly, it is a familiar methodology to DBAs not familiar with \nPostgreSQL.\n\nAgain, I don't see a valid reason for not including the patch. Yes, if \nyou don't want to configure PostgreSQL that way, then so be it, but why \nnot add the functionality for those who do?\n\nI can envision the configuration file methodology of managing a database \nbecoming the \"preferred\" approach over time as it is a more familiar and \nstandard way of configuring servers on UNIX.\n\n\n> \n>\n\n\n\n\n\n\nStephan Szabo wrote:\n\nOn Thu, 13 Feb 2003, mlw wrote:\n\n \n\n\nRobert Treat wrote:\n\n \n\nOn Thu, 2003-02-13 at 09:23, mlw wrote:\n\n\n \n\nI deal with a number of PG databases on a number of sites, and it is a\nreal pain in the ass to get to a PG box and hunt around for data\ndirectory so as to be able to administer the system. What's really\nannoying is when you have to find the data directory when someone else\nset up the system.\n \n\n\n\n\nYou realize that the actual code feature doesn't necessarily help this\ncase, right? Putting configuration in /etc and having a configuration file\noption on the command line are separate concepts.\n\nI think the feature is worthwhile, but I have some initial condition\nfunctionality questions that may have been answered in the previous patch,\nbut I don't remember at this point.\n\nMostly these have to deal with initial creation. Does the user specify an\noutput location to initdb, do they just specify a data dir as now where\nthe configuration goes but then they need to move it somewhere, does\ninitdb now do nothing relating to configuration file and the user should\nmake one on his own. Related, is the admin expected to have already made\n(say) /etc/postgresql to stick the config in and set the permissions\ncorrectly (since initdb doesn't run as root)?\n\nMy patch only works on the PostgreSQL server code. No changes have been made\nto the initialization scripts.\n\nThe patch declares three extra configuration file parameters:\nhbafile= '/etc/postgres/pg_hba.conf'\nidentfile='/etc/postgres/pg_ident.conf'\ndatadir='/RAID0/postgres'\n\nThe command line option is a capital 'C,' as in:\npostmaster -C /etc/postgresql.conf\n\nI have no problem leaving the default configuration files remaining in the\ndata directory as sort of a maintenance / boot strap sort of thing, so I\ndon't see any reason to alter the installation.\n\n\nAs for this feature helping or not, I think it will. 
I think it accomplishes\ntwo things:\n(1) Separates configuration from data.\n(2) Allows an administrator to create a convention across multiple systems\nregardless of the location and mount points of the database storage.\n(3) Lastly, it is a familiar methodology to DBAs not familiar with PostgreSQL.\n\nAgain, I don't see a valid reason for not including the patch. Yes, if you\ndon't want to configure PostgreSQL that way, then so be it, but why not add\nthe functionality for those who do?\n\nI can envision the configuration file methodology of managing a database\nbecoming the \"preferred\" approach over time as it is a more familiar and\nstandard way of configuring servers on UNIX.", "msg_date": "Thu, 13 Feb 2003 12:13:00 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Thu, 2003-02-13 at 13:32, Christopher Browne wrote:\n> > Everybody has room in /etc for another 10K of data. Where you have\n> > room for something that might potentially be a half terrabyte of\n> > data, and is not infrequently several gigabytes or more, is pretty\n> > system-depenendent.\n> \n> Ah, but this has two notable problems:\n> \n> 1. It assumes that there is \"a location\" for \"the configuration files\n> for /the single database instance./\"\n> \n> If I have a second database instance, that may conflict.\n\nI think that moving configuration to [/usr/local]/etc/postgresql implies\nthe need for sub-directories by port, possibly with a default config to\nbe used if there is no port-specific config file.\n\n> 2. It assumes I have write access to /etc\n> \n> If I'm a Plain Old User, as opposed to root, I may only have\n> read-only access to /etc.\n\nThe location should be configurable; I hope we're talking about the\ndefault here. For distributions it should be /etc/postgresql; for local\nbuilds it should be /usr/local/etc/postgresql, assuming you have root\naccess. If you don't, the -c configfile switch suggested elsewhere in\nthis debate would be needed.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight, UK http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"The earth is the LORD'S, and the fullness thereof; the\n world, and they that dwell therein.\" \n Psalms 24:1 \n\n", "msg_date": "13 Feb 2003 17:28:14 +0000", "msg_from": "Oliver Elphick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Thu, 2003-02-13 at 12:00, Vince Vielhaber wrote:\n> > Which means if the the vendor installed Postgresql (say, the\n> > Red Hat Database) you'd expect config files to be in /etc.\n> > If the postgresql is compiled from source by local admin,\n> > you might look somewhere in /usr/local.\n> \n> Then why not ~postgres/etc ?? Or substitute ~postgres with the\n> db admin user you (or the distro) decided on at installation time.\n> Gives a common location no matter who installed it or where it was\n> installed.\n\nBecause it doesn't comply with FHS. All projects should remember that\nthey coexist with many others and should do their best to stick to\ncommon standards.\n\nThe default config file location should be set as a parameter to\n./configure, which should default to /usr/local/etc/postgresql. 
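A sketch of what I mean, using the standard autoconf directory option (I have not checked how consistently the tree honours sysconfdir at present, so treat this as an illustration rather than a tested recipe):\n\n./configure --sysconfdir=/usr/local/etc/postgresql\n\nThe postmaster would then look there for postgresql.conf whenever no explicit location is given on the command line.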
Those\nof us who build for distributions will change it to /etc/postgresql.\n\nI suppose if we want to run different postmasters simultaneously, we\ncould have /etc/postgresql/5432/ and so on for each port number being\nused. Perhaps have a default set in /etc/postgresql/ which can be used\nif there is no port-specific directory, but a postmaster using those\ndefaults would have to have PGDATA specified on the command line.\n\n\n\nOn the same lines, I have just had a request (as Debian maintainer) to\nmove the location of postmaster.pid to the /var/run hierarchy; firstly,\nso as to make it easier for the administrator to find, and secondly so\nas to make it easier to configure SE Linux policy for file access. (SE\nLinux is the highly secure version produced by the NSA.)\n\nI'm not entirely sure why SE Linux has a problem, seeing that postgres\nneeds read-write access to all the files in $PGDATA, but assuming the\nneed is verified, I could do this by moving the pid file from\n$PGDATA/postmaster.pid to /var/run/postgresql/5432.pid and similarly for\nother ports. This would also have the benefit of being more FHS\ncompliant What do people think about that?\n\n-- \nOliver Elphick [email protected]\nIsle of Wight, UK http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"The earth is the LORD'S, and the fullness thereof; the\n world, and they that dwell therein.\" \n Psalms 24:1 \n\n", "msg_date": "13 Feb 2003 17:28:35 +0000", "msg_from": "Oliver Elphick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On 13 Feb 2003, Oliver Elphick wrote:\n\n> On Thu, 2003-02-13 at 12:00, Vince Vielhaber wrote:\n> > > Which means if the the vendor installed Postgresql (say, the\n> > > Red Hat Database) you'd expect config files to be in /etc.\n> > > If the postgresql is compiled from source by local admin,\n> > > you might look somewhere in /usr/local.\n> >\n> > Then why not ~postgres/etc ?? Or substitute ~postgres with the\n> > db admin user you (or the distro) decided on at installation time.\n> > Gives a common location no matter who installed it or where it was\n> > installed.\n>\n> Because it doesn't comply with FHS. All projects should remember that\n> they coexist with many others and should do their best to stick to\n> common standards.\n>\n> The default config file location should be set as a parameter to\n> ./configure, which should default to /usr/local/etc/postgresql. Those\n> of us who build for distributions will change it to /etc/postgresql.\n\nSeems to me that if FHS allows such a mess, it's reason enough to avoid\ncompliance. Either that or those of you who build for distributions are\nmaking an ill advised change. Simply because the distribution makes the\ndecision to add PostgreSQL, or some other package, to it's distribution\ndoesn't make it a requirement to change the location of the config files.\n\nVince.\n-- \n Fast, inexpensive internet service 56k and beyond! 
http://www.pop4.net/\n http://www.meanstreamradio.com http://www.unknown-artists.com\n Internet radio: It's not file sharing, it's just radio.\n\n", "msg_date": "Thu, 13 Feb 2003 12:52:23 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Thu, 2003-02-13 at 17:52, Vince Vielhaber wrote:\n> Seems to me that if FHS allows such a mess, it's reason enough to avoid\n> compliance. Either that or those of you who build for distributions are\n> making an ill advised change. Simply because the distribution makes the\n> decision to add PostgreSQL, or some other package, to it's distribution\n> doesn't make it a requirement to change the location of the config files.\n\nDebian (and FHS) specifically requires that. All configuration files\nMUST be under /etc; the reason is to make the system administrator's job\neasier. Part of the raison d'etre of a distribution is to rationalise\nthe idiosyncrasies of individual projects. The locations used by\nlocally-built packages are up to the local administrator, but they\nreally should not be in /etc and are recommended to be under /usr/local.\n\nI really don't see why there is such a not-invented-here mentality about\nthis issue. I say again, standards-compliance is the best way. It\nmakes life easier for everyone if standards are followed. Don't we\npride ourselves on being closer to the SQL spec than other databases? \nAny way, if PostgreSQL stays as it is, I will continue to have to ensure\nthat initdb creates symlinks to /etc/postgresql/, as happens now.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight, UK http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"The earth is the LORD'S, and the fullness thereof; the\n world, and they that dwell therein.\" \n Psalms 24:1 \n\n", "msg_date": "13 Feb 2003 18:18:31 +0000", "msg_from": "Oliver Elphick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Oliver Elphick wrote:\n> On Thu, 2003-02-13 at 17:52, Vince Vielhaber wrote:\n> > Seems to me that if FHS allows such a mess, it's reason enough to avoid\n> > compliance. Either that or those of you who build for distributions are\n> > making an ill advised change. Simply because the distribution makes the\n> > decision to add PostgreSQL, or some other package, to it's distribution\n> > doesn't make it a requirement to change the location of the config files.\n> \n> Debian (and FHS) specifically requires that. All configuration files\n> MUST be under /etc; the reason is to make the system administrator's job\n> easier. Part of the raison d'etre of a distribution is to rationalise\n> the idiosyncrasies of individual projects. The locations used by\n> locally-built packages are up to the local administrator, but they\n> really should not be in /etc and are recommended to be under /usr/local.\n> \n> I really don't see why there is such a not-invented-here mentality about\n> this issue. I say again, standards-compliance is the best way. It\n> makes life easier for everyone if standards are followed. Don't we\n> pride ourselves on being closer to the SQL spec than other databases? 
\n> Any way, if PostgreSQL stays as it is, I will continue to have to ensure\n> that initdb creates symlinks to /etc/postgresql/, as happens now.\n\nIt doesn't have anything to do with \"not-invented-here\", which is a\ncommon refrain by people who don't like our decisions, like \"Why don't\nyou use mmap()? Oh, it's because I thought of it and you didn't\". Does\nanyone seriously believe that is the motiviation of anyone in this\nproject! I certainly don't.\n\nNow, on to this configuration discussion. Seems moving the config file\nout of $PGDATA requies either:\n\t\n\t1) we specifiy both the config directory and the data directory on\n\tpostmaster start\n\t\n\t2) we specify the pgdata directory inside postgresql.conf or\n\tother config file\n\nIs this accurate?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 13 Feb 2003 13:45:27 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Bruce Momjian wrote:\n\n>Oliver Elphick wrote:\n> \n>\n>>On Thu, 2003-02-13 at 17:52, Vince Vielhaber wrote:\n>> \n>>\n>>>Seems to me that if FHS allows such a mess, it's reason enough to avoid\n>>>compliance. Either that or those of you who build for distributions are\n>>>making an ill advised change. Simply because the distribution makes the\n>>>decision to add PostgreSQL, or some other package, to it's distribution\n>>>doesn't make it a requirement to change the location of the config files.\n>>> \n>>>\n>>Debian (and FHS) specifically requires that. All configuration files\n>>MUST be under /etc; the reason is to make the system administrator's job\n>>easier. Part of the raison d'etre of a distribution is to rationalise\n>>the idiosyncrasies of individual projects. The locations used by\n>>locally-built packages are up to the local administrator, but they\n>>really should not be in /etc and are recommended to be under /usr/local.\n>>\n>>I really don't see why there is such a not-invented-here mentality about\n>>this issue. I say again, standards-compliance is the best way. It\n>>makes life easier for everyone if standards are followed. Don't we\n>>pride ourselves on being closer to the SQL spec than other databases? \n>>Any way, if PostgreSQL stays as it is, I will continue to have to ensure\n>>that initdb creates symlinks to /etc/postgresql/, as happens now.\n>> \n>>\n>\n>It doesn't have anything to do with \"not-invented-here\", which is a\n>common refrain by people who don't like our decisions, like \"Why don't\n>you use mmap()? Oh, it's because I thought of it and you didn't\". Does\n>anyone seriously believe that is the motiviation of anyone in this\n>project! I certainly don't.\n>\n>Now, on to this configuration discussion. 
Seems moving the config file\n>out of $PGDATA requies either:\n>\t\n>\t1) we specifiy both the config directory and the data directory on\n>\tpostmaster start\n>\t\n>\t2) we specify the pgdata directory inside postgresql.conf or\n>\tother config file\n>\n>Is this accurate?\n> \n>\nThe patch that I have adds three settings to postgresql.conf and one \ncommand line parameter.\n\nhba_conf = 'filename'\nident_conf='filename'\ndata_dir='path'\n\nThe command linae parameter is -C, used as:\n\npostmaster -C /usr/local/etc/postgresql.conf\n\nI think this will help administrators.\n\nBruce, can you shed some light as to why this is being so strongly \nrejected. I just don't see any downside. I just don't get it.\n\nI will be resubmitting my patch for the 7.3.2 tree.\n\n\n> \n>\n\n\n\n\n\n\n\n\n\nBruce Momjian wrote:\n\nOliver Elphick wrote:\n \n\nOn Thu, 2003-02-13 at 17:52, Vince Vielhaber wrote:\n \n\nSeems to me that if FHS allows such a mess, it's reason enough to avoid\ncompliance. Either that or those of you who build for distributions are\nmaking an ill advised change. Simply because the distribution makes the\ndecision to add PostgreSQL, or some other package, to it's distribution\ndoesn't make it a requirement to change the location of the config files.\n \n\nDebian (and FHS) specifically requires that. All configuration files\nMUST be under /etc; the reason is to make the system administrator's job\neasier. Part of the raison d'etre of a distribution is to rationalise\nthe idiosyncrasies of individual projects. The locations used by\nlocally-built packages are up to the local administrator, but they\nreally should not be in /etc and are recommended to be under /usr/local.\n\nI really don't see why there is such a not-invented-here mentality about\nthis issue. I say again, standards-compliance is the best way. It\nmakes life easier for everyone if standards are followed. Don't we\npride ourselves on being closer to the SQL spec than other databases? \nAny way, if PostgreSQL stays as it is, I will continue to have to ensure\nthat initdb creates symlinks to /etc/postgresql/, as happens now.\n \n\n\nIt doesn't have anything to do with \"not-invented-here\", which is a\ncommon refrain by people who don't like our decisions, like \"Why don't\nyou use mmap()? Oh, it's because I thought of it and you didn't\". Does\nanyone seriously believe that is the motiviation of anyone in this\nproject! I certainly don't.\n\nNow, on to this configuration discussion. Seems moving the config file\nout of $PGDATA requies either:\n\t\n\t1) we specifiy both the config directory and the data directory on\n\tpostmaster start\n\t\n\t2) we specify the pgdata directory inside postgresql.conf or\n\tother config file\n\nIs this accurate?\n \n\nThe patch that I have adds three settings to postgresql.conf and one command\nline parameter.\n\nhba_conf = 'filename'\nident_conf='filename'\ndata_dir='path'\n\nThe command linae parameter is  -C, used as:\n\npostmaster -C /usr/local/etc/postgresql.conf\n\nI think this will help administrators. \n\nBruce, can you shed some light as to why this is being so strongly rejected.\nI just don't see any downside. I just don't get it. \n\nI will be resubmitting my patch for the 7.3.2 tree.", "msg_date": "Thu, 13 Feb 2003 14:06:23 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Thu, 2003-02-13 at 18:45, Bruce Momjian wrote:\n> Now, on to this configuration discussion. 
Seems moving the config file\n> out of $PGDATA requies either:\n> \t\n> \t1) we specifiy both the config directory and the data directory on\n> \tpostmaster start\n> \t\n> \t2) we specify the pgdata directory inside postgresql.conf or\n> \tother config file\n> \n> Is this accurate?\n\nThe default start would read the config file from its predefined\nlocation, set by ./configure. No command line options would be\nnecessary for the postmaster to run, though they could be provided.\n\nThe config file should contain the pgdata location; this and any other\nparameter should be overridden if a different location is specified by a\ncommand-line option. I think the config should be able to contain all\ninformation that can be specified on the command line (except, of\ncourse, the location of the configuration file.)\n\n-- \nOliver Elphick [email protected]\nIsle of Wight, UK http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"The earth is the LORD'S, and the fullness thereof; the\n world, and they that dwell therein.\" \n Psalms 24:1 \n\n", "msg_date": "13 Feb 2003 19:13:14 +0000", "msg_from": "Oliver Elphick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Thu, 2003-02-13 at 18:45, Bruce Momjian wrote:\n> Oliver Elphick wrote:\n> > On Thu, 2003-02-13 at 17:52, Vince Vielhaber wrote:\n> > > Seems to me that if FHS allows such a mess, it's reason enough to avoid\n> > > compliance. Either that or those of you who build for distributions are\n> > > making an ill advised change. Simply because the distribution makes the\n> > > decision to add PostgreSQL, or some other package, to it's distribution\n> > > doesn't make it a requirement to change the location of the config files.\n> > ...\n> > I really don't see why there is such a not-invented-here mentality about\n> > this issue. I say again, standards-compliance is the best way. It\n> > makes life easier for everyone if standards are followed. Don't we\n> > pride ourselves on being closer to the SQL spec than other databases? \n> > Any way, if PostgreSQL stays as it is, I will continue to have to ensure\n> > that initdb creates symlinks to /etc/postgresql/, as happens now.\n> \n> It doesn't have anything to do with \"not-invented-here\", which is a\n> common refrain by people who don't like our decisions, like \"Why don't\n> you use mmap()? Oh, it's because I thought of it and you didn't\". Does\n> anyone seriously believe that is the motiviation of anyone in this\n> project! I certainly don't.\n\nMy apologies. I withdraw the comment, which was provoked mostly by\nVince's response, quoted above. I agree that it is not characteristic\nof the project.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight, UK http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"The earth is the LORD'S, and the fullness thereof; the\n world, and they that dwell therein.\" \n Psalms 24:1 \n\n", "msg_date": "13 Feb 2003 19:15:36 +0000", "msg_from": "Oliver Elphick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "mlw wrote:\n> >It doesn't have anything to do with \"not-invented-here\", which is a\n> >common refrain by people who don't like our decisions, like \"Why don't\n> >you use mmap()? 
Oh, it's because I thought of it and you didn't\". Does\n> >anyone seriously believe that is the motiviation of anyone in this\n> >project! I certainly don't.\n> >\n> >Now, on to this configuration discussion. Seems moving the config file\n> >out of $PGDATA requies either:\n> >\t\n> >\t1) we specifiy both the config directory and the data directory on\n> >\tpostmaster start\n> >\t\n> >\t2) we specify the pgdata directory inside postgresql.conf or\n> >\tother config file\n> >\n> >Is this accurate?\n> > \n> >\n> The patch that I have adds three settings to postgresql.conf and one \n> command line parameter.\n> \n> hba_conf = 'filename'\n> ident_conf='filename'\n> data_dir='path'\n> \n> The command linae parameter is -C, used as:\n> \n> postmaster -C /usr/local/etc/postgresql.conf\n> \n> I think this will help administrators.\n> \n> Bruce, can you shed some light as to why this is being so strongly \n> rejected. I just don't see any downside. I just don't get it.\n> \n> I will be resubmitting my patch for the 7.3.2 tree.\n\nWell, in a sense, it trades passing one parameter, PGDATA, for another. \nI see your point that we should specify configuration first, and let\neverything pass from there. However, it does add extra configuration\nparameters, and because you still need to specify/create pgdata, it adds\nan extra level of abstraction to setting up the server.\n\nAlso, there is nothing preventing someone from symlinking the\nconfiguration files from pgdata to somewhere else.\n\nI don't think separate params for each config file is good. At the\nmost, I think we will specify the configuration _directory_ for all the\nconfig files, perhaps pgsql/etc, and have pgdata default to ../data, or\nhonor $PGDATA. That might be the cleanest.\n\nOf course, that now gives us $PGCONFIG and $PGDATA, and possible\nintraction if postgresql.conf specifies a different pgdata from $PGDATA.\nAs you can see, it could get messy.\n\nAnd, if you specify pgdata in postgresql.conf, it prevents you from\nusing that file by different postmasters.\n\nMy best guess would be to not specify pgdata in postgresql.conf, and\nhave a new $PGCONFIG param to specify the configuration directory, but\nif we do that, $PGDATA/postgresql.conf becomes meaningless, which could\nalso be confusing. Maybe we don't allow those files to exist in $PGDATA\nif $PGCONFIG is used, _and_ $PGCONFIG is not the same as $PGDATA. See,\nI am getting myself confused. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 13 Feb 2003 14:17:51 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Thu, 2003-02-13 at 14:06, mlw wrote:\n> \n> I will be resubmitting my patch for the 7.3.2 tree.\n> \n\nI'm no core developer, but surely this wont be included in the 7.3.x\nbranch. Any change needs to be made against CVS head.\n\nRobert Treat\n\n\n", "msg_date": "13 Feb 2003 14:23:38 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Robert Treat wrote:\n> On Thu, 2003-02-13 at 14:06, mlw wrote:\n> > \n> > I will be resubmitting my patch for the 7.3.2 tree.\n> > \n> \n> I'm no core developer, but surely this wont be included in the 7.3.x\n> branch. 
Any change needs to be made against CVS head.\n\nI assume he meant he will repost his 7.3.2-based patch and we will merge\nit into CVS HEAD if it is accepted.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 13 Feb 2003 14:28:48 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Thu, 2003-02-13 at 12:13, mlw wrote:\n> \n> My patch only works on the PostgreSQL server code. No changes have been\n> made to the initialization scripts.\n> \n> The patch declares three extra configuration file parameters:\n> hbafile= '/etc/postgres/pg_hba.conf'\n> identfile='/etc/postgres/pg_ident.conf'\n> datadir='/RAID0/postgres'\n> \n\nIf we're going to do this, I think we need to account for all of the\nfiles in the directory including PG_VERSION, postmaster.opts,\npostmaster.pid. In the end if we can't build so that we are either fully\nFHS compliant and/or LSB compliant, we've not done enough work on it.\n\nRobert Treat\n\n\n", "msg_date": "13 Feb 2003 14:30:43 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Robert Treat wrote:\n> On Thu, 2003-02-13 at 12:13, mlw wrote:\n> > \n> > My patch only works on the PostgreSQL server code. No changes have been\n> > made to the initialization scripts.\n> > \n> > The patch declares three extra configuration file parameters:\n> > hbafile= '/etc/postgres/pg_hba.conf'\n> > identfile='/etc/postgres/pg_ident.conf'\n> > datadir='/RAID0/postgres'\n> > \n> \n> If we're going to do this, I think we need to account for all of the\n> files in the directory including PG_VERSION, postmaster.opts,\n> postmaster.pid. In the end if we can't build so that we are either fully\n> FHS compliant and/or LSB compliant, we've not done enough work on it.\n\nWoh, how do we move some of those files into /etc or /var/run if we\naren't running as root? We certainly don't want to require that. I\nguess /etc/postgresql will work if that directory is owned by the\nPostgreSQL superuser, but /var/run will be a problem.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 13 Feb 2003 14:43:27 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Thu, 2003-02-13 at 19:30, Robert Treat wrote:\n> If we're going to do this, I think we need to account for all of the\n> files in the directory including PG_VERSION, postmaster.opts,\n\nNot PG_VERSION; that is intimately associated with the data itself and\nought to stay in the data directory.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight, UK http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"The earth is the LORD'S, and the fullness thereof; the\n world, and they that dwell therein.\" \n Psalms 24:1 \n\n", "msg_date": "13 Feb 2003 19:47:12 +0000", "msg_from": "Oliver Elphick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "\n\nBruce Momjian wrote:\n\n>Well, in a sense, it trades passing one parameter, PGDATA, for another. \n>I see your point that we should specify configuration first, and let\n>everything pass from there. However, it does add extra configuration\n>parameters, and because you still need to specify/create pgdata, it adds\n>an extra level of abstraction to setting up the server.\n>\nWhile this is true, it is not uncommon, and it is more or less expected \nby most UNIX admins.\n\n>\n>Also, there is nothing preventing someone from symlinking the\n>configuration files from pgdata to somewhere else.\n>\nStop!!! symlinks are not sufficient. When happens when a native Win32 \nversion comes out? there are no symlinks. Also, most of the admins I \nknow don't like to use simlinks as they are not self documenting. \nSymlinks are \"bad.\"\n\n>\n>I don't think separate params for each config file is good. At the\n>most, I think we will specify the configuration _directory_ for all the\n>config files, perhaps pgsql/etc, and have pgdata default to ../data, or\n>honor $PGDATA. That might be the cleanest.\n>\nThe problem with that is that you are back to symlinking shared files. \nSymlinks are a kludge.\n\n>\n>Of course, that now gives us $PGCONFIG and $PGDATA, and possible\n>intraction if postgresql.conf specifies a different pgdata from $PGDATA.\n>As you can see, it could get messy.\n>\nI don't see it as very messy, for instance:\n\npostmaster -C /etc/postgres/postgresql.conf -D /RAID0/postgres -p 5432\npostmaster -C /etc/postgres/postgresql.conf -D /RAID1/postgres -p 5433\n\nThat looks like a real clean way to run multiple PostgreSQL servers on \nthe same box using the same configuration files.\n\n>And, if you specify pgdata in postgresql.conf, it prevents you from\n>using that file by different postmasters.\n>\nNot true, command line parameters, as a rule, override configuration \nfile defaults.\n\n>\n>My best guess would be to not specify pgdata in postgresql.conf, and\n>have a new $PGCONFIG param to specify the configuration directory, but\n>if we do that, $PGDATA/postgresql.conf becomes meaningless, which could\n>also be confusing. Maybe we don't allow those files to exist in $PGDATA\n>if $PGCONFIG is used, _and_ $PGCONFIG is not the same as $PGDATA. See,\n>I am getting myself confused. 
:-)\n>\nI think you are making it too complicated.\n\nI wouldn't remove the default configration set, it would be useful as a \nfailsafe or maintainence feature.\n\n\n", "msg_date": "Thu, 13 Feb 2003 14:48:28 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "\nOn Thu, 13 Feb 2003, mlw wrote:\n\n> Stephan Szabo wrote:\n> >>>On Thu, 2003-02-13 at 09:23, mlw wrote:\n> >>>>I deal with a number of PG databases on a number of sites, and it is a\n> >>>>real pain in the ass to get to a PG box and hunt around for data\n> >>>>directory so as to be able to administer the system. What's really\n> >>>>annoying is when you have to find the data directory when someone else\n> >>>>set up the system.\n> >>>>\n> >>>>\n> >\n> >You realize that the actual code feature doesn't necessarily help this\n> >case, right? Putting configuration in /etc and having a configuration file\n> >option on the command line are separate concepts.\n\nRe-read my statement and yours about the case you were mentioning. ;)\nSure, putting the files in /etc lets you find them easily. However, if\nyou're doing things like finding configuration made by someone else and\nsaid configuration isn't in /etc (which if they wanted to they could do\nnow with symlinks I believe - yes symlinks aren't a complete solution, but\nI think they're reasonable on most of our current ports) then you still\nhave to search the system for the configuration file, except now it might\nnot even be postgresql.conf. That's why I said the two issues aren't the\nsame.\n\n> >I think the feature is worthwhile, but I have some initial condition\n> >functionality questions that may have been answered in the previous patch,\n> >but I don't remember at this point.\n> >\n> >Mostly these have to deal with initial creation. Does the user specify an\n> >output location to initdb, do they just specify a data dir as now where\n> >the configuration goes but then they need to move it somewhere, does\n> >initdb now do nothing relating to configuration file and the user should\n> >make one on his own. Related, is the admin expected to have already made\n> >(say) /etc/postgresql to stick the config in and set the permissions\n> >correctly (since initdb doesn't run as root)?\n> >\n> My patch only works on the PostgreSQL server code. No changes have been\n> made to the initialization scripts.\n>\n> The patch declares three extra configuration file parameters:\n> hbafile= '/etc/postgres/pg_hba.conf'\n> identfile='/etc/postgres/pg_ident.conf'\n> datadir='/RAID0/postgres'\n>\n> The command line option is a capital 'C,' as in:\n> postmaster -C /etc/postgresql.conf\n>\n> I have no problem leaving the default configuration files remaining in\n> the data directory as sort of a maintenance / boot strap sort of thing,\n> so I don't see any reason to alter the installation.\n>\n>\n> As for this feature helping or not, I think it will. 
I think it\n> accomplishes two things:\n> (1) Separates configuration from data.\n> (2) Allows an administrator to create a convention across multiple\n> systems regardless of the location and mount points of the database storage.\n> (3) Lastly, it is a familiar methodology to DBAs not familiar with\n> PostgreSQL.\n\nI agree on all these points (\"I think the feature is worthwhile, but...\").\nI just wonder if we were going to do this, we might as well look at all of\nthe various things people want and decide what we want to do, for example,\npeople commenting on default configuration locations through configure,\nhow does this interact with what we have now, etc. I'd rather have a\nmonth spent arguing out a behavior rather than just adding a new behavior\nthat we'll need to possibly revisit again in the future. :)\n\n", "msg_date": "Thu, 13 Feb 2003 11:50:59 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Robert Treat wrote:\n\n>On Thu, 2003-02-13 at 12:13, mlw wrote:\n> \n>\n>>My patch only works on the PostgreSQL server code. No changes have been\n>>made to the initialization scripts.\n>>\n>>The patch declares three extra configuration file parameters:\n>>hbafile= '/etc/postgres/pg_hba.conf'\n>>identfile='/etc/postgres/pg_ident.conf'\n>>datadir='/RAID0/postgres'\n>>\n>> \n>>\n>\n>If we're going to do this, I think we need to account for all of the\n>files in the directory including PG_VERSION, postmaster.opts,\n>postmaster.pid. In the end if we can't build so that we are either fully\n>FHS compliant and/or LSB compliant, we've not done enough work on it.\n>\n>Robert Treat\n>\n>\n> \n>\npostmaster.opts, PG_VERSION, and postmaster.pid are not configuration \nparameters.\n\nPG_VERSION is VERY important, it is how you know the version of the \ndatabase.\nPostmaster.pid is a postgres writable value\nAFAIK, postmaster.opts is also a postgres writable value.\n\n\n\n\n\n\n\n\nRobert Treat wrote:\n\nOn Thu, 2003-02-13 at 12:13, mlw wrote:\n \n\nMy patch only works on the PostgreSQL server code. No changes have been\nmade to the initialization scripts.\n\nThe patch declares three extra configuration file parameters:\nhbafile= '/etc/postgres/pg_hba.conf'\nidentfile='/etc/postgres/pg_ident.conf'\ndatadir='/RAID0/postgres'\n\n \n\n\nIf we're going to do this, I think we need to account for all of the\nfiles in the directory including PG_VERSION, postmaster.opts,\npostmaster.pid. In the end if we can't build so that we are either fully\nFHS compliant and/or LSB compliant, we've not done enough work on it.\n\nRobert Treat\n\n\n \n\npostmaster.opts, PG_VERSION, and postmaster.pid are not configuration parameters.\n\n\nPG_VERSION is VERY important, it is how you know the version of the database.\nPostmaster.pid is a postgres writable value \nAFAIK, postmaster.opts is also a postgres writable value.", "msg_date": "Thu, 13 Feb 2003 14:51:27 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Bruce Momjian wrote:\n\n>Robert Treat wrote:\n> \n>\n>>On Thu, 2003-02-13 at 12:13, mlw wrote:\n>> \n>>\n>>>My patch only works on the PostgreSQL server code. 
No changes have been\n>>>made to the initialization scripts.\n>>>\n>>>The patch declares three extra configuration file parameters:\n>>>hbafile= '/etc/postgres/pg_hba.conf'\n>>>identfile='/etc/postgres/pg_ident.conf'\n>>>datadir='/RAID0/postgres'\n>>>\n>>> \n>>>\n>>If we're going to do this, I think we need to account for all of the\n>>files in the directory including PG_VERSION, postmaster.opts,\n>>postmaster.pid. In the end if we can't build so that we are either fully\n>>FHS compliant and/or LSB compliant, we've not done enough work on it.\n>> \n>>\n>\n>Woh, how do we move some of those files into /etc or /var/run if we\n>aren't running as root? We certainly don't want to require that. I\n>guess /etc/postgresql will work if that directory is owned by the\n>PostgreSQL superuser, but /var/run will be a problem.\n>\n> \n>\nI don't think those files need to move. As I said in another post, they \nare postgres writable and should in the PostgreSQL data directory. \nHowever, I suppose, that those also could be configuration parameters? No?\n\nPG_VERSION obviously should not move.\npostmaster.opts gets created when postmaster is run, correct?\n\nThe only issue would be the PID file, and I don't have strong feelings \nabout it, except that using a /var/run system will make running multiple \npostmasters a pain.\n\n\n\n\n\n\n\n\n\n\nBruce Momjian wrote:\n\nRobert Treat wrote:\n \n\nOn Thu, 2003-02-13 at 12:13, mlw wrote:\n \n\nMy patch only works on the PostgreSQL server code. No changes have been\nmade to the initialization scripts.\n\nThe patch declares three extra configuration file parameters:\nhbafile= '/etc/postgres/pg_hba.conf'\nidentfile='/etc/postgres/pg_ident.conf'\ndatadir='/RAID0/postgres'\n\n \n\nIf we're going to do this, I think we need to account for all of the\nfiles in the directory including PG_VERSION, postmaster.opts,\npostmaster.pid. In the end if we can't build so that we are either fully\nFHS compliant and/or LSB compliant, we've not done enough work on it.\n \n\n\nWoh, how do we move some of those files into /etc or /var/run if we\naren't running as root? We certainly don't want to require that. I\nguess /etc/postgresql will work if that directory is owned by the\nPostgreSQL superuser, but /var/run will be a problem.\n\n \n\nI don't think those files need to move. As I said in another post, they are \npostgres writable and should in the PostgreSQL data directory. However, I \nsuppose, that those also could be configuration parameters? No?\n\n PG_VERSION obviously should not move.\n postmaster.opts gets created when postmaster is run, correct?\n\n The only issue would be the PID file, and I don't have strong feelings about\nit, except that using a /var/run system will make running multiple postmasters\na pain.", "msg_date": "Thu, 13 Feb 2003 14:58:26 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Stephan Szabo wrote:\n\n>On Thu, 13 Feb 2003, mlw wrote:\n>\n> \n>\n>>Stephan Szabo wrote:\n>> \n>>\n>>>>>On Thu, 2003-02-13 at 09:23, mlw wrote:\n>>>>> \n>>>>>\n>>>>>>I deal with a number of PG databases on a number of sites, and it is a\n>>>>>>real pain in the ass to get to a PG box and hunt around for data\n>>>>>>directory so as to be able to administer the system. 
What's really\n>>>>>>annoying is when you have to find the data directory when someone else\n>>>>>>set up the system.\n>>>>>>\n>>>>>>\n>>>>>> \n>>>>>>\n>>>You realize that the actual code feature doesn't necessarily help this\n>>>case, right? Putting configuration in /etc and having a configuration file\n>>>option on the command line are separate concepts.\n>>> \n>>>\n>\n>Re-read my statement and yours about the case you were mentioning. ;)\n>Sure, putting the files in /etc lets you find them easily. However, if\n>you're doing things like finding configuration made by someone else and\n>said configuration isn't in /etc (which if they wanted to they could do\n>now with symlinks I believe - yes symlinks aren't a complete solution, but\n>I think they're reasonable on most of our current ports) then you still\n>have to search the system for the configuration file, except now it might\n>not even be postgresql.conf. That's why I said the two issues aren't the\n>same.\n>\n> \n>\n>>>I think the feature is worthwhile, but I have some initial condition\n>>>functionality questions that may have been answered in the previous patch,\n>>>but I don't remember at this point.\n>>>\n>>>Mostly these have to deal with initial creation. Does the user specify an\n>>>output location to initdb, do they just specify a data dir as now where\n>>>the configuration goes but then they need to move it somewhere, does\n>>>initdb now do nothing relating to configuration file and the user should\n>>>make one on his own. Related, is the admin expected to have already made\n>>>(say) /etc/postgresql to stick the config in and set the permissions\n>>>correctly (since initdb doesn't run as root)?\n>>>\n>>> \n>>>\n>>My patch only works on the PostgreSQL server code. No changes have been\n>>made to the initialization scripts.\n>>\n>>The patch declares three extra configuration file parameters:\n>>hbafile= '/etc/postgres/pg_hba.conf'\n>>identfile='/etc/postgres/pg_ident.conf'\n>>datadir='/RAID0/postgres'\n>>\n>>The command line option is a capital 'C,' as in:\n>>postmaster -C /etc/postgresql.conf\n>>\n>>I have no problem leaving the default configuration files remaining in\n>>the data directory as sort of a maintenance / boot strap sort of thing,\n>>so I don't see any reason to alter the installation.\n>>\n>>\n>>As for this feature helping or not, I think it will. I think it\n>>accomplishes two things:\n>>(1) Separates configuration from data.\n>>(2) Allows an administrator to create a convention across multiple\n>>systems regardless of the location and mount points of the database storage.\n>>(3) Lastly, it is a familiar methodology to DBAs not familiar with\n>>PostgreSQL.\n>> \n>>\n>\n>I agree on all these points (\"I think the feature is worthwhile, but...\").\n>I just wonder if we were going to do this, we might as well look at all of\n>the various things people want and decide what we want to do, for example,\n>people commenting on default configuration locations through configure,\n>how does this interact with what we have now, etc. I'd rather have a\n>month spent arguing out a behavior rather than just adding a new behavior\n>that we'll need to possibly revisit again in the future. :)\n>\n\nI have absolutely no problem debating and augmenting the feature. None \nwhat so ever, I am more pushing to get momentum to actually do it. In \n7.1 I proposed this, and was told that it wasn't needed because (a) \nsymlinks provide all the functionality you need and (b) that they were \ngoing to redesign the configuration system. 
That was well over a year \nago (two?). I am willing to do the work, but what's the point if the \ncore group isn't even going to use it?\n\nMost of the admins I know don't use symlinks as they can not carry \ncomments. Without knowing, you can change or delete a file that does \nnot appear to be in use but which kills a working server. Symlinks are \ndangerous in production systems, it is easy to screw them up with scp \nwhen administering a cluster of computers.\n\n\n> \n>\n\n\n\n\n\n\n\n\nStephan Szabo wrote:\n\nOn Thu, 13 Feb 2003, mlw wrote:\n\n \n\nStephan Szabo wrote:\n \n\n\n\nOn Thu, 2003-02-13 at 09:23, mlw wrote:\n \n\nI deal with a number of PG databases on a number of sites, and it is a\nreal pain in the ass to get to a PG box and hunt around for data\ndirectory so as to be able to administer the system. What's really\nannoying is when you have to find the data directory when someone else\nset up the system.\n\n\n \n\n\n\nYou realize that the actual code feature doesn't necessarily help this\ncase, right? Putting configuration in /etc and having a configuration file\noption on the command line are separate concepts.\n \n\n\n\nRe-read my statement and yours about the case you were mentioning. ;)\nSure, putting the files in /etc lets you find them easily. However, if\nyou're doing things like finding configuration made by someone else and\nsaid configuration isn't in /etc (which if they wanted to they could do\nnow with symlinks I believe - yes symlinks aren't a complete solution, but\nI think they're reasonable on most of our current ports) then you still\nhave to search the system for the configuration file, except now it might\nnot even be postgresql.conf. That's why I said the two issues aren't the\nsame.\n\n \n\n\nI think the feature is worthwhile, but I have some initial condition\nfunctionality questions that may have been answered in the previous patch,\nbut I don't remember at this point.\n\nMostly these have to deal with initial creation. Does the user specify an\noutput location to initdb, do they just specify a data dir as now where\nthe configuration goes but then they need to move it somewhere, does\ninitdb now do nothing relating to configuration file and the user should\nmake one on his own. Related, is the admin expected to have already made\n(say) /etc/postgresql to stick the config in and set the permissions\ncorrectly (since initdb doesn't run as root)?\n\n \n\nMy patch only works on the PostgreSQL server code. No changes have been\nmade to the initialization scripts.\n\nThe patch declares three extra configuration file parameters:\nhbafile= '/etc/postgres/pg_hba.conf'\nidentfile='/etc/postgres/pg_ident.conf'\ndatadir='/RAID0/postgres'\n\nThe command line option is a capital 'C,' as in:\npostmaster -C /etc/postgresql.conf\n\nI have no problem leaving the default configuration files remaining in\nthe data directory as sort of a maintenance / boot strap sort of thing,\nso I don't see any reason to alter the installation.\n\n\nAs for this feature helping or not, I think it will. 
I think it\naccomplishes two things:\n(1) Separates configuration from data.\n(2) Allows an administrator to create a convention across multiple\nsystems regardless of the location and mount points of the database storage.\n(3) Lastly, it is a familiar methodology to DBAs not familiar with\nPostgreSQL.\n \n\n\nI agree on all these points (\"I think the feature is worthwhile, but...\").\nI just wonder if we were going to do this, we might as well look at all of\nthe various things people want and decide what we want to do, for example,\npeople commenting on default configuration locations through configure,\nhow does this interact with what we have now, etc. I'd rather have a\nmonth spent arguing out a behavior rather than just adding a new behavior\nthat we'll need to possibly revisit again in the future. :)\n\n\nI have absolutely no problem debating and augmenting the feature. None what\nso ever, I am more pushing to get momentum to actually do it. In 7.1 I proposed\nthis, and was told that it wasn't needed because (a) symlinks provide all\nthe functionality you need and (b) that they were going to redesign the configuration\nsystem. That was well over a year ago (two?). I am willing to do the work,\nbut what's the point if the core group isn't even going to use it?\n\nMost of the admins I know don't use symlinks as they can not carry comments.\nWithout knowing,  you can change or delete a file that does not appear to\nbe in use but which kills a working server. Symlinks are dangerous in production\nsystems, it is easy to screw them up with scp when administering a cluster\nof computers.", "msg_date": "Thu, 13 Feb 2003 15:08:39 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On 13 Feb 2003, Martin Coxall wrote:\n\n> \n> > Well, to the extent that you're serious, you understand that \n> > a lot of people feel that /usr/local should be reserved for \n> > stuff that's installed by the local sysadmin, and your\n> > vendor/distro isn't supposed to be messing with it. \n> > \n> > Which means if the the vendor installed Postgresql (say, the\n> > Red Hat Database) you'd expect config files to be in /etc.\n> > If the postgresql is compiled from source by local admin, \n> > you might look somewhere in /usr/local.\n> \n> Indeed. For better or worse, there is a Filesystem Hierarcy Standard,\n> and most of the important Linux distros, BSDs and some legacy Unixen\n> stick to it, so so should we.\n> \n> Configuration files should be in /etc/postgresql/, or at the very least\n> symlinked from there.\n\nSo, how do we handle things like installing three or four versions at the \nsame time. This isn't the same thing as /etc/fstab. While we only would \nlikely need to have one fstab or whatever, with postgresql, it's not \nunreasonable to want to intall more than one copy or version for various \nreason.\n\nGenerally things that live in /etc are owned and operated by the OS. \nPostgresql, by it's definition is a userspace program, not an OS owned \none.\n\nI've found having a $PGDATA var where EVERYTHING lives to be a huge \nadvantage when you need to run a half dozen instances of pgsql under \ndifferent accounts or for different versions on the same box.\n\nNow, if we could do it like X, where the base stuff is all in the \n/etc/X11R6 directory, but your own personal config lives in your home \ndirectory, then we're right as rain. 
but what parts of postgresql would \nalways be common to all flavors that might need to be run at the same \ntime? Not much.\n\n", "msg_date": "Thu, 13 Feb 2003 13:11:13 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "I don't see why we can't keep everyone happy and let the users choose the \nsetup they want. To wit, make the following, probably simple, changes:\n\n1) Have postgresql default to using /etc/postgresql.conf\n2) Add a setting in postgresql.conf specifying the data directory\n3) Change the meaning of -D to mean \"use this config file\"\n4) In the absence of a specified data directory in postgresql.conf, use the \nlocation of the postgresql.conf file as the data directory\n\nI see several advantages:\n\n1) Anyone who doesn't want to change doesn't have to - leaving the data \ndirectory spec out of postgresql.conf and starting with -D will be \nessentially identical to how things are now (except it would be -D \n/foo/bar/postgresql.conf instead of -D /foo/bar/ - even this could be \novercome with a bit of bailing wire saying if -D specifies a directory, look \nfor postgresql.conf in that directory).\n\n2) Postgresql will be more \"familiar\" to those who expect or desire configs \nto be in /etc.\n\n3) Adding a postgresql.conf line for data location sets the stage for being \nable to specify directories for all sorts of files (WAL, index, etc.) without \nthe need for symlinks.\n\n4) Multiple config files could be more easily managed for \ntesting/benchmarking/etc.\n\nCheers,\nSteve\n\n\nOn Wednesday 12 February 2003 10:14 pm, Peter Bierman wrote:\n> At 12:31 AM -0500 2/13/03, mlw wrote:\n> >The idea that a, more or less, arbitrary data location determines\n> >the database configuration is wrong. It should be obvious to any\n> >administrator that a configuration file location which controls the\n> >server is the \"right\" way to do it.\n>\n> Isn't the database data itself a rather significant portion of the\n> 'configuration' of the database?\n>\n> What do you gain by having the postmaster config and the database\n> data live in different locations?\n>\n> -pmb\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n", "msg_date": "Thu, 13 Feb 2003 12:28:07 -0800", "msg_from": "Steve Crawford <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Thu, 13 Feb 2003, mlw wrote:\n\n> \n> \n> Christopher Browne wrote:\n> \n> >In the last exciting episode, [email protected] (Curt Sampson) wrote:\n> >\n> >>On Wed, 12 Feb 2003, Peter Bierman wrote:\n> >>\n> >>>What do you gain by having the postmaster config and the database\n> >>>data live in different locations?\n> >>>\n> >>You can then standardize a location for the configuration files.\n> >>\n> >>Everybody has room in /etc for another 10K of data. Where you have\n> >>room for something that might potentially be a half terrabyte of\n> >>data, and is not infrequently several gigabytes or more, is pretty\n> >>system-depenendent.\n> >\n> >Ah, but this has two notable problems:\n> >\n> >1. It assumes that there is \"a location\" for \"the configuration files\n> > for /the single database instance./\"\n> >\n> > If I have a second database instance, that may conflict.\n> >\n> >2. 
It assumes I have write access to /etc\n> >\n> > If I'm a Plain Old User, as opposed to root, I may only have\n> > read-only access to /etc.\n> >\n> >These conditions have both been known to occur...\n> > \n> >\n> These are not issues at all. You could put the configuration file \n> anywhere, just as you can for any UNIX service.\n> \n> postmaster --config=/home/myhome/mydb.conf\n> \n> I deal with a number of PG databases on a number of sites, and it is a \n> real pain in the ass to get to a PG box and hunt around for data \n> directory so as to be able to administer the system. What's really \n> annoying is when you have to find the data directory when someone else \n> set up the system.\n\nReally? I would think it's easier to do this:\n\nsu - pgsuper\ncd $PGDATA\npwd\n\nThan to try to figure out what someone entered when they ran ./configure \n--config=...\n\n> Configuring postgresql via a configuration file which specifies all the \n> data, i.e. data directory, name of other configuration files, etc. is \n> the right way to do it. Even if you have reasons against it, even if you \n> think it is a bad idea, a bad standard is almost always a better \n> solution than an arcane work of perfection.\n\nWrong, I strongly disagree with this sentament. Conformity to standards \nfor simple conformity's sake is as wrong as sticking to the old way \nbecause it's what we're all comfy with. \n\n> Personally, however, I think the configuration issue is a no-brainer and \n> I am amazed that people are balking. EVERY other service on a UNIX box \n> is configured in this way, why not do it this way in PostgreSQL? The \n> patch I submitted allowed the configuration to work as it currently \n> does, but allowed for the more standard configuration file methodology.\n\nIf I do a .tar.gz install of apache, I get /usr/local/apache/conf, which \nis not the standard way you're listing. If I install openldap from \n.tar.gz, I get a /usr/local/etc/openldap directory, close, but still not \nthe same. The fact is, it's the packagers that put things into /etc and \nwhatnot, and I can see the postgresql RPMs or debs or whatever having that \nas the default, but for custom built software, NOTHING that I know of \nbuilds from source and uses /etc without a switch to tell it to, just like \npostgresql can do now.\n\n> I just don't understand what the resistance is, it makes no sense.\n\nI agree, but from the other side of the fence.\n\n", "msg_date": "Thu, 13 Feb 2003 14:07:50 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On 13 Feb 2003, Oliver Elphick wrote:\n\n> On Thu, 2003-02-13 at 18:45, Bruce Momjian wrote:\n> > Oliver Elphick wrote:\n> > > On Thu, 2003-02-13 at 17:52, Vince Vielhaber wrote:\n> > > > Seems to me that if FHS allows such a mess, it's reason enough to avoid\n> > > > compliance. Either that or those of you who build for distributions are\n> > > > making an ill advised change. Simply because the distribution makes the\n> > > > decision to add PostgreSQL, or some other package, to it's distribution\n> > > > doesn't make it a requirement to change the location of the config files.\n> > > ...\n> > > I really don't see why there is such a not-invented-here mentality about\n> > > this issue. I say again, standards-compliance is the best way. It\n> > > makes life easier for everyone if standards are followed. 
Don't we\n> > > pride ourselves on being closer to the SQL spec than other databases?\n> > > Any way, if PostgreSQL stays as it is, I will continue to have to ensure\n> > > that initdb creates symlinks to /etc/postgresql/, as happens now.\n> >\n> > It doesn't have anything to do with \"not-invented-here\", which is a\n> > common refrain by people who don't like our decisions, like \"Why don't\n> > you use mmap()? Oh, it's because I thought of it and you didn't\". Does\n> > anyone seriously believe that is the motiviation of anyone in this\n> > project! I certainly don't.\n>\n> My apologies. I withdraw the comment, which was provoked mostly by\n> Vince's response, quoted above. I agree that it is not characteristic\n> of the project.\n\nI certainly wasn't trying to provoke anything. It just seems odd to me\nthat when the distribution installs a package and places it's config files\nin /etc and later the admin happens to upgrade by the instructions with\nthe package, it's acceptable for the config files to now be in two places\nand you don't find it confusing. What happens when a new admin comes on\nand tries to figure out which config file is which? Ever try to figure\nout where the hell Pine's config really is?\n\nVince.\n-- \n Fast, inexpensive internet service 56k and beyond! http://www.pop4.net/\n http://www.meanstreamradio.com http://www.unknown-artists.com\n Internet radio: It's not file sharing, it's just radio.\n\n", "msg_date": "Thu, 13 Feb 2003 16:21:48 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "mlw writes:\n\n> AFAIK it wasn't actually done. It was more of a, \"we should do something\n> different\" argument. At one point it was talked about rewriting the\n> configuration system to allow \"include\" and other things.\n\nThe core of the problem was, and continues to be, this: If you move\npostgresql.conf somewhere else, then someone else will also want to move\npg_hba.conf and all the rest. And that opens up a number of security and\ncumbersome-to-install problems.\n\nJust having an option that says, the configuration file is \"there\", is a\nfirst step but not a complete solution.\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Thu, 13 Feb 2003 22:25:49 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Peter Eisentraut wrote:\n\n>mlw writes:\n>\n> \n>\n>>AFAIK it wasn't actually done. It was more of a, \"we should do something\n>>different\" argument. At one point it was talked about rewriting the\n>>configuration system to allow \"include\" and other things.\n>> \n>>\n>\n>The core of the problem was, and continues to be, this: If you move\n>postgresql.conf somewhere else, then someone else will also want to move\n>pg_hba.conf and all the rest. And that opens up a number of security and\n>cumbersome-to-install problems.\n>\n>Just having an option that says, the configuration file is \"there\", is a\n>first step but not a complete solution.\n>\nThe location of pg_hba.conf and pg_ident.conf can be specified within \nthe postgresql.conf file if desired.\n\nI don't understand the security concerns, what security issues can there be?\n\n> \n>\n\n\n\n\n\n\n\n\nPeter Eisentraut wrote:\n\nmlw writes:\n\n \n\nAFAIK it wasn't actually done. It was more of a, \"we should do something\ndifferent\" argument. 
At one point it was talked about rewriting the\nconfiguration system to allow \"include\" and other things.\n \n\n\nThe core of the problem was, and continues to be, this: If you move\npostgresql.conf somewhere else, then someone else will also want to move\npg_hba.conf and all the rest. And that opens up a number of security and\ncumbersome-to-install problems.\n\nJust having an option that says, the configuration file is \"there\", is a\nfirst step but not a complete solution.\n\nThe location of pg_hba.conf and pg_ident.conf can be specified within the\npostgresql.conf file if desired.\n\nI don't understand the security concerns, what security issues can there\nbe?", "msg_date": "Thu, 13 Feb 2003 16:26:37 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "scott.marlowe wrote:\n\n>>These are not issues at all. You could put the configuration file \n>>anywhere, just as you can for any UNIX service.\n>>\n>>postmaster --config=/home/myhome/mydb.conf\n>>\n>>I deal with a number of PG databases on a number of sites, and it is a \n>>real pain in the ass to get to a PG box and hunt around for data \n>>directory so as to be able to administer the system. What's really \n>>annoying is when you have to find the data directory when someone else \n>>set up the system.\n>> \n>>\n>\n>Really? I would think it's easier to do this:\n>\n>su - pgsuper\n>cd $PGDATA\n>pwd\n>\n>Than to try to figure out what someone entered when they ran ./configure \n>--config=...\n> \n>\nWhy do you think PGDATA would be set for root?\n\n> \n>\n>>Configuring postgresql via a configuration file which specifies all the \n>>data, i.e. data directory, name of other configuration files, etc. is \n>>the right way to do it. Even if you have reasons against it, even if you \n>>think it is a bad idea, a bad standard is almost always a better \n>>solution than an arcane work of perfection.\n>> \n>>\n>\n>Wrong, I strongly disagree with this sentament. Conformity to standards \n>for simple conformity's sake is as wrong as sticking to the old way \n>because it's what we're all comfy with. \n>\nIt isn't conformity for conformitys sake. It is following an established \npractice, like driving on the same side of the road or stopping at red \nlights.\n\n>\n> \n>\n>>Personally, however, I think the configuration issue is a no-brainer and \n>>I am amazed that people are balking. EVERY other service on a UNIX box \n>>is configured in this way, why not do it this way in PostgreSQL? The \n>>patch I submitted allowed the configuration to work as it currently \n>>does, but allowed for the more standard configuration file methodology.\n>> \n>>\n>\n>If I do a .tar.gz install of apache, I get /usr/local/apache/conf, which \n>is not the standard way you're listing. If I install openldap from \n>.tar.gz, I get a /usr/local/etc/openldap directory, close, but still not \n>the same. The fact is, it's the packagers that put things into /etc and \n>whatnot, and I can see the postgresql RPMs or debs or whatever having that \n>as the default, but for custom built software, NOTHING that I know of \n>builds from source and uses /etc without a switch to tell it to, just like \n>postgresql can do now.\n>\nYou are confusing the default location of a file with the ability to use \nthe file. 
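To make the distinction concrete, here is a minimal sketch of the sort of setup the patch is meant to allow, using the parameters described earlier in the thread (the file locations are only examples, not proposed defaults):

    # /etc/postgresql/postgresql.conf -- wherever the admin keeps configuration
    datadir   = '/RAID0/postgres'
    hbafile   = '/etc/postgresql/pg_hba.conf'
    identfile = '/etc/postgresql/pg_ident.conf'

    # start the postmaster against that file explicitly
    postmaster -C /etc/postgresql/postgresql.conf
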
The default I have proposed all along was to use the existing \npractice of keeping everything in the $PGDATA directory.\n\nThe change I wish to make to the code allows this to be changed. Most \nadmins want configuration and data separate. Most admins do not want to \nuse symlinks because they are dangerous in a production environment.\n\nI would rather have a simpler solution sooner than a perfect solution never.", "msg_date": "Thu, 13 Feb 2003 16:44:05 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "> All I see here is an arbitrary break with our past practice. 
I do\n> not see any net improvement.\n\n<FreeBSD Port Maintainer>\nWell, given that there's a trend to make PostgreSQL more usable, I can\nsay with absolute certainty, that an FAQ that I get about once a week\nis (and granted only from new users) \"where is the postgresql.conf? I\ndon't see it in ${LOCALBASE}/etc/.\" PostgreSQL is one of a few ports\nin an extreme minority that uses a local configuration directive and\nit violates the policy of least surprise for sysadmins.\n\nPS LOCALBASE/PREFIX is /usr/local 99.999% of the time\n</FreeBSD Port Maintainer>\n\nWith my DBA hat on, however, here are a few reasons that I'd like to\nsee the conf moved out of the data directory:\n\n1) pg_dumpall > foo && rm -rf $PGDATA && initdb\n\n As a DBA I don't have to worry about backing up my config file when\n doing upgrades since the config file is located in an external\n directory.\n\n2) Backing up config files in ${LOCALBASE}/etc is a pretty common\n practice. Having to make a special case for postgresql's kind of a\n PITA.\n\n\nSuggestions:\n\n1) gmake install installs a default configuration file in\n ${LOCALBASE}/etc/postgresql.conf.default. Promote that DBAs should\n diff postgresql.conf.default with postgresql.conf and make\n adjustments as they see fit, but gmake install will _not_, under\n any circumstances, touch postgresql.conf (by default, it should cp\n postgresql.conf.default to postgresql.conf that way things \"just\n work\" out of the box).\n\n2) Leave the current functionality in place. Being able to have\n multiple databases on the same machine is a _really_ nice feature\n of PostgreSQL. If you want multiple databases, having the config\n file in $PGDATA makes some sense because with multiple\n installations, you want to keep everything together... though it\n doesn't make much sense if you have only one installation per\n server... and really, the only reason to have multiple\n installations is to handle username collisions (hint hint).\n\n3) In the absence of a PGDATA environment variable (don't want to\n break backward compatible installations) being set, the future\n behavior allow for a default location of a config file (if no CLI\n switch is specified for an explicit location) that points to a\n config file. The path would be ${PREFIX}/etc and would provide\n most admins with a standard launching off point for running/tuning\n their databases. The config file would have to specify the data\n directory as well as the path to the hba.conf, which should be\n outside of the datadir as well (speaking of the hba.conf, am I the\n only one who things that hba.conf should be converted into a system\n catalog? ::shrug::)\n\nJust some random thoughts from someone who's had to deal with this on\nall of the mentioned levels (new users, single installations, multiple\ninstallations, and multiple copies running via daemontools). -sc\n\n\nPS If there is no huge press for this, I should have the time do do\nthis in a few weeks if someone doesn't beat me to it.\n\n-- \nSean Chittenden\n", "msg_date": "Thu, 13 Feb 2003 14:06:25 -0800", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Thu, 2003-02-13 at 21:21, Vince Vielhaber wrote:\n> I certainly wasn't trying to provoke anything. 
It just seems odd to me\n> that when the distribution installs a package and places it's config files\n> in /etc and later the admin happens to upgrade by the instructions with\n> the package, it's acceptable for the config files to now be in two places\n> and you don't find it confusing. What happens when a new admin comes on\n> and tries to figure out which config file is which? Ever try to figure\n> out where the hell Pine's config really is?\n\nI've not used pine, and there doesn't seem to be an official Debian\npackage, (it doesn't allow any changes to its source, I believe, which\nmakes it ineligible). But if it were an official package, I know I\nshould look in /etc/pine.\n\nIf the admin installs a local build of something he has installed as a\npackage, he will presumably take care to separate the two. If his local\nbuild is to replace the package, he should purge the installed package,\nso that there are no traces of it left. Since he is administering a\ndistribution installation, it is certainly his responsibility to\nunderstand the difference between local and distributed packages, as\nwell as the different places that each should put their configuration\nfiles. (Incidentally, Debian's changes from the upstream configuration\nare documented in the package.) In the end, though, when we package for\na distribution, we expect people to use the packages. If they want to\nbuild from source, the packages system lets them do it. Anyone who is\nbuilding from the upstream source must be presumed to know what he is\ndoing and take responsibility for it.\n\nWhat your comments strongly suggest to me is that projects like\nPostgreSQL and pine, along with everything else, should comply with FHS;\nthen there will be no confusion because everyone will be following the\nsmae standards. Messes arise when people ignore standards; we have all\nseen the dreadful examples of MySQL and the Beast, haven't we?\n\n-- \nOliver Elphick [email protected]\nIsle of Wight, UK http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"The earth is the LORD'S, and the fullness thereof; the\n world, and they that dwell therein.\" \n Psalms 24:1 \n\n", "msg_date": "13 Feb 2003 22:22:56 +0000", "msg_from": "Oliver Elphick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Thu, 2003-02-13 at 14:28, Bruce Momjian wrote:\n> Robert Treat wrote:\n> > On Thu, 2003-02-13 at 14:06, mlw wrote:\n> > > \n> > > I will be resubmitting my patch for the 7.3.2 tree.\n> > > \n> > \n> > I'm no core developer, but surely this wont be included in the 7.3.x\n> > branch. Any change needs to be made against CVS head.\n> \n> I assume he meant he will repost his 7.3.2-based patch and we will merge\n> it into CVS HEAD if it is accepted.\n> \n\nIIRC he originally wrote the patch for a pre 7.3 version, so it seems\nlike he'd be reworking it for 7.3.x with the above statement. I'm only\nsuggesting he rework it against CVS head if he doesn't have plans to do\nso already. Course if yall are willing to merge it in for him, none of\nthis really matters does it? 
:-)\n\nRobert Treat\n\n", "msg_date": "13 Feb 2003 17:25:13 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Thu, 2003-02-13 at 14:43, Bruce Momjian wrote:\n> Robert Treat wrote:\n> > On Thu, 2003-02-13 at 12:13, mlw wrote:\n> > > \n> > > My patch only works on the PostgreSQL server code. No changes have been\n> > > made to the initialization scripts.\n> > > \n> > > The patch declares three extra configuration file parameters:\n> > > hbafile= '/etc/postgres/pg_hba.conf'\n> > > identfile='/etc/postgres/pg_ident.conf'\n> > > datadir='/RAID0/postgres'\n> > > \n> > \n> > If we're going to do this, I think we need to account for all of the\n> > files in the directory including PG_VERSION, postmaster.opts,\n> > postmaster.pid. In the end if we can't build so that we are either fully\n> > FHS compliant and/or LSB compliant, we've not done enough work on it.\n> \n> Woh, how do we move some of those files into /etc or /var/run if we\n> aren't running as root? We certainly don't want to require that. I\n> guess /etc/postgresql will work if that directory is owned by the\n> PostgreSQL superuser, but /var/run will be a problem.\n> \n\nSeems like some are saying one of the problems with the current system\nis it doesn't follow FHS or LSB. If those are valid reasons to change\nthe system, it seems like a change which doesn't actually address those\nconcerns would not be acceptable. (Unless those really aren't valid\nconcerns...)\n\nRobert Treat\n\n", "msg_date": "13 Feb 2003 17:29:24 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Thu, 2003-02-13 at 14:51, mlw wrote:\n> \n> \n> Robert Treat wrote:\n> \n> \n> On Thu, 2003-02-13 at 12:13, mlw wrote:\n> \n> \n> \n> My patch only works on the PostgreSQL server code. No changes have been\n> \n> made to the initialization scripts.\n> \n> \n> \n> The patch declares three extra configuration file parameters:\n> \n> hbafile= '/etc/postgres/pg_hba.conf'\n> \n> identfile='/etc/postgres/pg_ident.conf'\n> \n> datadir='/RAID0/postgres'\n> \n> \n> If we're going to do this, I think we need to account for all of the\n> \n> files in the directory including PG_VERSION, postmaster.opts,\n> \n> postmaster.pid. In the end if we can't build so that we are either fully\n> \n> FHS compliant and/or LSB compliant, we've not done enough work on it.\n> \n> \n> postmaster.opts, PG_VERSION, and postmaster.pid are not configuration\n> parameters. \n>\n\nSo? I'm not saying they all have to be moved, just they all need to be\naccounted for. 
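For the record, a rough picture of what sits in a 7.3 $PGDATA today besides the data itself -- this is the list that has to be accounted for (illustrative listing, not exhaustive):

    ls $PGDATA
    PG_VERSION       postgresql.conf   pg_hba.conf   pg_ident.conf
    postmaster.opts  postmaster.pid    base/  global/  pg_clog/  pg_xlog/
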
\n \n> PG_VERSION is VERY important, it is how you know the version of the\n> database.\n> Postmaster.pid is a postgres writable value \n> AFAIK, postmaster.opts is also a postgres writable value.\n> \n\nIIRC the postmaster.pid file should be in /var/run according to FHS, I'm\nnot sure about postmaster.opts though...\n\nAgain, if we're going to make a change, let's make sure we think it\nthrough.\n\nRobert Treat\n\n", "msg_date": "13 Feb 2003 17:44:29 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Oliver Elphick wrote:\n> What your comments strongly suggest to me is that projects like\n> PostgreSQL and pine, along with everything else, should comply with FHS;\n> then there will be no confusion because everyone will be following the\n> smae standards. Messes arise when people ignore standards; we have all\n> seen the dreadful examples of MySQL and the Beast, haven't we?\n\nCan the FHS handle installing PostgreSQL as non-root?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 13 Feb 2003 17:53:53 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Robert Treat wrote:\n> IIRC the postmaster.pid file should be in /var/run according to FHS, I'm\n> not sure about postmaster.opts though...\n> \n> Again, if we're going to make a change, let's make sure we think it\n> through.\n\nCan non-root write to /var/run?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 13 Feb 2003 17:55:16 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Thu, 2003-02-13 at 15:08, mlw wrote:\n> Stephan Szabo wrote:\n> \n> Re-read my statement and yours about the case you were mentioning. ;)\n> \n> Sure, putting the files in /etc lets you find them easily. However, if\n> \n> you're doing things like finding configuration made by someone else and\n> \n> said configuration isn't in /etc (which if they wanted to they could do\n> \n> now with symlinks I believe - yes symlinks aren't a complete solution,\n> but\n> \n> I think they're reasonable on most of our current ports) then you still\n> \n> have to search the system for the configuration file, except now it\n> might\n> \n> not even be postgresql.conf. That's why I said the two issues aren't the\n> \n> same.\n> \n\nActually, I'd almost go so far as to say it will make it worse. In the\ncurrent system, if you can figure out where $PGDATA is, you've found\neverything you need for that installation. In the new system, there's no\ntelling where people will put things, and it certainly won't be any\neasier to find it. THinking on the above Stephan, you'd almost have to\nrequire that the config file be called postgresql.conf in order to run,\nanything else leads to real scary scenario's.\n\n\n> On Thu, 13 Feb 2003, mlw wrote:\n> \n> I have absolutely no problem debating and augmenting the feature. None\n> what so ever, I am more pushing to get momentum to actually do it. 
\n\nStick with it, I think most of us here can see the value in the option,\nbut there are valid concerns that it be implemented correctly.\nPersonally I think a postgresql installation is much more like an apache\ninstallation, which generally contains all of the files (data and\nconfig) under /usr/local/apache. Maybe someone can dig more to see if\nthat system is more appropriate a comparison than something like bind.\n\nRobert Treat\n\n\n", "msg_date": "13 Feb 2003 17:59:17 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Bruce Momjian wrote:\n\n>Robert Treat wrote:\n> \n>\n>>IIRC the postmaster.pid file should be in /var/run according to FHS, I'm\n>>not sure about postmaster.opts though...\n>>\n>>Again, if we're going to make a change, let's make sure we think it\n>>through.\n>> \n>>\n>\n>Can non-root write to /var/run?\n>\n> \n>\nShouldn't be able too\n\n\n\n\n\n\n\n\nBruce Momjian wrote:\n\nRobert Treat wrote:\n \n\nIIRC the postmaster.pid file should be in /var/run according to FHS, I'm\nnot sure about postmaster.opts though...\n\nAgain, if we're going to make a change, let's make sure we think it\nthrough.\n \n\n\nCan non-root write to /var/run?\n\n \n\nShouldn't be able too", "msg_date": "Thu, 13 Feb 2003 18:06:15 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On 13 Feb 2003, Oliver Elphick wrote:\n\n> What your comments strongly suggest to me is that projects like\n> PostgreSQL and pine, along with everything else, should comply with FHS;\n> then there will be no confusion because everyone will be following the\n> smae standards. Messes arise when people ignore standards; we have all\n> seen the dreadful examples of MySQL and the Beast, haven't we?\n\nActually FHS says the opposite. If the distribution installs PostgreSQL\nthen the config files belong in /etc/postgresql. If the admin does then\nthey belong in /usr/local/etc/postgresql. FHS is out of their tree. If\nPostgreSQL or any other package is not critical to the basic operation of\nthe operating system, it's config files shouldn't be polluting /etc.\n\nVince.\n-- \n Fast, inexpensive internet service 56k and beyond! http://www.pop4.net/\n http://www.meanstreamradio.com http://www.unknown-artists.com\n Internet radio: It's not file sharing, it's just radio.\n\n", "msg_date": "Thu, 13 Feb 2003 18:07:46 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Robert Treat wrote:\n\n>On Thu, 2003-02-13 at 14:51, mlw wrote:\n> \n>\n>>Robert Treat wrote:\n>>\n>>\n>>On Thu, 2003-02-13 at 12:13, mlw wrote:\n>>\n>> \n>>\n>>My patch only works on the PostgreSQL server code. No changes have been\n>>\n>>made to the initialization scripts.\n>>\n>>\n>>\n>>The patch declares three extra configuration file parameters:\n>>\n>>hbafile= '/etc/postgres/pg_hba.conf'\n>>\n>>identfile='/etc/postgres/pg_ident.conf'\n>>\n>>datadir='/RAID0/postgres'\n>>\n>>\n>>If we're going to do this, I think we need to account for all of the\n>>\n>>files in the directory including PG_VERSION, postmaster.opts,\n>>\n>>postmaster.pid. 
In the end if we can't build so that we are either fully\n>>\n>>FHS compliant and/or LSB compliant, we've not done enough work on it.\n>>\n>>\n>>postmaster.opts, PG_VERSION, and postmaster.pid are not configuration\n>>parameters. \n>>\n>> \n>>\n>\n>So? I'm not saying they all have to be moved, just they all need to be\n>accounted for. \n>\nOK, what was the point?\n\n>>PG_VERSION is VERY important, it is how you know the version of the\n>>database.\n>>Postmaster.pid is a postgres writable value \n>>AFAIK, postmaster.opts is also a postgres writable value.\n>>\n>> \n>>\n>\n>IIRC the postmaster.pid file should be in /var/run according to FHS, I'm\n>not sure about postmaster.opts though...\n>\n>Again, if we're going to make a change, let's make sure we think it\n>through.\n>\nI'm not a big fan of the \"/var/run\" directory convention, especially \nwhen we expect multiple instances of the server to be able to run \nconcurrently. I suppose it can be a parameter in both the configuration \nfile and command line.\n\n\n\n\n\n\n\n\n\nRobert Treat wrote:\n\nOn Thu, 2003-02-13 at 14:51, mlw wrote:\n \n\n\nRobert Treat wrote:\n\n\nOn Thu, 2003-02-13 at 12:13, mlw wrote:\n\n \n\nMy patch only works on the PostgreSQL server code. No changes have been\n\nmade to the initialization scripts.\n\n\n\nThe patch declares three extra configuration file parameters:\n\nhbafile= '/etc/postgres/pg_hba.conf'\n\nidentfile='/etc/postgres/pg_ident.conf'\n\ndatadir='/RAID0/postgres'\n\n\nIf we're going to do this, I think we need to account for all of the\n\nfiles in the directory including PG_VERSION, postmaster.opts,\n\npostmaster.pid. In the end if we can't build so that we are either fully\n\nFHS compliant and/or LSB compliant, we've not done enough work on it.\n\n\npostmaster.opts, PG_VERSION, and postmaster.pid are not configuration\nparameters. \n\n \n\n\nSo? I'm not saying they all have to be moved, just they all need to be\naccounted for. \n\nOK, what was the point?\n\n\n\nPG_VERSION is VERY important, it is how you know the version of the\ndatabase.\nPostmaster.pid is a postgres writable value \nAFAIK, postmaster.opts is also a postgres writable value.\n\n \n\n\nIIRC the postmaster.pid file should be in /var/run according to FHS, I'm\nnot sure about postmaster.opts though...\n\nAgain, if we're going to make a change, let's make sure we think it\nthrough.\n\nI'm not a big fan of the \"/var/run\" directory convention, especially when\nwe expect multiple instances of the server to be able to run concurrently.\nI suppose it can be a parameter in both the configuration file and command\nline.", "msg_date": "Thu, 13 Feb 2003 18:25:20 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Thursday 13 February 2003 17:53, Bruce Momjian wrote:\n> Oliver Elphick wrote:\n> > What your comments strongly suggest to me is that projects like\n> > PostgreSQL and pine, along with everything else, should comply with FHS;\n> > then there will be no confusion because everyone will be following the\n> > smae standards. Messes arise when people ignore standards; we have all\n> > seen the dreadful examples of MySQL and the Beast, haven't we?\n\n> Can the FHS handle installing PostgreSQL as non-root?\n\nOnce again, no one is trying to make an FHS install the default 'let's force \neveryone to think our way or no way' coercion.\n\nWe just want the option.\n\nFor those who wish to do non-root installs, nothing would need to change. 
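To be concrete, a plain from-source, non-root install would keep working exactly as it does today -- something along the lines of (paths here are only illustrative):

    ./configure --prefix=$HOME/pgsql
    gmake && gmake install
    $HOME/pgsql/bin/initdb -D $HOME/pgsql/data
    $HOME/pgsql/bin/pg_ctl -D $HOME/pgsql/data start

with the config files staying in the data directory as they do now.
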
You \ncan still put it into /usr/local/pgsql (assuming you have permissions to put \nit there) or your home directory, or wherever.\n\nI deal with RPMs; Oliver deals with .deb's. Neither can be installed as \nnon-root. The daemon can of course run as non-root (and it does, which is \nexactly correct); but the installation of the files is done as root _always_ \nin an RPM or deb environment. So I really don't care about non-root \ninstalls; sorry. I wonder what percentage of our users are not the \nadministrator of the machine on which they are running PostgreSQL?\n\nI dispute the statement made earlier in the thread (not by Bruce) that \nPostgreSQL is by definition not an OS service. This is false, and needs to \nbe realized by this community. PostgreSQL is becoming an essential OS core \nservice in many cases: virtually all Linux distributions (the lion's share of \nour current distribution) include PostgreSQL as a core service. Many of our \nnew users see PostgreSQL as 'SQL server' in the Red Hat installation menu.\n\nNow, on a Win32 server, what is PostgreSQL going to be considered? It is \nprobably going to run as a service, right? So you need to be Administrator \nthere to perform the install, right?\n\nThis isn't the same environment, Bruce, that you got into back when it was \nstill Postgres95. We are in the big leagues OS-wise, and we need to act like \nit. Assuming that we are a 'userspace' program (which is a misnomer anyway, \nas _anything_ non-kernel is 'userspace') is not going to cut it anymore. \n\nSo we need to fit in to an OS environment, whether it is FreeBSD, OS/X, Win32, \nSolaris, or Linux. In FreeBSD, as the ports maintainer excellently posted, \nPostgreSQL should live in LOCALBASE. We should make that easy. In Win32, \nconfiguration might be better stored in the system registry (Argh! Did I \nactually say THAT! Yuck!) -- we should make even that easy. In OS/X we \nshould use the OS/X paradigm (whatever that is). And we should make it easy \nto make PostgreSQL LSB-compliant for our very large Linux user community. We \nshould be adaptable to the accepted administration paradigm on whatever \nsystem we are running -- this should be a minimum.\n\nThese concerns vastly outweigh the occasional non-root install from source, in \nmy mind at least. I am not opposed to that way even being the default; after \nall, leaving the default the same as now agrees with the principle of least \nsurprise (although we really don't ascribe to that; witness the 7.2-7.3 \nmigration fiasco -- 7.3 should have been 8.0 to warn people of the major \nchanges going on in client connections). But I do advocate _allowing_ the \nconfiguration options Mark has enumerated -- although I really wish we could \nuse the lowercase c instead, for consistency with other OS services.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n\n", "msg_date": "Thu, 13 Feb 2003 18:26:00 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Thursday 13 February 2003 18:07, Vince Vielhaber wrote:\n> Actually FHS says the opposite. If the distribution installs PostgreSQL\n> then the config files belong in /etc/postgresql. If the admin does then\n> they belong in /usr/local/etc/postgresql. FHS is out of their tree. 
If\n> PostgreSQL or any other package is not critical to the basic operation of\n> the operating system, it's config files shouldn't be polluting /etc.\n\nPostgreSQL is as critical as PHP, Apache, or whatever other package is being \nbackended by PostgreSQL. If the package is provided by the distributor, \nconsider it part of the OS. If it isn't, well, it isn't.\n\nThis is so that local admin installed (from source -- not from binary package) \nfiles don't get clobbered by the next operating system binary upgrade. In \nthat context the FHS (LSB) mandate makes lots of sense.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n\n", "msg_date": "Thu, 13 Feb 2003 18:30:45 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Thu, 13 Feb 2003, Lamar Owen wrote:\n\n> On Thursday 13 February 2003 18:07, Vince Vielhaber wrote:\n> > Actually FHS says the opposite. If the distribution installs PostgreSQL\n> > then the config files belong in /etc/postgresql. If the admin does then\n> > they belong in /usr/local/etc/postgresql. FHS is out of their tree. If\n> > PostgreSQL or any other package is not critical to the basic operation of\n> > the operating system, it's config files shouldn't be polluting /etc.\n>\n> PostgreSQL is as critical as PHP, Apache, or whatever other package is being\n> backended by PostgreSQL. If the package is provided by the distributor,\n> consider it part of the OS. If it isn't, well, it isn't.\n\nYou completely miss my point, but lately you've been real good at that.\n\nCan the system boot without PHP, Apache, PostgreSQL, Mysql and/or Pine?\nCan the root user log in without PHP, Apache, PostgreSQL, Mysql and/or Pine?\nCan any user log in without PHP, Apache, PostgreSQL, Mysql and/or Pine?\n\nNote, I'm not even including an MTA here. I said BASIC OPERATION.\n\nIf a package is not critical as I just outlined, it shouldn't matter who\ninstalled it.\n\nAfter the last go around with you Lamar, this will be my last response\nto you on this.\n\nVince.\n-- \n Fast, inexpensive internet service 56k and beyond! http://www.pop4.net/\n http://www.meanstreamradio.com http://www.unknown-artists.com\n Internet radio: It's not file sharing, it's just radio.\n\n", "msg_date": "Thu, 13 Feb 2003 18:41:06 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Thursday 13 February 2003 18:41, Vince Vielhaber wrote:\n> On Thu, 13 Feb 2003, Lamar Owen wrote:\n> > PostgreSQL is as critical as PHP, Apache, or whatever other package is\n> > being backended by PostgreSQL. If the package is provided by the\n> > distributor, consider it part of the OS. If it isn't, well, it isn't.\n\n> You completely miss my point, but lately you've been real good at that.\n\nNo, Vince, I understand your point. But understand mine: it does matter who \ninstalled it.\n\n> Note, I'm not even including an MTA here. I said BASIC OPERATION.\n\n> If a package is not critical as I just outlined, it shouldn't matter who\n> installed it.\n\n'Critical' is in the eye of the admin of the system in question. For my \nservers, if, for instance, sshd doesn't come up, then there's a major \nproblem, as they are all headless. If the webserver doesn't come up, I have \nother problems, as OpenACS is mission-critical here. 
So what's critical is a \nquestion for the individual sysadmin.\n\nSo, to continue your point, what is 'critical' to the 'basic operation' of the \nsystem shouldn't pollute /etc. So, let's eliminate the /etc/mail, \n/etc/samba, /etc/xinetd.d, /etc/X11, /etc/httpd, and the other subtrees foung \nin at least Red Hat 8. While we're at it, many other files in /etc need to \ngo: named.conf for one. It depends on what you consider 'critical'. \nPostgreSQL is at least as critical on my systems as some of the other things \nthat already 'pollute' /etc.\n\n> After the last go around with you Lamar, this will be my last response\n> to you on this.\n\nAw Vince, I don't know what your problem is with conflicting opinions. But \nthat's your choice. And Open Source is about _choice_. You are free to \nadmin your systems your way, and I'm free to do so my way. And all's well.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n\n", "msg_date": "Thu, 13 Feb 2003 19:10:04 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Thu, Feb 13, 2003 at 05:59:17PM -0500, Robert Treat wrote:\n> On Thu, 2003-02-13 at 15:08, mlw wrote:\n> > Stephan Szabo wrote:\n> > \n> > On Thu, 13 Feb 2003, mlw wrote:\n> > \n> > I have absolutely no problem debating and augmenting the feature. None\n> > what so ever, I am more pushing to get momentum to actually do it. \n> \n> Stick with it, I think most of us here can see the value in the option,\n> but there are valid concerns that it be implemented correctly.\n> Personally I think a postgresql installation is much more like an apache\n> installation, which generally contains all of the files (data and\n> config) under /usr/local/apache. Maybe someone can dig more to see if\n> that system is more appropriate a comparison than something like bind.\n\n\tI think you are making a pretty uninformed, if not just plain wrong \ngeneralization. I've run exactly one system with apache configuration \nfiles in /usr/local/apache, and even then, the data was not there.\n\n\tA quick straw poll of the people I know who actually do run real systems\nalso mentioned that they use packaging systems like encap or rpm to manage\nupgrades, and would almost never put datafiles into /usr/local.\n\nRedHat (7.3 at least)'s default httpd datafiles go in /var/www/html and\nconfig goes in /etc/httpd\n\nOne OpenBSD user I talked to puts his in /home/www and config files in\n/etc/httpd. The defaults are /var/www and /var/www/conf\n\nAnother user reports:\nOn systems that I set up I have /web/{apache|httpd}/ and put all \nthe config info there.\nAnd /web/sites/name/ holds site data.\n\n\n\nWhat does this mean?\n\nPeople will put things in different places, and there are typically\nvery good reasons for this. This is ESPECIALLY true when one wants to\nhave configuration files, at least the base ones in a common place such\nas /etc or /usr/local/etc in order to make backup of configuration easy\nand clean, while leaving data somewhere else for performance or magnitude\nof partition reasons. It just makes sense to ME to have postgresql.conf\nreside in /etc, yet put my data in /var/data/postgresql, yet retain the\noption to put my data in /raid/data/postgresql at a later date, when the\nnew hardware comes in.\n\nYes, symlinks are an option on most systems. No, they are not a good\none on most systems.\n\n\nWhat _I_ would like to see:\n\no. a default postgresql.conf location of $PREFIX/data/postgresql.conf\no. 
a default PGDATA location of whatever directory postgresql.conf is in\n(this should maintain backward compatibility)\no. a ./configure - time option to override the location of the postgresql.conf\no. a run-time option to override the location of the postgresql.conf\no. options in postgresql.conf to specify the location of PGDATA and PID files.\n\n($PREFIX is already settable at ./configure - time)\n\nThis would allow:\n\to. Config files in /usr/local/pgsql/data, /etc, /usr/local/etc, ~postgresql, \n\tor /dev/.hidden-node, whichever you prefer, so long as you either know\n\tthe compile-time default, or are willing to specify it at startup.\n\n\to. Datafiles to be in /usr/local/pgsql/data, /var/data, /raid0, /nfs/bigmount\n\tor whichever you prefer, so long as you either know the compile-time default,\n\tor are willing to specify it in a config file that you specify at startup.\n\nDoes it add complexity to the system? Sure -- a very little bit, IMHO, especially\ncompared to the BTREE-folding that I see being bantered about.\n\nIs it some work? Sure -- a very little bit, and it seems that it has already\nbeen done.\n\n\nHowever, this seems, to me, to be a very small addition that has some real-world\n(and yes, we need to start paying attention to the real world) advantages.\n\nAnd finally, don't go telling me that I'm wrong to put my data and config files\nwhere I am. You can offer advice, but I'm probably going to ignore it because\nI like where they are and don't need to explain why.\n\n\n-- \nAdam Haberlach | \"Because manholes are round.\"\[email protected] |\nhttp://mediariffic.com |\n", "msg_date": "Thu, 13 Feb 2003 16:22:40 -0800", "msg_from": "Adam Haberlach <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Lamar Owen wrote:\n> This isn't the same environment, Bruce, that you got into back when it was \n> still Postgres95. We are in the big leagues OS-wise, and we need to act like \n> it. Assuming that we are a 'userspace' program (which is a misnomer anyway, \n> as _anything_ non-kernel is 'userspace') is not going to cut it anymore. \n\nSo you are saying this isn't my grandma's database anymore. :-)\n\nAnyway, I think I have _a_ proposal that we can use to work toward a\ngoal.\n\nFirst, a few conclusions:\n\n\tWe can't use /var/run because we need the postmaster to create\n\tthose, and it isn't root.\n\n\tRight now, the fact that we mix the config stuff with the\n\tdata isn't ideal. Someone mentioned:\n\n\t\tpg_dumpall > foo && rm -rf $PGDATA && initdb\n\n\tdiscards all the config files.\n\nSo, I propose we change a few things. The good news is that this is\nsomething only administrators deal with; client apps don't deal with\nit.\n\n\nOK, first, we keep postmaster.pid and postmaster.opts in /data. We\ncan't put them in /var/run, and /data seems like the best spot for them.\n\nThat leaves postgresql.conf, pg_hba.conf, and pg_ident.conf. I\nrecommend moving them all, by default, into pgsql/etc. I recommend we\nadd these to postgresql.conf:\n\n\tdata_dir = ../data\n\tpg_hba_dir = ./\n\tpg_ident_dir = ./\n\nThose paths are relative to postgresql.conf.\n\nWe then add a PGCONFIG variable and postmaster -C flag to point to the\nconfig _directory_. That way, if folks want to move all of this into\n/etc, then easily do that. 
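Spelled out as a sketch -- none of this is implemented yet, and the names are only the ones proposed here:

    # default layout: config lives in pgsql/etc, next to the data
    /usr/local/pgsql/etc/postgresql.conf    (contains data_dir = ../data)
    /usr/local/pgsql/etc/pg_hba.conf
    /usr/local/pgsql/etc/pg_ident.conf
    /usr/local/pgsql/data/                  (PG_VERSION, base/, postmaster.pid, ...)

    # relocated layout: point the postmaster at a config directory instead
    postmaster -C /etc/postgresql
    PGCONFIG=/etc/postgresql postmaster
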
This also pulls those files out of /data so\nthey are easier to back up.\n\nWe can also firm up stuff in 7.5 by removing PGDATA and -D, and perhaps\nremoving the other duplicate postmaster flags that have postgresql.conf\nentries.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 13 Feb 2003 20:09:34 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Thursday 13 February 2003 20:09, Bruce Momjian wrote:\n> Lamar Owen wrote:\n> > This isn't the same environment, Bruce, that you got into back when it\n> > was still Postgres95.\n\n> So you are saying this isn't my grandma's database anymore. :-)\n\nI actually thought of saying it that way, too. :-)\n\n> Anyway, I think I have _a_ proposal that we can use to work toward a\n> goal.\n\n> First, a few conclusions:\n\n> \tWe can't use /var/run because we need the postmaster to create\n> \tthose, and it isn't root.\n\nIt isn't without precedent to have a directory under /var/run. Maybe \n/var/run/postgresql. Under this one could have a uniquely named pid file. I \nsay uniquely named so that multiple postmasters could run. Naming those \nfiles could be fun. /var/run/postgresql would be owned by the postmaster run \nuser. This of course requires root to install -- but would be completely \noptional.\n\n> \t\tpg_dumpall > foo && rm -rf $PGDATA && initdb\n\n> \tdiscards all the config files.\n\nYes, this is a big deal. It makes it more difficult to properly restore. \nWhile it's not impossible to do so now, of course, it just could be a little \neasier.\n\n> So, I propose we change a few things.\n\n> OK, first, we keep postmaster.pid and postmaster.opts in /data. We\n> can't put them in /var/run, and /data seems like the best spot for them.\n\nCan we make that configurable? The default in pgdata is fine; just having the \noption is good.\n\n> That leaves postgresql.conf, pg_hba.conf, and pg_ident.conf. I\n> recommend moving them all, by default, into pgsql/etc. I recommend we\n> add these to postgresql.conf:\n\n> \tdata_dir = ../data\n> \tpg_hba_dir = ./\n> \tpg_ident_dir = ./\n\n> Those paths are relative to postgresql.conf.\n\nAnd these are all just defaults, easily changed. Good.\n\n> We then add a PGCONFIG variable and postmaster -C flag to point to the\n> config _directory_. That way, if folks want to move all of this into\n> /etc, then easily do that. This also pulls those files out of /data so\n> they are easier to back up.\n\nYes. I'm thinking along the lines of this sort of structure:\n/etc\n|---postgresql\n |----- name of postmaster one (unique ID of some kind)\n |----- name of postmaster two\n .\n .\n\nNot difficult.\n\n> We can also firm up stuff in 7.5 by removing PGDATA and -D, and perhaps\n> removing the other duplicate postmaster flags that have postgresql.conf\n> entries.\n\nNow I really _like_ this idea. 
By removing it to 7.5, and therefore \ndeprecating it in 7.4, this brings best practice into effect.\n\nHowever, at the same time, I wouldn't be opposed to leaving them in place, \neither, for backwards compatibility.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n\n", "msg_date": "Thu, 13 Feb 2003 20:39:13 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Bruce Momjian wrote:\n> I don't think separate params for each config file is good. At the\n> most, I think we will specify the configuration _directory_ for all the\n> config files, perhaps pgsql/etc, and have pgdata default to ../data, or\n> honor $PGDATA. That might be the cleanest.\n> \n> Of course, that now gives us $PGCONFIG and $PGDATA, and possible\n> intraction if postgresql.conf specifies a different pgdata from $PGDATA.\n> As you can see, it could get messy.\n\nUh...why are we having to mess with environment variables at all?\nIt's one thing for shell scripts to make use of them, but another\nthing entirely for an executable like the postmaster to do the same.\n\nSeems logical to me to eliminate the use of $PGDATA in the postmaster\nentirely. It usually gets started from a shell script, so let the\nshell script pass the appropriate parameter telling the postmaster\nwhere to find the data, or the config files, or whatever.\n\n> And, if you specify pgdata in postgresql.conf, it prevents you from\n> using that file by different postmasters.\n\nNot at all. Don't GUC variables that are specified on the command\nline override the ones in the configuration file?\n\n> My best guess would be to not specify pgdata in postgresql.conf, and\n> have a new $PGCONFIG param to specify the configuration directory, but\n> if we do that, $PGDATA/postgresql.conf becomes meaningless, which could\n> also be confusing. Maybe we don't allow those files to exist in $PGDATA\n> if $PGCONFIG is used, _and_ $PGCONFIG is not the same as $PGDATA. See,\n> I am getting myself confused. :-)\n\nI think the solution is real simple:\n\n1. Eliminate the use of $PGDATA in the postmaster. It causes far\n more headaches than it's worth. Instead, require that -D be\n passed on the command line. It's fine if the postmaster *sets*\n $PGDATA in order to minimize any changes that need to be made\n elsewhere, but the postmaster should not use it until it sets it.\n The postmaster right now reads all the config files (including\n postgresql.conf) from the directory specified by the -D option.\n Keep it that way.\n\n2. Add a GUC variable that specifies where the data is. If this\n variable is not defined either on the command line or in the\n config file, then assume that the data is in the same place as the\n config file. Obviously files like PG_VERSION are associated with\n the data and not with the config, so they get treated\n appropriately.\n\nThe above addresses *everyone's* concerns that I've seen thus far, I\nthink. Thoughts?\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Thu, 13 Feb 2003 17:47:11 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Lamar Owen wrote:\n> > First, a few conclusions:\n> \n> > \tWe can't use /var/run because we need the postmaster to create\n> > \tthose, and it isn't root.\n> \n> It isn't without precedent to have a directory under /var/run. Maybe \n> /var/run/postgresql. 
Under this one could have a uniquely named pid file. I \n> say uniquely named so that multiple postmasters could run. Naming those \n> files could be fun. /var/run/postgresql would be owned by the postmaster run \n> user. This of course requires root to install -- but would be completely \n> optional.\n\nBut how do you handle the default then, where you have postmaster.pid in\n/data? Do we rename it to postmaster.pid.5432 so it can sit in\n/var/run/postgresql along with other backends?\n\nAnother issue is that pg_ctl looks at that file, so moving it around is\ngoing to be tricky. Also, this brings up a new issue that pg_ctl all of\na sudden can't just look at $PGDATA but must instead grope through\npostgresql.conf to find the data directory location. That could be\ninteresting. Of course, it can still supply the /data path on the\ncommand line, but if we use only $PGCONFIG, we would need to have it\nfind /data automatically from postgresql.conf.\n\n\n> > OK, first, we keep postmaster.pid and postmaster.opts in /data. We\n> > can't put them in /var/run, and /data seems like the best spot for them.\n> \n> Can we make that configurable? The default in pgdata is fine; just having the \n> option is good.\n\nBasically, I am saying that unless someone wants to use this\nconfigurability, it is going to cause code confusion so it is best\navoided.\n\n> Yes. I'm thinking along the lines of this sort of structure:\n> /etc\n> |---postgresql\n> |----- name of postmaster one (unique ID of some kind)\n> |----- name of postmaster two\n> .\n> .\n> \n> Not difficult.\n\nYes, that would work easily.\n\n> > We can also firm up stuff in 7.5 by removing PGDATA and -D, and perhaps\n> > removing the other duplicate postmaster flags that have postgresql.conf\n> > entries.\n> \n> Now I really _like_ this idea. By removing it to 7.5, and therefore \n> deprecating it in 7.4, this brings best practice into effect.\n> \n> However, at the same time, I wouldn't be opposed to leaving them in place, \n> either, for backwards compatibility.\n\nThe problem is that we would have too many ways to specify the\n/data directory.\n\nI am now wondering if we even want pg_hba_dir and pg_ident_dir. Seems\nwe can assume they are in the same directory as postgresql.conf. That\nleaves only data_dir as new for postgresql.conf.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 13 Feb 2003 21:13:29 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Vince Vielhaber wrote:\n> On Thu, 13 Feb 2003, Lamar Owen wrote:\n> \n> > On Thursday 13 February 2003 18:07, Vince Vielhaber wrote:\n> > > Actually FHS says the opposite. If the distribution installs PostgreSQL\n> > > then the config files belong in /etc/postgresql. If the admin does then\n> > > they belong in /usr/local/etc/postgresql. FHS is out of their tree. If\n> > > PostgreSQL or any other package is not critical to the basic operation of\n> > > the operating system, it's config files shouldn't be polluting /etc.\n> >\n> > PostgreSQL is as critical as PHP, Apache, or whatever other package is being\n> > backended by PostgreSQL. If the package is provided by the distributor,\n> > consider it part of the OS. 
If it isn't, well, it isn't.\n> \n> You completely miss my point, but lately you've been real good at that.\n> \n> Can the system boot without PHP, Apache, PostgreSQL, Mysql and/or\n> Pine?\n\nYep.\n\n> Can the root user log in without PHP, Apache, PostgreSQL, Mysql\n> and/or Pine?\n\nHopefully.\n\n> Can any user log in without PHP, Apache, PostgreSQL, Mysql and/or\n> Pine?\n\nThat depends, doesn't it? There exist PAM modules that allow\nauthentication against a database, for instance. If you're using them\nand the database doesn't come up, the users can't log in. So suddenly\nthe database config files belong in /etc?\n\nThe mission of the box is what counts. If the mission of the box is\nto be a web server then I'm probably not going to care whether\nnon-root users can log into it: that simply doesn't factor into the\nmission profile. The web server process is going to be as critical to\nthe mission of the box as almost anything else on it, as will anything\nthe web server process depends on -- which may well include a\ndatabase.\n\n> Note, I'm not even including an MTA here. I said BASIC OPERATION.\n\nSo by your reasoning sendmail.cf doesn't belong in /etc?? I dare say\nthat's news to most of us. Where, then, *does* it belong?\n\n> If a package is not critical as I just outlined, it shouldn't matter\n> who installed it.\n\nOh, it matters a great deal, because people upgrade their OS installs\nfrom time to time. Many OS distributions come with a lot of packages\nthat aren't \"critical\" as you define them but which nevertheless will\ncause much pain and suffering for the sysadmin if they install\nthemselves over what the sysadmin has previously built by hand.\n\nThe purpose for differentiating between a package that was compiled\nand installed from the source by the sysadmin and a prebuilt package\nthat was provided to the sysadmin by the vendor is to keep them from\nstepping on each other -- if the sysadmin went to the trouble of\ncompiling and installing a package from the source instead of using a\nprebuilt version from the vendor, then he probably did so for a very\ngood reason, and is going to be *really* annoyed if an OS upgrade\nblows away his work.\n\n\nThere are some good reasons for putting all the config files in /etc,\none of them being that it gives you *one* directory full of config\nfiles to worry about backing up instead of many. If you've got other\nideas I'm certainly interested in hearing the reasoning behind them.\nBut from the point of view of maintaining a widely deployed package\nlike PostgreSQL, the conventions the distributions and sysadmins use\nmatter a great deal, whether or not you happen to agree with those\nconventions.\n\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Thu, 13 Feb 2003 18:20:05 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "> On 13 Feb 2003, Oliver Elphick wrote:\n> \n> > What your comments strongly suggest to me is that projects like\n> > PostgreSQL and pine, along with everything else, should comply with FHS;\n> > then there will be no confusion because everyone will be following the\n> > smae standards. Messes arise when people ignore standards; we have all\n> > seen the dreadful examples of MySQL and the Beast, haven't we?\n> \n> Actually FHS says the opposite. If the distribution installs PostgreSQL\n> then the config files belong in /etc/postgresql. 
If the admin does then\n> they belong in /usr/local/etc/postgresql. FHS is out of their tree. If\n> PostgreSQL or any other package is not critical to the basic operation of\n> the operating system, it's config files shouldn't be polluting /etc.\n\nI suspect you may be conflating BSD usage with Linux usage here...\n\nThe point isn't of being \"critical to basic operation of the operating \nsystem;\" it is of whether or not the software is being \"package-managed\" or \nnot.\n\nOne of the operating principles in FHS is that \"/usr/local\" is an area that \nthe distribution should never \"pollute.\" And so, a \"package-managed\" \nPostgreSQL installation should never touch that area.\n\nLooking at FHS, for a moment: http://www.pathname.com/fhs/2.2/\n\n3.7.1 Purpose\n/etc contains configuration files and directories that are specific to the \ncurrent system.\n\n3.7.4 Indicates that \n\n\"Host-specific configuration files for add-on application software packages \nmust be installed within the directory /etc/opt/<package>, where <package> is \nthe name of the subtree in /opt where the static data from that package is \nstored.\"\n\n3.12 indicates: /opt is reserved for the installation of add-on application \nsoftware packages.\n\nA package to be installed in /opt must locate its static files in a separate \n/opt/<package> directory tree, where <package> is a name that describes the \nsoftware package.\n\nThen comes 5.1, on /var\n\n/var contains variable data files. This includes spool directories and files, \nadministrative and logging data, and transient and temporary files.\n\nIt would make most sense, based on FHS, for PostgreSQL information to \nassortedly reside in:\n\n- /etc/opt/postgresql or /etc/postgresql, for static config information;\n- Binaries could assortedly live in /usr/bin or /opt/postgresql;\n- Logs should live in /var/log or /var/log/postgresql;\n- Data could assortedly live in /var/lib/postgresql, /var/opt/postgresql;\n- PIDs should live in /var/lock or /var/lock/postgresql.\n\nNone of these choices should come as any spectacular shock to anyone; there \nare an assortment of sets of bigotry out there surrounding the Proper Purposes \nof /opt and /usr/local, and there's probably enough wriggle room there to \navoid overly enraging anyone that (for instance) felt calling a directory \n\"/opt\" would make someone deserving of carpet bombing by B-52s.\n\nInterestingly, the Debian install of PostgreSQL somewhat resembles this, with, \nassortedly:\n\n/etc/postgresql\n/etc/postgresql/postgresql.conf\n/etc/postgresql/postmaster.conf\n/etc/postgresql/pg_hba.conf\n/etc/postgresql/pg_ident.conf\n/etc/init.d/postgresql\n/usr/share/doc/postgresql\n/usr/share/man/man1/pg_ctl.1.gz\n/usr/lib/postgresql\n/usr/lib/postgresql/bin/postgres\n/usr/lib/postgresql/bin/enable_lang\n/usr/lib/postgresql/bin/initdb\n/usr/lib/postgresql/bin/initlocation\n/usr/lib/postgresql/bin/ipcclean\n/usr/lib/postgresql/bin/pg_ctl\n/usr/lib/postgresql/bin/pg_dumpall\n/var/run/postgresql\n\n(This is obviously incomplete; this just gives the flavor that there are files \nin a reasonably rational but diverse assortment of places.)\n\nNote that the server software hides in /usr/lib/postgresql/bin; it's not stuff \nyou should normally run from the command line, so, quel surprise, it is \nstashed somewhere that's unlikely to be in your $PATH.\n\nStashing _everything_ in /var/lib/postgres would seem a tad surprising.\n\nMind you, if I need to manage 4 instances on one box, I might very well \ninstall several instances 
some place custom, say /opt/postgres, or similar, \nand in that case, it's probably preferable for everything to reside clearly \nunderneath that, and for my custom backup scripts to work in that area.\n\nBut if I'm managing 4 instances on one box, it should be quite evident that \nI'm going well beyond what any packaging system is likely to be prepared to \nhandle. Again, quel surprise.\n--\n(reverse (concatenate 'string \"gro.gultn@\" \"enworbbc\"))\nhttp://www3.sympatico.ca/cbbrowne/linuxxian.html\n\"Of _course_ it's the murder weapon. Who would frame someone with a\nfake?\"\n\n\n", "msg_date": "Thu, 13 Feb 2003 21:45:47 -0500", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: location of the configuration files " }, { "msg_contents": "On Thu, 13 Feb 2003, scott.marlowe wrote:\n\n> If I do a .tar.gz install of apache, I get /usr/local/apache/conf, which\n> is not the standard way you're listing.\n\nI'm going to stay out of this argument from now on, but this struck a sore\npoint.\n\n/usr is designed to be a filesystem that can be shared. Is the stuff in\n/usr/local/apache/conf really supposed to be shared amongst all machines\nof that architecture on your site that run apache?\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n", "msg_date": "Fri, 14 Feb 2003 11:46:39 +0900 (JST)", "msg_from": "Curt Sampson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Thursday 13 February 2003 21:13, Bruce Momjian wrote:\n> Lamar Owen wrote:\n> > It isn't without precedent to have a directory under /var/run. Maybe\n> > /var/run/postgresql. Under this one could have a uniquely named pid\n> > file.\n\n> But how do you handle the default then, where you have postmaster.pid in\n> /data? Do we rename it to postmaster.pid.5432 so it can sit in\n> /var/run/postgresql alone with other backends?\n\nWell, you can have the default as 'postmaster.pid' if it wasn't named. But \nmore thought is needed. I'll have to admit; the wisdom of AOLserver having a \nfull-fledged tcl config script is beginning to look better and better.\n\n> Another issue is that pg_ctl looks at that file, so moving it around is\n> going to be tricky.\n\npg_ctl could be interesting.\n\n> I am now wondering if we even want pg_hba_dir and pg_ident_dir. Seems\n> we can assume they are in the same directory as postgresql.conf. That\n> leaves only data_dir as new for postgresql.conf.\n\nOk, if we're going this far already, tell me exactly why we have three config \nfiles. Why not really Unify things and fulfil the full promise of Grand \nUnified Configuration by rolling hba and ident into postgresql.conf. Is \nthere a compelling reason not to do so? The structure of that configuration \ndata would have to change, for sure. Although I seem to remember this being \nsuggested once before, but my mind draws a blank trying to recall it. 
Just a \nsuggestion; maybe not even a good one, but something that crossed my mind.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n\n", "msg_date": "Thu, 13 Feb 2003 21:47:19 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Oliver Elphick <[email protected]> writes:\n> I'm not entirely sure why SE Linux has a problem, seeing that postgres\n> needs read-write access to all the files in $PGDATA, but assuming the\n> need is verified, I could do this by moving the pid file from\n> $PGDATA/postmaster.pid to /var/run/postgresql/5432.pid and similarly for\n> other ports. This would also have the benefit of being more FHS\n> compliant What do people think about that?\n\nNo chance at all. Breaking the connection between the data directory\nand the postmaster.pid file means we don't have an interlock against\nstarting two postmasters in the same data directory.\n\nI do not see the argument for moving the pid file anyway. Surely no\none's going to tell us that the postmaster shouldn't have write access\nto the data directory?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Feb 2003 21:49:16 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files " }, { "msg_contents": "On Thursday 13 February 2003 21:49, Tom Lane wrote:\n> Oliver Elphick <[email protected]> writes:\n> > need is verified, I could do this by moving the pid file from\n> > $PGDATA/postmaster.pid to /var/run/postgresql/5432.pid and similarly for\n> > other ports. This would also have the benefit of being more FHS\n> > compliant What do people think about that?\n\n> No chance at all. Breaking the connection between the data directory\n> and the postmaster.pid file means we don't have an interlock against\n> starting two postmasters in the same data directory.\n\nIt's not a pid file in the /var/run sense, really. It's an interlock for \nPGDATA. So it might be argued that postmaster.pid is misnamed.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n\n", "msg_date": "Thu, 13 Feb 2003 21:59:25 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Lamar Owen wrote:\n> > I am now wondering if we even want pg_hba_dir and pg_ident_dir. Seems\n> > we can assume they are in the same directory as postgresql.conf. That\n> > leaves only data_dir as new for postgresql.conf.\n> \n> Ok, if we're going this far already, tell me exactly why we have three config \n> files. Why not really Unify things and fulfil the full promise of Grand \n> Unified Configuration by rolling hba and ident into postgresql.conf. Is \n> there a compelling reason not to do so? The structure of that configuration \n> data would have to change, for sure. Although I seem to remember this being \n> suggested once before, but my mind draws a blank trying to recall it. Just a \n> suggestion; maybe not even a good one, but something that crossed my mind.\n\npostgresql.conf is var=val, while the others are column-based.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 13 Feb 2003 22:11:05 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Lamar Owen <[email protected]> writes:\n> Ok, if we're going this far already, tell me exactly why we have three config\n> files. Why not really Unify things and fulfil the full promise of Grand \n> Unified Configuration by rolling hba and ident into postgresql.conf. Is \n> there a compelling reason not to do so?\n\nLack of backwards compatibility; unnecessary complexity. Unifying those\nfiles would create a big headache in terms of having to unify their\nsyntax. And there are some basic semantic differences too. For\ninstance, order matters in pg_hba.conf, but not in postgresql.conf.\n\nAnother reason not to do it is that there are differences in the\nsecurity requirements of these files. postgresql.conf probably doesn't\ncontain anything that needs to be hidden from prying eyes, but I'd be\ninclined to want to keep the other two mode 600.\n\n---\n\nOkay, I've been laying low all day, but here are my thoughts on the\ndiscussion:\n\nI do see the value in being able to (as opposed to being forced to,\nplease) keep hand-edited config files in a separate location from\nthe machine-processed data files. We have already gone some distance\nin that direction over the past few releases --- there's much less in\nthe top $PGDATA directory than there once was. It makes sense to let\npeople keep hand-edited files away from what initdb will overwrite.\n\nI would favor a setup that allows a -C *directory* (not file) to be\nspecified as a postmaster parameter separately from the -D directory;\nthen the hand-editable config files would be sought in -C not -D. In\nthe absence of -C the config files should be sought in -D, same as they\never were (thus simplifying life for people like me who run many\npostmasters and don't give a darn about FHS ;-)).\n\nI don't see any great value in a separate postgresql.conf parameter for\neach secondary config file; that just means clutter to me, especially\nif we add more such files in future. I am also distinctly not in favor\nof eliminating the PGDATA environment variable; that reads to me as\n\"we are going to force you to do it our way rather than the way you've\nalways done it, even if you like the old way\".\n\nTo make the RPM packagers happy, I guess that the default -C directory\nhas to be settable via configure. We do not currently have a default\n-D directory, and I didn't hear anyone arguing in favor of adding one.\nSo that leaves the following possible combinations that the postmaster\nmight see at startup, for which I propose the following behaviors:\n\n1. No -C switch, no -D switch, no PGDATA found in environment: seek\npostgresql.conf in the default -C directory established at configure\ntime. Use the 'datadir' specified therein as -D. Fail if postgresql.conf\ndoesn't define a datadir value.\n\n2. No -C switch, no -D switch, PGDATA found in environment: use $PGDATA\nas both -C and -D. (Minor detail: if the postgresql.conf in the $PGDATA\ndirectory specifies a different directory as datadir, do we follow that\nor raise an error? I'd be inclined to say \"follow it\" but maybe there\nis an argument for erroring out.)\n\n(In all the following cases, any environment PGDATA value is ignored.)\n\n3. No -C switch, -D switch on command line: use -D value as both -C and -D,\nproceed as in case 2.\n\n4. 
-C switch, no -D switch on command line: seek postgresql.conf in\n-C directory, use the datadir it specifies.\n\n5. -C and -D on command line: seek postgresql.conf in -C directory,\nuse -D as datadir overriding what is in postgresql.conf (this is just\nthe usual rule that command line switches override postgresql.conf).\n\nCases 2 and 3 are backwards-compatible with our historical behavior,\nso that anyone who likes the historical behavior will not be unhappy.\nCases 1 and 4 I think will make mlw and our packagers happy. Case 5\nis just the logical conclusion for that combination.\n\nIn all cases, pg_hba.conf and pg_ident.conf would be sought in the\nsame directory as postgresql.conf. The other stuff in the toplevel\n$PGDATA directory should stay where it is, IMHO.\n\nI would venture that the configure-time-default for -C should be\n${prefixdir}/etc if configure is not told differently, while the\npackagers would probably set it to /etc/postgresql/ (ie, the\nconfig files should live in a subdirectory that can be owned by\npostgres user). I'm not wedded to that though.\n\nAnother interesting question is whether the installed-by-default\npostgresql.conf should specify a datadir value, and if so what.\nIf initdb installs it, it can and probably should insert the actual\ndatadir location the user gave to initdb into the file. But should\ninitdb install any config files at all anymore? I'm leaning to the\nthought that initdb should store default config files into $PGDATA\nsame as it ever did, and then it's up to the user (or package install\nscripts) to move them to the desired -C directory if appropriate.\nOr I suppose we could add a -C parameter to initdb to tell it where to\nput 'em.\n\nComments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Feb 2003 23:00:02 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files " }, { "msg_contents": "Tom Lane wrote:\n> I don't see any great value in a separate postgresql.conf parameter for\n> each secondary config file; that just means clutter to me, especially\n> if we add more such files in future. I am also distinctly not in favor\n> of eliminating the PGDATA environment variable; that reads to me as\n> \"we are going to force you to do it our way rather than the way you've\n> always done it, even if you like the old way\".\n\nThe scripts needn't ignore PGDATA at all. Only postmaster. Since the\nvast majority of people start the postmaster from a script, this winds\nup being a minor issue, except for the fact that without PGDATA\nadministrators will be able to count on looking at the output of 'ps'\nto determine where the postmaster is looking for either the config\nfile or the data directory. In other words, they'll have somewhere to\nstart from without having to poke through scripts that might not even\nhave been used (what happens when a user defines PGDATA and starts a\npostmaster? The administrator will have to go to more extreme\nlengths, like using lsof, to figure out where the data directory is.\nNot all systems have such tools).\n\n> Comments?\n\nI agree with your assessment for the most part, except for PGDATA.\nThere's no good reason I can think of for the postmaster to look at\nit. It's fine if it sets it for processes it forks to inherit, but it\nshouldn't pay attention to it on startup. 
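\n\n(To make that concrete, here is a rough sketch of the kind of startup\ncheck I have in mind. This is purely illustrative C -- not the actual\npostmaster.c code -- and \"conf_data_directory\" is just a made-up\nstand-in for whatever variable ends up holding the data location read\nfrom postgresql.conf:\n\n    #include <stdio.h>\n\n    /* Illustrative sketch only -- not the real postmaster.c. */\n    static const char *\n    resolve_data_dir(const char *dash_D, const char *conf_data_directory)\n    {\n        if (dash_D != NULL)                /* -D on the command line wins */\n            return dash_D;\n        if (conf_data_directory != NULL)   /* else fall back to postgresql.conf */\n            return conf_data_directory;\n        return NULL;                       /* note: no getenv(\"PGDATA\") fallback */\n    }\n\n    int\n    main(int argc, char *argv[])\n    {\n        /* pretend argv[1] is the -D value, if any */\n        const char *datadir = resolve_data_dir(argc > 1 ? argv[1] : NULL, NULL);\n\n        if (datadir == NULL)\n        {\n            fprintf(stderr, \"postmaster: no data directory specified; \"\n                    \"use -D or set it in postgresql.conf\\n\");\n            return 2;\n        }\n        printf(\"would use data directory %s\\n\", datadir);\n        return 0;\n    }\n\nThe only point of the sketch is the search order: the command line\nfirst, then the config file, and the environment never enters into\nit.)\n\n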
Some people might complain,\nbut there's little difference in doing a \"postmaster -D $PGDATA\" and\njust \"postmaster\", and people who are starting things by hand\nhopefully aren't so inflexible as to demand that PGDATA remain treated\nas-is. People who really care can create a simple little 'pm.sh'\nscript that simply does a \"postmaster -D $PGDATA\", which will save\nthem typing even over just doing a \"postmaster\" from the command line.\n\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Thu, 13 Feb 2003 21:59:42 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Thu, 2003-02-13 at 23:06, mlw wrote:\n> \n> Bruce Momjian wrote:\n\n> > Can non-root write to /var/run?\n> > \n> > \n> Shouldn't be able too\n\nBut it should be able to write under /var/run/postgresql, which the\ndistribution will set up with the correct permissions.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight, UK http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"God be merciful unto us, and bless us; and cause his \n face to shine upon us.\" Psalms 67:1 \n\n", "msg_date": "14 Feb 2003 06:17:43 +0000", "msg_from": "Oliver Elphick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Thu, 2003-02-13 at 22:53, Bruce Momjian wrote:\n> Oliver Elphick wrote:\n> > What your comments strongly suggest to me is that projects like\n> > PostgreSQL and pine, along with everything else, should comply with FHS;\n> > then there will be no confusion because everyone will be following the\n> > smae standards. Messes arise when people ignore standards; we have all\n> > seen the dreadful examples of MySQL and the Beast, haven't we?\n> \n> Can the FHS handle installing PostgreSQL as non-root?\n\nCertainly. It is only necessary to set permissions correctly in\n/etc/postgresql and /var/run/postgresql.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight, UK http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"God be merciful unto us, and bless us; and cause his \n face to shine upon us.\" Psalms 67:1 \n\n", "msg_date": "14 Feb 2003 06:18:22 +0000", "msg_from": "Oliver Elphick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Fri, 2003-02-14 at 02:49, Tom Lane wrote:\n> Oliver Elphick <[email protected]> writes:\n> > I'm not entirely sure why SE Linux has a problem, seeing that postgres\n> > needs read-write access to all the files in $PGDATA, but assuming the\n> > need is verified, I could do this by moving the pid file from\n> > $PGDATA/postmaster.pid to /var/run/postgresql/5432.pid and similarly for\n> > other ports. This would also have the benefit of being more FHS\n> > compliant What do people think about that?\n> \n> No chance at all. Breaking the connection between the data directory\n> and the postmaster.pid file means we don't have an interlock against\n> starting two postmasters in the same data directory.\n\nYes; that would take a lot of effort to get round. Not worth it, I\nthink.\n\n> I do not see the argument for moving the pid file anyway. 
Surely no\n> one's going to tell us that the postmaster shouldn't have write access\n> to the data directory?\n\nI'm waiting for a response on that one; I don't understand it either.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight, UK http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"God be merciful unto us, and bless us; and cause his \n face to shine upon us.\" Psalms 67:1 \n\n", "msg_date": "14 Feb 2003 06:26:12 +0000", "msg_from": "Oliver Elphick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Kevin Brown <[email protected]> writes:\n> I agree with your assessment for the most part, except for PGDATA.\n> There's no good reason I can think of for the postmaster to look at\n> it.\n\nThe other side of that coin is, what's the good reason to remove it?\nThere's a long way between \"I don't want my setup to depend on PGDATA\"\nand \"I don't think your setup should be allowed to depend on PGDATA\".\nIf you don't want to use it, then don't use it. Why do you need to\ntell me how I'm allowed to run my installation?\n\n> ... people who are starting things by hand hopefully aren't so\n> inflexible as to demand that PGDATA remain treated as-is.\n\nYes, I could reconfigure my scripts to not depend on this. You have\nnot given me an adequate argument why I should have to.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 14 Feb 2003 01:30:21 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files " }, { "msg_contents": "On Fri, 2003-02-14 at 02:45, [email protected] wrote:\n> 3.7.1 Purpose\n> /etc contains configuration files and directories that are specific to the \n> current system.\n> \n> 3.7.4 Indicates that \n> \n> \"Host-specific configuration files for add-on application software packages \n> must be installed within the directory /etc/opt/<package>, where <package> is \n> the name of the subtree in /opt where the static data from that package is \n> stored.\"\n> \n> 3.12 indicates: /opt is reserved for the installation of add-on application \n> software packages.\n> \n> A package to be installed in /opt must locate its static files in a separate \n> /opt/<package> directory tree, where <package> is a name that describes the \n> software package.\n...\n> It would make most sense, based on FHS, for PostgreSQL information to \n> assortedly reside in:\n> \n> - /etc/opt/postgresql or /etc/postgresql, for static config information;\n\nI feel that /opt (and therefore /etc/opt) are intended for the use of\nvendors; so commercial packages designed to fit in with FHS should use\nthose. 
I don't think they are for locally built stuff.\n\nNo matter; it illustrates the main point, which is that these things\nshould be easily configurable.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight, UK http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"God be merciful unto us, and bless us; and cause his \n face to shine upon us.\" Psalms 67:1 \n\n", "msg_date": "14 Feb 2003 06:37:30 +0000", "msg_from": "Oliver Elphick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Apache explicitly supports a number of different layouts for files out of\nthe box (and provides support for you to roll your own very easily). From\nthe manual:\n\nThe second, and more flexible way to configure the install path locations\nfor Apache is using the config.layout file. Using this method, it is\npossible to separately specify the location for each type of file within the\nApache installation. The config.layout file contains several example\nconfigurations, and you can also create your own custom configuration\nfollowing the examples. The different layouts in this file are grouped into\n<Layout FOO>...</Layout> sections and referred to by name as in FOO.\n --enable-layout=LAYOUT\n Use the named layout in the config.layout file to specify the installation\npaths.\nMaybe pg could benefit from something similar?\n\ncheers\n\nandrew\n\n----- Original Message -----\nFrom: \"scott.marlowe\" <[email protected]>\nSent: Thursday, February 13, 2003 4:07 PM\n[snip]\n> If I do a .tar.gz install of apache, I get /usr/local/apache/conf, which\n> is not the standard way you're listing. If I install openldap from\n> .tar.gz, I get a /usr/local/etc/openldap directory, close, but still not\n> the same. The fact is, it's the packagers that put things into /etc and\n> whatnot, and I can see the postgresql RPMs or debs or whatever having that\n> as the default, but for custom built software, NOTHING that I know of\n> builds from source and uses /etc without a switch to tell it to, just like\n> postgresql can do now.\n\n", "msg_date": "Fri, 14 Feb 2003 02:35:03 -0500", "msg_from": "\"Andrew Dunstan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Tom Lane wrote:\n> Kevin Brown <[email protected]> writes:\n> > I agree with your assessment for the most part, except for PGDATA.\n> > There's no good reason I can think of for the postmaster to look at\n> > it.\n> \n> The other side of that coin is, what's the good reason to remove it?\n> There's a long way between \"I don't want my setup to depend on PGDATA\"\n> and \"I don't think your setup should be allowed to depend on PGDATA\".\n> If you don't want to use it, then don't use it. Why do you need to\n> tell me how I'm allowed to run my installation?\n\nI'm not talking about getting rid of ALL dependency on PGDATA in our\nentire distribution, only postmaster's.\n\nRecall that the main purpose of making any of these changes at all is\nto make life easier for the guys who have to manage the systems that\nwill be running PostgreSQL. Agreed?\n\nSo: imagine you're the newly-hired DBA and your boss points you to the\nsystem and says \"administrate the database on that\". You go over to\nthe computer and start looking around.\n\nYou do a \"ps\" and see a postmaster process running. 
You know that\nit's the process that is listening for connections. The \"ps\" listing\nonly says \"/usr/bin/postmaster\". No arguments to clue you in,\nnothing. Where do you look to figure out where the data is? How do\nyou figure out what port it's listening on?\n\nWell, we're already agreed on how to deal with that question: you look\nin /etc/postgresql, and because this is a relatively new install (and\nthe PostgreSQL maintainers, who are very wise and benevolent, made\nthat the default location for configs :-), it has a postgresql.conf\nfile with a line that says \"data_directory = /var/lib/pgsql\". It\ndoesn't mention a port to listen to so you know that it's listening on\nport 5432. As a DBA, you're all set.\n\nNow let's repeat that scenario, except that instead of seeing one\npostmaster process, you see five. And they all say\n\"/usr/bin/postmaster\" in the \"ps\" listing. No arguments to clue you\nin or anything, as before. You might be able to figure out where one\nof them is going by looking at /etc/postgresql, but what about the\nrest? Now you're stuck unless you want to do a \"find\" (time consuming\nand I/O intensive -- a good way to slow the production database down a\nbit), or you're knowledgeable enough to use 'lsof' or black magic like\ndigging into kernel memory to figure out where the config files and\ndata directories are, or you have enough knowledge to pore through the\nstartup scripts and understand what they're doing.\n\nLest you think that this is an unlikely scenario, keep in mind that\nmost startup scripts, including pg_ctl, currently start the postmaster\nwithout arguments and rely on PGDATA, so a shop that hasn't already\nbeen bitten by this *will* be. Right now shops that wish to avoid the\ntrap I described have to go to *extra* lengths: they have to make\nexactly the same kinds of changes to the scripts that I'm talking\nabout us making (putting an explicit '-D \"$PGDATA\"' where none now\nexists) or they have to resort to tricks like renaming the postmaster\nexecutable and creating a shell script in its place that will invoke\nthe (renamed) postmaster with '-D \"$PGDATA\"'.\n\nIt's not so bad if only a few shops have to make those changes. But\nwhat if it's thousands? Yeah, the distribution guys can patch the\nscripts to do this, but why should they have to? They, and the shops\nthat run PostgreSQL, are our customers.\n\n\nAll of that is made possible because the postmaster can use an\ninherited PGDATA for the location of the config files and (if the\nconfig files don't say differently in our new scheme) the data\ndirectory, and pg_ctl takes advantage of that fact (as do most startup\nscripts that I've seen, that don't just invoke pg_ctl).\n\nI'm not arguing that we should remove the use of PGDATA *everywhere*,\nonly in postmaster (and then, only postmaster's use of an *inherited*\nPGDATA. It should still set PGDATA so its children can use it). It\nmeans changing pg_ctl and the startup scripts we ship. The earlier we\nmake these changes, the less overall pain there will be in the long\nrun.\n\n\n> > ... people who are starting things by hand hopefully aren't so\n> > inflexible as to demand that PGDATA remain treated as-is.\n> \n> Yes, I could reconfigure my scripts to not depend on this. 
You have\n> not given me an adequate argument why I should have to.\n\n[By this I'm assuming you're referring to the scripts you use for\ntesting, and not the ones that ship with the distribution]\n\nI'm not arguing that you should get rid of all the references to\nPGDATA in your scripts or anything crazy like that. The changes I'm\ntalking about are minor: where you see \"postmaster\" without any \"-D\"\narguments, you simply add '-D \"$PGDATA\"' to it, before any other\narguments that you might also be passing. That's it. Nothing else\nshould be needed.\n\nThe reason for removing postmaster's use of an inherited PGDATA is the\nsame as the reason for making the other changes we already agree\nshould be made: to make things easier for the guys in the field who\nhave to manage production systems.\n\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Fri, 14 Feb 2003 02:58:49 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Kevin Brown wrote:\n> Tom Lane wrote:\n> > Kevin Brown <[email protected]> writes:\n> > > I agree with your assessment for the most part, except for PGDATA.\n> > > There's no good reason I can think of for the postmaster to look at\n> > > it.\n> > \n> > The other side of that coin is, what's the good reason to remove it?\n> > There's a long way between \"I don't want my setup to depend on PGDATA\"\n> > and \"I don't think your setup should be allowed to depend on PGDATA\".\n> > If you don't want to use it, then don't use it. Why do you need to\n> > tell me how I'm allowed to run my installation?\n> \n> I'm not talking about getting rid of ALL dependency on PGDATA in our\n> entire distribution, only postmaster's.\n> \n> Recall that the main purpose of making any of these changes at all is\n> to make life easier for the guys who have to manage the systems that\n> will be running PostgreSQL. Agreed?\n> \n> So: imagine you're the newly-hired DBA and your boss points you to the\n> system and says \"administrate the database on that\". You go over to\n> the computer and start looking around.\n> \n> You do a \"ps\" and see a postmaster process running. You know that\n> it's the process that is listening for connections. The \"ps\" listing\n> only says \"/usr/bin/postmaster\". No arguments to clue you in,\n> nothing. Where do you look to figure out where the data is? How do\n> you figure out what port it's listening on?\n\nIf you want ps to display the data dir, you should use -D. Remember, it\nis mostly important for multiple postmaster, so if you are doing that,\njust use -D, but don't prevent single-postmaster folks from using\nPGDATA.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 14 Feb 2003 07:17:17 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "\nOK, here is an updated proposal. I think we have decided:\n\n\tMoving postmaster.pid and postmaster.opts isn't worth it.\n\n\tWe don't want per-file GUC variables, but assume it is in\n\tthe same config directory as postgresql.conf. 
I don't\n\tsee any valid reason they would want to put them somewhere\n\tdifferent than postgresql.conf.\n\nSo, we add data_dir to postgresql.conf, and add -C/PGCONFIG to\npostmaster.\n\nRegarding Tom's idea of replacing data_dir with a full path during\ninitdb, I think we are better having it be relative to the config\ndirectory, that way if they move pgsql/, the system still works.\nHowever, if the config directory is in a different lead directory path,\nwe should replace it with the full path, e.g. /usr/local/pgsql/data and\n/usr/local/pgsql/etc use relative paths, ../data, while /etc/postgresql\nand /usr/local/pgsql/data get an absolute path.\n\nMy idea is to introduce the above capabilities in 7.4, and keep the\nconfig files in /data. This will allow package folks to move the config\nfiles in 7.4.\n\nI also think we should start telling people to use PGCONFIG rather than\nPGDATA. Then, in 7.5, we move the default config file location to\npgsql/etc, and tell folks to point there rather than /data. I think\nthere is major value to getting those config files out of the initdb\ncreation area and for backups.\n\nI am now wondering if we should add PGCONFIG and move them out of data\nall in the same release. Not sure if delaying the split is valuable.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Lamar Owen <[email protected]> writes:\n> > Ok, if we're going this far already, tell me exactly why we have three config\n> > files. Why not really Unify things and fulfil the full promise of Grand \n> > Unified Configuration by rolling hba and ident into postgresql.conf. Is \n> > there a compelling reason not to do so?\n> \n> Lack of backwards compatibility; unnecessary complexity. Unifying those\n> files would create a big headache in terms of having to unify their\n> syntax. And there are some basic semantic differences too. For\n> instance, order matters in pg_hba.conf, but not in postgresql.conf.\n> \n> Another reason not to do it is that there are differences in the\n> security requirements of these files. postgresql.conf probably doesn't\n> contain anything that needs to be hidden from prying eyes, but I'd be\n> inclined to want to keep the other two mode 600.\n> \n> ---\n> \n> Okay, I've been laying low all day, but here are my thoughts on the\n> discussion:\n> \n> I do see the value in being able to (as opposed to being forced to,\n> please) keep hand-edited config files in a separate location from\n> the machine-processed data files. We have already gone some distance\n> in that direction over the past few releases --- there's much less in\n> the top $PGDATA directory than there once was. It makes sense to let\n> people keep hand-edited files away from what initdb will overwrite.\n> \n> I would favor a setup that allows a -C *directory* (not file) to be\n> specified as a postmaster parameter separately from the -D directory;\n> then the hand-editable config files would be sought in -C not -D. In\n> the absence of -C the config files should be sought in -D, same as they\n> ever were (thus simplifying life for people like me who run many\n> postmasters and don't give a darn about FHS ;-)).\n> \n> I don't see any great value in a separate postgresql.conf parameter for\n> each secondary config file; that just means clutter to me, especially\n> if we add more such files in future. 
I am also distinctly not in favor\n> of eliminating the PGDATA environment variable; that reads to me as\n> \"we are going to force you to do it our way rather than the way you've\n> always done it, even if you like the old way\".\n> \n> To make the RPM packagers happy, I guess that the default -C directory\n> has to be settable via configure. We do not currently have a default\n> -D directory, and I didn't hear anyone arguing in favor of adding one.\n> So that leaves the following possible combinations that the postmaster\n> might see at startup, for which I propose the following behaviors:\n> \n> 1. No -C switch, no -D switch, no PGDATA found in environment: seek\n> postgresql.conf in the default -C directory established at configure\n> time. Use the 'datadir' specified therein as -D. Fail if postgresql.conf\n> doesn't define a datadir value.\n> \n> 2. No -C switch, no -D switch, PGDATA found in environment: use $PGDATA\n> as both -C and -D. (Minor detail: if the postgresql.conf in the $PGDATA\n> directory specifies a different directory as datadir, do we follow that\n> or raise an error? I'd be inclined to say \"follow it\" but maybe there\n> is an argument for erroring out.)\n> \n> (In all the following cases, any environment PGDATA value is ignored.)\n> \n> 3. No -C switch, -D switch on command line: use -D value as both -C and -D,\n> proceed as in case 2.\n> \n> 4. -C switch, no -D switch on command line: seek postgresql.conf in\n> -C directory, use the datadir it specifies.\n> \n> 5. -C and -D on command line: seek postgresql.conf in -C directory,\n> use -D as datadir overriding what is in postgresql.conf (this is just\n> the usual rule that command line switches override postgresql.conf).\n> \n> Cases 2 and 3 are backwards-compatible with our historical behavior,\n> so that anyone who likes the historical behavior will not be unhappy.\n> Cases 1 and 4 I think will make mlw and our packagers happy. Case 5\n> is just the logical conclusion for that combination.\n> \n> In all cases, pg_hba.conf and pg_ident.conf would be sought in the\n> same directory as postgresql.conf. The other stuff in the toplevel\n> $PGDATA directory should stay where it is, IMHO.\n> \n> I would venture that the configure-time-default for -C should be\n> ${prefixdir}/etc if configure is not told differently, while the\n> packagers would probably set it to /etc/postgresql/ (ie, the\n> config files should live in a subdirectory that can be owned by\n> postgres user). I'm not wedded to that though.\n> \n> Another interesting question is whether the installed-by-default\n> postgresql.conf should specify a datadir value, and if so what.\n> If initdb installs it, it can and probably should insert the actual\n> datadir location the user gave to initdb into the file. But should\n> initdb install any config files at all anymore? I'm leaning to the\n> thought that initdb should store default config files into $PGDATA\n> same as it ever did, and then it's up to the user (or package install\n> scripts) to move them to the desired -C directory if appropriate.\n> Or I suppose we could add a -C parameter to initdb to tell it where to\n> put 'em.\n> \n> Comments?\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 14 Feb 2003 07:32:31 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "In reference to determining what port postgres or any program is listening on\nOn my Redhat Linux machines\nnetstat --inet -nlp\nwhen run as root\nproduces a nice list of all programs listening on the network with IP and port \nnumber the process is listening on, the name of the process and the pid.\n\nThe environment used to start each of these postmasters can be found at\ncat /proc/${POSTMASTER-PID}/environ | tr \"\\000\" \"\\n\"\n\nI'm not arguing one way or the other on your issue, just hope these tips make \nthe \"black magic\" a little easier to use.\n\nOn Friday 14 February 2003 04:58 am, Kevin Brown wrote:\n> Now let's repeat that scenario, except that instead of seeing one\n> postmaster process, you see five. And they all say\n> \"/usr/bin/postmaster\" in the \"ps\" listing. No arguments to clue you\n> in or anything, as before. You might be able to figure out where one\n> of them is going by looking at /etc/postgresql, but what about the\n> rest? Now you're stuck unless you want to do a \"find\" (time consuming\n> and I/O intensive -- a good way to slow the production database down a\n> bit), or you're knowledgeable enough to use 'lsof' or black magic like\n> digging into kernel memory to figure out where the config files and\n> data directories are, or you have enough knowledge to pore through the\n> startup scripts and understand what they're doing.\n\n", "msg_date": "Fri, 14 Feb 2003 06:32:37 -0600", "msg_from": "David Walker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [MLIST] Re: location of the configuration files" }, { "msg_contents": "On Fri, 2003-02-14 at 12:17, Bruce Momjian wrote:\n> If you want ps to display the data dir, you should use -D. Remember, it\n> is mostly important for multiple postmaster, so if you are doing that,\n> just use -D, but don't prevent single-postmaster folks from using\n> PGDATA.\n\nCould not the ps line be rewritten to show this, as the backend's ps\nlines are rewritten?\n\n-- \nOliver Elphick [email protected]\nIsle of Wight, UK http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"God be merciful unto us, and bless us; and cause his \n face to shine upon us.\" Psalms 67:1 \n\n", "msg_date": "14 Feb 2003 13:11:58 +0000", "msg_from": "Oliver Elphick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "\nI am not sure if it is a good idea to be mucking with it. For backend,\nwe do the entire thing, so it is clear we modified something.\n\n---------------------------------------------------------------------------\n\nOliver Elphick wrote:\n> On Fri, 2003-02-14 at 12:17, Bruce Momjian wrote:\n> > If you want ps to display the data dir, you should use -D. 
Remember, it\n> > is mostly important for multiple postmaster, so if you are doing that,\n> > just use -D, but don't prevent single-postmaster folks from using\n> > PGDATA.\n> \n> Could not the ps line be rewritten to show this, as the backend's ps\n> lines are rewritten?\n> \n> -- \n> Oliver Elphick [email protected]\n> Isle of Wight, UK http://www.lfix.co.uk/oliver\n> GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n> ========================================\n> \"God be merciful unto us, and bless us; and cause his \n> face to shine upon us.\" Psalms 67:1 \n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 14 Feb 2003 08:29:29 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Bruce Momjian wrote:\n> If you want ps to display the data dir, you should use -D. Remember, it\n> is mostly important for multiple postmaster, so if you are doing that,\n> just use -D, but don't prevent single-postmaster folks from using\n> PGDATA.\n\nPerhaps the best compromise would be to change pg_ctl so that it uses\n-D explicitly when invoking postmaster. That's an easy change.\n\nCould you describe how you and other developers use PGDATA? I'm quite\ninterested in knowing why there seems to be so much resistance to\nremoving the \"potential_DataDir = getenv(\"PGDATA\");\" line from\npostmaster.c.\n\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Fri, 14 Feb 2003 05:41:48 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Bruce Momjian wrote:\n> If you want ps to display the data dir, you should use -D. Remember, it\n> is mostly important for multiple postmaster, so if you are doing that,\n> just use -D, but don't prevent single-postmaster folks from using\n> PGDATA.\n\nPerhaps another reasonable approach would be to put an #ifdef/#endif\naround the \"potential_DataDir = getenv(\"PGDATA\");\" line in postmater.c\nand create a configure option to enable it. That way you guys get the\nbehavior you want for testing but production builds could disable it\nif that's viewed as desirable. You'd want to make the error message\nthat's produced when no data directory is specified depend on the same\n#ifdef variable, of course.\n\nThen the group would get to fight it out over whether the configure\ndefault should be \"enable\" or \"disable\". :-)\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Fri, 14 Feb 2003 05:47:37 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Kevin Brown wrote:\n> Bruce Momjian wrote:\n> > If you want ps to display the data dir, you should use -D. Remember, it\n> > is mostly important for multiple postmaster, so if you are doing that,\n> > just use -D, but don't prevent single-postmaster folks from using\n> > PGDATA.\n> \n> Perhaps the best compromise would be to change pg_ctl so that it uses\n> -D explicitly when invoking postmaster. That's an easy change.\n> \n> Could you describe how you and other developers use PGDATA? 
I'm quite\n> interested in knowing why there seems to be so much resistance to\n> removing the \"potential_DataDir = getenv(\"PGDATA\");\" line from\n> postmaster.c.\n\nI just set PGDATA in my login and I don't have to deal with it again.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 14 Feb 2003 09:00:44 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "\n> > If you are interested in reading a contrary position, you can read\n> > Berstein's arguments for his recommended way to install services at:\n> > http://cr.yp.to/unix.html\n\nBut since DJB is a class-A monomaniac, he may not be the best person to\nlisten to. /var/qmail/control for qmail configuration files? Yeah, good\none, DJB.\n\n-- \nMartin Coxall <[email protected]>\n\n", "msg_date": "14 Feb 2003 14:07:09 +0000", "msg_from": "Martin Coxall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "\n> Generally things that live in /etc are owned and operated by the OS. \n> Postgresql, by it's definition is a userspace program, not an OS owned \n> one.\n\nPartially true. The FHS specifies that the /etc top layer is for system-own3d \nstuff, but the subdirectories off it are explicitly used for user space programs\nand, well, everything. (/etc/apache, /etc/postgres, /etc/tomcat3, /etc/tomcat4...)\n\nMartin Coxall\n\n", "msg_date": "14 Feb 2003 14:07:11 +0000", "msg_from": "Martin Coxall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Thu, 2003-02-13 at 20:28, Steve Crawford wrote:\n> I don't see why we can't keep everyone happy and let the users choose the \n> setup they want. To wit, make the following, probably simple, changes:\n> \n> 1) Have postgresql default to using /etc/postgresql.conf\n\n/etc/postgres/postgresql.conf, if we want to be proper FHS-bitches.\n\n> 2) Add a setting in postgresql.conf specifying the data directory\n> 3) Change the meaning of -D to mean \"use this config file\"\n> 4) In the absence of a specified data directory in postgresql.conf, use the \n> location of the postgresql.conf file as the data directory\n\nShouldn't it in that case default to, say /var/lib/postgres?\n\n-- \nMartin Coxall <[email protected]>\n\n", "msg_date": "14 Feb 2003 14:07:17 +0000", "msg_from": "Martin Coxall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Bruce Momjian wrote:\n> I just set PGDATA in my login and I don't have to deal with it\n> again.\n\nHmm...you don't use pg_ctl to start/stop/whatever the database? You\ninvoke the postmaster directly (I can easily see that you would, just\nasking if you do)?\n\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Fri, 14 Feb 2003 06:12:38 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Kevin Brown wrote:\n> Bruce Momjian wrote:\n> > I just set PGDATA in my login and I don't have to deal with it\n> > again.\n> \n> Hmm...you don't use pg_ctl to start/stop/whatever the database? 
You\n> invoke the postmaster directly (I can easily see that you would, just\n> asking if you do)?\n\nI can use either to start/stop it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 14 Feb 2003 09:15:58 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Bruce Momjian wrote:\n> I just set PGDATA in my login and I don't have to deal with it\n> again.\n\nDuh....I just realized a reason you guys might care about this so\nmuch.\n\nIt's because you want to be able to start the postmaster from within a\ndebugger (or profiler, or whatever), and you don't want to have to\nmess with command line options from there, right?\n\n\nSounds like fixing pg_ctl to use -D explicitly when invoking the\npostmaster is the right change to make here, since that's probably how\nthe majority of the production shops are going to be starting the\ndatabase anyway. Takes care of the majority of the \"visibility\"\nproblem and leaves PGDATA intact. Thoughts?\n\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Fri, 14 Feb 2003 06:19:59 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On 14 Feb 2003, Martin Coxall wrote:\n\n>\n> > > If you are interested in reading a contrary position, you can read\n> > > Berstein's arguments for his recommended way to install services at:\n> > > http://cr.yp.to/unix.html\n>\n> But since DJB is a class-A monomaniac, he may not be the best person to\n> listen to. /var/qmail/control for qmail configuration files? Yeah, good\n> one, DJB.\n\nI'm guessing that rather than reading it the above mentioned link you\nchose to waste our time with this instead. Good one, MC.\n\nVince.\n-- \n Fast, inexpensive internet service 56k and beyond! http://www.pop4.net/\n http://www.meanstreamradio.com http://www.unknown-artists.com\n Internet radio: It's not file sharing, it's just radio.\n\n", "msg_date": "Fri, 14 Feb 2003 09:21:16 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "\nGive it up. As long as we have -D, we will allow PGDATA. If you don't\nwant to use it, don't use it.\n\n---------------------------------------------------------------------------\n\nKevin Brown wrote:\n> Bruce Momjian wrote:\n> > I just set PGDATA in my login and I don't have to deal with it\n> > again.\n> \n> Duh....I just realized a reason you guys might care about this so\n> much.\n> \n> It's because you want to be able to start the postmaster from within a\n> debugger (or profiler, or whatever), and you don't want to have to\n> mess with command line options from there, right?\n> \n> \n> Sounds like fixing pg_ctl to use -D explicitly when invoking the\n> postmaster is the right change to make here, since that's probably how\n> the majority of the production shops are going to be starting the\n> database anyway. Takes care of the majority of the \"visibility\"\n> problem and leaves PGDATA intact. 
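(A rough sketch of the invocation difference being proposed here — the paths are invented for the example, the postmaster binary is assumed to be on the PATH, and this only illustrates the idea, not the behaviour of any shipped pg_ctl:)

    # Current style: pg_ctl exports PGDATA and starts the postmaster bare,
    # so "ps" shows nothing about which cluster this postmaster serves.
    PGDATA=/var/lib/pgsql/data
    export PGDATA
    postmaster >server.log 2>&1 &

    # Proposed style: the same directory goes on the command line instead,
    # so it is visible in the "ps" listing.
    postmaster -D /var/lib/pgsql/data >server.log 2>&1 &

Either way the postmaster serves the same data directory; the only difference is what an administrator can see after the fact.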
Thoughts?\n> \n> \n> \n> -- \n> Kevin Brown\t\t\t\t\t [email protected]\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 14 Feb 2003 09:22:26 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "\nMy point is that folks with multiple postmasters may not want to use\nPGDATA, but for folks who have single postmasters, it makes things\neasier and less error-prone.\n\n---------------------------------------------------------------------------\n\nBruce Momjian wrote:\n> \n> Give it up. As long as we have -D, we will allow PGDATA. If you don't\n> want to use it, don't use it.\n> \n> ---------------------------------------------------------------------------\n> \n> Kevin Brown wrote:\n> > Bruce Momjian wrote:\n> > > I just set PGDATA in my login and I don't have to deal with it\n> > > again.\n> > \n> > Duh....I just realized a reason you guys might care about this so\n> > much.\n> > \n> > It's because you want to be able to start the postmaster from within a\n> > debugger (or profiler, or whatever), and you don't want to have to\n> > mess with command line options from there, right?\n> > \n> > \n> > Sounds like fixing pg_ctl to use -D explicitly when invoking the\n> > postmaster is the right change to make here, since that's probably how\n> > the majority of the production shops are going to be starting the\n> > database anyway. Takes care of the majority of the \"visibility\"\n> > problem and leaves PGDATA intact. Thoughts?\n> > \n> > \n> > \n> > -- \n> > Kevin Brown\t\t\t\t\t [email protected]\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to [email protected])\n> > \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 14 Feb 2003 09:29:34 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Fri, 2003-02-14 at 14:21, Vince Vielhaber wrote:\n> On 14 Feb 2003, Martin Coxall wrote:\n> \n> >\n> > > > If you are interested in reading a contrary position, you can read\n> > > > Berstein's arguments for his recommended way to install services at:\n> > > > http://cr.yp.to/unix.html\n> >\n> > But since DJB is a class-A monomaniac, he may not be the best person to\n> > listen to. /var/qmail/control for qmail configuration files? 
Yeah, good\n> > one, DJB.\n> \n> I'm guessing that rather than reading it the above mentioned link you\n> chose to waste our time with this instead. Good one, MC.\n\nYeah, I've read it several times, and have often linked to it as an\nexample of why one should be wary of DJB's software. It seems to me that\nsince DJB doesn't follow his own advice regarding the filesystem\nhierarchy (see both qmail and djbdns), it'd be odd for him to expect\nanyone else to. *Especially* seing as he's a bit mental. (\"I'm not going\nto take this any more. I demand cross-platform compatibility!\")\n\n-- \nMartin Coxall <[email protected]>\n\n", "msg_date": "14 Feb 2003 14:33:53 +0000", "msg_from": "Martin Coxall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Bruce Momjian wrote:\n> OK, here is an updated proposal. I think we have decided:\n> \n> \tMoving postmaster.pid and postmaster.opts isn't worth it.\n> \n> \tWe don't want per-file GUC variables, but assume it is in\n> \tthe same config directory as postgresql.conf. I don't\n> \tsee any valid reason they would want to put them somewhere\n> \tdifferent than postgresql.conf.\n> \n> So, we add data_dir to postgresql.conf, and add -C/PGCONFIG to\n> postmaster.\n\nAgreed. One additional thing: when pg_ctl invokes the postmaster, it\nshould explicitly specify -C on the postmaster command line, and if it\ndoesn't find a data_dir in $PGCONFIG/postgresql.conf then it should\nexplicitly specify a -D as well. Pg_ctl is going to have to be\nmodified to take a -C argument anyway, so we may as well go all the\nway to do the right thing here.\n\nThis way, people who start the database using the standard tools we\nsupply will know exactly what's going on when they get a \"ps\" listing.\n\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Fri, 14 Feb 2003 06:35:54 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Give it up. As long as we have -D, we will allow PGDATA. If you don't\n> want to use it, don't use it.\n\nAgreed.\n\nI'm not sure I see how this diminishes the argument for fixing pg_ctl\nso that it passes an explicit -D option to the postmaster when\ninvoking it...\n\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Fri, 14 Feb 2003 06:40:11 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Kevin Brown wrote:\n> Bruce Momjian wrote:\n> > OK, here is an updated proposal. I think we have decided:\n> > \n> > \tMoving postmaster.pid and postmaster.opts isn't worth it.\n> > \n> > \tWe don't want per-file GUC variables, but assume it is in\n> > \tthe same config directory as postgresql.conf. I don't\n> > \tsee any valid reason they would want to put them somewhere\n> > \tdifferent than postgresql.conf.\n> > \n> > So, we add data_dir to postgresql.conf, and add -C/PGCONFIG to\n> > postmaster.\n> \n> Agreed. One additional thing: when pg_ctl invokes the postmaster, it\n> should explicitly specify -C on the postmaster command line, and if it\n> doesn't find a data_dir in $PGCONFIG/postgresql.conf then it should\n> explicitly specify a -D as well. 
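(The -C switch and the data_dir parameter are still only proposals at this point; purely as an illustration of the lookup pg_ctl would have to do under that proposal — with the config location, the quoting of data_dir, and the fallback all assumed — it might look roughly like:)

    PGCONFIG=/etc/postgresql        # hypothetical config directory

    # try to read a data_dir = '<path>' line from the proposed config file
    datadir=`grep '^data_dir' $PGCONFIG/postgresql.conf 2>/dev/null | sed "s/.*'\(.*\)'.*/\1/"`

    if [ -n "$datadir" ]; then
        postmaster -C $PGCONFIG &               # config file names the data directory
    else
        postmaster -C $PGCONFIG -D "$PGDATA" &  # otherwise make the data directory explicit
    fi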
Pg_ctl is going to have to be\n> modified to take a -C argument anyway, so we may as well go all the\n> way to do the right thing here.\n> \n> This way, people who start the database using the standard tools we\n> supply will know exactly what's going on when they get a \"ps\" listing.\n\nNo. If you want ps to display, don't use environment variables. Many\ndon't care --- especially those with only one postmaster.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 14 Feb 2003 09:40:27 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Bruce Momjian wrote:\n> > This way, people who start the database using the standard tools we\n> > supply will know exactly what's going on when they get a \"ps\" listing.\n> \n> No. If you want ps to display, don't use environment variables. Many\n> don't care --- especially those with only one postmaster.\n\nYou know that the code in pg_ctl doesn't send an explicit -D to the\npostmaster even if pg_ctl itself is invoked with a -D argument, right?\nThe only way to make pg_ctl do that is by using the \"-o\" option.\n\nA typical vendor-supplied install is going to invoke pg_ctl to do the\ndirty work. That's why I'm focusing on pg_ctl.\n\nI completely understand your need for keeping PGDATA in postmaster. I\ndon't understand why pg_ctl *shouldn't* be changed to invoke\npostmaster with an explicit -D option. It might be desirable for ps\nto not show any arguments to postmaster in some circumstances (I have\nno idea what those would be), but why in the world would you want that\nto be the *default*? Why would we want the default behavior to make\nthings harder on administrators and not easier?\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Fri, 14 Feb 2003 06:58:49 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Kevin Brown <[email protected]> writes:\n> Tom Lane wrote:\n>> The other side of that coin is, what's the good reason to remove it?\n>> There's a long way between \"I don't want my setup to depend on PGDATA\"\n>> and \"I don't think your setup should be allowed to depend on PGDATA\".\n>> If you don't want to use it, then don't use it. Why do you need to\n>> tell me how I'm allowed to run my installation?\n\n> I'm not talking about getting rid of ALL dependency on PGDATA in our\n> entire distribution, only postmaster's.\n\nWe're obviously talking past each other. You are arguing that under\ncircumstances X, Y, or Z, depending on a PGDATA setting is a bad idea.\nYou are then drawing the conclusion that I should not be allowed to\ndepend on PGDATA, whether or not I care about X, Y, or Z.\n\nI am happy to design an arrangement that allows you not to depend on\nPGDATA if you don't want to. But I don't see why you need to break\nmy configuration procedures in order to fix yours. 
As I outlined last\nnight, it's possible to do what you want without breaking backwards\ncompatibility for those that like PGDATA.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 14 Feb 2003 10:02:02 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files " }, { "msg_contents": "On 14 Feb 2003, Martin Coxall wrote:\n\n> On Fri, 2003-02-14 at 14:21, Vince Vielhaber wrote:\n> > On 14 Feb 2003, Martin Coxall wrote:\n> >\n> > >\n> > > > > If you are interested in reading a contrary position, you can read\n> > > > > Berstein's arguments for his recommended way to install services at:\n> > > > > http://cr.yp.to/unix.html\n> > >\n> > > But since DJB is a class-A monomaniac, he may not be the best person to\n> > > listen to. /var/qmail/control for qmail configuration files? Yeah, good\n> > > one, DJB.\n> >\n> > I'm guessing that rather than reading it the above mentioned link you\n> > chose to waste our time with this instead. Good one, MC.\n>\n> Yeah, I've read it several times, and have often linked to it as an\n> example of why one should be wary of DJB's software. It seems to me that\n> since DJB doesn't follow his own advice regarding the filesystem\n> hierarchy (see both qmail and djbdns), it'd be odd for him to expect\n> anyone else to. *Especially* seing as he's a bit mental. (\"I'm not going\n> to take this any more. I demand cross-platform compatibility!\")\n\nI seriously doubt your ability to judge anyone's mental stability.\nI can also see that you prefer cross-platform INcompatibility. Your\nposition and mindset are now crystal clear.\n\nVince.\n-- \n Fast, inexpensive internet service 56k and beyond! http://www.pop4.net/\n http://www.meanstreamradio.com http://www.unknown-artists.com\n Internet radio: It's not file sharing, it's just radio.\n\n", "msg_date": "Fri, 14 Feb 2003 10:07:32 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Tom Lane wrote:\n> I am happy to design an arrangement that allows you not to depend on\n> PGDATA if you don't want to. But I don't see why you need to break\n> my configuration procedures in order to fix yours. As I outlined last\n> night, it's possible to do what you want without breaking backwards\n> compatibility for those that like PGDATA.\n\nYes, I agree. I hadn't really thought of all the possible benefits of\nPGDATA. Sorry. :-(\n\nWould you agree that it would be a beneficial change to have pg_ctl\npass explicit arguments to postmaster? It would go a long way towards\neliminating most of the situations I described.\n\nA warning in the documentation about the consequences of using PGDATA\nmight not be a bad idea, either...\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Fri, 14 Feb 2003 07:10:03 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> So, we add data_dir to postgresql.conf, and add -C/PGCONFIG to\n> postmaster.\n\nWait one second. You are blithely throwing in a PGCONFIG variable\nwithout any detailed proposal of exactly how it will work. Does\nthat override a PGDATA environment variable? 
How do they interact?\n\nAlso, please note Kevin Brown's nearby arguments against using PGDATA\nat all, which surely apply with equal force to a PGCONFIG variable.\nNow, I don't buy that Kevin's arguments are enough reason to break\nbackwards compatibility by removing PGDATA --- but I think they are\nenough reason not to introduce a new environment variable. PGCONFIG\nwouldn't offer any backwards-compatibility value, and that tilts the\nscales against it.\n\n> Regarding Tom's idea of replacing data_dir with a full path during\n> initdb, I think we are better having it be relative to the config\n> directory, that way if they move pgsql/, the system still works.\n\nGood thought, but you're assuming that initdb knows where the config\nfiles will eventually live. If we do that, then moving the config\nfiles breaks the installation. I think it will be fairly common to\nlet initdb drop its proposed config files into $PGDATA, and then\nmanually place them where they should go (or even more likely,\nmanually merge them with a prior version). Probably better to force\ndatadir to be an absolute path in the config file. (In fact, on safety\ngrounds I'd argue in favor of rejecting a datadir value taken from the\nconfig file that wasn't absolute.)\n\n> I also think we should start telling people to use PGCONFIG rather than\n> PGDATA. Then, in 7.5, we move the default config file location to\n> pgsql/etc, and tell folks to point there rather than /data.\n\nI agree with none of this. This is not improvement, this is only change\nfor the sake of change. The packagers will do what they want to do\n(and are already doing, mostly) regardless.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 14 Feb 2003 10:20:56 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files " }, { "msg_contents": "Oliver Elphick <[email protected]> writes:\n> On Fri, 2003-02-14 at 12:17, Bruce Momjian wrote:\n>> If you want ps to display the data dir, you should use -D. Remember, it\n>> is mostly important for multiple postmaster, so if you are doing that,\n>> just use -D, but don't prevent single-postmaster folks from using\n>> PGDATA.\n\n> Could not the ps line be rewritten to show this, as the backend's ps\n> lines are rewritten?\n\nI for one would rather it didn't do that. I already set my postmaster\ncommand lines the way I want 'em, and I don't want the code overriding\nthat. (I prefer to use explicit -p arguments to distinguish the various\npostmasters I have running --- shorter and easier to read than explicit\n-D would be. At least for me.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 14 Feb 2003 10:27:23 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files " }, { "msg_contents": "Kevin Brown <[email protected]> writes:\n> I'm quite interested in knowing why there seems to be so much resistance to\n> removing the \"potential_DataDir = getenv(\"PGDATA\");\" line from\n> postmaster.c.\n\nBackwards compatibility. Also, you still haven't explained why\n\"I don't want to use PGDATA\" should translate to \"no one should\nbe allowed to use PGDATA\". 
If you don't set PGDATA, what problem\nis there for you in that line being there?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 14 Feb 2003 10:30:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files " }, { "msg_contents": "Martin Coxall <[email protected]> writes:\n> Partially true. The FHS specifies that the /etc top layer is for system-own3d\n> stuff, but the subdirectories off it are explicitly used for user space programs\n> and, well, everything. (/etc/apache, /etc/postgres, /etc/tomcat3,\n> /etc/tomcat4...)\n\nFHS or no FHS, I would think that the preferred arrangement would be to\nkeep Postgres' config files in a postgres-owned subdirectory, not\ndirectly in /etc. That way you need not be root to edit them. (My idea\nof an editor, Emacs, always wants to write a backup file, so I dislike\nhaving to edit files that live in directories I can't write.)\n\nHere's a pretty topic for a flamewar: should it be /etc/postgres/ or\n/etc/postgresql/ ?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 14 Feb 2003 10:35:41 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files " }, { "msg_contents": "On Thu, 2003-02-13 at 19:22, Adam Haberlach wrote:\n> On Thu, Feb 13, 2003 at 05:59:17PM -0500, Robert Treat wrote:\n> > On Thu, 2003-02-13 at 15:08, mlw wrote:\n> > > Stephan Szabo wrote:\n> > > \n> > > On Thu, 13 Feb 2003, mlw wrote:\n> > Personally I think a postgresql installation is much more like an apache\n> > installation, which generally contains all of the files (data and\n> > config) under /usr/local/apache. Maybe someone can dig more to see if\n> > that system is more appropriate a comparison than something like bind.\n> \n> \tI think you are making a pretty uninformed, if not just plain wrong \n> generalization. I've run exactly one system with apache configuration \n> files in /usr/local/apache, and even then, the data was not there.\n\nUh... the last time I built apache from source, it stuck everything\nunder /usr/local/apache. It uses a conf directory for the config files,\nand htdocs for the \"data\" files... That is it's default configuration.\n\n<snip stories of all the different ways people run apache>\n\nYou know, this is why I actually suggested looking closer at apache. By\ndefault, everything is crammed in one directory, but if you want to, you\ncan configure it \"six different ways to sunday\". That seems to be a big\nplus IMO\n\n> \n> What does this mean?\n> \n> People will put things in different places, and there are typically\n> very good reasons for this. This is ESPECIALLY true when one wants to\n> have configuration files, at least the base ones in a common place such\n> as /etc or /usr/local/etc in order to make backup of configuration easy\n> and clean, while leaving data somewhere else for performance or magnitude\n> of partition reasons. It just makes sense to ME to have postgresql.conf\n> reside in /etc, yet put my data in /var/data/postgresql, yet retain the\n> option to put my data in /raid/data/postgresql at a later date, when the\n> new hardware comes in.\n\nIs anyone arguing against this? I'm certainly not. But maybe my needs\nare more varied than yours. On my local development box, I run multiple\nversions of apache, compiled with different versions of php. It really\nhelps to keep all of apache's stuff centralized, and using things like\nrpms actually overly complicates this. 
Now sure, that's a development\nmachine, but on the phppgadmin demo server, which is essentially a\nproduction system, I run three different versions of postgresql. In\nfact, I need to upgrade two of those (to 7.2.4 and 7.3.2), I shudder to\nthink about doing that if postgresql forced me to use the /etc/\ndirectory for all of my config files. Now sure, this probably isn't\ntypical use, but I would say that when it comes time to upgrade major\nversions, unless you running an operation where you can have large\namounts of downtime, postgresql needs to have the ability to have\nmultiple versions install that don't conflict with each other, and it\nneeds to do this easily. The upgrade process is hard enough already.\n\n<snip> \n> However, this seems, to me, to be a very small addition that has some real-world\n> (and yes, we need to start paying attention to the real world) advantages.\n> \n> And finally, don't go telling me that I'm wrong to put my data and config files\n> where I am. You can offer advice, but I'm probably going to ignore it because\n> I like where they are and don't need to explain why.\n> \n\nHave I wronged you in some former life? I've very little concern for\nwhere you put your data files, and have no idea why you'd think I'd\ncriticize your setup. \n\nRobert Treat\n\n", "msg_date": "14 Feb 2003 10:38:27 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> My point is that folks with multiple postmasters may not want to use\n> PGDATA, but for folks who have single postmasters, it makes things\n> easier and less error-prone.\n\nActually, for multi postmasters too. I have little shell-environment\nconfig files that switch my entire world view between different source\ntrees and installation trees, for example this one sets me up to mess\nwith the 7.2 branch:\n\nSTDPATH=${STDPATH:-$PATH}\nSTDMANPATH=${STDMANPATH:-$MANPATH}\n\nPGSRCROOT=$HOME/REL7_2/pgsql\nPGINSTROOT=$HOME/version72\nPATH=$PGINSTROOT/bin:/opt/perl5.6.1/bin:$STDPATH\nMANPATH=$PGINSTROOT/man:$STDMANPATH\nPGLIB=$PGINSTROOT/lib\nPGDATA=$PGINSTROOT/data\nPMOPTIONS=\"-p 5472 -i -F\"\nPMLOGFILE=server72.log\n\nexport PGSRCROOT PGINSTROOT PATH MANPATH PGLIB PGDATA STDPATH STDMANPATH\nexport PMOPTIONS PMLOGFILE\n\nAfter sourcing one of these, I can use pg_ctl as well as a half dozen\nother convenient little scripts that do things like remake and reinstall\nthe backend:\n\n#!/bin/sh\n\npg_ctl -w stop\n\ncd $PGSRCROOT/src/backend\n\nmake install-bin\n\nstartpg\n\nor this one that fires up gdb on a crashed backend:\n\n#!/bin/sh\n\n# Usage: gdbcore\n\ncd $HOME\n\nCORES=`find $PGDATA/base -name core -type f -print`\n\nif [ x\"$CORES\" != x\"\" ]\nthen\n ls -l $CORES\nfi\n\nif [ `echo \"$CORES\" | wc -w` -eq 1 ]\nthen\n exec gdb $PGINSTROOT/bin/postgres \"$CORES\"\nelse\n exec gdb $PGINSTROOT/bin/postgres\nfi\n\nThis is vastly less error-prone than keeping track of the various\nrelated elements in my head.\n\nNow, it's certainly true that I could still make this work if I had\nto explicitly say -D $PGDATA to the postmaster. But that would clutter\nmy ps display. 
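(For anyone trying to picture that workflow, a minimal session using an environment file of the kind described above — the file name is invented, the port and options come from the file itself, and template1 is simply a database that always exists:)

    . ~/env/rel72.env                       # sets PATH, PGDATA, PMOPTIONS, PMLOGFILE
    pg_ctl -w -o "$PMOPTIONS" -l $PMLOGFILE start
    psql -p 5472 template1                  # talk to this particular postmaster
    pg_ctl -w stop                          # PGDATA tells pg_ctl which cluster to stop

After the one-time "source", every later command keeps working against the right tree, which is exactly the convenience being defended here.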
I am happy with -p as the ps indicator of which\npostmaster is which; I don't want more stuff in there.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 14 Feb 2003 10:51:16 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files " }, { "msg_contents": "On Friday 14 Feb 2003 9:05 pm, you wrote:\n> Martin Coxall <[email protected]> writes:\n> Here's a pretty topic for a flamewar: should it be /etc/postgres/ or\n> /etc/postgresql/ ?\n\nI vote for /etc/pgsql. Keeping in line of unix philosophy of cryptic and short \nnames. Who wants a descriptive names anyway..:-)\n\nSeriously, the traffic on last three days ahd very high noise ratio. \nEspecially the whole discussion of PGDATA stuff fails to register as \nsignificant IMO. Right now, I can do things the way I want to do and I guess \nit is pretty much same with everyone else. Is it last topic left to improve?\n\nKeep it simple and on tpoic guys. This is hackers. Keep it low volume \notherwise, two years down the lines, archives will be unsearchable..\n\n Shridhar\n", "msg_date": "Fri, 14 Feb 2003 21:32:48 +0530", "msg_from": "\"Shridhar Daithankar<[email protected]>\"\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Thu, 13 Feb 2003, mlw wrote:\n\n> \n> \n> scott.marlowe wrote:\n> \n> >>These are not issues at all. You could put the configuration file \n> >>anywhere, just as you can for any UNIX service.\n> >>\n> >>postmaster --config=/home/myhome/mydb.conf\n> >>\n> >>I deal with a number of PG databases on a number of sites, and it is a \n> >>real pain in the ass to get to a PG box and hunt around for data \n> >>directory so as to be able to administer the system. What's really \n> >>annoying is when you have to find the data directory when someone else \n> >>set up the system.\n> >> \n> >>\n> >\n> >Really? I would think it's easier to do this:\n> >\n> >su - pgsuper\n> >cd $PGDATA\n> >pwd\n> >\n> >Than to try to figure out what someone entered when they ran ./configure \n> >--config=...\n> > \n> >\n> Why do you think PGDATA would be set for root?\n\nDid you not notice the \"su - pgsuper\" line above? You know, the one where \nyou become the account that runs that instance of the database. Again, I \nask you, isn't that easier than trying to find out what someone typed when \nthey typed ./configure --config=?\n\n> >>Configuring postgresql via a configuration file which specifies all the \n> >>data, i.e. data directory, name of other configuration files, etc. is \n> >>the right way to do it. Even if you have reasons against it, even if you \n> >>think it is a bad idea, a bad standard is almost always a better \n> >>solution than an arcane work of perfection.\n> >> \n> >>\n> >\n> >Wrong, I strongly disagree with this sentament. Conformity to standards \n> >for simple conformity's sake is as wrong as sticking to the old way \n> >because it's what we're all comfy with. \n> >\n> It isn't conformity for conformitys sake. It is following an established \n> practice, like driving on the same side of the road or stopping at red \n> lights.\n\nBut this isn't the same thing at all. Apache, when built from a tar ball, \ngoes into /usr/local/apache/ and ALL it's configuration files are there. \nWhen installed as a package, my OS manufacturer decides where that goes. \nThose are the two \"standard\" ways of doing things. 
I like that postgresql \ninstalls into the /usr/local/pgsql directory from a tar ball. I like the \nfact that it uses $PGDATA to tell it where the cluster is, so that all my \nscripts, like pg_ctl, just know where it is without a -D switch each time.\n\n\n", "msg_date": "Fri, 14 Feb 2003 09:14:52 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Fri, 2003-02-14 at 15:35, Tom Lane wrote:\n> Here's a pretty topic for a flamewar: should it be /etc/postgres/ or\n> /etc/postgresql/ ?\n\nIt should be configurable!\n\nDebian uses /etc/postgresql, if you want to stick to what quite a lot of\npeople are familiar with.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight, UK http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"God be merciful unto us, and bless us; and cause his \n face to shine upon us.\" Psalms 67:1 \n\n", "msg_date": "14 Feb 2003 16:44:49 +0000", "msg_from": "Oliver Elphick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Fri, 14 Feb 2003, Curt Sampson wrote:\n\n> On Thu, 13 Feb 2003, scott.marlowe wrote:\n> \n> > If I do a .tar.gz install of apache, I get /usr/local/apache/conf, which\n> > is not the standard way you're listing.\n> \n> I'm going to stay out of this argument from now on, but this struck a sore\n> point.\n> \n> /usr is designed to be a filesystem that can be shared. Is the stuff in\n> /usr/local/apache/conf really supposed to be shared amongst all machines\n> of that architecture on your site that run apache?\n\nInteresting. I've always viewed usr EXCEPT for local this way. In \nfact, on most of my boxes I create a seperate mount point for /usr/local \nso it's easier to backup and maintain, and it doesn't fill up the /usr \ndirectory.\n\nAsking for everything in a directory with the name local in it to be \nshared is kind of counter intuitive to me.\n\n", "msg_date": "Fri, 14 Feb 2003 09:46:35 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "\n> I seriously doubt your ability to judge anyone's mental stability.\n> I can also see that you prefer cross-platform INcompatibility. Your\n> position and mindset are now crystal clear.\n\nCome now- don't take it personally. All I said is, as someone who\nwrestles daily with QMail, we should prefer the FHS over DJB's way of\ndoing things and that DJB is a little, ahem, egocentric at times.\nNeither of these things was meant as a mortal insult to you personally,\nand if I offended you I apologise.\n\nAnyway, it looks like it's all been agreed over there, anyway.\n\n-- \nMartin Coxall <[email protected]>\n\n", "msg_date": "14 Feb 2003 16:57:58 +0000", "msg_from": "Martin Coxall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > So, we add data_dir to postgresql.conf, and add -C/PGCONFIG to\n> > postmaster.\n> \n> Wait one second. You are blithely throwing in a PGCONFIG variable\n> without any detailed proposal of exactly how it will work. Does\n> that override a PGDATA environment variable? How do they interact?\n\nI am just throwing out ideas. 
I don't think we are near interaction\nissues yet.\n\nI think the big question is whether we want the default to install the\nconfigs in a separate directory, pgsql/etc, or just allow the\nspecification of a separate location. Advantages of pgsql/etc are\ninitdb-safe, and easier backups.\n\nI do think PGCONFIG would be helpful for the same reason that PGDATA is.\nHowever, there is clearly a problem of how does data_dir interact with\nPGDATA.\n\nThe big question is whether PGDATA is still our driving config variable,\nand PGCONFIG/-C is just an additional option, or whether we are moving\nin a direction where PGCONFIG/-C is going to be the driving value, and\ndata_dir is going to be read as part of that.\n\n\n> Also, please note Kevin Brown's nearby arguments against using PGDATA\n> at all, which surely apply with equal force to a PGCONFIG variable.\n> Now, I don't buy that Kevin's arguments are enough reason to break\n> backwards compatibility by removing PGDATA --- but I think they are\n> enough reason not to introduce a new environment variable. PGCONFIG\n> wouldn't offer any backwards-compatibility value, and that tilts the\n> scales against it.\n\nWeren't you just showing how you set environment variables to easily\nconfigure stuff. If you use a separate configure dir, isn't PGCONFIG\npart of that?\n\n> > Regarding Tom's idea of replacing data_dir with a full path during\n> > initdb, I think we are better having it be relative to the config\n> > directory, that way if they move pgsql/, the system still works.\n> \n> Good thought, but you're assuming that initdb knows where the config\n> files will eventually live. If we do that, then moving the config\n> files breaks the installation. I think it will be fairly common to\n> let initdb drop its proposed config files into $PGDATA, and then\n> manually place them where they should go (or even more likely,\n> manually merge them with a prior version). Probably better to force\n> datadir to be an absolute path in the config file. (In fact, on safety\n> grounds I'd argue in favor of rejecting a datadir value taken from the\n> config file that wasn't absolute.)\n\nMaybe. Not sure.\n\n> > I also think we should start telling people to use PGCONFIG rather than\n> > PGDATA. Then, in 7.5, we move the default config file location to\n> > pgsql/etc, and tell folks to point there rather than /data.\n> \n> I agree with none of this. This is not improvement, this is only change\n> for the sake of change. The packagers will do what they want to do\n> (and are already doing, mostly) regardless.\n\nWell, it is a step forward in terms of initdb-safe and easier backups.\nSeveral people said they liked that. I thought you were one of them.\n\nThis is back to the big question, who drives things in the default\ninstall, config file or pgdata.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 14 Feb 2003 12:33:42 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Bruce Momjian wrote:\n> The big question is whether PGDATA is still our driving config variable,\n> and PGCONFIG/-C is just an additional option, or whether we are moving\n> in a direction where PGCONFIG/-C is going to be the driving value, and\n> data_dir is going to be read as part of that.\n\nI'm actually leaning towards PGCONFIG + PGDATA.\n\nYeah, it may be a surprise given my previous arguments, but I can't\nhelp but think that the advantages you get with PGDATA will also exist\nfor PGCONFIG.\n\nMy previous arguments for removing PGDATA from postmaster can be dealt\nwith by fixing pg_ctl to use explicit command line directives when\ninvoking postmaster -- no changes to postmaster needed. PGCONFIG\nwould be no different in that regard.\n\n\nSorry if I seem a big gung-ho on the administrator point of view, but\nas a system administrator myself I understand and feel their pain.\n:-)\n\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Fri, 14 Feb 2003 10:09:37 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "\n\"scott.marlowe\" <[email protected]> writes:\n\n> But this isn't the same thing at all. Apache, when built from a tar ball, \n> goes into /usr/local/apache/ and ALL it's configuration files are there. \n\nTwo comments:\n\n1) Even in that case the config files go into /usr/local/apache/conf and the\n other kinds of files like data logs and cache files, all go in other\n subdirectories.\n\n2) What you describe is only true if you configure with the default\n \"--with-layout=Apache\". The naming should perhaps be a clue that this isn't\n a conventional layout. If you configure with --with-layout=GNU you get the\n conventional Unix layout in /usr/local, If you use --with-layout=RedHat you\n get the conventional layout in /usr directly which is mainly useful for\n distribution packagers.\n\nPutting stuff in a subdirectory like /usr/local/apache or /usr/local/pgsql is\nunfortunately a widespread practice. It does have some advantages over the\nconventional layout in /usr/local/{etc,bin,...} directly. But the major\ndisadvantage is that users can't run programs without adding dozens of entries\nto their paths, can't compile programs without dozens of -L and -I lines, etc.\n\nGNU autoconf script makes it pretty easy to configure packages to work either\nthough, and /usr/local is the purview of the local admin. As long as it's easy\nto configure postgres to install \"properly\" with --prefix=/usr/local it won't\nbe any more of an offender than lots of other packages like apache, kde, etc.\n\nThough I'll mention, please make it $prefix/etc not $prefix/conf. 
No need to\nbe gratuitously non-standard on an arbitrary name, and no need to pollute\n/usr/local with multiple redundant directories.\n\n-- \ngreg\n\n", "msg_date": "14 Feb 2003 13:16:26 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Thu, Feb 13, 2003 at 11:53:26 -0500,\n mlw <[email protected]> wrote:\n> \n> Where, specificaly are his arguements against a configuration file \n> methodology?\n\nI don't think he is argueing against a configuration methodology, but\nrather against the methodology being used in Unix distributions.\nIn particular he doesn't file the Linux File Standard because it\nputs the same software in different places depending on whether the\nvendor or using installed it.\n", "msg_date": "Fri, 14 Feb 2003 12:27:49 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Kevin Brown wrote:\n> Bruce Momjian wrote:\n> > The big question is whether PGDATA is still our driving config variable,\n> > and PGCONFIG/-C is just an additional option, or whether we are moving\n> > in a direction where PGCONFIG/-C is going to be the driving value, and\n> > data_dir is going to be read as part of that.\n> \n> I'm actually leaning towards PGCONFIG + PGDATA.\n> \n> Yeah, it may be a surprise given my previous arguments, but I can't\n> help but think that the advantages you get with PGDATA will also exist\n> for PGCONFIG.\n> \n> My previous arguments for removing PGDATA from postmaster can be dealt\n> with by fixing pg_ctl to use explicit command line directives when\n> invoking postmaster -- no changes to postmaster needed. PGCONFIG\n> would be no different in that regard.\n\nI see your point --- pg_ctl does a PGDATA trick when passed -D:\n\n -D)\n shift\n # pass environment into new postmaster\n PGDATA=\"$1\"\n export PGDATA\n\nIt should pass -D just like it was given.\n\n> Sorry if I seem a big gung-ho on the administrator point of view, but\n> as a system administrator myself I understand and feel their pain.\n\nMaking things easy for sysadmins is an important feature of PostgreSQL.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 14 Feb 2003 13:37:22 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Fri, Feb 14, 2003 at 02:58:49 -0800,\n Kevin Brown <[email protected]> wrote:\n> \n> Lest you think that this is an unlikely scenario, keep in mind that\n> most startup scripts, including pg_ctl, currently start the postmaster\n> without arguments and rely on PGDATA, so a shop that hasn't already\n> been bitten by this *will* be. 
Right now shops that wish to avoid the\n> trap I described have to go to *extra* lengths: they have to make\n> exactly the same kinds of changes to the scripts that I'm talking\n> about us making (putting an explicit '-D \"$PGDATA\"' where none now\n> exists) or they have to resort to tricks like renaming the postmaster\n> executable and creating a shell script in its place that will invoke\n> the (renamed) postmaster with '-D \"$PGDATA\"'.\n\nOn at least some systems ps will dump process' environment and could be\neasily used to check PGDATA.\n", "msg_date": "Fri, 14 Feb 2003 12:50:17 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Bruce Momjian wrote:\n> I see your point --- pg_ctl does a PGDATA trick when passed -D:\n> \n> -D)\n> shift\n> # pass environment into new postmaster\n> PGDATA=\"$1\"\n> export PGDATA\n> \n> It should pass -D just like it was given.\n\nYes, exactly.\n\nNow, the more interesting question in my mind is: if pg_ctl isn't\npassed -D but inherits PGDATA, should it nonetheless pass -D\nexplicitly to the postmaster? We can make it do that, and it would\nhave the benefit of making transparent what would otherwise be opaque.\n\nI'm inclined to answer \"yes\" to that question, but only because\nsomeone who *really* doesn't want the postmaster to show up with a -D\nargument in \"ps\" can start the postmaster directly without using\npg_ctl at all. Tom made a good argument for sometimes wanting to keep\nthe ps output clean, but it's not clear to me that it should\nnecessarily apply to pg_ctl.\n\nBut you guys might have a different perspective on that. :-)\n\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Fri, 14 Feb 2003 10:56:28 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Friday 14 February 2003 6:07 am, Martin Coxall wrote:\n> On Thu, 2003-02-13 at 20:28, Steve Crawford wrote:\n> > I don't see why we can't keep everyone happy and let the users choose the\n> > setup they want. To wit, make the following, probably simple, changes:\n> >\n> > 1) Have postgresql default to using /etc/postgresql.conf\n>\n> /etc/postgres/postgresql.conf, if we want to be proper FHS-bitches.\n>\n> > 2) Add a setting in postgresql.conf specifying the data directory\n> > 3) Change the meaning of -D to mean \"use this config file\"\n> > 4) In the absence of a specified data directory in postgresql.conf, use\n> > the location of the postgresql.conf file as the data directory\n>\n> Shouldn't it in that case default to, say /var/lib/postgres?\n\nIdea 4 was just a way to preserve current behaviour for those who desire. \nMoving postgresql.conf requires adding the data directory info into \npostgresql.conf or specifying it in some other way. If, in the absence of any \nspecification in postgresql.conf, postgres just looks in the same directory \nas postgresql.conf then it will be almost identical to the current setup.\n\nCheers,\nSteve\n", "msg_date": "Fri, 14 Feb 2003 11:24:19 -0800", "msg_from": "Steve Crawford <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I think the big question is whether we want the default to install the\n> configs in a separate directory, pgsql/etc, or just allow the\n> specification of a separate location. 
Advantages of pgsql/etc are\n> initdb-safe, and easier backups.\n\nI don't see why we don't just let initdb install suggested config files\ninto the new $PGDATA directory, same as it ever did. Then (as long as\nwe don't use relative paths in the config files) people can move them\nsomewhere else if they like, or not if they prefer not to. Adding more\nmechanism than that just adds complexity without buying much (except the\npossibility of initdb overwriting your old config files, which is\nexactly what I thought we wanted to avoid).\n\n> The big question is whether PGDATA is still our driving config variable,\n> and PGCONFIG/-C is just an additional option, or whether we are moving\n> in a direction where PGCONFIG/-C is going to be the driving value, and\n> data_dir is going to be read as part of that.\n\nI thought the idea was to allow both approaches. We are not moving in\nthe direction of one or the other, we are giving people a choice of how\nthey want to drive it.\n\n> Weren't you just showing how you set environment variables to easily\n> configure stuff. If you use a separate configure dir, isn't PGCONFIG\n> part of that?\n\nI'm just pointing out that there's no backward-compatibility argument\nfor PGCONFIG. It should only be put in if the people who want to use\nthe -C-is-driver approach want it. Kevin clearly doesn't ;-).\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 14 Feb 2003 15:10:13 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files " }, { "msg_contents": "Tom Lane writes:\n\n> I would favor a setup that allows a -C *directory* (not file) to be\n> specified as a postmaster parameter separately from the -D directory;\n\nA directory is not going to satisfy people.\n\n> I don't see any great value in a separate postgresql.conf parameter for\n> each secondary config file; that just means clutter to me,\n\nNot to other people.\n\n> 1. No -C switch, no -D switch, no PGDATA found in environment: seek\n> postgresql.conf in the default -C directory established at configure\n> time. Use the 'datadir' specified therein as -D. Fail if postgresql.conf\n> doesn't define a datadir value.\n\nOK.\n\n> 2. No -C switch, no -D switch, PGDATA found in environment: use $PGDATA\n> as both -C and -D.\n\nThis behavior would be pretty inconsistent. But maybe it's the best we\ncan do.\n\n> 3. No -C switch, -D switch on command line: use -D value as both -C and -D,\n> proceed as in case 2.\n\nSame as above.\n\n> 4. -C switch, no -D switch on command line: seek postgresql.conf in\n> -C directory, use the datadir it specifies.\n\nOK.\n\n> 5. -C and -D on command line: seek postgresql.conf in -C directory,\n> use -D as datadir overriding what is in postgresql.conf (this is just\n> the usual rule that command line switches override postgresql.conf).\n\nBut that usual rule seems to be in conflict with cases 2 and 3 above.\n(The usual rule is that a command-line option overrides a postgresql.conf\nparameter. The rule in 3, for example is, that a command-line option (the\nsame one!) 
overrides where postgresql.conf is in the first place.)\n\n> I would venture that the configure-time-default for -C should be\n> ${prefixdir}/etc if configure is not told differently,\n\nYeah, we already have that as --sysconfdir.\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Fri, 14 Feb 2003 21:14:30 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files " }, { "msg_contents": "Peter Eisentraut wrote:\n> Tom Lane writes:\n> \n> > I would favor a setup that allows a -C *directory* (not file) to be\n> > specified as a postmaster parameter separately from the -D directory;\n> \n> A directory is not going to satisfy people.\n\nWho is asking to put postgresql.conf, pg_hba.conf, and pg_ident.conf in\ndifferent directories? I haven't heard anyone ask for that.\n\n> > I don't see any great value in a separate postgresql.conf parameter for\n> > each secondary config file; that just means clutter to me,\n> \n> Not to other people.\n> \n> > 1. No -C switch, no -D switch, no PGDATA found in environment: seek\n> > postgresql.conf in the default -C directory established at configure\n> > time. Use the 'datadir' specified therein as -D. Fail if postgresql.conf\n> > doesn't define a datadir value.\n> \n> OK.\n> \n> > 2. No -C switch, no -D switch, PGDATA found in environment: use $PGDATA\n> > as both -C and -D.\n> \n> This behavior would be pretty inconsistent. But maybe it's the best we\n> can do.\n\nWhat happens if postgresql.conf then defines data_dir? Seems we ignore it.\n\nThis brings up the same issue of whether -C/PGCONFIG is a inferior\noption to -D/PGDATA, and whether we keep the config files in /data by\ndefault.\n\n> > 3. No -C switch, -D switch on command line: use -D value as both -C and -D,\n> > proceed as in case 2.\n> \n> Same as above.\n\n\n> \n> > 4. -C switch, no -D switch on command line: seek postgresql.conf in\n> > -C directory, use the datadir it specifies.\n> \n> OK.\n\nHere we are saying the -C doesn't override postgresql.conf as the proper\nPGDATA value. Is that what we want? We had the question above over how\na data_dir in postgresql.conf is handled.\n\n> \n> > 5. -C and -D on command line: seek postgresql.conf in -C directory,\n> > use -D as datadir overriding what is in postgresql.conf (this is just\n> > the usual rule that command line switches override postgresql.conf).\n> \n> But that usual rule seems to be in conflict with cases 2 and 3 above.\n> (The usual rule is that a command-line option overrides a postgresql.conf\n> parameter. The rule in 3, for example is, that a command-line option (the\n> same one!) overrides where postgresql.conf is in the first place.)\n\n\nYes, the big question seems to be if we are defaulting -C to be the same\nas -D, whether that is an actual specification of -D that should\noverride postgresql.conf.\n\nThis is part of the reason I don't like the -D assumes -C and stuff like\nthat.\n\nI think we need to move the config files to pgsql/etc, for backup and\ninitdb safety, and move toward having PGCONFIG/-C as the driving\nparameter. 
I think having both function equally and defaulting if the\nother is not specified is going to breed confusion.\n\nI am willing to make thing a little difficult for backward compatibility\nto do this, and I think because it is only administrators, they will\nwelcome the improvement.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 14 Feb 2003 15:19:36 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Tom Lane writes:\n>> I would favor a setup that allows a -C *directory* (not file) to be\n>> specified as a postmaster parameter separately from the -D directory;\n\n> A directory is not going to satisfy people.\n\nWhy not? Who won't it satisfy, and what's their objection?\n\nAFAICS, you can either set -C to /etc if you want your PG config files\nloose in /etc, or you can set it to /etc/postgresql/ if you want them\nin a privately-owned directory. Which other arrangements are needed?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 14 Feb 2003 15:27:53 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > I think the big question is whether we want the default to install the\n> > configs in a separate directory, pgsql/etc, or just allow the\n> > specification of a separate location. Advantages of pgsql/etc are\n> > initdb-safe, and easier backups.\n> \n> I don't see why we don't just let initdb install suggested config files\n> into the new $PGDATA directory, same as it ever did. Then (as long as\n> we don't use relative paths in the config files) people can move them\n> somewhere else if they like, or not if they prefer not to. Adding more\n> mechanism than that just adds complexity without buying much (except the\n> possibility of initdb overwriting your old config files, which is\n> exactly what I thought we wanted to avoid).\n> \n> > The big question is whether PGDATA is still our driving config variable,\n> > and PGCONFIG/-C is just an additional option, or whether we are moving\n> > in a direction where PGCONFIG/-C is going to be the driving value, and\n> > data_dir is going to be read as part of that.\n> \n> I thought the idea was to allow both approaches. We are not moving in\n> the direction of one or the other, we are giving people a choice of how\n> they want to drive it.\n\nThat's where I am unsure. Is the initdb-safe and backup advantages\nenough to start to migrate those out to data/? I need to hear\ncomments on that.\n\nOne new idea is to move the config files into data/etc. That makes it\nclear which are config files, and makes backup a little easier. It\nwould make -C more logical because you are not moving a clear directory.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 14 Feb 2003 15:53:45 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Kevin Brown wrote:\n> Bruce Momjian wrote:\n> > The big question is whether PGDATA is still our driving config variable,\n> > and PGCONFIG/-C is just an additional option, or whether we are moving\n> > in a direction where PGCONFIG/-C is going to be the driving value, and\n> > data_dir is going to be read as part of that.\n> \n> I'm actually leaning towards PGCONFIG + PGDATA.\n> \n> Yeah, it may be a surprise given my previous arguments, but I can't\n> help but think that the advantages you get with PGDATA will also exist\n> for PGCONFIG.\n> \n> My previous arguments for removing PGDATA from postmaster can be dealt\n> with by fixing pg_ctl to use explicit command line directives when\n> invoking postmaster -- no changes to postmaster needed. PGCONFIG\n> would be no different in that regard.\n\nThe following patch propogates pg_ctl -D to the postmaster as a -D flag.\nI see no other pg_ctl flags that make sense to propogate.\n\nApplied.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\nIndex: src/bin/pg_ctl/pg_ctl.sh\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/bin/pg_ctl/pg_ctl.sh,v\nretrieving revision 1.30\ndiff -c -c -r1.30 pg_ctl.sh\n*** src/bin/pg_ctl/pg_ctl.sh\t18 Oct 2002 22:05:35 -0000\t1.30\n--- src/bin/pg_ctl/pg_ctl.sh\t14 Feb 2003 22:04:56 -0000\n***************\n*** 115,120 ****\n--- 115,122 ----\n logfile=\n silence_echo=\n shutdown_mode=smart\n+ PGDATAOPTS=\"\"\n+ POSTOPTS=\"\"\n \n while [ \"$#\" -gt 0 ]\n do\n***************\n*** 129,135 ****\n \t ;;\n \t-D)\n \t shift\n! \t # pass environment into new postmaster\n \t PGDATA=\"$1\"\n \t export PGDATA\n \t ;;\n--- 131,138 ----\n \t ;;\n \t-D)\n \t shift\n! \t # we need to do this so -D datadir shows in ps display\n! \t PGDATAOPTS=\"-D $1\"\n \t PGDATA=\"$1\"\n \t export PGDATA\n \t ;;\n***************\n*** 333,344 ****\n fi\n \n if [ -n \"$logfile\" ]; then\n! \"$po_path\" ${1+\"$@\"} </dev/null >>$logfile 2>&1 &\n else\n # when starting without log file, redirect stderr to stdout, so\n # pg_ctl can be invoked with >$logfile and still have pg_ctl's\n # stderr on the terminal.\n! \"$po_path\" ${1+\"$@\"} </dev/null 2>&1 &\n fi\n \n # if had an old lockfile, check to see if we were able to start\n--- 336,347 ----\n fi\n \n if [ -n \"$logfile\" ]; then\n! \"$po_path\" ${1+\"$@\"} ${PGDATAOPTS+$PGDATAOPTS} </dev/null >>$logfile 2>&1 &\n else\n # when starting without log file, redirect stderr to stdout, so\n # pg_ctl can be invoked with >$logfile and still have pg_ctl's\n # stderr on the terminal.\n! \"$po_path\" ${1+\"$@\"} ${PGDATAOPTS+$PGDATAOPTS} </dev/null 2>&1 &\n fi\n \n # if had an old lockfile, check to see if we were able to start", "msg_date": "Fri, 14 Feb 2003 17:17:22 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "\nI don't want to over-engineer this. 
Propogating -D into postmaster\nmakes sense, but grabbing PGDATA doesn't to me.\n\n---------------------------------------------------------------------------\n\nKevin Brown wrote:\n> Bruce Momjian wrote:\n> > I see your point --- pg_ctl does a PGDATA trick when passed -D:\n> > \n> > -D)\n> > shift\n> > # pass environment into new postmaster\n> > PGDATA=\"$1\"\n> > export PGDATA\n> > \n> > It should pass -D just like it was given.\n> \n> Yes, exactly.\n> \n> Now, the more interesting question in my mind is: if pg_ctl isn't\n> passed -D but inherits PGDATA, should it nonetheless pass -D\n> explicitly to the postmaster? We can make it do that, and it would\n> have the benefit of making transparent what would otherwise be opaque.\n> \n> I'm inclined to answer \"yes\" to that question, but only because\n> someone who *really* doesn't want the postmaster to show up with a -D\n> argument in \"ps\" can start the postmaster directly without using\n> pg_ctl at all. Tom made a good argument for sometimes wanting to keep\n> the ps output clean, but it's not clear to me that it should\n> necessarily apply to pg_ctl.\n> \n> But you guys might have a different perspective on that. :-)\n> \n> \n> \n> -- \n> Kevin Brown\t\t\t\t\t [email protected]\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 14 Feb 2003 17:41:26 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Fri, 14 Feb 2003, scott.marlowe wrote:\n\n> Asking for everything in a directory with the name local in it to be\n> shared is kind of counter intuitive to me.\n\nNot really. If you install a particular program that doesn't come with\nthe OS on one machine on your site, why would you not want to install it\nseparately on all of the others?\n\nTypically, I want my favourite non-OS utilities on all machines, not\njust one. (Even if I don't use them on all machines.) Thus /usr/local is\nfor site-local stuff.\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n", "msg_date": "Sat, 15 Feb 2003 13:44:00 +0900 (JST)", "msg_from": "Curt Sampson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "\n\nRobert Treat wrote:\n\n>Seems like some are saying one of the problems with the current system\n>is it doesn't follow FHS or LSB. If those are valid reasons to change\n>the system, it seems like a change which doesn't actually address those\n>concerns would not be acceptable. (Unless those really aren't valid\n>concerns...)\n>\n> \n>\nI did not start this thread to make PostgreSQL FHS compatible, someone \nelse brought that up.\n\nAs I said somewhere else, I'm an old fashioned UNIX guy, capability \nwithout policy. The patch that I submitted for 7.3.2 will allow the user \nto configure PostgreSQL with a configuration file outside the $PGDATA \ndirectory. That's all I care about. If someone wants to get on the FHS \nbandwagon, that's fine. 
PostgreSQL should allow that ability but should \nnot require it.\n\n", "msg_date": "Sat, 15 Feb 2003 09:48:57 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "\n\nMartin Coxall wrote:\n\n>On Thu, 2003-02-13 at 20:28, Steve Crawford wrote:\n> \n>\n>>I don't see why we can't keep everyone happy and let the users choose the \n>>setup they want. To wit, make the following, probably simple, changes:\n>>\n>>1) Have postgresql default to using /etc/postgresql.conf\n>> \n>>\n>\n>/etc/postgres/postgresql.conf, if we want to be proper FHS-bitches.\n>\n> \n>\n>>2) Add a setting in postgresql.conf specifying the data directory\n>>3) Change the meaning of -D to mean \"use this config file\"\n>>4) In the absence of a specified data directory in postgresql.conf, use the \n>>location of the postgresql.conf file as the data directory\n>> \n>>\n>\n>Shouldn't it in that case default to, say /var/lib/postgres?\n>\nI would really like to push back this whole discussion to adding the \nflexibility to configure PostgreSQL as opposed to determining \na specific configuration strategy.\n\nAdding the ability is easy. Let the distros determine their strategy. \nTrying to enforce one way over another will make this continue on \nforever and will never be solved.\n\n> \n>\n\n", "msg_date": "Sat, 15 Feb 2003 09:53:23 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "mlw <[email protected]> writes:\n> I would really like to push back this whole discussion to adding the \n> flexibility to configure PostgreSQL as opposed to determining \n> a specific configuration strategy.\n\n> Adding the ability is easy. Let the distros determine their strategy. \n> Trying to enforce one way over another will make this continue on \n> forever and will never be solved.\n\nI agree that we shouldn't be in the business of dictating choices.\n\nBut it is important to examine what the plausible choices are, so that\nwe can be sure the solution we provide will accommodate all of them.\nSo I don't think this part of the thread has been useless.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 15 Feb 2003 11:00:11 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files " }, { "msg_contents": "On Friday 14 February 2003 15:10, Tom Lane wrote:\n> I don't see why we don't just let initdb install suggested config files\n> into the new $PGDATA directory, same as it ever did.\n\nOk, let me take another tack.\n\nJust exactly why does initdb need to drop any config files anywhere? We \nprovide templates; initdb can initialize the data structure. If we can by \ndefault (as part of make install) put the config file templates in \n$SYSCONFDIR (as set by ./configure), then why does initdb need to retouch \nthem? I say that having configured PostgreSQL like this: (this is for 7.2.4, \nnot 7.3.x)\n--enable-locale --with-CXX --prefix=/usr --disable-rpath --with-perl \n--enable-multibyte --with-tcl --with-odbc --enable-syslog --with-python \n--with-openssl --with-pam --with-krb5=/usr/kerberos --enable-nls \n--sysconfdir=/etc/pgsql --mandir=/usr/share/man --docdir=/usr/share/doc \n--includedir=/usr/include --datadir=/usr/share/pgsql\n\nSo, in my case, it would be preferable to me for initdb to not make a default \npostgresql.conf, pg_hba.conf, and pg_ident.conf.
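(In other words, given the configure options above, the templates would simply land at -- listing the three files just for concreteness:)\n\n    /etc/pgsql/postgresql.conf     # from --sysconfdir=/etc/pgsql\n    /etc/pgsql/pg_hba.conf\n    /etc/pgsql/pg_ident.conf\n\n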
The make install process \nshould populate sysconfdir (/etc/pgsql here) with those files.\n\nWhy does initdb even need to be involved now (I know the historical reason)?\n\nComments?\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n\n", "msg_date": "Sat, 15 Feb 2003 19:36:51 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Friday 14 February 2003 11:02, \"Shridhar Daithankar wrote:\n> Especially the whole discussion of PGDATA stuff fails to register as\n> significant IMO. Right now, I can do things the way I want to do and I\n> guess it is pretty much same with everyone else. Is it last topic left to\n> improve?\n\nIf it weren't significant to a few, then there wouldn't be the traffic. If \nthere's too much traffic, well, there are alternatives.\n\n> Keep it simple and on tpoic guys. This is hackers. Keep it low volume\n> otherwise, two years down the lines, archives will be unsearchable..\n\nThe system configuration of PostgreSQL is on topic for -hackers. IMNSHO.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n\n", "msg_date": "Sat, 15 Feb 2003 19:38:23 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Lamar Owen <[email protected]> writes:\n> Just exactly why does initdb need to drop any config files anywhere?\n\nBecause we'd like it to edit the correct datadir into the config file,\nto take just the most obvious example. There has also been a great deal\nof discussion recently about other things initdb might automatically put\ninto the config file after looking at the system environment. That's\nnot happened yet, but we'd really be restricting ourselves to say that\ninitdb can never customize the config files.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 15 Feb 2003 20:19:59 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files " }, { "msg_contents": "On Saturday 15 February 2003 20:19, Tom Lane wrote:\n> Lamar Owen <[email protected]> writes:\n> > Just exactly why does initdb need to drop any config files anywhere?\n\n> Because we'd like it to edit the correct datadir into the config file,\n> to take just the most obvious example.\n\nShouldn't we be consistent and have initdb use the datadir set in the config \nfile, which could be supplied by a ./configure switch?\n\n> There has also been a great deal\n> of discussion recently about other things initdb might automatically put\n> into the config file after looking at the system environment. That's\n> not happened yet, but we'd really be restricting ourselves to say that\n> initdb can never customize the config files.\n\nCustomize != writing the original.\n\nI'm looking at a packager point of view here. The initdb is done well after \nthe package is made, and installed. It would be ideal from this point of \nview to have things fully configured pre-initdb (and thus pre-packaging).\n\nBut I understand that this might not be ideal from a multipostmaster point of \nview. 
Surely these two can be reconciled.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n\n", "msg_date": "Sat, 15 Feb 2003 20:24:57 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Saturday 15 February 2003 09:48 am, mlw wrote:\n> Robert Treat wrote:\n> >Seems like some are saying one of the problems with the current system\n> >is it doesn't follow FHS or LSB. If those are valid reasons to change\n> >the system, it seems like a change which doesn't actually address those\n> >concerns would not be acceptable. (Unless those really aren't valid\n> >concerns...)\n>\n> I did not start this thread to make PostgreSQL FHS compatible, someone\n> else brought that up.\n>\n> As I said somewhere else, I'm an old fashioned UNIX guy, capability\n> without policy. The patch that I submitted for 7.3.2 will allow the user\n> to configure PostgreSQL with a configuration file outside the $PGDATA\n> directory. That's all I care about. If someone wants to get on the FHS\n> bandwagon, that's fine. PostgreSQL should allow that ability but should\n> not require it.\n\nIf we're going to go through the trouble to change the way things work, we \nmight as well try to get something that will allow instalation to match real \ndesired configurations out there, like FHS and LSB, or how Oliver and Lamar \nwant for packaging without symlinks. If the goal is just to get something \nthat you like, apply the patch locally and be done with it. \n\nRobert Treat\n", "msg_date": "Sat, 15 Feb 2003 20:34:16 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Lamar Owen <[email protected]> writes:\n> On Saturday 15 February 2003 20:19, Tom Lane wrote:\n>> Lamar Owen <[email protected]> writes:\n> Just exactly why does initdb need to drop any config files anywhere?\n\n>> Because we'd like it to edit the correct datadir into the config file,\n>> to take just the most obvious example.\n\n> Shouldn't we be consistent and have initdb use the datadir set in the config \n> file, which could be supplied by a ./configure switch?\n\nThat'd mean there is no way to perform an initdb into a nonstandard\nlocation without first hand-preparing a config file. I don't much care\nfor that.\n\n> I'm looking at a packager point of view here. The initdb is done well after \n> the package is made, and installed. It would be ideal from this point of \n> view to have things fully configured pre-initdb (and thus pre-packaging).\n\nThis point of view means that no post-configure knowledge can be\napplied. We might as well forget the separate initdb step altogether\nand have \"make install\" do it.\n\nI realize that from a packager's point of view, the separate initdb step\nis not very useful. But it is from my point of view.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 15 Feb 2003 21:06:20 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files " }, { "msg_contents": "On Saturday 15 February 2003 21:06, Tom Lane wrote:\n> Lamar Owen <[email protected]> writes:\n> > Shouldn't we be consistent and have initdb use the datadir set in the\n> > config file, which could be supplied by a ./configure switch?\n\n> That'd mean there is no way to perform an initdb into a nonstandard\n> location without first hand-preparing a config file. 
I don't much care\n> for that.\n\nSix of one and half-dozen of another. And that's my real point. We do things \nquite differently from many other standard services, even those which are \nbuilt from the ground up for multiple instances. Making things more \nconsistent for admins, even if it's not what we are used to or what we might \nlike (because it's familiar) should at least be thought about. I'm not \nadvocating changing just for the sake of change; but getting a new fresh look \nat our current setup can't hurt.\n\n> > I'm looking at a packager point of view here. The initdb is done well\n> > after the package is made, and installed. It would be ideal from this\n> > point of view to have things fully configured pre-initdb (and thus\n> > pre-packaging).\n\n> This point of view means that no post-configure knowledge can be\n> applied. We might as well forget the separate initdb step altogether\n> and have \"make install\" do it.\n\nI wouldn't complain. Although that isn't conducive to the multiple instance \ncase. The necessary post-configure knowledge would be in the instance \npostgresql.conf file. One place for it. But this is hypothetical; fishing \naround the waters here at this point. Realize that my own packages apply an \ninitdb automatically if an install isn't found the first time the system \ninitscript is started. It is virtually automatic. With the multiple \npostmaster support, creating a couple of files and a symlink (for the \ninitscript), and starting the new initscript symlink does all the dirty work. \nBut it could be easier.\n\n> I realize that from a packager's point of view, the separate initdb step\n> is not very useful. But it is from my point of view.\n\nWould you mind elucidating which point of view is yours? General idea of who \nelse might have the same point of view, and why you find the initdb in its \ncurrent form to be more useful than alternatives. Again, I'm fishing for \nknowledge -- if nothing else it gives me an answer to those users who send me \nnastygrams about the way things are right now.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n\n", "msg_date": "Sat, 15 Feb 2003 22:03:56 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Lamar Owen <[email protected]> writes:\n>> I realize that from a packager's point of view, the separate initdb step\n>> is not very useful. But it is from my point of view.\n\n> Would you mind elucidating which point of view is yours?\n\nPrimarily, one that wants to have multiple postmasters running, of the\nsame or different versions; including test and temporary installations\nthat mustn't conflict with the existing primary installation on a machine.\n\nCurrently, I don't need to do anything more than set PGDATA or say -D\nto initdb in order to set up the data directory wherever I like. I also\ndon't need to worry about whether I'm selecting the wrong config file.\n\nYou're talking about making manual installations significantly more\ndifficult (and error-prone, I think) in order to simplify automated\ninstalls. Now you've acknowledged that your script can do what\nyou want it to, and in fact already does. Why is it good to make my\nlife more difficult to make a script's easier? Cycles are cheap.\nI like to think that my time is worth something.\n\nNor will I buy an argument that only a few developers have need for test\ninstallations. 
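(For concreteness, the sort of thing I mean -- the directory and port number below are made up for illustration:)\n\n    $ initdb -D /some/scratch/dir            # throwaway data directory\n    $ postmaster -D /some/scratch/dir -p 5433 &\n    $ psql -p 5433 template1\n\n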
Ordinary users will want to do that anytime they are\ndoing preliminary tests on a new PG version before migrating their\nproduction database to it. To the extent that you make manual selection\nof a nonstandard datadir location more difficult and error-prone, you\nare hurting them too.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 16 Feb 2003 00:16:44 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files " }, { "msg_contents": "On Sunday 16 February 2003 00:16, Tom Lane wrote:\n> Lamar Owen <[email protected]> writes:\n> > Would you mind elucidating which point of view is yours?\n\n> Primarily, one that wants to have multiple postmasters running, of the\n> same or different versions; including test and temporary installations\n> that mustn't conflict with the existing primary installation on a machine.\n\nWell, due to our upgrading difficulty, having multiple versions running has \nits advantages.\n\n> You're talking about making manual installations significantly more\n> difficult (and error-prone, I think) in order to simplify automated\n> installs. Now you've acknowledged that your script can do what\n> you want it to, and in fact already does. Why is it good to make my\n> life more difficult to make a script's easier? Cycles are cheap.\n> I like to think that my time is worth something.\n\nThe script's been out there for awhile. It does some things well, and some \nthings not so well. The config files are still coresident with the database, \nand backup is more difficult than it can be. Meeting all these needs (with \nconfigure switches, configuration file directives, etc) would be a good \nthing. And that's what I'm after; maximum usability for the maximum \naudience. I believe pretty strongly that the usage to which you or I would \nput PostgreSQL is probably quite different from the average user's way of \nusing PostgreSQL. Most probably the typical user has a single installation \nwith multiple databases with little need to run isolated postmasters.\n\n> Nor will I buy an argument that only a few developers have need for test\n> installations. Ordinary users will want to do that anytime they are\n> doing preliminary tests on a new PG version before migrating their\n> production database to it. To the extent that you make manual selection\n> of a nonstandard datadir location more difficult and error-prone, you\n> are hurting them too.\n\nWhile I'm not going to speak for all users, I know that I don't put a \ndevelopment database system on my production servers. The production machine \nonly runs production servers, period. Hardware is cheap. I have development \nmachines for development databases. One also has the error-prone PATH issues \nwith multiple versions, which, if you are running a typical RPM installation \nbecomes quite difficult to manage, since the RPM version's executables are in \n/usr/bin. This could be changed; I've thought about changing it. 
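(Something along these lines, purely as an illustration of what changing it might look like -- the paths are invented:)\n\n    /usr/lib/pgsql-7.2/bin/postmaster      # older packaged version kept around\n    /usr/lib/pgsql-7.3/bin/postmaster      # current packaged version\n    /usr/bin/postmaster -> whichever of the above an instance should run\n\n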
But I'm \nnot sure of the best way to make multiple versions peacefully and seamlessly \ncoexist -- particularly when older versions may not even build on the newer \nOS: but we've been over that discussion.\n\nCare for a poll?\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n\n", "msg_date": "Sun, 16 Feb 2003 00:31:03 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Tom Lane wrote:\n> Currently, I don't need to do anything more than set PGDATA or say -D\n> to initdb in order to set up the data directory wherever I like. I also\n> don't need to worry about whether I'm selecting the wrong config file.\n\nSo in your case, what's the advantage of having initdb write anything\nto a config file, when you're probably also relying on PGDATA or -D to\nstart the database (if you're not, then fair enough. But see below)?\n\nI'd expect initdb to initialize a database. If I were running initdb\nwithout a lot of foreknowledge of its side effects, I think I'd\nprobably be a bit surprised to find that it had touched my config\nfile. Initdb doesn't have prior knowledge of how you intend to\nstart the database so that it refers to the database initdb just\ncreated, so it can't really know whether it's desirable to touch the\nconfig file.\n\nIf it's desirable for initdb to be able to write to the config file,\nwouldn't it be more appropriate for that to an option that has to be\nexplicitly enabled on initdb's command line? I don't know how often\nyou'd want it to write into the config file for your purposes, but\nhaving it do so automatically seems to violate the principle of least\nsurprise.\n\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Sat, 15 Feb 2003 22:37:07 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Tom Lane writes:\n\n> AFAICS, you can either set -C to /etc if you want your PG config files\n> loose in /etc, or you can set it to /etc/postgresql/ if you want them\n> in a privately-owned directory. Which other arrangements are needed?\n\nPeople might want to share them between servers, or allow a user to select\nfrom a few pre-configured ones that which reside in the same directory.\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Sun, 16 Feb 2003 15:10:05 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Tom Lane writes:\n>> AFAICS, you can either set -C to /etc if you want your PG config files\n>> loose in /etc, or you can set it to /etc/postgresql/ if you want them\n>> in a privately-owned directory. Which other arrangements are needed?\n\n> People might want to share them between servers, or allow a user to select\n> from a few pre-configured ones that which reside in the same directory.\n\nYou can accomplish that without the need to customize the .conf file\nnames; you just make, eg,\n\n\t/etc/postgres/myconfig/postgresql.conf\n\t/etc/postgres/yourconfig/postgresql.conf\n\t/etc/postgres/herconfig/postgresql.conf\n\n(plus additional config files as needed in each of these directories)\nand then the postmaster start command is\n\n\tpostmaster -C /etc/postgres/myconfig\n\nI see no real gain in flexibility in allowing people to choose random\nnames for the individual config files. 
Also, it'd defeat the\nultimate-fallback approach of doing \"find / -name postgresql.conf\"\nto figure out where the config files are hiding in an unfamiliar\ninstallation.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 16 Feb 2003 12:54:23 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files " }, { "msg_contents": "Lamar Owen <[email protected]> writes:\n> The script's been out there for awhile. It does some things well, and some \n> things not so well. The config files are still coresident with the database,\n> and backup is more difficult than it can be. Meeting all these needs (with \n> configure switches, configuration file directives, etc) would be a good \n> thing.\n\nSure. I'm happy to change the software in a way that *allows* moving the\nconfig files elsewhere. But it's not apparent to me why you insist on\nforcing people who are perfectly happy with their existing configuration\narrangements to change them. I have not seen any reason in this\ndiscussion why we can't support both a separate-config-location approach\nand the traditional single-location one.\n\nPlease remember that the existing approach has been evolved over quite\na few releases. It may not satisfy the dictates of the FHS religion,\nbut it does meet some people's needs perfectly well. Let's look for a\nsolution that permits coexistence, rather than one that forces change\non people who don't need or want change.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 16 Feb 2003 13:15:45 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files " }, { "msg_contents": "Kevin Brown <[email protected]> writes:\n> So in your case, what's the advantage of having initdb write anything\n> to a config file, when you're probably also relying on PGDATA or -D to\n> start the database (if you're not, then fair enough. But see below)?\n\nKeep in mind that initdb doesn't currently *need* to put the datadir\nlocation into the config file. It *will* need to do so if we separate\nconfig and data dirs. Or at least, *somebody* will need to do so.\nIt's not apparent to me how it simplifies life not to have initdb do it.\nEspecially when there are other configuration items that initdb should\nor already does record: locale settings, database encoding. And we\nhave already been talking about improving PG's self-tuning capability.\ninitdb would be the natural place to look around for information like\navailable RAM and adjust the config-file settings like sort_mem\naccordingly.\n\nBasically, the notion that initdb shouldn't write a config file seems\nlike a complete dead end to me. It cannot possibly be more convenient\nthan the alternatives. We'd be giving up a lot of current and future\nfunctionality --- and for what?\n\n> I'd expect initdb to initialize a database. If I were running initdb\n> without a lot of foreknowledge of its side effects, I think I'd\n> probably be a bit surprised to find that it had touched my config\n> file.\n\nIf we do it the way I suggested (dump into the datadir, which is\ninitially empty, same as always) then it cannot overwrite your existing\nconfig files. 
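(Roughly the sort of thing it might drop in there -- the parameter names and values below are only an illustration of the idea, not a settled proposal:)\n\n    # postgresql.conf suggested by initdb\n    #data_dir = '/usr/local/pgsql/data'    # only if config and data dirs are split\n    lc_messages = 'C'                      # locale initdb was run under\n    client_encoding = SQL_ASCII            # matches the database encoding\n    shared_buffers = 64                    # could be sized from available RAM\n    sort_mem = 1024\n\n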
Think of it as providing a suggested config file to\ncompare against what you have.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 16 Feb 2003 14:20:38 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files " }, { "msg_contents": "On Sunday 16 February 2003 13:15, Tom Lane wrote:\n> Sure. I'm happy to change the software in a way that *allows* moving the\n> config files elsewhere.\n\nSo we agree. Perfect.\n\n> But it's not apparent to me why you insist on\n> forcing people who are perfectly happy with their existing configuration\n> arrangements to change them.\n\nMe? Trying to force things to change? You misunderstand me. No, I'm trying \nto understand the rationale for a (relative to the way other \ndesigned-multiple daemons do things) different, non-standard configuration \nprocess. I understand better now; the exercise was a success. Many thanks.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n\n", "msg_date": "Sun, 16 Feb 2003 18:56:57 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "\n\nTom Lane wrote:\n\n>Peter Eisentraut <[email protected]> writes:\n> \n>\n>>Tom Lane writes:\n>> \n>>\n>>>I would favor a setup that allows a -C *directory* (not file) to be\n>>>specified as a postmaster parameter separately from the -D directory;\n>>> \n>>>\n>\n> \n>\n>>A directory is not going to satisfy people.\n>> \n>>\n>\n>Why not? Who won't it satisfy, and what's their objection?\n>\n>AFAICS, you can either set -C to /etc if you want your PG config files\n>loose in /etc, or you can set it to /etc/postgresql/ if you want them\n>in a privately-owned directory. Which other arrangements are needed?\n>\n> \n>\nThe idea of using a \"directory\" puts us back to using symlinks to share \nfiles.\n\nWhile I know the core development teams thinks that symlinks are a \nviable configuration option, most admins, myself included, do not like \nto use symlinks because they do not have the ability to carry \ndocumentation, i.e. comments in a configuration file, and are DANGEROUS \nin a production environment.\n\nAny configuration strategy that depends on symlinks is inadequate and \npoorly designed.\n\n\n> \n>\n\n\n", "msg_date": "Sun, 16 Feb 2003 21:40:08 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "mlw <[email protected]> writes:\n> The idea of using a \"directory\" puts us back to using symlinks to share \n> files.\n\nSo? If you want to share files, you're probably sharing all three\nconfig files and don't need a separate directory at all. This is\nnot a sufficient argument to make me buy into the mess of letting\npeople choose nonstandard configuration file names --- especially\nwhen most of the opposite camp seems to be more interested in choosing\n*standard* names for things. Why does that policy stop short at the\ndirectory name?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 16 Feb 2003 21:48:47 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files " }, { "msg_contents": "Tom Lane wrote:\n> Keep in mind that initdb doesn't currently *need* to put the datadir\n> location into the config file. It *will* need to do so if we separate\n> config and data dirs. 
Or at least, *somebody* will need to do so.\n> It's not apparent to me how it simplifies life not to have initdb do it.\n> Especially when there are other configuration items that initdb should\n> or already does record: locale settings, database encoding. \n\nIs it possible for the database engine to properly deal with a\ndatabase when it is told to use a different database encoding than the\none the database was initdb'd with?\n\nIf it's not, then that suggests to me that the database encoding is\nsomething that doesn't belong in the configuration file but rather in\nsome other place that is intimately tied with the database itself and\nwhich is difficult/impossible to change, like perhaps a read-only\nsystem table that gets created at initdb time.\n\n> And we have already been talking about improving PG's self-tuning\n> capability. initdb would be the natural place to look around for\n> information like available RAM and adjust the config-file settings\n> like sort_mem accordingly.\n\nI agree here, and since you're thinking of just putting the resulting\nconfig file in the database data directory, then as a DBA I wouldn't\nbe terribly surprised by it ... especially if it came back with a\nmessage that told me what it had done.\n\n> If we do it the way I suggested (dump into the datadir, which is\n> initially empty, same as always) then it cannot overwrite your existing\n> config files. Think of it as providing a suggested config file to\n> compare against what you have.\n\nThere is one minor complication: what if there's an existing config\nfile in the target directory?\n\nOne use for initdb would be as a quick way to completely wipe the\ndatabase and start over (e.g., if the encoding were found to be\nincorrect), but the config file that's already there could easily\ncontain a lot of customization that the administrator would want to\nretain. Which suggests that we should consider writing to a file\nusing a slightly different name (e.g., postgresql.conf.initdb), at\nleast in the event that a config file already exists in the target\ndirectory.\n\nNot sure what the overall right thing to do here is...\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Sun, 16 Feb 2003 19:33:24 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "\n\nTom Lane wrote:\n\n>mlw <[email protected]> writes:\n> \n>\n>>The idea of using a \"directory\" puts us back to using symlinks to share \n>>files.\n>> \n>>\n>\n>So? If you want to share files, you're probably sharing all three\n>config files and don't need a separate directory at all. This is\n>not a sufficient argument to make me buy into the mess of letting\n>people choose nonstandard configuration file names --- especially\n>when most of the opposite camp seems to be more interested in choosing\n>*standard* names for things. Why does that policy stop short at the\n>directory name?\n> \n>\nsymlinks suck. Sorry Tom, but they are *BAD* in a production server. You \ncan not add comments to symlinks. Most of the admins I know, myself \nincluded, HATE symlinks and use them as a last resort. 
Requiring \nsymlinks is just pointless; we are talking about a few lines of code that \nhave nothing to do with performance.\n\nThe patch that I submitted allows PostgreSQL to work as it always has, \nbut adds the ability for a configuration file to do what is normally \ndone with fixed names in $PGDATA.\n\nI have said before, I do not like policy, I like flexibility, forcing a \ndirectory is just as restricting as requiring the files in $PGDATA.\n\nWhy is this such a problem? MANY people want to configure PostgreSQL \nthis way; the patch I submitted allows it, but does not force \nanything. Any configuration solution that requires symlinks is flawed.\n\n> \n>\n\n", "msg_date": "Sun, 16 Feb 2003 22:41:37 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "mlw wrote:\n> symlinks suck. Sorry Tom, but they are *BAD* in a production\n> server. \n\nWell, at least they're better than hard links. ;-)\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Sun, 16 Feb 2003 19:55:49 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Kevin Brown <[email protected]> writes:\n> Is it possible for the database engine to properly deal with a\n> database when it is told to use a different database encoding than the\n> one the database was initdb'd with?\n\nIt can't be \"told to use a different database encoding\". However, the\ndefault *client* encoding matches the database encoding, and that is\nsomething that can be set in the config file.\n\n>> If we do it the way I suggested (dump into the datadir, which is\n>> initially empty, same as always) then it cannot overwrite your existing\n>> config files. Think of it as providing a suggested config file to\n>> compare against what you have.\n\n> There is one minor complication: what if there's an existing config\n> file in the target directory?\n\nIf there's anything at all in the target directory, initdb refuses to\nrun.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 17 Feb 2003 13:39:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files " }, { "msg_contents": "On Fri, Feb 14, 2003 at 10:35:41AM -0500, Tom Lane wrote:\n\n> FHS or no FHS, I would think that the preferred arrangement would be to\n> keep Postgres' config files in a postgres-owned subdirectory, not\n> directly in /etc. That way you need not be root to edit them. (My idea\n\nBesides, what are you going to do for people installing on a box\nwhere they don't have root? Are they going to need a whole mess of\nextra directories in their private copy?\n\n> of an editor, Emacs, always wants to write a backup file, so I dislike\n> having to edit files that live in directories I can't write.)\n> \n> Here's a pretty topic for a flamewar: should it be /etc/postgres/ or\n> /etc/postgresql/ ?\n\nWow, two flamewar topics in one mail.
I'm impressed.\n\nAndrew \"ed is the one true editor\" Sullivan\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Mon, 17 Feb 2003 13:51:16 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Sun, Feb 16, 2003 at 12:16:44AM -0500, Tom Lane wrote:\n> Nor will I buy an argument that only a few developers have need for test\n> installations. Ordinary users will want to do that anytime they are\n> doing preliminary tests on a new PG version before migrating their\n> production database to it. To the extent that you make manual selection\n> of a nonstandard datadir location more difficult and error-prone, you\n> are hurting them too.\n\nNot only that. For safety's sake, you may need to run multiple\npostmasters on one machine (so that database user X can't DoS\ndatabase user Y, for instance). And making that sort of\nproduction-grade work more difficult and error-prone would also be\nbad.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Mon, 17 Feb 2003 14:03:31 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "On Sat, 15 Feb 2003, Curt Sampson wrote:\n\n> On Fri, 14 Feb 2003, scott.marlowe wrote:\n> \n> > Asking for everything in a directory with the name local in it to be\n> > shared is kind of counter intuitive to me.\n> \n> Not really. If you install a particular program that doesn't come with\n> the OS on one machine on your site, why would you not want to install it\n> separately on all of the others?\n> \n> Typically, I want my favourite non-OS utilities on all machines, not\n> just one. (Even if I don't use them on all machines.) Thus /usr/local is\n> for site-local stuff.\n\nGood point. Of course, in apache, it's quite easy to use the -f switch to \npick the file you're running on. so, with a \n\nhttpd -f /usr/local/apache/conf/`uname -a|cut -d \" \" -f 2`.conf\n\nI can pick and choose the file to run. So, yes, I would gladly use it in \na cluster, and all the files would be in one place, easy to backup.\n\n", "msg_date": "Tue, 18 Feb 2003 10:00:38 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "\nI have a new idea. You know how we have search_path where you can\nspecify multiple schema names. What if we allow the config_dirs/-C to\nspecify multiple directories to search for config files. That way, we\ncan use only one variable, and we can allow people to place different\nconfig files in different directories.\n\n---------------------------------------------------------------------------\n\nAndrew Sullivan wrote:\n> On Sun, Feb 16, 2003 at 12:16:44AM -0500, Tom Lane wrote:\n> > Nor will I buy an argument that only a few developers have need for test\n> > installations. Ordinary users will want to do that anytime they are\n> > doing preliminary tests on a new PG version before migrating their\n> > production database to it. To the extent that you make manual selection\n> > of a nonstandard data_dir location more difficult and error-prone, you\n> > are hurting them too.\n> \n> Not only that. 
For safety's sake, you may need to run multiple\n> postmasters on one machine (so that database user X can't DoS\n> database user Y, for instance). And making that sort of\n> production-grade work more difficult and error-prone would also be\n> bad.\n> \n> A\n> \n> -- \n> ----\n> Andrew Sullivan 204-4141 Yonge Street\n> Liberty RMS Toronto, Ontario Canada\n> <[email protected]> M2P 2A8\n> +1 416 646 3304 x110\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 18 Feb 2003 21:55:09 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> I have a new idea. You know how we have search_path where you can\n> specify multiple schema names. What if we allow the config_dirs/-C to\n> specify multiple directories to search for config files. That way, we\n> can use only one variable, and we can allow people to place different\n> config files in different directories.\n\nThat's an interesting idea. Were you thinking, perhaps, that you\ncould put, say, a postgresql.conf file in multiple directories and\nhave the settings in the latest one override the settings in earlier\nones? That would mean you could set up a single postgresql.conf that\nhas settings common to all of your instances (settings related to the\nsystem such as shared buffers, and default settings that would apply\nto any instance if not overridden), and a postgresql.conf file for\neach instance that defines the instance-specific configuration\ninformation.\n\nI'm not sure that's quite what you had in mind, though. :-)\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Wed, 19 Feb 2003 04:09:52 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I have a new idea. You know how we have search_path where you can\n> specify multiple schema names. What if we allow the config_dirs/-C to\n> specify multiple directories to search for config files. That way, we\n> can use only one variable, and we can allow people to place different\n> config files in different directories.\n\nHm, a search path for config files? I could support that if it\nsatisfies the folks who object to specifying config directories\nrather than file names.\n\nOne thing that would have to be thought about is whether to re-search\nthe path on each config file reload --- if you first find pg_hba.conf\nin, say, the third directory on the path, should you pay attention if\none materializes in the second directory later? Or do you keep going\nback to the same well? 
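(To make the lookup concrete -- a first-match-wins search, with the directory names invented for the example:)\n\n    for dir in /etc/postgres/local /etc/postgres/shared\n    do\n        test -f $dir/pg_hba.conf && break\n    done\n    # on reload: run the search again, or keep the directory found at startup?\n\n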
I can see arguments either way.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 19 Feb 2003 10:35:56 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files " }, { "msg_contents": "\nThe problem I have with Bruce's scheme is that you could put your config\nfile where you want it and someone else puts one somewhere higher in the\nsearch path and you have no idea what went wrong. It sounds to me like a\nrecipe for an SA's nightmare. Other people have claimed to speak from the SA\nperspective - do they see this too?\n\nandrew\n\n----- Original Message -----\nFrom: \"Tom Lane\" <[email protected]>\n> Bruce Momjian <[email protected]> writes:\n> > I have a new idea. You know how we have search_path where you can\n> > specify multiple schema names. What if we allow the config_dirs/-C to\n> > specify multiple directories to search for config files. That way, we\n> > can use only one variable, and we can allow people to place different\n> > config files in different directories.\n>\n> Hm, a search path for config files? I could support that if it\n> satisfies the folks who object to specifying config directories\n> rather than file names.\n>\n> One thing that would have to be thought about is whether to re-search\n> the path on each config file reload --- if you first find pg_hba.conf\n> in, say, the third directory on the path, should you pay attention if\n> one materializes in the second directory later? Or do you keep going\n> back to the same well? I can see arguments either way.\n\n", "msg_date": "Wed, 19 Feb 2003 10:58:20 -0500", "msg_from": "\"Andrew Dunstan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > I have a new idea. You know how we have search_path where you can\n> > specify multiple schema names. What if we allow the config_dirs/-C to\n> > specify multiple directories to search for config files. That way, we\n> > can use only one variable, and we can allow people to place different\n> > config files in different directories.\n> \n> Hm, a search path for config files? I could support that if it\n> satisfies the folks who object to specifying config directories\n> rather than file names.\n> \n> One thing that would have to be thought about is whether to re-search\n> the path on each config file reload --- if you first find pg_hba.conf\n> in, say, the third directory on the path, should you pay attention if\n> one materializes in the second directory later? Or do you keep going\n> back to the same well? I can see arguments either way.\n\nOh, I hadn't thought of that. I would vote for researching the path,\nbut I am not sure why.\n\nThe bigger question is whether you can modify config_dirs while the\npostmaster is running. I would think not.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 19 Feb 2003 10:58:22 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> The bigger question is whether you can modify config_dirs while the\n> postmaster is running. 
I would think not.\n\nThere would be no way to do that, because the only way to set it would\nbe from -C on the command line or a PGCONFIG environment variable.\nBut I can't see a real good reason why you'd need to.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 19 Feb 2003 11:02:06 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files " }, { "msg_contents": "\"Andrew Dunstan\" <[email protected]> writes:\n> The problem I have with Bruce's scheme is that you could put your config\n> file where you want it and someone else puts one somewhere higher in the\n> search path and you have no idea what went wrong. It sounds to me like a\n> recipe for an SA's nightmare. Other people have claimed to speak from the SA\n> perspective - do they see this too?\n\nIf you have \"your\" file you'd put it in the directory at the front of\nthe search path. End of problem. Any additional directories would be\nfor config files that you *want* to share.\n\nOffhand I find it hard to visualize needing more than two directories in\nthis path (private and shared), unless we grow to having many more\nconfig files than we do now. But we may as well build the feature with\nno artificial restriction about path length.\n\nSearch path management seems well understood for $PATH --- people do\nget burnt by having the wrong $PATH, but it doesn't qualify as a\nnightmare...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 19 Feb 2003 11:40:20 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: location of the configuration files " }, { "msg_contents": "After failing to make Itanium competitive, Intel is now downplaying\n64-bit CPU's. Of course, they didn't think that until Itanium failed. \nHere is the slashdot story:\n\n\n\thttp://slashdot.org/article.pl?sid=03/02/23/2050237&mode=nested&tid=118\n\nSeems AMD's hammer is going to be the popular 64-bit desktop CPU.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 24 Feb 2003 18:49:37 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Intel drops 64-bit Itanium" }, { "msg_contents": "> After failing to make Itanium competitive, Intel is now downplaying\n> 64-bit CPU's. Of course, they didn't think that until Itanium failed.\n> Here is the slashdot story:\n> http://slashdot.org/article.pl?sid=03/02/23/2050237&mode=nested&tid=118\n> \n> Seems AMD's hammer is going to be the popular 64-bit desktop CPU.\n\nIt's really unsurprising; there was /no/ likelihood of Itanium getting\nwidely deployed on desktops when there would be an absolute dearth of\ndesktop software.\n\nThink back: Alpha was presented in /exactly/ the same role, years ago,\nand the challenges it had vis-a-vis:\n\n a) Need for emulation to run legacy software that can't get recompiled;\n b) Need to deploy varying binaries on the substantially varying\n platforms;\n c) It's real costly to be an early adoptor of new hardware, so the\n hardware is expensive stuff.\n\nCertain sorts of \"enterprise\" software got deployed on Alpha, but you\nnever got the ordinary stuff like MS Office and such, which meant there\nwas no point to anyone pushing \"desktop\" software to Alpha. 
And we\nthereby had the result that Alpha became server-only.\n\nWhy should it be the slightest bit remarkable that IA-64 is revisiting\nthe very same marketing challenges? \n\nIt has the very same set of technical challenges.\n\nIt may well be that by the time it /is/ time to generally deploy IA-64,\nit will have become the Alpha platform. After all, Compaq sold the\narchitecture to Intel, and Alpha already has a mature set of hardware\ndesigns as well as compilers...\n--\n(reverse (concatenate 'string \"gro.gultn@\" \"enworbbc\"))\nhttp://www3.sympatico.ca/cbbrowne/oses.html\n\"Everything should be built top-down, except the first time.\"\n-- Alan Perlis\n", "msg_date": "Mon, 24 Feb 2003 22:25:04 -0500", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Intel drops 64-bit Itanium " }, { "msg_contents": "http://www.theinquirer.net/?article=7966\n\n\nOn Mon, 2003-02-24 at 22:25, [email protected] wrote:\n> > After failing to make Itanium competitive, Intel is now downplaying\n> > 64-bit CPU's. Of course, they didn't think that until Itanium failed.\n> > Here is the slashdot story:\n> > http://slashdot.org/article.pl?sid=03/02/23/2050237&mode=nested&tid=118\n> > \n> > Seems AMD's hammer is going to be the popular 64-bit desktop CPU.\n> \n> It's really unsurprising; there was /no/ likelihood of Itanium getting\n> widely deployed on desktops when there would be an absolute dearth of\n> desktop software.\n> \n> Think back: Alpha was presented in /exactly/ the same role, years ago,\n> and the challenges it had vis-a-vis:\n> \n> a) Need for emulation to run legacy software that can't get recompiled;\n> b) Need to deploy varying binaries on the substantially varying\n> platforms;\n> c) It's real costly to be an early adoptor of new hardware, so the\n> hardware is expensive stuff.\n> \n> Certain sorts of \"enterprise\" software got deployed on Alpha, but you\n> never got the ordinary stuff like MS Office and such, which meant there\n> was no point to anyone pushing \"desktop\" software to Alpha. And we\n> thereby had the result that Alpha became server-only.\n> \n> Why should it be the slightest bit remarkable that IA-64 is revisiting\n> the very same marketing challenges? \n> \n> It has the very same set of technical challenges.\n> \n> It may well be that by the time it /is/ time to generally deploy IA-64,\n> it will have become the Alpha platform. 
After all, Compaq sold the\n> architecture to Intel, and Alpha already has a mature set of hardware\n> designs as well as compilers...\n> --\n> (reverse (concatenate 'string \"gro.gultn@\" \"enworbbc\"))\n> http://www3.sympatico.ca/cbbrowne/oses.html\n> \"Everything should be built top-down, except the first time.\"\n> -- Alan Perlis\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n-- \nDave Cramer <[email protected]>\nCramer Consulting\n\n", "msg_date": "25 Feb 2003 07:02:00 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intel drops 64-bit Itanium response from Linus" }, { "msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\nPatch to add XML output to psql:\n\nhttp://www.gtsm.com/xml.patch.txt\n\nNotes and questions:\n\nThe basic output looks something like this:\n\n<?xml version=\"1.0\" encoding=\"SQL_ASCII\"?>\n<resultset psql_version=\"7.4devel\" query=\"select * from foo;\">\n\n<columns>\n <col num=\"1\">a</col>\n <col num=\"2\">b</col>\n <col num=\"3\">c</col>\n <col num=\"4\">mucho nacho </col>\n</columns>\n<row num=\"1\">\n <a>1</a>\n <b>pizza</b>\n <c>2003-02-25 15:19:22.169797</c>\n <\"mucho nacho \"></\"mucho nacho \">\n</row>\n<row num=\"2\">\n <a>2</a>\n <b>mushroom</b>\n <c>2003-02-25 15:19:26.969415</c>\n <\"mucho nacho \"></\"mucho nacho \">\n</row>\n<footer>(2 rows)</footer>\n</resultset>\n\nand with the \\x option:\n\n<?xml version=\"1.0\" encoding=\"SQL_ASCII\"?>\n<resultset psql_version=\"7.4devel\" query=\"select * from foo;\">\n\n<columns>\n <col num=\"1\">a</col>\n <col num=\"2\">b</col>\n <col num=\"3\">c</col>\n <col num=\"4\">mucho nacho </col>\n</columns>\n<row num=\"1\">\n <cell name=\"a\">1</cell>\n <cell name=\"b\">pizza</cell>\n <cell name=\"c\">2003-02-25 15:19:22.169797</cell>\n <cell name=\"mucho nacho \"></cell>\n</row>\n<row num=\"2\">\n <cell name=\"a\">2</cell>\n <cell name=\"b\">mushroom</cell>\n <cell name=\"c\">2003-02-25 15:19:26.969415</cell>\n <cell name=\"mucho nacho \"></cell>\n</row>\n</resultset>\n\n\nThe default encoding \"SQL-ASCII\" is not valid for XML. \nShould it be automatically changed to something else?\n\nThe flag \"-X\" is already taken, unfortunately, although \\X is not. \nI used \"-L\" and \"\\L\" but they are not as memorable as \"X\". Anyone \nsee a way around this? Can we still use \\X inside of psql?\n\n\nIt would be nice to include the string representation of the column \ntypes in the xml output:\n<col type=\"int8\">foo</col>\n....but I could not find an easy way to do this: PQftype returns the \nOID only (which is close but not quite there). 
Is there an \nexisting way to get the name of the type of a column from a \nPQresult item?\n\nThe HTML, XML, and Latex modes should have better documentation - \nI'll submit a separate doc patch when/if this gets finalized.\n\n\n- --\nGreg Sabino Mullane [email protected]\nPGP Key: 0x14964AC8 200302261518\n\n-----BEGIN PGP SIGNATURE-----\nComment: http://www.turnstep.com/pgp.html\n\niD8DBQE+XSR/vJuQZxSWSsgRAi2jAJ9IAKnMBmNcVEEI8TXQBBd/rtm4XQCg0Vjq\nIO9OsCSkdnNJqnrYYutM3jw=\n=9kwY\n-----END PGP SIGNATURE-----\n\n\n", "msg_date": "Wed, 26 Feb 2003 20:46:15 -0000", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "XML ouput for psql" }, { "msg_contents": "[email protected] kirjutas K, 26.02.2003 kell 22:46:\n\n> \n> and with the \\x option:\n> \n> <?xml version=\"1.0\" encoding=\"SQL_ASCII\"?>\n> <resultset psql_version=\"7.4devel\" query=\"select * from foo;\">\n> \n> <columns>\n> <col num=\"1\">a</col>\n> <col num=\"2\">b</col>\n> <col num=\"3\">c</col>\n> <col num=\"4\">mucho nacho </col>\n> </columns>\n> <row num=\"1\">\n> <cell name=\"a\">1</cell>\n> <cell name=\"b\">pizza</cell>\n> <cell name=\"c\">2003-02-25 15:19:22.169797</cell>\n> <cell name=\"mucho nacho \"></cell>\n> </row>\n> <row num=\"2\">\n> <cell name=\"a\">2</cell>\n> <cell name=\"b\">mushroom</cell>\n> <cell name=\"c\">2003-02-25 15:19:26.969415</cell>\n> <cell name=\"mucho nacho \"></cell>\n> </row>\n> </resultset>\n> \n> \n> The default encoding \"SQL-ASCII\" is not valid for XML. \n> Should it be automatically changed to something else?\n\nI think you should force conversion to something standard, try using\nautomatic conversion to some known client encoding.\n\nbtw, \"UNICODE\" is also not any known encoding in XML, but PostgreSQL\nuses it to mean utf-8\n\n> The flag \"-X\" is already taken, unfortunately, although \\X is not. \n> I used \"-L\" and \"\\L\" but they are not as memorable as \"X\". Anyone \n> see a way around this? Can we still use \\X inside of psql?\n> \n> \n> It would be nice to include the string representation of the column \n> types in the xml output:\n> <col type=\"int8\">foo</col>\n> ....but I could not find an easy way to do this: PQftype returns the \n> OID only (which is close but not quite there). Is there an \n> existing way to get the name of the type of a column from a \n> PQresult item?\n\nRun \"select oid,typname from pg_type;\" first if run in xml mode and\nstore the oid/columnname pairs.\n\nyou could also store the result in ~/.psql for faster access later on\nand manually clear it if new types are defined\n\n----------------\nHannu\n\n", "msg_date": "26 Feb 2003 23:17:11 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: XML ouput for psql" }, { "msg_contents": "[email protected] writes:\n\n> Patch to add XML output to psql:\n\nThis would get me more excited if you do one or both of the following:\n\n1. Look into the SQL/XML standard draft (ftp.sqlstandards.org) to find out\nwhether the standard addresses this sort of thing.\n\n2. 
Use an established/standardized XML (or SGML) table model rather than\nrolling your own.\n\nIncidentally, the HTML table model is such an established and standardized\nXML and SGML table model, so the easiest way to get the task \"add XML\noutput to psql\" done is to update the HTML output to conform to XHTML.\nThat way you get both the strict XML and you can look at the formatted\nresult with any old (er, new) browser.\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Thu, 27 Feb 2003 14:50:48 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: XML ouput for psql" }, { "msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\nHannu Krosing wrote:\n> I think you should force conversion to something standard, try using\n> automatic conversion to some known client encoding.\n\nI've thought about this some more, and the only thing I can think \nabout doing without being too heavy-handed is to change the encoding \nto \"US-ASCII\" whenever someone enters \"XML\" mode if the encoding is set \nto \"SQL-ASCII\". Perhaps with a warning.\n\n\"The character set most commonly use in the Internet and used especially in \nprotocol standards is US-ASCII, this is strongly encouraged.\"\nhttp://www.iana.org/assignments/character-sets\n\nOn the other hand, SQLX seems to lean toward a strict unicode encoding \n(see my reply to Peter Eisentraut for more on that).\n\n> Run \"select oid,typname from pg_type;\" first if run in xml mode and\n> store the oid/columnname pairs.\n\nI realize that I could run a SQL query against pg_type to grab the info, \nbut I was hoping there was an internal function similar to PQtype which \nwould return the information.\n\n> you could also store the result in ~/.psql for faster access \n> later on and manually clear it if new types are defined\n\nNot only does pg_type has literally hundreds of entries, but there is no \nway to guarantee that these are correct at the time when the query is \nrun, so I don't think this is viable.\n\n- --\nGreg Sabino Mullane [email protected]\nPGP Key: 0x14964AC8 200302280938\n-----BEGIN PGP SIGNATURE-----\nComment: http://www.turnstep.com/pgp.html\n\niD8DBQE+X3hCvJuQZxSWSsgRArMTAKChouxnFF1ugI1mutXYJf14p1ICGwCfUDG9\nyISxrIvqxnYWHfvD0lOWZAQ=\n=M6nd\n-----END PGP SIGNATURE-----\n\n\n\n", "msg_date": "Fri, 28 Feb 2003 15:00:47 -0000", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: XML ouput for psql" }, { "msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\nPeter Eisentraut wrote:\n> 1. Look into the SQL/XML standard draft (ftp.sqlstandards.org) to find out\n> whether the standard addresses this sort of thing.\n\nThe URL you gave leads to a site curiously content-free and full of dead links. \nI've looked around a bit, but found nothing definitive. One good resource I \ndid find was this:\n\nhttp://www.wiscorp.com/sql/SQLX_Bringing_SQL_and_XML_Together.pdf\n\nThe article mentions a lot of links on the sqlstandards.org and iso.org sites, none \nof which work or are restricted. If anyone knows of some good links, please \nlet me know. (especially ISO 9075). 
From what I've read of the SQLX stuff, the \nformat in my patch should be mostly standard:\n\n<row>\n <name>Joe Sixpack</name>\n <age>35</age>\n <state>Alabama</state>\n</row>\n\nOne problem is that the recommended way to handle non-standard characters \n(including spaces) is to escape them like this:\n\nfoobar baz => <foobar_x0020_baz>\n\nThis also includes escaping things like \"_x*\" and \"xml*\". We don't have \nanything like that in the code yet (?), but we should probably think about \nheading that way. I think escaping whitespace in quotes is good enough \nfor now for:\n\nfoobar baz => <\"foobar baz\">\n\nThe xsd and xsi standards are also interesting, but needlessly complicated \nfor psql output, IMO.\n\n> Incidentally, the HTML table model is such an established and standardized\n> XML and SGML table model, so the easiest way to get the task \"add XML\n> output to psql\" done is to update the HTML output to conform to XHTML.\n> That way you get both the strict XML and you can look at the formatted\n> result with any old (er, new) browser.\n\nI don't agree with this: XML and XHTML are two different things. We could \ncertainly upgrade the HTML portion, but I am pretty sure that the XML \nstandard calls for this format:\n\n<columnname>data here</columnname>\n\n..which is not valid XHTML and won't be viewable by any browser. The other \nsuggested XML formats are even further from XHTML than the above. The HTML \nformat should be \"html table/layout\" specific and the XML should be \n\"schema/data\" specific.\n\n- --\nGreg Sabino Mullane [email protected]\nPGP Key: 0x14964AC8 200302280938\n\n-----BEGIN PGP SIGNATURE-----\nComment: http://www.turnstep.com/pgp.html\n\niD8DBQE+X3k5vJuQZxSWSsgRAuXFAKDGO1IsjB9Lwtkcws1xJy47PibcLQCg3dx5\nfsy27qguZv841lPvCjzdUic=\n=4f9B\n-----END PGP SIGNATURE-----\n\n\n", "msg_date": "Fri, 28 Feb 2003 15:03:14 -0000", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: XML ouput for psql" }, { "msg_contents": "OSDL has just come out with a set of open-source database benchmarks:\nhttp://www.osdl.org/projects/performance/\n\nThe bad news:\n\"This tool kit works with SAP DB open source database versions 7.3.0.23\nor 7.3.0.25.\"\n\n(In fact, they seem to think they are testing kernel performance, not\ndatabase performance, which strikes me as rather bizarre. But anyway.)\n\nThe good news:\n\"We are planning to port this test kit to other databases.\"\n\nPerhaps someone around here should help out...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Mar 2003 00:16:36 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Yet another open-source benchmark" }, { "msg_contents": "Tom Lane wrote:\n> OSDL has just come out with a set of open-source database benchmarks:\n> http://www.osdl.org/projects/performance/\n> \n> The bad news:\n> \"This tool kit works with SAP DB open source database versions 7.3.0.23\n> or 7.3.0.25.\"\n> \n> (In fact, they seem to think they are testing kernel performance, not\n> database performance, which strikes me as rather bizarre. 
But anyway.)\n> \n> The good news:\n> \"We are planning to port this test kit to other databases.\"\n> \n> Perhaps someone around here should help out...\n\nYep, this is the group that have hit a performance limit with SAPDB and \nare 100% definitely looking to move it to PostgreSQL, *if* they can get \npeople to assist them.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n- Indira Gandhi\n\n", "msg_date": "Mon, 03 Mar 2003 15:55:40 +1030", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Yet another open-source benchmark" }, { "msg_contents": "> OSDL has just come out with a set of open-source database benchmarks:\n> http://www.osdl.org/projects/performance/\n> \n> The bad news:\n> \"This tool kit works with SAP DB open source database versions 7.3.0.23\n> or 7.3.0.25.\"\n> \n> (In fact, they seem to think they are testing kernel performance, not\n> database performance, which strikes me as rather bizarre. But anyway.)\n\nThat may be a terminology thing; the main SAP-DB process is called the \n\"kernel,\" and it's more than likely that the \"SAP-DB Kernel\" is the sense in \nwhich the term is being used.\n\nWhen they translate things from German, sometimes wordings change :-).\n--\noutput = reverse(\"moc.enworbbc@\" \"enworbbc\")\nhttp://www.ntlug.org/~cbbrowne/linuxxian.html\nRules of the Evil Overlord #41. \"Once my power is secure, I will\ndestroy all those pesky time-travel devices.\"\n<http://www.eviloverlord.com/>\n\n\n", "msg_date": "Mon, 03 Mar 2003 07:41:05 -0500", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Yet another open-source benchmark " }, { "msg_contents": "On the results page they list kernels like linux-2.4.18-1tier or \nlinux-2.4.19-rc2 or redhat-stock-2.4.7-10cmp. This sounds really like \nlinux-kernel-versions.\n\nAm Montag, 3. März 2003 13:41 schrieb [email protected]:\n> > OSDL has just come out with a set of open-source database benchmarks:\n> > http://www.osdl.org/projects/performance/\n> >\n> > The bad news:\n> > \"This tool kit works with SAP DB open source database versions 7.3.0.23\n> > or 7.3.0.25.\"\n> >\n> > (In fact, they seem to think they are testing kernel performance, not\n> > database performance, which strikes me as rather bizarre. But anyway.)\n>\n> That may be a terminology thing; the main SAP-DB process is called the\n> \"kernel,\" and it's more than likely that the \"SAP-DB Kernel\" is the sense\n> in which the term is being used.\n>\n> When they translate things from German, sometimes wordings change :-).\n> --\n> output = reverse(\"moc.enworbbc@\" \"enworbbc\")\n> http://www.ntlug.org/~cbbrowne/linuxxian.html\n> Rules of the Evil Overlord #41. \"Once my power is secure, I will\n> destroy all those pesky time-travel devices.\"\n> <http://www.eviloverlord.com/>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n-- \nDr. 
Eckhardt + Partner GmbH\nhttp://www.epgmbh.de\n", "msg_date": "Mon, 3 Mar 2003 14:53:50 +0100", "msg_from": "Tommi Maekitalo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Yet another open-source benchmark" }, { "msg_contents": "[email protected] writes:\n\n> I don't agree with this: XML and XHTML are two different things.\n\nNo one claimed anything to the contrary.\n\n> We could certainly upgrade the HTML portion, but I am pretty sure that\n> the XML standard calls for this format:\n>\n> <columnname>data here</columnname>\n\nThe XML standard does not call for any table format. But a number of\ntable formats have been established within the XML framework. Some of\nthem are formatting-oriented (e.g., the HTML model, or CALS which is used\nin DocBook) and some of them are processing-oriented (e.g., SQL/XML).\nWhich do we need? And which do we need from psql in particular (keeping\nin mind that psql is primarily for interactive use and shell-scripting)?\nIn any case, it should most likely be a standard table model and not a\nhand-crafted one.\n\n(If, for whatever reason, we go the \"processing-oriented\" route, then I\nclaim that there should not be a different output with and without \\x\nmode.)\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Mon, 3 Mar 2003 18:55:12 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] XML ouput for psql" }, { "msg_contents": "On Mon, 3 Mar 2003, Tommi Maekitalo wrote:\n\n> On the results page they list kernels like linux-2.4.18-1tier or \n> linux-2.4.19-rc2 or redhat-stock-2.4.7-10cmp. This sounds really like \n> linux-kernel-versions.\n> \n> Am Montag, 3. März 2003 13:41 schrieb [email protected]:\n> > > OSDL has just come out with a set of open-source database benchmarks:\n> > > http://www.osdl.org/projects/performance/\n> > >\n> > > The bad news:\n> > > \"This tool kit works with SAP DB open source database versions 7.3.0.23\n> > > or 7.3.0.25.\"\n> > >\n> > > (In fact, they seem to think they are testing kernel performance, not\n> > > database performance, which strikes me as rather bizarre. But anyway.)\n> >\n> > That may be a terminology thing; the main SAP-DB process is called the\n> > \"kernel,\" and it's more than likely that the \"SAP-DB Kernel\" is the sense\n> > in which the term is being used.\n> >\n> > When they translate things from German, sometimes wordings change :-).\n> > --\n> > output = reverse(\"moc.enworbbc@\" \"enworbbc\")\n> > http://www.ntlug.org/~cbbrowne/linuxxian.html\n> > Rules of the Evil Overlord #41. \"Once my power is secure, I will\n> > destroy all those pesky time-travel devices.\"\n> > <http://www.eviloverlord.com/>\n\nI think they are testing how tuning the linux kernel impacts the database \nrunning on top, at least that's the feeling I got from the site.\n\n", "msg_date": "Mon, 3 Mar 2003 11:17:01 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Yet another open-source benchmark" }, { "msg_contents": "On Mon, 2003-03-03 at 07:41, [email protected] wrote:\n> > (In fact, they seem to think they are testing kernel performance, not\n> > database performance, which strikes me as rather bizarre. 
But anyway.)\n> \n> That may be a terminology thing; the main SAP-DB process is called the \n> \"kernel,\" and it's more than likely that the \"SAP-DB Kernel\" is the sense in \n> which the term is being used.\n\nActually, I believe the reason the benchmark was developed was to\nprovide a workload for optimizing high-end Linux kernel performance\n(with the inference being that SAP-DB is pretty close to Oracle, Oracle\nperformance is important for enterprise deployment of Linux, and\ntherefore optimizing the kernel's handling of SAP-DB running TPC\nbenchmarks will tend to improve the kernel's performance running\nOracle/DB2/etc.) So when they mean \"kernel\", I think they really mean\n\"kernel\".\n\nThat's not to say that the benchmark wouldn't be useful for doing other\nstuff, like pure database benchmarks (as long as its a valid\nimplementation of TPC-C (or TPC-H, etc.), it should be fine...)\n\nA research group at the university I attend (www.queensu.ca) expressed\nsome interested in a TPC-C implementation for PostgreSQL, so I was\nplanning to port the OSDL TPC-C implementation to PostgreSQL.\nUnfortunately, I got sidetracked for a couple reasons: (1) lack of time\n(2) increasing awareness of just how boring writing benchmark apps is\n:-) (3) distaste for ODBC. While I'd like to get some time to do the\nport in the future, that shouldn't stop anyone else from doing so in the\nmean time :-)\n\nCheers,\n\nNeil\n-- \nNeil Conway <[email protected]> || PGP Key ID: DB3C29FC\n\n\n\n", "msg_date": "03 Mar 2003 15:29:55 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Yet another open-source benchmark" }, { "msg_contents": "On Mon, 2003-03-03 at 12:29, Neil Conway wrote:\n> On Mon, 2003-03-03 at 07:41, [email protected] wrote:\n> > > (In fact, they seem to think they are testing kernel performance, not\n> > > database performance, which strikes me as rather bizarre. But anyway.)\n> > \n> > That may be a terminology thing; the main SAP-DB process is called the \n> > \"kernel,\" and it's more than likely that the \"SAP-DB Kernel\" is the sense in \n> > which the term is being used.\n> \n> Actually, I believe the reason the benchmark was developed was to\n> provide a workload for optimizing high-end Linux kernel performance\n> (with the inference being that SAP-DB is pretty close to Oracle, Oracle\n> performance is important for enterprise deployment of Linux, and\n> therefore optimizing the kernel's handling of SAP-DB running TPC\n> benchmarks will tend to improve the kernel's performance running\n> Oracle/DB2/etc.) So when they mean \"kernel\", I think they really mean\n> \"kernel\".\n\nYeah, Neil more-or-less hit it on the nose. The SAP DB folks do refer\nto their processes as kernel processes, but our focus is on the Linux\nkernel and helping Linux gain more ground for the enterprise.\n \n> That's not to say that the benchmark wouldn't be useful for doing other\n> stuff, like pure database benchmarks (as long as its a valid\n> implementation of TPC-C (or TPC-H, etc.), it should be fine...)\n> \n> A research group at the university I attend (www.queensu.ca) expressed\n> some interested in a TPC-C implementation for PostgreSQL, so I was\n> planning to port the OSDL TPC-C implementation to PostgreSQL.\n> Unfortunately, I got sidetracked for a couple reasons: (1) lack of time\n> (2) increasing awareness of just how boring writing benchmark apps is\n> :-) (3) distaste for ODBC. 
While I'd like to get some time to do the\n> port in the future, that shouldn't stop anyone else from doing so in the\n> mean time :-)\n\nAnd we're prepared to aid any effort. :)\n \n> Cheers,\n> \n> Neil\n> -- \n> Neil Conway <[email protected]> || PGP Key ID: DB3C29FC\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \nMark Wong - - [email protected]\nOpen Source Development Lab Inc - A non-profit corporation\n15275 SW Koll Parkway - Suite H - Beaverton OR, 97006\n(503)-626-2455 x 32 (office)\n(503)-626-2436 (fax)\nhttp://www.osdl.org/archive/markw/\n\n", "msg_date": "03 Mar 2003 13:20:26 -0800", "msg_from": "Mark Wong <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Yet another open-source benchmark" }, { "msg_contents": "> [email protected] writes:\n> \n> > I don't agree with this: XML and XHTML are two different things.\n> \n> No one claimed anything to the contrary.\n> \n> > We could certainly upgrade the HTML portion, but I am pretty sure that\n> > the XML standard calls for this format:\n> >\n> > <columnname>data here</columnname>\n> \n> The XML standard does not call for any table format. But a number of\n> table formats have been established within the XML framework. Some of\n> them are formatting-oriented (e.g., the HTML model, or CALS which is used\n> in DocBook) and some of them are processing-oriented (e.g., SQL/XML).\n> Which do we need? And which do we need from psql in particular (keeping\n> in mind that psql is primarily for interactive use and shell-scripting)?\n> In any case, it should most likely be a standard table model and not a\n> hand-crafted one.\n\nI would expect XML output to be based on whatever the tree of data\ncontained.\n\nIf the tree is to be rewritten, then this would mean having some sort of\ntransformation engine in PostgreSQL that you would have to program.\n\nIf I want a CALS table, then I'll push CALS table data into the\ndatabase.\n\nIf I'm storing a GnuCash chart of accounts in PostgreSQL, I am\nludicrously uninterested in seeing it rewritten for some sort of\nphysical layout. Spit out the tags that are stored in the database, not\nsome rewriting of it.\n--\n(reverse (concatenate 'string \"moc.enworbbc@\" \"enworbbc\"))\nhttp://cbbrowne.com/info/linuxdistributions.html\n(1) Sigs are preceded by the \"sigdashes\" line, ie \"\\n-- \\n\" (dash-dash-space).\n(2) Sigs contain at least the name and address of the sender in the first line.\n(3) Sigs are at most four lines and at most eighty characters per line.\n", "msg_date": "Mon, 03 Mar 2003 18:57:26 -0500", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [PATCHES] XML ouput for psql " }, { "msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\n> The XML standard does not call for any table format. But a number of\n> table formats have been established within the XML framework. Some of\n> them are formatting-oriented (e.g., the HTML model, or CALS which is used\n> in DocBook) and some of them are processing-oriented (e.g., SQL/XML).\n> Which do we need? 
And which do we need from psql in particular (keeping\n> in mind that psql is primarily for interactive use and shell-scripting)?\n> In any case, it should most likely be a standard table model and not a\n> hand-crafted one.\n \nI think all psql needs is a simple output, similar to the ones used by \nOracle, Sybase, and MySQL; the calling application should then process \nit in some way as needed (obviously this is not for interactive use).\nWhere can one find a \"standard table model?\"\n\nAll of the DBs I mentioned (and the perl module DBIx:XML_RDB) all share \na similar theme, with subtle differences (i.e. some use <row>, some \n<row num=\"x\">, some have <rowset>). I'd be happy to write whatever \nformat we can find or develop. My personal vote is the DBIx::XML_RDB \nformat, perhaps with the row number that Oracle uses, producing this:\n\n<?xml version=\"1.0\"?>\n<RESULTSET statement=\"select * from xmltest\">\n<ROW num=\"1\">\n <scoops>3</scoops>\n <flavor>chocolate</flavor>\n</ROW>\n<ROW num=\"2\">\n <scoops>2</scoops>\n <flavor>vanilla</flavor>\n</ROW>\n</RESULTSET>\n\n\n> (If, for whatever reason, we go the \"processing-oriented\" route, then I\n> claim that there should not be a different output with and without \\x\n> mode.)\n\nI agree with this.\n\n- --\nGreg Sabino Mullane [email protected]\nPGP Key: 0x14964AC8 200303041444\n-----BEGIN PGP SIGNATURE-----\nComment: http://www.turnstep.com/pgp.html\n\niD8DBQE+ZQJNvJuQZxSWSsgRArGEAKD4xs+4Ns3syG175T3k80B6MvNJvgCbBkvF\nhCkf5SMjLzMJ84uMl1w4tMY=\n=a2Uq\n-----END PGP SIGNATURE-----\n\n\n", "msg_date": "Tue, 4 Mar 2003 19:50:12 -0000", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: XML ouput for psql" }, { "msg_contents": "* [email protected] <[email protected]> [2003-03-04 14:21]:\n\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n> \n> \n> > The XML standard does not call for any table format. But a number of\n> > table formats have been established within the XML framework. Some of\n> > them are formatting-oriented (e.g., the HTML model, or CALS which is used\n> > in DocBook) and some of them are processing-oriented (e.g., SQL/XML).\n> > Which do we need? And which do we need from psql in particular (keeping\n> > in mind that psql is primarily for interactive use and shell-scripting)?\n> > In any case, it should most likely be a standard table model and not a\n> > hand-crafted one.\n> \n> I think all psql needs is a simple output, similar to the ones used by \n> Oracle, Sybase, and MySQL; the calling application should then process \n> it in some way as needed (obviously this is not for interactive use).\n> Where can one find a \"standard table model?\"\n> \n> All of the DBs I mentioned (and the perl module DBIx:XML_RDB) all share \n> a similar theme, with subtle differences (i.e. some use <row>, some \n> <row num=\"x\">, some have <rowset>). I'd be happy to write whatever \n> format we can find or develop. 
My personal vote is the DBIx::XML_RDB \n> format, perhaps with the row number that Oracle uses, producing this:\n> \n> <?xml version=\"1.0\"?>\n> <RESULTSET statement=\"select * from xmltest\">\n> <ROW num=\"1\">\n> <scoops>3</scoops>\n> <flavor>chocolate</flavor>\n> </ROW>\n> <ROW num=\"2\">\n> <scoops>2</scoops>\n> <flavor>vanilla</flavor>\n> </ROW>\n> </RESULTSET>\n> \n> \n> > (If, for whatever reason, we go the \"processing-oriented\" route, then I\n> > claim that there should not be a different output with and without \\x\n> > mode.)\n> \n> I agree with this.\n\nI'm interested in creating XML documents that have heirarcy.\nI can produce the above with Perl.\n\nAcually, the difficult part has been getting the information back\ninto the database. Getting it out is a very simple query. I imagine\nthat every language/environment has an SQL->XML library somewhere,\nbut I wasn't able to find something that would go from XML to SQL.\n\nI wrote a utility that takes an xml document, and xml configuration\nfile, and writes the document to a PostgerSQL data base using the\nconfiguration file to figure out what goes where. The configuration\nfile makes some use of XPath to pluck the correct values out of the\nxml doucment.\n\nI suppose the same code could generate a document, but it is so easy\nto do using Perl and cgi, I've not bothered.\n\nIt has some constraints, but it is a very useful utility. I've been\nable to abosorb XML documents into my PostgreSQL db just by tweeking\nthe configuration file.\n\nCurrently, I am porting it to C++ from Perl.\n\n-- \nAlan Gutierrez - [email protected]\nhttp://khtml-win32.sourceforge.net/ - KHTML on Windows\n", "msg_date": "Tue, 4 Mar 2003 14:44:39 -0600", "msg_from": "Alan Gutierrez <[email protected]>", "msg_from_op": false, "msg_subject": "Re: XML ouput for psql" }, { "msg_contents": "* [email protected] <[email protected]> [2003-03-04 14:21]:\n> \n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n\n> > The XML standard does not call for any table format. But a\n> > number of table formats have been established within the XML\n> > framework. Some of them are formatting-oriented (e.g., the HTML\n> > model, or CALS which is used in DocBook) and some of them are\n> > processing-oriented (e.g., SQL/XML). Which do we need? And\n> > which do we need from psql in particular (keeping in mind that\n> > psql is primarily for interactive use and shell-scripting)? In\n> > any case, it should most likely be a standard table model and\n> > not a hand-crafted one.\n> \n> I think all psql needs is a simple output, similar to the ones used by \n> Oracle, Sybase, and MySQL; the calling application should then process \n> it in some way as needed (obviously this is not for interactive use).\n> Where can one find a \"standard table model?\"\n> \n> All of the DBs I mentioned (and the perl module DBIx:XML_RDB) all share \n> a similar theme, with subtle differences (i.e. some use <row>, some \n> <row num=\"x\">, some have <rowset>). I'd be happy to write whatever \n> format we can find or develop. 
My personal vote is the DBIx::XML_RDB \n> format, perhaps with the row number that Oracle uses, producing this:\n> \n> <?xml version=\"1.0\"?>\n> <RESULTSET statement=\"select * from xmltest\">\n> <ROW num=\"1\">\n> <scoops>3</scoops>\n> <flavor>chocolate</flavor>\n> </ROW>\n> <ROW num=\"2\">\n> <scoops>2</scoops>\n> <flavor>vanilla</flavor>\n> </ROW>\n> </RESULTSET>\n> \n> \n> > (If, for whatever reason, we go the \"processing-oriented\" route, then I\n> > claim that there should not be a different output with and without \\x\n> > mode.)\n> \n> I agree with this.\n\nI'm interested in creating XML documents that have heirarcy.\nI can produce the above with Perl.\n\nI wrote a utility that takes an xml document, and xml configuration\nfile, and writes the document to a PostgerSQL data base using the\nconfiguration file to figure out what goes where. The configuration\nfile makes some use of XPath to pluck the correct values out of the\nxml doucment.\n\nI suppose the same code could generate a document, but it is so easy\nto do using Perl and cgi, I've not bothered.\n\nThis util has been very helpful to me in developing a document\nmangement application. Rather than writing insert/update logic every\ntime the db or xml schema changes, I just tweak the config file and\nit will generated the inserts, updates, and deletes by comparing the\nXML document with the tables to which the XML elements are mapped.\n\nI've been able to handle tree structures tolerably well.\n\nI am currently rewriting the code in C++ from Perl.\n\n-- \nAlan Gutierrez - [email protected]\nhttp://khtml-win32.sourceforge.net/ - KHTML on Windows\n", "msg_date": "Tue, 4 Mar 2003 17:27:03 -0600", "msg_from": "Alan Gutierrez <[email protected]>", "msg_from_op": false, "msg_subject": "Re: XML ouput for psql" }, { "msg_contents": "I've done a lot with XML lately, so I'll throw in my $0.02 worth.\n\nOne thing I have noticed about the schemes that are being advanced is that\nthey seem to be inherently unspecifiable, formally, because column names are\nbeing used as tags.\n\nAn alternative might look something like this:\n\n<?xml version=\"1.0\"?>\n<RESULTSET statement=\"select * from xmltest\">\n<COLUMNS>\n <COLUMN name=\"scoops\" type=\"int\" />\n <COLUMN name=\"flavor\" type=\"varchar(40)\" />\n</COLUMNS>\n<ROW>\n <FIELD name=\"scoops\" isNull=\"false\">3</FIELD>\n <FIELD name=\"flavor\" isNull=\"false\">chocolate</FIELD>\n</ROW>\n<ROW>\n <FIELD name=\"scoops\" isNull=\"false\">2</FIELD>\n <FIELD name=\"flavor\" isNull=\"false\">vanilla</FIELD>\n</ROW>\n</RESULTSET>\n\n\nNumbering the rows should be redundant (XPath will give it to you using\n\"position()\", for example). 
OTOH, reporting out a null value as opposed to\nan empty one is probably a good idea.\n\nThe formal DTD would be something like this (courtesy of the wonderful tools\nat http://www.hitsw.com/xml_utilites/:\n\n<!ELEMENT RESULTSET ( COLUMNS, ROW* ) >\n<!ATTLIST RESULTSET statement CDATA #REQUIRED >\n<!ELEMENT COLUMNS ( COLUMN+ ) >\n\n<!ELEMENT COLUMN EMPTY >\n<!ATTLIST COLUMN name NMTOKEN #REQUIRED >\n<!ATTLIST COLUMN type CDATA #REQUIRED >\n\n<!ELEMENT ROW ( FIELD+ ) ><!ELEMENT FIELD ( #PCDATA ) >\n<!ATTLIST FIELD isNull ( false| true ) \"false\" >\n<!ATTLIST FIELD name NMTOKEN #REQUIRED >\n or the equivalent in a schema:<?xml version=\"1.0\" encoding=\"UTF-8\" ?>\n\n<xs:schema xmlns:xs=\"http://www.w3.org/2001/XMLSchema\">\n <xs:element name=\"COLUMN\">\n <xs:complexType>\n <xs:attribute name=\"type\" type=\"xs:string\" use=\"required\" />\n <xs:attribute name=\"name\" type=\"xs:NMTOKEN\" use=\"required\" />\n </xs:complexType>\n </xs:element>\n\n <xs:element name=\"COLUMNS\">\n <xs:complexType>\n <xs:sequence>\n <xs:element ref=\"COLUMN\" minOccurs=\"1\" maxOccurs=\"unbounded\" />\n </xs:sequence>\n </xs:complexType>\n </xs:element>\n\n <xs:element name=\"FIELD\">\n <xs:complexType mixed=\"true\">\n <xs:attribute name=\"isNull\" use=\"optional\" default=\"false\">\n <xs:simpleType>\n <xs:restriction base=\"xs:NMTOKEN\">\n <xs:enumeration value=\"false\" />\n <xs:enumeration value=\"true\" />\n </xs:restriction>\n </xs:simpleType>\n </xs:attribute>\n <xs:attribute name=\"name\" type=\"xs:NMTOKEN\" use=\"required\" />\n </xs:complexType>\n </xs:element>\n\n <xs:element name=\"RESULTSET\">\n <xs:complexType>\n <xs:sequence>\n <xs:element ref=\"COLUMNS\" minOccurs=\"1\" maxOccurs=\"1\" />\n <xs:element ref=\"ROW\" minOccurs=\"0\" maxOccurs=\"unbounded\" />\n </xs:sequence>\n <xs:attribute name=\"statement\" type=\"xs:string\" use=\"required\" />\n </xs:complexType>\n </xs:element>\n\n <xs:element name=\"ROW\">\n <xs:complexType>\n <xs:sequence>\n <xs:element ref=\"FIELD\" minOccurs=\"1\" maxOccurs=\"unbounded\" />\n </xs:sequence>\n </xs:complexType>\n </xs:element>\n\n</xs:schema>\n\n", "msg_date": "Wed, 5 Mar 2003 10:50:03 -0500", "msg_from": "\"Andrew Dunstan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: XML ouput for psql" }, { "msg_contents": "[email protected] writes:\n\n> I think all psql needs is a simple output, similar to the ones used by\n> Oracle, Sybase, and MySQL; the calling application should then process\n> it in some way as needed (obviously this is not for interactive use).\n> Where can one find a \"standard table model?\"\n\nI think for processing-oriented output, the system described in the\nSQL/XML standard draft is the way to go. 
Considering the people who wrote\nit, it's probably pulled from, or bound to appear in, a major commercial\ndatabase.\n\nI also think that psql is not the place to implement something like this.\nIt's most likely best put in the backend, as a function like\n\n xmlfoo('select * from t1;')\n\nThen any interface and application that likes it, not just psql-based\nones, can use it.\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Wed, 5 Mar 2003 23:37:35 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: XML ouput for psql" }, { "msg_contents": "Andrew Dunstan writes:\n\n> One thing I have noticed about the schemes that are being advanced is that\n> they seem to be inherently unspecifiable, formally, because column names are\n> being used as tags.\n\nThe SQL/XML draft addresses this by specifying that a mapping from SQL\nthings to XML things spits out both the specification (XML Schema, IIRC)\nand the data in one operation.\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Wed, 5 Mar 2003 23:38:27 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: XML ouput for psql" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> I also think that psql is not the place to implement something like this.\n\nAgreed.\n\n> It's most likely best put in the backend, as a function like\n> xmlfoo('select * from t1;')\n\nThat seems a little bizarre. Wouldn't we want to have a switch that\njust flips the SELECT output format from one style to the other?\n\nThis is also a good time to stop and ask whether the frontend/backend\nprotocol needs to change to support this. Not having read the spec,\nI have no idea what the low-level transport needs are for XML output,\nbut I suspect our present protocol is not it ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 05 Mar 2003 19:16:54 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: XML ouput for psql " }, { "msg_contents": "Tom Lane wrote:\n> This is also a good time to stop and ask whether the frontend/backend\n> protocol needs to change to support this. Not having read the spec,\n> I have no idea what the low-level transport needs are for XML output,\n> but I suspect our present protocol is not it ...\n\nIt might be interesting to modify the protocol (and the backend at the \npoint of projection to the front end) so that a user defined formating \nfunction could be applied and either accepted or rejected by the front \nend. Perhaps one flavor of XML output is a start, but I could imagine \nwanting a custom or even different \"standard\" output format.\n\nJoe\n\n", "msg_date": "Wed, 05 Mar 2003 17:32:04 -0800", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: XML ouput for psql" }, { "msg_contents": "> Peter Eisentraut <[email protected]> writes:\n> > I also think that psql is not the place to implement something like this.\n> \n> Agreed.\n> \n> > It's most likely best put in the backend, as a function like\n> > xmlfoo('select * from t1;')\n\n> That seems a little bizarre. Wouldn't we want to have a switch that\n> just flips the SELECT output format from one style to the other?\n\nAh, but this approach has the merit that it doesn't require pushing out\na completely new set of tools.\n\n> This is also a good time to stop and ask whether the frontend/backend\n> protocol needs to change to support this. 
Not having read the spec, I\n> have no idea what the low-level transport needs are for XML output,\n> but I suspect our present protocol is not it ...\n\nThat could be; there's enough variation in what one might want to do\nwith XML that it is not trivial to suggest an 'ideal' answer.\n\nWe have already seen the proposal of:\n<record a=\"b\" c=\"d\" e=\"f\">\n<record a=\"c\" c=\"e\" e=\"g\">\n<record a=\"d\" c=\"f\" e=\"h\">\n<record a=\"e\" c=\"g\" e=\"i\">\n\nI would rather prefer something like:\n<tablea>\n <record>\n <a>b</a> <c>d</c> <e>f</e>\n </record> \n <record>\n <a>c</a> <c>d</c> <e>f</e>\n </record> \n <record>\n <a>d</a> <c>d</c> <e>f</e>\n </record> \n<tablea>\n\n(Note that both approaches are quite rational possibilities.)\n\nI'd think that the \"protocol\" would involve passing back a row-as-string\nfor each row in the result set.\n--\noutput = (\"cbbrowne\" \"@cbbrowne.com\")\nhttp://www.ntlug.org/~cbbrowne/xml.html\n\"There are two major products that come out of Berkeley: LSD and Unix.\nWe don't believe this to be a coincidence.\" - Jeremy S. Anderson\n", "msg_date": "Wed, 05 Mar 2003 21:10:33 -0500", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: XML ouput for psql " }, { "msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\n> I think for processing-oriented output, the system described in the\n> SQL/XML standard draft is the way to go. Considering the people who wrote\n> it, it's probably pulled from, or bound to appear in, a major commercial\n> database.\n\nDo you have a link to the exact section? I've found conflicting versions \nof what constitutes the \"standard\" for xml output of SQL data.\n\n> I also think that psql is not the place to implement something like this.\n> It's most likely best put in the backend, as a function like\n> \n> xmlfoo('select * from t1;')\n> \n> Then any interface and application that likes it, not just psql-based\n> ones, can use it.\n\nI think that is a good long-term solution, but I still think we need \nto address the TODO item in the short run, and allow for a simple \nreformatting of the query results from psql. If not, we should remove \nthat TODO item form psql and add a different one to the backend section.\n\n- --\nGreg Sabino Mullane [email protected]\nPGP Key: 0x14964AC8 200303061020\n\n-----BEGIN PGP SIGNATURE-----\nComment: http://www.turnstep.com/pgp.html\n\niD8DBQE+Z2jHvJuQZxSWSsgRAj7IAJ4hLEos9OlE67O02gVrrqxwT9n3AQCeJxto\nN2LFyvXPfGY2whPUs5k+PQA=\n=PYfs\n-----END PGP SIGNATURE-----\n\n\n", "msg_date": "Thu, 6 Mar 2003 15:31:41 -0000", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: XML ouput for psql" }, { "msg_contents": "Peter Eisentraut kirjutas N, 06.03.2003 kell 00:37:\n> [email protected] writes:\n> \n> > I think all psql needs is a simple output, similar to the ones used by\n> > Oracle, Sybase, and MySQL; the calling application should then process\n> > it in some way as needed (obviously this is not for interactive use).\n> > Where can one find a \"standard table model?\"\n> \n> I think for processing-oriented output, the system described in the\n> SQL/XML standard draft is the way to go. 
Considering the people who wrote\n> it, it's probably pulled from, or bound to appear in, a major commercial\n> database.\n> \n> I also think that psql is not the place to implement something like this.\n> It's most likely best put in the backend, as a function like\n> \n> xmlfoo('select * from t1;')\n> \n> Then any interface and application that likes it, not just psql-based\n> ones, can use it.\n\nI have written an aggregate function in pl/python for my own needs that\nreturns underlying query fomatted as XML, but it has some problems:\n\n1) both the row-to-xml-fragment and\ncollect-the-fragments-to-wellformed-xml-doc have to be defined for each\nand every different query (the actual function text is the same).\n\n2) it is unneccesaryly hard to define a function that takes a record as\nargument - the record type is lost: for even simple things like this\n\nselect * from (select * from mytable) mtab; \n\nthe result of inner query is _not_ of rowtype mytable, i.e.\n\nyou can do\n\nselect xmlfrag(mytable) from mytable;\n\nbut not\n\nselect xmlfrag(mytable) from (select * from mytable) mytable;\n\n\n----------------\nHannu\n\n", "msg_date": "06 Mar 2003 23:06:36 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: XML ouput for psql" }, { "msg_contents": "Tom Lane writes:\n\n> This is also a good time to stop and ask whether the frontend/backend\n> protocol needs to change to support this. Not having read the spec,\n> I have no idea what the low-level transport needs are for XML output,\n> but I suspect our present protocol is not it ...\n\nThe spec defines \"mappings\" between tables, schemas, and catalogs on the\none side and each time a pair of XML documents on the other side, one of\nwhich is an XML schema document (sort of a document type declaration) and\nthe other is an XML data document that follows the constraints of the\nschema document and contains the actual data. A table could of course\nmore or less be interpreted to mean a query result. That means, this\nfunctionality provides both query result retrieval via XML and a pg_dump\ntype mechanism with XML output.\n\nSo I imagine, if this is done fully with changes in the protocol layer,\nthen certain commands like \"get table schema in XML\" would have to exist\nin the protocol, which doesn't seem right. Also, the XML output isn't a\nsibling of the current text/binary tuples, since an XML result is always\na whole document, not tuple data.\n\nWhat we could perhaps consider is a family of functions like I\nillustrated, but then provide a fast-path-driven layer on the client side,\nlike for large objects. Initially, the development of these mapping\nfunctions could take place totally in user-space.\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Fri, 7 Mar 2003 00:08:17 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: XML ouput for psql " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> So I imagine, if this is done fully with changes in the protocol layer,\n> then certain commands like \"get table schema in XML\" would have to exist\n> in the protocol, which doesn't seem right. 
Also, the XML output isn't a\n> sibling of the current text/binary tuples, since an XML result is always\n> a whole document, not tuple data.\n\nI would envision a distinction comparable to the existing one between T\nand D messages (RowDescription and AsciiRow, using the documentation's\nnames): you send the table schema first, then the data. Also note that\nthere is no \"command\" to get the T message; it comes for free whenever\na SELECT result is sent to the frontend.\n\n> What we could perhaps consider is a family of functions like I\n> illustrated, but then provide a fast-path-driven layer on the client side,\n> like for large objects. Initially, the development of these mapping\n> functions could take place totally in user-space.\n\nI don't object to that as a quick-and-dirty context for prototyping work,\nbut I'd sure hate to see it as the production version. The fastpath\nprotocol is a mess, and until/unless we get it cleaned up, we ought not\nincrease dependency on it.\n\nA larger point is that this is still a protocol revision; pretending it\nain't is just willful obscurantism. You can tell it's a protocol revision\nbecause you will need to rewrite client-side libraries to take advantage\nof it. If we try to look the other way and pretend it isn't one, then\nwe'll just be incurring pain --- the most obvious pain being that it\nwill be hard for those client libraries to tell whether the protocol\nextension is supported or not.\n\nThe way I'd prefer to see this handled is by providing alternatives to\nthe printtup.c DestReceiver routines. The backend could be switched to\nany desired output representation just by invoking different sets of\nreceiver routines. What we seem to need first is a context for doing\nthat, in particular a way to understand how different output formats can\nbe fit into the FE/BE protocol.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Mar 2003 18:33:30 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: XML ouput for psql " }, { "msg_contents": "I tried general, but no response. Anyone here can shed some light on the\nissue? Do I need to code merge sort into postgresql?\n\n----- Forwarded message from Taral <[email protected]> -----\n\nFrom: Taral <[email protected]>\nTo: [email protected]\nDate: Wed, 12 Mar 2003 17:54:35 -0600\nSubject: [GENERAL] No merge sort?\nMessage-ID: <[email protected]>\n\nI have a table \"test\" that looks like this:\n\nCREATE TABLE test (\n id BIGINT,\n time INTEGER\n);\n\nThere is an index:\n\nCREATE INDEX idx ON test(id, time);\n\nThe table has been loaded with 2M rows, where time ranges sequentially\nfrom 0 to 1999999 and id is random values from 0 to 49999.\n\nThis query:\n\nSELECT * FROM idx WHERE id IN (...) AND time > 198000 AND time < 199800\nORDER BY time DESC LIMIT 20;\n\nhas an EXPLAIN ANALYZE of:\n\nLimit (cost=3635.28..3635.28 rows=20 width=12) (actual time=22.94...22.96 rows=14 loops=1)\n -> Sort (cost=3635.28..3635.28 rows=23 width=12) (actual time=22.93..22.93 rows=14 loops=1)\n -> Index Scan using idx, idx, ..., idx, idx on test (cost=0.00...3634.77 rows=23 width=12) (actual time=1.01..22.10 rows=14 loops=1)\nTotal runtime: 29.12 msec\n\nThis query:\n\nSELECT * FROM idx WHERE id IN (...) 
AND time < 199800 ORDER BY time DESC\nLIMIT 20;\n\nhas an EXPLAIN ANALYZE of:\n\nLimit (cost=14516.46..14516.46 rows=20 width=12) (actual time=1448..83..1448.86 rows=20 loops=1)\n -> Sort (cost=14516.46..14516.46 rows=2527 width=12) (actual time=1448.82..1448.83 rows=21 loops=1)\n -> Index Scan using idx, idx, ..., idx, idx on test (cost=0.00...14373.67 rows=2527 width=12) (actual time=0.14..1437.33 rows=2048 loops=1)\nTotal runtime: 1454.62 msec\n\nSince the index will output 'time' sorted data for each 'id', why isn't\na merge sort being used here? A merge sort would reduce the execution\ntime back to 30 ms.\n\n-- \nTaral <[email protected]>\nThis message is digitally signed. Please PGP encrypt mail to me.\n\"Most parents have better things to do with their time than take care of\ntheir children.\" -- Me", "msg_date": "Thu, 13 Mar 2003 15:10:49 -0600", "msg_from": "Taral <[email protected]>", "msg_from_op": false, "msg_subject": "No merge sort?" }, { "msg_contents": "Taral <[email protected]> writes:\n> Do I need to code merge sort into postgresql?\n\nSeems like a waste of effort to me. I find this example less than\ncompelling --- the case that could be sped up is quite narrow,\nand the potential performance gain not all that large. (A sort\nis a sort however you slice it, with O(N log N) runtime...)\n\nAlso, the direction we'd likely be going in in future is to merge\nmultiple indexscans using bitmap techniques, so that the output\nordering of the scans couldn't be counted on anyway.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Mar 2003 16:28:34 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No merge sort? " }, { "msg_contents": "On Thu, Mar 13, 2003 at 04:28:34PM -0500, Tom Lane wrote:\n> Seems like a waste of effort to me. I find this example less than\n> compelling --- the case that could be sped up is quite narrow,\n> and the potential performance gain not all that large. (A sort\n> is a sort however you slice it, with O(N log N) runtime...)\n\nActually, it's O(N) time. The index can produce \"time\" sorted data for\neach \"id\" in linear time, and the merge sort can merge them in linear\ntime. Also, the existing system insists on loading _all_ candidate rows\nwhereas this method can benefit from the limit.\n\nIf you don't want to code it, I will. I need it for the livejournal\nmysql->postgresql transition. (No, mysql doesn't do it right either.)\nBut a few pointers to the right places to look in the code would be\nhelpful.\n\n> Also, the direction we'd likely be going in in future is to merge\n> multiple indexscans using bitmap techniques, so that the output\n> ordering of the scans couldn't be counted on anyway.\n\nI don't understand this. What do these bitmap techniques do?\n\n-- \nTaral <[email protected]>\nThis message is digitally signed. Please PGP encrypt mail to me.\n\"Most parents have better things to do with their time than take care of\ntheir children.\" -- Me", "msg_date": "Thu, 13 Mar 2003 19:58:14 -0600", "msg_from": "Taral <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No merge sort?" }, { "msg_contents": "Taral <[email protected]> writes:\n> On Thu, Mar 13, 2003 at 04:28:34PM -0500, Tom Lane wrote:\n>> Seems like a waste of effort to me. I find this example less than\n>> compelling --- the case that could be sped up is quite narrow,\n>> and the potential performance gain not all that large. 
(A sort\n>> is a sort however you slice it, with O(N log N) runtime...)\n\n> Actually, it's O(N) time.\n\nOnly if you assume a fixed number of input streams.\n\n>> Also, the direction we'd likely be going in in future is to merge\n>> multiple indexscans using bitmap techniques, so that the output\n>> ordering of the scans couldn't be counted on anyway.\n\n> I don't understand this. What do these bitmap techniques do?\n\nThe idea is you look at the index to make a list of main-table tuple\npositions you are interested in, which you represent compactly as a\ncompressed bitmap. (There is some finagling needed because PG actually\nuses block/line number rather than a pure tuple number to identify\ntuples, but you can fake it with a reasonably small amount of overhead.)\nThen you can combine multiple index inputs by ANDing or ORing bitmaps\n(the OR case applies to your example). Finally, you traverse the heap,\naccessing the desired rows in heap-location order. This loses in terms\nof producing presorted output --- but it often saves enough in I/O costs\nto more than justify doing the sort in memory.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Mar 2003 22:30:27 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No merge sort? " }, { "msg_contents": "On Thu, Mar 13, 2003 at 10:30:27PM -0500, Tom Lane wrote:\n> The idea is you look at the index to make a list of main-table tuple\n> positions you are interested in, which you represent compactly as a\n> compressed bitmap. (There is some finagling needed because PG actually\n> uses block/line number rather than a pure tuple number to identify\n> tuples, but you can fake it with a reasonably small amount of overhead.)\n> Then you can combine multiple index inputs by ANDing or ORing bitmaps\n> (the OR case applies to your example). Finally, you traverse the heap,\n> accessing the desired rows in heap-location order. This loses in terms\n> of producing presorted output --- but it often saves enough in I/O costs\n> to more than justify doing the sort in memory.\n\nAnd it loses bigtime in the case of LIMIT. If the unlimited query\nreturns 4,000 records and I only want 20, you're retrieving 200x too\nmuch data from disk.\n\n-- \nTaral <[email protected]>\nThis message is digitally signed. Please PGP encrypt mail to me.\n\"Most parents have better things to do with their time than take care of\ntheir children.\" -- Me", "msg_date": "Thu, 13 Mar 2003 23:04:01 -0600", "msg_from": "Taral <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No merge sort?" }, { "msg_contents": "Same setup, different query:\n\ntest=> explain select max(time) from test where id = '1';\nNOTICE: QUERY PLAN:\n\nAggregate (cost=5084.67..5084.67 rows=1 width=0)\n -> Index Scan using idx on test (cost=0.00..5081.33 rows=1333 width=0)\n\nSince the index is (id, time), why isn't the index being used to\nretrieve the maximum value?\n\nOn Thu, Mar 13, 2003 at 03:10:49PM -0600, Taral wrote:\n> I have a table \"test\" that looks like this:\n> \n> CREATE TABLE test (\n> id BIGINT,\n> time INTEGER\n> );\n> \n> There is an index:\n> \n> CREATE INDEX idx ON test(id, time);\n> \n> The table has been loaded with 2M rows, where time ranges sequentially\n> from 0 to 1999999 and id is random values from 0 to 49999.\n\n-- \nTaral <[email protected]>\nThis message is digitally signed. 
Please PGP encrypt mail to me.\n\"Most parents have better things to do with their time than take care of\ntheir children.\" -- Me", "msg_date": "Fri, 14 Mar 2003 14:19:46 -0600", "msg_from": "Taral <[email protected]>", "msg_from_op": false, "msg_subject": "No index maximum? (was Re: No merge sort?)" }, { "msg_contents": "Taral <[email protected]> writes:\n> On Thu, Mar 13, 2003 at 10:30:27PM -0500, Tom Lane wrote:\n>> The idea is you look at the index to make a list of main-table tuple\n>> positions you are interested in, which you represent compactly as a\n>> compressed bitmap. [snip]\n\n> And it loses bigtime in the case of LIMIT. If the unlimited query\n> returns 4,000 records and I only want 20, you're retrieving 200x too\n> much data from disk.\n\nSure. That's why we have a planner that distinguishes between startup\ncost and total cost, and interpolates when a LIMIT is involved. But\nif this mergesort idea only helps for small-limit cases, that's another\nrestriction on its scope of usefulness...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 14 Mar 2003 22:43:30 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No merge sort? " }, { "msg_contents": "On Fri, Mar 14, 2003 at 10:43:30PM -0500, Tom Lane wrote:\n> Sure. That's why we have a planner that distinguishes between startup\n> cost and total cost, and interpolates when a LIMIT is involved. But\n> if this mergesort idea only helps for small-limit cases, that's another\n> restriction on its scope of usefulness...\n\nI don't think so, since even in the non-limit case it avoids having to\ndo a full sort if the number of initial streams is finite and small (as\nin the case I demonstrated), reducing time complexity to O(N).\n\n-- \nTaral <[email protected]>\nThis message is digitally signed. Please PGP encrypt mail to me.\n\"Most parents have better things to do with their time than take care of\ntheir children.\" -- Me", "msg_date": "Fri, 14 Mar 2003 22:07:03 -0600", "msg_from": "Taral <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No merge sort?" }, { "msg_contents": "On Fri, Mar 14, 2003 at 14:19:46 -0600,\n Taral <[email protected]> wrote:\n> Same setup, different query:\n> \n> test=> explain select max(time) from test where id = '1';\n> NOTICE: QUERY PLAN:\n> \n> Aggregate (cost=5084.67..5084.67 rows=1 width=0)\n> -> Index Scan using idx on test (cost=0.00..5081.33 rows=1333 width=0)\n> \n> Since the index is (id, time), why isn't the index being used to\n> retrieve the maximum value?\n\nIt looks like an index scan is being done.\n\nIf the index was on (time, id) instead of (id, time), then you could get\na further speed up by rewriting the query as:\nselect time from test where id = '1' order by time desc limit 1;\n", "msg_date": "Sat, 15 Mar 2003 09:23:28 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No index maximum? 
(was Re: No merge sort?)" }, { "msg_contents": "On Sat, Mar 15, 2003 at 09:23:28AM -0600, Bruno Wolff III wrote:\n> On Fri, Mar 14, 2003 at 14:19:46 -0600,\n> Taral <[email protected]> wrote:\n> > Same setup, different query:\n> > \n> > test=> explain select max(time) from test where id = '1';\n> > NOTICE: QUERY PLAN:\n> > \n> > Aggregate (cost=5084.67..5084.67 rows=1 width=0)\n> > -> Index Scan using idx on test (cost=0.00..5081.33 rows=1333 width=0)\n> > \n> > Since the index is (id, time), why isn't the index being used to\n> > retrieve the maximum value?\n> \n> It looks like an index scan is being done.\n> \n> If the index was on (time, id) instead of (id, time), then you could get\n> a further speed up by rewriting the query as:\n> select time from test where id = '1' order by time desc limit 1;\n\nYes, that's exactly it. It's an index _scan_. It should simply be able\nto read the maximum straight from the btree.\n\n-- \nTaral <[email protected]>\nThis message is digitally signed. Please PGP encrypt mail to me.\n\"Most parents have better things to do with their time than take care of\ntheir children.\" -- Me", "msg_date": "Mon, 17 Mar 2003 11:23:47 -0600", "msg_from": "Taral <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No index maximum? (was Re: No merge sort?)" }, { "msg_contents": "On Mon, Mar 17, 2003 at 11:23:47 -0600,\n Taral <[email protected]> wrote:\n> On Sat, Mar 15, 2003 at 09:23:28AM -0600, Bruno Wolff III wrote:\n> > On Fri, Mar 14, 2003 at 14:19:46 -0600,\n> > Taral <[email protected]> wrote:\n> > > Same setup, different query:\n> > > \n> > > test=> explain select max(time) from test where id = '1';\n> > > NOTICE: QUERY PLAN:\n> > > \n> > > Aggregate (cost=5084.67..5084.67 rows=1 width=0)\n> > > -> Index Scan using idx on test (cost=0.00..5081.33 rows=1333 width=0)\n> > > \n> > > Since the index is (id, time), why isn't the index being used to\n> > > retrieve the maximum value?\n> > \n> > It looks like an index scan is being done.\n> > \n> > If the index was on (time, id) instead of (id, time), then you could get\n> > a further speed up by rewriting the query as:\n> > select time from test where id = '1' order by time desc limit 1;\n> \n> Yes, that's exactly it. It's an index _scan_. It should simply be able\n> to read the maximum straight from the btree.\n\nmax and min don't use indexes. They are generic aggregate functions and\npostgres doesn't have the special knowledge to know that for those\naggregate functions and index can be used. You can get around this\nby rewriting the query as I previously indicated.\n\nFor more details on why things are this way, search the archives. This\ntopic comes up a lot.\n\nI was also mistaken about have to switch the index around for this case.\nIt should work the way you have it (if you rewrite the query).\n", "msg_date": "Mon, 17 Mar 2003 11:59:23 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No index maximum? (was Re: No merge sort?)" }, { "msg_contents": "\nGreg, do you have a newer patch to address the feedback you received, or\nis this one good?\n\n---------------------------------------------------------------------------\n\[email protected] wrote:\n[ There is text before PGP section. ]\n> \n[ PGP not available, raw data follows ]\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n> \n> \n> Peter Eisentraut wrote:\n> > 1. 
Look into the SQL/XML standard draft (ftp.sqlstandards.org) to find out\n> > whether the standard addresses this sort of thing.\n> \n> The URL you gave leads to a site curiously content-free and full of dead links. \n> I've looked around a bit, but found nothing definitive. One good resource I \n> did find was this:\n> \n> http://www.wiscorp.com/sql/SQLX_Bringing_SQL_and_XML_Together.pdf\n> \n> The article mentions a lot of links on the sqlstandards.org and iso.org sites, none \n> of which work or are restricted. If anyone knows of some good links, please \n> let me know. (especially ISO 9075). From what I've read of the SQLX stuff, the \n> format in my patch should be mostly standard:\n> \n> <row>\n> <name>Joe Sixpack</name>\n> <age>35</age>\n> <state>Alabama</state>\n> </row>\n> \n> One problem is that the recommended way to handle non-standard characters \n> (including spaces) is to escape them like this:\n> \n> foobar baz => <foobar_x0020_baz>\n> \n> This also includes escaping things like \"_x*\" and \"xml*\". We don't have \n> anything like that in the code yet (?), but we should probably think about \n> heading that way. I think escaping whitespace in quotes is good enough \n> for now for:\n> \n> foobar baz => <\"foobar baz\">\n> \n> The xsd and xsi standards are also interesting, but needlessly complicated \n> for psql output, IMO.\n> \n> > Incidentally, the HTML table model is such an established and standardized\n> > XML and SGML table model, so the easiest way to get the task \"add XML\n> > output to psql\" done is to update the HTML output to conform to XHTML.\n> > That way you get both the strict XML and you can look at the formatted\n> > result with any old (er, new) browser.\n> \n> I don't agree with this: XML and XHTML are two different things. We could \n> certainly upgrade the HTML portion, but I am pretty sure that the XML \n> standard calls for this format:\n> \n> <columnname>data here</columnname>\n> \n> ..which is not valid XHTML and won't be viewable by any browser. The other \n> suggested XML formats are even further from XHTML than the above. The HTML \n> format should be \"html table/layout\" specific and the XML should be \n> \"schema/data\" specific.\n> \n> - --\n> Greg Sabino Mullane [email protected]\n> PGP Key: 0x14964AC8 200302280938\n> \n> -----BEGIN PGP SIGNATURE-----\n> Comment: http://www.turnstep.com/pgp.html\n> \n> iD8DBQE+X3k5vJuQZxSWSsgRAuXFAKDGO1IsjB9Lwtkcws1xJy47PibcLQCg3dx5\n> fsy27qguZv841lPvCjzdUic=\n> =4f9B\n> -----END PGP SIGNATURE-----\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n> \n[ Decrypting message... End of raw data. ]\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 17 Mar 2003 16:36:36 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: XML ouput for psql" }, { "msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\n> Greg, do you have a newer patch to address the feedback you received, or\n> is this one good?\n\nI have a newer patch, but I am not 100% sure a consensus was reached. I recall \nthe thread veering into talk of XML on the backend, but don't recall if anyone \nstill had strong objections to a quick psql wrapper. 
If not, I will clean up \nthe existing patch and resubmit tomorrow.\n\n- --\nGreg Sabino Mullane [email protected]\nPGP Key: 0x14964AC8 200303171641\n\n-----BEGIN PGP SIGNATURE-----\nComment: http://www.turnstep.com/pgp.html\n\niD8DBQE+dkE5vJuQZxSWSsgRAkVSAJ9aLoLC23OoNcVEw4hQiaBrPcSqNQCfTxH3\ncrC4ssFKbBo60gHvJT3WsU0=\n=Qsif\n-----END PGP SIGNATURE-----\n\n\n", "msg_date": "Mon, 17 Mar 2003 21:42:57 -0000", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: XML ouput for psql" }, { "msg_contents": "\nI like the idea of doing XML in psql --- it seems like a natural place\nfor it.\n\n---------------------------------------------------------------------------\n\[email protected] wrote:\n[ There is text before PGP section. ]\n> \n[ PGP not available, raw data follows ]\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n> \n> \n> > Greg, do you have a newer patch to address the feedback you received, or\n> > is this one good?\n> \n> I have a newer patch, but I am not 100% sure a consensus was reached. I recall \n> the thread veering into talk of XML on the backend, but don't recall if anyone \n> still had strong objections to a quick psql wrapper. If not, I will clean up \n> the existing patch and resubmit tomorrow.\n> \n> - --\n> Greg Sabino Mullane [email protected]\n> PGP Key: 0x14964AC8 200303171641\n> \n> -----BEGIN PGP SIGNATURE-----\n> Comment: http://www.turnstep.com/pgp.html\n> \n> iD8DBQE+dkE5vJuQZxSWSsgRAkVSAJ9aLoLC23OoNcVEw4hQiaBrPcSqNQCfTxH3\n> crC4ssFKbBo60gHvJT3WsU0=\n> =Qsif\n> -----END PGP SIGNATURE-----\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n[ Decrypting message... End of raw data. ]\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 17 Mar 2003 17:03:33 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: XML ouput for psql" }, { "msg_contents": "On Mon, Mar 17, 2003 at 11:23:47AM -0600, Taral wrote:\n> Yes, that's exactly it. It's an index _scan_. It should simply be able\n> to read the maximum straight from the btree.\n\nStill doesn't work, even with rewritten query. It sort a\nLimit(Sort(Index Scan)), with 1333 rows being pulled from the index.\n\n-- \nTaral <[email protected]>\nThis message is digitally signed. Please PGP encrypt mail to me.\n\"Most parents have better things to do with their time than take care of\ntheir children.\" -- Me", "msg_date": "Mon, 17 Mar 2003 16:09:10 -0600", "msg_from": "Taral <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No index maximum? (was Re: No merge sort?)" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I like the idea of doing XML in psql --- it seems like a natural place\n> for it.\n\nNot really; what of applications other than shell scripts that would\nlike to get XML-formatted output?\n\nThere was some talk in the FE/BE protocol thread of adding hooks to\nsupport more than one output format from the backend. Much of the\ninfrastructure already exists (see DestReceiver in the backend); we\njust need an agreement on the protocol. 
On the whole I'd rather see\nit done that way than burying the logic in psql.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 17 Mar 2003 17:59:15 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: XML ouput for psql " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > I like the idea of doing XML in psql --- it seems like a natural place\n> > for it.\n> \n> Not really; what of applications other than shell scripts that would\n> like to get XML-formatted output?\n> \n> There was some talk in the FE/BE protocol thread of adding hooks to\n> support more than one output format from the backend. Much of the\n> infrastructure already exists (see DestReceiver in the backend); we\n> just need an agreement on the protocol. On the whole I'd rather see\n> it done that way than burying the logic in psql.\n\nWell, programs can run psql using popen. It seems overkill to get the\nprotocol involved, specially since it is output-only. I can't imagine\nwho would bother with the wire protocol messiness just to get xml.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 17 Mar 2003 18:08:12 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: XML ouput for psql" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Tom Lane wrote:\n>> Not really; what of applications other than shell scripts that would\n>> like to get XML-formatted output?\n\n> Well, programs can run psql using popen. It seems overkill to get the\n> protocol involved, specially since it is output-only. I can't imagine\n> who would bother with the wire protocol messiness just to get xml.\n\nHaving to popen a psql isn't overkill? This seems like a far messier\nsolution than the other. Furthermore, it's just plain not an available\nsolution in many scenarios (think of a Java program running JDBC; it may\nnot have privileges to do popen, and may not have access to a copy of\npsql anyway).\n\nIf we were not already opening up the protocol for changes, I'd be\nresistant to the idea too. But since we are, I think it should be fixed\nwhere it's cleanest to fix it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 17 Mar 2003 18:18:49 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: XML ouput for psql " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > Tom Lane wrote:\n> >> Not really; what of applications other than shell scripts that would\n> >> like to get XML-formatted output?\n> \n> > Well, programs can run psql using popen. It seems overkill to get the\n> > protocol involved, specially since it is output-only. I can't imagine\n> > who would bother with the wire protocol messiness just to get xml.\n> \n> Having to popen a psql isn't overkill? This seems like a far messier\n> solution than the other. Furthermore, it's just plain not an available\n> solution in many scenarios (think of a Java program running JDBC; it may\n> not have privileges to do popen, and may not have access to a copy of\n> psql anyway).\n> \n> If we were not already opening up the protocol for changes, I'd be\n> resistant to the idea too. 
But since we are, I think it should be fixed\n> where it's cleanest to fix it.\n\nWhat would be interesting would be to enable libpq to dump XML, and have\npsql use that. Why put XML capability in the backend? Of course, that\ndoesn't help jdbc. How do you propose the backend would do XML?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 17 Mar 2003 18:47:28 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: XML ouput for psql" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> What would be interesting would be to enable libpq to dump XML, and have\n> psql use that.\n\n... or in the backend so libpq could use it, and thence psql.\n\n> Why put XML capability in the backend?\n\nSo that non-libpq-based clients could use it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 18 Mar 2003 01:22:34 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: XML ouput for psql " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > What would be interesting would be to enable libpq to dump XML, and have\n> > psql use that.\n> \n> ... or in the backend so libpq could use it, and thence psql.\n> \n> > Why put XML capability in the backend?\n> \n> So that non-libpq-based clients could use it.\n\nOK, I have two ideas here. First, can we create a function that takes a\nquery result and returns one big XML string. I am not sure how to pump\na result into a function. The other downside is that we would have to\nconstruct the entire result string in memory.\n\nThe other idea I had was a GUC variable that returned all query results\nas one big XML string. That would prevent creating the entire string in\nbackend memory, and might enable cursor fetches through the XML string.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 18 Mar 2003 09:34:53 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: XML ouput for psql" }, { "msg_contents": "Since now is the time for contrib/ flamewars, this seemed a good time\nto suggest this.\n\nMy colleague, Sorin Iszlai, wrote us a little program for rotating\nour Postgres logs. It reads stdout and stderr, and sends them to\ndifferent files (and rotates them as necessary). It is currently\nhand-configureable (i.e. by altering some variables at the top of the\nscript), and is more or less designed for use in our own environment.\n\nTom Lane recently mentioned to me that a common complaint is that\npostgres doesn't have its own log rotator. There are, of course,\nplenty of good ones, and syslog itself works pretty well for most\npeople. But there are still complaints from time to time about the\nlack of a \"built in\" log rotator.\n\nWe'd be happy to release our rotator under the PostgreSQL BSD\nlicense, if it would be of use to people. I was thinking that\nperhaps contrib/ would be a good place for it, since the idea is to\nreduce complaints that there's no log rotator \"included\". 
\n\nIs anyone interested in having pglog-rotator?\n\nA\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Thu, 3 Apr 2003 10:21:52 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "more contrib: log rotator" }, { "msg_contents": "Andrew Sullivan <[email protected]> writes:\n> Is anyone interested in having pglog-rotator?\n\nFWIW, I saw an early version of pglog-rotator about a year and a half\nago (while consulting for LibertyRMS), and thought at the time that\nit was pretty cool. So I'm for including it ... maybe even as\nmainstream instead of contrib.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Thu, 03 Apr 2003 11:07:44 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more contrib: log rotator " }, { "msg_contents": "Would the plan be to add it to pg_ctl?\n\n\n> Andrew Sullivan <[email protected]> writes:\n> > Is anyone interested in having pglog-rotator?\n> \n> FWIW, I saw an early version of pglog-rotator about a year and a half\n> ago (while consulting for LibertyRMS), and thought at the time that\n> it was pretty cool. So I'm for including it ... maybe even as\n> mainstream instead of contrib.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\n", "msg_date": "Thu, 3 Apr 2003 13:05:51 -0500", "msg_from": "\"Jim Buttafuoco\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more contrib: log rotator " }, { "msg_contents": "\"Jim Buttafuoco\" <[email protected]> writes:\n> Would the plan be to add it to pg_ctl?\n\nYou would not actually have to: you could just pipe pg_ctl's output to\npglog-rotator. But I think it'd be cool if pg_ctl had an option to use\npglog-rotator, or maybe even adopt it as standard behavior.\n\nI think we would have to make the rotator script be mainstream rather\nthan contrib if we wanted pg_ctl to use it directly. That was why I was\nthinking maybe mainstream ...\n\nAndrew, could you toss up the script on pgsql-patches just so people can\ntake a look? Then we could think more about where to go with it.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Thu, 03 Apr 2003 13:41:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more contrib: log rotator " }, { "msg_contents": "Does this log rotator do something that apache's doesn't?\n\nDave\nOn Thu, 2003-04-03 at 13:41, Tom Lane wrote:\n> \"Jim Buttafuoco\" <[email protected]> writes:\n> > Would the plan be to add it to pg_ctl?\n> \n> You would not actually have to: you could just pipe pg_ctl's output to\n> pglog-rotator. But I think it'd be cool if pg_ctl had an option to use\n> pglog-rotator, or maybe even adopt it as standard behavior.\n> \n> I think we would have to make the rotator script be mainstream rather\n> than contrib if we wanted pg_ctl to use it directly. That was why I was\n> thinking maybe mainstream ...\n> \n> Andrew, could you toss up the script on pgsql-patches just so people can\n> take a look? 
Then we could think more about where to go with it.\n> \n> \t\t\tregards, tom lane\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n-- \nDave Cramer <[email protected]>\nCramer Consulting\n\n", "msg_date": "03 Apr 2003 14:12:03 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more contrib: log rotator" }, { "msg_contents": "On Thu, Apr 03, 2003 at 01:41:08PM -0500, Tom Lane wrote:\n> You would not actually have to: you could just pipe pg_ctl's output to\n> pglog-rotator. But I think it'd be cool if pg_ctl had an option to use\n> pglog-rotator, or maybe even adopt it as standard behavior.\n\nIt's currently built to call a program, and read its stdout and\nstderr, rather than acting as a pipe. I guess it shouldn't be too\nhard to modify, though. We actually call the postmaster directly\nwith it, so we use it as a replacement for pg_ctl at startup.\n\n> Andrew, could you toss up the script on pgsql-patches just so people can\n> take a look? Then we could think more about where to go with it.\n\nOk, I sent it.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Thu, 3 Apr 2003 16:12:14 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more contrib: log rotator" }, { "msg_contents": "On Thu, Apr 03, 2003 at 02:12:03PM -0500, Dave Cramer wrote:\n> Does this log rotator do something that apache's doesn't?\n\nProbably not. This was just easier for us.\n\nA little information might be handy here: we run postgres nder a\nhosted environment, and we do not have root on the relevant boxes. \nSo installing anything even a little complicated means building\neverything ourselves. As a result, we end up re-creating plenty of\nfunctionality just to make it easy to install.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Thu, 3 Apr 2003 16:14:19 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more contrib: log rotator" }, { "msg_contents": "On Thu, Apr 03, 2003 at 01:41:08PM -0500, Tom Lane wrote:\n> Andrew, could you toss up the script on pgsql-patches just so people can\n> take a look? Then we could think more about where to go with it.\n\nOk, the first try failed (of course) because I wasn't subscribed. 
\nShould be there now, though.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Thu, 3 Apr 2003 22:00:23 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more contrib: log rotator" }, { "msg_contents": "Andrew Sullivan writes:\n\n> Is anyone interested in having pglog-rotator?\n\nWhat would get me a whole lot more excited is if the server could write\ndirectly to a file and do its own rotating (or at least reopening of\nfiles).\n\nConsidering that your rotator is tailored to a rather specific setup, it\ndoesn't do anything better compared to established ones, it prevents the\nuse of pg_ctl, it's written in Perl, and it doesn't do anything for\nWindows users, I think it's not suitable for a general audience.\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Fri, 4 Apr 2003 17:13:13 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more contrib: log rotator" }, { "msg_contents": "On Fri, 4 Apr 2003, Peter Eisentraut wrote:\n\n> Andrew Sullivan writes:\n> \n> > Is anyone interested in having pglog-rotator?\n> \n> What would get me a whole lot more excited is if the server could write\n> directly to a file and do its own rotating (or at least reopening of\n> files).\n> \n> Considering that your rotator is tailored to a rather specific setup, it\n> doesn't do anything better compared to established ones, it prevents the\n> use of pg_ctl, it's written in Perl, and it doesn't do anything for\n> Windows users, I think it's not suitable for a general audience.\n\nThat said, a log rotation capability built right into pg_ctl or \nthereabouts would be a very nice feature. I.e. 'pg_ctl -r 86400 -l \n$PGDATA/logs/pgsql start'\n\nwhere -r is the rotation period in seconds. If it's an external program \nthat pg_ctl calls that's fine, and it could even just be a carbon copy of \napache's log rotater if their license is compatible (isn't it?)\n\n", "msg_date": "Fri, 4 Apr 2003 09:16:39 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more contrib: log rotator" }, { "msg_contents": "On Fri, Apr 04, 2003 at 09:16:39AM -0700, scott.marlowe wrote:\n> where -r is the rotation period in seconds. If it's an external program \n\nOurs rotates based on size rather than time. I can see some\nadvantages to the time-based approach, but if you have wide\nvariations in traffic, you run the risk of rotating over useful files\nwith more or less empty ones if you use it.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Fri, 4 Apr 2003 11:35:09 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more contrib: log rotator" }, { "msg_contents": "On Fri, Apr 04, 2003 at 05:13:13PM +0200, Peter Eisentraut wrote:\n> use of pg_ctl, it's written in Perl, and it doesn't do anything for\n> Windows users, I think it's not suitable for a general audience.\n\nIt doesn't prevent the use of pg_ctl, although it does indeed prevent\nthe use of pg_ctl for startup. \n\nI'm not sufficiently familiar with Windows to know how this does or\ndoes not help them. Could you elaborate? 
And what's wrong with\nPerl?\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Fri, 4 Apr 2003 11:37:55 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more contrib: log rotator" }, { "msg_contents": "On Friday April 4 2003 9:16, scott.marlowe wrote:\n>\n> That said, a log rotation capability built right into pg_ctl or\n> thereabouts would be a very nice feature. I.e. 'pg_ctl -r 86400 -l\n> $PGDATA/logs/pgsql start'\n>\n> where -r is the rotation period in seconds. If it's an external program\n> that pg_ctl calls that's fine, and it could even just be a carbon copy of\n> apache's log rotater if their license is compatible (isn't it?)\n\nBy way of feature ideas, one very convenient but not widely used feature of \nApache's log rotator is the ability to specify a strftime() format string \nfor the file extension. For example, if I want to have my logs rollover \nevery 24 hours and be named log.Mon, log.Tue, log.Wed, I say something like\n\n\tpg_ctl start | rotatelogs 86400 \"%a\"\n\nThis causes the logs to overwrite themselves every seven days, taking log \nmaintenance time to very near zero. We also customized our use of it to \nallow us to automatically move existing logs out of the way to \"log.1\", \n\"log.2\", or to simply overwrite existing logs.\n\nEd\n\n", "msg_date": "Fri, 4 Apr 2003 10:04:25 -0700", "msg_from": "\"Ed L.\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more contrib: log rotator" }, { "msg_contents": "On Friday April 4 2003 10:04, Ed L. wrote:\n> By way of feature ideas, one very convenient but not widely used feature\n> of Apache's log rotator is the ability to specify a strftime() format\n> string for the file extension. For example, if I want to have my logs\n> rollover every 24 hours and be named log.Mon, log.Tue, log.Wed, I say\n> something like\n>\n> \tpg_ctl start | rotatelogs 86400 \"%a\"\n\nMore accurately, something like this:\n\n\tpg_ctl start | rotatelogs 86400 \"log.%a\"\n\nEd\n\n", "msg_date": "Fri, 4 Apr 2003 10:06:42 -0700", "msg_from": "\"Ed L.\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more contrib: log rotator" }, { "msg_contents": "On Fri, 4 Apr 2003, Andrew Sullivan wrote:\n\n> On Fri, Apr 04, 2003 at 09:16:39AM -0700, scott.marlowe wrote:\n> > where -r is the rotation period in seconds. If it's an external program \n> \n> Ours rotates based on size rather than time. I can see some\n> advantages to the time-based approach, but if you have wide\n> variations in traffic, you run the risk of rotating over useful files\n> with more or less empty ones if you use it.\n\nI would want time based for sure, and I can see the use for size based \nsplitting as well. 
It wouldn't be hard to have it do both, would it?\n\nI just like the idea of it being one of the dozens or so options for \npg_ctl so it's painless to use for joe six pack.\n\npg_ctl -r 86400 -l $PGDATA/logs/pgsql\n\nwhere -r is the rotation period\n\nOR\n\npg_ctl -f 10M -l $PGDATA/logs/pgsql\n\nwhere -f is the max file size of a log\n\nI'd recommend that the naming convention should probably \nbe:\n\nfilenamespec.timestamp, like: $PGDATA/logs/pgsql.1049414400\n\nfor time rotated logs, and \n\nfilename.incnumber like: $PGDATA/logs/pgsql.0000000001\n\n", "msg_date": "Fri, 4 Apr 2003 10:10:22 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more contrib: log rotator" }, { "msg_contents": "Peter Eisentraut wrote:\n> \n> Andrew Sullivan writes:\n> \n> > Is anyone interested in having pglog-rotator?\n> \n> What would get me a whole lot more excited is if the server could write\n> directly to a file and do its own rotating (or at least reopening of\n> files).\n\n From a technical point of view I don't think that is desirable. The\nentire log traffic would have to be routed through the postmaster, as it\nis in LibertyRMS's log rotator now through the perl script. And we\nreally try to keep everything outside the postmaster that does not\nabsolutely have to be in there for stability reasons.\n\nWe can discuss if the log rotator should be a child process of the\npostmaster or the other way round, but that will not change the flow of\nbytes between the processes in any way.\n\nI would say it's better the way it is, because it does not pollute the\npostmaster's wait logic with another exception.\n\nMy ideal solution would be to integrate the log rotator's functionality\ninto a C version of pg_ctl that forks and detaches from the control\nterminal in the way daemons should.\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== [email protected] #\n\n", "msg_date": "Fri, 04 Apr 2003 12:13:37 -0500", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more contrib: log rotator" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> What would get me a whole lot more excited is if the server could write\n> directly to a file and do its own rotating (or at least reopening of\n> files).\n\nAFAICS, the only practical way to do this is to have a single process\ncollecting the stdout/stderr from the postmaster and all its children.\npglog-rotator is one implementation of that approach.\n\nI too would rather this functionality were integrated into the server,\nbut I haven't noticed anyone stepping up to the plate to do it.\n\n> Considering that your rotator is tailored to a rather specific setup, it\n> doesn't do anything better compared to established ones, it prevents the\n> use of pg_ctl, it's written in Perl, and it doesn't do anything for\n> Windows users, I think it's not suitable for a general audience.\n\nThese might be good arguments for not putting it into the mainstream,\nbut I don't think they have any force if we consider it for contrib.\n\nI feel we really ought to have *some* rotator included in the standard\ndistro, just so that the Admin Guide can point to a concrete solution\ninstead of having to arm-wave about what you can get off the net.\nIf someone can offer a better alternative than Andrew's, great, let's\nsee it.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Fri, 04 Apr 2003 12:19:49 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more contrib: log rotator " }, { "msg_contents": "On Friday April 4 2003 10:19, Tom Lane wrote:\n>\n> I feel we really ought to have *some* rotator included in the standard\n> distro, just so that the Admin Guide can point to a concrete solution\n> instead of having to arm-wave about what you can get off the net.\n> If someone can offer a better alternative than Andrew's, great, let's\n> see it.\n\nOut of curiosity, are there issues preventing inclusion of Apache's log \nrotation code? It seems you'd be hard-pressed to find a more \nbattle-hardened log rotator.\n\nObviously some people also wish to rotate based on log file size, so adding \nboth to contrib at least seems sensible.\n\nEd\n\n", "msg_date": "Fri, 4 Apr 2003 11:10:16 -0700", "msg_from": "\"Ed L.\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more contrib: log rotator" }, { "msg_contents": "On Fri, 4 Apr 2003, Ed L. wrote:\n\n> On Friday April 4 2003 10:19, Tom Lane wrote:\n> >\n> > I feel we really ought to have *some* rotator included in the standard\n> > distro, just so that the Admin Guide can point to a concrete solution\n> > instead of having to arm-wave about what you can get off the net.\n> > If someone can offer a better alternative than Andrew's, great, let's\n> > see it.\n> \n> Out of curiosity, are there issues preventing inclusion of Apache's log \n> rotation code? It seems you'd be hard-pressed to find a more \n> battle-hardened log rotator.\n> \n> Obviously some people also wish to rotate based on log file size, so adding \n> both to contrib at least seems sensible.\n\nOK, I'm playing with the pg_ctl script that comes with 7.3, and trying to \nmake it startup with apaches rotatelog script, but this line won't pipe \noutput. 
I'm a total noob at bash shell scripting, so please feel free to \nsnicker when you answer.\n\nrotatelogs is in my path and all, it just never sees it.\n\n\"$po_path\" ${1+\"$@\"} </dev/null | $PGPATH/rotatelogs $logfile $DURATION 2>&1 &\n\n", "msg_date": "Fri, 4 Apr 2003 11:41:04 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more contrib: log rotator" }, { "msg_contents": "\"scott.marlowe\" <[email protected]> writes:\n> rotatelogs is in my path and all, it just never sees it.\n\nYou mean the command fails? Or just that it doesn't capture output?\n\n> \"$po_path\" ${1+\"$@\"} </dev/null | $PGPATH/rotatelogs $logfile $DURATION 2>&1 &\n\nMost if not all of the postmaster's log output goes to stderr, so you'd need\n\n\"$po_path\" ${1+\"$@\"} </dev/null 2>&1 | $PGPATH/rotatelogs ...\n\nto have any hope of useful results.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Fri, 04 Apr 2003 13:58:46 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more contrib: log rotator " }, { "msg_contents": "On Friday April 4 2003 11:58, Tom Lane wrote:\n> \"scott.marlowe\" <[email protected]> writes:\n> > rotatelogs is in my path and all, it just never sees it.\n>\n> You mean the command fails? Or just that it doesn't capture output?\n>\n> > \"$po_path\" ${1+\"$@\"} </dev/null | $PGPATH/rotatelogs $logfile $DURATION\n> > 2>&1 &\n>\n> Most if not all of the postmaster's log output goes to stderr, so you'd\n> need\n>\n> \"$po_path\" ${1+\"$@\"} </dev/null 2>&1 | $PGPATH/rotatelogs ...\n>\n> to have any hope of useful results.\n\nHmmm. I would have agreed 2>&1 was needed, too, but this command seems to \nroutinely capture all output, including ERRORs:\n\n\tnohup pg_ctl start | nohup rotatelogs server_log.%a 86400\n\nEd\n\n", "msg_date": "Fri, 4 Apr 2003 12:10:13 -0700", "msg_from": "\"Ed L.\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more contrib: log rotator" }, { "msg_contents": "On Fri, 4 Apr 2003, Tom Lane wrote:\n\n> \"scott.marlowe\" <[email protected]> writes:\n> > rotatelogs is in my path and all, it just never sees it.\n> \n> You mean the command fails? Or just that it doesn't capture output?\n\nThe database starts, but rotatelogs doesn't get run. I.e. it's just like \neverything after the | symbol isn't there.\n\n", "msg_date": "Fri, 4 Apr 2003 12:49:37 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more contrib: log rotator " }, { "msg_contents": "\"Ed L.\" <[email protected]> writes:\n> Hmmm. I would have agreed 2>&1 was needed, too, but this command seems to \n> routinely capture all output, including ERRORs:\n> \tnohup pg_ctl start | nohup rotatelogs server_log.%a 86400\n\nThat's 'cause pg_ctl internally redirects the postmaster's stderr.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Fri, 04 Apr 2003 15:13:44 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more contrib: log rotator " }, { "msg_contents": "On Fri, 4 Apr 2003, Ed L. wrote:\n\n> On Friday April 4 2003 11:58, Tom Lane wrote:\n> > \"scott.marlowe\" <[email protected]> writes:\n> > > rotatelogs is in my path and all, it just never sees it.\n> >\n> > You mean the command fails? 
Or just that it doesn't capture output?\n> >\n> > > \"$po_path\" ${1+\"$@\"} </dev/null | $PGPATH/rotatelogs $logfile $DURATION\n> > > 2>&1 &\n> >\n> > Most if not all of the postmaster's log output goes to stderr, so you'd\n> > need\n> >\n> > \"$po_path\" ${1+\"$@\"} </dev/null 2>&1 | $PGPATH/rotatelogs ...\n> >\n> > to have any hope of useful results.\n> \n> Hmmm. I would have agreed 2>&1 was needed, too, but this command seems to \n> routinely capture all output, including ERRORs:\n> \n> \tnohup pg_ctl start | nohup rotatelogs server_log.%a 86400\n\nOK, So I tried putting the 2>&1 before the | and all. No matter what I \ntry, every from the | on is ignored. ps doesn't show it, and neither does \npg_ctl status. Both show a command line of \n/usr/local/pgsql/bin/postmaster as the only input to start the server.\n\nNow, the thing is, I've tried this with hardcoded values, like:\n\n\"$po_path\" ${1+\"$@\"} </dev/null 2>&1 /usr/local/pgsql/bin/rotatelogs \n/mnt/d1/data/logs/pglog 86400\n\nwhere I know the logs directory exists. It works if I do:\n\npg_ctl start | rotatelogs $PGDATA/pglog 86400 2>1&\n\nand puts the log files there.\n\nI've copied rotatelogs into the /usr/local/pgsql/bin directory as well.\n\nSo, I'm thinking this is my weakness in shell scripting that's getting me \nhere, and that the shell is eating the |, not passing it out with the \npostmaster to be used when it starts.\n\n", "msg_date": "Fri, 4 Apr 2003 14:17:41 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more contrib: log rotator" }, { "msg_contents": "On Friday April 4 2003 2:17, scott.marlowe wrote:\n>\n> OK, So I tried putting the 2>&1 before the | and all. No matter what I\n> try, every from the | on is ignored. ps doesn't show it, and neither\n> does pg_ctl status. Both show a command line of\n> /usr/local/pgsql/bin/postmaster as the only input to start the server.\n\nNot clear if you're looking at it this way or if this is your problem, but \nyou can't really tell there is log rotation going on just by grepping ps \nfor postmaster because ps does not typically show the postmaster and the \nrotatelogs together on the same line. I wouldn't expect pg_ctl status to \nknow anything at all about rotatelogs when you pipe it like this.\n\nEd\n\n", "msg_date": "Fri, 4 Apr 2003 14:45:07 -0700", "msg_from": "\"Ed L.\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more contrib: log rotator" }, { "msg_contents": "On Fri, 4 Apr 2003, Ed L. wrote:\n\n> On Friday April 4 2003 2:17, scott.marlowe wrote:\n> >\n> > OK, So I tried putting the 2>&1 before the | and all. No matter what I\n> > try, every from the | on is ignored. ps doesn't show it, and neither\n> > does pg_ctl status. Both show a command line of\n> > /usr/local/pgsql/bin/postmaster as the only input to start the server.\n> \n> Not clear if you're looking at it this way or if this is your problem, but \n> you can't really tell there is log rotation going on just by grepping ps \n> for postmaster because ps does not typically show the postmaster and the \n> rotatelogs together on the same line. I wouldn't expect pg_ctl status to \n> know anything at all about rotatelogs when you pipe it like this.\n\nHey, do you guys think that a setting of silent_mode = false might affect \nno log files getting created?\n\nI had it right as soon as I added Tom's recommended 2>&1 but spent another \n30 minutes figuring out why my log file wasn't getting created / filled. 
\n\nThanks for the help.\n\n", "msg_date": "Fri, 4 Apr 2003 15:34:22 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more contrib: log rotator" }, { "msg_contents": "\"scott.marlowe\" <[email protected]> writes:\n> Hey, do you guys think that a setting of silent_mode = false might affect \n> no log files getting created?\n\nNo, but setting it to true would be bad news.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Fri, 04 Apr 2003 17:42:16 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more contrib: log rotator " }, { "msg_contents": "On Fri, 4 Apr 2003, Tom Lane wrote:\n\n> \"scott.marlowe\" <[email protected]> writes:\n> > Hey, do you guys think that a setting of silent_mode = false might affect \n> > no log files getting created?\n> \n> No, but setting it to true would be bad news.\n\nThat's what I'd meant actually. I had to turn of silent mode... You know \nyou're having a bad day when your email explaining how stupid you are is \nfactually incorrect. :-)\n\n If anyone wants the diff, here it is:\n\n22c22\n< $CMDNAME start [-w] [-D DATADIR] [-s] [-l FILENAME] [-o \\\"OPTIONS\\\"]\n---\n> $CMDNAME start [-w] [-D DATADIR] [-s] [-r DURATION] [-l FILENAME] \n[-o \\\"OPTIONS\\\"]\n39a40,41\n> -r DURATION invoke log rotation with DURATION seconds\n> between rotation of files.\n155a158,161\n> -r)\n> DURATION=\"$2\"\n> shift\n> ;;\n336c342,346\n< \"$po_path\" ${1+\"$@\"} </dev/null >>$logfile 2>&1 &\n---\n> if [ -n \"$DURATION\" ]; then\n> \"$po_path\" ${1+\"$@\"} </dev/null 2>&1| $PGPATH/rotatelogs \n$logfile $DURATION 2>&1 &\n> else\n> \"$po_path\" ${1+\"$@\"} </dev/null >>$logfile 2>&1 &\n> fi\n\n", "msg_date": "Fri, 4 Apr 2003 15:50:57 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more contrib: log rotator " }, { "msg_contents": "Tom Lane writes:\n\n> AFAICS, the only practical way to do this is to have a single process\n> collecting the stdout/stderr from the postmaster and all its children.\n\nI think not. It's a little tricky handling it directly in the child\nprocesses, but it's been done before.\n\n> If someone can offer a better alternative than Andrew's, great, let's\n> see it.\n\nHow about the attached one, which I floated a while ago but which didn't\ngenerate much interest.\n\n-- \nPeter Eisentraut [email protected]", "msg_date": "Sat, 5 Apr 2003 02:03:03 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more contrib: log rotator " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Tom Lane writes:\n>> AFAICS, the only practical way to do this is to have a single process\n>> collecting the stdout/stderr from the postmaster and all its children.\n\n> I think not. It's a little tricky handling it directly in the child\n> processes, but it's been done before.\n\nA \"little\" tricky? Thanks, but no thanks ... for one thing, there'd be\nno easy way to know when all the children had switched over to writing\nthe new file. Also, at least for not-too-long messages, writing on a\nsingle pipe gives atomicity guarantees that AFAIK do not exist when\nwriting a file through multiple independently opened descriptors. 
In\nthe latter case I think we'd have lots of trouble with interleaving of\nmessages from different backends.\n\n>> If someone can offer a better alternative than Andrew's, great, let's\n>> see it.\n\n> How about the attached one, which I floated a while ago but which didn't\n> generate much interest.\n\nSeems like a good bare-bones file writer; but how about all those\nframmishes that people ask for like generating date-based filenames,\nswitching every so many bytes, etc? Also, it'd be nice not to be\ndependent on a cron job to tickle the switchover.\n\nI do think there's an efficiency argument for having the log writer\ncoded in C, so starting with what you have here and building up might\nbe a better idea than starting with Andrew's perl script. But the\nimportant thing in my mind is to get something in there.\n\nWe should also take a look at Apache's rotator to see if there's any need\nto reinvent the wheel at all. I have not seen it, am not even sure what\nit's written in...\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Fri, 04 Apr 2003 19:22:58 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more contrib: log rotator " }, { "msg_contents": "On Fri, 4 Apr 2003, Tom Lane wrote:\n\n> We should also take a look at Apache's rotator to see if there's any need\n> to reinvent the wheel at all. I have not seen it, am not even sure what\n> it's written in...\n\nIt's written in 140 lines of C (blank lines and all), and has been very\nsolid in my experience. I don't know of any deficiencies that would\nwarrant rewriting it.\n\nJon\n\n", "msg_date": "Sat, 5 Apr 2003 14:23:25 +0000 (UTC)", "msg_from": "Jon Jensen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more contrib: log rotator " }, { "msg_contents": "...and what kind of performance hit we take (and under what\ncircumstances) for not having it?\n\n", "msg_date": "Sat, 05 Apr 2003 16:35:30 GMT", "msg_from": "\"Ron Peacetree\" <[email protected]>", "msg_from_op": false, "msg_subject": "Anyone know why PostgreSQL doesn't support 2 phase execution?" }, { "msg_contents": "...and if so, what are the current efforts focusing on?\n\n", "msg_date": "Sat, 05 Apr 2003 16:35:35 GMT", "msg_from": "\"Ron Peacetree\" <[email protected]>", "msg_from_op": false, "msg_subject": "Anyone working on better transaction locking?" }, { "msg_contents": "AFAIK, there are only 3 general purpose internal sorting techniques\nthat have O(n) behavior:\n1= Insertion Sort for \"almost sorted\" lists. Since \"almost sorted\" is\na fuzzy concept, let's make the first approximation definition that no\nmore than n^(1/2) of the elements can be disordered. There are better\ndefinitions in the literature, but I don't remember them off the top\nof my head.\n\n2= Sorts from the class of Address Calculation, Counting, or\nDistribution Sort. These need to be able to carry out something more\ncomplex than simply a comparison in order to achieve O(n), and\ntherefore have high constants in their execution. For large enough n\nthough, these are the performance kings.\n\n3= Straight Radix Sort where you minimize the number of passes by\nusing a base much greater than two for the the radix. Usually octal\nor hexidecimal. On a 32b or 64b system, this approach will let you\nsort in 2 passes.\n\nAll of the above have potentially nasty trade-offs in comparision to\nthe standard heavily tuned median-of-three quicksort used by the sort\nunix command. 
Nonetheless, I could see some value in providing all of\nthese with PostgeSQL (including a decent port of the unix sort routine\nfor the Win folks). I'll note in passing that quicksort's Achille's\nheel is that it's unstable (all of the rest are stable), which can be\na problem in a DB.\n\nIn the general case there's a few papers out there that state you can\nsort in O(n) if you can throw O(n^2) space at the problem. That\nimplies to me that for DB's, we are not going to be able to use O(n)\nalgorithms for most of our needs.\n\n\nAs for external sorts, everything I've ever read says that some sort\nof merge technique is used: balanced, multiway, or polyphase. In all\ncases, I've seen comments to the effect that reading some part of the\ndata into internal buffers, sorting them, and then merging them with\nalready sorted data is the best practice.\n\n\nAll of this seems to imply that instead of mergesort (which is\nstable), PostgreSQL might be better off with the 4 sorts I've listed\nplus a viciously efficient merge utility for combining partial results\nthat do fit into memory into result files on disk that don't.\n\n\nOr am I crazy?\n\nRon Peacetree\n\n", "msg_date": "Sun, 06 Apr 2003 16:29:11 GMT", "msg_from": "\"Ron Peacetree\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No merge sort?" }, { "msg_contents": "Tom Lane writes:\n\n> Seems like a good bare-bones file writer; but how about all those\n> frammishes that people ask for like generating date-based filenames,\n> switching every so many bytes, etc? Also, it'd be nice not to be\n> dependent on a cron job to tickle the switchover.\n\nLinux systems have a standard system log rotation mechanism (see\nlogrotate(8)), which can rotate logs by size and time and has a number of\nother features. I would rather depend on that kind of preferred system\nmechanism than rolling out our own. And we already depend on cron for\nvacuuming anyway.\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Sun, 6 Apr 2003 23:03:27 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more contrib: log rotator " }, { "msg_contents": "\nTom Lane wrote:\n\n> Seems like a good bare-bones file writer; but how about all those\n> frammishes that people ask for like generating date-based filenames,\n> switching every so many bytes, etc? Also, it'd be nice not to be\n> dependent on a cron job to tickle the switchover.\n\nOne of the earlier times this discussion came up I wrote a log\nrotation program too. It will rotate based on time, file size,\nand/or SIGHUP:\n\n ftp://ftp.nemeton.com.au/pub/src/logwrite-1.0alpha.tar.gz\n\nWritten in C, BSD license, used on my production systems. Needs to be\nupdated to not have 'alpha' in the URL ... no bug reports in 18 months\nmeans that it could at least be beta. ;-) It should be portable: I've\nbuilt it on *BSD, HP-UX, and Linux at least. I'd need help for\nWindows.\n\nI took pains to deal with some of the concerns Tom raised last time\nabout not giving up and exiting when a filesystem fills up. This is\nsomething that the Apache rotatelogs program didn't do at the time I\nlooked at it, else I'd have not written this at all.\n\n> I too would rather this functionality were integrated into the server,\n> but I haven't noticed anyone stepping up to the plate to do it.\n\nShouldn't be hard: the server only has to create a pipe, and run the\nlog rotation program. 
For extra robustness re-starting the log\nrotation program if it exits (\"don't kill -9 the log rotator\") is a\ngood idea.\n\nI'll look at doing this, unless the discussion heads somewhere else\ne.g., if Peter decides to integrate his code. The server changes are\nprobably independent of whichever log rotation program is preferred\nanyway.\n\nRegards,\n\nGiles\n\n", "msg_date": "Mon, 07 Apr 2003 07:48:24 +1000", "msg_from": "Giles Lean <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more contrib: log rotator " }, { "msg_contents": "On Sun, Apr 06, 2003 at 11:03:27PM +0200, Peter Eisentraut wrote:\n> Linux systems have a standard system log rotation mechanism (see\n> logrotate(8)), which can rotate logs by size and time and has a number of\n> other features. I would rather depend on that kind of preferred system\n\nBut it's not available on every platform. And according to my\n/usr/share/doc/logrotate/copyright, it's GPL, so you can't\nredistribute it with PostgreSQL. Building it in was one of the\ncriteria, I thought\n\n> And we already depend on cron for vacuuming anyway.\n\nBut that dependency is actually a liability, because there's no way\nto say, \"Hey, I really need to be vacuumed now.\" We (at Liberty) are\ncurrently going through a remarkable number of hoops trying to get\naround that very limitation\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Sun, 6 Apr 2003 18:17:15 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more contrib: log rotator" }, { "msg_contents": "Andrew Sullivan writes:\n\n> But it's not available on every platform. And according to my\n> /usr/share/doc/logrotate/copyright, it's GPL, so you can't\n> redistribute it with PostgreSQL. Building it in was one of the\n> criteria, I thought\n\nMy point was that log file rotation should be left up to the system\nadministrator. Look at other servers on your system (SMTP, DNS,\nwhatever). How do they handle it?\n\n> > And we already depend on cron for vacuuming anyway.\n>\n> But that dependency is actually a liability, because there's no way\n> to say, \"Hey, I really need to be vacuumed now.\"\n\npsql -c 'VACUUM' ?\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Mon, 7 Apr 2003 00:42:34 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more contrib: log rotator" }, { "msg_contents": "On Mon, Apr 07, 2003 at 12:42:34AM +0200, Peter Eisentraut wrote:\n> \n> My point was that log file rotation should be left up to the system\n> administrator. Look at other servers on your system (SMTP, DNS,\n> whatever). How do they handle it?\n\nPostgreSQL is not a system process, and I think it's a mistake to\nassume that it is. We, for instance, do not have root on the\nmachines we use. It's important to assume that users needn't be\nsystem administrators to use the system.\n\nI suppose, however, you could make the argument that log rotation\nshould be the responisibility of the adminisistrator of the\nPostgreSQL server. But that just amounts to an argument that nothing\nneeds to be done: as we see, there are lots of log management\nfacilities on offer, and none of them are included with PostgreSQL. \nI don't feel strongly about which one to use. 
But since people\nfrequently complain that the feature is not available in PostgreSQL,\nit does seem that, if it's not too much trouble, adding the feature\nis worth it.\n\n> > But that dependency is actually a liability, because there's no way\n> > to say, \"Hey, I really need to be vacuumed now.\"\n> \n> psql -c 'VACUUM' ?\n\nI meant on the part of the back end. If you have a busy system on\nwhich some tables need very frequent vacuuming, but it gets\nunpredictable traffi, you don't just want to say, \"Heck, let's vacuum\nevery hour.\" You want to know _actually_ whether the table needs\nvacuuming.\n\nSo you come up with a bunch of profiles, &c., and do a pile of work\nto figure out which tables really need attention. But that's not\nfree: it takes cycles on the machine, and the hypotheical case is one\nin which the machine is already under heavy load. So cron is not\nreally the answer.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Sun, 6 Apr 2003 18:54:52 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more contrib: log rotator" }, { "msg_contents": "On Sun, 6 Apr 2003, Peter Eisentraut wrote:\n\n> Tom Lane writes:\n> \n> > Seems like a good bare-bones file writer; but how about all those\n> > frammishes that people ask for like generating date-based filenames,\n> > switching every so many bytes, etc? Also, it'd be nice not to be\n> > dependent on a cron job to tickle the switchover.\n> \n> Linux systems have a standard system log rotation mechanism (see\n> logrotate(8)), which can rotate logs by size and time and has a number of\n> other features. I would rather depend on that kind of preferred system\n> mechanism than rolling out our own. And we already depend on cron for\n> vacuuming anyway.\n\nHow about we set up configure to check what we're on and what's available, \n(i.e. rotatelogs, logrotate, joesbiglogrotatorscript, etc...) and \nconfigure pg_ctl to use one of them? It's a good probability that most \nflavors of Unix have log rotators of some kind as a built in, and we can \ninclude a standard one as well.\n\nThat way, if you want to use the same log rotator with postgresql as you \nuse with the rest of your system, you can, and if you just want the built \nin one, you can use it, and if you don't want any log rotation, everything \nstill works the same as before.\n\n", "msg_date": "Mon, 7 Apr 2003 09:35:07 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more contrib: log rotator " }, { "msg_contents": "On Sunday 06 April 2003 18:54, Andrew Sullivan wrote:\n> On Mon, Apr 07, 2003 at 12:42:34AM +0200, Peter Eisentraut wrote:\n> > My point was that log file rotation should be left up to the system\n> > administrator. Look at other servers on your system (SMTP, DNS,\n> > whatever). How do they handle it?\n\n> PostgreSQL is not a system process, and I think it's a mistake to\n> assume that it is. We, for instance, do not have root on the\n> machines we use. It's important to assume that users needn't be\n> system administrators to use the system.\n\nI personally believe that making the assumption that PostgreSQL is not a \nsystem process is wrong. 
One can run system services as a normal user (in \nfact, it is recommended that as few system services as is possible should run \nas root); but the fact that a daemon is running as a normal user doesn't make \nit not a system process. But that's just a difference of system \nadministration opinion.\n\nHowever, I can see the utility of a bundled simple log rotator. The key word \nis simple -- we have the full-fledged route now, called syslog. And if \nsomeone needs a better logrotator they can certainly get one of the many that \nare already available.\n\nAt the same time I don't necessarily want such a log rotator to be the \ndefault. We have syslog as the default. If someone has the particular need \nfor a stderr/stdout log rotator, then let it be a configure option.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n\n", "msg_date": "Mon, 7 Apr 2003 11:45:07 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more contrib: log rotator" }, { "msg_contents": "Andrew Sullivan writes:\n\n> PostgreSQL is not a system process, and I think it's a mistake to\n> assume that it is.\n\nThe point is that PostgreSQL should fit nicely with the customs of the\nsystem that it runs on. This starts with the oft-discussed file system\nlayout, the use of syslog in the first place, using 'cron' and 'at'\ninstead of rolling our own mechanisms to schedule jobs, as is occasionally\nrequested, fitting in with the startup scripts system, and so on.\n\n> I suppose, however, you could make the argument that log rotation\n> should be the responisibility of the adminisistrator of the\n> PostgreSQL server. But that just amounts to an argument that nothing\n> needs to be done: as we see, there are lots of log management\n> facilities on offer, and none of them are included with PostgreSQL.\n\nThat is not the argument. What we need to do is to make it *possible* to\nrotate the logs without shutting down the server, not (necessarily) do the\nrotation ourselves. How can we even begin to do that? Do we need to\ninvent a configuration language that can control when to rotate, where to\nmove the old logs, when to delete the even older logs, etc.?\n\n> I meant on the part of the back end. If you have a busy system on\n> which some tables need very frequent vacuuming, but it gets\n> unpredictable traffi, you don't just want to say, \"Heck, let's vacuum\n> every hour.\" You want to know _actually_ whether the table needs\n> vacuuming.\n\nThat is an argument that manual vacuum is a liability, not the use of\ncron for it.\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Mon, 7 Apr 2003 19:06:07 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more contrib: log rotator" }, { "msg_contents": "On Sat, 2003-04-05 at 11:35, Ron Peacetree wrote:\n> ...and what kind of performance hit we take (and under what\n> circumstances) for not having it?\n\nDo you mean 2-phase commits? If so, how do you take a performance hit\nfrom *not* having it? PostgreSQL doesn't have it (prepare & forget\nphases)) simply because nobody has completed and submitted an\nimplementation. Satoshi is working on the problem.\n\nIf not, what do you mean by 2 phase execution?\n\n-- \nRod Taylor <[email protected]>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "07 Apr 2003 13:11:44 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone know why PostgreSQL doesn't support 2 phase execution?" 
}, { "msg_contents": "Lamar Owen wrote:\n> On Sunday 06 April 2003 18:54, Andrew Sullivan wrote:\n> > On Mon, Apr 07, 2003 at 12:42:34AM +0200, Peter Eisentraut wrote:\n> > > My point was that log file rotation should be left up to the system\n> > > administrator. Look at other servers on your system (SMTP, DNS,\n> > > whatever). How do they handle it?\n> \n> > PostgreSQL is not a system process, and I think it's a mistake to\n> > assume that it is. We, for instance, do not have root on the\n> > machines we use. It's important to assume that users needn't be\n> > system administrators to use the system.\n\n> I personally believe that making the assumption that PostgreSQL is not\n> a system process is wrong. One can run system services as a normal\n> user (in fact, it is recommended that as few system services as is\n> possible should run as root); but the fact that a daemon is running as\n> a normal user doesn't make it not a system process. But that's just a\n> difference of system administration opinion.\n\nI think the mistake lies in making the \"design\" assumption that\nPostgreSQL is either one or the other.\n\n- There are contexts in which it forcibly is \"systemy,\" such as when it\n is used for password authentication using something like PAM. In that\n case, whatever userid it runs as, it's a forcible \"system\" dependancy.\n Users can't log in until PostgreSQL is running.\n\n- There are contexts where it will run as a \"part of the system,\" as is\n typically the case when someone uses \"apt-get install postgresql\" or\n \"rpm -i postgres*.rpm\"\n\n- In a \"hosted\" environment, it may be unacceptable to, in any manner,\n treat PostgreSQL or any related services as \"part of the system.\"\n cron obviously *is* a \"part of the system,\" but if you're not the\n system administrator, you may have /no/ ability to connect in to\n \"system\" logging services. (In the environment where \n pgrotatelog runs, that is indeed the case.)\n\nThese are /all/ legitimate scenarios for PostgreSQL to be in use.\n\n--> Assuming PostgreSQL /is/ a system process is wrong.\n--> Assuming PostgreSQL /is not/ a system process is wrong.\n\nThere are situations where either can be true, and it is vital for\nPostgreSQL to be able to support both.\n--\n(concatenate 'string \"cbbrowne\" \"@ntlug.org\")\nhttp://www.ntlug.org/~cbbrowne/unix.html\n\"There I was, lying, cheating and back-stabbing my way up the\ncorporate ladder, feeling pretty darn good about myself, when someone\ntold me the 'J' in 'WWJD' meant *Jesus* I thought it meant *Judas*!\nHoo boy, am I red in the face!\"\n\n", "msg_date": "Mon, 07 Apr 2003 13:40:41 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: more contrib: log rotator " }, { "msg_contents": "\"Rod Taylor\" <[email protected]> wrote in message\nnews:1049735504.40144.35.camel@jester...\n> --=-v98gp7DTtZFJa7ee6YMf\n> Content-Type: text/plain\n> Content-Transfer-Encoding: quoted-printable\n>\n> On Sat, 2003-04-05 at 11:35, Ron Peacetree wrote:\n> > ...and what kind of performance hit we take (and under what\n> > circumstances) for not having it?\n>\n> Do you mean 2-phase commits? If so, how do you take a performance\nhit\n> from *not* having it? PostgreSQL doesn't have it (prepare & forget\n> phases)) simply because nobody has completed and submitted an\n> implementation. 
Satoshi is working on the problem.\n>\n> If not, what do you mean by 2 phase execution?\n>\nThe performance hit as in \"in comparison to DB's that =do= have two\nphase execution (and commit for that matter), just how much slower is\nPostgreSQL?\"\n\nTwo phase execution and two phase commit are two different concepts.\nTwo phase execution splits the execution of queries explicitly into a\n\"do all the book keeping and setup stuff before execution\" phase and\nan actual execution phase. The benefit is that if you are going to\nsay, step through a largish table in chunks, doing the same query on\neach chunk, two phase execution allows the DB to do everything (syntax\nchecking, query planning, blah, blah) except the actual execution\n=once= and reuse it for each subsequent chunk. Think of it as a close\ncousin to loop unrolling. It also helps parallel performance since\nyou can hand the \"blessed\" set up query plan to multiple processes and\nthose processes can focus on just getting work done.\n\nThe lack of two phase =commit= is a also a potential performance hit\nin comparison to DB products that have it, but the more important\nissue there is that there are SMP/distributed apps that really can't\nwork acceptably unless a DB product has two phase commit.\n\nThe three \"biggies\" in DB land, SQL Server, Oracle, and DB2, have both\nfeatures. I suspect that PostgreSQL will need to as well...\n\n", "msg_date": "Mon, 07 Apr 2003 18:59:21 GMT", "msg_from": "\"Ron Peacetree\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone know why PostgreSQL doesn't support 2 phase execution?" }, { "msg_contents": "\"Ron Peacetree\" <[email protected]> writes:\n\n> AFAIK, there are only 3 general purpose internal sorting techniques\n> that have O(n) behavior:\n\nStrictly speaking there are no sorting algorithms that have worst-case time\nbehaviour better than O(nlog(n)). Period.\n\nThe typical kind-of O(n) sorts involve either special-case inputs (almost\nsorted), or only work if you ignore some aspect of the input size (radix\nsort).\n\nSo, for example, for radix sort the log(n) factor comes precisely from having\nto have log(n) passes. If you use octal that might go to log(n)/8 but it's\nstill O(log(n)).\n\nIf you assume your input fields are limited to a fixed size then O() notation\nloses meaning. You can always sort in linear time by just storing bit flags in\na big vector and then scanning your (fixed-size) vector to read out the\nvalues. \n\nHowever databases cannot assume fixed size inputs. Regardless of whether it's\non a 32 or 64 bit system postgres still has to sort much larger data\nstructures. floats are typically 64 - 96 bytes, bigints can be arbitrarily\nlarge.\n\nIn fact posgres can sort user-defined datatypes as long as they have < and =\noperators. (Actually I'm not sure on the precise constraint.)\n\nOh, and just to add one last fly in the ointment, postgres has to be able to\nsort large datasets that don't even fit in memory. That means storing\ntemporary data on disk and minimizing the times data has to move from disk to\nmemory and back. Some alogorithms are better at that than others.\n\n--\ngreg\n\n", "msg_date": "07 Apr 2003 15:36:10 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No merge sort?" 
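To make it concrete where the hidden "log n" lives in the radix-sort case, here is a minimal LSD radix sort sketch -- fixed-width 32-bit keys only, and not taken from (or proposed for) the PostgreSQL sources:

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

void
radix_sort_u32(uint32_t *a, size_t n)
{
    uint32_t   *tmp = malloc(n * sizeof(uint32_t));

    if (tmp == NULL)
        return;                 /* out of memory: leave the input untouched */

    /* one pass per 8-bit digit: 32/8 = 4 passes, independent of n */
    for (int shift = 0; shift < 32; shift += 8)
    {
        size_t  count[256] = {0};
        size_t  pos = 0;

        /* count how many keys land in each of the 256 bins */
        for (size_t i = 0; i < n; i++)
            count[(a[i] >> shift) & 0xFF]++;

        /* turn the counts into starting offsets */
        for (int b = 0; b < 256; b++)
        {
            size_t  c = count[b];

            count[b] = pos;
            pos += c;
        }

        /* distribute in input order, which keeps each pass stable */
        for (size_t i = 0; i < n; i++)
            tmp[count[(a[i] >> shift) & 0xFF]++] = a[i];

        memcpy(a, tmp, n * sizeof(uint32_t));
    }
    free(tmp);
}

The pass count is keybits/radixbits (here 4) no matter how large n gets, which is why such a sort looks O(n) once the key width is pinned down; for keys of unbounded width (numeric, text, user-defined types), that "constant" grows with the key, which is exactly the assumption being argued over in this thread.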
}, { "msg_contents": "Ron Peacetree wrote\n> ...and what kind of performance hit we take (and under what\n> circumstances) for not having it?\n\nAre you thinking of \"two phase commit\"?\n\nThere is no \"performance hit for not having it.\"\n\nAnd it does not currently apply to PostgreSQL. Two phase commit is only\nneeded when updates need to be applied simultaneously on multiple\ndatabases.\n\nThat is, you'd have something like:\n\nCREATE DISTRIBUTED TRANSACTION X1;\n insert into table TBL1 in database ('db1', 'id1', 'auth1') values (1, 2, 100.00, now());\n insert into table TBL2 in database ('db2', 'id2', 'auth2') values (1, 3, -100.00, now());\nCOMMIT DISTRIBUTED TRANSACTION X1;\n\nwhere the \"in database ('db1', 'id1', 'auth1')\" part indicates some form\nof connection parameters for the database.\n\nThere certainly is merit to having two phase commit; it allows\ncoordinating updates to multiple databases.\n\nThe \"degradation of performance\" that results from not having this is\nthat you can't have distributed transactions. That's not a \"performance\nhit;\" that's a case of \"you can't do distributed transactions.\" \n\nAnd distributed transactions are probably /more/ expensive than\nnondistributed ones, so it is more readily argued that by not supporting\nthem, you don't have the problem of performance degrading due to making\nuse of distributed transactions.\n--\noutput = reverse(\"gro.gultn@\" \"enworbbc\")\nhttp://www3.sympatico.ca/cbbrowne/nonrdbms.html\nRules of the Evil Overlord #195. \"I will not use hostages as bait in a\ntrap. Unless you're going to use them for negotiation or as human\nshields, there's no point in taking them.\"\n<http://www.eviloverlord.com/>\n\n", "msg_date": "Mon, 07 Apr 2003 15:44:06 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Anyone know why PostgreSQL doesn't support 2 phase execution? " }, { "msg_contents": "Ron Peacetree wrote:\n> ...and if so, what are the current efforts focusing on?\n\nWhat is it that you think of as being potentially \"better\" about some\nwould-be-alternative \"transaction locking\" scheme?\n\nPostgreSQL already supports MVCC, which is commonly considered to be the\n\"better\" scheme that eliminates a lot of need to lock data.\n\nFurthermore, the phrase \"transaction locking\" doesn't seem to describe\nwhat one would want to lock. I wouldn't want to lock a \"transaction;\"\nI'd want to lock DATA.\n--\n(concatenate 'string \"cbbrowne\" \"@cbbrowne.com\")\nhttp://www.ntlug.org/~cbbrowne/sap.html\nRules of the Evil Overlord #153. \"My Legions of Terror will be an\nequal-opportunity employer. Conversely, when it is prophesied that no\nman can defeat me, I will keep in mind the increasing number of\nnon-traditional gender roles.\" <http://www.eviloverlord.com/>\n\n", "msg_date": "Mon, 07 Apr 2003 15:48:27 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Anyone working on better transaction locking? " }, { "msg_contents": "On Mon, Apr 07, 2003 at 03:36:10PM -0400, Greg Stark wrote:\n> \"Ron Peacetree\" <[email protected]> writes:\n> \n> > AFAIK, there are only 3 general purpose internal sorting techniques\n> > that have O(n) behavior:\n> \n> Strictly speaking there are no sorting algorithms that have worst-case time\n> behaviour better than O(nlog(n)). Period.\n> \n\nNot true.\n\nhttp://www.elsewhere.org/jargon/html/entry/bogo-sort.html\n\n-Jay 'Eraserhead' Felice\n\nP.S. <g>\n\n", "msg_date": "Mon, 7 Apr 2003 16:10:03 -0400", "msg_from": "\"Jason M. 
Felice\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No merge sort?" }, { "msg_contents": "Ron Peacetree wrote:\n> AFAIK, there are only 3 general purpose internal sorting techniques\n> that have O(n) behavior:\n\nHum? NO \"general purpose\" sorting technique has O(n) behaviour.\n\nThe theoretical best scenario, _in general_, is O(n log n).\n\nInsertion sort is expected to provide O(n^2) behaviour, and radix-like\nschemes get arbitrarily memory hungry and have bad pathological results.\n\n> All of the above have potentially nasty trade-offs in comparision to\n> the standard heavily tuned median-of-three quicksort used by the sort\n> unix command. Nonetheless, I could see some value in providing all of\n> these with PostgeSQL (including a decent port of the unix sort routine\n> for the Win folks). I'll note in passing that quicksort's Achille's\n> heel is that it's unstable (all of the rest are stable), which can be\n> a problem in a DB.\n\nMaking one sort algorithm work very efficiently is quite likely to be a\nlot more effective than frittering away time trying to get some special\ncases (that you can't regularly use) to work acceptably.\n\n> All of this seems to imply that instead of mergesort (which is\n> stable), PostgreSQL might be better off with the 4 sorts I've listed\n> plus a viciously efficient merge utility for combining partial results\n> that do fit into memory into result files on disk that don't.\n> \n> Or am I crazy?\n\nMore than likely. It is highly likely that it will typically take more\ncomputational effort to figure out that one of the 4 sorts provided\n/any/ improvement than any computational effort they would save.\n\nThat's a /very/ common problem. There's also a fair chance, seen in\npractice, that the action of collecting additional statistics to improve\nquery optimization will cost more than the savings provided by the\noptimizations.\n--\n(concatenate 'string \"cbbrowne\" \"@acm.org\")\nhttp://www3.sympatico.ca/cbbrowne/wp.html\nWhen ever in doubt consult a song. --JT Fletcher \n\n", "msg_date": "Mon, 07 Apr 2003 16:20:01 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: No merge sort? " }, { "msg_contents": "<[email protected]> wrote in message\nnews:[email protected]...\n> Ron Peacetree wrote:\n> > AFAIK, there are only 3 general purpose internal sorting\ntechniques\n> > that have O(n) behavior:\n>\n> Hum? NO \"general purpose\" sorting technique has O(n) behaviour.\n>\n> The theoretical best scenario, _in general_, is O(n log n).\nThe O(log(n!)) bound is only for comparison based sorts operating on\narbitarily disordered input. There are general techniques that can\nsort in O(n) time if we can \"break\" either assumption. In a DB, we\noften are dealing with data that is only slightly disordered, or we\nare have enough meta knowledge that we can use more powerful ordering\noperators than simple comparisons, or both.\n\n\n> Insertion sort is expected to provide O(n^2) behaviour, and\nradix-like\n> schemes get arbitrarily memory hungry and have bad pathological\nresults.\n>\nNone of these comments is accurate.\nThe sources for the following discussion are\nA= Vol 3 of Knuth 2nd ed, (ISBN 0-201-89685-0)\nB= Sedgewick's Algorithms in C, (0-201-51425-7)\nC= Handbook of Algorithms and Data Structures 2nd ed by Gonnet and\nBaeza-Yates. (0-201-41607-7)\n\n1= Insertion sort is O(n) for \"almost sorted\" input.\np103 of Sedgewick. 
There's also discussion on this in Knuth.\n\n2= Distribution Sort and it's \"cousins\" which use more powerful\nordering operators than simply comparisons are a= reasonably general,\nand b= O(n).\nLook in all three references.\n\n3= A proper implementation of straight Radix sort using 8b or 16b at a\ntime a= is NOT pathological, and b= is O(n).\nSedgewick is the most blunt about it on p142-143, but Knuth discusses\nthis as well.\n\nAll of the above are stable, which quicksort is not. There are no\n\"pessimal\" inputs for any of the above that will force worst case\nbehavior. For quicksort there are (they are =very= unlikely however).\nIn real world terms, if you can use any of these approaches you should\nbe able to internally sort your data between 2X and 3X faster.\n\n\nUnfortunately, most of us do not have the luxury of working with\nMemory Resident DB's. In the real world, disk access is an important\npart of our DB sorting efforts, and that changes things. Very fast\ninternal sorting routines combined with multidisk merging algorithms\nthat minimize overall disk I/O while maximizing the sequential nature\nof that I/O are the best we can do overall in such a situation.\n\n", "msg_date": "Tue, 08 Apr 2003 01:32:49 GMT", "msg_from": "\"Ron Peacetree\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No merge sort?" }, { "msg_contents": "<[email protected]> wrote in message\nnews:[email protected]...\n> It is highly likely that it will typically take more\n> computational effort to figure out that one of the 4 sorts provided\n> /any/ improvement than any computational effort they would save.\n>\n> That's a /very/ common problem. There's also a fair chance, seen in\n> practice, that the action of collecting additional statistics to\nimprove\n> query optimization will cost more than the savings provided by the\n> optimizations.\n>\n\"Back in the Day\" I heard similar arguments when discussing whether\nthere should be support for hashing [O(n)], interpolation search\n[O(lglg(n))], binary search [O(lg(n))], and sequential search [O(n)]\nor for only some subset of these for a DB system I was working on. To\nmake a long story short, it was worth it to have support for all of\nthem because the \"useful domain\" of each was reasonably large and\nreasonably unique compared to the others.\n\nI submit a similar situation exists for sorting. (and yes, I've been\nhere before too ;-)\n\nGiving end users of PostgreSQL a reasonable set of tools for building\nhigh performance applications is just good business.\n\n", "msg_date": "Tue, 08 Apr 2003 02:11:07 GMT", "msg_from": "\"Ron Peacetree\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No merge sort?" }, { "msg_contents": "\n\"Ron Peacetree\" <[email protected]> wrote in message\nnews:%[email protected]...\n> <[email protected]> wrote in message\n> news:[email protected]...\n> > It is highly likely that it will typically take more\n> > computational effort to figure out that one of the 4 sorts\nprovided\n> > /any/ improvement than any computational effort they would save.\n> >\n> > That's a /very/ common problem. 
There's also a fair chance, seen\nin\n> > practice, that the action of collecting additional statistics to\n> improve\n> > query optimization will cost more than the savings provided by the\n> > optimizations.\n> >\n> \"Back in the Day\" I heard similar arguments when discussing whether\n> there should be support for hashing [O(n)], interpolation search\nTYPO ALERT: hashing is, of course, O(1)\n\n", "msg_date": "Tue, 08 Apr 2003 03:11:26 GMT", "msg_from": "\"Ron Peacetree\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No merge sort?" }, { "msg_contents": "<[email protected]> wrote in message\nnews:[email protected]...\n> Ron Peacetree wrote:\n> > ...and if so, what are the current efforts focusing on?\n>\n> What is it that you think of as being potentially \"better\" about\nsome\n> would-be-alternative \"transaction locking\" scheme?\n>\n> PostgreSQL already supports MVCC, which is commonly considered to be\nthe\n> \"better\" scheme that eliminates a lot of need to lock data.\nAgreed. FTR, the reason MVCC is \"better\" is that readers and writers\nto the same data don't block each other. In \"traditional\" locking\nschemes, readers don't block each other, but readers and writers to\nthe same data do. Clearly, writers to the same data must always block\neach other.\n\nUnfortunately, the performance of PostgreSQL MVCC in comparison to say\nOracle (the performance leader amongst MVCC DB's, and pretty much for\nall DB's for that matter) is not competitive. Therefore there is a\nneed to improve the implementation of MVCC that PostgreSQL uses. If\nsomeone can post a detailed blow-by-blow comparison of how the two\noperate so that the entire list can see it that would be a Good Thing.\nIf I can, I'll put together the info and post it myself.\n\n\n> Furthermore, the phrase \"transaction locking\" doesn't seem to\ndescribe\n> what one would want to lock. I wouldn't want to lock a\n\"transaction;\"\n> I'd want to lock DATA.\n>\n*sigh*. The accepted terminology within this domain for what we are\ntalking about is \"transaction locking\". Therefore we should use it to\nease communications. Argue with Codd and Date if you think the term\nis a misnomer. Secondly, you are thinking only in the space\ndimension. Locks have to protect data within a minimum space vs time\n\"bubble\". That bubble is defined by the beginning and end of a\ntransaction, hence we call the locking of resources we do during that\nbubble as \"transaction locking\".\n\n", "msg_date": "Tue, 08 Apr 2003 13:45:25 GMT", "msg_from": "\"Ron Peacetree\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone working on better transaction locking?" }, { "msg_contents": "On Mon, 2003-04-07 at 14:59, Ron Peacetree wrote:\n> Two phase execution and two phase commit are two different concepts.\n> Two phase execution splits the execution of queries explicitly into a\n> \"do all the book keeping and setup stuff before execution\" phase and\n> an actual execution phase. 
The benefit is that if you are going to\n> say, step through a largish table in chunks, doing the same query on\n> each chunk, two phase execution allows the DB to do everything (syntax\n> checking, query planning, blah, blah) except the actual execution\n> =once= and reuse it for each subsequent chunk.\n\nIf \"stepping through a largish table in chunks\" is implemented as a\nsingle SQL query, PostgreSQL already does this internally (the parsing,\nplanning, rewriting, and execution phases are distinct operations inside\nthe backend).\n\nIf the stepping is done as a bunch of similar queries, you can use\nprepared queries (as of PostgreSQL 7.3) to do the parsing, planning and\nrewriting only once, and then reuse the query plan multiple times.\n\n> It also helps parallel performance since\n> you can hand the \"blessed\" set up query plan to multiple processes and\n> those processes can focus on just getting work done.\n\nPrepared queries are per-backend as of PostgreSQL 7.3, so this can't be\ndone (and I'm a little skeptical that it would be very useful...)\n\nCheers,\n\nNeil\n\n", "msg_date": "08 Apr 2003 15:54:50 -0400", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone know why PostgreSQL doesn't support 2 phase" }, { "msg_contents": "On Tue, Apr 08, 2003 at 01:45:25PM +0000, Ron Peacetree wrote:\n> Unfortunately, the performance of PostgreSQL MVCC in comparison to say\n> Oracle (the performance leader amongst MVCC DB's, and pretty much for\n> all DB's for that matter) is not competitive. Therefore there is a\n\nWhat, is this a troll? The question apparently reduces to, \"Why\nisn't PostgreSQL as good as Oracle?\" I have two things to say about\nthat:\n\n1.\tFor what? There are things that Oracle users will tell you\nnot to do, because there is a faster way in Oracle. \n\n2.\tHow do you know? I haven't seen any real benchmarks\ncomparing PostgreSQL and Oracle similarly tuned on similar hardware. \nSo I'm sceptical.\n\nBut if you have specifica areas which you think need improvement (and\naren't already listed in the TODO), I'll bet people would like to\nhear about it.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 8 Apr 2003 19:05:18 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone working on better transaction locking?" }, { "msg_contents": "\"Ron Peacetree\" <[email protected]> writes:\n> Unfortunately, the performance of PostgreSQL MVCC in comparison to say\n> Oracle (the performance leader amongst MVCC DB's, and pretty much for\n> all DB's for that matter) is not competitive.\n\nRon, the tests that I've seen offer no support for that thesis. If you\nwant us to accept such a blanket statement as fact, you'd better back\nit up with evidence. Let's see some test cases.\n\nPostgres certainly has plenty of performance issues, but I have no\nreason to believe that the fundamental MVCC mechanism is one of them.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 08 Apr 2003 23:58:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone working on better transaction locking? 
" }, { "msg_contents": "\"Andrew Sullivan\" <[email protected]> wrote in message\nnews:[email protected]...\n> On Tue, Apr 08, 2003 at 01:45:25PM +0000, Ron Peacetree wrote:\n> > Unfortunately, the performance of PostgreSQL MVCC in comparison to\n> > say Oracle (the performance leader amongst MVCC DB's, and pretty\nmuch\n> > for all DB's for that matter) is not competitive. Therefore there\nis\n>\n> What, is this a troll?\nTime will tell. Hopefully not.\n\n\n> The question apparently reduces to, \"Why isn't PostgreSQL\n> as good as Oracle?\"\nActually, you've just used reductio absurdium, not I. My question\ncompares PostgreSQL to the performance leaders within this domain\nsince I'll have to justify my decisions to my bosses based on such\ncomparisons. If you think that is unrealistic, then I wish I worked\nwhere you do. If you think that is unreasonable, then I think you're\ntreating PostgreSQL as a religion and not a SW product that must\ncompete against every other DB solution in the real world in order to\nbe relevant or even survive.\n\n\n> 1. For what? There are things that Oracle users will tell you\n> not to do, because there is a faster way in Oracle.\n>\n> 2. How do you know? I haven't seen any real benchmarks\n> comparing PostgreSQL and Oracle similarly tuned on similar hardware.\n> So I'm sceptical.\nPlease see my response(s) to Tom below.\n\n\n> But if you have specifica areas which you think need improvement\n> (and aren't already listed in the TODO), I'll bet people would like\nto\n> hear about it.\nPlease see my posts with regards to sorting and searching, two phase\nexecution, and two phase commit. I'll mention thread support in\npassing, and I'll be bringing up other stuff as I investigate. Then\nI'll hopefully start helping to solve some of the outstanding issues\nin priority order...\n\n\n\"Tom Lane\" <[email protected]> wrote in message\nnews:[email protected]...\n> Ron, the tests that I've seen offer no support for that thesis.\nWhat tests? I've seen no tests doing head-to-head,\nfeature-for-feature comparisons (particularly for low level features\nlike locking) of PostgreSQL vs the \"biggies\": DB2, Oracle, and SQL\nServer. What data I have been able to find is application level, and\ncertainly not head-to-head. From those performance results, I've had\nto try and extrapolate likely causes from behavioral characteristics,\ndocs, and what internal code I can look at (clearly not much from the\n\"biggies\").\n\nIf you have specific head-to-head, feature-for-feature comparison test\nresults to share, PLEASE do so. I need the data.\n\n\n> If you want us to accept such a blanket statement as fact, you'd\n> better back it up with evidence. Let's see some test cases.\nSoon as I have the HW and SW to do so, it'll happen. I have some \"bet\nthe company\" decisions to make in the DB realm.\n\nTest cases are, of course, not the only possible evidence. I'll get\nback to you and the list on this.\n\n\n> Postgres certainly has plenty of performance issues, but I have no\n> reason to believe that the fundamental MVCC mechanism is one of\n> them.\nWhere in your opinion are they then? How bad are they in comparison\nto MySQL or any of the \"Big Three\"?\n\n", "msg_date": "Wed, 09 Apr 2003 05:41:06 GMT", "msg_from": "\"Ron Peacetree\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone working on better transaction locking?" 
}, { "msg_contents": "Ron Peacetree wrote:\n> [...]\n> The lack of two phase =commit= is a also a potential performance hit\n> in comparison to DB products that have it, but the more important\n> issue there is that there are SMP/distributed apps that really can't\n> work acceptably unless a DB product has two phase commit.\n> \n> The three \"biggies\" in DB land, SQL Server, Oracle, and DB2, have both\n> features. I suspect that PostgreSQL will need to as well...\n\nRon, do you actually have some ideas how to do 2 phase commits?\nEspecially things like how to re-lock during startup after a crash and\nthe like? Or is your knowledge in reality only buzzwords collected from\nhigh glossy tradeshow flyers?\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n", "msg_date": "Wed, 09 Apr 2003 10:13:45 -0400", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone know why PostgreSQL doesn't support 2 phase " }, { "msg_contents": "\"Jan Wieck\" <[email protected]> wrote in message\nnews:[email protected]...\n> Ron Peacetree wrote:\n> > [...]\n> > The lack of two phase =commit= is a also a potential performance\nhit\n> > in comparison to DB products that have it, but the more important\n> > issue there is that there are SMP/distributed apps that really\ncan't\n> > work acceptably unless a DB product has two phase commit.\n> >\n> > The three \"biggies\" in DB land, SQL Server, Oracle, and DB2, have\nboth\n> > features. I suspect that PostgreSQL will need to as well...\n>\n> Ron, do you actually have some ideas how to do 2 phase commits?\n> Especially things like how to re-lock during startup after a crash\nand\n> the like? Or is your knowledge in reality only buzzwords collected\nfrom\n> high glossy tradeshow flyers?\n>\nIf \"some ideas\" means \"do I know how to code it into PostgreSQL right\nnow\", the answer is no. If \"some ideas\" means \"do I understand the\ngeneral problem at a technical level well enough to be thinking about\nthe algorithms and datastructures needed to support the functionality\"\nthe answer is yes.\n\nSo I'd say a fair response to your questions is that my knowledge is\nin between the two extremes you've described, but probably closer to\nthe first than the second ;-). We can have a private email discussion\non the topic if you wish.\n\n", "msg_date": "Wed, 09 Apr 2003 16:35:20 GMT", "msg_from": "\"Ron Peacetree\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone know why PostgreSQL doesn't support 2 phase" }, { "msg_contents": "\"Ron Peacetree\" <[email protected]> writes:\n> \"Tom Lane\" <[email protected]> wrote in message\n> news:[email protected]...\n>> Ron, the tests that I've seen offer no support for that thesis.\n\n> What tests? I've seen no tests doing head-to-head,\n> feature-for-feature comparisons (particularly for low level features\n> like locking) of PostgreSQL vs the \"biggies\": DB2, Oracle, and SQL\n> Server. What data I have been able to find is application level, and\n> certainly not head-to-head.\n\nWho said anything about feature-for-feature comparisons? 
You made an\n(unsupported) assertion about performance, which has little to do with\nfeature checklists.\n\nThe reason I don't believe there's any fundamental MVCC problem is that\nno such problem showed up in the head-to-head performance tests that\nGreat Bridge did about two years ago. GB is now defunct, and I have\nnot heard of anyone else willing to stick their neck out far enough to\npublish comparative benchmarks against Oracle. But I still trust the\nresults they got.\n\nI have helped various people privately with Oracle-to-PG migration\nperformance problems, and so far the issues have never been MVCC or\ntransaction issues at all. What I've seen is mostly planner\nshortcomings, such as failure to optimize \"foo IN (sub-SELECT)\"\ndecently. Some of these things are already addressed in development\nsources for 7.4.\n\n\n>> Postgres certainly has plenty of performance issues, but I have no\n>> reason to believe that the fundamental MVCC mechanism is one of\n>> them.\n\n> Where in your opinion are they then? How bad are they in comparison\n> to MySQL or any of the \"Big Three\"?\n\nSee the TODO list for some of the known problems. As for \"how bad are\nthey\", that depends completely on the particular application and queries\nyou are looking at ...\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 09 Apr 2003 12:48:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone working on better transaction locking? " }, { "msg_contents": "On Wed, Apr 09, 2003 at 05:41:06AM +0000, Ron Peacetree wrote:\n\n> Actually, you've just used reductio absurdium, not I. My question\n\nNonsense. You explicitly made the MVCC comparison with Oracle, and\nare asking for a \"better\" locking mechanism without providing any\nevidence that PostgreSQL's is bad. \n\n> compares PostgreSQL to the performance leaders within this domain\n> since I'll have to justify my decisions to my bosses based on such\n> comparisons. If you think that is unrealistic, then I wish I worked\n\nWhere I work, we test our systems to performance targets having to do\nwith what we use the database for. Generic database benchmarks are\nnot something I have a great deal of faith in. I repeat my assertion\nthat, if you have specific areas of concern and the like, and they're\nnot on the TODO (or in the FAQ), then people would be likely to be\ninterested; although they'll likely be more interested if the\nspecifics are not a lot of hand-wavy talk about PostgreSQL not doing\nsomething the right way.\n\n> treating PostgreSQL as a religion and not a SW product that must\n> compete against every other DB solution in the real world in order to\n> be relevant or even survive.\n\nActually, given that we are dependent on PostgreSQL's performance and\nstability for the whole of the company's revenue, I am pretty certain\nthat I have as much \"real world\" experience of PostgreSQL use as\nanyone else. \n\n> Please see my posts with regards to sorting and searching, two phase\n> execution, and two phase commit. \n\nI think your other posts were similar to the one which started this\nthread: full of mighty big pronouncements which turned out to depend\non a bunch of not-so-tenable assumptions. \n\nI'm sorry to be so cranky about this, but I get tired of having to\ndefend one of my employer's core technologies from accusations based\non half-truths and \"everybody knows\" assumptions. 
For instance,\n\n> I'll mention thread support in passing,\n\nthere's actually a FAQ item about thread support, because in the\nopinion of those who have looked at it, the cost is just not worth\nthe benefit. If you have evidence to the contrary (specific evidence,\nplease, for this application), and have already read all the previous\ndiscussion of the topic, perhaps people would be interested in\nopening that debate again (though I have my doubts).\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Wed, 9 Apr 2003 13:09:26 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone working on better transaction locking?" }, { "msg_contents": "Tom Lane wrote:\n> The reason I don't believe there's any fundamental MVCC problem is that\n> no such problem showed up in the head-to-head performance tests that\n> Great Bridge did about two years ago. GB is now defunct, and I have\n> not heard of anyone else willing to stick their neck out far enough to\n> publish comparative benchmarks against Oracle. But I still trust the\n> results they got.\n\n<irony-mode-on>\nYou're missing where Mr Peacetree documented how MVCC performance\ndegraded by 42.37% between versions 7.1 and 7.3.1, as well as his\nextensive statistical analysis of the relative behaviours of\nPostgreSQL's semantics versus those of DB/2's MVCC implementation.\n</irony-mode-off>\n\n> I have helped various people privately with Oracle-to-PG migration\n> performance problems, and so far the issues have never been MVCC or\n> transaction issues at all. What I've seen is mostly planner\n> shortcomings, such as failure to optimize \"foo IN (sub-SELECT)\"\n> decently. Some of these things are already addressed in development\n> sources for 7.4.\n\nAh, but that's just anecdotal evidence... \n\nAnd if you used radix sorting, that would probably fix it all. (At\nleast until you discovered that you needed 65 bit addressing to set\nsort_mem high enough... Oh, did I neglect to mention anything about\nirony?)\n--\noutput = reverse(\"gro.mca@\" \"enworbbc\")\nhttp://www.ntlug.org/~cbbrowne/oses.html\n\"Luckily for Microsoft, it's difficult to see a naked emperor in the\ndark.\" --- Ted Lewis, (former) editor-in-chief, IEEE Computer\n\n", "msg_date": "Wed, 09 Apr 2003 13:53:15 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Anyone working on better transaction locking? " }, { "msg_contents": "\"Tom Lane\" <[email protected]> wrote in message\nnews:[email protected]...\n> \"Ron Peacetree\" <[email protected]> writes:\n> > \"Tom Lane\" <[email protected]> wrote in message\n> > news:[email protected]...\n> >> Ron, the tests that I've seen offer no support for that thesis.\n>\n> > What tests? I've seen no tests doing head-to-head,\n> > feature-for-feature comparisons (particularly for low level\nfeatures\n> > like locking) of PostgreSQL vs the \"biggies\": DB2, Oracle, and SQL\n> > Server. What data I have been able to find is application level,\nand\n> > certainly not head-to-head.\n>\n> Who said anything about feature-for-feature comparisons? You made\nan\n> (unsupported) assertion about performance, which has little to do\nwith\n> feature checklists.\n>\nThat's not quite fair. My assertion was about the performance of an\nexact feature in comparison to that same feature in another DB\nproduct, not about overall application level performance... 
As I\nsaid, I'll get back to you and the list on this.\n\n\n> The reason I don't believe there's any fundamental MVCC problem is\nthat\n> no such problem showed up in the head-to-head performance tests that\n> Great Bridge did about two years ago. GB is now defunct, and I have\n> not heard of anyone else willing to stick their neck out far enough\nto\n> publish comparative benchmarks against Oracle. But I still trust\nthe\n> results they got.\n>\nLast year eWeek did a shoot out that PostgreSQL was notable in its\nabsence from:\nhttp://www.eweek.com/print_article/0,3668,a=23115,00.asp\nTaking those results and adding PostgreSQL to them should be eminently\nfeasible since the entire environment used for the test is documented\nand the actual scripts and data used for the test are also available.\nOf course, MySQL has been evolving at such a ferocious rate that even\none year old results, let alone two year old ones, run the risk of not\nbeing accurate for it.\n\n\n> I have helped various people privately with Oracle-to-PG migration\n> performance problems, and so far the issues have never been MVCC or\n> transaction issues at all. What I've seen is mostly planner\n> shortcomings, such as failure to optimize \"foo IN (sub-SELECT)\"\n> decently. Some of these things are already addressed in development\n> sources for 7.4.\n>\nIt's probably worth noting that since SQL support was added to\nPostgres rather than being part of the product from Day One, certain\n\"hard\" SQL constructs may still be having teething problems. NOT IN,\nfor instance, was a problem for both Oracle and SQL Server at some\npoint in their history (fuzzy memory: pre Oracle 6, not sure about SQL\nServer version...)\n\n\n> >> Postgres certainly has plenty of performance issues, but I have\nno\n> >> reason to believe that the fundamental MVCC mechanism is one of\n> >> them.\n>\n> > Where in your opinion are they then? How bad are they in\ncomparison\n> > to MySQL or any of the \"Big Three\"?\n>\n> See the TODO list for some of the known problems. As for \"how bad\nare\n> they\", that depends completely on the particular application and\nqueries\n> you are looking at ...\n>\nFair enough.\n\n", "msg_date": "Wed, 09 Apr 2003 17:53:17 GMT", "msg_from": "\"Ron Peacetree\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone working on better transaction locking?" }, { "msg_contents": "\"Andrew Sullivan\" <[email protected]> wrote in message\nnews:[email protected]...\n> On Wed, Apr 09, 2003 at 05:41:06AM +0000, Ron Peacetree wrote:\n> Nonsense. You explicitly made the MVCC comparison with Oracle, and\n> are asking for a \"better\" locking mechanism without providing any\n> evidence that PostgreSQL's is bad.\n>\nJust because someone else's is \"better\" does not mean PostgreSQL's is\n\"bad\", and I've never said such. As I've said, I'll get back to Tom\nand the list on this.\n\n\n> > compares PostgreSQL to the performance leaders within this domain\n> > since I'll have to justify my decisions to my bosses based on such\n> > comparisons. If you think that is unrealistic, then I wish I\n> > worked where you do.\n>\n> Where I work, we test our systems to performance targets having to\n> do with what we use the database for. Generic database benchmarks\n> are not something I have a great deal of faith in. 
I repeat my\n> assertion that, if you have specific areas of concern and the like,\n> and they're not on the TODO (or in the FAQ), then people would be\n> likely to be interested; although they'll likely be more interested\nif the\n> specifics are not a lot of hand-wavy talk about PostgreSQL not doing\n> something the right way.\n>\nThere's nothing \"hand wavy\"about this unless you think anything except\ntest cases is \"hand wavy\". In that case, you're right. I don't have\nthe time or resources to provide exhaustive tests between each DB for\neach of the issues we are discussing. If I did, I'd be publishing a\n=very= lucrative newsletter for IT decision makers. Also, there are\nother\nvalid ways to analyze issues than just application level test cases.\nIn fact, there are some =better= ways, depending on the issue being\ndiscussed.\n\n\n> > treating PostgreSQL as a religion and not a SW product that must\n> > compete against every other DB solution in the real world in order\n> > to be relevant or even survive.\n>\n> Actually, given that we are dependent on PostgreSQL's performance\n> and stability for the whole of the company's revenue, I am pretty\n> certain that I have as much \"real world\" experience of PostgreSQL\n> use as anyone else.\n>\nYour experience was not questioned, and there were \"if\" clauses at the\nbeginning of my comments that you seem to be ignoring. I'm not here\nto waste my or anyone else's time on flames. We've all got work to\ndo.\n\n\n> > Please see my posts with regards to ...\n>\n> I think your other posts were similar to the one which started this\n> thread: full of mighty big pronouncements which turned out to depend\n> on a bunch of not-so-tenable assumptions.\n>\nHmmm. Well, I don't think of algorithm analysis by the likes of\nKnuth, Sedgewick, Gonnet, and Baeza-Yates as being \"not so tenable\nassumptions\", but YMMV. As for \"mighty pronouncements\", that also\nseems a bit misleading since we are talking about quantifiable\nprogramming and computer science issues, not unquantifiable things\nlike politics.\n\n\n> I'm sorry to be so cranky about this, but I get tired of having to\n> defend one of my employer's core technologies from accusations based\n> on half-truths and \"everybody knows\" assumptions. For instance,\n>\nAgain, \"accusations\" is a bit strong. I thought the discussion was\nabout the technical merits and costs of various features and various\nways to implement them, particularly when this product must compete\nfor installed base with other solutions. Being coldly realistic about\nwhat a product's strengths and weaknesses are is, again, just good\nbusiness. Sun Tzu's comment about knowing the enemy and yourself\nseems appropriate here...\n\n\n> > I'll mention thread support in passing,\n>\n> there's actually a FAQ item about thread support, because in the\n> opinion of those who have looked at it, the cost is just not worth\n> the benefit. If you have evidence to the contrary (specific\n> evidence, please, for this application), and have already read all\nthe\n> previous discussion of the topic, perhaps people would be interested\nin\n> opening that debate again (though I have my doubts).\n>\nZeus had a performance ceiling roughly 3x that of Apache when Zeus\nsupported threading as well as pre-forking and Apache only supported\npre forking. The Apache folks now support both. DB2, Oracle, and SQL\nServer all use threads. Etc, etc.\n\nThat's an awful lot of very bright programmers and some serious $$\nvoting that threads are worth it. 
Given all that, if PostgreSQL\nspecific\nthread support is =not= showing itself to be a win that's an\nunexpected\nenough outcome that we should be asking hard questions as to why not.\n\nAt their core, threads are a context switching efficiency tweak.\nSince DB's switch context a lot under many circumstances, threads\nshould be a win under such circumstances. At the least, it should be\nhelpful in situations where we have multiple CPUs to split query\nexecution between.\n\nM$'s first implementation of threads was so \"heavy\" that it didn't\nhelp them (until they actually implemented real threads and called\nthem \"strings\"), but that was not due to the inefficacy of the\nconcept, but rather M$'s implementation and the system environment\nwithin which that implementation was being used. Perhaps something\nsimilar is going on here?\n\nCertainly it's =possible= that threads have nothing to offer\nPostgreSQL, but IMHO it's not =probable=. Just another thing for me\nto add to my TODO heap for looking at...\n\n", "msg_date": "Wed, 09 Apr 2003 22:09:14 GMT", "msg_from": "\"Ron Peacetree\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone working on better transaction locking?" }, { "msg_contents": "\n\"Ron Peacetree\" <[email protected]> wrote in message\nnews:[email protected]...\n> M$'s first implementation of threads was so \"heavy\" that it didn't\n> help them (until they actually implemented real threads and called\n> them \"strings\"),\nTYPO ALERT: M$'s better implementation of threads is called \"fibers\",\nnot \"strings\"\n\n", "msg_date": "Thu, 10 Apr 2003 01:15:26 GMT", "msg_from": "\"Ron Peacetree\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone working on better transaction locking?" }, { "msg_contents": "Ron Peacetree wrote:\n> \"Tom Lane\" <[email protected]> wrote in message\n> > [...]\n> > If you want us to accept such a blanket statement as fact, you'd\n> > better back it up with evidence. Let's see some test cases.\n> Soon as I have the HW and SW to do so, it'll happen. I have some \"bet\n> the company\" decisions to make in the DB realm.\n\nAnd you are comparing what? Just pure features and/or performace, or\ntotal cost of ownership for your particular case?\n\nIt is a common misunderstanding open source would be free software. It\nis not because since the software comes as is, without any warranty and\nit's usually hard to get support provided or backed by large companies,\nit is safe to build you own support team (depends on how much you \"bet\nthe company\"). Replacing license fees and support contracts with payroll\nentries plus taking the feature and performance differences into account\nmakes this comparision a very individual, non-portable task.\n\nUnfortunately most manager type people can produce an annoyingly high\nvolume of questions and suggestions as long as they need more input,\nthen all of the sudden disappear when they made their decision.\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n", "msg_date": "Thu, 10 Apr 2003 09:12:56 -0400", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone working on better transaction locking?" 
}, { "msg_contents": "\"Jan Wieck\" <[email protected]> wrote in message\nnews:[email protected]...\n> Ron Peacetree wrote:\n> > \"Tom Lane\" <[email protected]> wrote in message\n> > > [...]\n> > > If you want us to accept such a blanket statement\n> > > as fact, you'd better back it up with evidence. Let's\n> > > see some test cases.\n> > Soon as I have the HW and SW to do so, it'll happen.\n> > I have some \"bet the company\" decisions to make.\n>\n> And you are comparing what? Just pure features and/or\n> performance, or total cost of ownership for your\n> particular case?\n>\nTechnical Analysis and Business Analysis are two separate, and equally\nnecessary, activities. However, before one can accurately measure\nthings like Total Cost of Ownership, one needs to have accurately and\nsufficiently characterized what will be owned and one's choices as to\nwhat could be owned...\n\n\n> It is a common misunderstanding open source would be\n> free software. It is not because since the software comes\n> as is, without any warranty and it's usually hard to get\n> support provided or backed by large companies, it is safe\n> to build you own support team (depends on how much\n> you \"bet the company\"). Replacing license fees and\n> support contracts with payroll entries plus taking the\n> feature and performance differences into account makes\n> this comparision a very individual, non-portable task.\n>\nVery valid points, and I was a supporter of the FSF and the LPF when\nUsenet was \"the net\" and backbone nodes communicated by modem, so I've\nbeen wrestling with people's sometimes misappropriate\nuse/understanding of the operator \"free\" for some time.\n\nHowever, a correctly done Technical Analysis =should= be reasonably\nportable since among other things you don't want to have to start all\nover if your company's business or business model changes. Clearly\nBusiness Analysis is very context dependant.\n\nIt should also be noted that given the prices of some of the solutions\nout there, there are many companies who's choices are constrained, but\nstill need to stay in business...\n\n\n> Unfortunately most manager type people can produce an\n> annoyingly high volume of questions and suggestions as\n> long as they need more input, then all of the sudden\n> disappear when they made their decision.\n>\nWord. Although the phrase \"manager type people\" could be replaced\nwith \"people\" and the above would still be true IMHO. Thankfully,\nmost of my bosses are people who have worked their up from the\ntechnical trenches, so the conversation at least rates to be focused\nand reasonable while it's occurring...\n\n", "msg_date": "Thu, 10 Apr 2003 18:20:13 GMT", "msg_from": "\"Ron Peacetree\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone working on better transaction locking?" }, { "msg_contents": "On Wed, 9 Apr 2003, Ron Peacetree wrote:\n\n> \"Andrew Sullivan\" <[email protected]> wrote in message\n> news:[email protected]...\n> > On Wed, Apr 09, 2003 at 05:41:06AM +0000, Ron Peacetree wrote:\n> > Nonsense. You explicitly made the MVCC comparison with Oracle, and\n> > are asking for a \"better\" locking mechanism without providing any\n> > evidence that PostgreSQL's is bad.\n> >\n> Just because someone else's is \"better\" does not mean PostgreSQL's is\n> \"bad\", and I've never said such. As I've said, I'll get back to Tom\n> and the list on this.\n\nBut you didn't identify HOW it was better. 
I think that's the point \nbeing made.\n\n> > > Please see my posts with regards to ...\n> >\n> > I think your other posts were similar to the one which started this\n> > thread: full of mighty big pronouncements which turned out to depend\n> > on a bunch of not-so-tenable assumptions.\n> >\n> Hmmm. Well, I don't think of algorithm analysis by the likes of\n> Knuth, Sedgewick, Gonnet, and Baeza-Yates as being \"not so tenable\n> assumptions\", but YMMV. As for \"mighty pronouncements\", that also\n> seems a bit misleading since we are talking about quantifiable\n> programming and computer science issues, not unquantifiable things\n> like politics.\n\nBut the real truth is revealed when the rubber hits the pavement. \nRemember that Linus Torvalds was roundly criticized for his choice of a \nmonolithic development model for his kernel, and was literally told that \nhis choice would restrict it to \"toy\" status and that no commercial OS could \nscale with a monolithic kernel.\n\nThere's no shortage of people with good ideas, just people with the skills \nto implement those good ideas. If you've got a patch to apply that's been \ntested to show something is faster EVERYONE here wants to see it.\n\nIf you've got a theory, no matter how well backed up by academic research, \nit's still just a theory. Until someone writes the code to implement it, \nthe gains are theoretical, and many things that MIGHT help don't because \nof the real world issues underlying your database, like I/O bandwidth or \nCPU <-> memory bandwidth.\n\n> > I'm sorry to be so cranky about this, but I get tired of having to\n> > defend one of my employer's core technologies from accusations based\n> > on half-truths and \"everybody knows\" assumptions. For instance,\n> >\n> Again, \"accusations\" is a bit strong. I thought the discussion was\n> about the technical merits and costs of various features and various\n> ways to implement them, particularly when this product must compete\n> for installed base with other solutions. Being coldly realistic about\n> what a product's strengths and weaknesses are is, again, just good\n> business. Sun Tzu's comment about knowing the enemy and yourself\n> seems appropriate here...\n\nNo, you're wrong. Postgresql doesn't have to compete. It doesn't have to \nwin. It doesn't need a marketing department. All those things are nice, \nand I'm glad if it does them, but it doesn't HAVE TO. Postgresql has to \nwork. It does that well.\n\nPostgresql CAN compete if someone wants to put the effort into competing, \nbut it isn't a priority for me. Working is the priority, and if other \npeople aren't smart enough to test Postgresql to see if it works for them, \nall the better, I keep my edge by having a near zero cost database engine, \nwhile the competition spends money on MSSQL or Oracle.\n\nTom and Andrew ARE coldly realistic about the shortcomings of postgresql. \nIt has issues, and things that need to be fixed. It needs more coders. \nIt doesn't need every feature that Oracle or DB2 have. Heck some of their \n\"features\" would be considered a mis-feature in the Postgresql world.\n\n> > > I'll mention thread support in passing,\n> >\n> > there's actually a FAQ item about thread support, because in the\n> > opinion of those who have looked at it, the cost is just not worth\n> > the benefit. 
If you have evidence to the contrary (specific\n> > evidence, please, for this application), and have already read all\n> the\n> > previous discussion of the topic, perhaps people would be interested\n> in\n> > opening that debate again (though I have my doubts).\n> >\n> Zeus had a performance ceiling roughly 3x that of Apache when Zeus\n> supported threading as well as pre-forking and Apache only supported\n> pre forking. The Apache folks now support both. DB2, Oracle, and SQL\n> Server all use threads. Etc, etc.\n\nYes, and if you configured your apache server to have 20 or 30 spare \nservers, in the real world, it was nearly neck and neck to Zeus, but since \nZeus cost like $3,000 a copy, it is still cheaper to just overwhelm it \nwith more servers running apache than to use zeus.\n\n> That's an awful lot of very bright programmers and some serious $$\n> voting that threads are worth it. \n\nFor THAT application. for what a web server does, threads can be very \nuseful, even useful enough to put up with the problems created by running \nthreads on multiple threading libs on different OSes. \n\nLet me ask you, if Zeus scrams and crashes out, and it's installed \nproperly so it just comes right back up, how much data can you lose?\n\nIf Postgresql scrams and crashes out, how much data can you lost?\n\n> Given all that, if PostgreSQL\n> specific\n> thread support is =not= showing itself to be a win that's an\n> unexpected\n> enough outcome that we should be asking hard questions as to why not.\n\nThere HAS been testing on threads in Postgresql. It has been covered to \ndeath. The fact that you're still arguing proves you likely haven't read \nthe archive (google has it back to way back when, use that to look it up) \nabout this subject.\n\nThreads COULD help on multi-sorted results, and a few other areas, but the \nincrease in performance really wasn't that great for 95% of all the cases, \nand for the 5% it was, simple query planner improvements have provided far \ngreater performance increases.\n\nThe problem with threading is that we can either use the one process -> \nmany thread design, which I personally don't trust for something like a \ndatabase, or a process per backend connection which can run \nmulti-threaded. This scenario makes Postgresql just as stable and \nreliable as it was as a multi-process app, but allows threaded performance \nin certain areas of the backend that are parallelizable to run in parallel \non multi-CPU systems.\n\nthe gain, again, is minimal, and on a system with many users accessing it, \nthere is NO real world gain.\n\n> At their core, threads are a context switching efficiency tweak.\n\nExcept that on the two OSes which Postgresql runs on the most, threads are \nreally no faster than processes. In the Linux kernel, the only real \ndifference is how the OS treats them, creation, destruction of threads \nversus processes is virtually identical there.\n\n> Certainly it's =possible= that threads have nothing to offer\n> PostgreSQL, but IMHO it's not =probable=. Just another thing for me\n> to add to my TODO heap for looking at...\n\nIt's been tested, it didn't help a lot, and it made it MUCH harder to \nmaintain, as threads in Linux are handled by a different lib than in say \nSolaris, or Windows or any other OS. I.e. you can't guarantee the thread \nlib you need will be there, and that there are no bugs. 
MySQL still has \nthread bug issues pop up, most of which are in the thread libs themselves.\n\n", "msg_date": "Fri, 11 Apr 2003 13:31:06 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone working on better transaction locking?" }, { "msg_contents": "> -----Original Message-----\n> From: Ron Peacetree [mailto:[email protected]] \n> Sent: Wednesday, April 09, 2003 12:38 PM\n> To: [email protected]\n> Subject: Re: [HACKERS] No merge sort?\n> \n> \n> \"\"Dann Corbit\"\" <[email protected]> wrote in message \n> news:D90A5A6C612A39408103E6ECDD77B829408AC6@voyager.corporate.connx.co\n> m...\n> > Distribution sort is not an algorithm. It is a general technique.\n> Both\n> > counting sort and radix sort are distribution sorts. I think you\n> are\n> > talking about counting sort here. In order to insert a technique\n> into a\n> > database, you must solve the general case.\n> >\n> Terminology problem. Please look in the references I posted.\n> \n> \n> > > 2= For Radix sort, that's iff (\"if and only if\") you use \n> > > =characters= as the radix of a radix sort and do a MSB aka \n> > > partition-exchange sort. The appropriate radix here is a \n> =3Dfield=3D \n> > > not a character. Since there are 3 fields vs 60 characters the \n> > > problem becomes 2/3 wasted passes instead of 40/60.\n> >\n> > This is lunacy. If you count the field or use it as a radix, you\n> will\n> > need a radix of 40*8 bits= 320 bits. That means you will \n> have 2^320 \n> > 2.136e96 bins.\n> >\n> You only need as many bins as you have unique key values \n> silly ;-) Remember, at its core Radix sort is just a \n> distribution counting sort (that's the name I learned for the \n> general technique). The simplest implementation uses bits as \n> the atomic unit, but there's nothing saying you have to... \n> In a DB, we know all the values of the fields we currently \n> have stored in the DB. We can take advantage of that.\n\nBy what magic do we know this? If a database knew a-priori what all the\ndistinct values were, it would indeed be excellent magic.\n \n> As you correctly implied, the goal is to minimize the number \n> of unique bins you have to, err, distribute, items into. \n> That and having as few duplicates as feasible are the \n> important points.\n> \n> If, for example, we have <= 255 possible values for the each \n> radix (Using your previous example, let's say you have <= 255 \n> unique values for the combined field \"Company+Division\" in \n> the DB and the same for \"Plant\" ), and 4 sets of radix to \n> sort over, it doesn't matter if we are sorting a 32b quantity \n> using a radix of 8b, or a 4 field quantity using a radix of \n> one field (TBF, we should use key and index techniques to \n> minimize the amount of actual data we retrieve and manipulate \n> from disk during the sort. We want to minimize disk I/O, \n> particularly seeking disk I/O). The situations are analogous.\n\nWith 80 bytes, you have 2^320 possible values. There is no way around\nthat. If you are going to count them or use them as a radix, you will\nhave to classify them. The only way you will know how many unique\nvalues you have in \"Company+Division\" is to ... Either sort them or by\nsome other means discover all that are distinct. The database does not\nknow how many there are beforehand. 
Indeed, there could be anywhere\nfrom zero to 2^320 (given enough space) distinct values.\n\nI would be interested to see your algorithm that performs a counting or\nradix sort on 320 bit bins and that works in the general case without\nusing extra space.\n\n> Note TANSTAAFL (as Heinlein would've said): We have to use \n> extra space for mapping the radix values to the radix keys, \n> and our \"inner loop\" for the sorting operation is \n> considerably more complicated than that for a quicksort or a \n> mergesort. Hence the fact that even though this is an O(n) \n> technique, in real world terms you can expect only a 2-3X \n> performance improvement over say quicksort for realistic \n> amounts of data.\n> \n> Oh and FTR, IME for most \"interesting\" sorts in the DB \n> domain, even for \"internal\" sorting techniques, the time to \n> read data from disk and\n> (possibly) write results back to disk tends to dominate the \n> time spent actually doing the internal sort...\n> \n> I learned tricks like this 20 years ago. I thought this \n> stuff was part of general lore in the DB field... *shrug*\n\nAs my grandfather used to say:\n\"Picklesmoke.\"\n\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to \n> [email protected]\n> \n\n", "msg_date": "Fri, 11 Apr 2003 13:51:01 -0700", "msg_from": "\"Dann Corbit\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No merge sort?" }, { "msg_contents": "Ron Peacetree wrote:\n> Zeus had a performance ceiling roughly 3x that of Apache when Zeus\n> supported threading as well as pre-forking and Apache only supported\n> pre forking. The Apache folks now support both. DB2, Oracle, and SQL\n> Server all use threads. Etc, etc.\n\nYou can't use Apache as an example of why you should thread a database\nengine, except for the cases where the database is used much like the\nweb server is: for numerous short transactions.\n\n> That's an awful lot of very bright programmers and some serious $$\n> voting that threads are worth it. Given all that, if PostgreSQL\n> specific thread support is =not= showing itself to be a win that's\n> an unexpected enough outcome that we should be asking hard questions\n> as to why not.\n\nIt's not that there won't be any performance benefits to be had from\nthreading (there surely will, on some platforms), but gaining those\nbenefits comes at a very high development and maintenance cost. You\nlose a *lot* of robustness when all of your threads share the same\nmemory space, and make yourself vulnerable to classes of failures that\nsimply don't happen when you don't have shared memory space.\n\nPostgreSQL is a compromise in this regard: it *does* share memory, but\nit only shares memory that has to be shared, and nothing else. To get\nthe benefits of full-fledged threads, though, requires that all memory\nbe shared (otherwise the OS has to tweak the page tables whenever it\nswitches contexts between your threads).\n\n> At their core, threads are a context switching efficiency tweak.\n\nThis is the heart of the matter. Context switching is an operating\nsystem problem, and *that* is where the optimization belongs. 
Threads\nexist in large part because operating system vendors didn't bother to\ndo a good job of optimizing process context switching and\ncreation/destruction.\n\nUnder Linux, from what I've read, process creation/destruction and\ncontext switching happens almost as fast as thread context switching\non other operating systems (Windows in particular, if I'm not\nmistaken).\n\n> Since DB's switch context a lot under many circumstances, threads\n> should be a win under such circumstances. At the least, it should be\n> helpful in situations where we have multiple CPUs to split query\n> execution between.\n\nThis is true, but I see little reason that we can't do the same thing\nusing fork()ed processes and shared memory instead.\n\nThere is context switching within databases, to be sure, but I think\nyou'll be hard pressed to demonstrate that it is anything more than an\ninsignificant fraction of the total overhead incurred by the database.\nI strongly suspect that much larger gains are to be had by optimizing\nother areas of the database, such as the planner, the storage manager\n(using mmap for file handling may prove useful here), the shared\nmemory system (mmap may be faster than System V style shared memory),\netc.\n\nThe big overhead in the process model on most platforms is in creation\nand destruction of processes. PostgreSQL has a relatively high\nconnection startup cost. But there are ways of dealing with this\nproblem other than threading, namely the use of a connection caching\nmiddleware layer. Such layers exist for databases other than\nPostgreSQL, so the high cost of fielding and setting up a database\nconnection is *not* unique to PostgreSQL ... which suggests that while\nthreading may help, it doesn't help *enough*.\n\nI'd rather see some development work go into a connection caching\nprocess that understands the PostgreSQL wire protocol well enough to\nlook like a PostgreSQL backend to connecting processes, rather than\nsee a much larger amount of effort be spent on converting PostgreSQL\nto a threaded architecture (and then discover that connection caching\nis still needed anyway).\n\n> Certainly it's =possible= that threads have nothing to offer\n> PostgreSQL, but IMHO it's not =probable=. Just another thing for me\n> to add to my TODO heap for looking at...\n\nIt's not that threads don't have anything to offer. It's that the\ncosts associated with them are high enough that it's not at all clear\nthat they're an overall win.\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n\n", "msg_date": "Fri, 11 Apr 2003 14:32:59 -0700", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone working on better transaction locking?" }, { "msg_contents": "\"\"Dann Corbit\"\" <[email protected]> wrote in message\nnews:D90A5A6C612A39408103E6ECDD77B829408ACB@voyager.corporate.connx.co\nm...\n> > From: Ron Peacetree [mailto:[email protected]]=20\n> > Sent: Wednesday, April 09, 2003 12:38 PM\n> > You only need as many bins as you have unique key values=20\n> > silly ;-) Remember, at its core Radix sort is just a=20\n> > distribution counting sort (that's the name I learned for the=20\n> > general technique). The simplest implementation uses bits as=20\n> > the atomic unit, but there's nothing saying you have to...=20=20\n> > In a DB, we know all the values of the fields we currently=20\n> > have stored in the DB. We can take advantage of that.\n>\n> By what magic do we know this? 
If a database knew a-priori what all the\n> distinct values were, it would indeed be excellent magic.\nFor any table already in the DB, this is self evident. If you are\ntalking about sorting data =before= it gets put into the DB (say for a\nload), then things are different, and the best you can do is use\ncomparison based methods (and probably some reasonably sophisticated\nexternal merging routines in addition if the data set to be sorted and\nloaded is big enough). The original question as I understood it was\nabout a sort as part of a query. That means everything to be sorted\nis in the DB, and we can take advantage of what we know.\n\n\n> With 80 bytes, you have 2^320 possible values. There is no way\n> around that. If you are going to count them or use them as a\n> radix, you will have to classify them. The only way you will\n> know how many unique values you have in\n> \"Company+Division\" is to ...\n> Either sort them or by some means discover all that are distinct\nUmmm, Indexes, particularly Primary Key Indexes, anyone? Finding the\nunique values in an index should be perceived as trivial... ...and you\noften have the index in memory for other reasons already.\n\n\n> . The database does not know how many there are\n> beforehand. Indeed, there could be anywhere\n> from zero to 2^320 (given enough space) distinct values.\n>\n> I would be interested to see your algorithm that\n> performs a counting or radix sort on 320 bit bins and that\n> works in the general case without using extra space.\n>\n1= See below. I clearly stated that we need to use extra space.\n2= If your definition of \"general\" is \"we know nothing about the\ndata\", then of course any method based on using more sophisticated\nordering operators than comparisons is severely hampered, if not\ndoomed. This is not a comparison based method. You have to know some\nthings about the data being sorted before you can use it. I've been\n=very= clear about that throughout this.\n\n\n> > Note TANSTAAFL (as Heinlein would've said): We have to use\n> > extra space for mapping the radix values to the radix keys,\n> > and our \"inner loop\" for the sorting operation is\n> > considerably more complicated than that for a quicksort or a\n> > mergesort. Hence the fact that even though this is an O(n)\n> > technique, in real world terms you can expect only a 2-3X\n> > performance improvement over say quicksort for realistic\n> > amounts of data.\n> >\n> > Oh and FTR, IME for most \"interesting\" sorts in the DB\n> > domain, even for \"internal\" sorting techniques, the time to\n> > read data from disk and\n> > (possibly) write results back to disk tends to dominate the\n> > time spent actually doing the internal sort...\n> >\nSince you don't believe me, and have implied you wouldn't believe me\neven if I posted results of efforts on my part, go sort some 2GB files\nby implementing the algorithms in the sources I've given and as\nmentioned in this thread. (I'm assuming that you have access to\n\"real\" machines that can perform this as an internal sort, or we\nwouldn't be bothering to have this discussion). Then come back to the\ntable if you still think there are open issues to be discussed.\n\n", "msg_date": "Sat, 12 Apr 2003 00:01:40 GMT", "msg_from": "\"Ron Peacetree\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No merge sort?" 
}, { "msg_contents": "\"Ron Peacetree\" <[email protected]> wrote in message\nnews:[email protected]...\n> \"\"Dann Corbit\"\" <[email protected]> wrote in message\n>\nnews:D90A5A6C612A39408103E6ECDD77B829408ACB@voyager.corporate.connx.co\n> m...\n> > With 80 bytes, you have 2^320 possible values. There is no way\n> > around that. If you are going to count them or use them as a\n> > radix, you will have to classify them. The only way you will\n> > know how many unique values you have in\n> > \"Company+Division\" is to ...\n> > Either sort them or by some means discover all that are distinct\n> Ummm, Indexes, particularly Primary Key Indexes, anyone? Finding\nthe\n> unique values in an index should be perceived as trivial... ...and\nyou\n> often have the index in memory for other reasons already.\n>\nInteresting Note: DB2 and Oracle have switches that turn on a table\nfeature that keeps track of all the unique values of a field and a\ncounter for how often each of those unique values occurs. The\nimplications for speeding up non write querys involving those tables\nshould be obvious (again, TANSTAAFL: writes are now going to have the\nextra overhead of updating this information)...\n\nWonder how hard this would be to put into PostgreSQL?\n\n", "msg_date": "Sat, 12 Apr 2003 02:33:04 GMT", "msg_from": "\"Ron Peacetree\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No merge sort?" }, { "msg_contents": "On Saturday 12 April 2003 03:02, you wrote:\n> Ron Peacetree wrote:\n> > Zeus had a performance ceiling roughly 3x that of Apache when Zeus\n> > supported threading as well as pre-forking and Apache only supported\n> > pre forking. The Apache folks now support both. DB2, Oracle, and SQL\n> > Server all use threads. Etc, etc.\n>\n> You can't use Apache as an example of why you should thread a database\n> engine, except for the cases where the database is used much like the\n> web server is: for numerous short transactions.\n\nOK. Let me put my experiences. These are benchmarks on a intranet(100MBps lan) \nrun off a 1GHZ P-III/IV webserver on mandrake9 for a single 8K file.\n\napache2044: 1300 rps\nboa: \t4500rps\nZeus: \t6500 rps.\n\nApache does too many things to be a speed daemon and what it offers is pretty \nimpressive from performance POV.\n\nBut database is not webserver. It is not suppose to handle tons of concurrent \nrequests. That is a fundamental difference.\n\n>\n> > That's an awful lot of very bright programmers and some serious $$\n> > voting that threads are worth it. Given all that, if PostgreSQL\n> > specific thread support is =not= showing itself to be a win that's\n> > an unexpected enough outcome that we should be asking hard questions\n> > as to why not.\n>\n> It's not that there won't be any performance benefits to be had from\n> threading (there surely will, on some platforms), but gaining those\n> benefits comes at a very high development and maintenance cost. You\n> lose a *lot* of robustness when all of your threads share the same\n> memory space, and make yourself vulnerable to classes of failures that\n> simply don't happen when you don't have shared memory space.\n\nWell. Threading does not necessarily imply one thread per connection model. \nThreading can be used to make CPU work during I/O and taking advantage of SMP \nfor things like sort etc. This is especially true for 2.4.x linux kernels \nwhere async I/O can not be used for threaded apps. 
This is because threads and signals do \nnot mix together well.\n\nOne connection per thread is not a good model for postgresql since it has \nalready built a robust product around the process paradigm. If I had to start a \nnew database project today, a mix of process+thread is what I would choose, but \npostgresql is not in the same stage of life.\n\n> > At their core, threads are a context switching efficiency tweak.\n>\n> This is the heart of the matter. Context switching is an operating\n> system problem, and *that* is where the optimization belongs. Threads\n> exist in large part because operating system vendors didn't bother to\n> do a good job of optimizing process context switching and\n> creation/destruction.\n\nBut why would a database need tons of context switches if it is not supposed \nto service loads of requests simultaneously? If there are 50 concurrent \nconnections, how much context switching overhead is involved regardless of the \namount of work done in a single connection? Remember that database state is \nmaintained in shared memory. It does not take a context switch to access it.\n\nThe assumption stems from the database being very efficient in creating and \nservicing a new connection. I am not very comfortable with that argument.\n\n> Under Linux, from what I've read, process creation/destruction and\n> context switching happens almost as fast as thread context switching\n> on other operating systems (Windows in particular, if I'm not\n> mistaken).\n\nI hear solaris also has very heavy processes. But postgresql has other issues \nwith solaris as well.\n>\n> > Since DB's switch context a lot under many circumstances, threads\n> > should be a win under such circumstances. At the least, it should be\n> > helpful in situations where we have multiple CPUs to split query\n> > execution between.\n\nCan you give an example where a database does a lot of context switching for \na moderate number of connections?\n\n Shridhar\n\n", "msg_date": "Sat, 12 Apr 2003 12:21:12 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone working on better transaction locking?" }, { "msg_contents": "Shridhar Daithankar wrote:\n> Apache does too many things to be a speed daemon and what it offers\n> is pretty impressive from performance POV.\n>\n> But database is not webserver. It is not suppose to handle tons of\n> concurrent requests. That is a fundamental difference.\n\nI'm not sure I necessarily agree with this. A database is just a\ntool, a means of reliably storing information in such a way that it\ncan be retrieved quickly. Whether or not it \"should\" handle lots of\nconcurrent requests is a question that the person trying to use it\nmust answer.\n\nA better answer is that a database engine that can handle lots of\nconcurrent requests can also handle a smaller number, but not vice\nversa. So it's clearly an advantage to have a database engine that\ncan handle lots of concurrent requests because such an engine can be\napplied to a larger number of problems. That is, of course, assuming\nthat all other things are equal...\n\nThere are situations in which a database would have to handle a lot of\nconcurrent requests. Handling ATM transactions over a large area is\none such situation. A database with current weather information might\nbe another, if it is actively queried by clients all over the country.\nActing as a mail store for a large organization is another. And, of\ncourse, acting as a filesystem is definitely another. :-)\n\n> Well. 
Threading does not necessarily imply one thread per connection\n> model. Threading can be used to make CPU work during I/O and taking\n> advantage of SMP for things like sort etc. This is especially true\n> for 2.4.x linux kernels where async I/O can not be used for threaded\n> apps. as threads and signal do not mix together well.\n\nThis is true, but whether you choose to limit the use of threads to a\nfew specific situations or use them throughout the database, the\ndangers and difficulties faced by the developers when using threads\nwill be the same.\n\n> One connection per thread is not a good model for postgresql since\n> it has already built a robust product around process paradigm. If I\n> have to start a new database project today, a mix of process+thread\n> is what I would choose bu postgresql is not in same stage of life.\n\nCertainly there are situations for which it would be advantageous to\nhave multiple concurrent actions happening on behalf of a single\nconnection, as you say. But that doesn't automatically mean that a\nthread is the best overall solution. On systems such as Linux that\nhave fast process handling, processes are almost certainly the way to\ngo. On other systems such as Solaris or Windows, threads might be the\nright answer (on Windows they might be the *only* answer). But my\nargument here is simple: the responsibility of optimizing process\nhandling belongs to the maintainers of the OS. Application developers\nshouldn't have to worry about this stuff.\n\nOf course, back here in the real world they *do* have to worry about\nthis stuff, and that's why it's important to quantify the problem.\nIt's not sufficient to say that \"processes are slow and threads are\nfast\". Processes on the target platform may well be slow relative to\nother systems (and relative to threads). But the question is: for the\nproblem being solved, how much overhead does process handling\nrepresent relative to the total amount of overhead the solution itself\nincurs?\n\nFor instance, if we're talking about addressing the problem of\ndistributing sorts across multiple CPUs, the amount of overhead\ninvolved in doing disk activity while sorting could easily swamp, in\nthe typical case, the overhead involved in creating parallel processes\nto do the sorts themselves. And if that's the case, you may as well\ngain the benefits of using full-fledged processes rather than deal\nwith the problems that come with the use of threads -- because the\ngains to be found by using threads will be small in relative terms.\n\n> > > At their core, threads are a context switching efficiency tweak.\n> >\n> > This is the heart of the matter. Context switching is an operating\n> > system problem, and *that* is where the optimization belongs. Threads\n> > exist in large part because operating system vendors didn't bother to\n> > do a good job of optimizing process context switching and\n> > creation/destruction.\n> \n> But why would a database need a tons of context switches if it is\n> not supposed to service loads to request simaltenously? If there are\n> 50 concurrent connections, how much context switching overhead is\n> involved regardless of amount of work done in a single connection? \n> Remeber that database state is maintened in shared memory. It does\n> not take a context switch to access it.\n\nIf there are 50 concurrent connections with one process per\nconnection, then there are 50 database processes. 
The context switch\noverhead is incurred whenever the current process blocks (or exhausts\nits time slice) and the OS activates a different process. Since\ndatabase handling is generally rather I/O intensive as services go,\nrelatively few of those 50 processes are likely to be in a runnable\nstate, so I would expect the overall hit from context switching to be\nrather low -- I'd expect the I/O subsystem to fall over well before\ncontext switching became a real issue.\n\nOf course, all of that is independent of whether or not the database\ncan handle a lot of simultaneous requests.\n\n> > Under Linux, from what I've read, process creation/destruction and\n> > context switching happens almost as fast as thread context switching\n> > on other operating systems (Windows in particular, if I'm not\n> > mistaken).\n> \n> I hear solaris also has very heavy processes. But postgresql has\n> other issues with solaris as well.\n\nYeah, I didn't want to mention Solaris because I haven't kept up with\nit and thought that perhaps they had fixed this...\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n\n", "msg_date": "Sat, 12 Apr 2003 03:54:52 -0700", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone working on better transaction locking?" }, { "msg_contents": "On Saturday 12 April 2003 16:24, you wrote:\n> A better answer is that a database engine that can handle lots of\n> concurrent requests can also handle a smaller number, but not vice\n> versa. So it's clearly an advantage to have a database engine that\n> can handle lots of concurrent requests because such an engine can be\n> applied to a larger number of problems. That is, of course, assuming\n> that all other things are equal...\n>\n> There are situations in which a database would have to handle a lot of\n> concurrent requests. Handling ATM transactions over a large area is\n> one such situation. A database with current weather information might\n> be another, if it is actively queried by clients all over the country.\n> Acting as a mail store for a large organization is another. And, of\n> course, acting as a filesystem is definitely another. :-)\n\nWell, there is another aspect one should consider. Tuning a database engine \nfor a specifiic workload is a hell of a job and shifting it to altogether \nother end of paradigm must be justified.\n\nOK. Postgresql is not optimised to handle lots of concurrent connections, at \nleast not much to allow one apache request handler to use a connection. Then \nmiddleware connection pooling like done in php might be a simpler solution to \ngo rather than redoing the postgresql stuff. Because it works.\n\n> This is true, but whether you choose to limit the use of threads to a\n> few specific situations or use them throughout the database, the\n> dangers and difficulties faced by the developers when using threads\n> will be the same.\n\nI do not agree. Let's say I put threading functions in posgresql that do not \ntouch shared memory interface at all. They would be hell lot simpler to code \nand mainten than converting postgresql to one thread per connection model.\n\n> Of course, back here in the real world they *do* have to worry about\n> this stuff, and that's why it's important to quantify the problem.\n> It's not sufficient to say that \"processes are slow and threads are\n> fast\". Processes on the target platform may well be slow relative to\n> other systems (and relative to threads). 
But the question is: for the\n> problem being solved, how much overhead does process handling\n> represent relative to the total amount of overhead the solution itself\n> incurs?\n\nThat is correct. However it would be a fair assumption on part of postgresql \ndevelopers that a process once setup does not have much of processing \noverhead involved as such, given the state of modern server class OS and \nhardware. So postgresql as it is, fits in that model. I mean it is fine that \npostgresql has heavy connections. Simpler solution is to pool them.\n\nThat gets me wondering. Has anybody ever benchmarked how much a database \nconnection weighs in terms of memory/CPU/IO BW. for different databases on \ndifferent platforms? Is postgresql really that slow?\n\n Shridhar\n\n", "msg_date": "Sat, 12 Apr 2003 16:39:56 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone working on better transaction locking?" }, { "msg_contents": "\nShridhar Daithankar <[email protected]> writes:\n\n> But database is not webserver. It is not suppose to handle tons of concurrent \n> requests. That is a fundamental difference.\n\nAnd in one fell swoop you've dismissed the entire OLTP database industry. \n\nHave you ever called a travel agent and had him or her look up a fare in the\nairline database within seconds? Ever placed an order over the telephone? \nEver used a busy database-backed web site?\n\nOn database-backed web sites, probably the main application for databases\ntoday, almost certainly the main application for free software databases,\nevery web page request translates into at least one, probably several database\nqueries. \n\nAll those database queries must complete within a limited time, measured in\nmilliseconds. When they complete another connection needs to be context\nswitched in and run again within milliseconds.\n\nOn a busy web site the database machine will have several processors and be\nprocessing queries for several web pages simultaneously, but what really\nmatters is precisely the context switch time between one set of queries and\nanother.\n\nThe test I'm most interested in in the benchmarks effort is simply an index\nlookup or update of a single record from a large table. How many thousands of\ntransactions per second is postgres going to be able to handle on the same\nmachine as mysql and oracle? How many hundreds of thousands of transactions\nper second will they be able to handle on a 4 processor hyperthreaded machine\nwith a raid array striped across ten disks?\n\n--\ngreg\n\n", "msg_date": "12 Apr 2003 10:59:57 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone working on better transaction locking?" }, { "msg_contents": "Scott Marlowe wrote:\n> On Wed, 9 Apr 2003, Ron Peacetree wrote:\n> \n> > \"Andrew Sullivan\" <[email protected]> wrote in message\n> > news:[email protected]...\n> > > On Wed, Apr 09, 2003 at 05:41:06AM +0000, Ron Peacetree wrote:\n> > > Nonsense. You explicitly made the MVCC comparison with Oracle, and\n> > > are asking for a \"better\" locking mechanism without providing any\n> > > evidence that PostgreSQL's is bad.\n> > >\n> > Just because someone else's is \"better\" does not mean PostgreSQL's is\n> > \"bad\", and I've never said such. As I've said, I'll get back to Tom\n> > and the list on this.\n> \n> But you didn't identify HOW it was better. 
I think that's the point \n> being made.\n\nOh, but he presented such detailed statistics to prove his case, didn't you \nsee it? :-)\n\n> > > > Please see my posts with regards to ...\n> > >\n> > > I think your other posts were similar to the one which started this\n> > > thread: full of mighty big pronouncements which turned out to depend\n> > > on a bunch of not-so-tenable assumptions.\n> > >\n> > Hmmm. Well, I don't think of algorithm analysis by the likes of\n> > Knuth, Sedgewick, Gonnet, and Baeza-Yates as being \"not so tenable\n> > assumptions\", but YMMV. As for \"mighty pronouncements\", that also\n> > seems a bit misleading since we are talking about quantifiable\n> > programming and computer science issues, not unquantifiable things\n> > like politics.\n> \n> But the real truth is revealed when the rubber hits the pavement. \n> Remember that Linux Torvalds was roundly criticized for his choice of a \n> monolithic development model for his kernel, and was literally told that \n> his choice would restrict to \"toy\" status and that no commercial OS could \n> scale with a monolithic kernel.\n\nIndeed. I have the books from all of the above (when I studied databases \nunder Gonnet, Baeza-Yates was his TA...). And I have seen enough cases of the \nconglomeration of multiple algorithms not behaving the way a blind read of \ntheir books might suggest to refuse to blindly assume that things are so \nsimple.\n\nIn the /real/ world, the dictates of flushing buffers to help ensure \nrobustness can combine with having enough memory to virtually eliminate read \nI/O to substantially change the results from some simplistic O(f(n)) analysis.\n\nWhich is NOT to say that computational complexity is unimportant; what it \nindicates is that theoretical results are merely theoretical. And may only \nrepresent a small part of what happens in practice. The nonsense about radix \nsorts was a wonderful example; it would likely only be useful with PostgreSQL \nif you had some fantastical amount of memory that might not actually be able \nto be constructed within the confines of our solar system.\n\n> There's no shortage of people with good ideas, just people with the skills \n> to implement those good ideas. If you've got a patch to apply that's been \n> tested to show something is faster EVERYONE here wants to see it.\n> \n> If you've got a theory, no matter how well backed up by academic research, \n> it's still just a theory. Until someone writes to code to implement it, \n> the gains are theoretical, and many things that MIGHT help don't because \n> of the real world issues underlying your database, like I/O bandwidth or \n> CPU <-> memory bandwidth.\n\nAn unfortunate thing (to my mind) is that *genuinely novel* operating system \nresearch has pretty much disappeared. All we see, these days, are rehashes of \nVMS, MVS, and Unix, along with some reimplementations of P-Code under monikers \nlike \"JVM\", \".NET\" or \"Parrot.\"\n\nThere's good reason for it; if you build something that is much more than 95% \nindistinguishable from Unix, then you'll be left with the *enormous* projects \nof creating completely new infrastructure for compilers, data persistence \n(\"novel\" would mean, to my mind, concepts different from files), program \neditors, and such. But if it's 95% the same as Unix, then Emacs, GCC, CVS, \nPostgreSQL, and all sorts of \"tool chain\" are available to you.\n\nWhat is unfortunate is that it would be nice to try out some things that are \nVery Different. 
Unfortunately, it might take five years of slogging through \nrecreating compilers and editors in order to get in about 6 months of \"solid \nnovel work.\"\n\nOf course, if you don't plan to lift your finger to help make any of it \nhappen, it's easy enough to \"armchair quarterback\" and suggest that someone \nelse do all sorts of would-be \"neat things.\"\n\n> > > I'm sorry to be so cranky about this, but I get tired of having to\n> > > defend one of my employer's core technologies from accusations based\n> > > on half-truths and \"everybody knows\" assumptions. For instance,\n> > >\n> > Again, \"accusations\" is a bit strong. I thought the discussion was\n> > about the technical merits and costs of various features and various\n> > ways to implement them, particularly when this product must compete\n> > for installed base with other solutions. Being coldly realistic about\n> > what a product's strengths and weaknesses are is, again, just good\n> > business. Sun Tzu's comment about knowing the enemy and yourself\n> > seems appropriate here...\n\n> No, you're wrong. Postgresql doesn't have to compete. It doesn't have to \n> win. it doesn't need a marketing department. All those things are nice, \n> and I'm glad if it does them, but doesn't HAVE TO. Postgresql has to \n> work. It does that well.\n\nHaving a bit more of a \"marketing department\" might be a nice thing; it could \nmake it easier for people that would like to deploy PG to get the idea past \nthe higher-ups that have a hard time listening to things that *don't* come \nfrom that department.\n\n> > > > I'll mention thread support in passing,\n> > >\n> > > there's actually a FAQ item about thread support, because in the\n> > > opinion of those who have looked at it, the cost is just not worth\n> > > the benefit. If you have evidence to the contrary (specific\n> > > evidence, please, for this application), and have already read all\n> > the\n> > > previous discussion of the topic, perhaps people would be interested\n> > in\n> > > opening that debate again (though I have my doubts).\n> > >\n> > Zeus had a performance ceiling roughly 3x that of Apache when Zeus\n> > supported threading as well as pre-forking and Apache only supported\n> > pre forking. The Apache folks now support both. DB2, Oracle, and SQL\n> > Server all use threads. Etc, etc.\n> \n> Yes, and if you configured your apache server to have 20 or 30 spare \n> servers, in the real world, it was nearly neck and neck to Zeus, but since \n> Zeus cost like $3,000 a copy, it is still cheaper to just overwhelm it \n> with more servers running apache than to use zeus.\n\nAll quite entertaining. Andrew was perhaps trolling just a little bit there; \nour resident \"algorithm expert\" was certainly easily sucked into leaping down \nthe path-too-much-trod. 
Just as with choices of sorting algorithms, it's easy \nenough for there to be more to things than whatever the latest academic \npropaganda about threading is.\n\nThe VITAL point to be made about threading is that there is a tradeoff, and \nit's not the one that \"armchair-quarterbacks-that-don't-write-code\" likely \nthink of.\n\n--> Hand #1: Implementing a threaded model would require a lot of work, and \nthe *ACTUAL* expected benefits are unknown.\n\n--> Hand #2: So far, other *easier* optimizations have been providing \nsignificant speedups, requiring much less effort.\n\nAt some point in time, it might be that \"doing threading\" might become the \nstrategy most expected to reap the most rewards for the least amount of \nprogrammer effort. Until that time, it's not worth worrying about it.\n\n> > That's an awful lot of very bright programmers and some serious $$\n> > voting that threads are worth it. \n> \n> For THAT application. for what a web server does, threads can be very \n> useful, even useful enough to put up with the problems created by running \n> threads on multiple threading libs on different OSes. \n> \n> Let me ask you, if Zeus scrams and crashes out, and it's installed \n> properly so it just comes right back up, how much data can you lose?\n> \n> If Postgresql scrams and crashes out, how much data can you lost?\n\nThere's another possibility, namely that the \"voting\" may not have anything to \ndo with threading being \"best.\" Instead, it may be a road to allow the \nlargest software houses, that can afford to have enough programmers that can \n\"do threading,\" to crush smaller competitors. After all, threading offers \ndaunting new opportunities for deadlocks, data overruns, and crashes; if only \nthose with the most, best thread programmers can compete, that discourages \nothers from even /trying/ to compete.\n--\noutput = (\"cbbrowne\" \"@ntlug.org\")\nhttp://www3.sympatico.ca/cbbrowne/sgml.html\n\"I visited a company that was doing programming in BASIC in Panama\nCity and I asked them if they resented that the BASIC keywords were in\nEnglish. The answer was: ``Do you resent that the keywords for\ncontrol of actions in music are in Italian?''\" -- Kent M Pitman\n\n", "msg_date": "Sat, 12 Apr 2003 11:00:51 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Anyone working on better transaction locking? " }, { "msg_contents": "On Fri, 2003-04-11 at 17:32, Kevin Brown wrote:\n> The big overhead in the process model on most platforms is in creation\n> and destruction of processes. PostgreSQL has a relatively high\n> connection startup cost. But there are ways of dealing with this\n> problem other than threading, namely the use of a connection caching\n> middleware layer.\n\nFurthermore, IIRC PostgreSQL's relatively slow connection creation time\nhas as much to do with other per-backend initialization work as it does\nwith the time to actually fork() a new backend. If there is interest in\noptimizing backend startup time, my guess would be that there is plenty\nof room for improvement without requiring the replacement of processes\nwith threads.\n\nCheers,\n\nNeil\n\n", "msg_date": "12 Apr 2003 15:29:38 -0400", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone working on better transaction locking?" 
}, { "msg_contents": "Neil Conway wrote:\n\n> Furthermore, IIRC PostgreSQL's relatively slow connection creation time\n> has as much to do with other per-backend initialization work as it does\n> with the time to actually fork() a new backend. If there is interest in\n> optimizing backend startup time, my guess would be that there is plenty\n> of room for improvement without requiring the replacement of processes\n> with threads.\n\nI see there is a whole TODO Chapter devoted to the topic. There is the idea\nof pre-forked and persistent backends. That would be very useful in an\nenvironment where it's quite hard to use connection pooling. We are\ncurrently working on a mail system for a free webmail. The mda (mail\ndelivery agent) written in C connects to the pg database to do some queries\neverytime a new mail comes in. I didn't find a solution for connection\npooling yet.\n\nAbout the TODO items, apache has a nice description of their accept()\nserialization:\nhttp://httpd.apache.org/docs-2.0/misc/perf-tuning.html\n\nPerhaps this could be useful if someone decided to start implementing those\nfeatures.\n\nRegards,\nMichael Paesold\n\n", "msg_date": "Sat, 12 Apr 2003 22:08:40 +0200", "msg_from": "\"Michael Paesold\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone working on better transaction locking?" }, { "msg_contents": "Greg Stark wrote:\n\n>Shridhar Daithankar <[email protected]> writes:\n>\n> \n>\n>>But database is not webserver. It is not suppose to handle tons of concurrent \n>>requests. That is a fundamental difference.\n>> \n>>\n>\n>And in one fell swoop you've dismissed the entire OLTP database industry. \n>\n>Have you ever called a travel agent and had him or her look up a fare in the\n>airline database within seconds? Ever placed an order over the telephone? \n>Ever used a busy database-backed web site?\n> \n>\nThat situation is usually handled by means of a TP Monitor that keeps \nopen database connections ( e.g, CICS + DB2 ).\n\nI think there is some confusion between \"many concurrent connections + \nshort transactions\" and \"many connect / disconnect + short transactions\" \nin some of this discussion.\n\nOLTP systems typically fall into the first case - perhaps because their \ndb products do not have fast connect / disconnect :-). Postgresql plus \nsome suitable middleware (e.g Php) will handle this configuration *with* \nits current transaction model.\n\nI think you are actually talking about the connect / disconnect speed \nrather than the *transaction* model per se.\n\nbest wishes\n\nMark\n\n", "msg_date": "Sun, 13 Apr 2003 10:45:30 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone working on better transaction locking?" }, { "msg_contents": "\nMark Kirkwood <[email protected]> writes:\n\n> I think there is some confusion between \"many concurrent connections + short\n> transactions\" and \"many connect / disconnect + short transactions\" in some of\n> this discussion.\n\nI was intended to clarify that but left it out. In fact I think that's\nprecisely one of the confusions that's obscuring things in this ongoing\ndebate.\n\nWorrying about connection time is indeed a red herring. Most databases have\nslow connection times so most database drivers implement some form of cached\nconnections. 
A lot of effort has gone into working around this particular\ndatabase design deficiency.\n\nHowever even if you reuse existing database connections, you nonetheless are\nstill context switching between hundreds or potentially thousands of threads\nof execution. The lighter-weight that context switch is, the faster it'll be\nable to do that.\n\nFor a web site where all the queries are preparsed, all the data is cached in\nram, and all the queries involve quick single record lookups and updates, the\nmachine is often quite easily driven 100% cpu bound. \n\nIt's tricky to evaluate the cost of the context switches because a big part of\nthe cost is simply the tlb flushes. Not only does a process context switch\ninvolve swapping in memory maps and other housekeeping, but all future memory\naccesses like the data copies that an OLTP system spends most of its time\ndoing are slowed down.\n\nAnd the other question is how much memory does having many processes running\nconsume? Every page those processes are consuming that could have been shared\nis a page that isn't being used for disk caching, and another page to pollute\nthe processor's cache.\n\nSo for example, I wonder how fast postgres would be if there were a thousand\nconnections open, all doing fast one-record index lookups as fast as they can.\n\nPeople are going to say that would just be a poorly designed system, but I\nthink they're just not applying much foresight. Reasonably designed systems\neasily need several hundred connections now, and future large systems will\nundoubtedly need thousands.\n\nAnyways, this is a long standing debate and the FAQ answer is mostly, we'll\nfind out when someone writes the code. Continuing to debate it isn't going to\nbe very productive. My only desire here is to see more people realize that\noptimizing for tons of short transactions using data cached in ram is at least\nas important as optimizing for big complex transactions on huge datasets.\n\n--\ngreg\n\n", "msg_date": "12 Apr 2003 19:21:36 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone working on better transaction locking?" }, { "msg_contents": "Greg Stark <[email protected]> writes:\n> However even if you reuse existing database connections, you nonetheless are\n> still context switching between hundreds or potentially thousands of threads\n> of execution. The lighter-weight that context switch is, the faster it'll be\n> able to do that.\n\n> It's tricky to evaluate the cost of the context switches because a big part of\n> the cost is simply the tlb flushes. Not only does a process context switch\n> involve swapping in memory maps and other housekeeping, but all future memory\n> accesses like the data copies that an OLTP system spends most of its time\n> doing are slowed down.\n\nSo? You're going to be paying those costs *anyway*, because most of the\nprocess context swaps will be between the application server and the\ndatabase. A process swap is a process swap, and if you are doing only\nvery short transactions, few of those swaps will be between database\ncontexts --- app to database to app will be the common pattern. 
Unless\nyou'd like to integrate the client into the same address space as the\ndatabase, I do not see that there's an argument here that says multiple\nthreads in the database will be markedly faster than multiple processes.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Sat, 12 Apr 2003 21:09:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone working on better transaction locking? " }, { "msg_contents": "Greg Stark wrote:\n\n>So for example, I wonder how fast postgres would be if there were a thousand\n>connections open, all doing fast one-record index lookups as fast as they can.\n>\nYes - some form of \"connection reducing\" middleare is probably needed at \nthat point ( unless you have fairly highly spec'ed hardware )\n\n>People are going to say that would just be a poorly designed system, but I\n>think they're just not applying much foresight. Reasonably designed systems\n>easily need several hundred connections now, and future large systems will\n>undoubtedly need thousands.\n> \n>\nI guess the question could be reduced to : whether some form of TP \nMonitor functionality should be built into Postgresql? This *might* be a \nbetter approach - as there may be a limit to how much faster a Pg \nconnection can get. By way of interest I notice that DB2 8.1 has a \nconnection concentrator in it - probably for the very reason that we \nhave been discussing...\n\nMaybe there should be a TODO list item in the Pg \"Exotic Features\" for \nconnection pooling / concentrating ???\n\nWhat do people think ?\n\nMark\n\n", "msg_date": "Sun, 13 Apr 2003 16:00:22 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone working on better transaction locking?" }, { "msg_contents": "Shridhar Daithankar wrote:\n> > There are situations in which a database would have to handle a lot of\n> > concurrent requests. Handling ATM transactions over a large area is\n> > one such situation. A database with current weather information might\n> > be another, if it is actively queried by clients all over the country.\n> > Acting as a mail store for a large organization is another. And, of\n> > course, acting as a filesystem is definitely another. :-)\n> \n> Well, there is another aspect one should consider. Tuning a database\n> engine for a specifiic workload is a hell of a job and shifting it\n> to altogether other end of paradigm must be justified.\n\nCertainly, but that justification comes from the problem being\nsolved. If the nature of the problem demands tons of short\ntransactions (and as I said, a number of problems have such a\nrequirement), then tuning the database so that it can deal with it is\na requirement if that database is to be used at all.\n\nNow, keep in mind that \"tuning the database\" here covers a *lot* of\nground and a lot of solutions, including connection-pooling\nmiddleware.\n\n> OK. Postgresql is not optimised to handle lots of concurrent\n> connections, at least not much to allow one apache request handler\n> to use a connection. Then middleware connection pooling like done in\n> php might be a simpler solution to go rather than redoing the\n> postgresql stuff. Because it works.\n\nI completely agree. 
In fact, I see little reason to change PG's\nmethod of connection handling because I see little reason that a\ngeneral-purpose connection pooling frontend can't be developed.\n\nAnother method that could help is to prefork the postmaster.\n\n> > This is true, but whether you choose to limit the use of threads to a\n> > few specific situations or use them throughout the database, the\n> > dangers and difficulties faced by the developers when using threads\n> > will be the same.\n> \n> I do not agree. Let's say I put threading functions in posgresql\n> that do not touch shared memory interface at all. They would be hell\n> lot simpler to code and mainten than converting postgresql to one\n> thread per connection model.\n\nI think you misunderstand what I'm saying.\n\nThere are two approaches we've been talking about thus far:\n\n1. One thread per connection. In this instance, every thread shares\n exactly the same memory space.\n\n2. One process per connection, with each process able to create\n additional worker threads to handle things like concurrent sorts.\n In this instance, threads that belong to the same process all\n share the same memory space (including the SysV shared memory pool\n that the processes use to communicate with each other), but the\n only memory that *all* the threads will have in common is the SysV\n shared memory pool.\n\nNow, the *scope* of the problems introduced by using threading is\ndifferent between the two approaches, but the *nature* of the problems\nis the same: for any given process, the introduction of threads will\nsignificantly complicate the debugging of memory corruption issues.\nThis problem will be there no matter which approach you use; the only\ndifference will be the scale.\n\nAnd that's why you're probably better off with the third approach:\n\n3. One process per connection, with each process able to create\n additional worker subprocesses to handle things like concurrent\n sorts. IPC between the subprocesses can be handled using a number\n of different mechanisms, perhaps including the already-used SysV\n shared memory pool.\n\nThe reason you're probably better off with this third approach is that\nby the time you need the concurrency for sorting, etc., the amount of\ntime you'll spend on the actual process of sorting, etc. will be so\nmuch larger than the amount of time it takes to create, manage, and\ndestroy the concurrent processes (even on systems that have extremely\nheavyweight processes, like Solaris and Windows) that there will be no\ndiscernable difference between using threads and using processes. It\nmay take a few milliseconds to create, manage, and destroy the\nsubprocesses, but the amount of work to be done is likely to represent\nat least a couple of *hundred* milliseconds for a concurrent approach\nto be worth it at all. And if that's the case, you may as well save\nyourself the problems associated with using threads.\n\nEven if you'd gain as much as a 10% speed improvement by using threads\nto handle concurrent sorts and such instead of processes (an\nimprovement that is likely to be very difficult to achieve), I think\nyou're still going to be better off using processes. 
To justify the\ndangers of using threads, you'd need to see something like a factor of\ntwo or more gain in overall performance, and I don't see how that's\ngoing to be possible even on systems with very heavyweight processes.\n\n\nI might add that the case where you're likely to gain significant\nbenefits from using either threads or subprocesses to handle\nconcurrent sorts is one in which you probably *won't* get many\nconcurrent connections...because if you're dealing with a lot of\nconcurrent connections (no matter how long-lived they may be), you're\nprobably *already* using all of the CPUs on the machine anyway. The\nsituation where doing the concurrent subprocesses or subthreads will\nhelp you is one where the connections in question are relatively\nlong-lived and are performing big, complex queries -- exactly the\nsituation in which threads won't help you at all relative to\nsubprocesses, because the amount of work to do on behalf of the\nconnection will dwarf (that is, be many orders of magnitude greater\nthan) the amount of time it takes to create, manage, and tear down a\nprocess.\n\n> > Of course, back here in the real world they *do* have to worry about\n> > this stuff, and that's why it's important to quantify the problem.\n> > It's not sufficient to say that \"processes are slow and threads are\n> > fast\". Processes on the target platform may well be slow relative to\n> > other systems (and relative to threads). But the question is: for the\n> > problem being solved, how much overhead does process handling\n> > represent relative to the total amount of overhead the solution itself\n> > incurs?\n> \n> That is correct. However it would be a fair assumption on part of\n> postgresql developers that a process once setup does not have much\n> of processing overhead involved as such, given the state of modern\n> server class OS and hardware. So postgresql as it is, fits in that\n> model. I mean it is fine that postgresql has heavy\n> connections. Simpler solution is to pool them.\n\nI'm in complete agreement here, and it's why I have very little faith\nthat a threaded approach to any of the concurrency problems will yield\nenough benefits to justify the very significant drawbacks that a\nthreaded approach brings to the table.\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n\n", "msg_date": "Sat, 12 Apr 2003 21:17:10 -0700", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone working on better transaction locking?" }, { "msg_contents": "[email protected] wrote:\n> > > That's an awful lot of very bright programmers and some serious $$\n> > > voting that threads are worth it. \n> > \n> > For THAT application. for what a web server does, threads can be very \n> > useful, even useful enough to put up with the problems created by running \n> > threads on multiple threading libs on different OSes. \n> > \n> > Let me ask you, if Zeus scrams and crashes out, and it's installed \n> > properly so it just comes right back up, how much data can you lose?\n> > \n> > If Postgresql scrams and crashes out, how much data can you lost?\n> \n> There's another possibility, namely that the \"voting\" may not have\n> anything to do with threading being \"best.\" Instead, it may be a\n> road to allow the largest software houses, that can afford to have\n> enough programmers that can \"do threading,\" to crush smaller\n> competitors. 
After all, threading offers daunting new opportunities\n> for deadlocks, data overruns, and crashes; if only those with the\n> most, best thread programmers can compete, that discourages others\n> from even /trying/ to compete.\n\nYes, but any smart small software shop will realize that threading is\nmore about buzzword compliance than anything else. In the real world\nwhere things must get done, threading is just another tool to use when\nit's appropriate. And the only time it's appropriate is when the\namount of time it takes to create, manage, and tear down a process is\na very large fraction of the total amount of time it takes to do the\nwork.\n\nIf we're talking about databases, it's going to be very rare that\nthreads will *really* buy you any significant performance advantage\nover concurrent processes + shared memory.\n\n\nBuzzword compliance is nice but it doesn't get things done. At the\nend of the day, all that matters is whether or not the tool you chose\ndoes the job you need it to do for as little money as possible. I\nhope in this lean economy that people are starting to realize this.\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n\n", "msg_date": "Sat, 12 Apr 2003 21:31:28 -0700", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone working on better transaction locking?" }, { "msg_contents": "\nTom Lane <[email protected]> writes:\n\n> So? You're going to be paying those costs *anyway*, because most of the\n> process context swaps will be between the application server and the\n> database. \n\nSeparating the database and application onto dedicated machines is normally\nthe first major optimization busy sites do when they discover that having the\ntwo on the same machine never scales well.\n\n--\ngreg\n\n", "msg_date": "13 Apr 2003 01:20:26 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone working on better transaction locking?" }, { "msg_contents": "On Sunday 13 April 2003 09:47, you wrote:\n> Even if you'd gain as much as a 10% speed improvement by using threads\n> to handle concurrent sorts and such instead of processes (an\n> improvement that is likely to be very difficult to achieve), I think\n> you're still going to be better off using processes. To justify the\n> dangers of using threads, you'd need to see something like a factor of\n> two or more gain in overall performance, and I don't see how that's\n> going to be possible even on systems with very heavyweight processes.\n\nI couldn't agree more. \n\nThere is just a corner case to justify threads. Looking around, it would be a \nfair assumption that on any platforms threads are at least as fast as \nprocesses. So using threads it is guarenteed that \"sub-work\" will be lot more \nfaster.\n\nOf course that does not justify threads even in 5% of cases. So again, no \nreason to use threads for sort etc. However the subprocesses used should be \nsimple enough. A process as heavy as a full database connection might not be \ntoo good.\n\n Shridhar\n\n", "msg_date": "Sun, 13 Apr 2003 11:59:59 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone working on better transaction locking?" }, { "msg_contents": "On Saturday 12 April 2003 20:29, you wrote:\n> Shridhar Daithankar <[email protected]> writes:\n> > But database is not webserver. It is not suppose to handle tons of\n> > concurrent requests. 
That is a fundamental difference.\n>\n> And in one fell swoop you've dismissed the entire OLTP database industry.\n>\n> Have you ever called a travel agent and had him or her look up a fare in\n> the airline database within seconds? Ever placed an order over the\n> telephone? Ever used a busy database-backed web site?\n\nWell, I was involved in designing a database solution for a telco for their \nsupport system. That was a fairly big database, aroung 600GB. There was a \nresponse time limit as well. Though it was not millisecond.\n\nThough project did not go thr. for non-technical reasons, the bechmark we did \nwith postgresql/mysql/oracle left an impression that any of mysql/postgresql \nwould handle that kind of load with server clustering. Furthermore postgresql \nwould have been the choice given the mature SQL capabilities it has. Even \nwith oracle, the database had to be clustered to keep the cost low enough.\n\n> On database-backed web sites, probably the main application for databases\n> today, almost certainly the main application for free software databases,\n> every web page request translates into at least one, probably several\n> database queries.\n\nQueries != connection. We are talking about reducing number of connections \nrequired, not number of queries sent across.\n\n> All those database queries must complete within a limited time, measured in\n> milliseconds. When they complete another connection needs to be context\n> switched in and run again within milliseconds.\n>\n> On a busy web site the database machine will have several processors and be\n> processing queries for several web pages simultaneously, but what really\n> matters is precisely the context switch time between one set of queries and\n> another.\n\nWell, If the application is split between application server and database \nserver, I would rather put a cluster of low end database machines and have an \ndata consolidating layer in middleware. That is cheaper than big iron \ndatabase machine and can be expanded as required.\n\nHowever this would not work in all cases unless you are able to partition the \ndata. Otherwise you need a database that can have single database image \nacross machines. \n\nIf and when postgresql moves to mmap based model, postgresql running on mosix \nshould be able to do it. Using PITR mechanism, it would get clustering \nabilities as well. This is been discussed before.\n\nRight now, postgresql does not have any of these capabilities. So using \napplication level data consolidation is the only answer\n\n> The test I'm most interested in in the benchmarks effort is simply an index\n> lookup or update of a single record from a large table. How many thousands\n> of transactions per second is postgres going to be able to handle on the\n> same machine as mysql and oracle? How many hundreds of thousands of\n> transactions per second will they be able to handle on a 4 processor\n> hyperthreaded machine with a raid array striped across ten disks?\n\nI did the same test on a 4 way xeon machine with 4GB of RAM and 40GB of data. \nBoth mysql and postgresql did lookups at approximately 80% speed of oracle. \nIIRC they were doing 600 queries per second but I could be off. It was more \nthan 6 months ago.\n\n However testing number of clients was not a criteria. We only tested with 10 \nconcurrent clients. Mysql freezes at high database loads and high number of \nconcurrent connection. Postgresql has tendency to hold the load muh longer. 
\nSo more the number of connections, faster will be response time. That should \nbe a fairly flat curve for upto 100 concurrent connection. Good enough \nhardware assumed.\n\n Shridhar\n\n", "msg_date": "Sun, 13 Apr 2003 12:13:16 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone working on better transaction locking?" }, { "msg_contents": "\"\"Dann Corbit\"\" <[email protected]> wrote in message\nnews:D90A5A6C612A39408103E6ECDD77B829408ACB@voyager.corporate.connx.co\nm...\n> I would be interested to see your algorithm that performs a counting\n> or radix sort on 320 bit bins and that works in the general case\n> without using extra space.\n>\nSince I can't be sure of my audience, I'm writing for the widest\nreasonably possible audience, and I apologize in advance to those who\nknow most of this.\n\nFirst, let's use some examples for the data to be sorted. I'll\nconsider two cases, 1= each 320b entity is a row, and 2= the original\nexample where there are 3 fields per row; the first being 320b, with\neach of the others being 160b.\nFor case 1, a 2GB file has 2^34/320= 53,687,091 records. Let's call\nit 50x10^6 to keep things simple. Case 2 is 26,843,545 records.\nLet's call that 25x10^6.\n\nLet's examine the non space issues first, then explore space\nrequirements.\n\nThe most space and time efficient comparison based sorting algorithm\n(median-of-three partitioning Qsort with Isort used when almost\nsorted) will on average execute in ~(32/3)*n*ln(n) tyme units\n(~9.455x10^9 for case 1; ~4.543x10^9 for case 2). The use of \"tyme\"\nhere is not a typo or a spice. It's a necessary abstraction because\nreal time units are very HW and SW context dependant.\n\nNow let's consider a radix address calculation sort based approach.\nThe best implementation does a Rsort on the MSBs of the data to the\npoint of guaranteeing that the data is \"almost sorted\" then uses Isort\nas a \"finishing touch\". The Rsort part of the algorithm will execute\non average in 11*p*n-n+O(p*m) tyme units, where p is the number of\npasses used and m is the number of unique states in the radix (2^16\nfor a 16b radix in which we can't prune any possibilities).\n\nFor a 320b data value that we know nothing about, we must consider all\npossible values for any length radix we use. The function\n11*p*n-n+O(p*m) has a minimum when we use a 16b radix:\n11*20*50x10^6-50x10^6+O(20*2^16)= ~10.951x10^9 tyme units. If we use\nenough Rsort passes to get the data \"almost ordered\" and then use\nIsort to finish things we can do a bit better. Since Qsort needs a\naverage of ~9.455x10^9 tyme units to sort the same data. It's not\nclear that the effort to code Rsort is worth it for this case.\n\nHowever, look what happens if we can assume that the number of\ndistinct key values is small enough that we can sort the data in a\nrelatively few number of passes: if the 320b key has <= 2^16 distinct\nvalues, we can sort them using one pass in ~500x10^6+O(2^16) tyme\nunits. Even noting that O(2^16) has just become much larger since we\nare now manipulating key values rather than bits, this is stupendously\nbetter than Qsort and there should be no question that Rsort is worth\nconsidering in such cases. FTR, this shows that my naive (AKA \"I\ndidn't do the math\") instinct was wrong. 
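To make the distinct-key flavour concrete, here's a rough sketch --
purely illustrative, with an invented row layout, and assuming the
small sorted "map" of distinct 40 byte key values has already been
pulled out of an index -- of the distribution sort being described:

/*
 * Hedged illustration only, not PostgreSQL code.  One counting pass
 * plus one scatter pass over the rows; the binary search over the
 * distinct-key map stands in for whatever key->slot mapping the
 * index would really give you.
 */
#include <stdlib.h>
#include <string.h>

#define KEYLEN 40                       /* 320 bit key */

typedef struct { unsigned char key[KEYLEN]; int payload; } Row;

/* map holds m sorted KEYLEN-byte distinct key values */
static size_t bucket_of(const unsigned char *key,
                        const unsigned char (*map)[KEYLEN], size_t m)
{
    size_t lo = 0, hi = m;

    while (lo < hi)
    {
        size_t mid = (lo + hi) / 2;

        if (memcmp(map[mid], key, KEYLEN) < 0)
            lo = mid + 1;
        else
            hi = mid;
    }
    return lo;                          /* assumes the key is in the map */
}

void distribution_sort(const Row *in, Row *out, size_t n,
                       const unsigned char (*map)[KEYLEN], size_t m)
{
    size_t *count = calloc(m + 1, sizeof(size_t));
    size_t  i;

    for (i = 0; i < n; i++)             /* pass 1: bucket sizes */
        count[bucket_of(in[i].key, map, m) + 1]++;
    for (i = 1; i <= m; i++)            /* prefix sums -> start offsets */
        count[i] += count[i - 1];
    for (i = 0; i < n; i++)             /* pass 2: stable scatter */
        out[count[bucket_of(in[i].key, map, m)]++] = in[i];

    free(count);
}

In the sketch the per-record cost still includes a log(m) probe into
the map; with a direct key->slot mapping taken from the index it drops
to the constant per-record work behind the ~500x10^6 tyme unit figure
for the single pass sort of the 50x10^6 record case, with everything
else being O(m) work on the map itself.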
This technique is FAR better\nthan I originally was estimating under these circumstances.\n\nAs long as we can store a \"map\" of unique keys in a relatively small\namount of space and use relatively few radix passes, this technique is\nfar better than comparison based sorting methods. Said mapping is\neasily derived from table indexes, and some DB systems (DB2 and\nOracle) will generate and save this data for you as part of their set\nof optimization tweaks. Even if none of that is true, an extra full\nscan of a memory resident data set to derive this information is cheap\nif the gamble pays off and you find out that the number of distinct\nkey values is a small subset of the potential universe of said.\n\nNow let's consider that \"extra space\". At the least Qsort is going to\nneed extra space proportional to lg(n) for its call tree (explicit or\nsystem level to support recursion). If we are using a 16b radix, the\nextra space in question in 64KB. In either case, it's in the noise.\n\n If we are using a mapping of <= 2^16 keys, the extra space in\nquestion is ~2.5MB. Considering we are sorting an ~2GB file and the\nincredible improvement in running time, this seems to be a very\nworthwhile investment.\n\nThis would be worth adding support for to PostgreSQL.\n\n", "msg_date": "Sun, 13 Apr 2003 10:59:38 GMT", "msg_from": "\"Ron Peacetree\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No merge sort?" }, { "msg_contents": "\n\"Ron Peacetree\" <[email protected]> wrote in message\nnews:[email protected]...\n> If we are using a mapping of <= 2^16 keys, the extra space in\n> question is ~2.5MB.\nEDIT: add \"...for 320b keys\"\n\n", "msg_date": "Sun, 13 Apr 2003 11:17:47 GMT", "msg_from": "\"Ron Peacetree\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No merge sort?" }, { "msg_contents": "Greg Stark wrote:\n> \n> Tom Lane <[email protected]> writes:\n> \n> > So? You're going to be paying those costs *anyway*, because most of the\n> > process context swaps will be between the application server and the\n> > database.\n> \n> Separating the database and application onto dedicated machines is normally\n> the first major optimization busy sites do when they discover that having the\n> two on the same machine never scales well.\n\nIf there is enough of an \"application\" to separate it ... :-)\n\nPeople talk about \"database backed websites\" when all they need is\nthousands of single index lookups. Can someone give me a real world\nexample of such a website? And if so, what's wrong with using ndbm/gdbm? \n\nAll these hypothetical arguments based on \"the little test I made\" don't\nlead to anything. I can create such tests that push the CPU load to 20\nor more or get all the drive LED's nervous in no time. They don't tell\nanything, that's the problem. They are purely synthetic. They hammer a\nfew different simple queries in a totally unrealistic, hysteric fashion\nagainst a database and are called benchmarks. There are absolutely no\nmeans of consistency checks built into the tests and if one really runs\nchecksum tests after 100 concurrent clients hammered this other super\nfast superior sql database for 10 minutes people wonder how inconsistent\nit can become after 10 minutes ... without a single error message.\n\nAnyone ever thought about a reference implementation of TPC-W? \n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. 
#\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n", "msg_date": "Sun, 13 Apr 2003 10:00:40 -0400", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone working on better transaction locking?" }, { "msg_contents": "Ron Peacetree wrote:\n> \n> \"Jan Wieck\" <[email protected]> wrote in message\n> > And you are comparing what? Just pure features and/or\n> > performance, or total cost of ownership for your\n> > particular case?\n> >\n> Technical Analysis and Business Analysis are two separate, and equally\n> necessary, activities. However, before one can accurately measure\n> things like Total Cost of Ownership, one needs to have accurately and\n> sufficiently characterized what will be owned and one's choices as to\n> what could be owned...\n\nOkay, so you are doing the technical analysis for now.\n\n> [...]\n> However, a correctly done Technical Analysis =should= be reasonably\n> portable since among other things you don't want to have to start all\n> over if your company's business or business model changes. Clearly\n> Business Analysis is very context dependant.\n\nHowever, doing a technical analysis correctly does not mean to blindly\nask about all the advanced features for each subsystem. The technical\nanalysis is part of the entire evaluation process. That process starts\nwith collecting the business requirements and continues with specifying\nthe technical requirements based on that. Not the other way round,\nbecause technology should not drive, it should serve (unless the\ntechnology in question is your business).\n\nPossible changes in business model might slightly change the technical\nrequirements in the future, so an appropriate security margin is added.\nBut the attempt to build canned technical analysis for later reuse is\nwhat leads to the worst solutions. How good is a decision based on 2\nyear old technical information?\n\nNow all the possible candidates get compared against these\n\"requirements\". That candidate \"O\" has the super duper buzzword feature\n\"XYZ\" candidate \"P\" does not have is of very little importance unless\n\"XYZ\" is somewhere on the requirements list. The availability of that\nextra feature will not result in any gain here.\n\nIn an earlier eMail you pointed out that 2 phase commit is essential for\nSMP/distributed applications. I know well what a distributed application\nis, but what in the world is an SMP application?\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n", "msg_date": "Sun, 13 Apr 2003 11:16:55 -0400", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone working on better transaction locking?" }, { "msg_contents": "Michael Paesold wrote:\n> I see there is a whole TODO Chapter devoted to the topic. There is the idea\n> of pre-forked and persistent backends. That would be very useful in an\n> environment where it's quite hard to use connection pooling. We are\n> currently working on a mail system for a free webmail. The mda (mail\n> delivery agent) written in C connects to the pg database to do some queries\n> everytime a new mail comes in. 
I didn't find a solution for connection\n> pooling yet.\n\nI am still playing with the model of reusing connections in a\ntransparent fashion with a pool manager that uses SCM_RIGHTS messages\nover UNIX domain socketpairs. I will scribble down some concept anytime\nsoon. This will include some more advantages than pure startup cost\nreduction, okay?\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n", "msg_date": "Sun, 13 Apr 2003 11:25:54 -0400", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone working on better transaction locking?" }, { "msg_contents": "[ Warning, topic drift ahead ]\n\nShridhar Daithankar <[email protected]> writes:\n> However this would not work in all cases unless you are able to partition the\n> data. Otherwise you need a database that can have single database image \n> across machines. \n\n> If and when postgresql moves to mmap based model, postgresql running on mosix\n> should be able to do it.\n\nIn a thread that's been criticizing handwavy arguments for fundamental\nredesigns offering dubious performance improvements, you should know\nbetter than to say such a thing ;-)\n\nI don't believe that such a design would work at all, much less have\nany confidence that it would give acceptable performance. Would mosix\nshared memory support TAS mutexes? I don't see how it could, really.\nThat leaves you needing to come up with some other low-level lock\nmechanism and get it to have adequate performance across CPUs. Even\nafter you've got the locking to work, what would performance be like?\nPostgres is built on the assumption of cheap access to shared data\nstructures (lock manager, buffer manager, etc) and I don't think this'll\nqualify as cheap.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Sun, 13 Apr 2003 11:45:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone working on better transaction locking? " }, { "msg_contents": "Tom Lane kirjutas P, 13.04.2003 kell 18:45:\n> [ Warning, topic drift ahead ]\n> \n> Shridhar Daithankar <[email protected]> writes:\n> > However this would not work in all cases unless you are able to partition the\n> > data. Otherwise you need a database that can have single database image \n> > across machines. \n> \n> > If and when postgresql moves to mmap based model, postgresql running on mosix\n> > should be able to do it.\n> \n> In a thread that's been criticizing handwavy arguments for fundamental\n> redesigns offering dubious performance improvements, you should know\n> better than to say such a thing ;-)\n> \n> I don't believe that such a design would work at all, much less have\n> any confidence that it would give acceptable performance. Would mosix\n> shared memory support TAS mutexes? 
I don't see how it could, really.\n> That leaves you needing to come up with some other low-level lock\n> mechanism and get it to have adequate performance across CPUs.\n\nDoes anybody have any idea how Oracle RAC does it ?\n\nThey seem to need to syncronize a lot (at least locks and data cache\ncoherency) across different machines.\n\n> Even\n> after you've got the locking to work, what would performance be like?\n> Postgres is built on the assumption of cheap access to shared data\n> structures (lock manager, buffer manager, etc) and I don't think this'll\n> qualify as cheap.\n\n[OT]\n\nI vaguely remember some messages about getting PG to work well on NUMA\ncomputers, which by definition should have non-uniformly cheap access to\nshared data structures. \n\nThey must have faced similar problems.\n\n-------------\nHannu\n\n", "msg_date": "13 Apr 2003 22:43:06 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone working on better transaction locking?" }, { "msg_contents": "Hannu Krosing <[email protected]> writes:\n> I vaguely remember some messages about getting PG to work well on NUMA\n> computers, which by definition should have non-uniformly cheap access to\n> shared data structures. \n\nMy recollection of the thread is that we didn't know how to do it ;-)\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Sun, 13 Apr 2003 23:37:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone working on better transaction locking? " }, { "msg_contents": "Mark Kirkwood wrote:\n\n> Maybe there should be a TODO list item in the Pg \"Exotic Features\" \n> for connection pooling / concentrating ???\n>\n>\nOh dear, there already is... (under \"Startup Time\"), I just missed it :-(\n\n\nMark\n\n", "msg_date": "Mon, 14 Apr 2003 19:19:54 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone working on better transaction locking?" }, { "msg_contents": "Ron Peacetree wrote:\n> \"\"Dann Corbit\"\" <[email protected]> wrote in message\n> news:D90A5A6C612A39408103E6ECDD77B829408ACB@voyager.corporate.connx.co\n> m...\n> > > From: Ron Peacetree [mailto:[email protected]]=20\n> > > Sent: Wednesday, April 09, 2003 12:38 PM\n> > > You only need as many bins as you have unique key values=20\n> > > silly ;-) Remember, at its core Radix sort is just a=20\n> > > distribution counting sort (that's the name I learned for the=20\n> > > general technique). The simplest implementation uses bits as=20\n> > > the atomic unit, but there's nothing saying you have to...=20=20\n> > > In a DB, we know all the values of the fields we currently=20\n> > > have stored in the DB. We can take advantage of that.\n> >\n> > By what magic do we know this? If a database knew a-priori what all\n> the\n> > distinct values were, it would indeed be excellent magic.\n> For any table already in the DB, this is self evident. If you are\n> talking about sorting data =before= it gets put into the DB (say for a\n> load), then things are different, and the best you can do is used\n> comparison based methods (and probably some reasonably sophisticated\n> external merging routines in addition if the data set to be sorted and\n> loaded is big enough). The original question as I understood it was\n> about a sort as part of a query. That means everything to be sorted\n> is in the DB, and we can take advantage of what we know.\n> \n> > With 80 bytes, you have 2^320 possible values. 
There is no way\n> > around that. If you are going to count them or use them as a\n> > radix, you will have to classify them. The only way you will\n> > know how many unique values you have in\n> > \"Company+Division\" is to ...\n> > Either sort them or by some means discover all that are distinct\n\n> Ummm, Indexes, particularly Primary Key Indexes, anyone? Finding the\n> unique values in an index should be perceived as trivial... ...and you\n> often have the index in memory for other reasons already.\n\nBut this does not remove the fact that a radix sort on an 80 byte field\nrequires O(2^320) space, that is, O(N), where N is the size of the state\nspace of the key...\n\nPerhaps you need to reread Gonnet; it tells you that...\n--\noutput = reverse(\"moc.enworbbc@\" \"enworbbc\")\nhttp://www3.sympatico.ca/cbbrowne/multiplexor.html\nRules of the Evil Overlord #28. \"My pet monster will be kept in a\nsecure cage from which it cannot escape and into which I could not\naccidentally stumble.\" <http://www.eviloverlord.com/>\n\n", "msg_date": "Mon, 14 Apr 2003 12:52:57 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: No merge sort? " }, { "msg_contents": "On Sun, Apr 13, 2003 at 10:43:06PM +0300, Hannu Krosing wrote:\n> Does anybody have any idea how Oracle RAC does it ?\n\nAccording to some marketing literature I saw, it was licensed\ntechnology; it was supposed to be related to VMS. Moredetails I\ndon't have, though.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Mon, 14 Apr 2003 14:48:57 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone working on better transaction locking?" }, { "msg_contents": "> On Sun, Apr 13, 2003 at 10:43:06PM +0300, Hannu Krosing wrote:\n> > Does anybody have any idea how Oracle RAC does it ?\n> \n> According to some marketing literature I saw, it was licensed\n> technology; it was supposed to be related to VMS. More details I\n> don't have, though.\n\nThat would fit perfectly with it having been part of the purchase of Rdb\nfrom Digital... There might well be some \"harvestable\" Rdb information\nout there somewhere...\n\nhttp://citeseer.nj.nec.com/lomet92private.html\n\n(note that (Rdb != /RDB) && (Rdb != Rand RDB); there's an unfortunate\npreponderance of \"things called RDB\")\n--\n(reverse (concatenate 'string \"moc.enworbbc@\" \"enworbbc\"))\nhttp://www.ntlug.org/~cbbrowne/rdbms.html\nRules of the Evil Overlord #15. \"I will never employ any device with a\ndigital countdown. If I find that such a device is absolutely\nunavoidable, I will set it to activate when the counter reaches 117\nand the hero is just putting his plan into operation.\"\n<http://www.eviloverlord.com/>\n\n", "msg_date": "Mon, 14 Apr 2003 16:56:45 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Anyone working on better transaction locking? " }, { "msg_contents": "Several people have asked if we are losing momentum. Specifically, they\nare concerned about Red Hat dropping their Red Hat Database and instead\ndistributing PostgreSQL as part of Red Hat Enterprise Server, and they\nare concerned about recent press articles about MySQL.\n\nLet me address these. First, the Red Hat change probably has a lot to\ndo with Oracle's relationship with Red Hat, and very little to do with\nPostgreSQL. 
Their pullback is similar to Great Bridge's closing, except\nthat Red Hat's database group is still around, so we aren't losing Tom\nLane or Patrick MacDonald (who is completing our PITR work for 7.4).\n\nAs far as MySQL, they have a company to push articles to the press, and\nmany writers just dress them up and print them --- you can tell them\nbecause the pushed ones mention only MySQL, while the non=pushed ones\nmention MySQL and PostgreSQL.\n\nI have been around the globe enough to know that PostgreSQL is well on\ntrack. Our user base is growing, we have Win32 and PITR ready for 7.4\n(and each had some commercial funding to make them happen.) Recently, I\nhave also been fielding questions from several companies that want to\nhire PostgreSQL developers to work for the community.\n\nBut most importantly, there is mind share. I get _very_ few questions\nabout MySQL anymore, and when the database topic comes up on Slashdot,\nthe MySQL guys usually end up looking foolish for using MySQL. And my\nrecent trip to Toronto (who's details I have shared with core but can\nnot discuss) left no doubt in my mind that PostgreSQL is moving forward\nat a rapid rate.\n\nAnd, I have 1.5k emails to read after a one week trip. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\n", "msg_date": "Mon, 14 Apr 2003 18:37:57 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Are we losing momentum?" }, { "msg_contents": "With the feature set that Postgres has, it isn't going to lose momentum. \n\nIt is lacking in some areas that are slowly being addressed. \n\nIf they weren't being addressed, THEN, postgres would lose momentum.\n\nWe're fortunate to have good volunteers and the private donations of\ncompanies as well.\n\nBruce Momjian wrote:\n> \n> Several people have asked if we are losing momentum. Specifically, they\n> are concerned about Red Hat dropping their Red Hat Database and instead\n> distributing PostgreSQL as part of Red Hat Enterprise Server, and they\n> are concerned about recent press articles about MySQL.\n> \n> Let me address these. First, the Red Hat change probably has a lot to\n> do with Oracle's relationship with Red Hat, and very little to do with\n> PostgreSQL. Their pullback is similar to Great Bridge's closing, except\n> that Red Hat's database group is still around, so we aren't losing Tom\n> Lane or Patrick MacDonald (who is completing our PITR work for 7.4).\n> \n> As far as MySQL, they have a company to push articles to the press, and\n> many writers just dress them up and print them --- you can tell them\n> because the pushed ones mention only MySQL, while the non=pushed ones\n> mention MySQL and PostgreSQL.\n> \n> I have been around the globe enough to know that PostgreSQL is well on\n> track. Our user base is growing, we have Win32 and PITR ready for 7.4\n> (and each had some commercial funding to make them happen.) Recently, I\n> have also been fielding questions from several companies that want to\n> hire PostgreSQL developers to work for the community.\n> \n> But most importantly, there is mind share. I get _very_ few questions\n> about MySQL anymore, and when the database topic comes up on Slashdot,\n> the MySQL guys usually end up looking foolish for using MySQL. 
And my\n> recent trip to Toronto (who's details I have shared with core but can\n> not discuss) left no doubt in my mind that PostgreSQL is moving forward\n> at a rapid rate.\n> \n> And, I have 1.5k emails to read after a one week trip. :-)\n> \n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n\n", "msg_date": "Mon, 14 Apr 2003 16:41:28 -0700", "msg_from": "Dennis Gearon <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Several people have asked if we are losing momentum.\n\nI don't think we are losing momentum considering the project in\nisolation --- things seem to be moving as well as they ever have,\nif not better.\n\nBut I do sometimes worry that we are losing the mindshare war.\nWe might be growing fine, but if we're growing slower than MySQL is,\nwe've got a problem. I was just in the local Barnes & Noble store\nyesterday, and could not help but notice how many books had \"MySQL\" in\nthe title. I didn't notice a single Postgres title (though I did not\nlook hard, since I was just passing through the computer area).\n\nMindshare eventually translates into results, if only because it\nmeans that capable developers will gravitate there instead of here.\nSo we need to worry about it.\n\nThere isn't anyone presently willing to spend real money and effort on\nmarketing PG (as you say, Red Hat won't, for reasons that have nothing\nto do with the merits of the product). That means that MySQL's\nmarketeers have a free hand to do things like boast about features that\nmight materialize in a year or so :-(\n\nI don't know what we can do about it, other than maybe push harder to\nget some more PG titles into O'Reilly's catalog ... that would help\nnarrow the bookshelf gap a little, at least. Any wannabee authors\nout there? (And Bruce, your book is due for a second edition...)\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Mon, 14 Apr 2003 19:54:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum? " }, { "msg_contents": "On Mon, 14 Apr 2003, Tom Lane wrote:\n\n> Bruce Momjian <[email protected]> writes:\n> > Several people have asked if we are losing momentum.\n> \n> I don't think we are losing momentum considering the project in\n> isolation --- things seem to be moving as well as they ever have,\n> if not better.\n\nI agree. I am surprised at the pace at which new features are added,\nconsidering the relatively small number of people working on the project.\n\n> \n> But I do sometimes worry that we are losing the mindshare war.\n> We might be growing fine, but if we're growing slower than MySQL is,\n> we've got a problem. I was just in the local Barnes & Noble store\n> yesterday, and could not help but notice how many books had \"MySQL\" in\n> the title. I didn't notice a single Postgres title (though I did not\n> look hard, since I was just passing through the computer area).\n\nI've considered this at length. I put some ideas together in December and\nsent it off to the advocacy list. Most/all were not implemented -- not\nleast because I didn't do anything I said I would :-). 
But, some of the\nmost important things, such as a proper media kit, quotes for journos,\npress contacts with authority to give fast/correct answers really need to\nbe implemented.\n\nAs for why MySQL has *significantly* more market share: there's not a lot\nwe can match them on. They have significant financial backing -- important\nif you're an IT manager who actually knows very little about the technical\nmerit of the product. It has close ties to a *very* widely deployed\nscripting language (PHP). MySQL AB employs marketing and 'advocacy' staff,\nwho attend conferences all over the world, speak several languages, and\nhave a fairly good understanding of the industry, open source, databases,\netc. They have infrastructure: tech support, on site support,\nconsultancy.\n\nMySQL AB promotes MySQL as a high performance database, easy to use,\nuncomplicated, with features implemented in a way which is syntactically\nconvenient -- not 'complicated' like Oracle, DB2 or Postgres.\n\nIts hard to argue against that. At a *technical* conference I recently\nspoke at, I was criticised for delivering a talk which was too advanced\nand didn't explain Postgres for MySQL users. During a lecture series at a\nuniversity, I was criticised for not discussing Oracle instead of Postgres\n-- students told me that Oracle will make them money and Postgres wont.\n\nRegardless, I'm still of the opinion that if you build it, they will come\n-- particularly costly features like replication, PITR, etc. But maybe\nthat is what the BSDs say about Linux?\n\nGavin\n\n", "msg_date": "Tue, 15 Apr 2003 10:18:59 +1000 (EST)", "msg_from": "Gavin Sherry <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum? " }, { "msg_contents": "Gavin Sherry wrote:\n> During a lecture series at a university, I was criticised for not\n> discussing Oracle instead of Postgres -- students told me that\n> Oracle will make them money and Postgres wont.\n\nTheir impressions are probably based on reality as it was a couple of\nyears ago before the U.S. economy came crashing down.\n\nBut today? Companies are trying to figure out how to do things\ncheaper, and there are a lot of situations for which Postgres is a\ngood fit but for which MySQL is a bad fit -- if it'll fit at all.\n\n\nI seriously think the native Win32 port of Postgres will make a big\ndifference, because it'll be a SQL Server killer. Especially if it\ncomes with a nice administrative GUI. :-)\n\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n\n", "msg_date": "Mon, 14 Apr 2003 17:30:27 -0700", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "On Mon, 14 Apr 2003, Kevin Brown wrote:\n\n> Gavin Sherry wrote:\n> > During a lecture series at a university, I was criticised for not\n> > discussing Oracle instead of Postgres -- students told me that\n> > Oracle will make them money and Postgres wont.\n> \n> Their impressions are probably based on reality as it was a couple of\n> years ago before the U.S. economy came crashing down.\n> \n> But today? Companies are trying to figure out how to do things\n> cheaper, and there are a lot of situations for which Postgres is a\n> good fit but for which MySQL is a bad fit -- if it'll fit at all.\n> \n> \n> I seriously think the native Win32 port of Postgres will make a big\n> difference, because it'll be a SQL Server killer. Especially if it\n> comes with a nice administrative GUI. :-)\n\nI've been thinking about this too. 
Addressing Tom's point: any one with\nWindows experience, interested in the native port and willing to write a\nWindows book would probably do a lot for the project. For one, I would be\nwilling to help write parts which were not Windows specific -- as I\nhaven't used that system in some time :-).\n\nGavin\n\n", "msg_date": "Tue, 15 Apr 2003 10:38:05 +1000 (EST)", "msg_from": "Gavin Sherry <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "\n\n--On Monday, April 14, 2003 19:54:27 -0400 Tom Lane <[email protected]> \nwrote:\n\n> Bruce Momjian <[email protected]> writes:\n>> Several people have asked if we are losing momentum.\n>\n> I don't think we are losing momentum considering the project in\n> isolation --- things seem to be moving as well as they ever have,\n> if not better.\n>\n> But I do sometimes worry that we are losing the mindshare war.\n> We might be growing fine, but if we're growing slower than MySQL is,\n> we've got a problem. I was just in the local Barnes & Noble store\n> yesterday, and could not help but notice how many books had \"MySQL\" in\n> the title. I didn't notice a single Postgres title (though I did not\n> look hard, since I was just passing through the computer area).\nI was in the Local MicroCenter, and found 3 PG titles, in addition to \nBruce's.\n\nThis is MUCH better than a year ago, when there were NONE.\n\nAgreed, that MySQL, has a bigger shelf space.\n\nI did all 3 authors a favor and bought copies.\n\nLER\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n", "msg_date": "Mon, 14 Apr 2003 20:37:00 -0500", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum? " }, { "msg_contents": "Gretings!\n\n[2003-04-14 19:54] Tom Lane said:\n| Bruce Momjian <[email protected]> writes:\n| > Several people have asked if we are losing momentum.\n\n| I don't know what we can do about it, other than maybe push harder to\n| get some more PG titles into O'Reilly's catalog ... that would help\n| narrow the bookshelf gap a little, at least. Any wannabee authors\n| out there? (And Bruce, your book is due for a second edition...)\n\n I've wanted to pipe up in a few of these \"popularity\" \ndiscussions in the past. Seeing how I can't make time to\nparticipate in any other meaningful capacity, I'll share\nmy thoughts on _why_ mysql has the mindshare.\n\n\n Applications, specifically applications that _use_ mysql.\n\n\n A quick search over at freshmeat returns 1044 results for \n\"mysql\" and 260 for \"postgresql\". Before this turns into a \ncause/effect discussion, I want to state up front that the \nreal \"effect\" of this is that someone is 4 times as likely to \ndownload an application that uses mysql. Sure, many are \n\"trivial\" applications, but I posit that it is _specifically_ \nthese \"trivial\" applications that inoculate the uninitiated \nwith the belief that mysql is suitable for use in real, albeit\ntrivial applications. Additionally, it these rudimentary \napplications that will be studied by many as the way to write \na database application.\n\n It is all good and well that postgres /can/ do, but until\nthe application developers see that those features are\nvaluable enough to forgo mysql support, they'll write the \napplication to support whatever database is most likely to \n_already_ be installed, which will be mysql. 
Granted, \nmany developers will also try to support multiple dbs via\nthe language's db api, but this leaves the less-supported\ndbs in an even worse position; being relegated to an\n\"might work with XXX database\". When anxious user learns\nthat \"might\" currently means \"doesn't,\" the second-string\ndatabase looks even worse in the eyes of the user.\n\n How to solve this problem? This is the hard part, but\nluckily ISTM that there are a few ways to succeed. Neither\nof which involves marketing or writing books.\n\n 1) become active in the \"also supports postgres\" projects,\n and add features that are made available _because_ of\n postgres' superiority. Eventually, market pressure\n for the cool feature(s) will lead users to choose\n postgres, and mysql could be relegated to the \"also\n runs on mysql, with limited featureset\"\n 2) take a popular project that uses mysql, fork it, and\n add features that can only be implemented using posgres.\n 3) release that super-cool code that you've been hacking\n on for years, especially if it is a \"trivial\" app.\n 4) convince your employer that it would be _beneficial_ to\n them to release, as open source, the internal app(s) you've \n developed, using postgres-specific features. (This is \n about all I can claim to be doing at this point in my \n indentured servitude, and I can't say I'm doing a good\n job... :-/)\n\n I'm sure this idea is not original, but I'm also sure that\nit _is_ the answer to gaining market^Wmindshare in this\ndatabase market.\n\n (I must apologize in advance, that I might not have time\nto even follow this thread, in fact, I hope that instead of\nreplying to this, the potential respondent might consider\nhelping to increase the number of apps that require postgres\n:-)\n\nwishing-I-could-contribute-more-ly yours,\n brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n\n", "msg_date": "Mon, 14 Apr 2003 21:52:17 -0400", "msg_from": "Brent Verner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "> 1) become active in the \"also supports postgres\" projects,\r\n> and add features that are made available _because_ of\r\n> postgres' superiority. Eventually, market pressure\r\n> for the cool feature(s) will lead users to choose\r\n> postgres, and mysql could be relegated to the \"also\r\n> runs on mysql, with limited featureset\"\r\n\r\nTake, for example, phpPgAdmin. It was originally forked from phpMyAdmin, but we've just done a complete rewrite (because phpMyAdmin was written my mysql/php weenies who couldn't code nicely to save their lives...).\r\n\r\nHowever, it's me doing 99% of the coding, Rob doing advocacy and a heap of people who send in translations. Translations are very nice, but I so rarely get actual code contributions.\r\n\r\nphpMyAdmin even implements it's OWN comment and foreign key feature!!\r\n\r\nChris\r\n", "msg_date": "Tue, 15 Apr 2003 10:15:03 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "Kevin, without the \"e\", wrote...\n> I seriously think the native Win32 port of Postgres will make a big\n> difference, because it'll be a SQL Server killer. Especially if it\n> comes with a nice administrative GUI. 
:-)\n\nI wouldn't be too sanguine about that, from two perspectives:\n\n a) There's a moving target, here, in that Microsoft seems to be\n looking for the next \"new thing\" to be the elimination of\n the use of \"files\" in favor of the filesystem being treated\n as a database.\n\n b) We recently were considering how we'd put a sharable Windows box \n in, at the office. Were considering using VNC to allow it to be\n accessible. Then someone thought to read the license, only to\n discover that the license pretty much expressly forbids running\n \"foreign, competing applications\" on the platform.\n\nIt seems pretty plausible that the net result of further development\nwill be platforms that are actively hostile to foreign software.\n\nIf I suggested that the licensing of Win2003 would expressly forbid\ninstalling PostgreSQL, people would rightly accuse me of being a\nparanoid conspiracy theorist.\n\nBut considering that the thought of VNC being outlawed would have seemed\npretty daft a few years ago, and we see things like DMCA combining with\n\"Homeland Security.\" Anti-\"hacking\" provisions have been going into\ntelecom laws that appear to classify network hardware that can do NAT as\n\"illegal hacking\" equipment. I'm not sure what we'd have to consider\n\"daft\" come 2005...\n--\n(reverse (concatenate 'string \"gro.mca@\" \"enworbbc\"))\nhttp://cbbrowne.com/info/internet.html\n\"Heuristics (from the French heure, \"hour\") limit the amount of time\nspent executing something. [When using heuristics] it shouldn't take\nlonger than an hour to do something.\"\n\n", "msg_date": "Mon, 14 Apr 2003 22:24:39 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum? " }, { "msg_contents": "On Mon, 14 Apr 2003, Brent Verner wrote:\n\n> Applications, specifically applications that _use_ mysql.\n>\n> A quick search over at freshmeat returns 1044 results for\n> \"mysql\" and 260 for \"postgresql\".\n\nThat's a pretty reasonable thought. I work for a shop that sells\nPostgres support, and even we install MySQL for the Q&D ticket tracking\nsystem we recommend because we can't justify the cost to port it to\npostgres. If the postgres support were there, we would surely be using it.\n\nHow to fix such a situation, I'm not sure. \"MySQL Compatability Mode,\"\nanyone? :-)\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Tue, 15 Apr 2003 11:36:07 +0900 (JST)", "msg_from": "Curt Sampson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "\nI agree, things aren't good when you look at the book shelf and app\nsupport, but fortunately these are things that are shaded more by the\nstate of things 1-3 years ago rather than currently. 
Certainly, we\nwould have seen an even worse ratio than 1:4 if we had looked last year\n--- we aren't on parity yet, but I think we are getting there.\n\n---------------------------------------------------------------------------\n\nLarry Rosenman wrote:\n> \n> \n> --On Monday, April 14, 2003 19:54:27 -0400 Tom Lane <[email protected]> \n> wrote:\n> \n> > Bruce Momjian <[email protected]> writes:\n> >> Several people have asked if we are losing momentum.\n> >\n> > I don't think we are losing momentum considering the project in\n> > isolation --- things seem to be moving as well as they ever have,\n> > if not better.\n> >\n> > But I do sometimes worry that we are losing the mindshare war.\n> > We might be growing fine, but if we're growing slower than MySQL is,\n> > we've got a problem. I was just in the local Barnes & Noble store\n> > yesterday, and could not help but notice how many books had \"MySQL\" in\n> > the title. I didn't notice a single Postgres title (though I did not\n> > look hard, since I was just passing through the computer area).\n> I was in the Local MicroCenter, and found 3 PG titles, in addition to \n> Bruce's.\n> \n> This is MUCH better than a year ago, when there were NONE.\n> \n> Agreed, that MySQL, has a bigger shelf space.\n> \n> I did all 3 authors a favor and bought copies.\n> \n> LER\n> \n> \n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 E-Mail: [email protected]\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> \n> \n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\n", "msg_date": "Mon, 14 Apr 2003 22:48:30 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "\r\n> That's a pretty reasonable thought. I work for a shop that sells\r\n> Postgres support, and even we install MySQL for the Q&D ticket tracking\r\n> system we recommend because we can't justify the cost to port it to\r\n> postgres. If the postgres support were there, we would surely be using it.\r\n> \r\n> How to fix such a situation, I'm not sure. \"MySQL Compatability Mode,\"\r\n> anyone? :-)\r\n\r\nThe real problem is PHP. PHP is just the cruftiest language ever invented (trust me, I use it every day). The PHP people are totally dedicated to MySQL, to the exclusion of all rational thought (eg. When I asked Rasmas at a conference about race conditions in his replicated setup, he replied \"it's never going to happen - MySQL's replication is just too fast...).\r\n\r\nChris\r\n", "msg_date": "Tue, 15 Apr 2003 10:50:03 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "[email protected] wrote:\n\n> Kevin, without the \"e\", wrote...\n> \n>>I seriously think the native Win32 port of Postgres will make a big\n>>difference, because it'll be a SQL Server killer. Especially if it\n>>comes with a nice administrative GUI. :-)\n\nI agree. I don't think PostgreSQL will be a SQL Server killer,\nbut my completely ignorant guess is that 90% of the cause of the\n*initial* gap between mySQL and PostgreSQL grew out of the fact\nthat a Win32 version of mySQL was available. Once the gap became\npresent, one then had to suffer switching costs. 
If the\nfeatures/performance of PostgreSQL > mySQL switching costs, then\nPostgreSQL wins in the long term. Without a Win32 port, the\nswitching costs also include those switching costs associated\nwith switching from Win32 to Unix.\n\n> \n> I wouldn't be too sanguine about that, from two perspectives:\n> \n> a) There's a moving target, here, in that Microsoft seems to be\n> looking for the next \"new thing\" to be the elimination of\n> the use of \"files\" in favor of the filesystem being treated\n> as a database.\n\nThey ought to get their database up to speed first, it seems to\nme. I agree Microsoft's view of data management is a moving\ntarget. 6 years ago everything, including network resources were\ngoing to be accessed strickly through an OLE2 Compound Document\ninterface and OLE structured storage. Then the Internet got hot\nand all data suddenly had to be accessible through URLs. Now\nit's XML that hot. Perhaps the Microsoft filesystem of the\nfuture will be one big XML document ;-)\n\n> \n> b) We recently were considering how we'd put a sharable Windows box \n> in, at the office. Were considering using VNC to allow it to be\n> accessible. Then someone thought to read the license, only to\n> discover that the license pretty much expressly forbids running\n> \"foreign, competing applications\" on the platform.\n> \n> It seems pretty plausible that the net result of further development\n> will be platforms that are actively hostile to foreign software.\n> \n> If I suggested that the licensing of Win2003 would expressly forbid\n> installing PostgreSQL, people would rightly accuse me of being a\n> paranoid conspiracy theorist.\n\nI think you are a paranoid conspiracy theorist. :-)\n\nMike Mascari\[email protected]\n\n", "msg_date": "Mon, 14 Apr 2003 22:51:41 -0400", "msg_from": "Mike Mascari <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "Bruce Momjian wrote:\n> I agree, things aren't good when you look at the book shelf and app\n> support, but fortunately these are things that are shaded more by the\n> state of things 1-3 years ago rather than currently. Certainly, we\n> would have seen an even worse ratio than 1:4 if we had looked last year\n> --- we aren't on parity yet, but I think we are getting there.\n\nWhat's missing are the \"FOO Applications With PostgreSQL\" sorts of\nbooks, where\n (member FOO '(|Web| |PHP| |Perl| |Python| |Application Frameworks|))\n\nThe one PostgreSQL book that _does_ have some of this is the O'Reilly\none, where I was disappointed to see how much of the book was devoted to\na framework I /wasn't/ planning to use.\n\nRight at the moment is probably /not/ a good time to be pushing books on\npotentially-obscure application areas; my ex-publisher (Wrox) just\nbecame an ex-publisher as a result of trying too hard to too quickly\nhawk too many books in obscure application areas.\n\nMy suspicion is that this, along with very soft book sales throughout\nthe publishing industry, is likely to make \"obscure application area\"\nbooks a tough sell in the short term. 
Like it or not, \"PostgreSQL +\nFOO\" is not going to be the easiest sell, particularly in the absence of\nthe much denigrated \"PostgreSQL Marketing Cabal.\"\n--\noutput = (\"cbbrowne\" \"@cbbrowne.com\")\nhttp://cbbrowne.com/info/wp.html\n\"There is no psychiatrist in the world like a puppy licking your\nface.\" -- Ben Williams\n\n", "msg_date": "Mon, 14 Apr 2003 23:12:35 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum? " }, { "msg_contents": ">>>>> \"Tom\" == Tom Lane <[email protected]> writes:\n\n Tom> But I do sometimes worry that we are losing the mindshare\n Tom> war. We might be growing fine, but if we're growing slower\n Tom> than MySQL is, we've got a problem. I was just in the local\n\nThis is probably true. Once people get exposed to PostgreSQL then\nthere is a fair chance of forming an opinion. Today one of the\nundergraduates in my class was telling me how after hacking pgsql\ninternals he has such a different impression of the two systems\n(earlier he'd built a site with MySQL going by the \"works for\nslashdot\" philosophy). \n\n-- \nPeace, at last ?\nSailesh\nhttp://www.cs.berkeley.edu/~sailesh\n\n", "msg_date": "14 Apr 2003 20:13:55 -0700", "msg_from": "Sailesh Krishnamurthy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "Mike Mascari wrote:\n> [email protected] wrote:\n>>I wouldn't be too sanguine about that, from two perspectives:\n>>\n>> a) There's a moving target, here, in that Microsoft seems to be\n>> looking for the next \"new thing\" to be the elimination of\n>> the use of \"files\" in favor of the filesystem being treated\n>> as a database.\n> \n> \n> They ought to get their database up to speed first, it seems to\n> me. I agree Microsoft's view of data management is a moving\n> target. \n\nNot to mention the fact that there's a significant number of NT 4 \nservers still out there -- what is that, 7 years old? A lot of places \naren't upgrading because they don't need to & don't want to shell out \nthe cash. (And it should go without saying that Microsoft is none too \nhappy with it.) With Windows 2K3 just coming out and who knows how much \nlonger until the next version (or ther version after that, who knows \nwhen these \"features\" will actually show up), there's still a \nsignificant window in there for conventional database servers, \nespecially for the price conscious out there.\n\n----\nJeff Hoffmann\nPropertyKey.com\n\n", "msg_date": "Mon, 14 Apr 2003 22:54:11 -0500", "msg_from": "Jeff Hoffmann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" 
}, { "msg_contents": "IMVHO it's reference customers/users more than books & windows ports.\n\nIf I were a naive middle manager in some company, would I rather\nuse:\n\n (a) the database used by Yahoo, Cisco, and Sony?\n (b) the database used by Shannon Med Center, Mohawk SW, Vanten Inc, and BASF.\n\nNow suppose I told that same middle manager there was an open \nsource alternative:\n\n (c) used by Lockheed Martin, Nasdaq, AOL, and Unisys.\n\nAs far as I can tell (5-minutes searching) (c) is PostgreSQL.\n\n http://jobsearch.monster.com/jobsearch.asp?q=postgresql\n http://www.hotjobs.com/cgi-bin/job-search?KEYWORDS=postgres\n http://seeker.dice.com/jobsearch/servlet/JobSearch?op=1002&dockey=xml/8/1/816e9b7e50ae92331bb5c47a791a589f@activejobs0&c=1\n http://seeker.dice.com/jobsearch/servlet/JobSearch?op=1002&dockey=xml/c/8/c8dc5841d18329c6c50b55f67a7ff038@activejobs0&c=1\n http://seeker.dice.com/jobsearch/servlet/JobSearch?op=1002&dockey=xml/1/6/168f30dc84b8f195d1fc35feb6a2f67a@activejobs0&c=1\n \"The Nasdaq Stock Market ... currently looking to fill the following \n positions in Trumbull, CT...Some positions require knowledge of ...Postgre SQL..\"\n\n\nI'm not sure quite what it'd take to get the permission to use\nthese company's names, but surely we could have a list of links \nto the job postings... I'd bet that one of monster, hotjobs, \nand/or dice would even provide a datafeed of relevant jobs to\nbe posted on the postgresql.org site.\n\n\nIf we simply had a list of companies using postgresql highly visible \nsomewhere -- not necessarily a complex case study, just simple list \nof \"company X uses postgresql for Y\" statements -- I think it would \ngo a long way. I'll contribute. InterVideo uses postgresql (for\nrunning user surveys and some internal reporting and development tools).\n\n Ron\n\nPS: No offense to Shannon, Mohawk, Vanten, and yes, I know BASF is\n an awesome company. But they're all, even BASF, less of\n a household name than Sony,Yahoo,Cisco,AOL,Nasdaq,Lockheed.\n\n", "msg_date": "Mon, 14 Apr 2003 21:38:19 -0700", "msg_from": "\"Ron Mayer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "On Tuesday 15 April 2003 05:48, you wrote:\n> Regardless, I'm still of the opinion that if you build it, they will come\n> -- particularly costly features like replication, PITR, etc. But maybe\n> that is what the BSDs say about Linux?\n\nThat is an unfair comparison. The technical differences between BSD and linux \nare not as much as postgresql and mysql. Besides what is the parallel of SQL \nstandard in OS world? POSIX? And both BSD/linux are doing fine sitting next \nto each other on that. \n\nAfter porting my small application in less than half an hour from linux to \nfreeBSD and vice versa, I really do not agree with that comment. Not even in \nthe spirit of it.\n\n Shridhar\n\n", "msg_date": "Tue, 15 Apr 2003 11:38:05 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "On Tue, 2003-04-15 at 00:37, Bruce Momjian wrote:\n> Several people have asked if we are losing momentum. Specifically, they\n> are concerned about Red Hat dropping their Red Hat Database and instead\n> distributing PostgreSQL as part of Red Hat Enterprise Server, and they\n> are concerned about recent press articles about MySQL.\n\nJust reading GENERAL every day shows quite the contrary! There are more\nand more questions which really belong on NOVICE. 
More and more\nquestions about porting applications from MySQL and Access.\n\nRedHat renaming \"PostgreSQL\" to \"PostgreSQL\" after a short stint a.k.a.\n\"Red Hat Database\" is a very positive step.\n\nLots of stupid journalists are starting to write \"replace Oracle with\nMySQL\" rubbish. I am very concerned about this because the DBA who is\nstupid enough to do this will fail miserably and get the wrong\nimpression about free RDBMS. \n\nLets forget the \"replace MySQL with PostgreSQL\" stuff and go looking for\nhigher end converts. Our marketing push should be \"replace Oracle with\nPostgreSQL and replace Access with MySQL\". This puts the emphasis on\nwhich database can do what... \n\nJust my 2 EURO cents\n\nCheers\nTony Grant\n(yes I use PostgreSQL where MySQL would suffice...)\n\n-- \nwww.tgds.net Library management software toolkit, \nredhat linux on Sony Vaio C1XD, \nDreamweaver MX with Tomcat and PostgreSQL\n\n", "msg_date": "15 Apr 2003 08:12:04 +0200", "msg_from": "Tony Grant <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "Curt Sampson <[email protected]> writes:\n> That's a pretty reasonable thought. I work for a shop that sells\n> Postgres support, and even we install MySQL for the Q&D ticket tracking\n> system we recommend because we can't justify the cost to port it to\n> postgres. If the postgres support were there, we would surely be using it.\n\n> How to fix such a situation, I'm not sure. \"MySQL Compatability Mode,\"\n> anyone? :-)\n\nWhat issues are creating a compatibility problem for you?\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 15 Apr 2003 02:40:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum? " }, { "msg_contents": "** Reply to message from Tony Grant <[email protected]> on 15 Apr 2003 08:12:04\n+0200\nHi,\n I've got to agree with this.\n We have just been through the excercise of porting our quite extensive fleet\nmaintenance and accounting package from db2 to postgres. While the port was not\nwithout pain :) the results are very good. We run windows clients, (mostly),\nand whatever the database of choice performs best on for the backend. We are\nseeing performance gains for the 20-100 user bracket of a factor of 10 to 20\nfor switching from db2 to postgres on the same platform.\n We literally could not port this stuff to mysql. It does not have enough\nfeature to support us :)\n Definitely the story is not that postgres competes with mysql/access but\nrather with db2/oracle, and the point in time recovery coming in the next\nrelease, just strengthens that story a whole lot more too.\n\nAnd, client reaction to an opensource (free :) database on linux has been very\nenthusiastic.\n\nRegards,\nWayne\nhttp://www.bacchus.com.au\n\n> On Tue, 2003-04-15 at 00:37, Bruce Momjian wrote:\n> > Several people have asked if we are losing momentum. Specifically, they\n> > are concerned about Red Hat dropping their Red Hat Database and instead\n> > distributing PostgreSQL as part of Red Hat Enterprise Server, and they\n> > are concerned about recent press articles about MySQL.\n> \n> Just reading GENERAL every day shows quite the contrary! There are more\n> and more questions which really belong on NOVICE. 
More and more\n> questions about porting applications from MySQL and Access.\n> \n> RedHat renaming \"PostgreSQL\" to \"PostgreSQL\" after a short stint a.k.a.\n> \"Red Hat Database\" is a very positive step.\n> \n> Lots of stupid journalists are starting to write \"replace Oracle with\n> MySQL\" rubbish. I am very concerned about this because the DBA who is\n> stupid enough to do this will fail miserably and get the wrong\n> impression about free RDBMS. \n> \n> Lets forget the \"replace MySQL with PostgreSQL\" stuff and go looking for\n> higher end converts. Our marketing push should be \"replace Oracle with\n> PostgreSQL and replace Access with MySQL\". This puts the emphasis on\n> which database can do what... \n> \n> Just my 2 EURO cents\n> \n> Cheers\n> Tony Grant\n> (yes I use PostgreSQL where MySQL would suffice...)\n> \n> -- \n> www.tgds.net Library management software toolkit, \n> redhat linux on Sony Vaio C1XD, \n> Dreamweaver MX with Tomcat and PostgreSQL\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n\n", "msg_date": "Tue, 15 Apr 2003 18:30:25 +1000", "msg_from": "\"Wayne Armstrong\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "On Tuesday 15 April 2003 14:00, you wrote:\n> ** Reply to message from Tony Grant <[email protected]> on 15 Apr 2003 08:12:04\n> +0200\n> Hi,\n> I've got to agree with this.\n> We have just been through the excercise of porting our quite extensive\n> fleet maintenance and accounting package from db2 to postgres. While the\n> port was not without pain :) the results are very good. We run windows\n> clients, (mostly), and whatever the database of choice performs best on for\n> the backend. We are seeing performance gains for the 20-100 user bracket of\n> a factor of 10 to 20 for switching from db2 to postgres on the same\n> platform.\n> We literally could not port this stuff to mysql. It does not have enough\n> feature to support us :)\n> Definitely the story is not that postgres competes with mysql/access but\n> rather with db2/oracle, and the point in time recovery coming in the next\n> release, just strengthens that story a whole lot more too.\n>\n> And, client reaction to an opensource (free :) database on linux has been\n> very enthusiastic.\n\nIf you don't mind, could you please submit a short write up on your experience \nfor submission on postgresql advocacy-Case Study section? You can either post \nit to posgresql advocacy list or send it to me offlist, if required.\n\nSee http://advocacy.postgresql.org\n\n Shridhar\n\n", "msg_date": "Tue, 15 Apr 2003 14:12:35 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Are we losing momentum?" }, { "msg_contents": "Tom Lane wrote:\n> Curt Sampson <[email protected]> writes:\n> > That's a pretty reasonable thought. I work for a shop that sells\n> > Postgres support, and even we install MySQL for the Q&D ticket tracking\n> > system we recommend because we can't justify the cost to port it to\n> > postgres. If the postgres support were there, we would surely be using it.\n> \n> > How to fix such a situation, I'm not sure. \"MySQL Compatability Mode,\"\n> > anyone? :-)\n> \n> What issues are creating a compatibility problem for you?\n\nErm...reserved words? 
\"Freeze\" is a reserved word, for instance, and\nthat actually bit me when converting an MS-SQL database...\n\nI have no problem with reserved words in principle, at least when they\nrefer to the SQL-standard commands and their options, but it's not\nclear that turning options (such as FREEZE) for PG-specific commands\n(such as VACUUM) into reserved words is a good idea. But it may not\nbe possible to avoid it, unfortunately. :-(\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n\n", "msg_date": "Tue, 15 Apr 2003 02:10:42 -0700", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "On Tue, 15 Apr 2003, Tom Lane wrote:\n\n> > How to fix such a situation, I'm not sure. \"MySQL Compatability Mode,\"\n> > anyone? :-)\n>\n> What issues are creating a compatibility problem for you?\n\nWe can't unthinkingly point the product at a PostgreSQL server and\nhave it Just Work. So all we really need is full SQL and wire-protocol\ncompatability. :-)\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Tue, 15 Apr 2003 18:28:01 +0900 (JST)", "msg_from": "Curt Sampson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum? " }, { "msg_contents": "Hi,\nYour experience is very interesting.\nBut could you tell me what is the Database size ?\n\n Regards,\n Thierry\n\nWayne Armstrong wrote:\n\n> ** Reply to message from Tony Grant <[email protected]> on 15 Apr 2003 08:12:04\n> +0200\n> Hi,\n> I've got to agree with this.\n> We have just been through the excercise of porting our quite extensive fleet\n> maintenance and accounting package from db2 to postgres. While the port was not\n> without pain :) the results are very good. We run windows clients, (mostly),\n> and whatever the database of choice performs best on for the backend. We are\n> seeing performance gains for the 20-100 user bracket of a factor of 10 to 20\n> for switching from db2 to postgres on the same platform.\n> We literally could not port this stuff to mysql. It does not have enough\n> feature to support us :)\n> Definitely the story is not that postgres competes with mysql/access but\n> rather with db2/oracle, and the point in time recovery coming in the next\n> release, just strengthens that story a whole lot more too.\n>\n> And, client reaction to an opensource (free :) database on linux has been very\n> enthusiastic.\n>\n> Regards,\n> Wayne\n> http://www.bacchus.com.au\n>\n> > On Tue, 2003-04-15 at 00:37, Bruce Momjian wrote:\n> > > Several people have asked if we are losing momentum. Specifically, they\n> > > are concerned about Red Hat dropping their Red Hat Database and instead\n> > > distributing PostgreSQL as part of Red Hat Enterprise Server, and they\n> > > are concerned about recent press articles about MySQL.\n> >\n> > Just reading GENERAL every day shows quite the contrary! There are more\n> > and more questions which really belong on NOVICE. More and more\n> > questions about porting applications from MySQL and Access.\n> >\n> > RedHat renaming \"PostgreSQL\" to \"PostgreSQL\" after a short stint a.k.a.\n> > \"Red Hat Database\" is a very positive step.\n> >\n> > Lots of stupid journalists are starting to write \"replace Oracle with\n> > MySQL\" rubbish. 
I am very concerned about this because the DBA who is\n> > stupid enough to do this will fail miserably and get the wrong\n> > impression about free RDBMS.\n> >\n> > Lets forget the \"replace MySQL with PostgreSQL\" stuff and go looking for\n> > higher end converts. Our marketing push should be \"replace Oracle with\n> > PostgreSQL and replace Access with MySQL\". This puts the emphasis on\n> > which database can do what...\n> >\n> > Just my 2 EURO cents\n> >\n> > Cheers\n> > Tony Grant\n> > (yes I use PostgreSQL where MySQL would suffice...)\n> >\n> > --\n> > www.tgds.net Library management software toolkit,\n> > redhat linux on Sony Vaio C1XD,\n> > Dreamweaver MX with Tomcat and PostgreSQL\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]", "msg_date": "Tue, 15 Apr 2003 11:41:47 +0200", "msg_from": "Thierry Missimilly <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "** Reply to message from Thierry Missimilly <[email protected]> on\nTue, 15 Apr 2003 11:41:47 +0200\nHi,\n About 450 tables (+ lots of views:)\n Row counts range from 0 to about 3 million.\n Database sizes range from about 600 meg (a bus company with about 50 busses\nand a years fueling/servicing/rostering history) to around 12 gig currently :).\n We are not newbies with db2. We have been using/suppling it since version\n2.1(We actually started development on ver 1. something, but didn't ship till\nafter the version 2 release :). We do do a good job of tuning db2.\nNevertheless, performance of postgresql across the board (and we have not\ndesigned for postgres like we did for db2), leaves db2 for dead most places. \n\nRegards,\nWayne\n \n> Hi,\n> Your experience is very interesting.\n> But could you tell me what is the Database size ?\n> \n> Regards,\n> Thierry\n> \n> Wayne Armstrong wrote:\n> \n> > ** Reply to message from Tony Grant <[email protected]> on 15 Apr 2003 08:12:04\n> > +0200\n> > Hi,\n> > I've got to agree with this.\n> > We have just been through the excercise of porting our quite extensive fleet\n> > maintenance and accounting package from db2 to postgres. While the port was not\n> > without pain :) the results are very good. We run windows clients, (mostly),\n> > and whatever the database of choice performs best on for the backend. We are\n> > seeing performance gains for the 20-100 user bracket of a factor of 10 to 20\n> > for switching from db2 to postgres on the same platform.\n> > We literally could not port this stuff to mysql. It does not have enough\n> > feature to support us :)\n> > Definitely the story is not that postgres competes with mysql/access but\n> > rather with db2/oracle, and the point in time recovery coming in the next\n> > release, just strengthens that story a whole lot more too.\n> >\n> > And, client reaction to an opensource (free :) database on linux has been very\n> > enthusiastic.\n> >\n> > Regards,\n> > Wayne\n> > http://www.bacchus.com.au\n> >\n> > > On Tue, 2003-04-15 at 00:37, Bruce Momjian wrote:\n> > > > Several people have asked if we are losing momentum. 
Specifically, they\n> > > > are concerned about Red Hat dropping their Red Hat Database and instead\n> > > > distributing PostgreSQL as part of Red Hat Enterprise Server, and they\n> > > > are concerned about recent press articles about MySQL.\n> > >\n> > > Just reading GENERAL every day shows quite the contrary! There are more\n> > > and more questions which really belong on NOVICE. More and more\n> > > questions about porting applications from MySQL and Access.\n> > >\n> > > RedHat renaming \"PostgreSQL\" to \"PostgreSQL\" after a short stint a.k.a.\n> > > \"Red Hat Database\" is a very positive step.\n> > >\n> > > Lots of stupid journalists are starting to write \"replace Oracle with\n> > > MySQL\" rubbish. I am very concerned about this because the DBA who is\n> > > stupid enough to do this will fail miserably and get the wrong\n> > > impression about free RDBMS.\n> > >\n> > > Lets forget the \"replace MySQL with PostgreSQL\" stuff and go looking for\n> > > higher end converts. Our marketing push should be \"replace Oracle with\n> > > PostgreSQL and replace Access with MySQL\". This puts the emphasis on\n> > > which database can do what...\n> > >\n> > > Just my 2 EURO cents\n> > >\n> > > Cheers\n> > > Tony Grant\n> > > (yes I use PostgreSQL where MySQL would suffice...)\n> > >\n> > > --\n> > > www.tgds.net Library management software toolkit,\n> > > redhat linux on Sony Vaio C1XD,\n> > > Dreamweaver MX with Tomcat and PostgreSQL\n> > >\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 2: you can get off all lists at once with the unregister command\n> > > (send \"unregister YourEmailAddressHere\" to [email protected])\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to [email protected]\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n\n", "msg_date": "Tue, 15 Apr 2003 20:10:48 +1000", "msg_from": "\"Wayne Armstrong\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "\n\nChristopher Kings-Lynne wrote:\n\n>>That's a pretty reasonable thought. I work for a shop that sells\n>>Postgres support, and even we install MySQL for the Q&D ticket tracking\n>>system we recommend because we can't justify the cost to port it to\n>>postgres. If the postgres support were there, we would surely be using it.\n>>\n>>How to fix such a situation, I'm not sure. \"MySQL Compatability Mode,\"\n>>anyone? :-)\n>> \n>>\n>\n>The real problem is PHP. PHP is just the cruftiest language ever invented (trust me, I use it every day). The PHP people are totally dedicated to MySQL, to the exclusion of all rational thought (eg. When I asked Rasmas at a conference about race conditions in his replicated setup, he replied \"it's never going to happen - MySQL's replication is just too fast...).\n>\n> \n>\nHey! don't go knocking PHP, it is probably one of the most flexible and \neasy to use systems around. 
I have done several fairly large projects \nwith PHP and while it is an \"ugly\" environment, it performs well enough, \nhas a very usable extension interface, it is quick and easy to even \nlarge projects done.\n\nAs for MySQL, there are two things that PostgreSQL does not do, and \nprobably can not do to support MySQL:\n\n(1) REPLACE INTO (I think that's the name) which does either an insert \nor update into a table depending on the existence of a row. I was told \nthat this was impossible.\n\n(2) MySQL returns a value on insert which is usually usable, for instance,\ninsert into mytable (x,y,z) values(1,2,3);\nselect rowid from mytable where x=1 and y=2 and z=3;\n\nI have had many discussions with MySQL people, and one common thread \nexists. People who use MySQL do not usually understand databases all \nthat well. Arguments about *why* it is a horrible database and barely \nSQL at all, fall on deaf ears. They don't understand PostgreSQL, they \ncomplain that it is \"too big.\" They complain that it is \"too much,\" \nMySQL is all they need. They complain that it is \"too hard\" to use.\n\nAll of these things are largely imagined. PostgreSQL is not much bigger \nthan MySQL, in fact, the difference is negligible with regards to \naverage system capability these days. It isn't any more difficult to \nuse, its just a little different. They, however, feel safe with MySQL. \nMySQL is the Microsoft of databases, everyone uses it because everyone \nuses it, not because it is better or even adequate.\n\nWe need to take projects like Bugzilla (Did RH ever release the PG \nversion or am I way out of date?) and port them to PostgreSQL. We need \nto write free articles for Linux and IT magazines about how to take a \nMySQL project over to PostgreSQL easily, why PostgreSQL is much better \nthan MySQL, lastly we have to play the MySQL benchmark game .. we need \nto create a Benchmark program that clearly shows how PostgreSQL compares \nto MySQL.\n\n", "msg_date": "Tue, 15 Apr 2003 07:51:30 -0400", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "\n\nMike Mascari wrote:\n\n>[email protected] wrote:\n> \n>\n>> b) We recently were considering how we'd put a sharable Windows box \n>> in, at the office. Were considering using VNC to allow it to be\n>> accessible. Then someone thought to read the license, only to\n>> discover that the license pretty much expressly forbids running\n>> \"foreign, competing applications\" on the platform.\n>>\n>>It seems pretty plausible that the net result of further development\n>>will be platforms that are actively hostile to foreign software.\n>>\n>>If I suggested that the licensing of Win2003 would expressly forbid\n>>installing PostgreSQL, people would rightly accuse me of being a\n>>paranoid conspiracy theorist.\n>> \n>>\n>\n>I think you are a paranoid conspiracy theorist. :-)\n>\n>Mike Mascari\n>[email protected]\n> \n>\n\"Just because you're paranoid does not mean they're not out to get you.\"\nHenry Kissinger.\n\n", "msg_date": "Tue, 15 Apr 2003 08:07:37 -0400", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "> Hey! don't go knocking PHP, it is probably one of the most flexible and \n> easy to use systems around. 
I have done several fairly large projects \n> with PHP and while it is an \"ugly\" environment, it performs well enough, \n> has a very usable extension interface, it is quick and easy to even \n> large projects done.\n\nRight. PHP is our friend. In Japan Apache+PHP+PostgreSQL combo is the\nstandard for Web systems. Very few people uses Apache+PHP+MySQL.\n--\nTatsuo Ishii\n\n", "msg_date": "Tue, 15 Apr 2003 21:09:31 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "Tatsuo, this has always fascinated me. Any insights you could share about how PostgreSQL achieved the prominence it has in Japan (and how MySQL did not) would be very interesting.\n\nCheers,\nNed\n\n----- Original Message ----- \nFrom: \"Tatsuo Ishii\" <[email protected]>\nTo: <[email protected]>\nCc: <[email protected]>; <[email protected]>; <[email protected]>; <[email protected]>\nSent: Tuesday, April 15, 2003 8:09 AM\nSubject: Re: [HACKERS] Are we losing momentum?\n\n\n> Hey! don't go knocking PHP, it is probably one of the most flexible and \n> easy to use systems around. I have done several fairly large projects \n> with PHP and while it is an \"ugly\" environment, it performs well enough, \n> has a very usable extension interface, it is quick and easy to even \n> large projects done.\n\nRight. PHP is our friend. In Japan Apache+PHP+PostgreSQL combo is the\nstandard for Web systems. Very few people uses Apache+PHP+MySQL.\n--\nTatsuo Ishii\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: Have you searched our list archives?\n\nhttp://archives.postgresql.org\n\n", "msg_date": "Tue, 15 Apr 2003 08:38:05 -0400", "msg_from": "\"Ned Lilly\" <[email protected]>", "msg_from_op": false, "msg_subject": "PostgreSQL vs. MySQL in Japan (was: Are we losing momentum?)" }, { "msg_contents": "As one of the top search engine campaign optimization companies in the\nspace, we here at Did-it.com have been using Postgresql for over a year\nnow. We had serious locking problems with MySQL and even switching to\ntheir Innodb handler did not solve all the issues.\n\nAs the DB administrator, I recommended we switch when it came time to\nre-write our client platform. That we did, and we have not looked back.\nWe have millions of listings, keywords and we perform live visitor\ntracking in our database. We capture on the order of about 1 million\nvisitors every day, with each hit making updates, selects and possibly\ninserts.\n\nWe could not have done this in mySQL. Basically when I see silly posts\nover on Slashdot about MySQL being as good a sliced bread, you can check\nout the debunking posts that I make as \"esconsult1\".\n\nYes, perhaps Postgresql needs a central org that manages press and so\non, but we know that we dont get the press that MySQL does, but quietly\nin the background Postgresql is handling large important things.\n\n- Ericson Smith\nWeb Developer\nDb Admin\nhttp://www.did-it.com\n\n-- \nEricson Smith <[email protected]>\n\n", "msg_date": "15 Apr 2003 09:12:19 -0400", "msg_from": "Ericson Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" 
}, { "msg_contents": "Hi,\n\nWe should probably put out an FAQ saying, if you have success story, please \nmake a write-up and send to us at http://advocacy.postgresql.org.\n\n Shridhar\n\nOn Tuesday 15 April 2003 18:42, Ericson Smith wrote:\n> As one of the top search engine campaign optimization companies in the\n> space, we here at Did-it.com have been using Postgresql for over a year\n> now. We had serious locking problems with MySQL and even switching to\n> their Innodb handler did not solve all the issues.\n>\n> As the DB administrator, I recommended we switch when it came time to\n> re-write our client platform. That we did, and we have not looked back.\n> We have millions of listings, keywords and we perform live visitor\n> tracking in our database. We capture on the order of about 1 million\n> visitors every day, with each hit making updates, selects and possibly\n> inserts.\n>\n> We could not have done this in mySQL. Basically when I see silly posts\n> over on Slashdot about MySQL being as good a sliced bread, you can check\n> out the debunking posts that I make as \"esconsult1\".\n>\n> Yes, perhaps Postgresql needs a central org that manages press and so\n> on, but we know that we dont get the press that MySQL does, but quietly\n> in the background Postgresql is handling large important things.\n>\n> - Ericson Smith\n> Web Developer\n> Db Admin\n> http://www.did-it.com\n\n", "msg_date": "Tue, 15 Apr 2003 18:48:46 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Are we losing momentum?" }, { "msg_contents": "mlw <[email protected]> writes:\n> We need to take projects like Bugzilla (Did RH ever release the PG \n> version or am I way out of date?) and port them to PostgreSQL.\n\nSee http://bugzilla.redhat.com/bugzilla/ ... note icon at bottom ...\nnote tarball offered in News ...\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 15 Apr 2003 10:03:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum? " }, { "msg_contents": "On Tue, 2003-04-15 at 07:51, mlw wrote:\n> Christopher Kings-Lynne wrote:\n> >\n> >The real problem is PHP. PHP is just the cruftiest language ever invented \n> > (trust me, I use it every day). The PHP people are totally dedicated to \n> > MySQL, to the exclusion of all rational thought (eg. When I asked \n> > Rasmas at a conference about race conditions in his replicated \n> > setup, he replied \"it's never going to happen - MySQL's replication \n> > is just too fast...).\n> >\n> Hey! don't go knocking PHP, it is probably one of the most flexible and \n> easy to use systems around. I have done several fairly large projects \n> with PHP and while it is an \"ugly\" environment, it performs well enough, \n> has a very usable extension interface, it is quick and easy to even \n> large projects done.\n> \n\nThe problem is the marriage of PHP and MySql. I've always held the\nnotion that early on several of the php developers, being windows\nhackers, needed an open source database that would run on windows. They\npicked mysql (which was probably their best option at the time) and\nmysql rode on the shoulders php's success. \n\n> As for MySQL, there are two things that PostgreSQL does not do, and \n> probably can not do to support MySQL:\n> \n> (1) REPLACE INTO (I think that's the name) which does either an insert \n> or update into a table depending on the existence of a row. 
I was told \n> that this was impossible.\n> \n> (2) MySQL returns a value on insert which is usually usable, for instance,\n> insert into mytable (x,y,z) values(1,2,3);\n> select rowid from mytable where x=1 and y=2 and z=3;\n> \n\nI'm pretty sure I've seen people create db functions to duplicate these\nfeatures, but admittedly that would be more complicated.\n\n<snip>\n> \n> We need to take projects like Bugzilla (Did RH ever release the PG \n> version or am I way out of date?) and port them to PostgreSQL. We need \n> to write free articles for Linux and IT magazines about how to take a \n> MySQL project over to PostgreSQL easily, why PostgreSQL is much better \n> than MySQL, \n\nRed Hat actually did do this, and does make the source available. One\nproblem I found with porting of mysql apps is that those apps tend to do\na lot of dump things to make up for mysql's missing features. Unless\nyou really are willing to fork the code and then maintain it as a new\nproject, porting applications gets somewhat futile.\n\nRobert Treat\n\n", "msg_date": "15 Apr 2003 10:19:11 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "On Tue, 15 Apr 2003, mlw wrote:\n\n> \n> \n> Christopher Kings-Lynne wrote:\n> \n> >>That's a pretty reasonable thought. I work for a shop that sells\n> >>Postgres support, and even we install MySQL for the Q&D ticket tracking\n> >>system we recommend because we can't justify the cost to port it to\n> >>postgres. If the postgres support were there, we would surely be using it.\n> >>\n> >>How to fix such a situation, I'm not sure. \"MySQL Compatability Mode,\"\n> >>anyone? :-)\n> >> \n> >>\n> >\n> >The real problem is PHP. PHP is just the cruftiest language ever invented (trust me, I use it every day). The PHP people are totally dedicated to MySQL, to the exclusion of all rational thought (eg. When I asked Rasmas at a conference about race conditions in his replicated setup, he replied \"it's never going to happen - MySQL's replication is just too fast...).\n> >\n> > \n> >\n> Hey! don't go knocking PHP, it is probably one of the most flexible and \n> easy to use systems around. I have done several fairly large projects \n> with PHP and while it is an \"ugly\" environment, it performs well enough, \n> has a very usable extension interface, it is quick and easy to even \n> large projects done.\n\nI would say that compared to Perl, TCL, and many other scripting languages \nthat PHP is actually a far better and more logically designed language. \nthe way it handles arrays and global vars is the way every language \nshould.\n\n", "msg_date": "Tue, 15 Apr 2003 10:18:28 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "On Mon, Apr 14, 2003 at 07:54:27PM -0400, Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > Several people have asked if we are losing momentum.\n> \n> I was just in the local Barnes & Noble store\n> yesterday, and could not help but notice how many books had \"MySQL\" in\n> the title. 
I didn't notice a single Postgres title (though I did not\n> look hard, since I was just passing through the computer area).\n\n2 local stores here:\nOne has 11 PostgresQL books and 40 MySQL, the other had 5 on\nPostgresQL and 23 about MySQL.\n\n\nKurt\n\n", "msg_date": "Tue, 15 Apr 2003 18:36:17 +0200", "msg_from": "Kurt Roeckx <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "On Tuesday 15 April 2003 07:51, mlw wrote:\n> We need to take projects like Bugzilla (Did RH ever release the PG\n> version or am I way out of date?) and port them to PostgreSQL.\n\nhttp://bugzilla.redhat.com/download/rh-bugzilla-pg-LATEST.tar.gz\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n\n", "msg_date": "Tue, 15 Apr 2003 12:43:39 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "Tom Lane wrote:\n> narrow the bookshelf gap a little, at least. Any wannabee authors\n> out there? (And Bruce, your book is due for a second edition...)\n\nAgreed. I will contact the publisher and get started, maybe in the\nsummer.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\n", "msg_date": "Tue, 15 Apr 2003 12:50:07 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "\nHaving good reference sites is important, and I could list as many\nimpressive ones as MySQL, but who has time to hunt around and get\npermission to list them --- I will tell you who --- the MySQL marketing\nguys, while the PostgreSQL guys don't. :-(\n\n---------------------------------------------------------------------------\n\nRon Mayer wrote:\n> IMVHO it's reference customers/users more than books & windows ports.\n> \n> If I were a naive middle manager in some company, would I rather\n> use:\n> \n> (a) the database used by Yahoo, Cisco, and Sony?\n> (b) the database used by Shannon Med Center, Mohawk SW, Vanten Inc, and BASF.\n> \n> Now suppose I told that same middle manager there was an open \n> source alternative:\n> \n> (c) used by Lockheed Martin, Nasdaq, AOL, and Unisys.\n> \n> As far as I can tell (5-minutes searching) (c) is PostgreSQL.\n> \n> http://jobsearch.monster.com/jobsearch.asp?q=postgresql\n> http://www.hotjobs.com/cgi-bin/job-search?KEYWORDS=postgres\n> http://seeker.dice.com/jobsearch/servlet/JobSearch?op=1002&dockey=xml/8/1/816e9b7e50ae92331bb5c47a791a589f@activejobs0&c=1\n> http://seeker.dice.com/jobsearch/servlet/JobSearch?op=1002&dockey=xml/c/8/c8dc5841d18329c6c50b55f67a7ff038@activejobs0&c=1\n> http://seeker.dice.com/jobsearch/servlet/JobSearch?op=1002&dockey=xml/1/6/168f30dc84b8f195d1fc35feb6a2f67a@activejobs0&c=1\n> \"The Nasdaq Stock Market ... currently looking to fill the following \n> positions in Trumbull, CT...Some positions require knowledge of ...Postgre SQL..\"\n> \n> \n> I'm not sure quite what it'd take to get the permission to use\n> these company's names, but surely we could have a list of links \n> to the job postings... 
I'd bet that one of monster, hotjobs, \n> and/or dice would even provide a datafeed of relevant jobs to\n> be posted on the postgresql.org site.\n> \n> \n> If we simply had a list of companies using postgresql highly visible \n> somewhere -- not necessarily a complex case study, just simple list \n> of \"company X uses postgresql for Y\" statements -- I think it would \n> go a long way. I'll contribute. InterVideo uses postgresql (for\n> running user surveys and some internal reporting and development tools).\n> \n> Ron\n> \n> PS: No offense to Shannon, Mohawk, Vanten, and yes, I know BASF is\n> an awesome company. But they're all, even BASF, less of\n> a household name than Sony,Yahoo,Cisco,AOL,Nasdaq,Lockheed.\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\n", "msg_date": "Tue, 15 Apr 2003 12:52:27 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "Shridhar Daithankar wrote:\n> On Tuesday 15 April 2003 05:48, you wrote:\n> > Regardless, I'm still of the opinion that if you build it, they will come\n> > -- particularly costly features like replication, PITR, etc. But maybe\n> > that is what the BSDs say about Linux?\n> \n> That is an unfair comparison. The technical differences between BSD and linux \n> are not as much as postgresql and mysql. Besides what is the parallel of SQL \n> standard in OS world? POSIX? And both BSD/linux are doing fine sitting next \n> to each other on that. \n\nAgreed, Linux and BSD are pretty close --- but Linux used to be behind\nBSD --- they caught up because both are open source. The big question\nis whether MySQL (which isn't openly developed) will catch up to\nPostgreSQL. And if they do catch up, will we have mind share parity by\nthat time?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\n", "msg_date": "Tue, 15 Apr 2003 12:55:38 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "The impression of MySQL is light weight and fast,\nthe reputation of PostgreSQL is full featured. \nBusiness chooses PostgreSQL is bacause PostgreSQL is close\nto database like Oracle, reliable but without cost.\n\nTo compete with MySQL is not a good strategy, IMHO,\nPostgreSQL needs to focus adding features such as\ntable partitioning like Oracle, needs to improve\nthe performance of subquery, etc. 
Those lack performance \nfeatures are the choke point (it's easy to get better performance \nfor a big table [~100 million] with partitions in oracle than\npostgreSQL; it's a nightmare, a mess for using subquery in postgreSQL,\nI can't wait 7.4's smarter on this).\n\nIf you really have a super product, don't you\nworry user will not switch to it with no cost?\n\njust some thoughts ...\n\njohnl\n\n", "msg_date": "Tue, 15 Apr 2003 12:21:15 -0500", "msg_from": "\"John Liu\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "On Tue, 2003-04-15 at 09:43, Lamar Owen wrote:\n> On Tuesday 15 April 2003 07:51, mlw wrote:\n> > We need to take projects like Bugzilla (Did RH ever release the PG\n> > version or am I way out of date?) and port them to PostgreSQL.\n> \n> http://bugzilla.redhat.com/download/rh-bugzilla-pg-LATEST.tar.gz\n\nOf course, the installation instructions that come with it tell you\nto install perl's interface to MySQL, not PostgreSQL. Sigh.\n\n-- \nSteve Wampler <[email protected]>\nNational Solar Observatory\n\n", "msg_date": "15 Apr 2003 10:23:04 -0700", "msg_from": "Steve Wampler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "IMHO, mySql 5.0 will put more pressure on PostgreSql, when it's\navailable.\n\nOne of the features that PostgreSql must have, IMHO, is support for\ncross-db operations (queries, updates, deletes, inserts). 2PC and\ncross-server stuff would be nice but it's not as important as simple\ncross -db operations across databases on the same server. All major\ncomercial RDBMS (and even mySql!) support this but for Postgres. Sad.\n\n\n\n\n\n\n\n\n\n__________________________________________________\nDo you Yahoo!?\nThe New Yahoo! Search - Faster. Easier. Bingo\nhttp://search.yahoo.com\n\n", "msg_date": "Tue, 15 Apr 2003 10:30:19 -0700 (PDT)", "msg_from": "ow <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "> > Regardless, I'm still of the opinion that if you build it, they\n> > will come -- particularly costly features like replication, PITR,\n> > etc. But maybe that is what the BSDs say about Linux?\n> \n> That is an unfair comparison. The technical differences between BSD\n> and linux are not as much as postgresql and mysql. Besides what is\n> the parallel of SQL standard in OS world? POSIX? And both BSD/linux\n> are doing fine sitting next to each other on that.\n> \n> After porting my small application in less than half an hour from\n> linux to freeBSD and vice versa, I really do not agree with that\n> comment. Not even in the spirit of it.\n\nYes, that is the joy of POSIX, ANSI, SUS, SUSv2, XPG*, etc. The\ndifferences in the OS aren't visible at the user level and shouldn't\nbe (beyond the layout/management). That said, standards are great,\nbut all select()/poll() calls weren't created equal, just like all\nSELECT statements weren't created equal. -sc\n\n-- \nSean Chittenden\n\n", "msg_date": "Tue, 15 Apr 2003 11:29:49 -0700", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "On Tue, 2003-04-15 at 13:30, ow wrote:\n> IMHO, mySql 5.0 will put more pressure on PostgreSql, when it's\n> available.\n> \n> One of the features that PostgreSql must have, IMHO, is support for\n> cross-db operations (queries, updates, deletes, inserts). 
2PC and\n> cross-server stuff would be nice but it's not as important as simple\n> cross -db operations across databases on the same server. All major\n> comercial RDBMS (and even mySql!) support this but for Postgres. Sad.\n> \n\ndblink ?\n\nRobert Treat\n\n", "msg_date": "15 Apr 2003 14:50:10 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": ">From my experience, almost every time I talk to a MySQL supporter about\nPostgreSQL, the whole \"vacuum\" issue always seems to come up. Some way\nto get vacuum automated (and thus out of sight, out of mind) I think\nwould make great strides in making PG at least \"seem\" more friendly to\nsomeone on the outside.\n\nShared hosting enviroments. I work for a web hosting company that offers\nMySQL to all of its customers, our MySQL server has several thousand\ndatabases on it, and I must say it works exceptionally well. \n\nCreating users/databases/changing passwords is as simple as sending it a\ncouple queries from our Customer web interface, trouble shooting poor\nqueries takes seconds when using \"mytop\" (mtop), and tracking/billing\nfor disk usage is as simple as running \"du /var/lib/mysql/*\". I would\nlike to say the same things for PG, but I'm affrid I can't.\n\nI think it all comes down to how simple PG is to setup and use on a\ndaily basis. This is what determines the size of its community. Even\njust the simple things make a big difference. ie:\n\n\\dt\n\ncompared to:\n\nshow tables;\n\nYes, once you get over the \"hump\" PG is quite efficient, but you need to\nunderstand it, and learn some small quriks first. With MySQL, you can\npretty much guess commands, and they often work! Not as much luck with\nPG. \n\nshow indexes\nshow processlist\nshow columns from <table>\n\nThese are all easy/simple commands that make sense to someone who is\njust learning the ropes. Short abbreviated, commands are great for the\nexperts, but can greatly discourage newbies.\n\n\nOn Mon, 2003-04-14 at 18:52, Brent Verner wrote:\n> Gretings!\n> \n> [2003-04-14 19:54] Tom Lane said:\n> | Bruce Momjian <[email protected]> writes:\n> | > Several people have asked if we are losing momentum.\n> \n> | I don't know what we can do about it, other than maybe push harder to\n> | get some more PG titles into O'Reilly's catalog ... that would help\n> | narrow the bookshelf gap a little, at least. Any wannabee authors\n> | out there? (And Bruce, your book is due for a second edition...)\n> \n> I've wanted to pipe up in a few of these \"popularity\" \n> discussions in the past. Seeing how I can't make time to\n> participate in any other meaningful capacity, I'll share\n> my thoughts on _why_ mysql has the mindshare.\n> \n> \n> Applications, specifically applications that _use_ mysql.\n> \n> \n> A quick search over at freshmeat returns 1044 results for \n> \"mysql\" and 260 for \"postgresql\". Before this turns into a \n> cause/effect discussion, I want to state up front that the \n> real \"effect\" of this is that someone is 4 times as likely to \n> download an application that uses mysql. Sure, many are \n> \"trivial\" applications, but I posit that it is _specifically_ \n> these \"trivial\" applications that inoculate the uninitiated \n> with the belief that mysql is suitable for use in real, albeit\n> trivial applications. 
Additionally, it these rudimentary \n> applications that will be studied by many as the way to write \n> a database application.\n> \n> It is all good and well that postgres /can/ do, but until\n> the application developers see that those features are\n> valuable enough to forgo mysql support, they'll write the \n> application to support whatever database is most likely to \n> _already_ be installed, which will be mysql. Granted, \n> many developers will also try to support multiple dbs via\n> the language's db api, but this leaves the less-supported\n> dbs in an even worse position; being relegated to an\n> \"might work with XXX database\". When anxious user learns\n> that \"might\" currently means \"doesn't,\" the second-string\n> database looks even worse in the eyes of the user.\n> \n> How to solve this problem? This is the hard part, but\n> luckily ISTM that there are a few ways to succeed. Neither\n> of which involves marketing or writing books.\n> \n> 1) become active in the \"also supports postgres\" projects,\n> and add features that are made available _because_ of\n> postgres' superiority. Eventually, market pressure\n> for the cool feature(s) will lead users to choose\n> postgres, and mysql could be relegated to the \"also\n> runs on mysql, with limited featureset\"\n> 2) take a popular project that uses mysql, fork it, and\n> add features that can only be implemented using posgres.\n> 3) release that super-cool code that you've been hacking\n> on for years, especially if it is a \"trivial\" app.\n> 4) convince your employer that it would be _beneficial_ to\n> them to release, as open source, the internal app(s) you've \n> developed, using postgres-specific features. (This is \n> about all I can claim to be doing at this point in my \n> indentured servitude, and I can't say I'm doing a good\n> job... :-/)\n> \n> I'm sure this idea is not original, but I'm also sure that\n> it _is_ the answer to gaining market^Wmindshare in this\n> database market.\n> \n> (I must apologize in advance, that I might not have time\n> to even follow this thread, in fact, I hope that instead of\n> replying to this, the potential respondent might consider\n> helping to increase the number of apps that require postgres\n> :-)\n> \n> wishing-I-could-contribute-more-ly yours,\n> brent\n-- \nBest Regards,\n \nMike Benoit\nNetNation Communications Inc.\nSystems Engineer\nTel: 604-684-6892 or 888-983-6600\n ---------------------------------------\n \n Disclaimer: Opinions expressed here are my own and not \n necessarily those of my employer\n\n", "msg_date": "15 Apr 2003 11:56:50 -0700", "msg_from": "Mike Benoit <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "Robert Treat <[email protected]> writes:\n> On Tue, 2003-04-15 at 13:30, ow wrote:\n>> One of the features that PostgreSql must have, IMHO, is support for\n>> cross-db operations (queries, updates, deletes, inserts). 2PC and\n>> cross-server stuff would be nice but it's not as important as simple\n>> cross -db operations across databases on the same server. All major\n>> comercial RDBMS (and even mySql!) support this but for Postgres. Sad.\n\n> dblink ?\n\nI'm of the opinion that the availability of schemas solves most of the\nproblems that people say they need cross-database access for. 
If you\nwant cross-database access, first say why putting your data into several\nschemas in a single database doesn't get the job done for you.\n\n(Obviously, this only addresses cases where you'd have put the multiple\ndatabases under one postmaster, but that's the scenario people seem to be\nconcerned about.)\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 15 Apr 2003 15:17:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum? " }, { "msg_contents": "--- Tom Lane <[email protected]> wrote:\n> I'm of the opinion that the availability of schemas solves most of\n> the problems that people say they need cross-database access for. If\n\n> you want cross-database access, first say why putting your data into\n> several schemas in a single database doesn't get the job done for\nyou.\n\nSome databases contain lots of data, e.g. dbs that contain historical\ndata. No one wants to have one HUGE db that runs all company's apps,\ntakes hours (if not days) to recover and when this huge db goes down\nnone of the apps is available.\n\n\n\n\n\n__________________________________________________\nDo you Yahoo!?\nThe New Yahoo! Search - Faster. Easier. Bingo\nhttp://search.yahoo.com\n\n", "msg_date": "Tue, 15 Apr 2003 12:35:18 -0700 (PDT)", "msg_from": "ow <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum? " }, { "msg_contents": "On Tue, 2003-04-15 at 15:35, ow wrote:\n> Some databases contain lots of data, e.g. dbs that contain historical\n> data. No one wants to have one HUGE db that runs all company's apps,\n> takes hours (if not days) to recover and when this huge db goes down\n> none of the apps is available.\n\nAre you talking about queries between databases on the same postmaster\n(i.e. running under the same PostgreSQL installation), or queries\nbetween postmasters running on different systems? If the former, I don't\nsee how putting your data into multiple schemas in a single database is\nsignificantly less reliable than putting it into multiple databases.\n\nCheers,\n\nNeil\n\n", "msg_date": "15 Apr 2003 16:27:07 -0400", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "Mike Benoit writes:\n\n> Shared hosting enviroments. I work for a web hosting company that offers\n> MySQL to all of its customers, our MySQL server has several thousand\n> databases on it, and I must say it works exceptionally well.\n>\n> Creating users/databases/changing passwords is as simple as sending it a\n> couple queries from our Customer web interface, trouble shooting poor\n> queries takes seconds when using \"mytop\" (mtop), and tracking/billing\n> for disk usage is as simple as running \"du /var/lib/mysql/*\". I would\n> like to say the same things for PG, but I'm affrid I can't.\n\nAt least in the latest versions, things are quite easy.\n\nUser/database administration?\nCREATE USER someuser ENCRYPTED PASSWORD '...' NOCREATEDB NOCREATEUSER;\nCREATE DATABASE someuser OWNER someuser ENCODING 'UNICODE';\n\nDisk usage account? Use contrib/dbsize (README for easy setup)\nSELECT database_size('someuser');\nDone.\n\nPoor queries -> query stats?\n\nOf course, some things are easier in MySQL. 
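On the poor-queries point above: assuming the statistics collector is enabled in postgresql.conf (stats_start_collector and stats_command_string set to true), the statement each backend is currently running can be watched from plain SQL, which is roughly what mytop shows for MySQL; a sketch:

    SELECT datname, procpid, usename, current_query
      FROM pg_stat_activity;

Not as convenient as a dedicated top-style tool, but it is easy to script.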
On the other hand, what about\nInnoDB, \"du /var/lib/mysql/*\" won't help much...\n\nI just wanted to show that PostgreSQL administration is not that hard in a\nhosting environment.\n\nRegards,\nMichael Paesold\n\n", "msg_date": "Tue, 15 Apr 2003 23:51:00 +0200", "msg_from": "\"Michael Paesold\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "--- Neil Conway <[email protected]> wrote:\n> Are you talking about queries between databases on the same\n> postmaster\n> (i.e. running under the same PostgreSQL installation),\n\nYes\n\n> or queries\n> between postmasters running on different systems? If the former, I\n> don't\n> see how putting your data into multiple schemas in a single database\n> is\n> significantly less reliable than putting it into multiple databases.\n\nI disagree. For example, suppose you have\napp12 that uses db1 and db2,\napp23 that uses db2 and db3,\napp3 that uses db3.\n\nIf db3 goes down then app12 is not affected, app23 could be partially\naffected (e.g. user may not be able to run historic queries) and app3\nis completely unavailable. This is definitely better than all three\napps are down. Besides, having one huge db makes everything more\ndifficult and requires (much) more time for backups, restores, etc.\n\nEvery major RDBMS vendor (and mySql) finds this feature important and\nthey support it. Hope Postgresql will too.\n\n\n\n\n\n\n__________________________________________________\nDo you Yahoo!?\nThe New Yahoo! Search - Faster. Easier. Bingo\nhttp://search.yahoo.com\n\n", "msg_date": "Tue, 15 Apr 2003 14:58:24 -0700 (PDT)", "msg_from": "ow <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "ow <[email protected]> writes:\n> --- Neil Conway <[email protected]> wrote:\n>> Are you talking about queries between databases on the same\n>> postmaster\n\n> Yes\n\n> [snip]\n\n> If db3 goes down then app12 is not affected, app23 could be partially\n> affected (e.g. user may not be able to run historic queries) and app3\n> is completely unavailable.\n\nThis is nonsense. There is no scenario where one DB \"goes down\" and\nother DBs on the same postmaster remain up. There are advantages to\nhaving separate DBs on one postmaster (like separate copies of the\nsystem catalogs), but there's very little reliability differential\ncompared to a multi-schema approach.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 15 Apr 2003 19:28:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum? " }, { "msg_contents": "\n--- Tom Lane <[email protected]> wrote:\n> This is nonsense. There is no scenario where one DB \"goes down\" and\n> other DBs on the same postmaster remain up. There are advantages to\n> having separate DBs on one postmaster (like separate copies of the\n> system catalogs), but there's very little reliability differential\n> compared to a multi-schema approach.\n\nPerhaps \"goes down\" is not the best term. You can replace it with \"is\nnot available\" (as in being restored, etc) if you like. \n\n\n\n\n__________________________________________________\nDo you Yahoo!?\nThe New Yahoo! Search - Faster. Easier. Bingo\nhttp://search.yahoo.com\n\n", "msg_date": "Tue, 15 Apr 2003 16:56:24 -0700 (PDT)", "msg_from": "ow <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum? 
" }, { "msg_contents": "On the \"lets make more apps work with Postgres\" front people can check out\nhttp://sourceforge.net/projects/bind-dlz\n\nThis is a patch for Bind 9.2.1 that allows all DNS data to be stored in an\nexternal database. Makes DNS administration easy, and changes to DNS data\nare reflected immediately. The project supports multiple databases now, but\nthe first one was postgres!\n\nLater\nRob\n\n", "msg_date": "Tue, 15 Apr 2003 20:08:44 -0400", "msg_from": "\"Rob Butler\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum? " }, { "msg_contents": "ow said...\n> --- Neil Conway <[email protected]> wrote:\n> > Are you talking about queries between databases on the same\n> > postmaster\n> > (i.e. running under the same PostgreSQL installation),\n> \n> Yes\n\nBased on your later comments, the answer seems to /actually/ be \"No.\"\n\n> > or queries\n> > between postmasters running on different systems? If the former, I\n> > don't\n> > see how putting your data into multiple schemas in a single database\n> > is\n> > significantly less reliable than putting it into multiple databases.\n> \n> I disagree. For example, suppose you have\n> app12 that uses db1 and db2,\n> app23 that uses db2 and db3,\n> app3 that uses db3.\n> \n> If db3 goes down then app12 is not affected, app23 could be partially\n> affected (e.g. user may not be able to run historic queries) and app3\n> is completely unavailable. This is definitely better than all three\n> apps are down. Besides, having one huge db makes everything more\n> difficult and requires (much) more time for backups, restores, etc.\n> \n> Every major RDBMS vendor (and mySql) finds this feature important and\n> they support it. Hope Postgresql will too.\n\nIf it's all running as just one PostgreSQL instance, then if db1 goes down, \nthen, since it's the same postmaster as is supporting db2 and db3, they \nnecessarily go down as well.\n\nThe only way that you get to take down one DB without affecting the others is \nfor them NOT to be running as part of the same PG installation.\n\nBy the way, if you only have one PG instance, then you may very well find it \nchallenging to suitably parallelize all the loads/dumps of data. If you have \nthree disks, or three arrays, it may make a lot of sense to have separate PG \ninstances on each one, as that allows I/O to not need to interfere between \ninstances. (There are, admittedly, other ways of tuning this sort of thing, \nsuch as moving WAL to a separate disk, or perhaps even specific table files, \nidentified by OID...)\n\nBut the most general ways of separating things out lead to having quite \nseparate DB instances. And when you've got that, it certainly is attractive \nto have 2PC, as is available for the \"expensive guys.\"\n--\noutput = reverse(\"gro.mca@\" \"enworbbc\")\nhttp://www3.sympatico.ca/cbbrowne/sap.html\nYou know that little indestructible black box that is used on\nplanes---why can't they make the whole plane out of the same\nsubstance?\n\n", "msg_date": "Tue, 15 Apr 2003 20:28:25 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum? " }, { "msg_contents": "> Tatsuo, this has always fascinated me. Any insights you could share about how PostgreSQL achieved the prominence it has in Japan (and how MySQL did not) would be very interesting.\n\nPostgreSQL started to become popular in 1998(PostgreSQL 6.4 days). 
In\nthe year a publisher asked me to write the first PostgreSQL book and\nfortunately it has sold very well. From then many PostgreSQL books\nhave been published and lots of magazine articles have been written\ntoo. As as result, PostgreSQL users could enjoy rich PostgreSQL\ninformation in Japanese. Since most Japanese (including me) is not\nvery good at English, localized docs for PostgreSQL is the key factor\nfor the \"prominence\". On the other hand, almost no good Japanese MySQL\nbooks have ever appeared.\n\nNext point is the community. Japan PostgreSQL Users Group (JPUG) has\nbeen established in 1999 and now has over 1800 registered members\n(local ML for PostgreSQL has over 5400 subscribers). I guess MySQL\ndoes not have this kind large community.\n\nThese are not proven factors for the popularity of PostgreSQL in\nJapan, I believe they definitely could be listed as one of the top 10\nreasons.\n--\nTatsuo Ishii\n\n", "msg_date": "Wed, 16 Apr 2003 14:43:08 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs. MySQL in Japan" }, { "msg_contents": "On Wednesday 16 April 2003 00:26, Mike Benoit wrote:\n> From my experience, almost every time I talk to a MySQL supporter about\n> PostgreSQL, the whole \"vacuum\" issue always seems to come up. Some way\n> to get vacuum automated (and thus out of sight, out of mind) I think\n> would make great strides in making PG at least \"seem\" more friendly to\n> someone on the outside.\n\nAgreed. But that is not an impossible issue for a DBA, is it? I mean some \nlearning is required but that can be done.\n\n> Creating users/databases/changing passwords is as simple as sending it a\n> couple queries from our Customer web interface, trouble shooting poor\n> queries takes seconds when using \"mytop\" (mtop), and tracking/billing\n> for disk usage is as simple as running \"du /var/lib/mysql/*\". I would\n> like to say the same things for PG, but I'm affrid I can't.\n\nAdding users, databases, password changes are as easy in postgresql. Tracking \ndisk usage is no different in postgresql barring additional step of using \noid2name to find out directory you want to du.\n\nIn fact I think postgresql is easier to use. Till date, I could never start \nmysql by hand and get it behave sanely. pg_ctl or nohup postmaster has always \nworked for me.\n\nBesides postgresql is true to it's resource usage. You allocate 128MB of \nshared buffers, and they are consumed. You stop postmaster and all the \nbuffers are back to system. With mysql, I found that large amount of memory \nwas never returned to system even after service shutdown. I hate black-boxes \non my system where I can not fathom into. Had to reboot the machine. \n\n> I think it all comes down to how simple PG is to setup and use on a\n> daily basis. This is what determines the size of its community. Even\n> just the simple things make a big difference. ie:\n>\n> \\dt\n>\n> compared to:\n>\n> show tables;\n\n<I assume that show tables is not a standard SQL syntax>\n\nThat is very shallow view. \\dt is a postgresql terminal client extension where \nas show tables is part of mysql SQL offerings. Such brutal twisting of SQL \nstandards encourages dependence on mysql only features, flushing standard \ncompliance down the drain.\n\n> Yes, once you get over the \"hump\" PG is quite efficient, but you need to\n> understand it, and learn some small quriks first. With MySQL, you can\n> pretty much guess commands, and they often work! 
Not as much luck with\n> PG.\n>\n> show indexes\n> show processlist\n> show columns from <table>\n>\n> These are all easy/simple commands that make sense to someone who is\n> just learning the ropes. Short abbreviated, commands are great for the\n> experts, but can greatly discourage newbies.\n\nWell, I might get flamed for this but let me clarify. I am not against \nnewbies. Everybody once was a newbie. But being a newbie, does not justify \nreluctance to go thr. manuals. If you are reluctant to go thr. manuals., you \nbetter hire a commercial support.\n\nMy advise has always been ,to read postgresql manual start to end before even \ntouching it. It takes a day to digest but pays off big later. When I started \npostgresql back in 1999, I started on postgresql and SQL simalteneously. \nDidn't have faintest idea, what any of those stand for. So I read the manual, \nstart to end in couple of days. In one day I could do things that worked as \nexpected.\n\nRTFM is not an advice thrown to kick out newbies. It is ground fact that \neverybody has to suffer thr. Borg transplants are not yet available here.\n\n Shridhar\n\n", "msg_date": "Wed, 16 Apr 2003 12:50:33 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "On Tuesday 15 April 2003 22:25, Bruce Momjian wrote:\n> Shridhar Daithankar wrote:\n> > That is an unfair comparison. The technical differences between BSD and\n> > linux are not as much as postgresql and mysql. Besides what is the\n> > parallel of SQL standard in OS world? POSIX? And both BSD/linux are doing\n> > fine sitting next to each other on that.\n>\n> Agreed, Linux and BSD are pretty close --- but Linux used to be behind\n> BSD --- they caught up because both are open source. The big question\n> is whether MySQL (which isn't openly developed) will catch up to\n> PostgreSQL. And if they do catch up, will we have mind share parity by\n> that time?\n\nThat is a tough question. But if we focus on enterprise features and reach \nthreshold in decision making circles, that would be great.\n\nMind share parity certainly matters. Bigger question is in which circles. I \nwould better put decision making circle as fist target.\n\nBesides we won't sit still while mysql catches with us. \n\n Shridhar\n\n", "msg_date": "Wed, 16 Apr 2003 13:25:14 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "> BTW, DB2 doesn't have 'em [cross-db queries] either.\n> In DB2, you can of course have cross-schema queries but no cross-db\n> queries, unless you rig up the federated functionality to connect one\n> db to the other.\n\nA while ago I was at a client who wanted to migrate to DB2 and this\nquestions was raised during discussions with IBM. There was a way to do\nthis, if I remember correctly the solution involved creating views for\nall tables from db2 that you wanted to use in db1 and maybe something\nelse. Can't tell you for sure, I'm not working with DB2.\n\nOracle, Sybase, Ms, Informix (? AFAIK) , mySql, they all support\ncross-db queries.\n\nAnyway, I thought it was important to bring this up. With large number\nof apps and large amount of data having everything in one db is a sure\nway for disaster, IMHO.\n\n\n\n\n\n\n\n__________________________________________________\nDo you Yahoo!?\nThe New Yahoo! Search - Faster. Easier. 
Bingo\nhttp://search.yahoo.com\n\n", "msg_date": "Wed, 16 Apr 2003 05:20:17 -0700 (PDT)", "msg_from": "ow <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum? " }, { "msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\n> In fact I think postgresql is easier to use. Till date, I could never start \n> mysql by hand and get it behave sanely. pg_ctl or nohup postmaster has always \n> worked for me.\n\nThis is weird, because despite mysql's technical inferiority, it really is \npretty simple to use. Also seems a little hypocritical of you in light of \nthe RTFM rant later on in your email. :)\n\n\n> Besides postgresql is true to it's resource usage. You allocate 128MB of \n> shared buffers, and they are consumed. You stop postmaster and all the \n> buffers are back to system. With mysql, I found that large amount of memory \n> was never returned to system even after service shutdown. I hate black-boxes \n> on my system where I can not fathom into. Had to reboot the machine.\n\n\"Black-boxes\"? It's open-source, just like we are. Did you read their manual \n\"start to end\"? Did you ask on their mailing lists? I'm no MySQL fan, but \nI'd rather let them, not us, dish out the FUD. The original poster had some \nvalid points (auto-vacuum and non-intuitive commands) that still need \naddressing, IMO.\n\n\n- --\nGreg Sabino Mullane [email protected]\nPGP Key: 0x14964AC8 200304160945\n\n-----BEGIN PGP SIGNATURE-----\nComment: http://www.turnstep.com/pgp.html\n\niD8DBQE+nV/EvJuQZxSWSsgRAsaXAKCAY3vGFxDzk9dniqojpi+RK3ToUwCgpv5L\nSl6e9Or440U5QeLIhvNsaro=\n=k5Np\n-----END PGP SIGNATURE-----\n\n", "msg_date": "Wed, 16 Apr 2003 13:51:28 -0000", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "On Wednesday 16 April 2003 19:21, [email protected] wrote:\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n>\n> > In fact I think postgresql is easier to use. Till date, I could never\n> > start mysql by hand and get it behave sanely. pg_ctl or nohup postmaster\n> > has always worked for me.\n>\n> This is weird, because despite mysql's technical inferiority, it really is\n> pretty simple to use. Also seems a little hypocritical of you in light of\n> the RTFM rant later on in your email. :)\n\nYes. That is correct. But going thr. 3.5MB html to find out things which has \ngot tons of options and figuring out interdependencies by trial and error is \nnot a good job at that. Whoever thinks that such a style on manual writing is \ngood, needs an attitude readjustment. Postgresql manual is ten times better.\n\n> > Besides postgresql is true to it's resource usage. You allocate 128MB of\n> > shared buffers, and they are consumed. You stop postmaster and all the\n> > buffers are back to system. With mysql, I found that large amount of\n> > memory was never returned to system even after service shutdown. I hate\n> > black-boxes on my system where I can not fathom into. Had to reboot the\n> > machine.\n>\n> \"Black-boxes\"? It's open-source, just like we are. Did you read their\n> manual \"start to end\"? Did you ask on their mailing lists? I'm no MySQL\n> fan, but I'd rather let them, not us, dish out the FUD. The original poster\n> had some valid points (auto-vacuum and non-intuitive commands) that still\n> need addressing, IMO.\n\nI didn't go to any mailing list. 
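(For comparison, the whole by-hand sequence with postgresql is short enough to keep in my head. A minimal sketch, where the data directory path and the -B/-N numbers are only examples and not tuning advice:\n\n    # run as the postgres user\n    initdb -D /usr/local/pgsql/data\n    pg_ctl -D /usr/local/pgsql/data -o \"-B 2048 -N 64\" start\n    pg_ctl -D /usr/local/pgsql/data stop\n\nThat is all there is to remember, options included.)\n\n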
My point is, if I pierce the startup-shutdown \nchapter in mysql manual and can not get it working by hand, either I am \nstupid or something wrong with mysql. May sound arrogant but I count on \nlater.\n\nHave you seen postgresql 101 I wrote? It is at \nhttp://wiki.ael.be/index.php/PostgresQL101. It is that simple with \npostgresql. Now this is not the forum but can anybody point me to similar \ndocument for mysql. /etc/rc.d/init.d/mysql start always works but it does not \nallow me to tweak options for mysqld which is first thing I want.\n\nAnyway I must admit that I was reluctant to use mysql and was turned off \npretty quickly. Mine is probably a irreproducible bug but I did encounter it.\n\n Shridhar\n\n", "msg_date": "Wed, 16 Apr 2003 19:46:37 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "[email protected] kirjutas K, 16.04.2003 kell 16:51:\n> The original poster had some \n> valid points (auto-vacuum and non-intuitive commands) that still need \n> addressing, IMO.\n\nAs of 7.3 (or was it 7.2) auto-vacuum is just one line in crontab. In\nmany scenarios it can be left running continuously with very little\neffect on performance. In others it must be run nightly, but having it\nkick in at unexpected times may not be what you want at all. So it has\nto be configured for good performance weather it is built-in or run in a\nseparate backend process.\n\nAnd I can't see how \"show tables\" is more intuitive than \"\\dt\" - I\nexpected it to be \"list tables\" or \"tablelist\" or \"näita tabeleid\" .\n\nOnce you have found \\? it is all there (and you are advised to use \\? at\npsql startup).\n\nThat may also be why PostgreSQL is more popular in Japan - if one has to\nremember nonsensical strings, then it is easier to remember short ones\n;)\n\n----------------\nHannu\n\n", "msg_date": "16 Apr 2003 17:17:47 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "> > That's a pretty reasonable thought. I work for a shop that sells\n> > Postgres support, and even we install MySQL for the Q&D ticket\n> > tracking system we recommend because we can't justify the cost to\n> > port it to postgres. If the postgres support were there, we would\n> > surely be using it.\n> \n> > How to fix such a situation, I'm not sure. \"MySQL Compatability\n> > Mode,\" anyone? :-)\n> \n> What issues are creating a compatibility problem for you?\n\nI don't think these should be hacked into the backend/libpq, but I\nthink it'd be a huge win to hack in \"show *\" support into psql for\nMySQL users so they can type:\n\n SHOW (databases|tables|views|functions|triggers|schemas);\n\nI have yet to meet a MySQL user who understands the concept of system\ncatalogs even though it's just the 'mysql' database (this irritates me\nenough as is)... gah, f- it: mysql users be damned, I have three\ndevelopers that think that postgresql is too hard to use because they\ncan't remember \"\\d [table name]\" and I'm tired of hearing them bitch\nwhen I push using PostgreSQL instead of MySQL. I have better things\nto do with my time than convert their output to PostgreSQL. 
Here goes\nnothing...\n\nI've tainted psql and added a MySQL command compatibility layer for\nthe family of SHOW commands (psql [-m | --mysql]).\n\n\nThe attached patch does a few things:\n\n1) Implements quite a number of SHOW commands (AGGREGATES, CASTS,\n CATALOGS, COLUMNS, COMMENTS, CONSTRAINTS, CONVERSIONS, DATABASES,\n DOMAINS, FUNCTIONS, HELP, INDEX, LARGEOBJECTS, NAMES, OPERATORS,\n PRIVILEGES, PROCESSLIST, SCHEMAS, SEQUENCES, SESSION, STATUS,\n TABLES, TRANSACTION, TYPES, USERS, VARIABLES, VIEWS)\n\n SHOW thing\n SHOW thing LIKE pattern\n SHOW thing FROM pattern\n SHOW HELP ON (topic || ALL);\n etc.\n\n Some of these don't have \\ command eqiv's. :( I was tempted to add\n them, but opted not to for now, but it'd certainly be a nice to\n have.\n\n2) Implements the necessary tab completion for the SHOW commands for\n the tab happy newbies/folks out there. psql is more friendly than\n mysql's CLI now in terms of tab completion for the show commands.\n\n3) Few trailing whitespace characters were nuked\n\n4) guc.c is now in sync with the list of available variables used for\n tab completion\n\n\nFew things to note:\n\n1) SHOW INDEXES is the same as SHOW INDEX, I think MySQL is wrong in\n this regard and that it should be INDEXES to be plural along with\n the rest of the types, but INDEX is preserved for compatibility.\n\n2) There are two bugs that I have yet to address\n\n 1) SHOW VARIABLES doesn't work, but \"SHOW [TAB][TAB]y\" does\n 2) \"SHOW [variable_of_choice];\" doesn't work, but \"SHOW\n [variable_of_choice]\\n;\" does work... not sure where this\n problem is coming from\n\n3) I think psql is more usable as a result of this more verbose\n syntax, but it's not the prettiest thing on the planet (wrote a\n small parser outside of the backend or libraries: I don't want to\n get those dirty with MySQL's filth).\n\n4) In an attempt to wean people over to PostgreSQL's syntax, I\n included translation tips on how to use the psql equiv of the SHOW\n commands. Going from SHOW foo to \\d foo is easy, going from \\d foo\n to SHOW foo is hard and drives me nuts. This'll help userbase\n retention of newbies/converts. :)\n\n5) The MySQL mode is just a bounce layer that provides different\n syntax wrapping exec_command() so it should provide little in the\n way of maintenance headaches. Some of the SHOW commands, however,\n don't have \\ couterparts, but once they do and that code is\n centralized, this feature should come for zero cost.\n\n6) As an administrator, I'd be interested in having an environment\n variable that I could set that'd turn on MySQL mode for some of my\n bozo users that way they don't complain if they forget the -m\n switch. Thoughts?\n\n\nI'll try and iron out the last of those two bugs/features, but at this\npoint, would like to see this patch get wider testing/feedback.\nComments, as always, are welcome.\n\nPostgreSQL_usability++\n\n-sc\n\n-- \nSean Chittenden", "msg_date": "Wed, 16 Apr 2003 08:45:17 -0700", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "\n\n> -----Original Message-----\n> From: Rob Butler [mailto:[email protected]] \n> Sent: 16 April 2003 16:33\n> To: [email protected]\n> Subject: [HACKERS] Many comments (related to \"Are we losing \n> momentum?\") \n> \n> \n> You \n> want a real shocker... Visit the gborg homepage (click the \n> gborg link on the bottom left corner of the postgres site). \n> Then read the news on the right side of the gborg homepage. 
\n> See that url for http://www.greatbridge.org/ in the \n> GreatBridge.Org Version 2.0.0 Release story from way back in \n> 2001! Go ahead, visit that URL. What's the first thing you \n> see on that page? \"Purchase IBM DB2 At IBM.com\". Come on! \n> Not even a mention of Postgres, and this is right on the \n> gborg homepage! \n\nWe don't own that site, and we never did. We did manage to inherit the\nprojects database through it's key programmer and his goodwill.\n\n> The firebird, sapdb and mysql sites are \n> killing postgres here. The postgres homepage and related \n> links is the first thing someone new to postgres sees! There \n> shouldn't be news on any of the main pages from back in 2001. \n\nThere isn't. The oldest news is from January 2003.\n\nWrt your comments on the style of some of the pages linked off the main\nsite (the archives spring to mind), if there are any volunteers to help\nfix that, please raise your hands because my time is limited and I could\ndo with some committed help on that sort of thing.\n\nRegards, Dave.\n\n", "msg_date": "Wed, 16 Apr 2003 16:51:17 +0100", "msg_from": "\"Dave Page\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Many comments (related to \"Are we losing momentum?\") " }, { "msg_contents": "Sean Chittenden <[email protected]> writes:\n> 4) guc.c is now in sync with the list of available variables used for\n> tab completion\n\nAFAIK, the GUC variables not listed in tab-complete.c were omitted\ndeliberately. We could have a discussion about the sensefulness of\nthose decisions, but please do not consider it a bug to be fixed \nout-of-hand.\n\n> 6) As an administrator, I'd be interested in having an environment\n> variable that I could set that'd turn on MySQL mode for some of my\n> bozo users that way they don't complain if they forget the -m\n> switch. Thoughts?\n\nCan't you set it in ~/.psqlrc ?\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 16 Apr 2003 11:53:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Are we losing momentum? " }, { "msg_contents": "> > 4) guc.c is now in sync with the list of available variables used for\n> > tab completion\n> \n> AFAIK, the GUC variables not listed in tab-complete.c were omitted\n> deliberately. We could have a discussion about the sensefulness of\n> those decisions, but please do not consider it a bug to be fixed\n> out-of-hand.\n\nAlright, there weren't many omitted GUC's, but those that were\nomitted did have counterparts that were include already so I figured\nthere was some bit rot going on.\n\n> > 6) As an administrator, I'd be interested in having an environment\n> > variable that I could set that'd turn on MySQL mode for some of my\n> > bozo users that way they don't complain if they forget the -m\n> > switch. Thoughts?\n> \n> Can't you set it in ~/.psqlrc ?\n\nHrm... ah, ok, done. Patch updated.\n\nhttp://people.freebsd.org/~seanc/patches/patch_postgresql-HEAD::src::bin::psql\n\n-sc\n\n-- \nSean Chittenden\n\n", "msg_date": "Wed, 16 Apr 2003 09:17:25 -0700", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Are we losing momentum?" 
}, { "msg_contents": "On Wed, 2003-04-16 at 11:51, Dave Page wrote:\n> > -----Original Message-----\n> > From: Rob Butler [mailto:[email protected]] \n> > Sent: 16 April 2003 16:33\n> > To: [email protected]\n> > Subject: [HACKERS] Many comments (related to \"Are we losing \n> > momentum?\") \n> > \n> > \n> > You \n> > want a real shocker... Visit the gborg homepage (click the \n> > gborg link on the bottom left corner of the postgres site). \n> > Then read the news on the right side of the gborg homepage. \n> > See that url for http://www.greatbridge.org/ in the \n> > GreatBridge.Org Version 2.0.0 Release story from way back in \n> > 2001! Go ahead, visit that URL. What's the first thing you \n> > see on that page? \"Purchase IBM DB2 At IBM.com\". Come on! \n> > Not even a mention of Postgres, and this is right on the \n> > gborg homepage! \n> \n> We don't own that site, and we never did. We did manage to inherit the\n> projects database through it's key programmer and his goodwill.\n> \n\nOddly I don't see the link he's referring to, but I've always thought it\nunfortunate that the postgresql community didn't hold onto those domain\nnames. I was just speaking with a tech writer last night who mentioned\nhow the company that was initially behind postgresql went kaput (\"I\nthink it was called great river or something?\") Took me a minute to set\nhim straight on that. \n\nAnyways, I've offered to look into setting up gforge services in the\npast as it's a mature project with a good development community and the\nfounder (Tim Perdue) has always been friendly with the PostgreSQL\ncommunity. Course I don't see this as an immediate need, but long term I\nthink it would be a good idea.\n\n> > The firebird, sapdb and mysql sites are \n> > killing postgres here. The postgres homepage and related \n> > links is the first thing someone new to postgres sees! There \n> > shouldn't be news on any of the main pages from back in 2001. \n> \n> There isn't. The oldest news is from January 2003.\n> \n> Wrt your comments on the style of some of the pages linked off the main\n> site (the archives spring to mind), if there are any volunteers to help\n> fix that, please raise your hands because my time is limited and I could\n> do with some committed help on that sort of thing.\n> \n\nI'm in the process of adding the newsletters to the main page (expect\nquestions your way later today ;-) but once I am done with that\n\"cohesiveness\" will definitely be on my radar screen. \n\nRobert Treat\n\n", "msg_date": "16 Apr 2003 12:21:55 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Many comments (related to \"Are we losing momentum?\")" }, { "msg_contents": "Sean Chittenden <[email protected]> writes:\n> 4) guc.c is now in sync with the list of available variables used for\n> tab completion\n>> \n>> AFAIK, the GUC variables not listed in tab-complete.c were omitted\n>> deliberately. We could have a discussion about the sensefulness of\n>> those decisions, but please do not consider it a bug to be fixed\n>> out-of-hand.\n\n> Alright, there weren't many omitted GUC's, but those that were\n> omitted did have counterparts that were include already so I figured\n> there was some bit rot going on.\n\nThere could be some of that too. 
I was just saying that it's not a\nforegone conclusion to me that every parameter known to guc.c should\nbe in the tab completion list.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 16 Apr 2003 12:26:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Are we losing momentum? " }, { "msg_contents": "Robert Treat <[email protected]> writes:\n> Oddly I don't see the link he's referring to, but I've always thought it\n> unfortunate that the postgresql community didn't hold onto those domain\n> names.\n\nWe were not offered the chance; Landmark wanted to hold onto the Great\nBridge trademark, apparently. We did come away with the postgres.com,\npostgres.org, and postgresql.com domains, which GB had managed to wrestle\naway from some korean domain squatter.\n\nThe fact that some of those aren't currently resolving nicely (eg,\nwww.postgres.org gets you a Horde login page) is our own darn fault.\nAs Dave Page was mentioning, some additional hands in the webpage\nmaintenance effort would be great.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 16 Apr 2003 13:29:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Many comments (related to \"Are we losing momentum?\") " }, { "msg_contents": "On Wed, 2003-04-16 at 13:29, Tom Lane wrote:\n> Robert Treat <[email protected]> writes:\n> > Oddly I don't see the link he's referring to, but I've always thought it\n> > unfortunate that the postgresql community didn't hold onto those domain\n> > names.\n> \n> We were not offered the chance; Landmark wanted to hold onto the Great\n> Bridge trademark, apparently. We did come away with the postgres.com,\n> postgres.org, and postgresql.com domains, which GB had managed to wrestle\n> away from some korean domain squatter.\n> \n\nSorry. I knew that but I guess my statement implied different.\n\n> The fact that some of those aren't currently resolving nicely (eg,\n> www.postgres.org gets you a Horde login page) is our own darn fault.\n> As Dave Page was mentioning, some additional hands in the webpage\n> maintenance effort would be great.\n> \n\nWell, that's more of a DNS issue than a webpage, and afaik Marc is the\nonly one who can make that change. If he's willing to change it now by\nall means let's do so. If he's too busy but willing to give someone else\naccess to do it I'm sure we can dig someone up (like me)\n\nRobert Treat\n\n", "msg_date": "16 Apr 2003 14:32:32 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Many comments (related to \"Are we losing momentum?\")" }, { "msg_contents": "Glad to see all the comments about the web page stuff, and see that people\nrecognize the need for a cohesive look and feel for Postgres.org. Do people\nhave any comments about the rest of the stuff in my post?\n\nLike, are the JDBC developers aware that you will be changing the FE/BE\nprotocol? Do they know you will (possibly) be adding 2PC?\n\nAny communication going on between core and postgres-r developers to make\nsure that replication and 2PC will work together simultaneously as I\ndescribed in my earlier e-mail?\n\nAny comments about the section on competition? Do people agree / disagree\nthat postgres should not be competing with MySQL (because it's no real\ncompetition) or does everyone think MySQL is a real threat? 
Do people agree\n/ disagree that the real competition is commercial DB's and Firebird / SAP\ndb?\n\nLater\nRob\n\n", "msg_date": "Wed, 16 Apr 2003 16:50:47 -0400", "msg_from": "\"Rob Butler\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Many comments (related to \"Are we losing momentum?\")" }, { "msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\n> Any comments about the section on competition? Do people agree / disagree\n> that postgres should not be competing with MySQL (because it's no real\n> competition) or does everyone think MySQL is a real threat? Do people \n> agree / disagree that the real competition is commercial DB's and Firebird \n> / SAP db?\n\nThere has been plenty of discussion on the advocacy list about this: it is \na much better place for this sort of talk than hackers anyway.\n\n- --\nGreg Sabino Mullane [email protected]\nPGP Key: 0x14964AC8 200304161743\n-----BEGIN PGP SIGNATURE-----\nComment: http://www.turnstep.com/pgp.html\n\niD8DBQE+nc6+vJuQZxSWSsgRAmt/AJwKpkYWubiF+fz0mgZoXSIaMBpASACcDgjl\n9Ks9DmXP+SjkzHA7DIMF0q8=\n=qmPX\n-----END PGP SIGNATURE-----\n\n", "msg_date": "Wed, 16 Apr 2003 21:44:55 -0000", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Many comments (related to \"Are we losing momentum?\")" }, { "msg_contents": "\n\nBruce wrote:\n\n>Having good reference sites is important, and I could list as many\n>impressive ones as MySQL, but who has time to hunt around and get\n>permission to list them --- I will tell you who --- the MySQL marketing\n>guys, while the PostgreSQL guys don't. :-(\n\nIs it a good enough benefit to make the ones we already\nhave easier to find?\n\nIf the content on these pages:\n\n http://techdocs.postgresql.org/techdocs/supportcontracts.php\n http://advocacy.postgresql.org/casestudies/\n http://archives.postgresql.org/pgsql-announce/2002-11/msg00004.php\n\ncould be integrated and put on an easy to find page in the \nadvocacy area it'd be a lot easier for new people to see.\n\n\nI know PostgreSQL's got at least as impressive a list as MySQL. It's\njust that you need to dig harder to find it.\n\n Ron\n\n", "msg_date": "Wed, 16 Apr 2003 15:38:31 -0700", "msg_from": "\"Ron Mayer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "\"Rob Butler\" <[email protected]> writes:\n> Like, are the JDBC developers aware that you will be changing the FE/BE\n> protocol?\n\nThe ones who have been participating in the discussion are ;-)\n\n> Do they know you will (possibly) be adding 2PC?\n\nThe odds of that appearing in 7.4 are nil, IMHO.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 16 Apr 2003 23:10:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Many comments (related to \"Are we losing momentum?\") " }, { "msg_contents": "Hi Kevin\n\n> -----Original Message-----\n> From: Kevin Brown [mailto:[email protected]] \n> Sent: 17 April 2003 03:35\n> To: [email protected]\n> Subject: Re: [HACKERS] pg_clog woes with 7.3.2 - Episode 2\n> \n> I'd also start looking carefully through the system logs for \n> SCSI errors. 
You should see some if you're getting bad block \n> problems (in particular, you should see bad block remapping \n> attempts that couldn't read the data from the original bad \n> block -- this, or running out of spare blocks, is the only \n> reason you should see errors at all on an otherwise functional setup).\n\nNo errors on the system at all apart from PostgreSQL. On the previous\ndisk, the number of errors reported by badblocks was rising from 64\ninitially to 80-something when I took it out.\n\n> If badblocks shows errors but you don't see any SCSI errors \n> in the system logs, then it's time to start suspecting the \n> disk controller or perhaps even the PCI bus controller, \n> because it means something really weird is happening on the \n> backend that is entirely invisible. Cabling or termination \n> could be an issue, but I'd expect to see parity errors, timed \n> out commands, etc. if that's the problem.\n\nYes, me too. Still, I've now tried a 29160N adaptor, and changed the\ncable to a DPT one (both that and the previous Adaptec have hardwired\nterminators). I've also removed the DAT drive from the system so the\nonly things on the SCSI bus are 2 identical disks and the adaptor.\n\nI'm beginning to wonder about my shiny new 2.4.20 kernel...\n\nRegards, Dave.\n\n", "msg_date": "Thu, 17 Apr 2003 08:32:40 +0100", "msg_from": "\"Dave Page\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_clog woes with 7.3.2 - Episode 2" }, { "msg_contents": "I am not a hacker of PgSQL, and new to Databases. I was using MySQL \nunder .NET, but was annoyed by their agressive licence agreements and \nimmaturity. (Their sales personel are also very rude. One girl once \ntold me that if I didn't like their licence terms I should just use \nflat-files instead.) One of the .NET developers for MySQL advised me to \nlook at PostgreSQL, and I have never looked back.\n\nThis was not the fist time I looked at PostgreSQL, I initially looked at \n30+ Databases, but rejected PostgreSQL off hand. There was no Windows \nversion.\n\nSpeaking stricktly as an ameture, I would like to make a few comments I \ncould not see mensioned in this thread. This is not a dig, more a wish \nlist!\n\nBecause: With the expanding industry and popularity of IT as part of \nunrelated collage courses (Engineering, Accountancy etc), there are lots \nof ametures and newbe's. We will never be first class hackers, we will \nprobably never get to the end of the manuals or understand what P2C / \nFE/BE is or where it's used. But we are the silent magority. (I am \npersonally extreamly dyslexic. I learn from example, talking and brief \npainful trips into the documentation archives.)\n\nTherefore we learn as much as we need to know. In time I am sure we all \nwant to be guru's in everything. I have lots of ameture friends at this \nlevel, running ISP's, producing commercial applications, in unrelated \nresearch, needing office systems... All needing a DB. Most using \nMS-SQL or MySQL.\n\nTo draw on a popular examples, MySQL helps the ameture: (I'm putting my \nfoot in it that some of this probably exists. 
I just haven't found it yet.)\n\n-\tA true Windows version which people can learn their craft on.\n-\tTools which look like Access, to do row level data editing with no SQL.\n-\tEasy to use and remember command extensions, like 'CREATE IF NOT \nEXISTS', 'DROP IF EXISTS' which are universal.\n-\tCentrally located complete documentation in many consistent easy to \nread formats, of the system and *ALL* API's, including in-line tutorials \nand examples.\n-\tData types like 'ENUM' which appeal to ametures.\n-\tThere are no administrative mandatorys. Eg, VACUUM. (A stand-alone \ncommercial app, like an Email client, will be contrainted by having to \nbe an app and a DBA in one.)\n-\tThe tables (not innodb) are in different files of the same name. \nAllowing the OS adminitrator great ability. EG, putting tables on \nseparate partitions and therefore greatly speeding performance.\n-\tThey have extensive backup support. Including now, concurrent backup \nwithout user interuption or risk of inconsistency.\n\nNow I have begun to climb the ladder a bit, I know this it of little \nimportance compared to working referential constraints, triggers, \nprocedures and transactions... You also have the excelent mailing list \n'novice', with excelent support for Ametures, with the most friendly \nwelcome note: 'No problem too minor'! Thanks to you all for providing \nthe system I am now beginning to enjoy using.\n\nPS: I like the '\\dt'. Especially the way it can be used half way \nthrough a true statement, inspired bit of genious there.\n\nBen\n\n", "msg_date": "Thu, 17 Apr 2003 09:45:29 +0000", "msg_from": "Ben Clewett <[email protected]>", "msg_from_op": false, "msg_subject": "For the ametures. (related to \"Are we losing momentum?\")" }, { "msg_contents": "On Wed, 16 Apr 2003 [email protected] wrote:\n\n> \n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n> \n> \n> > In fact I think postgresql is easier to use. Till date, I could never start \n> > mysql by hand and get it behave sanely. pg_ctl or nohup postmaster has always \n> > worked for me.\n> \n> This is weird, because despite mysql's technical inferiority, it really is \n> pretty simple to use. Also seems a little hypocritical of you in light of \n> the RTFM rant later on in your email. :)\n\nI hate to join in this thread but...\n\nI don't find it weird. It's probably a different mind set or something but I\nfind the MySQL documentation discussing something that will be in version 8.34\nwhen they still list 3.23 as the latest production version is so confusing when\nit's written with no indication that the thing isn't already in place.\n\nJust my own view. People say MySQL is easy and PostgreSQL is difficult to\nlearn. I say PostgreSQL is easy and MySQL is difficult to learn.\n\nAnd as for it being maintenance free while a regular vacuum is something too\ndifficult a concept for people to grasp. Well, what do these maintenance free\nMySQL folk do with the regular tasks that MySQL needs run?\n\n\n--\nNigel Andrews\n\n", "msg_date": "Thu, 17 Apr 2003 12:35:07 +0100 (BST)", "msg_from": "\"Nigel J. Andrews\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" 
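A concrete form of the one-line-in-crontab answer that comes up repeatedly in this thread, using the vacuumdb wrapper shipped with PostgreSQL; the schedule and the choice of a database-wide VACUUM ANALYZE are only an illustration to be tuned per site:\n\n    # postgres user's crontab: nightly vacuum plus statistics refresh at 03:15\n    15 3 * * * vacuumdb -a -z -q\n\nSites with a handful of very busy tables often add a second, more frequent entry that names just those tables, e.g. vacuumdb -z -t busy_table mydb.\n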
}, { "msg_contents": "Sean Chittenden writes:\n\n> I don't think these should be hacked into the backend/libpq, but I\n> think it'd be a huge win to hack in \"show *\" support into psql for\n> MySQL users so they can type:\n>\n> SHOW (databases|tables|views|functions|triggers|schemas);\n\nWell, we (will) have the information schema, and if you like you can put\nit in the path and write\n\nselect * from tables;\n\netc., which seems just as good.\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Thu, 17 Apr 2003 14:45:26 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Are we losing momentum?" }, { "msg_contents": "On Thursday 17 April 2003 13:35, Nigel J. Andrews wrote:\n\n> I hate to join in this thread but...\n\nme too, but I am suffering from a bout of MySQL :-(\n\n(...)\n> Just my own view. People say MySQL is easy and PostgreSQL is difficult to\n> learn. I say PostgreSQL is easy and MySQL is difficult to learn.\n\nHaving had to use MySQL seriously for the first time for a long time, I am finding\nit makes the easy things (appear) easy and the difficult things impossible.\nFor example, AUTO_INCREMENT is easy to set up and use, but \nis a toy feature compared to real sequences...\n\n> And as for it being maintenance free while a regular vacuum is something\n> too difficult a concept for people to grasp. Well, what do these\n> maintenance free MySQL folk do with the regular tasks that MySQL needs run?\n\nThis is what MySQL recommends:\nhttp://www.mysql.com/doc/en/Maintenance_regimen.html\n\nHow about repackaging VACUUM as a \"database defragmentation\nutility\"? After all many many people have come to accept\ndisk defragmenters as an essential part of their OS ;-) \n\n\nIan Barwick\[email protected]\n\n", "msg_date": "Thu, 17 Apr 2003 14:50:47 +0200", "msg_from": "Ian Barwick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Sean Chittenden writes:\n>> I don't think these should be hacked into the backend/libpq, but I\n>> think it'd be a huge win to hack in \"show *\" support into psql for\n>> MySQL users so they can type:\n>> \n>> SHOW (databases|tables|views|functions|triggers|schemas);\n\n> Well, we (will) have the information schema, and if you like you can put\n> it in the path and write\n> select * from tables;\n> etc., which seems just as good.\n\nI think Sean's idea is not to make it \"just as easy as MySQL\", it's to\nmake it \"the *same* as MySQL\", for the benefit of those that refuse to\nlearn differently. Them as won't adjust to \"\\dt\" in place of \"show\ntables\" aren't likely to adjust to \"select * from tables\" either.\nNot even (maybe especially not) if it's arguably a standard.\n\nI think the idea has some merit; although I wonder whether it wouldn't\nbe smarter to put the code in the backend so that you don't need a\nparser in psql. The SHOW code could fall back to looking at these\npossibilities after it fails to find a match to a GUC variable name.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Thu, 17 Apr 2003 10:05:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Are we losing momentum? " }, { "msg_contents": "Ben Clewett <[email protected]> writes:\n> -\tThe tables (not innodb) are in different files of the same name. \n> Allowing the OS adminitrator great ability. 
EG, putting tables on \n> separate partitions and therefore greatly speeding performance.\n\nFWIW, we used to do it that way too, many releases ago. We gave it up\nbecause it was impossible to support rollback of table deletion/rename\nwith that storage rule underneath. Consider\n\n\t\tBEGIN;\n\t\tDROP TABLE a;\n\t\tCREATE TABLE a (with-some-other-schema);\n\t\t-- oops, think better of it\n\t\tROLLBACK;\n\nWith table files named after the table, we could not support the above,\nbecause we'd need two \"a\"'s in existence at the same time. Postgres'\ncatalog mechanisms can handle rollback of the catalog changes, but the\nUnix filesystem never heard of rollback :-(\n\nThere are other reasons, which some folks have pointed out elsewhere in\nthis thread, but that was the killer one.\n\nI notice that MySQL seems to be migrating in this direction as well...\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Thu, 17 Apr 2003 10:35:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: For the ametures. (related to \"Are we losing momentum?\") " }, { "msg_contents": "> >> I don't think these should be hacked into the backend/libpq, but\n> >> I think it'd be a huge win to hack in \"show *\" support into psql\n> >> for MySQL users so they can type:\n> >> \n> >> SHOW (databases|tables|views|functions|triggers|schemas);\n> \n> > Well, we (will) have the information schema, and if you like you\n> > can put it in the path and write select * from tables; etc., which\n> > seems just as good.\n\nI thought about changing the SHOW commands to executing a SELECT from\nthe information_schema schema, actually, but unless the various \\d\ncommands are going to get switched to SELECT'ing from the\ninformation_schema, it decreases the likelihood that someone will\nsuccessfully switch over to using the \\d commands in psql. I believe\nthat the \\d commands are a good thing and it's good that the backend\ndoesn't have support for the \\d commands. Having \\d or SHOW commands\nin the backend would dirty up the sources of the backend, IMHO.\n\n> I think Sean's idea is not to make it \"just as easy as MySQL\", it's\n> to make it \"the *same* as MySQL\", for the benefit of those that\n> refuse to learn differently. Them as won't adjust to \"\\dt\" in place\n> of \"show tables\" aren't likely to adjust to \"select * from tables\"\n> either. Not even (maybe especially not) if it's arguably a\n> standard.\n\nIt's amazing what you can accomplish by taking out your DBAs and\nracking up a $20 bar tab (note to would be attempters of this method:\nwhen the bar tab gets to $50, you find out things you didn't really\nwant to know or weren't intending to hear). So yeah, as Tom said,\nthey know \\d [table_name], I've been able to get that much through,\nbut syntactically, it doesn't offer the same syntactic goo that their\nfingers are used to typing and they _hate_ that there's no SHOW\ncommand for tables. It's a usability irk that kills them every time,\nthey type \"SHOW [tab]\" him. There are a few other things I picked up\nthat evening too.\n\n*) MySQL's CLI tab completion is terrible and the SHOW commands work\n even worse than other tab operations in mysql, this _is_ something\n that they do like about psql. 
psql's tab completion is really\n snappy by comparison.\n\n*) The _only_ time that they use the SHOW commands is when they're\n using a CLI.\n\nSo as opposed to supporting MySQL's brokenness, I hacked a small\nparser into psql and added a TIP to the top of the result set printed\nto stderr that tells the user how to use the equiv \\d command.\n\n> I think the idea has some merit; although I wonder whether it\n> wouldn't be smarter to put the code in the backend so that you don't\n> need a parser in psql. The SHOW code could fall back to looking at\n> these possibilities after it fails to find a match to a GUC variable\n> name.\n\nWell, I think that the backend should be kept clean of MySQL's\nnastiness. It's ugly to have a small parser in psql, but I think it's\nbetter off than letting MySQL dictate non-existent standards to the\nrest of the DB community because its developers have struck a chord\nwith the newbie database masses. PostgreSQL is a better database, we\nshouldn't have to cater to their hackery. -sc\n\n-- \nSean Chittenden\n\n", "msg_date": "Thu, 17 Apr 2003 12:48:03 -0700", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Are we losing momentum?" }, { "msg_contents": "Sean Chittenden <[email protected]> writes:\n>> I think the idea has some merit; although I wonder whether it\n>> wouldn't be smarter to put the code in the backend so that you don't\n>> need a parser in psql. The SHOW code could fall back to looking at\n>> these possibilities after it fails to find a match to a GUC variable\n>> name.\n\n> Well, I think that the backend should be kept clean of MySQL's\n> nastiness.\n\nKeep in mind though that there was already talk of migrating most of the\n\\d functionality to the backend (primarily as a way of decoupling psql\nfrom catalog version changes). If we were to do that, it would make\ngood sense to make it accessible via SHOW as well. IMHO anyway.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Thu, 17 Apr 2003 15:52:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Are we losing momentum? " }, { "msg_contents": "> >> I think the idea has some merit; although I wonder whether it\n> >> wouldn't be smarter to put the code in the backend so that you\n> >> don't need a parser in psql. The SHOW code could fall back to\n> >> looking at these possibilities after it fails to find a match to\n> >> a GUC variable name.\n> \n> > Well, I think that the backend should be kept clean of MySQL's\n> > nastiness.\n> \n> Keep in mind though that there was already talk of migrating most of\n> the \\d functionality to the backend (primarily as a way of\n> decoupling psql from catalog version changes). If we were to do\n> that, it would make good sense to make it accessible via SHOW as\n> well. IMHO anyway.\n\n:-/ Yeah, I've been following that from a distance and I'm not so wild\nabout that. I really like that the information_schema has been\nintegrated into the base, but translating the SHOW commands into\nSELECTs from information_schema on the backend seems like a bad idea\nunless its going to be done abstract enough via some kind of rewrite\nengine that allows users to program the database to translate their\nverbiage into SQL (ex: KILL -> DROP, GET -> SELECT), which could be\nkinda fun.\n\nGetting back to SHOW, what do you want to show or not show? Does the\nbackend show what's most user friendly? If that's the case, do you\nonly show tables that a user has SELECT access to? 
Does SHOW return\ntuples like a SELECT? What if a SHOW statement doesn't show what the\nuser is interested in (view definitions)? How about when those view\ndefinitions get really long and hard to visually see on a terminal\nscreen? There's no select list available in the SHOW syntax to limit\nout excessive bits.\n\nWhile adding the ability to set MYSQL_MODE as something that a user\ncould set in their .psqlrc, I thought it'd be the ideal progression to\ndo a few things: \n\n1) change the \\d commands to the appropriate SELECT from the\n information_schema. Doing this'll go a long way toward keeping the\n structure of the database contained in the database and psql\n independent.\n\n2) Set a few tunables that specify the select list for the SELECTs\n from the information_schema that way a user can specify what they\n see/don't see.\n\n3) SHOW is syntactic user goo that makes MySQL users feel happy and\n should be in the user interface. Because SHOW is a user interface\n nicety, real admins that over see database users could change\n users' .psqlrc files to specify the select list that the user/site\n wants, which could possibly be even the entire query.\n\nHrm, how's this for a more concise argument:\n\nPushing SHOW/\\d into the backend is a bad idea. The backend is a\nrelational database, not a user interface. The information_schema.*\ntables/views are the SQL sanctioned interface that the backend\nprovides. How a user interfaces with the database/information_schema\nis something that should be left up to the user interface program\n(psql) and not pushed into the backend. If a user wants to type \"SHOW\nTABLES LIKE p\" instead of \"\\dt p*\", so be it, but that's a user\ninterface concern, not an SQL concern. The SQL way of getting the\nsame data as \"SHOW TABLES\" is via SELECTing from the\ninformation_schema schema. Implementing SQL commands in the backend\nto make up for MySQL's inability to be forward thinking and\nconsequently hack in a syntax to wrap around their system catalogs for\nnewbie DB users is bad juju. By the same token, doesn't mean\nPostgreSQL can't provide the same lovey dovey interface that new users\nexpect, it should, however mean that the backend should be left alone\nto do what it specializes in (being an SQL conformant relational DB)\nand that the user interface (psql in this case) should be left alone\nto implement what SHOW TABLES really means.\n\nKeep in mind, that the only time that the SHOW commands are used, from\nwhat I've been able to ascertain, is when DBAs are in psql and doing\nbasic admin work and exploring/creating their corner of the universe.\nAnyone who's seriously trying to write a tool to inspect the database\nknows PostgreSQL reasonably well and uses SELECT + the system\ncatalogs. The target audience for a SHOW syntax isn't the power DBAs\nor people writing interfaces to examine PostgreSQL, it's the newbie\ncreating a table for a hack project via the CLI (psql). Allowing\nusers to customize the meaning of the \\d/SHOW commands would make psql\nmuch more powerful than it currently is and would address many of\nthese usability concerns. 
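(To make the plain-SQL route above concrete: a minimal sketch, assuming the 7.4 information_schema lands as planned and using the standard view and column names, so adjust if the final catalogs differ:\n\n    SELECT table_name FROM information_schema.tables\n     WHERE table_schema = 'public';\n\n    SELECT column_name, data_type\n      FROM information_schema.columns\n     WHERE table_name = 'foo';\n\nPerfectly serviceable, just not something a MySQL refugee will ever type unprompted.)\n\n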
I'm now thinking that psql should intercept\nall non-standard SQL calls (bits not starting with SELECT, UPDATE,\nINSERT, ALTER, etc) and translate them into the appropriate SQL.\nHaving a generic mechanism for doing this would make psql\nsignificantly cleaner.\n\nAnyway, I'll rest on this topic until I hear whether or not folks\nwould rather have this done in psql or on the backend, but I'd like to\nget this in place somewhere so that I can stop reworking bits from\nMySQL to PostgreSQL. If it's determined that the bits should be done\nin psql, I'll gladly finish things up, clean things up, add the docs,\nmove things over to use the information_schema, and if folks would\nlike, add the appropriate functionality that'll allow folks to\nconfigure the \\d commands/SHOW via their .psqlrc.\n\n-sc\n\n-- \nSean Chittenden\n\n", "msg_date": "Thu, 17 Apr 2003 14:09:41 -0700", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Are we losing momentum?" }, { "msg_contents": "It's out of date and I did not receive any feedback on it (so I assume\nit's not very useful), but the patch submitted in late February would\neasily allow the described 'translation' to occur.\n\nSummary:\n\nSchema in backend translates a \\<command> <arg> command into an sql\nquery which is then to be executed. Logic is still in psql, but the\navailable commands and how data is retrieved from the schema was backend\nspecific.\n\n\nI'm using it here due to it's ability to add new psql commands (since\nit's just data in a table.) \n\nhttp://archives.postgresql.org/pgsql-patches/2003-02/msg00216.php\n\n\nAnyway, might be useful for thoughts. Simply moving the commands into\nthe backend still leaves us with unsupported new commands in old clients\nor new commands in old databases (leaving errors) unless the SHOW ...\nsyntax itself is handled by the backend.\n\n> :-/ Yeah, I've been following that from a distance and I'm not so wild\n> about that. I really like that the information_schema has been\n> integrated into the base, but translating the SHOW commands into\n> SELECTs from information_schema on the backend seems like a bad idea\n> unless its going to be done abstract enough via some kind of rewrite\n> engine that allows users to program the database to translate their\n> verbiage into SQL (ex: KILL -> DROP, GET -> SELECT), which could be\n> kinda fun.\n-- \nRod Taylor <[email protected]>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "17 Apr 2003 17:25:15 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Are we losing momentum?" }, { "msg_contents": "Sean Chittenden <[email protected]> writes:\n>> Keep in mind though that there was already talk of migrating most of\n>> the \\d functionality to the backend (primarily as a way of\n>> decoupling psql from catalog version changes). If we were to do\n>> that, it would make good sense to make it accessible via SHOW as\n>> well. IMHO anyway.\n\n> :-/ Yeah, I've been following that from a distance and I'm not so wild\n> about that. 
I really like that the information_schema has been\n> integrated into the base, but translating the SHOW commands into\n> SELECTs from information_schema on the backend seems like a bad idea\n> unless its going to be done abstract enough via some kind of rewrite\n> engine that allows users to program the database to translate their\n> verbiage into SQL (ex: KILL -> DROP, GET -> SELECT), which could be\n> kinda fun.\n\nWell, I don't want to convert \\d into selects from information_schema,\nprimarily because that would constrain us to showing only things that\nare known to the SQL spec --- goodbye, Postgres-specific features\n(such as user-definable operators).\n\nI was, however, wondering whether the backend internal support for\n\"SHOW tables\" couldn't be simply to translate it to \"SELECT * FROM\nsome_view\". Then it'd be possible for people to customize the output by\nreplacing the view definition.\n\n> Getting back to SHOW, what do you want to show or not show? Does the\n> backend show what's most user friendly? If that's the case, do you\n> only show tables that a user has SELECT access to? Does SHOW return\n> tuples like a SELECT? What if a SHOW statement doesn't show what the\n> user is interested in (view definitions)?\n\nI thought you only wanted MySQL-equivalent functionality here ;-).\nDon't tell me they have customizable SHOW output ...\n\n> The information_schema.* tables/views are the SQL sanctioned interface\n> that the backend provides.\n\nThis argument sounds great in the abstract, but it falls down as soon as\nyou consider the reality that we want to support things that aren't\nSQL-sanctioned. Now, we could define some views that are *not* exactly\nINFORMATION_SCHEMA, but at that point the claim that it's a stable\nstandard interface is looking a lot weaker :-(\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Thu, 17 Apr 2003 17:40:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Are we losing momentum? " }, { "msg_contents": "> >> Keep in mind though that there was already talk of migrating most\n> >> of the \\d functionality to the backend (primarily as a way of\n> >> decoupling psql from catalog version changes). If we were to do\n> >> that, it would make good sense to make it accessible via SHOW as\n> >> well. IMHO anyway.\n> \n> > :-/ Yeah, I've been following that from a distance and I'm not so\n> > wild about that. I really like that the information_schema has\n> > been integrated into the base, but translating the SHOW commands\n> > into SELECTs from information_schema on the backend seems like a\n> > bad idea unless its going to be done abstract enough via some kind\n> > of rewrite engine that allows users to program the database to\n> > translate their verbiage into SQL (ex: KILL -> DROP, GET ->\n> > SELECT), which could be kinda fun.\n> \n> Well, I don't want to convert \\d into selects from\n> information_schema, primarily because that would constrain us to\n> showing only things that are known to the SQL spec --- goodbye,\n> Postgres-specific features (such as user-definable operators).\n\n::nods:: Good point. All the more the reason to put this in the\nclient. :)\n\n> I was, however, wondering whether the backend internal support for\n> \"SHOW tables\" couldn't be simply to translate it to \"SELECT * FROM\n> some_view\". 
Then it'd be possible for people to customize the\n> output by replacing the view definition.\n\nWell, my attitude is to arm psql with a good set of defaults (already\nhas some good ones, IMHO), and having it catch \\[token1] [token2] and\nSHOW [token1] and translate it into the appropriate query. If it\nworks out in psql, then leave it. If people complain about it not\nbeing available in the backend, then we can move it there in 8.0. :)\n\n> > Getting back to SHOW, what do you want to show or not show? Does\n> > the backend show what's most user friendly? If that's the case,\n> > do you only show tables that a user has SELECT access to? Does\n> > SHOW return tuples like a SELECT? What if a SHOW statement\n> > doesn't show what the user is interested in (view definitions)?\n> \n> I thought you only wanted MySQL-equivalent functionality here ;-).\n> Don't tell me they have customizable SHOW output ...\n\nHeh, they don't, but letting psql customize what SHOW means would be a\nfeature that mysql doesn't have and one that'd be reasonably useful,\nIMHO.\n\n> > The information_schema.* tables/views are the SQL sanctioned\n> > interface that the backend provides.\n> \n> This argument sounds great in the abstract, but it falls down as\n> soon as you consider the reality that we want to support things that\n> aren't SQL-sanctioned. Now, we could define some views that are\n> *not* exactly INFORMATION_SCHEMA, but at that point the claim that\n> it's a stable standard interface is looking a lot weaker :-(\n\nDoes the spec preclude us from adding views/tables to the\ninformation_schema that allow information_schema to be a completely\nreflective interface into the structure of the backend? I'm worried\nthat things are out of control because the existing, already in use\nbackend's system catalogs aren't user friendly (ex: usename ->\nusername).\n\n-sc\n\n-- \nSean Chittenden\n\n", "msg_date": "Thu, 17 Apr 2003 14:55:31 -0700", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Are we losing momentum?" }, { "msg_contents": "Tom Lane writes:\n\n> I think Sean's idea is not to make it \"just as easy as MySQL\", it's to\n> make it \"the *same* as MySQL\", for the benefit of those that refuse to\n> learn differently. Them as won't adjust to \"\\dt\" in place of \"show\n> tables\" aren't likely to adjust to \"select * from tables\" either.\n> Not even (maybe especially not) if it's arguably a standard.\n\n\"Same as MySQL\" is impossible. We can pick here and there and add tons of\nduplicate interfaces, play catch-up when they change them, but there's\nalways going to be a next feature that \"would be *really* nice if\nPostgreSQL could support it and it would surely draw *tons* of users to\nPostgreSQL\". Freely written MySQL code is basically completely\nincompatible with PostgreSQL, so someone who needs to switch will have to\nrelearn anyway.\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Fri, 18 Apr 2003 02:21:22 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Are we losing momentum? " }, { "msg_contents": "Tom Lane wrote:\n> Ben Clewett <[email protected]> writes:\n> > -\tThe tables (not innodb) are in different files of the same name. \n> > Allowing the OS adminitrator great ability. EG, putting tables on \n> > separate partitions and therefore greatly speeding performance.\n> \n> FWIW, we used to do it that way too, many releases ago. 
We gave it up\n> because it was impossible to support rollback of table deletion/rename\n> with that storage rule underneath.\n\nIt occurs to me that we could make it possible to get some of the\nperformance gains MySQL gets through its naming conventions by\nincluding the type of object in the path of the object. For instance,\na table with relfilenode 52715 in database with relfilenode 46722\nwould have a path of $PGDATA/table/46722/52715, an index in the same\ndatabase with OID 98632 would have a path of\n$PGDATA/index/46722/98632, etc. Then you could use symlinks to have\ntables, indexes, etc. point to various places on disk where they\nreally need to live.\n\nIs that even remotely feasible? I looked into making the changes\nrequired but didn't see an obvious way to get a type string from an\nobject's RelFileNode internally (once you have that, it's a matter of\nchanging relpath() appropriately).\n\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n\n", "msg_date": "Thu, 17 Apr 2003 21:56:52 -0700", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: For the ametures. (related to \"Are we losing momentum?\")" }, { "msg_contents": "Kevin Brown <[email protected]> writes:\n> It occurs to me that we could make it possible to get some of the\n> performance gains MySQL gets through its naming conventions by\n> including the type of object in the path of the object.\n\n\"Performance gains\"? Name one.\n\nWe have been there and done that. I see no reason to go back.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Fri, 18 Apr 2003 01:56:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: For the ametures. (related to \"Are we losing momentum?\") " }, { "msg_contents": "Tom Lane wrote:\n> Kevin Brown <[email protected]> writes:\n> > It occurs to me that we could make it possible to get some of the\n> > performance gains MySQL gets through its naming conventions by\n> > including the type of object in the path of the object.\n> \n> \"Performance gains\"? Name one.\n\nInstead of tables and their indexes being on the same platter, you'd\nbe able to put them on separate platters. Sounds like it would likely\nyield a performance gain to me...\n\n> We have been there and done that. I see no reason to go back.\n\nI'm not proposing that we return to calling the individual files (or\nthe database they reside in) by name, only that we include a \"type\"\nidentifier in the path so that objects of different types can be\nlocated on different spindles if the DBA so desires. As it is right\nnow, tables and indexes are all stored in the same directory, and\nmoving the indexes to a different spindle is an uncertain operation at\nbest (you get to shut down the database in order to move any\nnewly-created indexes, and dropping a moved index will not free the\nspace occupied by the index as it'll only remove the symlink).\n\nAll the current transactional operations (including things like table\nrename ops) will still be transactional, with the only difference\nbeing that instead of one directory (base/) to deal with you'd have\nseveral (one for each type of object, thus a base/<type>/ directory\nfor each object type). 
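(Incidentally, the mapping needed to drive such a layout is already visible at the SQL level; something like this --- a sketch, 7.3-era catalogs assumed --- tells you which relfilenode under base/<db-oid>/ belongs to a table and which to an index:\n\n    SELECT relname, relkind, relfilenode\n      FROM pg_class\n     WHERE relkind IN ('r', 'i')\n     ORDER BY relkind, relname;\n\nso even today a small external script could at least audit which spindle each kind of object actually lives on.)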
Creating a database would mean creating a\ndirectory for the database in each of the type directories instead of\njust one in base/, and dropping it would mean removing said\ndirectories.\n\nIt's not like we'd be losing anything by it: the only operations that\nyou wouldn't necessarily be able to run in a transaction are the ones\nthat you can't currently run in a transaction anyway, like CREATE\nDATABASE.\n\nBut the benefit is that you can now safely put indexes on a different\nspindle than the data. That sounds like a net win to me.\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n\n", "msg_date": "Thu, 17 Apr 2003 23:19:57 -0700", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: For the ametures. (related to \"Are we losing momentum?\")" }, { "msg_contents": "Dave Page wrote:\n> Still, I've now tried a 29160N adaptor, and changed the\n> cable to a DPT one (both that and the previous Adaptec have hardwired\n> terminators). I've also removed the DAT drive from the system so the\n> only things on the SCSI bus are 2 identical disks and the adaptor.\n> \n> I'm beginning to wonder about my shiny new 2.4.20 kernel...\n\nThat's a possibility. There's one other I'd try, prior to using a\ndifferent kernel: try booting the kernel with the \"noapic\" option. On\ncertain SMP systems, this has allowed the kernel to come up properly\nand see the SCSI adaptor (among other things), whereas without it the\nsystem would hang on attempts to access the SCSI device. Even if\nyou're not running SMP, it may help your system's stability.\n\n\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n\n", "msg_date": "Fri, 18 Apr 2003 02:58:19 -0700", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_clog woes with 7.3.2 - Episode 2" }, { "msg_contents": "Kevin Brown wrote:\n> Tom Lane wrote:\n> > Kevin Brown <[email protected]> writes:\n> > > It occurs to me that we could make it possible to get some of the\n> > > performance gains MySQL gets through its naming conventions by\n> > > including the type of object in the path of the object.\n> > \n> > \"Performance gains\"? Name one.\n> \n> Instead of tables and their indexes being on the same platter, you'd\n> be able to put them on separate platters. Sounds like it would likely\n> yield a performance gain to me...\n> \n> > We have been there and done that. I see no reason to go back.\n> \n> I'm not proposing that we return to calling the individual files (or\n> the database they reside in) by name, only that we include a \"type\"\n> identifier in the path so that objects of different types can be\n> located on different spindles if the DBA so desires. 
As it is right\n> now, tables and indexes are all stored in the same directory, and\n> moving the indexes to a different spindle is an uncertain operation at\n> best (you get to shut down the database in order to move any\n> newly-created indexes, and dropping a moved index will not free the\n> space occupied by the index as it'll only remove the symlink).\n\nThe thing is, this isn't necessarily particularly useful in managing the \npartitioning of data across disks.\n\nIf I have, defined, /disk1, /disk2, /disk3, /disk4, and /disk5, it is highly \nunlikely that my partitioning will be based on the notion of \"put indices on \ndisk1, tables on disk2, and, well, skip the others.\"\n\nI'm liable to want WAL separate from all the others, for a start, but then \nlook for what to put on different disks based on selecting particular tables \nand indices as candidates.\n\nYour observation about the dropping of a moved index is well taken; that would \npoint to the idea that the top level \"thing\" containing each table/index \nperhaps should be a directory, with two interesting properties:\n\n- By being a directory, and putting files in it, this allows extensions to be \nmore clearly tied to the table/index when a file grows towards the \nnot-uncommon 2GB barrier;\n\n- In order for the linking to physical devices to be kept under control, \nparticularly if an index gets dropped and recreated, the postmaster needs to \nbe able to establish the links, suggesting an extension to syntax. At first \nblush:\n\n CREATE INDEX FROBOZZ_IDX LOCATION '/disk1/pgindices' on FROBOZZ(ID);\n\nSupposing the OID number was 234231, the postmaster would then create the \nsymbolic link from $PGDATA/base/234231 to the freshly-created directory \n/disk1/pgindices/234231, where the index would reside. (And if the directory \nexists, there should be some complaint :-).)\n\nI have made that up out of whole cloth; it _doesn't_ take into consideration \nhow you would specify the location of implicitly-created indices.\n\nBut it seems a useful approach that can be robust, and where it's even \nplausible that the postmaster could cope with a request to shift a table or \nindex to another location. (Which would, quite naturally, put a lock on \naccess to the object for the duration of the operation.)\n--\noutput = reverse(\"gro.gultn@\" \"enworbbc\")\nhttp://www.ntlug.org/~cbbrowne/\n\"The dinosaurs died because they didn't have a space program.\"\n-- Arthur C Clarke\n\n", "msg_date": "Fri, 18 Apr 2003 08:32:25 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: For the ametures. (related to \"Are we losing " }, { "msg_contents": "Kevin Brown wrote:\n> Dave Page wrote:\n> > Still, I've now tried a 29160N adaptor, and changed the\n> > cable to a DPT one (both that and the previous Adaptec have hardwired\n> > terminators). I've also removed the DAT drive from the system so the\n> > only things on the SCSI bus are 2 identical disks and the adaptor.\n> > \n> > I'm beginning to wonder about my shiny new 2.4.20 kernel...\n> \n> That's a possibility. There's one other I'd try, prior to using a\n> different kernel: try booting the kernel with the \"noapic\" option. On\n> certain SMP systems, this has allowed the kernel to come up properly\n> and see the SCSI adaptor (among other things), whereas without it the\n> system would hang on attempts to access the SCSI device. 
Even if\n> you're not running SMP, it may help your system's stability.\n\nThe \"noapic\" option seems a quasi-magical elixir for many sorts of ailments.\n\nI upgraded a box to 2.4.20 and discovered that my NIC was no longer properly \nrecognized until I threw that option in. Others in my office have \n/apparently/ the same hardware, and found they didn't need the option.\n\nAs a \"fix,\" it certainly seems to fall into the \"snakeoil/superstition\" \ncategory. While it often seems to have a useful effect, I haven't located any \nactual explanations as to why it should be expected to work.\n\nBut based on the comments in the thread, I would concur that my suspicions \nwould be with the kernel as the most likely root of the problem.\n--\n(reverse (concatenate 'string \"moc.enworbbc@\" \"enworbbc\"))\nhttp://www3.sympatico.ca/cbbrowne/nonrdbms.html\nRules of the Evil Overlord #82. \"I will not shoot at any of my enemies\nif they are standing in front of the crucial support beam to a heavy,\ndangerous, unbalanced structure. <http://www.eviloverlord.com/>\n\n", "msg_date": "Fri, 18 Apr 2003 08:39:45 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: pg_clog woes with 7.3.2 - Episode 2 " }, { "msg_contents": "Kevin Brown <[email protected]> writes:\n> Tom Lane wrote:\n>> \"Performance gains\"? Name one.\n\n> Instead of tables and their indexes being on the same platter, you'd\n> be able to put them on separate platters. Sounds like it would likely\n> yield a performance gain to me...\n\nThat has *nothing* to do with whether we name files after tables or not.\nAs Andrew pointed out, you don't really want people munging file\nlocations by hand anyway; until we have a proper tablespace\nimplementation, it's going to be tedious and error-prone no matter what.\n\n> I'm not proposing that we return to calling the individual files (or\n> the database they reside in) by name, only that we include a \"type\"\n> identifier in the path so that objects of different types can be\n> located on different spindles if the DBA so desires.\n\nThis has been proposed and rejected repeatedly in the tablespace\ndiscussions. It's too limiting; and what's worse, it's not actually\nany easier to implement than a proper tablespace facility. The\nlow-level I/O routines still need access to a piece of info they do\nnot have now. You may as well make it a tablespace identifier instead\nof a file-type identifier.\n\nThe real point here is that \"put the indexes on a different platter\"\nis policy. One should never confuse policy with mechanism, nor build a\nmechanism that can only implement one policy.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Fri, 18 Apr 2003 10:06:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: For the ametures. (related to \"Are we losing momentum?\") " }, { "msg_contents": "[email protected] wrote:\n> The \"noapic\" option seems a quasi-magical elixir for many sorts of\n> ailments.\n>\n> I upgraded a box to 2.4.20 and discovered that my NIC was no longer\n> properly recognized until I threw that option in. Others in my\n> office have /apparently/ the same hardware, and found they didn't\n> need the option.\n>\n> As a \"fix,\" it certainly seems to fall into the\n> \"snakeoil/superstition\" category. While it often seems to have a\n> useful effect, I haven't located any actual explanations as to why\n> it should be expected to work.\n\nWell, when it comes to booting a computer, the placebo effect doesn't\nreally exist. 
:-)\n\nNormally I'd agree that \"noapic\" sounds and smells like snakeoil. The\nproblem is that it has observable and repeatable effects on some\nsystems, and thus can't really be classified as snakeoil (much as one\nmight like to!).\n\nWhy should it be expected to work? I don't know...possibly because\nthe APIC hardware is buggy (perhaps in very subtle ways) on some\nsystems? Possibly because the APIC driver is subtlely incompatible\nwith certain APIC hardware? Possibly because the APIC driver has\ncertain subtle bugs that only manifest themselves on certain\nmotherboards with certain peripheral devices?\n\nWhatever the reason, the \"noapic\" option *does* work on certain\nsystems, so it unfortunately isn't something that can be dismissed as\nmere superstition -- the computer isn't being asked its opinion of its\nown health here, nor does it \"know\" that it should get \"well\" when\ngiven different boot options. No \"placebo effect\" involved, just\nrepeatable observation (that the observation isn't terribly repeatable\n*across* systems does not diminish the validity of the observation).\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n\n", "msg_date": "Fri, 18 Apr 2003 13:37:03 -0700", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_clog woes with 7.3.2 - Episode 2" }, { "msg_contents": "Tom Lane wrote:\n> > I'm not proposing that we return to calling the individual files (or\n> > the database they reside in) by name, only that we include a \"type\"\n> > identifier in the path so that objects of different types can be\n> > located on different spindles if the DBA so desires.\n> \n> This has been proposed and rejected repeatedly in the tablespace\n> discussions. It's too limiting; and what's worse, it's not actually\n> any easier to implement than a proper tablespace facility. \n\nIt's not? This is a little surprising, since the type information is\nalready stored, is it not? A proper tablespace implementation\nrequires the addition of commands to manage it and table\ninfrastructure to store it. That seems like a bit more work than\nwriting a function to translate an object ID into a type name (and\nchanging CREATE/DROP DATABASE to deal with multiple directories). But\nsince you're much more familiar with the internals, I'll take your\nword for it.\n\nI figured getting the type name of the object would be a relatively\neasy thing to do, obvious to anyone with any real familiarity with the\nsource. Guess not...\n\n\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n\n", "msg_date": "Fri, 18 Apr 2003 15:06:19 -0700", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: For the ametures. (related to \"Are we losing momentum?\")" }, { "msg_contents": "Kevin Brown <[email protected]> writes:\n> Tom Lane wrote:\n>> This has been proposed and rejected repeatedly in the tablespace\n>> discussions. It's too limiting; and what's worse, it's not actually\n>> any easier to implement than a proper tablespace facility. \n\n> It's not? This is a little surprising, since the type information is\n> already stored, is it not? A proper tablespace implementation\n> requires the addition of commands to manage it and table\n> infrastructure to store it.\n\nWell, yeah, you do have to provide some user interface stuff ;-)\n\nBut the hard, dirty, dangerous stuff is all in the low-level internals\n(bufmgr, smgr, etc). 
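(To be concrete about the user-interface end of it, the sort of thing I'd want to end up with is more or less --- pure sketch, none of this syntax exists today ---\n\n    CREATE TABLESPACE fastdisk LOCATION '/disk2/pgdata';\n    CREATE TABLE foo (f1 int) TABLESPACE fastdisk;\n    CREATE INDEX foo_idx ON foo (f1) TABLESPACE fastdisk;\n\nie, any individual table or index can be pointed at a particular tablespace, not just whole categories of objects.)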
I don't want to put a kluge in there when the same\namount of work will support a non-kluge solution.\n\nAlso, you'd still have to provide some user interface stuff for the\nkluge, so it's not like you can avoid doing any work at that level.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Fri, 18 Apr 2003 20:11:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: For the ametures. (related to \"Are we losing momentum?\") " }, { "msg_contents": "> > Instead of tables and their indexes being on the same platter, you'd\n> > be able to put them on separate platters. Sounds like it would likely\n> > yield a performance gain to me...\n>\n> That has *nothing* to do with whether we name files after tables or not.\n> As Andrew pointed out, you don't really want people munging file\n> locations by hand anyway; until we have a proper tablespace\n> implementation, it's going to be tedious and error-prone no matter what.\n\nJust so people are aware, I'm getting Jim Buttfuoco's tablespaces patch\nfrom him again, and getting it up to CVS. I'll then see what needs to be\ndone to it such that it would be accepted...\n\nChris\n\n", "msg_date": "Sat, 19 Apr 2003 12:21:01 +0800 (WST)", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: For the ametures. (related to \"Are we losing momentum?\")" }, { "msg_contents": "\nGreat. I think it can be made acceptable with little work.\n\n---------------------------------------------------------------------------\n\nChristopher Kings-Lynne wrote:\n> > > Instead of tables and their indexes being on the same platter, you'd\n> > > be able to put them on separate platters. Sounds like it would likely\n> > > yield a performance gain to me...\n> >\n> > That has *nothing* to do with whether we name files after tables or not.\n> > As Andrew pointed out, you don't really want people munging file\n> > locations by hand anyway; until we have a proper tablespace\n> > implementation, it's going to be tedious and error-prone no matter what.\n> \n> Just so people are aware, I'm getting Jim Buttfuoco's tablespaces patch\n> from him again, and getting it up to CVS. I'll then see what needs to be\n> done to it such that it would be accepted...\n> \n> Chris\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\n", "msg_date": "Sat, 19 Apr 2003 01:21:57 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: For the ametures. (related to \"Are we losing momentum?\")" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Great. I think it can be made acceptable with little work.\n\nIIRC, the reason Jim's patch got bounced was exactly that it offered an\nimplementation of only one policy, with no possibility of extension.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Sat, 19 Apr 2003 01:27:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: For the ametures. 
(related to \"Are we losing momentum?\") " }, { "msg_contents": "\n\nOn Sat, 19 Apr 2003, Bruce Momjian wrote:\n\n>\n> Great. I think it can be made acceptable with little work.\n\nJust keep in mind my track record of finding things too hard ;)\n\nJim has offered to help, he was just put off by some negativity...\n\nChris\n\n>\n> Christopher Kings-Lynne wrote:\n> > > > Instead of tables and their indexes being on the same platter, you'd\n> > > > be able to put them on separate platters. Sounds like it would likely\n> > > > yield a performance gain to me...\n> > >\n> > > That has *nothing* to do with whether we name files after tables or not.\n> > > As Andrew pointed out, you don't really want people munging file\n> > > locations by hand anyway; until we have a proper tablespace\n> > > implementation, it's going to be tedious and error-prone no matter what.\n> >\n> > Just so people are aware, I'm getting Jim Buttfuoco's tablespaces patch\n> > from him again, and getting it up to CVS. I'll then see what needs to be\n> > done to it such that it would be accepted...\n> >\n> > Chris\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to [email protected] so that your\n> > message can get through to the mailing list cleanly\n> >\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n>\n\n", "msg_date": "Sat, 19 Apr 2003 17:21:46 +0800 (WST)", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: For the ametures. (related to \"Are we losing momentum?\")" }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Great. I think it can be made acceptable with little work.\n>\n> IIRC, the reason Jim's patch got bounced was exactly that it offered an\n> implementation of only one policy, with no possibility of extension.\n\nI read all the comments regarding Jim's patch, but would you mind stating\nexactly what your concern is, Tom? What do you mean by 'one policy'?\n\nChris\n\n", "msg_date": "Sat, 19 Apr 2003 17:22:49 +0800 (WST)", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: For the ametures. (related to \"Are we losing momentum?\")" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> > Bruce Momjian <[email protected]> writes:\n> > > Great. I think it can be made acceptable with little work.\n> >\n> > IIRC, the reason Jim's patch got bounced was exactly that it offered an\n> > implementation of only one policy, with no possibility of extension.\n> \n> I read all the comments regarding Jim's patch, but would you mind stating\n> exactly what your concern is, Tom? What do you mean by 'one policy'?\n\nAs I remember, the patch only put indexes in one place, and tables in\nanother place. We need a general tablespace solution where we can\ncreate tablespaces and put tables/indexes in any of those.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\n", "msg_date": "Sat, 19 Apr 2003 11:33:19 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: For the ametures. 
(related to \"Are we losing momentum?\")" }, { "msg_contents": "Christopher Kings-Lynne <[email protected]> writes:\n> I read all the comments regarding Jim's patch, but would you mind stating\n> exactly what your concern is, Tom? What do you mean by 'one policy'?\n\nI don't want something that will only support a policy of \"put the\nindexes over there\". It should be possible to assign individual tables\nor indexes to particular tablespaces if the DBA wants to do that.\nI have nothing against making it easy to \"put the indexes over there\"\n--- for example, we might say that a database has a default tablespace\nfor each kind of object. But if the mechanism can only support a\nper-object-kind determination of tablespace then it's insufficiently\nflexible.\n\nI no longer recall any details about Jim's patch, but I believe we felt\nthat it failed the flexibility criterion.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Sat, 19 Apr 2003 11:33:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: For the ametures. (related to \"Are we losing momentum?\") " }, { "msg_contents": "On 16 Apr 2003, Hannu Krosing wrote:\n\n> [email protected] kirjutas K, 16.04.2003 kell 16:51:\n> > The original poster had some \n> > valid points (auto-vacuum and non-intuitive commands) that still need \n> > addressing, IMO.\n> \n> As of 7.3 (or was it 7.2) auto-vacuum is just one line in crontab. In\n> many scenarios it can be left running continuously with very little\n> effect on performance. In others it must be run nightly, but having it\n> kick in at unexpected times may not be what you want at all. So it has\n> to be configured for good performance weather it is built-in or run in a\n> separate backend process.\n> \n> And I can't see how \"show tables\" is more intuitive than \"\\dt\" - I\n> expected it to be \"list tables\" or \"tablelist\" or \"näita tabeleid\" .\n\n'show tables' is SQL, and can be run from a script, with the output \nparsed. For some reason when I run a query of \\dt from PHP I get an \nerror. :-)\n\n> Once you have found \\? it is all there (and you are advised to use \\? at\n> psql startup).\n\nI love \\ commands, but remember, those are psql commands, not postgresql \ncommands. show tables would be a postgreSQL command the backend parser \nwould understand. Apples and Oranges.\n\n> That may also be why PostgreSQL is more popular in Japan - if one has to\n> remember nonsensical strings, then it is easier to remember short ones\n\nBut, how do I write an app to ask such questions easily? psql -E is not \nthe easiest and most intuitive way to learn how to get the data into a \nstructure for parsing in a client side app.\n\n", "msg_date": "Mon, 21 Apr 2003 09:30:49 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "On Monday 21 April 2003 21:00, scott.marlowe wrote:\n> But, how do I write an app to ask such questions easily? psql -E is not\n> the easiest and most intuitive way to learn how to get the data into a\n> structure for parsing in a client side app.\n\nHow about selecting from pg_class? Nothing could have been more structured..\n\n Shridhar\n\n", "msg_date": "Mon, 21 Apr 2003 21:14:33 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" 
}, { "msg_contents": "On Mon, 2003-04-21 at 08:30, scott.marlowe wrote:\n> On 16 Apr 2003, Hannu Krosing wrote:\n> \n> > Once you have found \\? it is all there (and you are advised to use \\? at\n> > psql startup).\n> \n> I love \\ commands, but remember, those are psql commands, not postgresql \n> commands. show tables would be a postgreSQL command the backend parser \n> would understand. Apples and Oranges.\n> \n> > That may also be why PostgreSQL is more popular in Japan - if one has to\n> > remember nonsensical strings, then it is easier to remember short ones\n> \n> But, how do I write an app to ask such questions easily? psql -E is not \n> the easiest and most intuitive way to learn how to get the data into a \n> structure for parsing in a client side app.\n\nHe speaks my mind.\n-- \nSteve Wampler <[email protected]>\nNational Solar Observatory\n\n", "msg_date": "21 Apr 2003 08:45:12 -0700", "msg_from": "Steve Wampler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "On Mon, 2003-04-21 at 11:30, scott.marlowe wrote:\n> 'show tables' is SQL, and can be run from a script, with the output \n> parsed.\n\nBut \"select * from pg_tables\" (or the equivalent query on the\ninformation schemas) is SQL, can be run from a script, and can be parsed\nby a client application.\n\n> But, how do I write an app to ask such questions easily? psql -E is not \n> the easiest and most intuitive way to learn how to get the data into a \n> structure for parsing in a client side app.\n\nYou're conflating two distinct issues: (1) providing an interface for\nCLI use by the DBA (2) providing an API for programmer use in\napplications.\n\nIf you think the existing system catalogs are not sufficiently\nintuitive, then we should fix that problem properly (for example,\nthrough better documentation), not by copying some ad-hoc syntax from\nanother RDBMS.\n\nIf you think the existing CLI interface (\\d etc.) is not sufficiently\nintuitive (which has been what a couple people in this thread have\nargued), I don't see what that has to do with client side applications\nor parsing the output.\n\nCheers,\n\nNeil\n\n", "msg_date": "21 Apr 2003 13:06:01 -0400", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "On Fri, 18 Apr 2003, Kevin Brown wrote:\n\n> [email protected] wrote:\n> > The \"noapic\" option seems a quasi-magical elixir for many sorts of\n> > ailments.\n> >\n> > I upgraded a box to 2.4.20 and discovered that my NIC was no longer\n> > properly recognized until I threw that option in. Others in my\n> > office have /apparently/ the same hardware, and found they didn't\n> > need the option.\n> >\n> > As a \"fix,\" it certainly seems to fall into the\n> > \"snakeoil/superstition\" category. While it often seems to have a\n> > useful effect, I haven't located any actual explanations as to why\n> > it should be expected to work.\n> \n> Well, when it comes to booting a computer, the placebo effect doesn't\n> really exist. :-)\n> \n> Normally I'd agree that \"noapic\" sounds and smells like snakeoil. The\n> problem is that it has observable and repeatable effects on some\n> systems, and thus can't really be classified as snakeoil (much as one\n> might like to!).\n> \n> Why should it be expected to work? I don't know...possibly because\n> the APIC hardware is buggy (perhaps in very subtle ways) on some\n> systems? 
Possibly because the APIC driver is subtlely incompatible\n> with certain APIC hardware? Possibly because the APIC driver has\n> certain subtle bugs that only manifest themselves on certain\n> motherboards with certain peripheral devices?\n> \n> Whatever the reason, the \"noapic\" option *does* work on certain\n> systems, so it unfortunately isn't something that can be dismissed as\n> mere superstition -- the computer isn't being asked its opinion of its\n> own health here, nor does it \"know\" that it should get \"well\" when\n> given different boot options. No \"placebo effect\" involved, just\n> repeatable observation (that the observation isn't terribly repeatable\n> *across* systems does not diminish the validity of the observation).\n\nJust to add to this, on some of the first SMP systems I messed with there \nwas a setting for some SMP version of 1.1 or 1.4, and using 1.1 resulted \nin an unstable box for me. 1.4 fixed all the issues. SMP on Intel is a \nwild ride, and no two motherboards are equivalent. I've had good luck \nwith Supermicro and Intel SMP motherboards, although both have needed BIOS \nupdates at times.\n\n", "msg_date": "Mon, 21 Apr 2003 11:40:32 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_clog woes with 7.3.2 - Episode 2" }, { "msg_contents": "On 21 Apr 2003, Neil Conway wrote:\n\n> On Mon, 2003-04-21 at 11:30, scott.marlowe wrote:\n> > 'show tables' is SQL, and can be run from a script, with the output \n> > parsed.\n> \n> But \"select * from pg_tables\" (or the equivalent query on the\n> information schemas) is SQL, can be run from a script, and can be parsed\n> by a client application.\n\nBut it's not an answer. In psql we have the \\ commands, which I love. In \na client side app, select * from pg_tables is just the beginning. You've \ngot to join that to pg_class and jump through quite a few hoops.\n\nFor instance, a \\d on a simple table in my database produces this much SQL \nin the backend:\n\n********* QUERY **********\nSELECT relhasindex, relkind, relchecks, reltriggers, relhasrules\nFROM pg_class WHERE relname='profile'\n**************************\n\n********* QUERY **********\nSELECT a.attname, format_type(a.atttypid, a.atttypmod), a.attnotnull, \na.atthasdef, a.attnum\nFROM pg_class c, pg_attribute a\nWHERE c.relname = 'profile'\n AND a.attnum > 0 AND a.attrelid = c.oid\nORDER BY a.attnum\n**************************\n\n********* QUERY **********\nSELECT substring(d.adsrc for 128) FROM pg_attrdef d, pg_class c\nWHERE c.relname = 'profile' AND c.oid = d.adrelid AND d.adnum = 1\n**************************\n\n********* QUERY **********\nSELECT c2.relname\nFROM pg_class c, pg_class c2, pg_index i\nWHERE c.relname = 'profile' AND c.oid = i.indrelid AND i.indexrelid = \nc2.oid\nAND NOT i.indisunique ORDER BY c2.relname\n**************************\n\n********* QUERY **********\nSELECT c2.relname\nFROM pg_class c, pg_class c2, pg_index i\nWHERE c.relname = 'profile' AND c.oid = i.indrelid AND i.indexrelid = \nc2.oid\nAND i.indisprimary AND i.indisunique ORDER BY c2.relname\n**************************\n\n********* QUERY **********\nSELECT c2.relname\nFROM pg_class c, pg_class c2, pg_index i\nWHERE c.relname = 'profile' AND c.oid = i.indrelid AND i.indexrelid = \nc2.oid\nAND NOT i.indisprimary AND i.indisunique ORDER BY c2.relname\n**************************\n\nYet there is no equivalent materialized view that puts the data together \nfor the user. 
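(Even something this simple shipped as a system view --- just a sketch, the view and column names are made up --- would cover most of what a beginner wants:\n\n    CREATE VIEW table_columns AS\n        SELECT c.relname AS table_name,\n               a.attname AS column_name,\n               format_type(a.atttypid, a.atttypmod) AS data_type,\n               a.attnotnull AS not_null,\n               a.attnum AS column_number\n          FROM pg_class c, pg_attribute a\n         WHERE a.attrelid = c.oid\n           AND a.attnum > 0\n           AND c.relkind = 'r';\n\nand then select * from table_columns where table_name = 'profile' is something anybody can type from PHP.)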
I don't know about you, but show table tablename is a bit \neasier to grasp for beginners than the above sequence of SQL statements.\n\n> > But, how do I write an app to ask such questions easily? psql -E is not \n> > the easiest and most intuitive way to learn how to get the data into a \n> > structure for parsing in a client side app.\n> \n> You're conflating two distinct issues: (1) providing an interface for\n> CLI use by the DBA (2) providing an API for programmer use in\n> applications.\n\nWhy are those two seperate issues? Why can't the same answer be easily \nand readily available to both the DBA and the programmer? Why does one \nhave to first use psql -E to figure out the queries needed then figure out \nwhich ones to use and not use etc...? I'm not saying the \\ commands are \nbad, I'm saying they're implemented in the wrong place. Having \\ in the \npsql monitor is fine. But it should really be hitting views in the \nbackground where possible.\n\n> If you think the existing system catalogs are not sufficiently\n> intuitive, then we should fix that problem properly (for example,\n> through better documentation), not by copying some ad-hoc syntax from\n> another RDBMS.\n\nI don't care what MySQL does. Period. But, I do think Postgresql has a \nhigh learning curve because so much of it is hidden from beginners. \n\nBetter documentation won't fix this issue. The real issue here is that \npsql has a facility (\\ commands) that isn't present in the rest of \npostgresql, and really should be. psql shouldn't be the only interface \nthat allows you to easily see how tables are put together etc...\n\n> If you think the existing CLI interface (\\d etc.) is not sufficiently\n> intuitive (which has been what a couple people in this thread have\n> argued), I don't see what that has to do with client side applications\n> or parsing the output.\n\nNo, I like the psql interface. It's intuitive to me and has been since \nday one. It's the lack of intuition on the application side that bothers \nme.\n\n", "msg_date": "Mon, 21 Apr 2003 14:26:20 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "> It's out of date and I did not receive any feedback on it (so I assume\n> it's not very useful), but the patch submitted in late February would\n> easily allow the described 'translation' to occur.\n> \n> Summary:\n> \n> Schema in backend translates a \\<command> <arg> command into an sql\n> query which is then to be executed. Logic is still in psql, but the\n> available commands and how data is retrieved from the schema was backend\n> specific.\n> \n> \n> I'm using it here due to it's ability to add new psql commands (since\n> it's just data in a table.) \n> \n> http://archives.postgresql.org/pgsql-patches/2003-02/msg00216.php\n> \n> \n> Anyway, might be useful for thoughts. Simply moving the commands into\n> the backend still leaves us with unsupported new commands in old clients\n> or new commands in old databases (leaving errors) unless the SHOW ...\n> syntax itself is handled by the backend.\n> \n> > :-/ Yeah, I've been following that from a distance and I'm not so wild\n> > about that. 
I really like that the information_schema has been\n> > integrated into the base, but translating the SHOW commands into\n> > SELECTs from information_schema on the backend seems like a bad idea\n> > unless its going to be done abstract enough via some kind of rewrite\n> > engine that allows users to program the database to translate their\n> > verbiage into SQL (ex: KILL -> DROP, GET -> SELECT), which could be\n> > kinda fun.\n\nAnyone other than Rod (and now myself) had a chance to look this over?\nThis doesn't really address the lack of a SHOW syntax for new MySQL\nusers, but it sure does open up the possibilities for making it easier\nto probe the backend.\n\nOn a related note, any thoughts on the SHOW stuff given that the topic\nhas come back to life on -hackers? -sc\n\n-- \nSean Chittenden", "msg_date": "Mon, 21 Apr 2003 14:17:55 -0700", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Are we losing momentum?" }, { "msg_contents": "On Mon, 2003-04-21 at 16:26, scott.marlowe wrote:\n> Yet there is no equivalent materialized view that puts the data together \n> for the user. I don't know about you, but show table tablename is a bit \n> easier to grasp for beginners than the above sequence of SQL statements.\n\nGranted -- but I don't think that replacing or augmenting the system\ncatalogs with a set of SHOW commands is a good idea (which is what you\nsuggested originally). IMHO enhancing the system catalogs by adding\nviews that encapsulate more of the \\ command functionality into the\nbackend is a good idea, and one that should be implemented eventually.\nAFAIK that's been the consensus for some time...\n\nCheers,\n\nNeil\n\n", "msg_date": "21 Apr 2003 17:47:08 -0400", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "Hi,\n\n> On Mon, 2003-04-21 at 16:26, scott.marlowe wrote:\n> > Yet there is no equivalent materialized view that puts the data together\n> > for the user. I don't know about you, but show table tablename is a bit\n> > easier to grasp for beginners than the above sequence of SQL statements.\n>\n> Granted -- but I don't think that replacing or augmenting the system\n> catalogs with a set of SHOW commands is a good idea (which is what you\n> suggested originally). IMHO enhancing the system catalogs by adding\n> views that encapsulate more of the \\ command functionality into the\n> backend is a good idea, and one that should be implemented eventually.\n> AFAIK that's been the consensus for some time...\n\nI think the SHOW commands won't be neccesary when there are views to use.\nThere is already a good SQL command to get data/information from the\ndatabaseserver: SELECT. Adding SHOW commands to the backend that essentially\ndo a SELECT on a system view are a bad thing IMHO. The user can just as easy\ndo a SELECT on the view himself.\n\nAll just IMHO ofcourse :)\nSander.\n\n", "msg_date": "Tue, 22 Apr 2003 00:48:10 +0200", "msg_from": "\"Sander Steffann\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "Keven Brown wrote:\n> Normally I'd agree that \"noapic\" sounds and smells like snakeoil. 
The\n> problem is that it has observable and repeatable effects on some\n> systems, and thus can't really be classified as snakeoil (much as one\n> might like to!).\n\nThere's a /slight/ difference between 'superstition' and 'snakeoil'; the\nlatter is something you don't expect to find effectual. The former may\nrepresent something you don't/can't understand.\n\nRecall Clarke's observation that \"Any sufficiently advanced technology\nis indistinguishable from magic.\"\n\nAnything I can't explain is likely to either be:\n\n a) Something I don't understand yet, or\n\n b) Perhaps something truly supernatural, that cannot be explained\n based on any sort of natural reasoning. Perhaps explainable by \"God\n did something unexplainable, therefore things are the way they are.\"\n\nThere are disagreements as to where to draw the line. People of some\ndegrees of \"superstitiousness\" may be prepared to explain _everything_\nas involving \"God decided to make that happen,\" making _no_ attempt to\nunderstand lower level processes. Some, more skeptical, may reject that\nanything should be able to fall into category b).\n\nOn the other hand, some people get themselves rip-roaring drunk and trip\nover the line we thought was one of concept:\n\n \"SCSI is *NOT* magic. There are *fundamental technical reasons* why\n it is necessary to sacrifice a young goat to your SCSI chain now and\n then.\"\n--\nIf this was helpful, <http://svcs.affero.net/rm.php?r=cbbrowne> rate me\nhttp://www3.sympatico.ca/cbbrowne/wp.html\n\"While preceding your entrance with a grenade is a good tactic in\nQuake, it can lead to problems if attempted at work.\" -- C Hacking\n-- http://home.xnet.com/~raven/Sysadmin/ASR.Quotes.html\n\n", "msg_date": "Mon, 21 Apr 2003 21:16:32 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_clog woes with 7.3.2 - Episode 2 " }, { "msg_contents": "On Tue, Apr 15, 2003 at 08:12:04AM +0200, Tony Grant wrote:\n> Lets forget the \"replace MySQL with PostgreSQL\" stuff and go looking for\n> higher end converts. Our marketing push should be \"replace Oracle with\n> PostgreSQL and replace Access with MySQL\". This puts the emphasis on\n> which database can do what... \n \nExcept going from MS Access to MySQL would be a step backwards. :P\n-- \nJim C. Nasby (aka Decibel!) [email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n\n", "msg_date": "Tue, 22 Apr 2003 02:07:01 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" }, { "msg_contents": "Neil Conway writes:\n\n> IMHO enhancing the system catalogs by adding\n> views that encapsulate more of the \\ command functionality into the\n> backend is a good idea, and one that should be implemented eventually.\n\nThat would be very nice.\n\n\tTilo\n\n", "msg_date": "Tue, 22 Apr 2003 17:14:07 +0200", "msg_from": "Tilo Schwarz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we losing momentum?" 
}, { "msg_contents": "multilog\n\n\[email protected] (Andrew Sullivan) wrote:\n> Since now is the time for contrib/ flamewars, this seemed a good time\n> to suggest this.\n> \n...\n> \n> Is anyone interested in having pglog-rotator?\n\n", "msg_date": "Tue, 6 May 2003 02:40:51 +0000 (UTC)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: more contrib: log rotator" }, { "msg_contents": "I didn't find any documentation mentioning scheduled jobs. I assume there is no such a feature\nyet. I would like to implement it if someone helps me with the development process (I am brand new\nto OpenSource projects).\n\nBasically the feature should include scheduling function exeution at:\n - postmaster startup\n - postmaster shutdown\n - a specified moment\n - a time of the day/month/year\n - recurring at a time interval\n\nI know this could be implemented in exernal processes but from an application standpoint it would\nbe much more consistent if all the database-related functionality is in the database server.\nBesides, both Oracle and Microsoft have the feature.\n\nPlease advise.\n\nThanks,\nZlatko\n\n\n__________________________________\nDo you Yahoo!?\nThe New Yahoo! Search - Faster. Easier. Bingo.\nhttp://search.yahoo.com\n\n", "msg_date": "Mon, 12 May 2003 15:03:50 -0700 (PDT)", "msg_from": "Zlatko Michailov <[email protected]>", "msg_from_op": false, "msg_subject": "Scheduled jobs" }, { "msg_contents": "Zlatko Michailov <[email protected]> writes:\n> I didn't find any documentation mentioning scheduled jobs. I assume\n> there is no such a feature yet. I would like to implement it if\n> someone helps me with the development process (I am brand new to\n> OpenSource projects).\n\n> Basically the feature should include scheduling function exeution at:\n> - postmaster startup\n> - postmaster shutdown\n> - a specified moment\n> - a time of the day/month/year\n> - recurring at a time interval\n\nUse cron. I see no value in duplicating cron's functionality inside\nPostgres.\n\n> Besides, both Oracle and Microsoft have the feature.\n\nThey can afford to expend developer time on inventing and maintaining\nuseless \"features\". We have very finite resources and have to be\ncareful of buying into supporting things that won't really pull their\nweight.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Mon, 12 May 2003 18:17:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scheduled jobs " }, { "msg_contents": "On Mon, 12 May 2003, Tom Lane wrote:\n\n> Zlatko Michailov <[email protected]> writes:\n> > I didn't find any documentation mentioning scheduled jobs. I assume\n> > there is no such a feature yet. I would like to implement it if\n> > someone helps me with the development process (I am brand new to\n> > OpenSource projects).\n> \n> > Basically the feature should include scheduling function exeution at:\n> > - postmaster startup\n> > - postmaster shutdown\n> > - a specified moment\n> > - a time of the day/month/year\n> > - recurring at a time interval\n> \n> Use cron. I see no value in duplicating cron's functionality inside\n> Postgres.\n\nI was going to say use cron :)\n\nOnly cron can't handle some of those cases listed, but then one could always\npatch one's own local installation of pg_ctl etc. 
to run things at startup and\nshutdown.\n\nI'm not sure how specified moment differs from time of day/month/year (except\ncan cron handle years?)\n\n\n> \n> > Besides, both Oracle and Microsoft have the feature.\n> \n> They can afford to expend developer time on inventing and maintaining\n> useless \"features\". We have very finite resources and have to be\n> careful of buying into supporting things that won't really pull their\n> weight.\n> \n> \t\t\tregards, tom lane\n\n-- \nNigel J. Andrews\n\n", "msg_date": "Mon, 12 May 2003 23:22:23 +0100 (BST)", "msg_from": "\"Nigel J. Andrews\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scheduled jobs" }, { "msg_contents": "\"Nigel J. Andrews\" <[email protected]> writes:\n> Only cron can't handle some of those cases listed, but then one could always\n> patch one's own local installation of pg_ctl etc. to run things at startup and\n> shutdown.\n\nIf you are using a typical init script setup, it's easy to add\nadditional operations to the init script's start and stop actions.\n\nI'm a tad suspicious of adding on-shutdown actions anyway, as there's\nlittle guarantee they would get done (consider system crash, postmaster\ncrash, etc).\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Mon, 12 May 2003 18:27:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scheduled jobs " }, { "msg_contents": "Zlatko Wrote:\n> I didn't find any documentation mentioning scheduled jobs. I assume\n> there is no such a feature yet. I would like to implement it if\n> someone helps me with the development process (I am brand new to\n> OpenSource projects).\n\n> Basically the feature should include scheduling function exeution at:\n> - postmaster startup\n> - postmaster shutdown\n> - a specified moment\n> - a time of the day/month/year\n> - recurring at a time interval\n\nHave you ever considered using cron?\n\n- It is available on every Unix.\n- It may readily be compiled for Cygwin.\n\nIt seems preposterous to imagine that reimplementing the functionality\nof cron would significantly add to the functionality of PostgreSQL.\n\nIf you really want a unified way of accessing crontabs, then feel free\nto write some plpgsql functions that are \"wrappers\" for cron...\n--\n(concatenate 'string \"aa454\" \"@freenet.carleton.ca\")\nhttp://www.ntlug.org/~cbbrowne/unix.html\nFLORIDA: Relax, Retire, Re Vote.\n\n", "msg_date": "Mon, 12 May 2003 18:38:14 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scheduled jobs " }, { "msg_contents": "On Mon, 12 May 2003, Tom Lane wrote:\n\n> \"Nigel J. Andrews\" <[email protected]> writes:\n> > Only cron can't handle some of those cases listed, but then one could always\n> > patch one's own local installation of pg_ctl etc. to run things at startup and\n> > shutdown.\n> \n> If you are using a typical init script setup, it's easy to add\n> additional operations to the init script's start and stop actions.\n> \n> I'm a tad suspicious of adding on-shutdown actions anyway, as there's\n> little guarantee they would get done (consider system crash, postmaster\n> crash, etc).\n> \n> \t\t\tregards, tom lane\n\nAbsolutely. That's why you'd patch your startup/shutdown scripts. Adding it to\npg_ctl does enable those to kick the necessary stuff without requiring use of\nthe system's init scripts for manual control of the postmaster. 
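(A wrapper really is only a few lines --- untested sketch, paths and database name invented:\n\n    #!/bin/sh\n    # site wrapper around pg_ctl: run SQL hooks at start/stop\n    DATADIR=/usr/local/pgsql/data\n    case $1 in\n      start)\n        pg_ctl -w start -D $DATADIR -l $DATADIR/postmaster.log\n        psql -U postgres -d mydb -f /usr/local/pgsql/etc/on_startup.sql\n        ;;\n      stop)\n        psql -U postgres -d mydb -f /usr/local/pgsql/etc/on_shutdown.sql\n        pg_ctl stop -D $DATADIR\n        ;;\n      *)\n        pg_ctl \"$@\"\n        ;;\n    esac\n\nwith on_startup.sql and on_shutdown.sql holding whatever work is supposed to run at those points.)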
When the\nemphasis on the 'controlled' aspect of this is acknowledged then it's just a\ntoss up between editing pg_ctl or your own wrapper for it. I would go for my\nown wrapper since then that still leaves the ability for pg_ctl to be used\n_without_ kicking those startup/shutdown actions.\n\n\nI believe this has arisen several times and each time there's been no\nenthusiasm to stick cron into the core which I think is a reasonable stance.\n\n\n-- \nNigel J. Andrews\n\n", "msg_date": "Mon, 12 May 2003 23:38:59 +0100 (BST)", "msg_from": "\"Nigel J. Andrews\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scheduled jobs" }, { "msg_contents": "For windows, look up wincron, I used it back in the day. don't know if \nit's still an up to date package, but it was great back when NT4.0 still \nheld some small attraction to me.\n\nOn Mon, 12 May 2003, Christopher Browne wrote:\n\n> Zlatko Wrote:\n> > I didn't find any documentation mentioning scheduled jobs. I assume\n> > there is no such a feature yet. I would like to implement it if\n> > someone helps me with the development process (I am brand new to\n> > OpenSource projects).\n> \n> > Basically the feature should include scheduling function exeution at:\n> > - postmaster startup\n> > - postmaster shutdown\n> > - a specified moment\n> > - a time of the day/month/year\n> > - recurring at a time interval\n> \n> Have you ever considered using cron?\n> \n> - It is available on every Unix.\n> - It may readily be compiled for Cygwin.\n> \n> It seems preposterous to imagine that reimplementing the functionality\n> of cron would significantly add to the functionality of PostgreSQL.\n> \n> If you really want a unified way of accessing crontabs, then feel free\n> to write some plpgsql functions that are \"wrappers\" for cron...\n> --\n> (concatenate 'string \"aa454\" \"@freenet.carleton.ca\")\n> http://www.ntlug.org/~cbbrowne/unix.html\n> FLORIDA: Relax, Retire, Re Vote.\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n", "msg_date": "Mon, 12 May 2003 16:56:02 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scheduled jobs " }, { "msg_contents": "-*- Tom Lane <[email protected]> [ 2003-05-12 22:18 ]:\n> Use cron. I see no value in duplicating cron's functionality inside\n> Postgres.\n\nThe biggest advantages I see:\n - running tasks as a specific database user without having to store passwords on the server\n - When deploying a database -- maintenance jobs can be created with SQL commands\n - Not everybody have access to the server or don't have another machine to run it from\n\nJust mentioning some pros I see -- I do agree with your point on resources and future maintenance.\n\n\n-- \nRegards,\nTolli\[email protected]\n\n", "msg_date": "Mon, 12 May 2003 23:03:20 +0000", "msg_from": "=?iso-8859-1?Q?=DE=F3rhallur_H=E1lfd=E1narson?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scheduled jobs" }, { "msg_contents": "Quoth [email protected] (\"Nigel J. Andrews\"):\n> I believe this has arisen several times and each time there's been\n> no enthusiasm to stick cron into the core which I think is a\n> reasonable stance.\n\nI think it _would_ be kind of neat to set up some tables to contain\nwhat's in the postgres user's crontab, and have a pair of stored\nprocedures to move data in and out. 
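(Strawman version of the table half of that, nothing fancier assumed than a single relation:\n\n    CREATE TABLE cron_jobs (\n        job_id    serial PRIMARY KEY,\n        schedule  text NOT NULL,   -- the five crontab time fields, e.g. '35 3 * * *'\n        command   text NOT NULL,   -- e.g. 'vacuumdb --analyze mydb'\n        enabled   boolean NOT NULL DEFAULT true\n    );\n\nand the \"push\" half can start out as dumb as\n\n    psql -At -c \"select schedule || ' ' || command from cron_jobs where enabled\" | crontab -\n\nwith the genuinely interesting work being the semantics around editing and validation.)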
If you had some Truly Appalling\nnumber of cron jobs, manipulating them in a database could well be a\ngreat way to do it.\n\nThat is, of course, quite separate from having cron in the core. And\nhaving a _good_ set of semantics for the push/pull is a nontrivial\nmatter...\n-- \nIf this was helpful, <http://svcs.affero.net/rm.php?r=cbbrowne> rate me\nhttp://cbbrowne.com/info/nonrdbms.html\nAs Will Rogers would have said, \"There is no such thing as a free\nvariable.\" -- Alan Perlis\n\n", "msg_date": "Mon, 12 May 2003 19:29:46 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scheduled jobs" }, { "msg_contents": "On Mon, May 12, 2003 at 23:03:20 +0000,\n Þórhallur Hálfdánarson <[email protected]> wrote:\n> \n> The biggest advantages I see:\n> - running tasks as a specific database user without having to store passwords on the server\n\nYou can do that now using ident authentication. For some OS's you can do\nthis using domain sockets and don't have to run an ident server.\n\n", "msg_date": "Tue, 13 May 2003 06:40:31 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scheduled jobs" }, { "msg_contents": "-*- Bruno Wolff III <[email protected]> [ 2003-05-13 11:39 ]:\n> On Mon, May 12, 2003 at 23:03:20 +0000,\n> Þórhallur Hálfdánarson <[email protected]> wrote:\n> > \n> > The biggest advantages I see:\n> > - running tasks as a specific database user without having to store passwords on the server\n> \n> You can do that now using ident authentication. For some OS's you can do\n> this using domain sockets and don't have to run an ident server.\n\nIn most of my setups there is only a limited number of users on the system, but many other users in PostgreSQL. Creating a user on the system for every user I create in the DB and allowing him to run processes is not according to procedures I follow, and I believe that applies to more people.\n\n-- \nRegards,\nTolli\[email protected]\n\n", "msg_date": "Tue, 13 May 2003 12:40:41 +0000", "msg_from": "=?iso-8859-1?Q?=DE=F3rhallur_H=E1lfd=E1narson?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scheduled jobs" }, { "msg_contents": "On Tue, May 13, 2003 at 12:40:41PM +0000, Þórhallur Hálfdánarson wrote:\n\n> In most of my setups there is only a limited number of users on the\n> system, but many other users in PostgreSQL. Creating a user on the\n> system for every user I create in the DB and allowing him to run\n> processes is not according to procedures I follow, and I believe that\n> applies to more people.\n\nIn this case you can put the passwords in ~/.pgpass, I think.\n\n-- \nAlvaro Herrera (<alvherre[a]dcc.uchile.cl>)\n\"El sentido de las cosas no viene de las cosas, sino de\nlas inteligencias que las aplican a sus problemas diarios\nen busca del progreso.\" (Ernesto Hernández-Novich)\n\n", "msg_date": "Tue, 13 May 2003 09:15:46 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scheduled jobs" }, { "msg_contents": "-*- Alvaro Herrera <[email protected]> [ 2003-05-13 13:19 ]:\n> On Tue, May 13, 2003 at 12:40:41PM +0000, Þórhallur Hálfdánarson wrote:\n> \n> > In most of my setups there is only a limited number of users on the\n> > system, but many other users in PostgreSQL. 
Creating a user on the\n> > system for every user I create in the DB and allowing him to run\n> > processes is not according to procedures I follow, and I believe that\n> > applies to more people.\n> \n> In this case you can put the passwords in ~/.pgpass, I think.\n\nThe suggestion on using ident was to eliminate the need for storing passwords in the first place...\n\n-- \nRegards,\nTolli\[email protected]\n\n", "msg_date": "Tue, 13 May 2003 13:33:25 +0000", "msg_from": "=?iso-8859-1?Q?=DE=F3rhallur_H=E1lfd=E1narson?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scheduled jobs" }, { "msg_contents": "Yes. We need a system table to store scheduled jobs and a single external daemon to fire them up.\n\nThe example I have in mind is maintaining active sessions where my app maintains its own user\naccounts. When a user logs in, a session row is created in a an app table. Every time a new\nrequest comes through that session, a last_used timestamp is updated. At the same time there must\nbe a job checking that same table every minute for rows where the last_used timestamp is over 20\nminutes old and remove such rows. Since the account registrations are inside the database, it\nappeals to me that session maintenance should also be there.\n\nPlease think about it again. I can provide a table and SQL command (or stored proc) proposal.\n\nThanks,\nZlatko\n\n\n--- Christopher Browne <[email protected]> wrote:\n> Quoth [email protected] (\"Nigel J. Andrews\"):\n> > I believe this has arisen several times and each time there's been\n> > no enthusiasm to stick cron into the core which I think is a\n> > reasonable stance.\n> \n> I think it _would_ be kind of neat to set up some tables to contain\n> what's in the postgres user's crontab, and have a pair of stored\n> procedures to move data in and out. If you had some Truly Appalling\n> number of cron jobs, manipulating them in a database could well be a\n> great way to do it.\n> \n> That is, of course, quite separate from having cron in the core. And\n> having a _good_ set of semantics for the push/pull is a nontrivial\n> matter...\n> -- \n> If this was helpful, <http://svcs.affero.net/rm.php?r=cbbrowne> rate me\n> http://cbbrowne.com/info/nonrdbms.html\n> As Will Rogers would have said, \"There is no such thing as a free\n> variable.\" -- Alan Perlis\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n\n\n__________________________________\nDo you Yahoo!?\nThe New Yahoo! Search - Faster. Easier. Bingo.\nhttp://search.yahoo.com\n\n", "msg_date": "Tue, 13 May 2003 07:18:52 -0700 (PDT)", "msg_from": "Zlatko Michailov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scheduled jobs" }, { "msg_contents": "On Tue, May 13, 2003 at 01:33:25PM +0000, ??rhallur H?lfd?narson wrote:\n> The suggestion on using ident was to eliminate the need for storing\n> passwords in the first place...\n\nBut how are you going to let them run scheduled jobs inside the\npostmaster if they can't be authenticated, then? You either have to\nuse .pgpass, user kerberos, or use ident; nothing else is safe in the\ncontext you're discussing. 
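(For what it's worth, the session-expiry job Zlatko describes above reduces to one periodic statement; the table and column names here are simply taken from his description and are not an existing schema:)

    delete from sessions
     where last_used < now() - interval '20 minutes';

The scheduling question is only about what fires that statement every minute.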
I don't understand the problem.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 13 May 2003 10:41:34 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scheduled jobs" }, { "msg_contents": "On Tue, May 13, 2003 at 07:18:52AM -0700, Zlatko Michailov wrote:\n\n> remove such rows. Since the account registrations are inside the\n> database, it appeals to me that session maintenance should also be\n> there.\n\nI still don't see what this is going to buy which cron will not. And\nwhat you're asking for is that the whole community pay the cost of\nmaintaining an expensive and redundant piece of functionality because\nof your preference.\n\nIf you really want that, you can probably implement it yourself. I\nknow plenty of people (including me) who would argue very strongly\nagainst putting this sort scheduling function inside any release of\nPostgreSQL. It's a maintenance nightmare which adds nothing to cron;\nmoreover, it's a potential source of some extremely serious bugs,\nincluding security vulnerabilities. Finally, the effort that might\nbe expended in maintaining a redundant piece of code like this is\nsomething that could be expended instead on providing functionality\nwhich is at present not available at all.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 13 May 2003 10:47:37 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scheduled jobs" }, { "msg_contents": "-*- Andrew Sullivan <[email protected]> [ 2003-05-13 14:42 ]:\n> On Tue, May 13, 2003 at 01:33:25PM +0000, ??rhallur H?lfd?narson wrote:\n> > The suggestion on using ident was to eliminate the need for storing\n> > passwords in the first place...\n> \n> But how are you going to let them run scheduled jobs inside the\n> postmaster if they can't be authenticated, then? You either have to\n> use .pgpass, user kerberos, or use ident; nothing else is safe in the\n> context you're discussing. I don't understand the problem.\n\nI was simply pointing out some scenarios when scheduled jobs are nice. :-)\n\nI believe you have to be authenticated to *create* jobs... and would probably run as the owner, if it gets implemented.\n\n-- \nRegards,\nTolli\[email protected]\n\n", "msg_date": "Tue, 13 May 2003 14:48:11 +0000", "msg_from": "=?iso-8859-1?Q?=DE=F3rhallur_H=E1lfd=E1narson?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scheduled jobs" }, { "msg_contents": "> > That is, of course, quite separate from having cron in the core. And\n> > having a _good_ set of semantics for the push/pull is a nontrivial\n\nSometimes I need some external action to be taken when a table is updated. I \ncan't find anything about how to do this with triggers.\n\nThe easiest would be to have a server listening on a port. But can I write a \nPL/pgSQL trigger that can talk tcp/ip?\n\nThere is a short description about untrusted Perl. 
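(For the trigger-to-TCP question, one route that avoids untrusted code entirely is to have the trigger merely signal, and let a small external client, which can speak TCP freely, do the real work after LISTENing on the same name. A rough sketch with invented names:)

    create function flag_change() returns trigger as '
    begin
        -- an external client that has done LISTEN table_changed reacts to this
        notify table_changed;
        return new;
    end;
    ' language plpgsql;

    create trigger t_flag_change after insert or update on watched_table
        for each row execute procedure flag_change();

The untrusted-Perl description he mentions is the more direct way to open a socket from inside the backend itself.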
This might solve the \nproblem of talking to the port, but I'm not sure I would like to run anything \ncalled \"Untrusted\" in my server!\n\n-- \nKaare Rasmussen --Linux, spil,-- Tlf: 3816 2582\nKaki Data tshirts, merchandize Fax: 3816 2501\nHowitzvej 75 Åben 12.00-18.00 Email: [email protected]\n2000 Frederiksberg Lørdag 12.00-16.00 Web: www.suse.dk\n\n", "msg_date": "Tue, 13 May 2003 17:04:52 +0200", "msg_from": "Kaare Rasmussen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scheduled jobs" }, { "msg_contents": "On Tue, 2003-05-13 at 10:48, Þórhallur Hálfdánarson wrote:\n> -*- Andrew Sullivan <[email protected]> [ 2003-05-13 14:42 ]:\n> > On Tue, May 13, 2003 at 01:33:25PM +0000, ??rhallur H?lfd?narson wrote:\n> > > The suggestion on using ident was to eliminate the need for storing\n> > > passwords in the first place...\n> > \n> > But how are you going to let them run scheduled jobs inside the\n> > postmaster if they can't be authenticated, then? You either have to\n> > use .pgpass, user kerberos, or use ident; nothing else is safe in the\n> > context you're discussing. I don't understand the problem.\n> \n> I was simply pointing out some scenarios when scheduled jobs are nice. :-)\n> \n> I believe you have to be authenticated to *create* jobs... and would probably run as the owner, if it gets implemented.\n\nWouldn't it make more sense to modify cron to be able to read scheduling\ndetails out of the database -- rather than trying to modify PostgreSQL\nto try to feed cron?\n\nSee examples of FTP, DNS, etc. software that can read authentication\nelements from databases -- and the lack of DBs that have knowledge of\nhow to push data into those services.\n-- \nRod Taylor <[email protected]>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "13 May 2003 11:07:55 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scheduled jobs" }, { "msg_contents": "On Tue, May 13, 2003 at 04:55:01PM +0200, Reinoud van Leeuwen wrote:\n\n> One thing comes in mind: portability. Scripts and cron work different on \n> Unix compared to Windows or other platforms. Even cron is not the same on \n> al Unix variants.\n> When the scheduling system is inside the database, it works identical on \n> all platforms...\n\nUnless, of course, they have used different versions of an\nidentically-named library. In which case you get different\nperformance anyway, and so then you end up writing a custom library\nwhich is or is not the default for HP-UX version 9 when compiled with\ncertain options (see another recent thread about exactly such a\ncase).\n\nThat's exactly the sort of terrible maintenance problem that I can\nsee by implementing such functionality, and I can't see that it's\nanywhere near worth the cost. Given that the behaviour of /bin/ksh\nand cron are both POSIX, you can still rely on some standardisation\nacross platforms.\n\nIt seems to me that, if the price of supporting Windows is that\nPostgres has to have its own cron, the cost is too high. I don't\nbelieve that Postgres _does_ need that, however: a scheduling service\nis available on Windows that's good enough for these purposes, and\nyou cannot really expect perfect portability between any flavour of\nUNIX and Windows (as anyone who's had to support such a heterogenous\nenvironment knows). \n\nOf course it's true that if you re-implement every service of every\nsupported operating system yourself, you get a more portable system. 
\nBut in that case, perhaps someone should start the PostgrOS project. \n(It's a database! No, it's an operating system! No, it's a\ndata-based operating environment! Wait. Someone already did that:\nPICK. Nice system, but not SQL.)\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 13 May 2003 12:18:25 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scheduled jobs" }, { "msg_contents": "On 13 May 2003, Rod Taylor wrote:\n\n> On Tue, 2003-05-13 at 10:48, Þórhallur Hálfdánarson wrote:\n> > -*- Andrew Sullivan <[email protected]> [ 2003-05-13 14:42 ]:\n> > > On Tue, May 13, 2003 at 01:33:25PM +0000, ??rhallur H?lfd?narson wrote:\n> > > > The suggestion on using ident was to eliminate the need for storing\n> > > > passwords in the first place...\n> > > \n> > > But how are you going to let them run scheduled jobs inside the\n> > > postmaster if they can't be authenticated, then? You either have to\n> > > use .pgpass, user kerberos, or use ident; nothing else is safe in the\n> > > context you're discussing. I don't understand the problem.\n> > \n> > I was simply pointing out some scenarios when scheduled jobs are nice. :-)\n> > \n> > I believe you have to be authenticated to *create* jobs... and would probably run as the owner, if it gets implemented.\n> \n> Wouldn't it make more sense to modify cron to be able to read scheduling\n> details out of the database -- rather than trying to modify PostgreSQL\n> to try to feed cron?\n> \n> See examples of FTP, DNS, etc. software that can read authentication\n> elements from databases -- and the lack of DBs that have knowledge of\n> how to push data into those services.\n\nBingo, Rod. You obviously reached across the miles into my head and stole \nthat from my brain, because honestly I was about 30 seconds from posting \nthe same thing.\n\n", "msg_date": "Tue, 13 May 2003 10:59:46 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scheduled jobs" }, { "msg_contents": "Tolli wrote:\n> -*- Andrew Sullivan <[email protected]> [ 2003-05-13 14:42 ]:\n> > On Tue, May 13, 2003 at 01:33:25PM +0000, ??rhallur H?lfd?narson wrote:\n> > > The suggestion on using ident was to eliminate the need for storing\n> > > passwords in the first place...\n> > \n> > But how are you going to let them run scheduled jobs inside the\n> > postmaster if they can't be authenticated, then? You either have to\n> > use .pgpass, user kerberos, or use ident; nothing else is safe in the\n> > context you're discussing. I don't understand the problem.\n\n> I was simply pointing out some scenarios when scheduled jobs are\n> nice. :-)\n\n\"Nice\" does not dictate \"Someone should be responsible for the\nimplementation.\"\n\nIn the old fable about the mice and the cat, it would sure be \"nice\" if\nthey could put a bell on the cat so the mice could hear the cat coming.\nBut in the fable, none of the mice were prepared to risk life and limb\ngetting the bell put onto the cat.\n\nIn this case, the fact that you'd like a scheduler does not imply that\nanyone will want to take the job on.\n\n> I believe you have to be authenticated to *create* jobs... and would\n> probably run as the owner, if it gets implemented.\n\nNo, these \"jobs\" would run as the \"postgres\" user. 
(Or whatever user\nit is that the PostgreSQL server runs as.)\n\nAnd there enters a *big* whack of complexity, particularly if that\nisn't the right answer.\n\nIt rapidly turns into a *very* complex system that, even with MS-SQL\nand Oracle, isn't really part of the database. Why is it complex?\nBecause of the need to be able to change user roles to different\nsystem users, which is inherently system-dependent (e.g. - very\ndifferent between Unix and Windows) and *highly* security-sensitive.\n\nI agree with the thoughts that it would be a slick idea to come up\nwith a way of having PostgreSQL be the \"data store\" for some outside\nscheduling tool. You likely won't have something that anyone will\nhave compete with Cron or Maestro or [whatever they call the Windows\n'scheduler'], but it could be useful to those that care. And by\nkeeping it separate, those of us that don't care don't get a bloated\nsystem.\n--\n(reverse (concatenate 'string \"gro.gultn@\" \"enworbbc\"))\nhttp://www3.sympatico.ca/cbbrowne/spreadsheets.html\nThink of C++ as an object-oriented assembly language.\n\n", "msg_date": "Tue, 13 May 2003 16:35:28 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scheduled jobs " }, { "msg_contents": "Andrew Sullivan wrote:\n> Of course it's true that if you re-implement every service of every\n> supported operating system yourself, you get a more portable system. \n> But in that case, perhaps someone should start the PostgrOS project. \n> (It's a database! No, it's an operating system! No, it's a\n> data-based operating environment! Wait. Someone already did that:\n> PICK. Nice system, but not SQL.)\n\nMaVerick apparently implements something Pick-like on top of\nPostgreSQL... <http://www.maverick-dbms.org/articles/article1.html>\n\nAnd IBM Universe and UniData *do* make this into SQL...\nhttp://www-3.ibm.com/software/data/u2/pubs/whitepapers/nested_rdbms.pdf\n\nAnd it's also worth considering that we have array types that support\nsomething Like MV, although that's quite a separate debate. \n\nOr perhaps not; let me suggest the thought that it would be more\nworthwhile to examine the notion of adding MV SQL keywords to PostgreSQL\nthan it would be to try adding a batch scheduler...\n--\nIf this was helpful, <http://svcs.affero.net/rm.php?r=cbbrowne> rate me\nhttp://www.ntlug.org/~cbbrowne/multiplexor.html\nRules of the Evil Overlord #47. 
\"If I learn that a callow youth has\nbegun a quest to destroy me, I will slay him while he is still a\ncallow youth instead of waiting for him to mature.\"\n<http://www.eviloverlord.com/>\n\n", "msg_date": "Tue, 13 May 2003 16:35:40 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scheduled jobs " }, { "msg_contents": "Hi\n\n-*- Christopher Browne <[email protected]> [ 2003-05-13 20:38 ]:\n> Tolli wrote:\n> \"Nice\" does not dictate \"Someone should be responsible for the\n> implementation.\"\n> \n> In the old fable about the mice and the cat, it would sure be \"nice\" if\n> they could put a bell on the cat so the mice could hear the cat coming.\n> But in the fable, none of the mice were prepared to risk life and limb\n> getting the bell put onto the cat.\n> \n> In this case, the fact that you'd like a scheduler does not imply that\n> anyone will want to take the job on.\n\nAs I said in my original reply to Tom: \"Just mentioning some pros I see -- I do agree with your point on resources and future maintenance.\"\n\nThe point being, which I might have stated explicitly, that if someone (for example Zlatko who originally suggested it) will go on implementing it, I believe it helps is indeed good. Weather or not it should be included in the main distribution is a matter of a totally seperate debate later on. :-)\n\n> > I believe you have to be authenticated to *create* jobs... and would\n> > probably run as the owner, if it gets implemented.\n> \n> No, these \"jobs\" would run as the \"postgres\" user. (Or whatever user\n> it is that the PostgreSQL server runs as.)\n> \n> And there enters a *big* whack of complexity, particularly if that\n> isn't the right answer.\n\nEeek! What I've been thinking about all along is something for running, err, SQL (which therefor can be run as the owner) or some internal tasks -- nothing with external processes. \n\n> It rapidly turns into a *very* complex system that, even with MS-SQL\n> and Oracle, isn't really part of the database. Why is it complex?\n> Because of the need to be able to change user roles to different\n> system users, which is inherently system-dependent (e.g. - very\n> different between Unix and Windows) and *highly* security-sensitive.\n> \n> I agree with the thoughts that it would be a slick idea to come up\n> with a way of having PostgreSQL be the \"data store\" for some outside\n> scheduling tool. You likely won't have something that anyone will\n> have compete with Cron or Maestro or [whatever they call the Windows\n> 'scheduler'], but it could be useful to those that care. And by\n> keeping it separate, those of us that don't care don't get a bloated\n> system.\n\nI sincerely agree that I'd not like to see PostgreSQL bloated with a cron-wannabe. ;-)\n\n-- \nRegards,\nTolli\[email protected]\n\n", "msg_date": "Tue, 13 May 2003 21:09:16 +0000", "msg_from": "=?iso-8859-1?Q?=DE=F3rhallur_H=E1lfd=E1narson?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scheduled jobs" }, { "msg_contents": "OK, here's an idea. You write a set of stored procs that let you do \nsomething like:\n\ninsert into batch_jobs ('..... I'm not sure what we'd put here...)\n\nthen, postgresql has a crontab entry that uses something like redhats \nrunparts script to run the SQL commands it finds in the table.\n\nI.e. the jobs could be scheduled by something as simple as a query, and \nremoved as well. 
Just need a postgresql cron that runs every 5 minutes or \nwhatever resolution you need.\n\n", "msg_date": "Tue, 13 May 2003 15:51:28 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scheduled jobs" }, { "msg_contents": "pgsql-core have agreed to put out a 7.3.3 release on Wednesday (5/21),\nGod willin' an' the creek don't rise. If anyone's got anything you've\nbeen planning to fix in the 7.3 branch, now is a real good time to get\nit done.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 May 2003 01:15:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Heads up: 7.3.3 this Wednesday" }, { "msg_contents": "Tom Lane wrote:\n> pgsql-core have agreed to put out a 7.3.3 release on Wednesday (5/21),\n> God willin' an' the creek don't rise. If anyone's got anything you've\n> been planning to fix in the 7.3 branch, now is a real good time to get\n> it done.\n> \n\nFunny you should ask :-)\n\nHere's a fix for a bug in connectby (crashes when called as a targetlist \nfunction instead of as a table function). The patch is against cvs HEAD, \nbut should also be applied to the 7.3 branch, I think.\n\nThanks,\n\nJoe", "msg_date": "Thu, 15 May 2003 22:49:31 -0700", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "tablefunc bugfix (was Re: [HACKERS] Heads up: 7.3.3 this Wednesday)" }, { "msg_contents": "> pgsql-core have agreed to put out a 7.3.3 release on Wednesday (5/21),\n> God willin' an' the creek don't rise. If anyone's got anything you've\n> been planning to fix in the 7.3 branch, now is a real good time to get\n> it done.\n\nOnly thing is that %rowtype and dropped columns business - but I think you\nindicated that would be a 7.4 fix...I think it's a bit too complex for me...\n\nChris\n\n", "msg_date": "Fri, 16 May 2003 14:01:19 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Heads up: 7.3.3 this Wednesday" }, { "msg_contents": "Joe Conway <[email protected]> writes:\n> Here's a fix for a bug in connectby (crashes when called as a targetlist \n> function instead of as a table function). The patch is against cvs HEAD, \n> but should also be applied to the 7.3 branch, I think.\n\nRight-o, applied in both branches.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 May 2003 02:08:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tablefunc bugfix (was Re: [HACKERS] Heads up: 7.3.3 this\n\tWednesday)" }, { "msg_contents": "\"Christopher Kings-Lynne\" <[email protected]> writes:\n> Only thing is that %rowtype and dropped columns business - but I think you\n> indicated that would be a 7.4 fix...I think it's a bit too complex for me...\n\nI think only low-risk bug fixes need apply for 7.3 at this point ... the\ndropped-col thing needs some study ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 May 2003 02:12:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Heads up: 7.3.3 this Wednesday " }, { "msg_contents": "On Fri, 16 May 2003, Tom Lane wrote:\n\n> pgsql-core have agreed to put out a 7.3.3 release on Wednesday (5/21),\n> God willin' an' the creek don't rise. 
If anyone's got anything you've\n> been planning to fix in the 7.3 branch, now is a real good time to get\n> it done.\n\nI think Bruce should check his mail box for unapplied patches.\nI suspect some of our patches are still waiting for attention.\n\n Oleg\n\n\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n", "msg_date": "Fri, 16 May 2003 11:46:27 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Heads up: 7.3.3 this Wednesday" }, { "msg_contents": "Oleg Bartunov <[email protected]> writes:\n> On Fri, 16 May 2003, Tom Lane wrote:\n>> pgsql-core have agreed to put out a 7.3.3 release on Wednesday (5/21),\n\n> I think Bruce should check his mail box for unapplied patches.\n> I suspect some of our patches are still waiting for attention.\n\nBruce is going to be mostly out of the loop on this release (he's out of\ntown this weekend, and planning a server upgrade Monday). So if you've\ngot any problems, let me know about 'em.\n\nI have just finished digging through the pgsql-patches archives back to\nFebruary, and found only a few small things that seemed appropriate for\nback-patching. I do seem to recall seeing some fixes from you and\nTeodor recently, though. Could you check REL7_3_STABLE CVS tip against\nwhat you have, and either resubmit any missing patches or point me to\nwhere they're archived?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 May 2003 11:25:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Heads up: 7.3.3 this Wednesday " }, { "msg_contents": "I sended it before (24 Apr), but I don't sure that it applied.\n\nThis patch fixes nt[]-int[] operation\n\n\nTom Lane wrote:\n> Oleg Bartunov <[email protected]> writes:\n> \n>>On Fri, 16 May 2003, Tom Lane wrote:\n>>\n>>>pgsql-core have agreed to put out a 7.3.3 release on Wednesday (5/21),\n> \n> \n>>I think Bruce should check his mail box for unapplied patches.\n>>I suspect some of our patches are still waiting for attention.\n> \n> \n> Bruce is going to be mostly out of the loop on this release (he's out of\n> town this weekend, and planning a server upgrade Monday). So if you've\n> got any problems, let me know about 'em.\n> \n> I have just finished digging through the pgsql-patches archives back to\n> February, and found only a few small things that seemed appropriate for\n> back-patching. I do seem to recall seeing some fixes from you and\n> Teodor recently, though. 
Could you check REL7_3_STABLE CVS tip against\n> what you have, and either resubmit any missing patches or point me to\n> where they're archived?\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n\n-- \nTeodor Sigaev E-mail: [email protected]", "msg_date": "Fri, 16 May 2003 22:30:32 +0400", "msg_from": "Teodor Sigaev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Heads up: 7.3.3 this Wednesday" }, { "msg_contents": "Teodor Sigaev <[email protected]> writes:\n> I sended it before (24 Apr), but I don't sure that it applied.\n> This patch fixes nt[]-int[] operation\n\nYeah, you're right, it hadn't been applied yet. Done now; thanks!\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 May 2003 14:52:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Heads up: 7.3.3 this Wednesday " }, { "msg_contents": "Tom Lane wrote:\n> pgsql-core have agreed to put out a 7.3.3 release on Wednesday (5/21),\n> God willin' an' the creek don't rise. If anyone's got anything you've\n> been planning to fix in the 7.3 branch, now is a real good time to get\n> it done.\n\nI've seen no problems with the deferred trigger patch (which addresses\na performance issue with deferred triggers) you gave me some time ago.\nIs this something that's likely to be included in 7.3.3?\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Fri, 16 May 2003 12:20:04 -0700", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Heads up: 7.3.3 this Wednesday" }, { "msg_contents": "Kevin Brown <[email protected]> writes:\n> I've seen no problems with the deferred trigger patch (which addresses\n> a performance issue with deferred triggers) you gave me some time ago.\n> Is this something that's likely to be included in 7.3.3?\n\nI've been wondering myself about whether to include Jan's patch that\nreduces foreign-key deadlocks. Neither of these patches have gotten\nenough testing (that I know of) to make me feel very comfortable about\ndropping them into 7.3.3 ... but on the other hand, they could be pretty\nsignificant fixes.\n\nAny votes pro or con? Who else can report successful use of either of\nthese patches?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 May 2003 16:18:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Heads up: 7.3.3 this Wednesday " }, { "msg_contents": "> > I've seen no problems with the deferred trigger patch (which\n> > addresses a performance issue with deferred triggers) you gave me\n> > some time ago. Is this something that's likely to be included in\n> > 7.3.3?\n> \n> I've been wondering myself about whether to include Jan's patch that\n> reduces foreign-key deadlocks. Neither of these patches have gotten\n> enough testing (that I know of) to make me feel very comfortable\n> about dropping them into 7.3.3 ... but on the other hand, they could\n> be pretty significant fixes.\n> \n> Any votes pro or con? Who else can report successful use of either\n> of these patches?\n\nI've been using the deferred triggers patch in production for about a\nmonth without any problems. I'm definitely for that as well as Jan's\npatch, though I haven't used it. Anything to help out FK's and I'm\ngame. :)\n\nIf someone needs the clean patch for 7.3 of the deferred triggers, the\npatch is at the URL below. 
The one originally posted didn't merge 100%\ncleanly to my 7.3.2 sources.\n\nhttp://people.FreeBSD.org/~seanc/patch_postgresql-7.3.2::src::backend::utils::adt::ri_triggers.c\n\n -sc\n\n-- \nSean Chittenden\n", "msg_date": "Fri, 16 May 2003 13:26:50 -0700", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Heads up: 7.3.3 this Wednesday" }, { "msg_contents": "Is is possible to make it a build time switch so many folks can beta test \nit for us, so to speak? I'd certainly be willing to test it out a bit, \nbut we don't have time to test it before 7.3.3 comes out.\n\nThat would allow us to basically test the deferred triggers in a \nrelatively stable code base (7.3) on semi-production machines (i.e. the \nones running batch files at night and such) long before 7.4 goes beta.\n\nOr is it too complex or ugly to put something like that into configure and \nthe code? Just a thought.\n\nOn Fri, 16 May 2003, Tom Lane wrote:\n\n> Kevin Brown <[email protected]> writes:\n> > I've seen no problems with the deferred trigger patch (which addresses\n> > a performance issue with deferred triggers) you gave me some time ago.\n> > Is this something that's likely to be included in 7.3.3?\n> \n> I've been wondering myself about whether to include Jan's patch that\n> reduces foreign-key deadlocks. Neither of these patches have gotten\n> enough testing (that I know of) to make me feel very comfortable about\n> dropping them into 7.3.3 ... but on the other hand, they could be pretty\n> significant fixes.\n> \n> Any votes pro or con? Who else can report successful use of either of\n> these patches?\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n> \n\n", "msg_date": "Fri, 16 May 2003 15:13:37 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Heads up: 7.3.3 this Wednesday " }, { "msg_contents": "\"scott.marlowe\" <[email protected]> writes:\n> Is is possible to make it a build time switch so many folks can beta test \n> it for us, so to speak?\n\nI doubt it'd get tested enough to notice, if it's not in the default\nbuild.\n\nI actually think that both of these are pretty good candidates to put\ninto 7.3.3. I'm just trying to adopt an appropriately paranoid stance\nand ask hard questions about how much they've been tested. Between\nKevin and Sean it seems that the deferred-triggers change has gotten\nenough testing to warrant some trust, but I'm not hearing anything\nabout the FK-deadlock one :-(.\n\nBTW, if anyone is looking for that patch, it was at\nhttp://archives.postgresql.org/pgsql-hackers/2003-04/msg00260.php\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 May 2003 17:36:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Heads up: 7.3.3 this Wednesday " }, { "msg_contents": "> > Is is possible to make it a build time switch so many folks can beta test \n> > it for us, so to speak?\n> \n> I doubt it'd get tested enough to notice, if it's not in the default\n> build.\n> \n> I actually think that both of these are pretty good candidates to put\n> into 7.3.3. I'm just trying to adopt an appropriately paranoid stance\n> and ask hard questions about how much they've been tested. 
Between\n> Kevin and Sean it seems that the deferred-triggers change has gotten\n> enough testing to warrant some trust, but I'm not hearing anything\n> about the FK-deadlock one :-(.\n> \n> BTW, if anyone is looking for that patch, it was at\n> http://archives.postgresql.org/pgsql-hackers/2003-04/msg00260.php\n\nAre there any test cases to get this bug to fire? I haven't had any\nproblems with this particular bug so I can apply this patch, but I\ncan't promise that my use of the patch will result in anything useful\nother than, \"nothing's broke yet\" since I haven't had any real\nproblems with 7.3.2 other than the deferred trigger speed. Off hand,\nI don't see why this patch would cause any problems if it were\napplied. -sc\n\n-- \nSean Chittenden\n", "msg_date": "Fri, 16 May 2003 14:44:53 -0700", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Heads up: 7.3.3 this Wednesday" }, { "msg_contents": "Sean Chittenden <[email protected]> writes:\n>> BTW, if anyone is looking for that patch, it was at\n>> http://archives.postgresql.org/pgsql-hackers/2003-04/msg00260.php\n\n> Are there any test cases to get this bug to fire? I haven't had any\n> problems with this particular bug so I can apply this patch, but I\n> can't promise that my use of the patch will result in anything useful\n> other than, \"nothing's broke yet\" since I haven't had any real\n> problems with 7.3.2 other than the deferred trigger speed.\n\n\"Nothing's broke yet\" would be a useful report. I just would like to\nsee some more mileage on the beast before we let it loose on the\nunsuspecting world. Test cases aren't really the point --- we know what\nwe expect them to do. It's the cases we didn't think of that worry me\nat times like this.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 May 2003 17:51:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Heads up: 7.3.3 this Wednesday " }, { "msg_contents": "On Fri, 16 May 2003, Tom Lane wrote:\n\n> \"scott.marlowe\" <[email protected]> writes:\n> > Is is possible to make it a build time switch so many folks can beta test \n> > it for us, so to speak?\n> \n> I doubt it'd get tested enough to notice, if it's not in the default\n> build.\n\nWell, we could reverse that and make it the default, and if you find a FK \nproblem folks can turn it off.\n\nBut that probably is sub optimal too.\n\n", "msg_date": "Fri, 16 May 2003 15:57:20 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Heads up: 7.3.3 this Wednesday " }, { "msg_contents": "Did that 'prevent cluster on partial and non-NULL indexes' patch get\nbackported?\n\nChris\n\n\n", "msg_date": "Sat, 17 May 2003 23:32:50 +0800 (WST)", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Heads up: 7.3.3 this Wednesday " }, { "msg_contents": "Christopher Kings-Lynne <[email protected]> writes:\n> Did that 'prevent cluster on partial and non-NULL indexes' patch get\n> backported?\n\nThis one?\n\n2003-03-02 23:37 tgl\n\n\t* src/backend/commands/: cluster.c (REL7_3_STABLE), cluster.c:\n\tPrevent clustering on incomplete indexes: partial indexes are\n\tverboten, as are non-amindexnulls AMs unless first column is\n\tattnotnull.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 17 May 2003 11:38:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Heads up: 7.3.3 this Wednesday " }, { "msg_contents": 
"\n> I doubt it'd get tested enough to notice, if it's not in the default\n> build.\n> \n> I actually think that both of these are pretty good candidates to put\n> into 7.3.3. I'm just trying to adopt an appropriately paranoid stance\n> and ask hard questions about how much they've been tested. Between\n> Kevin and Sean it seems that the deferred-triggers change has gotten\n> enough testing to warrant some trust, but I'm not hearing anything\n> about the FK-deadlock one :-(.\n> \n> BTW, if anyone is looking for that patch, it was at\n> http://archives.postgresql.org/pgsql-hackers/2003-04/msg00260.php\n\n7.3.2: I applied the above patch and did install and restarted postgresql,\nbut the 'deadlock detected' error on FK update still exist. The below is\nthe test case. Someone *advice* me, if it the above mentioned patch is not\nintended to address the below case.\n\nTest case:\n\ntest_pg=# CREATE TABLE prim_test (id int primary key);\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index \n'prim_test_pkey'\nfor table 'prim_test'\nCREATE TABLE\ntest_pg=# CREATE TABLE for_test (id int references prim_test(id), name \ntext);\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY \ncheck(s)\nCREATE TABLE\ntest_pg=# INSERT INTO prim_test VALUES ('1');\nINSERT 4383707 1\ntest_pg=# INSERT INTO prim_test VALUES ('2');\nINSERT 4383708 1\ntest_pg=# INSERT INTO for_test VALUES (1, 'foo');\nINSERT 4383710 1\ntest_pg=# INSERT INTO for_test VALUES (2, 'bar');\nINSERT 4383711 1\n\nt1:\ntest_pg=# BEGIN ;\nBEGIN\ntest_pg=# UPDATE for_test set name ='FOO' where id = 1;\nUPDATE 1\ntest_pg=# UPDATE for_test set name ='Bar' where id = 2;\nUPDATE 1\n\nt2:\ntest_pg=# BEGIN ;\nBEGIN\ntest_pg=# UPDATE for_test set name = 'BAR' where id = 2;\nUPDATE 1\ntest_pg=# UPDATE for_test set name = 'Foo' where id = 1;\nERROR: deadlock detected\n\nregards,\nbhuvaneswaran\n\n\n\n", "msg_date": "Mon, 19 May 2003 12:48:24 +0530 (IST)", "msg_from": "\"A.Bhuvaneswaran\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Heads up: 7.3.3 this Wednesday " }, { "msg_contents": "\"A.Bhuvaneswaran\" <[email protected]> writes:\n> 7.3.2: I applied the above patch and did install and restarted postgresql,\n> but the 'deadlock detected' error on FK update still exist. The below is\n> the test case. Someone *advice* me, if it the above mentioned patch is not\n> intended to address the below case.\n\nThat is not a foreign-key deadlock; it's a plain old deadlock. 
It would\nhappen exactly the same way without the foreign key, because the\ncontention is directly for the rows being updated.\n\nAn example of what the patch fixes:\n\nregression=# CREATE TABLE prim_test (id int primary key);\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'prim_test_pkey' for table 'prim_test'\nCREATE TABLE\nregression=# CREATE TABLE for_test (id int, name text,\nregression(# ref int references prim_test(id));\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nCREATE TABLE\nregression=# INSERT INTO prim_test VALUES ('1');\nINSERT 566517 1\nregression=# INSERT INTO prim_test VALUES ('2');\nINSERT 566518 1\nregression=# INSERT INTO for_test VALUES (11, 'foo', 1);\nINSERT 566520 1\nregression=# INSERT INTO for_test VALUES (12, 'fooey', 1);\nINSERT 566521 1\nregression=# INSERT INTO for_test VALUES (21, 'fooey', 2);\nINSERT 566522 1\nregression=# INSERT INTO for_test VALUES (22, 'fooey', 2);\nINSERT 566523 1\nregression=# begin;\nBEGIN\nregression=# UPDATE for_test set name ='FOO' where id = 11;\nUPDATE 1\n\n-- in client 2 do\n\nregression=# begin;\nBEGIN\nregression=# UPDATE for_test set name = 'BAR' where id = 22;\nUPDATE 1\nregression=# UPDATE for_test set name = 'BAR' where id = 12;\nUPDATE 1\n\n-- back to client 1, do\n\nregression=# UPDATE for_test set name ='FOO' where id = 21;\nUPDATE 1\n\nThis deadlocks in 7.3, but works in CVS tip.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 May 2003 10:24:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Heads up: 7.3.3 this Wednesday " }, { "msg_contents": "> regression=# UPDATE for_test set name ='FOO' where id = 21;\n> UPDATE 1\n>\n> This deadlocks in 7.3, but works in CVS tip.\n\nHmmm...I suspect this will remove a lot of the FK deadlocks I see in my logs\nall the time...does seem like a bug doesn't it?\n\nChris\n\n", "msg_date": "Tue, 20 May 2003 10:42:58 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Heads up: 7.3.3 this Wednesday " }, { "msg_contents": "\"Christopher Kings-Lynne\" <[email protected]> writes:\n>> This deadlocks in 7.3, but works in CVS tip.\n\n> Hmmm...I suspect this will remove a lot of the FK deadlocks I see in my logs\n> all the time...does seem like a bug doesn't it?\n\nYeah. I'm just worried that there might be some downside we've not\nspotted yet. I'd feel better about it if *anyone* had reported\nsuccessful production use of the patch in the month since it's been\navailable. I thought one or two people had expressed the intention\nto run the patch when Jan offered it ... where are they?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 May 2003 23:38:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Heads up: 7.3.3 this Wednesday " }, { "msg_contents": "Tom Lane wrote:\n> Yeah. I'm just worried that there might be some downside we've not\n> spotted yet. I'd feel better about it if *anyone* had reported\n> successful production use of the patch in the month since it's been\n> available. I thought one or two people had expressed the intention\n> to run the patch when Jan offered it ... where are they?\n\nI put it in on some DBs we were doing testing against at Liberty; the\nresults unfortunately were inconclusive. 
\n\nWe didn't see any noticeable improvement in performance out of it,\nalbeit with limited use, because we weren't planning to use 7.3 in\nproduction just yet, and so did the vast majority of the testing against\n7.2.4, where the patch doesn't even come close to applying...\n\nThe result is that I can't suggest either a \"yea\" or \"nay\"...\n--\n(reverse (concatenate 'string \"gro.gultn@\" \"enworbbc\"))\nhttp://www.ntlug.org/~cbbrowne/spiritual.html\nIf con is the opposite of pro, is Congress the opposite of progress?\n", "msg_date": "Tue, 20 May 2003 07:16:21 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Heads up: 7.3.3 this Wednesday " }, { "msg_contents": "On Mon, 19 May 2003, Tom Lane wrote:\n\n> \"Christopher Kings-Lynne\" <[email protected]> writes:\n> >> This deadlocks in 7.3, but works in CVS tip.\n> \n> > Hmmm...I suspect this will remove a lot of the FK deadlocks I see in my logs\n> > all the time...does seem like a bug doesn't it?\n> \n> Yeah. I'm just worried that there might be some downside we've not\n> spotted yet. I'd feel better about it if *anyone* had reported\n> successful production use of the patch in the month since it's been\n> available. I thought one or two people had expressed the intention\n> to run the patch when Jan offered it ... where are they?\n\nI'd be glad to test it, but we don't have any issues with fk deadlocks \nsince our load is 99% read, 1% write, and most of the tables with fks on \nthem only have a handful of writers, so any testing I would do would \nprobably just be the \"we used it in production and it didn't die\" kind of \ntesting.\n\n", "msg_date": "Tue, 20 May 2003 09:31:50 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Heads up: 7.3.3 this Wednesday " }, { "msg_contents": "\"scott.marlowe\" <[email protected]> writes:\n> I'd be glad to test it, but we don't have any issues with fk deadlocks \n> since our load is 99% read, 1% write, and most of the tables with fks on \n> them only have a handful of writers, so any testing I would do would \n> probably just be the \"we used it in production and it didn't die\" kind of \n> testing.\n\nThat's what I'm looking for mostly: that it does not have any adverse\nside-effects.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 20 May 2003 11:54:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Heads up: 7.3.3 this Wednesday " }, { "msg_contents": "On Tue, 20 May 2003, Tom Lane wrote:\n\n> \"scott.marlowe\" <[email protected]> writes:\n> > I'd be glad to test it, but we don't have any issues with fk deadlocks \n> > since our load is 99% read, 1% write, and most of the tables with fks on \n> > them only have a handful of writers, so any testing I would do would \n> > probably just be the \"we used it in production and it didn't die\" kind of \n> > testing.\n> \n> That's what I'm looking for mostly: that it does not have any adverse\n> side-effects.\n\nSo where's that patch again? 
The search function of the mail archives is \nbroken, so I can't seem to find it.\n\n\n\n", "msg_date": "Tue, 20 May 2003 10:08:07 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Heads up: 7.3.3 this Wednesday " }, { "msg_contents": "I wrote:\n> pgsql-core have agreed to put out a 7.3.3 release on Wednesday (5/21),\n> God willin' an' the creek don't rise.\n\nWell, the postgresql.org server move has proven to be much messier than\nwe hoped, so it seems prudent to delay a couple days while the kinks\nget worked out. New plan is 7.3.3 release on Friday (5/23).\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 20 May 2003 13:04:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Heads up: 7.3.3 this, er, Friday" }, { "msg_contents": "On Tue, 20 May 2003, scott.marlowe wrote:\n\n> On Tue, 20 May 2003, Tom Lane wrote:\n>\n> > \"scott.marlowe\" <[email protected]> writes:\n> > > I'd be glad to test it, but we don't have any issues with fk deadlocks\n> > > since our load is 99% read, 1% write, and most of the tables with fks on\n> > > them only have a handful of writers, so any testing I would do would\n> > > probably just be the \"we used it in production and it didn't die\" kind of\n> > > testing.\n> >\n> > That's what I'm looking for mostly: that it does not have any adverse\n> > side-effects.\n>\n> So where's that patch again? The search function of the mail archives is\n> broken, so I can't seem to find it.\n>\n\ndid you try http://fts.postgresql.org/ ? It should works\n\n\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n", "msg_date": "Tue, 20 May 2003 21:36:00 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Heads up: 7.3.3 this Wednesday " }, { "msg_contents": "scott.marlowe wrote:\n> On Tue, 20 May 2003, Tom Lane wrote:\n> \n> \n>>\"scott.marlowe\" <[email protected]> writes:\n>>\n>>>I'd be glad to test it, but we don't have any issues with fk deadlocks \n>>>since our load is 99% read, 1% write, and most of the tables with fks on \n>>>them only have a handful of writers, so any testing I would do would \n>>>probably just be the \"we used it in production and it didn't die\" kind of \n>>>testing.\n>>\n>>That's what I'm looking for mostly: that it does not have any adverse\n>>side-effects.\n> \n> \n> So where's that patch again? The search function of the mail archives is \n> broken, so I can't seem to find it.\n> \n\nAttached.\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #", "msg_date": "Tue, 20 May 2003 15:32:55 -0400", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Heads up: 7.3.3 this Wednesday" }, { "msg_contents": "Thanks! 
We should just replace the search screen (that doesn't work \nanyway) on the archives with that one. That fts site rocks!\n\nI found it on the ftp server by the way, in the patches directory of all \nplaces. I'll put it online on our backup server tonight.\n\nOn Tue, 20 May 2003, Oleg Bartunov wrote:\n\n> On Tue, 20 May 2003, scott.marlowe wrote:\n> \n> > On Tue, 20 May 2003, Tom Lane wrote:\n> >\n> > > \"scott.marlowe\" <[email protected]> writes:\n> > > > I'd be glad to test it, but we don't have any issues with fk deadlocks\n> > > > since our load is 99% read, 1% write, and most of the tables with fks on\n> > > > them only have a handful of writers, so any testing I would do would\n> > > > probably just be the \"we used it in production and it didn't die\" kind of\n> > > > testing.\n> > >\n> > > That's what I'm looking for mostly: that it does not have any adverse\n> > > side-effects.\n> >\n> > So where's that patch again? The search function of the mail archives is\n> > broken, so I can't seem to find it.\n> >\n> \n> did you try http://fts.postgresql.org/ ? It should works\n> \n> \n> >\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to [email protected])\n> >\n> \n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n> \n\n", "msg_date": "Tue, 20 May 2003 14:43:56 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Heads up: 7.3.3 this Wednesday " }, { "msg_contents": "Andrew Sullivan <[email protected]> writes:\n> Yes, 3 weeks of testing with the patch that Jan provided for 7.2. We\n> had no failures that I know of. It provided no measurable\n> performance gain, but it also never deadlocked, and we tested under\n> conditions where it sometimes did in the past.\n\n> Note, however, that we were testing it indirectly; that is, while we\n> were testing to see if it would break, our application (which does a\n> number of the referential checks itself, multiple-connection-safety\n> notwithstanding :( ) tends not to try to violate the foreign keys\n> anyway. I wouldn't want to claim that we've tested it real heavily.\n\nNonetheless, this does seem to speak to my real concern, which is\nwhether the patch introduces any unexpected side-effects. The code\nwas getting executed, whether or not it detected any FK violations,\nso we can have some hope that any bizarre problems would have been\nnoticed.\n\nI'll go ahead and apply it for 7.3.3.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 May 2003 13:40:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Heads up: 7.3.3 this Wednesday " }, { "msg_contents": "On Wed, May 21, 2003 at 01:40:02PM -0400, Tom Lane wrote:\n> was getting executed, whether or not it detected any FK violations,\n> so we can have some hope that any bizarre problems would have been\n> noticed.\n\nCertainly, bizarre problems would have been noticed. 
It's safe to\napply, if you ask me.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Wed, 21 May 2003 16:52:04 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Heads up: 7.3.3 this Wednesday" }, { "msg_contents": "Tom Lane wrote:\n> I actually think that both of these are pretty good candidates to put\n> into 7.3.3. I'm just trying to adopt an appropriately paranoid stance\n> and ask hard questions about how much they've been tested. Between\n> Kevin and Sean it seems that the deferred-triggers change has gotten\n> enough testing to warrant some trust, but I'm not hearing anything\n> about the FK-deadlock one :-(.\n> \n> BTW, if anyone is looking for that patch, it was at\n> http://archives.postgresql.org/pgsql-hackers/2003-04/msg00260.php\n\nI've been running with this patch ever since I upgraded to 7.3.2 some\ntime back, without any ill effects at all, FWIW...\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Wed, 21 May 2003 21:56:59 -0700", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Heads up: 7.3.3 this Wednesday" }, { "msg_contents": "Kevin Brown <[email protected]> writes:\n> I've been running with this patch ever since I upgraded to 7.3.2 some\n> time back, without any ill effects at all, FWIW...\n\nAndrew Sullivan also reported doing a fair bit of testing without\nnoticing any bad side-effects, so I've gone ahead and applied it for\n7.3.3. Appreciate the confirmation though...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 May 2003 01:15:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Heads up: 7.3.3 this Wednesday " }, { "msg_contents": "I worked on project at GBorg to implement:\n\n> - a specified moment\n> - a time of the day/month/year\n> - recurring at a time interval\n\nBut now the project is frozen for some time, exist only UML.\n\nExpected features were:\n\n1. All transactions from job spool performed in one connection serially. \n So on multy processors servers spooled jobs will eat only one CPU. \nAnd job performed in expected order.\n2. Guarantee launch job from spool in none guarantee time, even if \nserver crashed when job transaction not ended.\n\nCron can't implement this.\n\n-- \nOlleg\n\n", "msg_date": "Thu, 22 May 2003 15:57:12 +0400", "msg_from": "Olleg Samojlov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scheduled jobs" }, { "msg_contents": "Peter Eisentraut writes:\n> I think not. It's a little tricky handling it directly in the child\n> processes, but it's been done before.\n\nCertainly has...\n\nIn the cfengine2 code base, the relevant file is \"rotate.c\"; it\nessentially attaches the file descriptor to a new file \"on the fly,\" and\ndoes so as a separate process. The code apparently works on NT, too;\nthere is a comment that indicates that they use chown() rather than\nfchown() because the latter doesn't exist on NT.\n\nIn fact, that file is well worth taking a look at for strategies on\nthis. It's GPLed code, and so may not be suitable for integration, but\nthere are doubtless some useful techniques to be seen...\n--\nwm(X,Y):-write(X),write('@'),write(Y). wm('cbbrowne','cbbrowne.com').\nhttp://cbbrowne.com/info/languages.html\n\"Microsoft is sort of a mixture between the Borg and the\nFerengi. 
Combine the Borg marketing with Ferengi networking...\"\n-- Andre Beck in dcouln\n", "msg_date": "Fri, 23 May 2003 22:42:58 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more contrib: log rotator " }, { "msg_contents": "\nI assume we are not moving in the XML/psql direction, right? We want it\nint he backend, or the psql HTML converted to XHTML?\n\n---------------------------------------------------------------------------\n\[email protected] wrote:\n[ There is text before PGP section. ]\n> \n[ PGP not available, raw data follows ]\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n> \n> \n> Patch to add XML output to psql:\n> \n> http://www.gtsm.com/xml.patch.txt\n> \n> Notes and questions:\n> \n> The basic output looks something like this:\n> \n> <?xml version=\"1.0\" encoding=\"SQL_ASCII\"?>\n> <resultset psql_version=\"7.4devel\" query=\"select * from foo;\">\n> \n> <columns>\n> <col num=\"1\">a</col>\n> <col num=\"2\">b</col>\n> <col num=\"3\">c</col>\n> <col num=\"4\">mucho nacho </col>\n> </columns>\n> <row num=\"1\">\n> <a>1</a>\n> <b>pizza</b>\n> <c>2003-02-25 15:19:22.169797</c>\n> <\"mucho nacho \"></\"mucho nacho \">\n> </row>\n> <row num=\"2\">\n> <a>2</a>\n> <b>mushroom</b>\n> <c>2003-02-25 15:19:26.969415</c>\n> <\"mucho nacho \"></\"mucho nacho \">\n> </row>\n> <footer>(2 rows)</footer>\n> </resultset>\n> \n> and with the \\x option:\n> \n> <?xml version=\"1.0\" encoding=\"SQL_ASCII\"?>\n> <resultset psql_version=\"7.4devel\" query=\"select * from foo;\">\n> \n> <columns>\n> <col num=\"1\">a</col>\n> <col num=\"2\">b</col>\n> <col num=\"3\">c</col>\n> <col num=\"4\">mucho nacho </col>\n> </columns>\n> <row num=\"1\">\n> <cell name=\"a\">1</cell>\n> <cell name=\"b\">pizza</cell>\n> <cell name=\"c\">2003-02-25 15:19:22.169797</cell>\n> <cell name=\"mucho nacho \"></cell>\n> </row>\n> <row num=\"2\">\n> <cell name=\"a\">2</cell>\n> <cell name=\"b\">mushroom</cell>\n> <cell name=\"c\">2003-02-25 15:19:26.969415</cell>\n> <cell name=\"mucho nacho \"></cell>\n> </row>\n> </resultset>\n> \n> \n> The default encoding \"SQL-ASCII\" is not valid for XML. \n> Should it be automatically changed to something else?\n> \n> The flag \"-X\" is already taken, unfortunately, although \\X is not. \n> I used \"-L\" and \"\\L\" but they are not as memorable as \"X\". Anyone \n> see a way around this? Can we still use \\X inside of psql?\n> \n> \n> It would be nice to include the string representation of the column \n> types in the xml output:\n> <col type=\"int8\">foo</col>\n> ....but I could not find an easy way to do this: PQftype returns the \n> OID only (which is close but not quite there). Is there an \n> existing way to get the name of the type of a column from a \n> PQresult item?\n> \n> The HTML, XML, and Latex modes should have better documentation - \n> I'll submit a separate doc patch when/if this gets finalized.\n> \n> \n> - --\n> Greg Sabino Mullane [email protected]\n> PGP Key: 0x14964AC8 200302261518\n> \n> -----BEGIN PGP SIGNATURE-----\n> Comment: http://www.turnstep.com/pgp.html\n> \n> iD8DBQE+XSR/vJuQZxSWSsgRAi2jAJ9IAKnMBmNcVEEI8TXQBBd/rtm4XQCg0Vjq\n> IO9OsCSkdnNJqnrYYutM3jw=\n> =9kwY\n> -----END PGP SIGNATURE-----\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n[ Decrypting message... End of raw data. 
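(On the PQftype question above: libpq only returns the type OID, but the patch could resolve that to a printable name with one extra query along these lines, at the cost of a round trip per distinct OID:)

    -- map an OID from PQftype() to a display name, e.g. OID 23
    select format_type(23, null);                  -- integer
    select typname from pg_type where oid = 23;    -- int4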
]\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 24 May 2003 13:37:14 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: XML ouput for psql" }, { "msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\n> I assume we are not moving in the XML/psql direction, right? We want it\n> int he backend, or the psql HTML converted to XHTML?\n\nI don't think a consensus was ever reached. It would certainly be better if \nthis was done on the backend, but that seems to be a long time away, and \nsome have argued that it is not the job of the engine to do this anyway.\n\nI agree at the very least we should update the HTML ourput for psql: I'll \ntry to make a patch for that this weekend.\n\nI still think we should at least have a rudimentary xml output option inside \nof psql. It won't be perfect, but we can certainly have the flag toggle \na backend variable when/if the backend supports XML directly.\n\n\n- --\nGreg Sabino Mullane [email protected]\nPGP Key: 0x14964AC8 200305301452\n-----BEGIN PGP SIGNATURE-----\nComment: http://www.turnstep.com/pgp.html\n\niD8DBQE+16npvJuQZxSWSsgRAnTHAJ0UN3HFWVybqDd/5lnsV2CcotRxSgCgp7md\nW9Iho/Y1mwUYEl8SX/9oAVc=\n=G1Jx\n-----END PGP SIGNATURE-----\n\n\n", "msg_date": "Fri, 30 May 2003 19:06:38 -0000", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: XML ouput for psql" }, { "msg_contents": "> > I assume we are not moving in the XML/psql direction, right? We\n> > want it int he backend, or the psql HTML converted to XHTML?\n> \n> I don't think a consensus was ever reached. It would certainly be\n> better if this was done on the backend, but that seems to be a long\n> time away, and some have argued that it is not the job of the engine\n> to do this anyway.\n\nFew points for the archives regarding XML and databases (spent 9mo\nworking on this kinda stuff during the .com days):\n\n*) Use libxml2. MIT Licensed, most complete opensource XML\n implementation available, and fast. See the XML benchmarks on\n sf.net for details. To avoid library naming conflicts, the library\n should likely be renamed to pgxml.so and imported into the src\n tree. Mention java in this context and risk being clubbed to death.\n\n*) There should be two storage formats for XML data:\n\n a) DOM-esque storage: broken down xmlNodes. This is necessary for\n indexing specific places in documents (ala XPath queries).\n Actual datums on the disk should be similar in structure to the\n xmlNode struct found in libxml2 (would help with the\n serialization in either direction). In database xslt\n transformations are also possible with the data stored this way.\n\n b) SAX-esque storage: basically a single BYTEA/TEXT column. Not\n all documents need to be indexed/searchable and SAX processing\n of data is generally more efficient if you don't know what\n you're looking for. This format is the low hanging fruit\n though.\n\n-sc\n\n-- \nSean Chittenden\n", "msg_date": "Fri, 30 May 2003 13:20:28 -0700", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: XML ouput for psql" }, { "msg_contents": "\nDone. 
Added to 1.15) How can I financially assist PostgreSQL?\n\n <P>Also, if you have a success story about PostgreSQL, please submit\n it to our advocacy site at <a href=\"http://advocacy.postgresql.org\">\n http://advocacy.postgresql.org</a>.\n\n\n---------------------------------------------------------------------------\n\nShridhar Daithankar wrote:\n> Hi,\n> \n> We should probably put out an FAQ saying, if you have success story, please \n> make a write-up and send to us at http://advocacy.postgresql.org.\n> \n> Shridhar\n> \n> On Tuesday 15 April 2003 18:42, Ericson Smith wrote:\n> > As one of the top search engine campaign optimization companies in the\n> > space, we here at Did-it.com have been using Postgresql for over a year\n> > now. We had serious locking problems with MySQL and even switching to\n> > their Innodb handler did not solve all the issues.\n> >\n> > As the DB administrator, I recommended we switch when it came time to\n> > re-write our client platform. That we did, and we have not looked back.\n> > We have millions of listings, keywords and we perform live visitor\n> > tracking in our database. We capture on the order of about 1 million\n> > visitors every day, with each hit making updates, selects and possibly\n> > inserts.\n> >\n> > We could not have done this in mySQL. Basically when I see silly posts\n> > over on Slashdot about MySQL being as good a sliced bread, you can check\n> > out the debunking posts that I make as \"esconsult1\".\n> >\n> > Yes, perhaps Postgresql needs a central org that manages press and so\n> > on, but we know that we dont get the press that MySQL does, but quietly\n> > in the background Postgresql is handling large important things.\n> >\n> > - Ericson Smith\n> > Web Developer\n> > Db Admin\n> > http://www.did-it.com\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 30 May 2003 22:26:10 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Are we losing momentum?" }, { "msg_contents": "\nWhere do we want to go with this? It is interesting that it maps MySQL\nSHOW commands on top of our existing SHOW syntax in psql. The patch\ndoesn't look too big.\n\nShould this be applied? Sean, I know you have a newer patch the the URL\nyou posted isn't good anymore. This also contains GUC change. Sean,\nwould you show each one you added and we can discuss if any are\ninappropriate.\n\n---------------------------------------------------------------------------\n\nSean Chittenden wrote:\n> > > That's a pretty reasonable thought. I work for a shop that sells\n> > > Postgres support, and even we install MySQL for the Q&D ticket\n> > > tracking system we recommend because we can't justify the cost to\n> > > port it to postgres. If the postgres support were there, we would\n> > > surely be using it.\n> > \n> > > How to fix such a situation, I'm not sure. \"MySQL Compatability\n> > > Mode,\" anyone? 
:-)\n> > \n> > What issues are creating a compatibility problem for you?\n> \n> I don't think these should be hacked into the backend/libpq, but I\n> think it'd be a huge win to hack in \"show *\" support into psql for\n> MySQL users so they can type:\n> \n> SHOW (databases|tables|views|functions|triggers|schemas);\n> \n> I have yet to meet a MySQL user who understands the concept of system\n> catalogs even though it's just the 'mysql' database (this irritates me\n> enough as is)... gah, f- it: mysql users be damned, I have three\n> developers that think that postgresql is too hard to use because they\n> can't remember \"\\d [table name]\" and I'm tired of hearing them bitch\n> when I push using PostgreSQL instead of MySQL. I have better things\n> to do with my time than convert their output to PostgreSQL. Here goes\n> nothing...\n> \n> I've tainted psql and added a MySQL command compatibility layer for\n> the family of SHOW commands (psql [-m | --mysql]).\n> \n> \n> The attached patch does a few things:\n> \n> 1) Implements quite a number of SHOW commands (AGGREGATES, CASTS,\n> CATALOGS, COLUMNS, COMMENTS, CONSTRAINTS, CONVERSIONS, DATABASES,\n> DOMAINS, FUNCTIONS, HELP, INDEX, LARGEOBJECTS, NAMES, OPERATORS,\n> PRIVILEGES, PROCESSLIST, SCHEMAS, SEQUENCES, SESSION, STATUS,\n> TABLES, TRANSACTION, TYPES, USERS, VARIABLES, VIEWS)\n> \n> SHOW thing\n> SHOW thing LIKE pattern\n> SHOW thing FROM pattern\n> SHOW HELP ON (topic || ALL);\n> etc.\n> \n> Some of these don't have \\ command eqiv's. :( I was tempted to add\n> them, but opted not to for now, but it'd certainly be a nice to\n> have.\n> \n> 2) Implements the necessary tab completion for the SHOW commands for\n> the tab happy newbies/folks out there. psql is more friendly than\n> mysql's CLI now in terms of tab completion for the show commands.\n> \n> 3) Few trailing whitespace characters were nuked\n> \n> 4) guc.c is now in sync with the list of available variables used for\n> tab completion\n> \n> \n> Few things to note:\n> \n> 1) SHOW INDEXES is the same as SHOW INDEX, I think MySQL is wrong in\n> this regard and that it should be INDEXES to be plural along with\n> the rest of the types, but INDEX is preserved for compatibility.\n> \n> 2) There are two bugs that I have yet to address\n> \n> 1) SHOW VARIABLES doesn't work, but \"SHOW [TAB][TAB]y\" does\n> 2) \"SHOW [variable_of_choice];\" doesn't work, but \"SHOW\n> [variable_of_choice]\\n;\" does work... not sure where this\n> problem is coming from\n> \n> 3) I think psql is more usable as a result of this more verbose\n> syntax, but it's not the prettiest thing on the planet (wrote a\n> small parser outside of the backend or libraries: I don't want to\n> get those dirty with MySQL's filth).\n> \n> 4) In an attempt to wean people over to PostgreSQL's syntax, I\n> included translation tips on how to use the psql equiv of the SHOW\n> commands. Going from SHOW foo to \\d foo is easy, going from \\d foo\n> to SHOW foo is hard and drives me nuts. This'll help userbase\n> retention of newbies/converts. :)\n> \n> 5) The MySQL mode is just a bounce layer that provides different\n> syntax wrapping exec_command() so it should provide little in the\n> way of maintenance headaches. 
Some of the SHOW commands, however,\n> don't have \\ couterparts, but once they do and that code is\n> centralized, this feature should come for zero cost.\n> \n> 6) As an administrator, I'd be interested in having an environment\n> variable that I could set that'd turn on MySQL mode for some of my\n> bozo users that way they don't complain if they forget the -m\n> switch. Thoughts?\n> \n> \n> I'll try and iron out the last of those two bugs/features, but at this\n> point, would like to see this patch get wider testing/feedback.\n> Comments, as always, are welcome.\n> \n> PostgreSQL_usability++\n> \n> -sc\n> \n> -- \n> Sean Chittenden\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 30 May 2003 23:07:00 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Are we losing momentum?" }, { "msg_contents": "Hi everyone,\n\nAs an additional point of interest, we're still processing all of the \nCase Study submissions received from the last call around February, and \nshould be beginning translations within the next week or two.\n\nThere is a lot of very good news contained in the submissions, and they \nwill definitely assist in bringing into the open just how good a \ndatabase system and Community we truly have.\n\nHope everyone will be as amazed as I was at just where PostgreSQL is \nbeing used already. There are some very clued-in people out there.\n\nRegards and best wishes,\n\nJustin Clift\n\n\nBruce Momjian wrote:\n> Done. Added to 1.15) How can I financially assist PostgreSQL?\n> \n> <P>Also, if you have a success story about PostgreSQL, please submit\n> it to our advocacy site at <a href=\"http://advocacy.postgresql.org\">\n> http://advocacy.postgresql.org</a>.\n> \n> \n> ---------------------------------------------------------------------------\n> \n> Shridhar Daithankar wrote:\n> \n>>Hi,\n>>\n>>We should probably put out an FAQ saying, if you have success story, please \n>>make a write-up and send to us at http://advocacy.postgresql.org.\n>>\n>> Shridhar\n>>\n>>On Tuesday 15 April 2003 18:42, Ericson Smith wrote:\n>>\n>>>As one of the top search engine campaign optimization companies in the\n>>>space, we here at Did-it.com have been using Postgresql for over a year\n>>>now. We had serious locking problems with MySQL and even switching to\n>>>their Innodb handler did not solve all the issues.\n>>>\n>>>As the DB administrator, I recommended we switch when it came time to\n>>>re-write our client platform. That we did, and we have not looked back.\n>>>We have millions of listings, keywords and we perform live visitor\n>>>tracking in our database. We capture on the order of about 1 million\n>>>visitors every day, with each hit making updates, selects and possibly\n>>>inserts.\n>>>\n>>>We could not have done this in mySQL. 
Basically when I see silly posts\n>>>over on Slashdot about MySQL being as good a sliced bread, you can check\n>>>out the debunking posts that I make as \"esconsult1\".\n>>>\n>>>Yes, perhaps Postgresql needs a central org that manages press and so\n>>>on, but we know that we dont get the press that MySQL does, but quietly\n>>>in the background Postgresql is handling large important things.\n>>>\n>>>- Ericson Smith\n>>>Web Developer\n>>>Db Admin\n>>>http://www.did-it.com\n>>\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 1: subscribe and unsubscribe commands go to [email protected]\n>>\n> \n> \n\n\n", "msg_date": "Sat, 31 May 2003 15:39:34 +0800", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Are we losing momentum?" }, { "msg_contents": "Sean Chittenden kirjutas R, 30.05.2003 kell 23:20:\n> > > I assume we are not moving in the XML/psql direction, right? We\n> > > want it int he backend, or the psql HTML converted to XHTML?\n> > \n> > I don't think a consensus was ever reached. It would certainly be\n> > better if this was done on the backend, but that seems to be a long\n> > time away, and some have argued that it is not the job of the engine\n> > to do this anyway.\n> \n> Few points for the archives regarding XML and databases (spent 9mo\n> working on this kinda stuff during the .com days):\n> \n> *) Use libxml2. MIT Licensed, most complete opensource XML\n> implementation available, and fast. See the XML benchmarks on\n> sf.net for details. To avoid library naming conflicts, the library\n> should likely be renamed to pgxml.so and imported into the src\n> tree. Mention java in this context and risk being clubbed to death.\n\nAgree completely on all points ;)\n\n> *) There should be two storage formats for XML data:\n> \n> a) DOM-esque storage: broken down xmlNodes. This is necessary for\n> indexing specific places in documents (ala XPath queries).\n> Actual datums on the disk should be similar in structure to the\n> xmlNode struct found in libxml2 (would help with the\n> serialization in either direction). In database xslt\n> transformations are also possible with the data stored this way.\n> \n> b) SAX-esque storage: basically a single BYTEA/TEXT column. Not\n> all documents need to be indexed/searchable and SAX processing\n> of data is generally more efficient if you don't know what\n> you're looking for. This format is the low hanging fruit\n> though.\n\nI think that Oleg and Todor very recently proposed somethink that could\nuse b) and still provide indexed access. \n\nMost flexible would be some way to define, how much of a tree is kept\ntogether, as xmlNode/tuple would probably be too much overhead for most\noperations, whereas xmlFile/tuple would also, just for other ops;)\n\n--------------\nHannu\n\n\n", "msg_date": "31 May 2003 13:35:43 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: XML ouput for psql" }, { "msg_contents": "Bruce Momjian writes:\n\n> Where do we want to go with this? It is interesting that it maps MySQL\n> SHOW commands on top of our existing SHOW syntax in psql. 
The patch\n> doesn't look too big.\n\nThe response to \"Are we losing momentum?\" isn't to add redundant syntax\nfor nonstandard features that we have no control over.\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Sat, 31 May 2003 17:55:16 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Are we losing momentum?" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Bruce Momjian writes:\n>> Where do we want to go with this? It is interesting that it maps MySQL\n>> SHOW commands on top of our existing SHOW syntax in psql. The patch\n>> doesn't look too big.\n\n> The response to \"Are we losing momentum?\" isn't to add redundant syntax\n> for nonstandard features that we have no control over.\n\nI'm of two minds about it myself. I don't like trying to play follow-the-\nleader with a moving target. But if you think of it as trying to win\nover converts from MySQL, it seems a lot more palatable.\n\nIt would also be interesting to combine this with Rod's idea of driving\ndescribe-type queries by table instead of hardwired code. Imagine that\nthe backend's \"show foo\" command first looks for \"foo\" as a GUC\nvariable, as it does now, but upon failing to find one it looks in a\nsystem table for a query associated with the name \"foo\". If it finds\nsuch a query, it runs it and sends back the result. Now, not only can\nwe emulate \"show tables\", but people can easily add application-specific\n\"show whatever\" commands, which seems tremendously cool.\n\nThere are some safety and protection issues to be solved here (probably\nonly superusers should be allowed to modify the query table, and we\nshould restrict the form of the query to be a single SELECT command)\nbut I can't think of any showstoppers. Now that we've abandoned backend\nautocommit there are no technical reasons that SHOW shouldn't be allowed\nto run a SELECT query.\n\nI did not care for the original patch (which IIRC made psql recognize\n\"show\" commands, rather than doing it in the backend) but something like\nthe above seems reasonably clean.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 31 May 2003 12:49:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Are we losing momentum? " }, { "msg_contents": "> >> Where do we want to go with this? It is interesting that it maps\n> >> MySQL SHOW commands on top of our existing SHOW syntax in psql.\n> >> The patch doesn't look too big.\n> >\n> > The response to \"Are we losing momentum?\" isn't to add redundant\n> > syntax for nonstandard features that we have no control over.\n> \n> I'm of two minds about it myself. I don't like trying to play\n> follow-the- leader with a moving target. But if you think of it as\n> trying to win over converts from MySQL, it seems a lot more\n> palatable.\n\nThe _only_ reason I added this was to aid in the conversion of MySQL\nusers who don't know how to navigate their way through psql and\nPostgreSQL. If you read through the patch, the extended SHOW commands\nare only enabled if you turn them on via the --mysql option or by\nsetting an option (don't recall at the moment) that way you can turn\nthese on by default in a person's .psqlrc file. With Win32 coming\ndown the pipe (right??? 
*cough cough* :-P), converting existing users\nis going to be important for PostgreSQL's long term success if it\nseeks to gain market share and momentum.\n\n> It would also be interesting to combine this with Rod's idea of\n> driving describe-type queries by table instead of hardwired code.\n> Imagine that the backend's \"show foo\" command first looks for \"foo\"\n> as a GUC variable, as it does now, but upon failing to find one it\n> looks in a system table for a query associated with the name \"foo\".\n> If it finds such a query, it runs it and sends back the result.\n> Now, not only can we emulate \"show tables\", but people can easily\n> add application-specific \"show whatever\" commands, which seems\n> tremendously cool.\n\nI really like the ability to program in queries or syntaxes into the\nbackend, but as it stands, SHOW foo would have to be pretty smart to\nhandle the LIKE clauses and other bits. And how would tab completion\nbe handled in psql? I can't remember if I implemented stuff like,\nSHOW TABLES IN SCHEMA foo, but 'SHOW TABLES LIKE a%' was quite a hit\nto the people I have using the patch and I don't see how that'd be\napplicable with what has been proposed so far. The extra verbosity\nand tab completion-ability is important for the newbie, it's an\ninteractive environment that lets them learn and explore in a\nproactive manner. Having IN SCHEMA vs LIKE handled on the backend\nwould be a rather complex generic model, but in an ideal world, one\nthat I'd prefer if done right.\n\n> There are some safety and protection issues to be solved here\n> (probably only superusers should be allowed to modify the query\n> table, and we should restrict the form of the query to be a single\n> SELECT command) but I can't think of any showstoppers. Now that\n> we've abandoned backend autocommit there are no technical reasons\n> that SHOW shouldn't be allowed to run a SELECT query.\n> \n> I did not care for the original patch (which IIRC made psql\n> recognize \"show\" commands, rather than doing it in the backend) but\n> something like the above seems reasonably clean.\n\nWell, unless something is done very cleanly and abstractly, and can be\nextended to handle IN SCHEMA, LIKE, etc., I'm of the opinion that this\nkinda stuff belongs on the client side of things and _not_ in the\nbackend. Putting MySQL's brokenness in the backend seems, well, to\nperpetuate the brokenness and acknowledge that their brokenness isn't\nreally that broken: not a message I'm fond of. Putting MySQL's\nbrokenness in psql and hiding it behind the --mysql CLI option,\nhowever, is tolerably broken, IMHO and of the same use as ora2pg or\nmysql2pg.\n\nBruce wrote:\n> Where do we want to go with this? It is interesting that it maps\n> MySQL SHOW commands on top of our existing SHOW syntax in psql. The\n> patch doesn't look too big.\n>\n> Should this be applied? Sean, I know you have a newer patch the the\n> URL you posted isn't good anymore. This also contains GUC change.\n> Sean, would you show each one you added and we can discuss if any\n> are inappropriate.\n\nThe GUC change was just flushing out various GUC options that weren't\navailable as tab completion options... the usefulness of those GUCs\ncould be debated, but at least there's a complete list in the CLI\nnow... having this pulled from the server would be wise, IMHO, but I\ndidn't spend the time to add that at the time. If there's interest, I\ncan do that.\n\nI have the patch someplace, don't worry about that. 
The patch isn't\n100% correct as it stands: there is a problem in recognizing the end\nof a SHOW command so that the query buffer was considered non-empty\n(wasn't scanning properly for ;'s to emulate the end of a query:\nsomething that's not necessary with the normal \\ syntax). Other than\nthat, there are no bugs that I'm aware of and the work around is to\nissue a \\r: not an earth shattering problem given psql has a history.\nBefore I finished, I wanted to get some review/thoughts on the patch\nbefore I completed the work and sent it off only to have it rejected.\n\nI know Tom doesn't like the idea because it's not \"clean,\" but I can't\nhelp but feel that adding similar functionality to the backend is some\nhow contaminating PostgreSQL and acknowledges MySQL's done something\ncreditable, which is too big of an admission, especially since their\nproduct is pretty fundamentally flawed in terms of spec compliance.\nThe last thing we want is to have their brokenness turn into spec and\nfor that to become the norm. So, as life would have it, I'm an\nadvocate of adding the SHOW syntax to psql and an opponent of having it\nin the backend, the core developers would rather have the SHOW syntax\nin the backend because it's not clean to have it in the front end.\n*shrug*\n\npsql has to have a quasi-parser in the front end as is to handle tab\ncompletion, so I don't really buy the argument that it's bad to have a\nparser in the front end (sorry Tom). In fact, I actually think that\nthe parser in psql needs to be _beefed up_ and made _smarter_ that way\npsql's more user friendly, even if that means increasing the\ncomplexity of psql. At the moment, there are short falls with psql's\nability to perform tab completion of column names on aliased tables,\netc. Some of it's non-trivial and won't ever see the light of day in\npsql, but there is room for improvement. Anyway, usability is a\nproblem for clients, not for servers. MySQL usability belongs in the\nclient, not the backend.... and hidden behind a CLI flag at that\n'cause I don't available in my personal development environment.\n\nAnd before someone says, \"that's fine and dandy, who's going to do the\nwork on psql,\" I'll gladly work on psql and even go so far as to clean\nup the code now that someone's around to apply the patches. :)\n\n*listens and waits before updating the patch and fixing the above\n bugs* -sc\n\n-- \nSean Chittenden\n", "msg_date": "Sat, 31 May 2003 18:43:23 -0700", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Are we losing momentum?" }, { "msg_contents": "> It would also be interesting to combine this with Rod's idea of driving\n> describe-type queries by table instead of hardwired code. Imagine that\n> the backend's \"show foo\" command first looks for \"foo\" as a GUC\n> variable, as it does now, but upon failing to find one it looks in a\n> system table for a query associated with the name \"foo\". If it finds\n> such a query, it runs it and sends back the result. Now, not only can\n> we emulate \"show tables\", but people can easily add application-specific\n> \"show whatever\" commands, which seems tremendously cool.\n\nEasy enough to accomplish for the most part. 
I suppose the most\ndifficult part is whether we support arbitrary syntax?\n\nSHOW INDEXES ON TABLE <bleah>;\nSHOW TABLES;\nSHOW DATABASES;\nSHOW COLUMNS FROM <table>; <-- Long form of DESCRIBE <table>\n\nI believe all of the above is valid MySQL syntax, so we would need to be\nable to work with a list of colId, rather than a single one.\n\nThis does not help with tab completion or help for the above items. \nThough one could certainly argue help on available commands should come\nfrom the backend, and psql could be taught how to read the table to\ndetermine how to deal with tab completion.\n\nOne significant downside is that describe commands would require an\ninitdb when updated.\n\nOh, if it's the backend doing the work, the queries should probably be\nfunctions, not free-form queries. Simply makes it easier to inject\nvariables into the right place. In the initial patch, psql is using a\nprepared statement for this work with the side benefit that the plan is\ncached. An SQL function would accomplish the same thing.\n\n-- \nRod Taylor <[email protected]>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "31 May 2003 23:08:49 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Are we losing momentum?" }, { "msg_contents": "[ moving this thread to a more appropriate place ]\n\nSean Chittenden <[email protected]> writes:\n>> It would also be interesting to combine this with Rod's idea of\n>> driving describe-type queries by table instead of hardwired code.\n>> Imagine that the backend's \"show foo\" command first looks for \"foo\"\n>> as a GUC variable, as it does now, but upon failing to find one it\n>> looks in a system table for a query associated with the name \"foo\".\n>> If it finds such a query, it runs it and sends back the result.\n>> Now, not only can we emulate \"show tables\", but people can easily\n>> add application-specific \"show whatever\" commands, which seems\n>> tremendously cool.\n\n> I really like the ability to program in queries or syntaxes into the\n> backend, but as it stands, SHOW foo would have to be pretty smart to\n> handle the LIKE clauses and other bits.\n\nIt's certainly doable. I thought more about how to handle parameters\nand such, and came up with this sketch:\n\n1. We generalize the SHOW syntax to accept 1 or more identifiers (might\nas well allow strings too). The existing special cases like SHOW TIME\nZONE would be taken out of the grammar and checked for at runtime.\n\n2. The \"key\" field of the show_queries table is an array of one or more\nstrings that can be either keywords or parameter placeholders ($n).\nThere must be at least one keyword. Then SHOW matches a particular\ntable entry if there are the right number of words and all the keyword\nstrings match the corresponding words. The other words become the\nparameter values.\n\n3. The \"query\" field of the table is a SELECT possibly containing\nparameter references $n. This can be handled the same way as a\npreparable statement (we already have mechanisms for resolving the types\nof the parameters).\n\nWhile I haven't studied the MySQL manual to see what-all they allow,\nthis certainly seems sufficient to support \"SHOW TABLE foo\" and similar\nvariants. And the possibility of user-added extensions to the table\nseems really cool.\n\n> And how would tab completion be handled in psql?\n\nYou look at the table to see what can come after SHOW. 
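\n\nTo make the sketch above concrete, here is a minimal illustration of what\nthe catalog and a couple of seed rows might look like.  The table name\n\"show_queries\", its columns, and the sample entries are only assumptions\nfor the sake of example; nothing like this exists in the backend today:\n\n    -- hypothetical catalog: one row per SHOW template\n    -- \"key\" mixes literal keywords with $n parameter placeholders\n    CREATE TABLE show_queries (\n        key    text[] NOT NULL,  -- e.g. '{TABLES}' or '{COLUMNS,FROM,$1}'\n        query  text   NOT NULL   -- a single SELECT, run when the key matches\n    );\n\n    -- SHOW TABLES: list ordinary user tables\n    INSERT INTO show_queries VALUES ('{TABLES}',\n        'SELECT c.relname AS \"Table\"\n           FROM pg_catalog.pg_class c\n           JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace\n          WHERE c.relkind = ''r''\n            AND n.nspname NOT IN (''pg_catalog'', ''information_schema'')\n          ORDER BY 1');\n\n    -- SHOW COLUMNS FROM foo: the non-keyword word gets bound to $1\n    INSERT INTO show_queries VALUES ('{COLUMNS,FROM,$1}',\n        'SELECT a.attname AS \"Column\",\n                pg_catalog.format_type(a.atttypid, a.atttypmod) AS \"Type\"\n           FROM pg_catalog.pg_attribute a\n          WHERE a.attrelid = $1::regclass\n            AND a.attnum > 0 AND NOT a.attisdropped\n          ORDER BY a.attnum');\n\nThe SHOW code, not the client, would substitute the leftover words into the\n$n slots before running the stored SELECT, and completion only needs to scan\nthe key arrays.  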
We already have\ndatabase-driven completion, so this doesn't seem out of reach.\n\nOne thing that is doable with psql's current hard-wired approach, but\ndoesn't seem easy to do with this solution, is automatic localization\nof strings such as column headings. Rod had looked at that a little\nin his trial patch to convert psql's \\d stuff to table-driven form,\nbut AFAIR he didn't have a satisfactory answer.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 01 Jun 2003 12:03:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Table-driven SHOW (was Re: Are we losing momentum?)" }, { "msg_contents": "Tom Lane writes:\n\n> Now, not only can we emulate \"show tables\", but people can easily add\n> application-specific \"show whatever\" commands, which seems tremendously\n> cool.\n\nWe already have that. They're called views.\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Sun, 1 Jun 2003 23:52:45 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Are we losing momentum? " }, { "msg_contents": "> > Now, not only can we emulate \"show tables\", but people can easily\n> > add application-specific \"show whatever\" commands, which seems\n> > tremendously cool.\n> \n> We already have that. They're called views.\n\nUm, I'm interested in aiding in the conversion of users from MySQL to\nPostgreSQL (how ever it happens, I don't really care). Tom and Rod\nare interested in an extensible/programmable SHOW syntax (very cool\nand much more interesting than the hack I put together). Views solve\nnone of the above problems. -sc\n\n-- \nSean Chittenden\n", "msg_date": "Sun, 1 Jun 2003 14:57:27 -0700", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Are we losing momentum?" }, { "msg_contents": "> One thing that is doable with psql's current hard-wired approach, but\n> doesn't seem easy to do with this solution, is automatic localization\n> of strings such as column headings. Rod had looked at that a little\n> in his trial patch to convert psql's \\d stuff to table-driven form,\n> but AFAIR he didn't have a satisfactory answer.\n\nI've yet to come up with anything better. Best answer I've come up with\nis to make the column headings a separate query result set and let the\nback-end do the translations.\n\n-- \nRod Taylor <[email protected]>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "01 Jun 2003 20:02:45 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Table-driven SHOW (was Re: Are we losing momentum?)" }, { "msg_contents": "Sean Chittenden writes:\n\n> Um, I'm interested in aiding in the conversion of users from MySQL to\n> PostgreSQL (how ever it happens, I don't really care).\n\nYour approach to that reminds me of those\n\nalias dir='ls'\nalias md='mkdir'\n\nthings that Linux distributors once stuck (or still stick?) in the default\nprofile files, presumably to help conversion from DOS. It's pretty\npointless, because Linux is still very different from DOS, and once you\nwant to do something besides showing or changing directories, you will\nneed documentation and training. 
That is what the MySQL conversion\nprocess needs as well, otherwise you're not converting, you're emulating,\nand that is not a game you can win.\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Mon, 2 Jun 2003 21:07:11 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Are we losing momentum?" }, { "msg_contents": "> > Um, I'm interested in aiding in the conversion of users from MySQL\n> > to PostgreSQL (how ever it happens, I don't really care).\n> \n> Your approach to that reminds me of those\n> \n> alias dir='ls'\n> alias md='mkdir'\n> \n> things that Linux distributors once stuck (or still stick?) in the\n> default profile files, presumably to help conversion from DOS. It's\n> pretty pointless, because Linux is still very different from DOS,\n> and once you want to do something besides showing or changing\n> directories, you will need documentation and training.\n\nWell, interestingly enough, those commands work for getting people in\nthe door and to the point that they're able to learn more. The first\nstep to any kind of adult education or reeducation is to have concepts\nthat the people are familiar with (in this case MySQL) be translated\ninto the concepts of the area that they're trying to learn. If you'd\nhave read the original patch that I'd posted, you'd see that I'd done\nthat by adding a TIP section to the top of the response.\n\n*SNIP*\nSHOW COLUMNS FROM [tblname];\n\n TIP: In psql, \"SHOW COLUMNS FROM [tblname]\" is natively written as \\dt [tblname]\n\n[normal output from \\dt tblname]\n*END SNIP*\n\nThe point of my patch was to aid the conversion, not to gimp along\nb0rk3d habits from MySQL.\n\n> That is what the MySQL conversion process needs as well, otherwise\n> you're not converting, you're emulating, and that is not a game you\n> can win.\n\nEmulation within reason. dir->ls and md->mkdir worked for a handful\nof people that I've transitioned into the UNIX world from Win32 land,\nin fact, I have one friend from school who's been so successful that\nhe's converted from using Win32 on his desktop to using Linux, worked\nwith me on a job where we were hacking mod_perl on a site pushing in\nexcess of 80Mbps to 25M people a day, but still types dir to this day.\nNot bad for an aero student who graduated with a 4.0 in his major,\nexceedingly bright, adaptive, learns fast, etc. My point is that\nregardless of how bright the person or what the right invocation, aids\nlike these help get people in the door and if they like what they see\nonce they're through the door, they'll stay. Once people try and use\nPostgreSQL, they stay. When people try MySQL, they're left wanting or\nneeding more and are bound by the limits of the software... that's not\nreally the case with PostgreSQL.\n\nTo get people to try, you play the association or emulation game and\nit works. Ask anyone in adult education and they will say the same.\nThe Internet was an \"information super highway\" because that was\nsomething that people could grasp, regardless of how flawed it really\nis. Bandwidth is thought of as pipes in various sizes diameters, a\nmuch better analogy. To geeks, broadcast is explained as the same as\nradio and unicast as satellite TV. To communication majors, TCP is\ndescried as a letter that's been chopped up into a thousand numbered\npieces and sent to the other side of the US via the postal mail.\nLeche is milk to English speakers learning Spanish. 
One way or\nanother, you have to play the game of working within the understanding\nof the target audience, in this case, MySQL users who use a SHOW\nTABLES type syntax.\n\n-sc\n\n-- \nSean Chittenden\n", "msg_date": "Mon, 2 Jun 2003 12:37:55 -0700", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Are we losing momentum?" }, { "msg_contents": "\nWow, this is a good argument! I must admit I made 'ps -ef' work on BSD\nbecause I was just so used to it on mainframe Unix. There is that\n'fingers type without thinking' thing, and I think that is what he is\ntalking about.\n\nI wonder if we should just support SHOW TABLES or the most common ones.\n\nMaybe emulation is the wrong approach --- maybe we just need 'finger\nthinking' shortcuts.\n\n---------------------------------------------------------------------------\n\nSean Chittenden wrote:\n> > > Um, I'm interested in aiding in the conversion of users from MySQL\n> > > to PostgreSQL (how ever it happens, I don't really care).\n> > \n> > Your approach to that reminds me of those\n> > \n> > alias dir='ls'\n> > alias md='mkdir'\n> > \n> > things that Linux distributors once stuck (or still stick?) in the\n> > default profile files, presumably to help conversion from DOS. It's\n> > pretty pointless, because Linux is still very different from DOS,\n> > and once you want to do something besides showing or changing\n> > directories, you will need documentation and training.\n> \n> Well, interestingly enough, those commands work for getting people in\n> the door and to the point that they're able to learn more. The first\n> step to any kind of adult education or reeducation is to have concepts\n> that the people are familiar with (in this case MySQL) be translated\n> into the concepts of the area that they're trying to learn. If you'd\n> have read the original patch that I'd posted, you'd see that I'd done\n> that by adding a TIP section to the top of the response.\n> \n> *SNIP*\n> SHOW COLUMNS FROM [tblname];\n> \n> TIP: In psql, \"SHOW COLUMNS FROM [tblname]\" is natively written as \\dt [tblname]\n> \n> [normal output from \\dt tblname]\n> *END SNIP*\n> \n> The point of my patch was to aid the conversion, not to gimp along\n> b0rk3d habits from MySQL.\n> \n> > That is what the MySQL conversion process needs as well, otherwise\n> > you're not converting, you're emulating, and that is not a game you\n> > can win.\n> \n> Emulation within reason. dir->ls and md->mkdir worked for a handful\n> of people that I've transitioned into the UNIX world from Win32 land,\n> in fact, I have one friend from school who's been so successful that\n> he's converted from using Win32 on his desktop to using Linux, worked\n> with me on a job where we were hacking mod_perl on a site pushing in\n> excess of 80Mbps to 25M people a day, but still types dir to this day.\n> Not bad for an aero student who graduated with a 4.0 in his major,\n> exceedingly bright, adaptive, learns fast, etc. My point is that\n> regardless of how bright the person or what the right invocation, aids\n> like these help get people in the door and if they like what they see\n> once they're through the door, they'll stay. Once people try and use\n> PostgreSQL, they stay. When people try MySQL, they're left wanting or\n> needing more and are bound by the limits of the software... that's not\n> really the case with PostgreSQL.\n> \n> To get people to try, you play the association or emulation game and\n> it works. 
Ask anyone in adult education and they will say the same.\n> The Internet was an \"information super highway\" because that was\n> something that people could grasp, regardless of how flawed it really\n> is. Bandwidth is thought of as pipes in various sizes diameters, a\n> much better analogy. To geeks, broadcast is explained as the same as\n> radio and unicast as satellite TV. To communication majors, TCP is\n> descried as a letter that's been chopped up into a thousand numbered\n> pieces and sent to the other side of the US via the postal mail.\n> Leche is milk to English speakers learning Spanish. One way or\n> another, you have to play the game of working within the understanding\n> of the target audience, in this case, MySQL users who use a SHOW\n> TABLES type syntax.\n> \n> -sc\n> \n> -- \n> Sean Chittenden\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 2 Jun 2003 15:53:10 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Are we losing momentum?" }, { "msg_contents": "\nI assume we agreed against adding a MySQL mode --- just verifying.\n\n---------------------------------------------------------------------------\n\nSean Chittenden wrote:\n> > > That's a pretty reasonable thought. I work for a shop that sells\n> > > Postgres support, and even we install MySQL for the Q&D ticket\n> > > tracking system we recommend because we can't justify the cost to\n> > > port it to postgres. If the postgres support were there, we would\n> > > surely be using it.\n> > \n> > > How to fix such a situation, I'm not sure. \"MySQL Compatability\n> > > Mode,\" anyone? :-)\n> > \n> > What issues are creating a compatibility problem for you?\n> \n> I don't think these should be hacked into the backend/libpq, but I\n> think it'd be a huge win to hack in \"show *\" support into psql for\n> MySQL users so they can type:\n> \n> SHOW (databases|tables|views|functions|triggers|schemas);\n> \n> I have yet to meet a MySQL user who understands the concept of system\n> catalogs even though it's just the 'mysql' database (this irritates me\n> enough as is)... gah, f- it: mysql users be damned, I have three\n> developers that think that postgresql is too hard to use because they\n> can't remember \"\\d [table name]\" and I'm tired of hearing them bitch\n> when I push using PostgreSQL instead of MySQL. I have better things\n> to do with my time than convert their output to PostgreSQL. 
Here goes\n> nothing...\n> \n> I've tainted psql and added a MySQL command compatibility layer for\n> the family of SHOW commands (psql [-m | --mysql]).\n> \n> \n> The attached patch does a few things:\n> \n> 1) Implements quite a number of SHOW commands (AGGREGATES, CASTS,\n> CATALOGS, COLUMNS, COMMENTS, CONSTRAINTS, CONVERSIONS, DATABASES,\n> DOMAINS, FUNCTIONS, HELP, INDEX, LARGEOBJECTS, NAMES, OPERATORS,\n> PRIVILEGES, PROCESSLIST, SCHEMAS, SEQUENCES, SESSION, STATUS,\n> TABLES, TRANSACTION, TYPES, USERS, VARIABLES, VIEWS)\n> \n> SHOW thing\n> SHOW thing LIKE pattern\n> SHOW thing FROM pattern\n> SHOW HELP ON (topic || ALL);\n> etc.\n> \n> Some of these don't have \\ command eqiv's. :( I was tempted to add\n> them, but opted not to for now, but it'd certainly be a nice to\n> have.\n> \n> 2) Implements the necessary tab completion for the SHOW commands for\n> the tab happy newbies/folks out there. psql is more friendly than\n> mysql's CLI now in terms of tab completion for the show commands.\n> \n> 3) Few trailing whitespace characters were nuked\n> \n> 4) guc.c is now in sync with the list of available variables used for\n> tab completion\n> \n> \n> Few things to note:\n> \n> 1) SHOW INDEXES is the same as SHOW INDEX, I think MySQL is wrong in\n> this regard and that it should be INDEXES to be plural along with\n> the rest of the types, but INDEX is preserved for compatibility.\n> \n> 2) There are two bugs that I have yet to address\n> \n> 1) SHOW VARIABLES doesn't work, but \"SHOW [TAB][TAB]y\" does\n> 2) \"SHOW [variable_of_choice];\" doesn't work, but \"SHOW\n> [variable_of_choice]\\n;\" does work... not sure where this\n> problem is coming from\n> \n> 3) I think psql is more usable as a result of this more verbose\n> syntax, but it's not the prettiest thing on the planet (wrote a\n> small parser outside of the backend or libraries: I don't want to\n> get those dirty with MySQL's filth).\n> \n> 4) In an attempt to wean people over to PostgreSQL's syntax, I\n> included translation tips on how to use the psql equiv of the SHOW\n> commands. Going from SHOW foo to \\d foo is easy, going from \\d foo\n> to SHOW foo is hard and drives me nuts. This'll help userbase\n> retention of newbies/converts. :)\n> \n> 5) The MySQL mode is just a bounce layer that provides different\n> syntax wrapping exec_command() so it should provide little in the\n> way of maintenance headaches. Some of the SHOW commands, however,\n> don't have \\ couterparts, but once they do and that code is\n> centralized, this feature should come for zero cost.\n> \n> 6) As an administrator, I'd be interested in having an environment\n> variable that I could set that'd turn on MySQL mode for some of my\n> bozo users that way they don't complain if they forget the -m\n> switch. Thoughts?\n> \n> \n> I'll try and iron out the last of those two bugs/features, but at this\n> point, would like to see this patch get wider testing/feedback.\n> Comments, as always, are welcome.\n> \n> PostgreSQL_usability++\n> \n> -sc\n> \n> -- \n> Sean Chittenden\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 16 Aug 2003 19:22:47 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Are we losing momentum?" }, { "msg_contents": "Bruce Momjian wrote:\n\n> I assume we agreed against adding a MySQL mode --- just verifying.\n\nWe agreed that applications that need schema information are much better \noff using the schema views.\n\n\nJan\n\n> \n> ---------------------------------------------------------------------------\n> \n> Sean Chittenden wrote:\n>> > > That's a pretty reasonable thought. I work for a shop that sells\n>> > > Postgres support, and even we install MySQL for the Q&D ticket\n>> > > tracking system we recommend because we can't justify the cost to\n>> > > port it to postgres. If the postgres support were there, we would\n>> > > surely be using it.\n>> > \n>> > > How to fix such a situation, I'm not sure. \"MySQL Compatability\n>> > > Mode,\" anyone? :-)\n>> > \n>> > What issues are creating a compatibility problem for you?\n>> \n>> I don't think these should be hacked into the backend/libpq, but I\n>> think it'd be a huge win to hack in \"show *\" support into psql for\n>> MySQL users so they can type:\n>> \n>> SHOW (databases|tables|views|functions|triggers|schemas);\n>> \n>> I have yet to meet a MySQL user who understands the concept of system\n>> catalogs even though it's just the 'mysql' database (this irritates me\n>> enough as is)... gah, f- it: mysql users be damned, I have three\n>> developers that think that postgresql is too hard to use because they\n>> can't remember \"\\d [table name]\" and I'm tired of hearing them bitch\n>> when I push using PostgreSQL instead of MySQL. I have better things\n>> to do with my time than convert their output to PostgreSQL. Here goes\n>> nothing...\n>> \n>> I've tainted psql and added a MySQL command compatibility layer for\n>> the family of SHOW commands (psql [-m | --mysql]).\n>> \n>> \n>> The attached patch does a few things:\n>> \n>> 1) Implements quite a number of SHOW commands (AGGREGATES, CASTS,\n>> CATALOGS, COLUMNS, COMMENTS, CONSTRAINTS, CONVERSIONS, DATABASES,\n>> DOMAINS, FUNCTIONS, HELP, INDEX, LARGEOBJECTS, NAMES, OPERATORS,\n>> PRIVILEGES, PROCESSLIST, SCHEMAS, SEQUENCES, SESSION, STATUS,\n>> TABLES, TRANSACTION, TYPES, USERS, VARIABLES, VIEWS)\n>> \n>> SHOW thing\n>> SHOW thing LIKE pattern\n>> SHOW thing FROM pattern\n>> SHOW HELP ON (topic || ALL);\n>> etc.\n>> \n>> Some of these don't have \\ command eqiv's. :( I was tempted to add\n>> them, but opted not to for now, but it'd certainly be a nice to\n>> have.\n>> \n>> 2) Implements the necessary tab completion for the SHOW commands for\n>> the tab happy newbies/folks out there. psql is more friendly than\n>> mysql's CLI now in terms of tab completion for the show commands.\n>> \n>> 3) Few trailing whitespace characters were nuked\n>> \n>> 4) guc.c is now in sync with the list of available variables used for\n>> tab completion\n>> \n>> \n>> Few things to note:\n>> \n>> 1) SHOW INDEXES is the same as SHOW INDEX, I think MySQL is wrong in\n>> this regard and that it should be INDEXES to be plural along with\n>> the rest of the types, but INDEX is preserved for compatibility.\n>> \n>> 2) There are two bugs that I have yet to address\n>> \n>> 1) SHOW VARIABLES doesn't work, but \"SHOW [TAB][TAB]y\" does\n>> 2) \"SHOW [variable_of_choice];\" doesn't work, but \"SHOW\n>> [variable_of_choice]\\n;\" does work... 
not sure where this\n>> problem is coming from\n>> \n>> 3) I think psql is more usable as a result of this more verbose\n>> syntax, but it's not the prettiest thing on the planet (wrote a\n>> small parser outside of the backend or libraries: I don't want to\n>> get those dirty with MySQL's filth).\n>> \n>> 4) In an attempt to wean people over to PostgreSQL's syntax, I\n>> included translation tips on how to use the psql equiv of the SHOW\n>> commands. Going from SHOW foo to \\d foo is easy, going from \\d foo\n>> to SHOW foo is hard and drives me nuts. This'll help userbase\n>> retention of newbies/converts. :)\n>> \n>> 5) The MySQL mode is just a bounce layer that provides different\n>> syntax wrapping exec_command() so it should provide little in the\n>> way of maintenance headaches. Some of the SHOW commands, however,\n>> don't have \\ couterparts, but once they do and that code is\n>> centralized, this feature should come for zero cost.\n>> \n>> 6) As an administrator, I'd be interested in having an environment\n>> variable that I could set that'd turn on MySQL mode for some of my\n>> bozo users that way they don't complain if they forget the -m\n>> switch. Thoughts?\n>> \n>> \n>> I'll try and iron out the last of those two bugs/features, but at this\n>> point, would like to see this patch get wider testing/feedback.\n>> Comments, as always, are welcome.\n>> \n>> PostgreSQL_usability++\n>> \n>> -sc\n>> \n>> -- \n>> Sean Chittenden\n> \n> [ Attachment, skipping... ]\n> \n>> \n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 3: if posting/reading through Usenet, please send an appropriate\n>> subscribe-nomail command to [email protected] so that your\n>> message can get through to the mailing list cleanly\n> \n\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n", "msg_date": "Sat, 16 Aug 2003 22:41:39 -0400", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Are we losing momentum?" }, { "msg_contents": "\nPersonally, I think adding the SHOW commands would be a good thing ...\npsql is nice with its \\df to get information without having to learn all\nthe JOINs required ... having that ability easily from any of the\ninterfaces would definitely be a plus ... to me, its not about MySQL\ncompatibility, but about a small improvement to ease of use :)\n\nOn Sat, 16 Aug 2003, Bruce Momjian wrote:\n\n>\n> I assume we agreed against adding a MySQL mode --- just verifying.\n>\n> ---------------------------------------------------------------------------\n>\n> Sean Chittenden wrote:\n> > > > That's a pretty reasonable thought. I work for a shop that sells\n> > > > Postgres support, and even we install MySQL for the Q&D ticket\n> > > > tracking system we recommend because we can't justify the cost to\n> > > > port it to postgres. If the postgres support were there, we would\n> > > > surely be using it.\n> > >\n> > > > How to fix such a situation, I'm not sure. \"MySQL Compatability\n> > > > Mode,\" anyone? 
:-)\n> > >\n> > > What issues are creating a compatibility problem for you?\n> >\n> > I don't think these should be hacked into the backend/libpq, but I\n> > think it'd be a huge win to hack in \"show *\" support into psql for\n> > MySQL users so they can type:\n> >\n> > SHOW (databases|tables|views|functions|triggers|schemas);\n> >\n> > I have yet to meet a MySQL user who understands the concept of system\n> > catalogs even though it's just the 'mysql' database (this irritates me\n> > enough as is)... gah, f- it: mysql users be damned, I have three\n> > developers that think that postgresql is too hard to use because they\n> > can't remember \"\\d [table name]\" and I'm tired of hearing them bitch\n> > when I push using PostgreSQL instead of MySQL. I have better things\n> > to do with my time than convert their output to PostgreSQL. Here goes\n> > nothing...\n> >\n> > I've tainted psql and added a MySQL command compatibility layer for\n> > the family of SHOW commands (psql [-m | --mysql]).\n> >\n> >\n> > The attached patch does a few things:\n> >\n> > 1) Implements quite a number of SHOW commands (AGGREGATES, CASTS,\n> > CATALOGS, COLUMNS, COMMENTS, CONSTRAINTS, CONVERSIONS, DATABASES,\n> > DOMAINS, FUNCTIONS, HELP, INDEX, LARGEOBJECTS, NAMES, OPERATORS,\n> > PRIVILEGES, PROCESSLIST, SCHEMAS, SEQUENCES, SESSION, STATUS,\n> > TABLES, TRANSACTION, TYPES, USERS, VARIABLES, VIEWS)\n> >\n> > SHOW thing\n> > SHOW thing LIKE pattern\n> > SHOW thing FROM pattern\n> > SHOW HELP ON (topic || ALL);\n> > etc.\n> >\n> > Some of these don't have \\ command eqiv's. :( I was tempted to add\n> > them, but opted not to for now, but it'd certainly be a nice to\n> > have.\n> >\n> > 2) Implements the necessary tab completion for the SHOW commands for\n> > the tab happy newbies/folks out there. psql is more friendly than\n> > mysql's CLI now in terms of tab completion for the show commands.\n> >\n> > 3) Few trailing whitespace characters were nuked\n> >\n> > 4) guc.c is now in sync with the list of available variables used for\n> > tab completion\n> >\n> >\n> > Few things to note:\n> >\n> > 1) SHOW INDEXES is the same as SHOW INDEX, I think MySQL is wrong in\n> > this regard and that it should be INDEXES to be plural along with\n> > the rest of the types, but INDEX is preserved for compatibility.\n> >\n> > 2) There are two bugs that I have yet to address\n> >\n> > 1) SHOW VARIABLES doesn't work, but \"SHOW [TAB][TAB]y\" does\n> > 2) \"SHOW [variable_of_choice];\" doesn't work, but \"SHOW\n> > [variable_of_choice]\\n;\" does work... not sure where this\n> > problem is coming from\n> >\n> > 3) I think psql is more usable as a result of this more verbose\n> > syntax, but it's not the prettiest thing on the planet (wrote a\n> > small parser outside of the backend or libraries: I don't want to\n> > get those dirty with MySQL's filth).\n> >\n> > 4) In an attempt to wean people over to PostgreSQL's syntax, I\n> > included translation tips on how to use the psql equiv of the SHOW\n> > commands. Going from SHOW foo to \\d foo is easy, going from \\d foo\n> > to SHOW foo is hard and drives me nuts. This'll help userbase\n> > retention of newbies/converts. :)\n> >\n> > 5) The MySQL mode is just a bounce layer that provides different\n> > syntax wrapping exec_command() so it should provide little in the\n> > way of maintenance headaches. 
Some of the SHOW commands, however,\n> > don't have \\ couterparts, but once they do and that code is\n> > centralized, this feature should come for zero cost.\n> >\n> > 6) As an administrator, I'd be interested in having an environment\n> > variable that I could set that'd turn on MySQL mode for some of my\n> > bozo users that way they don't complain if they forget the -m\n> > switch. Thoughts?\n> >\n> >\n> > I'll try and iron out the last of those two bugs/features, but at this\n> > point, would like to see this patch get wider testing/feedback.\n> > Comments, as always, are welcome.\n> >\n> > PostgreSQL_usability++\n> >\n> > -sc\n> >\n> > --\n> > Sean Chittenden\n>\n> [ Attachment, skipping... ]\n>\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to [email protected] so that your\n> > message can get through to the mailing list cleanly\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n", "msg_date": "Sat, 16 Aug 2003 23:48:59 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Are we losing momentum?" }, { "msg_contents": "> Bruce Momjian wrote:\n> >I assume we agreed against adding a MySQL mode --- just verifying.\n> \n> We agreed that applications that need schema information are much better \n> off using the schema views.\n> \n> Jan\n\nHeh, I don't think there was any agreement on anything in that thread,\neveryone had their own view (no pun intended).\n\n> From: The Hermit Hacker <[email protected]>\n>\n> Personally, I think adding the SHOW commands would be a good thing\n> ... psql is nice with its \\df to get information without having to\n> learn all the JOINs required ... having that ability easily from any\n> of the interfaces would definitely be a plus ... to me, its not\n> about MySQL compatibility, but about a small improvement to ease of\n> use :)\n\nWhich goes back to the point about there being little agreement on\nthis patch or its issues. 
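\n\n(For reference, the \"just query the catalogs\" route that comes up below\nneeds nothing new on either side.  A minimal example, assuming a stock 7.4\ninformation_schema, that reports roughly what MySQL's SHOW TABLES does:\n\n    SELECT table_name\n      FROM information_schema.tables\n     WHERE table_type = 'BASE TABLE'\n       AND table_schema NOT IN ('pg_catalog', 'information_schema');\n\nThe sticking point is not that this is hard to write, only that newcomers\ndon't know to write it.)\n\n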
A handful of folks think it's a _user\ninterface_ issue (read: psql, phppgadmin, pgadminIII, etc) and would\nbe good for converting MySQL users to PostgreSQL (or simply because\nits easy and less obtuse than a \\ command), others thought it was a\nfugly hack to have a parser in the front end and that it should be\nhandled on the backend by extending SQL to conform to MySQL's\ninterface (that some argue is incorrect and would unjustly bloat the\nbackend) that way all clients have the SHOW syntax (thus averting a\npossible FAQ), and others took a more elitist mindset and simply\nthought that everyone should just select from the information schemas.\n\n*shrug* I tabled working on the patch until there was some kind of\nagreement from someone with commit privs and am waiting to pick up\nquashing the remaining parser state bug until after 7.4's out the door\nor there's renewed interest from non-users.\n\n-sc\n\n-- \nSean Chittenden\n", "msg_date": "Sat, 16 Aug 2003 22:48:16 -0700", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Are we losing momentum?" }, { "msg_contents": "Short summary:\n\n This patch allows ISO 8601 \"time intervals\" using the \"format \n with time-unit designators\" to specify postgresql \"intervals\".\n\n Below I have (A) What these time intervals are, (B) What I\n modified to support them, (C) Issues with intervals I want\n to bring up, and (D) a patch supporting them.\n\n It's helpful to me. Any feedback is appreciated. If you \n did want to consider including it, let me know what to clean \n up. If not, I thought I'd just put it here if anyone else finds\n it useful too.\n\n Thanks for your time,\n \n Ron Mayer\n\nLonger:\n\n(A) What these intervals are.\n\n ISO 8601, the standard from which PostgreSQL gets some of it's \n time syntax, also has a specification for \"time-intervals\".\n \n In particular, section 5.5.4.2 has a \"Representation of\n time-interval by duration only\" which I believe maps\n nicely to ISO intervals.\n\n Compared to the ISO 8601 time interval specification, the\n postgresql interval syntax is quite verbose. For example:\n\n Postgresql interval: ISO8601 Interval\n ---------------------------------------------------\n '1 year 6 months' 'P1Y6M'\n '3 hours 25 minutes 42 seconds' 'PT3H25M42S'\n\n Yeah, it's uglier, but it sure is short which can make\n for quicker typing and shorter scripts, and if for some\n strange reason you had an application using this format\n it's nice not to have to translate.\n\n The syntax is as follows:\n Basic extended format: PnYnMnDTnHnMnS\n PnW\n\n Where everything before the \"T\" is a date-part and everything\n after is a time-part. W is for weeks.\n In the date-part, Y=Year, M=Month, D=Day\n In the time-part, H=Hour, M=Minute, S=Second\n\n Much more info can be found from the draft standard\n ftp://ftp.qsl.net/pub/g1smd/154N362_.PDF\n The final standard's only available for $$$ so I didn't\n look at it. Some other sites imply that this part didn't\n change from the last draft to the standard.\n\n\n(B) This change was made by adding two functions to \"datetime.c\"\n next to where DecodeInterval parses the normal interval syntax.\n\n A total of 313 lines were added, including comments and sgml docs.\n Of these only 136 are actual code, the rest, comments, whitespace, etc.\n\n\n One new function \"DecodeISO8601Interval\" follows the style of\n \"DecodeInterval\" below it, and trys to strictly follow the ISO\n syntax. 
If it doesn't match, it'll return -1 and the old syntax\n will be checked as before.\n\n The first test (first character of the first field must be 'P', \n and second character must be 'T' or '\\0') should be fast so I don't\n think this will impact performance of existing code.\n\n\n The second function (\"adjust_fval\") is just a small helper-function\n to remove some of the cut&paste style that DecodeInterval used.\n\n It seems to work.\n =======================================================================\n betadb=# select 'P1M15DT12H30M7S'::interval;\n interval \n ------------------------\n 1 mon 15 days 12:30:07\n (1 row)\n\n betadb=# select '1 month 15 days 12 hours 30 minutes 7 seconds'::interval;\n\t interval \n ------------------------\n 1 mon 15 days 12:30:07\n (1 row)\n =====================================================================\n\n\n\n(C) Open issues with intervals, and questions I'd like to ask.\n\n 1. DecodeInterval seems to have a hardcoded '.' for specifying\n fractional times. ISO 8601 states that both '.' and ',' are\n ok, but \"of these, the comma is the preferred sign\".\n\n In DecodeISO8601Interval I loosened the test to allow\n both but left it as it was in DecodeInterval. Should\n both be changed to make them more consistant?\n\n 2. In \"DecodeInterval\", fractional weeks and fractional months\n can produce seconds; but fractional years can not (rounded\n to months). I didn't understand the reasoning for this, so\n I left it the same, and followed the same convention for\n ISO intervals. Should I change this?\n\n 3. I could save a bunch of copy-paste-lines-of-code from the\n pre-existing DecodeInterval by calling the adjust_fval helper\n function. The tradeoff is a few extra function-calls when\n decoding an interval. However I didn't want to risk changes\n to the existing part unless you guys encourage me to do so.\n\n\n(D) The patch.\n\n\nIndex: doc/src/sgml/datatype.sgml\n===================================================================\nRCS file: /projects/cvsroot/pgsql-server/doc/src/sgml/datatype.sgml,v\nretrieving revision 1.123\ndiff -u -1 -0 -r1.123 datatype.sgml\n--- doc/src/sgml/datatype.sgml\t31 Aug 2003 17:32:18 -0000\t1.123\n+++ doc/src/sgml/datatype.sgml\t8 Sep 2003 04:04:58 -0000\n@@ -1735,20 +1735,71 @@\n Quantities of days, hours, minutes, and seconds can be specified without\n explicit unit markings. 
For example, <literal>'1 12:59:10'</> is read\n the same as <literal>'1 day 12 hours 59 min 10 sec'</>.\n </para>\n \n <para>\n The optional precision\n <replaceable>p</replaceable> should be between 0 and 6, and\n defaults to the precision of the input literal.\n </para>\n+\n+\n+ <para>\n+ Alternatively, <type>interval</type> values can be written as \n+ ISO 8601 time intervals, using the \"Format with time-unit designators\".\n+ This format always starts with the character <literal>'P'</>, followed \n+ by a string of values followed by single character time-unit designators.\n+ A <literal>'T'</> separates the date and time parts of the interval.\n+ </para>\n+\n+ <para>\n+ Format: PnYnMnDTnHnMnS\n+ </para>\n+ <para>\n+ In this format, <literal>'n'</> gets replaced by a number, and \n+ <literal>Y</> represents years, \n+ <literal>M</> (in the date part) months,\n+ <literal>D</> months,\n+ <literal>H</> hours,\n+ <literal>M</> (in the time part) minutes,\n+ and <literal>S</> seconds.\n+ </para>\n+ \n+\n+ <table id=\"interval-example-table\">\n+\t <title>Interval Example</title>\n+\t <tgroup cols=\"2\">\n+\t\t<thead>\n+\t\t <row>\n+\t\t <entry>Traditional</entry>\n+\t\t <entry>ISO-8601 time-interval</entry>\n+\t\t </row>\n+\t\t</thead>\n+\t\t<tbody>\n+\t\t <row>\n+\t\t <entry>1 month</entry>\n+\t\t <entry>P1M</entry>\n+\t\t </row>\n+\t\t <row>\n+\t\t <entry>1 hour 30 minutes</entry>\n+\t\t <entry>PT1H30M</entry>\n+\t\t </row>\n+\t\t <row>\n+\t\t <entry>2 years 10 months 15 days 10 hours 30 minutes 20 seconds</entry>\n+\t\t <entry>P2Y10M15DT10H30M20S</entry>\n+\t\t </row>\n+\t\t</tbody>\n+\t </thead>\n+\t </table>\n+\t \n+ </para>\n </sect3>\n \n <sect3>\n <title>Special Values</title>\n \n <indexterm>\n <primary>time</primary>\n <secondary>constants</secondary>\n </indexterm>\n \nIndex: src/backend/utils/adt/datetime.c\n===================================================================\nRCS file: /projects/cvsroot/pgsql-server/src/backend/utils/adt/datetime.c,v\nretrieving revision 1.116\ndiff -u -1 -0 -r1.116 datetime.c\n--- src/backend/utils/adt/datetime.c\t27 Aug 2003 23:29:28 -0000\t1.116\n+++ src/backend/utils/adt/datetime.c\t8 Sep 2003 04:04:59 -0000\n@@ -30,20 +30,21 @@\n \t\t\t struct tm * tm, fsec_t *fsec, int *is2digits);\n static int DecodeNumberField(int len, char *str,\n \t\t\t\t int fmask, int *tmask,\n \t\t\t\t struct tm * tm, fsec_t *fsec, int *is2digits);\n static int DecodeTime(char *str, int fmask, int *tmask,\n \t\t struct tm * tm, fsec_t *fsec);\n static int\tDecodeTimezone(char *str, int *tzp);\n static datetkn *datebsearch(char *key, datetkn *base, unsigned int nel);\n static int\tDecodeDate(char *str, int fmask, int *tmask, struct tm * tm);\n static void TrimTrailingZeros(char *str);\n+static int DecodeISO8601Interval(char **field, int *ftype, int nf, int *dtype, struct tm * tm, fsec_t *fsec);\n \n \n int\t\t\tday_tab[2][13] = {\n \t{31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31, 0},\n {31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31, 0}};\n \n char\t *months[] = {\"Jan\", \"Feb\", \"Mar\", \"Apr\", \"May\", \"Jun\",\n \"Jul\", \"Aug\", \"Sep\", \"Oct\", \"Nov\", \"Dec\", NULL};\n \n char\t *days[] = {\"Sunday\", \"Monday\", \"Tuesday\", \"Wednesday\",\n@@ -2872,30 +2873,271 @@\n \t\t\tdefault:\n \t\t\t\t*val = tp->value;\n \t\t\t\tbreak;\n \t\t}\n \t}\n \n \treturn type;\n }\n \n \n+void adjust_fval(double fval,struct tm * tm, fsec_t *fsec, int scale);\n+{\n+\tint\tsec;\n+\tfval\t *= scale;\n+\tsec\t\t = fval;\n+\ttm->tm_sec += sec;\n+#ifdef 
HAVE_INT64_TIMESTAMP\n+\t*fsec\t += ((fval - sec) * 1000000);\n+#else\n+\t*fsec\t += (fval - sec);\n+#endif\n+}\n+\n+\n+/* DecodeISO8601Interval()\n+ *\n+ * Check if it's a ISO 8601 Section 5.5.4.2 \"Representation of\n+ * time-interval by duration only.\" \n+ * Basic extended format: PnYnMnDTnHnMnS\n+ * PnW\n+ * For more info.\n+ * http://www.astroclark.freeserve.co.uk/iso8601/index.html\n+ * ftp://ftp.qsl.net/pub/g1smd/154N362_.PDF\n+ *\n+ * Examples: P1D for 1 day\n+ * PT1H for 1 hour\n+ * P2Y6M7DT1H30M for 2 years, 6 months, 7 days 1 hour 30 min\n+ *\n+ * The first field is exactly \"p\" or \"pt\" it may be of this type.\n+ *\n+ * Returns -1 if the field is not of this type.\n+ *\n+ * It pretty strictly checks the spec, with the two exceptions\n+ * that a week field ('W') may coexist with other units, and that\n+ * this function allows decimals in fields other than the least\n+ * significant units.\n+ */\n+int\n+DecodeISO8601Interval(char **field, int *ftype, int nf, int *dtype, struct tm * tm, fsec_t *fsec) \n+{\n+\tchar\t *cp;\n+\tint\t\t\tfmask = 0,\n+\t\t\t\ttmask;\n+\tint\t\t\tval;\n+\tdouble\t\tfval;\n+\tint\t\t\targ;\n+\tint\t\t\tdatepart;\n+\n+ /*\n+\t * An ISO 8601 \"time-interval by duration only\" must start\n+\t * with a 'P'. If it contains a date-part, 'p' will be the\n+\t * only character in the field. If it contains no date part\n+\t * it will contain exactly to characters 'PT' indicating a\n+\t * time part.\n+\t * Anything else is illegal and will be treated like a \n+\t * traditional postgresql interval.\n+\t */\n+ if (!(field[0][0] == 'p' &&\n+ ((field[0][1] == 0) || (field[0][1] == 't' && field[0][2] == 0))))\n+\t{\n+\t return -1;\n+\t}\n+\n+\n+ /*\n+\t * If the first field is exactly 1 character ('P'), it starts\n+\t * with date elements. Otherwise it's two characters ('PT');\n+\t * indicating it starts with a time part.\n+\t */\n+\tdatepart = (field[0][1] == 0);\n+\n+\t/*\n+\t * Every value must have a unit, so we require an even\n+\t * number of value/unit pairs. Therefore we require an\n+\t * odd nubmer of fields, including the prefix 'P'.\n+\t */\n+\tif ((nf & 1) == 0)\n+\t\treturn -1;\n+\n+\t/*\n+\t * Process pairs of fields at a time.\n+\t */\n+\tfor (arg = 1 ; arg < nf ; arg+=2) \n+\t{\n+\t\tchar * value = field[arg ];\n+\t\tchar * units = field[arg+1];\n+\n+\t\t/*\n+\t\t * The value part must be a number.\n+\t\t */\n+\t\tif (ftype[arg] != DTK_NUMBER) \n+\t\t\treturn -1;\n+\n+\t\t/*\n+\t\t * extract the number, almost exactly like the non-ISO interval.\n+\t\t */\n+\t\tval = strtol(value, &cp, 10);\n+\n+\t\t/*\n+\t\t * One difference from the normal postgresql interval below...\n+\t\t * ISO 8601 states that \"Of these, the comma is the preferred \n+\t\t * sign\" so I allow it here for locales that support it.\n+\t\t * Note: Perhaps the old-style interval code below should\n+\t\t * allow for this too, but I didn't want to risk backward\n+\t\t * compatability.\n+\t\t */\n+\t\tif (*cp == '.' || *cp == ',') \n+\t\t{\n+\t\t\tfval = strtod(cp, &cp);\n+\t\t\tif (*cp != '\\0')\n+\t\t\t\treturn -1;\n+\n+\t\t\tif (val < 0)\n+\t\t\t\tfval = -(fval);\n+\t\t}\n+\t\telse if (*cp == '\\0')\n+\t\t\tfval = 0;\n+\t\telse\n+\t\t\treturn -1;\n+\n+\n+\t\tif (datepart)\n+\t\t{\n+\t\t\t/*\n+\t\t\t * All the 8601 unit specifiers are 1 character, but may\n+\t\t\t * be followed by a 'T' character if transitioning between\n+\t\t\t * the date part and the time part. 
If it's not either\n+\t\t\t * one character or two characters with the second being 't'\n+\t\t\t * it's an error.\n+\t\t\t */\n+\t\t\tif (!(units[1] == 0 || (units[1] == 't' && units[2] == 0)))\n+\t\t\t\treturn -1;\n+\n+\t\t\tif (units[1] == 't')\n+\t\t\t\tdatepart = 0;\n+\n+\t\t\tswitch (units[0]) /* Y M D W */\n+\t\t\t{\n+\t\t\t\tcase 'd':\n+\t\t\t\t\ttm->tm_mday += val;\n+\t\t\t\t\tif (fval != 0)\n+\t\t\t\t\t adjust_fval(fval,tm,fsec, 86400);\n+\t\t\t\t\ttmask = ((fmask & DTK_M(DAY)) ? 0 : DTK_M(DAY));\n+\t\t\t\t\tbreak;\n+\n+\t\t\t\tcase 'w':\n+\t\t\t\t\ttm->tm_mday += val * 7;\n+\t\t\t\t\tif (fval != 0)\n+\t\t\t\t\t adjust_fval(fval,tm,fsec,7 * 86400);\n+\t\t\t\t\ttmask = ((fmask & DTK_M(DAY)) ? 0 : DTK_M(DAY));\n+\t\t\t\t\tbreak;\n+\n+\t\t\t\tcase 'm':\n+\t\t\t\t\ttm->tm_mon += val;\n+\t\t\t\t\tif (fval != 0)\n+\t\t\t\t\t adjust_fval(fval,tm,fsec,30 * 86400);\n+\t\t\t\t\ttmask = DTK_M(MONTH);\n+\t\t\t\t\tbreak;\n+\n+\t\t\t\tcase 'y':\n+\t\t\t\t\t/*\n+\t\t\t\t\t * Why can fractional months produce seconds,\n+\t\t\t\t\t * but fractional years can't? Well the older\n+\t\t\t\t\t * interval code below has the same property\n+\t\t\t\t\t * so this one follows the other one too.\n+\t\t\t\t\t */\n+\t\t\t\t\ttm->tm_year += val;\n+\t\t\t\t\tif (fval != 0)\n+\t\t\t\t\t\ttm->tm_mon += (fval * 12);\n+\t\t\t\t\ttmask = ((fmask & DTK_M(YEAR)) ? 0 : DTK_M(YEAR));\n+\t\t\t\t\tbreak;\n+\n+\t\t\t\tdefault:\n+\t\t\t\t\treturn -1; /* invald date unit prefix */\n+\t\t\t}\n+\t\t}\n+\t\telse\n+\t\t{\n+\t\t\t/*\n+\t\t\t * ISO 8601 time part.\n+\t\t\t * In the time part, only one-character\n+\t\t\t * unit prefixes are allowed. If it's more\n+\t\t\t * than one character, it's not a valid ISO 8601\n+\t\t\t * time interval by duration.\n+\t\t\t */\n+\t\t\tif (units[1] != 0)\n+\t\t\t\treturn -1;\n+\n+\t\t\tswitch (units[0]) /* H M S */\n+\t\t\t{\n+\t\t\t\tcase 's':\n+\t\t\t\t\ttm->tm_sec += val;\n+#ifdef HAVE_INT64_TIMESTAMP\n+\t\t\t\t\t*fsec += (fval * 1000000);\n+#else\n+\t\t\t\t\t*fsec += fval;\n+#endif\n+\t\t\t\t\ttmask = DTK_M(SECOND);\n+\t\t\t\t\tbreak;\n+\n+\t\t\t\tcase 'm':\n+\t\t\t\t\ttm->tm_min += val;\n+\t\t\t\t\tif (fval != 0)\n+\t\t\t\t\t adjust_fval(fval,tm,fsec,60);\n+\t\t\t\t\ttmask = DTK_M(MINUTE);\n+\t\t\t\t\tbreak;\n+\n+\t\t\t\tcase 'h':\n+\t\t\t\t\ttm->tm_hour += val;\n+\t\t\t\t\tif (fval != 0)\n+\t\t\t\t\t adjust_fval(fval,tm,fsec,3600);\n+\t\t\t\t\ttmask = DTK_M(HOUR);\n+\t\t\t\t\tbreak;\n+\n+\t\t\t\tdefault:\n+\t\t\t\t\treturn -1; /* invald time unit prefix */\n+\t\t\t}\n+\t\t}\n+\t\tfmask |= tmask;\n+\t}\n+\n+\tif (*fsec != 0)\n+\t{\n+\t\tint\t\t\tsec;\n+\n+#ifdef HAVE_INT64_TIMESTAMP\n+\t\tsec = (*fsec / INT64CONST(1000000));\n+\t\t*fsec -= (sec * INT64CONST(1000000));\n+#else\n+\t\tTMODULO(*fsec, sec, 1e0);\n+#endif\n+\t\ttm->tm_sec += sec;\n+\t}\n+\treturn (fmask != 0) ? 0 : -1;\n+}\n+\n+\n /* DecodeInterval()\n * Interpret previously parsed fields for general time interval.\n * Returns 0 if successful, DTERR code if bogus input detected.\n *\n * Allow \"date\" field DTK_DATE since this could be just\n *\tan unsigned floating point number. - thomas 1997-11-16\n *\n * Allow ISO-style time span, with implicit units on number of days\n *\tpreceding an hh:mm:ss field. 
- thomas 1998-04-30\n+ * \n+ * Allow ISO-8601 style \"Representation of time-interval by duration only\"\n+ * of the format 'PnYnMnDTnHnMnS' and 'PnW' - ron 2003-08-30\n */\n+\n int\n DecodeInterval(char **field, int *ftype, int nf, int *dtype, struct tm * tm, fsec_t *fsec)\n {\n \tint\t\t\tis_before = FALSE;\n \tchar\t *cp;\n \tint\t\t\tfmask = 0,\n \t\t\t\ttmask,\n \t\t\t\ttype;\n \tint\t\t\ti;\n \tint\t\t\tdterr;\n@@ -2906,20 +3148,37 @@\n \n \ttype = IGNORE_DTF;\n \ttm->tm_year = 0;\n \ttm->tm_mon = 0;\n \ttm->tm_mday = 0;\n \ttm->tm_hour = 0;\n \ttm->tm_min = 0;\n \ttm->tm_sec = 0;\n \t*fsec = 0;\n \n+\t/*\n+\t * Check if it's a ISO 8601 Section 5.5.4.2 \"Representation of\n+ * time-interval by duration only.\" \n+\t * Basic extended format: PnYnMnDTnHnMnS\n+\t * PnW\n+\t * http://www.astroclark.freeserve.co.uk/iso8601/index.html\n+\t * ftp://ftp.qsl.net/pub/g1smd/154N362_.PDF\n+\t * Examples: P1D for 1 day\n+\t * PT1H for 1 hour\n+\t * P2Y6M7DT1H30M for 2 years, 6 months, 7 days 1 hour 30 min\n+\t *\n+\t * The first field is exactly \"p\" or \"pt\" it may be of this type.\n+\t */\n+\tif (DecodeISO8601Interval(field,ftype,nf,dtype,tm,fsec) == 0) {\n+\t return 0;\n+ }\n+\n \t/* read through list backwards to pick up units before values */\n \tfor (i = nf - 1; i >= 0; i--)\n \t{\n \t\tswitch (ftype[i])\n \t\t{\n \t\t\tcase DTK_TIME:\n \t\t\t\tdterr = DecodeTime(field[i], fmask, &tmask, tm, fsec);\n \t\t\t\tif (dterr)\n \t\t\t\t\treturn dterr;\n \t\t\t\ttype = DTK_DAY;\n@@ -2983,20 +3242,21 @@\n \t\t\t\t}\n \t\t\t\t/* DROP THROUGH */\n \n \t\t\tcase DTK_DATE:\n \t\t\tcase DTK_NUMBER:\n \t\t\t\tval = strtol(field[i], &cp, 10);\n \n \t\t\t\tif (type == IGNORE_DTF)\n \t\t\t\t\ttype = DTK_SECOND;\n \n+\t\t\t\t/* should this allow ',' for locales that use it ? */\n \t\t\t\tif (*cp == '.')\n \t\t\t\t{\n \t\t\t\t\tfval = strtod(cp, &cp);\n \t\t\t\t\tif (*cp != '\\0')\n \t\t\t\t\t\treturn DTERR_BAD_FORMAT;\n \n \t\t\t\t\tif (val < 0)\n \t\t\t\t\t\tfval = -(fval);\n \t\t\t\t}\n \t\t\t\telse if (*cp == '\\0')\n\n===================================================================\n", "msg_date": "Sun, 7 Sep 2003 21:50:49 -0700", "msg_from": "\"Ron Mayer\" <[email protected]>", "msg_from_op": false, "msg_subject": "ISO 8601 \"Time Intervals\" of the \"format with time-unit deignators\"" }, { "msg_contents": "\"Ron Mayer\" <[email protected]> writes:\n> Compared to the ISO 8601 time interval specification, the\n> postgresql interval syntax is quite verbose. For example:\n\n> Postgresql interval: ISO8601 Interval\n> ---------------------------------------------------\n> '1 year 6 months' 'P1Y6M'\n> '3 hours 25 minutes 42 seconds' 'PT3H25M42S'\n\nEr, don't we support that already? I know I saw code to support\nsomething much like that syntax last time I looked into the datetime\nroutines.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 08 Sep 2003 01:47:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ISO 8601 \"Time Intervals\" of the \"format with time-unit\n\tdeignators\"" }, { "msg_contents": "\nIs there a way of producing as well as reading this format? 
Or did I miss\nsomething?\n\ncheers\n\nandrew\n\nRon Mayer said:\n> Short summary:\n>\n> This patch allows ISO 8601 \"time intervals\" using the \"format\n> with time-unit designators\" to specify postgresql \"intervals\".\n>\n> Below I have (A) What these time intervals are, (B) What I\n> modified to support them, (C) Issues with intervals I want\n> to bring up, and (D) a patch supporting them.\n>\n> It's helpful to me. Any feedback is appreciated. If you\n> did want to consider including it, let me know what to clean\n> up. If not, I thought I'd just put it here if anyone else finds it\n> useful too.\n>\n> Thanks for your time,\n>\n> Ron Mayer\n>\n> Longer:\n>\n> (A) What these intervals are.\n>\n> ISO 8601, the standard from which PostgreSQL gets some of it's time\n> syntax, also has a specification for \"time-intervals\".\n>\n> In particular, section 5.5.4.2 has a \"Representation of\n> time-interval by duration only\" which I believe maps\n> nicely to ISO intervals.\n>\n> Compared to the ISO 8601 time interval specification, the\n> postgresql interval syntax is quite verbose. For example:\n>\n> Postgresql interval: ISO8601 Interval\n> ---------------------------------------------------\n> '1 year 6 months' 'P1Y6M'\n> '3 hours 25 minutes 42 seconds' 'PT3H25M42S'\n>\n> Yeah, it's uglier, but it sure is short which can make\n> for quicker typing and shorter scripts, and if for some\n> strange reason you had an application using this format\n> it's nice not to have to translate.\n>\n> The syntax is as follows:\n> Basic extended format: PnYnMnDTnHnMnS\n> PnW\n>\n> Where everything before the \"T\" is a date-part and everything\n> after is a time-part. W is for weeks.\n> In the date-part, Y=Year, M=Month, D=Day\n> In the time-part, H=Hour, M=Minute, S=Second\n>\n> Much more info can be found from the draft standard\n> ftp://ftp.qsl.net/pub/g1smd/154N362_.PDF\n> The final standard's only available for $$$ so I didn't\n> look at it. Some other sites imply that this part didn't\n> change from the last draft to the standard.\n>\n>\n> (B) This change was made by adding two functions to \"datetime.c\"\n> next to where DecodeInterval parses the normal interval syntax.\n>\n> A total of 313 lines were added, including comments and sgml docs.\n> Of these only 136 are actual code, the rest, comments, whitespace,\n> etc.\n>\n>\n> One new function \"DecodeISO8601Interval\" follows the style of\n> \"DecodeInterval\" below it, and trys to strictly follow the ISO\n> syntax. If it doesn't match, it'll return -1 and the old syntax\n> will be checked as before.\n>\n> The first test (first character of the first field must be 'P', and\n> second character must be 'T' or '\\0') should be fast so I don't\n> think this will impact performance of existing code.\n>\n>\n> The second function (\"adjust_fval\") is just a small helper-function\n> to remove some of the cut&paste style that DecodeInterval used.\n>\n> It seems to work.\n>\n=======================================================================\n> betadb=# select 'P1M15DT12H30M7S'::interval;\n> interval\n> ------------------------\n> 1 mon 15 days 12:30:07\n> (1 row)\n>\n> betadb=# select '1 month 15 days 12 hours 30 minutes 7\n> seconds'::interval;\n> \t interval\n> ------------------------\n> 1 mon 15 days 12:30:07\n> (1 row)\n> =====================================================================\n>\n>\n>\n> (C) Open issues with intervals, and questions I'd like to ask.\n>\n> 1. DecodeInterval seems to have a hardcoded '.' 
for specifying\n> fractional times. ISO 8601 states that both '.' and ',' are ok,\n> but \"of these, the comma is the preferred sign\".\n>\n> In DecodeISO8601Interval I loosened the test to allow\n> both but left it as it was in DecodeInterval. Should\n> both be changed to make them more consistant?\n>\n> 2. In \"DecodeInterval\", fractional weeks and fractional months\n> can produce seconds; but fractional years can not (rounded to\n> months). I didn't understand the reasoning for this, so I left\n> it the same, and followed the same convention for\n> ISO intervals. Should I change this?\n>\n> 3. I could save a bunch of copy-paste-lines-of-code from the\n> pre-existing DecodeInterval by calling the adjust_fval helper\n> function. The tradeoff is a few extra function-calls when\n> decoding an interval. However I didn't want to risk changes to\n> the existing part unless you guys encourage me to do so.\n>\n>\n> (D) The patch.\n>\n>\n[snip]\n\n\n", "msg_date": "Mon, 8 Sep 2003 02:32:17 -0400 (EDT)", "msg_from": "<[email protected]>", "msg_from_op": false, "msg_subject": "Re: ISO 8601 'Time Intervals' of the 'format with time-unit\n\tdeignators'" }, { "msg_contents": "\nTom wrote:\n> \"Ron Mayer\" <[email protected]> writes:\n> > Compared to the ISO 8601 time interval specification, the\n> > postgresql interval syntax is quite verbose. For example:\n> \n> > Postgresql interval: ISO8601 Interval\n> > ---------------------------------------------------\n> > '1 year 6 months' 'P1Y6M'\n> > '3 hours 25 minutes 42 seconds' 'PT3H25M42S'\n> \n> Er, don't we support that already? I know I saw code to support\n> something much like that syntax last time I looked into the datetime\n> routines.\n> \n\nNope.\n\nPostgresql supports a rather bizzare shorthand that has a similar\nsyntax, but AFAICT, doesn't match ISO 8601 in any way that makes \nit practical.\n\n\nA disclaimer, I have the \"Final Draft\" (ISO/TC 154N 362 \nof 2000-12-19) of the spec; but have not seen the official,\nexpensive, version. \nftp://ftp.qsl.net/pub/g1smd/154N362_.PDF\n\n\nFor example, if I read it right, I have differences\nlike this:\n\n Interval ISO Postgres\n 8601 shorthand\n -----------------------------------------------------\n '1 year 1 minute' 'P1YT1M' '1Y1M'\n '1 year 1 month' 'P1Y1M' N/A\n\nThe best part about the postgresql syntax is that\nthey omit the required 'P', so it's easy to differentiate\nbetween the two. :-)\n\n\nPerhaps one could argue that the postgres shorthand should \nfollow the ISO conventions, but I'd not want to break backward\ncompatability, incase someone out there is using '1H30M' and\nexpecting minutes instead of months. If we didn't want to\nsupport two syntaxes, I wouldn't mind eventually depricating\nthe less-standard one.\n\n Ron\n\n\n", "msg_date": "Mon, 8 Sep 2003 11:59:50 -0700", "msg_from": "\"Ron Mayer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ISO 8601 \"Time Intervals\" of the \"format with time-unit\n\tdeignators\"" }, { "msg_contents": "[email protected] wrote:\n> \n> Is there a way of producing as well as reading this format? Or did I miss\n> something?\n\nNot yet, but I'd be happy to add it. \n\nMy immediate problem was having some 'P1Y6M' intervals to load. 
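(To make that concrete, a minimal sketch of that use case -- this assumes
the patch from my original post is applied, and the output shown is what
the mapping table there implies rather than a freshly re-run session:)

    betadb=# select 'P1Y6M'::interval;
       interval
    ---------------
     1 year 6 mons
    (1 row)
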
\nI posted this much largely because it was useful to me so might \nhelp others, and to see if it was of interest to others and \nget feedback on what else to change.\n\nI'd be happy to make it produce the output, and have some style\nquestions for doing so.\n\nI'd hate to trigger this output on the already-existing 'datestyle' \nof 'ISO', since that would break backward compatability.\nI do notice that 8601 has both \"basic\" and \"extended\" formats.\nThe \"basic format\" is more terse ('19980115' instead of '1998-01-15').\n\nWould it be useful if I added a 'datestyle' of 'ISO basic' which\nwould produce the most terse formats ('19980115' for dates, \nand 'P1Y1M' for intervals)?\n\n Ron\n\nPS: What's the best inexpenive way for me to know if this changed \nat all between the final draft and the published standard? \n\n> Ron Mayer said:\n> > This patch allows ISO 8601 \"time intervals\" using the \"format\n> > with time-unit designators\" to specify postgresql \"intervals\".\n> >...\n> > Much more info can be found from the draft standard\n> > ftp://ftp.qsl.net/pub/g1smd/154N362_.PDF\n> > The final standard's only available for $$$ so I didn't\n> > look at it.\n\n", "msg_date": "Mon, 8 Sep 2003 12:16:50 -0700", "msg_from": "\"Ron Mayer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ISO 8601 'Time Intervals' of the 'format with time-unit\n\tdeignators'" }, { "msg_contents": "\"Ron Mayer\" <[email protected]> writes:\n> Tom wrote:\n>> Er, don't we support that already?\n\n> Postgresql supports a rather bizzare shorthand that has a similar\n> syntax, but AFAICT, doesn't match ISO 8601 in any way that makes \n> it practical.\n\nWell, it's *supposed* to match ISO, AFAICT (the comments in the code\ntalk about \"ISO dates\"). Unless ISO has put out multiple specs that\ncover this?\n\n> Perhaps one could argue that the postgres shorthand should \n> follow the ISO conventions, but I'd not want to break backward\n> compatability, incase someone out there is using '1H30M' and\n> expecting minutes instead of months.\n\nI doubt anyone is using it, because it's completely undocumented.\nIf we're going to support the real ISO spec, I'd suggest ripping\nout any not-quite-there variant. (Especially so noting that your\ncode seems a lot cleaner than the ptype stuff.)\n\nThe datetime code is kind of a mess right now, because Thomas Lockhart\nwalked away from the project while only partway through some significant\nadditions. He left some incomplete features and quite a number of bugs\nin new-and-untested code. We've been gradually cleaning up the problems,\nbut if if you find something that doesn't seem to make sense, it's\nlikely a bug rather than anything we want to preserve. In particular,\ngiven the knowledge that it doesn't meet the ISO spec, I'd judge that\nthe existing code for the ISO shorthand was a work-in-progress.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 08 Sep 2003 15:59:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ISO 8601 \"Time Intervals\" of the \"format with time-unit\n\tdeignators\"" }, { "msg_contents": "\"Ron Mayer\" <[email protected]> writes:\n> Would it be useful if I added a 'datestyle' of 'ISO basic' which\n> would produce the most terse formats ('19980115' for dates, \n> and 'P1Y1M' for intervals)?\n\nI don't really care for using that name for it --- for one thing, you\ncouldn't do\n\tset datestyle to iso basic;\nbecause of syntax limitations. 
A one-word name is a much better idea.\n\nPerhaps call it \"compact\" or \"terse\" datestyle?\n\n\n> PS: What's the best inexpenive way for me to know if this changed \n> at all between the final draft and the published standard? \n\nANSI sells PDFs of ISO specs at their online store\nhttp://webstore.ansi.org/ansidocstore/default.asp\nalthough it looks like they want $81 for 8601, which is not my idea of\n\"inexpensive\".\n\nUsually ISO final drafts differ very little from the published specs;\nI think you could just work from the draft and no one would complain.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 08 Sep 2003 16:17:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ISO 8601 'Time Intervals' of the 'format with time-unit\n\tdeignators'" }, { "msg_contents": "Tom Lane wrote:\n> > Perhaps one could argue that the postgres shorthand should \n> > follow the ISO conventions, but I'd not want to break backward\n> > compatability, incase someone out there is using '1H30M' and\n> > expecting minutes instead of months.\n> \n> I doubt anyone is using it, because it's completely undocumented.\n> If we're going to support the real ISO spec, I'd suggest ripping\n> out any not-quite-there variant. (Especially so noting that your\n> code seems a lot cleaner than the ptype stuff.)\n\nAgreed. Let me put your code in the queue for 7.5.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 8 Sep 2003 16:27:16 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ISO 8601 \"Time Intervals\" of the \"format with time-unit" }, { "msg_contents": "\nThis has been saved for the 7.5 release:\n\n\thttp:/momjian.postgresql.org/cgi-bin/pgpatches2\n\nFeel free to submit an updated patch that rips out the old syntax, as\ndiscussed, or replace this patch with a more comprehensive one.\n\n---------------------------------------------------------------------------\n\nRon Mayer wrote:\n> Short summary:\n> \n> This patch allows ISO 8601 \"time intervals\" using the \"format \n> with time-unit designators\" to specify postgresql \"intervals\".\n> \n> Below I have (A) What these time intervals are, (B) What I\n> modified to support them, (C) Issues with intervals I want\n> to bring up, and (D) a patch supporting them.\n> \n> It's helpful to me. Any feedback is appreciated. If you \n> did want to consider including it, let me know what to clean \n> up. If not, I thought I'd just put it here if anyone else finds\n> it useful too.\n> \n> Thanks for your time,\n> \n> Ron Mayer\n> \n> Longer:\n> \n> (A) What these intervals are.\n> \n> ISO 8601, the standard from which PostgreSQL gets some of it's \n> time syntax, also has a specification for \"time-intervals\".\n> \n> In particular, section 5.5.4.2 has a \"Representation of\n> time-interval by duration only\" which I believe maps\n> nicely to ISO intervals.\n> \n> Compared to the ISO 8601 time interval specification, the\n> postgresql interval syntax is quite verbose. 
For example:\n> \n> Postgresql interval: ISO8601 Interval\n> ---------------------------------------------------\n> '1 year 6 months' 'P1Y6M'\n> '3 hours 25 minutes 42 seconds' 'PT3H25M42S'\n> \n> Yeah, it's uglier, but it sure is short which can make\n> for quicker typing and shorter scripts, and if for some\n> strange reason you had an application using this format\n> it's nice not to have to translate.\n> \n> The syntax is as follows:\n> Basic extended format: PnYnMnDTnHnMnS\n> PnW\n> \n> Where everything before the \"T\" is a date-part and everything\n> after is a time-part. W is for weeks.\n> In the date-part, Y=Year, M=Month, D=Day\n> In the time-part, H=Hour, M=Minute, S=Second\n> \n> Much more info can be found from the draft standard\n> ftp://ftp.qsl.net/pub/g1smd/154N362_.PDF\n> The final standard's only available for $$$ so I didn't\n> look at it. Some other sites imply that this part didn't\n> change from the last draft to the standard.\n> \n> \n> (B) This change was made by adding two functions to \"datetime.c\"\n> next to where DecodeInterval parses the normal interval syntax.\n> \n> A total of 313 lines were added, including comments and sgml docs.\n> Of these only 136 are actual code, the rest, comments, whitespace, etc.\n> \n> \n> One new function \"DecodeISO8601Interval\" follows the style of\n> \"DecodeInterval\" below it, and trys to strictly follow the ISO\n> syntax. If it doesn't match, it'll return -1 and the old syntax\n> will be checked as before.\n> \n> The first test (first character of the first field must be 'P', \n> and second character must be 'T' or '\\0') should be fast so I don't\n> think this will impact performance of existing code.\n> \n> \n> The second function (\"adjust_fval\") is just a small helper-function\n> to remove some of the cut&paste style that DecodeInterval used.\n> \n> It seems to work.\n> =======================================================================\n> betadb=# select 'P1M15DT12H30M7S'::interval;\n> interval \n> ------------------------\n> 1 mon 15 days 12:30:07\n> (1 row)\n> \n> betadb=# select '1 month 15 days 12 hours 30 minutes 7 seconds'::interval;\n> \t interval \n> ------------------------\n> 1 mon 15 days 12:30:07\n> (1 row)\n> =====================================================================\n> \n> \n> \n> (C) Open issues with intervals, and questions I'd like to ask.\n> \n> 1. DecodeInterval seems to have a hardcoded '.' for specifying\n> fractional times. ISO 8601 states that both '.' and ',' are\n> ok, but \"of these, the comma is the preferred sign\".\n> \n> In DecodeISO8601Interval I loosened the test to allow\n> both but left it as it was in DecodeInterval. Should\n> both be changed to make them more consistant?\n> \n> 2. In \"DecodeInterval\", fractional weeks and fractional months\n> can produce seconds; but fractional years can not (rounded\n> to months). I didn't understand the reasoning for this, so\n> I left it the same, and followed the same convention for\n> ISO intervals. Should I change this?\n> \n> 3. I could save a bunch of copy-paste-lines-of-code from the\n> pre-existing DecodeInterval by calling the adjust_fval helper\n> function. The tradeoff is a few extra function-calls when\n> decoding an interval. 
However I didn't want to risk changes\n> to the existing part unless you guys encourage me to do so.\n> \n> \n> (D) The patch.\n> \n> \n> Index: doc/src/sgml/datatype.sgml\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql-server/doc/src/sgml/datatype.sgml,v\n> retrieving revision 1.123\n> diff -u -1 -0 -r1.123 datatype.sgml\n> --- doc/src/sgml/datatype.sgml\t31 Aug 2003 17:32:18 -0000\t1.123\n> +++ doc/src/sgml/datatype.sgml\t8 Sep 2003 04:04:58 -0000\n> @@ -1735,20 +1735,71 @@\n> Quantities of days, hours, minutes, and seconds can be specified without\n> explicit unit markings. For example, <literal>'1 12:59:10'</> is read\n> the same as <literal>'1 day 12 hours 59 min 10 sec'</>.\n> </para>\n> \n> <para>\n> The optional precision\n> <replaceable>p</replaceable> should be between 0 and 6, and\n> defaults to the precision of the input literal.\n> </para>\n> +\n> +\n> + <para>\n> + Alternatively, <type>interval</type> values can be written as \n> + ISO 8601 time intervals, using the \"Format with time-unit designators\".\n> + This format always starts with the character <literal>'P'</>, followed \n> + by a string of values followed by single character time-unit designators.\n> + A <literal>'T'</> separates the date and time parts of the interval.\n> + </para>\n> +\n> + <para>\n> + Format: PnYnMnDTnHnMnS\n> + </para>\n> + <para>\n> + In this format, <literal>'n'</> gets replaced by a number, and \n> + <literal>Y</> represents years, \n> + <literal>M</> (in the date part) months,\n> + <literal>D</> months,\n> + <literal>H</> hours,\n> + <literal>M</> (in the time part) minutes,\n> + and <literal>S</> seconds.\n> + </para>\n> + \n> +\n> + <table id=\"interval-example-table\">\n> +\t <title>Interval Example</title>\n> +\t <tgroup cols=\"2\">\n> +\t\t<thead>\n> +\t\t <row>\n> +\t\t <entry>Traditional</entry>\n> +\t\t <entry>ISO-8601 time-interval</entry>\n> +\t\t </row>\n> +\t\t</thead>\n> +\t\t<tbody>\n> +\t\t <row>\n> +\t\t <entry>1 month</entry>\n> +\t\t <entry>P1M</entry>\n> +\t\t </row>\n> +\t\t <row>\n> +\t\t <entry>1 hour 30 minutes</entry>\n> +\t\t <entry>PT1H30M</entry>\n> +\t\t </row>\n> +\t\t <row>\n> +\t\t <entry>2 years 10 months 15 days 10 hours 30 minutes 20 seconds</entry>\n> +\t\t <entry>P2Y10M15DT10H30M20S</entry>\n> +\t\t </row>\n> +\t\t</tbody>\n> +\t </thead>\n> +\t </table>\n> +\t \n> + </para>\n> </sect3>\n> \n> <sect3>\n> <title>Special Values</title>\n> \n> <indexterm>\n> <primary>time</primary>\n> <secondary>constants</secondary>\n> </indexterm>\n> \n> Index: src/backend/utils/adt/datetime.c\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql-server/src/backend/utils/adt/datetime.c,v\n> retrieving revision 1.116\n> diff -u -1 -0 -r1.116 datetime.c\n> --- src/backend/utils/adt/datetime.c\t27 Aug 2003 23:29:28 -0000\t1.116\n> +++ src/backend/utils/adt/datetime.c\t8 Sep 2003 04:04:59 -0000\n> @@ -30,20 +30,21 @@\n> \t\t\t struct tm * tm, fsec_t *fsec, int *is2digits);\n> static int DecodeNumberField(int len, char *str,\n> \t\t\t\t int fmask, int *tmask,\n> \t\t\t\t struct tm * tm, fsec_t *fsec, int *is2digits);\n> static int DecodeTime(char *str, int fmask, int *tmask,\n> \t\t struct tm * tm, fsec_t *fsec);\n> static int\tDecodeTimezone(char *str, int *tzp);\n> static datetkn *datebsearch(char *key, datetkn *base, unsigned int nel);\n> static int\tDecodeDate(char *str, int fmask, int *tmask, struct tm * tm);\n> static void TrimTrailingZeros(char *str);\n> 
+static int DecodeISO8601Interval(char **field, int *ftype, int nf, int *dtype, struct tm * tm, fsec_t *fsec);\n> \n> \n> int\t\t\tday_tab[2][13] = {\n> \t{31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31, 0},\n> {31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31, 0}};\n> \n> char\t *months[] = {\"Jan\", \"Feb\", \"Mar\", \"Apr\", \"May\", \"Jun\",\n> \"Jul\", \"Aug\", \"Sep\", \"Oct\", \"Nov\", \"Dec\", NULL};\n> \n> char\t *days[] = {\"Sunday\", \"Monday\", \"Tuesday\", \"Wednesday\",\n> @@ -2872,30 +2873,271 @@\n> \t\t\tdefault:\n> \t\t\t\t*val = tp->value;\n> \t\t\t\tbreak;\n> \t\t}\n> \t}\n> \n> \treturn type;\n> }\n> \n> \n> +void adjust_fval(double fval,struct tm * tm, fsec_t *fsec, int scale);\n> +{\n> +\tint\tsec;\n> +\tfval\t *= scale;\n> +\tsec\t\t = fval;\n> +\ttm->tm_sec += sec;\n> +#ifdef HAVE_INT64_TIMESTAMP\n> +\t*fsec\t += ((fval - sec) * 1000000);\n> +#else\n> +\t*fsec\t += (fval - sec);\n> +#endif\n> +}\n> +\n> +\n> +/* DecodeISO8601Interval()\n> + *\n> + * Check if it's a ISO 8601 Section 5.5.4.2 \"Representation of\n> + * time-interval by duration only.\" \n> + * Basic extended format: PnYnMnDTnHnMnS\n> + * PnW\n> + * For more info.\n> + * http://www.astroclark.freeserve.co.uk/iso8601/index.html\n> + * ftp://ftp.qsl.net/pub/g1smd/154N362_.PDF\n> + *\n> + * Examples: P1D for 1 day\n> + * PT1H for 1 hour\n> + * P2Y6M7DT1H30M for 2 years, 6 months, 7 days 1 hour 30 min\n> + *\n> + * The first field is exactly \"p\" or \"pt\" it may be of this type.\n> + *\n> + * Returns -1 if the field is not of this type.\n> + *\n> + * It pretty strictly checks the spec, with the two exceptions\n> + * that a week field ('W') may coexist with other units, and that\n> + * this function allows decimals in fields other than the least\n> + * significant units.\n> + */\n> +int\n> +DecodeISO8601Interval(char **field, int *ftype, int nf, int *dtype, struct tm * tm, fsec_t *fsec) \n> +{\n> +\tchar\t *cp;\n> +\tint\t\t\tfmask = 0,\n> +\t\t\t\ttmask;\n> +\tint\t\t\tval;\n> +\tdouble\t\tfval;\n> +\tint\t\t\targ;\n> +\tint\t\t\tdatepart;\n> +\n> + /*\n> +\t * An ISO 8601 \"time-interval by duration only\" must start\n> +\t * with a 'P'. If it contains a date-part, 'p' will be the\n> +\t * only character in the field. If it contains no date part\n> +\t * it will contain exactly to characters 'PT' indicating a\n> +\t * time part.\n> +\t * Anything else is illegal and will be treated like a \n> +\t * traditional postgresql interval.\n> +\t */\n> + if (!(field[0][0] == 'p' &&\n> + ((field[0][1] == 0) || (field[0][1] == 't' && field[0][2] == 0))))\n> +\t{\n> +\t return -1;\n> +\t}\n> +\n> +\n> + /*\n> +\t * If the first field is exactly 1 character ('P'), it starts\n> +\t * with date elements. Otherwise it's two characters ('PT');\n> +\t * indicating it starts with a time part.\n> +\t */\n> +\tdatepart = (field[0][1] == 0);\n> +\n> +\t/*\n> +\t * Every value must have a unit, so we require an even\n> +\t * number of value/unit pairs. 
Therefore we require an\n> +\t * odd nubmer of fields, including the prefix 'P'.\n> +\t */\n> +\tif ((nf & 1) == 0)\n> +\t\treturn -1;\n> +\n> +\t/*\n> +\t * Process pairs of fields at a time.\n> +\t */\n> +\tfor (arg = 1 ; arg < nf ; arg+=2) \n> +\t{\n> +\t\tchar * value = field[arg ];\n> +\t\tchar * units = field[arg+1];\n> +\n> +\t\t/*\n> +\t\t * The value part must be a number.\n> +\t\t */\n> +\t\tif (ftype[arg] != DTK_NUMBER) \n> +\t\t\treturn -1;\n> +\n> +\t\t/*\n> +\t\t * extract the number, almost exactly like the non-ISO interval.\n> +\t\t */\n> +\t\tval = strtol(value, &cp, 10);\n> +\n> +\t\t/*\n> +\t\t * One difference from the normal postgresql interval below...\n> +\t\t * ISO 8601 states that \"Of these, the comma is the preferred \n> +\t\t * sign\" so I allow it here for locales that support it.\n> +\t\t * Note: Perhaps the old-style interval code below should\n> +\t\t * allow for this too, but I didn't want to risk backward\n> +\t\t * compatability.\n> +\t\t */\n> +\t\tif (*cp == '.' || *cp == ',') \n> +\t\t{\n> +\t\t\tfval = strtod(cp, &cp);\n> +\t\t\tif (*cp != '\\0')\n> +\t\t\t\treturn -1;\n> +\n> +\t\t\tif (val < 0)\n> +\t\t\t\tfval = -(fval);\n> +\t\t}\n> +\t\telse if (*cp == '\\0')\n> +\t\t\tfval = 0;\n> +\t\telse\n> +\t\t\treturn -1;\n> +\n> +\n> +\t\tif (datepart)\n> +\t\t{\n> +\t\t\t/*\n> +\t\t\t * All the 8601 unit specifiers are 1 character, but may\n> +\t\t\t * be followed by a 'T' character if transitioning between\n> +\t\t\t * the date part and the time part. If it's not either\n> +\t\t\t * one character or two characters with the second being 't'\n> +\t\t\t * it's an error.\n> +\t\t\t */\n> +\t\t\tif (!(units[1] == 0 || (units[1] == 't' && units[2] == 0)))\n> +\t\t\t\treturn -1;\n> +\n> +\t\t\tif (units[1] == 't')\n> +\t\t\t\tdatepart = 0;\n> +\n> +\t\t\tswitch (units[0]) /* Y M D W */\n> +\t\t\t{\n> +\t\t\t\tcase 'd':\n> +\t\t\t\t\ttm->tm_mday += val;\n> +\t\t\t\t\tif (fval != 0)\n> +\t\t\t\t\t adjust_fval(fval,tm,fsec, 86400);\n> +\t\t\t\t\ttmask = ((fmask & DTK_M(DAY)) ? 0 : DTK_M(DAY));\n> +\t\t\t\t\tbreak;\n> +\n> +\t\t\t\tcase 'w':\n> +\t\t\t\t\ttm->tm_mday += val * 7;\n> +\t\t\t\t\tif (fval != 0)\n> +\t\t\t\t\t adjust_fval(fval,tm,fsec,7 * 86400);\n> +\t\t\t\t\ttmask = ((fmask & DTK_M(DAY)) ? 0 : DTK_M(DAY));\n> +\t\t\t\t\tbreak;\n> +\n> +\t\t\t\tcase 'm':\n> +\t\t\t\t\ttm->tm_mon += val;\n> +\t\t\t\t\tif (fval != 0)\n> +\t\t\t\t\t adjust_fval(fval,tm,fsec,30 * 86400);\n> +\t\t\t\t\ttmask = DTK_M(MONTH);\n> +\t\t\t\t\tbreak;\n> +\n> +\t\t\t\tcase 'y':\n> +\t\t\t\t\t/*\n> +\t\t\t\t\t * Why can fractional months produce seconds,\n> +\t\t\t\t\t * but fractional years can't? Well the older\n> +\t\t\t\t\t * interval code below has the same property\n> +\t\t\t\t\t * so this one follows the other one too.\n> +\t\t\t\t\t */\n> +\t\t\t\t\ttm->tm_year += val;\n> +\t\t\t\t\tif (fval != 0)\n> +\t\t\t\t\t\ttm->tm_mon += (fval * 12);\n> +\t\t\t\t\ttmask = ((fmask & DTK_M(YEAR)) ? 0 : DTK_M(YEAR));\n> +\t\t\t\t\tbreak;\n> +\n> +\t\t\t\tdefault:\n> +\t\t\t\t\treturn -1; /* invald date unit prefix */\n> +\t\t\t}\n> +\t\t}\n> +\t\telse\n> +\t\t{\n> +\t\t\t/*\n> +\t\t\t * ISO 8601 time part.\n> +\t\t\t * In the time part, only one-character\n> +\t\t\t * unit prefixes are allowed. 
If it's more\n> +\t\t\t * than one character, it's not a valid ISO 8601\n> +\t\t\t * time interval by duration.\n> +\t\t\t */\n> +\t\t\tif (units[1] != 0)\n> +\t\t\t\treturn -1;\n> +\n> +\t\t\tswitch (units[0]) /* H M S */\n> +\t\t\t{\n> +\t\t\t\tcase 's':\n> +\t\t\t\t\ttm->tm_sec += val;\n> +#ifdef HAVE_INT64_TIMESTAMP\n> +\t\t\t\t\t*fsec += (fval * 1000000);\n> +#else\n> +\t\t\t\t\t*fsec += fval;\n> +#endif\n> +\t\t\t\t\ttmask = DTK_M(SECOND);\n> +\t\t\t\t\tbreak;\n> +\n> +\t\t\t\tcase 'm':\n> +\t\t\t\t\ttm->tm_min += val;\n> +\t\t\t\t\tif (fval != 0)\n> +\t\t\t\t\t adjust_fval(fval,tm,fsec,60);\n> +\t\t\t\t\ttmask = DTK_M(MINUTE);\n> +\t\t\t\t\tbreak;\n> +\n> +\t\t\t\tcase 'h':\n> +\t\t\t\t\ttm->tm_hour += val;\n> +\t\t\t\t\tif (fval != 0)\n> +\t\t\t\t\t adjust_fval(fval,tm,fsec,3600);\n> +\t\t\t\t\ttmask = DTK_M(HOUR);\n> +\t\t\t\t\tbreak;\n> +\n> +\t\t\t\tdefault:\n> +\t\t\t\t\treturn -1; /* invald time unit prefix */\n> +\t\t\t}\n> +\t\t}\n> +\t\tfmask |= tmask;\n> +\t}\n> +\n> +\tif (*fsec != 0)\n> +\t{\n> +\t\tint\t\t\tsec;\n> +\n> +#ifdef HAVE_INT64_TIMESTAMP\n> +\t\tsec = (*fsec / INT64CONST(1000000));\n> +\t\t*fsec -= (sec * INT64CONST(1000000));\n> +#else\n> +\t\tTMODULO(*fsec, sec, 1e0);\n> +#endif\n> +\t\ttm->tm_sec += sec;\n> +\t}\n> +\treturn (fmask != 0) ? 0 : -1;\n> +}\n> +\n> +\n> /* DecodeInterval()\n> * Interpret previously parsed fields for general time interval.\n> * Returns 0 if successful, DTERR code if bogus input detected.\n> *\n> * Allow \"date\" field DTK_DATE since this could be just\n> *\tan unsigned floating point number. - thomas 1997-11-16\n> *\n> * Allow ISO-style time span, with implicit units on number of days\n> *\tpreceding an hh:mm:ss field. - thomas 1998-04-30\n> + * \n> + * Allow ISO-8601 style \"Representation of time-interval by duration only\"\n> + * of the format 'PnYnMnDTnHnMnS' and 'PnW' - ron 2003-08-30\n> */\n> +\n> int\n> DecodeInterval(char **field, int *ftype, int nf, int *dtype, struct tm * tm, fsec_t *fsec)\n> {\n> \tint\t\t\tis_before = FALSE;\n> \tchar\t *cp;\n> \tint\t\t\tfmask = 0,\n> \t\t\t\ttmask,\n> \t\t\t\ttype;\n> \tint\t\t\ti;\n> \tint\t\t\tdterr;\n> @@ -2906,20 +3148,37 @@\n> \n> \ttype = IGNORE_DTF;\n> \ttm->tm_year = 0;\n> \ttm->tm_mon = 0;\n> \ttm->tm_mday = 0;\n> \ttm->tm_hour = 0;\n> \ttm->tm_min = 0;\n> \ttm->tm_sec = 0;\n> \t*fsec = 0;\n> \n> +\t/*\n> +\t * Check if it's a ISO 8601 Section 5.5.4.2 \"Representation of\n> + * time-interval by duration only.\" \n> +\t * Basic extended format: PnYnMnDTnHnMnS\n> +\t * PnW\n> +\t * http://www.astroclark.freeserve.co.uk/iso8601/index.html\n> +\t * ftp://ftp.qsl.net/pub/g1smd/154N362_.PDF\n> +\t * Examples: P1D for 1 day\n> +\t * PT1H for 1 hour\n> +\t * P2Y6M7DT1H30M for 2 years, 6 months, 7 days 1 hour 30 min\n> +\t *\n> +\t * The first field is exactly \"p\" or \"pt\" it may be of this type.\n> +\t */\n> +\tif (DecodeISO8601Interval(field,ftype,nf,dtype,tm,fsec) == 0) {\n> +\t return 0;\n> + }\n> +\n> \t/* read through list backwards to pick up units before values */\n> \tfor (i = nf - 1; i >= 0; i--)\n> \t{\n> \t\tswitch (ftype[i])\n> \t\t{\n> \t\t\tcase DTK_TIME:\n> \t\t\t\tdterr = DecodeTime(field[i], fmask, &tmask, tm, fsec);\n> \t\t\t\tif (dterr)\n> \t\t\t\t\treturn dterr;\n> \t\t\t\ttype = DTK_DAY;\n> @@ -2983,20 +3242,21 @@\n> \t\t\t\t}\n> \t\t\t\t/* DROP THROUGH */\n> \n> \t\t\tcase DTK_DATE:\n> \t\t\tcase DTK_NUMBER:\n> \t\t\t\tval = strtol(field[i], &cp, 10);\n> \n> \t\t\t\tif (type == IGNORE_DTF)\n> \t\t\t\t\ttype = DTK_SECOND;\n> \n> +\t\t\t\t/* should this 
allow ',' for locales that use it ? */\n> \t\t\t\tif (*cp == '.')\n> \t\t\t\t{\n> \t\t\t\t\tfval = strtod(cp, &cp);\n> \t\t\t\t\tif (*cp != '\\0')\n> \t\t\t\t\t\treturn DTERR_BAD_FORMAT;\n> \n> \t\t\t\t\tif (val < 0)\n> \t\t\t\t\t\tfval = -(fval);\n> \t\t\t\t}\n> \t\t\t\telse if (*cp == '\\0')\n> \n> ===================================================================\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 8 Sep 2003 16:29:56 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ISO 8601 \"Time Intervals\" of the \"format with time-unit" }, { "msg_contents": "Tom wrote: \n> \"Ron Mayer\" <[email protected]> writes:\n> > Tom wrote:\n> >> Er, don't we support that already?\n> > ...AFAICT, doesn't match ISO 8601...\n> \n> Well, it's *supposed* to match ISO.... Unless ISO has put out \n> multiple specs that cover this?\n\nAny way to tell if this is the case. \n8601's the one I see cited the most.\n\n\n> > ...I'd not want to break backward compatability...'1H30M'\n>\n> I doubt anyone is using it, because it's completely undocumented.\n> If we're going to support the real ISO spec, I'd suggest ripping\n> out any not-quite-there variant.\n\nI'm happy to look into it. Rip out completely? Ifdef? \n\n> We've been gradually cleaning up the problems, but if if you find \n> something that doesn't seem to make sense, it's likely a bug rather\n> than anything we want to preserve. \n\nI've seen a few more cases that don't make sense.\n\nFor example \"why is 0.001 years less than 0.001 months\".\n\n betadb=# select '0.01 years'::interval\n interval\n ----------\n 00:00:00\n\n betadb=# select '0.01 months'::interval\n interval\n ----------\n 07:12:00\n\nIf I'm breaking backward compatability anyway, I'd be happy to tweak\nthings like this one too. Unless, of course someone can give me a \nreason why we want fractional years rounded to months, but fractional \nmonths are rounded to fractions of a second.\n\n Ron Mayer.\n\nPS: mailinglist etiquite question... for discussion, should I\n more this to hackers, or continue it here.\n\n", "msg_date": "Mon, 8 Sep 2003 13:42:11 -0700", "msg_from": "\"Ron Mayer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ISO 8601 \"Time Intervals\" of the \"format with time-unit\n\tdeignators\"" }, { "msg_contents": "Ron Mayer wrote:\n> If I'm breaking backward compatability anyway, I'd be happy to tweak\n> things like this one too. Unless, of course someone can give me a \n> reason why we want fractional years rounded to months, but fractional \n> months are rounded to fractions of a second.\n> \n> Ron Mayer.\n> \n> PS: mailinglist etiquite question... for discussion, should I\n> more this to hackers, or continue it here.\n\nYour choice, but you get a larger audience on hackers. I usually keep\nthings on patches when I have lots of code to post, and other times move\nto hackers.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n\n", "msg_date": "Mon, 8 Sep 2003 17:05:14 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ISO 8601 \"Time Intervals\" of the \"format with time-unit" }, { "msg_contents": "\"Ron Mayer\" <[email protected]> writes:\n> For example \"why is 0.001 years less than 0.001 months\".\n\nAnd look at this:\n\nregression=# select '0.99 years'::interval;\n interval\n----------\n 11 mons\n(1 row)\n\nregression=# select '0.99 months'::interval;\n interval\n------------------\n 29 days 16:48:00\n(1 row)\n\nIt kinda looks like fractional years are converted to integer months\n(truncating) while fractional months are moved to the seconds part\nof the interval. Ick. The handling ought to be consistent if you\nask me.\n\n> If I'm breaking backward compatability anyway, I'd be happy to tweak\n> things like this one too. Unless, of course someone can give me a \n> reason why we want fractional years rounded to months, but fractional \n> months are rounded to fractions of a second.\n\nActually, what I'd like to see done with interval is re-implement it as\na three-field entity, separately storing months, days, and seconds.\nThe separation between months and smaller units is good because a month\nisn't a fixed number of any smaller unit, but the same holds true for\ndays and smaller units (days are not always 24 hours, consider DST\ntransitions). This would no doubt cause some backwards compatibility\nproblems, but overall it would fix many more cases than it breaks.\nWe see complaints about related issues regularly, every spring and fall...\n\nI'm unsure whether fractional months or fractional days are sensible\nto accept, but surely we should accept both or reject both. (This might\nsuggest that the underlying storage for the month and day fields should\nbe float not int, btw, but I am not sure about it.)\n\n> PS: mailinglist etiquite question... for discussion, should I\n> more this to hackers, or continue it here.\n\nAt this point it should move to pghackers, I think.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 08 Sep 2003 17:19:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ISO 8601 \"Time Intervals\" of the \"format with time-unit\n\tdeignators\"" }, { "msg_contents": "Tom Lane writes:\n\n> \"Ron Mayer\" <[email protected]> writes:\n> > Would it be useful if I added a 'datestyle' of 'ISO basic' which\n> > would produce the most terse formats ('19980115' for dates,\n> > and 'P1Y1M' for intervals)?\n>\n> I don't really care for using that name for it --- for one thing, you\n> couldn't do\n> \tset datestyle to iso basic;\n> because of syntax limitations. 
A one-word name is a much better idea.\n\niso8601\n\nKeep in mind that SQL itself is also a kind of ISO, so being more specific\nis useful.\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Mon, 8 Sep 2003 23:58:09 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ISO 8601 'Time Intervals' of the 'format with time-unit" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Tom Lane writes:\n>> I don't really care for using that name for it ---\n\n> iso8601\n\n> Keep in mind that SQL itself is also a kind of ISO, so being more specific\n> is useful.\n\nYes, but by the same token \"iso8601\" isn't specific enough either.\nSeveral of the other input formats we support have at least as good a\nclaim on that name.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 08 Sep 2003 18:06:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ISO 8601 'Time Intervals' of the 'format with time-unit\n\tdeignators'" }, { "msg_contents": "Tom Lane writes:\n\n> Yes, but by the same token \"iso8601\" isn't specific enough either.\n> Several of the other input formats we support have at least as good a\n> claim on that name.\n\nThe only input formats we support are along the lines of\n\n@ 1 year 2 mons 3 days 4 hours 5 mins 6 secs\n@ 1 year 2 mons 3 days 04:05:06\n\nThese are also the supported output formats (the first you get for 'sql',\n'postgres', and 'german' formats; the second is 'iso'). A quick check of\nISO 8601 shows, however, that neither of these are close to anything\nspecified in that standard.\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Tue, 9 Sep 2003 00:39:44 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ISO 8601 'Time Intervals' of the 'format with time-unit" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Tom Lane writes:\n>> Yes, but by the same token \"iso8601\" isn't specific enough either.\n>> Several of the other input formats we support have at least as good a\n>> claim on that name.\n\n> The only input formats we support are along the lines of\n\n> @ 1 year 2 mons 3 days 4 hours 5 mins 6 secs\n> @ 1 year 2 mons 3 days 04:05:06\n\nSorry, I was thinking of timestamp formats not intervals.\nYou're right that we don't have anything else particularly ISO-standard\nfor intervals, but my understanding is that formats like '2003-09-08\n18:43:31.046283-04' are ISO8601 compatible for timestamps. 
(Possibly\nyou need to put a T in there for strict compatibility, not sure.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 08 Sep 2003 18:45:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ISO 8601 'Time Intervals' of the 'format with time-unit\n\tdeignators'" }, { "msg_contents": "Tom wrote...\n> At this point it should move to pghackers, I think.\n\nBackground for pghackers first, open issues below...\n\n Over on pgpatches we've been discussing ISO syntax for\n �time intervals� of the �format with time-unit designators�.\n http://archives.postgresql.org/pgsql-patches/2003-09/msg00103.php\n A short summary is that I�ve submitted a patch that\n accepts intervals of this format..\n Postgresql interval: ISO8601 Interval\n ---------------------------------------------------\n '1 year 6 months' 'P1Y6M'\n '3 hours 25 minutes 42 seconds' 'PT3H25M42S'\n The final draft is here\n ftp://ftp.qsl.net/pub/g1smd/154N362_.PDF\n\n This patch was backward-compatable, but further improvements\n discussed on patches may break compatability so I wanted to\n discuss them here before implementing them. I�ll also\n be submitting a new datestyle �iso8601� to output these intervals.\n\nOpen issues:\n\n1. Postgresql supported a shorthand for intervals that had\n a similar, but not compatable syntax:\n Interval ISO Existing postgres\n 8601 shorthand\n -----------------------------------------------------\n '1 year 1 minute' 'P1YT1M' '1Y1M'\n '1 year 1 month' 'P1Y1M' N/A\n\n The current thinking of the thread in pgpatches is to remove\n the existing (undocumented) syntax.\n\n Removing this will break backward compatability if anyone\n used this feature. Let me know if you needed it.\n\n2. Some of the parsing for intervals is inconsistant and\n confusing. For example, note that �0.01 years� is\n less than �0.01 months�.\n\n betadb=# select '0.01 month'::interval as hundredth_of_month,\n betadb-# '0.01 year'::interval as hundredth_of_year;\n hundredth_of_month | hundredth_of_year\n --------------------+-------------------\n 07:12:00 | 00:00:00\n\n This occurs because the current interval parsing rounds\n fractional years to the month, but fractional months\n to the fraction of a second.\n\n The current thinking on the thread in patches is\n at the very least to make these consistant, but with\n some open-issues because months aren�t a fixed number\n of days, and days aren�t a fixed number of seconds.\n\n The easiest and most minimal change would be to assume\n that any fractional part automatically gets turned\n into seconds, assuming things like 30 seconds/month,\n 24 hrs/day. Since all units except years work that way\n today, it�d would have the least impact on existing code.\n\n A probably better way that Tom recommended would remember\n fractional months and fractional days. This has the\n advantage that unlike today,\n �.5 months�::interval + �.5 months�::interval\n would then equal 1 month.\n\n So what should �.5 years� be?\n\n Today, it�s �6 mons�. But I could just as easily\n argue that it should be 365.2425/2 days, or 4382.91\n seconds. Each of these will be different (the last\n two are different durring daylight savings).\n\n3. This all is based on the final draft standard of\n ISO 8601, but I haven�t seen the actual expensive\n standard. If anyone has it handy...\n\n Also, I�m curious to know what if anything the SQL\n spec says about intervals and units. 
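As an aside, here is a minimal C sketch of the "keep months, days and
seconds separate" idea from point 2 above. The struct name, field names
and the choice of double are illustrative assumptions only; this is not
the SQL spec's definition and not PostgreSQL's actual Interval struct:

#include <stdio.h>

/* Illustrative three-field interval: a fraction of one unit stays in
 * that unit instead of being smeared into seconds.
 */
typedef struct
{
	double		months;
	double		days;			/* a day is not always 24 hours */
	double		seconds;
} SketchInterval;

static SketchInterval
sketch_add(SketchInterval a, SketchInterval b)
{
	SketchInterval r;

	r.months = a.months + b.months;
	r.days = a.days + b.days;
	r.seconds = a.seconds + b.seconds;
	return r;
}

int
main(void)
{
	SketchInterval	half = {0.5, 0.0, 0.0};
	SketchInterval	sum = sketch_add(half, half);

	/* '.5 months' + '.5 months' really is 1 month here */
	printf("%g mons %g days %g secs\n", sum.months, sum.days, sum.seconds);
	return 0;
}

Whether months and days should really be stored as float or int is exactly
the open question; the only point of the sketch is that the three parts
stay separate until an actual timestamp gets involved.
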
Any pointers.\n\n Ron\n\nAny other interval annoyances I should hit at the same time?\n\n", "msg_date": "Mon, 8 Sep 2003 16:05:38 -0700", "msg_from": "\"Ron Mayer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] ISO 8601 \"Time Intervals\" of the \"format with time-unit\n\tdesignators\"" }, { "msg_contents": "\nTom wrote:\n> Peter Eisentraut <[email protected]> writes:\n> > Tom Lane writes:\n> >> Yes, but by the same token \"iso8601\" isn't specific enough either.\n\nISO 8601 gives more specific names.\n\n ISO 8601 Basic Format: P2Y10M15DT10H20M30S\n ISO 8601 Alternative Format: P00021015T102030\n ISO 8601 Extended Format: P0002-10-15T10:20:30\n\nIn a way, the Extended Format is kinda nice, since it�s\nalmost human readable.\n\nI could put in both the basic and extended ones, and\ncall the dateformats �iso8601basic� and �iso8601extended�.\nThe negative is that to do �iso8601basic� right, I�d also\nhave to tweak the �date� and �time� parts of the code too.\n\n", "msg_date": "Mon, 8 Sep 2003 16:46:13 -0700", "msg_from": "\"Ron Mayer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ISO 8601 'Time Intervals' of the 'format with time-unit\n\tdeignators'" }, { "msg_contents": "[ backtracking a little ]\n\n\"Ron Mayer\" <[email protected]> writes:\n> Tom wrote: \n>> I doubt anyone is using it, because it's completely undocumented.\n>> If we're going to support the real ISO spec, I'd suggest ripping\n>> out any not-quite-there variant.\n\n> I'm happy to look into it. Rip out completely? Ifdef? \n\n\"Rip\" was what I had in mind --- the idea is to simplify the code,\nwhich you will surely agree is too complicated as it stands. ifdefs\nwon't simplify the code or make it more understandable, rather the\nreverse.\n\nI have no problem with complex code when it's needed, but in this case\nthe ptype implementation of almost-ISO notation seems to me to affect\nmuch more of the code than it has any right to on a usefulness basis.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 09 Sep 2003 00:19:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ISO 8601 \"Time Intervals\" of the \"format with time-unit\n\tdeignators\"" }, { "msg_contents": "\nWhere did we leave this?\n\n---------------------------------------------------------------------------\n\nRon Mayer wrote:\n> \n> Tom wrote:\n> > Peter Eisentraut <[email protected]> writes:\n> > > Tom Lane writes:\n> > >> Yes, but by the same token \"iso8601\" isn't specific enough either.\n> \n> ISO 8601 gives more specific names.\n> \n> ISO 8601 Basic Format: P2Y10M15DT10H20M30S\n> ISO 8601 Alternative Format: P00021015T102030\n> ISO 8601 Extended Format: P0002-10-15T10:20:30\n> \n> In a way, the Extended Format is kinda nice, since it?s\n> almost human readable.\n> \n> I could put in both the basic and extended ones, and\n> call the dateformats ?iso8601basic? and ?iso8601extended?.\n> The negative is that to do ?iso8601basic? right, I?d also\n> have to tweak the ?date? and ?time? parts of the code too.\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 26 Sep 2003 18:49:08 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ISO 8601 'Time Intervals' of the 'format with time-unit" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Where did we leave this?\n\nI thought it was proposed work for 7.5.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 26 Sep 2003 18:54:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ISO 8601 'Time Intervals' of the 'format with time-unit\n\tdeignators'" }, { "msg_contents": "So far,\n\n I have submitted the input-part.\n\n I have a working output-part (attached below, but I'm\n still cleaning up the documentation so I'll submit another\n one later). The output is chosen by setting\n the datestyle to 'iso8601basic'.\n\n Those two changes don't break backward compatability\n but don't fix too much odd behavior except ISO time interval I/O.\n\n\n I was encouraged to look into changing the way timestamp\n math is done (keeping month, and day, and second separate\n until the end). This is a bigger change and I don't have\n a stable version yet, and it breaks backward compatability. \n I hope to submit a proposal for changes early enough in the\n 7.5 timeframe to submit fixes then as well. At the very least\n I will fully document the existing interval-math as part of\n this proposal so the docs can be updated even if the proposal\n gets rejected.\n\n Ron\n\n> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> Sent: Friday, September 26, 2003 3:54 PM\n> To: Bruce Momjian\n> Cc: Ron Mayer; Peter Eisentraut; [email protected];\n> [email protected]\n> Subject: Re: [PATCHES] ISO 8601 'Time Intervals' of the 'format with\n> time-unit deignators' \n> \n> \n> Bruce Momjian <[email protected]> writes:\n> > Where did we leave this?\n> \n> I thought it was proposed work for 7.5.\n> \n> \t\t\tregards, tom lane\n> \n>", "msg_date": "Fri, 26 Sep 2003 15:54:58 -0700", "msg_from": "\"Ron Mayer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ISO 8601 'Time Intervals' of the 'format with time-unit\n\tdeignators'" }, { "msg_contents": "> I have a working output-part (attached below, but I'm\n> still cleaning up the documentation so I'll submit another\n> one later)\n\nUgh. Something in this pc quoted some characters in the attachment.\nRather than trying to apply it, wait a couple days and I'll submit\nan update where the docs match.\n\n Ron\n\n\n", "msg_date": "Fri, 26 Sep 2003 16:04:37 -0700", "msg_from": "\"Ron Mayer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ISO 8601 'Time Intervals' of the 'format with time-unit\n\tdeignators'" }, { "msg_contents": "\nOh... and I'm pretty sure noone expects this before 7.5. :-)\nLooks like I'm the only one with input files that have these \nwierd-but-iso8601 'P1Y6M' and 'PT30M' inputs; so I can just \nuse my own patch. :-)\n\n\n> \n> So far,\n> \n> I have submitted the input-part.\n> \n> I have a working output-part (attached below, but I'm\n> still cleaning up the documentation so I'll submit another\n> one later). The output is chosen by setting\n> the datestyle to 'iso8601basic'.\n> \n> Those two changes don't break backward compatability\n> but don't fix too much odd behavior except ISO time interval I/O.\n> \n> \n> I was encouraged to look into changing the way timestamp\n> math is done (keeping month, and day, and second separate\n> until the end). 
This is a bigger change and I don't have\n> a stable version yet, and it breaks backward compatability. \n> I hope to submit a proposal for changes early enough in the\n> 7.5 timeframe to submit fixes then as well. At the very least\n> I will fully document the existing interval-math as part of\n> this proposal so the docs can be updated even if the proposal\n> gets rejected.\n> \n> Ron\n> \n> > -----Original Message-----\n> > From: Tom Lane [mailto:[email protected]]\n> > Sent: Friday, September 26, 2003 3:54 PM\n> > To: Bruce Momjian\n> > Cc: Ron Mayer; Peter Eisentraut; [email protected];\n> > [email protected]\n> > Subject: Re: [PATCHES] ISO 8601 'Time Intervals' of the 'format with\n> > time-unit deignators' \n> > \n> > \n> > Bruce Momjian <[email protected]> writes:\n> > > Where did we leave this?\n> > \n> > I thought it was proposed work for 7.5.\n> > \n> > \t\t\tregards, tom lane\n> > \n> > \n\n", "msg_date": "Fri, 26 Sep 2003 16:04:37 -0700", "msg_from": "\"Ron Mayer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ISO 8601 'Time Intervals' of the 'format with time-unit\n\tdeignators'" }, { "msg_contents": "\nIs this ready for application? It looks good to me. However, there is\nan \"Open issues\" section.\n\n---------------------------------------------------------------------------\n\nRon Mayer wrote:\n> Short summary:\n> \n> This patch allows ISO 8601 \"time intervals\" using the \"format \n> with time-unit designators\" to specify postgresql \"intervals\".\n> \n> Below I have (A) What these time intervals are, (B) What I\n> modified to support them, (C) Issues with intervals I want\n> to bring up, and (D) a patch supporting them.\n> \n> It's helpful to me. Any feedback is appreciated. If you \n> did want to consider including it, let me know what to clean \n> up. If not, I thought I'd just put it here if anyone else finds\n> it useful too.\n> \n> Thanks for your time,\n> \n> Ron Mayer\n> \n> Longer:\n> \n> (A) What these intervals are.\n> \n> ISO 8601, the standard from which PostgreSQL gets some of it's \n> time syntax, also has a specification for \"time-intervals\".\n> \n> In particular, section 5.5.4.2 has a \"Representation of\n> time-interval by duration only\" which I believe maps\n> nicely to ISO intervals.\n> \n> Compared to the ISO 8601 time interval specification, the\n> postgresql interval syntax is quite verbose. For example:\n> \n> Postgresql interval: ISO8601 Interval\n> ---------------------------------------------------\n> '1 year 6 months' 'P1Y6M'\n> '3 hours 25 minutes 42 seconds' 'PT3H25M42S'\n> \n> Yeah, it's uglier, but it sure is short which can make\n> for quicker typing and shorter scripts, and if for some\n> strange reason you had an application using this format\n> it's nice not to have to translate.\n> \n> The syntax is as follows:\n> Basic extended format: PnYnMnDTnHnMnS\n> PnW\n> \n> Where everything before the \"T\" is a date-part and everything\n> after is a time-part. W is for weeks.\n> In the date-part, Y=Year, M=Month, D=Day\n> In the time-part, H=Hour, M=Minute, S=Second\n> \n> Much more info can be found from the draft standard\n> ftp://ftp.qsl.net/pub/g1smd/154N362_.PDF\n> The final standard's only available for $$$ so I didn't\n> look at it. 
Some other sites imply that this part didn't\n> change from the last draft to the standard.\n> \n> \n> (B) This change was made by adding two functions to \"datetime.c\"\n> next to where DecodeInterval parses the normal interval syntax.\n> \n> A total of 313 lines were added, including comments and sgml docs.\n> Of these only 136 are actual code, the rest, comments, whitespace, etc.\n> \n> \n> One new function \"DecodeISO8601Interval\" follows the style of\n> \"DecodeInterval\" below it, and trys to strictly follow the ISO\n> syntax. If it doesn't match, it'll return -1 and the old syntax\n> will be checked as before.\n> \n> The first test (first character of the first field must be 'P', \n> and second character must be 'T' or '\\0') should be fast so I don't\n> think this will impact performance of existing code.\n> \n> \n> The second function (\"adjust_fval\") is just a small helper-function\n> to remove some of the cut&paste style that DecodeInterval used.\n> \n> It seems to work.\n> =======================================================================\n> betadb=# select 'P1M15DT12H30M7S'::interval;\n> interval \n> ------------------------\n> 1 mon 15 days 12:30:07\n> (1 row)\n> \n> betadb=# select '1 month 15 days 12 hours 30 minutes 7 seconds'::interval;\n> \t interval \n> ------------------------\n> 1 mon 15 days 12:30:07\n> (1 row)\n> =====================================================================\n> \n> \n> \n> (C) Open issues with intervals, and questions I'd like to ask.\n> \n> 1. DecodeInterval seems to have a hardcoded '.' for specifying\n> fractional times. ISO 8601 states that both '.' and ',' are\n> ok, but \"of these, the comma is the preferred sign\".\n> \n> In DecodeISO8601Interval I loosened the test to allow\n> both but left it as it was in DecodeInterval. Should\n> both be changed to make them more consistant?\n> \n> 2. In \"DecodeInterval\", fractional weeks and fractional months\n> can produce seconds; but fractional years can not (rounded\n> to months). I didn't understand the reasoning for this, so\n> I left it the same, and followed the same convention for\n> ISO intervals. Should I change this?\n> \n> 3. I could save a bunch of copy-paste-lines-of-code from the\n> pre-existing DecodeInterval by calling the adjust_fval helper\n> function. The tradeoff is a few extra function-calls when\n> decoding an interval. However I didn't want to risk changes\n> to the existing part unless you guys encourage me to do so.\n> \n> \n> (D) The patch.\n> \n> \n> Index: doc/src/sgml/datatype.sgml\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql-server/doc/src/sgml/datatype.sgml,v\n> retrieving revision 1.123\n> diff -u -1 -0 -r1.123 datatype.sgml\n> --- doc/src/sgml/datatype.sgml\t31 Aug 2003 17:32:18 -0000\t1.123\n> +++ doc/src/sgml/datatype.sgml\t8 Sep 2003 04:04:58 -0000\n> @@ -1735,20 +1735,71 @@\n> Quantities of days, hours, minutes, and seconds can be specified without\n> explicit unit markings. 
For example, <literal>'1 12:59:10'</> is read\n> the same as <literal>'1 day 12 hours 59 min 10 sec'</>.\n> </para>\n> \n> <para>\n> The optional precision\n> <replaceable>p</replaceable> should be between 0 and 6, and\n> defaults to the precision of the input literal.\n> </para>\n> +\n> +\n> + <para>\n> + Alternatively, <type>interval</type> values can be written as \n> + ISO 8601 time intervals, using the \"Format with time-unit designators\".\n> + This format always starts with the character <literal>'P'</>, followed \n> + by a string of values followed by single character time-unit designators.\n> + A <literal>'T'</> separates the date and time parts of the interval.\n> + </para>\n> +\n> + <para>\n> + Format: PnYnMnDTnHnMnS\n> + </para>\n> + <para>\n> + In this format, <literal>'n'</> gets replaced by a number, and \n> + <literal>Y</> represents years, \n> + <literal>M</> (in the date part) months,\n> + <literal>D</> months,\n> + <literal>H</> hours,\n> + <literal>M</> (in the time part) minutes,\n> + and <literal>S</> seconds.\n> + </para>\n> + \n> +\n> + <table id=\"interval-example-table\">\n> +\t <title>Interval Example</title>\n> +\t <tgroup cols=\"2\">\n> +\t\t<thead>\n> +\t\t <row>\n> +\t\t <entry>Traditional</entry>\n> +\t\t <entry>ISO-8601 time-interval</entry>\n> +\t\t </row>\n> +\t\t</thead>\n> +\t\t<tbody>\n> +\t\t <row>\n> +\t\t <entry>1 month</entry>\n> +\t\t <entry>P1M</entry>\n> +\t\t </row>\n> +\t\t <row>\n> +\t\t <entry>1 hour 30 minutes</entry>\n> +\t\t <entry>PT1H30M</entry>\n> +\t\t </row>\n> +\t\t <row>\n> +\t\t <entry>2 years 10 months 15 days 10 hours 30 minutes 20 seconds</entry>\n> +\t\t <entry>P2Y10M15DT10H30M20S</entry>\n> +\t\t </row>\n> +\t\t</tbody>\n> +\t </thead>\n> +\t </table>\n> +\t \n> + </para>\n> </sect3>\n> \n> <sect3>\n> <title>Special Values</title>\n> \n> <indexterm>\n> <primary>time</primary>\n> <secondary>constants</secondary>\n> </indexterm>\n> \n> Index: src/backend/utils/adt/datetime.c\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql-server/src/backend/utils/adt/datetime.c,v\n> retrieving revision 1.116\n> diff -u -1 -0 -r1.116 datetime.c\n> --- src/backend/utils/adt/datetime.c\t27 Aug 2003 23:29:28 -0000\t1.116\n> +++ src/backend/utils/adt/datetime.c\t8 Sep 2003 04:04:59 -0000\n> @@ -30,20 +30,21 @@\n> \t\t\t struct tm * tm, fsec_t *fsec, int *is2digits);\n> static int DecodeNumberField(int len, char *str,\n> \t\t\t\t int fmask, int *tmask,\n> \t\t\t\t struct tm * tm, fsec_t *fsec, int *is2digits);\n> static int DecodeTime(char *str, int fmask, int *tmask,\n> \t\t struct tm * tm, fsec_t *fsec);\n> static int\tDecodeTimezone(char *str, int *tzp);\n> static datetkn *datebsearch(char *key, datetkn *base, unsigned int nel);\n> static int\tDecodeDate(char *str, int fmask, int *tmask, struct tm * tm);\n> static void TrimTrailingZeros(char *str);\n> +static int DecodeISO8601Interval(char **field, int *ftype, int nf, int *dtype, struct tm * tm, fsec_t *fsec);\n> \n> \n> int\t\t\tday_tab[2][13] = {\n> \t{31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31, 0},\n> {31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31, 0}};\n> \n> char\t *months[] = {\"Jan\", \"Feb\", \"Mar\", \"Apr\", \"May\", \"Jun\",\n> \"Jul\", \"Aug\", \"Sep\", \"Oct\", \"Nov\", \"Dec\", NULL};\n> \n> char\t *days[] = {\"Sunday\", \"Monday\", \"Tuesday\", \"Wednesday\",\n> @@ -2872,30 +2873,271 @@\n> \t\t\tdefault:\n> \t\t\t\t*val = tp->value;\n> \t\t\t\tbreak;\n> \t\t}\n> \t}\n> \n> \treturn type;\n> }\n> \n> \n> +void 
adjust_fval(double fval,struct tm * tm, fsec_t *fsec, int scale);\n> +{\n> +\tint\tsec;\n> +\tfval\t *= scale;\n> +\tsec\t\t = fval;\n> +\ttm->tm_sec += sec;\n> +#ifdef HAVE_INT64_TIMESTAMP\n> +\t*fsec\t += ((fval - sec) * 1000000);\n> +#else\n> +\t*fsec\t += (fval - sec);\n> +#endif\n> +}\n> +\n> +\n> +/* DecodeISO8601Interval()\n> + *\n> + * Check if it's a ISO 8601 Section 5.5.4.2 \"Representation of\n> + * time-interval by duration only.\" \n> + * Basic extended format: PnYnMnDTnHnMnS\n> + * PnW\n> + * For more info.\n> + * http://www.astroclark.freeserve.co.uk/iso8601/index.html\n> + * ftp://ftp.qsl.net/pub/g1smd/154N362_.PDF\n> + *\n> + * Examples: P1D for 1 day\n> + * PT1H for 1 hour\n> + * P2Y6M7DT1H30M for 2 years, 6 months, 7 days 1 hour 30 min\n> + *\n> + * The first field is exactly \"p\" or \"pt\" it may be of this type.\n> + *\n> + * Returns -1 if the field is not of this type.\n> + *\n> + * It pretty strictly checks the spec, with the two exceptions\n> + * that a week field ('W') may coexist with other units, and that\n> + * this function allows decimals in fields other than the least\n> + * significant units.\n> + */\n> +int\n> +DecodeISO8601Interval(char **field, int *ftype, int nf, int *dtype, struct tm * tm, fsec_t *fsec) \n> +{\n> +\tchar\t *cp;\n> +\tint\t\t\tfmask = 0,\n> +\t\t\t\ttmask;\n> +\tint\t\t\tval;\n> +\tdouble\t\tfval;\n> +\tint\t\t\targ;\n> +\tint\t\t\tdatepart;\n> +\n> + /*\n> +\t * An ISO 8601 \"time-interval by duration only\" must start\n> +\t * with a 'P'. If it contains a date-part, 'p' will be the\n> +\t * only character in the field. If it contains no date part\n> +\t * it will contain exactly to characters 'PT' indicating a\n> +\t * time part.\n> +\t * Anything else is illegal and will be treated like a \n> +\t * traditional postgresql interval.\n> +\t */\n> + if (!(field[0][0] == 'p' &&\n> + ((field[0][1] == 0) || (field[0][1] == 't' && field[0][2] == 0))))\n> +\t{\n> +\t return -1;\n> +\t}\n> +\n> +\n> + /*\n> +\t * If the first field is exactly 1 character ('P'), it starts\n> +\t * with date elements. Otherwise it's two characters ('PT');\n> +\t * indicating it starts with a time part.\n> +\t */\n> +\tdatepart = (field[0][1] == 0);\n> +\n> +\t/*\n> +\t * Every value must have a unit, so we require an even\n> +\t * number of value/unit pairs. Therefore we require an\n> +\t * odd nubmer of fields, including the prefix 'P'.\n> +\t */\n> +\tif ((nf & 1) == 0)\n> +\t\treturn -1;\n> +\n> +\t/*\n> +\t * Process pairs of fields at a time.\n> +\t */\n> +\tfor (arg = 1 ; arg < nf ; arg+=2) \n> +\t{\n> +\t\tchar * value = field[arg ];\n> +\t\tchar * units = field[arg+1];\n> +\n> +\t\t/*\n> +\t\t * The value part must be a number.\n> +\t\t */\n> +\t\tif (ftype[arg] != DTK_NUMBER) \n> +\t\t\treturn -1;\n> +\n> +\t\t/*\n> +\t\t * extract the number, almost exactly like the non-ISO interval.\n> +\t\t */\n> +\t\tval = strtol(value, &cp, 10);\n> +\n> +\t\t/*\n> +\t\t * One difference from the normal postgresql interval below...\n> +\t\t * ISO 8601 states that \"Of these, the comma is the preferred \n> +\t\t * sign\" so I allow it here for locales that support it.\n> +\t\t * Note: Perhaps the old-style interval code below should\n> +\t\t * allow for this too, but I didn't want to risk backward\n> +\t\t * compatability.\n> +\t\t */\n> +\t\tif (*cp == '.' 
|| *cp == ',') \n> +\t\t{\n> +\t\t\tfval = strtod(cp, &cp);\n> +\t\t\tif (*cp != '\\0')\n> +\t\t\t\treturn -1;\n> +\n> +\t\t\tif (val < 0)\n> +\t\t\t\tfval = -(fval);\n> +\t\t}\n> +\t\telse if (*cp == '\\0')\n> +\t\t\tfval = 0;\n> +\t\telse\n> +\t\t\treturn -1;\n> +\n> +\n> +\t\tif (datepart)\n> +\t\t{\n> +\t\t\t/*\n> +\t\t\t * All the 8601 unit specifiers are 1 character, but may\n> +\t\t\t * be followed by a 'T' character if transitioning between\n> +\t\t\t * the date part and the time part. If it's not either\n> +\t\t\t * one character or two characters with the second being 't'\n> +\t\t\t * it's an error.\n> +\t\t\t */\n> +\t\t\tif (!(units[1] == 0 || (units[1] == 't' && units[2] == 0)))\n> +\t\t\t\treturn -1;\n> +\n> +\t\t\tif (units[1] == 't')\n> +\t\t\t\tdatepart = 0;\n> +\n> +\t\t\tswitch (units[0]) /* Y M D W */\n> +\t\t\t{\n> +\t\t\t\tcase 'd':\n> +\t\t\t\t\ttm->tm_mday += val;\n> +\t\t\t\t\tif (fval != 0)\n> +\t\t\t\t\t adjust_fval(fval,tm,fsec, 86400);\n> +\t\t\t\t\ttmask = ((fmask & DTK_M(DAY)) ? 0 : DTK_M(DAY));\n> +\t\t\t\t\tbreak;\n> +\n> +\t\t\t\tcase 'w':\n> +\t\t\t\t\ttm->tm_mday += val * 7;\n> +\t\t\t\t\tif (fval != 0)\n> +\t\t\t\t\t adjust_fval(fval,tm,fsec,7 * 86400);\n> +\t\t\t\t\ttmask = ((fmask & DTK_M(DAY)) ? 0 : DTK_M(DAY));\n> +\t\t\t\t\tbreak;\n> +\n> +\t\t\t\tcase 'm':\n> +\t\t\t\t\ttm->tm_mon += val;\n> +\t\t\t\t\tif (fval != 0)\n> +\t\t\t\t\t adjust_fval(fval,tm,fsec,30 * 86400);\n> +\t\t\t\t\ttmask = DTK_M(MONTH);\n> +\t\t\t\t\tbreak;\n> +\n> +\t\t\t\tcase 'y':\n> +\t\t\t\t\t/*\n> +\t\t\t\t\t * Why can fractional months produce seconds,\n> +\t\t\t\t\t * but fractional years can't? Well the older\n> +\t\t\t\t\t * interval code below has the same property\n> +\t\t\t\t\t * so this one follows the other one too.\n> +\t\t\t\t\t */\n> +\t\t\t\t\ttm->tm_year += val;\n> +\t\t\t\t\tif (fval != 0)\n> +\t\t\t\t\t\ttm->tm_mon += (fval * 12);\n> +\t\t\t\t\ttmask = ((fmask & DTK_M(YEAR)) ? 0 : DTK_M(YEAR));\n> +\t\t\t\t\tbreak;\n> +\n> +\t\t\t\tdefault:\n> +\t\t\t\t\treturn -1; /* invald date unit prefix */\n> +\t\t\t}\n> +\t\t}\n> +\t\telse\n> +\t\t{\n> +\t\t\t/*\n> +\t\t\t * ISO 8601 time part.\n> +\t\t\t * In the time part, only one-character\n> +\t\t\t * unit prefixes are allowed. If it's more\n> +\t\t\t * than one character, it's not a valid ISO 8601\n> +\t\t\t * time interval by duration.\n> +\t\t\t */\n> +\t\t\tif (units[1] != 0)\n> +\t\t\t\treturn -1;\n> +\n> +\t\t\tswitch (units[0]) /* H M S */\n> +\t\t\t{\n> +\t\t\t\tcase 's':\n> +\t\t\t\t\ttm->tm_sec += val;\n> +#ifdef HAVE_INT64_TIMESTAMP\n> +\t\t\t\t\t*fsec += (fval * 1000000);\n> +#else\n> +\t\t\t\t\t*fsec += fval;\n> +#endif\n> +\t\t\t\t\ttmask = DTK_M(SECOND);\n> +\t\t\t\t\tbreak;\n> +\n> +\t\t\t\tcase 'm':\n> +\t\t\t\t\ttm->tm_min += val;\n> +\t\t\t\t\tif (fval != 0)\n> +\t\t\t\t\t adjust_fval(fval,tm,fsec,60);\n> +\t\t\t\t\ttmask = DTK_M(MINUTE);\n> +\t\t\t\t\tbreak;\n> +\n> +\t\t\t\tcase 'h':\n> +\t\t\t\t\ttm->tm_hour += val;\n> +\t\t\t\t\tif (fval != 0)\n> +\t\t\t\t\t adjust_fval(fval,tm,fsec,3600);\n> +\t\t\t\t\ttmask = DTK_M(HOUR);\n> +\t\t\t\t\tbreak;\n> +\n> +\t\t\t\tdefault:\n> +\t\t\t\t\treturn -1; /* invald time unit prefix */\n> +\t\t\t}\n> +\t\t}\n> +\t\tfmask |= tmask;\n> +\t}\n> +\n> +\tif (*fsec != 0)\n> +\t{\n> +\t\tint\t\t\tsec;\n> +\n> +#ifdef HAVE_INT64_TIMESTAMP\n> +\t\tsec = (*fsec / INT64CONST(1000000));\n> +\t\t*fsec -= (sec * INT64CONST(1000000));\n> +#else\n> +\t\tTMODULO(*fsec, sec, 1e0);\n> +#endif\n> +\t\ttm->tm_sec += sec;\n> +\t}\n> +\treturn (fmask != 0) ? 
0 : -1;\n> +}\n> +\n> +\n> /* DecodeInterval()\n> * Interpret previously parsed fields for general time interval.\n> * Returns 0 if successful, DTERR code if bogus input detected.\n> *\n> * Allow \"date\" field DTK_DATE since this could be just\n> *\tan unsigned floating point number. - thomas 1997-11-16\n> *\n> * Allow ISO-style time span, with implicit units on number of days\n> *\tpreceding an hh:mm:ss field. - thomas 1998-04-30\n> + * \n> + * Allow ISO-8601 style \"Representation of time-interval by duration only\"\n> + * of the format 'PnYnMnDTnHnMnS' and 'PnW' - ron 2003-08-30\n> */\n> +\n> int\n> DecodeInterval(char **field, int *ftype, int nf, int *dtype, struct tm * tm, fsec_t *fsec)\n> {\n> \tint\t\t\tis_before = FALSE;\n> \tchar\t *cp;\n> \tint\t\t\tfmask = 0,\n> \t\t\t\ttmask,\n> \t\t\t\ttype;\n> \tint\t\t\ti;\n> \tint\t\t\tdterr;\n> @@ -2906,20 +3148,37 @@\n> \n> \ttype = IGNORE_DTF;\n> \ttm->tm_year = 0;\n> \ttm->tm_mon = 0;\n> \ttm->tm_mday = 0;\n> \ttm->tm_hour = 0;\n> \ttm->tm_min = 0;\n> \ttm->tm_sec = 0;\n> \t*fsec = 0;\n> \n> +\t/*\n> +\t * Check if it's a ISO 8601 Section 5.5.4.2 \"Representation of\n> + * time-interval by duration only.\" \n> +\t * Basic extended format: PnYnMnDTnHnMnS\n> +\t * PnW\n> +\t * http://www.astroclark.freeserve.co.uk/iso8601/index.html\n> +\t * ftp://ftp.qsl.net/pub/g1smd/154N362_.PDF\n> +\t * Examples: P1D for 1 day\n> +\t * PT1H for 1 hour\n> +\t * P2Y6M7DT1H30M for 2 years, 6 months, 7 days 1 hour 30 min\n> +\t *\n> +\t * The first field is exactly \"p\" or \"pt\" it may be of this type.\n> +\t */\n> +\tif (DecodeISO8601Interval(field,ftype,nf,dtype,tm,fsec) == 0) {\n> +\t return 0;\n> + }\n> +\n> \t/* read through list backwards to pick up units before values */\n> \tfor (i = nf - 1; i >= 0; i--)\n> \t{\n> \t\tswitch (ftype[i])\n> \t\t{\n> \t\t\tcase DTK_TIME:\n> \t\t\t\tdterr = DecodeTime(field[i], fmask, &tmask, tm, fsec);\n> \t\t\t\tif (dterr)\n> \t\t\t\t\treturn dterr;\n> \t\t\t\ttype = DTK_DAY;\n> @@ -2983,20 +3242,21 @@\n> \t\t\t\t}\n> \t\t\t\t/* DROP THROUGH */\n> \n> \t\t\tcase DTK_DATE:\n> \t\t\tcase DTK_NUMBER:\n> \t\t\t\tval = strtol(field[i], &cp, 10);\n> \n> \t\t\t\tif (type == IGNORE_DTF)\n> \t\t\t\t\ttype = DTK_SECOND;\n> \n> +\t\t\t\t/* should this allow ',' for locales that use it ? */\n> \t\t\t\tif (*cp == '.')\n> \t\t\t\t{\n> \t\t\t\t\tfval = strtod(cp, &cp);\n> \t\t\t\t\tif (*cp != '\\0')\n> \t\t\t\t\t\treturn DTERR_BAD_FORMAT;\n> \n> \t\t\t\t\tif (val < 0)\n> \t\t\t\t\t\tfval = -(fval);\n> \t\t\t\t}\n> \t\t\t\telse if (*cp == '\\0')\n> \n> ===================================================================\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 30 Nov 2003 23:52:10 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ISO 8601 \"Time Intervals\" of the \"format with time-unit" }, { "msg_contents": "\nHere is an email on the open issues.\n\n---------------------------------------------------------------------------\n\nRon Mayer wrote:\n> Tom wrote...\n> > At this point it should move to pghackers, I think.\n> \n> Background for pghackers first, open issues below...\n> \n> Over on pgpatches we've been discussing ISO syntax for\n> ?time intervals? of the ?format with time-unit designators?.\n> http://archives.postgresql.org/pgsql-patches/2003-09/msg00103.php\n> A short summary is that I?ve submitted a patch that\n> accepts intervals of this format..\n> Postgresql interval: ISO8601 Interval\n> ---------------------------------------------------\n> '1 year 6 months' 'P1Y6M'\n> '3 hours 25 minutes 42 seconds' 'PT3H25M42S'\n> The final draft is here\n> ftp://ftp.qsl.net/pub/g1smd/154N362_.PDF\n> \n> This patch was backward-compatable, but further improvements\n> discussed on patches may break compatability so I wanted to\n> discuss them here before implementing them. I?ll also\n> be submitting a new datestyle ?iso8601? to output these intervals.\n> \n> Open issues:\n> \n> 1. Postgresql supported a shorthand for intervals that had\n> a similar, but not compatable syntax:\n> Interval ISO Existing postgres\n> 8601 shorthand\n> -----------------------------------------------------\n> '1 year 1 minute' 'P1YT1M' '1Y1M'\n> '1 year 1 month' 'P1Y1M' N/A\n> \n> The current thinking of the thread in pgpatches is to remove\n> the existing (undocumented) syntax.\n> \n> Removing this will break backward compatability if anyone\n> used this feature. Let me know if you needed it.\n> \n> 2. Some of the parsing for intervals is inconsistant and\n> confusing. For example, note that ?0.01 years? is\n> less than ?0.01 months?.\n> \n> betadb=# select '0.01 month'::interval as hundredth_of_month,\n> betadb-# '0.01 year'::interval as hundredth_of_year;\n> hundredth_of_month | hundredth_of_year\n> --------------------+-------------------\n> 07:12:00 | 00:00:00\n> \n> This occurs because the current interval parsing rounds\n> fractional years to the month, but fractional months\n> to the fraction of a second.\n> \n> The current thinking on the thread in patches is\n> at the very least to make these consistant, but with\n> some open-issues because months aren?t a fixed number\n> of days, and days aren?t a fixed number of seconds.\n> \n> The easiest and most minimal change would be to assume\n> that any fractional part automatically gets turned\n> into seconds, assuming things like 30 seconds/month,\n> 24 hrs/day. Since all units except years work that way\n> today, it?d would have the least impact on existing code.\n> \n> A probably better way that Tom recommended would remember\n> fractional months and fractional days. This has the\n> advantage that unlike today,\n> ?.5 months?::interval + ?.5 months?::interval\n> would then equal 1 month.\n> \n> So what should ?.5 years? be?\n> \n> Today, it?s ?6 mons?. But I could just as easily\n> argue that it should be 365.2425/2 days, or 4382.91\n> seconds. Each of these will be different (the last\n> two are different durring daylight savings).\n> \n> 3. This all is based on the final draft standard of\n> ISO 8601, but I haven?t seen the actual expensive\n> standard. 
If anyone has it handy...\n> \n> Also, I?m curious to know what if anything the SQL\n> spec says about intervals and units. Any pointers.\n> \n> Ron\n> \n> Any other interval annoyances I should hit at the same time?\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 30 Nov 2003 23:53:13 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] ISO 8601 \"Time Intervals\" of the \"format with" }, { "msg_contents": "\nAnd another open issues email.\n\n---------------------------------------------------------------------------\n\nRon Mayer wrote:\n> \n> Tom wrote:\n> > Peter Eisentraut <[email protected]> writes:\n> > > Tom Lane writes:\n> > >> Yes, but by the same token \"iso8601\" isn't specific enough either.\n> \n> ISO 8601 gives more specific names.\n> \n> ISO 8601 Basic Format: P2Y10M15DT10H20M30S\n> ISO 8601 Alternative Format: P00021015T102030\n> ISO 8601 Extended Format: P0002-10-15T10:20:30\n> \n> In a way, the Extended Format is kinda nice, since it?s\n> almost human readable.\n> \n> I could put in both the basic and extended ones, and\n> call the dateformats ?iso8601basic? and ?iso8601extended?.\n> The negative is that to do ?iso8601basic? right, I?d also\n> have to tweak the ?date? and ?time? parts of the code too.\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 30 Nov 2003 23:53:22 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ISO 8601 'Time Intervals' of the 'format with time-unit" }, { "msg_contents": "Bruce Momjian writes:\n\n> Is this ready for application? It looks good to me. However, there is\n> an \"Open issues\" section.\n\nIt would be more useful to implement the SQL standard for intervals first\ninstead of inventing more nonstandard formats for it.\n\n>\n> ---------------------------------------------------------------------------\n>\n> Ron Mayer wrote:\n> > Short summary:\n> >\n> > This patch allows ISO 8601 \"time intervals\" using the \"format\n> > with time-unit designators\" to specify postgresql \"intervals\".\n> >\n> > Below I have (A) What these time intervals are, (B) What I\n> > modified to support them, (C) Issues with intervals I want\n> > to bring up, and (D) a patch supporting them.\n> >\n> > It's helpful to me. Any feedback is appreciated. If you\n> > did want to consider including it, let me know what to clean\n> > up. 
If not, I thought I'd just put it here if anyone else finds\n> > it useful too.\n> >\n> > Thanks for your time,\n> >\n> > Ron Mayer\n> >\n> > Longer:\n> >\n> > (A) What these intervals are.\n> >\n> > ISO 8601, the standard from which PostgreSQL gets some of it's\n> > time syntax, also has a specification for \"time-intervals\".\n> >\n> > In particular, section 5.5.4.2 has a \"Representation of\n> > time-interval by duration only\" which I believe maps\n> > nicely to ISO intervals.\n> >\n> > Compared to the ISO 8601 time interval specification, the\n> > postgresql interval syntax is quite verbose. For example:\n> >\n> > Postgresql interval: ISO8601 Interval\n> > ---------------------------------------------------\n> > '1 year 6 months' 'P1Y6M'\n> > '3 hours 25 minutes 42 seconds' 'PT3H25M42S'\n> >\n> > Yeah, it's uglier, but it sure is short which can make\n> > for quicker typing and shorter scripts, and if for some\n> > strange reason you had an application using this format\n> > it's nice not to have to translate.\n> >\n> > The syntax is as follows:\n> > Basic extended format: PnYnMnDTnHnMnS\n> > PnW\n> >\n> > Where everything before the \"T\" is a date-part and everything\n> > after is a time-part. W is for weeks.\n> > In the date-part, Y=Year, M=Month, D=Day\n> > In the time-part, H=Hour, M=Minute, S=Second\n> >\n> > Much more info can be found from the draft standard\n> > ftp://ftp.qsl.net/pub/g1smd/154N362_.PDF\n> > The final standard's only available for $$$ so I didn't\n> > look at it. Some other sites imply that this part didn't\n> > change from the last draft to the standard.\n> >\n> >\n> > (B) This change was made by adding two functions to \"datetime.c\"\n> > next to where DecodeInterval parses the normal interval syntax.\n> >\n> > A total of 313 lines were added, including comments and sgml docs.\n> > Of these only 136 are actual code, the rest, comments, whitespace, etc.\n> >\n> >\n> > One new function \"DecodeISO8601Interval\" follows the style of\n> > \"DecodeInterval\" below it, and trys to strictly follow the ISO\n> > syntax. If it doesn't match, it'll return -1 and the old syntax\n> > will be checked as before.\n> >\n> > The first test (first character of the first field must be 'P',\n> > and second character must be 'T' or '\\0') should be fast so I don't\n> > think this will impact performance of existing code.\n> >\n> >\n> > The second function (\"adjust_fval\") is just a small helper-function\n> > to remove some of the cut&paste style that DecodeInterval used.\n> >\n> > It seems to work.\n> > =======================================================================\n> > betadb=# select 'P1M15DT12H30M7S'::interval;\n> > interval\n> > ------------------------\n> > 1 mon 15 days 12:30:07\n> > (1 row)\n> >\n> > betadb=# select '1 month 15 days 12 hours 30 minutes 7 seconds'::interval;\n> > \t interval\n> > ------------------------\n> > 1 mon 15 days 12:30:07\n> > (1 row)\n> > =====================================================================\n> >\n> >\n> >\n> > (C) Open issues with intervals, and questions I'd like to ask.\n> >\n> > 1. DecodeInterval seems to have a hardcoded '.' for specifying\n> > fractional times. ISO 8601 states that both '.' and ',' are\n> > ok, but \"of these, the comma is the preferred sign\".\n> >\n> > In DecodeISO8601Interval I loosened the test to allow\n> > both but left it as it was in DecodeInterval. Should\n> > both be changed to make them more consistant?\n> >\n> > 2. 
In \"DecodeInterval\", fractional weeks and fractional months\n> > can produce seconds; but fractional years can not (rounded\n> > to months). I didn't understand the reasoning for this, so\n> > I left it the same, and followed the same convention for\n> > ISO intervals. Should I change this?\n> >\n> > 3. I could save a bunch of copy-paste-lines-of-code from the\n> > pre-existing DecodeInterval by calling the adjust_fval helper\n> > function. The tradeoff is a few extra function-calls when\n> > decoding an interval. However I didn't want to risk changes\n> > to the existing part unless you guys encourage me to do so.\n> >\n> >\n> > (D) The patch.\n> >\n> >\n> > Index: doc/src/sgml/datatype.sgml\n> > ===================================================================\n> > RCS file: /projects/cvsroot/pgsql-server/doc/src/sgml/datatype.sgml,v\n> > retrieving revision 1.123\n> > diff -u -1 -0 -r1.123 datatype.sgml\n> > --- doc/src/sgml/datatype.sgml\t31 Aug 2003 17:32:18 -0000\t1.123\n> > +++ doc/src/sgml/datatype.sgml\t8 Sep 2003 04:04:58 -0000\n> > @@ -1735,20 +1735,71 @@\n> > Quantities of days, hours, minutes, and seconds can be specified without\n> > explicit unit markings. For example, <literal>'1 12:59:10'</> is read\n> > the same as <literal>'1 day 12 hours 59 min 10 sec'</>.\n> > </para>\n> >\n> > <para>\n> > The optional precision\n> > <replaceable>p</replaceable> should be between 0 and 6, and\n> > defaults to the precision of the input literal.\n> > </para>\n> > +\n> > +\n> > + <para>\n> > + Alternatively, <type>interval</type> values can be written as\n> > + ISO 8601 time intervals, using the \"Format with time-unit designators\".\n> > + This format always starts with the character <literal>'P'</>, followed\n> > + by a string of values followed by single character time-unit designators.\n> > + A <literal>'T'</> separates the date and time parts of the interval.\n> > + </para>\n> > +\n> > + <para>\n> > + Format: PnYnMnDTnHnMnS\n> > + </para>\n> > + <para>\n> > + In this format, <literal>'n'</> gets replaced by a number, and\n> > + <literal>Y</> represents years,\n> > + <literal>M</> (in the date part) months,\n> > + <literal>D</> months,\n> > + <literal>H</> hours,\n> > + <literal>M</> (in the time part) minutes,\n> > + and <literal>S</> seconds.\n> > + </para>\n> > +\n> > +\n> > + <table id=\"interval-example-table\">\n> > +\t <title>Interval Example</title>\n> > +\t <tgroup cols=\"2\">\n> > +\t\t<thead>\n> > +\t\t <row>\n> > +\t\t <entry>Traditional</entry>\n> > +\t\t <entry>ISO-8601 time-interval</entry>\n> > +\t\t </row>\n> > +\t\t</thead>\n> > +\t\t<tbody>\n> > +\t\t <row>\n> > +\t\t <entry>1 month</entry>\n> > +\t\t <entry>P1M</entry>\n> > +\t\t </row>\n> > +\t\t <row>\n> > +\t\t <entry>1 hour 30 minutes</entry>\n> > +\t\t <entry>PT1H30M</entry>\n> > +\t\t </row>\n> > +\t\t <row>\n> > +\t\t <entry>2 years 10 months 15 days 10 hours 30 minutes 20 seconds</entry>\n> > +\t\t <entry>P2Y10M15DT10H30M20S</entry>\n> > +\t\t </row>\n> > +\t\t</tbody>\n> > +\t </thead>\n> > +\t </table>\n> > +\n> > + </para>\n> > </sect3>\n> >\n> > <sect3>\n> > <title>Special Values</title>\n> >\n> > <indexterm>\n> > <primary>time</primary>\n> > <secondary>constants</secondary>\n> > </indexterm>\n> >\n> > Index: src/backend/utils/adt/datetime.c\n> > ===================================================================\n> > RCS file: /projects/cvsroot/pgsql-server/src/backend/utils/adt/datetime.c,v\n> > retrieving revision 1.116\n> > diff -u -1 -0 -r1.116 datetime.c\n> > --- 
src/backend/utils/adt/datetime.c\t27 Aug 2003 23:29:28 -0000\t1.116\n> > +++ src/backend/utils/adt/datetime.c\t8 Sep 2003 04:04:59 -0000\n> > @@ -30,20 +30,21 @@\n> > \t\t\t struct tm * tm, fsec_t *fsec, int *is2digits);\n> > static int DecodeNumberField(int len, char *str,\n> > \t\t\t\t int fmask, int *tmask,\n> > \t\t\t\t struct tm * tm, fsec_t *fsec, int *is2digits);\n> > static int DecodeTime(char *str, int fmask, int *tmask,\n> > \t\t struct tm * tm, fsec_t *fsec);\n> > static int\tDecodeTimezone(char *str, int *tzp);\n> > static datetkn *datebsearch(char *key, datetkn *base, unsigned int nel);\n> > static int\tDecodeDate(char *str, int fmask, int *tmask, struct tm * tm);\n> > static void TrimTrailingZeros(char *str);\n> > +static int DecodeISO8601Interval(char **field, int *ftype, int nf, int *dtype, struct tm * tm, fsec_t *fsec);\n> >\n> >\n> > int\t\t\tday_tab[2][13] = {\n> > \t{31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31, 0},\n> > {31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31, 0}};\n> >\n> > char\t *months[] = {\"Jan\", \"Feb\", \"Mar\", \"Apr\", \"May\", \"Jun\",\n> > \"Jul\", \"Aug\", \"Sep\", \"Oct\", \"Nov\", \"Dec\", NULL};\n> >\n> > char\t *days[] = {\"Sunday\", \"Monday\", \"Tuesday\", \"Wednesday\",\n> > @@ -2872,30 +2873,271 @@\n> > \t\t\tdefault:\n> > \t\t\t\t*val = tp->value;\n> > \t\t\t\tbreak;\n> > \t\t}\n> > \t}\n> >\n> > \treturn type;\n> > }\n> >\n> >\n> > +void adjust_fval(double fval,struct tm * tm, fsec_t *fsec, int scale);\n> > +{\n> > +\tint\tsec;\n> > +\tfval\t *= scale;\n> > +\tsec\t\t = fval;\n> > +\ttm->tm_sec += sec;\n> > +#ifdef HAVE_INT64_TIMESTAMP\n> > +\t*fsec\t += ((fval - sec) * 1000000);\n> > +#else\n> > +\t*fsec\t += (fval - sec);\n> > +#endif\n> > +}\n> > +\n> > +\n> > +/* DecodeISO8601Interval()\n> > + *\n> > + * Check if it's a ISO 8601 Section 5.5.4.2 \"Representation of\n> > + * time-interval by duration only.\"\n> > + * Basic extended format: PnYnMnDTnHnMnS\n> > + * PnW\n> > + * For more info.\n> > + * http://www.astroclark.freeserve.co.uk/iso8601/index.html\n> > + * ftp://ftp.qsl.net/pub/g1smd/154N362_.PDF\n> > + *\n> > + * Examples: P1D for 1 day\n> > + * PT1H for 1 hour\n> > + * P2Y6M7DT1H30M for 2 years, 6 months, 7 days 1 hour 30 min\n> > + *\n> > + * The first field is exactly \"p\" or \"pt\" it may be of this type.\n> > + *\n> > + * Returns -1 if the field is not of this type.\n> > + *\n> > + * It pretty strictly checks the spec, with the two exceptions\n> > + * that a week field ('W') may coexist with other units, and that\n> > + * this function allows decimals in fields other than the least\n> > + * significant units.\n> > + */\n> > +int\n> > +DecodeISO8601Interval(char **field, int *ftype, int nf, int *dtype, struct tm * tm, fsec_t *fsec)\n> > +{\n> > +\tchar\t *cp;\n> > +\tint\t\t\tfmask = 0,\n> > +\t\t\t\ttmask;\n> > +\tint\t\t\tval;\n> > +\tdouble\t\tfval;\n> > +\tint\t\t\targ;\n> > +\tint\t\t\tdatepart;\n> > +\n> > + /*\n> > +\t * An ISO 8601 \"time-interval by duration only\" must start\n> > +\t * with a 'P'. If it contains a date-part, 'p' will be the\n> > +\t * only character in the field. 
If it contains no date part\n> > +\t * it will contain exactly to characters 'PT' indicating a\n> > +\t * time part.\n> > +\t * Anything else is illegal and will be treated like a\n> > +\t * traditional postgresql interval.\n> > +\t */\n> > + if (!(field[0][0] == 'p' &&\n> > + ((field[0][1] == 0) || (field[0][1] == 't' && field[0][2] == 0))))\n> > +\t{\n> > +\t return -1;\n> > +\t}\n> > +\n> > +\n> > + /*\n> > +\t * If the first field is exactly 1 character ('P'), it starts\n> > +\t * with date elements. Otherwise it's two characters ('PT');\n> > +\t * indicating it starts with a time part.\n> > +\t */\n> > +\tdatepart = (field[0][1] == 0);\n> > +\n> > +\t/*\n> > +\t * Every value must have a unit, so we require an even\n> > +\t * number of value/unit pairs. Therefore we require an\n> > +\t * odd nubmer of fields, including the prefix 'P'.\n> > +\t */\n> > +\tif ((nf & 1) == 0)\n> > +\t\treturn -1;\n> > +\n> > +\t/*\n> > +\t * Process pairs of fields at a time.\n> > +\t */\n> > +\tfor (arg = 1 ; arg < nf ; arg+=2)\n> > +\t{\n> > +\t\tchar * value = field[arg ];\n> > +\t\tchar * units = field[arg+1];\n> > +\n> > +\t\t/*\n> > +\t\t * The value part must be a number.\n> > +\t\t */\n> > +\t\tif (ftype[arg] != DTK_NUMBER)\n> > +\t\t\treturn -1;\n> > +\n> > +\t\t/*\n> > +\t\t * extract the number, almost exactly like the non-ISO interval.\n> > +\t\t */\n> > +\t\tval = strtol(value, &cp, 10);\n> > +\n> > +\t\t/*\n> > +\t\t * One difference from the normal postgresql interval below...\n> > +\t\t * ISO 8601 states that \"Of these, the comma is the preferred\n> > +\t\t * sign\" so I allow it here for locales that support it.\n> > +\t\t * Note: Perhaps the old-style interval code below should\n> > +\t\t * allow for this too, but I didn't want to risk backward\n> > +\t\t * compatability.\n> > +\t\t */\n> > +\t\tif (*cp == '.' || *cp == ',')\n> > +\t\t{\n> > +\t\t\tfval = strtod(cp, &cp);\n> > +\t\t\tif (*cp != '\\0')\n> > +\t\t\t\treturn -1;\n> > +\n> > +\t\t\tif (val < 0)\n> > +\t\t\t\tfval = -(fval);\n> > +\t\t}\n> > +\t\telse if (*cp == '\\0')\n> > +\t\t\tfval = 0;\n> > +\t\telse\n> > +\t\t\treturn -1;\n> > +\n> > +\n> > +\t\tif (datepart)\n> > +\t\t{\n> > +\t\t\t/*\n> > +\t\t\t * All the 8601 unit specifiers are 1 character, but may\n> > +\t\t\t * be followed by a 'T' character if transitioning between\n> > +\t\t\t * the date part and the time part. If it's not either\n> > +\t\t\t * one character or two characters with the second being 't'\n> > +\t\t\t * it's an error.\n> > +\t\t\t */\n> > +\t\t\tif (!(units[1] == 0 || (units[1] == 't' && units[2] == 0)))\n> > +\t\t\t\treturn -1;\n> > +\n> > +\t\t\tif (units[1] == 't')\n> > +\t\t\t\tdatepart = 0;\n> > +\n> > +\t\t\tswitch (units[0]) /* Y M D W */\n> > +\t\t\t{\n> > +\t\t\t\tcase 'd':\n> > +\t\t\t\t\ttm->tm_mday += val;\n> > +\t\t\t\t\tif (fval != 0)\n> > +\t\t\t\t\t adjust_fval(fval,tm,fsec, 86400);\n> > +\t\t\t\t\ttmask = ((fmask & DTK_M(DAY)) ? 0 : DTK_M(DAY));\n> > +\t\t\t\t\tbreak;\n> > +\n> > +\t\t\t\tcase 'w':\n> > +\t\t\t\t\ttm->tm_mday += val * 7;\n> > +\t\t\t\t\tif (fval != 0)\n> > +\t\t\t\t\t adjust_fval(fval,tm,fsec,7 * 86400);\n> > +\t\t\t\t\ttmask = ((fmask & DTK_M(DAY)) ? 
0 : DTK_M(DAY));\n> > +\t\t\t\t\tbreak;\n> > +\n> > +\t\t\t\tcase 'm':\n> > +\t\t\t\t\ttm->tm_mon += val;\n> > +\t\t\t\t\tif (fval != 0)\n> > +\t\t\t\t\t adjust_fval(fval,tm,fsec,30 * 86400);\n> > +\t\t\t\t\ttmask = DTK_M(MONTH);\n> > +\t\t\t\t\tbreak;\n> > +\n> > +\t\t\t\tcase 'y':\n> > +\t\t\t\t\t/*\n> > +\t\t\t\t\t * Why can fractional months produce seconds,\n> > +\t\t\t\t\t * but fractional years can't? Well the older\n> > +\t\t\t\t\t * interval code below has the same property\n> > +\t\t\t\t\t * so this one follows the other one too.\n> > +\t\t\t\t\t */\n> > +\t\t\t\t\ttm->tm_year += val;\n> > +\t\t\t\t\tif (fval != 0)\n> > +\t\t\t\t\t\ttm->tm_mon += (fval * 12);\n> > +\t\t\t\t\ttmask = ((fmask & DTK_M(YEAR)) ? 0 : DTK_M(YEAR));\n> > +\t\t\t\t\tbreak;\n> > +\n> > +\t\t\t\tdefault:\n> > +\t\t\t\t\treturn -1; /* invald date unit prefix */\n> > +\t\t\t}\n> > +\t\t}\n> > +\t\telse\n> > +\t\t{\n> > +\t\t\t/*\n> > +\t\t\t * ISO 8601 time part.\n> > +\t\t\t * In the time part, only one-character\n> > +\t\t\t * unit prefixes are allowed. If it's more\n> > +\t\t\t * than one character, it's not a valid ISO 8601\n> > +\t\t\t * time interval by duration.\n> > +\t\t\t */\n> > +\t\t\tif (units[1] != 0)\n> > +\t\t\t\treturn -1;\n> > +\n> > +\t\t\tswitch (units[0]) /* H M S */\n> > +\t\t\t{\n> > +\t\t\t\tcase 's':\n> > +\t\t\t\t\ttm->tm_sec += val;\n> > +#ifdef HAVE_INT64_TIMESTAMP\n> > +\t\t\t\t\t*fsec += (fval * 1000000);\n> > +#else\n> > +\t\t\t\t\t*fsec += fval;\n> > +#endif\n> > +\t\t\t\t\ttmask = DTK_M(SECOND);\n> > +\t\t\t\t\tbreak;\n> > +\n> > +\t\t\t\tcase 'm':\n> > +\t\t\t\t\ttm->tm_min += val;\n> > +\t\t\t\t\tif (fval != 0)\n> > +\t\t\t\t\t adjust_fval(fval,tm,fsec,60);\n> > +\t\t\t\t\ttmask = DTK_M(MINUTE);\n> > +\t\t\t\t\tbreak;\n> > +\n> > +\t\t\t\tcase 'h':\n> > +\t\t\t\t\ttm->tm_hour += val;\n> > +\t\t\t\t\tif (fval != 0)\n> > +\t\t\t\t\t adjust_fval(fval,tm,fsec,3600);\n> > +\t\t\t\t\ttmask = DTK_M(HOUR);\n> > +\t\t\t\t\tbreak;\n> > +\n> > +\t\t\t\tdefault:\n> > +\t\t\t\t\treturn -1; /* invald time unit prefix */\n> > +\t\t\t}\n> > +\t\t}\n> > +\t\tfmask |= tmask;\n> > +\t}\n> > +\n> > +\tif (*fsec != 0)\n> > +\t{\n> > +\t\tint\t\t\tsec;\n> > +\n> > +#ifdef HAVE_INT64_TIMESTAMP\n> > +\t\tsec = (*fsec / INT64CONST(1000000));\n> > +\t\t*fsec -= (sec * INT64CONST(1000000));\n> > +#else\n> > +\t\tTMODULO(*fsec, sec, 1e0);\n> > +#endif\n> > +\t\ttm->tm_sec += sec;\n> > +\t}\n> > +\treturn (fmask != 0) ? 0 : -1;\n> > +}\n> > +\n> > +\n> > /* DecodeInterval()\n> > * Interpret previously parsed fields for general time interval.\n> > * Returns 0 if successful, DTERR code if bogus input detected.\n> > *\n> > * Allow \"date\" field DTK_DATE since this could be just\n> > *\tan unsigned floating point number. - thomas 1997-11-16\n> > *\n> > * Allow ISO-style time span, with implicit units on number of days\n> > *\tpreceding an hh:mm:ss field. 
- thomas 1998-04-30\n> > + *\n> > + * Allow ISO-8601 style \"Representation of time-interval by duration only\"\n> > + * of the format 'PnYnMnDTnHnMnS' and 'PnW' - ron 2003-08-30\n> > */\n> > +\n> > int\n> > DecodeInterval(char **field, int *ftype, int nf, int *dtype, struct tm * tm, fsec_t *fsec)\n> > {\n> > \tint\t\t\tis_before = FALSE;\n> > \tchar\t *cp;\n> > \tint\t\t\tfmask = 0,\n> > \t\t\t\ttmask,\n> > \t\t\t\ttype;\n> > \tint\t\t\ti;\n> > \tint\t\t\tdterr;\n> > @@ -2906,20 +3148,37 @@\n> >\n> > \ttype = IGNORE_DTF;\n> > \ttm->tm_year = 0;\n> > \ttm->tm_mon = 0;\n> > \ttm->tm_mday = 0;\n> > \ttm->tm_hour = 0;\n> > \ttm->tm_min = 0;\n> > \ttm->tm_sec = 0;\n> > \t*fsec = 0;\n> >\n> > +\t/*\n> > +\t * Check if it's a ISO 8601 Section 5.5.4.2 \"Representation of\n> > + * time-interval by duration only.\"\n> > +\t * Basic extended format: PnYnMnDTnHnMnS\n> > +\t * PnW\n> > +\t * http://www.astroclark.freeserve.co.uk/iso8601/index.html\n> > +\t * ftp://ftp.qsl.net/pub/g1smd/154N362_.PDF\n> > +\t * Examples: P1D for 1 day\n> > +\t * PT1H for 1 hour\n> > +\t * P2Y6M7DT1H30M for 2 years, 6 months, 7 days 1 hour 30 min\n> > +\t *\n> > +\t * The first field is exactly \"p\" or \"pt\" it may be of this type.\n> > +\t */\n> > +\tif (DecodeISO8601Interval(field,ftype,nf,dtype,tm,fsec) == 0) {\n> > +\t return 0;\n> > + }\n> > +\n> > \t/* read through list backwards to pick up units before values */\n> > \tfor (i = nf - 1; i >= 0; i--)\n> > \t{\n> > \t\tswitch (ftype[i])\n> > \t\t{\n> > \t\t\tcase DTK_TIME:\n> > \t\t\t\tdterr = DecodeTime(field[i], fmask, &tmask, tm, fsec);\n> > \t\t\t\tif (dterr)\n> > \t\t\t\t\treturn dterr;\n> > \t\t\t\ttype = DTK_DAY;\n> > @@ -2983,20 +3242,21 @@\n> > \t\t\t\t}\n> > \t\t\t\t/* DROP THROUGH */\n> >\n> > \t\t\tcase DTK_DATE:\n> > \t\t\tcase DTK_NUMBER:\n> > \t\t\t\tval = strtol(field[i], &cp, 10);\n> >\n> > \t\t\t\tif (type == IGNORE_DTF)\n> > \t\t\t\t\ttype = DTK_SECOND;\n> >\n> > +\t\t\t\t/* should this allow ',' for locales that use it ? */\n> > \t\t\t\tif (*cp == '.')\n> > \t\t\t\t{\n> > \t\t\t\t\tfval = strtod(cp, &cp);\n> > \t\t\t\t\tif (*cp != '\\0')\n> > \t\t\t\t\t\treturn DTERR_BAD_FORMAT;\n> >\n> > \t\t\t\t\tif (val < 0)\n> > \t\t\t\t\t\tfval = -(fval);\n> > \t\t\t\t}\n> > \t\t\t\telse if (*cp == '\\0')\n> >\n> > ===================================================================\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> >\n> > http://archives.postgresql.org\n> >\n>\n>\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Mon, 1 Dec 2003 07:20:21 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ISO 8601 \"Time Intervals\" of the \"format with time-unit" }, { "msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > Is this ready for application? It looks good to me. However, there is\n> > an \"Open issues\" section.\n> \n> It would be more useful to implement the SQL standard for intervals first\n> instead of inventing more nonstandard formats for it.\n\nOK, patch removed from queue.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 1 Dec 2003 09:12:35 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ISO 8601 \"Time Intervals\" of the \"format with time-unit" }, { "msg_contents": "> -----Original Message-----\n> \n> Is this ready for application? It looks good to me. However, there is\n> an \"Open issues\" section.\n\nIn my mind there were two categories of open issues\n a) ones that are 100% backward (such as the comment about \n outputting this format)\nand\n b) ones that aren't (such as deprecating the current\n postgresql shorthand of \n '1Y1M'::interval = 1 year 1 minute\n in favor of the ISO-8601\n 'P1Y1M'::interval = 1 year 1 month.\n\nAttached is a patch that addressed all the discussed issues that\ndid not break backward compatability, including the ability to\noutput ISO-8601 compliant intervals by setting datestyle to\niso8601basic.\n\n Ron", "msg_date": "Mon, 1 Dec 2003 12:50:47 -0800", "msg_from": "\"Ron Mayer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ISO 8601 \"Time Intervals\" of the \"format with time-unit\n\tdeignators\"" }, { "msg_contents": "Peter wrote:\n> \n> It would be more useful to implement the SQL standard for intervals first\n> instead of inventing more nonstandard formats for it.\n\nMuch of the postgresql docs talks about ISO-8601 formats, so I would think\nof the patch more as a standards-based improvemnt for the current interval \nshortand. \n\nFor example, where today postgresql accepts \"1Y1M\" as '1 year 1 minute',\nwith the patch, the ISO-8601-standard \"P1Y1M\" would mean '1 year 1 month'.\n\nI would be happy to implement the SQL-standard-intervals as well if \nsomeone can point me to that spec. It's just that the system I worked\nwith happend to exchange data with ISO8601 time intervals.\n\n Ron Mayer\n\n[Moderators of psql-patches... I first replied to Peter from a\nseparate (personal) account. I think this reply is now in the \nmoderation queue). In retrospect I thought it might be better if \nthe archives had the whole thread from the original email address. 
\nIf it's not too late, could you reject my post in the moderation queue?\nIf not, sorry for the spam.]\n\n\n", "msg_date": "Mon, 1 Dec 2003 13:18:20 -0800", "msg_from": "\"Ron Mayer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ISO 8601 \"Time Intervals\" of the \"format with time-unit" }, { "msg_contents": "[sNip]\n> ISO 8601 gives more specific names.\n> \n> ISO 8601 Basic Format: P2Y10M15DT10H20M30S\n> ISO 8601 Alternative Format: P00021015T102030\n> ISO 8601 Extended Format: P0002-10-15T10:20:30\n> \n> In a way, the Extended Format is kinda nice, since it's\n> almost human readable.\n> \n> I could put in both the basic and extended ones, and\n> call the dateformats 'iso8601basic' and 'iso8601extended'.\n> The negative is that to do 'iso8601basic' right, I'd also\n> have to tweak the 'date' and 'time' parts of the code too.\n\n \tPerhaps all three formats should be supported, and if the following \nnames were all valid things could be simplified further too:\n\n \t \tiso8601basic\n \t \tiso8601bas\n \t \tiso8601alternative\n \t \tiso8601alt\n \t \tiso8601extended\n \t \tiso8601ext\n\n \tThe reason for allowing shorter names is to simplify database \nmanagement for anyone who may need to store the format name in a column for \nsome reason (I can't think of one now, but I get a feeling that someone \nwill want to do this type of thing in the future).\n\n \tFor that matter, the first letter could be used instead of the first \nthree for the short versions. Any thoughts on this?\n\n-- \nRandolf Richardson - [email protected]\nVancouver, British Columbia, Canada\n\nPlease do not eMail me directly when responding\nto my postings in the newsgroups.\n", "msg_date": "Wed, 3 Dec 2003 10:20:23 +0000 (UTC)", "msg_from": "Randolf Richardson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ISO 8601 'Time Intervals' of the 'format with time-unit\n\tdeignators'" }, { "msg_contents": "Randolf Richardson wrote:\n> [sNip]\n> > ISO 8601 gives more specific names.\n> > \n> > ISO 8601 Basic Format: P2Y10M15DT10H20M30S\n> > ISO 8601 Alternative Format: P00021015T102030\n> > ISO 8601 Extended Format: P0002-10-15T10:20:30\n> > \n> > In a way, the Extended Format is kinda nice, since it's\n> > almost human readable.\n> > \n> > I could put in both the basic and extended ones, and\n> > call the dateformats 'iso8601basic' and 'iso8601extended'.\n> > The negative is that to do 'iso8601basic' right, I'd also\n> > have to tweak the 'date' and 'time' parts of the code too.\n> \n> \tPerhaps all three formats should be supported, and if the following \n> names were all valid things could be simplified further too:\n> \n> \t \tiso8601basic\n> \t \tiso8601bas\n> \t \tiso8601alternative\n> \t \tiso8601alt\n> \t \tiso8601extended\n> \t \tiso8601ext\n> \n> \tThe reason for allowing shorter names is to simplify database \n> management for anyone who may need to store the format name in a column for \n> some reason (I can't think of one now, but I get a feeling that someone \n> will want to do this type of thing in the future).\n> \n> \tFor that matter, the first letter could be used instead of the first \n> three for the short versions. Any thoughts on this?\n\nJust go with the full spellings.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 9 Dec 2003 18:28:20 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ISO 8601 'Time Intervals' of the 'format with time-unit" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://momjian.postgresql.org/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nRon Mayer wrote:\n> > -----Original Message-----\n> > \n> > Is this ready for application? It looks good to me. However, there is\n> > an \"Open issues\" section.\n> \n> In my mind there were two categories of open issues\n> a) ones that are 100% backward (such as the comment about \n> outputting this format)\n> and\n> b) ones that aren't (such as deprecating the current\n> postgresql shorthand of \n> '1Y1M'::interval = 1 year 1 minute\n> in favor of the ISO-8601\n> 'P1Y1M'::interval = 1 year 1 month.\n> \n> Attached is a patch that addressed all the discussed issues that\n> did not break backward compatability, including the ability to\n> output ISO-8601 compliant intervals by setting datestyle to\n> iso8601basic.\n> \n> Ron\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 15 Dec 2003 18:30:09 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ISO 8601 \"Time Intervals\" of the \"format with time-unit" }, { "msg_contents": "Bruce Momjian wrote:\n> Your patch has been added to the PostgreSQL unapplied patches list\n> at:\n>\n> \thttp://momjian.postgresql.org/cgi-bin/pgpatches\n>\n> I will try to apply it within the next 48 hours.\n\nI keep reading about open issues, and deprecating certain things, and \npatch removed, and patch readded. What is going on?\n\n", "msg_date": "Tue, 16 Dec 2003 01:06:32 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ISO 8601 \"Time Intervals\" of the \"format with time-unit" }, { "msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian wrote:\n> > Your patch has been added to the PostgreSQL unapplied patches list\n> > at:\n> >\n> > \thttp://momjian.postgresql.org/cgi-bin/pgpatches\n> >\n> > I will try to apply it within the next 48 hours.\n> \n> I keep reading about open issues, and deprecating certain things, and \n> patch removed, and patch readded. What is going on?\n\nI think the patch just added is OK, no?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 15 Dec 2003 19:09:15 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ISO 8601 \"Time Intervals\" of the \"format with time-unit" }, { "msg_contents": "Bruce Momjian wrote:\n> > I keep reading about open issues, and deprecating certain things,\n> > and patch removed, and patch readded. 
What is going on?\n>\n> I think the patch just added is OK, no?\n\nI don't know, but earlier the identical patch was rejected by you.\n\n", "msg_date": "Tue, 16 Dec 2003 01:37:50 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ISO 8601 \"Time Intervals\" of the \"format with time-unit" }, { "msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian wrote:\n> > > I keep reading about open issues, and deprecating certain things,\n> > > and patch removed, and patch readded. What is going on?\n> >\n> > I think the patch just added is OK, no?\n> \n> I don't know, but earlier the identical patch was rejected by you.\n\nI thought he made an adjustment so no backward compatibility was broken.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 15 Dec 2003 19:39:00 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ISO 8601 \"Time Intervals\" of the \"format with time-unit" }, { "msg_contents": "Bruce Momjian wrote:\n> Peter Eisentraut wrote:\n> > Bruce Momjian wrote:\n> > > > I keep reading about open issues, and deprecating certain\n> > > > things, and patch removed, and patch readded. What is going\n> > > > on?\n> > >\n> > > I think the patch just added is OK, no?\n> >\n> > I don't know, but earlier the identical patch was rejected by you.\n>\n> I thought he made an adjustment so no backward compatibility was\n> broken.\n\nThen I wouldn't have said \"identical\".\n\n", "msg_date": "Tue, 16 Dec 2003 01:42:09 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ISO 8601 \"Time Intervals\" of the \"format with time-unit" }, { "msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian wrote:\n> > Peter Eisentraut wrote:\n> > > Bruce Momjian wrote:\n> > > > > I keep reading about open issues, and deprecating certain\n> > > > > things, and patch removed, and patch readded. What is going\n> > > > > on?\n> > > >\n> > > > I think the patch just added is OK, no?\n> > >\n> > > I don't know, but earlier the identical patch was rejected by you.\n> >\n> > I thought he made an adjustment so no backward compatibility was\n> > broken.\n> \n> Then I wouldn't have said \"identical\".\n\nOK, can anyone raise an objection to the patch. The new description\nmeans to me that he addressed our concerns and that my original\nhesitation was unwarranted.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 15 Dec 2003 19:44:11 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ISO 8601 \"Time Intervals\" of the \"format with time-unit" }, { "msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\nRon Mayer wrote:\n> > -----Original Message-----\n> > \n> > Is this ready for application? It looks good to me. 
However, there is\n> > an \"Open issues\" section.\n> \n> In my mind there were two categories of open issues\n> a) ones that are 100% backward (such as the comment about \n> outputting this format)\n> and\n> b) ones that aren't (such as deprecating the current\n> postgresql shorthand of \n> '1Y1M'::interval = 1 year 1 minute\n> in favor of the ISO-8601\n> 'P1Y1M'::interval = 1 year 1 month.\n> \n> Attached is a patch that addressed all the discussed issues that\n> did not break backward compatability, including the ability to\n> output ISO-8601 compliant intervals by setting datestyle to\n> iso8601basic.\n> \n> Ron\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 20 Dec 2003 10:32:45 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ISO 8601 \"Time Intervals\" of the \"format with time-unit" }, { "msg_contents": "Hello,\n\nIs there ANY chance to recover data from a database system that suffered disk \ncrash, and is not missing the data/global directory?\n\nVersion is 7.2.4. Database files seem to be intact as well as pg_clog and \npg_xlog directories.\n\nThanks in advance for any ideas.\n\nDaniel\n\n", "msg_date": "Mon, 23 Aug 2004 16:00:01 +0300", "msg_from": "Daniel Kalchev <[email protected]>", "msg_from_op": false, "msg_subject": "missing data/global" }, { "msg_contents": "Daniel Kalchev <[email protected]> writes:\n> Is there ANY chance to recover data from a database system that suffered disk\n> crash, and is not missing the data/global directory?\n> Version is 7.2.4. Database files seem to be intact as well as pg_clog and \n> pg_xlog directories.\n\nThe hard part I think would be reconstructing pg_database, because you'd\nneed to get the database OIDs right. I can't think of any way to do\nthat that doesn't involve poking at the file with a hex editor.\n\nHere's a sketch of how I'd proceed:\n\n1. Make a tar backup of what you have! That way you can start over\nafter you screw up ;-)\n\n2. I assume you know the names and properties of your databases, users,\nand groups if any; also the SYSID numbers for the users and groups.\nA recent pg_dumpall script would be a good place to get this info.\n\n3. You're also going to need to figure out the OIDs of your databases\n(the OIDs are the same as the names of their subdirectories under\n$PGDATA/base). Possibly you can do this just from directory/file sizes.\nNote that template1 should be OID 1, and template0 will have the next\nlowest number (probably 16555, in 7.2).\n\n4. Initdb a scratch database in some other place (or move aside your\nexisting files, if that seems safer). In this scratch DB, create\ndatabases, users, and groups to match your old setup. You should be\nable to duplicate everything except the database OIDs using standard\nSQL commands.\n\n5. Shut down scratch postmaster, then hex-edit pg_database to insert the\ncorrect OIDs. Use pg_filedump or a similar tool to verify that you did\nthis properly.\n\n6. Restart scratch postmaster, and VACUUM FREEZE pg_database, pg_shadow,\nand pg_group (from any database). This will make the next step safe.\n\n7. Stop scratch postmaster, and then copy over its $PGDATA/global\ndirectory into the old DB.\n\n8. 
Cross your fingers and start postmaster ...\n\nThis will probably *not* work if you had been doing anything to\npg_database, pg_shadow, or pg_group between your last checkpoint and the\ncrash, because the reconstructed tables are not going to be physically\nidentical to what they were before, so any actions replayed from WAL\nagainst those tables will be wrong. Hopefully you won't have that\nproblem. If you do, it might work to shut down the postmaster and again\ncopy the scratch $PGDATA/global directory into the old DB, thereby\noverwriting what the WAL replay did. This is getting into the realm of\nspeculation though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 23 Aug 2004 14:46:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: missing data/global " }, { "msg_contents": "Tom,\n\nThis is basically what I had in mind, but you described it better than I ever \ncould :)\n\nWhat I need from this database system is just one database and probably not \nall of the tables anyway (but some do seem valuable). This database happens to \nbe second in the pg_dumpall script. The next databases are rather big (and I \nactually have more recent backup and could eventually recreate the data) The \nvaluable database hasn't had significant structure changes since the backup.\n\nLooking at the files, I am confident which is the proper database oid - if \nthis cannot be properly fixed, is there .. reasonable way to dump data from \nthe (heap) files?\n\nHere is what I have:\n\nsu-2.02# du\n1747 ./base/1\n1693 ./base/16555\n1 ./base/77573557/pgsql_tmp\n127036 ./base/77573557\n1 ./base/13255137/pgsql_tmp\n1379190 ./base/13255137\n11246 ./base/95521309\n1781 ./base/96388007\n1 ./base/133512058/pgsql_tmp\n11933861 ./base/133512058\n13456555 ./base\n98209 ./pg_xlog\n41315 ./pg_clog\n13596100 .\n\nMy database should be with oid 77573557, template0 is apparently 16555\n\nLet's see how all this works.\n\nDaniel\n\n>>>Tom Lane said:\n > Daniel Kalchev <[email protected]> writes:\n > > Is there ANY chance to recover data from a database system that suffered d\n isk\n > > crash, and is not missing the data/global directory?\n > > Version is 7.2.4. Database files seem to be intact as well as pg_clog and \n > > pg_xlog directories.\n > \n > The hard part I think would be reconstructing pg_database, because you'd\n > need to get the database OIDs right. I can't think of any way to do\n > that that doesn't involve poking at the file with a hex editor.\n > \n > Here's a sketch of how I'd proceed:\n > \n > 1. Make a tar backup of what you have! That way you can start over\n > after you screw up ;-)\n > \n > 2. I assume you know the names and properties of your databases, users,\n > and groups if any; also the SYSID numbers for the users and groups.\n > A recent pg_dumpall script would be a good place to get this info.\n > \n > 3. You're also going to need to figure out the OIDs of your databases\n > (the OIDs are the same as the names of their subdirectories under\n > $PGDATA/base). Possibly you can do this just from directory/file sizes.\n > Note that template1 should be OID 1, and template0 will have the next\n > lowest number (probably 16555, in 7.2).\n > \n > 4. Initdb a scratch database in some other place (or move aside your\n > existing files, if that seems safer). In this scratch DB, create\n > databases, users, and groups to match your old setup. You should be\n > able to duplicate everything except the database OIDs using standard\n > SQL commands.\n > \n > 5. 
Shut down scratch postmaster, then hex-edit pg_database to insert the\n > correct OIDs. Use pg_filedump or a similar tool to verify that you did\n > this properly.\n > \n > 6. Restart scratch postmaster, and VACUUM FREEZE pg_database, pg_shadow,\n > and pg_group (from any database). This will make the next step safe.\n > \n > 7. Stop scratch postmaster, and then copy over its $PGDATA/global\n > directory into the old DB.\n > \n > 8. Cross your fingers and start postmaster ...\n > \n > This will probably *not* work if you had been doing anything to\n > pg_database, pg_shadow, or pg_group between your last checkpoint and the\n > crash, because the reconstructed tables are not going to be physically\n > identical to what they were before, so any actions replayed from WAL\n > against those tables will be wrong. Hopefully you won't have that\n > problem. If you do, it might work to shut down the postmaster and again\n > copy the scratch $PGDATA/global directory into the old DB, thereby\n > overwriting what the WAL replay did. This is getting into the realm of\n > speculation though.\n > \n > \t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 23 Aug 2004 23:02:49 +0300", "msg_from": "Daniel Kalchev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: missing data/global " }, { "msg_contents": "If you're not missing your data dir, clog or xlog then what's the problem?\n\nDaniel Kalchev wrote:\n> Hello,\n> \n> Is there ANY chance to recover data from a database system that suffered disk \n> crash, and is not missing the data/global directory?\n> \n> Version is 7.2.4. Database files seem to be intact as well as pg_clog and \n> pg_xlog directories.\n> \n> Thanks in advance for any ideas.\n> \n> Daniel\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n", "msg_date": "Tue, 24 Aug 2004 09:11:26 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: missing data/global" }, { "msg_contents": "data/base/global is missing and this is where postgres gets all it's startup \ndata from (database oids, next oid, transaction id etc).\n\nLet's see how easy to recover from this it will turn to be.\n\nDaniel\n\n>>>Christopher Kings-Lynne said:\n > If you're not missing your data dir, clog or xlog then what's the problem?\n > \n > Daniel Kalchev wrote:\n > > Hello,\n > > \n > > Is there ANY chance to recover data from a database system that suffered d\n isk \n > > crash, and is not missing the data/global directory?\n > > \n > > Version is 7.2.4. 
Database files seem to be intact as well as pg_clog and \n > > pg_xlog directories.\n > > \n > > Thanks in advance for any ideas.\n > > \n > > Daniel\n > > \n > > \n > > ---------------------------(end of broadcast)---------------------------\n > > TIP 1: subscribe and unsubscribe commands go to [email protected]\n\n\n", "msg_date": "Tue, 24 Aug 2004 11:30:20 +0300", "msg_from": "Daniel Kalchev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: missing data/global " }, { "msg_contents": "Ah, you said 'is NOT missing'.\n\nChris\n\nDaniel Kalchev wrote:\n\n> data/base/global is missing and this is where postgres gets all it's startup \n> data from (database oids, next oid, transaction id etc).\n> \n> Let's see how easy to recover from this it will turn to be.\n> \n> Daniel\n> \n> \n>>>>Christopher Kings-Lynne said:\n> \n> > If you're not missing your data dir, clog or xlog then what's the problem?\n> > \n> > Daniel Kalchev wrote:\n> > > Hello,\n> > > \n> > > Is there ANY chance to recover data from a database system that suffered d\n> isk \n> > > crash, and is not missing the data/global directory?\n> > > \n> > > Version is 7.2.4. Database files seem to be intact as well as pg_clog and \n> > > pg_xlog directories.\n> > > \n> > > Thanks in advance for any ideas.\n> > > \n> > > Daniel\n> > > \n> > > \n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 1: subscribe and unsubscribe commands go to [email protected]\n> \n", "msg_date": "Tue, 24 Aug 2004 17:07:22 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: missing data/global" }, { "msg_contents": "Tom I did the following:\n\n(found out 7.2.3 does not have pg_database)\n\n1. saved old data etc.\n\n2. created new database, and the database. database oid was 16556;\n\n3. moved data/global to the old data directory.\n\n4. though, that postmaster would actually use the database oid to locate the \ndirectory, then load everything from there.. old database oid was 77573557, so \nI just linked this to 16556 in the data/base direcotry. (this might be the \nfirst possible error)\n\nNow I can connect to the 'old' database, but get the error \n\nFATAL 1: Index pg_operator_oid_index is not a btree\n\n(if I run postmaster with -P I get not errors, but no tables as well).\n\nBy the way, I had to copy over the 'new' files from pg_clog and pg_xlog (this \nis the second possible error) to get the postmaster running. Perhaps better \nwould be to use pg_resetxlog or similar?\n\nDaniel\n\n>>>Tom Lane said:\n > Daniel Kalchev <[email protected]> writes:\n > > Is there ANY chance to recover data from a database system that suffered d\n isk\n > > crash, and is not missing the data/global directory?\n > > Version is 7.2.4. Database files seem to be intact as well as pg_clog and \n > > pg_xlog directories.\n > \n > The hard part I think would be reconstructing pg_database, because you'd\n > need to get the database OIDs right. I can't think of any way to do\n > that that doesn't involve poking at the file with a hex editor.\n > \n > Here's a sketch of how I'd proceed:\n > \n > 1. Make a tar backup of what you have! That way you can start over\n > after you screw up ;-)\n > \n > 2. I assume you know the names and properties of your databases, users,\n > and groups if any; also the SYSID numbers for the users and groups.\n > A recent pg_dumpall script would be a good place to get this info.\n > \n > 3. 
You're also going to need to figure out the OIDs of your databases\n > (the OIDs are the same as the names of their subdirectories under\n > $PGDATA/base). Possibly you can do this just from directory/file sizes.\n > Note that template1 should be OID 1, and template0 will have the next\n > lowest number (probably 16555, in 7.2).\n > \n > 4. Initdb a scratch database in some other place (or move aside your\n > existing files, if that seems safer). In this scratch DB, create\n > databases, users, and groups to match your old setup. You should be\n > able to duplicate everything except the database OIDs using standard\n > SQL commands.\n > \n > 5. Shut down scratch postmaster, then hex-edit pg_database to insert the\n > correct OIDs. Use pg_filedump or a similar tool to verify that you did\n > this properly.\n > \n > 6. Restart scratch postmaster, and VACUUM FREEZE pg_database, pg_shadow,\n > and pg_group (from any database). This will make the next step safe.\n > \n > 7. Stop scratch postmaster, and then copy over its $PGDATA/global\n > directory into the old DB.\n > \n > 8. Cross your fingers and start postmaster ...\n > \n > This will probably *not* work if you had been doing anything to\n > pg_database, pg_shadow, or pg_group between your last checkpoint and the\n > crash, because the reconstructed tables are not going to be physically\n > identical to what they were before, so any actions replayed from WAL\n > against those tables will be wrong. Hopefully you won't have that\n > problem. If you do, it might work to shut down the postmaster and again\n > copy the scratch $PGDATA/global directory into the old DB, thereby\n > overwriting what the WAL replay did. This is getting into the realm of\n > speculation though.\n > \n > \t\t\tregards, tom lane\n > \n > ---------------------------(end of broadcast)---------------------------\n > TIP 2: you can get off all lists at once with the unregister command\n > (send \"unregister YourEmailAddressHere\" to [email protected])\n\n\n", "msg_date": "Tue, 24 Aug 2004 12:54:26 +0300", "msg_from": "Daniel Kalchev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: missing data/global " }, { "msg_contents": "Daniel Kalchev <[email protected]> writes:\n> (found out 7.2.3 does not have pg_database)\n\nYou think not?\n\n> By the way, I had to copy over the 'new' files from pg_clog and pg_xlog (this\n> is the second possible error) to get the postmaster running.\n\nThat was *not* part of the recipe, and is guaranteed *not* to work.\n\nIt seems likely though that you are wasting your time --- the index\nfailure suggests strongly that you have more corruption than just the\nloss of the /global subdirectory :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 24 Aug 2004 10:43:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: missing data/global " }, { "msg_contents": ">>>Tom Lane said:\n > Daniel Kalchev <[email protected]> writes:\n > > (found out 7.2.3 does not have pg_database)\n > \n > You think not?\n\nNot as a file similar to pg_control. pg_database is indeed table in the system \ncatalog.\n\n > > By the way, I had to copy over the 'new' files from pg_clog and pg_xlog (t\n his\n > > is the second possible error) to get the postmaster running.\n > \n > That was *not* part of the recipe, and is guaranteed *not* to work.\n\nI know that, but wondered if it would help in any way.. 
By the way, what would \nbe the solution to sync WAL with the pg_control contents?\n\n > \n > It seems likely though that you are wasting your time --- the index\n > failure suggests strongly that you have more corruption than just the\n > loss of the /global subdirectory :-(\n\nAfter spending some time to find possible ways to adjust pointers (could \neventually save part of the data), I decided to move to plan B, which is to \nhave few people manually re-enter the data - would have been more effective to \nwaste my time anyway - but not if it will take days and the result be not \nguaranteed to be consistent.\n\nDoes such toll exist, that could dump data (records?) from the heap files \ngiven the table structure?\n\nRegards,\nDaniel\n\n", "msg_date": "Wed, 25 Aug 2004 19:07:23 +0300", "msg_from": "Daniel Kalchev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: missing data/global " }, { "msg_contents": "On Wed, Aug 25, 2004 at 07:07:23PM +0300, Daniel Kalchev wrote:\n\n> Does such toll exist, that could dump data (records?) from the heap files \n> given the table structure?\n\nYou may want to check pg_filedump (from http://sources.redhat.com/rhdb\nIIRC).\n\n(What happened to pg_fsck BTW?)\n\n-- \nAlvaro Herrera (<alvherre[a]dcc.uchile.cl>)\n\n", "msg_date": "Wed, 25 Aug 2004 12:34:34 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: missing data/global" }, { "msg_contents": "Coincidentally I JUST NOW built 8.0 on Solaris 9, and ran into the same\nproblem. As they say, \"this used to work\"..... \n\nWe build databases as part of the build of our product, and I'm looking\ninto what we need to do to upgrade from 7.4.5, and this was the first\nthing I ran into. I hadn't gotten as far as truss yet, so thanks Kenneth\nfor that extra info.\n\nDid initdb previously just assume the -D path existed, and now it is\ntrying to create the whole path, if necessary?\n\n- DAP\n\n>-----Original Message-----\n>From: [email protected] \n>[mailto:[email protected]] On Behalf Of Kenneth Lareau\n>Sent: Thursday, January 27, 2005 5:23 PM\n>To: [email protected]\n>Subject: [HACKERS] Strange issue with initdb on 8.0 and \n>Solaris automounts\n>\n>Folks,\n>\n>I ran into an interesting issue when installing PostgreSQL 8.0 \n>that I'm not sure how to resolve correctly. My system is a \n>Sun machine (Blade\n>1000) running Solaris 9, with relatively recent patches. After \n>install- ing 8.0, I went to run the 'initdb' command and was \n>greeted with the\n>following:\n>\n>[delirium:postgres] ~\n>(11) initdb -D /software/postgresql-8.0.0/data The files \n>belonging to this database system will be owned by user \"postgres\".\n>This user must also own the server process.\n>\n>The database cluster will be initialized with locale C.\n>\n>creating directory /software/postgresql-8.0.0/data ... 
initdb: \n>could not create directory \"/software/postgresql-8.0.0\": \n>Operation not applicable\n>\n>\n>The error message was a bit confusing, so I decided to run a \n>truss on the process to see what might be happening, and this \n>is what I came\n>across:\n>\n>[...]\n>8802/1: write(1, \" c r e a t i n g d i r\".., 62) = 62\n>8802/1: umask(0) = 077\n>8802/1: umask(077) = 0\n>8802/1: mkdir(\"/software\", 0777) \n> Err#17 EEXIST\n>8802/1: stat64(\"/software\", 0xFFBFC858) = 0\n>8802/1: mkdir(\"/software/postgresql-8.0.0\", 0777) \n> Err#89 ENOSYS\n>[...]\n>\n>\n>The last error in that section, ENOSYS, is very strange, as \n>the Solaris manpage for 'mkdir' does not mention it as a \n>possible error. One thing to note in this, however, is that \n>'/software/postgresql-8.0.0' is not a regular directory, but \n>an automount point (which in this case is just a local \n>loopback mount). So the indication is that Solaris seems to \n>have a bug not in mkdir, but deeper in their VFS code that's \n>causing this seemingly strange issue.\n>\n>Two workarounds for this problem have been found: running \n>'initdb' with a directory that's *not* an automount point and \n>then moving the 'data'\n>directory to its final destination worked fine, along with a \n>suggestion from Andrew Dunstan (on the #postgresql IRC \n>channel) with using a rela- tive path for the data directory. \n>Both were successful in avoiding the issue, but I decided to \n>mention this here in case someone felt it might be worth \n>looking into to see if the Sun problem can be avoided; I am \n>going to notify Sun of their bug, just don't know how long it \n>will take them to actually resolve it (if they ever do).\n>\n>While I can fully understand that a code change here may not \n>be desire- able, might some notes in the documentation be \n>useful for those who might stumble across the problem as well? \n> Just a suggestion...\n>\n>I hope I gave sufficient information on the problem, though \n>I'm always willing to give any clarification needed. 
Thank \n>you for your time.\n>\n>\n>Ken Lareau\n>[email protected]\n>\n>---------------------------(end of \n>broadcast)---------------------------\n>TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n", "msg_date": "Thu, 27 Jan 2005 17:49:37 -0500", "msg_from": "\"David Parker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange issue with initdb on 8.0 and Solaris automounts" }, { "msg_contents": "\"David Parker\" <[email protected]> writes:\n> Did initdb previously just assume the -D path existed, and now it is\n> trying to create the whole path, if necessary?\n\nPre-8.0 it was using mkdir(1), which might possibly contain some weird\nworkaround for this case on Solaris.\n\nI suppose that manually creating the data directory before running\ninitdb would also avoid this issue, since the mkdir(2) loop is only\nentered if we don't find the directory in existence.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 27 Jan 2005 18:22:18 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange issue with initdb on 8.0 and Solaris automounts " }, { "msg_contents": "In message <[email protected]>, Tom Lane writes:\n>\"David Parker\" <[email protected]> writes:\n>> Did initdb previously just assume the -D path existed, and now it is\n>> trying to create the whole path, if necessary?\n>\n>Pre-8.0 it was using mkdir(1), which might possibly contain some weird\n>workaround for this case on Solaris.\n>\n>I suppose that manually creating the data directory before running\n>initdb would also avoid this issue, since the mkdir(2) loop is only\n>entered if we don't find the directory in existence.\n>\n>\t\t\tregards, tom lane\n>\n\nActually, creating the 'data' directory first doesn't work either:\n\n[delirium:postgres] ~\n(17) mkdir data\n[delirium:postgres] ~\n(18) initdb -D /software/postgresql-8.0.0/data\nThe files belonging to this database system will be owned by user \"postgres\".\nThis user must also own the server process.\n\nThe database cluster will be initialized with locale C.\n\nfixing permissions on existing directory /software/postgresql-8.0.0/data ... ok\ncreating directory /software/postgresql-8.0.0/data/global ... initdb: could not create directory \"/software/postgresql-8.0.0\": Operation not applicable\ninitdb: removing contents of data directory \"/software/postgresql-8.0.0/data\"\n\n\nSince there's subdirectories that need to be created, it still runs into\nthe problem. I don't know why the command 'mkdir' doesn't exhibit the\nsame problem as the function 'mkdir', but running:\n\n mkdir /software/postgresql-8.0.0\n\nproduces the correct error \"File exists\" on my system. 
I suspect the\n'mkdir' command probably checks to see if the directory exists first\nbefore trying to create it, which avoids the problem.\n\n\nKen Lareau\[email protected]\n", "msg_date": "Thu, 27 Jan 2005 15:35:41 -0800", "msg_from": "Kenneth Lareau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange issue with initdb on 8.0 and Solaris automounts" }, { "msg_contents": "Kenneth Lareau <[email protected]> writes:\n> In message <[email protected]>, Tom Lane writes:\n>> I suppose that manually creating the data directory before running\n>> initdb would also avoid this issue, since the mkdir(2) loop is only\n>> entered if we don't find the directory in existence.\n\n> Actually, creating the 'data' directory first doesn't work either:\n\nGood point.\n\n> I don't know why the command 'mkdir' doesn't exhibit the\n> same problem as the function 'mkdir', but running:\n\n> mkdir /software/postgresql-8.0.0\n\n> produces the correct error \"File exists\" on my system.\n\nCould you truss that and see what it does? It would be a simple change\nin initdb to make it stat before mkdir instead of after, but I'm not\ntotally convinced that would fix the problem. If mkdir returns a funny\nerror code then stat might as well ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 27 Jan 2005 18:50:48 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange issue with initdb on 8.0 and Solaris automounts " }, { "msg_contents": "In message <[email protected]>, Tom Lane writes:\n>Kenneth Lareau <[email protected]> writes:\n>> In message <[email protected]>, Tom Lane writes:\n>>> I suppose that manually creating the data directory before running\n>>> initdb would also avoid this issue, since the mkdir(2) loop is only\n>>> entered if we don't find the directory in existence.\n>\n>> Actually, creating the 'data' directory first doesn't work either:\n>\n>Good point.\n>\n>> I don't know why the command 'mkdir' doesn't exhibit the\n>> same problem as the function 'mkdir', but running:\n>\n>> mkdir /software/postgresql-8.0.0\n>\n>> produces the correct error \"File exists\" on my system.\n>\n>Could you truss that and see what it does? It would be a simple change\n>in initdb to make it stat before mkdir instead of after, but I'm not\n>totally convinced that would fix the problem. If mkdir returns a funny\n>error code then stat might as well ...\n>\n>\t\t\tregards, tom lane\n>\n\nHere's the relevant truss output from 'mkdir /software/postgresql-8.0.0'\non my Solaris 9 system:\n\n10832: umask(0) = 077\n10832: umask(077) = 0\n10832: mkdir(\"/software/postgresql-8.0.0\", 0777) Err#89 ENOSYS\n10832: stat64(\"/software/postgresql-8.0.0\", 0xFFBFFA38) = 0\n10832: fstat64(2, 0xFFBFEB78) = 0\n10832: write(2, \" m k d i r\", 5) = 5\n10832: write(2, \" : \", 2) = 2\n10832: write(2, \" c a n n o t c r e a t\".., 24) = 24\n10832: write(2, \" ` / s o f t w a r e / p\".., 28) = 28\n10832: write(2, \" : \", 2) = 2\n10832: write(2, \" F i l e e x i s t s\", 11) = 11\n10832: write(2, \"\\n\", 1) = 1\n10832: _exit(1)\n\n\nIt's doing the stat after the mkdir attempt it seems, and coming back\nwith the correct response. 
Hmm, maybe I should look at the Solaris 8\ncode for the mkdir command...\n\n\nKen Lareau\[email protected]\n", "msg_date": "Thu, 27 Jan 2005 16:18:36 -0800", "msg_from": "Kenneth Lareau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange issue with initdb on 8.0 and Solaris automounts" }, { "msg_contents": "\n\nTom Lane wrote:\n\n>>I don't know why the command 'mkdir' doesn't exhibit the\n>>same problem as the function 'mkdir', but running:\n>> \n>>\n>\n> \n>\n>> mkdir /software/postgresql-8.0.0\n>> \n>>\n>\n> \n>\n>>produces the correct error \"File exists\" on my system.\n>> \n>>\n>\n>Could you truss that and see what it does? It would be a simple change\n>in initdb to make it stat before mkdir instead of after, but I'm not\n>totally convinced that would fix the problem. If mkdir returns a funny\n>error code then stat might as well ...\n>\n>\n> \n>\n\nThere's also a tiny race condition, which I guess isn't worth worrying \nabout.\n\nReturning ENOSYS is pretty bogus ...\n\ncheers\n\nandrew\n", "msg_date": "Thu, 27 Jan 2005 19:28:25 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange issue with initdb on 8.0 and Solaris automounts" }, { "msg_contents": "Kenneth Lareau <[email protected]> writes:\n> In message <[email protected]>, Tom Lane writes:\n>> Could you truss that and see what it does?\n\n> Here's the relevant truss output from 'mkdir /software/postgresql-8.0.0'\n> on my Solaris 9 system:\n\n> 10832: mkdir(\"/software/postgresql-8.0.0\", 0777) Err#89 ENOSYS\n> 10832: stat64(\"/software/postgresql-8.0.0\", 0xFFBFFA38) = 0\n\n> It's doing the stat after the mkdir attempt it seems, and coming back\n> with the correct response. Hmm, maybe I should look at the Solaris 8\n> code for the mkdir command...\n\nWell, the important point is that the stat does succeed. I'm not going\nto put in anything as specific as a check for ENOSYS, but it seems\nreasonable to try the stat first and mkdir only if stat fails.\nI've applied the attached patch.\n\n\t\t\tregards, tom lane\n\n*** src/bin/initdb/initdb.c.orig\tSat Jan 8 17:51:12 2005\n--- src/bin/initdb/initdb.c\tThu Jan 27 19:23:49 2005\n***************\n*** 476,481 ****\n--- 476,484 ----\n * this tries to build all the elements of a path to a directory a la mkdir -p\n * we assume the path is in canonical form, i.e. uses / as the separator\n * we also assume it isn't null.\n+ *\n+ * note that on failure, the path arg has been modified to show the particular\n+ * directory level we had problems with.\n */\n static int\n mkdir_p(char *path, mode_t omode)\n***************\n*** 544,573 ****\n \t\t}\n \t\tif (last)\n \t\t\t(void) umask(oumask);\n! \t\tif (mkdir(path, last ? omode : S_IRWXU | S_IRWXG | S_IRWXO) < 0)\n \t\t{\n! \t\t\tif (errno == EEXIST || errno == EISDIR)\n! \t\t\t{\n! \t\t\t\tif (stat(path, &sb) < 0)\n! \t\t\t\t{\n! \t\t\t\t\tretval = 1;\n! \t\t\t\t\tbreak;\n! \t\t\t\t}\n! \t\t\t\telse if (!S_ISDIR(sb.st_mode))\n! \t\t\t\t{\n! \t\t\t\t\tif (last)\n! \t\t\t\t\t\terrno = EEXIST;\n! \t\t\t\t\telse\n! \t\t\t\t\t\terrno = ENOTDIR;\n! \t\t\t\t\tretval = 1;\n! \t\t\t\t\tbreak;\n! \t\t\t\t}\n! \t\t\t}\n! \t\t\telse\n \t\t\t{\n \t\t\t\tretval = 1;\n \t\t\t\tbreak;\n \t\t\t}\n \t\t}\n \t\tif (!last)\n \t\t\t*p = '/';\n--- 547,570 ----\n \t\t}\n \t\tif (last)\n \t\t\t(void) umask(oumask);\n! \n! \t\t/* check for pre-existing directory; ok if it's a parent */\n! \t\tif (stat(path, &sb) == 0)\n \t\t{\n! 
\t\t\tif (!S_ISDIR(sb.st_mode))\n \t\t\t{\n+ \t\t\t\tif (last)\n+ \t\t\t\t\terrno = EEXIST;\n+ \t\t\t\telse\n+ \t\t\t\t\terrno = ENOTDIR;\n \t\t\t\tretval = 1;\n \t\t\t\tbreak;\n \t\t\t}\n+ \t\t}\n+ \t\telse if (mkdir(path, last ? omode : S_IRWXU | S_IRWXG | S_IRWXO) < 0)\n+ \t\t{\n+ \t\t\tretval = 1;\n+ \t\t\tbreak;\n \t\t}\n \t\tif (!last)\n \t\t\t*p = '/';\n", "msg_date": "Thu, 27 Jan 2005 19:37:33 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange issue with initdb on 8.0 and Solaris automounts " }, { "msg_contents": "Andrew Dunstan <[email protected]> writes:\n> There's also a tiny race condition, which I guess isn't worth worrying \n> about.\n\nConsidering that we're not checking ownership or permissions of the\nparent directories, I'd say not.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 27 Jan 2005 19:39:59 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange issue with initdb on 8.0 and Solaris automounts " }, { "msg_contents": "In message <[email protected]>, Tom Lane writes:\n>Kenneth Lareau <[email protected]> writes:\n>> In message <[email protected]>, Tom Lane writes:\n>>> Could you truss that and see what it does?\n>\n>> Here's the relevant truss output from 'mkdir /software/postgresql-8.0.0'\n>> on my Solaris 9 system:\n>\n>> 10832: mkdir(\"/software/postgresql-8.0.0\", 0777) Err#89 ENOSYS\n>> 10832: stat64(\"/software/postgresql-8.0.0\", 0xFFBFFA38) = 0\n>\n>> It's doing the stat after the mkdir attempt it seems, and coming back\n>> with the correct response. Hmm, maybe I should look at the Solaris 8\n>> code for the mkdir command...\n>\n>Well, the important point is that the stat does succeed. I'm not going\n>to put in anything as specific as a check for ENOSYS, but it seems\n>reasonable to try the stat first and mkdir only if stat fails.\n>I've applied the attached patch.\n>\n>\t\t\tregards, tom lane\n\n\nTom, thank you very much for the patch, it worked like a charm.\n\n\nKen Lareau\[email protected]\n", "msg_date": "Thu, 27 Jan 2005 17:10:04 -0800", "msg_from": "Kenneth Lareau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange issue with initdb on 8.0 and Solaris automounts" }, { "msg_contents": "> My opinion is that this is a very bogus shortcut in the \n> network datatype code. There are no cases outside the \n> inet/cidr group where an operator doesn't exactly match its \n> underlying function. (The whole business of inet and cidr \n> being almost but not quite the same type is maldesigned\n> anyway...)\n> \n> The right solution for you is to declare two SQL functions. \n> Whether you make them point at the same underlying C code is \n> up to you.\n\nRight,...\n\nIn that case may I suggest fixing the catalog so network_* functions exists for both datatypes!\nAnything less I'd consider inconsistent...\n\nKind regards,\n\nJohn\n", "msg_date": "Sun, 30 Jan 2005 13:46:34 +1100", "msg_from": "\"John Hansen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug in create operator and/or initdb " }, { "msg_contents": "\"John Hansen\" <[email protected]> writes:\n> In that case may I suggest fixing the catalog so network_* functions exists for both datatypes!\n\nRedesigning the inet/cidr distinction is on the to-do list (though I'm\nafraid not very high on the list). ISTM it should either be one type\nwith a distinguishing bit in the runtime representation, or two types\nwith no such bit needed. Having both is a schizophrenic design. 
It's\nled directly to bugs in the past, and I think there are still some\ncorner cases that act oddly (see the archives).\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 29 Jan 2005 22:07:30 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug in create operator and/or initdb " }, { "msg_contents": "On Sat, Jan 29, 2005 at 10:07:30PM -0500, Tom Lane wrote:\n> \"John Hansen\" <[email protected]> writes:\n> > In that case may I suggest fixing the catalog so network_* functions exists for both datatypes!\n> \n> Redesigning the inet/cidr distinction is on the to-do list (though I'm\n> afraid not very high on the list). ISTM it should either be one type\n> with a distinguishing bit in the runtime representation, or two types\n> with no such bit needed. Having both is a schizophrenic design. It's\n> led directly to bugs in the past, and I think there are still some\n> corner cases that act oddly (see the archives).\n\n From a network engineering point of view the inet type is utterly\nbogus. I'm not aware of data of that type being needed or used in\nany real application. Given that, the complexity that it causes\nsimply by existing seems too high a cost.\n\nI suspect that the right thing to do is to kill the inet type\nentirely, and replace it with a special case of cidr. (And possibly\nthen to kill cidr and replace it with something that can be indexed\nmore effectively.)\n\nFor a replacement type, how important is it that it be completely\ncompatible with the existing inet/cidr types? Is anyone actually using\ninet types with a non-cidr mask?\n\nCheers,\n Steve\n", "msg_date": "Sat, 29 Jan 2005 20:20:10 -0800", "msg_from": "Steve Atkins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUGS] Bug in create operator and/or initdb" }, { "msg_contents": "Steve Atkins <[email protected]> writes:\n> For a replacement type, how important is it that it be completely\n> compatible with the existing inet/cidr types? Is anyone actually using\n> inet types with a non-cidr mask?\n\nIf you check the archives you'll discover that our current inet/cidr\ntypes were largely designed and implemented by Paul Vixie (yes, that\nVixie). I'm disinclined to second-guess Paul about the external\ndefinition of these types; I just want to rationalize the internal\nrepresentation a bit. In particular we've got some issues about\nconversions between the two types ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 30 Jan 2005 01:46:19 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUGS] Bug in create operator and/or initdb " }, { "msg_contents": "On Sun, 30 Jan 2005, Tom Lane wrote:\n\n> Steve Atkins <[email protected]> writes:\n>> For a replacement type, how important is it that it be completely\n>> compatible with the existing inet/cidr types? Is anyone actually using\n>> inet types with a non-cidr mask?\n>\n> If you check the archives you'll discover that our current inet/cidr\n> types were largely designed and implemented by Paul Vixie (yes, that\n> Vixie). I'm disinclined to second-guess Paul about the external\n> definition of these types; I just want to rationalize the internal\n> representation a bit. In particular we've got some issues about\n> conversions between the two types ...\nPlease do **NOT** break the external representations. 
We had enough fights\nabout that 2-3 releases ago, and I personally don't want to revisit them.\n\nYes, we do flakey things with inet on the masking stuff.\n\nLER\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Sun, 30 Jan 2005 21:49:43 -0600 (CST)", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUGS] Bug in create operator and/or initdb" }, { "msg_contents": "On Sun, Jan 30, 2005 at 09:49:43PM -0600, Larry Rosenman wrote:\n> On Sun, 30 Jan 2005, Tom Lane wrote:\n> \n> >Steve Atkins <[email protected]> writes:\n> >>For a replacement type, how important is it that it be completely\n> >>compatible with the existing inet/cidr types? Is anyone actually using\n> >>inet types with a non-cidr mask?\n> >\n> >If you check the archives you'll discover that our current inet/cidr\n> >types were largely designed and implemented by Paul Vixie (yes, that\n> >Vixie). I'm disinclined to second-guess Paul about the external\n> >definition of these types; I just want to rationalize the internal\n> >representation a bit. In particular we've got some issues about\n> >conversions between the two types ...\n>\n> Please do **NOT** break the external representations. We had enough fights\n> about that 2-3 releases ago, and I personally don't want to revisit them.\n\n> Yes, we do flakey things with inet on the masking stuff.\n\nWell, if you want the ability to store both a host address and a\nnetmask in the same datatype the inet masking stuff makes sense.\nThat's not really a useful datatype for any actual use, but it's\nfairly well-defined. The problem is that when someone looks at the\ndocs they'll see inet as the obvious datatype to use to store IP\naddresses, and it isn't very good for that.\n\nBut that's not all that's flakey, unfortunately.\n\nThe CIDR input format is documented to be classful, which in itself is\nhorribly obsolete and completely useless in this decades internet (and\nwas when the current code was written in '98).\n\nBut the implementation isn't either classful or classless, and the\nbehaviour disagrees with documented behaviour, and the behaviour you'd\nreasonably expect, in many cases.\n\n-- Class A - documented to be 10.0.0.0/8\n steve=# select '10.0.0.0'::cidr;\n cidr \n -------------\n 10.0.0.0/32\n\n-- Class B - documented to be 128.0.0.0/16\n steve=# select '128.0.0.0'::cidr;\n cidr \n --------------\n 128.0.0.0/32\n\n-- Class C - documented to be 223.10.0.0/24\n steve=# select '223.10.0.0'::cidr;\n cidr \n ---------------\n 223.10.0.0/32\n\n-- Class D\n steve=# select '224.10.0.0'::cidr;\n ERROR: invalid cidr value: \"224.10.0.0\"\n DETAIL: Value has bits set to right of mask.\n\n steve=# select '224.0.0.0'::cidr;\n cidr \n -------------\n 224.0.0.0/4\n\n-- Class E\n steve=# select '240.10.0.0'::cidr;\n cidr \n ---------------\n 240.10.0.0/32\n\nI use postgresql for network-related applications and for IP address\nrelated data mining, so I'm dealing with IP addresses in postgresql\non a daily basis.\n\nThe cidr type, including it's external interface, is simply broken.\nThere is no way to fix it that doesn't change that external interface.\n\nI know of at least two independant implementations of function IP\naddress types that have been put together for specific projects to\nimplement a working IP datatype. 
The ability to use gist indexes on\nthem to accelerate range-based lookups is a bonus.\n\nIf it's not possible (for backwards compatibility reasons) to fix\ninet+cidr, would migrating them out to contrib be a possibility?\nData types in the core tend to be widely used, even if they're\nbroken and there are better datatypes implemented as external\nmodules.\n\nCheers,\n Steve\n", "msg_date": "Mon, 31 Jan 2005 07:23:05 -0800", "msg_from": "Steve Atkins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUGS] Bug in create operator and/or initdb" }, { "msg_contents": "Steve Atkins <[email protected]> writes:\n> The cidr type, including it's external interface, is simply broken.\n\nThat is a large claim that I don't think you have demonstrated.\nThe only one of your examples that seems to me to contradict the\ndocumentation is this one:\n\n steve=# select '224.0.0.0'::cidr;\n cidr \n -------------\n 224.0.0.0/4\n\nwhich should be /32 according to what the docs say:\n\n: If y is omitted, it is calculated using assumptions from the older\n: classful network numbering system, except that it will be at least large\n: enough to include all of the octets written in the input.\n\nThe bogus netmask is in turn responsible for this case:\n\n steve=# select '224.10.0.0'::cidr;\n ERROR: invalid cidr value: \"224.10.0.0\"\n DETAIL: Value has bits set to right of mask.\n\n\nLooking at the source code, there seems to be a special case for \"class D\"\nnetwork numbers that causes the code not to extend y to cover the\nsupplied inputs:\n\n /* If no CIDR spec was given, infer width from net class. */\n if (bits == -1)\n {\n if (*odst >= 240) /* Class E */\n bits = 32;\n else if (*odst >= 224) /* Class D */\n bits = 4;\n else if (*odst >= 192) /* Class C */\n bits = 24;\n else if (*odst >= 128) /* Class B */\n bits = 16;\n else /* Class A */\n bits = 8;\n /* If imputed mask is narrower than specified octets, widen. */\n if (bits >= 8 && bits < ((dst - odst) * 8))\n ^^^^^^^^^\n bits = (dst - odst) * 8;\n }\n\nI think the test for \"bits >= 8\" should be removed. Does anyone know\nwhy it's there?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 31 Jan 2005 12:16:26 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUGS] Bug in create operator and/or initdb " }, { "msg_contents": "On Mon, Jan 31, 2005 at 12:16:26PM -0500, Tom Lane wrote:\n> Steve Atkins <[email protected]> writes:\n> > The cidr type, including it's external interface, is simply broken.\n> \n> That is a large claim that I don't think you have demonstrated.\n> The only one of your examples that seems to me to contradict the\n> documentation is this one:\n> \n> steve=# select '224.0.0.0'::cidr;\n> cidr \n> -------------\n> 224.0.0.0/4\n> \n> which should be /32 according to what the docs say:\n\nOK. If this sort of thing is considered a bug, rather than part\nof the external interface that shouldn't be changed, then I'd\nagree that cidr isn't entirely broken and it may well be possible\nto improve it without changing the interface.\n\n/me goes grovelling through the IPv6 inet code...\n\nCheers,\n Steve\n\n", "msg_date": "Mon, 31 Jan 2005 09:54:24 -0800", "msg_from": "Steve Atkins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUGS] Bug in create operator and/or initdb" }, { "msg_contents": "when my cidr datatype was integrated into pgsql, the decision was made to\nincorporate a copy of bind's inet_net_pton.c rather than add a link-time\ndependence to libbind.a (libbind.so). 
thus, when this bug was fixed in\n2003:\n\n----------------------------\nrevision 1.14\ndate: 2003/08/20 02:21:08; author: marka; state: Exp; lines: +10 -4\n1580. [bug] inet_net_pton() didn't fully handle implicit\n multicast IPv4 network addresses.\n\nthe pgsql \"fork\" of this code did not benefit from the fix. the patch was:\n\nIndex: inet_net_pton.c\n===================================================================\nRCS file: /proj/cvs/prod/bind8/src/lib/inet/inet_net_pton.c,v\nretrieving revision 1.13\nretrieving revision 1.14\ndiff -u -r1.13 -r1.14\n--- inet_net_pton.c 27 Sep 2001 15:08:38 -0000 1.13\n+++ inet_net_pton.c 20 Aug 2003 02:21:08 -0000 1.14\n@@ -16,7 +16,7 @@\n */\n \n #if defined(LIBC_SCCS) && !defined(lint)\n-static const char rcsid[] = \"$Id: inet_net_pton.c,v 1.13 2001/09/27 15:08:38 marka Exp $\";\n+static const char rcsid[] = \"$Id: inet_net_pton.c,v 1.14 2003/08/20 02:21:08 marka Exp $\";\n #endif\n \n #include \"port_before.h\"\n@@ -59,7 +59,7 @@\n * Paul Vixie (ISC), June 1996\n */\n static int\n-inet_net_pton_ipv4( const char *src, u_char *dst, size_t size) {\n+inet_net_pton_ipv4(const char *src, u_char *dst, size_t size) {\n static const char xdigits[] = \"0123456789abcdef\";\n static const char digits[] = \"0123456789\";\n int n, ch, tmp = 0, dirty, bits;\n@@ -152,7 +152,7 @@\n if (*odst >= 240) /* Class E */\n bits = 32;\n else if (*odst >= 224) /* Class D */\n- bits = 4;\n+ bits = 8;\n else if (*odst >= 192) /* Class C */\n bits = 24;\n else if (*odst >= 128) /* Class B */\n@@ -160,8 +160,14 @@\n else /* Class A */\n bits = 8;\n /* If imputed mask is narrower than specified octets, widen. */\n- if (bits >= 8 && bits < ((dst - odst) * 8))\n+ if (bits < ((dst - odst) * 8))\n bits = (dst - odst) * 8;\n+ /*\n+ * If there are no additional bits specified for a class D\n+ * address adjust bits to 4.\n+ */\n+ if (bits == 8 && *odst == 224)\n+ bits = 4;\n }\n /* Extend network to cover the actual mask. */\n while (bits > ((dst - odst) * 8)) {\n\nre:\n\n> To: Steve Atkins <[email protected]>\n> Cc: pgsql-hackers <[email protected]>, [email protected]\n> Subject: Re: [HACKERS] [BUGS] Bug in create operator and/or initdb \n> Comments: In-reply-to Steve Atkins <[email protected]>\n> \tmessage dated \"Mon, 31 Jan 2005 07:23:05 -0800\"\n> Date: Mon, 31 Jan 2005 12:16:26 -0500\n> From: Tom Lane <[email protected]>\n> \n> Steve Atkins <[email protected]> writes:\n> > The cidr type, including it's external interface, is simply broken.\n> \n> That is a large claim that I don't think you have demonstrated.\n> The only one of your examples that seems to me to contradict the\n> documentation is this one:\n> \n> steve=# select '224.0.0.0'::cidr;\n> cidr \n> -------------\n> 224.0.0.0/4\n> \n> which should be /32 according to what the docs say:\n> \n> : If y is omitted, it is calculated using assumptions from the older\n> : classful network numbering system, except that it will be at least large\n> : enough to include all of the octets written in the input.\n> \n> The bogus netmask is in turn responsible for this case:\n> \n> steve=# select '224.10.0.0'::cidr;\n> ERROR: invalid cidr value: \"224.10.0.0\"\n> DETAIL: Value has bits set to right of mask.\n> \n> \n> Looking at the source code, there seems to be a special case for \"class D\"\n> network numbers that causes the code not to extend y to cover the\n> supplied inputs:\n> \n> /* If no CIDR spec was given, infer width from net class. 
*/\n> if (bits == -1)\n> {\n> if (*odst >= 240) /* Class E */\n> bits = 32;\n> else if (*odst >= 224) /* Class D */\n> bits = 4;\n> else if (*odst >= 192) /* Class C */\n> bits = 24;\n> else if (*odst >= 128) /* Class B */\n> bits = 16;\n> else /* Class A */\n> bits = 8;\n> /* If imputed mask is narrower than specified octets, widen. */\n> if (bits >= 8 && bits < ((dst - odst) * 8))\n> ^^^^^^^^^\n> bits = (dst - odst) * 8;\n> }\n> \n> I think the test for \"bits >= 8\" should be removed. Does anyone know\n> why it's there?\n> \n> \t\t\tregards, tom lane\n", "msg_date": "Mon, 31 Jan 2005 19:28:28 +0000", "msg_from": "Paul Vixie <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUGS] Bug in create operator and/or initdb " }, { "msg_contents": "Tom Lane <[email protected]> writes:\n\n> steve=# select '224.0.0.0'::cidr;\n> cidr \n> -------------\n> 224.0.0.0/4\n> \n> which should be /32 according to what the docs say:\n\n224-239 are multicast addresses. Making it /4 makes the entire multicast\naddress space one network block which is about as reasonable an answer as\nanything else.\n\n> if (bits >= 8 && bits < ((dst - odst) * 8))\n> ^^^^^^^^^\n> bits = (dst - odst) * 8;\n> }\n> \n> I think the test for \"bits >= 8\" should be removed. Does anyone know\n> why it's there?\n\nI guess Vixie figured network blocks subdividing multicast address space\nweren't a sensible concept? It's a bit of a strange constraint to hard code\ninto the C code though.\n\nIncidentally, how can that code possibly work? It treats odst as a pointer in\nsome places but then calculates bits using arithmetic on it directly without\ndereferencing?\n\n-- \ngreg\n\n", "msg_date": "31 Jan 2005 14:35:58 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUGS] Bug in create operator and/or initdb" }, { "msg_contents": "Paul Vixie <[email protected]> writes:\n> when my cidr datatype was integrated into pgsql, the decision was made to\n> incorporate a copy of bind's inet_net_pton.c rather than add a link-time\n> dependence to libbind.a (libbind.so).\n\nWe didn't really want to assume that all platforms are using libbind :-(\n\n> thus, when this bug was fixed in 2003:\n\n> ----------------------------\n> revision 1.14\n> date: 2003/08/20 02:21:08; author: marka; state: Exp; lines: +10 -4\n> 1580. [bug] inet_net_pton() didn't fully handle implicit\n> multicast IPv4 network addresses.\n\n> the pgsql \"fork\" of this code did not benefit from the fix. the patch was:\n\nAh-hah. Many thanks for supplying the patch --- will integrate it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 31 Jan 2005 14:36:33 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUGS] Bug in create operator and/or initdb " }, { "msg_contents": "> We didn't really want to assume that all platforms are using libbind :-(\n\ni think you could have, at the time, since windows wasn't even a gleam in\npgsql's eye. even now, libbind would be a dependable universal dependency,\nsince we publish windows binaries.\n\n> > the pgsql \"fork\" of this code did not benefit from the fix. the patch was:\n> \n> Ah-hah. Many thanks for supplying the patch --- will integrate it.\n\ni have two suggestions. first, look at the rest of the current source file,\nin case there are other fixes. 
second, track changes this source file during\nyour release engineering process for each new pgsql version.\n", "msg_date": "Mon, 31 Jan 2005 22:44:38 +0000", "msg_from": "Paul Vixie <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUGS] Bug in create operator and/or initdb " }, { "msg_contents": "Paul Vixie <[email protected]> writes:\n> i have two suggestions. first, look at the rest of the current source file,\n> in case there are other fixes.\n\nRight, I already grabbed the latest.\n\n> second, track changes this source file during\n> your release engineering process for each new pgsql version.\n\nBruce, do you think this is worth adding to RELEASE_CHANGES?\ninet_net_ntop.c and inet_net_pton.c are both extracted from the BIND\ndistribution. But they're hardly the only files we took from elsewhere.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 31 Jan 2005 17:52:40 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUGS] Bug in create operator and/or initdb " }, { "msg_contents": "Tom Lane wrote:\n> Paul Vixie <[email protected]> writes:\n> > i have two suggestions. first, look at the rest of the current source file,\n> > in case there are other fixes.\n> \n> Right, I already grabbed the latest.\n> \n> > second, track changes this source file during\n> > your release engineering process for each new pgsql version.\n> \n> Bruce, do you think this is worth adding to RELEASE_CHANGES?\n> inet_net_ntop.c and inet_net_pton.c are both extracted from the BIND\n> distribution. But they're hardly the only files we took from elsewhere.\n\nYes, I do. Most of the stuff we pull from other OS projects has\nclearly-defined behavior, while inet/cidr seem to be still in flux a\nlittle.\n\nAdded to release checklist:\n\n\t* Update inet/cidr data types with newest Bind patches\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 2 Feb 2005 12:00:05 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUGS] Bug in create operator and/or initdb" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Added to release checklist:\n> \t* Update inet/cidr data types with newest Bind patches\n\nYou should also add \"check for zic database updates\".\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 02 Feb 2005 12:02:58 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUGS] Bug in create operator and/or initdb " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > Added to release checklist:\n> > \t* Update inet/cidr data types with newest Bind patches\n> \n> You should also add \"check for zic database updates\".\n\nUh, we already have:\n\n\t* Update timezone data to match latest zic database (see\n\t src/timezone/README)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 2 Feb 2005 12:38:04 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUGS] Bug in create operator and/or initdb" } ]
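A handful of probe queries, sketched from the cases quoted in this thread, that could be re-run once inet_net_pton.c is synced with the BIND fix. The expected results in the comments follow Tom's reading of the documented classful rule (the mask must at least cover the octets actually written); they are not output captured from a patched server.

    select '224'::cidr;        -- bare class D octet: the whole multicast block, 224.0.0.0/4
    select '224.0.0.0'::cidr;  -- four octets written: should come back /32, not /4
    select '224.10.0.0'::cidr; -- should now parse instead of failing with
                               -- "Value has bits set to right of mask"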
[ { "msg_contents": "\nHiya...\n\n\tWould send email directly, but only have the address at home :(\n\n\tD'Arcy...you've been working on a Python interface, correct? Do\nwe want to include that as part of src/interfaces? *raised eyebrows*\n\n\n", "msg_date": "Thu, 15 Jan 1998 11:57:17 -0500 (EST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Python...?" }, { "msg_contents": "Thus spake The Hermit Hacker\n> \tWould send email directly, but only have the address at home :(\n> \n> \tD'Arcy...you've been working on a Python interface, correct? Do\n> we want to include that as part of src/interfaces? *raised eyebrows*\n\nSure. I'm currently working on a few enhancements and I hope to make\nit compliant with the Python standard database API. The current version\nworks fine though so go right ahead.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Thu, 15 Jan 1998 13:37:43 -0500 (EST)", "msg_from": "[email protected] (D'Arcy J.M. Cain)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Python...?" } ]
[ { "msg_contents": "Hi,\n\nI have installed PostgreSQL6.2.1 from my own user account (I was just trying\nto test out something first). It seems to be running too slow. For example,\nthe self join of a table with roughly 300K records takes 2-3 hours. There is\nan index on the join attribute.\nI am running it on a 8 processor (each is a 248 MHz SUNW,UltraSPARC-II)\n machine with a total of 2.0 GB main memory.\nBut, since it is not parallelized, it may get the power of only one processor.\n\nDoes it make a difference in performance since I have installed it from my\nuser account, and not the root. Also, I compiled it using gcc on a \nsolaris 2.5 machine and the machine I ran it is a solaris 2.6 machine. I did\nthat since the solaris 2.6 machine is the fastest we have here and the \nexecutable\ncompiled on 2.5 was running properly also. Can that have an impact on the\nperformance?\n\nThanks\n--shiby\n\n", "msg_date": "Thu, 15 Jan 1998 13:01:37 -0500", "msg_from": "Shiby Thomas <[email protected]>", "msg_from_op": true, "msg_subject": "postgres performance" }, { "msg_contents": "On Thu, 15 Jan 1998, Shiby Thomas wrote:\n\n> Hi,\n> \n> I have installed PostgreSQL6.2.1 from my own user account (I was just trying\n> to test out something first). It seems to be running too slow. For example,\n> the self join of a table with roughly 300K records takes 2-3 hours. There is\n> an index on the join attribute.\n> I am running it on a 8 processor (each is a 248 MHz SUNW,UltraSPARC-II)\n> machine with a total of 2.0 GB main memory.\n> But, since it is not parallelized, it may get the power of only one processor.\n> \n> Does it make a difference in performance since I have installed it from my\n> user account, and not the root. Also, I compiled it using gcc on a \n> solaris 2.5 machine and the machine I ran it is a solaris 2.6 machine. I did\n> that since the solaris 2.6 machine is the fastest we have here and the \n> executable\n> compiled on 2.5 was running properly also. Can that have an impact on the\n> performance?\n\n\tThere may be optimizations in the 2.6 libraries that would improve\nperformance, but I wouldn't suspect that it would make *that* big of a\ndifference. What is your SQL/join statemnt? How are you running\npostmaster? What does 'explain' show?\n\n\n", "msg_date": "Thu, 15 Jan 1998 13:15:59 -0500 (EST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postgres performance" }, { "msg_contents": "> \n> Hi,\n> \n> I have installed PostgreSQL6.2.1 from my own user account (I was just trying\n> to test out something first). It seems to be running too slow. For example,\n> the self join of a table with roughly 300K records takes 2-3 hours. There is\n> an index on the join attribute.\n> I am running it on a 8 processor (each is a 248 MHz SUNW,UltraSPARC-II)\n> machine with a total of 2.0 GB main memory.\n> But, since it is not parallelized, it may get the power of only one processor.\n\nYou said a self-join. I think we have a performance problem there. \nVadim?\n\n\n> \n> Does it make a difference in performance since I have installed it from my\n> user account, and not the root. Also, I compiled it using gcc on a \n> solaris 2.5 machine and the machine I ran it is a solaris 2.6 machine. I did\n> that since the solaris 2.6 machine is the fastest we have here and the \n> executable\n> compiled on 2.5 was running properly also. 
Can that have an impact on the\n> performance?\n> \n> Thanks\n> --shiby\n> \n> \n> \n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Thu, 15 Jan 1998 13:23:07 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postgres performance" }, { "msg_contents": "\n=> \tThere may be optimizations in the 2.6 libraries that would improve\n=> performance, but I wouldn't suspect that it would make *that* big of a\n=> difference. What is your SQL/join statemnt? How are you running\n=> postmaster? What does 'explain' show?\n=> \nThe complete query is this:\n\nselect item1, item2, count(t1.tid) into table f2_temp from data t1, data t2, \nc2\nwhere t1.item = c2.item1 and t2.item = c2.item2 and t1.tid = t2.tid group by \nite\nm1, item2\n\ndata is a table with 2 integer columns (tid, item) and it has ~300K records\nc2 is a table (item1, item2), both integers and has ~1.5K records.\n\nI was directly running postgres with the -B and -S flags to give more buffers\nand sortMem. I also tried several join plans by the -f flags. Hash join works\nthe best and that itself is too slow (perhaps due to the self join)\n\n--shiby\n\n\n\n", "msg_date": "Thu, 15 Jan 1998 13:33:08 -0500", "msg_from": "Shiby Thomas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] postgres performance " }, { "msg_contents": "> \n> \n> => \tThere may be optimizations in the 2.6 libraries that would improve\n> => performance, but I wouldn't suspect that it would make *that* big of a\n> => difference. What is your SQL/join statemnt? How are you running\n> => postmaster? What does 'explain' show?\n> => \n> The complete query is this:\n> \n> select item1, item2, count(t1.tid) into table f2_temp from data t1, data t2, \n> c2\n> where t1.item = c2.item1 and t2.item = c2.item2 and t1.tid = t2.tid group by \n> ite\n> m1, item2\n> \n> data is a table with 2 integer columns (tid, item) and it has ~300K records\n> c2 is a table (item1, item2), both integers and has ~1.5K records.\n> \n> I was directly running postgres with the -B and -S flags to give more buffers\n> and sortMem. I also tried several join plans by the -f flags. Hash join works\n> the best and that itself is too slow (perhaps due to the self join)\n> \n\nI have a possible workaround. Turn GEQO on:\n\n\tSET GEQO ON=1\n\nand try it. Let us know.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Thu, 15 Jan 1998 19:26:45 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postgres performance" }, { "msg_contents": "Shiby Thomas wrote:\n> \n> => There may be optimizations in the 2.6 libraries that would improve\n> => performance, but I wouldn't suspect that it would make *that* big of a\n> => difference. What is your SQL/join statemnt? How are you running\n> => postmaster? What does 'explain' show?\n> =>\n> The complete query is this:\n> \n> select item1, item2, count(t1.tid) into table f2_temp from data t1, data t2,\n> c2\n> where t1.item = c2.item1 and t2.item = c2.item2 and t1.tid = t2.tid group by\n> ite\n> m1, item2\n> \n> data is a table with 2 integer columns (tid, item) and it has ~300K records\n> c2 is a table (item1, item2), both integers and has ~1.5K records.\n> \n> I was directly running postgres with the -B and -S flags to give more buffers\n> and sortMem. I also tried several join plans by the -f flags. 
Hash join works\n> the best and that itself is too slow (perhaps due to the self join)\n\nIndices ?\nEXPLAIN ?\n\nVadim\n", "msg_date": "Fri, 16 Jan 1998 10:30:22 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postgres performance" } ]
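One way to answer the "Indices? EXPLAIN?" question above, sketched against the table and column names given earlier in the thread (data(tid, item), roughly 300K rows, and c2(item1, item2)). The index names are illustrative, and the SELECT ... INTO target is dropped since only the plan is of interest here.

    create index data_item_idx on data (item);
    create index data_tid_idx on data (tid);
    create index c2_item1_idx on c2 (item1);
    vacuum analyze data;
    vacuum analyze c2;

    explain select item1, item2, count(t1.tid)
    from data t1, data t2, c2
    where t1.item = c2.item1
      and t2.item = c2.item2
      and t1.tid = t2.tid
    group by item1, item2;

Comparing the plan and timings with and without the indexes, and with the different -f settings already tried, would show whether the time really goes into the hash join itself or into the sort/group step.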
[ { "msg_contents": "\n> OIDs are a bastardization of the relational model. If you have to keep\n> them, then do so, but their use should be SEVERELY discouraged.\n\nExplain yourself, please.\n\nIn my opinion, I view the OID in the same way as I view the SERIAL datatype\nin Informix. It is usually a primary key field in a table. On an insert,\nthe DBMS will increment the current serial-maximum (for that table) and insert\nthe new serial value into that field; thus creating a unique identifier.\n\nThere are differences between OID and SERIAL. The main difference is that\nthe OID field (always called 'oid') is always present whereas a DB designer\nexplicitly creates 'id' fields (of SERIAL type). Thus, postgresql treats\nevery table as an object (which is not always the case).\n\nIs the SERIAL datatype part of the SQL-92 standard? Does PostgreSQL plan\nto support SERIAL in the future. This would be an acceptable replacement\nfor the OID.\n\n-Bryan Basham\n", "msg_date": "Thu, 15 Jan 1998 11:39:16 -0700 (MST)", "msg_from": "Bryan Basham <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [QUESTIONS] Arrays (inserting and removing)" }, { "msg_contents": "On Thu, 15 Jan 1998, Bryan Basham wrote:\n\n> \n> > OIDs are a bastardization of the relational model. If you have to keep\n> > them, then do so, but their use should be SEVERELY discouraged.\n> \n> Explain yourself, please.\n> \n> In my opinion, I view the OID in the same way as I view the SERIAL datatype\n> in Informix. It is usually a primary key field in a table. On an insert,\n> the DBMS will increment the current serial-maximum (for that table) and insert\n> the new serial value into that field; thus creating a unique identifier.\n> \n> There are differences between OID and SERIAL. The main difference is that\n> the OID field (always called 'oid') is always present whereas a DB designer\n> explicitly creates 'id' fields (of SERIAL type). Thus, postgresql treats\n> every table as an object (which is not always the case).\n\n\tMajor problem with OID: OIDs are sequenced across the database,\nnot the table. ie. tableA inserts with OID #1, tableB inserts with OID\n#2, tableA inserts next record with OID #3, tableC then gets #4, etc...\n\n\tAnd...# of OIDs is finite...so if you have a lot of tables with\nalot of data in each...you run the risk of running out.\n\n\tIn this sense, sequences are the better alternative, but again,\nthey are a newer feature to PostgreSQL then the code that I wrote using\nOIDs\n\n\n", "msg_date": "Thu, 15 Jan 1998 13:48:24 -0500 (EST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] Arrays (inserting and removing)" }, { "msg_contents": "On Thu, 15 Jan 1998, The Hermit Hacker wrote:\n\n> \tMajor problem with OID: OIDs are sequenced across the database,\n> not the table. ie. tableA inserts with OID #1, tableB inserts with OID\n> #2, tableA inserts next record with OID #3, tableC then gets #4, etc...\n\nin my oo world (i.e., systems i develop), i use oid's to determine the\ntype of object and due to the limitation of postgresql's oids, i use a\nseparate field for the system's oid - kinda redundant but gotta live with\nit. nonetheless, postgresql's oids can still be improved.\n\n> \tAnd...# of OIDs is finite...so if you have a lot of tables with\n> alot of data in each...you run the risk of running out.\n\nfinitely limited :)\n\n[---]\nNeil D. 
Quiogue <[email protected]>\nIPhil Communications Network, Inc.\nOther: [email protected]\n\n", "msg_date": "Fri, 16 Jan 1998 09:04:10 +0800 (HKT)", "msg_from": "\"neil d. quiogue\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] Arrays (inserting and removing)" } ]
[ { "msg_contents": "\nHi...\n\n\tI've just installed (massaged in?) a major major patch for PL(?)\nthat was submitted in Nov...snapshot will be created in about 18hrs or\nso...please test and report back any problems...haven't had a chance to\ntest compile it here yet :(\n\n\n\n", "msg_date": "Thu, 15 Jan 1998 14:44:20 -0500 (EST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "*Major* Patch for PL " }, { "msg_contents": "> Hi...\n>\n> I've just installed (massaged in?) a major major patch for PL(?)\n> that was submitted in Nov...snapshot will be created in about 18hrs or\n> so...please test and report back any problems...haven't had a chance to\n> test compile it here yet :(\n\n Waited for that so long - thanks. I'll take a look at it and\n run my tests asap.\n\n\nUntil later, Jan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Mon, 19 Jan 1998 12:12:11 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] *Major* Patch for PL" }, { "msg_contents": ">\n> > Hi...\n> >\n> > I've just installed (massaged in?) a major major patch for PL(?)\n> > that was submitted in Nov...snapshot will be created in about 18hrs or\n> > so...please test and report back any problems...haven't had a chance to\n> > test compile it here yet :(\n>\n> Waited for that so long - thanks. I'll take a look at it and\n> run my tests asap.\n>\n\n Looks O.K. - tests passed through.\n\n Now anything is prepared for PL support in all places\n (aggregates, operators, etc.). At least PL/Tcl ran in all\n that areas.\n\n Found a little bug in interfaces/libpq/fe-connect.c during\n tests. In line 403 conn->pgpass is set to DefaultPassword and\n later in line 665 this might be free()'d. Just wrapping a\n strdup() around DefaultPassword fixed it.\n\n Did anybody worked on other languages for PL modules since\n this new interface was designed? Who would like to dive into\n (perl/phyton/...)? Who would like to co-work on a pure\n PL/pgSQL?\n\n\nUntil later, Jan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Mon, 19 Jan 1998 14:18:01 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] *Major* Patch for PL" } ]
[ { "msg_contents": "I keep getting:\n\nToo Large Allocation Request(\"!(0 < (size) && (size) <= \n(0xfffffff)):size=-3 [0xfffffffd]\", File: \"mcxt.c\", Line: 232)\n!(0 < (size) && (size) <= (0xfffffff)) (0) [No such file or directory]\n\nfrom a simple select:\n\nSELECT * from provinces;\n\nincidentally, \"SELECT oid from provinces;\" works...\n(as does any select on one component)... but checking...\nNope - countrycode fails. Prints out garbage (8bit garbage at that)\nAs do all fields after that - though they're just bad data.\n\nFor comparison, I have another database with \"varchar(50), integer,\ntimestamp, timestamp\" - and the only noncorrupted field is the first\nvarchar() entry. Perhaps it's in the string-handling?\n\nThe contents of the database:\n---\nCREATE TABLE provinces\n (\n code\t\tchar(4),\t-- ...\n name\t\tvarchar(50),\t-- ...\n countrycode\tchar(2),\t-- ...\n country\tvarchar(50),\t-- ...\n\n -- creation and modification dates of this record...\n creationdate\ttimestamp default now(),\n modifydate\ttimestamp default now()\n );\n\nINSERT INTO provinces (code, name, countrycode, country)\n\tVALUES ('bc','British Columbia','ca','Canada');\nINSERT INTO provinces (code, name, countrycode, country)\n\tVALUES ('ab','Alberta', 'ca','Canada');\nINSERT INTO provinces (code, name, countrycode, country)\n\tVALUES ('alta','Alberta', 'ca','Canada');\n-- one gets tired of databases referring to \"state\"... so time to turn the\n-- tables *grin*\nINSERT INTO provinces (code, name, countrycode, country)\n\tVALUES ('wa', 'Washington',\t'us','United States');\n---\n\nIncidentally, this worked perfectly up 'till christmas-CVS. I had some\nother problems so I upgraded... and now it core-dumps 'psql' and sends\nthis to the backend.\n\nOn debugging psql I found the error in \"UP()\", but as I don't know how to\nuse gdb properly I can't look any deeper (I've always preferred logfiles -\nespecially when working on videodriver code :)\n[mayhaps psql needs better recovery code... will have to look at that\nsometime]\n\nJDBC also fails FWIW - with these messages: (same query)\njava.sql.SQLException: Error reading from backend: java.io.IOException: EOF\n at postgresql.PG_Stream.ReceiveInteger(PG_Stream.java:184)\n at postgresql.PG_Stream.ReceiveTuple(PG_Stream.java:256)\n at postgresql.Connection.ExecSQL(Connection.java:626)\n at postgresql.Statement.execute(Statement.java:259)\n at postgresql.Statement.executeQuery(Statement.java:46)\n at Mauve.Main.readProvinces(Main.java:234)\n at Mauve.Main.main(Main.java:363)\njava.sql.SQLException: I/O Error: java.io.IOException: Broken pipe\n at postgresql.Connection.ExecSQL(Connection.java:583)\n at postgresql.Statement.execute(Statement.java:259)\n at postgresql.Statement.executeQuery(Statement.java:46)\n at Mauve.Main.readPaymentForms(Main.java:252)\n at Mauve.Main.main(Main.java:366)\n\nI suspect a bug somewhere - but don't know how to find it. Or if my\ndata's incorrect.... many things have changed in postgres.\n\nPlatform:\n\tlinux 2.0.33 (got tired of being flamed for using 2.1 kernels)\n\tglibc-2.0.5c (standard! ref: redhat-5.0)\n\tgcc 2.7.2.1 (I also have egc-1.0 but I don't trust it yet)\n\tCyrix 6x86, 40M ram, ~5G drivespace\n\nThough it's nice to finally see large objects working *grin*.\n\nG'day, eh? :)\n\t- Teunis\n\n", "msg_date": "Thu, 15 Jan 1998 12:54:13 -0700 (MST)", "msg_from": "teunis <[email protected]>", "msg_from_op": true, "msg_subject": "a small problem with current CVS - wondering how to debug" } ]
[ { "msg_contents": "\nI wish ppl would quite dropping off CC's :( Especially when this stuff\n*should* be being discussed in [email protected]...\n\nAs for the pg_privileges relation and whatnot...is this something we want\nfor v6.3 (can it be done in 2weeks?) or wait until after v6.3 is released?\n\n\n\nOn Thu, 15 Jan 1998, todd brandys wrote:\n\n> > wouldn't that pose a problem with setting up someone other the\n> > the superuser to create databases? *Raised eyebrows*\n> \n> Again I feel that PostgreSQL should have a pg_privileges relation which would\n> handle the CREATE DATABASE privilege. There would be no problem then.\n> \n> Todd A. Brandys\n> [email protected]\n> \n\n", "msg_date": "Thu, 15 Jan 1998 15:27:21 -0500 (EST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: New pg_pwd patch and stuff" }, { "msg_contents": "> \n> \n> I wish ppl would quite dropping off CC's :( Especially when this stuff\n> *should* be being discussed in [email protected]...\n> \n> As for the pg_privileges relation and whatnot...is this something we want\n> for v6.3 (can it be done in 2weeks?) or wait until after v6.3 is released?\n> \n\nWell, I can create the table quite easily. The issue is what type of\nflack we will get by haveing pg_user non-readable, and removing the user\nname from psql's \\d output.\n\n> \n> \n> On Thu, 15 Jan 1998, todd brandys wrote:\n> \n> > > wouldn't that pose a problem with setting up someone other the\n> > > the superuser to create databases? *Raised eyebrows*\n> > \n> > Again I feel that PostgreSQL should have a pg_privileges relation which would\n> > handle the CREATE DATABASE privilege. There would be no problem then.\n> > \n> > Todd A. Brandys\n> > [email protected]\n> > \n> \n> \n> \n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Thu, 15 Jan 1998 15:51:34 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: New pg_pwd patch and stuff" } ]
[ { "msg_contents": "> Fork off the postgres process first, then authenticate inside of\n> there...which would get rid of the problem with pg_user itself being a\n> text file vs a relation...no?\n\nYes, yes, yes. This is how authentication should be done (for HBA, etc.)\nFurthermore, we could reduce the footprint of the postmaster drastically. It\nwould only need to accept a socket connection and fork the backend. This\nscenario would also allow the postmaster to be run as the root user. Good\nthings could only come of this method.\n\nThe only reason I put my authentication scheme where it is, is that all the\nother authentication schemes take place in the postmaster, and to work things\nproperly, use of my scheme (checking to see if there is a password or not) must\ncome first.\n\nTodd A. Brandys\[email protected]\n\n", "msg_date": "Thu, 15 Jan 1998 14:31:37 -0600", "msg_from": "todd brandys <[email protected]>", "msg_from_op": true, "msg_subject": "Re: New pg_pwd patch and stuff" }, { "msg_contents": "todd brandys wrote:\n> \n> > Fork off the postgres process first, then authenticate inside of\n> > there...which would get rid of the problem with pg_user itself being a\n> > text file vs a relation...no?\n> \n> Yes, yes, yes. This is how authentication should be done (for HBA, etc.)\n\nNo, no, no! For security reasons, you can't fork (and exec)\nunauthenticated processes. Especially HBA authentication should be done\nto consume as low resources as possbile. Otherwise you open a giant door\nfor so infamously called Denial of Service attacks. Afterwards, every\nhacker will know that to bring your system running postgres to it's\nknees he just have to try to connect to 5432 port very frequently. \"OK\",\nyou might say, \"I have this firewall\". \"OK\", I say, \"so what's that HBA\nfor?\".\n\nSo it's the postmaster's role to deny as much connections as possible.\nUnless we speak of non-execing postgres childs?\n\nMike\n\n-- \nWWW: http://www.lodz.pdi.net/~mimo tel: Int. Acc. Code + 48 42 148340\nadd: Michal Mosiewicz * Bugaj 66 m.54 * 95-200 Pabianice * POLAND\n", "msg_date": "Fri, 16 Jan 1998 02:35:38 +0000", "msg_from": "\"Micha��� Mosiewicz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: New pg_pwd patch and stuff" }, { "msg_contents": "On Fri, 16 Jan 1998, Micha3 Mosiewicz wrote:\n\n> No, no, no! For security reasons, you can't fork (and exec)\n> unauthenticated processes. Especially HBA authentication should be done\n> to consume as low resources as possbile. Otherwise you open a giant door\n> for so infamously called Denial of Service attacks. Afterwards, every\n> hacker will know that to bring your system running postgres to it's\n> knees he just have to try to connect to 5432 port very frequently. \"OK\",\n> you might say, \"I have this firewall\". \"OK\", I say, \"so what's that HBA\n> for?\".\n> \n> So it's the postmaster's role to deny as much connections as possible.\n> Unless we speak of non-execing postgres childs?\n\n\tHrmmmm...i don't quite agree with this. postmaster can handle one \nconnection at a time, and then has to pass it off to the postgres backend\nprocess...DoS attacks are easier now then by forking before HBA. I just have\nto continuously open a connection to port 5432...so, while postmaster is\nhandling that connection, checking HBA, checking a password...no other new \nconnections can happen. Can't think of a stronger DoS then that...? :)\n\nMarc G. 
Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 16 Jan 1998 00:11:46 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: New pg_pwd patch and stuff" }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> On Fri, 16 Jan 1998, Micha3 Mosiewicz wrote:\n> \n> > No, no, no! For security reasons, you can't fork (and exec)\n> > unauthenticated processes. Especially HBA authentication should be done\n> > to consume as low resources as possbile. Otherwise you open a giant door\n> > for so infamously called Denial of Service attacks. Afterwards, every\n> > hacker will know that to bring your system running postgres to it's\n> > knees he just have to try to connect to 5432 port very frequently. \"OK\",\n> > you might say, \"I have this firewall\". \"OK\", I say, \"so what's that HBA\n> > for?\".\n> >\n> > So it's the postmaster's role to deny as much connections as possible.\n> > Unless we speak of non-execing postgres childs?\n> \n> Hrmmmm...i don't quite agree with this. postmaster can handle one\n> connection at a time, and then has to pass it off to the postgres backend\n> process...DoS attacks are easier now then by forking before HBA. I just have\n> to continuously open a connection to port 5432...so, while postmaster is\n> handling that connection, checking HBA, checking a password...no other new\n> connections can happen. Can't think of a stronger DoS then that...? :)\n\nI think Micha is right. The postmaster can handle multiple connections\nas the read of the startup packet is done a fragment at a time without\nblocking so there is no DoS problem until the postmaster runs out of\nsockets. I think this is less of a problem than loads of\nunauthenticated, resource hungry backends forked by the postmaster.\n\nIn changing the authentication methods for 6.3 I've had to add the\nability for the postmaster to do non-blocking writes as well as reads so\nthat a two-way (non-blocking) dialog can take place between frontend and\npostmaster.\n\nHaving said that, I won't fix (for 6.3 anyway) other parts of the\npostmaster that do blocked I/O - the ident lookup in particular. \nHowever, it is at least under the control of the DBA whether or not\nident is used.\n\nPhil\n", "msg_date": "Fri, 16 Jan 1998 19:45:14 +0000", "msg_from": "Phil Thompson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: New pg_pwd patch and stuff" }, { "msg_contents": "The Hermit Hacker wrote:\n\n> Hrmmmm...i don't quite agree with this. postmaster can handle one\n> connection at a time, and then has to pass it off to the postgres backend\n> process...DoS attacks are easier now then by forking before HBA. I just have\n\nForking is not so bad... but isn't there any exec also? And of course\nit's a difference if your machine is overloaded by processes or if it's\nonly one service that doesn't respond becouse the access-controling code\nis disabled.\n\nSecond question... if we speak only about forking postmaster, or it's\nabout forking-execing-opening files-reading-etc stuff? If it's only\nfork, I would totally agree with you, otherwise I'm not sure which is\nworse... \n\nMike\n\n-- \nWWW: http://www.lodz.pdi.net/~mimo tel: Int. Acc. 
Code + 48 42 148340\nadd: Michal Mosiewicz * Bugaj 66 m.54 * 95-200 Pabianice * POLAND\n\n", "msg_date": "Sun, 18 Jan 1998 03:39:45 +0000", "msg_from": "\"Micha3 Mosiewicz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: New pg_pwd patch and stuff" }, { "msg_contents": "\n\nOn Fri, 16 Jan 1998, Phil Thompson wrote:\n\n> The Hermit Hacker wrote:\n> > \n> > On Fri, 16 Jan 1998, Micha3 Mosiewicz wrote:\n> > \n> > > No, no, no! For security reasons, you can't fork (and exec)\n> > > unauthenticated processes. Especially HBA authentication should be done\n> > > to consume as low resources as possbile. Otherwise you open a giant door\n> > > for so infamously called Denial of Service attacks. Afterwards, every\n> > > hacker will know that to bring your system running postgres to it's\n> > > knees he just have to try to connect to 5432 port very frequently. \"OK\",\n> > > you might say, \"I have this firewall\". \"OK\", I say, \"so what's that HBA\n> > > for?\".\n> > >\n> > > So it's the postmaster's role to deny as much connections as possible.\n> > > Unless we speak of non-execing postgres childs?\n> > \n> > Hrmmmm...i don't quite agree with this. postmaster can handle one\n> > connection at a time, and then has to pass it off to the postgres backend\n> > process...DoS attacks are easier now then by forking before HBA. I just have\n> > to continuously open a connection to port 5432...so, while postmaster is\n> > handling that connection, checking HBA, checking a password...no other new\n> > connections can happen. Can't think of a stronger DoS then that...? :)\n> \n> I think Micha is right. The postmaster can handle multiple connections\n> as the read of the startup packet is done a fragment at a time without\n> blocking so there is no DoS problem until the postmaster runs out of\n> sockets. I think this is less of a problem than loads of\n> unauthenticated, resource hungry backends forked by the postmaster.\n> \n> In changing the authentication methods for 6.3 I've had to add the\n> ability for the postmaster to do non-blocking writes as well as reads so\n> that a two-way (non-blocking) dialog can take place between frontend and\n> postmaster.\n> \n> Having said that, I won't fix (for 6.3 anyway) other parts of the\n> postmaster that do blocked I/O - the ident lookup in particular. \n> However, it is at least under the control of the DBA whether or not\n> ident is used.\n> \n> Phil\n> \n\nOne way or another PostgreSQL is subject to attack. If we do continue on \nwith postmaster doing authentication, then the postmaster should be made \ncapable of performing queries against the system catalog. This would \nallow HBA to be incorporated into the system catalog (pg_user or \nwhatever). Why would we do this? It will make PostgreSQL easier to \nadminister. The user will not have to edit files to make a change. HBA's\ncharacteristics could be partially handled by adding a:\n\nFROM (<host1> | <group1>) ... [, (<hostn> | <groupn>)]\n\nclause to the 'CREATE USER' statement. Right now the only way this could be\nachieved would be to pass authentication down to the backend processes, \nas the postmaster can not execute SQL statements.\n\nTodd A. 
Brandys\[email protected]\n\n", "msg_date": "Sun, 18 Jan 1998 15:47:30 -0600 (CST)", "msg_from": "todd brandys <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: New pg_pwd patch and stuff" }, { "msg_contents": " So, if postmaster can't access the system tables for authentication\npurposes, why not have one child forked, connected to the main \npostmaster process via some IPC mechanism, whose only purpose\nin life it is to answer SQL-queries for postmaster.\n\n\tmjl\n\n", "msg_date": "Mon, 19 Jan 1998 04:18:34 +0100 (MET)", "msg_from": "\"Martin J. Laubach\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: New pg_pwd patch and stuff" }, { "msg_contents": "\n\nOn Mon, 19 Jan 1998, Martin J. Laubach wrote:\n\n> So, if postmaster can't access the system tables for authentication\n> purposes, why not have one child forked, connected to the main \n> postmaster process via some IPC mechanism, whose only purpose\n> in life it is to answer SQL-queries for postmaster.\n> \n> \tmjl\n> \n> \n\nI would have to say \"footprint\". All the code for the backend has been \nincorporated into the postmaster. The postmaster is nothing more than a\nbackend run with some special command line options. The postgres executable\nis over 1 MB large and is a static executable. This is quite a bit of\n(virtual) memory. I realize that memory is cheap these days, but to \nhave all the backend code included in with the postmaster and not have it \nuseable is a waste of resources. To then have a second instance running \nwould take more resources.\n\nIf we must maintain user authorization in the postmaster, then I believe \nit would be best to strip this code from the postgres executable to \ncreate a postmaster with a smaller footprint. Furthermore, the postmaster\nshould be made capable of performing heapscans, so that it could at least\nview the data in the system catalog. Then all the data needed for each \nauthentication method (or a single method if we incorporated the best \naspects of each method into one super authentication method), could be \nstored in the system catalog. This information could then be managed by the\nCREATE/ALTER USER statements, which would alleviate the need to edit flat \nfiles for system configuration.\n\nTodd A. Brandys\[email protected]\n\n", "msg_date": "Tue, 20 Jan 1998 12:10:23 -0600 (CST)", "msg_from": "todd brandys <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: New pg_pwd patch and stuff" } ]
[ { "msg_contents": "> As for the pg_privileges relation and whatnot...is this something we want\n> for v6.3 (can it be done in 2weeks?) or wait until after v6.3 is released?\n\nI don't think (realistically) that such a task could be done in two weeks. No.\nRather, we should wait until after release 6.3, and then maybe spend some time\ndebating on what the pg_privileges table should look like. After the table is\ncreated (the easy part), then it becomes a hunt to find all the places where\nprivileges are checked and change to the code in these spot (not too bad really).\nFinally, we have to develop the code for governing column permissions (this is\nthe most difficult part of the project). The query processor must be changed\nto check the permissions of each column in all commands.\n\nThis would be a tall order for two weeks. Especially, to be certain that we\nhad a consensus among the hacker community that what was being done was being\ndone in the best way possible.\n\nTodd A. Brandys\[email protected]\n", "msg_date": "Thu, 15 Jan 1998 14:41:51 -0600", "msg_from": "todd brandys <[email protected]>", "msg_from_op": true, "msg_subject": "Re: New pg_pwd patch and stuff" }, { "msg_contents": "> \n> > As for the pg_privileges relation and whatnot...is this something we want\n> > for v6.3 (can it be done in 2weeks?) or wait until after v6.3 is released?\n> \n> I don't think (realistically) that such a task could be done in two weeks. No.\n> Rather, we should wait until after release 6.3, and then maybe spend some time\n> debating on what the pg_privileges table should look like. After the table is\n> created (the easy part), then it becomes a hunt to find all the places where\n> privileges are checked and change to the code in these spot (not too bad really).\n> Finally, we have to develop the code for governing column permissions (this is\n> the most difficult part of the project). The query processor must be changed\n> to check the permissions of each column in all commands.\n> \n> This would be a tall order for two weeks. Especially, to be certain that we\n> had a consensus among the hacker community that what was being done was being\n> done in the best way possible.\n\n\nI believe doing permissions on VIEWS would be much simpler than\ncolumn-level permissions. That way, you create the view with the\ncolumns you need, and give that to the user.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Thu, 15 Jan 1998 16:05:54 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: New pg_pwd patch and stuff" } ]
[ { "msg_contents": "subscribe\n", "msg_date": "Thu, 15 Jan 1998 14:42:31 -0600", "msg_from": "todd brandys <[email protected]>", "msg_from_op": true, "msg_subject": "subscribe" } ]
[ { "msg_contents": "Forwarded message:\n> > Why attlen should be -1 ?\n> > attlen in pg_attribute for v in table t is 84, why run-time attlen\n> > should be -1 ? How else maxlen constraint could be checked ?\n> > IMHO, you have to change heap_getattr() to check is atttype == VARCHAROID\n> > and use vl_len if yes. Also, other places where attlen is used must be \n> > changed too - e.g. ExecEvalVar():\n> > \n> > {\n> > len = tuple_type->attrs[attnum - 1]->attlen;\n> > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> > byval = tuple_type->attrs[attnum - 1]->attbyval ? true : false;\n> > }\n> > \n> > execConstByVal = byval;\n> > execConstLen = len;\n> > ^^^^^^^^^^^^^^^^^^ - used in nodeHash.c\n> > \n> \n> The major problem is that TupleDesc comes from several places, and\n> attlen means several things.\n> \n> There are some cases where TupleDesc (int numatt, Attrs[]) is created\n> on-the-fly (tupdesc.c), and the attlen is the length of the type. In\n> other cases, we get attlen from opening the relation, heap_open(), and\n> in these cases it is the length as defined for the particular attribute.\n> \n> Certainly a bad situation. I am not sure about a fix.\n\nOK, here is a temporary fix to the problem. It does the heap_open(),\nthen replaces the attrs for VARCHAR with attlen of -1. You can't just\nchange the field, because the data is in a cache and you are just\nreturned a pointer.\n\nCan I add an 'attdeflen' for \"attributed defined length\" field to\npg_attribute, and change the attlen references needed to the new field?\nThis is the only proper way to fix it.\n\n\n---------------------------------------------------------------------------\n\n*** ./backend/executor/execAmi.c.orig\tThu Jan 15 22:42:13 1998\n--- ./backend/executor/execAmi.c\tThu Jan 15 23:54:37 1998\n***************\n*** 42,47 ****\n--- 42,48 ----\n #include \"access/genam.h\"\n #include \"access/heapam.h\"\n #include \"catalog/heap.h\"\n+ #include \"catalog/pg_type.h\"\n \n static Pointer\n ExecBeginScan(Relation relation, int nkeys, ScanKey skeys,\n***************\n*** 124,129 ****\n--- 125,155 ----\n \tif (relation == NULL)\n \t\telog(DEBUG, \"ExecOpenR: relation == NULL, heap_open failed.\");\n \n+ \t{\n+ \t\tint i;\n+ \t\tRelation trel = palloc(sizeof(RelationData));\n+ \t\tTupleDesc tdesc = palloc(sizeof(struct tupleDesc));\n+ \t\tAttributeTupleForm *tatt =\n+ \t\t\t\tpalloc(sizeof(AttributeTupleForm*)*relation->rd_att->natts);\n+ \n+ \t\tmemcpy(trel, relation, sizeof(RelationData));\n+ \t\tmemcpy(tdesc, relation->rd_att, sizeof(struct tupleDesc));\n+ \t\ttrel->rd_att = tdesc;\n+ \t\ttdesc->attrs = tatt;\n+ \t\t\n+ \t\tfor (i = 0; i < relation->rd_att->natts; i++)\n+ \t\t{\n+ \t\t\tif (relation->rd_att->attrs[i]->atttypid != VARCHAROID)\n+ \t\t\t\ttdesc->attrs[i] = relation->rd_att->attrs[i];\n+ \t\t\telse\n+ \t\t\t{\n+ \t\t\t\ttdesc->attrs[i] = palloc(sizeof(FormData_pg_attribute));\n+ \t\t\t\tmemcpy(tdesc->attrs[i], relation->rd_att->attrs[i],\n+ \t\t\t\t\t\t\t\t\t\t\tsizeof(FormData_pg_attribute));\n+ \t\t\t\ttdesc->attrs[i]->attlen = -1;\n+ \t\t\t}\n+ \t\t}\n+ \t}\n \treturn relation;\n }\n \n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Fri, 16 Jan 1998 00:05:03 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: varchar() troubles (fwd)" }, { "msg_contents": "> OK, here is a temporary fix to the problem. It does the heap_open(),\n> then replaces the attrs for VARCHAR with attlen of -1. 
You can't just\n> change the field, because the data is in a cache and you are just\n> returned a pointer.\n>\n> Can I add an 'attdeflen' for \"attributed defined length\" field to\n> pg_attribute, and change the attlen references needed to the new field?\n> This is the only proper way to fix it.\n\nBruce, does your \"temporary fix\" seem to repair all known problems with varchar()? If so, would you be interested in\nholding off on a \"proper fix\" and coming back to it after v6.3 is released? At that time, we can try solving the general\nproblem of retaining column-specific attributes, such as your max len for varchar, declared dimensions for arrays, and\nnumeric() and decimal() types. Or, if you have time to try a solution now _and_ come back to it later...\n\n - Tom\n\n", "msg_date": "Fri, 16 Jan 1998 05:26:08 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: varchar() troubles (fwd)" }, { "msg_contents": "> \n> > OK, here is a temporary fix to the problem. It does the heap_open(),\n> > then replaces the attrs for VARCHAR with attlen of -1. You can't just\n> > change the field, because the data is in a cache and you are just\n> > returned a pointer.\n> >\n> > Can I add an 'attdeflen' for \"attributed defined length\" field to\n> > pg_attribute, and change the attlen references needed to the new field?\n> > This is the only proper way to fix it.\n> \n\n> Bruce, does your \"temporary fix\" seem to repair all known problems\nwith varchar()? If so, would you be interested in > holding off on a\n\"proper fix\" and coming back to it after v6.3 is released? At that time,\nwe can try solving the general > problem of retaining column-specific\nattributes, such as your max len for varchar, declared dimensions for\narrays, and > numeric() and decimal() types. Or, if you have time to try\na solution now _and_ come back to it later... > > \n\n[Those wide post really are difficult.]\n\nI don't think my solution is perfect or complete. I only caught one\ngroup of heap_open calls used in the executor. I could funnel all of\nthem through this patched function, but I can imagine there would be\nones I would miss. Once the structure is gotten from the cache, it\nseems to fly around the executor code quite freely, and it is hard to\nknow when a tuple descriptor is being created, if it is being used for\ndata creation or data reference. attlen references are much clearer in\ntheir intent.\n\nIf I add a new field type to FormData_pg_attribute, I can then check\neach attlen reference, and check if it is trying to move through the\non-disk storage (attlen/typlen) or create a new/modify an entry\n(attdeflen).\n\nHow much time I have depends on what Vadim needs me to do for\nsubselects.\n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Fri, 16 Jan 1998 00:44:11 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: varchar() troubles (fwd)" }, { "msg_contents": "> \n> > OK, here is a temporary fix to the problem. It does the heap_open(),\n> > then replaces the attrs for VARCHAR with attlen of -1. 
You can't just\n> > change the field, because the data is in a cache and you are just\n> > returned a pointer.\n> >\n> > Can I add an 'attdeflen' for \"attributed defined length\" field to\n> > pg_attribute, and change the attlen references needed to the new field?\n> > This is the only proper way to fix it.\n> \n> Bruce, does your \"temporary fix\" seem to repair all known problems with varchar()? If so, would you be interested in\n> holding off on a \"proper fix\" and coming back to it after v6.3 is released? At that time, we can try solving the general\n> problem of retaining column-specific attributes, such as your max len for varchar, declared dimensions for arrays, and\n> numeric() and decimal() types. Or, if you have time to try a solution now _and_ come back to it later...\n> \n> - Tom\n> \n> \n\nIn fact, I am inclined to leave attlen unchanged, and add atttyplen that\nis a copy of the length of the type. That way, the attlen for varchar()\nreally contains the defined length, and atttyplen is used for disk\nreferences, and it is very clear what it means.\n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Fri, 16 Jan 1998 00:49:02 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: varchar() troubles (fwd)" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > >\n> > > Can I add an 'attdeflen' for \"attributed defined length\" field to\n> > > pg_attribute, and change the attlen references needed to the new field?\n> > > This is the only proper way to fix it.\n> >\n> > Bruce, does your \"temporary fix\" seem to repair all known problems with varchar()? If so, would you be interested in\n> > holding off on a \"proper fix\" and coming back to it after v6.3 is released? At that time, we can try solving the general\n> > problem of retaining column-specific attributes, such as your max len for varchar, declared dimensions for arrays, and\n> > numeric() and decimal() types. Or, if you have time to try a solution now _and_ come back to it later...\n> >\n> \n> In fact, I am inclined to leave attlen unchanged, and add atttyplen that\n> is a copy of the length of the type. That way, the attlen for varchar()\n> really contains the defined length, and atttyplen is used for disk\n> references, and it is very clear what it means.\n\npg_attribute.h:\n\n int2 attlen; \n\n /*\n * attlen is a copy of the typlen field from pg_type for this\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n * attribute... \n ^^^^^^^^^\n: I would suggest to don't change this to preserve the same meaning\nfor all data types. attlen = -1 means that attribute is varlena.\n\nWe certainly need in new field in pg_attribute! I don't like names like\n\"attdeflen\" or \"atttyplen\" - bad names for NUMERIC etc. Something like\natttspec (ATTribute Type SPECification) is better, imho.\n\nFor the varchar(N) we'll have attlen = -1 and atttspec = N (or N + 4 - what's\nbetter). For the text: attlen = -1 and atttspec = -1. And so on.\n\nOf 'course, it's not so much matter where to put maxlen of varchar.\nattlen = -1 for varchar just seems more clear to me.\n\nBut in any case we need in new field and, imho, this should be added\nin 6.3\n\nVadim\n", "msg_date": "Fri, 16 Jan 1998 16:31:45 +0700", "msg_from": "\"Vadim B. 
Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: varchar() troubles (fwd)" }, { "msg_contents": "> \n> Bruce Momjian wrote:\n> > \n> > > >\n> > > > Can I add an 'attdeflen' for \"attributed defined length\" field to\n> > > > pg_attribute, and change the attlen references needed to the new field?\n> > > > This is the only proper way to fix it.\n> > >\n> > > Bruce, does your \"temporary fix\" seem to repair all known problems with varchar()? If so, would you be interested in\n> > > holding off on a \"proper fix\" and coming back to it after v6.3 is released? At that time, we can try solving the general\n> > > problem of retaining column-specific attributes, such as your max len for varchar, declared dimensions for arrays, and\n> > > numeric() and decimal() types. Or, if you have time to try a solution now _and_ come back to it later...\n> > >\n> > \n> > In fact, I am inclined to leave attlen unchanged, and add atttyplen that\n> > is a copy of the length of the type. That way, the attlen for varchar()\n> > really contains the defined length, and atttyplen is used for disk\n> > references, and it is very clear what it means.\n> \n> pg_attribute.h:\n> \n> int2 attlen; \n> \n> /*\n> * attlen is a copy of the typlen field from pg_type for this\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> * attribute... \n> ^^^^^^^^^\n> : I would suggest to don't change this to preserve the same meaning\n> for all data types. attlen = -1 means that attribute is varlena.\n> \n> We certainly need in new field in pg_attribute! I don't like names like\n> \"attdeflen\" or \"atttyplen\" - bad names for NUMERIC etc. Something like\n> atttspec (ATTribute Type SPECification) is better, imho.\n> \n> For the varchar(N) we'll have attlen = -1 and atttspec = N (or N + 4 - what's\n> better). For the text: attlen = -1 and atttspec = -1. And so on.\n> \n> Of 'course, it's not so much matter where to put maxlen of varchar.\n> attlen = -1 for varchar just seems more clear to me.\n> \n> But in any case we need in new field and, imho, this should be added\n> in 6.3\n\nOK, we have a new pg_attribute column called 'atttypmod' for\n'type-specific modifier'. Currently, it is only used to hold the char\nand varchar length, but I am sure will be used soon for other types.\n\nHere is the test:\n\t\n\ttest=> insert into test values ('asdfasdfasdfasdfasdfadsfasdf11',3);\n\tINSERT 18282 1\n\ttest=> select * from test;\n\tx |y\n\t--------------------+-\n\tasdfasdfasdfasdfasdf|3\n\t(1 row)\n\n'attlen' was certainly a confusing double-used field that I am glad to\nreturn to single-use status.\n\nI will be installing the patch soon, and will then start on subselects\nin the parser. It will probably take me until Monday to finish that.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Fri, 16 Jan 1998 17:48:24 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: varchar() troubles (fwd)" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> OK, we have a new pg_attribute column called 'atttypmod' for\n> 'type-specific modifier'. 
Currently, it is only used to hold the char\n> and varchar length, but I am sure will be used soon for other types.\n\nNice!\n\n> \n> Here is the test:\n> \n> test=> insert into test values ('asdfasdfasdfasdfasdfadsfasdf11',3);\n> INSERT 18282 1\n> test=> select * from test;\n> x |y\n> --------------------+-\n> asdfasdfasdfasdfasdf|3\n> (1 row)\n> \n> 'attlen' was certainly a confusing double-used field that I am glad to\n> return to single-use status.\n> \n> I will be installing the patch soon, and will then start on subselects\n> in the parser. It will probably take me until Monday to finish that.\n\nOk.\n\nVadim\n", "msg_date": "Sun, 18 Jan 1998 15:30:11 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: varchar() troubles (fwd)" } ]
[ { "msg_contents": "> scan.l: In function `yylex':\n> scan.l:202: `ABORT' undeclared (first use this function)\n\nSince the snapshot is intended for experienced (or wanting to become so)\npostgresql'ers\nthat would like to help the team and don't have CVS (like me on AIX),\nmaybe the snapshot shoud be moved to a development subdir ?\n\nThat said, it's not a bad build, only there seem to be some files\nincluded\nthat should not, like scan.c (that is outdated) .\nDo a 'make clean' before the make and everything is fine.\n\nAndreas\n", "msg_date": "Fri, 16 Jan 1998 09:23:53 +0100", "msg_from": "Zeugswetter Andreas DBT <[email protected]>", "msg_from_op": true, "msg_subject": "Nope: Cannot build recent snapshot" }, { "msg_contents": "> \n> > scan.l: In function `yylex':\n> > scan.l:202: `ABORT' undeclared (first use this function)\n> \n> Since the snapshot is intended for experienced (or wanting to become so)\n> postgresql'ers\n> that would like to help the team and don't have CVS (like me on AIX),\n> maybe the snapshot shoud be moved to a development subdir ?\n> \n> That said, it's not a bad build, only there seem to be some files\n> included\n> that should not, like scan.c (that is outdated) .\n> Do a 'make clean' before the make and everything is fine.\n\nWe include scan.c because some lex versions can't handle scan.l.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Fri, 16 Jan 1998 09:20:20 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Nope: Cannot build recent snapshot" } ]
[ { "msg_contents": ">> OIDs are a bastardization of the relational model. If you have to\nkeep\n>> them, then do so, but their use should be SEVERELY discouraged.\n>\n>\tActually, I use them quite extensively...I have several\nWWW-based\n>search directories that are searched with:\n>\n>select oid,<fields> from <table> where <search conditions>;\n>\n>\tThat display minimal data to the browser, and then if someone\n>wants more detailed information, I just do:\n>\n>select * from <table> where oid = '';\n>\n>\tIts also great if you mess up the original coding for a table\nand\n>want to remove 1 of many duplicates that you've accidently let pass\n>through :(\n\nSince OID is not a rowid (physical address) it needs an extra index,\nthat has to be built,\nfor what you state ROWID will be a lot better (since it don't need an\nindex)\nThis is on the TODO (something like add ctid to where clause)\n\nAndreas \n", "msg_date": "Fri, 16 Jan 1998 09:34:29 +0100", "msg_from": "Zeugswetter Andreas DBT <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [QUESTIONS] Arrays (inserting and removing)" } ]
[ { "msg_contents": "unsubscribe [email protected]\n", "msg_date": "Fri, 16 Jan 1998 10:29:07 +0100", "msg_from": "bas aerts <[email protected]>", "msg_from_op": true, "msg_subject": "(no subject)" } ]
[ { "msg_contents": "unsubscribe pgsql-hackers\n", "msg_date": "Fri, 16 Jan 1998 10:29:31 +0100", "msg_from": "bas aerts <[email protected]>", "msg_from_op": true, "msg_subject": "(no subject)" } ]
[ { "msg_contents": "> > \n> > > Fork off the postgres process first, then authenticate\ninside of\n> > > there...which would get rid of the problem with pg_user itself\nbeing a\n> > > text file vs a relation...no?\n> > \n> > Yes, yes, yes. This is how authentication should be done (for HBA,\netc.)\n> \n> No, no, no! For security reasons, you can't fork (and exec)\n> unauthenticated processes. Especially HBA authentication should be\ndone\n> to consume as low resources as possbile.\n\nStartup time for a valid connect client is now < 0.16 secs, so is this\nreally a threat ?\nI would say might leave hba to postmaster (since postgres don't need to\nknow about it)\nthen fork off postgres and do the rest of the authentication. \n\nRunning postgres as root though is a **very** bad idea.\nRemember that we have user defined Functions !\n\nno, yes, yes \nAndreas\n", "msg_date": "Fri, 16 Jan 1998 10:33:00 +0100", "msg_from": "Zeugswetter Andreas DBT <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: New pg_pwd patch and stuff" }, { "msg_contents": "On Fri, 16 Jan 1998, Zeugswetter Andreas DBT wrote:\n \n> Running postgres as root though is a **very** bad idea.\n> Remember that we have user defined Functions !\n\n\tpostmaster nor postgres will run as root, so I'm not sure where\nyou are coming up with a \"Running postgres as root...\" problem :(\n\n\n", "msg_date": "Fri, 16 Jan 1998 08:08:02 -0500 (EST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: New pg_pwd patch and stuff" } ]
[ { "msg_contents": "\nAnother *cool* project would be a LDAPd front-end to PgSQL,\nor perhaps rather a PgSQL-backend to SLAPD.\n\nI am planning to look into this when I got the neccesary time,\nbut feel free to start ahead of me if you like the idea.\n\n happy hacking,\n-- \n---------------------------------------------\nG�ran Thyni, sysadm, JMS Bildbasen, Kiruna\n\n", "msg_date": "16 Jan 1998 13:21:08 -0000", "msg_from": "Goran Thyni <[email protected]>", "msg_from_op": true, "msg_subject": "LDAP (was: CBAC ...)" } ]
[ { "msg_contents": "> On 12-Jan-98 The Hermit Hacker wrote:\n> > Does anyone here *understand* the LGPL? If we put the ODBC\n> >drivers *under* src/interfaces, does that risk contaminating the rest of\n> >the code *in any way*? Anyone here done a reasonably thorough study of\n> >the LGPL and can comment on it?\n> \n> My understanding from Stallman's statements on the matter are: Distribution of\n> GPL'd source with non-GPL'd source is fine, as long as it is simple to figure\n> out which is which. By definition, GPL'd sources can be distributed freely.\n> For binaries which fall under the GPL, again, mixing them with other stuff is\n> OK, as long as GPL'd stuff is identified as such. Sources must be available,\n> of course.\n> \n> LGPL is completely different. LGPL is what you use when you link your\n> non-GPL'd sources against a library built with GPL'd sources. In that case,\n> you are legal IFF you stuff can be re-linked against a different, non-GPL'd\n> library without recompilation. Actually, there's a bit of confusion on my\n> part about how much recompilation is permitted.\n> \n> Companies like DG/Sequent/Sun/etc wouldn't be able to include FSF software on\n> the distributions if the above were not the case.\n> \n> ObCaveat: I'm not a lawyer. I don't look like a lawyer, I don't smell like a\n> lawyer, and I don't lie like a lawyer.\n> \n> \nMy understanding is pretty much the same. Originally there was only GPL. This\nreally says that anything you link with GPL code must be distributed under\nGPL - effectively your source becomes part of the original GPL'd product.\n\nClearly this is ridiculous when you are linking against, say, the GNU\nC-library, so Stallman introduced LGPL which effectively says that any\nmodifications or additions you make to the library fall under the LGPL,\nbut anything which calls the LGPL'd library can have whatever copyright\nyou want. Thus it is possible to produce commercial products which use\nthe GNU C-library, etc., etc.\n\n\nAndrew\n\n(Same caveat as above :-)\n\n\n----------------------------------------------------------------------------\nDr. Andrew C.R. Martin University College London\nEMAIL: (Work) [email protected] (Home) [email protected]\nURL: http://www.biochem.ucl.ac.uk/~martin\nTel: (Work) +44(0)171 419 3890 (Home) +44(0)1372 275775\n", "msg_date": "Fri, 16 Jan 1998 15:06:12 GMT", "msg_from": "Andrew Martin <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] ODBC & LGPL license..." }, { "msg_contents": "On Fri, 16 Jan 1998, Andrew Martin wrote:\n\n> > On 12-Jan-98 The Hermit Hacker wrote:\n> > > Does anyone here *understand* the LGPL? If we put the ODBC\n> > >drivers *under* src/interfaces, does that risk contaminating the rest of\n> > >the code *in any way*? Anyone here done a reasonably thorough study of\n> > >the LGPL and can comment on it?\n> > \n> > My understanding from Stallman's statements on the matter are: Distribution of\n> > GPL'd source with non-GPL'd source is fine, as long as it is simple to figure\n> > out which is which. By definition, GPL'd sources can be distributed freely.\n> > For binaries which fall under the GPL, again, mixing them with other stuff is\n> > OK, as long as GPL'd stuff is identified as such. Sources must be available,\n> > of course.\n> > \n> > LGPL is completely different. LGPL is what you use when you link your\n> > non-GPL'd sources against a library built with GPL'd sources. 
In that case,\n> > you are legal IFF you stuff can be re-linked against a different, non-GPL'd\n> > library without recompilation. Actually, there's a bit of confusion on my\n> > part about how much recompilation is permitted.\n> > \n> > Companies like DG/Sequent/Sun/etc wouldn't be able to include FSF software on\n> > the distributions if the above were not the case.\n> > \n> > ObCaveat: I'm not a lawyer. I don't look like a lawyer, I don't smell like a\n> > lawyer, and I don't lie like a lawyer.\n> > \n> > \n> My understanding is pretty much the same. Originally there was only GPL. This\n> really says that anything you link with GPL code must be distributed under\n> GPL - effectively your source becomes part of the original GPL'd product.\n> \n> Clearly this is ridiculous when you are linking against, say, the GNU\n> C-library, so Stallman introduced LGPL which effectively says that any\n> modifications or additions you make to the library fall under the LGPL,\n> but anything which calls the LGPL'd library can have whatever copyright\n> you want. Thus it is possible to produce commercial products which use\n> the GNU C-library, etc., etc.\n\n\tOkay, then going back to the original...the PostODBC drivers that\nI'd like to include as part of the src/interfaces directory falls under\nLGPL...if we did include it, then we wouldn't/shouldn't be contaminating\nthe source tree in any way?\n\n\n\n", "msg_date": "Fri, 16 Jan 1998 10:41:09 -0500 (EST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] ODBC & LGPL license..." } ]
[ { "msg_contents": "> Got my Linux Journal today and the first featured article is entitled\n> \n> \"PostgreSQL - The Linux of Databases\"\n> \n> Now scrappy, before you get bent out of joint, they mean this in a nice\n> way ;-)\n> \n> The author is Rolf Herzog from Germany. It seems like a good article,\n> with a few factual errors but on the whole complimentary without\n> ignoring the weak points (yes Bruce, subselects are mentioned).\n> \nHaven't seen it yet - guess my copy will arrive any day. It does seem\na shame that an article like this came from someone who isn't active on\nany of these lists (AFAIK). Maybe he could have posted the article here\nso any factual errors could have been pointed out before it went\nto LJ.\n\nLJ have featured PG/SQL on a couple of previous occassions --- it really\nis a very good Unix magazine; an awful lot of it is far from Linux\nspecific.\n\nBest Wishes,\n\nAndrew\n\n----------------------------------------------------------------------------\nDr. Andrew C.R. Martin University College London\nEMAIL: (Work) [email protected] (Home) [email protected]\nURL: http://www.biochem.ucl.ac.uk/~martin\nTel: (Work) +44(0)171 419 3890 (Home) +44(0)1372 275775\n", "msg_date": "Fri, 16 Jan 1998 15:25:35 GMT", "msg_from": "Andrew Martin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Linux Journal article on PostgreSQL" }, { "msg_contents": "On Fri, 16 Jan 1998, Andrew Martin wrote:\n\n> > Got my Linux Journal today and the first featured article is entitled\n> > \n> > \"PostgreSQL - The Linux of Databases\"\n> > \n> > Now scrappy, before you get bent out of joint, they mean this in a nice\n> > way ;-)\n> > \n> > The author is Rolf Herzog from Germany. It seems like a good article,\n> > with a few factual errors but on the whole complimentary without\n> > ignoring the weak points (yes Bruce, subselects are mentioned).\n> > \n> Haven't seen it yet - guess my copy will arrive any day. It does seem\n> a shame that an article like this came from someone who isn't active on\n> any of these lists (AFAIK). Maybe he could have posted the article here\n> so any factual errors could have been pointed out before it went\n> to LJ.\n\n\tI'm borrowing a copy from one of the profs here at the University,\nbut from Thomas' \"review\", sounds like he was pretty accurate :)\n\n\n", "msg_date": "Fri, 16 Jan 1998 10:44:24 -0500 (EST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Linux Journal article on PostgreSQL" }, { "msg_contents": "> > Got my Linux Journal today and the first featured article is entitled\n> > \"PostgreSQL - The Linux of Databases\"\n> Haven't seen it yet - guess my copy will arrive any day. It does seem\n> a shame that an article like this came from someone who isn't active on\n> any of these lists (AFAIK). Maybe he could have posted the article here\n> so any factual errors could have been pointed out before it went\n> to LJ.\n\nYes, that was my initial reaction too. However, to look at it another way, it\nis great that someone, who is not as wrapped up in Postgres as those of us\nactive on the lists seem to be, can look at it, use it, and say nice things\nabout it. I have _never_ seen an article written about something with which I\nam familiar which seemed to be 100% correct, except (perhaps :) in refereed\njournals.\n\n - Tom\n\n", "msg_date": "Fri, 16 Jan 1998 16:23:10 +0000", "msg_from": "\"Thomas G. 
Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Linux Journal article on PostgreSQL" } ]
[ { "msg_contents": "> On Thu, Jan 15, 1998 at 09:27:21AM +0700, Vadim B. Mikheev wrote:\n> > The Hermit Hacker wrote:\n> > > \n> > > On Wed, 14 Jan 1998, Ralf Mattes wrote:\n> > > \n> > > > Yes, i agree. Mike's implementation is the way to go in a traditional\n> > > > realtional db (BTW, a question about oids: lurking on this list\n> > > > i overheard the idea of dropping oids. This would break a lot\n> > > > of my code! What's the last word on this ?).\n> > > \n> > > The last we discussed in pgsql-hackers was that OIDs would not be\n> > > dropped...\n> > \n> > ..but would be optional.\n> > \n> > Vadim\n> > \n> \n> OIDs are a bastardization of the relational model. If you have to keep\n> them, then do so, but their use should be SEVERELY discouraged.\n> \n> Karl Denninger ([email protected])| MCSNet - Serving Chicagoland and Wisconsin\n\nSo what if they are a \"bastardization of the relational model\". If they\nare the best way of solving a problem, then they should be used. Most people\nwho use RDBMSs for real work find that the purity of the relational model\nhas to be compromised for efficiency (e.g. creating combined columns, so\nyour queries don't need dozens of ANDs). Having something like OIDs to\ndefine relationships between rows in a table can be the only way to solve\ncertain problems.\n\nAndrew\n\n----------------------------------------------------------------------------\nDr. Andrew C.R. Martin University College London\nEMAIL: (Work) [email protected] (Home) [email protected]\nURL: http://www.biochem.ucl.ac.uk/~martin\nTel: (Work) +44(0)171 419 3890 (Home) +44(0)1372 275775\n", "msg_date": "Fri, 16 Jan 1998 15:29:23 GMT", "msg_from": "Andrew Martin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] Arrays (inserting and removing)" } ]
[ { "msg_contents": "> > The remaining part of the patch is to force the undefinition of\n> > HAVE_INT_TIMEZONE; again this is glibc2-specific, but I don't know\n> > any reason to suppose it wouldn't be needed on any machine with glibc2.\n> > \n> > It would really be helpful to have someone on a non-Linux machine test\n> > it; but is there anyone?\n> \n> Our code is complicated enough without adding patches for OS bugs. A\n> good place for the patch is the Linux FAQ. If it really becomes a\n> problem, we can put the patch as a separate file in the distribution,\n> and mention it in the INSTALL instructions. If you really want to get\n> fancy, as I did with the flex bug, you can run a test at compile time to\n> see if the bug exists.\n> \n> -- \n> Bruce Momjian\n> [email protected]\n> \n\nIf someone can mail me an exact description of the problem with info on\nthe setup which shows it AND the patch info, then I'll add it to the\nLinux-specific FAQ. (Much like the ld bug in Irix-6 has a patch\ndescription in the Irix-FAQ).\n\n\nBets wishes,\n\nAndrew\n\n----------------------------------------------------------------------------\nDr. Andrew C.R. Martin University College London\nEMAIL: (Work) [email protected] (Home) [email protected]\nURL: http://www.biochem.ucl.ac.uk/~martin\nTel: (Work) +44(0)171 419 3890 (Home) +44(0)1372 275775\n", "msg_date": "Fri, 16 Jan 1998 15:52:51 GMT", "msg_from": "Andrew Martin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Patch for glibc2 date problems" } ]
[ { "msg_contents": "> > > ObCaveat: I'm not a lawyer. I don't look like a lawyer, I don't smell like a\n> > > lawyer, and I don't lie like a lawyer.\n> > > \n> > > \n> > My understanding is pretty much the same. Originally there was only GPL. This\n> > really says that anything you link with GPL code must be distributed under\n> > GPL - effectively your source becomes part of the original GPL'd product.\n> > \n> > Clearly this is ridiculous when you are linking against, say, the GNU\n> > C-library, so Stallman introduced LGPL which effectively says that any\n> > modifications or additions you make to the library fall under the LGPL,\n> > but anything which calls the LGPL'd library can have whatever copyright\n> > you want. Thus it is possible to produce commercial products which use\n> > the GNU C-library, etc., etc.\n> \n> \tOkay, then going back to the original...the PostODBC drivers that\n> I'd like to include as part of the src/interfaces directory falls under\n> LGPL...if we did include it, then we wouldn't/shouldn't be contaminating\n> the source tree in any way?\n> \n> \n> \nI'm sure that's OK. Just been reading the LGPL again. Here are some salient\nparagraphs with comments:\n\n\n\n!!! From the Preamble:\nThe reason we have a separate public license for some libraries is that they blur the distinction we\nusually make between modifying or adding to a program and simply using it. Linking a program\nwith a library, without changing the library, is in some sense simply using the library, and is\nanalogous to running a utility program or application program. However, in a textual and legal\nsense, the linked executable is a combined work, a derivative of the original library, and the\nordinary General Public License treats it as such. \n\nBecause of this blurred distinction, using the ordinary General Public License for libraries did not\neffectively promote software sharing, because most developers did not use the libraries. We\nconcluded that weaker conditions might promote sharing better. \n\nHowever, unrestricted linking of non-free programs would deprive the users of those programs of\nall benefit from the free status of the libraries themselves. This Library General Public License is\nintended to permit developers of non-free programs to use free libraries, while preserving your\nfreedom as a user of such programs to change the free libraries that are incorporated in them. (We\nhave not seen how to achieve this as regards changes in header files, but we have achieved it as\nregards changes in the actual functions of the Library.) The hope is that this will lead to faster\ndevelopment of free libraries. \n\n\n\n\nFor \"non-free programs\" one should really read \"programs not distributed \nunder the GPL\".\n\nHere is the critical part from Section 2 of the LGPL:\n\n\n\n\n!!! From Section 2:\nThese requirements apply to the modified work as a whole. If identifiable sections of that work are\nnot derived from the Library, and can be reasonably considered independent and separate works in\nthemselves, then this License, and its terms, do not apply to those sections when you distribute\nthem as separate works. But when you distribute the same sections as part of a whole which is a\nwork based on the Library, the distribution of the whole must be on the terms of this License,\nwhose permissions for other licensees extend to the entire whole, and thus to each and every part\nregardless of who wrote it. 
\n\nThus, it is not the intent of this section to claim rights or contest your rights to work written\nentirely by you; rather, the intent is to exercise the right to control the distribution of derivative or\ncollective works based on the Library. \n\nIn addition, mere aggregation of another work not based on the Library with the Library (or with a\nwork based on the Library) on a volume of a storage or distribution medium does not bring the\nother work under the scope of this License. \n\n\n\n\n\nIt really relies upon the distinction between what is \"the library\" and what is a\n\"separate work\" (i.e. PostgreSQL) which happens to make use of the library.\n\nBut what really applies to us is Section 5:\n\n\n\n\n!!! From Section 5:\n5. A program that contains no derivative of any portion of the Library, but is designed to work\nwith the Library by being compiled or linked with it, is called a \"work that uses the Library\". Such\na work, in isolation, is not a derivative work of the Library, and therefore falls outside the scope of\nthis License. \n\n\n\n\n\ni.e. the PostgreSQL source remains under our copyright.\nBut then:\n\n\n\n\n!!! From Section 5:\nHowever, linking a \"work that uses the Library\" with the Library creates an executable that is a\nderivative of the Library (because it contains portions of the Library), rather than a \"work that uses\nthe library\". The executable is therefore covered by this License. Section 6 states terms for\ndistribution of such executables. \n\n\n\n\n\nWe don't normally distribute executables, so that's not a problem.\nIn any case, this is simply a legal definition since, as Section 6\nstates:\n\n\n\n\n!!! From Section 6:\n6. As an exception to the Sections above, you may also compile or link a \"work that uses the\nLibrary\" with the Library to produce a work containing portions of the Library, and distribute that\nwork under terms of your choice, provided that the terms permit modification of the work for the\ncustomer's own use and reverse engineering for debugging such modifications. \n\n\n\n\nWe allow all that, so it's not a problem (though this would appear to \nrestrict commercial products which are certainly available!). I guess,\nhowever, that these are linked to use shared libraries.\n\nHowever, this section then appears to contradict itself as it then says:\n\n\n\n\n!!! From Section 6:\n...Also, you must do one of these things:\n a) Accompany the work with the complete corresponding machine-readable source code for\n the Library including whatever changes were used in the work (which must be distributed\n under Sections 1 and 2 above); and, if the work is an executable linked with the Library, with\n the complete machine-readable \"work that uses the Library\", as object code and/or source\n code, so that the user can modify the Library and then relink to produce a modified\n executable containing the modified Library. (It is understood that the user who changes the\n contents of definitions files in the Library will not necessarily be able to recompile the\n application to use the modified definitions.) \n\n\n\n\n\nHere, you only need to supply the object code such that you can \nmodify *the library* not the \"work that uses the Library\" whereas\nthe first part of Section 6 implies that you must be able to\nmodify the \"work that uses the Library\". Commercial offerings\nclearly rely on this being the clause they follow. 
By linking\nwith an LGPL'd library dynamically, they also don't need to\nsupply object code as you are fulfilling the requirement that\nyou can link with a modified version of the library.\n\n\n\nIn summary (remembering that I am not a lawyer), it appears that\nthe LGPL certainly does not \"pollute\" our code, but to use an LGPL\nlibrary we need to satisfy certain conditions concerning distribution.\nSince the licence which we give is even more free than LGPL and\nwe do satisfy all these requirements, it really is not a problem.\n\nThe only restriction I can see if for third parties who wish to\nuse PostgreSQL for some commercial product which they then wish\nto sell. They must satisfy the requirements of the LGPL when they\nsell that product if they link PostgreSQL with the LGPL'd library.\nThus having a compile time option to include (or not) the LGPL'd\nODBC stuff would seem like a sensible option. \n\n\nIf the ODBC stuff is simply a stand-alone library (which uses the \nPostgreSQL libraries) rather than being linked into PostgreSQL \nitself, then there really is no problem whatsoever, as this would\nthen be a simple aggregation of work. From Section 2:\n\nIn addition, mere aggregation of another work not based on the Library with the Library (or with a\nwork based on the Library) on a volume of a storage or distribution medium does not bring the\nother work under the scope of this License. \n\n\n\nAndrew\n\n----------------------------------------------------------------------------\nDr. Andrew C.R. Martin University College London\nEMAIL: (Work) [email protected] (Home) [email protected]\nURL: http://www.biochem.ucl.ac.uk/~martin\nTel: (Work) +44(0)171 419 3890 (Home) +44(0)1372 275775\n", "msg_date": "Fri, 16 Jan 1998 16:27:44 GMT", "msg_from": "Andrew Martin <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] ODBC & LGPL license..." } ]
[ { "msg_contents": "\n> OK, we never installed this for 6.2 because we were already in Beta. \n> Can we do this for 6.3? Vadim suggested we make this part of libpq, so\n> all applications could make use of it.\n> \n> \n\nAny more comments about adding support for .psqlrc? The original\nproposal was that psql should read commands from /etc/psqlrc\nand then from $(HOME)/.psqlrc\n\nNobody seems to have raised any objections. I vote for including this\nfor 6.3\n\n\nAndrew\n\n----------------------------------------------------------------------------\nDr. Andrew C.R. Martin University College London\nEMAIL: (Work) [email protected] (Home) [email protected]\nURL: http://www.biochem.ucl.ac.uk/~martin\nTel: (Work) +44(0)171 419 3890 (Home) +44(0)1372 275775\n", "msg_date": "Fri, 16 Jan 1998 16:30:34 GMT", "msg_from": "Andrew Martin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] PSQL man page patch" }, { "msg_contents": "Andrew Martin wrote:\n\n> > OK, we never installed this for 6.2 because we were already in Beta.\n> > Can we do this for 6.3? Vadim suggested we make this part of libpq, so\n> > all applications could make use of it.\n> >\n> >\n>\n> Any more comments about adding support for .psqlrc? The original\n> proposal was that psql should read commands from /etc/psqlrc\n> and then from $(HOME)/.psqlrc\n>\n> Nobody seems to have raised any objections. I vote for including this\n> for 6.3\n\nI'm in favor of it also, perhaps as a libpq function call which is used in\npsql. That way, other apps or frontends can choose to use it or not.\n\nWould much prefer leaving it out as a _mandatory_ part of connection\ninitialization, since there will be side-effects for embedded apps. Combined\nwith PGDATESTYLE and PGTZ there will be pretty good control over the frontend\nenvironment.\n\n - Tom\n\n", "msg_date": "Fri, 16 Jan 1998 16:52:30 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PSQL man page patch" }, { "msg_contents": "> \n> \n> > OK, we never installed this for 6.2 because we were already in Beta. \n> > Can we do this for 6.3? Vadim suggested we make this part of libpq, so\n> > all applications could make use of it.\n> > \n> > \n> \n> Any more comments about adding support for .psqlrc? The original\n> proposal was that psql should read commands from /etc/psqlrc\n> and then from $(HOME)/.psqlrc\n> \n> Nobody seems to have raised any objections. I vote for including this\n> for 6.3\n> \n\nSure. Let's do it from psql only, I think.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Fri, 16 Jan 1998 12:06:37 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PSQL man page patch" }, { "msg_contents": "On Fri, 16 Jan 1998, Andrew Martin wrote:\n\n> \n> > OK, we never installed this for 6.2 because we were already in Beta. \n> > Can we do this for 6.3? Vadim suggested we make this part of libpq, so\n> > all applications could make use of it.\n> > \n> > \n> \n> Any more comments about adding support for .psqlrc? The original\n> proposal was that psql should read commands from /etc/psqlrc\n> and then from $(HOME)/.psqlrc\n> \n> Nobody seems to have raised any objections. I vote for including this\n> for 6.3\n\n\tI think its a good thing and should be added...I also agree with\nBruce(?) 
who disagreed with Vadim...it shouldn't be inherent in libpq\nitself, only in the \"applications\" themselves\n\n\n", "msg_date": "Fri, 16 Jan 1998 12:12:14 -0500 (EST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PSQL man page patch" }, { "msg_contents": "On Fri, 16 Jan 1998, Thomas G. Lockhart wrote:\n\n> I'm in favor of it also, perhaps as a libpq function call which is used in\n> psql. That way, other apps or frontends can choose to use it or not.\n> \n> Would much prefer leaving it out as a _mandatory_ part of connection\n> initialization, since there will be side-effects for embedded apps. Combined\n> with PGDATESTYLE and PGTZ there will be pretty good control over the frontend\n> environment.\n \nI agree entirely with you Tom, as this could cause problems if it was a \n_mandatory_ part of connecting. \n \nInfact, it would (with the JDBC driver) prevent it from being used with \nApplets (accessing local files violate applet security).\n \nIt's best to make this an optional call for libpq.\n\nFor jdbc use, the following is the best way to do this (rather than \nincluding it in the driver):\n\npublic void readRCfile(Statement stat,File file) throws SQLException\n{\n try {\n FileInputStream fis = new FileInputStream(file);\n BufferedReader r = new BufferedReader(new Reader(fis));\n\n while((String line=r.readLine())!=null) {\n if(!line.startsWith(\"#\"))\n\tstat.executeUpdate(line);\n }\n r.close();\n } catch(IOException ioe)\n throw new SQLException(ioe.toString());\n}\n\npublic void initConnection(Connection con) throws SQLException\n{\n Statement stat = con.createStatement();\n \n // Process ~/.psqlrc\n try {\n String dir=System.getProperty(\"user.home\");\n if(dir!=null)\n readRCfile(stat,new File(dir,\".psqlrc\"));\n } catch(SQLException se) {\n // Ignore if it doesn't exist.\n }\n\n // Now /etc/psqlrc\n readRCfile(stat,new File(\"/etc/psqlrc\"));\n\n stat.close();\n}\n\nI'll add this to the examples later.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Sat, 17 Jan 1998 14:26:31 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PSQL man page patch" }, { "msg_contents": "Peter T Mount wrote:\n> \n> On Fri, 16 Jan 1998, Thomas G. Lockhart wrote:\n> \n> > I'm in favor of it also, perhaps as a libpq function call which is used in\n> > psql. That way, other apps or frontends can choose to use it or not.\n> >\n> > Would much prefer leaving it out as a _mandatory_ part of connection\n> > initialization, since there will be side-effects for embedded apps. 
Combined\n> > with PGDATESTYLE and PGTZ there will be pretty good control over the frontend\n> > environment.\n> \n> I agree entirely with you Tom, as this could cause problems if it was a\n> _mandatory_ part of connecting.\n\nAgreed!\n\nBTW, do you like X11 ?\n\nXLock.background:\t\t\tBlack\nNetscape*background:\t\t#B2B2B2\n\nHow about following this way ?\n\npsql.pgdatestyle:\t\teuro\t# this is for psql\n_some_app_.pgdatestyle:\tiso\t# this is for some application\npgaccess.background:\tblack\t# setting pgaccess' specific feature\n*.pggeqo:\t\ton=3\t# this is for all applics\n\nWe could use 'pg' prefix for standard features (datestyle, tz, etc)\nto give applic developers ability to set applic' specific features\nin the pgsqlrc file(s).\n\nVadim\n", "msg_date": "Sun, 18 Jan 1998 16:25:28 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PSQL man page patch" }, { "msg_contents": "On Sun, 18 Jan 1998, Vadim B. Mikheev wrote:\n\n> Peter T Mount wrote:\n> > \n> > On Fri, 16 Jan 1998, Thomas G. Lockhart wrote:\n> > \n> > > I'm in favor of it also, perhaps as a libpq function call which is used in\n> > > psql. That way, other apps or frontends can choose to use it or not.\n> > >\n> > > Would much prefer leaving it out as a _mandatory_ part of connection\n> > > initialization, since there will be side-effects for embedded apps. Combined\n> > > with PGDATESTYLE and PGTZ there will be pretty good control over the frontend\n> > > environment.\n> > \n> > I agree entirely with you Tom, as this could cause problems if it was a\n> > _mandatory_ part of connecting.\n> \n> Agreed!\n> \n> BTW, do you like X11 ?\n\nYes, as its much better than a certain other windowing front end I could\nmention ;-)\n\n> XLock.background:\t\t\tBlack\n> Netscape*background:\t\t#B2B2B2\n> \n> How about following this way ?\n> \n> psql.pgdatestyle:\t\teuro\t# this is for psql\n> _some_app_.pgdatestyle:\tiso\t# this is for some application\n> pgaccess.background:\tblack\t# setting pgaccess' specific feature\n> *.pggeqo:\t\ton=3\t# this is for all applics\n\nI assume you mean using X style configuration in /etc/psqlrc ? If so, then\nyes, it's a good way to go, as it allows a lot of flexibility.\n\nOne problem with the earlier .psqlrc idea was that any commands would run\nfor any database (as I saw it). This could cause big problems, if the user\naccessed a database that didn't support those commands. With the above\nmethod, you can get round this, by something like:\n\npsql.database.mydb:\t.mydbrc\t\t# sql file run when mydb is opened\npsql.database.*:\t.generalrc\t# sql file run when any db is\n\t\t\t\t\t# opened\n\nNow, JDBC driver can't use the same config file as that violates security.\nTo get round this, we use Java's system properties. For applets, the\ndriver simply gets nothing when it asks for the parameter. 
Applications\ncan load their own parameters from a file, or have them set on the command\nline.\n\nThe file looks like:\n\n# Default authorisation scheme\npostgresql.auth=password\n\nThe command line form looks like:\n\njava -Dpostgresql.auth=password myapp\n\n> We could use 'pg' prefix for standard features (datestyle, tz, etc)\n> to give applic developers ability to set applic' specific features\n> in the pgsqlrc file(s).\n\nSounds good to me\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Sun, 18 Jan 1998 11:21:48 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PSQL man page patch" }, { "msg_contents": "> Agreed!\n> \n> BTW, do you like X11 ?\n> \n> XLock.background:\t\t\tBlack\n> Netscape*background:\t\t#B2B2B2\n> \n> How about following this way ?\n> \n> psql.pgdatestyle:\t\teuro\t# this is for psql\n> _some_app_.pgdatestyle:\tiso\t# this is for some application\n> pgaccess.background:\tblack\t# setting pgaccess' specific feature\n> *.pggeqo:\t\ton=3\t# this is for all applics\n> \n> We could use 'pg' prefix for standard features (datestyle, tz, etc)\n> to give applic developers ability to set applic' specific features\n> in the pgsqlrc file(s).\n> \n\nCool idea.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Sun, 18 Jan 1998 14:33:54 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PSQL man page patch" } ]
[ { "msg_contents": "\n\n\n\nPG Version: CVSup src from 15:00 EST Jan 16, 1998\n OS: Linux 2.0.30 (Slackware 3.3)\n\nI am trying to test a patch I created for the pg_dump utility. I cvsuped\nthe latest sources\nand created the patch. I used the following procedure:\n 1 configure --prefix=/usr/local/pgsql-test --with-pgport=7777\n 2 make\n 3 make install\n 4 /usr/local/pgsql-test/bin/initdb --pgdata=/usr/local/pgsql/data\n--pglib=/usr/local/pgsql-test/lib\n 5 started the postmaster for the regression tests\n 6 tried to make runtest to do the regression tests and it could not\nfind the postmaster at port 7777\nI looked at netstat and found the postmaster was using a UNIX domain\nsocket, but it seems that the regression\ntests are not. How do I change the postmaster to use the TCP socket instead\nof the UNIX domain socket.\nI looked in the documentation with the src and did not see anything. It\nappears that the docs have not\nbeen update to be release with the beta.\n\nThanks,\n\nMatt\n\n\n", "msg_date": "Fri, 16 Jan 1998 16:07:49 -0400", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Latest src tree" }, { "msg_contents": "postmaster -i\n\n> \n> \n> \n> \n> \n> PG Version: CVSup src from 15:00 EST Jan 16, 1998\n> OS: Linux 2.0.30 (Slackware 3.3)\n> \n> I am trying to test a patch I created for the pg_dump utility. I cvsuped\n> the latest sources\n> and created the patch. I used the following procedure:\n> 1 configure --prefix=/usr/local/pgsql-test --with-pgport=7777\n> 2 make\n> 3 make install\n> 4 /usr/local/pgsql-test/bin/initdb --pgdata=/usr/local/pgsql/data\n> --pglib=/usr/local/pgsql-test/lib\n> 5 started the postmaster for the regression tests\n> 6 tried to make runtest to do the regression tests and it could not\n> find the postmaster at port 7777\n> I looked at netstat and found the postmaster was using a UNIX domain\n> socket, but it seems that the regression\n> tests are not. How do I change the postmaster to use the TCP socket instead\n> of the UNIX domain socket.\n> I looked in the documentation with the src and did not see anything. It\n> appears that the docs have not\n> been update to be release with the beta.\n> \n> Thanks,\n> \n> Matt\n> \n> \n> \n> \n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Fri, 16 Jan 1998 16:35:49 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Latest src tree" } ]
[ { "msg_contents": "Hi,\n\nquestion that i couldnt get answered in Questions, so at the risk of \nbeing shot sending it here.\n\nIs it possible to compile only lib(pq) and not the rest of the DBMS.\nI'm facing the problem where i have to install client software on\nmultiple platforms, and wanting to make C stubs for the applications.\nBuilding the entire DBMS on every platform is no option (space limits).\nI have already tried a config, then compile in the interfaces/lib\ndir, but that does not seem to work. Unfortunately i am not yet fully\nup to date with knowledge of the sources, so making a 'make lib' isnt\nwithin my powers......\n\nAny suggestions?\n\nRamon\n", "msg_date": "Fri, 16 Jan 1998 16:05:29 -0500", "msg_from": "Ramon Krikken <[email protected]>", "msg_from_op": true, "msg_subject": "Compiling libs only" } ]
[ { "msg_contents": "Hi, I'm running Postgres 6.2.1 on IRIX 5.3 using native compiler.\n\nRunning test/examples/testlibpq3 generates the following erroneous\nresults. Can anyone shed some light on the matter? Thanks,\n\ntype[0] = 23, size[0] = 4\ntype[1] = 700, size[1] = 4\ntype[2] = 604, size[2] = -1\n\ntuple 0: got\n i = (1 bytes) 822083585,\n d = (5 bytes) 0.000000,\n p = (5 bytes) 673723180 points boundbox = (hi=0.000000/0.000000, lo = 0.000000,0.000000)\ntuple 1: got\n i = (1 bytes) 838876160,\n d = (5 bytes) 0.000044,\n p = (5 bytes) 673723436 points boundbox = (hi=0.000000/0.000000, lo = 0.000000,0.000000)\n\n\n-- \n-------------------------------------------------------------------------\nBrian Sanders\t [email protected]\t\n=========================================================================\n", "msg_date": "Fri, 16 Jan 1998 16:39:49 -0800 (PST)", "msg_from": "Brian Sanders <[email protected]>", "msg_from_op": true, "msg_subject": "testlibpq3 gives incorrect results" } ]
[ { "msg_contents": "OK, I have created the SubLink structure with supporting routines, and\nhave added code to create the SubLink structures in the parser, and have\nadded Query->hasSubLink.\n\nI changed gram.y to support:\n\n\t(x,y,z) OP (subselect)\n\nwhere OP is any operator. Is that right, or are we doing only certain\nones, and of so, do we limit it in the parser?\n\nI still need to add code to handle SubLink fields in the Where clause\nand other supporting code, but it should be enough for Vadim to start\ncoding if he wants to. I also need to add Var->sublevels_up.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Sat, 17 Jan 1998 00:00:50 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "subselects coding started" }, { "msg_contents": "Bruce Momjian wrote:\n\n> OK, I have created the SubLink structure with supporting routines, and\n> have added code to create the SubLink structures in the parser, and have\n> added Query->hasSubLink.\n>\n> I changed gram.y to support:\n>\n> (x,y,z) OP (subselect)\n>\n> where OP is any operator. Is that right, or are we doing only certain\n> ones, and of so, do we limit it in the parser?\n\nSeems like we would want to pass most operators and expressions through\ngram.y, and then call elog() in either the transformation or in the\noptimizer if it is an operator which can't be supported.\n\n - Tom\n\n", "msg_date": "Sat, 17 Jan 1998 05:42:45 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] subselects coding started" }, { "msg_contents": "> \n> Bruce Momjian wrote:\n> \n> > OK, I have created the SubLink structure with supporting routines, and\n> > have added code to create the SubLink structures in the parser, and have\n> > added Query->hasSubLink.\n> >\n> > I changed gram.y to support:\n> >\n> > (x,y,z) OP (subselect)\n> >\n> > where OP is any operator. Is that right, or are we doing only certain\n> > ones, and of so, do we limit it in the parser?\n> \n> Seems like we would want to pass most operators and expressions through\n> gram.y, and then call elog() in either the transformation or in the\n> optimizer if it is an operator which can't be supported.\n> \n\nThat's what I thought.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Sat, 17 Jan 1998 11:00:25 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] subselects coding started" }, { "msg_contents": "Thomas G. Lockhart wrote:\n> \n> Bruce Momjian wrote:\n> \n> > OK, I have created the SubLink structure with supporting routines, and\n> > have added code to create the SubLink structures in the parser, and have\n> > added Query->hasSubLink.\n> >\n> > I changed gram.y to support:\n> >\n> > (x,y,z) OP (subselect)\n> >\n> > where OP is any operator. Is that right, or are we doing only certain\n> > ones, and of so, do we limit it in the parser?\n> \n> Seems like we would want to pass most operators and expressions through\n> gram.y, and then call elog() in either the transformation or in the\n> optimizer if it is an operator which can't be supported.\n\nNot in optimizer, in parser, please.\nRemember that for <> SubLink->useor must be TRUE and this is parser work\n(optimizer don't know about \"=\", \"<>\", etc but only about Oper nodes).\n\nIN (\"=\" ANY) and NOT IN (\"<>\" ALL) transformations are also parser work.\n\nVadim\n", "msg_date": "Sun, 18 Jan 1998 19:27:09 +0700", "msg_from": "\"Vadim B. 
Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] subselects coding started" } ]
[ { "msg_contents": "\nWhile looking thru the nodeGroup code, I noticed the following\nthat I'm not sure is correct.\n\n-- Using 01-09 snapshot\ncreate table t1 (a int4, b char(2), c char(2));\nCREATE\ninsert into t1 (a,c) values (1,'x');\nINSERT 149419 1\ninsert into t1 (a,c) values (2,'x');\nINSERT 149420 1\ninsert into t1 (a,c) values (3,'z');\nINSERT 149421 1\ninsert into t1 (a,c) values (2,'x');\nINSERT 149422 1\nselect * from t1;\na|b|c\n-+-+--\n1| |x \n2| |x \n3| |z\n2| |x\n(4 rows)\n\nselect b,c,sum(a) from t1 group by b,c;\nb|c |sum\n-+--+---\n |x | 3\n |z | 3\n |x | 2\n(3 rows)\n\nselect b,c,sum(a) from t1 group by b,c order by c;\nb|c |sum\n-+--+---\n |x | 3\n |x | 2\n |z | 3\n(3 rows)\n\nIn the second query, the first two rows have been grouped, but shouldn't\nthey not be since b is NULL? I thought that NULL != NULL?\n\nIf so, is the third query wrong? The first two rows are different, but\nonly because of the aggregated column that is the source of the group by.\nAccording to the logic from the second query, these should have been\ngrouped, no?\n\nWhat does the standard say about comparing two NULL values?\n\nThe fixes for these inconsistencies appear to be simple. To cause a new\ngroup to be started if NULL != NULL, simply change the \"continue;\" in the\nsameGroup function in nodeGroup.c to \"return FALSE;\" Ignoring aggregated\ncolumns would also then be added to sameGroup().\n\ndarrenk\n", "msg_date": "Sat, 17 Jan 1998 16:27:26 -0500", "msg_from": "[email protected] (Darren King)", "msg_from_op": true, "msg_subject": "Group By, NULL values and inconsistent behaviour." }, { "msg_contents": "Where are we on this. It appears this NULL group by is seriously\nbroken.\n\nCan we have some tests on commercial databases, and get a patch\ngenerated?\n\n> \n> \n> While looking thru the nodeGroup code, I noticed the following\n> that I'm not sure is correct.\n> \n> -- Using 01-09 snapshot\n> create table t1 (a int4, b char(2), c char(2));\n> CREATE\n> insert into t1 (a,c) values (1,'x');\n> INSERT 149419 1\n> insert into t1 (a,c) values (2,'x');\n> INSERT 149420 1\n> insert into t1 (a,c) values (3,'z');\n> INSERT 149421 1\n> insert into t1 (a,c) values (2,'x');\n> INSERT 149422 1\n> select * from t1;\n> a|b|c\n> -+-+--\n> 1| |x \n> 2| |x \n> 3| |z\n> 2| |x\n> (4 rows)\n> \n> select b,c,sum(a) from t1 group by b,c;\n> b|c |sum\n> -+--+---\n> |x | 3\n> |z | 3\n> |x | 2\n> (3 rows)\n> \n> select b,c,sum(a) from t1 group by b,c order by c;\n> b|c |sum\n> -+--+---\n> |x | 3\n> |x | 2\n> |z | 3\n> (3 rows)\n> \n> In the second query, the first two rows have been grouped, but shouldn't\n> they not be since b is NULL? I thought that NULL != NULL?\n> \n> If so, is the third query wrong? The first two rows are different, but\n> only because of the aggregated column that is the source of the group by.\n> According to the logic from the second query, these should have been\n> grouped, no?\n> \n> What does the standard say about comparing two NULL values?\n> \n> The fixes for these inconsistencies appear to be simple. 
To cause a new\n> group to be started if NULL != NULL, simply change the \"continue;\" in the\n> sameGroup function in nodeGroup.c to \"return FALSE;\" Ignoring aggregated\n> columns would also then be added to sameGroup().\n> \n> darrenk\n> \n> \n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Sun, 25 Jan 1998 14:53:21 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Group By, NULL values and inconsistent behaviour." }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Where are we on this. It appears this NULL group by is seriously\n> broken.\n> \n> Can we have some tests on commercial databases, and get a patch\n> generated?\n\nI ran the test on Sybase. The only real changes were int4->int and\nexplicitly calling out field b as null (it defaults to not null).\n\n1> select @@version\n2> go\n \n ----------------------------------------------------------------------------- \n SQL Server/11.0.2/P/Sun_svr4/OS 5.4/EBF 6536/OPT/Sat Aug 17 11:54:59 PDT 1996 \n \n(1 row affected)\n1> create table t1 (a int, b char(2) null, c char(2))\n2> go\n1> insert into t1 (a,c) values (1,'x')\n2> go\n(1 row affected)\n1> insert into t1 (a,c) values (2,'x')\n2> go\n(1 row affected)\n1> insert into t1 (a,c) values (3,'z')\n2> go\n(1 row affected)\n1> insert into t1 (a,c) values (2,'x')\n2> go\n(1 row affected)\n1> select * from t1\n2> go\n a b c \n ----------- -- -- \n 1 NULL x \n 2 NULL x \n 3 NULL z \n 2 NULL x \n \n(4 rows affected)\n1> select b,c,sum(a) from t1 group by b,c\n2> go\n b c \n -- -- ----------- \n NULL x 5 \n NULL z 3 \n \n(2 rows affected)\n1> select b,c,sum(a) from t1 group by b,c order by c\n2> go\n b c \n -- -- ----------- \n NULL x 5 \n NULL z 3 \n \n(2 rows affected)\n\nIt seems that Sybase thinks a null is a null in this case. However,\ntry the following:\n\nselect * from t1 x, t1 y where x.b=y.b and y.c='z';\n\nSybase returns zero rows for this. It seems that it treats NULLs as\nequal for order and group operations, but not for join operations.\n\nOcie Mitchell\n", "msg_date": "Mon, 26 Jan 1998 11:20:13 -0800 (PST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Group By, NULL values and inconsistent behaviour." } ]
[ { "msg_contents": "On Sat, 17 Jan 1998, Tom wrote:\n\n> How are large users handling the vacuum problem? vaccuum locks other\n> users out of tables too long. I don't need a lot performance (a few per\n> minutes), but I need to be handle queries non-stop).\n\n\tNot sure, but this one is about the only major thing that is continuing\nto bother me :( Is there any method of improving this?\n\n> Also, how are people handling tables with lots of rows? The 8k tuple\n> size can waste a lot of space. I need to be able to handle a 2 million\n> row table, which will eat up 16GB, plus more for indexes.\n\n\tThis oen is improved upon in v6.3, where at compile time you can stipulate\nthe tuple size. We are looking into making this an 'initdb' option instead,\nso that you can have the same binary for multiple \"servers\", but any database\ncreated under a particular server will be constrained by that tuple size.\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 17 Jan 1998 19:06:21 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [QUESTIONS] Business cases" }, { "msg_contents": "\nOn Sat, 17 Jan 1998, The Hermit Hacker wrote:\n\n> On Sat, 17 Jan 1998, Tom wrote:\n> \n> > How are large users handling the vacuum problem? vaccuum locks other\n> > users out of tables too long. I don't need a lot performance (a few per\n> > minutes), but I need to be handle queries non-stop).\n> \n> \tNot sure, but this one is about the only major thing that is continuing\n> to bother me :( Is there any method of improving this?\n\n vacuum seems to do a _lot_ of stuff. It seems that crash recovery\nfeatures, and maintenance features should be separated. I believe the\nonly required maintenance features are recovering space used by deleted\ntuples and updating stats? Both of these shouldn't need to lock the\ndatabase for long periods of time.\n\n> > Also, how are people handling tables with lots of rows? The 8k tuple\n> > size can waste a lot of space. I need to be able to handle a 2 million\n> > row table, which will eat up 16GB, plus more for indexes.\n> \n> \tThis oen is improved upon in v6.3, where at compile time you can stipulate\n> the tuple size. We are looking into making this an 'initdb' option instead,\n> so that you can have the same binary for multiple \"servers\", but any database\n> created under a particular server will be constrained by that tuple size.\n\n That might help a bit, but same tables may have big rows and some not.\nFor example, my 2 million row table requires only requires two date\nfields, and 7 integer fields. That isn't very much data. However, I'd\nlike to be able to join against another table with much larger rows.\n\n> Marc G. Fournier \n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\nTom\n\n", "msg_date": "Sat, 17 Jan 1998 15:07:09 -0800 (PST)", "msg_from": "Tom <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [QUESTIONS] Business cases" }, { "msg_contents": "Tom wrote:\n> > > How are large users handling the vacuum problem? vaccuum locks other\n> > > users out of tables too long. 
I don't need a lot performance (a few per\n> > > minutes), but I need to be handle queries non-stop).\n> >\n> > Not sure, but this one is about the only major thing that is continuing\n> > to bother me :( Is there any method of improving this?\n> \n> vacuum seems to do a _lot_ of stuff. It seems that crash recovery\n> features, and maintenance features should be separated. I believe the\n> only required maintenance features are recovering space used by deleted\n> tuples and updating stats? Both of these shouldn't need to lock the\n> database for long periods of time.\n\nWould it be possible to add an option to VACUUM, like a max number\nof blocks to sweep? Or is this impossible because of the way PG works?\n\nWould it be possible to (for example) compact data from the front of\nthe file to make one block free somewhere near the beginning of the\nfile and then move rows from the last block to this new, empty block?\n\n-- To limit the number of rows to compact:\npsql=> VACUUM MoveMax 1000; -- move max 1000 rows\n\n-- To limit the time used for vacuuming:\npsql=> VACUUM MaxSweep 1000; -- Sweep max 1000 blocks\n\nCould this work with the current method of updating statistics?\n\n\n*** Btw, why doesn't PG update statistics when inserting/updating?\n\n\n/* m */\n", "msg_date": "Mon, 19 Jan 1998 13:59:56 +0100", "msg_from": "Mattias Kregert <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] Re: [QUESTIONS] Business cases" }, { "msg_contents": "> > Also, how are people handling tables with lots of rows? The 8k tuple\n> > size can waste a lot of space. I need to be able to handle a 2 million\n> > row table, which will eat up 16GB, plus more for indexes.\n>\n> This oen is improved upon in v6.3, where at compile time you can stipulate\n> the tuple size. We are looking into making this an 'initdb' option instead,\n> so that you can have the same binary for multiple \"servers\", but any database\n> created under a particular server will be constrained by that tuple size.\n\nTom's \"problem\" is probably not a bad as he thinks. The 8k tuple size limit is a\nresult of the current 8k page size limit, but multiple tuples are allowed on a page.\nThey are just not allowed to span pages. So, there is some wasted space (as is true\nfor many, most, or all commercial dbs also) but it is on average only the size of\nhalf a tuple, and can be easily predicted once the tuple sizes are known.\n\n - Tom (T2?)\n\n", "msg_date": "Mon, 19 Jan 1998 16:05:12 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] Business cases" }, { "msg_contents": "> Could this work with the current method of updating statistics?\n> \n> \n> *** Btw, why doesn't PG update statistics when inserting/updating?\n\nToo slow to do at that time. You need to span all the data to get an\naccurate figure.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Mon, 19 Jan 1998 12:04:53 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] Re: [QUESTIONS] Business cases" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > *** Btw, why doesn't PG update statistics when inserting/updating?\n> \n> Too slow to do at that time. You need to span all the data to get an\n> accurate figure.\n\nIs it not possible to take the current statistics and the latest\nchanges and calculate new statistics from that?\n\nWhat information is kept in these statistics? 
How are they used?\nObviously this is more than just number of rows, but what exactly?\n\n/* m */\n", "msg_date": "Tue, 20 Jan 1998 13:01:34 +0100", "msg_from": "Mattias Kregert <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] Business cases" }, { "msg_contents": "> \n> Bruce Momjian wrote:\n> > \n> > > *** Btw, why doesn't PG update statistics when inserting/updating?\n> > \n> > Too slow to do at that time. You need to span all the data to get an\n> > accurate figure.\n> \n> Is it not possible to take the current statistics and the latest\n> changes and calculate new statistics from that?\n> \n> What information is kept in these statistics? How are they used?\n> Obviously this is more than just number of rows, but what exactly?\n\nLook in commands/vacuum.c. It measures the spread-ness of the data, and\nthere is no way to get this figure on-the-fly unless you maintain\nbuckets for each range of data values and decrement/increment as values\nare added-subtraced. Seeing a the MySQL optimizer is a single file, and\nso is the executor, I doubt that is what it is doing. Probably just\nkeeps a count of how many rows in the table.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Tue, 20 Jan 1998 09:45:33 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] Business cases" } ]
[ { "msg_contents": "> > Also, how are people handling tables with lots of rows? The 8k tuple\n> > size can waste a lot of space. I need to be able to handle a 2 million\n> > row table, which will eat up 16GB, plus more for indexes.\n>\n\n16GB?!? Not unless your tuples are 8k. The 8k is/was the max _tuple_ size,\nbut more than one tuple can be stored per block. :)\n\nTry the formula in the FAQ to get a reasonable estimate for the table's size.\n\n> \tThis oen is improved upon in v6.3, where at compile time you can stipulate\n> the tuple size. We are looking into making this an 'initdb' option instead,\n> so that you can have the same binary for multiple \"servers\", but any database\n> created under a particular server will be constrained by that tuple size.\n\nIf the patch I sent to PATCHES is applied, then it will be a compile-time setting\nand you'll need a postmaster for each database w/differing block sizes. Not the\ngreatest solution, but it would work.\n\nI almost have the \"-k\" option working today. Two files left to do...\n\nbackend/access/nbtree/nbtsort.c\nbackend/utils/adt/chunk.c\n\nI'm being careful about pfree'ing all the stuff that I'm going to have to palloc.\n\nTiggers definitely do _not_ like memory leaks.\n\ndarrenk\n", "msg_date": "Sat, 17 Jan 1998 18:25:27 -0500", "msg_from": "[email protected] (Darren King)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] Business cases" }, { "msg_contents": "\nOn Sat, 17 Jan 1998, Darren King wrote:\n\n> 16GB?!? Not unless your tuples are 8k. The 8k is/was the max _tuple_ size,\n> but more than one tuple can be stored per block. :)\n> \n> Try the formula in the FAQ to get a reasonable estimate for the table's size.\n\n The sentence \"Tuples do not cross 8k boundaries so a 5k tuple will\nrequire 8k of storage\" in 3.8 of the FAQ confused me. I did not realize\nthat multiple tuples could be stored in a page. So I took it to mean\nthat one tuple was stored in page. I didn't even even see 3.26, because I\nthought that 3.8 answered my question :(\n\nTom\n\n", "msg_date": "Sat, 17 Jan 1998 18:10:08 -0800 (PST)", "msg_from": "Tom <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] Business cases" }, { "msg_contents": "> \n> > > Also, how are people handling tables with lots of rows? The 8k tuple\n> > > size can waste a lot of space. I need to be able to handle a 2 million\n> > > row table, which will eat up 16GB, plus more for indexes.\n> >\n> \n> 16GB?!? Not unless your tuples are 8k. The 8k is/was the max _tuple_ size,\n> but more than one tuple can be stored per block. :)\n> \n> Try the formula in the FAQ to get a reasonable estimate for the table's size.\n> \n\nThe FAQ copy on the web page has it. The FAQ in the 6.2.1 distribution\ndoes not.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Sat, 17 Jan 1998 21:56:22 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] Business cases" } ]
[ { "msg_contents": "> > > Also, how are people handling tables with lots of rows? The 8k tuple\n> > > size can waste a lot of space. I need to be able to handle a 2 million\n> > > row table, which will eat up 16GB, plus more for indexes.\n> > \n> > \tThis oen is improved upon in v6.3, where at compile time you can stipulate\n> > the tuple size. We are looking into making this an 'initdb' option instead,\n> > so that you can have the same binary for multiple \"servers\", but any database\n> > created under a particular server will be constrained by that tuple size.\n> \n> That might help a bit, but same tables may have big rows and some not.\n> For example, my 2 million row table requires only requires two date\n> fields, and 7 integer fields. That isn't very much data. However, I'd\n> like to be able to join against another table with much larger rows.\n\nTwo dates and 7 integers would make tuple of 90-some bytes, call it 100 max.\nSo you would prolly get 80 tuples per 8k page, so 25000 pages would use a\nfile of 200 meg.\n\nThe block size parameter will be database-specific, not table-specific, and\nsince you can't join tables from different _databases_, 2nd issue is moot.\n\nIf I could get around to the tablespace concept again, then maybe a different\nblock size per tablespace would be useful. But, that is putting the cart\na couple of light-years ahead of the proverbial horse...\n\nDarren aka [email protected]\n", "msg_date": "Sat, 17 Jan 1998 18:44:54 -0500", "msg_from": "[email protected] (Darren King)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] Business cases" } ]
[ { "msg_contents": "\nI installed some patches today for the univel port, and one of the changes\ndid the following to include/storage/s_lock.h:\n\n302c318\n< __asm__(\"xchgb %0,%1\": \"=q\"(_res), \"=m\"(*lock):\"0\"(0x1)); \\\n---\n> __asm__(\"lock xchgb %0,%1\": \"=q\"(_res), \"=m\"(*lock):\"0\"(0x1)); \\\n\n\nUnder FreeBSD, this breaks the compile with an 'unimplemented' error...I'm\nnot going to commit the \"fix\" yet (reverse the patch)...does this break other\nports as well, or just FreeBSD? Anyone with Assembler experience out there? :)\n\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 17 Jan 1998 21:38:48 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "S_LOCK() change produces error..." }, { "msg_contents": "> \n> \n> I installed some patches today for the univel port, and one of the changes\n> did the following to include/storage/s_lock.h:\n> \n> 302c318\n> < __asm__(\"xchgb %0,%1\": \"=q\"(_res), \"=m\"(*lock):\"0\"(0x1)); \\\n> ---\n> > __asm__(\"lock xchgb %0,%1\": \"=q\"(_res), \"=m\"(*lock):\"0\"(0x1)); \\\n> \n\nI guess this is a multiple cpu modifier for asm, and most people don't\nrun multiple cpus. I guess our gcc's call it an error, rather than\nignore it. I think we need an OS-specific ifdef there. We can't have\nUnivel changing the normal i386 stuff that works so well now.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Sat, 17 Jan 1998 21:59:43 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] S_LOCK() change produces error..." }, { "msg_contents": "On Sat, 17 Jan 1998, Bruce Momjian wrote:\n\n> > \n> > \n> > I installed some patches today for the univel port, and one of the changes\n> > did the following to include/storage/s_lock.h:\n> > \n> > 302c318\n> > < __asm__(\"xchgb %0,%1\": \"=q\"(_res), \"=m\"(*lock):\"0\"(0x1)); \\\n> > ---\n> > > __asm__(\"lock xchgb %0,%1\": \"=q\"(_res), \"=m\"(*lock):\"0\"(0x1)); \\\n> > \n> \n> I guess this is a multiple cpu modifier for asm, and most people don't\n> run multiple cpus. I guess our gcc's call it an error, rather than\n> ignore it. I think we need an OS-specific ifdef there. We can't have\n> Univel changing the normal i386 stuff that works so well now.\n\n\tActually, I think that the patch was meant to improve...if you look at the\ncode, he put all the Univel stuff inside of its own #ifdef...see around\nline 297 in include/storage/s_lock.h and you'll see what I mean.\n\n\tHe seems to have only added a 'lock' to the beginning of the __asm__, \nwhich is what is breaking things under FreeBSD, but unless it affects every\nother port, I'm loath to remove it without just throwing in a FreeBSD #ifdef\nin there...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 17 Jan 1998 23:05:54 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] S_LOCK() change produces error..." 
}, { "msg_contents": "> \n> On Sat, 17 Jan 1998, Bruce Momjian wrote:\n> \n> > > \n> > > \n> > > I installed some patches today for the univel port, and one of the changes\n> > > did the following to include/storage/s_lock.h:\n> > > \n> > > 302c318\n> > > < __asm__(\"xchgb %0,%1\": \"=q\"(_res), \"=m\"(*lock):\"0\"(0x1)); \\\n> > > ---\n> > > > __asm__(\"lock xchgb %0,%1\": \"=q\"(_res), \"=m\"(*lock):\"0\"(0x1)); \\\n> > > \n> > \n> > I guess this is a multiple cpu modifier for asm, and most people don't\n> > run multiple cpus. I guess our gcc's call it an error, rather than\n> > ignore it. I think we need an OS-specific ifdef there. We can't have\n> > Univel changing the normal i386 stuff that works so well now.\n> \n> \tActually, I think that the patch was meant to improve...if you look at the\n> code, he put all the Univel stuff inside of its own #ifdef...see around\n> line 297 in include/storage/s_lock.h and you'll see what I mean.\n> \n> \tHe seems to have only added a 'lock' to the beginning of the __asm__, \n> which is what is breaking things under FreeBSD, but unless it affects every\n> other port, I'm loath to remove it without just throwing in a FreeBSD #ifdef\n> in there...\n\nI will check when I apply my next patch. I am hesitant to cvsup at this\ntime if the code is broken.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Sat, 17 Jan 1998 22:37:19 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] S_LOCK() change produces error..." }, { "msg_contents": "On Sat, 17 Jan 1998, The Hermit Hacker wrote:\n\n> On Sat, 17 Jan 1998, Bruce Momjian wrote:\n> > > I installed some patches today for the univel port, and one of the changes\n> > > did the following to include/storage/s_lock.h:\n> > > \n> > > 302c318\n> > > < __asm__(\"xchgb %0,%1\": \"=q\"(_res), \"=m\"(*lock):\"0\"(0x1)); \\\n> > > ---\n> > > > __asm__(\"lock xchgb %0,%1\": \"=q\"(_res), \"=m\"(*lock):\"0\"(0x1)); \\\n> > \n> > I guess this is a multiple cpu modifier for asm, and most people don't\n> > run multiple cpus. I guess our gcc's call it an error, rather than\n> > ignore it. I think we need an OS-specific ifdef there. We can't have\n> > Univel changing the normal i386 stuff that works so well now.\n> \n> \tActually, I think that the patch was meant to improve...if you look at the\n> code, he put all the Univel stuff inside of its own #ifdef...see around\n> line 297 in include/storage/s_lock.h and you'll see what I mean.\n> \n> \tHe seems to have only added a 'lock' to the beginning of the __asm__, \n> which is what is breaking things under FreeBSD, but unless it affects every\n> other port, I'm loath to remove it without just throwing in a FreeBSD #ifdef\n> in there...\n\n(clip from SMP support in linux' asm/spinlocks.h)\n#define spin_unlock(lock) \\\n__asm__ __volatile__( \\\n\t\"lock ; btrl $0,%0\" \\\n\t:\"=m\" (__dummy_lock(lock)))\n\nin linux the lock has \";\" following.\nYep - it's for multiCPU systems (SMP). Handy for shared-memory systems\ntoo if you're really into multithreading-speed.\n\nIt locks that particular byte (word?) of memory against access by other\nCPU's accessing it IIRC...\n\nPerhaps your GAS is too old? (GNU binutils)\n(does BSD support multiple CPU's under intel?)\n\nmultiprocessor really isn't that rare under linux - even Linus Torvalds\nuses a SMP system *grin*...\n\nMaybe he encountered a locking problem with a multicpu host and needed a\nsemaphore (or equiv) to lock things? 
Just trying to figure this out...\n(sometimes necessary if you're doing shared memory across processes)\n\nG'day, eh? :)\n\t- Teunis\n\n", "msg_date": "Mon, 19 Jan 1998 16:28:46 -0700 (MST)", "msg_from": "teunis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] S_LOCK() change produces error..." }, { "msg_contents": "> \n> On Sat, 17 Jan 1998, The Hermit Hacker wrote:\n> \n> > On Sat, 17 Jan 1998, Bruce Momjian wrote:\n> > > > I installed some patches today for the univel port, and one of the changes\n> > > > did the following to include/storage/s_lock.h:\n> > > > \n> > > > 302c318\n> > > > < __asm__(\"xchgb %0,%1\": \"=q\"(_res), \"=m\"(*lock):\"0\"(0x1)); \\\n> > > > ---\n> > > > > __asm__(\"lock xchgb %0,%1\": \"=q\"(_res), \"=m\"(*lock):\"0\"(0x1)); \\\n> > > \n> > > I guess this is a multiple cpu modifier for asm, and most people don't\n> > > run multiple cpus. I guess our gcc's call it an error, rather than\n> > > ignore it. I think we need an OS-specific ifdef there. We can't have\n> > > Univel changing the normal i386 stuff that works so well now.\n> > \n> > \tActually, I think that the patch was meant to improve...if you look at the\n> > code, he put all the Univel stuff inside of its own #ifdef...see around\n> > line 297 in include/storage/s_lock.h and you'll see what I mean.\n> > \n> > \tHe seems to have only added a 'lock' to the beginning of the __asm__, \n> > which is what is breaking things under FreeBSD, but unless it affects every\n> > other port, I'm loath to remove it without just throwing in a FreeBSD #ifdef\n> > in there...\n> \n> (clip from SMP support in linux' asm/spinlocks.h)\n> #define spin_unlock(lock) \\\n> __asm__ __volatile__( \\\n> \t\"lock ; btrl $0,%0\" \\\n> \t:\"=m\" (__dummy_lock(lock)))\n> \n> in linux the lock has \";\" following.\n> Yep - it's for multiCPU systems (SMP). Handy for shared-memory systems\n> too if you're really into multithreading-speed.\n> \n> It locks that particular byte (word?) of memory against access by other\n> CPU's accessing it IIRC...\n> \n> Perhaps your GAS is too old? (GNU binutils)\n> (does BSD support multiple CPU's under intel?)\n> \n> multiprocessor really isn't that rare under linux - even Linus Torvalds\n> uses a SMP system *grin*...\n> \n> Maybe he encountered a locking problem with a multicpu host and needed a\n> semaphore (or equiv) to lock things? Just trying to figure this out...\n> (sometimes necessary if you're doing shared memory across processes)\n\nMarc, I will try 'lock;' and if it works, will submit a patch.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Mon, 19 Jan 1998 18:59:32 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] S_LOCK() change produces error..." }, { "msg_contents": "> > (clip from SMP support in linux' asm/spinlocks.h)\n> > #define spin_unlock(lock) \\\n> > __asm__ __volatile__( \\\n> > \t\"lock ; btrl $0,%0\" \\\n> > \t:\"=m\" (__dummy_lock(lock)))\n> > \n> > in linux the lock has \";\" following.\n> > Yep - it's for multiCPU systems (SMP). Handy for shared-memory systems\n> > too if you're really into multithreading-speed.\n> > \n> > It locks that particular byte (word?) of memory against access by other\n> > CPU's accessing it IIRC...\n> > \n> > Perhaps your GAS is too old? 
(GNU binutils)\n> > (does BSD support multiple CPU's under intel?)\n> > \n> > multiprocessor really isn't that rare under linux - even Linus Torvalds\n> > uses a SMP system *grin*...\n> > \n> > Maybe he encountered a locking problem with a multicpu host and needed a\n> > semaphore (or equiv) to lock things? Just trying to figure this out...\n> > (sometimes necessary if you're doing shared memory across processes)\n> \n> Marc, I will try 'lock;' and if it works, will submit a patch.\n\nYep, it works. Patch applied.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Tue, 20 Jan 1998 14:37:24 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] S_LOCK() change produces error..." } ]
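For readers following along, the macro under discussion is a test-and-set spinlock. The sketch below shows the idea as a stand-alone gcc program; it is modeled on the s_lock.h style but is not the header itself. On x86 an xchg that touches memory is locked even without the prefix, which is why the unprefixed form also behaved correctly; writing "lock;" mainly spells the intent out for SMP readers (and, as the thread shows, some assemblers reject one spelling or the other).

/* Minimal test-and-set spinlock in the style of the s_lock.h macros
 * being discussed.  Sketch only -- not the PostgreSQL header. */
#include <stdio.h>

typedef unsigned char slock_t;

static slock_t my_lock = 0;

static int tas(volatile slock_t *lock)
{
    slock_t res = 1;

    __asm__ __volatile__("lock; xchgb %0,%1"
                         : "=q"(res), "=m"(*lock)
                         : "0"(res));
    return (int) res;            /* 0 means we got the lock */
}

static void s_lock(volatile slock_t *lock)
{
    while (tas(lock))
        ;                        /* spin; a real lock would back off */
}

static void s_unlock(volatile slock_t *lock)
{
    *lock = 0;
}

int main(void)
{
    s_lock(&my_lock);
    printf("lock acquired\n");
    s_unlock(&my_lock);
    printf("lock released\n");
    return 0;
}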
[ { "msg_contents": "\nvacuuming template1\nAltering pg_user acl\npostgres in free(): warning: modified (chunk-) pointer.\npostgres in free(): warning: modified (chunk-) pointer.\nloading pg_description\n\n\n> psql template1\npsql in free(): warning: junk pointer, too high to make sense.\nWelcome to the POSTGRESQL interactive sql monitor:\n Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n\n type \\? for help on slash commands\n type \\q to quit\n type \\g or terminate with semicolon to execute query\n You are currently connected to the database: template1\n\ntemplate1=> \n\n\n\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 17 Jan 1998 21:55:25 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "free() error in current source tree under FreeBSD..." } ]
[ { "msg_contents": "\nIn trying to use gdb to step through the code, it appears that the problem\nis manifesting itself in PQsetdb():\n \n(gdb) s\n2518 if (settings.getPassword)\n(gdb) s\n2533 settings.db = PQsetdb(host, port, NULL, NULL, dbname);\n(gdb) s\npsql in free(): warning: junk pointer, too high to make sense.\n2534 }\n(gdb) \n\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n\n", "msg_date": "Sat, 17 Jan 1998 22:35:52 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "More on free() bug..." } ]
[ { "msg_contents": "unsubscribe\n", "msg_date": "Sat, 17 Jan 1998 20:09:39 -0800 (PST)", "msg_from": "[email protected] (Gordon Irlam)", "msg_from_op": true, "msg_subject": "unusb" } ]
[ { "msg_contents": "\n\n Hi,\n\n I got no answer from pgsql-questions list.\n\n basically I'm using only one table at a time, and I need only a few\nsimple operations on tuples (find, add, updates, delete).\n\n -- after a quick look at all `cd src; find . -name '*.h'` I didn't\nfind any such similar interface.\n\n Can sombody advise me how could I get this simple interface ?\n\n I'm fluent in C/C++.\n\n Thanx,\n\n Jan\n\n\n -- Gospel of Jesus is the saving power of God for all who believe --\nJan Vicherek ## To some, nothing is impossible. ## www.ied.com/~honza\n >>> Free Software Union President ... www.fslu.org <<<\nInteractive Electronic Design Inc. -#- PGP: finger [email protected]\n\n---------- Forwarded message ----------\nDate: Fri, 16 Jan 1998 16:03:05 -0500 (EST)\nFrom: Jan Vicherek <[email protected]>\nTo: [email protected]\nSubject: non-SQL C interface ?\n\n\n If I would like to access the database through a C interface using a few\nsimple database operations, like \nopen db\nselect fields to work with\nselect index to use\nselect a record\nupdate a record\nadd a record\ndelete a record\nlock a table against updates, unlock\n\n and that's it. (Taken from Informinx 3.3, \"ALL-II\" C interface.)\n\n We are looking into porting an app written for this Informix interface\ninto Postgres, and I would wrinte a library simulating these calls.\n\n Is there a set of calls seomwhere inside of PGSQL that I could use in my\nsimulation library to utilize ?\n\n THanx, \n\n Jan\n\n\n -- Gospel of Jesus is the saving power of God for all who believe --\nJan Vicherek ## To some, nothing is impossible. ## www.ied.com/~honza\n >>> Free Software Union President ... www.fslu.org <<<\nInteractive Electronic Design Inc. -#- PGP: finger [email protected]\n\n\n", "msg_date": "Sun, 18 Jan 1998 01:21:32 -0500 (EST)", "msg_from": "Jan Vicherek <[email protected]>", "msg_from_op": true, "msg_subject": "non-SQL C interface ? (fwd)" }, { "msg_contents": "On Sun, 18 Jan 1998, Jan Vicherek wrote:\n> Hi,\n> \n> I got no answer from pgsql-questions list.\n\nI saw this there, but was knee deep in code at the time\n\n> basically I'm using only one table at a time, and I need only a few\n> simple operations on tuples (find, add, updates, delete).\n> \n> -- after a quick look at all `cd src; find . -name '*.h'` I didn't\n> find any such similar interface.\n> \n> Can sombody advise me how could I get this simple interface ?\n> \n> I'm fluent in C/C++.\n\nI think the simplest way of doing this is to write your own stubs, that\nthen sit on top of libpq. This is the way I've done this in the past.\n\n[snip]\n\n> and that's it. (Taken from Informinx 3.3, \"ALL-II\" C interface.)\n> \n> We are looking into porting an app written for this Informix interface\n> into Postgres, and I would wrinte a library simulating these calls.\n> \n> Is there a set of calls seomwhere inside of PGSQL that I could use in my\n> simulation library to utilize ?\n\nAs your trying to simulate another api, and it looked like it's fairly\nsimple, then it should be simple to do this, leaving libpq to do the\nactual network stuff to the database.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Sun, 18 Jan 1998 11:34:54 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] non-SQL C interface ? 
(fwd)" }, { "msg_contents": "On Sun, 18 Jan 1998, Peter T Mount wrote:\n\n> > basically I'm using only one table at a time, and I need only a few\n> > simple operations on tuples (find, add, updates, delete).\n> > \n> > Can sombody advise me how could I get this simple interface ?\n> \n> I think the simplest way of doing this is to write your own stubs, that\n> then sit on top of libpq. This is the way I've done this in the past.\n\n It seems like a bit of overkill, since this would still have to go\nthrough the SQL parser,optimizer,executor,etc. If I want to access the\ntables only one at a time, and only load the rows into a simple C\nstructure, it seems like there has to be this interface, since the SQL\nexecutor has to get to the data somehow too. And I thought it would be\nthrough this C interface that I'm looking for.\n\n here goes the list of simple few calls I need :\nopen db\nselect fields to work with\nselect index to use\nselect a record\nupdate a record\nadd a record\ndelete a record\nlock a table against updates, unlock\n\n> > and that's it. (Taken from Informinx 3.3, \"ALL-II\" C interface.)\n> > \n> > We are looking into porting an app written for this Informix interface\n> > into Postgres, and I would wrinte a library simulating these calls.\n> > \n> > Is there a set of calls seomwhere inside of PGSQL that I could use in my\n> > simulation library to utilize ?\n> \n> As your trying to simulate another api, and it looked like it's fairly\n> simple, then it should be simple to do this, leaving libpq to do the\n> actual network stuff to the database.\n\n but libpq and do *only* SQL calls only, right ? Or can libpq do some of\nthese lower-level stuff too? that would be excellent !\n\n Thanx, \n\n Jan\n\n -- Gospel of Jesus is the saving power of God for all who believe --\nJan Vicherek ## To some, nothing is impossible. ## www.ied.com/~honza\n >>> Free Software Union President ... www.fslu.org <<<\nInteractive Electronic Design Inc. -#- PGP: finger [email protected]\n\n", "msg_date": "Sun, 18 Jan 1998 13:36:30 -0500 (EST)", "msg_from": "Jan Vicherek <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] non-SQL C interface ? (fwd)" } ]
[ { "msg_contents": "> Well, I can create the table quite easily. The issue is what type of\n> flack we will get by haveing pg_user non-readable, and removing the user\n\nWhat if we were to put the pg_user accessibility to the admin setting up\nPostgreSQL (at least until pg_privileges could become a reality.). If you\nlook in dbinit--toward the end of the script--I run a SQL command to revoke\nall privileges from public on the pg_user table. If you are not going to\nuse the pg_pwd scheme for authentication, then you don't need to run this\ncommand. All we need do for now is print out a little message saying that if\nyou use HBA or Kerberos, then say No to blocking the PUBLIC from accessing\npg_user. We also say that if you choose to block access to pg_user, these\nare the consequences. When a better privileges method is developed this\nquestion in the dbinit script can be eliminated.\n\nI myself would choose to block access to the pg_user relation. Others may not\nwant it this way. Using the above scenario, the user would have an informed\nchoice that would be taken care of at initialization.\n\nTodd A. Brandys\[email protected]\n", "msg_date": "Sun, 18 Jan 1998 15:29:50 -0600", "msg_from": "todd brandys <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: New pg_pwd patch and stuff" }, { "msg_contents": "> \n> > Well, I can create the table quite easily. The issue is what type of\n> > flack we will get by haveing pg_user non-readable, and removing the user\n> \n> What if we were to put the pg_user accessibility to the admin setting up\n> PostgreSQL (at least until pg_privileges could become a reality.). If you\n> look in dbinit--toward the end of the script--I run a SQL command to revoke\n> all privileges from public on the pg_user table. If you are not going to\n> use the pg_pwd scheme for authentication, then you don't need to run this\n> command. All we need do for now is print out a little message saying that if\n> you use HBA or Kerberos, then say No to blocking the PUBLIC from accessing\n> pg_user. We also say that if you choose to block access to pg_user, these\n> are the consequences. When a better privileges method is developed this\n> question in the dbinit script can be eliminated.\n> \n> I myself would choose to block access to the pg_user relation. Others may not\n> want it this way. Using the above scenario, the user would have an informed\n> choice that would be taken care of at initialization.\n\nThis is exactly what I was thinking yesterday.\n\nI recommend something different. What if we just skip your REVOKE\ncommand in initdb, but we add a check in user.c, and if they try to set\na non-NULL password and they have pg_user as world-readable, we prevent\nthem from doing it, and tell them explicitly the REVOKE command the\ndatabase administrator must issue to allow passwords to be set.\n\nThe advantage of this is that they can use the other database USER commands,\njust not the password commands, and they can easily change their mind. \nIt puts the checking for world-readble pg_user at the proper place. I\nam afraid if we didn't someone would answer y to world-readable pg_user,\nthen start assigning passwords.\n\nWe can also change psql.c to do a \\d lookup with pg_user, and if it\nfails, we do another SELECT without pg_user showing just user-ids. 
That\nway, the administrator will get usernames, but non-priv users will not.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Sun, 18 Jan 1998 21:21:02 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: New pg_pwd patch and stuff" }, { "msg_contents": "\n\nOn Sun, 18 Jan 1998, Bruce Momjian wrote:\n\n> > \n> > > Well, I can create the table quite easily. The issue is what type of\n> > > flack we will get by haveing pg_user non-readable, and removing the user\n> > \n> > What if we were to put the pg_user accessibility to the admin setting up\n> > PostgreSQL (at least until pg_privileges could become a reality.). If you\n> > look in dbinit--toward the end of the script--I run a SQL command to revoke\n> > all privileges from public on the pg_user table. If you are not going to\n> > use the pg_pwd scheme for authentication, then you don't need to run this\n> > command. All we need do for now is print out a little message saying that if\n> > you use HBA or Kerberos, then say No to blocking the PUBLIC from accessing\n> > pg_user. We also say that if you choose to block access to pg_user, these\n> > are the consequences. When a better privileges method is developed this\n> > question in the dbinit script can be eliminated.\n> > \n> > I myself would choose to block access to the pg_user relation. Others may not\n> > want it this way. Using the above scenario, the user would have an informed\n> > choice that would be taken care of at initialization.\n> \n> This is exactly what I was thinking yesterday.\n> \n> I recommend something different. What if we just skip your REVOKE\n> command in initdb, but we add a check in user.c, and if they try to set\n> a non-NULL password and they have pg_user as world-readable, we prevent\n> them from doing it, and tell them explicitly the REVOKE command the\n> database administrator must issue to allow passwords to be set.\n> \n> The advantage of this is that they can use the other database USER commands,\n> just not the password commands, and they can easily change their mind. \n> It puts the checking for world-readble pg_user at the proper place. I\n> am afraid if we didn't someone would answer y to world-readable pg_user,\n> then start assigning passwords.\n> \n> We can also change psql.c to do a \\d lookup with pg_user, and if it\n> fails, we do another SELECT without pg_user showing just user-ids. That\n> way, the administrator will get usernames, but non-priv users will not.\n> \n> -- \n> Bruce Momjian\n> [email protected]\n> \n\nI agree that we should do the check for the 'World-readable' \npg_user and give a warning if someone attempts to assign a password.\nI still think the admin should be given an option in the dbinit script to \nchoose whether or no to run the 'REVOKE'. At this point it would be easy \nto inform the admin what the trade-offs are, and we will have his/her \nundivided attention (They will be more apt to read about it to get past the\nprompt.).\n\nThese changes should not take long to make. I need to get the current \nCVS version (I will do so tonight), and I should have the changes \n(performed and tested) in a day or so.\n\nTodd A. 
Brandys\[email protected]\n", "msg_date": "Tue, 20 Jan 1998 11:58:12 -0600 (CST)", "msg_from": "todd brandys <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: New pg_pwd patch and stuff" }, { "msg_contents": "> I agree that we should do the check for the 'World-readable' \n> pg_user and give a warning if someone attempts to assign a password.\n> I still think the admin should be given an option in the dbinit script to \n> choose whether or no to run the 'REVOKE'. At this point it would be easy \n> to inform the admin what the trade-offs are, and we will have his/her \n> undivided attention (They will be more apt to read about it to get past the\n> prompt.).\n> \n> These changes should not take long to make. I need to get the current \n> CVS version (I will do so tonight), and I should have the changes \n> (performed and tested) in a day or so.\n\nSure, why not ask the admin. Saves him a step when he tries to do the\nfirst password. I just think we should also check when doing a password\nchange, which makes sense.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Tue, 20 Jan 1998 14:53:38 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: New pg_pwd patch and stuff" } ]
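The readability test the thread keeps coming back to can be tried from any libpq client: simply attempt to read pg_user and see whether the REVOKE has been applied. This is an illustration of the check, not the code that would go into user.c or psql:

/* Client-side check: after "REVOKE ALL ON pg_user FROM PUBLIC" an
 * ordinary user can no longer read the table, so password-based
 * authentication becomes reasonable.  Sketch only. */
#include <stdio.h>
#include "libpq-fe.h"

int main(void)
{
    PGconn   *conn = PQsetdb(NULL, NULL, NULL, NULL, "template1");
    PGresult *res;

    if (PQstatus(conn) != CONNECTION_OK)
        return 1;

    res = PQexec(conn, "SELECT usename FROM pg_user");
    if (PQresultStatus(res) == PGRES_TUPLES_OK)
        printf("pg_user is world-readable; passwords should not be stored there\n");
    else
        printf("pg_user is protected; password authentication is safe to enable\n");

    PQclear(res);
    PQfinish(conn);
    return 0;
}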
[ { "msg_contents": "> \n> Bruce,\n> \n> The new varchar() stuff looks good, just a minor problem with \"select into\"\n> where the new table does not seem to get a copy of the atttypmod value\n> from the source table.\n> \n> I had a quick look at the code but guess you'll find the problem 10 times\n> faster than I could.\n\nOK, I have fixed this. The real way to fix this it to add restypmod to\nResdom, and pass the value all the way through the engine, so tupDesc\nalways has the proper atttypmod value, but it is not clear how to do\nthis in the parser, so I put the code back in to just do a lookup in\nexecMain/execUtils when doing an SELECT * INTO TABLE.\n\nIf we start using atttypmod more, we will have to do this.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Sun, 18 Jan 1998 21:39:06 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: varchar() troubles" } ]
[ { "msg_contents": "Felix Morley Finch wrote:\n> \n> >>In article <[email protected]>, \"Vadim B. Mikheev\" <[email protected]> writes:\n> \n> > Felix Morley Finch wrote:\n> >>\n> >> I tried searching and ran into several new lessons in SQL; I ended up\n> >> with this --\n> >>\n> >> select name from names n, keys k1, keys k2, family f1, family f2\n> >> where ((k1.key ~* 'xyzzy') and (k1.id = f1.key) and (n.id = f1.name))\n> >> or ((k2.key ~* 'plugh) and (k2.id = f2.key) and (n.id = f2.name));\n> >>\n> >> and psql never returned. This is a test database, maybe 20 records.\n> >> If I keep it down to a single keyword, it works fine.\n> \n> > EXPLAIN ?\n> > Did you try SET geqo TO 'on=1' ?\n> \n> I just tried it --\n> \n> WARN:Bad value for # of relations (1)\n> \n> I tried ON=2 which gave no WARNING but didn't fix the join. I tried\n> OFF and same result.\n\n(So, this is problem of not just old optimizer...)\n\nCould you post me output of\n\nEXPLAIN VERBOSE _select_statement_ ?\n\nVadim\n", "msg_date": "Mon, 19 Jan 1998 11:06:09 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [QUESTIONS] Join-crazy newbie may be pushing the limits :-)" } ]
[ { "msg_contents": "\nOK, I have added code to allow the SubLinks make it to the optimizer.\n\nI implemented ParseState->parentParseState, but not parentQuery, because\nthe parentParseState is much more valuable to me, and Vadim thought it\nmight be useful, but was not positive. Also, keeping that parentQuery\npointer valid through rewrite may be difficult, so I dropped it. \nParseState is only valid in the parser.\n\nI have not done:\n\n\tcorrelated subquery column references\n\tadded Var->sublevels_up\n\tgotten this to work in the rewrite system\n\thave not added full CopyNode support\n\nI will address these in the next few days.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Mon, 19 Jan 1998 00:54:49 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "subselects" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> OK, I have added code to allow the SubLinks make it to the optimizer.\n> \n> I implemented ParseState->parentParseState, but not parentQuery, because\n> the parentParseState is much more valuable to me, and Vadim thought it\n> might be useful, but was not positive. Also, keeping that parentQuery\n> pointer valid through rewrite may be difficult, so I dropped it.\n> ParseState is only valid in the parser.\n> \n> I have not done:\n> \n> correlated subquery column references\n> added Var->sublevels_up\n> gotten this to work in the rewrite system\n> have not added full CopyNode support\n> \n> I will address these in the next few days.\n\nNice! I'm starting with non-correlated subqueries...\n\nVadim\n", "msg_date": "Mon, 19 Jan 1998 13:37:41 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: subselects" }, { "msg_contents": "> > I have not done:\n> > \n> > correlated subquery column references\n> > added Var->sublevels_up\n> > gotten this to work in the rewrite system\n> > have not added full CopyNode support\n> > \n> > I will address these in the next few days.\n> \n\nOK, had some bugs, but now it works. Ran postmaster with full debug,\nand saw proper values in SubLink structure. In fact, the optimizer\nseems to pass this through fine, only to error out in the executor with\n'unknown node.'\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Mon, 19 Jan 1998 13:00:38 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: subselects" }, { "msg_contents": "> > I have not done:\n> > \n> > correlated subquery column references\n> > added Var->sublevels_up\n> > gotten this to work in the rewrite system\n\nThis item is done now:\n> > have not added full CopyNode support\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Mon, 19 Jan 1998 13:09:10 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: subselects" }, { "msg_contents": "> > I have not done:\n> > \n> > correlated subquery column references\n> > added Var->sublevels_up\n> > gotten this to work in the rewrite system\n> > have not added full CopyNode support\n> > \n\nOK, these are all done now.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Tue, 20 Jan 1998 23:22:42 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: subselects" } ]
[ { "msg_contents": "\nIt hangs at the following error, CVSup'd and built as of this morning:\n\nScript started on Mon 19 Jan 1998 10:16:17 AM AST\n%./initdb --pglb\b\u001b[Kib=/loc/pgsql/lib --pgdata=/loc/pgsql/data\ninitdb: using /loc/pgsql/lib/local1_template1.bki.source as input to create the template database.\ninitdb: using /loc/pgsql/lib/global1.bki.source as input to create the global classes.\ninitdb: using /loc/pgsql/lib/pg_hba.conf.sample as the host-based authentication control file.\n\nWe are initializing the database system with username marc (uid=100).\nThis user will own all the files and must also own the server process.\n\nCreating Postgres database system directory /loc/pgsql/data\n\nCreating Postgres database system directory /loc/pgsql/data/base\n\ninitdb: creating template database in /loc/pgsql/data/base/template1\nRunning: postgres -boot -C -F -D/loc/pgsql/data -Q template1\nERROR: heap_modifytuple: repl is \\-62\nERROR: heap_modifytuple: repl is \\-62\n^C%\nscript done on Mon 19 Jan 1998 10:17:38 AM AST\n\n\n", "msg_date": "Mon, 19 Jan 1998 09:15:52 -0500 (EST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Newest Source ..." }, { "msg_contents": "> \n> \n> It hangs at the following error, CVSup'd and built as of this morning:\n> \n> Script started on Mon 19 Jan 1998 10:16:17 AM AST\n> %./initdb --pglb\b\u001b[Kib=/loc/pgsql/lib --pgdata=/loc/pgsql/data\n> initdb: using /loc/pgsql/lib/local1_template1.bki.source as input to create the template database.\n> initdb: using /loc/pgsql/lib/global1.bki.source as input to create the global classes.\n\nI cvsup'ed last night, and initdb was working fine.\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Mon, 19 Jan 1998 12:06:03 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Newest Source ..." } ]
[ { "msg_contents": "HI\n\nI found some bug:\nWhen I select from table (select A,B,count(*) INTO table tmp1 FROM AAA\ngroup by A,B; ) in \nbig table some touples are duplicated ;(,\n\n A,\tB,\tcount(*)\n321\t1\t3\t\\\t?\n321\t1\t2\t/ \t?\n321\t2\t5\n...........\n\nPS: I use linux 2.0.33 and solaris 2.5.1 and postgres 6.2.1v7\n\n-- \nSY, Serj\n", "msg_date": "Mon, 19 Jan 1998 17:42:35 +0300", "msg_from": "Serj <[email protected]>", "msg_from_op": true, "msg_subject": "postgres 6.2.1 Group BY BUG" }, { "msg_contents": "Can you try this in the current 6.3 beta and tell us if it is fixed?\n\n\n> HI\n> \n> I found some bug:\n> When I select from table (select A,B,count(*) INTO table tmp1 FROM AAA\n> group by A,B; ) in \n> big table some touples are duplicated ;(,\n> \n> A,\tB,\tcount(*)\n> 321\t1\t3\t\\\t?\n> 321\t1\t2\t/ \t?\n> 321\t2\t5\n> ...........\n> \n> PS: I use linux 2.0.33 and solaris 2.5.1 and postgres 6.2.1v7\n> \n> -- \n> SY, Serj\n> \n> \n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Fri, 13 Feb 1998 15:14:42 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [QUESTIONS] postgres 6.2.1 Group BY BUG" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Can you try this in the current 6.3 beta and tell us if it is fixed?\n> \n> > HI\n> >\n> > I found some bug:\n> > When I select from table (select A,B,count(*) INTO table tmp1 FROM AAA\n> > group by A,B; ) in\n> > big table some touples are duplicated ;(,\n> >\n> > A, B, count(*)\n> > 321 1 3 \\ ?\n> > 321 1 2 / ?\n> > 321 2 5\n\nI didn't fix GROUP BY yet.\n\nVadim\n", "msg_date": "Sun, 15 Feb 1998 16:44:00 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] postgres 6.2.1 Group BY BUG" } ]
[ { "msg_contents": "I removed the 'lock' in the source tree. I think Lock is a valid 386\nopcode, but gcc under Freebsd and BSDI does not support it.\n\nCan you put some ifdef in there so Unixware uses it, but non-unixware\ndoes not?\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Mon, 19 Jan 1998 13:12:29 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "unixware" } ]
[ { "msg_contents": "Thanks Bruce,\n\nIt was obviously not quite as simple a problem as I had 1st imagined.\n\nI did have a root around in the code but could not work out how the\nattributes were copied to the newly created table.\n\nThanks for the fix,\nKeith.\n\nBruce Momjian <[email protected]>\n> [email protected]\n> > Bruce,\n> > \n> > The new varchar() stuff looks good, just a minor problem with \"select into\"\n> > where the new table does not seem to get a copy of the atttypmod value\n> > from the source table.\n> > \n> > I had a quick look at the code but guess you'll find the problem 10 times\n> > faster than I could.\n> \n> OK, I have fixed this. The real way to fix this it to add restypmod to\n> Resdom, and pass the value all the way through the engine, so tupDesc\n> always has the proper atttypmod value, but it is not clear how to do\n> this in the parser, so I put the code back in to just do a lookup in\n> execMain/execUtils when doing an SELECT * INTO TABLE.\n> \n> If we start using atttypmod more, we will have to do this.\n> \n\n\n", "msg_date": "Mon, 19 Jan 1998 18:52:52 +0000 (GMT)", "msg_from": "Keith Parks <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: varchar() troubles" } ]
[ { "msg_contents": "\nThis comes up since the varchar will no longer be padded...\n\nShould the user be allowed to create a table with fields that\nwould go over the maximum tuple size if all fields were max'd?\n\nExample...with a block size of 8192...\n\ncreate table foo (a varchar(4000), b varchar(4000), c varchar(500));\n\nIt's possible that a valid tuple according to the table definition\nwill be rejected internally.\n\n6.2 lets me create the above table, but won't let me insert in it\nunless at least one of the fields is null. 6.3 will allow the row\nas long as they collectively aren't more than the MAXTUPLEN.\n\nShould postgres issue a warning or an error about this condition\nwhen the table is created?\n\ndarrenk\n", "msg_date": "Mon, 19 Jan 1998 13:53:25 -0500", "msg_from": "[email protected] (Darren King)", "msg_from_op": true, "msg_subject": "Max tuple size." }, { "msg_contents": "> \n> \n> This comes up since the varchar will no longer be padded...\n> \n> Should the user be allowed to create a table with fields that\n> would go over the maximum tuple size if all fields were max'd?\n\nYes, text does not do checking, and certainly a table with three text\nfields can go over the maximum.\n\n> \n> Example...with a block size of 8192...\n> \n> create table foo (a varchar(4000), b varchar(4000), c varchar(500));\n> \n> It's possible that a valid tuple according to the table definition\n> will be rejected internally.\n> \n> 6.2 lets me create the above table, but won't let me insert in it\n> unless at least one of the fields is null. 6.3 will allow the row\n> as long as they collectively aren't more than the MAXTUPLEN.\n\nIMHO, that is good.\n\n> Should postgres issue a warning or an error about this condition\n> when the table is created?\n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Mon, 19 Jan 1998 16:54:25 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Max tuple size." } ]
[ { "msg_contents": "Hi,\n\nIf I want to compile the PostgreSQL 6.2.1 code with the -g flag, which Makefile\nshould I add it. I would like to run it thru a debugger. I am compiling it\nwith gcc; so I should be able to use xgdb, right?\n\nThe sort cost in the planner is assumed to be 0. So, there is an unfair bias\ntowards sort-merge join in the optimizer. If I want to add a sort cost, will\nit be enough to add it in the make_sortplan() function in \noptimizer/plan/planner.c.\nIs the size of the relation to be sorted estimated at that time?\nIf not what is the best way to do this?\n\nThanks\n--shiby\n\n", "msg_date": "Mon, 19 Jan 1998 14:34:15 -0500", "msg_from": "Shiby Thomas <[email protected]>", "msg_from_op": true, "msg_subject": "debug flag" }, { "msg_contents": "> \n> Hi,\n> \n> If I want to compile the PostgreSQL 6.2.1 code with the -g flag, which Makefile\n> should I add it. I would like to run it thru a debugger. I am compiling it\n> with gcc; so I should be able to use xgdb, right?\n> \n> The sort cost in the planner is assumed to be 0. So, there is an unfair bias\n> towards sort-merge join in the optimizer. If I want to add a sort cost, will\n> it be enough to add it in the make_sortplan() function in \n> optimizer/plan/planner.c.\n> Is the size of the relation to be sorted estimated at that time?\n> If not what is the best way to do this?\n\nI think this is fixed in 6.3. Beta is Feb 1.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Mon, 19 Jan 1998 16:55:22 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] debug flag" }, { "msg_contents": "On Mon, 19 Jan 1998, Shiby Thomas wrote:\n\n> Hi,\n> \n> If I want to compile the PostgreSQL 6.2.1 code with the -g flag, which Makefile\n> should I add it. I would like to run it thru a debugger. I am compiling it\n> with gcc; so I should be able to use xgdb, right?\n\n\tFrom Makefile.global:\n\n##############################################################################\n# COPT\n#\n# COPT is for options that the sophisticated builder might want to vary\n# from one build to the next, like options to build Postgres with \ndebugging\n# information included. COPT is meant to be set on the make command line,\n# for example with the command \"make COPT=-g\". The value you see set here\n# is the default that gets used if the builder does not give a value for\n# COPT on his make command.\n#\n#\n\n\n", "msg_date": "Mon, 19 Jan 1998 17:12:55 -0500 (EST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] debug flag" } ]
[ { "msg_contents": "\n Hi,\n\n I'm trying to figgure out how does the locking work.\n\n If I get a ``Result Table'' from a select with, say, only one row which\ncomes from only one ``Schema Table'', and I lock the ``Result Table'',\nwill it not prevent updates of the row in ``Schema Table'' that appeared \nin the ``Result Table'' ? \n\n Thanx,\n\n Jan\n\n -- Gospel of Jesus is the saving power of God for all who believe --\nJan Vicherek ## To some, nothing is impossible. ## www.ied.com/~honza\n >>> Free Software Union President ... www.fslu.org <<<\nInteractive Electronic Design Inc. -#- PGP: finger [email protected]\n\n\n", "msg_date": "Mon, 19 Jan 1998 17:40:33 -0500 (EST)", "msg_from": "Jan Vicherek <[email protected]>", "msg_from_op": true, "msg_subject": "locking" }, { "msg_contents": "\n *Any* answers would be appreciated :\n\n I'll rephrase the original questions :\n\npg doesn't have \"row-level locking\" but has \"table locking\".\na result from a SELECT * FROM TESTTABLE; is a table.\nI lock the table (which is result of a SELECT).\n\n Q : will this locking prevent update of the rows in TESTTABLE that have\nbeen SELECTed ?\n\n Thanx,\n\n Jan\n\nOn Mon, 19 Jan 1998, Jan Vicherek wrote:\n\n> I'm trying to figgure out how does the locking work.\n> \n> If I get a ``Result Table'' from a select with, say, only one row which\n> comes from only one ``Schema Table'', and I lock the ``Result Table'',\n> will it not prevent updates of the row in ``Schema Table'' that appeared \n> in the ``Result Table'' ? \n\n -- Gospel of Jesus is the saving power of God for all who believe --\nJan Vicherek ## To some, nothing is impossible. ## www.ied.com/~honza\n >>> Free Software Union President ... www.fslu.org <<<\nInteractive Electronic Design Inc. -#- PGP: finger [email protected]\n\n", "msg_date": "Tue, 20 Jan 1998 23:19:48 -0500 (EST)", "msg_from": "Jan Vicherek <[email protected]>", "msg_from_op": true, "msg_subject": "Re: locking" }, { "msg_contents": "If you FIRST use \"BEGIN\" to start a transaction, and THEN do the select,\nyes.\n\nThe SELECT will cause a table lock to be asserted (a read lock, allowing\nfurther SELECTs - but updates must come from INSIDE this transaction stream\nat that point).\n\nYou release the lock with either COMMIT or ROLLBACK (which either makes your\nchanges \"visible to others\", or discards them, respectively.\n\nA SELECT coming from elsewhere in the middle of a transaction sees the data\nas it existed at the time you sent the BEGIN.\n\nThis is the basic premise of transaction processing and being able to insure\nthat the data has is \"correct\", in that all in-process transactions are\ncomplete and not half-done, when someone comes browsing through the tables.\n\nWithout this you don't have a transaction system.\n\nIf you DO NOT use BEGIN/COMMIT|ROLLBACK, then each SQL statement is a\ntransaction in and of itself (ie: \"UPDATE\" locks during the execution of the\ncommand, and releases it when the update is complete).\n\n--\n-- \nKarl Denninger ([email protected])| MCSNet - Serving Chicagoland and Wisconsin\nhttp://www.mcs.net/ | T1's from $600 monthly to FULL DS-3 Service\n\t\t\t | NEW! 
K56Flex support on ALL modems\nVoice: [+1 312 803-MCS1 x219]| EXCLUSIVE NEW FEATURE ON ALL PERSONAL ACCOUNTS\nFax: [+1 312 803-4929] | *SPAMBLOCK* Technology now included at no cost\n\nOn Tue, Jan 20, 1998 at 11:19:48PM -0500, Jan Vicherek wrote:\n> \n> *Any* answers would be appreciated :\n> \n> I'll rephrase the original questions :\n> \n> pg doesn't have \"row-level locking\" but has \"table locking\".\n> a result from a SELECT * FROM TESTTABLE; is a table.\n> I lock the table (which is result of a SELECT).\n> \n> Q : will this locking prevent update of the rows in TESTTABLE that have\n> been SELECTed ?\n> \n> Thanx,\n> \n> Jan\n> \n> On Mon, 19 Jan 1998, Jan Vicherek wrote:\n> \n> > I'm trying to figgure out how does the locking work.\n> > \n> > If I get a ``Result Table'' from a select with, say, only one row which\n> > comes from only one ``Schema Table'', and I lock the ``Result Table'',\n> > will it not prevent updates of the row in ``Schema Table'' that appeared \n> > in the ``Result Table'' ? \n> \n> -- Gospel of Jesus is the saving power of God for all who believe --\n> Jan Vicherek ## To some, nothing is impossible. ## www.ied.com/~honza\n> >>> Free Software Union President ... www.fslu.org <<<\n> Interactive Electronic Design Inc. -#- PGP: finger [email protected]\n> \n> \n", "msg_date": "Wed, 21 Jan 1998 07:53:02 -0600", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [QUESTIONS] Re: locking" }, { "msg_contents": "> \n> \n> *Any* answers would be appreciated :\n> \n> I'll rephrase the original questions :\n> \n> pg doesn't have \"row-level locking\" but has \"table locking\".\n> a result from a SELECT * FROM TESTTABLE; is a table.\n> I lock the table (which is result of a SELECT).\n> \n> Q : will this locking prevent update of the rows in TESTTABLE that have\n> been SELECTed ?\n> \n\nWhile the SELECT is running, no one can update the table. If the SELECT\nis in a transaction, no one can update the table until the transaction\ncompletes. Other people can read from the table, though.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Wed, 21 Jan 1998 10:05:17 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: locking" }, { "msg_contents": "> > \n> > Q : will this locking prevent update of the rows in TESTTABLE that have\n> > been SELECTed ?\n> > \n> \n> While the SELECT is running, no one can update the table. If the SELECT\n> is in a transaction, no one can update the table until the transaction\n> completes. Other people can read from the table, though.\n\nDoes only one transaction run at a time? That is, if you have a\ntransaction from client A that does a select followed by an update\nand client B attempts the same thing, will the read portion of\nB's transaction be deferred until A's commit completes, while\na non-transaction select (read only) from B would be allowed to\nproceed?\n\n Les Mikesell\n", "msg_date": "Wed, 21 Jan 1998 09:55:03 -0600 (CST)", "msg_from": "Leslie Mikesell <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [QUESTIONS] Re: [HACKERS] Re: locking" } ]
[ { "msg_contents": "I have cleaned up the parser code as best I could. I still am very\nconfused by much of the code in parser_func, parser_oper, and\nparser_type, and others. If someone wants to study this stuff and get a\nclearer layout, that would be great.\n\nI have added 'lock;' to the i386 asm.\n\nI have change SubLink to send Oper* for Vadim.\n\nI have not added Var->sublevels_up yet.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Tue, 20 Jan 1998 00:03:09 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "parser cleanup" } ]
[ { "msg_contents": "I can't figure out how to properly do the the Var->sublevels_up search in the\nexisting code, partly because I can't figure out what it is doing.\n\nIt will take me a few days more to do it. I just can't look at the code\nanymore.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Tue, 20 Jan 1998 00:21:00 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Var->sublevels_up" } ]
[ { "msg_contents": "> \n> I can't figure out how to properly do the the Var->sublevels_up search in the\n> existing code, partly because I can't figure out what it is doing.\n> \n> It will take me a few days more to do it. I just can't look at the code\n> anymore.\n\nOK, I think I have a handle on it now. Should be two days.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Tue, 20 Jan 1998 01:55:43 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Var->sublevels_up" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> >\n> > I can't figure out how to properly do the the Var->sublevels_up search in the\n> > existing code, partly because I can't figure out what it is doing.\n ^^^^^^^^^^^^^^^^\nIt says what range table should be used to find Var' relation.\n\n> >\n> > It will take me a few days more to do it. I just can't look at the code\n> > anymore.\n> \n> OK, I think I have a handle on it now. Should be two days.\n\nOk.\n\nVadim\n", "msg_date": "Tue, 20 Jan 1998 14:11:43 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Var->sublevels_up" }, { "msg_contents": "> \n> Bruce Momjian wrote:\n> > \n> > >\n> > > I can't figure out how to properly do the the Var->sublevels_up search in the\n> > > existing code, partly because I can't figure out what it is doing.\n> ^^^^^^^^^^^^^^^^\n> It says what range table should be used to find Var' relation.\n\nMy problem was the ParseFunc really does much more than handle\nfunctions. It handles:\n\n\tcolname\n\trel.colname\n\trel.colname.func\n\tsum(colname)\n\tsum(rel.colname)\n\t\nand probably much more than that.\n\n> \n> > >\n> > > It will take me a few days more to do it. I just can't look at the code\n> > > anymore.\n> > \n> > OK, I think I have a handle on it now. Should be two days.\n> \n> Ok.\n> \n> Vadim\n> \n> \n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Tue, 20 Jan 1998 09:42:12 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Var->sublevels_up" }, { "msg_contents": "> \n> Can someone figure out why atttypmod for system relations is not always\n> zero? Sometimes it is -1 and other times it is 1, and I can't figure\n> out why.\n\nI found the problem. You will have to initdb to see the fix.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Thu, 5 Feb 1998 14:00:53 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: atttypmod is wrong in system catalogs" }, { "msg_contents": "> \n> > > CREATED relation pg_description with OID 17847\n> > > Commit End\n> > Amopen: relation pg_description. attrsize 63\n> > create attribute 0 name objoid len 4 num 1 type 26\n> > create attribute 1 name description len -1 num 2 type 25\n> > > Amclose: relation (null).\n> > > initdb: could not create template database\n> > initdb: cleaning up by wiping out /usr/local/pgsql/data/base/template1\n> > \n> > Installing the \"#define long int\" gives about 40 pages of errors.\n\nI saw this in hsearch.h:\n\t\n\ttypedef struct element\n\t{\n\t unsigned long next; /* secret from user */\n\t long key;\n\t} ELEMENT;\n\t \n\ttypedef unsigned long BUCKET_INDEX;\n\nI wonder is this is the problem. Should then be ints. It would be\ngreat if you could read the hash value going in for mkoidname function,\nand then see if the same key is being used for the lookup. 
Perhaps some\nelog(NOTICE) lines near the hash function would tell you this. \n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Thu, 12 Feb 1998 09:39:50 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] PostGreSQL v6.2.1 for Linux Alpha" }, { "msg_contents": "> Yes, the threading topic has come up before, and I have never considered\n> it a big win. We want to remove the exec() from the startup, so we just\n> do a fork. Will save 0.001 seconds of startup.\n> \n> That is a very easy win for us. I hadn't considered the synchonization\n> problems with palloc/pfree, and that could be a real problem.\n\nI was wrong here. Removing exec() will save 0.01 seconds, not 0.001\nseconds. Typical backend startup and a single query is 0.08 seconds. \nRemoval of exec() will take this down to 0.07 seconds.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Fri, 13 Mar 1998 09:49:01 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] Does Storage Manager support >2GB\n\ttables?" }, { "msg_contents": "> OK, I have the artist_fti table with 4.5 million rows, with an index\n> artist_fti_idx on artist_fti.string. I don't have a 'product' table\n> because I don't have the disk space, but that is not really a problem\n> for testing. \n> \n> I used the product table to populate the artits_fti, then deleted the\n> table so I could create the artist_fti_idx index. Single table, no\n> join.\n> \n> I am running on BSD/OS 3.1 PP200 with SCSI Ultra Barracuda disks.\n> \n> I am seeing the same thing you are. 'lling' and 'tones' take 11-22\n> seconds on the first few runs, then return almost immediately. If I do\n> another query and come back to the first one, I have the same slowness,\n> with the disk light being on almost constantly. Certain queries seem to\n> lend themselves to speeding up, while 'rol' never seems to get really\n> fast.\n> \n> I have to conclude that because of the way this table is created by\n> slicing words, its layout is almost random. The table is 272MB, and\n> with 8k/page, that is 34,000 pages. If we are going for 2,500 rows,\n> odds are that each row is in a separate page. So, to do the query, we\n> are retrieving 2,500 8k pages, or 20MB of random data on the disk. How\n> long does it take to issue 2,500 disk requests that are scattered all\n> over the disk. Probably 11-22 seconds.\n> \n> My OS only lets me have 400 8k buffers, or 3.2MB of buffer. As we are\n> transferring 2,500 8k pages or 20MB of data, it is no surprise that the\n> buffer cache doesn't help much. Sometime, the data is grouped together\n> on the disk, and that is why some are fast, but often they are not, and\n> certainly in a situation where you are looking for two words to appear\n> on the same row, they certainly are not on adjacent pages.\n> \n> Just started running CLUSTER, and it appears to be taking forever. Does\n> not seem to be using the disk, but a lot of CPU. It appears to be\n> caught in an endless loop.\n\nOK, I have an answer for you.\n\nFirst, the CLUSTER problem I was having was some fluke, probably\nsomething else that got stuck somewhere. 
Not a bug.\n\nSecond, CLUSTER takes forever because it is moving all over the disk\nretrieving each row in order.\n\nThird, I should have been able to solve this for you sooner. The issues\nof slowness you are seeing are the exact issues I dealt with five years\nago when I designed this fragmentation system. I was so excited that\nyou could develop triggers to slice the words, I had forgotten the other\nissues.\n\nOK, it is. As part of this fragmentation job, EVERY NIGHT, I re-slice\nthe user strings and dump them into a flat file. I then load them into\nIngres as a heap table, then modify the table to ISAM.\n\nYou may say, why every night. Well, the problem is I load the data at\n100% fill-factor, and ISAM rapidly becomes fragmented with\ninsertions/deletions.\n\nBtree does not become fragmented, but the problem there is that btree\nuse was very slow. I believe this is because ISAM is basically a SORTED\nHEAP with an index, so everything is close and packed tight. Btree\ndoesn't seem to do that as well. It is more spread out.\n\nUsers complain about the nightly re-index, but I tell them, 'Hey, you\nare searching for fragments of words in 200k entries in 6 seconds. I can\ndesign it without the night job, but your searches all day will take\nmuch longer.\" That stops the conversation.\n\nSo, to your solution. CLUSTER is too slow. The disk is going crazy\nmoving single rows into the temp table. I recommend doing a COPY of\nartist_fti to a flat file, doing a Unix 'sort' on the flat file, then\nre-loading the data into the artist_fti, and then putting the index on\nthe table and vacuum.\n\nI have done this, and now all searches are instantaneous THE FIRST TIME\nand every time.\n\nWith this change, I am anxious to hear how fast you can now do your\nmulti-word searches. Daily changes will not really impact performance\nbecause they are a small part of the total search, but this process of\nCOPY/sort/reload/reindex will need to be done on a regular basis to\nmaintain performance.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sat, 14 Mar 1998 01:27:33 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: indexing words slow" }, { "msg_contents": "> Second, CLUSTER takes forever because it is moving all over the disk\n> retrieving each row in order.\n> \n\n> So, to your solution. CLUSTER is too slow. The disk is going crazy\n> moving single rows into the temp table. I recommend doing a COPY of\n> artist_fti to a flat file, doing a Unix 'sort' on the flat file, then\n> re-loading the data into the artist_fti, and then putting the index on\n> the table and vacuum.\n> \n> I have done this, and now all searches are instantaneous THE FIRST TIME\n> and every time.\n> \n> With this change, I am anxious to hear how fast you can now do your\n> multi-word searches. Daily changes will not really impact performance\n> because they are a small part of the total search, but this process of\n> COPY/sort/reload/reindex will need to be done on a regular basis to\n> maintain performance.\n\nOne more piece of good news. The reason CLUSTER was so slow is because\nyou loaded massive unordered amounts of data into the system. Once you\ndo the COPY out/reload, subsequent clusters will run very quickly,\nbecause 99% of the data is already ordered. 
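(To sketch that COPY out/reload step concretely -- the database name and
flat-file paths below are assumptions; only the artist_fti table, its
string column and the artist_fti_idx index come from this thread:

	test=> copy artist_fti to '/tmp/artist_fti.unl';
	$ sort /tmp/artist_fti.unl > /tmp/artist_fti.sorted
	test=> drop index artist_fti_idx;
	test=> delete from artist_fti;
	test=> copy artist_fti from '/tmp/artist_fti.sorted';
	test=> create index artist_fti_idx on artist_fti using btree (string);
	test=> vacuum artist_fti;

The exact commands matter less than the effect: the heap ends up
physically sorted on disk before the index is rebuilt.)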
Only the new/changed data\nis unordered, so you should be able to rapidly run CLUSTER from then on\nto keep good performance.\n\nI think a user module allowing this word fragment searching will be a\nbig hit with users.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sat, 14 Mar 1998 01:40:12 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: indexing words slow" }, { "msg_contents": "Here is a general comments on maintaining the FAQ, and pointing people\nto answers.\n\nFirst, the FAQ is a pretty weird way to give people information. It is\nan almost random list of questions and answers. If the list is long,\npeople can get very impatient, and start skimming over the list. That\nis OK, as long as they can find the information when they need it. \nUnfortunately, because it is in random order, it makes things very hard\nto find.\n\nOK, my strategy. Let me give you two examples. People get deadlocks\nquite often, usually novice SQL users. We need to explain to them what\nthey have done so they can re-design their queries to avoid it. I could\nhave put it in the FAQ, but it really belongs in the LOCK manual page. \nSo I put it in the lock manual page, and added a mention in the\n'deadlock' error message to look at the 'lock' manual page for a\ndescription of the problem. This is perfect, because as soon as they\nget the error, they are pointed to the proper place that has the exact\nanswer they need. Same with the new postmaster -i option. By putting a\nmention in the libpq connect failure message to look for postmaster -i,\npeople are pointed right to the problem.\n\nThis has cut down on the problem reports tremendously. I think\ncommercial software doesn't do very good in this area, probably because\nthe support people are not the same as the developers. Because we are\nthe same people, we think of these things.\n\nSecond, if an FAQ issue is described in the manual pages or docs, I\npoint them to those, rather than re-iterating a long description of the\nproblem. I have tried to move as much as possible OUT of the FAQ, and\ninto the well-structured manual pages or error message. This leaves\nreal FAQ items on the FAQ, things that really don't fit in a manual page\nor are too general to fit anywhere else.\n\nI must say that people usually have been reading the FAQ, because we\nrarely get a question that is answered on the FAQ.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sat, 18 Apr 1998 22:50:17 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [DOCS] FAQ organization" }, { "msg_contents": "On Sat, 18 Apr 1998, Bruce Momjian wrote:\n\n> Here is a general comments on maintaining the FAQ, and pointing people\n> to answers.\n> \n> First, the FAQ is a pretty weird way to give people information. It is\n> an almost random list of questions and answers. If the list is long,\n> people can get very impatient, and start skimming over the list. That\n> is OK, as long as they can find the information when they need it. 
\n> Unfortunately, because it is in random order, it makes things very hard\n> to find.\n\nWell, this is why we need to structure it, so that it is _not_ a long\nunsorted list of questions, and trying to group questions that look alike.\n\n> OK, my strategy. Let me give you two examples. People get deadlocks\n> quite often, usually novice SQL users. We need to explain to them what\n> they have done so they can re-design their queries to avoid it. I could\n> have put it in the FAQ, but it really belongs in the LOCK manual page. \n> So I put it in the lock manual page, and added a mention in the\n> 'deadlock' error message to look at the 'lock' manual page for a\n> description of the problem. This is perfect, because as soon as they\n> get the error, they are pointed to the proper place that has the exact\n> answer they need. Same with the new postmaster -i option. By putting a\n> mention in the libpq connect failure message to look for postmaster -i,\n> people are pointed right to the problem.\n\nWell, I understand what you want. Sure, the right solution for the FAQ\nwould be to give pointers to the manual pages. But sometimes it is not\nsufficient. Sometimes, people have problems and have no idea where they\ncome from, they don't know where to look at, and what they should look at.\nSometimes there are more than one suitable way of solving a problem, since\nsimilar problems can sometimes be best resolved by completely different\nsolutions, which are spread over the docs. There, the FAQ can be : \"you've\ngot this kind of problems, you can solve it by doing that, or that, or\nthat. Look here, there, and there for details\". \n\nSometimes, it's better not to wait for people to get the error messages.\nAs a software user, I often read quickly through the FAQs to look at\ncommon errors people do, so that, even before I start using the software,\nI have a vague idea of what I should not do. So, people dealing with\nconcurrent access/update have a chance to be said : \"Beware! You should\nlook at the LOCK man page / `Concurrent access under Postgres' user manual\nsection if you feel you might have deadlock problems\". Or to people using\nJDBC, \"don't forget to use the -i flag !\", instead of waiting for them to\npanic because it doesn't work the way it should.\n\n> This has cut down on the problem reports tremendously. I think\n> commercial software doesn't do very good in this area, probably because\n> the support people are not the same as the developers. Because we are\n> the same people, we think of these things.\n\nWell, actually, I've always found commercial software support docs - when\nthere are any - to be much worse that support files in free software :)\nEspecially the \"troubleshooting\" section, where they often consider 1. you\nare dumb, and forgot to power on the computer 2. the software never fails,\nso they don't describe at all the error code -3257.\n\n> Second, if an FAQ issue is described in the manual pages or docs, I\n> point them to those, rather than re-iterating a long description of the\n> problem. I have tried to move as much as possible OUT of the FAQ, and\n> into the well-structured manual pages or error message. This leaves\n> real FAQ items on the FAQ, things that really don't fit in a manual page\n> or are too general to fit anywhere else.\n\nWell, the FAQ shouldn't re-say things in detail when they can be found\nsomewhere else. 
But sometimes, you have to explain the same things\ndifferently because the problem doesn't direct people necessarily to the\nright page, or maybe they should find it by themselves, but they don't,\nbecause they are not familiar with the structure of the software or of the\nsupport documentations.\n\nTaking your example of the -i option : often people which have problems\nconcerning this don't even realize their problem comes from not allowing\nthe postmaster to listen to TCP/IP ports... they think JDBC, or perl\ninterface, or postgres itself, are not working, and the symptoms can be\nquite different. You have to tell them the cause of their problem, and\nwhere to look, and if you can provide a quick fix without obliging them to\ndig in the docs, it can be only better.\n\nAnother example : constraints, foreign keys (I know the man pages will be\nupdated :) ). People ask questions like : \"I did this which should work,\nand it doesn't work in Postgres!\". If they read the CREATE TABLE man page,\nthey would have the answer. But somehow there are people who, even after\nreading the man page, won't see what's wrong. So the FAQ may be the\nsolution to their problem, saying quickly : \"You cannot specify\nconstraints this way, do them that way (quick example). Read the CREATE\nTABLE, CREATE INDEX man pages\".\n\nHmm... actually, for this specific example, we could do a FAQ like this :\n\nQxx.xx : I tried to enter this SQL command as I used to in (MySQL,\nInformix, ORACLE), and it doesn't work. Why ?\n\nA : Every DBMS has some particularities in its SQL syntax. Please refer to\nthe Postgres man pages to see what syntax the Postgres parser follows.\n\netc...\n\nand maybe some other questions, answering special cases where it is not so\nevident... (we don't do subselects in target ?).\n\n> I must say that people usually have been reading the FAQ, because we\n> rarely get a question that is answered on the FAQ.\n\nSo by collecting questions on the mailing list, we can see what answers\nthe FAQ does not provide yet :)\n\nAnyway, I think it's important for the FAQ to redirect people to the right\nsections of the different manuals, but if you can have quick fixes, maybe\ngive them in the answer itself (\"did you use the -i flag on the command\nline ?\").\n\nIt is better to say it twice, if it can help people solve their problem\nfaster, though answers _must_ stay short (or else they deserve to have a\nspecial section in the main docs and a pointer to it :) ).\n\nDo I seem reasonable ??\n\nPatrice\n\n--\nPatrice H�D� --------------------------------- [email protected] -----\n\n", "msg_date": "Sun, 19 Apr 1998 18:23:41 +0200 (MET DST)", "msg_from": "=?ISO-8859-1?Q?Patrice_H=E9d=E9?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] FAQ organization" }, { "msg_contents": "> So by collecting questions on the mailing list, we can see what answers\n> the FAQ does not provide yet :)\n> \n> Anyway, I think it's important for the FAQ to redirect people to the right\n> sections of the different manuals, but if you can have quick fixes, maybe\n> give them in the answer itself (\"did you use the -i flag on the command\n> line ?\").\n> \n> It is better to say it twice, if it can help people solve their problem\n> faster, though answers _must_ stay short (or else they deserve to have a\n> special section in the main docs and a pointer to it :) ).\n> \n> Do I seem reasonable ??\n\nI agree 100% with everything you said. 
I just wanted to mention that if\nyou find frequent questions that should addressed in the manual\npages/docs, or error messages, please let us know so we can add them\nthere too. I am sure there are many that I have missed.\n\nI have taken a minimalist approach to the FAQ, partially because I have\nnot concentrated on it. I am looking forward to a better, more\ncomprehensive FAQ. I have been particularly lacking in adding FAQ items\nfor novice questions.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 20 Apr 1998 13:50:23 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [DOCS] FAQ organization" }, { "msg_contents": "On Mon, 20 Apr 1998, Bruce Momjian wrote:\n\n> I agree 100% with everything you said. I just wanted to mention that if\n> you find frequent questions that should addressed in the manual\n> pages/docs, or error messages, please let us know so we can add them\n> there too. I am sure there are many that I have missed.\n\nOk. I'll do it in two steps : first, SGMLize the FAQ and adding entries,\nthen look at it and say \"this should go here, that should move there,\netc...\" in coordination with Tom and all the docs-enthousiasts :)\n\n> I have taken a minimalist approach to the FAQ, partially because I have\n> not concentrated on it. I am looking forward to a better, more\n> comprehensive FAQ. I have been particularly lacking in adding FAQ items\n> for novice questions.\n\nWell, you have so much to do besides the FAQ. As I don't do any coding,\nI'll be able to focus on docs, and especially the FAQ. I still wonder how\nyou all are able to find the time to code, debug, document, and still\nkeeping answering people on the mailing list and all ! :) \n\nPatrice\n\n--\nPatrice H�D� --------------------------------- [email protected] -----\n\n", "msg_date": "Tue, 21 Apr 1998 12:33:33 +0200 (MET DST)", "msg_from": "=?ISO-8859-1?Q?Patrice_H=E9d=E9?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] FAQ organization" }, { "msg_contents": "> > I have taken a minimalist approach to the FAQ, partially because I have\n> > not concentrated on it. I am looking forward to a better, more\n> > comprehensive FAQ. I have been particularly lacking in adding FAQ items\n> > for novice questions.\n> \n> Well, you have so much to do besides the FAQ. As I don't do any coding,\n> I'll be able to focus on docs, and especially the FAQ. I still wonder how\n> you all are able to find the time to code, debug, document, and still\n> keeping answering people on the mailing list and all ! :) \n\nI only spend about 2-3 days a month coding. The rest is just little\ntweaking.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 21 Apr 1998 12:43:40 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [DOCS] FAQ organization" }, { "msg_contents": "> This exec() takes 15% of our startup time. I have wanted it removed for\n> many releases now. The only problem is to rip out the code that\n> re-attached to shared memory and stuff like that, because you will no\n> longer loose the shared memory in the exec(). The IPC code is\n> complicated, so good luck. 
I or others can help if you get stuck.\n> \n\nAnother item is to no longer use SYSV shared memory but use\nmmap(MAP_ANON) because this allows a much larger amount of shared memory\nto be used.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 29 Apr 1998 21:53:36 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] removing the exec() from doexec()" }, { "msg_contents": "On Wed, 29 April 1998, at 21:53:36, Bruce Momjian wrote:\n\n> > This exec() takes 15% of our startup time. I have wanted it removed for\n> > many releases now. The only problem is to rip out the code that\n> > re-attached to shared memory and stuff like that, because you will no\n> > longer loose the shared memory in the exec(). The IPC code is\n> > complicated, so good luck. I or others can help if you get stuck.\n> > \n> \n> Another item is to no longer use SYSV shared memory but use\n> mmap(MAP_ANON) because this allows a much larger amount of shared memory\n> to be used.\n\nWhat are the portability issues? I haven't written much portable\ncode, and certainly not with IPC.\n", "msg_date": "Wed, 29 Apr 1998 19:01:01 -0700 (PDT)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] removing the exec() from doexec()" }, { "msg_contents": "> \n> On Wed, 29 April 1998, at 21:53:36, Bruce Momjian wrote:\n> \n> > > This exec() takes 15% of our startup time. I have wanted it removed for\n> > > many releases now. The only problem is to rip out the code that\n> > > re-attached to shared memory and stuff like that, because you will no\n> > > longer loose the shared memory in the exec(). The IPC code is\n> > > complicated, so good luck. I or others can help if you get stuck.\n> > > \n> > \n> > Another item is to no longer use SYSV shared memory but use\n> > mmap(MAP_ANON) because this allows a much larger amount of shared memory\n> > to be used.\n> \n> What are the portability issues? I haven't written much portable\n> code, and certainly not with IPC.\n\nNot sure. mmap() is pretty portable. We will shake out any portability\nissues as we go, or you can ask the list if everyone has such-and-such a\nfunction.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 29 Apr 1998 22:05:41 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] removing the exec() from doexec()" }, { "msg_contents": "> \n> Quick note... Just to say that I found a bug in postgres 6.3.2 that I just this\n> minute downloaded from the ftp site... It doesn't compile under AIX 4.2.1 with\n> the latest C for AIX ver 3.1.4\n> \n> It's only aminor problem, some of the variables in pqcomm.c are declared as\n> int, and being passed to functions that expect a long * variable (Actually the\n> function paramaters are declared as size_t).\n> \n> The fix is to change the addrlen variable used on line 673 to a size_t instead\n> of an int, and also for the len variable used on line 787.\n> \n> Sorry... No diffs... No time, and I dont' subscribe to the list... I just like\n> postgres (Maybe I'll subscribe one day... 
Too busy at the moment).\n\nThe line you are complaining about is:\n\n if ((port->sock = accept(server_fd,\n (struct sockaddr *) & port->raddr,\n &addrlen)) < 0)\n\nwhile BSDI has accept defined as:\n\n\tint accept(int s, struct sockaddr *addr, int *addrlen);\n\nSo AIX has the last parameter defined as size_t, huh? I looked at the\naccept manual page, and addrlen is the length of the addr field. Hard\nto imagine that is ever going to be larger than an int. Does any other\nOS have that third parameter as anything but an int*?\n\nWe may need to add some aix-specific check on a configure check for\nthis.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Fri, 22 May 1998 00:33:50 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Bug in postgresql-6.3.2" }, { "msg_contents": "> The line you are complaining about is:\n> \n> if ((port->sock = accept(server_fd,\n> (struct sockaddr *) & port->raddr,\n> &addrlen)) < 0)\n> \n> while BSDI has accept defined as:\n> \n> \tint accept(int s, struct sockaddr *addr, int *addrlen);\n> \n> So AIX has the last parameter defined as size_t, huh? I looked at the\n> accept manual page, and addrlen is the length of the addr field. Hard\n> to imagine that is ever going to be larger than an int. Does any other\n> OS have that third parameter as anything but an int*?\n> \n> We may need to add some aix-specific check on a configure check for\n> this.\n\n>From aix 4.1 to 4.2, it changed from an int* to an unsigned long*, which\nis probably what size_t is defined as.\n\nWasn't just accept() though. There were other socket functions, but I\ndon't recall the names offhand. Not around aix anymore either... :)\n\nCheck thru the questions digests. I helped a couple of people compile\npast this glitch, latest being Jim Kraii I believe.\n\ndarrenk\n", "msg_date": "Fri, 22 May 1998 01:27:41 -0400", "msg_from": "\"Stupor Genius\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Bug in postgresql-6.3.2" }, { "msg_contents": "> \n> > Please enter a FULL description of your problem:\n> > ------------------------------------------------\n> > Dropping table after aborting a transanction makes PosgresSQL unsable.\n> > \n> > \n> > Please describe a way to repeat the problem. Please try to provide a\n> > concise reproducible example, if at all possible: \n> > ----------------------------------------------------------------------\n> > [srashd]t-ishii{67} psql -e test < b\n> > QUERY: drop table test;\n> > WARN:Relation test Does Not Exist!\n> > QUERY: create table test (i int4);\n> > QUERY: create index iindex on test using btree(i);\n> > QUERY: begin;\n> > QUERY: insert into test values (100);\n> > QUERY: select * from test;\n> > i\n> > ---\n> > 100\n> > (1 row)\n> > \n> > QUERY: rollback;\n> > QUERY: drop table test;\n> > NOTICE:AbortTransaction and not in in-progress state \n> > NOTICE:AbortTransaction and not in in-progress state \n> > \n> > Note that if I do not make an index, it would be ok.\n> \n> Can someone comment on the cause of the above problem? Is it a bug to\n> add to the TODO list? 
I have verified it still exists in the current\n> sources.\n> \n\nThe message I see in the logs for this is:\n\nERROR: cannot write block 1 of iindex [test] blind\nNOTICE: AbortTransaction and not in in-progress state\nNOTICE: EndTransactionBlock and not inprogress/abort state\n\nVadim, sounds familiar.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sat, 13 Jun 1998 01:31:19 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [BUGS] NOTICE:AbortTransaction and not in in-progress state" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > > QUERY: drop table test;\n> > > WARN:Relation test Does Not Exist!\n> > > QUERY: create table test (i int4);\n> > > QUERY: create index iindex on test using btree(i);\n> > > QUERY: begin;\n> > > QUERY: insert into test values (100);\n\nThere will be dirty heap & index buffers in the pool after insertion ...\n\n> > > QUERY: select * from test;\n> > > i\n> > > ---\n> > > 100\n> > > (1 row)\n> > >\n> > > QUERY: rollback;\n\nThey are still marked as dirty...\n\n> > > QUERY: drop table test;\n\nheap_destroy_with_catalog () calls ReleaseRelationBuffers:\n\n * this function unmarks all the dirty pages of a relation\n * in the buffer pool so that at the end of transaction\n * these pages will not be flushed.\n\nbefore unlinking relation, BUT index_destroy() unlinks index\nand DOESN'T call this func ...\n\n> > > NOTICE:AbortTransaction and not in in-progress state\n> > > NOTICE:AbortTransaction and not in in-progress state\n\nCOMMIT (of drop table) tries to flush all dirty buffers\nfrom pool but there is no index file any more ...\n\n> ERROR: cannot write block 1 of iindex [test] blind\n\nsmgrblindwrt () fails.\n\n> NOTICE: AbortTransaction and not in in-progress state\n> NOTICE: EndTransactionBlock and not inprogress/abort state\n\n...transaction state is IN_COMMIT...\n\nSeems that ReleaseRelationBuffers() should be called by\nindex_destroy() ... 
Note that heap_destroy() also calls\n\n /* ok - flush the relation from the relcache */\n RelationForgetRelation(rid);\n\nat the end...\n\nVadim\n", "msg_date": "Sat, 13 Jun 1998 14:18:05 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUGS] NOTICE:AbortTransaction and not in in-progress state" }, { "msg_contents": "> \n> Bruce Momjian wrote:\n> > \n> > > > QUERY: drop table test;\n> > > > WARN:Relation test Does Not Exist!\n> > > > QUERY: create table test (i int4);\n> > > > QUERY: create index iindex on test using btree(i);\n> > > > QUERY: begin;\n> > > > QUERY: insert into test values (100);\n> \n> There will be dirty heap & index buffers in the pool after insertion ...\n> \n> > > > QUERY: select * from test;\n> > > > i\n> > > > ---\n> > > > 100\n> > > > (1 row)\n> > > >\n> > > > QUERY: rollback;\n> \n> They are still marked as dirty...\n> \n> > > > QUERY: drop table test;\n> \n> heap_destroy_with_catalog () calls ReleaseRelationBuffers:\n> \n> * this function unmarks all the dirty pages of a relation\n> * in the buffer pool so that at the end of transaction\n> * these pages will not be flushed.\n> \n> before unlinking relation, BUT index_destroy() unlinks index\n> and DOESN'T call this func ...\n> \n> > > > NOTICE:AbortTransaction and not in in-progress state\n> > > > NOTICE:AbortTransaction and not in in-progress state\n> \n> COMMIT (of drop table) tries to flush all dirty buffers\n> from pool but there is no index file any more ...\n> \n> > ERROR: cannot write block 1 of iindex [test] blind\n> \n> smgrblindwrt () fails.\n> \n> > NOTICE: AbortTransaction and not in in-progress state\n> > NOTICE: EndTransactionBlock and not inprogress/abort state\n> \n> ...transaction state is IN_COMMIT...\n> \n> Seems that ReleaseRelationBuffers() should be called by\n> index_destroy() ... Note that heap_destroy() also calls\n> \n> /* ok - flush the relation from the relcache */\n> RelationForgetRelation(rid);\n> \n> at the end...\n> \n> Vadim\n> \n\nOK, the following patch fixes the problem. Vadim, I added both function\ncalls to index_destroy and made heap_destroy consistent with\nheap_destroy_with_catalog.\n\n---------------------------------------------------------------------------\n\nIndex: src/backend/catalog/heap.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/catalog/heap.c,v\nretrieving revision 1.48\ndiff -c -r1.48 heap.c\n*** heap.c\t1998/04/27 04:04:47\t1.48\n--- heap.c\t1998/06/13 20:13:18\n***************\n*** 12,18 ****\n * INTERFACE ROUTINES\n *\t\theap_create()\t\t\t- Create an uncataloged heap relation\n *\t\theap_create_with_catalog() - Create a cataloged relation\n! *\t\theap_destroy_with_catalog()\t\t\t- Removes named relation from catalogs\n *\n * NOTES\n *\t this code taken from access/heap/create.c, which contains\n--- 12,18 ----\n * INTERFACE ROUTINES\n *\t\theap_create()\t\t\t- Create an uncataloged heap relation\n *\t\theap_create_with_catalog() - Create a cataloged relation\n! 
*\t\theap_destroy_with_catalog()\t- Removes named relation from catalogs\n *\n * NOTES\n *\t this code taken from access/heap/create.c, which contains\n***************\n*** 1290,1307 ****\n \t * ----------------\n \t */\n \tif (rdesc->rd_rel->relhasindex)\n- \t{\n \t\tRelationRemoveIndexes(rdesc);\n- \t}\n \n \t/* ----------------\n \t *\tremove rules if necessary\n \t * ----------------\n \t */\n \tif (rdesc->rd_rules != NULL)\n- \t{\n \t\tRelationRemoveRules(rid);\n- \t}\n \n \t/* triggers */\n \tif (rdesc->rd_rel->reltriggers > 0)\n--- 1290,1303 ----\n***************\n*** 1347,1355 ****\n \t * ----------------\n \t */\n \tif (!(rdesc->rd_istemp) || !(rdesc->rd_tmpunlinked))\n- \t{\n \t\tsmgrunlink(DEFAULT_SMGR, rdesc);\n! \t}\n \trdesc->rd_tmpunlinked = TRUE;\n \n \tRelationUnsetLockForWrite(rdesc);\n--- 1343,1350 ----\n \t * ----------------\n \t */\n \tif (!(rdesc->rd_istemp) || !(rdesc->rd_tmpunlinked))\n \t\tsmgrunlink(DEFAULT_SMGR, rdesc);\n! \n \trdesc->rd_tmpunlinked = TRUE;\n \n \tRelationUnsetLockForWrite(rdesc);\n***************\n*** 1375,1380 ****\n--- 1370,1376 ----\n \trdesc->rd_tmpunlinked = TRUE;\n \theap_close(rdesc);\n \tRemoveFromTempRelList(rdesc);\n+ \tRelationForgetRelation(rdesc->rd_id);\n }\n \n \nIndex: src/backend/catalog/index.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/catalog/index.c,v\nretrieving revision 1.41\ndiff -c -r1.41 index.c\n*** index.c\t1998/05/09 23:42:59\t1.41\n--- index.c\t1998/06/13 20:13:27\n***************\n*** 1270,1276 ****\n \twhile (tuple = heap_getnext(scan, 0, (Buffer *) NULL),\n \t\t HeapTupleIsValid(tuple))\n \t{\n- \n \t\theap_delete(catalogRelation, &tuple->t_ctid);\n \t}\n \theap_endscan(scan);\n--- 1270,1275 ----\n***************\n*** 1296,1307 ****\n \theap_close(catalogRelation);\n \n \t/*\n! \t * physically remove the file\n \t */\n \tif (FileNameUnlink(relpath(indexRelation->rd_rel->relname.data)) < 0)\n \t\telog(ERROR, \"amdestroyr: unlink: %m\");\n \n \tindex_close(indexRelation);\n }\n \n /* ----------------------------------------------------------------\n--- 1295,1309 ----\n \theap_close(catalogRelation);\n \n \t/*\n! \t * flush cache and physically remove the file\n \t */\n+ \tReleaseRelationBuffers(indexRelation);\n+ \n \tif (FileNameUnlink(relpath(indexRelation->rd_rel->relname.data)) < 0)\n \t\telog(ERROR, \"amdestroyr: unlink: %m\");\n \n \tindex_close(indexRelation);\n+ \tRelationForgetRelation(indexRelation->rd_id);\n }\n \n /* ----------------------------------------------------------------\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Sat, 13 Jun 1998 16:18:36 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [BUGS] NOTICE:AbortTransaction and not in in-progress state" }, { "msg_contents": "Bruce Momjian wrote:\n> > Bruce Momjian wrote:\n> > > \n> > > > > QUERY: drop table test;\n> > > > > WARN:Relation test Does Not Exist!\n> > > > > QUERY: create table test (i int4);\n> > > > > QUERY: create index iindex on test using btree(i);\n> > > > > QUERY: begin;\n> > > > > QUERY: insert into test values (100);\n> > \n> > There will be dirty heap & index buffers in the pool after insertion ...\n> > \n> > > > > QUERY: select * from test;\n> > > > > i\n> > > > > ---\n> > > > > 100\n> > > > > (1 row)\n> > > > >\n> > > > > QUERY: rollback;\n> > \n> > They are still marked as dirty...\n> > \n> > > > > QUERY: drop table test;\n> > \n> > heap_destroy_with_catalog () calls ReleaseRelationBuffers:\n> > \n> > * this function unmarks all the dirty pages of a relation\n> > * in the buffer pool so that at the end of transaction\n> > * these pages will not be flushed.\n> > \n> > before unlinking relation, BUT index_destroy() unlinks index\n> > and DOESN'T call this func ...\n> > \n> > > > > NOTICE:AbortTransaction and not in in-progress state\n> > > > > NOTICE:AbortTransaction and not in in-progress state\n> > \n> > COMMIT (of drop table) tries to flush all dirty buffers\n> > from pool but there is no index file any more ...\n> > \n> > > ERROR: cannot write block 1 of iindex [test] blind\n> > \n> > smgrblindwrt () fails.\n> > \n> > > NOTICE: AbortTransaction and not in in-progress state\n> > > NOTICE: EndTransactionBlock and not inprogress/abort state\n> > \n> > ...transaction state is IN_COMMIT...\n> > \n> > Seems that ReleaseRelationBuffers() should be called by\n> > index_destroy() ... Note that heap_destroy() also calls\n> > \n> > /* ok - flush the relation from the relcache */\n> > RelationForgetRelation(rid);\n> > \n> > at the end...\n> > \n> > Vadim\n> > \n> \n> OK, the following patch fixes the problem. Vadim, I added both function\n> calls to index_destroy and made heap_destroy consistent with\n> heap_destroy_with_catalog.\n> \n> ---------------------------------------------------------------------------\n> \n> Index: src/backend/catalog/heap.c\n> ===================================================================\n> RCS file: /usr/local/cvsroot/pgsql/src/backend/catalog/heap.c,v\n> retrieving revision 1.48\n> diff -c -r1.48 heap.c\n> *** heap.c\t1998/04/27 04:04:47\t1.48\n> --- heap.c\t1998/06/13 20:13:18\n> ***************\n> *** 12,18 ****\n> * INTERFACE ROUTINES\n> *\t\theap_create()\t\t\t- Create an uncataloged heap relation\n> *\t\theap_create_with_catalog() - Create a cataloged relation\n> ! *\t\theap_destroy_with_catalog()\t\t\t- Removes named relation from catalogs\n> *\n> * NOTES\n> *\t this code taken from access/heap/create.c, which contains\n> --- 12,18 ----\n> * INTERFACE ROUTINES\n> *\t\theap_create()\t\t\t- Create an uncataloged heap relation\n> *\t\theap_create_with_catalog() - Create a cataloged relation\n> ! 
*\t\theap_destroy_with_catalog()\t- Removes named relation from catalogs\n> *\n> * NOTES\n> *\t this code taken from access/heap/create.c, which contains\n> ***************\n> *** 1290,1307 ****\n> \t * ----------------\n> \t */\n> \tif (rdesc->rd_rel->relhasindex)\n> - \t{\n> \t\tRelationRemoveIndexes(rdesc);\n> - \t}\n> \n> \t/* ----------------\n> \t *\tremove rules if necessary\n> \t * ----------------\n> \t */\n> \tif (rdesc->rd_rules != NULL)\n> - \t{\n> \t\tRelationRemoveRules(rid);\n> - \t}\n> \n> \t/* triggers */\n> \tif (rdesc->rd_rel->reltriggers > 0)\n> --- 1290,1303 ----\n> ***************\n> *** 1347,1355 ****\n> \t * ----------------\n> \t */\n> \tif (!(rdesc->rd_istemp) || !(rdesc->rd_tmpunlinked))\n> - \t{\n> \t\tsmgrunlink(DEFAULT_SMGR, rdesc);\n> ! \t}\n> \trdesc->rd_tmpunlinked = TRUE;\n> \n> \tRelationUnsetLockForWrite(rdesc);\n> --- 1343,1350 ----\n> \t * ----------------\n> \t */\n> \tif (!(rdesc->rd_istemp) || !(rdesc->rd_tmpunlinked))\n> \t\tsmgrunlink(DEFAULT_SMGR, rdesc);\n> ! \n> \trdesc->rd_tmpunlinked = TRUE;\n> \n> \tRelationUnsetLockForWrite(rdesc);\n> ***************\n> *** 1375,1380 ****\n> --- 1370,1376 ----\n> \trdesc->rd_tmpunlinked = TRUE;\n> \theap_close(rdesc);\n> \tRemoveFromTempRelList(rdesc);\n> + \tRelationForgetRelation(rdesc->rd_id);\n> }\n> \n> \n> Index: src/backend/catalog/index.c\n> ===================================================================\n> RCS file: /usr/local/cvsroot/pgsql/src/backend/catalog/index.c,v\n> retrieving revision 1.41\n> diff -c -r1.41 index.c\n> *** index.c\t1998/05/09 23:42:59\t1.41\n> --- index.c\t1998/06/13 20:13:27\n> ***************\n> *** 1270,1276 ****\n> \twhile (tuple = heap_getnext(scan, 0, (Buffer *) NULL),\n> \t\t HeapTupleIsValid(tuple))\n> \t{\n> - \n> \t\theap_delete(catalogRelation, &tuple->t_ctid);\n> \t}\n> \theap_endscan(scan);\n> --- 1270,1275 ----\n> ***************\n> *** 1296,1307 ****\n> \theap_close(catalogRelation);\n> \n> \t/*\n> ! \t * physically remove the file\n> \t */\n> \tif (FileNameUnlink(relpath(indexRelation->rd_rel->relname.data)) < 0)\n> \t\telog(ERROR, \"amdestroyr: unlink: %m\");\n> \n> \tindex_close(indexRelation);\n> }\n> \n> /* ----------------------------------------------------------------\n> --- 1295,1309 ----\n> \theap_close(catalogRelation);\n> \n> \t/*\n> ! \t * flush cache and physically remove the file\n> \t */\n> + \tReleaseRelationBuffers(indexRelation);\n> + \n> \tif (FileNameUnlink(relpath(indexRelation->rd_rel->relname.data)) < 0)\n> \t\telog(ERROR, \"amdestroyr: unlink: %m\");\n> \n> \tindex_close(indexRelation);\n> + \tRelationForgetRelation(indexRelation->rd_id);\n> }\n> \n> /* ----------------------------------------------------------------\n\n\nTwo comments:\n\n - I notice you getting rid of { } pairs eg:\n\n if (condition)\n {\n dosomething();\n }\n\n becomes\n\n if (condition)\n dosomething();\n\n Is this policy? I prefer to have the braces almost always as I find\n it easier to read, and less error prone if someone adds a statement or an\n else clause, so in most of my patches, I would tend to put braces in.\n If you are busy taking them out simaltaniously, this could get silly.\n\n Btw, I have been badly bit by:\n\n if (condition);\n dosomething();\n\n which turned out to be very hard to see indeed.\n\n\n - I think the bit at line 1295-1309 might want to do all the work before\n the elog. Otherwise the elog leave the buffer cache polluted with buffers\n belonging to a mostly deleted index. 
Eg:\n\n + ReleaseRelationBuffers(indexRelation);\n +\n fname = relpath(indexRelation->rd_rel->relname.data);\n\t status = FileNameUnlink(fname);\n\n index_close(indexRelation);\n + RelationForgetRelation(indexRelation->rd_id);\n\n if (status < 0)\n elog(ERROR, \"amdestroyr: unlink: %m\");\n\n-dg\n \nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Don't worry about people stealing your ideas. If your ideas are any\n good, you'll have to ram them down people's throats.\" -- Howard Aiken\n\n", "msg_date": "Sat, 13 Jun 1998 18:32:16 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [BUGS] NOTICE:AbortTransaction and not in\n\tin-progress state" }, { "msg_contents": "> Two comments:\n> \n> - I notice you getting rid of { } pairs eg:\n> \n> if (condition)\n> {\n> dosomething();\n> }\n> \n> becomes\n> \n> if (condition)\n> dosomething();\n> \n> Is this policy? I prefer to have the braces almost always as I find\n> it easier to read, and less error prone if someone adds a statement or an\n> else clause, so in most of my patches, I would tend to put braces in.\n> If you are busy taking them out simaltaniously, this could get silly.\n\nI think several developers agreed that they were just wasting screen\nspace. The code is large enough without brace noise. I have considered\nwriting a script to remove the single-statement braces, but have not\ndone it yet.\n\nIf people would like to re-vote on this issue, I would be glad to hear\nabout it.\n\n> \n> Btw, I have been badly bit by:\n> \n> if (condition);\n> dosomething();\n> \n> which turned out to be very hard to see indeed.\n\nSure, but braces don't help you either. This is just as legal:\n\n\tif (condition);\n\t{\n\t\tdosomething();\n\t}\n\n\n> \n> \n> - I think the bit at line 1295-1309 might want to do all the work before\n> the elog. Otherwise the elog leave the buffer cache polluted with buffers\n> belonging to a mostly deleted index. Eg:\n> \n> + ReleaseRelationBuffers(indexRelation);\n> +\n> fname = relpath(indexRelation->rd_rel->relname.data);\n> \t status = FileNameUnlink(fname);\n> \n> index_close(indexRelation);\n> + RelationForgetRelation(indexRelation->rd_id);\n> \n> if (status < 0)\n> elog(ERROR, \"amdestroyr: unlink: %m\");\n\nYes, that is true, but I kept the order as used in the rest of the code,\nfiguring the original coder knew better than I do. IMHO, if we get that\n\"amdestroyr\" error, we have bigger problems than an invalid relation\ncache\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sat, 13 Jun 1998 22:35:28 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [BUGS] NOTICE:AbortTransaction and not in\n\tin-progress state" }, { "msg_contents": "> I think several developers agreed that they were just wasting screen\n> space. The code is large enough without brace noise. I have considered\n> writing a script to remove the single-statement braces, but have not\n> done it yet.\n> \n> If people would like to re-vote on this issue, I would be glad to hear\n> about it.\n\nOk, I won't add them. I'm not taking them out if I see them though ;-). \n\n> Sure, but braces don't help you either. 
This is just as legal:\n> \n> \tif (condition);\n> \t{\n> \t\tdosomething();\n> \t}\n\nTrue enough, but I think less likely to happen. \n\n> > - I think the bit at line 1295-1309 might want to do all the work before\n> > the elog. Otherwise the elog leave the buffer cache polluted with buffers\n> > belonging to a mostly deleted index. Eg:\n> > \n> > + ReleaseRelationBuffers(indexRelation);\n> > +\n> > fname = relpath(indexRelation->rd_rel->relname.data);\n> > \t status = FileNameUnlink(fname);\n> > \n> > index_close(indexRelation);\n> > + RelationForgetRelation(indexRelation->rd_id);\n> > \n> > if (status < 0)\n> > elog(ERROR, \"amdestroyr: unlink: %m\");\n> \n> Yes, that is true, but I kept the order as used in the rest of the code,\n> figuring the original coder knew better than I do. IMHO, if we get that\n> \"amdestroyr\" error, we have bigger problems than an invalid relation\n> cache\n\nWell, the code in heap.c calls smgrunlink() without checking the return code.\nsmgrunlink() calls mdunlink() which contains:\n\n if (FileNameUnlink(fname) < 0)\n return (SM_FAIL);\n\nSo heap_destroy does not even through an elog() at all if the FileNameUnlink()\nfails. I think this is actually the right policy since if FileNameUnlink()\nfails the only consequence is that a file is left on the disk (or maybe\nthe unlink failed because it was already gone). The system state (buffers etc)\nand catalogs are consitant with the heap having been destroyed. So not a\nproblem from the database's perspective. \n\nI suggest you change your patch to simply ignore the return code from\nFileNameUnlink(). As in:\n\n ReleaseRelationBuffers(indexRelation);\n\n (void) FileNameUnlink(relpath(indexRelation->rd_rel->relname.data));\n\n index_close(indexRelation);\n RelationForgetRelation(indexRelation->rd_id); \n\nIn this way it will have the same behaviour as heap_destroy...\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468\nInformix Software 300 Lakeside Drive Oakland, CA 94612\n - A child of five could understand this! Fetch me a child of five.\n", "msg_date": "Sat, 13 Jun 1998 19:59:12 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [BUGS] NOTICE:AbortTransaction and not in\n\tin-progress state" }, { "msg_contents": "\nI've been trying to use them in my patches as well, maily for\nconsistency as its something I'd never do myself. I'll gladly\nstop though.\n\nOn Sat, 13 June 1998, at 19:59:12, David Gould wrote:\n\n> Ok, I won't add them. I'm not taking them out if I see them though ;-). \n> \n> > Sure, but braces don't help you either. This is just as legal:\n> > \n> > \tif (condition);\n> > \t{\n> > \t\tdosomething();\n> > \t}\n> \n> True enough, but I think less likely to happen. 
\n", "msg_date": "Sat, 13 Jun 1998 20:07:35 -0700 (PDT)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [BUGS] NOTICE:AbortTransaction and not in\n\tin-progress state" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Index: src/backend/catalog/heap.c\n> ===================================================================\n> ***************\n> *** 1375,1380 ****\n> --- 1370,1376 ----\n> rdesc->rd_tmpunlinked = TRUE;\n> heap_close(rdesc);\n> RemoveFromTempRelList(rdesc);\n> + RelationForgetRelation(rdesc->rd_id);\n\nWe need not in RelationForgetRelation() in heap_destroy().\nLocal relations are handled in other way...\n\nVadim\n", "msg_date": "Sun, 14 Jun 1998 14:35:31 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [BUGS] NOTICE:AbortTransaction and not in\n\tin-progress state" }, { "msg_contents": "> Gentlemen:\n\tThe test of the Irix port of PostgreSQL running with default\noptimization passes all the regression tests the same as before\nexcept that the random number test is now different. The new output is\nincluded below. It appears to be OK, but I would like confirmation.\nThanks.\n\n=============================random.out=================================\n\nQUERY: SELECT count(*) FROM onek;\ncount\n-----\n 1000\n(1 row)\n\nQUERY: SELECT count(*) AS random INTO RANDOM_TBL\n FROM onek WHERE oidrand(onek.oid, 10);\nQUERY: INSERT INTO RANDOM_TBL (random)\n SELECT count(*)\n FROM onek WHERE oidrand(onek.oid, 10);\nQUERY: SELECT random, count(random) FROM RANDOM_TBL\n GROUP BY random HAVING count(random) > 1;\nrandom|count\n------+-----\n(0 rows)\n\nQUERY: SELECT random FROM RANDOM_TBL\n WHERE random NOT BETWEEN 80 AND 120;\nrandom\n------\n 74\n(1 row)\n\n+------------------------------------------+------------------------------+\n| Robert E. Bruccoleri, Ph.D. | Associate Research Professor |\n| phone: 732 235 5796 | Center for Advanced |\n| Fax: 732 235 4850 | Biotechnology and Medicine |\n| email: [email protected] | Rutgers University |\n| URL: http://www.cabm.rutgers.edu/~bruc | 679 Hoes Lane |\n| | Piscataway, NJ 08854-5638 |\n+------------------------------------------+------------------------------+\n", "msg_date": "Tue, 29 Sep 1998 11:01:35 -0400 (EDT)", "msg_from": "Robert Bruccoleri <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SGI Port of Postgresql 6.4 snapshot of 09/28/98" }, { "msg_contents": "Thus spake darcy\n> > Here I assume user darcy has usesuper set in pg_shadow. Check\n> > and correct me if I'm wrong. The superuser flag is set if you\n> > allow darcy to create users on createuser time.\n> \n> Correct again. I half suspected something like this. Perhaps the\n> prompt in createuser should be changed to reflect that the user is\n> being granted full superuser privileges rather than just being able\n> to create more users.\n\nShould I send in the (trivial) diffs to effect this change? I figure\nto ask the following.\n\n \"Does user \"x\" have superuser privileges? (y/n)\"\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Tue, 13 Oct 1998 13:13:02 -0400 (EDT)", "msg_from": "[email protected] (D'Arcy J.M. Cain)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Permissions not working?" 
}, { "msg_contents": "> > I strongly suggest patching this before 6.5 ...\n>\n> No comment other that \"sorry\".\n\n Mark: Should I commit the fix on regress.sh?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Mon, 14 Jun 1999 22:14:03 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] regress.sh" }, { "msg_contents": "Thus spake Bruce Momjian\n> If anyone was concerned about our bug database being visible and giving\n> the impression we don't fix any bugs, see this URL:\n> \n> \thttp://www.isthisthingon.org/nisca/postgres.html\n\nJeez, Louise. Talk about a blaming the tools because you don't know\nanything about database design. I mean, his biggest complaint is that\nPostgreSQL makes it hard (not impossible as he implies) to change the\nschema. Perhaps that is because it was written by GOOD database designers\nwho don't have to change their schema every other week and so that issue\nhasn't been a squeaky wheel.\n\nI can't believe that anyone important is listening to this guy.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Tue, 21 Aug 2001 07:24:57 -0400 (EDT)", "msg_from": "[email protected] (D'Arcy J.M. Cain)", "msg_from_op": false, "msg_subject": "Re: Link to bug webpage" }, { "msg_contents": "Thus spake Bruce Momjian\n> > Unless someone has something they are sitting on, I'd like to wrap up a\n> > 7.2b2 this afternoon, and do a proper release announcement for it like\n> > didn't happen for 7.2b1 ...\n> \n> I have been working with Tom on some pgindent issues and have made\n> slight improvements to the script. Because we are early in beta and no\n> one has outstanding patches, I would like to run it again and commit the\n> changes. It should improve variables defined as structs and alignment\n> of include/catalog/*.h files.\n\nI have a change I would like to discuss. It doesn't change the code\nbase, only the build system. The patches I would like to commit follow\nthis text. The only thing it does is create a config option to bytecode\ncompile the Python modules. 
It also cleans up the install in the Makefile\na little bit.\n\nIndex: configure.in\n===================================================================\nRCS file: /cvsroot/pgsql/configure.in,v\nretrieving revision 1.150\ndiff -u -r1.150 configure.in\n--- configure.in\t2001/10/25 13:02:01\t1.150\n+++ configure.in\t2001/11/06 09:09:50\n@@ -398,6 +398,18 @@\n AC_MSG_RESULT([$with_python])\n AC_SUBST(with_python)\n \n+# If python is enabled (above), then optionally byte-compile the modules.\n+AC_MSG_CHECKING([whether to byte-compile Python modules])\n+if test \"$with_python\" = yes; then\n+ PGAC_ARG_BOOL(with, python_compile, no,\n+ [ --with-python-compile byte-compile modules if Python is enabled])\n+else\n+ with_python_compile=no\n+fi\n+AC_MSG_RESULT([$with_python_compile])\n+AC_SUBST([with_python_compile])\n+\n+\n #\n # Optionally build the Java/JDBC tools\n #\nIndex: src/Makefile.global.in\n===================================================================\nRCS file: /cvsroot/pgsql/src/Makefile.global.in,v\nretrieving revision 1.140\ndiff -u -r1.140 Makefile.global.in\n--- src/Makefile.global.in\t2001/10/13 15:24:23\t1.140\n+++ src/Makefile.global.in\t2001/11/06 09:09:54\n@@ -123,6 +123,7 @@\n with_java\t= @with_java@\n with_perl\t= @with_perl@\n with_python\t= @with_python@\n+with_python_compile\t= @with_python_compile@\n with_tcl\t= @with_tcl@\n with_tk\t\t= @with_tk@\n enable_odbc\t= @enable_odbc@\nIndex: src/interfaces/python/GNUmakefile\n===================================================================\nRCS file: /cvsroot/pgsql/src/interfaces/python/GNUmakefile,v\nretrieving revision 1.11\ndiff -u -r1.11 GNUmakefile\n--- src/interfaces/python/GNUmakefile\t2001/08/24 14:07:50\t1.11\n+++ src/interfaces/python/GNUmakefile\t2001/11/06 09:10:00\n@@ -19,10 +19,23 @@\n \n override CPPFLAGS := -I$(libpq_srcdir) $(CPPFLAGS) $(python_includespec)\n \n-all: all-lib\n+PY_SCRIPTS = pg.py pgdb.py\n+ifeq ($(with_python_compile), yes)\n+PY_COMPILED_SCRIPTS = $(PY_SCRIPTS:%.py=%.pyc) $(PY_SCRIPTS:%.py=%.pyo)\n+else\n+PY_COMPILED_SCRIPTS =\n+endif\n \n+all: all-lib $(PY_COMPILED_SCRIPTS)\n+\n all-lib: libpq-all\n \n+%.pyc: %.py\n+\tpython -c \"import py_compile; py_compile.compile(\\\"$<\\\")\"\n+\n+%.pyo: %.py\n+\tpython -O -c \"import py_compile; py_compile.compile(\\\"$<\\\")\"\n+\n .PHONY: libpq-all\n libpq-all:\n \t$(MAKE) -C $(libpq_builddir) all\n@@ -37,12 +50,11 @@\n \t@if test -w $(DESTDIR)$(python_moduleexecdir) && test -w $(DESTDIR)$(python_moduledir); then \\\n \t echo \"$(INSTALL_SHLIB) $(shlib) $(DESTDIR)$(python_moduleexecdir)/_pgmodule$(DLSUFFIX)\"; \\\n \t $(INSTALL_SHLIB) $(shlib) $(DESTDIR)$(python_moduleexecdir)/_pgmodule$(DLSUFFIX); \\\n-\t\\\n-\t echo \"$(INSTALL_DATA) $(srcdir)/pg.py $(DESTDIR)$(python_moduledir)/pg.py\"; \\\n-\t $(INSTALL_DATA) $(srcdir)/pg.py $(DESTDIR)$(python_moduledir)/pg.py; \\\n \t\\\n-\t echo \"$(INSTALL_DATA) $(srcdir)/pgdb.py $(DESTDIR)$(python_moduledir)/pgdb.py\"; \\\n-\t $(INSTALL_DATA) $(srcdir)/pgdb.py $(DESTDIR)$(python_moduledir)/pgdb.py; \\\n+\t for i in $(PY_SCRIPTS) $(PY_COMPILED_SCRIPTS); do \\\n+\t\techo $(INSTALL_DATA) $$i $(python_moduledir); \\\n+\t\t$(INSTALL_DATA) $$i $(python_moduledir); \\\n+\t done \\\n \telse \\\n \t $(install-warning-msg); \\\n \tfi\n\n-- \nD'Arcy J.M. 
Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Tue, 6 Nov 2001 04:12:56 -0500 (EST)", "msg_from": "[email protected] (D'Arcy J.M. Cain)", "msg_from_op": false, "msg_subject": "Re: Proposal: 7.2b2 today" }, { "msg_contents": "\nOkay, I heard a 'yelp' from Tom concerning hte pgindent stuff, so I\nhaven't done beta2 up ... can someoen comment on this, as to whether we\ncan get it in, before I throw together beta2?\n\nTom ... are/were you okay with Bruce's last pgindent run?\n\n\n\nOn Tue, 6 Nov 2001, D'Arcy J.M. Cain wrote:\n\n> Thus spake Bruce Momjian\n> > > Unless someone has something they are sitting on, I'd like to wrap up a\n> > > 7.2b2 this afternoon, and do a proper release announcement for it like\n> > > didn't happen for 7.2b1 ...\n> >\n> > I have been working with Tom on some pgindent issues and have made\n> > slight improvements to the script. Because we are early in beta and no\n> > one has outstanding patches, I would like to run it again and commit the\n> > changes. It should improve variables defined as structs and alignment\n> > of include/catalog/*.h files.\n>\n> I have a change I would like to discuss. It doesn't change the code\n> base, only the build system. The patches I would like to commit follow\n> this text. The only thing it does is create a config option to bytecode\n> compile the Python modules. It also cleans up the install in the Makefile\n> a little bit.\n>\n> Index: configure.in\n> ===================================================================\n> RCS file: /cvsroot/pgsql/configure.in,v\n> retrieving revision 1.150\n> diff -u -r1.150 configure.in\n> --- configure.in\t2001/10/25 13:02:01\t1.150\n> +++ configure.in\t2001/11/06 09:09:50\n> @@ -398,6 +398,18 @@\n> AC_MSG_RESULT([$with_python])\n> AC_SUBST(with_python)\n>\n> +# If python is enabled (above), then optionally byte-compile the modules.\n> +AC_MSG_CHECKING([whether to byte-compile Python modules])\n> +if test \"$with_python\" = yes; then\n> + PGAC_ARG_BOOL(with, python_compile, no,\n> + [ --with-python-compile byte-compile modules if Python is enabled])\n> +else\n> + with_python_compile=no\n> +fi\n> +AC_MSG_RESULT([$with_python_compile])\n> +AC_SUBST([with_python_compile])\n> +\n> +\n> #\n> # Optionally build the Java/JDBC tools\n> #\n> Index: src/Makefile.global.in\n> ===================================================================\n> RCS file: /cvsroot/pgsql/src/Makefile.global.in,v\n> retrieving revision 1.140\n> diff -u -r1.140 Makefile.global.in\n> --- src/Makefile.global.in\t2001/10/13 15:24:23\t1.140\n> +++ src/Makefile.global.in\t2001/11/06 09:09:54\n> @@ -123,6 +123,7 @@\n> with_java\t= @with_java@\n> with_perl\t= @with_perl@\n> with_python\t= @with_python@\n> +with_python_compile\t= @with_python_compile@\n> with_tcl\t= @with_tcl@\n> with_tk\t\t= @with_tk@\n> enable_odbc\t= @enable_odbc@\n> Index: src/interfaces/python/GNUmakefile\n> ===================================================================\n> RCS file: /cvsroot/pgsql/src/interfaces/python/GNUmakefile,v\n> retrieving revision 1.11\n> diff -u -r1.11 GNUmakefile\n> --- src/interfaces/python/GNUmakefile\t2001/08/24 14:07:50\t1.11\n> +++ src/interfaces/python/GNUmakefile\t2001/11/06 09:10:00\n> @@ -19,10 +19,23 @@\n>\n> override CPPFLAGS := -I$(libpq_srcdir) $(CPPFLAGS) $(python_includespec)\n>\n> -all: all-lib\n> +PY_SCRIPTS = pg.py pgdb.py\n> +ifeq ($(with_python_compile), yes)\n> 
+PY_COMPILED_SCRIPTS = $(PY_SCRIPTS:%.py=%.pyc) $(PY_SCRIPTS:%.py=%.pyo)\n> +else\n> +PY_COMPILED_SCRIPTS =\n> +endif\n>\n> +all: all-lib $(PY_COMPILED_SCRIPTS)\n> +\n> all-lib: libpq-all\n>\n> +%.pyc: %.py\n> +\tpython -c \"import py_compile; py_compile.compile(\\\"$<\\\")\"\n> +\n> +%.pyo: %.py\n> +\tpython -O -c \"import py_compile; py_compile.compile(\\\"$<\\\")\"\n> +\n> .PHONY: libpq-all\n> libpq-all:\n> \t$(MAKE) -C $(libpq_builddir) all\n> @@ -37,12 +50,11 @@\n> \t@if test -w $(DESTDIR)$(python_moduleexecdir) && test -w $(DESTDIR)$(python_moduledir); then \\\n> \t echo \"$(INSTALL_SHLIB) $(shlib) $(DESTDIR)$(python_moduleexecdir)/_pgmodule$(DLSUFFIX)\"; \\\n> \t $(INSTALL_SHLIB) $(shlib) $(DESTDIR)$(python_moduleexecdir)/_pgmodule$(DLSUFFIX); \\\n> -\t\\\n> -\t echo \"$(INSTALL_DATA) $(srcdir)/pg.py $(DESTDIR)$(python_moduledir)/pg.py\"; \\\n> -\t $(INSTALL_DATA) $(srcdir)/pg.py $(DESTDIR)$(python_moduledir)/pg.py; \\\n> \t\\\n> -\t echo \"$(INSTALL_DATA) $(srcdir)/pgdb.py $(DESTDIR)$(python_moduledir)/pgdb.py\"; \\\n> -\t $(INSTALL_DATA) $(srcdir)/pgdb.py $(DESTDIR)$(python_moduledir)/pgdb.py; \\\n> +\t for i in $(PY_SCRIPTS) $(PY_COMPILED_SCRIPTS); do \\\n> +\t\techo $(INSTALL_DATA) $$i $(python_moduledir); \\\n> +\t\t$(INSTALL_DATA) $$i $(python_moduledir); \\\n> +\t done \\\n> \telse \\\n> \t $(install-warning-msg); \\\n> \tfi\n>\n> --\n> D'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\n> http://www.druid.net/darcy/ | and a sheep voting on\n> +1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n>\n\n", "msg_date": "Tue, 6 Nov 2001 08:07:14 -0500 (EST)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: 7.2b2 today" }, { "msg_contents": "> \n> Okay, I heard a 'yelp' from Tom concerning hte pgindent stuff, so I\n> haven't done beta2 up ... can someoen comment on this, as to whether we\n> can get it in, before I throw together beta2?\n> \n> Tom ... are/were you okay with Bruce's last pgindent run?\n\nI threw it up on a web site so people could review it. With no\nobjections, I think we are fine for beta2.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 6 Nov 2001 11:16:55 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: 7.2b2 today" }, { "msg_contents": ">> Okay, I heard a 'yelp' from Tom concerning hte pgindent stuff, so I\n>> haven't done beta2 up ... can someoen comment on this, as to whether we\n>> can get it in, before I throw together beta2?\n\nI looked over the diffs, they seem okay.\n\nSince Thomas just committed a horology regress test fix, the regression\ntests are broken on platforms that use variant horology test files.\nGive me an hour to do something about that, and then we can roll beta2.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Nov 2001 12:07:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: 7.2b2 today " }, { "msg_contents": "\nsounds cool to me ...\n\n\nOn Tue, 6 Nov 2001, Tom Lane wrote:\n\n> >> Okay, I heard a 'yelp' from Tom concerning hte pgindent stuff, so I\n> >> haven't done beta2 up ... 
can someoen comment on this, as to whether we\n> >> can get it in, before I throw together beta2?\n>\n> I looked over the diffs, they seem okay.\n>\n> Since Thomas just committed a horology regress test fix, the regression\n> tests are broken on platforms that use variant horology test files.\n> Give me an hour to do something about that, and then we can roll beta2.\n>\n> \t\t\tregards, tom lane\n>\n\n", "msg_date": "Tue, 6 Nov 2001 12:07:51 -0500 (EST)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: 7.2b2 today " }, { "msg_contents": "> \n> sounds cool to me ...\n> \n\nI am sorry about my pgindent run. If I had realized it would hold up\nbeta for a day, I wouldn't have done it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 6 Nov 2001 12:16:04 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: 7.2b2 today" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I am sorry about my pgindent run. If I had realized it would hold up\n> beta for a day, I wouldn't have done it.\n\nWell, we needed the regression fix anyway. Not a problem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Nov 2001 12:29:45 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: 7.2b2 today " }, { "msg_contents": "\nwhat about D'Arcy's python patch?\n\nOn Tue, 6 Nov 2001, Tom Lane wrote:\n\n> Bruce Momjian <[email protected]> writes:\n> > I am sorry about my pgindent run. If I had realized it would hold up\n> > beta for a day, I wouldn't have done it.\n>\n> Well, we needed the regression fix anyway. Not a problem.\n>\n> \t\t\tregards, tom lane\n>\n\n", "msg_date": "Tue, 6 Nov 2001 12:48:14 -0500 (EST)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: 7.2b2 today " }, { "msg_contents": "> \n> what about D'Arcy's python patch?\n\nI think it has to wait for review or 7.3.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 6 Nov 2001 12:49:19 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: 7.2b2 today" }, { "msg_contents": "> Since Thomas just committed a horology regress test fix, the regression\n> tests are broken on platforms that use variant horology test files.\n> Give me an hour to do something about that, and then we can roll beta2.\n\nDone --- we're good to go, I think.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Nov 2001 13:05:20 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: 7.2b2 today " }, { "msg_contents": "> \n> what about D'Arcy's python patch?\n\nI will work up the Open Items list today and we can see what needs to be\nput into beta3, or rc1, or whatever. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 6 Nov 2001 13:23:14 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: 7.2b2 today" }, { "msg_contents": "\nthe patch adds a --with-python switch to confiugure, and appropriate lines\nto the make files to compile it ... I would *like* to see it in beta2\nunless someone can see a glaring error in it that would cause us to have\nto delay beta2 ...\n\nits less then 50 lines ... cna you please take a quick peak at it adn\napply it if you don't see anything jump out at you?\n\nOn Tue, 6 Nov 2001, Bruce Momjian wrote:\n\n> >\n> > what about D'Arcy's python patch?\n>\n> I will work up the Open Items list today and we can see what needs to be\n> put into beta3, or rc1, or whatever. :-)\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n\n", "msg_date": "Tue, 6 Nov 2001 13:43:07 -0500 (EST)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: 7.2b2 today" }, { "msg_contents": "> \n> the patch adds a --with-python switch to confiugure, and appropriate lines\n> to the make files to compile it ... I would *like* to see it in beta2\n> unless someone can see a glaring error in it that would cause us to have\n> to delay beta2 ...\n> \n> its less then 50 lines ... cna you please take a quick peak at it adn\n> apply it if you don't see anything jump out at you?\n\nUnfortunately I don't understand configure.in well enough to have any\ncomment on the code. I recommend we make beta2 and give others time to\nreview it. If it is OK, we can add it later. We do have some other\nopen items like the Libpq signal handling and AIX compile so we are not\ndone applying things yet anyway.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 6 Nov 2001 14:22:48 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: 7.2b2 today" }, { "msg_contents": "\"Marc G. Fournier\" <[email protected]> writes:\n> what about D'Arcy's python patch?\n\nSince it's a configure/build thing, I'd want to see Peter E's reaction\nto it before making a decision. I'd not suggest holding up beta2 for\nit, anyway.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Nov 2001 14:29:22 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: 7.2b2 today " }, { "msg_contents": "D'Arcy J.M. Cain writes:\n\n> I have a change I would like to discuss. It doesn't change the code\n> base, only the build system. The patches I would like to commit follow\n> this text. The only thing it does is create a config option to bytecode\n> compile the Python modules.\n\nWe've seen such a patch before, but I'm still not convinced it works.\nAccording to my knowledge, the pre-compiled bytecode files need to be\ncreated after the source files have been installed in their final\nlocation, because the file name and timestamp is encoded in the compiled\nfile (it's sort of used as a cache file). While this can be accomplished\nwith a different patch, it wouldn't really work when DESTDIR is used\nbecause you'd create a \"dead\" cache file. 
In a sense, this operation is\nlike running ldconfig -- it's outside the scope of the build system.\nPackage managers typically put it in the \"post install\" section.\n\n> +# If python is enabled (above), then optionally byte-compile the modules.\n> +AC_MSG_CHECKING([whether to byte-compile Python modules])\n> +if test \"$with_python\" = yes; then\n> + PGAC_ARG_BOOL(with, python_compile, no,\n> + [ --with-python-compile byte-compile modules if Python is enabled])\n\n--enable\n\n> +else\n> + with_python_compile=no\n> +fi\n> +AC_MSG_RESULT([$with_python_compile])\n> +AC_SUBST([with_python_compile])\n\n> +%.pyc: %.py\n> +\tpython -c \"import py_compile; py_compile.compile(\\\"$<\\\")\"\n> +\n> +%.pyo: %.py\n> +\tpython -O -c \"import py_compile; py_compile.compile(\\\"$<\\\")\"\n> +\n\n$(PYTHON)\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Wed, 7 Nov 2001 02:07:49 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: 7.2b2 today" }, { "msg_contents": "Thus spake Marc G. Fournier\n> the patch adds a --with-python switch to confiugure, and appropriate lines\n> to the make files to compile it ... I would *like* to see it in beta2\n> unless someone can see a glaring error in it that would cause us to have\n> to delay beta2 ...\n> \n> its less then 50 lines ... cna you please take a quick peak at it adn\n> apply it if you don't see anything jump out at you?\n\nI have one minor change to it. Where I call \"python\" to bytecode compile\nit should be \"$(PYTHON)\" instead. If you want I can just commit the\nchanges directly.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Wed, 7 Nov 2001 20:50:28 -0500 (EST)", "msg_from": "[email protected] (D'Arcy J.M. Cain)", "msg_from_op": false, "msg_subject": "Re: Proposal: 7.2b2 today" }, { "msg_contents": "* Peter Eisentraut <[email protected]> [011106 20:01]:\n> D'Arcy J.M. Cain writes:\n> > I have a change I would like to discuss. It doesn't change the code\n> > base, only the build system. The patches I would like to commit follow\n> > this text. The only thing it does is create a config option to bytecode\n> > compile the Python modules.\n> \n> We've seen such a patch before, but I'm still not convinced it works.\n> According to my knowledge, the pre-compiled bytecode files need to be\n> created after the source files have been installed in their final\n> location, because the file name and timestamp is encoded in the compiled\n> file (it's sort of used as a cache file). While this can be accomplished\n> with a different patch, it wouldn't really work when DESTDIR is used\n> because you'd create a \"dead\" cache file. In a sense, this operation is\n> like running ldconfig -- it's outside the scope of the build system.\n> Package managers typically put it in the \"post install\" section.\n\nDo you have a reference for this? I tried looking for one but the only\nthing I could find was http://www.python.org/doc/1.6/dist/built-dist.html\nwhich suggests to me that they can be compiled before shipping which of\ncourse certainly involves moving them. In any case NetBSD does this\npatch before building and everything works there.\n\n-- \nD'Arcy J.M. 
Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Wed, 20 Mar 2002 13:25:43 -0500", "msg_from": "\"D'Arcy J.M. Cain\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: 7.2b2 today" }, { "msg_contents": "D'Arcy J.M. Cain writes:\n\n> Do you have a reference for this? I tried looking for one but the only\n> thing I could find was http://www.python.org/doc/1.6/dist/built-dist.html\n> which suggests to me that they can be compiled before shipping which of\n> course certainly involves moving them. In any case NetBSD does this\n> patch before building and everything works there.\n\nMy reference is Automake. They go out of their way to compile the Python\nfiles at the right time. We could use this as a model.\n\nIt's easy to determine that the time stamp appears to be encoded into the\ncompiled output file:\n\n$ cat test.py\nprint \"test\"\n$ python -c 'import py_compile; py_compile.compile(\"test.py\", \"test.pyc\")'\n$ md5sum test.pyc\na0e690271636fcbf067db628f9c7d0c3 test.pyc\n$ python -c 'import py_compile; py_compile.compile(\"test.py\", \"test.pyc\")'\n$ md5sum test.pyc\na0e690271636fcbf067db628f9c7d0c3 test.pyc\n$ touch test.py\n$ python -c 'import py_compile; py_compile.compile(\"test.py\", \"test.pyc\")'\n$ md5sum test.pyc\n1d78ae79994b102c89a14a2dd2addc55 test.pyc\n\nWhat you need to do is to create the compiled files after you have\ninstalled the original. Binary packaging probably preserves the time\nstamps of the files, so that shouldn't be a problem. I withdraw that part\nof the objection.\n\nAlso, I think if we add this feature, let's just make it the default and\nnot add another configure option for it.\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Wed, 20 Mar 2002 14:47:59 -0500 (EST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: 7.2b2 today" }, { "msg_contents": "> As far as I can tell, not capitalizing the first letter after a dash\n> is the only inconsistency with Oracle's implementation of this function.\n\nWrong again. Oracle also capitalizes the first letter after a comma, \nsemicolon, colon, period, and both a single and double quote. (And that's \nall I've tested so far.)\n\nSo, I guess I need to write a program to test all possible combinations\nto see how incompatible the function is.\n\nMaking this change will be a larger patch than I had initially anticipated.\n\nThat also brings into question whether this is really a bugfix or a\nspecification change, a question which is relevant since we're in the \nfeature freeze for 7.4.\n--\nMike Nolan \n", "msg_date": "Wed, 9 Jul 2003 13:44:44 -0500 (CDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: initcap incompatibility issue" }, { "msg_contents": "On Wed, 9 Jul 2003 [email protected] wrote:\n\n> > As far as I can tell, not capitalizing the first letter after a dash\n> > is the only inconsistency with Oracle's implementation of this function.\n> \n> Wrong again. Oracle also capitalizes the first letter after a comma, \n> semicolon, colon, period, and both a single and double quote. 
(And that's \n> all I've tested so far.)\n> \n> So, I guess I need to write a program to test all possible combinations\n> to see how incompatible the function is.\n> \n> Making this change will be a larger patch than I had initially anticipated.\n> \n> That also brings into question whether this is really a bugfix or a\n> specification change, a question which is relevant since we're in the \n> feature freeze for 7.4.\n\nIt sounds like Oracle is simply regexing for anything that ISN'T a letter \nto initcap right after it. If that's the case, you could just regex too.\n\n", "msg_date": "Wed, 9 Jul 2003 13:42:50 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: initcap incompatibility issue" }, { "msg_contents": "\"scott.marlowe\" <[email protected]> writes:\n> On Wed, 9 Jul 2003 [email protected] wrote:\n>> Wrong again. Oracle also capitalizes the first letter after a comma, \n>> semicolon, colon, period, and both a single and double quote. (And that's \n>> all I've tested so far.)\n\n> It sounds like Oracle is simply regexing for anything that ISN'T a letter \n> to initcap right after it. If that's the case, you could just regex too.\n\nOr more likely, use the appropriate ctype.h function (isalpha, probably).\n\n>> That also brings into question whether this is really a bugfix or a\n>> specification change, a question which is relevant since we're in the \n>> feature freeze for 7.4.\n\nAFAIK, our specification for this function is \"be like Oracle\", so it's\na bug fix and fair game for 7.4. Of course, the sooner you get it in\nthe more likely we'll see it that way ;-). Later in beta, only critical\nbugfixes will be accepted, and this one surely ain't very critical.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 09 Jul 2003 22:31:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: initcap incompatibility issue " }, { "msg_contents": "> > It sounds like Oracle is simply regexing for anything that ISN'T a letter \n> > to initcap right after it. If that's the case, you could just regex too.\n> \n> Or more likely, use the appropriate ctype.h function (isalpha, probably).\n\nHaving tested it, Oracle capitalizes after all non-alphanumeric characters,\nso !isalnum() is the appropriate function. (That makes it a one-line \npatch on 7.3.3, which I've already tested.)\n\n> AFAIK, our specification for this function is \"be like Oracle\", so it's\n> a bug fix and fair game for 7.4. Of course, the sooner you get it in\n> the more likely we'll see it that way ;-). Later in beta, only critical\n> bugfixes will be accepted, and this one surely ain't very critical.\n\nNow if I can just get CVS working on Redhat 8 and remember how to build\na patch, even a one-liner. :-)\n--\nMike Nolan\n \n", "msg_date": "Fri, 11 Jul 2003 01:34:48 -0500 (CDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: initcap incompatibility issue" } ]
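A minimal standalone sketch of the rule the thread above converges on: upper-case any letter that follows a non-alphanumeric character and lower-case the rest. It only illustrates the !isalnum() behaviour; the function name and layout below are invented for the example and are not the backend's actual initcap code.

#include <ctype.h>

/* Upper-case each letter that follows a non-alphanumeric character,
 * lower-case every other letter; digits and punctuation pass through
 * unchanged.  Modifies the string in place. */
void
initcap_like(char *s)
{
    int prev_alnum = 0;     /* was the previous character alphanumeric? */

    for (; *s; s++)
    {
        if (isalpha((unsigned char) *s))
            *s = prev_alnum ? tolower((unsigned char) *s)
                            : toupper((unsigned char) *s);
        prev_alnum = (isalnum((unsigned char) *s) != 0);
    }
}

Under this rule "o'reilly-smith, 2nd ed." becomes "O'Reilly-Smith, 2nd Ed.": letters after quotes, dashes and commas are capitalized, while a letter following a digit is not, which is the behaviour the !isalnum() test gives.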
[ { "msg_contents": "Defining CUSTOM_LDFLAGS in Makefile.custom does not have any\neffect on the compilation. I found that LDFLAGS is hardcoded\nin Makefile.global. Perhaps it would be a good idea to change\nthis? I had to edit Makefile.global and add '-pg' to LDFLAGS\nmanually.\n\nI thought it was about time to run a profile on the backend...\n\n/* m */\n", "msg_date": "Tue, 20 Jan 1998 14:43:21 +0100", "msg_from": "Mattias Kregert <[email protected]>", "msg_from_op": true, "msg_subject": "Compiling postgresql with profiling" } ]
[ { "msg_contents": "PostgreSQL seem to have a lot of names;\n Postgres 95, Postgres, Pg, Pgsql ... All these names are used in\n FAQ, filenames, docs, installation info, messages etc.\n\nExamples:\n The backend executable is 'postgres', why not 'postgresql'?\n INSTALL: \"User postgres is the Postgres superuser\"?\n\nI think it would be a good idea to use only \"PostgreSQL\" in all\ndocs, file names and so on, and \"pgsql\" as the official abbrev.\n\nThis is one of the things new users notice and find strange.\nI know, because I did, and people I know did it too.\n\nAnother strange thing is the name \"postmaster\". Someone doing a 'ps'\nseeing this would probably think it is part of the email system.\nA better way would be to use 'postgresql' only, or perhaps with a\n'--master' option.\n\n/* m */\n", "msg_date": "Tue, 20 Jan 1998 15:13:37 +0100", "msg_from": "Mattias Kregert <[email protected]>", "msg_from_op": true, "msg_subject": "Small changes for the \"no excuses\" release" }, { "msg_contents": "> \n> PostgreSQL seem to have a lot of names;\n> Postgres 95, Postgres, Pg, Pgsql ... All these names are used in\n> FAQ, filenames, docs, installation info, messages etc.\n> \n> Examples:\n> The backend executable is 'postgres', why not 'postgresql'?\n> INSTALL: \"User postgres is the Postgres superuser\"?\n> \n> I think it would be a good idea to use only \"PostgreSQL\" in all\n> docs, file names and so on, and \"pgsql\" as the official abbrev.\n> \n> This is one of the things new users notice and find strange.\n> I know, because I did, and people I know did it too.\n\nAdded to TODO list.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sun, 15 Mar 1998 21:31:32 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Small changes for the \"no excuses\" release" }, { "msg_contents": "> > PostgreSQL seem to have a lot of names;\n> > Postgres 95, Postgres, Pg, Pgsql ... All these names are used in\n> > FAQ, filenames, docs, installation info, messages etc.\n> >\n> > Examples:\n> > The backend executable is 'postgres', why not 'postgresql'?\n> > INSTALL: \"User postgres is the Postgres superuser\"?\n> >\n> > I think it would be a good idea to use only \"PostgreSQL\" in all\n> > docs, file names and so on, and \"pgsql\" as the official abbrev.\n> >\n> > This is one of the things new users notice and find strange.\n> > I know, because I did, and people I know did it too.\n> \n> Added to TODO list.\n\nFrankly, the voluminous docs, many adapted from the originals, seem to\nread better using \"Postgres\" rather than \"PostgreSQL\" or \"Postgres95\". I\nchanged 'em all after defining what each is in the introduction. Would\nbe a good bit of work to change them back, particularly since folks\naren't volunteering in droves for work on documentation...\n\n - Tom\n", "msg_date": "Mon, 16 Mar 1998 06:04:11 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Small changes for the \"no excuses\" release" }, { "msg_contents": "Thomas G. Lockhart wrote:\n>\n> Frankly, the voluminous docs, many adapted from the originals, seem to\n> read better using \"Postgres\" rather than \"PostgreSQL\" or \"Postgres95\". I\n> changed 'em all after defining what each is in the introduction. 
Would\n> be a good bit of work to change them back, particularly since folks\n> aren't volunteering in droves for work on documentation...\n> \n> - Tom\n\nWith sed doing all the work, \"a good bit\" would be something like\ntyping a couple of lines and hitting ENTER.\n\n/* m */\n", "msg_date": "Mon, 16 Mar 1998 13:20:03 +0100", "msg_from": "Mattias Kregert <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [DOCS] Re: [HACKERS] Small changes for the \"no excuses\" release" }, { "msg_contents": "\nOn Mon, 16 Mar 1998, Thomas G. Lockhart wrote:\n\n> > > PostgreSQL seem to have a lot of names;\n> > > Postgres 95, Postgres, Pg, Pgsql ... All these names are used in\n> > > FAQ, filenames, docs, installation info, messages etc.\n> > >\n> > > Examples:\n> > > The backend executable is 'postgres', why not 'postgresql'?\n> > > INSTALL: \"User postgres is the Postgres superuser\"?\n> > >\n> > > I think it would be a good idea to use only \"PostgreSQL\" in all\n> > > docs, file names and so on, and \"pgsql\" as the official abbrev.\n> > >\n> > > This is one of the things new users notice and find strange.\n> > > I know, because I did, and people I know did it too.\n> > \n> > Added to TODO list.\n> \n> Frankly, the voluminous docs, many adapted from the originals, seem to\n> read better using \"Postgres\" rather than \"PostgreSQL\" or \"Postgres95\". I\n> changed 'em all after defining what each is in the introduction. Would\n> be a good bit of work to change them back, particularly since folks\n> aren't volunteering in droves for work on documentation...\n\nWell, I am working on the JDBC docs at the moment (when I can fit it in\nbetween everything else). I'm just about to plan some tutorials to go in\nas well.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n\n", "msg_date": "Wed, 18 Mar 1998 07:37:44 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Re: [HACKERS] Small changes for the \"no excuses\" release" } ]
[ { "msg_contents": "\nHello,\n\nAfter installing the latest cvs tree this morning, I cannot use \ncreateuser and createdb. They break when trying to run psql. I can\nconnect with psql to the template1 database after setting up hba to use\npassword authentication and starting it with the -u option. \n\nDoes the hba still use trust and ident as well? Is there a primer on the\nnew authentication scheme?\n\nI am moving up from 6.1.0 on Linux.\n\nThanks,\n\n\n-James\n\n", "msg_date": "Tue, 20 Jan 1998 14:30:36 -0500 (EST)", "msg_from": "James Hughes <[email protected]>", "msg_from_op": true, "msg_subject": "Authentication Woes" }, { "msg_contents": "On Tue, 20 Jan 1998, James Hughes wrote:\n\n> \n> Hello,\n> \n> After installing the latest cvs tree this morning, I cannot use \n> createuser and createdb. They break when trying to run psql. I can\n> connect with psql to the template1 database after setting up hba to use\n> password authentication and starting it with the -u option. \n> \n> Does the hba still use trust and ident as well? Is there a primer on the\n> new authentication scheme?\n> \n> I am moving up from 6.1.0 on Linux.\n\n\tv6.1.0 would have used TCP/IP for communications exclusively...\nv6.3 moved to using Unix Domain Sockets as default, with TCP/IP disabled\nby default. To \"mirror\" the old behavior, add the -i option to your\nstartup script and youshould be okay...\n\n\n", "msg_date": "Tue, 20 Jan 1998 14:43:51 -0500 (EST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Authentication Woes" }, { "msg_contents": "\n\nOn Tue, 20 Jan 1998, The Hermit Hacker wrote:\n\n> \n> \tv6.1.0 would have used TCP/IP for communications exclusively...\n> v6.3 moved to using Unix Domain Sockets as default, with TCP/IP disabled\n> by default. To \"mirror\" the old behavior, add the -i option to your\n> startup script and youshould be okay...\n> \n> \n\nOK, I should have said that the only way I can run psql is after making\nan entry in pg_hba.conf to enable passwords, starting the postmaster\nwith the \"-i\" option then using the \"-u\" option with psql. The\ncreateuser and createdb scripts will not run regardless.\n\nMaybe I have other problems?? I am going to dig a little bit deeper :)\n\n\n-James\n\n", "msg_date": "Tue, 20 Jan 1998 15:13:29 -0500 (EST)", "msg_from": "James Hughes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Authentication Woes" }, { "msg_contents": "On Tue, 20 Jan 1998, James Hughes wrote:\n\n> \n> \n> On Tue, 20 Jan 1998, The Hermit Hacker wrote:\n> \n> > \n> > \tv6.1.0 would have used TCP/IP for communications exclusively...\n> > v6.3 moved to using Unix Domain Sockets as default, with TCP/IP disabled\n> > by default. To \"mirror\" the old behavior, add the -i option to your\n> > startup script and youshould be okay...\n> > \n> > \n> \n> OK, I should have said that the only way I can run psql is after making\n> an entry in pg_hba.conf to enable passwords, starting the postmaster\n> with the \"-i\" option then using the \"-u\" option with psql. The\n> createuser and createdb scripts will not run regardless.\n> \n> Maybe I have other problems?? I am going to dig a little bit deeper :)\n\nI have the same problem with 6.2.1. Authentication using passwords works, \nbut createuser (and probably the other scripts) fail. 'createuser -a \npassword' fails to use -u to actually ask for username/password. 
To \nprevent having to type this 10 times during createuser, you could get it \nfrom the script and then do 'echo \"username\\npassword\" | psql ...'. I \ntried this on another program (one of mine which uses the \nprompt_for_password function from psql) and it works.\n\nMaarten\n\n_____________________________________________________________________________\n| Maarten Boekhold, Faculty of Electrical Engineering TU Delft, NL |\n| Computer Architecture and Digital Technique section |\n| [email protected] |\n-----------------------------------------------------------------------------\n\n", "msg_date": "Wed, 21 Jan 1998 09:35:27 +0100 (MET)", "msg_from": "Maarten Boekhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Authentication Woes" } ]
[ { "msg_contents": "I have completed adding Var.varlevelsup, and have added code to the\nparser to properly set the field. It will allow correlated references\nin the WHERE clause, but not in the target list.\n\nI did not make many changes in the optimizer or executor because I\nbelieved varlevelsup would be zero by the time it got to that point.\nFor example, there are cases where the optimizer calls getrelid(Var *,\nRangeTable), and of course, because we don't have a parentQuery* in the\nQuery, this will ignore the varlevelsup and only look in the current\nrange table.\n\nI did make a few additions of varlevelsup in cases where they were\nchecking for equality or copying Var records. I made changes in all the\nsupport code, like /nodes handling.\n\nLet me know if you need additional changes. makeVar takes a new\nvarlevelsup parameter, and I made changes to all calls.\n\nI have tested the code, and debug output shows varlevelsup being set\ncorrectly.\n\nThe only open item is how to do rewrite. I will check into this later,\nprobably in a week. It will work by Feb 1.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Tue, 20 Jan 1998 17:18:54 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "subselects" } ]
[ { "msg_contents": "\nWe are only going to have subselects in the WHERE clause, not in the\ntarget list, right?\n\nThe standard says we can have them either place, but I didn't think we\nwere implementing the target list subselects.\n\nIs that correct?\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Tue, 20 Jan 1998 22:24:11 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "subselects" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> We are only going to have subselects in the WHERE clause, not in the\n> target list, right?\n> \n> The standard says we can have them either place, but I didn't think we\n> were implementing the target list subselects.\n> \n> Is that correct?\n\nYes, this is right for 6.3. I hope that we'll support subselects in \ntarget list, FROM, etc in future.\n\nBTW, I'm going to implement subselect in (let's say) \"natural\" way -\nwithout substitution of parent query relations into subselect and so on,\nbut by execution of (correlated) subqueries for each upper query row\n(may be with cacheing of results in hash table for better performance).\nSure, this is much more clean way and much more clear how to do this.\nThis seems like SQL-func way, but funcs start/run/stop Executor each time\nwhen called and this breaks performance. \n\nVadim\n", "msg_date": "Wed, 21 Jan 1998 17:10:22 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: subselects" }, { "msg_contents": "> > Is that correct?\n> \n> Yes, this is right for 6.3. I hope that we'll support subselects in \n> target list, FROM, etc in future.\n\nOK.\n\n> \n> BTW, I'm going to implement subselect in (let's say) \"natural\" way -\n> without substitution of parent query relations into subselect and so on,\n> but by execution of (correlated) subqueries for each upper query row\n> (may be with cacheing of results in hash table for better performance).\n> Sure, this is much more clean way and much more clear how to do this.\n> This seems like SQL-func way, but funcs start/run/stop Executor each time\n> when called and this breaks performance. \n\nSure, lets see how it performs. Most correlated subqueries are very\nslow in commercial databases too. I guess I thought you could do the\nwhole subquery, then sort on the correlated columns, which allows quick\naccess to the results, but if the subquery references only a small part\nof the upper query's output, it is quicker to do it your way.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Wed, 21 Jan 1998 10:29:57 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: subselects" } ]
[ { "msg_contents": "OK, I now have rewrites working with subselects. At least, I think it\nwill work. No way to test it yet.\n\nIt basically rewrites all the subqueries first, then the main query. It\ndoes the lowest queries first. I do this so if a rewrite adds a\nsubquery as part of the rewrite, the new subquery does not get\nprocessed.\n\nFor each query rewritten, I have code to go into each SubLink and recode\nany correlated variables that reference the outer query level I am\nrewriting. Pretty slick.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Tue, 20 Jan 1998 23:21:22 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "subselects" } ]
[ { "msg_contents": "Bruce wrote:\n> I have completed adding Var.varlevelsup, and have added code to the\n> parser to properly set the field. It will allow correlated references\n> in the WHERE clause, but not in the target list.\n\nselect i2.ip1, i1.ip4 from nameip i1 where ip1 = (select ip1 from nameip\ni2);\n 522: Table (i2) not selected in query.\nselect i1.ip4 from nameip i1 where ip1 = (select i1.ip1 from nameip i2);\n 284: A subquery has returned not exactly one row.\nselect i1.ip4 from nameip i1 where ip1 = (select i1.ip1 from nameip i2\nwhere name='zeus');\n 2 row(s) retrieved.\n\nInformix allows correlated references in the target list. It also allows\nsubselects in the target list as in:\nselect i1.ip4, (select i1.ip1 from nameip i2) from nameip i1;\n 284: A subquery has returned not exactly one row.\nselect i1.ip4, (select i1.ip1 from nameip i2 where name='zeus') from\nnameip i1;\n 2 row(s) retrieved.\n\nIs this what you were looking for ?\n\nAndreas\n", "msg_date": "Wed, 21 Jan 1998 09:42:52 +0100", "msg_from": "Zeugswetter Andreas DBT <[email protected]>", "msg_from_op": true, "msg_subject": "Re: subselects" }, { "msg_contents": "> \n> Bruce wrote:\n> > I have completed adding Var.varlevelsup, and have added code to the\n> > parser to properly set the field. It will allow correlated references\n> > in the WHERE clause, but not in the target list.\n> \n> select i2.ip1, i1.ip4 from nameip i1 where ip1 = (select ip1 from nameip\n> i2);\n> 522: Table (i2) not selected in query.\n> select i1.ip4 from nameip i1 where ip1 = (select i1.ip1 from nameip i2);\n> 284: A subquery has returned not exactly one row.\n> select i1.ip4 from nameip i1 where ip1 = (select i1.ip1 from nameip i2\n> where name='zeus');\n> 2 row(s) retrieved.\n> \n> Informix allows correlated references in the target list. It also allows\n> subselects in the target list as in:\n> select i1.ip4, (select i1.ip1 from nameip i2) from nameip i1;\n> 284: A subquery has returned not exactly one row.\n> select i1.ip4, (select i1.ip1 from nameip i2 where name='zeus') from\n> nameip i1;\n> 2 row(s) retrieved.\n> \n> Is this what you were looking for ?\n> \n> Andreas\n> \n> \n\nYes, I know other engines support subqueries and references in the\ntarget list. I want to know if we are going to do that for 6.3. \nPersonally, I have never seen much use for it.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Wed, 21 Jan 1998 10:09:37 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: subselects" }, { "msg_contents": "On Wed, 21 Jan 1998, Bruce Momjian wrote:\n\n> > \n> > Bruce wrote:\n> > > I have completed adding Var.varlevelsup, and have added code to the\n> > > parser to properly set the field. It will allow correlated references\n> > > in the WHERE clause, but not in the target list.\n> > \n> > select i2.ip1, i1.ip4 from nameip i1 where ip1 = (select ip1 from nameip\n> > i2);\n> > 522: Table (i2) not selected in query.\n> > select i1.ip4 from nameip i1 where ip1 = (select i1.ip1 from nameip i2);\n> > 284: A subquery has returned not exactly one row.\n> > select i1.ip4 from nameip i1 where ip1 = (select i1.ip1 from nameip i2\n> > where name='zeus');\n> > 2 row(s) retrieved.\n> > \n> > Informix allows correlated references in the target list. 
It also allows\n> > subselects in the target list as in:\n> > select i1.ip4, (select i1.ip1 from nameip i2) from nameip i1;\n> > 284: A subquery has returned not exactly one row.\n> > select i1.ip4, (select i1.ip1 from nameip i2 where name='zeus') from\n> > nameip i1;\n> > 2 row(s) retrieved.\n> > \n> > Is this what you were looking for ?\n> > \n> > Andreas\n> > \n> > \n> \n> Yes, I know other engines support subqueries and references in the\n> target list. I want to know if we are going to do that for 6.3. \n> Personally, I have never seen much use for it.\n\n\tIf its easy to add in the next couple of days, sure, go for\nit...but can someone explain to me *why* you would use a subselect in the\ntarget list? I've actually never seen that before :9\n\n\n", "msg_date": "Wed, 21 Jan 1998 10:18:29 -0500 (EST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: subselects" }, { "msg_contents": "> > Yes, I know other engines support subqueries and references in the\n> > target list. I want to know if we are going to do that for 6.3. \n> > Personally, I have never seen much use for it.\n> \n> \tIf its easy to add in the next couple of days, sure, go for\n> it...but can someone explain to me *why* you would use a subselect in the\n> target list? I've actually never seen that before :9\n\nI have no idea why someone would want to do that. I have enough trouble\nfiguring out how the engine is going to execute normal queries, let\nalone strange ones like that.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Wed, 21 Jan 1998 11:03:19 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: subselects" } ]
[ { "msg_contents": "> > In all tests crash_me crashed backend server - postgres but postmaster\n> > still run and allow to work with all dbs. I run crash_me several times\n> > with postmaster started once upon boot time.\n> \n> \tIs there any indication as to why the crash?\n\nI can tell indirect only using limits/pg.cfg where crash_me logs all\nresults.\nEvery time it runs it checks this file to set limits in the test to avoid\ncrash.\nI think the best way to look PGSQL source code and set up checkups\nfor these limits. So :\n\n1. output:\nquery size: Broken pipe\n\npg.cfg:\nquery_size=4096 # query size\n\n2. output:\nconstant string size in where:\nFatal error: Can't check 'constant string size in where' for limit=1\nerror: PQexec() -- There is no connection to the backend.\n\npg.cfg:\nwhere_string_size=4062 # constant string size in where\n\n3. output:\ntables in join:\nFatal error: Can't check 'tables in join' for limit=1\nerror: PQexec() -- There is no connection to the backend.\n\npg.cfg:\njoin_tables=0 # tables in join\n\n4. output:\ntable name length:\nFatal error: Can't check 'table name length' for limit=1\nerror: PQexec() -- There is no connection to the backend.\n\npg.cfg:\nmax_table_name=31 # table name length\n\n5. output:\nmax table row length (without blobs):\nFatal error: Can't check 'max table row length (without blobs)' for limit=1\nerror: PQexec() -- There is no connection to the backend.\n\npg.cfg:\nmax_row_length=7969 # max table row length (without blobs)\n\nWhen I run tests first time I crashed at \"column name length: 31\" also.\nNow it runs OK - 8-[.\n\nAnd I cann't indicate couse of the problem with \"index length\" - it run out\nof memory.\n\nIgor Sysoev\n\n", "msg_date": "Wed, 21 Jan 1998 16:34:25 +0300", "msg_from": "\"Igor Sysoev\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [QUESTIONS] Business cases" } ]
[ { "msg_contents": "> We are only going to have subselects in the WHERE clause, not in the\n> target list, right?\n> \n> The standard says we can have them either place, but I didn't think we\n> were implementing the target list subselects.\n> \n> Is that correct?\n\nWhat about the HAVING clause? Currently not in, but someone here wants\nto take a stab at it.\n\nDoesn't seem that tough...loops over the tuples returned from the group\nby node and checks the expression such as \"x > 5\" or \"x = (subselect)\".\n\nThe cost analysis in the optimizer could be tricky come to think of it.\nIf a subselect has a HAVING, would have to have a formula to determine\nthe selectiveness. Hmmm...\n\ndarrenk\n", "msg_date": "Wed, 21 Jan 1998 09:13:05 -0500", "msg_from": "[email protected] (Darren King)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] subselects" }, { "msg_contents": "> \n> > We are only going to have subselects in the WHERE clause, not in the\n> > target list, right?\n> > \n> > The standard says we can have them either place, but I didn't think we\n> > were implementing the target list subselects.\n> > \n> > Is that correct?\n> \n> What about the HAVING clause? Currently not in, but someone here wants\n> to take a stab at it.\n> \n> Doesn't seem that tough...loops over the tuples returned from the group\n> by node and checks the expression such as \"x > 5\" or \"x = (subselect)\".\n> \n> The cost analysis in the optimizer could be tricky come to think of it.\n> If a subselect has a HAVING, would have to have a formula to determine\n> the selectiveness. Hmmm...\n\nCode is in the grammar, but have to add Aggreg code to parser and\noptimizer needs a qual restriction on a Aggreg field. You really just\nneed to be able to put a restriction on an aggregate in the WHERE\nclause, but have it evaluated AFTER the GROUP BY.\n\nWell, I have just done lots of work on fixing aggregate issues, so maybe\nI should give it a try, but not for 6.3. Run out of non-business-work\ntime this month. Boss is going to figure it out soon. :-) (Hey, I am\nthe boss.)\n\nGive it a stab. I can help you out. I can even review your patches and\ngive you ideas.\n\nYou will have to enable HAVING in gram.y, and have analyze.c call\ntransformExpr() for the HAVING clause. And have rewrite do the HAVING\nclause just like it does the qual. Then, in the upper optimizer, you\nwill see where aggregates are handled in planner.c. You will need to\nput the HAVING restriction above the GROUPBY and AGG nodes, so you can\ndo the restriction AFTER those have been computed. The tricky part is\nthat we only allow aggregates in the target list, so in this case you\nwant an aggregate that is not in the target list.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Wed, 21 Jan 1998 10:56:37 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] subselects" } ]
[ { "msg_contents": "Hi\n\nI find next bug:\nin query \"select a,b,count(*) from q group by a,b;\" postgres do:\na\t,b\t,count(*)\n5\t2\t3\t???\n5\t2\t2\t???\n6\t3\t6\n.................\n\ntable - about 10000-30000 tuples.\n\nPS: i use linux 2.0.33 & solaris 2.5.1\n\n-- \nSY, Serj\n", "msg_date": "Wed, 21 Jan 1998 17:56:50 +0300", "msg_from": "Serj <[email protected]>", "msg_from_op": true, "msg_subject": "group by bug in 6.2.1 & 6.3 -snapshot" } ]
[ { "msg_contents": "> \n> \n> Anyone know what the heap_modifytuple error refers to down below?\n> \n> The error is generated around line 884 of\n> backend/access/common/heaptuple.c, where 'repl[attoff]' != 'r' \n> \n> Haven't had a chance to test this under FreeBSD yet, will hopefully try it\n> out tonight...\n> \n> \n\nI am initdb'ing all the time here.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Wed, 21 Jan 1998 10:34:45 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PORTS] -current version won't initdb under i386_solaris..." }, { "msg_contents": "On Wed, 21 Jan 1998, Bruce Momjian wrote:\n\n> > \n> > \n> > Anyone know what the heap_modifytuple error refers to down below?\n> > \n> > The error is generated around line 884 of\n> > backend/access/common/heaptuple.c, where 'repl[attoff]' != 'r' \n> > \n> > Haven't had a chance to test this under FreeBSD yet, will hopefully try it\n> > out tonight...\n> > \n> > \n> \n> I am initdb'ing all the time here.\n\n\tBruce...help me out here. *rofl* I tried adding a '-d 3' to the\ninitdb BACKENDARGS variable, but it didn't seem to produce any noticable\nerrors :(\n\n\tThoughts as to what else I shoudl try to narrow it down?\n\n\n", "msg_date": "Wed, 21 Jan 1998 10:39:54 -0500 (EST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PORTS] -current version won't initdb under i386_solaris..." } ]
[ { "msg_contents": "\n> \tIf its easy to add in the next couple of days, sure, go for\n> it...but can someone explain to me *why* you would use a subselect in\n> the\n> target list? I've actually never seen that before :9\n> \n> \nI think it is always possible to rewrite a 'subselect in the target\nlist' as a join.\nSo if it is complicated to implement now, I'd say leave it out, as there\nis no functionality aspect\nI could think of.\n\nAndreas\n\n\n", "msg_date": "Wed, 21 Jan 1998 17:00:37 +0100", "msg_from": "Zeugswetter Andreas DBT <[email protected]>", "msg_from_op": true, "msg_subject": "AW: [HACKERS] Re: subselects" } ]
[ { "msg_contents": "\nMoved to [email protected], where it should have been moved\n*ages* ago\n\n\nOn Wed, 21 Jan 1998, Igor Sysoev wrote:\n\n> > The result you're seeing is, IMHO, *correct*.\n> > \n> > The first row in the table, when the update is undertaken, produces a \n> > duplicate key. So you are getting a complaint which you SHOULD receive,\n> > unless I'm misunderstanding how this is supposed to actually work.\n> > \n> > The \"update\" statement, if it is behaving as an atomic thing, effectively\n> \n> > \"snapshots\" the table and then performs the update. Since the first \n> > attempted update is on the first row it \"finds\", and adding one to it \n> > produces \"3\", which is already on file, I believe it should bitch - \n> > and it does.\n> \n> I'm not SQL guru and cannot tell how it must be.\n> But it seems that Oracle and Solid allows update primary keys such way.\n\nConnected to:\nOracle7 Server Release 7.3.3.0.0 - Production Release\nWith the distributed, replication and parallel query options\nPL/SQL Release 2.3.3.0.0 - Production\n\nSQL> create table one ( a integer primary key not null );\n\nTable created.\n\nSQL> insert into one values (2);\n\n1 row created.\n\nSQL> insert into one values (3);\n\n1 row created.\n\nSQL> insert into one values (1);\n\n1 row created.\n\nSQL> select * from one;\n\n A\n----------\n 2\n 3\n 1\n\nSQL> update one set a=a+1;\n\n3 rows updated.\n\nSQL> select * from one;\n\n A\n----------\n 3\n 4\n 2\n\nSQL> \n\n\n", "msg_date": "Wed, 21 Jan 1998 13:01:10 -0500 (EST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [QUESTIONS] Business cases" }, { "msg_contents": " > \n> \n> Moved to [email protected], where it should have been moved\n> *ages* ago\n> \n> Connected to:\n> Oracle7 Server Release 7.3.3.0.0 - Production Release\n> With the distributed, replication and parallel query options\n> PL/SQL Release 2.3.3.0.0 - Production\n> \n> SQL> create table one ( a integer primary key not null );\n> \n> Table created.\n> \n> SQL> insert into one values (2);\n> \n> 1 row created.\n> \n> SQL> insert into one values (3);\n> \n> 1 row created.\n> \n> SQL> insert into one values (1);\n> \n> 1 row created.\n> \n> SQL> select * from one;\n> \n> A\n> ----------\n> 2\n> 3\n> 1\n> \n> SQL> update one set a=a+1;\n> \n> 3 rows updated.\n> \n> SQL> select * from one;\n> \n> A\n> ----------\n> 3\n> 4\n> 2\n> \n\nMan, how do you implement that behavior? 
No wonder MySQL fails on it\ntoo.\n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Wed, 21 Jan 1998 13:35:38 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] Business cases" }, { "msg_contents": "On Wed, 21 Jan 1998, Bruce Momjian wrote:\n\n> > \n> > \n> > Moved to [email protected], where it should have been moved\n> > *ages* ago\n> > \n> > Connected to:\n> > Oracle7 Server Release 7.3.3.0.0 - Production Release\n> > With the distributed, replication and parallel query options\n> > PL/SQL Release 2.3.3.0.0 - Production\n> > \n> > SQL> create table one ( a integer primary key not null );\n> > \n> > Table created.\n> > \n> > SQL> insert into one values (2);\n> > \n> > 1 row created.\n> > \n> > SQL> insert into one values (3);\n> > \n> > 1 row created.\n> > \n> > SQL> insert into one values (1);\n> > \n> > 1 row created.\n> > \n> > SQL> select * from one;\n> > \n> > A\n> > ----------\n> > 2\n> > 3\n> > 1\n> > \n> > SQL> update one set a=a+1;\n> > \n> > 3 rows updated.\n> > \n> > SQL> select * from one;\n> > \n> > A\n> > ----------\n> > 3\n> > 4\n> > 2\n> > \n> \n> Man, how do you implement that behavior? No wonder MySQL fails on it\n> too.\n\n\tI don't know...the one suggestion that was made seemed to make\nabout the most sense...\n\n\tIf update is atomic, then it should allow you to change all the\nresultant fields and then try to commit it. After all the fields are\nchanged, then it becomes 3,4,2 instead of 2,3,1, and, therefore, is all\nunique...\n\n\n", "msg_date": "Wed, 21 Jan 1998 13:38:07 -0500 (EST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] Business cases" }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> \n> Moved to [email protected], where it should have been moved\n> *ages* ago\n> \n> \n> On Wed, 21 Jan 1998, Igor Sysoev wrote:\n> \n> > > The result you're seeing is, IMHO, *correct*.\n> > > \n> > > The first row in the table, when the update is undertaken, produces a \n> > > duplicate key. So you are getting a complaint which you SHOULD receive,\n> > > unless I'm misunderstanding how this is supposed to actually work.\n> > > \n> > > The \"update\" statement, if it is behaving as an atomic thing, effectively\n> > \n> > > \"snapshots\" the table and then performs the update. Since the first \n> > > attempted update is on the first row it \"finds\", and adding one to it \n> > > produces \"3\", which is already on file, I believe it should bitch - \n> > > and it does.\n> > \n> > I'm not SQL guru and cannot tell how it must be.\n> > But it seems that Oracle and Solid allows update primary keys such way.\n> \n> Connected to:\n> Oracle7 Server Release 7.3.3.0.0 - Production Release\n> With the distributed, replication and parallel query options\n> PL/SQL Release 2.3.3.0.0 - Production\n> \n> SQL> create table one ( a integer primary key not null );\n> \n> Table created.\n> \n> SQL> insert into one values (2);\n> \n> 1 row created.\n> \n> SQL> insert into one values (3);\n> \n> 1 row created.\n> \n> SQL> insert into one values (1);\n> \n> 1 row created.\n> \n> SQL> select * from one;\n> \n> A\n> ----------\n> 2\n> 3\n> 1\n> \n> SQL> update one set a=a+1;\n> \n> 3 rows updated.\n> \n> SQL> select * from one;\n> \n> A\n> ----------\n> 3\n> 4\n> 2\n> \n> SQL> \n\nI have been \"lurking\" on the pgsql-hackers list for a couple of days,\nbut thought I'd help where I can. 
I tried your above example on\nSybase, and got the same results. The only difference was that the\nitems were always returned from the table \"one\" in sorted order rather\nthan in insertion order.\n\nI also tried a slight modification to your query:\n\nupdate one set a=a+1 where a<3;\n\nThis produces an error as would be expected:\n\nAttempt to insert duplicate key row in object 'one' with unique index\n'one_a_8473420831'\nCommand has been aborted.\n(0 rows affected)\n\n\nOcie Mitchell\n\n", "msg_date": "Wed, 21 Jan 1998 12:15:13 -0800 (PST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] Business cases" } ]
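A toy comparison of the two checking strategies seen above for "update one set a = a + 1" on keys {2, 3, 1}, using plain arrays in place of the table and its unique index; neither function below is backend code. Checking each row as it is updated trips over the not-yet-updated key 3, while checking once after the whole statement sees the final set {3, 4, 2} and finds no duplicate, which is the Oracle and Sybase result shown.

#include <stdbool.h>
#include <stdio.h>

#define NKEYS 3

static bool
is_duplicate(const int *keys, int upto, int value)
{
    for (int i = 0; i < upto; i++)
        if (keys[i] == value)
            return true;
    return false;
}

int
main(void)
{
    int  keys[NKEYS] = {2, 3, 1};
    bool dup = false;

    /* row-at-a-time: updating keys[0] from 2 to 3 collides with the
     * not-yet-updated second row, so the statement would abort here */
    printf("row-at-a-time: %s\n",
           is_duplicate(keys, NKEYS, keys[0] + 1) ? "duplicate key" : "ok");

    /* statement-at-a-time: apply the whole update, then check once */
    for (int i = 0; i < NKEYS; i++)
        keys[i] += 1;
    for (int i = 0; i < NKEYS; i++)
        if (is_duplicate(keys, i, keys[i]))
            dup = true;
    printf("whole statement: %s\n", dup ? "duplicate key" : "ok");
    return 0;
}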
[ { "msg_contents": "Do people want the overhead of char(), varchar(), and text to be reduced\nfrom 4-bytes to 2-bytes. We store the length in this overhead, but\nsince we have a size limit on tuple size, we can't have a field over 8k\nin size anyway. Even if we up that to 32k for 6.3, we still only use 2\nbytes.\n\nI have added it to the TODO list. Most of the code already supports it\nby using VARSIZE and VARDATA macros. Once the structure size changes,\nthe macros change too. The only issue is places where they take the\nfirst four bytes of the variable-length type and cast it to an int32,\nwhich will not work in this case. We have to change this so it uses the\nmacros too.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Wed, 21 Jan 1998 16:29:03 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "varchar(), text,char() overhead" } ]
[ { "msg_contents": "> Do people want the overhead of char(), varchar(), and text to be reduced\n> from 4-bytes to 2-bytes. We store the length in this overhead, but\n> since we have a size limit on tuple size, we can't have a field over 8k\n> in size anyway. Even if we up that to 32k for 6.3, we still only use 2\n> bytes.\n> \n> I have added it to the TODO list. Most of the code already supports it\n> by using VARSIZE and VARDATA macros. Once the structure size changes,\n> the macros change too. The only issue is places where they take the\n> first four bytes of the variable-length type and cast it to an int32,\n> which will not work in this case. We have to change this so it uses the\n> macros too.\n\nWould be a nice space-saver if you have tables with many small text fields.\n\nDig out that old message of mine concerning block size and check out item #4.\n\nExcerpted below if you've finally deleted it... :) :)\n\n> Date: Wed, 29 Jan 1997 13:38:10 -0500\n> From: aixssd!darrenk (Darren King)\n> Subject: [HACKERS] Max size of data types and tuples.\n> ...\n> 4. Since only 13 bits are needed for storing the size of these\n> textual fields in a tuple, could PostgreSql use a 16-bit int to\n> store it? Currently, the size is padded to four bytes in the\n> tuple and this eats space if you have many textual fields.\n> Without further digging, I'm assuming that the size is double-word\n> aligned so that the actual text starts on a double-word boundary.\n> ...\n\ndarrenk\n", "msg_date": "Wed, 21 Jan 1998 17:14:52 -0500", "msg_from": "[email protected] (Darren King)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] varchar(), text,char() overhead" }, { "msg_contents": "> > macros too.\n> \n> Would be a nice space-saver if you have tables with many small text fields.\n> \n> Dig out that old message of mine concerning block size and check out item #4.\n> \n> Excerpted below if you've finally deleted it... :) :)\n> \n> > Date: Wed, 29 Jan 1997 13:38:10 -0500\n> > From: aixssd!darrenk (Darren King)\n> > Subject: [HACKERS] Max size of data types and tuples.\n> > ...\n> > 4. Since only 13 bits are needed for storing the size of these\n> > textual fields in a tuple, could PostgreSql use a 16-bit int to\n> > store it? Currently, the size is padded to four bytes in the\n> > tuple and this eats space if you have many textual fields.\n> > Without further digging, I'm assuming that the size is double-word\n> > aligned so that the actual text starts on a double-word boundary.\n> > ...\n\nI had forgotten about your mention of this. I am running some tests\nnow, and things look promising. However, if we go to 64k or 128k\ntuples, we would be in trouble. (We can do 64k tuples by changing the\n'special variable' length value from -1 to 0.\n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Wed, 21 Jan 1998 18:38:07 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] varchar(), text,char() overhead" }, { "msg_contents": "> \n> I had forgotten about your mention of this. I am running some tests\n> now, and things look promising. However, if we go to 64k or 128k\n> tuples, we would be in trouble. (We can do 64k tuples by changing the\n> 'special variable' length value from -1 to 0.\n> \n\nI am not going to make any changes to the variable length overhead for\nchar(), varchar(), and text at this time. It is too close to beta. 
I\nwill keep the item on the TODO list, and we can hash it out later.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Wed, 21 Jan 1998 20:06:11 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] varchar(), text,char() overhead" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > > macros too.\n> >\n> > Would be a nice space-saver if you have tables with many small text fields.\n> >\n> > Dig out that old message of mine concerning block size and check out item #4.\n> >\n> > Excerpted below if you've finally deleted it... :) :)\n> >\n> > > Date: Wed, 29 Jan 1997 13:38:10 -0500\n> > > From: aixssd!darrenk (Darren King)\n> > > Subject: [HACKERS] Max size of data types and tuples.\n> > > ...\n> > > 4. Since only 13 bits are needed for storing the size of these\n> > > textual fields in a tuple, could PostgreSql use a 16-bit int to\n> > > store it? Currently, the size is padded to four bytes in the\n> > > tuple and this eats space if you have many textual fields.\n> > > Without further digging, I'm assuming that the size is double-word\n> > > aligned so that the actual text starts on a double-word boundary.\n> > > ...\n> \n> I had forgotten about your mention of this. I am running some tests\n> now, and things look promising. However, if we go to 64k or 128k\n> tuples, we would be in trouble. (We can do 64k tuples by changing the\n ^^^^^^^^^^^^^^^^^^^^^^\nAlso, multi-representation feature allows to have 2Gb in varlena fields.\n\n> 'special variable' length value from -1 to 0.\n\nYes, it's way.\n\nVadim\n", "msg_date": "Thu, 22 Jan 1998 09:40:30 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] varchar(), text,char() overhead" }, { "msg_contents": "> > I had forgotten about your mention of this. I am running some tests\n> > now, and things look promising. However, if we go to 64k or 128k\n> > tuples, we would be in trouble. (We can do 64k tuples by changing the\n> ^^^^^^^^^^^^^^^^^^^^^^\n> Also, multi-representation feature allows to have 2Gb in varlena fields.\n\nWhat is multi-representation feature? Large objects?\n\n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Wed, 21 Jan 1998 22:05:41 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] varchar(), text,char() overhead" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > > I had forgotten about your mention of this. I am running some tests\n> > > now, and things look promising. However, if we go to 64k or 128k\n> > > tuples, we would be in trouble. (We can do 64k tuples by changing the\n> > ^^^^^^^^^^^^^^^^^^^^^^\n> > Also, multi-representation feature allows to have 2Gb in varlena fields.\n> \n> What is multi-representation feature? Large objects?\n\nYes. Server could store varlena fields in LO when size of field or\ntuple at whole is too big to be stored in relation blocks. \nThis allows to have tuples much longer than data blocks.\nThis is also Ok for performance sometime (if big varlenas are not used\nin WHERE they could be not read from disk for each tuple; if UPDATE don't\nchange out-stored varlenas they could be not stored twice).\n\nWe could use vl_len < 0 for out-stored varlenas: vl_len = -1000\ncould mean that size of data is 1000 bytes, data stored in LO and \nLO' id (oid?) is in vl_dat. 
It seems easy to implement (without\noptimization of access to data).\n\nVadim\n", "msg_date": "Thu, 22 Jan 1998 10:47:58 +0700", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] varchar(), text,char() overhead" } ]
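A rough sketch of the negative-length convention Vadim describes, i.e. vl_len < 0 meaning the bytes live in a large object whose id sits in vl_dat. The helper names and the exact encoding are assumptions made up for illustration; nothing like this exists in the backend yet.

----------
/* Sketch only: vl_len < 0 marks an out-of-line ("multi-representation")
 * field; -1000 would mean 1000 bytes of data stored in a large object
 * whose oid is kept in vl_dat.  All helper names are hypothetical. */
#include <string.h>

typedef unsigned int Oid;

struct varlena
{
    int  vl_len;
    char vl_dat[1];
};

static int
varlena_is_out_of_line(const struct varlena *v)
{
    return v->vl_len < 0;
}

static int
varlena_data_size(const struct varlena *v)
{
    /* in-line values count the 4-byte header, out-of-line ones do not */
    return v->vl_len < 0 ? -v->vl_len : v->vl_len - (int) sizeof(int);
}

static Oid
varlena_lo_id(const struct varlena *v)
{
    Oid lo_id;

    /* the LO's oid is stored where the in-line bytes would normally be */
    memcpy(&lo_id, v->vl_dat, sizeof(Oid));
    return lo_id;
}
----------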
[ { "msg_contents": "Here they are:\n---------------------------------------------------------------------------\n\nGROUP BY bug of duplicates\nGROUP BY nulls bug\nORDER BY nulls(Vadim?)\nmany OR's exhaust optimizer memory(Vadim?)\nmax tuple length settings(Darren & Peter)\npg_user defaults to world-readable until passwords used(Todd)\npsql .psqlrc file startup(Andrew)\nself-join optimizer bug\nsubselects(Vadim)\n", "msg_date": "Wed, 21 Jan 1998 22:08:10 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Current open 6.3 issues" }, { "msg_contents": "On Wed, 21 Jan 1998, Bruce Momjian wrote:\n\n> Here they are:\n> ---------------------------------------------------------------------------\n> \n> GROUP BY bug of duplicates\n> GROUP BY nulls bug\n> ORDER BY nulls(Vadim?)\n> many OR's exhaust optimizer memory(Vadim?)\n> max tuple length settings(Darren & Peter)\n\nThe last I saw, we agreed (on the client side) to leave the current\nsettings for 8k tuples.\n\n> pg_user defaults to world-readable until passwords used(Todd)\n> psql .psqlrc file startup(Andrew)\n> self-join optimizer bug\n> subselects(Vadim)\n\nPS: I'm having to commute to London all this week, so I'm not getting the\ntime to do anything :-(\n\nHopefully it should ease after this Friday.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Thu, 22 Jan 1998 06:35:20 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current open 6.3 issues" }, { "msg_contents": "\nWhere can i get it ?\n\nOn Wed, 21 Jan 1998, Bruce Momjian wrote:\n\n> Here they are:\n> ---------------------------------------------------------------------------\n> \n> GROUP BY bug of duplicates\n> GROUP BY nulls bug\n> ORDER BY nulls(Vadim?)\n> many OR's exhaust optimizer memory(Vadim?)\n> max tuple length settings(Darren & Peter)\n> pg_user defaults to world-readable until passwords used(Todd)\n> psql .psqlrc file startup(Andrew)\n> self-join optimizer bug\n> subselects(Vadim)\n> \n\nSY, Serj\n\n", "msg_date": "Thu, 22 Jan 1998 12:42:15 +0300 (MSK)", "msg_from": "Serj <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current open 6.3 issues" }, { "msg_contents": "> \n> \n> Where can i get it ?\n\nBeta is feb1. 
Copy on ftp.postgresql.org now.\n\n> \n> On Wed, 21 Jan 1998, Bruce Momjian wrote:\n> \n> > Here they are:\n> > ---------------------------------------------------------------------------\n> > \n> > GROUP BY bug of duplicates\n> > GROUP BY nulls bug\n> > ORDER BY nulls(Vadim?)\n> > many OR's exhaust optimizer memory(Vadim?)\n> > max tuple length settings(Darren & Peter)\n> > pg_user defaults to world-readable until passwords used(Todd)\n> > psql .psqlrc file startup(Andrew)\n> > self-join optimizer bug\n> > subselects(Vadim)\n> > \n> \n> SY, Serj\n> \n> \n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Thu, 22 Jan 1998 10:37:09 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Current open 6.3 issues" }, { "msg_contents": "> \n> On Wed, 21 Jan 1998, Bruce Momjian wrote:\n> \n> > Here they are:\n> > ---------------------------------------------------------------------------\n> > \n> > GROUP BY bug of duplicates\n> > GROUP BY nulls bug\n> > ORDER BY nulls(Vadim?)\n> > many OR's exhaust optimizer memory(Vadim?)\n> > max tuple length settings(Darren & Peter)\n> \n> The last I saw, we agreed (on the client side) to leave the current\n> settings for 8k tuples.\n\nI thought we were increasing the client size to the maximum value\nallowable for any backend.\n\n> \n> > pg_user defaults to world-readable until passwords used(Todd)\n> > psql .psqlrc file startup(Andrew)\n> > self-join optimizer bug\n> > subselects(Vadim)\n> \n> PS: I'm having to commute to London all this week, so I'm not getting the\n> time to do anything :-(\n> \n> Hopefully it should ease after this Friday.\n> \n> -- \n> Peter T Mount [email protected] or [email protected]\n> Main Homepage: http://www.demon.co.uk/finder\n> Work Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n> \n> \n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Thu, 22 Jan 1998 10:41:59 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Current open 6.3 issues" }, { "msg_contents": "> >\n> > Where can i get it ?\n> \n> Beta is feb1. Copy on ftp.postgresql.org now.\n> \n\n-rw-r--r-- 1 1005 96 2858711 Jan 16 08:03\npostgresql.snapshot.tar.gz \n\nIs it, or not ?\n\n-- \nSY, Serj\n", "msg_date": "Thu, 22 Jan 1998 19:32:53 +0300", "msg_from": "Serj <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current open 6.3 issues" }, { "msg_contents": "On Thu, 22 Jan 1998, Serj wrote:\n\n> > >\n> > > Where can i get it ?\n> > \n> > Beta is feb1. Copy on ftp.postgresql.org now.\n> > \n> \n> -rw-r--r-- 1 1005 96 2858711 Jan 16 08:03\n> postgresql.snapshot.tar.gz \n> \n> Is it, or not ?\n\n\tIt is purely a alpha copy, meant for testing purposes only, until Feb\n1st...expect the new one coming out on Friday to have major changes in it\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 22 Jan 1998 17:40:54 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current open 6.3 issues" } ]
[ { "msg_contents": "subscribe\n\n", "msg_date": "Thu, 22 Jan 1998 11:46:02 +0100", "msg_from": "expertel <[email protected]>", "msg_from_op": true, "msg_subject": "(no subject)" } ]
[ { "msg_contents": "> Here they are:\n> ---------------------------------------------------------------------------\n> \n> GROUP BY bug of duplicates\n> GROUP BY nulls bug\n> ORDER BY nulls(Vadim?)\n> many OR's exhaust optimizer memory(Vadim?)\n> max tuple length settings(Darren & Peter)\n> pg_user defaults to world-readable until passwords used(Todd)\n> psql .psqlrc file startup(Andrew)\nUnfortunately I simply don't have time to implement any of the nicer suggestions\nfor how this should work. I wish I did...\n\n> self-join optimizer bug\n> subselects(Vadim)\n> \n\nAndrew\n\n----------------------------------------------------------------------------\nDr. Andrew C.R. Martin University College London\nEMAIL: (Work) [email protected] (Home) [email protected]\nURL: http://www.biochem.ucl.ac.uk/~martin\nTel: (Work) +44(0)171 419 3890 (Home) +44(0)1372 275775\n", "msg_date": "Thu, 22 Jan 1998 13:02:10 GMT", "msg_from": "Andrew Martin <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Current open 6.3 issues" }, { "msg_contents": "> \n> > Here they are:\n> > ---------------------------------------------------------------------------\n> > \n> > GROUP BY bug of duplicates\n> > GROUP BY nulls bug\n> > ORDER BY nulls(Vadim?)\n> > many OR's exhaust optimizer memory(Vadim?)\n> > max tuple length settings(Darren & Peter)\n> > pg_user defaults to world-readable until passwords used(Todd)\n> > psql .psqlrc file startup(Andrew)\n> Unfortunately I simply don't have time to implement any of the nicer suggestions\n> for how this should work. I wish I did...\n\nOK, what are we doing the .psqlrc. Can you send the old patch, or are\nwe dropping it for 6.3?\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Thu, 22 Jan 1998 10:40:05 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current open 6.3 issues" } ]
[ { "msg_contents": "> > max tuple length settings(Darren & Peter)\n> \n> The last I saw, we agreed (on the client side) to leave the current\n> settings for 8k tuples.\n> \n> PS: I'm having to commute to London all this week, so I'm not getting the\n> time to do anything :-(\n> \n> Hopefully it should ease after this Friday.\n> \n> -- \n> Peter T Mount [email protected] or [email protected]\n\nKnow the feeling... I commute to London every day :-( (except when I\nwork at home...)\n\n\nAndrew\n\n----------------------------------------------------------------------------\nDr. Andrew C.R. Martin University College London\nEMAIL: (Work) [email protected] (Home) [email protected]\nURL: http://www.biochem.ucl.ac.uk/~martin\nTel: (Work) +44(0)171 419 3890 (Home) +44(0)1372 275775\n", "msg_date": "Thu, 22 Jan 1998 13:04:45 GMT", "msg_from": "Andrew Martin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Current open 6.3 issues" } ]
[ { "msg_contents": "> > \n> > I had forgotten about your mention of this. I am running some tests\n> > now, and things look promising. However, if we go to 64k or 128k\n> > tuples, we would be in trouble. (We can do 64k tuples by changing the\n> > 'special variable' length value from -1 to 0.\n> > \n> \n> I am not going to make any changes to the variable length overhead for\n> char(), varchar(), and text at this time. It is too close to beta. I\n> will keep the item on the TODO list, and we can hash it out later.\n> \n\nI've been slowed this week...totalled my car Sat. nite (actually some\nother bonehead did it for me), so I've been a touch busy with insurance\nagents, etc...but I _will_ have this in by the beta date.\n\nTuples will only go up to 32k since there are only 15 bits available to\npoint to items on the page. Unless we want to expand that structure...I\ntested bit-field alignment on aix and it seems to favor 4-byte boundaries.\n\nFor now the three bit fields total 32 bits, but expand those and the size\nis padded up to 64. I have no idea how gcc or other compilers align bit\nfields. I been working without trying to expand this structure size, so\nI've stuck to 32k (15-bits) as the limit.\n\nI have so far...\n\n1. Synced all references to the BLCKSZ define.\n2. Made a \"-k\" option to postgres and created a global \"BlockSize\" variable.\n3. Fixed the places where BLCKSZ was used in variable declarations to use\n the BlockSize global.\n\nTo do...\n\n1. Should the block size of a database be written to a file like the version?\nAnd then be read in when postmaster starts and passed to each backend? This\nwould limit all of the databases in one PG_DATA directory to the same block\nsize. Couldn't do it on a \"per database\" basis since the template is only\ncreated once by initdb.\n\n2. Should the limit of the char fields be based on the block size? Been trying\nto get this to work. Creates fields just fine, but backend seems to be passing\nthe fields back padded to the full size.\n\nIs it possible for the back and front end to have a tuple split across packets\nor does everything have to be in one 8k packet? Rather than have the interfaces\nneeding to know about differing sizes, could tuples-spanning-packets be added\nto the libpq protocol somehow? Or would this have a bottleneck effect?\n\ndarrenk\n", "msg_date": "Thu, 22 Jan 1998 10:36:17 -0500", "msg_from": "[email protected] (Darren King)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] varchar(), text,char() overhead" } ]
[ { "msg_contents": "\nHello.\n\nZeev Suraski <[email protected]> is making some changes to the PostgreSQL code \nin PHP3 so that the Oid is returned in the pg_exec funtion if it is an \ninsert.\n\nWhat is the size of the Oid (unsigned, signed etc) and will it ever be \nzero.\n\nMichael\n\n* Michael J. Rogan, Network Administrator, 905-624-3020 *\n* Mark IV Industries, F-P Electronics & I.V.H.S. Divisions *\n* [email protected] [email protected] *\n", "msg_date": "Thu, 22 Jan 1998 16:11:14 +0000", "msg_from": "\"Michael J. Rogan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Oid Questions" }, { "msg_contents": "Developers Frequently Asked Questions (FAQ) for PostgreSQL\n\nLast updated: Thu Jan 22 15:04:10 EST 1998\n\nCurrent maintainer: Bruce Momjian ([email protected])\n\nThe most recent version of this document can be viewed at the postgreSQL Web\nsite, http://postgreSQL.org.\n\n ------------------------------------------------------------------------\n\nQuestions answered:\n\n1) General questions\n\n1) What tools are available for developers?\n2) What books are good for developers?\n3) Why do we use palloc() and pfree() to allocate memory?\n4) Why do we use Node() and List() to make data structures?\n ------------------------------------------------------------------------\n\n1) What tools are available for developers?\n\nAside from the User documentation mentioned in the regular FAQ, there are\nseveral development tools available. First, all the files in the /tools\ndirectory are designed for developers.\n\n RELEASE_CHANGES changes we have to make for each release\n SQL_keywords standard SQL'92 keywords\n backend web flowchart of the backend directories\n ccsym find standard defines made by your compiler\n entab converts tabs to spaces, used by pgindent\n find_static finds functions that could be made static\n find_typedef get a list of typedefs in the source code\n make_ctags make vi 'tags' file in each directory\n make_diff make *.orig and diffs of source\n make_etags make emacs 'etags' files\n make_keywords.README make comparison of our keywords and SQL'92\n make_mkid make mkid ID files\n mkldexport create AIX exports file\n pgindent indents C source files\n\nLet me note some of these. If you point your browser at the tools/backend\ndirectory, you will see all the backend components in a flow chart. You can\nclick on any one to see a description. If you then click on the directory\nname, you will be taken to the source directory, to browse the actual source\ncode behind it. We also have several README files in some source directories\nto describe the function of the module. The browser will display these when\nyou enter the directory also. The tools/backend directory is also contained\non our web page under the title Backend Flowchart.\n\nSecond, you really should have an editor that can handle tags, so you can\ntag a function call to see the function definition, and then tag inside that\nfunction to see an even lower-level function, and then back out twice to\nreturn to the original function. Most editors support this via tags or etags\nfiles.\n\nThird, you need to get mkid from ftp.postgresql.org. 
By running\ntools/make_mkid, an archive of source symbols can be created that can be\nrapidly queried like grep or edited.\n\nmake_diff has tools to create patch diff files that can be applied to the\ndistribution.\n\npgindent will format source files to match our standard format, which has\nfour-space tabs, and an indenting format specified by flags to the your\noperating system's utility indent.\n\n2) What books are good for developers?\n\nI have two good books, An Introduction to Database Systems, by C.J. Date,\nAddison, Wesley and A Guide to the SQL Standard, by C.J. Date, et. al,\nAddison, Wesley.\n\n3) Why do we use palloc() and pfree() to allocate memory?\n\npalloc() and pfree() are used in place of malloc() and free() because we\nautomatically free all memory allocated when a transaction completes. This\nmakes it easier to make sure we free memory that gets allocated in one\nplace, but only freed much later. There are several contexts that memory can\nbe allocated in, and this controls when the allocated memory is\nautomatically freed by the backend.\n\n4) Why do we use Node() and List() to make data structures?\n\nWe do this because this allows a consistent way to pass data inside the\nbackend in a flexible way. Every node has a NodeTag which specifies what\ntype of data is inside the Node. Lists are lists of Nodes. lfirst(),\nlnext(), and foreach() are used to get, skip, and traverse throught Lists.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Thu, 22 Jan 1998 15:05:37 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "new DEV FAQ" }, { "msg_contents": "> \n> \n> Hello.\n> \n> Zeev Suraski <[email protected]> is making some changes to the PostgreSQL code \n> in PHP3 so that the Oid is returned in the pg_exec funtion if it is an \n> insert.\n> \n> What is the size of the Oid (unsigned, signed etc) and will it ever be \n> zero.\n\n\ttypedef unsigned int Oid;\n\nA zero value for OID is reserved for an Invalid OID. If it returns a\nzero, I would pass it back to the application. You may want to call\nlibpq's PQoidStatus() to get this information. That is a new function\nfor 6.2.1.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Thu, 22 Jan 1998 16:25:19 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Oid Questions" }, { "msg_contents": "> \n> At 16:25 22/01/98 -0500, Bruce Momjian wrote:\n> >A zero value for OID is reserved for an Invalid OID. If it returns a\n> >zero, I would pass it back to the application. You may want to call\n> >libpq's PQoidStatus() to get this information. That is a new function\n> >for 6.2.1.\n> \n> Ok, please tell me if this logic is correct. It describes the logic behind\n> the implementation of pg_exec() in PHP.\n> \n> First, the query is executed. If the status is PGRES_EMPTY_QUERY or\n> PGRES_BAD_RESPONSE or PGRES_NONFATAL_ERROR or PGRES_FATAL_ERROR, return\n> FALSE (failure).\n> If it returns PGRES_COMMAND_OK (which is, from what I understand, a\n> successful query that is, by definition, not supposed to return rows) -\n> check PQoidStatus(). If it's not null, atoi() of it is not 0, return its\n> return value as a numeric integer. 
If it is zero, assume that this was a\n> successful query that didn't cause the oid to be updated, and return TRUE\n> (successful query that does not return rows or oid).\n> Otherwise, assume that that was a succesful query that returned rows -\n> return a PostgresSQL result identifier.\n\nSorry, got lost in this paragraph.\n\n> \n> I guess that my key question is, whether or not it's correct to assume\n> PGRES_COMMAND_OK + PQoidStatus() == 0 or \"0\" => successful query that did\n> not return rows and did not update the oid, OR, is it possible that\n> PQoidStatus() of zero reflects some error, even though the return value was\n> PGRES_COMMAND_OK?\n\nIt is the result status, PGRES_COMMAND_OK that is important. The\nPQoidStatus() is really there just as an aid. Only an INSERT returns an\nOID as part of the result string. We use this function to just pull it\nout of the string. If they do a SELECT ... INTO, they are inserting\nzero or multiple OIDs, so this value is the last oid inserted.\n\nWe also have new in 6.2.1 PQcmdTuples(), which shows how many rows were\naffected by the INSERT, UPDATE, or DELETE. Again, it comes out of the\nstring returned in the Result structure.\n\nSee libpq/fe-exec.c for the source to both of these.\n\n> \n> PHP can return different datatypes from the same function, in case you were\n> wondering :)\n\nYep, I remember.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Thu, 22 Jan 1998 17:09:00 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Oid Questions" }, { "msg_contents": "> \n> I'll try to rephrase the question without taking 3 complex paragraphs to do\n> that :)\n> \n> Is there a way to know a PostgresSQL result holds NO interesting\n> information (no rows, no oids, no nothing)?\n> \n> The more I think of it, the more it seems like this isn't the case with\n> PostgresSQL. Moreover, it seems like in most cases the result holds one\n> interesting tidbit of information or another. When I wrote the MySQL\n> module, basically, I made any query that did not return rows (not including\n> select's that returned 0 rows) but succeeded return TRUE instead of a\n> result handler, since there wasn't much point at keeping that result. With\n> MySQL the information about the last inserted id (mysql_insert_it(), I\n> think it's comparable to the last oid in pgsql) and the number of affected\n> rows can be obtained from the 'server' structure, and not the restul\n> structure as it is with Postgres.\n> \n> I guess I'll change the Postgres module to always keep the result\n> structures and return result identifiers on a successful query.\n\nYes, all the return information for results that return zero rows are in\nthe Result structure, so you can have multiple results open at the same\ntime, and query them separately. They remain valid until PQclear()'ed.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Thu, 22 Jan 1998 17:34:51 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Oid Questions" }, { "msg_contents": "At 16:25 22/01/98 -0500, Bruce Momjian wrote:\n>A zero value for OID is reserved for an Invalid OID. If it returns a\n>zero, I would pass it back to the application. You may want to call\n>libpq's PQoidStatus() to get this information. That is a new function\n>for 6.2.1.\n\nOk, please tell me if this logic is correct. It describes the logic behind\nthe implementation of pg_exec() in PHP.\n\nFirst, the query is executed. 
If the status is PGRES_EMPTY_QUERY or\nPGRES_BAD_RESPONSE or PGRES_NONFATAL_ERROR or PGRES_FATAL_ERROR, return\nFALSE (failure).\nIf it returns PGRES_COMMAND_OK (which is, from what I understand, a\nsuccessful query that is, by definition, not supposed to return rows) -\ncheck PQoidStatus(). If it's not null, atoi() of it is not 0, return its\nreturn value as a numeric integer. If it is zero, assume that this was a\nsuccessful query that didn't cause the oid to be updated, and return TRUE\n(successful query that does not return rows or oid).\nOtherwise, assume that that was a succesful query that returned rows -\nreturn a PostgresSQL result identifier.\n\nI guess that my key question is, whether or not it's correct to assume\nPGRES_COMMAND_OK + PQoidStatus() == 0 or \"0\" => successful query that did\nnot return rows and did not update the oid, OR, is it possible that\nPQoidStatus() of zero reflects some error, even though the return value was\nPGRES_COMMAND_OK?\n\nPHP can return different datatypes from the same function, in case you were\nwondering :)\n\nZeev\n---\nZeev Suraski <[email protected]>\nWeb programmer, System administrator, Netvision LTD\nhttp://bourbon.netvision.net.il/ ICQ: 1450980 \nFor a PGP public key, finger [email protected]\n", "msg_date": "Thu, 22 Jan 1998 23:55:35", "msg_from": "Zeev Suraski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Oid Questions" }, { "msg_contents": "I'll try to rephrase the question without taking 3 complex paragraphs to do\nthat :)\n\nIs there a way to know a PostgresSQL result holds NO interesting\ninformation (no rows, no oids, no nothing)?\n\nThe more I think of it, the more it seems like this isn't the case with\nPostgresSQL. Moreover, it seems like in most cases the result holds one\ninteresting tidbit of information or another. When I wrote the MySQL\nmodule, basically, I made any query that did not return rows (not including\nselect's that returned 0 rows) but succeeded return TRUE instead of a\nresult handler, since there wasn't much point at keeping that result. With\nMySQL the information about the last inserted id (mysql_insert_it(), I\nthink it's comparable to the last oid in pgsql) and the number of affected\nrows can be obtained from the 'server' structure, and not the restul\nstructure as it is with Postgres.\n\nI guess I'll change the Postgres module to always keep the result\nstructures and return result identifiers on a successful query.\n\nZeev\n---\nZeev Suraski <[email protected]>\nWeb programmer, System administrator, Netvision LTD\nhttp://bourbon.netvision.net.il/ ICQ: 1450980 \nFor a PGP public key, finger [email protected]\n", "msg_date": "Fri, 23 Jan 1998 00:33:09", "msg_from": "Zeev Suraski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Oid Questions" } ]
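To tie the PGRES_COMMAND_OK / PQoidStatus() discussion together, here is a small standalone libpq example of the INSERT case. It is written against the libpq calls as generally documented (PQsetdb, PQresultStatus, PQerrorMessage, and the new-in-6.2.1 PQoidStatus/PQcmdTuples); exact availability of each call in a given release may differ, and the database name and table are placeholders.

----------
/* Minimal libpq sketch of the INSERT/oid logic discussed above.
 * Database name and table are placeholders. */
#include <stdio.h>
#include <stdlib.h>
#include "libpq-fe.h"

int
main(void)
{
    PGconn       *conn = PQsetdb(NULL, NULL, NULL, NULL, "test");
    PGresult     *res;
    unsigned int  oid = 0;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    res = PQexec(conn, "INSERT INTO t VALUES ('hi', 1)");
    if (PQresultStatus(res) == PGRES_COMMAND_OK)
    {
        /* PQoidStatus() hands back the inserted row's oid as a string;
         * an empty string or "0" means no single oid is available
         * (e.g. SELECT ... INTO inserting zero or many rows). */
        const char *s = PQoidStatus(res);

        if (s != NULL && *s != '\0')
            oid = (unsigned int) atol(s);
        printf("inserted oid %u, %s row(s) affected\n",
               oid, PQcmdTuples(res));
    }
    else
        fprintf(stderr, "insert failed: %s", PQerrorMessage(conn));

    PQclear(res);
    PQfinish(conn);
    return 0;
}
----------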
[ { "msg_contents": "Here is the top part of my gprof output from a simple session, creating\ntwo tables, inserting some rows, creating an index and doing a couple\nof simple selects (one minute of typing):\n----------\n % cumulative self self total \n time seconds seconds calls ms/call ms/call name \n 39.74 12.39 12.39 mcount (profiler overhead)\n 7.86 14.84 2.45 964885 0.00 0.00 fastgetattr\n 2.79 15.71 0.87 906153 0.00 0.00 fastgetiattr\n 2.44 16.47 0.76 _psort_cmp\n 2.08 17.12 0.65 400783 0.00 0.00 _bt_compare\n 1.60 17.62 0.50 125987 0.00 0.01 hash_search\n 1.48 18.08 0.46 128756 0.00 0.01 SearchSysCache\n 1.28 18.48 0.40 120307 0.00 0.00 SpinAcquire\n 1.25 18.87 0.39 1846682 0.00 0.00 fmgr_faddr\n 1.06 19.20 0.33 253022 0.00 0.00 StrategyTermEvaluate\n 1.03 19.52 0.32 31578 0.01 0.04 heapgettup\n 0.99 19.83 0.31 128842 0.00 0.00 CatalogCacheComputeHashIndex\n---------- \nFastgetattr() doesn't seem to be so fast, after all... or perhaps it would be\nbest to try and reduce the number of calls to it? One million calls to read\nattributes out of tuples seems to me as extreme when we are talking about less\nthan one hundred rows.\n\nPerhaps it would be better to add a new function 'fastgetattrlist' to retrieve\nmultiple attributes at once, instead of calling a macro wrapped around another\nbunch of macros, calling 'fastgetattr' for each attribute to retrieve?\n\nOr perhaps the tuples could be fitted with a \"lookup table\" when being stored\nin the backend cache? It could take .000005 second or so to build the table and\nattach it to the tuple, but it would definitively speed up retrieval of attributes\nfrom that tuple. If the same tuple is searched for its atributtes lots of times (as\nseem to be the case) then this would be faster in the end.\n\nCan we afford not to optimize this? I just hate those MySql people showing their\nperformance figures. PostgreSQL should be the best...\n\n\nHow about this (seemingly) unnecessarily complex part of\naccess/common/heaptuple.c [fastgetattr] ...\n----------\nswitch (att[i]->attlen)\n{\n\tcase sizeof(char):\n\t\toff++;\t\t<-- why not 'sizeof(char)'?\n\t\tbreak;\n\tcase sizeof(int16):\n\t\toff += sizeof(int16);\n\t\tbreak;\n\tcase sizeof(int32):\n\t\toff += sizeof(int32);\n\t\tbreak;\n\tcase -1:\n\t\tusecache = false;\n\t\toff += VARSIZE(tp + off);\n\t\tbreak;\n\tdefault:\n\t\toff += att[i]->attlen;\n\t\tbreak;\n}\n----------\n\nWould it not be faster *and* easier to read if written as:\n----------\noff += (att[i]->attlen == -1 ? (usecache=false,VARSIZE(tp+off)) : att[i]->attlen);\n----------\n\n...or is this some kind of magic which I should not worry about? 
There are almost\nno comments in this code, and most of the stuff is totally incomprehensible to me.\n\nWould it be a good idea to try and optimize things like this, or will these\nfunctions be replace sometime anyway?\n\n/* m */\n", "msg_date": "Thu, 22 Jan 1998 17:36:16 +0100", "msg_from": "Mattias Kregert <[email protected]>", "msg_from_op": true, "msg_subject": "Profiling the backend (gprof output) [current devel]" }, { "msg_contents": "> \n> Here is the top part of my gprof output from a simple session, creating\n> two tables, inserting some rows, creating an index and doing a couple\n> of simple selects (one minute of typing):\n> ----------\n> % cumulative self self total \n> time seconds seconds calls ms/call ms/call name \n> 39.74 12.39 12.39 mcount (profiler overhead)\n> 7.86 14.84 2.45 964885 0.00 0.00 fastgetattr\n> 2.79 15.71 0.87 906153 0.00 0.00 fastgetiattr\n> 2.44 16.47 0.76 _psort_cmp\n> 2.08 17.12 0.65 400783 0.00 0.00 _bt_compare\n> 1.60 17.62 0.50 125987 0.00 0.01 hash_search\n> 1.48 18.08 0.46 128756 0.00 0.01 SearchSysCache\n> 1.28 18.48 0.40 120307 0.00 0.00 SpinAcquire\n> 1.25 18.87 0.39 1846682 0.00 0.00 fmgr_faddr\n> 1.06 19.20 0.33 253022 0.00 0.00 StrategyTermEvaluate\n> 1.03 19.52 0.32 31578 0.01 0.04 heapgettup\n> 0.99 19.83 0.31 128842 0.00 0.00 CatalogCacheComputeHashIndex\n> ---------- \n> Fastgetattr() doesn't seem to be so fast, after all... or perhaps it would be\n> best to try and reduce the number of calls to it? One million calls to read\n> attributes out of tuples seems to me as extreme when we are talking about less\n> than one hundred rows.\n> \n> Perhaps it would be better to add a new function 'fastgetattrlist' to retrieve\n> multiple attributes at once, instead of calling a macro wrapped around another\n> bunch of macros, calling 'fastgetattr' for each attribute to retrieve?\n> \n> Or perhaps the tuples could be fitted with a \"lookup table\" when being stored\n> in the backend cache? It could take .000005 second or so to build the table and\n> attach it to the tuple, but it would definitively speed up retrieval of attributes\n> from that tuple. If the same tuple is searched for its atributtes lots of times (as\n> seem to be the case) then this would be faster in the end.\n> \n> Can we afford not to optimize this? I just hate those MySql people showing their\n> performance figures. PostgreSQL should be the best...\n> \n> \n> How about this (seemingly) unnecessarily complex part of\n> access/common/heaptuple.c [fastgetattr] ...\n> ----------\n> switch (att[i]->attlen)\n> {\n> \tcase sizeof(char):\n> \t\toff++;\t\t<-- why not 'sizeof(char)'?\n> \t\tbreak;\n> \tcase sizeof(int16):\n> \t\toff += sizeof(int16);\n> \t\tbreak;\n> \tcase sizeof(int32):\n> \t\toff += sizeof(int32);\n> \t\tbreak;\n> \tcase -1:\n> \t\tusecache = false;\n> \t\toff += VARSIZE(tp + off);\n> \t\tbreak;\n> \tdefault:\n> \t\toff += att[i]->attlen;\n> \t\tbreak;\n> }\n> ----------\n> \n> Would it not be faster *and* easier to read if written as:\n> ----------\n> off += (att[i]->attlen == -1 ? (usecache=false,VARSIZE(tp+off)) : att[i]->attlen);\n> ----------\n> \n> ...or is this some kind of magic which I should not worry about? There are almost\n> no comments in this code, and most of the stuff is totally incomprehensible to me.\n> \n> Would it be a good idea to try and optimize things like this, or will these\n> functions be replace sometime anyway?\n\nOK, here is my statement on this. GO FOR IT. 
YOU HAVE THE GREEN LIGHT.\nRUN WITH THE BALL.\n\nYes, PostgreSQL is very modularized, but this modularization causes\nsevere function all overhead, as you have seen. I did some tuning in\n6.2 that improved some things, but much more needs to be done.\n\nAnything that can be done without making the code harder to understand\nor maintain should be done.\n\nYour change to the above switch statement is a good example of a good\ncleanup. If things like this can be improved or cached or made into\nmacros, let's do it.\n\nThe fastgetattr function does quite a bit in terms of reading the tuple,\nso you may need to re-code part of it to optimize it. There is a\nattcacheoff value, but that is only removing part of the overhead.\n\nThere are three things that make us slower that other databases: \ntransactions, user-defined type system, and a good optimizer, which can\nslow small queries, but makes large queries much faster.\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Thu, 22 Jan 1998 13:14:51 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Profiling the backend (gprof output) [current devel]" }, { "msg_contents": "> Anything that can be done without making the code harder to understand\n> or maintain should be done.\n\nOne thing. If you find a better/faster/clearer way to code something,\nand it appears in several areas/files of the code, it should probably be\ndone in all those parts, not just the part that gets used a lot.\n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Thu, 22 Jan 1998 13:47:25 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Profiling the backend (gprof output) [current devel]" }, { "msg_contents": "I think I am on to something with fastgetattr. I will send a patch in a\nfew hours.\n\n> \n> Here is the top part of my gprof output from a simple session, creating\n> two tables, inserting some rows, creating an index and doing a couple\n> of simple selects (one minute of typing):\n> ----------\n> % cumulative self self total \n> time seconds seconds calls ms/call ms/call name \n> 39.74 12.39 12.39 mcount (profiler overhead)\n> 7.86 14.84 2.45 964885 0.00 0.00 fastgetattr\n> 2.79 15.71 0.87 906153 0.00 0.00 fastgetiattr\n> 2.44 16.47 0.76 _psort_cmp\n> 2.08 17.12 0.65 400783 0.00 0.00 _bt_compare\n> 1.60 17.62 0.50 125987 0.00 0.01 hash_search\n> 1.48 18.08 0.46 128756 0.00 0.01 SearchSysCache\n> 1.28 18.48 0.40 120307 0.00 0.00 SpinAcquire\n> 1.25 18.87 0.39 1846682 0.00 0.00 fmgr_faddr\n> 1.06 19.20 0.33 253022 0.00 0.00 StrategyTermEvaluate\n> 1.03 19.52 0.32 31578 0.01 0.04 heapgettup\n> 0.99 19.83 0.31 128842 0.00 0.00 CatalogCacheComputeHashIndex\n> ---------- \n> Fastgetattr() doesn't seem to be so fast, after all... or perhaps it would be\n> best to try and reduce the number of calls to it? One million calls to read\n> attributes out of tuples seems to me as extreme when we are talking about less\n> than one hundred rows.\n> \n> Perhaps it would be better to add a new function 'fastgetattrlist' to retrieve\n> multiple attributes at once, instead of calling a macro wrapped around another\n> bunch of macros, calling 'fastgetattr' for each attribute to retrieve?\n> \n> Or perhaps the tuples could be fitted with a \"lookup table\" when being stored\n> in the backend cache? 
It could take .000005 second or so to build the table and\n> attach it to the tuple, but it would definitively speed up retrieval of attributes\n> from that tuple. If the same tuple is searched for its atributtes lots of times (as\n> seem to be the case) then this would be faster in the end.\n> \n> Can we afford not to optimize this? I just hate those MySql people showing their\n> performance figures. PostgreSQL should be the best...\n> \n> \n> How about this (seemingly) unnecessarily complex part of\n> access/common/heaptuple.c [fastgetattr] ...\n> ----------\n> switch (att[i]->attlen)\n> {\n> \tcase sizeof(char):\n> \t\toff++;\t\t<-- why not 'sizeof(char)'?\n> \t\tbreak;\n> \tcase sizeof(int16):\n> \t\toff += sizeof(int16);\n> \t\tbreak;\n> \tcase sizeof(int32):\n> \t\toff += sizeof(int32);\n> \t\tbreak;\n> \tcase -1:\n> \t\tusecache = false;\n> \t\toff += VARSIZE(tp + off);\n> \t\tbreak;\n> \tdefault:\n> \t\toff += att[i]->attlen;\n> \t\tbreak;\n> }\n> ----------\n> \n> Would it not be faster *and* easier to read if written as:\n> ----------\n> off += (att[i]->attlen == -1 ? (usecache=false,VARSIZE(tp+off)) : att[i]->attlen);\n> ----------\n> \n> ...or is this some kind of magic which I should not worry about? There are almost\n> no comments in this code, and most of the stuff is totally incomprehensible to me.\n> \n> Would it be a good idea to try and optimize things like this, or will these\n> functions be replace sometime anyway?\n> \n> /* m */\n> \n> \n\n\n-- \nBruce Momjian\[email protected]\n", "msg_date": "Thu, 29 Jan 1998 09:47:06 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Profiling the backend (gprof output) [current devel]" } ]
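To make the suggested switch collapse easy to eyeball, here is a small standalone harness that runs the original switch and the proposed one-liner side by side. The attlen values and VARSIZE are stubbed out, so this only demonstrates that the two forms advance the offset identically; it is not how fastgetattr itself is built.

----------
/* Standalone check that the proposed one-line rewrite advances the
 * offset exactly like the original switch.  VARSIZE is stubbed; in
 * the backend it reads the varlena header. */
#include <assert.h>
#include <stdbool.h>

#define VARSIZE(ptr) (*(const int *) (ptr))    /* stub for this test */

static int
advance_switch(int attlen, const char *tp, int off, bool *usecache)
{
    switch (attlen)
    {
        case 1:  off += 1; break;              /* sizeof(char)  */
        case 2:  off += 2; break;              /* sizeof(int16) */
        case 4:  off += 4; break;              /* sizeof(int32) */
        case -1:
            *usecache = false;
            off += VARSIZE(tp + off);
            break;
        default: off += attlen; break;
    }
    return off;
}

static int
advance_oneline(int attlen, const char *tp, int off, bool *usecache)
{
    off += (attlen == -1) ? (*usecache = false, VARSIZE(tp + off))
                          : attlen;
    return off;
}

int
main(void)
{
    union { char c[64]; int i[16]; } buf = {{0}};
    int lens[] = {1, 2, 4, 16, -1};
    int n;

    buf.i[5] = 8;                   /* fake varlena length at offset 20 */

    for (n = 0; n < 5; n++)
    {
        bool c1 = true, c2 = true;

        assert(advance_switch(lens[n], buf.c, 20, &c1) ==
               advance_oneline(lens[n], buf.c, 20, &c2));
        assert(c1 == c2);
    }
    return 0;
}
----------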