hdel ¶ Delete a file or folder hdel x hdel[x] Where x is a file symbol atom, deletes the file or folder and returns x . q)hdel`:test.txt / delete test.txt in current working directory `:test.txt q)hdel`:test.txt / should generate an error 'test.txt: No such file or directory hdel can delete folders only if empty. To delete a folder and its contents, recursively /diR gets recursive dir listing q)diR:{$[11h=type d:key x;raze x,.z.s each` sv/:x,/:d;d]} /hide power behind nuke q)nuke:hdel each desc diR@ / desc sort! q)nuke`:mydir For a general visitor pattern with hdel q)visitNode:{if[11h=type d:key y;.z.s[x]each` sv/:y,/:d;];x y} q)nuke:visitNode[hdel] Unlike Linux, Windows doesn’t allow one to overwrite files which are memory mapped, and it takes some mS after unmapping for that to become possible. hopen , hclose ¶ kdb+ communicates with the file system and other processes through - one-shot functions - handles to persistent connections Connections are opened and closed respectively by hopen and hclose . hopen ¶ Open a connection to a file or process hopen filehandle hopen processhandle hopen (communicationhandle;timeout) hopen port Where filehandle is a symbol atom (or string since V3.6 2017.09.26)communicationhandle is a symbol atom (or string since V3.6 2017.09.26)timeout is milliseconds as an integerport is a local port number as an integer atom connects to a file object or a communication handle, and returns a connection handle as an int. hopen ":path/to/file.txt" / filehandle hopen `:unix://5010 / localhost, Unix domain socket hopen `:tcps://mydb.us.com:5010 / SSL/TLS with hostname hopen(":10.43.23.198:5010";10000) / IP address and timeout hopen 5010 / local port number For IPC compatibility, it serializes to {hopen x}. e.g. hopen each(`:mysymbol; ":mycharvector"; `:localhost:5000; ":localhost:5000"; (`:localhost:5000;1000); (":localhost:5000";1000)) Files¶ If a filehandle specifies a non-existent filepath, it is created, including directories. q)hdat:hopen ":f.dat" / data file (bytes) q)htxt:hopen ":c:/q/test.txt" / text file Passing strings instead of symbols avoids interning of such symbols. This is useful if embedding frequently-changing tokens in the username or password fields. Do not use colons in a file-path. It conflicts with the pattern used to identify a process. To append to these files, the syntax is the same as for IPC: q)r:hdat 0x2324 q)r:htxt "some text\n" q)r:htxt ` sv("asdf";"qwer") Processes¶ Communication handles¶ A communication handle specifies a network resource, and may include authentication credentials for it. There are four forms. - TCP `:host:port[:user:password] host can be a hostname or IP address; omitted, it denotes the localhost- Unix domain socket `:unix://port[:user:password] - (Since V3.4.) Unix domain sockets can have significantly lower latency and higher throughput than a localhost TCP connection - SSL/TLS `:tcps://host:port[:user:password] - tcp with SSL/TLS encryption `:unixs://port - unix domain socket with SSL/TLS encryption - SSL/TLS - Fifo/named pipe - `:fifo://filename - On Unix builds since V3.4. 
hopen `:10.43.23.198:5010 / IP address hopen ":mydb.us.com:5010" / hostname hopen `::5010 / localhost hopen 5010 / localhost hopen `:unix://5010 / localhost, Unix domain socket hopen `:tcps://mydb.us.com:5010 / SSL/TLS with hostname hopen (`:mydb.us.com:5010:elmo:sesame;10000) / full arg list, 10s timeout User and password are required if the server session has been started with the -u or -U command line options, and are passed to .z.pw for (optional) additional processing. The optional timeout applies to the initial connection, not subsequent use of it. To send messages to the remote process: q)h"2+2" / synchronous (GET) 4 q)(neg h)"a:2" / asynchronous (SET) One-shot request¶ If only one synchronous query/request is to be run, a one-shot synchronous request can be used to connect, send the query, get the results, then disconnect. q)`:mydb.us.com:5010:elmo:sesame "1+1" 2 It is more efficient to keep a connection open if there is an opportunity to re-use it for other queries. One-shot sync queries can now execute via `::[(":host:port";timeout);query] . (Since V4.0 2020.03.09.) `::[(":localhost:5000:username:password";5000);"2+3"] ":host:port" can also be a symbol as `:host:port . hclose ¶ Close a connection to a file or process hclose x hclose[x] Where x is a connection handle, closes the connection, and destroys the handle. The corresponding integer can then no longer be applied to an argument. q)show h:hopen `::5001 3i q)h"til 5" 0 1 2 3 4 q)hclose h q)h"til 5" ': Bad file descriptor Async connections: pending data on the connection handle is not sent prior to closing. If flushing is required prior to close, this must be done explicitly. (Since V3.6 2019.09.19) q)neg[h][];hclose h; hclose before V3.6 2019.09.19: If the handle refers to a WebSocket, hclose blocks until any pending data on the connection handle has been sent. .Q.Xf (create file) Communication handle, Connection handle, File system, Interprocess communication Named pipes, SSL/TLS Q for Mortals §11.6.2 Opening a Connection Handle hsym ¶ Symbol/s to file or process symbol/s hsym x hsym[x] Where x is a symbol atom or vector (since V3.1) returns the symbol/s prefixed with a colon if it does not already begin with one. q)hsym`c:/q/test.txt / file path to symbolic file handle `:c:/q/test.txt q)hsym`10.43.23.197 / IP address to symbolic handle `:10.43.23.197 q)hsym `host:port`localhost:8001 / hostname to symbolic handle `:host:port`:localhost:8001 q)hsym `abc`:def`::ghi `:abc`:def`::ghi Identity, Null¶ When the generic null is applied to another value, it is the Identity function. Indexing with the generic null has the same effect. :: Identity¶ Return a value unchanged Applying null to a value¶ (::) x ::[x] Where x is any value, returns x . q)(::)1 1 Applying multiple functions to the same data, with one of the operations as “do nothing”. q)(::;avg)@\:1 2 3 1 2 3 2f Applying a value to null¶ x :: x[::] Identity can also be achieved via indexing. q)1 2 3 :: 1 2 3 and used in variants thereof, e.g. for amends q)@[til 10;(::;2 3);2+] 2 3 6 7 6 7 8 9 10 11 When prefix notation is used, x does not have to be an applicable value. q)q:3[::] / not an applicable value 'type [0] q:3[::] ^ q)q:3 :: q)q~3 1b :: Null¶ Q does not have a dedicated null type. Instead :: is used to denote a generic null value. For example, functions that ‘return no value’ actually return :: . q)enlist {1;}[] :: We use enlist above to force display of a null result – a pure :: is not displayed. When a unary function is called with no arguments, :: is passed in.
q)enlist {x}[] :: Use :: to prevent a mixed list changing type. Since :: has a type for which no vector variant exists, it is useful to prevent a mixed list from being coerced into a vector when all items happen to be of the same type. (This is important when you need to preserve the ability to add non-conforming items later.) q)x:(1;2;3) q)x,:`a 'type but q)x:(::;1;2) q)x,:`a / ok if ¶ Evaluate expression/s under some condition if[test;e1;e2;e3;…;en] Control construct. Where test is an expression that evaluates to an atom of integral typee1 ,e2 , …en are expressions unless test evaluates to zero, the expressions e1 to en are evaluated, in order. The result of if is always the generic null. q)a:100 q)r:"" q)if[a>10;a:20;r:"true"] q)a 20 q)r "true" if is not a function but a control construct. It cannot be iterated or projected. if is often preferred to Cond when a test guards a side effect, such as amending a global. A common use is to catch special or invalid arguments to a function. foo:{[x;y] if[type[x]<0; :x]; / no-op for atom x if[count[y]<>3; '"length"]; / invalid y .. } Name scope¶ The brackets of the expression list do not create lexical scope. Name scope within the brackets is the same as outside them. Setting local variables using if can have unintended consequences. Cond, do , while , Vector Conditional Controlling evaluation Q for Mortals §10.1.4 if ij , ijf ¶ Inner join x ij y ij [x;y] x ijf y ijf[x;y] Where x andy are tablesy is keyed, and its key columns are columns ofx returns two tables joined on the key columns of the second table. The result has one combined record for each row in x that matches a row in y . q)t sym price --------------- IBM 0.7029677 FDP 0.08378167 FDP 0.06046216 FDP 0.658985 IBM 0.2608152 MSFT 0.5433888 q)s sym | ex MC ----| -------- IBM | N 1000 MSFT| CME 250 q)t ij s sym price ex MC ----------------------- IBM 0.7029677 N 1000 IBM 0.2608152 N 1000 MSFT 0.5433888 CME 250 Common columns are replaced from y . q)([] k:1 2 3 4; v:10 20 30 40) ij ([k:2 3 4 5]; v:200 300 400 500;s:`a`b`c`d) k v s ------- 2 200 a 3 300 b 4 400 c ij is a multithreaded primitive. Changes in V3.0 Since V3.0, ij has changed behavior (similarly to lj ): when there are nulls in y , ij uses the y null, where the earlier version left the corresponding value in x unchanged: q)show x:([]a:1 2;b:`x`y;c:10 20) a b c ------ 1 x 10 2 y 20 q)show y:([a:1 2]b:``z;c:1 0N) a| b c -| --- 1| 1 2| z q)x ij y /V3.0 a b c ----- 1 1 2 z q)x ij y /V2.8 a b c ------ 1 x 1 2 z 20 Since 2016.02.17, the earlier version is available in all V3.4 and later versions as ijf . Joins Q for Mortals §9.9.4 Ad Hoc Inner Join in ¶ Whether x is an item of y x in y in[x;y] Where y is - an atom or vector of the same type as x , returns whether atoms ofx are items ofy - a list, returns as a boolean atom whether x is an item ofy Where y is an atom or vector, comparison is left-atomic. q)"x" in "a" / atom in atom 0b q)"x" in "acdexyz" / atom in vector 1b q)"wx" in "acdexyz" / vector in vector 01b q)("abc";("def";"ghi");"jkl")in "bed" / list in vector 010b (110b;000b) 000b Where y is a list there is no iteration through x . 
q)"wx" in ("acdexyz";"abcd";"wx") / vector in list 1b q)("ab";"cd") in (("ab";"cd");0 1 2) / list in list 1b q)any ("ab";"cd") ~/: (("ab";"cd");0 1 2) 1b Further examples: q)1 3 7 6 4 in 5 4 1 6 / which of x are in y 10011b q)1 2 in (9;(1 2;3 4)) / no item of x is in y 00b q)1 2 in (1 2;9) / 1 2 is an item of y 1b q)1 2 in ((1 2;3 4);9) / 1 2 is not an item of y 0b q)(1 2;3 4) in ((1 2;3 4);9) / x is an item of y 1b in uses Find to search for x in y . in is a multithreaded primitive. Queries¶ in is often used with select . q)\l sp.q q)select from p where city in `paris`rome p | name color weight city --| ------------------------ p2| bolt green 17 paris p3| screw blue 17 rome p5| cam blue 12 paris Mixed argument types¶ Optimized support for atom or 1-list y allows a wider input type mix. q)1 2. in 2 01b q)1 2. in 1#2 01b q)1 2. in 0#2 'type [0] 1 2. in 0#2 ^ q)1 2. in 2#2 'type [0] 1 2. in 2#2 ^ There is no plan to extend that to vectors of any length, and it might be removed in a future release. We strongly recommend avoiding relying on this. Mixed argument ranks¶ Results for mixed-rank arguments are not intuitive q)3 in (1 2;3) 0b q)3 in (3;1 2) 1b Instead use Match: q)any ` ~/: (1 2;`) 1b
.tst.desc["Mocking"]{ should["assign the given value to the named variable"]{ `foo mock 0; foo musteq 0; }; should["assign to a non-fully qualifeid name with respect to the current context"]{ `.tst.context mock `.foo; `foo mock 3; .foo.foo musteq 3; }; should["backup a variable if it already exists"]{ `..foo set 0; `..foo mock 1; .tst.restore[]; (get `..foo) musteq 0; delete foo from `.; }; should["remove any variables that did not originally exist when all variables are restored"]{ `foo mock 1; .tst.restore[]; mustthrow["foo"] {get `foo}; }; should["refuse to mock a top level namespace"]{ mustthrow[()] { `.tst mock ` }; }; should["support namespaces at top level that are just local variables"]{ mustnotthrow[()] {`var.name mock 1}; }; }; ================================================================================ FILE: qspec_test_test_spec_runner.q SIZE: 2,275 characters ================================================================================ .tst.desc["Running a specification should"]{ before{ `defaultSpecification mock `context`tstPath`expectations!(`.foo;`:/foo/bar;enlist (`result,())!(),`pass); `.tst.runExpec mock {[x;y]y}; }; should["set the correct context and the correct filepath for its expectations"]{ `.tst.runExpec mock {[x;y]; .tst.context mustmatch `.foo; .tst.tstPath mustmatch `:/foo/bar; x }; .tst.runSpec defaultSpecification; }; should["restore any partitioned directories that were loaded"]{ `..run mock 0b; `.tst.restoreDir mock {`..run mock 1b}; / This won't be executed in the same context .tst.runSpec defaultSpecification; must[`.[`run];"Expected the directory restoring function to have been called"]; }; should["restore the context and filepath to what they previously were"]{ oldContext: .tst.context; oldPath: .tst.tstPath; .tst.runSpec defaultSpecification; oldContext mustmatch .tst.context; oldPath mustmatch .tst.tstPath; }; should["pass only if all expectations passed"]{ run1: .tst.runSpec defaultSpecification; run1[`result] mustmatch `pass; otherSpecification: defaultSpecification; otherSpecification[`expectations]: ((`result,())!(),`fail;(`result,())!(),`pass); run2: .tst.runSpec otherSpecification; run2[`result] mustmatch `fail; }; should["work with an empty expectation list"]{ mustnotthrow[();{.tst.runSpec @[defaultSpecification;`expectations;:;([];result:`symbol$())]}]; mustnotthrow[();{.tst.runSpec @[defaultSpecification;`expectations;:;()]}]; }; }; .tst.desc["Halting execution of specifications"]{ before{ `defaultSpecification mock `context`tstPath`expectations!(`.foo;`:/foo/bar;enlist (`result,())!(),`pass); `.tst.runExpec mock {[x;y]y}; `myOldContext mock .tst.context; `myOldPath mock .tst.tstPath; `.tst.halt mock 1b; }; after{ .tst.halt:0b; .tst.restoreDir[]; .tst.context:myOldContext; .tst.tstPath:myOldPath; }; should["not run further expectations"]{ `.tst.runExpec mock {[x;y]'"error"}; mustnotthrow[();{.tst.runSpec defaultSpecification}]; }; should["not restore the context and filepath to what they previously were"]{ .tst.runSpec defaultSpecification; myOldContext mustnmatch .tst.context; myOldPath mustnmatch .tst.tstPath; }; }; ================================================================================ FILE: qspec_test_test_ui.q SIZE: 2,893 characters ================================================================================ .tst.desc["The Testing UI"]{ alt { before{ `myRestore mock .tst.restore; `.tst.restore mock {}; `.tst.callbacks.descLoaded mock {}; `.tst.mock mock {[x;y]}; }; after{ myRestore[]; }; should["let you create 
specifications"]{ descStr: "This is a description"; myDesc: .tst.desc[descStr]{}; / Not testing the other parts of the UI here 99h musteq type myDesc; descStr musteq myDesc`title; }; should["cause specifications to assume the context that they were defined in"]{ oldContext: string system "d"; system "d .foo"; myDesc: .tst.desc["Blah"]{}; system "d ", oldContext; `.foo musteq myDesc`context; }; should["call the descLoaded callback when a new specification is defined"]{ `callbackCalled mock 0b; `.tst.callbacks.descLoaded mock {`callbackCalled set 1b}; .tst.desc["Blah"]{}; must[callbackCalled;"Expected the descLoaded callback to have been called"]; }; }; should["let you set a before function"]{ `.tst.currentBefore mock .tst.currentBefore; bFunction: {"unique message before"}; .tst.before bFunction; bFunction mustmatch .tst.currentBefore; }; should["let you set an after function"]{ `.tst.currentAfter mock .tst.currentAfter; aFunction: {"unique message after"}; .tst.after aFunction; aFunction mustmatch .tst.currentAfter; }; should["let you create an expectation"]{ `.tst.expecList mock .tst.expecList; description:"unique description expec"; func:{"unique message expec"}; .tst.should[description;func]; e:.tst.fillExpecBA .tst.expecList; 1 musteq count e; description musteq first e[`desc]; func mustmatch first e[`code]; }; should["let you create a fuzz expectation"]{ `.tst.expecList mock .tst.expecList; description:"unique description fuzz"; func:{"unique message fuzz"}; .tst.holds[description;()!();func]; e:.tst.fillExpecBA .tst.expecList; 1 musteq count e; description musteq first e[`desc]; func mustmatch first e[`code]; }; should["let you mask before and after functions inside of alternate blocks"]{ `.tst.currentBefore`.tst.currentAfter mock' .tst[`currentBefore`currentAfter]; `s mock .tst.desc["A spec"]{ .tst.alt { should["retain this expectation"]{}; }; .tst.before {"unique before"}; .tst.after {"unique after"}; .tst.alt { .tst.before {"another before"}; .tst.after {"another after"}; {"another before"} mustmatch .tst.currentBefore; {"another after"} mustmatch .tst.currentAfter; should["retain this expectation too"]{}; }; should["and this one"]{}; }; s[`expectations;`desc] mustmatch ("retain this expectation";"retain this expectation too";"and this one"); s[`expectations;`before] mustmatch (.tst.currentBefore;{"another before"};{"unique before"}); s[`expectations;`after] mustmatch (.tst.currentAfter;{"another after"};{"unique after"}); }; }; ================================================================================ FILE: qtips_cep.q SIZE: 976 characters ================================================================================ \l qtips.q c:.opt.config c,:(`ref;`:ref.csv;"file with reference data") c,:(`eod;0D23:59;"time for end of day event") c,:(`db;`:db;"end of day dump location") c,:(`debug;0b;"don't start engine") c,:(`log;2;"log level") / utility function to generate until timer events / (e)nd (t)ime, ids, (d)uration, (f)unction genu:{[et;ids;d;f]flip(`.timer.until;d;et;flip(f;ids))} / (p)arameter list, current (t)i(m)e main:{[p;tm] r:("J FFJFF";1#",") 0: p `ref; `ref upsert `id`px`ts`qs`vol`rfr xcol r; `price upsert flip ((0!ref)`id`px),tm; tms:(n:count ids:key[ref]`id)#tm; u:genu[p `eod;ids]; .timer.add[`timer.job;`updp;u[d:n?0D00:00:01;`.md.updp];tms]; .timer.add[`timer.job;`updq;u[d+:n?0D00:00:01;`.md.updq];tms]; .timer.add[`timer.job;`updt;u[d+:n?0D00:00:01;`.md.updt];tms]; .timer.add[`timer.job;`dump;(`.md.dump;p `db);p[`eod]+"d"$tm]; } p:.opt.getopt[c;`ref`db] .z.x 
if[`help in key p;-1 .opt.usage[1_c;.z.f];exit 1] .log.lvl:p `log if[not p`debug;main[p;.z.P]] ================================================================================ FILE: qtips_deriv.q SIZE: 1,236 characters ================================================================================ \d .deriv / monte carlo / (S)pot, (s)igma, (r)ate, (t)ime, / (p)ayoff (f)unction, (n)umber of paths mc:{[S;s;r;t;pf;n] z:.stat.bm n?/:count[t]#1f; f:S*prds .stat.gbm[s;r;deltas[first t;t]] z; v:pf[f]*exp neg r*last t; v} / monte carlo result statistics / (e)xpected (v)alue, (err)or, (n)umber of paths mcstat:{`ev`err`n!(avg x;1.96*sdev[x]%sqrt n;n:count x)} / european option payoff / (c)all flag, stri(k)e, (f)uture prices eu:{[c;k;f]0f|$[c;last[f]-k;k-last f]} / asian option payoff / (c)all flag, stri(k)e, (f)uture prices as:{[c;k;f]0f|$[c;avg[f]-k;k-avg f]} / lookback payoff / (c)all flag, stri(k)e, (f)uture prices lb:{[c;k;f]0f|$[c;max[f]-k;k-min f]} / barrier option / (b)arrier (f)unction, (p)ayoff (f)unction / (f)uture prices bo:{[bf;pf;f]bf[f]*pf f} / black-scholes-merton / (S)pot, stri(k)e, (r)ate, (t)ime, / (c)all flag, (s)igma bsm:{[S;k;r;t;c;s] x:(log[S%k]+rt:r*t)%ssrt:s*srt:sqrt t; d1:ssrt+d2:x-.5*ssrt; n1:m*.stat.cnorm d1*m:-1 1f c; n2:m*.stat.cnorm d2*m; p:(S*n1)-n2*pvk:k*pv:exp neg rt; g:(n1p:exp[-.5*d1*d1]%sqrt 2f*acos -1f)%S*ssrt; v:srt*Sn1p:S*n1p; th:neg (r*pvk*n2)+Sn1p*s*.5%srt; rho:pvk*t*n2; d:`price`delta`gamma`vega`theta`rho; d!:(p;n1;g;v;th;rho); if[0h<type p;d:flip d]; d} ================================================================================ FILE: qtips_hist.q SIZE: 1,230 characters ================================================================================ \d .hist / create range of n buckets between (s)tart and (e)nd nrng:{[n;s;e]s+til[1+n]*(e-s)%n} / group data by a (b)inning (f)unction bgroup:{[bf;x] b:nrng[bf x;min x;max x]; g:group b bin x; g:b!x g til count b; g} / convert (w)indow size to number of buckets nw:{[w;x]ceiling (max[x]-min x)%w} / square root bucket algorithm sqrtn:{ceiling sqrt count x} / sturges' bucket algorithm sturges:{ceiling 1f+2f xlog count x} / normalized skew used for doane nskew:{[x].stat.skew[x]%sqrt 6f*(n-2)%(n+1)*3+n:count x} / doane's bucket algorithm doane:{ceiling 1f+(2f xlog count x)+2f xlog 1f+abs nskew x} / scott's windowing algorithm scott:{nw[;x] 3.4908*sdev[x]*count[x] xexp -1f%3f} /freedman-diaconis windowing algorithm fd:{nw[;x] 2f*.stat.iqr[x]*count[x] xexp -1f%3f} / bar-chart plotting function / (c)haracter, (w)indow size, (n)umber of points bar:{[c;w;n]w$n#c} / dot-chart plotting function / (c)haracter, (w)indow size, (n)umber of points dot:{[c;w;n]w$neg[n]$1#c} / use (p)lotting (f)unction to chart (d)ata with max (w)idth chart:{[pf;w;d] n:"j"$(m&w)*n%m:max n:value d; d:d,'enlist each pf[w] each n; d} / example freedman-diaconis histogram composition fdhist:chart[bar"*";30] count each bgroup[fd]@ ================================================================================ FILE: qtips_log.q SIZE: 502 characters ================================================================================ \d .log
// The functions in this library are simple wrappers on top of the standard .z.* date/time primitive values. // Use this library to make it easier to change time zones in the future. If the time zone must be changed, // simply override this library with your custom definitions // We default to assuming GMT times .require.lib`type; / @returns (Timestamp) The current date and time to nanosecond precision .time.now:{ .z.p }; / @returns (Time) The current time to millisecond precision .time.nowAsTime:{ .z.t }; / @returns (Timespan) The current time to nanosecond precision .time.nowAsTimespan:{ .z.n }; / @returns (Date) The current date .time.today:{ .z.d }; / @returns (Time) The time difference of the current process .time.getLocalTimeDifference:{ :.z.T - .z.t; }; ================================================================================ FILE: kdb-common_src_time.util.q SIZE: 1,725 characters ================================================================================ // Time Utility Functions // Copyright (c) 2018 Sport Trades Ltd // Documentation: https://github.com/BuaBook/kdb-common/wiki/time.util.q / Day integer (from 'mod') to 3 letter abbrieviation mapping .time.c.days:`sat`sun`mon`tue`wed`thu`fri; / @param x (Date) The date to check / @returns (Boolean) True if the specified date is a weekday. False otherwise .time.isWeekday:{ if[not .type.isDate x; '"IllegalArgumentException"; ]; :mod[x; 7] within 2 6; }; / @param dt (Date) The date to get the day of / @returns (Symbol) 3 letter abbrieviation of the day of the specified date / @see .time.c.days .time.getDay:{[dt] :.time.c.days dt mod 7; }; / @returns (String) A file name friendly representation of the current date time. Format is 'yyyymmdd_hhMMss_SSS' / @see .time.nowAsTime / @see .time.today .time.nowForFileName:{ timeNow:.time.nowAsTime[]; ddmmss:`second$timeNow; millis:`long$timeNow mod 1000; :except[;".:"] string[.time.today[]],"_",string[ddmmss],"_",string millis; }; / @returns (Timestamp) Current time as timestamp but rounded to the nearest millisecond .time.nowAsMsRoundedTimestamp:{ :.time.today[] + .time.nowAsTime[]; }; / @returns (String) A file name friendly representation of the current date. 
Format is 'yyyymmdd' / @see .time.today[] .time.todayForFileName:{ :except[;"."] string .time.today[]; }; / Rounds nanosecond precision timestamps and timespans to milliseconds .time.roundTimestampToMs:.time.roundTimespanToMs:{ if[not any .type[`isTimestamp`isTimespan] @\: x; '"IllegalArgumentException"; ]; if[.type.isInfinite x; :x; ]; :.Q.t[abs type x]$1000000 * (`long$x) div 1000000; }; ================================================================================ FILE: kdb-common_src_type.q SIZE: 4,990 characters ================================================================================ // Type Checking and Normalisation // Copyright (c) 2016 - 2020 Sport Trades Ltd, (c) 2021 Jaskirat Rajasansir // Documentation: https://github.com/BuaBook/kdb-common/wiki/type.q / All infinite values / @see .type.isInfinite .type.const.infinites:raze (::;neg)@\:(0Wh;0Wi;0Wj;0We;0Wf;0Wp;0Wm;0Wd;0Wz;0Wn;0Wu;0Wv;0Wt); / Mapping of type name based on index in the list (matching .Q.t behaviour) .type.const.types:`mixedList`boolean`guid``byte`short`integer`long`real`float`character`symbol`timestamp`month`date`datetime`timespan`minute`second`time; / Function string to use for all .type.is* functions for higher performance .type.const.typeFunc:"{ --TYPE--~type x }"; .type.init:{ types:.type.const.types where not null .type.const.types; .type.i.setCheckFuncs each types; }; .type.isString:{ :10h~type x; }; .type.isNumber:{ :type[x] in -5 -6 -7 -8 -9h; }; .type.isWholeNumber:{ :type[x] in -5 -6 -7h; }; .type.isDecimal:{ :type[x] in -8 -9h; }; .type.isDateOrTime:{ :type[x] in -12 -13 -14 -15 -16 -17 -18 -19h; }; .type.isFilePath:{ :.type.isSymbol[x] & ":"~first string x; }; .type.isHostPort:{ :.type.isLong[x] | .type.isSymbol[x] & 2 <= count where ":" = string x; }; .type.isDict:.type.isDictionary:{ :99h~type x; }; .type.isTable:.Q.qt; .type.isKeyedTable:{ if[not .type.isTable x; :0b; ]; :0 < count keys x; }; / Supports checking a folder path without being loaded via system "l" .type.isSplayedTable:{ if[.type.isFilePath x; if[not .type.isFolder x; :0b; ]; if[not "/" = last string x; x:` sv x,`; ]; ]; :0b~.Q.qp $[.type.isSymbol x;get;::] x; }; .type.isPartedTable:{ :1b~.Q.qp $[.type.isSymbol x;get;::] x; }; / @returns (Boolean) If one or more columns in the table are enumerated .type.isEnumeratedTable:{ :any .type.isEnumeration each .Q.V x; }; .type.isFunction:{ :type[x] in 100 101 102 103 104 105 106 107 108 109 110 111 112h; }; .type.isEnumeration:{ :abs[type x] within 20 76h; }; .type.isAnymap:{ :77h = type x; }; .type.isInfinite:{ :x in .type.const.infinites; }; / @return (Boolean) True if the input is a file reference and the file exists, false otherwise .type.isFile:{ if[not .type.isFilePath x; '"IllegalArgumentException"; ]; :x~key x; }; / @returns (Boolean) True if the input is a folder reference, the reference exists on disk and the reference is a folder. False otherwise .type.isFolder:{ if[not .type.isFilePath x; '"IllegalArgumentException"; ]; :(not ()~key x) & not .type.isFile x; }; .type.isNamespace:{ :(~). 
1#/:(.q;x); }; .type.isEmptyNamespace:{ :x ~ 1#.q; }; .type.isAtom:{ :type[x] in -1 -2 -3 -4 -5 -6 -7 -8 -9 -10 -11 -12 -13 -14 -15 -16 -17 -18 -19h; }; / Equivalent any of '.type.isMixedList' / '.type.isTypedList' / '.type.isAnymap' / '.type.isNestedList' returning true .type.isList:{ :type[x] in 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96h; }; .type.isTypedList:{ :type[x] in 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19h; }; .type.isNestedList:{ :type[x] in 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96h; }; .type.isDistinct:{ :x~distinct x; }; / @param x (Atom|SymbolList) The input to convert into a symbol / @returns (Symbol) A symbol version of the input .type.ensureSymbol:{ if[.type.isSymbol[x] | .type.isSymbolList x; :x; ]; if[.type.isNumber[x] | .type.isDateOrTime[x] | .type.isBoolean[x] | .type.isGuid x; :`$string x; ]; :`$x; }; / @returns (String) A string version of the input .type.ensureString:{ $[.type.isString x; :x; .type.isDict[x] | .type.isTable[x] | .type.isMixedList x; :.Q.s1 x; .type.isTypedList x; :", " sv .type.ensureString each x; / else :string x ]; }; / @returns (HostPort) A valid host/port connection symbol, converting a port only input as appropriate .type.ensureHostPortSymbol:{ if[not .type.isHostPort x; '"IllegalArgumentException"; ]; if[.type.isLong x; :`$"::",string x; ]; :x; }; / Builds type checking functions .type.is*Type* and .type.is*Type*List from a string template function for highest performance / @param typeName (Symbol) The name of the type to build the functions for / @see .type.const.types .type.i.setCheckFuncs:{[typeName] listType:`short$.type.const.types?typeName; typeName:@[string typeName; 0; upper]; atomName:`$"is",typeName; listName:`$"is",typeName,"List"; set[` sv `.type,atomName;] get ssr[.type.const.typeFunc; "--TYPE--"; .Q.s1 neg listType]; / If type 0, don't create the list version if[not listType = neg listType; set[` sv `.type,listName;] get ssr[.type.const.typeFunc; "--TYPE--"; .Q.s1 listType]; ]; }; ================================================================================ FILE: kdb-common_src_tz.q SIZE: 4,664 characters ================================================================================ // Timezone Conversion Library // Copyright (c) 2019 Sport Trades Ltd // Documentation: https://github.com/BuaBook/kdb-common/wiki/tz.q // INFO: This library is a implementation of the code described at https://code.kx.com/v2/kb/timezones/ .require.lib each `csv`type; / The expected file name containing the timezone configuration .tz.cfg.csvFilename:`$"timezone-config.csv"; / The expected column types of the timezone configuration .tz.cfg.csvTypes:"SPJ"; / Optional path containing the timezone configuration. 
If this is not set, the init function will default to: / `:require-root/config/timezone .tz.cfg.csvPath:`; / The discovered file path of the timezone configuration file .tz.csvSrcPath:`; / The timezone configuration as a kdb table .tz.timezones:(); .tz.init:{ $[null .tz.cfg.csvPath; searchLoc:` sv .require.location.root,`config`timezone; / else searchLoc:.tz.cfg.csvPath ]; .tz.csvSrcPath:` sv searchLoc,.tz.cfg.csvFilename; if[not .type.isFile .tz.csvSrcPath; .log.if.error "No Timezone configuration found in expected location [ Path: ",string[.tz.csvSrcPath]," ]"; .log.if.error " Set '.tz.cfg.csvPath' before initialising the library"; '"NoTzConfigException"; ]; .log.if.info "Initialising Timezone Conversion library [ Source: ",string[.tz.csvSrcPath]," ]"; .tz.i.loadTimezoneCsv[]; }; / @returns (Symbol) All the supported timezones for conversion .tz.getSupportedTimezones:{ :distinct .tz.timezones`timezoneID; }; / Converts a timestamp in UTC into the specified target timezone / @param timestamp (Timestamp|TimestampList) The timestamps to convert / @param targetTimezone (Symbol) The timezone to convert to / @throws InvalidTargetTimezoneException If the timezone specified is not present in the configuration / @see .tz.timezones .tz.utcToTimezone:{[timestamp; targetTimezone] if[not targetTimezone in .tz.timezones`timezoneID; '"InvalidTargetTimezoneException"; ]; convertTable:([] timezoneID:count[timestamp]#targetTimezone; gmtDateTime:(),timestamp); convertRes:(::; first) .type.isAtom timestamp; :convertRes exec gmtDateTime + gmtOffset from aj[`timezoneID`gmtDateTime; convertTable; .tz.timezones]; }; / Converts a timestamp in the specified timezone into the UTC timezone / @param timestamp (Timestamp|TimestampList) The timestamps to convert / @param sourceTimezone (Symbol) The timezone that the specified timestamps are currently in / @throws InvalidSourceTimezoneException If the timezone specified is not present in the configuration / @see .tz.timezones .tz.timezoneToUtc:{[timestamp; sourceTimezone] if[not sourceTimezone in .tz.timezones`timezoneID; '"InvalidSourceTimezoneException"; ]; convertTable:([] timezoneID:count[timestamp]#sourceTimezone; localDateTime:(),timestamp); convertRes:(::; first) .type.isAtom timestamp; :convertRes exec localDateTime - gmtOffset from aj[`timezoneID`localDateTime; convertTable; .tz.timezones]; };
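A minimal usage sketch for the timezone library above, assuming the kdb-common require framework is available and the timezone-config.csv contains a row for Europe/London (the timezone ID and timestamp are illustrative only):
q).require.lib`tz                                          / load the library; its .tz.init reads timezone-config.csv
q).tz.getSupportedTimezones[]                              / configured timezone IDs
q).tz.utcToTimezone[.time.now[]; `$"Europe/London"]        / UTC timestamp shifted by the configured gmtOffset
q).tz.timezoneToUtc[2022.03.31D09:00; `$"Europe/London"]   / local timestamp back to UTC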
Linux production notes¶ Linux kernels KX recommendations for NUMA hardware, Transparent Huge Pages and Huge Pages are different for different Linux kernels. Details below. Look for the icon. Non-Uniform Access Memory (NUMA) hardware¶ Historically, there have been a number of situations where the choice of NUMA memory management settings in the kernel would adversely affect the performance of q on systems using NUMA memory architectures. This resulted in higher-than-expected system-process usage for q, and lower memory performance. For this reason we made certain recommendations for the settings for memory interleave and transparent huge pages. One of the performance issues seen by q in this context is the same as the “swap insanity” issue, as linked below. Essentially, when the Linux kernel decides to swap out dirty pages, due to memory exhaustion, it was observed to affect performance of q, significantly more than expected. A relief for this situation was achieved via setting NUMA interleaving options in the kernel. However, with the introduction of new Linux distributions based on newer kernel versions we now recommend different NUMA settings, depending on the version of the distribution being used. The use of the interleave feature should still be considered for those cases where your code drives the q processes to write to memory pages in excess of the physical memory capacity of the node. For distributions based on kernels - 3.x or higher, please disable interleave , and enablezone_reclaim ; for all situations where memory page demand is constrained to the physical memory space of the node, this should return a better overall performance. - 2.6 or earlier (e.g RHEL 6.7 or CentoS 6.7 or earlier), we recommend to disable NUMA, and instead set an interleave memory policy, especially in the use-case described above. | Linux kernel | NUMA | interleave memory | zone-reclaimed | |---|---|---|---| | 3.x or higher | enable | disable | enable | | 2.6 or earlier | disable | enable | In both cases, q is unaware of whether NUMA is enabled or not. If possible, you should change the NUMA settings via a BIOS setting, if that is supported by your system. Otherwise use the technique below. To fully disable NUMA and enable an interleave memory policy, start q with the numactl command as follows $ numactl --interleave=all q and disable zone-reclaim in the proc settings as follows $ echo 0 > /proc/sys/vm/zone_reclaim_mode The MySQL “swap insanity” problem and the effects of NUMA Although this post is about the impact on MySQL, the issues are the same for other databases such as q. To find out whether NUMA is enabled in your bios, use $ dmesg | grep -i numa And to see if NUMA is enabled on a process basis $ numactl -s Huge Pages and Transparent Huge Pages (THP)¶ A number of customers have been impacted by bugs in the Linux kernel with respect to Transparent Huge Pages. These issues manifest themselves as process crashes, stalls at 100% CPU usage, and sporadic performance degradation. Our recommendation for THP is similar to the recommendation for memory interleaving. | Linux kernel | THP | |---|---| | 2.6 or earlier | disable | | 3.x or higher | enable | Other database vendors are also reporting similar issues with THP. Note that changing Transparent Huge Pages isn’t possible via sysctl(8) . Rather, it requires manually echoing settings into /sys/kernel at or after boot. In /etc/rc.local or by hand. 
To disable THP, do this: if test -f /sys/kernel/mm/transparent_hugepage/enabled; then echo never > /sys/kernel/mm/transparent_hugepage/enabled fi if test -f /sys/kernel/mm/transparent_hugepage/defrag; then echo never > /sys/kernel/mm/transparent_hugepage/defrag fi Some distributions may require a slightly different path, e.g: $ echo never >/sys/kernel/mm/redhat_transparent_hugepage/enabled Another possibility to configure this is via grub transparent_hugepage=never To enable THP for Linux kernel 3.x, do this: if test -f /sys/kernel/mm/transparent_hugepage/enabled; then echo always > /sys/kernel/mm/transparent_hugepage/enabled fi if test -f /sys/kernel/mm/transparent_hugepage/defrag; then echo never > /sys/kernel/mm/transparent_hugepage/defrag fi Q must be restarted to pick up the new setting. Monitoring free disk space¶ In addition to monitoring free disk space for the usual partitions you write to, ensure you also monitor free space of /tmp on Unix, since q uses this area for capturing the output from system commands, such as system "ls" . Disk space for log files It is essential to ensure there is sufficient disk space for tickerplant log files, as in the event of exhausting disk space, the logging mechanism may write a partial record, and then drop records, thereby leaving the log file in a corrupt state due to the partial record. Back up the sym file¶ The sym file is found in the root of your HDB. It is the key to the default enums. Regularly back up the sym file outside the HDB. Compression¶ If you find that q is seg faulting (crashing) when accessing compressed files, try increasing the Linux kernel parameter vm.max_map_count . As root $ sysctl vm.max_map_count=16777216 and/or make a suitable change for this parameter more permanent through /etc/sysctl.conf . As root $ echo "vm.max_map_count = 16777216" | tee -a /etc/sysctl.conf $ sysctl -p You can check current settings with $ more /proc/sys/vm/max_map_count Assuming you are using 128-KB logical size blocks for your compressed files, a general guide is, at a minimum, set max_map_count to one map per 128 KB of memory, or 65530, whichever is higher. If you are encountering a SIGBUS error, please check that the size of /dev/shm is large enough to accommodate the decompressed data. Typically, you should set the size of /dev/shm to be at least as large as a fully decompressed HDB partition. Set ulimit to the higher of 4096 and 1024 plus the number of compressed columns which may be queried concurrently. $ ulimit -n 4096 lz4 compression Certain releases of lz4 do not function correctly within kdb+. Notably, lz4-1.7.5 does not compress, and lz4-1.8.0 appears to hang the process. Kdb+ requires at least lz4-r129 . lz4-1.8.3 works. We recommend using the latest lz4 release available. Timekeeping¶ Timekeeping on production servers is a complicated topic. These are just a few notes which can help. If you are using any of local time functions .z.(TPNZD) q will use the localtime(3) system function to determine time offset from GMT. In some setups (GNU libc) this can cause excessive system calls to /etc/localtime . Setting TZ environment helps this: $ export TZ=America/New_York or from q q)setenv[`TZ;"Europe/London"] One more way of getting excessive system calls when using .z.(pt…) is to have a slow clock source configured on your OS. Modern Linux distributions provide very low overhead functionality for getting current time. Use tsc clocksource to activate this. 
$ echo tsc >/sys/devices/system/clocksource/clocksource0/current_clocksource # list available clocksource on the system $ cat /sys/devices/system/clocksource/clocksource*/available_clocksource If you are using PTP for timekeeping, your PTP hardware vendor might provide their own implementation of time. Check that those utilize VDSO mechanism for exposing time to user space. A load-balancing kdb+ server¶ The script KxSystems/kdb/e/mserve.q can be used to start a load-balancing kdb+ server. The primary server starts a number of secondary servers (in the same host). Clients then send requests to the primary server which, transparently to the client, chooses a secondary server with low CPU load, and forwards the request there. This set-up is useful for read operations, such as queries on historical databases. Each query is executed in one of the secondary threads, hence writes are not replicated. Starting the primary server¶ The arguments are the number of secondary servers, and the name of a q script that to be executed by the secondary servers at start-up. Typically this script reads in a database from disk into memory. $ q mserve.q -p 5001 2 startup.q Client request¶ In the client, connect to the server with hopen q)h: hopen `:localhost:5001 Synchronous messages are executed at the primary server. q)h "xs: til 9" q)h "xs" 0 1 2 3 4 5 6 7 8 Asynchronous messages are forwarded to one of the secondary servers, transparently to the client. The code below issues an asynchronous request, then blocks on the handle waiting for a result to be returned. This is called deferred synchronous. q)(neg h) "select sym,price from trade where size > 50000" ; h[] Deferred synchronous requests can also be made from non-q clients. For example, the Java based example grid viewer code can be modified to issue a deferred synchronous request rather than a synchronous request by sending an async request and blocking on the handle in exactly the same way. The line model.setFlip((c.Flip) c.k(query)); should be modified to c.ks(query); model.setFlip((c.Flip) c.k());
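To reuse the deferred-synchronous pattern shown earlier from q, a small helper can wrap the send-then-block sequence. This is a sketch; the handle h and the query text are assumptions:
q)dsync:{[h;qry](neg h)qry; h[]}    / send async, then block on the handle for the reply
q)dsync[h;"select sym,price from trade where size > 50000"]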
Overview¶ In this document, we compare compression algorithms using a popular financial dataset from the New York Stock Exchange (NYSE). There are three key metrics to evaluate compression algorithms. - Compression ratio - Compression speed - Decompression speed These metrics impact storage cost, data write time and query response times respectively. Both compression and decompression speeds depend on the hardware - primarily on storage speed and the compute (CPU) capacity. Our partner, Intel(R), provided access to two systems with different storage characteristics in its FasterLab, a facility dedicated to optimization of Financial Services Industry (FSI) solutions. The first system has fast local disks, while the second system comes with a slower NFS storage. The next sections describe the results in detail. Compression ratios¶ Compression ratio measures the relative reduction in size of data. This ratio is calculated by dividing the uncompressed size by the compressed size. For example, a ratio of 4 indicates that the data consumes a quarter of the disk space after compression. In this document, we show the relative sizes after compression, which is the inverse of compression ratios. Lower values indicate better compression. The numbers are in percentages, so 25 corresponds to compression ratio 4. The block size parameter was set to 17, which translates to logical block size of 128 KB. The table-level results are presented below. zstd outperforms lz4 and snappy by nearly 2x, though it is only marginally better than gzip . The following tables provide a column-level breakdown. The columns are ordered by entropy in decreasing order. Low-entropy columns typically compress well so those at the top of the table likely contribute most to disk savings. Gradient background coloring highlights results (dark red = poor compression). 
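For reference, a hedged sketch of how such compression settings are applied in kdb+. The algorithm codes and level ranges follow the .z.zd / set documentation (0 none, 1 q IPC, 2 gzip, 3 snappy, 4 lz4hc, 5 zstd); verify them against your kdb+ version, and treat the file path and parameter choices as illustrative only:
q).z.zd:17 2 6                                                    / default for all writes: 2^17=128 KB blocks, gzip level 6
q)(`:db/2022.03.31/quote/Bid_Price;17;5;1) set quote`Bid_Price    / one column written with zstd level 1
q)-21!`:db/2022.03.31/quote/Bid_Price                             / compression statistics for the written file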
Table quote : | Compression Algorithm | gzip | lz4 | qipc | snappy | zstd | |||||||||||| |---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---| | Compression Level | 1 | 5 | 6 | 9 | 1 | 5 | 9 | 12 | 16 | 0 | 0 | -7 | 1 | 10 | 12 | 14 | 22 | | Participant_Timestamp | 46.8 | 44.9 | 45.0 | 45.2 | 70.2 | 69.5 | 69.0 | 68.9 | 68.9 | 100.0 | 71.5 | 96.5 | 41.3 | 40.9 | 40.9 | 41.0 | 41.0 | | Time | 38.5 | 36.3 | 36.3 | 36.4 | 61.3 | 61.1 | 60.5 | 60.4 | 60.4 | 82.5 | 62.4 | 81.6 | 31.3 | 31.3 | 31.3 | 33.8 | 33.8 | | Sequence_Number | 41.3 | 41.1 | 41.2 | 41.2 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 81.4 | 81.4 | 81.4 | 35.1 | 36.8 | | Offer_Price | 8.6 | 6.9 | 6.6 | 6.5 | 13.5 | 11.1 | 10.1 | 9.6 | 9.6 | 17.5 | 15.9 | 16.3 | 7.7 | 6.4 | 6.4 | 6.1 | 5.6 | | Bid_Price | 8.6 | 6.9 | 6.6 | 6.5 | 13.5 | 11.1 | 10.1 | 9.6 | 9.6 | 17.5 | 15.8 | 16.3 | 7.6 | 6.4 | 6.4 | 6.1 | 5.6 | | Symbol | 0.6 | 0.2 | 0.2 | 0.2 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 1.7 | 4.7 | 0.0 | 0.1 | 0.0 | 0.0 | 0.0 | 0.0 | | Offer_Size | 17.1 | 14.9 | 14.3 | 13.5 | 29.8 | 23.3 | 19.2 | 17.1 | 17.1 | 35.3 | 34.3 | 28.0 | 16.3 | 13.6 | 13.0 | 12.7 | 11.8 | | Bid_Size | 16.9 | 14.7 | 14.0 | 13.2 | 29.4 | 23.0 | 18.9 | 16.8 | 16.8 | 34.8 | 33.9 | 27.7 | 16.0 | 13.4 | 12.8 | 12.5 | 11.6 | | Exchange | 47.3 | 44.6 | 44.2 | 44.0 | 65.7 | 60.0 | 58.7 | 57.8 | 57.8 | 99.9 | 71.4 | 83.1 | 41.8 | 43.9 | 43.6 | 40.5 | 40.3 | | National_BBO_Ind | 20.1 | 15.5 | 14.8 | 13.5 | 33.2 | 25.5 | 20.4 | 17.0 | 17.0 | 80.7 | 30.4 | 31.8 | 18.1 | 14.8 | 13.7 | 13.1 | 12.7 | | Short_Sale_Restriction_Indicator | 0.5 | 0.1 | 0.1 | 0.1 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.8 | 4.7 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | Source_Of_Quote | 0.5 | 0.1 | 0.1 | 0.1 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.8 | 4.7 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | Retail_Interest_Indicator | 12.6 | 9.6 | 9.2 | 8.4 | 18.8 | 14.7 | 12.1 | 10.5 | 10.5 | 34.0 | 19.2 | 21.1 | 11.0 | 9.0 | 8.6 | 8.3 | 7.9 | | National_BBO_LULD_Indicator | 0.5 | 0.1 | 0.1 | 0.1 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.8 | 4.7 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | Quote_Condition | 0.5 | 0.1 | 0.1 | 0.1 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.9 | 4.7 | 0.1 | 0.1 | 0.0 | 0.0 | 0.0 | 0.0 | | SIP_Generated_Message_Identifier | 0.5 | 0.1 | 0.1 | 0.1 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.8 | 4.7 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | Security_Status_Indicator | 0.5 | 0.1 | 0.1 | 0.1 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.8 | 4.7 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | LULD_BBO_Indicator | 0.5 | 0.1 | 0.1 | 0.1 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.8 | 4.7 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | FINRA_BBO_Indicator | 0.5 | 0.1 | 0.1 | 0.1 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.8 | 4.7 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | Quote_Cancel_Correction | 0.5 | 0.1 | 0.1 | 0.1 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.8 | 4.7 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | FINRA_ADF_MPID_Indicator | 0.5 | 0.1 | 0.1 | 0.1 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.8 | 4.7 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | FINRA_ADF_Market_Participant_Quote_Indicator | 0.5 | 0.1 | 0.1 | 0.1 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.8 | 4.7 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | FINRA_ADF_Timestamp | 0.6 | 0.2 | 0.2 | 0.2 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 2.4 | 4.7 | 0.0 | 3.2 | 0.0 | 0.0 | 0.0 | 0.0 | Table trade : | Compression Algorithm | gzip | lz4 | qipc | snappy | zstd | |||||||||||| |---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---| | Compression Level | 1 | 5 | 6 | 9 | 1 | 5 | 9 | 12 | 16 | 0 | 0 | -7 | 1 | 10 | 12 | 14 | 22 
| | Time | 45.5 | 43.2 | 43.2 | 43.4 | 67.0 | 66.8 | 66.3 | 66.2 | 66.2 | 97.8 | 68.6 | 86.2 | 39.5 | 39.6 | 39.6 | 41.5 | 41.7 | | ParticipantTimestamp | 46.8 | 44.5 | 44.6 | 44.8 | 63.5 | 63.1 | 62.9 | 62.9 | 62.9 | 99.2 | 66.4 | 80.7 | 43.4 | 42.7 | 42.7 | 40.7 | 40.3 | | SequenceNumber | 44.0 | 43.8 | 43.8 | 43.8 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 80.3 | 80.3 | 80.3 | 39.2 | 43.1 | | TradeId | 26.1 | 22.3 | 22.1 | 22.0 | 43.8 | 42.7 | 42.5 | 42.3 | 42.3 | 75.8 | 47.7 | 39.1 | 21.4 | 18.6 | 18.6 | 16.6 | 16.5 | | TradePrice | 19.6 | 16.9 | 16.7 | 16.6 | 28.6 | 24.1 | 22.9 | 22.4 | 22.4 | 30.2 | 29.9 | 36.5 | 20.2 | 17.1 | 17.1 | 16.0 | 15.2 | | Symbol | 0.6 | 0.2 | 0.2 | 0.2 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 1.7 | 4.7 | 0.1 | 0.2 | 0.0 | 0.0 | 0.0 | 0.0 | | TradeReportingFacilityTRFTimestamp | 20.9 | 19.8 | 19.7 | 19.7 | 26.0 | 24.8 | 24.1 | 24.1 | 24.1 | 29.9 | 28.7 | 31.0 | 23.6 | 19.5 | 19.4 | 18.7 | 18.5 | | TradeVolume | 29.3 | 26.8 | 25.7 | 23.9 | 40.1 | 32.5 | 29.2 | 27.8 | 27.8 | 49.4 | 45.4 | 37.7 | 27.3 | 23.8 | 23.1 | 22.9 | 22.0 | | Exchange | 42.7 | 39.3 | 38.7 | 38.1 | 58.7 | 52.2 | 50.1 | 48.9 | 48.9 | 100.0 | 60.7 | 68.1 | 38.9 | 38.3 | 37.9 | 36.2 | 35.8 | | SaleCondition | 7.7 | 5.9 | 5.2 | 4.6 | 13.9 | 9.9 | 7.5 | 6.2 | 6.2 | 15.2 | 16.5 | 18.1 | 7.8 | 5.3 | 4.8 | 4.9 | 4.2 | | SourceofTrade | 0.5 | 0.1 | 0.1 | 0.1 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.9 | 4.7 | 0.1 | 0.0 | 0.0 | 0.0 | 0.1 | 0.1 | | TradeThroughExemptIndicator | 18.5 | 13.6 | 13.0 | 12.0 | 32.6 | 24.8 | 19.2 | 15.4 | 15.4 | 87.2 | 29.8 | 29.4 | 16.5 | 13.3 | 12.5 | 11.7 | 11.1 | | TradeReportingFacility | 9.8 | 7.1 | 6.8 | 6.3 | 15.8 | 12.3 | 9.6 | 8.1 | 8.1 | 25.3 | 16.5 | 17.0 | 8.3 | 6.8 | 6.4 | 6.1 | 5.6 | | TradeCorrectionIndicator | 0.5 | 0.1 | 0.1 | 0.1 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.8 | 4.7 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | TradeStopStockIndicator | 0.5 | 0.1 | 0.1 | 0.1 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.8 | 4.7 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | zstd excels at column-level compression, though its advantage over gzip is less pronounced at the table level. This discrepancy arises because minor differences in highly compressible columns (e.g. 0.025% vs. 0.1% relative size) have negligible impact on overall storage, whereas larger columns dominate. For example: Sequence_Number : A typical capital markets column (monotonically increasing integers with repetitions) shows a stark contrast:gzip : 40% relative sizezstd : 80% relative size (except at high compression levels)lz4 /snappy /qipc : No compression (100% relative size) qipc does not compress all columns by default. The conditions under which qipc applies compression are documented precisely. Key Observations¶ gzip andzstd deliver the best overall ratios.gzip significantly outperformszstd forSequence_Number (except atzstd 's highest levels).zstd excels at small file compression, which can be particularly useful for columns with many repetitions (e.g., wide schemas in IoT applications).- The best compression ratio can be achieved by using mixed compression strategies. You can set differenct compression for each column by passing a dictionaty to .z.zd gzip levels 6–9 show minimal difference, but level 1 performs poorly on low-entropy columns.qipc has the worst compression ratio among the tested algorithms. Write speed, compression times¶ The typical bottleneck of data ingestion is persiting tables to a storage. The write time determines the maximal ingestion rate. 
Writing compressed data to storage involves three sequential steps: - Serializing the data - Compressing the serialized bytes - Persisting the compressed output These steps are executed by the set command in kdb+. Although the underlying compression library (e.g. gzip , zstd ) may support multithreading, set is single-threaded. The compression time ratios relative to the uncompressed set are in the tables below. Value e.g. 2 means that it takes twice as much to compress and save the table than to save (memory map) the table. Smaller numbers are better. Results on system with block storage: The following tables provide a column-level breakdown. Green cells mark speed improvement over uncompressed set, red cells highlight significant slowdown. | Compression Algorithm | gzip | lz4 | qipc | snappy | zstd | |||||||||||| |---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---| | Compression Level | 1 | 5 | 6 | 9 | 1 | 5 | 9 | 12 | 16 | 0 | 0 | -7 | 1 | 10 | 12 | 14 | 22 | | Time | 14.3 | 23.5 | 34.9 | 171.3 | 13.7 | 14.6 | 21.6 | 89.9 | 89.5 | 2.4 | 2.9 | 1.8 | 3.6 | 19.6 | 38.4 | 69.8 | 143.1 | | ParticipantTimestamp | 16.6 | 26.2 | 38.4 | 138.1 | 15.2 | 19.0 | 22.0 | 74.8 | 74.2 | 2.8 | 3.2 | 1.8 | 3.9 | 21.3 | 33.0 | 62.5 | 158.6 | | SequenceNumber | 19.8 | 35.2 | 43.3 | 46.9 | 23.3 | 22.6 | 26.5 | 28.3 | 27.6 | 2.0 | 0.7 | 0.8 | 1.8 | 7.1 | 5.6 | 65.7 | 113.3 | | TradeId | 2.7 | 6.5 | 12.9 | 74.8 | 2.8 | 4.2 | 9.3 | 49.0 | 48.7 | 1.3 | 1.3 | 1.4 | 1.5 | 8.8 | 20.1 | 19.4 | 127.5 | | TradePrice | 8.5 | 17.3 | 27.4 | 53.7 | 8.0 | 13.8 | 35.0 | 65.1 | 65.1 | 2.5 | 2.3 | 2.5 | 3.2 | 27.2 | 67.6 | 84.5 | 374.2 | | Symbol | 2.2 | 4.8 | 5.2 | 5.5 | 1.5 | 0.6 | 1.5 | 6.1 | 6.1 | 0.8 | 0.4 | 0.3 | 0.4 | 0.7 | 0.7 | 3.6 | 4.5 | | TradeReportingFacilityTRFTimestamp | 7.5 | 11.4 | 15.3 | 88.8 | 7.6 | 8.8 | 25.3 | 368.5 | 368.9 | 2.0 | 1.7 | 1.6 | 2.4 | 12.9 | 64.1 | 54.5 | 482.5 | | TradeVolume | 12.4 | 33.5 | 81.1 | 877.8 | 8.9 | 23.9 | 148.2 | 519.5 | 518.5 | 2.4 | 2.8 | 3.5 | 3.9 | 44.6 | 122.9 | 163.9 | 436.5 | | Exchange | 15.3 | 55.5 | 117.5 | 525.3 | 15.8 | 39.6 | 171.6 | 294.0 | 295.0 | 3.6 | 4.4 | 3.8 | 5.0 | 62.2 | 104.0 | 156.7 | 362.6 | | SaleCondition | 4.5 | 9.2 | 20.5 | 197.3 | 6.3 | 13.0 | 99.2 | 537.3 | 537.4 | 1.6 | 1.5 | 2.1 | 2.3 | 26.6 | 128.4 | 118.5 | 712.7 | | SourceofTrade | 2.2 | 4.4 | 4.7 | 5.1 | 2.5 | 3.7 | 1.7 | 3.7 | 3.6 | 0.7 | 0.3 | 0.3 | 0.3 | 1.1 | 0.9 | 3.0 | 18.7 | | TradeThroughExemptIndicator | 11.0 | 42.7 | 97.4 | 2104.0 | 12.4 | 35.6 | 347.6 | 3033.8 | 3035.3 | 4.7 | 2.1 | 3.2 | 3.3 | 102.5 | 200.0 | 216.9 | 784.7 | | TradeReportingFacility | 5.9 | 19.1 | 39.5 | 610.3 | 7.1 | 17.6 | 156.2 | 900.3 | 901.3 | 2.5 | 1.2 | 1.7 | 1.8 | 41.2 | 92.6 | 94.0 | 374.5 | | TradeCorrectionIndicator | 2.4 | 4.6 | 4.9 | 4.7 | 0.6 | 0.6 | 0.7 | 0.9 | 1.2 | 0.8 | 0.4 | 0.4 | 0.4 | 1.0 | 0.8 | 3.0 | 3.9 | | TradeStopStockIndicator | 2.9 | 5.4 | 5.8 | 5.4 | 0.5 | 0.3 | 0.5 | 0.5 | 0.5 | 1.0 | 0.5 | 0.4 | 0.4 | 1.2 | 0.9 | 3.5 | 1.8 | Key Observations¶ - Compression typically slows down set operations. - Notable exceptions: snappy andzstd level 1 actually improve write speed for certain column types. For these columns,zstd provides significantly better compression ratios thansnappy . - The level has a substantial impact on compression time, even for algorithms like lz4 ; for example,zstd level 10 is considerably faster than level 22. 
- Higher compression levels rarely justify the performance cost, offering minimal improvement in ratio at the expense of significantly slower compression. zstd level 1 offers the fastest compression.- Although the general principle that lower zstd levels equate to faster speeds (with reduced compression) holds true, the kdb+ wrapper introduces exceptions, making it challenging to pinpoint the optimal compression level. Let us see how compression performs with a slower storage. | Compression Algorithm | gzip | lz4 | qipc | snappy | zstd | |||||||||||| |---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---| | Compression Level | 1 | 5 | 6 | 9 | 1 | 5 | 9 | 12 | 16 | 0 | 0 | -7 | 1 | 10 | 12 | 14 | 22 | | Time | 1.9 | 2.9 | 4.4 | 22.3 | 1.8 | 2.2 | 2.7 | 12.2 | 12.1 | 1.2 | 1.0 | 0.9 | 0.7 | 2.3 | 4.4 | 8.3 | 16.4 | | ParticipantTimestamp | 2.0 | 2.9 | 4.4 | 16.7 | 1.8 | 2.2 | 2.5 | 7.8 | 8.3 | 1.2 | 1.0 | 0.9 | 0.8 | 2.4 | 3.7 | 6.9 | 16.4 | | SequenceNumber | 2.1 | 3.6 | 4.8 | 5.3 | 2.7 | 2.9 | 2.9 | 3.1 | 3.1 | 1.1 | 1.0 | 1.1 | 0.9 | 1.3 | 1.2 | 6.9 | 11.5 | | TradeId | 1.1 | 2.2 | 4.8 | 30.2 | 1.1 | 1.5 | 3.1 | 23.2 | 23.2 | 1.0 | 0.8 | 0.8 | 0.7 | 3.1 | 7.6 | 7.2 | 46.7 | | TradePrice | 1.0 | 1.8 | 2.9 | 5.8 | 0.9 | 1.6 | 3.7 | 7.0 | 7.0 | 0.5 | 0.5 | 0.6 | 0.5 | 2.7 | 7.0 | 8.7 | 36.1 | | Symbol | 0.2 | 0.3 | 0.3 | 0.4 | 0.1 | 0.1 | 0.1 | 0.5 | 0.5 | 0.1 | 0.1 | 0.0 | 0.0 | 0.1 | 0.1 | 0.3 | 0.4 | | TradeReportingFacilityTRFTimestamp | 0.9 | 1.2 | 1.7 | 10.6 | 0.8 | 1.0 | 2.6 | 50.2 | 50.2 | 0.5 | 0.4 | 0.4 | 0.4 | 1.4 | 6.9 | 6.0 | 47.2 | | TradeVolume | 1.4 | 3.3 | 7.7 | 98.2 | 1.1 | 2.6 | 13.4 | 58.5 | 58.4 | 0.7 | 0.8 | 0.7 | 0.6 | 4.3 | 11.9 | 16.9 | 42.9 | | Exchange | 1.9 | 5.6 | 11.4 | 54.0 | 1.9 | 4.3 | 16.7 | 28.7 | 28.6 | 1.4 | 1.1 | 1.1 | 0.8 | 6.3 | 10.4 | 15.8 | 35.6 | | SaleCondition | 0.5 | 0.9 | 2.2 | 22.3 | 0.6 | 1.5 | 10.7 | 60.3 | 60.2 | 0.3 | 0.3 | 0.4 | 0.3 | 2.7 | 13.1 | 12.3 | 69.7 | | SourceofTrade | 0.3 | 0.4 | 0.4 | 0.4 | 0.1 | 0.3 | 0.2 | 0.2 | 0.2 | 0.1 | 0.1 | 0.0 | 0.0 | 0.1 | 0.1 | 0.4 | 1.6 | | TradeThroughExemptIndicator | 1.3 | 4.5 | 10.3 | 211.2 | 1.5 | 3.7 | 34.1 | 306.3 | 306.2 | 1.3 | 0.6 | 0.7 | 0.6 | 9.4 | 18.8 | 20.3 | 71.0 | | TradeReportingFacility | 0.7 | 2.1 | 4.5 | 67.3 | 0.7 | 1.9 | 16.8 | 100.7 | 100.6 | 0.5 | 0.3 | 0.4 | 0.3 | 4.2 | 9.9 | 9.9 | 37.8 | | TradeCorrectionIndicator | 0.2 | 0.3 | 0.3 | 0.3 | 0.0 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.0 | 0.1 | 0.1 | 0.3 | 0.4 | | TradeStopStockIndicator | 0.2 | 0.4 | 0.4 | 0.4 | 0.0 | 0.1 | 0.0 | 0.0 | 0.1 | 0.2 | 0.1 | 0.0 | 0.0 | 0.1 | 0.1 | 0.4 | 0.3 | These results — smaller ratios compared to uncompressed set and more green cells — indicate that the performance benefits of compression are amplified on slower disks. Notably, only zstd at level 1 consistently outperforms uncompressed set across all columns, while other compression methods generally slow down the set operation. Scaling, syncing and appending¶ Because the set command is single-threaded, kdb+ systems often persist columns in parallel by peach when memory allows. In our case, the number of columns is smaller than the available cores so parallelizing provided clear speed advantage. Persisting all columns simultaneously took roughly the same time as persisting the largest column (TradeID ). In real life, the writer process may have other responsibilities like ingesting new data or serving queries. These responsibilities also compete for CPU. 
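A hedged sketch of the parallel column write pattern described above. The paths, table name, and compression parameters are illustrative, and a real splayed write would also maintain the .d file and enumerate symbol columns:
q)dir:`:db/2022.03.31/trade
q)writeCol:{[d;t;c](` sv d,c;17;5;1) set t c}     / one compressed file per column (128 KB blocks, zstd level 1)
q){writeCol[dir;trade;x]} peach cols trade        / requires q started with secondary threads (-s)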
Data persisted via set may remain in the OS buffer cache before being written to disk, risking data loss if the system crashes. The user can trigger the flush with the fsync system call. If kdb+ processes wrote several files simultaneously and a consistent state is desired then system calls fsync and syncfs are used. These calls block the kdb+ process and so their execution time contributes to the write time. In our experiment fsync times were marginal compared to set , especially on NFS. The network is the bottleneck for NFS and the underlying storage system has plenty of time to flush the data. While set is a common persistence method, intraday writedowns often use appends, implemented by amend at like .[file; (); ,; chunk] . set also chunks large vector writes behind the scenes, this explains why our test showed no speed difference between the two methods, regardless of compression. Query response times¶ When data is stored in a compressed format, it must be decompressed before processing queries. The decompression speed directly impacts query execution time. In the query test, we executed 14 distinct queries. The queries vary in filtering, grouping and aggregation parameters. Some filters trigger sequential reads, and queries with several filtering constraints perform random reads. We included queries with explicit parallel iteration (peach) and with as-of join as well. Data in the financial sector is streaming in simultaneously and as-of joins play a critical role in joining various tables. The table below details each query’s performance metrics: - Elapsed time in milliseconds (measured via \ts ) - Storage read in KB (returned by iostat ) - Query memory need of the query in KB (output of \ts ) - Memory need of the result table in KB | Query | Elapsed time ms | Storage read (KB) | Query memory need (KB) | Result memory need (KB) | |---|---|---|---|---| | select from quote where date=2022.03.31, i<500000000 | 13379 | 36137692 | 44023417 | 39728448 | aj[sym time`ex; select from tradeNorm where date=2022.03.31, size>500000; select from quoteNorm where date=2022.03.31] | 6845 | 2820024 | 9797897 | 82 | | select nr: count i, avgMid: avg (bid + ask) % 2 by sym from quoteNorm where date=2022.03.31 | 5572 | 13914952 | 42950069 | 327 | | select from tradeNorm where date=2022.03.31, i<>0 | 553 | 1721124 | 4160752 | 3087008 | aj[Symbol Time; select from trade where date=2022.03.31, Symbol in someSyms; select from quote where date=2022.03.31] | 438 | 62524 | 8591330 | 1210 | | distinct select sym, ex from tradeNorm where date=2022.03.31, size > 700000 | 343 | 379256 | 1207962 | 2 | | raze {select from quoteNorm where date=2022.03.31, sym=x} each infreqIdList | 76 | 2736 | 1451 | 491 | | raze {select first sym, wsumAsk:asize wsum ask, wsumBid: bsize wsum bid, sdevask:sdev ask,sdevbid:sdev bid, corPrice:ask cor bid, corSize: asize cor bsize from quoteNorm where date=2022.03.31, sym=x} each infreqIdList | 49 | 1996 | 38 | 3 | | select bsize wavg bid, asize wavg ask from quoteNorm where date=2022.03.31, sym in someSyms | 23 | 10888 | 4197 | 0 | | raze {select from quoteNorm where date=2022.03.31, sym=x} peach infreqIdList | 19 | 2724 | 564 | 491 | | select from quote where date=2022.03.31, Symbol=`VFVA | 16 | 9400 | 10226 | 9699 | | raze {select from quoteNorm where date=2022.03.31, sym=x, 4000<bsize+asize} peach infreqIdList | 16 | 1204 | 9 | 0 | | raze {select first sym, wsumAsk:asize wsum ask, wsumBid: bsize wsum bid, sdevask:sdev ask,sdevbid:sdev bid, corPrice:ask cor bid, corSize: asize cor 
bsize from quoteNorm where date=2022.03.31, sym=x} peach infreqIdList | 12 | 1996 | 15 | 3 | | select medMidSize: med (bsize + asize) % 2 from quoteNorm where date=2022.03.31, sym=`CIIG.W | 2 | 464 | 35 | 0 | We started the kdb+ processes with numactl -N 0 -m 0 and 144 threads (-s 144 ). The Linux kernel parameter read_ahead_kb was set to 128. Query time ratios (shown in subsequent tables) compare performance with/without compression. Value, for example, 2 means that the query runs twice as fast without compression. Lower ratios are better. Dark red highlighting denotes significant slowdowns under compression. The queries are sorted descending by the storage read (iostat output). To isolate caching effects, we cleared the page cache (echo 3 | sudo tee /proc/sys/vm/drop_caches ) and executed the queries twice. Data came from the storage during the first execution, then from the page cache (memory). | Compression Algorithm | gzip | lz4 | qipc | snappy | zstd | |||||||||||| |---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---| | Compression Level | 1 | 5 | 6 | 9 | 1 | 5 | 9 | 12 | 16 | 0 | 0 | -7 | 1 | 10 | 12 | 14 | 22 | | query | ||||||||||||||||| | select from quote where date=2022.03.31, i<500000000 | 4.7 | 4.7 | 4.7 | 4.7 | 4.7 | 4.6 | 4.6 | 4.6 | 4.6 | 4.7 | 4.6 | 4.7 | 4.7 | 4.7 | 4.7 | 4.7 | 4.7 | | select nr: count i, avgMid: avg (bid + ask) % 2 by sym from quoteNorm where date=2022.03.31 | 4.0 | 4.1 | 4.1 | 4.1 | 4.0 | 4.0 | 3.9 | 3.9 | 3.9 | 4.1 | 4.1 | 4.1 | 4.1 | 4.1 | 4.1 | 4.1 | 4.1 | | aj[`sym`time`ex; select from tradeNorm where date=2022.03.31, size>500000; select from quoteNorm where date=2022.03.31] | 1.9 | 1.8 | 1.8 | 1.7 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | 1.5 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.3 | 1.3 | | select from tradeNorm where date=2022.03.31, i<>0 | 5.4 | 5.5 | 5.4 | 5.4 | 5.4 | 5.4 | 5.4 | 5.4 | 5.4 | 5.3 | 5.3 | 5.4 | 5.5 | 5.5 | 5.5 | 5.5 | 5.5 | | distinct select sym, ex from tradeNorm where date=2022.03.31, size > 700000 | 2.6 | 2.6 | 2.6 | 2.6 | 2.2 | 2.2 | 2.1 | 2.1 | 2.1 | 2.3 | 2.4 | 2.2 | 2.2 | 2.2 | 2.2 | 2.2 | 2.2 | | aj[`Symbol`Time; select from trade where date=2022.03.31, Symbol in someSyms; select from quote where date=2022.03.31] | 1.3 | 1.4 | 1.3 | 1.3 | 1.1 | 1.1 | 1.0 | 1.0 | 1.0 | 1.1 | 1.1 | 1.2 | 1.2 | 1.3 | 1.2 | 1.3 | 1.2 | | select bsize wavg bid, asize wavg ask from quoteNorm where date=2022.03.31, sym in someSyms | 2.2 | 1.9 | 2.0 | 1.9 | 1.4 | 1.3 | 1.2 | 1.2 | 1.3 | 2.0 | 1.7 | 1.7 | 1.5 | 1.5 | 1.3 | 1.6 | 1.4 | | select from quote where date=2022.03.31, Symbol=`VFVA | 2.6 | 2.5 | 2.6 | 2.5 | 1.8 | 1.8 | 1.8 | 1.9 | 1.8 | 1.9 | 1.8 | 2.5 | 2.6 | 2.6 | 2.6 | 2.6 | 2.7 | | raze {select from quoteNorm where date=2022.03.31, sym=x} each infreqIdList | 5.6 | 5.6 | 5.6 | 5.5 | 3.9 | 3.7 | 3.7 | 3.6 | 3.4 | 4.7 | 4.1 | 4.8 | 4.8 | 4.7 | 4.6 | 4.8 | 4.6 | | raze {select from quoteNorm where date=2022.03.31, sym=x} peach infreqIdList | 12.1 | 12.5 | 12.2 | 11.7 | 11.8 | 11.5 | 11.6 | 11.6 | 12.3 | 10.4 | 11.0 | 13.9 | 13.8 | 13.8 | 15.7 | 14.2 | 12.5 | | raze {select first sym, wsumAsk:asize wsum ask, wsumBid: bsize wsum bid, sdevask:sdev ask,sdevbid:sdev bid, corPrice:ask cor bid, corSize: asize cor bsize from quoteNorm where date=2022.03.31, sym=x} each infreqIdList | 5.8 | 5.7 | 5.7 | 5.6 | 4.1 | 3.9 | 4.1 | 3.8 | 3.7 | 5.4 | 4.5 | 5.2 | 5.4 | 5.1 | 5.0 | 5.3 | 5.0 | | raze {select first sym, wsumAsk:asize wsum ask, wsumBid: bsize wsum bid, sdevask:sdev ask,sdevbid:sdev bid, corPrice:ask cor bid, corSize: asize cor bsize 
from quoteNorm where date=2022.03.31, sym=x} peach infreqIdList | 12.8 | 13.3 | 14.3 | 13.3 | 12.8 | 14.8 | 13.8 | 13.2 | 14.8 | 14.9 | 14.7 | 15.1 | 15.9 | 18.4 | 16.5 | 14.4 | 14.7 | raze {select from quoteNorm where date=2022.03.31, sym=x, 4000| 8.9 | 8.4 | 8.4 | 9.2 | 8.2 | 8.1 | 8.4 | 8.0 | 8.1 | 8.4 | 8.2 | 9.0 | 7.8 | 8.2 | 9.6 | 9.8 | 9.6 | | | select medMidSize: med (bsize + asize) % 2 from quoteNorm where date=2022.03.31, sym=`CIIG.W | 3.0 | 3.0 | 3.5 | 3.5 | 2.5 | 2.5 | 3.0 | 2.5 | 3.0 | 2.5 | 3.0 | 2.5 | 2.5 | 3.0 | 2.5 | 3.0 | 2.5 | The table below displays the second executions of the queries, that is, data was sourced from memory. Because these used the page cache, the storage speed impact is smaller. Query select medMidSize: med (bsize + asize) % 2 from quoteNorm where date=2022.03.31, sym=`CIIG.W without compression executed in less than 1 msec, so we rounded up the execution time to 1 msec to avoid division by zero. Observe OS cache impact - higher ratios and more dark red cells. | Compression Algorithm | gzip | lz4 | qipc | snappy | zstd | |||||||||||| |---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---| | Compression Level | 1 | 5 | 6 | 9 | 1 | 5 | 9 | 12 | 16 | 0 | 0 | -7 | 1 | 10 | 12 | 14 | 22 | | query | ||||||||||||||||| | select from quote where date=2022.03.31, i<500000000 | 21.4 | 21.5 | 21.4 | 21.4 | 21.3 | 21.0 | 21.0 | 20.9 | 20.9 | 21.2 | 21.0 | 21.4 | 21.6 | 21.4 | 21.3 | 21.5 | 21.6 | | select nr: count i, avgMid: avg (bid + ask) % 2 by sym from quoteNorm where date=2022.03.31 | 19.1 | 19.2 | 19.3 | 19.2 | 19.0 | 18.7 | 18.6 | 18.4 | 18.5 | 19.5 | 19.4 | 19.5 | 19.5 | 19.4 | 19.3 | 19.4 | 19.4 | | aj[`sym`time`ex; select from tradeNorm where date=2022.03.31, size>500000; select from quoteNorm where date=2022.03.31] | 4.2 | 4.0 | 4.0 | 4.0 | 2.3 | 2.3 | 2.3 | 2.3 | 2.3 | 3.2 | 2.5 | 2.5 | 2.7 | 2.7 | 2.7 | 2.9 | 2.9 | | select from tradeNorm where date=2022.03.31, i<>0 | 12.3 | 12.3 | 12.4 | 12.4 | 12.1 | 12.2 | 12.1 | 12.2 | 12.2 | 12.0 | 12.0 | 12.4 | 12.5 | 12.4 | 12.4 | 12.5 | 12.6 | | distinct select sym, ex from tradeNorm where date=2022.03.31, size > 700000 | 16.7 | 16.8 | 16.7 | 16.7 | 13.3 | 13.1 | 13.1 | 13.1 | 13.2 | 13.2 | 14.1 | 14.0 | 14.0 | 14.1 | 14.0 | 14.2 | 14.1 | | aj[`Symbol`Time; select from trade where date=2022.03.31, Symbol in someSyms; select from quote where date=2022.03.31] | 1.7 | 1.8 | 1.8 | 1.8 | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 | 1.4 | 1.3 | 1.6 | 1.7 | 1.7 | 1.7 | 1.7 | 1.7 | | select bsize wavg bid, asize wavg ask from quoteNorm where date=2022.03.31, sym in someSyms | 16.0 | 14.0 | 13.5 | 13.0 | 7.5 | 6.5 | 6.5 | 6.5 | 6.5 | 13.0 | 9.5 | 10.0 | 10.5 | 9.0 | 8.5 | 9.0 | 9.0 | | select from quote where date=2022.03.31, Symbol=`VFVA | 9.3 | 10.7 | 10.7 | 10.7 | 6.0 | 6.0 | 6.0 | 6.0 | 5.7 | 7.0 | 6.3 | 9.3 | 10.3 | 10.7 | 10.3 | 10.3 | 10.7 | | raze {select from quoteNorm where date=2022.03.31, sym=x} each infreqIdList | 15.3 | 15.5 | 15.4 | 15.4 | 9.7 | 9.2 | 9.0 | 9.0 | 8.9 | 11.2 | 11.2 | 12.0 | 13.2 | 13.0 | 12.6 | 12.9 | 12.7 | | raze {select from quoteNorm where date=2022.03.31, sym=x} peach infreqIdList | 10.7 | 11.4 | 11.5 | 11.5 | 10.8 | 11.0 | 10.6 | 11.0 | 10.5 | 10.5 | 10.5 | 13.0 | 12.7 | 13.0 | 12.8 | 12.6 | 12.7 | | raze {select first sym, wsumAsk:asize wsum ask, wsumBid: bsize wsum bid, sdevask:sdev ask,sdevbid:sdev bid, corPrice:ask cor bid, corSize: asize cor bsize from quoteNorm where date=2022.03.31, sym=x} each infreqIdList | 15.9 | 15.5 | 15.1 | 15.1 | 8.9 | 8.4 | 8.4 | 8.2 | 8.2 | 13.7 | 
10.9 | 13.9 | 14.5 | 14.2 | 14.2 | 14.3 | 13.9 |
| raze {select first sym, wsumAsk:asize wsum ask, wsumBid: bsize wsum bid, sdevask:sdev ask,sdevbid:sdev bid, corPrice:ask cor bid, corSize: asize cor bsize from quoteNorm where date=2022.03.31, sym=x} peach infreqIdList | 11.6 | 12.1 | 12.0 | 12.2 | 11.8 | 11.8 | 11.6 | 12.0 | 12.1 | 12.1 | 11.4 | 13.1 | 13.6 | 13.4 | 13.6 | 12.9 | 13.2 |
| raze {select from quoteNorm where date=2022.03.31, sym=x, 4000<bsize+asize} peach infreqIdList | 8.5 | 8.5 | 8.3 | 8.2 | 8.3 | 8.4 | 8.5 | 8.5 | 8.1 | 8.4 | 8.8 | 8.4 | 8.6 | 8.2 | 8.2 | 8.2 | 8.2 |
| select medMidSize: med (bsize + asize) % 2 from quoteNorm where date=2022.03.31, sym=`CIIG.W | 3.0 | 3.0 | 3.0 | 3.0 | 2.0 | 2.0 | 2.0 | 2.0 | 2.0 | 3.0 | 2.0 | 2.0 | 2.0 | 2.0 | 2.0 | 2.0 | 2.0 |

Key Observations¶

- Compression slows queries, especially for CPU-bound workloads (e.g. multiple aggregations using multi-threaded primitives). Some queries were 20× slower with compression.
- OS caching amplifies slowdowns: when data resides in memory, the compression overhead becomes more pronounced. Recommendation: avoid compression for frequently accessed ("hot") data.
- Compression level has negligible impact on decompression speed. This is consistent with zstd's documentation: decompression speed is preserved and remains roughly the same at all settings, a property shared by most LZ compression algorithms, such as zlib or lzma.
- Algorithm choice matters minimally, except for lz4 with restrictive queries. However, lz4 trades speed for higher disk usage.

Let us see how compression impacts query times when the data is stored on slower (NFS) storage. The table below displays the ratios of the first execution of the queries. We omit the results of the second run because they are similar to the fast-storage case.
| Compression Algorithm | zstd | lz4 | snappy | gzip | qipc | |||||||||||| |---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---| | Compression Level | -7 | 22 | 1 | 14 | 12 | 10 | 9 | 5 | 16 | 1 | 12 | 0 | 9 | 6 | 5 | 1 | 0 | | query | ||||||||||||||||| | select from quote where date=2022.03.31, i<500000000 | 0.4 | 0.2 | 0.3 | 0.2 | 0.3 | 0.3 | 0.3 | 0.3 | 0.3 | 0.4 | 0.3 | 0.4 | 0.3 | 0.3 | 0.3 | 0.3 | 0.4 | | select nr: count i, avgMid: avg (bid + ask) % 2 by sym from quoteNorm where date=2022.03.31 | 0.3 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.3 | 0.2 | 0.3 | 0.2 | 0.2 | 0.2 | 0.2 | 0.3 | | aj[`sym`time`ex; select from tradeNorm where date=2022.03.31, size>500000; select from quoteNorm where date=2022.03.31] | 0.6 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.5 | 0.4 | 0.5 | 0.4 | 0.5 | 0.5 | 0.6 | 0.5 | 0.6 | 0.5 | | select from tradeNorm where date=2022.03.31, i<>0 | 0.4 | 0.2 | 0.3 | 0.2 | 0.2 | 0.3 | 0.3 | 0.3 | 0.3 | 0.3 | 0.3 | 0.4 | 0.2 | 0.2 | 0.2 | 0.3 | 0.4 | | distinct select sym, ex from tradeNorm where date=2022.03.31, size > 700000 | 0.4 | 0.3 | 0.3 | 0.3 | 0.3 | 0.3 | 0.3 | 0.4 | 0.3 | 0.4 | 0.3 | 0.5 | 0.3 | 0.4 | 0.4 | 0.4 | 0.6 | | aj[`Symbol`Time; select from trade where date=2022.03.31, Symbol in someSyms; select from quote where date=2022.03.31] | 0.9 | 0.8 | 1.0 | 0.8 | 0.8 | 0.8 | 0.9 | 1.0 | 0.8 | 0.9 | 0.9 | 1.1 | 1.5 | 0.9 | 0.9 | 1.0 | 1.0 | | select bsize wavg bid, asize wavg ask from quoteNorm where date=2022.03.31, sym in someSyms | 1.2 | 0.9 | 1.0 | 0.9 | 0.8 | 0.9 | 1.0 | 1.1 | 1.0 | 1.1 | 0.9 | 1.3 | 1.0 | 0.9 | 1.0 | 1.1 | 1.3 | | select from quote where date=2022.03.31, Symbol=`VFVA | 1.9 | 1.6 | 1.9 | 1.6 | 1.6 | 1.7 | 2.1 | 2.1 | 1.9 | 2.1 | 1.8 | 2.6 | 1.9 | 1.8 | 1.9 | 1.8 | 2.3 | | raze {select from quoteNorm where date=2022.03.31, sym=x} each infreqIdList | 1.3 | 1.2 | 1.3 | 1.3 | 1.2 | 1.2 | 1.2 | 1.4 | 1.3 | 1.5 | 1.3 | 1.5 | 1.4 | 1.4 | 1.4 | 1.7 | 1.6 | | raze {select from quoteNorm where date=2022.03.31, sym=x} peach infreqIdList | 6.3 | 4.9 | 5.2 | 5.0 | 5.2 | 5.4 | 5.2 | 5.3 | 5.5 | 5.9 | 5.2 | 5.7 | 5.0 | 5.0 | 4.8 | 5.4 | 6.4 | | raze {select first sym, wsumAsk:asize wsum ask, wsumBid: bsize wsum bid, sdevask:sdev ask,sdevbid:sdev bid, corPrice:ask cor bid, corSize: asize cor bsize from quoteNorm where date=2022.03.31, sym=x} each infreqIdList | 4.8 | 4.6 | 5.0 | 4.3 | 4.7 | 4.7 | 4.1 | 3.9 | 3.9 | 3.9 | 3.4 | 4.8 | 5.6 | 5.5 | 5.7 | 5.8 | 5.0 | | raze {select first sym, wsumAsk:asize wsum ask, wsumBid: bsize wsum bid, sdevask:sdev ask,sdevbid:sdev bid, corPrice:ask cor bid, corSize: asize cor bsize from quoteNorm where date=2022.03.31, sym=x} peach infreqIdList | 5.3 | 3.8 | 3.7 | 3.7 | 3.6 | 3.8 | 4.2 | 4.3 | 4.1 | 4.7 | 4.1 | 5.0 | 3.6 | 3.8 | 3.8 | 4.0 | 5.0 | raze {select from quoteNorm where date=2022.03.31, sym=x, 4000| 3.9 | 3.5 | 3.4 | 3.1 | 3.4 | 3.5 | 3.2 | 3.8 | 3.5 | 4.0 | 3.5 | 4.1 | 3.3 | 3.6 | 3.3 | 3.5 | 4.0 | | | select medMidSize: med (bsize + asize) % 2 from quoteNorm where date=2022.03.31, sym=`CIIG.W | 2.4 | 2.7 | 2.2 | 1.8 | 2.1 | 2.1 | 2.0 | 2.0 | 1.6 | 1.8 | 2.0 | 2.7 | 2.2 | 1.7 | 2.3 | 2.0 | 2.4 | Compression improves performance when large datasets are read from slow storage. Thus, it is recommended for cold tiers (rarely accessed data). Summary¶ For an optimal balance of cost and query performance, we recommend a tiered storage strategy - Hot tier should not employ compression to allow maximal ingestion rate and fastest queries. The hot tier should be located on fast storage. 
This tier typically stores a few weeks to a month of data.
- The second tier contains a high volume of less frequently queried data. We recommend using compression. If the goal is ultimate query speed, then snappy or lz4 with level 5 or 6 are good choices. Choose lz4 if the second priority is storage saving, and use snappy if you would like the data migration process (from the hot tier) to be fast. If storage space is limited and you would like to achieve a high compression ratio, then use zstd level 10 for most columns and gzip level 5 for sequence-number-like columns. Columns with the parted attribute (e.g. sym) are exceptions and should not be compressed.
- The cold tier contains high-volume, rarely accessed data. It is typically placed on cheaper and slower storage, like object storage or HDD-backed solutions. We recommend using compression. If the second tier uses lz4 or snappy then you might want to recompress the data with zstd to save more storage space.

Not all tables require identical partitioning strategies. Frequently accessed tables may remain in the hot tier for extended durations. Conversely, even within a heavily queried table, certain columns might be seldom accessed. In such cases, symbolic links can be used to migrate column files to the appropriate storage tier.

Infrastructure¶

Tests were conducted on version 9.4 of Red Hat Enterprise Linux using kdb+ 4.1 (version 2025.01.17). Compression performance depends on the compression library versions, which are listed below:

- zlib: 1.2.11
- lz4: 1.9.3
- snappy: 1.1.8
- zstd: 1.5.1

Key specifications for the two systems:

- Local block storage and Intel Xeon 6 efficient CPU
  - Storage: Intel SSD D7-P5510 (3.84 TB), PCIe 4.0 x4 interface, NVMe
  - CPU: Intel(R) Xeon(R) 6780E (Efficient series)
  - Sockets: 2
  - Cores per socket: 144
  - Thread(s) per core: 1
  - NUMA nodes: 2
  - Filesystem: ext4
  - Memory: 502GiB, DDR5 6400 MT/s, 8 channels
- NFS storage and Intel Xeon 6 performance CPU
  - Storage: NFS (version 4.2), mounted in sync mode, with read and write chunk sizes (wsize and rsize) of 1 MB. NFS caching was not set up, i.e. the -o fsc mount parameter was not used.
  - Some network parameters:
    - MTU: 1500
    - TCP read/write buffer size (/proc/sys/net/core/rmem_default, /proc/sys/net/core/wmem_default): 212992
  - CPU: Intel(R) Xeon(R) 6747P (Performance series)
  - Sockets: 2
  - Cores per socket: 48
  - Thread(s) per core: 2
  - NUMA nodes: 4
  - Memory: 502GiB, DDR5 6400 MT/s, 8 channels

The tests ran on a single NUMA node, using local node memory only. This was achieved by launching the kdb+ processes with numactl -N 0 -m 0.

Data¶

We used publicly available NYSE TAQ data for this analysis. Tables quote and trade were generated using the script taq.k. Table quote had 1.78 billion rows and consumed 180 GB of disk space uncompressed. Table trade was smaller: it contained 76 million rows and required 5.7 GB of space. All tables were parted by the instrument ID (column Symbol). The data corresponds to a single day in 2022. Below you can find some column details, including

- data type,
- uncompressed size,
- number of unique items,
- number of value changes,
- entropy using logarithm base 2.
Table trade : | Column Name | Data Type | Size | Unique nr | Differ nr | Entropy | |---|---|---|---|---|---| | Time | timespan | 612882088 | 72361453 | 72635907 | 26.0 | | Exchange | char | 76610275 | 19 | 39884984 | 3.4 | | Symbol | sym | 613372360 | 11615 | 11615 | 10.9 | | SaleCondition | sym | 612886168 | 118 | 34742301 | 3.1 | | TradeVolume | int | 306441052 | 30432 | 61288223 | 5.5 | | TradePrice | real | 306441052 | 1709277 | 29812918 | 15.5 | | TradeStopStockIndicator | boolean | 76610275 | 2 | 3 | 0.0 | | TradeCorrectionIndicator | short | 153220534 | 5 | 1461 | 0.0 | | SequenceNumber | int | 306441052 | 7052899 | 76610223 | 22.3 | | TradeId | string | 1507277336 | 25424743 | 76578507 | 20.0 | | SourceofTrade | char | 76610275 | 2 | 4408 | 0.9 | | TradeReportingFacility | boolean | 76610275 | 2 | 8561937 | 0.5 | | ParticipantTimestamp | timespan | 612882088 | 61126348 | 63260094 | 25.6 | | TradeReportingFacilityTRFTimestamp | timespan | 612882088 | 19609986 | 26931059 | 7.0 | | TradeThroughExemptIndicator | boolean | 76610275 | 2 | 17317273 | 0.9 | Table quote : | Column Name | Data Type | Size | Unique nr | Differ nr | Entropy | |---|---|---|---|---|---| | Time | timespan | 14248237176 | 1580281467 | 1674631088 | 30.4 | | Exchange | char | 1781029661 | 17 | 1349077745 | 3.7 | | Symbol | sym | 14248731544 | 12127 | 12127 | 11.0 | | Bid_Price | float | 14248237176 | 155049 | 815619737 | 14.3 | | Bid_Size | int | 7124118596 | 7279 | 1100563342 | 3.9 | | Offer_Price | float | 14248237176 | 168093 | 822513327 | 14.3 | | Offer_Size | int | 7124118596 | 7264 | 1121734176 | 4.0 | | Quote_Condition | char | 1781029661 | 7 | 451430 | 0.0 | | Sequence_Number | int | 7124118596 | 113620772 | 1781029645 | 26.2 | | National_BBO_Ind | char | 1781029661 | 9 | 525963500 | 1.7 | | FINRA_BBO_Indicator | char | 1781029661 | 1 | 1 | 0.0 | | FINRA_ADF_MPID_Indicator | char | 1781029661 | 1 | 1 | 0.0 | | Quote_Cancel_Correction | char | 1781029661 | 1 | 1 | 0.0 | | Source_Of_Quote | char | 1781029661 | 2 | 4421 | 0.8 | | Retail_Interest_Indicator | char | 1781029661 | 4 | 282004323 | 0.6 | | Short_Sale_Restriction_Indicator | char | 1781029661 | 8 | 6161 | 1.0 | | LULD_BBO_Indicator | char | 1781029661 | 2 | 3 | 0.0 | | SIP_Generated_Message_Identifier | char | 1781029661 | 3 | 32151 | 0.0 | | National_BBO_LULD_Indicator | char | 1781029661 | 8 | 26600 | 0.1 | | Participant_Timestamp | timespan | 14248237176 | 1720929968 | 1773753484 | 30.6 | | FINRA_ADF_Timestamp | timespan | 14248237176 | 1 | 1 | 0.0 | | FINRA_ADF_Market_Participant_Quote_Indicator | char | 1781029661 | 1 | 1 | 0.0 | | Security_Status_Indicator | char | 1781029661 | 7 | 52 | 0.0 |
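The per-column statistics above (unique count, number of value changes, entropy) can be reproduced with a few q primitives. The helper below is an illustrative sketch, not the script used for the paper; in particular, -22! is only an approximation of the uncompressed size.

/ summary statistics for a column vector v
colStats:{[v]
  p:(count each group v)%count v;            / empirical probability of each distinct value
  `size`uniqueNr`differNr`entropy!(
    -22!v;                                   / serialized length, approximates uncompressed bytes
    count distinct v;                        / "Unique nr"
    sum differ v;                            / "Differ nr": number of value changes
    neg sum p*2 xlog p)                      / entropy with logarithm base 2
  }

colStats trade`TradeVolume                   / e.g. one column of the trade table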
.log.if.info ("Splay table compression complete [ Source: {} ] [ Target: {} ] [ Compression: {} ] [ Time Taken: {} ]"; sourceSplayPath; targetSplayPath; compressType; .time.now[] - st); compressCfg:update time:compressTimes from compressCfg where not writeMode = `ignore; :compressCfg; }; / Compresses multiple splayed tables within a HDB partition. / NOTE: The 'sym' file of the source HDB is not copied or symlinked to the target HDB / @param sourceRoot (FolderPath) The path of the source HDB / @param targetRoot (FolderPath) The path of the target HDB / @param partVal (Date|Month|Year|Long) The specific partition within the HDB to compress / @param tbls (Symbol|SymbolList) The list of tables in the partition to compress. If `COMP_ALL` is specified, all tables in the partition will be compressed / @param compressType (Symbol|IntegerList) See '.compress.splay' / @param options (Dict) See '.compress.splay', 'srcParTxt' / 'tgtParTxt' - set to false to ignore 'par.txt' in source or target HDBs respectively / @throws SourceHdbPartitionDoesNotExistException If the specified source HDB does not exist / @see .compress.cfg.compressDefaults / @see .compress.splay .compress.partition:{[sourceRoot; targetRoot; partVal; tbls; compressType; options] tbls:(),tbls; options:.compress.cfg.compressDefaults ^ options; if[not .type.isSymbolList tbls; '"IllegalArgumentException"; ]; srcPartPath:.file.hdb.qPar[sourceRoot; partVal]; tgtPartPath:.file.hdb.qPar[targetRoot; partVal]; if[not options`srcParTxt; srcPartPath:` sv sourceRoot,.type.ensureSymbol partVal; ]; if[not options`tgtParTxt; tgtPartPath:` sv targetRoot,.type.ensureSymbol partVal; ]; if[not .type.isFolder srcPartPath; .log.if.error ("Source HDB partition does not exist [ Path: {} ] [ par.txt: {} ]"; srcPartPath; `no`yes options`srcParTxt); '"SourceHdbPartitionDoesNotExistException"; ]; srcTables:.file.ls srcPartPath; if[not `COMP_ALL in tbls; srcTables:tbls inter srcTables; ]; srcTblPaths:` sv/: srcPartPath,/:srcTables; tgtTblPaths:` sv/: tgtPartPath,/:srcTables; .log.if.info ("Starting HDB partition compression [ Source HDB: {} ] [ Target HDB: {} ] [ Partition: {} ] [ Tables: {} ] [ Compression Type: {} ]"; sourceRoot; targetRoot; partVal; srcTables; compressType); st:.time.now[]; compressCfg:.compress.splay[;; compressType; options]'[srcTblPaths; tgtTblPaths]; compressCfg:(flip each enlist[`table]!/:enlist each (count each compressCfg)#'srcTables),''compressCfg; compressCfg:.compress.cfg.schemas[`compPartition] upsert raze compressCfg; compressCfg:update part:partVal from compressCfg; .log.if.info ("HDB partition compression complete [ Source HDB: {} ] [ Target HDB: {} ] [ Partition: {} ] [ Tables: {} ] [ Compression Type: {} ] [ Time Taken: {} ]"; sourceRoot; targetRoot; partVal; srcTables; compressType; .time.now[] - st); :compressCfg; }; / Compress or copy an individual file within a splay / NOTE: No parameter validation is performed within this function / @param compressType (IntegerList) The compression mode to apply / @param compressCfg (Dict) A row of compression configuration (schema in .compress.cfg.schemas`compSplay) / @returns (Timespan) The time taken to compress or copy the file .compress.i.file:{[compressType; compressCfg] st:.time.now[]; .log.if.debug enlist["Processing file [ Source: {} ] [ Target: {} ] [ Write Mode: {} ]"],compressCfg`source`target`writeMode; $[`copy = compressCfg`writeMode; .os.run[`cp;] "|" sv 1_/: string compressCfg`source`target; `compress = compressCfg`writeMode; -19!compressCfg[`source`target],compressType ]; 
:.time.now[] - st; }; / @returns (SymbolList) Columns in the table order (as defined in '.d') with any additional columns for nested lists appended on the end .compress.i.getColumns:{[splayPath] :cols[splayPath] union raze .file.find[; splayPath] each "*",/:.compress.cfg.nestedListSuffixes; }; ================================================================================ FILE: kdb-common_src_convert.q SIZE: 3,023 characters ================================================================================ // Type Conversion Functions // Copyright (c) 2015 - 2020 Sport Trades Ltd // Documentation: https://github.com/BuaBook/kdb-common/wiki/convert.q .require.lib `type; / @returns (Timespan) The supplied milliseconds in timespan form .convert.msToTimespan:{ :`timespan$1e6*x; }; / @returns (Long) The supplied timestamp in milliseonds .convert.timespanToMs:{ :`long$x%1e6; }; / This function can be used to convert between Javascript millisecond timestamps to kdb. It / assumes that the supplied milliseconds are from the UNIX epoch (00:00 1st January 1970) / @returns (Timestamp) A timestamp of milliseconds from UNIX epoch .convert.epochMsToTimestamp:{ :(1970.01.01+00:00:00)+.convert.msToTimespan x; }; / @returns (Long) The supplied timestamp in milliseconds from UNIX epoch .convert.timestampToEpochMs:{ :.convert.timespanToMs x - 1970.01.01+00:00:00; }; / @returns (String) String version of path specified by parameter .convert.hsymToString:{ :1_string x; }; / @returns (FilePath|FolderPath) Path version of the string specified .convert.stringToHsym:{ :hsym .type.ensureSymbol x; }; / @param ipO (Integer) An IP address in octal format (e.g. .z.a) / @returns (Symbol) An IPv4 address .convert.ipOctalToSymbol:{[ipO] :`$"." sv string "h"$0x0 vs ipO; }; / @param list (List) List to separate by commas. List should not contain nested lists, dictionaries or tables / @returns (String) The specified list as a string separated by commas. Useful for logging .convert.listToString:{[list] :", " sv string list; }; / A more general version of '.convert.listToString' to ensure all elements of the specified list are string-ed / @returns (String) The specified list as a single string. .convert.genericListToString:{[list] :" | " sv .type.ensureString each list; }; / Converts bytes into it's equivalent as a long integer. 
Any byte lists shorter than 8 will be padded appropriately / @param bytes (Byte|ByteList) The bytes to convert into a long / @returns (Long) The bytes as a long / @throws TooManyBytesException If the byte list provided is longer than 8 (too big to represent in a long) .convert.bytesToLong:{[bytes] if[8 < count bytes; '"TooManyBytesException"; ]; :0x0 sv ((0|8 - count bytes)#0x00),bytes; }; / Converts a kdb table into a HTML <table> representation of it / @param tbl (Table) A table with all values of the table convertable to string by '.type.ensureString' / @returns (String) A HTML version of the table / @throws IllegalArgumentException If the parameter is not a table / @see .type.ensureString .convert.tableToHtml:{[tbl] if[not .type.isTable tbl; '"IllegalArgumentException"; ]; if[.type.isKeyedTable tbl; tbl:0!tbl; ]; header:.h.htc[`thead;] .h.htc[`tr;] raze .h.htc[`th;] each .type.ensureString each cols tbl; body:"\n" sv { .h.htc[`tr;] raze .h.htc[`td;] each .type.ensureString each x } each tbl; :"\n",.h.htc[`table;] header,"\n",body; }; ================================================================================ FILE: kdb-common_src_cron.q SIZE: 16,716 characters ================================================================================ // Cron Job Scheduler // Copyright (c) 2017 - 2020 Sport Trades Ltd, 2021 Jaskirat Rajasansir // Documentation: https://github.com/BuaBook/kdb-common/wiki/cron.q .require.lib each `util`ns`type`convert`time; / The interval at which the cron system checks for jobs to run. This uses the built-in / kdb system timer .cron.cfg.timerInterval:100i; / Configures if job status should be stored or not. If true, the status of all job's will be recorded in '.cron.status'. / Even with this option disabled, all cron job failures will be stored for debugging / Cron by default clears the status table every day at midnight to ensure the table doesn't grow too large / @see .cron.status .cron.cfg.logStatus:1b; / If true and a cron job fails, the stack trace of where the failure occurred will be logged to the console .cron.cfg.printBacktraceOnFailure:1b; / The mode of operaton for the cron system. There are 2 supported modes: / * ticking: Traditional timer system with the timer function running on a frequent interval / * tickless: New approach to only 'tick' the timer when the next job is due to run. Can reduce process load when infrequent jobs are run .cron.cfg.mode:`ticking; / Configures how start times to '.cron.add' are handled. 
There are 3 supported modes: / * 'disallowed': Any time earlier than 'now' (to the nearest second) will be rejected and an exception thrown / * 'allowed': Any time will be allowed / * 'setAsNow': Any time earlier than 'now' will be modified to be 'now' and then added / NOTE: That with this set to 'allowed' or 'setAsNow' the job will execute as soon as the current function execution completes .cron.cfg.historicalStartTimes:`disallowed; / Unique job ID for each cron job added .cron.jobId:1; / The configured cron jobs for this process .cron.jobs:`id xkey flip `id`func`args`runType`startTime`endTime`interval`nextRunTime!"JS*SPPNP"$\:(); / The status of each cron job execution (with null row inserted) / NOTE: If the job fails, result will contain a dictionary with `errorMsg and, optionally, `backtrace .cron.status:flip `id`func`expectedStartTime`startTime`runTime`success`result!"JSPPNB*"$\:(); `.cron.status upsert @[first .cron.status; `result; :; (::)]; / The supported run type for the cron system .cron.runners:(`symbol$())!`symbol$(); .cron.runners[`once]: `.cron.i.runOnce; .cron.runners[`repeat]:`.cron.i.runRepeat; / The supported tick modes for the cron system .cron.supportedModes:(`symbol$())!`symbol$(); .cron.supportedModes[`ticking]: `.cron.mode.ticking; .cron.supportedModes[`tickless]:`.cron.mode.tickless; / The supported modes to deal with 'historical' start times .cron.historicalStartTimeModes:`disallowed`allowed`setAsNow; / The maximum supported timer interval as a timespan .cron.maxTimerAsTimespan:.convert.msToTimespan 0Wi - 1; / One millisecond as a timespan (to not require calculation each time) .cron.oneMsAsTimespan:.convert.msToTimespan 1; / NOTE: If '.z.ts' is defined at initialisation, the function will short-circuit and not configure the library .cron.init:{ if[.ns.isSet `.z.ts; .log.if.warn "Timer function is already set. Cron will not override automatically"; :(::); ]; set[`.z.ts; .cron.ts]; .cron.changeMode .cron.cfg.mode; if[not `.cron.cleanStatus in exec func from .cron.jobs; .cron.addRepeatForeverJob[`.cron.cleanStatus; (::); `timestamp$.time.today[]+1; 1D]; ]; if[not .cron.cfg.historicalStartTimes in .cron.historicalStartTimeModes; .log.if.error ("Invalid historical start time configuration. Must be one of: {}"; .cron.historicalStartTimeModes); '"InvalidCronConfigurationException"; ]; }; / Changes between the supported cron timer modes / @param mode (Symbol) The cron timer mode to use / @throws InvalidCronModeException If the mode is not one of the supported modes / @see .cron.supportedModes .cron.changeMode:{[mode] if[not mode in key .cron.supportedModes; .log.if.error "Cron timer mode is invalid. Must be one of: ",.convert.listToString key .cron.supportedModes; '"InvalidCronModeException"; ]; .cron.cfg.mode:mode; .cron.supportedModes[.cron.cfg.mode][]; }; / Disables the kdb timer to deactivate the cron system .cron.disable:{ .log.if.info "Disabling cron job scheduler"; .log.if.warn " No scheduled jobs will be executed until cron is enabled again"; system "t 0"; };
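As a usage illustration of the scheduler configured above, the snippet below registers a repeating job in the same way that .cron.init registers .cron.cleanStatus. The job function, its name and its schedule are hypothetical.

/ hypothetical housekeeping job; cron passes the configured args to the function
.app.flushStats:{[args] .log.if.info "Flushing application stats"; };

/ run it every hour, starting at the next midnight
.cron.addRepeatForeverJob[`.app.flushStats; (::); `timestamp$.time.today[]+1; 0D01:00:00];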
.os.m.tail:{ :"tail -n 30 ",x; }; .os.m.safeRmFolder:{ :"rmdir ",x; }; .os.m.procCount:{ :"getconf _NPROCESSORS_ONLN"; }; .os.m.which:{ :"which ",x; }; .os.m.ver:{ :"sw_vers"; }; / cp requires 2 arguments so pass string separated by "|" / First argument should be the source, 2nd argument should be the target .os.m.cpFolder:{ args:"|" vs x; :"cp -rv ",args[0]," ",args 1; }; .os.m.terminalSize:{ :"stty size"; }; / 'tty' exits 0 if there is a TTY attached, 1 otherwise .os.m.isInteractive:{ :"test -t 0; echo $?"; }; .os.m.readlink:{ :"readlink -f ",x; }; ================================================================================ FILE: kdb-common_src_rand.q SIZE: 2,797 characters ================================================================================ // Random Data Generation // Copyright (c) 2021 - 2022 Jaskirat Rajasansir / If true, a handle to /dev/random will be kept open after the first call to any '.rand.dr' functions. If false, a handle will be opened for every call .rand.cfg.keepDevRandomOpen:0b; / The cached handle to /dev/random, if '.rand.cfg.devRandomHandle' is set to true .rand.cfg.devRandomHandle:0Ni; / Some sensible maximum values for random data generation: / - Date & times: 2030.01.01 - bring the maximum date in a bit / - Symbols: 4 characters .rand.types:(`short$til count .Q.t)!(::; 0b; 0Ng; ::; 0x0; 256h; 10000i; 1000000j; 100e; 1000f; .Q.a; `4),(`timestamp`month`date`datetime`timespan`minute`second`time$\:`timestamp$2030.01.01),`4; / The equivalent types list but with the character types instead of short .rand.charTypes:.Q.t!value .rand.types; / Generates some random data for simple table schemas / @param schema (Table) An table schema (i.e. a table with no rows) to generate random data for / @param rows (Integer) The number of random rows to generate / @returns (Table) A table with the same schema as provided, with the specified number of rows of random data / @see .rand.types .rand.generate:{[schema; rows] if[not .type.isTable schema; '"IllegalArgumentException"; ]; if[not .type.isWholeNumber rows; '"IllegalArgumentException"; ]; if[rows <= 0; '"IllegalArgumentException"; ]; .log.if.info ("Generating random data [ Schema Cols: {} ] [ Rows: {} ] [ Seed: {} ]"; cols schema; rows; system "S"); :flip rows?/:.rand.types type each flip schema; }; / @param (Integer) The number of bytes to return from the OS /dev/random file / @returns (ByteList) The random data from the OS /dev/random file / @see .rand.cfg.keepDevRandomOpen / @see .rand.cfg.devRandomHandle .rand.dr.getBytes:{[bNum] devRandom:0Ni; if[.rand.cfg.keepDevRandomOpen; if[null .rand.cfg.devRandomHandle; .rand.cfg.devRandomHandle:hopen `:fifo:///dev/random; ]; devRandom:.rand.cfg.devRandomHandle; ]; if[not .rand.cfg.keepDevRandomOpen; devRandom:hopen `:fifo:///dev/random; ]; bytes:read1 (devRandom; bNum); if[not .rand.cfg.keepDevRandomOpen; hclose devRandom; ]; :bytes; }; / @returns (Short) A short generated from /dev/random data .rand.dr.short:{ :.rand.dr.i.get 2; }; / @returns (Integer) An integer generated from /dev/random data .rand.dr.int:{ :.rand.dr.i.get 4; }; / @returns (Long) A long generated from /dev/random data .rand.dr.long:{ :.rand.dr.i.get 8; }; / @returns (GUID) A GUID generated from /dev/random data .rand.dr.guid:{ :.rand.dr.i.get 16; }; .rand.dr.i.get:{[bytes] :0x0 sv .rand.dr.getBytes bytes; }; ================================================================================ FILE: kdb-common_src_require.q SIZE: 9,945 characters 
================================================================================ // Code Loading Library // Copyright (c) 2016 - 2017 Sport Trades Ltd, (c) 2020 - 2023 Jaskirat Rajasansir // Documentation: https://github.com/BuaBook/kdb-common/wiki/require.q / The file suffixes that are supported for a library .require.fileSuffixes:(".q";".k";".*.q";".*.k";".q_";".*.q_"); / Table containing the state of each library loaded via require .require.loadedLibs:`lib xkey flip `lib`loaded`loadedTime`initExists`inited`initedTime`forced`files!"SBPBBPB*"$\:(); / Root folder to search for libraries .require.location.root:`; / Regexs to filter discovered files / @see .require.i.tree .require.location.ignore:("*.git";"*target"); / Complete list of discovered files from the root directory .require.location.discovered:enlist`; / Required interface implementations for 'require' and related kdb-common libraries to function correctly .require.interfaces:`lib`ifFunc xkey flip `lib`ifFunc`implFunc!"SS*"$\:(); .require.interfaces[``]:(::); .require.interfaces[`log`.log.if.trace]:`.require.i.log; .require.interfaces[`log`.log.if.debug]:`.require.i.log; .require.interfaces[`log`.log.if.info]: `.require.i.log; .require.interfaces[`log`.log.if.warn]: `.require.i.log; .require.interfaces[`log`.log.if.error]:`.require.i.logE; .require.interfaces[`log`.log.if.fatal]:`.require.i.logE; .require.init:{[root] if[.require.loadedLibs[`require]`inited; .log.if.trace "Require is already initialised. Will ignore request to init again"; :(::); ]; $[null root; .require.location.root:.require.i.getCwd[]; .require.location.root:root ]; .require.i.setDefaultInterfaces[]; .require[`markLibAsLoaded`markLibAsInited] .\: (`require; 0b); .require.loadedLibs[`require; `initExists]:1b; / If file tree has already been specified, don't overwrite if[.require.location.discovered~enlist`; .require.rescanRoot[]; ]; .require.i.initInterfaceLibrary[]; .log.if.info "Require library initialised [ Root: ",string[.require.location.root]," ]"; }; / Loads the specified library but does not initialise it. Useful if there is some configuration / to perform after load, but prior to initialisation. When you are ready to to initialise, / use .require.lib. / @see .require.i.load .require.libNoInit:{[lib] if[lib in key .require.loadedLibs; :(::); ]; .require.i.load[lib; 0b]; }; / Loads the specified libary and initialises it. Checks loaded and initialised state to prevent / reload or re-init if already performed. / @see .require.i.load / @see .require.i.init .require.lib:{[lib] operations:`load`init; if[lib in key .require.loadedLibs; if[.require.loadedLibs[lib]`inited; :(::); ]; operations:operations except `load; ]; .require.i[operations] .\: (lib; 0b); }; / Loads the sepcified library and initialises it regardless of the current loaded and initialised state / This should be used for reloading a stateless library without having to restart the kdb process / @see .require.i.load / @see .require.i.init .require.libForce:{[lib] libInfo:.require.loadedLibs lib; operations:lib,/:libInfo`loaded`inited; if[libInfo`loaded; .log.if.info ("Force reloading library [ Library: {} ] [ Already Loaded: {} ] [ Already Initialised: {} ]"; lib; `no`yes libInfo`loaded; `no`yes libInfo`inited); ]; .require.i[`load`init] .' 
operations; }; .require.rescanRoot:{ .require.location.discovered:.require.i.tree .require.location.root; .log.if.info "Library root location refreshed [ File Count: ",string[count .require.location.discovered]," ]"; }; / Marks the specified library as loaded in the loaded libraries table. NOTE: This / function does not actually do the load / @see .require.loadedLibs .require.markLibAsLoaded:{[lib; forced] .require.loadedLibs[lib]:`loaded`loadedTime`forced!(1b; .z.P; forced); }; / Marks the specified library as initialised in the loaded libraries table. NOTE: / This function does not actually do the init / @see .require.loadedLibs .require.markLibAsInited:{[lib; forced] .require.loadedLibs[lib]:`inited`initedTime`forced!(1b; .z.P; forced); }; / Attempts to load the specified library / @throws LibraryDoesNotExistException If no files are found for the specified library / @throws LibraryLoadException If any of the library files fail to load .require.i.load:{[lib; force] .log.if.info "Loading library: ",string lib; libFiles:.require.i.findFiles[lib;.require.location.discovered]; if[0~count libFiles; .log.if.error "No files found for library [ Lib: ",string[lib]," ]"; '"LibraryDoesNotExistException (",string[lib],")"; ]; { .log.if.info "Loading ",x; loadRes:.require.i.protectedExecute[system; "l ",x; `LOAD_FAILURE]; if[`LOAD_FAILURE~first loadRes; .log.if.error "Library file failed to load! [ File: ",x," ]. Error - ",last loadRes; if[`backtrace in key loadRes; .log.if.error "Backtrace: \n",loadRes`backtrace; ]; '"LibraryLoadException"; ]; } each 1_/:string libFiles; .require.markLibAsLoaded[lib; force]; .require.loadedLibs[lib]:`files`initExists!(libFiles; .require.i.getInitFunc[lib]`exists); }; / Searchs for files with the specified library prefix in the source folder supplied / @see .require.fileSuffixes .require.i.findFiles:{[lib;files] filesNoPath:last each ` vs/:files; :files where any filesNoPath like/: string[lib],/:.require.fileSuffixes; }; / Performs the initialisation of the specified library. Assumes .*lib*.init[]. Also checks for / .*lib*.*stack*.init[] and executes if exists (if not present, ignored). / @throws UnknownLibraryException If the library is not loaded / @throws LibraryInitFailedException If the init function throws an exception / @throws RequireReinitialiseAssertionError If 'reinit' is set to false, but the library is already initialised - this should not happen .require.i.init:{[lib; reinit] if[not lib in key .require.loadedLibs; '"UnknownLibraryException"; ]; if[not[reinit] & .require.loadedLibs[lib]`inited; '"RequireReinitialiseAssertionError"; ]; if[not .require.loadedLibs[lib]`initExists; :(::); ]; initF:.require.i.getInitFunc lib; initArgs:enlist[`reinit]!enlist reinit; .log.if.info "Library initialisation function detected [ Func: ",string[initF`initFname]," ]"; initRes:.require.i.protectedExecute[initF`initF; initArgs; `INIT_FUNC_ERROR]; if[`INIT_FUNC_ERROR~first initRes; .log.if.error "Init function (",string[initF`initFname],") failed to execute successfully [ Lib: ",string[lib]," ]. 
Error - ",last initRes; if[`backtrace in key initRes; .log.if.error "Backtrace:\n",initRes`backtrace; ]; '"LibraryInitFailedException (",string[initF`initFname],")"; ]; .require.markLibAsInited[lib; reinit]; .log.if.info "Initialised library: ",string lib; }; / NOTE: This function currently does not validate the object at *lib*.init is actually a function .require.i.getInitFunc:{[lib] initFname:` sv `,lib,`init; initF:@[get;initFname;`NO_INIT_FUNC]; :`initFname`initF`exists!(initFname; initF; not `NO_INIT_FUNC ~ first initF); };
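To show how the loader is typically driven, here is a minimal usage sketch. The root path is illustrative; util and cron are libraries shipped with kdb-common, as seen in the cron library's own .require.lib call earlier.

/ point require at the application's source tree and index it
.require.init `:/app/src;

/ load and initialise a library in one step
.require.lib `util;

/ or load first, adjust configuration, then initialise
.require.libNoInit `cron;
.cron.cfg.mode:`tickless;
.require.lib `cron;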
// Load in schema file and kill proc if not present loadschemas:{ if[not `schemafile in key .proc.params;.lg.e[`loadschema;"Schema file required!"];exit 1]; @[.proc.loadf;raze .proc.params[`schemafile];{.lg.e[`loadschema;"Failed to load schema file!"];exit 1}]; }; // Set up UPD and ZTS behaviour based on batching mode setup:{[batch] // Handle bad batch mode, see whether STP is chained or default if[not all batch in/: key'[.stplg`upd`zts];'"mode ",(string batch)," must be defined in both .stplg.upd and .stplg.zts"]; chainmode:$[.sctp.chainedtp;`chained;`def]; // Set inner UPD and ZTS behaviour from batch mode, then set outer functions based on whether STP is chained .stplg.updmsg:.stplg.upd[batch]; .stplg.ts:.stplg.zts[batch]; .u.upd:.stpps.upd[chainmode]; .dotz.set[`.z.ts; {[f;x] @[;x;()]each f} (@[value;.dotz.getcommand[`.z.ts];{{}}]; .stpps.zts[chainmode])]; // Error mode - error trap UPD to write failed updates to separate TP log if[.stplg.errmode; .stp.upd:.u.upd; .u.upd:{[t;x] .[.stp.upd;(t;x);{.stplg.badmsg[x;y;z]}[;t;x]]} ]; // Default the timer to 1 second if not set if[not system "t";.lg.o[`timer;"defaulting timer to 1000ms"];system"t 1000"]; }; // Initialise process init:{ // Set up the update and publish functions setup[.stplg.batchmode]; // If process is a chained STP then subscribe to the main STP, if not, load schema file $[.sctp.chainedtp;.sctp.init[];loadschemas[]]; // Set up pubsub mechanics and table schemas generateschemas[]; // Set up logs and log handles using name of process as an identifier .stplg.init[string .proc.procname]; }; // Have the init function called from torq.q .proc.addinitlist(`init;`); ================================================================================ FILE: TorQ_code_processes_tickerlogreplay.q SIZE: 25,454 characters ================================================================================ // Script to replay tickerplant log files .merge.mergebybytelimit:@[value;`.merge.mergebybytelimit;0b]; // merge limit configuration - default is 0b row count limit, 1b is bytesize limit \d .replay // Variables firstmessage:@[value;`firstmessage;0] // the first message to execute segmentedmode:@[value;`segmentedmode;1b] // if using segmented tickerplant, then set to true, otherwise set false for old tickerplant autoreplay:@[value;`autoreplay;1b] // replay tplogs automatically set to 1b to be backward compatible. lastmessage:@[value;`lastmessage;0W] // the last message to replay messagechunks:@[value;`messagechunks;0W] // the number of messages to replay at once schemafile:@[value;`schemafile;`] // the schema file to load data in to tablelist:@[value;`tablelist;enlist `all] // the tables to replay into (to allow subsets of tp logs to be replayed). `all means all hdbdir:@[value;`hdbdir;`] // the hdb directory to write to tplogfile:@[value;`tplogfile;`] // the tp log file to replay. Only this or tplogdir should be used (not both) tplogdir:@[value;`tplogdir;`] // the tp log directory to read the log files from. Only this or tplogfile should be used (not both) partitiontype:@[value;`partitiontype;`date] // the partitioning of the database. Can be date, month or year (int would have to be handled bespokely) emptytables:@[value;`emptytables;1b] // whether to overwrite any tables at start up sortafterreplay:@[value;`sortafterreplay;1b] // whether to re-sort the data and apply attributes at the end of the replay. 
Sort order is determined by the sortcsv (:config/sort.csv) basicmode:@[value;`basicmode;0b] // do a basic replay, which replays everything in, then saves it down with .Q.hdpf[`::;d;p;`sym] exitwhencomplete:@[value;`exitwhencomplete;1b] // exit when the replay is complete checklogfiles:@[value;`checklogfiles;0b] // check if the log file is corrupt, if it is then write a new "good" file and replay it instead gc:@[value;`gc;1b] // garbage collect at appropriate points (after each table save and after the full log replay) upd:@[value;`upd;{{[t;x] insert[t;x]}}] // default upd function used for replaying data clean:@[value;`clean;1b] // clean existing folders on start up. Needed if a replay screws up and we are replaying by chunk or multiple tp logs sortcsv:@[value;`sortcsv;hsym first .proc.getconfigfile["sort.csv"]] // location of sort csv file compression:@[value;`compression;()]; // specify the compress level, empty list if no required partandmerge:@[value;`partandmerge;0b]; // setting to do a replay where the data is partitioned and then merged on disk tempdir:@[value;`tempdir;`:tempmergedir]; // location to save data for partandmerge replay mergenumrows:@[value;`mergenumrows;10000000]; // default number of rows for merge process mergenumtab:@[value;`mergenumtab;`quote`trade!10000 50000]; // specify number of rows per table for merge process mergenumbytes:@[value;`mergenumbytes;500000000]; // default number of bytes for merge process mergemethod:@[value;`mergemethod;`part]; // the partbyattr writedown mode can merge data from temporary storage to the hdb in three ways: // 1. part - the entire partition is merged to the hdb // 2. col - each column in the temporary partitions are merged individually // 3. hybrid - partitions merged by column or entire partittion based on byte limit / - settings for the common save code (see code/common/save.q) .save.savedownmanipulation:@[value;`savedownmanipulation;()!()] // a dict of table!function used to manipuate tables at EOD save .save.postreplay:@[value;`postreplay;{{[d;p] }}] // post replay function, invoked after all the tables have been written down for a given log file
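Because every setting above falls back to a default via @[value; ...], the replay process can be configured by defining the variables in the .replay namespace before this script is loaded (in TorQ this is normally done in a process-specific configuration file). The paths and values below are purely illustrative.

\d .replay
tplogfile:`:/data/tplogs/stp1_2022.03.31     / single tickerplant log to replay (illustrative path)
hdbdir:`:/data/hdb                           / target HDB root (illustrative path)
tablelist:`trade`quote                       / replay only these tables
firstmessage:0
lastmessage:0W
sortafterreplay:1b
compression:16 2 5                           / kdb+ compression triple: logical block size, algorithm, level
\d .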
Compression in kdb+¶ As the rate of data generation in financial markets continues to increase, there is a strong impetus to investigate how large data volumes can be more efficiently processed. Even if disk is considered cheap, it can be a valuable exercise for many applications to determine what improvements can be gleaned from an analysis of the use of compression. Aside from reduced disk costs, some use cases can gain significant performance improvements through a problem-specific approach to compression. For example, some systems have fast CPUs, but find that disk I/O is a bottleneck. In some such cases, utilizing CPU power to reduce the amount of data being sent to disk can improve overall performance. Early versions of kdb+ achieved compression via file systems with inbuilt compression, such as ZFS. V2.7 introduced built-in OS-agnostic compression, which allowed on-disk data to be converted to compressed format using a range of algorithms and compression levels. This was expanded (V2.8), with the ability to stream in-memory data directly to compressed format on disk. ZFS compression is still useful for some kdb+ applications, as it keeps cached data available for multiple processes. However, this paper will focus on the inbuilt data-compression options provided by kdb+, available on all supported architectures. Each system has its own characteristics, which determine the appropriate compression configurations to use. All tests were run using kdb+ version 3.1 (2013.09.05) Compression options¶ There are two high-level approaches to saving on-disk data in compressed format. The first is a two-step approach: save data to disk in the regular uncompressed format using set , then convert it to a compressed format using set . The second approach is to stream data directly from memory to compressed format on disk by modifying the left argument to set . The first approach is useful for archiving existing historical data, or in cases where it is significantly faster to save the data to disk uncompressed, without the overhead of first compressing the data. In many other cases, it can be more convenient and/or performant to compress the data on the fly while saving. Converting saved data to compressed format using set ¶ Reference: set Logical block size Page size for AMD64 is 4KB, Sparc is 8KB. Windows seems to have a default allocation granularity of 64KB. The compression-algorithm and compression-level arguments are self-explanatory. The logical block size determines the amount of data which will be compressed at a time in each block. A larger block will afford more opportunity for identification of repeated values, and hence a higher overall compression ratio. But this argument also determines the minimum amount of data which can be decompressed at a time. If it is large relative to the amount of data that is likely to be accessed at a time, then a lot of unnecessary decompression work may be carried out during queries. Its lower bound is the system’s allocation granularity, because dividing the data into smaller chunks than that would result in wasted space. The various combinations of arguments will be discussed further in the following sections, but example expressions to save data to disk and then convert it to compressed format would look like this: `:/db/trade_uncompressed set trade (`:/db/trade_compressed; 16; 1; 0) set `:/db/trade_uncompressed If this approach is used to compress data, it is preferable to have the source and target files on separate physical disks. 
This will reduce the number of disk seeks required to move the data iteratively in chunks. simongarland/compress/cutil.q for migrating uncompressed databases to compressed format Saving in-memory data directly to compressed format on disk¶ In many cases, it is preferable to save data to disk in compressed format in a single step. This is more convenient than having a separate process to convert uncompressed on-disk data to compressed format after it has been saved. It can also be faster than the two-step method, depending on the compression algorithm and level used, and on system CPU and disk characteristics. Direct streaming compression has been implemented by overriding the left argument to set : (`:targetFile; blockSize; alg; level) set table Field-by-field compression¶ It is also possible to apply compression on a field-by-field basis, i.e. to use different compression rules from one table or column to another. Files of various compression levels (including no compression at all) can happily coexist inside a single table. So if some columns do not compress well or their usage patterns mean they don’t benefit from compression, they can have different compression algorithms applied or be left uncompressed. This can be achieved by modifying the left argument to set to include a dictionary, mapping field names to the compression parameters to be used for each one. The null symbol in the key of this dictionary defines the default behavior, which will be applied to any fields not specified explicitly. t:([]a:asc 1000000?10; b:asc 1000000?10; c:asc 1000000?10) (`:splay/; ``a`b!((17;2;9); (17;2;6); (17;2;6))) set t Compression defaults¶ Rather than specifying the compression parameters individually every time set is called, we also have the option of defining default compression parameters which will be used if set is called the old-fashioned way, i.e. `:filename set table . This is done by defining the zip-defaults variable .z.zd . The format is the same as the non-filename arguments passed to set , e.g. .z.zd:(17;2;6);`:zfile set asc 10000?`3 Reading compressed data¶ The process of decompression is automatic and transparent to the user. Data consumers can read compressed files in the same way that they would read uncompressed ones, e.g. using get or by memory-mapping and querying the files. However, it is important to allow for some resource consumption associated with decompression when querying compressed data. While querying memory-mapped compressed data, any file blocks used by select will be decompressed into memory and cached there for the duration of the query. This is to avoid having to decompress the data twice – once during constraint evaluation, and once when being used as a selected column in the result set. Only required data will be decompressed into memory, but kdb+ will allocate enough memory to decompress the entirety of each vector touched by a query on compressed data. This means that memory use shown in top may indicate a higher number than expected, but only the required sub-set of this amount will actually be used. Increasing system swap space may be required, even if swap itself will not be used. For frequently-repeated queries or regular usage patterns where the data is usually present in the OS cache, compressing data may be counterproductive. The OS can only cache the compressed data, and the decompressed queried data will only be cached by kdb+ for the duration of a query. So the decompression overhead must be overcome on every query. 
Because of this, columns which need to be accessed very frequently may perform better if left uncompressed. It should be noted that because of the convenience in terms of automatic decompression, OS-level tools such as scp should be used if compressed data just needs to be moved about on disk or between hosts, without being manipulated in any way. This will prevent needless de/re-compression of the data, which would occur if kdb+ processes and IPC were used for this purpose. What determines the compression ratio?¶ The compression ratio (uncompressed size vs compressed size) is influenced by a number of factors. The choice of logical block size, compression algorithm and compression level will have a big impact here. But the compression ratio of any combination of inputs will also be heavily influenced by the nature of the data which is being compressed. Some data-driven factors that influence the compression ratio are: Number of distinct values vs the total count of the vector¶ A low number of distinct values in a large vector could, for example, be grouped and referenced by index in each compressed block. A vector of entirely unique values offers less potential for efficient storage. Similarly, sparsely populated columns will compress well, as the nulls can be compressed like a highly repeating value would be. Level of contiguity of repeated values¶ A contiguous array of identical values can be described by the distinct value and the number of times it occurs. In some cases we can deliberately increase the level of contiguity by sorting tables on multiple columns before compressing. In the following example, we will compare the compression ratios achieved on a table of two highly-repeating columns – first unsorted, then sorted by one column, then sorted by both columns. n:100000000 t:([]sym:n?`ibm`goog`aapl`tsla`spx;venue:n?`nsdq`nyse) `:uncompressed/ set .Q.en[`:uncompressed] t (`:unsorted/;16;2;5) set .Q.en[`:unsorted] t (`:symSorted/;16;2;5) set .Q.en[`:symSorted] `sym xasc t (`:symVenueSorted/;16;2;5) set .Q.en[`:symVenueSorted] `sym`venue xasc t Because the data is so highly repeating, we get a nice compression ratio of 10 even if the table is unsorted. But sorting on one column yields a significant improvement, and sorting on both means the entire 100M rows of these two fields can be persisted using a little over 1KB of disk. Obviously this sorting only increases the compression ratios for the columns that are actually sorted – other columns won’t see improvements. | table version | compression ratio | |---|---| | Unsorted | 10.19 | | Sorted by sym | 30.47 | | Sorted by sym, venue | 669.55 | Datatype of the vector¶ If a vector of high-precision datatype such as long contains a high proportion of values which could have been expressed as ints or shorts, some of the unused precision can be ‘reclaimed’ in a way through compression. Booleans also compress particularly well, but this is a special case of the above point regarding repeating values, as there are only two possible distinct values in a boolean column. The test table below contains boolean, int and long columns. The int and long columns contain the same values. 
Taking a sample compression configuration of gzip with logical block size 16 and compression level 5, we can see how each data type compresses: n:100000000 ints:n?0Wi t:([]boolean:n?01b;integer:ints;longint:`long$ints) `:nocomp/ set t (`:comp/;16;2;5) set t The compression ratios achieved on each field show that booleans have compressed by a large factor, and the longs have also compressed significantly. A lot of the storage cost of using long precision has been reclaimed. | field | compression ratio | |---|---| | boolean | 5.97 | | int | 1.00 | | long int | 1.62 | In some cases, the extra precision afforded by longs is required only for a small proportion of values. In such cases, compression allows us the benefit of increased precision where it is needed, while staying close to the disk cost of regular ints where it is not. Effects of compression on query performance¶ The user-specifiable parameters for compression provide a lot of room for experimentation and customization, according to the demands of a particular system. For a given use case, the choice of compression parameters should be informed by a number of factors: - CPU time taken to perform the compression - change in disk I/O time - compression ratio achieved - usage patterns for the on-disk data The relative weighting of these factors will vary from one system to another. If reduction in physical disk usage is the primary motivating concern behind the use of compression, then naturally the compression ratio itself must be maximized, while constraining the other parameters to acceptable levels. The focus of our tests will be on the impact to downstream users of the data rather than the impact on the writing process. Random access to compressed data is supported, which means only required blocks of the file will be decompressed. The choice of logical block size will determine the minimum amount of data which can be decompressed at a time, so if logical block size is large relative to the average amount of data retrieved at a time, then redundant work may be performed in decompressing the data. If the gzip algorithm is used, then the trade-off between compression level and time taken to de/compress must also be considered. Setup¶ Each system will have its own performance characteristics, and experimentation with the compression parameters should be done on a case-by-case basis. We will measure the impact of using compressed rather than uncompressed data across a range of query types. The tests will be run first with the data present in the OS cache, and then with the cache completely cold, having been emptied before each test using the cache-flush functionality of the io.q script. Each test will be performed 10 times (with the cache being fully flushed before each iteration of the cold-cache tests) and timings will be averaged. For hot-cache results, the first query time will not be included in the average. This is to allow for initial caching from disk. All tests are performed using kdb+ 3.1 2013.09.05 on a Linux Intel(R) Xeon(R) L5520 16 CPU/4 core 2.27GHz with 144GB RAM. Disk used is ext3 SAN. We will use a basic test schema of a trade and quote table, with 100M rows of trades and 400M rows of quotes, evenly distributed throughout a trading day. 
n:100000000 st:.z.D+09:30 et:.z.D+16:00 trade:([]sym:asc n?`3; time:"p"$st+((et-st)%n-1)*til n; price:n?1000.; size:n?100) n*:4 quote:([]sym:asc n?`3; time:"p"$st+((et-st)%n-1)*til n; bp:n?1000.; ap:n?1000.; bs:n?100; as:n?100) Procedure¶ The above test tables were saved to disk, both uncompressed and compressed using the kdb+ IPC algorithm, with low, mid- and high-range logical block sizes. The relative changes in performance (compressed vs uncompressed) of the following query types were measured: - Select all data from the table - Select all data for a subset of symbols - Aggregate values by symbol - Aggregate values by symbol and time bucket - As-of join straight from disk Results¶ The following charts plot the ratio of performance of queries on compressed data vs the performance of the same query on uncompressed data. So a value of 2 means the query on compressed data took twice as long as its uncompressed counterpart, and a value less than 1 means the performance improved on compressed data. The compression ratios achieved did not change significantly across logical block sizes: | logical block size | compression ratio | |---|---| | 12 | 1.50 | | 16 | 1.55 | | 20 | 1.55 | Select all data from the table¶ select from trade where i<>0 When querying the entire table with cold OS cache, we see significant improvements in performance using compressed data. This is because there is a reduction in the amount of raw data which must be read from disk, and the cost of decompression is low when compared to this benefit. The relative improvement increases with logical block size, as decompressing an entire vector can be done more efficiently when there are fewer blocks to read and coalesce. In the case of hot OS cache, we see that performance has degraded to 2-3 times its original level. This illustrates the cost of the OS having access only to compressed raw data. When the cache is hot, the decompression cost is high relative to the total time taken to get the uncompressed data into memory. Select all data for a subset of symbols¶ select from trade where sym in ids ids is a vector of 100 distinct randomly-selected symbols from the trade table, which was created prior to cache flushing. Querying on 100 symbols with a cold cache, we see moderate improvements in query performance on compressed data at low logical block sizes. With larger logical blocks, we need to carry out some redundant decompression work, as we only require sub-sections of each block. The cost of this redundant work eventually cancels out the disk-read improvement for queries on this particular number of symbols. With a hot cache, querying on a subset of symbols gets a lot slower with large logical blocks. This illustrates the benefit of fully-random access (i.e. no redundant work), which is afforded by using uncompressed data. However, these queries still return in the range of hundreds of milliseconds, so a significant increase in query time may still be worth it, depending on the compression and performance desired in a particular case. Aggregate values by a subset of symbols¶ select size wavg price by sym from trade where sym in ids When we add the additional work of aggregating data by symbol, we don’t see much change in the case of a cold cache. But with a hot cache, the increased CPU load of aggregation reduces the relative advantage of compressed data. Data-read time is now slightly less of a contributing factor towards overall performance. 
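For reference, each of these timings was collected with a simple harness along the lines of the sketch below (timeHot is an illustrative helper name, not part of the original scripts): it runs the query 10 times via the \ts system command, discards the first run and averages the rest. The cold-cache variant would additionally invoke the io.q cache-flush function before each iteration.
/ average elapsed ms over 10 runs, first run discarded to allow initial caching
timeHot:{[qry] avg 1_{first system"ts ",y}[;qry] each til 10}
timeHot"select size wavg price by sym from trade where sym in ids"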
Aggregate values by a subset of symbols using time buckets¶ select size wavg price by sym, 5 xbar time.minute from trade where sym in ids Aggregating by time bucket in addition to by sym, we see that the relative decrease in performance on compressed data has again reduced, as data-retrieval time is less of a factor. As-of join a trade and quote table¶ aj[`sym`time; select from trade where sym in ids; select from quote] The final test is a highly CPU-intensive as-of join of the trade and quote records for 100 symbols. Extending the pattern of previous tests, we can see that performance against compressed trades and quotes is quite comparable to results against uncompressed tables. Conclusion¶ In any given application, the metrics which will determine if and how compression should be used will be informed by the individual use case. Our tests have shown that for this particular setup, assuming some OS caching, the more CPU-intensive queries will see significantly less overall impact than simple reads on small, randomly distributed pieces of data. This implies that we could expect to see acceptable performance levels for intensive queries, while achieving worthwhile reductions in disk usage, i.e. our compression ratio of 1.5. Experimenting with logical block sizes suggested that we should keep this parameter at the low end of the spectrum for our queries, or we will see a fall-off in performance. This is reinforced by the relative lack of change in compression ratio across logical block sizes. Depending on the individual use case, other applications may choose to optimize for overall compression ratio or time taken to save the data. Author¶ Eoin Killeen has worked with KX technology for over ten years, designing and building real-time trading and analytics systems for clients in New York and London. Eoin’s current focus is Solution Architecture for EMEA-based clients.
Circles in a Circle, 1923 Everything begins with a dot. — W.W. Kandinsky . Apply, Index, Trap @ Apply At, Index At, Trap At¶ - Apply a function to a list of arguments - Get items at depth in a list - Trap errors | rank | syntax | function semantics | list semantics | |---|---|---|---| | 2 | v . vx .[v;vx] | Apply Apply v to list vx of arguments | Index Get item/s vx at depth from v | | 2 | u @ ux @[u;ux] | Apply At Apply unary u to argument ux | Index At Get items ux from u | | 3 | .[g;gx;e] | Trap Try g . gx ; catch with e | | | 3 | @[f;fx;e] | Trap At Try f@fx ; catch with e | | Where - e is an expression, typically a function - f is a unary function and fx in its domain - g is a function of rank \(n\) and gx an atom or list of count \(n\) with items in the domains of g - v is a value of rank \(n\) (or a handle to one) and vx a list of count \(n\) with items in the domains of v - u is a unary value (or a handle to one) and ux in its domain Amend, Amend At¶ For the ternary and quaternary forms .[d; i; u] @[d; i; u] .[d; i; v; vy] @[d; i; v; vy] where - d is a list or dictionary, or a handle to a list, dictionary or datafile - i indexes d as d . i or d @ i (must be a list for Amend) - u is a unary with d in its domain - v is a binary with d and vy in its left and right domains see Amend and Amend At. Apply, Index¶ v . vx evaluates value v on the \(n\) arguments listed in vx . q)add / addition 'table' 0 1 2 3 1 2 3 4 2 3 4 5 3 4 5 6 q)add . 2 3 / add[2;3] (Index) 5 q)(+) . 2 3 / +[2;3] (Apply) 5 q).[+;2 3] 5 q).[add;2 3] 5 If v has rank \(n\), then vx has \(n\) items and v is evaluated as: v[vx[0]; vx[1]; …; vx[-1+count vx]] If v has rank 2, then vx has 2 items and v is applied to the first argument vx[0] and the second argument vx[1] . v[vx[0];vx[1]] Variadic operators Most binary operators such as Add have deprecated unary forms and are thus actually variadic. Where v is such a variadic operator, parenthesize it to provide it as the left argument of Apply. q).[+;2 2] 4 q)(+) . 2 2 4 If v has rank 1, then vx has one item and v is applied to the argument vx[0] . v[vx[0]] Q for Mortals §6.5.3 Indexing at Depth Nullaries¶ Nullaries (functions of rank 0) are handled differently. The pattern above suggests that the empty list () would be the argument list to nullary v , but Apply for nullary v is denoted by v . enlist[::] , i.e. the right argument is the enlisted null. For example: q)a: 2 3 q)b: 10 20 q){a + b} . enlist[::] 12 23 Index¶ d . i returns an item from list or dictionary d as specified by successive items in list i . Since 4.1t 2022.03.25, d can be a persisted table. The result is found in d at depth count i as follows. The list i is a list of successive indexes into d . i[0] must be in the domain of d@ . It selects an item of d , which is then indexed by i[1] , and so on. ( (d@i[0]) @ i[1] ) @ i[2] … q)d ((1 2 3;4 5 6 7) ;(8 9;10;11 12) ;(13 14;15 16 17 18;19 20)) q)d . enlist 1 / select item 1, i.e. d@1 8 9 10 11 12 q)d . 1 2 / select item 2 of item 1 11 12 q)d . 1 2 0 / select item 0 of item 2 of item 1 11 A right argument of enlist[::] selects the entire left argument. q)d . enlist[::] (1 2 3;4 5 6 7) (8 9;10;11 12) (13 14;15 16 17 18;19 20) Index At¶ The selections at each level are individual applications of Index At: first, item d@i[0] is selected, then (d@i[0])@i[1] , then ((d@i[0])@ i[1])@ i[2] , and so on. These expressions can be rewritten using Over applied to Index At; the first is d@/i[0] , the second is d@/i[0 1] , and the third is d@/i[0 1 2] . In general, for a vector i of any count, d . i is identical to d@/i .
q)((d @ 1) @ 2) @ 0 / selection in terms of a series of @s 11 q)d @/ 1 2 0 / selection in terms of @-Over 11 Cross sections¶ Index is cross-sectional when the items of i are lists. That is, items-at-depth in d are indexed for paths made up of all combinations of atoms of i[0] and atoms of i[1] and atoms of i[2] , and so on to the last item of i . The simplest case of cross-sectional index occurs when the items of i are vectors. For example, d .(2 0;0 1) selects items 0 and 1 from both items 2 and 0: q)d . (2 0; 0 1) 13 14 15 16 17 18 1 2 3 4 5 6 7 q)count each d . (2 0; 0 1) 2 2 Note that items appear in the result in the same order as the indexes appear in i . The first item of i selects two items of d , as in d@i[0] . The second item of i selects two items from each of the two items just selected, as in (d@i[0])@'i[1] . Had there been a third vector item in i , say of count 5, then that item would select five items from each of the four items-at-depth 1 just selected, as in ((d@i[0])@'i[1])@''i[2] , and so on. When the items of i are vectors the result is rectangular to at least depth count i , depending on the regularity of d , and the k th item of its shape vector is count i[k] for every k less than count i . That is, the first count i items of the shape of the result are count each i . More general cross-sectional indexing occurs when the items of i are rectangular lists, not just vectors, but the situation is much like the simpler case of vector items. Nulls in i ¶ Nulls in i mean “select all”: if i[0] is null, then continue on with d and the rest of i , i.e. 1_i ; if i[1] is null, then for every selection made through i[0] , continue on with that selection and the rest of i , i.e. 2_i ; and so on. For example, d .(::;0) means that the 0th item of every item of d is selected. q)d (1 2 3;4 5 6 7) (8 9;10;11 12) (13 14;15 16 17 18;19 20) q)d . (::;0) 1 2 3 8 9 13 14 Another example, this time with i[1] equal to null: q)d . (0 2;::;1 0) (2 1;5 4) (14 13;16 15;20 19) Note that d .(::;0) is the same as d .(0 1 2;0) , but in the last example, there is no value that can be substituted for null in (0 2;;1 0) to get the same result, because when item 0 of d is selected, null acts like 0 1 , but when item 2 of d is selected, it acts like 0 1 2 . The general case of a non-negative integer list i ¶ In the general case, when the items of i are non-negative integer atoms or lists, or null, the structure of the result can be thought of as cascading structures of the items of i . That is, with nulls aside, the result is structurally like i[0] , except that wherever there is an atom in i[0] , the result is structurally like i[1] , except that wherever there is an atom in i[1] , the result is structurally like i[2] , and so on. The general case of Index can be defined recursively in terms of Index At by partitioning the list i into its first item and the rest: Index:{[d;F;R] $[ F~::; Index[d; first R; 1 _ R]; 0=count R; d @ F; 0>type F; Index[d @ F; first R; 1 _ R]; Index[d;; R]'F ]} That is, d . i is Index[d;first i;1_i] . To work through the definition, start with F as the first item of i and R as the remainder.
At each step in the recursion: - if F is null then select all of d and continue on, with the first item of the remainder R as the new F and the remainder of R as the new remainder; - otherwise, if the remainder is the empty vector apply Index At (the right argument F is now the last item of i ), and we are done; - otherwise, if F is an atom, apply Index At to select that item of d and continue on in the same way as when F is null; - otherwise, apply Index with fixed arguments d and R , but independently to the items of the list F . Dictionaries and symbolic indexing¶ If i is a symbol atom then d must be a dictionary or handle of a directory on the K-tree, and d . i selects the value of the entry named in i . For example, if: dir:`a`b!(2 3 4;"abcdefg") then `dir . enlist`b is "abcdefg" and `dir . (`b;1 3 5) is "bdf" . If i is a list whose items are non-negative integer atoms and symbol atoms, then just like the non-negative integer vector case, d . i is a single item at depth count i in d . The difference is that wherever a symbol appears in i , say as the kth item, the selection up to the kth item must produce a dictionary or a handle of a directory. Selection by the kth item is the value of an entry in that dictionary or directory, and further selections go on from there. For example: q)(1;`a`b!(2 3 4;10 20 30 40)) . (1; `b; 2) 30 As we have seen above for the general case, every atom in the k th item of i must be a valid index of all items at depth k selected by d . k # i . Moreover, symbols can only select from dictionaries and directories, and integers cannot. Consequently, if the k th item of i contains a symbol atom, then all items selected by d . k # i must be dictionaries or handles of directories, and therefore all atoms in the k th item of i must be symbols. It follows that each item of i must be made up entirely of non-negative integer atoms, or entirely of symbol atoms, and if the k th item of i is made up of symbols, then all items at depth k in d selected by the first k items of i must be dictionaries. Note that if d is either a dictionary or handle to a directory then d . enlist key d is a list of values of all the entries. Step dictionaries¶ Where d is a dictionary, d@i or d[i] or d i returns for each item of i that is outside the domain of d a null of the same type as the values. q)d:`cat`cow`dog`sheep!`chat`vache`chien`mouton q)d cat | chat cow | vache dog | chien sheep| mouton q)d `sheep`snake`cat`ant `mouton``chat` q) q)e:(10*til 10)!til 10 q)e 0 | 0 10| 1 20| 2 30| 3 40| 4 50| 5 60| 6 70| 7 80| 8 90| 9 q)e 80 35 20 -10 8 0N 2 0N A step dictionary has the sorted attribute set. Its keys are a sorted vector. Where s is a step dictionary, and i[k] are the items of i that are outside the domain of s , the value/s for s@i[k] are the values for the highest keys that are lower than i[k] . q)d:`cat`cow`dog`sheep!`chat`vache`chien`mouton q)ds:`s#d q)ds~d 1b q)ds `sheep`snake`cat`ant `mouton`mouton`chat` q) q)es:`s#e q)es~e 1b q)es 80 35 20 -10 8 3 2 0N Set Attribute Step Dictionaries Apply At, Index At¶ @ is syntactic sugar for the case where u is a unary and ux a 1-item list. u@ux is always equivalent to u . enlist ux . Brackets are syntactic sugar The brackets of an argument list are also syntactic sugar. Nothing can be expressed with brackets that cannot also be expressed using . . You can use the derived function @\: to apply a list of unary values to the same argument.
q){`o`h`l`c!(first;max;min;last)@\:x}1 2 3 4 22 / open, high, low, close o| 1 h| 22 l| 1 c| 22 Composition¶ A sequence of unaries u , v , w … can be composed with Apply At as u@v@w@ . All but the last @ may be elided: u v w@ . q)tc:til count@ / indexes of a list q)tc "abc" 0 1 2 The last value in the sequence can have higher rank if projected as a unary by Apply. q)di:reciprocal(%). / divide into q)di 2 3 / divide 2 into 3 1.5 Trap¶ In the ternary, if evaluation of the function fails, the expression is evaluated. (Compare try/catch in some other languages.) q).[+;"ab";`ouch] `ouch If the expression is a function, it is evaluated on the text of the signalled error. q).[+;"ab";{"Wrong ",x}] "Wrong type" For a successful evaluation, the ternary returns the same result as the binary. q).[+;2 3;{"Wrong ",x}] 5 Trap At¶ @[f;fx;e] is equivalent to .[f;enlist fx;e] . Use Trap At as a simpler form of Trap, for unary values. .Q.trp (extend trap at) Limit of the trap¶ Trap catches only errors signalled in the applications of f or g . Errors in the evaluation of fx or gx themselves are not caught. q)@[2+;"42";`err] `err q)@[2+;"42"+3;`err] 'type [0] @[2+;"42"+3;`err] ^ When e is not a function¶ If e is a function it will be evaluated only if f or g fails. It will however be parsed before any of the other expressions are evaluated. q)@[2+;"42";{)}] ') [0] @[2+;"42";{)}] ^ If e is any other kind of expression it will always be evaluated – and first, in the usual right-to-left sequence. In this respect Trap and Trap At are unlike try/catch in other languages. q)@[string;42;a:100] / expression not a function "42" q)a // but a was assigned anyway 100 q)@[string;42;{b::99}] / expression is a function "42" q)b // not evaluated 'b [0] b ^ For most purposes, you will want e to be a function. Q for Mortals §10.1.8 Protected Evaluation Errors signalled¶ index: an atom in vx or ux is not an index to an item-at-depth in d rank: the count of vx is greater than the rank of v type: v or u is a symbol atom, but not a handle to a value type: an atom of vx or ux is not an integer, symbol or null
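For example, passing more arguments than the rank of the value signals rank; since the error arises in the application of the value, a trap catches it (a small illustration):
q).[+;2 3 4]          / Add has rank 2 but vx has count 3
'rank
q).[+;2 3 4;`caught]  / signalled in the application of +, so the trap catches it
`caught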
Glossary¶ Ontology asks, What exists?, to which the answer is Everything. — W.V.O. Quine, Word and Object Aggregate function¶ A function that reduces its argument, typically a list to an atom, e.g. sum Applicable value¶ A function, file- or process-handle, list, or dictionary: an object that can be applied to its argument/s or index/es. Apply¶ As in apply a function to its arguments: evaluate a function on values corresponding to its arguments. Application Argument¶ In the expression 10%4 the operator % is evaluated on the arguments 10 and 4. 10 is the left argument and 4 is the right argument. By extension, the first and second arguments of a binary function are called its left argument and right argument regardless of whether it is applied infix. In the expression %[10;4] 10 and 4 are still referred to as the left and right arguments. By extension, where a function has rank >2, its left argument is its first argument, and its right arguments are the remaining arguments. Correspondingly, the left domain and right domain of a binary function are the domains of its first and second arguments, regardless of whether or not the function may be applied infix. By extension, where a function has rank >2, its left domain is the domain of its first argument, and its right domains are the domains of the remaining arguments. The terminology generalizes to values. - The left domain of a matrix m istil count m . - The right domain of a matrix is til count first m . - The right domains of a list m of depthn are1_n{til count first x}\m . The single argument of a unary function is sometimes referred to as its right argument. Argument list¶ A pair of square brackets enclosing zero or more items separated by semicolons. %[10;4] / % applied to argument list [10;4] Atom¶ A single instance of a datatype, eg 42 , "a" , 1b , 2012.09.15 . The type of an atom is always negative. Atomic function¶ An atomic function is a uniform function such that for r:f[x] r[i]~f x[i] is true for all i , e.g. signum . A function f is atomic if f is identical to f' . Attribute¶ Attributes are metadata associated primarily with tables and dictionaries to improve performance. The attributes are: sorted, unique, grouped, and partitioned. Reference: Set Attribute, Step dictionaries Binary¶ A value of rank 2, i.e. a function that takes 2 arguments, or a list of depth ≥2. (The terms dyad and dyadic are now deprecated.) Bracket notation¶ Applying a value to its argument/s or indexes by writing it to the left of an argument list, e.g. +[2;3] or count["zero"] . Chained tickerplant¶ A chained tickerplant subscribes to the master tickerplant and receives updates like any other subscriber, and then serves that data to its subscribers in turn. Character constant¶ A character constant is defined by entering the characters between double-quotes, as in "abcdefg" . If only one character is entered the constant is an atom, otherwise the constant is a list. For example, "a" is an atom. The expression enlist "a" is required to indicate a one character list. Escape sequences for entering non-graphic characters in character constants. Character vector¶ A character vector is a simple list whose items are all character atoms. When displayed in a session, it appears as a string of characters surrounded by double-quotes, as in: "abcdefg" , not as individual characters separated by semicolons and surrounded by parentheses (that is, not in list notation). 
When a character vector contains only one character, the display is distinguished from the atomic character by prepending a comma, as in ,"x" . String is another name for character vector. Comment¶ Characters ignored by the interpreter. Communication handle¶ A communication handle specifies a network resource. Communication handles Q for Mortals §11.6.1 Communication Handle Comparison tolerance¶ Because floating-point values resulting from computations are usually only approximations to the true mathematical values, the Equal operator is defined so that x = y is 1b (true) for two floating-point values that are either near one another or identical. Compound list¶ A list of vectors of uniform type, e.g. ("quick";"brown";"fox") . Conform¶ Lists, dictionaries and tables conform if they are either atoms or have the same count. Connection handle¶ A handle to a connection opened to a communication handle or object in the file system. hclose , hopen Connection handles, File system, Interprocess communication Console¶ Console refers to the source of messages to q and their responses that are typed in a q session. It is denoted by system handle 0 . Control word¶ Control words do , if , and while interrupt the usual evaluation rules, e.g. by omitting expressions, terminating evaluation. Count¶ The number of items in a list, keys in a dictionary or rows in a table. The count of an atom is 1. Depth¶ The depth of a list is the number of levels of nesting. For example, an atom has depth 0, a list of atoms has depth 1, a list of lists of atoms has depth 2, and so on. The following function computes the depth of any data object: q)depth:{$[0>type x; 0; 1 + max depth'[x]]} That is, an atom has depth 0 and a list has depth equal to 1 plus the maximum depth of its items. q)depth 10 / atom 0 q)depth 10 20 / vector 1 q)depth (10 20;30) / list 2 Dictionary¶ A dictionary is a mapping from a list of keys to a list of values. (The keys should be unique, though q does not enforce this.) The values of a dictionary can be any data structure. q)/4 keys and 4 atomic values q)`bob`carol`ted`alice!42 39 51 44 bob | 42 carol| 39 ted | 51 alice| 44 q)/2 keys and 2 list values q)show kids:`names`ages!(`bob`carol`ted`alice;42 39 51 44) names| bob carol ted alice ages | 42 39 51 44 Domain¶ The domain of a function is all the possible values of its argument. Functions with multiple arguments have multiple domains. A function’s first domain is known as its left domain. Its second domain is its right domain. For example, the left domain of rotate is integer atoms and its right domain is lists. q)3 rotate "abcde" "deabc" If a function has more than two arguments, all but the first domain are its right arguments and their corresponding domains its right domains. For example, the left domain of ssr is char lists, and its right domains are char lists or atoms. q)ssr["advance";"adv";"a d"] "a dance" q)ssr["advance";"a";"-"] "-dv-nce" q)ssr["a";"a";"-"] / left domain doesn't include atoms 'type [0] ssr["a";"a";"-"] All applicable values have domains. The domain of a dictionary is its keys. The domain of a list is its indexes. The left domain of a matrix is its row numbers. Its right domain is its column numbers. The left domain of a table is its row numbers. Its right domain is its column names. All applicable values are mappings from their domains to their ranges. Empty list¶ The generic empty list has no items, has count 0, and is denoted by () . 
The empty character vector may be written "" , the empty integer vector 0#0 , the empty floating-point vector 0#0.0 , and the empty symbol vector 0#` or `$() . The distinction between () and the typed empty lists is relevant to certain operators (e.g. Match) and also to formatting data on the screen. Enumeration¶ A representation of a list as indexes of the items in its nub or another list. Enumerations Entry¶ The items of a dictionary are its entries. Each entry consists of a key and a corresponding value. Escape sequence¶ An escape sequence is a special sequence of characters representing a character atom. An escape sequence usually has some non-graphic meaning, for example the tab character. An escape sequence can be entered in a character constant and displayed in character data. Expression block, expression list¶ A pair of square brackets enclosing zero or more expressions separated by semicolons. Feedhandler¶ A process that receives and processes, typically high volumes of, messages from a source such as a financial exchange. File descriptor¶ Either: - a file symbol - a 2-list (filesymbol;offset) - a 3-list (filesymbol;offset;length) whereoffset andlength are non-zero integers Filehandle¶ Either a filename or a filesymbol. Filename¶ An absolute or relative path in the filesystem to a file or directory as a string, e.g. ":path/to/data" . File symbol¶ An absolute or relative path in the filesystem to a file or directory as a symbol atom, e.g. `:path/to/data Finite-state machine¶ A dictionary or list represents a finite-state machine when its values (dictionary) or items (list) can be used to index it. For example: q)show l:-10?10 1 8 5 7 0 3 6 4 2 9 / all items are also indexes q)yrp / a European tour from to wp ---------------- London Paris 0 Paris Genoa 1 Genoa Milan 1 Milan Vienna 1 Vienna Berlin 1 Berlin London 0 q)show route:(!/)yrp`from`to / finite-state machine London| Paris Paris | Genoa Genoa | Milan Milan | Vienna Vienna| Berlin Berlin| London Flag¶ A boolean or an integer in the range (0,1). Function¶ A mapping from input/s to result defined by an algorithm. Operators, keywords, compositions, projections and lambdas are all functions. .Q.res returns a list of keywords Function atom¶ A function can appear in an expression as data, and not be subject to immediate evaluation when the expression is executed, in which case it is an atom. For example: q)f: + / f is assigned Add q)(f;102) / an item in a list + 102 Handle¶ A handle is a symbol holding the name of a global variable, which is a node in the K-tree. For example, the handle of the name a_c is `a_c . The term handle is used to point out that a global variable is directly accessed. Both of the following expressions amend x : x: .[ x; i; f; y] .[`x; i; f; y] In the first, referencing x as the first argument causes its entire value to be constructed, even though only a small part may be needed. In the second, the symbol `x is used as the first argument. In this case, only the parts of x referred to by the index i will be referenced and reassigned. The second case is usually more efficient than the first, sometimes significantly so. Where x is a directory, referencing the global variable x causes the entire dictionary value to be constructed, even though only a small part of it may be needed. Consequently, in the description of Amend, the symbol atoms holding global variable names are referred to as handles. HDB¶ Historical database: a database that represents past states of affairs. 
Identity element¶ For function f the value x such that y~f[x;y] for any y . Q knows the identity elements of some functions, e.g. + (zero), but not others, e.g. {x+y} (also zero). Infix¶ Applying an operator by writing it between its arguments, e.g. 2+3 applies + to 2 and 3 Item, list item¶ A member of a list: can be any function or data structure. Iterator¶ An iterator is a higher-order operator. It takes a value as its argument and returns a derived function that iterates it. All the iterators are unary operators. They are the only operators that can be applied postfix. They almost invariably are. Iterators Iterator pattern, Iterator What exactly are iterator, iterable, and iteration? Wiktionary, Lexico K-tree¶ The K-tree is the hierarchical name space containing all global variables created in a session. The initial state of the K-tree when kdb+ is started is a working directory whose absolute path name is `. together with a set of other top-level directories containing various utilities. The working directory is for interactive use and is the default active, or current, directory. An application should define its own top-level directory that serves as its logical root, using a name which will not conflict with any other top-level application or utility directories present. Every subdirectory in the K-tree is a dictionary that can be accessed like any other variable, simply by its name. Keyed table¶ See Table. Lambda¶ Functions are defined in the lambda notation: an optional signature followed by a list of expressions, separated by semicolons, and all embraced by curly braces, e.g. {[a;b](a*a)+(b*b)+2*a*b} . A defined function is also known as a lambda. Left argument¶ See Argument Left-atomic function¶ A left-atomic function f is a binary f that is atomic in its left, or first, argument. That is, for every valid right argument y , the unary f[;y] is atomic. Left domain¶ See Argument Left uniform¶ The result of a left-uniform function has the same length as its left argument. List¶ An array, its items indexed by position. Matrix¶ A list in which all items are lists of the same count. Name, namespace¶ A namespace is a container or context within which a name resolves to a unique value. Namespaces are children of the default namespace and are designated by a dot prefix. Names in the default namespace have no prefix. The default namespace of a q session is parent to multiple namespaces, e.g. .h , .Q and .z . (Namespaces with 1-character names – of either case – are reserved for use by KX.) q).z.p / UTC timestamp 2017.02.01D14:58:38.579614000 Namespaces are dictionaries. q)v:5 q).ns.v:6 q)`.[`v] / value of v in root namespace 5 q)`.ns[`v] / value of v in ns 6 q)`. `v / indexed by juxtaposition 5 q)`.ns `v`v 6 6 q)`.`.ns@\:`v 5 6 Native¶ A synonym for primitive. Nub¶ The unique items of a list. Reference: distinct Null¶ Null is the value of an unspecified item in a list formed with parentheses and semicolons. For example, null is the item at index position 2 of (1 2;"abc";;`xyz) . Null is an atom; its value is :: . Nulls have special meaning in the right argument of the operator Index and in the bracket form of function application. Nullary¶ A function of rank 0, i.e. that takes no arguments. Operator¶ A primitive binary function that may be applied infix as well as prefix, e.g. + , & . Partitioned file¶ To limit the size of files in an HDB it is common to partition them by time period, for example, calendar day. The partitioning scheme is described to kdb+ in the par.txt file. 
Files representing a splayed table may also be partitioned. Postfix¶ Applying an iterator to its argument by writing it to the right, e.g. +/ applies iterator / to + . (Not to be confused with projecting an operator on its left argument.) Prefix¶ Prefix notation applies a unary value v to its argument or indices x ; i.e. vx is equivalent to v[x] . Primitive¶ Defined in the q language. Process symbol¶ A symbol defining the communication path to a process. Project, projection¶ A function passed fewer arguments than its rank projects those arguments and returns a projection: a function of the unspecified argument/s. Quaternary¶ A value with rank 4. Range¶ The range of a function is the complete set of all its possible results. All applicable values are mappings from their domains to their ranges. Some operators and keywords have obvious range types; e.g. Divide % always returns a float, and sublist a list of the same type as its right argument. Otherwise, each operator or keyword article tabulates the range datatypes for its domain/s. Rank¶ Of a function, the number of arguments it takes. | rank | adjective | example | |---|---|---| | 0 | nullary | {42} | | 1 | unary | til | | 2 | binary | + Add | | 3 | ternary | ssr string search and replace | | 4 | quaternary | .[d;i;m;my] Amend | Of a list, the depth to which it is nested. A vector has rank 1; a matrix, rank 2. RDB¶ Real-time database: a database that aims to represent a state of affairs in real time. Reference, pass by¶ Pass by reference means passing the name of an object (as a symbol atom) as an argument to a function, e.g. key `.q . Right argument/s¶ See Argument Right-atomic function¶ A right-atomic function f is a binary that is atomic in its right, or second, argument. That is, for every valid left argument x , the unary function f[x;] is an atomic function. Right domain/s¶ See Argument Right uniform¶ The result of a right-uniform function has the same length as its right argument. Script¶ A script is a text file; its lines a list of expressions and/or system commands, to be executed in sequence. By convention, a script file has the extension q . Within a script - function definitions may extend over multiple lines - an empty comment begins a multiline comment. Signature¶ The argument list that (optionally) begins a lambda, e.g. in {[a;b](a*a)+(b*b)+2*a*b} , the signature is [a;b] . Simple table¶ See Table. Splayed table¶ To limit the size of individual files, and to speed searches, it is common to splay a large table by storing its columns as separate files. The files may also be partitioned. String¶ There is no string datatype in q. String in q means a char vector, e.g. "abc". Symbol¶ A symbol is an atom which holds a string of characters, much as an integer holds a string of digits. For example, `abc denotes a symbol atom. This method of forming symbols can only be used when the characters are those that can appear in names. To form symbols containing other characters, put the contents between double quotes, as in `$"abc-345" . A symbol is an atom, and as such has count 1; its count is not related to the number of characters that appear in its display. The individual characters in a symbol are not directly accessible, but symbols can be sorted and compared with other symbols. Symbols are analogous to integers and floating-point numbers, in that they are atoms but their displays may require more than one character. (If they are needed, the characters in a symbol can be accessed by converting it to a character string.) 
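For example (a brief illustration of the points above):
q)s:`$"abc-345"    / characters not valid in a name, so formed with `$
q)count s          / a symbol is an atom
1
q)count string s   / its characters are accessible by converting to a char vector
7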
System command¶ Expressions beginning with \ are system commands. (Or multiline comments). q)/ load the script in file my_app.q q)\l my_app.q System handle¶ A connection handle to console (0), stdin (1), or stderr (2) Table¶ A simple table is a list of named lists of equal count. q)show t:([]names:`bob`carol`ted`alice; ages:42 39 51 44) names ages ---------- bob 42 carol 39 ted 51 alice 44 It is also a list of dictionaries with the same keys. q)first t names| `bob ages | 42 Table syntax can declare one or more columns of a table as a key. The values of the key column/s of a table are unique. q)show kt:([names:`bob`carol`bob`alice;city:`NYC`CHI`SFO`SFO]; ages:42 39 51 44) names city| ages ----------| ---- bob NYC | 42 carol CHI | 39 bob SFO | 51 alice SFO | 44 A keyed table is a table of which one or more columns have been defined as its key. A table’s key/s (if any) are supposed to be distinct: updating the table with rows with existing keys overwrites the previous records with those keys. A table without keys is a simple table. A keyed table is a dictionary. Its key is a table. q)key kt names city ---------- bob NYC carol CHI bob SFO alice SFO Ternary¶ A value of rank 3, i.e. a function with three arguments; or a list of depth ≥3. Ticker plant¶ A source of messages. Unary form¶ Most binary operators have unary forms that take a single argument. Q provides more legible covers for these functions. Unary function¶ A value of rank 1, i.e. a function with 1 argument, or a list of depth ≥1. Unary operator¶ See Iterator. Underlying value¶ Temporal and text data values are represented internally by numbers known as their underlying value. Comparisons – even between types – work on these underlying values. Uniform function¶ A uniform function f such that count[x]~count f x , e.g. deltas Uniform list¶ A list in which all items are of the same datatype. See also vector. Unsigned function¶ A lambda without a signature, e.g. {x*x} . Value, pass by¶ Pass by value means passing an object (not its name) as an argument to a function, e.g. key .q . Variadic¶ A value that may be applied to a variable number arguments is variadic. For example, a matrix, the operator @ , or the derived function +/ . Vector¶ A uniform list of basic types that has a special shorthand notation. A char vector is known as a string. x ¶ Default name of the first or only argument of an unsigned function. y ¶ Default name of the second argument of an unsigned function. z ¶ Default name of the third argument of an unsigned function. View¶ A view is a calculation that is re-evaluated only if the values of the underlying dependencies have changed since its last evaluation. Views can help avoid expensive calculations by delaying propagation of change until a result is demanded. The syntax for the definition is q)viewname::[expression;expression;…]expression The act of defining a view does not trigger its evaluation. A view should not have side effects, i.e. should not update global variables. view , views .Q.view (subview) Tutorial: Views
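A minimal illustration of this dependency behaviour (variable names are arbitrary):
q)a:1
q)v::a+1    / v is a view on a
q)v
2
q)a:10      / changing a marks v stale; v is recalculated on next reference
q)v
11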
Data visualization with kdb+ using ODBC: A Tableau case study¶ Business intelligence (BI) tools are widely used across many industries for their interactive nature, which enables users to create and customize dynamic data visualizations easily. KX provides its own visualization tool, Dashboards for KX, but clients might have incumbent solutions they wish to connect to kdb+. Alternatively, many organizations might wish to migrate their back-end database to kdb+ for increased efficiency and scalability, while retaining their current visualization front end. Tableau is an example of a widely-used BI tool. This paper outlines how it can be used to access kdb+ via ODBC (Open Database Connectivity), a standard application-programming interface used to connect different database management systems, specifically designed to be independent of databases and operating systems. This paper illustrates the flexibility with which kdb+ data can be accessed by Tableau using ODBC. It explains further how kdb+’s caching feature may be used to improve performance by optimizing repeated queries. Keep in mind that there will always be limitations on third-party solutions not designed from the outset for processing real-time streaming data. KX’s own visualization tool Dashboards for KX is optimized for streaming queries and inherits functionality such as user management, load balancing, access control, caching and queuing from the underlying platform as well as direct access to q for comprehensive querying capabilities. Such features and their ability to support high-volume, low-latency access to streaming data cannot be assumed in third-party products. All tests were run using kdb+ version 3.5 and Tableau 10.3. Connecting to kdb+ using ODBC¶ Instructions on how to connect kdb+ from Tableau Desktop for both Windows and Linux can be found at Interfaces: kdb+ server for ODBC3, ensuring that the extra step is performed for Tableau. For an ODBC driver to connect to an application, it needs a DSN (Data Source Name). A DSN contains the name, directory and driver of the database, and (depending on the type of DSN) the access credentials of the user. Connecting to kdb+ from Tableau Desktop¶ Once a kdb+ DSN has been added, and the rest of the set-up instructions are followed, you are ready to connect to kdb+ from Tableau. On opening Tableau, you will be prompted to select the type of database you wish to connect to, select the option Other Databases (ODBC). Next, select the correct DSN from the dropdown list and click Connect. This will automatically populate the Connection Attributes in the bottom half of the window using the DSN details defined previously. The final step is to click the Sign In button, which creates a connection to the kdb+ process, enabling the database to be queried via Tableau’s Custom SQL, as demonstrated in the following sections. Connecting to kdb+ from Tableau Server¶ The set-up instructions above, both explicit and linked, are specifically for a user connecting from Tableau Desktop. This is the local version of Tableau installed on a desktop or laptop. Tableau Server, on the other hand, is installed on a server and is accessible to users via a browser. Tableau workbooks can be shared between both by publishing from Tableau Desktop to Tableau Server. This procedure is detailed in the section Publishing to Tableau Server . This process may be handled by an organization’s support team, depending on the installation setup. 
The driver also needs to be installed, and then the connection can be initialized much as for Tableau Desktop. Note that Tableau Server can require that the dependent kdb+ configuration file q.tdc be placed in a different location than Tableau Desktop, in addition to restarting Tableau Server and all nodes in use. Other considerations¶ Since a release on 2017.09.11, qodbc3 allows specification of connection details without a DSN. This means all details, except the password, will be saved by Tableau in a workbook or saved data source. However, this change only affects desktop users. Because the password is not embedded, the DSN is still required to be defined on the server as this is the only way the password will be picked up for published reports. It is also important to note that connection details are embedded in both the Tableau workbook and the DSN definition. For version management, when sharing workbooks between developers or when publishing them to Tableau Server, this can become problematic. One workaround solution to manage this is to wipe these details from the workbook with a script before sharing or publishing workbooks. This concept is explored below in Publishing to Tableau Server . Tableau functionality for kdb+¶ Calling q from Tableau¶ Once a successful connection has been made, the next step is to begin by running some sample queries. Tableau’s Custom SQL is the method by which q queries can be run from Tableau. In particular, the q() function can be used to send synchronous queries to kdb+, as shown below. To demonstrate this, define a table tab in the kdb+ process you are connecting to. q)N:8 q)dates:2018.03.28 + til 3 q)tab:([] date:N?dates;category:N?`CORP`EQ`GOV;volume:N?til 10000) Then, in Tableau run the following in the Custom SQL. Now the data in the table tab is available for use in Tableau. Note that if tab is a not a partitioned table (and is small enough to be handled via SQL), you can just type its name into the table selector, there is no need to use q('select from tab') . Other acceptable syntaxes are: q('tablename') q('select from table where date in 2018.07.02') q('function',<Parameters.Date>) q('{[mydate] func[…]}',<Parameters.Date>) Queries can be a simple select statement or can become much more complex and flexible using inbuilt parameters supplied by Tableau, which will be demonstrated in the next section. List of known SQL compatibility issues Datatype Mapping¶ Tableau caters for multiple q datatypes. | Tableau | q | |---|---| | String | Symbol, String | | Date | Date | | Date & Time | Timestamp | | Numerical | Int, float | | Boolean | Boolean | On loading data, Tableau automatically interprets the datatype of a field. It is recommended that the user checks these have been interpreted correctly after the data is loaded. If it is incorrect, the datatype can then be easily changed on the Data Source page or in the Data pane as shown below. Function Parameters¶ Simple parameters¶ Tableau parameters provide further flexibility when working with q functions. To demonstrate, define a function func that selects from the table tab defined above. This function can be called from Tableau using Tableau-defined parameters. func:{[mydate;mycategory] select from tab where date in mydate, category in mycategory }; Take the parameter mycategory : in this example, a list of allowable symbols that are acceptable for the parameter mycategory can be defined in Tableau. This can be done in the Custom SQL stage when you are writing your query. 
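A sketch of the resulting Custom SQL call, following the syntaxes listed above (the Tableau parameter names Date and Category are illustrative and must match parameters defined in the workbook):
q('func', <Parameters.Date>, <Parameters.Category>)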
These parameters can then be shown and made available for users as a dropdown list on worksheets and dashboards as can be seen below. Tableau parameters are limited to static values, and a single select option when placed in a view. However, there are ways to make them more dynamic and flexible. This will be explored below in Dynamic Parameters. Dynamic parameters¶ As mentioned above in Simple parameters, Tableau parameters are limited to static values, and a single select option when placed in a view. However, there are several ways to make parameters smarter, and can increase their usability and flexibility. Below, two such methods are described. Predefining parameter options in a q function¶ From the previous example, the input parameter Category is limited to single values. This can be made more flexible by defining in the function a range of acceptable values. In the example below, the argument `all leads to a select with no restriction on category . func:{[mydate;mycategory] $[mycategory=`all; select from tab where date in mydate; select from tab where date in mydate, category in mycategory] }; Then all can be added to the list of predefined values in Tableau’s definition of Category: Parameters with calculated fields¶ Using parameters in conjunction with Tableau’s calculated-field functionality can be a convenient and flexible tool in calculations as well as graphical representation. This is useful when the output the user wants to see is dependent on an input parameter, and a field needs to be adjusted accordingly. For example, in the user-defined Calculation1 logic below, the quantity field is divided by a different amount depending on the chosen Category value. Below is sample output from when the user selects a Category value of EQ . In contrast, when the user selects CORP the calculated field is divided by 50. Tableau filters¶ As shown above, parameters are a useful tool for creating user-defined inputs to visualizations. However, there are cases where the user may want to return the entire data set first and only afterwards reduce the data set. This can be achieved using Tableau’s filters. Tableau Category Parameter as defined in the previous section Tableau Category Filter Filters are the standard way to reduce the set of data displayed on a worksheet. Note from the above screenshots that filters are not limited to a single select option as parameters are. Filters are most effective with fast queries on small datasets. For longer queries and/or larger datasets, filters become challenging from a performance point of view. This is because every time a filter selection is changed, the Custom SQL query runs the same query multiple times per view to build dimensions. Therefore the more filters and dimensions you add to a view, the slower performance becomes. Caching¶ One way to get around this inefficiency is to introduce caching in kdb+. Caching is storing results from previous queries or calculations in an internal lookup table (or cache) for faster data retrieval on subsequent queries. Caching here is being used to address the problem of filters causing queries to be re-run. The following example demonstrates the performance improvement of caching when incorporated into a simple q function, getTotalVolume (below), which extracts the total volume by symbol from a table t . The demonstration table t contains randomly-generated mock data of symbol and volume values. 
N:100000000; t:([] sym:N?`3;volume:N?10.0); // Function used to compute the total volume by symbol from the table t getTotalVolume:{[syms] select totalVolume:sum volume by sym from t where sym in syms }; Below is sample output of this function when called from Tableau. Query response times for an increasing number of symbols runs from hundreds of milliseconds to seconds: | number of symbols | time | |---|---| | 1,000,000 | 13 ms | | 10,000,000 | 120 ms | | 100,000,000 | 1038 ms | To incorporate caching, the existing function can be modified to store the total volume result for each queried symbol in a keyed table, called volumeCache . Whenever the function is called from Tableau, an internal lookup is performed on the volumeCache table to determine if the calculation for the requested symbol has already been performed. If so, the result can be immediately returned, otherwise a calculation against the table t is performed. volumeCache:([sym:`u#`symbol$()];totalVolume:`float$()) getTotalVolume:{[syms] if[-11h~type syms;syms:enlist syms]; // Get the list of syms which contain entries in the volumeCache // Extract the totalVolume values for those symbols if[count preCalculated:([]sym:syms) inter key[volumeCache]; result:select from volumeCache where ([]sym) in preCalculated ]; // If all syms are contained in the volumeCache then return result if[not count notPreCalculated:([]sym:syms) except key[volumeCache]; :result ]; // For syms not present in volumeCache, perform lookup result,:newEntries:select totalVolume:sum volume by sym from t where ([]sym) in notPreCalculated; // upsert new results to volumeCache upsert[`volumeCache;newEntries]; result }; Tableau queries against this modified function are significantly faster and become sub-millisecond when symbols are already present within the volumeCache . This approach greatly reduces the effect of filtering previously highlighted: | number of symbols | time (1st query) | time (2nd query) | |---|---|---| | 1,000,000 | 3 ms | <0ms | | 10,000,000 | 96 ms | <0ms | | 100,000,000 | 1021 ms | <0ms | Multiple data sources¶ kdb+ is efficient at joining data sets, and can easily do so in memory at the gateway level. However, it is also worth noting that it is possible to join two or more different datasets in Tableau if they share a common dimension or key. This can be useful when it is desirable to join certain datasets for reporting purposes only. Tableau maintains connections to multiple data sources via a number of open live connections to a q instance. This functionality makes it possible to use the results from one data source to filter another. So far, in this paper, the examples have described functionality using only one data source. For the rest of this section, working with multiple data sources and joining them in Tableau will be explored. One of the first things to note is that fields from different data sources can be included on the same worksheet, provided the sources are mapped to each other. In Tableau, fields from different data sources can be mapped to each other even if they have a different name, so long as they are the same datatype. This can be controlled and edited in Data > Edit Relationships. Dashboard Actions¶ Once a dashboard is created, the filters are controlled in Dashboard > Actions. When setting up actions for kdb+ data sources, it is important to note how the selection is cleared. For large datasets, it is recommended that you select the action Exclude all values. 
This feature prevents data from being displayed in Sheet 2 until data is first selected in Sheet 1 . This has a very significant effect on performance as it means Tableau only builds dimensions for views within the dataset that has been filtered. The following example demonstrates how much of an improvement on performance this feature can have. Once a table t is defined and subsequently called from Tableau, the next step is to create a dashboard. q) N:10000000 q) t:([] sym:N?`3;volume:N?10.0) Step-by-step instructions on how to build the dashboard shown below and performance tests can be found in Appendix A . Action Selection: Show all values Action Selection: Exclude all values Using the Exclude all values option yields a clear performance improvement. Computing time reduces from ~45secs per select/deselect down to ~0.3ms. Also, when using Exclude all values there is no Executing Query time. Exploiting this feature can be hugely useful when working with kdb+ and Tableau where the volume of datasets can be very large. Publishing from Tableau Desktop to Tableau Server¶ To share workbooks between Tableau Desktop and Tableau Server you can publish the former to the latter. Tableau provides detailed documentation and instructions on the general publishing procedure, which involves publishing from within an already-open workbook. This is not an ideal way to publish workbooks that are connected to a kdb+ database, because connection details are stored within the workbook itself. Take the following scenario: - A workbook has been developed in Tableau Desktop and is ready to share to the Testing partition in Tableau Server. - Throughout development, a development DSN has been used. But the workbook needs to be published to a UAT DSN. - So the DSN details need to be changed to the UAT DSN before publication. - The workbook again needs to be promoted, this time to the Production partition. - The workbook must be reopened, and the DSN details changed to the production DSN, before final promotion to Production . This process is manual and prone to errors. Publishing using tabcmd¶ For kdb+ connections, it is recommended to use the tabcmd command-line utility which, among other things, enables you to publish to Tableau Server from the command line. This utility allows you to deploy sheets programmatically, streamlining the process hugely. It also means that as part of the deploy procedure, the workbook can be edited by a script before publishing via tabcmd . This means you can do some efficient things like: - Wipe out the connection details that are automatically embedded in the workbook - Pick which DSN to point to, e.g. DEV ,UAT ,QA ,Prod - Pick which Tableau server to publish e.g. tableau.net ortableau-uat.net - Pick which Tableau environment to publish to e.g. Development ,Testing orProduction - Edit the Tableau project name Using tabcmd and a script to edit the workbook can be an effective way to make the publishing process smoother when connecting to kdb+, especially when scaling use cases and looking to publish across multiple environments and DSNs. Author¶ Michaela Woods is a KX Technical Evangelist and Training Manager. She pioneered combining kdb+ with Tableau, transforming the data-visualization platform for a Tier-1 Investment Bank. Appendix A¶ - Create Sheet 1 - Drag and drop sym to Columns. - Drag and drop Number of Records to Rows. - Drag and drop volume to the Marks pane on color. Right-click and pick Discrete. - - Create Sheet 2 - Drag and drop sym to Rows. 
- Drag and drop volume to Rows. Right-click and pick both Dimension and Discrete. This means every row will be displayed and not just the summed value. - - Create Dashboard 1 - Drag Sheet 1 onto the top of the dashboard. - Drag Sheet 2 onto the bottom of the dashboard. - - Make Sheet 1 a filter forSheet 2 on the dashboard.- Hover over Sheet 1 and on the top right-hand side select the middle icon that looks like a filter. - Hover over - Testing performance with default filter selection - Pick Help > Settings and Performance > Start Performance Recording - Select and deselect some of the bars in the top graph. You should notice much slower performance on deselect. - Pick Help > Settings and Performance > Stop Performance Recording A performance workbook will then pop up, and you can analyze the performance. - - Testing performance with selection Exclude all values - Pick Dashboard > Actions > Edit > Select 'Exclude all values' - Repeat step 5 A second performance workbook will pop up and can be compared with the previous one to analyze performance. -
Parse trees¶ Overview¶ parse is a useful tool for seeing how a statement in q is evaluated. Pass the parse keyword a q statement as a string and it returns the parse tree of that expression. A parse tree represents an expression, not immediately evaluated. Its virtue is that the expression can be evaluated whenever and in whatever context it is needed. The two main functions dealing with parse trees are: eval , which evaluates a parse tree; and parse , which returns one from a string containing a valid q expression. Parse trees may be the result of applying parse , or constructed explicitly. The simplest parse tree is a single constant expression. Note that, in a parse tree, a variable is represented by a symbol containing its name. To represent a symbol or a list of symbols, you will need to use enlist on that expression. q)eval 45 45 q)x:4 q)eval `x 4 q)eval enlist `x `x Any other parse tree takes the form of a list, of which the first item is a function and the remaining items are its arguments. Any of these items can be parse trees. Parse trees may be arbitrarily deep (up to thousands of layers), so any expression can be represented. q)eval (til;4) 0 1 2 3 q)eval (/;+) +/ q)eval ((/;+);(til;(+;2;2))) 6 k4, q and q.k ¶ kdb+ is a database management system which ships with the general-purpose and database language q. Q is an embedded domain-specific language implemented in the k programming language, sometimes known as k4. The q interpreter can switch between q and k modes and evaluate expressions written in k as well as q. The parse keyword can expose the underlying implementation in k . The k language is for KX implementors. It is not documented or supported for use outside KX. All the same functionality is available in the much more readable q language. However in certain cases, such as debugging, a basic understanding of some k syntax can be useful. The q.k file is part of the standard installation of q and loads into each q session on startup. It defines many of the q keywords in terms of k. To see how a q keyword is defined in terms of k we could check the q.k file or simply enter it into the q prompt: q)type @: The parse keyword on an operation involving the example above exposes the k code. Using the underlying code, it can be run with kdb+’s built-in k interpreter to show that it produces the same result: q)type 6 -7h q)parse "type 6" @: 6 q)k)@6 -7h A few q keywords are defined natively from C and do not have a k representation: q)like like Parse trees¶ A parse tree is a q construct which represents an expression but which is not immediately evaluated. It takes the form of a list where the first item is a function and the remaining items are the arguments. Any of the items of the list can be parse trees themselves. Note that, in a parse tree, a variable is represented by a symbol containing its name. Thus, to distinguish a symbol or a list of symbols from a variable, it is necessary to enlist that expression. When we apply the parse function to create a parse tree, explicit definitions in .q are shown in their full k form. In particular, an enlisted element is represented by a preceding comma.
q)parse"5 6 7 8 + 1 2 3 4" + //the function/operator 5 6 7 8 //first argument 1 2 3 4 //second argument q)parse"2+4*7" + //the function/operator 2 //first argument (*;4;7) //second argument, itself a parse tree q)v:`e`f q)`a`b`c,`d,v `a`b`c`d`e`f q)parse"`a`b`c,`d,v" , // join operator ,`a`b`c //actual symbols/lists of symbols are enlisted (,;,`d;`v) //v a variable represented as a symbol We can also manually construct a parse tree: q)show pTree:parse "(aggr;data) fby grp" k){@[(#y)#x[0]0#x 1;g;:;x[0]'x[1]g:.=y]} //fby in k form (enlist;`aggr;`data) `grp q)pTree~(fby;(enlist;`aggr;`data);`grp) //manually constructed 1b //parse tree As asserted previously every statement in q parses into the form: (function; arg 1; …; arg n) where every item could itself be a parse tree. In this way we see that every action in q is essentially a function evaluation. eval and value ¶ eval can be thought of as the dual to parse . The following holds for all valid q statements (without side effects) put into a string. (Recall that value executes the command inside a string.) //a tautology (for all valid q expressions str) q)value[str]~eval parse str 1b q)value["2+4*7"]~eval parse"2+4*7" //simple example 1b When passed a list, value applies the first item (which contains a function) to the rest of the list (the arguments). q)function[arg 1;..;arg n] ~ value(function;arg 1;..;arg n) 1b When eval and value operate on a parse tree with no nested parse trees, they return the same result. However it is not true that eval and value are equivalent in general. eval operates on parse trees, evaluating any nested parse trees, whereas value operates on the literals. q)value(+;7;3) //parse tree, with no nested trees 10 q)eval(+;7;3) 10 q)eval(+;7;(+;2;1)) //parse tree with nested trees 10 q)value(+;7;(+;2;1)) 'type q)value(,;`a;`b) `a`b q)eval(,;`a;`b) //no variable b defined 'b q)eval(,;enlist `a;enlist `b) `a`b Variadic operators¶ Many operators and some keywords in k and q are variadic. That means they are overloaded so that the behavior of the operator changes depending on the number and type of arguments. In q (not k), the unary form of operators such as (+ , $ , . , & etc.) is disabled, and keywords are provided instead. For example, in k the unary form of the $ operator equates to the string keyword in q. q)k)$42 "42" q)$42 //$ unary form disabled in q '$ q)string 42 "42" A parenthesized variadic function applied prefix is parsed as its unary form. q)($)42 "42" A familiar example of a variadic function is the Add Over function +/ derived by applying the Over iterator to the Add operator. q)+/[1000;2 3 4] // +/ applied binary 1009 q)+/[2 3 4] // +/ applied unary 9 q)(+/)2 3 4 // +/ applied unary 9 In k, the unary form of an operator can also be specified explicitly by suffixing it with a colon. q)k)$:42 "42" +: is a unary operator; the unary form of + . We can see this in the parse tree: q)parse"6(+)4" 6 (+:;4) The items of a parse result use k syntax. Since (most of) the q keywords are defined in the .q namespace, you can use dictionary reverse lookup to find the meaning. q).q?(+:) `flip So we can see that in k, the unary form of + corresponds to flip in q. q)d:`c1`c2`c3!(1 2;3 4;5 6) q)d c1| 1 2 c2| 3 4 c3| 5 6 q)k)+d c1 c2 c3 -------- 1 3 5 2 4 6 q)k)+:d c1 c2 c3 -------- 1 3 5 2 4 6 Exposed infrastructure The unary forms of operators are exposed infrastructure. Their use in q expressions is strongly discouraged. Use the corresponding q keywords instead. For example, write flip d rather than (+:)d . 
The unary forms are reviewed here to enable an understanding of parse trees, in which k syntax is visible. When using reverse lookup on the .q context we are slightly hampered by the fact that it is not an injective mapping. The Find ? operator returns only the first q keyword matching the k expression. In some cases there is more than one. Instead use the following function: q)qfind:{key[.q]where x~/:string value .q} q)qfind"k){x*y div x:$[16h=abs[@x];\"j\"$x;x]}" ,`xbar q)qfind"~:" `not`hdel We see not and hdel are equivalent. Writing the following could be confusing: q)hdel 01001b 10110b So q provides two different names for clarity. Iterators as higher-order functions¶ An iterator applies to a value (function, list, or dictionary) to produce a related function. This is again easy to see by inspecting the parse tree: q)+/[1 2 3 4] 10 q)parse "+/[1 2 3 4]" (/;+) 1 2 3 4 The first item of the parse tree is (/;+) , which is itself a parse tree. We know the first item of a parse tree is to be applied to the remaining items. Here / (the Over iterator) is applied to + to produce a new function which sums the items of a list. Functional form of a qSQL query¶ Sometimes you need to translate a qSQL query into its functional form. For example, so you can pass column names as arguments. Details are provided here.
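As a brief sketch (the table t below is invented for illustration): the first item of the parse result for a select statement is the Select operator ? , and supplying the remaining items directly as its arguments reproduces the qSQL result.
q)t:([]sym:`a`b`a;price:1 2 3)
q)?[t;();enlist[`sym]!enlist`sym;enlist[`price]!enlist(sum;`price)]
sym| price
---| -----
a  | 4
b  | 2
q)(select sum price by sym from t)~?[t;();enlist[`sym]!enlist`sym;enlist[`price]!enlist(sum;`price)]
1b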
The .h namespace¶ Markup tools Markup (HTML and XML) Data Serialization .h.br linebreak .h.cd CSV from data .h.code code after Tab .h.d delimiter .h.fram frame .h.ed Excel from data .h.ha anchor .h.edsn Excel from tables .h.hb anchor target .h.ht Marqdown to HTML .h.hc escape lt .h.iso8601 ISO timestamp .h.hr horizontal rule .h.jx table .h.hta start tag .h.td TSV from data .h.htac element .h.tx filetypes .h.htc element .h.xd XML from data .h.html document .h.xt JSON .h.http hyperlinks .h.logo KX logo Web Console .h.nbr no break .h.c0 web color .h.pre pre .h.c1 web color .h.text paragraphs .h.HOME webserver root .h.xmp XMP .h.sa anchor style .h.xs XML escape .h.sb body style .h.val value HTTP .h.he HTTP 400 URI formatting .h.hn HTTP response .h.hu URI escape .h.hp HTTP response pre .h.hug URI map .h.hy HTTP response content .h.sc URI-safe .h.ka HTTP keep-alive .h.uh URI unescape .h.ty MIME types The .h namespace contains objects for - marking up strings as HTML - converting data into various formats - composing HTTP responses - web-console display The .h namespace is reserved for use by KX, as are all single-letter namespaces. Consider all undocumented functions in the namespace as its private API | and do not use them. .h.br (linebreak)¶ HTML linebreak (string), defaults to "<br>" . .h.c0 (web color)¶ Color used by the web console (symbol), defaults to `024C7E . .h.c1 (web color)¶ Color used by the web console (symbol), defaults to `958600 . .h.cd (CSV from data)¶ .h.cd x Where x is a table or a list of columns returns a matrix of comma-separated values. q).h.cd ([]a:1 2 3;b:`x`y`z) "a,b" "1,x" "2,y" "3,z" q).h.cd (`a`b`c;1 2 3;"xyz") "a,1,x" "b,2,y" "c,3,z" Columns can be nested vectors, in which case .h.d is used to separate subitems. (Since V4.0 2020.03.17.) 0: load csv, save (save and format data) .h.code (code after Tab)¶ .h.code x Where x is a string with embedded Tab characters, returns the string with alternating segments marked up as - plain text code andnobr . q).h.code "foo\tbar" "foo <code><nobr>bar</nobr></code>" q).h.code "foo\tbar\tabc\tdef" "foo <code><nobr>bar</nobr></code> abc <code><nobr>def</nobr></code>" q).h.code "foo" "foo" .h.d (delimiter)¶ Delimiter used by .h.cd to join subitems of nested lists. Default is " " . q)show t:([a:til 3]b:3 3#"abc";c:3 3#1 2 3) a| b c -| ----------- 0| "abc" 1 2 3 1| "abc" 1 2 3 2| "abc" 1 2 3 q).h.d " " q).h.cd t "a,b,c" "0,a b c,1 2 3" "1,a b c,1 2 3" "2,a b c,1 2 3" q).h.d:"*" q).h.cd t "a,b,c" "0,a*b*c,1*2*3" "1,a*b*c,1*2*3" "2,a*b*c,1*2*3" .h.ed (Excel from data)¶ .h.ed x Where x is a table, returns as a list of strings the XML for an Excel workbook. q).h.ed ([]a:1 2 3;b:`x`y`z) "<?xml version=\"1.0\"?><?mso-application progid=\"Excel.Sheet\"?>" "<Workbook xmlns=\"urn:schemas-microsoft-com:office:spreadsheet\" xmlns:o=\"u.. save (save and format data) .h.edsn (Excel from tables)¶ .h.edsn x!y Where x is a symbol vectory is a conformable list of tables returns as a list of strings an XML document describing an Excel spreadsheet. q)show t1:([]sym:`a`b`c`d`e`f;price:36.433 30.327 31.554 29.277 30.965 33.028) sym price ---------- a 36.433 b 30.327 c 31.554 d 29.277 e 30.965 f 33.028 q)show t2:([]sym:`a`b`c`d`e`f;price:30.0 40.0 50.0 60.0 70.0 80.0) sym price --------- a 30 b 40 c 50 d 60 e 70 f 80 q).h.edsn `test1`test2!(t1;t2) "<?xml version=\"1.0\"?><?mso-application progid=\"Excel.Sheet\"?>" "<Workbook xmlns=\"urn:schemas-microsoft-com:office:spreadsheet\" xmlns:ss=\".. 
q)`:/Users/sjt/tmp/excel.xls 0: .h.edsn `test1`test2!(t1;t2) `:/Users/sjt/tmp/excel.xls save (save and format data) .h.fram (frame)¶ HTML page with two frames .h.fram[t;s;(l;r)] Where t is the page title (string)s is a list of stringsl andr are respectively the sources of the left and right frames (strings) returns as a string an HTML page with two frames in a frameset wide enough to accommodate the lines of s . Example: suppose tmp.htm contains the content for the first frame. q)`tmp.txt: 0:0N!s:" "sv'2#''string 5 10#50?100 "12 10 11 90 73 90 43 90 84 63" "93 54 38 97 88 58 68 45 22 39" "64 49 82 40 88 77 30 17 23 12" "66 36 37 44 28 20 30 34 77 61" "70 36 12 97 92 99 45 83 94 88" q).h.fram["Five rows";s;("tmpl.htm";"tmp.txt")] "<html><head><title>Five rows</title><frameset cols=\"316,*\"><frame src=\"tmp.htm\"><frame name=v src=\"tmp.txt\"></frameset></head></html>" .h.ha (anchor)¶ .h.ha[x;y] Where x is the href attribute as a symbol atom or a string, and y is the link text as a string, returns as a string an HTML A element. q).h.ha[`http://www.example.com;"Example.com Main Page"] "<a href=http://www.example.com>Example.com Main Page</a>" q).h.ha["http://www.example.com";"Example.com Main Page"] "<a href=\"http://www.example.com\">Example.com Main Page</a>" .h.hb (anchor target)¶ .h.hb[x;y] Same as .h.ha , but adds a target=v attribute to the tag. q).h.hb["http://www.example.com";"Example.com Main Page"] "<a target=v href=\"http://www.example.com\">Example.com Main Page</a>" .h.hc (escape lt)¶ .h.hc x Where x is a string, returns x with any < chars escaped. q).h.hc "<foo>" "<foo>" .h.he (HTTP 400)¶ .h.he x Where x is a string, escapes "<" characters, adds a "'" at the front, and returns an HTTP 400 error (Bad Request) with that content. q).h.he "<rubbish>" "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain\r\nConnection: close\r\.. .h.hn (HTTP response)¶ .h.hn[x;y;z] Where x is the status code (string)y is the content type (symbol)z is the content (string) returns as a string an HTTP error response. q).h.hn["404";`txt;"Not found: favicon.ico"] "HTTP/1.1 404\r\nContent-Type: text/plain\r\nConnection: close\r\nContent-Len.. .h.ty MIME types .h.hp (HTTP response pre)¶ .h.hp x Where x is a list of strings, returns as a string a valid HTTP response displaying them as a pre element in an HTML document. q)1 .h.hp" "sv'2#''string 5 10#50?100; HTTP/1.1 200 OK Content-Type: text/html Connection: close Content-Length: 257 <html><head><style>body{font:10pt verdana;text-align:justify}</style></head><body><pre>89 97 11 99 33 77 98 30 22 15 28 17 11 55 51 81 68 96 61 70 70 39 76 26 91 83 76 88 44 56 32 30 97 31 96 53 47 65 34 50 96 99 13 72 81 70 33 99 56 12 </pre></body></html> .h.hr (horizontal rule)¶ .h.hr x Where x is a string, returns a string of the same length filled with "-" . q).h.hr "foo" "---" .h.ht (Marqdown to HTML)¶ .h.ht x HTML documentation generator: where x is a symbol atom, reads file :src/x.txt and writes file :x.htm . (Marqdown is a rudimentary form of Markdown.) - edit src/mydoc.txt q).h.ht`mydoc - browse mydoc.htm (a/_mydoc.htm is navigation frame,a/mydoc.htm is content frame) Basic Marqdown formatting rules: - Paragraph text starts at the beginning of the line. - Lines beginning with "." are treated as section headings. 
- Lines beginning with "\t" get wrapped incode tags - Line data beginning with " " get wrapped inxmp tags - If second line of data starts with "-" , draw a horizontal rule to format the header - Aligns two-column data if 2nd column starts with "\t " .h.hta (start tag)¶ .h.hta[x;y] Where x is the element as a symbol atom, and y is a dictionary of attributes and values, returns as a string an opening HTML tag for element x . q).h.hta[`a;(`href`target)!("http://www.example.com";"_blank")] "<a href=\"http://www.example.com\" target=\"_blank\">" .h.htac (element)¶ .h.htac[x;y;z] Where x is the element as a symbol atom, y is a dictionary of attributes and their values, and z is the content of the node as a string, returns as a string the HTML element. q).h.htac[`a;(`href`target)!("http://www.example.com";"_blank");"Example.com Main Page"] "<a href=\"http://www.example.com\" target=\"_blank\">Example.com Main Page</.. .h.htc (element)¶ .h.htc[x;y] Where x is the HTML element as a symbol atom, and y is the content of the node as a string, returns as a string the HTML node. q).h.htc[`tag;"value"] "<tag>value</tag>" .h.html (document)¶ .h.html x Where x is the body of an HTML document as a string, returns as a string an HTML document with fixed style rules. <html> <head> <style> a{text-decoration:none}a:link{color:024C7E}a:visited{color:024C7E}a:active{color:958600}body{font:10pt verdana;text-align:justify} </style> </head> <body> BODY </body> </html> q).h.html "<p>Hello world!</p>" "<html><head><style>a{text-decoration:none}a:link{color:024C7E}a:visited{colo.. .h.http (hyperlinks)¶ .h.http x Where x is a string, returns x with embedded URLs beginning "http://" converted to HTML hyperlinks. q).h.http "The main page is http://www.example.com" "The main page is <a href=\"http://www.example.com\">http://www.example.com</.. .h.hu (URI escape)¶ .h.hu x Where x is a string, returns x with URI-unsafe characters replaced with safe equivalents. q).h.hu "http://www.kx.com" "http%3a%2f%2fwww.kx.com" .h.hug (URI map)¶ .h.hug x Where x is a char vector, returns a mapping from characters to % xx escape sequences except for the chars in x , which get mapped to themselves. .h.hy (HTTP response content)¶ .h.hy[x;y] Where x is an HTTP content type as a symbol atomy is a string returns as a string an HTTP response for y as content-type x . q)show t:([]idx: 1 2 3 4 5;val: `a`b`c`d`e) idx val ------- 1 a 2 b 3 c 4 d 5 e q)show r: .h.hy[`json] .j.j 0! select count i by val from t "HTTP/1.1 200 OK\r\nContent-Type: application/json\r\nConnection: close\r\nCo.. q)`:test.txt 0: enlist r `:test.txt q)\head test.txt "HTTP/1.1 200 OK" "Content-Type: application/json" "Connection: close" "Content-Length: 99" "" "[{\"val\":\"a\",\"x\":1}," " {\"val\":\"b\",\"x\":1}," " {\"val\":\"c\",\"x\":1}," " {\"val\":\"d\",\"x\":1}," " {\"val\":\"e\",\"x\":1}]" .h.HOME (webserver root)¶ String: location of the webserver root. .h.iso8601 (ISO timestamp)¶ .h.iso8601 x Where x is nanoseconds since 2000.01.01 as an int atom, returns as a string a timestamp in ISO-8601 format. q).h.iso8601 100 "2000-01-01T00:00:00.000000100" .h.jx (table)¶ .h.jx[x;y] Where x is an int atom, and y is the name of a table, returns a list of strings representing the records of y , starting from row x . 
q)a:([] a:100*til 1000;b:1000?1000;c:1000?1000) q){(where x="<")_x}first .h.jx[0;`a] "<a href=\"?[0\">home" "</a> " "<a href=\"?[0\">up" "</a> " "<a href=\"?[32\">down" "</a> " "<a href=\"?[968\">end" "</a> 1000[0]" q)1_.h.jx[5;`a] "" "a b c " "------------" "500 904 34 " "600 251 912" "700 584 388" "800 810 873" "900 729 430" "1000 210 148" "1100 645 499" "1200 898 285" "1300 20 279" "1400 686 267" "1500 894 668" "1600 879 611" "1700 350 352" "1800 254 600" "1900 145 257" "2000 666 101" "2100 757 132" "2200 601 910" "2300 794 637" .. .h.ka (HTTP keepalive)¶ .h.ka x Where x is an integer representing the idle timeout in units of milliseconds. A value of 0i disables keepalive (i.e. .h.ka then returns "close"). Returns a string of value close or keep-alive which can be used for the Connection HTTP header field value in the HTTP response. Can be used during the processing of an HTTP request to enable persistent connections i.e. should be called within an HTTP callback such as .z.ph, .z.pp, etc. A basic example of showing keep-alive in action for a simple response: \p 1234 q)f:{[x;y]"HTTP/1.1 200 OK\r\nConnection : ",.h.ka[x*1000i],"\r\nContent-Type: ",(.h.ty`txt),"\r\nContent-Length: ",(string count y),"\r\n\r\n",y} q).z.ph:{f[2i;"test response\n"]} Running an HTTP client such as cURL, from the same machine, shows the connection being reused for two requests. curl -v -v http://localhost:1234 http://localhost:1234 .h.logo (KX logo)¶ String: defaults to the KX logo in HTML format. .h.nbr (no break)¶ .h.nbr x Where x is a string, returns x as the content of a nobr element. q).h.nbr "foo bar" "<nobr>foo bar</nobr>" .h.pre (pre)¶ .h.pre x Where x is a list of strings, returns x as a string with embedded newlines with a pre HTML element. q).h.pre("foo";"bar") "<pre>foo\nbar\n</pre>" .h.sa (anchor style)¶ String: CSS style rules used in the web console for anchor elements. q).h.sa "a{text-decoration:none}a:link{color:024C7E}a:visited{color:024C7E}a:active{c.. .h.sb (body-style)¶ String: CSS style rules used in the web console for the HTML body. q).h.sb "body{font:10pt verdana;text-align:justify}" .h.sc (URI-safe)¶ String: characters that do not need to be escaped in URIs. q).h.sc "$-.+!*'(),abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ_0123456789" .h.td (TSV from data)¶ .h.td x Where x is a table, returns it as a list of tab-separated value strings q).h.td ([]a:1 2 3;b:`x`y`z) "a\tb" "1\tx" "2\ty" "3\tz" .h.text (paragraphs)¶ .h.text x Where x is a list of strings, returns as a string, x with each item as the content of a p element. q).h.text("foo";"bar") "<p>foo</p>\n<p>bar</p>\n" .h.tx (filetypes)¶ Dictionary of file types and corresponding conversion functions (.h.cd , .h.td , .h.xd , .h.ed ). q).h.tx raw | ,: json| k){$[10=abs t:@x;s@,/{$[x in r:"\t\n\r\"\\";"\\","tnr\"\\"r?x;x]}'x;(::.. csv | k){.q.csv 0:x} txt | k){"\t"0:x} xml | k){g:{(#*y)#'(,,"<",x),y,,,"</",x:($x),">"};(,"<R>"),(,/'+g[`r]@,/(!x)g.. xls | k){ex eb es[`Sheet1]x} Streaming and static JSON The result of .h.tx[`json] is designed for streaming as JSON Lines. 
For static JSON, enlist its argument: q).h.tx[`json] ([] 0 1) / JSON Lines "{\"x\":0}" "{\"x\":1}" q).h.tx[`json] enlist ([] 0 1) / static JSON "[{\"x\":0},\n {\"x\":1}]" q)show t:flip`items`sales`prices!(`nut`bolt`cam`cog;6 8 0 3;10 20 15 20) items sales prices ------------------ nut 6 10 bolt 8 20 cam 0 15 cog 3 20 q).h.tx[`json] t / JSON Lines "{\"items\":\"nut\",\"sales\":6,\"prices\":10}" "{\"items\":\"bolt\",\"sales\":8,\"prices\":20}" "{\"items\":\"cam\",\"sales\":0,\"prices\":15}" "{\"items\":\"cog\",\"sales\":3,\"prices\":20}" q).h.tx[`json] enlist t // static JSON "[{\"items\":\"nut\",\"sales\":6,\"prices\":10},\n {\"items\":\"bolt\",\"sale.. .h.ty (MIME types)¶ Dictionary of content types and corresponding media types. q).h.ty htm | "text/html" html| "text/html" csv | "text/comma-separated-values" txt | "text/plain" xml | "text/plain" xls | "application/msexcel" gif | "image/gif" .. .h.uh (URI unescape)¶ .h.uh x Where x is a string, returns x with % xx hex sequences replaced with character equivalents. q).h.uh "http%3a%2f%2fwww.kx.com" "http://www.kx.com" .h.val (value)¶ .h.val x .h.val is called by .z.ph to evaluate a request to the server. Its default value is value . Users can override this with a custom evaluation function. Since V3.6 and V3.5 2019.11.13. .h.xd (XML)¶ .h.xd x Where x is a table, returns as a list of strings, x as an XML table. q).h.xd ([]a:1 2 3;b:`x`y`z) "<R>" "<r><a>1</a><b>x</b></r>" "<r><a>2</a><b>y</b></r>" "<r><a>3</a><b>z</b></r>" "</R>" save (save and format data) .h.xmp (XMP)¶ .h.xmp x Where x is a list of strings, returns as a string x as the newline-separated content of an HTML xmp element. q).h.xmp("foo";"bar") "<xmp>foo\nbar\n</xmp>" .h.xs (XML escape)¶ .h.xs x Where x is a string, returns x with characters XML-escaped where necessary. q).h.xs "Arthur & Co." "Arthur & Co." .h.xt (JSON)¶ .h.xt[x;y] Where x is `json and y is a list of JSON strings, returns y as a list of dictionaries. q).h.xt[`json;("{\"foo\":\"bar\"}";"{\"this\":\"that\"}")] (,`foo)!,"bar" (,`this)!,"that" q)first .h.xt[`json;("{\"foo\":\"bar\"}";"{\"this\":\"that\"}")] foo| "bar" .j namespace (JSON de/serialization), save (save and format data)
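For illustration only (the table t, the handler body, and serving it over .z.ph are a sketch, not part of the namespace), several of these functions combine naturally when composing an HTTP response, for example returning a table as CSV:
q)t:([]sym:`a`b`c;price:10 20 30)
q).z.ph:{[req].h.hy[`csv]"\n" sv .h.cd t}  / req (request;headers) is ignored here
q).h.ty`csv                                / media type used in the response
"text/comma-separated-values"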
// @private // @kind function // @category optimizationUtility // @desc Check if the zoom conditions are sufficient // @param phi0 {float} Objective function evaluation at index 0 // @param derPhi0 {float} Derivative of objective function evaluated at index 0 // @param phiMin {float} 0bjective function evaluated at the current minimum // @param findMin {float} The currently calculated minimum value // @param zoomDict {dictionary} Parameters to be updated as 'zoom' procedure is // applied to find the optimal value of alpha // @param params {dictionary} Parameter dictionary containing the updated/ // default information used to modify the behaviour of the system as a whole // @returns {boolean} Indication as to if further zooming is required i.zoomCriteria1:{[phi0;derPhi0;phiMin;findMin;zoomDict;params] calc:phi0+findMin*derPhi0*params`c1; check1:phiMin>calc; check2:phiMin>=zoomDict`phiLo; check1 or check2 } // @private // @kind function // @category optimizationUtility // @desc Check if the zoom conditions are sufficient // @param derPhi0 {float} Derivative of the objective function evaluated at // index 0 // @param derPhiMin {float} Derivative of the objective function evaluated at // the current minimum // @param params {dictionary} Parameter dictionary containing the // updated/default information used to modify the behaviour of the system // as a whole // @returns {boolean} Indication as to if further zooming is required i.zoomCriteria2:{[derPhi0;derPhiMin;params] abs[derPhiMin`derval]<=neg derPhi0*params`c2 } // @private // @kind function // @category optimizationUtility // @desc Check if the zoom conditions are sufficient // @param derPhiMin {float} Derivative of the objective function evaluated at // the current minimum // @param alphaDiff {float} Difference between the upper and lower bound of the // zoom bracket // @returns {boolean} Indication as to if further zooming is required i.zoomCriteria3:{[derPhiMin;alphaDiff] 0<=derPhiMin[`derval]*alphaDiff } // Zoom dictionary // @private // @kind symbol // @category optimizationUtility // @desc Input keys of zoom dictionary // @type symbol[] i.zoomKeys:`aLo`aHi`phiLo`phiHi`derPhiLo`phiRec; // @private // @kind symbol // @category optimizationUtility // @desc Keys to be updated in zoom each iteration // @type symbol[] i.zoomKeys1:`phiRec`aRec`aHi`phiHi; // @private // @kind symbol // @category optimizationUtility // @desc Extra keys that have to be updated in some scenarios // @type symbol[] i.zoomKeys2:`aLo`phiLo`derPhiLo; // @private // @kind symbol // @category optimizationUtility // @desc Extra keys that have to be updated in some scenarios // @type symbol[] i.zoomKeys3:`phiRec`aRec // @private // @kind symbol // @category optimizationUtility // @desc Final updated keys to be used // @type symbol[] i.zoomReturn:`alphaStar`phiStar`derPhiStar; ================================================================================ FILE: ml_ml_registry_config_config.q SIZE: 2,534 characters ================================================================================ // config.q - Configuration used by the default usage of the registry functions // Copyright (c) 2021 Kx Systems Inc // // @category Model-Registry // @subcategory Configuration \d .ml // @kind function // @category config // // @overview // Retreive default dictionary values from JSON file // // @param file {string} File to retrieve // // @return {dict} Default JSON values getJSON:{[file] registry.config.util.getJSON"registry/config/",file,".json" } // @private 
registry.config.default:getJSON"model" // @private registry.config.model:getJSON"default" // @private /registry.config.cloudDefault:getJSON"cloud" // @private registry.config.cliDefault:getJSON"command-line" // @private symConvert:`modelName`version`vendor`code // @private registry.config.cliDefault[symConvert]:`$registry.config.cliDefault symConvert // @kind function // @category config // // @overview // Convert CLI version to correct format // // @param cfg {dict} CLI config dictionary // // @return {string|null} Updated version convertVersion:{[cfg] $[`inf~cfg`version;(::);raze"J"$"."vs string cfg`version] } // @private registry.config.commandLine:.Q.def[registry.config.cliDefault].Q.opt .z.x // @private registry.config.commandLine[`version]:convertVersion registry.config.commandLine // Ensure only one cloud vendor is to be used // @private cloudVendors:`aws`azure`gcp if[1<sum cv:cloudVendors in key registry.config.commandLine; .ml.log.fatal "Only one of `aws`azure`gcp should be defined as command line inputs" ] // @kind function // @category config // // @overview // Update configuration appropriately based on cloud vendor // input to ensure that command line arguments are picked appropriately // and inputs are appropriately formatted for each vendor. Then updated the // registry location based on cloud vs local and vendor. // // @param storage {symbol} Type registry storage, e.g. gcp, aws, azure // // @return {dict} Cloud storage location cloudLocation:{[storage] func:`$"update",$[storage in`gcp`aws;upper;@[;0;upper]]string storage; registry.config.util[func][]; enlist[storage]!enlist registry.config.commandLine storage } // @kind function // @category config // // @overview // Update registry location to local storage location from CLI // // @return {dict} Local storage location onpremLocation:{ l:registry.config.commandLine`local; enlist[`local]!enlist$[l~`;".";l] } // @private registry.location:$[any cv; string cloudLocation first cloudVendors where cv; onpremLocation[] ] ================================================================================ FILE: ml_ml_registry_config_utils.q SIZE: 2,915 characters ================================================================================ // utils.q - Utilities for the generation and modification of configuration // Copyright (c) 2021 Kx Systems Inc // // @category Model-Registry // @subcategory Configuration \d .ml // @private // // @overview // Read JSON from file and store as a q object // // @param filePath {string} The path to a JSON file to be read // // @return {dict} A q representation of the underlying JSON file registry.config.util.readJSON:{[filePath] .j.k raze read0 hsym `$filePath } // @private // // @overview // Retrieve JSON from file and store as a q object at startup // // @param filePath {string} The path to a JSON file to be read // // @return {dict} A q representation of the underlying JSON file .ml.registry.config.util.getJSON:{[filePath] @[.ml.registry.config.util.readJSON; path,"/",filePath; {[x;y].ml.registry.config.util.readJSON x}[filePath] ] } // @private // // @overview // Update the AWS default configuration if required and validate // configuration is suitable, error if configuration is not appropriate // as command line input or within the default configuration. 
// // @return {null} registry.config.util.updateAWS:{[] cli :registry.config.commandLine`aws; json:registry.config.cloudDefault[`aws;`bucket]; bool:`~.ml.registry.config.commandLine`aws; aws :$[bool;json;cli]; if[not aws like "s3://*"; .ml.log.fatal "AWS bucket must be defined via command line or in JSON config in the form s3://*" ]; .ml.registry.config.commandLine[`aws]:$[-11h<>type aws;`$;]aws; } // @private // // @overview // Update the GCP default configuration if required and validate // configuration is suitable, error if configuration is not appropriate // as command line input or within the default configuration. // // @return {null} registry.config.util.updateGCP:{[] cli :registry.config.commandLine`gcp; json:registry.config.cloudDefault[`gcp;`bucket]; bool:`~.ml.registry.config.commandLine`gcp; gcp :$[bool;json;cli]; if[not gcp like "gs://*"; .ml.log.fatal "GCP bucket must be defined via command line or in JSON config in the form gs://*"; ]; .ml.registry.config.commandLine[`gcp]:$[-11h<>type gcp;`$;]gcp; } // @private // // @overview // Update the Azure default configuration if required and validate // configuration is suitable, error if configuration is not appropriate // as command line input or within the default configuration. // // @return {null} registry.config.util.updateAzure:{[] cli :registry.config.commandLine`azure; json :`${x[0],"?",x 1}registry.config.cloudDefault[`azure;`blob`token]; bool :`~.ml.registry.config.commandLine`azure; azure:$[bool;json;cli]; if[not like[azure;"ms://*"]|all like[azure]each("*?*";"http*"); .ml.log.fatal "Azure blob definition via command line or in JSON config in the form http*?* | ms://*"; ]; .ml.registry.config.commandLine[`azure]:$[-11h<>type azure;`$;]azure; } ================================================================================ FILE: ml_ml_registry_init.q SIZE: 358 characters ================================================================================ \d .ml restinit:0b; //Not applicable functionality if[not @[get;".ml.registry.init";0b]; /loadfile`:src/analytics/util/init.q; registry.config.init:.Q.opt .z.x; loadfile`:registry/config/utils.q; loadfile`:registry/config/config.q; loadfile`:registry/q/init.q; if[restinit; loadfile`:registry/q/rest/init.q ] ] .ml.registry.init:1b ================================================================================ FILE: ml_ml_registry_q_init.q SIZE: 454 characters ================================================================================ // init.q - Initialise q functionality related to the model registry // Copyright (c) 2021 Kx Systems Inc // // Functionality relating to all basic interactions with the registry \d .ml if[not @[get;"registry.q.init";0b]; // Load all utilities loadfile`:registry/q/main/utils/init.q; // Load all functionality; loadfile`:registry/q/main/init.q; loadfile`:registry/q/local/init.q; /loadfile`:registry/q/cloud/init.q; ] registry.q.init:1b ================================================================================ FILE: ml_ml_registry_q_local_delete.q SIZE: 858 characters ================================================================================ // delete.q - Functionality for the deletion of items locally // Copyright (c) 2021 Kx Systems Inc // // @overview // Delete local items // // @category Model-Registry // @subcategory Functionality // // @end \d .ml // @kind function // @category local // @subcategory delete // // @overview // Delete a registry and the entirety of its contents locally // // @param cli {dict} UNUSED // @param 
folderPath {string|null} A folder path indicating the location // the registry to be deleted or generic null to remove registry in the current // directory // @param config {dict} Information relating to registry being deleted // // @return {null} registry.local.delete.registry:{[folderPath;config] config:registry.util.getRegistryPath[folderPath;config]; registry.util.delete.folder config`registryPath; -1 config[`registryPath]," deleted."; } ================================================================================ FILE: ml_ml_registry_q_local_init.q SIZE: 512 characters ================================================================================ // init.q - Initialise functionality for local FS interactions // Copyright (c) 2021 Kx Systems Inc // // Functionality relating to all interactions with local // file system storage \d .ml if[not @[get;".ml.registry.q.local.init";0b]; // Load all utilities loadfile`:registry/q/local/utils/init.q; // Load all functionality loadfile`:registry/q/local/new.q; loadfile`:registry/q/local/set.q; loadfile`:registry/q/local/update.q; loadfile`:registry/q/local/delete.q ] registry.q.local.init:1b ================================================================================ FILE: ml_ml_registry_q_local_new.q SIZE: 1,692 characters ================================================================================ // new.q - Generation of new elements of the ML registry locally // Copyright (c) 2021 Kx Systems Inc // // @overview // This functionality is intended to provide the ability to generate new // registries and experiments within these registries. // // @category Model-Registry // @subcategory Functionality // // @end \d .ml // @kind function // @category local // @subcategory new // // @overview // Generates a new model registry at a user specified location on-prem. // // @param config {dict|null} Any additional configuration needed for // initialising the registry // // @return {dict} Updated config dictionary containing relevant // registry paths registry.local.new.registry:{[config] config:registry.util.create.registry config; config:registry.util.create.modelStore config; registry.util.create.experimentFolders config; config } // @kind function // @category local // @subcategory new // // @overview // Generates a new named experiment within the specified registry // locally without adding a model // // @todo // It should be possible via configuration to add descriptive information // about an experiment. // // @param experimentName {string} The name of the experiment to be located // under the namedExperiments folder which can be populated by new models // associated with the experiment // @param config {dict|null} Any additional configuration needed for // initialising the experiment // // @return {dict} Updated config dictionary containing relevant // registry paths registry.local.new.experiment:{[experimentName;config] config:registry.local.util.check.registry config; registry.util.create.experiment[experimentName;config] } ================================================================================ FILE: ml_ml_registry_q_local_set.q SIZE: 3,054 characters ================================================================================
/// // Called when a registration callback throws an error. // Can be overwritten by user. // @param connName Connection name // @param err Error message // @return A dictionary to make up for the failed callback. .finos.conn.rcbErrorHandler:{[connName;err] .finos.conn.log"Registration callback threw signal: \"", err, "\" for conn: ", string connName; ()!()}; /// // Resolve a connection string. This function can be overwritten by the user. // @param hostport The connection string passed to .finos.conn.open. Always a string. // @return A list of actual connection strings that can be passed to hopen. .finos.conn.resolveAddress:enlist; .finos.conn.priv.lazyConnCooldownList:([addr:()]; lastErrorTime:`timestamp$()); .finos.conn.priv.attemptConnection:{[connName] hostports:.finos.conn.priv.connections[connName;`addresses]; i:0; n:count hostports; // Check to see if we are actually connected, this will happen // if we managed to connect to an earlier hostport in the list. conn:.finos.conn.priv.connections[connName]; fd:conn`fd; ecb:conn`ecb; if[ecb~(::);ecb:.finos.conn.priv.defaultErrorCallback]; while[null[fd] and i<n; hostport:hostports i; cont:1b; resolvedHostports:(); if[any hostport~/:exec addr from .finos.conn.priv.lazyConnCooldownList; $[.z.P>rt:.finos.conn.priv.lazyConnCooldownList[hostport;`lastErrorTime]+conn`lazyRetryTime; delete from `.finos.conn.priv.lazyConnCooldownList where addr~\:hostport; [cont:0b; .finos.conn.log"Address ",hostport," is not retried for ",string`time$rt-.z.P ] ]; ]; if[cont; resolvedHostports:@[.finos.conn.resolveAddress;hostport;.finos.conn.priv.resolverErrorCallback[connName;hostport;]]; ]; while[(null fd) and 0<count resolvedHostports; resolvedHostport:first resolvedHostports; resolvedHostports:1_resolvedHostports; if[not null fd:.finos.conn.errorTrapAt[hopen;(resolvedHostport;conn`timeout);'[{0Ni};]ecb[connName;hostport;]]; resolvedHostports:(); .finos.conn.log"Connection ",string[connName]," connected to ",hostport; .finos.conn.priv.connections[connName;`fd]:fd; //Invoke the connect cb inside protected evaluation .finos.conn.errorTrapAt[ {.finos.conn.priv.connections[x;`ccb][x]}; connName; .finos.conn.ccbErrorHandler[connName;]]; regcb:.finos.conn.priv.connections[connName;`rcb]; if[not 0b~regcb; reginfo:$[regcb~(::);()!();@[regcb;::;.finos.conn.rcbErrorHandler[connName;]]]; if[not 99h=type reginfo; .finos.conn.log"registration callback didn't return a dictionary for conn: ",string[connName]; reginfo:()!()]; reginfo:reginfo,enlist[`connStr]!enlist .Q.s1 .finos.conn.list[][connName;`addresses]; .finos.conn.registerRemote[connName;reginfo]; @[.finos.conn.asyncFlush;connName;{}]; //fails with 'domain if handle=0 ]; ]; ]; if[(null fd) and .finos.conn.priv.connections[connName;`lazy] and not hostport in enlist[()],exec addr from .finos.conn.priv.lazyConnCooldownList; .finos.conn.log"Not retrying address ",hostport," for ",string[conn`lazyRetryTime]; `.finos.conn.priv.lazyConnCooldownList upsert enlist`addr`lastErrorTime!(hostport;.z.P); ]; i+:1; ]; fd}; .finos.conn.priv.scheduleRetry:{[name;timeout] // Work out the next backoff timeout, if it's too high then go to the max newTimeout:$[.finos.conn.priv.maxBackoff<double:timeout*2; .finos.conn.priv.maxBackoff; double]; .finos.conn.log"Scheduling retry for connection ",string[name]," in ",string newTimeout; .finos.conn.priv.connections[name;`timerId]:.finos.timer.addRelativeTimer[{[n;t;x].finos.conn.priv.retryConnection[n;t]}[name;newTimeout]; newTimeout]; }; /// // Close an existing connection // @param connName 
The name of the connection to close // @return none // @throws error if there is no connection with this name .finos.conn.close:{[connName] if[-11h<>type connName; '"Invalid type for connName"]; if[not connName in exec name from .finos.conn.priv.connections; '"No connection for this name!"]; //If the connection is connected then close it if[not null h:.finos.conn.priv.connections[connName;`fd]; hclose h]; if[not null tid:.finos.conn.priv.connections[connName;`timerId]; .finos.timer.removeTimer tid]; //Remove it from the table delete from `.finos.conn.priv.connections where name=connName; }; /// // Returns the list of registered connections. // @return A table with the columns matching the options to .finos.conn.open, plus fd for the connection handle. .finos.conn.list:{.finos.conn.priv.connections}; .finos.conn.priv.lazyGetFd:{[connName] if[-11h<>type connName; '"Invalid name type"]; if[null fd:.finos.conn.priv.connections[connName;`fd]; if[.finos.conn.priv.connections[connName;`lazy]; fd:.finos.conn.priv.attemptConnection[connName]; ]; if[null fd; '"Connection not valid: ",string connName ]; ]; fd}; /// // Synchronously execute on this connection // @param name Connection name to use // @param data Data to send // @return The result of the calculation // @throws error if there is no connection with this name .finos.conn.syncSend:{[name;data] fd:.finos.conn.priv.lazyGetFd[name]; fd data}; /// // Asnchronously execute on this connection // @param name Connection name to use // @param data Data to send // @return none // @throws error if there is no connection with this name .finos.conn.asyncSend:{[name;data] fd:.finos.conn.priv.lazyGetFd[name]; neg[fd] data}; /// // Blocks until all previous messages are handed over to the TCP stack // @param name Connection name to use // @return none // @throws error if there is no connection with this name .finos.conn.asyncFlush:{[name] .finos.conn.asyncSend[name;(::)]}; /// // Sends a sync chaser on the connection, blocking until all async messages have been processed by the peer. // @param name Connection name to use // @return none // @throws error if there is no connection with this name .finos.conn.syncFlush:{[name] .finos.conn.syncSend[name;""]; }; .finos.conn.priv.lastClientConnID:0; .finos.conn.priv.clientList:([fd:`int$()] protocol:`$(); app:`$(); conn:`$(); user:`$(); host:`$(); pid:`int$(); connID:`long$(); connStr:()); /// // Gets the list of connected clients. // @return A table containing info such as fd, protocol, app, conn, user, host, pid, connID, connStr. // Some fields are filled in by the .finos.conn library, others are only filled in for registered clients. .finos.conn.clientList:{.finos.conn.priv.clientList}; .finos.conn.priv.clientRegisterCallbacks:`$(); .finos.conn.priv.clientDisconnectCallbacks:`$(); .finos.conn.priv.clientConnectCallbacks:`$(); .finos.conn.priv.clientWSDisconnectCallbacks:`$(); .finos.conn.priv.clientWSConnectCallbacks:`$(); /// // Registers a client. This should be called from a client query such that .z.w is set. Automatically called on the server if a connection // is opened by .finos.conn.open. 
// @param items A dictionary that may contain the following items: app (symbol), conn (symbol), host (symbol), pid (int), connStr (string) // @return none .finos.conn.register:{[items] if[not 99h=type items; '"parameter to .finos.conn.register must be a dictionary"]; if[0>system"p";if[0<>.z.w;:()]]; //don't overwrite globals in parallel process allItems:(.finos.conn.priv.clientList[.z.w],items),enlist[`fd]!enlist .z.w; if[0h=type allItems`connStr; allItems[`connStr]:""]; `.finos.conn.priv.clientList upsert cols[.finos.conn.priv.clientList]#allItems; .finos.conn.priv.clientRegisterCallbacks @\: allItems; }; .finos.conn.priv.addGenericCallback:{[name;fn] if[not name in `clientRegisterCallbacks`clientDisconnectCallbacks`clientConnectCallbacks`clientWSDisconnectCallbacks`clientWSConnectCallbacks; '"invalid callback type"; ]; if[not -11h=type fn;'"function name must be a symbol"]; value fn; //to throw error if not defined varname:` sv `.finos.conn.priv,name; if[fn in value varname; '"duplicate callback - ",string[fn]]; varname set varname,fn; }; /// // Add a callback that is called when a client registers. The callback receives a dictionary with the registration info. // @param Symbol containing the name of the callback function. // @return none .finos.conn.addClientRegisterCallback:{.finos.conn.priv.addGenericCallback[`clientRegisterCallbacks;x]}; /// // Add a callback that is called when a client connects using KDB protocol. Can be used in place of chaining .z.po. // @param Symbol containing the name of the callback function. // @return none .finos.conn.addClientConnectCallback:{.finos.conn.priv.addGenericCallback[`clientConnectCallbacks;x]}; /// // Add a callback that is called when a client disconnects using KDB protocol. Can be used in place of chaining .z.pc. // @param Symbol containing the name of the callback function. // @return none .finos.conn.addClientDisconnectCallback:{.finos.conn.priv.addGenericCallback[`clientDisconnectCallbacks;x]}; /// // Add a callback that is called when a client connects using WebSocket protocol. Can be used in place of chaining .z.wo. // @param Symbol containing the name of the callback function. // @return none .finos.conn.addClientWSConnectCallback:{.finos.conn.priv.addGenericCallback[`clientWSConnectCallbacks;x]}; /// // Add a callback that is called when a client diconnects using WebSocket protocol. Can be used in place of chaining .z.wc. // @param Symbol containing the name of the callback function. // @return none .finos.conn.addClientWSDisconnectCallback:{.finos.conn.priv.addGenericCallback[`clientWSDisconnectCallbacks;x]};
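// Minimal usage sketch (assumed application code, not part of the library):
// log every client registration via the documented register-callback hook.
// The callback name is passed as a symbol and must already be defined.
.myapp.onRegister:{[info].finos.conn.log"client registered: ",.Q.s1 info};
.finos.conn.addClientRegisterCallback`.myapp.onRegister;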
// Test updates testtrade:((5#`GOOG),5?`4;10?100.0;10?100i;10#0b;10?.Q.A;10?.Q.A;10#`buy); testquote:(10?`4;(5?50.0),50+5?50.0;10?100.0;10?100i;10?100i;10?.Q.A;10?.Q.A;10#`3); // End period function to send to subs endp:{[x;y;z] .tst.endp:@[{1+value x};`.tst.endp;0]}; ================================================================================ FILE: TorQ_tests_stp_pubsub_settings.q SIZE: 560 characters ================================================================================ // IPC connection parameters .servers.CONNECTIONS:`pubsub; .servers.USERPASS:`admin:admin; // Path to table schemas schemapath:getenv[`TORQHOME],"/database.q"; // Keyed sub table ksubtab:1!enlist `tabname`filters`columns!(`trade;"sym in `GOOG`AMZN,price>80";"time,sym,price"); // Define local functions to be called from pubsub endofperiod:{[x;y;z] .tst.eop:@[{1+value x};`.tst.eop;0]}; endofday:{[x;y] .tst.eod:@[{1+value x};`.tst.eod;0]}; upd:{[t;x] .tst.upd:@[{1+value x};`.tst.upd;0]}; // Test trade update testtrade:(.z.p;`sym;90f;50;1b;"H";"I";`ask); ================================================================================ FILE: TorQ_tests_stp_recovery_settings.q SIZE: 674 characters ================================================================================ // IPC connection parameters .servers.CONNECTIONS:`rdb`segmentedtickerplant`tickerplant; .servers.USERPASS:`admin:admin; // Paths to process CSV and test STP log directory processcsv:getenv[`KDBTESTS],"/stp/recovery/process.csv"; stptestlogs:getenv[`KDBTESTS],"/stp/recovery/testlog"; stporiglogs:getenv[`KDBTPLOG]; testlogdb:"testlog"; // Test updates testtrade:((5#`GOOG),5?`4;10?100.0;10?100i;10#0b;10?.Q.A;10?.Q.A;10#`buy); testquote:(10?`4;(5?50.0),50+5?50.0;10?100.0;10?100i;10?100i;10?.Q.A;10?.Q.A;10#`3); // Function projections (using functions from helperfunctions.q) startproc:startorstopproc["start";;processcsv]; stopproc:startorstopproc["stop";;processcsv]; ================================================================================ FILE: TorQ_tests_stp_stripe_performance_prep_test.q SIZE: 1,120 characters ================================================================================ // Load timer functions system"l ",getenv[`KDBCODE],"/common/timer.q" // Load specific test files system"l ",getenv[`KDBTESTS],"/k4unit.q" system"l ",getenv[`KDBTESTS],"/helperfunctions.q" system"l ",getenv[`testpath],"/settings.q" // Initialize stp first init[] // Prevent re-initialization init:{} // Get connection management set up .servers.startup[] \d .timer starttests:{[] // Check if ready to start tests - retry connection if not ready if[(exec any null w from .servers.SERVERS where proctype=`rdb)|(not`upd in key`.u)|(not`upd in key`.stp)|not`nextendUTC in key`.stplg;.servers.retry[]]; // Check again after retrying connection - return if still not ready if[(exec any null w from .servers.SERVERS where proctype=`rdb)|(not`upd in key`.u)|(not`upd in key`.stp)|not`nextendUTC in key`.stplg;:()]; // Once connection ready - clear timer and start tests remove[1]; system"l ",getenv[`perfpath],"/run_perf_test.q"; } // Check every 5s repeat[.proc.cp[];.proc.cp[]+0D00:01;0D00:00:05;(starttests;`);"Check for rdbs connections & start tests once all connected"] .dotz.set[`.z.ts;run] \t 1000 ================================================================================ FILE: TorQ_tests_stp_stripe_performance_run_perf_test.q SIZE: 1,812 characters ================================================================================ // Set up results and logging 
directories .k4.setup:{[respath;testname] .os.md each (respath;rp:respath,string[.z.d],"/"); if[not 11h=type key hsym `$logpath:raze rp,"/logs/";.os.md logpath]; .proc.createlog[logpath;testname;`$ssr[string .z.p;"[D.:]";"_"];0b]; }; // Generate results files and handles to them .k4.handlesandfiles:{[dir;filename] h:hopen f:hsym `$dir,"/",filename; if[not hcount f;.lg.o[`writeres;"Creating file ",1_string f]]; :(h;f) }; // Write test results to disk .k4.writeres:{[res;err;respath;rtime;testname] // Create results directories and files and open handles, timestamp test results .os.md each (respath;rp:respath,string[.z.d],"/"); hf:.k4.handlesandfiles[rp;] each ("results_";"failures_") ,\: (raze "." vs string .z.d),".csv"; res:`runtime xcols update runtime:first rtime from delete timestamp from res; err:`runtime xcols update runtime:first rtime from delete timestamp from err; // If file is empty, append full results/error table to it, if not, drop the header row before appending .lg.o[testname;"Writing ",string[count KUTR]," results rows and ",string[count KUerr]," error rows"]; {neg[x] $[hcount y;1;0]_csv 0: z} .' hf ,' enlist each (res;err); hclose each first each hf; }; //-- SCRIPT START --// clargs:(getenv[`KDBTESTS],"/stp/results/stripe/performance/";`timestamp$();`stripe_performance); // Set up results and logging directories if not in debug mode and results directory defined .[.k4.setup;clargs 0 2;{.lg.e[`test;"Error: ",x]}]; // Load & run tests, show results KUltd each hsym`$getenv[`perfpath]; KUrt[]; show each ("k4unit Test Results";KUTR;"k4unit Test Errors";KUerr); // Write results to disk .[.k4.writeres;(KUTR;KUerr),clargs;{.lg.e[`test;"Error: ",x]}]; if[not `debug in key .Q.opt .z.x;exit count KUerr]; ================================================================================ FILE: TorQ_tests_stp_stripe_settings.q SIZE: 1,224 characters ================================================================================ // IPC connection parameters .servers.CONNECTIONS:`rdb`segmentedtickerplant; .servers.USERPASS:`admin:admin; // Test trade batches syms:`${x,raze x,/:\: .Q.A}/[1;raze(raze .Q.A ,/:\: .Q.A),/:\: .Q.A]; ls:raze(ls2:til 500)+/:0.01*til 101; src:`BARX`GETGO`SUN`DB; q:{[syms;len;ls;ls2;src] len?/:(syms;ls;ls;ls2;ls2;" 89ABCEGJKLNOPRTWZ";"NO";src)}[syms;;ls;ls2;src]; t:{[syms;len;ls;ls2;src] len?/:(syms;ls;`int$ls2;01b;" 89ABCEGJKLNOPRTWZ";"NO";`buy`sell)}[syms;;ls;ls2;src]; // For performance testing qu:{[syms;len;uniq;ls;ls2;src] len?/:(neg[uniq]?syms;ls;ls;ls2;ls2;" 89ABCEGJKLNOPRTWZ";"NO";src)}[syms;;;ls;ls2;src]; tu:{[syms;len;uniq;ls;ls2;src] len?/:(neg[uniq]?syms;ls;`int$ls2;01b;" 89ABCEGJKLNOPRTWZ";"NO";`buy`sell)}[syms;;;ls;ls2;src]; // Local trade table schema trade:flip `time`sym`price`size`stop`cond`ex`side!"PSFIBCCS" $\: (); quote:flip `time`sym`bid`ask`bsize`asize`mode`ex`src!"PSFFJJCCS" $\: (); // Get the number of rdb processes from process.csv numseg:sum `rdb=((.proc`readprocs).proc`file)`proctype; skey:til numseg; // Get the splits to each rdb splits:numseg#1%numseg; // Local upd and error log function upd:{[t;x] t insert x}; upderr:{[t;x].tst.err:x}; // Test db name testlogdb:"testlog"; ================================================================================ FILE: TorQ_tests_stp_strsub_settings.q SIZE: 573 characters ================================================================================ // IPC connection parameters .servers.CONNECTIONS:`segmentedtickerplant; .servers.USERPASS:`admin:admin; // Paths to process CSV, strings to move the 
sub CSV around processcsv:getenv[`KDBTESTS],"/stp/exit/process.csv"; tstlogs:"tstlogs"; tstlogsdir:hsym `$getenv[`KDBTPLOG],"/",tstlogs,"_",string .z.d; // Function projections (using functions from helperfunctions.q) startproc:startorstopproc["start";;processcsv]; // Subscription strings to test simplesyms:"AMZN,MSFT"; complexwhr:"sym in `GOOG`IBM,price>90"; columnlist:"sym,price"; badwhr:"sm in `GOOG`IBM,price>90"; ================================================================================ FILE: TorQ_tests_stp_subfile_settings.q SIZE: 380 characters ================================================================================ // Paths to process CSV, strings to move the sub CSV around processcsv:getenv[`KDBTESTS],"/stp/wdb/process.csv"; mv1:" " sv enlist["mv"],@[getenv each `KDBTESTS`TORQHOME;0;,;"/rdbsub.csv"]; mv2:" " sv enlist["mv"],@[getenv each `TORQHOME`KDBTESTS;0;,;"/rdbsub.csv"]; // Function projections (using functions from helperfunctions.q) startproc:startorstopproc["start";;processcsv]; ================================================================================ FILE: TorQ_tests_stp_subscription_settings.q SIZE: 587 characters ================================================================================ // IPC connection parameters .servers.CONNECTIONS:`rdb`segmentedtickerplant; .servers.USERPASS:`admin:admin; // Test trade batches testtrade:((5#`GOOG),5?`4;10?100.0;10?100i;10#0b;10?.Q.A;10?.Q.A;10#`buy); testquote:(10?`4;(5?50.0),50+5?50.0;10?100.0;10?100i;10?100i;10?.Q.A;10?.Q.A;10#`3); // Local trade table schema trade:flip `time`sym`price`size`stop`cond`ex`side!"PSFIBCCS" $\: (); quote:flip `time`sym`bid`ask`bsize`asize`mode`ex`src!"PSFFJJCCS" $\: (); // Local upd and error log function upd:{[t;x] t insert x}; upderr:{[t;x].tst.err:x}; // Test db name testlogdb:"testlog"; ================================================================================ FILE: TorQ_tests_stp_tickerlog_settings.q SIZE: 2,331 characters ================================================================================ // IPC connection parameters .servers.CONNECTIONS:`tickerlogreplay; .servers.USERPASS:`admin:admin; // Test HDB location testhdb:getenv[`KDBTESTS],"/stp/tickerlog/testhdb/"; loadhdb:"l ",testhdb; // Logs locations oldelogs:hsym `$getenv[`KDBTESTS],"/stp/tickerlog/logs/oldlog/testoldlog"; fakelogs:hsym `$getenv[`KDBTESTS],"/stp/tickerlog/logs/oldlog/fakeoldlog"; nonelogs:hsym `$getenv[`KDBTESTS],"/stp/tickerlog/logs/stpnone"; perilogs:hsym `$getenv[`KDBTESTS],"/stp/tickerlog/logs/stptabperiod"; tabulogs:hsym `$getenv[`KDBTESTS],"/stp/tickerlog/logs/stptabular"; emptydir:hsym `$getenv[`KDBTESTS],"/stp/tickerlog/logs/nologs"; oldelogdir:hsym `$getenv[`KDBTESTS],"/stp/tickerlog/logs/oldlogdir"; zipfile:hsym `$getenv[`KDBTESTS],"/stp/tickerlog/logs/zipfile/testoldlog.gz"; zipdir:hsym `$getenv[`KDBTESTS],"/stp/tickerlog/logs/zipdir/"; zipdirstp:hsym `$getenv[`KDBTESTS],"/stp/tickerlog/logs/zipdirstp/"; // Reset and replay STP logs - to be executed on the tickerlog replay process resplay:{[logdir] .replay.segmentedmode:1b; .replay.tplogfile:`; .replay.tplogdir:logdir; .replay.initandrun[]; }; // Reset to old TP mode and only replay quote table logs oldify:{[logfile] .replay.segmentedmode:0b; .replay.tablelist:`quote; .replay.tablestoreplay:`quote,(); .replay.tplogfile:logfile; .replay.tplogdir:`; .replay.initandrun[]; }; // Reset to old TP mode and play in a log directory olddir:{[logdir] .replay.segmentedmode:0b; .replay.tplogfile:`; .replay.tplogdir:logdir; 
.replay.initandrun[]; }; // Reset and try to load file oldfile:{[logfile] .replay.segmentedmode:0b; .replay.tplogfile:logfile; .replay.tplogdir:`; .replay.initandrun[]; };
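// Usage sketch (assumption, not part of the original settings file): the
// replay helpers above are intended to run on the tickerlogreplay process,
// for example:
// h:first exec w from .servers.SERVERS where proctype=`tickerlogreplay;
// h(resplay;perilogs);    / replay a segmented-mode STP log directory
// h(oldfile;oldelogs);    / replay a single old-style TP log file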
addcheck:{[checkdict] // add a new monitoring check // this is for manual adds - add it as a dictionary if[not 99h=type checkdict;'"input must be a dictionary"]; if[not all (req:`family`metric`process`query`resultchecker`params`period`runtime)in key checkdict; '"not all required dictionary keys supplied; missing ",.Q.s1 req except key checkdict]; if[not 11h=type checkdict`family`metric`process; '"keys family, metric, process must have type symbol"]; if[not all 10h=type each checkdict`query`resultchecker; '"keys query, resultchecker must have type char array (string)"]; if[not all 16h=type checkdict`period`runtime; '"keys period, runtime must have type timespan"]; if[not 99h=type checkdict`params; '"key params must have type dictionary"]; // add the config addconfig enlist req#checkdict; } addconfig:{ // function to insert to check config table, and into checkstatus // input is a table of config checks // pull out the current max checkid nextcheckid:1i+0i|exec max checkid from checkconfig; // add checkid if not already present if[not `checkid in cols x; x:update checkid:nextcheckid+til count x from x]; // add active if not already present if[not `active in cols x; x:update active:1b from x]; // select only the columns we need, and key it x:`checkid xkey (cols checkconfig)#0!x; // insert to checkconfig `checkconfig upsert x; // and insert to checkstatus `checkstatus upsert select checkid,family,metric,process,lastrun:0Np,nextrun:.z.p,status:0Nh,executiontime:0Nn,totaltime:0Nn,timerstatus:0Nh,running:0Nh,result:(count process)#() from x; } copyconfig:{[checkid;newproc] //function to copy config from checkconfig table and reinsert with new target process //check if supplied checkid exists if[not checkid in exec checkid from checkconfig; '"supplied checkid doesn't exist in checkconfig table"]; newcheck:update process:newproc from delete checkid from exec from checkconfig where checkid=checkid; addcheck newcheck } togglecheck:{[cid;status] if[not cid in exec checkid from checkconfig; '"checkid ",(string cid)," doesn't exist"]; update active:status from `checkconfig where checkid=cid; } disablecheck:togglecheck[;0b] enablecheck:togglecheck[;1b] // input to runcheck will be a row from checkconfig runcheck:{ // increment the run id runid+:1i; // get the handle to the process if[null h:gethandle[x`process]; `checkstatus upsert ((enlist`checkid)!enlist x`checkid),update running:0h,result:"no handle connection",status:0h,timerstatus:0h from checkstatus x`checkid]; // run the check remotely // send over the id, return a dict of `id`status`res .async.postback[h;({start:.z.p; (`runid`executiontime!(y;.z.p-start)),`status`result!@[{(1h;value x)};x;{(0h;x)}]};(x`query;x`params);runid);`checkresulthandler]; // add a record to track `checktracker insert (runid;.z.p;0Np;0Nn;x`checkid;0Nh;()); // update the status to be running `checkstatus upsert ((enlist`checkid)!enlist x`checkid),update running:1h from checkstatus x`checkid; } //check that process has not been running over next allotted runtime //if so, set status and timerstatus to neg checkruntime:{[n] update status:0h,timerstatus:0h,running:0h from `checkstatus where running=1h,n<.z.p-nextrun; } checkresulthandler:{ // update the appropriate record in checktracker toinsert:((enlist`runid)#x),(checktracker x`runid),((enlist`receivetime)!enlist .z.p),`executiontime`status`result#x; // store the record `checktracker upsert toinsert; // get the configuration for this alert conf:checkconfig toinsert`checkid; // need to run the resultchecker against the actual 
result // but only if the actual query has been successful if[x`status; // pull out the resultchecker function and run it // this should only get triggered in dev cycle - // protect against devs inserting incorrect resultchecker definitions r:@[value;(conf`resultchecker;conf`params;toinsert);{`status`result!(0b;"failed to run result checker for runid ",(string x`runid)," and checkid ",(string x`checkid),": ",y)}[toinsert]]; // have to make sure we have dictionary result if[not 99h=type r; r:`status`result!(0b;"resultchecker function did not return a dictionary")]; // check here if it has failed or passed // override the status and error message as appropriate toinsert[`status]&:r`status; toinsert[`result]:r`result; ]; // if the query has failed, add in the error if[not x`status; toinsert[`result]:"request failed on remote server: ",x`result]; // insert the record into checkstatus `checkstatus upsert ((enlist `checkid)!enlist toinsert`checkid), (checkstatus toinsert[`checkid]), `lastrun`nextrun`status`executiontime`totaltime`timerstatus`running`result!(toinsert`sendtime;.z.p+conf`period;toinsert`status;toinsert`executiontime;toinsert[`receivetime]-toinsert[`sendtime];`short$conf[`runtime]>toinsert`executiontime;0h;toinsert`result) } // run each check that needs to be run runnow:{ // run each check which hasn't been run recently enough runcheck each 0!select from checkconfig where active, checkid in exec checkid from checkstatus where .z.p>nextrun,not running=1h; } //Check median sendtimes against variable input timespan timecheck:{[n] //Extract median time from checkstatus, check against input, return true if median is less than n select medtime,loadstatus:n>medtime from select medtime:`timespan$med (totaltime-executiontime) from checkstatus } // SUPPORT API //Update config based on checkid updateconfig:{[checkid;paramkey;newval] // update a config value if[not checkid in exec checkid from checkconfig; '"supplied checkid doesn't exist in checkconfig table" ]; // check existence if[not paramkey in key current:.[`checkconfig;(checkid;`params)]; '"supplied paramkey does not exist in params for checkid ",(string checkid) ]; // check type if[not type[newval]=type current paramkey; '"supplied value type ",(string type newval)," doesn't match current type ",string type current paramkey ]; // crack on .[`checkconfig;(checkid;`params;paramkey);:;newval]; } forceconfig:{[checkid;newconfig] // force config over the top of current // don't check for existence of parameters, parameter types etc.
if[not checkid in exec checkid from checkconfig; '"supplied checkid doesn't exist in checkconfig table" ]; if[not 99h=type newconfig; '"new supplied config must be of type dictionary"]; .[`checkconfig;(checkid;`params);:;newconfig]; } //Function to update config value based on family and metric combination updateconfigfammet:{[f;m;paramkey;newval] if[0=count checkid: exec checkid from checkconfig where family=f,metric=m; '"family and metric combination doesn't exist in checkconfig table" ]; updateconfig[first checkid;paramkey;newval]; } //Function to return only required metrics on current status of check currentstatus:{[c] if[all null c; :select checkid,family,metric,process,status,timerstatus,running,result from checkstatus]; :select checkid,family,metric,process,status,timerstatus,running,result from checkstatus where checkid in c; } //Get ordered status and timer status by family //If null return full table statusbyfam:{[f] if[all null f;:`status`timerstatus xasc select from checkstatus]; `status`timerstatus xasc select from checkstatus where family in f } //Clear checks in checktracker older than a certain age cleartracker:{[time] delete from `checktracker where (.z.p-sendtime)> time } // RESULT HANDLERS // some example result handler functions here // these should always return `status`result!(status; message) // they should take two params - p (dictionary parameters) and r (result row) checkcount:{[p;r] if[`morethan=p`cond; if[p[`count]<r`result; :`status`result!(1h;"")]; :`status`result!(0h;"variable ",(string p`varname)," has count of ",(string r`result)," but requires ",string p`count) ]; if[`lessthan=p`cond; if[p[`count]>r`result; :`status`result!(1h;"")]; :`status`result!(0h;"variable ",(string p`varname)," has count of ",(string r`result)," but should be less than ",string p`count) ]; } queuecheck:{[p;r] if[not count where any each(r`result)>p`count;:`status`result!(1h;"")]; `status`result!(0h;"There are slow subscribers to publisher that have message queues longer than ",string p`count) } truefalse:{[p;r] if[`true=p`cond; if[1b=r`result; :`status`result!(1h;"")]; :`status`result!(0h;"variable ",(string p`varname)," is returning false, but should return true") ]; if[`false=p`cond; if[0b=r`result; :`status`result!(1h;"")]; :`status`result!(0h;"variable ",(string p`varname)," is returning true, but should return false") ]; } resulttrue:{[p;r] if[(r`result)=p`result;:`status`result!(1h;"")]; `status`result!(0h;"The variable ",(string p`varname)," is returning ",(string r`result)," but should be returning ",string p`result) } ================================================================================ FILE: TorQ_code_monitor_datadogchecks.q SIZE: 186 characters ================================================================================ \d .dg //send result of check from monitor process to datadog agent sendresultmetric:{[p;r] sendmetric["torqup.",(string `..checkconfig[r`checkid]`process);r`result];`..truefalse[p;r]} ================================================================================ FILE: TorQ_code_processes_chainedtp.q SIZE: 7,332 characters ================================================================================ / Chained tickerplant /- subscribers use this to determine what type of process they are talking to tptype:`chained; /- functions used by subscribers tablelist:{.stpps.t} /- subscribers who want to replay need this info subdetails:{[tabs;instruments] r:.ctp.sub[tabs;instruments]; /- reshape dict, make sure logfilelist is an empty list if no log
file `schemalist`logfilelist`rowcounts`date!r@/:(`schema;$[r[`logfile]~();();enlist`i`logfile];`icounts;`d) } \d .ctp /- user defined variables tickerplantname:@[value;`tickerplantname;`tickerplant1]; /- list of tickerplant names to try and make a connection to pubinterval:@[value;`pubinterval;0D00:00:00]; /- publish batch updates at this interval tpconnsleep:@[value;`tpconnsleep;10]; /- number of seconds between attempts to connect to the source tickerplant createlogfile:@[value;`createlogfile;0b]; /- create a log file logdir:@[value;`logdir;`:tplogs]; /- hdb directory containing tp logs subscribeto:@[value;`subscribeto;`]; /- list of tables to subscribe for subscribesyms:@[value;`subscribesyms;`]; /- list of syms to subscription to replay:@[value;`replay;0b]; /- replay the tickerplant log file schema:@[value;`schema;1b]; /- retrieve schema from tickerplant clearlogonsubscription:@[value;`clearlogonsubscription;0b]; /- clear logfile on subscription tpcheckcycles:@[value;`tpcheckcycles;0W]; /- specify the number of times the process will check for an available tickerplant tph:0N; /- setting tickerplant handle to null .u.icounts:.u.jcounts:(`symbol$())!0#0,(); /- initialise icounts & jcounts dict .u.i:.u.j:0; /- clears log clearlog:{[lgfile] if[not type key lgfile;:()]; .lg.o[`clearlog;"clearing log file : ",string lgfile]; .[set;(lgfile;());{.lg.e[`clearlog;"cannot empty tickerplant log: ", x]}] }
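The user-defined variables above all follow the same defaulting idiom: @[value;`name;default] traps the lookup of a global and returns the supplied default if the name has not been defined before this file is loaded (for example by an earlier config file). A minimal sketch, using a hypothetical variable name:

q)@[value;`somesetting;42]    / `somesetting is not yet defined, so the default is returned
42
q)somesetting:7
q)@[value;`somesetting;42]    / an existing definition takes precedence over the default
7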
Changes in 3.5¶ Below is a summary of changes from V3.4. Commercially licensed users may obtain the detailed change list / release notes from http://downloads.kx.com Production release date¶ 2017.03.15 Update 2019.11.13¶ .z.ph now calls .h.val instead of value , to allow users to interpose their own evaluation code. .h.val defaults to value . Enhanced debugger¶ In V3.5, the debugger has been extended to include the backtrace of the q call stack, including the current line being executed, the filename, line and character offset of code, with a visual indicator (caret) pointing to the operator which failed. The operator and arguments may be captured programmatically for further propagation in error reporting. Backtraces may also be printed at any point by inserting the .Q.bt[] command in your code. Please see here for further details. Concurrent memory allocator¶ V3.5 has an improved memory allocator which allows memory to be used across threads without the overhead of serialization, hence the use-cases for peach now expand to include large result sets. Note kdb+ manages its own heap, using thread-local heaps to avoid contention. One complication of thread-local heaps is how to share allocations between threads, and avoid ballooning of allocated space due to the producer allocating in one arena and the consumer freeing that area in another arena. This was the primary reason for serialization/deserialization of the results from secondary threads to the main thread when peach completes. This serialization and associated overhead has now been removed. Socket sharding¶ V3.5 introduces a new feature that enables the use of the SO_REUSEPORT socket option, which is available in newer versions of many operating systems, including Linux (kernel version 3.9 and later). This socket option allows multiple sockets (kdb+ processes) to listen on the same IP address and port combination. The kernel then load-balances incoming connections across the processes. When the SO_REUSEPORT option is not enabled, a single kdb+ process receives incoming connections on the socket. With the SO_REUSEPORT option enabled, there can be multiple processes listening on an IP address and port combination. The kernel determines which available socket listener (and by implication, which process) gets the connection. This can reduce lock contention between processes accepting new connections, and improve performance on multicore systems. However, it can also mean that when a process is stalled by a blocking operation, the block affects not only connections that the process has already accepted, but also connection requests that the kernel has assigned to the process since it became blocked. To enable the SO_REUSEPORT socket option, include the new reuseport parameter (rp ) to the listen directive for the \p command, or -p command-line arg. e.g. q)\p rp,5000 Use cases include coarse load-balancing and HA/failover. Note When using socket sharding (e.g. -p rp,5000 ) the Unix domain socket (uds ) is not active; this is deliberate and not expected to change. Number of secondary threads¶ Secondary threads can now be adjusted dynamically up to the maximum specified on the command line. A negative number indicates that processes should be used, instead of threads. 
q)0N!("current secondary threads";system"s");system"s 4";0N!("current,max secondary threads";system"s";system"s 0N"); / q -s 8 ("current secondary threads";0i) ("current,max secondary threads";4i;8i) q)system"s 0" / disable secondary threads q)system"s 0N" / show max secondary threads 8i Improved sort performance¶ kdb+ uses a hybrid sort, selecting the algorithm it deems best for the data type, size and domain of the input. With V3.5, this has been tweaked to improve significantly the sort performance of certain distributions, typically those including a null. e.g. q)a:@[10000001?100000;0;:;0N];system"t iasc a" / 5x faster than V3.4 Improved search performance¶ V3.5 significantly improves the performance of bin , find , distinct and various joins for large inputs, particularly for multi-column input. The larger the data set, the better the performance improvement compared to previous versions. e.g. q)nn:166*n:60000;V1:50?V2:neg[100]?`2;t1:`c1`c2`c3#n?t2:([]c1:`g#nn?V1;c2:nn?V1;c3:nn?V2;val:nn?100);system"ts t1 lj 3!t2" / 100x faster than V3.4 q)a:-1234567890 123456789,100000?10;b:1000?a;system each("ts:100 distinct a";"ts:1000 a?b") / 30% faster than V3.4 SSL/TLS¶ Added hopen timeout for TLS, e.g. q)hopen(`:tcps://myhost:5000:username:password;30000) NUCs – not upwardly compatible¶ We have tried to make the process of upgrading seamless, however please pay attention to the following NUCs to consider whether they impact your particular installation: - added ujf (new keyword) which mimics the behavior of uj from V2.x, i.e. that it fills from lhs, e.g. q)([a:1 2 3]b:2 3 7;c:10 20 30;d:"WEC")~([a:1 2]b:2 3;c:5 7;d:"WE")ujf([a:1 2 3]b:2 3 7;c:10 20 30;d:" C") - constants limit in lambdas reduced from 96 to 95; could cause existing user code to signal a 'constants error, e.g. q)value raze"{",(string[10+til 96],\:";"),"}" - now uses abstract namespace for Unix domain sockets on Linux to avoid file permission issues in /tmp . N.B. hence V3.5 cannot connect to V3.4 using UDS. e.g. q)hopen`:unix://5000 - comments are no longer stripped from the function text by the tokenizer ( -4!x ); comments within functions can be stripped explicitly from the -4! result with q){x where not(1<count each x)&x[;0]in" \t\n"} -4!"{2+ 3; /comment\n3\n\t/ another comment\n \n/yet another\n /and one more\n}" - the structure of the result of value on a lambda, e.g. value {x+y} , is: (bytecode;parameters;locals;(namespace,globals);constants[0];…;constants[n];m;n;f;l;s) where m is the bytecode-to-source position map (-1 if position unknown), n is the fully qualified (with namespace) function name as a string, set on first global assignment, with @ appended for inner lambdas (() if not applicable), f is the full path to the file where the function originated from ("" if not applicable), l is the line number in said file (-1 if n/a), and s is the source code. This structure is subject to change. Suggested upgrade process¶ Even though we have run a wide range of tests on V3.5, and various customers have been kind enough to repeatedly run their own tests during the last few months of development, customers who wish to upgrade to V3.5 should run their own tests on their own data and code/queries before promoting to production usage. In the event that you do discover a suspected bug, please report it at support.kx.com. Detailed change list¶ There are also a number of smaller enhancements and fixes; please see the detailed change list README.txt on downloads.kx.com – ask your company support representative to download this for you.
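To illustrate the enhanced debugger described above, a minimal sketch; f and g here are hypothetical functions and the exact rendering of the backtrace varies by version:

f:{g[x]+1}
g:{x+`toomuch}     / signals 'type when called with a numeric argument
f 2                / q now drops into the debugger showing the call stack f -> g,
                   / with a caret under the failing + and the file/line if loaded from a script
.Q.bt[]            / prints the current backtrace; it can also be called from ordinary code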
Changes in 3.6¶ Below is a summary of changes from V3.5. Commercially licensed users may obtain the detailed change list / release notes from http://downloads.kx.com Production release date¶ 2018.05.16 Update 2019.11.13¶ .z.ph now calls .h.val instead of value , to allow users to interpose their own evaluation code. .h.val defaults to value . Deferred response¶ More efficient gateways: a server process can now use -30!x to defer responding to a sync query until, for example, worker process have completed their tasks. 64-bit enumerations¶ Enums, and linked columns, now use 64-bit indexes: q)(type a:`sym?`a`b;type `zym!4000000000) 20 -20h - 64-bit enums are type 20 regardless of their domain. - There is no practical limit to the number of 64-bit enum domains. - 64-bit enums save to a new file format which is not readable by previous versions. - 32-bit enums files from previous versions are still readable, use type space 21 thru 76 and all ops cast them to 64-bit. Anymap¶ A new mappable type, 'mapped list' (77h) improves on existing mapped nested types (77h+t) by removing the uniformity restriction. e.g. q)a:get`:a set (1 2 3;"cde"); b:get`:b set ("abc";"def") q)77 77h~(type a;type b) Mapped lists' elements can be of any type, including lists, dictionaries, tables. e.g. q)a:get`:a set ((1 2;3 4);`time`price`vol!(.z.p;1.;100i);([]a:1 2;b:("ab";"cd"))) q)77 0h~(type a;type first a) A new write primitive alternative to set , `:a 1: x , allows mapped lists to nest within other mapped lists. For files written with 1: , vectors within all structures remain mapped, no matter the depth, and can be used without being copied to the heap. e.g. q)a:get`:a 1: ((1 2;3 4);([]time:1000?.z.p;price:1000?100.);([]time:1000?.z.p;price:1000?200)) q)77 77h~(type a;type first a) q).Q.w[]`used`mmap / 336736 40432 q)p:exec price from a[1] q).Q.w[]`used`mmap /336736 40432 - Symbol vectors/atoms are automatically enumerated against file## and de-enumerated (and therefore always copied) on access. e.g. q)`:file set((`a`b;`b`c);0 1) / symbols cause a 3rd file to be created, file##, which contains the enumeration domain - The underlying storage (file# ) stays mapped as long as there exists a reference to any mapped object within. Hence, care should be taken when working with compressed data, as anything ever decompressed in a file would stay in memory until the last reference is gone. GUID hashing¶ Hash now considers all bits of the guid. Guids with u , p or g attribute use a new file format, unreadable by previous versions. File compression¶ Added lz4hc as file-compression algorithm #4. e.g. q).z.zd:17 4 16;`:z set z:100000?200;z~get`:z lz4 compression Certain releases of lz4 do not function correctly within kdb+. Notably, lz4-1.7.5 does not compress, and lz4-1.8.0 appears to hang the process. kdb+ requires at least lz4-r129 . lz4-1.8.3 works. We recommend using the latest lz4 release available. NUCs – not upwardly compatible¶ We have tried to make the process of upgrading seamless, however please pay attention to the following NUCs to consider whether they impact your particular installation. The following files use a new file format. They are therefore unreadable by the previous versions. However, 3.6 can read all formats 3.5 can: - 64-bit enumerations use a new file format. 3.5 enum files are read-only. - Mapped list type (77h) deprecates old mapped nested types (77h+t). 77h+t files are read-only. - Guids with u ,p org attribute use a new file format. 
- Accepts a websocket connection only if .z.ws is defined, otherwise returns HTTP 501 code Added ajf and ajf0 , to behave as V2.8 aj and aj0 , i.e. they fill from LHS if RHS is null. e.g. q)a:([]time:2#00:00:01;sym:`a`b;p:1 1;n:`r`s) q)b:([]time:2#00:00:01;sym:`a`b;p:0 1) q)c:([]time:2#00:00:00;sym:`a`b;p:1 0N;n:`r`s) q)a~ajf[`sym`time;b;c] 1b Suggested upgrade process¶ Even though we have run a wide range of tests on V3.6, and various customers have been kind enough to repeatedly run their own tests during the last few months of development, customers who wish to upgrade to V3.6 should run their own tests on their own data and code/queries before promoting to production usage. Most importantly, be aware that rolling back to a previous version will be complicated by the fact that files written by v3.6 are not readable by prior versions, hence users should test thoroughly prior to committing to an upgrade. In the event that you do discover a suspected bug, please report it at support.kx.com. Detailed change list¶ There are also a number of smaller enhancements and fixes; please see the detailed change list (README.txt) on downloads.kx.com – ask your company support representative to download this for you.
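As a minimal sketch of the deferred response summarized above, assuming a gateway that forwards each sync query to a worker over an async handle w; the .gw namespace, the `run message and the reply callback are hypothetical names, not part of the release:

.gw.id:0                                 / request counter
.gw.pending:()!()                        / request id -> client handle
.z.pg:{[query]
  .gw.id+:1;
  .gw.pending[.gw.id]:.z.w;              / remember which client to answer
  neg[w](`run;.gw.id;query);             / hand the work to the worker asynchronously
  -30!(::)}                              / defer: no response is sent to the client yet
/ the worker calls this back asynchronously with the request id and its result
reply:{[id;res]
  -30!(.gw.pending id;0b;res);           / send the deferred response now
  .gw.pending:(enlist id)_ .gw.pending}  / an error would be sent with -30!(handle;1b;"message")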
RDB Intraday writedown solutions¶ With data volumes in the financial-services sector continuing to grow at exponential rates, kdb+ is the data-storage technology of choice for many financial institutions due to its efficiency in storing and retrieving large volumes of data. kdb+ is uniquely equipped to deal with these growing data volumes as it is extremely scalable and can deal with increasing data volumes with ease. As volumes grow the amount of data that an RDB can keep in memory may eventually be limited by the RAM available on the server. There exist two types of solution to this problem. - The easiest and most obvious is the hardware solution which would involve increasing the RAM available or to scale across multiple machines, splitting the data up across servers by region, table or symbol. - The second solution – the software solution – continues to use a server which has inadequate RAM to store a whole day’s data. In this solution, the reliance on RAM is reduced by periodically writing the data to disk and then purging it from memory. This intraday write to disk allows a full day’s worth of data to be contained on a single server. This is not the ideal setup for kdb+ and as such will come with some penalties attached. This paper discusses various software approaches to performing intraday writedowns in kdb+, which help overcome memory limitations. Standard tick setup¶ A common kdb+ vanilla tick setup, has a tickerplant (TP) receiving data and then logs it to disk, whilst publishing to an in-memory realtime database (RDB), which keeps all of the current day’s data in memory. At the end of the day, the RDB commits this data to disk in a separate historical database (HDB) that stores all of this historical data. This means that the most recent data (and often most important) always has the fastest access time as it is stored in RAM. The standard approach above can be limited by available RAM if daily data volumes grow too large. It is important to realize also that extra RAM is required to query the data, on top of what is required to keep it in memory. The extra amount required will vary depending on the different use cases and queries that are run on it. Consideration must also be given to other processes such as chained RDBs or HDBs which will need to share the resources on the server. One solution is to write down some of the data from the RDB to a temporary directory on disk at different points throughout the day and then delete the data from memory, thus freeing up RAM. Various methods to achieve this will be discussed. w.q ¶ w.q is available from simongarland/tick/w.q w.q is an RDB, an alternative solution from the standard r.q RDB script. A write-only RDB script (w.q ) can easily be modified to work with any standard kdb+ setup. Handling real-time updates¶ In a standard tick setup that uses r.q , the TP is publishing data asynchronously to the RDB and calling a upd function equivalent to the insert function. The important changes in w.q begin with the callback function upd which no longer simply inserts data into the table. append:{[t;data] t insert data; if[MAXROWS<count value t; // append enumerated buffer to disk .[` sv TMPSAVE,t,`;();,;.Q.en[`:.]`. t]; // clear buffer @[`.;t;0#]; ]} upd:append The new upd function inserts the data into the table, and then if the count has exceeded a pre-configured value – MAXROWS – all data in the table is enumerated and is appended to a splayed table on disk in the TMPSAVE temporary directory. 
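The on-disk append in upd above relies on general Amend (.) with an empty index and join: .[path;();,;table] creates the splayed table if it does not exist and appends to it otherwise. A minimal standalone sketch, assuming the process is started in an empty, writable directory (the trade schema and TMPSAVE location are illustrative):

q)TMPSAVE:`:tmpsave
q)chunk:([]time:09:00:00.000 09:00:01.000;sym:`abc`def;price:1.1 2.2)
q).[` sv TMPSAVE,`trade,`;();,;.Q.en[`:.]chunk]   / enumerate syms and append to tmpsave/trade/
`:tmpsave/trade/
q)count get ` sv TMPSAVE,`trade,`                 / the splay grows with each append
2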
The data is then deleted from the RDB, thus reducing the memory used by the process. End-of-day processing¶ The end-of-day logic is invoked by the TP communicating with the RDB to call .u.end . In the standard RDB (r.q ), this consists of writing the RDB data to a new partition and then deleting all data from the RDB tables. At the end of the day all data has been written to disk in splayed tables, in time order. Most HDBs however are partitioned by date and a parted attribute is applied to the sym column, while also retaining time order within each sym. Therefore the on-disk temporary tables need to be reorganized before they can be added to the HDB as a new date partition. In w.q , .u.end is overridden to save any remaining data in the tables to the temporary directory before purging them. The data is then sorted on disk, moved from the temporary directory to a new date partition in the main HDB directory and made available to clients by reloading the HDB. / end of day: save, clear, sort on disk, move, hdb reload .u.end:{ t:tables`.; t@:where 11h=type each t@\:`sym; / append enumerated buffer to disk {.[` sv TMPSAVE,x,`;();,;.Q.en[`:.]`. x]}each t; / clear buffer @[`.;t;0#]; / sort on disk by sym and set `p# / {@[`sym xasc` sv TMPSAVE,x,`;`sym;`p#]}each t; {disksort[` sv TMPSAVE,x,`;`sym;`p#]}each t; / move the complete partition to final home, / use mv instead of built-in r if filesystem whines system"r ",(1_string TMPSAVE)," ",-1_1_string .Q.par[`:.;x;`]; / reset TMPSAVE for new day TMPSAVE::getTMPSAVE .z.d; / and notify hdb to reload and pick up new partition if[h:@[hopen;`$":",.u.x 1;0];h"\\l .";hclose h]; } Instead of using xasc to sort the table, the script implements an optimized function for sorting tables on disk. This function disksort takes three arguments: - the handle to the on-disk table - the column name to part the table by, generally sym - a function to apply an attribute, e.g. `p# for parted disksort:{[t;c;a] if[not`s~attr(t:hsym t)c; if[count t; ii:iasc iasc flip c!t c,:(); if[not$[(0,-1+count ii)~(first;last)@\:ii;@[{`s#x;1b};ii;0b];0b]; {v:get y; if[not$[all(fv:first v)~/:256#v;all fv~/:v;0b]; v[x]:v; y set v];}[ii] each ` sv't,'get ` sv t,`.d ] ]; @[t;first c;a]]; t} The table is not reorganized if the column we are parting the table by is already sorted – it may actually already have a s attribute applied. If the table needs to be sorted, each column is sorted in turn except if all values in a particular column are identical. Rather than check the whole column, initially just the first 256 entries are checked for uniqueness. Finally the p attribute is set on the sym column. To ensure best performance, xasc times should be compared with disksort on each table. The w.q script has an additional option to delete the temporary data on exit to handle recovery scenarios. The default behavior is to delete the temporary data and recover from the TP log as it is difficult to locate the point in the TP log which was last committed to disk. Limitations of w.q ¶ Downtime¶ When rolling the RDB at end of day it is very important to minimize the downtime of the RDB and to have the new date partition available as quickly as possible in the HDB. However, sorting very large tables on disk can add significant delay, no matter which sorting method is used. Table 1 describes the time taken (in seconds) to sort a quote table for increasing numbers of rows. 
The schema of the quote table is described below: quote:([] time:`time$(); sym:`symbol$(); bid:`float$(); ask:`float$(); bsize:`int$(); asize:`int$() ) Table 1: rows disksort xasc ------------------------------ 100,000 0.017 0.011 1,000,000 0.207 0.125 10,000,000 2.240 1.447 50,000,000 10.778 7.046 100,000,000 20.102 13.285 500,000,000 121.485 112.452 As can be seen, the amount of time taken to sort a simple table like the above is quite large. This may be a serious problem as yesterday’s data may not be queryable for a significant period each morning. Performance¶ The w.q solution was intended more as an effective method to alleviate RAM problems during data capture than to be a queryable process. Since the most recent data will be in-memory and everything else is splayed on disk, any queries for intraday data will have to be run against both tables and be combined. The query against the on-disk splay with no attributes will have a significant impact on query performance. This problem may be somewhat mitigated as the most recent data is of most interest. For example, we could keep the last 5 minutes of data in memory. This could be achieved by amending the append function described above. append:{[t;data] t insert data; // find if any rows older than 5 mins if[(first t`time) <minT:.z.t-00:05; // append enumerated buffer to disk cnt:count tab:select from t where time<minT; .[` sv TMPSAVE,t,`;();,;.Q.en[`:.] tab]; // clear buffer @[`.;t;cnt _] ] } upd:append However, for a table with many updates, a small number of rows will be written to disk very often making this approach inefficient. A better solution would be to write the data to disk on a timer. The timer could be set to trigger every 5 minutes, meaning that at all times the most recent 5 minutes worth of data in each table is available (up to 10 minutes). This would have a much smaller cost per writedown and operate more like a standard RDB. upd:insert writedown:{[t] // find if any rows older than 5 mins if[(first t`time) <minT:.z.t-00:05; // append enumerated buffer to disk cnt:count tab:select from t where time<minT; .[` sv TMPSAVE,t,`;();,;.Q.en[`:.] tab]; // clear buffer @[`.;t;cnt _] ] } .z.ts:{writedown each tables[]} // timer function system"t 300000" // set timer to 5 mins Alternatively, instead of keeping between 0 and MAXROWS in memory, we could keep between MINROWS and MAXROWS , thus guaranteeing a certain number of rows at all times in the RDB. Another consideration may be that some tables (perhaps smaller reference tables) may not need to be written down as much or indeed at all. Therefore a method to differentiate between the tables is required. WRITETBLS:`trade`quote // tables to write down intra day MAXROWS:30000 // default max value MINROWS:20000 // default min value MAXTBL:(enlist `quote)!enlist 100000 // dict of max values per table MINTBL:(enlist `quote)!enlist 50000 // dict of min values per table append:{[t;data] t insert data; if[t in WRITETBLS; // if table over its allowable size if[(mx:MAXROWS^MAXTBL[t])<count value t; // append enumerated buffer to disk (specific to table) .[` sv TMPSAVE,t,`;();,;.Q.en[`:.](cnt:mx-MINROWS^MINTBL[t]) sublist `. t]; // clear buffer @[`.;t;cnt _] ] ] } upd:append Using the above, the quote table would write down in chunks of 50,000 whenever it hit 100,000 rows, thus always having at least 50,000 in memory. The trade table, however, has no specific values set and so it would default back to writing in chunks of 10,000 rows, thus always having 20,000 in memory. 
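The per-table defaulting above relies on ^ (fill): MAXROWS^MAXTBL[t] uses the table-specific limit when one exists and falls back to the global default when the lookup returns a null. With the values defined above:

q)MAXROWS:30000
q)MAXTBL:(enlist `quote)!enlist 100000
q)MAXTBL`quote             / table-specific limit
100000
q)MAXTBL`trade             / no entry, so the lookup returns a null
0N
q)MAXROWS^MAXTBL`trade     / fill replaces the null with the default
30000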
Any other table would be unaffected and hold all data in memory until end of day. The end-of-day (EOD) function would have to change as some of the tables are not subject to the intraday writedown. These tables can be written straight to the HDB as previously. .u.end:{ t:tables`.;t@:where `g=attr each t@\:`sym; / append enumerated buffer to disk for write tables {.[` sv TMPSAVE,x,`;();,;.Q.en[`:.]`. x]}each WRITETBLS; / clear buffer for write tables @[`.;WRITETBLS;0#]; / write normal tables down in usual manner {[x;t].Q.dpft[`:.;x;`sym;]each t;@[`.;t;0#]}[x;]each t except WRITETBLS; / special logic to sort and move tmp tables to hdb .u.endWTbls[x;WRITETBLS]; / reapply grouped attribute @[;`sym;`g#] each t; / and notify hdb to reload and pick up new partition if[h:@[hopen;`$":",.u.x 1;0];h"\\l .";hclose h] } / end of day: save, clear, sort on disk, move .u.endWTbls:{[x;t] t@:where 11h=type each t@\:`sym; / sort on disk by sym, set `p# and move {disksort[` sv TMPSAVE,x,`;`sym;`p#]}each t; system"r ",(1_string TMPSAVE)," ",-1_1_string .Q.par[`:.;x;`]; / reset TMPSAVE for new day TMPSAVE::getTMPSAVE .z.d; } Another customization, although beyond the scope of this paper, would be to have a separate process carry out the disksort-and-move for any tables that were written down intraday. This would mean the RDB could very quickly do its end-of-day processing (mostly writing any remaining rows to the temporary directory) and continue to receive data as usual from the TP. However, while this significantly reduces downtime for the RDB, the HDB will still not be able to pick up the new partition until the disksort is complete, which may take quite some time as detailed earlier. Intraday write with partitioned temporary directory¶ A partitioned table in kdb+ may be partitioned by one of four separate datatypes, namely date, month, year and int. Date is the most commonly used; however, our solution for intraday writedowns involves partitioning by int. Partitioning by int offers some extra possibilities that can be used to help mitigate some of the problems associated with intraday writedowns. We will also make use of the fact that symbols in an HDB are enumerated against a simple int list. Each partition in our intraday writedown directory will store data for a single sym and the partition value will be the enumerated integer value for that sym. If, for example, in your HDB’s symfile the enumerated value of `eurusd is 223, then during the day EURUSD updates that are being written to disk will be appended to the relevant table in the 223 int partition. These entries will be sorted by time as they are being appended and so will have an s attribute on time in the temporary directory. The advantage of this method is that the data in the temporary directory can be queried much more efficiently as it is essentially partitioned by sym and sorted by time. The second processing time-saving is seen at EOD: no sort is required since the data is already divided by sym. Therefore, adding to the HDB reduces from an append-and-sort to a simple append. The solution begins by setting the following in the RDB.
// config TMPSAVE:`$":/home/local/FD/cmccarthy/wp/tmpPW" HDBDIR:`$":/home/local/FD/cmccarthy/wp/hdb/schema" WRITETBLS:`trade`quote MAXROWS:2000; MINROWS:1000 MAXTBL:MINTBL:(0#`)!0#0N MAXTBL[`quote]:100000; MINTBL[`quote]:0 MAXTBLSYM:MINTBLSYM:enlist[`]!enlist(0#`)!0#0N MAXTBLSYM[`quote;`eurusd]:100000; MINTBLSYM[`quote;`eurusd]:0 // number of rows to write to disk by table by sym minrows:{[t;s]MINROWS^MINTBL[t]^MINTBLSYM[t;s]} maxrows:{[t;s]MAXROWS^MAXTBL[t]^MAXTBLSYM[t;s]} writecount:{[t;s]maxrows[t;s]-minrows[t;s]} // time of last record saved to TMPSAVE by table and sym LASTTIME:enlist[`]!enlist(0#`)!0#0Nt Initially, the location of the HDB and temporary intraday DB are set, and the tables that need to be written down intraday are defined, as well as the minimum and maximum number of rows to keep in memory. This configuration is similar to some of the customizations described in previously for w.q , but slightly more granular in that it allows values to be set for each table and for each sym within that table. This is done using a dictionary of dictionaries, which can be easily indexed. q)MAXTBLSYM:MINTBLSYM:enlist[`]!enlist(0#`)!0#0N q)MAXTBLSYM[`quote;`eurgbp]:75000;MAXTBLSYM[`quote;`eurusd]:100000 q)MAXTBLSYM[`trade;`eurgbp]:40000;MAXTBLSYM[`trade;`eurusd]:50000 q)MAXTBLSYM | (`symbol$())!`long$() quote| `eurgbp`eurusd!75000 100000 trade| `eurgbp`eurusd!40000 50000 q)MAXTBLSYM[`trade;`eurgbp] 40000 q)MAXTBLSYM[`quote;`eurgbp`eurusd] 75000 100000 The minrows , maxrows and writecount functions simply take a table and sym and return the relevant counts. LASTTIME stores the time of the last record for each sym per table that was written to the temporary directory. This may be used later to help speed up some queries. // store table schemas and column names for all tables system "l /home/local/FD/cmccarthy/wp/tick/schema.q" TBLSCHEMAS.:(); TBLCOLS.:() {TBLSCHEMAS[x]:0#value x}each tables[]; {TBLCOLS[x]:cols value x}each tables[] // create structure for tables in the in memory portion of WRITETBLS createTblStruct:{(` sv`.mem,x,`)set TBLSCHEMAS x} createTblStruct each WRITETBLS // retrieve sym file from HDB directory HDBSYM:` sv HDBDIR,`sym sym:@[get;HDBSYM;`symbol$()] // remove all but an empty 0 directory and symlink to HDB sym file clearTmpDir:{[] system"rm -rf ",(1_string TMPSAVE),"/0/*"; {system"rm -rf ",x;}each 1_'string` sv'TMPSAVE,'key[TMPSAVE]except`sym`0; } // run on startup in case TMPSAVE directory is not empty i.e. recovery clearTmpDir[] The table schema is loaded and the schema of each table is stored in TBLSCHEMAS as they will be needed at EOD. This is also stored in a dictionary of dictionaries. The same method is used to store the column names of each table in TBLCOLS . As we are planning on storing the data in the temporary directory divided by sym, it also makes sense to store it in memory divided by sym. This is achieved by creating a tree-like structure which allows us to easily retrieve data for a particular sym in a particular table. The tables are stored in a top level namespace, .mem . Using the quote table from earlier as an example: q)eurusd:eurgbp:0#quote q)insert[`eurusd;(2#12:10:01.000;2#`eurusd;2#1.0;2#1.1;2#10;2#20)] q)insert[`eurgbp;(2#12:10:01.000;2#`eurgbp;2#1.0;2#1.1;2#10;2#20)] q).mem[`quote;`eurusd]:eurusd q).mem[`quote;`eurgbp]:eurgbp q).mem.quote | +`time`sym`bid`ask`bsize`asize!(`time$();`symbol$();`float... eurusd| +`time`sym`bid`ask`bsize`asize!(12:10:01.000 12:10:01.000;... eurgbp| +`time`sym`bid`ask`bsize`asize!(12:10:01.000 12:10:01.000;... 
q).mem[`quote;`eurgbp] time sym bid ask bsize asize --------------------------------------- 12:10:01.000 eurgbp 1 1.1 10 20 12:10:01.000 eurgbp 1 1.1 10 20 When creating the temporary directory initially, it is advisable to create an empty 0 partition inside it. This is to avoid the case whereby the first sym written down is mistaken by kdb+ to be a year e.g. 2021. By having an empty 0 folder kdb+ recognises the whole directory to be partitioned by integer. It is also helpful to create a symlink to the actual HDB symfile rather than attempting to copy it between the two directories. The clearTmpDir function will empty the temporary directory of everything except the symlink and an empty 0 partition. This is run on startup in case there is still data in the temporary directory e.g. in case of running recovery. It is also run at EOD to clear the temporary directory in preparation for the next day. This temporary directory will be loaded into kdb+ like a regular HDB. Thus these memory-mapped tables will override the empty tables loaded from the table schema on start up of the RDB. These on-disk tables can then be queried (akin to a HDB). As with w.q , the upd function needs to be overridden to work with this new solution. // adjusted upd upd:{[t;x] $[t in WRITETBLS; // insert data into new table structure [.[` sv `.mem,t;();,';x group x`sym]; // append to disk any sym segments over the allowable count writeToTmp[t;;0b]each s where maxrows[t;s]<=count each .mem[t]s:distinct x`sym;]; // if not a WRITETBL then insert as normal .[insert;(t;x);{}] ]; } First, the table is grouped by sym. This function assumes that bulk updates are being received. If it is known that only single rows will be received (i.e. no buffering of data in the TP) then this group is unnecessary and should be removed. Next, for each sym the rows are appended to the relevant table in the .mem dictionary structure. After this there is a check, using the maxrows function, to determine if any of the tables for these syms have now exceeded their max allowable rowcount. If they have then this table name and sym name are sent to the writeToTmp function. If the table was not in the WRITETBLS list then it is inserted as normal. // function to append a table/sym to the temporary directory writeToTmp:{[t;s;e] // enumerate against symfile i:sym?(HDBSYM?s); newTbl:not t in key LASTTIME; newPart:not(`$string i)in key TMPSAVE; // if EOD append full table (else writecount) to int partition cnt:$[e;count .mem[t;s]; writecount[t;s]]; (dir:` sv .Q.par[TMPSAVE;i;t],`)upsert .Q.en[HDBDIR] $[e;.mem[t;s];cnt sublist .mem[t;s]]; // apply sort attribute to on disk partition @[dir;`time;`s#]; // update LASTTIME with time from last row appended to disk LASTTIME[t;s]:.mem[t;s;`time]cnt-1; // delete the rows that have been written and apply sort attribute .[`.mem;(t;s);cnt _]; .[`.mem;(t;s);@[;`time;`s#]]; // if new partition/table then populate all partitions and reload if[newTbl or newPart;.Q.chk[TMPSAVE];system"l ",1_ string TMPSAVE] } The writeToTmp function takes three arguments: the table name, symbol name and an end-of-day flag. This end-of-day flag is added because at EOD the tables will need to emptied completely instead of a partial write and purge. This function: - Enumerates the symbol against the sym file. - If a new partition or new table is being created, then .Q.chk is run to populate all partitions correctly and the temporary directory is reloaded. - The number of rows to write is calculated using writecount (orcount table if EOD). 
- These rows are upserted to the correct table in the temporary directory with a sorted attribute. LASTTIME stores the time of the last row appended to disk.- The rows written down are deleted from the RDB and the sorted attribute is reapplied. Our logic for EOD processing now becomes: // modified EOD funct .u.end:{[d] // set new path for HDB and date hdbDateDir:` sv HDBDIR,`$string d; // writedown for normal tables {[x;t].Q.dpft[HDBDIR;x;`sym;]each t;@[`.;t;:;TBLSCHEMAS t]}[d;]each tables[] except WRITETBLS; // flush yet to be written data to TPMSAVE (with end of day flag) {[t]writeToTmp[t;;1b]each where 0<count each .mem[t]; // reset the table to initial schema and reset .mem structure @[`.;t;:;TBLSCHEMAS t];![`.mem;();0b;enlist t];createTblStruct t}each WRITETBLS; // append partitioned tables from TMPSAVE into one table in HDB appendHDB[WRITETBLS;hdbDateDir]; // notify HDB to reload if[h:@[hopen;`$":",.u.x 1;0];h"\\l .";hclose h]; // clear temp directory clearTmpDir[]; // reset global LASTTIME LASTTIME::enlist[`]!enlist(0#`)!0#0Nt; } The first step of this .u.end is to write down any tables that are not part of the intraday writedown as normal. Next, any data still left in the .mem structure needs to be flushed to the temporary directory with the end-of-day flag. At this point, the tables can be emptied, and reset to their initial schema, as can the .mem structure. The appendHDB function is detailed below and moves/appends the temporary directory into a regular HDB partition. Similarly to w.q , this step could easily be performed by a separate process, thus freeing up the RDB to continue to receive data for the next day. Finally, the HDB process is notified to reload, the temporary directory is cleared and the LASTTIME variable is reset. The EOD move/append logic is divided into four functions/steps: appendCol // # columns can be omitted (used for list columns including strings) appendCol:{[dtDir;tbl;col;colPath] if[not col like"*#";upsert[` sv dtDir,tbl,col;get colPath]] } - This function appends the data from one column in the temporary directory onto the similarly-named column in that table in the HDB date partition. Any # columns should not be moved as these will be generated automatically in the HDB partition. (# columns are used to store the lengths of each row for a list column.) appendPart // append one temp partition table to HDB appendPart:{[dtDir;tbl;tblPart] colz:key[tblPart]except`.d;colzPaths:` sv'tblPart,'colz; // write each column to disk appendCol[dtDir;tbl]'[colz;colzPaths]; } - The appendPart function works on a single table in a partition in the temporary directory and determines all the columns present in that table, and their fully qualified paths.appendCol is then invoked for each of these columns. appendTable // append each temp partition table into one table in hdb appendTable:{[dtDir;tbl;parts] tblParts:{[tbl;part]` sv TMPSAVE,part,tbl}[tbl]each parts; appendPart[dtDir;tbl;]each tblParts; // create .d file with sym before time as normal for hdb @[hdb:` sv dtDir,tbl,`; `.d; :; `sym`time,get[` sv (first tblParts),`.d]except`time`sym ]; } - For one table, this works out the fully-qualified path of that table in each partition in the temporary directory and sends each of them to the appendPart function, which appends all theses table into one in the HDB date partition. Finally, a new.d file is created in the HDB partition as generally the order of thesym andtime columns are switched compared to the RDB. 
appendHDB // append all data in TMPSAVE to HDB appendHDB:{[tbls;dtDir] parts:key[TMPSAVE]except`sym; appendTable[dtDir;;parts] each key` sv TMPSAVE,first parts; .Q.chk[HDBDIR]; // ensure all tables present in hdb // apply p# to each table directory {[dir;t]@[` sv dir,t,`;`sym;`p#]}[dtDir] each tbls; } - This function works out what partitions are present (i.e. 0 20 56 222 etc.) and also what tables. After appending all the tables using appendTable , .Q.chk is performed on the new HDB partition to ensure no table has been missed out (e.g. if a WRITETBLS table received no updates that day). Finally each table in the HDB date partition has a p (parted) attribute applied. Querying partitioned writedown¶ Before the query speeds for the different solutions can be compared, the method for querying the partitioned writedown must be discussed. The data for each table is no longer stored in one in-memory table but divided into a different table for each sym, both on-disk and in-memory. This is far from ideal but is one of the penalties that comes with this solution for dealing with low memory. To query the partitioned portion, the correct int value of the sym must be used. This is an example of how querying the intraday partition would work: q)select from quote where int=sym?`eurusd int time sym bid ask bsize asize ------------------------------------------------------- 3 00:00:01.000 eurusd 0.6735184 0.1566519 0 9 3 00:00:02.000 eurusd 0.4668601 0.3365118 0 1 ... q)select sym,bid,bsize from quote where int in sym?`gbpusd`eurusd sym bid bsize ---------------------- gbpusd 0.2159371 5 gbpusd 0.6669928 3 ... To query the equivalent in-memory portions of the tables, the following should be run: q)select from .mem[`quote;`eurusd] time sym bid ask bsize asize ------------------------------------------------ 16:32:40.946 eurusd 0.6387174 0.2846485 9 7 16:32:40.947 eurusd 0.704888 0.2335227 4 9 ... q){[t;s]raze{[t;s]select sym,bid,bsize from .mem[t;s]}[t;]each s}[`quote;`gbpusd`eurusd] sym bid bsize ------------------- gbpusd 0.6735184 0 gbpusd 0.4668601 0 ... Obviously, we will want to run just one query/function which will do all the selects and joining of different tables and return the desired result. Ideally, the query should always limit the results by sym and use as few columns as possible. The general solution is to select the raw columns from both in memory and on disk for each sym and combine them into one table. While Where clauses can be added to the individual selects relatively easily, group-bys or aggregations should be carried out afterwards on the combined dataset. If the aggregations were to be done while performing the selects, some sort of map-reduce functionality would be necessary. The aim should always be to get the smallest raw dataset possible and then apply aggregations and group-bys. This may mean some large queries would end up pulling in too much data (considering lack of memory was the initial problem) or unnecessarily large amounts of data are returned for relatively simple queries such as first/last. User behavior will have to adjust to take account of some of these limitations. One approach would be to implement a general query function which will combine results from the on-disk and in-memory datasets. If a particular user’s query is not catered for by this function, a bespoke function would need to be written for that user.
The following is an example of a query function which combines in-memory data with data from the intraday partition: // general function for querying WRITETBLS genQuery:{[t;s;c;whr;st;et] // t table name // s list of syms to return, ` will return all syms // c columns to return, ` will return all columns // whr a functional Where clause // st start of time interval // et end of time interval // treat ` as requesting all syms s:(),$[s~`;key[.mem t]except`;s]; // treat ` as requesting all cols c:(),$[c~`;TBLCOLS t;c]; // use time window to narrow down search using within win:enlist(within;`time;(st;et)); // if start time greater than last time on disk for sym, no disk select required memFlag:st>LASTTIME[t;s]; // functional select for each sym from rdb/temp and join // (unenumerating the historic data), also takes Where clause tabs:{[t;c;win;whr;s;memFlag] $[memFlag; (); unEnum delete int from ?[t;(enlist(=;`int;sym?s)),win,whr;0b;c!c]] , ?[.mem[t;s];win,whr;0b;c!c]}[t;c;win;whr;;]'[s;memFlag]; raze tabs } The time interval specifies the times that the data requested should be within. This can be very helpful when used in conjunction with LASTTIME , which can tell you whether the desired data for a given sym is completely in-memory, or in-memory and on disk. If it is all in memory then the more expensive disk read can be omitted. The actual selects are carried out using the functional format, one sym at a time and the results joined together using raze . It is important to un-enumerate any sym columns from the on-disk selection so that when the tables are razed together the tables’ sym columns will be consistent and type 11 instead of a mix of 11 and 20. This is achieved using the unEnum function: // unenumerate the sym columns from the HDB unEnum:{[tab] {[t;c]$[20=type t[c];@[t;c;:;sym@t[c]];t]}/[tab;cols tab] } In the following example, the Where clause is applied to each of the individual selects before the whole result set is combined: q)genQuery[`quote;`eurusd`gbpusd;`;((<;`bsize;5);(<;`bid;0.5));10:00;23:59] time sym bid ask bsize asize -------------------------------------------------- 10:40:21.210 eurusd 0.2440827 0.8446513 2 7 10:40:21.240 eurusd 0.2088232 0.1153054 1 1 10:40:21.520 eurusd 0.2007019 0.1717345 2 2 ... The genQuery function could easily be expanded and refined based on what type of queries are run most often and how they are best optimized. However, this function demonstrates the principles of how to query both the in-memory and the partitioned data structure together. Limitations¶ This solution comes with some limitations, namely added complexity in maintaining the data in the RDB and in creating the HDB partition. Also, querying the data is much more complicated as a result of the data being stored in a different format in memory and in the temporary directory. However, depending on the use case, the benefits may outweigh the drawbacks. Comparison of w.q and partitioned writedown¶ Although not exactly like-for-like, an attempt will be made to compare w.q with the partitioned writedown discussed above. This will be done in two areas: end-of-day processing speed and query speed intraday. While w.q was not designed to be queried intraday, it will serve as a useful benchmark for querying the partitioned writedown solution. End-of-day speed¶ This test measures how long it takes for the temporary intraday directory to be converted into a standard HDB date partition.
At EOD, w.q must first sort the tables on disk and then move them to the HDB, whereas a simple append is all that is required for the partitioned writedown. This offers significant performance benefits for EOD processing, as seen in Table 2 and Figure 2. This test was performed using the earlier quote table with varying numbers of rows. Table 2: rows disksort xasc partWrite ------------------------------------- 100,000 0.017 0.011 0.014 1,000,000 0.207 0.125 0.091 10,000,000 2.240 1.447 0.826 50,000,000 10.778 7.046 4.363 100,000,000 20.102 13.285 8.844 500,000,000 121.485 112.452 71.658 Figure 2: Time to convert to HDB As can be seen, using the partitioned write offers between 40-60% speedup on the end of day processing compared to w.q . Query speed¶ To compare the query speeds between w.q and the partitioned writedown we must compare how long it takes to select data from the respective temporary directories, keeping in mind that w.q stores the data as a single splayed table for each table. To demonstrate, four different queries were run on a quote table as defined in Limitations of w.q above, with 100 million rows and 10 distinct syms and the time measured in milli-seconds: | query | solution | syntax | time | |---|---|---|---| | All cols, one sym | w.q | select from quote where sym=`eurusd | 2491 | | All cols, 3 syms | w.q | select from quote where sym in `eurusd`eurgbp`gbpusd | 7128 | | 3 cols, 1 sym | w.q | select time,bid,bsize from quote where sym=`eurusd | 1750 | | 3 cols, 3 syms | w.q | select time,bid,bsize from quote where sym in `eurusd`eurgbp`gbpusd | 2990 | | All cols, one sym | partWrite.q | select from quote where int=sym?`eurusd | 312 | | All cols, 3 syms | partWrite.q | select from quote where int in sym?`eurusd`eurgbp`gbpusd | 982 | | 3 cols, 1 sym | partWrite.q | select time,bid,bsize from quote where int=sym?`eurusd | 209 | | 3 cols, 3 syms | partWrite.q | select time,bid,bsize from quote where int in sym?`eurusd`eurgbp`gbpusd | 382 | Table 3 Figure 3: Query speeds for on-disk data As can be seen in Table 3 and Figure 3, the speed-up in query times is substantial and thus may be worth the more complex storage and querying method. Author¶ Colm McCarthy is a senior kdb+ consultant who has worked for leading investment banks across a number of different asset classes.
// Server connection details \d .servers enabled:1b // whether server tracking is enabled CONNECTIONS:`rdb`hdb // list of connections to make at start up DISCOVERYREGISTER:1b // whether to register with the discovery service CONNECTIONSFROMDISCOVERY:1b // whether to get connection details from the discovery service (as opposed to the static file). TRACKNONTORQPROCESS:1b // whether to track and register non torQ processes NONTORQPROCESSFILE:hsym first .proc.getconfigfile["nontorqprocess.csv"] // non torQ processes file SUBSCRIBETODISCOVERY:1b // whether to subscribe to the discovery service for new processes becoming available DISCOVERYRETRY:0D00:05 // how often to retry the connection to the discovery service. If 0, no connection is made. This also dictates if the discovery service can connect it and cause it to re-register itself (val > 0) HOPENTIMEOUT:2000 // new connection time out value in milliseconds RETRY:0D00:05 // period on which to retry dead connections. If 0, no reconnection attempts RETAIN:`long$0D00:30 // length of time to retain server records AUTOCLEAN:0b // clean out old records when handling a close DEBUG:1b // log messages when opening new connections LOADPASSWORD:1b // load the external username:password from ${KDBCONFIG}/passwords STARTUP:0b // whether to automatically make connections on startup DISCOVERY:enlist` // list of discovery services to connect to (if not using process.csv) SOCKETTYPE:enlist[`]!enlist ` // dict of proctype -> sockettype e.g. `hdb`rdb`tp!`tcps`tcp`unix PASSWORDS:enlist[`]!enlist ` // dict of host:port!user:pass // functions to ignore when called async - bypass all permission checking and logging \d .zpsignore enabled:1b // whether its enabled ignorelist:(`upd;"upd";`.u.upd;".u.upd") // list of functions to ignore // timer functions \d .timer enabled:1b // whether the timer is enabled debug:0b // print when the timer runs any function logcall:1b // log each timer call by passing it through the 0 handle nextscheduledefault:2h // the default way to schedule the next timer // Assume there is a function f which should run at time T0, actually runs at time T1, and finishes at time T2 // if mode 0, nextrun is scheduled for T0+period // if mode 1, nextrun is scheduled for T1+period // if mode 2, nextrun is scheduled for T2+period // caching functions \d .cache maxsize:500 // the maximum size in MB of the cache as a whole. Evaluated using -22!. To be sure, set to half the required size maxindividual:100 // the maximum size in MB of any individual item in the cache. Evaluated using -22!. To be sure, set to half the required size // timezone functions \d .tz default:`$"Europe/London" // default local timezone // configuration for default mail server \d .email enabled:0b // whether emails are enabled url:` // url of email server e.g. `$"smtp://smtpout.secureserver.net:80" user:` // user account to use to send emails e.g. [email protected] password:` // password for user account from:`$"torq@localhost" // address for return emails e.g. 
[email protected] usessl:0b // connect using SSL/TLS debug:0i // debug level for email library: 0i = none, 1i=normal, 2i=verbose img:`$getenv[`KDBHTML],"/img/DataIntellect-TorQ-logo.png" // default image for bottom of email // configuration for kafka \d .kafka enabled:0b // whether kafka is enabled kupd:{[k;x] -1 `char$x;} // default definition of kupd // heartbeating \d .hb enabled:1b // whether the heartbeating is enabled subenabled:0b // whether subscriptions to other hearbeats are made CONNECTIONS:`ALL // processes that heartbeat subscriptions are recieved from (as a subset of .servers.CONNECTIONS) debug:1b // whether to print debug information publishinterval:0D00:00:30 // how often heartbeats are published checkinterval:0D00:00:10 // how often heartbeats are checked warningtolerance:2f // a process will move to warning state when it hasn't heartbeated in warningtolerance*checkinterval errortolerance:3f // and to an error state when it hasn't heartbeated in errortolerance*checkinterval \d .ldap enabled:0b // whether ldap authentication is enabled debug:0i // debug level for ldap library: 0i = none, 1i=normal, 2i=verbose servers: enlist `$"ldap://localhost:0"; // ldap server address(es) version:3; // ldap version number blocktime:0D00:30:00; // time before blocked user can attempt authentication checklimit:3; // number of attempts before user is temporarily blocked checktime:0D00:05; // period for user to reauthenticate without rechecking LDAP server buildDNsuf:""; // suffix used for building bind DN buildDN:{"uid=",string[x],",",buildDNsuf}; // function to build bind DN // broadcast publishing \d .u broadcast:1b; // broadcast publishing is on by default. Availble in kdb version 3.4 or later. // timezone \d .eodtime rolltimeoffset:0D00:00:00.000000000; // offset from default midnight roll datatimezone:`$"GMT"; // timezone for TP to timestamp data in rolltimezone:`$"GMT"; // timezone to perform rollover in //Subscriber cut-off \d .subcut enabled:0b; //flag for enabling subscriber cutoff. true means slow subscribers will be cut off. Default is 0b maxsize:100000000; //a global value for the max byte size of a subscriber. Default is 100000000 breachlimit:3; //the number of times a handle can exceed the size limit check in a row before it is closed. Default is 3 checkfreq:0D00:01; //the frequency for running the queue size check on subscribers. Default is 0D00:01 // Grafana Adaptor \d .grafana timecol:`time; sym:`sym; timebackdate:2D; ticks:1000; del:"."; //Datadog configuration \d .dg enabled:0b; //whether .lg.ext is overwritten to send errors to datadog. Default is 0b meaning errors will not be sent to datadog. webreq:0b; //whether datadog agent or web request function is used. Default is 0b which means datadog agent is used. 
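As a worked example of the heartbeat tolerances above: with the default checkinterval of 0D00:00:10, a process is moved to a warning state after 20 seconds without a heartbeat and to an error state after 30 seconds.

q)checkinterval:0D00:00:10
q)warningtolerance:2f
q)errortolerance:3f
q)`timespan$(warningtolerance;errortolerance)*checkinterval
0D00:00:20.000000000 0D00:00:30.000000000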
// k4unit tests \d .KU VERBOSE:1; // 0 - no logging to console, 1 - log filenames, >1 - log tests DEBUG:0; // 0 - trap errors, 1 - suspend if errors (except action=`fail) DELIM:","; // csv delimiter SAVEFILE:`:KUTR.csv; // test results savefile // data striping \d .ds numseg:0i; // default value for process.csv to overwrite and pubsub.q check ================================================================================ FILE: TorQ_config_settings_discovery.q SIZE: 984 characters ================================================================================ // Bespoke config for the discovery service // Server connection details \d .servers CONNECTIONS:`ALL // list of connections to make at start up DISCOVERYREGISTER:0b // whether to register with the discovery service CONNECTIONSFROMDISCOVERY:0b // whether to get connection details from the discovery service (as opposed to the static file) TRACKNONTORQPROCESS:1b // whether to track and register non torQ processes throught discovery. DISCOVERYRETRY:0D // how often to retry the connection to the discovery service. If 0, no connection is made HOPENTIMEOUT:200 // new connection time out value in milliseconds RETRY:0D00 // length of time to retry dead connections. If 0, no reconnection attempts RETAIN:`long$0Wp // length of time to retain server records AUTOCLEAN:0b // clean out old records when handling a close DEBUG:1b // log messages when opening new connections //========================================================================================== ================================================================================ FILE: TorQ_config_settings_dqc.q SIZE: 645 characters ================================================================================ //data quality engine config \d .dqe configcsv:first .proc.getconfigfile["dqcconfig.csv"] dqcdbdir:hsym`$getenv[`KDBDQCDB] // location to save dqc data hdbdir:hsym`$getenv[`KDBHDB] // for locating the sym file utctime:1b // define whether this process is on UTC time or not partitiontype:`date // default partition type to date writedownperiod:0D00:05:00 // period for writedown getpartition:{@[value;`.dqe.currentpartition;(`date^partitiontype)$(.z.D,.z.d)utctime]} \d .proc loadprocesscode:1b // whether to load the process specific code defined at ${KDBCODE}/{process type} ================================================================================ FILE: TorQ_config_settings_dqe.q SIZE: 626 characters ================================================================================ \d .dqe dqedbdir:hsym`$getenv[`KDBDQEDB] // location to save dqc data hdbdir:hsym`$getenv[`KDBHDB] // for locating the sym file utctime:1b // define whether this process is on UTC time or not partitiontype:`date // default partition type to date getpartition:{@[value;`.dqe.currentpartition;(`date^partitiontype)$(.z.D,.z.d)utctime]} writedownperiodengine:0D00:05:00 // period for writedown hdbtypes:enlist`hdb; // hdb types for use in saving \d .proc loadprocesscode:1b // whether to load the process specific code defined at ${KDBCODE}/{process type} ================================================================================ FILE: TorQ_config_settings_filealerter.q SIZE: 931 characters ================================================================================ /-Defines the default variables for the file alerter process \d .fa inputcsv:first .proc.getconfigfile["filealerter.csv"] // The name of the input csv to drive what gets done polltime:0D00:00:10 // The period to poll the file system 
alreadyprocessed:first .proc.getconfigfile["filealerterprocessed"] // The location of the table on disk to store the information about files which have already been processed skipallonstart:0b // Whether to skip all actions when the file alerter process starts up (so only "new" files after the process starts will be processed) moveonfail:0b // If the processing of a file fails (by any action) then whether to move it or not regardless usemd5:1b // User configuration for whether to find the md5 hash of new files. usemd5 takes 1b (on) or 0b (off) \d .proc loadprocesscode:1b // Whether to load the process specific code defined at ${KDBCODE}/{process type} ================================================================================ FILE: TorQ_config_settings_gateway.q SIZE: 1,126 characters ================================================================================ // Default configuration for the gateway process
depth:5 //depth to maintain in book table stdepth:100*depth //depth to maintain in state dicts bidst:(`u#enlist`)!enlist(`float$())!`float$() //bid state dict askst:(`u#enlist`)!enlist(`float$())!`float$() //ask state dict lb:(`u#enlist`)!enlist(`bids`bsizes`asks`asizes!()) //last book state /* Redefine publish function to pass to TP for real FH */ publish:upsert //define publish function to upsert for example FH rec.book:{[t;s] /* determine if book record needs published & publish if so */ bk: `bids`bsizes!depth sublist'(key;value)@\:bidst[s]; //get current bid book up to depth bk,:`asks`asizes!depth sublist'(key;value)@\:askst[s]; //get current ask book up to depth if[not bk~lb[s]; //compare to last book publish[`book;@[bk;`sym`time;:;(s;"p"$t)]]; //publish record if changed lb[s]:bk; //record state of last book ]; } rec.trade:{[t] /* record trade record */ publish[`trade;t]; } sort.state:{[s] /* sort state dictionaries & drop empty levels */ @[;s;{(where 0=x)_x}]'[`.gdax.bidst`.gdax.askst]; //drop all zeros @[`.gdax.askst;s;{stdepth sublist asc[key x]#x}]; //sort asks ascending @[`.gdax.bidst;s;{stdepth sublist desc[key x]#x}]; //sort bids descending } msg.snapshot:{ /* handle snapshot messages */ x:"SSFF"$x; //cast dictionary to relevant types s:.Q.id x`product_id; //extract sym, remove bad chars askst[s]:stdepth sublist (!/) flip x`asks; //get ask state bidst[s]:stdepth sublist (!/) flip x`bids; //get bid state } msg.l2update:{ /* handle level2 update messages */ x:"SSZ*"$x; //cast dictionary to relevant types s:.Q.id x`product_id; //extract sym, remove bad chars c:"SFF"$/:x`changes; //extract and cast changes {.[`.gdax.askst`.gdax.bidst y[0]=`buy;(x;y 1);:;y 2]}[s]'[c]; //update state dict(s) sort.state[s]; //sort state dicts rec.book[x`time;s]; //record current state of book } msg.ticker:{ /* handle ticker (trade) messages */ x:"SFFFSZjF"$`product_id`price`best_bid`best_ask`side`time`trade_id`last_size#x; //cast dict fields x:@[x;`product_id;.Q.id]; //fix sym x:@[x;`time;"p"$]; //cast time to timestamp if[not count x`trade_id;x[`trade_id]:0N]; //first rec has empty list x:`sym`price`bid`ask`side`time`tid`size!value x; //rename fields rec.trade `time`sym xcols enlist x; //make table & record } upd:{ /* entrypoint for received messages */ j:.j.k x; //parse received JSON message if[(t:`$j[`type]) in key msg; //check for handler of this message type msg[t]j; //pass to relevant message handler ]; } sub:{[h;s;t] t:$[t~`;`trade`depth;(),t]; //expand null to all tables, make list /* subscribe to l2 data for a given sym */ if[`depth in t; h .j.j `type`product_ids`channels!(`subscribe;enlist string s;enlist"level2"); //send subscription message ]; if[`trade in t; h .j.j `type`product_ids`channels!(`subscribe;enlist string s;enlist"ticker"); //send subscription message ]; } getref:{[] r:.req.get["https://api.gdax.com/products";()!()]; //get reference data using reQ :"SSSFFFSSb*FFbbb"$/:r; //cast to appropriate data types } \d . 
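For reference, the subscription dictionary built by the sub function above serialises via .j.j into a GDAX-style JSON message of the following shape (the ETH-USD product is just an example):

q).j.j `type`product_ids`channels!(`subscribe;enlist "ETH-USD";enlist "level2")
"{\"type\":\"subscribe\",\"product_ids\":[\"ETH-USD\"],\"channels\":[\"level2\"]}"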
.gdax.ref:.gdax.getref[]; //get reference data .gdax.h:.ws.open["wss://ws-feed.gdax.com";`.gdax.upd] //open websocket to feed .gdax.sub[.gdax.h;`$"ETH-USD";`]; //subscribe to L2 & trade data for ETH-USD .gdax.sub[.gdax.h;`$"BTC-GBP";`]; //subscribe to L2 & trade data for BTC-GBP ================================================================================ FILE: ws.q_js.k SIZE: 930 characters ================================================================================ /JS object parser /minor modification of https://github.com/KxSystems/kdb/blob/master/e/json.k \d .js /[]{} Cbg*xhijefcspmdznuvt q:"\"";s:{q,x,q};J:(($`0`1)!$`false`true;s;{$[#x;x;"null"]};s;{s@[x;&"."=8#x;:;"-"]};s)1 2 5 11 12 16h bin j:{$[10=abs t:@x;s@,/{$[x in r:"\t\n\r\"\\";"\\","tnr\"\\"r?x;x]}'x;99=t;"{",(","/:(j'!x),'":",'j'. x),"}";-1<t;"[",($[98=t;",\n ";","]/:.Q.fc[j']x),"]";J[-t]@$x]} /enclose e:{(*x),(","/:y),*|x};a:"\t\n\r\"\\";f:{$[x in a;"\\","tnr\"\\"a?x;x]} j:{$[10=abs t:@x;s$[|/x in a;,/f'x;x];99=t;e["{}"](j'!x),'":",'j'. x;-1<t;e["[]"].Q.fc[j']x;J[-t]@$x]} /disclose v:{=\~("\\"=-1_q,x)<q=x};d:{$[1<n:(s:+\v[x]*1 -1 1 -1"{}[]"?x)?0;1_'(0,&(v[x]&","=x)&1=n#s)_x:n#x;()]} c:{$["{"=*x;(`$c's'n#'x)!c'(1+n:x?'":")_'x:d x;"["=*x;.Q.fc[c']d x;q=*x;$[1<+/v x;'`err;"",. x];"a">*x;"F"$x;"n"=*x;0n;"t"=*x]} k:{c x@&~v[x]&x in" \t\n\r"}; \ k j x:([]C:$`as`;b:01b;j:0N 2;z:0Nz,.z.z) k j x:"\"a \\" k"{},2]" ================================================================================ FILE: ws.q_ws-client_ws.q SIZE: 5,174 characters ================================================================================ \d .ws
// @kind function // @category optimizeModels // @desc Predict sklearn models on test data // @param bestModel {<} Fitted best model // @param tts {dictionary} Feature and target data split into training/testing // sets // @return {float[]|boolean[]|int[]} Predicted scores optimizeModels.scoreSklearn:{[bestModel;tts] bestModel[`:predict][tts`xtest]` } // @kind function // @category optimizeModels // @desc Predict custom models on test data // @param modelInfo {table} Information about models applied to the data // @param modelDict {dictionary} Data related to model retrieval and various // configuration associated with a run // @param config {dictionary} Information relating to the current run of AutoML // @return {float[]|boolean[]|int[]} Predicted values optimizeModels.paramSearch:{[modelInfo;modelDict;config] tts:modelDict`tts; scoreFunc:modelDict`scoreFunc; orderFunc:modelDict`orderFunc; modelName:modelDict`modelName; config[`logFunc]utils.printDict`hyperParam; // Hyperparameter (HP) search inputs hyperParams:optimizeModels.i.extractdict[modelName;config]; hyperTyp:$[`gs=hyperParams`hyperTyp;"gridSearch";"randomSearch"]; numFolds:config`$hyperTyp,"Argument"; numReps:1; xTrain:tts`xtrain; yTrain:tts`ytrain; modelFunc:utils.bestModelDef[modelInfo;modelName;`minit]; scoreCalc:get[config`predictionFunction]modelFunc; // Extract HP dictionary hyperDict:hyperParams`hyperDict; embedPyModel:(exec first minit from modelInfo where model=modelName)[]; hyperFunc:config`$hyperTyp,"Function"; splitCnt:optimizeModels.i.splitCount[hyperFunc;numFolds;tts;config]; hyperDict:optimizeModels.i.updDict[modelName;hyperParams`hyperTyp;splitCnt; hyperDict;config]; // Final parameter required for result ordering and function definition params:`val`ord`scf!(config`holdoutSize;orderFunc;scoreFunc); // Perform HP search and extract best HP set based on scoring function results:get[hyperFunc][numFolds;numReps;xTrain;yTrain;scoreCalc; hyperDict;params]; bestHPs:first key first results; bestModel:embedPyModel[pykwargs bestHPs][`:fit][xTrain;yTrain]; preds:bestModel[`:predict][tts`xtest]`; `bestModel`hyperParams`predictions!(bestModel;bestHPs;preds) } // @kind function // @category optimizeModels // @desc Create confusion matrix // @param pred {dictionary} All data generated during the process // @param tts {dictionary} Feature and target data split into training/testing // sets // @param config {dictionary} Information relating to the current run of AutoML // return {dictionary} Confusion matrix created from predictions and true vals optimizeModels.confMatrix:{[pred;tts;config] if[`reg~config`problemType;:()!()]; yTest:tts`ytest; if[not type[pred]~type yTest; pred:`long$pred; yTest:`long$yTest ]; confMatrix:.ml.confMatrix[pred;yTest]; confTable:optimizeModels.i.confTab confMatrix; config[`logFunc]each(utils.printDict`confMatrix;confTable); confMatrix } // @kind function // @category optimizeModels // @desc Create impact dictionary // @param modelDict {dictionary} Library and function for best model // @param hyperSearch {dictionary} Values returned from hyperParameter search // @param tts {dictionary} Feature and target data split into training/testing // sets // @param config {dictionary} Information relating to the current run of AutoML // @param scoreFunc {fn} Scoring function // @param orderFunc {fn} Ordering function // return {dictionary} Impact of each column in the data set optimizeModels.impactDict:{[modelDict;hyperSearch;config] tts:modelDict`tts; scoreFunc:modelDict`scoreFunc; 
orderFunc:modelDict`orderFunc; bestModel:hyperSearch`bestModel; countCols:count first tts`xtest; scores:optimizeModels.i.predShuffle[modelDict;bestModel;tts;scoreFunc; config`seed]each til countCols; optimizeModels.i.impact[scores;countCols;orderFunc] } // @kind function // @category optimizeModels // @desc Get residuals for regression models // @param hyperSearch {dictionary} Values returned from hyperParameter search // @param tts {dictionary} Feature and target data split into training/testing // sets // @param config {dictionary} Information relating to the current run of AutoML // return {dictionary} Residual errors and true values optimizeModels.residuals:{[hyperSearch;tts;config] if[`class~config`problemType;()!()]; true:tts`ytest; pred:hyperSearch`predictions; `residuals`preds!(true-pred;pred) } // @kind function // @category optimizeModels // @desc Consolidate all parameters created from node // @param hyperSearch {dictionary} Values returned from hyperParameter search // @param confMatrix {dictionary} Confusion matrix created from model // @param impactDict {dictionary} Impact of each column in data // @param residuals {dictionary} Residual errors for regression problems // @return {dictionary} All parameters created during node optimizeModels.consolidateParams:{[hyperSearch;confMatrix;impactDict;residuals] analyzeDict:`confMatrix`impact`residuals!(confMatrix;impactDict;residuals); (`predictions _hyperSearch),enlist[`analyzeModel]!enlist analyzeDict } ================================================================================ FILE: ml_automl_code_nodes_optimizeModels_init.q SIZE: 295 characters ================================================================================ // code/nodes/optimizeModels/init.q - Load optimizeModels node // Copyright (c) 2021 Kx Systems Inc // // Load code for optimizeModels node \d .automl loadfile`:code/nodes/optimizeModels/utils.q loadfile`:code/nodes/optimizeModels/funcs.q loadfile`:code/nodes/optimizeModels/optimizeModels.q ================================================================================ FILE: ml_automl_code_nodes_optimizeModels_optimizeModels.q SIZE: 2,017 characters ================================================================================ // code/nodes/optimizeModels/optimizeModels.q - Optimize models node // Copyright (c) 2021 Kx Systems Inc // // Following the initial selection of the most promising model apply the user // defined optimization grid/random/sobol if feasible. // Ignore for keras/pytorch etc. 
\d .automl // @kind function // @category node // @desc Optimize models using hyperparameter search procedures if // appropriate, otherwise predict on test data // @param config {dictionary} Information related to the current run of AutoML // @param modelInfo {table} Information about models applied to the data // @param bestModel {<} Fitted best model // @param modelName {symbol} Name of best model // @param tts {dictionary} Feature and target data split into training/testing // sets // @param orderFunc {fn} Function used to order scores // @return {dictionary} Score, prediction and best model optimizeModels.node.function:{[config;modelInfo;bestModel;modelName;tts;orderFunc] ptype:$[`reg=config`problemType;"Regression";"Classification"]; scoreFunc:config`$"scoringFunction",ptype; modelDictKeys:`tts`scoreFunc`orderFunc`modelName`modelLib`modelFunc; modelLibFunc:utils.bestModelDef[modelInfo;modelName]each`lib`fnc; modelDictVals:(tts;scoreFunc;orderFunc;modelName),modelLibFunc; modelDict:modelDictKeys!modelDictVals; hyperSearch:optimizeModels.hyperSearch[modelDict;modelInfo;bestModel;config]; confMatrix:optimizeModels.confMatrix[hyperSearch`predictions;tts;config]; impactReport:optimizeModels.impactDict[modelDict;hyperSearch;config]; residuals:optimizeModels.residuals[hyperSearch;tts;config]; optimizeModels.consolidateParams[hyperSearch;confMatrix;impactReport; residuals] } // Input information optimizeModels.i.k:`config`models`bestModel`bestScoringName`ttsObject`orderFunc optimizeModels.node.inputs:optimizeModels.i.k!"!+<s!<" // Output information optimizeModels.i.k2:`bestModel`hyperParams`modelName`testScore`analyzeModel optimizeModels.node.outputs:optimizeModels.i.k2!"<!sf!" ================================================================================ FILE: ml_automl_code_nodes_optimizeModels_utils.q SIZE: 9,923 characters ================================================================================ // code/nodes/optimizeModels/utils.q - Utilities for the optimizeModels node // Copyright (c) 2021 Kx Systems Inc // // Utility functions specific to the optimizeModels node implementation \d .automl // Utility functions for optimizeModels // @kind function // @category optimizeModelsUtility // @desc Extract the hyperparameter dictionaries based on the applied model // @param bestModel {<} Fitted best Model // @param cfg {dictionary} Configuration information assigned // by the user and related to the current run // @return {dictionary} The hyperparameters appropriate for the model being // used optimizeModels.i.extractdict:{[bestModel;cfg] hyperParam:cfg`hyperparameterSearchType; // Get grid/random hyperparameter file name hyperTyp:$[`grid=hyperParam;`gs; hyperParam in`random`sobol;`rs; '"Unsupported hyperparameter generation method" ]; // Load table of hyperparameters to dictionary with (hyperparameter!values) hyperParamsDir:path,"/code/customization/hyperParameters/"; hyperParamFile:string[hyperTyp],"HyperParameters.json"; hyperParams:.j.k raze read0`$hyperParamsDir,hyperParamFile; extractParams:hyperParams bestModel; typeConvert:`$extractParams[`meta;`typeConvert]; n:where `symbol=typeConvert; typeConvert[n]:`; extractParams:$[`gs~hyperTyp; optimizeModels.i.gridParams; optimizeModels.i.randomParams ] .
(extractParams;typeConvert); `hyperTyp`hyperDict!(hyperTyp;extractParams) } // @kind function // @category optimizeModelsUtility // @desc Convert hyperparameters from json to the correct types // @param extractParams {dictionary} Hyperparameters for the given model // type (class/reg) // initially parsed with '.j.k' from 'gsHyperParameters.json' // @param typeConvert {string} List of appropriate types to convert the // hyperparameters to // @return {dictionary} Hyperparameters cast to appropriate representation optimizeModels.i.gridParams:{[extractParams;typeConvert] typeConvert$'extractParams[`Parameters] } // @kind function // @category optimizeModelsUtility // @desc Parse the correct structure for random/sobol search from // JSON format provided // @param extractParams {dictionary} Hyperparameters for the given model type // (class/reg) // initially parsed with '.j.k' from 'rsHyperParameters.json' // @param typeConvert {string} List of appropriate types to convert the // hyperparameters to // @return {dictionary} Hyperparameters converted to an appropriate // representation optimizeModels.i.randomParams:{[extractParams;typeConvert] randomType:`$extractParams[`meta;`randomType]; paramDict:extractParams`Parameters; params:typeConvert$'paramDict; // Generate the structure required for random/sobol search paramsJoin:randomType,'value[params],'typeConvert; key[paramDict]!paramsJoin } // @kind function // @category optimizeModelsUtility // @desc Split the training data into a representation of the breakdown of // data for the hyperparameter search. This is used to ensure that if a // hyperparameter search is done on KNN that there are sufficient // data points in the validation set for all hyperparameter // nearest neighbour calculations. // @param hyperFunc {symbol} Hyperparameter function to be used // @param numFolds {int} Number of folds to use // @param tts {dictionary} Feature and target data split into training // and testing set // @param cfg {dictionary} Configuration information assigned by the // user and related to the current run // @return {dictionary} The hyperparameters appropriate for the model being // used optimizeModels.i.splitCount:{[hyperFunc;numFolds;tts;cfg] $[hyperFunc in`mcsplit`pcsplit; 1-numFolds; (numFolds-1)%numFolds ]*count[tts`xtrain]*1-cfg`holdoutSize } // @kind function // @category optimizeModelsUtility // @desc Alter hyperParameter dictionary depending on bestModel and type // of hyperopt to be used // @param modelName {symbol} Name of best model // @param hyperTyp {symbol} Type of hyperparameter to be used // @param splitCnt {int} How data should be split for hyperParam search // @param hyperDict {dictionary} HyperParameters used for hyperParam search // @param cfg {dictionary} Configuration information assigned by the // user and related to the current run // @return {dictionary} The hyperparameters appropriate for the model being // used optimizeModels.i.updDict:{[modelName;hyperTyp;splitCnt;hyperDict;cfg] knModel:modelName in`KNeighborsClassifier`KNeighborsRegressor; if[knModel&hyperTyp~`gs; n:splitCnt<hyperDict`n_neighbors; if[0<count where n; hyperDict[`n_neighbors]@:where not n ] ]; if[hyperTyp~`rs; if[knModel; if[splitCnt<hyperDict[`n_neighbors;2]; hyperDict[`n_neighbors;2]:"j"$splitCnt ] ]; hyperDict:`typ`random_state`n`p!(cfg`hyperparameterSearchType;cfg`seed; cfg`numberTrials;hyperDict) ]; hyperDict }
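As an illustration of the cast step in optimizeModels.i.gridParams above (the hyperparameter names and values here are toy assumptions, not taken from the real hyperparameter files): typeConvert$' pairs each declared type with the corresponding list parsed by .j.k, with the null-symbol cast handling string-to-symbol conversion:

q)params:`max_depth`criterion!(3 5 7f;("gini";"entropy"))  / as .j.k would parse them: floats and strings
q)typeConvert:`long`                                       / `long$ for max_depth, `$ (to symbol) for criterion
q)converted:typeConvert$'params
q)converted`max_depth
3 5 7
q)converted`criterion
`gini`entropy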
enabled:@[value;`enabled;1b] // whether server tracking is enabled CONNECTIONS:@[value;`CONNECTIONS;`] // the list of connections to make at start up DISCOVERYREGISTER:@[value;`DISCOVERYREGISTER;1b] // whether to register with the discovery service CONNECTIONSFROMDISCOVERY:@[value;`CONNECTIONSFROMDISCOVERY;1b] // whether to get connection details from the discovery service (as opposed to the static file) SUBSCRIBETODISCOVERY:@[value;`SUBSCRIBETODISCOVERY;1b] // whether to subscribe to the discovery service for new processes becoming available DISCOVERYRETRY:@[value;`DISCOVERYRETRY;0D00:05] // how often to retry the connection to the discovery service. If 0, no connection is made TRACKNONTORQPROCESS:@[value;`TRACKNONTORQPROCESS;0b] // whether to track and register non torQ processes NONTORQPROCESSFILE:@[value;`NONTORQPROCESSFILE;hsym .proc.getconfigfile["nontorqprocess.csv"]] // non torQ processes file HOPENTIMEOUT:@[value;`HOPENTIMEOUT;2000] // new connection time out value in milliseconds RETRY:@[value;`RETRY;0D00:05] // period on which to retry dead connections. If 0 no connection is made RETAIN:@[value;`RETAIN;`long$0D00:30] // length of time to retain server records AUTOCLEAN:@[value;`AUTOCLEAN;0b] // clean out old records when handling a close DEBUG:@[value;`DEBUG;1b] // whether to print debug output LOADPASSWORD:@[value;`LOADPASSWORD;1b] // load the external username:password from ${KDBCONFIG}/passwords USERPASS:` // the username and password used to make connections STARTUP:@[value;`STARTUP;0b] // whether to automatically make connections on startup DISCOVERY:@[value;`DISCOVERY;enlist`] // list of discovery services to connect to (if not using process.csv) SOCKETTYPE:@[value;`SOCKETTYPE;enlist[`]!enlist `] // dict of proctype!sockettype. sockettype options : `tcp`tcps`unix. e.g. `rdb`tickerplant!`tcp`unix PASSWORDS:@[value;`PASSWORDS;enlist[`]!enlist `] // dict of host:port!user:pass e.g. `:host:1234!`user:pass // If required, change this method to something more secure! // Otherwise just load the usernames and passwords from the passwords directory // using the usual hierarchic approach loadpassword:{ .lg.o[`conn;"attempting to load external connection username:password from file"]; // load a password file loadpassfile:{[file] $[()~key hsym file; .lg.o[`conn;"password file ",(string file)," not found"]; [.lg.o[`conn;"password file ",(string file)," found"]; .servers.USERPASS:first`$read0 hsym file]]}; files:{.proc.getconfig["passwords/",(string x),".txt";2]} each `default,.proc.parentproctype,.proc.proctype,.proc.procname; loadpassfile each distinct[raze flip files[;1 0]]except`; } loadpassword[] // open a connection opencon:{ if[DEBUG;.lg.o[`conn;"attempting to open handle to ",string x]]; // If the supplied connection string has no more than 2 colons (or 3 if using a tcps connection) append user:pass from passwords dictionary // else return connection string passed in tcps:string[x] like ":tcps:*"; connection:hsym $[(2+tcps) >= sum ":"=string x; `$(string x),":",string USERPASS^PASSWORDS[x];x]; h:@[{(hopen x;"")};(connection;.servers.HOPENTIMEOUT);{(0Ni;x)}]; // just log this as standard out. 
Depending on the scenario, failing to open a connection isn't necessarily an error if[DEBUG;.lg.o[`conn;"connection to ",(string x),$[null first h;" failed: ",last h;" successful"]]]; first h} // req = required set of attributes // avail = the attributes which the server process is advertising // return a dict of (complete match boolean; partial match values) attributematch:{[req;avail] // the dictionary is mixed type - so have to handle non values in the available dictionary separately vals:key[req] inter key avail; notpresent:noval!(count noval:key[req] except key avail)#enlist(0b;()); notpresent,vals!{($[0>type y;x~y;all x in y];(x,()) inter y,())}'[req vals;avail vals]} // Get the list of servers which match specific types or names // attributes is used to return an attribute dictionary with the matches to the required attributes // autoopen is used to attempt to automatically open a connection if it is registered but not available // if onlyone is true, and at least one server per name/type is found, then autoopen is ignored (as we only need one server, which we have) // name or type can be `procname`proctype getservers:{[nameortype;lookups;req;autoopen;onlyone] r:$[`~lookups; select procname,proctype,lastp,w,hpup,attributes,index:i from .servers.SERVERS; nameortype~`proctype; select procname,proctype,lastp,w,hpup,attributes,index:i from .servers.SERVERS where proctype in lookups; select procname,proctype,lastp,w,hpup,attributes,index:i from .servers.SERVERS where procname in lookups]; // no servers found matching the criteria - so throw them back if[0=count r;:update attributematch:attributes from r]; r:update alive:.dotz.liveh w from r; // try to automatically reopen handles if there are closed ones, and we need more than one connection if[autoopen; if[(count r) > alivecount:sum r`alive; // we don't have any servers, or we have to try to return them all, or there is a specified name/type where we don't have an open handle if[(alivecount=0) or (not onlyone) or (any not exec max alive by agg:?[nameortype~`proctype;proctype;procname] from r); retryrows exec index from r where not alive; r:select procname,proctype,lastp,w,hpup,attributes,alive:.dotz.liveh w from .servers.SERVERS where i in r`index]]]; select procname,proctype,lastp,w,hpup,attributes,attribmatch:.servers.attributematch[req]each attributes from r where alive} selector:{[servertable;selection] $[selection=`roundrobin;first `lastp xasc servertable; selection=`any;rand servertable; selection=`last;last `lastp xasc servertable; '"unknown selection type : ",string selection]} // short cut function to get a server by type // Only require one server of the given type getserverbytype:{[ptype;serverval;selection] r:getservers[`proctype;ptype;()!();1b;1b]; if[count r; r:selector[r;selection]; updatestats[r`w]]; r[serverval]} gethandlebytype:getserverbytype[;`w;] gethpbytype:getserverbytype[;`hpup;] // Update the server stats updatestats:{[W] update lastp:.proc.cp[],hits:1+hits from`.servers.SERVERS where w=W} names:{asc distinct exec procname from`.servers.SERVERS where .dotz.liveh w} types:{asc distinct exec proctype from`.servers.SERVERS where .dotz.liveh w} unregistered:{except[key .z.W;exec w from`.servers.SERVERS]} cleanup:{if[count w0:exec w from`.servers.SERVERS where not .dotz.livehn w; update endp:.proc.cp[],lastp:.proc.cp[],w:0Ni from`.servers.SERVERS where w in w0]; if[AUTOCLEAN;delete from`.servers.SERVERS where not .dotz.liveh w,(.proc.cp[]^endp)<.proc.cp[]-.servers.RETAIN];} / add a new server for current session
addnthawc:{[name;proctype;hpup;attributes;W;checkhandle] if[checkhandle and not isalive:.dotz.liveh W;'"invalid handle"]; cleanup[]; $[not hpup in (exec hpup from .servers.SERVERS) inter (exec hpup from .servers.nontorqprocesstab); `.servers.SERVERS insert(name;proctype;lower hpup;W;0i;$[isalive;.proc.cp[];0Np];.proc.cp[];0Np;attributes); .lg.o[`conn;"Removed double entries: name->", string[name],", proctype->",string[proctype],", hpup->\"",string[hpup],"\""]]; W } addh:{[hpuP] W:opencon hpuP; $[null W; '"failed to open handle to ",string hpuP; addhw[hpuP;W]]}
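A brief usage sketch for the connection-management functions above (this assumes a process of type `rdb` is already registered in .servers.SERVERS, and the "count trade" query is purely illustrative):

q).servers.getservers[`proctype;`rdb`hdb;()!();1b;0b]  / table of matching servers, reopening dead handles
q)h:.servers.gethandlebytype[`rdb;`any]                / handle to any live rdb
q)h"count trade"                                       / then use it like any hopen handle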
// @kind function // @category freshFeat // @desc Find the position of the last occurrence of the maximum value // in the series relative to the series length // @param data {number[]} Numerical data points // @return {float} Last max relative to number of data points fresh.feat.lastMax:{[data] (last where data=max data)%count data } // @kind function // @category freshFeat // @desc Find the position of the last occurrence of the minimum value // in the series relative to the series length // @param data {number[]} Numerical data points // @return {float} Last min relative to number of data points fresh.feat.lastMin:{[data] (last where data=min data)%count data } // @kind function // @category freshFeat // @desc Calculate the slope/intercept/r-value associated of a series // @param data {number[]} Numerical data points // @return {dictionary} Slope, intercept and r-value fresh.feat.linTrend:{[data] k:til count data; slope:(xk:data cov k)%vk:var k; intercept:avg[data]-slope*avg k; rval:xk%sqrt vk*var data; `rval`intercept`slope!0^(rval;intercept;slope) } // @kind function // @category freshFeat // @desc Longest sequence of consecutive data points within the series with // a value greater than the mean // @param data {number[]} Numerical data points // @return {boolean} Is longest subsequence greater than the mean fresh.feat.longStrikeAboveMean:{[data] max 0,fresh.i.getLenSeqWhere data>avg data } // @kind function // @category freshFeat // @desc Longest sequence of consecutive data points within the series with // a value lower than the mean // @param data {number[]} Numerical data points // @return {boolean} Is longest subsequence less than the mean fresh.feat.longStrikeBelowMean:{[data] max 0,fresh.i.getLenSeqWhere data<avg data } // @kind function // @category freshFeat // @desc Maximum value // @param data {number[]} Numerical data points // @return {number} Maximum value of the series fresh.feat.max:{[data] max data } // @kind function // @category freshFeat // @desc Average value // @param data {number[]} Numerical data points // @return {number} Mean value of the series fresh.feat.mean:{[data] avg data } // @kind function // @category freshFeat // @desc Calculate the average over the absolute difference between // subsequent series values // @param data {number[]} Numerical data points // @return {float} Mean over the absolute difference between data points fresh.feat.meanAbsChange:{[data] avg abs 1_deltas data } // @kind function // @category freshFeat // @desc Calculate the average over the difference between subsequent // series values // @param data {number[]} Numerical data points // @return {float} Mean over the difference between data points fresh.feat.meanChange:{[data] n:-1+count data; (data[n]-data 0)%n } // @kind function // @category freshFeat // @desc Calculate the average central approximation of the second // derivative of a series // @param data {number[]} Numerical data points // @return {float} Mean central approximation of the second derivative fresh.feat.mean2DerCentral:{[data] p:prev data; avg(.5*data+prev p)-p } // @kind function // @category freshFeat // @desc Median value // @param data {number[]} Numerical data points // @return {number} Median value of the series fresh.feat.med:{[data] med data } // @kind function // @category freshFeat // @desc Minimum value // @param data {number[]} Numerical data points // @return {number} Minimum value of the series fresh.feat.min:{[data] min data } // @kind function // @category freshFeat // @desc Number of crossings in 
the series over the value crossVal // @param data {number[]} Numerical data points // @param crossVal {number} Crossing value // @return {int} Number of crossings fresh.feat.numCrossing:{[data;crossVal] sum 1_differ data>crossVal } // @kind function // @category freshFeat // @desc Number of peaks in a series following data smoothing via // application of a Ricker wavelet of defined width // @param data {number[]} Numerical data points // @param width {long} Width of wavelet // @return {long} Number of peaks fresh.feat.numCwtPeaks:{[data;width] count fresh.i.findPeak[data;1+til width]` } // @kind function // @category freshFeat // @desc Number of peaks in the series with a specified support // @param data {number[]} Numerical data points // @param support {long} Support of the peak // @return {int} Number of peaks fresh.feat.numPeaks:{[data;support] sum all fresh.i.peakFind[data;support]each 1+til support } // @kind function // @category freshFeat // @desc Partial auto-correlation of a series with a specified lag // @param data {number[]} Numerical data points // @param lag {long} Lag to apply to data // @return {dictionary} Partial auto-correlation fresh.feat.partAutoCorrelation:{[data;lag] corrKeys:`$"lag_",/:string 1+til lag; corrVals:lag#$[1>mx:lag&count[data]-1; (); 1_fresh.i.pacf[data;`nlags pykw mx;`method pykw`ld]` ],lag#0n; corrKeys!corrVals } // @kind function // @category freshFeat // @desc Ratio of the number of non-distinct values to the number of // possible values // @param data {number[]} Numerical data points // @return {float} Calculated ratio fresh.feat.perRecurToAllData:{[data] g:count each group data; sum[1<g]%count g } // @kind function // @category freshFeat // @desc Ratio of the number of non-distinct values to the number of data points // @param data {number[]} Numerical data points // @return {float} Calculated ratio fresh.feat.perRecurToAllVal:{[data] g:count each group data; sum[g where 1<g]%count data } // @kind function // @category freshFeat // @desc The value of a series greater than a user-defined quantile // percentage of the ordered series // @param data {number[]} Numerical data points // @param quantile {float} Quantile to check // @return {float} Value greater than quantile fresh.feat.quantile:{[data;quantile] p:quantile*-1+count data; idx:0 1+\:floor p; r:0^deltas asc[data]idx; r[0]+(p-idx 0)*last r } // @kind function // @category freshFeat // @desc The number of values greater than or equal to some minimum and // less than some maximum // @param data {number[]} Numerical data points // @param minVal {number} Min value allowed // @param maxVal {number} Max value allowed // @return {int} Number of data points in specified range fresh.feat.rangeCount:{[data;minVal;maxVal] sum(data>=minVal)&data<maxVal } // @kind function // @category freshFeat // @desc Ratio of values greater than sigma from the mean value // @param data {number[]} Numerical data points // @param r {float} Ratio to compare // @return {float} Calculated ratio fresh.feat.ratioBeyondRSigma:{[data;r] avg abs[data-avg data]>r*dev data } // @kind function // @category freshFeat // @desc Ratio of the number of unique values to total number of values // in a series // @param data {number[]} Numerical data points // @return {float} Calculated ratio fresh.feat.ratioValNumToSeriesLength:{[data] count[distinct data]%count data } // @kind function // @category freshFeat // @desc Skew of a time series indicating asymmetry within the series // @param data {number[]} Numerical data points // @return {float}
Skew of data fresh.feat.skewness:{[data] n:count data; s:sdev data; m:data-avg data; n*sum[m*m*m]%(s*s*s)*(n-1)*-2+n } // @kind function // @category freshFeat // @desc Calculate the cross power spectral density of a time series // @param data {number[]} Numerical data points // @param coeff {int} Frequency at which calculation is performed // @return {float} Cross power spectral density of data at given coeff fresh.feat.spktWelch:{[data;coeff] fresh.i.welch[data;`nperseg pykw 256&count data][@;1][`]coeff } // @kind function // @category freshFeat // @desc Standard deviation // @param data {number[]} Numerical data points // @return {float} Standard deviation of series fresh.feat.stdDev:{[data] dev data } // @kind function // @category freshFeat // @desc Sum points that appear more than once in a series // @param data {number[]} Numerical data points // @return {number} Sum of all points present more than once fresh.feat.sumRecurringDataPoint:{[data] g:count each group data; k:where 1<g; sum k*g k } // @kind function // @category freshFeat // @desc Sum values that appear more than once in a series // @param data {number[]} Numerical data points // @return {number} Sum of all values present more than once fresh.feat.sumRecurringVal:{[data] sum where 1<count each group data } // @kind function // @category freshFeat // @desc Sum data points // @param data {number[]} Numerical data points // @return {number} Sum of values within the series fresh.feat.sumVal:{[data] sum data } // @kind function // @category freshFeat // @desc Measure symmetry of a time series // @param data {number[]} Numerical data points // @param ratio {float} Ratio in range 0->1 // @return {boolean} Measure of symmetry fresh.feat.symmetricLooking:{[data;ratio] abs[avg[data]-med data]<ratio*max[data]-min data } // @kind function // @category freshFeat // @desc Measure the asymmetry of a series based on a user-defined lag // @param data {number[]} Numerical data points // @param lag {long} Size of lag to apply // @return {float} Measure of asymmetry of data fresh.feat.treverseAsymStat:{[data;lag] x1:xprev[lag]data; x2:xprev[lag]x1; 0^avg x1*(data*data)-x2*x2 } // @kind function // @category freshFeat // @desc Return the number occurrences of a specific value within a // dataset // @param data {number[]} Numerical data points // @param val {number} Value to check // @return {int} Number of occurrences of val within the series fresh.feat.valCount:{[data;val] sum data=val } // @kind function // @category freshFeat // @desc Variance of a dataset // @param data {number[]} Numerical data points // @return {float} Variance of the series fresh.feat.var:{[data] var data } // @kind function // @category freshFeat // @desc Check if the variance of a dataset is larger than its standard // deviation // @param data {number[]} Numerical data points // @return {boolean} Indicates if variance is larger than standard deviation fresh.feat.varAboveStdDev:{[data] 1<var data } ================================================================================ FILE: ml_ml_fresh_init.q SIZE: 417 characters ================================================================================ // fresh/init.q - Load fresh library // Copyright (c) 2021 Kx Systems Inc // // FeatuRe Extraction and Scalable Hypothesis testing (FRESH) // FRESH algorithm implementation (https://arxiv.org/pdf/1610.07717v3.pdf) .ml.loadfile`:fresh/utils.q .ml.loadfile`:fresh/feat.q .ml.loadfile`:fresh/extract.q .ml.loadfile`:fresh/select.q .ml.loadfile`:util/utils.q 
.ml.loadfile`:util/utilities.q .ml.i.deprecWarning[`fresh] ================================================================================ FILE: ml_ml_fresh_select.q SIZE: 2,594 characters ================================================================================ // fresh/select.q - Feature selection // Copyright (c) 2021 Kx Systems Inc // // Selection of statistically significant features \d .ml // @kind function // @category fresh // @desc Statistically significant features based on defined selection // procedure // @param tab {table} Value side of a table of created features // @param target {int[]|float[]} Targets corresponding to the rows of the table // @param func {fn} Projection of significant feature function to apply e.g. // .ml.fresh.kSigFeat[10] // @returns {symbol[]} Features deemed statistically significant according to // user-defined func fresh.significantFeatures:{[tab;target;func] func fresh.sigFeat[tab;target] } // @kind function // @category fresh // @desc Return p-values for each feature // @param tab {table} Value side of a table of created features // @param target {int[]|float[]} Targets corresponding to the rows of the table // @return {dictionary} P-value for each feature to be passed to user-defined // significance function fresh.sigFeat:{[tab;target] func:fresh.i$[2<count distinct target;`kTau`ksYX;`ks`fisher]; sigCols:where each(2<;2=)@\:(count distinct@)each flip tab; raze[sigCols]!(func[where count each sigCols]@\:target)@'tab raze sigCols } // @kind function // @category fresh // @desc The Benjamini-Hochberg-Yekutieli (BHY) procedure: determines // if the feature meets a defined False Discovery Rate (FDR) level. The // recommended input is 5% (0.05). // @param rate {float} False Discovery Rate // @param pValues {dictionary} Output of .ml.fresh.sigFeat // @return {symbol[]} Significant features fresh.benjhoch:{[rate;pValues] idx:1+til n:count pValues:asc pValues; where pValues<=rate*idx%n*sums 1%idx }
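A minimal sketch of how these pieces fit together, using a toy feature table and binary target (assumes the rest of the FRESH library, e.g. the .ml.fresh.i test implementations from fresh/utils.q, is loaded; the column names are arbitrary):

q)tab:([]f1:100?1f;f2:100?10)                                        / toy value side of a feature table
q)target:100?0b                                                      / binary target
q).ml.fresh.significantFeatures[tab;target;.ml.fresh.benjhoch 0.05]  / features passing a 5% FDR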
// @kind function // @category main // @subcategory delete // // @overview // Delete the metric table associated with a name // from the registry // // @param folderPath {dict|string|null} Registry location, can be: // 1. A dictionary containing the vendor and location as a string, e.g. // ```enlist[`local]!enlist"myReg"``` or // ```enlist[`aws]!enlist"s3://ml-reg-test"``` etc; // 2. A string indicating the local path; // 3. A generic null to use the current .ml.registry.location pulled from CLI/JSON. // @param experimentName {string} Name of the experiment to be deleted // @param modelName {string|null} The name of the model to retrieve // @param version {long[]} The version of the model to retrieve (major;minor) // // @return {null} registry.delete.metrics:{[folderPath;experimentName;modelName;version] config:registry.util.check.config[folderPath;()!()]; if[not`local~storage:config`storage;storage:`cloud]; // Locate/retrieve the registry locally or from the cloud config:$[storage~`local; registry.local.util.check.registry config; [checkFunction:registry.cloud.util.check.model; checkFunction[experimentName;modelName;version;config`folderPath;config]] ]; modelDetails:registry.util.search.model[experimentName;modelName;version;config]; modelName:first modelDetails `modelName; version:first modelDetails `version; config:registry.util.check.config[folderPath;()!()]; if[not`local~storage:config`storage;storage:`cloud]; folderPath:config`folderPath; $[`local<>storage; registry.cloud.delete.metrics[config;experimentName;modelName;version]; [function:registry.util.getFilePath; params:(folderPath;experimentName;modelName;version;`metrics;()!()); location:function . params; if[()~key location;logging.error"No metric table exists at this location, unable to delete."]; hdel location; ] ]; } // @kind function // @category main // @subcategory delete // // @overview // Delete the code associated with a name // from the registry // // @param folderPath {dict|string|null} Registry location, can be: // 1. A dictionary containing the vendor and location as a string, e.g. // ```enlist[`local]!enlist"myReg"``` or // ```enlist[`aws]!enlist"s3://ml-reg-test"``` etc; // 2. A string indicating the local path; // 3. A generic null to use the current .ml.registry.location pulled from CLI/JSON. // @param experimentName {string} Name of the experiment to be deleted // @param modelName {string|null} The name of the model to retrieve // @param version {long[]} The version of the model to retrieve (major;minor) // @param codeFile {string} The type of config // // @return {null} registry.delete.code:{[folderPath;experimentName;modelName;version;codeFile] config:registry.util.check.config[folderPath;()!()]; if[not`local~storage:config`storage;storage:`cloud]; // Locate/retrieve the registry locally or from the cloud config:$[storage~`local; registry.local.util.check.registry config; [checkFunction:registry.cloud.util.check.model; checkFunction[experimentName;modelName;version;config`folderPath;config]] ]; modelDetails:registry.util.search.model[experimentName;modelName;version;config]; modelName:first modelDetails `modelName; version:first modelDetails `version; config:registry.util.check.config[folderPath;()!()]; if[not`local~storage:config`storage;storage:`cloud]; folderPath:config`folderPath; $[`local<>storage; [function:registry.cloud.delete.code; params:(config;experimentName;modelName;version;codeFile); function . 
params ]; [function:registry.util.getFilePath; params:(folderPath;experimentName;modelName;version;`code;enlist[`codeFile]!enlist codeFile); location:function . params; if[()~key location;logging.error"No such code exists at this location, unable to delete."]; hdel location ] ]; } // @kind function // @category main // @subcategory delete // // @overview // Delete a metric from the metric table associated with a name // from the registry // // @param folderPath {dict|string|null} Registry location, can be: // 1. A dictionary containing the vendor and location as a string, e.g. // ```enlist[`local]!enlist"myReg"``` or // ```enlist[`aws]!enlist"s3://ml-reg-test"``` etc; // 2. A string indicating the local path; // 3. A generic null to use the current .ml.registry.location pulled from CLI/JSON. // @param experimentName {string} Name of the experiment to be deleted // @param modelName {string|null} The name of the model to retrieve // @param version {long[]} The version of the model to retrieve (major;minor) // @param metricName {string} The name of the metric // // @return {null} registry.delete.metric:{[folderPath;experimentName;modelName;version;metricName] if[-11h=type metricName;metricName:string metricName]; config:registry.util.check.config[folderPath;()!()]; if[not`local~storage:config`storage;storage:`cloud]; // Locate/retrieve the registry locally or from the cloud config:$[storage~`local; registry.local.util.check.registry config; [checkFunction:registry.cloud.util.check.model; checkFunction[experimentName;modelName;version;config`folderPath;config]] ]; modelDetails:registry.util.search.model[experimentName;modelName;version;config]; modelName:first modelDetails `modelName; version:first modelDetails `version; config:registry.util.check.config[folderPath;()!()]; if[not`local~storage:config`storage;storage:`cloud]; folderPath:config`folderPath; $[`local<>storage; [function:registry.cloud.delete.metric; params:(config;experimentName;modelName;version;metricName); function . params ]; [function:registry.util.getFilePath; params:(folderPath;experimentName;modelName;version;`metrics;()!()); location:function . params; if[()~key location;logging.error"No metric table exists at this location, unable to delete."]; location set ?[location;enlist (not;(like;`metricName;metricName));0b;`symbol$()]; ] ]; } ================================================================================ FILE: ml_ml_registry_q_main_get.q SIZE: 14,133 characters ================================================================================ // get.q - Main callable functions for retrieving information from the // model registry // Copyright (c) 2021 Kx Systems Inc // // @overview // Retrieve items from the registry including // 1. Models: // - q (functions/projections/appropriate dictionaries) // - Python (python functions + sklearn/keras specific functionality) // 2. Configuration // 3. Model registry // // @category Model-Registry // @subcategory Functionality // // @end \d .ml // @kind function // @category main // @subcategory get // // @overview // Retrieve a q/python/sklearn/keras model from the registry // // @param folderPath {dict|string|null} Registry location, can be: // 1. A dictionary containing the vendor and location as a string, e.g. // ```enlist[`local]!enlist"myReg"``` or // ```enlist[`aws]!enlist"s3://ml-reg-test"``` etc; // 2. A string indicating the local path; // 3. A generic null to use the current .ml.registry.location pulled from CLI/JSON. 
// @param experimentName {string|null} The name of an experiment from which // to retrieve a model, if no modelName is provided the newest model // within this experiment will be used. If neither modelName or // experimentName are defined the newest model within the // "unnamedExperiments" section is chosen // @param modelName {string|null} The name of the model to be retrieved // in the case this is null, the newest model associated with the // experiment is retrieved // @param version {long[]|null} The specific version of a named model to retrieve // in the case that this is null the newest model is retrieved (major;minor) // // @return {dict} The model and information related to the // generation of the model registry.get.model:registry.util.get.object[`model;;;;;::] // @kind function // @category main // @subcategory get // // @overview // Retrieve a keyed q/python/sklearn/keras model from the registry // // @param folderPath {dict|string|null} Registry location, can be: // 1. A dictionary containing the vendor and location as a string, e.g. // ```enlist[`local]!enlist"myReg"``` or // ```enlist[`aws]!enlist"s3://ml-reg-test"``` etc; // 2. A string indicating the local path; // 3. A generic null to use the current .ml.registry.location pulled from CLI/JSON. // @param experimentName {string|null} The name of an experiment from which // to retrieve a model, if no modelName is provided the newest model // within this experiment will be used. If neither modelName or // experimentName are defined the newest model within the // "unnamedExperiments" section is chosen // @param modelName {string|null} The name of the model to be retrieved // in the case this is null, the newest model associated with the // experiment is retrieved // @param version {long[]|null} The specific version of a named model to retrieve // in the case that this is null the newest model is retrieved (major;minor) // @param key {symbol} key from the model to retrieve // // @return {dict} The model and information related to the // generation of the model registry.get.keyedmodel:registry.util.get.object[`model] // @kind function // @category main // @subcategory get // // @overview // Retrieve language/library version information associated with a model stored in the registry // // @param folderPath {dict|string|null} Registry location, can be: // 1. A dictionary containing the vendor and location as a string, e.g. // ```enlist[`local]!enlist"myReg"``` or // ```enlist[`aws]!enlist"s3://ml-reg-test"``` etc; // 2. A string indicating the local path; // 3. A generic null to use the current .ml.registry.location pulled from CLI/JSON. // @param experimentName {string|null} The name of an experiment from which // to retrieve model information, if no modelName is provided the newest model // within this experiment will be used. 
If neither modelName or // experimentName are defined the newest model within the // "unnamedExperiments" section is chosen // @param modelName {string|null} The name of the model from which to retrieve // version information in the case this is null, the newest model associated // with the experiment is retrieved // @param version {long[]|null} The specific version of a named model to retrieve // in the case that this is null the newest model is retrieved (major;minor) // // @return {dict} Information about the model stored in the registry including // q version/date and if applicable Python version and Python library versions registry.get.version:registry.util.get.object[`version;;;;;::] // @kind function // @category main // @subcategory get // // @overview // Load the metric table for a specific model // // @param folderPath {dict|string|null} Registry location, can be: // 1. A dictionary containing the vendor and location as a string, e.g. // ```enlist[`local]!enlist"myReg"``` or // ```enlist[`aws]!enlist"s3://ml-reg-test"``` etc; // 2. A string indicating the local path; // 3. A generic null to use the current .ml.registry.location pulled from CLI/JSON. // @param experimentName {string|null} The name of an experiment from which // to retrieve a model, if no modelName is provided the newest model // within this experiment will be used. If neither modelName or // experimentName are defined the newest model within the // "unnamedExperiments" section is chosen // @param modelName {string|null} The name of the model to be retrieved // in the case this is null, the newest model associated with the // experiment is retrieved // @param version {long[]|null} The specific version of a named model to retrieve // in the case that this is null the newest model is retrieved (major;minor) // @param param {null|dict|symbol|string} Search parameters for the retrieval // of metrics // in the case when this is a string, it is converted to a symbol // // @return {table} The metric table for a specific model, which may // potentially be filtered registry.get.metric:registry.util.get.object[`metric]
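An illustrative set of calls (the registry path and model name are placeholders, and the calls assume such a model already exists in a local registry):

q)model:.ml.registry.get.model["/tmp/myReg";::;"myModel";::]        / newest version of "myModel"
q)metrics:.ml.registry.get.metric["/tmp/myReg";::;"myModel";::;::]  / unfiltered metric table for the same model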
:`original`toConnect`toLog!(hostPort; connHostPort; logHostPort); }; ================================================================================ FILE: kdb-common_src_log.loggers.q SIZE: 1,156 characters ================================================================================ // Logging // Copyright (c) 2015 - 2017 Sport Trades Ltd, 2020 - 2021 Jaskirat Rajasansir / Basic logger .log.loggers.basic:{[formatter; fd; lvl; message] fd " " sv formatter[5$string lvl; message]; }; / Logger with color highlighting of the level based on the configuration in .log.levels .log.loggers.color:{[formatter; fd; lvl; message] lvl:(.log.levels[lvl]`color),(5$string lvl),.log.resetColor; fd " " sv formatter[lvl; message]; }; / Non-color logger with the additional syslog priority prefix at the start of the log line. This is useful / when capturing log output into systemd (via 'systemd-cat'). .log.loggers.syslog:{[formatter; fd; lvl; message] syslogLvl:"<",string[.log.levels[lvl]`syslog],">"; fd " " sv enlist[syslogLvl],formatter[5$string lvl; message]; }; / JSON logger / NOTE: This logger does not do the slf4j-style parameterised replacement of the message but prints as the supplied list .log.loggers.json:{[formatter; fd; lvl; message] logElems:`date`time`level`processId`user`handle`message!(.time.today[]; .time.nowAsTime[]; lvl; .log.process; `system^.z.u; .z.w; message); fd .j.j logElems; }; ================================================================================ FILE: kdb-common_src_log.q SIZE: 10,495 characters ================================================================================ // Logging Library // Copyright (c) 2015 - 2020 Sport Trades Ltd, 2020 - 2021 Jaskirat Rajasansir // Documentation: https://github.com/BuaBook/kdb-common/wiki/log.q .require.lib each `util`type`time`cargs; / Functions to determine which logger to use. The dictionary key is a symbol reference to the logger function if the / value function is true. / NOTE: .log.loggers.basic should always be last so it is the fallback if all others are not available .log.cfg.loggers:()!(); .log.cfg.loggers[`.log.loggers.color]: { (not ""~getenv`KDB_COLORS) | `logColors in key .cargs.get[] }; .log.cfg.loggers[`.log.loggers.syslog]:{ (not ""~getenv`KDB_LOG_SYSLOG) | `logSyslog in key .cargs.get[] }; .log.cfg.loggers[`.log.loggers.json]: { (not ""~getenv`KDB_LOG_JSON) | `logJson in key .cargs.get[] }; .log.cfg.loggers[`.log.loggers.basic]: { 1b }; / The formatting functions that take the logging level and the raw message and provide an output ready for printing by the logger. 
/ NOTE: Any custom formatter must also support the slf4j-style parameterised logging / NOTE 2: The 'pattern' formatter is used when a logging format pattern (in '.log.cfg.format') has been specified .log.cfg.formatters:()!(); .log.cfg.formatters[`default]:`.log.formatter.default; .log.cfg.formatters[`pattern]:`.log.formatter.pattern; / The available patterns for logging / NOTE: "l" and "m" cannot be modified - these will always be the log level and the message respectively .log.cfg.patterns:(`char$())!(); .log.cfg.patterns[" "]:(::); .log.cfg.patterns["d"]:`.time.today; .log.cfg.patterns["t"]:`.time.nowAsTime; .log.cfg.patterns["P"]:`.time.now; .log.cfg.patterns["n"]:`.time.nowAsTimespan; .log.cfg.patterns["l"]:(::); .log.cfg.patterns["p"]:`.log.process; .log.cfg.patterns["u"]:{ `system ^ .z.u }; .log.cfg.patterns["h"]:`.z.w; .log.cfg.patterns["H"]:`.z.h; .log.cfg.patterns["m"]:(::); .log.cfg.patterns["T"]:`.log.patterns.callLineTrace; / Custom logging format. Each logging pattern must be prefixed with "%" and all elements will be space separated. If this is / an empty string, the logging library will use the (faster) default logging .log.cfg.format:""; / The optional logging pattern to use ONLY when the library is initialised and process is in 'debug mode' (i.e. -e 1). / If you don't want the library to change to the pattern-based logger (on library init), ensure this is an empty string / before the library is initialised .log.cfg.enhancedDebugPattern:"%d %t %l %p %u %h %T %m"; / The maximum level to log at. The logging order is TRACE, DEBUG, INFO, WARN, ERROR, FATAL. .log.current:(`symbol$())!`symbol$(); .log.current[`level]: `INFO; .log.current[`logger]: `; .log.current[`formatter]:`; / Constant string to reset the color back to default after logging some color .log.resetColor:"\033[0m"; / Supported log levels and their related configuration. The order of the table implies the logging order (.e.g a log / level of ERROR will log ERROR and FATAL only). The information stored in this table: / - fd: The file descriptor to output that log level to / - syslog: The equivalent syslog log priority (for use with systemd logging). See 'man syslog' / - color: The colors to use for each log level. Empty string means no coloring / of ERROR will log ERROR and FATAL) .log.levels:`level xkey flip `level`fd`syslog`color!"SII*"$\:(); .log.levels[`TRACE]:(-1i; 7i; ""); .log.levels[`DEBUG]:(-1i; 7i; ""); .log.levels[`INFO]: (-1i; 6i; ""); .log.levels[`WARN]: (-1i; 4i; "\033[1;33m"); .log.levels[`ERROR]:(-2i; 3i; "\033[1;31m"); .log.levels[`FATAL]:(-2i; 2i; "\033[4;31m"); / Process identification / NOTE: If this is set prior to the library being initialised, it will not be overwritten during '.log.init' .log.process:""; / The parsed custom logging format (from '.log.cfg.format'). This is only populated if '.log.cfg.format' is non-empty / @see .log.i.parsePattern .log.pattern:()!(); / When call line tracing is enabled, this list of strings can be used to remove common prefixes from the file paths. 
By default, if this is / empty when the library is initialised, it will be defaulted to '.require.location.root' .log.sourcePathExcludePrefixes:(); .log.init:{ if[.util.inDebugMode[]; .log.current[`level]:`DEBUG; if[(0 < count .log.cfg.enhancedDebugPattern) & 0 = count .log.process; .log.cfg.format:.log.cfg.enhancedDebugPattern; ]; ]; if[0 = count .log.process; .log.process:"pid-",string .z.i; ]; if[0 = count .log.sourcePathExcludePrefixes; .log.sourcePathExcludePrefixes,:enlist 1_ string .require.location.root; ]; / setLogger calls setLevel .log.setLogger[]; }; / Sets the current logger based on the result of the functions defined in .log.cfg.loggers. The first function in the / dictionary will be used as the current logger / @see .log.cfg.loggers / @see .log.current.logger / @see .log.current.formatter / @see .log.cfg.formatters / @see .log.setLevel .log.setLogger:{ logger:first where .log.cfg.loggers@\:(::); .log.current[`logger]:logger; .log.current[`formatter]:.log.cfg.formatters`default; if[0 < count .log.cfg.format; .log.i.parsePattern[]; .log.current[`formatter]:.log.cfg.formatters`pattern; ]; .log.setLevel .log.current`level; }; / Configures the logging functions based on the specified level. Any levels below the new level will / be set to the identity function / @param newLevel (Symbol) The new level to log from / @see .log.levels .log.setLevel:{[newLevel] if[not newLevel in key .log.levels; '"IllegalArgumentException"; ]; logLevel:key[.log.levels]?newLevel; enabled:0!logLevel _ .log.levels; disabled:0!logLevel # .log.levels; logger:get[.log.current`logger][get .log.current`formatter;;]; @[`.log; lower enabled`level ; :; logger ./: flip (0!enabled)`fd`level]; @[`.log; lower disabled`level; :; count[disabled]#(::)]; .log.current[`level]:newLevel; .log.i.setInterfaceImplementations[]; -1 "\nLogging enabled ","[ ",(" ] [ " sv ": " sv/: flip (@[;0;upper]@/:; ::)@' string (key; value) @\: .log.current)," ]\n"; }; / Provides a way to know which log levels are currently being logged. For example, if the log level is currently INFO / .log.isLoggingAt will return true for all of INFO, WARN, ERROR, FATAL and false for DEBUG, TRACE. / @param level (Symbol) The logging level to check is currently being logged / @returns (Boolean) True if the specified level is currently being logged by this library. False otherwise .log.isLoggingAt:{[level] if[not level in key .log.levels; '"IllegalArgumentException"; ]; :(<=). 
key[.log.levels]?/: .log.current[`level],level; }; / Default string log formatter with slf4j-style parameterised formatting / @returns (StringList) List of log elements in string format / @see http://www.slf4j.org/faq.html#logging_performance .log.formatter.default:{[lvl; message] if[0h = type message; message:"" sv ("{}" vs first message),'(.type.ensureString each 1_ message),enlist ""; ]; elems:(.time.today[]; .time.nowAsTime[]; lvl; .log.process; `system^.z.u; .z.w; message); elems:@[elems; where not .type.isString each elems; string]; :elems; }; / Pattern-based string log formatter with slf4j-style parameterised formatting / @returns (StringList) List of log elements in string format .log.formatter.pattern:{[lvl; message] if[0h = type message; message:"" sv ("{}" vs first message),'(.type.ensureString each 1_ message),enlist ""; ]; patterns:.log.pattern,"lm"!(lvl; message); patterns:@[patterns; where .type.isSymbol each patterns; get]; patterns:@[patterns; where .type.isFunction each patterns; @[;::]]; patterns:@[patterns; where not .type.isString each patterns; string]; patStrs:value patterns; patStrs@:where not ""~/:patStrs; :patStrs; }; / Provides function line tracing to where the log line was executed (outside of the 'log' library). / It logs in the following formating: 'source-file:function(function-line-number):log-line-number'. / NOTE: Not all elements of the call line trace will be available in all situations (e.g. locked code) / If the function name is suffixed with an '@' it means an anonymous function within the specified function name. The log / line number is then relative to that .log.patterns.callLineTrace:{ rawBacktrace:.Q.btx .Q.Ll `; / Append the intra-function character position to the other backtrace info backtrace:(,).' rawBacktrace@\:1 2; backtrace@:where .type.isString each first each backtrace; backtrace@:where not (first each backtrace) like ".log.*"; if[0 = count backtrace; :""; ]; backtrace:first backtrace; file:.util.findAndReplace[backtrace 1; .log.sourcePathExcludePrefixes; count[.log.sourcePathExcludePrefixes]#enlist "."]; func:backtrace 0; funcLine:backtrace 2; lineNum:first where last[backtrace] < sums count each "\n" vs backtrace 3; location:enlist "["; location,:$[0 < count file; file,":"; ""]; location,:$[0 < count func; func; "anon"]; location,:$[not -1 = funcLine; "(",string[funcLine],")"; ""]; location,:$[not null lineNum; ":",string lineNum; ""]; location,:"]"; :location; }; / Sets the interface functions for other kdb-common component and libraries if the interface 'if' library is defined / in the current process / @see .require.loadedLibs / @see .if.setImplementationsFor / @see .if.bindInterfacesFor .log.i.setInterfaceImplementations:{ if[not `if in key .require.loadedLibs; :(::); ]; allLevels:lower exec level from .log.levels; ifFuncs:` sv/: `.log`if,/:allLevels; implFuncs:` sv/:`.log,/:allLevels; ifTable:flip `ifFunc`implFunc!(ifFuncs; implFuncs); .if.setImplementationsFor[`log; ifTable]; .if.bindInterfacesFor[`log; 1b]; }; / If a log pattern is supplied (via '.log.cfg.format'), attempt to parse it and ensure that all the patterns are valid / @throws InvalidLogPatternException If any of the patterns specified are not configured in '.log.cfg.patterns' .log.i.parsePattern:{ if[0 = count .log.cfg.format; :(::); ];
The .z namespace¶ Environment and callbacks Environment Callbacks .z.a IP address .z.bm msg validator .z.b view dependencies .z.exit action on exit .z.c cores .z.pc close .z.f file .z.pd peach handles .z.h host .z.pg get .z.i PID .z.pi input .z.K version .z.po open .z.k release date .z.pq qcon .z.l license .z.r blocked .z.o OS version .z.ps set .z.q quiet mode .z.pw validate user .z.s self .z.ts timer .z.u user ID .z.vs value set .z.X/x raw/parsed command line Callbacks (HTTP) Environment (Compression/Encryption) .z.ac HTTP auth .z.zd compression/encryption defaults .z.ph HTTP get .z.pm HTTP methods Environment (Connections) .z.pp HTTP post .z.e TLS connection status .z.H active sockets Callbacks (WebSockets) .z.W/w handles/handle .z.wc WebSocket close .z.wo WebSocket open Environment (Debug) .z.ws WebSockets .z.ex failed primitive .z.ey arg to failed primitive Environment (Time/Date) .z.D/d date shortcuts .z.N/n local/UTC timespan .z.P/p local/UTC timestamp .z.T/t time shortcuts .z.Z/z local/UTC datetime The .z namespace contains environment variables and functions, and hooks for callbacks. The .z namespace is reserved for use by KX, as are all single-letter namespaces. Consider all undocumented functions in the namespace as exposed infrastructure – and do not use them. By default, callbacks are not defined in the session After they have been assigned, you can restore the default using \x to delete the definition that was made. Callbacks, Using .z Q for Mortals: §11.6 Interprocess Communication .z.a (IP address)¶ The IP address as a 32-bit integer q).z.a -1408172030i Note its relationship to .z.h for the hostname, converted to an int using .Q.addr . q).Q.addr .z.h -1408172030i It can be split into components as follows: q)"i"$0x0 vs .z.a 172 17 0 2i When invoked inside a .z.p* callback via a TCP/IP connection, it is the IP address of the client session, not the current session. For example, connecting from a remote machine: q)h:hopen myhost:1234 q)h"\"i\"$0x0 vs .z.a" 192 168 65 1i or from same machine: q)h:hopen 1234 q)h"\"i\"$0x0 vs .z.a" 127 0 0 1i When invoked via a Unix Domain Socket, it is 0. q)h:hopen `:unix://1234 q)h".z.a" 0i .z.h (host), .Q.host (IP to hostname) .z.ac (HTTP auth)¶ .z.ac:(requestText;requestHeaderAsDictionary) Lets you define custom code to authorize/authenticate an HTTP request. e.g. inspect HTTP headers representing oauth tokens, cookies, etc. Your custom code can then return different values based on what is discovered. .z.ac is a unary function, whose single parameter is a two-element list providing the request text and header. If .z.ac is not defined, it uses basic access authentication as per (4;"") below The function should return a two-element list. The list of possible return values is: - User not authorized/authenticated User not authorized. Client is sent default 401 HTTP unauthorized response. An HTTP callback to handle the request will not be called.(0;"") - User authorized/authenticated The provided username is used to set(1;"username") .z.u . The relevant HTTP callback to handle this request will be allowed. - User not authorized/authenticated (custom response) The custom response to be sent should be provided in the "response text" section. The response text should be comprised of a valid HTTP response message, for example a 401 response with a customised message. 
An HTTP callback to handle the original request is not called.(2;"response text") - Fallback to basic authentication Fallback to basic access authentication, where the username/password are base64 decoded and processed via the(4;"") -u /-U file and.z.pw (if defined). If the user is not permitted, the client is sent a default 401 HTTP unauthorized response. Since V4.0 2021.07.12. .z.b (view dependencies)¶ The dependency dictionary. q)a::x+y q)b::x+1 q).z.b x| `a`b y| ,`a .z.bm (msg validator)¶ .z.bm:x Where x is a unary function. kdb+ before V2.7 was sensitive to being fed malformed data structures, sometimes resulting in a crash, but now validates incoming IPC messages to check that data structures are well formed, reporting 'badmsg and disconnecting senders of malformed data structures. The raw message is captured for analysis via the callback .z.bm . The sequence upon receiving such a message is - calls .z.bm with a 2-item list:(handle;msgBytes) - close the handle and call .z.pc - signals 'badmsg E.g. with the callback defined q).z.bm:{`msg set (.z.p;x);} after a bad msg has been received, the global var msg will contain the timestamp, the handle and the full message. Note that this check validates only the data structures, it cannot validate the data itself. .z.c (cores)¶ The number of physical cores. .z.e (TLS connection status)¶ TLS details used with a connection handle. Returns an empty dictionary if the connection is not TLS enabled. E.g. where h is a connection handle. q)h".z.e" CIPHER | `AES128-GCM-SHA256 PROTOCOL| `TLSv1.2 CERT | `SUBJECT`ISSUER`SERIALNUMBER`NOTVALIDBEFORE`NOTVALIDAFTER`VERIFIED`VERIFYERROR!("/C=US/ST=New York/L=Brooklyn/O=Example Brooklyn Company/CN=myname.com";"/C=US/ST=New York/L=Brooklyn/O=Example Brooklyn Company/CN=examplebrooklyn.com";,"1";"Jul 6 10:08:57 2021 GMT";"May 15 10:08:57 2031 GMT";1b;0) Since V3.4 2016.05.16. CERT details of VERIFIED ,VERIFYERROR available since 4.1t 2024.02.07. .z.ex (failed primitive)¶ In a debugger session, .z.ex is set to the failed primitive. Since V3.5 2017.03.15. .z.ey (argument to failed primitive) .z.exit (action on exit)¶ .z.exit:f Where f is a unary function, f is called with the exit parameter as the argument just before exiting the kdb+ session. The exit parameter is the argument to the exit function, or 0 if manual exit with \\ quit The handler cannot cancel the exit. .z.exit can be unset with \x .z.exit , which restores the default behavior. The default behavior is equivalent to setting .z.exit to {} , i.e. do nothing. q).z.exit '.z.exit q).z.exit:{0N!x} q)\\ 0 os>.. q).z.exit:{0N!x} q)exit 42 42 os>.. q).z.exit:{0N!x} q)exit 0 0 os>.. If the exit behavior has an error (disk full for example if exit tries to save the current state), the session is suspended and exits after completion or manual exit from the suspension. q).z.exit:{`thiswontwork+x} q)\\ {`thiswontwork+x} 'type + `thiswontwork 0 q))x 0 q))'`up 'up os>.. .z.pc (port close) exit \\ quit .z.ey (argument to failed primitive)¶ In a debugger session, .z.ey is set to the argument to failed primitive. Since V3.5 2017.03.15. .z.ex (failed primitive) .z.f (file)¶ Name of the q script as a symbol. $ q test.q q).z.f `test.q .z.x (argv) .z.H (active sockets)¶ Active sockets as a list (a low-cost method). Since v4.0 2020.06.01. List has sorted attribute applied since v4.1 2024.07.08. q).z.H~key .z.W 1b .z.W (handles), .z.w (handle), -38! 
(socket table) .z.h (host)¶ The host name as a symbol q).z.h `demo.kx.com On Linux this should return the same as the shell command hostname . If you require a fully qualified domain name, and the hostname command returns a hostname only (with no domain name), this should be resolved by your system administrators. Often this can be traced to the ordering of entries in /etc/hosts , e.g. Non-working /etc/host looks like : 127.0.0.1 localhost.localdomain localhost 192.168.1.1 myhost.mydomain.com myhost Working one has this ordering : 127.0.0.1 localhost.localdomain localhost 192.168.1.1 myhost myhost.mydomain.com One solution seems to be to flip around the entries, i.e. so the entries should be ip hostname fqdn A workaround from within kdb+ is q).Q.host .z.a .z.a (IP address), .Q.addr (IP/host as int) .z.i (PID)¶ The process ID as an integer. q).z.i 23219 .z.K (version)¶ The major version number, as a float, of the version of kdb+ being used. (A test version of 2.4t is reported as 2.4) q).z.K 2.4 q).z.k 2006.10.30 .z.k (release date) .z.k (release date)¶ Date on which the version of kdb+ being used was released. q).z.k 2006.10.30 q) This value is checked against .Q.k as part of the startup to make sure that the executable and the version of q.k being used are compatible. .z.K (version) .z.l (license)¶ License information as a list of strings; () for non-commercial 32-bit versions. q)`maxCoresAllowed`expiryDate`updateDate`````bannerText`!.z.l maxCoresAllowed| "" expiryDate | "2021.05.27" updateDate | "2021.05.27" | ,"1" | ,"1" | ,"1" | ,"0" bannerText | "[email protected] #59875" | ,"0" bannerText is the custom text displayed at startup, and always contains the license number as the last token. .z.N (local timespan)¶ System local time as timespan in nanoseconds. q).z.N 0D23:30:10.827156000 .z.n (UTC timespan), .z.P (local timestamp), .z.p (UTC timestamp), .z.Z (local datetime), .z.z (UTC datetime) .z.n (UTC timespan)¶ System UTC time as timespan in nanoseconds. q).z.n 0D23:30:10.827156000 Changes since 4.1t 2021.03.30,4.0 2022.07.01 Linux clock source returns a nanosecond precision timespan .z.n (local timespan), .z.P (local timestamp), .z.p (UTC timestamp), .z.Z (local datetime), .z.z (UTC datetime) .z.o (OS version)¶ kdb+ operating system version as a symbol. q).z.o `w32 Values for V3.5+ are shown below in bold type. | os | 32-bit | 64-bit | |---|---|---| | Linux | l32 | l64 | | Linux on ARM | l64 (reports l64arm since 4.1t 2022.09.02) | | | macOS | m32 | m64 | | Solaris | s32 | s64 | | Solaris on Intel | v32 | v64 | | Windows | w32 | w64 | Note this is the version of the kdb+ executable, NOT the OS itself. You might run both 32-bit and 64-bit versions of kdb+ on the same machine to support older external interfaces. .z.P (local timestamp)¶ System localtime timestamp in nanoseconds. q).z.P 2018.04.30D10:18:31.932126000 .z.p (UTC timestamp), .z.N (local timespan), .z.n (UTC timespan), .z.Z (local datetime), .z.z (UTC datetime) .z.p (UTC timestamp)¶ UTC timestamp in nanoseconds. q).z.p 2018.04.30D09:18:38.117667000 Changes since 4.1t 2021.03.30,4.0 2022.07.01 Linux clock source returns a nanosecond precision timestamp .z.P (local timestamp), .z.N (local timespan), .z.n (UTC timespan), .z.Z (local datetime), .z.z (UTC datetime) .z.pc (close)¶ .z.pc:f Where f is a unary function, .z.pc is called after a connection has been closed. 
As the connection has been closed by the time f is called there are strictly no remote values that can be put into .z.a , .z.u or .z.w – so the local values are returned. To allow you to clean up things like tables of users keyed by handle, the handle that was being used is passed as a parameter to .z.pc KDB+ 2.3 2007.03.27 Copyright (C) 1993-2007 Kx Systems l64/ 8cpu 16026MB simon ... q).z.pc '.z.pc q).z.pc:{0N!(.z.a;.z.u;.z.w;x);x} q)\p 2021 q)(2130706433;`simon;0;4) q).z.a 2130706433 q).z.u `simon q).z.w 0 q) .z.pc is not called by hclose . .z.po (port open) .z.pd (peach handles)¶ .z.pd: x Where q has been started with secondary processes for use in parallel processing, x is - an int vector of handles to secondary processes - a function that returns a list of handles to those secondary processes For evaluating the function passed to peach or ': , kdb+ gets the handles to the secondary processes by calling .z.pd[] . The processes with these handles must not be used for other messaging. Each Parallel will close them if it receives anything other than a response message. q)/open connections to 4 processes on the localhost q).z.pd:`u#hopen each 20000+til 4 The int vector (returned by) x must have the unique attribute set. A more comprehensive setup might be q).z.pd:{n:abs system"s";$[n=count handles;handles;[hclose each handles;:handles::`u#hopen each 20000+til n]]} q).z.pc:{handles::`u#handles except x;} q)handles:`u#`int$(); Note that (since V3.1) the worker processes are not started automatically by kdb+. Disabled in V4.1t Using handles within peach is not supported e.g. q)H:hopen each 4#4000;{x""}peach H 3 4 5 6i One-shot IPC requests can be used within peach instead. .z.pg (get)¶ .z.pg:f Where f is a unary function, called with the object that is passed to the q session via a synchronous request. The return value, if any, is returned to the calling task. .z.pg can be unset with \x .z.pg , which restores the default behavior. The default behavior is equivalent to setting .z.pg to value and executes in the root context. .z.ps (set), -30!(x) (deferred response) .z.ph (HTTP get)¶ .z.ph:f Where f is a unary function, it is evaluated when a synchronous HTTP request is received by the kdb+ session. .z.ph is passed a single argument, a 2-item list (requestText;requestHeaderAsDictionary) : requestText is parsed in.z.ph – detecting special cases like requests for CSV, XLS output – and the result is returned to the calling task.requestHeaderAsDictionary contains a dictionary of HTTP header names and values as sent by the client. This can be used to return content optimized for particular browsers. The function returns a string representation of an HTTP response message e.g. HTTP/1.1 response message format. Since V3.6 and V3.5 2019.11.13, the default implementation calls .h.val instead of value , allowing users to interpose their own valuation code. It is called with requestText as the argument. .z.pp (HTTP post), .z.pm (HTTP methods), .z.ac (HTTP auth) .h namespace HTTP Q for Mortals §11.7.1 HTTP Connections .z.pi (input)¶ .z.pi:f Where f is a unary function, it is evaluated as the default handler for input. As this is called on every line of input it can be used to log all console input, or even to modify the output. For example, if you prefer the more compact V2.3 way of formatting tables, you can reset the output handler. 
q)aa:([]a:1 2 3;b:11 22 33) q)aa a b ---- 1 11 2 22 3 33 q).z.pi:{0N!value x;} q)aa +`a`b!(1 2 3;11 22 33) q) To return to the default display, just delete your custom handler q)\x .z.pi .z.pm (HTTP methods)¶ .z.pm:f Where f is a unary function, .z.pm is evaluated when the following HTTP request methods are received in the kdb+ session. - OPTIONS - PATCH (since V4.1t 2021.03.30) - PUT (since V4.1t 2021.03.30) - DELETE (since V4.1t 2021.03.30) Each method is passed to f as a 3-item list e.g. (`OPTIONS;requestText;requestHeaderDict) For the POST method use .z.pp, and for GET use .z.ph. .z.ph (HTTP get), .z.pp (HTTP post), .z.ac (HTTP auth) HTTP .z.po (open)¶ .z.po:f Where f is a unary function, .z.po is evaluated when a connection to a kdb+ session has been initialized, i.e. after it’s been validated against any -u /-U file and .z.pw checks. Its argument is the handle and is typically used to build a dictionary of handles to session information like the value of .z.a , .z.u .z.pc (port close), .z.pw (validate user) Q for Mortals §11.6 Interprocess Communication .z.pp (HTTP post)¶ .z.pp:f Where f is a unary function, .z.pp is evaluated when an HTTP POST request is received in the kdb+ session. There is no default implementation, but an example would be that it calls value on the first item of its argument and returns the result to the calling task. See .z.ph for details of the argument and return value. Allows empty requests since 4.1t 2021.03.30 (previously signalled length error). .z.ph (HTTP get), .z.pm (HTTP methods), .z.ac (HTTP auth) .h namespace HTTP Q for Mortals §11.7.1 HTTP Connections .z.pq (qcon)¶ .z.pq:f Remote connections using the ‘qcon’ text protocol are routed to .z.pq , which defaults to calling .z.pi . (Since V3.5+3.6 2019.01.31.) This allows a user to handle remote qcon connections (via .z.pq ) without defining special handling for console processing (via .z.pi ). Firewalling for locking down message handlers .z.ps (set)¶ .z.ps:f Where f is a unary function, .z.ps is evaluated with the object that is passed to this kdb+ session via an asynchronous request. The return value is discarded. .z.ps can be unset with \x .z.ps , which restores the default behavior. The default behavior is equivalent to setting .z.ps to value . Note that .z.ps is used in preference to .z.pg when messages are sent to the local process using handle 0. q).z.ps:{[x]0N!(`zps;x);value x} q).z.pg:{[x]0N!(`zpg;x);value x} q)0 "2+2" (`zps;"2+2") 4 .z.pg (get) .z.pw (validate user)¶ .z.pw:f Where f is a binary function, .z.pw is evaluated after the -u /-U checks, and before .z.po when opening a new connection to a kdb+ session. The arguments are the user ID (as a symbol) and password (as a string) to be verified; the result is a boolean atom. As .z.pw is simply a function it can be used to implement rules such as “ordinary users can sign on only between 0800 and 1800 on weekdays” or can go out to external resources like an LDAP directory. If .z.pw returns 0b the task attempting to establish the connection will get an 'access error. The default definition is {[user;pswd]1b} .z.po (port open) Changes in 2.4 .z.q (quiet mode)¶ 1b if Quiet Mode is set, else 0b . .z.r (blocked)¶ A boolean, indicating whether an update in the current context would be blocked. Returns 1b - in reval - where the -b command-line option has been set - in a thread other than the main event thread Since V4.1t 2021.04.16. .z.s (self)¶ A reference to the current function. q){.z.s}[] {.z.s} Can be used to generate recursive function calls. 
q)fact:{$[x<=0;1;x*.z.s x-1]} q)fact[5] 120 Note this is purely an example; there are other ways to achieve the same result. .z.ts (timer)¶ .z.ts:f Where f is a unary function, .z.ts is evaluated on intervals of the timer variable set by system command \t . The timestamp is returned as Greenwich Mean Time (GMT). q)/ set the timer to 1000 milliseconds q)\t 1000 q)/ argument x is the timestamp scheduled for the callback q)/ .z.ts is called once per second and returns the timestamp q).z.ts:{0N!x} q)2010.12.16D17:12:12.849442000 2010.12.16D17:12:13.849442000 2010.12.16D17:12:14.849442000 2010.12.16D17:12:15.849442000 2010.12.16D17:12:16.849442000 When kdb+ has completed executing a script passed as a command-line argument, and if there are no open sockets nor a console, kdb+ will exit. The timer alone is not enough to stop the process exiting – it must have an event source which is a file descriptor (socket, console, or some plugin registering a file descriptor and callback via the C API sd1 function). .z.u (user ID)¶ User ID, as a symbol, associated with the current handle. q).z.u `demo For - handle 0 (console) returns the userid under which the process is running. - handles > 0 returns either: - on the server end of a connection, the userid as passed to hopen by the client - on the client end of a connection, the null symbol ` - on the server end of a connection, the userid as passed to q).z.u / console is handle 0 `charlie q)0".z.u" / explicitly using handle 0 `charlie q)h:hopen`:localhost:5000:geoffrey:geffspasswd q)h".z.u" / server side .z.u is as passed by the client to hopen `geoffrey q)h({.z.w".z.u"};::) / client side returns null symbol ` .z.vs (value set)¶ .z.vs:f Where f is a binary function, .z.vs is evaluated after a value is set globally in the default namespace (e.g. a , a.b ). For function f[x;y] , x is the symbol of the modified variable and y is the index. Applies only to globals in the default namespace This is not triggered for function-local variables, nor globals that are not in the default namespace, e.g. those prefixed with a dot such as .a.b . This is the same restriction that applies to logging. The following example sets .z.vs to display the symbol, the index and the value of the variable. q).z.vs:{0N!(x;y;value x)} q)m:(1 2;3 4) (`m;();(1 2;3 4)) q)m[1;1]:0 (`m;1 1;(1 2;3 0)) .z.W (handles)¶ Dictionary of IPC handles with the number of bytes waiting in their output queues. q)h:hopen ... q)h 3 q)neg[h]({};til 1000000); neg[h]({};til 10); .z.W 3| 8000030 110 q)neg[h]({};til 1000000); neg[h]({};til 10); sum each .z.W 3| 8000140 Since 4.1 2023.09.15, this returns handles!bytes as I!J , instead of the former handles!list of individual msg sizes. Use sum each .z.W if writing code targeting 4.0 and 4.1 q)h:hopen ... q)h 6i q)neg[h]({};til 1000000); neg[h]({};til 10); .z.W 6| 8000140 q)neg[h]({};til 1000000); neg[h]({};til 10); sum each .z.W 6| 8000140 Querying known handles can also be performed using -38! , which can be more performant than using .z.W to return the entire dataset of handles. q)h:hopen 5000 q)neg[h]"11+1111111";.z.W h 24 q)neg[h]"11+1111111";(-38!h)`m 24 .z.H (active sockets), .z.w (handle), -38! (socket table) .z.w (handle)¶ Connection handle; 0 for current session console. q).z.w 0i Inside a .z.p * callback it returns the handle of the client session, not the current session. .z.H (active sockets), .z.W (handles), -38! 
(socket table)
.z.wc (websocket close)¶
.z.wc:f Where f is a unary function and h is the handle to a websocket connection to a kdb+ session, f[h] is evaluated after a websocket connection has been closed. (Since V3.3t 2014.11.26.) As the connection has been closed by the time .z.wc is called, there are strictly no remote values that can be put into .z.a , .z.u or .z.w so the local values are returned. This allows you to clean up things like tables of users keyed by handle. .z.wo (websocket open), .z.ws (websockets), .z.ac (HTTP auth)
.z.wo (websocket open)¶
.z.wo:f Where f is a unary function and h is the handle to a websocket connection to a kdb+ session, f[h] is evaluated when the connection has been initialized, i.e. after it has been validated against any -u /-U file and .z.pw checks. (Since V3.3t 2014.11.26) The handle argument is typically used by f to build a dictionary of handles to session information such as the value of .z.a , .z.u . .z.wc (websocket close), .z.ws (websockets), .z.ac (HTTP auth)
.z.ws (websockets)¶
.z.ws:f Where f is a unary function, it is evaluated on a message arriving at a websocket. If the incoming message is a text message the argument is a string; if a binary message, a byte vector. Sending a websocket message is limited to async messages only (sync is 'nyi ). A string will be sent as a text message; a byte vector as a binary message. .z.wo (websocket open), .z.wc (websocket close), .z.ac (HTTP auth) WebSockets
.z.X (raw command line)¶
.z.X Returns a list of strings of the raw, unfiltered command line with which kdb+ was invoked, including the name under which q was invoked, as well as single-letter arguments. (Since V3.3 2015.02.12) $ q somefile.q -customarg 42 -p localhost:17200 KDB+ 3.4 2016.09.22 Copyright (C) 1993-2016 KX Systems m64/ 4()core 8192MB ... q).z.X ,"q" "somefile.q" "-customarg" "42" "-p" "localhost:17200" .z.x (argv), .z.f (file), .z.q (quiet mode), .Q.opt (command parameters), .Q.def (command defaults), .Q.x (non-command parameters)
.z.x (argv)¶
Command-line arguments as a list of strings $ q test.q -P 0 -abc 123 q).z.x "-abc" "123" The script name and the single-letter options used by q itself are not included. Command-line options can be converted to a dictionary using the convenient .Q.opt function. .z.X (raw command line), .z.f (file), .z.q (quiet mode), .Q.opt (command parameters), .Q.def (command defaults), .Q.x (non-command parameters)
.z.Z (local datetime)¶
Local time as a datetime atom. q).z.Z 2006.11.13T21:16:14.601 The offset from UTC is fetched from the OS: kdb+ does not have its own time-offset database, which avoids the problems that come with maintaining one. .z.z (UTC datetime), .z.P (local timestamp), .z.p (UTC timestamp), .z.N (local timespan), .z.n (UTC timespan)
.z.z (UTC datetime)¶
UTC time as a datetime atom. q).z.z 2006.11.13T21:16:14.601 .z.z calls gettimeofday and so has microsecond precision. .z.Z (local datetime), .z.P (local timestamp), .z.p (UTC timestamp), .z.N (local timespan), .z.n (UTC timespan)
.z.zd (compression/encryption defaults)¶
.z.zd:(lbs;alg;lvl) .z.zd:dict Integers lbs , alg , and lvl are compression parameters and/or encryption parameters. They set default values for logical block size, compression/encryption algorithm and compression level that apply when saving to files with no file extension. Encryption available since 4.0 2019.12.12. q).z.zd:17 2 6 / set zip defaults q)\x .z.zd / clear zip defaults You can also assign a dictionary to .z.zd . The keys of the dictionary are either column names or the null symbol ` .
The value of each entry is an integer vector: lbs , alg , and lvl . The null symbol is used as a default for columns that do not match the other keys.
q)show dict:``a`b!(17 5 3;17 2 6;17 2 6) / default compression is `zstd` with level 3
 | 17 5 3
a| 17 2 6
b| 17 2 6
q).z.zd:dict
-21!x (compression/encryption stats), set (per file/dir compression) File compression Compression in kdb+ Data at rest encryption (DARE)
.z.T .z.t .z.D .z.d (time/date shortcuts)¶
Shorthand forms:
.z.T `time$.z.Z
.z.D `date$.z.Z
.z.t `time$.z.z
.z.d `date$.z.z
.z.Z (local datetime), .z.z (UTC datetime) Callbacks, Using .z Q for Mortals: §11.6 Interprocess Communication
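To tie several of the callbacks above together, here is a minimal sketch (the sessions table and its columns are illustrative, not part of the reference) that keeps a table of live sessions keyed by connection handle, along the lines suggested in the .z.po entry:
q)sessions:([handle:`int$()] user:`$(); ip:(); opened:`timestamp$())
q).z.pw:{[user;pswd] 1b}  / accept every login; replace with real checks
q).z.po:{[h] `sessions upsert (h;.z.u;"i"$0x0 vs .z.a;.z.p);}  / inside .z.po, .z.u and .z.a describe the client
q).z.pc:{[h] delete from `sessions where handle=h;}
As clients connect and disconnect, sessions reflects the currently open handles; .z.pw is the natural place to add real permissioning.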
Query Routing: A kdb+ framework for a scalable, load balanced system¶ Due to the large scale that kdb+ systems commonly grow to, it is important to build solid foundations so that as the number of users and size of databases increase, the system is able to easily absorb the extra capacity. Distributed kdb+ systems have been covered in a number of KX Technical White papers. The primary objective of this paper is to expand on routing, query tagging and connectivity management of a large distributing kdb+ system. The basic architecture used in this paper is based heavily on the ideas discussed in “Common design principles for kdb+ gateways”. It is recommended the reader understand these concepts before progressing with this paper. This paper focuses on the design principle of the Connection Manager Load Balancer schematic whilst providing an asynchronous-only method of communication between processes. In this paper, our Load Balancer will also act as a Connection Manager with distributing access to all services whilst minimizing the waiting time for gateways. Traditional load-balancing techniques such as a straightforward round-robin approach to resource allocation is an acceptable approach for many systems, however it can result in several queries becoming queued up behind a long-running query whilst other service resources are idle. In this paper, the method used aims to be more efficient by tagging user queries that enter a gateway, identifying free services, and allocating queries on this basis. There are many other potential solutions to building a kdb+ framework for load balancing and query routing. Rather than presenting a golden solution, this paper explores one method of implementation. All tests were run using kdb+ version 3.3 (2015.11.03) Technical overview¶ Interaction between the user and a gateway occurs through deferred synchronous communication, allowing multiple users to interact with a single gateway at the same time. With exception to the interaction between the user and the gateway, all processes in our system communicate via asynchronous messaging. Overview of system framework Gateway¶ The gateway is designed to accommodate multiple user requests, storing each query in an internal table and assigning a unique sequence number to each while keeping record of the handle to the user’s process. The gateway requests a service from the load balancer and sends the user’s query to the allocated service. The results are then returned to the user via the handle associated with the query sequence number. Load Balancer¶ The Load Balancer has the following purposes: - A service provider informing gateways of all services within the system - Service allocator assigning gateways and their unique query sequence number to the next available service - Connectivity manager appropriately amending requests based on whether services/gateways are connected/disconnected to the system. Service¶ A service is a broad term that can denote a typical HDB/RDB kdb+ database, or more complex report/function processes custom-designed to perform any range of aggregations. By starting duplicate instances of a service (e.g. HDBs pointing at the same data, RDBs subscribing to the same Tickerplant), we provide a pool of resources per service that can be deployed to different servers. This allows the potential for a hot-hot set up in which our framework will not only efficiently allocate between resources, but also automatically divert user queries in the event of a service/host failure. 
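For illustration only (a sketch, not from the paper: service.q is a stand-in name for any of the database scripts above, and we assume here that the port is taken from the command line rather than hard-coded with \p), such a pool could be built by launching several identical copies of one service script, each registering under the same service name:
$ q service.q -p 2222
$ q service.q -p 2223
$ q service.q -p 2224
To the Load Balancer these three processes are interchangeable resources for, say, EQUITY_MARKET_RDB, so an incoming query is routed to whichever instance is currently free.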
User interaction and logic flow¶ Services for any of the above databases can be distributed on separate servers, the quantity of which may be dependent on available hardware or user demand and be configured per database type. For the purpose of this paper, we minimize the complexity of the gateway query routing in order to emphasize the functionality of the Load Balancer. We will require the user to send her query to the gateway handle by calling the function userQuery with a two-item list parameter: the required service and the query to be executed. The user interacts with the gateway using deferred synchronous messaging. Further information can be found at the Knowledge Base article on load balancing. gw:{h:hopen x;{(neg x)(`userQuery;y);x[]}[h]}[`:localhost:5555] // `:localhost:5555 is an example gateway address gw(`EQUITY_MARKET_RDB;"select from trade where date=max date") // EQUITY_MARKET_RDB is the name of the required service The diagram below outlines the logical steps taken when a user’s query enters the system. Results are then returned to the user through the gateway. Errors can be returned to users due to an invalid service request from the gateway or an error from the service on evaluating a user query. Flow diagram of system logic Instead of taking a standard round-robin approach to load balancing as explained in “Common design principles for kdb+ gateways”, our Load Balancer will keep track of what resources are free and allocate queries to services only when they are available. After executing a query, the service provides a notification to the Load Balancer that it is available. The only exception to this occurs when a service gets allocated to a query but the user has since disconnected from the gateway. Here, the gateway notifies the Load Balancer that the service is no longer required. Gateways¶ When a connection is opened to the Load Balancer, the handle is set to the variable LB , which will be referenced throughout this paper. As asynchronous messages are used throughout this framework, we also create the variable NLB , which is assigned with the negative handle to the load balancer. \p 5555 manageConn:{@[{NLB::neg LB::hopen x};`:localhost:1234;{show x}]}; registerGWFunc:{addResource LB(`registerGW;`)}; The gateway connects to the Load Balancer and retrieves the addresses of all service resources, establishing a connection to each. This is the only time the gateway uses synchronous IPC communication to ensure it has all of the details it requires before accepting user queries. After the gateway registers itself as a subscriber for any new resources that come available, all future communication is sent via asynchronous messages. resources:([address:()] source:();sh:()) addResource:{ `resources upsert `address xkey update sh:{hopen first x}each address from x } The gateway process creates and maintains an empty query table. The complexity of this table is at the developer’s discretion. 
In this example we’ll record: - Unique sequence number per query ( sq ) - Handle from user process ( uh ) - Timestamps for when the query was received, when the query got sent to an available resource, and when the query results are sent back to the user ( rec ,snt ,ret respectively) - The user ID ( user ) - The service handle ( sh ) - The service requested by user ( serv ) - The user’s query queryTable:([sq:`int$()]; uh:`int$(); rec:`timestamp$(); snt:`timestamp$(); ret:`timestamp$(); user:`$(); sh:`int$(); serv:`$(); query:() ) This table could be extended to include more information by making small changes to the code in this paper. These fields could include the status of a query, error messages received from service or the total time a query took from start to end. As mentioned previously, users make requests by calling the userQuery function on the gateway. This function takes a two-item list argument: (Service;Query) . The gateway will validate the existence of a service matching the name passed to userQuery and send an error if no such resource exists. We are setting outside the scope of this paper any further request validation, including access permissioning. For further details on access control, please refer to “Permissions with kdb+”. When a user sends her query via the userQuery function, we assign the query a unique sequence number and publish an asynchronous request to the Load Balancer to be assigned an available resource. userQuery:{ $[(serv:x 0) in exec distinct source from resources; // valid service? [queryTable,:(SEQ+:1;.z.w;.z.p;0Np;0Np;.z.u;0N;serv;x 1); NLB(`requestService;SEQ;serv)]; (neg .z.w)(`$"Service Unavailable")] } The addResource function defined earlier is used to add new service instances to the plant, while the serviceAlloc function is used to pass back an allocated resource for a given query sequence number. The query is retrieved by its sequence number from queryTable and sent to the allocated service resource. If the user has since disconnected from the gateway before a resource could be provided, the gateway informs the Load Balancer to make this resource free again by executing the returnService function in the Load Balancer. After each event, the timestamp fields are updated within the queryTable . serviceAlloc:{[sq;addr] $[null queryTable[sq;`uh]; // Check if user is still waiting on results NLB(`returnService;sq); // Service no longer required [(neg sh:resources[addr;`sh]) (`queryService;(sq;queryTable[sq;`query])); // Send query to allocated resource, update queryTable queryTable[sq;`snt`sh]:(.z.p;sh)]] } When a service returns results to the gateway, the results arrive tagged with the same sequence number sent in the original query. This incoming message packet executes the returnRes function, which uses the sequence number to identify the user handle and return the results. If the user has disconnected before the results can be returned then the user handle field uh will be set to null (through the .z.pc trigger) causing nothing further to be done. returnRes:{[res] uh:first exec uh from queryTable where sq=(res 0); // (res 0) is the sequence number if[not null uh;(neg uh)(res 1)]; // (res 1) is the result queryTable[(res 0);`ret]:.z.p } In the situation where a process disconnects from the gateway, .z.pc establishes what actions to take. As mentioned, a disconnected user will cause queryTable to be updated with a null user handle. If the user currently has no outstanding queries, the gateway has nothing to do. 
If a service disconnects from the gateway whilst processing an outstanding user request, then all users that have outstanding requests to this database are informed and the database is purged from the available resources table. If our Load Balancer connection has dropped, all users with queued queries will be informed. All connections are disconnected and purged from the resources table. This ensures that all new queries will be returned directly to users as the Load Balancer is unavailable to respond to their request. A timer is set to attempt to reconnect to the Load Balancer. On reconnection, the gateway will re-register itself, pull all available resources and establish new connections. The .z.ts trigger is executed once, on script startup, to initialize and register the process. .z.pc:{[handle] // if handle is for a user process, set the query handle (uh) as null update uh:0N from `queryTable where uh=handle; // if handle is for a resource process, remove from resources delete from `resources where sh=handle; // if any user query is currently being processed on the service which // disconnected, send message to user if[count sq:exec distinct sq from queryTable where sh=handle,null ret; returnRes'[sq cross `$"Service Disconnect"]]; if[handle~LB; // if handle is Load Balancer // Send message to each connected user, which has not received results (neg exec uh from queryTable where not null uh,null snt)@\: `$"Service Unavailable"; // Close handle to all resources and clear resources table hclose each (0!resources)`sh; delete from `resources; // update queryTable to close outstanding user queries update snt:.z.p,ret:.z.p from `queryTable where not null uh,null snt; // reset LB handle and set timer of 10 seconds // to try and reconnect to Load Balancer process LB::0; NLB::0; value"\\t 10000"] } .z.ts:{ manageConn[]; if[0<LB;@[registerGWFunc;`;{show x}];value"\\t 0"] } .z.ts[] Load Balancer¶ Within our Load Balancer there are two tables and a list: \p 1234 services:([handle:`int$()] address:`$(); source:`$(); gwHandle:`int$(); sq:`int$(); udt:`timestamp$() ) serviceQueue:([gwHandle:`int$();sq:`int$()] source:`$(); time:`timestamp$() ) gateways:() The service table maintains all available instances/resources of services registered and the gateways currently using each service resource. The serviceQueue maintains a list of requests waiting on resources. A list is also maintained, called gateways , which contains all gateway handles. Gateways connecting to the Load Balancer add their handle to the gateways list. New service resources add their connection details to the services table. When a service resource registers itself using the registerResource function, the Load Balancer informs all registered gateways of the newly available resource. The next outstanding query within the serviceQueue table is allocated immediately to this new resource. registerGW:{gateways,:.z.w ; select source, address from services} registerResource:{[name;addr] `services upsert (.z.w;addr;name;0N;0N;.z.p); (neg gateways)@\:(`addResource;enlist`source`address!(name;addr)); // Sends resource information to all registered gateway handles serviceAvailable[.z.w;name] } Incoming requests for service allocation arrive with a corresponding sequence number. The combination of gateway handle and sequence number will always be unique. The requestService function either provides a service to the gateway or adds the request to the serviceQueue .
When a resource is allocated to a user query, the resource address is returned to the gateway along with the query sequence number that made the initial request. sendService:{[gw;h]neg[gw]raze(`serviceAlloc;services[h;`sq`address])} // Returns query sequence number and resource address to gateway handle requestService:{[seq;serv] res:exec first handle from services where source=serv,null gwHandle; // Check if any idle service resources are available $[null res; addRequestToQueue[seq;serv;.z.w]; [services[res;`gwHandle`sq`udt]:(.z.w;seq;.z.p); sendService[.z.w;res]]] } If all matching resources are busy, then the gateway handle + sequence number combination is appended to the serviceQueue table along with the service required. addRequestToQueue:{[seq;serv;gw]`serviceQueue upsert (gw;seq;serv;.z.p)} After a service resource has finished processing a request, it sends an asynchronous message to the Load Balancer, executing the returnService function. As mentioned previously, if the user disconnects from the gateway prior to being allocated a service resource, the gateway also calls this function. The incoming handle differentiates between these two situations. returnService:{ serviceAvailable . $[.z.w in (0!services)`handle; (.z.w;x); value first select handle,source from services where gwHandle=.z.w,sq=x ] } On execution of the serviceAvailable function, the Load Balancer will either mark this resource as free, or allocate the resource to the next gateway + sequence number combination that has requested this service, updating the services and serviceQueue tables accordingly. serviceAvailable:{[zw;serv] nxt:first n:select gwHandle,sq from serviceQueue where source=serv; serviceQueue::(1#n)_ serviceQueue; // Take first request for service and remove from queue services[zw;`gwHandle`sq`udt]:(nxt`gwHandle;nxt`sq;.z.p); if[count n; sendService[nxt`gwHandle;zw]] } Any resource that disconnects from the Load Balancer is removed from the services table. If a gateway has disconnected, it is removed from the resource subscriber list gateways and all queued queries for any resources must also be removed, and the resource freed up for other gateways. Unlike other components in this framework, the Load Balancer does not attempt to reconnect to processes, as they may have permanently been removed from the service pool of resources. In a dynamically adjustable system, service resources could be added and removed on demand based on the size of the serviceQueue table. .z.pc:{[h] services _:h; gateways::gateways except h; delete from `serviceQueue where gwHandle=h; update gwHandle:0N from `services where gwHandle=h } If a gateway dies, data services will continue to run queries that have already been routed to them, which will not subsequently be returned to the client. It is also possible that the next query assigned to this resource may experience a delay as the previous query is still being evaluated. As mentioned later, all resources should begin with a timeout function to limit interruption of service. Example service¶ The example below takes a simple in-memory database containing trade and quote data that users can query. An example \T timeout of ten seconds is assigned, to prevent queries running for too long. \T 10 \p 2222 LB:0 quote:([] date:10#.z.D-1; sym:10#`FDP; time:09:30t+00:30t*til 10; bid:100.+0.01*til 10; ask:101.+0.01*til 10 ) trade:([] date:10#.z.D-1; sym:10#`FDP; time:09:30t+00:30t*til 10; price:100.+0.01*til 10; size:10#100 ) Each instance of a service uses the same service name. 
Within this example, the service name is hard-coded, but this would ideally be set via a command line parameter. In our example below, our service name is set to `EQUITY_MARKET_RDB . In designing a user-friendly system, service names should be carefully set to clearly describe a service’s purpose. Similar processes (with either a different port number or running on a different host) can be started up with this service name, increasing the pool of resources available to users. The serviceDetails function is executed on connection to the Load Balancer to register each service address. manageConn:{@[{NLB::neg LB::hopen x}; `:localhost:1234; {show "Can't connect to Load Balancer-> ",x}] } serviceName:`EQUITY_MARKET_RDB serviceDetails:(`registerResource; serviceName; `$":" sv string (();.z.h;system"p") ) When a gateway sends the service a request via the queryService function, a unique sequence number assigned by a given gateway arrives as the first component of the incoming asynchronous message. The second component, the query itself, is then evaluated. The results of this query is stamped with the same original sequence number and returned to the gateway handle. As mentioned previously, query interpretation/validation on the gateway side is outside of the scope of this paper. Any errors that occur due to malformed queries will be returned via protected evaluation from database back to the user. In the situation where the process query times out, 'stop will be returned to the user via the projection errProj . On completion of a request, an asynchronous message is sent to the Load Balancer informing it that the service is now available for the next request. execRequest:{[nh;rq]nh(`returnRes;(rq 0;@[value;rq 1;{x}]));nh[]} queryService:{ errProj:{[nh;sq;er]nh(sq;`$er);nh[]}; @[execRequest[neg .z.w];x;errProj[neg .z.w;x 0]]; NLB(`returnService;serviceName) } Note that in the execRequest function, nh is the asynchronous handle to the gateway. Calling nh[] after sending the result causes the outgoing message queue for this handle to be flushed immediately. Basics: Interprocess communications Like our gateway, the .z.pc handle is set to reconnect to the Load Balancer on disconnect. The .z.ts function retries to connect to the Load Balancer, and once successful the service registers its details. The .z.ts function is executed once on start-up – like the gateway – to initialize the first connection. .z.ts:{manageConn[];if[0<LB;@[NLB;serviceDetails;{show x}];value"\\t 0"]} .z.pc:{[handle]if[handle~LB;LB::0;value"\\t 10000"]} .z.ts[] Example client¶ An example query from a user may look like the following: q)gw:{h:hopen x;{(neg x)(`userQuery;y);x[]}[h]}[`:localhost:5555] q)gw(`EQUITY_MARKET_RDB;"select from quote") date sym time bid ask ----------------------------------------- 2016.01.31 FDP 09:30:00.000 100 101 2016.01.31 FDP 10:00:00.000 100.01 101.01 2016.01.31 FDP 10:30:00.000 100.02 101.02 2016.01.31 FDP 11:00:00.000 100.03 101.03 .. q)gw(`EQUITY_MARKET_RDB;"select from trade") date sym time price size --------------------------------------- 2016.01.31 FDP 09:30:00.000 100 100 2016.01.31 FDP 10:00:00.000 100.01 100 2016.01.31 FDP 10:30:00.000 100.02 100 2016.01.31 FDP 11:00:00.000 100.03 100 .. An example query from a user requesting an invalid service name will show the following: q)gw(`MADE_UP_SERVICE;"select from quote") `Service Unavailable All queries for valid data services can then be viewed by looking at queryTable within the gateway: sq| uh rec snt .. 
--| ---------------------------------------------------------------- .. 1 | 244 2016.02.16D11:39:20.634490000 2016.02.16D11:39:20.634490000 .. 2 | 244 2016.02.16D11:39:22.994304000 2016.02.16D11:39:22.994304000 .. ret user sh serv .. ---------------------------------------------------------- .. 2016.02.16D11:39:20.634490000 Kevin 464 EQUITY_MARKET_RDB .. 2016.02.16D11:39:22.994304000 Kevin 464 EQUITY_MARKET_RDB .. query ------------------- "select from quote" "select from trade" Conclusion¶ This paper has presented an approach to building a kdb+ framework for query routing and load balancing. Within this example we’ve achieved the following: - A minimal IPC hop architecture for users to retrieve results from a network distributed set of databases - Service provision with an aim to reduce waiting time of gateways and users. - Plant connection stability including smooth additions of new resources to help deal with query queue and methods for recovering due to a process drop within the plant. - Error tracking through protected evaluation. - Enforced asynchronous communication between processes to prevent blocking. As an example framework focused on network routing, this paper covers much of the core functionality, but the scope of this paper does not encompass some desirable production features a system architect should consider, such as permissions, query validation and capacity management. Where topics haven’t been covered previously, KX will continue to drill down on important components that provide the building blocks for a stable, scalable, protected and efficient kdb+ system. All tests were run using kdb+ version 3.3 (2015.11.03) Author¶ Kevin Holsgrove is a kdb+ consultant based in New York. He has developed data and analytic systems for some of the world’s largest financial institutions in a range of asset classes.
//-function to move files in a table, first col is files, second col is destination moveall:{[TAB] /-don't attempt to move any files that are not there, e.g. if the file was already moved during process tomove:delete from TAB where 0=count each key each hsym `$TAB[`filename]; /-only attempt to move distinct values in the table tomove:distinct tomove; /-if the move-to directory is null, do not move the file tomove:delete from tomove where 0=count each moveto; /-error check that a file does not have two different move-to paths errors:exec filename from (select n:count distinct moveto by filename from tomove) where n>1; if[0<count errors;{.lg.e[`alerter;"file ",x," has two different move-to directories in the csv: it will not be moved"]} each errors]; tomove:delete from tomove where filename in errors; movefile each tomove;} movefile:{[DICT] .lg.o[`alerter;"moving ",DICT[`filename]," to ",DICT[`moveto]]; @[system; "r ",DICT[`filename]," ",DICT[`moveto],"/",getfile[DICT[`filename]]; {.lg.e[`alerter;"could not move file ",x, ": ",y]}[DICT[`filename]]]} loadcsv:{csvloader inputcsv} FArun:{.lg.o[`alerter;"running filealerter process"]; $[0=count filealertercsv; .lg.o[`alerter;"csv file is empty"]; [lastproc:raze processfiles each filealertercsv; newproc:select filename,md5hash,filesize,moveto from lastproc; $[moveonfail; successful:newproc; successful:select filename,md5hash,filesize,moveto from lastproc where (all;funcpassed=1b) fby filename]; complete newproc; moveall[successful]]]; } splaytables:{[BIN] FILE_PATH: hsym `$BIN; //create file path symbol // table doesn't exist // create a splayed table on disk if[0 = count key FILE_PATH;.lg.o[`alerter;"no table found, creating new table"]; .Q.dd[FILE_PATH;`] set ([] filename:();md5hash:(); filesize:`long$())]; //table does exist //if it is flat (1)- cast md5hash symbols to string (where applicable) and splay it $[-11h ~ type key FILE_PATH; [FILE_PATH_BK: `$(string FILE_PATH),"_bk"; //create backup file path symbol .lg.o[`alerter;"flat table found: ",string FILE_PATH]; .lg.o[`alerter;"creating backup of flatfile : ", string FILE_PATH_BK]; .os.cpy[FILE_PATH;FILE_PATH_BK]; //create _bk of file .lg.o[`alerter;"removing original flatfile: ", string FILE_PATH]; hdel FILE_PATH; //delete original file .lg.o[`alerter;"creating new splayed table: ", string .Q.dd[FILE_PATH;`]]; //if md5hash is symbol set it to string else just splay .[set;(.Q.dd[FILE_PATH;`];$[11h ~ type exec md5hash from get FILE_PATH_BK; update string md5hash from (get FILE_PATH_BK); get FILE_PATH_BK]);{.lg.e[`alerter;"failed to write ",x]}]; //cast md5 to string and splay table ]; //else splayed table found [.lg.o[`alerter;"splayed table found: ",string FILE_PATH];] ]; } loadcsv[]; loadprocessed[alreadyprocessed]; .timer.rep[.proc.cp[];0Wp;polltime;(`FArun`);0h;"run filealerter";1b] .servers.CONNECTIONS:distinct .servers.CONNECTIONS,tickerplanttype .servers.startup[] ================================================================================ FILE: TorQ_code_processes_housekeeping.q SIZE: 7,243 characters ================================================================================ //Housekeeping /-variables inputcsv:@[value;`.hk.inputcsv;.proc.getconfigfile["housekeeping.csv"]] runtimes:@[value;`.hk.runtimes;12:00] runnow:@[value;`.hk.runnow;0b] .win.version:@[value;`.win.version;`w10] inputcsv:string inputcsv /- set up the usage information .hk.extrausage: "Housekeeping:\n This process is used to remove and zip files older than a certain date.
The process is designed to be extended through user defined functions added to this script. Calling hkrun[] will run the service immediately. The process can be set on a timer from the times in the config file. The process runs from a csv, the location of which can be set in the config file. Housekeeping reads in from a file that has headers; \n [function]\t\t\t\trm for remove zip for gzip [path]\t\t\t\t\tdirectory to files [match]\t\t\t\tstring to match [exclude]\t\t\t\tstring to exclude from matched selection [age]\t\t\t\t\tage of files (days) you wish to match \n There are some config options which can be set via the standard command line switches e.g. -runnow \n [-hkusage]\t\t\t\tShow usage information [-.hk.inputcsv x]\t\t\tThe directory of the housekeeping CSV folder. If null, config/housekeeping.csv is used [-.hk.runtimes]\t\t\tThe time you wish to schedule housekeeping. Defaults to 12:00 [-.hk.runnow]\t\t\t\tRun the housekeeping process and exit \n The behaviour upon encountering errors can be modified using the standard flags. With no flags set, the process will exit when it hits an error. To trap an error and carry on, use the -trap flag To stop at error and not exit, use the -stop flag " //using the -hkusage tag in command line dumps info if[`hkusage in key .proc.params; -1 .hk.extrausage; exit 0] //-defines housekeeping csvloader:{[CSV] //-rethrows error if file doesn't exist, checks to see if correct columns exist in file housekeepingcsv::@[{.lg.o[`housekeeping;"Opening ",x];("S***IBB"; enlist ",") 0:"S"$x};CSV;{.lg.e[`housekeeping;"failed to open ",x," : ", y];'y}[CSV]]; housekeepingcsv::@[cols hk;5 6;:;`agemin`checkfordirectory] xcol hk:housekeepingcsv; check:all `function`path`match`exclude`age`agemin`checkfordirectory in cols housekeepingcsv; //-if check shows incorrect columns, report error $[check~0b; [{.lg.e[`housekeeping;"The file ",x," has incorrect layout"];'`$"incorrect housekeeping csv layout"}[CSV]]; //-if correctly columned csv has nulls, report error and skip lines [if[(any nullcheck:any null (housekeepingcsv.function;housekeepingcsv.age))>0; .lg.o[`housekeeping;"Null values found in file, skipping line(s) ", ("," sv (string where nullcheck))]]; housekeepingcsv2:(housekeepingcsv[where not nullcheck]); wrapper each housekeepingcsv2]]} //-Sees if the function in the CSV file is in the function list. if so- it carries out that function on files that match the parameters in the csv [using find function] wrapper:{[DICT] if[not DICT[`function] in key `.;:.lg.e[`housekeeping;"Could not find function: ",string DICT`function]]; p:find[.rmvr.removeenvvar DICT`path;;DICT`age;DICT`agemin;DICT`checkfordirectory]; value[DICT`function] each p[DICT`match] except p DICT`exclude} //compress by calling .cmp.compress function defined using -19! 
kdbzip:{[FILE] @[{.lg.o[`housekeeping;"compressing ",x]; .cmp.compress[filehandles;2;17;4;hcount filehandles:hsym `$x]}; FILE; {.lg.e[`housekeeping;"Failed to compress ",x," : ", y]}[FILE]] } //FUNCTIONS FOR LINUX \d .unix //-locates files or directories with path, matching string and age find:{[path;match;age;agemin;checkfordirectory] $[0=checkfordirectory;[fileordir:"files";flag:"f"];[fileordir:"directories";flag:"d"]]; dayormin:$[0=agemin;"mtime";"mmin"]; findmatches:{[path;match;age;fileordir;flag;dayormin] .lg.o[`housekeeping;"Searching for ",fileordir,": ",path,match]; .proc.sys "/usr/bin/find ",path," -maxdepth 1 -type ",flag," -name \"",match,"\" -",dayormin," +",raze string age}; matches:.[findmatches;(path;match;age;fileordir;flag;dayormin);{.lg.e[`housekeeping;"Find function failed: ",x];()}]; if[0=count matches;.lg.o[`housekeeping;"No matching ",fileordir," located"]]; matches} //-removes files rm:{[FILE] @[{.lg.o[`housekeeping;"removing ",x]; .proc.sys "/bin/rm -f ",x};FILE; {.lg.e[`housekeeping;"Failed to remove ",x," : ", y]}[FILE]]} //-zips files zip:{[FILE] @[{.lg.o[`housekeeping;"zipping ",x]; .proc.sys "/bin/gzip ",x};FILE; {.lg.e[`housekeeping;"Failed to zip ",x," : ", y]}[FILE]]} //-creates a tar ball from a directory and removes original directory and files tardir:{[DIR] @[{.lg.o[`housekeeping;"creating tar ball from ",x]; .proc.sys "/bin/tar -czf ",x,".tar.gz ",x," --remove-files"};DIR; {.lg.e[`housekeeping;"Failed to create tar ball from ",x," : ", y]}[DIR]]} \d . //FUNCTIONS FOR WINDOWS \d .win //-locates files with path, matching string and age find:{[path;match;age;agemin] //returns empty list for files if match is an empty string if[""~match; .lg.o[`housekeeping;"No matching files located"]; :(); ]; //renames the path to a windows readable format PATH:ssr[path;"/";"\\"]; //error and info for find function err:{.lg.e[`housekeeping;"Find function failed: ", x]; ()}; //searches for files and refines return to usable format $[0=agemin; files:.[{[PATH;match;age].lg.o[`housekeeping;"Searching for: ", match]; .proc.sys "z 1";fulllist:winfilelist[PATH;match]; removelist:fulllist where ("D"${10#x} each fulllist)<.proc.cd[]-age; .proc.sys "z 0"; {[path;x]path,last " " vs x} [PATH;] each removelist};(PATH;match;age);err]; files:.[{[PATH;match;age].lg.o[`housekeeping;"Searching for: ", match]; .proc.sys "z 1";fulllist:winfilelist[PATH;match]; removelist:fulllist where ({ts:(" " vs 17#x);("D"$ts[0]) + value ts[1]} each fulllist)<.proc.cp[]-`minute$age; .proc.sys "z 0"; {[path;x]path,last " " vs x} [PATH;] each removelist};(PATH;match;age);err]]; $[(count files)=0; [.lg.o[`housekeeping;"No matching files located"];files]; files]} //defines full list of files based on Windows OS version winfilelist:{[path;match] $[version in `w7`w8`w10;-2_(5_.proc.sys "dir ",path,match);-5_(5_.proc.sys "dir ",path,match, " /s")] } //removes files rm: {[FILE] @[{.lg.o[`housekeeping;"removing ",x];.proc.sys"del /F /Q ", x};FILE;{.lg.e[`housekeeping;"Failed to remove ",x," : ", y]}[FILE]]} //zip not yet implemented on Windows zip:{[FILE] .lg.e[`housekeeping;"zipping not yet implemented for file ",FILE]} //tar not yet implemented on Windows tardir:{[DIR] .lg.e[`housekeeping;"tar not yet implemented for directory ",DIR]} \d . 
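To make the dispatch path concrete, here is a small, hedged sketch of what one housekeeping CSV row looks like once csvloader has parsed it (column names and types follow the "S***IBB" load above) and how wrapper acts on it. The path, match strings and age are illustrative only; in a real deployment they come from config/housekeeping.csv.

/ illustrative row only - not taken from a real config
row:`function`path`match`exclude`age`agemin`checkfordirectory!
  (`zip;"/data/logs/";"*.log";"*.gz";7i;0b;0b)

/ wrapper checks that `zip exists in the root namespace, then uses find to
/ locate *.log files under /data/logs/ older than 7 days (excluding *.gz)
/ and applies zip to each one
wrapper row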
$[.z.o like "w*";
  [find:.win.find; rm:.win.rm; zip:.win.zip; tardir:.win.tardir;];
  [find:.unix.find; rm:.unix.rm; zip:.unix.zip; tardir:.unix.tardir;]]

//-runner function
hkrun:{[] csvloader inputcsv}

$[runnow=1b;[hkrun[];exit 0];]

//-sets timers according to csv
$[(count runtimes)=0;
  .lg.e[`housekeeping;"No runtimes provided in config file"];
  [{[runtime].timer.rep[$[.proc.cp[]>.proc.cd[]+runtime;1D+`timestamp$.proc.cd[]+runtime;`timestamp$.proc.cd[]+runtime];0Wp;1D;(`hkrun`);0h;"run housekeeping";0b]} each runtimes;
   .lg.o[`housekeeping;"Housekeeping scheduled for: ", (" " sv string raze runtimes)]]]

================================================================================
FILE: TorQ_code_processes_idb.q
SIZE: 4,652 characters
================================================================================

/-default parameters
\d .idb
wdbtypes:@[value;`wdbtypes;`wdb];
wdbconnsleepintv:@[value;`wdbconnsleepintv;10];
wdbcheckcycles:@[value;`wdbcheckcycles;0W];
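Throughout these scripts, defaults are set with the @[value;`name;default] pattern seen above (inputcsv, runtimes, wdbtypes, and so on): value returns the global if it already exists, and if the lookup signals an error the non-function third argument is simply returned. A tiny illustration with throwaway globals:

q)myvar:42
q)@[value;`myvar;99]        / the global exists, so its value is returned
42
q)@[value;`notdefined;99]   / lookup fails, so the default is returned
99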
vs ¶ “Vector from scalar” - partition a symbol, string, or bytestream - encode a vector from an atom, or a matrix from a vector x vs y vs[x;y] Partition¶ String by char¶ Where x is a char atom or string, and y is a string, returns a list of strings: y cut using x as the delimiter. q)"," vs "one,two,three" "one" "two" "three" q)", " vs "spring, summer, autumn, winter" "spring" "summer" "autumn" "winter" q)"|" vs "red|green||blue" "red" "green" "" "blue" String or bytestream by linebreak¶ Where x is the empty symbol ` , and y is a string or bytestream, returns as a list of strings y partitioned on embedded line terminators into lines. (Recognizes both Unix \n and Windows \r\n terminators). q)` vs "abc\ndef\nghi" "abc" "def" "ghi" q)` vs "x"$"abc\ndef\nghi" "abc" "def" "ghi" q)` vs "abc\r\ndef\r\nghi" "abc" "def" "ghi" Elides trailing linebreaks The treatment of linebreaks varies usefully from a left argument of \n . q)"\n" vs "abc\ndef\nghi\n" "abc" "def" "ghi" "" q)` vs "abc\ndef\nghi\n" "abc" "def" "ghi" Symbol by dot¶ Where x is the null symbol ` , and y is a symbol, returns as a symbol vector y split on `.` . q)` vs `mywork.dat `mywork`dat File handle¶ Where x is the empty symbol ` , and y is a file handle, returns as a symbol vector y split into directory and file parts. q)` vs `:/home/kdb/data/mywork.dat `:/home/kdb/data`mywork.dat sv join Byte Vectors¶ Since 4.1t 2024.01.11, y can be a byte vector: y cut using x as the delimiter. q)0x02 vs 0x0102010201 ,0x01 ,0x01 ,0x01 q)0x0203 vs 0x000102030405 0x0001 0x0405 q)" "vs"x"$"a b" / type inferred from left hand side ,"a" ,"b" Encode¶ Bit representation¶ Where x is 0b and y is an integer, returns the bit representation of y . q)0b vs 23173h 0101101010000101b q)0b vs 23173 00000000000000000101101010000101b Since 4.1t 2021.09.03, y also supports guids. q)0b vs rand 0Ng 10001100011010111000101101100100011010000001010101100000100001000000101000111110000101111000010000000001001001010001101101101000b Byte representation¶ Where x is 0x0 and y is a number, returns the internal representation of y , with each byte in hex. q)0x0 vs 2413h 0x096d q)0x0 vs 2413 0x0000096d q)0x0 vs 2413e 0x4516d000 q)0x0 vs 2413f 0x40a2da0000000000 q)"."sv string"h"$0x0 vs .z.a / ip address string from .z.a "192.168.1.213" Base-x representation¶ Where x and y are integer, the result is the representation of y in base x . (Since V3.4t 2015.12.13.) q)10 vs 1995 1 9 9 5 q)2 vs 9 1 0 0 1 q)24 60 60 vs 3805 1 3 25 q)"." sv string 256 vs .z.a / ip address string from .z.a "192.168.1.213" Where y is an integer vector the result is a matrix with count[x] items whose i -th column (x vs y)[;i] is identical to x vs y[i] . More generally, y can be any list of integers, and each item of the result is identical to y in structure. q)a:10 vs 1995 1996 1997 q)a 1 1 1 9 9 9 9 9 9 5 6 7 q)a[;0] 1 9 9 5 q)10 vs(1995;1996 1997) 1 1 1 9 9 9 9 9 9 5 6 7 sv decode .Q.j10 encode binhex, .Q.j12 encode base36 .Q.x10 decode binhex, .Q.x12 decode base36 where ¶ Copies of indexes of a list or keys of a dictionary where x where[x] Where x is a: Vector of non-negative integers¶ returns a vector containing, for each item of x , that number of copies of its index. q)where 2 3 0 1 0 0 1 1 1 3 q)raze x #' til count x:2 3 0 1 0 0 1 1 1 3 Where x is boolean, the result is the indices of the 1s. 
Thus where is often used after a logical test:

q)where 0 1 1 0 1
1 2 4
q)x:1 5 6 8 11 17 20 21
q)where 0 = x mod 2      / indices of even numbers
2 3 6
q)x where 0 = x mod 2    / select even numbers from list
6 8 20

Dictionary whose values are non-negative integers¶
returns a list of keys repeated as many times as the corresponding value.

q)d:`amr`ibm`msft!2 3 1
q)where d
`amr`amr`ibm`ibm`ibm`msft
q)where 2 3 0 1            / usual operation on integer list
0 0 1 1 1 3
q)where 0 1 2 3 ! 2 3 0 1  / same on dictionary with indices as keys
0 0 1 1 1 3

Insight
If a list is viewed as a mapping from indexes to entries, then the definition for the integer list above is merely a special case.

while ¶
Evaluate expression/s while some condition remains true
while[test;e1;e2;e3;…;en]
Control construct. Where
- test is an expression that evaluates to an atom of integral type
- e1, e2, … en are expressions
unless test evaluates to zero, the expressions e1 to en are evaluated, in order. The cycle – evaluate test, then the expressions – continues until test evaluates to zero.

q)r:1 1
q)x:10
q)while[x-:1;r,:sum -2#r]
q)r
1 1 2 3 5 8 13 21 34 55 89

The result of while is always the generic null.
while is not a function but a control construct. It cannot be iterated or projected.

Name scope¶
The brackets of the expression list do not create lexical scope. Name scope within the brackets is the same as outside them.
Accumulators – While, do, if
Controlling evaluation
Q for Mortals §10.1.6 while

within ¶
Check bounds
x within y
within[x;y]
Where
- x is an atom or list of sortable type/s
- y is an ordered pair (i.e. (<). y is true), or the flip of a list of ordered pairs of the same count and type/s as x
the result is a boolean for each item of x indicating whether it is within the inclusive bounds given by y.

q)1 3 10 6 4 within 2 6
01011b
q)"acyxmpu" within "br"    / chars are ordered
0100110b
q)select sym from ([]sym:`dd`ccc`ccc) where sym within `c`d
sym
---
ccc
ccc

within is a left-uniform function: its result conforms to its left argument.

q)5 within (1 2 6;3 5 7)
010b
q)2 5 6 within (1 2 6;3 5 7)
111b
q)(1 3 10 6 4;"acyxmpu") within ((2;"b");(6;"r"))
01011b
0100110b

within uses Find to search for x in y.
within is a multithreaded primitive.

wj , wj1 ¶
Window join
wj [w; c; t; (q; (f0;c0); (f1;c1))]
wj1[w; c; t; (q; (f0;c0); (f1;c1))]
Where
- t and q are simple tables to be joined (q should be sorted `sym`time with `p# on sym). Since 4.1t 2023.08.04, if t is the name of a table, it is updated in place.
- w is a pair of lists of times/timestamps, begin and end
- c are the names of the common columns, syms and times, which must have integral types
- f0, f1 are aggregation functions applied to values in q columns c0, c1 over the intervals
returns for each record in t, a record with additional columns c0 and c1, which are the results of the aggregation functions applied to values over the matching intervals in w.
Typically this might be:
wj[w;`sym`time;trade;(quote;(max;`ask);(min;`bid))]
A quote is understood to be in existence until the next quote.
To see all the values in each window, pass the identity function :: in place of the aggregates, e.g.
wj[w;c;t;(q;(::;c0);(::;c1))]

Multi-column arguments¶
Since 3.6 2018.12.24, wj and wj1 support multi-col args, forming the resulting column name from the last argument e.g.
wj[w; f; t; (q; (wavg;`asize;`ask); (wavg;`bsize;`bid))]

Interval behavior¶
wj and wj1 are both [] interval, i.e. they consider quotes ≥beginning and ≤end of the interval.
For wj , the prevailing quote on entry to the window is considered valid as quotes are a step function. wj1 considers quotes on or after entry to the window. If the join is to consider quotes that arrive from the beginning of the interval, use wj1 . Behavior prior to V3.0 Prior to V3.0, wj1 considered only quotes in the window except for the window end (i.e. quotes ≥start and <end of the interval). | version | wj1 | wj | |---|---|---| | 3.0+ | [] | prevailing + [] | | 2.7/2.8 | [) | prevailing + [] | q)t:([]sym:3#`ibm;time:10:01:01 10:01:04 10:01:08;price:100 101 105) q)t sym time price ------------------ ibm 10:01:01 100 ibm 10:01:04 101 ibm 10:01:08 105 q)a:101 103 103 104 104 107 108 107 108 q)b:98 99 102 103 103 104 106 106 107 q)q:([]sym:`ibm; time:10:01:01+til 9; ask:a; bid:b) q)q sym time ask bid -------------------- ibm 10:01:01 101 98 ibm 10:01:02 103 99 ibm 10:01:03 103 102 ibm 10:01:04 104 103 ibm 10:01:05 104 103 ibm 10:01:06 107 104 ibm 10:01:07 108 106 ibm 10:01:08 107 106 ibm 10:01:09 108 107 q)f:`sym`time q)w:-2 1+\:t.time q)wj[w;f;t;(q;(max;`ask);(min;`bid))] sym time price ask bid -------------------------- ibm 10:01:01 100 103 98 ibm 10:01:04 101 104 99 ibm 10:01:08 105 108 104 The interval values may be seen as: q)wj[w;f;t;(q;(::;`ask);(::;`bid))] sym time price ask bid -------------------------------------------------- ibm 10:01:01 100 101 103 98 99 ibm 10:01:04 101 103 103 104 104 99 102 103 103 ibm 10:01:08 105 107 108 107 108 104 106 106 107 Window joins with multiple symbols should be used only with a `p#sym like schema. Typical RTD-like `g# gives undefined results. Window join is a generalization of as-of join An as-of join takes a snapshot of the current state, while a window join aggregates all values of specified columns within intervals. aj , asof Joins Q for Mortals §9.9.9 Window Joins
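Following on from the worked example, the multi-column aggregation form described under Multi-column arguments can be exercised by adding illustrative asize and bsize columns to the example quote table (these columns are not part of the original example); each result column is named after the last element of its aggregation triple:

q2:update asize:9#100 200 300, bsize:9#50 150 250 from q
wj[w;f;t;(q2;(wavg;`asize;`ask);(wavg;`bsize;`bid))]   / appends size-weighted ask and bid columns to t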
.finos.qclone.priv.blockingWriteAndClose:{[x] total:count x; sent:0; rc:0; .finos.clib.setBlocking[.z.w;1b]; while[sent<total ;rc:.finos.clib.write[.z.w;sent _ x] ;sent+:rc ]; hclose .z.w} // After fork, child is handed off to this function to manage // file descriptors, do some accounting, and fire off user handlers. // @param oldZpg Shimmed http renderer. Want to execute in the child. // @param x Whatever the original .z.ph handler rendered into text. // @return Never. .finos.qclone.priv.forkedChildZpg:{[oldZpg;x] info:".z.i=",string[.z.i],", .z.w=",string[.z.w],", x=",(-3!x); .finos.log.debug".finos.qclone.priv.forkedChildZpg0: ",info; .finos.qclone.priv.setupChildContext[]; // Call handler after handles are all set up. @[.finos.qclone.childZpgHandler;(::);{[x].finos.log.error".finos.qclone.childZpgHandler signaled: ",-3!x}]; // Process the input. r:@[oldZpg;x;{[x]$[10h=type x;x;-3!x]}]; // Can't return the string since it makes it more complicated // to figure out when to exit. // Force feed the string down the handle. .finos.qclone.priv.blockingWriteAndClose .finos.qclone.priv.serialize r; .finos.log.debug".finos.qclone.priv.forkedChildZpg1: ",info; exit 0; } // Handler to call from .z.pg to have query processed by a clone. // @param x Whatever the original .z.pg handler rendered into text. // @return Generic null, to be discarded since the client handle was closed. .finos.qclone.priv.forkConnectionZpg:{[oldZpg;x] rc:.finos.clib.fork[]; $[rc>0 ;.finos.qclone.priv.forkedParent[rc;`zpg] ;.finos.qclone.priv.forkedChildZpg[oldZpg;x] // Will exit. ]; (::) } .finos.qclone.priv.help,:( ".finos.qclone.activateZpg[]"; " Hooks up .z.pg handler for clone-per-query capability.") // Hook up .z.pg handler for clone-per-query capability. // @return Nothing. .finos.qclone.activateZpg:{[] if[`zpg in .finos.qclone.priv.activated ; : (::) // Already activated. ]; if[`zpo in .finos.qclone.priv.activated ; '"activateZpo already active and mutually exclusive" ]; $[-11h=type key `.z.pg // Handler exists? ;.z.pg:{[oldZpg;w]@[oldZpg;w;(::)];.finos.qclone.priv.forkConnectionZpg w}[.z.pg;] // Assign. ;.z.pg:.finos.qclone.priv.forkConnectionZpg[value;] ]; .finos.qclone.priv.activated,:`zpg; } // After fork, child is handed off to this function to manage // file descriptors, do some accounting, and fire off user handlers. // @param lambdaThatReturnsStatusCode Function passed by user. Takes no arguments. .finos.qclone.priv.forkedChildSpawn:{[lambdaThatReturnsStatusCode;args] info:".z.i=",string[.z.i],", .z.w=",string[.z.w]; .finos.log.debug".finos.qclone.priv.forkedChildSpawn0: ",info; .finos.qclone.priv.setupChildContext[]; // Call handler after handles are all set up. @[.finos.qclone.childSpawnHandler;(::);{[x].finos.log.error".finos.qclone.childSpawnHandler signaled: ",-3!x}]; // Run the lambda. r:.[lambdaThatReturnsStatusCode;(),args;{[x]$[10h=type x;x;-3!x]}]; // Can't return the string since it makes it more complicated .finos.log.debug".finos.qclone.priv.forkedChildSpawn1: ",info,", result: ",-3!r; // If result isn't an integer, use -1 as status code. .finos.clib.underscoreExit $[type[r]in -6 -7h;r;-1]; }; .finos.qclone.priv.forkFailedHandler:{[err] 'err} .finos.qclone.priv.help,:( ".finos.qclone.spawn[lambdaThatReturnsStatusCode]"; " Create a clone and run the lambda."; " lambdaThatReturnsStatusCode can be a lambda like {1+2}"; " or a lambda with arguments like ({x+y};1;2) or ({x+5};1}") // Spawn a child process and run the lambda. 
// @param lambda or (lambda;args1;..;argN) lambda(with optional arguments) to do work and then exit with a status code. // @return newChildPid to the parent. .finos.qclone.spawn:{[f] // To make sure code execution doesm't break out on a //"fork. OS reports: Resource temporarily unavailable" error due to hitting // your ulimit for processes open rc:@[.finos.clib.fork;(::);.finos.qclone.priv.forkFailedHandler]; // Only run the forkedParent[...] handler if fork(2) was successful. $[rc>0; .finos.qclone.priv.forkedParent[rc;`spawn]; .finos.qclone.priv.forkedChildSpawn[first f;$[1<count f;1_f;(::)]]]; // Will exit. rc} // After fork, child is handed off to this function to manage // file descriptors, do some accounting, and fire off user handlers. // @param lambdaThatReturnsString Function passed by user. // The lambda takes no arguments. // Returns a string to be sent to the HTTP client. .finos.qclone.priv.forkedChildOffloadHttp:{[lambdaThatReturnsString;contentType] info:".z.i=",string[.z.i],", .z.w=",string[.z.w]; .finos.log.debug".finos.qclone.priv.forkedChildOffloadHttp0: ",info; .finos.qclone.priv.setupChildContext[]; // Call handler after handles are all set up. @[.finos.qclone.childOffloadHttpHandler;(::);{[x].finos.log.error".finos.qclone.childOffloadHttpHandler signaled: ",-3!x}]; // Run the lambda. r:@[lambdaThatReturnsString;(::);{[x]$[10h=type x;x;-3!x]}]; // If they didn't return a string, do a simple render. if[10h<>type r; ;r:.h.pre .Q.s2 r ]; // Can't return the string since it makes it more complicated // to figure out when to exit. // Force feed the string down the handle. .finos.qclone.priv.blockingWriteAndClose .h.hy[contentType;]r; // Can't return the string since it makes it more complicated .finos.log.debug".finos.qclone.priv.forkedChildOffloadHttp1: ",info; exit 0; } .finos.qclone.priv.help,:( ".finos.qclone.offloadHttp[lambdaThatReturnsString;contentType]"; " Create a clone and run the lambda. Send string returned from lambda to HTTP client.") // OffloadHtml a child process and run the lambda. // @param lambdaThatReturnsString Lambda to do work and then give a string as the result. // @return Empty string to avoid interfering with the child's communication with the client. .finos.qclone.offloadHttp:{[lambdaThatReturnsString;contentType] rc:.finos.clib.fork[]; $[rc>0 ;.finos.qclone.priv.forkedParent[rc;`offloadHttp] // Function call below will cause clone process to exit when it's done. ;.finos.qclone.priv.forkedChildOffloadHttp[lambdaThatReturnsString;contentType] ]; } // This gets set to 1b so functions called from HTML // interface can exhibit special behaviour for // StratStudio. .finos.qclone.zphActive:0b .finos.qclone.oldZph:.z.ph // Shim to set the variable. .finos.qclone.priv.zphActiveHandler:{[arg] .finos.qclone.zphActive:1b; // Don't need protected eval since HTML rendering coe // does that already. r:.finos.qclone.oldZph arg; .finos.qclone.zphActive:0b; r } .finos.qclone.priv.help,:( ".finos.qclone.activateZphActive[]"; " Hooks up .z.ph handler with shim to maintain .finos.qclone.zphActive flag.") .finos.qclone.activateZphActive:{[] if[`zphActive in .finos.qclone.priv.activated ; : (::) // Already activated. ]; if[not -11h=type key `.z.ph // HTML renderer installed? ;'"no HTML handler installed on .z.ph" ]; // Assign. .z.ph:.finos.qclone.priv.zphActiveHandler; .finos.qclone.priv.activated,:`zphActive; } .finos.qclone.priv.help,:( ".finos.qclone.unload[]"; " Remove artifacts from .finos.qclone namespace. 
(But can't unshim, so only useful on exit.)") // Try to clean everything up. // However, the wrapper functions on .z.po, .z.pc., .z.ph, .z.pg aren't // going to be removed. So further connections will mess things up. // Only useful for cleanup on process exit. .finos.qclone.unload:{ @[`.help;`DIR`TXT;_;`qclone]; // Remove help entry. .finos.qclone.reap[]; // Clean up remaining zombies. r:.finos.qclone.priv.childProcesses; // Copy the things we need before they're deleted func:unloadHandler; // so we can perform the callback. delete qclone from `.finos; // Delete entire context and its contents from .finos namespace. func r; } .finos.qclone.isClone:0b; //used by .finos.qclone.spawnPersistent to prevent runaway spawning .finos.qclone.parentPort:system"p"; //The file handle operations must occur in this specific order. //Skipping actions or changing their order can cause weird behavior like //the parent and child fighting over the console, or the parent hanging on exiting. .finos.qclone.priv.setupPersistentChildContext:{[fun] .finos.log.priv.h:-2; //avoid writing to log files until the clone sets up its own logging p:system "p"; if[p>0;.finos.qclone.parentPort:p]; .finos.qclone.priv.setupChildContextCommon[]; system"p 0W"; .finos.qclone.close 0; hopen`:/dev/null; system"p 0"; fun[]}; // Spawns a persistent clone. This is basically an enhanced version of .finos.clib.fork[] that // fixes some anomalies and allows a real functioning clone to be spawned as a result. // Since this function calls fork, all the code that follows the call will be executed in the // clone as well, so this should be the last call inside the function that calls it, or // check the value of .finos.qclone.isClone to decide which actions to execute. // Warning: In q version 3.4, the unix domain socket will be deleted after calling this function. // @param fun A function to execute in the clone (e.g. this could connect back to the parent). // @return PID (0 in the clone) .finos.qclone.spawnPersistent:{[fun] if[0<.z.w; '"spawnIdleClone must be run from main thread"]; if[.finos.qclone.isClone; :0i]; //required since this function may be called in each pid:.finos.clib.fork[]; if[0=pid; //we are the clone .finos.sys.errorTrapAt[.finos.qclone.priv.setupPersistentChildContext;fun;{-1 x;exit 1}]; ]; pid}; .help.TXT[`qclone]:.finos.qclone.priv.help // .finos.qclone.activateZph[] // .finos.qclone.activateZpo[] // Mutually exclusive with activateZpg. // .finos.qclone.activateZpg[] // Mutually exclusive with activateZpo. ================================================================================ FILE: kdb_q_qclone_test_qclone.q SIZE: 265 characters ================================================================================ .finos.log.debug:{-2@x} .finos.log.error:{-2@x} \l finos_clib.q \l finos_qclone.q t:([]til 3) // Function to show PID and content. show_t:{0N!.z.i+t;7i} .finos.qclone.spawn(show_t) .finos.qclone.spawn(show_t) .finos.qclone.reap[] .finos.qclone.activateZph[] ================================================================================ FILE: kdb_q_query_query.q SIZE: 2,350 characters ================================================================================
Real-time data cluster¶
Instead of one large instance, our RDB will now be a cluster of smaller instances and the day’s real-time data will be distributed between them. An Auto Scaling group will be used to maintain the RAM capacity of the cluster. Throughout the day more data will be ingested by the tickerplant and added to the cluster. The ASG will increase the number of instances in the cluster throughout the day in order to hold this new data. At the end of the day, the day’s data will be flushed from memory and the ASG will scale the cluster in.

Distributed RDBs
This solution has one obvious difference to a regular kdb+ system in that there are multiple RDB servers. User queries will need to be parsed and routed to each one to ensure the data can be retrieved effectively. Engineering a solution for this is beyond the scope of this article, but it will be tackled in the future.

kdb+tick¶
The code here has been written to act as a wrapper around kdb+tick’s .u functionality. The code to coordinate the RDBs has been put in a new .u.asg namespace; its functions determine when to call .u.sub and .u.del to add and remove subscribers from .u.w .

Scaling the cluster¶
On a high level the scaling method is quite simple.
- A single RDB instance is launched and subscribes to the tickerplant.
- When it fills up with data a second RDB will come up to take its place.
- This cycle repeats throughout the day, growing the cluster.
- At end-of-day all but the latest RDB instances are shut down.

The subscriber queue¶
There is an issue with the solution outlined above. An RDB will not come up at the exact moment its predecessor unsubscribes, so there are two scenarios that the tickerplant must be able to handle.
- The new RDB comes up too early.
- The new RDB does not come up in time.
If the RDB comes up too early, the tickerplant must add it to a queue, while remembering the RDB’s handle and the subscription info. If it does this, it can add the RDB to .u.w when it needs to.
If the RDB does not come up in time, the tickerplant must remember the last upd message it sent to the previous RDB. When the RDB eventually comes up it can use this to recover the missing data from the tickerplant’s log file. This will prevent any gaps in the data.
The tickerplant will store these details in .u.asg.tab .

/ table used to handle subscriptions
/ time    - time the subscriber was added
/ handle  - handle of the subscriber
/ tabs    - tables the subscriber has subscribed for
/ syms    - syms the subscriber has subscribed for
/ ip      - ip of the subscriber
/ queue   - queue the subscriber is a part of
/ live    - time the tickerplant added the subscriber to .u.w
/ rolled  - time the subscriber unsubscribed
/ firstI  - upd count when subscriber became live
/ lastI   - last upd subscriber processed
.u.asg.tab: flip `time`handle`tabs`syms`ip`queue`live`rolled`firstI`lastI!()

q).u.asg.tab
time handle tabs syms ip queue live rolled firstI lastI
-------------------------------------------------------

The first RDB to come up will be added to this table and to .u.w ; it will then be told to replay the log. We will refer to the RDB that is in .u.w , and therefore currently being published to, as live.
When it is time to roll to the next subscriber the tickerplant will query .u.asg.tab . It will look for the handle, tables and symbols of the next RDB in the queue and make it the new live subscriber. kdb+tick’s functionality will then take over and start publishing to the new RDB.
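Two states of .u.asg.tab matter to the tickerplant: the live subscriber for a queue and the RDBs still waiting in it. As a sketch, using the same predicates that .u.asg.sub and .u.asg.roll apply later in this paper, for a queue name q:

/ the current live subscriber in queue q, if any
liveProc: select from .u.asg.tab where not null handle, not null live, null rolled, queue = q
/ RDBs queued behind it, waiting to be made live
waiting: select from .u.asg.tab where not null handle, null live, queue = q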
Adding subscribers¶ To be added to .u.asg.tab a subscriber must call .u.asg.sub , it takes three parameters. - A list of tables to subscribe for. - A list of symbol lists to subscribe for (one symbol list for each of the tables). - The name of the queue to subscribe to. If the RDB is subscribing to a queue with no live subscriber, the tickerplant will immediately add it to .u.w and tell it to replay the log. This means the RDB cannot make multiple .u.asg.sub calls for each table it wants from the tickerplant. Instead table and symbol lists are sent as parameters. So multiple subscriptions can still be made. / t - A list of tables (or ` for all). / s - Lists of symbol lists for each of the tables. / q - The name of the queue to be added to. .u.asg.sub:{[t;s;q] if[-11h = type t; t: enlist t; s: enlist s]; if[not (=) . count each (t;s); '"Count of table and symbol lists must match"]; if[not all missing: t in .u.t,`; '.Q.s1[t where not missing]," not available"]; `.u.asg.tab upsert (.z.p; .z.w; t; s; `$"." sv string 256 vs .z.a; q; 0Np; 0Np; 0N; 0N); liveProc: select from .u.asg.tab where not null handle, not null live, null rolled, queue = q; if[not count liveProc; .u.asg.add[t;s;.z.w]]; } .u.asg.sub first carries out some checks on the arguments. - Ensures t ands are enlisted. - Checks that the count of t ands match. - Checks that all tables in t are available for subscription. A record is then added to .u.asg.tab for the subscriber. Finally, .u.asg.tab is checked to see if there are other RDBs in the same queue. If the queue is empty the tickerplant will immediately make this RDB the live subscriber. q).u.asg.tab time handle tabs syms ip queue live rolled firstI lastI -------------------------------------------------------------------------------------------------------------------------------------------------------- 2020.04.13D23:36:43.518172000 7 ,` ,` 10.0.1.5 rdb-cluster-v1-RdbASGMicro-NWN25W2UPGWQ.r-asg 2020.04.13D23:36:43.518223000 0 q).u.w Quote| 7i ` Trade| 7i ` If there is already a live subscriber the RDB will just be added to the queue. q).u.asg.tab time handle tabs syms ip queue live rolled firstI lastI --------------------------------------------------------------------------------------------------------------------------------------------------------- 2020.04.13D23:36:43.518172000 7 10.0.1.5 rdb-cluster-v1-RdbASGMicro-NWN25W2UPGWQ.r-asg 2020.04.13D23:36:43.518223000 0 2020.04.14D07:37:42.451523000 9 10.0.1.22 rdb-cluster-v1-RdbASGMicro-NWN25W2UPGWQ.r-asg q).u.w Quote| 7i ` Trade| 7i ` The live subscriber¶ To make an RDB the live subscriber the tickerplant will call .u.asg.add . There are two instances when this is called. - When an RDB subscribes to a queue with no live subscriber. - When the tickerplant is rolling subscribers. / t - List of tables the RDB wants to subscribe to. / s - Symbol lists the RDB wants to subscribe to. / h - The handle of the RDB. .u.asg.add:{[t;s;h] schemas: raze .u.subInner[;;h] .' flip (t;s); q: first exec queue from .u.asg.tab where handle = h; startI: max 0^ exec lastI from .u.asg.tab where queue = q; neg[h] @ (`.sub.rep; schemas; .u.L; (startI; .u.i)); update live:.z.p, firstI:startI from `.u.asg.tab where handle = h; } In .u.asg.add .u.subInner is called to add the handle to .u.w for each table. This function is equivalent to kdb+tick’s .u.sub but it takes a handle as a third argument. This change to .u will be discussed in a later section. 
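The startI calculation above determines where the new live RDB's log replay should begin: the highest upd count already processed by any RDB in the same queue, with nulls treated as zero. A standalone illustration on a toy table (not the real .u.asg.tab):

q)tab:([] queue:3#`myqueue; lastI:9746 19366 0N)
q)exec lastI from tab where queue=`myqueue
9746 19366 0N
q)max 0^ exec lastI from tab where queue=`myqueue   / the new subscriber replays from upd 19366 onwards
19366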
The tickerplant then calls .sub.rep on the RDB and the schemas, log file, and the log window are passed down as parameters. Once the replay is kicked off on the RDB it is marked as the live subscriber in .u.asg.tab . Becoming the live subscriber¶ When the tickerplant makes an RDB the live subscriber it will call .sub.rep to initialize it. / schemas - table names and corresponding schemas / tplog - file path of the tickerplant log / logWindow - start and end of the window needed in the log, (start;end) .sub.rep:{[schemas;tplog;logWindow] .sub.live: 1b; (.[;();:;].) each schemas; .sub.start: logWindow 0; `upd set .sub.replayUpd; -11!(logWindow 1;tplog); `upd set .sub.upd; .z.ts: .sub.monitorMemory; system "t 5000"; } The RDB first marks itself as live, then as in tick/r.q the RDBs will set the table schemas and replay the tickerplant’s log. Replaying the tickerplant log¶ In kdb+tick .u.i will be sent to the RDB. The RDB will then replay that many upd messages from the log. As it replays it inserts every row of data in the upd messages into the tables. In our case we may not want to keep all of the data in the log as other RDBs in the cluster may be holding some of it. This is why the logWindow is passed down by the tickerplant. logWindow is a list of two integers. - The last upd message processed by the other RDBs in the same queue. - The last upd processed by the tickerplant,.u.i . To replay the log .sub.start is set to the first element of logWindow and upd is set to .sub.replayUpd . The tickerplant log replay is then kicked off with -11! until the second element in the logWindow , .u.i . .sub.replayUpd is then called for every upd message. With each upd it increments .sub.i until it reaches .sub.start . From that point it calls .sub.upd to insert the data. .sub.replayUpd:{[t;data] if[.sub.i > .sub.start; if[not .sub.i mod 100; .sub.monitorMemory[]]; .sub.upd[t;flip data]; :(::); ]; .sub.i+: 1; } .sub.upd: {.sub.i+: 1; x upsert y} One other function of .sub.replayUpd is to monitor the memory of the server while we are replaying. This will protect the RDB in the case where there is too much data in the log to replay. In this case the RDB will unsubscribe from the tickerplant and another RDB will continue the replay. After the log has been replayed upd is set to .sub.upd , this will upsert data and keep incrementing .sub.i for every upd the RDB receives. Finally the RDB sets .z.ts to .sub.monitorMemory and initializes the timer to run every five seconds. Monitoring RDB server memory¶ The RDB server’s memory is monitored for two reasons. - To tell the Auto Scaling group to scale out. - To unsubscribe from the tickerplant when full. Scaling out¶ As discussed in the Auto Scaling in q section, AWS CLI commands can take some time to run. This could create some unwanted buffering in the RDB if they were to run while subscribed to the tickerplant. To avoid this another q process runs separately on the server to coordinate the scale out. It will continuously run .mon.monitorMemory to check the server’s memory usage against a scale threshold, say 60%. If the threshold is breached it will increment the Auto Scaling group’s DesiredCapacity and set .sub.scaled to be true. This will ensure the monitor process does not tell the Auto Scaling group to scale out again. 
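The replay in .sub.rep relies on the standard log-replay primitive: -11!(n;logfile) executes the first n messages of a tickerplant log, calling upd for each one, which is what lets the logWindow bound the work. A minimal, self-contained sketch with a hypothetical /tmp path and a toy upd handler (not the real .sub.replayUpd):

logfile:`:/tmp/example_tplog
logfile set ()                        / create an empty log file
h:hopen logfile
h enlist (`upd;`trade;0 1 2)          / append three upd messages
h enlist (`upd;`trade;3 4 5)
h enlist (`upd;`trade;6 7 8)
hclose h

upd:{[t;x] 0N!(t;x)}                  / toy handler - just print each message
-11!(2;logfile)                       / replay only the first two messages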
.mon.monitorMemory:{[] if[not .mon.scaled; if[.util.getMemUsage[] > .mon.scaleThreshold; .util.aws.scale .aws.groupName; .mon.scaled: 1b; ]; ]; } Unsubscribing¶ The RDB process runs its own timer function to determine when to unsubscribe from the tickerplant. It will do this to stop the server from running out of memory. .sub.monitorMemory:{[] if[.sub.live; if[.util.getMemUsage[] > .sub.rollThreshold; .sub.roll[] ]; ]; } .sub.monitorMemory checks when the server’s memory usage breaches the .sub.rollThreshold . It then calls .sub.roll on the tickerplant which will then roll to the next subscriber. Thresholds¶ Ideally .mon.scaleThreshold and .sub.rollThreshold will be set far enough apart so that the new RDB has time to come up before the tickerplant tries to roll to the next subscriber. This will prevent the cluster from falling behind and reduce the number of upd messages that will need to be recovered from the log. Rolling subscribers¶ As discussed, when .sub.rollThreshold is hit the RDB will call .sub.roll to unsubscribe from the tickerplant. From that point The RDB will not receive any more data, but it will be available to query. .sub.roll:{[] .sub.live: 0b; `upd set {[x;y] (::)}; neg[.sub.TP] @ ({.u.asg.roll[.z.w;x]}; .sub.i); } .sub.roll marks .sub.live as false and upd is set to do nothing so that no further upd messages are processed. It will also call .u.asg.roll on the tickerplant, using its own handle and .sub.i (the last upd it has processed) as arguments. / h - handle of the RDB / subI - last processed upd message .u.asg.roll:{[h;subI] .u.del[;h] each .u.t; update rolled:.z.p, lastI:subI from `.u.asg.tab where handle = h; q: first exec queue from .u.asg.tab where handle = h; waiting: select from .u.asg.tab where not null handle, null live, queue = q; if[count waiting; .u.asg.add . first[waiting]`tabs`syms`handle]; } .u.asg.roll uses kdb+tick’s .u.del to delete the RDB’s handle from .u.w . It then marks the RDB as rolled and .sub.i is stored in the lastI column of .u.asg.tab . Finally .u.asg.tab is queried for the next RDB in the queue. If one is ready the tickerplant calls .u.asg.add making it the new live subscriber and the cycle continues. This switch to the new RDB may cause some latency in high volume systems. The switch itself will only take a moment but there may be some variability over the network as the tickerplant starts sending data to a new server. Implementing batching in the tickerplant could lessen this latency. q).u.asg.tab time handle tabs syms ip queue live rolled firstI lastI --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2020.04.13D23:36:43.518172000 7 10.0.1.5 rdb-cluster-v1-RdbASGMicro-NWN25W2UPGWQ.r-asg 2020.04.13D23:36:43.518223000 2020.04.14D08:13:05.942338000 0 9746 2020.04.14D07:37:42.451523000 9 10.0.1.22 rdb-cluster-v1-RdbASGMicro-NWN25W2UPGWQ.r-asg 2020.04.14D08:13:05.942400000 9746 q).u.w Quote| 9i ` Trade| 9i ` If there is no RDB ready in the queue, the next one to subscribe up will immediately be added to .u.w and lastI will be used to recover from the tickerplant log. End of day¶ Throughout the day the RDB cluster will grow in size as the RDBs launch, subscribe, fill and roll. .u.asg.tab will look something like the table below. 
q).u.asg.tab time handle tabs syms ip queue live rolled firstI lastI ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2020.04.13D23:36:43.518172000 7 10.0.1.5 rdb-cluster-v1-RdbASGMicro-NWN25W2UPGWQ.r-asg 2020.04.13D23:36:43.518223000 2020.04.14D08:13:05.942338000 0 9746 2020.04.14D07:37:42.451523000 9 10.0.1.22 rdb-cluster-v1-RdbASGMicro-NWN25W2UPGWQ.r-asg 2020.04.14D08:13:05.942400000 2020.04.14D09:37:17.475790000 9746 19366 2020.04.14D09:14:14.831793000 10 10.0.1.212 rdb-cluster-v1-RdbASGMicro-NWN25W2UPGWQ.r-asg 2020.04.14D09:37:17.475841000 2020.04.14D10:35:36.456220000 19366 29342 2020.04.14D10:08:37.606592000 11 10.0.1.196 rdb-cluster-v1-RdbASGMicro-NWN25W2UPGWQ.r-asg 2020.04.14D10:35:36.456269000 2020.04.14D11:42:57.628761000 29342 39740 2020.04.14D11:24:45.642699000 12 10.0.1.42 rdb-cluster-v1-RdbASGMicro-NWN25W2UPGWQ.r-asg 2020.04.14D11:42:57.628809000 2020.04.14D13:09:57.867826000 39740 50112 2020.04.14D12:41:57.889318000 13 10.0.1.80 rdb-cluster-v1-RdbASGMicro-NWN25W2UPGWQ.r-asg 2020.04.14D13:09:57.867882000 2020.04.14D15:44:19.011327000 50112 60528 2020.04.14D14:32:22.817870000 14 10.0.1.246 rdb-cluster-v1-RdbASGMicro-NWN25W2UPGWQ.r-asg 2020.04.14D15:44:19.011327000 60528 2020.04.14D16:59:10.663224000 15 10.0.1.119 rdb-cluster-v1-RdbASGMicro-NWN25W2UPGWQ.r-asg Usually when end-of-day occurs .u.end is called in the tickerplant. It informs the RDB which would write its data to disk and flush it from memory. In our case when we do this the rolled RDBs will be sitting idle with no data. To scale in .u.asg.end is called alongside kdb+tick’s .u.end . .u.asg.end:{[] notLive: exec handle from .u.asg.tab where not null handle, (null live) or not any null (live;rolled); neg[notLive] @\: (`.u.end; dt); delete from `.u.asg.tab where any (null handle; null live; not null rolled); update firstI:0 from `.u.asg.tab where not null live; } The function first sends .u.end to all non live subscribers. It then deletes these servers from .u.asg.tab and resets firstI to zero for all of the live RDBs. q).u.asg.tab time handle tabs syms ip queue live rolled firstI lastI ----------------------------------------------------------------------------------------------------------------------------------------------------------- 2020.04.14D15:32:22.817870000 14 10.0.1.246 rdb-cluster-v1-RdbASGMicro-NWN25W2UPGWQ.r-asg 2020.04.14D15:44:19.011327000 0 When .u.end is called on the RDB it will delete the previous day’s data from each table. If the process is live it will mark .mon.scaled to false on the monitor process so that it can scale out again when it refills. If the RDB is not live and it has flushed all of its data it will terminate its own instance and reduce the DesiredCapacity of the ASG by one. .u.end: .sub.end; .sub.end:{[dt] .sub.i: 0; .sub.clear dt+1; } / tm - clear all data from all tables before this time .sub.clear:{[tm] ![;enlist(<;`time;tm);0b;`$()] each tables[]; if[.sub.live; .Q.gc[]; neg[.sub.MON] (set;`.mon.scaled;0b); :(::); ]; if[not max 0, count each get each tables[]; .util.aws.terminate .aws.instanceId ]; } Bringing it all together¶ The q scripts for the code outlined above are laid out in the same way as kdb+tick, i.e. tickasg.q is in the top directory with the RDB and .u.asg scripts in the directory below, asg/ . The code runs alongside kdb+tick so its scripts are placed in the same top directory. 
$ tree q/ q ├── asg │ ├── mon.q │ ├── r.q │ ├── sub.q │ ├── u.q │ └── util.q ├── tick │ ├── r.q │ ├── sym.q │ ├── u.q │ └── w.q ├── tickasg.q └── tick.q Starting the tickerplant is the same as in kdb+tick, but tickasg.q is loaded instead of tick.q . q tickasg.q sym /mnt/efs/tplog -p 5010 tickasg.q¶ system "l tick.q" system "l asg/u.q" .tick.zpc: .z.pc; .z.pc: {.tick.zpc x; .u.asg.zpc x;}; .tick.end: .u.end; .u.end: {.tick.end x; .u.asg.end x;}; tickasg.q starts by loading in tick.q , .u.tick is called in this file so the tickerplant is started. Loading in asg/u.q will initiate the .u.asg code on top of it. .z.pc and .u.end are then overwritten to run both the .u and the .u.asg versions. .u.asg.zpc:{[h] if[not null first exec live from .u.asg.tab where handle = h; .u.asg.roll[h;0] ]; update handle:0Ni from `.u.asg.tab where handle = h; } .u.asg.zpc checks if the disconnecting RDB is the live subscriber and calls .u.asg.roll if so. It then marks the handle as null in .u.asg.tab for any disconnection. There are also some minor changes made to .u.add and .u.sub in asg/u.q . Changes to .u ¶ .u will still work as normal with these changes. The main change is needed because .z.w cannot be used in .u.sub or .u.add anymore. When there is a queue of RDBs .u.sub will not be called in the RDB’s initial subscription call, so .z.w will not be the handle of the RDB we want to start publishing to. To remedy this .u.add has been changed to take a handle as a third parameter instead of using .z.w . The same change could not be made to .u.sub as it is the entry function for kdb+tick’s tick/r.q . To keep tick/r.q working .u.subInner has been added, it is a copy of .u.sub but takes a handle as a third parameter. .u.sub is now a projection of .u.subInner , it passes .z.w in as the third parameter. tick/u.q¶ \d .u add:{$[ (count w x)>i:w[x;;0]?.z.w; .[`.u.w;(x;i;1);union;y]; w[x],:enlist(.z.w;y) ]; (x;$[99=type v:value x;sel[v]y;@[0#v;`sym;`g#]]) } sub:{if[x~`;:sub[;y]each t];if[not x in t;'x];del[x].z.w;add[x;y]} \d . asg/u.q¶ / use 'z' instead of .z.w add:{$[ (count w x)>i:w[x;;0]?z; .[`.u.w;(x;i;1);union;y]; w[x],:enlist(z;y) ]; (x;$[99=type v:value x;sel[v]y;@[0#v;`sym;`g#]]) } / use 'z' instead of .z.w and input as 3rd argument to .u.add subInner:{if[x~`;:subInner[;y;z]each t];if[not x in t;'x];del[x]z;add[x;y;z]} sub:{subInner[x;y;.z.w]} \d . asg/r.q¶ When starting an RDB in Auto Scaling mode asg/r.q is loaded instead of tick/r.q . q asg/r.q 10.0.0.1:5010 Where 10.0.0.1 is the private IP address of the tickerplant’s server. /q asg/r.q [host]:port[:usr:pwd] system "l asg/util.q" system "l asg/sub.q" while[null .sub.TP: @[{hopen (`$":", .u.x: x; 5000)}; .z.x 0; 0Ni]]; while[null .sub.MON: @[{hopen (`::5016; 5000)}; (::); 0Ni]]; .aws.instanceId: .util.aws.getInstanceId[]; .aws.groupName: .util.aws.getGroupName[.aws.instanceId]; .sub.rollThreshold: getenv `ROLLTHRESHOLD; .sub.live: 0b; .sub.i: 0; .u.end: {[dt] .sub.clear dt+1}; neg[.sub.TP] @ (`.u.asg.sub; `; `; `$ .aws.groupName, ".r-asg"); asg/r.q loads the scaling code in asg/util.q and the code to subscribe and roll in asg/sub.q . Connecting to the tickerplant is done in a retry loop just in case the tickerplant takes some time to initially come up. The script then sets the global variables outlined below. 
| variable | purpose |
|---|---|
| .aws.instanceId | instance ID of its EC2 instance |
| .aws.groupName | name of its Auto Scaling group |
| .sub.rollThreshold | memory-usage percentage threshold at which to unsubscribe |
| .sub.live | whether the tickerplant is currently sending it data |
| .sub.scaled | whether it has launched a new instance |
| .sub.i | count of upd messages the RDB has processed |
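One practical note on asg/r.q above: getenv returns a string, and .sub.rollThreshold is later compared against a numeric memory-usage figure in .sub.monitorMemory, so the value would normally be cast when it is read. A hedged one-liner, not taken from the original scripts:

.sub.rollThreshold: "J"$getenv `ROLLTHRESHOLD   / cast the environment variable to a long before the numeric comparison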
// Inter Process Communication Functionality // Copyright (c) 2017 Sport Trades Ltd, (c) 2020 - 2023 Jaskirat Rajasansir // Documentation: https://github.com/BuaBook/kdb-common/wiki/ipc.q .require.lib each `type`convert`time; / All connections made with this library use this value as the default timeout in milliseconds if / none is specified / @see .ipc.connectWithTimeout .ipc.cfg.defaultConnectTimeout:5000; / Whether inbound connections should be tracked by this library. If true inbound connections will / be tracked in .ipc.inbound. NOTE: Uses the "event" library. / @see .ipc.inbound / @see .ipc.i.enableInboundConnTracking .ipc.cfg.enableInboundConnTracking:1b; / Whether a connection password, if specified, should be logged. If false, the password will be replaced / for logging with asterisks. If true, the plain-text password will be logged .ipc.cfg.logPasswordsDuringConnect:0b; / If enabled, any connection attempt that is made to a process on the local server will be re-routed via Unix Domain Sockets / rather than localhost TCP (only on supported Operating Systems) .ipc.cfg.forceUnixDomainSocketsForLocalhost:1b; / Provides current state of all connections that were initiated by an external process. This will / only be populated if .ipc.cfg.enableInboundConnTracking is enabled on library initialisation / @see .ipc.i.handleOpen / @see .ipc.i.connectionClosed .ipc.inbound:`handle xkey flip `handle`sourceIp`user`connectTime`connectType!"ISSPS"$\:(); / Provides current state of all outbound connections that are initiated using the functions within / this IPC library / @see .ipc.connectWithTimeout .ipc.outbound:`handle xkey flip `handle`targetHostPort`connectTime`hostPortHash!"ISP*"$\:(); / The Operating Systems that support Unix Domain Sockets .ipc.udsSupportedOs:`l`v`m; / List of host names / IP addresses that are always classified as 'local' and therefore should default to UDS if enabled / On library initialisation, additional hosts are added .ipc.localhostAddresses:`localhost`127.0.0.1; / Combination of '.ipc.cfg.forceUnixDomainSocketsForLocalhost' and if the current OS supports UDS / @see .ipc.init .ipc.udsEnabled:0b; .ipc.init:{ if[.ipc.cfg.enableInboundConnTracking; .ipc.i.enableInboundConnTracking[]; ]; .ipc.localhostAddresses:.ipc.localhostAddresses union .z.h,.Q.host[.z.a],.convert.ipOctalToSymbol each (.z.a; .Q.addr .z.h); .log.if.debug ("Local host names and IP addresses: {}"; .ipc.localhostAddresses); .ipc.udsEnabled:.ipc.cfg.forceUnixDomainSocketsForLocalhost & (`$first string .z.o) in .ipc.udsSupportedOs; .log.if.info ("IPC library initialised [ UDS Enabled: {} ]"; `no`yes .ipc.udsEnabled); }; / Open a connection to the specified host and port / @param host (Symbol) The hostname to connect to / @param port (Short|Integer|Long) The post to connect to / @return (Integer) The handle to that process if the connection is successful / @see .ipc.connect .ipc.connectWithHp:{[host;port] if[(not .type.isSymbol host) | not .type.isWholeNumber port; '"IllegalArgumentException"; ]; :.ipc.connect `$":",string[host],":",string port; }; / Open a connection to the specified target host/port using the default connection timeout / @param hostPort (HostPort) The target process to connect to / @return (Integer) The handle to that process if the connection is successful / @see .ipc.connectWithTimeout .ipc.connect:{[hostPort] :.ipc.connectWithTimeout[hostPort;::]; }; / Open a connection to the specified target host/port and allow waiting indefinitely until the process responds / @param hostPort 
(HostPort) The target process to connect to / @return (Integer) The handle to that process if the connection is successful / @see .ipc.connectWithTimeout .ipc.connectWait:{[hostPort] :.ipc.connectWithTimeout[hostPort; 0]; }; / Open a connection to the specified target host/port with a maximum timeout period. / NOTE: Passwords can be configured to not be printed to via logging / @param hostPort (HostPort) The target process to connect to / @param timeout (Long) The maximum time to wait, in milliseconds, for a connection. Pass generic null to use the default / @return (Integer) The handle to that process if the connection is successful / @throws IllegalArgumentException If the host/port is not of the correct type / @throws ConnectionFailedException If the connection to the process fails / @see .ipc.cfg.defaultConnectTimeout / @see .ipc.cfg.logPasswordsDuringConnect .ipc.connectWithTimeout:{[hostPort;timeout] $[(::) ~ timeout; timeout:.ipc.cfg.defaultConnectTimeout; not .type.isLong timeout; '"IllegalArgumentException"; 0 > timeout; '"IllegalArgumentException" ]; normalised:.ipc.i.normaliseHostPort hostPort; hpHash:.Q.sha1 .type.ensureString normalised`original; hostPort:normalised`toConnect; logHostPort:normalised`toLog; logTimeout:$[timeout in 0 0Wi; "waiting indefinitely"; "timeout ",string[timeout]," ms"]; .log.if.info ("Attempting to connect to {} ({})"; logHostPort; logTimeout); h:@[hopen; (hostPort; timeout); { (`CONN_FAIL;x) }]; if[`CONN_FAIL~first h; .log.if.error "Failed to connect to ",logHostPort,". Error - ",last h; '"ConnectionFailedException (",logHostPort,")"; ]; .log.if.info "Successfully connected to ",logHostPort," on handle ",string h; `.ipc.outbound upsert (h; `$logHostPort; .time.now[]; hpHash); :h; }; / Sends a one-shot query to the specified host/port with the default connection timeout / @param hostPort (HostPort) The target process to connect to / @param query (String|List) The query to send to the remote process / @see .ipc.oneShotWithTimeout .ipc.oneShot:{[hostPort; query] :.ipc.oneShotWithTimeout[hostPort; ::; query]; }; / Sends a one-shot query to the specified host/port and allow waiting indefinitely until the process responds / @param hostPort (HostPort) The target process to connect to / @param query (String|List) The query to send to the remote process / @see .ipc.oneShotWithTimeout .ipc.oneShotWait:{[hostPort; query] :.ipc.oneShotWithTimeout[hostPort; 0; query]; }; / Sends a one-shot query to the specified host/port with a specified connection timeout / NOTE: Passwords can be configured to not be printed to via logging / @param hostPort (HostPort) The target process to connect to / @param timeout (Long) The maximum time to wait, in milliseconds, for a connection. 
Pass generic null to use the default / @return () The result of the query executed on the remote process / @throws IllegalArgumentException If the host/port is not of the correct type / @throws ConnectionFailedException If the connection to the process fails / @see .ipc.cfg.defaultConnectTimeout / @see .ipc.cfg.logPasswordsDuringConnect .ipc.oneShotWithTimeout:{[hostPort; timeout; query] $[(::) ~ timeout; timeout:.ipc.cfg.defaultConnectTimeout; not .type.isLong timeout; '"IllegalArgumentException"; 0 > timeout; '"IllegalArgumentException" ]; normalised:.ipc.i.normaliseHostPort hostPort; hostPort:normalised`toConnect; logHostPort:normalised`toLog; logTimeout:$[timeout in 0 0Wi; "waiting indefinitely"; "timeout ",string[timeout]," ms"]; .log.if.info ("Sending one-shot query to {} ({})"; logHostPort; logTimeout); .log.if.debug ("Query: {}"; query); res:.[`::; ((.type.ensureString hostPort; timeout); query); { (`ONE_SHOT_FAIL; x) }]; if[`ONE_SHOT_FAIL ~ first res; .log.if.error ("One-shot query to {} failed. Error - {}"; logHostPort; last res); '"OneShotFailedException (",logHostPort,")"; ]; :res; }; / @returns (IntegerList) Any existing handles that match the specified host/port in '.ipc.outbound' / @see .ipc.outbound .ipc.getHandlesFor:{[hostPort] hpHash:.Q.sha1 string .type.ensureHostPortSymbol hostPort; :exec handle from .ipc.outbound where hostPortHash ~\: hpHash; }; / Disconnects the specified handle / @param h (Integer) The handle to disconnect / @return (Boolean) True if the close was successful, false otherwise / @see .q.hclose .ipc.disconnect:{[h] closeRes:@[hclose;h;{ (`FAILED_TO_CLOSE;x) }]; .ipc.i.connectionClosed h; if[`FAILED_TO_CLOSE~first closeRes; .log.if.warn "Failed to close handle ",string[h],". Error - ",last closeRes; :0b; ]; :1b; }; / Uses the event management library to track inbound connection open / close / @see .event.addListener / @see .ipc.i.handleOpen / @see .ipc.i.connectionClosed .ipc.i.enableInboundConnTracking:{ .log.if.info "Enabling inbound connection tracking"; / Optional dependency if inbound connection tracking required. Otherwise event is not loaded .require.lib`event; .event.addListener[`port.open; `.ipc.i.handleOpen]; .event.addListener[`websocket.open; `.ipc.i.websocketOpen]; .event.addListener[`port.close; `.ipc.i.connectionClosed]; .event.addListener[`websocket.close; `.ipc.i.connectionClosed]; }; / @see .ipc.i.connectionOpen .ipc.i.handleOpen:{[h] connectType:` sv `kdb,`tcp`uds 0i = .z.a; .ipc.i.connectionOpen[h;connectType]; }; / @see .ipc.i.connectionOpen .ipc.i.websocketOpen:{[ws] .ipc.i.connectionOpen[ws;`websocket]; }; / Hepler function when a connection is opened (via .z.po). 
Logs the new connection and adds it to .ipc.inbound / @see .convert.ipOctalToSymbol / @see .ipc.inbound .ipc.i.connectionOpen:{[h;connectType] sourceIp:.convert.ipOctalToSymbol .z.a; user:`unknown^.z.u; .log.if.info "New inbound ",string[connectType]," connection on handle ",string[h]," [ IP Address: ",string[sourceIp]," ] [ User: ",string[user]," ]"; `.ipc.inbound upsert (h;sourceIp;user;.time.now[];connectType); }; / Logs and updates the .ipc.inbound and .ipc.outbound tables when a connection is closed / @see .ipc.disconnect / @see .ipc.inbound / @see .ipc.outbound .ipc.i.connectionClosed:{[h] if[h in key .ipc.inbound; hDetail:.ipc.inbound h; .log.if.info "Inbound connection on handle ",string[h]," closed [ IP Address: ",string[hDetail`sourceIp]," ] [ User: ",string[hDetail`user]," ]"; delete from `.ipc.inbound where handle = h; ]; if[h in key .ipc.outbound; .log.if.info "Outbound connection on handle ",string[h]," closed"; delete from `.ipc.outbound where handle = h; ]; }; / Provides UNIX domain socket translation and password obfuscation / @param (String|Symbol|HostPort) The host/port to normalise / @return (Dict) 3 keys - 'original' -> the host/port as a symbol, 'toConnect' -> the host/port that should be passed to 'hopen', 'toLog' -> log equivalent of the host/port .ipc.i.normaliseHostPort:{[hostPort] if[not .type.isHostPort hostPort; '"IllegalArgumentException"; ]; hostPort:.type.ensureHostPortSymbol hostPort; connHostPort:hostPort; if[.ipc.udsEnabled; hpSplit:":" vs string hostPort; host:`localhost^`$hpSplit 1; if[host in .ipc.localhostAddresses; udsHostPort:`$":unix://",":" sv 2_ hpSplit; ]; if[0 < count udsHostPort; .log.if.debug ("Host/port translated to Unix Domain Socket [ Original: {} ] [ Now: {} ]"; hostPort; udsHostPort); connHostPort:udsHostPort; ]; ]; logHostPort:string connHostPort; if[not .ipc.cfg.logPasswordsDuringConnect; if[4 = count where ":" = logHostPort; hpSplit:":" vs logHostPort; hpSplit:@[hpSplit; 4; :; count[hpSplit 4]#"*"]; logHostPort:":" sv hpSplit; ]; ];
Internet of Things with MQTT¶ MQTT is a messaging protocol for the Internet of Things (IoT). It was designed as an extremely lightweight publish/subscribe messaging transport. It is useful for connections with remote locations where a small code footprint is required and/or network bandwidth is at a premium. Internet of Things Internet of Things denotes the network of things embedded with sensors, software, and other technologies that connect and exchange data with other devices and systems over the internet. KX has released an MQTT interface with source code available on GitHub. The interface supports Linux, macOS, and Windows platforms. This interface can be used with the enterprise KX Streaming Analytics platform. For this paper the underlying kdb+ language will be used to explore the core functionality available. Edge devices Edge devices process data at its source rather than in a centralized location. This is efficient and robust: it reduces data-transfer requirements and allows edge devices to continue to operate without a consistent network link to a central server. Edge devices must be compact, efficient and even ruggedized for harsh environments. For this paper a Raspberry Pi 3 Model B+ is used as an example of an edge-class device. This is a low-powered single-board computer. The Linux ARM 32-bit release of kdb+ enables it to run on the Raspberry Pi. Install, set up, test¶ Install and test a broker¶ An MQTT broker is a process that receives all messages from publishers and then routes the messages to the appropriate destination clients. Eclipse Mosquitto is one of many implementations. We will use it for our example. Install the broker with: sudo apt-get install mosquitto To test the broker, install the command line tools: sudo apt-get install mosquitto-clients In one shell subscribe to messages on the test topic: mosquitto_sub -h localhost -t test In another shell publish a message on the same topic: mosquitto_pub -h localhost -t test -m "hello" You should see hello print to the subscriber shell. Install kdb+¶ Download the Linux-ARM version of 32-bit kdb+ and follow the install instructions, making sure to rename l32arm to l32 during install: unzip linuxarm.zip mv q/l32arm q/l32 mv q ~/ Add QHOME to your .bashrc : export PATH="$PATH:~/q/l32" export QHOME="~/q" Install the KX MQTT interface¶ Full instructions for all platforms are on GitHub. Other platforms have fewer steps, as there are pre-compiled releases. For ARM we will compile from source. Install the dependencies needed to compile the projects: sudo apt-get install libssl-dev cmake The Paho MQTT C client library needs to be available before the interface can be compiled. Take the link to the latest paho.mqtt.c library from its Releases tab on GitHub. mkdir paho.mqtt.c wget https://github.com/eclipse/paho.mqtt.c/archive/v1.3.8.tar.gz tar -xzf v1.3.8.tar.gz -C ./paho.mqtt.c --strip-components=1 cd paho.mqtt.c make sudo make install export MQTT_INSTALL_DIR=$(pwd) Now the system is ready for the KX MQTT interface to be installed. Generally it is best practice to use the Releases tab on GitHub to find a link to the latest available and download it. wget -O mqtt.tar.gz https://github.com/KxSystems/mqtt/archive/1.0.0.tar.gz tar -xzf mqtt.tar.gz cd mqtt-1.0.0 Instead, as we wish to use some unreleased, recently-added functionality, we compile directly from the repository source. git clone https://github.com/KxSystems/mqtt.git cd mqtt These next steps are the same whether you are building a release or directly from source. 
mkdir cmake && cd cmake cmake .. cmake --build . --target install Test the KX MQTT interface¶ Start a q process then: - Load the MQTT interface using \l - Connect to the broker using .mqtt.conn - Subscribe to the test topic using.mqtt.sub \l mqtt.q .mqtt.conn[`localhost:1883;`src;()!()] .mqtt.sub[`test] Publish a message using .mqtt.pub . The default message receive function will print the incoming message: q).mqtt.pub[`test;"hello"] /Publish to the test topic 2 q)(`msgsent;2) (`msgrecvd;"test";"hello") Reading sensor data in kdb+¶ For an example project we will collect some sensor data and publish it to an IoT platform. The full project is available at rianoc/environmentalmonitor. Arduino microcontroller¶ The data source here is an Arduino microcontroller. A microcontroller is a single chip microcomputer; it is simplified and does not run an operating system. The Arduino UNO in use here has 32KB of storage and 2KB of SRAM. To program an Arduino we upload a single sketch to it. The file EnvironmentalMonitor.ino is written in a C/C++ dialect which is simplified specifically to run on microcontrollers. The uploaded sketch gathers temperature, pressure, humidity, and light readings from attached sensors and sends it back to the Raspberry Pi over a serial USB connection. Reading serial data with kdb+¶ The origins of the serial communication protocol data back to 1960. It is still in use in devices due to its simplicity. Reading the serial data in kdb+ is quick using named pipe support: q)ser:hopen`$":fifo://",COM q)read0 ser "26.70,35,736,1013,-5.91,26421" The comma-separated fields contain: - Temperature (Celsius) - Humidity (percent) - Light (analog value between 0 and 100) - Pressure (pa) - Altitude (m – rudimentary estimate) - CRC-16 (checksum of data fields) Calculate an error detecting checksum¶ The final field is particularly important. This is a checksum which enables error-detection, a requirement as serial data can be unreliable. Without this, incorrect data could be interpreted as correct. For example a temperature reading such as 26.70 missing its decimal point would be published as 2670 . In kdb+ a function is needed to generate a checksum to compare against the one sent by the Arduino. If the two values do not match the data is rejected as it contains an error. To create the function the logic from C functions crc16_update and calcCRC was created as crc16 . The Over and Do (/ form) accumulators are used in place of for-loops: rs: {0b sv y xprev 0b vs x} / right shift xor: {0b sv (<>/) 0b vs'(x;y)} / XOR land:{0b sv (&). 0b vs'(x;y)} / AND crc16:{ crc:0; { 8{$[land[x;1]>0;xor[rs[x;1];40961];rs[x;1]]}/xor[x;y] } over crc,`long$x } In this example temperature can be seen to have arrived incorrectly as 195 rather than 19.5: Error with data: "195,39,12,995,8804,21287" 'Failed checksum check We can see that crc16 will return the expected checksum 21287 only if the message is valid: q)crc16 "195,39,12,995,8804" 15720 q)crc16 "19.5,39,12,995,8804" 21287 Publishing data to an IoT platform¶ Home Assistant¶ Home Assistant is a home-automation platform. To make the sensor data captured available to Home Assistant, it can be published over MQTT. Configuring sensors¶ On any IoT platform metadata is needed about sensors such as their name and unit of measure. Home Assistant includes MQTT Discovery to allow sensors to configure themselves. This is a powerful feature as a sensor can send rich metadata once, allowing for subsequent sensor state updates packets to be small. 
This helps reduce bandwidth requirements, which is important in an IoT environment. For this project we will use a table to store metadata: q)sensors name class unit icon --------------------------------------------------------- temperature temperature "ºC" "" humidity humidity "%" "" light "/100" "mdi:white-balance-sunny" pressure pressure "hPa" "" name the name of the sensor class Home Assistant has some predefined classes of sensor unit the unit of measure of the sensor icon an icon can be chosen for the UI As our light sensor does not fall into a known class , its value is left as null and we must choose an icon , as without a default class one will not be automatically populated. Now we have defined the metadata we need to publish it. MQTT uses a hierarchy of topics when data is published. To configure a Home Assistant sensor, a message must arrive on a topic of the structure: <discovery_prefix>/<component>/[<node_id>/]<object_id>/config An example for a humidity sensor would be: homeassistant/sensor/livingroomhumidity/config The payload we publish on this topic will include our metadata along with some extra fields. | field | content | |---|---| | unique_id | A unique ID is important throughout IoT systems to allow metadata to be related to sensor data | | state_topic | The sensor here is announcing that any state updates will arrive on this topic | | value_template | This template enables the system to extract the sensor value from the payload | A populated JSON config message for the humidity sensors: { "device_class":"humidity", "name":"humidity", "unique_id":"livingroomhumidity", "state_topic":"homeassistant/sensor/livingroom/state", "unit_of_measurement":"%", "value_template":"{{ value_json.humidity}}" } The configure function publishes a config message for each sensor in the table. It builds up the dictionary of information and uses .j.j to serialize to JSON before publishing: configure:{[s] msg:(!). flip ( (`name;room,string s`name); (`state_topic;"homeassistant/sensor/",room,"/state"); (`unit_of_measurement;s`unit); (`value_template;"{{ value_json.",string[s`name],"}}")); if[not null s`class;msg[`device_class]:s`class]; if[not ""~s`icon;msg[`icon]:"mdi:",s`icon]; topic:`$"homeassistant/sensor/",room,msg[`name],"/config"; .mqtt.pubx[topic;;1;1b] .j.j msg; } configure each sensors Note that .mqtt.pubx rather than the default .mqtt.pub is used to set Quality of Service to 1 and Retain to true (1b ) for these configuration messages. .mqtt.pubx[topic;;1;1b] Quality of Service¶ Quality of Service (QoS) can be specified for each published message. There are three QoS levels in MQTT: 0 At most once 1 At least once 2 Exactly once To be a lightweight system, MQTT will default to QoS 0, a fire-and-forget approach to sending messages. This may be suitable for temperature updates in our system as one missed update will not cause any issues. However, for our configuration of sensors we do want to ensure this information arrives. In this case we choose QoS 1. QoS 2 has more overhead than 1 and here is of no benefit as there is no drawback to configuring a sensor twice. Retained messages¶ Unlike other messaging systems such as a kdb+ tickerplant, Kafka, or Solace MQTT does not retain logs of all data that flows through the broker. This makes sense as the MQTT broker should be lightweight and able to run on an edge device with slow and limited storage. 
Also in a bandwidth limited environment attempting to replay large logs could interfere with the publishing of the more important real-time data. The MQTT spec does however allow for a single message to be retained per topic. Importantly, what this allows for is that our downstream clients no matter when they connect will receive the configuration metadata of our sensors. Birth and Last Will¶ In an environment with unreliable connections it is useful to know if a system is online. To announce that our sensor is online we can add a "birth" message. This does not have a technical meaning rather it is a practice. We will add a line in the connect function to publish a message to say the sensor is online as soon as we connect. QoS 2 and retain set to true are used to ensure this message is delivered. The Last Will and Testament is part of the technical spec for MQTT. When connecting to the MQTT broker we can specify the topic, message, QoS, and retain rules by populating the final dictionary opts parameter of .mqtt.conn . The broker does not immediately publish this message to subscribers. Instead it waits until there is an unexpected disconnection from the publisher. With these added the connect function now looks like: connect:{ statusTopic:`$"EnvironmentalMonitor/",room,"/status"; opts:`lastWillTopic`lastWillQos`lastWillMessage`lastWillRetain!(statusTopic;2;"offline";1); .mqtt.conn[`$broker_address,":",string port;clientID;opts]; .mqtt.pubx[statusTopic;;2;1b] "online"; conn::1b; configure each sensors; } Starting and stopping our process we can now see these messages being triggered. Note the usage of the + character which acts as a single-level wildcard, unlike # which is multilevel. mosquitto_sub -h localhost -v -t "EnvironmentalMonitor/+/status" EnvironmentalMonitor/livingroom/status online EnvironmentalMonitor/livingroom/status offline Retaining flexibility¶ Reviewing our sensors table and configure function we can spot some patterns. Optional configuration variables such as icon tend to be sparsely populated in the table and require specific if blocks in our configure function. Further reviewing the MQTT Sensor specification we can see there are a total of 26 optional variables. If we were to support all of these our table would be extremely wide and sparsely populated and the configure function would need 26 if statements. This is clearly something to avoid. Furthermore if a new optional variable were added to the spec we would need to update our full system and database schema. This example shows the importance of designing an IoT system for flexibility. When dealing with many vendors and specifications the number of possible configurations is huge. To address this in our design we change our table to move all optional parameters to an opts column which stores the values in dictionaries. Other databases might limit datatypes within cells but in kdb+ we can insert any possible data structure available to the language. sensors:([] name:`temperature`humidity`light`pressure; opts:(`device_class`unit_of_measurement!(`temperature;"ºC"); `device_class`unit_of_measurement!(`humidity;"%"); `unit_of_measurement`icon!("/100";"mdi:white-balance-sunny"); `device_class`unit_of_measurement!(`pressure;"hPa")) ) Doing this allows the sensors to include any optional variable they wish and we do not need to populate any nulls. 
name opts ---------------------------------------------------------------------------- temperature `device_class`unit_of_measurement!(`temperature;"\302\272C") humidity `device_class`unit_of_measurement!(`humidity;"%") light `unit_of_measurement`icon!("/100";"mdi:white-balance-sunny") pressure `device_class`unit_of_measurement!(`pressure;"hPa") Our configure function is then simplified as there need not be any special handling of optional variables. configure:{[s] msg:(!). flip ( (`name;room,string s`name); (`state_topic;"homeassistant/sensor/",room,"/state"); (`value_template;createTemplate string[s`name])); msg,:s`opts; topic:`$"homeassistant/sensor/",msg[`name],"/config"; .mqtt.pubx[topic;;1;1b] .j.j msg; } Publishing updates¶ Now that the sensors are configured the process runs a timer once per second to publish state updates. - Data is read from the serial port. - The checksum value is checked for correctness. - The data is formatted and published to the state_topic we specified. pub:{[] rawdata:last read0 ser; if[any rawdata~/:("";());:(::)]; @[{ qCRC:crc16 #[;x] last where x=","; data:"," vs x; arduinoCRC:"J"$last data; if[not qCRC=arduinoCRC;'"Failed checksum check"]; .mqtt.pub[`$"homeassistant/sensor/",room,"/state"] .j.j sensors[`name]!"F"$4#data; }; rawdata; {-1 "Error with data: \"",x,"\" '",y}[rawdata] ]; } .z.ts:{ if[not conn;connect[]]; pub[] } \t 1000 Here we use .mqtt.pub which defaults QoS to 0 and Retain to false (0b ) as these state updates are less important. An example JSON message on the topic homeassistant/sensor/livingroom/state : {"temperature":21.4,"humidity":38,"light":44,"pressure":1012} It can be noted here that we published a configure message per sensor but for state changes we are publishing a single message. This is chosen for efficiency to reduce the overall volume of messages the broker must relay. The value_template field we populated allows Home Assistant to extract the data for each sensor from the JSON collection. This allows for the number of sensors per update to be flexible. Reducing data volume¶ In our example we are publishing four values every second. In a day this is 86k updates, resulting in 344k sensor values being stored in our database. Many of these values will be repeated. As these are state changes and not events there is no value in storing repeats. In an IoT project for efficiency this should be addressed at the source and not the destination. This edge processing is necessary to reduce the hardware and network requirements throughout the full stack. The first step we take is to add a lastPub and lastVal column to our sensors metadata table: sensors:([] name:`temperature`humidity`light`pressure; lastPub:4#0Np; lastVal:4#0Nf; opts:(`device_class`unit_of_measurement!(`temperature;"ºC"); `device_class`unit_of_measurement!(`humidity;"%"); `unit_of_measurement`icon!("/100";"mdi:white-balance-sunny"); `device_class`unit_of_measurement!(`pressure;"hPa")) ) name lastPub lastVal opts -------------------------------------------------------------------------------------------- temperature `device_class`unit_of_measurement!(`temperature;"\302\272C") humidity `device_class`unit_of_measurement!(`humidity;"%") light `unit_of_measurement`icon!("/100";"mdi:white-balance-sunny") pressure `device_class`unit_of_measurement!(`pressure;"hPa") Then rather than publishing any new data from a sensor when available we instead pass it through a filterPub function. 
From: .mqtt.pub[`$"homeassistant/sensor/",room,"/state"] .j.j sensors[`name]!"F"$4#data To: filterPub "F"$4#data The function will publish data only when either the sensor value changes or if 10 minutes has elapsed since the last time a sensor had an update published: filterPub:{[newVals] now:.z.p; toPub:exec (lastPub<.z.p-0D00:10) or (not lastVal=newVals) from sensors; if[count where toPub; update lastPub:now,lastVal:newVals[where toPub] from `sensors where toPub; msg:.j.j exec name!lastVal from sensors where toPub; .mqtt.pub[`$"homeassistant/sensor/",room,"/state";msg]; ]; } Setting the 10-minute minimum publish window is important to act as a heartbeat. This will ensure that the difference between a broken or disconnected sensor can be distinguished from a sensor whose value is simply unchanged over a large time window. Also, considering MQTT does not retain values, we wish to ensure any new subscribers will receive a timely update on the value of all sensors. Now when we start our process there are fewer updates and only sensors with new values are included: homeassistant/sensor/livingroom/state "{\"temperature\":21.1,\"humidity\":38,\"light\":24,\"pressure\":1002}" homeassistant/sensor/livingroom/state "{\"light\":23}" homeassistant/sensor/livingroom/state "{\"light\":24}" homeassistant/sensor/livingroom/state "{\"pressure\":1001}" homeassistant/sensor/livingroom/state "{\"temperature\":21.2,\"pressure\":1002}" homeassistant/sensor/livingroom/state "{\"temperature\":21.1}" The UI also needed more complex value_template logic in metadata to extract the data if present but use the prevailing state value if not. From: {{ value_json.pressure }} To: {% if value_json.pressure %} {{ value_json.pressure }} {% else %} {{ states('sensor.livingroompressure') }} {% endif %} Rather than daily 86k updates resulting in 344k sensor values being stored in our database this small changes reduced those to 6k updates delivering 6.5k sensor values. Reducing updates by 93% and stored values by 98%! Home Assistant UI¶ Home Assistant includes a UI to view data. A display for the data can be configured in the UI or defined in YAML. type: glance entities: - entity: sensor.livingroomtemperature - entity: sensor.livingroomhumidity - entity: sensor.livingroompressure - entity: sensor.livingroomlight title: Living Room state_color: false show_name: false show_icon: true An overview of all the sensors: Clicking on any one sensor allows a more detailed graph to be seen: Creating a sensor database¶ Subscribing to data in kdb+¶ Subscribing to the published data from another kdb+ process is quick. 
MQTT uses / to split a topic hierarchy and when subscribing # can be used to subscribe to all subtopics: \l mqtt.q .mqtt.conn[`localhost:1883;`src;()!()] .mqtt.sub `$"homeassistant/#" Immediately on connection the broker publishes any retained messages on topics: (`msgrecvd;"homeassistant/sensor/livingroomtemperature/config";"{\"device_class\":\"temperature\",\"name\":\"temperature\",\"unique_id\":\"livingroomtemperature\",\"state_topic\":\"homeassistant/sensor/livingroom/state\",\"unit_of_measurement\":\"\302\272C\",\"value_template\":\"{{ value_json.temperature}}\"}") (`msgrecvd;"homeassistant/sensor/livingroomhumidity/config";"{\"device_class\":\"humidity\",\"name\":\"humidity\",\"unique_id\":\"livingroomhumidity\",\"state_topic\":\"homeassistant/sensor/livingroom/state\",\"unit_of_measurement\":\"%\",\"value_template\":\"{{ value_json.humidity}}\"}") (`msgrecvd;"homeassistant/sensor/livingroomlight/config";"{\"device_class\":\"None\",\"name\":\"light\",\"unique_id\":\"livingroomlight\",\"state_topic\":\"homeassistant/sensor/livingroom/state\",\"unit_of_measurement\":\"hPa\",\"value_template\":\"{{ value_json.light}}\"}") (`msgrecvd;"homeassistant/sensor/livingroompressure/config";"{\"device_class\":\"pressure\",\"name\":\"pressure\",\"unique_id\":\"livingroompressure\",\"state_topic\":\"homeassistant/sensor/livingroom/state\",\"unit_of_measurement\":\"/1024\",\"value_template\":\"{{ value_json.pressure}}\"}") Any newly published messages will follow. (`msgrecvd;"homeassistant/sensor/livingroom/state";"{\"temperature\":21.5,\"humidity\":38,\"light\":172,\"pressure\":1011}") (`msgrecvd;"homeassistant/sensor/livingroom/state";"{\"temperature\":21.5,\"humidity\":37,\"light\":172,\"pressure\":1012}") Storing MQTT sensor data¶ Home Assistant does include an SQLite embedded database for storing data. However it is not suitable for storing large amounts of historical sensor data. Instead we can look to kdb+ to do this for us. Config data¶ Storing config data is straightforward by monitoring for incoming messages in topics matching homeassistant/sensor/*/config . As each new sensor configuration arrives we extract its state_topic and subscribe to it. sensorConfig:([name:`$()] topic:`$();state_topic:`$();opts:()) discoveryPrefix:"homeassistant" .mqtt.sub `$discoveryPrefix,"/#" .mqtt.msgrcvd:{[top;msg] if[top like discoveryPrefix,"/sensor/*/config"; opts:.j.k msg; .mqtt.sub `$opts`state_topic; `sensorConfig upsert (`$opts`name;`$top;`$opts`state_topic;opts)]; } This data then populates the sensorConfig table. q)first sensorConfig topic | `homeassistant/sensor/livingroomtemperature/config state_topic| `EnvironmentalMonitor/livingroom/state opts | `name`state_topic`value_template`device_class`unit_of_measurement!("livingroomtemperature";"EnvironmentalMonitor/livingroom/state";"{% if value_json.temperature %}{{ value_json.temperature }}{% else %}{{ states('sensor.livingroomtemperature') }}{% endif %}";"temperature";"\302\272C") State data¶ As configuration has subscribed to each state_topic , sensor state data will begin to arrive. A new if block can then be added to check if the incoming topic matches a configured state_topic in the sensorConfig table and store it if so. if[(`$top) in exec state_topic from sensorConfig; store[now;top;msg]]; Extracting data from JSON templates using Jinja¶ The value_template system used by Home Assistant is a Python templating language called Jinja. To use this in kdb+ we can use PyKX to expose Python functions. 
We can write a short qjinja.py script to expose the exact function we need: from jinja2 import Template import json def states(sensor): return None; def extract(template, msg): template = Template(template) template.globals['states'] = states return template.render(value_json=json.loads(msg)); Then qjinja.q can import and expose the Python library: .qjinja.init:{[] .qjinja.filePath:{x -3+count x} value .z.s; slash:$[.z.o like "w*";"\\";"/"]; .qjinja.basePath:slash sv -1_slash vs .qjinja.filePath; if[not `p in key `;system"l ",getenv[`QHOME],slash,"p.q"]; .p.e"import sys"; .p.e "sys.path.append(\"",ssr[;"\\";"\\\\"] .qjinja.basePath,"\")"; .qjinja.py.lib:.p.import`qjinja; } .qjinja.init[] .qjinja.extract: .qjinja.py.lib[`:extract;;] Values can now be extracted easily in kdb+: q)tmp:"{{ value_json.temperature }}" / template q)msg:"{\"temperature\":22.2,\"humidity\":37,\"light\":20,\"pressure\":1027}" q).qjinja.extract[tmp;msg] "22.2" The more complex templates are also handled by making a states function available to the template: template.globals['states'] = states q)tmp"{% if value_json.temperature %}{{ value_json.temperature }}{% else %}{{ states('sensor.livingroomtemperature') }}{% endif %}" q)msg:"{\"humidity\":37,\"light\":20,\"pressure\":1027}" q).qjinja.extract[tmp;msg] "None" Now that the correct information can be extracted we can check each message arriving and apply the correct set of value_template values: store:{[now;top;msg] sensors:select name,value_template:opts[;`value_template] from sensorConfig where state_topic=`$top; sensors:update time:now, val:{{$[x~"None";0Nf;"F"$x]}.qjinja.extract[x;y]}[;msg] each value_template from sensors; `sensorState insert value exec time,name,val from sensors where not null val } Persisting data to disk¶ For the purposes of this small project the persisting logic is kept to a minimum. The process keeps data in memory for one hour and then persists to disk. A blog on partitioning data in kdb+ goes into detail on this type of storage layout and methods which could be applied to further manage memory usage which is critical in a low-power edge node. The same process exposes both in-memory and on-disk data. To enable this the on-disk table names are suffixed with Hist . sensorConfigHist:([] name:`$();topic:`$();state_topic:`$();opts:()) sensorStateHist:([] int:`int$();time:`timestamp$();name:`$();state:()) writeToDisk:{[now] .Q.dd[HDB;(`$string cHour;`sensorStateHist;`)] upsert .Q.ens[HDB;sensorState;`sensors]; `sensorState set 0#sensorState; `cHour set hour now; .Q.dd[HDB;(`sensorConfigHist;`)] set .Q.ens[HDB;0!sensorConfig;`sensors]; system"l ",1_string HDB; } Creating a basic query API¶ A very basic query API can be created to extract the data from the system. 
queryState:{[sensor;sTime;eTime] hist:delete int from select from sensorStateHist where int within hour (sTime;eTime),name like sensor,time within (sTime;eTime); realtime:select from sensorState where name like sensor, time within (sTime;eTime); hist,realtime } By using like a wildcard * can then be passed to return several sensors: q)queryState["0x00124b001b78047b*";2021.03.05D0;2021.03.07D0] time name state ------------------------------------------------------------------ 2021.03.06D19:01:29.912672000 0x00124b001b78047b battery 64 2021.03.06D19:01:29.912672000 0x00124b001b78047b temperature 19.6 2021.03.06D19:01:29.912672000 0x00124b001b78047b humidity 48.22 2021.03.06D19:01:29.912672000 0x00124b001b78047b linkquality 139 2021.03.06D19:02:58.884287000 0x00124b001b78047b battery 64 2021.03.06D19:02:58.884287000 0x00124b001b78047b temperature 19.6 2021.03.06D19:02:58.884287000 0x00124b001b78047b humidity 49.42 2021.03.06D19:02:58.884287000 0x00124b001b78047b linkquality 139 Zigbee¶ Zigbee is a wireless mesh network protocol designed for use in IoT applications. Unlike wi-fi its data transmission rate is a low 250 kbit/s but its key advantage is simplicity and lower power usage. A device such as a Sonoff SNZB-02 temperature-and-humidity sensor can wirelessly send updates for months using only a small coin-cell battery. Most often to capture data a bridge/hub device is needed which communicates over both Zigbee and wi-fi/Ethernet. An example of this would be a Philips Home Bridge, used to communicate with their range of Hue smart bulbs over Zigbee. In our example a more basic device is used, a CC2531 USB Dongle which captures data from the Zigbee radio and communicates this back to the Raspberry Pi over a serial port. Zigbee2MQTT¶ To bring the data available from the USB dongle on our MQTT IoT ecosystem we can use the Zigbee2MQTT project. This reads the serial data and publishes it on MQTT. Many devices are supported including our SNZB-02 sensor. 
Through configuration we can also turn on the supported Home Assistant integration homeassistant: true permit_join: false mqtt: base_topic: zigbee2mqtt server: 'mqtt://localhost' serial: port: /dev/ttyACM0 After running Zigbee2MQTT mosquitto_sub can quickly check if data is arriving mosquitto_sub -t 'homeassistant/sensor/#' -v The sensor can be seen configuring four sensors: temperature humidity battery linkquality The topics follow the pattern we have seen previously: homeassistant/sensor/0x00124b001b78047b/temperature/config The configuration message includes many more fields that we populated in our basic sensors: { "availability":[ { "topic":"zigbee2mqtt/bridge/state" } ], "device":{ "identifiers":[ "zigbee2mqtt_0x00124b001b78047b" ], "manufacturer":"SONOFF", "model":"Temperature and humidity sensor (SNZB-02)", "name":"0x00124b001b78047b", "sw_version":"Zigbee2MQTT 1.17.0" }, "device_class":"temperature", "json_attributes_topic":"zigbee2mqtt/0x00124b001b78047b", "name":"0x00124b001b78047b temperature", "state_topic":"zigbee2mqtt/0x00124b001b78047b", "unique_id":"0x00124b001b78047b_temperature_zigbee2mqtt", "unit_of_measurement":"°C", "value_template":"{{ value_json.temperature }}" } Subscribing to the state_topic sensor updates can be shown: mosquitto_sub -t 'zigbee2mqtt/0x00124b001b78047b' -v zigbee2mqtt/0x00124b001b78047b {"battery":64,"humidity":47.21,"linkquality":139,"temperature":19.63,"voltage":2900} Capturing Zigbee sensor data in kdb+¶ Because we designed our DIY sensors to meet a spec our system in fact needs no changes to capture the new data. The choice to break optional variables out to the opts proves useful here as many more fields have been populated: q)sensorConfig`$"0x00124b001b78047b temperature" topic | `homeassistant/sensor/0x00124b001b78047b/temperature/config state_topic| `zigbee2mqtt/0x00124b001b78047b opts | `availability`device`device_class`json_attributes_topic`name`state_topic`unique_id`unit_of_measu.. Updates for all four configured sensors are available in the sensorState table: q)select from sensorStateHist where name like "0x00124b001b78047b*" int time name state ------------------------------------------------------------------------- 185659 2021.03.06D19:10:09.694120000 0x00124b001b78047b battery 64 185659 2021.03.06D19:10:09.694120000 0x00124b001b78047b temperature 19.6 185659 2021.03.06D19:10:09.694120000 0x00124b001b78047b humidity 51.74 185659 2021.03.06D19:10:09.694120000 0x00124b001b78047b linkquality 139 185659 2021.03.06D19:26:11.522660000 0x00124b001b78047b battery 64 185659 2021.03.06D19:26:11.522660000 0x00124b001b78047b temperature 19.6 185659 2021.03.06D19:26:11.522660000 0x00124b001b78047b humidity 52.82 185659 2021.03.06D19:26:11.522660000 0x00124b001b78047b linkquality 139 Graphing the data¶ Using KX Dashboards the captured data can then be graphed: Conclusion¶ The world of IoT is complex, with many languages, systems, and protocols. To be successful interoperability and flexibility are key. Here with the MQTT interface, along with pre-existing Python and JSON functionality, kdb+ shows what can be achieved with a subset of its many interfaces. Then, as a database layer, kdb+ shows its flexibility allowing us to tailor how data is captured, stored, and queried based on the data, use case, and hardware for the application. KX POC Blog Series: Edge Computing on a Low-Profile Device rianoc/EnvironmentalMonitor rianoc/qHomeAssistantMQTT rianoc/MQTT_blog rianoc/qZigbee Author¶ Rian Ó Cuinneagáin is a manager at KX. 
Rian is currently based in Dublin working on industrial applications of kdb+.
Building real-time engines¶

The kdb+tick environment¶

The use of real-time engines (RTEs) within a tick environment provides the ability to enrich it further with real-time custom analytics and alerts. A tick environment can have one or many optional RTEs subscribing to the real-time data generated by the tickerplant (TP). An RTE can subscribe to all or a subset of the data provided by a TP. The data stored by an RTE can be as little as that required to hold the latest calculated result (or send an alert), resulting in a very low utilization of resources.

The RDB is a form of RTE. It is a real-time process that subscribes to all tables and all symbols on the tickerplant. Its behavior upon incoming updates is very simple: it inserts the records at the end of the corresponding table, so that it holds all of the current day's data. An alternative to using an RTE is to query the RDB on each client request. As this entails performing an operation on a growing dataset, it can prove much less efficient for the client while also consuming the resources of the RDB.

An RTE can also use the TP log file to recover from any unexpected intraday restarts.

Building an RTE¶

How to create an RTE will be shown using an example.

Environment setup¶

The following environment can be used to run all examples on this page.

- Download kdb+tick from KxSystems/kdb-tick
- Create a schema file with the following two tables (quote and trade ) in tick/sym.q
quote:([]time:`timespan$();sym:`symbol$();mm:`symbol$();bid:`float$();ask:`float$();bsize:`int$();asize:`int$())
trade:([]time:`timespan$();sym:`symbol$();price:`float$();size:`int$())
- Start a tickerplant
q tick.q sym . -p 5000
Refer to tick.q usage for more details. Note that a log file will be created in the current directory based on the above command, which will log every message received.
- Start one or more of the RTEs below, to connect to the tickerplant. Once data is being produced by the feed simulator you can inspect the tables generated, as shown in the relevant examples.
- Start a feed simulator to publish randomly generated data on a regular interval. The following feed.q script has been created to generate data relevant to the schema file above:
h:neg hopen `:localhost:5000 /connect to tickerplant
syms:`MSFT.O`IBM.N`GS.N`BA.N`VOD.L /stocks
prices:syms!45.15 191.10 178.50 128.04 341.30 /starting prices
n:2 /number of rows per update
flag:1 /generate 10% of updates for trade and 90% for quote
getmovement:{[s] rand[0.0001]*prices[s]} /get a random price movement
/generate trade price
getprice:{[s] prices[s]+:rand[1 -1]*getmovement[s]; prices[s]}
getbid:{[s] prices[s]-getmovement[s]} /generate bid price
getask:{[s] prices[s]+getmovement[s]} /generate ask price
/timer function
.z.ts:{
  s:n?syms;
  $[0<flag mod 10;
    h(".u.upd";`quote;(n#.z.N;s;n?`AA`BB`CC`DD;getbid'[s];getask'[s];n?1000;n?1000));
    h(".u.upd";`trade;(n#.z.N;s;getprice'[s];n?1000))];
  flag+:1;
  }
/trigger timer every 100ms
\t 100
Run as q feed.q

Points to note from the above:

- The data sent to the tickerplant is in columnar (column-oriented) list format. In other words, the tickerplant expects data as lists, not tables (see the short sketch after this list). This point will be relevant later when the RDB wishes to replay the tickerplant logfile.
- The function triggered on the tickerplant upon receipt of these updates is .u.upd .
- If you wish to increase the frequency of updates sent to the tickerplant for testing purposes, simply change the timer value at the end of this script accordingly.
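As a short sketch of that columnar format (illustrative values only, not produced by feed.q): a single quote update is a list of column vectors, and flipping a dictionary of column names to those vectors yields the table form a subscriber would see.

/ one hypothetical quote update, in the shape the feed sends it: a list of column vectors
payload:(2#.z.N;`MSFT.O`IBM.N;`AA`BB;45.14 191.08;45.16 191.10;100 200;300 400)
/ the same data as a table, which is the shape delivered to real-time subscribers
flip `time`sym`mm`bid`ask`bsize`asize!payload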
Weighted average (VWAP) example¶

This section describes how to build an RTE which calculates information used for VWAP (volume-weighted average price) on a per-symbol basis in real time. Clients can then retrieve the current VWAP value for one or many symbols. Upon an end-of-day event it will clear current records, ready to recalculate on the next trading day.

A VWAP can be defined as the total traded value divided by the total traded volume:

VWAP = sum(price × size) ÷ sum(size)

The code to create this example (vwap.q ) is as follows:

/ connect to TP
h:hopen `::5000;
/ syms to subscribe to
s:`MSFT.O`IBM.N
/ table to hold info used in vwap calc
ttrades:([sym:`$()]price:`float$();size:`int$())
/ action for real-time data
upd:{[x;y]ttrades+:select size wsum price,sum size by sym from y;}
/ subscribe to trade table for syms
h(".u.sub";`trade;s);
/ clear table on end of day
.u.end:{[x] 0N!"End of Day ",string x; delete from `ttrades;}
/ client function to retrieve vwap
/ e.g. getVWAP[`IBM.N`MSFT.O]
getVWAP:{select sym,vwap:price%size from ttrades where sym in x}

The RTE can be run as q vwap.q -p 5041 after starting a tickerplant, but prior to starting the feedhandler.

Subscribing to a TP¶

Connect to the TP using IPC (via the hopen command). For example the following connects to another process on the current host using port 5000:

h:hopen `::5000;

Once connected, a subscription to the required data is created by calling the .u.sub function in the TP using a synchronous request. An RTE should subscribe to the least amount of data required to perform its task. To aid this, the default mechanism allows filtering both by table name and by the symbol names being updated within the table. The VWAP example subscribes to any updates occurring within the trade table for the symbols MSFT.O and IBM.N:

h(".u.sub";`trade;`MSFT.O`IBM.N);

Intraday updates¶

In order to receive real-time updates for the subscriptions made, the RTE must implement the upd function. This should contain the logic required for your chosen analytic or alert.

upd[x;y]

Where

- x is a symbol atom of the name of the table being updated; e.g. `trade , `quote , etc.
- y is table data to add to table x, which can contain one or more rows. The schema used for the table will be the one defined in the TP schema file.

An example of data passed to upd in the y parameter for the example quote schema:

time                 sym    mm bid      ask      bsize asize
------------------------------------------------------------
0D11:57:53.538026000 MSFT.O BB 45.16191 45.16555 349   902
0D11:57:53.538026000 IBM.N  DD 178.4829 178.5018 31    673

y can contain one or more rows, depending on the configuration of the feed handler and TP and the filtering enabled within the subscription. When batching is enabled in either the feed handler or the TP, more than one row can be present.

The VWAP example has the following custom logic:

upd:{[x;y]ttrades+:select size wsum price,sum size by sym from y;}

It uses qSQL to select the required data, with sum and wsum performing the calculation. Both the result of the calculation and ttrades are keyed tables (dictionaries), so the + (add) operator has upsert semantics, adding the result of the calculation to the running total (ttrades ) indexed by sym; a minimal standalone illustration follows.
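The sketch below uses made-up numbers, independent of the live feed:

/ start from the empty schema used by vwap.q
ttrades:([sym:`$()]price:`float$();size:`int$())
/ first update: both syms are new keys, so their rows are simply inserted
ttrades+:([sym:`MSFT.O`IBM.N]price:100 200f;size:10 20i)
/ second update: IBM.N already exists, so its price and size are added to the running totals
ttrades+:([sym:`IBM.N]price:50f;size:5i)
/ ttrades now holds MSFT.O -> 100 10 and IBM.N -> 250 25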
The following example shows the actions of an intraday update on the ttrades keyed table:

- contents of ttrades prior to update
sym   | price    size
------| -------------
MSFT.O| 91572.43 2026
IBM.N | 269151.2 1408
- TP calls upd with x set to `trade and y set to:
time                 sym   price    size
----------------------------------------
0D13:03:22.799016000 IBM.N 191.1547 684
- result of the custom calculation performed on data passed to upd
sym  | price    size
-----| -------------
IBM.N| 130749.8 684
- contents of ttrades after update
sym   | price    size
------| -------------
MSFT.O| 91572.43 2026
IBM.N | 399901   2092

As upd is defined as a binary (2-argument) function, it could alternatively be defined as a dictionary which maps table names to unary function definitions. This duality works because of a fundamental and elegant feature of kdb+: executing functions and indexing into data structures are equivalent. The following demonstrates how an upd function can be replaced by a mapping of table name to handling function, simulating what occurs on different updates:

q)updquote:{[x]0N!"quote update with data ";show x;} / function for quote table updates
q)updtrade:{[x]0N!"trade update with data ";show x;} / function for trade table updates
q)upd:`trade`quote!(updtrade;updquote) / map table names to unique handlers for the two tables 'trade' and 'quote'
q)upd[`quote;([]a:1 2 3;b:4 5 6)]; / update for quote table calls updquote
"quote update with data "
a b
---
1 4
2 5
3 6
q)upd[`trade;([]a:1 2 3;b:4 5 6)]; / update for trade table calls updtrade
"trade update with data "
a b
---
1 4
2 5
3 6
q)upd[`not_handled;([]a:1 2 3;b:4 5 6)]; / update with no corresponding handler

The RTE could also be integrated with other processes, using IPC to call a function when specific conditions occur (i.e. an alert).

End of day¶

At end of day (EOD), the TP sends messages to all subscribed clients, telling them to execute their unary end-of-day function called .u.end.

.u.end[x]

Where x is the date that has ended, as a date atom type.

An RTE will execute its .u.end function once at end of day, regardless of whether it has one or many subscriptions. In the VWAP example, it logs that the end of day has occurred and clears the table holding the current calculation.

Client interaction¶

The RTE can provide a client API consisting of one or more functions that clients use to retrieve the results of the calculation, rather than having each client request that a specific calculation be performed. The API also hides the data structures used to record the data, leaving them self-contained for future improvements.

The VWAP example defines a getVWAP function that can take a list of symbols. An RTE client can use IPC to retrieve the current VWAP calculation for one or many symbols, for example

q)h:hopen `::5041
q)h("getVWAP";`MSFT.O)
sym    vwap
---------------
MSFT.O 45.16362
q)h("getVWAP";`MSFT.O`IBM.N)
sym    vwap
---------------
MSFT.O 45.16362
IBM.N  191.0711

Without an RTE, this calculation would have to be performed over the entire day's dataset contained within the RDB.

Weighted average (VWAP) example with recovery¶

If a situation occurs where an RTE is restarted and it requires all of today's relevant data to regain the current value, it can replay the data from the TP log (the replay primitive is sketched below).
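The replay itself uses the internal streaming-replay function -11!. A brief sketch of its forms, assuming a hypothetical TP log named sym2024.08.30 in the current directory:

-11!`:sym2024.08.30         / replay every logged message, calling upd for each one
-11!(5000;`:sym2024.08.30)  / replay only the first 5000 messages
-11!(-2;`:sym2024.08.30)    / check a possibly corrupt log, reporting how much of it is valid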
To demonstrate this, the previous example (vwap.q ) has been altered to include the ability to replay from a TP log on startup:

/ connect to TP
h:hopen `::5000;
/ syms to subscribe to
s:`MSFT.O`IBM.N
/ table to hold info used in vwap calc
ttrades:([sym:`$()]price:`float$();size:`int$())
/ action for real-time data
upd_rt:{[x;y]ttrades+:select size wsum price,sum size by sym from y}
/ action for data received from log file
upd_replay:{[x;y]if[x~`trade;upd_rt[`trade; select from (trade upsert flip y) where sym in s]];}
/ clear table on end of day
.u.end:{[x] 0N!"End of Day ",string x; delete from `ttrades;}
/ replay log file
replay:{[x]
  logf:x[1];
  if[null first logf;:()]; / return if logging not enabled on TP
  .[set;x[0]]; / create empty table for data being sent
  upd::upd_replay;
  0N!"Replaying ",(string logf[0])," messages from log ",string logf[1];
  -11!logf;
  0N!"Replay done";}
/ subscribe and initialize
replay h"(.u.sub[`trade;",(.Q.s1 s),"];.u `i`L)";
upd:upd_rt;
/ client function to retrieve vwap
/ e.g. getVWAP[`IBM.N`MSFT.O]
getVWAP:{select sym,vwap:price%size from ttrades where sym in x}

The code required to perform a replay will now be discussed using this example.

Retrieving TP Log information¶

The log information in the example is retrieved at the same time as the subscription is made. It is important to register all subscriptions at the same time as retrieving the log information, and to process the log immediately, before processing any real-time updates. This ensures that no update is applied before the log has been replayed, and that no message is missed between replaying the log and resuming real-time updates.

In the example provided, several steps are performed in one line of code:

replay h"(.u.sub[`trade;",(.Q.s1 s),"];.u `i`L)";

We can break this down into the following steps:

- Retrieve the log information stored in the TP, to get the current number of messages and the log file location, and perform the subscription. The following shows an example of requesting log information without making a subscription:
q)h:hopen `::5000;
q)h".u `i`L"
942
`:./sym2024.08.30
- Register a subscription for real-time data. The following shows an example of a TP client making a subscription for two symbols within the trade table, while also requesting the log information shown in the previous step. The return value is a two-item list: the first item is the schema information (returned by .u.sub ) and the second is the log file information.
q)h"(.u.sub[`trade;`MSFT.O`IBM.N];.u `i`L)"
`trade +`time`sym`price`size!(`timespan$();`g#`symbol$();`float$();`int$())
942 `:./sym2024.08.30
- Call a function to perform the replay of data from the TP log file, given the information returned from the previous steps. In the example we call our custom replay function with both the schema information and the log information.
replay h"(.u.sub[`trade;",(.Q.s1 s),"];.u `i`L)";

TP Log replay¶

Replaying a log file is detailed here. The data replayed has two potential differences from the live data that should be considered:

- The TP log file contains all of today's data, whereas the RTE's real-time subscription may cover only a subset of it. For example, the subscription could be filtering for particular tables or symbols. Therefore the replay action must include the logic to filter for the required data.
- Each message passed to upd structures its data as a list of vectors, one per column. Real-time data uses a table structure.
In order to handle the difference of data between replay and live data, the upd function is changed before/after replay. The VWAP example has upd::upd_replay; / set upd to upd_replay function which will then be called for each message in the log file ... -11!logf; / replay log file ... upd:upd_rt; / set upd to upd_rt function which will be used for all real-time messages As can be seen from the VWAP example, each message stored in the log file executes upd (which has been set to upd_replay ). The VWAP example is only interested in updates to the trade table for specific symbols, so upd_replay filters the data for messages matching that criteria. The data is then tranformed into the format that would normally call upd_rt so the logic to calculate VWAP is reused by passing the data to that function. Further reading¶ The default RDB (r.q) is a form of RTE, so it can be useful to understand how it works and read the related source code. A number of examples are also provided for study. Further examples¶ c.q collection¶ An often-overlooked problem is users fetching vast amounts of raw data to calculate something that could much better be built once, incrementally updated, and then made available to all interested clients. c.q provides a collection of RTE examples, such as - keeping a running Open/High/Low/Latest: much simpler to update incrementally with data from the TP each time something changes than to build from scratch. - keeping a table of the latest trade and the associated quote for every stock – trivial to do in real time with the incremental updates from the TP, but impossible to build from scratch in a timely fashion with the raw data. The default version of c.q connects to a TP and starts collecting data. Depending on your situation, you may wish to be able to replay TP data on a restart of an RTE. An alternative version that replays data from a TP log on start-up is available from simongarland/tick/clog.q . General Usage¶ q c.q CMD [host]:port[:usr:pwd] [-p 5040] | Parameter Name | Description | Default | |---|---|---| | CMD | See features for list of possible options | <none> | | host | host running kdb+ instance that the instance will subscribe to e.g. tickerplant host | localhost | | port | port of kdb+ instance that the instance will subscribe to e.g. tickerplant port | 5010 | | usr | username | <none> | | pwd | password | <none> | | -p | listening port for client communications | <none> | The t variable within the source file can be edited to a table name to filter, or an empty sym list for no filter. The s variable within the source file can be edited to a list of syms to filter on, or an empty sym list for no filter. Features¶ Possible options for CMD on command-line are: All data (with filter)¶ q c.q all [host]:port[:usr:pwd] [-p 5040] Stores all data received via subscribed tables/syms in corresponding table(s). q)trade time sym price size ----------------------------------------- 0D17:43:53.750787000 MSFT.O 45.18422 227 0D17:43:53.750787000 MSFT.O 45.18253 723 0D17:43:54.750922000 IBM.N 190.9688 31 Latest value¶ q c.q last [host]:port[:usr:pwd] [-p 5040] Stores last value per sym, for data received via subscribed tables/syms. If variable t set to subscribe to all tables (i.e. value is empty sym) then the script will also set r to the last table update received. r can contain more than one row if the feedhandler or TP is configured to send messages in batches. 
q)trade sym | time price size ------| ---------------------------------- MSFT.O| 0D17:47:44.755199000 45.21574 566 IBM.N | 0D17:47:43.751284000 191.0358 505 q)r sym | time mm bid ask bsize asize -----| ----------------------------------------------------- IBM.N| 0D17:47:46.355176000 BB 191.0336 191.0548 452 888 Five minute window¶ q c.q last5 [host]:port[:usr:pwd] [-p 5040] Populates tables with each row representing the last update within a five minute window for each sym. Latest row updates for each tick until five minute window passes and a new row is created. q)trade sym minute| time price size -------------| ---------------------------------- MSFT.O 17:45 | 0D17:49:56.755206000 45.2008 289 IBM.N 17:45 | 0D17:49:59.750186000 191.1024 79 IBM.N 17:50 | 0D17:54:55.752129000 191.2633 817 MSFT.O 17:50 | 0D17:54:58.754249000 45.22999 635 IBM.N 17:55 | 0D17:55:06.753962000 191.266 154 MSFT.O 17:55 | 0D17:55:11.751203000 45.23911 826 Trade with quote¶ q c.q tq [host]:port[:usr:pwd] [-p 5040] Records the current quote price as each trade occurs. It populates table tq with all trade updates, accompanied by the value contained within the last received quote update for the related sym. Example depends upon the tickerplant using a schema with only a quote and trade table. q)tq time sym price size bid ask bsize asize ----------------------------------------------------------------------- 0D11:11:45.566803000 MSFT.O 45.14688 209 45.14713 45.15063 55 465 0D11:11:49.868267000 MSFT.O 45.15094 288 45.14479 45.15053 27 686 Weighted average (VWAP)¶ q c.q vwap [host]:port[:usr:pwd] [-p 5040] Populates table vwap with information that can be used to generate a volume weighted adjusted price. The input volume and price can fluctuate during the day. This example uses wsum to calculate the weighted sum over the mutliple ticks that may be in a single update. Result shows size representing the total volume traded, and price being the total cost of all stocks traded. Example depends upon tickerplant using a schema with a trade table that include the columns sym, price and size. q)vwap sym | price size ------| ------------- MSFT.O| 148714.2 6348 IBM.N | 138147.1 3060 q)select sym,vwap:price%size from vwap sym vwap --------------- MSFT.O 23.42693 IBM.N 45.14611 Weighted average (VWAP with time window)¶ q c.q vwap1 [host]:port[:usr:pwd] [-p 5040] Populates table vwap with information that can be used to generate a volume weighted adjusted price. Calculation as per vwap example above. A new row is inserted per sym, when each minute passes. This presents the vwap on per minute basis. q)vwap sym minute| price size -------------| -------------- MSFT.O 11:07 | 570708.2 12643 MSFT.O 11:08 | 1328935 29425 MSFT.O 11:09 | 56653.97 1254 q)select sym,minute,vwap:price%size from vwap sym minute vwap ---------------------- MSFT.O 11:07 45.14025 MSFT.O 11:08 45.16346 MSFT.O 11:09 45.18718 Weighted average (VWAP with tick limit)¶ q c.q vwap2 [host]:port[:usr:pwd] [-p 5040] As per vwap example, but only including last ten trade messages for calculation. q)vwap sym | vwap ------| -------- MSFT.O| 45.14031 Weighted average (VWAP with time limit)¶ q c.q vwap3 [host]:port[:usr:pwd] [-p 5040] As per vwap example, but only including any trade messages received in the last minute for calculation. 
q)vwap sym | vwap ------| -------- MSFT.O| 45.14376 Moving calculation (time window)¶ q c.q move [host]:port[:usr:pwd] [-p 5040] Populates table move with moving price calculation performed in real-time, generating the price and price * volume change over a 1 minute window. Using the last tick that occurred over one minute ago, subtract from latest value. For example, price change would be +12 if the value one minute ago was 8 and the last received price was 20. Recalculates for every update. Example depends upon tickerplant using a schema with a trade table that include the columns sym, price and size. Example must be run for at least one minute. q)move sym | size size1 ------| --------------- MSFT.O| -35842.39 -794 Daily running stats¶ q c.q hlcv [host]:port[:usr:pwd] [-p 5040] Populates table hlcv with high price, low price, last price, total volume. Example depends upon tickerplant using a schema with a trade table that include the columns sym, price and size. q)hlcv sym | high low price size ------| ------------------------------- MSFT.O| 45.15094 45.14245 45.14724 5686 Categorizing into keyed table¶ q c.q lvl2 [host]:port[:usr:pwd] [-p 5040] Populates a dictionary lvl2 mapping syms to quote information. The quote information is a keyed table showing the latest quote for each market maker. q)lvl2`MSFT.O mm| time bid ask bsize asize --| -------------------------------------------------- AA| 0D10:59:44.510353000 45.15978 45.16659 883 321 CC| 0D10:59:43.010352000 45.15233 45.15853 956 293 BB| 0D10:59:45.910348000 45.15745 45.16148 533 721 DD| 0D10:59:46.209092000 45.15623 45.16231 404 279 q)lvl2`IBM.N mm| time bid ask bsize asize --| -------------------------------------------------- DD| 0D10:59:52.410404000 191.0868 191.093 768 89 AA| 0D10:59:52.410404000 191.0798 191.0976 587 140 BB| 0D10:59:54.610352000 191.1039 191.1101 187 774 CC| 0D10:59:54.310351000 191.0951 191.1116 563 711 Requires a quote schema containing a column named mm for the market maker, for example quote:([]time:`timespan$();sym:`symbol$();mm:`symbol$();bid:`float$();ask:`float$();bsize:`int$();asize:`int$()) Store to nested structures¶ q c.q nest [host]:port[:usr:pwd] [-p 5040] Creates and populates a trade table. There will be one row for each symbol, were each element is a list. Each list has its corresponding value appended to on each update i.e. four trade updates will result in a four item list of prices. Example depends upon the tickerplant publishing a trade table. q)trade sym | time price size ------| ------------------------------------------------------------------------------------------------------------------------------------- MSFT.O| 0D11:06:24.370938000 0D11:06:25.374533000 0D11:06:26.373827000 0D11:06:27.376053000 45.14767 45.14413 45.1419 45.1402 360 585 869 694 Real-time trade with as-of quotes¶ Overview¶ One of the most popular and powerful joins in the q language is the aj function. This keyword was added to the language to solve a specific problem – how to join trade and quote tables together in such a way that for each trade, we grab the prevalent quote as of the time of that trade. In other words, what is the last quote at or prior to the trade? This function is relatively easy to use for one-off joins. However, what if you want to maintain trades with as-of quotes in real time? This section describes how to build an RTE with real-time trades and as-of quotes. 
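As a reminder of the one-off form, aj takes the match columns, the trade table, and the quote table; a tiny sketch with made-up data (not the schema used later in this section):

trade:([]time:09:30:01 09:30:05 09:30:09;sym:`GS.N`GS.N`IBM.N;price:178.8 178.9 191.1)
quote:([]time:09:30:00 09:30:04 09:30:08;sym:`GS.N`IBM.N`GS.N;bid:178.7 191.0 178.85;ask:178.9 191.2 179.0)
/ each trade row gains the bid/ask from the last quote at or before its time, per sym:
/ both GS.N trades pick up the 09:30:00 quote; the IBM.N trade picks up the 09:30:04 quote
aj[`sym`time;trade;quote]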
One additional feature this script demonstrates is the ability of any q process to write to and maintain its own kdb+ binary logfile for replay/recovery purposes. In this case, the RTE maintains its own daily logfile for trade records. This will be used for recovery in place of the standard tickerplant logfile. Example script¶ This is a heavily modified version of an RDB (r.q ), written by the author and named RealTimeTradeWithAsofQuotes.q . / The purpose of this script is as follows: 1. Demonstrate how custom RTEs can be created in q 2. In this example, create an efficient engine for calculating the prevalent quotes as of trades in real-time. This removes the need for ad-hoc invocations of the aj function. 3. In this example, this subscriber also maintains its own binary log file for replay purposes. This replaces the standard tickerplant log file replay functionality. \ show "RealTimeTradeWithAsofQuotes.q" /sample usage /q tick/RealTimeTradeWithAsofQuotes.q -tp localhost:5000 -syms MSFT.O IBM.N GS.N /default command line arguments - tp is location of tickerplant. /syms are the symbols we wish to subscribe to default:`tp`syms!("::5000";"") args:.Q.opt .z.x /transform incoming cmd line arguments into a dictionary args:`$default,args /upsert args into default args[`tp] : hsym first args[`tp] /drop into debug mode if running in foreground AND /errors occur (for debugging purposes) \e 1 if[not "w"=first string .z.o;system "sleep 1"] /initialize schemas for custom RTE InitializeSchemas:`trade`quote! ( {[x]`TradeWithQuote insert update mm:`,bid:0n,bsize:0N,ask:0n,asize:0N from x}; {[x]`LatestQuote upsert select by sym from x} ); /intraday update functions /Trade Update /1. Update incoming data with latest quotes /2. Insert updated data to TradeWithQuote table /3. Append message to custom logfile updTrade:{[d] d:d lj LatestQuote; `TradeWithQuote insert d; LogfileHandle enlist (`replay;`TradeWithQuote;d); } /Quote Update /1. Calculate latest quote per sym for incoming data /2. Update LatestQuote table updQuote:{[d] `LatestQuote upsert select by sym from d; } /upd dictionary will be triggered upon incoming update from tickerplant upd:`trade`quote!(updTrade;updQuote) /end of day function - triggered by tickerplant at EOD .u.end:{ hclose LogfileHandle; /close the connection to the old log file /create the new logfile logfile::hsym `$"RealTimeTradeWithAsofQuotes_",string .z.D; .[logfile;();:;()]; /Initialize the new log file LogfileHandle::hopen logfile; {delete from x}each tables `. /clear out tables } /Initialize name of custom logfile logfile:hsym `$"RealTimeTradeWithAsofQuotes_",string .z.D; replay:{[t;d]t insert d} /custom log file replay function /attempt to replay custom log file @[{-11!x;show"successfully replayed custom log file"}; logfile; {[e] m:"failed to replay custom log file"; show m," - assume it does not exist. Creating it now"; .[logfile;();:;()]; /Initialize the log file } ] /open a connection to log file for writing LogfileHandle:hopen logfile / connect to tickerplant and subscribe to trade and quote for portfolio h:hopen args`tp; /connect to tickerplant InitializeSchemas . h(".u.sub";`trade;args`syms); InitializeSchemas . h(".u.sub";`quote;args`syms); This process should be started off as follows: q tick/RealTimeTradeWithAsofQuotes.q -tp localhost:5000 -syms MSFT.O IBM.N GS.N -p 5003 This process will subscribe to both trade and quote tables for symbols MSFT.O , IBM.N and GS.N and will listen on port 5003. 
The author has deliberately made some of the q syntax more easily understandable compared to r.q . The first section of the script simply parses the command-line arguments and uses these to update some default values. The error flag \e is set for purely testing purposes. When the developer runs this script in the foreground, if errors occur at runtime as a result of incoming IPC messages, the process will drop into debug mode. For example, if there is a problem with the definition of upd , then when an update is received from the tickerplant we will drop into debug mode and (hopefully) identify the issue. We can see this RTE in action by examining the five most recent trades for GS.N : q)-5#select from TradeWithQuote where sym=`GS.N time sym price size bid bsize ask asize --------------------------------------------------------------------- 0D21:50:58.857411000 GS.N 178.83 790 178.8148 25 178.8408 98 0D21:51:00.158357000 GS.N 178.8315 312 178.8126 12 178.831 664 0D21:51:01.157842000 GS.N 178.8463 307 178.8193 767 178.8383 697 0D21:51:03.258055000 GS.N 178.8296 221 178.83 370 178.8627 358 0D21:51:03.317152000 GS.N 178.8314 198 178.8296 915 178.8587 480 Initialize desired table schemas¶ InitializeSchemas defines the behavior of this RTE upon connecting and subscribing to the tickerplant’s trade and quote tables. InitializeSchemas (defined as a dictionary which maps table names to unary function definitions) replaces .u.rep in r.q : The RTE’s trade table (named TradeWithQuote ) maintains bid , bsize , ask and asize columns of appropriate type. For the quote table, we just maintain a keyed table called LatestQuote , keyed on sym which will maintain the most recent quote per symbol. This table will be used when joining prevalent quotes to incoming trades. Intraday update behavior¶ updTrade defines the intraday behavior upon receiving new trades. Besides inserting the new trades with prevalent quote information into the trade table, updTrade also appends the new records to its custom logfile. This logfile will be replayed upon recovery/startup of the RTE. Note that the replay function is named replay . This differs from the conventional TP logfile where the replay function was called upd . updQuote defines the intraday behavior upon receiving new quotes. The upd dictionary acts as a case statement – when an update for the trade table is received, updTrade will be triggered with the message as argument. Likewise, when an update for the quote table is received, updQuote will be triggered. In r.q , upd is defined as a function, not a dictionary. However we can use this dictionary definition for reasons discussed previously. End of day¶ At end of day, the tickerplant sends a message to all RTEs telling them to invoke their EOD function (.u.end ): This function has been heavily modified from r.q to achieve the following desired behavior: hclose LogfileHandle - Close connection to the custom logfile. logfile::hsym `$"RealTimeTradeWithAsofQuotes_",string .z.D - Create the name of the new custom logfile. This logfile is a daily logfile – meaning it only contains one day’s trade records and it has today’s date in its name, just like the tickerplant’s logfile. .[logfile;();:;()] - Initialize this logfile with an empty list. LogfileHandle::hopen logfile - Establish a connection (handle) to this logfile for streaming writes. {delete from x}each tables `. - Empty out the tables. Replay custom logfile¶ This section concerns the initialization and replay of the RTE’s custom logfile. 
/Initialize name of custom logfile logfile:hsym `$"RealTimeTradeWithAsofQuotes_",string .z.D replay:{[t;d]t insert d} /custom log file replay function At this point, the name of today’s logfile and the definition of the logfile replay function have been established. The replay function will be invoked when replaying the process’s custom daily logfile. It is defined to simply insert the on-disk records into the in memory (TradeWithQuote ) table. This will be a fast operation ensuring recovery is achieved quickly and efficiently. Upon startup, the process uses a try-catch to replay its custom daily logfile. If it fails for any reason (possibly because the logfile does not yet exist), it will send an appropriate message to standard out and will initialize this logfile. Replay of the logfile is achieved with the standard operator -11! as discussed previously. /attempt to replay custom log file @[{-11!x;show"successfully replayed custom log file"}; logfile; {[e] m:"failed to replay custom log file"; show m," - assume it does not exist. Creating it now"; .[logfile;();:;()]; /Initialize the log file } ] Once the logfile has been successfully replayed/initialized, a handle (connection) is established to it for subsequent streaming appends (upon new incoming trades from tickerplant): /open a connection to log file for writing LogfileHandle:hopen logfile Subscribe to TP¶ The next part of the script is probably the most critical – the process connects to the tickerplant and subscribes to the trade and quote table for user-specified symbols. / connect to tickerplant and subscribe to trade and quote for portfolio h:hopen args`tp; /connect to tickerplant InitializeSchemas . h(".u.sub";`trade;args`syms); InitializeSchemas . h(".u.sub";`quote;args`syms); The output of a subscription to a given table (for example trade ) from the tickerplant is a 2-list, as discussed previously. This pair is in turn passed to the function InitializeSchemas . Performance considerations¶ The developer can build the RTE to achieve whatever real-time behavior is desired. However from a performance perspective, not all RTE instances are equal. The standard RDB is highly performant – meaning it should be able process updates at a very high frequency without maxing out CPU resources. In a real world environment, it is critical that the RTE can finish processing an incoming update before the next one arrives. The high level of RDB performance comes from the fact that its definition of upd is extremely simple: upd:insert In other words, for both TP logfile replay and intraday updates, simply insert the records into the table. It doesn’t take much time to execute insert in kdb+. However, the two custom RTE instances discussed in this white paper have more complicated definitions of upd for intraday updates and will therefore be less performant. This section examines this relative performance. For this test, the TP log will be used. 
This particular TP logfile has the following characteristics: q)hcount `:C:/OnDiskDB/sym2014.08.15 /size of TP logfile on disk in bytes 41824262 q)logs:get`:C:/OnDiskDB/sym2014.08.15 /load logfile into memory q)count logs /number of updates in logfile 284131 We can examine the contents of the logfile as follows: q)2#logs /display first 2 messages in logfile `upd `quote (0D16:05:08.818951000 0D16:05:08.818951000;`GS.N`VOD.L;78.5033 53.47096;17.80839 30.17723;522 257;908 360) `upd `quote (0D16:05:08.918957000 0D16:05:08.918957000;`VOD.L`IBM.N;69.16099 22.96615;61.37452 52.94808;694 934;959 221) In this case, the first two updates were for quote , not trade . Given the sample feedhandler used, each update for trade or quote had two records. The overall number of trade and quote updates in this logfile were: q)count each group logs[;1] quote| 255720 trade| 28411 It was previously mentioned that the TP logfile has the data in columnar list format as opposed to table format, whereas intraday TP updates are in table format. Therefore, in order to simulate intraday updates, a copy of the TP logfile is created where the data is in table format. The code to achieve this transformation is below: /LogfileTransform.q \l tick/sym.q /obtain table schemas d:`trade`quote!(cols trade;cols quote) `:C:/OnDiskDB/NewLogFile set () /initialize new logfile h:hopen `:C:/OnDiskDB/NewLogFile /handle to NewLogFile upd:{[tblName;tblData] h enlist(`upd;tblName;flip(d tblName)!tblData); } -11!`:C:/OnDiskDB/sym2014.08.15 /replay TP log file and create new one This transformed logfile will now be used to test performance on the RDB and two RTE instances. On the RDB, we obtained the following performance: q)upd /vanilla, simple update behavior insert q)logs:get`:C:/OnDiskDB/NewLogFile /load logfile into memory q)count logs /number of messages to process 284131 q)\ts value each logs /execute each update 289 31636704 It took 289 milliseconds to process over a quarter of a million updates, where each update had two records. Therefore, the average time taken to process a single two-row update is 1µs. In the first example RTE (Real-time Trade With As-of Quotes), we obtained the following performance: q)upd /custom real time update behavior trade| {[d] d:d lj LatestQuote; `TradeWithQuote insert d; LogfileHandle enlist (`replay;`TradeWithQuote;d); } quote| {[d] `LatestQuote upsert select by sym from d; } q)logs:get`:C:/OnDiskDB/NewLogFile /load logfile into memory q)count logs /number of messages to process 284131 q)\ts value each logs /execute each update 2185 9962336 It took 2185 milliseconds to process over a quarter of a million updates, where each update had two records. Therefore, the average time taken to process a single two-row update is 7.7 µs – over seven times slower than RDB. In the second example RTE (Real-time VWAP), we obtained the following performance: / Because there are trades and quotes in the logfile but this RTE is only designed to handle trades, a slight change to upd is necessary for the purpose of this performance experiment \ /If trade – process as normal. If quote - ignore q)upd:{if[x=`trade;updIntraDay[`trade;y]]} q) q)logs:get`:C:/OnDiskDB/NewLogFile /load logfile into memory q)count logs /number of messages to process 284131 q)\ts value each logs /execute each update 9639 5505952 It took 9639 milliseconds to process over a quarter of a million updates, where each update had two records. Therefore, the average time taken to process a single two row update is 34 µs – over thirty times slower than RDB. 
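As a quick cross-check (not part of the original test), the per-update averages quoted above follow directly from the \ts timings and the message count:
q)(1000*289 2185 9639)%284131  / milliseconds to microseconds, divided by update count
1.017136 7.690115 33.92449
That is, roughly 1 µs, 7.7 µs and 34 µs per two-row update for the RDB and the two RTE instances respectively.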
We can conclude that update-processing performance differs significantly across the various RTEs. Even in the worst case, however, the process should keep up comfortably provided tickerplant updates arrive no more frequently than once every 100 µs. Note that all tables on each process were emptied before the experiment was run.
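Clearing the tables uses the same idiom as the RTE’s end-of-day function:
q){delete from x} each tables `.   / empty every table in the root namespace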
\d .iex main_url:"https://cloud.iexapis.com/stable" token:getenv[`IEX_PUBLIC_TOKEN] convert_epoch:{"p"$1970.01.01D+1000000j*x} reqtype:`both syms:`CAT`DOG callback:".u.upd" quote_suffix:{[sym] "/stock/",sym,"/quote?token="} trade_suffix:{[sym] "/tops/last?symbols=",sym,"&token="} upd:{[t;x] .iex.callbackhandle(.iex.callback;t; value flip delete time from x)} timerperiod:0D00:00:02.000 \d . ================================================================================ FILE: TorQ-Finance-Starter-Pack_appconfig_settings_killtick.q SIZE: 551 characters ================================================================================ // Default configuration - loaded by all processes : Finance Starter Pack \d .servers enabled:1b // whether server tracking is enabled CONNECTIONS:`hdb`rdb`segmentedtickerplant`gateway`wdb // list of connections to make at start up DISCOVERYREGISTER:0b // whether to register with the discovery service CONNECTIONSFROMDISCOVERY:0b // whether to get connection details from the discovery service (as opposed to the static file) SUBSCRIBETODISCOVERY:0b // whether to subscribe to the discovery service for new processes becoming available ================================================================================ FILE: TorQ-Finance-Starter-Pack_appconfig_settings_metrics.q SIZE: 277 characters ================================================================================ / Default config for metrics process \d .metrics windows:0D00:01 0D00:05 0D01:00; / 1 minute, 5 minute, 1 hour windows enableallday:1b; / enable all day window by default tickerplanttypes:`segmentedtickerplant; / type of tickerplant to connect to ================================================================================ FILE: TorQ-Finance-Starter-Pack_appconfig_settings_monitor.q SIZE: 255 characters ================================================================================ // Bespoke Monitor config : Finance Starter Pack \d .servers // list of connections to make at start up // can't use `ALL as the tickerplant doesn't publish heartbeats CONNECTIONS:`discovery`rdb`hdb`wdb`sort`gateway`housekeeping`reporter`feed`sortworker ================================================================================ FILE: TorQ-Finance-Starter-Pack_appconfig_settings_rdb.q SIZE: 824 characters ================================================================================ // Bespoke RDB config : Finance Starter Pack \d .rdb hdbdir:hsym`$getenv[`KDBHDB] // the location of the hdb directory reloadenabled:1b // if true, the RDB will not save when .u.end is called but // will clear it's data using reload function (called by the WDB) connectonstart:1b // rdb connects and subscribes to tickerplant on startup tickerplanttypes:`segmentedtickerplant gatewatypes:`none replaylog:1b hdbtypes:() //connection to HDB not needed subfiltered:0b // path to rdbsub{i}.csv subcsv:hsym first `.proc.getconfigfile["rdbsub/rdbsub",(3_string .proc`procname),".csv"] \d .servers CONNECTIONS:enlist `gateway // if connectonstart false, include tickerplant in tickerplanttypes, not in CONNECTIONS ================================================================================ FILE: TorQ-Finance-Starter-Pack_appconfig_settings_segmentedchainedtickerplant.q SIZE: 1,190 characters ================================================================================ \d . 
createlogs:0b; // create an stp log file (off in SCTP as createlogs does not control SCTP logging) \d .sctp chainedtp:1b; // switched between STP and SCTP codebases loggingmode:`none; // [none|create|parent] tickerplantname:`stp1; // list of tickerplant names to try and make a connection to subscribeto:`; // list of tables to subscribe for subscribesyms:`; // list of syms to subscription to replay:0b; // replay the tickerplant log file schema:1b; // retrieve schema from tickerplant \d .stplg multilog:`tabperiod; // [tabperiod|none|periodic|tabular|custom] multilogperiod:0D01; errmode:1b; batchmode:`defaultbatch; // [autobatch|defaultbatch|immediate] customcsv:hsym first .proc.getconfigfile["stpcustom.csv"]; replayperiod:`day // [period|day|prior] \d .proc loadcommoncode:1b; loadprocesscode:1b; \d .timer enabled:1b; // enable timer \d .subcut enabled:1b // switch on subscribercutoff \d .servers CONNECTIONS,:`segmentedtickerplant CONNECTIONSFROMDISCOVERY:1b ================================================================================ FILE: TorQ-Finance-Starter-Pack_appconfig_settings_segmentedtickerplant.q SIZE: 454 characters ================================================================================ \d . createlogs:1b; // create a logs \d .stplg multilog:`tabperiod; // [tabperiod|singular|periodic|tabular|custom] multilogperiod:0D01; errmode:1b; batchmode:`defaultbatch; // [memorybatch|defaultbatch|immediate] customcsv:hsym first .proc.getconfigfile["stpcustom.csv"]; replayperiod:`day // [period|day|prior] \d .proc loadprocesscode:1b; \d .eodtime datatimezone:`$"GMT"; rolltimezone:`$"GMT"; ================================================================================ FILE: TorQ-Finance-Starter-Pack_appconfig_settings_sort.q SIZE: 453 characters ================================================================================ // Bespoke Sort config : Finance Starter Pack \d .wdb savedir:hsym `$getenv[`KDBWDB] // location to save wdb data hdbdir:hsym`$getenv[`KDBHDB] // move wdb database to different location tickerplanttypes:sorttypes:() // sort doesn't need these connections sortworkertypes:enlist `sortworker; // sort should use sortworkers by default \d .servers CONNECTIONS:`hdb`rdb`gateway`sortworker // list of connections to make at start up ================================================================================ FILE: TorQ-Finance-Starter-Pack_appconfig_settings_tickerplant.q SIZE: 771 characters ================================================================================ // Bespoke Tickerplant config : Finance Starter Pack \d .proc loadcommoncode:0b // do not load common code logroll:0b // do not roll logs // Configuration used by the usage functions - logging of client interaction \d .usage enabled:0b // switch off the usage logging // Client tracking configuration // This is the only thing we want to do // and only for connections being opened and closed \d .clients enabled:1b // whether client tracking is enabled opencloseonly:1b // only log open and closing of connections // Server connection details \d .servers enabled:0b // disable server tracking \d .timer enabled:0b // disable the timer \d .hb enabled:0b // disable heartbeating \d .zpsignore enabled:0b // disable zpsignore - zps should be empty ================================================================================ FILE: TorQ-Finance-Starter-Pack_appconfig_settings_wdb.q SIZE: 333 characters ================================================================================ // Bespoke WDB config : 
Finance Starter Pack \d .wdb savedir:hsym `$getenv[`KDBWDB] // location to save wdb data hdbdir:hsym`$getenv[`KDBHDB] // move wdb database to different location sortworkertypes:() // WDB doesn't need to connect to sortworkers \d .servers CONNECTIONS:`segmentedtickerplant`sort`gateway`rdb`hdb ================================================================================ FILE: TorQ-Finance-Starter-Pack_code_hdb_examplequeries.q SIZE: 415 characters ================================================================================ /- HDB query for counting by sym countbysym:{[startdate;enddate] select sum size, tradecount:count i by sym from trade where date within (startdate;enddate)} /- time bucketted count hloc:{[startdate;enddate;bucket] select high:max price, low:min price, open:first price,close:last price,totalsize:sum `long$size, vwap:size wavg price by sym, bucket xbar time from trade where date within (startdate;enddate)} ================================================================================ FILE: TorQ-Finance-Starter-Pack_code_iexfeed_iex.q SIZE: 2,953 characters ================================================================================ \d .iex main_url:@[value;`main_url;"https://cloud.iexapis.com/stable"]; token:@[value;`token;""]; convert_epoch:@[value;`convert_epoch;{{"p"$1970.01.01D+1000000j*x}}]; reqtype:@[value;`reqtype;`both]; syms:@[value;`syms;`CAT`DOG]; callback:@[value;`callback;".u.upd"]; callbackhandle:@[value;`callbackhandle;0i]; callbackconnection:@[value;`callbackconnection;`]; quote_suffix:@[value;`quote_suffix;{{[sym] "/stock/",sym,"/quote?token="}}]; trade_suffix:@[value;`trade_suffix;{{[sym] "/tops/last?symbols=",sym,"&token="}}]; upd:@[value;`upd;{{[t;x].iex.callbackhandle(.iex.callback;t; value flip x)}}]; timerperiod:@[value;`timerperiod;0D00:00:02.000]; init:{[x] if[`main_url in key x;.iex.main_url:x `main_url]; if[`token in key x;.iex.token:x `token]; if[`quote_suffix in key x;.iex.quote_suffix:x `quote_suffix]; if[`trade_suffix in key x;.iex.trade_suffix:x`trade_suffix]; if[`syms in key x;.iex.syms: upper x`syms]; if[`reqtype in key x;.iex.reqtype:x`reqtype]; if[`callbackconnection in key x;.iex.callbackhandle:neg hopen .iex.callbackconnection:x `callbackconnection]; if[`callbackhandle in key x;.iex.callbackhandle:x `callbackhandle]; if[`callback in key x;.iex.callback: $[.iex.callbackhandle=0; string @[value;x `callback;{[x;y]x set {[t;x]x}}[x`callback]]; x`callback]]; if[`upd in key x; .iex.upd:x[`upd]]; .iex.timer:$[not .iex.reqtype in key .iex.timer_dict;'`timer;.iex.timer_dict .iex.reqtype]; } get_data:{[suffix] :.Q.hg hsym `$.iex.main_url,suffix,.iex.token; } get_last_trade:{tab:{[syms] / This function can run for multiple securities. syms:$[1<count syms;"," sv string[upper syms];string[upper syms]]; / Construct the GET request suffix:.iex.trade_suffix[syms]; / Parse json response and put into table. 
Trade data from https://iextrading.com/developer/ data:.j.k .iex.get_data[suffix]; tab:select sym:`$symbol, price:`float$price, size:`int$size, stop:(count data)#0b, cond:(count data)#`char$(), ex:(count data)#`char$(), srctime:.iex.convert_epoch time from data }[.iex.syms]; .iex.upd[`trade_iex;tab] } get_quote:{tab:raze {[sym] sym:string[upper sym]; suffix:.iex.quote_suffix[sym]; / Parse json response and put into table data: enlist .j.k .iex.get_data[suffix]; select sym:`$symbol, bid: `float$iexBidPrice, ask:`float$iexAskPrice, bsize:`long$iexBidSize, asize:`long$iexAskSize, mode:(count data)#`char$(), ex:(count data)#`char$(), srctime:.iex.convert_epoch latestUpdate from data }'[.iex.syms,()]; .iex.upd[`quote_iex;tab] } timer_both:{.iex.get_last_trade[];.iex.get_quote[]} timer_dict:`trade`quote`both!(.iex.get_last_trade;.iex.get_quote;timer_both) timer:{ @[$[not .iex.reqtype in key .iex.timer_dict; {'`$"timer request type not valid: ",string .iex.reqtype}; .iex.timer_dict[.iex.reqtype]]; []; {.lg.e[`iextimer;"failed to run iex timer function: ",x]}]} \d . ================================================================================ FILE: TorQ-Finance-Starter-Pack_code_processes_iexfeed.q SIZE: 175 characters ================================================================================ .servers.startup[] .iex.callbackhandle:neg .servers.gethandlebytype[`segmentedtickerplant;`any] .timer.repeat[.proc.cp[];0Wp;.iex.timerperiod;(`.iex.timer;`);"Publish Feed"];
Linking columns¶ A link column is similar to a foreign key – it links the values of a column in a table to the values in a column in a second table. They differ: - a foreign-key column is an enumeration over the key column of a keyed table - a link column consists of indexes into an arbitrary column of an arbitrary table A link column is useful where a key column is not available. For example: - a table can contain a link to itself to represent a parent-child relationship - links can represent ‘foreign-key’ relationships between splayed tables, which cannot be keyed Tables in memory¶ In our first example, a link column from a table to itself represents a parent-child relationship. q)t:([] id:101 102 103 104; v:1.1 2.2 3.3 4.4) To create the column parent , we look up the values in the key column using ? and then declare the link using ! – instead of $ as we would for a foreign key enumeration. q)update parent:`t!id?101 101 102 102 from `t `t q)t id v parent -------------- 101 1.1 0 102 2.2 0 103 3.3 1 104 4.4 1 Observe that meta displays the target table of the link in the f column, just as it does for a foreign key. q)meta t c | t f a ------| ----- id | i v | f parent| i t And, just as with a foreign key, we can use dot notation on the link column to follow the link and access any column in the linked table. q)select id, parentid:parent.id from t id parentid ------------ 101 101 102 101 103 102 104 102 Next, we create a link between two columns of enumerated symbols, since this occurs frequently in practice. The table t1 has a column c1 enumerated over sym . q)sym:() q)show t1:([] c1:`sym?`c`b`a; c2: 10 20 30) c1 c2 ----- c 10 b 20 a 30 The table t2 has a column c3 also enumerated over sym , and whose values are drawn from those of column c1 in t1 . q)show t2:([] c3:`sym?`a`b`a`c; c4: 1 2 3 4.) c3 c4 ----- a 1 b 2 a 3 c 4 As before, we use ? to create a vector of indices and we use ! in place of $ to create the link. q)update t1link:`t1!t1.c1?c3 from `t2 Again the f column of meta indicates the table over which the link is created. q)meta t2 c | t f a ------| ------ c3 | s c4 | f t1link| i t1 Now we can issue queries that traverse the link using dot notation. q)select c3, t1link.c2 from t2 c3 c2 ----- a 30 b 20 a 30 c 10 Splayed tables¶ Suppose table t1 has already been splayed on disk and mapped into the q session. Note that dot notation does not work for splayed tables when creating the link. q)`:data/db/t1/ set .Q.en[`:data/db/; ([] c1:`c`b`a; c2: 10 20 30)] `:data/db/t1/ q)\l data/db q)meta t1 c | t f a --| ----- c1| s c2| i Create a link column in table t2 as it is splayed. (Do this on each append if you are creating t2 incrementally on disk.) q)temp:.Q.en[`:data/db/; ([] c3:`a`b`a`c; c4: 1. 2. 3. 4.)] q)`:data/db/t2/ set update t1link:`t1!t1[`c1]?c3 from temp `:data/db/t2/ Remap and check meta . q)\l data/db q)meta t2 c | t f a ------| ------ c3 | s c4 | f t1link| i t1 Now execute a query across the link: q)select t1link.c2, c3 from t2 c2 c3 ----- 30 a 20 b 30 a 10 c Next suppose t1 and t2 have both been splayed. q)`:data/db/t1/ set .Q.en[`:data/db/; ([] c1:`c`b`a; c2: 10 20 30)] `:data/db/t1/ q)`:data/db/t2/ set .Q.en[`:data/db/; ([] c3:`a`b`a`c; c4: 1. 2. 3. 4.)] `:data/db/t2/ First create the link when both tables have been mapped into memory. q)\l data/db Create the link column as before, but update the splayed files of t2 manually. 
q)`:data/db/t2/t1link set `t1!t1[`c1]?t2`c3 `:data/db/t2/t1link q)`:data/db/t2/.d set (cols t2),`t1link `:data/db/t2/.d Remap and execute the query as before. q)\l data/db q)meta t2 c | t f a ------| ------ c3 | s c4 | f t1link| i t1 q)select t1link.c2, c3 from t2 c2 c3 ----- 30 a 20 b 30 a 10 c Finally, consider two splayed tables that have not been mapped. q)`:data/db/t1/ set .Q.en[`:data/db/; ([] c1:`c`b`a; c2: 10 20 30)] `:data/db/t1/ q)`:data/db/t2/ set .Q.en[`:data/db/; ([] c3:`a`b`a`c; c4: 1. 2. 3. 4.)] `:data/db/t2/ Retrieve the column lists manually and proceed as before. q)`:data/db/t2/t1link set `t1!(get `:data/db/t1/c1)?get `:data/db/t2/c3 `:data/db/t2/t1link q)colst2:get `:data/db/t2/.d q)`:data/db/t2/.d set colst2,`t1link `:data/db/t2/.d Partitioned tables¶ Partitioned tables can have link columns provided the links do not span partitions. In particular, you cannot link across days for a table partitioned by date. Creating a link column in a partitioned table is best done as each partition is written. The process then reduces to that for splayed tables. Create a link between non-symbol columns in the simple partitioned tables t1 and t2 . First, create the first day’s tables with the link and save them to a partition. q)t1:([] id:101 102 103; v:1.1 2.2 3.3) q)t2:([] t1link:`t1!t1[`id]?103 101 101 102; n:10 20 30 40) q)`:temp/db/2019.01.01/t1/ set t1 `:temp/db/2019.01.01/t1/ q)`:temp/db/2019.01.01/t2/ set t2 `:temp/db/2019.01.01/t2/ Do the same for the second day. q)t1:([] id:104 105; v:4.4 5.5) q)t2:([] t1link:`t1!t1[`id]?105 104 104; n:50 60 70) q)`:temp/db/2019.01.02/t1/ set t1 `:temp/db/2019.01.02/t1/ q)`:temp/db/2019.01.02/t2/ set t2 `:temp/db/2019.01.02/t2/ Finally, restart kdb+, map the tables and execute a query across the link. $ q KDB+ 4.0 2020.10.02 Copyright (C) 1993-2020 Kx Systems q)\l temp/db q)select date,n,t1link.v from t2 where date within 2019.01.01 2019.01.02 date n v ----------------- 2019.01.01 10 3.3 2019.01.01 20 1.1 2019.01.01 30 1.1 2019.01.01 40 2.2 2019.01.02 50 5.5 2019.01.02 60 4.4 2019.01.02 70 4.4 The final example is similar except that it creates a link over enumerated symbol columns. q)/ day 1 q)t1:([] c1:`c`b`a; c2: 10 20 30) q)`:temp/db/2019.01.01/t1/ set .Q.en[`:temp/db/; t1] `:temp/db/2019.01.01/t1/ q)t2:([] c3:`a`b`a`c; c4: 1. 2. 3. 4.) q)`:temp/db/2019.01.01/t2/ set .Q.en[`:temp/db/; update t1link:`t1!t1[`c1]?c2 from t2] `:temp/db/2019.01.01/t2/ q)/ day 2 q)t1:([] c1:`d`a; c2: 40 50) q)`:temp/db/2019.01.02/t1/ set .Q.en[`:temp/db/; t1] `:temp/db/2019.01.02/t1/ q)t2:([] c3:`d`a`d; c4:5. 6. 7.) q)`:temp/db/2019.01.02/t2/ set .Q.en[`:temp/db/; update t1link:`t1!t1[`c1]?c2 from t2] `:temp/db/2019.01.02/t2/ q)/ remap q)\l temp/db q)select c3,t1link.c2,c4 from t2 where date within 2019.01.01 2019.01.02 c3 c2 c4 -------- a 30 1 b 20 2 a 30 3 c 10 4 d 40 5 a 50 6 d 40 7 A link column with domain of a partitioned table requires the encompassing table to be partitioned too. Signals a par error since 4.1t 2022.04.15. q).Q.dd[`:/tmp/db1;`2022.01.01`ecfmapping`] set .Q.en[`:/tmp/db1] ([]firmName:enlist "DUMMY") q)\l /tmp/db1 q)select ecfmap.firmName from ([id:1 2];ecfmap:`ecfmapping!1 1) 'par [0] select ecfmap.firmName from ([id:1 2];ecfmap:`ecfmapping!1 1) Since 4.1t 2023.08.04,4.0 2023.08.11 references to linked columns under group by no longer require remapping the foreign column for every group. The application of foreign keys and linked columns in kdb+ Q for Mortals §8.5 Foreign Keys and Virtual Columns
avg , avgs , mavg , wavg ¶ Averages avg ¶ Arithmetic mean avg x avg[x] Where x is a numeric or temporal list, returns the arithmetic mean as a float. The mean of an atom is its value as a float. Null is returned if x is empty, or contains both positive and negative infinity. Where x is a vector null items are ignored. q)avg 1 2 3 2f q)avg 1 0n 2 3 / vector: null items are ignored 2f q)avg (1 2;0N 4) / nested: null items are preserved 0n 3 q)avg 1.0 0w 0w q)avg -0w 0w 0n q)avg 101b 0.6666667 q)avg 1b 1f q)\l trade.q q)show select ap:avg price by sym from trade sym| ap ---| ----- a | 10.75 avg is an aggregate function, equivalent to {sum[x]%count x} . domain: b g x h i j e f c s p m d z n u v t range: f . f f f f f f f . f f f f f f f f avg is a multithreaded primitive. avgs ¶ Running averages avgs x avgs[x] Where x is a numeric or temporal list, returns the running averages, i.e. applies function avg to successive prefixes of x . q)avgs 1 2 3 0n 4 -0w 0w 1 1.5 2 2 2.5 -0w 0n avgs is a uniform function, equivalent to (avg\) . domain: b g x h i j e f c s p m d z n u v t range: f . f f f f f f . . f f f f f f f f mavg ¶ Moving averages x mavg y mavg[x;y] Where x is a positive int atom (not infinite)y is a numeric list returns the x -item simple moving averages of y , with any nulls after the first item replaced by zero. The first x items of the result are the averages of the terms so far, and thereafter the result is the moving average. The result is floating point. q)2 mavg 1 2 3 5 7 10 1 1.5 2.5 4 6 8.5 q)5 mavg 1 2 3 5 7 10 1 1.5 2 2.75 3.6 5.4 q)5 mavg 0N 2 0N 5 7 0N / nulls after the first are replaced by 0 0n 2 2 3.5 4.666667 4.666667 q)0 mavg 2 3 0n 0n mavg is a uniform function. Domain and range: b g x h i j e f c s p m d z n u v t ---------------------------------------- b | f . f f f f f f . . f f f f f f f f g | . . . . . . . . . . . . . . . . . . x | f . f f f f f f . . f f f f f f f f h | f . f f f f f f . . f f f f f f f f i | f . f f f f f f . . f f f f f f f f j | f . f f f f f f . . f f f f f f f f e | . . . . . . . . . . . . . . . . . . f | . . . . . . . . . . . . . . . . . . c | . . . . . . . . . . . . . . . . . . s | . . . . . . . . . . . . . . . . . . p | . . . . . . . . . . . . . . . . . . m | . . . . . . . . . . . . . . . . . . d | . . . . . . . . . . . . . . . . . . z | . . . . . . . . . . . . . . . . . . n | . . . . . . . . . . . . . . . . . . u | . . . . . . . . . . . . . . . . . . v | . . . . . . . . . . . . . . . . . . t | . . . . . . . . . . . . . . . . . . Range: f wavg ¶ Weighted average x wavg y wavg[x;y] Where x is a numeric listy is a numeric list returns the average of numeric list y weighted by numeric list x . The result is a float atom. q)2 3 4 wavg 1 2 4 2.666667 q)2 0N 4 5 wavg 1 2 0N 8 / nulls in either argument ignored 6f q)0 wavg 2 3 0n / since 4.1t 2021.09.03,4.0 2021.10.01, previously returned 2.5 q)0 wavg (1 2;3 4) 0n 0n / since 4.0/4.1 2024.07.08, previously returned 0n Where x and y conform, the result has an atom for each sublist. q)(1 2;3 4) wavg (500 400; 300 200) 350 266.6667 The financial analytic known as VWAP (volume-weighted average price) is a weighted average. q)select size wavg price by sym from trade sym| price ---| ----- a | 10.75 wavg is an aggregate function, equivalent to {(sum x*y)%sum x} . Domain and range: b g x h i j e f c s p m d z n u v t ---------------------------------------- b | f . f f f f f f f . f f f f f f f f g | . . . . . . . . . . . . . . . . . . x | f . f f f f f f f . f f f f f f f f h | f . f f f f f f f . 
f f f f f f f f i | f . f f f f f f f . f f f f f f f f j | f . f f f f f f f . f f f f f f f f e | f . f f f f f f f . f f f f f f f f f | f . f f f f f f f . f f f f f f f f c | f . f f f f f f f . f f f f f f f f s | . . . . . . . . . . . . . . . . . . p | f . f f f f f f f . f f f f f f f f m | f . f f f f f f f . f f f f f f f f d | f . f f f f f f f . f f f f f f f f z | f . f f f f f f f . f f f f f f f f n | f . f f f f f f f . f f f f f f f f u | f . f f f f f f f . f f f f f f f f v | f . f f f f f f f . f f f f f f f f t | f . f f f f f f f . f f f f f f f f Range: f wavg is a multithreaded primitive. Implicit iteration¶ avg , avgs , and mavg apply to dictionaries and tables. wavg applies to dictionaries. q)k:`k xkey update k:`abc`def`ghi from t:flip d:`a`b!(10 21 3;4 5 6) q)avg d 7 13 4.5 q)avg t a| 11.33333 b| 5 q)avg k a| 11.33333 b| 5 q)avgs t a b ------------ 10 4 15.5 4.5 11.33333 5 q)2 mavg k k | a b ---| -------- abc| 10 4 def| 15.5 4.5 ghi| 12 5.5 q)1 2 wavg d 6 10.33333 5 Mathematics Weighted average mean Volume-weighted average price (VWAP) bin , binr ¶ Binary search x bin y bin[x;y] x binr y binr[x;y] Where x is a sorted listy is a list or atom of exactly the same type (no type promotion) returns the index of the last item in x which is ≤y . The result is -1 for y less than the first item of x . binr binary search right, introduced in V3.0 2012.07.26, gives the index of the first item in x which is ≥y . They use a binary-search algorithm, which is generally more efficient on large data than the linear-search algorithm used by ? (Find). The items of x should be sorted ascending although bin does not verify that; if the items are not sorted ascending, the result is undefined. y can be either an atom or a simple list of the same type as the left argument. The result r can be interpreted as follows: for an atom y , r is an integer atom whose value is either a valid index of x or -1 . In general: r[i]=-1 iff y[i]<x[0] r[i]=j iff last j such that x[j]<=y[i]<=x[j+1] r[i]=n-1 iff x[n-1]<=y[i] and r[j]=x bin y[j] for all j in index of y Essentially bin gives a half-open interval on the left. bin and binr are right-atomic: their results have the same count as y . bin also operates on tuples and table columns and is the function used in aj and lj . bin and binr are multithreaded primitives. If x is not sorted the result is undefined. Three-column argument¶ bin and ? on three columns find all equijoins on the first two cols and then do bin or ? respectively on the third column. bin assumes the third column is sorted within the equivalence classes of the first two column pairs (but need not be sorted overall). q)0 2 4 6 8 10 bin 5 2 q)0 2 4 6 8 10 bin -10 0 4 5 6 20 -1 0 2 2 3 5 If the left argument items are not distinct the result is not the same as would be obtained with ? : q)1 2 3 3 4 bin 2 3 1 3 q)1 2 3 3 4 ? 2 3 1 2 Sorted third column¶ bin detects the special case of three columns with the third column having a sorted attribute. The search is initially constrained by the first column, then by the sorted third column, and then by a linear search through the remaining second column. 
The performance difference is visible in this example: q)n:1000000;t:([]a:`p#asc n?`2;b:`#asc n?1000;c:asc n?100000) q)\t t bin t 194 q)update`#c from`t; / remove the sort attr from column c q)\t t bin t 3699 $ Cast¶ Convert to another datatype x$y $[x;y] Where x is: - a positive short, lower-case letter, or symbol from the following table, returns y cast according tox 1h "b" `boolean 2h "g" `guid 4h "x" `byte 5h "h" `short 6h "i" `int 7h "j" `long 8h "e" `real 9h "f" `float 10h "c" `char 12h "p" `timestamp 13h "m" `month 14h "d" `date 15h "z" `datetime 16h "n" `timespan 17h "u" `minute 18h "v" `second 19h "t" `time - a symbol from the list `year`dd`mm`hh`uu`ss andy is a temporal type, returns the year, day, month, hour, minute, or seconds value fromy as tabulated below - 0h or"*" , andy is not a string, returnsy (Identity) - an upper-case letter or a negative short int interprets the value from a string, see Tok Casting does not change the underlying bit pattern of the data, only how it is represented. $ (cast) is a multithreaded primitive. Iteration¶ Cast is an atomic function. q)12 13 14 15 16 17 18 19h$42 2000.01.01D00:00:00.000000042 2003.07m 2000.02.12 2000.02.12T00:00:00.000 0D00:00:00.000000042 00:42 00:00:42 00:00:00.042 q)(12h;"m";`date)$42 2000.01.01D00:00:00.000000042 2003.07m 2000.02.12 q)(12h;"m";`date)$42 43 44 2000.01.01D00:00:00.000000042 2003.08m 2000.02.14 q)(12h;13 14h)$(42;42 42) 2000.01.01D00:00:00.000000042 (2003.07m;2000.02.12) Integer¶ Cast to integer: q)"i"$10 10i q)(`int;"i";6h)$10 10 10 10i q)`int$(neg\)6.1 6.6 6 7 -6 -7 Boolean¶ Cast to boolean: q)1h$(neg\)1 0 2 101b 101b Characters are cast to True. q)" ",.Q.an " abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ_0123456789" q)"b"$" ",.Q.an 1111111111111111111111111111111111111111111111111111111111111111b Byte¶ q)"x"$3 4 5 0x030405 q)"x"$"abc" 0x616263 Casting longs above int infinity Longs greater than 0wi cast to 0xff q)"x"$-2 -1 0 1 2+0Wi 0xfdfeffffff This is considered an error and is planned to change to 0x00 . Temporal¶ Find parts of time: q)`hh`uu`ss$03:55:58.11 3 55 58i q)`year`dd`mm`hh`uu`ss$2015.10.28D03:55:58 2015 28 10 3 55 58i | year | month | mm | week | dd | hh | uu | ss -------------------------------------------------------- timestamp | x | x | x | x | x | x | x | x month | x | x | x | | | | | date | x | x | x | x | x | | | datetime | x | x | x | x | x | x | x | x timespan | | | | | | x | x | x minute | | | | | | x | x | x second | | | | | | x | x | x time | | | | | | x | x | x milliseconds: "i"$time mod 1000 milliseconds: "i"$mod[;1000]"t"$datetime nanoseconds: "i"$timestamp mod 1000000000 Casting to narrower temporal type truncates rather than rounds Such conversions use floor, because the day, hour, minute, second… are all [) notions. (What hour are we in; what millisecond are we in…) For example, "d"$2017.08.23T23:50:12 is 2017.08.23 even though the datetime is closer to 2017.08.24 . As a consequence .z.t-.z.n is typically negative. Identity¶ q)("*";0h)$1 1 1 For string values of y , see Tok. Infinities and beyond¶ Casting an infinity from a narrower to a wider datatype returns a finite value. When an integral infinity is cast to an integer of wider type, it is the same underlying bit pattern, reinterpreted. Since this bit pattern is a legitimate value for the wider type, the cast returns a finite value. q)`float$0Wh 32767f The infinity corresponding to numeric x is min 0#x . Tok Overloads of $ Q for Mortals §7.2 Cast
Temporal data: A kdb+ framework for corporate actions¶ kdb+ is leveraged in many financial institutions across the globe and has built a well-earned reputation as a high-performance database, appropriate for capturing, storing and analyzing enormous amounts of data. It is essential that any large-scale kdb+ system has an efficient design so that time to value is kept to a minimum and the end users are provided with useful functionality. This white paper examines a framework which can be used to apply corporate-action adjustments on the fly to equity tick data. Corporate actions are common occurrences that bring about material changes to the underlying securities. We will look into the reasons why a company may choose to apply these actions and what consequences they have on tick data, with a goal to understanding what adjustments are needed and how best to apply them. It is critical that a kdb+ system can handle these actions in a timely manner and return correct data to the user. Examples of a symbol-name change, stock split and cash dividend will be outlined and for the purposes of this paper we will use Reuters cash-equities market data. All tests were run using kdb+ version 3.1 (2014.02.08) Corporate actions¶ When the board of a company agrees to use a corporate action, there is a resulting effect on the underlying securities of that company and its shareholders. Name changes, stock splits, dividends, rights issues and spin-offs are all examples of corporate actions. However, the purpose of each varies and results in a different effect to the nature and quantity of the securities issued by that company. Name change¶ A company may decide to change its name to reflect a shift in company focus that targets a different core business. Alternatively, it could be to accompany expansion plans in which they require a name that translates across multiple languages. For whatever reason, only in name is the underlying security changed, yet, within a kdb+ system there must be a mapping in place to resolve this action. Stock split¶ If a stock is trading at a very high price it will deter many potential investors. A stock split will increase the number of outstanding shares whilst decreasing the share price accordingly, attracting investors that previously were priced out of the market. In this case size and price adjustments need to be applied to the data. Cash dividend¶ Profits made by companies can be distributed in part to their shareholders in the form of a cash dividend. Some companies, for example start-ups, may not do this, to retain any profits as inward investment for growth. Any investor who purchases a stock before the ex-dividend date (ex-date) is entitled to the dividend. However, beyond this date the dividend belongs to the seller. Therefore, dividends affect the pricing of a stock effective from this date with the number of outstanding shares remaining the same. Spin-off¶ As part of a business restructuring, spin-offs can be used to break a company up in order to concentrate on separate core competencies. No creation of shares takes place, only the filtering of existing shares into the separate new companies, each having an adjusted price based on the original stock. 
| action | price adjustment | size adjustment | |---|---|---| | Name change | no change | no change | | Stock split | price%adj | size*adj | | Cash dividend | price*adj | no change | | Spin-off | price*adj | no change | Table 1: Corporate-actions formulas for price and/or size adjustments The question for a kdb+ developer is how best to apply the adjustments in a consistent and generic manner. Temporal data¶ One option for dealing with corporate actions would be to capture the daily state of each record. However, this would create an unnecessarily large table over time. We are only interested in when a change occurs, marking them ‘asof’ in a temporal reference table. In the following sections we look at the behavior of applying the sorted attribute to dictionaries and tables. Its characteristics are important in achieving temporal data to obtain meaningful results when passing any argument within the key range. Reference: Set Attribute Adding the sorted attribute s to a dictionary indicates that the data structure is sorted in ascending order. When kdb+ encounters this, a faster binary search can be used instead of the usual linear search. When applied to a dictionary, the attribute creates a step function. Non-sorted dictionary¶ When querying a non-sorted dictionary, nulls are returned as values for keys that are not present in the dictionary. q)d:(100*til 5)!`a`b`c`d`e q)d 0 |a 100| b 200| c 300| d 400| e q)d 0 50 150 200 500 `a```c` Sorted dictionary¶ Taking the same dictionary and applying the sorted attribute, instead of nulls the last known value will be returned. q)d:`s#d q)d 0 | a 100| b 200| c 300| d 400| e q)d 0 50 150 200 75 500 `a`a`b`c`a`e As a keyed table is a particular case of a dictionary, applying the sorted attribute has similar effect. Non-sorted keyed table¶ When querying a non-sorted keyed table, nulls will be returned for values that are not present in the table key. q)tab:([date:.Q.addmonths[2013.01.01;]3* til 5]; quarter_name:`Q1_2013`Q2_2013`Q3_2013`Q4_2013`Q1_2014 ) q)tab date | quarter_name ----------| ---------- 2013.01.01| Q1_2013 2013.04.01| Q2_2013 2013.07.01| Q3_2013 2013.10.01| Q4_2013 2014.01.01| Q1_2014 q)tab([] date:2013.01.01 2013.05.05 2013.06.19 2013.08.25 2013.10.01) quarter_name ------------ Q1_2013 Q4_2013 Sorted keyed table¶ Running the same query on the sorted version of the table will return more meaningful results. q)tab:`s#tab q)tab([] date:2013.01.01 2013.05.05 2013.06.19 2013.08.25 2013.10.01) quarter_name ---------- Q1_2013 Q2_2013 Q2_2013 Q3_2013 Q4_2013 Setting the sorted attribute on a vector has no memory cost and kdb+ will verify the data is in ascending order before applying the attribute. Corporate action name change¶ Requirements¶ When kdb+ is the foundation of a tick trade and quote database, its objective is to obtain a complete picture of a security’s real-time and historical trading activity. Securities from time to time can go through a name change; this is when a company announces that it will be changing its ticker. The following section will present an approach to accessing data for securities that experience this type of corporate action. Reference data¶ Adequate reference data is paramount to the ability of obtaining consolidated stats. It will play a critical part in forming the correct query to the historical data. Firstly, give each sym a unique identifier (uid ) that will be constant for the life of a security. 
The assumption is made that there is a one-to-one correspondence between sym and a security at any given time. One can obtain this uid per security from an external reference data provider or it can be maintained internally. Introduced in kdb+ V3.0, GUID is now an option for uid . Basics: Datatypes Corporate-action table¶ For ease of understanding, we will be using the sym index to build up this uid and the corresponding corporate-action temporal data reference table cact . In this example, trade and quote data is loaded from a hdb_path directory. q)\l hdb_path/taq q)cact:update uid:i, date:first date from ([]sym:sym) sym uid date --------------------- AAB.TO 0 2010.10.04 AAV.TO 1 2010.10.04 ABX.TO 2 2010.10.04 ABT.TO 3 2010.10.04 ACC.TO 4 2010.10.04 ABC.TO 5 2010.10.04 .. q)//first date is used as it is the earliest point in the hdb q)//and therefore any Corporate Actions before this date are not applicable. q)save `:/ref_path/cact.csv `:/ref_path/cact.csv We now have a table in which every distinct sym in the HDB has a uid assigned to it. When a security undergoes a name change, this file must reflect it. A daily correction file should be sourced with matching uid mapping. If a security goes through one or more name changes we need only map to its uid once and use it as a basis to efficiently query a sorted cact table obtaining all previous syms for the interested date range. An example is outlined below. Research In Motion¶ One high profile name change of recent times was that of Research In Motion (RIM) (NASDAQ: RIMM; TSX: RIM). This change was made in order to have a clear global brand, BlackBerry. This decision was purely a marketing one and did not affect the underlying stock in any way other than to change its name. The change to the company’s ticker was effective from the start of trading on Monday 4 Feb 2013 trading as BB on the Toronto Stock Exchange and BBRY on the NASDAQ. In terms of a Reuters Instrument Code (ric) listed on the Toronto Stock Exchange RIM.TO became BB.TO. | effective date | type | event | |---|---|---| | 04-Oct-2010 | RIM.TO | first date in HDB | | 04-Feb-2013 | BB.TO | name change | Table 2: Blackberry name change Daily correction table¶ A daily correction file will be used to update the cact table as per the example below. Daily_Cor:([] eff_date:(),2013.02.04; new_ric:`BB.TO; old_ric:`RIM.TO; ui d:510 ) q)Daily_Cor eff_date new_ric old_ric uid -------------------------------- 2013.02.04 BB.TO RIM.TO 510 q)cact:`uid`date xkey ("IDS";enlist csv) 0:`:/ref_path/cact.csv q)`cact upsert `uid`date xkey select uid:uid, date:eff_date, sym:new_ric from Daily_Cor `cact q)cact:`uid`date xasc cact q)select from cact where uid=510 uid date | sym --------------| ------ 510 2010.10.04| RIM.TO 510 2013.02.04| BB.TO q)save `:/ref_path/cact.csv `:/ref_path/cact.csv Once this is completed all gateways should be notified to pick up the updated cact file and apply the sorted attribute. q)cact:`uid xasc `uid`date xkey ("IDS";enlist csv) 0:`:/ref_path/cact.csv q)cact:`s#cact; The data¶ Within the majority of kdb+ systems, data is obtained through the use of a gateway process. Common design principles for kdb+ gateways The gateway acts as an interface between the end user and the underlying databases. We would like to pass many different parameters into the function getRes that executes the query on the database, and perhaps more than the maximum number allowed in q, which is eight. For this reason we will use a dictionary as the single parameter. 
A typical parameter dictionary looks like the following: params:(!) . flip ( (`symList ; `BB.TO`RY.TO); /Requested instruments (`startDate; 2013.01.31); /Only take data from startDate (`endDate ; 2013.02.04); /Only take data to endDate (`startTime; 14:30:00.000); /Only take data from startTime (`endTime ; 22:00:00.000); /Only take data to endTime (`columns ; `volume`vwap); /Requested analytics (`applyCact; `NC) ) /To apply name change adjustments Before this dictionary gets sent to the underlying resources the gateway can enrich the symList with very little expense over the startDate to endDate range to apply any name change corporate actions. This is described in the next section. Corporate-action adjustment¶ The following cact_adj uses the sorted cact table and reverse lookup to first identify the uid for each sym and determine across all dates between the startDate to endDate range all associated syms. cact_adj:{[symList;sD;eD] days:1+eD-sD; symCount:count symList; t where differ t:([]OrigSymList:raze days#/:symList) + cact ([] uid:raze days#/:((reverse cact)?/:symList)[`uid]; date:raze symCount#enlist sD+ til days) } q)cact_adj . (`BB.TO`RY.TO; 2013.01.31; 2013.02.04) OrigSymList sym ------------------ BB.TO RIM.TO BB.TO BB.TO RY.TO RY.TO We can now use this to update the parameters at the gateway level, only executing if the user indicated to apply corporate-action adjustment with applyCact flag set to `NC . if[params[`applyCact]~`NC; params:@[params; `symList`origSymList; :; (cact _adj . params`symList`startDate`endDate)`sym`OrigSymList ] ] q)params symList | `RIM.TO`BB.TO`RY.TO startDate | 2013.01.31 endDate | 2013.02.04 startTime | 14:30:00.000 endTime | 22:00:00.000 columns | `volume`vwap applyCact | `NC origSymList| `BB.TO`BB.TO`RY.TO As you can see the symList has been updated with the pre- and post-corporate action syms. This would not have happened if the sorted attribute had not been applied. Get results¶ The new enriched params will then be sent to the HDB to obtain the result set by calling the getRes function. getRes:{[params]:0!select vwap:wavg[size;price], volume:sum[size] by sym from trade where date within params`startDate`endDate, sym in params[`symList] } q)res:getRes[params] q)res sym vwap volume ------------------------ BB.TO 14.31078 10890299 RIM.TO 12.91377 19889196 RY.TO 62.23244 6057164 Once the query is finished the result set is sent back to the gateway for post processing. First, add the original symList origSymList passed by the user with a left join. q)res:(flip select sym:symList,origSymList from params) lj `sym xkey res q)res sym origSymList vwap volume ------------------------------------ RIM.TO BB.TO 12.91377 19889196 BB.TO BB.TO 14.31078 10890299 RY.TO RY.TO 62.23244 6057164 All that is left to do is to aggregate the data by the origSymList . Use of a functional select here has the power to do this. Basics: Functional qSQL Q for Mortals §9.12.1 Functional select q)/aggregate by q)b:(enlist `sym)!enlist `origSymList q)/aggregate clauses q)a:`volume`vwap!((sum;`volume);(wavg;`volume;`vwap)) q)res: 0!?[res;();b;a] q)res sym volume vwap ----------------------- BB.TO 30779495 13.40805 RY.TO 6057164 62.23244 The final step is to update the consolidated analytics with parameters that the user will find useful. 
q)res:![res;();0b;`startDate`endDate`startTime`endTime#params] The final result that is returned to the user is: q)res sym volume vwap startDate endDate startTime endTime ----------------------------------------------------------------------- BB.TO 30779495 13.40805 2013.01.31 2013.02.04 14:30:00.000 22:00:00.000 RY.TO 6057164 62.23244 2013.01.31 2013.02.04 14:30:00.000 22:00:00.000 Stock split¶ When a company decides to divide their common shares into a larger number of shares this is known as a stock split. If a company proceeds with a five-for-one split, all number of units held by shareholders would increase by 5 times, however, their equity will remain constant as share price changes accordingly. For example, if a shareholder held 1000 shares before the split, each priced at £10, they would own 5,000 shares after the split at a new price of £2. This leads to a challenge for a kdb+ developer to return historical stats in terms of today’s stock structure. | action | priced adjustment | size adjustment | |---|---|---| | Stock split | price%adj | size*adj | Table 3: Stock-split formula for price and size adjustments Imagine a stock XYZ.L that has gone through two stock splits in its lifetime. First a ten-for-one split effective from 1 October 2010. Then again on the 16 February 2012 a further two-for-one split took effect. | effective date | type | event | |---|---|---| | 01-Oct-2010 | Stock split | 10 for 1 (XYZ.L) | | 16-Feb-2012 | Stock split | 2 for 1 (XYZ.L) | Table 4: XYZ.L stock-split history Typical source data: q)scrTbl:([]sym:`XYZ.L;date:2010.10.01 2012.02.16;action:`SS;adj:`float$10 2) q)scrTbl sym date action adj --------------------------- XYZ.L 2010.10.01 SS 10 XYZ.L 2012.02.16 SS 2 Table 1 showed that there are inconsistencies in how typical source data are applied for different types of actions. For example, price is divided by the adjustment for stock split while for cash dividend it is multiplied. In the following section an adjust-source function adjscr is defined that addresses this and produces a consistent scrTbl table for any corporate action. It provides adjustments for both size (sadj ) and price (padj ) and also ensures these adjustments always need to be multiplied. This becomes important when adjusting for more than one type of corporate action at a time. //Adjust scrTbl function, to be consistent for any action adjscr:{[scrTbl] scrTbl:`sym`date`action`padj xcol update sadj:1%adj from scrTbl where action in `SS; scrTbl:update padj:1%padj from scrTbl where not action in `SS; update sadj:1^sadj from scrTbl } q)scrTbl:adjscr[scrTbl] q)scrTbl sym date action padj sadj --------------------------------- XYZ.L 2010.10.01 SS 10 0.1 XYZ.L 2012.02.16 SS 2 0.5 Again, we are only interested in storing data points of when changes took place. Therefore in a temporal table we need: | effective date | type | price adjustment | size adjustment | |---|---|---|---| | 16-Feb-2012 | asof | 1 | 1 | | 01-Oct-2010 | asof | 0.5 | 2 | | 01-Oct-2010 | before | 0.05 | 20 | Table 5: XYZ.L temporal table for size adjustments Transforming the source-data table can be done in the following way. 
//calculating adjustment factors afact:{reverse reciprocal prds 1,reverse x} ca:{[cact] `s#2!ungroup update date:(0Nd,'date), padj:afact each padj, sadj:afact each sadj from `sym xgroup `sym`date xasc ``action _ select from scrTbl where date<=.z.d, action in cact } q)adjTbl:ca[`SS] sym date | padj sadj ----------------| --------- XYZ.L | 0.05 20 XYZ.L 2010.10.01| 0.5 2 XYZ.L 2012.02.16| 1 1 Raw without stock-split adjustment: q).Q.view 2010.06.24 2011.07.12 2014.01.10 q)select sum size,avg price by sym,date from trade where sym=`XYZ.L sym date | size price ----------------| -------------- XYZ.L 2010.06.24| 1838 293.3333 XYZ.L 2011.07.12| 2911 553.8033 XYZ.L 2014.01.10| 27159 1478.329 Enriched data with stock-split adjustments: //adjscr allows us to have adjAgg constant for all actions q)adjAgg:`size`price!((*;`size;`sadj);(*;`price;`padj)) q)adjAgg size | * `size `sadj price| * `price `padj adj:{[cact;res] res:update padj:1^padj,sadj:1^sadj from aj[`sym`date;res;$[not count adjTbl:ca[cact];:res;adjTbl]]; :`padj`sadj _ 0!![res;();0b;] (c where (c:cols res) in key adjAgg)#adjAgg } q)select sum size,avg price by sym,date from adj[`SS;] select from trade where sym in `XYZ.L sym date | size price ----------------| -------------- XYZ.L 2010.06.24| 36760 14.66667 XYZ.L 2011.07.12| 5822 276.9017 XYZ.L 2014.01.10| 27159 1478.329 One can see that, for trades occurring after the latest stock split, size remains the same. Trades on 12 July 2011 were before the last stock split but after the first, therefore, trade sizes have increased by a factor of 2, as one share then represents two shares today. Likewise 24 June 2010 was before any splits in the stock and size adjustment is by a factor of 20, as one share then represents twenty shares at present. Price adjustments also appear to ensure trade value remains constant. Cash dividend¶ Say a stock that has decided to pay a £0.05 dividend per share is trading at £7.00 prior to its ex-dividend date (ex-date). A shareholder with 10,000 shares has a total value prior to the ex-date of 10,000×£7.00=£70,000. After the ex-date, the price should theoretically drop to £6.95. Yet, the investor's total value is maintained as 10,000×£6.95=£69,500 + £500 cash. The adjustment factor is determined by: q)cd_padj:{[P;X] (P-X)%P} q)cd_padj[7.00;0.05] 0.9928571 q)7.00*0.9928571 // cross check of calculation 6.95 Users may request that historical price values be adjusted. However, size remains the same. | action | price adjustment | size adjustment | |---|---|---| | Cash dividend | price*adj | no change | Table 6: Cash-dividend formula for price and size adjustments Let’s take a look at a real-world example for BP.L. | date | type | event | |---|---|---| | 04-Feb-2014 | Results: Q4 2013 results and dividend announcement | Dividend of 5.7065 per share | | 11-Feb-2014 | Close price | 491.75 | | 12-Feb-2014 | Ex-date | Fourth quarter dividend | | 12-Feb-2014 | Close price | 487.05 | | 28-Mar-2014 | Dividend | Fourth quarter payment date | Table 7: BP.L cash-dividend history Therefore the corresponding price adjustment is as follows: q)cd_padj[491.75;5.7065] 0.9883955 Similar to the above stock split, typical source data is provided and can be transformed to be a temporal adjTbl with the adjscr , ca and afact functions. 
q)scrTbl:([] sym:(),`BP.L;date:2014.02.12;action:`CD;adj:0.9883955) q)scrTbl sym date action adj -------------------------------- BP.L 2014.02.12 CD 0.9883955 q)scrTbl: adjscr[scrTbl] q)scrTbl sym date action padj sadj ------------------------------------ BP.L 2014.02.12 CD 1.011741 1 q)adjTbl:ca[`CD] sym date | padj sadj ---------------| -------------- BP.L | 0.9883955 1 BP.L 2014.02.12| 1 1 A typical query for last price without adjustment applied q).Q.view 2014.02.11 2014.02.12 q)0!select last price,last size by sym,date from trade where sym=`BP_.L sym date price size ------------------------------- BP.L 2014.02.11 491.75 6432023 BP.L 2014.02.12 487.05 6852708 Same query but now with cash-dividend adjustments: q).Q.view 2014.02.11 2014.02.12 q)adj[`CD;] select last price,last size by sym,date from trade where sym=`BP.L sym date price size --------------------------------- BP.L 2014.02.11 486.0435 6432023 BP.L 2014.02.12 487.05 6852708 // cross check of calculation q)491.75*0.9883955 486.0435 The correct price adjustment has been applied for a date prior to the ex-dividend date. Combining adjustments¶ The framework outlined in this paper gives the users an option of which corporate-action adjustments, if any, to apply. In the following example a test trade table is created to aid the example. trade:([] date:2013.01.01 2013.04.01 2013.07.01 2014.01.01; sym:4#`VOD.L; price:4#10; size:4#1000 ) q)trade date sym price size --------------------------- 2013.01.01 VOD.L 10 1000 2013.04.01 VOD.L 10 1000 2013.07.01 VOD.L 10 1000 2014.01.01 VOD.L 10 1000 Example source data: scrTbl:([] sym:`VOD.L; date:2012.05.01 2013.02.01 2013.07.01 2013.11.01 2014.06.01; action:`SS`CD`CD`SS`CD; adj:`float$2 0.95 0.97 10 0.96 ) q)scrTbl sym date action adj ---------------------------- VOD.L 2012.05.01 SS 2 VOD.L 2013.02.01 CD 0.95 VOD.L 2013.07.01 CD 0.97 VOD.L 2013.11.01 SS 10 VOD.L 2014.06.01 CD 0.96 q)scrTbl:adjscr[scrTbl] sym date action padj sadj ------------------------------------- VOD.L 2012.05.01 SS 2 0.5 VOD.L 2013.02.01 CD 1.052632 1 VOD.L 2013.07.01 CD 1.030928 1 VOD.L 2013.11.01 SS 10 0.1 VOD.L 2014.06.01 CD 1.041667 1 No adjustments applied: q)adj[`;]select from trade date sym price size --------------------------- 2013.01.01 VOD.L 10 1000 2013.04.01 VOD.L 10 1000 2013.07.01 VOD.L 10 1000 2014.01.01 VOD.L 10 1000 Stock splits only: q)adj[`SS;]select from trade date sym price size ---------------------------- 2013.01.01 VOD.L 1 10000 2013.04.01 VOD.L 1 10000 2013.07.01 VOD.L 1 10000 2014.01.01 VOD.L 10 1000 Cash dividend only: q)adj[`CD;]select from trade date sym price size --------------------------- 2013.01.01 VOD.L 9.215 1000 2013.04.01 VOD.L 9.7 1000 2013.07.01 VOD.L 10 1000 2014.01.01 VOD.L 10 1000 Stock split and cash dividend combined: q)adj[`SS`CD;]select from trade date sym price size ----------------------------- 2013.01.01 VOD.L 0.9215 10000 2013.04.01 VOD.L 0.97 10000 2013.07.01 VOD.L 1 10000 2014.01.01 VOD.L 10 1000 From adjusting the standard source data in adjscr function we can see that adjustment factors for any action are simply multiplied together to give the combined adjustment factor. Conclusion¶ This white paper introduced a method for applying corporate-action adjustments to equity tick data on the fly. The basic use of temporal data was outlined, highlighting the power of the sorted attribute. After this, we explained the role of reference data and its importance in a kdb+ system. 
With this knowledge we laid out an example of a simple gateway request to show how we could aggregate tick data across a date range in which a name change had taken place. Later in the paper, stock splits and cash dividends were also covered. Overall, this paper provides an insight into the capabilities of kdb+ regarding various types of corporate actions. It may be used as a framework, firstly for dealing with name changes at a gateway level and secondly for handling stock splits and cash dividends at a database level. However, it is not limited to these examples, and can also be used for other actions such as stock dividends, rights issues and spin-offs. All tests were run using kdb+ version 3.1 (2014.02.08). Author¶ Sean Rodgers is a kdb+ consultant based in London. He works for a top-tier investment bank on a global tick-capture and analytic application for a range of different asset classes.
Real-time Database (RDB) using r.q¶ r.q is available from KxSystems/kdb-tick Overview¶ A kdb+ process acting as an RDB stores the current day’s data in memory for client queries. It can write its contents to disk at end-of-day, clearing out its in-memory data to prepare for the next day. After writing data to disk, it communicates with an HDB to load the written data. Customization¶ r.q provides a starting point for most environments. The source code is freely available and can be tailored to individual needs. For example: Memory use¶ The default RDB stores all of a day’s data in memory before writing it to disk at end-of-day. The host machines should be configured so that all required resources can handle the demands that may be made of them (both now and in the future). Depending on when periods of low or no activity occur, garbage collection could be run after clearing tables at end-of-day, or a system of intra-day writedowns could be introduced. User queries¶ A gateway process should control user queries and authorization/authentication, using RDBs/RTEs/HDBs to retrieve the required information. If known/common queries can be designed, the RDB can load additional scripts to pre-define functions a gateway can call. End-of-day¶ The end-of-day event is governed by the tickerplant process. The tickerplant calls the RDB .u.end function when this event occurs. The main end-of-day action for an RDB is to save today’s data from memory to disk, clear its tables, and use IPC to make the HDB aware of the new day’s dataset it can now access. .u.rep sets the HDB directory to be the same as the tickerplant log file directory. This can be edited to use a different directory if required. Recovery¶ Using IPC (a sync request), the RDB process can retrieve the current tickerplant log count and location via the variables the tickerplant maintains. The function .u.rep is then used to replay the log, repopulating the RDB. The RDB should be able to access the tickerplant log from a directory on the same machine. The RDB/tickerplant can be changed to reside on different hosts, but this increases the resources needed to transmit the log file contents over the network. The original documentation includes a diagram of the steps taken by an RDB to recover from a TP log. Usage¶ q tick/r.q [host1]:port1[:usr:pwd] [host2]:port2[:usr:pwd] [-p 5020] | Parameter Name | Description | Default | |---|---|---| | host1 | host running the kdb+ instance that the RDB will subscribe to e.g. tickerplant host | localhost | | port1 | port of the kdb+ instance that the RDB will subscribe to e.g. tickerplant port | 5010 | | host2 | host of the kdb+ instance to inform at end-of-day, after data is saved to disk e.g. HDB host | localhost | | port2 | port of the kdb+ instance to inform at end-of-day, after data is saved to disk e.g. HDB port | 5012 | | usr | username | <none> | | pwd | password | <none> | | -p | listening port for client communications | <none> | Standard kdb+ command line options may also be passed. Variables¶ | Name | Description | |---|---| | .u.x | Connection list. First element populated by host1 (tickerplant), and second element populated by host2 (HDB) | Functions¶ Functions are open source and open to customisation. upd¶ Called by an external process to update table data. Defaults to insert , which inserts/appends the data to the table. upd[x;y] Where x is a symbol atom naming a table and y is table data to add to table x , which can contain one or more rows. .u.end¶ Perform end-of-day actions of saving tables to disk, clearing tables and running a reload on the HDB instance to make it aware of the new day of data.
.u.end[x] Where x is the date that has ended, as a date atom. Actions performed: - finds all tables with the grouped attribute on the sym column - calls .Q.dpft with the required parameters to save the data to disk - re-applies the grouped attribute to the sym column of the tables found in the first step (as clearing a table removes the attribute) .u.rep¶ Initialise the RDB by creating tables, which are then populated by replaying any existing tickerplant log. Sets the HDB directory to be used at end-of-day. .u.rep[x;y] Where x is a list of table details, each element a two-item list: - a symbol for the table name - a schema table and y is the tickerplant log details, comprising a two-item list: - a long for the log count (null represents no log) - a file symbol for the location of the current tickerplant log (null represents no log) Actions performed: - creates the tables from the provided schemas - replays the tickerplant log, if one exists, to repopulate those tables - sets the HDB directory used at end-of-day to the tickerplant log directory Tickerplant (TP) using tick.q¶ tick.q is available from KxSystems/kdb-tick Overview¶ All incoming streaming data is processed by a kdb+ process acting as a tickerplant. A tickerplant writes all data to a tickerplant log (to permit data recovery) and publishes data to subscribed clients, for example an RDB. Customization¶ tick.q provides a starting point for most environments. The source code is freely available and can be tailored to individual needs. Schema file¶ A tickerplant requires a schema file. A schema file describes the data you plan to capture, by specifying the tables to be populated by the tickerplant environment. The datatypes and attributes are denoted within the file as shown in this example: quote:([]time:`timespan$(); sym:`g#`symbol$(); bid:`float$(); ask:`float$(); bsize:`long$(); asize:`long$(); mode:`char$(); ex:`char$()) trade:([]time:`timespan$(); sym:`g#`symbol$(); price:`float$(); size:`int$(); side:`char$()) The default setup requires the first two columns to be time and sym . Real-time vs Batch Mode¶ The mode is controlled via the -t command line parameter. Batch mode can alleviate CPU use on both the tickerplant and its subscribers by grouping together multiple ticks received within the timer interval prior to sending/writing. This comes at the expense of tickerplant memory (the memory required to hold several ticks) and the increased latency that may occur between adding to the batch and sending. There is no ideal setting for all deployments, as it depends on the frequency of the ticks received. Real-time mode processes every tick as soon as it occurs. A feedhandler can be written to send messages comprising multiple ticks to a tickerplant; in this situation real-time mode will already be processing batches of messages. End-of-day¶ The tickerplant watches for a change in the current day. As the day ends, a new tickerplant log is created and the tickerplant informs all subscribed clients via their .u.end function. For example, an RDB may implement .u.end to write down all in-memory tables to disk, which can then be consumed by an HDB. Tickerplant Logs¶ Log files are created using the format <tickerplant log dir>/<schema filename><date> e.g. tplog/sym2022.02.02 . These record all published messages and permit recovery by downstream clients, by allowing them to replay messages they have missed. The directory used should have enough space to record all published data. As end-of-day causes a file roll, a process should be put in place to remove old log files that are no longer required.
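As an illustration only, a minimal housekeeping sketch along those lines, assuming the logs live in a `:tplog directory and follow the sym<date> naming above (the directory name and the five-day retention are arbitrary choices):
q)logs:key `:tplog                               / e.g. `sym2022.02.01`sym2022.02.02
q)old:logs where ("D"$-10#'string logs)<.z.D-5   / log files more than 5 days old
q)hdel each ` sv/:`:tplog,/:old                  / delete them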
The tickerplant does not replay log files for clients, but exposes log file details to clients so they can access the current log file. Publishing to a tickerplant¶ Feed handlers publish ticks to the tickerplant using IPC. These can be kdb+ processes or clients written in any number of different languages that use one of the available client APIs. Each feed sends data to the tickerplant by calling the .u.upd function. The call can include one or many ticks. For example, publishing from kdb+: q)h:hopen 5010 / connect to TP on port 5010 of same host q)neg[h](".u.upd";`trade;(.z.n;`APPL;35.65;100;`B)) / async publish single tick to a table called trade q)neg[h](".u.upd";`trade;(10#.z.n;10?`MSFT`AMZN;10?10000f;10?100i;10?`B`S)) / async publish 10 ticks of some random data to a table called trade ... Subscribing to a tickerplant¶ Clients, such as an RDB or RTE, can subscribe by calling .u.sub over IPC. q)h:hopen 5010 / connect to TP on port 5010 of same host q)h".u.sub[`;`]" / subscribe to all updates q)h:hopen 5010 / connect to TP on port 5010 of same host q)h".u.sub[`trade;`MSFT.O`IBM.N]" / subscribe to updates to trade table that contain sym value of MSFT.O or IBM.N only Clients should implement the function upd to receive updates, and .u.end to perform any end-of-day actions (a minimal subscriber sketch is shown after the function reference below). Usage¶ q tick.q SRC DST [-p 5010] [-t 1000] [-o hours] | Parameter Name | Description | Default | |---|---|---| | SRC | schema filename, loaded using the format tick/<SRC>.q | sym | | DST | directory to be used by tickerplant logs. No tickerplant log is created if no directory is specified | <none> | | -p | listening port for client communications | 5010 | | -t | timer period in milliseconds. Use a zero value to enable real-time mode, otherwise it will operate in batch mode | real-time mode (with timer of 1000ms) | | -o | UTC offset | localtime | Standard kdb+ command line options may also be passed. Variables¶ | Name | Description | |---|---| | .u.w | Dictionary of registered clients’ interest in the data being processed, i.e. tables->(handle;syms) | | .u.i | Msg count in log file | | .u.j | Total msg count (log file plus those held in buffer) - used when in batch mode | | .u.t | Table names | | .u.L | TP log filename | | .u.l | Handle to TP log file | | .u.d | Current date | Functions¶ Functions are open source and open to customisation. .u.endofday¶ Performs end-of-day actions. .u.endofday[] Actions performed: - inform all subscribed clients (for example, RDB/RTE/etc) that the day is ending by calling .u.end - increment the current date ( .u.d ) to the next day - roll the log if using a tickerplant log (see .u.ld ) .u.tick¶ Performs initialisation actions for the tickerplant. .u.tick[x;y] Where x is the name of the schema file without the .q file extension, i.e. the SRC command line parameter, and y is the directory used to store tickerplant logs, i.e. the DST command line parameter. Actions performed: - call .u.init[] to initialise table info, .u.t and .u.w - check the first two columns in all tables of the provided schema are called time and sym (throw a timesym error if not) - apply the grouped attribute to the sym column of all tables in the provided schema - set .u.d to the current local date, using .z.D - if a tickerplant log filename was provided, initialise the current log file via .u.ld .u.ld¶ Initialise or reopen an existing log file. .u.ld[x] Where x is the current date. Returns the handle of the log file for that date.
Actions performed: - using .u.L , change the last 10 chars to the provided date and create the log file if it doesn’t yet exist - set .u.i and .u.j to the count of valid messages currently in the log file - if the log file is found to be corrupt (its size is bigger than that implied by the number of valid messages) an error is returned - open the new/existing log file .u.ts¶ Given a date, runs the end-of-day procedure if a new day has started. .u.ts[x] Where x is a date. Compares the date provided with .u.d . If no change, no action is taken. If there is a one-day difference (i.e. a new day), .u.endofday is called. More than one day results in an error and the kdb+ timer is cancelled. .u.upd¶ Update the tickerplant with data to process/analyse. External processes call this to input data into the tickerplant. .u.upd[x;y] Where x is the table name (symbol) and y is data for table x (a list of column data, each element of which can be an atom or a list). Batch Mode¶ Add each received message to the batch and record the message to the tickerplant log. The batch is published on the running timer. Actions performed: - If the first element of y is not a timespan (or list of timespan), add a new timespan column populated with the current local time (.z.P ), as in real-time mode below - Add the data to the current batch (i.e. new data y inserted into table x ), which will be published on the batch timer .z.ts - If a tickerplant log file was created, write the upd function call & params to the log and increment .u.j so that an RDB can execute what was originally called during recovery Realtime Mode¶ Publish each received message to all interested clients & record the message to the tickerplant log. Actions performed: - Checks if the end-of-day procedure should be run by calling .u.ts with the current date - If the first element of y is not a timespan (or list of timespan), add a new timespan column populated with the current local time (.z.P ). If there are multiple rows of data, all rows receive the same time. - Retrieves the column names of table x - Publish the data to all interested clients, by calling .u.pub with table name x and the table generated from y and the column names - If a tickerplant log file was created, write the upd function call & params to the log and increment .u.i so that an RDB can execute what was originally called during recovery .z.ts¶ Defines the action for the kdb+ timer callback function .z.ts . The frequency of the timer was set on the command line (-t command-line option or \t system command). Batch Mode¶ Runs on the system timer at the specified interval. Actions performed: - For every table in .u.t : - publish the batched data to all interested clients, by calling .u.pub with the table name and the table’s contents - reapply the grouped attribute to the sym column - Update the count of processed messages by setting .u.i to .u.j (the number of batched messages) - Checks if the end-of-day procedure should be run by calling .u.ts with the current date Realtime Mode¶ If the batch timer is not specified, the system timer is set to run every 1000 milliseconds to check if end-of-day has occurred. End-of-day is checked by calling .u.ts , passing the current local date (.z.D ). Pub/Sub functions¶ tick.q also loads u.q , which enables all of its pub/sub features within the tickerplant.
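To tie together the subscriber contract described above (implement upd and .u.end , then call .u.sub ), here is a minimal, hypothetical subscriber for the trade schema shown earlier. The port, the empty sym filter and the .u.end body are illustrative only; a production process would use r.q instead:
q)trade:([]time:`timespan$(); sym:`g#`symbol$(); price:`float$(); size:`int$(); side:`char$()) / local copy of the schema
q)upd:{[t;x] t insert x}                   / called by the tickerplant for each update
q).u.end:{[d] -1"day ended: ",string d;}   / called by the tickerplant at end-of-day
q)h:hopen 5010                             / connect to the tickerplant
q)h".u.sub[`trade;`]"                      / subscribe to all syms of the trade table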
.log.if.info "Event management now enabled [ Event: ",string[event]," ] [ Bound To: ",string[bindFunction]," ]"; }; .event.i.defaultExitHandler:{[ec] $[0=ec; .log.if.info "Process is exiting at ",string[.time.now[]]," [ Exit Code: ",string[ec]," ]"; .log.if.fatal "Process is exiting at ",string[.time.now[]]," with non-zero exit code [ Exit Code: ",string[ec]," ]" ]; }; ================================================================================ FILE: kdb-common_src_file.hdb.q SIZE: 703 characters ================================================================================ // File Manipulation for HDBs // Copyright (c) 2021 Jaskirat Rajasansir / Wrapper for '.Q.par' to deal with relative paths for a segmented DB configuration in the 'par.txt' file / @param hdbRoot (FolderPath) The HDB root to run against / @param partVal (Date|Month|Year|Long) The specific partition to lookup with 'par.txt' if present / @returns (FolderPath) The expected location of the partition within the HDB .file.hdb.qPar:{[hdbRoot; partVal] if[not .type.isFolder hdbRoot; '"IllegalArgumentException"; ]; par:.Q.par[hdbRoot; partVal; `]; strPar:1_ string par; if[not "/" = first strPar; strPar:string[hdbRoot],"/",strPar; ]; :hsym `$strPar; }; ================================================================================ FILE: kdb-common_src_file.kdb.q SIZE: 4,465 characters ================================================================================ // Manipulation for On-Disk Files // Copyright (c) 2021 - 2022 Jaskirat Rajasansir // All kdb+ files have a 'magic number' as the first 2 bytes of the file: // - 0xff01 - atom or non-enumerated symbol list // - 0xfe20 - single-typed list // - 0xfd20 - enumerated symbol lists, complex lists or "new-format" lists // // The 'magic' number determines where the type and element length information resides. "New-format" list headers start at byte 4080 (4096 - 16) // // It's assumed that 4096 bytes (4 KB) is the smallest block size on most storage devices, therefore we read the full 4096 bytes // instead of reading 2 smaller chunks / Assume any type greater than 200h is a atom .file.kdb.cfg.atomOrList:0xf0; .file.kdb.cfg.headerLength:4096; / Attributes based on the type .file.kdb.cfg.attributes:``s`u`p`g; / Bytes within the header that contain useful information to extrace, based on the 'magic number' of the file .file.kdb.cfg.bytes:`magic xkey flip `magic`type`attr`length!"*II*"$\:(); .file.kdb.cfg.bytes,:`magic`type`attr`length!(0xff01; 2; 3; 4 + til 4); .file.kdb.cfg.bytes,:`magic`type`attr`length!(0xfe20; 2; 3; 8 + til 8); .file.kdb.cfg.bytes,:`magic`type`attr`length!enlist[0xfd20],(4096 - 16) + (2; 3; 8 + til 8); / The start byte for the sym enumeration target (e.g. 'sym') .file.kdb.cfg.symEnumByteStart:()!`int$(); .file.kdb.cfg.symEnumByteStart[enlist 0xfd20]:16; / The maximum number of bytes to read for the sym enumeration target .file.kdb.cfg.symEnumReadBytes:64; / @returns (Dict) kdb file information summary based on the other functions in this namespace / TODO: Optimise further to only do a single read of the header bytes .file.kdb.getSummary:{[file] :`type`attribute`length!.file.kdb[`getType`getAttribute`getLength]@\:file; }; / Optimised kdb+ file type function. 
Only requires reading the first 4096 bytes of the specified file to return the type / (instead of "type get") / @param file (FilePath) The file to return the type for / @returns (Short) The file type / @see .file.kdb.cfg.atomOrList .file.kdb.getType:{[file] header:read1 (file; 0; .file.kdb.cfg.headerLength); fileType:header .file.kdb.cfg.bytes[header 0 1]`type; :`short$((::; -256h +) .file.kdb.cfg.atomOrList < fileType) fileType; }; / Optimised kdb+ 'get attribute' function. Only requires reading the first 4096 of the specified file to return the attribute / (instead of "attr get") / @param file (FilePath) The file to return the type for / @returns (Symbol) The attribute applied on the current file, or null symbol if no attribute .file.kdb.getAttribute:{[file] header:read1 (file; 0; .file.kdb.cfg.headerLength); :.file.kdb.cfg.attributes header .file.kdb.cfg.bytes[header 0 1]`attr; }; / Optimised element length function. Only requires reading the first 4096 bytes of the specified file to return the length / (instead of "count get"). / NOTE: Optimised code path works for all atom and list types / @param file (FilePath) The file to return the element size / @returns (Long) The list length .file.kdb.getLength:{[file] header:read1 (file; 0; .file.kdb.cfg.headerLength); magic:header 0 1; lengthBytes:0x0 sv reverse header .file.kdb.cfg.bytes[magic]`length; if[not 0xff01 ~ magic; :lengthBytes; ]; fileType:header .file.kdb.cfg.bytes[magic]`type; $[.file.kdb.cfg.atomOrList < fileType; :1; (11h = fileType) & 0 < lengthBytes; :`long$lengthBytes; / else :count get file ]; }; / Optimised retrieval of the enumeration target for symbols / NOTE: Currently causes 2 4KB reads of the file header to get the file type (via .file.kdb.getType) and then to extract the symbol enumeration target / @param file (FilePath) The file to return the enumeration target / @returns (Symbol) The enumeration target file or null symbol if the file is a symbol file with no enumeration / @throws FileIsNotAnEnumerationException If the supplied file path target is not a symbol list or enumerated symbol list .file.kdb.getSymEnumerationTarget:{[file] fileType:.file.kdb.getType file; $[11h = fileType; :`; not fileType within 20 76h; '"FileIsNotAnEnumerationException" ]; header:read1 (file; 0; .file.kdb.cfg.headerLength); enumTgtBytes:header .file.kdb.cfg.symEnumByteStart[header 0 1] + til .file.kdb.cfg.symEnumReadBytes; enumTgtBytes:enumTgtBytes except 0x00; :`$`char$enumTgtBytes; }; ================================================================================ FILE: kdb-common_src_file.q SIZE: 4,411 characters ================================================================================ // File Manipulation Functions // Copyright (c) 2015 - 2017 Sport Trades Ltd, (c) 2021 Jaskirat Rajasansir // Documentation: https://github.com/BuaBook/kdb-common/wiki/file.q .require.lib each `util`type`convert`os; / Lists the contents of the specified folder / @param folder (FolderPath) The folder to list the contents for / @returns (FilePathList) The files and folders within the folder / @throws IllegalArgumentException If the parameter is not a path type .file.ls:.file.listFolder:{[folder] if[not .type.isFilePath folder; '"IllegalArgumentException"; ]; :key folder; }; / Lists the contents of the specified folder, returning fully qualified paths for each / @param folder (FolderPath) The folder to list the contents for / @returns (FilePathList) The fully qualified files and folders within the folder / @throws IllegalArgumentException If the 
parameter is not a path type .file.listFolderPaths:{[folder] :` sv/:folder,/:.file.listFolder folder; }; / Finds the files and folders within the specified folder that match the supplied file regex / @param fileRegex (Symbol|String) The part to find. If a symbol, will be surrounded by *. If a string, used as is / @param folder (FolderPath) The folder to find within / @returns (FilePathList) / @throws IllegalArgumentException If the parameter is not a path type .file.find:{[fileRegex;folder] if[not .type.isFilePath folder; '"IllegalArgumentException"; ]; if[not .type.isString fileRegex; fileRegex:"*",.type.ensureString[fileRegex],"*"; ]; files:.file.listFolder folder; :files where files like fileRegex; }; / Finds the files and folders within the specified folder that match the supplied file regex, / returning fully qualified paths for each / @param fileRegex (Symbol|String) The part to find. If a symbol, will be surrounded by *. If a string, used as is / @param folder (FolderPath) The folder to find within / @returns (FilePathList) / @throws IllegalArgumentException If the parameter is not a path type / @see .file.find .file.findFilePaths:{[fileRegex;folder] :` sv/:folder,/:.file.find[fileRegex;folder]; }; / Checks the existance of the specified folder and creates an empty folder if it does not exist / @param dir (FolderPath) / @returns (FolderPath) The supplied folder to check .file.ensureDir:{[dir] if[not .type.isFolder dir; .log.if.info "Directory does not exist, creating [ Directory: ",string[dir]," ]"; .os.run[`mkdir;.convert.hsymToString dir]; ]; :dir; }; / Loads the specified directory / @param dir (FolderPath) .file.loadDir:{[dir] .util.system "l ",.convert.hsymToString dir; }; / Recurseively desecends from the specified root folder down and lists all / files within each folder until no more folders are found. NOTE: Symbolic / links will be treated as a folder, so ensure there are no circular references. 
/ @param root (FolderPath) The root directory to start the tree from / @returns (FilePathList) All files, fully qualified, discovered from root down .file.tree:{[root] rootContents:.file.listFolderPaths root; folders:.type.isFolder each rootContents; :raze (rootContents where not folders),.z.s each rootContents where folders; }; / @returns (FolderPath) The current working directory / @see .os.run .file.getCwd:{ :hsym `$first .os.run[`pwd;::]; }; / @returns (Boolean) True is the specified file is compressed, false otherwise / @throws IllegalArgumentException If the specified parameter is not a file path .file.isCompressed:{[filePath] if[not .type.isFilePath filePath; '"IllegalArgumentException"; ]; compStatus:-21!filePath; $[not `algorithm in key compStatus; :0b; / else :0 < compStatus`algorithm ]; }; / Replaces the specified target file or folder with the specified source file or folder / NOTE: If the target exists, it will be deleted prior to the move / @param source (FilePath|FolderPath) The source file or folder / @param target (FilePath|FolderPath) The target file or folder to replace with the target .file.replace:{[source; target] if[not all .type.isFilePath each (source; target); '"IllegalArgumentException"; ]; source:1_ string source; target:1_ string target; .os.run[`rmFolder; target]; .os.run[`mv; source,"|",target]; }; ================================================================================ FILE: kdb-common_src_http.q SIZE: 17,465 characters ================================================================================ // HTTP Query Library // Copyright (c) 2020 - 2021 Jaskirat Rajasansir // Documentation: https://github.com/BuaBook/kdb-common/wiki/http.q // NOTE: For TLS-encrypted HTTP requests, ensure that OpenSSL 1.0 is available as 'libssl.so' on the library path // 'export KX_SSL_VERIFY_SERVER=NO' can also be useful if the certificate path cannot be validated .require.lib each `type`util`ns; / If true, current proxy settings will be loaded on library initialisation only. If false, the proxy settings will / be queried on every HTTP invocation. .http.cfg.cacheProxy:1b; / If true, the user agent header will be sent with each request. A default is built on library initialisation, but / can be manually specified by setting '.http.userAgent' .http.cfg.sendUserAgent:1b; / If true, if a HTTP response contains a content encoding that is not supported, throw an exception. If false, an / error will be logged but the body will be returned as received .http.cfg.errorOnInvaildContentEncoding:1b; / If true, if the HTTP response is a redirect type, another request to the specified target location will be made. If / false, the redirect response will be returned .http.cfg.followRedirects:1b; / The 'Content-Type' header values in the response that will be automatically converted by the defined function .http.cfg.autoContentTypes:()!(`symbol$()); .http.cfg.autoContentTypes[enlist ""]: `; .http.cfg.autoContentTypes["application/json"]: `.j.k; .http.cfg.autoContentTypes["application/kdb-ipc"]: `.http.i.ipcStringParse; / The 'Content-Encoding' headers values in the response that indicate GZIP encoding and should be uncompressed before returning .http.cfg.gzipContentEncodings:("gzip"; "x-gzip");
-1"view the confusion matrix"; show .ut.totals[`TOTAL] .ml.cm[y;"j"$p] ================================================================================ FILE: funq_housing.q SIZE: 459 characters ================================================================================ housing.f:"housing.data" housing.b:"http://archive.ics.uci.edu/ml/machine-learning-databases/" housing.b,:"housing/" -1"[down]loading housing data set"; .ut.download[housing.b;;"";""] housing.f; housing.c:`crim`zn`indus`chas`nox`rm`age`dis`rad`tax`ptratio`b`lstat`medv housing.tw:("FFFBFFFFHFFFFF";8 7 8 3 8 8 7 8 4 7 7 7 7 7) housing.t:`medv xcols flip housing.c!housing.tw 0: `$housing.f housing[`Y`X]:0 1 cut value flip housing.t housing.y:first housing.Y ================================================================================ FILE: funq_ionosphere.q SIZE: 347 characters ================================================================================ ionosphere.f:"ionosphere.data" ionosphere.b:"http://archive.ics.uci.edu/ml/machine-learning-databases/" ionosphere.b,:"ionosphere/" -1"[down]loading ionosphere data set"; .ut.download[ionosphere.b;;"";""] ionosphere.f; ionosphere.XY:((34#"E"),"C";",")0:`$ionosphere.f ionosphere.X:-1_ionosphere.XY ionosphere.y:first ionosphere.Y:-1#ionosphere.XY ================================================================================ FILE: funq_iris.q SIZE: 371 characters ================================================================================ iris.f:("iris.data";"bezdekIris.data") 1 iris.b:"http://archive.ics.uci.edu/ml/machine-learning-databases/" iris.b,:"iris/" -1"[down]loading iris data set"; .ut.download[iris.b;;"";""] iris.f; iris.XY:150#/:("FFFFS";",") 0: `$iris.f iris.X:-1_iris.XY iris.y:first iris.Y:-1#iris.XY iris.c:`slength`swidth`plength`pwidth`species iris.t:`species xcols flip iris.c!iris.XY ================================================================================ FILE: funq_kmeans.q SIZE: 4,547 characters ================================================================================ \c 40 100 \l funq.q \l iris.q \l uef.q / redefine plot (to drop space) -1"to demonstrate kmeans, we first generate clusters of data"; -1"we will arbitrarily choose 3 clusters and define k=3"; k:3 -1"we then generate k centroids,"; show C:"f"$k?/:2#20 -1"and scatter points around the centroids with normally distributed errors"; X:raze each C+.ml.bm(2;k)#100?/:(2*k)#1f show .ut.plot[19;10;.ut.c10;sum] X -1 .ut.box["**"] ( "kmeans is an implementation of lloyds algorithm,"; "which alternates between assigning points to a cluster"; "and updating the cluster's center."); -1"kmeans uses the *Euclidean distance* to assign points to clusters"; -1"and generates clusters using the *average* of the data points."; -1"each call to kmeans performs a single iteration."; -1"to find the centroids, we call kmeans iteratively until convergence."; -1"there are two ways to initialze the algorithm:"; -1" 1. randomly pick k centroids (k-means++ and Forgy method)"; -1" 2. 
assign points randomly to k centroids - random partition method"; -1"the Forgy method is the simplest to implement"; .ml.kmeans[X] over .ml.forgy[k] X -1"the k-means++ method is supplied as an alternate initialization method"; .ml.kmeans[X] over last k .ml.kmeanspp[X]// 2#() -1"the random partition method can also be done by hand"; .ml.kmeans[X] over .ml.rpart[avg;k] X -1"we can plot the data and overlay the centroids found using kmeans++"; show .ut.plt .ml.append[0N;X],' .ml.append[1] .ml.kmeans[X] over .ml.forgy[k] X -1"kmedians uses the lloyd algorithm, but uses the *Manhattan distance*"; -1"also known as the taxicab metric to assign points to clusters"; -1"in addition, it uses the median instead of mean to compute the centroid"; -1"this forces the resulting centroid to have values picked from the data"; -1"it does not, however, force the centroid to be an actual point in the data"; -1"the centroid can be (x1;y2;z3), and not necessarily (x3;y3;z3)"; -1"(to use actual points from the data see k-medoids below)"; -1"we can see the progress by using scan instead of over"; show .ml.kmedians[X] scan .ml.forgy[k] X -1"we can apply kmeans to the classic machine learning iris data"; `X`y`t set' iris`X`y`t; -1"we can see how the data set clusters the petal width"; show .ut.plt (t.pwidth;t.plength;{distinct[x]?x} t.species) -1"we iteratively call kmeans until convergence"; C:.ml.kmeans[X] over last 3 .ml.kmeanspp[X]// 2#() -1"and can show which group each data point was assigned to"; show m:.ml.mode each y I:.ml.cgroup[.ml.edist2;X;C] / classify -1"what percentage of the data did we classify correctly?"; avg y=p:.ut.ugrp m!I / accuracy -1"what does the confusion matrix look like?"; show .ut.totals[`TOTAL] .ml.cm[y;p] / plot errors with increasing number of clusters -1"we can also plot the total ssw from using different values for k"; C:{[X;k].ml.kmeans[X] over last k .ml.kmeanspp[X]// 2#()}[X] each 1+til 10 show .ut.plt .ml.distortion[X] peach C -1"an alternative to k-means is the k-medoids algorithm"; -1"that finds actual data points at the center of each cluster"; -1"the algorithm is slower than k-means because it must computer"; -1"the full dissimilarity matrix for each cluster"; -1"the implementation is know as *partitioning around medoids*"; -1"and is implemented in .ml.pam"; -1"we can use any distance metric, but Manhattan and Euclidean"; -1"(not Euclidean squared) are the most popular"; C:.ml.pam[.ml.edist][X] over X@\:3?count X show .ut.plt .ml.append[0N;X 1 2],'.ml.append[1] C 1 2 -1"let's apply the analyis to one of the uef reference cluster datasets"; X:uef.a1 show .ut.plot[39;20;.ut.c10;sum] X -1"first we generate the centroids for a few values for k"; C:{[X;k].ml.kmeans[X] over last k .ml.kmeanspp[X]// 2#()}[X] peach ks:10+til 20 -1"then we cluster the data"; I:.ml.cgroup[.ml.edist2;X] peach C -1"plot elbow curve (k vs ssw)"; show .ut.plt .ml.ssw[X] peach I -1"plot elbow curve (k vs % of variance explained)"; show .ut.plt (.ml.ssb[X] peach I)%.ml.sse[X] -1"plot silhouette curve (k vs silhouette)"; show .ut.plt s:(avg raze .ml.silhouette[.ml.edist;X]::) peach I ks i:.ml.imax s -1"superimpose the centroids on the data"; show .ut.plot[39;20;.ut.c10;avg] .ml.append[0N;X],'.ml.append[1] C i -1"a soft version of Lloyds algorithm is available with .ml.lloyds"; -1".ml.kmeanss and .ml.kmeanssmax should be identical to ml.kmeans"; X:iris.X C:asc flip .ml.kmeans[X] over C0:last 3 .ml.kmeanspp[X]// 2#() .ut.assert[C] asc flip .ml.kmeanss[X] over C0 .ut.assert[C] asc flip .ml.kmeanssmax[500;X] 
over C0 ================================================================================ FILE: funq_knn.q SIZE: 1,837 characters ================================================================================ \c 20 100 \l funq.q \l pendigits.q \l adult.q -1"referencing pendigits data from global namespace"; `X`Xt`y`yt set' pendigits`X`Xt`y`yt; k:4 df:`.ml.edist2 -1"checking accuracy of using ",string[k], " nearest neighbors and df=", string df; -1"and uniform weight the points"; -1"using .ml.f2nd to peach across the 2nd dimension of Xt to build distance matrix"; avg yt=p:.ml.knn[0n<;k;y] D:.ml.f2nd[df X] Xt -1"alternatively, we can peach the combination of knn+distance calculation"; avg yt=p:.ml.f2nd[.ml.knn[0n<;k;y] df[X]@] Xt -1"we can also change the weighting function to be 1/distance"; avg yt=p:.ml.f2nd[.ml.knn[sqrt 1f%;k;y] df[X]@] Xt -1"using pairwise distance (squared) function uses matrix algebra for performance"; avg yt=p:.ml.knn[sqrt 1f%;k;y] D:.ml.pedist2[X;Xt] -1"computing the accuracy of each digit"; show avg each (p=yt)[i] group yt i:iasc yt -1"viewing the confusion matrix, we can see 7 is often confused with 1"; show .ut.totals[`TOTAL] .ml.cm[yt;p] ks:1+til 10 -1"compare different choices of k: ", -3!ks; t:([]k:ks) t:update mdist:avg yt=.ml.knn[1f%;k;y] .ml.f2nd[.ml.mdist X] Xt from t t:update edist:avg yt=.ml.knn[1f%;k;y] .ml.f2nd[.ml.edist X] Xt from t show t; n:5 -1"cross validate with ", string[n], " buckets"; I:.ut.part[n#1;0N?] til count X 0 ff:.ml.fknn[sqrt 1f%;.ml.pedist2] pf:.ml.pknn e:y[I]=p:.ml.cv[ff ks;pf;(y;X)] .ml.kfold I -1"find k with maximum accuracy"; k:0N!ks .ml.imax avg avg each e -1"confirm accuracy against test dataset"; avg yt=p:pf[;Xt] ff[k;y;X] -1"using the Gower distance allows us to compute distances"; -1"for ordinal asymmetric binary and nominal features as well"; -1"the 'adult' data set allows us to test knn on mixed-type features"; `X`y`Xt`yt set' adult`X`y`Xt`yt df:`.ml.gower k:3 .ut.assert[.82] .ut.rnd[.01] 0N!avg yt=p:.ml.f2nd[.ml.knn[0n<;k;y] df[X]@] Xt ================================================================================ FILE: funq_kraken.q SIZE: 941 characters ================================================================================ kraken.p:string `daily`hourly`minutely!`d`1h`minute kraken.c:string `BTCUSD`ETHUSD`LTCUSD`XRPUSD`LINKUSD`BCHUSD kraken.c,:string `DOTUSD`EOSUSD`ADAUSD`XMRUSD`DASHUSD`ETCUSD kraken.c,:string `ZECUSD`XTZUSD`TRXUSD`PAXGUSD`COMPUSD kraken.c,:string `BTCEUR`ETHEUR`LTCEUR`XRPEUR`LINKEUR`BCHEUR kraken.c,:string `DOTEUR`EOSEUR`ADAEUR`XMREUR`DASHEUR`ETCEUR kraken.c,:string `ZECEUR`XTZEUR`TRXEUR`PAXGEUR`COMPEUR kraken.c,:string `ETHBTC`LTCBTC kraken.f:kraken.p {"_" sv ("Kraken";y;x,".csv")}/:\: asc kraken.c kraken.b:"http://www.cryptodatadownload.com/cdd/" -1"[down]loading kraken data set"; .ut.download[kraken.b;;"";""] each raze kraken.f; .kraken.load:{[f] if[not count t:("P *FFFFFF I";1#",") 0: 1_read0 f;:()]; t:`time`sym`open`high`low`close`vwap`qty`n xcol t; t:update sym:`$sym except\: "/" from t; t:`sym xcols 0!select by time from t; / remove duplicates t} kraken,:({update `p#sym from x} raze .kraken.load peach::)'[`$kraken.f] ================================================================================ FILE: funq_linear.q SIZE: 1,242 characters ================================================================================ .linear.dll:`liblinear^.linear.dll^:`; / optional override .linear,:(.linear.dll 2: (`lib;1))` .linear,:`L2R_LR`L2R_L2LOSS_SVC_DUAL`L2R_L2LOSS_SVC!"i"$til 3 
.linear,:`L2R_L1LOSS_SVC_DUAL`MCSVM_CS`L1R_L2LOSS_SVC!3i+"i"$til 3 .linear,:`L1R_LR`L2R_LR_DUAL!6i+"i"$til 2 .linear,:`L2R_L2LOSS_SVR`L2R_L2LOSS_SVR_DUAL`L2R_L1LOSS_SVR_DUAL!11i+"i"$til 3 \d .linear param:(!) . flip ( (`solver_type;L2R_L2LOSS_SVC_DUAL); (`eps;0f); / uses defaults (`C;1f); (`weight_label;::); (`weight;::); (`p;.1); (`init_sol;::));
Command line options¶ The command line for invoking kdb+ has the form: q [file] [-option [parameters] … ] Options -b blocked -q quiet mode -c console size -r replicate -C HTTP size -s secondary threads -e error traps -S random seed -E TLS Server Mode -t timer ticks -g garbage collection -T timeout -l log updates -u disable syscmds -L log sync -u usr-pwd local -m memory domain -U usr-pwd -o UTC offset -w workspace -p listening port -W start week -P display precision -z date format .z.x (argv), .z.X (raw command line) file¶ This is either the script to load (*.q, *.k, *.s), or a file or a directory. $ q sp.q KDB+ 3.5t 2017.02.28 Copyright (C) 1993-2017 Kx Systems m32/ 4()core 8192MB sjt mint.local 192.168.0.39 NONEXPIRE +`p`city!(`p$`p1`p2`p3`p4`p5`p6`p1`p2;`london`london`london`london`london`lon.. (`s#+(,`color)!,`s#`blue`green`red)!+(,`qty)!,900 1000 1200 +`s`p`qty!(`s$`s1`s1`s1`s2`s3`s4;`p$`p1`p4`p6`p2`p2`p4;300 200 100 400 200 300) q) Operating systems may create hidden files, such as .DS_Store , that block loading of a directory. -b (blocked)¶ -b Block write-access to a kdb+ database, for any handle context (.z.w ) other than 0. Blocks hdel keyword (since V4.1t 2021.10.13, V4.0 2023.08.11). Blocks hopen of a file (since 4.1t 2021.10.13, 4.0 2023.08.11) ~/q$ q -b q)aa:([]bb:til 4) q)\p 5001 q) and in another task q)h:hopen 5001 q)h"count aa" 4 q)h"aa:10#aa" 'noupdate q) Use \_ to check if client write-access is blocked: ~/q$ q -b .. q)\_ 1 -c (console size)¶ -c r c Set console maximum rows and columns, default 25 80. \c system command for detail -C (HTTP size)¶ -C r c Set HTTP display maximum rows and columns. \C system command for detail -e (error traps)¶ -e [0|1|2] Sets error-trapping mode. The default is 0 (off). \e system command for detail -E (TLS Server Mode)¶ -E 0 / plain -E 1 / plain & TLS -E 2 / TLS only Since V3.4. -g (garbage collection)¶ -g 0 / deferred (default) -g 1 / immediate Sets garbage-collection mode. \g system command for detail -l (log updates)¶ -l Log updates to filesystem. -L (log sync)¶ -L As -l , but sync logging. -m (memory-domain)¶ -m path Memory can be backed by a filesystem, allowing use of DAX-enabled filesystems (e.g. AppDirect) as a non-persistent memory extension for kdb+. This command-line option directs kdb+ to use the filesystem path specified as a separate memory domain. This splits every thread’s heap into two: domain description -------------------------------------------------------------------------- 0 regular anonymous memory, active and used for all allocs by default 1 filesystem-backed memory The .m namespace is reserved for objects in memory domain 1, however names from other namespaces can reference them too, e.g. a:.m.a:1 2 3 -o (UTC offset)¶ -o N Sets local time offset as N hours from UTC, or minutes if abs[N]>23 (Affects .z.Z ) \o system command for detail -p (listening port)¶ Set listening port -p [rp,][hostname:](portnumber|servicename) See Listening port for detail. hopen \p system command Multithreaded input mode, Changes in 3.5 Socket sharding with kdb+ and Linux -P (display precision)¶ -P N Display precision for floating-point numbers, i.e. the number of digits shown. \P system command for detail -q (quiet mode)¶ -q Quiet, i.e. no startup banner text or session prompts. Typically used where no console is required. ~/q$ q KDB+ 3.5t 2017.02.28 Copyright (C) 1993-2017 Kx Systems … q)2+2 4 q) and with -q ~/q$ q -q 2+2 4 .z.q (quiet mode) -r (replicate)¶ -r :host:port[:user[:password]] Replicate from :host:port . 
-s (secondary threads)¶ -s N Number of secondary threads or processes available for parallel processing. \s system command for detail -S (random seed)¶ -S N Sets N as value of random seed. \S system command for detail Roll, Deal -t (timer ticks)¶ -t N Period in milliseconds between timer ticks. Default is 0, for no timer. \t system command for detail -T (timeout)¶ -T N Timeout in seconds for client queries, i.e. maximum time a client call will execute. Default is 0, for no timeout. \T system command for detail -u (disable syscmds)¶ -u (usr-pwd local)¶ -U (usr-pwd)¶ -u 1 / blocks system functions and file access -U file / sets password file, blocks \x -u file / both the above -u 1 disables - system commands from a remote (signals 'access ), including exit via"\\" - access to files outside the current directory for any handle context ( .z.w ) other than 0. Segmented database partitions using directories outside the current working directory can be enabled using the method described here. - hopen on a fifo (since 4.1t 2021.10.13, 4.0 2023.08.11) - hopen of a file (since 4.1t 2021.10.13, 4.0 2023.08.11) - the exit keyword (since 4.1t 2021.07.12) - the hdel keyword (since V4.1t 2021.10.13, V4.0 2023.08.11) Only a simple protection against “wrong” queries For example, setting a system command in .z.ts and starting the timer still works. The right system command could for example expose a terminal, so the user running the database could be fully impersonated and compromised from then on. -U file - sets a password file - disables \x (even on the local console) The password file is a text file with one credential on each line. (No trailing blank line/s.) user1:password1 user2:password2 The password can be - plain text - an MD5 hash of the password - an SHA-1 hash of the password (since V4.0 2020.03.17) q)raze string md5 "this is my password" "210d53992dff432ec1b1a9698af9da16" q)raze string -33!"mypassword" / -33! calculates sha1 "91dfd9ddb4198affc5c194cd8ce6d338fde470e2" Internal function -33! -u file combines the above, i.e. -u file is equivalent to -u 1 -U file . -w (workspace)¶ -w N Workspace limit in MB for the heap across threads for memory domain 0. Default is 0: no limit. \w system command for detail .Q.w Before V4.0 2020.03.17 this command set the limit for the heap per thread. Other ways to limit resources On Linux systems, administrators might prefer cgroups as a way of limiting resources. On Unix systems, memory usage can be constrained using ulimit , e.g. ulimit -v 262144 limits virtual address space to 256MB. -W (start week)¶ -W N Set the start-of-week offset, where 0 is Saturday. The default is 2, i.e Monday. \W system command for detail -z (date format)¶ -z [0|1] Set the format for "D"$ date parsing: 0 for mm/dd/yyyy and 1 for dd/mm/yyyy. Comparison¶ < Less Than > Greater Than deltas differences <= Up To >= At Least differ flag changes & Lesser | Greater min least, minimum max greatest, maximum mins running minimums maxs running maximums mmin moving minimums mmax moving maximums Six comparison operators¶ Syntax: (e.g.) x = y , =[x;y] These binary operators work intuitively on numerical values (converting types when necessary), and apply also to lists, dicts, and tables. They are atomic. Returns 1b where x and y are equal, else 0b . q)"hello" = "world" 00010b q)5h>4h 1b q)0x05<4 0b q)0>(1i;-2;0h;1b;0N;-0W) 010011b q)5>=(`a`b!4 6) a| 1 b| 0 Unlike Match, they are not strict about type. q)1~1h 0b q)1=1h 1b Comparison tolerance applies when matching floats. 
q)(1 + 1e-13) = 1 1b < > = >= <= <> are multithreaded primitives. For booleans, <> is the same as exclusive or (XOR). Temporal values¶ Below is a matrix of the type used when the temporal types differ in a comparison (note: you may need to scroll to the right to view the full table): | comparison types | timestamp | month | date | datetime | timespan | minute | second | time | |---|---|---|---|---|---|---|---|---| | timestamp | timestamp | timestamp | timestamp | timestamp | timespan | minute | second | time | | month | timestamp | month | date | not supported | not supported | not supported | not supported | not supported | | date | timestamp | date | date | datetime | not supported | not supported | not supported | not supported | | datetime | timestamp | not supported | datetime | datetime | timespan | minute | second | time | | timespan | timespan | not supported | not supported | timespan | timespan | timespan | timespan | timespan | | minute | minute | not supported | not supported | minute | timespan | minute | second | time | | second | second | not supported | not supported | second | timespan | second | second | time | | time | time | not supported | not supported | time | timespan | time | time | time | For example q)20:00:00.000603286 within 13:30 20:00t / comparison of timespan and time, time converted to timespan values 0D13:30:00.000000000 0D20:00:00.000000000 0b q)2024.10.07D20:00:00.000603286 within 13:30 20:00t / comparison of timestamp and time, timestamp converted to time value 20:00:00.000 1b Particularly notice the comparison of ordinal with cardinal datatypes, such as timestamps with minutes. q)times: 09:15:37 09:29:01 09:29:15 09:29:15 09:30:01 09:35:27 q)tab:([] timeSpan:`timespan$times; timeStamp:.z.D+times) q)meta tab c | t f a ---------| ----- timeSpan | n timeStamp| p When comparing timestamp with minute , the timestamps are converted to minutes such that `minute$2024.11.01D09:29:15.000000000 becomes 09:29 and therefore doesn't appear in the output: q)select from tab where timeStamp>09:29 / comparing timestamp with minute timeSpan timeStamp -------------------------------------------------- 0D09:30:01.000000000 2016.09.06D09:30:01.000000000 0D09:35:27.000000000 2016.09.06D09:35:27.000000000 When comparing timespan with minute , the minute is converted to timespan such that 09:29 becomes 0D09:29:00.000000000 for the following comparison: q)select from tab where timeSpan>09:29 / comparing timespan with minute timeSpan timeStamp -------------------------------------------------- 0D09:29:01.000000000 2016.09.06D09:29:01.000000000 0D09:29:15.000000000 2016.09.06D09:29:15.000000000 0D09:29:15.000000000 2016.09.06D09:29:15.000000000 0D09:30:01.000000000 2016.09.06D09:30:01.000000000 0D09:35:27.000000000 2016.09.06D09:35:27.000000000 Therefore, when comparing ordinals with cardinals (i.e. timestamp with minute), ordinal is converted to the cardinal type first. For example: q)select from tab where timeStamp=09:29 timeSpan timeStamp -------------------------------------------------- 0D09:29:01.000000000 2016.09.06D09:29:01.000000000 0D09:29:15.000000000 2016.09.06D09:29:15.000000000 0D09:29:15.000000000 2016.09.06D09:29:15.000000000 q)tab.timeStamp=09:29 011100b is equivalent to q)(`minute$tab.timeStamp)=09:29 011100b and thus q)tab.timeStamp<09:29 100000b q)tab.timeStamp>09:29 000011b Q for Mortals §4.9.1 Temporal Comparison Floating point¶ The comparison of floating-point types are discussed in comparison tolerance . 
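A brief illustration of tolerant float equality (the values are chosen arbitrarily):
q)0.1+0.2
0.3
q)(0.1+0.2)=0.3    / equal within comparison tolerance
1b
q)(1+1e-10)=1      / difference exceeds the tolerance
0b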
Different types¶ The comparison operators also work on text values (characters, symbols). q)"0" < ("4"; "f"; "F"; 4) / characters are treated as their numeric value 1110b q)"alpha" > "omega" / strings are char lists 00110b q)`alpha > `omega / but symbols compare atomically 0b When comparing two values of different types, the general rule (apart from those for temporal types above) is that the underlying values are compared. Nulls¶ Nulls of any type are equal. q)n:(0Nh;0Ni;0N;0Ne;0n) / nulls q)n =/:\: n 11111b 11111b 11111b 11111b 11111b Any value exceeds a null. q)inf: (0Wh;0Wi;0W;0We;0w) / numeric infinities q)n < neg inf 11111b Infinities¶ Infinities of different type are ordered by their width. In ascending order: negative: -float < -real < -long < -int < -short positive: short < int < long < real < float q)inf: (0Wh;0Wi;0W;0We;0w) / numeric infinities in ascending type width q)(>=) prior inf / from short to float 11111b q)(>=) prior reverse neg inf / from -float to -short 11111b This follows the rule above for comparing values of different types. deltas ¶ Keyword deltas is a uniform unary function that returns the differences between items in its numeric list argument. differ ¶ Keyword differ is a uniform unary function that returns a boolean list indicating where consecutive pairs of items in x differ. Match¶ Match (~ ) compares its arguments and returns a boolean atom to say whether they are the same. Q for Mortals §4.3.3 Order
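Some quick illustrations of these keywords (values arbitrary):
q)deltas 1 4 9 16          / differences between successive items (first item unchanged)
1 3 5 7
q)differ `a`a`b`b`c        / 1b where an item differs from its predecessor
10101b
q)(1 2;`a)~(1 2;`a)        / Match compares entire values
1b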
.finos.authz_ro.addRwUsers:{[userSymOrList] /// Add user(s) to list of "rw" users. // @param u Symbol or list of symbols for users whose "rw" eval // capability is to be granted. .finos.authz_ro.priv.rwUsers::distinct .finos.authz_ro.priv.rwUsers,userSymOrList; } .finos.authz_ro.removeRwUsers:{[userSymOrList] /// Remove user(s) from list of "rw" users. // @param u Symbol or list of symbols for users whose "rw" eval // capability is to be revoked. .finos.authz_ro.priv.rwUsers::.finos.authz_ro.priv.rwUsers except userSymOrList; } .finos.authz_ro.getRwUsers:{[] /// Return current list of users with "rw" eval permission. .finos.authz_ro.priv.rwUsers} .finos.authz_ro.isRwUser:{[userSym] /// Return 1b if userSym represents a user with read-write access. userSym in .finos.authz_ro.priv.rwUsers} /// List of users who will get their parse trees // evaluated with read-only restrictions under "reval". // Takes precedence over functionWhitelist which makes it easier // to grant temporary superuser access. .finos.authz_ro.priv.roUsers:`symbol$() .finos.authz_ro.addRoUsers:{[userSymOrList] /// Add user(s) to list of "ro" users. // @param u Symbol or list of symbols for users whose "ro" eval // capability is to be granted. .finos.authz_ro.priv.roUsers::distinct .finos.authz_ro.priv.roUsers,userSymOrList; } .finos.authz_ro.removeRoUsers:{[userSymOrList] /// Remove user(s) from list of "ro" users. // @param u Symbol or list of symbols for users whose "ro" eval // capability is to be granted. .finos.authz_ro.priv.roUsers::.finos.authz_ro.priv.roUsers except userSymOrList; } .finos.authz_ro.getRoUsers:{[] /// Return current list of users with "ro" eval permission. .finos.authz_ro.priv.roUsers} .finos.authz_ro.isRoUser:{[userSym] /// Return 1b if userSym represents a user with read-only access. userSym in .finos.authz_ro.priv.roUsers} .finos.authz_ro.params.filterVerbsLambdas:{[x] /// Given a parameter list from parse[...], // build an identical tree, but error out // if anything executable is detected. // Special case for general null. if[x~(::); : x]; t:type x; // Recurse on general lists. if[0h=t; : .z.s each x]; // Return anything that's a "pure data" type. if[99h>=abs t; : x]; // Signal an error. '"verbs/lambdas disallowed"; } /// List of functions that are allowed to be run by any user. // Make sure the list doesn't collapse into a symbol list by // putting in a non-sym placeholder such as (::) if necessary. // Whitelist functions should check against an appropriate // entitlements model. .finos.authz_ro.priv.funcs:([func:enlist(::)];paramFilter:enlist(::)) .finos.authz_ro.addFuncs:{[lambdaOrSymbolList] /// Add function(s) to whitelist. `.finos.authz_ro.priv.funcs insert (lambdaOrSymbolList;count[lambdaOrSymbolList]#.finos.authz_ro.params.filterVerbsLambdas) } .finos.authz_ro.addFuncs[(`.q.tables;`.Q.w;.q.tables)] .finos.authz_ro.removeFuncs:{[lambdaOrSymbolList] /// Remove function(s) from whitelist. delete from `.finos.authz_ro.priv.funcs where func~/:lambdaOrSymbolList; } .finos.authz_ro.getFuncs:{[] /// Return current whitelist. .finos.authz_ro.priv.funcs} .finos.authz_ro.getParamFilter:{[funcOrName] /// Get function for filtering parameters of passed function. // An empty general list () or general null (::) will be returned // if funcOrName was not found. exec first paramFilter from .finos.authz_ro.priv.funcs where func~\:funcOrName} .finos.authz_ro.valueFunc:{[x] /// Replacement for "value" with restrictions based on the user's authorization status. // Get the parse tree form. 
// p:parse x; p:$[10h=type x;parse x;x]; // For empty expression, just return null. if[(0=count p)|p~(::) ; :(::)]; // ReadWrite users get expressions processed using "eval". if[.finos.authz_ro.isRwUser .z.u; :eval p]; // ReadOnly users get expressions processed using "reval". if[.z.K >= 3.3;if[.finos.authz_ro.isRoUser .z.u; :reval p]]; // Count not zero. Take the first item as the function. f:first p; // Get paramFilter for the desired function. paramFilter:.finos.authz_ro.getParamFilter f; // Bail out if function isn't in the whitelist. if[any paramFilter~/:( ();(::) ) ; '"Not a whitelisted function: ",-3!f]; // Filter the parameters and build a new parse tree. p2:enlist[f], paramFilter 1_ p; // Go ahead and eval. eval p2} .finos.authz_ro.priv.orig_zph:.z.ph .finos.authz_ro.restrictZpg:{[] /// Make it easy to activate more restrictive .z.pg / .z.ps . // Use names instead of values to allow overwriting // of .ms.dotz.valueFunc with even more restrictive // implementation (using E3, for example). .z.ps:.z.pg:.z.pq:{`.finos.authz_ro.valueFunc x}; system"x .z.ph"; } .finos.authz_ro.restrictZpg[] ================================================================================ FILE: kdb_q_authz_ro_test_authz_ro.q SIZE: 359 characters ================================================================================ \l authz_ro.q // Test authz by calling .z.pg rather than going // through the trouble to create a separate process // and calling .z.pg on that... .finos.authz_ro.removeRwUsers .z.u t:([]c1:`a`b`c;c2:1 2 3) // These should run. .z.pg"tables[]" .z.pg"tables`." .z.pg".Q.w[]" // This should fail since it has a lambda in it. .z.pg"tables{0N!`hello;`}[]" ================================================================================ FILE: kdb_q_conn_conn.q SIZE: 20,461 characters ================================================================================ .finos.conn.priv.connections:([name:`$()] lazy:`boolean$(); //lazy connection not established immediately, only when attempting to send on it lazyRetryTime:`time$(); //time until the connection is tried again after a failure on lazy connections fd:`int$(); //file descriptor addresses:(); //list of destination addresses timeout:`long$(); //timeout when opening the connection ccb:(); //connect callback dcb:(); //disconnect callback rcb:(); //registration callback ecb:(); //error callback timerId:`int$()); //reconnection timer .finos.conn.priv.defaultConnRow:`fd`lazy`ccb`dcb`rcb`ecb`timerId!(0N;0b;(::);(::);(::);(::);0N); /// // The default timeout for opening connections, if the `timeout option is not provided. .finos.conn.defaultOpenConnTimeout:300000; //5 minutes .finos.conn.priv.initialBackoff:500; .finos.conn.priv.maxBackoff:30000; .finos.conn.defaultLazyRetryTime:00:10:00t; /// // Logging function. // To replace with finos logging utils? .finos.conn.log:{-1 string[.z.P]," .finos.conn ",x}; /// // Error trapping function for opening connections and invoking callbacks. // Can be overwritten by user. .finos.conn.errorTrapAt:@[;;]; /// // Open a new connection to a KDB+ server. 
// @param name Name (symbol) for this connection, must be unique // @param addresses A list of strings or symbols containing the connection strings, each is tried in sequence until one succeeds // @param options a dictionary of connection info (`lazy`timeout`ccb`dcb`rcb`ecb) // lazy: connection not opened immediately but when an attempt is made to send data // timeout: the connection timeout in milliseconds // ccb: connect callback // dcb: disconnect callback // rcb: registration callback. Set to 0b to disable registration when connecting to a server not using .finos.conn. // ecb: error callback // @return none .finos.conn.open:{[name;addresses;options] if[type[addresses] in -11 10h; addresses:enlist addresses]; if[11h=type addresses; addresses:string addresses]; //set defaults connection:.finos.conn.priv.defaultConnRow,options,`name`addresses!(name;addresses); if[not `timeout in key connection; connection[`timeout]:.finos.conn.defaultOpenConnTimeout]; if[not `lazyRetryTime in key connection; connection[`lazyRetryTime]:.finos.conn.defaultLazyRetryTime]; //Argument validation if[-11h<>type connection`name; '"Invalid name type"]; //Check to see if this name is already in use if[connection[`name] in exec name from .finos.conn.priv.connections; '"Name already exists"]; extraCols:(key[connection] except cols[.finos.conn.priv.connections]) except`fd`timerId; if[0<count extraCols; '"unknown options: ",","sv string extraCols; ]; if[not -7h=type connection`timeout; connection[`timeout]:`int$`time$connection`timeout]; if[not -19h=type connection`lazyRetryTime; connection[`lazyRetryTime]:`time$connection`lazyRetryTime]; `.finos.conn.priv.connections upsert connection; if[not connection`lazy; .finos.conn.priv.retryConnection[connection`name;.finos.conn.priv.initialBackoff]; ]; }; /// // Removes the lazy attribute from a connection. Immediately schedules the connection if not already open. // @param connName Connection name // @return none .finos.conn.lazyToNormal:{[connName] if[not connName in exec name from .finos.conn.priv.connections; '"Connection not valid: ",string connName]; .finos.conn.priv.connections[connName;`lazy]:0b; //if not already connected and not already trying to connect, try to connect now if[null .finos.conn.priv.connections[connName;`fd]; if[null .finos.conn.priv.connections[connName;`timerId]; .finos.conn.priv.retryConnection[connection`name;.finos.conn.priv.initialBackoff]; ]; ]; }; /// // Adds the lazy attribute from a connection. However the connection is not closed. 
// @param connName Connection name // @return none .finos.conn.normalToLazy:{[connName] if[not connName in exec name from .finos.conn.priv.connections; '"Connection not valid: ",string connName]; .finos.conn.priv.connections[connName;`lazy]:1b; //if a connection is set to lazy while retrying, stop the retry if[not null tid:.finos.conn.priv.connections[connName;`timerId]; .finos.timer.removeTimer tid; .finos.conn.priv.connections[connName;`timerId]:0Ni; ]; }; .finos.conn.priv.retryConnection:{[connName;timeout] .finos.conn.priv.connections[connName;`timerId]:0Ni; if[not connName in exec name from .finos.conn.priv.connections; '"Connection not valid: ",string connName]; if[null .finos.conn.priv.attemptConnection connName; .finos.conn.log"Retrying connection ",string connName; .finos.conn.priv.scheduleRetry[connName;timeout]]; }; .finos.conn.priv.defaultErrorCallback:{[connName;hostport;error] .finos.conn.log"failed to connect ",string[connName]," to ",hostport,": ",error; }; .finos.conn.priv.resolverErrorCallback:{[connName;hostport;error] .finos.conn.log"failed to resolve ",string[connName]," hostport ",hostport,": ",error; ()}; //must return a list of hostports to try /// // Called when a connection callback throws an error. // Can be overwritten by user. // @param connName Connection name // @param err Error message // @return none .finos.conn.ccbErrorHandler:{[connName;err] .finos.conn.log"Connect callback threw signal: \"",err,"\" for conn: ",string connName; }; /// // Called when a disconnection callback throws an error. // Can be overwritten by user. // @param connName Connection name // @param err Error message // @return none .finos.conn.dcbErrorHandler:{[connName;err] .finos.conn.log"Disconnect callback threw signal: \"", err, "\" for conn: ", string connName; };
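
///
// Illustrative usage sketch - not part of the original conn.q. The connection
// name, addresses and option values below are examples only; the options
// dictionary follows the `lazy`timeout`ccb`dcb`rcb`ecb keys documented on
// .finos.conn.open above.
// .finos.conn.open[`hdb; (":localhost:5010";":localhost:5011"); `lazy`timeout!(1b;5000)];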
Comma-separated value files¶ CSVs are a common format for capturing and transferring data. Fields are usually separated by commas; sometimes by other characters, such as Tab. Load¶ Download example.csv from the link above to (e.g.) path/to/example.csv on your local filesystem. Confirm with read0 , which returns the contents of a text file as a list of strings. q)read0 `:path/to/example.csv "id,price,qty" "kikb,36.05,90" "hlfe,96.57,84" "mcej,91.34,63" "iemn,57.12,93" "femn,63.64,54" "engn,94.56,38" "edhp,63.31,97" "ggna,72.39,88" "mjlg,12.04,58" "fpjb,34.3,68" "gfpl,25.34,45" "jogj,78.67,2" "gpna,23.08,39" "njoh,91.46,64" "aoap,48.38,49" "bhan,63.2,82" "enmc,70,40" "niom,58.92,88" "nblh,42.9,77" "jdok,9.42,30" "plbp,42.38,17" .. The Load CSV operator requires only a list of column types and a delimiter to return the CSV as a table. q)show t:("SFI";enlist",") 0: `:path/to/example.csv id price qty -------------- kikb 36.05 90 hlfe 96.57 84 mcej 91.34 63 iemn 57.12 93 femn 63.64 54 engn 94.56 38 edhp 63.31 97 ggna 72.39 88 mjlg 12.04 58 fpjb 34.3 68 gfpl 25.34 45 jogj 78.67 2 gpna 23.08 39 njoh 91.46 64 aoap 48.38 49 bhan 63.2 82 enmc 70 40 niom 58.92 88 nblh 42.9 77 jdok 9.42 30 .. The id column has been rendered as symbols, price as floats, and qty as ints. Enlisting the delimiter (enlist"," ) had Load CSV interpret the first file line as column names. Save¶ The simplest way to save table t as a CSV: q)save `:path/to/t.csv `:path/to/t.csv Load CSV is one form of the File Text operator. Two other forms allow us finer control than save . q)`:path/to/t.tsv 0: "\t" 0: t `:path/to/t.tsv Above, "\t" 0: t uses the Prepare Text form of the operator to return the table as a list of delimited strings. Here the delimiter is the Tab character "\t" . The list of strings becomes the right argument to the Save Text form, which writes the strings as the lines of path/to/t.tsv . Load Fixed for importing tables from fixed-format text files Key-Value Pairs for interpreting key-value pairs as a dictionary .j namespace for serializing as, and deserializing from, JSON Datatypes in kdb+¶ Different types of data have different representations in q, corresponding to different internal representations in kdb+. This is of particular importance in the representation of vectors: Lists of atoms of the same type are called vectors (sometimes simple or homogeneous lists) and have representations that vary by type. Every object in q has a datatype, reported by the type keyword. q)type 42 43 44 / vector of longs 7h q)type (+) / even an operator has a type 102h Datatypes for a complete table. Numbers¶ type atom vector null inf ------------------------------------------ short 42h 42 43 44h 0Nh 0Wh int 42i 42 43 44i 0Ni 0Wi long 42j 42 43 44j 0Nj 0Wj 42 42 43 44 0N 0W real 42e 42 43 44e 0Ne 0We float 42f 42 43 44f 0n 0w 42. 42 43 44. The default integer type is long, so the j suffix can be omitted. A decimal point in a number is sufficient to denote a float, so is an alternative to the f suffix. Nulls and infinities are typed as shown. Text¶ Text data is represented either as char vectors or as symbols. "a" / char atom "quick brown fox" / char vector ("quick";"brown";"fox") / list of char vectors Char vectors can be indexed and are mutable, but are known in q as strings. q)s:"quick" / string q)s[2] / indexing "i" q)s[2]:"a" / mutable q)s "quack" Symbols are atomic and immutable. They are suitable for representing recurring values. 
`screw / symbol atom `screw`nail`screw / symbol vector q)count `screw`nail`screw / symbols are atomic 3 The null string is " " and the null symbol is a single backtick ` . Dates and times¶ type atom vector null inf ------------------------------------------------------------- month 2020.01m 2020.01 2019.08m 0Nm date 2020.01.01 2020.01.01 2020.01.02 0Nd 0Wd minute 12:34 12:34 12:46 0Nu 0Wu second 12:34:56 12:34:56 12:46:30 0Nv 0Wv time 12:34:56.789 12:34:56.789 12:46:30.500 0Nt 0Wt type atom null inf ------------------------------------------------------------- timestamp 2020.02.29D12:11:42.381000000 0Np 0Wp datetime 2020.02.29T12:14:42.718 0Nz 0Wz timespan 0D00:05:14.659000000 0Nn 0Wn Datetime is deprecated. Prefer the nanosecond precision of timestamps. Booleans¶ Booleans have the most compact vector representation in q. q)"Many hands make light work."="a" 010000100000100000000000000b GUIDs¶ In general, there should be no need for char vectors for IDs. IDs should be int, sym or guid. Guids are faster (much faster for = ) than the 16-byte char vectors and take 2.5 times less storage (16 per instead of 40 per). Use Deal to generate unique guids. q)-2?0Ng cf74afa1-6c49-8e11-d599-736eba641207 6080b044-aa79-2d30-62a4-34390a4c81d1 Datatypes Cast, Tok, null , type Q for Mortals §2.4 Basic Data Types – Atoms Interactive development environments¶ When q runs it displays a console where you enter expressions and see them evaluated. KDB+ 4.0 2020.05.20 Copyright (C) 1993-2020 Kx Systems m64/ 12()core 65536MB sjt mackenzie.local 127.0.0.1 .. q)til 6 0 1 2 3 4 5 q) This is all you need to follow the tutorials, and if you just want to learn a little about q, it is easiest to work in the console. As you become more familiar with q, you may prefer to work in an interactive development environment. KX Developer¶ An interactive development environment for kdb+ produced and maintained by KX. Free for all use. KX Analyst is the enterprise version of Developer. KX Dashboards¶ An interactive development environment for graphical displays from q scripts. Free for all use. Jupyter notebooks¶ JupyterQ, from KX, lets you run q inside a Jupyter notebook. Third-party IDEs¶ - qStudio, a cross-platform IDE with charting and autocompletion by TimeStored - Q Insight Pad is an IDE for Windows - kxcontrib/cburke/qconsole is an IDE using GTK A mountain tour of kdb+ and the q programming language Overview - kdb+ is an in-memory, column-store database optimized for time series. It has a tiny footprint and is seriously quick. - The q vector-programming language is built into kdb+. It supports SQL-style queries. - Q expressions are interpreted in a REPL. - Tables, dictionaries and functions are first-class objects.. - kdb+ persists objects as files. A large table is stored as a directory of column files. - Explicit loops are rare. Iteration over lists is implicit in most operators; otherwise mostly handled by special iteration operators. - Parallelization is implicit: operators use multithreading where it helps. - Interprocess communication is baked in and startlingly simple. Start the tour Scripts¶ Scripts are text files in which you record what you might otherwise type into the q session. Load a script with the Load system command \l or specify the script as a command-line option. Use scripts to - evaluate expressions that define variables and functions - issue system commands, e.g. to load other scripts In a script you can also write multiline expressions. (You cannot do that in the q session.) 
Multiline expressions¶ Scripts allow you to break over multiple lines expressions that would exceed the maximum line length, or otherwise be awkward to read. Continuation lines must be contiguous (no empty lines) and indented by one or more spaces. jt.q : jt:.[!] flip( (`first; "Jacques"); (`family; "Tati"); (`dob; 1907.10.09); (`dod; 1982.11.05); (`spouse; "Micheline Winter"); (`children; 3); (`pic; "https://en.wikipedia.org/wiki/Jacques_Tati#/media/File:Jacques_Tati.jpg") ) portrait:{ n:" "sv x`first`family; / name i:.h.htac[`img;`alt`href!(n;x`pic);""]; / img a:"age ",string .[-;jt`dod`dob]div 365; / age c:", "sv(n;"d. ",4#string x`dod;a); / caption i,"<br>",.h.htac[`p;.[!]enlist each(`style;"font-style:italic");c] } q)\l jt.q q)portrait jt "<img alt=\"Jacques Tati\" href=\"https://en.wikipedia.org/wiki/Jacques_Tati#.. Back to the margin Resist the temptation to close a list or function definition with a right parenthesis or brace on the left margin. Q would interpret that as the start of a new expression. Multiline comments¶ Scripts allow you to write comments that span multiple lines. (There is no way you can do that in the q session.) Open the comment block with a single forward slash. Close it with a single backward slash. / This is a comment block. Q ignores everything in it. And I mean everything. 2+2 \ Except when closing a comment block, a line with a single backward slash opens a trailing comment block: the interpreter ignores all the subsequent lines. That is, there is no way to close a trailing comment block. Below, the expressions that run a script are temporarily relegated to a trailing comment block, allowing the developer to load the script and explore the execution environment. foo:42 bar:"quick brown fox" main:{ /main process .. .. } \ main[foo;bar] exit 0 Terminate with exit status¶ The exit keyword terminates the q session and returns its argument as the exit status. parm:.Q.opt .z.x / command-line parameters err:{ / validate parameters if[not`foo in key args;2 "foo missing";:104]; if[not`bar in key args;2 "bar missing";:105]; 0 }parm err:$[err=0;main parm;err] main:{ / main process .. .. } exit err Above is a script that validates the command-line parameters. If the foo or bar parameters are missing, messages are written to stderr (2 ) and an error code returned as the exit status. Otherwise the parameters are passed to main , which determines the exit status. The q session¶ The q session is a read-evaluate-print loop. It evaluates a q expression and prints the result. You can use it as a calculator. q)sum 44.95 1032 107.15 1184.1 q)acos -1 3.141593 The q interpreter ignores your comments. q)2+2 3 4 / add atom to a vector 4 5 6 q)/ Pi is the arc-cosine of -1 q)pi:acos -1 q)pi 3.141593 Use show to set and display a value in one expression. q)show pi:acos -1 3.141593 Multiline expressions¶ When you key Enter, the interpreter evaluates what you just typed. There is no way in the session for you to write an expression or comment that spans multiple lines. Scripts permit this. System commands¶ You can also issue system commands. For example, to see the current print precision: q)\P 7i q) A system command may print to the session, as above, but does not return a result that can be named. To do this, use the system keyword. q)show p:system"P" 7i System commands begin with a backslash. If what follows is not a q system command, it is passed to the operating system. q)\ls -al ~/. "total 1560" "drwxr-xr-x+ 87 sjt staff 2784 21 Feb 10:02 ." 
"drwxr-xr-x 9 root admin 288 12 Feb 13:39 .." "-r-------- 1 sjt staff 7 26 Feb 2018 .CFUserTextEncoding" "-rw-r--r--@ 1 sjt staff 26628 20 Feb 14:43 .DS_Store" .. Watch out for typos when issuing system commands. They may get executed in the OS. Errors¶ If q cannot evaluate your expression it signals an error. q)2+"a" 'type [0] 2+"a" ^ q) The error message is terse. If the expression is within a function, the function is suspended, which allows you to investigate the error in the context in which it is evaluated. q){x+2} "xyz" 'type [1] {x+2} ^ q))/ the extra ) indicates a suspended function q))x / x is the function argument "xyz" Use the Abort system command to cut the stack back one level. q))\ q)/ the single ) indicates the function is off the stack q)/ x is now undefined q) 'x [0] x ^ q) Command-line options¶ The q session can be launched with parameters. The most important is a filename. Q runs it as a script. $ cat hello.q / title: hello-world script in q author: [email protected] date: February 2020 \ 1 "hello world"; exit 0 $ $ q hello.q KDB+ 3.7t 2020.02.14 Copyright (C) 1993-2020 Kx Systems m64/ 4()core 8192MB sjt mint.local 192.168.0.11 EXPIRE 2020.04.01 [email protected] #55032 hello world $ Other predefined parameters set the listening port, number of secondary tasks allocated, and so on. Any other parameters are for you to specify and use. $ q -foo 5432 -bar "quick brown fox" KDB+ 3.7t 2020.02.14 Copyright (C) 1993-2020 Kx Systems m64/ 4()core 8192MB sjt mint.local 192.168.0.11 EXPIRE 2020.04.01 [email protected] #55032 q).Q.opt .z.x foo| "5432" bar| "quick brown fox" .Q namespace, .z namespace Scripts Terminate¶ End your q session with the Terminate system command. q)\\ $
A brief introduction to q and kdb+ for analysts¶ kdb+ is a powerful database that can be used for streaming, real-time and historical data. Q is the SQL-like, general-purpose programming language built on top of kdb+. It offers high-performance, in-database analytic capabilities. Get started to download and install kdb+. Launch q¶ At the shell prompt, type q to start a q console session, where the prompt q) will appear. $ q KDB+ 3.6 2018.10.23 Copyright (C) 1993-2018 Kx Systems m32/ 8()core 16384MB sjt max.local 192.168.0.17 NONEXPIRE q) Create a table¶ To begin learning q, we will create a simple table. To do this, type or copy the code below into your q session. (Don’t copy or type the q) prompt.) q)n:1000000 q)item:`apple`banana`orange`pear q)city:`beijing`chicago`london`paris q)tab:([]time:asc n?0D0;n?item;amount:n?100;n?city) These expressions create a table called tab which contains a million rows and 4 columns of random time-series sales data. (For now, understanding these lines of code is not important.) Simple query¶ The first query we run selects all rows from the table where the item sold is a banana. q)select from tab where item=`banana time item amount city ------------------------------------------ 0D00:00:00.466201454 banana 31 london 0D00:00:00.712388008 banana 86 london 0D00:00:00.952962040 banana 20 london 0D00:00:01.036425679 banana 49 chicago 0D00:00:01.254006475 banana 94 beijing .. Notice all columns in the table are returned in the result when no column is explicitly mentioned. Aggregate query¶ The next query calculates the sum of the amounts sold of all items by each city. q)select sum amount by city from tab city | amount -------| -------- beijing| 12398569 chicago| 12317015 london | 12375412 paris | 12421447 This uses the aggregate function sum within the q language. Notice this returns a keyed table where the key column is city . This key column is sorted in alphabetical order. Time-series aggregate query¶ The following query shows the sum of the amount of each item sold by hour during the day. q)select sum amount by time.hh,item from tab hh item | amount ---------| ------ 0 apple | 522704 0 banana| 506947 0 orange| 503054 0 pear | 515212 1 apple | 513723 .. The result is a keyed table with two key columns, hh for the hour and item . The results are ordered by the keyed columns. The query extracts the hour portion from the nanosecond-precision time column by adding a .hh to the column name. Congratulations, you created and queried your first q table! In-memory queries¶ This tutorial shows you q in action on some randomly-generated in-memory data. If you are new to q you won’t understand all the syntax – but you can get a flavour of the language and its performance. You can run each of these queries in the free versions of kdb+. You can paste each line of code into a session and run it. To time the performance of an operation, prepend \t . q)\t select from table The dataset is from a fictional computer-monitoring application. A company owns a set of desktop computers and monitors the CPU usage minute by minute. Users of the desktop computers can register calls with a help desk if they hit problems. Limited memory If your computer has limited memory, it’s advisable to start q with the -g 1 command line flag to minimize memory use. You can also periodically invoke garbage collection with .Q.gc . Random data generation¶ The script calls.q below will generate some semi-realistic random data. It is quite complex for beginners – don’t dwell on it! 
/ calls.q / Generate some random computer statistics (CPU usage only) / You can modify n (number of unique computers), timerange (how long the data is for) / freq (how often a computer publishes a statistic) / and calls (the number of logged calls) n:1000; timerange:5D; freq:0D00:01; calls:3000 depts:`finance`packing`logistics`management`hoopjumping`trading`telesales startcpu:(til n)!25+n?20 fcn:n*fc:`long$timerange%freq computer:([] time:(-0D00:00:10 + fcn?0D00:00:20)+fcn#(.z.p - timerange)+freq*til fc; id:raze fc#'key startcpu ) computer:update `g#id from `time xasc update cpu:{ 100&3|startcpu[first x]+sums(count x)?-2 -1 -1 0 0 1 1 2 }[id] by id from computer / Generate some random logged calls calls:([] time:(.z.p - timerange)+asc calls?timerange; id:calls?key startcpu; severity:calls?1 2 3 ) / Create a lookup table of computer information computerlookup:([id:key startcpu] dept:n?depts; os:n?`win7`win8`osx`vista) Download calls.q into your QHOME folder, then load it: q)\l calls.q Each desktop reports its CPU usage every minute (the computer table). The CPU usage is a value between 3% and 100%, and moves by between -2 and +2 between each sample period. The data is generated over a 5-day period, for 1000 machines. You can modify the number of machines, time range and sample frequency. Call records (the calls table) are generated with a severity whenever a user reports a problem. A call record has different severity levels possible. 3000 call records are generated in the 5-day period. Static information (the computerlookup table) is stored about each desktop computer, keyed by id . Here, this is just the department the machine belongs to and the operating system. Data overview¶ The generated data looks like this: q)computer time id cpu ------------------------------------- 2014.05.09D12:25:32.391350534 566 24 2014.05.09D12:25:32.415609466 477 39 2014.05.09D12:25:32.416150345 328 41 2014.05.09D12:25:32.476874123 692 38 2014.05.09D12:25:32.542837079 157 33 2014.05.09D12:25:32.553545142 997 33 2014.05.09D12:25:32.557224705 780 43 .. q)calls time id severity ------------------------------------------ 2014.05.09D12:28:29.436601608 990 1 2014.05.09D12:28:32.649418621 33 2 2014.05.09D12:28:33.102242558 843 1 2014.05.09D12:29:52.791007667 16 2 2014.05.09D12:32:43.289881705 776 3 2014.05.09D12:35:06.373595654 529 3 2014.05.09D12:38:53.922653108 766 1 .. q)computerlookup id| dept os --| ----------------- 0 | trading win8 1 | trading osx 2 | management win7 3 | finance vista 4 | packing win8 5 | telesales vista 6 | hoopjumping win8 .. / The counts of each table q)tables[]!count each value each tables[] calls | 3000 computer | 7200000 computerlookup| 1000 Aggregation queries¶ Q is much used to aggregate across large datasets. For these examples, we will use simple aggregators (max , min , avg ) and concentrate on doing complex things in the by-clause. One of the most powerful features of q is its ability to extend the query language with user-defined functions – so we can easily build custom aggregators. For the first example, we will calculate the maximum, minimum and average CPU usage for every machine, across the whole data set: q)select mxc:max cpu,mnc:min cpu,avc:avg cpu by id from computer id| mxc mnc avc --| ---------------- 0 | 63 3 22.92236 1 | 42 3 4.679444 2 | 37 3 4.239167 3 | 100 3 41.52431 4 | 100 3 79.00819 5 | 56 3 6.349028 6 | 96 3 30.41361 .. 
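
The user-defined aggregators mentioned above drop into a query just like the built-ins. As a minimal illustration (the rng function and the cpurange column name are ours, not part of the tutorial):

q)rng:{max[x]-min x}                     / custom aggregator: range of a list
q)select cpurange:rng cpu by id from computer

This returns a keyed table with one range value per machine. The remaining examples continue with the built-in max, min and avg aggregates.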
We can also do this for every date, by extracting the date component from the time field: q)select mxc:max cpu,mnc:min cpu,avc:avg cpu by id,time.date from computer id date | mxc mnc avc -------------| ---------------- 0 2014.05.09| 42 6 23.86331 0 2014.05.10| 34 3 8.539583 0 2014.05.11| 45 3 7.48125 0 2014.05.12| 63 21 43.95 0 2014.05.13| 49 8 27.98403 0 2014.05.14| 48 12 29.26309 1 2014.05.09| 42 3 16.02158 .. Similarly, we can do this for the time portion. The code below will aggregate across hours in different days: q)select mxc:max cpu,mnc:min cpu,avc:avg cpu by id,time.hh from computer id hh| mxc mnc avc -----| ---------------- 0 0 | 47 3 24.64667 0 1 | 57 3 23.04667 0 2 | 57 3 24.28 0 3 | 58 3 26.08333 0 4 | 53 3 22.53333 0 5 | 54 3 22.01333 0 6 | 56 3 23.38667 .. If that is not what is required, we can combine the aggregations to aggregate across each hour in each date separately: q)select mxc:max cpu,mnc:min cpu,avc:avg cpu by id,time.date,time.hh from computer id date hh| mxc mnc avc ----------------| ---------------- 0 2014.05.09 12| 42 32 37.28571 0 2014.05.09 13| 35 17 25 0 2014.05.09 14| 32 20 23.3 0 2014.05.09 15| 38 27 32.28333 0 2014.05.09 16| 38 19 29.55 0 2014.05.09 17| 23 15 20.1 0 2014.05.09 18| 31 20 26.03333 .. Or alternatively we can use the xbar keyword to break the time list into buckets of any size. This is equivalent to the by id,time.date,time.hh query above, but is more efficient and has extra flexibility – the bucketing can be any size, all the way down to nanoseconds: q)select mxc:max cpu,mnc:min cpu,avc:avg cpu by id,0D01:00:00.0 xbar time from computer id time | mxc mnc avc --------------------------------| ---------------- 0 2014.05.09D12:00:00.000000000| 42 32 37.28571 0 2014.05.09D13:00:00.000000000| 35 17 25 0 2014.05.09D14:00:00.000000000| 32 20 23.3 0 2014.05.09D15:00:00.000000000| 38 27 32.28333 0 2014.05.09D16:00:00.000000000| 38 19 29.55 0 2014.05.09D17:00:00.000000000| 23 15 20.1 0 2014.05.09D18:00:00.000000000| 31 20 26.03333 .. Another approach to breaking up the day might be to define a set of “daily periods”, e.g. early morning is from 00:00 to 07:00, midmorning is from 07:00 to 12:00, lunch is from 12:00 to 13:30, afternoon is from 13:30 to 17:00 and evening is after 17:00. We can aggregate the data according to these groupings by creating a function to map a minute value to a period of the day. This user-defined function drops in to the select statement in the same way as any built-in function. q)timeofday:{`0earlymorn`1midmorn`2lunch`3afternoon`4evening 00:00 07:00 12:00 13:30 17:00 bin x} q)select mxc:max cpu,mnc:min cpu,avc:avg cpu by id,time.date,tod:timeofday[time.minute] from computer id date tod | mxc mnc avc ------------------------| ---------------- 0 2014.05.09 2lunch | 42 25 33.29231 0 2014.05.09 3afternoon| 38 17 27.37619 0 2014.05.09 4evening | 33 6 20.64762 0 2014.05.10 0earlymorn| 34 7 17.22619 0 2014.05.10 1midmorn | 19 3 9.386667 0 2014.05.10 2lunch | 8 3 3.933333 0 2014.05.10 3afternoon| 4 3 3.009524 .. We can also generate an average usage profile in the date range for each time of day across all desktop machines. First, we aggregate the data and calculate the totals for each day in each time period. Then, we re-aggregate the data to get an average usage across all days. 
q)select avc:sum[cpu]%sum samplecount by tod from select sum cpu,samplecount:count cpu by time.date,tod:timeofday[time.minute] from computer tod | avc ----------| -------- 0earlymorn| 42.04672 1midmorn | 42.71953 2lunch | 41.27816 3afternoon| 40.7399 4evening | 41.12103 A simplification With this dataset, we can do as below. However, this only holds because there are exactly the same number of records for each time period in each day. If this were not the case (as is likely with a real dataset) then we must do as above. q)select avg cpu by tod:timeofday[time.minute] from computer tod | cpu ----------| -------- 0earlymorn| 42.04672 1midmorn | 42.71953 2lunch | 41.27816 3afternoon| 40.7399 4evening | 41.12103 Joins¶ Joins are very fast in q. Most q databases do not rely heavily on pre-defined foreign-key relationships between tables, as is common in standard RDMSs. Instead, ad-hoc joins are used. As an example, lj (left join) can be used join the computerlookup table to either the computer or calls table to show the static data on each computer id . q)calls lj computerlookup time id severity dept os ------------------------------------------------------------ 2014.05.09D12:28:29.436601608 990 1 hoopjumping win7 2014.05.09D12:28:32.649418621 33 2 telesales win8 2014.05.09D12:28:33.102242558 843 1 management win7 2014.05.09D12:29:52.791007667 16 2 management win7 2014.05.09D12:32:43.289881705 776 3 packing vista 2014.05.09D12:35:06.373595654 529 3 management vista 2014.05.09D12:38:53.922653108 766 1 hoopjumping win7 .. We can then perform aggregations using this static data – for example, count calls by severity and department: q)select callcount:count i by severity,dept from calls lj computerlookup severity dept | callcount --------------------| --------- 1 finance | 152 1 hoopjumping| 148 1 logistics | 127 1 management | 152 1 packing | 162 1 telesales | 122 1 trading | 130 .. Alternatively, we can enforce a foreign-key relationship and use that: q)update `computerlookup$id from `calls `calls q)select callcount:count i by id.os,id.dept,severity from calls os dept severity| callcount ------------------------| --------- osx finance 1 | 41 osx finance 2 | 48 osx finance 3 | 39 osx hoopjumping 1 | 44 osx hoopjumping 2 | 49 osx hoopjumping 3 | 37 osx logistics 1 | 29 .. Time joins¶ Now this is where it gets interesting… Q has some specialized time joins. The joins aren’t restricted to time fields (any numeric type will work) but that is what they are predominantly used for. The first, aj (asof join) is used to align two tables, aligning the prevailing value from the value table with each record in the source table. It’s probably easier to explain with an example. For our dataset, let’s say that for every helpdesk call we have received we want to get the prevailing data from the computer table (i.e. when the user called the help desk, what was the CPU reading from the computer). We can't use an lj here because the time fields are very unlikely to match exactly – so instead we use an aj to get the last value from the computer table prior to the record in the call table. Like this: q)aj[`id`time;calls;computer] time id severity cpu ---------------------------------------------- 2014.05.09D12:28:29.436601608 990 1 36 2014.05.09D12:28:32.649418621 33 2 28 2014.05.09D12:28:33.102242558 843 1 37 2014.05.09D12:29:52.791007667 16 2 41 2014.05.09D12:32:43.289881705 776 3 29 2014.05.09D12:35:06.373595654 529 3 24 2014.05.09D12:38:53.922653108 766 1 31 .. Q also has a window join wj . 
A window join is a generalization of an asof join. Instead of selecting the prevailing value, it allows you to apply any aggregation within a window around a source record. An example for the dataset we are dealing with would be to work out the maximum and average CPU usage for each computer in a window around the call time. For example, we can specify a window of 10-minutes-before to 2-minutes-after each call, and calculate the maximum and average CPU usage (the wj code is slightly more complicated than aj as some rules have to be adhered to): q)p:update `p#id from `id xasc computer q)wj[-0D00:10 0D00:02+\:calls.time; `id`time; calls; (p;(max;`cpu);(avg;`cpu))] time id severity cpu cpu ------------------------------------------------------- 2014.05.09D12:28:29.436601608 990 1 37 35.6 2014.05.09D12:28:32.649418621 33 2 29 27.6 2014.05.09D12:28:33.102242558 843 1 38 36.8 2014.05.09D12:29:52.791007667 16 2 42 41.14286 2014.05.09D12:32:43.289881705 776 3 31 29.55556 2014.05.09D12:35:06.373595654 529 3 29 26.5 2014.05.09D12:38:53.922653108 766 1 40 35.23077 .. On-disk queries¶ The linked scripts allow you to build an on-disk database and run some queries against it. The database is randomly-generated utility (smart-meter) data for different customers in different regions and industry sectors, along with some associated payment information. The idea is to allow you to see some of the q language and performance. There is more information in the README file. Building the database¶ The database is built by running the buildsmartmeterdb.q script. You can vary the number of days of data to build, and the number of customer records per day. When you run the script, some information is printed. $ q buildsmartmeterdb.q KDB+ 3.1 2014.05.03 Copyright (C) 1993-2014 Kx Systems This process is set up to save a daily profile across 61 days for 100000 random customers with a sample every 15 minute(s). This will generate 9.60 million rows per day and 585.60 million rows in total Uncompressed disk usage will be approximately 224 MB per day and 13664 MB in total Compression is switched OFF Data will be written to :./smartmeterDB To modify the volume of data change either the number of customers, the number of days, or the sample period of the data. 
Minimum sample period is 1 minute These values, along with compression settings and output directory, can be modified at the top of this file (buildsmartmeterdb.q) To proceed, type go[] q)go[] 2014.05.15T09:17:42.271 Saving static data table to :./smartmeterDB/static 2014.05.15T09:17:42.291 Saving pricing tables to :./smartmeterDB/basicpricing and :./smartmeterDB/timepricing 2014.05.15T09:17:42.293 Generating random data for date 2013.08.01 2014.05.15T09:17:47.011 Saving to hdb :./smartmeterDB 2014.05.15T09:17:47.775 Save complete 2014.05.15T09:17:47.776 Generating random data for date 2013.08.02 2014.05.15T09:17:52.459 Saving to hdb :./smartmeterDB 2014.05.15T09:17:53.196 Save complete 2014.05.15T09:17:53.196 Generating random data for date 2013.08.03 2014.05.15T09:17:58.006 Saving to hdb :./smartmeterDB 2014.05.15T09:17:58.734 Save complete 2014.05.15T09:17:58.734 Generating random data for date 2013.08.04 2014.05.15T09:18:03.438 Saving to hdb :./smartmeterDB 2014.05.15T09:18:04.194 Save complete … 2014.05.15T09:23:09.689 Generating random data for date 2013.09.30 2014.05.15T09:23:14.503 Saving to hdb :./smartmeterDB 2014.05.15T09:23:15.266 Save complete 2014.05.15T09:23:15.271 Saving payment table to :./smartmeterDB/payment/ 2014.05.15T09:23:15.281 HDB successfully built in directory :./smartmeterDB 2014.05.15T09:23:15.281 Time taken to generate and store 585.60 million rows was 00:05:33 2014.05.15T09:23:15.281 or 1.76 million rows per second Running the queries¶ Once the database has been built, you can start the tutorial by running smartmeterdemo.q . This will print an overview of the database, and a lot of information as to how to step through the queries. Each query will show you the code, a sample of the results, and timing and memory usage information for the query. You can also see the code in smartmeterfunctions.q . (You will also see comments in the code in this script which should help explain how it works.) $ q smartmeterdemo.q KDB+ 3.1 2014.05.03 Copyright (C) 1993-2014 Kx Systems DATABASE INFO ------------- This database consists of 5 tables. It is using 0 secondary processes. There are 61 date partitions. To start running queries, execute .tut.n[] . … Run .tut.help[] to redisplay the below instructions Start by running .tut.n[]. .tut.n[] : run the Next example .tut.p[] : run the Previous example .tut.c[] : run the Current example .tut.f[] : go back to the First example .tut.j[n] : Jump to the specific example .tut.db[] : print out database statistics .tut.res : result of last run query .tut.gcON[] : turn garbage collection on after each query .tut.gcOFF[] : turn garbage collection off after each query .tut.help[] : display help information \\ : quit q).tut.n[] ********** Example 0 ********** Total meter usage for every customer over a 10 day period Function meterusage has definition: {[startdate; enddate] start:select first usage by meterid from meter where date=startdate; end:select last usage by meterid from meter where date=enddate; end-start} 2014.05.15T09:27:55.260 Running: meterusage[2013.08.01;2013.08.10] 2014.05.15T09:27:56.817 Function executed in 1557ms using 387.0 MB of memory Result set contains 100000 rows. First 10 element(s) of result set: meterid | usage --------| -------- 10000000| 3469.449 10000001| 2277.875 10000002| 4656.111 10000003| 2216.527 10000004| 2746.24 10000005| 2349.073 10000006| 3599.034 10000007| 2450.384 10000008| 1939.314 10000009| 3934.089 Garbage collecting... ********************************* q) Experimentation¶ Experiment! 
There are lots of things to try: - Run each query several times – does the performance change? (Q doesn’t use explicit caching, it relies on the OS file-system caches.) - Run the queries with different parameters. - If you have multiple cores available, restart the database with secondary processes. See if some of the query performance changes. - Rebuild the database and change the number of days of data in the database, and the number of records per day. How is the query performance affected? - Rebuild the database with compression turned on. How does the size vary? And the performance? User interface¶ The Smart-Meter demo includes a basic UI intended for Business Intelligence-type usage. It is intended as a simple example of an HTML5 front end talking directly to the q database. It is not intended as a demonstration of capability in building advanced BI tools or complex GUIs. It is to show the performance of q slicing and dicing the data in different ways directly from the raw dataset, and the q language: the whole report is done in one function, usagereport . There isn’t any caching of data; there aren't any tricks. The report allows for 3 things: - filtering of the data by date, customer type and region - grouping (aggregating) by different combinations of dimensions. The aggregated stats are max/min/avg/total/count. If no grouping is selected, the raw usage for every meter is displayed - pivoting of the data by a chosen field. If the data is pivoted then the totalusage value is displayed. To access the UI, start the smart-meter demo database on port 5600 and point your browser (Chrome or Firefox only) at http://localhost:5600/smartmeter.html . If the date range becomes large, the query will take more time and memory. Similarly, grouping by hour can increase the time taken. 32-bit free version It is not hard to exceed the 32-bit memory limits of the free version of q when working on large datasets. What next?¶ Start building your own database? A good way to start is to load some data from CSV. Or perhaps try one of the other tutorials.
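
A small pointer for the compression experiment suggested above: besides the build script's own compression setting, kdb+ compresses anything subsequently written with set once the .z.zd default is assigned. A minimal sketch; the block size, algorithm and level are illustrative choices and the file path is an example:

q).z.zd:17 2 6                            / 2^17-byte logical blocks, gzip (algorithm 2), level 6
q)`:cmp/t/ set ([]x:til 1000;y:1000?1f)   / splayed write is now compressed
q)-21!`:cmp/t/x                           / per-file compression statistics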
.tst.desc["Assertions"]{ should["increment the assertions run counter by one"]{ assertsRun: .tst.assertState.assertsRun; 1 musteq 1; .tst.assertState.assertsRun musteq 1 + assertsRun; }; should["attach failure messages to the failures lists"]{ oldFailures: .tst.assertState.failures; / Don't want intentional failures made in the name of testing to cause a test failure must[0b;"failure1"]; must[0b;"faiure2"]; must[1b;"notfailure"]; testedFailures: .tst.assertState.failures; .tst.assertState.failures:oldFailures; count[testedFailures] musteq 2; }; }; .tst.desc["Error Assertions"]{ before{ `oldFailures mock .tst.assertState.failures; / Don't want intentional failures made in the name of testing to cause a test failure }; should["catch errors"]{ mustnotthrow[()]{ mustthrow[();{'"foo"}]; mustnotthrow[();{'"foo"}]; .tst.assertState.failures:oldFailures; }; }; should["be capable of executing function objects"]{ errFunc:{'"foo"}; cleanFunc:{"foo"}; mustthrow[();errFunc]; mustnotthrow[();cleanFunc]; .tst.assertState.failures:oldFailures; }; should["be capable of executing lists"]{ `errFunc mock {'x}; `cleanFunc mock {x}; mustthrow[();(errFunc;"foo")]; mustnotthrow[();(cleanFunc;"foo")]; mustthrow[();(`errFunc;"foo")]; mustnotthrow[();(`cleanFunc;"foo")]; .tst.assertState.failures:oldFailures; }; should["report only thrown exceptions that were not supposed to have been thrown"]{ mustnotthrow["foo";{'"foo"}]; mustnotthrow["foo";{'"bar"}]; mustnotthrow["*foo*";{'"farfigfoogen"}]; testedFailures: .tst.assertState.failures; .tst.assertState.failures:oldFailures; first[testedFailures] mustlike "*to not throw the error 'foo'*"; last[testedFailures] mustlike "*to not throw the error 'farfigfoogen'*"; count[testedFailures] musteq 2; }; should["report only unthrown exceptions that were supposed to have been thrown"]{ mustthrow["foo";{'"bar"}]; mustthrow["foo";{'"foo"}]; mustthrow[("foo";"baz");{'"bar"}]; mustthrow["foo";{""}]; mustthrow["*foo*";{'"farfigfoogen"}]; testedFailures: .tst.assertState.failures; .tst.assertState.failures:oldFailures; testedFailures[0] mustlike "*the error 'foo'. Error thrown: 'bar'*"; testedFailures[1] mustlike "*one of the errors 'foo','baz'. Error thrown: 'bar'*"; testedFailures[2] mustlike "*the error 'foo'. 
No error thrown*"; count[testedFailures] musteq 3; }; }; ================================================================================ FILE: qspec_test_test_expec_runner.q SIZE: 3,557 characters ================================================================================ .tst.desc["Running an Expectation"]{ before{ `.tst.contextHelper mock {[x;y] system "d ", string x} system "d"; / Need to change back to the proper execution context after every call that refers to a mocked variable in the current context `myRestore mock .tst.restore; / Mocking restore so the UI doesn't get clobbered `.tst.restore mock {}; `.tst.expecList mock .tst.expecList; `.tst.currentBefore mock .tst.currentBefore; `.tst.currentAfter mock .tst.currentAfter; `.tst.callbacks.expecRan mock {[x;y]}; / Mock this out so expectations run TO test running expectations don't count towards test expectations ran `getExpec mock {last .tst.fillExpecBA .tst.expecList}; }; after{ myRestore[]; }; should["call the main expectation function"]{ `ran mock 0b; should["run this"]{`ran mock 1b}; e:getExpec[]; .tst.runExpec[();e]; .tst.contextHelper[]; must[ran;"Expected the expectation to have run."]; }; should["call the before function before calling the main expectation function"]{ `beforeRan`ran mock' 0b; should["run this"]{`ran mock 1b and beforeRan}; before {`beforeRan mock 1b}; e:getExpec[]; .tst.runExpec[();e]; .tst.contextHelper[]; must[beforeRan;"Expected the before expectation to run"]; must[ran;"Expected the main expectation to run after the before function"]; }; should["call the after function after calling the main expectation function"]{ `afterRan`ran mock' 0b; should["run this"]{`ran mock 1b}; after {`afterRan mock 1b and ran}; e:getExpec[]; .tst.runExpec[();e]; .tst.contextHelper[]; must[ran;"Expected the main expectation to run"]; must[afterRan;"Expected the after function to run after the main expectation"]; }; should["make assertions available to be used within the expectation"]{ `.q.must mock {[x;y];'"fail"}; should["run this"]{`noError mock @[{must[1b;"silent pass"];1b};(::);0b]}; e:getExpec[]; .tst.runExpec[();e]; .tst.contextHelper[]; must[noError;"Expected the assertion method to not throw an error"]; }; should["execute the expectation in the correct context"]{ should["change context"]{`..context mock system "d";}; e:getExpec[]; `.tst.context mock `.foo; .tst.runExpec[();e]; `.[`context] mustmatch `.foo; }; should["restore mocked values after all expectation functions have executed"]{ `ran mock 0b; `.tst.restore mock {`ran mock 1b}; should["run this"]{}; e:getExpec[]; .tst.runExpec[();e]; .tst.contextHelper[]; must[ran;"Expected the mocking restore function to have been called"]; }; should["prevent errors from escaping when running the expectation"]{ should["run this"]{'foo}; e:getExpec[]; .tst.runExpec[();e]; mustnotthrow[();{[x;y] .tst.runExpec[x]}[e]]; }; should["call the expecRan callback with the results of running the expectation and current specification"]{ `..callbackCalled mock 0b; / The context will be in .tst when the callback is executed `.tst.callbacks.expecRan mock {[x;y]`..callbackCalled set 1b}; should["run this"]{}; e:getExpec[]; .tst.runExpec[();e]; .tst.contextHelper[]; must[callbackCalled;"Expected the descLoaded callback to have been called"]; }; should["restage an expectation if the test run is to immediately halt"]{ `beforeCounter mock 0;; `restoreCounter mock 0; `.tst.restore mock {restoreCounter+:1}; `.tst.halt mock 1b; before{beforeCounter+:1}; should["restage"]{'"foo"}; e:getExpec[]; 
.tst.runExpec[();e]; beforeCounter musteq 2; restoreCounter musteq 1; }; }; ================================================================================ FILE: qspec_test_test_fileloading.q SIZE: 704 characters ================================================================================ .tst.desc["Test Loading"]{ before{ `basePath mock ` sv (` vs .tst.tstPath)[0],`nestedFiles; `pathList mock `foo`bar`baz!` sv' basePath,'`foo`bar`baz; }; should["recursively find all files matching an extension in a path"]{ (asc ` sv' `a`b`c`d`d,'`q) musteq asc (` vs' .tst.suffixMatch[".q";pathList[`foo]])[;1]; `e.k musteq asc (` vs' .tst.suffixMatch[".k";pathList[`foo]])[;1]; }; should["find all test files in a list of paths"]{ (asc ` sv' `a`b`c`d`d`f`g,'`q) musteq asc (` vs' .tst.findTests[value pathList])[;1]; }; should["return a q file given a q file"]{ path: ` sv (value pathList)[0],`one`a.q; / Happen to know this file exists path musteq .tst.findTests[path]; }; }; ================================================================================ FILE: qspec_test_test_fuzz.q SIZE: 3,968 characters ================================================================================ .tst.desc["Fuzz expectations"]{ before{ `.tst.contextHelper mock {[x;y] system "d ", string x} system "d"; / Need to change back to the proper execution context after every call that refers to a mocked variable in the current context `myRestore mock .tst.restore; / Mocking restore so the UI doesn't get clobbered `.tst.restore mock {}; `.tst.expecList mock .tst.expecList; `.tst.callbacks.expecRan mock {[x;y]}; / Mock this out so expectations run TO test running expectations don't count towards test expectations ran `getExpec mock {last .tst.fillExpecBA .tst.expecList}; }; after{ myRestore[]; }; should["run the fuzz test the number of times specified"]{ `ran mock 0; holds["run this";((),`runs)!(),20]{ran+:1}; e:getExpec[]; .tst.runExpec[();e]; .tst.contextHelper[]; ran musteq 20; `ran mock 0; holds["run this";((),`runs)!(),40]{ran+:1}; e:getExpec[]; .tst.runExpec[();e]; .tst.contextHelper[]; ran musteq 40; }; should["fail when the percentage of failures exceeds the maximum percentage of failures"]{ oldFailures: .tst.assertState.failures; / Don't want intentional failures made in the name of testing to cause a test failure `ran mock 0; holds["run this";`runs`maxFailRate!(20;.5)]{ ran+:1; if[ran > 9; 1 musteq 2]; / Force failure for a certain percentage }; e:getExpec[]; e:.tst.runExpec[();e]; .tst.contextHelper[]; testedFailures: .tst.assertState.failures; .tst.assertState.failures:oldFailures; e[`failRate] mustgt .5; e[`result] mustlike "*Fail"; }; should["not restore mocked variables between fuzz runs"]{ `ran mock 0; holds["run this";((),`runs)!(),20]{ran+:1}; e:getExpec[]; .tst.runExpec[();e]; .tst.contextHelper[]; ran musteq 20; }; should["provide fuzz variables to the function"]{ `aVar mock 0b; `bVar mock 0b; `cVar mock 0b; `xKey mock `symbol$(); holds["run this";(`runs`vars)!(1;`a`b`c!(`symbol;1 2 3;20#0Nd))]{ xKey:: key x; aVar:: x`a; bVar:: x`b; cVar:: x`c; }; e:getExpec[]; .tst.runExpec[();e]; .tst.contextHelper[]; `a`b`c mustin xKey; type[aVar] musteq -11h; x[bVar] mustin 1 2 3; count[cVar] mustlt 20; type[cVar] musteq 14h; }; }; .tst.desc["The Fuzz Generator"]{ should["return a list of fuzz values of the given type provided a symbol"]{ type[.tst.pickFuzz[`symbol;1]] musteq 11h; type[.tst.pickFuzz[`long;100]] musteq 7h; type[.tst.pickFuzz[`time;10]] musteq 19h; type[.tst.pickFuzz[`guid;10]] musteq 2h; }; should["run a 
generator function once for every run requested"]{ `runsDone mock 0; .tst.pickFuzz[{runsDone+:1};100]; runsDone musteq 100; }; should["return a table of distinct fuzz values given a dictionary"]{ r: .tst.pickFuzz[`a`b`c`d!`long`float`symbol`timespan;20]; type[r] musteq 98h; type[r`a] musteq 7h; type[r`b] musteq 9h; type[r`c] musteq 11h; type[r`d] musteq 16h; }; should["return a list of elements from a general list"]{ l: (10;`a;"foo";`a`b`c!1 2 3); (10,.tst.pickFuzz[l;20]) mustin l; / Force the list to start with an atom so the comparison works right }; should["return a list of elements from a typed list"]{ l: 10 30 33 22 80 4; .tst.pickFuzz[l;40] mustin l; l: `x`z`f`blah`ep; .tst.pickFuzz[l;40] mustin l; }; should["return lists of fuzz values less than the maximum length given an empty typed list"]{ l:.tst.pickFuzz[`float$();100]; (count each l) mustlt .tst.fuzzListMaxLength; (type each l) musteq 9h; }; should["return lists of fuzz values less than the specified length given a list of null values of a single type"]{ l:.tst.pickFuzz[20#0Nd;100]; (count each l) mustlt 20; (type each l) musteq abs type 0Nd; }; should["return lists of fuzz values of the specified length based on a single value provided a list of identical elements"]{ l: .tst.pickFuzz[20#100;1000]; (count each l) mustlt 20; l mustlt' 100; l: .tst.pickFuzz[20#200;1000]; l mustlt' 200; }; }; ================================================================================ FILE: qspec_test_test_mock.q SIZE: 797 characters ================================================================================
parsequery:{[q]q:$[10=type q;q;10h=abs type f:first q;destringf[f],1_ q;q]}; destringf:{$[(s:`$x)in key`.q;.q s;s~`insert;insert;any (100h; 104h)=type first f: @[parse; x; 0];f;s]}; cando:{[u;q]q:parsequery[q]; $[enabled;allowed[u;q];1b]}; requ:{[u;q]q:parsequery[q]; $[enabled; expr[u;q]; valp q]}; req:{$[.z.w = 0 ; value x; requ[.z.u;x]]} / entry point - replace .z.pg/.zps / authentication / methods - must have function for each authtype - e.g. local,ldap auth.local:{[u;p] ud:user[u]; r:$[`md5~ht:ud`hashtype; md5[p]~ud`password; 0b]; / unknown hashtype r} / ldap autentication relies on ldap code auth.ldap:{[u;p] / check if ldap has been set up $[@[value;`.ldap.enabled;0b]; .ldap.login[u;p]; 0b]} / entry point - replace .z.pw login:{[u;p] if[(not u in key user) or (`public=(1!usergroup)[u][`groupname]); if["B"$(.Q.opt .z.x)[`public][0;0]; if[""~p; adduser[u;`local;`md5;(md5 p)]; assignrole[u;`publicuser]; addtogroup[u;`public]; addpublic[u;.z.w]; :1b; ]]; :0b]; / todo print log to TorQ? ud:user[u]; if[not ud[`authtype] in key auth;:0b]; auth[ud`authtype][u;p]} / drop public users on logout droppublic:{[w] if[any "B"$(.Q.opt .z.x)[`public][0;0]; if[0<count publictrack?w; u:(value publictrack?w)[0]; removeuser[u]; unassignrole[u;`publicuser]; removefromgroup[u;`public]; removepublic[u]; ]] } init:{ .dotz.set[`.z.ps;{@[x;(`.pm.req;y)]}value .dotz.getcommand[`.z.ps]]; .dotz.set[`.z.pg;{@[x;(`.pm.req;y)]}value .dotz.getcommand[`.z.pg]]; // skip permissions for empty lines in q console/qcon .dotz.set[`.z.pi;{$[x in (1#"\n";"");.Q.s value x;.Q.s $[.z.w=0;value;req]@x]}]; .dotz.set[`.z.pp;{'"pm: HTTP POST requests not permitted"}]; // from V3.5 2019.11.23, .h.val is used in .z.ph to evaluate request; below that disallow .z.ph $[(.z.K>=3.5)&.z.k>=2019.11.13;.h.val:req;.dotz.set[`.z.ph;{'"pm: HTTP GET requests not permitted"}]]; .dotz.set[`.z.ws;{'"pm: websocket access not permitted"}]; .dotz.set[`.z.pw;login]; .dotz.set[`.z.pc;{droppublic[y];@[x;y]}value .dotz.getcommand[`.z.pc]]; } if[enabled;init[]] if[enabled;.proc.loadconfig[getenv[`KDBCONFIG],"/permissions/";] each `default,.proc.proctype,.proc.procname; if[not ""~getenv[`KDBAPPCONFIG]; .proc.loadconfig[getenv[`KDBAPPCONFIG],"/permissions/";] each `default,.proc.proctype,.proc.procname]] ================================================================================ FILE: TorQ_code_handlers_trackclients.q SIZE: 3,278 characters ================================================================================ // taken from http://code.kx.com/wsvn/code/contrib/simon/dotz/ / track active clients of a kdb+ session in session table CLIENTS / when INSTRUSIVE is true .z.po goes back asking for more background / use monitorusage.q or logusage.q if you need request by request info / port - port at po time, _may_ have been changed by a subsequent \p / sz - total (uncompressed) bytes transferred, could use it to bounce greedy clients / startp - when the session started (.z.po) / endp - when the session ended (.z.pc) / lastp - time of last session activity (.z.pg/ps/ws) \d .clients enabled:@[value;`enabled;1b] // whether this code is automatically enabled opencloseonly:@[value;`opencloseonly;0b] // whether we only log opening and closing of connections // Create a clients table clients:@[value;`clients;([w:`int$()]ipa:`symbol$();u:`symbol$();a:`int$();k:`date$();K:`float$();c:`int$();s:`int$();o:`symbol$();f:`symbol$();pid:`int$();port:`int$();startp:`timestamp$();endp:`timestamp$();lastp:`timestamp$();hits:`int$();errs:`int$();sz:`long$())] 
unregistered:{except[key .z.W;exec w from`CLIENTS]} / .clients.addw each unregistered[] cleanup:{ / cleanup closed or idle entries if[count w0:exec w from`.clients.clients where not .dotz.livehn w; update endp:.proc.cp[],w:0Ni from`.clients.clients where w in w0]; if[.clients.MAXIDLE>0; hclose each exec w from`.clients.clients where .dotz.liveh w,lastp<.proc.cp[]-.clients.MAXIDLE]; delete from`.clients.clients where not .dotz.liveh w,endp<.proc.cp[]-.clients.RETAIN;} hit:{update lastp:.proc.cp[],hits:hits+1i,sz:sz+-22!x from`.clients.clients where w=.z.w;x} hite:{update lastp:.proc.cp[],hits:hits+1i,errs:errs+1i from`.clients.clients where w=.z.w;'x} po:{[result;W] cleanup[]; `.clients.clients upsert(W;.dotz.ipa .z.a;.z.u;.z.a;0Nd;0n;0Ni;0Ni;(`);(`);0Ni;0Ni;zp;0Np;zp:.proc.cp[];0i;0i;0j); if[INTRUSIVE; neg[W]"neg[.z.w]\"update k:\",(string .z.k),\",K:\",(-3!.z.K),\",c:\",(-3!.z.c),\",s:\",(-3!system\"s\"),\",o:\",(-3!.z.o),\",f:\",(-3!.z.f),\",pid:\",(-3!.z.i),\",port:\",(-3!system\"p\"),\" from`.clients.clients where w=.z.w\""]; result} addw:{po[x;x]} / manually add a client pc:{[result;W] update w:0Ni,endp:.proc.cp[] from`.clients.clients where w=W;cleanup[];result} .dotz.set[`.z.pc;{.clients.pc[x y;y]}value .dotz.getcommand[`.z.pc]]; wo:{[result;W] cleanup[]; `.clients.clients upsert(W;.dotz.ipa .z.a;.z.u;.z.a;0Nd;0n;0Ni;0Ni;(`);(`);0Ni;0Ni;zp;0Np;zp:.proc.cp[];0i;0i;0j); result} if[enabled; .dotz.set[`.z.po;{.clients.po[x y;y]}value .dotz.getcommand[`.z.po]]; .dotz.set[`.z.wo;{.clients.wo[x y;y]}value .dotz.getcommand[`.z.wo]]; .dotz.set[`.z.wc;{.clients.pc[x y;y]}value .dotz.getcommand[`.z.wc]]; if[not opencloseonly; .dotz.set[`.z.pg;{.clients.hit[@[x;y;.clients.hite]]}value .dotz.getcommand[`.z.pg]]; .dotz.set[`.z.ps;{.clients.hit[@[x;y;.clients.hite]]}value .dotz.getcommand[`.z.ps]]; .dotz.set[`.z.ws;{.clients.hit[@[x;y;.clients.hite]]}value .dotz.getcommand[`.z.ws]];]]; / if no other timer then go fishing for zombie clients every .clients.MAXIDLE / if[not system"t"; / .dotz.set[`.z.ts;{.clients.cleanup[]}]; / system"t ",string floor 1e-6*.clients.MAXIDLE] ================================================================================ FILE: TorQ_code_handlers_trackservers.q SIZE: 22,047 characters ================================================================================ // modified version of trackservers.q // http://code.kx.com/wsvn/code/contrib/simon/dotz/ / track active servers of a kdb+ session in session table SERVERS // Check if the process has been initialised correctly if[not @[value;`.proc.loaded;0b]; '"environment is not initialised correctly to load this script"] \d .servers SERVERS:@[value;`.servers.SERVERS;([]procname:`symbol$();proctype:`symbol$();hpup:`symbol$();w:`int$();hits:`int$();startp:`timestamp$();lastp:`timestamp$();endp:`timestamp$();attributes:())]
Dynamically shrinking big data using time-series database kdb+¶ The widespread adoption of algorithmic trading technology in financial markets combined with ongoing advances in complex event processing has led to an explosion in the amount of data which must be consumed by market participants on a real-time basis. Such large quantities of data are important to get a complete picture of market dynamics; however they pose significant capacity-challenges for data consumers. While specialized technologies such as kdb+ are able to capture these large data volumes, downstream systems which are expected to consume this data can become swamped. It is often the case that when dataset visualizations are focused on trends or spikes occurring across relatively longer time periods, a full tick-by-tick account is not desired; in these circumstances it can be of benefit to apply a simplification algorithm to reduce the dataset to a more manageable size while retaining the key events and movements. This paper will explore a way to dynamically simplify financial time series within kdb+ in preparation for export and consumption by downstream systems. We envisage the following use cases, among potentially many others: - GUI rendering – This is particularly relevant for Web GUIs which typically use client-side rendering. Transferring a large time series across limited bandwidth and rendering a graph is typically a very slow operation; a big performance gain can be achieved if the dataset can be reduced prior to transfer/rendering. - Export to spreadsheet applications – Powerful as they are, spreadsheets are not designed for handling large-scale datasets and can quickly grind to a halt if an attempt is made to import and conduct analysis. - Reduced storage space – Storing smaller datasets will result in faster data retrieval and management. Key to our approach is avoiding distortion in either the time or the value domain, which are inevitable with bucketed summaries of data. All tests were performed using kdb+ version 3.2 (2015.01.08). Background¶ The typical way to produce summaries of data sets in kdb+ is by using bucketed aggregates. Typically these aggregates are applied across time slices. Bucketing¶ Bucketing involves summarizing data by taking aggregates over time windows. This results in a loss of a significant amount of resolution. select avg 0.5 * bid + ask by 00:01:00.000000000 xbar time from quote where sym=`ABC The avg function has the effect of attenuating peaks and troughs, features which can be of particular interest for analysis in certain applications. A common form of bucket-based analysis involves taking the open, high, low and close (OHLC) price from buckets. These are plotted typically on a candlestick or line chart. select o:first price, h:max price, l:min price, c:last price by 00:01:00.000000000 xbar time from trade where sym=`ABC OHLC conveys a lot more information about intra-bucket price movements than a single bucketed aggregate, and will preserve the magnitude of significant peaks and troughs in the value domain, but inevitably distorts the time domain – all price movements in a time interval are summarized in 4 values for that interval. A new approach¶ The application of line and polygon simplification has a long history in the fields of cartography, robotics and geospatial data analysis. Simplification algorithms work to remove redundant or unnecessary data points from the input dataset by calculating and retaining prominent features. 
They retain resolution where it is required to preserve shape but aggressively remove points where they will not have a material impact on the curve, thus preserving the original peaks, troughs, slopes and trends resulting in minimal impact on visual perceptual quality. Time-series data arising from capital-market exchanges often involve a proliferation of numbers that are clustered closely together on a point-by-point basis but form very definite trends across longer time horizons; as such they are excellent candidates for simplification algorithms. Throughout the following sections we will use the method devised by (Ramer, 1972) and (Douglas & Peucker, 1973) to conduct our line-simplification analysis. In a survey (McMaster, 1986), this method was ranked as mathematically superior to other common methods based upon dissimilarity measurements and considered to be the best at choosing critical points and yielding the best perceptual representations of the original lines. The discriminate smoothing that results from these methods can massively reduce the complexity of a curve while retaining key features which, in a financial context, would typically consist of price jumps or very short-term price movements, which would be elided or distorted by alternative methods of summarizing the data. In the original work, the authors describe a method for reducing the number of points required to represent a polyline. A typical input to the process is the ordered set of points below. Figure 1 The first and last points in the line segment form a straight line; we begin by calculating the perpendicular distances from this line to all intervening points in the segment. Where none of the perpendicular distances exceed a user-specified tolerance, the straight-line segment is deemed suitable to represent the entire line segment and the intervening points are discarded. If this condition is not met, the point with the greatest perpendicular distance from the straight line segment is selected as a break point and new straight-line segments are then drawn from the original end points to this break point. Figure 2 The perpendicular distances to the intervening points are then recalculated and the tolerance conditions reapplied. In our example the gray points identified below are deemed to be less than the tolerance value and are discarded. Figure 3 The process continues with the line segment being continually subdivided and intermediate points from each sub-segment being discarded upon each iteration. At the end of this process, points left over are connected to form a simplified line. Figure 4 If the user specifies a low tolerance this will result in very little detail being removed whereas if a high tolerance value is specified this results in all but the most general features of the line being removed. The Ramer-Douglas-Peucker method may be described as a recursive divide-and-conquer algorithm whereby a line is divided into smaller pieces and processed. Recursive algorithms are subject to stack-overflow problems with large datasets and so in our analysis we present both the original recursive version of the algorithm as well as an iterative, non-recursive version which provides a more robust and stable method. Implementation¶ We initially present the original recursive implementation of the algorithm, for simplicity, as it is easier understood. 
Recursive implementation¶

// perpendicular distance from point to line
pDist:{[x1;y1;x2;y2;x;y]
  slope:(y2 - y1)%x2 - x1;
  intercept:y1 - slope * x1;
  abs ((slope * x) - y - intercept)%sqrt 1f + slope xexp 2f
 }

rdpRecur:{[tolerance;x;y]
  // Perpendicular distance from each point to the line
  d:pDist[first x;first y;last x;last y;x;y];
  // Find furthest point from line
  ind:first where d = max d;
  $[tolerance < d ind;
    // Distance is greater than threshold => split and repeat
    .z.s[tolerance;(ind + 1)#x;(ind + 1)#y],' 1 _/:.z.s[tolerance;ind _ x;ind _ y];
    // else return first and last points
    (first[x],last[x];first[y],last[y])]
 }

It is easy to demonstrate that a recursive implementation in kdb+ is prone to throw a stack error for sufficiently jagged lines with a low input tolerance. For example, given a triangle wave function as follows:

q)triangle:sums 1,5000#-2 2
q)// Tolerance is less than distance between each point,
q)// would expect the input itself to be returned
q)rdpRecur[0.5;til count triangle;triangle]
'stack

Due to this limitation, the algorithm can be rewritten to be iterative rather than recursive, opting for an approach which uses the Over iterator to keep track of the call stack explicitly and achieve the same result.

Iterative implementation¶

Within the recursive version of the Ramer-Douglas-Peucker algorithm the subsections that have yet to be analyzed, and the corresponding data points which have been chosen to remain, are tracked implicitly and are handled in turn when the call stack is unwound within kdb+. To circumvent the issue of internal stack limits, the iterative version explicitly tracks the subsections requiring analysis and the data points that have been removed. This carries a performance penalty compared to the recursive implementation.
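The iterative version below threads its state through the Over accumulator until nothing changes between passes. As a minimal reminder of that converge form (illustrative only, unrelated to the price data):

q)// apply the function repeatedly until the result stops changing
q){$[x<100;2*x;x]}/[1]
128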
rdpIter:{[tolerance;x;y]
  // Boolean list tracks data points to keep after each step
  remPoints:count[x]#1b;
  // Dictionary to track subsections that require analysis
  // Begin with the input itself
  subSections:enlist[0]!enlist count[x]-1;
  // Pass the initial state into the iteration procedure which will
  // keep track of the remaining data points
  res:iter[tolerance;;x;y]/[(subSections;remPoints)];
  // Apply the remaining indexes to the initial curve
  (x;y)@\:where res[1]
 }

iter:{[tolerance;tracker;x;y]
  // Tracker is a pair, the dictionary of subsections and
  // the list of chosen datapoints
  subSections:tracker[0];
  remPoints:tracker[1];
  // Return if no subsections left to analyze
  if[not count subSections;:tracker];
  // Pop the first pair of points off the subsection dictionary
  sIdx:first key subSections;
  eIdx:first value subSections;
  subSections:1_subSections;
  // Use the start and end indexes to determine the subsections
  subX:x@sIdx+til 1+eIdx-sIdx;
  subY:y@sIdx+til 1+eIdx-sIdx;
  // Calculate perpendicular distances
  d:pDist[first subX;first subY;last subX;last subY;subX;subY];
  ind:first where d = max d;
  $[tolerance < d ind;
    // Perpendicular distance is greater than tolerance
    // => split and append to the subsection dictionary
    [subSections[sIdx]:sIdx+ind;subSections[sIdx+ind]:eIdx];
    // else discard intermediate points
    remPoints:@[remPoints;1+sIdx+til eIdx-sIdx+1;:;0b]];
  (subSections;remPoints)
 }

Taking the previous triangle wave function example once again, it may be demonstrated that the iterative version of the algorithm is not similarly bound by the maximum internal stack size:

q)triangle:sums 1,5000#-2 2
q)res:rdpIter[0.5;til count triangle;triangle]
q)res[1]~triangle
1b

Results¶

Cauchy random walk¶

We initially apply our algorithms to a random price series simulated by sampling from the Cauchy distribution, which provides a highly erratic and volatile test case. Our data sample is derived as follows:

PI:acos -1f

// Cauchy distribution simulator
rcauchy:{[n;loc;scale]loc + scale * tan PI * (n?1.0) - 0.5}

// Number of data points
n:20000

// Trade table with Cauchy distributed price series
trade:([]
  time:09:00:00.000+asc 20000?(15:00:00.000-09:00:00.000);
  sym:`AAA;
  price:abs 100f + sums rcauchy[20000;0.0;0.001]
 )

The initial price chart is plotted below using KX Dashboards.

Figure 5

For our initial test run we choose a tolerance value of 0.005. This is the threshold for the algorithm – where all points on a line segment are less distant than this value, they will be discarded. The tolerance value should be chosen relative to the typical values and movements in the price series.

// Apply the recursive version of the algorithm
q)\ts trade_recur:exec flip `time`price!rdpRecur[0.005;time;price] from trade
53 1776400

// Apply the iterative version of the algorithm
q)\ts trade_iter:exec flip `time`price!rdpIter[0.005;time;price] from trade
141 1476352

q)trade_recur ~ trade_iter
1b

q)count trade_recur
4770

The simplification algorithm has reduced the dataset from 20,000 to 4,770 points, a reduction of 76%. The modified chart is plotted below.

Figure 6

Apple Inc. share price¶

Applying the algorithms to a financial time-series dataset, we take some sample trade data for Apple Inc. (AAPL.N) for a period following market-open on January 23, 2015. In total the dataset contains ten thousand data points, spanning about 7 minutes. The raw price series is plotted in Figure 7. Again, we use the functions defined above to reduce the number of data points which must be transferred to the front end and rendered.
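Where several symbols are held in one table, the simplification can be applied per symbol before the result is shipped to a client. A minimal sketch, assuming the rdpIter function above and a table with sym, time and price columns; the name simplifyBySym is illustrative:

simplifyBySym:{[tol;t]
  raze {[tol;t;s]
    // simplify the time/price curve for one symbol
    r:rdpIter[tol;exec time from t where sym=s;exec price from t where sym=s];
    ([]sym:count[r 0]#s;time:r 0;price:r 1)
  }[tol;t] each exec distinct sym from t }

q)count simplifyBySym[0.005;trade]   / reduced row count across all symbols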
In terms of financial asset-price data, the input tolerance roughly corresponds to a tick-size threshold below which any price movements will be discarded. A tolerance value of 0.01 – corresponding to 1 cent – results in a 58% reduction in the number of data points with a relatively small runtime cost (<180ms for the recursive version, <600ms for the iterative version). The result, plotted in Figure 8, compares very favorably with the plot of the raw data. There are almost no perceivable visual differences. Figure 7 Figure 8 Finally we simplify the data using a tolerance value of 0.05 – 5 cents. This results in a 97% reduction in the number of data points with a very low runtime cost (18ms for recursive version, 38ms for the iterative version) as the algorithm is able to very quickly discard large amounts of data. However, as can be seen in Figure 9, there is a clear loss of fidelity though the general trend of the series is maintained. Many of the smaller price movements are lost – ultimately the user’s choice of a tolerance value must be sensitive to the underlying data and the general time horizon across which the data will be displayed. Figure 9 Conclusion¶ In this paper we have presented two implementations of the Ramer-Douglas-Peucker algorithm for curve simplification, applying them to financial time-series data and demonstrating that a reduction in the size of the dataset to make it more manageable does not need to involve data distortion and a corresponding loss of information about its overall trends and key features. This type of data-reduction trades off an increased runtime cost on the server against a potentially large reduction in processing time on the receiving client. While for many utilities simple bucket-based summaries are more than adequate and undeniably more performant, we propose that for some uses a more discerning simplification as discussed above can prove invaluable. This is particularly the case with certain time-series and combinations thereof where complex and volatile behaviors must be studied. Authors¶ Sean Keevey is a kdb+ consultant and has developed data and analytic systems for some of the world's largest financial institutions. Sean is currently based in London developing a wide range of tailored analytic, reporting and data solutions in a major investment bank. Kevin Smyth has worked as a kdb+ consultant for some of the world’s leading exchanges and financial institutions. Based in London, Kevin has experience with data capture and high-frequency data-analysis projects across a range of asset classes. References¶ Douglas, D., & Peucker, T. (1973). Algorithms for the reduction of the number of points required to represent a digitized line or its caricature. The Canadian Cartographer 10(2), 112–122. McMaster, R. B. (1986). A statistical analysis of mathematical measures for linear simplification. The American Cartographer 13(2), 113-116. Ramer, U. (1972). An iterative procedure for the polygonal approximation of plane curves. Computer Graphics and Image Processing 1(3), 244-256. 
================================================================================ FILE: TorQ-Finance-Starter-Pack_appconfig_settings_compression.q SIZE: 189 characters ================================================================================ /- Bespoke configuration file for the compression process \d .cmp hdbpath:hsym`$getenv[`KDBHDB] // hdb directory maxage:365 // the maximum date range of partitions to scan ================================================================================ FILE: TorQ-Finance-Starter-Pack_appconfig_settings_feed.q SIZE: 181 characters ================================================================================ // Bespoke Feed config : Finance Starter Pack \d .servers enabled:1b CONNECTIONS:enlist `segmentedtickerplant // Feedhandler connects to the tickerplant HOPENTIMEOUT:30000 ================================================================================ FILE: TorQ-Finance-Starter-Pack_appconfig_settings_filealerter.q SIZE: 134 characters ================================================================================ \d .fa tickerplanttype:`segmentedtickerplant // Type of tickerplant to connect to \d .servers CONNECTIONS:.fa.tickerplanttype ================================================================================ FILE: TorQ-Finance-Starter-Pack_appconfig_settings_gateway.q SIZE: 119 characters ================================================================================ // Bespoke Gateway config : Finance Starter Pack \d .gw synccallsallowed:1b // whether synchronous calls are allowed ================================================================================ FILE: TorQ-Finance-Starter-Pack_appconfig_settings_iexfeed.q SIZE: 602 characters ================================================================================ // Bespoke Feed config : Finance Starter Pack \d .proc loadprocesscode:1b \d .servers enabled:1b CONNECTIONS:enlist `segmentedtickerplant // Feedhandler connects to the tickerplant HOPENTIMEOUT:30000
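These configuration files simply assign values in the named namespaces when the owning process loads them, so once loaded the settings can be inspected like any other globals. For illustration (each file is loaded by its corresponding process; the values are those defined above):

q).cmp.maxage             / 365
q).servers.CONNECTIONS    / ,`segmentedtickerplant
q).servers.HOPENTIMEOUT   / 30000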
// heartbeating add[`.hb.addprocs;1b;"Add a set of process types and names to the heartbeat table to actively monitor for heartbeats. Processes will be automatically added and monitored when the heartbeats are subscribed to, but this is to allow for the case where a process might already be dead and so can't be subscribed to";"[symbol(list): process types; symbol(list): process names]";""] add[`.hb.processwarning;1b;"Callback invoked if any process goes into a warning state. Default implementation is to do nothing - modify as required";"[table: processes currently in warning state]";""] add[`.hb.processerror;1b;"Callback invoked if any process goes into an error state. Default implementation is to do nothing - modify as required";"[table: processes currently in error state]";""] add[`.hb.storeheartbeat;1b;"Store a heartbeat update. This function should be added to you update callback when a heartbeat is received";"[table: the heartbeat table data to store]";""] add[`.hb.warningperiod;1b;"Return the warning period for a particular process type. Default is to return warningtolerance * publishinterval. Can be overridden as required"; "[symbollist: the process types to return the warning period for]";"timespan list of warning period"] add[`.hb.errorperiod;1b;"Return the error period for a particular process type. Default is to return errortolerance * publishinterval. Can be overridden as required"; "[symbollist: the process types to return the error period for]";"timespan list of error period"] // async messaging add[`.async.deferred;1b;"Use async messaging to simulate sync communication";"[int(list): handles to query; query]";"(boolean list:success status; result list)"] add[`.async.postback;1b;"Send an async message to a process and the results will be posted back within the postback function call";"[int(list): handles to query; query; postback function]";"boolean list: successful send status"] // compression add[`.cmp.showcomp;1b;"Show which files will be compressed and how; driven from csv file";"[`:/path/to/database; `:/path/to/configcsv; maxagefilestocompress]";"table of files to be compressed"] add[`.cmp.compressmaxage;1b;"Run compression on files using parameters specified in configuration csv file, and specifying the maximum age of files to compress";"[`:/path/to/database; `:/path/to/configcsv; maxagefilestocompress]";""] add[`.cmp.docompression;1b;"Run compression on files using parameters specified in configuration csv file";"[`:/path/to/database; `:/path/to/configcsv]";""] // data loader add[`.loader.loadallfiles;1b;"Generic loader function to read a directory of files in chunks and write them out to disk";"[dictionary of load parameters. Should have keys of headers (symbol list), types (character list), separator (character), tablename (symbol), dbdir (symbol). Optional params of dataprocessfunc (diadic function), datecol (name of column to extract date from: symbol), chunksize (amount of data to read at once:int), compression (compression parameters to use e.g. 16 1 0:int list), gc (boolean flag of whether to run garbage collection:boolean); directory containing files to load (symbol)]";""] // sort and set attributes add[`.sort.sorttab;1b;"Sort and set the attributes for a table and set of partitions based on a configuration file (default is $KDBCONFIG/sort.csv)";"[2 item list of (tablename e.g. `trade; partitions to sort and apply attributes to e.g. 
`:/hdb/2000.01.01/trade`:hdb/2000.01.02/trade)]";""] add[`.sort.getsortcsv;1b;"Read in the sort csv from the specified location";"[symbol: the location of the file e.g. `:config/sort.csv]";""] // garbage collection add[`.gc.run;1b;"Run garbage collection, print debug info before and after"; "";""] // email add[`.email.connectdefault;1b;"connect to the default mail server specified in configuration";"[]";""] add[`.email.senddefault;1b;"connect to email server if not connected. Send email using default settings";"[dictionary of email parameters. Required dictionary keys are to (symbol (list) of email address to send to), subject (character list), body (list of character arrays). Optional parameters are cc (symbol(list) of addresses to cc), bodyType (can be `html, default is `text), attachment (symbol (list) of files to attach), image (symbol of image to append to bottom of email. `none is no image), debug (int flag for debug level of connection library. 0i=no info, 1i=normal. 2i=verbose)]";"size in bytes of sent email. -1 if failure"] add[`.email.test;1b;"send a test email";"[symbol(list):email address to send test email to]";"size in bytes of sent email. -1 if failure"] add[`.email.connect;1b;"connect to specified email server";"[dictionary of connection settings. Required dictionary keys are url (symbol url of mail server host:port), user (symbol of user to sign in as) and password (symbol of password to use). Optional parameters are from (return address on emails, default is [email protected]), usessl (boolean flag of whether to use ssl/tls, default is 0b), debug (int flag for debug level of connection library. 0i=no info, 1i=normal. 2i=verbose)]";"0 if successful, -1 if failure"] add[`.email.send;1b;"Send email using supplied parameters. Requires connection to already be established";"[dictionary of email parameters. Required dictionary keys are to (symbol (list) of email address to send to), subject (character list), body (list of character arrays). Optional parameters are cc (symbol(list) of addresses to cc), bodyType (can be `html, default is `text), attachment (symbol (list) of files to attach), image (symbol of image to append to bottom of email. `none is no image), debug (int flag for debug level of connection library. 0i=no info, 1i=normal. 2i=verbose)]";"size in bytes of sent email. -1 if failure"] add[`.email.disconnect;1b;"disconnect from email server";"[]";"0"] // tplog add[`.tplog.check;1b;"Checks if tickerplant log can be replayed. If it can or can replay the first X messages, then returns the log handle, else it will read log as byte stream and create a good log and then return the good log handle ";"[logfile (symbol), handle to the log file to check; lastmsgtoreplay (long), the index of the last message to be replayed from log ]";"handle to log file, will be either the input log handle or handle to repaired log, depends on whether the log was corrupt"] // memory usage add[`.mem.objsize;1b;"Returns the calculated memory size in bytes used by an object. It may take a little bit of time for objects with lots of nested structures (e.g. 
lots of nested columns)";"[q object]";"size of the object in bytes"] ================================================================================ FILE: TorQ_code_common_async.q SIZE: 2,950 characters ================================================================================ \d .async // send a query down a handle, flush the handle, return a status as to whether it is succesfully sent (0b or 1b) // the query is wrapped so it gets send back to the originating process // the result will be returned as either // if w is true, the result will be wrapped in the status i.e. // (1b;result) or (0b;"error: error string") // otherwise it will just return the result // there are several error traps here as we need to trap // 1. that the query is successfully sent and flushed // 2. that the query is executed successfully on the server side // 3. that the result is successfully sent back down the handle (i.e. the client hasn't closed while the server is still running the query) send:{[w;h;q] // build the query to send tosend:$[w; ({[q] @[neg .z.w;@[{[q] (1b;value q)};q;{(0b;"error: server fail:",x)}];()]};q); ({[q] @[neg .z.w;@[{[q] value q};q;{"error: server fail:",x}];()]};q)]; .[{x@y; x(::);1b};(h;tosend);0b]} // use this to make deferred sync calls // it will send the query down each of the handles, then block and wait on the handles // result set is (successvector (1 for each handle); result vector) deferred:{[handles;query] // send the query down each handle sent:send[1b;;query] each handles:neg abs handles,(); // block and wait for the results res:{$[y;@[x;(::);(0b;"error: comm fail: handle closed while waiting for result")];(0b;"error: comm fail: failed to send query")]}'[abs handles;sent]; // return results (res[;0];res[;1])} // Wrap the supplied query in a postback function // Don't block the handle when waiting // Success vector is returned postback:{[handles;query;postback] send[0b;;({[q;p] (p;@[value;q;{"error: server fail:",x}])};query;postback)] each handles:neg abs handles,()} \ // Test \d . 
{@[system;"q -p ",string x;{"failed to open ",(string x),": ",y}[x]]} each testports:9995 + til 3; system"sleep 1"; h:raze @[hopen;;()]each testports if[0=count h; '"no test processes available"] // run some tests // all good -1"test 1.1"; \t r1:.async.deferred[h;({system"sleep 1";system"p"};())] show r1 -1"test 1.2"; // both fail \t r2:.async.deferred[h;({1+`a;1};())] show r2 -1"test 1.3"; // last handle fails - handle invalid \t r3:.async.deferred[h,923482;({system"sleep 1";system"p"};())] show r3 -1"test 1.4"; // server exits while client is waiting for result \t r4:.async.deferred[last h;({exit 0};())] show r4 \t r5:.async.deferred[h;"select from ([]1 2 3)"] show r5 // drop the last handle - it's dead h:-1 _ h // define a function to handle the posted back result showresult:{show x} // All the postback functions will execute very quickly as they don't block .async.postback[h;({"result 2.1: ",string x+y};2;3);`showresult] // send postback as lambda .async.postback[h;({"result 2.2: ",string x+y};2;3);showresult] // send postback as lambda .async.postback[h;({"result 2.3: ",string x+y};2;`a);showresult] // Tidy up @[;"exit 0";()] each neg h ================================================================================ FILE: TorQ_code_common_bglaunchutils.q SIZE: 2,101 characters ================================================================================ \d .sys /a function to execute system commands and return a log message depending on the resulting exit code: /used for both launchprocess.sh and killprocess.sh syscomm:{[params;cmd] /params is (i) a dictionary if syscomm has been called by launch, and (ii) a string if it has been called by killproc /cmd is the command to be executed, as a string prevexitcode:first "I"$system cmd,"; echo $?"; id:$[lok:99h=type params;`launchprocess;`killprocess]; pname:$[lok;params[`procname];params]; $[0=prevexitcode; .lg.o[id;"Successful execution: ",$[lok;"Starting ";"Terminating "],pname]; 1=prevexitcode; .lg.e[id;"Failed to ",$[lok;"start ";"terminate "],pname]; 2=prevexitcode; .lg.e[id;pname,$[lok;" already ";" not "],"running"]; 3=prevexitcode; .lg.e[id;pname," not found"]; .lg.e[id;"Unknown error encountered"] ] } /function which lets us call launchprocess.sh from inside a TorQ process /it takes a dictionary of parameters which will be passed to launchprocess.sh, i.e "-procname rdb1 -proctype rdb" etc. 
bglaunch:{[params] /exit immediately if process name and type aren't provided if[not all `procname`proctype in key params; .lg.e[`launchprocess;"Process name and type must be provided"]; :()]; /set default values with .Q.def and string the result: pass_arg:first `$.Q.opt[.z.X]`U; if[2>count pass_arg;pass_arg:`$getenv[`KDBAPPCONFIG],"/passwords/accesslist.txt"]; deflt:`procname`proctype`U`localtime`p`qcmd!(`;`;pass_arg;first string .proc.localtime;`$"0W";"q"); params:string each .Q.def[deflt] params; /format the params dictionary as a series of command line args f_args:{"-",string[x]," ",y}'[key params;value params]; sline:"bash ",getenv[`TORQHOME],"/bin/launchprocess.sh "," " sv f_args; syscomm[params;] sline} /this function calls killprocess.sh from within a TorQ process, /takes a single parameter, a string procname bgkill:{[procname] syscomm[procname;] "bash ",getenv[`TORQHOME],"/bin/killprocess.sh ",procname} ================================================================================ FILE: TorQ_code_common_cache.q SIZE: 4,237 characters ================================================================================ // cache the result of functions in memory \d .cache // the maximum size of the cache in MB maxsize:@[value;`.cache.maxsize;100] // the maximum size of any individual result set in MB maxindividual:@[value;`.cache.maxindividual;50] // make sure the maxindividual isn't bigger than maxsize maxindividual:maxsize&maxindividual MB:2 xexp 20 // a table to store the cache values in memory cache:([id:`u#`long$()] lastrun:`timestamp$();lastaccess:`timestamp$();size:`long$()) // a dictionary of the functions funcs:(`u#`long$())!() // the results of the functions results:(`u#`long$())!() // table to track the cache performance perf:([]time:`timestamp$();id:`long$();status:`symbol$()) id:0j getid:{:id+::1}
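Purely as a hypothetical illustration of how the bookkeeping structures above relate (the real lookup and execution logic lives in the remainder of cache.q, not shown here):

/ hypothetical illustration only
q)i:.cache.getid[]                              / fresh id from the counter
q).cache.funcs[i]:({til x};5)                   / record a function and its argument
q).cache.results[i]:til 5                       / record its result
q)`.cache.cache upsert (i;.z.p;.z.p;-22!til 5)  / id, lastrun, lastaccess, serialized size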
kdb+ data-management techniques¶ This paper illustrates the flexibility of kdb+ and how instances can be customized to meet the practical needs of business applications across different financial asset classes. Given the vast extent to which kdb+ can be customized, this document focuses on some specific cases relating to the capture and storage of intra-day and historical time-series data. Many of the cases described should resonate with those charged with managing large-scale kdb+ installations. These complex systems can involve hundreds of kdb+ processes and/or databases supporting risk, trading and compliance analytics powered by petabytes of data, including tick, transaction, pricing and reference data. The goal of the paper is to share implementation options to those tasked with overseeing kdb+ setups, in the hope that the options will allow them to scale their deployments in a straightforward manner, maximize performance and enhance the end-user experience. Seven scenarios are presented focused on tuning the technology using some common and uncommon techniques that allow for a significantly more efficient and performant system. The context where these techniques would be adopted is discussed, as well as the gains and potential drawbacks of adopting them. Where appropriate, sample code snippets are provided to illustrate how the technique might be implemented. The code examples are there for illustrative purposes only and we recommend that you always check the use and effect of any of the samples on a test database before implementing changes to a large dataset or production environment. Sample on-disk test databases can be created using the tq.q script. Code examples are intended for readers with at least a beginner-level knowledge of kdb+ and the q language. Automating schema changes¶ In some kdb+ installations, table schemas change frequently. This can be difficult to manage if the installation consists of a splayed partitioned database, since each time the schema changes you have to manually change all the historical data to comply with the new schema. This is time-consuming and prone to error. dbmaint.q is a useful tool for performing these manual changes. To reduce the administrative overhead, the historical database schema can be updated programmatically as intra-day data is persisted. This can be achieved by invoking function updateHistoricalSchema (below), which connects to the historical database and runs locally-defined functions to add and remove tables, add and remove columns, reorder columns and change their types. It does this by comparing the table schemas in the last partition with those of the other partitions, and making the necessary adjustments to the other partitions. kdb+ versions before V2.6 In previous versions the meta data is read from the first, rather than the last, partition therefore minor changes are necessary to get this to work. Note also that column type conversion (function changeColumnTypes ) will not work when changing to or from nested types or symbols – this functionality is beyond the scope of this paper. The updateHistoricalSchema function should be invoked once the in-memory data is fully persisted. If KxSystems/kdb-tick is being used, the function should be called as the last line of the .u.end function in r.q – the real-time database. updateHistoricalSchema takes the port of the historical database as an argument, opens a connection to the historical database and calls functions that perform changes to the historical data. 
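A hedged sketch of that hook, assuming kdb-tick's r.q, where savedown stands in for the existing end-of-day persistence logic and hdbport holds the historical database's port (both names are illustrative):

/ illustrative only: call updateHistoricalSchema as the last step of .u.end
.u.end:{[d]
  savedown d;                      / existing end-of-day persistence (hypothetical)
  updateHistoricalSchema hdbport   / connect to the HDB and align older partitions
 }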
Note that the functions are defined in the calling process. This avoids having to load functionality into the historical process, which usually has no functionality defined in it. updateHistoricalSchema:{[con] h:hopen con; h(addTables;()); h(removeTables;()); h(addColumns;()); h(removeColumns;()); h(reorderColumns;()); h(changeColumnTypes;()); } The following function simply adds empty tables to any older partitions based on the contents of the latest partition. addTables:{.Q.chk`:.} The following function finds all tables that are not in the latest partition but are in other partitions and removes those tables. removeTables:{ t:distinct[raze key each hsym each `$string -1_date]except tables`.; {@[system;x;::]}each "rm -r ",/:string[-1_date] cross "/",/:string t; } The function below iterates over each table-column pair in all partitions except for the latest one. It makes sure all columns are present and if not, creates the column with the default value of the column type in the latest partition. addColumns:{ {[t] {[t;c] {[t;c;d] defaults:" Cefihjsdtz"!("";""),first each "efihjsdtz"$\:(); f:hsym`$string[d],"/",string[t],"/",string c; if[0=type key f; f set count[get hsym`$string[d],"/",string[t],"/sym"]# defaults meta[t][c]`t; @[hsym`$string[d],"/",string[t];`.d;,;c] ] }[t;c]each -1_date }[t] each cols[t]except `date }each tables`. } The following function deletes columns in earlier partitions that are not in the last partition. It does this by iterating over each of the tables in all partitions, except for the last one, getting the columns that are in that partition but not in the last partition, and deletes them. removeColumns:{ {[t] {[t;d] delcols:key[t:hsym`$string[d],"/",string t]except cols t; {[t;c] hdel`$string[t],"/",string c; }[t]each delcols; if[count delcols; @[t;`.d;:;cols[t] except `date] ] }[t] each -1_date }each tables`. } The function below re-orders the columns by iterating over each of the partitions except for the last partition. It checks that the column order matches that of the latest partition by looking at the .d file. If there is a mismatch, it modifies the .d file to the column list in the last partition. reorderColumns:{ /.d file specifies the column names and order {[d] {[d;t] if[not except[cols t;`date]~get f:hsym`$string[d],"/",string[t],"/.d"; f set cols[t] except `date ] }[d]each tables`. }each -1_date } The following function iterates over every table-column pair in all partitions except for the last. It checks that the type of the column matches that of the last partition and if not, it casts it to the correct type. changeColumnTypes:{ {[t] {[t;c] typ:meta[t][c]`t; frst:type get hsym`$string[first date],cpath:"/",string[t],"/",string c; lst:type get hsym`$string[last date],cpath; /if type of column in first and last partition are different /and type in last partition is not symbol, character or list /and type in first partition is not generic list, char vector or symbol /convert all previous partitions to the type in last partition if[not[frst=lst]and not[typ in "sc ",.Q.A]and not frst in 0 10 11h; {[c;t;d] hsym[`$string[d],c] set t$get hsym`$string[d],c }[cpath;typ]each -1_date ] }[t]each cols[t]except `date }each tables`. } Attributes on splayed partitioned tables¶ Attributes are used to speed up table joins and searches on lists. There are four different attributes: g grouped u unique p parted s sorted To apply the parted attribute to a list, all occurrences in the list of each value \(n\) must be adjacent to one another. 
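The attributes are applied with the # operator; a quick illustration on small lists that satisfy each constraint:

q)`s#1 2 3        / sorted - ascending order
q)`u#1 2 3        / unique - no duplicates
q)`p#1 1 2 2 3    / parted - equal items adjacent
q)`g#1 2 1 2 1    / grouped - no constraint on the list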
For example, the following list is not of the required structure and attempting to apply the parted attribute to it will fail: 3 3 5 5 5 3 However, the following list is of the required structure and the parted attribute can be applied: 5 5 5 3 3 4 4 Sorting a list in ascending or descending order guarantees that the parted attribute can be applied. Likewise, to apply the unique attribute, the items of the list must be unique and to apply the sorted attribute, the items of a list must be in ascending order. The grouped attribute does not dictate any constraints on the list and the attribute can be applied to any list of atoms. When the unique, parted or grouped attribute is set on a list, q creates a hash table alongside the list. For a list with the grouped attribute, the hash table maps the items to the indexes where they occur; for a parted list it maps to the index at which the item starts; and for a list with the unique attribute applied, it maps to the index of the item. The sorted attribute merely marks the list as sorted, so that q knows to use binary search on functions such as = , in , ? , and within . Now we explore the use of attributes on splayed partitioned tables. The most common approach with regard to attributes in splayed partitioned tables is to set the parted attribute on the security identifier column of the table, as this is usually the most commonly queried column. However, we are not limited to just one attribute on splayed partitioned tables. Given the constraint required for the parted and sorted attributes, as explained above, it is rarely possible to apply more than one parted, more than one sorted or a combination of the two on a given table. There would have to be a specific functional relationship between the two data sets for this to be feasible. That leaves grouped as a viable additional attribute to exist alongside parted or sorted. To illustrate the performance gain of using attributes, the following query was executed with the various attributes applied, where the number of records in t in partition x was 10 million, the number of distinct values of c in partition x is 6700 and c was of type enumerated symbol. Note that the query itself is not very useful but does provide a helpful benchmark. select c from t where date=x,c=y The query was 18 times faster when column c had the grouped attribute compared with no attribute. With c sorted and parted attribute applied the query was 29 times faster than with no attribute. However, there is a trade-off with disk space. The column with the grouped attribute applied consumed three times as much space as the column with no attribute. The parted attribute applied consumed only 1% more than no attribute. Parted and sorted attributes are significantly more space-efficient than grouped and unique. Suppose the data contains IDs which for a given date are unique, then the unique attribute can be applied regardless of any other attributes on the table. Taking the same dataset as in the example above with c (of type long instead of enumerated symbol) unique for each date and the unique attribute applied, the same query is 42 times faster with the attribute compared with no attribute. However, the column consumes five times the amount of disk space. Sorting the table by that same column and setting the sorted attribute results in the same query performance as with the unique attribute, and the same disk space consumed as with no attribute. 
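The comparison can be approximated on an in-memory table (the on-disk, splayed-partitioned case behaves similarly once the attributes are set). A minimal sketch; the row count, table and column names are illustrative:

q)n:10000000
q)t:([]c:n?6700?`6;v:n?100f)       / ~6,700 distinct values of c
q)x0:first t`c                     / a value to search for
q)\t select c from t where c=x0    / no attribute
q)tg:update `g#c from t            / grouped attribute
q)\t select c from tg where c=x0
q)tp:update `p#c from `c xasc t    / sorted, then parted attribute
q)\t select c from tp where c=x0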
However, the drawback of using sorted, as with parted for reasons explained above, is not being able to use the sorted or parted attributes on any other columns. The following is an approximation of the additional disk space (or memory) overhead, in bytes, of each of the attributes, where n is the size of the list and u is the number of unique items. attribute space overhead ------------------------- sorted 0 unique 16*n parted 24*u grouped (24*u)+4*n ID fields – the GUID datatype¶ It is often the case that a kdb+ application receives an ID field which does not repeat and contains characters that are not numbers. The question is which datatype should be used for such a field. The problem with using enumerated symbols in this case is that the enumeration file becomes too large, slowing down searches and resulting in too much memory being consumed by the enumeration list. The problem with using char vector is that it is not an atom, so attributes cannot be applied, and searches on char vectors are very slow. In this case, it is worth considering GUID, which can be used for storing 16-byte values. As the data is persisted to disk, or written to an intra-day in-memory instance, the ID field can be converted to GUID and persisted along with the original ID. The following code shows how this would be done in memory. It assumes that the original ID is a char vector of no more than 16 characters in length. /guidFromRawID function pads the char vector with spaces to the left, /converts to byte array, converts byte array to GUID guidFromRawID:{0x0 sv `byte$#[16-count x;" "],x} update guid:guidFromRawID each id from `t Once the GUID equivalent of the ID field is generated, attributes can be applied to the column and queries such as the following can be performed. select from t where date=d,guid=guidFromRawID["raw-id"] If the ID is unique for each date, the unique attribute could be considered for GUID, but at the expense of more disk space as illustrated above. If the number of characters in the ID is not known in advance, one should consider converting the rightmost 16 characters to GUID (or leftmost depending on which section of the ID changes the most), and keeping the remaining 16 character sections of the ID in a column consisting of GUID lists. An alternative is to just convert the rightmost 16 characters to GUID and use the original ID in the query as follows. select from t where guid=guidFromRawID["raw-id"],id like "raw-id" In this case, the rightmost 16 characters will not necessarily be unique, therefore parted or grouped might be wise choices of attribute on the GUID column in the splayed partitioned table. Combining real-time and historical data¶ Typically, intra-day and historical data reside in separate kdb+ processes. Although there are instances when this is not desirable: - The data set may be too small to merit multiple processes - Users might want to view all their data in one location - It may be desirable to minimize the number of components to manage - Performance of intra-day queries may not be a priority (by combining intra-day and historical data into a single process intra-day queries may have to wait for expensive I/O bound historical queries to complete) Moreover, by having separate processes for intra-day and historical data, it is often necessary to introduce a gateway as a single point of contact for querying intra-day and historical data, thereby running three processes for what is essentially a single data set. 
However, one of the main arguments against combining intra-day and historical data is that large I/O-bound queries on the historical data prevent updates being sent from the feeding process. This results in the TCP/IP buffer filling up, or blocking the feeding process entirely if it is making synchronous calls. The code snippet below provides a function which simply writes the contents of the in-memory tables to disk in splayed partitioned form, with different names to their in-memory equivalent. The function then memory maps the splayed partitioned tables. eod:{ /get the list of all in-memory, non-keyed tables t:tables[`.] where {not[.Q.qp x]&98=type x} each get each tables`.; /iterate through each of the tables, x is a date {[d;t] /enumerate the table, ascend sort by sym and apply the parted attribute /save to a splayed table called <table>_hist hsym[`$string[d],"/",string[t],"_hist/"] set update `p#optimizedColumn from `optimizedColumn xasc .Q.en[`:.;get t]; /delete the contents of the in-memory table, /and re-apply the grouped attribute update `g#optimizedColumn from delete from t }[x]each t; /re-memory map the splayed partitioned tables system"l ." } The above will result in an additional (splayed partitioned) table being created for each non-keyed in-memory table. When combining intra-day and historical data into one process, it is best to provide functions to users and external applications which handle the intricacies of executing the query across the two tables (as a gateway process would), rather than allow them to run raw queries. Persisting intra-day data to disk intra day¶ Let us now assume there are memory limitations and you want the intra-day data to be persisted to disk multiple times throughout the day. The following example assumes the intra-day and historical data reside in the same process, as outlined in the previous section. However in most cases large volumes of data dictate that separate historical and intra-day processes are required. Accordingly, minor modifications can be made to the code below to handle the case of separate intra-day and historical processes. One way of achieving intra-day persistence to disk is to create a function such as the one below, which is called periodically and appends the in-memory tables to their splayed/partitioned counterparts. The function takes the date as an argument and makes use of splayed upsert which appends to the splayed table. (Rather than set , which overwrites.) flush:{[d] t:tables[`.] where {not[.Q.qp x]&98=type x} each get each tables`.; {[d;t] /f is the path to the table on disk f:hsym`$string[d],"/",string[t],"_hist/"; /if the directory exists, upsert (append) the enumerated table, /otherwise set (creates the splayed partitioned table) $[0=type key f;set;upsert][f;.Q.en[`:.;get t]]; delete from t }[d]each t; /re-memory map the data and garbage collect system"l ."; .Q.gc[] } The above flush function does not sort or set the parted attribute on the data, as this would take too much time blocking incoming queries – it would have to re-sort and save the entire partition each time. You could write functionality to be called at end-of-day to sort, set the parted attribute, and re-write the data. Alternatively, to simplify things, do not make use of the parted attribute if query performance is not deemed a priority, or use the grouped attribute that does not require sorting the data, as above. 
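The end-of-day re-sort suggested above might look like the following minimal sketch. It reuses the <table>_hist naming and the illustrative optimizedColumn placeholder from the eod function earlier: each appended partition is read back into memory, sorted and rewritten with the parted attribute applied. In practice it would be scheduled for a quiet period, since it rewrites the whole partition.

eodResort:{[d]
  t:tables[`.] where {not[.Q.qp x]&98=type x} each get each tables`.;
  {[d;t]
    f:hsym`$string[d],"/",string[t],"_hist/";
    if[not 0=type key f;
      /pull the partition into memory, sort, re-apply parted and rewrite
      f set update `p#optimizedColumn from `optimizedColumn xasc select from get f];
  }[d]each t;
  /re-memory map the splayed partitioned tables
  system"l ."
 }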
If the component which feeds the process is a tickerplant (see kdb+tick) it is necessary to truncate the tickerplant log file (file containing all records received) each time flush is called, otherwise the historical data could end up with duplicates. This can happen if the process saves to disk and then restarts, retrieving those same records from the log file, which will subsequently be saved again. The call to the flush function can be initiated from the tickerplant in the same way it calls the end-of-day function. If the source of the data is something other than a component which maintains a file of records received, such as a tickerplant, then calling the flush function from within the process is usually the best approach. Eliminate the intra-day process¶ In some cases, only prior-day data is needed and there is no requirement to run queries on intra-day data. However, you still need to generate the historical data daily. If daily flat files are provided then simply use a combination of the following functions to load the data and write it to the historical database in splayed partitioned form. | function | brief description | |---|---| .Q.dsftg | Load and save file in chunks | .Q.hdpf | Save in-memory table in splayed partitioned format | .Q.dpft | Same as above, but no splayed partitioned process specified | .Q.en | Returns a copy of an in-memory table with symbol columns enumerated | 0: | Load text file into an in-memory table or dictionary | set | Writes a table in splayed format | upsert | Appends to a splayed table | If kdb+tick is being used, simply turn off the real-time database and write the contents of the tickerplant log file to the historical database as follows. \l schema.q /load schema \cd /path/to/hdb /change dir to HDB dir upd:insert -11!`:/path/to/tickerplantlog /replay log file .Q.hdpf[HDB_PORT;`:.;.z.D;`partedColumn] If there are memory limitations on the box, then upd can be defined to periodically flush the in-memory data to disk as follows. \l schema.q \cd /path/to/hdb flush:{ /for each table of count greater than 0 /create or append to the splayed table {[t] f:hsym`$string[.z.D],"/",string[t],"/"; $[0=type key f;set;upsert][f;.Q.en[`:.;get t]]; delete from t }each tables[`.] where 0<count each get each tables`.; } upd:{insert[x;y]; /call flush if any tables have more than 1 million records if[any 1000000<count each get each tables`.; flush[] ] } /replay the log file -11!`:/path/to/tickerplantlog /write the remaining table contents to disk flush[] If kdb+tick is not being used, the equivalent of the tickerplant log file can be created simply by feeding the data into a kdb+ process via a callback function, which writes it to a file and persists it to disk as follows. f:`:/path/to/logfile f set () /initialize the file to empty list h:hopen f callback:{h enlist(`upd;x;y)} /x is table name, y is data Conclusion¶ When designing kdb+ applications to tackle the management of diverse and large volumes of content, you must take many business and technical factors into account. Considerations around the performance requirements of the application, network and hardware, personnel constraints, ease of maintenance and scalability all play a role in running a production system. This document has outlined practical options available to system managers for addressing some of the common scenarios they may encounter when architecting and growing their kdb+ deployment. 
In summary, the questions that frequently arise when one is in the process of designing robust, high-performance kdb+ applications include: - What are the appropriate datatypes for the columns? - How often is it expected that the schema will change and is the need for manual intervention in ensuring data consistency a concern? - What are the fields that will be queried most often and what (if any) attributes should be applied to them? - What is the trade-off between performance and disk/memory usage when the attributes are applied, and is it worth the performance gain? - Is it a concern that symbol columns would share enumeration files, and if so, should the enumerations be decoupled? - Is it necessary to make intra-day data available? If not, consider conserving in-memory requirements by, for example, persisting to disk periodically, or writing the contents of a tickerplant log file to splayed format at end-of-day. - Consider the overhead of maintaining separate intra-day and historical processes. Is it feasible or worthwhile to combine them into one? As mentioned in the introduction, a key feature of kdb+ is that it is a flexible offering that allows users to extend the functionality of their applications through the versatility of the q language. There are vanilla intraday- and historical-data processing architectures presented in existing documentation that cover many standard use cases. However, it is also common for a kdb+ system to quickly establish itself as a success within the firm, resulting in the need to process more types of data and requests from downstream users and applications. It is at this point that a systems manager is often faced with balancing how to ensure performance and scalability goals are met, while at the same time dealing with resource constraints and the need for maintainability. The seven cases and customization techniques covered in this paper provided examples of what you can do to help reach those goals. Author¶ Simon Mescal is based in New York. Simon is a financial engineer who has designed and developed kdb+-related data-management solutions for top-tier investment banks and trading firms, across multiple asset classes. Simon has extensive experience implementing a range of applications in both object-oriented and vector-programming languages.
/- function to get additional partition(s) defined by parted attribute in sort.csv getextrapartitiontype:{[tablename] /- check that that each table is defined or the default attributes are defined in sort.csv /- exits with error if a table cannot find parted attributes in tablename or default /- only checks tables that have sort enabled tabparts:$[count tabparts:distinct exec column from .sort.params where tabname=tablename,sort=1,att=`p; [.lg.o[`getextraparttype;"parted attribute p found in sort.csv for ",(string tablename)," table"]; tabparts]; count defaultparts:distinct exec column from .sort.params where tabname=`default,sort=1,att=`p; [.lg.o[`getextraparttype;"parted attribute p not found in sort.csv for ",(string tablename)," table, using default instead"]; defaultparts]; [.lg.e[`getextraparttype;"parted attribute p not found in sort.csv for ", (string tablename)," table and default not defined"]] ]; tabparts }; /- function to check each partition type specified in sort.csv is actually present in specified table checkpartitiontype:{[tablename;extrapartitiontype] $[count colsnotintab:extrapartitiontype where not extrapartitiontype in cols get tablename; .lg.e[`checkpart;"parted columns ",(", " sv string colsnotintab)," are defined in sort.csv but not present in ",(string tablename)," table"]; .lg.o[`checkpart;"all parted columns defined in sort.csv are present in ",(string tablename)," table"]]; }; /- function to check if the extra partition column has an enumerable type checkenumerabletype:{[tablename;extrapartitiontype] $[all extrapartitiontype in exec c from meta[tablename] where t in "hijs"; .lg.o[`checkenumerable;"all columns do have an enumerable type in ",(string tablename)," table"]; .lg.e[`checkenumerable;"not all columns ",string[extrapartitiontype]," do have an enumerable type in ",(string tablename)," table"]]; }; /- function to get list of distinct combinations for partition directories /- functional select equivalent to: select distinct [ extrapartitiontype ] from [ tablenme ] getextrapartitions:{[tablename;extrapartitiontype] value each ?[tablename;();1b;extrapartitiontype!extrapartitiontype] }; /-function to return partition directory chunks that will be called in batch by mergebypart function getpartchunks:{[partdirs;mergelimit] /-get table for function which only contains data for relevant partitions t:select from .merge.partsizes where ptdir in partdirs; /-get list of limits (rowcount or bytesize) to be used to get chunks of partitions to get merged in batch r:$[.merge.mergebybytelimit;exec bytes from t;exec rowcount from t]; /-return list of partitions to be called in batch l:(where r={$[z<x+y;y;x+y]}\[0;r;mergelimit]),(select count i from t)[`x]; /-where there are more than set partlimit, split the list s:-1_distinct asc raze {$[(x[y]-x[y-1])<=.merge.partlimit;x[y];x[y], first each .merge.partlimit cut x[y-1]+til x[y] - x[y-1]]}/:[l;til count l]; /-return list of partitions s cut exec ptdir from t }; /-merge entire partition from temporary storage to permanent storage mergebypart:{[tablename;dest;partchunks] .lg.o[`merge;"reading partition/partitions ", (", " sv string[partchunks])]; chunks:get each partchunks; /-if multiple chunks have been read in chunks will be a list of tabs, if this is the case - join into single tab if[98<>type chunks;chunks:(,/)chunks]; .lg.o[`resort;"Checking that the contents of this subpartition conform"]; pattrtest:@[{@[x;y;`p#];0b}[chunks;];.merge.getextrapartitiontype[tablename];{1b}]; if[pattrtest; /-p attribute could not be 
applied, data must be re-sorted by subpartition col (sym): .lg.o[`resort;"Re-sorting contents of subpartition"]; chunks: xasc[.merge.getextrapartitiontype[tablename];chunks]; .lg.o[`resort;"The p attribute can now be applied"]; ]; .lg.o[`merge;"upserting ",(string count chunks)," rows to ",string dest]; /-merge columns to permanent storage .[upsert;(dest;chunks); {[e;d;p].lg.e[`merge;"failed to merge to ", string[d], " from segments ", (", " sv string p), " Error is - ",string[e]]}[;dest;partchunks]]; }; /-merge data from partition in temporary storage to permanent storage, column by column rather than by entire partition mergebycol:{[tableinfo;dest;segment] .lg.o[`merge;"upserting columns from ", (string[segment]), " to ", string[dest]]; {[dest;segment;col] /-filepath to hdb partition column where data will be saved to destcol:(` sv dest, col); /-data from column in temp storage to be saved in hdb destdata: get segcol:` sv segment, col; .lg.o[`merge;"merging ", string[segcol], " to ", string[destcol]]; /-upsert data to hdb column .[upsert;(destcol;destdata); {[destcol;e].lg.e[`merge;"failed to save data to ", string[destcol], " with error : ",e];}] }[dest;segment;] each cols tableinfo[1]; }; /-hybrid method of the two functions above, calls the mergebycol function for partitions over a rowcount/bytesize limit (kept track in .merge.partsizes) and mergebypart for remaining functions mergehybrid:{[tableinfo;dest;partdirs;mergelimit] /-exec partition directories for this table from the tracking table partsizes, where the number of bytes is over the limit overlimit:$[.merge.mergebybytelimit; exec ptdir from .merge.partsizes where ptdir in partdirs,bytes > mergelimit; exec ptdir from .merge.partsizes where ptdir in partdirs,rowcount > mergelimit ]; if[(count overlimit)<>count partdirs; partdirs:partdirs except overlimit; .lg.o[`merge;"merging ", (", " sv string partdirs), " by whole partition"]; /-get partition chunks to merge in batch partchunks:getpartchunks[partdirs;mergelimit]; mergebypart[tableinfo[0];(` sv dest,`)]'[partchunks]; ]; /-if columns are over the byte limit merge column by column if[0<>count overlimit; .lg.o[`merge;"merging ", (", " sv string overlimit), " column by column"]; mergebycol[tableinfo;dest]'[overlimit]; /-if all partitions are over limit no .d file will have been created - check for .d file and if none exists create one if[()~key (` sv dest,`.d); .lg.o[`merge;"creating file ", (string ` sv dest,`.d)]; (` sv dest,`.d) set cols tableinfo[1]; ]; ]; }; ================================================================================ FILE: TorQ_code_common_monitoringchecks.q SIZE: 2,723 characters ================================================================================ \d .checks numformat:{reverse "," sv 3 cut reverse string `long$x} timeformat:{(" " sv string `date`time$.proc.cp[])," ", $[.proc.localtime=1b;"local time";"GMT"]} formatdict:{"<table>",(raze {"<tr><td><b>",(string x),"</b></td><td> | ",(string y),"</td></tr>"}'[key x;value x]),"</table>"} tablecount:{[tablelist; period; currenttime; timecol; required] .checks.errs::(); counts:{[period;currenttime;timecol;x] count where period > currenttime - (0!value x)timecol}[period;currenttime;timecol] each tablelist,:(); if [any c:counts < required; .checks.errs,:"The following tables have not received the required number of updates (",(string required),") in the last period of ",(string period),". 
The received count is shown below: <br/>"; .checks.errs,:formatdict tablelist[where c]!counts where c]; ([]messages:.checks.errs)} hdbdatecheck:{[date; tablelist] .checks.errs::(); counts:{ count select from value y where date=x }[date] each tablelist where tablelist in tables[]; if[any counts = 0; .checks.errs,:"One or more of the historical databases have no records.<br />"; .checks.errs,:"The following tables have recieved no updates for ",(string date),".<br/><br/>"; .checks.errs,:.Q.s tablelist[where counts = 0]]; ([]messages:errs)} memorycheck:{[size] .checks.errs::(); if [size < (.Q.w[]`heap) + .Q.w[]`symw; .checks.errs,:"This process exceeded the warning level of ",numformat[size]," bytes of allocated memory at ",timeformat[],"<br/>"; .checks.errs,:"The output of .Q.w[] for this process has been listed below.<br/><br/>"; .checks.errs,:formatdict .Q.w[]]; ([]messages:.checks.errs)} symwcheck:{[size] .checks.errs::(); if [(h:.Q.w[]`symw) > size; .checks.errs,:"This process exceeded the warning for symbol size of ",numformat[size]," bytes at ",timeformat[],"<br />"; .checks.errs,:"The output of .Q.w[] for this process has been listed below.<br/><br/>"; .checks.errs,:formatdict .Q.w[]]; ([]messages:.checks.errs)} slowsubscriber:{[messagesize] .checks.errs::(); if [0 < count slowsubs:(key .z.W) where (sum each value .z.W)>messagesize; subsdict:slowsubs!.z.W[slowsubs]; .checks.errs,:"This alert is triggered when the subscription queue of a process grows too big.<br/>"; .checks.errs,:"The dictionary below shows any subscribers with more than ",(string messagesize)," bytes in their queue:<br/>"; .checks.errs,:formatdict sum each subsdict; .checks.errs,:"The number of messages in each subscriber's queue is:"; .checks.errs,:formatdict count each subsdict]; ([]messages:.checks.errs)} ================================================================================ FILE: TorQ_code_common_os.q SIZE: 1,065 characters ================================================================================ /- courtesy of Simon Garland \d .os NT:.z.o in`w32`w64 Fex:{not 0h~type key hsym$[10=type x;`$x;x]} pth:{if[10h<>type x;x:string x]; if[NT;x:@[x;where"/"=x;:;"\\"]];$[":"=first x;1_x;x]} ext:{`$(x?".")_(x:string x;x)[i:10h=type x]} del:{system("rm ";"del ")[NT],pth x} deldir:{system("rm -r ";"rd /s /q ")[NT],pth x} hdeldir:{[dirpath;pdir] dirpath:$[10h=a:type dirpath;dirpath;-11h=a;string dirpath;'`type]; diR:{$[11h=type d:key x;raze x,.z.s each` sv/:x,/:d;d]}; filelist:diR hsym`$dirpath; if[not pdir;filelist:1_filelist]; .lg.o[`deldir;"deleting from directory : ",dirpath]; hdel each desc filelist} md:{if[not Fex x;system"mkdir \"",pth[x],"\""]}; ren:{system("mv ";"move ")[NT],pth[x]," ",pth y} cpy:{system("cp ";"copy ")[NT],pth[x]," ",pth y} Vex:not 0h~type key`.@ df:{(`$("/";"\\")[NT]sv -1_v;`$-1#v:("/";"\\")[NT]vs pth(string x;x)[10h=type x])} run:{system"q ",x} kill:{[p]@[(`::p);"\\\\";1];} sleep:{x:string x; system("sleep ",x;"timeout /t ",x," >nul")[NT]} pthq:{[x] $[10h=type x;ssr [x;"\\";"/"];`$ -1 _ ssr [string (` sv x,`);"\\";"/"]]}
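For illustration, a few of the wrappers above in use; the paths are hypothetical:

q).os.pth"/tmp/data/file.csv"        / path with OS-appropriate separators
q).os.md"/tmp/data/archive"          / create the directory if it does not exist
q).os.ren["/tmp/data/file.csv";"/tmp/data/archive/file.csv"]   / move/rename the file
q).os.deldir"/tmp/data/archive"      / remove the directory and its contents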
The Name Game¶ Make substitutions in a string or list of strings Two solutions, using respectively, ssr and Amend At. Both use projections. Both use the Do iterator to both apply and not-apply a function. One uses the Over iterator to consolidate successive operations. Both solutions have five code lines, no loops, no control structures. Write a function that takes a name and returns the lyrics to the Shirley Ellis song ”The Name Game”. from Rosetta Code We shall try two approaches to this problem and see how they compare. First, string search and replacement. In the second we treat the songs as a list of strings and use Amend At to customize it. String search and replace¶ The core of this is pretty simple. Perhaps no more than inserting a name into a template. q)s:"Name, Name, bo-bName\nBanana-fana-fo-fName\nFee-fimo-mName\nName!\n\n" q)1 ssr[s;"Name";]"Stephen"; Stephen, Stephen, bo-bStephen Banana-fana-fo-fStephen Fee-fimo-mStephen Stephen! Not quite: have to drop the first letter of the name. Unless it’s a vowel. In fact, all the leading consonants. Is Y a vowel – Yvette, Yvonne? Let’s suppose so. q)V:raze 1 upper\"aeiouy" / vowels q)s2:"$1, $1, bo-b$2\nBanana-fana-fo-f$2\nFee-fimo-m$2\n$1!\n\n" q)1 {ssr/[s2;("$1";"$2");(x;((x in V)?1b)_x)]}"Stephen"; Stephen, Stephen, bo-bephen Banana-fana-fo-fephen Fee-fimo-mephen Stephen! "$1" and "$2" have no special significance here Although they resemble tokens in Posix regular expression syntax, here they are just substrings that are easy to spot. Note the use of Do \ to get the upper- and lower-case vowels. Here we have used the Over iterator to make successive substititions. Breaking that down, it is equivalent to q)1 ssr[;"$2";"ephen"] ssr[;"$1";"Stephen"] s2; Stephen, Stephen, bo-bephen Banana-fana-fo-fephen Fee-fimo-mephen Stephen! And with a leading vowel? q)1 {ssr/[s2;("$1";"$2");(x;((x in V)?1b)_x)]}"Anne"; Anne, Anne, bo-bAnne Banana-fana-fo-fAnne Fee-fimo-mAnne Anne! That A should be in lower case. q)1 {ssr/[s2;("$1";"$2");(x;lower((x in V)?1b)_x)]}"Anne"; Anne, Anne, bo-banne Banana-fana-fo-fanne Fee-fimo-manne Anne! But we have one more rule still to go. When the name begins with B, F, or M, the corresponding bo-b, fo-f, and mo-m loses its last letter. We could treat this as a possible third string replacement. Lightbulb moment. We do not need to test the first letter to see if the third replacement is needed. If it is not, the replacement, e.g. so-s with so-, is harmless: a no-op. If we define the third substitution s3:{1(-1_)\x,"o-",x}lower first Name then our result is ssr[;"$2";tn] ssr[;"$1";Name] ssr[s;;] . s3 The replacements are all made in the last line: - xo- for xo-x for some letter x Name for"$1" tn for"$2" Should the successive calls to ssr be refactored with Over? They could be. The syntax would be ssr/[s;f;t] , where f and t are the lists of from- and to-strings. But rather than construct those variables, let’s apply ssr/ to a 3-list of arguments. The syntax for that would be (ssr/).(s;f;t) . game_ssr:{[Name] V:raze 1 lower\"AEIOUY"; / vowels tn:lower((Name in V)?1b) _ Name; / truncated Name s3:{1(-1_)\x,"o-",x}lower first Name; / 3rd ssr s:"$1, $1, bo-b$2\nBanana-fana-fo-f$2\nFee-fimo-m$2\n$1!\n\n"; (ssr/).(s;("$1";"$2";s3 0);(Name;tn;s3 1)) } q)1 raze game_ssr each string`Stephen`Anne`Yvonne`Brenda; Stephen, Stephen, bo-bephen Banana-fana-fo-fephen Fee-fimo-mephen Stephen! Anne, Anne, bo-banne Banana-fana-fo-fanne Fee-fimo-manne Anne! Yvonne, Yvonne, bo-byvonne Banana-fana-fo-fyvonne Fee-fimo-myvonne Yvonne! 
Brenda, Brenda, bo-enda Banana-fana-fo-fenda Fee-fimo-menda Brenda! Amending a list of strings¶ In this approach we treat the song as a list of strings and amend a template list. The template list: q)show s:("bo-b";"Banana-fana fo-f";"Fee-fimo-m";"!";"") "bo-b" "Banana-fana fo-f" "Fee-fimo-m" "!" "" Prefix lines 0 and 3 with the name. pfx:Name,", ",Name,", " / prefix Suffix each of the first three lines with the truncated name. n:lower Name v:"aeiouy" sfx:((n in v)?1b)_ n / suffix but first drop the last letter from lines of s where n[0]=last each s The ternary form of Amend At (syntax @[x;y;z] ) applies (unary) z to each item of x y . So @[s;where n[0]=last each s;-1_] does that. Successive substitutions: @[;0;pfx,] @[;3;Name,] @[;0 1 2;,[;sfx]] @[;where n[0]=last each s;-1_] s We could use the Over iterator to refactor the successive calls to Amend At as a call to @/ . The refactored syntax would be @/[s;i;u] where i is a nested list of indexes and u is a list of unaries. @/[; ((0;3;0 1 2;where n[0]=last each s)); (pfx,;Name,;,[;sfx];-1_)] s A step too far. The syntax of the successive applications keeps the index-unaries as pairs. In the refactored line no improvement in legibility warrants the extra cognitive load of pairing unaries and indexes. Absent some other decisive factor such as evaluation time, the earlier version is better here. But it is good to be able to spot opportunities to refactor! Putting it all together: game_amend:{[Name] pfx:Name,", ",Name,", "; / prefix n:lower Name; sfx:((n in "aeiouy")?1b)_n; / suffix s:("bo-b";"Banana-fana fo-f";"Fee-fimo-m";"!";""); / song template @[;0;pfx,] @[;3;Name,] @[;0 1 2;,[;sfx]] @[;where n[0]=last each s;-1_] s } Test your understanding: @[;0 1 2;,[;sfx]] uses the ternary form of Amend At. Rewrite it using the quaternary form. Answer @[;0 1 2;,;3#enlist sfx] In the ternary form @[x;y;z] , x[y] becomes z each x[y] . In the quaternary form @[x;y;z;zz] , x[y] becomes x[y] z'zz . Bracket notation for derived functions may help here. Equivalent to the above, x[y] becomes ternary @[x;y;z] z'[x y] quaternary @[x;y;z;zz] z'[x y;zz] In this case, s[0 1 2] becomes in the ternary ,[;sfx]each s[0 1 2] and in the quaternary s[0 1 2],'3#enlist sfx . Review¶ Both approaches found solutions of similar length and legibility. Both used projections. Both culminated in a last line that successively applied similar operations, respectively ssr and Amend At. Using the Over iterator to refactor the last line of game_ssr improved legibility; refactoring the last line of game_amend did not.
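To print the whole song for several names with the Amend At version, one possible harness (not part of the original solution; the names are arbitrary) razes the per-name verse lists and joins them with newlines:

q)1 "\n" sv raze game_amend each string`Stephen`Anne`Brenda;   / blank final items separate the verses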
Order Book: a kdb+ intraday storage and access methodology¶ The purpose of this white paper is to describe some of the structures available for storing a real-time view of an order book in kdb+ and to provide some examples of how to query each of these structures effectively. Efficient maintenance of, and access to, order-book data is a valuable component of any program trading, post-trade and compliance application. Due to the high-throughput nature of the updates, which typically arrive on a per-symbol basis, an order book has to be maintained and queried on each tick. A vanilla insert of data directly into a quote table may be efficient when updating the data but proves very costly when trying to extract the required order ladders. Alternatively, it would be possible to store the data in an easy-to-query manner but this would increase the overhead and latency to each update, potentially leading to a bottleneck and backlog of messages. This white paper discusses various methods that can be used to solve the aforementioned challenges associated with maintaining a real-time view of an order book. The execution of these methods is first applied to a simplified data set before being applied to a real business use-case. Performance metrics are provided throughout. Order-book data¶ There are many ways to receive order-book data and, within each of them, many different schemas can be used to store the information. For instance, order-book updates can range from ‘add’, ‘modify’ or ‘delete’ messages with a unique order identifier, to something as simple as receiving total available quantity at each price level. The examples below focus on the latter case for simplicity but the same principles can be applied to the more complex individual orders. We will use a very basic order-book schema and make the assumption that we deal with one symbol at a time when we receive an update. /simple example order book schema book:([] time:`second$(); sym:`$(); side:`char$(); price:`float$(); size:`int$() ) /create sample data s:(),`FDP px:(),5 create:{[n] dict:s!px; sym:n?s; side:n?"BS"; price:((+;-)side="B") .'flip(dict sym;.05*n?1+til 10); sample:flip`time`sym`side`price`size! (09:30:00+til n;sym;side;price;100*n?1+til 10) } x:create 10 Different kdb+ structures to store order-book data¶ We examine four possible structures that could be used to store a real-time view of the order book data described above. While not a definitive list, it is useful to demonstrate a logical progression in the different columns that can be used to key this type of data. /structure 1: table keyed on sym,side,price book3key:`sym`side`price xkey book /structure 2: separate tables keyed on sym,price bidbook2key:askbook2key:`sym`price xkey book /structure 3: table keyed on side,price in dictionary keyed on sym bookbysym:(1#`)!enlist`side`price xkey book /structure 4: separate tables keyed on price in dictionary keyed on sym bidbookbysym:askbookbysym:(1#`)!enlist`price xkey book It is important to consider potential issues that may be encountered when using a column of type float as one of the keys in our structures. Due to precision issues that can occur when using floats (which have been discussed enough across various forums as not to warrant repetition here), it is possible that you may discover what appears to be a duplicate keyed row in your table. Mindful that the example below is contrived, imagine receiving the zero quantity update to a price level and the following occurs. 
bookfloat:`sym`side`price xkey book `bookfloat upsert flip `time`sym`side`price`size! (09:30:00 09:30:01;`FDP`FDP;"BB";4.950000001 4.949999996;100 0) `bookfloat q)bookfloat sym side price | time size ---------------|------------ FDP B 4.95 | 09:30 100 FDP B 4.95 | 09:30 0 q)\P 0 /display the maximum precision possible q)bookfloat sym side price | time size ----------------------------|------------ FDP B 4.9500000010000003 | 09:30 100 FDP B 4.9499999959999998 | 09:30 0 This could lead you to believe a price is still available when it is in fact not, resulting in an incorrect view of the order book. An immediate solution would be to consider using integers for price values and use a price multiplier on a by-symbol basis. The integer conversion could either be done at the feedhandler level or inside the process maintaining the order-book state. q)pxm:(0#`)!0#0N q)pxm[`FDP]:100 q)pxmf:{`int$y*100^pxm x} q)pxmf . x`sym`price 460 545 470 465 475 490 480 480 540 545i Using a column of type float as one of the keys in our structures is a point for consideration when deciding on a schema to maintain book state. Floating-point issues aside, in the interest of consistency and ease of reading, the remainder of this white paper will continue to use price as a float. Maintaining the order book intra-day¶ Now let us consider each of the functions required to upsert the data into each of our four structures. We will assume upon data receipt that the standard upd function is invoked with t and x parameters: t is the table name (not used in the functions below)x is a table containing the data As previously specified, we will assume we only ever receive one symbol in each upd callback and, initially at least, we can receive data for both sides of the book. /sample upd functions to populate each structure updSimple:{[t;x]`book3key upsert x} updBySide:{[t;x] if[count bid:select from x where side="B";`bidbook2key upsert bid]; if[count ask:select from x where side="S";`askbook2key upsert ask]; } updBySym:{[t;x] s:first x`sym; bookbysym[s],:x; } updBySymSide:{[t;x] s:first x`sym; if[count bid:select from x where side="B";bidbookbysym[s],:bid]; if[count ask:select from x where side="S";askbookbysym[s],:ask]; } function | run 1 run 2 run 3 run 4 average --------------------------------------|------------------------------- do[10000;updSimple[`tablename;x]] | 43 41 41 42 41.75 do[10000;updBySide[`tablename;x]] | 92 92 93 91 92 do[10000;updBySym[`tablename;x]] | 42 43 42 42 42.25 do[10000;updBySymSide[`tablename;x]] | 79 80 81 80 80 Table 1: Milliseconds taken to execute each update function 10,000 times In Table 1, we can clearly see it is faster to do one upsert into a single table rather than two upserts into two tables, with the double upsert functions taking almost twice as long as the single. If we change x to contain only one side and rerun our two slower functions again, we start to approach similar timings to the single upsert functions, as shown in Table 2. x:select from x where side="B" function | run 1 run 2 run 3 run 4 average --------------------------------------|------------------------------- do[10000;updBySide[`tablename;x]] | 60 59 59 60 59.5 do[10000;updBySymSide[`tablename;x]] | 54 54 53 53 53.5 Table 2: Milliseconds taken to execute each update function 10,000 times if single-sided Taking this further, assuming we only ever receive updates for a single side, we could alter our upd function definitions for further efficiencies as shown in Table 3. 
updBySide2:{[t;x] $["B"=first x`side; `bidbook2key; `askbook2key]upsert x; } updBySymSide2:{[t;x] s:first x`sym; $["B"=first x`side; bidbookbysym[s],:x; askbookbysym[s],:x]; } function | run 1 run 2 run 3 run 4 average ---------------------------------------|------------------------------- do[10000;updBySide2[`tablename;x]] | 35 34 35 35 34.75 do[10000;updBySymSide2[`tablename;x]] | 31 30 29 29 29.75 Table 3: Milliseconds taken to execute each modified update function 10,000 times if single-sided Accessing the order book¶ Now that the data is being maintained in each of our four structures, it is worthwhile considering some common queries that we might wish to perform, the most obvious being extracting top-of-book. The following functions all take a single symbol as an argument and return the same result: a dictionary of bid and ask e.g. `bid`ask!4.95 5.1 . For the sake of brevity, the code below does not exclude price levels where the size is equal to zero. However, handling this is straightforward and should not have a significant impact on performance. /sample functions to extract top-of-book getTopOfBook:{[s] b:exec bid:max price from book3key where sym=s,side="B"; a:exec ask:min price from book3key where sym=s,side="S"; b,a } getTopOfBookBySide:{[s] b:exec bid:max price from bidbook2key where sym=s; a:exec ask:min price from askbook2key where sym=s; b,a } getTopOfBookBySym:{[s] b:exec bid:max price from bookbysym[s]where side="B"; a:exec ask:min price from bookbysym[s]where side="S"; b,a } getTopOfBookBySymSide:{[s] b:exec bid:max price from bidbookbysym s; a:exec ask:min price from askbookbysym s; b,a } The times taken to execute each of the functions are shown below: function | run 1 run 2 run 3 run 4 average ------------------------------------|------------------------------- do[10000;getTopOfBook`FDP] | 32 32 32 33 32.25 do[10000;getTopOfBookBySide`FDP] | 21 21 21 22 21.25 do[10000;getTopOfBookBySym`FDP] | 24 23 24 23 23.5 do[10000;getTopOfBookBySymSide`FDP] | 17 17 17 17 17 Table 4: Milliseconds taken to execute each top-of-book function 10,000 times Table 4 results clearly demonstrate the getTopOfBookBySymSide function takes the least amount of time to return top-of-book, albeit by a very small fraction of a microsecond. However, with some small refinements we can achieve even greater improvements. getTopOfBookBySymSide2:{[s] `bid`ask!(max key[bidbookbysym s]`price;min key[askbookbysym s]`price) } function | run 1 run 2 run 3 run 4 average -------------------------------------|------------------------------- do[10000;getTopOfBookBySymSide2`FDP] | 17 17 17 17 17 Table 5: Milliseconds taken to execute the modified top-of-book function 10,000 times Table 5 shows that the getTopOfBookBySymSide2 function is over three times faster than the getTopOfBook function using the book3key table and approximately twice as fast as the getTopOfBookBySide and getTopOfBookBySym functions using the bid /askbook2key and bookbysym structures respectively. This represents a significant saving if top-of-book is being calculated on every update which could help to alleviate back pressure on the real-time book process and increase throughput of messages. Calculating the top two levels¶ Another common query is to extract the top two levels of the book from each side. Again, the following functions all take a single symbol as an argument and return the same result: a dictionary of bid1 , bid , ask and ask1 , e.g. `bid1`bid`ask`ask1!4.9 4.95 5.1 5.15 .
/sample functions to extract top 2 levels getTop2Book:{[s] b:`bid`bid1!2 sublist desc exec price from book3key where sym=s,side="B"; a:`ask`ask1!2 sublist asc exec price from book3key where sym=s,side="S"; reverse[b],a } getTop2BookBySide:{[s] b:`bid`bid1!2 sublist desc exec price from bidbook2key where sym=s; a:`ask`ask1!2 sublist asc exec price from askbook2key where sym=s; reverse[b],a } getTop2BookBySym:{[s] b:`bid`bid1!2 sublist desc exec price from bookbysym[s]where side="B"; a:`ask`ask1!2 sublist asc exec price from bookbysym[s]where side="S"; reverse[b],a } getTop2BookBySymSide:{[s] b:`bid`bid1!2 sublist desc exec price from bidbookbysym s; a:`ask`ask1!2 sublist asc exec price from askbookbysym s; reverse[b],a } function | run 1 run 2 run 3 run 4 average -----------------------------------|------------------------------- do[10000;getTop2Book`FDP] | 75 74 75 74 74.5 do[10000;getTop2BookBySide`FDP] | 63 62 62 61 62 do[10000;getTop2BookBySym`FDP] | 64 64 63 64 63.75 do[10000;getTop2BookBySymSide`FDP] | 58 58 58 58 58 Table 6: Milliseconds taken to execute each top 2 levels of book function 10,000 times Once again, Table 6 results show the getTop2BookBySymSide function returning the top two levels of the book in the least amount of time. As was the case in the last example, we can further optimize the function to achieve greater improvement. getTop2BookBySymSide2:{[s] bid:max bids:key[bidbookbysym s]`price; b:`bid1`bid!(max bids where not bids=bid;bid); ask:min asks:key[askbookbysym s]`price; a:`ask`ask1!(ask;min asks where not asks=ask); b,a } function | run 1 run 2 run 3 run 4 average -----------------------------------|------------------------------- do[10000;getTop2BookBySymSide2`FDP] | 28 28 28 28 28 Table 7: Milliseconds taken to execute the modified top 2 levels of book function 10,000 times By using min and max in the getTop2BookBySymSide2 function instead of asc and desc , it needs approximately half the time taken by the other four functions. Again, this could help to alleviate back pressure on the real-time book process and increase throughput of messages. Business use case¶ Now let us consider more realistic examples by increasing the number of symbols across a range of one thousand to five thousand, in one thousand increments, and adding twenty levels of depth on each side. For these examples, it makes sense to apply some attributes to the different structures to maximize query efficiency. We will not profile the update functions again since we used the assumption of only ever receiving updates for a single symbol. /structure 1 – apply g# to sym and side columns book3key:update`g#sym,`g#side from`sym`side`price xkey book /structure 2 – apply g# to sym column bidbook2key:askbook2key:update`g#sym from`sym`price xkey book /structure 3 – apply u# to dictionary key and g# to side column bookbysym:(`u#1#`)!enlist update`g#side from`side`price xkey book /structure 4 - apply u# to dictionary key bidbookbysym:askbookbysym:(`u#1#`)!enlist`price xkey book s,:upper (neg ns:1000)?`5 px,:1+til ns createBig:{[n] dict:s!px;sym:n?s;side:n?"BS"; price:((+;-)side="B") .'flip(dict sym;.05*n?1+til 20); sample:flip`time`sym`side`price`size!
(asc n?09:30:00+til 23400;sym;side;price;100*n?1+til 10) } /applying p# to sym column of x to speed up selection by sym later on x:@[`sym xasc createBig 1000000;`sym;`p#] /populate the tables remembering /we have to split by sym for the last 2 functions updSimple[`tablename;x] updBySide[`tablename;x] updBySym[`tablename] each {[s]select from x where sym=s}each distinct x`sym updBySymSide[`tablename]each {[s]select from x where sym=s}each distinct x`sym symbol counts function | 1000 2000 3000 4000 5000 -------------------------------------|------------------------------- do[10000;getTopOfBook`FDP] | 36 36 36 36.5 36.5 do[10000;getTopOfBookBySide`FDP] | 21 21.5 21.5 21.5 21.5 do[10000;getTopOfBookBySym`FDP] | 23 23 23.5 23.5 24 do[10000;getTopOfBookBySymSide`FDP] | 19 19 19 19 19 do[10000;getTopOfBookBySymSide2`FDP] | 11 11 11 11 11 do[10000;getTop2Book`FDP] | 76 76.5 76.5 76.5 76.5 do[10000;getTop2BookBySide`FDP] | 61 62 62 62 62 do[10000;getTop2BookBySym`FDP] | 64 64 64 64 64 do[10000;getTop2BookBySymSide`FDP] | 60 60 60 60 60 do[10000;getTop2BookBySymSide2`FDP] | 34 34 34 34 34 Table 8: Millisecond times required to execute each of the functions 10,000 times across a range of symbol counts We see that the performance of the above functions remains stable as the number of tickers increases. Conclusion¶ This white paper is a brief introduction to some of the options that are available for storing book information and, as mentioned previously, this paper intentionally excludes handling some cases to keep the code as simple and easy to follow as possible (e.g. the getTop … functions do not exclude prices where size is equal to zero and the getTop2 … functions have no handling for the case when there are not at least two levels of depth). However, it is straightforward to implement solutions for these cases without having a significant impact on the performance. Ultimately, the decision as to how to store the data will depend on the two factors identified in this paper: - how the data is received by the process maintaining book state (e.g. single or double sided) - how the data is to be accessed inside the process (e.g. full book or just top n levels required) How data is received and accessed can vary depending on each individual use case. It is therefore recommended that you experiment with each of the suggestions above, and of course any of the other alternatives that are available, to determine which is best for your particular needs. All tests performed with kdb+ 2.8 (2012.03.21). Author¶ Niall Coulter has worked on many kdb+ algorithmic-trading systems related to both the equity and FX markets. Based in New York, Niall is a technical architect for KX Platform, a suite of high-performance data-management, event-processing and trading platforms.
Changes in 4.1¶ The README.txt of the current 4.1 release contains a full list of changes. Some of the highlights are listed below. Production release date¶ 2024.02.13 q Language Features for Enhanced Readability and Flexibility¶ Dictionary Literal Syntax¶ With dictionary literals, you can concisely define dictionaries. Compare q)enlist[`aaa]!enlist 123 / 4.0 aaa| 123 with q)([aaa:123]) / 4.1 aaa| 123 This syntax follows rules consistent with list and table literal syntax. q)([0;1;2]) / implicit key names x | 0 x1| 1 x2| 2 q)d:([a:101;b:]);d 102 / missing values create projections a| 101 b| 102 q)d each`AA`BB`CC a b ------ 101 AA 101 BB 101 CC Similar to list literal syntax (;..) , omission of values results in projection q)([k1:;k2:])[1;"a"] k1| 1 k2| "a" Pattern Matching¶ Assignment has been extended so the left-hand side of the colon (:) can now be a pattern. q)a:1 / atom (old school) q)(b;c):(2;3) / list (nice!) q)([four:d]):`one`two`three`four`five!1 2 3 4 5 / dictionary (match a subset of keys) q)([]x2:e):([]1 2;3 4;5 6) / table (what?!) q)a,b,c,d,e 1 2 3 4 5 6 Before assigning any variables, q ensures that left and right values match. q)(1b;;x):(1b;`anything;1 2 3) / empty patterns match anything q)x 1 2 3 Failure to match throws an error without assigning. q)(1b;y):(0b;3 2 1) 'match q)y 'y Type Checking¶ While we're checking patterns, we can also check types. q)(surname:`s):`simpson q)(name:`s;age:`h):(`homer;38h) q)(name:`s;age:`h):(`marge;36.5) / d'oh 'type q)name,surname / woohoo! `homer`simpson kdb+ can check function parameters too. q)fluxCapacitor:{[(src:`j;dst:`j);speed:`f]$[speed<88;src;dst]} q)fluxCapacitor[1955 1985]87.9 1955 q)fluxCapacitor[1955 1985]88 'type q)fluxCapacitor[1955 1985]88.1 / Great Scott! 1985 Filter Functions¶ We can extend this basic type checking to define our own 'filter functions' to run before assignment. q)tempCheck:{$[x<0;'"too cold";x>40;'"too hot";x]} / return the value (once we're happy) q)c2f:{[x:tempCheck]32+1.8*x} q)c2f -4.5 'too cold q)c2f 42.8 'too hot q)c2f 20 / just right 68f We can use filter functions to change the values that we are assigning, q)(a;b:10+;c:100+):1 2 3 q)a,b,c 1 12 103 amend values at depth without assignment, q)addv:{([v:(;:x+;:(x*10)+)]):y} q)addv[10;([k:`hello;v:1 2 3])] k| `hello v| 1 12 103 or even change the types, q)chkVals:{$[any null x:"J"$","vs x;'`badData;x]} q)sumVals:{[x:chkVals]sum x} q)sumVals "1,1,2,3,5,8" 20 q)sumVals "8,4,2,1,0.5" 'badData Peach/Parallel Processing Enhancements¶ peach enables the parallel execution of a function on multiple arguments. Significant enhancements have been made to augment its functionality: - Ability to nest peach statements. This means you can now apply peach within another peach, allowing for the implementation of more sophisticated and intricate parallelization strategies. - In previous versions, peach had certain limitations, particularly with other multithreaded primitives. The latest release eliminates these constraints, providing greater flexibility in designing parallel computations. Now, you can seamlessly integrate peach with other multithreaded operations, unlocking new possibilities for concurrent processing. - Use of a work-stealing algorithm. Work-stealing is an innovative technique in parallel computing, where idle processors intelligently acquire tasks from busy ones. This dynamic approach ensures a balanced distribution of tasks, leading to improved overall efficiency. 
This marks a departure from the previous method of pre-allocating chunks of work to each thread. The incorporation of a work-stealing algorithm translates to better utilization of CPU cores, enhancing overall computational efficiency. In our own tests, we have seen these improvements lead to a significant reduction in processing times e.g. q)\s 8i Before: kdb+ 4.0 q)\t (inv peach)peach 2 4 1000 1000#8000000?1. 4406 After: kdb+ 4.1 q)\t (inv peach)peach 2 4 1000 1000#8000000?1. 2035 Unlimited IPC/Websocket Connections¶ The number of connections is now limited only by the operating system and protocol settings (system configurable). In addition, the c-api function sd1 no longer imposes a limit of 1023 on the value of the descriptor submitted. HTTP Persistent Connections¶ kdb+ now has support for HTTP Persistent Connections via .h.ka, a feature designed to elevate the efficiency and responsiveness of your data interactions. HTTP persistent connections enable multiple requests and responses to traverse over the same connection. This translates to reduced latency, optimized resource utilization, and an overall improvement in performance. Multithreaded Data Loading¶ The CSV load, fixed-width load (0:), and binary load (1:) functionalities are now multithreaded. This enhancement improves performance and efficiency; it is particularly beneficial for handling large datasets across various data loading scenarios. Socket Performance¶ Users with a substantial number of connections can experience improved performance, thanks to significant enhancements in socket operations. Below is a comparison with version 4.0: q)h:hopen `:tcps://localhost:9999 Before: kdb+ 4.0 q)\ts:10000 h"2+2" 1508 512 After: kdb+ 4.1 q)\ts:10000 h"2+2" 285 512 Enhanced TLS Support and Updated OpenSSL¶ Support for OpenSSL 1.0, 1.1.x, and 3.x, coupled with dynamic searching for OpenSSL libraries and TCP and UDS encryption, provides a robust solution for industries where data integrity and confidentiality are non-negotiable. TLS messaging can now be utilized on threads other than the main thread. This allows for secure Inter-Process Communication (IPC) and HTTP operations in multithreaded input queue mode. HTTP client requests and one-shot sync messages within secondary threads are facilitated through peach. More Algorithms for At-rest Compression¶ Zstd has been added to our list of supported compression algorithms. The compression algorithms can also be used when writing binary data directly. NUCs¶ We try to avoid introducing compatibility issues, and most of those that follow are a result of unifying behavior or tightening up loose cases. []a::e¶ ([]a::e) now throws parse (previously was amend of global 'a' with implicit column name) Value inside select/exec¶ value"..." inside select/exec on the main thread previously used lambda's scope for locals; it now always uses the global scope, e.g. q)a:0;{a:1;exec value"2*a"from([]``)}[] 0 (This fixes a bug since 2.6.) Dynamic Load¶ Loading shared libraries via 2: resolved to a canonical path prior to load via the OS, since v3.6 2018.08.24. This caused issues for libs whose run-time path was relative to a sym-link. It now resolves to an absolute path only, without resolving sym-links.
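For reference, a dynamic load via 2: looks like the following (the library and function names are placeholders for illustration; per the note above, the supplied path is now taken as an absolute path without sym-link resolution):

q)f:`:./libexample 2:(`examplefn;2)   / hypothetical: bind examplefn (arity 2) from ./libexample.so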
Threads using subnormals¶ macOS/Microsoft Windows performance has been improved when using floating point calculations with subnormal numbers on threads created via multi-threaded input mode or secondary threads (rounds to zero). .z.o¶ .z.o for l64arm build is now l64arm , previously l64 . .z.o for mac universal binary on arm returns m64 , previously m64arm .Q.gc¶ Added optional param (type long) to .Q.gc indicating how aggressively it should try to return memory to the OS. q).Q.gc 0 / least aggressive q).Q.gc[] / most aggressive (continue to use this in the general case; currently matches level 2) .Q.Xf and .Q.Cf¶ Deprecated functions .Q.Xf and .Q.Cf have been removed. (Using resulting files could return file format errors since 3.6.) .Q.id¶ .Q.id for atom now produces `a when it contains a single character that is not in .Q.an (instead of empty sym), e.g. q).Q.id`$"+" `a (previous version returned `) .Q.id for atom changes are reflected in .Q.id for tables (as before, it was applied to each column name). .Q.id for tables has additional logic to cater for duplicate col names after applying previously defined rules. Names are now appended with 1, 2, and so on, when matched against previous cols, e.g. q)cols .Q.id(`$("count+";"count*";"count1"))xcol([]1 2;3 4;5 6) `count1`count11`count12 (previous version returned `count1`count1`count1) q)cols .Q.id(`$("aa";"=";"+"))xcol([]1 2;3 4;5 6) `aa`a`a1 (previous version returned `aa`1`1) .Q.id now follows the same rule when the provided name begins with an underscore, as it does when it begins with a numerical character. Previously this could produce an invalid column name. q).Q.id`$"_" `a_ q)cols .Q.id(`$("3aa";"_aa";"_aa"))xcol([]1 2;3 4;5 6) `a3aa`a_aa`a_aa1 Block exit¶ kdb+ now blocks the ability to call the exit command during reval or -u when the handle is a remote. \\ was already blocked. Handles within peach¶ Using handles within peach is not supported, e.g. q)H:hopen each 4#4000;{x""}peach H 'nosocket [2] {x""} ^ [0] H:hopen each 4#4000;{x""}peach H ^ q)) One-shot IPC requests can be used within peach instead. .z.W¶ .z.W now returns handles!bytes as I!J , instead of the former handles!list of individual msg sizes. Use sum each .z.W if writing code targeting 4.0 and 4.1. Dictionary update contains by clause¶ Since 4.1 2024.04.29, error if dictionary update contains by clause (previously ignored). q)d:(`a`b!1 2) q)update a by b from d / 4.0 a| 1 b| 2 q)update a by b from d / 4.1 'type Functions withdrawn from q¶ The functions listed here have been withdrawn from q, and are listed here solely for the interpretation of old scripts. list ¶ The list function created a list from its arguments. Use enlist instead. q)list[1;`a`b;"abcd"] (1;`a`b;"abcd") plist ¶ The plist function was a form of enlist (which creates a list from its arguments). It was removed completely in V3.0. q)plist[1;`a`b;"abcd"] (1;`a`b;"abcd") txf ¶ Syntax: txf[table;indices;columns] The txf function did indexed lookup on a keyed table. The function was deprecated since V2.4, and removed completely in V3.0, in favor of straightforward indexing as shown below. Here, table is a keyed table. The indices are the key values to lookup. The columns are those to be read.
q)s:`a`s`d`f q)c:2 3 5 7 q)p:1 2 3 4 q)r:10 20 30 40 q)t:([s;c];p;r) q)txf[t;(s;c);`p`r] 1 10 2 20 3 30 4 40 q)t[([]s;c);`p`r] / equivalent without txf 1 10 2 20 3 30 4 40 q) q)txf[t;(`d`a;5 2);`p`r] 3 30 1 10 q)t[([]s:`d`a;c:5 2);`p`r] / equivalent without txf 3 30 1 10 txf was used in select-expressions to join tables with no foreign key relationship. q)q:([]s:`d`f`s;c:5 7 3;k:"DFS") q)select s,k,txf[t;(s;c);`p] from q s k x ----- d D 3 f F 4 s S 2 q)select s,k,t[([]s;c);`p] from q / equivalent without txf s k x ----- d D 3 f F 4 s S 2
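Another equivalent without txf, for the same t and q, is a left join on the keyed table (a suggested alternative, not from the original text):

q)select s,k,p from q lj t    / t is keyed on s,c so lj matches on those columns
s k p
-----
d D 3
f F 4
s S 2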
// Populate all required Nodes for the graph graph:.ml.addNode[graph;`configuration ;configuration.node] graph:.ml.addNode[graph;`featureData ;featureData.node] graph:.ml.addNode[graph;`targetData ;targetData.node] graph:.ml.addNode[graph;`dataCheck ;dataCheck.node] graph:.ml.addNode[graph;`modelGeneration ;modelGeneration.node] graph:.ml.addNode[graph;`featureDescription ;featureDescription.node] graph:.ml.addNode[graph;`labelEncode ;labelEncode.node] graph:.ml.addNode[graph;`dataPreprocessing ;dataPreprocessing.node] graph:.ml.addNode[graph;`featureCreation ;featureCreation.node] graph:.ml.addNode[graph;`featureSignificance;featureSignificance.node] graph:.ml.addNode[graph;`trainTestSplit ;trainTestSplit.node] graph:.ml.addNode[graph;`selectModels ;selectModels.node] graph:.ml.addNode[graph;`runModels ;runModels.node] graph:.ml.addNode[graph;`optimizeModels ;optimizeModels.node] graph:.ml.addNode[graph;`preprocParams ;preprocParams.node] graph:.ml.addNode[graph;`predictParams ;predictParams.node] graph:.ml.addNode[graph;`pathConstruct ;pathConstruct.node] graph:.ml.addNode[graph;`saveGraph ;saveGraph.node] graph:.ml.addNode[graph;`saveMeta ;saveMeta.node] graph:.ml.addNode[graph;`saveReport ;saveReport.node] graph:.ml.addNode[graph;`saveModels ;saveModels.node] // Connect all possible edges prior to the data/config ingestion // dataCheck graph:.ml.connectEdge[graph;`configuration;`output;`dataCheck;`config]; graph:.ml.connectEdge[graph;`featureData ;`output;`dataCheck;`features]; graph:.ml.connectEdge[graph;`targetData ;`output;`dataCheck;`target]; // modelGeneration graph:.ml.connectEdge[graph;`dataCheck;`config;`modelGeneration;`config] graph:.ml.connectEdge[graph;`dataCheck;`target;`modelGeneration;`target] // featureDescription graph:.ml.connectEdge[graph;`dataCheck;`config ;`featureDescription;`config] graph:.ml.connectEdge[graph;`dataCheck;`features;`featureDescription;`features] // labelEncode graph:.ml.connectEdge[graph;`dataCheck;`target;`labelEncode;`input] // dataPreprocessing graph:.ml.connectEdge[graph;`dataCheck ;`config ;`dataPreprocessing; `config] graph:.ml.connectEdge[graph;`featureDescription;`features ;`dataPreprocessing; `features] graph:.ml.connectEdge[graph;`featureDescription;`symEncode;`dataPreprocessing; `symEncode] // featureCreation graph:.ml.connectEdge[graph;`dataPreprocessing;`output;`featureCreation; `features] graph:.ml.connectEdge[graph;`dataCheck ;`config;`featureCreation; `config] // featureSignificance graph:.ml.connectEdge[graph;`featureCreation;`features;`featureSignificance; `features] graph:.ml.connectEdge[graph;`labelEncode ;`target ;`featureSignificance; `target] graph:.ml.connectEdge[graph;`dataCheck ;`config ;`featureSignificance; `config] // trainTestSplit graph:.ml.connectEdge[graph;`featureSignificance;`features;`trainTestSplit; `features] graph:.ml.connectEdge[graph;`featureSignificance;`sigFeats;`trainTestSplit; `sigFeats] graph:.ml.connectEdge[graph;`labelEncode ;`target ;`trainTestSplit; `target] graph:.ml.connectEdge[graph;`dataCheck ;`config ;`trainTestSplit; `config] // selectModels graph:.ml.connectEdge[graph;`trainTestSplit ;`output;`selectModels;`ttsObject] graph:.ml.connectEdge[graph;`labelEncode ;`target;`selectModels;`target] graph:.ml.connectEdge[graph;`dataCheck ;`config;`selectModels;`config] graph:.ml.connectEdge[graph;`modelGeneration;`output;`selectModels;`models] // runModels graph:.ml.connectEdge[graph;`trainTestSplit;`output;`runModels;`ttsObject] graph:.ml.connectEdge[graph;`selectModels ;`output;`runModels;`models] 
graph:.ml.connectEdge[graph;`dataCheck ;`config;`runModels;`config] // optimizeModels graph:.ml.connectEdge[graph;`runModels ;`orderFunc ;`optimizeModels; `orderFunc] graph:.ml.connectEdge[graph;`runModels ;`bestModel ;`optimizeModels; `bestModel] graph:.ml.connectEdge[graph;`runModels ;`bestScoringName;`optimizeModels; `bestScoringName] graph:.ml.connectEdge[graph;`selectModels ;`output ;`optimizeModels; `models] graph:.ml.connectEdge[graph;`trainTestSplit;`output ;`optimizeModels; `ttsObject] graph:.ml.connectEdge[graph;`dataCheck ;`config ;`optimizeModels; `config] // preprocParams graph:.ml.connectEdge[graph;`dataCheck ;`config ; `preprocParams;`config] graph:.ml.connectEdge[graph;`featureDescription ;`dataDescription; `preprocParams;`dataDescription] graph:.ml.connectEdge[graph;`featureDescription ;`symEncode ; `preprocParams;`symEncode] graph:.ml.connectEdge[graph;`featureCreation ;`creationTime ; `preprocParams;`creationTime] graph:.ml.connectEdge[graph;`featureSignificance;`sigFeats ; `preprocParams;`sigFeats] graph:.ml.connectEdge[graph;`labelEncode ;`symMap ; `preprocParams;`symMap] graph:.ml.connectEdge[graph;`featureCreation ;`featModel ; `preprocParams;`featModel] graph:.ml.connectEdge[graph;`trainTestSplit ;`output ; `preprocParams;`ttsObject] // predictParams graph:.ml.connectEdge[graph;`optimizeModels;`bestModel ;`predictParams; `bestModel] graph:.ml.connectEdge[graph;`optimizeModels;`modelName ;`predictParams; `modelName] graph:.ml.connectEdge[graph;`optimizeModels;`testScore ;`predictParams; `testScore] graph:.ml.connectEdge[graph;`optimizeModels;`hyperParams ;`predictParams; `hyperParams] graph:.ml.connectEdge[graph;`optimizeModels;`analyzeModel ;`predictParams; `analyzeModel] graph:.ml.connectEdge[graph;`runModels ;`modelMetaData;`predictParams; `modelMetaData] // pathConstruct graph:.ml.connectEdge[graph;`predictParams;`output;`pathConstruct; `predictionStore] graph:.ml.connectEdge[graph;`preprocParams;`output;`pathConstruct; `preprocParams] // saveGraph graph:.ml.connectEdge[graph;`pathConstruct;`output;`saveGraph;`input] // saveMeta graph:.ml.connectEdge[graph;`pathConstruct;`output;`saveMeta;`input] // saveReport graph:.ml.connectEdge[graph;`saveGraph;`output;`saveReport;`input] // saveModel graph:.ml.connectEdge[graph;`pathConstruct;`output;`saveModels;`input] ================================================================================ FILE: ml_automl_code_nodes_configuration_configuration.q SIZE: 697 characters ================================================================================ // code/nodes/configuration/configuration.q - Configuration node // Copyright (c) 2021 Kx Systems Inc // // Entry point node used to pass the run configuration into the AutoML graph \d .automl // @kind function // @category node // @desc Pass the configuration dictionary into the AutoML graph and to // the relevant nodes // @param config {dictionary} Custom configuration information relevant to the // present run // @return {dictionary} Configuration dictionary ready to be passed to the // relevant nodes within the pipeline configuration.node.function:{[config] config } // Input information configuration.node.inputs:"!" // Output information configuration.node.outputs:"!" 
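For orientation, the node files here all follow the same contract: a <name>.node.function plus inputs and outputs given either as a single type character or as a dictionary of names to type characters. A hypothetical extra node written against that contract (the name, logic and wiring are illustrative only, not part of AutoML) might look like this, with the single input addressed as `input when wired, as labelEncode and saveGraph are above:

\d .automl
// Hypothetical node: drop feature columns that are entirely null
dropNullCols.node.function:{[features]
  f:flip features;                   / table to column dictionary
  keep:where not all each null f;    / columns with at least one non-null value
  flip keep#f                        / back to a table
  }
// Input information
dropNullCols.node.inputs:"+"         / a single table input
// Output information
dropNullCols.node.outputs:"+"        / a single table output

// and, if wired into the graph, something along the lines of
// graph:.ml.addNode[graph;`dropNullCols;dropNullCols.node]
// graph:.ml.connectEdge[graph;`dataCheck;`features;`dropNullCols;`input]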
================================================================================ FILE: ml_automl_code_nodes_configuration_init.q SIZE: 201 characters ================================================================================ // code/nodes/configuration/init.q - Load configuration node // Copyright (c) 2021 Kx Systems Inc // // Load code for configuration node \d .automl loadfile`:code/nodes/configuration/configuration.q ================================================================================ FILE: ml_automl_code_nodes_dataCheck_dataCheck.q SIZE: 1,519 characters ================================================================================ // code/nodes/dataCheck/dataCheck.q - The dataCheck node // Copyright (c) 2021 Kx Systems Inc // // Update configuration to include default parameters. Check that various // aspects of the dataset and configuration are suitable for running with // AutoML. \d .automl // @kind function // @category node // @desc Ensure that the data and configuration provided are suitable // for the application of AutoML. In the case that there are issues, error as // appropriate or augment the data to be suitable for the use case in // question. // @param config {dictionary} Configuration information assigned by the user // and related to the current run // @param features {table} Feature data as a table // @param target {number[]|symbol[]} Numerical or symbol vector containing // the target dataset // @return {dictionary} Modified configuration, feature and target datasets. // Error on issues with configuration, setup, target or feature dataset. dataCheck.node.function:{[config;features;target] config:dataCheck.updateConfig[features;config]; dataCheck.functions config; dataCheck.length[features;target;config]; dataCheck.target target; dataCheck.ttsSize config; dataCheck.NLPLoad config; dataCheck.NLPSchema[config;features]; features:dataCheck.featureTypes[features;config]; `config`features`target!(config;features;target) } // Input information dataCheck.node.inputs:`config`features`target!"!+F" // Output information dataCheck.node.outputs:`config`features`target!"!+F" ================================================================================ FILE: ml_automl_code_nodes_dataCheck_funcs.q SIZE: 7,534 characters ================================================================================ // code/nodes/dataCheck/funcs.q - Functions called in dataCheck node // Copyright (c) 2021 Kx Systems Inc // // Definitions of the main callable functions used in the application of // .automl.dataCheck \d .automl // Configuration update // @kind function // @category dataCheck // @desc Update configuration based on feature dataset and default // parameters // @param features {table} Feature data as a table // @param config {dictionary|char[]} Path to JSON file containing configuration // dictionary or a dictionary containing relevant information for the update // of augmented with start date/time // @return {dictionary} Full configuration info needed, augmenting config with // any default information dataCheck.updateConfig:{[features;config] typ:config`featureExtractionType; // Retrieve boiler plate additions at run start - ignored in custom additions standardCfg:`startDate`startTime`featureExtractionType`problemType#config; // Retrieve custom configuration information used to update default params customCfg:$[`configPath in key config; config`configPath; `startDate`startTime`featureExtractionType`problemType _ config ]; // Retrieve default params and replace 
defaults with custom configuration updateCfg:$[typ in`normal`nlp`fresh; dataCheck.i.getCustomConfig[features;customCfg;config;typ]; '`$"Inappropriate feature extraction type" ]; config:standardCfg,updateCfg; // If applicable add save path information to configuration dictionary config,:$[0<config`saveOption;dataCheck.i.pathConstruct config;()!()]; if[utils.logging;config:dataCheck.i.logging config]; config[`logFunc]:utils.printFunction[config`printFile;;1;1]; checks:all not utils[`printing`logging],config`saveOption; if[(2=utils.ignoreWarnings)&checks; updatePrinting[]; config[`logFunc]utils.printWarnings`printDefault ]; // Check that no log/save path created already exists dataCheck.i.fileNameCheck config; warnType:$[config`pythonWarning;`module;`ignore]; .p.import[`warnings][`:filterwarnings]warnType; if[0~checkimport 4;.p.get[`tfWarnings]$[config`pythonWarning;`0;`2]]; savedWord2Vec:enlist[`savedWord2Vec]!enlist 0b; if[0W~config`seed;config[`seed]:"j"$.z.t]; config,savedWord2Vec } // Data and configuration checking
The Twelve Days of Christmas¶ Map a simple data structure to a complex one Nested indexes describe the structure of the result, produced by a single (elided) use of Index At. Amend and Amend At let us change items at depth in the result structure. Two code lines: no loops, no counters, no control structures. Write a program that prints the lyrics of the Christmas carol “The Twelve Days of Christmas” from Rosetta Code Follow a python¶ Rosetta Code offers a Python solution. gifts = '''\ A partridge in a pear tree. Two turtle doves Three french hens Four calling birds Five golden rings Six geese a-laying Seven swans a-swimming Eight maids a-milking Nine ladies dancing Ten lords a-leaping Eleven pipers piping Twelve drummers drumming'''.split('\n') days = '''first second third fourth fifth sixth seventh eighth ninth tenth eleventh twelfth'''.split() for n, day in enumerate(days, 1): g = gifts[:n][::-1] print(('\nOn the %s day of Christmas\nMy true love gave to me:\n' % day) + '\n'.join(g[:-1]) + (' and\n' + g[-1] if n > 1 else g[-1].capitalize())) Seems pretty straightforward. We could translate it into q. gifts:( "A partridge in a pear tree."; "Two turtle doves"; "Three french hens"; "Four calling birds"; "Five golden rings"; "Six geese a-laying"; "Seven swans a-swimming"; "Eight maids a-milking"; "Nine ladies dancing"; "Ten lords a-leaping"; "Eleven pipers piping"; "Twelve drummers drumming") days:" "vs"first second third fourth fifth sixth", " seventh eighth ninth tenth eleventh twelfth" Now we need a function that returns verse x , which we can iterate through til 12 . Unlike the Python code, we shall generate the whole carol as a list of strings. First line: q){ssr["On the %s day of Christmas";"%s";]days x}3 "On the fourth day of Christmas" First two lines: q){(ssr["On the %s day of Christmas";"%s";days x];"My true love gave to me")}3 "On the fourth day of Christmas" "My true love gave to me" But we do not need the power of ssr . We can just join strings. q){("On the ",(days x)," day of Christmas";"My true love gave to me")}3 "On the fourth day of Christmas" "My true love gave to me" And some gifts. q){("On the ",(days x)," day of Christmas";"My true love gave to me"),(x+1)#gifts}3 "On the fourth day of Christmas" "My true love gave to me" "A partridge in a pear tree." "Two turtle doves" "Three french hens" "Four calling birds" But not in that order. q){("On the ",(days x)," day of Christmas";"My true love gave to me"),reverse(x+1)#gifts}3 "On the fourth day of Christmas" "My true love gave to me" "Four calling birds" "Three french hens" "Two turtle doves" "A partridge in a pear tree." Almost. Except on the first day, the last line begins And a partridge. We can deal with this. A little conditional execution. Conditional execution¶ Here is the first line expressed as the result of a Cond. $[x;"And a partridge in a pear tree";"A partridge in a pear tree"] We do not need to compare x to zero. If it is zero, we get the second version of the line. But we already have the short version of the line. We want to amend it. $[x;"And a";"A"],1_"A partridge in a pear tree" The second part of the conditional above is a no-op. Better perhaps to say we may want to amend the line. With a function that drops the first char and prepends "And a" : "And a", _[1;] @ / a composition We could use the Do iterator to apply it – zero or one times. q)1("And a", _[1] @)\"A partridge" "A partridge" "And a partridge" Now we do have to compare x to 0. And cast the result to long. 
("j"$x=0)("And a", _[1;] @)/"A partridge" Nothing here seems quite satisfactory. We shall revisit it. For now we shall prefer the slightly shorter and syntactically simpler Cond. Apply At¶ We want to make the changes above, conditionally, to the last gift of the day. Happily, until we reverse the list, that is the first gift: index is 0. q){("On the ",(days x)," day of Christmas";"My true love gave to me"), reverse @[;0;{y,1_x};$[x;"And a";"A"]](x+1)#gifts}0 "On the first day of Christmas" "My true love gave to me" "A partridge in a pear tree." Here we have used the quaternary form of Amend At. The Reference gives its syntax as @[d; i; v; vy] Let’s break ours down accordingly. @[; 0; {y,1_x}; $[x;"And a";"A"]] d - The d argument is missing. It is the only argument missing, so we have a unary projection of Amend At. That makes the value ofd the expression to its right:(x+1)#gifts . The list of gifts, partridge first. i - 0: we are amending the first item in the list. The partridge line. v - This is the function to be applied to the partridge line. We are using the quaternary form of Amend At, so v is a binary. The partridge line is itsx argument. Ourv is{y,1_x} . It will drop the first character of the partridge line and prepend the value of the fourth argument. vy - This the right argument of v : a choice between"And a" and"A" . We need a blank line at the end of each verse. q){("On the ",(days x)," day of Christmas";"My true love gave to me"), reverse(enlist""),@[;0;{y,1_x};$[x;"And a";"A"]](x+1)#gifts}0 Put this into the script. day:{("On the ",(days x)," day of Christmas";"My true love gave to me"), reverse(enlist""),@[;0;{y,1_x};$[x;"And a";"A"]](x+1)#gifts} And run it. q)1 "\n"sv raze day each til 12; On the first day of Christmas My true love gave to me A partridge in a pear tree. On the second day of Christmas My true love gave to me Two turtle doves And a partridge in a pear tree. .. On the twelfth day of Christmas My true love gave to me Twelve drummers drumming Eleven pipers piping Ten lords a-leaping Nine ladies dancing Eight maids a-milking Seven swans a-swimming Six geese a-laying Five golden rings Four calling birds Three french hens Two turtle doves And a partridge in a pear tree. Q eye for the scalar guy¶ Our translation of the Python solution worked, but we can do better. Start from scratch. Leave aside for now how the day changes at the beginning of each verse. Set aside also the "And a" on the first verse, and notice that only that verse varies this way. Suppose we construct each verse as a subset of the final stanza? stanza:( "On the twelfth day of Christmas"; "My true love gave to me:"; "Twelve drummers drumming"; "Eleven pipers piping"; "Ten lords a-leaping"; "Nine ladies dancing"; "Eight maids a-milking"; "Seven swans a-swimming"; "Six geese a-laying"; "Five golden rings"; "Four calling birds"; "Three french hens"; "Two turtle doves"; "And a partridge in a pear tree."; "") Nested indexes¶ Fifteen lines. For verse x we want the first two and the last x+2 . q)0 1,/:{(reverse x)+2+til each 2+x}til 12 / line numbers 0 1 13 14 0 1 12 13 14 0 1 11 12 13 14 0 1 10 11 12 13 14 0 1 9 10 11 12 13 14 0 1 8 9 10 11 12 13 14 0 1 7 8 9 10 11 12 13 14 0 1 6 7 8 9 10 11 12 13 14 0 1 5 6 7 8 9 10 11 12 13 14 0 1 4 5 6 7 8 9 10 11 12 13 14 0 1 3 4 5 6 7 8 9 10 11 12 13 14 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 Indexing is atomic. Index stanza using Index At. 
q)show verses:stanza @ 0 1,/:{(reverse x)+2+til each 2+x}til 12 ("On the twelfth day of Christmas";"My true love gave to me:";"And a partridg.. ("On the twelfth day of Christmas";"My true love gave to me:";"Two turtle dov.. ("On the twelfth day of Christmas";"My true love gave to me:";"Three french h.. ("On the twelfth day of Christmas";"My true love gave to me:";"Four calling b.. ("On the twelfth day of Christmas";"My true love gave to me:";"Five golden ri.. ("On the twelfth day of Christmas";"My true love gave to me:";"Six geese a-la.. ("On the twelfth day of Christmas";"My true love gave to me:";"Seven swans a-.. ("On the twelfth day of Christmas";"My true love gave to me:";"Eight maids a-.. ("On the twelfth day of Christmas";"My true love gave to me:";"Nine ladies da.. ("On the twelfth day of Christmas";"My true love gave to me:";"Ten lords a-le.. ("On the twelfth day of Christmas";"My true love gave to me:";"Eleven pipers .. ("On the twelfth day of Christmas";"My true love gave to me:";"Twelve drummer.. A list. Each item is a list of strings. Nice. Postfix syntax lets us elide Index At q)lines:0 1,/:{(reverse x)+2+til each 2+x}til 12 q)(stanza lines) ~ stanza@lines 1b Thus verses:stanza 0 1,/:{(reverse x)+2+til each 2+x}til 12 Amend At Each¶ Now for those first lines. Iterate a function that, in the first line, replaces "twelfth" with the corresponding item of days : It uses the ternary form of Amend At to apply a unary function to the first line of the verse. That unary is ssr[;"twelfth";y] a projection of ternary ssr onto "twelfth" and the item from days . q)verses{@[x;0;ssr[;"twelfth";y]]}'days ("On the first day of Christmas";"My true love gave to me:";"And a partridge .. ("On the second day of Christmas";"My true love gave to me:";"Two turtle dove.. ("On the third day of Christmas";"My true love gave to me:";"Three french hen.. ("On the fourth day of Christmas";"My true love gave to me:";"Four calling bi.. ("On the fifth day of Christmas";"My true love gave to me:";"Five golden ring.. ("On the sixth day of Christmas";"My true love gave to me:";"Six geese a-layi.. ("On the seventh day of Christmas";"My true love gave to me:";"Seven swans a-.. ("On the eighth day of Christmas";"My true love gave to me:";"Eight maids a-m.. ("On the ninth day of Christmas";"My true love gave to me:";"Nine ladies danc.. ("On the tenth day of Christmas";"My true love gave to me:";"Ten lords a-leap.. ("On the eleventh day of Christmas";"My true love gave to me:";"Eleven pipers.. ("On the twelfth day of Christmas";"My true love gave to me:";"Twelve drummer.. Amend in depth¶ We can fix verse 0, line 2. q)first .[;0 2;{"A",5_x}]verses{@[x;0;ssr[;"twelfth";y]]}'days "On the first day of Christmas" "My true love gave to me:" "A partridge in a pear tree." "" Here we use the ternary form of Amend to apply a unary function to line 2 of verse 0. The function we apply is a lambda: {"A",5_x} . Raze and print¶ Raze to a list of strings and print. q)verses:stanza 0 1,/:{(reverse x)+2+til each 2+x}til 12 q)lyric:raze .[;0 2;{"A",5_x}] verses{@[x;0;ssr[;"twelfth";y]]}'days q)1"\n"sv lyric; On the first day of Christmas My true love gave to me: A partridge in a pear tree. On the second day of Christmas My true love gave to me: Two turtle doves And a partridge in a pear tree. ..
On the twelfth day of Christmas My true love gave to me: Twelve drummers drumming Eleven pipers piping Ten lords a-leaping Nine ladies dancing Eight maids a-milking Seven swans a-swimming Six geese a-laying Five golden rings Four calling birds Three french hens Two turtle doves And a partridge in a pear tree. Test your understanding¶ Using string-search-and-replace to change the days looks like a sledgehammer to crack a nut. Can you find an alternative? Answer Replace the unary projection ssr[;"twelfth";y] with {(7#x),y,14_x}[;y] . Notice how projecting {(7#x),y,14_x} onto [;y] maps the y of the outer lambda to the y of the inner lambda. If you raze the verses before fixing the last line of the first verse, how else must you change the definition of lyric ? Answer lyric:@[;2;{"A",5_x}]raze(stanza lines){@[x;0;ssr[;"twelfth";y]]}'days You are now no longer amending at depth but amending an entire item. So you use Amend At rather than Amend. Write an expression to generate all the first lines. Answer {"On the ",x," day of Christmas"}each days Less obviously you can use an elision for the substitution. (Elision with one item missing is a unary projection of enlist .) q)("On the ";;" day of Christmas")"first" "On the " "first" " day of Christmas" q)raze("On the ";;" day of Christmas")"first" "On the first day of Christmas" For the whole list that gives raze each("On the ";;" day of Christmas")each days Remembering that with unary functions f each g each can be composed as (f g@)each gets us (raze("On the ";;" day of Christmas")@)each days which is interesting, but the lambda is preferable as syntactically simpler. Review¶ We got a lot from seeing each verse as a subset of stanza . We avoided lots of explicit iteration by generating a nested list of indexes, and indexing stanza with it. Indexing is atomic, and returned us a nested list of strings, each item a verse. We used an Each Right and one each to generate the indexes; otherwise iteration was free. Iteration is free Not actually, of course. But the iteration implicit in the primitives generally evaluates faster than any iteration you specify explicitly. And it is certainly free in terms of code volume. The structure was put together from integer indexes: lighter work than pushing strings around. We used one more Each to pair off days and verses and amend the first lines. After that we needed only fix the last line of the first verse and remove a level of nesting. Great example of what you can get done with nested indexes.
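As a small check on the structure (not part of the original), the index lists alone show each verse growing by one line, and the whole carol razing to 114 lines:

q)count each 0 1,/:{(reverse x)+2+til each 2+x}til 12
4 5 6 7 8 9 10 11 12 13 14 15
q)count lyric
114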
Serialization examples Integer of value 1 q)-8!1i 0x010000000d000000fa01000000 | bytes | semantics | | 0x01 | architecture used for encoding the message, big endian (0) or little endian (1) | | 00 | message type (0 – async, 1 – sync, 2 – response) | | 0000 | | | 0d000000 | msg length (13) | | fa | type of item following (-6, meaning a 4-byte integer follows) | | 01000000 | the 4-byte int value (1) | Integer vector q)-8!enlist 1i 0x010000001200000006000100000001000000 | bytes | semantics | | 0x01 | little endian | | 000000 | | | 12000000 | message length | | 06 | type (int vector) | | 00 | attributes (00 – none, 01 – s , 02 – u , 03 – p , 04 – g ) | | 01000000 | vector length (1) | | 01000000 | the item, a 4 byte integer (1) | Byte vector q)-8!`byte$til 5 0x01000000130000000400050000000001020304 | bytes | semantics | | 0x01 | little endian | | 000000 | | | 13000000 | message length | | 04 | type (byte vector) | | 00 | attributes | | 05000000 | vector length (5) | | 0001020304 | the 5 bytes | General list q)-8!`byte$enlist til 5 0x01000000190000000000010000000400050000000001020304 | bytes | semantics | | 0x01 | little endian | | 000000 | | | 19000000 | message length | | 00 | type (list) | | 00 | attributes | | 01000000 | list length (1) | | 04 | type (byte vector) | | 00 | attributes | | 05000000 | vector length (5) | | 0001020304 | the 5 bytes | Dictionary with atom values q)-8!`a`b!2 3i 0x0100000021000000630b0002000000610062000600020000000200000003000000 | bytes | semantics | | 0x01 | little endian | | 000000 | | | 21000000 | message length | | 63 | type (99 – dict) | | 0b | type (11 – symbol vector) | | 00 | attributes | | 02000000 | vector length | | 6100 | null terminated symbol (`a ) | | 6200 | null terminated symbol (`b ) | | 06 | type (6 – integer vector) | | 00 | attributes | | 02000000 | vector length | | 02000000 | 1st item (2) | | 03000000 | 2nd item (3) | Sorted dictionary with atom values q)-8!`s#`a`b!2 3i 0x01000000210000007f0b0102000000610062000600020000000200000003000000 | bytes | semantics | | 0x01 | little endian | | 000000 | | | 21000000 | message length | | 7f | type (127 – sorted dict) | | 0b | type (11 – symbol vector) | | 01 | attributes (`s# ) | | 02000000 | vector length | | 6100 | null terminated symbol (`a ) | | 6200 | null terminated symbol (`b ) | | 06 | type (6 – integer vector) | | 00 | attributes | | 02000000 | vector length | | 02000000 | 1st item (2) | | 03000000 | 2nd item (3) | Dictionary with vector values q)-8!`a`b!enlist each 2 3i 0x010000002d000000630b0002000000610062000000020000000600010000000200000006000100000003000000 | bytes | semantics | | 0x01 | little endian | | 000000 | | | 2d000000 | message length | | 63 | type (99 – dict) | | 0b | type (11 – symbol vector) | | 00 | attributes | | 02000000 | vector length (2) | | 6100 | null terminated symbol (`a ) | | 6200 | null terminated symbol (`b ) | | 00 | type (0 – list) | | 00 | attributes | | 02000000 | list length (2) | | 06 | type (6 – int vector) | | 00 | attributes | | 01000000 | vector length (1) | | 02000000 | 1st item (2) | | 06 | type (6 – int vector) | | 00 | attributes | | 01000000 | vector length (1) | | 03000000 | 1st item (3) | Table Note the relation to the previous example. 
q)-8!'(flip`a`b!enlist each 2 3i;([]a:enlist 2i;b:enlist 3i)) 0x010000002f0000006200630b0002000000610062000000020000000600010000000200000006000100000003000000 0x010000002f0000006200630b0002000000610062000000020000000600010000000200000006000100000003000000 | bytes | semantics | | 0x01 | little endian | | 000000 | | | 2f000000 | message length | | 62 | type (98 – table) | | 00 | attributes | | 63 | type (99 – dict) | | 0b | type (11 – symbol vector) | | 00 | attributes | | 02000000 | vector length (2) | | 6100 | null terminated symbol (`a ) | | 6200 | null terminated symbol (`b ) | | 00 | type (0 – list) | | 00 | attributes | | 02000000 | list length (2) | | 06 | type (6 – int vector) | | 00 | attributes | | 01000000 | vector length (1) | | 02000000 | 1st item (2) | | 06 | type (6 – int vector) | | 00 | attributes | | 01000000 | vector length (1) | | 03000000 | 1st item (3) | Sorted table Note the relation to the previous example. q)-8!`s#([]a:enlist 2i;b:enlist 3i) 0x010000002f0000006201630b0002000000610062000000020000000603010000000200000006000100000003000000 | bytes | semantics | | 0x01 | little endian | | 000000 | | | 2f000000 | message length | | 62 | type (98 – table) | | 01 | attributes (`s# ) | | 63 | type (99 – dict) | | 0b | type (11 – symbol vector) | | 00 | attributes | | 02000000 | vector length (2) | | 6100 | null terminated symbol (`a ) | | 6200 | null terminated symbol (`b ) | | 00 | type (0 – list) | | 00 | attributes | | 02000000 | list length (2) | | 06 | type (6 – int vector) | | 03 | attributes (`p# ) | | 01000000 | vector length (1) | | 02000000 | 1st item (2) | | 06 | type (6 – int vector) | | 00 | attributes | | 01000000 | vector length (1) | | 03000000 | 1st item (3) | Keyed table q)-8!([a:enlist 2i]b:enlist 3i) 0x010000003f000000636200630b00010000006100000001000000060001000000020000006200630b0001000000620000000100000006000100000003000000 | bytes | semantics | | 0x01 | little endian | | 000000 | | | 3f000000 | message length | | 63 | type (99 – dict) | | 62 | type (98 – table) | | 00 | attributes | | 63 | type (99 – dict) | | 0b | type (11 – symbol vector) | | 00 | attributes | | 01000000 | vector length (1) | | 6100 | null terminated symbol (`a ) | | 00 | type (0 – list) | | 00 | attributes | | 01000000 | vector length (1) | | 06 | type (6 – int vector) | | 00 | attributes | | 01000000 | vector length (1) | | 02000000 | 1st item (2) | | 62 | type (98 – table) | | 00 | attributes | | 63 | type (99 – dict) | | 0b | type (11 – symbol vector) | | 00 | attributes | | 01000000 | vector length (1) | | 6200 | null terminated symbol (`b ) | | 00 | type (0 – list) | | 00 | attributes | | 01000000 | vector length (1) | | 06 | type (6 – int vector) | | 00 | attributes | | 01000000 | vector length (1) | | 03000000 | 1st item (3) | Sorted keyed table Note the relation to the previous example. 
q)-8!`s#([a:enlist 2i]b:enlist 3i) 0x010000003f0000007f6201630b00010000006100000001000000060001000000020000006200630b0001000000620000000100000006000100000003000000 | bytes | semantics | | 0x01 | little endian | | 000000 | | | 3f000000 | message length | | 7f | type (127 – sorted dict) | | 62 | type (98 – table) | | 01 | attributes (`s# ) | | 63 | type (99 – dict) | | 0b | type (11 – symbol vector) | | 00 | attributes | | 01000000 | vector length (1) | | 6100 | null terminated symbol (`a ) | | 00 | type (0 – list) | | 00 | attributes | | 01000000 | vector length (1) | | 06 | type (6 – int vector) | | 00 | attributes | | 01000000 | vector length (1) | | 02000000 | 1st item (2) | | 62 | type (98 – table) | | 00 | attributes | | 63 | type (99 – dict) | | 0b | type (11 – symbol vector) | | 00 | attributes | | 01000000 | vector length (1) | | 6200 | null terminated symbol (`b ) | | 00 | type (0 – list) | | 00 | attributes | | 01000000 | vector length (1) | | 06 | type (6 – int vector) | | 00 | attributes | | 01000000 | vector length (1) | | 03000000 | 1st item (3) | Function q)-8!{x+y} 0x010000001500000064000a00050000007b782b797d | bytes | semantics | | 0x01 | little endian | | 000000 | | | 15000000 | message length | | 64 | type (100 – lambda) | | 00 | null terminated context (root) | | 0a | type (10 – char vector) | | 00 | attributes | | 05000000 | vector length | | 7b782b797d | {x+y} | Function in a non-root context q)\d .d q.d)test:{x+y} q.d)-8!test 0x01000000160000006464000a00050000007b782b797d | bytes | semantics | | 0x01 | little endian | | 000000 | | | 16000000 | message length | | 64 | type (100 – lambda) | | 6400 | null terminated context (.d ) | | 0a | type (10 – char vector) | | 00 | attributes | | 05000000 | length (5) | | 7b782b797d | {x+y} | Enumerations are automatically converted to values before sending through IPC. Interprocess communication
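The byte layouts above can be checked from q itself: -9! deserializes what -8! produced, and the message-length field at bytes 4–7 can be read back as a little-endian int with sv. A minimal sketch using the dictionary example from earlier:
q)b:-8!`a`b!2 3i
q)count b / total byte count matches the msg length field
33
q)0x0 sv reverse b 4 5 6 7 / little endian, so reverse before joining the bytes
33i
q)-9!b / -9! round-trips the serialization
a| 2
b| 3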
0: File Text¶ Read or write text The File Text operator 0: has five forms: Prepare Text table as a list of delimited strings Save Text write a list of strings to file Load CSV field-delimited string, list of strings, or file, as a list or matrix Load Fixed fixed-format list of strings, or file, as a list or matrix Key-Value Pairs delimited string as key-value pairs Prepare Text¶ Represent a table as a list of delimited strings delimiter 0: t 0:[delimiter;t] Where delimiter is a char atomt is a table in which the columns are either vectors or lists of strings returns a list of character strings containing text representations of the rows of t separated by delimiter . q)csv 0: ([]a:1 2 3;b:`x`y`z) "a,b" "1,x" "2,y" "3,z" q)"|" 0: (`a`b`c;1 2 3;"xyz") "a|1|x" "b|2|y" "c|3|z" Temporals are represented according to ISO 8601. q)show q:.z.p 2022.03.14D16:12:57.427499000 q)show t:flip`d`t!flip"dt"$/:2#q d t ----------------------- 2022.03.14 16:12:57.427 2022.03.14 16:12:57.427 q)csv 0:t "d,t" "2022-03-14,16:12:57.427" "2022-03-14,16:12:57.427" Any cells containing delimiter will be embraced with " and any embedded " doubled. q)t:([]x:("foo";"bar,baz";"qu\"ux";"fred\",barney")) q)t x --------------- "foo" "bar,baz" "qu\"ux" "fred\",barney" q)-1@","0:t; x foo "bar,baz" qu"ux "fred"",barney" Since 4.1t 2023.08.18, csv export of symbol or character vector values containing newlines "\n" are enclosed in double quotes. q)csv 0:([]("foo\nbar";"baz")) ,"x" "\"foo\nbar\"" "baz" Columns that are neither vectors nor lists of strings Prepare Text signals a type error if a column of its right argument is neither a vector nor a list of strings. q)t:([]Actual:1.47 0.03 300;FiscalTag:("FY2022Q2";"FY2022Q2";enlist"FY2022H1")) q)t Actual FiscalTag ------------------ 1.47 "FY2022Q2" 0.03 "FY2022Q2" 300 ,"FY2022H1" q)csv 0:t 'type [0] csv 0:t ^ You cannot diagnose this condition with meta , which examines only the first row of its argument q)meta t c | t f a ---------| ----- Actual | f FiscalTag| C but type each is your friend. q)cols[t] where 1<(count distinct type each)each t cols t ,`FiscalTag Q for Mortals §11.4.3 Preparing Text Save Text¶ Write a list of strings to file filesymbol 0: strings 0:[filesymbol;strings] Where filesymbol is a file symbolstrings a list of character strings strings are saved as lines in the file. The result of Prepare Text can be used as strings . q)`:test.txt 0: enlist "text to save" `:test.txt q)`:status.txt 0: string system "w" `:status.txt If filesymbol - does not exist, it is created, with any missing containing directories - exists, it is overwritten Load CSV¶ Interpret a field-delimited string, list of strings, or file as a list or matrix (types;delimiter ) 0: y 0:[(types;delimiter);y] (types;delimiter;flag) 0: y 0:[(types;delimiter;flag);y] Where y is a file descriptor, string, or a list of stringstypes is a string of column type codes in upper casedelimiter is a char atom or 1-item listflag (optional, default0 , since V3.4) is a long atom indicating whether line-returns may be embedded in strings:0 or1 returns a vector, matrix, or table interpreted from the content of y . With column names¶ If delimiter is enlisted, the first row of the content of y is read as column names and the result is a table; otherwise the result is a list of values for each column. /load 2 columns from space-delimited file with header q)t:("SS";enlist" ")0:`:/tmp/txt Use optional argument flag to allow line returns embedded within strings. 
q)("I*";",";1)0:("0,\"ab\nc\"";"1,\"def\"") 0 1 "ab\nc" "def" Where y is a string and delimiter an atom, returns a single list of the data split and parsed accordingly. q)("DT";",")0:"20130315,185540686" 2013.03.15 18:55:40.686 Without column names¶ If the CSV file contains data but no column names: 0,hea,481 10,dfi,579 20,oil,77 We can read the columns: q)("ISI";",") 0:`data.csv 0 10 20 hea dfi oil 481 579 77 Create a column dictionary and flip it: table: flip `a`b`c!("ISI";",") 0:`data.csv Column names must not be the null symbol ` Multithreaded Load¶ CSV load (excluding embedded line return mode) can use multiple threads when kdb+ is running in multithreaded mode. q)v:` sv 10000000#","0:10 10#til 100 q)system"s 10";(10#"J";",")0:v Since 4.1t 2021.09.28. Load Fixed¶ Interpret a fixed-format list of strings or file as a list or matrix (types; widths) 0: y 0:[(types;widths);y] Where y is a file descriptor or a list of stringstypes is a list of column types in upper casewidths is an int vector of field widths returns a vector or matrix interpreted from the content of y . q)sum("DT";8 9)0:enlist"20130315185540686" ,2013.03.15D18:55:40.686000000 q)("DT";8 9)0:("20130315185540686";"20130315185540686") 2013.03.15 2013.03.15 18:55:40.686 18:55:40.686 q)dates:("Tue, 04 Jun 2013 07:00:13 +0900";"Tue, 04 Jun 2013 07:00:13 -0500") q)sum(" Z T";5 20 1 5)0:dates 2013.06.04T16:00:13.000 2013.06.04T02:00:13.000 Load Fixed expects either a \n after every record, or none at all. /reads a text file containing fixed-length records q)t:("IFC D";4 8 10 6 4) 0: `:/q/Fixed.txt Tips for Load CSV and Load Fixed - To load a field as a nested character column or list rather than symbol use "*" as the identifier - To omit a field from the load use " " . Multithreaded Load¶ Fixed width load can use multiple threads when kdb+ is running in multithreaded mode Since 4.1t 2021.09.28. Key-Value Pairs¶ Interpret a delimited string as key-value pairs x 0: string 0:[x;string] Where x is a 3- or 4-char string: key-type field-separator [asterisk] record-separator and key-type is S for symbol, I for integer, or J for long, returns a 2-row matrix of the keys and values. q)"S=;"0:"one=1;two=2;three=3" one two three ,"1" ,"2" ,"3" q)"S:/"0:"one:1/two:2/three:3" one two three ,"1" ,"2" ,"3" q)"I=;"0:"1=first;2=second;3=third" 1 2 3 "first" "second" "third" q)s:"8=FIX.4.2\0019=339\00135=D\00134=100322\00149=JM_TEST1\00152=20130425-06:46:46.387" q)(!/)"I=\001"0:s 8 | "FIX.4.2" 9 | "339" 35| ,"D" 34| "100322" 49| "JM_TEST1" 52| "20130425-06:46:46.387" The inclusion of an asterisk as the third character allows the delimiter character to appear harmlessly in quoted strings. (Since V3.5.) q)0N!"I=*,"0:"5=\"hello,world\",6=1"; (5 6i;("hello,world";,"1")) q)0N!"J=*,"0:"5=\"hello,world\",6=1"; (5 6;("hello,world";,"1")) q)0N!"S=*,"0:"a=\"hello,world\",b=1"; (`a`b;("hello,world";,"1")) Q for Mortals §11.5.3 Key-Value Records Column types and formats¶ B boolean /[01tfyn]/i G guid /[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}/i X byte H short [0-9][0-9] I int J long E real F float C char S symbol P timestamp date?timespan M month [yy]yy[?]mm D date [yy]yy[?]mm[?]dd or [m]m/[d]d/[yy]yy Z datetime date?time N timespan hh[:]mm[:]ss[[.]ddddddddd] U minute hh[:]mm V second hh[:]mm[:]ss T time hh[:]mm[:]ss[[.]ddd] (blank) skip * literal chars .j namespace for JSON Datatypes, File system Q for Mortals §11.4.1 Reading and Writing Text Files
String programs¶ From GeeksforGeeks Python Programming Examples Follow links to the originals for more details on the problem and Python solutions. Is string a palindrome?¶ A string is a palindrome if it matches its reversal. def isPal(str): r = range(0, len(str)) lst = [str[i] for i in r] tsl = [str[-(i+1)] for i in r] return lst == tsl >>> isPal("malayalam") True q){x~reverse x} "malayalam" 1b Sort characters¶ >>> ''.join(sorted('bbccdefbbaa')) 'aabbbbccdef' q)asc "bbccdefbbaa" `s#"aabbbbccdef" Reverse words in a string¶ >>> s = "i like this program very much" >>> words = s.split(' ') >>> " ".join([words[-(i+1)] for i in range(0, len(words))]) 'much very program this like i' q)s: "i like this program very much" q)" " sv reverse " " vs s "much very program this like i" Q keywords vs and sv split and join strings. Remove i th character from string¶ >>> s = 'GeeksforGeeks' >>> "".join([s[i] for i in range(0, len(s)) if i!=2]) 'GeksforGeeks' In q, til count x returns all the indexes of list x . q)s:"GeeksforGeeks" q)s (til count s) except 2 "GeksforGeeks" Is string a substring of another?¶ >>> s = "geeks for geeks" >>> s.find('geek')!= -1 True >>> s.find('goon')!= -1 False In q, the like keyword provides basic pattern matching. q)s:"geeks for geeks" q)s like "*geek*" 1b q)s like "*goon*" 0b Even-length words in a string¶ >>> s = "This is a python language" >>> [wrd for wrd in s.split(" ") if 0 == len(wrd) % 2] ['This', 'is', 'python', 'language'] q)s: "This is a python language" q){x where 0=(count each x)mod 2} " " vs s "This" "is" "python" "language" " " vs splits the string into a list of words. In the lambda, count each x is a vector of their lengths. String contains all the vowels?¶ >>> s = 'geeks for geeks' >>> all(["aeiou"[i] in s.lower() for i in range(0,5)]) False q)s: "geeksforgeeks" q)all "aeiou" in lower s 0b "aeiou" in returns a list of flags, which all aggregates. Count matching characters in two strings¶ >>> str1 = 'aabcddekll12@' >>> str2 = 'bb22ll@55k' >>> len(set(str1) & set(str2)) 5 q)str1: "aabcddekll12@" q)str2: "bb22ll@55k" q)count distinct str1 inter str2 5 In Python set() discards duplicate characters from each string. The q inter keyword is list intersection, not set intersection; distinct discards any duplicates. Remove duplicates from a string¶ >>> "".join(set("geeksforgeeks")) 'krgefso' Q is a vector language. It has a keyword for this. q)distinct "geeksforgeeks" "geksfor" The q keyword preserves order. The Python solution can be adapted to do the same. 
>>> from collections import OrderedDict >>> "".join(OrderedDict.fromkeys("geeksforgeeks")) 'geksfor' String contains special characters?¶ >>> sc = '[@_!#$%^&*()<>?/\\|}{~:]' >>> any([c in sc for c in set("Geeks$for$Geeks")]) True >>> any([c in sc for c in set("Geeks for Geeks")]) False q)sc:"[@_!#$%^&*()<>?/\\|}{~:]" / special characters q)any sc in "Geeks$For$Geeks" 1b q)any sc in "Geeks For Geeks" 0b Random strings until a given string is generated¶ import string import random import time possibleCharacters = string.ascii_lowercase + string.digits + \ string.ascii_uppercase + ' ., !?;:' # string to be generated t = "geek" attemptThis = ''.join(random.choice(possibleCharacters) for i in range(len(t))) attemptNext = '' completed = False iteration = 0 while completed == False: print(attemptThis) attemptNext = '' completed = True # Fix the index if matches with # the strings to be generated for i in range(len(t)): if attemptThis[i] != t[i]: completed = False attemptNext += random.choice(possibleCharacters) else: attemptNext += t[i] iteration += 1 attemptThis = attemptNext time.sleep(0.1) print("Target matched after " + str(iteration) + " iterations") q)show pc:.Q.a,.Q.A,"0123456789 ., !?;:" / possible characters "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 ., !?;:" q)tryfor:{i:where x<>y; @[y; i; :; count[i]?pc]} q)s: tryfor["geek";] scan " " q)count s / number of iterations 174 The binary q function tryfor first finds where string x varies from y , then replaces those letters with random picks from pc . Projecting tryfor onto "geek" yields a unary function. q)tryfor["geek";] " e k" "veIk" Keyword scan applies tryfor["geek";] successively until the result stops changing, i.e. it finds a match. The initial state is a string of blanks. scan returns the result of every iteration. q)10 -10#\:s / first and last 10 results " " "Czks" "T3cC" "d,s " "8pyi" ";DDG" "DVmh" "8Xoc" "JI!q" "q.NC" "geeZ" "geef" "geeo" "gee9" "gee3" "geen" "gee," "geeR" "geeu" "geeN" Split and join a string¶ >>> '-'.join("Geeks for Geeks".split(' ')) 'Geeks-for-Geeks' q)"-"sv " " vs "Geeks for Geeks" "Geeks-for-Geeks" String is a binary?¶ >>> all([c in "01" for c in '01010101010']) True q)all "01010101010" in "01" 1b Uncommon words from two strings¶ import collections # words that occur once only in string s def singles(s): c = collections.Counter(s.split(' ')) return[e[0] for e in c.items() if e[1]==1] def uncommonWords(s1, s2): sw1 = singles(s1) sw2 = singles(s2) return [w for w in sw1+sw2 if not w in set(sw1)&set(sw2)] >>> s1 = 'Greek for geeks' >>> s2 = 'Learning from the the geeks' >>> uncommonWords(s1, s2) ['Greek', 'for', 'Learning', 'from'] singles:{ c:count each group`$" "vs x; key[c] where value[c]=1 } uncommonWords:{ sx:singles x; sy:singles y; (sx,sy) except sx inter sy } q)s1:"Greek for geeks" q)s2:"Learning from the the geeks" q)uncommonWords[s1;s2] `Greek`for`Learning`from Permute a string¶ from itertools import permutations >>> [''.join(w) for w in permutations('ABC')] ['ABC', 'ACB', 'BAC', 'BCA', 'CAB', 'CBA'] p:{$[x=2;(0 1;1 0);raze(til x)(rotate')\:(x-1),'.z.s x-1]} permutations:{x p count x} q)permutations "ABC" "CAB" "CBA" "ABC" "BAC" "BCA" "ACB" Function p defines permutations of order \(N\) recursively. (.z.s allows a lambda to refer to itself.) permutations uses them to index its argument. 
q)p 2 0 1 1 0 q)p 3 2 0 1 2 1 0 0 1 2 1 0 2 1 2 0 0 2 1 Reading Room: It’s more fun to permute for a non-recursive algorithm URLs from string¶ import re def findUrls(string): rx = 'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\), ]|(?:%[0-9a-fA-F][0-9a-fA-F]))+' return re.findall(rx, string) >>> s = 'My Profile: https://auth.geeksforgeeks.org/user/Chinmoy%20Lenka/articles in the portal of http://www.geeksforgeeks.org/' >>> findUrls(s) ['https://auth.geeksforgeeks.org/user/Chinmoy%20Lenka/articles in the portal of http://www.geeksforgeeks.org/'] findUrls:{ begins:{y~count[y]#x}; / x begins with y? c:(x ss "http")_x; / candidate substrings c:c where any c begins\:/:("http://";"https://"); / continue as URLs? {(x?" ")#x}each c } / take each up to space q)s: "My Profile: https://auth.geeksforgeeks.org/user/Chinmoy%20Lenka/articles in the portal of http://www.geeksforgeeks.org/" q)findUrls s "https://auth.geeksforgeeks.org/user/Chinmoy%20Lenka/articles" "http://www.geeksforgeeks.org/" The published Python solution relies on a non-trivial RegEx to match any URL. (It also fails.) The q solution looks for substring "http" and tests candidates to see whether they begin URLs. Combining the iterators Each Left and Each Right \:/: allows us to test each of the candidate substrings in c against both http:// and https:// . Rotate a string¶ def rotate(s,n): return ''.join([s[(i+n)%len(s)] for i in range(0, len(s))]) >>> rotate("GeeksforGeeks",2) 'eksforGeeksGe' >>> rotate("GeeksforGeeks",-2) 'ksGeeksforGee' q)2 rotate "GeeksforGeeks" "eksforGeeksGe" q)-2 rotate "GeeksforGeeks" "ksGeeksforGee" Empty string by recursive deletion?¶ def checkEmpty(string, sub): if len(string)== 0: return False while (len(string) != 0): index = string.find(sub) if (index ==(-1)): return False string = string[0:index] + string[index + len(sub):] return True >>> checkEmpty("GEEGEEKSSGEK", "GEEKS") False >>> checkEmpty("GEEGEEKSKS", "GEEKS") True q)0=count ssr[;"GEEKS";""] over "GEEGEEKSSGEK" 0b q)0=count ssr[;"GEEKS";""] over "GEEGEEKSKS" 1b Projected onto two arguments (as ssr[;"GEEKS";""] ) ternary string-replacement keyword ssr is a unary function. The over keyword applies a unary function successively until the result stops changing. It remains only to count the characters in the result of the last iteration. If over gives an unexpected result, the matter is usually clarified by replacing it with scan , which performs the same computation but returns the result of each iteration. q)ssr[;"GEEKS";""] scan "GEEGEEKSSGEK" "GEEGEEKSSGEK" "GEESGEK" q)ssr[;"GEEKS";""] scan "GEEGEEKSKS" "GEEGEEKSKS" "GEEKS" "" Scrape and find ordered words¶ >>> url = "http://wiki.puzzlers.org/pub/wordlists/unixdict.txt" >>> words = requests.get(url).content.decode("utf-8").split() >>> len(words) 25104 >>> lists = [list(w.lower() for w in words] >>> ow = [words[i] for i in range(0, len(words)) if lists[i] == sorted(lists[i])] >>> len(ow) 422 q)url: "http://wiki.puzzlers.org/pub/wordlists/unixdict.txt" q)words:system"curl ",url q)count words 25104 q)ow:words where{x~asc x}each lower words q)count ow 422 A q string is just a list of characters and can be sorted as it is. Find possible words¶ >>> d = ["go", "bat", "me", "eat", "goal", "boy", "run"] >>> tray = list("eobamgl") >>> [w for w in d if all([c in tray for c in w])] ['go', 'me', 'goal'] q)d:string`go`bat`me`eat`goal`boy`run / dictionary q)tray: "eobamgl" q)d where all each d in\:tray "go" "me" "goal"
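To see how that last filter works, inspect the intermediate results: in\: (Each Left) tests every letter of each word against the tray, and all each reduces each boolean vector to a single flag.
q)d in\:tray
11b
110b
11b
110b
1111b
110b
000b
q)all each d in\:tray
1010100b
q)d where all each d in\:tray
"go"
"me"
"goal"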
New features in the 2.4 release of kdb+¶ Detailed change list / release notes Commercially licensed users may obtain the detailed change list / release notes from http://kxdownloads.com General info¶ This release builds on the existing V2.3. The non upward-compatible changes (NUC) are the change to .z.ph (which only impacts people who've written their own HTTP handler), the additional .z.pc close event when closing stdin, and the removal of the hopen shortcut of just hopen`:port . All other code should run without change. Multi-threaded input¶ Although 2.3 used multithreading to farm out queries to multiple partitions using the number of threads set via the -s parameter, the input queue was still single-threaded. In 2.4 it is possible to multithread that queue as well. Starting a kdb+ task with a positive -p parameter like before preserves the single-threaded behavior. q .. -p 5001 / single-threaded input queue as before if, however a negative port number is given the input queue is multithreaded q .. -p -5001 / multithreaded input queue The number of secondary processes set with -s are still dedicated to running queries over the partitions in parallel. However the number of threads used to handle the input queue is not user-settable. For now, each query gets an own thread – later it is probably going to change to a pool of threads, one per CPU. hopen ¶ The shortcut syntax of hopen`:port has been removed, with 2.4 the host section must also be provided. The hostname can be elided, but the colon must be there. hopen`:localhost:5001 / ok hopen`::5001 /ok hopen`:5001 / NOT ok, it will have opened a file handle to file "5001"! The alternative shortcut of supplying solely the (integer) portnumber is still valid. hopen 5001 / open port 5001 on local machine .z.pw ¶ z.pw adds an extra callback when a connection is being opened to a kdb+ session. Previously userid and password were checked if a userid:password file had been specified with the -u or -U parameter. The checking happened outside of the user's session in the q executable. Only if that check was passed did the connect get created and a handle passed into the "user space" as a parameter to .z.po . Now after the -u or -U check has been done (if specified on the command line) the userid and password are passed to .z.pw allowing a function to be run to perform custom validation – for example to check against an LDAP server. If .z.pw returns a 1b the login can proceed and the next stop will be .z.po , if it returns a 0b the user attempting to open the connection will get an access error. .z.pi ¶ .z.pi has been extended to handle console input as well as remote client input (like with qcon). This allows validating console input, or replacing the default display (.Q.s value x ) with a home-knitted version. to get the old (2.3 and earlier) display reset .z.pi .z.pi:{0N!value x;} to return to the default display execute \x .z.pi .z.pc ¶ In addition to the close events previously handled .z.pc is now also called when stdin is "closed" by redirecting it to /dev/null . In this case the handle provided to .z.pc will be 0 (the current console) \x ¶ By default, callbacks like .z.po are not defined in the session, which makes it awkward to revert to the default behavior after modifying them – for example when debugging or tracing. \x allows deleting their definitions to force default behavior. .Q .q visibility¶ In 2.3 the display of the definitions of functions in .q and .Q was suppressed. People complained… so the display is back. 
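Returning to the .z.pw callback described above: a minimal sketch of a custom validator (the allow-list and password are purely illustrative; a real handler might query an LDAP server instead):
q).z.pw:{[user;pswd](user in `alice`bob)and pswd~"letmein"} / user is a symbol, pswd a string; return 1b to admit, 0b to reject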
.z.ts delay¶ Before 2.4 the timer event would next happen n milliseconds after completion of the execution of the timer's code. In 2.4 the event fires every n milliseconds. .z.i ¶ Process PID .z.t , .z.d ¶ Shorthand for `time$.z.z , similarly .z.d <–> `date$.z.z , .z.T <–> `time$.z.Z , .z.D <–> `date$.z.Z 1066 and all that¶ The valid year range has been expanded to 1710-2290 \b , \B and .z.b ¶ Extended support for tracking dependencies. \b lists all dependencies (views) \B lists all dependencies that have been invalidated .z.b lists dependencies and the values they depend on q)a 11 q)b 12 13 14 q)a:99 q)b 100 101 102 q)\b ,`b q)\B `symbol$() q)a:22 q)\B ,`b q)b 23 24 25 q)\B `symbol$() q).z.b a| b q) -11! file¶ -11! (used to replay logfiles) has been made more robust when dealing with corrupt files. Previously it would have crashed and exited on encountering corrupt records, now it will truncate the file to remove the invalid records and only process valid items. Rather than the individual cells being executed directly .z.ps is called on each allowing easier customization of log loaders. inetd ¶ Run a kdb+ server under inetd , see Knowledge Base: inetd/xinetd -b ¶ Block client access to a kdb+ database. The ability to write directly to disk allows an incorrectly written query to create havoc with a database. If the kdb+ server is started with the -b flag, all client writes are disabled. Note: this functionality was incorrectly assigned to the -z startup parameter for a while during the 2.4 test cycle -q ¶ Start up kdb+ in quiet mode – don't display the startup banner and license information. Makes it easier to process the output from a q session if redirecting it as part of a larger application \1 filename & \2 filename¶ \1 and \2 allow redirecting stdin and stdout to files from within the q session. The filename and any intermediate directories are created if needed clients \a \f \w \b ¶ When the -u or -U client restrictions are in place the list of valid commands for a client has been reduced to \a , \f , \w and \b. Maintain group attribute for append and update¶ The `g# attribute is maintained in places where in 2.3 it would have been cleared. n:100000 / this has .5% change of sym x:+(n?100*n;n?`3;n?100) /id sym size t:([o:`u#!0]s:`g#0#`;z:0) \t .[`t;();,;]'x `s# validated¶ Before 2.4 the data being "decorated" with the `s# flag was not validated – so it was possible to flag as sorted a list that actually wasn't – causing problems when using primitives that depend on order like bin . Now an error will be signalled if data is incorrectly flagged. q)`s#1 2 3 `s#1 2 3 q)`s#1 2 3 3 `s#1 2 3 3 q)`s#1 2 3 3 2 'fail q) In the case of tables it just checks the first column. Month, year and int partition columns all virtual¶ Before 2.4 only the date column in a database partitioned by date was virtual. With 2.4 the year, month or int columns in databases partitioned by year, month or int respectively are treated the same way Repeated partitions¶ With 2.4 it is possible to have the same named partition on multiple partitions (as defined in par.txt ) allowing, for example, splitting symbols by A-M on drive0, N-Z on drive1. Aggregations are handled correctly. Skip leading #! ¶ When loading a script file that starts with #! , the first line is skipped. 
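Relating to the -11! change above, a minimal sketch of replaying a logfile; the file name and handler are assumptions, and the log must have been written by a process that logged calls to the functions it contains:
q)upd:{[t;x]-1"replaying ",string t;} / define whatever functions the log records
q)-11!`:sym.log / replay valid records; each one is passed through .z.ps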
\c automated¶ On OSs that support it the values passed to \c (console width and height) are taken from the system variables $LINES and $COLUMNS No limit on stdin¶ Before 2.4 there was a limit of 999 characters on stdin/console – with 2.4 there is no limit mlim and glim increased to 1024¶ The maximum number of simultaneously mapped nested columns (mlim) and the maximum number of active `g# indices (glim) have been increased to 1024. mcount and friends null-aware¶ Windowed functions like mcount , mavg have been made consistently aware of null values q)3 mcount 1 1 1 1 1 1 1 2 3 3 3 3 q)3 mcount 1 1 0N 1 1 1 1 2 2 2 2 3 .z.ph and .z.pp changed¶ Previously kdb+ stripped the header information from incoming HTTP requests before passing it on to the user via .z.ph or .z.pp . Unfortunately that meant useful information like the client's browser or preferred languages was being discarded. With 2.4 the header information is passed back as a dictionary in addition to the body text as before. This is a change in behavior, but will only affect those who have customized .z.ph or .z.pp directly. The previous value is now the first item of a 2-item list, the new header dictionary is the second item. Scalars¶ General speedups to the code to handle scalar values abs is primitive¶ \r old new¶ unix mv (rename) splay upsert ¶ With 2.4, the upsert function appends splayed tables on disk. Note that keyed tables cannot be splayed, so splayed upsert is the same as insert . q)`:x/ upsert ([]a:til 3) `:x/ q)count get`:x 3 q)`:x/ upsert ([]a:til 3) `:x/ q)count get`:x 6 Changes in 2.5¶ Below is a summary of changes from V2.4. Commercially licensed users may obtain the detailed change list / release notes from (http://downloads.kx.com) Production release date¶ 2008.12.15 .z.W ¶ New feature in 2.5 2008.12.31 .z.W returns a dictionary of IPC handles with the number of bytes waiting in their output queues. e.g. q)h:hopen ... q)h 4 q)(neg h)({};til 1000000);.z.W 4| 4000030 q).z.W 4| 0 Changes to casting¶ `$"" is a single `$() is a list (in 2.4 it was a single) e.g. in 2.4 q)`something ^ `$.z.x 0 `something in 2.5 q)`something ^ `$.z.x 0 `symbol$() overcome change with e.g. (`$.z.x)0 Changes in 2.6¶ Below is a summary of changes from V2.5. Commercially licensed users may obtain the detailed change list / release notes from (http://downloads.kx.com) Production release date¶ 2009.09.15 IPC Compression¶ Compresses data over TCP/IP connections if - uncompressed serialized data has a length greater than 2000 bytes - connection is not localhost - size of compressed data is less than half size of uncompressed data - both parties are version 2.6 or above timestamp – new type¶ - timestamp, nanosecond precision, type -12h , system datetime as.z.p or.z.P , - cast with `timestamp$… , parse with"P"$ - range 1709.01.01 to 2290.12.31 - Timestamp is blocked (signalled via 'type ) from sending to versions of kdb+ <2.6 timespan – new type¶ - timespan, nanosecond precision, type -16h , system time as.z.n or.z.N - cast with `timespan$… parse with"N"$… - range -106751D23:47:16.854775806 to106751D23:47:16.854775806 - Timestamp is blocked (signalled via 'type ) from sending to versions of kdb+ <2.6 Binding to a specific network address¶ Binds to a specific address when listening for IPC connections (can be useful on a multihomed host) e.g. \p 127.0.0.1:5000 Named Service Support¶ Allows symbolic lookups to map service names to ports e.g. 
$grep kdb /etc/services kdb 5000/tcp # … q)\p 127.0.0.1:kdb q)\p 5000 Can start q process as q -p localhost:kdb q -p 127.0.0.1:kdb q -p :kdb q -p :5000 q -p 5000 q -p kdb and all those variations again as q)\p ... also when opening connections q)h:hopen `:localhost:kdb / error reporting - attempt to use a non-existent entry from /etc/service q -p 127.0.0.1:kdbx 'kdbx q)\p kdbx 'kdbx .z.W ¶ .z.W changed to return a dictionary of IPC handles with int vectors showing the size in bytes of each message waiting in their output queues. The previous version's behavior can be obtained with sum each .z.W e.g. q)h:hopen ... q)h 4 q)do[3;(neg h)({};til 1000000)];.z.W 4| 4000030 4000030 4000030 q).z.W 4| 0 .z.ts ¶ .z.ts is now passed a timestamp instead of a datetime e.g. q).z.ts:{0N!x} q)\t 1000 2009.09.14D10:16:22.204635000 xasc /xdesc ¶ sort only if not already sorted. Allows () xasc t .Q.w ¶ .Q.w[] – prints dict of memory stats (\w and \w 0 ) for easy reading q).Q.w[] used| 4323152 heap| 67108864 peak| 335544320 wmax| 0 mmap| 56 syms| 534 symw| 28915 -17!x ¶ Allows a kdb+ format file generated by a non-native architecture to be read in full. Intended for migrating data between different endian cpu architectures (e.g. Sparc to x86). Should not be used as method of "sharing" the same file between systems as it is relatively inefficient. It does not map data, it reads the file in full. SIGTERM¶ Reception of SIGTERM signal has the same effect as exit 0 at the command line, i.e. processed when current task has completed. Graceful shutdown can be obtained through hooking .z.exit . \w ¶ \w now reports (M0 sum of allocs from M1;M1 mapped anon;M2 peak of M1;M3 workspace limit;M4 mapped files). e.g. q)\w 122560 67108864 67108864 0 0 q)a:til 100000000 q)\w 536998128 603979776 603979776 0 0 q)delete a from `. `. q)\w 127184 67108864 603979776 0 0 q)`:.test set til 10000 `:test q)a:get`:.test q)\w 127712 67108864 603979776 0 40016 q prompt¶ q prompt shows namespace e.g. q)\d .test q.test)\d .util q.util)\d . q) Partitioned table schemas¶ are derived from the most recent (last) partition. This is in contrast to previous releases which used the oldest (first) partition to get a list of tables and their schemas. \p [-]0W ¶ listens on next available port select from `table ¶ as from 2.6 release 2009.08.20 allow select from `t (and exec from `t ) (like meta) just to help users avoid having to use get
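Following the SIGTERM note above, graceful shutdown can be arranged by hooking .z.exit. A minimal sketch (the file written here is only an example):
q).z.exit:{[code]`:lastexit set .z.P;-1"exiting, code ",string code;} / called with the exit status once the current task completes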
.finos.dep.libPathIn:{[moduleName;lib] if[not moduleName in key .finos.dep.list; '"module not registered: ",moduleName]; path:.finos.dep.joinPath(.finos.dep.list[moduleName;`libPath];lib); pathFull:path,$[.z.o like "w*";".dll";".so"]; if[not {x~key x}`$":",pathFull; '"library not found: ",pathFull]; path}; .finos.dep.libPath:{[lib] if[()~.finos.dep.currentModule; '".finos.dep.libPath must be used in module.q"]; .finos.dep.libPathIn[.finos.dep.currentModule;lib]}; .finos.dep.loadFuncIn:{[moduleName;lib;funcName;arity] if[not moduleName in key .finos.dep.list; '"module not registered: ",moduleName]; libPath:`$":",.finos.dep.libPathIn[moduleName;lib]; libPath 2:(funcName;arity)}; .finos.dep.loadFunc:{[lib;funcName;arity] if[()~.finos.dep.currentModule; '".finos.dep.loadFunc must be used in module.q"]; .finos.dep.loadFuncIn[.finos.dep.currentModule;lib;funcName;arity]}; .finos.dep.loadDependencies:{[projectFile] if[-11h=type projectFile; projectFile:"\n"sv read0 projectFile]; projectFileK:.j.k projectFile; if[not `dependencies in key projectFileK; :(::)]; .finos.dep.loadFromRecord each projectFileK`dependencies; }; if[()~key `.finos.dep.resolvers; .finos.dep.resolvers:(`$())!()]; .finos.dep.resolvers[`]:{`projectRoot`scriptPath`libPath#x}; .finos.dep.loadFromRecord:{[rec] if[not `name in key rec; '"name missing from record"]; override:0b; prevOverride:0b; if[`override in key rec; if[rec`override; override:1b]]; if[rec[`name] in exec moduleName from .finos.dep.list; prevOverride:.finos.dep.list[rec`name;`isOverride]; ]; if[override or not prevOverride; resolver:$[`resolver in key rec; rec`resolver; `]; if[10h=type resolver; resolver:`$resolver]; if[not resolver in key .finos.dep.resolvers; '"unregistered resolver: ",.Q.s1 resolver]; params:.finos.dep.resolvers[resolver][rec]; .finos.dep.priv.regModule[rec`name;rec`version;params`projectRoot;params`scriptPath;params`libPath;override]; ]; if[override; :(::)]; if[`lazy in key rec;if[rec`lazy; if[`scripts in key rec; '"lazy modules cannot have scripts specified"]; :(::); ]]; .finos.dep.loadModule rec`name; if[`scripts in key rec; .ms.depends.loadScript[rec`name] each rec`scripts]; }; .finos.dep.registerUnload:{[name;unload] if[not name in exec moduleName from .finos.dep.list; '"module not registered: ",name]; .finos.dep.list[name;`unloads]:.finos.dep.list[name;`unloads],unload; }; .finos.dep.unloadErrorHandler:{[name;err] -2"error while unloading module ",name,": ",err; }; .finos.dep.priv.unload:{[name] {[name;handler]@[handler;::;.finos.dep.unloadErrorHandler[name]]}[name] each .finos.dep.list[name;`unloads]; }; .finos.dep.priv.unloadAll:{ .finos.dep.priv.unload each exec moduleName from .finos.dep.list; }; .z.exit:{[handler;x] .finos.dep.priv.unloadAll[]; handler[x]}$[()~key `.z.exit; (::); .z.exit]; ================================================================================ FILE: kdb_q_dep_include.q SIZE: 5,234 characters ================================================================================ .finos.dep.simplifyPath:{[path] path0:path; path:ssr[path;"/";.finos.dep.pathSeparator]; path:.finos.dep.pathSeparator vs "",path; //ensure it's a string path:path where not(enlist ".")~/:path; //remove "." elements c:1+$[.z.o like "w*";path0 like "\\\\*";0]; path:(c#path),(c _path) except enlist""; //remove blank elements (e.g. from dir//file) path:{ //iteratively remove "dir/.." parts from path if[not ".." 
in x;:x]; pos:(first where x~\:".."); pref:(pos-1)#\:x; suf:(pos+1)_\:x; :pref,suf; }/[path]; path:.finos.dep.pathSeparator sv path; path}; //these should be in util .finos.util.trp:{[fun;params;errorHandler] -105!(fun;params;errorHandler)}; .finos.util.try2:{[fun;params;errorHandler] .finos.util.trp[fun;params;{[errorHandler;e;t] -2"Error: ",e," Backtrace:\n",.Q.sbt t; errorHandler[e]}[errorHandler]]}; if[()~key `.finos.dep.logfn; .finos.dep.logfn:-1]; .finos.dep.errorlogfn:-2; .finos.dep.safeevalfn:.finos.util.try2; .finos.dep.isAbsolute:$[.z.o like "w*"; {any x like/:("?:*";"\\*";"/*")}; {x like "/*"}]; .finos.dep.startupDir:system"cd"; .finos.dep.loaded^:(enlist .finos.dep.simplifyPath$[.finos.dep.isAbsolute .z.f;string .z.f;(system "cd"),.finos.dep.pathSeparator,string .z.f])!enlist 1b; .finos.dep.includeStack:1#key .finos.dep.loaded; .finos.dep.includeDeps:([]depFrom:();depTo:()); .finos.dep.priv.callTree:([]callFrom:(); callTo:()); //like includeDeps but only actually called files are put here .finos.dep.priv.stat:([file:()]elapsedTime:`timespan$()); .finos.dep.resolvePathTo:{[dir;file] .finos.dep.simplifyPath$[.finos.dep.isAbsolute file;file;.finos.dep.joinPath(dir;file)]}; .finos.dep.resolvePath:{[file] currFile:.finos.dep.currentFile[]; path:$[(::)~currFile; system"cd"; .finos.dep.cutPath[currFile][0]]; .finos.dep.resolvePathTo[path;file]}; //set to false to see where the loaded scripts break //however this will corrupt the include stack, so don't use include again after an error .finos.dep.handleErrors:1b; if[0<count getenv`FINOS_DEPENDS_DEBUG; .finos.dep.handleErrors:0b]; .finos.dep.priv.errorHandler:{[path;x] .finos.dep.loaded[path]:0b; .finos.dep.includeStack:(count[.finos.dep.includeStack]-1)#.finos.dep.includeStack; .finos.dep.errorlogfn["Error while loading ",path,": ",x]; 'x}; /// // Include the specified file. Path is relative to the initial script (.z.f) or the current file if it's being included. Files won't be included more than once. // For relative paths, this function should be used only outside of functions, and only in top-level scripts or in scripts included using this function. // On 3.6 and below, it should not be used from scripts loaded directly using \l or wrappers around it. 
// @param force If true, load this file even if already loaded // @param file File to include .finos.dep.includeEx:{[force;file] if[0=count file; '"include: empty path"]; path:.finos.dep.resolvePath file; $[()~kp:key hsym`$path; {'`$x}path," doesn't exist"; 11h=type kp; {'`$x}path," is a directory"; () ]; `.finos.dep.includeDeps insert (`depFrom`depTo!(last .finos.dep.includeStack;path)); if[path in .finos.dep.includeStack; .finos.dep.errorlogfn["Circular include:\n","\n-> "sv (.finos.dep.includeStack?path)_.finos.dep.includeStack,enlist path]; '"Circular include in ",path; ]; if[force or not .finos.dep.loaded[path]; `.finos.dep.priv.callTree insert (`callFrom`callTo!(last .finos.dep.includeStack;path)); .finos.dep.loaded[path]:1b; .finos.dep.logfn ((count[.finos.dep.includeStack]-1)#" "),"include: loading file ",path; .finos.dep.includeStack:.finos.dep.includeStack,enlist path; if[.z.K<4.0; prevFile:.finos.dep.priv.currentFile; .finos.dep.priv.currentFile:path; //used by .finos.dep.currentFile ]; start:.z.P; $[.finos.dep.handleErrors; .finos.dep.safeevalfn[system;enlist"l ",path;.finos.dep.priv.errorHandler[path]]; system"l ",path ]; end:.z.P; if[.z.K<4.0; .finos.dep.priv.currentFile:prevFile; ]; .finos.dep.priv.stat[([]file:enlist path);`elapsedTime]:end-start; .finos.dep.includeStack:(count[.finos.dep.includeStack]-1)#.finos.dep.includeStack; ]; }; .finos.dep.include:.finos.dep.includeEx[0b;]; .finos.dep.includeFromModule:{ if[10h=type x; x:`$"/"vs x]; s:.finos.dep.scriptPath[x]; $[0<count s;.finos.dep.include s;'".finos.dep.scriptPath returned empty string for ",.Q.s1 x]}; .finos.dep.includedFiles:{asc where .finos.dep.loaded}; /// // Convert the include dependencies into dot format .finos.dep.depsToDot:{ "\n"sv((enlist"digraph G {"),(" ",/:" -> "sv/:(("\"",/:/:flip value flip last each/:.finos.dep.pathSeparator vs/:/:.finos.dep.includeDeps),\:\:"\"")),\:";"),enlist enlist"}"} .finos.dep.getLoadTimeByFile:{enlist[first .finos.dep.includeStack] _ asc (exec file!elapsedTime from .finos.dep.priv.stat)-(exec sum .finos.dep.priv.stat[([]file:callTo);`elapsedTime] by callFrom from .finos.dep.priv.callTree)}; ================================================================================ FILE: kdb_q_finos_init.q SIZE: 2,167 characters ================================================================================ /// // Get name of current file that is being loaded via \l (4.0) or .finos.dep.include (3.6 or earlier) // @return The file path as a string, or the start file (.z.f) if no \l / .finos.dep.include is running .finos.dep.currentFile:$[.z.K>=4.0;{ bt:(.Q.btx .Q.Ll`)[;1;3]; l:first where bt like"\\l *"; $[null l; string .z.f; //this is needed to ensure any includes in the main script work properly //unfortunately this also causes the function to misbehave if called outside of a file load //(::); 3_bt l]}; {.finos.dep.priv.currentFile}]; .finos.dep.pathSeparators:$[.z.o like "w*";"[\\/]";"/"]; .finos.dep.pathSeparator:$[.z.o like "w*";"\\";"/"]; /// // Cut a path into directory and file name // @param path as a string. // @return A list in the form (dir;file). .finos.dep.cutPath:{[path] //Kx says that ` vs is too late to fix for Windows. path:"",path; match:path ss .finos.dep.pathSeparators; $[0<count match; [p:last match; (p#path;(p+1)_path)]; (enlist".";path)]};
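// Hypothetical usage sketch of the include mechanism above (file names are made up): // .finos.dep.include "lib/util.q" // load relative to the current file, at most once // .finos.dep.include "lib/util.q" // second call is a no-op: path already in .finos.dep.loaded // .finos.dep.includedFiles[] // absolute paths loaded so far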
Cheat at Boggle¶ Problem For a given set of Boggle™ dice, throw a board. Identify the words that can be found on it and sort them alphabetically (ascending) within score (descending). Techniques¶ - Download a word list from the Web - Generate random numbers - Project a function onto constant values - Iterate using Each - Iterate using the Converge iterator Solution¶ BOXES:()!() BOXES[`new]:"AAEEGN ELRTTY AOOTTW ABBJOO EHRTVW CIMOTU DISTTY EIOSST DELRVY ACHOPS HIMNQU EEINSU EEGHNW AFFKPS HLNNRZ DEILRX" BOXES[`old]:"AACIOT AHMORS EGKLUY ABILTY ACDEMP EGINTV GILRUW ELPSTU DENOSW ACELRS ABJMOQ EEFHIY EHINPS DKNOTU ADENVZ BIFORX" VERSION:`old DICE:" "vs BOXES VERSION BS:"j"$sqrt count DICE / board size ce:count each URL:"http://wiki.puzzlers.org/pub/wordlists/unixdict.txt" UD:upper system"curl -s ",URL / Unix dictionary BOGGLEDICT:ssr[;"QU";"Q"]each UD where ce[UD]<17 / Q for QU both:{all flip x} nb:{/ neighbors of position pair y in x by x board i:(.[cross] -1 0 1+/:y)except enlist y; i where both i within\:0,x-1 } NB:BS{x sv flip nb[x;y]}'til[BS]cross til BS / map board posns to neighbors throw:{[dice](2#BS)#dice@'count[dice]?6} BOARD:throw DICE try:{[B;BD;state] si:state 0; / strings as indexes of B wf:state 1; / words found ns:raze{x,/:(NB last x)except x} each si; / next strings to try ns:ns where B[ns]in count[first ns]#'BD; / eliminate duds wf:distinct wf,{x where x in y}[B ns;BD]; / append new words found (ns;wf) } solve:{[brd] b:raze brd; bd:BOGGLEDICT where all each BOGGLEDICT in b; / board dictionary s:distinct{x 1}try[b;bd;]/[(enlist each til 16;())]; / solutions s:ssr[;"Q";"QU"]each s; / QU for Q s:{x idesc ce x}asc s; / sort asc alpha within desc size sc:(0 0 0 1 1 2 3 5,9#11)@ce s; / scores ok:where sc>0; / throw the little ones back 0N!"maximum score: ",string sum sc ok; $[`;s ok]!sc ok } Usage¶ q)show board:throw DICE "OMUY" "MTUP" "WSAY" "ITVI" q)solve board "maximum score: 40" MUTATIS| 5 TWIST | 2 ASTM | 1 ATOM | 1 AUTO | 1 MUST | 1 PAST | 1 .. Discussion¶ Boggle boards have been made in different languages and sizes. There are old and new versions of English dice for the 4×4 board. Each die is defined by six letters. We start with a dictionary of Boggle boxes. Then we’ll pick the ‘old’ version. BOXES:()!() BOXES[`new]:"AAEEGN ELRTTY AOOTTW ABBJOO EHRTVW CIMOTU DISTTY EIOSST DELRVY ACHOPS HIMNQU EEINSU EEGHNW AFFKPS HLNNRZ DEILRX" BOXES[`old]:"AACIOT AHMORS EGKLUY ABILTY ACDEMP EGINTV GILRUW ELPSTU DENOSW ACELRS ABJMOQ EEFHIY EHINPS DKNOTU ADENVZ BIFORX" VERSION:`old Boggle sets come in different sizes. From BOXES`old we find we have a 4×4 board. DICE:" "vs BOXES VERSION BS:"j"$sqrt count DICE / board size Now we can throw the dice to get a board. q)throw:{[dice](2#BS)#dice@'count[dice]?6} q)show board:throw DICE "OAKB" "DGLS" "NSOY" "HDZF" To ‘solve’ the board we need to - generate candidate words - recognize English words For the latter we’ll use the Puzzlers.org dictionary. URL:"http://wiki.puzzlers.org/pub/wordlists/unixdict.txt" UD:upper system"curl -s ",URL / Unix dictionary Q and U In English the letter Q is almost always followed by the letter U. On the Boggle board the two letters appear as one – Qu – and are scored as two letters when counting word lengths. BOGGLEDICT:ssr[;"QU";"Q"]each UD where ce[UD]<17 The Boggle dictionary has over 25,104 words. Many have letters not on the board just thrown. q)count bd:BOGGLEDICT where BOGGLEDICT{all x in y}\:raze board / board dictionary 557 Generating candidate words starts with each of the sixteen letters. 
There are a limited number of paths, because letters may not be re-used. So no solution word can have more than sixteen letters. The set of all possible paths is a function of the board size, not its content. We could generate all the possible paths for a 4×4 board, throw a board, and see which paths correspond to words. Here are the 16 board positions as index pairs: q)til[BS]cross til BS 0 0 0 1 0 2 0 3 1 0 1 1 1 2 1 3 2 0 2 1 2 2 2 3 3 0 3 1 3 2 3 3 What are the neighbors of a position pair y in a x by x board ? both:{all flip x} nb:{i:(.[cross] -1 0 1+/:y)except enlist y;i where both i within\:0,x-1} List the neighbors for each of the 16 board positions. q)show NB:BS{x sv flip nb[x;y]}'til[BS]cross til BS 1 4 5 0 2 4 5 6 1 3 5 6 7 2 6 7 0 1 5 8 9 0 1 2 4 6 8 9 10 1 2 3 5 7 9 10 11 2 3 6 10 11 4 5 9 12 13 4 5 6 8 10 12 13 14 5 6 7 9 11 13 14 15 6 7 10 14 15 8 9 13 8 9 10 12 14 9 10 11 13 15 10 11 14 With NB we can extend any word path. Say our path position pairs are (0 0;1 0;0 1) , that is to say, board positions 0 4 1 . Then the possible extensions are NB 1 excluding 0 4 . q){x,/:(NB last x)except x} 0 4 1 0 4 1 2 0 4 1 5 0 4 1 6 All the 2-letter word candidates on the board: q)raze[board] raze{x,/:NB x}each til 16 "OA" "OD" "OG" "AO" "AK" "AD" "AG" .. Start with the 16 board positions, find their neighbors, repeat another 14 times, and we have all the paths through the board. q)count PATHS:15 {raze{x,/:(NB last x)except x} each x}\til 16 16 PATHS 3 lists all the 4-letter paths; PATHS 5 all the 4-letter paths, and so on. Over twelve million in all. q)ce PATHS 16 84 408 1764 6712 22672 68272 183472 436984 905776 1594648 2310264 2644520 .. q)sum ce PATHS 12029640 Few of these paths through the current board correspond to dictionary words. q)raze[board] last PATHS "OAKBLGDNSOSYFZDH" "OAKBLGDNSHDOSYZF" "OAKBLGDNSHDOSYFZ" "OAKBLGDNSHDOZFYS" "OAKBLGDNSHDOFZYS" "OAKBLGDNSHDZOSYF" "OAKBLGDNSHDZOFYS" .. But we take all of them except the 1- and 2-letter words. q)count RP:raze 2_ PATHS / razed paths 12029540 Now we can see which words in the board dictionary can be found along these paths. q)show wf:bd where bd in raze[board] RP / words found "ADO" "AGO" "ALB" "ALSO" "DALY" "DOG" "FOG" .. It remains to restore a U after each Q q)wf:ssr[;"Q";"QU"] each wf and score and sort the results. q){x idesc x`score}([] word:wf; score:(0 0 0 1 1 2 3 5,9#11)@ce wf) word score -------------- "FOLKSY" 3 "LAGOS" 2 "SLOSH" 2 "ADO" 1 "AGO" 1 This is a brutal approach. There is a substantial initial calculation to find the twelve million or so word paths in RP . Then for each board thrown these must be mapped into strings (raze[board] RP ) so the few hundred words in the ‘board dictionary’ can be sought. A smarter approach starts tracing the word paths but at each iteration eliminates strings that cannot become words. For example, a string "AZX" begins no word in the dictionary and need not be pursued. For this we will use the Converge iterator. The iterator applies a unary function to some initial state until the result stops changing. try:{[B;BD;state] si:state 0; / strings as indexes of B wf:state 1; / words found ns:raze{x,/:(NB last x)except x} each si; / next strings to try ns:ns where B[ns]in count[first ns]#'BD; / eliminate duds wf:distinct wf,{x where x in y}[B ns;BD]; / append new words found (ns;wf) } Our ‘pursuit’ function operates upon a 2-item list state . The first item is the strings it is following, defined as indexes into the razed board. The second item is all the words found so far. 
The initial state is all the sixteen 1-letter strings on the board, and an empty list. (enlist each til 16;()) It refers at each iteration to the razed board and the board dictionary. These are transient values, dependent on each throw of the board. So in good functional style we project the function onto these values; within it they are constant for each iteration. Projected, our ternary (three-argument) function becomes a unary function that Converge can apply. And Converge keeps on applying it until it finds no new strings to pursue. Restoring U s to follow Q s, scoring and sorting much as above. solve:{[brd] b:raze brd; bd:BOGGLEDICT where all each BOGGLEDICT in b; / board dictionary s:distinct{x 1}try[b;bd;]/[(enlist each til 16;())]; / solutions s:ssr[;"Q";"QU"]each s; / restore Us to Qs s:{x idesc ce x}asc s; / sort asc alpha within desc size sc:(0 0 0 1 1 2 3 5,9#11)@ce s; / scores ok:where sc>0; / discard little ones 0N!"maximum score: ",string sum sc ok; $[`;s ok]!sc ok } Fizz buzz¶ Fizz buzz is a group word game for children to teach them about division. Players take turns to count incrementally, replacing any number divisible by three with the word fizz, and any number divisible by five with the word buzz. — Wikipedia Fizz buzz is fun for programmers as well as children, and has been implemented in a host of languages. Here is a simple solution in Python for the first hundred numbers. for i in range(1, 101): if i%3 == 0 and i%5 == 0: my_list.append("fizzbuzz") elif i%3 == 0: my_list.append("fizz") elif i%5 == 0: my_list.append("buzz") else: my_list.append(i) Since it constructs its results as an array, it could claim to be an array solution. But it employs a for-loop and an if/then/else construct. We can usually dispense with them in q. Start with a vector of numbers. q)show x:1+til 20 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 We are interested in whether they are divisible by 3 and 5. Those divisible by 3 have 0 as the result of x mod 3 . Similarly by 5. Test both. q)0=x mod/:3 5 00100100100100100100b 00001000010000100001b But we need four results, not two: divisible by neither; by 3; by 5; and by both. q)1 2*0=x mod/:3 5 0 0 1 0 0 1 0 0 1 0 0 1 0 0 1 0 0 1 0 0 0 0 0 0 2 0 0 0 0 2 0 0 0 0 2 0 0 0 0 2 q)sum 1 2*0=x mod/:3 5 0 0 1 0 2 1 0 0 1 2 0 1 0 0 3 0 0 1 0 2 The items of x are mapped to 0 1 2 3 . Let’s construct our result as a symbol vector. q)`$string x `1`2`3`4`5`6`7`8`9`10`11`12`13`14`15`16`17`18`19`20 q)(`$string x;20#`fizz;20#`buzz;20#`fizzbuzz) 1 2 3 4 5 6 7 8 9 .. fizz fizz fizz fizz fizz fizz fizz fizz fizz .. buzz buzz buzz buzz buzz buzz buzz buzz buzz .. fizzbuzz fizzbuzz fizzbuzz fizzbuzz fizzbuzz fizzbuzz fizzbuzz fizzbuzz fizzb.. We just need a way to use 0 1 2 3 to pick from these four vectors. Enter the Case iterator. q)(sum 1 2*0=x mod/:3 5)'[`$string x;20#`fizz;20#`buzz;20#`fizzbuzz] `1`2`fizz`4`buzz`fizz`7`8`fizz`buzz`11`fizz`13`14`fizzbuzz`16`17`fizz`19`buzz Scalar extension means we can use atoms as the last three arguments. q)(sum 1 2*0=x mod/:3 5)'[`$string x;`fizz;`buzz;`fizzbuzz] `1`2`fizz`4`buzz`fizz`7`8`fizz`buzz`11`fizz`13`14`fizzbuzz`16`17`fizz`19`buzz This is a good example of ‘array thinking’. — What is the problem? I can write for-loops in my sleep. — We want you to wake up. Notice how much of the expression simply states the problem. [`$string x;`fizz;`buzz;`fizzbuzz] lists the four possible result options. 0=x mod/:3 5 tests for divisibility by 3 and 5. (sum 1 2* .. )'[ .. ] relates the test results to the final results. 
This is the only ‘programmery’ bit. The other two parts correspond directly to the posed problem. In this way the solution exhibits high semantic density: most terms in the code correspond to terms in the problem domain. On semantic density: “Three Principles of Code Clarity”, Vector 18:4; “Pair Programming with the Users”, Vector 22:1
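Wrapping up, the whole expression packages neatly as a function of n; a minimal sketch (the name fizzbuzz is ours, not part of the original problem):
q)fizzbuzz:{x:1+til x;(sum 1 2*0=x mod/:3 5)'[`$string x;`fizz;`buzz;`fizzbuzz]}
q)fizzbuzz 15
`1`2`fizz`4`buzz`fizz`7`8`fizz`buzz`11`fizz`13`14`fizzbuzz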
// // @desc Returns the raw data representing the profile execution results. For each // profiled line on which execution occurred, the data includes the following: // // - function name // - closed line number // - leading statement characters // - line execution count // - total time consumed on the line (including subcalls) // - time consumed on the line itself // - percentage of time consumed on the line relative to all lines // // Lines not executed are not included in the report. // // @param x {symbol[]} Specifies the names of the functions for which execution // information is to be computed. If the argument is unspecified // or is the empty symbol, data is returned for all profiled // functions. // // @return {table} A table containing the execution report, ordered alphabetically // by function and then by line number. // data:{ x:$[mt x;key PFD;not(&/)b:(x,:())in key PFD;[-2 "Not profiled:",/" ",'string x where not b;x where b];x]; a:x where n:count each w:flip each PFD x; / Function names t:(-\)each(w:raze w)[;2 3]; / Total and own ( = total - subcall) asc update Pct:100*Own%(+/)Own from select from ([]Name:a;Line:(,/)til each n;Stmt:`$ssr[;"\n";" "]each w[;0];Count:w[;1];Total:t[;0];Own:t[;1]) where Count>0 } // // @desc Computes a usage report summarizing the profile execution results. For each // profiled line on which execution occurred, the data includes the following: // // - function name // - closed line number // - leading statement characters // - line execution count // - total time consumed on the line (including subcalls), in MM:SS.sss // - time consumed on the line itself, in MM:SS.sss // - percentage of time consumed on the line relative to all other lines // // Lines not executed are not included in the report. // // @param x {symbol[]} Specifies the names of the functions for which execution // information is to be computed. If the argument is unspecified // or is the empty symbol, the report is computed for all // profiled functions. // // @return {table} A table containing the execution report, ordered by decreasing // own line execution time. // report:{ t:`$3_''string"t"$(w:data x)`Total`Own; `Own xdesc update Total:first t,Own:last t,Pct:`$(.prof.ffmt[2;Pct],'"%") from w } // // Internal definitions. // enl:enlist ns:~[1#.q]1# mt:{(x~`)|x~(::)} fns:{asc$[mt x;ff(key`.),getn ` sv'`,'(key`)except NSX;getn x]} ff:{x where 100h=type each value each x} getn:{(,/)getns each x} getns:{$[type key x;$[ns value x;ff(j where not i),getn(j:` sv'x,'k)where i:ns each x k:1_key x;`.~x;ff key x;x];x]} expand:{[msk;a] @[msk;where msk;:;a]} trueAt:{@[x#0b;y;:;1b]} ffmt:{("0";"")[x<count each s],'(i _'s),'".",'(i:neg x)#'s:string(_)y*/x#10} CH:"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_-`.:" // // @desc Defines a function within its associated namespace. // // @param nm {symbol} Specifies the fully-qualified name of the function to define. // @param c {symbol} Specifies the context in which to define the function. // @param f {string} Specifies the definition of the function. // def:{[nm;c;f] d:system "d";system "d .",string c; / Set namespace nm set value f; / Define object (reification performed by <value> must be under proper ns) system "d ",string d; / Restore previous namespace } // // @desc Initializes statistics collection on function entry. // // @param nm {symbol} Specifies the name of the function. // // @return {int} Origin-0 index of the stack level of the new function call. 
// init:{[nm] -1+count PFS,:enl nm,1 1 0*.z.n / Name, fn entry ts, line entry ts, accumulated subcall time ( = 0) } // // @desc Captures execution statistics for the statement just completed. // // @param i {int} Specifies the origin-0 index of the stack level. // @param n {int} Specifies the line number within the function, starting from 1 // and incrementing monotonically for each closed line. The // value is negative if the statement exits the function. // @param r {any} Contains the result of the statement executed. This value is // returned as our result. // // @return {any} The result is the argument `r` without modification. // mon:{[i;n;r] t:.z.n;s:PFS i; $[n<0;[if[i;PFS[i-1;3]+:t-s 1];PFS::i#PFS];PFS[i;2 3]:1 0*t]; / Accrue subcall time to parent on return; else reset line and subcall times PFD[first s;1 2 3;-1+abs n]+:(1i;t-s 2;s 3); / Count, total time, subcall time r } // // @desc Transforms a function by introducing profiler execution logic before the // first statement and after the execution of each closed line. A line is considered // closed if there is no active nesting of parentheses, brackets, or braces (other // than the outer braces that define the scope of the function itself), and the line // ends in a semicolon or is the last line of a function. // // @param nm {symbol} Specifies the name of the function to transform. // @param fn {string} Specifies the definition of the function. // // @return {list} A 3-element array containing the text of the transformed // function, a vector of line numbers, and a corresponding array // of line snippets (with snippet length controlled by the // global `LL`); or, an empty array if the function has already // been instrumented for profiling. // xf:{[nm;fn] fn:(k:2*"k)"~2#fn)_-1_fn; / Ignore trailing brace, and distinguish q vs. k fn[where fn="\t"]:" "; / Map tabs to blanks ln>:(|)(&\)(|)(fn=" ")|ln:fn="\n"; / Ignore trailing lines if[0h<type j:cmm[fn;0,where ln;q:qtm fn];fn@:j:where j;ln@:j;q@:j]; / Mark (with 0's) unescaped quote chars and remove comments p:q*(+\)1 -1 1 -1 1 -1i["()[]{}"?fn]*q:(=\)q; / Cumulative overall nesting level p*:1i=(+\)1 -1i["{}"?fn]*q; / Ignore nested lambdas j:where q<={x@:where j:i|-1_0b,i:x<>" ";i|expand[j;((("-"=1_x),0b)&-1_0b,x in")]}")|(1_k,0b)&-1_0b,k:x in CH]}fn;p@:j;fn@:j;ln@:j; / Remove redundant white space j:where(";"=(";",fn)i)&1i=p i:0,where ln; / Lines to consider, with leading NL if[count ss[first[i[1_j],count fn]#fn]"p99:.prof.";:""]; / Skip if instrumented f:{[nm;n;v;p] s:_[o+:("\n"<>_[o:$[(t:1<abs n)<"{["~2#v;1+v?"]";1]]v)?1b]v;p:o _p; / Offset to code start and resultant string a:(a+1),rz[;s;p]each a:ra[s;p]; / Compute return starts and corresponding ends if[(c:not count s)<1<>a 0;a:0,a,count[s]-((|)s in"\n;")?0b]; / Wrap non-empty, non-return stmt j:where[2#(_)0.5*count b]b:iasc a; / Distinguish starts from ends, in sequence e:(,/)((0,a b)_s),'(({".prof.mon[p99;",string[x],";["}each n,(-)abs n),(,)"]]")[j+a>0],(,)""; ((o#v),$[t;"";"p99:.prof.init[`",nm,"];"],(count[e]*c)_e;(LL&count s)#s) }[string nm]'[j*1 -1[j=last j+:1];i[j]_fn;i[j]_p]; ((k#"k)"),/(first each f),"}";j;last each f) } qtm:{[fn] (fn="\"")<=-1_0b,@[fn="\\";1+fn ss "\\\\";:;0b]} / Unescaped quotes (may be in comments) cmm:{[fn;ln;q] $[1b in c:(fn in" \n")&1_(fn="/"),0b;[j:(cm\). 
0b,ln _/:(q;c);(,/)j[;1]];1b]} / Comments cm:{[b;q;c] i:first[b]=\q;j:first where[i<c],n:count q;(i[j-1];(j#1b),(n-j)#0b)} / Line comment ra:{[s;p] where(p>0i)&(-1_1b,s in "\n[{;")&(s=":")>1_(s in ":;/)]}"),0b}; / Return starts rz:{[i;s;p] i+(((";"=i _s)&j=k)|(j<k:p i)&0i<>j:i _p)?1b}; / Return ends \d . \ Usage: .prof.prof`name / Profiles the specified function, or all functions in the specified namespace .prof.prof`name1`name2 / Profiles the specified functions, or all functions in the specified namespaces .prof.prof` / Profiles all functions in all namespaces .prof.unprof`name / Unprofiles the specified function, or all functions in the specified namespace .prof.unprof` / Unprofiles all functions for which profiling is enabled .prof.report`name / Produces a report of profile information for the specified function .prof.report` / Produces a report of profile information for all profiled functions .prof.data`name / Returns the raw profile data for the specified function .prof.data` / Returns the raw profile data for all profiled functions .prof.reset[] / Resets profile statistics, discarding previous results Globals: .prof.NSX - List of namespaces to exclude; assign to change .prof.LL - Number of leading chars of source lines to include in reports; assign to change .prof.PFD - Dictionary of name -> stmt, count, CPU arrays (in line no. order) .prof.PFS - Execution stack (contains name, time at start of entry to level, interim line residual) .prof.SV - Dictionary of saved (unprofiled) definitions ================================================================================ FILE: qspec_app_spec.q SIZE: 3,005 characters ================================================================================ \d .tst .utl.require "qutil/opts.q" .utl.require "qspec" .tst.loadOutputModule["text"] .tst.app.excludeSpecs:(); .tst.app.runSpecs:(); .tst.output.mode: `run .utl.addOpt["desc,describe";1b;(`.tst.app.describeOnly;{if[x;.tst.output.mode:`describe;];x})] .utl.addOpt["xunit";1b;(`.tst.app.xmlOutput;{if[x;.tst.loadOutputModule["xunit"]];x})]; .utl.addOpt["junit";1b;(`.tst.app.xmlOutput;{if[x;.tst.loadOutputModule["junit"]];x})]; .utl.addOpt["perf,performance";1b;`.tst.app.runPerformance] .utl.addOpt["exclude";(),"*";(`.tst.app.excludeSpecs;{"," vs " " sv x})] .utl.addOpt["only";(),"*";(`.tst.app.runSpecs;{"," vs " " sv x})] .utl.addOpt["pass";1b;`.tst.app.passOnly] .utl.addOpt["noquit";0b;`.tst.app.exit] .utl.addOpt["fuzz-display-limt,fdl";"I";`.tst.output.fuzzLimit] .utl.addOpt["ff,fail-fast";1b;`.tst.app.failFast] .utl.addOpt["fh,fail-hard";1b;`.tst.app.failHard] .utl.addArg["*";();(),1;`.tst.app.args]; .utl.parseArgs[]; .utl.DEBUG:1b app.specs:() app.expectationsRan:0 app.expectationsPassed:0 app.expectationsFailed:0 app.expectationsErrored:0 .tst.callbacks.descLoaded:{[specObj]; .tst.app.specs,:enlist specObj; }
Programming idioms¶ Q expressions that solve some common programming problems. How do I select all the columns of a table except one?¶ If the table has many columns, it is tedious and error-prone to write q)select c1,c2,... from t In q we can write q)delete somecolumn from t Here, delete does not modify the table t in place. Watch out delete does not work on historical databases. How do I select a range of rows in a table by position?¶ You can use the row index i . q)select from tab where i within 42 57 How do I extract the milliseconds from a time?¶ Given q)time: 09:10:52.139 we can extract hour, minute and second like this q)(time.hh; time.mm; time.ss) 9 10 52 To query the milliseconds q)time mod 1000 139 How do I populate a table with random data?¶ This is useful for testing. The ? operator can be used to generate random values of a given type. q)n:1000 q)stocks:`goog`amzn`msft`intel`amd`ibm q)trade:([]stock:n?stocks; price:n?100.0;amount:100*10+n?20;time:n?24:00:00.000) q)trade stock price amount time ----------------------------------- ibm 94.1497 2800 10:45:14.943 amzn 96.1774 2800 03:17:33.371 ibm 3.95321 2200 04:53:09.818 goog 11.10307 1000 19:27:15.894 msft 64.73216 1900 17:32:42.558 intel 19.51964 2600 05:05:58.680 ibm 10.18318 1000 05:47:46.437 ... How do I get the hourly lowest and highest price?¶ This query q)select price by stock,time.hh from trade groups the prices hourly by stock stock hh| price .. --------| -------------------------------------------------------------------.. amd 0 | 76.03805 3.632539 16.6526 77.27191 79.27451 7.501845 .. amd 1 | 16.35361 85.9618 27.60804 61.91134 6.921016 84.99082 50.05533 81.10.. amd 2 | 47.61621 15.33209 23.64018 88.34472 .. ... The price column is a list. To get the low and high prices, we need to apply list functions that produce that result: q)select low: min price,high: max price by stock,time.hh from trade stock hh| low high --------| ------------------ amd 0 | 3.632539 79.27451 amd 1 | 6.921016 85.9618 amd 2 | 15.33209 88.34472 ... How can I extract the time of the lowest and highest prices?¶ This query makes use of where to extract the time matching the high/low prices: q)t: `time xasc ([] time:09:30:00.0+100000?23000000; sym:100000?`AAPL`GOOG`IBM; price:50+(floor (100000?0.99)*100)%100) q)select high:max price,low:min price,time_high:first time where price=max price,time_low:first time where price=min price by sym,time.hh from t sym hh| high low time_high time_low -------| ----------------------------------- AAPL 9 | 50.98 50 09:31:01.254 09:32:19.141 AAPL 10| 50.98 50 10:01:24.975 10:04:21.228 AAPL 11| 50.98 50 11:00:04.438 11:00:19.517 AAPL 12| 50.98 50 12:01:24.891 12:01:09.768 AAPL 13| 50.98 50 13:00:35.044 13:01:15.162 AAPL 14| 50.98 50 14:02:37.634 14:01:06.998 AAPL 15| 50.98 50 15:00:42.958 15:01:24.288 GOOG 9 | 50.98 50 09:30:21.404 09:30:12.264 .. How do I extract regular time series from observed quotes?¶ Given this table containing observed quotes: q)t: `time xasc ([] time:09:30:00.0+100000?23000000; sym:100000?`AAPL`GOOG`IBM; bid:50+(floor (100000?0.99)*100)%100; ask:51+(floor (100000?0.99)*100)%100); q)t time sym bid ask ----------------------------- 09:30:00.143 IBM 50.75 51.09 09:30:00.192 IBM 50.03 51.56 09:30:00.507 GOOG 50.23 51.47 09:30:00.540 IBM 50.49 51.22 .. 
We can extract the last observation of each ‘second’ time period: q)`second xasc select last bid,last last ask by sym,1 xbar time.second from select from t sym second | bid ask -------------| ----------- AAPL 09:30:00| 50.45 51.4 GOOG 09:30:00| 50.43 51.04 IBM 09:30:00| 50.49 51.22 AAPL 09:30:01| 50.68 51.11 .. However this solution will skip periods where there were no observations. For example, in the table generated for this document there were no observations for AAPL and IBM at 09:30:03: .. IBM 09:30:02| 50.39 51.15 GOOG 09:30:03| 50.26 51.94 AAPL 09:30:04| 50.13 51.66 GOOG 09:30:04| 50.07 51.62 IBM 09:30:04| 50.61 51.14 .. A better solution would be to use aj : q)res: aj[`sym`time;([]sym:`AAPL`IBM`GOOG) cross ([]time:09:30:00+til `int$(16:00:00 - 09:30:00)); select `second$time,sym,bid,ask from t] q)`time xasc res sym time bid ask ------------------------- AAPL 09:30:00 50.45 51.4 IBM 09:30:00 50.49 51.22 GOOG 09:30:00 50.43 51.04 AAPL 09:30:01 50.68 51.11 IBM 09:30:01 50.2 51.48 GOOG 09:30:01 50.59 51.72 AAPL 09:30:02 50.3 51.54 IBM 09:30:02 50.39 51.15 GOOG 09:30:02 50.74 51.09 AAPL 09:30:03 50.3 51.54 IBM 09:30:03 50.39 51.15 GOOG 09:30:03 50.26 51.94 AAPL 09:30:04 50.13 51.66 IBM 09:30:04 50.61 51.14 GOOG 09:30:04 50.07 51.62 .. When a millisecond resolution is required, this solution might offer better performances: q)aapl:select from t where sym=`AAPL q)res:([]time),'(`bid`ask#aapl) -1+where deltas @[;count[d]-1;:;count time]d:(time:09:30:00.0+til`int$10:00:00.0-09:30:00.0) bin aapl`time q)select from res where time>09:31:00 time bid ask ------------------------ 09:31:00.001 50.41 51.13 09:31:00.002 50.41 51.13 09:31:00.003 50.41 51.13 09:31:00.004 50.41 51.13 09:31:00.005 50.41 51.13 .. How do I select the last n observations for each sym?¶ A couple of solutions using the same table than in the previous examples: q)ungroup select sym,-3#'time,-3#'bid,-3#'ask from select time,bid,ask by sym from t where time<15:00:00 sym time bid ask ----------------------------- AAPL 14:59:58.564 50.94 51.28 AAPL 14:59:59.450 50.54 51.17 AAPL 14:59:59.650 50.42 51.87 GOOG 14:59:59.159 50.42 51.41 GOOG 14:59:59.302 50.52 51.66 GOOG 14:59:59.742 50.52 51.25 IBM 14:59:56.439 50.01 51.81 IBM 14:59:56.556 50.38 51.33 IBM 14:59:57.116 50.96 51.45 q)select sym,time,bid,ask from t where time<15:00:00,3>(idesc;i) fby sym sym time bid ask ----------------------------- IBM 14:59:56.439 50.01 51.81 IBM 14:59:56.556 50.38 51.33 IBM 14:59:57.116 50.96 51.45 AAPL 14:59:58.564 50.94 51.28 GOOG 14:59:59.159 50.42 51.41 GOOG 14:59:59.302 50.52 51.66 AAPL 14:59:59.450 50.54 51.17 AAPL 14:59:59.650 50.42 51.87 GOOG 14:59:59.742 50.52 51.25 How do I calculate vwap series?¶ Use xbar and wavg : q)t: `time xasc ([] time:09:30:00.0+100000?23000000; sym:100000?`AAPL`GOOG`IBM; price:50+(floor (100000?0.99)*100)%100; size:10*100000?100); q)aapl: select from t where sym=`AAPL,size>0 q)select vwap:size wavg price by 5 xbar time.minute from aapl minute| vwap ------| --------- 09:30 | 50.474908 09:35 | 50.461356 09:40 | 50.46645 09:45 | 50.493585 09:50 | 50.48062 .. 
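If a per-instrument series is wanted, the same pattern extends by grouping on sym as well. A minimal sketch, assuming the same trade table t as above (output omitted):
q)select vwap:size wavg price by sym,5 xbar time.minute from t where size>0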
Q for Mortals §9.13.4 Meaty Queries How do I extract regular-size vwap series?¶ One quick solution is to use xbar with the running size sum to average the wanted periods: q)t:`time xasc ([]time:N?.z.t;price:0.01*floor 100*10 + N?1.0;size:N?500) q)select time:last time ,vwap: size wavg price by g:1000 xbar sums size from t g | time vwap -----| ---------------------- 0 | 00:41:19.862 10.771082 1000 | 01:25:20.920 10.656847 2000 | 02:15:20.433 10.522944 3000 | 02:35:18.721 10.295203 4000 | 02:58:30.142 10.320519 5000 | 05:54:20.645 10.564838 .. However this solution approximates regular trade series: the last trade of each period can overflow the requested size (1000 in the examples) and the overflowing amount will not be used in the following period. This query highlights the problem: q)update total:sum each size from select time:last time,vwap: size wavg price,size by g:1000 xbar sums size from t g | time vwap size total -----| -------------------------------------------------- 0 | 00:41:19.862 10.771082 409 297 80 59 116 961 1000 | 01:25:20.920 10.656847 126 451 244 149 29 999 2000 | 02:15:20.433 10.522944 85 417 107 50 659 3000 | 02:35:18.721 10.295203 422 477 477 1376 4000 | 02:58:30.142 10.320519 408 19 305 732 5000 | 05:54:20.645 10.564838 406 277 58 106 323 1170 .. A more precise (and elaborate) solution consist of splitting the edge trades into two parts so that each size bar sum up to the requested even amount: rvwap : { [t;size_par] // add the bucket and the total size t:update bar:size_par xbar tot from update tot:sums size from t; // get the indices where the bar changes ind:where differ t`bar; // re-index t, and sort (duplicate the rows where the bucket changes) t:t asc (til count t),ind; // shift all the indices due to the table now being larger ind:ind+til count ind; // update the size in the first trade of the new window and modify the bar t:update size:size-tot-bar,bar:size_par xbar tot-size from t where i in ind; // update the size in the second trade of the new window t:update size:tot-bar from t where i in 1+ind; // calculate stats select last time,price:size wavg price,sum size by bar from t } q)rvwap[t;1000] bar | time price size -----| -------------------------- 0 | 00:44:53.037 10.76246 1000 1000 | 01:29:02.279 10.64794 1000 2000 | 02:25:13.605 10.49144 1000 3000 | 02:52:50.130 10.24878 1000 4000 | 03:11:20.253 10.37125 1000 5000 | 06:01:35.780 10.57725 1000 .. How do I apply a function to a sequence sliding window?¶ Use the Scan iterator to build a moving list adding one item at the time from the original table, and the _ Drop operator to discard older values. This example calculates the moving average for a sliding window of 3 items. Note how the second parameter sets the size of the sliding window. q)swin:{[f;w;s] f each { 1_x,y }\[w#0;s]} q)swin[avg; 3; til 10] 0 0.33333333 1 2 3 4 5 6 7 8 q)// trace the sliding window q)swin[0N!; 3; til 10] 0 0 0 0 0 1 0 1 2 1 2 3 2 3 4 3 4 5 .. A different approach based on prev , inserting 0N at the beginning of the window rather than 0: q)swin2:fwv:{x/'[flip reverse prev\[y-1;z]]} q)swin2[avg;3;til 10] 0 0.5 1 2 3 4 5 6 7 8 q)swin2[::;3;til 10] 0 0 1 0 1 2 1 2 3 2 3 4 3 4 5 .. swin2:fwv finds all the windows using prev and Converge. 
To find windows of size 3: q)prev\[2;x:3 5 7 2 4 3 7] 3 5 7 2 4 3 7 3 5 7 2 4 3 3 5 7 2 4 q)flip reverse prev\[2;x] 3 3 5 3 5 7 5 7 2 7 2 4 2 4 3 4 3 7 q)max each flip prev\[2;x] 3 5 7 7 7 4 7 q)(3 mmax x)~max each flip prev\[2;x] 1b A third, more memory efficient function, m is: q)m:{last{(a;x 1;x[2],z y x[1]+a:1+x 0)}[;z;x]/[n:count z;(0-y;til y;())]} For larger windows, time and space may be important... q)\ts swin[max;1000;10000?10] 71 82473552 q)\ts 1000 mmax 10000?10 76 524592 q)\ts m[max;1000;10000?10] 205 401984 q)\ts fwv[max;1000;10000?10] 491 213061888 …c.f. smaller window q)\ts w mmax v 1 393440 q)\ts fwv[f:max;w:10;v:10000?10] 6 2656576 q)\ts swin[f;w;v] 12 1702416 q)\ts m[f;w;v] 50 262784 To index: q)w:{(til[count z]-m)+x each flip reverse prev\[m:y-1;z]} q)x w[{x?max x};3;x] 3 5 7 7 7 4 7 which is useful for addressing other lists: q)update top:date w[{x?max x};3;volume] from ([]date:2000.01.01+til 7;volume:x) date volume top ---------------------------- 2000.01.01 3 2000.01.01 2000.01.02 5 2000.01.02 2000.01.03 7 2000.01.03 2000.01.04 2 2000.01.03 2000.01.05 4 2000.01.03 2000.01.06 3 2000.01.05 2000.01.07 7 2000.01.07 Grouping over non-primary keys¶ Consider a table with multiple entries for the same value of the stock column. q)table:([]stock:(); price:()) q)insert[`table; (`ibm`ibm`ibm`ibm`intel`intel`intel`intel; 1 2 3 4 1 2 3 4)] 0 1 2 3 4 5 6 7 q)table stock price ----------- ibm 1 ibm 2 ibm 3 ibm 4 intel 1 intel 2 intel 3 intel 4 If we select and group by stock ... q)select by stock from table stock| price -----| ----- ibm | 4 intel| 4 … only the last price is in the result. What if we want the first price? We reverse the table. q)select by stock from reverse table stock| price -----| ----- ibm | 1 intel| 1 What if we want all the prices as a list? We use xgroup . q)`stock xgroup table stock| price -----| ------- ibm | 1 2 3 4 intel| 1 2 3 4 Getting the contents of the columns of a table¶ Imagine that we want to save a CSV file without the first row that contains the names of the table columns. This can be done by extracting the columns from a table, without the row name, and then saving in CSV format. Here, we assume that tables are not keyed. We’ll use this table as our example: q)trade date open high low close volume sym ------------------------------------------------ 2006.10.03 24.5 24.51 23.79 24.13 19087300 AMD 2006.10.03 27.37 27.48 27.21 27.37 39386200 MSFT 2006.10.04 24.1 25.1 23.95 25.03 17869600 AMD 2006.10.04 27.39 27.96 27.37 27.94 82191200 MSFT 2006.10.05 24.8 25.24 24.6 25.11 17304500 AMD 2006.10.05 27.92 28.11 27.78 27.92 81967200 MSFT 2006.10.06 24.66 24.8 23.96 24.01 17299800 AMD 2006.10.06 27.76 28 27.65 27.87 36452200 MSFT There are two ways of extracting columns from a table. The first one uses indexing by the column names: q)cols trade `date`open`high`low`close`volume`sym q)trade (cols trade) 2006.10.03 2006.10.03 2006.10.04 2006.10.04 2006.10.05 2006.10.05 2006.10.06 .. 24.5 27.37 24.1 27.39 24.8 27.92 24.66 .. 24.51 27.48 25.1 27.96 25.24 28.11 24.8 .. 23.79 27.21 23.95 27.37 24.6 27.78 23.96 .. 24.13 27.37 25.03 27.94 25.11 27.92 24.01 .. 19087300 39386200 17869600 82191200 17304500 81967200 17299800 .. AMD MSFT AMD MSFT AMD MSFT AMD .. The second one turns the table into a dictionary and then gets the values q)value flip trade 2006.10.03 2006.10.03 2006.10.04 2006.10.04 2006.10.05 2006.10.05 2006.10.06 .. 24.5 27.37 24.1 27.39 24.8 27.92 24.66 .. 24.51 27.48 25.1 27.96 25.24 28.11 24.8 .. 23.79 27.21 23.95 27.37 24.6 27.78 23.96 .. 
24.13 27.37 25.03 27.94 25.11 27.92 24.01 .. 19087300 39386200 17869600 82191200 17304500 81967200 17299800 .. AMD MSFT AMD MSFT AMD MSFT AMD .. Now we can save in CSV format without the column names: q)t: trade (cols trade) q)save `:t.csv `:t.csv q)\cat t.csv "2006-10-03,24.5,24.51,23.79,24.13,19087300,AMD" "2006-10-03,27.37,27.48,27.21,27.37,39386200,MSFT" "2006-10-04,24.1,25.1,23.95,25.03,17869600,AMD" ... Using flip is more efficient (constant time): q)\t do[1000000; value flip trade] 453 q)\t do[1000000; trade (cols trade)] 2984 However, for splayed tables, only indexing works: C:\>.\q.exe dir KDB+ 2.4t 2006.07.27 Copyright (C) 1993-2006 Kx Systems w32/ 1cpu 384MB ... q)\v `s#`sym`trade q)trade (cols trade) 2006.10.03 2006.10.03 2006.10.04 2006.10.04 2006.10.05 2006.10.05 2006.10.06 .. 24.5 27.37 24.1 27.39 24.8 27.92 24.66 .. 24.51 27.48 25.1 27.96 25.24 28.11 24.8 .. 23.79 27.21 23.95 27.37 24.6 27.78 23.96 .. 24.13 27.37 25.03 27.94 25.11 27.92 24.01 .. 19087300 39386200 17869600 82191200 17304500 81967200 17299800 .. AMD MSFT AMD MSFT AMD MSFT AMD .. q)value flip trade `:trade/ Column names as parameters to functions¶ Column names cannot be arguments to parameterized queries. A type error is signalled when a query like that is applied: q)f:{[tbl; col; amt] select from tbl where col > amt} q)f[trade][`volume][10000] {[tbl; col; amt] select from tbl where col > amt} 'type However, a query can also be built from a string. This allows column names to be used as query parameters. The above parameterized query can be written using strings: q)f:{[tbl; col; amt] value "select from ", (string tbl), " where ", (string col), " > ", string amt} q)f[`trade][`volume][10000] date open high low close volume sym ------------------------------------------------ 2006.10.03 24.5 24.51 23.79 24.13 19087300 AMD 2006.10.03 27.37 27.48 27.21 27.37 39386200 MSFT ... Executing queries built from strings has a performance penalty. q)\t do[100000; select from trade where volume > 10000] 234 q)\t do[100000; value "select from ", (string `trade), " where ", (string `volume), " > ", string 10000] 1250 An alternative to building queries by concatenating strings is to use parse trees. q)f:{[tbl;col;amt] ?[tbl; enlist (>;col;amt); 0b; ()]} q)f[trade; `volume; 10000] date open high low close volume sym ------------------------------------------------ 2006.10.03 24.5 24.51 23.79 24.13 19087300 AMD 2006.10.03 27.37 27.48 27.21 27.37 39386200 MSFT This is more efficient than using strings: q)\t do[100000; ?[trade; enlist (>;`volume;10000); 0b; ()]] 312 Note: When f builds the query from - strings, tbl is passed by reference, e.g. `trade - parse trees, tbl is passed by value, e.g. trade Glossary for passing by reference and value
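The functional form can also take the list of columns to return as a parameter. A minimal sketch against the trade table above; the helper g and its arguments are illustrative, not part of the original:
q)g:{[tbl;cs;col;amt] cs,:(); ?[tbl; enlist(>;col;amt); 0b; cs!cs]}  / cs: columns to return
q)g[trade; `date`sym`volume; `volume; 10000]                        / date, sym and volume of qualifying rows
Here cs!cs maps each requested column name to itself, which is how the functional form expresses select date,sym,volume.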
Mathematics and statistics¶ | function | rank | ƒ | semantics | |---|---|---|---| + | 2 | a | add | - | 2 | a | subtract | * | 2 | a | multiply | % | 2 | a | divide | $ | 2 | A | dot product, matrix multiply | & | 2 | a | lesser | | | 2 | a | greater | abs | 1 | a | absolute value | acos | 1 | a | arccosine | asin | 1 | a | arcsine | atan | 1 | a | arctangent | avg | 1 | A | arithmetic mean | avgs | 1 | u | arithmetic means | ceiling | 1 | a | round up to integer | cor | 2 | A | correlation | cos | 1 | a | cosine | cov | 2 | A | covariance | deltas | 1 | u | differences | dev | 1 | A | standard deviation | div | 2 | a | integer division | ema | 2 | m | exponential moving average | exp | 1 | a | ex | floor | 1 | a | round down to integer | inv | 1 | u | matrix inverse | log | 1 | a | natural logarithm | lsq | 2 | matrix divide | | mavg | 2 | m | moving average | max | 1 | A | greatest | maxs | 1 | u | maximums | mcount | 2 | m | moving count | mdev | 2 | m | moving deviation | med | 1 | A | median | min | 1 | A | least | mins | 1 | u | minimums | mmax | 2 | m | moving maximum | mmin | 2 | m | moving minimum | mmu | 2 | matrix multiply | | mod | 2 | a | modulo | msum | 2 | m | moving sum | prd | 1 | A | product | prds | 1 | u | products | ratios | 1 | u | ratios | reciprocal | 1 | a | reciprocal | scov | 2 | A | statistical covariance | sdev | 1 | A | statistical standard deviation | signum | 1 | a | sign | sin | 1 | a | sine | sqrt | 1 | a | square root | sum | 1 | A | sum | sums | 1 | u | sums | svar | 1 | A | statistical variance | tan | 1 | a | tangent | til | 1 | natural numbers till | | var | 1 | A | variance | wavg | 2 | A | weighted average | wsum | 2 | A | weighted sum | xbar | 2 | A | round down | xexp | 2 | a | xy | xlog | 2 | a | base-x logarithm of y | ƒ – a: atomic; u: uniform; A: aggregate; m: moving Domains and ranges¶ The domains and ranges of the mathematical functions have boolean, numeric, and temporal datatypes. q)2+3 4 5 5 6 7 q)2012.05 2012.06m-2 2012.03 2012.04m q)3.3 4.4 5.5*1b 3.3 4.4 5.5 Individual function articles tabulate non-obvious domain and range datatypes. Dictionaries and tables¶ The domains and ranges also extend to: - dictionaries where the value of the dictionary is in the domainq)3+`a`b`c!(42;2012.09.15;1b) a| 45 b| 2012.09.18 c| 4 - simple tables where the value of theflip of the table is in the domain - keyed tables where theq)3%([]b:1 2 3;c:45 46 47) b c -------------- 3 0.06666667 1.5 0.06521739 1 0.06382979 value of the table is in the domainq)show v:([sym:`ibm`goog`msoft]qty:1000 2000 3000;p:1550 375 98) sym | qty p -----| --------- ibm | 1000 1550 goog | 2000 375 msoft| 3000 98 q)v+5 sym | qty p -----| --------- ibm | 1005 1555 goog | 2005 380 msoft| 3005 103 Exceptions to the above: cor scov cov sdev dev svar div (tables) til ema var inv wavg (tables) lsq wsum (tables) mmu xbar (tables) mod (tables) xexp (tables) Mathematics with temporals¶ Temporal datatypes (timestamp, month, date, datetime, timespan, minute, second, time) are encoded as integer or float offsets from 2000.01.01 or 00:00. Mathematical functions on temporals are applied to the underlying numerics. See domain/range tables for individual functions for the result datatypes. Beyond addition and subtraction Results for addition and subtraction are generally intuitive and useful; not always for other arithmetic functions. 
q)2017.12.31+0 1 2 2017.12.31 2018.01.01 2018.01.02 q)2017.12m-0 1 2 2017.12 2017.11 2017.10m q)2017.12m*0 1 2 2000.01 2017.12 2035.11m q)2017.12m% 1 2 3 215 107.5 71.66667 q)00:10%2 5f q)00:10:00%2 300f q)00:10:00.000%2 300000f q)00:10:00.000000000%2 3e+11 Aggregating nulls¶ avg , min , max and sum are special: they ignore nulls, in order to be similar to SQL92. But for nested x these functions preserve the nulls. q)avg (1 2;0N 4) 0n 3 Metadata¶ Operators and keywords that get or set metadata. attr - Attributes of a list cols - Columns of a table fkeys - Foreign keys of a table key - Variously: - Keys of a dictionary - key columns/s of a keyed table - contents of a filesystem directory - whether a file exists - whether a variable name is in use - name of the table linked to by a foreign-key column - type of a vector - name of an enumerating list - a synonym for til keys - Primary key column/s of a table meta - Metadata for a table # Set Attribute- Set the attribute of a list tables - List of tables in a namespace type - Datatype of an object .Q.ty - Datatype as a character code value - Variously - values of a dictionary - value of a variable passed by name - symbol vector of an enumeration - metadata of a function - metadata of a view - decomposition of a projection or composition - internal code of a primitive - original map of an extension - internal code of a primitive function - the result of applying the first item of a list to the rest of it - the result of evaluating a string view - Expression defining a view views - List of views in the default namespace Namespaces¶ Namespaces are containers within the kdb+ workspace. Names defined in a namespace are unique only within the namespace. Namespaces are a convenient way to divide an application between modules; also to construct and share library code. Namespaces are identified by a leading dot in their names. System namespaces¶ kdb+ includes the following namespaces. | namespace | contents | |---|---| .h | Functions for converting files into various formats and for web-console display | .j | Functions for converting between JSON and q dictionaries | .m | Objects in memory domain 1 | .Q | Utility functions | .q | Definitions of q keywords | .z | System variables and functions, and hooks for callbacks | The linked pages document some of the objects in these namespaces. (Undocumented objects are part of the namespace infrastructure and should not be used in kdb+ applications.) These and all single-character namespaces are reserved for use by KX. Names¶ Apart from the leading dot, namespace names follow the same rules as names for q objects. Outside its containing namespace, an object is known by the full name of its containing namespace followed by a dot and its own name. Namespaces can contain other namespaces. Thus .fee.fi.fo is the name of object fo within namespace fi within namespace fee . Dictionaries¶ Namespaces are implemented as dictionaries. To list the objects contained in namespace .foo : key `.foo To list all the namespaces in the root: key ` Construction¶ Referring to a namespace is sufficient to create it. q)key ` `q`Q`h`j`o q).fee.fi.fo:42 q)key ` `q`Q`h`j`o`fee q)key `.fee ``fi q)key `.fee.fi ``fo \d Q for Mortals §12 Workspace Organization
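A small sketch of moving between namespaces with \d; the namespace .mylib and the function f are illustrative only:
q)\d .mylib
q.mylib)f:{x*x}
q.mylib)\d .
q)key `.mylib
``f
q).mylib.f 4
16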
Appendix A - Elastic Block Store (EBS)¶ EBS can be used to store HDB data, and is fully compliant with kdb+. It supports all of the POSIX semantics required. Amazon EBS allows you to create storage volumes and attach them to Amazon EC2 instances. Once attached, you can create a file system on top of these volumes, run a database, or use them in any other way you would use block storage. Amazon EBS volumes are placed in a specific Availability Zone (AZ) where they are automatically replicated to protect you from the failure of a single component. All EBS volume types offer durable snapshot capabilities and are designed for 99.999% availability. Amazon EBS provides a range of options that allow you to optimize storage performance and cost for your workload. Seven variants of the Elastic Block Store (EBS) are all qualified by kdb+: gp2 , gp3 , io1 , io2 , and io2 Block Express are SSD-based volumes that offer different price/performance points, st1 and sc1 are HDD-based volumes comprised of traditional drives. Unlike ephemeral SSD storage, SSD-backed volumes include the highest performance Provisioned IOPS SSD (io2 and io1) for latency-sensitive transactional workloads and General Purpose SSD (gp3 and gp2) that balance price and performance for a wide variety of transactional data. HDD-backed volumes include Throughput Optimized HDD (st1) for frequently accessed, throughput intensive workloads and the lowest cost Cold HDD (sc1) for less frequently accessed data. Details on Volume Sizes, MAX IOPS, MAX THROUGHPUT, LATENCY, and PRICE for each EBS volume type are available on AWS. The new io2 Block Express architecture delivers the highest levels of performance with sub-millisecond latency by communicating with an AWS Nitro System-based instance using the Scalable Reliable Datagrams (SRD) protocol, which is implemented in the Nitro Card dedicated for EBS I/O function on the host hardware of the instance. Block Express also offers modular software and hardware building blocks that can be assembled in many ways, giving you the flexibility to design and deliver improved performance and new features at a faster rate. For more information on io2 Block Express , see this AWS Blog. EBS-based storage can be dynamically provisioned to any other EC2 instance via operator control. So this is a good candidate for on-demand HDB storage. Assign the storage to an instance in build scripts and then spin them up. (Ref: Amazon EBS) Customers can enable Multi-Attach on an EBS Provisioned IOPS io2 or io1 volume to allow a volume to be concurrently attached to up to sixteen Nitro-based EC2 instances within the same Availability Zone. Multi-Attach makes it easier to achieve higher application availability for applications that manage storage consistency from multiple writers. Each attached instance has full read and write permission to the shared volume. To learn more, see Multi-Attach technical documentation. Data Lifecycle Manager for EBS snapshots provides a simple, automated way to back up data stored on EBS volumes by ensuring that EBS snapshots are created and deleted on a custom schedule. You no longer need to use scripts or other tools to comply with data backup and retention policies specific to your organization or industry. Amazon EBS provides the ability to save point-in-time snapshots of your volumes to Amazon S3. Amazon EBS Snapshots are stored incrementally: only the blocks that have changed after your last snapshot are saved, and you are billed only for the changed blocks. 
If you have a device with 100 GB of data but only 5 GB has changed after your last snapshot, a subsequent snapshot consumes only 5 additional GB and you are billed only for the additional 5 GB of snapshot storage, even though both the earlier and later snapshots appear complete. With lifecycle management, you can be sure that snapshots are cleaned up regularly and keep costs under control. Simply tag your EBS volumes and start creating Lifecycle policies for creation and management of backups. Use CloudWatch Events to monitor your policies and ensure that your backups are being created successfully. Elastic Volumes is a feature that allows you to easily adapt your volumes as the needs of your applications change. Elastic Volumes allows you to dynamically increase capacity, tune performance, and change the type of any new or existing current-generation volume with no downtime or performance impact. Easily right-size your deployment and adapt to performance changes. Amazon EBS encryption offers seamless encryption of EBS data volumes, boot volumes and snapshots, eliminating the need to build and manage a secure key-management infrastructure. EBS encryption enables data-at-rest security by encrypting your data volumes, boot volumes and snapshots using Amazon-managed keys or keys you create and manage using the AWS Key Management Service (KMS). In addition, the encryption occurs on the servers that host EC2 instances, providing encryption of data as it moves between EC2 instances and EBS data and boot volumes. For more information, see Amazon EBS encryption in the Amazon EC2 User Guide. EBS is carried over the local network within one availability zone. Between availability zones there would be IP L3 routing protocols involved in moving the data between zones, and so the latencies would be increased. EBS may look like a disk, act like a disk, and walk like a disk, but it doesn’t behave like a disk in the traditional sense. There are constraints on calculating the throughput gained from EBS (performance numbers below are from 2018 - soon to be updated):
- There is a maximum throughput to/from each physical EBS volume. This is set to 500 MB/sec for io1 and 160 MB/sec for gp2. A gp2 volume can range in size from 1 GB to 16 TB. You can use multiple volumes per instance (and we would expect to see that in place with an HDB).
- There is a further limit to the volume throughput applied, based on its size at creation time. For example, a gp2 volume provides a baseline rate of IOPS that scales with the size of the volume, calculated on the basis of 3 IOPS per GB. For a 200-GB volume we get 600 IOPS, and at 1 MB per operation that exceeds the limit in (1), so the lower value remains the cap. The burst peak IOPS figure is more meaningful for random, small reads of kdb+ data.
- For gp2 volumes there is a burst-level cap, but this increases as the volume gets larger. This burst level peaks at 1 TB and is 3000 IOPS. That would be 384 MB/sec at 128-KB records, which, again, is in excess of the cap of 160 MB/sec.
- There is a maximum network bandwidth per instance. In the case of the unit under test here we used r4.4xlarge, which constrains the throughput to the instance at 3500 Mbps, or a wire speed of 430 MB/sec, capped. This would be elevated with larger instances, up to a maximum value of 25 Gbps for a large instance, such as r4.16xlarge.
- It is important to note that EBS scales linearly across an entire estate (e.g. parallel peach queries).
There should be no constraints if you are accessing your data, splayed across different physical volumes, from distinct instances: e.g. 10 nodes of r4.4xlarge are capable of reading 4300 MB/sec. kdb+ achieves or meets all of these advertised figures, so the EBS network bandwidth algorithms become the dominating factor in any final throughput calculation.
In 2018, Kx Systems conducted performance tests on r4.4xlarge instances with four 200-GB volumes, each with one xfs file system per volume, therefore using four mount points (four partitions). To show higher throughputs, r4.16xlarge instances with more volumes (eight 500-GB volumes) were tested. Comparisons were made on gp2 and io1 as well. For testing st1 storage, four 6-TB volumes were used. Note: the faster Nitro instances and io2, io2 Block Express, and NVMe storage have not been tested yet. The results below are a bit dated, but still provide useful information.
EBS-GP2¶
| function | latency (mSec) | function | latency (mSec) |
|---|---|---|---|
| hclose hopen | 0.004 | ();,;2 3 | 0.006 |
| hcount | 0.002 | read1 | 0.018 |
EBS GP2 metadata operational latencies - mSecs (headlines)
EBS-IO1¶
| function | latency (mSec) | function | latency (mSec) |
|---|---|---|---|
| hclose hopen | 0.003 | ();,;2 3 | 0.006 |
| hcount | 0.002 | read1 | 0.017 |
EBS-IO1 metadata operational latencies - mSecs (headlines)
EBS-ST1¶
| function | latency (mSec) | function | latency (mSec) |
|---|---|---|---|
| hclose hopen | 0.003 | ();,;2 3 | 0.04 |
| hcount | 0.002 | read1 | 0.02 |
EBS-ST1 metadata operational latencies - mSecs (headlines)
Summary¶
kdb+ matches the expected throughput of the EBS configurations tested with no major deviations across all classes of read patterns required. At the time these tests were conducted, EBS-IO1 achieved slightly higher throughput metrics than GP2, but achieves this at a guaranteed IOPS rate. Its operational latency is lower for metadata and random reads. When considering EBS for kdb+, take the following into consideration:
- Fixed bandwidth per node: in our test cases, the instance throughput limit of circa 430 MB/sec for r4.4xlarge is easily achieved with these tests. Contrast that with the increased throughput gained with the larger r4.16xlarge instance. Use this precept in your calculations.
- There is a fixed throughput per GP2 volume, but multiple volumes will increment that value up to the peak achievable in the instance definition. kdb+ achieves that instance peak throughput.
- Server-side kdb+ in-line compression works very well for streaming and random 1-MB read throughputs, whereby the CPU essentially keeps up with the lower level of compressed data ingest from EBS; and for random reads with many processes, read-ahead and decompression running in parallel are able to magnify the input bandwidth, pretty much in line with the compression rate.
- st1 works well at streaming reads, but will suffer from high latencies for any form of random searching. Due to the lower capacity cost of st1, you may wish to consider it for data that will be accessed by streaming reads only, e.g. older data.
Appendix B – EFS (NFS)¶
EFS is an NFS service owned and run by AWS that offers NFS service for nodes in the same availability zone, can run across zones, or can be exposed externally. The location where the storage is kept is owned by Amazon and is not made transparent to the user. The only access to the data is via the service by name (NFS service), and there is no block or object access to said data.
| Amazon EFS | Amazon EBS Provisioned IOPS | | |---|---|---| | Availability and durability | Data is stored independently across multiple AZs. | Data is stored redundantly in a single AZ. | | Access | Up to thousands of Amazon EC2 instances, from multiple AZs, can connect concurrently to a file system. | A single Amazon EC2 instance can connect to a file system. | | Use cases | Big data and analytics, media processing workflows, content management, web serving, and home directories. | Boot volumes, transactional and NoSQL databases, data warehousing, and ETL. | One way to think about EFS is that it is a service deployed in some regions (not all) of the AWS estate. It does indeed leverage S3 as a persistent storage, but the EFS users have no visibility of a single instance of the server, as the service itself is ephemeral and is deployed throughout all availability zones. This is different from running your own NFS service, whereby you would define and own the instance by name, and then connect it to an S3 bucket that you also own and define. A constraint of EFS for kdb+ is that performance is limited by a predefined burst limit, which is based on the file-system size: | file-system size | aggregate read/write throughput | |---|---| | 100 GiB | • burst to 100 MiB/s for up to 72 min a day • drive up to 5 MiB/s continuously | | 1 TiB | • burst to 100 MiB/s for 12 hours a day • drive 50 MiB/s continuously | | 10 TiB | • burst to 1 GiB/s for 12 hours a day • drive 500 MiB/s continuously | | larger | • burst to 100 MiB/s per TiB of storage for 12 hours a day • drive 50 MiB/s per TiB of storage continuously | So, the EFS solution offers a single name space for your HDB structure, and this can be shared around multiple instances including the ability for one or more nodes to be able to write to the space, which is useful for daily updates. We tested kdb+ performance with a 1-TB file system. Testing was done within the burst limit time periods. The EFS burst performance is limited to 72 minutes per day for a 100-GB file system. Subsequent throughput is limited to 5 MB/sec. | function | latency (mSec) | function | latency (mSec) | |---|---|---|---| hclose hopen | 3.658 | ();,;2 3 | 11.64 | hcount | 3.059 | read1 | 6.85 | Metadata operational latencies - mSecs (headlines) Summary¶ Note the low rate of streaming read performance, combined with very high metadata latencies (1000× that of EBS). The increase in transfer rate for many-threaded compressed data indicates that there is a capped bandwidth number having some influence on the results as well as the operational latency. Consider constraining any use of EFS to temporary store and not for runtime data access. Appendix C – Amazon Storage Gateway (File mode)¶ Amazon Storage Gateway is a pre-prepared AMI/instance that can be provisioned on demand. It allows you to present an NFS layer to the application with S3 as a backing store. The difference between this and EFS is that the S3 bucket is owned and named by you. But fundamentally the drawback with this approach will be the operational latencies. These appear much more significant than the latencies gained for the EFS solution, and may reflect the communication between the file gateway instance and a single declared instance of S3. It is likely that the S3 buckets used by EFS are run in a more distributed fashion. 
One advantage of AWS Gateway is that it is managed by AWS, can be deployed directly from the AWS console, and incurs no additional fees beyond the normal storage costs, which are in line with S3.
| function | latency (mSec) | function | latency (mSec) |
|---|---|---|---|
| hclose hopen | 3.892 | ();,;2 3 | 77.94 |
| hcount | 0.911 | read1 | 7.42 |
Metadata operational latencies - mSecs (headlines)
Summary¶
The throughput appears to run at about 50% of the line rates available, even when run at scale. The AWS gateway exhibits significantly higher operational latency. This manifests as very long wait times when performing an interactive ls -l command from the root of the file system while the file system is under load, sometimes taking several minutes to respond to the directory walk.
Amazon FSx for Lustre¶
Amazon FSx for Lustre is POSIX-compliant and is built on Lustre, a popular open-source parallel filesystem that provides scale-out performance that increases linearly with a filesystem’s size. FSx filesystems scale to hundreds of GB/s of throughput and millions of IOPS. It also supports concurrent access to the same file or directory from thousands of compute instances and provides consistent, sub-millisecond latencies for file operations, which makes it especially suitable for storing and retrieving HDB data. An FSx for Lustre persistent filesystem provides highly available and durable storage for kdb+ workloads. The fileservers in a persistent filesystem are highly available and data is automatically replicated within the same availability zone. An FSx for Lustre persistent filesystem allows you to choose from three deployment options:
PERSISTENT-50
PERSISTENT-100
PERSISTENT-200
Each of these deployment options comes with 50 MB/s, 100 MB/s, or 200 MB/s of baseline disk throughput per TiB of filesystem storage.
Performance¶
We present some output of the nano benchmark, designed to measure storage performance from a kdb+ perspective. In the test we used the PERSISTENT-200 deployment type, 60 TB in size, with SSD as cache storage. The chart below displays streaming and random reads of blocks of different sizes. Streaming read performance is representative of e.g. select statements without a where clause, with the read-ahead setting optimized via -23!. Random reads happen when we extract only a subset of the vectors - e.g. due to a restrictive where constraint. The multiple-node results are displayed below. The random-read performance clearly demonstrates that Lustre file systems scale horizontally across multiple file servers and disks.
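The two read patterns described above can be reproduced from q. A minimal sketch, assuming a date-partitioned HDB with a quote table whose sym column carries the parted attribute (the table, column and date are illustrative):
q)select from quote where date=2020.01.01            / streaming reads: whole column vectors scanned sequentially
q)select from quote where date=2020.01.01,sym=`AAPL  / random reads: only the `AAPL regions of each vector are touched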
Changes in 2.7¶ Below is a summary of changes from V2.6. Commercially licensed users may obtain the detailed change list / release notes from (http://downloads.kx.com) Production release date¶ 2010.08.05 File compression¶ Previous versions of kdb+ were already able to exploit file systems such as ZFS and NTFS that feature compression. kdb+ V2.7 extends this with built-in file compression with a choice of algorithms and compression levels using a common file format across all supported operating systems. For further details, please see the file compression FAQ Symbol Internalization – enhanced scalability¶ Strings stored as a symbol type have always been internalized in kdb+; this means the data of a string is stored once, and strings that have the same value refer to that single copy. The internalization algorithm has been improved to reduce latencies during addition of new strings (symbols). Note that this does not alter the schema choice recommendations that symbol type is suitable for repeating strings and that unique string data should be stored as char vectors. Memory allocator – garbage collection¶ kdb+ V2.5 returned blocks of memory >32MB back to the operating system immediately when they were no longer referenced. This has now been extended to cache those blocks for reuse, allowing the user to explicitly request garbage collection via the command .Q.gc[] . This improves the performance of the allocator to levels seen prior to V2.5, and yet retains the convenience of returning unused memory to the operating system. Garbage collection will automatically be attempted if a memory request causes wsful or if the artificial memory limit (set via cmd line -w option) is hit. IPC Message Validator¶ Previous versions of kdb+ were sensitive to being fed malformed data structures, sometimes resulting in a crash. kdb+ 2.7 validates incoming IPC messages to check that data structures are well formed, reporting 'badMsg and disconnecting senders of malformed data structures. The raw message is captured for analysis via the callback .z.bm . The sequence upon receiving such a message is - calls .z.bm with a single arg, a list of(handle;msgBytes) - close the handle and call .z.pc - signals 'badmsg e.g. with the callback defined .z.bm:{`msg set (.z.p;x);} then after a bad msg has been received, the global var msg will contain the timestamp, the handle and the full message. Note that this check validates only the data structures, it cannot validate the data itself. Malformed here means invalid encoding of the message, usually due to e.g. incorrect custom serialization code, or user code corrupting or overwriting the send buffer. Changes in 2.8¶ Below is a summary of changes from V2.7. Commercially licensed users may obtain the detailed change list / release notes from (http://downloads.kx.com) Production release date¶ 2011.11.21 Streaming File Compression¶ Built-in file compression was added in V2.7, however the compression required that the file existed on disk before it could compress it. This is enhanced in V2.8 which allows files to be compressed as they are written. This is achieved through the overriding of "set", in that the LHS target of set can be a list describing the file or splay target, with the compression parameters. For example (`:ztest;17;2;6) set asc 10000?`3 (`:zsplay/;17;2;6) set .Q.en[`:.;([]sym:asc 10000?`3;time:.z.p+til 10000;price:10000?1000.;size:10000?100)] kdb+ compressed files/splays can also be appended to. e.g. 
q)(`:zippedTest;17;2;6) set 100000?10;`:zippedTest upsert 100000?10;-21!`:zippedTest 'compress new files' mode – active if .z.zd (ZIP defaults) present and valid. .z.zd can be set to an int vector of (blockSize;algo;zipLevel) to apply to all new files which do not have a file extension. .e.g q).z.zd:(17;2;6);`:zfile set asc 10000?`3 -19!x and (`:file;size;algo;level) set x take precedence over .z.zd To reset to not compress new files, use \x , e.g. q)\x .z.zd Mac OSX multithreaded¶ OSX build now supports multithreaded modes (secondary threads, peach , multithreaded input) Improved MMU performance¶ e.g. q)\t x$flip x:380 70000#1.0 Changes in 3.0¶ Below is a summary of changes from V2.8. Commercially licensed users may obtain the detailed change list / release notes from (http://downloads.kx.com) Production release date¶ 2012.05.29 Vectors are no longer limited to 2 billion items as they now have a 64-bit length. The default integer is no longer 32-bit, it is 64-bit. i.e. in k/q 0 is shorthand for 0j . 0i represents a 32-bit int. You should almost never see type i , just like we never see type h . The return type from count , find , floor , … are now 64-bit ints (q longs) Schemas and q script files should be fine. There will be some NUCs when performing operations like 0^int() , as this now returns a vector of 64-bit ints. It may be simplest to scan your code for numeric literals, and where 32-bit ints are intended to be kept, specify them explicitly as such, e.g. 0i^… This can be done prior to upgrade and such tokens are compatible with previous versions. kdb+ 3.0 can read 2.7/2.8 kdb+ file formats without conversion. kdb+ files created by 3.0 cannot be read by previous versions. IPC messaging is similar to previous versions, and no single message can exceed 2 billion bytes. In general, if you choose to upgrade, to ensure full IPC interop you should first upgrade your current kdb+ version to the latest release of that version (assuming versions 2.6 thru 2.8), then update client applications to use the latest drivers and then update the kdb+ version to V3.0. If you do not upgrade the drivers, you will not be able to send timestamp/timespan types to those applications, nor use IPC compression. Shared libraries that are loaded into kdb+ must be recompiled using the new k header, and some function signatures have widened some of their types. When compiling for V3.0, define KXVER=3 , e.g. gcc -D KXVER=3 … At the moment there's no need to recompile standalone apps, i.e. that do not load as a shared lib into kdb+. If you choose to update to the latest k.h header file, and compile with KXVER=3 , ensure that you link to the latest C library (c.o /c.dll ). For legacy apps that are just being maintained, you can continue to use the new header file and compile with KXVER undefined or =2 , and bind with older c.o /c.obj . - kdb+V3.0 has support for WebSockets according to RFC 6455, and has been tested with Chrome and Firefox. It is expected that other browsers will catch up shortly. - A new type – Guid, type 2 – has been added. See Datatypes - plist has been removed. - date+time results in timestamp type; previously date+time resulted in datetime type, which has been deprecated since V2.6. Changes in 3.1¶ Below is a summary of changes from V3.0. Commercially licensed users may obtain the detailed change list / release notes from (http://downloads.kx.com) - improved performance of select with intervals. - improved performance through using more chip-specific instructions (SSE). 
- parallel processing through multiprocess peach . - and numerous other minor fixes and improvements. Production release date¶ 2013.06.09 Not upwardly compatible¶ - distributed each (i.e.q -s -N ) no longer opens connections to secondary processes automatically on ports20000+til N . - result type of E arithmetic for +-* changes from F to E. (avg andwavg still always go to F.mdev promotes to F.) Changes in 3.2¶ Below is a summary of changes from V3.1. Commercially-licensed users may obtain the detailed change list / release notes from downloads.kx.com Production release date¶ 2014.08.22 New¶ - allow views to use their previous value as part of their recalc. - views, in addition to vars, are now also returned by default HTTP handler. - removed limit on number of concurrent vectors which can have `g attr. \c – Console width, height now defaults to 100 1000, previously was 25 80;LINES ,COLUMNS environment vars override it.- allow some messaging threads other than main. - retain `p attr if both args to catenate have`p attr, and parted info conforms - map single splayed files. - appending a sorted vector to a sorted vector on disk now just appends to the file if the sort can be retained. exec by a,b orselect by a,b now sets sort/part attr for those cols. Enhancement to that released on 2014.02.07, now multiple cols- Support automatic WebSockets compression according to (https://tools.ietf.org/html/draft-ietf-hybi-permessage-compression-17) - added dsave to make it easy to.Q.en`p#sym and save; expectssym as first col. rload changed to map all singleton splayed tables; eliminates all the open, map, unmap, close overhead.- expanded mlim (number of mapped nested files) from 251 to 32767. - Added WebSocket client functionality. .z.ws must be defined before opening a WebSocket - allow single escape \ for\/ in char vector (to support JSON) - JSON [de]serialization is now part of q.k - uses two file descriptors per compressed file. This is a result of the change in design to accommodate decompressing a file from multiple threads Not upwardly compatible¶ - views cannot be triggered for recalc from socket threads – signals 'threadview . - view loop detection is no longer performed during view creation; now is during the view recalc. var ,dev ,cov ,cor andenlist are now reserved words.`g attr can be set on a vector in main thread only. Changes in 3.3¶ Below is a summary of changes from V3.2. Commercially-licensed users may obtain the detailed change list / release notes from downloads.kx.com Production release date¶ 2015.06.01 New¶ - Many operations are now 10 times faster – performance of avg … by , avg for types G H, sum G, grouping by G H. Alsodistinct /find for G H. +/ &/ |/ = < are 10-20x faster for GH,avg is a lot faster for GHIJ.+/I will give0Ni on overflow.- Faster and stricter JSON parser. It is approx 50-100x faster and can process Unicode. `g attr can (again) be created in threads other than main thread. In V3.2, we removed the limit on number of concurrent vectors which can have`g attr, and a side-effect was that`g attr could be created on the main thread only. That restriction has now been removed.- Read-only eval of parse tree. The new keyword reval , backed by-24! , behaves similarly toeval (-6! ), but evaluates the parse tree in read-only mode, as if the cmd line option-b were active for the duration of the reval call. This should prove useful for access control. - Improve performance of on-disk sort for un-cached splayed tables. - Allow processing of http://host:port/.json?query requests. 
- Columns of nested enumerated syms with key `sym now report asS in meta. - Splayed table count is now taken from first column in table. (Previously it was taken from the last column). - Distributed each will revert to localeach if.z.pd is not defined. - Added .z.X , which provides the raw, unfiltered cmd line. - Added .Q.Xf to write empty nested files. .Q.id now handles columns that start with a numeric character. Not upwardly compatible¶ reval is now a reserved word.- SSE-enabled builds (v64,l64,m64) now require SSE4.2 - WebSocket open/close callbacks are now via .z.wo /.z.wc instead of.z.po /.z.pc . Changes in 3.4¶ Below is a summary of changes from V3.3. Commercially licensed users may obtain the detailed change list / release notes from http://downloads.kx.com Production release date¶ 2016.05.31 New¶ - IPC message size limit raised from 2GB to 1TB. - supports IPC via Unix domain sockets for lower latency, higher throughput local IPC connections. - can use both incoming and outgoing encrypted connections using Secure Sockets Layer(SSL)/Transport Layer Security(TLS). - can read directly from NamedPipes (e.g. avoid unzipping a CSV to disk, can pipe it directly into kdb+). varchar~\:x andx~/:varchar are now ~10x faster.- improved performance by ~10x for like on nested char vectors on disk. - can utilize the snappy compression algorithm as algo #3 for File Compression. - certain vector types can now be updated efficiently, directly on disk, rather than having to rewrite the whole file on change. - added async broadcast as -25! (handles;msg) which serializes the msg once, queuing it as async msg to each handle. parse can now handle k in addition to q code..Q.en can now handle lists of sym vectors Not upwardly compatible¶ ema is now a reserved word.
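A minimal sketch of two of the V3.4 additions above (Unix-domain-socket connections and async broadcast via -25!), assuming local processes are already listening on the ports shown:
q)h:hopen `:unix://5010      / lower-latency local IPC over a Unix domain socket
q)hs:hopen each 5011 5012    / two conventional localhost handles
q)-25!(hs;"2+2")             / serialize the message once, queue it asynchronously on every handle in hs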
exp , xexp ¶ Raise to a power exp ¶ Raise e to a power exp x exp[x] Where x is numeric- \(e\) is the base of natural logarithms returns as a float \(e^x\), or null if x is null. q)exp 1 2.718282 q)exp 0.5 1.648721 q)exp -4.2 0 0.1 0n 0w 0.01499558 1 1.105171 0n 0w q)exp 00:00:00 00:00:12 12:00:00 1 162754.8 0w exp is a multithreaded primitive. Implicit iteration¶ exp is an atomic function. It applies to dictionaries and tables q)exp(1;2 3) 2.718282 7.389056 20.08554 q)k:`k xkey update k:`abc`def`ghi from t:flip d:`a`b!(10 -21 3;4 5 -6) q)exp d a| 22026.47 7.58256e-10 20.08554 b| 54.59815 148.4132 0.002478752 q)exp t a b ----------------------- 22026.47 54.59815 7.58256e-10 148.4132 20.08554 0.002478752 q)exp k k | a b ---| ----------------------- abc| 22026.47 54.59815 def| 7.58256e-10 148.4132 ghi| 20.08554 0.002478752 Domain and range¶ domain b g x h i j e f c s p m d z n u v t range f . f f f f f f f . f f f z f f f f Range: fz xexp ¶ Raise x to a power x xexp y xexp[x;y] Where x and y are numerics, returns as a float where x is - non-negative, xy - null or negative, 0n q)2 xexp 8 256f q)-2 2 xexp .5 0n 1.414214 q)1.5 xexp -4.2 0 0.1 0n 0w 0.1821448 1 1.04138 0n 0w The calculation is performed as exp y * log x . If y is integer, this is not identical to prd y#x . q)\P 0 q)prd 3#2 8 q)2 xexp 3 7.9999999999999982 q)exp 3 * log 2 7.9999999999999982 xexp is a multithreaded primitive. Implicit iteration¶ xexp is an atomic function. It applies to dictionaries and keyed tables q)3 xexp(1;2 3) 3f 9 27f q)k:`k xkey update k:`abc`def`ghi from t:flip d:`a`b!(10 -21 3;4 5 -6) q)3 xexp d a| 59049 9.559907e-11 27 b| 81 243 0.001371742 q)3 xexp k k | a b ---| ------------------------ abc| 59049 81 def| 9.559907e-11 243 ghi| 27 0.001371742 Domain and range¶ xexp| b g x h i j e f c s p m d z n u v t ----| ----------------------------------- b | f . f f f f f f . . . . . . . . . . g | . . . . . . . . . . . . . . . . . . x | f . f f f f f f . . . . . . . . . . h | f . f f f f f f . . . . . . . . . . i | f . f f f f f f . . . . . . . . . . j | f . f f f f f f . . . . . . . . . . e | f . f f f f f f . . . . . . . . . . f | f . f f f f f f . . . . . . . . . . c | . . . . . . . . . . . . . . . . . . s | . . . . . . . . . . . . . . . . . . p | . . . . . . . . . . . . . . . . . . m | . . . . . . . . . . . . . . . . . . d | . . . . . . . . . . . . . . . . . . z | . . . . . . . . . . . . . . . . . . n | . . . . . . . . . . . . . . . . . . u | . . . . . . . . . . . . . . . . . . v | . . . . . . . . . . . . . . . . . . t | . . . . . . . . . . . . . . . . . . Range: f fby ¶ Apply an aggregate to groups (aggr;d) fby g Where aggr is an aggregate functiond andg are conforming vectors collects the items of d into sublists according to the corresponding items of g , applies aggr to each sublist, and returns the results as a vector with the same count as d . When to use fby fby is designed to collapse cascaded select … from select … by … from t expressions into a single select … by … from … where … fby … Think of fby when you find yourself trying to apply a filter to the aggregated column of a table produced by select … by … . q)show dat:10?10 4 9 2 7 0 1 9 2 1 8 q)grp:`a`b`a`b`c`d`c`d`d`a q)(sum;dat) fby grp 14 16 14 16 9 4 9 4 4 14 Collect the items of dat into sublists according to the items of grp . q)group grp a| 0 2 9 b| 1 3 c| 4 6 d| 5 7 8 q)dat group grp a| 4 2 8 b| 9 7 c| 0 9 d| 1 2 1 Apply aggr to each sublist. 
q)sum each dat group grp a| 14 b| 16 c| 9 d| 4 The result is created by replacing each item of grp with the result of applying aggr to its corresponding sublist. q)(sum;dat) fby grp 14 16 14 16 9 4 9 4 4 14 q)(sum each dat group grp)grp / w/o fby 14 16 14 16 9 4 9 4 4 14 Vectors¶ q)dat:2 5 4 1 7 / data q)grp:"abbac" / group by q)(sum;dat) fby grp / apply sum to the groups 3 9 9 3 7 q)(first;dat) fby grp / apply first to the groups 2 5 5 2 7 Tables¶ When used in a select , usually a comparison function is applied to the results of fby , e.g. select from t where 10 < (sum;d) fby a q)\l sp.q q)show sp / for reference s p qty --------- s1 p1 300 s1 p2 200 s1 p3 400 s1 p4 200 s4 p5 100 s1 p6 100 s2 p1 300 s2 p2 400 s3 p2 200 s4 p2 200 s4 p4 300 s1 p5 400 Sales where quantity > average quantity by part: q)select from sp where qty > (avg;qty) fby p s p qty --------- s2 p2 400 s4 p4 300 s1 p5 400 Sales where quantity = maximum quantity by part: q)select from sp where qty = (max;qty) fby p s p qty --------- s1 p1 300 s1 p3 400 s1 p6 100 s2 p1 300 s2 p2 400 s4 p4 300 s1 p5 400 To group on multiple columns, tabulate them in g . q)update x:12?3 from `sp `sp q)sp s p qty x ----------- s1 p1 300 0 s1 p2 200 2 s1 p3 400 0 s1 p4 200 1 s4 p5 100 0 s1 p6 100 0 s2 p1 300 0 s2 p2 400 2 s3 p2 200 2 s4 p2 200 2 s4 p4 300 1 s1 p5 400 1 q)select from sp where qty = (max;qty) fby ([]s;x) s p qty x ----------- s1 p2 200 2 s1 p3 400 0 s4 p5 100 0 s2 p1 300 0 s2 p2 400 2 s3 p2 200 2 s4 p2 200 2 s4 p4 300 1 s1 p5 400 1 fby before V2.7 In V2.6 and below, fby ’s behavior is undefined if the aggregation function returns a list; it usually signals an error from the k definition of fby . However, if the concatenation (raze ) of all list results from the aggregation function has the same length as the original vectors, a list of some form is returned, but the order of its items is not clearly defined. 1: File Binary¶ Read and parse, or write bytes There are 10 types of people: those who use binary arithmetic and those who don’t. Read Binary¶ x 1: y 1:[x;y] Where x is a 2-item list (a string of types and an int vector of widths) whose order determines whether the data is parsed as little-endian or big-endian, and y is either a - file descriptor to repeatedly read all available records (specified by x ) from a file - 3-element list containing a file descriptor, offset (long) and length (long), which repeatedly reads all available records (specified by x ) from the file, starting 'offset' bytes from the start of the file and stopping after the given byte length - string - byte sequence returns the content of y as a matrix. q)(enlist 4;enlist"i")1:0x01000000 / big endian 16777216 q)(enlist"i";enlist 4)1:0x01000000 / little endian 1 q)show pi:(enlist"f";enlist 8)1:0x7fbdc282fb210940 / pi as little endian 64-bit float 3.141593 q).Q.s1 pi / 1×1 matrix ",,3.141593" Read two records containing an integer, a character and a short from a byte sequence. Note the integer is read with a 4-byte width, the character with 1 byte and the short with 2 bytes. (When reading byte sequences, recall that a byte is 2 hex digits.) q)("ich";4 1 2)1:0x00000000410000FF00000042FFFF 0 255 A B 0 -1 q)("ich";4 1 2)1:"arthur!"
1752461921 u 8562 With offset and length : /load 500000 records, 100000 at a time q)d:raze{("ii";4 4)1:(`:/tmp/data;x;100000)}each 100000*til 5 Since 4.1t 2022.11.01,4.0 2022.12.02 quotes are no longer stripped from y q)("**";4 4)1:"abcd\"ef\"" "abcd" "\"ef\"" Column types and widths¶ b boolean 1 g guid 16 x byte 1 h short 2 i int 4 j long 8 e real 4 f float 8 c char 1 s symbol n p timestamp 8 m month 4 d date 4 z datetime 8 n timespan 8 u minute 4 v second 4 t time 4 (blank) skip Q for Mortals §11.5.1 Fixed-Width Records Multithreaded Load¶ Binary load can use multiple threads when kdb+ is running in multithreaded mode Since 4.1t 2021.09.28. Save Binary¶ x 1: y 1:[x;y] Where x is afilesymbol or (since 4.1t 2023.04.17) a 4 item list (filesymbol , logical block size, compression algorithm and compression level) to write compressed datay is data to write writes bytes to filesymbol and returns it. If filesymbol - does not exist, it is created, with any required directories - exists, it is overwritten q)`:hello 1: 0x68656c6c6f776f726c64 `:hello Compression¶ Since 4.1t 2023.04.17 data can be compressed while writing, by including compression parameters q)(`:file;17;2;9)1:100#0x0 `:file
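To tie the two uses of 1: together, here is a minimal round-trip sketch; the file name ints.bin is purely illustrative. Five ints are written as raw little-endian bytes with the save form, then parsed back with the fixed-width read form:
q)`:ints.bin 1: 0x0100000002000000030000000400000005000000 / write 5 ints as raw little-endian bytes
`:ints.bin
q)(enlist"i";enlist 4)1:`:ints.bin / read back: type "i", width 4, little-endian order
1 2 3 4 5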
Code profiler¶ Experimental feature Currently implemented for x86_64 Linux (kdb+ l64 ). kdb+ 4.0 includes an experimental built-in call-stack snapshot primitive that allows building a sampling profiler. A sampling profiler is a useful tool for low-overhead instrumentation of code performance characteristics. For inspiration, we have looked at tools like Linux perf . A new function, .Q.prf0 , returns a table representing a snapshot of the call stack at the time of the call in another kdb+ process. Warning The process to be profiled must be started from the same binary as the one running .Q.prf0 , otherwise 'binary mismatch is signalled. .Q.prf0 stops the target process for the duration of snap-shotting. Time per call is mostly independent of call-stack depth. You should be able to do at least 100 samples per second with less than 5% impact on target process performance. The profiler has mostly the same view of the call stack as the debugger. .Q.prf0 returns a table with the following columns: name assigned name of the function file path to the file containing the definition line line number of the definition col column offset of the definition, 0-based text function definition or source string pos execution position (caret) within text For example, given the following /w/p.q : .z.i a:{b x};b:{c x} c:{while[1;]} a` running it with \q /w/p.q , setting pid to the printed value, the following call-stack snapshot is observed (frames corresponding to system and built-in functions can be filtered out with the .Q.fqk predicate on file name): q)select from .Q.prf0 pid where not .Q.fqk each file name file line col text pos ----------------------------------------- "" "/w/p.q" 4 0 "a`" 0 "..a" "/w/p.q" 2 2 "{b x}" 1 "..b" "/w/p.q" 2 10 "{c x}" 1 "..c" "/w/p.q" 3 2 "{while[1;]}" 7 Permissions¶ By default on most Linux systems, a non-root process can only profile (using ptrace ) its direct children. \q starts a child process, so you should be able to profile these with no system changes. If you wish to profile unrelated processes, you have a few options: - change kernel.yama.ptrace_scope as detailed in the link above - give the q binary permission to profile other processes with setcap cap_sys_ptrace+ep $QHOME/l64/q - run the profiling q as root Docker To avoid the limitation to direct child processes: docker run … --cap-add=SYS_PTRACE Usage¶ Typically a sampling profiler collects call-stack snapshots at regular intervals. It is convenient to use q’s timer for that: .z.ts:{0N!.Q.prf0 pid};system"t 10" /100Hz There are a few toys provided. Their usages follow the same pattern: they accept a single argument, either a script file name to run, or a process ID to which to attach. In the former case, a new q process is started with \q running the specified file. Exit with \\ . - top.q - shows an automatically updated display of functions most heavily contributing to the running time (as measured by number of samples in which they appear). self is the percentage of time spent in the function itself;total includes all descendants. - record.q - writes the samples to disk in a splayed table prof , one sample per row. With prof thus recorded, try `:prof.txt 0:(exec";"sv'ssr[;"[ ;]";"_"]each'name from`:prof),\:" 1" to generate prof.txt suitable for feeding into FlameGraph or speedscope for visualization. Walk-through¶ Let’s apply the profiler to help us optimize an extremely naïve implementation of Monte-Carlo estimation of π. We'll use record.q and speedscope as described above. 
Start with the following /w/pi.q consisting of purely scalar code: point:{-1+2?2.} dist:{sqrt sum x*x} incircle:{1>dist x} run:{n:0;do[x;n+:incircle point[]];4*n%x} \t 0N!run 10000000 This takes ~12s on a commodity laptop. Get the profile (running q record.q /w/pi.q ), convert to prof.txt and open the result in speedscope. All three tabs offer useful views of the execution profile: - Time Order presents call stack - Left Heavy aggregates similar call stacks together, like FlameGraph - Sandwich shows an execution profile similar to top.q In this very simple case, Sandwich suffices, but feel free to explore different representations. run takes most of the time (Self column), and we can improve it by getting rid of the scalar loop: run:{4*(sum incircle each point each til x)%x} This gets us a modest increase in performance, and looking at the profile again, we see that run no longer dominates the profile; now point and incircle do. We note that incircle is already vectorized, so we should focus on getting a better random-point sampling function. points:{2 0N#-1+(2*x)?2.} run:{4*(sum incircle points x)%x} This runs in ~400ms – around a 30× improvement. The rest is left as an exercise for the reader. Programming examples¶ HTTP client request and parse string result into a table¶ Q has a built-in HTTP request command, which follows the syntax `:http://host:port "string to send as HTTP method etc" The string-to-send can be anything within the HTTP protocol that the HTTP server will understand (see jmarshall.com/easy/http). kdb+ does not add to nor encode the string to send, nor does it decode the response; it just returns the raw data. q)/ string to send q)s2s:"GET /mmz4281/1314/E0.csv HTTP/1.0\r\nhost:www.football-data.co.uk\r\n\r\n" q)data:(`$":http://www.football-data.co.uk") s2s q)(" SSSIIIIII IIIIIIIIIIII"; ",")0:data This example function queries Yahoo Financials and produces a table of trading info for a list of stocks during the last few days. The list of stocks and the number of days are parameters of the function. The function definition contains examples of: - date manipulation - string creation, manipulation and pattern matching - do loops - sending HTTP requests to an HTTP server - parsing tables from strings - queries with user-defined functions - parsing dates from strings Sample use: q)yahoo[10;`GOOG`AMZN] Date Open High Low Close Volume Sym ---------------------------------------------------- 2006.08.21 28.7 28.98 27.97 28.13 5334900 AMZN 2006.08.21 378.1 379 375.22 377.3 4023300 GOOG 2006.08.22 28.14 28.89 28.05 28.37 4587100 AMZN 2006.08.22 377.73 379.26 374.84 378.29 4164100 GOOG 2006.08.23 28.56 28.89 27.77 28.14 4726400 AMZN 2006.08.23 377.64 378.27 372.66 373.43 3642300 GOOG ... The above function definition has been adapted from a more compact one by Simon Garland. The long version adds comments, renames variables, and splits computations into smaller steps so that the code is easier to follow. Compact version: KxSystems/cookbook/yahoo_compact.q An efficient query to extract last n ticks for a particular stock from quote table¶ The quote table is defined as follows: q)quote: ([] stock:();time:();price:()) For fast (constant-time) continuous queries on last n ticks we have to maintain the data nested. For our quote table, we define q)q: ([stock:()]time:();price:()) where, for each row, the columns time and price contain a list, rather than an atom (i.e., the columns time and price are lists of lists).
This table is populated as follows: q)q: select time, price by stock from quote Now, to get the last 5 quotes for Google q)select -5#'time,-5#'price from q where stock=`GOOG This query executes in constant time. If you want the quotes LIFO, q)select reverse each -5#'time, reverse each -5#'price from q where stock=`GOOG This one is also constant-time. Those iterators… Why do we use each and Each Both? Because the columns time and price are lists of lists, not lists of atoms. An efficient query to know on which days a symbol appears¶ Issuing a single select that covers a whole year would be too inefficient. You could issue a separate select for each date, but if you are covering a year or two, the cumulative time quickly adds up. The design of the following query is based on efficiency (both time and memory) on a large, parallel database. A straightforward implementation of this query takes over a second per month. The version shown here takes 50ms for a whole year. (There will be an initial warm-up cost for a new q instance, but once it has been issued, queries with other symbols take 50ms). getDates:{[table;testSyms;startDate;endDate] symsByDate:select distinct sym by date from table[]where date within(startDate;endDate); firstSymList:exec first sym from symsByDate; val:(@[type[firstSymList]$;;`badCast]each(),testSyms)except`badCast; exec date from(select date,val{(x in y)|/}/:sym from symsByDate)where sym=1b} Sample usage: q)getDates[`quote;`GOOG`AMZN;2005.01.01;2006.02.01] A function to convert a table into XML¶ /given a value of type number, string or time, make an XML element with the type and the value typedData:{ typeOf:{(string`Number`String`Time)[0 10 13h bin neg type x]}; "<data ss:type='" , (typeOf x) ,"'>",string[x],"</data>"} / wrap a value around a tag tagit:{[tagname; v] tagname: string [tagname]; "<",tagname,">", v,"</",tagname,">"}; /convert a table (of numbers, strings and time values) into xml toxml:{ f: {flip value flip x}; colNames: enlist cols x; tagit[`worksheet]tagit[`table]raze(tagit[`row] raze tagit[`cell] each typedData each)each colNames,f x} Sample usage: q)t stock price ----------- ibm 102 goog 103 q)toxml t "<worksheet><table><row><cell><data ss:type='String'>stock</data></cell><cell.. The result looks as follows after some space is added by hand: <worksheet> <table> <row> <cell><data ss:type='String'>stock</data></cell> <cell><data ss:type='String'>price</data></cell> </row> <row> <cell><data ss:type='String'>ibm</data></cell> <cell><data ss:type='Number'>102</data></cell> </row> <row> <cell><data ss:type='String'>goog</data></cell> <cell><data ss:type='Number'>103</data></cell> </row> </table> </worksheet> Computing Bollinger bands¶ Bollinger bands consist of: - a middle band being a N-period simple moving average - an upper band at K times a N-period standard deviation above the middle band - a lower band at K times a N-period standard deviation below the middle band Typical values for N and K are 20 and 2, respectively. bollingerBands: {[k;n;data] movingAvg: mavg[n;data]; md: sqrt mavg[n;data*data]-movingAvg*movingAvg; movingAvg+/:(k*-1 0 1)*\:md} The above definition makes sure nothing is calculated twice, compared to a more naïve version. Sample usage: q)vals: 20 + (100?5.0) q)vals 20.32759 21.56053 23.19912 24.08458 24.73181 22.88464 22.09355 22.54231 20.81.. q)bollingerBands[2;20] vals 20.32759 19.71112 19.34336 19.38947 19.53251 19.83184 19.90732 20.06612 19.74.. 20.32759 20.94406 21.69575 22.29296 22.78073 22.79804 22.6974 22.67802 22.47.. 
20.32759 22.177 24.04813 25.19644 26.02894 25.76425 25.48749 25.28992 25.19.. Parallel correlation of time series¶ Parallelism is achieved by the use of peach . k)comb:{(,!0){,/(|!#y),''y#\:1+x}/x+\\(y-x-:1)#1} / d - date / st, et - start time, end time / gt - granularity type `minute `second `hour .. / gi - granularity interval (for xbar) / s - symbols pcorr:{[d;st;et;gt;gi;s] data:select last price by sym,gi xbar gt$time from trade where date=d,sym in s,time within(st,et); us:select distinct sym from data; ut:select distinct time from data; if[not(count data)=(count us)*count ut; / if there are data holes.. filler:first 1#0#exec price from data; data:update price:fills price by sym from`time xasc(2!update price:filler from us cross ut),data; if[count ns:exec sym from data where time=st,null price; data:update price:reverse fills reverse price by sym from data where sym in ns]]; PCORR::0!select avgp:avg price,devp:dev price,price by sym from data; data:(::);r:{.[pcorrcalc;;0]PCORR x}peach comb[2]count PCORR;PCORR::(::);r} pcorrcalc:{[x;y]`sym0`sym1`co!(x[`sym];y[`sym];(avg[x[`price]*y[`price]]-x[`avgp]*y[`avgp])%x[`devp]*y[`devp])} matrix:{ / convert output from pcorr to matrix u:asc value distinct exec sym0 from x; / sym0 has 1 more element than sym1! exec u#(value sym1)!co by value sym0 from x} Sample usage: d:first daily.date; st:10:00; et:11:30; gt:`second; gi:10; s:`GOOG`MSFT`AAPL`CSCO`IBM`INTL; s:100#exec sym from `size xdesc select sym,size from daily where date=d; show pcorr[d;st;et;gt;gi;s]; show matrix pcorr[d;st;et;gt;gi;s]
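As a complement to the pcorr implementation above, the following minimal sketch computes the same kind of pairwise result with the built-in cor keyword and peach over index pairs; the simulated series px and the helper names pairs and pcor are illustrative only, and peach falls back to each unless secondary threads are enabled with -s:
px:`AAA`BBB`CCC!{100+sums -0.5+x?1.} each 3#1000 / three simulated price series
pairs:{raze {x,/:(x+1)_til y}[;x] each til x} / all index pairs i<j
pcor:{[d;ij] s:key[d]ij; (s 0;s 1;cor . d s)} / correlate one pair of series
flip `sym0`sym1`co!flip pcor[px] peach pairs count px / table of sym0, sym1, correlation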
Market Fragmentation: A kdb+ framework for multiple liquidity sources¶ kdb+ plays a large part in the trading and risk management activities of many financial institutions around the world. For a large-scale kdb+ system to be effective, it must be designed efficiently so that it can capture and store colossal amounts of data. However, it is equally important that the system provides intelligent and useful functionality to end-users. In the financial world, increasing participation and advances in technology are resulting in a progressively more fragmented market with liquidity being spread across many trading venues. It is therefore crucial that a kdb+ system can store and analyze information for a financial security from all available sources. This paper presents an approach to the challenge of consolidating share price information for equities that trade on multiple venues. Motivation¶ Since the inception of the Markets in Financial Instruments Directive (MiFID), Multilateral Trading Facilities (MTFs) have sprung up across Europe. Alternative Trading Systems are the US equivalent. Prior to the MiFID, trading typically took place on national exchanges. Other types of trading venues in existence today include crossing networks and dark pools. All of these venues compete with each other for trading activity. For market participants, the increased fragmentation forces more sophisticated trading strategies in the form of smart order routing. A number of additional factors add to the argument that the ability to consolidate real-time market data in-house can add real value to the trading strategies of an organization, including technical glitches at exchanges which can lead to suboptimal pricing and order matching. Breakdown of the traded volume that occurred on the main trading venues for all EMEA equities in December 2012 Bearing in mind that the data output by a kdb+ system can often be the input into a trading decision or algorithm, the timely and accurate provision of consolidated real-time information for a security is vital. The data¶ The goal for kdb+ financial engineers is to analyze various aspects of a stock’s trading activity at each of the venues where it trades. In the equities world, real-time market data vendors provide trade, level-1 and level-2 quote feeds for these venues. Securities trading on different venues will use a different suffix, enabling data consumers to differentiate between venues – for example, Vodafone shares traded on the LSE are reported by Reuters on the Reuters Instrument Code (RIC) VOD.L, whereas shares of the same company traded on Chi-X are recorded on VODl.CHI. In the FX world, we might have feed handlers connecting directly to each ECN. The symbol column in our table would generally be a currency pair and we might use a venue column to differentiate between venues. Regardless of the asset class, in order to get a complete picture of a security’s trading activity the kdb+ system must collect and store data for all venues for a given security. For the purposes of this paper, we will assume standard equity trade and quote tables. We will use Reuters cash equities market data in the examples and assume that our feed handler subscribes to the RICs we require and that it publishes the data to our tickerplant. The building blocks¶ In order to be able to effectively analyze and consolidate data for securities from multiple venues, we must first have in place an efficient mechanism for retrieving data for securities from a single venue. 
Here we introduce the concept of a gateway: a kdb+ process that acts as a connection point for end- users. The gateway’s purpose is to: - accept client queries and/or calls made to analytic functions; - to dispatch appropriate requests for data to the RDB or HDB, or both; and - to return the data to the client. Data retrieval, data filtering, computation and aggregation, as a general rule, should all be done on the database. The gateway, once it has retrieved this data, can enrich it. Examples of enrichment are time zone conversion, currency conversion and adjustment for corporate actions. We make the case below that consolidation at the security level is also best done in the gateway. Analytic setup¶ A typical analytic that end-users might wish kdb+ to provide is an interval function, which gives a range of different analytic aggregations for a stock or list of stocks based on data between a given start time and end time on a given date, or for a range of dates. For simplicity, we will focus on analytics which span just one date. Let us call this function getIntervalData . Examples of the analytics the function could provide are open , high , low , close , volume , avgprice , vwap , twap , meanspread , spreadvolatility , range , and lastmidprice . We would like the ability to pass many different parameters into this function, and potentially more than the maximum number allowed in q, which is 8. We may also decide that some parameters are optional and need not be specified by the user. For these reasons we will use a dictionary as the single parameter, rather than defining a function signature with a specific number of arguments. A typical parameter dictionary looks like the following: q)params symList | `VOD.L date | 2013.01.15 startTime| 08:30 endTime | 09:30 columns | `vwap`volume Data filtering¶ It is important at this stage that we introduce the notion of data filtering. Trading venues have a regulatory requirement to report all trades whether they have been executed electronically or over the counter. Not all trades, particularly in the equities world, should be included in all calculations. For example, one user may want only lit order book trades to be included in his/her VWAP calculation, but another user may prefer that all order book trades appear in the VWAP figure. Monthly breakdown of on-and-off order book traded volume across all EMEA equity markets for the year to December 2012 Market data vendors break trade data into different categories including auction, lit, hidden and dark order book, off order book and OTC trades, and use data qualifier flags to indicate which category each trade falls into. Data should be filtered based on these qualifiers, and according to the end-user’s requirements, prior to aggregation. This specification should be configurable within the parameter dictionary passed to the getIntervalData function. The various qualifier flags used to filter the raw data for each category can be held in configuration files on a per-venue basis and loaded into memory in the database so that filtering can be done during execution of the query by the use of a simple utility function. An approach to storing this configuration is outlined as follows: Define a dictionary, .cfg.filterrules , keyed by filtering rule, e.g. Order Book, Total Market, Dark Trades, where the corresponding values are tables holding the valid qualifier flags for each venue for that rule. q).cfg.filterrules TM | (+(,`venue)!,`LSE`BAT`CHI`TOR)!+(,`qualifier)!,(`A`Auc`B`C`X`DARKTRADE`m.. 
OB | (+(,`venue)!,`LSE`BAT`CHI`TOR)!+(,`qualifier)!,(`A`Auc`B`C`m;`A`AUC`OB`C.. DRK| (+(,`venue)!,`LSE`BAT`CHI`TOR)!+(,`qualifier)!,(,`DARKTRADE;,`DARK;,`DRK.. q).cfg.filterrules[`OB] venue| qualifier ---- | -------------- LSE | `A`Auc`B`C`m BAT | `A`AUC`OB`C CHI | `a`b`auc`ob TOR | `A`Auc`X`Y`OB Assuming we have access to a params dictionary, we could then construct our query as follows. select vwap:wavg[size;price], volume:sum[size] by sym from trade where date=params[`date], sym in params[`symList], time within (params`startTime;params`endTime), .util.validTrade[sym;qualifier;params`filterRule] .util.validTrade makes use of the above config data, returning a Boolean indicating whether the qualifier flag for a given record is valid for that record’s sym according to the given filter rule. Due to the fact that we have defined valid qualifiers on a per-rule per-venue basis, we will of course require a method for retrieving a venue for a given sym. This is best done through the use of a simple dictionary lookup. q).cfg.symVenue BARCl.BS | BAT BARCl.CHI| CHI BARC.L | LSE BARC.TQ | TOR VODl.BS | BAT VODl.CHI | CHI VOD.L | LSE VODl.TQ | TOR q).util.getVenue[`VOD.L`BARC.BS] `LSE`BAT This data processing takes place on the RDB and/or HDB. The query will have been dispatched, with parameters, by the gateway. We will now demonstrate a mechanism for consolidating data from multiple venues by passing an additional multiMarketRule parameter in our call to the gateway. Reference data¶ Having already briefly touched on it, we introduce the role of reference data more formally here. With the above analytic setup, we retrieve data only for the given symbol/s passed to the function. When a multimarket parameter is included however, we need to retrieve and aggregate data for all instrument codes associated with the entity mapped to the sym parameter. With that in mind, the first thing we need to have is the ability to look up the venues on which a given stock trades. We also need to know the instrument codes used for the stock in question on each venue. As described above (The data), these will differ. This information is usually located in a reference data system. The reference data system could be a component of the kdb+ system or it could be an external application within the bank. Regardless of where the reference data is sourced, it should be processed and loaded into memory at least once per day. The databases (RDB and HDB) require access to the reference data, as does the gateway. In terms of size, reference data should be a very small fraction of the size of market data, so memory overhead will be minimal. The most effective layout is to have a table keyed on sym, mapping each sym in our stock universe to its primary sym. By primary sym, we mean the instrument code of the company for the primary venue on which it trades. For example, VOD.L’s is simply VOD.L since Vodafone’s primary venue is the LSE whereas VODl.CHI’s is also VOD.L. q).cfg.multiMarketMap sym | primarysym venue ---------| ---------------- BARCl.BS | BARC.L BAT BARCl.CHI| BARC.L CHI BARC.L | BARC.L LSE BARC.TQ | BARC.L TOR VODl.BS | VOD.L BAT VODl.CHI | VOD.L CHI VOD.L | VOD.L LSE VODl.TQ | VOD.L TOR Consolidating the data¶ Extending parameters¶ Providing a consolidated analytic for a stock requires that we query the database for all syms associated with an entity. 
With the above reference data at our disposal, we can now write a utility function, .util.extendSymsForMultiMarket , which will expand the list of syms passed into the params dictionary. This function should be called if and only if a multiMarketRule parameter is passed. We should also be careful to preserve the original sym list passed to us, as we will aggregate back up to it during the final consolidation step. The following is an implementation of such a utility function: .util.extendSymsForMultiMarket:{[symList] distinct raze {update origSymList:x from select symList:sym from .cfg.multiMarketMap where primarysym in .cfg.multiMarketMap[x]`primarysym } each (),symList } q).util.extendSymsForMultiMarket[`BARC.L`VOD.L] symList origSymList --------------------- BARCl.BS BARC.L BARCl.CHI BARC.L BARC.L BARC.L BARC.TQ BARC.L VODl.BS VOD.L VODl.CHI VOD.L VOD.L VOD.L VODl.TQ VOD.L We can now use this utility in the getIntervalData function defined on the gateway so that we dispatch our query to the database with an extended symList , as follows: if[params[`multiMarketRule]~`multi; extended_syms:.util.extendSymsForMultiMarket[params`symList]; params:@[params;`symList;:;extended_syms`symList]; ]; Once we have adjusted the parameters appropriately, we can dispatch the query to the database/s in the exact same manner as before. The only thing that happens differently is that we are querying for additional syms in our symList . This will naturally result in a slightly more expensive query. Multi-market aggregation¶ The final, and arguably the most critical step in consolidating the data is to aggregate our analytics at the entity level as opposed to the sym level. Having dispatched the query to the database/s, we now have our data held in memory in the gateway, aggregated on a per-sym basis. Assuming that all venues are trading in the same currency, all that remains is for us to aggregate this data further up to primary sym level. We will use configuration data to define multi-market aggregation rules. Multiple currencies If the various venues trade in different currencies, we would invoke a function in the gateway to convert all data to a common currency prior to aggregation. This method assumes that FX risk is hedged during the lifetime of the trade. The method of aggregation is dependent on the analytic in question. Therefore it is necessary for us to define these rules in a q file. For a volume analytic, the consolidated volume is simply the sum of the volume on all venues. The consolidated figure for a maximum or minimum analytic will be the maximum or minimum of all data points. We need to do a little more work however for a weighted-average analytic such as a VWAP. Given a set of VWAPs and volumes for a stock from different venues, the consolidated VWAP is given by the formula: ( Σ vwap × volume ) ÷ ( Σ volume ) It is evident that we need access to the venue volumes as well as the individually-calculated VWAPs in order to weight the consolidated analytic correctly. This means that when a consolidated VWAP is requested, we need the query to return a volume column as well as a vwap column to our gateway. Similarly, for a consolidated real-time snapshot of the midprice (let’s call it lastmidprice ), rather than working out the mid price for a stock on each venue, we need to return the last bid price and the last ask price on each venue. We then take the maximum of the bid prices and the minimum of the ask prices. 
This represents the tightest spread available and from there we can work out a meaningful consolidated mid price. The knowledge required for additional column retrieval could be implemented in a utility function, .util.extendExtraColParams , prior to the query dispatch. Here we present a list of consolidation rules for a few common analytics. .cfg.multiMarketAgg:()!(); .cfg.multiMarketAgg[`volume]:"sum volume" .cfg.multiMarketAgg[`vwap]:"wavg[volume;vwap]" .cfg.multiMarketAgg[`range]: "(max maxprice)-(min minprice)" .cfg.multiMarketAgg[`tickcount]:"sum tickcount" .cfg.multiMarketAgg[`maxbid]:"max maxbid" .cfg.multiMarketAgg[`minask]:"min minask" .cfg.multiMarketAgg[`lastmidprice]:"((max lastbid)+(min lastask))%2" With the consolidation rules defined, we can use them in the final aggregation before presenting the data. The un-aggregated data is presented as follows. q)res sym volume vwap maxprice minprice lastbid lastask -------------------------------------------------------------- BARCl.BS 5202383 244.05 244.25 243.85 244 244.1 BARCl.CHI 5847878 244.1 244.3 243.9 244.05 244.15 BARC.L 30283638 244.1 244.3 243.9 244.05 244.15 BARC.TQ 3928294 244.15 244.35 243.95 244.1 244.2 VODl.BS 10342910 160.9 161.245 159.85 160.895 160.9 VODl.CHI 10383645 160.9 161.5 159.85 160.89 160.895 VOD.L 108378262 160.895 161.245 159.9 160.895 160.9 VODl.TQ 10252838 160.895 161.245 159.89 160.895 160.9 We can now aggregate it through the use of a clever functional select, utilizing q’s parse feature to bring our configured aggregation rules into play. First, left-join the original user-passed symList back to the results table. This is the entity that we want to roll up to. res:lj[res;`sym xkey select sym:symList, origSymList from extended_syms] The result of this gives us the following table. sym volume vwap maxprice minprice lastbid lastask origSymList -------------------------------------------------------------------------- BARCl.BS 5202383 244.05 244.25 243.85 244 244.15 BARC.L BARCl.CHI 5847878 244.1 244.3 243.9 244.05 244.15 BARC.L BARC.L 30283638 244.1 244.3 243.9 244.05 244.15 BARC.L BARC.TQ 3928294 244.15 244.35 243.95 244.1 244.2 BARC.L VODl.BS 10342910 161.195 161.245 159.85 161.195 161.205 VOD.L VODl.CHI 10383645 161.19 161.25 159.85 161.195 161.21 VOD.L VOD.L 108378262 161.195 161.245 159.9 161.195 161.205 VOD.L VODl.TQ 10252838 161.195 161.245 159.9 161.195 161.205 VOD.L The final step is to aggregate by the originally supplied user symList . / aggregate by origSymList and rename this column to sym byClause:(enlist`sym)!enlist`origSymList; We then look up the multimarket rules for the columns we are interested in, use parse to parse each string, and create a dictionary mapping each column name to its corresponding aggregation. This dictionary is required for the final parameter into the functional select. aggClause:columns!parse each .cfg.multiMarketAgg[columns:params`columns] aggClause is thus defined as: volume | (sum;`volume) vwap | (wavg;`volume;`vwap) range | (-;(max;`maxprice);(min;`minprice)) lastmidprice| (%;(+;(max;`lastbid);(min;`lastask));2) And our functional select is constructed as follows: res:0!?[res;();byClause;aggClause]; Giving the final result-set, the consolidated analytics, ready to be passed back to the user: sym volume vwap range lastmidprice --------------------------------------------- BARC.L 45262193 244.0986 0.5. 244.125 VOD.L 139357655 161.1946 1.4 161.2 Conclusion¶ This paper has described a methodology for analyzing data across multiple liquidity sources in kdb+. 
The goal was to show how we could aggregate tick data for a financial security from multiple data sources or trading venues. The main features of a simple analytics system were briefly outlined and we described how to dispatch vanilla analytic requests from the gateway. The concept of data filtering was then introduced and its importance when aggregating time series data was outlined. After this, we explained the role of reference data in a kdb+ system and how it fits into the analytic framework. Armed with the appropriate reference data and consolidation rules, we were then able to dispatch relevant requests to our databases and aggregate the results in the gateway in order to return consolidated analytics to the user. The framework was provided in the context of an equities analytics system, but is extendable to other major asset classes as well as electronically traded derivatives. In FX, the ECNs provided by brokerages and banks act as the trading venues, and instead of using the symbol suffix to differentiate between venues, one can use a combination of the currency pair and the venue. Similarly in commodities, provided there is enough liquidity in the instrument, the same rules and framework can be applied. Other use cases, such as aggregating positions, risk and P&L from the desk or regional level to the department level, could be implemented using the same principles described in this paper. A script is provided in the Appendix below so that users can work through the implementation described in this paper. All tests were performed with kdb+ 3.0 (2012.09.26). Author¶ James Corcoran has worked as a kdb+ consultant in some of the world’s largest financial institutions and has experience in implementing global software and data solutions in all major asset classes. He has delivered talks and presentations on various aspects of kdb+ and most recently spoke at the annual KX user conference in London. As a qualified professional risk manager he is also involved in various ongoing risk-management projects at KX. Appendix¶ The code in this appendix can be found on Github at kxcontrib/market-fragmentation. //////////////////////////////// // Set up configuration data //////////////////////////////// .cfg.filterrules:()!(); .cfg.filterrules[`TM]:([venue:`LSE`BAT`CHI`TOR] qualifier:( `A`Auc`B`C`X`DARKTRADE`m; `A`AUC`B`c`x`DARK; `a`auc`b`c`x`DRK; `A`Auc`B`C`X`DARKTRADE`m) ); .cfg.filterrules[`OB]:([venue:`LSE`BAT`CHI`TOR] qualifier:(`A`Auc`B`C`m;`A`AUC`B`c;`a`auc`b`c;`A`Auc`B`C`m)); .cfg.filterrules[`DRK]:([venue:`LSE`BAT`CHI`TOR] qualifier:`DARKTRADE`DARK`DRK`DARKTRADE); .cfg.symVenue:()!(); .cfg.symVenue[`BARCl.BS]:`BAT; .cfg.symVenue[`BARCl.CHI]:`CHI; .cfg.symVenue[`BARC.L]:`LSE; .cfg.symVenue[`BARC.TQ]:`TOR; .cfg.symVenue[`VODl.BS]:`BAT; .cfg.symVenue[`VODl.CHI]:`CHI; .cfg.symVenue[`VOD.L]:`LSE; .cfg.symVenue[`VODl.TQ]:`TOR; .cfg.multiMarketMap:( [sym:`BARCl.BS`BARCl.CHI`BARC.L`BARC.TQ`VODl.BS`VODl.CHI`VOD.L`VODl.TQ] primarysym:`BARC.L`BARC.L`BARC.L`BARC.L`VOD.L`VOD.L`VOD.L`VOD.L; venue:`BAT`CHI`LSE`TOR`BAT`CHI`LSE`TOR); .cfg.multiMarketAgg:()!(); .cfg.multiMarketAgg[`volume]:"sum volume" .cfg.multiMarketAgg[`vwap]:"wavg[volume;vwap]" .cfg.multiMarketAgg[`range]: "(max maxprice)-(min minprice)" .cfg.multiMarketAgg[`tickcount]:"sum tickcount" .cfg.multiMarketAgg[`maxbid]:"max maxbid" .cfg.multiMarketAgg[`minask]:"min minask" .cfg.multiMarketAgg[`lastmidprice]:"((max lastbid)+(min lastask))%2" .cfg.defaultParams:`startTime`endTime`filterRule`multiMarketRule!
(08:30;16:30;`OB;`none); //////////////////////////////// // Analytic functions //////////////////////////////// getIntervalData:{[params] -1"Running getIntervalData for params: ",-3!params; params:.util.applyDefaultParams[params]; if[params[`multiMarketRule]~`multi; extended_syms:.util.extendSymsForMultiMarket[params`symList]; params:@[params;`symList;:;extended_syms`symList]; ]; res:select volume:sum[size], vwap:wavg[size;price], range:max[price]-min[price], maxprice:max price, minprice:min price, maxbid:max bid, minask:min ask, lastbid:last bid, lastask:last ask, lastmidprice:(last[bid]+last[ask])%2 by sym from trade where date=params[`date], sym in params[`symList], time within (params`startTime;params`endTime), .util.validTrade[sym;qualifier;params`filterRule]; if[params[`multiMarketRule]~`multi; res:lj[res;`sym xkey select sym:symList, origSymList from extended_syms]; byClause:(enlist`sym)!enlist`origSymList; aggClause:columns!parse each .cfg.multiMarketAgg[columns:params`columns]; res:0!?[res;();byClause;aggClause]; ]; :(`sym,params[`columns])#0!res }; //////////////////////////////// // Utilities //////////////////////////////// .util.applyDefaultParams:{[params] .cfg.defaultParams,params }; .util.validTrade:{[sym;qualifier;rule] venue:.cfg.symVenue[sym]; validqualifiers:(.cfg.filterrules[rule]each venue)`qualifier; first each qualifier in' validqualifiers }; .util.extendSymsForMultiMarket:{[symList] distinct raze {update origSymList:x from select symList:sym from .cfg.multiMarketMap where primarysym in .cfg.multiMarketMap[x]`primarysym } each (),symList } //////////////////////////////// // Generate trade data //////////////////////////////// \P 6 trade:([]date:`date$();sym:`$();time:`time$();price:`float$();size:`int$()); pi:acos -1; / Box-muller from kx.com/q/stat.q nor:{$[x=2*n:x div 2;raze sqrt[-2*log n?1f]*/:(sin;cos)@\:(2*pi)*n?1f;-1_.z.s 1+x]} generateRandomPrices:{[s0;n] dt:1%365*1000; timesteps:n; vol:.2; mu:.01; randomnumbers:sums(timesteps;1)#(nor timesteps); s:s0*exp[(dt*mu-xexp[vol;2]%2) + randomnumbers*vol*sqrt[dt]]; raze s} n:1000; `trade insert (n#2013.01.15; n?`BARCl.BS`BARCl.CHI`BARC.L`BARC.TQ; 08:00:00.000+28800*til n; generateRandomPrices[244;n]; 10*n?100000); `trade insert (n#2013.01.15; n?`VODl.BS`VODl.CHI`VOD.L`VODl.TQ; 08:00:00.000+28800*til n; generateRandomPrices[161;n]; 10*n?100000); / add dummy qualifiers trade:{ update qualifier:1?.cfg.filterrules[`TM;.cfg.symVenue[sym]]`qualifier from x } each trade; / add dummy prevailing quotes spread:0.01; update bid:price-0.5*spread, ask:price+0.5*spread from `trade; `time xasc `trade; //////////////////////////////// // Usage //////////////////////////////// params:`symList`date`startTime`endTime`columns!( `VOD.L`BARC.L; 2013.01.15; 08:30; 09:30; `volume`vwap`range`maxbid`minask`lastmidprice); / default, filterRule=orderbook & multiMarketRule=none a:getIntervalData params; / change filterRule from 'orderbook' to 'total market' b:getIntervalData @[params;`filterRule;:;`TM]; / change multiMarketRule from 'none' to 'multi' to get consolidated analytics c:getIntervalData @[params;`multiMarketRule;:;`multi];
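For readers less familiar with functional qSQL, the ?[res;();byClause;aggClause] call built inside getIntervalData above is equivalent to an ordinary select; here is a sketch for the volume and vwap rules only, assuming res still holds the per-venue rows left-joined with origSymList (getIntervalData then unkeys the result with 0!):
q)select volume:sum volume, vwap:wavg[volume;vwap] by sym:origSymList from res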
================================================================================ FILE: TorQ_code_common_pubsub.q SIZE: 6,303 characters ================================================================================ // Pub/sub utilities initially used in Segmented TP process // Functionality for clients to subscribe to all tables or a subset // Includes option for subscribe to apply filters to received data // Replaces u.q functionality \d .stpps // List of pub/sub tables, populated on startup t:` // Handles to publish all data subrequestall:enlist[`]!enlist () // Handles and conditions to publish filtered data subrequestfiltered:([]tbl:`$();handle:`int$();filts:();columns:()) // Function to send end of period messages to subscribers - endofperiod defined in client side endp:{ (neg allsubhandles[])@\:(`endofperiod;x;y;z); }; // Function to send end of day messages to subscribers - endofday defined on client side end:{ (neg allsubhandles[])@\:(`endofday;x;y); }; // Get all distinct sub handles from subrequestall and subrequestfiltered allsubhandles:{ distinct raze union/[value subrequestall;exec handle from .stpps.subrequestfiltered] }; // Subscribe to everything suball:{ delhandle[x;.z.w]; add[x]; :(x;schemas[x]); }; // Make a filtered subscription subfiltered:{[x;y] delhandlef[x;.z.w]; // Different handling for requests passed in as a sym list or a keyed table val:![11 99h;(selfiltered;addfiltered)][type y] . (x;y); $[all raze null val;(x;schemas[x]);val] }; // Add handle to subscriber in sub all mode add:{ if[not (count subrequestall x)>i:subrequestall[x]?.z.w; subrequestall[x],:.z.w]; }; // Error trap function for parsing string filters errparse:{.lg.e[`addfiltered;m:y," error: ",x];'m}; // Parse columns and where clause from keyed table, spit out errors if any stage fails addfiltered:{[x;y] // Use dummy queries to produce where and column clauses filters:$[all null f:y[x;`filters];();@[parse;"select from t where ",f;.stpps.errparse[;"Filter"]] 2]; columns:last $[all null c:y[x;`columns];();@[parse;"select ",c," from t";.stpps.errparse[;"Column"]]]; // Run these clauses in a test query, add to table if successful, throw error if not @[eval;(?;.stpps.schemas[x];filters;0b;columns);.stpps.errparse[;"Query"]]; `.stpps.subrequestfiltered upsert (x;.z.w;filters;columns); }; // Add handle for subscriber using old API (filter is list of syms) selfiltered:{[x;y] filts:enlist enlist (in;`sym;enlist y); @[eval;(?;.stpps.schemas[x];filts;0b;());.stpps.errparse[;"Query"]]; `.stpps.subrequestfiltered upsert (x;.z.w;filts;()); }; // Publish table data pub:{[t;x] if[not count x;:()]; if[count h:subrequestall[t];-25!(h;(`upd;t;x))]; if[t in .stpps.subrequestfiltered`tbl; {[t;x;sels] data:eval(?;x;sels`filts;0b;sels`columns); if[count data;neg[sels`handle](`upd;t;data)]}[t;x;] each select handle,filts,columns from .stpps.subrequestfiltered where tbl=t ]; }; // publish and clear tables pubclear:{ .stpps.pub'[x;value each x,:()]; @[`.;x;:;.stpps.schemasnoattributes[x]]; } // Remove handle from subrequestall delhandle:{[t;h] @[`.stpps.subrequestall;t;except;h]; }; // Remove handle from subrequestfiltered delhandlef:{[t;h] delete from `.stpps.subrequestfiltered where tbl=t,handle=h; }; // Remove all handles when connection closed closesub:{[h] delhandle[;h]each t; delhandlef[;h]each t; }; / Scope value to inside the function so we can iterate over a set of tables without loading / them all into memory at once / Takes a symbol to a global table extractschema:{t:value x; $[.Q.qp t; t; 0#t]}; // Strip 
attributes and remove keying from tables and store in separate dictionary (for use on STP and SCTP) attrstrip:{[t] {@[x;cols x;`#]} each .stpps.t:t; .stpps.schemasnoattributes:.stpps.t!extractschema each .stpps.t; }; // Set up table and schema information init:{[t] if[count b:t where not t in tables[];{.lg.e[`psinit;m:"Table ",string[x]," does not exist"];'m} each b]; .stpps.t:t except b; .stpps.schemas:.stpps.t!extractschema each .stpps.t; .stpps.tabcols:.stpps.t!cols each .stpps.t; }; \d . // Call closesub function after initial .z.pc call on disconnect .dotz.set[`.z.pc;{[f;x] @[f;x;()];.stpps.closesub x} @[value;.dotz.getcommand[`.z.pc];{{}}]]; // Function called on subscription // Subscriber will call with null y parameter in sub all mode // In sub filtered mode, y will contain tables to subscribe to and filters to apply .u.sub:{[x;y] if[x~`;:.z.s[;y] each .stpps.t]; if[not x in .stpps.t; .lg.e[`sub;m:"Table ",string[x]," not in list of stp pub/sub tables"]; :(x;m) ]; $[y~`;.stpps.suball[x];.stpps.subfiltered[x;y]] }; // Default definition for .u.pub incase process does publishes via .u.pub .u.pub:.stpps.pub // Define .ps wrapper functions .ps.loaded:1b; .ps.publish:.stpps.pub; .ps.subscribe:.u.sub; .ps.init:.stpps.init; .ps.initialise:{.ps.init[tables[]];.ps.initialised:1b}; // Allow a non-kdb+ subscriber to subscribe with strings for simple conditions - return table name and schema to subscriber .ps.subtable:{[tab;syms] .lg.o[`subtable;"Received a subscription to ",$[count tab;tab;"all tables"]," for ",$[count syms;syms;"all syms"]]; val:.u.sub[`$tab;$[count syms;::;first] `$csv vs syms]; $[10h~type last val;'last val;val] }; // Allow a non-kdb+ subscriber to subscribe with strings for complex conditions - return table name and schema to subscriber .ps.subtablefiltered:{[tab;filters;columns] .lg.o[`subtablefiltered;"Received a subscription to ",$[count tab;tab;"all tables"]," for filters: ",filters," and columns: ",columns]; val:.u.sub[`$tab;1!enlist `tabname`filters`columns!(`$tab;filters;columns)]; $[10h~type last val;'last val;val] }; // Striping data in a TorQ Installation // use mod to stripe into number of segments .ds.map:{[numseg;sym] sym!(sum each string sym)mod numseg}; // Initialise subscription request on startup .ds.subreq:(`u#`$())!`int$(); // Striping function which stores the mappings for any symbols that it has already computed and // for subsequent requests for that symbol, it looks them up .ds.stripe:{[input;skey] // If no updates, return if[0=count input;:`boolean$()]; // Check for new sym(s) if[0N in val:.ds.subreq input; // Append to .ds.subreq - unique attr is maintained .ds.subreq,:.ds.map[.ds.numseg;distinct input where null val]; // Reassign val val:.ds.subreq input; ]; skey=val }; ================================================================================ FILE: TorQ_code_common_subscribercutoff.q SIZE: 1,824 characters ================================================================================ //This script will cut off any slow subscibers if they exceed a memory limit \d .subcut enabled:@[value;`enabled;0b] //flag for enabling subscriber cutoff. true means slow subscribers will be cut off. Default is 0b maxsize:@[value;`maxsize;100000000] //a global value for the max byte size of a subscriber. Default is 100000000 breachlimit:@[value;`breachlimit;3] //the number of times a handle can exceed the size limit check in a row before it is closed. 
Default is 3 checkfreq:@[value;`checkfreq;0D00:01] //the frequency for running the queue size check on subscribers. Default is 0D00:01 state:()!() //a dictionary to track how many times a handle breachs the size limit. Should be set to ()!() checksubs:{ //maintain a state of how many times a handle has breached the size limit .subcut.state:current[key .subcut.state]*.subcut.state+:current:(sum each .z.W)>maxsize; //if a handle exceeds the breachlimit, close the handle, call .z.pc and log the handle being closed. {[handle].lg.o[`subscribercutoff;"Cutting off subscriber on handle ",(string handle)," due to large buffer size at ",(string sum .z.W handle)," bytes"]; @[hclose;handle;{.lg.e[`subscribercutoff;"Failed to close handle ",string handle]}]; .z.pc handle} each where .subcut.state >= breachlimit; } //if cut is enabled and timer code has been loaded, start timer for subscriber cut-off, else output error. if[enabled; $[@[value;`.timer.enabled;0b]; [.lg.o[`subscribercutoff;"adding timer function to periodically check subscriber queue sizes"]; .timer.rep[.proc.cp[];0Wp;checkfreq;(`.subcut.checksubs`);0h;"run subscribercutoff";1b]]; .lg.e[`subscribercutoff;".subcut.enabled is set to true, but timer functionality is not loaded - cannot cut-off slow subscribers"]]]; ================================================================================ FILE: TorQ_code_common_subscriptions.q SIZE: 8,395 characters ================================================================================ /-script to create subscriptions, e.g. to tickerplant \d .sub AUTORECONNECT:@[value;`AUTORECONNECT;1b]; //resubscribe to processes when they come back up checksubscriptionperiod:(not @[value;`.proc.lowpowermode;0b]) * @[value;`checksubscriptionperiod;0D00:00:10] //how frequently you recheck connections. 
0D = never // table of subscriptions SUBSCRIPTIONS:([]procname:`symbol$();proctype:`symbol$();w:`int$();table:();instruments:();createdtime:`timestamp$();active:`boolean$()); getsubscriptionhandles:{[proctype;procname;attributes] // grab data from .serves.SERVERS, add handling for passing in () as an argument data:{select procname,proctype,w from x}each .servers.getservers[;;attributes;1b;0b]'[`proctype`procname;(proctype;procname)]; $[0h in type each (proctype;procname);distinct raze data;inter/[data]] } updatesubscriptions:{[proc;tab;instrs] // delete any inactive subscriptions delete from `.sub.SUBSCRIPTIONS where not active; if[instrs~`;instrs,:()]; .sub.SUBSCRIPTIONS::0!(4!SUBSCRIPTIONS)upsert enlist proc,`table`instruments`createdtime`active!(tab;instrs;.proc.cp[];1b); } reconnectinit:0b; //has the reconnect custom function been initialised reducesubs:{[tabs;utabs;instrs;proc] // for a given list of subscription tables, remove any which have already been subscribed to // if asking for all tables, subscribe to the full list available from the publisher subtabs:$[tabs~`;utabs;tabs],(); .lg.o[`subscribe;"attempting to subscribe to ",(","sv string subtabs)," on handle ",string proc`w]; // if the process has already been subscribed to if[not instrs~`; instrs,:()]; s:select from SUBSCRIPTIONS where ([]procname;proctype;w)~\:proc, table in subtabs,instruments~\:instrs, active; if[count s; .lg.o[`subscribe;"already subscribed to specified instruments from ",(","sv string s`table)," on handle ",string proc`w]; subtabs:subtabs except s`table]; // if the requested tables aren't available, ignore them and log a message if[count errtabs:subtabs except utabs; .lg.o[`subscribe;"tables ",("," sv string errtabs)," are not available to be subscribed to, they will be ignored"]; subtabs:subtabs inter utabs;]; // return a dict of the reduced subscriptions :`subtabs`errtabs`instrs!(subtabs;errtabs;instrs) } createtables:{ // x is a list of pairs (tablename; schema) .lg.o[`subscribe;"setting the schema definition"]; // this is the same as (tablename set schema)each table subscribed to (@[`.;;:;].)each x where not 0=count each x; }
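To show how the pub/sub interface from TorQ_code_common_pubsub.q above is exercised from the subscriber side, here is a hedged sketch of the three request shapes .u.sub accepts over an open handle h to the publisher; the handle, table name, filter and column strings are examples only:
h(".u.sub";`trade;`) / subscribe to all data for the trade table
h(".u.sub";`trade;`VOD.L`BARC.L) / old API: filter on a list of syms
h(".u.sub";`trade;1!enlist `tabname`filters`columns!(`trade;"sym in `VOD.L`BARC.L";"time,sym,price")) / keyed-table form with string filters and columns, as built by .ps.subtablefiltered
The sub-all form returns the table name and its schema, which a subscriber can register locally (for example via .sub.createtables above) before recording the subscription with .sub.updatesubscriptions.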
/ - this function is used to retrieve and parse the contents of the sort.csv file / - some validation is performed and the sort parameters are sorted in a global variable called / - .sort.params getsortcsv:{[file] file: hsym file; if[null file; file:defaultfile]; params:@[ {.lg.o[`init;"retrieving sort settings from ",string x];("SSSB";enlist",")0: x}; file; {[x;e] '"failed to open ",string[x],". The error was : ",e ;}[file] ]; /-check the correct columns have been included in csv file if[not all spcb: (spc: cols params) in `tabname`att`column`sort; '"unrecognised columns (",(", " sv string spc where not spcb),") in ", string file]; /-check that attributes are acceptable if[not all atb: (at:distinct params`att) in ``p`s`g`u;'"unrecognised type of attribute - ",", " sv string at where not atb]; /-set sortparams globally @[`.sort;`params;:;params]; }; / - this is main sort function that will be called of each table to be sorted /-function to reload, sort and save tables /- this function can be passed a table name of a pair of (tablename;table directory(s)) sorttab:{[d] // try to read in the sort configuration from the default location if[0=count params; getsortcsv defaultfile]; .lg.o[`sort;"sorting the ",(st:string t:first d)," table"]; / - get the sort configuration sp:$[count tabsortparams:select from params where tabname=t; [.lg.o[`sorttab;"Sort parameters have been retrieved for : ",st];tabsortparams]; count defaultsortparams:select from params where tabname=`default; [.lg.o[`sorttab;"No sort parameters have been specified for : ",st,". Using default parameters"];defaultsortparams]; / - else escape, no sort params have been specified [.lg.o[`sorttab;"No sort parameters have been found for this table (",st,"). The table will not be sorted"];:()]]; / - loop through for each directory {[sp;dloc] / - sort the data if[count sortcols: exec column from sp where sort, not null column; .lg.o[`sortfunction;"sorting ",string[dloc]," by these columns : ",", " sv string sortcols]; .[xasc;(sortcols;dloc);{[sortcols;dloc;e] .lg.e[`sortfunction;"failed to sort ",string[dloc]," by these columns : ",(", " sv string sortcols),". The error was: ",e]}[sortcols;dloc]]]; if[count attrcols: select column, att from sp where not null att; /-apply attribute(s) applyattr[dloc;;]'[attrcols`column;attrcols`att]]; }[sp] each distinct (),last d; .lg.o[`sort;"finished sorting the ",st," table"]; }; /-function to apply attributes to columns applyattr:{[dloc;colname;att] .lg.o[`applyattr;"applying ",string[att]," attr to the ",string[colname]," column in ",string dloc]; / - attempt to apply the attribute to the column and log an error if it fails .[{@[x;y;z#]};(dloc;colname;att); {[dloc;colname;att;e] .lg.e[`applyattr;"unable to apply ",string[att]," attr to the ",string[colname]," column in the this directory : ",string[dloc],". 
The error was : ",e];}[dloc;colname;att] ] }; / - these functions are common across the TorQ processes that persist save to the data base \d .save / - define a default dictionary of manipulation functions to apply of tables before it is enumerated an persisted to disk savedownmanipulation:()!(); /- manipulate a table at save down time manipulate:{[t;x] $[t in key savedownmanipulation; @[savedownmanipulation[t];x;{.lg.e[`manipulate;"save down manipulation failed : ",y];x}[x]]; x]}; /- post eod/replay, this is called after the date has been persisted to disk and sorted and /- takes a directory (typically hdb directory) and partition value as parameters postreplay:{[d;p] }; / - functions for running and descriptively logging garbage collection \d .gc / - format the process memory stat's into a string for logging memstats:{"mem stats: ",{"; "sv "=" sv'flip (string key x;(string value x),\:" MB")}`long$.Q.w[]%1048576} / - run garbage collection and log out memory stats run:{ .lg.o[`garbagecollect;"Starting garbage collect. ",memstats[]]; r:.Q.gc[]; .lg.o[`garbagecollect;"Garbage collection returned ",(string `long$r%1048576),"MB. ",memstats[]]} ================================================================================ FILE: TorQ_code_common_email.q SIZE: 4,606 characters ================================================================================ // Functionality to send emails // make sure the path is set to find the libcurl library. You can use standard libs if required // windows: // set PATH=%PATH%;%KDBLIB%/w32 // linux: // export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$KDBLIB/l[32|64] \d .email // configuration for default mail server enabled: @[value;`enabled;.z.o in `w32`l32`l64`m32`m64] // whether emails are enabled url: @[value;`url;`] // url of email server e.g. `$"smtp://mail.example.com:80" user: @[value;`user;`] // user account to use to send emails e.g. [email protected] password: @[value;`password;`] // password for user account from: @[value;`from;`$"torq@localhost"] // address for return emails e.g. [email protected] usessl: @[value;`usessl;0b] // connect using SSL/TLS debug: @[value;`debug;0i] // debug level for email library: 0i = none, 1i=normal, 2i=verbose img: @[value;`img;`$getenv[`KDBHTML],"/img/DataIntellect-TorQ-logo.png"] // default image for bottom of email lib:`$getenv[`KDBLIB],"/",string[.z.o],"/torQemail"; connected:@[value;`connected;0b] if[enabled and .z.o~`w64; .lg.w[`email;"Email is not supported for Windows 64bit. 
Disabling email functionality"]; enabled:0b ]; if[.email.enabled; libfile:hsym ` sv lib,$[.z.o like "w*"; `dll; `so]; libexists:not ()~key libfile; if[not .email.libexists; .lg.e[`email;"no such file ",1_string libfile]]; if[.email.libexists; connect: @[{x 2:(`emailConnect ;1)};lib;{.lg.w[`init;"failed to create .email.connect " ,x]}]; disconnect: @[{x 2:(`emailDisconnect;1)};lib;{.lg.w[`init;"failed to create .email.disconnect ",x]}]; send: @[{x 2:(`emailSend ;1)};lib;{.lg.w[`init;"failed to create .email.send " ,x]}]; create: @[{x 2:(`emailCreate ;1)};lib;{.lg.w[`init;"failed to create .email.create " ,x]}]; g: @[{x 2:(`emailGet ;1)};lib;{.lg.w[`init;"failed to create .email.get " ,x]}]; getSocket: @[{x 2:(`getSocket ;1)};lib;{.lg.w[`init;"failed to create .email.getSocket " ,x]}]; ]; ]; // connect to the configured default email server connectdefault:{ if[any null (url;user;password); .lg.e[`email; "url, user and password cannot be null"]]; connected::`boolean$1+connect `url`user`password`from`usessl`debug!(url;user;password;from;usessl;debug); $[connected;.lg.o[`email;"connection to mail server successful"]; .lg.e[`email;"connection to mail server failed"]]} // send an email using the default mail server. Try to establish a connection first senddefault:{ if[not enabled; .lg.e[`email;e:"email sending is not enabled"]; 'e]; .lg.o[`email;"sending email"]; if[not connected; connectdefault[]]; if[not connected; :.lg.e[`email; "cannot send email as no connection to mail server available"]]; x[`body]:x[`body],defaultfooter[]; res:send x,`image`debug!(img;debug); $[res>0;.lg.o[`email;"Email sent. size was ",(string res)," bytes"]; .lg.e[`email;"failed to send email"]]; res} defaultfooter:{("";"email generated by proctype=",(string .proc.proctype),", procname=",(string .proc.procname)," running on ",(string .z.h)," at time ",(" " sv string `date`time$.proc.cp[])," ", $[.proc.localtime=1b;"local time";"GMT"])} test:{senddefault `to`subject`body!(x;"test email";enlist ("this is a test email to see if the TorQ email lib is configured correctly"))} // used to send an email using the default mail server on a seperate specififed process sendviaservice:{[proc;dict] h:.servers.gethandlebytype[proc;`any]; if[not count h;.lg.e[`email;"could not connnect to ",(string proc)," process"];:0b]; / returns 1b if async request has been sent. 
Returns result of `.email.servicesend to .email.servicecallback first .async.postback[first h;(`.email.servicesend;dict);.email.servicecallback] } // sends email and catches result in a dictionary servicesend:{[dict] :`status`result!@[{(1b;.email.senddefault x)};dict;{(0b;x)}]; } // Used in .email.sendviaservice to log email status once .email.servicesend has been called by .async.postback servicecallback:{ $[x[`status]; .lg.o[`email;"Email sent successfully"]; .lg.e[`email;"failed to send email: ",x[`result]]] }; \ SIMPLE SEND: .email.senddefault `to`subject`body!(`$"[email protected]";"hi jim";("here's some stuff";"to think about")) ================================================================================ FILE: TorQ_code_common_eodtime.q SIZE: 1,799 characters ================================================================================ / - system eodtime configuraion / - loaded into and used in the tp and pdb processes \d .eodtime // default settings rolltimeoffset:@[value;`rolltimeoffset;0D00:00:00.000]; // offset from standard midnight rollover datatimezone:@[value;`datatimezone;`$"GMT"]; // timezone for stamping data rolltimezone:@[value;`rolltimezone;`$"GMT"]; // timezone for EOD roll // function to determine offset from UTC for timestamping data getdailyadjustment:{exec adjustment from .tz.t asof `timezoneID`gmtDateTime!(.eodtime.datatimezone;.z.p)}; dailyadj:getdailyadjustment[]; // get offset when loading process and store it in dailyadj // function to determine offset from UTC for EOD roll adjtime:{[p] :exec adjustment from .tz.t asof `timezoneID`gmtDateTime!(.eodtime.rolltimezone;.z.p); }; // function to get time (in UTC) of next roll after UTC timestamp, p getroll:{[p] z:rolltimeoffset-adjtime[p]; // convert rolltimeoffset from rolltimezone to UTC z:`timespan$(mod) . "j"$z, 1D; // force time adjust to be between 0D and 1D ("d"$p) + $[z <= p;z+1D;z] // if past time already today, make it tomorrow };
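// illustrative usage sketch (not part of the original file): assuming the
// timezone table .tz.t has been loaded, incoming data would typically be
// stamped in the configured data timezone as .z.p+.eodtime.dailyadj, and the
// next end-of-day boundary for the current UTC time obtained as
// .eodtime.getroll .z.p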
================================================================================ FILE: ml.q_rl_model_trading.q SIZE: 5,964 characters ================================================================================ / * Simplistic stock trading RL model. * * An optimal trading policy is learned by simulating trades. Buy / Sell signals * are calculated using simple technical strategies, e.g. momentum, moving * average crossing, etc. The reward signal is the returns from some time in the * future, e.g. 5 days. * * A learning agent undergoes cross-validated training / testing. That is, a * batch of market data is partitioned into, e.g. 5 different slices. One slice * is trained on while the others are used for testing, and this is repeated * so that each slice is eventually trained on. * \ \d .trading / local data directory datadir:"../../data/"; / sliding window swin:{[f;w;s] f each { 1_x,y }\[w#(type s)$0;s]}; / technical indicators states:`momentum`volatility`upxsma`downxsma; / * Read market data csv and create technical indicators * @param {string} ticker - stock ticker to get data for * @returns {table} \ get_data:{[ticker] t:1 _ flip `date`o`h`l`c`v`close!("DFFFFFF";",") 0: `$":",datadir,ticker,".csv"; t:`date xasc t; t:update momentum:{(x>=0)} -1+close%close[i-5], vol:0^5 mdev log close%close[i-1], sma20:mavg[20;close], sma50:mavg[50;close], rtn5:-1+close[i+5]%close from t; t:update volatility:(med[vol] <) each vol from t; t:update xsma:{(x>=0)-x<0} sma20-sma50 from t; t:update xsma:0^xsma-xsma[i-1] from t; t:update downxsma:.trading.swin[{any x<0};5;xsma], upxsma:.trading.swin[{any x>0};5;xsma] from t}; / * Process a single action and update reward & state * @param {dict} store - learner metadata * @param {dict} r - single record of trade data * @returns {dict} - learner metadata \ learn:{[store;r] store:.qlearner.next_action[store]; action:store`action; side:store[`curstate][`side]; if[(action=`long)&(side=0b);side:1b]; if[(action=`short)&(side=1b);side:0b]; reward:$[side=0b;-1*r`rtn5;r`rtn5]; newstate:(`side,states)!(side,r[states]); .qlearner.observe[store;reward;newstate]}; / * Applies a learned policy over market data, i.e. creates the long/short * inidicators in a market data table. Meant to be used as a reduce function. 
* @param {dict} store - learner metadata * @param {table} rprev - accumulated processed records * @param {dict} r - new record of trade data to process * @returns {table} - processed records \ policy:{[store;rprev;r] prevside:last[rprev][`side]; curstate:(`side,states)!(prevside,r[states]); action:.qlearner.best_action[store;curstate]; side:prevside; if[(action=`long)&(prevside=0b);side:1b]; if[(action=`short)&(prevside=1b);side:0b]; newstate:curstate,enlist[`side]!enlist[side]; r[`side]:side; rprev,r}; / * Calculate realized returns, assumptions: * - Start with exactly enough cash to purchase (at close) one share at time 0 * - Only valid positions are long 1 share or short 1 share * - It follows that each transaction (except first) must be 2 shares * - We are concerned with total return so exact qty doesnt matter * @param {table} data * @returns {table} \ realized:{[data] / first record treated as a buy if[not first[data]`side;'"initial side should be 1b"]; / ensure state change to get realized rtns up to last observation if[1=count distinct (-2#data)`side; data:update side:not side from data where i=-1+count[data]]; data:select from data where side<>prev side; data:update qty:2, dir:-1+2*side from data; data:update qty:1 from data where i=0; data:update qtydelta:dir*qty, cashdelta:dir*qty*close from data; data:update netcash:1_(-\) (close[0],cashdelta), netqty:(+\) qtydelta from data; / return computed relative to value at time 0 i.e. close price of share update return:(netcash+close*netqty) % first close from data}; testhlpr_:{[store;data] first_:enlist[enlist[first[data],enlist[`side]!enlist[1b]]]; policy[store] over first_,1_data}; / * Apply learned policies and calculate returns over test data * @param {dict} store - learner metadata * @param {table list} data - list of data slices * @returns {float} - total return \ testhlpr:{[store;data] r:realized[testhlpr_[store;data]]; last r[`return]}; / * Train a learner for one "episode" i.e. 
for an entire market data slice * @param {dict} store - learner metadata * @param {table} train - training data * @returns {dict} - learner metadata \ trainiter:{[store;train] store:.qlearner.init_state[store;(`side,states)!(0b,first[train][states])]; train:1 _ train; / TODO using , was finicky without empty list at end learn over -1_enlist[store],train,enlist[]}; / * Apply a train & test cycle for one set of cross-validation slices * @param {dict} lrnargs - learner parameters * @param {table list} slices - list of data slices * @param {list} islice - list of indices for slices list * @returns {list} - total return for each slice \ traintesthlpr:{[lrnargs;slices;islice] store:.qlearner.init_learner[`long`short;`side,states;lrnargs`alpha;lrnargs`epsilon;lrnargs`gamma]; / first slice is the training set train:slices[first islice]; / train for a variable number of episodes i:-1; while[lrnargs[`episodes]>i+:1;store:trainiter[store;train]]; testhlpr[store] each slices[1 _ islice]}; / * Do a complete cross-validation train & test procedure * @param {string} ticker - stock ticker for which to lookup data * @param {dict} lrnargs - learner parameters * @param {int} cv - number of cross validation slices to use * @returns {list} - total return for each cross validation slice \ traintest:{[ticker;lrnargs;cv] data:get_data[ticker]; slicefn:{[d;part;offset] step:("i"$count[d]%part); step#(step*offset)_d}; slices:slicefn[data;cv] each til cv; islices:{[cv_;i] (i+til cv_) mod cv_}[cv] each til cv; r:(,/) traintesthlpr[lrnargs;slices] each islices; r}; / * Calculate returns on a random trading policy * @param {string} ticker - stock ticker for which to lookup data \ randtest:{[ticker] data:get_data[ticker]; data[`side]:1b,(count[data]-1)?01b; last[realized[data]]`return}; ================================================================================ FILE: ml.q_util.q SIZE: 1,407 characters ================================================================================ / * Identity matrix \ ident:{[n] {(x#0),1,(y-x+1)#0}[;n] each til n} / * Extract diagonal from a matrix \ diag:{(x .) each til[count x],'til count[x]} / * Euclidean distance matrix (edm) * See https://arxiv.org/abs/1502.07541 \ edm:{m:x mmu flip[x]; diag[m] + flip diag[m] - 2*m} / * Look up d[i], then look up d[j] for each j in d[i], and so on... * @param {dict} d * @param {any} i - first key to look up \ recursive_flatten:{[d;i] x:enlist i; prevx:0N; while[not prevx ~ x:distinct x,(,/) d each x; prevx:x]; x} / * Disjoint-set data structure and algo * Returns a dict where each key either has * - A single entry, the parent of the node * - Multiple entries, the children of the node * @param {dict} d * @param {list} l - two keys to join \ dj:{[d;l] / Find all children to merge recursively rf:recursive_flatten[d;] each l; / Count, sort, then select index of node with hightest num of children from / each flattened list idx:first each idesc each ((count each d) each) each rf; / For each flattened list select actual node from index roots:(((rf) .) 
each) (til count idx),'idx; idx:idesc count each d each roots; / Root is the node with most children root:first roots idx; / Update preferred root with new children children:((,/)rf) except root; d[root]:distinct d[root],children; / Update children with new root d,children!enlist each count[children]#root} ================================================================================ FILE: ml_automl_automl.q SIZE: 1,702 characters ================================================================================ // automl.q - Setup automl namespace // Copyright (c) 2021 Kx Systems Inc // // Define version, path, and loadfile. // Execute algo if run from cmd line. \d .automl if[not `e in key `.p; @[{system"l ",x;.pykx.loaded:1b};"pykx.q"; {@[{system"l ",x;.pykx.loaded:0b};"p.q"; {'"Failed to load PyKX or embedPy with error: ",x}]}]]; if[not `loaded in key `.pykx;.pykx.loaded:`import in key `.pykx]; if[.pykx.loaded;.p,:.pykx]; // Coerse to string/sym coerse:{$[11 10h[x]~t:type y;y;not[x]&-11h~t;y;0h~t;.z.s[x] each y;99h~t;.z.s[x] each y;t in -10 -11 10 11h;$[x;string;`$]y;y]} cstring:coerse 1b; csym:coerse 0b; // Ensure plain python string (avoid b' & numpy arrays) pydstr:$[.pykx.loaded;{.pykx.eval["lambda x:x.decode()"].pykx.topy x};::] version:@[{AUTOMLVERSION};`;`development] path:{string`automl^`$@[{"/"sv -1_"/"vs ssr[;"\\";"/"](-3#get .z.s)0};`;""]}` loadfile:{$[.z.q;;-1]"Loading ",x:_[":"=x 0]x:$[10=type x;;string]x;system"l ",path,"/",x;} // @kind description // @name commandLineParameters // @desc Retrieve command line parameters and convert to a kdb+ dictionary commandLineInput:first each .Q.opt .z.x // @kind description // @name commandLineExecution // @desc If a user has defined both config and run command line arguments, the // interface will attempt to run the fully automated version of AutoML. The // content of the JSON file provided will be parsed to retrieve data // appropriately via ipc/from disk, then the q session will exit. commandLineArguments:lower key commandLineInput if[all`config`run in commandLineArguments; loadfile`:init.q; .ml.updDebug[]; testRun:`test in commandLineArguments; runCommandLine[testRun]; exit 0] ================================================================================ FILE: ml_automl_code_aml.q SIZE: 8,428 characters ================================================================================ // code/aml.q - Automl main functionality // Copyright (c) 2021 Kx Systems Inc // // Automated machine learning, generation of optimal models, predicting on new // data and generation of default configurations \d .automl
kdb+ query scaling¶

Trading volumes in financial market exchanges have increased significantly in recent years and as a result it has become increasingly important for financial institutions to invest in the best technologies and to ensure that they are used to their full potential. This white paper will examine some of the key steps which can be taken to ensure the optimal efficiency of queries being run against large kdb+ databases. In particular it will focus on q-SQL statements, joins, and other constructs which provide kdb+ with a powerful framework for extracting information from increasingly large amounts of timeseries data both in-memory and on-disk. It covers key ideas and optimization solutions while also highlighting potential pitfalls.

All tests were run using kdb+ 3.1 (2013.12.27)

Overview of data¶

For the purpose of analysis we have created three pairs of trade and quote tables containing equity timeseries data where one pair is fully in memory, one has been splayed on-disk and one has been partitioned by date. Each pair of tables, quote and trade, has been loaded into a namespace (.mem, .splay or .par) so that it is clear which table we are looking at during our analysis. The data contains 4 primary symbols (AAPL, GOOG, IBM and MSFT) along with 96 randomly-generated others. The in-memory quote table contains 2 million records, the splayed quote table contains 5 million records and the partitioned quote table contains 10 million records per partition over 5 days, giving 50 million records in total. The trade data is 10% the size of each quote table and has been simplified by basing it off the previous quote rather than maintaining an order book.

q)1_count each .mem
quote| 2000000
trade| 200000
q)1_count each .splay
quote| 5000000
trade| 500000
q)1_count each .par
quote| 50000000
trade| 5000000
q)date
2014.02.19 2014.02.20 2014.02.21 2014.02.22 2014.02.23
q)select count i by date from .par.quote
date      | x
----------| --------
2014.02.19| 10000000
2014.02.20| 10000000
2014.02.21| 10000000
2014.02.22| 10000000
2014.02.23| 10000000

Each table has an attribute applied to the sym column – the in-memory tables being grouped (g#), and those on-disk being parted (p#). The table metadata below displays the structure of the in-memory quote and trade tables; the meta of the tables on-disk differs by attribute only.

q)meta .mem.quote
c  | t f a
---| -----
sym| s   g
dt | p
ap | f
as | j
bp | f
bs | j
q)meta .mem.trade
c   | t f a
----| -----
sym | s   g
dt  | p
tp  | f
ts  | j
side| s

Select statements¶

In this section we will look at the most common and flexible method for querying kdb+ data, the q-SQL select statement. This construct allows us to query both on-disk (memory mapped) and in-memory data in a similar fashion.

Retrieving records by symbol¶

In this section we will look at methods for retrieving the first and last records by symbol with and without further constraints added to the select statement. We will focus on the use of the efficient select by sym from tab construct as well as using the Find ? operator to perform a lookup for a table.

In the following example we look at how we can retrieve the last record by symbol from a table. The default behavior of the q By clause is to retrieve the last value of each column grouped by each parameter in the By clause. If no aggregation function is specified there is no need to explicitly call the last function; comparing the two constructs we see a 2½× speed improvement and similar memory usage.
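Before looking at that comparison, the following is a minimal, hypothetical sketch (not taken from the paper) of how an in-memory quote and trade pair with the same column layout could be built for experimentation. The symbols, row counts and value ranges here are arbitrary and much smaller than the dataset described above; all timings and outputs shown in the rest of the paper are from the authors’ full dataset, not from this sketch.

q)n:100000
q)syms:`AAPL`GOOG`IBM`MSFT,96?`4
q).mem.quote:([]sym:`g#n?syms;dt:asc .z.p+n?1D;ap:n?100f;as:n?100;bp:n?100f;bs:n?100)
q).mem.trade:([]sym:`g#(m:n div 10)?syms;dt:asc .z.p+m?1D;tp:m?100f;ts:m?100;side:m?`Buy`Sell)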
q)select by sym from .mem.quote sym | dt ap as bp bs ----| ----------------------------------------------------- AAPL| 2014.02.23D15:59:56.081766206 527.1077 93 527.0023 77 ACJ | 2014.02.23D15:59:58.392773419 487.661 66 487.5635 26 AGL | 2014.02.23D15:59:58.475614283 363.3669 43 363.2942 49 AHF | 2014.02.23D15:59:59.843160342 145.5018 52 145.4727 84 .. q)\ts a:select by sym from .mem.quote 20 16784064 q)\ts b:select last dt, last ap, last as, last bp, last bs by sym from .mem.quote 51 16783328 q)a~b 1b We see an approximately 4× performance improvement when we apply to partitioned data. q)\ts a:select by sym from .par.quote where date = last date 78 134226368 q)\ts b:select last date, last dt, last ap, last as, last bp, last bs by sym from .par.quote where date = last date 345 201329840 q)a~b 1b We can use the Find operator ? to obtain a list of indexes corresponding to the first occurrence of a symbol in our table. In our first example we obtain the position i of the first occurrence of each sym in our table, we then perform a lookup in the same table using the Find operator and index into the original table with the result, which will be a list of type long. Note that this example is actually redundant, we could use the simpler select first sym, first dt, first ap, first as, first bp, first bs by sym from .mem.quote to retrieve this data even more efficiently, but we will build on this in the rest of this section. We find this to be approximately 2× faster than using fby and it also uses slightly less memory: q).mem.quote(select sym, i from .mem.quote)?0!select first i by sym from .mem.quote sym dt ap asbp bs ----------------------------------------------------------- AAPL 2014.02.23D09:00:00.117540266 544.0444 27 543.9356 56 ACJ 2014.02.23D09:00:01.156257558 487.5938 36 487.4963 60 AGL 2014.02.23D09:00:00.014462973 345.9409 80 345.8717 104 AHF 2014.02.23D09:00:00.466329697 138.5061 44 138.4784 63 AHK 2014.02.23D09:00:00.167160294 342.2259 97 342.1575 104 .. q)\ts a:.mem.quote(select sym, i from .mem.quote)?0!select first i by sym from .mem.quote 24 33557504 q)\ts b:select from .mem.quote where i = (first; i) fby sym 63 42995872 q)a~b 1b q)\ts select first sym, first dt, first ap, first as, first bp, first bs by sym from .mem.quote 12 15238560 In our second example we show how using the Find operator does not outperform select by sym from t but is still an improvement over using last : q)\ts a:.mem.quote(select sym,i from .mem.quote)?0!select last i by sym from .mem.quote 32 33556528 q)\ts b:0!select by sym from .mem.quote 20 16784144 q)a~b 1b Finally we demonstrate a key use case for this construct: how we can select the first occurrence of an event in one column of the table. In our example below we look at the maximum bid size by sym. Examples like this, where there is no q primitive shortcut which can be applied consistently across all columns, for example the call to first used above, is where the performance improvements of this construct come to the fore. In the example below we will achieve 2× performance over the alternative method fby . q).mem.quote(`sym`bs#.mem.quote)?0!select max bs by sym from .mem.quote sym dt ap as bp bs ------------------------------------------------------------ AAPL 2014.02.23D09:00:19.752919580 544.043 92 543.9342 109 ACJ 2014.02.23D09:00:12.435798905 487.5911 106 487.4936 109 AGL 2014.02.23D09:01:29.984035491 345.9841 79 345.915 109 AHF 2014.02.23D09:04:30.327046606 138.578 13 138.5503 109 AHK 2014.02.23D09:03:01.717110984 342.2449 12 342.1764 109 .. 
q)\ts .mem.quote(`sym`bs#.mem.quote)?0!select max bs by sym from .mem.quote 28 16779264 Efficient select statements using attributes¶ One of the most effective ways to ensure fast lookup of data is the correct use of attributes, discussed in white paper Columnar database and query optimization. In this section we will look at how we can ensure an attribute is used throughout an entire select statement. If we wish to filter a table for entries containing a particular list of syms, the simplest way of doing this is to use a statement like select from table where sym in symList . However, when we apply the in keyword to a column which contains an attribute we will only receive that attribute’s performance benefit for the first symbol in the list we are searching. An alternative is to rewrite the query using a lambda and pass in each symbol in turn: {select from table where sym = x} each symList When we use the lambda function we get the performance improvement for every symbol in the list, so even with the overhead of appending the results using the raze function we will often see an improvement in execution time at the cost of the extra memory needed to store the intermediate results. In the following examples, which are run on the partitioned quote table, we see a speed increase of slightly below 2× at the cost of a larger memory footprint. However, as the number of symbols under consideration and the size of the data increases we would expect to see greater benefits. q)raze {select from .par.quote where date = last date, sym = x}each `AAPL`GOOG`IBM date sym dt ap as bp bs ---------------------------------------------------------------------- 2014.02.23 AAPL 2014.02.23D09:00:00.248076673 527.0607 29 526.9553 86 2014.02.23 AAPL 2014.02.23D09:00:01.054893527 527.0606 18 526.9552 85 2014.02.23 AAPL 2014.02.23D09:00:01.189490128 527.0605 15 526.9551 72 2014.02.23 AAPL 2014.02.23D09:00:01.211703848 527.0605 41 526.9551 69 2014.02.23 AAPL 2014.02.23D09:00:01.286917179 527.0606 49 526.9552 59 2014.02.23 AAPL 2014.02.23D09:00:01.546945609 527.061 100 526.9556 88 .. q)\ts a:raze {select from .par.quote where date = last date, sym = x}each `AAPL`GOOG`IBM 15 44042032 q)\ts b:select from .par.quote where date = last date, sym in `AAPL`GOOG`IBM 25 27264720 q)a~b 1b The example below uses a similar construct to find the maximum ask price for each symbol from our in-memory quote table; here we achieve a 20% speed increase and a huge reduction in memory usage. q)raze{select max ap by sym from .mem.quote where sym = x} each `AAPL`GOOG`IBM sym | ap ----| -------- AAPL| 544.0496 GOOG| 1198.86 IBM | 183.5632 q)\ts a:select max ap by sym from .par.quote where date = last date, sym in `AAPL`GOOG`IBM 20 12584960 q)\ts b:raze {select max ap by sym from .par.quote where date = last date, sym = x}each `AAPL`GOOG`IBM 16 2100176 q)a~b 1b Obtaining a subset of columns¶ In cases where our goal is to obtain a subset of columns from a table we can use the Take operator # to do this more efficiently than in a standard select statement. This potential improvement comes from recognizing that a table is a list of dictionaries with its indexes swapped. Position [1;2] in a table is equivalent to position [2;1] in a dictionary. A table is therefore subject to dictionary operations. This is a highly efficient operation as kdb+ only has to index into the keys defined in the left argument of # . q)`sym`ap`as#.mem.quote sym ap as ---------------- AAPL 544.0444 27 AAPL 544.0431 57 AAPL 544.043 77 .. 
q)\ts:1000000 a:`sym`ap`as#.mem.quote 687 672 q)\ts:1000000 b:select sym, ap, as from .mem.quote 819 1200 q)a~b The above gives a small performance increase and corresponding decrease in memory usage and will also work on splayed tables. It can be applied to keyed tables when used in conjunction with the Each Right iterator /: . This will return the columns passed as a list of symbols in the first argument along with the key column of the table. To illustrate this we create a table .mem.ktrade , which is the in-memory trade table defined above keyed on unique GUID values. // add a unique GUID tradeID primary key to in-memory trade table q).mem.ktrade:(flip enlist[`tradeID]!enlist `u#(neg count .mem.trade)?0Ng)!.mem.trade q)`sym`side#/:.mem.ktrade tradeID | sym side ------------------------------------| --------- deaf3e2d-f9ba-6a8b-1db4-ffe76a6a3be6| AAPL Sell 986e1121-ba3c-d9a1-80f4-a884ad783444| AAPL Sell 37472984-556c-4a6b-1f2c-e7eb716144ee| AAPL Buy 6e65f67e-23e1-4ef9-cb08-9c68467f9e34| AAPL Sell .. q)\ts:10000 a:`sym`side#/:.mem.ktrade 19 736 q)\ts:10000 b:select tradeID, sym, side from .mem.ktrade 21 1376 q)\ts:10000 c:`tradeID xkey select tradeID, sym, side from .mem.ktrade 63 1424 q)a~/:(b;c) 11b As before, we see a modest improvement in runtime if our goal is to return an unkeyed table. However, if we want to return the data keyed, as it was in the original table, the performance improvement is approximately 3×. In both cases we cut memory usage in half. Efficient querying of partitioned on-disk data¶ A basic but incredibly important consideration when operating on a large dataset is the ordering of the Where clause which will be used to extract the data required from a table. An efficiently ordered Where clause that exploits the partitioned structure of a database or its attributes can make a query orders of magnitude faster. The following code illustrates how efficient construction of a Where clause in a partitioned database, by filtering for the partition column first, can lead to superior results. q)select from .par.trade where date = last date, ts > 50 date sym dt tp ts side -------------------------------------------------------------- 2014.02.23 AAPL 2014.02.23D09:00:15.613542722 527.0524 52 Buy 2014.02.23 AAPL 2014.02.23D09:00:33.319332489 526.9366 53 Sell 2014.02.23 AAPL 2014.02.23D09:00:42.741552220 527.0334 70 Buy 2014.02.23 AAPL 2014.02.23D09:00:45.434857198 526.9292 70 Buy 2014.02.23 AAPL 2014.02.23D09:00:50.608371516 527.0314 63 Sell q)\ts select from .par.trade where date = last date, ts > 50 47 20973216 q)\ts select from .par.trade where ts > 50, date = last date 1822 169872240 The date parameter in the above example can also be modified within the select statement while still providing a performance improvement. For example, we could use the mod keyword to obtain data for one particular day of the week, or a Cast to get the data for an entire month. The key to applying these functions properly is ensuring that the virtual date column is the first parameter in the first element of the Where clause; this can be confirmed by looking at the parse tree created by the select statement. 
// select all the data for February 2014 q)select from .par.trade where date.month = 2014.02m date sym dt tp ts side -------------------------------------------------------------- 2014.02.19 AAPL 2014.02.19D09:00:00.627678019 509.4149 18 Buy 2014.02.19 AAPL 2014.02.19D09:00:02.227734720 509.4179 42 Sell 2014.02.19 AAPL 2014.02.19D09:00:03.376288471 509.4192 19 Sell 2014.02.19 AAPL 2014.02.19D09:00:03.714012114 509.42 12 Sell 2014.02.19 AAPL 2014.02.19D09:00:13.403248267 509.4331 57 Sell .. q)\ts select from .par.trade where date.month = 2014.02m 93 322964016 q)parse "select from .par.trade where date.month=2014.02m" ? `.par.trade ,,(=;`date.month;2014.02m) 0b () The parse tree above shows how a query is broken down to be executed by the q interpreter. | item | role | |---|---| ? | select or exec statement | `.par.trade | table we are selecting from | ,,(=;`date.month;2014.02m) | Where clause | 0b | By clause: 0b is none | () | columns to select: () means all | This pattern is the same as that used in the functional forms of select or exec . Basics: Functional q-SQL Q for Mortals: §9.12 Functional Forms The key to this query working efficiently is ensuring that date.month is the first element in the Where clause of the parsed query. Joins¶ One of the most powerful aspects of kdb+ is that it comes equipped with a large variety of join operators for enriching datasets. These operators may be divided into two distinct types, timeseries joins and non-timeseries joins. In this section we will focus on optimal use of three of the most commonly-used timeseries join operators, aj and wj , and the non-timeseries join lj . Basics: Joins The majority of joins in a kdb+ database should be performed implicitly by foreign keys and linked columns, permanent relationships defined between tables in a database. These provide a performance advantage over the standard joins and should be used where appropriate. White paper: The application of foreign keys and linked columns in kdb+ Left joins¶ The left join lj is one of the most common, important and widely used joins in kdb+. It is used to join two tables based on the key columns of the table in the second argument. If the key columns in the second table are unique it will perform an operation similar to a left outer join in standard SQL. However, if the keys in the second table are not unique, it will look up the first value only. If there is a row in the first table which has no corresponding row in the second table, a null record will be added in the resulting table. The example below illustrates a simple left join which is used to add reference data to our quote table. First we define a table .mem.ref , keyed on symbol, with some market information as the value. The lj joins this market information to the .mem.trade table, returning the result. q).mem.ref:([sym:exec distinct sym from .mem.trade]; mkt:100?`n`a`l) q).mem.quote lj .mem.ref sym dt ap as bp bs mkt -------------------------------------------------------------- AAPL 2014.02.23D09:00:00.117540266 544.0444 27 543.9356 56 n AAPL 2014.02.23D09:00:00.770298577 544.0431 57 543.9343 68 n AAPL 2014.02.23D09:00:01.014678832 544.043 77 543.9342 96 n AAPL 2014.02.23D09:00:03.650976810 544.0455 12 543.9367 48 n .. In our second example we see that when there are duplicate keys in the second table only the first value corresponding to that key is used in the join. q).mem.ref2:update sym:`AAPL from .mem.ref1 where 0 = i mod 2 q).mem.ref2 sym | mkt ----| --- AAPL| n ACJ | n AAPL| l AHF | a AAPL| l .. 
q)select firstMkt:first mkt, lastMkt:last mkt by sym from .mem.ref2 where sym = `AAPL sym | firstMkt lastMkt ----| ---------------- AAPL| n l q).mem.quote lj .mem.ref2 sym dt ap as bp bs mkt -------------------------------------------------------------- AAPL 2014.02.23D09:00:00.117540266 544.0444 27 543.9356 56 n AAPL 2014.02.23D09:00:00.770298577 544.0431 57 543.9343 68 n AAPL 2014.02.23D09:00:01.014678832 544.043 77 543.9342 96 n AAPL 2014.02.23D09:00:03.650976810 544.0455 12 543.9367 48 n AAPL 2014.02.23D09:00:06.727747153 544.0471 81 543.9383 50 n The left join is built into dict,keyedTab and tab,\:keyedTab (since V2.7). lj has been updated (since V3.0) to use this form inside a .Q.ft wrapper to allow the table being joined to be keyed. .Q.ft will unkey the table, run the left join, and then rekey the result. Window joins¶ wj and wj1 are the most general forms of timeseries joins. They aggregate over all the values in specified columns for a given time interval. wj1 differs from wj in that it only considers values from within the given time period, whereas wj will also consider the currently prevailing values. In the example below wj calculates the minimum ask price and maximum bid price in the second table, .mem.quote , based on the time windows w ; in this case the values just before and after a trade is executed in .mem.trade . // Define the time windows to aggregate over q)w:-2 1+\:exec dt from .mem.trade where sym = `AAPL q)w 2014.02.23D09:00:00.118540264 2014.02.23D09:00:22.930035590 2014.02.23D09:00:00.118540267 2014.02.23D09:00:22.930035593 // calculate the min ask price and max bp for each time window and join to the trade table q)wj[w; `sym`dt; select from .mem.trade where sym = `AAPL; (.mem.quote; (min; `ap); (max; `bp))] sym dt tp ts side ap bp --------------------------------------------------------------------- AAPL 2014.02.23D09:00:00.118540266 544.0444 27 Sell 544.0444 543.9356 AAPL 2014.02.23D09:00:22.930035592 544.0437 55 Sell 544.0437 543.9349 AAPL 2014.02.23D09:00:25.852858872 544.0401 88 Buy 544.0401 543.9313 AAPL 2014.02.23D09:00:35.654202142 543.9216 12 Sell 544.0304 543.9216 AAPL 2014.02.23D09:00:42.274095995 543.9134 40 Buy 544.0222 543.9134 .. q)\ts wj[w; `sym`dt; select from .mem.trade where sym = `AAPL; (.mem.quote; (min; `ap); (max; `bp))] 9 207696 q)w:-2 1+\:.mem.trade.dt q)\ts wj[w; `sym`dt; .mem.trade; (.mem.quote; (min; `ap); (max; `bp))] 485 14898368 wj can also be used to run aggregations efficiently on splayed or partitioned data. // define an aggregation window from this trade back to the previous one q)w:value flip select dt^prev dt,dt from .par.trade where date=max date, sym=`AAPL // calculate max and min values between two trades q)t:select from .par.trade where date = max date, sym = `AAPL q)q:select from .par.quote where date = max date q)wj[w; `sym`dt; t; (q; (min; `ap); (max; `bp))] date sym dt tp ts side ap bp -------------------------------------------------------------------------- 2014.02.23 AAPL 2014.02.23D09:00:01.606196483 527.0609 43 Sell 527.0609 526.9555 2014.02.23 AAPL 2014.02.23D09:00:02.905704958 527.0586 12 Buy 527.0609 526.9555 2014.02.23 AAPL 2014.02.23D09:00:14.610539881 526.9486 33 Buy 527.0597 526.9543 .. wj can be useful for finding max and min values in a given timeframe as shown above, however in most situations it is preferable to use an as-of join or a combination of as-of joins. As-of joins¶ aj and aj0 are simpler versions of wj . 
They return the last value from the given time interval rather than the results from an arbitrary aggregation function. aj displays the time column from the first table in its result, whereas aj0 uses the column from the second table. If a table has either a grouped or parted attribute on its sym column, as is the case for all of the tables in our sample database, it will likely be a good candidate for an as-of join, which we would expect to give constant time performance. However it is important to realize that only the attributes on the first column in an as-of join will be used, therefore it is rarely a good idea to use an aj on more than two columns. If there is no attribute on the data being joined, or there is a need to apply extra constraints we will expect a linear runtime and a select statement will in most cases be more appropriate. As we can see in the results below, our second as-of join without the attribute is four orders of magnitude slower than the first join and used an order of magnitude more memory. q)meta .mem.quote c | t f a ---| ----- sym| s g dt | p ap | f as | j bp | f bs | j q)\ts aj[`sym`dt; select from .mem.trade where sym = `AAPL; .mem.quote] 2 150848 q)update sym:reverse reverse sym from `.mem.quote `.mem.quote q)meta .mem.quote c |tfa ---| ----- sym| s dt | p ap | f as | j bp | f bs | j q)\ts aj[`sym`dt; select from .mem.trade where sym = `AAPL; .mem.quote] 11393 2442416 An as-of join can also be performed directly on memory-mapped data without having to read the entire files. It is important to take advantage of this by only reading the data needed into memory, rather than performing further restrictions in the Where clause, as this will result in a subset of the data being copied into memory and greatly increase the runtime despite working with a smaller dataset. q)t:select from .par.trade where date = max date, sym = `AAPL q)q:select from .par.quote where date = .z.d q)select sym, dt, ap, bp, tp from aj[`sym`dt; t; q] sym dt ap bp tp ------------------------------------------------------------- AAPL 2014.02.23D09:00:01.606196483 527.0609 526.9555 527.0609 AAPL 2014.02.23D09:00:02.905704958 527.0586 526.9532 527.0586 AAPL 2014.02.23D09:00:14.610539881 527.054 526.9486 526.9486 AAPL 2014.02.23D09:00:15.613542722 527.0524 526.947 527.0524 AAPL 2014.02.23D09:00:25.251668544 527.0447 526.9393 526.9393 AAPL 2014.02.23D09:00:28.020791189 527.0443 526.9389 526.9389 AAPL 2014.02.23D09:00:32.284428963 527.0435 526.9381 527.0435 AAPL 2014.02.23D09:00:33.319332489 527.042 526.9366 526.9366 AAPL 2014.02.23D09:00:33.863122707 527.0424 526.937 527.0424 AAPL 2014.02.23D09:00:34.584276510 527.0417 526.9363 527.0417 .. q)\ts aj[`sym`dt; select from .par.trade where date = .z.D, sym = `AAPL; select from .par.quote where date = max date] 24 68438016 q)\ts aj[`sym`dt; select from .par.trade where date = .z.D, sym = `AAPL; select from .par.quote where date = max date, sym = `AAPL] 4696 8193984 Conclusion¶ This paper looked at how to take advantage of multiple kdb+ query structures to achieve optimal query performance for large volumes of data. It focused on making efficient changes to standard select statements for both on-disk and in-memory databases, illustrating how they can provide both flexibility and performance within a database. It also considered joins, in particular the left join, window join and as-of join, which allow us to perform large-scale analysis on in-memory and on-disk tables. 
The performance and flexibility of user queries and timeseries joins are some of the main reasons why kdb+ is such an effective tool for the analysis of large-scale timeseries data. The efficient application of these tools is vital for any kdb+ enterprise system.

Author¶

Ian Lester is a financial engineer who has worked as a consultant for some of the world’s largest financial institutions. Based in New York, Ian is currently working on a trading application at a US investment bank.
get , set ¶ Read or set the value of a variable or a kdb+ data file get ¶ Read or memory-map a variable or kdb+ data file get x get[x] Where x is - the name of a global variable as a symbol atom - a file or folder named as a symbol atom or vector returns its value. Signals a type error if the file is not a kdb+ data file. Used to map columns of databases in and out of memory when querying splayed databases, and can be used to read q log files, etc. q)a:42 q)get `a 42 q)\l trade.q q)`:NewTrade set trade / save trade data to file `:NewTrade q)t:get`:NewTrade / t is a copy of the table q)`:SNewTrade/ set .Q.en[`:.;trade] / save splayed table `:SNewTrade/ q)s:get`:SNewTrade/ / s has columns mapped on demand value is a synonym for get By convention, value is used for other purposes. But the two are completely interchangeable. q)value "2+3" 5 q)get "2+3" 5 set ¶ Assign a value to a global variable Persist an object as a file or directory nam set y set[nam;y] /set global var nam file set y set[file;y] /serialize y to file dir set t set[dir;t] /splay t to dir (file;lbs;alg;lvl) set y set[(file;lbs;alg;lvl);y] /write y to file, compressed and/or encrypted (dir;lbs;alg;lvl) set t set[(dir;lbs;alg;lvl);t] /splay t to dir, compressed and/or encrypted (dir;dic) set t set[(dir;dic);t] /splay t to dir, compressed and/or encrypted Where alg integer atom compression/encryption algorithm dic dictionary compression/encryption specifications dir filesymbol directory in the filesystem file filesymbol file in the filesystem lbs integer atom logical block size lvl integer atom compression level nam symbol atom valid q name t table y (any) any q object Compression parameters alg , lbs , and lvl Encryption parameters alg and lbs Compression/Encryption specification dictionary Examples: q)`a set 42 / set global variable `a q)a 42 q)`:a set 42 / serialize object to file `:a q)t:([]tim:100?23:59;qty:100?1000) / splay table q)`:tbl/ set t `:tbl/ q)(`:ztbl;17;2;6) set t / serialize compressed `:ztbl q)(`:ztbl/;17;2;6) set t / splay table compressed `:ztbl/ q)(`:ztbl/;17;16;6) set t / splay table encrypted (since v4.0 2019.12.12) `:ztbl/ Anymap write detects consecutive deduplicated (address matching) top-level objects, skipping them to save space (since v4.1t 2021.06.04, v4.0 2023.01.20) q)a:("hi";"there";"world") q)`:a0 set a `:a0 q)`:a1 set a@where 1000 2000 3000 `:a1 q)(hcount`$":a0#")=hcount`$":a1#" 0b Since 4.1t 2023.09.29,4.0 2023.11.03 when writing anymap, empty vectors without attributes are deduplicated automatically (including enum vectors when the enum name is 'sym'). Since 4.1t 2021.06.04,4.0 2023.01.20 improved memory efficiency of writing nested data sourced from a type 77 (anymap) file, commonly encountered during compression of files. e.g. q)`:a set 500000 100#"abc";system"ts `:b set get`:a" / was 76584400 bytes, now 8390208. Splayed table¶ To splay a table t to directory dir dir must be a filesymbol that ends with a/ t must have no primary keys- columns of t must be vectors or compound lists - symbol columns in t must be fully enumerated Format¶ set saves the data in a binary format akin to tag+value, retaining the structure of the data in addition to its value. q)`:data/foo set 10 20 30 `:data/foo q)read0 `:data/foo "\376 \007\000\000\000\000\000\003\000\000\000\000\000\000\000" "\000\000\000\000\000\000\000\024\000\000\000\000\000\000\000\036\000.. Setting variables in the KX namespaces can result in undesired and confusing behavior. 
These are .h , .j , .Q , .q , .z , and any other namespaces with single-character names. Compression/Encryption¶ For (fil;lbs;alg;lvl) set y / write y to fil, compressed and/or encrypted (dir;lbs;alg;lvl) set t / splay t to dir, compressed and/or encrypted Arguments lbs , alg , and lvl are compression parameters and/or encryption parameters. Splay table t to directory ztbl/ with gzip compression: q)(`:ztbl/;17;2;6) set t `:ztbl/ For (dir;dic) set t / splay t to dir, compressed the keys of dic are either column names of t or the null symbol ` . The value of each entry is an integer vector: lbs , alg , and lvl . Compression/encryption for unspecified columns is specified either by an entry for the null symbol (as below) or by .z.zd . q)m1:1000000 q)t:([]a:m1?10;b:m1?10;c:m1?10;d:m1?10) q)/specify compression for cols a, b and defaults for others q)show dic:``a`b!(17 5 3;17 2 6;17 2 6) | 17 5 3 a| 17 2 6 b| 17 2 6 q)(`:ztbl/;dic) set t / splay table compressed `:ztbl/ Compression may speed up or slow down the execution of set . The performance impact depends mainly on the data characteristics and the storage speed. Database: tables in the filesystem File system File compression Compression in kdb+ Data at rest encryption (DARE) getenv ¶ Get or set an environment variable getenv ¶ Get the value of an environment variable getenv x getenv[x] where x is a symbol atom naming an environment variable, returns its value. q)getenv `SHELL "/bin/bash" q)getenv `UNKNOWN / returns empty if variable not defined "" setenv ¶ Set the value of an environment variable x setenv y setenv[x;y] where x is a symbol atomy is a string sets the environment variable named by x . q)`RTMP setenv "/home/user/temp" q)getenv `RTMP "/home/user/temp" q)\echo $RTMP "/home/user/temp" | Greater, or ¶ Greater; logical OR x|y |[x;y] x or y or[x;y] Returns the greater of the underlying values of x and y . q)2|3 3 q)1010b or 1100b /logical OR with booleans 1110b q)"sat"|"cow" "sow" | is a multithreaded primitive. Flags¶ Where x and y are both flags, Greater is logical OR. Use or for flags While Greater and or are synonyms, it helps readers to apply or only and wherever flag arguments are expected. There is no performance implication. Dictionaries and keyed tables¶ Where x and y are a pair of dictionaries or keyed tables the result is equivalent to upserting y into x where the values of y exceed those in x . q)show a:([sym:`ibm`msoft`appl`goog]t:2017.05 2017.09 2015.03 2017.11m) sym | t -----| ------- ibm | 2017.05 msoft| 2017.09 appl | 2015.03 goog | 2017.11 q)show b:([sym:`msoft`goog`ibm]t:2017.08 2017.12 2016.12m) sym | t -----| ------- msoft| 2017.08 goog | 2017.12 ibm | 2016.12 q)a|b sym | t -----| ------- ibm | 2017.05 msoft| 2017.09 appl | 2015.03 goog | 2017.12 Mixed types¶ Where x and y are of different types the greater of their underlying values is returned as the higher of the two types. q)98|"a" "b" Implicit iteration¶ Greater and or are atomic functions. q)(10;20 30)|(2;3 4) 10 20 30 They apply to dictionaries and tables. q)k:`k xkey update k:`abc`def`ghi from t:flip d:`a`b!(10 -21 3;4 5 -6) q)d|5 a| 10 5 5 b| 5 5 5 q)d|`b`c!(10 20 30;1000*1 2 3) / upsert semantics a| 10 -21 3 b| 10 20 30 c| 1000 2000 3000 q)t|5 a b ---- 10 5 5 5 5 5 q)k|5 k | a b ---| ---- abc| 10 5 def| 5 5 ghi| 5 5 Domain and range¶ b g x h i j e f c s p m d z n u v t ---------------------------------------- b | b . x h i j e f c . p m d z n u v t g | . . . . . . . . . . . . . . . . . . x | x . x h i j e f c . p m d z n u v t h | h . h h i j e f c . 
p m d z n u v t i | i . i i i j e f c . p m d z n u v t j | j . j j j j e f c . p m d z n u v t e | e . e e e e e f c . p m d z n u v t f | f . f f f f f f c . p m d z n u v t c | c . c c c c c c c . p m d z n u v t s | . . . . . . . . . . . . . . . . . . p | p . p p p p p p p . p p p p n u v t m | m . m m m m m m m . p m d . . . . . d | d . d d d d d d d . p d d z . . . . z | z . z z z z z z z . p . z z n u v t n | n . n n n n n n n . n . . n n n n n u | u . u u u u u u u . u . . u n u v t v | v . v v v v v v v . v . . v n v v t t | t . t t t t t t t . t . . t n t t t Range: bcdefhijmnptuvxz and , & , Lesser, max , min Comparison, Logic Q for Mortals §4.5 Greater and Lesser > Greater Than >= At Least¶ x>y >[x;y] x>=y >=[x;y] Returns 1b where the underlying value of x is greater than (or at least) that of y . q)(3;"a")>(2 3 4;"abc") 100b 000b q)(3;"a")>=(2 3 4;"abc") 110b 100b With booleans: q)0 1 >/:\: 0 1 00b 10b q)0 1 >=/:\: 0 1 10b 11b Implicit iteration¶ Greater Than and At Least are atomic functions. q)(10;20 30)>(50 -20;5) 01b 11b They apply to dictionaries and tables. q)k:`k xkey update k:`abc`def`ghi from t:flip d:`a`b!(10 -21 3;4 5 -6) q)d>=5 a| 100b b| 010b q)t>5 a b --- 1 0 0 0 0 0 q)k>5 k | a b ---| --- abc| 1 0 def| 0 0 ghi| 0 0 Range and domain¶ b g x h i j e f c s p m d z n u v t ---------------------------------------- b | b . b b b b b b b . b b b b b b b b g | . b . . . . . . . . . . . . . . . . x | b . b b b b b b b . b b b b b b b b h | b . b b b b b b b . b b b b b b b b i | b . b b b b b b b . b b b b b b b b j | b . b b b b b b b . b b b b b b b b e | b . b b b b b b b . b b b b b b b b f | b . b b b b b b b . b b b b b b b b c | b . b b b b b b b . b b b b b b b b s | . . . . . . . . . b . . . . . . . . p | b . b b b b b b b . b b b b b b b b m | b . b b b b b b b . b b b . . . . . d | b . b b b b b b b . b b b b . . . . z | b . b b b b b b b . b . b b b b b b n | b . b b b b b b b . b . . b b b b b u | b . b b b b b b b . b . . b b b b b v | b . b b b b b b b . b . . b b b b b t | b . b b b b b b b . b . . b b b b b Range: b group ¶ group x group[x] Returns a dictionary in which the keys are the distinct items of x , and the values the indexes where the distinct items occur. The order of the keys is the order in which they appear in x . q)group "mississippi" m| ,0 i| 1 4 7 10 s| 2 3 5 6 p| 8 9 To count the number of occurrences of each distinct item: q)count each group "mississippi" m| 1 i| 4 s| 4 p| 2 To get the index of the first occurrence of each distinct item: q)first each group "mississippi" m| 0 i| 1 s| 2 p| 8 gtime , ltime ¶ Global and local time gtime ¶ UTC equivalent of local timestamp gtime ts gtime[ts] Where ts is a datetime/timestamp, returns the UTC datetime/timestamp. q).z.p 2009.10.20D10:52:17.782138000 q)gtime .z.P / same timezone as .z.p 2009.10.20D10:52:17.783660000 ltime ¶ Local equivalent of UTC timestamp ltime ts ltime[ts] Where ts is a datetime/timestamp, returns the local datetime/timestamp. q).z.P 2009.11.05D15:21:10.040666000 q)ltime .z.p / same timezone as .z.P 2009.11.05D15:21:10.043235000 System clocks¶ UTC and local datetime/timestamps are available as hcount ¶ Size of a file in bytes hcount x hcount[x] Where x is a file symbol, returns as a long the size of the file. q)hcount`:c:/q/test.txt 42 On a compressed/encrypted file returns the size of the original uncompressed/unencrypted file.
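A brief illustration of the note above on compressed files (this example is not from the reference text; the file names are arbitrary and actual on-disk byte counts depend on the data and compression settings):

q)`:plain set v:til 1000000        / uncompressed file
q)(`:packed;17;2;6) set v          / zlib-compressed copy of the same data
q)(hcount`:plain)=hcount`:packed   / hcount reports the original, uncompressed size for both
1b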
Changes in 4.0¶ Production release date¶ 2020.03.17 Updates¶ 2023.01.20¶ Anymap write now detects consecutive deduplicated (address matching) top-level objects, skipping them to save space. q)a:("hi";"there";"world"); q)`:a0 set a;`:a1 set a@where 1000 2000 3000; q)(hcount`$":a0#")=hcount`$":a1#" Improved memory efficiency of writing nested data sourced from a type 77 file, commonly encountered during compression of files, e.g. q)`:a set 500000 100#"abc" q)system"ts `:b set get`:a" / was 76584400 bytes, now 8390208 2022.10.26¶ l64arm build available. m64 is now a universal binary, containing both Intel and ARM builds for macOS. Use lipo to extract the individual architecture binaries. e.g. lipo -thin x86_64 -output q/m64/q.x64 q/m64/q lipo -thin arm64 -output q/m64/q.arm q/m64/q Support for OpenSSL v3: on Linux, q will now try to load versioned shared libraries for OpenSSL if libssl-dev[el] is not installed. the -p command-line option (or \p system command) can now listen on a port within a specified range e.g. q)\p 80/85 q)\p 81 The range of ports is inclusive and tried in a random order. A service name can be used instead of a port number. The existing option of using 0W to choose a free ephemeral port can be more efficient (where suitable). A range can be used in place of port number, when setting using existing rules e.g. for hostname q)\p myhost:2000/2010 or for multithreaded port q)\p -2000/2010 2021.07.12¶ Extended the range of .z.ac to (4;"") to indicate fallback to try authentication via .z.pw . 2021.03.25¶ On Windows builds, TCP send and receive buffers are no longer set to fixed sizes, allowing autotuning by the OS. 2021.01.20¶ The result of dlerror is appended to the error message if there is an error loading compression or encryption libraries e.g. (without Snappy installed) q).z.zd:17 3 6 q)`:a set til 1000000 'snappy libs required to compress a$. dlopen(libsnappy.dylib, 0x0002): tried: 'libsnappy.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OSlibsnappy.dylib' (no such file), '/usr/lib/libsnappy.dylib' (no such file, not in dyld cache), 'libsnappy.d [0] `:a set til 1000000 ^ 2020.11.02¶ Path-length limit is now 4095 bytes, instead of 255. OS errors are now truncated (indicated by .. ) on the left to retain the more important tail, e.g. q)get`$":root/",(250#"path/"),"file" '..path/path/path/path/path/path/path/path/path/path/path/path/path/path/path/path/path/path/path/path/path/path/path/path/path/path/path/path/path/path/path/path/path/path/path/path/path/path/path/path/path/path/file. OS reports: No such file or directory [0] get`$":root/",(250#"path/"),"file" ^ 2020.08.10¶ Serialize and deserialize now use multithreaded memory, e.g. q)\t system"s 10";-9!-8!100#enlist a:1000000#0 Uncompressed file writes to shared memory/cache are now up to 2x faster, e.g. q)\t a:100 100000#"abc";do[100;`:/dev/shm/a set a] 2020.08.03¶ One-shot request now checks that it has received a response message, otherwise signals expected response msg , e.g. q)`::5001"neg[.z.w]`" 'expected response msg 2020.07.15¶ Errors raised by the underlying decompression routines are now reported as decompression error at block <b> in <f> . 2020.06.18¶ Allow more than one scalar to be extended in Group-by clause. e.g. q)select sum x by a:`,b:` from([]x:1 2 3) a b| x ---| - | 6 2020.06.01¶ Added .z.H , a low-cost method to obtain the list of active sockets. 
.z.H~key .z.W Added -38!x : where x is a list of socket handles, returns a table of ([]p:protocol;f:socketFamily) where protocol isq (IPC) orw (WebSocket)socketFamily ist (TCP) oru (Unix domain socket) q){([]h)!-38!h:.z.H}[] h| p f -| --- 8| q u 9| q t 2020.05.20¶ Access-controlled file paths are now allowed to be canonical in addition to relative for remote handles under reval or -u in the command line. e.g. h(reval(value;)enlist@;(key;`$":/home/charlie/db")) Multithreaded primitives¶ kdb+ has been multithreaded for more than 15 years, and users could leverage this explicitly through peach , or via the multithreaded input mode. kdb+ 4.0 adds an additional level of multithreading via primitives. It is fully transparent to the user, requiring no code change by the user to exploit it. The underlying framework currently uses the number of threads configured as secondary threads on the command line. As most kdb+ primitives are memory-bound, within-primitive parallelism is intended to exploit all-core memory bandwidth available on modern server hardware with multi-channel memory. SSE-enabled primitives (mostly arithmetic) are not parallel when SSE is not available (Windows and NoSSE builds). Within-primitive parallelism reverts to being single-threaded within peach or multi-threaded input mode. Systems with low aggregate memory bandwidth are unlikely to see an improvement for in-memory data, but on-disk data should still benefit. Multi-threaded primitives are not NUMA-aware and should not be expected to scale beyond 1 socket. Data-At-Rest Encryption (DARE)¶ kdb+4.0 supports Data-At-Rest Encryption (DARE), using AES256CBC. As with the built-in file compression, encryption is transparent and requires no changes to a query to utilize it. Once a master key has been created via a third-party tool such as OpenSSL: openssl rand 32 | openssl aes-256-cbc -salt -pbkdf2 -iter 50000 -out testkek.key the key can then be loaded into kdb+ using -36!(`:testkek.key;"mypassword") Files can then be compressed and/or encrypted using the same command as for file compression, with AES256CBC encryption as algo 16. e.g. (`:ztest;17;2+16;6) set asc 10000?`3 / compress and encrypt to an individual file or use .z.zd for process-wide default setting when writing files: .z.zd:(17;2+16;6) / zlib compression, with aes256cbc encryption kdb+ DARE requires OpenSSL 1.1.1 Data-At-Rest Encryption (DARE) Optane support¶ Memory can be backed by a filesystem, allowing use of DAX-enabled filesystems (e.g. AppDirect) as a non-persistent memory extension for kdb+. Use command-line option -m path to use the filesystem path specified as a separate memory domain. This splits every thread’s heap into two: | domain | content | |---|---| | 0 | regular anonymous memory (active and used for all allocs by default) | | 1 | filesystem-backed memory | Namespace .m is reserved for objects in domain 1; however names from other namespaces can reference them too, e.g. a:.m.a:1 2 3 . \d .m changes the current domain to 1, causing it to be used by all further allocs. \d .anyotherns sets it back to 0. .m.x:x ensures the entirety of .m.x is in the domain 1, performing a deep copy of x as needed. (Objects of types 100-103h , 112h are not copied and remain in domain 0.) Lambdas defined in .m set current domain to 1 during execution. This will nest since other lambdas don’t change domains: q)\d .myns q)g:{til x} q)\d .m q)w:{system"w"};f:{.myns.g x} q)\d . 
q)x:.m.f 1000000;.m.w` / x allocated in domain 1 -120!x returns x 's domain (currently 0 or 1), e.g. 0 1~-120!'(1 2 3;.m.x:1 2 3) . \w returns memory info for the current domain only: q)value each ("\\d .m";"\\w";"\\d .";"\\w") The -w limit (M1/m2) is no longer thread-local, but domain-local: command-line option -w and system command \w set the limit for domain 0. mapped is a single global counter, the same in every thread's \w . Profiler¶ kdb+ 4.0 (for Linux only) includes an experimental built-in call-stack snapshot primitive that allows building a sampling profiler. Added support for OpenSSL 1.1.x¶ One-shot sync queries can now execute via `::[(":host:port";timeout);query] which allows a timeout to be specified. Debugger¶ There are various Debugger improvements: The .Q.bt display highlights the current frame with >> . q)).Q.bt` [2] {x+1} ^ >>[1] {{x+1}x} ^ [0] {{x+1}x}`a ^ A new Debugger command & (where) prints current frame info. q))& [1] {{x+1}x} ^ The Debugger restores the original namespace and language (q or k) setting for every frame. View calculations and \ system commands, including \l , correspond to individual debug stack frames. .d1 ).Q.bt` >>[3] t0.k:8: va::-a ^ [2] t1.q:8: vb::va*3 ^ [1] t1.q:7: vc::vb+2 ^ [0] 2+vc ^ Errors thrown by parse show up in .Q.trp with location information. q).Q.trp[parse;"2+2;+2";{-2@x,"\n",.Q.sbt 2#y}]; [3] 2+2;+2 ^ [2] (.q.parse) The name (global used as local) bytecode compiler error has location info. q){a::1;a:1} 'a [0] {a::1;a:1} ^ Miscellaneous¶ - The parser now interprets Form Feed (Ctl-L) as whitespace, allowing a script to be divided into pages at logical places. - The macOS build no longer exits with couldn't report -- exiting after waking from system sleep when running under an on-demand license. This alleviates the issue of Apple dropping support for 32-bit apps on macOS 10.15. - Multicolumn table lookup now scales smoothly, avoiding catastrophic slowdown for particular distributions of data at the expense of best-case performance. - Stdout and stderr may now be redirected to the same file, sharing the same file table entry underneath. This mimics the redirection of stdout and stderr at a Unix shell: cmd > log.txt 2>&1 . - Both HTTP client and server now support gzip compression via "Content-Encoding: gzip" for responses to form?... -style requests. The response payload must be >2000 chars and the client must indicate support via the "Accept-Encoding: gzip" HTTP header, as is automatically done in .Q.hg and .Q.hp . - Externally compressed (e.g. using gzip) log files can now be played back via a fifo to -11!`:logfifo , e.g. q)system"mkfifo logfifo;gunzip log.gz > logfifo&";-11!`:logfifo - Further integrity checks have been added to streaming execute -11!x to avoid wsfull or segfault on corrupted/incomplete log files. - -u/U passwordfile now supports SHA1 password entries. Passwords must all be plain, MD5, or SHA1; they cannot be mixed. e.g. q)raze string -33!"mypassword" / -33! calculates sha1 - .Q.cn now uses peach to get the count of partitioned tables. This can improve the startup time for a partitioned database. NUCs¶ We have sought to avoid introducing compatibility issues, and most of those that follow are a result of unifying behavior or tightening up loose cases. select ¶ select count,sum,avg,min,max,prd by is now consistent for atoms. q)first max select count 1b by x1 from ([]1 2 3;3 3 3) / was 3 1 The length of the By clause needs to match table count.
select used to auto-alias colliding duplicate column names for either select a,a from t or select a by c,c from t , but not for select a,a by a from t . Such a collision now signals a dup names for cols/groups a error during parse, indicating the first column name that collides. The easiest way to resolve the conflict is to rename columns explicitly (see the session sketch after this section), e.g. q)select a,b by c:a from t reval ¶ Has been extended to behave as if command-line options -u 1 and -b were active, and to block all system calls that change state: all writes to the file system are blocked, read access is allowed only to files in the working directory and below, and amendment of globals is prevented. lj ¶ Now checks that its right argument is a keyed table. Command-line processing¶ Now checks for duplicate or mutually exclusive flags, e.g. q -U 1 -u 1 throws -u or -U . License-related errors¶ Now reported with the prefix licence error: , e.g. 'licence error: upd .
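Returning to the select NUC above, a minimal session sketch of the new error and the rename workaround (the table t is purely illustrative):
q)t:([]a:1 2 3;b:10 20 30)
q)select a,a by a from t
'dup names for cols/groups a
q)select a,b by c:a from t    / renaming the group column resolves the collision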
// need this to handle queries that only hit one backend process // reformat those responses to look the same formatresponse:{[status;sync;result] if[`kxdash~first result; res:last result; :$[res`status; (`.dash.rcv_msg;res`w;res`o;res`r;res`result); (`.dash.snd_err;res`w;res`r;res`result)]]; $[not[status]and sync;'result;result]} init:{ // KX dashboards are expecting getFunctions to be defined on the process .api.getFunctions:@[value;`.api.getFunctions;{{:()}}]; // Reset format response .gw.formatresponse:formatresponse; // incorporate dashps into the .z.ps definition .dotz.set[`.z.ps;{x@y;.kxdash.dashps y}@[value;.dotz.getcommand[`.z.ps];{{value x}}]]; }; if[enabled;init[]]; ================================================================================ FILE: TorQ_code_handlers_apidetails.q SIZE: 5,587 characters ================================================================================ // Add to the api functions \d .api if[not`add in key `.api;add:{[name;public;descrip;params;return]}] // Message handlers add[`.usage.usage;1b;"log of messages through the message handlers";"";""] add[`.usage.logtodisk;1b;"whether to log to disk";"";""] add[`.usage.logtomemory;1b;"whether to log to .usage.usage";"";""] add[`.usage.ignore;1b;"whether to check the ignore list for functions to ignore";"";""] add[`.usage.ignorelist;1b;"the list of functions to ignore";"";""] add[`.usage.logroll;1b;"whether to automatically roll the log file";"";""] add[`.usage.rolllogauto;1b;"Roll the .usage txt files";"[]";"null"] add[`.usage.readlog;1b;"Read and return a usage log file as a table";"[string: name of log file]";"null"] add[`.access.USERS;1b;"Table of users and their types";"";""] add[`.access.HOSTPATTERNS;1b;"List of host patterns allowed to access this process";"";""] add[`.access.POWERUSERTOKENS;1b;"List of tokens allowed by power users";"";""] add[`.access.USERTOKENS;1b;"List of tokens allowed by default users";"";""] add[`.access.BESPOKETOKENS;1b;"Dictionary of tokens on a per-user basis (outside of their standard allowance)";"";""] add[`.access.addsuperuser;1b;"Add a super user";"[symbol: user]";"null"] add[`.access.addpoweruser;1b;"Add a power user";"[symbol: user]";"null"] add[`.access.adddefaultuser;1b;"Add a default user";"[symbol: user]";"null"] add[`.access.readpermissions;1b;"Read the permissions from a directory";"[string: directory containing the permissions files]";"null"] add[`.clients.clients;1b;"table containing client handles and session values";"";""] add[`.servers.SERVERS;1b;"table containing server handles and session values";"";""] add[`.servers.opencon;1b;"open a connection to a process using the default timeout. 
If no user:pass supplied, the default one will be added if set";"[symbol: the host:port[:user:pass]]";"int: the process handle, null if the connection failed"] add[`.servers.addh;1b;"open a connection to a server, store the connection details";"[symbol: the host:port:user:pass connection symbol]";"int: the server handle"] add[`.servers.addw;1b;"add the connection details of a process behind the handle";"[int: server handle]";"null"] add[`.servers.addnthawc;1b;"add the details of a connection to the table";"[symbol: process name; symbol: process type; hpup: host:port:user:pass connection symbol; dict: attributes of the process; int: handle to the process;boolean: whether to check the handle is valid on insert";"int: the handle of the process"] add[`.servers.getservers;1b;"get a table of servers which match the given criteria";"[symbol: pick the server based on the name value or the type value. Can be either `procname`proctype; symbol(list): lookup values. ` for any; dict: requirements dictionary; boolean: whether to automatically open dead connections for the specified lookup values; boolean: if only one of each of the specified lookup values is required (means dead connections aren't opened if there is one available)]";"table: processes details and requirements matches"] add[`.servers.gethandlebytype;1b;"get a server handle for the supplied type";"[symbol: process type; symbol: selection criteria. One of `roundrobin`any`last]";"int: handle of server"] add[`.servers.gethpbytype;1b;"get a server hpup connection symbol for the supplied type";"[symbol: process type; symbol: selection criteria. One of `roundrobin`any`last]";"symbol: h:p:u:p connection symbol of server"] add[`.servers.startup;1b;"initialise all the connections. Must processes should call this during initialisation";"[]";"null"] add[`.servers.refreshattributes;1b;"refresh the attributes registered with the discovery service. Should be called whenever they change e.g. end of day for an HDB";"[]";"null"] add[`.pm.adduser; 1b; "Adds a user to be permissioned as well as setting their password and the method used to hash it."; "[symbol: the username; symbol: method used to authenticate; symbol: method used to hash the password; string: password, hashed using the proper method]"; "null"] add[`.pm.addgroup; 1b; "Add a group which will have access to certain tables and variables"; "[symbol: the name of the group; string: a description of the group]"; "null"] add[`.pm.addrole; 1b; "Add a role which will have access to certain functions"; "[symbol: the name of the role; string: a description of the role]"; "null"] add[`.pm.addtogroup; 1b; "Add a user to a group, giving them access to all of its variables"; "[symbol: the name of the user to add; symbol: group the user is to be added to]"; "null"] add[`.pm.assignrole; 1b; "Assign a user a role, giving them access to all of its functions"; "[symbol: the name of the user to add; symbol: role the user is to be assigned to]"; "null"] add[`.pm.grantaccess; 1b; "Give a group access to a variable"; "[symbol: the name of the variable the group should get access to; symbol: group that is to be given this access; symbol: the type of access that should be given, eg. 
read, write]"; "null"] add[`.pm.grantfunction; 1b; "Give a role access to a function"; "symbol: name of the function to be added; symbol: role that is to be given this access; TO CLARIFY"; "null"] add[`.pm.createvirtualtable; 1b; "Create a virtual table that a group might be able to access instead of the full table"; "[symbol: new name of the table; symbol: name of the actual table t add; TO CLARIFY]"; "null"] add[`.pm.cloneuser; 1b; "Add a new user that is identical to another user"; "[symbol: name of the new user; symbol: name of the user to be cloned; string: password of the new user]"; "null"] ================================================================================ FILE: TorQ_code_handlers_controlaccess.q SIZE: 7,153 characters ================================================================================ // minor customisation of controlaccess.q from code.kx // http://code.kx.com/wsvn/code/contrib/simon/dotz/ // main change is that the set up (users, hosts and functions) are loaded from csv files // also want to throw an error and have it as an error in the io log rather than a separate log file / control external (.z.p*) access to a kdb+ session, log access errors to file / use <loadinvalidaccess.q> to load and display table INVALIDACCESS / setting .access.HOSTPATTERNS - list of allowed hoststring patterns (";"vs ...) / setting .access.USERTOKENS/POWERUSERTOKENS - list of allowed k tokens (use -5!) / adding rows to .access.USERS for .z.u matches / ordinary user would normally only be able to run canned queries / poweruser can run canned queries and some sql commands / superuser can do anything \d .access // Check if the process has been initialised correctly if[not @[value;`.proc.loaded;0b]; '"environment is not initialised correctly to load this script"] MAXSIZE:@[value;`MAXSIZE;200000000] // the maximum size of any returned result set enabled:@[value;`enabled;0b] // whether permissions are enabled openonly:@[value;`openonly;0b] // only check permissions when the connection is made, not on every call USERS:([u:`symbol$()]poweruser:`boolean$();superuser:`boolean$()) adduser:{[u;pu;su]USERS,:(u;pu;su);} addsuperuser:adduser[;0b;1b];addpoweruser:adduser[;1b;0b];adddefaultuser:adduser[;0b;0b] deleteusers:{delete from`.access.USERS where u in x;} // Read in the various files readhostfile:{ .lg.o[`access;"reading host file ",x]; 1!("*B";enlist",")0:hsym `$x } // Read in the default users file readuserfile:{ .lg.o[`access;"reading user file ",x]; 1!("SBBB";enlist",")0:hsym `$x } // Read in the default functions file readfunctionfile:{ .lg.o[`access;"reading function file ",x]; res:("*BB*";enlist",")0:hsym `$x; res:update func:{@[value;x;{.lg.e[`access;"failed to parse ",x," : ",y];`}[x]]} each func from res; // parse out the user list 1!update userlist:`$";"vs'userlist from res } // Read in each of the files and set up the permissions // if {procname}_*.csv is found, we will use that // otherwise {proctype}_*.csv // otherwise default_*.csv readpermissions:{[dir] .lg.o[`access;"reading permissions from ",dir]; // Check the directory exists if[()~f:key hsym `$dir; .lg.e[`access;"permissions directory ",dir," doesn't exist"]]; // Read in the permissions files:{poss:`$(string (`default;.proc.proctype;.proc.procname)),\:y; poss:poss where poss in x; if[0=count poss; .lg.e[`access;"failed to find appropriate ",y," file. 
At least default",y," should be supplied"]]; poss}[key hsym `$dir] each ("_hosts.csv";"_users.csv";"_functions.csv"); // only need to clear out the users - everything else is reset .lg.o[`access;"clearing out old permissions"]; delete from `.access.USERS; // Load up each one hosts::raze readhostfile each (dir,"/"),/:string files 0; users::raze readuserfile each (dir,"/"),/:string files 1; funcs::raze readfunctionfile each (dir,"/"),/: string files 2; HOSTPATTERNS::exec host from hosts where allowed; addsuperuser each exec distinct user from users where superuser; addpoweruser each exec distinct user from users where not superuser,poweruser; adddefaultuser each exec distinct user from users where not superuser,not poweruser,defaultuser; USERTOKENS::asc distinct exec func from funcs where default; POWERUSERTOKENS::asc distinct exec func from funcs where default or power; // build a dictionary of specific functions for specific users BESPOKETOKENS::exec asc distinct func by userlist from ungroup select func,userlist from funcs where not userlist~\:enlist`; } likeany:{0b{$[x;x;y like z]}[;x;]/y} loginvalid:{[ok;zcmd;cmd] if[not ok;H enlist(`LOADINVALIDACCESS;`INVALIDACCESS;(.z.i;.proc.cp[];zcmd;.z.a;.z.w;.z.u;.dotz.txtC[zcmd;cmd]))];ok} validuser:{[zu;pu;su]$[su;exec any(`,zu)in u from USERS where superuser;$[pu;exec any(`,zu)in u from USERS where poweruser or superuser;exec any(`,zu)in u from USERS]]} superuser:validuser[;0b;1b];poweruser:validuser[;1b;0b];defaultuser:validuser[;0b;0b] validhost:{[za] $[likeany[.dotz.ipa za;HOSTPATTERNS];1b;likeany["."sv string"i"$0x0 vs za;HOSTPATTERNS]]} validsize:{[x;y;z] $[superuser .z.u;x;MAXSIZE>s:-22!x;x;'"result size of ",(string s)," exceeds MAXSIZE value of ",string MAXSIZE]} cmdpt:{$[10h=type x;.q.parse x;x]} cmdtokens:{ // return a list from nested lists raze(raze each)over{ // check if the argument is a list or mixed list $[(0h<=type x) & 1<count x; // check if the first element of the argument is a string or // has one element but is not a mixed list $[(10h = type fx) | (not 0h=type fx) & 1=count fx:first x; // return the first element and convert any character types to sym type {[x] $[(type x) in -10 10h;`$x;x]} fx; // apply this function interatively into nested lists where the type is a mixed list or list of symbols ],.z.s each x where (type each x) in 0 11h; ] }x } usertokens:{$[superuser x;0#`;$[poweruser x;POWERUSERTOKENS;$[defaultuser x;USERTOKENS;'`access]]],BESPOKETOKENS[x]} validpt:{all(cmdtokens x)in y} validcmd:{[u;cmd] $[superuser u;1b;validpt[cmdpt cmd;usertokens u]]} invalidpt:{'"invalid parse token(s):",raze" ",'string distinct(cmdtokens cmdpt x)except usertokens .z.u}
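To make the loader contract above concrete, here is a hedged sketch of the three CSV files .access.readpermissions looks for; the column names are inferred from how readhostfile, readuserfile and readfunctionfile consume them, and all values are purely illustrative:
/ default_hosts.csv - host patterns permitted to connect
host,allowed
localhost,1
192.168.*,1
/ default_users.csv - user classification
user,superuser,poweruser,defaultuser
admin,1,0,0
quant1,0,1,0
/ default_functions.csv - parse tokens allowed per class, with an optional ;-separated user list
func,default,power,userlist
.proc.cp,1,1,
.servers.getservers,0,1,admin;quant1
These would be loaded with .access.readpermissions["/path/to/permissions"] (path assumed), after which HOSTPATTERNS, USERS, USERTOKENS, POWERUSERTOKENS and BESPOKETOKENS are rebuilt.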
ABC problem¶ Search a tree of possibilities, stop when you find one A simple problem requires recursively searching a tree of possible solutions. Each Right is used to generate the next round of searches, and to evaluate them. Using Index At with nested indexes avoids two nested loops. A global variable is used to end the search after a solution is found. Core solution has three code lines: no loops, no counters. Determine whether a string can be composed from a set of blocks You are given a collection of ABC blocks, maybe like the ones you had when you were a kid. There are twenty blocks, with two letters on each block. A complete alphabet is guaranteed amongst all sides of the blocks. Write a function that takes a string (word) and determines whether the word can be spelled with the given collection of blocks. The rules are simple: - Once a letter on a block is used that block cannot be used again - The function should be case-insensitive from Rosetta Code Test cases¶ Example collection of blocks: (B O) (X K) (D Q) (C P) (N A) (G T) (R E) (T G) (Q D) (F S) (J W) (H U) (V I) (A N) (O B) (E R) (F S) (L Y) (P C) (Z M) Example results from those blocks: >>> can_make_word("A") True >>> can_make_word("BARK") True >>> can_make_word("BOOK") False >>> can_make_word("TREAT") True >>> can_make_word("COMMON") False >>> can_make_word("SQUAD") True >>> can_make_word("CONFUSE") True General case¶ Backtracking¶ Each block you pick to supply a letter also removes its obverse from the available letters. For example, if you use VI for a V, you no longer have an I available. So you cannot make words such as live, vial, or evil from the example blocks. Similarly, Y and L are on the same block and are not repeated: quickly cannot be made. For some letters there is a choice of blocks from which to pick. In the example blocks, you could pick a B from either BO or OB. It does not matter which: either way both B and O remain available. As it happens, the example blocks duplicate letters only in pairs. The BO/OB pairing is repeated for CP/PC, GT/TG, etc., and these are the only duplicated letters. The rules do not specify this, so the example blocks are a special case. In the general case, different choices of block for a duplicated letter leave you with different sets of blocks from which to fill the rest of the string. Every time you have a choice of blocks, the possibilities proliferate. Some may succeed, some may not. The author of the Fortran solution on Rosetta Code notes this issue and offers a second solution of 277 code lines to deal with the general case, which the author refers to as “backtracking”. We shall solve only the general case. It cries out for a recursive search of the tree of possibilities. Blocks, tiles, and pyramids¶ The problem refers to blocks bearing two letters each. They might better be thought of as tiles, with a letter on each side. A block can bear six letters on its sides. For that matter, a pyramid can bear four, and a dodecahedron twelve. While the problem specifies two letters, we shall stay open to the possibility of code that works for any number of letters on each block. Core solution¶ The core of the solution is recursive. - If the string is empty, all its letters have been matched and the result is 1b . - If you cannot fill the first letter of the string s[0] from the available blocks, the result is0b . - Otherwise, find all the blocks that match s[0] . For each block, remove it from the available blocks, and call the function to see if you can make1_s . 
The result is whether any of these calls returns1b . A lambda can use .z.s to refer to itself BLOCKS:string`BO`XK`DQ`CP`NA`GT`RE`TG`QD`FS`JW`HU`VI`AN`OB`ER`FS`LY`PC`ZM WORDS:string`A`BARK`BOOK`TREAT`COMMON`SQUAD`CONFUSE cmw:{[s;b] / [string; blocks] $[0=count s; 1b; / empty string not any found:any each b=s 0; 0b; / cannot proceed any(1_s).z.s/:b(til count b)except/:where found] } q)WORDS cmw\:BLOCKS 1101011b The lines of cmw correspond to the steps 1–3 above. Note the use of Each Right /: first to generate the lists of blocks to be tried (til count b)except/:where found] then to recurse with the rest of the string: (1_s).z.s/: . Note also that the list of blocks b is applied to the lists of indexes. In full that would be b@(til count b)except/:where found] Index At @ is elided here, but elided or not, is atomic, so there is no need to iterate b through the lists of indexes. Iteration is free! Case sensitivity¶ The problem requires the solution be insensitive to case in the words. Words:string`A`bark`BOOK`Treat`COMMON`squad`CONFUSE cmwi:{cmw[;y]upper x} / case-insensitive q)Words cmwi\:BLOCKS 1101011b Letters per block¶ The search of available blocks is any each b=s 0 . This is independent of the number of letters on the blocks. q)show B6:upper string 10?`6 "MILGLI" "IGFBAG" "KAODHB" "BAFCLB" "KFHOGJ" "JECPAE" "KFMOHP" "LKKLCO" "KFIFPA" "FGLGOF" q)WORDS cmw\:B6 1010000b Stopping the search¶ More blocks and duplicate letters mean more solutions for any given string, and a solution is easier to find. But that increases the work cmw must do, because cmw finds all the solutions. However, we need only one, and would prefer evaluation stop after the first solution has been found. For that we shall set a global variable and recurse, not cmw , but an anonymous lambda. cmws:{[x;y] / cmw – stop search .cmw.done::0b; / start search {[s;b] / [string; blocks] $[.cmw.done; 1b; / call off search .cmw.done::0=count s; 1b; / empty string not any found:any each b=s 0; 0b; / cannot proceed any(1_s).z.s/:b(til count b)except/:where found] }[x;y] } q)\ts:100 WORDS cmw\:BLOCKS 83 5456 q)\ts:100 WORDS cmws\:BLOCKS 25 5440 Test your understanding¶ Can you test .cmw.done outside the Cond with or as e.g. .cmw.done or {[s;b] Answer No – both arguments of or are evaluated. (You are thinking of another programming language.) Review¶ The general case of the problem requires a tree search. That is easily expressed: - Each Right to generate lists of indexes to try - Apply list of blocks b direct to the nested lists of indexes - Each Right to recurse the function through the lists of blocks The lambda used .z.s to refer to itself: it does not need a name to recurse. The search through the tree of possibilities can be stopped by reading a global flag set when a solution is found. Case-insensitivity is trivial. The solution is not limited to two letters per block. Perhaps best of all, the solution is highly legible. The three code lines of the core solution correspond closely to an English description of the solution steps.
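To see the block-matching step in isolation, here is the first line of the search evaluated by hand for the first letter of "BARK" against the example blocks (a small illustrative session):
q)found:any each BLOCKS="B"      / which blocks carry a B?
q)found
10000000000000100000b
q)where found                    / BO (index 0) and OB (index 14)
0 14
Each of the index lists produced by (til count BLOCKS)except/:where found then drops only the chosen block, so its twin (and therefore the O) remains available for the rest of the word.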
// @kind function // @category clust // @desc Update DBSCAN config including new data points // @param config {dictionary} A dictionary returned from '.ml.clust.dbscan.fit' // containing: // modelInfo - Encapsulates all relevant infromation needed to fit // the model `data`inputs`clust`tab, where data is the original data, // inputs are the user defined minPts and eps, clust are the cluster // assignments and tab is the neighbourhood table defining items in the // clusters. // predict - A projection allowing for prediction on new input data // update - A projection allowing new data to be used to update // cluster centers such that the model can react to new data // @param data {float[][]} Each column of the data is an individual datapoint // and update functions // @return {dictionary} Updated model configuration (config), including predict clust.dbscan.update:{[config;data] modelConfig:config[`modelInfo]; data:clust.i.floatConversion[data]; // Original data prior to addition of new points, with core points set orig:update corePoint:1b from modelConfig[`tab]where cluster<>0N; // Predict new clusters new:clust.i.dbscanPredict[data;modelConfig]; // Include new data points in training neighbourhood orig:clust.i.updNbhood/[orig;new;count[orig]+til count new]; // Fit model with new data included to update model tab:{[t]any t`corePoint}.ml.clust.i.dbAlgo/orig,new; // Reindex the clusters tab:update{(d!til count d:distinct x)x}cluster from tab where cluster<>0N; // return updated config clusts:-1^exec cluster from tab; modelConfig,:`data`tab`clust!(modelConfig[`data],'data;tab;clusts); returnInfo:enlist[`modelInfo]!enlist modelConfig; returnKeys:`predict`update; returnVals:(clust.dbscan.predict returnInfo; clust.dbscan.update returnInfo); returnInfo,returnKeys!returnVals } ================================================================================ FILE: ml_ml_clust_hierarchical.q SIZE: 9,356 characters ================================================================================ // clust/hierarchical.q - Hierarchical and CURE clustering // Copyright (c) 2021 Kx Systems Inc // // Hierarchical clustering. // Agglomerative hierarchical clustering iteratively groups data, // using a bottom-up approach that initially treats all data // points as individual clusters. // // CURE clustering. // Clustering Using REpresentatives (CURE) is a technique used to deal // with datasets containing outliers and clusters of varying sizes and // shapes. Each cluster is represented by a specified number of // representative points. These points are chosen by taking the most // scattered points in each cluster and shrinking them towards the // cluster center using a compression ratio. 
\d .ml // Clustering Using REpresentatives (CURE) and Hierarchical Clustering // @kind function // @category clust // @desc Fit CURE algorithm to data // @param data {float[][]} Each column of the data is an individual datapoint // @param df {symbol} Distance function name within '.ml.clust.i.df' // @param n {long} Number of representative points per cluster // @param c {float} Compression factor for representative points // @return {dictionary} A dictionary containing: // modelInfo - Encapsulates all relevant information needed to fit // the model `data`inputs`dgram, where data is the original data, inputs // are the user defined linkage and distance functions while dgram // is the generated dendrogram // predict - A projection allowing for prediction on new input data clust.cure.fit:{[data;df;n;c] data:clust.i.floatConversion[data]; if[not df in key clust.i.df;clust.i.err.df[]]; dgram:clust.i.hcSCC[data;df;`cure;1;n;c;1b]; modelInfo:`data`inputs`dgram!(data;`df`n`c!(df;n;c);dgram); returnInfo:enlist[`modelInfo]!enlist modelInfo; predictFunc:clust.cure.predict returnInfo; returnInfo,enlist[`predict]!enlist predictFunc } // @kind function // @category clust // @desc Fit Hierarchical algorithm to data // @param data {float[][]} Each column of the data is an individual datapoint // @param df {symbol} Distance function name within '.ml.clust.i.df' // @param lf {symbol} Linkage function name within '.ml.clust.i.lf' // @return {dictionary} A dictionary containing: // modelInfo - Encapsulates all relevant information needed to fit // the model `data`inputs`dgram, where data is the original data, inputs // are the user defined linkage and distance functions while dgram // is the generated dendrogram // predict - A projection allowing for prediction on new input data clust.hc.fit:{[data;df;lf] // Check distance and linkage functions data:clust.i.floatConversion[data]; if[not df in key clust.i.df;clust.i.err.df[]]; dgram:$[lf in`complete`average`ward; clust.i.hcCAW[data;df;lf;2;1b]; lf in`single`centroid; clust.i.hcSCC[data;df;lf;1;::;::;1b]; clust.i.err.lf[] ]; modelInfo:`data`inputs`dgram!(data;`df`lf!(df;lf);dgram); returnInfo:enlist[`modelInfo]!enlist modelInfo; predictFunc:clust.hc.predict returnInfo; returnInfo,enlist[`predict]!enlist predictFunc } // @kind function // @category clust // @desc Convert CURE config to k clusters // @param config {dictionary} A dictionary returned from '.ml.clust.cure.fit' // containing: // modelInfo - Encapsulates all relevant information needed to fit // the model `data`inputs`dgram, where data is the original data, inputs // are the user defined linkage and distance functions while dgram // is the generated dendrogram // predict - A projection allowing for prediction on new input data // @param k {long} Number of clusters // @return {dictionary} Updated config with clusters labels added clust.cure.cutK:{[config;k] clust.i.checkK[k]; clustVal:clust.i.cutDgram[config[`modelInfo;`dgram];k-1]; clusts:enlist[`clust]!enlist clustVal; config,clusts } // @kind function // @category clust // @desc Convert hierarchical config to k clusters // @param config {dictionary} A dictionary returned from '.ml.clust.hc.fit' // containing: // modelInfo - Encapsulates all relevant information needed to fit // the model `data`inputs`dgram, where data is the original data, inputs // are the user defined linkage and distance functions while dgram // is the generated dendrogram // predict - A projection allowing for prediction on new input data // @param k {long} Number of clusters // 
@return {dictionary} Updated config with clusters added clust.hc.cutK:clust.cure.cutK // @kind function // @category clust // @desc Convert CURE dendrogram to clusters based on distance // threshold // @param config {dictionary} A dictionary returned from '.ml.clust.cure.fit' // containing: // modelInfo - Encapsulates all relevant information needed to fit // the model `data`inputs`dgram, where data is the original data, inputs // are the user defined linkage and distance functions while dgram // is the generated dendrogram // predict - A projection allowing for prediction on new input data // @param distThresh {float} Cutting distance threshold // @return {dictionary} Updated config with clusters added clust.cure.cutDist:{[config;distThresh] clust.i.checkDist[distThresh]; dgram:config[`modelInfo;`dgram]; k:0|count[dgram]-exec first i from dgram where dist>distThresh; config,enlist[`clust]!enlist clust.i.cutDgram[dgram;k] } // @kind function // @category clust // @desc Convert hierarchical dendrogram to clusters based on distance // threshold // @param config {dictionary} A dictionary returned from '.ml.clust.cure.fit' // containing: // modelInfo - Encapsulates all relevant information needed to fit // the model `data`inputs`dgram, where data is the original data, inputs // are the user defined linkage and distance functions while dgram // is the generated dendrogram // predict - A projection allowing for prediction on new input data // @param distThresh {float} Cutting distance threshold // @return {dictionary} Updated config with clusters added clust.hc.cutDist:clust.cure.cutDist // @kind function // @category clust // @desc Predict clusters using CURE config // @param config {dictionary} A dictionary returned from '.ml.clust.cure.fit' // containing: // modelInfo - Encapsulates all relevant information needed to fit // the model `data`inputs`dgram, where data is the original data, inputs // are the user defined linkage and distance functions while dgram // is the generated dendrogram // predict - A projection allowing for prediction on new input data // @param data {float[][]} Each column of the data is an individual datapoint // @param cutDict {dictionary} The key defines what cutting algo to use when // splitting the data into clusters (`k/`dist) and the value defines the // cutting threshold // @return {long[]} Predicted clusters clust.cure.predict:{[config;data;cutDict] updConfig:clust.i.prepPred[config;cutDict]; clust.i.hCCpred[`cure;data;updConfig] } // @kind function // @category clust // @desc Predict clusters using hierarchical config // @param config {dictionary} A dictionary returned from '.ml.clust.cure.fit' // containing: // modelInfo - Encapsulates all relevant information needed to fit // the model `data`inputs`dgram, where data is the original data, inputs // are the user defined linkage and distance functions while dgram // is the generated dendrogram // predict - A projection allowing for prediction on new input data // @param data {float[][]} Each column of the data is an individual datapoint // @param cutDict {dictionary} The key defines what cutting algo to use when // splitting the data into clusters (`k/`dist) and the value defines the // cutting threshold // @return {long[]} Predicted clusters clust.hc.predict:{[config;data;cutDict] updConfig:clust.i.prepPred[config;cutDict]; clust.i.hCCpred[`hc;data;updConfig] } // @kind function // @category clust // @desc Fit CURE algorithm to data and convert dendrogram to clusters // @param data {float[][]} Each column of the 
data is an individual datapoint // @param df {symbol} Distance function name within '.ml.clust.i.df' // @param n {long} Number of representative points per cluster // @param c {float} Compression factor for representative points // @param cutDict {dictionary} The key defines what cutting algo to use when // splitting the data into clusters (`k/`dist) and the value defines the // cutting threshold // @return {dictionary} Updated config with clusters added clust.cure.fitPredict:{[data;df;n;c;cutDict] fitModel:clust.cure.fit[data;df;n;c]; clust.i.prepPred[fitModel;cutDict] } // @kind function // @category clust // @desc Fit hierarchial algorithm to data and convert dendrogram // to clusters // @param data {float[][]} Each column of the data is an individual datapoint // @param df {symbol} Distance function name within '.ml.clust.i.df' // @param lf {symbol} Linkage function name within '.ml.clust.i.lf' // @param cutDict {dictionary} The key defines what cutting algo to use when // splitting the data into clusters (`k/`dist) and the value defines the // cutting threshold // @return {dictionary} Updated config with clusters added clust.hc.fitPredict:{[data;df;lf;cutDict] fitModel:clust.hc.fit[data;df;lf]; clust.i.prepPred[fitModel;cutDict] } ================================================================================ FILE: ml_ml_clust_init.q SIZE: 503 characters ================================================================================ // clust/init.q - Load clustering library // Copyright (c) 2021 Kx Systems Inc // // Clustering algorithms including affinity propagation, // cure, dbscan, hierarchical, and k-means clustering \d .ml // required for use of .ml.confmat in score.q loadfile`:util/init.q // load clustering files loadfile`:clust/utils.q loadfile`:clust/kdtree.q loadfile`:clust/kmeans.q loadfile`:clust/aprop.q loadfile`:clust/dbscan.q loadfile`:clust/hierarchical.q loadfile`:clust/score.q .ml.i.deprecWarning`clust ================================================================================ FILE: ml_ml_clust_kdtree.q SIZE: 3,802 characters ================================================================================ // clust/kdtree.q - K dimensional tree // Copyright (c) 2021 Kx Systems Inc // // A k-dimensional tree (k-d tree) is a special case of the // binary search tree data structure, commonly used in computer // science to organize data points in k-dimensional space. // Each leaf node in the tree contains a set of k-dimensional points, // while each non-leaf node generates a splitting hyperplane // which divides the surrounding space. \d .ml // K-Dimensional (k-d) Tree // @kind function // @category clust // @desc Create new k-d tree // @param data {float[][]} Each column of the data is an individual datapoint // @param leafSize {long} Number of points per leaf (<2*number of reppts) // @return {table} k-d tree clust.kd.newTree:{[data;leafSize] args:`leaf`left`parent`self`idxs!(0b;0b;0N;0;til count data 0); clust.kd.i.tree[data;leafSize]args }
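A hedged usage sketch of clust.kd.newTree (assuming the clust library has been loaded, e.g. via clust/init.q; the data below is randomly generated for illustration): build a tree over 2-dimensional data, one point per column, then inspect its leaves.
q)data:2 1000#2000?10f                  / 2 dimensions, 1000 points (one per column)
q)tree:.ml.clust.kd.newTree[data;20]    / leafSize 20
q)select self,parent,idxs from tree where leaf
The idxs of the leaf rows should together cover every datapoint index exactly once, with the non-leaf rows recording the splits above them.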
// @kind function // @category selectModels // @desc Remove Keras models if criteria met // @param modelTab {table} Models which are to be applied to the dataset // @param tts {dictionary} Feature and target data split into train and testing // sets // @param target {number[]|symbol[]} Numerical or symbol target vector // @param config {dictionary} Information related to the current run of AutoML // @return {table} Keras model removed if needed and removal highlighted selectModels.targetKeras:{[modelTab;tts;target;config] if[not check.keras[];:?[modelTab;enlist(<>;`lib;enlist`keras);0b;()]]; multiCheck:`multi in modelTab`typ; tgtCount:min count@'distinct each tts`ytrain`ytest; tgtCheck:tgtCount<count distinct target; if[multiCheck&tgtCheck; config[`logFunc]utils.printDict`kerasClass; :delete from modelTab where lib=`keras,typ=`multi ]; modelTab } // @kind function // @category selectModels // @desc Update models available for use based on the number of data // points in the target vector // @param modelTab {table} Models which are to be applied to the dataset // @param target {number[]|symbol[]} Numerical or symbol target vector // @param config {dictionary} Information related to the current run of AutoML // @return {table} Appropriate models removed and highlighted to the user selectModels.targetLimit:{[modelTab;target;config] if[config[`targetLimit]<count target; if[utils.ignoreWarnings=2; tlim:string config`targetLimit; config[`logFunc](utils.printWarnings[`neuralNetWarning]0),tlim; :select from modelTab where lib<>`keras,not fnc in`neural_network`svm ]; if[utils.ignoreWarnings=1; tlim:string config`targetLimit; config[`logFunc](utils.printWarnings[`neuralNetWarning]1),tlim ] ]; modelTab } // @kind function // @category selectModels // @desc Remove theano/torch models if these are unavailable // @param config {dictionary} Information related to the current run of AutoML // @param modelTab {table} Models which are to be applied to the dataset // @param lib {symbol} Which library you are checking for e.g.`theano`torch // @return {table} Model removed if needed and removal highlighted selectModels.removeUnavailable:{[config;modelTab;lib] if[0<>checkimport$[lib~`torch;1;5]; config[`logFunc]utils.printDict`$string[lib],"Models"; :?[modelTab;enlist(<>;`lib;enlist lib);0b;()] ]; modelTab } ================================================================================ FILE: ml_automl_code_nodes_selectModels_init.q SIZE: 239 characters ================================================================================ // code/nodes/selectModels/init.q - Load selectModels node // Copyright (c) 2021 Kx Systems Inc // // Load code for selectModels node \d .automl loadfile`:code/nodes/selectModels/selectModels.q loadfile`:code/nodes/selectModels/funcs.q ================================================================================ FILE: ml_automl_code_nodes_selectModels_selectModels.q SIZE: 1,382 characters ================================================================================ // code/nodes/selectModels/selectModels.q - Select models node // Copyright (c) 2021 Kx Systems Inc // // Select subset of models based on limitations imposed by the dataset. This // includes the selection/removal of poorly scaling models. In the case of // classification problems, Keras models will also be removed if there are // not sufficient samples of each target class present in each fold of the // data. 
\d .automl // @kind function // @category node // @desc Select models based on limitations imposed by the dataset and // users environment // @param tts {dictionary} Feature and target data split into training/testing // sets // @param target {number[]|symbol[]} Target data as a numeric/symbol vector // @param modelTab {table} Potential models to be applied to feature data // @param config {dictionary} Information related to the current run of AutoML // @return {table} Appropriate models to be applied to feature data selectModels.node.function:{[tts;target;modelTab;config] config[`logFunc]utils.printDict`select; modelTab:selectModels.targetKeras[modelTab;tts;target;config]; modelTab:selectModels.removeUnavailable[config]/[modelTab;`theano`torch]; selectModels.targetLimit[modelTab;target;config] } // Input information selectModels.node.inputs:`ttsObject`target`models`config!"!F+!" // Output information selectModels.node.outputs:"+" ================================================================================ FILE: ml_automl_code_nodes_targetData_init.q SIZE: 187 characters ================================================================================ // code/nodes/targetData/init.q - Load targetData node // Copyright (c) 2021 Kx Systems Inc // // Load code for targetData node \d .automl loadfile`:code/nodes/targetData/targetData.q ================================================================================ FILE: ml_automl_code_nodes_targetData_targetData.q SIZE: 823 characters ================================================================================ // code/nodes/targetData/targetData.q - Target data node // Copyright (c) 2021 Kx Systems Inc // // Loading of the target dataset, data can be loaded from in process or // alternative data sources \d .automl // @kind function // @category node // @desc Load target dataset from a location defined by a user // provided dictionary and in accordance with the function .ml.i.loaddset // @param config {dictionary} Location and method by which to retrieve the data // @return {number[]|symbol[]} Numerical or symbol target vector targetData.node.function:{[config] dset:.ml.i.loadDataset config; $[.Q.ty[dset]in"befhijs"; dset; '`$"Dataset not of a suitable type only 'befhijs' currently supported" ] } // Input information targetData.node.inputs:"!" 
// Output information targetData.node.outputs:"F" ================================================================================ FILE: ml_automl_code_nodes_trainTestSplit_funcs.q SIZE: 1,299 characters ================================================================================ // code/nodes/trainTestSplit/funcs.q - Functions called in trainTestSplit node // Copyright (c) 2021 Kx Systems Inc // // Definitions of the main callable functions used in the application of // .automl.trainTestSplit \d .automl // Configuration update // @kind function // @category trainTestSplit // @desc Apply TTS function // @param config {dictionary} Location and method by which to retrieve the data // @param features {table} The feature data as a table // @param target {number[]} Numerical vector containing target data // @param sigFeats {symbol[]} Significant features // @return {dictionary} Data separated into training and testing sets trainTestSplit.applyTTS:{[config;features;target;sigFeats] data:flip features sigFeats; ttsFunc:utils.qpyFuncSearch config`trainTestSplit; ttsFunc[data;target;config`testingSize] } // @kind function // @category trainTestSplit // @desc Check type of TTS object // @param tts {dictionary} Feature and target data split into training/testing // sets // @return {::|err} Null on success, error on unsuitable TTS output type trainTestSplit.ttsReturnType:{[tts] err:"Train test split function must return a dictionary with", " `xtrain`xtest`ytrain`ytest"; $[99h<>type tts;'err;not`xtest`xtrain`ytest`ytrain~asc key tts;'err;] } ================================================================================ FILE: ml_automl_code_nodes_trainTestSplit_init.q SIZE: 251 characters ================================================================================ // code/nodes/trainTestSplit/init.q - Load trainTestSplit node // Copyright (c) 2021 Kx Systems Inc // // Load code for trainTestSplit node \d .automl loadfile`:code/nodes/trainTestSplit/trainTestSplit.q loadfile`:code/nodes/trainTestSplit/funcs.q ================================================================================ FILE: ml_automl_code_nodes_trainTestSplit_trainTestSplit.q SIZE: 1,037 characters ================================================================================ // code/nodes/trainTestSplit/trainTestSplit.q - Train test split node // Copyright (c) 2021 Kx Systems Inc // // Apply the user defined train test split functionality onto the users feature // and target datasets returning the train-test split data as a list of // (xtrain;ytrain;xtest;ytest) \d .automl // @kind function // @category node // @desc Split data into training and testing sets // @param config {dictionary} Location and method by which to retrieve the data // @param features {table} The feature data as a table // @param target {number[]} Numerical vector containing target data // @param sigFeats {symbol[]} Significant features // @return {dictionary} Data separated into training and testing sets trainTestSplit.node.function:{[config;features;target;sigFeats] tts:trainTestSplit.applyTTS[config;features;target;sigFeats]; trainTestSplit.ttsReturnType tts; tts } // Input information trainTestSplit.node.inputs:`config`features`target`sigFeats!"!+FS" // Output information trainTestSplit.node.outputs:"!" 
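As a concrete illustration of the contract enforced by trainTestSplit.ttsReturnType, here is a hedged sketch of a custom splitting function; the name mySplit and the shuffling scheme are assumptions, not part of AutoML. It only has to accept (data;target;size), as applyTTS calls it, and return the four keys checked above:
mySplit:{[data;target;size]
  n:count target;
  k:floor n*1-size;                     / number of training rows
  idx:neg[n]?n;                         / random permutation of the row indexes
  `xtrain`ytrain`xtest`ytest!(data k#idx;target k#idx;data (k-n)#idx;target (k-n)#idx)
  }
Such a function would be referenced through the trainTestSplit entry of the run configuration, which applyTTS resolves via utils.qpyFuncSearch; its result passes ttsReturnType because it is a dictionary whose keys sort to `xtest`xtrain`ytest`ytrain.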
================================================================================ FILE: ml_automl_code_tests_files_theano_theano.q SIZE: 2,644 characters ================================================================================ // code/tests/files/theano/theano.q - Theano test files // Copyright (c) 2021 Kx Systems Inc \d .automl // @kind function // @category models // @desc Fit model on training data and score using test data // @param data {dictionary} Containing training and testing data according to // keys `xtrn`ytrn`xtst`ytst // @param seed {int} Seed used for initialising the same model // @param mname {symbol} Name of the model being applied // @return {int|float|boolean} The predicted values for a given model as // applied to input data models.theano.fitScore:{[data;seed;mname] dataDict:`xtrain`ytrain`xtest`ytest!raze data; mdl:get[".automl.models.theano.",string[mname],".model"][dataDict;seed]; mdl:get[".automl.models.theano.",string[mname],".fit"][dataDict;mdl]; get[".automl.models.theano.",string[mname],".predict"][dataDict;mdl] } // @kind function // @category models // @desc Compile a theano model for binary problems // @param data {dictionary} Containing training and testing data according to // keys `xtrn`ytrn`xtst`ytst // @param seed {int} Seed used for initialising the same model // @return {<} The compiled theano models models.theano.NN.model:{[data;seed] data[`ytrain]:models.i.npArray flip value .ml.i.oneHot data[`ytrain]; models.theano.buildModel[models.i.npArray data`xtrain;data`ytrain;seed] } // @kind function // @category models // @desc Fit a vanilla theano model to data // @param data {dictionary} Containing training and testing data according to // keys `xtrn`ytrn`xtst`ytst // @param mdl {<} Model object being passed through the system // (compiled/fitted) // @return {<} A vanilla fitted theano model models.theano.NN.fit:{[data;mdl] data[`ytrain]:models.i.npArray flip value .ml.i.oneHot data[`ytrain]; mdls:.p.wrap each mdl`; trainMdl:first mdls; models.theano.trainModel[models.i.npArray data`xtrain;data`ytrain;trainMdl]; last mdls } // @kind function // @category models // @desc Predict test data values using a compiled model // for binary problem types // @param data {dictionary} Containing training and testing data according to // keys `xtrn`ytrn`xtst`ytst // @param mdl {<} Model object being passed through the system // (compiled/fitted) // @return {boolean} The predicted values for a given model models.theano.NN.predict:{[data;mdl] models.theano.predictModel[models.i.npArray data`xtest;mdl]` } // load required python modules and functions models.i.npArray :.p.import[`numpy]`:array; models.theano.buildModel :.p.get[`buildModel] models.theano.trainModel :.p.get`fitModel models.theano.predictModel:.p.get[`predictModel] ================================================================================ FILE: ml_automl_code_tests_files_torch_torch.q SIZE: 3,425 characters ================================================================================ // code/tests/files/torch/torch.q - PyTorch test files // Copyright (c) 2021 Kx Systems Inc // // Contains the functionality to apply vanilla pytorch models // within the automl framework \d .automl // @kind function // @category models // @desc Fit model on training data and score using test data // @param data {dictionary} Containing training and testing data according to // keys `xtrn`ytrn`xtst`ytst // @param seed {int} Seed used for initialising the same model // @param mname {symbol} Name of the model being 
applied // @return {int|float|boolean} the predicted values for a given model as // applied to input data models.torch.fitScore:{[data;seed;mname] dataDict:`xtrain`ytrain`xtest`ytest!raze data; mdl:get[".automl.models.torch.",string[mname],".model"][dataDict;seed]; mdl:get[".automl.models.torch.",string[mname],".fit"][dataDict;mdl]; get[".automl.models.torch.",string[mname],".predict"][dataDict;mdl] }
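The fitScore functions above resolve model code by composing a dotted name and evaluating it with get; a minimal standalone illustration of the same idiom (the .my.models namespace below is invented for the example):
q).my.models.NN.fit:{x+1}
q)mname:`NN
q)get[".my.models.",string[mname],".fit"] 41
42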
// @kind function // @category clust // @desc Find nearest neighhbors in k-d tree // @param tree {table} k-d tree // @param data {float[][]} Each column of the data is an individual datapoint // @param df {symbol} Distance function name within '.ml.clust.df' // @param xIdxs {long[][]} Points to exclude in search // @param pt {long[]} Point to find nearest neighbor for // @return {dictionary} Nearest neighbor dictionary with closest point, // distance, points searched and points to search clust.kd.q.nn:clust.kd.nn:{[tree;data;df;xIdxs;pt] nnInit:(0N;0w;0#0;clust.kd.findLeaf[tree;pt;tree 0]); start:`closestPoint`closestDist`xNodes`node!nnInit; stop:{[nnInfo]not null nnInfo[`node;`self]}; 2#stop clust.kd.i.nnCheck[tree;data;df;xIdxs;pt]/start } // @kind function // @category kdtree // @desc Find the leaf node point belongs to // @param tree {table} k-d tree table // @param pt {float[]} Point to search // @param node {dictionary} Node in the k-d tree to start the search // @return {dictionary} The index (row) of the kd-tree that the datapoint // belongs to clust.kd.q.findLeaf:clust.kd.findleaf:{[tree;pt;node] {[node]not node`leaf}clust.kd.i.findNext[tree;pt]/node } // @kind function // @category kdtree // @desc Sets k-d tree q or C functions // @param typ {boolean} Type of code to use q or C (1/0b) // @return {::} No return. Updates nn and findLeaf functions. clust.kd.qC:{[typ] funcTyp:not(112=type clust.kd.c.nn)&112=type clust.kd.c.findLeaf; func:$[typ|funcTyp;`q;`c]; clust.kd[`nn`findLeaf]:(clust.kd[func]`nn;clust.kd[func]`findLeaf) } // @kind function // @category kdtree // @desc Get nearest neighbor in C // @param tree {table} k-d tree table // @param data {float[][]} Each column of the data is an individual datapoint // @param df {fn} Distance function // @param xIdxs {long[][]} Points to exclude in search // @param pt {long[]} Point to find nearest neighbor for // @return {dictionary} Nearest neighbor information clust.kd.c.nn:{[tree;data;df;xIdxs;pt] data:clust.i.floatConversion[data]; pt:clust.i.floatConversion[pt]; args:(tree;data;(1_key clust.i.df)?df;@[count[data 0]#0b;xIdxs;:;1b];pt); `closestPoint`closestDist!clust.kd.c.nnFunc . args }; // @kind function // @category kdtree // @desc Find the leaf node point belongs to using C // @param tree {table} k-d tree table // @param point {float[]} Point to search // @param node {dictionary} Node in the k-d tree to start the search // @return {dictionary} The index (row) of the kd-tree that the // datapoint belongs to clust.kd.c.findLeaf:{[tree;point;node] point:clust.i.floatConversion[point]; tree clust.kd.c.findLeafFunc[tree;point;node`self] } // @kind function // @category kdtree // @desc Load in C functionality clust.kd.c.nnLoadFunc:.[2:;(`:kdnn;(`kd_nn;5));::]; clust.kd.c.findLeafFunc:.[2:;(`:kdnn;(`kd_findleaf;3));::]; // Default to C implementations clust.kd.qC[0b]; ================================================================================ FILE: ml_ml_clust_kmeans.q SIZE: 4,883 characters ================================================================================ // clust/kmeans.q - K means clustering // Copyright (c) 2021 Kx Systems Inc // // K means clustering. // K-means clustering begins by selecting k data points // as cluster centers and assigning data to the cluster // with the nearest center. // The algorithm follows an iterative refinement process // which runs a specified number of times, updating the // cluster centers and assigned points to a cluster at // each iteration based on the nearest cluster center. 
\d .ml // K-Means // @kind function // @category clust // @desc Fit k-Means algorithm to data // @param data {float[][]} Each column of the data is an individual datapoint // @param df {symbol} Distance function name within '.ml.clust.i.df' // @param k {long} Number of clusters // @param config {dictionary} Configuration information which can be updated, // (::) allows a user to use default values, allows update for to maximum // iterations `iter, initialisation type `init i.e. use k++ or random and // the threshold for smallest distance to move between the previous and // new run `thresh, a distance less than thresh will result in // early stopping // @return {dictionary} A dictionary containing: // modelInfo - Encapsulates all relevant information needed to fit // the model `data`df`repPts`clust, where data and df are the inputs, // repPts are the calculated k centers and clust are clusters associated // with each of the datapoints // predict - A projection allowing for prediction on new input data // update - A projection allowing new data to be used to update // cluster centers such that the model can react to new data clust.kmeans.fit:{[data;df;k;config] data:clust.i.floatConversion[data]; defaultDict:`iter`init`thresh!(100;1b;1e-5); if[config~(::);config:()!()]; if[99h<>type config;'"config must be (::) or a dictionary"]; // Update iteration dictionary with user changes updDict:defaultDict,config; // Fit algo to data r:clust.i.kMeans[data;df;k;updDict]; // Return config with new clusters inputDict:`df`k`iter`kpp!(df;k;updDict`iter;updDict`init); modelInfo:r,`data`inputs!(data;inputDict); returnInfo:enlist[`modelInfo]!enlist modelInfo; predictFunc:clust.kmeans.predict returnInfo; updFunc:clust.kmeans.update returnInfo; returnInfo,`predict`update!(predictFunc;updFunc) } // @kind function // @category clust // @desc Predict clusters using k-means config // @param config {dictionary} A dictionary returned from '.ml.clust.kmeans.fit' // containing: // modelInfo - Encapsulates all relevant information needed to fit // the model `data`df`repPts`clust, where data and df are the inputs, // repPts are the calculated k centers and clust are clusters associated // with each of the datapoints // predict - A projection allowing for prediction on new input data // update - A projection allowing new data to be used to update // cluster centers such that the model can react to new data // @param data {float[][]} Each column of the data is an individual datapoint // @return {long[]} Predicted clusters clust.kmeans.predict:{[config;data] config:config[`modelInfo]; data:clust.i.floatConversion[data]; // Get new clusters based on latest config clust.i.getClust[data;config[`inputs]`df;config`repPts] } // @kind function // @category clust // @desc Update kmeans config including new data points // @param config {dictionary} A dictionary returned from '.ml.clust.kmeans.fit' // containing: // modelInfo - Encapsulates all relevant information needed to fit // the model `data`df`repPts`clust, where data and df are the inputs, // repPts are the calculated k centers and clust are clusters associated // with each of the datapoints // predict - A projection allowing for prediction on new input data // update - A projection allowing new data to be used to update // cluster centers such that the model can react to new data // @param data {float[][]} Each column of the data is an individual datapoint // @return {dictionary} Updated model configuration (config), including predict // and update functions 
clust.kmeans.update:{[config;data] modelConfig:config[`modelInfo]; data:clust.i.floatConversion[data]; // Update data to include new points modelConfig[`data]:modelConfig[`data],'data; // Update k means modelConfig[`repPts]:clust.i.updCenters [modelConfig`data;modelConfig[`inputs]`df;()!();modelConfig`repPts]; // Get updated clusters based on new means modelConfig[`clust]:clust.i.getClust [modelConfig`data;modelConfig[`inputs]`df;modelConfig`repPts]; // Return updated config, prediction and update functions returnInfo:enlist[`modelInfo]!enlist modelConfig; returnKeys:`predict`update; returnVals:(clust.kmeans.predict returnInfo; clust.kmeans.update returnInfo); returnInfo,returnKeys!returnVals } ================================================================================ FILE: ml_ml_clust_score.q SIZE: 2,894 characters ================================================================================ // clust/score.q - Scoring metrics for clustering // Copyright (c) 2021 Kx Systems Inc // // Scoring metrics allow you to validate the performance // of your clustering algorithms \d .ml // Cluster Scoring Algorithms // Unsupervised Learning // @kind function // @category clust // @desc Davies-Bouldin index - Euclidean distance only (edist) // @param data {float[][]} Each column of the data is an individual datapoint // @param clusts {long[]} Clusters produced by .ml.clust algos // @return {float} Davies Bouldin index of clusts clust.daviesBouldin:{[data;clusts] dataClust:{x[;y]}[data]each group clusts; avgClust:avg@''dataClust; avgDist:avg each clust.i.dists[;`edist;;::]'[dataClust;avgClust]; n:count avgClust; dbScore:clust.i.daviesBouldin[avgDist;avgClust;t]each t:til n; sum[dbScore]%n } // @kind function // @category clust // @desc Dunn index // @param data {float[][]} Each column of the data is an individual datapoint // @param df {symbol} Distance function name within '.ml.clust.i.df' // @param clusts {long[]} Clusters produced by .ml.clust algos // @return {float} Dunn index of clusts clust.dunn:{[data;df;clusts] dataClust:{x[;y]}[data]each group clusts; mx:clust.i.maxIntra[df]each dataClust; upperTri:-2_({1_x}\)til count dataClust; mn:min raze clust.i.minInter[df;dataClust]each upperTri; mn%max raze mx } // @kind function // @category clust // @desc Silhouette score // @param data {float[][]} Each column of the data is an individual datapoint // @param df {symbol} Distance function name within '.ml.clust.i.df' // @param clusts {long[]} Clusters produced by .ml.clust algos // @param isAvg {boolean} Are all scores (0b) or the average score (1b) // to be returned // @return {float} Silhouette score of clusts clust.silhouette:{[data;df;clusts;isAvg] k:1%(count each group clusts)-1; $[isAvg;avg;]clust.i.sil[data;df;group clusts;k]'[clusts;flip data] } // Supervised Learning // @kind function // @category clust // @desc Homogeneity Score // @param pred {long[]} Predicted cluster labels // @param true {long[]} True cluster labels // @return {float} Homogeneity score for true clust.homogeneity:{[pred;true] if[count[pred]<>n:count true; '"pred and true must have equal lengths" ]; if[not ent:clust.i.entropy true;:1.]; confMat:value confMatrix[pred;true]; nm:(*\:/:).((count each group@)each(pred;true))@\:til count confMat; mi:(sum/)0^confMat*.[-;log(n*confMat;nm)]%n; mi%ent } // Optimum number of clusters // @kind function // @category clust // @desc Elbow method // @param data {float[][]} Each column of the data is an individual datapoint // @param df {symbol} Distance function name within 
'.ml.clust.i.df' // @param k {long} Max number of clusters // @return {float[]} Score for each k value - plot to find elbow clust.elbow:{[data;df;k] clust.i.elbow[data;df]each 2+til k-1 } ================================================================================ FILE: ml_ml_clust_tests_passfail.q SIZE: 2,186 characters ================================================================================ // The following utilities are used to test that a function is returning the expected // error message or data, these functions will likely be provided in some form within // the test.q script provided as standard for the testing of q and embedPy code // @kind function // @category tests // @fileoverview Ensure that a test that is expected to fail, // does so with an appropriate message // @param function {(func;proj)} The function or projection to be tested // @param data {any} The data to be applied to the function as an individual item for // unary functions or a list of variables for multivariant functions // @param applyType {boolean} Is the function to be applied unary(1b) or multivariant(0b) // @param expectedError {string} The expected error message on failure of the function // @return {boolean} Function errored with appropriate message (1b), function failed // inappropriately or passed (0b) failingTest:{[function;data;applyType;expectedError] // Is function to be applied unary or multivariant applyType:$[applyType;@;.]; failureFunction:{[err;ret](`TestFailing;ret;err~ret)}[expectedError;]; functionReturn:applyType[function;data;failureFunction]; $[`TestFailing~first functionReturn;last functionReturn;0b] }
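// As an illustration of how failingTest is meant to be driven (this example is mine, not part
// of the test suite; the input data is hypothetical and `e2dist is assumed to be a valid name
// in .ml.clust.i.df), a failing-path check against the k-means fit defined earlier could be:
/ data:(8?1f;8?1f)                                    / 2 features, 8 datapoints (columns are points)
/ failingTest[.ml.clust.kmeans.fit;                   / function under test
/   (data;`e2dist;3;"bad config");                    / config is neither (::) nor a dictionary
/   0b;                                               / multi-argument apply
/   "config must be (::) or a dictionary"]            / error raised by clust.kmeans.fit -> 1b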
// Coerse to string/sym coerse:{$[11 10h[x]~t:type y;y;not[x]&-11h~t;y;0h~t;.z.s[x] each y;99h~t;.z.s[x] each y;t in -10 -11 10 11h;$[x;string;`$]y;y]} cstring:coerse 1b; csym:coerse 0b; // Ensure plain python string (avoid b' & numpy arrays) pydstr:$[.pykx.loaded;{.pykx.eval["lambda x:x.decode()"].pykx.topy x};::] version:@[{NLPVERSION};0;`development] path:{string`nlp^`$@[{"/"sv -1_"/"vs ssr[;"\\";"/"](-3#get .z.s)0};`;""]}` loadfile:{$[.z.q;;-1]"Loading ",x:_[":"=x 0]x:$[10=type x;;string]x;system"l ",path,"/",x;} ================================================================================ FILE: ml_test.q SIZE: 1,245 characters ================================================================================ @[{system"l ",x;.pykx.loaded:1b};"pykx.q";{@[{system"l ",x;.pykx.loaded:0b};"p.q";{'"Failed to load PyKX or embedPy with error: ",x}]}] if[.pykx.loaded;.p:.pykx;.p.e:{.pykx.pyexec x}]; \d .t n:ne:nf:ns:0 pt:{-2 $[first[x]~`..err;err;fail][x;y]} i:{` sv" ",/:` vs x} ge:{if[not P;n+:1;ns+:1;:(::)];v:.Q.trp[x;y;{(`..err;x,"\n",.Q.sbt 1#y)}];n+:1;if[not(1b~v)|(::)~v;pt[v](y;file)]} P:1;N:0;MM:0#` requiremod:{if[0~first u:@[.p.import;x;{(0;x)}];P::0;-2"WARN: can't import: ",string[x],", remainder of ",file," skipped, error was:\n\n\t",u[1],"\n";MM,:x]} e:ge value;.p.e:ge .p.e err:{ne+:1;"ERROR:\n test:\n",i[y 0]," message:\n",i[x 1]," file:\n",i y 1} fail:{nf+:1;"FAIL:\n test:\n",i[y 0]," result:\n",i[.Q.s x]," file:\n",i y 1} u:raze{$[0>type k:key x;k;` sv'x,'k]}each hsym`$$[count .z.x;.z.x;enlist"tests"] {N+:1;P::1;file::x;system"l ",x}each 1_'string u where u like"*.t"; msg:{", "sv{":"sv string(x;y)}'[key x;value x]}`failed`errored`skipped`total!nf,ne,ns,n; $[(ne+nf);[-2 msg;exit 1];-1 msg]; if[ns;-2"These modules required for tests couldn't be imported:\n\t",("\n\t"sv string distinct MM),"\n\ntry running\n\tpip install -r tests/requirements.txt\n\nor with conda\n\tconda install --file tests/requirements.txt\n";-2 msg]; \\ ================================================================================ FILE: q4q_bbo.q SIZE: 2,318 characters ================================================================================ \l q4q.q / https://www.cmegroup.com/confluence/display/EPICSANDBOX/Top+of+Book+-+BBO b:"https://www.cmegroup.com/market-data/datamine-historical-data/files/" d:()!() d[`corn]:"XCBT_C_FUT_110110" d[`crude]:"XNYM_CL_FUT_110110" d[`emini]:"XCME_ES_FUT_110110" d[`eurusd]:"XCME_EC_FUT_110110" d[`eurodollar]:"XCME_ED_FUT_110110" d[`gold]:"XNYM_GC_FUT_110110" -1"downloading and extracting sample best bid and offer datasets"; (.q4q.unzip .q4q.download[b] ,[;".zip"]@) each d; -1"loading fixed width bbo meta information"; m:("HSHHJC*";1#",") 0: `:bbo.csv -1"nulling unwanted columns"; m:update typ:" " from m where not name in `expiry`seq`time`edate`side`px`pxdl`qty`ind`mq f:d`emini / change value to load different data set -1"loading fixed width bbo data: ", f; t:flip (exec name from m where not null typ)!m[`typ`len] 0: `$f,".txt" -1"merging date & time and scaling price column"; t:update time+edate,px*.01 xexp pxdl from t -1"generating trade table"; trade:select `p#expiry,seq,time,tp:px,ts:qty from t where null side, null ind -1"generating quote rack"; quote:select distinct expiry,seq,time from t where not null mq, not null side -1"joining bid quotes to rack"; quote:quote lj 2!select `p#expiry,seq,bs:qty,bp:px from t where side="B" -1"joining ask quotes to rack"; quote:quote lj 2!select `p#expiry,seq,ap:px,as:qty from t where side="A" -1"joining trade and quote tables to 
generate time and quote table"; taq:aj[`expiry`seq;trade] select `p#expiry,seq,bs,bp,ap,as from quote -1"generating open/high/low/close summary"; ohlc:select o:first tp,h:max tp,l:min tp,c:last tp by expiry,0D00:01 xbar time from trade \ / garman klass volatility .q4q.pivot select vol:sqrt[252*24*60]*.q4q.gk[o;h;l;c] by 0D01 xbar time,expiry from ohlc / garman klass yang zhang volatility .q4q.pivot select vol:sqrt[252*24*60]*.q4q.gkyz[o;h;l;c;prev c] by 0D01 xbar time,expiry from ohlc / volume profile .q4q.pivot update ts%sum ts by expiry from select sum ts by 0D01 xbar time,expiry from trade / time weighted average spread .q4q.pivot 1e4*select sprd:(time - prev time) wavg (ap-bp)%.5*ap+bp by 0D02 xbar time,expiry from quote / second by second volatility .q4q.pivot select vol:sqrt 252*24*60*svar tp-prev tp by 0D01 xbar time, expiry from select last log tp by 0D00:01 xbar time,expiry from trade ================================================================================ FILE: q4q_q4q.q SIZE: 613 characters ================================================================================ \d .q4q / (b)ase url, (f)ile download:{[b;f] if[()~key lf:`$":",f;lf 1: .Q.hg `$":",0N!b,f]; lf} uz:$["w"=first string .z.o;"\"C:\\Program Files\\7-zip\\7z.exe\" x -y -aos ";"unzip -u "] unzip:{[f] system 0N!uz, 1_string f;} / garman klass volatility gk:{[o;h;l;c]sqrt avg (.5*x*x:log h%l)-(-1f+2f*log 2f)*x*x:log c%o} / garman klass yang zhang volatility gkyz:{[o;h;l;c;pc]sqrt avg (x*x:log o%pc)+(.5*x*x:log h%l)-(-1f+2f*log 2f)*x*x:log c%o} / pivot table pivot:{[t] u:`$string asc distinct last f:flip key t; pf:{x#(`$string y)!z}; p:?[t;();g!g:-1_ k;(pf;`u;last k:key f;last key flip value t)]; p} ================================================================================ FILE: q4q_ts.q SIZE: 965 characters ================================================================================ \l q4q.q / https://www.cmegroup.com/confluence/display/EPICSANDBOX/Time+and+Sales b:"https://www.cmegroup.com/market-data/datamine-historical-data/files/" d:()!() d[`corn]:"2012-11-05-corn-futures.csv" d[`crude]:"2012-11-05-crude-oil-futures.csv" d[`emini]:"2012-11-05-e-mini-s-p-futures.csv" d[`eurusd]:"2012-11-05-euro-fx-futures.csv" d[`eurodollar]:"2012-11-05-eurodollar-futures.csv" d[`gold]:"2012-11-05-gold-futures.csv" -1"downloading sample time and sales datasets"; .q4q.download[b] each d; f:d`emini; / change value to load different data set -1"loading CSV time and sales dataset: ", f; / t:("DVICSCMIFFCCCCSCCCCCCDS";1#",") 0: `$f t:(" VI MI FCC D ";1#",") 0: `$f -1"renaming columns"; t:`time`seq`expiry`qty`px`side`ind`edate xcol t -1"generating trade table"; trade:select `p#expiry,seq,time+edate,tp:px,ts:qty from t where null side, null ind \ .q4q.pivot select vwap:ts wavg tp by 0D02 xbar time,expiry from trade ================================================================================ FILE: qprof_prof.q SIZE: 11,440 characters ================================================================================ / Q code profiler Copyright (c) 2014-2019 Leslie Goldsmith Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at: http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the License for the specific language governing permissions and limitations under the License. ---------------- Contains routines to profile statement execution counts and CPU consumption. Functions to be profiled can be specified explicitly or by referencing a parent namespace (in which case all functions in all namespaces below it are examined). Profiling temporarily modifies a function by adding instrumentation logic to it. A copy of the original function is first saved, and is restored when profiling is removed. Note that if a profiled function is modified, changes to it are lost when profiling is removed. Attribution of subcall time may not be correct in a profiled function that signals an event to a higher level. Usage information appears at the bottom of this file. Author: Leslie Goldsmith \ \d .prof NSX:`q`Q`h`j`m`o`s`prof / Namespace exclusion list LL:30 / Number of source chars to show PFD:(0#`)!PFS:() if[not type key`SV;SV:PFD] // // @desc Enables profiling for specified functions. // // @param x {symbol[]} Specifies the names of the functions to profile. Namespaces // are replaced by the names of the functions within them, // recursively. If the argument is unspecified or is the empty // symbol, all functions in all non-system namespaces are profiled. // prof:{ {f:$[type key x;value x;0]; $[100h<>type f;-2 "Not a function: ",string x; "l"=first s:last v:value f;-2 "Locked function: ",string x; count t:xf[x;s];[PFD[x]:(enl last[t]j?1+til i),(i:last j:t 1)#'"inn"$0;SV[x]:f;def[x;first v 3;first t]]; -2 "Already profiled: ",string x]; } each fns x; } // // @desc Disables profiling for specified functions, and discards collected usage // information. // // @param x {symbol[]} Specifies the names of the functions to unprofile. Namespaces // are replaced by the names of the functions within them, // recursively. If the argument is unspecified or is the empty // symbol, profiling is disabled for all functions for which it // is enabled. // unprof:{ {$[100h<>type f:SV x;-2 "Not profiled: ",string x; [x set f;{.[`.prof;(,)x;_;y]}\:[`SV`PFD;x]]]; } each $[mt x;key SV;fns x]; } // // @desc Resets profile statistics, discarding previous results. The state of // profiled functions is unaltered. // reset:{PFD[;1 2 3]*:0;PFS::()}
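// A minimal usage sketch (mine, not part of prof.q; `.myapp is a hypothetical namespace, and the
// reporting routine defined further down the original file is not shown here):
/ .prof.prof`.myapp      / namespaces are expanded to the functions inside them, recursively
/ .myapp.run[]           / hypothetical workload whose counts and CPU are now collected
/ .prof.reset[]          / discard statistics gathered so far, keep instrumentation in place
/ .prof.unprof`.myapp    / restore the original, uninstrumented definitions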
/// // Register on a connection. This is called automatically as part of the .finos.conn.open connection procedure. // @param conn Connection name // @param items Dictionary of registration parameters // @return none .finos.conn.registerRemote:{[conn;items] items:(`conn`pid!(`;.z.i)),items; $[type[conn] in -6 -7h; [ items[`conn]:`nonPersistent; sendFunc:neg[conn]; ]; -11h=type conn; [ items[`conn]:conn; sendFunc:.finos.conn.asyncSend[conn]; ]; '"conn must be int or symbol" ]; sendFunc({$[()~key`.finos.conn.register;::;.finos.conn.register[x]]};items); }; .finos.conn.priv.oldZpo:@[get;`.z.po;{}]; .finos.conn.priv.oldZpc:@[get;`.z.pc;{}]; .finos.conn.priv.oldZwo:@[get;`.z.wo;{}]; .finos.conn.priv.oldZwc:@[get;`.z.wc;{}]; /// // This callback registers basic info about the client and calls any user callbacks registered by .finos.conn.addClientConnectCallback. .finos.conn.priv.Zpo:{[existingZpo;myfd] // Invoke the old .z.po as we're chaining these together `.finos.conn.priv.clientList upsert `fd`protocol`user`host`connID!(myfd;`kdb;.z.u;.Q.host[.z.a];.finos.conn.priv.lastClientConnID+:1); {[x;f]@[value f;x;{[f;h;e].finos.conn.log"Client connect callback ",string[f]," threw error ",e," for handle ",string h}[f;x]]}[myfd]each .finos.conn.priv.clientConnectCallbacks; existingZpo[myfd]; }; .z.po:.finos.conn.priv.Zpo .finos.conn.priv.oldZpo; /// // This callback is fired when a handle is disconnected. If this is one of // our fd's, then schedule a reconnect attempt except for lazy connections. // // Note: We chain any existing .z.pc so that will be invoked _before_ this // method. Additionally any user callbacks defined by .finos.conn.addClientDisconnectCallback are called. // // @param fd file descriptor that was disconnected // .finos.conn.priv.Zpc:{[existingZpc;myfd] // Invoke the old .z.pc as we're chaining these together existingZpc[myfd]; connNames:exec name from .finos.conn.priv.connections where fd=myfd; {[connName] .finos.conn.log"Handle to ",string[connName]," disconnected."; //Invoke the disconnect cb inside protected evaluation .finos.conn.errorTrapAt[.finos.conn.priv.connections[connName;`dcb];connName; .finos.conn.dcbErrorHandler[connName;]]; //Reset the fd for this connection to 0N so that it's retried .finos.conn.priv.connections[connName;`fd]:0N; if[not .finos.conn.priv.connections[connName;`lazy]; //Start the connection retry .finos.conn.priv.scheduleRetry[connName;.finos.conn.priv.initialBackoff]; ]; } each connNames; {[x;f]@[value f;x;{[f;h;e].finos.conn.log"Client disconnect callback ",string[f]," threw error ",e," for handle ",string h}[f;x]]}[myfd]each .finos.conn.priv.clientDisconnectCallbacks; delete from `.finos.conn.priv.clientList where fd=myfd; }; .z.pc:.finos.conn.priv.Zpc .finos.conn.priv.oldZpc; /// // This callback registers basic info about the client and calls any user callbacks registered by .finos.conn.addClientWSConnectCallback. .finos.conn.priv.Zwo:{[existingZwo;myfd] // Invoke the old .z.wo as we're chaining these together `.finos.conn.priv.clientList upsert `fd`protocol`user`host`connID!(x;`ws;.z.u;.Q.host[.z.a];.finos.conn.priv.lastClientConnID+:1); {[x;f]@[value f;x;{[f;h;e].finos.conn.log"Client Websocket connect callback ",string[f]," threw error ",e," for handle ",string h}[f;x]]}[myfd]each .finos.conn.priv.clientWSConnectCallbacks; existingZwo[myfd]; }; .z.wo:.finos.conn.priv.Zwo .finos.conn.priv.oldZwo; /// // This callback calls any user callbacks registered by .finos.conn.addClientWSDisconnectCallback. 
.finos.conn.priv.Zwc:{[existingZwc;myfd] // Invoke the old .z.wc as we're chaining these together existingZwc[myfd]; {[x;f]@[value f;x;{[f;h;e].finos.conn.log"Client Websocket disconnect callback ",string[f]," threw error ",e," for handle ",string h}[f;x]]}[myfd]each .finos.conn.priv.clientWSDisconnectCallbacks; delete from `.finos.conn.priv.clientList where fd=myfd; }; .z.wc:.finos.conn.priv.Zwc .finos.conn.priv.oldZwc; ================================================================================ FILE: kdb_q_dep_dep.q SIZE: 8,829 characters ================================================================================ .finos.dep.list:([moduleName:()]version:();projectRoot:();scriptPath:();libPath:();loaded:`boolean$();unloads:();isOverride:`boolean$()); .finos.dep.currentModule:(); .finos.dep.priv.regModule:{[moduleName;version;projectRoot;scriptPath;libPath;override] if[not 10h=type moduleName; '"moduleName must be a string"]; if[not 10h=type version; '"version must be a string"]; if[not 10h=type projectRoot; '"projectRoot must be a string"]; if[not 10h=type scriptPath; '"scriptPath must be a string"]; if[not 10h=type libPath; '"libPath must be a string"]; if[0=count moduleName; '"moduleName must not be empty"]; if[0=count version; '"version must not be empty"]; if[all 0=count each (projectRoot;scriptPath;libPath); '"at least one of projectRoot, scriptPath or libPath must be provided"]; if[not .finos.dep.isAbsolute projectRoot; projectRoot:.finos.dep.resolvePathTo[system"cd";projectRoot]]; if[()~key `$":",projectRoot; '"project root does not exist: ",projectRoot]; if[not .finos.dep.isAbsolute scriptPath; scriptPath:.finos.dep.resolvePathTo[projectRoot;scriptPath]]; if[()~key `$":",scriptPath; '"script path does not exist: ",scriptPath]; if[not .finos.dep.isAbsolute libPath; libPath:.finos.dep.resolvePathTo[projectRoot;libPath]]; if[()~key `$":",libPath; '"library path does not exist: ",libPath]; if[first enlist[moduleName] in exec moduleName from .finos.dep.list; existing:.finos.dep.list moduleName; prevOverride:existing`isOverride; if[override and existing`loaded; '"cannot override already loaded module"]; if[not override; if[not prevOverride; if[not version~existing`version; 'moduleName," version mismatch: ",version," (already registered: ",existing[`version],")"]; if[not projectRoot~existing`projectRoot; 'moduleName," projectRoot mismatch: ",projectRoot," (already registered: ",existing[`projectRoot],")"]; if[not scriptPath~existing`scriptPath; 'moduleName," scriptPath mismatch: ",scriptPath," (already registered: ",existing[`scriptPath],")"]; if[not libPath~existing`libPath; 'moduleName," libPath mismatch: ",libPath," (already registered: ",existing[`libPath],")"]; ]; :(::); ]; ]; `.finos.dep.list upsert `moduleName`version`projectRoot`scriptPath`libPath`isOverride!(moduleName;version;projectRoot;scriptPath;libPath;override); }; .finos.dep.regModule:{[moduleName;version;projectRoot;scriptPath;libPath] .finos.dep.priv.regModule[moduleName;version;projectRoot;scriptPath;libPath;0b]}; .finos.dep.regOverride:{[moduleName;version;projectRoot;scriptPath;libPath] .finos.dep.priv.regModule[moduleName;version;projectRoot;scriptPath;libPath;1b]}; //maybe use common override method across all projects? .finos.dep.try:{.finos.util.trp[x;y;{[z;e;t].finos.dep.errorlogfn"Error: ",e," Backtrace:\n",.Q.sbt t; z[e]}[z]]}; if[0<count getenv`FINOS_DEPENDS_DEBUG; .finos.dep.try:{[x;y;z]x . 
y}]; .finos.dep.priv.moduleStack:(); .finos.dep.loadModule:{[moduleName] if[first enlist[moduleName] in .finos.dep.priv.moduleStack; {'x}"circular module load: "," -> " sv .finos.dep.priv.moduleStack,enlist moduleName; ]; prevModule:.finos.dep.currentModule; .finos.dep.currentModule:moduleName; .finos.dep.priv.moduleStack,:enlist moduleName; res:.finos.dep.try[(1b;)@.finos.dep.priv.loadModule@;enlist moduleName;(0b;)]; .finos.dep.currentModule:prevModule; .finos.dep.priv.moduleStack:-1_.finos.dep.priv.moduleStack; if[not first res; 'last res]; }; .finos.dep.isLoaded:{[moduleName].finos.dep.list[moduleName;`loaded]}; //can be overwritten by user .finos.dep.preLoadModuleCallback:{[moduleName]}; .finos.dep.postLoadModuleCallback:{[moduleName]}; .finos.dep.priv.loadModule:{[moduleName] if[not moduleName in key .finos.dep.list; '"module not registered: ",moduleName]; if[.finos.dep.list[moduleName;`loaded]; :(::)]; .finos.dep.preLoadModuleCallback[moduleName]; if[`qproject.json in key `$":",.finos.dep.list[moduleName;`projectRoot]; .finos.dep.loadDependencies `$":",.finos.dep.joinPath(.finos.dep.list[moduleName;`projectRoot];"qproject.json"); ]; scriptPath:.finos.dep.list[moduleName;`scriptPath]; if[`module.q in key `$":",scriptPath; .finos.dep.include .finos.dep.joinPath(scriptPath;"module.q"); ]; .finos.dep.postLoadModuleCallback[moduleName]; .finos.dep.list[moduleName;`loaded]:1b; }; .finos.dep.scriptPathIn:{[moduleName;script] if[not moduleName in key .finos.dep.list; '"module not registered: ",moduleName]; path:.finos.dep.joinPath(.finos.dep.list[moduleName;`scriptPath];script); if[not {x~key x}`$":",path; '"script not found: ",path]; path}; .finos.dep.scriptPath:{[script] if[()~.finos.dep.currentModule; '".finos.dep.scriptPath must be used in module.q"]; .finos.dep.scriptPathIn[.finos.dep.currentModule;script]}; .finos.dep.loadScriptIn:{[moduleName;script] if[not moduleName in key .finos.dep.list; '"module not registered: ",moduleName]; .finos.dep.include .finos.dep.scriptPathIn[moduleName;script]; }; .finos.dep.loadScript:{[script] if[()~.finos.dep.currentModule; '".finos.dep.loadScript must be used in module.q"]; .finos.dep.loadScriptIn[.finos.dep.currentModule;script]}; .finos.dep.execScriptIn:{[moduleName;script] if[not moduleName in key .finos.dep.list; '"module not registered: ",moduleName]; system"l ",.finos.dep.scriptPathIn[moduleName;script]; }; .finos.dep.execScript:{[script] if[()~.finos.dep.currentModule; '".finos.dep.loadScript must be used in module.q"]; .finos.dep.execScriptIn[.finos.dep.currentModule;script]};
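// A brief usage sketch of the registration/loading flow above (this example is mine, not part of
// dep.q; the module name and paths are hypothetical and must exist on disk for registration to succeed):
/ .finos.dep.regModule["mylib";"1.0.0";"/opt/qmods/mylib";"src";"lib"]   / name, version, project root, script path, lib path
/ .finos.dep.loadModule["mylib"]      / runs src/module.q if present, resolving qproject.json dependencies first
/ .finos.dep.isLoaded["mylib"]        / 1b once module.q has been executed
/ inside module.q, further scripts can be pulled in relative to the module's script path:
/ .finos.dep.loadScript["util.q"]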
Signal processing and q¶

Signal processing is the analysis, interpretation and manipulation of signals to reveal important information. Signals of interest include human speech, seismic waves, images of faces or handwriting, brainwaves, radar, traffic counts and many others. This processing reveals information in a signal that can be obscured by non-useful information, commonly called ‘noise’. This noise can be due to the stochastic[1] nature of signals and/or interference from other signals.

Traditionally, signal processing has been performed either by an analog process, through the use of a series of electronic circuits, or through the use of dedicated hardware solutions, such as SIP (Systems In Package) or SoC (Systems on a Chip). The use of Internet of Things[2] (IoT) devices to capture signals is driving the trend towards software-based signal-processing solutions. Software-based solutions are not only cheaper and more widely accessible than their hardware alternatives, but their highly configurable nature is better suited to the modular aspect of IoT sensor setups. The growth of IoT and software-based signal processing has resulted in increased availability of cheaply-processed signal data, enabling more data-driven decision making, particularly in the manufacturing sector [1].

Currently popular software implementations of digital signal-processing techniques can be found in the open-source libraries of Python (e.g., SciPy, NumPy, PyPlot) and C++ (e.g., SigPack, Aquila, Liquid-DSP). The convenience of these libraries is offset by the lack of a robust, high-throughput data-capture system, such as kdb+ tick. While it is possible to integrate a kdb+ process with Python or C++, and utilize these signal-processing routines on a kdb+ dataset, it is entirely possible to implement them natively within q.

This white paper will explore how statistical signal-processing operations (those which assume that signals are stochastic) can be implemented natively within q to remove noise, extract useful information, and quickly identify anomalies. This allows q/kdb+ to be used as a single platform for the capture, processing, analysis and storage of large volumes of sensor data.

The technical specifications for the machine used to perform the computations described in this paper are as follows:

- CPU: Intel® Core™ i7 Quad Core Processor i7-7700 (3.6GHz) 8MB Cache
- RAM: 16GB Corsair 2133MHz SODIMM DDR4 (1 x 16GB)
- OS: Windows 10 64-bit
- kdb+ 3.5 2017.10.11

Basic steps for signal processing¶

For the purpose of this paper, signal processing is composed of the following steps:

- Data Capture
- Spectral Analysis
- Smoothing
- Anomaly Detection

This paper will focus on the last three steps listed above, as the first topic, data capture, has already been covered extensively in previous KX white papers, including

- kdb+tick profiling for throughput optimization
- Disaster recovery for kdb+tick
- Query Routing: a kdb+ framework for a scalable, load-balanced system

Moreover, the analysis techniques presented will make use of the following sensor data set, collected by the University of California, Irvine, which contains the total power load for a single household over a 4-year period, to a resolution of 1 minute, as illustrated in Figure 1 below.
Figure 1: The load data of a single household, in 1-minute increments, from January 2007 to September 2010 Spectral analysis¶ To understand better the nature of the processing that will be performed on this signal, it will be useful to cover an important signal/wave concept. A fundamental property of any wave (and hence signal) is that of superpositioning[3], which is the combining of different signals (of the same or different frequency), to create a new wave with different amplitude and/or frequency. If simple signals can be combined to make a more complex signal, any complicated signal can be treated as the combination of more fundamental, simpler signals. Spectral analysis is a broad term for the family of transformations that ‘decompose’ a signal from the time domain to the frequency domain[4], revealing these fundamental components that make up a more complex signal. This decomposition is represented as a series of complex[5] vectors, associated with a frequency ‘bin’, whose absolute value is representative of the relative strength of that frequency within the original signal. This is a powerful tool that allows for insight to be gained on the periodic nature of the signal, to troubleshoot the sensor in question, and to guide the subsequent transformations as part of the signal processing. In general, some of the basic rules to keep in mind when using the frequency decomposition of a signal are: - Sharp distinct lines indicate the presence of a strong periodic nature; - Wide peaks indicate there is a periodic nature, and possibly some spectral leakage (which won’t be fully discussed in this paper); - An overall trend in the frequency distribution indicates aperiodic (and hence of an infinite period) behavior is present. Consider the following artificial signal in Figure 2, which is a combination of a 10Hz and 20Hz signal together with a constant background Gaussian noise: x:(2*pi*(1%2048)) * til 2048; y:(5 * sin (20*x))+(5 * sin (10 * x)) + `float$({first 1?15.0} each x); Figure 2: An artificial, noisy signal This time-series shows a clear periodic nature, but the details are hard to discern from visual inspection. The following graph, Figure 3, shows the frequency distribution of this signal, produced through the application of an un-windowed Fourier Transform, a technique that will be covered in detail below. Figure 3: The frequency distribution of the simulated signal shown in Figure 2 The distinct sharp peaks and a low level random ‘static’ in Figure 3 demonstrate the existence of both period trends and some background noise within the signal. Considering that the signal in Figure 2 was to be made up of two different frequencies, why are there clearly three different frequencies (i.e., 10, 20 and 50Hz) present? This is because the original signal contained a low-level 50Hz signal, one that might be observed if a sensor wasn’t correctly shielded from 50Hz main power. This leakage could influence any decisions made from this signal, and may not have been immediately detected without spectral analysis. In general, spectral analysis is used to provide information to make a better decision on how to process a signal. In the case above, an analyst may decide to implement a high-pass filter to reduce the 50Hz noise, and then implement a moving-mean (or Finite Impulse) filter to reduce the noise. In the following chapters, an industry-standard technique for spectral analysis, the radix-2 Fast Fourier Transform, is implemented natively within q. 
This implementation is reliant on the introduction of a framework for complex numbers within q.

Complex numbers and q¶

All mainstream spectral-analysis methods are performed in a complex vector space, due to the increase in computational speed this produces, and the ease with which complex signals (such as RADAR) are handled in a complex vector space. While complex numbers are not natively supported in q, it is relatively straightforward to create a valid complex vector space by representing any complex number as a list of real and imaginary parts.

Consider the following complex number, composed of the real and imaginary parts 5 and 3, respectively:

z:(5;3)

This means that a complex number series can be represented as

z:(5 7 12;3 -9 2)

By defining complex numbers as pairs of lists, subtraction and addition can be implemented with the normal + and - operators in q. All that remains to construct a complex vector space in q is to create operators for complex multiplication, division, conjugation and absolute value.

\d .signal

mult:{[vec1;vec2]
  // Elementwise complex multiplication of two vectors
  realOut:((vec1 0) * vec2 0) - (vec1 1) * vec2 1;
  imagOut:((vec1 1) * vec2 0) + (vec1 0) * vec2 1;
  (realOut;imagOut)};

division:{[vec1;vec2]
  // Multiply by the conjugate of vec2 and scale by its squared magnitude
  denom:1%((vec2 0) xexp 2) + (vec2 1) xexp 2;
  mul:mult[vec1;(vec2 0;neg vec2 1)];
  realOut:(mul 0)*denom;
  imagOut:(mul 1)*denom;
  (realOut;imagOut)};

conj:{[vec](vec 0;neg vec 1)};

mag:{[vec]
  sqrvec:vec xexp 2;
  sqrt (sqrvec 0)+sqrvec 1};

\d .

q).signal.mult[5 -3;9 2]
51 -17
q).signal.mult[(5 2 1;-3 -8 10);(9 8 -4;2 3 6)]
51 40 -64   / Reals
-17 -58 -34 / Imaginary

Fast Fourier Transform¶

The Fourier Transform is a standard transformation to decompose a real or complex signal into its frequency distribution[6]. The Fast Fourier Transform (FFT) is a family of algorithms that allow the Fourier transform to be computed in an efficient manner, by utilizing symmetries within the computation and removing redundant calculations. This reduces the complexity of the algorithm from \(n^2\) to around \(n \log(n)\), which scales impressively for large samples.

Traditionally the Fast Fourier Transform and its family of transforms (such as Bluestein Z-Chirp or Rader’s) are often packaged up in libraries for languages such as Python or Matlab, or as external libraries such as FFTW (Fastest Fourier Transform in the West). However, with the creation of a complex-number structure in q, this family of transformations can be written natively. This reduces dependencies on external libraries and allows for a reduced development time.

The following example shows how the Radix-2 FFT (specifically a Decimation-In-Time, Bit-Reversed Input Radix-2 algorithm) can be implemented in q. The function, fftrad2 , takes a complex vector of length \(N\) (a real signal would have 0s for the imaginary part) and produces a complex vector of length \(N\), representing the frequency decomposition of the signal.
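As a quick aside before the transform code below: the following calls and results are mine rather than the paper’s, but they give a handy sanity check of the remaining complex operators, including the corrected division.

q).signal.conj[(5 7 12;3 -9 2)]     / negate the imaginary component
5  7 12
-3 9 -2
q).signal.mag[(3 0 5;4 2 12)]       / sqrt of real^2 + imaginary^2
5 2 13f
q).signal.division[(5;3);(1;2)]     / (5+3i) % (1+2i) = 2.2 - 1.4i
2.2
-1.4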
\d .signal // Global Definitions PI:acos -1; / pi ; BR:2 sv reverse 2 vs til 256; / Reversed bits in bytes 00-FF P2:1,prds[7#2],1; / Powers of 2 with outer guard P256:prds 7#256; / Powers of 256 bitreversal:{[indices] // Applies a bitwise reversal for a list of indices ct:ceiling 2 xlog last indices; / Number of significant bits (assumes sorted) bt:BR 256 vs indices; / Breakup into bytes and reverse bits dv:P2[8-ct mod 8]; / Divisor to shift leading bits down bt[0]:bt[0] div dv; / Shift leading bits sum bt*count[bt]#1,P256 div dv / Reassemble bytes back to integer }; fftrad2:{[vec] // This performs a FFT for samples of power 2 size // First, define some constants n:count vec 0; n2:n div 2; indexOrig:til n; // Twiddle Factors - precomputed the complex values over the //discrete angles, using Euler formula angles:{[n;x]2*PI*x%n}[n;] til n2; .const.twiddle:(cos angles;neg sin angles); // Bit-reversed the vector and define it into a namespace so lambdas can access it ind:bitreversal[indexOrig]; .res.vec:`float$vec . (0 1;ind); // Precomputing the indices required to implement each temporal phase of the DIT // Number of signals signalcount:`int$ {2 xexp x} 1+ til `int$2 xlog n; // Number of points in each signal signalpoints:reverse signalcount; // Define an initial count, this can be used as a // backbone to get the even and odd indices by adjustment initial:{[n2;x]2*x xbar til n2}[n2;] peach signalcount div 2; evens:{[n2;x]x + n2#til n2 div count distinct x}[n2;] peach initial; odds:evens + signalcount div 2; twiddleIndices:{[n;n2;x]n2#(0.5*x) * til n div x}[n;n2;] peach signalpoints; // Butterfly Implementation bflyComp:{[bfInd] tmp:mult[.res.vec[;bfInd 1];.const.twiddle[;bfInd 2]]; .[`.res.vec;(0 1;bfInd 1);:;.res.vec[;bfInd 0]-tmp]; .[`.res.vec;(0 1;bfInd 0);+;tmp]}; bflyComp each flip `int$(evens;odds;twiddleIndices); .res.vec}; \d . Below is a demonstration of how .signal.fftrad2 operates on an example signal shown in Table 1. | Time (s) | 0 | 0.25 | 0.5 | 0.75 | | Amplitude | 0 | 1 | 0 | -1 | // Adjusting .signal.fftrad2, to print intermediate results / Line 31, .res.vec:0N!`float$vec . (0 1;ind); / Line 51, .[`.res.vec;(0 1;bfInd 0);+;tmp];0N!.res.vec}; q).signal.fftrad2[(0 1 0 -1;4#0)] (0 0 1 -1f;0 0 0 0f) // Bitwise reversal (0 0 0 2f;0 0 0 0f) // Butterfly step 1 (0 1.224647e-016 0 -1.224647e-016;0 -2 0 2f) // Butterfly step 2 0 1.224647e-016 0 -1.224647e-016 // Result is printed 0 -2 0 2 Sampling frequency and length¶ The application of an FFT produces the sequence of complex numbers that are associated with the magnitudes of the frequencies that compose a signal, but not the frequencies themselves. The frequencies that are captured by a spectral analysis are a function of the sampling that was applied to the dataset, and will have the following characteristics, - The number of distinct frequencies (often called frequency bins) is equal to - The maximum frequency that can be captured by a Fourier Transform is equal to half the sampling frequency[7] - Both positive and negative frequencies will result from the transform and will be mirrored about 0Hz Considering the above characteristics, the frequencies of a spectral analysis are a function of only the number of samples and the sampling frequency. The following function can be used to produce the frequencies of a spectral analysis from the number of samples and sampling frequency, and hence is used to define the x axis. 
q)xAxis:{[Ns;fs] (neg fs*0.5)+(fs%Ns-1)*til Ns}

This function has been applied below to compute the associated frequencies for the sample signal given in Table 1.

q)xAxis[4;4] //Calculating the frequency bins for 4 samples, measured at 4Hz
-2 -0.6666667 0.6666667 2
q)transform:.signal.fftrad2[(0 1 0 -1;4#0)]
q)flip `Frequency`Real`Imaginary!(xAxis[4;4];transform 0;transform 1)
Frequency  Real            Imaginary
-----------------------------------
-2         0               0
-0.6666667 1.224647e-016   -2
0.6666667  0               0
2          -1.224647e-016  2

Windowing the signal¶

There is an important assumption that is carried forward from the continuous Fourier Transform to the discrete version: that the data set being operated on is infinitely large. Obviously, this cannot be true for a discrete transform, but it is simulated by operating with a circular topology, i.e., by assuming that the end of the sample connects to the start. This means that there can be discontinuities in the data if the sampled dataset doesn’t capture a full period (resulting in different start-end values). These discontinuities result in a phenomenon known as ‘spectral leakage’[8], where results in the frequency domain are spread over adjacent frequency bins.

The solution to this problem is to window the signal, which adjusts the overall amplitude of the signal to limit the discontinuities. In general, this is achieved by weighting the signal towards 0 at the start and end of the sampling. For this paper, the commonly used Hanning window (which finds a middle ground between many different windowing functions), shown in Figure 4, will be applied.

Figure 4: The shape of a Hann window, which is applied to an input signal to taper it so that there is greater continuity under the transform. Distributed under CC Public Domain

In the following example, the Hanning weightings are computed and then applied to a complex vector.

// The window function will produce a series of weightings
// for a sequence of length nr,
// which would be the count of the complex series
q)window:{[n] {[nr;x](sin(x*acos -1)%nr-1) xexp 2} [n;] each til n}
q)vec:(1 2 1 4;0 2 1 4)
q)n:count first vec
// Applying the window with an in-place amendment
q)vec*:(window n;window n);
q)vec
0 1.5 0.75 5.999039e-032
0 1.5 0.75 5.999039e-032

The Fourier Transformation for a real-valued signal¶

In general, the final step to getting the frequency distribution from the transformed vector series is to get the normalized magnitude of the complex result. The normalized magnitude is computed by taking the absolute value of the real and imaginary components, divided by the number of samples. However, for a real-valued input, where there is symmetry about the central axis[9], the dataset can be halved.

Putting it all together¶

Below, a single function, spectral , is defined which performs a spectral analysis (through a Fourier Transform) on a complex vector. This function windows the vector, performs an FFT and scales the results appropriately.

spectral:{[vec;fs]
  // Get the length of the series, ensure it is a power of 2
  // If not, we should abort operation and output a reason
  nr:count first vec;
  if[not 0f =(2 xlog nr) mod 1;0N!
"The input vector is not a power of 2";`error]; // Window the functions, using the Hanning window wndFactor:{[n] {[nr;x](sin(x*acos -1)%nr-1) xexp 2} [n;] each til n}; vec*:(wndFactor nr;wndFactor nr); // Compute the fft and get the absolute value of the result fftRes:.signal.fftrad2 vec; mag:.signal.mag fftRes; // Scale an x axis xAxis:{[Ns;fs] (neg fs*0.5)+(fs%Ns-1)*til Ns}[nr;fs]; // Table these results ([]freq:xAxis;magnitude:mag%nr) }; Let’s apply this to the dataset shown in Figure 1, from the University of California, loading it from the .txt file and then filling in null values to ensure a constant sampling frequency. q)household:select Time:Date+Time,Power:fills Global_active_power from ("DTF "; enlist ";") 0:`:household_power_consumption.txt This data is then sampled at 2-hour intervals, ensuring the sample size is a power of 2, before performing a spectral analysis on it with the spectral function. q)household2hour:select from household where 0=(i mod 120) q)n:count household2hour 17294 q)cn:`int$2 xexp (count 2 vs n)-1 16384 q)spec:spectral[(cn sublist household2hour`Power;cn#0f);1%120*60] q)\t spectral[(cn sublist household2hour`Power;cn#0f);1%120*60] 6 Figure 5: The frequency distribution of the Household Power load, excluding the first result to demonstrate the Hermitian symmetry The above FFT was formed on 16,384 points and was computed in 6ms. In this case, the frequencies are so small that they are difficult to make inferences from. (It’s hard to intuit what a frequency of 1 \(\mu\)Hz corresponds to). A nice solution is to remove the symmetry present (taking only the first half), then convert from Frequency (Hz) to Period (hours). specReal:select period:neg (1%freq)%3600, magnitude from (cn div 2) sublist spec Figure 6: The frequency distribution of the Household Power dataset with the symmetry removed It can be seen from Figure 6, that the first value is nearly an order of magnitude larger than any other, suggesting that there is a strong high-frequency signal that hasn’t been captured. The likely culprit from this would be interference from a 50Hz signal due to mains power, and the existence of others is a strong possibility. After the large initial value, there are distinct 2, 3, 6, 10 and (just below) 12-hour cycles, which could be explained as part of the ebb and flow of day-to-day life, coming home from school and work. This result, with strong sharp lines, definitively shows the existence of periodic trends, along with a constant background noise. This knowledge will be used to further process the signal and obtain more useful information on long-term trends. Smoothing¶ A simple method to see the longer-term trends and remove the daily noise is to build a low-pass filter and reduce the frequencies below a specific threshold. A simple low-pass filter can be constructed as a windowed moving average, which is trivial in q with its inbuilt function mavg . All that is needed is to ensure the moving average is centered on the given result, which can be achieved by constructing a function that uses rotate to shift the input lists to appropriately center the result. The following implements an N-point centered moving average on a list. \d .signal movAvg:{[list;N] $[0=N mod 2; (floor N%2) rotate 2 mavg (N mavg list); (floor N%2) rotate N mavg list]}; \d . From the spectral analysis of the Household Power dataset (see Figure 6), it can see that the most dominant signals have a period of at most 24 hours. 
Therefore, this is a good value for the window size for an initial smoothing of the Household Power dataset[10]. q)householdMA:select Time, Power: .signal.movAvg[Power;24*60] from household q)\t householdMA:select Time, Power: .signal.movAvg[Power;24*60] from household 109 The following Figure 7 shows the result of the application of a 1,440-point moving average on the raw Household Power data set (see Figure 1), containing over 2 million data points (achieved in just over 100ms). Figure 7: The smoothed dataset: note the now clearly visible yearly trend Comparison of the smoothed data (Figure 7) and the raw data (Figure 1) demonstrates the clarity achieved by applying a simple moving average to smooth a dataset. This simple process has removed a significant amount of noise and revealed the existence of a year-long, periodic trend. Application of an FFT to this smoothed dataset can reveal trends which would have been difficult to obtain from the raw data. For this, a lower sampling rate (of 6-hour intervals) will be used to assist in revealing long-term trends[11]. Figure 8: The FFT of the smoothed dataset From the figure above, it can be seen that whilst there is still some high-frequency noise, there are definite 6- and 12-hour trends, with a definite 2.5-day trend in the power usage. Despite Figure 7 showing a clear year-long periodic trend, this isn’t present in the Frequency distribution in Figure 8, probably because the dataset simply does not cover enough years for that trend of that length to be present in a simple spectral analysis. It can be concluded from the results in this chapter that there are bi-weekly, weekly and yearly periodic trends hidden within the raw data, as well as the 5, 6, 12 and 24-hour trends obtained from earlier work. In the context of household power consumption, these results are not unexpected. Anomaly detection¶ In this paper, anomaly detection is considered the last step of signal processing, but it can be implemented independently of any other signal processing, serving as an early warning indicator of deviations from expected behavior. Assuming the sensors are behaving stochastically, outliers can be identified by the level of deviation they are experiencing from the expected behavior, i.e., using the mean and standard deviation. Ideally, these deviations should be measured from the local behavior in the signal rather than from the whole series, which allows for periodic behavior to be accounted for. This can be achieved with a windowed moving average (from “Smoothing” above) and a windowed moving standard deviation. \d .signal movDev:{[list;N] $[0=N mod 2; (floor N%2) rotate 2 mdev N mavg list; (floor N%2) rotate N mdev list]}; \d . Assuming the randomness in the signal is Gaussian in nature, an anomaly is present if the current value is more than a sigma-multiple of the standard deviation away from the moving mean. This can be determined effectively within q by using a vector conditional evaluation in a qSQL statement, comparing the actual value with maximum/minimums created with the moving average and deviation. Below this technique is demonstrated on the Household Power dataset, with a 5-point moving window and a sigma level of 2. 
q)sigma:2;
q)outliers:update outlier:?[
    (Power > .signal.movAvg[Power;5] + fills sigma * .signal.movDev[Power;5]) |
    (Power < .signal.movAvg[Power;5] - fills sigma * .signal.movDev[Power;5]);
    Power;
    0n] from household;
q)\t outliers:update outlier:?[
    (Power > .signal.movAvg[Power;5] + fills sigma * .signal.movDev[Power;5]) |
    (Power < .signal.movAvg[Power;5] - fills sigma * .signal.movDev[Power;5]);
    Power;
    0n] from household;
380

It takes approximately 380ms to perform this detection routine on over 2 million data points, and the results can be seen below in Figure 9. Of the approximately 2 million data points, around 5,000 were detected as anomalous. Interestingly, setting the sigma level to over 3 drops the number of anomalies to 0, perhaps due to the Gaussian distribution within the dataset.

Figure 9: The results of an anomaly detection on the Household Power dataset

Conclusion¶

In this paper, we have shown the ease with which simple, fast signal processing can be implemented natively within q. All that was required was a basic framework to support complex-number operations, allowing for complex-field mathematics within q. With this framework, signal-processing algorithms can easily be implemented natively in q, reducing dependencies on external libraries and allowing for faster, more interactive development. This complex framework was essential to the successful implementation of a Fast Fourier Transform (FFT), a fundamental operation in the processing of many signals.

The collection of signal-processing functions shown in this paper was able to rapidly process the grid-load dataset obtained from the University of California, Irvine. An FFT was implemented on a sample of 16,000 data points in around 6ms and, guided by that spectral analysis, a simple low-pass filter was implemented directly in the time domain, on over 2 million data points, in around 100ms. The resulting data had a clear reduction in noise and revealed the longer-term trends present in the signal. It has also been shown that a simple and robust anomaly-detection routine can be implemented in q, allowing anomalies to be detected in large datasets very rapidly. These smoothing and anomaly-detection routines could easily be integrated as real-time operations with other data-capture and processing routines.

Clearly, native applications of signal processing, which historically have been the realm of libraries accessed through Python or C++, can be natively integrated into q/kdb+ data systems. This allows for q/kdb+ to be used as a platform for the capture, processing, storage and querying of sensor-based signal information.

References¶

[1] E. Brynjolfsson and K. McElheran, “Data in Action: Data-Driven Decision Making in U.S. Manufacturing,” Center for Economic Studies, U.S. Census Bureau, 2016.
[2] E. Chu and A. George, Inside the FFT Black Box: Serial and Parallel Fast Fourier Transform Algorithms, CRC Press, 1999.
[3] PJS, “Window Smoothing and Spectral Leakage”, Siemens, 14 September 2017. siemens.com, accessed 20 May 2018.
[4] K. Jones, The Regularized Fast Hartley Transform, Springer Netherlands, 2010.
[5] L. Debnath and D. Bhatta, Integral Transforms and Their Applications, Chapman and Hall/CRC, 2017.

Author¶

Callum Biggs joined First Derivatives in 2017 as a data scientist in its Capital Markets Training Program.

Acknowledgments¶

I would like to acknowledge the Machine Learning Repository hosted by the University of California, Irvine, for providing the real-world data sets used as sample input data sets for the methods presented in this paper.
This repository has an enormous number of freely-available datasets.

Notes¶

- A stochastic process is one where at least part of the process is defined by random variables. It can be analyzed but not predicted with certainty, although autoregressive models can achieve a modest prediction capability.
- Internet of Things is the networking and data sharing between devices with embedded electronics, sensors, software and even actuators. Common household examples include devices such as Fitbits, Amazon Echos and Wi-Fi connected security systems.
- An everyday example of the superpositioning principle in action is noise-cancelling headphones, which measure the outside sound wave and then produce an exact opposite wave internally that nullifies the outside noise.
- The ‘domain’ is the variable on which a signal is changing. Humans naturally tend to consider changes in a time domain, such as how a singer’s loudness varies during a song. In this case, the frequency-domain analysis would be to consider how often the singer hit particular notes.
- Here ‘complex’ is used in the mathematical sense: each vector contains a real and an imaginary part.
- Technically, a Fourier Transformation decomposes a signal into complex exponentials (called “cissoids”).
- This is a result of the Nyquist-Shannon sampling theorem.
- A comprehensive explanation for how sampling discontinuities can result in spectral leakage would require an in-depth discussion of the underlying mathematics of a Fourier Transformation. An excellent discussion of this phenomenon (which avoids the heavy math) can be found at Windows and Spectral Leakage
- This is because the result of a real-valued FFT is conjugate symmetric, or Hermitian, with the exception of the first value.
- Designing a filter to smooth the data in this manner is very simplistic and does not account for key filtering criteria such as frequency and impulse response, phase shifting, causality or stability. Application of more rigorous design methods such as the Parks-McClellan algorithm would be beyond the scope and intention of this paper.
- The smoothing that was performed may have highlighted the longer-term trends, but the background noise remained present. It was therefore necessary to decrease the sampling frequency (and therefore reduce the largest frequency captured) in order to better highlight the low-frequency/high-period trends.

Appendix¶

Data sources¶

The primary dataset, containing sensor readouts for a single household’s total power load, that was introduced and then transformed throughout this paper, can be found at uci.edu. The code presented in this paper is available at github.com/kxcontrib/q-signals.

Python FFT comparison¶

In the following, a comparison between two alternative implementations of the FFT will be performed to benchmark the native q-based algorithm. Comparisons will be made to a similar
Fast Fourier Transform, written natively in Python 3.6, and also against a well-refined, highly-optimized C-based library. The library in question will be the NumPy library, called via the q fusion with Python – embedPy (details on setting up this environment are available from the Machine Learning section on code.kx.com). In order to get an accurate comparison, each method should be tested on a sufficiently large dataset; the largest \(2^N\)-sized subset of the previously used Household Power dataset is suitable for this purpose.

//Get the largest possible 2^N dataset size
q)n:count household;
q)nr:`int$2 xexp (count 2 vs n)-1
1048576
q)comparison:select real:Power, imag:nr#0f from nr#household
q)\t .signal.fftrad2[(comparison`real;comparison`imag)]
1020
q)save `comparison.csv

So the q algorithm can determine the Fourier Transform of a million-point dataset in about a second, a nice whole number to gauge the comparative performance.

In Python, the native complex-math definitions can be used to build a similar radix-2, Decimation-In-Time FFT routine, as implemented by Peter Hinch.

import pandas as pd
import numpy as np
import time
import math
from cmath import exp, pi

data = pd.read_csv('c:/q/w64/comparison.csv')
fftinput = data.real.values + 1j * data.imag.values

def ffft(nums, forward=True, scale=False):
    n = len(nums)
    m = int(math.log2(n))   #n= 2**m  #Calculate the number of points
    #Do the bit reversal
    i2 = n >> 1
    j = 0
    for i in range(n-1):
        if i<j:
            nums[i], nums[j] = nums[j], nums[i]
        k = i2
        while (k <= j):
            j -= k
            k >>= 1
        j+=k
    #Compute the FFT
    c = 0j-1
    l2 = 1
    for l in range(m):
        l1 = l2
        l2 <<= 1
        u = 0j+1
        for j in range(l1):
            for i in range(j, n, l2):
                i1 = i+l1
                t1 = u*nums[i1]
                nums[i1] = nums[i] - t1
                nums[i] += t1
            u *= c
        ci = math.sqrt((1.0 - c.real) / 2.0)  # Generate complex roots of unity
        if forward: ci=-ci                    # for forward transform
        cr = math.sqrt((1.0 + c.real) / 2.0)  # phi = -pi/2 -pi/4 -pi/8...
        c = cr + ci*1j                        # complex(cr,ci)
    # Scaling for forward transform
    if (scale and forward):
        for i in range(n):
            nums[i] /= n
    return nums

t0 = time.time()
ffft(fftinput)
t1 = time.time()
print(t1 - t0)
8.336997509002686

From the above computations, it can be seen that the natively-written Python FFT computed the million-point transformation in 8.3 seconds.

And finally, embedPy is used to access the C-backed NumPy library to perform the FFT, which will likely not use a radix-2 algorithm, but a mixed-radix implementation that allows for multi-threading (which the in-place Decimation-In-Time algorithm fundamentally cannot do).

p)import numpy as np;
p)import pandas as pd;
p)import time;
p)data = pd.read_csv('comparison.csv');
// Convert to a complex pair
p)fftinput = data.real.values + 1j * data.imag.values;
p)t0 = time.time();
p)np.abs(np.fft.fft (fftinput));
p)t1 = time.time();
p)print (t1 - t0)
0.04644131660461426
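For quick reference, a little arithmetic on the timings quoted above (q ≈ 1.02s, native Python ≈ 8.34s, NumPy via embedPy ≈ 0.046s) puts the three implementations in proportion; the call and output below are mine, not from the original paper.

q)8.337 1.020 0.0464 % 1.020    / runtimes relative to the q implementation
8.173529 1 0.0454902

So the native q FFT runs roughly 8 times faster than the pure-Python version, while the C-backed NumPy routine is roughly 22 times faster than the q implementation.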
// @kind function // @category public // @subcategory experimenal // @fileoverview *EXPERIMENTAL* send a request with a client-side timeout // @param t {int|long} timeout (seconds) // @param m {symbol} HTTP method/verb // @param u {symbol|string|#hsym} URL // @param hd {dict} dictionary of custom HTTP headers to use // @param p {string} payload/body (for POST requests) // @return {(dict;string)} HTTP response (headers;body) timeout:{[t;m;u;hd;p] ot:system"T";system"T ",string t; //store old timeout & set new r:@[0;(`.req.send;m;u;hd;p;VERBOSE);{x}]; //send request & trap error system"T ",string ot; //reset timeout :$[r~"stop";'"timeout";r]; //return or signal } \d . ================================================================================ FILE: reQ_req_url.q SIZE: 4,146 characters ================================================================================ \d .url // @kind function // @category private // @fileoverview Parse URL query; split on ?, urldecode query // @param x {string} URL containing query // @return {(string;dict)} (URL;parsed query) query:{[x]@["?"vs x;1;dec]} //split on ?, urldecode query // @kind function // @category private // @fileoverview parse a string/symbol/hsym URL into a URL dictionary // @param q {boolean} parse URL query to kdb dict // @param x {string|symbol|#hsym} URL containing query // @return {dict} URL dictionary parse0:{[q;x] if[x~hsym`$255#"a";'"hsym too long - consider using a string"]; //error if URL~`: .. too long x:sturl x; //ensure string URL p:x til pn:3+first ss[x;"://"]; //protocol uf:("@"in x)&first[ss[x;"@"]]<first ss[pn _ x;"/"]; //user flag - true if username present un:pn; //default to no user:pass u:-1_$[uf;(pn _ x) til (un:1+first ss[x;"@"])-pn;""]; //user:pass d:x til dn:count[x]^first ss[x:un _ x;"[/?]"]; //domain a:$[(dn=count x)|"?"=x[dn];"/",;] dn _ x; //absolute path (add leading slash if necessary) o:`protocol`auth`host`path!(p;u;d;a); //create URL object :$[q;@[o;`path`query;:;query o`path];o]; //split path into path & query if flag set, return } // @kind function // @category private // @fileoverview parse a string/symbol/hsym URL into a URL dictionary & parse query // @param x {string|symbol|#hsym} URL containing query // @return {dict} URL dictionary // @qlintsuppress RESERVED_NAME .url.parse:parse0[1b] //projection to parse query by default // @kind function // @category private // @fileoverview format URL object into string // @param x {dict} URL dictionary // @return {string} URL format:{[x] :raze[x`protocol`auth],$[count x`auth;"@";""], //protocol & if present auth (with @) x[`host],$[count x`path;x`path;"/"], //host & path $[99=type x`query;"?",enc x`query;""]; //if there's a query, encode & append } // @kind function // @category private // @fileoverview return URL as a string // @param x {string|symbol|#hsym} URL // @return {string} URL sturl:{(":"=first x)_x:$[-11=type x;string;]x} // @kind function // @category private // @fileoverview return URL as an hsym // @param x {string|symbol|#hsym} URL // @return {#hsym} URL hsurl:{`$":",sturl x} // @kind function // @category private // @fileoverview URI escaping for non-safe chars, RFC-3986 // @param x {string} URL // @return {string} URL hu:.h.hug .Q.an,"-.~" // @kind function // @category private // @fileoverview encode a KDB dictionary as a URL encoded string // @param d {dict} kdb dictionary to encode // @return {string} URL encoded string enc:{[d] k:key d;v:value d; //split dictionary into keys & values v:enlist each .url.hu each {$[10=type x;x;string 
x]}'[v]; //string any values that aren't stringed,escape any chars that need it k:enlist each $[all 10=type each k;k;string k]; //if keys are strings, string them :"&" sv "=" sv' k,'v; //return urlencoded form of dictionary } // @kind function // @category private // @fileoverview decode a URL encoded string to a KDB dictionary // @param x {string} URL encoded string // @return {dict} kdb dictionary to encode dec:{[x] :(!/)"S=&"0:.h.uh ssr[x;"+";" "]; //parse incoming request into dict, replace escaped chars } \d . ================================================================================ FILE: reQ_tests_test_b64.q SIZE: 231 characters ================================================================================ .utl.require"req/b64.q" .tst.desc["base64 enc/dec"]{ should["encode b64"]{ "dXNlcjpwYXNz" mustmatch .b64.enc"user:pass"; }; should["decode b64"]{ "user:pass" mustmatch .b64.dec"dXNlcjpwYXNz"; }; }; ================================================================================ FILE: reQ_tests_test_cookies.q SIZE: 488 characters ================================================================================ .utl.require"req/cookie.q" .tst.desc["Cookies"]{ before{ `basePath mock (` vs .tst.tstPath)[0]; `r mock get ` sv basePath,`mock`cookiejar; }; should["read cookie jar"]{ r mustmatch .cookie.readjar ` sv basePath,`cookiejar; }; should["write cookie jar"]{ .cookie.writejar[` sv basePath,`cookiejar2;r]; read0[` sv basePath,`cookiejar2] mustmatch read0[` sv basePath,`cookiejar]; hdel ` sv basePath,`cookiejar2; }; }; ================================================================================ FILE: reQ_tests_test_requests.q SIZE: 2,111 characters ================================================================================ .utl.require"req" // remove tracing header added by httpbin - messy to work with various types of return // also ignore Host, as this changes depending on which httpbin we're testing with .req.rmhd:{if[`headers in k:key x;x:k#((1#`headers)_x),(1#`headers)!enlist(`$"X-Amzn-Trace-Id";`Authorization;`Host;`Connection)_ x`headers];(1#`url) _ x}; // set fixed user agent for testing, so tests are not dependent on kdb version .req.def["User-Agent"]:"kdb+/reQ-testing"; // httpbin URL to use for tests TESTURL:"https://nghttp2.org/httpbin" .tst.desc["Requests"]{ before{ `basePath mock ` sv (` vs .tst.tstPath)[0],`json; `rd mock {.req.rmhd (1#`origin)_ .j.k raze read0` sv x,y}[basePath]; }; should["Perform basic auth"]{ `r mock rd`auth.json; r mustmatch .req.rmhd (1#`url) _ .req.g (8#TESTURL),"user:passwd@",(8_TESTURL),"/basic-auth/user/passwd"; }; should["Send custom headers"]{ `r mock rd`headers.json; r mustmatch .req.rmhd (1#`url) _ .req.get[TESTURL,"/headers";`custom`headers!("with custom";"values")]; }; should["Follow HTTP redirects"]{ `r mock rd`redirect.json; r mustmatch .req.rmhd `url`origin _ .req.g TESTURL,"/redirect/3"; }; should["Follow relative HTTP redirects"]{ `r mock rd`rel_redirect.json; r mustmatch .req.rmhd `url`origin _ .req.g TESTURL,"/relative-redirect/3"; }; should["Follow absolute HTTP redirects"]{ `r mock rd`abs_redirect.json; r mustmatch .req.rmhd `url`origin _ .req.g TESTURL,"/absolute-redirect/3"; }; should["Send POST request"]{ `r mock rd`post.json; r mustmatch .req.rmhd `url`origin _ .req.post[TESTURL,"/post";enlist["Content-Type"]!enlist .req.ty`json;.j.j (1#`text)!1#`hello]; }; should["Set cookie"]{ `r mock rd`setcookie.json; r mustmatch .req.rmhd (1#`url) _ .req.g TESTURL,"/cookies/set?abc=123&def=123"; }; should["Delete 
cookie"]{ `r mock rd`deletecookie.json; r mustmatch .req.rmhd (1#`url) _ .req.g TESTURL,"/cookies/delete?abc"; } }; ================================================================================ FILE: ws.q_examples_gdax.q SIZE: 5,644 characters ================================================================================ / depends on ws-client, qutil package .utl.require"ws-client" book:([] sym:`$();time:`timestamp$();bids:();bsizes:();asks:();asizes:()) //schema for book table trade:([] time:`timestamp$();sym:`$();price:`float$();bid:`float$();ask:`float$();side:`$();tid:`long$();size:`float$()) \d .gdax
// @kind function // @category optimizeModelsUtilitity // @desc Show true and predicted values from confusion matrix // @param confMatrix {dictionary} Confusion matrix // @return {dictionary} Confusion matrix with true and predicted values optimizeModels.i.confTab:{[confMatrix] keyMatrix:string key confMatrix; predVals:`$"pred_",/:keyMatrix; trueVals:`$"true_",/:keyMatrix; predVals!flip trueVals!flip value confMatrix } // @kind function // @category optimizeModelsUtilitity // @desc Save down confusionMatrix // @param modelDict {dictionary} Library and function of model // @param bestModel {<} Fitted best model // @param tts {dictionary} Feature and target data split into training // and testing set // @param scoreFunc {<} Scoring metric applied to evaluate the model // @param seed {int} Random seed to use // @param idx {int} Index of column that is being shuffled // return {float} Score returned from predicted values using shuffled data optimizeModels.i.predShuffle:{[modelDict;bestModel;tts;scoreFunc;seed;idx] tts[`xtest]:optimizeModels.i.shuffle[tts`xtest;idx]; preds:$[modelDict[`modelLib] in key models; [customModel:"." sv string modelDict`modelLib`modelFunc; predFunc:get".automl.models.",customModel,".predict"; predFunc[tts;bestModel] ]; bestModel[`:predict][tts`xtest]` ]; scoreFunc[preds;tts`ytest] } // @kind function // @category optimizeModelsUtility // @desc Shuffle column within the data // @param data {float[]} Data to shuffle // @param col {int} Column in data to shuffle // @return {float[]} The original data shuffled optimizeModels.i.shuffle:{[data;col] countData:count data; idx:neg[countData]?countData; $[98h~type data; data:data[col]idx; data[;col]:data[;col]idx ]; data } // @kind function // @category optimizeModelsUtility // @desc Create dictionary of impact of each column in ascending order // @param scores {float[]} Impact score of each column // @param countCols {int} Number of columns in the feature data // @param ordFunc {fn} Ordering of scores // @return {dictionary} Impact score of each column in ascending order optimizeModels.i.impact:{[scores;countCols;ordFunc] scores:$[any 0>scores;.ml.minMaxScaler.fitTransform;]scores; scores:$[ordFunc~desc;1-;]scores; keyDict:til countCols; asc keyDict!scores%max scores } // Updated cross validation functions necessary for the application of // hyperparameter search ordering correctly. // Only change is expected input to the t variable of the function, // previously this was a simple floating point values -1<x<1 which denotes // how the data is to be split for the train-test split. // Expected input is now at minimum t:enlist[`val]!enlist num, while for // testing on the holdout sets this should be include the scoring function // and ordering the model requires to find the best model // `val`scf`ord!(0.2;`.ml.mse;asc) for example // @kind function // @category optimizeModelsUtility // @desc Modified hyperparameter search with option to test final model // @param scoreFunc {fn} Scoring function // @param k {int} Number of folds // @param n {int} Number of repetitions // @param features {any[][]} Matrix of features // @param target {any[]} Vector of targets // @param dataFunc {fn} Function which takes data as input // @param hyperparams {dictionary} Dictionary of hyperparameters // @param testType {float} Size of the holdout set used in a fitted grid // search, where the best model is fit to the holdout set. If 0 the function // will return scores for each fold for the given hyperparameters. 
If // negative the data will be shuffled prior to designation of the holdout // set // @return {table|list} Either validation or testing results from // hyperparameter search with (full results;best set;testing score) hp.i.search:{[scoreFunc;k;n;features;target;dataFunc;hyperparams;testType] if[0=testType`val;:scoreFunc[k;n;features;target;dataFunc;hyperparams]]; dataShuffle:$[0>testType`val;xv.i.shuffle;til count@]target; i:(0,floor count[target]*1-abs testType`val)_dataShuffle; r:scoreFunc[k;n;features i 0;target i 0;dataFunc;hyperparams]; func:get testType`scf; res:$[type[func]in(100h;104h); dataFunc[pykwargs pr:first key testType[`ord]each func[;].'']; dataFunc[pykwargs pr:first key desc avg each r](features;target)@\:/:i ]; (r;pr;res) } // @kind data // @category optimizeModelsUtility // @desc All possible gs/rs functions // @type dictionary xvKeys:`kfSplit`kfShuff`kfStrat`tsRolls`tsChain`pcSplit`mcSplit // @kind function // @category optimizeModelsUtility // @desc Update gs functions with automl `hp.i.search` function // @type dictionary gs:xvKeys!{hp.i.search last value x}each .ml.gs xvKeys // @kind data // @category optimizeModelsUtility // @desc Update rs functions with automl `hp.i.search` function // @type dictionary rs:xvKeys!{hp.i.search last value x}each .ml.rs xvKeys ================================================================================ FILE: ml_automl_code_nodes_pathConstruct_funcs.q SIZE: 1,175 characters ================================================================================ // code/nodes/pathConstruct/funcs.q - Functions called in pathConstruct node // Copyright (c) 2021 Kx Systems Inc // // Definitions of the main callable functions used in the application of // .automl.pathConstruct \d .automl // @kind function // @category pathConstruct // @desc Create the folders that are required for the saving of the // config, models, images and reports // @param preProcParams {dictionary} Data generated during the preprocess stage // @return {dictionary} File path where paths/graphs are to be saved pathConstruct.constructPath:{[preProcParams] cfg:preProcParams`config; saveOpt:cfg`saveOption; if[saveOpt=0;:()!()]; pathName:-1_value[cfg]where key[cfg]like"*SavePath"; pathName:utils.ssrWindows each pathName; pathConstruct.createFile each pathName; } // @kind function // @category pathConstruct // @desc Create the folders that are required for the saving of the // config, models, images and reports // @param pathName {string} Name of paths that are to be created // @return {::} File paths are created pathConstruct.createFile:{[pathName] windowsChk:$[.z.o like"w*";" ";" -p "]; system"mkdir",windowsChk,pathName } ================================================================================ FILE: ml_automl_code_nodes_pathConstruct_init.q SIZE: 245 characters ================================================================================ // code/nodes/pathConstruct/init.q - Load pathConstruct node // Copyright (c) 2021 Kx Systems Inc // // Load code for pathConstruct node \d .automl loadfile`:code/nodes/pathConstruct/pathConstruct.q loadfile`:code/nodes/pathConstruct/funcs.q ================================================================================ FILE: ml_automl_code_nodes_pathConstruct_pathConstruct.q SIZE: 1,180 characters ================================================================================ // code/nodes/pathConstruct/pathConstruct.q - Path construction for saving // Copyright (c) 2021 Kx Systems Inc // // Construct path to where all 
graphs/reports are to be saved down. Also join // together all information collected during preprocessing, processing and // configuration creation in order to provide all information required for // the generation of report/meta/graph/model saving. \d .automl // @kind function // @category node // @desc Construct paths where all graphs/reports are to be saved. Also // consolidate all information together that was generated during the process // @param preProcParams {dictionary} Data generated during the preprocess stage // @param predictionStore {dictionary} Data generated during the prediction // stage // @return {dictionary} All data collected along the entire process along with // paths to where graphs/reports will be generated pathConstruct.node.function:{[preProcParams;predictionStore] pathConstruct.constructPath preProcParams; preProcParams,predictionStore } // Input information pathConstruct.node.inputs:`preprocParams`predictionStore!"!!" // Output information pathConstruct.node.outputs:"!" ================================================================================ FILE: ml_automl_code_nodes_predictParams_init.q SIZE: 202 characters ================================================================================ // code/nodes/predictParams/init.q - Load predictParams node // Copyright (c) 2021 Kx Systems Inc // // Load code for predictParams node \d .automl loadfile`:code/nodes/predictParams/predictParams.q ================================================================================ FILE: ml_automl_code_nodes_predictParams_predictParams.q SIZE: 1,336 characters ================================================================================ // code/nodes/predictParams/predictParams.q - Predict params // Copyright (c) 2021 Kx Systems Inc // // Collect all the parameters relevant for the generation of reports/graphs etc // in the prediction step such they can be consolidated into a single node // later in the workflow \d .automl // @kind function // @category node // @desc Collect all relevant parameters from previous prediction steps // to be consolidated for report/graph generation // @param bestModel {<} The best model fitted // @param hyperParmams {dictionary} Hyperparameters used for model (if any) // @param modelName {string} Name of best model // @param testScore {float} Score of model on testing data // @param modelMetaData {dictionary} Meta data from finding best model // @return {dictionary} Consolidated parameters to be used to generate // reports/graphs predictParams.node.function:{[bestModel;hyperParams;modelName;testScore;analyzeModel;modelMetaData] params:`bestModel`hyperParams`modelName`testScore`analyzeModel`modelMetaData; params!(bestModel;hyperParams;modelName;testScore;analyzeModel;modelMetaData) } // Input information predictParams.i.k:`bestModel`hyperParams`modelName`testScore, `analyzeModel`modelMetaData; predictParams.node.inputs:predictParams.i.k!"<!sf!!" // Output information predictParams.node.outputs:"!" 
================================================================================ FILE: ml_automl_code_nodes_preprocParams_init.q SIZE: 202 characters ================================================================================ // code/nodes/preprocParams/init.q - Load preprocParams node // Copyright (c) 2021 Kx Systems Inc // // Load code for preprocParams node \d .automl loadfile`:code/nodes/preprocParams/preprocParams.q ================================================================================ FILE: ml_automl_code_nodes_preprocParams_preprocParams.q SIZE: 1,629 characters ================================================================================ // code/nodes/preprocParams/preprocParams.q - Preproc params node // Copyright (c) 2021 Kx Systems Inc // // Collect all the parameters relevant for the preprocessing phase \d .automl // @kind function // @category node // @desc Collect all the parameters relevant for the generation of // reports/graphs etc in the preprocessing phase such they can be // consolidated into a single node later in the workflow // @param config {dictionary} Location and method by which to retrieve the data // @param descrip {table} Symbol encoding, feature data and description // @param cTime {time} Time taken for feature creation // @param sigFeats {symbol[]} Significant features // @param symEncode {dictionary} Columns to symbol encode and their required // encoding // @param symMap {dictionary} Mapping of symbol encoded target data // @param featModel {<} NLP feature creation model used (if required) // @param tts {dictionary} Feature and target data split into training/testing // sets // @return {dictionary} Consolidated parameters to be used to generate // reports/graphs preprocParams.node.function:{[config;descrip;cTime;sigFeats;symEncode;symMap;featModel;tts] params:`config`dataDescription`creationTime`sigFeats`symEncode`symMap, `featModel`ttsObject; params!(config;descrip;cTime;sigFeats;symEncode;symMap;featModel;tts) } // Input information preprocParams.i.k :`config`dataDescription`creationTime`sigFeats`symEncode, `symMap`featModel`ttsObject; preprocParams.i.t:"!+tSS!<!"; preprocParams.node.inputs:preprocParams.i.k!preprocParams.i.t; // Output information preprocParams.node.outputs:"!" 
================================================================================ FILE: ml_automl_code_nodes_runModels_funcs.q SIZE: 6,300 characters ================================================================================ // code/nodes/runModels/funcs.q - Functions called in runModels node // Copyright (c) 2021 Kx Systems Inc // // Definitions of the main callable functions used in the application of // .automl.runModels \d .automl // @kind function // @category runModels // @desc Extraction of appropriately valued dictionary from a JSON file // @param scoreFunc {symbol} Function used to score models run // @return {fn} Order function retrieved from JSON file for specific scoring // function runModels.jsonParse:{[scoreFunc] jsonPath:hsym`$path,"/code/customization/scoring/scoringFunctions.json"; funcs:.j.k raze read0 jsonPath; get first value funcs scoreFunc } // @kind function // @category runModels // @desc Set value of random seed for reproducibility // @param config {dictionary} Information relating to the current run of AutoML // @return {::} Value of seed is set runModels.setSeed:{[config] system"S ",string config`seed; } // @kind function // @category runModels // @desc Apply TTS to keep holdout for feature impact plot and testing // of best vanilla model // @param config {dictionary} Information relating to the current run of AutoML // @param tts {dictionary} Feature and target data split into training/testing // sets // @return {dictionary} Training and holdout split of data runModels.holdoutSplit:{[config;tts] ttsFunc:utils.qpyFuncSearch config`trainTestSplit; ttsFunc[tts`xtrain;tts`ytrain;config`holdoutSize] }
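As a brief, hypothetical illustration of the first two helpers in isolation (the config values and the scoring-function key below are invented for the example, and assume the scoringFunctions.json file shipped with AutoML is present):

config:enlist[`seed]!enlist 42
.automl.runModels.setSeed config             / equivalent to system"S 42"
ordFunc:.automl.runModels.jsonParse`.ml.mse  / ordering (asc/desc) read from scoringFunctions.json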
/ assign skewed side and src lists to determine probabilities of appearing sideweight:cnt?{x,cnt-x}'[1+til cnt-1] sidemap:s!skewitems[;side] each sideweight srcweight:1+til count src srcmap:s!skewitems[srcweight;] each cnt#enlist src / ========================================================================================= / generate a batch of prices / qx index, qb/qa margins, qp price, qn position batch:{ d:gen x; qx::x?weightedsyms; qb::rnd x?1.0; qa::rnd x?1.0; n:where each qx=/:til cnt; s:p*prds each d n; qp::x#0.0; (qp raze n):rnd raze s; p::last each s; qn::0} / gen feed for ticker plant len:10000 batch len maxn:15 / max trades per tick qpt:5 / avg quotes per trade / ========================================================================================= t:{ if[not (qn+x)<count qx;batch len]; i:qx n:qn+til x;qn+:x; (s i;qp n;`int$volmap[s i]*x?99;1=x?20;x?c;e i;raze 1?'sidemap[s i])} q:{ if[not (qn+x)<count qx;batch len]; i:qx n:qn+til x;p:qp n;qn+:x; (s i;p-qb n;p+qa n;`long$bidmap[s i]*vol x;`long$askmap[s i]*vol x;x?m;e i;raze 1?'srcmap[s i])} feed:{h$[rand 2; (".u.upd";`trade;t 1+rand maxn); (".u.upd";`quote;q 1+rand qpt*maxn)];} feedm:{h$[rand 2; (".u.upd";`trade;(enlist a#x),t a:1+rand maxn); (".u.upd";`quote;(enlist a#x),q a:1+rand qpt*maxn)];} init:{ o:"p"$9e5*floor (.z.P-3600000)%9e5; d:.z.P-o; len:floor d%113; feedm each `timestamp$o+asc len?d;} /- use the discovery service to find the tickerplant to publish data to .servers.startupdepcycles[`segmentedtickerplant;10;0W]; h:.servers.gethandlebytype[`segmentedtickerplant;`any] / init 0 .timer.repeat[.proc.cp[];0Wp;0D00:00:00.200;(`feed;`);"Publish Feed"]; ================================================================================ FILE: TorQ-Finance-Starter-Pack_database.q SIZE: 878 characters ================================================================================ quote:([]time:`timestamp$(); sym:`g#`symbol$(); bid:`float$(); ask:`float$(); bsize:`long$(); asize:`long$(); mode:`char$(); ex:`char$(); src:`symbol$()) trade:([]time:`timestamp$(); sym:`g#`symbol$(); price:`float$(); size:`int$(); stop:`boolean$(); cond:`char$(); ex:`char$();side:`symbol$()) quote_iex:([]time:`timestamp$(); sym:`g#`symbol$(); bid:`float$(); ask:`float$(); bsize:`long$(); asize:`long$(); mode:`char$(); ex:`char$(); srctime:`timestamp$()) trade_iex:([]time:`timestamp$(); sym:`g#`symbol$(); price:`float$(); size:`int$(); stop:`boolean$(); cond:`char$(); ex:`char$(); srctime:`timestamp$()) packets:([] time:`timestamp$(); sym:`symbol$(); src:`symbol$(); dest:`symbol$(); srcport:`long$(); destport:`long$(); seq:`long$(); ack:`long$(); win:`long$(); tsval:`long$(); tsecr:`long$(); flags:(); protocol:`symbol$(); length:`long$(); len:`long$(); data:()) ================================================================================ FILE: TorQ_code_common_analyticslib.q SIZE: 5,784 characters ================================================================================ \d .al / - General argument checking funciton checkargs:{[args;k;t] $[not 99h~type args;'`$"Input parameter must be a dictionary with keys:\n\t-",sv["\n\t-";string k]; / - check for dictionary / - check for keys not all k in key[args];'`$"Dictionary keys are incorrect. 
Keys should be:\n\t-", sv["\n\t-";string k], "\nYou have input keys:\n\t-", sv["\n\t-";string key args]; / - check for types / any not in'[string .Q.ty'[args k];t] any not .Q.ty'[args k] in t;'`$("One or more of your inputs are of an invalid type."); `table in key args; / - check if columns are in table provided (if[any not args[`keycols] in cols args`table;'`$("The columns (", raze string args[`keycols],") you are attempting to use does not exist in the table provided.")]); :()] } / - Forward fill function ffill:{[args] / - Checks type of each column and fills accordingly forwardfill:{ $[0h=type x; x maxs (til count x)*(0<count each x); fills x] }; / - If the input is just a table if[.Q.qt args; / - update fills col1,fills col2,fills col3... from table :(![args;();0b;cols[args]!(`forwardfill),/:cols args])]; / - Check which columns are being filled if[`~args[`keycols];args[`keycols]:cols args`table]; / - Call checkargs function checkargs[args;(`table`by`keycols);(" sS")]; $[`~args`by; / - Functional update to forward fill / - Equivalent to: / - update fills keycols1,fills keycol2,fills keycol3 from table ![args`table;();0b;((),args`keycols)!(`forwardfill),/:((),args`keycols)]; / - Funciontal update to forward fill by specific column(s) / - Equivalent to: / - update fills keycols1,fills keycols2,fills keycols3 by `sym from table ![args`table;();((),args`by)!((),args`by);((),args`keycols)!(`forwardfill),/:((),args`keycols)] ] }; / - General pivot function pivot:{[args] checkargs[args;`table`by`piv`var;(" sS")]; / - Check for optional args f and g / - Check if f optional exists, if empty set as default else leave as input if[not `f in key args; args[`f]:{[v;P] `$"_" sv' string (v,()) cross P}; ]; / - Check if g optional exists, if empty set as default else leave as input if[not `g in key args; args[`g]:{[k;P;c] k,asc c}; ]; / - Call check function on input (args`var):(),args[`var]; G:group flip (args[`by])!((args[`table]):0!.Q.v (args[`table]))(args`by),:(); F:group flip (args[`piv])!(args[`table]) (args`piv),:(); count[args`by]!(args`g)[args`by;P;C]xcols 0!key[G]!flip(C:(args`f)[args`var]P:flip value flip key F)!raze {[i;j;k;x;y] a:count[x]#x 0N; a[y]:x y; b:count[x]#0b; b[y]:1b; c:a i; c[k]:first'[a[j]@'where'[b j]]; c}[I[;0];I J;J:where 1<>count'[I:value G]]/:\:[(args`table) (args`var);value F] }; /- intervals function intervals:{[args] / Call general checkargs function checkargs[args;`start`end`interval`round;("MmuUjJhHNnVvDdPpB")]; / Error checks specific to intervals if[args[`start]>args[`end];'`$"start time should be less than end time"]; if[not (type args[`start`end]) in `short$5,6,7,(12+til 8) except 15; '`$"start and end must be of same type and must be one of timestamp, month, date, timespan, minute, second, time"]; if[(args[`end] - args[`start]) < args[`interval];'`$"Difference between start and end points smaller than interval specified, please use a smaller interval"] / Check optional arguments and assign defaults where appropriate $[`round in key args; if[not -1 = type args[`round];'`$"round should be specified as a boolean value"]; args:args,(enlist `round)!enlist 1b]; / need the `long on the multiplying interval because of timestamp issues $[args[`round]; x:(neg type args[`start])$(`long$args[`interval])*`long$(args[`start] + args[`interval]*til 1+ `long$(args[`end]-args[`start])%args[`interval])%args[`interval]; / this is the same as the above but we don't divide by interval and convert to long again so rounding doesn't take place x:args[`start] + args[`interval]*til 
1+`long$(args[`end]-args[`start])%args[`interval]]; $[args[`end] < last x;x:-1 _x;x] }; /rack function rack:{[args] / Call general check args function checkargs[args;`table`keycols;" sSB"]; / Check Optional arguments and assign defaults where appropriate / Set a variable 'timeseries' to an empty list timeseries:enlist (); if[.Q.qt args[`table]; args[`table]:0!args[`table]]; / if base is given in the function call make sure that it's a table or else assign it to a null list $[`base in key args; if[not .Q.qt args[`base];'`$"if base is specified it must be as a table"]; args[`base]:enlist () ]; / if arguments for a timeseries are provided create intervals column if[`timeseries in key args; timeseries:([]interval:intervals[args[`timeseries]])]; / if full expansion isn't provided, default it to 0b $[`fullexpansion in key args;if[not -1 = type args[`fullexpansion];'`$"fullexpansion must be provided as a boolean value"]; args[`fullexpansion]:0b]; if[1=count args[`keycols]; args[`keycols]:enlist args[`keycols]]; / This is where actual racking is done $[args[`fullexpansion]; / If fullexpansion is true we cross each column of the table with the others. racktable:args[`base] cross ((cross/){distinct each (enlist each cols[x])#\:x}args[`keycols]#args[`table]) cross timeseries; /if full expansion isn't true, just cross rhe required key columns first with a base then with the timeseries racktable:args[`base] cross ((args[`keycols]#args[`table]) cross timeseries)] } \d . ================================================================================ FILE: TorQ_code_common_api.q SIZE: 4,908 characters ================================================================================ // some functions to help build up an api and help the programmer // allows searching of the q memory space for variables/functions based on pattern matching, returns any associated api entries // could be used in conjunction with help.q \d .api // table to hold some extra descriptive info about each function/param etc detail:([name:`u#`symbol$()] public:`boolean$(); descrip:();params:();return:()) // map from variable type name to full name typemap:"fvab"!`function`variable`table`view // Add a description to the api add:{[name;public;descrip;params;return] `.api.detail upsert (name;public;descrip;params;return);} // Get the full var names for a given namespace varnames:{[ns;vartype;shortpath] vars:system vartype," ",string ns; // create the full path to the variable. If it's in . or .q it doesn't have a suffix `$$[shortpath and ns in `.`.q;"";(string ns),"."],/:string vars} // list out a given namespace for a specific variable type list:{[ns;vartype] api:([]name:varnames[ns;vartype;1b]; vartype:typemap vartype; namespace:ns; public:$[ns in `.`.q;1b;0b]) lj .api.detail; // re-sort the api table according the order of the description table api iasc (exec name from .api.detail)?api`name} // get all the namespaces in . form allns:{`$(enlist enlist "."),".",/:string key `} // Dump out everything across all name spaces fullapi:{ res:`name xkey raze list .' allns[] cross key typemap; // add in all the q binary functions res,([name:.Q.res] vartype:(count .Q.res)#`primitive; namespace:`;public:1b) lj .api.detail}
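A brief, hypothetical usage sketch of the .api helpers above, assuming api.q and (for the example entry) the .al analytics library are both loaded; the description strings are invented:

.api.add[`.al.ffill;1b;"forward-fill columns of a table";"args - table, or dictionary of arguments";"filled table"]
.api.list[`.al;"f"]   / functions in the .al namespace, joined to any registered detail
.api.fullapi[]        / every variable across all namespaces, plus the q primitives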
-1"remove punctuation from text"; t:update .ut.sr[.ut.pw] peach text from t -1"tokenize and remove stop words"; t:update (except[;stopwords.xpo6] " " vs ) peach lower text from t -1 "use porter stemmer to stem each word"; t:update (.porter.stem') peach text from t -1"partitioning text between training and test"; d:.ut.part[`train`test!3 1;0N?] t c:d . `train`text y:d . `train`class -1"generating vocabulary and term document matrix"; X:0f^.ml.tfidf[.ml.lntf;.ml.idf] .ml.tdm[c] v:asc distinct raze c -1 "fitting multinomial naive bayes classifier"; pT:.ml.fnb[.ml.wmultimle[1];(::);y;flip X] -1"confirming accuracy"; ct:d . `test`text yt:d . `test`class Xt:0f^.ml.tfidf[.ml.lntf;.ml.idf] .ml.tdm[ct] v avg yt=p:.ml.pnb[1b;.ml.multill;pT] flip Xt show .ut.rnd[1e-4] select[>PE] from ([]word:v)!flip last pT ================================================================================ FILE: funq_uef.q SIZE: 454 characters ================================================================================ uef.s:("s1.txt";"s2.txt";"s3.txt";"s4.txt") uef.a:("a1.txt";"a2.txt";"a3.txt") uef.d:("dim032.txt";"dim064.txt";"dim128.txt") uef.d,:("dim256.txt";"dim512.txt";"dim1024.txt"); uef.b:"http://www.cs.uef.fi/sipu/datasets/" -1"[down]loading uef data sets"; .ut.download[uef.b;;"";""] uef.s,uef.a,uef.d; uef,:`s1`s2`s3`s4!("JJ";10 10) 0:/: `$uef.s uef,:`a1`a2`a3!("JJ";8 8) 0:/: `$uef.a uef,:(!) . flip {(`$"d",string x;(x#"J";x#6) 0: y)}'[16*\6#2;`$uef.d] ================================================================================ FILE: funq_ut.q SIZE: 8,880 characters ================================================================================ \d .ut / assert minimum required version of q / throw verbose exception if x <> y assert:{if[not x~y;'`$"expecting '",(-3!x),"' but found '",(-3!y),"'"]} assert[2016.05.12] 2016.05.12&.z.k / supports random permutation 0N? assert[3.4] 3.4&.z.K / supports reshape for an arbitrary list of dimensions / data loading utilities / load (f)ile if it exists and return success boolean loadf:{[f]if[()~key f;:0b];system "l ",1_string f;1b} unzip:$["w"=first string .z.o;"7z.exe x -y -aos";"unzip -n"] gunzip:$["w"=first string .z.o;"7z.exe x -y -aos";"gunzip -f -N -v"] untar:"tar -xzvf" / tar is now in windows 10 system32 / (b)ase url, (f)ile, (e)xtension, (u)ncompress (f)unction download:{[b;f;e;uf] if[0h=type f;:.z.s[b;;e;uf] each f]; if[l~key l:`$":",f;:l]; / local file exists if[()~key z:`$":",f,e;z 1: .Q.hg`$":",0N!b,f,e]; / download if[count uf;system 0N!uf," ",f,e]; / uncompress l} / load http://yann.lecun.com/exdb/mnist/ dataset mnist:{ d:first (1#4;1#"i") 1: 4_(h:4*1+x 3)#x; x:d#$[0>i:x[2]-0x0b;::;first ((2 4 4 8;"hief")@\:i,()) 1:] h _x; x} / load http://etlcdb.db.aist.go.jp/etlcdb/data/ETL9B dataset etl9b:{(2 1 1 4 504, 64#1;"hxxs*",64#" ") 1: x} / general utilities / generate a sequence of (s)-sized steps between (b)egin and (e)nd sseq:{[s;b;e]b+s*til 1+floor 1e-14+(e-b)%s} / generate a sequence of (n) steps between (b)egin and (e)nd nseq:{[n;b;e]b+til[1+n]*(e-b)%n} / round y to nearest x rnd:{x*"j"$y%x} / allocate x into n bins nbin:{[n;x](n-1)&floor n*.5^x%max x-:min x} / table x cross y tcross:{value flip ([]x) cross ([]y)} / return memory (used;allocated;max) / returned in units specified by x (0:B;1:KB;2:MB;3:GB;...) mem:{(3#system"w")%x (1024*)/ 1} / given a dictionary mirroring the group operator return value, reconstruct / the original ungrouped list. 
generate the dictionary key if none provided ugrp:{ if[not type x;x:til[count x]!x]; x:@[sum[count each x]#k;value x;:;k:key x]; x} / append a total row and (c)olumn to (t)able totals:{[c;t] t[key[t]0N]:sum value t; t:t,'flip (1#c)!enlist sum flip value t; t} / surround a (s)tring or list of stings with a box of (c)haracters box:{[c;s] if[type s;s:enlist s]; m:max count each s; h:enlist (m+2*1+count c)#c; s:(c," "),/:(m$/:s),\:(" ",c); s:h,s,h; s} / use (w)eight vector or dictionary to partition (x). (s)ampling (f)unction: / til = no shuffle, 0N? = shuffle, list or table = stratify part:{[w;sf;x] if[99h=type w;:key[w]!.z.s[value w;sf;x]]; if[99h<type sf;:x (floor sums n*prev[0f;w%sum w]) _ sf n:count x]; x@:raze each flip value .z.s[w;0N?] each group sf; / stratify x} / one-hot encode vector, (symbol columns of) table or (non-key symbol / columns of) keyed table x. onehot:{ if[98h>t:type x;:u!x=/:u:distinct x]; / vector if[99h=t;:key[x]!.z.s value x]; / keyed table D:.z.s each x c:where 11h=type each flip x; / list of dictionaries D:string[c] {(`$(x,"_"),/:string key y)!value y}' D; / rename uniquely x:c _ x,' flip raze D; / append to table x} / Heckbert's axis label algorithm / use Heckbert's values to (r)ou(nd) or floor (x) to the nearest nice number nicenum:{[rnd;x] s:`s#$[rnd;0 1.5 3 7;0f,1e-15+1 2 5f]!1 2 5 10f; x:f * s x%f:10 xexp floor 10 xlog x; x} / given suggested (n)umber of labels and the (m)i(n) and (m)a(x) values, use / Heckbert's algorithm to generate a series of nice numbers heckbert:{[n;mn;mx] r:nicenum[0b] mx-mn; / range of values s:nicenum[1b] r%n-1; / step size mn:s*floor mn%s; / new min mx:s*ceiling mx%s; / new max l:sseq[s;mn;mx]; / labels l} / plotting utilities / cut m x n matrix X into (x;y;z) where x and y are the indices for X / and z is the value stored in X[x;y] - result used to plot heatmaps hmap:{[X]@[;0;`s#]tcross[til count X;reverse til count X 0],enlist raze X} / using (a)ggregation (f)unction, plot (X) using (c)haracters limited to / (w)idth and (h)eight. X can be x, (x;y), or (x;y;z) plot:{[w;h;c;af;X] if[type X;X:enlist X]; / promote vector to matrix if[1=count X;X:(til count X 0;X 0)]; / turn ,x into (x;y) if[2=count X;X,:count[X 0]#1]; / turn (x;y) into (x;y;z) if[not `s=attr X 0;c:1_c]; / remove space unless heatmap l:heckbert[h div 2].(min;max)@\:X 1; / generate labels x:-1_nseq[w] . (min;max)@\:X 0; / compute x axis y:-1_nseq[h] . 
(first;last)@\:l; / compute y axis Z:(y;x) bin' "f"$X 1 0; / allocate (x;y) to (w;h) bins Z:af each X[2]group flip Z; / aggregating overlapping z Z:c nbin[count c;0f^Z]; / map values to characters p:./[(h;w)#" ";key Z;:;value Z]; / plot points k:@[count[y]#0n;0|y bin l;:;l]; / generate key p:reverse k!p; / generate plot p} c10:" .-:=+x#%@" / 10 characters c16:" .-:=+*xoXO#$&%@" / 16 characters c68:" .'`^,:;Il!i><~+_-?][}{1)(|/tfjrxn" / 68 characters c68,:"uvczXYUJCLQ0OZmwqpdbkhao*#MW&8%B@$" plt:plot[19;10;c10;avg] / default plot function / generate unicode sparkline (with nulls rendered as spaces) spark:{ s:("c"$226 150,/:129+til 8) nbin[8] x; / map to 8 unicode characters if[n:count w:where null x;s[w]:(n;3)#"c"$226 128 136]; / replace null values s:raze s; s} / image manipulation utilities / remove gamma compression gexpand:{?[x>0.0405;((.055+x)%1.055) xexp 2.4;x%12.92]} / add gamma compression gcompress:{?[x>.0031308;-.055+1.055*x xexp 1%2.4;x*12.92]} / convert rgb to grayscale grayscale:.2126 .7152 .0722 wsum / create netpbm bitmap using ascii (or (b)inary) characters for matrix x pbm:{[b;X] s:($[b;"P4";"P1"];-3!count'[(X;X 0)]); s,:$[b;enlist"c"$raze((0b sv 8#)each 8 cut raze::)each flip X;" "0:"b"$X]; s} / create netpbm graymap using ascii (or (b)inary) characters for matrix x pgm:{[b;mx;X] if[b;if[255<mx|max (max') X;'`limit]]; / binary version has 255 max s:($[b;"P5";"P2"];-3!count'[(X;X 0)];string mx); s,:$[b;enlist "c"$raze flip X;" "0:"h"$X]; s} / create netpbm pixmap using ascii (or (b)inary) characters for matrix x ppm:{[b;mx;X] if[b;if[255<mx|max (max') (max'') X;'`limit]]; / binary version has 255 max s:($[b;"P6";"P3"];-3!count'[(X;X 0)];string mx); s,:$[b;enlist "c"$2 raze/flip X;" "0:raze flip each "h"$X]; s} / text utilities
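As a brief, hypothetical illustration of the plotting helpers above (the data is arbitrary):

x:.ut.sseq[0.01;0;2*acos -1]   / 0 to 2*pi in steps of 0.01
.ut.plt sin x                  / ASCII plot of one cycle of sine
.ut.spark 10?100               / unicode sparkline of ten random values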
lj , ljf ¶ Left join x lj y lj [x;y] x ljf y ljf[x;y] Where x is a table. Since 4.1t 2023.08.04 ifx is the name of a table, it is updated in place.y is- a keyed table whose key column/s are columns of x , returnsx andy joined on the key columns ofy - or the general empty list () , returnsx - a keyed table whose key column/s are columns of For each record in x , the result has one record with the columns of y joined to columns of y : - if there is a matching record in y , it is joined to thex record; common columns are replaced fromy . - if there is no matching record in y , common columns are left unchanged, and new columns are null q)show x:([]a:1 2 3;b:`I`J`K;c:10 20 30) a b c ------ 1 I 10 2 J 20 3 K 30 q)show y:([a:1 3;b:`I`K]c:1 2;d:10 20) a b| c d ---| ---- 1 I| 1 10 3 K| 2 20 q)x lj y a b c d --------- 1 I 1 10 2 J 20 3 K 2 20 The y columns joined to x are given by: q)y[select a,b from x] c d ---- 1 10 2 20 lj is a multithreaded primitive. Changes in V4.0¶ lj checks that y is a keyed table. (Since V4.0 2020.03.17.) q)show x:([]a:1 2 3;b:10 20 30) a b ---- 1 10 2 20 3 30 q)show y:([]a:1 3;b:100 300) a b ----- 1 100 3 300 q)show r:([]a:1 2 3;b:100 20 300) a b ----- 1 100 2 20 3 300 q)(1!r)~(1!x)lj 1!y 1b q)r~x lj 1!y 1b q)x lj y 'type [0] x lj y ^ Changes in V3.0 Since V3.0, the lj operator is a cover for ,\: (Join Each Left) that allows the left argument to be a keyed table. ,\: was introduced in V2.7 2011.01.24. Prior to V3.0, lj had similar behavior, with one difference - when there are nulls in the right argument, lj in V3.0 uses the right-argument null, while the earlier version left the corresponding value in the left argument unchanged: q)show x:([]a:1 2;b:`x`y;c:10 20) a b c ------ 1 x 10 2 y 20 q)show y:([a:1 2]b:``z;c:1 0N) a| b c -| --- 1| 1 2| z q)x lj y / kdb+ 3.0 a b c ----- 1 1 2 z q)x lj y / kdb+ 2.8 a b c ------ 1 x 1 2 z 20 Since 2014.05.03, the earlier version is available in all V3.x versions as ljf . Joins Q for Mortals §9.9.2 Ad Hoc Left Join load , rload ¶ Load binary data from a file or directory load ¶ Load binary data from a file load x load[x] Where x is - a symbol atom or vector matching the name/s of datafile/s (with no extension) in the current directory, reads the datafile/s and assigns the value/s to global variable/s of the same name, which it returns - a filesymbol atom or vector for datafile/s (with no extension), reads the datafile/s and assigns the value/s to global variable/s of the same name, which it returns - a filesymbol for a directory, creates a global dictionary of the same name and within that dictionary recurses on any datafiles the directory contains Signals a type error if the file is not a kdb+ data file There are no text formats corresponding to save . Instead, use File Text. q)t:([]x: 1 2 3; y: 10 20 30) q)save`t / save to a binary file (same as `:t set t) `:t q)delete t from `. / delete t `. q)t / not found 't q)load`t / load from a binary file (same as t:get `:t) `t q)t x y ---- 1 10 2 20 3 30 The following example uses the tables created using the script sp.q q)\l sp.q q)\mkdir -p cb q)`:cb/p set p `:cb/p q)`:cb/s set s `:cb/s q)`:cb/sp set sp `:cb/sp q)load `cb `cb q)key cb `p`s`sp q)cb `s s | name status city --| ------------------- s1| smith 20 london s2| jones 10 paris s3| blake 30 paris s4| clark 20 london s5| adams 30 athens Operating systems may create hidden files, such as .DS_Store , that block load . 
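A quick, illustrative check for such hidden entries before calling load, using the cb directory from the example above (with no hidden files present, the second expression returns an empty list):

q)key `:cb
`p`s`sp
q)(key `:cb) where (string key `:cb) like ".*"   / hidden entries that would block load
`symbol$()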
rload ¶ Load a splayed table from a directory rload x rload[x] Where x is the table name as a symbol, the table is read from a directory of the same name. rload is the converse of rsave . The usual, and more general, way of doing this is to use get , which allows a table to be defined with a different name than the source directory. The following example uses the table sp created using the script sp.q q)\l sp.q q)rsave `sp / save splayed table `:sp/ q)delete sp from `. `. q)sp 'sp q)rload `sp / load splayed table `sp q)3#sp s p qty --------- s1 p1 300 s1 p2 200 s1 p3 400 q)sp:get `:sp/ / equivalent to rload `sp save , rsave .Q.dsftg (load process save), .Q.fps (pipe streaming), .Q.fs (file streaming), .Q.fsn (file streaming with chunks), .Q.v (get splayed table) File system Q for Mortals §11.2 Save and Load on Tables log , xlog ¶ Logarithms and natural logarithms log ¶ Natural logarithm log x log[x] Where x is numeric and - null, returns null - 0, returns -0w - a datetime, returns x - otherwise, the natural logarithm of x q)log 1 0f q)log 0.5 -0.6931472 q)log exp 42 42f q)log -2 0n 0 0.1 1 42 0n 0n -0w -2.302585 0 3.73767 log is a multithreaded primitive. Implicit iteration¶ log is an atomic function. It applies to dictionaries and tables q)log(2;3 4) 0.6931472 1.098612 1.386294 q)k:`k xkey update k:`abc`def`ghi from t:flip d:`a`b!(10 -21 3;4 5 -6) q)log d a| 2.302585 1.098612 b| 1.386294 1.609438 q)log t a b ----------------- 2.302585 1.386294 1.609438 1.098612 q)log k k | a b ---| ----------------- abc| 2.302585 1.386294 def| 1.609438 ghi| 1.098612 Domain and range¶ domain b g x h i j e f c s p m d z n u v t range f . f f f f f f f . f f f z f f f f Range: fz xlog ¶ Logarithm x xlog y xlog[x;y] Returns the base-xf logarithm of yf , where xf and yf are x and y cast to floats, i.e. "f"$(x;y) . Where yf is negative or zero, the result is null and negative infinity respectively. q)2 xlog 8 3f q)2 xlog 0.125 -3f q)1.5 xlog 0 0.125 1 3 0n -0w -5.128534 0 2.709511 0n q)`float$"AC" 65 67f q)65 xlog 67 1.00726 q)"A"xlog"C" 1.00726 xlog is a multithreaded primitive. Implicit iteration¶ xlog is an atomic function. It applies to dictionaries and tables q)(2;3 4)xlog(4;5 6) 2f 1.464974 1.292481 q)10 xlog d a| 1 0.4771213 b| 0.60206 0.69897 q)10 xlog t a b ----------------- 1 0.60206 0.69897 0.4771213 q)10 xlog k k | a b ---| ----------------- abc| 1 0.60206 def| 0.69897 ghi| 0.4771213 xlog and xexp ¶ xlog is the inverse of xexp , i.e. y~x xexp x xlog y . q)2 xexp 2 xlog -1 0 0.125 1 42 0n 0 0.125 1 42 Domain and range¶ xlog | b g x h i j e f c s p m d z n u v t ---- | ----------------------------------- b | f . f f f f f f f . f f f . f f f f g | . . . . . . . . . . . . . . . . . . x | f . f f f f f f f . f f f . f f f f h | f . f f f f f f f . f f f . f f f f i | f . f f f f f f f . f f f . f f f f j | f . f f f f f f f . f f f . f f f f e | f . f f f f f f f . f f f . f f f f f | f . f f f f f f f . f f f . f f f f c | f . f f f f f f f . f f f . f f f f s p | f . f f f f f f f . f f f . f f f f m | f . f f f f f f f . f f f . f f f f d | f . f f f f f f f . f f f . f f f f z | . . . . . . . . . . . . . . . . . . n | f . f f f f f f f . f f f . f f f f u | f . f f f f f f f . f f f . f f f f v | f . f f f f f f f . f f f . f f f f t | f . f f f f f f f . f f f . f f f f Range: f lower , upper ¶ Shift case lower x lower[x] upper x upper[x] Where x is a character or symbol atom or vector, returns it with any bicameral characters in the lower/upper case. 
q)lower"IBM" "ibm" q)lower`IBM `ibm q)upper"ibm" "IBM" q)upper`ibm`msft `IBM`MSFT domain: b g x h i j e f c s p m d z n u v t range: . . . . . . . . c s . . . . . . . . Implicit iteration¶ lower and upper are atomic functions. q)upper(`The;(`quick`brown;(`fox;`jumps`over));`a;`lazy`dog) `THE (`QUICK`BROWN;(`FOX;`JUMPS`OVER)) `A `LAZY`DOG lsq ¶ Least squares, matrix divide x lsq y lsq[x;y] Where: x andy are float matrixes with the same number of columns- the number of rows of y do not exceed the number of columns - the rows of y are linearly independent returns the least-squares solution of x = (x lsq y) mmu y . That is, if d:x - (x lsq y) mmu y then sum d*d is minimized. If y is a square matrix, d is the zero matrix, up to rounding errors. q)a:1f+3 4#til 12 q)b:4 4#2 7 -2 5 5 3 6 1 -2 5 2 7 5 0 3 4f q)a lsq b -0.1233333 0.16 0.4766667 0.28 0.07666667 0.6933333 0.6766667 0.5466667 0.2766667 1.226667 0.8766667 0.8133333 q)a - (a lsq b) mmu b -4.440892e-16 2.220446e-16 0 0 0 8.881784e-16 -8.881784e-16 8.881784e-16 0 0 0 1.776357e-15 q)a ~ (a lsq b) mmu b / tolerant match 1b q)b:3 4#2 7 -2 5 5 3 6 1 -2 5 2 7f q)a lsq b -0.1055556 0.3333333 0.4944444 0.1113757 1.031746 0.7113757 0.3283069 1.730159 0.9283069 q)a - (a lsq b) mmu b / minimum squared difference 0.5333333 -0.7333333 -0.2 0.7333333 1.04127 -1.431746 -0.3904762 1.431746 1.549206 -2.130159 -0.5809524 2.130159 lsq solves a normal equations matrix via Cholesky decomposition – solving systems is more robust than matrix inversion and multiplication. Since V3.6 2017.09.26 inv uses LU decomposition. Previously it used Cholesky decomposition as well. Polynomial fitting¶ lsq can be used to approximate x and y values by polynomials. lsfit:{(enlist y) lsq x xexp/: til 1+z} / fit y to poly in x with degree z poly:{[c;x]sum c*x xexp til count c} / polynomial with coefficients c x:til 6 y:poly[1 5 -3 2] each x / cubic lsfit[x;y] each 1 2 3 / linear,quadratic,cubic(=exact) fits -33 37.6 7 -22.4 12 1 5 -3 2 Notice that lsq is very close to {x mmu inv y} . q)A:(1.1 2.2 3.3;4.4 5.5 6.6;7.7 8.8 9.9) q)B:(1.1 2.1 3.1; 2.3 3.4 4.5; 5.6 7.8 9.8) q)A lsq B 1.211009 -0.1009174 2.993439e-12 -2.119266 2.926606 -3.996803e-12 -5.449541 5.954128 -1.758593e-11 q)A mmu inv B 1.211009 -0.1009174 7.105427e-15 -2.119266 2.926606 0 -5.449541 5.954128 7.105427e-15 inv , mmu Mathematics LU decomposition, Cholesky decomposition
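Since lsq minimizes the squared residuals, its result also satisfies the normal equations. A small illustrative check of that identity (agreement is up to floating-point rounding):

x:1f+3 4#til 12
y:4 4#2 7 -2 5 5 3 6 1 -2 5 2 7 5 0 3 4f
(x lsq y) ~ (x mmu flip y) mmu inv y mmu flip y   / expected 1b, within comparison tolerance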
Migrating a kdb+ HDB to Amazon EC2¶ KX has an ongoing project of evaluating different cloud technologies to see how they interact with kdb+. If you are assessing migrating a kdb+ historical database (HDB) and analytics workloads into the Amazon Elastic Compute Cloud (EC2), here are key considerations: - performance and functionality attributes expected from using kdb+, and the associated HDB, in EC2 - capabilities of several storage solutions working in the EC2 environment, as of March 2018 - performance attributes of EC2, and benchmark results You must weigh the pros and cons of each solution. The key issues of each approach are discussed in the Appendices. We highlight specific functional constraints of each solution. We cover some of the in-house solutions supplied by Amazon Web Services (AWS), as well as a selection of some of the third-party solutions sold and supported for EC2, and a few open-source products. Most of these solutions are freely available for building and testing using Amazon Machine Images (AMI) found within the Amazon Marketplace. Why Amazon EC2?¶ Gartner, and other sources such as Synergy Research, rank cloud-services providers: - Amazon Web Services - Microsoft Azure - Google Cloud Platform This is partly due to the fact that Amazon was first to market, and partly because of their strong global data-center presence and rich sets of APIs and tools. Amazon EC2 is one of many services available to AWS users, and is managed via the AWS console. EC2 is typically used to host public estates of Web and mobile-based applications. Many of these are ubiquitous and familiar to the public. EC2 forms a significant part of the “Web 2.0/Semantic Web” applications available for mobile and desktop computing. kdb+ is a high-performance technology. It is often assumed the Cloud cannot provide a level of performance, storage and memory access commensurate with dedicated or custom hardware implementations. Porting to EC2 requires careful assessment of the functional performance constraints both in EC2 compute and in the supporting storage layers. kdb+ users are sensitive to database performance. Many have significant amounts of market data – sometimes hundreds of petabytes – hosted in data centers. Understanding the issues is critical to a successful migration. Consider the following scenarios: - Your internal IT data services team is moving from an in-house data center to a cloud-services offering. This could be in order to move the IT costs of the internal data center from a capital expense line to an operating expense line. - You need your data analytics processing and/or storage capacity to be scaled up instantly, on-demand, and without the need to provide extra hardware in your own data center. - You believe the Cloud may be ideal for burst processing of your compute load. For example, you may need to run 100s of cores for just 30 minutes in a day for a specific risk-calculation workload. - Your quants and developers might want to work on kdb+, but only for a few hours in the day during the work week, a suitable model for an on-demand or a spot-pricing service. - You want to drive warm backups of data from in-house to EC2, or across instances/regions in EC2 – spun up for backups, then shut down. - Development/UAT/Prod life-cycles can be hosted on their own instances and then spun down after each phase finishes. Small memory/core instances can cost less and can be increased or decreased on demand. 
Hosting both the compute workload and the historical market data on EC2 can achieve the best of both worlds: - reduce overall costs for hosting the market data pool - flex to the desired performance levels As long as the speed of deployment and ease of use is coupled with similar or good enough runtime performance, EC2 can be a serious contender for hosting your market data. In-house vs EC2¶ kdb+ is used to support - real-time data analytics - streaming data analytics - historical data analytics The historical database in a kdb+ solution is typically kept on a non-volatile persistent storage medium (a.k.a. disks). In financial services this data is kept for research (quant analytics or back-testing), algorithmic trading and for regulatory and compliance requirements. Low latency and the Cloud In the current state of cloud infrastructure, KX does not recommend keeping the high-performance, low-latency part of market data – or streaming data collection – applications in the Cloud. When speed translates to competitive advantage, using AWS (or cloud in general) needs to be considered carefully. Carefully-architected cloud solutions are acceptable for parts of the application that are removed from from the cutting-edge performance and data-capture requirements often imposed on kdb+. For example, using parallel transfers with a proven simple technology such as rsync , that can take advantage of the kdb+ data structures (distinct columns that can safely be transferred in parallel) and the innate compressibility of some of the data types to transfer data to historical storage in a cloud environment at end of day. Storage and management of historical data can be a non-trivial undertaking for many organizations: - capital and running costs - overhead of maintaining security policies - roles and technologies required - planning for data growth and disaster recovery AWS uses tried-and-tested infrastructure, which includes excellent policies and processes for handling such production issues. Before we get to the analysis of the storage options, it is important to take a quick look at the performance you might expect from compute and memory in your EC2 instances. CPU cores¶ We assume you require the same number of cores and memory quantities as you use on your in-house bare-metal servers. The chipset used by the instance of your choice will list the number of cores offered by that instance. The definition used by AWS to describe cores is vCPUs. It is important to note that with very few exceptions, the vCPU represents a hyper-threaded core, not a physical core. This is normally run at a ratio of 2 hyper-threaded cores to one physical core. There is no easy way to eliminate this setting. Some of the very large instances do deploy on two sockets. For example, r4.16xlarge uses two sockets. If your sizing calculations depend on getting one q process to run only on one physical core and not share itself with other q processes, or threads, you need to either - use CPU binding on q execution - invalidate the execution on even, or odd, core counts Or you can run on instances that have more vCPUs than there will be instances running. 
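One simple, illustrative way to observe this contention is to time plain list creation in each q process, and watch how the per-process time behaves as the number of concurrent processes passes the physical-core count (the sizes and repeat counts below are arbitrary):

\t:10 v:til 50000000    / allocate and fill a 50m-element long vector, 10 times
\t:10 r:50000000?1.0    / or: generate 50m random floats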
For the purposes of these benchmarks, we have focused our testing on single socket instances, with a limit of 16 vCPUs, meaning eight physical cores, thus: [centos@nano-client1 ~]$ lscpu Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 16 On-line CPU(s) list: 0-15 Thread(s) per core: 2 Core(s) per socket: 8 Socket(s): 1 NUMA node(s): 1 Vendor ID: GenuineIntel CPU family: 6 Model: 79 Model name: Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz System memory¶ Memory sizes vary by the instance chosen. Memory lost to hypervisor Memory is reduced from the nominal “power of two” RAM sizing, as some is set aside for the Xen hypervisor. For example, a nominal 128 GB of RAM gets sized to approximately 120 GB. Take account of this in your memory sizing exercises. Compute and memory performance¶ For CPU and memory, the EC2 performance matches that seen on physical systems, when correlated to the memory specifications. So the default HVM mode of an AMI under Xen seems to work efficiently when compared to a native/physical server. There is one caveat to this, in testing kdb+ list creation speeds we observe a degradation of memory list creation times when the number of q processes running exceeds the number of vCPUs in the virtual machine. This is because the vCPU in EC2 is actually a single hyperthreaded core, and not a physical core. In this example, we see competition on the physical cores. For a 16 vCPU instance we notice this only when running above 8 q processes: Megabytes and mebibytes Throughout this paper, MB and GB are used to refer to MiBytes and GiBytes respectively. Network and storage performance¶ As expected, we see more noticeable performance variations with the aspects of the system that are virtualized and shared in EC2, especially those which in principle are shared amongst others on the platform. For kdb+ users, the storage (I/O) and the networking access are virtualized/shared, being separated from the bare metal by the Xen hypervisor. Most of the AMIs deployed into EC2 today are based on the Hardware Virtual Machine layer (HVM). It seems that in recent instantiations of HVM, the performance for I/O aspects of the guest have improved. For the best performance, AWS recommends current-generation instance types and HVM AMIs when you launch your instances. Any storage solution that hosts historical market data must: - support the Linux-hosted POSIX filesystem interfaces - offer suitable performance for streaming and random I/O mapped read rates - offer acceptable performance for random-region reads of a table (splayed) columns, constituting large record reads from random regions of the file These aspects, and inspection of metadata performance, are summarized in the tests. The term metadata is used to refer to file operations such as listing files in a directory, gathering file size of a file, appending, finding modification dates, and so on. Using Amazon S3 as a data store Because kdb+ does not directly support the use of an object store for its stored data, it cannot support direct use of an object-store model such as the Amazon S3. If you wish to use Amazon S3 as a data store, kdb+ historical data must be hosted on a POSIX-based filesystem layer fronting S3. Several solutions offer a POSIX interface layered over an underlying S3 storage bucket. These can be included alongside native filesystem support that can also be hosted on EC2. 
Although EC2 offers both physical systems and virtual systems within the Elastic Cloud, it is most likely customers will opt for a virtualized environment. There is also a choice in EC2 between spot pricing of an EC2, and deployed virtual instances. We focus here on the attribute and results achieved with the deployed virtual instance model. These are represented by instances that are tested in one availability zone and one placement group. A placement group is a logical grouping of instances within a single availability zone. Nodes in a placement group should gain better network latency figures when compared to nodes scattered anywhere within an availability zone. Think of this as placement subnets or racks with a data center, as opposed to the data-center itself. All of our tests use one placement group, unless otherwise stated. kdb+ is supported on most mainstream Linux distributions, and by extension we support standard Linux distributions deployed under the AWS model. Testing within this report was carried out typically on CentOS 7.3 or 7.4 distributions, but all other mainstream Linux distributions are expected to work equally well, with no noticeable performance differences seen in spot testing on RHEL, Ubuntu and SuSe running on EC2. Does kdb+ work in the same way under EC2?¶ Yes – mostly. When porting or hosting the HDB data to EC2, we expect our customers to: - Use one of the many POSIX-based filesystems solutions available under EC2. - Use (partly or fully) the lower-cost object storage via a POSIX or POSIX-like access method. - Not store the historical data on Hadoop HDFS filesystems. If kdb+ runs alongside one of the solutions reviewed here, your HDB will function identically to any internally-hosted, bare-metal system. You can use this report as input to determine the performance and the relative costs for an HDB solution on EC2. Historical data layouts and performance testing¶ The typical kdb+ database layout for a stock tick-based system is partitioned by date, although integer partitioning is also possible. Partitioning allows for quicker lookup and increases the ability to parallelize queries. kdb+ splays in-memory table spaces into representative directories and files for long-term retention. Here is an example of an on-disk layout for quote and trade tables, with date partitions: Usually, updates to the HDB are made by writing today’s or the last day’s in-memory columns of data to a new HDB partition. Q programmers can use a utility built into q for this which creates the files and directories organized as in the table above. kdb+ requires the support of a POSIX-compliant filesystem in order to access and process HDB data. kdb+ maps the entire HDB into the runtime address space of kdb+. This means the Linux kernel is responsible for fetching HDB data. If, for example, you are expecting a query that scans an entire day’s trade price for a specific stock symbol range, the filesystem will load this data into the host memory as required. So, for porting this to EC2, if you expect it to match the performance you see on your in-house infrastructure you will need to look into the timing differences between this and EC2. Our testing measured the time to load and unload data from arrays, ignoring the details of structuring columns, partitions and segments – we focused on just the raw throughput measurements. All of these measurements will directly correlate to the final operational latencies for your full analytics use-case, written in q. 
In other words, if a solution reported here shows throughput of 100 MB/sec for solution A, and shows 200 MB/sec for solution B, this will reflect the difference in time to complete the data fetch from backing store. Of course, as with any solution, you get what you pay for, but the interesting question is: how much more could you get within the constraints of one solution? To give an example: assuming a retrieval on solution A takes 50 ms for a query comprised of 10 ms to compute against the data, and 40 ms to fetch the data, with half the throughput rates, it might take 90 ms (10+80) to complete on solution B. Variations may be seen depending on metadata and random read values. This is especially important for solutions that use networked file systems to access a single namespace that contains your HDB. This may well exhibit a significantly different behavior when run at scale. Data locality¶ Data locality is the basic architectural decision. You will get the best storage performance in EC2 by localizing the data to be as close to the compute workload as is possible. EC2 is divided into various zones. Compute, storage and support software can all be placed in pre-defined availability zones. Typically these reflect the timezone location of the data center, as well as a further subdivision into a physical instance of the data center within one region or time zone. kdb+ will achieve the lowest latency and highest bandwidth in the network by using nodes and storage hosted in the same availability zone. Getting your data into EC2¶ Let’s suppose you already have a lot of data for your historical database (HDB). You will need to know the achievable bandwidth for data loading, and note that you will be charged by the amount of data ingested. The mechanics of loading a large data set from your data center which hosts the HDB into EC2 involves the use of at least one of the two methods described below. EC2 Virtual Private Cloud¶ We would expect kdb+ customers to use the EC2 Virtual Private Cloud (VPC) network structure. Within the VPC you can use either an anonymous IP address, using EC2 DHCP address ranges, or a permanently-allocated IP address range. The anonymous DHCP IP address range is free of charge. Typically you would deploy both the front and backend domains (subnets) within the same VPC, provisioned and associated with each new instance in EC2. Typically, an entire VPC allocates an entire class-C subnet. You may provision up to 200 class-C subnets in EC2, as one account. Public IP addresses are reachable from the internet and are either dynamically allocated on start, or use the same pre-defined elastic IP address on each start of the instance. Private IP addresses refer to the locally defined IP addresses only visible to your cluster (e.g. the front/backend in diagram below). Private IP addresses are retained by that instance until the instance is terminated. Public access may be direct to either of these domains, or you may prefer to set up a classic ‘demilitarized zone’ for kdb+ access. An elastic IP address is usually your public IPv4 address, known to your quants/users/applications, and is reachable from the Internet and registered permanently in DNS, until you terminate the instance or elastic IP. AWS has added support for IPv6 in most of their regions. An elastic IP address can mask the failure of an instance or software by remapping the address to another instance in your estate. 
This is handy for things such as GUIs and dashboards, so you should be aware of this capability and make use of it. You are charged for an elastic IP address if you shut down the instance associated with it; otherwise one IP address is free while associated. As of January 2018 the cost is $0.12 per Elastic IP address/day when not associated with a running instance. Additional IP addresses per instance are charged. Ingesting data can be via the public/elastic IP address. In this case, routing to that connection is via undefined routers. The ingest rate to this instance using this elastic IP address would depend on the availability zone chosen. But in all cases, this would be a shared, publicly routed IP model, so transfer rates may be outside your control. Because this uses publicly routed connections, you may wish to consider encrypting the data over the wire. Direct Connect¶ Direct Connect is a dedicated network connection between an access point to your existing IP network and one of the AWS Direct Connect locations. This is a dedicated physical connection offered as a VLAN, using the industry-standard 802.1q VLAN protocol. You can use AWS Direct Connect instead of establishing your own VPN connection over the internet to VPC. Specifically, it can connect through to a VPC domain using a private IP space. It also gives a dedicated service level for bandwidth. There is an additional charge for this service. Security of your data and secure access¶ The EC2 application machine image model (AMI) has tight security models in place. You would have to work very hard to remove these. The following diagram shows a typical scenario for authenticating access to kdb+ and restricting networking access. The frontend and backend private subnets are provisioned by default with one Virtual Private Cloud (VPC) managed by EC2. Typically, this allocates an entire class-C subnet. You may provision up to 200 class-C subnets in EC2. The public access may be direct to either of these domains, or you may prefer to set up a classic ‘demilitarized zone’: Amazon has spent a lot of time developing security features for EC2. Key issues: - A newly-provisioned node comes from a trusted build image, for example, one found in the AWS Marketplace. - The Amazon Linux AMI Security Center provides patch and fix lists, and these can be automatically installed by the AMI. The Amazon Linux AMI is a supported and maintained Linux image provided by AWS for use on EC2. - Encryption at rest is offered by many of the storage interfaces covered in this report. Getting your data out of EC2¶ Storing billions and billions of records under kdb+ in EC2 is easily achievable. Pushing data into EC2 is straightforward and incurs no data transfer charges from AWS. But AWS will charge you to extract this information from EC2. For example, network charges may apply if you wish to extract data to place into other visualization tools/GUIs, outside the domain of kdb+ toolsets. Replication¶ You may also be replicating data from one region or availability zone to another; this too has a cost. At the time of writing, the charges are $.09/GB ($92/TB), or $94,200 for 1 PB transferred out to the Internet via EC2 public IP addresses. That is charged on the raw bytes transferred, not on the raw GBs of kdb+ columnar data itself. This is billed by AWS at a pro-rated monthly rate. The rate declines as the amount of data transferred increases. This rate also applies for all general traffic over a VPN to your own data center.
Note that normal Internet connections carry no specific service-level agreements for bandwidth. Network Direct¶ If you use the Network Direct option from EC2, you get a dedicated network with guaranteed bandwidth. You then pay for the dedicated link, plus the same outbound data transfer rates. For example, as of January 2018 the standard charge for a dedicated 1 GB/sec link to EC2 would be about $220/month, plus a transfer fee of approximately $90 per TB. Consider these costs when planning to replicate HDB data between regions, and when exporting your data continually back to your own data center for visualization or other purposes. Consider migrating these tools to coexist with kdb+ in the AWS estate; if you do not, allow for the time taken to export the data. Storing your HDB in S3¶ S3 might be something you are seriously considering for storage of some, or all, of your HDB data in EC2. Here is how S3 fits into the landscape of all of the storage options in EC2. Locally-attached drives¶ You can store your HDB on locally-attached drives, as you might do today on your own physical hardware on your own premises. EC2 offers the capability of bringing up an instance with internal NVMe or SAS/SATA disk drives, although this is not expected to be used for anything other than caching data, as this storage is referred to as ephemeral data by AWS, and might not persist after system shutdowns. This is due to the on-demand nature of the compute instances: they could be instantiated on any available hardware within the availability zone selected by your instance configuration. EBS volumes¶ You can store your HDB on EBS volumes. These appear as persistent block-level storage. Because the EC2 instances are virtualized, the storage is separated at birth from all compute instances. This separation allows you to start instances on demand, without the need to co-locate the HDB data alongside those nodes. The separation is always via the networking infrastructure built into EC2. In other words, your virtualized compute instance can be attached to a real physical instance of the storage via the EC2 network, and thereafter it appears as block storage. This is referred to as network-attached storage (Elastic Block Storage). Alternatively, you can place the files on a remote independent filesystem, which in turn is typically supported by EC2 instances backed by EBS or S3. Amazon S3 object store¶ Finally, there is the ubiquitous Amazon S3 object store, available in all regions and zones of EC2. Amazon uses S3 to run its own global network of websites, and many high-visibility web-based services store their key data under S3. With S3 you can create and deploy your HDB data in buckets of S3 objects. - Storage prices are lower (as of January 2018): typically 10% of the costs of the Amazon EBS model. - S3 can be configured to offer redundancy and replication of object data, regionally and globally. Amazon can be configured to duplicate your uploaded data across multiple geographically diverse repositories, according to the replication service selected at bucket-creation time. S3 promises 99.999999999% durability. However, there are severe limitations on using S3 when it comes to kdb+. The main limitation is the API. API limitations¶ An S3 object store is organized differently from a POSIX filesystem. S3 uses a web-style RESTful HTTP interface with eventual-consistency semantics for put and change operations.
This will always represent an additional level of abstraction for an application like kdb+ that directly manages its virtual memory. S3 therefore exhibits slower per-process/thread performance than is usual for kdb+. The lack of a POSIX interface and the semantics of RESTful interfaces prevent kdb+ and other high-performance databases from using S3 directly. However, S3’s low cost, and its ability to scale performance horizontally when additional kdb+ instances use the same S3 buckets, make it a candidate for some customers. Performance limitations¶ The second limitation is S3’s performance, as measured by the time taken to populate vectors in memory. kdb+ uses POSIX filesystem semantics to manage HDB structure directly on disk. It exploits this feature to gain very high-performance memory management through Linux-based memory-mapping functions built into the kernel from the very inception of Linux. S3 uses none of this. On EC2, kdb+ performance stacks up in this order (from slowest to fastest): - S3 - EBS - Third-party distributed or managed filesystem - Local drives to the instance (typically cache only) Although the performance of S3 as measured from one node is not fast, S3 retains comparative performance for each new instance added to an HDB workload in each availability zone. Because of this, S3 can scale up its throughput when used across multiple nodes within one availability zone. This is useful if you are positioning large numbers of business functions against common sets of market data, or if you are widely distributing the workload of a single set of business queries. This is not so for EBS as, when deployed, the storage becomes owned by one, and only one, instance at a time. Replication limitations¶ A nice feature of S3 is its built-in replication model between regions and/or time zones. Note you have to choose a replication option; none is chosen by default. The replication process may well duplicate incorrect behavior from one region to another. In other words, this is not a backup. However, the data at the replica site can be used for production purposes, if required. Replication is only for cross-region propagation (e.g. US-East to US-West). But, given that the kdb+ user can design this into the solution (i.e. end-of-day copies to replica sites, or multiple pub-sub systems), you may choose to deploy a custom solution within kdb+, across regions, rather than relying on S3 or the filesystem itself. Summary¶ - The POSIX filesystem interface allows the Linux kernel to move data from the blocks of the underlying physical hardware directly into the memory-mapped space of the user process. This concept has been tuned and honed by over 20 years of Linux kernel refinement. In our case, the recipient user process is kdb+. S3, by comparison, requires the application to bind to an HTTP-based RESTful (get, wait, receive) protocol, which is typically transferred over a TCP/IP LAN or WAN connection. Clearly, this is not directly suitable for a high-performance in-memory analytics engine such as kdb+. However, all of the filesystem plug-ins and middleware packages reviewed in this paper help mitigate this issue. The appendices list the main comparisons of all of the reviewed solutions. - Neither kdb+, nor any other high-performance database, makes use of the RESTful object-store interface. - There is no notion of vectors, lists, memory mapping or optimized placement of objects in memory regions.
- S3 employs an eventual-consistency model, meaning there is no guaranteed service time for placement of the object, or replication of the object, for access by other processes or threads. - S3 exhibits relatively low streaming-read performance. A RESTful, single S3 reader process is limited to a read throughput of circa 0.07 GB/sec. Some of the solutions reviewed in this paper use strategies to improve these numbers within one instance (e.g. raising that figure to the 100s MB/sec – GB/sec range). There is also throughput scalability gained by reading the same bucket across multiple nodes. There is no theoretical limit on this bandwidth, but this has not been exhaustively tested by KX. - Certain metadata operations, such as kdb+’s append function, cause significant latency vs that observed on EBS or local attached storage, and your mileage depends on the filesystem under review. Performance enhancements, some of which are bundled into third-party solutions that layer between S3 and the POSIX filesystem layer, are based around a combination of: multithreading read requests to the S3 bucket; separation of large sequential regions of a file into individual objects within the bucket and read-ahead and caching strategies. There are some areas of synergy. kdb+ HDB data typically stores billions and billions of time-series entries in an immutable read-only mode. Only updated new data that lands in the HDB needs to be written. S3 is a shared nothing model. Therefore, splitting a single segment or partitioned column of data into one file, which in turn is segmented into a few objects of say 1 MB, should be a lightweight operation, as there is no shared/locking required for previously written HDB data. So the HDB can easily tolerate this eventual consistency model. This does not apply to all use-cases for kdb+. For example, S3, with or without a filesystem layer, cannot be used to store a reliable ticker-plant log. Where S3 definitely plays to its strengths, is that it can be considered for an off-line deep archive of your kdb+ formatted market data. KX does not make recommendations with respect to the merits, or otherwise, of storing kdb+ HDB market data in a data retention type “WORM” model, as required by the regulations SEC 17-a4. Disaster recovery¶ In addition to EC2’s built-in disaster-recovery features, when you use kdb+ on EC2, your disaster recovery process is eased by kdb+’s simple, elegant design. kdb+ databases are stored as a series of files and directories on disk. This makes administering databases extremely easy because database files can be manipulated as operating-system files. Backing up a kdb+ database can be implemented using any standard filesystem backup utility. This is a key difference from traditional databases, which have to have their own cumbersome backup utilities and do not allow direct access to the database files and structure. kdb+’s use of the native filesystem is also reflected in the way it uses standard operating-system features for accessing data (memory-mapped files), whereas traditional databases use proprietary techniques in an effort to speed up the reading and writing processes. The typical kdb+ database layout for time-series data is to partition by date. Licensing kdb+ in the Cloud¶ Existing kdb+ users have a couple of options for supporting their kdb+ licenses in the Cloud: Existing license¶ You can use your existing license entitlement but must transfer or register coverage in the Cloud service. 
This would consume the specified number of cores from your license pool. An enterprise license can be freely used in EC2 instance(s). This might apply in the situation where the Cloud environment is intended to be a permanent static instance. Typically, this will be associated with a virtual private cloud (VPC) service. For example, AWS lets you provision a logically isolated section of the Cloud where you can launch AWS resources in a virtual network. The virtual network is controlled by your business, including the choice of IP, subnet, DNS, names, security, access, etc. On-demand licensing¶ You can sign up for an on-demand license, and use it to enable kdb+ on each of the on-demand EC2 nodes. kdb+ on-demand usage registers by core and by minutes of execution. Encryption¶ Consider the need for access to any keys used to encrypt and store data. Although this is not specific to AWS, do not assume you have automatic rights to private keys employed to encrypt the data. Where a third-party provider supplies or uses encryption or compression to store the market data on S3, you will need to check the public and private keys are either made available to you, or held by some form of external service. Benchmarking methodology¶ For testing raw storage performance, we used a lightweight test script developed by KX, called nano, based on the script io.q written by KX’s Chief Customer Officer, Simon Garland. The scripts used for this benchmarking are freely available for use and are published on GitHub at KxSystems/nano. This set of scripts is designed to focus on the relative performance of distinct I/O functions typically expected of an HDB. The measurements are taken from the perspective of the primitive I/O operations, namely:
| test | what happens |
|---|---|
| Streaming reads | One list (e.g. one column) is read sequentially into memory. We read the entire space of the list into RAM, and the list is memory-mapped into the address space of kdb+. |
| Large Random Reads (one mapped read and map/unmapped) | 100 random-region reads of 1 MB of a single column of data are indexed and fetched into memory. Both single mappings into memory, and individual map/fetch/unmap sequences. Mapped reads are triggered by a page fault from the kernel into mmap’d user space of kdb+. This is representative of a query that needs to read through 100 large regions of a column of data for one or more dates (partitions). |
| Small Random Reads (mapped/unmapped sequences) | 1600 random-region reads of 64 KB of a single column of data are indexed and fetched into memory. Both single mappings into memory, and individual map/fetch/unmap sequences. Reads are triggered by a page fault from the kernel into mmap’d user space of kdb+. We run both fully-mapped tests and tests with map/unmap sequences for each read. |
| Write | Write rate is of less interest for this testing, but is reported nonetheless. |
| Metadata: (hclose hopen) | Average time for a typical open/seek to end/close loop. Used by a TP log as an “append to” and whenever the database is being checked. Can be used to append data to an existing HDB column. |
| Metadata: (();,;2 3) | Append data to a modest list of 128 KB; will open/stat/seek/write/close. Similar to ticker-plant write-down. |
| Metadata: (();:;2 3) | Assign bytes to a list of 128 KB; stat/seek/write/link. Similar to initial creation of a column. |
| Metadata: (hcount) | Typical open/stat/close sequence on a modest list of 128 KB. Determines size, e.g. as included in read1. |
| Metadata: (read1) | An atomic open/stat/seek/read/close sequence (a mapped read with map/read/unmap). Tested on a modest list of 128 KB. |
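For reference, the q expressions behind the metadata rows above look roughly like the following sketch; the file path is illustrative only:
f:`:/tmp/benchfile
.[f;();:;2 3]        / assign: create or overwrite the file with a small list
.[f;();,;2 3]        / append to the existing file
hclose hopen f       / open/close round trip
hcount f             / file size in bytes
read1 f              / read the file contents as bytes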
This test suite ensures we cover several of the operational tasks undertaken during an HDB lifecycle. For example, one broad comparison between direct-attached storage and a networked/shared filesystem is that the networked filesystem timings might reflect higher operational overheads vs. a Linux kernel block-based direct filesystem. Note that a shared filesystem will scale up in line with the implementation of horizontally distributed compute, which the block filesystems will not easily do, if at all. Also note the networked filesystem may be able to leverage 100s or 1000s of storage targets, meaning it can sustain high levels of throughput even for a single reader thread. Baseline result – using a physical server¶ All the appendices refer to tests on AWS. To see how EC2 nodes compare to a physical server, we show the results of running the same set of benchmarks on a server running natively, bare metal, instead of on a virtualized server in the Cloud. For the physical server, we benchmarked a two-socket Broadwell E5-2620 v4 @ 2.10 GHz; 128 GB DDR4 2133 MHz. This used one Micron PCIe NVMe drive, with CentOS 7.3. For the block device settings, we set the device read-ahead settings to 32 KB and the queue depths to 64. It is important to note this is just a reference point and not a full solution for a typical HDB. This is because the number of target drives at your disposal here will be limited by the number of slots in the server. Highlights: Creating a memory list¶ The MB/sec that can be laid out in a simple list allocation/creation in kdb+. Here we create a list of longs of approximately half the size of available RAM in the server. Shows the capability of the server when laying out lists in memory; reflects the combination of memory speeds alongside the CPU. Re-read from cache¶ The MB/sec that can be re-read when the data is already held by the kernel buffer cache (or filesystem cache, if the kernel buffer is not used). It includes the time to map the pages back into the memory space of kdb+, as we effectively restart the instance here without flushing the buffer cache or filesystem cache. Shows if there are any unexpected glitches with the filesystem caching subsystem. This may not affect your production kdb+ code per se, but may be of interest in your research. Streaming reads¶ Where complex queries demand wide time periods or symbol ranges. An example of this might be a VWAP trading calculation. These types of queries are most impacted by the throughput rate, i.e. the slower the rate, the higher the query wait time. Shows that a single q process can ingest at 1900 MB/sec with data hosted on a single drive, into kdb+’s memory space, mapped. The theoretical maximum for the device is approximately 2800 MB/sec and we achieve 2689 MB/sec. Note that with 16 reader processes, this throughput continues to scale up to the device limit, meaning kdb+ can drive the device harder as more processes are added. Random reads¶ We compare the throughputs for random 1 MB-sized reads. This simulates more precise data queries spanning smaller periods of time or symbol ranges. In all random-read benchmarks, the term full map refers to reading pages from the storage target straight into regions of memory that are pre-mapped. Simulates queries that are searching around broadly different times or symbol regions.
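To relate the two read patterns above back to q, a streaming-style scan (such as the VWAP example) and a single random-region read look something like the sketch below; the database path, date and offsets are illustrative:
q)select vwap:size wavg price by sym from trade where date=2018.01.01    / streaming scan across whole columns
q)read1(`:db/2018.01.01/trade/price;1000000;1048576)                     / one 1 MB read from an arbitrary offset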
These random-read results show that a typical NVMe device under kdb+ performs very well when we are reading smaller/random regions of one or more columns at the same time. They also show that the device sustains similar throughput under high parallel load as threads increase, meaning more requests are queued to the device while the latency per request is sustained. Metadata function response times¶ We also look at metadata function response times for the filesystem. In the baseline results below, you can see what a theoretical lowest figure might be. We deliberately did not run metadata tests using very large data sets/files, so that they better represent just the overhead of the filesystem, the Linux kernel and the target device.
| function | latency (mSec) | function | latency (mSec) |
|---|---|---|---|
| hclose hopen | 0.006 | (();,;2 3) | 0.01 |
| hcount | 0.003 | read1 | 0.022 |
Physical server, metadata operational latencies – mSecs (headlines) This appears to be sustained for multiple q processes, and on the whole sits in the range of a few to a few tens of microseconds. kdb+ sustains good metrics here. AWS instance local SSD/NVMe¶ We separate this specific test from other storage tests, as these devices are contained within the EC2 instance itself, unlike every other solution reviewed in Appendix A. Note that some of the solutions reviewed in the appendices do actually leverage instances containing these devices. An instance-local store provides temporary block-level storage for your instance. This storage is located on disks that are physically attached to the host computer. This is available in a few predefined regions (e.g. US-East-1), and for a selected list of specific instances. In each case, the instance-local storage is provisioned for you when the instance is created and started. The size and quantity of drives is preordained and fixed. This differs from EBS, where you can select your own. For this test we selected the i3.8xlarge as the instance under test. i3 instance definitions will provision local NVMe or SATA SSD drives for locally attached storage, without the need for networked EBS. Locally provisioned SSD and NVMe are supported by kdb+. The results from these two represent the highest performance per device available for read rates from any non-volatile storage in EC2. However, note that this data is ephemeral. That is, whenever you stop an instance, EC2 is at liberty to reassign that space to another instance and it will scrub the original data. When the instance is restarted, the storage will be available but scrubbed. This is because the instance is physically associated with the drives, and you do not know where the physical instance will be assigned at start time. The only exception to this is if the instance crashes or reboots without an operational stop of the instance; then the same storage will recur on the same instance. The cost of instance-local SSD is embedded in the fixed price of the instance, so this pricing model needs to be considered. By contrast, the cost of EBS is fixed per GB per month, pro-rated. The data held on instance-local SSD is not natively sharable. If it needs to be shared, a shared filesystem must be layered on top, i.e. demoting this node to be a filesystem server node. For the above reasons, these storage types have been used by solutions such as WekaIO, for their local instance of the erasure-coded data cache.
| function | instance-local NVMe (4 × 1.9 TB) | physical node (1 NVMe) | |---|---|---| | streaming read (MB/sec) | 7006 | 2624 | | random 1-MB read (MB/sec) | 6422 | 2750 | | random 64-KB read (MB/sec) | 1493 | 1182 | metadata (hclose , hopen ) | 0.0038 mSec | 0.0068 mSec | The variation of absolute streaming rates is reflective of the device itself. These results are equivalent to the results seen on physical servers. What is interesting is that at high parallelism, the targets work quicker with random reads and for metadata service times than the physical server. These instances can be deployed as a high-performance persistent cache for some of the AWS-based filesystem solutions, such as used in ObjectiveFS and WekaIO Matrix and Quobyte. Observations from kdb+ testing¶ CPU and memory speed¶ For CPU and memory speed/latencies with kdb+, EC2 compute nodes performance for CPU/memory mirrors the capability of logically equivalent bare-metal servers. At time of writing, your main decision here is the selection of system instance. CPUs range from older generation Intel up to Haswell and Broadwell, and from 1 core up to 128 vcores (vCPU). Memory ranges from 1 GB up to 1952 GB RAM. Storage performance¶ The best storage performance was, as expected, achieved with locally-attached ephemeral NVMe storage. This matched, or exceeded, EBS as that storage is virtualized and will have higher latency figures. As data kept on this device cannot be easily shared, we anticipate this being considered for a super cache for hot data (recent dates). Data stored here would have to be replicated at some point as this data could be lost if the instance is shut down by the operator. Wire speeds¶ kdb+ reaches wire speeds on most streaming read tests to networked/shared storage, under kdb+, and in several cases we can reach wire speeds for random 1-MB reads using standard mapped reads into standard q abstractions, such as lists. gp2 vs io1 ¶ EBS was tested for both gp2 and its brethren the io1 flash variation. kdb+ achieved wire speed bandwidth for both of these. When used for larger capacities, we saw no significant advantages of io1 for the HDB store use case, so the additional charges applied there need to be considered. st1 ¶ EBS results for the st1 devices (low cost traditional disk drives, lower cost per GB) show good (90th-percentile) results for streaming and random 1-MB reads, but, as expected, significantly slower results for random 64-KB and 1-MB reads, and 4× the latencies for metadata ops. Consider these as a good candidate for storing longer term, older HDB data to reduce costs for owned EBS storage. ObjectiveFS and WekaIO Matrix¶ ObjectiveFS and WekaIO Matrix are commercial products that offer full operational functionality for the POSIX interface, when compared to open-source S3 gateway products. These can be used to store and read your data from/to S3 buckets. WekaIO Matrix offers an erasure-encoded clustered filesystem, which works by sharing out pieces of the data around each of the members of the Matrix cluster. ObjectiveFS works between kdb+ and S3 with a per-instance buffer cache plus distributed eventual consistency. It also allows you to cache files locally in RAM cache and/or on ephemeral drives within the instance. Caching to locally provisioned drives is likely to be more attractive vs. caching to another RAM cache. POSIX filesystems¶ Standalone filesystems such as MapR-FS and Quobyte support POSIX fully. 
Other distributed filesystems designed from the outset to support POSIX should fare equally well as, to some degree, the networking infrastructure is consistent when measured within one availability zone or placement group. Although these filesystem services are encapsulated in the AWS marketplace as AMIs, you are obliged to run this estate alongside your HDB compute estate, as you would own and manage the HDB just the same as if it were in-house. Although the vendors supply AWS marketplace instances, you would own and run your own instances required for the filesystem. WekaIO and Quobyte¶ WekaIO and Quobyte use a distributed filesystem based on erasure-coding distribution of data amongst their quorum of nodes in the cluster. This may be appealing to customers wanting to provision the HDB data alongside the compute nodes. If, for example, you anticipate using eight or nine nodes in production these nodes could also be configured to fully own and manage the filesystem in a reliable way, and would not mandate the creation of distinct filesystem services to be created in other AWS instances in the VPC. What might not be immediately apparent is that for this style of product, they will scavenge at least one core on every participating node in order to run their erasure-coding algorithm most efficiently. This core will load at 100% CPU. EFS and AWS Gateway¶ Avoid EFS and AWS Gateway for HDB storage. They both exhibit very high latencies of operation in addition to the network-bandwidth constraints. They appear to impact further on the overall performance degradations seen in generic NFS builds in Linux. This stems from the latency between a customer-owned S3 bucket (AWS Gateway), and an availability zone wide distribution of S3 buckets managed privately by AWS. Open-source products¶ Although the open source products that front an S3 store (S3FS, S3QL and Goofys) do offer POSIX, they all fail to offer full POSIX semantics such as symbolic linking, hard linking and file locking. Although these may not be crucial for your use case, it needs consideration. You might also want to avoid these, as performance of them is at best average, partly because they both employ user-level FUSE code for POSIX support. Network configuration¶ The network configuration used in the tests: The host build was CentOS 7.4, with Kernel 3.10.0-693.el7.x86_64. The ENS module was installed but not configured. The default instance used in these test reports was r4.4xlarge . Total network bandwidth on this model is “up-to” 10 Gbps. For storage, this is documented by AWS as provisioning up to 3,500 Mbps, equivalent to 437 MB/sec of EBS bandwidth, per node, bi-directional. We met these discrete values as seen in most of our individual kdb+ tests. Author¶ Glenn Wright, Systems Architect, KX, has 30+ years of experience within the high-performance computing industry. He has worked for several software and systems vendors where he has focused on the architecture, design and implementation of extreme performance solutions. At KX, Glenn supports partners and solutions vendors to further exploit the industry-leading performance and enterprise aspects of kdb+.
Conformable data objects¶ Many q operators and keywords implicitly iterate through the items of their list arguments, provided that the arguments are conformable. This article describes what it means for data objects to conform. The idea of conformable objects is tied to atomic functions such as Add, functions like Cast with behavior very much like atom functions, and functions derived from Each. For example, the primitive function Add can be applied to vectors of the same count, as in q)1 2 3+4 5 6 5 7 9 but fails with a length error when applied to vectors that do not have the same count, such as: q)1 2 3 + 4 5 6 7 'length [0] 1 2 3 + 4 5 6 7 ^ The vectors 1 2 3 and 4 5 6 are conformable, while 1 2 3 and 4 5 6 7 are not. Add applies to conformable vectors in an item-by-item fashion. For example, 1 2 3+4 5 6 equals (1+4),(2+5),(3+6) , or 5 7 9 . Similarly, Add of an atom and a list is obtained by adding the atom to each item of the list. For example, 1 2 3+5 equals (1+5),(2+5),(3+5) , or 6 7 8 . If the argument lists of Add have additional structure below the first level then Add is applied item-by-item recursively, and for these lists to be conformable they must be conformable at every level; otherwise, a length error is signalled. For example, the arguments in the following expression are conformable at the top level – they are both lists of count 2 – but are not conformable at every level. q)(1 2 3;(4;5 6 7 8)) + (10;(11 12;13 14 15)) 'length [0] (1 2 3;(4;5 6 7 8)) + (10;(11 12;13 14 15)) ^ Add is applied to these arguments item-by-item, and therefore both 1 2 3+10 and (4;5 6 7 8)+(11 12;13 14 15) are evaluated, also item-by-item. When the latter is evaluated, 5 6 7 8+13 14 15 is evaluated in the process, and since 5 6 7 8 and 13 14 15 are not conformable, the evaluation fails. Type and length All atoms in the arguments to Add must be numeric, or else Add will signal a type error. However, the types of the atoms in two lists have nothing to do with conformability, which is only concerned with the lengths of various pairs of sublists from the two arguments. The following function tests for conformability; its result is 1 if its arguments conform at every level, and 0 otherwise. conform:{ $[ max 0>type each (x;y) ; 1 ; count[x]=count[y] ; min x conform' y; 0]} That is, atoms conform to everything, and two lists conform if they have equal counts and are item-by-item conformable. Two objects x and y conform at the top level if they are atoms or lists, and have the same count when both are lists. For example, if f is a binary then the arguments of f' (that is, f -Each) must conform at the top level. By extension, x and y conform at the top two levels if they conform at the top level and when both are lists, the items x[i] and y[i] also conform at the top level for every index i ; and so on. These conformability concepts are not restricted to pairs of objects. For example, three objects x , y , and z conform if all pairs x,y and y,z and x,z are conformable. Controlling evaluation¶ Evaluation is controlled by - iterators (maps and accumulators) for iteration - conditional evaluation - explicit return from a lambda - signalling and trapping errors - control words exit Iterators¶ Iterators are the primary means of iterating in q. Maps¶ The maps Each, Each Left, Each Right, Each Parallel, and Each Prior are iterators that apply values across the items of lists and dictionaries. 
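For example, applied to Add and a simple lambda (the values are arbitrary):
q){x*x} each 1 2 3      / Each
1 4 9
q)1 2 3 +\: 10 20       / Each Left
11 21
12 22
13 23
q)1 2 3 +/: 10 20       / Each Right
11 12 13
21 22 23
q)(+) prior 1 2 3 4     / Each Prior
1 3 5 7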
Accumulators¶ The accumulators Scan and Over are iterators that apply values progressively: that is, first to argument/s, then progressively to the result of each evaluation. For unary values, they have three forms, known as Converge, Do, and While. Case¶ Case control structures in other languages map values to code or result values. In q this mapping is more often handled by indexing into lists or dictionaries. q)show v:10?`v1`v2`v3 / values `v1`v1`v3`v2`v3`v2`v3`v3`v2`v1 q)`r1`r2`r3 `v1`v2`v3?v / Find `r1`r1`r3`r2`r3`r2`r3`r3`r2`r1 q)(`v1`v2!`r1`r2) v / dictionary: implicit default `r1`r1``r2``r2```r2`r1 q)`r1`r2`default `v1`v2?v / explicit default `r1`r1`default`r2`default`r2`default`default`r2`r1 The values mapped can be functions. The pseudocode for-each (x in v) { switch(x) { case `v1: `abc,x; break; case `v2: string x; break; default: x; } } can be written in q as q)((`abc,;string;::) `v1`v2?v)@'v `abc`v1 `abc`v1 `v3 "v2" `v3 "v2" `v3 `v3 "v2" `abc`v1 and optimized with .Q.fu . See also the Case iterator. Control structures¶ Conditional evaluation¶ $[test;et;ef;…] Cond evaluates and returns ef when test is zero; else et . In the ternary form, two expressions are evaluated: test and either et or ef . With more expressions, Cond implements if/then/elseif… control structures. Vector Conditional The Vector Conditional operator, unlike Cond, can be used in query templates. Vector Conditional is an example of a whole class of data-oriented q solutions to problems other languages typically solve with control structures. Data-oriented solutions are typically more efficient and parallelize well. Control words¶ do - evaluate some expression/s some number of times if - evaluate some expression/s if some condition holds while - evaluate some expression/s while some condition holds Control words are not functions. They return as a result the generic null. Common errors with control words a:if[1b;42]43 / instead use Cond a:0b;if[a;0N!42]a:1b / the sequence is not as intended! Control words are little used in practice for iteration. Iterators are more commonly used. Explicit return¶ :x has a lambda terminate and return x . Signalling and trapping errors¶ Signal will exit the lambda under evaluation and signal an error to the expression that invoked it. Trap and Trap At set traps to catch errors. exit ¶ The exit keyword terminates kdb+ with the specified return code.
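Brief examples of the constructs above (values arbitrary):
q)$[2>1;`yes;`no]                  / Cond
`yes
q)?[1 2 3 4>2;100;0]               / Vector Conditional
0 0 100 100
q)f:{if[x<0;:`negative];`ok}       / explicit return from a lambda
q)f[-5]
`negative
q)@[{'x};"oops";{"trapped: ",x}]   / Signal, caught by Trap At
"trapped: oops"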
/ * Information Gain Helper function * * t is table where 1st col is attribute and remaining cols are class counts * for each value in attribute, e.g. * * attr1 class1 class2 * ------------------- * 0 25 75 * 1 33 67 * ... * * cls is list of classes e.g. `class1`class2 * clscnt is list of occurances of each class * setcnt is count of instances \ ighlpr:{[t;cls;clscnt;setcnt] e:entropy each flip t[cls]; p:(sum each flip t[cls])%setcnt; entropy[clscnt] - sum e*p} / * Information Gain * * test: * / random integers with class labels: * q)t:flip (`a`b`c`class!) flip {(3?100),1?`A`B`C} each til 10000 * q)infogain[t] each `a`b`c * 0.01451189 0.01573281 0.01328462 \ infogain:{[tbl;attrib] cls:exec distinct class from tbl; clscnt:(exec count i by class from tbl)[cls]; setcnt:count tbl; d:distinct tbl[attrib]; / change attrib colname for ease of use with select syntax t2:(`a xcol attrib xcols select from tbl); t2:select count i by a, class from t2; / create empty table to hold pivot t3:flip (`a,cls)!(enlist[d],{[d;x]count[d]#0}[d;] each til count cls); / pivot t2, change class column to a column per class value t3:{x lj `a xkey flip (`a,y[`class])!(enlist y[`a];enlist y[`x])} over (enlist t3),0!t2; ighlpr[t3;cls;clscnt;setcnt]} / * * Decision Tree ID3 * \ / * Generate paths through a decision tree. * data is a table containing entire dataset * d is a dict that contains attribute path (n_) and values (v_) to consider * * The result contains a table with two columns n_ and v_. The n_ column will * match the input d[`n_] except new possible attributes are appended. The * prefix of the v_ column should match the input values d[`v_]. * * e.g. * * q)t:([] a:1 1 2 2;b:3 4 3 4;class:`A`B`A`B) * q)nextpaths[t;`n_`v_!(enlist[`a];enlist[])] * n_ v_ * ------ * a b 1 * a b 2 \ nextpaths:{[data;d] n_:d[`n_]; attrs:key[first[data]] except `class; if[null first n_;:flip `n_`v_!(enlist each attrs;count[attrs]#enlist[])]; / construct a func. query similar to "select v_:(x,'y) from data" clause:{if[1=count x;:(each;enlist;first x)]; {((';,);x;y)} over x}[n_]; tmp:?[data;();1b;enlist[`v_]!enlist[clause]]; if[not null first d[`v_]; tmp:select from tmp where all each =[d[`v_];] each (-1_') v_]; / tack on the passed-in n_ to each row tmp[`n_]:(count[tmp];count[n_])#n_; tmp:`n_`v_ xcols tmp; / find leaves: select count distinct class by <n_> from data leaves:?[data;();n_!n_;enlist[`class]!enlist[(count;(distinct;`class))]]; / find leaves: select n_1,n_2,... from leaves where class=1 leaves:(value each) ?[leaves;enlist[(=;`class;1)];0b;n_!n_]; / internal nodes tmpi:select from tmp where not v_ in leaves; / leaf nodes tmpl:select from tmp where v_ in leaves; / dupe rows for each new attr to be appended to n_ newp:enlist[n_] cross attrs except n_; tmpl , (,/){[newp;x] {`n_`v_!(y;x[`v_])}[x;] each newp }[newp;] each tmpi} / * Query a table with a where clause generated by the d argument. * e.g. igwrap[t;(`a`b;10 1)] will effectively run the query: * select from t where a=10,b=1 * TODO: symbols seem to need special treatment in functional queries? \ igwrap:{[tbl;d] ?[tbl;{(=;x;$[-11=type y;enlist[y]; y])}'[first d;last d];0b;()]}; / * Perform one step of the ID3 algorithm. * data is a table containing entire dataset * tree is a n_, v_ table returned from nextpaths * l is level of algorithm to perform (i.e. 
how many elements in the paths) * * Returns a sorted table with infogain calculated for each path (e_ column) \ id3step:{[data;tree;l] / c_ contains a chain of attributes already split on tree_:update c_:(-1_') n_ from select from tree where l = count each n_; / get subsets of data matching c_ and v_ sub:igwrap[data] each flip tree_`c_`v_; / a_ is candidate attribute for next split a_:(-1#') tree_`n_; / find infogain for each subset and candidate attribute tree_[`e_]:{infogain . x} each (enlist each sub),'a_; / sort infogain to find attr to split on next / sort twice because groupby seems to jumble initial sort `e_ xdesc value select first n_, first v_, first e_ by c_, v_ from `e_ xdesc tree_} / helper function for recursive calls id3hlpr:{[data;tree;l] if[0=count tree;:tree]; r:id3step[data;tree;l]; attrs:key[first[data]] except `class; if[(0=count[r]) or l=count attrs;:r]; / recurse np_:(,/) nextpaths[data] each r; r:id3hlpr[data;np_;l+1]; / select paths that that have length l tmpl:select from np_ where l = count each n_; tmpl uj r} / * ID3 * * test: * q)t:flip (`a`b`c`d`class!) flip {(4?10),1?`A`B`C} each til 100 * q)id3[t] * \ id3:{[data] r:id3hlpr[data;nextpaths[data;`n_`v_!(enlist[];enlist[])];1]; / for each id3 path find the most common class, i.e. run query like: / select count i by class from data where (attr1=val1)&(attr2=val2)... clauses:{{(&;x;y)} over {(=;first x;enlist last x)} each flip x`n_`v_} each r; classes:{[data;clause] r:?[data;enlist[clause];enlist[`class]!enlist[`class];enlist[`x]!enlist[(#:;`i)]]; first exec class from `x xdesc r}[data;] each clauses; r[`class]:classes; delete e_ from r} / * Classic weather dataset * \ weatherdata:{ outlook:`sunny`sunny`overcast`rain`rain`rain`overcast`sunny`sunny`rain`sunny`overcast`overcast`rain; temp:`hot`hot`hot`mild`cool`cool`cool`mild`cool`mild`mild`mild`hot`mild; humidity:`high`high`high`high`normal`normal`normal`high`normal`normal`normal`high`normal`high; wind:`weak`strong`weak`weak`weak`strong`strong`weak`weak`weak`strong`strong`weak`strong; class:`no`no`yes`yes`yes`no`yes`no`yes`yes`yes`yes`yes`no; t:([] outlook:outlook;temp:temp;humidity:humidity;wind:wind;class:class)} ================================================================================ FILE: ml.q_rl_algo_qlearner.q SIZE: 2,553 characters ================================================================================ / * Q Learning is a reinforcement learning method for incrementally estimating * the optimal action-value function. It is an off-policy temporal difference * method. As such, it is model-free and uses bootstrapping. \ \d .qlearner / * Initialize the state for the learner. The state is stored in a metadata dict * and passed back to the caller. Subsequent calls to learner functions require * that it be passed back. 
* @param {symbols} actions - list of action names * @param {symbols} states - list of state names * @param {float} alpha - learning rate * @param {float} epsilon - probability of selecting random action * @param {float} gamma - discount factor \ init_learner:{[actions;states;alpha;epsilon;gamma] qhat:("i"$xexp[2;count[states]])#enlist[actions!count[actions]#0f]; curstate:states!count[states]#0b; store:(`states`curstate`action`alpha`epsilon`gamma`qhat!(states;curstate;`;alpha;epsilon;gamma;qhat)); store}; / * Helper function to initialize learner's current state * @param {dict} store - qlearner metadata * @param {dict} newstate - state to assign to learner \ init_state:{[store;newstate] store[`curstate]:store[`curstate],newstate; store}; / * Select the best action for given state * @param {dict} store - qlearner metadata * @param {dict} state - state for which to lookup best action \ best_action:{[store;state] first key asc store[`qhat][2 sv state[store`states]]}; / * Selects the next action and places it in store`action * @param {dict} store - qlearner metadata \ next_action:{[store] actions:cols first[store`qhat]; action:$[store[`epsilon]>first[1?1.0]; / retrieve actions from qhat dict so we dont have to store separately first 1?cols first store`qhat; best_action[store;store[`curstate]]]; store[`action]:action; store}; / * Make an observation * @param {dict} store - qlearner metadata * @param {float} reward - reward received for taking cur action in cur state * @param {dict} newstate - newstate transitioned to \ observe:{[store;reward;newstate] / convert newstate dict into decimal encoding newstate_:2 sv newstate[store`states]; curstate:2 sv store[`curstate][store`states]; curaction:store[`action]; qhat:store[`qhat]; curq:qhat[curstate][curaction]; newq:reward + store[`gamma]*first[desc qhat[newstate_]]; newq:(newq*store[`alpha]) + curq*(1-store[`alpha]); newq_:qhat[curstate],enlist[curaction]!enlist[newq]; store[`qhat]:(curstate # store[`qhat]), enlist[newq_], (1+curstate) _ store[`qhat]; store[`curstate]:store[`curstate],newstate; store}; ================================================================================ FILE: ml.q_rl_experiment_trade_trade.q SIZE: 2,463 characters ================================================================================ / * Run trading experiments with market data. Assumes existence of market data * directory and files named with tickers, e.g. 
data/IBM.csv \ \l ../../algo/qlearner.q \l ../../model/trading.q / local data directory .trading.datadir:"../../../data/"; / number of tickers to process ntickers:500; / * Run training episodes and one test run * @param {string} ticker * @param {int} episodes * @returns {table} \ singleshot:{[ticker;episodes] store:.qlearner.init_learner[`long`short`hold;`side,.trading.states;.3;0.1;.3]; data:.trading.get_data[ticker]; / partition half-half train / test part:("i"$count[data]%2); train:part#data; test:part _ data; i:-1; while[episodes>i+:1;store:.trading.trainiter[store;train]]; r:.trading.testhlpr_[store;test]; r2:.trading.realized[r]; r:update return:1f^fills return, bhreturn:1f^close%close[0] from r lj `date xkey r2; select date,return,bhreturn from r}; / Train and test and write out the top N runs to disk getreturns:{[ticker;topcnt] r:{[t;x] singleshot[t;100]}[ticker] peach til 100; last[r]`return; top:r[topcnt#idesc {last[x]`return} each r]; r:([] date:first[top]`date;bhreturn:first[top]`bhreturn); r:{[r;x] r[`$("rtn",string[first 1#x])]:(1_x)`return;r} over enlist[r],til[count[top]],'top; `:results/rtntop.csv 0:.h.tx[`csv;r];}; / * Batch functions: run a batch of experiments for multiple tickers and multiple * learner configurations. e.g. calling runbatch[] will train and test a learner * for each configuration, cross validated against each security. * NOTE: This may take a long time. Results are written to disk incrementally. \ batch_:{[fn] tickers:ntickers#ssr[;".csv";""] each value "\\ls ",.trading.datadir; ({enlist[`$x]} each tickers),'fn peach tickers}; batch:{[lrnargs] fn:{rtns:1_x; ([] ticker:count[rtns]#first[x]; returns:rtns)}; (lrnargs,) each (,/) fn each batch_[.trading.traintest[;lrnargs;5]]}; batchwrap:{[x;y] r:x,batch[y]; `:results/results.csv 0:.h.tx[`csv;r]; r}; runbatch:{ kparams:`alpha`epsilon`gamma`episodes; dparams:kparams!( .1 * 1 _ til 10; 0.1; .1 * 1 _ til 10; 100); params:flip kparams!flip (cross/) (dparams[kparams]); batchwrap over enlist[0#params],params}; / * Run a batch of random policies on various tickers * @param {int} iters - number of iterations over all tickers \ randbatch:{[iters] r:(,/) {batch_[.trading.randtest]} each til iters; `:results/results.csv 0:.h.tx[`csv;flip `ticker`returns!flip r]};
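A minimal usage sketch of the .qlearner API defined above; the state names, action names, parameters and reward are hypothetical:
store:.qlearner.init_learner[`buy`sell`hold;`up`down;0.3;0.1;0.9]   / actions, states, alpha, epsilon, gamma
store:.qlearner.init_state[store;`up`down!10b]                      / start in state "up"
store:.qlearner.next_action[store]                                  / epsilon-greedy action selection
store:.qlearner.observe[store;1.0;`up`down!01b]                     / reward 1.0, transition to state "down"
store`qhat                                                          / inspect the estimated action values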
Publish-subscribe messaging with Solace¶ Message brokers have been around for decades and are deployed in most, if not all, large enterprises. Like any technology that has been around for so long, message brokers have seen their fair share of transformation, especially in their capabilities and use cases. This paper covers different ways applications communicate with each other, the publish/subscribe (pub/sub) messaging pattern and its advantages, benefits of implementing pub/sub with kdb+, and finally, how you can implement pub/sub with kdb+ using Solace’s PubSub+ event broker. Message brokers, or event brokers as they are commonly known these days, form the middle layer responsible for transporting your data. They differ from your databases: they are not meant for long-term storage. Event brokers specialize in routing your data to the interested applications. Essentially, event brokers allow your applications to communicate with each other without having to worry about data routing, message loss, protocol translation, and authentication/authorization. Why do processes communicate with each other?¶ Monolithic applications were big pieces of software that rarely worried about other applications and were largely self-contained. Most, if not all, of the business logic and interactions with other applications were encapsulated in a single gigantic application. The monolithic architecture had its advantages: it allowed full control of the application without having to rely on other teams. It also simplified some things, since your application did not have to worry about interacting with other applications. When it came to deployment, all you had to do was roll out this one (albeit giant) piece of code. However, as business and applications grew, the monolithic architecture started showing several pain points. In particular, it did not scale well. It was difficult to manage, troubleshoot, and deploy. Adding new features became a tedious and risky task that put the entire application at risk. Many companies have started decomposing monolithic applications into smaller components or services where each application aligns with a business unit or service. Several companies have broken down applications even further into microservices, which are small-scale applications meant to manage one task only. As applications were broken down, demand grew for a way for them to interact with each other. Applications need to communicate with other applications to share data and stay in sync. Where a monolithic application could do everything itself, multiple applications now need to rely on each other and must share data to accomplish that. This is just one example of why applications need to talk to each other. There are many others, but at the core of it, each system architecture consists of multiple types of applications – databases, caches, load balancers, API gateways, etc. – and these cannot exist in isolation. They need to talk to each other. Most applications need to store/retrieve data into/from a database. Most Web applications are a front end with a load balancer that routes Web traffic to various backend servers. These applications communicate with each other to make the overall architecture work. But how? Interprocess communication¶ Applications directly communicating with each other is known as interprocess communication (IPC). A typical kdb+ architecture consists of several kdb+ processes sharing data with each other using IPC.
A kdb+ server process listens on a specific port; a client kdb+ process establishes a connection with the server process by opening a handle: q)h:hopen `::5001 q)h "3?20" 1 12 9 q)hclose h Synchronous vs asynchronous¶ As the example above shows, applications can communicate with each other using IPC. Specifically, in q, we can open a handle to a process, execute a remote query, and then close the handle. The previous example demonstrates how synchronous communication works. There are two popular ways of communication: synchronous and asynchronous. Both allow applications to communicate with each other, but with a subtle yet powerful difference. In synchronous communication, when an application issues a request or executes a remote query, it cannot do anything else in the meantime but wait for the response. Conversely, with asynchronous communication, your application is free to continue its operations while it waits for a response from the remote server. In q, you can execute remote queries asynchronously using a negative handle: q)h:hopen `::5001 q)(neg h) "a:3?20" / send asynchronously, no result q)(neg h) "a" / again no result q)h "a" / synchronous, with result 0 17 14 Asynchronous communication is extremely powerful because it frees up resources to be utilized for additional processing instead of sitting idle waiting for a response from the server. However, there are some use cases when you must use synchronous communication, especially when you need to be certain your remote query was executed by the server. For example, when authenticating users, you would use synchronous requests to make sure the user is authenticated before giving her access to your system. But for the most part it is best to use asynchronous messaging. Queueing¶ As applications scale, they are responsible for processing large quantities of messages. As these messages are sent to your application, in some cases, they are expected to be processed in order whereas in other cases, the order is not important. For example, an alerting system responsible for sending a push notification every time some value crosses a threshold might not care about the order of events. Its job is to simply check whether the value is above or below a threshold and respond. On the other hand, a payment-processing application at a bank certainly cares about message order. It would not want to let a customer continue withdrawing money from their account if they had already withdrawn all the money earlier. Queues are data structures used to order messages in sequence. Queue semantics make it a FIFO (First In First Out) data structure in which events are dequeued in the order they were enqueued. For example, if you have multiple transaction orders from 10:00am to 11:00am being enqueued, they will be consumed by a subscriber in that order starting with the first transaction at 10:00am. Besides providing sequential ordering, queues also provide persistence. A subscriber meant to process order transactions and persist them to a database might crash and not come back online for 10 minutes. The orders generated in those 10 minutes are still required to be processed albeit at a delay. A queue will persist those messages and make them available to the subscriber once it comes back online to avoid any data loss. Queues are commonly used in a pub/sub architecture (see below) to provide ordering and persistence. Request/reply¶ Communication can be either unidirectional or bidirectional. 
Request/reply¶

Communication can be either unidirectional or bidirectional. The IPC example above is a unidirectional example where one process sends a message to another process directly. Occasionally you need the communication between your processes to be bidirectional, where the second process is required to provide a response. For example, issuing a request to query a database returns a response consisting of the result of your query.

For such scenarios, the asynchronous request/reply pattern uses queues to store the requests and responses. In this version, the first application issues a request, which is persisted in a queue for the second application to consume from. The payload of the message could contain a replyTo parameter which tells the second application to which queue it should publish the response. The first application listens to that reply queue and consumes the response whenever it is available. Asynchronous request/reply provides the benefits of both bidirectional communication and asynchronous messaging.

Pub/sub with kdb+¶

Publish/subscribe supports bidirectional, asynchronous, many-to-many communication between applications. It is commonly used to distribute data between numerous applications in enterprises. The pub/sub messaging pattern can be implemented through the use of an event broker. In such an architecture, the event broker acts as an abstraction layer to decouple the publishers and consumers from each other. At a high level, the pub/sub messaging pattern consists of three components:

- Publishers - publish data asynchronously to topics or queues without worrying about which process will consume this data
- Consumers - consume data from topics or queues without worrying about which process published the data
- Event Broker - connects to publishers and consumers; they do not connect to each other.

With so many messaging patterns to choose from, why invest time and resources in implementing the pub/sub messaging pattern in your architecture? Pub/sub has numerous advantages that make your architecture efficient, robust, reliable, scalable and cloud-ready.

Efficient data distribution¶

Data is the lifeline of your kdb+ stack. It flows from feed handlers to tickerplants to downstream subscribers and then eventually to historical databases. However, kdb+ stacks do not operate in isolation. Depending on how the company is structured, there might be a central Market Data team responsible for managing connectivity with external data vendors, writing Java feed handlers to capture market data, and then distributing it over an event broker for other teams to consume. The Tick Data team can be just one of many downstream consumers interested in all or a subset of the data published by the Market Data team. Similarly, the Tick Data team can enrich the raw data by generating stats and distributing them to other downstream consumers via the same event broker.

The key idea here is that the publisher only has to worry about publishing data to the event broker and does not get bogged down in the details of how many downstream consumers there are, what their subscription interests are, what protocol they want to use and so forth. This is all the responsibility of the event broker. Similarly, the subscribers do not need to worry about which publisher they need to connect to. They continue to connect to the same event broker and get access to real-time data.

Moreover, there are events that can lead to data spikes and impact applications. For example, market volumes were multiple times over their usual daily average in March 2020 during Covid-19.
Anyone not using an event broker to manage data distribution would have felt the impact of these sudden spikes in data volume. Brokers act as shock absorbers, dealing with sudden spikes and making your architecture robust and resilient.

Avoid tight coupling¶

Having processes communicate directly with each other creates a tightly-coupled architecture where each process is dependent on one or more other processes. Such an architecture gets harder to maintain as it scales, since it requires more coordination between processes. In the context of kdb+, sharing data between your multiple tickerplants, RDBs and stats processes makes them depend on each other. A stats process that consumes raw data directly from an RDB and generates stats on that raw data is dependent on the RDB. If something were to happen to the RDB process, it would also impact the stats process. Additionally, if a change were required in the RDB process, your developers would need to ensure that it did not impact any downstream processes that depend on the RDB process. This prevents you from making quick changes to your architecture. Each change you make introduces risks to downstream processes.

Instead, each kdb+ process, whether it be an RDB or a stats process, can communicate via an event broker using the pub/sub messaging pattern. The tickerplant can publish data to the event broker without worrying about the connection details of downstream subscribers and their subscriptions. Both the RDB and the stats process can subscribe to updates from the event broker without knowing a thing about the tickerplant. The stats process can generate minutely stats and then republish that data to the event broker on a different topic, allowing other processes to subscribe to those stats. Doing so keeps your kdb+ processes independent, allows them to be flexible so they can easily adapt to new requirements, and reduces the possibility of accidentally introducing bugs.

Easily integrate other applications with kdb+¶

Most event brokers support a wide range of APIs and protocols, so applications can publish or consume data using APIs and protocols that kdb+ may not natively support. For example, our stats process from earlier is still consuming raw data from the event broker, generating stats on that data and then republishing it to a different topic. Now we have the PNL team interested in displaying these stats on a dynamic web dashboard using JavaScript. They can easily use the event broker’s JavaScript API to consume the stats live and display them on a shiny HTML5 dashboard without having to learn anything about kdb+.

As another example, your company might have IoT devices producing a lot of timeseries data that needs to be captured in your kdb+ database. IoT devices typically use the lightweight MQTT protocol to transfer events. Using an event broker that supports the MQTT protocol would allow you to easily push data from your IoT devices to the event broker and then persist it in a kdb+ database to be analyzed in real time or later.

Using an event broker puts the kdb+ estate at the center of your big data ecosystem and allows different teams to exploit its capabilities by integrating with it. Don’t let your kdb+ estate sit in isolation!

Zero message loss¶

Not all events are equal. Some events do not matter at all and are not worth capturing. Some, such as market data, are important, but there is tolerance for some loss. Then there are some events, such as order data, that are extremely important and cannot be dropped.
A PNL dashboard can tolerate some loss of data, but an execution management system losing order data can result in monetary loss. Where there is zero tolerance for message loss, an event broker can provide that guarantee. Event brokers use local persistence and acknowledgements to provide a guaranteed flow between publishers and subscribers. Publishers can publish data to the event broker and receive an acknowledgement back when the broker has persisted the data locally. When a subscriber comes online and requests that data, it receives it and sends an acknowledgement back to the broker, letting it know it is safe to delete the data from its local storage.

Global data distribution and data consolidation via event mesh¶

Very rarely does data just sit in one local data center, especially at global financial firms using kdb+ for the tick data store. As discussed earlier, there are feed handlers deployed globally, potentially co-located at popular exchanges. Some institutions even have their kdb+ instances co-located to capture the data in real time with ultra-low latency, and then ship it back to their own datacenters. Storing your tick data in a colo is an expensive process, especially when it eventually needs to be shipped back to your data center to be analyzed or accessed by other applications.

A cost-effective alternative is to use event brokers deployed locally to form an event mesh: a configurable and dynamic infrastructure layer for distributing events among decoupled applications, cloud services and devices. It enables event communications to be governed, flexible, reliable and fast. An event mesh is created and enabled through a network of interconnected event brokers. Modern event brokers can be deployed in different regions and environments (on-premises or cloud) yet still be connected together to move data seamlessly from one environment in one region, such as a colo in New Jersey, to a different environment in another region, such as an AWS region in Singapore.

Using an event mesh to distribute your data out of the colos to your own data center/s in different regions gives you a cost-effective way to keep your core tick data store in your own datacenter rather than at the colos. You can consolidate data from different colos in different regions into your central tick data store. Conversely, you might want to localize your central tick data stores for clients, such as researchers, spread across the globe, giving them local access to data and speeding up their queries. Again, this can be done by distributing the data over an event mesh formed by event brokers deployed locally.

Cloud migration¶

In the last decade, there has been a strong push to adopt cloud across industries. While many companies of various sizes and in different industries have chosen to fully adopt cloud, global financial companies are still in the process of migrating due to their size and strict regulatory requirements. With the rise in cloud adoption has come a rise in vendors offering cloud services, chiefly AWS, GCP, and Azure. Again, many companies have decided to pick one of these popular options, but others have chosen either a hybrid-cloud or a multi-cloud route. With hybrid cloud, companies still have their own datacenter but limit its use to critical applications only; non-critical applications are deployed in the cloud.
Other companies have decided to go with not just one but at least two cloud providers, to avoid depending heavily on any one of them. As you can see, this adds complexity to how data is shared across an organization. You no longer have to worry only about applications deployed in different geographical regions, but also about applications spread across multiple environments. Again, this is where an event mesh (see above) can help.

As you gradually migrate your applications from on-premises to the cloud, you will often run two instances of an application in parallel for some period. Again, you can use an event broker to share the data easily between the two instances in real time. Once you have migrated some applications, your central kdb+ tick data store located in your local datacenter in Virginia needs to be able to share data with the new on-demand kdb+ instances in AWS spun up by your researchers in Hong Kong running machine-learning algorithms across real-time tick data. And you need to ensure the data is shared in a cost-effective and federated manner. Ideally, data should only traverse from one environment to another if it is requested. You should avoid unnecessary network costs and only replicate data from one environment to another if there are active subscribers. And from a security point of view, you do not want to replicate sensitive data to environments unless required.

Restricting access¶

If you have worked with market data before, you know how expensive it can be, and how many market-data licenses you need to be aware of and navigate when giving users and applications access to real-time market data. Market data access is limited by strict licenses, so to avoid fees from data vendors and exchanges you need to track not just which applications are using the data, but who has potential access to it. Only the applications that require the data and are authorized to access the data should be able to consume it.

This is where Access Control Lists (ACLs) help. Event brokers allow you to lock down exactly which applications have access to the data in a transparent manner. You can control which topics publishers can publish to and which topics subscribers can subscribe to, to make sure no one is accessing any data they are not authorized to access. For example, if all the market data in our organization is published to topics of this topology: EQ/{region}/{exchange}/{stock} we can restrict applications to US equities data by limiting them to the EQ/US/> hierarchy. Additionally, we can grant a subscriber access to data from the New York Stock Exchange (NYSE) only, as: EQ/US/NYSE/> . Having strong ACL profiles provides transparency and strong security. And with market data, it helps avoid an expensive bill from exchanges and market data vendors!

Implement with kdb+ and Solace¶

Solace PubSub+ event broker¶

Before picking a suitable broker for your kdb+ stack, gather all your requirements and cross-reference them with the features provided by different brokers. Solace’s PubSub+ broker is heavily used in financial services. It supports open APIs and protocols, dynamic topics, wildcard filtering, and event mesh. Its support for high-throughput messaging and low latency makes it a suitable companion for kdb+. Some core features:

- Rich hierarchical dynamic topics and wildcard filtering - PubSub+ topics are dynamic, so you do not need to create them manually. They are hierarchical, and consumers can use wildcards to filter data on each of the levels in the topic.
- In-memory and persistent messaging - Use in-memory (direct) messaging for high throughput use-cases and persistent (guaranteed) messaging for critical messages such as order data. - Event mesh - Distribute data dynamically across regions and environments by linking different PubSub+ brokers to form an event mesh. KX recently published an open-source kdb+ interface to Solace as one of its Fusion interfaces. This interface, or API, makes it extremely easy to use PubSub+ event broker from within your q code. Currently, the API supports: - Connecting to a PubSub+ instance - Creating and destroying endpoints - Performing topic-to-queue mapping with wildcard support - Publishing to topics and queues - Subscribing to topics and binding to queues - Setting up direct and guaranteed messaging - Setting up request/reply messaging pattern Spin up a PubSub+ instance¶ The easiest way to deploy a PubSub+ instance is to sign up for a Solace Cloud account and spin up a 60-day free instance. Alternatively, you can setup a local PubSub+ instance via Docker. PubSub+ Standard Edition is free to use in production as well. On Solace Cloud, we can create a free service very quickly by selecting the ‘free’ tier and picking AWS as our cloud provider. Below I have selected US East as my availability zone and named my service demo . Because this is a free tier, we are given a very lightweight PubSub+ instance with the following configuration: - 50 connections - 1GB storage - 4Mbps network throughput - Shared tenancy (as opposed to dedicated) - Single node deployment (as opposed to high availability) Because we have shared tenancy, the service will be up in just few seconds. With a PubSub+ instance up and running, we are ready to install the kdb+ interface to Solace. We will be using it on an AWS EC2 instance. Connect to PubSub+ from q¶ With the API installed, you are ready to start using it from q. First connect to the Solace Cloud service created in the previous section. The API provides useful examples to help you get started. To establish a connection, we can use the sol_capabilities.q example. But first add our connection information to sol_init.q . This file contains several initialization settings, including several connection defaults. It is also responsible for defining and registering several callback functions. You can find connection details for your Solace Cloud service in the Connect tab. Create a directory called cert within your examples directory and download the PEM file to it. Note that if you are trying to connect to a non-secure SMF host, you do not need to download the PEM file or reference it. You will also need username, password, Message VPN, and Secured SMF Host information. 
Update values in sol_init.q with these details: default.host :"tcps://mr2ko4me0p6h2f.messaging.solace.cloud:20642" default.vpn :"msgvpn-oyppj81j1ov" default.user :"solace-cloud-client" default.pass :"v23cck5rca6p3n1eio2cquqgte" default.trust: "cert" Once we have done that, we can test our connection: (kdb) [ec2-user@ip-172-31-70-197 examples]$ q sol_capabilities.q -opt SESSION_PEER_SOFTWARE_VERSION KDB+ 3.6 2019.08.20 Copyright (C) 1993-2019 Kx Systems l64/ 2(16)core 7974MB ec2-user ip-172-31-70-197.ec2.internal 172.31.70.197 EXPIRE 2021.05.10 [email protected] KOD #4170793 ### Registering session event callback ### Registering flow event callback ### Initializing session SESSION_HOST | mr2ko4me0p6h2f.messaging.solace.cloud:20642 SESSION_VPN_NAME| msgvpn-oyppj81j1ov SESSION_USERNAME| solace-cloud-client SESSION_PASSWORD| v23cck5rca6p3n1eio2cquqgte [22617] Solace session event 0: Session up ### Getting capability : SESSION_PEER_SOFTWARE_VERSION `9.3.1.5 ### Destroying session We were able to register a session callback and flow event callback, initialize a session, get a confirmation that our session is up, get the PubSub+ version our Solace Cloud service is running and finally, destroy the session before exiting. Publish messages to PubSub+¶ In PubSub+, you can either publish to a topic or a queue. - topic - facilitates the pub/sub messaging pattern because it allows multiple subscribers to subscribe to the topic - queue - allows only one consumer to consume from that queue and hence implements a point-to-point messaging pattern We are more interested in the pub/sub messaging pattern, so we publish to a topic. PubSub+ offers two service qualities. - Direct messaging - Messages are not persisted to disk and hence, are less reliable but offer higher throughput and lower latency. - Persistent messaging - Also known as guaranteed messaging, involves persistence and acknowledgements to guarantee zero end-to-end message loss. The additional overhead of persistence and acknowledgements makes persistent messaging more suitable for critical data distribution in low throughput use-cases. For example, you should use direct messaging for market data distribution and guaranteed messaging for order flows. The sol_pub_direct.q example shows how to publish a direct message. The example loads the sol_init.q script and calls the .solace.sendDirect function with topic and data as parameters: \l sol_init.q -1"### Sending message"; .solace.sendDirect . 0N!params`topic`data; exit 0 Let’s publish "Hello, world" to the topic data/generic/hello . (kdb) [ec2-user@ip-172-31-70-197 examples]$ q sol_pub_direct.q -topic "data/generic/hello" -data "Hello, world" ### Sending message `data/generic/hello`Hello, world ### Destroying session Similarly, we can publish a guaranteed message by calling the function .solace.sendPersistent as shown in the sol_pub_persist.q example. Create a queue and map topics to it¶ Just as you can publish data to either topics or queues, you can also consume data by either subscribing to a topic or instead mapping one or more topics to a queue and binding to that queue. Subscribing directly to a topic corresponds to direct messaging and is used in high-throughput, low-latency use cases, since there is no additional overhead of persistence and acknowledgements. For a higher service quality, promote direct messages to persistent messages by using topic-to-queue mapping. 
Topic-to-queue mapping is simply using a queue and mapping one or more topics to that queue to enqueue all of the messages sent to its topics. Queues provide persistence and ordering across topics so if the subscriber disconnects, no data is lost. The kdb+ interface to Solace allows you to create queues, map topics to them and destroy queues as well. This is nicely demonstrated in the examples q sol_endpoint_create.q create a queue q sol_topic_to_queue_mapping.q map topics to it q sol_endpoint_destroy.q destroy the queue once done Create a queue hello_world . (kdb) [ec2-user@ip-172-31-70-197 examples]$ q sol_endpoint_create.q -name "hello_world" ### Creating endpoint ### Destroying session In the PubSub+ UI we can confirm our queue was created. Now map the data/generic/hello topic to it. (kdb) [ec2-user@ip-172-31-70-197 examples]$ q sol_topic_to_queue_mapping.q -queue "hello_world" -topic "data/generic/hello" ### Destroying session Not much output but once again, confirm via PubSub+ UI. Rerun the previous example to publish data to the same topic (data/generic/hello ) and see if it gets enqueued in our newly created queue. (kdb) [ec2-user@ip-172-31-70-197 examples]$ q sol_pub_direct.q -topic "data/generic/hello" -data "Hello, world" ### Sending message `data/generic/hello`Hello, world ### Destroying session Check the queue again. This time we find one message enqueued. Delete the queue by calling .solace.destroyEndpoint as shown in sol_endpoint_destroy.q . (kdb) [ec2-user@ip-172-31-70-197 examples]$ q sol_endpoint_destroy.q -name "hello_world" ### Destroying endpoint ### Destroying session Subscribe to messages from PubSub+¶ The example sol_sub_direct.q shows how you can subscribe to a topic directly. When subscribing, we need to define a callback function which will be executed whenever a message is received. In this example, it simply prints the message to console. The actual subscription is implemented via the .solace.subscribeTopic function. The callback function, subUpdate , is registered via the .solace.setTopicMsgCallback function. Start a subscriber and have it subscribe to topic data/generic/hello . (kdb) [ec2-user@ip-172-31-70-197 examples]$ q sol_sub_direct.q -topic "data/generic/hello" ### Registering topic message callback ### Subscribing to topic : data/generic/hello ### Session event eventType | 0i responseCode| 0i eventInfo | "host 'mr2ko4me0p6h2f.messaging.solace.cloud:20642', hostname 'mr2ko4me0p6h2f.messaging.solace.cloud:20642' IP 3.88.1 (host 1 of 1) (host connection attempt 1 of 1) (total connection attempt 1 of 1)" Your q session is still running, and awaiting messages. To publish a message, open a new terminal and run the earlier publishing example. (kdb) [ec2-user@ip-172-31-70-197 examples]$ q sol_pub_direct.q -topic "data/generic/hello" -data "Hello, world" ### Sending message `data/generic/hello`Hello, world ### Destroying session As soon as this message is published, the subscriber running in our original terminal will receive the message and print it to the console. ### Message received payload | "Hello, world" dest | `data/generic/hello isRedeliv| 0b isDiscard| 0b isRequest| 0b sendTime | 2000.01.01D00:00:00.000000000 We can see properties of the message such as payload, destination, whether the message is being redelivered, timestamp and so on. Instead of subscribing directly to a topic, we can bind to a queue with the topic mapped to it. Use the queue hello_world created in the previous section with our topic data/generic/hello mapped to it. 
Use the example sol_sub_persist.q to bind to a queue. This example resembles sol_sub_direct.q , but here we need to send an acknowledgement after receiving the message via .solace.sendAck . Bind to a queue via .solace.bindQueue . (kdb) [ec2-user@ip-172-31-70-197 examples]$ q sol_sub_persist.q -dest "hello_world" ### Registering queue message callback [16958] Solace flowEventCallback() called - Flow up (destination type: 1 name: hello_world) q### Session event eventType | 0i responseCode| 0i eventInfo | "host 'mr2ko4me0p6h2f.messaging.solace.cloud:20642', hostname 'mr2ko4me0p6h2f.messaging.solace.cloud:20642' IP 3.88.1 (host 1 of 1) (host connection attempt 1 of 1) (total connection attempt 1 of 1)" ### Flow event eventType | 0i responseCode| 200i eventInfo | "OK" destType | 1i destName | "hello_world" Run the sol_pub_direct.q example again to publish a direct message to the topic data/generic/hello . Our consumer should consume it from the queue. ### Message received payload | "Hello, world" dest | `hello_world destType | 0i destName | "data/generic/hello" replyType | -1i replyName | "" correlationId| "" msgId | 1 As soon as you publish the data, it will be enqueued in the queue and then consumed by our consumer bound to that queue. The consumer will print the message and its properties to the terminal. The destination in this example is our queue hello_world , not the topic data/generic/hello as earlier. destType being 0i tells us it is a queue. A topic is represented by 1i . We are not restricted to a single consumer. To fully exploit pub/sub messaging, we can subscribe multiple consumers to the same topic and/or queue. They will get the same message without the publisher having to publish multiple times. Example¶ Market data simulator¶ The following example shows how you can distribute market data over Solace PubSub+ and capture that data in real-time in kdb+ using the kdb+ interface to Solace. A simple market data simulator in Java generates random L1 market data for some preconfigured securities. It publishes data to PubSub+ topics of this syntax: <assetClass>/marketData/v1/<country>/<exchange>/<sym> For example, AAPL data will be published on EQ/marketData/v1/US/NASDAQ/AAPL and IBM data on EQ/marketData/v1/US/NYSE/IBM . By default, the simulator is configured to publish data for multiple stocks from four exchanges: NYSE, NASDAQ, LSE, and SGX. Two example messages for AAPL and IBM: { "symbol":"AAPL", "askPrice":250.3121, "bidSize":630, "tradeSize":180, "exchange":"NASDAQ", "currency":"USD", "tradePrice":249.9996, "askSize":140, "bidPrice":249.6871, "timestamp":"2020-03-23T09:32:10.610764-04:00" } { "symbol":"IBM", "askPrice":101.0025, "bidSize":720, "tradeSize":490, "exchange":"NYSE", "currency":"USD", "tradePrice":100.5, "askSize":340, "bidPrice":99.9975, "timestamp":"2020-03-23T09:32:09.609035-04:00" } This specific topic hierarchy is used to exploit Solace PubSub+’s rich topic hierarchy, which provides strong wildcard support and advance filtering logic. Topic architecture best practices The simulator uses the Solace JMS API to publish direct messages to PubSub+. (Solace also offers a proprietary Java API.) himoacs/market-data-simulator Streaming market data simulator for your projects Real-time subscriber¶ In a typical kdb+ stack, a market datafeed handler publishes data to a tickerplant which then pushes it to one or more real-time subscribers such as an RDB and/or a stats process. Here we implement an RDB to capture real-time updates and insert them in a prices table. 
How to consume the market data messages being published to PubSub+? We can map topics to a queue and bind to it, or just subscribe directly to a topic. With market data we want to avoid persistence, so we will subscribe directly to a topic. Solace PubSub+ supports subscriptions to an exact topic, or a generic topic using wildcards. There are two wildcards: * abstract away one level from the topic > abstract away one or more levels Both wildcards can be used together and * can be used more than once. For example, we know our publisher is publishing pricing data to a well-defined topic of the following topology: <assetClass>/marketData/v1/<country>/<exchange>/<sym> This lets our subscriber filter on several fields. For example, we can subscribe to EQ/> all equities */*/*/*/NYSE/> all NYSE securities EQ/marketData/v1/US/> all US equities The wildcarding is extremely powerful and allows subscribers to receive filtered data rather than filter data themselves. For our example, we will subscribe to equities data from all countries by subscribing to: EQ/marketData/v1/> . In our q code, we define our topic. topic:`$"EQ/marketData/v1/>"; Then we create an empty table called prices to store our updates. prices:flip `date`time`sym`exchange`currency`askPrice`askSize`bidPrice`bidSize`tradePrice`tradeSize! `date`time`symbol`symbol`symbol`float`float`float`float`float`float$\:() Define a callback function subUpdate to parse incoming JSON market data updates. subUpdate:{[dest;payload;dict] // Convert binary payload a:"c"$payload; // Load JSON to kdb table b:.j.k "[",a,"]"; // Update types of some of the columns b:select "D"$date,"T"$time,sym:`$symbol,`$exchange,`$currency,askPrice, askSize,bidPrice,bidSize,tradePrice,tradeSize from b; // Insert into our global prices table `prices insert b; } Of the three arguments, payload is the actual pricing data. subUpdate converts the binary payload to characters and loads JSON data into a kdb+ row using .j.k ; then updates some of the column types and inserts the row in the prices table. Register the callback function and subscribe to the topic. .solace.setTopicMsgCallback`subUpdate .solace.subscribeTopic[topic;1b] That’s it! Full code: // Listen to a Solace topic and capture all the raw records in real time // Load sol_init.q which has all the PubSub+ configurations \l sol_init.q // Topic to subscribe to topic:`$"EQ/marketData/v1/>" // Global table for capturing L1 quotes and trades prices:flip `date`time`sym`exchange`currency`askPrice`askSize`bidPrice`bidSize`tradePrice`tradeSize! `date`time`symbol`symbol`symbol`float`float`float`float`float`float$\:() -1"### Subscribing to topic : ",string topic; // Callback function for when a new message is received subUpdate:{[dest;payload;dict] // Convert binary payload a:"c"$payload; // Load JSON to kdb table b:.j.k "[",a,"]"; // Update types of some of the columns b:select "D"$date,"T"$time,sym:`$symbol,`$exchange,`$currency,askPrice, askSize,bidPrice,bidSize,tradePrice,tradeSize from b; // Insert into our global prices table `prices insert b; } // Assign callback function .solace.setTopicMsgCallback`subUpdate // Subscribe to topic .solace.subscribeTopic[topic;1b] With market data simulator running and publishing data, we can run the code above and capture those updates. 
(kdb) [ec2-user@ip-172-31-70-197 examples]$ q rdb.q
### Subscribing to topic : EQ/marketData/v1/>
### Session event
eventType   | 0i
responseCode| 0i
eventInfo   | "host 'mr2ko4me0p6h2f.messaging.solace.cloud:20642', hostname 'mr2ko4me0p6h2f.messaging.solace.cloud:20642' IP 3.88.1 (host 1 of 1) (host connection attempt 1 of 1) (total connection attempt 1 of 1)"

We can see our prices table has been created and is capturing our market data updates.

q)\a
,`prices
q)10#prices
date       time         sym  exchange currency askPrice askSize bidPrice bidSize tradePrice tradeSize
-----------------------------------------------------------------------------------------------------
2020.09.15 14:47:27.671 AAPL NASDAQ   USD      249.243  640     247.9999 370     248.6215   140
2020.09.15 14:47:27.672 FB   NASDAQ   USD      171.389  140     167.5756 260     169.4823   80
2020.09.15 14:47:27.673 INTC NASDAQ   USD      59.07073 110     58.33693 490     58.70383   280
2020.09.15 14:47:27.674 IBM  NYSE     USD      98.69098 670     98.19876 80      98.44487   140
2020.09.15 14:47:27.674 BAC  NYSE     USD      22.32329 680     22.04598 680     22.18464   410
2020.09.15 14:47:27.674 XOM  NYSE     USD      42.51064 50      42.193   480     42.35182   500
2020.09.15 14:47:27.675 VOD  LSE      GBP      97.71189 210     96.98179 480     97.34684   200
2020.09.15 14:47:27.675 BARC LSE      GBP      92.5173  720     91.13987 710     91.82858   470
2020.09.15 14:47:27.675 TED  LSE      GBP      135.2894 520     135.2894 630     135.2894   390
2020.09.15 14:47:27.676 DBS  SGX      SGD      19.40565 30      19.11673 410     19.26119   500

We can see all new updates are being inserted into the table, since the count is increasing.

q)count prices
192
q)count prices
228
q)count prices
240

As you can see, it is fairly simple to implement an RDB that consumes data from a PubSub+ broker rather than from another kdb+ process. And multiple processes can consume the data without changing any existing processes.

Stats process¶

Here we build a stats process that generates some meaningful stats from the raw data, every minute, just for US securities. Because our publisher (the market data simulator) uses a hierarchical topic, our stats process can easily filter for US equities data by subscribing to the topic EQ/marketData/v1/US/> .

To make things more interesting, once the stats are generated each minute we will publish them to PubSub+ for other downstream processes to consume. And this time we will also consume from a queue instead of subscribing to a topic. Create our queue and map the relevant topic to it:

// Market Data queue to subscribe to
subQueue:`market_data
topicToMap:`$"EQ/marketData/v1/US/>"

-1"### Creating endpoint";
.solace.createEndpoint[;1i]
  `ENDPOINT_ID`ENDPOINT_PERMISSION`ENDPOINT_ACCESSTYPE`ENDPOINT_NAME!`2`c`1,subQueue

-1"### Mapping topic: ", (string topicToMap), " to queue";
.solace.endpointTopicSubscribe[;2i;topicToMap]`ENDPOINT_ID`ENDPOINT_NAME!(`2;subQueue)

Then we create our prices table again, as in the previous example, but this time we also create a stats table to store our minutely stats.

// Global table for capturing L1 quotes and trades
prices:flip `date`time`sym`exchange`currency`askPrice`askSize`bidPrice`bidSize`tradePrice`tradeSize!
  `date`time`symbol`symbol`symbol`float`float`float`float`float`float$\:()

// Global table for stats
stats:flip (`date`sym`time,
  `lowAskSize`highAskSize,
  `lowBidPrice`highBidPrice,
  `lowBidSize`highBidSize,
  `lowTradePrice`highTradePrice,
  `lowTradeSize`highTradeSize,
  `lowAskPrice`highAskPrice`vwap)!
(`date`symbol`minute, `float`float, `float`float, `float`float, `float`float, `float`float, `float`float`float) $\:() Once again, callback function subUpdate holds the parsing logic. subUpdate:{[dest;payload;dict] // Convert binary payload a:"c"$payload; // Load JSON to kdb table b:.j.k "[",a,"]"; // Send ack back to Solace broker .solace.sendAck[dest;dict`msgId]; // Update types of some of the columns b:select "D"$date,"T"$time,sym:`$symbol,`$exchange,`$currency,askPrice, askSize,bidPrice,bidSize,tradePrice,tradeSize from b; // Insert into our global prices table `prices insert b; } As before, incoming messages are parsed and inserted into the prices table. Unlike before, we are also acknowledging the messages using .solace.sendAck . Register the callback function and bind it to our queue. // Assign callback function .solace.setQueueMsgCallback`subUpdate // Bind to queue .solace.bindQueue `FLOW_BIND_BLOCKING`FLOW_BIND_ENTITY_ID`FLOW_ACKMODE`FLOW_BIND_NAME!`1`2`2,subQueue So far, the realtime subscriber simply subscribes to raw updates and writes them to a table. Now, we shall generate minutely stats on raw data from the prices table and store those stats in the stats table. updateStats:{[rawTable] // Generate minutely stats on data from last minute `prices set rawTable:select from rawTable where time>.z.T-00:01; min_stats:0!select lowAskSize: min askSize, highAskSize: max askSize, lowBidPrice: min bidPrice, highBidPrice: max bidPrice, lowBidSize: min bidSize, highBidSize: max bidSize, lowTradePrice: min tradePrice, highTradePrice: max tradePrice, lowTradeSize: min tradeSize, highTradeSize: max tradeSize, lowAskPrice: min askPrice, highAskPrice: max askPrice, vwap:tradePrice wavg tradeSize by date, sym, time:1 xbar time.minute from rawTable; min_stats:select from min_stats where time=max time; // Inserts newly generated stats to global stats table `stats insert min_stats; // Get all the unique syms s:exec distinct sym from min_stats; // Generate topic we will publish to for each sym t:s!{"EQ/stats/v1/",string(x)} each s; // Generate JSON payload from the table for each sym a:{[x;y] .j.j select from x where sym=y}[min_stats;]; p:s!a each s; // Send the payload l:{[x;y;z] .solace.sendDirect[`$x[z];y[z]]}[t;p]; l each s; } Here we - trim the prices table to hold only data from last minute - generate minutely stats on the trimmed data - insert the stats into stats table - publish the stats to PubSub+ broker in JSON format using dynamic topics of topology EQ/stats/v1/{symbol_name} Finally, we set a timer to execute the updateStats function every minute. // Send generated stats every minute \t 60000 .z.ts:{updateStats[prices]} solace-pubsub.q Complete script Run the stats process! This time, we have two tables: prices and stats q)\a `s#`prices`stats After a minute has passed, we can see the stats table being populated with minutely stats for US stocks only. 
q)stats
date       sym  time  lowAskSize highAskSize lowBidPrice highBidPrice lowBidSize highBidSize lowTradePrice highTradePrice lowTradeSize highTradeSize lowAskPrice highAskPrice vwap
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2020.09.15 AAPL 15:32 60         720         180.4279    186.577      70         790         181.5788      187.0443       40           430           181.8058    189.3824     230.3561
2020.09.15 BAC  15:32 90         800         11.82591    12.18924     20         780         11.96047      12.21979       70           460           12.0796     12.34199     260.956
2020.09.15 FB   15:32 20         790         99.74284    102.7611     0          790         100.3953      104.0619       20           410           100.6463    105.3627     167.8411
2020.09.15 IBM  15:32 50         750         49.56313    51.59694     70         790         49.93766      51.85622       0            500           50.18766    52.1155      266.1778
2020.09.15 INTC 15:32 50         770         56.3434     60.07335     70         730         56.76917      60.22391       0            440           56.76917    60.44816     177.8193
2020.09.15 XOM  15:32 0          770         36.76313    37.64447     20         740         36.94868      37.73882       20           440           37.04106    38.15767     254.9052

Besides being stored locally in the stats table, these minutely stats are also being published to the PubSub+ broker. We can see that by starting another process that subscribes to the EQ/stats/> topic.

(kdb) [ec2-user@ip-172-31-70-197 examples]$ q sol_sub_direct.q -topic "EQ/stats/>"
### Subscribing to topic : EQ/stats/>
q)### Session event
eventType   | 0i
responseCode| 0i
eventInfo   | "host 'mr2ko4me0p6h2f.messaging.solace.cloud:20642', hostname 'mr2ko4me0p6h2f.messaging.solace.cloud:20642' IP 3.88.1 (host 1 of 1) (host connection attempt 1 of 1) (total connection attempt 1 of 1)"
### Message received
payload  | "[{\"date\":\"2020-09-15\",\"sym\":\"AAPL\",\"time\":\"15:36\",\"lowAskSize\":160,\"highAskSize\":720,\"lowBidPrice\":187.7551,\"highBidPrice\":193.4258,\"lowBidSize\":30,\"highBidSize\":740,\"lowTradePrice\":189.4145,\"highTradePrice\":194.8875,\"lowTradeSize\":60,\"highTradeSize\":480,\"lowAskPrice\":189.6513,\"highAskPrice\":196.3491,\"vwap\":308.9069}]"
dest     | `EQ/stats/v1/AAPL
isRedeliv| 0b
isDiscard| 0b
isRequest| 0b
sendTime | 2000.01.01D00:00:00.000000000

This example shows you can easily code a stats process that gets updates from a PubSub+ event broker instead of a tickerplant or a realtime subscriber. That makes the stats process rely solely on the broker and not on other kdb+ processes, and yields a loosely-coupled architecture. If you later need to add another stats process, you can do so effortlessly without modifying any existing processes. Moreover, these processes can be deployed on-premises or in the cloud, since they can easily get the real-time data from PubSub+ brokers deployed in an event mesh configuration. For example, the RDB process could be co-located with the market data feed handlers, with the stats process deployed in AWS.

Conclusion¶

In designing your architecture, you need to consider how your applications will communicate with each other. Depending on the size and complexity of your architecture, you can choose direct communication via IPC, bi-directional request/reply or the pub/sub messaging pattern. The pub/sub messaging pattern via an event broker allows you to efficiently distribute data at scale and take advantage of loose coupling, dynamic filtering, easy integration and event mesh. The Solace PubSub+ event broker and the KX open-source API bring the power of pub/sub messaging to kdb+.

Author¶

Himanshu Gupta is a solutions architect at Solace. He has experience working on both the buy and sell side as a tick data developer.
In these roles, he has worked with popular timeseries databases, such as kdb+, to store and analyze real-time and historical financial market data. Other papers by Himanshu Gupta
// @kind function
// @category graph
// @desc Update the contents of a functional node
// @param graph {dictionary} Graph originally generated using .ml.createGraph
// @param nodeId {symbol} Denotes the name of a functional node to be updated
// @param node {fn} A functional node
// @return {dictionary} The graph with the named functional node contents
//   overwritten
updNode:{[graph;nodeId;node]
  node,:(1#`)!1#(::);
  if[not nodeId in 1_exec nodeId from graph`nodes;'"invalid nodeId"];
  if[count key[node]except``function`inputs`outputs;'"invalid node"];
  oldNode:graph[`nodes]nodeId;
  if[`inputs in key node;
    if[(::)~node`inputs;node[`inputs]:(0#`)!""];
    if[-10h=type node`inputs;node[`inputs]:(1#`input)!enlist node`inputs];
    if[99h<>type node`inputs;'"invalid inputs"];
    inputEdges:select from graph[`edges]where destNode=nodeId,
      destName in key oldNode`inputs;
    graph:@[graph;`edges;key[inputEdges]_];
    inputEdges:flip[`destNode`destName!(nodeId;key node`inputs)]#inputEdges;
    graph:@[graph;`edges;,;inputEdges];
    inputEdges:select from inputEdges where not null sourceNode;
    graph:i.connectGraph/[graph;0!inputEdges];
    ];
  if[`outputs in key node;
    if[-10h=type node`outputs;
      node[`outputs]:(1#`output)!enlist node`outputs];
    if[99h<>type node`outputs;'"invalid outputs"];
    outputEdges:select from graph[`edges]where sourceNode=nodeId,
      sourceName in key oldNode`outputs;
    graph:@[graph;`edges;key[outputEdges]_];
    outputEdges:select from outputEdges where sourceName in key node`outputs;
    graph:@[graph;`edges;,;outputEdges];
    // select the four edge columns; the original duplicated destName here,
    // which signals 'dup - destNode,destName is the intended pair
    outputEdge:select sourceNode,sourceName,destNode,destName from outputEdges;
    graph:i.connectGraph/[graph;0!outputEdge];
    ];
  if[`function in key node;
    if[(1#`output)~key graph[`nodes;nodeId]`outputs;
      node[`function]:((1#`output)!enlist@)node[`function]::];
    ];
  graph:@[graph;`nodes;,;update nodeId from node];
  graph
  }

// @kind function
// @category graph
// @desc Delete a named function node
// @param graph {dictionary} Graph originally generated using .ml.createGraph
// @param nodeId {symbol} Denotes the name of a functional node to be deleted
// @return {dictionary} The graph with the named functional node removed
delNode:{[graph;nodeId]
  if[not nodeId in 1_exec nodeId from graph`nodes;'"invalid nodeId"];
  graph:@[graph;`nodes;_;nodeId];
  inputEdges:select from graph[`edges]where destNode=nodeId;
  graph:@[graph;`edges;key[inputEdges]_];
  outputEdges:select from graph[`edges]where sourceNode=nodeId;
  graph:@[graph;`edges;,;update sourceNode:`,sourceName:`,
    valid:0b from outputEdges];
  graph
  }

// @kind function
// @category graph
// @desc Add a configuration node to a graph
// @param graph {dictionary} Graph originally generated using .ml.createGraph
// @param nodeId {symbol} Denotes the name associated with the configuration
//   node
// @param config {fn} Any configuration information to be supplied to other
//   nodes in the graph
// @return {dictionary} A graph with the new configuration added to the
//   graph structure
addCfg:{[graph;nodeId;config]
  nodeKeys:``function`inputs`outputs;
  addNode[graph;nodeId]nodeKeys!(::;@[;config];::;"!")
  }

// @kind function
// @category graph
// @desc Update the contents of a configuration node
// @param graph {dictionary} Graph originally generated using .ml.createGraph
// @param nodeId {symbol} Denotes the name of a configuration node to be
//   updated
// @param config {fn} Any configuration information to be supplied to other
//   nodes in the graph
// @return {dictionary} The graph with the named configuration node contents
//   overwritten
updCfg:{[graph;nodeId;config]
  updNode[graph;nodeId](1#`function)!enlist config
  }

// @kind function
// @category graph
// @desc Delete a named configuration node
// @param graph {dictionary} Graph originally generated using .ml.createGraph
// @param nodeId {symbol} Denotes the name of a configuration node to be
//   deleted
// @return {dictionary} The graph with the named configuration node removed
delCfg:delNode

// @kind function
// @category graph
// @desc Connect the output of one node to the input to another
// @param graph {dictionary} Graph originally generated using .ml.createGraph
// @param sourceNode {symbol} Denotes the name of a node in the graph which
//   contains the relevant output
// @param sourceName {symbol} Denotes the name of the output to be connected to
//   an associated input node
// @param destNode {symbol} Name of a node in the graph which contains the
//   relevant input to be connected to
// @param destName {symbol} Name of the input which is connected to the output
//   defined by sourceNode and sourceName
// @return {dictionary} The graph with the relevant connection made between the
//   inputs and outputs of two nodes
connectEdge:{[graph;sourceNode;sourceName;destNode;destName]
  srcOutputs:graph[`nodes;sourceNode;`outputs];
  dstInputs:graph[`nodes;destNode;`inputs];
  if[99h<>type srcOutputs;'"invalid sourceNode"];
  if[99h<>type dstInputs;'"invalid destNode"];
  if[not sourceName in key srcOutputs;'"invalid sourceName"];
  if[not destName in key dstInputs;'"invalid destName"];
  edge:(1#`valid)!1#srcOutputs[sourceName]~dstInputs[destName];
  graph:@[graph;`edges;,;update destNode,destName,sourceNode,
    sourceName from edge];
  graph
  }

// @kind function
// @category graph
// @desc Disconnect an edge from the input of a node
// @param graph {dictionary} Graph originally generated using .ml.createGraph
// @param destNode {symbol} Name of the node containing the edge to be deleted
// @param destName {symbol} Name of the edge associated with a specific input
//   to be disconnected
// @return {dictionary} The graph with the edge connected to the destination
//   input removed from the graph.
disconnectEdge:{[graph;destNode;destName]
  if[not(destNode;destName)in key graph`edges;'"invalid edge"];
  edge:(1#`valid)!1#0b;
  graph:@[graph;`edges;,;update destNode,destName,sourceName:`,
    sourceNode:` from edge];
  graph
  }

================================================================================
FILE: ml_ml_graph_init.q SIZE: 414 characters
================================================================================

// graph/init.q - Load graph library
// Copyright (c) 2021 Kx Systems Inc
//
// Graph and Pipeline is a structural framework for developing
// q/kdb+ solutions, based on a directed acyclic graph.
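// Example (illustrative only: .ml.createGraph and .ml.addNode are defined in
// graph/graph.q, loaded below; the niladic createGraph call, the node names
// and the `output/`input edge names used here are assumptions):
//   g:.ml.createGraph[]
//   g:.ml.addCfg[g;`cfg;`paramA`paramB!(1;"abc")]          / configuration node
//   g:.ml.addNode[g;`node;`function`inputs`outputs!({count x};"!";"j")]
//   g:.ml.connectEdge[g;`cfg;`output;`node;`input]         / wire cfg -> node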
.ml.loadfile`:graph/utils.q
.ml.loadfile`:graph/graph.q
.ml.loadfile`:graph/pipeline.q
.ml.loadfile`:graph/modules/saving.q
.ml.loadfile`:graph/modules/loading.q
.ml.loadfile`:util/utils.q
.ml.i.deprecWarning`graph

================================================================================
FILE: ml_ml_graph_modules_loading.q SIZE: 3,393 characters
================================================================================

\d .ml

// Utility Functions for loading data

// @private
// @kind function
// @category loadingUtility
// @fileoverview Construct path to a data file
// @param config {dict} Any configuration information about the dataset being
//   loaded in
// @return {str} Path to the data file
i.loadFileName:{[config]
  file:hsym`$$[(not ""~config`directory)&`directory in key config;
    config`directory;
    "."],"/",config`fileName;
  if[()~key file;'"file does not exist"];
  file
  }

// @private
// @kind function
// @category loadingUtility
// @fileoverview Load splayed table or binary file
// @param config {dict} Any configuration information about the dataset being
//   loaded in
// @return {tab} Data obtained from splayed table or binary file
i.loadFunc.splay:i.loadFunc.binary:{[config]
  get i.loadFileName config
  }

// @private
// @kind function
// @category loadingUtility
// @fileoverview Load data from csv file
// @param config {dict} Any configuration information about the dataset being
//   loaded in
// @return {tab} Data obtained from csv
i.loadFunc.csv:{[config]
  (config`schema;config`separator)0: i.loadFileName config
  }

// @private
// @kind function
// @category loadingUtility
// @fileoverview Load data from json file
// @param config {dict} Any configuration information about the dataset being
//   loaded in
// @return {tab} Data obtained from json file
i.loadFunc.json:{[config]
  .j.k first read0 i.loadFileName config
  }

// @private
// @kind function
// @category loadingUtility
// @fileoverview Load data from HDF5 file
// @param config {dict} Any configuration information about the dataset being
//   loaded in
// @return {tab} Data obtained from HDF5 file
i.loadFunc.hdf5:{[config]
  if[not`hdf5 in key`;@[system;"l hdf5.q";{'"unable to load hdf5 lib"}]];
  if[not .hdf5.ishdf5 filePath:i.loadFileName config;
    '"file is not an hdf5 file"
    ];
  if[not .hdf5.isObject[filePath;config`dname];'"hdf5 dataset does not exist"];
  .hdf5.readData[filePath;config`dname]
  }

// @private
// @kind function
// @category loadingUtility
// @fileoverview Load data from ipc
// @param config {dict} Any configuration information about the dataset being
//   loaded in
// @return {tab} Data obtained via IPC
i.loadFunc.ipc:{[config]
  h:@[hopen;config`port;{'"error opening connection"}];
  ret:@[h;config`select;{'"error executing query"}];
  @[hclose;h;{}];
  ret
  }

// @private
// @kind function
// @category loadingUtility
// @fileoverview Load data from config dictionary
// @param config {dict} Any configuration information about the dataset being
//   loaded in
// @return {dict} Data obtained from config dictionary
i.loadFunc.process:{[config]
  if[not `data in key config;'"Data to be used must be defined"];
  config`data
  }

// @private
// @kind function
// @category loadingUtility
// @fileoverview Load data from a defined source
// @param config {dict} Any configuration information about the dataset being
//   loaded in
// @return {dict} Data obtained from a defined source
i.loadDataset:{[config]
  if[null func:i.loadFunc config`typ;'"dataset type not supported"];
  func config
  }
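// Example (illustrative only; the directory, file name and schema below are
// hypothetical): a config dictionary for the csv loader above, dispatched
// through i.loadDataset via the typ key
//   config:`typ`directory`fileName`schema`separator!
//     (`csv;"/data";"prices.csv";"DSFF";enlist",")
//   i.loadDataset config          / resolves to i.loadFunc.csv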
// Loading functionality

// @kind function
// @category loading
// @fileoverview Node to load data from a defined source
// @return {dict} Node in graph to be used for loading data
loadDataSet:`function`inputs`outputs!(i.loadDataset;"!";"+")

================================================================================
FILE: ml_ml_graph_modules_saving.q SIZE: 3,499 characters
================================================================================

\d .ml

// Utility Functions for saving data

// @private
// @kind function
// @category savingUtility
// @fileoverview Construct path to location where data is to be saved
// @param config {dict} Any configuration information about the dataset being
//   saved
// @return {str} Path to a file location
i.saveFileName:{[cfg]
  file:hsym`$$[`dir in key cfg;cfg`dir;"."],"/",cfg`fname;
  if[not ()~key file;'"file exists"];
  file
  }

// @private
// @kind function
// @category savingUtility
// @fileoverview Save data as a text file
// @param config {dict} Any configuration information about the dataset being
//   saved
// @param data {tab} Data which is to be saved
// @return {null} Data is saved as a text file
i.saveFunc.txt:{[config;data]
  i.saveFileName[config]0:.h.tx[config`typ;data];
  }

// @private
// @kind function
// @category savingUtility
// @fileoverview Save data as a text file
// @param config {dict} Any configuration information about the dataset being
//   saved
// @param data {tab} Data which is to be saved
// @return {null} Data is saved as a text file
i.saveFunc[`csv`xml`xls]:i.saveFunc.txt

// @private
// @kind function
// @category savingUtility
// @fileoverview Save data as a binary file
// @param config {dict} Any configuration information about the dataset being
//   saved
// @param data {tab} Data which is to be saved
// @return {null} Data is saved as a binary file
i.saveFunc.binary:{[config;data]
  i.saveFileName[config]set data;
  }

// @private
// @kind function
// @category savingUtility
// @fileoverview Save data as a json file
// @param config {dict} Any configuration information about the dataset being
//   saved
// @param data {tab} Data which is to be saved
// @return {null} Data is saved as a json file
i.saveFunc.json:{[config;data]
  h:hopen i.saveFileName config;
  h @[.j.j;data;{'"error converting to json"}];
  hclose h;
  }

// @private
// @kind function
// @category savingUtility
// @fileoverview Save data as a HDF5 file
// @param config {dict} Any configuration information about the dataset being
//   saved
// @param data {tab} Data which is to be saved
// @return {null} Data is saved as a HDF5 file
i.saveFunc.hdf5:{[config;data]
  if[not`hdf5 in key`;@[system;"l hdf5.q";{'"unable to load hdf5 lib"}]];
  .hdf5.createFile filePath:i.saveFileName config;
  .hdf5.writeData[filePath;config`dname;data];
  }
ceiling ¶ Round up ceiling x ceiling[x] Returns the least integer greater than or equal to boolean or numeric x . q)ceiling -2.1 0 2.1 -2 0 3 q)ceiling 01b 0 1i ceiling is a multithreaded primitive. Implicit iteration¶ ceiling is an atomic function. q)ceiling(1.2;3.4 5.6) 2 4 6 q)ceiling`a`b!(1.2;3.4 5.6) a| 2 b| 4 6 q)ceiling([]a:1.2 3.4;b:5.6 7.8) a b --- 2 6 4 8 Prior to V3.0¶ Prior to V3.0, ceiling - used comparison tolerance - accepted datetime (Since V3.0, use "d"$23:59:59.999+ instead.) q)ceiling 2 + 10 xexp -12 -13 3 2 q)ceiling 2010.05.13T12:30:59.999 /type error since V3.0 2010.05.14 q)"d"$23:59:59.999+ 2010.05.13T12:30:59.999 2010.05.14 Domain and range¶ domain b g x h i j e f c s p m d z n u v t range i . i h i j j j i . . . . . . . . . Range: hij ^ Coalesce¶ Merge keyed tables ignoring nulls x^y ^[x;y] Where x and y are keyed tables, returns them merged. With no nulls in y , the result is the same as for Join. q)kt1:([k:1 2 3] c1:10 20 30;c2:`a`b`c) q)kt2:([k:3 4 5] c1:300 400 500;c2:`cc`dd`ee) q)kt1^kt2 k| c1 c2 -| ------ 1| 10 a 2| 20 b 3| 300 cc 4| 400 dd 5| 500 ee q)(kt1^kt2) ~ kt1,kt2 1b ^ Fill where x and y are lists or dictionaries When y has null column values, the column values of x are updated only with non-null values of y . q)kt3:([k:2 3] c1:0N 3000;c2:`bbb`) q)kt3 k| c1 c2 -| -------- 2| bbb 3| 3000 q)kt1,kt3 k| c1 c2 -| -------- 1| 10 a 2| bbb 3| 3000 q)kt1^kt3 k| c1 c2 -| -------- 1| 10 a 2| 20 bbb 3| 3000 c The performance of Coalesce is slower than that of Join since each column value of y must be checked for null. cols , xcol , xcols ¶ Table columns cols ¶ Column names of a table cols x cols[x] Where x is a - table - the name of a table as a symbol atom - a filesymbol for a splayed table returns as a symbol vector its column names. q)\l trade.q q)cols trade /value `time`sym`price`size q)cols`trade /reference `time`sym`price`size xcol ¶ Rename table columns x xcol y xcol[x;y] Where y is a table, passed by value, and x is - a symbol vector of length no greater than count cols y returnsy with its firstcount x columns renamed - a dictionary (since V3.6 2018.08.24) formed from two symbol vectors, of which the keys are all the names of columns of y , returnsy with columns renamed according to the dictionary q)\l trade.q q)cols trade `time`sym`price`size q)`Time`Symbol xcol trade / rename first two columns Time Symbol price size ------------------------------ 09:30:00.000 a 10.75 100 q)trade:`Time`Symbol`Price`Size xcol trade / rename all and assign q)cols trade `Time`Symbol`Price`Size q)(`a`c!`A`C)xcol([]a:();b:();c:()) / rename selected columns A b C ----- Q for Mortals §9.8.1 xcol xcols ¶ Reorder table columns x xcols y xcols[x;y] Where y is a simple table, passed by valuex is a symbol vector of some or all ofy ’s column names returns y with x as its first column/s. q)\l trade.q q)cols trade `time`sym`price`size q)trade:xcols[reverse cols trade;trade] / reverse cols and reassign trade q)cols trade `size`price`sym`time q)cols trade:`sym xcols trade / move sym to the front `sym`size`price`time Q for Mortals §9.8.2 xcols meta , .Q.V (table to dictionary) Dictionaries, Metadata Tables ' Compose¶ Compose a unary value with another '[f;ff][x;y;z;…] Where f is a unary valueff is a value rank ≥1 the derived function '[f;ff] has the rank of ff and returns f ff[x;y;z;…] . 
q)ff:{[w;x;y;z]w+x+y+z} q)f:{2*x} q)d:('[f;ff]) / Use noun syntax to assign a composition q)d[1;2;3;4] / f ff[1;2;3;4] 20 q)'[f;ff][1;2;3;4] 20 Extend Compose with Over / or over to compose a list of functions. Use '[;] to resolve the overloads on' - noun syntax to pass the composition as an argument to over q)g:10* q)dd:('[;]) over (g;f;ff) q)dd[1;2;3;4] 200 q)(('[;])over (g;f;ff))[1;2;3;4] 200 q)'[;]/[(g;f;ff)][1;2;3;4] 200 Implicit composition¶ Compose one or more unary values with a higher-rank value Values can be composed by juxtaposition within parentheses. The general form is a sequence of unaries f , g , h … terminating with a value ff of rank ≥2. The rank of (f g h… ff) is the rank of ff . q)x:-100 2 3 4 -100 6 7 8 9 -100 q)(x;0 (0|+)\x) -100 2 3 4 -100 6 7 8 9 -100 0 2 5 9 0 6 13 21 30 0 Above, (0|+) composes the unary projection 0| with Add. The composition becomes the argument to Scan, which derives the ambivalent function (0|+)\ , which is then applied infix to 0 and x to return cumulative sums. If we take -100 to flag parts of x , the expression max 0 (0|+)\x returns the largest of the sums of the parts. To compose a sequence of unary values, use Apply or Apply At. $ Cond¶ Conditional evaluation $[test;et;ef;…] Control construct: test , et , ef , etc. are q expressions. Three expressions¶ If test evaluates to zero, Cond evaluates and returns ef , otherwise et . q)$[0b;`true;`false] `false q)$[1b;`true;`false] `true Only the first expression test is certain to be evaluated. q)$[1b;`true;x:`false] `true q)x 'x Although it returns a result, Cond is a control-flow construct, not an operator. It cannot be iterated, nor projected onto a subset of expressions. Odd number of expressions¶ For brevity, nested triads can be flattened. $[q;a;r;b;c] <=> $[q;a;$[r;b;c]] These two expressions are equivalent: $[0;a;r;b;c] $[r;b;c] Cond with many expressions can be translated to triads by repeatedly replacing the last three expressions with the triad. $[q;a;r;b;s;c;d] <=> $[q;a;$[r;b;$[s;c;d]]] Equivalently $[q;a; / if q, a r;b; / else if r, b s;c; / else if s, c d] / else d Cond in a signum -like function q){$[x>0;1;x<0;-1;0]}'[0 3 -9] 0 1 -1 Even number of expressions¶ An even number of expressions returns either a result or the generic null. q)$[1b;`true;1b;`foo] `true q)$[0b;`true;1b;`foo] `foo q)$[0b;`true;0b;`foo] / return generic null q)$[0b;`true;0b;`foo]~(::) 1b Versions before V3.6 2018.12.06 signal cond . Name scope¶ Cond’s brackets do not create lexical scope. Name scope within its brackets is the same as outside them. Good style avoids using Cond to control side effects, such as amending variables. Using if is a clearer signal to the reader that a side effect is intended.) Also, setting a variable in a code branch can have unintended consequences. Query templates¶ Cond is not supported inside qSQL queries. Instead, use Vector Conditional. $ dollar, Vector Conditional Controlling evaluation Q for Mortals §10.1.1 Basic Conditional Evaluation cor ¶ Correlation x cor y cor[x;y] Where x an d y are conforming numeric lists returns their correlation as a float in the range -1f to 1f . Perfectly correlated data results in a 1 or -1 . When one variable increases as the other increases the correlation is positive; when one decreases as the other increases it is negative. Completely uncorrelated arguments return 0f . 
q)29 10 54 cor 1 3 9 0.7727746 q)10 29 54 cor 1 3 9 0.9795734 q)1 3 9 cor neg 1 3 9 -1f q)1000101000b cor 0010011001b -0.08908708 cor is an aggregate function, equivalent to {cov[x;y]%dev[x]*dev y} . cor is a multithreaded primitive. Domain and range¶ b g x h i j e f c s p m d z n u v t ---------------------------------------- b | f . f f f f f f f . f f f f f f f f g | . . . . . . . . . . . . . . . . . . x | f . f f f f f f f . f f f f f f f f h | f . f f f f f f f . f f f f f f f f i | f . f f f f f f f . f f f f f f f f j | f . f f f f f f f . f f f f f f f f e | f . f f f f f f f . f f f f f f f f f | f . f f f f f f f . f f f f f f f f c | f . f f f f f f f . f f f f f f f f s | . . . . . . . . . . . . . . . . . . p | f . f f f f f f f . f f f f f f f f m | f . f f f f f f f . f f f f f f f f d | f . f f f f f f f . f f f f f f f f z | f . f f f f f f f . f f f f f f f f n | f . f f f f f f f . f f f f f f f f u | f . f f f f f f f . f f f f f f f f v | f . f f f f f f f . f f f f f f f f t | f . f f f f f f f . f f f f f f f f Range: f cos , acos ¶ Cosine, arccosine cos x cos[x] acos x acos[x] Where x is a numeric, returns cos - the cosine of x , taken to be in radians. The result is between-1 and1 , or null if the argument is null or infinity. acos - the arccosine of x ; that is, the value whose cosine isx . The result is in radians and lies between 0 and π. (The range is approximate due to rounding errors). Null is returned if the argument is not between -1 and 1. q)cos 0.2 / cosine 0.9800666 q)min cos 10000?3.14159265 -1f q)max cos 10000?3.14159265 1f q)acos -0.4 / arccosine 1.982313 cos and acos are multithreaded primitives. Domain and range¶ domain: b g x h i j e f c s p m d z n u v t range: f . f f f f f f f . f f f z f f f f Implicit iteration¶ cos and acos are atomic functions. q)cos (.2;.3 .4) 0.9800666 0.9553365 0.921061 q)acos (.2;.3 .4) 1.369438 1.266104 1.159279 count , mcount ¶ Count the items of a list or dictionary count ¶ Number of items count x count[x] Where x is - a list, returns the number of its items - a dictionary, the number of items in its value - anything else, 1 q)count 0 / atom 1 q)count "zero" / vector 4 q)count (2;3 5;"eight") / mixed list 3 q)count each (2;3 5;"eight") 1 2 5 q)count `a`b`c!2 3 5 / dictionary 3 q)/ the items of a table are its rows q)count ([]city:`London`Paris`Berlin; country:`England`France`Germany) 3 q)count each ([]city:`London`Paris`Berlin; country:`England`France`Germany) 2 2 2 q)count ({x+y}) 1 q)count (+/) 1 Use with each to count the number of items at each level of a list or dictionary. q)RaggedArray:(1 2 3;4 5;6 7 8 9;0) q)count RaggedArray 4 q)count each RaggedArray 3 2 4 1 q)RaggedDict:`a`b`c!(1 2;3 4 5;"hello") q)count RaggedDict 3 q)count each RaggedDict a| 2 b| 3 c| 5 q)\l sp.q q)count sp 12 Table counts in a partitioned database mcount ¶ Moving counts x mcount y mcount[x;y] Where x is a positive int atomy is a numeric list returns the x -item moving counts of the non-null items of y . The first x items of the result are the counts so far, and thereafter the result is the moving count. q)3 mcount 0 1 2 3 4 5 1 2 3 3 3 3 q)3 mcount 0N 1 2 3 0N 5 0 1 2 3 2 2 mcount is a uniform function. Implicit iteration¶ mcount applies to dictionaries and tables. q)k:`k xkey update k:`abc`def`ghi from t:flip d:`a`b!(10 21 3;4 5 6) q)2 mcount d a| 1 1 1 b| 2 2 2 q)2 mcount t a b --- 1 1 2 2 2 2 q)2 mcount k k | a b ---| --- abc| 1 1 def| 2 2 ghi| 2 2
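Because msum replaces nulls with zero and mcount counts only the non-null items, the two combine into a moving average that ignores nulls. This is an illustrative idiom, not part of the reference:
q)y:0N 1 2 3 0N 5
q)(3 msum y)%3 mcount y   / 3-item moving average, nulls ignored
0n 1 1.5 2 2.5 4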
A natural query interface for distributed systems¶ Technical constraints mean that large real-time database installations tend to be distributed over many processes and machines. In an environment where end-users need to be able to query the data directly, this poses a challenge – they must find the location of the data and connect before they can query it. Typically this inconvenience is mitigated by providing an API. This can take the form of a function which can be used to query tables in the data warehouse. This paper briefly explores one alternative method of eliminating this burden, envisioning the use case of a user sitting at a desk, entering queries in real time, for whom using a traditional API may be cumbersome and unnatural. The idea is to intercept queries as they are entered, dissect them to identify references to remote data and to seamlessly return a result as if all the data were available in the immediate process. Implementing this idea in full generality would rely on being able to solve a difficult problem – to identify a user’s intent. We narrow the scope of the solution by only attempting to redirect qSQL queries which query datasets specified in configuration files. Undocumented feature The solution uses an undocumented feature of the q interpreter which allows a handler for a custom language to be defined. We will define a handler which will parse qSQL queries. It will determine which elements of a query refer to tables on other processes and execute these elements of the query remotely as appropriate. Certain internal kdb+ optimizations (in particular join optimizations) may be lost when using the implementation described in this paper. Background¶ Before describing the implementation, it is worth noting two important caveats: - This implementation uses a single-letter namespace. Single-letter namespaces are reserved for use by KX and should not be used in a production system. - The implementation uses an undocumented feature of kdb+ which allows a handler for a custom language to be defined. This feature is not guaranteed to be present in future kdb+ versions. A handler for standard SQL is distributed with q in a file called s.k . This allows queries to be entered in a q process in a more traditional SQL as follows: s)SELECT field1, field2 FROM table WHERE date BETWEEN ... Important to note is the prefixed s) which tells q to evaluate the query using the handler defined in s.k . By examining this file, it is possible to gain an understanding of the interpreter feature to define custom language handlers. They must reside in single-letter namespaces. The key function in s.k is .s.e , which is passed anything preceded by s) as a string. We will define our handler similarly, taking the namespace ‘.H’. Queries will be entered thus: H)select from q where sym=`ABC, time within 12:00:00.000 13:00:00.000 This query is passed to .H.e by the interpreter. By breaking the query down from here, we can define custom behaviors where desired for certain aspects of the query. The parse function takes a string, parses it as a q expression and returns the parse tree. We will use this to break down the query entered by a user into a format where it is easy to identify the constituent elements. This will be useful for identifying where a user intends to query a remote dataset. 
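For instance, a simple select against a hypothetical table t breaks down into the functional form of ? (a small sketch ahead of the fuller example below):
q)parse"select from t where a>5"
?              / select
`t             / table
,,(>;`a;5)     / where clause
0b             / by clause (none)
()             / columns (all)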
You can inspect how the parse statement breaks down a query as follows: q)parse"(select from t where ex=`N) lj (select clo:last mid by sym from q)" k){.Q.ft[,\:[;y];x]} //this is the definition of lj (?;`t;,,(=;`ex;,`N);0b;()) //first select statement (?;`q;();(,`sym)!,`sym;(,`clo)!,(last;`mid)) //second select .H.q ¶ We will use the tools discussed above to provide a simple implementation of the idea described at the outset. Our handler will scan q statements for queries to remote datasets and evaluate them as specified in the configuration. For the purposes of this paper, configuration will be defined in a simple table on the local instance. It serves to provide a mapping from table names (or aliases) to handles of processes where the data is available. In a more formal production environment this would be slightly more involved, with centrally controlled configuration. .H.H:([alias:`trade`quote`traders] host:`:localhost:29001`:localhost:29002`:localhost:29001; name:`t`q`traders; handle:3#0N ) //open handle to each distinct process update handle:.Q.fu[hopen each] host from `.H.H //utilities to look up handle or table-name for a given alias .H.h:{.H.H[x][`handle]} .H.n:{.H.H[x][`name]} select and exec operations have rank 4, 5, or 6 (the fifth and sixth arguments are seldom used) while update and delete operations have rank 4. Also, queries on remote tables must match a table alias in our configuration. With these criteria, we can define functions to identify these operations in a parse tree. //check if subject of select/exec is configured as a remote table .H.is_configured_remote:{$[(1 = count x 1)and(11h = abs type x 1); not null .H.h first x 1; 0b] } //check valence and first list element to determine function .H.is_select:{(count[x] in 5 6 7) and (?)~first x} .H.is_update:{(count[x]=5) and (!)~first x} We combine these for convenience: .H.is_remote_exec:{$[.H.is_select[x] or .H.is_update[x]; .H.is_configured_remote[x]; 0b] } To evaluate a functional query remotely, we define a function which will take the parse tree, look up the correct handle and table name for the remote process and evaluate accordingly. .H.remote_evaluate:{(.H.h x 1)@(eval;@[x;1;.H.n])} We can define a pair of functions which, together, will take a parse tree and traverse it, inspecting each element and deciding whether to evaluate remotely. .H.E:{$[ .H.is_remote_exec x; .H.E_remote x; 1=count x;x; .z.s each x ] } .H.E_remote:{ //need to examine for subqueries r:.H.remote_evaluate{$[ (0h~type x)and not .H.is_remote_exec x; .z.s each x; .H.is_remote_exec x; .H.E_remote x; x ] } each x; //need special handling for symbols so that they aren’t //interpreted as references by name $[11h=abs type r;enlist r;r] } .H.E recursively scans a parse tree and identifies any queries which need to be remotely evaluated. These are then passed to .H.E_remote . In .H.E_remote , we again scan the parse tree looking for remote queries. This is to identify any sub-queries that need to be evaluated remotely, e.g. select from X where sym in exec distinct sym from Y where X and Y are located on two separate remote processes. In this way, we iteratively create a new parse tree, where all the remote queries in the original parse tree have been replaced with their values, as evaluated via IPC. Next, a function which will parse a query, apply the above function and finally evaluate what remains: .H.evaluate:{eval .H.E parse x} Recursion These functions are recursive which comes with the usual caveats about stack space. 
This should only be a concern for the most extreme one-liners. All that remains is to define the key .e function, the point of entry for queries entered with a H) prefix in the q interpreter. For brevity, this will simply add a layer of error trapping around the evaluation function defined above. .H.e:{@[.H.evaluate;x;{'"H-err - ",x}]} In the next section we will use the above to demonstrate seamless querying of tables from remote processes within a q session. Example¶ It is easy to demonstrate where this idea may be useful. Consider a simple case where there is a quote table on one RDB process and a trade table on another along with a supplementary metadata. We use the following files to initialize q processes with a quote, trade and traders table, populated with some sample data: quote.q ¶ \p 29002 //set random seed \S 1 rnorm:{$[ x=2*n:x div 2; raze sqrt[-2*log n?1f]*/:(sin;cos)@\:(2*acos - 1)*n?1f; -1_.z.s 1+x ] } q:([]time:asc 1000?01:00:00.000000000; sym:`g#1000?`ABC`DEF`GHI; bsize:1000*1+1000?10; bid:1000#0N; ask:1000#0N; asize:1000*1+1000?10 ) //simulate bids as independent random walks update bid:abs rand[100f]+sums rnorm[count i] by sym from `q //asks vary above bids update ask:bid + count[i]?0.5 from `q trade.q ¶ \p 29001 \S 2 traders:([tid:til 4]; name:("M Minderbinder";"J Yossarian";"C Cathcart";"M M M Major") ) t:([]time:asc 100?00:00:00.000000000; sym:`g#100?`ABC`DEF`GHI; side:100?"BS"; qty:100? 1000; tid:100?til 4 ) Gateway¶ Then, we start up our gateway process, using the code above in section .H.q . This opens the necessary connections to the remote instances and initializes the H query-handler. From there we can explore the data: q)trade 'trade q)quote 'quote Tables trade and quote do not exist in the immediate process. However, with the right prefix, we can query them: q)H)select from trade time sym side qty tid ------------------------------------- 0D00:45:55.800542235 GHI B 292 0 0D00:57:46.315256059 ABC S 24 1 0D01:03:35.359731763 DEF S 795 2 0D01:05:44.183354079 ABC S 660 2 0D01:09:37.164588868 DEF S 434 3 .. And on to something slightly more complicated: q)H)select sum qty by sym, side from trade where sym in exec -2#sym from quote sym side| qty --------| ----- DEF B | 4552 DEF S | 5425 GHI B | 6361 GHI S | 17095 Joins also work: q)H)aj[`sym`time;select time, sym, ?[side="B";qty;neg qty] from trade; select time, sym, bsize, bid, ask, asize from quote] time sym qty bsize bid ask asize ----------------------------------------------------------- 0D00:45:55.800542235 GHI 292 7000 11.50185 11.82094 5000 0D00:57:46.315256059 ABC -24 10000 81.01584 81.19805 1000 0D01:03:35.359731763 DEF -795 7000 45.43002 45.57759 7000 0D01:05:44.183354079 ABC -660 3000 81.71235 81.75569 4000 0D01:09:37.164588868 DEF -434 7000 45.43002 45.57759 7000 q)H)(select from trade)lj(select from traders) time sym side qty tid name ------------------------------------------------------ 0D00:45:55.800542235 GHI B 292 0 "M Minderbinder" 0D00:57:46.315256059 ABC S 24 1 "J Yossarian" 0D01:03:35.359731763 DEF S 795 2 "C Cathcart" 0D01:05:44.183354079 ABC S 660 2 "C Cathcart" 0D01:09:37.164588868 DEF S 434 3 "M M M Major" .. 
and updates: q)H)update name:(exec tid!name from traders)tid from trade time sym side qty tid name ------------------------------------------------------ 0D00:45:55.800542235 GHI B 292 0 "M Minderbinder" 0D00:57:46.315256059 ABC S 24 1 "J Yossarian" 0D01:03:35.359731763 DEF S 795 2 "C Cathcart" 0D01:05:44.183354079 ABC S 660 2 "C Cathcart" 0D01:09:37.164588868 DEF S 434 3 "M M M Major" .. Drawbacks¶ We lose a lot of the internal optimizations of kdb+ if accessing tables on disparate remote processes using the method described above. The interpreter is no longer able to query data intelligently. An example of this is when performing an as-of join on an on-disk dataset. Typically, one uses a minimal Where clause for the second table parameter of an as-of join – only a virtual column clause if querying off-disk to map the data into memory: aj[`sym`time; select from t where date = .z.d, sym=`ABC ...; select from q where date = .z.d] However, since we are accessing tables on remote processes, we are unable to take advantage of the optimizations built in to aj for mapped data. Therefore, we should behave as we would when typically querying a remote table and select only what is necessary, in order to minimize IPC overhead. H)aj[`sym`time; select from t where date = .z.d, sym=`ABC, ... select from q where date=.z.d, sym=`ABC, ...] Even if all the tables referenced in a query are on the same process, the interface isn’t smart enough (in its current incarnation) to pass them as one logical unit to the remote process. The scope for complexity in queries is theoretically unbounded. It is difficult to test the many variants of nested queries one may expect to see. Testing for the above has been limited but it serves as an illustration of an idea. Some examples of where it will fail or may give unexpected results: - Intended references to local values within queries sent to remote processes: these will be evaluated on the remote process - Foreign keys However, with awareness of these drawbacks, there is no reason an interface such as the above cannot be used to facilitate analysis and compilation of data. Conclusion¶ We have explored the idea set out above to build a simple proof-of-concept of a method of mitigating the overhead for end-users of having to locate data which they wish to access. Functionally, it behaves the same as a traditional API. Where it differs is in providing a more natural, seamless experience for querying data directly. This solution is clearly not appropriate for high-performance, large-scale querying of big data sets. However it may suit data analysts as a form of gateway to a large distributed database. A similar layer could easily sit on top of an existing API to give developers an easier means of interacting with raw data and to facilitate rapid prototyping of algorithms and procedures for business processes. It could be integrated with a load balancer, a permissions arbiter, a logging framework or a managed cache as part of an enterprise infrastructure for providing access to data stored in kdb+. All tests were run using kdb+ version 3.2 (2014.10.04). Author¶ Sean Keevey is a kdb+ consultant and has developed data and analytic systems for some of the world’s largest financial institutions. Sean is currently based in London developing a wide range of tailored analytic, reporting and data solutions in a major investment bank.
// Apply function to data of various types // @param func {fn} Function to apply to data // @param data {any} Data of various types // @return {fn} function to apply to data mlops.ap:{[func;data] $[0=type data; func each data; 98=type data; flip func each flip data; 99<>type data; func data; 98=type key data; key[data]!.z.s[func] value data; func each data ] } // Replace +/- infinities with data min/max // @param data {table|dictionary|number[]} Numerical data // @return {table|dictionary|number[]} Data with positive/negative // infinities are replaced by max/min values mlops.infReplace:mlops.ap{[data;inf;func] t:.Q.t abs type first first data; if[not t in "hijefpnuv";:data]; i:$[t;]@/:(inf;0n); @[data;i;:;func@[data;i:where data=i 0;:;i 1]] }/[;-0w 0w;min,max] // Load code with the file extension '*.py' // // @param codePath {string} The absolute path to the 'code' // folder containing any source code // @param files {symbol|symbol[]} Python files which should be loadable // return {::} mlops.load.py:{[codePath;files] sys:.p.import`sys; sys[`:path.append][codePath]; pyfiles:string files; {.p.e "import ",x}each -3_/:$[10h=type pyfiles;enlist;]pyfiles } // Wrap models such that they all have a predict key regardless of where // they originate // // @param mdlType {symbol} Form of model being used `q`sklearn`xgboost`keras`torch`theano, // this defines how the model gets interpreted in the case it is Python code // in particular. // @param model {dictionary|fn|proj|<|foreign} Model retrieved from registry // @return {fn|proj|<|foreign} The predict function mlops.format:{[mdlType;model] $[99h=type model; model[`predict]; type[model]in 105 112h; $[mdlType in `sklearn`xgboost; {[model;data] model[`:predict;<]$[98h=type data;tab2df;]data }[model]; mdlType~`keras; raze model[`:predict;<] .p.import[`numpy][`:array]::; mdlType~`torch; (raze/){[model;data] data:$[type data<0;enlist;]data; prediction:model .p.import[`torch][`:Tensor][data]; prediction[`:cpu][][`:detach][][`:numpy][]` }[model]each::; mdlType~`theano; {x`}model .p.import[`numpy][`:array]::; mdlType~`pyspark; {[model;data] $[.pykx.loaded; {.pykx.eval["lambda x: x.asDict()"][x]`} each model[`:transform][data][`:select][`prediction][`:collect][]`; first flip model[`:transform][data][`:select]["prediction"][`:collect][]` ] }[model]; model ]; model ] } // Transform data incoming into an appropriate format // this is important because data that is being passed to the Python // models and data that is being passed to the KX models relies on a // different 'formats' for the data (Custom models in q would expect data) // in 'long' format rather than 'wide' in current implementation // // @param data {any} Input data being passed to the model // @param axis {boolean} Whether the data is to be in a 'long' or 'wide' format // @param mdlType {symbol} Form of model being used `q`sklearn`xgboost`keras`torch`theano, // this defines how the model gets interpreted in the case it is Python code // in particular. 
// @return {any} The data in the appropriate format .ml.mlops.transform:{[data;axis;mdlType] dataType:type data; if[mdlType=`pyspark; :.ml.mlops.pysparkInput data]; if[dataType<=20;:data]; if[mdlType in `xgboost`sklearn; $[(98h=type data); :tab2df data; :data]]; data:$[98h=dataType; value flip data; 99h=dataType; value data; dataType in 105 112h; @[{value flip .ml.df2tab x};data;{'"This input type is not supported"}]; '"This input type is not supported" ]; if[98h<>type data; data:$[axis;;flip]data ]; data } // Utility function to transform data suitable for a pyspark model // // @param data {table|any[][]} Input data // @param {<} An embedPy object representing a Spark dataframe mlops.pysparkInput:{[data] if[not type[data] in 0 98h; '"This input type is not supported" ]; $[98h=type data; [df:.p.import[`pyspark.sql][`:SparkSession.builder.getOrCreate][] [`:createDataFrame] .ml.tab2df data; :df:.p.import[`pyspark.ml.feature][`:VectorAssembler] [`inputCols pykw df[`:columns];`outputCol pykw `features] [`:transform][df] ]; [data:flip (`$string each til count data[0])!flip data; .z.s data] ] } // Wrap models retrieved such that they all have the same format regardless of // from where they originate, the data passed to the model will also be transformed // to the appropriate format // // @param mdlType {symbol} Form of model being used `q`sklearn`xgboost`keras`torch`theano, // this defines how the model gets interpreted in the case it is Python code // in particular. // @param model {dictionary|fn|proj|<|foreign} Model retrieved from registry // @param axis {boolean} Whether the data should be in a 'long' (0b ) or // 'wide' (1b) format // @return {fn|proj|<|foreign} The predict function wrapped with a transformation // function mlops.wrap:{[mdlType;model;axis] model:mlops.format[mdlType;model]; transform:mlops.transform[;axis;mdlType]; model transform:: } ================================================================================ FILE: ml_ml_mlops_src_q_update.q SIZE: 8,342 characters ================================================================================ \d .ml // Update the latency monitoring details of a saved model // @param config {dictionary} Any additional configuration needed for // setting the model // @param cli {dictionary} Command line arguments as passed to the system on // initialisation, this defines how the fundamental interactions of // the interface are expected to operate. 
// @param model {function} Model to be applied // @param data {table} The data which is to be used to calculate the model // latency // @return {::} mlops.update.latency:{[fpath;model;data] fpath:hsym $[-11h=ty:type fpath;;10h=ty;`$;'"unsupported fpath"]fpath; config:@[.j.k raze read0::; fpath; {'"Could not load configuration file at ",x," with error: ",y}[1_string fpath] ]; func:{{system"sleep 0.0005";t:.z.p;x y;(1e-9)*.z.p-t}[x]each 30#y}model; updateData:@[`avg`std!(avg;dev)@\:func::; data; {'"Unable to generate appropriate configuration for latency with error: ",x} ]; config[`monitoring;`latency;`values]:updateData; config[`monitoring;`latency;`monitor]:1b; // need to add deps for .com_kx_json .[{[fpath;config] fpath 0: enlist .j.j config}; (fpath;config); {} ]; } // Update configuration information related to null value replacement from // within a q process // @param fpath {string|symbol|hsym} Path to a JSON file to be used to // overwrite initially defined configuration // @param data {table} Representative/training data suitable for providing // statistics about expected system behaviour // @return {::} mlops.update.nulls:{[fpath;data] fpath:hsym $[-11h=ty:type fpath;;10h=ty;`$;'"unsupported fpath"]fpath; config:@[.j.k raze read0::; fpath; {'"Could not load configuration file at ",x," with error: ",y}[1_string fpath] ]; if[98h<>type data; -1"Updating schema information only supported for tabular data"; :(::) ]; func:{med each flip mlops.infReplace x}; updateData:@[func; data; {'"Unable to generate appropriate configuration for nulls with error: ",x} ]; config[`monitoring;`nulls;`values]:updateData; config[`monitoring;`nulls;`monitor]:1b; // need to add deps for .com_kx_json .[{[fpath;config] fpath 0: enlist .j.j config}; (fpath;config); {'"Could not persist configuration to JSON file with error: ",x} ]; } // Update configuration information related to infinity replacement from // within a q process // @param fpath {string|symbol|hsym} Path to a JSON file to be used to // overwrite initially defined configuration // @param data {table} Representative/training data suitable for providing // statistics about expected system behaviour // @return {::} mlops.update.infinity:{[fpath;data] fpath:hsym $[-11h=ty:type fpath;;10h=ty;`$;'"unsupported fpath"]fpath; config:@[.j.k raze read0::; fpath; {'"Could not load configuration file at ",x," with error: ",y}[1_string fpath] ]; if[98h<>type data; -1"Updating schema information only supported for tabular data"; :(::) ]; func:{(`negInfReplace`posInfReplace)!(min;max)@\:mlops.infReplace x}; updateData:@[func; data; {'"Unable to generate appropriate configuration for infinities with error: ",x} ]; config[`monitoring;`infinity;`values]:updateData; config[`monitoring;`infinity;`monitor]:1b; // need to add deps for .com_kx_json .[{[fpath;config] fpath 0: enlist .j.j config}; (fpath;config); {'"Could not persist configuration to JSON file with error: ",x} ]; } // Update configuration information related to CSI from within a q process // @param fpath {string|symbol|hsym} Path to a JSON file to be used to // overwrite initially defined configuration // @param data {table} Representative/training data suitable for providing // statistics about expected system behaviour // @return {::} mlops.update.csi:{[fpath;data] fpath:hsym $[-11h=ty:type fpath;;10h=ty;`$;'"unsupported fpath"]fpath; config:@[.j.k raze read0::; fpath; {'"Could not load configuration file at ",x," with error: ",y}[1_string fpath] ]; if[98h<>type data; -1"Updating CSI information only 
supported for tabular data"; :(::) ]; bins:first 10^@["j"$;(count data)&@[{"J"$.ml.monitor.config.args x};`bins;{0N}];{0N}]; updateData:@[{mlops.create.binExpected[;y]each flip x}[;bins]; data; {'"Unable to generate appropriate configuration for CSI with error: ",x} ]; config[`monitoring;`csi;`values]:updateData; config[`monitoring;`csi;`monitor]:1b; .[{[fpath;config] fpath 0: enlist .j.j config}; (fpath;config); {} ]; }
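Each of the update functions above follows the same pattern: read the JSON monitoring configuration, compute summary statistics from representative data, set the relevant values and monitor flag, and persist the configuration back to disk. A hypothetical invocation, assuming the configuration file already exists and using a made-up path and table purely for illustration:
cfg:`:/tmp/model_config.json              / hypothetical path to an existing monitoring config
data:([]x:100?10f;y:100?1f)               / hypothetical representative training data
.ml.mlops.update.nulls[cfg;data]          / store per-column medians for null replacement
.ml.mlops.update.infinity[cfg;data]       / store per-column min/max for infinity replacement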
init:{[dbname] t::tables[`.]except `currlog; msgcount::rowcount::t!count[t]#0; tmpmsgcount::tmprowcount::(`symbol$())!`long$(); logtabs::$[multilog~`custom;key custommode;t]; rolltabs::$[multilog~`custom;logtabs except where custommode in `tabular`singular;t]; currperiod::multilogperiod xbar .z.p+.eodtime.dailyadj; nextperiod::multilogperiod+currperiod; getnextendUTC[]; i::1; seqnum::0; if[(value `..createlogs) or .sctp.loggingmode=`create; createdld[dbname;.eodtime.d]; openlog[multilog;dldir;;.z.p+.eodtime.dailyadj]each logtabs; // If appropriate, roll error log if[.stplg.errmode;openlogerr[dldir]]; // read in the meta table from disk .stpm.metatable:@[get;hsym`$string[.stplg.dldir],"/stpmeta";0#.stpm.metatable]; // set log sequence number to the max of what we've found i::1+ -1|exec max seq from .stpm.metatable; // add the info to the meta table .stpm.updmeta[multilog][`open;logtabs;.z.p+.eodtime.dailyadj]; ] // set loghandles to null if sctp is not creating logs if[.sctp.chainedtp and not .sctp.loggingmode=`create; `..loghandles set t! (count t) # enlist (::) ] }; \d . // Close logs on clean exit .dotz.set[`.z.exit;{ if[not x~0i;.lg.e[`stpexit;"Bad exit!"];:()]; .lg.o[`stpexit;"Exiting process"]; // flushing in memory data to disk during unexpected shutdown when batchmode is set to memorybatch if[.stplg.batchmode=`memorybatch; .lg.o[`stpexit;"STP shutdown unexpectedly, batchmode = `memorybatch, therefore flushing any remaining data to the on-disk log file"]; .stplg.zts.memorybatch[]; .lg.o[`stpexit; "Complete!"] ]; // exit before logs are touched if process is an sctp NOT in create mode if[.sctp.chainedtp and not .sctp.loggingmode=`create; :()]; .lg.o[`stpexit;"Closing off log files"]; .stpm.updmeta[.stplg.multilog][`close;.stpps.t;.z.p]; .stplg.closelog each .stpps.t; }] ================================================================================ FILE: TorQ_code_segmentedtickerplant_stpmeta.q SIZE: 1,512 characters ================================================================================ // API for writing logfile meta data // Metatable keeps info on all opened logs, the tables which feed each log, and the number of messages written to each log \d .stpm metatable:([]seq:`int$();logname:`$();start:`timestamp$();end:`timestamp$();tbls:();msgcount:`int$();schema:();additional:()) // Functions to update meta data for all logs in each logging mode // Meta is updated only when opening and closing logs updmeta:enlist[`]!enlist () updmeta[`tabperiod]:{[x;t;p] getmeta[x;p;;]'[enlist each t;`..currlog[([]tbl:t)]`logname]; setmeta[.stplg.dldir;metatable]; }; updmeta[`singular]:{[x;t;p] getmeta[x;p;t;`..currlog[first t]`logname]; setmeta[.stplg.dldir;metatable]; }; updmeta[`periodic]:updmeta[`singular] updmeta[`tabular]:updmeta[`tabperiod] updmeta[`custom]:{[x;t;p] pertabs:where `periodic=.stplg.custommode; updmeta[`periodic][x;t inter pertabs;p]; updmeta[`tabular][x;t except pertabs;p] }; // Logname, start time, table names and schema populated on opening // End time and final message count updated on close // Sequence number increments by one on log period rollover getmeta:{[x;p;t;ln] if[x~`open; s:((),t)!(),.stpps.schemas[t]; `.stpm.metatable upsert (.stplg.i;ln;p;0Np;t;0;s;enlist ()!()); ]; if[x~`close; update end:p,msgcount:sum .stplg.msgcount[t] from `.stpm.metatable where logname = ln ] }; setmeta:{[dir;mt] t:(hsym`$string[dir],"/stpmeta"); .[{x set y};(t;mt);{.lg.e[`setmeta;"Failed to set metatable with error: ",x]}]; }; 
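Because the meta table records each log's name, time window, tables and message count, it can be queried to locate the logs relevant to a replay. An illustrative sketch (the timestamp and table name are hypothetical):
ts:2020.01.01D10:00:00.000000000          / hypothetical timestamp of interest
select seq, logname, msgcount from .stpm.metatable
  where start<=ts, (null end) or end>=ts, `trade in/:tbls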
================================================================================ FILE: TorQ_code_wdb_origstartup.q SIZE: 1,523 characters ================================================================================ \d .wdb startup:{[] .lg.o[`init; "searching for servers"]; .servers.startup[]; .lg.o[`init; "writedown mode set to ",(string .wdb.writedownmode)]; $[writedownmode~`partbyattr; .lg.o[`init; "partition has been set to [savedir]/[",(string partitiontype),"]/[tablename]/[parted column(s)]/"]; writedownmode~`partbyenum; .lg.o[`init; "partition has been set to [savedir]/[",(string partitiontype),"]/[parted column enumerated]/[tablename]/"]; .lg.o[`init; "partition has been set to [savedir]/[",(string partitiontype),"]/[tablename]/"]]; if[saveenabled; //check if tickerplant is available and if not exit with error if[not .finspace.enabled; /-TODO Remove when tickerplant fixed in finspace .servers.startupdepcycles[.wdb.tickerplanttypes; .wdb.tpconnsleepintv; .wdb.tpcheckcycles]; ]; subscribe[]; /- add missing tables to partitions in case an IDB process wants to connect. Only applicable for partbyenum writedown mode if[.wdb.writedownmode in `default`partbyenum;initmissingtables[currentpartition]]; // if for replay table maxrows were customised, we want to check row count for each table, save and gc where needed if[(not .wdb.numtab~.wdb.replaynumtab)or .wdb.numrows<>.wdb.replaynumrows; tabs:exec table from .sub.SUBSCRIPTIONS; tabmaxrowpairs:{(x;.wdb.maxrows[x])}each tabs; {replaymaxrowcheck[first x;last x]}each tabmaxrowpairs]; ]; @[`.; `upd; :; .wdb.upd]; } ================================================================================ FILE: TorQ_code_wdb_writedown.q SIZE: 2,741 characters ================================================================================ \d .wdb /-Required variables for savetables function compression:@[value;`compression;()]; /-specify the compress level, empty list if no required savedir:hsym @[value;`savedir;`:temphdb]; /-location to save wdb data hdbdir:hsym @[value;`hdbdir;`:hdb]; /-move wdb database to different location hdbsettings:(`compression`hdbdir)!(compression;hsym hdbdir); numrows:@[value;`numrows;100000]; /-default number of rows numtab:@[value;`numtab;`quote`trade!10000 50000]; /-specify number of rows per table maxrows:{[tabname] numrows^numtab[tabname]}; /- extract user defined row counts replaymaxrows:{[tabname] replaynumrows^replaynumtab[tabname]}; partitiontype:@[value;`partitiontype;`date]; /-set type of partition (defaults to `date) getpartition:@[value;`getpartition; /-function to determine the partition value {{@[value;`.wdb.currentpartition; (`date^partitiontype)$.proc.cd[]]}}]; currentpartition:.wdb.getpartition[]; /- Initialise current partition tabsizes:([tablename:`symbol$()] rowcount:`long$(); bytes:`long$()); /- keyed table to track the size of tables on disk savetables:{[dir;pt;forcesave;tabname] /- check row count /- forcesave will write flush the data to disk irrespective of counts if[forcesave or maxrows[tabname] < arows: count value tabname; .lg.o[`rowcheck;"the ",(string tabname)," table consists of ", (string arows), " rows"]; /- upsert data to partition .lg.o[`save;"saving ",(string tabname)," data to partition ", string pt]; .[ upsert; (` sv .Q.par[dir;pt;tabname],`;.Q.en[hdbsettings[`hdbdir];r:0!.save.manipulate[tabname;`. 
tabname]]); {[e] .lg.e[`savetables;"Failed to save table to disk : ",e];'e} ]; /- make addition to tabsizes .lg.o[`track;"appending table details to tabsizes"]; .wdb.tabsizes+:([tablename:enlist tabname]rowcount:enlist arows;bytes:enlist -22!r); /- empty the table .lg.o[`delete;"deleting ",(string tabname)," data from in-memory table"]; @[`.;tabname;0#]; /- run a garbage collection (if enabled) if[gc;.gc.run[]]; :1b; ]; 0b}; \d . /-endofperiod function endofperiod:{[currp;nextp;data] .lg.o[`endofperiod;"Received endofperiod. currentperiod, nextperiod and data are ",(string currp),", ", (string nextp),", ", .Q.s1 data]}; ================================================================================ FILE: TorQ_config_permissions_default.q SIZE: 5,060 characters ================================================================================ //admin role that has full access to the system .pm.addrole[`admin;"full system access"] .pm.grantfunction[.pm.ALL;`admin;{1b}] .pm.assignrole[`admin;`admin] // systemuser, role used by each TorQ process, has access to all functions // TorQ processes need to communicate with each other