00:00:00 --- log: started forth/06.01.28
00:34:36 --- quit: rsyncx ("Leaving")
01:43:15 --- join: virl (n=virl@chello062178085149.1.12.vie.surfer.at) joined #forth
01:49:58 --- join: Cheery (i=Henri@a81-197-18-99.elisa-laajakaista.fi) joined #forth
02:01:14 --- quit: Cheery (Read error: 104 (Connection reset by peer))
02:04:18 --- join: Cheery (n=Henri@a81-197-18-99.elisa-laajakaista.fi) joined #forth
02:04:19 --- quit: madgarden (Read error: 104 (Connection reset by peer))
02:08:23 --- quit: JasonWoof ("off to bed")
02:11:30 --- join: Cheery_ (i=Henri@a81-197-18-99.elisa-laajakaista.fi) joined #forth
02:12:05 --- join: danniken (i=CapStone@ppp-70-128-43-5.dsl.ltrkar.swbell.net) joined #forth
02:12:12 --- quit: danniken- (Read error: 104 (Connection reset by peer))
02:12:16 --- quit: Ray_work (Read error: 104 (Connection reset by peer))
02:12:28 --- join: Ray_work (n=Raystm2@adsl-68-93-111-214.dsl.rcsntx.swbell.net) joined #forth
02:16:34 --- quit: Cheery (Read error: 104 (Connection reset by peer))
02:20:49 --- nick: Cheery_ -> Cheery
04:08:01 --- join: Quiznos (i=b@69-168-231-199.bflony.adelphia.net) joined #forth
04:08:28 : rehi 1 dup * dup swap drop ." hello" ;
04:08:30 rehi
04:08:52 drop
04:11:00 how useful
04:21:47 well, nice obfuscation code
04:43:37 --- join: Raystm2_ (n=Raystm2@adsl-69-149-52-52.dsl.rcsntx.swbell.net) joined #forth
04:48:23 --- quit: Raystm2 (Read error: 104 (Connection reset by peer))
05:01:03 --- join: PoppaVic (n=pete@0-2pool198-74.nas30.chicago4.il.us.da.qwest.net) joined #forth
06:59:32 --- join: madgarden (n=madgarde@London-HSE-ppp3546494.sympatico.ca) joined #forth
07:02:38 OK... I'm in need of some advice, lads.
07:03:59 Looking for a Forth that will C-interface well, and I'm looking for a decent string lib. Why the HECK are strings so noxious all over (until they are overly cozy and all else suffers)?
07:04:55 last I checked, retro wouldn't cut it. Any changes that would make it work?
07:06:00 Problem with FICL is - the author went AWOL and never added ANS voc/order/wordlist support.
07:27:44 the nasty thing about strings is the memory management. there is no way around that, though (unless you get into garbage collection or such, which is a problem of its own)
07:27:57 yep
07:28:07 I'm considering a "string stack"
07:28:15 how many?
07:28:28 there are other schemes, each with their own shortcomings / limitations
07:28:49 segher can I give a suggestion in c syntax?
07:29:48 sure
07:29:55 yeah, and nothing I've seen in 20+ years made much sense.
07:30:09 ok, so standardly, forth strings are just `CELL s[]'
07:30:12 or *s
07:30:14 "too sensible"
07:30:24 so why not make em **s or ***s
07:30:39 and use defining and runtime words to handle the redirection?
07:30:57 that would also make moving and assigning to strings easier
07:31:06 again, I want a Forth that can be C-embedded
07:31:15 shouldn't matter
07:31:19 and structs are not a huge issue
07:31:30 still shouldn't matter
07:31:37 alas, it matters
07:31:51 why? (i'm thinking from forth-pov)
07:32:10 when folks write asm-forth, they bypass C - and therein we can't get back and forth.
07:32:25 sure we can, just have to be imaginative
07:32:39 c's runtime is quite uncomplicated afaik
07:32:50 stack the args, or reg 'em
07:32:54 nbd.
07:33:14 the only real issue that I think of is stack mgmt
07:33:31 is c's data stack == forth dstk?
07:33:52 or should a new stk be alloc'd for forth to keep and manage
07:34:00 C doesn't have a data stack.
07:34:04 imagine one
07:34:15 think recursive descent
07:34:27 in C, rstk and dstk are the same
07:34:36 yeah sure, most implementations use a stack -- but the C _language_ doesn't have a stack
07:34:48 it's on the hw usually.
07:35:00 but there are cpu's that don't even offer a hw stk
07:35:19 the c-stk is in the call-return process
07:35:23 process/activity
07:35:37 and c-locals go on dstk
07:36:13 --- join: ThinkingInBinary (n=tom@pool-68-163-163-216.bos.east.verizon.net) joined #forth
07:36:37 Quartus: Hey, how are you?
07:58:31 --- join: reuben (n=ben@leb-cr1-220-16.peak.org) joined #forth
07:59:28 --- quit: ThinkingInBinary (Remote closed the connection)
08:04:08 --- quit: reuben ("Leaving")
08:13:31 --- quit: PoppaVic ("Pulls the pin...")
08:15:03 --- join: PoppaVic (n=pete@0-1pool46-107.nas30.chicago4.il.us.da.qwest.net) joined #forth
08:19:08 --- nick: Raystm2_ -> Raystm2
08:31:27 --- join: sproingie (n=chuck@64-121-2-59.c3-0.sfrn-ubr8.sfrn.ca.cable.rcn.com) joined #forth
09:11:44 --- quit: PoppaVic ("Pulls the pin...")
09:49:46 --- join: Teratogen (i=leontopo@intertwingled.net) joined #forth
09:53:52 MINIX
09:55:17 CP/M
09:55:27 * Robert purrs.
09:55:39 I actually have more
09:55:53 Minix and CP/M machines here, than Linux ones.
09:55:58 5 vs. 3
09:57:00 i have a cp/m machine here, and no linux ones. and my job is porting linux to new machines :-)
09:57:07 well, part of the job, anyway
09:57:21 What do you run on your main machine/s?
09:57:43 macosx
09:58:40 i'll get myself some linux boxes soon... but i've been saying that for years now :-)
09:59:11 i do have some old linux boxes, but they're back in the netherlands
10:00:02 x86 boxes, not too interesting anyway
10:00:27 I'm only really using OS X when I have no other choice... or when the option is a Windows box.
10:01:55 i just *love* it as a desktop
10:02:15 it all just works... and works smoothly. esthetically pleasing, too
10:02:37 plus you have the command line at your fingertips if you want it
10:02:46 I thought the design was mostly annoying. If god wanted smooth windows he wouldn't have made pixels square.
10:03:04 Yeah, it's not too bad really.
10:03:24 pixels aren't square...
10:03:41 --- join: JasonWoof (n=jason@c-71-192-33-206.hsd1.ma.comcast.net) joined #forth
10:03:41 --- mode: ChanServ set +o JasonWoof
10:04:13 Shush! ;)
10:04:14 Hi, JasonWoof.
10:04:57 hi
10:05:06 I think it's time to write a forth cgi
10:06:08 What are you going to use it for?
10:06:12 nah, write the whole web server in forth
10:06:34 I don't have time to write a whole webserver
10:07:05 --- join: tathi (n=josh@pdpc/supporter/bronze/tathi) joined #forth
10:09:40 hmm... I did write a basic httpd (which runs on tcp-server or inetd) in gforth
10:12:06 oh god... I can't believe I wrote this
10:12:35 get a load of this: : strcmpc 0 do dup c@ rot dup c@ rot = 0= if 2drop unloop 0 exit then 1+ swap 1+ loop 2drop 1 exit ;
10:13:04 * crc slowly backs away
10:13:14 how long ago was that written?
10:14:51 modification date on the file is october 2002
10:15:05 pretty old :)
10:15:23 * crc hates looking over his old code; it's depressing
10:15:46 should be ?DO no matter what
10:16:22 EXIT at the end is funnt, too
10:16:25 funny
10:16:38 all the rest is just inexperience ;-)
10:18:49 so... how would you write it now :-)
10:22:40 I don't even know what it does
10:22:53 presumably it's some sort of string compare
10:24:11 : str-cmp ( addr u addr2 u2 ) swap >a over <> if 2drop exit then -str-cmp ;
10:24:19 it is strcmpc ( a1 a2 n -- 1|0 ) \ return 1 if the strings with length n at a1 resp. a2 are equal
10:24:40 : -str-cmp ( addr u -- compare u bytes at addr with data at A )
10:24:50 same as COMPARE really
10:25:03 except for the return value
10:25:21 erm, COMP
10:25:26 ?for dup b@ b+@ <> if drop false exit then 1+ next drop 2 ;
10:25:34 lol
10:25:39 s/2/true/
10:29:07 : comp BEGIN dup WHILE 1- >r dup c@ swap >r over c@ = WHILE char+ r> char+ r> REPEAT drop r> r> drop 0 ELSE 3drop 1 THEN ;
10:31:01 : comp chars bounds ?DO i c@ over c@ <> IF drop unloop 0 EXIT THEN char+ 1 chars +LOOP 1 ;
10:31:19 that last one, if you like DO ... LOOP (i don't)
10:31:41 closer to your original though
10:32:00 but without the stack juggling
10:32:12 er... what's comp do again?
10:32:19 oh neat... I buffered the document and output a content-length header
10:32:20 string compare?
10:33:07 ] segher: it is strcmpc ( a1 a2 n -- 1|0 ) \ return 1 if the strings with length n at a1 resp. a2 are equal
10:33:10 yeah, we're fiddling with string compare words
10:33:36 should be u, not n, i guess
10:33:38 I'm a big fan of the A register
10:33:44 the original didn't allow 0 anyway
10:33:45 compare's easy. string search is the hard one
10:34:03 real world string search has to be something like boyer-moore or at least KMP
10:34:18 both of those are really easy to implement
10:34:24 : strcmp ( a a2 n -- t/f ) swap b>a ?for dup b@ b+@ <> if drop false ; then 1+ next drop true ;
10:34:34 almost easier than the naive way
10:34:38 KMP sure. boyer moore in forth would be kinda tedious
10:34:56 remind me which is which :-)
10:35:24 boyer moore builds a table ahead of time. kmp i don't remember much but i don't think it needs much scratch space
10:35:30 jasonwoof: that looks like colorforth...
10:35:48 tho i think bmh still beats kmp most of the time
10:35:55 I believe the A register idea came from colorforth
10:35:58 a table the size of the search string, true
10:36:08 A register is machineforth
10:36:13 the A reg thing is older than that
10:36:21 x2
10:36:22 I mean that's where I found it
10:36:36 ah sure :-)
10:37:14 A reg is pretty neat for machine implementations... 4 regs total available: A R S T
10:37:36 address, top of return stack, top of data stack, second on data stack
10:37:48 swap those last two :-)
10:37:58 A and R are quite alike
10:39:04 some people have A be a whole stack, too, instead of just one reg
10:41:49 i wonder what a minimal language for a SECD machine would look like
10:41:57 lisp, i suppose
10:42:59 secd?
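The "bmh" variant praised in the search aside above is Boyer-Moore-Horspool: one skip table built from the pattern, right-to-left comparison, and a jump by the table entry for the text character under the pattern's last position. A self-contained C sketch (generic textbook code, not anyone's Forth engine; the function name is my own):

```c
#include <limits.h>

/* Boyer-Moore-Horspool substring search.
   Returns the offset of the first match of pat in text, or -1. */
static long bmh_search(const char *text, long tlen,
                       const char *pat,  long plen)
{
    long skip[UCHAR_MAX + 1];
    long pos, i;

    if (plen == 0)
        return 0;                          /* empty pattern matches at 0 */

    for (i = 0; i <= UCHAR_MAX; i++)
        skip[i] = plen;                    /* default: jump whole pattern */
    for (i = 0; i < plen - 1; i++)         /* last pattern char keeps default */
        skip[(unsigned char)pat[i]] = plen - 1 - i;

    for (pos = 0; pos + plen <= tlen; ) {
        i = plen - 1;
        while (i >= 0 && text[pos + i] == pat[i])
            i--;                           /* compare right-to-left */
        if (i < 0)
            return pos;                    /* full match */
        pos += skip[(unsigned char)text[pos + plen - 1]];
    }
    return -1;
}
```

The only scratch space is the skip table (256 cells regardless of pattern length, which is what makes it less tedious than full Boyer-Moore but still table-driven, as discussed above).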
10:43:12 --- join: ThinkingInBinary (n=tom@pool-68-163-163-216.bos.east.verizon.net) joined #forth
10:43:29 functional language vm
10:43:41 http://en.wikipedia.org/wiki/SECD_machine
10:44:31 oh heh the article suggests the language right there, it gives the instruction set
10:46:39 interesting!
10:47:16 nah, a minimal language for such a machine would be... forth :-)
10:47:31 forth doesn't exactly have first class environments
10:47:45 so what :-)
10:48:10 i am working (at least in my head) on a dataflow-esque language for forth
10:48:12 you don't have to use all features of the machine to be a language on the machine
10:48:14 i call it phrase
10:57:52 Hey, ThinkingInBinary. Good thanks! You?
10:58:22 Quartus: Great. My brother just got a new Tungsten E2, I can't wait to try QF on it.
10:59:02 Great!
11:02:10 Quartus: And I'm picking up C (got a copy of K&R for xmas).
11:02:29 Quartus: hey, can you make loop operators work at runtime in QF?
11:02:37 You mean at the console?
11:02:39 Quartus: or can I somehow?
11:02:41 Quartus: yeah
11:02:52 Only by writing a tiny definition.
11:03:16 : foo 50 0 do something loop ; foo
11:03:17 for instance
11:05:32 You could write an iterator, if you wanted one, something that took an argument:
11:05:40 50 times foo
11:06:01 where 'foo' is the thing you want to do 50 times.
11:10:42 : times parse-word find dup 0= abort" bla" -1 = abort" immediate bla" swap 0 ?DO dup >r execute r> LOOP drop ;
11:11:01 I'd make it something like this:
11:11:17 : times ' swap 0 ?do dup >r execute r> loop ;
11:11:29 ah yes, tick. duh. :-)
11:12:04 If I needed [times] I'd write it separately, but I don't think I would.
11:12:06 and you got the dup >r right, too -- just forgot the drop at the end :-)
11:12:17 Right, I did say 'something like this'. :)
11:12:24 yeah i know :-)
11:13:09 On the other hand I can't think of a situation where I'd have wanted 'times'.
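The `times` iterator sketched above (`: times ' swap 0 ?do dup >r execute r> loop drop ;`) has a direct C rendering: run a given word n times, with a function pointer standing in for the execution token fetched by tick. The context pointer is my addition so the example can observe side effects; it has no Forth counterpart:

```c
/* Run `word` n times -- C analogue of the `times` iterator above.
   `ctx` is a hypothetical extra argument (Forth words would just
   use the data stack instead). */
static void times(int n, void (*word)(void *), void *ctx)
{
    for (int i = 0; i < n; i++)
        word(ctx);
}

/* sample word: increment a counter */
static void bump(void *ctx)
{
    (*(int *)ctx)++;
}
```

As in the log's version, the loop counter itself is not handed to the word, which is exactly the loss of functionality debated below.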
11:13:20 i think [times] is more natural than times really
11:13:42 I'd expect [times] to be the compile-time version.
11:14:02 and times is more natural than for ... next which is more natural than ?do ... loop
11:14:08 yes i understand
11:15:14 Loss of functionality in each level. for/next doesn't let you pick your start index value; times doesn't give you access to the loop counter.
11:15:34 sure
11:15:47 loss of _unneeded_ functionality at each level :-)
11:15:55 many many loops need only the count
11:16:11 Not so many of mine.
11:16:18 times does give you access to the loop counter, btw
11:16:30 well, depends on your EXECUTE implementation
11:16:58 "almost all loops start at 0", how about that, then?
11:17:00 --- quit: ThinkingInBinary (Remote closed the connection)
11:17:05 I don't know of any Forths that let you access I from anywhere except inline where the DO/LOOP is coded. Can't be in a nested definition.
11:17:07 just a rewording, really
11:17:15 yes
11:17:32 but EXECUTE doesn't put anything on the return stack in some forths
11:17:54 Which ones?
11:17:59 oh wait, that's just for primitives. scratch that, you're right
11:18:09 mine :-)
11:18:15 Gforth doesn't let you use i from a nested definition. Well, you can *use* it, but the value is bogus.
11:18:22 ...and it's a fucking hassle, let me tell you that :-(
11:18:22 Quartus Forth doesn't either.
11:18:56 sure, i is just r@ or 2r@ + in basically any forth
11:19:01 Anyway, 'times' drops another value on the return stack before the execute, which would further complicate things.
11:19:34 true... but it could be seen as part of the loop environment
11:19:46 Somehow this doesn't seem to be heading in the 'simpler' direction.
11:20:14 it is if times is your primitive, and the other loops are built on that
11:20:24 just turn it inside out :-)
11:20:42 Start with the most abstract, awkward and least functional 'primitive', and build the simple ones on top of it?
11:20:51 I think I'll stick with the current methods. :)
11:20:56 ?DO /
11:21:11 ?DO ... LOOP is not simple at all!
11:21:19 it is about half the binary size of my forth engines
11:21:25 Come on.
11:21:27 well... 1/4th or so
11:21:37 600 of 2500 bytes
11:21:42 literally.
11:22:18 that's just the size of the machine code, not of the forth dictionary, obviously
11:22:50 and sure i could code DO/?DO ... LOOP/+LOOP in high-level code, but that would be too much of a speed penalty
11:24:29 I build ?DO around DO.
11:24:48 that is an option, yes -- but ?DO is more frequent
11:25:21 unless you have many fixed-size loops of course -- which brings us back to times :-)
11:25:39 Not in my code. But supposing that to be universally true -- you can still build it around DO. It's a DO/(+)LOOP enclosed in a conditional.
11:25:53 sure, yes
11:26:11 do you see a good way to build +LOOP on LOOP btw?
11:26:32 * segher would love to get rid of +LOOP as primitive
11:27:07 I factor out a 'loop-resolve' word, and build both around that.
11:27:28 that's just compiler, i mean the run-time stuff
11:27:44 I'm talking about run-time stuff.
11:28:18 well that's just ip += whatever is in the next cell at ip
11:28:35 mine are ITC and/or TC engines :-)
11:28:45 TTC
11:28:46 I don't follow. +LOOP has to adjust the loop index, then check if the limit boundary has been crossed, then branch.
11:28:53 LOOP does the same thing but with a fixed increment of 1.
11:29:09 different rules for when the loop ends
11:30:07 because +LOOP works both for signed and unsigned numbers
11:30:41 [yeah for two's complement it is the same]
11:30:45 Yes.
11:31:02 You work on non-two's-complement hardware?
11:31:13 not really, no
11:31:17 Ok then.
11:31:22 but i like to toy with the idea :-)
11:31:34 mental masturbation whatever
11:31:45 You could still factor out the commonality.
11:32:27 well, my runtime code is compiled C. and i'd like it to be sort-of efficient. the LOOP code is nice and compact -- the +LOOP code isn't really
11:32:59 too many branches
11:33:39 Why would you have more branches in a LOOP than in a +LOOP?
11:33:59 the other way around
11:34:08 Ok, turn it around. Why?
11:35:10 first branch in +LOOP is depending on whether the increment is pos or neg
11:35:31 second is the iterate-or-not
11:35:34 Yikes. Ok, let me unwind this. You have some compiler that generates C source from Forth?
11:35:48 no
11:35:56 my forth engine is written in C
11:36:21 so my code for DOLOOP looks like this:
11:36:44 So no access to machine flags, and hence the need to do the clumsy test-and-branch depending on sign.
11:37:04 PRIM(DOLOOP) if (++RTOS.n == RNOS.n) { RPOP; RPOP; } else JUMP; NEXT(++ip) MIRP
11:37:18 there are no machine flags. this is not x86.
11:37:25 well, it runs on x86, too, but hey
11:37:51 I have yet to see a CPU that doesn't let you test carry. Hang on, though, I think there's a way around it. Checking my logs for the last time I went over this.
11:38:06 powerpc compare insns do not set a carry flag.
11:38:13 thanks
11:39:00 Ok. Here it is. Compute index minus limit. Store as a. Copy a to b. Increment b by the loop increment. If signs of a and b differ, you're done.
11:39:03 i'm sure there must be a nice closed formula for "this was the last +LOOP iteration" but i haven't figured it out yet
11:39:37 oh boy. that simple?
11:39:38 Basically, you test if adding the increment to the index makes it change "hemispheres" with respect to the limit.
11:40:27 That should eliminate your extra branch, and it works within the shackles of C.
11:40:51 i'm not sure it's right though... say, word size is 16 bit, loop is 0..40000, increment is 35000
11:41:32 let me run through this... moment
11:42:24 ...no, doesn't work
11:42:47 basically, for this to work, you need to do the arithmetic in at least wordsize+1 bits
11:43:01 Give a non-working example, then.
11:43:06 ...which might generate not-so-bad-at-all code, really
11:43:16 the one i said
11:43:37 first iter: index = 0, limit = 40000, index-limit is positive
11:43:56 increment: 35000-40000 is negative
11:44:02 but the loop hasn't ended yet!
11:44:43 run it in 17-bit (or more) arithmetic, and all is fine, though
11:47:18 I'm not sure how that's impacted by the ambiguous-condition caveat in DO.
11:48:01 But certainly, handling it with an extra precision will take care of it in C.
11:48:22 end of +LOOP is if some value on the path from old-index (exclusive) to new-index (inclusive), incrementing in the direction of the sign of increment, is equal to limit
11:48:46 In fact it has to cross the boundary between limit and limit-1.
11:49:07 that's the same thing
11:49:16 But stated in a way I didn't have to read twice to get. :)
11:49:24 hehe
11:50:09 but anyway -- it has to _cross_ that boundary. it's not good enough if it, say, goes from limit to limit-1, but in the positive direction
11:50:23 You lost me.
11:50:31 it's a royal pain in the behind :-)
11:51:02 oh wait, it cannot do that, increment is a signed number
11:51:12 never mind that :-)
11:51:14 The semantics only say it has to cross that boundary, not in which direction.
11:51:39 well the domain is a circle
11:51:47 Yes, so I think that unravels your argument above. 35000 is not a valid signed 16-bit integer.
11:51:55 traveling one arc doesn't cross the other arc
11:52:03 yah
11:52:22 So you can use the method I suggested without extra precision, and avoid your branch at the same time.
11:52:34 sounds nifty
11:52:48 i'll have to think a little bit more about it though, if you don't mind
11:52:57 but thanks for setting me straight :-)
11:53:03 Hard for me to stop you.
11:53:21 Glad to help.
11:53:45 let me buy you a drink if ever we meet. where are you based?
11:53:57 Toronto, Canada. You?
11:54:14 germany, now -- but i'm dutch
11:54:26 You may have to mail the drink. :)
11:54:34 heh
11:54:44 i'll probably be in ottawa again this summer
11:55:04 Only a four-hour drive away.
11:55:21 hehe yeah
11:55:39 i've been to toronto -- to the airport that is
11:55:47 they made me take off my shoes
11:55:58 *big* mistake after a trans-atlantic flight
11:56:37 Heh. Most North American international airports require that now.
11:57:13 yeah, it's a hassle
11:57:22 they want to know everything about you, too
11:57:43 Yes, it's unpleasant.
11:57:52 next thing you know they'll require your preference: coke or pepsi? before you can enter the country
11:58:01 Always say 'pepsi' in an airport.
11:58:22 i say "mine's a drip coffee thank you so very much" heh
11:58:37 why that, btw?
11:58:48 Joking. 'coke' being cocaine.
11:58:57 oh heh
11:59:15 i have LOTS of problems when flying in from amsterdam
11:59:25 strip searches etc.
11:59:32 Everybody does.
11:59:45 but it's ridiculous
11:59:54 Yes.
12:00:04 so do something about it :-)
12:00:12 It'll be over as soon as the wars on Drugs and Terrorism are won.
12:00:14 I do; I travel by air as little as possible.
12:00:40 robert: heh right
12:00:59 robert: wanna play some gtnwf?
12:01:35 * Robert wonders what that is.
12:01:51 global thermo-nuclear warfare
12:02:02 an 80's computer game. really.
12:02:22 * Robert thinks about Armageddon Man.
12:02:30 A C64 fan made me play that a bit this summer.
12:02:42 ah, c64 :-)
12:02:52 now there's a *real* machine :-)
12:03:41 True, true... Got a bunch of Forth systems for my C128.
12:03:42 too bad forth implementations on that are a bit awkward, because of a cpu bug^Hquirk^H^H^H^H^Hdesign peculiarity
12:03:59 :D
12:04:53 indirect jumps where one half of the address to jump to is on one page, and the other half is on the next page, do not work
12:06:34 quartus: thinking about it more, i think you are right. this is great :-)
12:06:55 Glad to hear it.
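The hemisphere test agreed on above can be sketched as a standalone C function (my own sketch of the test as Quartus stated it -- xoring the before/after offsets of the index relative to the limit -- not the engine's actual PRIM code; unsigned casts keep the wraparound arithmetic free of signed-overflow undefined behavior):

```c
/* One signed +LOOP step. Returns 1 when the loop has finished,
   i.e. when adding the increment made the index cross the
   boundary between limit-1 and limit ("change hemispheres"). */
static int plus_loop_step(int *index, int limit, int inc)
{
    /* offsets of the index relative to the limit, before and after */
    int before = (int)((unsigned)*index - (unsigned)limit);
    *index = (int)((unsigned)*index + (unsigned)inc);
    int after  = (int)((unsigned)*index - (unsigned)limit);

    /* done when the two offsets differ in sign: one compare-to-zero
       pair and an xor, no branch on the sign of the increment */
    return (before ^ after) < 0;
}
```

With the arguments restricted to valid signed cells (the point made above about 35000 not being a valid signed 16-bit integer), the single sign-xor replaces both branches of the naive pos/neg-increment version.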
12:06:55 --- join: weet-bix (i=leontopo@intertwingled.net) joined #forth
12:07:06 now just to write it in a way where GCC on any target creates sane code out of it :-)
12:07:37 Should be easy enough.
12:08:13 --- quit: weet-bix (Client Quit)
12:09:06 you'd be surprised
12:09:06 Worst case might be a target-dependent sign-comparison routine.
12:09:31 i have to support as far back as GCC-3.3
12:09:49 not that i really care if generated code is crappy on old compilers
12:10:17 but yes, it should be pretty simple
12:10:24 sally forth!
12:10:43 ahoy
12:12:02 i don't want to use any target-dependent code -- it is bad enough to support three threading models and multiple stack models in one code base, already :-)
12:12:55 I believe you can only test the sign of an integer in C by comparing it to zero.
12:13:27 So I'd code it that way and trust GCC to optimize a non-branching comparison on any platform where it's possible.
12:13:30 sure -- but the compiler does (should do) a decent job of optimizing that away
12:13:37 indeed
12:14:01 So, two comparisons and an XOR.
12:14:04 --- join: snoopy_1711 (i=snoopy_1@dslb-084-058-166-208.pools.arcor-ip.net) joined #forth
12:14:19 or an add or a ub, yes
12:14:21 sub
12:15:00 I'd expect the xor to be faster.
12:15:43 on some cpus, xor sets fewer flags than add, so it might be marginally faster on those, yes
12:16:12 Add/sub involve barrel shifts as a rule. Xor doesn't.
12:16:22 do you know about any cpu where add takes more cycles than xor?
12:16:37 I believe it does on the 68K.
12:16:53 add involving barrel shift? er? barrel shift == "rotate"
12:17:16 Yes.
12:17:37 how does "add" involve "rotate bits"?
12:17:50 Hmm? Oh, sorry -- shifts, anyway.
12:18:12 shifts in _hardware_ are for free...
12:18:12 And into/out-of the carry & other flags.
12:18:29 [i am not saying adds aren't more expensive, sure that's true]
12:18:48 That's architecture-dependent. I'm just suggesting that as a rule-of-thumb xor is a simpler instruction than add.
12:18:56 yes
12:19:10 unless you have a truly wacko cpu
12:19:18 but i don't care about those
12:19:27 So I'd use xor in the test.
12:19:31 and besides, GCC could optimize all of that away anyway
12:19:42 that's the plan, yes
12:19:53 let me write the code just now, see what it generates
12:21:54 --- quit: Snoopy42 (Read error: 145 (Connection timed out))
12:22:06 --- nick: snoopy_1711 -> Snoopy42
12:24:06 one branch :-)
12:24:13 There you go.
12:24:47 pretty shitty code though
12:25:04 but that is probably just GCC acing up again
12:25:07 acting
12:25:14 sure looks like it
12:25:20 What's the C?
12:26:00 C?
12:26:04 oh okay
12:26:07 Yes, your source for the comparison.
12:27:24 PRIM(DO_X2b_LOOP) type_n x; x = RTOS.n - RNOS.n; RTOS.n += TOS.n; POP; if ((RTOS.n ^ x) < 0) { RPOP; RPOP; } else JUMP; NEXT(++ip); MIRP
12:27:48 typing it in from another machine, i hope there's no typos
12:28:22 I bet you could work the sign comparison into one ? comparator, without establishing a new variable x. Might help GCC optimize.
12:28:39 basically, my complaints about this generated code are just instruction selection and scheduling
12:28:44 100% GCC's fault
12:29:06 no, GCC optimized all of that away just fine
12:29:24 it basically always does, since the 3.x series
12:29:40 and 4.x is just great with most of those issues
12:30:14 there is no reason to obfuscate your code anymore, just to get better code generated. well, for those cases, anyway ;-P
12:30:52 I don't consider ? an obfuscation, but that's good anyway.
12:31:53 any code change you do to get better assembly generated is an obfuscation. unless your original code was crap ;-)
12:32:04 i prefer the more readable code
12:32:08 So do I.
12:32:18 :-)
12:33:48 okay, your turn. is there any code issue you could use help with :-)
12:34:10 Right now I'm in pretty good shape.
12:34:46 ah no, think back a bit then, dig in your memory
12:35:24 One of my users has just discovered that my turnkey word breaks when the target word has only one word in it. I may have to fix that.
12:35:32 there must be something :-)
12:35:53 turnkey? like, generate a standalone app?
12:36:47 Right. I have a 'MakePRC' word.
12:36:48 sounds just like a plain stupid bug^W^W^W simple oversight, anyway :-)
12:37:07 Yes I think MakePRC is tripping over a tail-call optimization.
12:37:28 Though I'm not sure, and so I'll have to do a bit of testing.
12:37:28 what kind of optimisations do you have?
12:38:04 right, another question: what debugging tools do you use for forth code?
12:38:30 like, what kind of tools would you use to figure out this one bug
12:38:34 Inlining for short sequences, optimization of instruction sequences (such as short literals), tail-call. Few other odds and ends.
12:39:01 inlining at machine code level, or forth ITC/DTC code level?
12:39:05 Hmm. I'd reproduce the bug, and then instrument the source until I found at which point it was failing.
12:39:13 Oh, it's a native-code Forth.
12:39:19 ah yes, the standard way :-)
12:39:30 No ITC/DTC here.
12:39:37 does it compile new definitions into machine code?
12:39:41 Yes.
12:39:52 lots of subroutine calls?
12:40:01 well you do inlining, you said
12:40:05 Except for inlined sequences, yes.
12:40:18 does it reorder machine instructions?
12:40:34 No. Can't afford that degree of optimization on the Palm.
12:40:48 palm is a mips cpu, isn't it?
12:40:55 plenty of horsepower
12:40:57 ARM, emulating a 68K.
12:41:03 ah heh :-)
12:41:09 Not plenty of battery life.
12:41:50 well reordering machine insns isn't that expensive [but do put a cutoff in there somewhere] -- there's normally quite severe constraints
12:41:54 true true
12:42:12 and it doesn't help that much on an arm anyway
12:42:20 Practically speaking it's an unnecessary degree of optimization in this environment.
12:42:22 any register allocation?
12:42:43 Yes, TOS is cached.
12:42:47 yeah, first do peepholes etc.
12:42:58 If I went full-force into optimization I might achieve another factor of two speedup on certain sequences.
12:43:18 no i mean -- if you inline, do you get register numbers on the inlined code, or is it a "verbatim" copy always?
12:43:37 yes, you can't expect much more on general code
12:43:48 Inlining is verbatim except where it's necessary to compensate for previous optimizations.
12:44:01 on powerpc, my stupid stupid ITC engine beats gforth on many benchmarks!
12:44:10 right
12:44:20 sounds just fine on arm, really
12:44:27 Really it's on 68K.
12:44:32 this is a strongarm, right?
12:44:35 ah yes
12:45:05 are you really running it on an emulator, or doing some direct translation?
12:45:43 Quartus Forth does all the little or no-cost optimizations, the ones that take CPU time are not there, but also not needed for general-purpose Palm apps. There's an assembler (68K and ARM both) for detail optimization should some particular app require it.
12:45:47 translation would be way faster, as both cpus have 16 regs
12:46:30 It's running on the built-in 68k emulator. There's no way to write a pure ARM app, all OS calls have to go out and in through the 68k emulator. You lose any advantage. It is possible to write ARM subroutines, though.
12:46:54 write the whole thing as a subroutine :-)
12:47:06 Then any OS calls have to go out and back in through the 68K emulator.
12:47:14 the engine as a whole
12:47:26 I can repeat that again, but I'm not sure you're getting it. :)
12:47:31 os calls are slow anyway. who cares.
12:47:43 Believe me, you'd care on a Palm, apps spend a great deal of time in OS calls.
12:48:00 of course they do
12:48:12 but you're not going to speed up the os
12:48:33 At any rate, this is a strength. Quartus Forth apps run on all versions of the OS from the beginning, Palm OS 1 up to Palm OS 5 (even 6, though that's not in official release).
12:48:42 or do you mean, a call to the os is way way slower if called from pure arm code?
12:48:48 Yes.
12:48:54 ouch OUCH
12:49:06 You have to re-endian all the arguments, too.
12:49:14 oh bugger
12:49:43 can't they just run the arm in big-endian mode [no they can't, they use intel arm chips]
12:49:47 grr
12:50:42 The 68K emulator is a good one, and the OS is ARM underneath the interface layer, so it's a decent solution with remarkably good compatibility with earlier software and devices.
12:51:15 as good as macos/ppc running 68k?
12:51:25 I'd expect so.
12:51:31 very nice
12:51:45 Though I think the macos/ppc solution was a bit different.
12:51:59 JIT or something like it.
12:52:11 so -- your optimizer... does it optimize stuff like dup drop ?
12:52:31 it could do JIT, yes -- but straight direct interpretation for most of it
12:52:36 mostly JIT for loops
12:52:52 It doesn't. I'm working on a literal-sequence optimizer presently, to determine if the cost of doing it can be reduced sufficiently.
12:53:30 i think the key is determining what optimizations are worthwhile on real code
12:53:38 _on real code_
12:53:42 So things like "3 5 + 2 * dup" would compile "16 16"
12:53:50 but the problem is: what _is_ real code? :-)
12:54:15 Well, yes, that's decidedly true -- I don't optimize for the sake of it, as optimizations can introduce new bugs, especially in combination.
12:54:21 or 16 dup , if that is faster / more compact
12:54:53 not just that, but many "optimizations" make the code run slower
12:55:01 --- join: bbls (n=ionut@81.180.7.4) joined #forth
12:55:03 hello
12:55:04 I try to avoid those. :)
12:55:06 Hi bbls
12:55:14 hey ho
12:55:16 is there a forth implementation that runs under win32, linux and osx?
12:55:24 gforth does.
12:55:25 and that has sockets and threads implementation
12:55:30 That I'm not sure of.
12:55:37 hmm
12:55:47 quartus: you can't always predict which opts are "bad" though :-(
12:56:02 segher, so I stay with the low-hanging fruit.
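The literal-sequence folding mentioned above (where "3 5 + 2 * dup" compiles as the two literals 16 16) can be modeled with a compile-time stack of pending literals. This toy covers only the all-literal case from the example; a real compiler would fall back to emitting code when an input is not a known literal, and everything here (names, the int-array "output") is illustrative, not Quartus Forth's implementation:

```c
#define MAXLIT 16

static int lits[MAXLIT];   /* pending, not-yet-emitted literals */
static int nlits;

static int out[64];        /* "compiled" output, modeled as literals */
static int nout;

/* a literal just goes on the compile-time stack */
static void lit(int n) { lits[nlits++] = n; }

/* fold a binary operator whose inputs are both known literals */
static void binop(char op)
{
    int b = lits[--nlits], a = lits[--nlits];
    switch (op) {
    case '+': lit(a + b); break;
    case '*': lit(a * b); break;
    }
}

/* dup on known literals is just another copy of the top literal */
static void dup_(void) { lit(lits[nlits - 1]); }

/* flush pending literals into the output */
static void flush(void)
{
    for (int i = 0; i < nlits; i++)
        out[nout++] = lits[i];
    nlits = 0;
}
```

Nothing is emitted until a flush (or, in a real system, until a word that needs its inputs at run time), which is what lets the whole arithmetic sequence collapse first.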
12:56:19 bbls: on linux/bsd at least, gforth can use networking, yes
12:56:29 quartus: a wise choice :-)
12:56:33 If I drill down too far I'm going to run into the differences in the way the Palm OS 5 68K emulator responds vs. the original 68K chips.
12:57:09 quartus: like on my engine, on running open firmware... first opt i made was making "memcpy()" a primitive
12:57:26 If you need that level of optimization, the best approach is to determine where your particular app's bottleneck is, and optimize that bit.
12:58:01 Palm devices are seldom used for heavy computation, so it doesn't come up too much.
12:58:04 quartus: next was making BRANCH a primitive. yes, memcpy() was _way_ more expensive than taking 200+ cycles per branch
12:58:46 wow, we agree a lot! you are a sane person! i don't meet many of those :-)
13:00:01 Heh.
13:00:24 my current (self-appointed) task is making dictionary searches cheap. the catch is, i want to make it simpler than it is already, too
13:00:31 What's your mechanism now?
13:00:53 stack of wordlists, each wordlist a linked list
13:01:08 do plain string comparison per entry
13:01:23 I use a hashtable of chains. Wordlist support is automatic.
13:01:46 I've done quite a bit of work on optimizing dictionary search in order to speed compilation times.
13:01:52 how is that... do you store WID for each hash entry?
13:01:57 WID?
13:02:04 Oh. Wordlist identifier.
13:02:18 my system compiles most of itself at runtime, i really need to have this fast ;-)
13:02:18 No. I xor the hashvalue with the WID.
13:02:41 plus, open firmware does a _lot_ of wordlist searches at runtime no matter what
13:03:05 how does that help?
13:03:32 Give it some thought. Imagine a hashtable with two buckets, which are two separate linked lists. Imagine then that you have two wordlists, 0 and 1.
13:03:49 (you can only have as many wordlists as there are buckets in your hashtable with this method).
13:03:53 it might help finding an entry in the wordlist you want to look in faster, but you still need to make sure that the entry you found actually is in the worlist you are searching for 13:03:59 No, it's automatic. 13:04:25 By xoring the hashvalue with the WID you're searching, you only search in the correct chain. 13:04:40 so... you hash, xor, you find... and then you do an extra string comparison 13:04:42 ? 13:05:01 Right. Hash, search, compare. You're doing far, far fewer string comparisons because each chain is short. 13:05:26 that works, right. i can't use that though, i have >1000 wordlists normally :-( 13:05:29 And wordlists are free, both in terms of word headers (0 add'l bytes) and in terms of the search. 13:05:41 That's not a problem, have 4096 buckets in your hashtable. 13:06:05 not enough 13:06:19 Ok, go back to that line I typed and increase the buckets. 13:06:24 i might be comfortable with 64k entries 13:06:36 I fail to see why you'd need 65536 wordlists. 13:06:43 but that takes at least 0.5MB of memory 13:07:06 there are two (or three, depending on the model i use) wordlists per device node 13:07:24 and there can be _lots_ of device nodes 13:07:48 That strikes me as a bizarre way to manage your namespace, but whatever works. Yes, it's a space/speed tradeoff, but the speed increase is significant. 13:08:04 And the space isn't major, though your setup sounds odd. 13:08:07 it's not my design, that's how the OF standard works :-( 13:08:27 So you're trying to retrofit OF to have faster dictionary searches? 13:08:49 my OF implementation, but yes 13:09:11 like, there are LOTS of wordlist lookups done at runtime 13:09:12 A clumsier way to do it would be to store the WID as part of the header. 13:09:19 Slower, too. 13:09:36 I suppose you might allocate a new tiny hashtable for each wordlist. 13:09:51 almost all names looked up in any device node exist in many many device nodes 13:09:52 Never tried it, though. 
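[Annotation: the hash-xor-WID scheme as Quartus describes it — hash the name, xor with the wordlist ID, search only that chain, confirm with one string compare — sketched in C. The header layout and all names here are invented; the log does not show Quartus Forth's real structures.]

```c
#include <string.h>

/* Each (name, wordlist) pair lands in a chain of its own; headers
 * carry no WID at all, exactly as claimed above. */
#define NBUCKETS 4096            /* power of two; also the WID limit */

struct header {
    const char    *name;
    struct header *next;         /* chain within one bucket */
    /* ...code field, etc. */
};

static struct header *bucket[NBUCKETS];

static unsigned hash(const char *s)
{
    unsigned h = 0;
    while (*s) h = h * 33 ^ (unsigned char)*s++;
    return h;
}

static void define(struct header *w, unsigned wid)
{
    unsigned i = (hash(w->name) ^ wid) & (NBUCKETS - 1);
    w->next = bucket[i];
    bucket[i] = w;
}

static struct header *lookup(const char *name, unsigned wid)
{
    unsigned i = (hash(name) ^ wid) & (NBUCKETS - 1);
    for (struct header *w = bucket[i]; w; w = w->next)
        if (strcmp(w->name, name) == 0)   /* hash, search, compare */
            return w;
    return 0;
}
```

[Why the wordlist check is "automatic": equal names hash equally, and since AND distributes over XOR, two equal names with distinct WIDs below NBUCKETS can never share a bucket — so a name match within a chain is always a match in the right wordlist.]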
13:10:14 <Quartus> I can only tell you that this method, although it costs you space in terms of the hashtable, is as fast as I've been able to devise. 13:10:27 <segher> i'd prefer not to have a hash table per wordlist. a) overhead b) allocation complexity 13:11:00 <segher> yes, if you know in advance how many wordlists you'll have (and it's a small number), it sounds like an excellent scheme 13:11:00 <Quiznos> Is OF 16-bit? 13:11:09 <segher> either 32-bit or 64-bit 13:11:30 <segher> 32 is minimum 13:11:31 <Quartus> It needn't be a small number, it simply determines how large your hashtable is. 13:12:03 <Quartus> That, combined with a very simple hash function, and you're set. 13:12:13 --- part: bbls left #forth 13:12:18 <segher> heh... do you really want a 4GB * wordsize hash table :-) 13:12:43 <Quartus> That's obviously absurd. 13:13:06 <Quartus> If I really did have more than 4GB of names in my dictionary, though, then yes. 13:13:09 <Quiznos> hash me baby! 13:13:42 <Quiznos> i want a fully integrated Flinux 13:13:49 <Quiznos> is there one yet? 13:14:20 <segher> flinux? 13:14:29 <Quiznos> no, Flinux, Forth+linux 13:14:34 <Quiznos> or 13:14:37 <Quiznos> FLinux 13:14:43 <Quiznos> since both are proper nouns :) 13:15:27 <Quiznos> btw, unless you're talkin bout a forth-on-metal with 4gb ram, linux wont let you have 4gb 13:15:29 <segher> quartus: i think i'll go with one hash table, and store wid with each entry. some words can be in multiple wids, too... not really a requirement, but nice to have 13:15:38 <Quiznos> linux takes the top gig by design 13:15:48 <Quiznos> or two gig depending on config 13:16:03 <segher> linux will let me have 2**40 bytes just fine, thanks 13:16:26 <Quartus> segher, the hashtable will still speed you up by about the number of buckets you use. 13:16:39 <segher> yeah 13:16:39 <Quartus> Do use a very simple hash function, though. Overkill will eat your gains. 13:16:55 <Quiznos> has the name and its nfa 13:17:00 <Quiznos> or ^nf 13:17:06 <segher> hash = (hash * 33) ^ (next char) 13:17:08 <Quiznos> has/hash 13:17:15 <Quartus> I recommend a power-of-two-sized hashtable so you can use bitmasking to get the hashtable offset. 13:17:21 <segher> yes 13:17:35 <Quiznos> segher did what i wrote to you this morning make sense?
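[Annotation: the hash segher quotes, hash = (hash * 33) ^ next-char (a djb2 variant), together with Quartus's power-of-two table size, makes bucket selection a single AND — no division, no prime table sizes. A minimal rendering in C; the table size here is arbitrary.]

```c
/* The simple multiply-xor hash from the log, plus bitmask bucket
 * selection as Quartus recommends. */
unsigned hash(const char *s)
{
    unsigned h = 0;
    while (*s)
        h = h * 33 ^ (unsigned char)*s++;
    return h;
}

#define NBUCKETS 1024                    /* any power of two */
unsigned bucket_of(const char *s)
{
    return hash(s) & (NBUCKETS - 1);     /* mask, not modulo */
}
```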
13:17:37 Don't buy the conventional hype that hashtables have to be prime-sized, it's bunk. 13:17:50 the nice thing about one hash table total, instead of one per wid, is that i cheaply can make it huge 13:17:51 bunk is debunkable? 13:18:01 i know it is bunk, yes 13:18:34 Yes, I don't recommend the multiple hashtable approach, it's just a possibility that I've never tested. 13:18:41 quiznos: i have no idea what you wrote this morning. heck, i don't know what morning is for you, this is the net :-) 13:18:57 segher i think it was you - stacks 13:19:02 c's and forth 13:19:16 you'd have to be more exact 13:19:25 you said `c dont have stack' 13:19:26 i talk about all that shit all the time 13:19:31 yes 13:19:38 this afternoon, right 13:19:38 i said `it's in the call/return process/activity' 13:19:43 A single hashtable makes MARKER straightforward, too. 13:19:50 it's not in the language 13:19:53 fine, be timezonally differential! 13:19:56 bahahah 13:20:02 :) 13:20:14 quartus: let me think about that for a sec... 13:20:15 so what are we hashing now? 13:20:24 The usual, dictionary headers. 13:20:26 bits? words? cells? 13:20:35 ooo headers 13:20:56 * segher is hashing his last two beers, and then off to the pub 13:22:11 quartus: so, you walk all of the hash, throw out everything with an address > mark? 13:22:17 so, is there a fully integrated flinux? 13:22:27 Gosh no. MARKER copies the hashtable, copies it back later. 13:22:46 heh, that works fine, sure :-) 13:22:56 The only thing a hashtable complicates is an ordered WORDS, but not much. 13:23:02 you'll need to copy back HERE as well 13:23:08 Right, there are pointers to record. 13:23:39 Quartus is there a speed increase with hashing words over defining a good word with good algorythm? 13:23:45 HERE and your hash and the dictionary stuff should be all, i think 13:23:52 Quiznos, I don't understand what you're asking. 
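[Annotation: MARKER as Quartus describes it — copy the hashtable, copy it back later, and restore HERE along with it — is a plain snapshot/restore. A self-contained sketch with invented names and sizes; Quartus Forth additionally keeps a separate dataspace HERE, per the following exchange.]

```c
#include <string.h>

#define NBUCKETS 4096
struct header { const char *name; struct header *next; };

static struct header *bucket[NBUCKETS];   /* the dictionary hashtable */
static char dict[65536], *here = dict;    /* dictionary space pointer */

/* Snapshot the whole bucket array plus HERE; restoring both forgets
 * every word defined after the mark. */
struct marker {
    struct header *saved[NBUCKETS];
    char          *saved_here;
};

static void marker_set(struct marker *m)
{
    memcpy(m->saved, bucket, sizeof bucket);
    m->saved_here = here;
}

static void marker_restore(const struct marker *m)
{
    memcpy(bucket, m->saved, sizeof bucket);
    here = m->saved_here;
}
```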
13:23:55 ok 13:24:15 segher, Quartus Forth has separate code and dataspace HERE pointers, but yes, that's the size of it. 13:24:38 oh nice 13:24:44 hashing is for descreasing the time spent looking for a particular cell in the header: whether the hash of the word oin the PAD matches the current word.head.hash matches 13:24:49 right so far? 13:24:54 i'm thinking about having a separate space just for the names 13:24:56 oin/in 13:25:11 Quiznos, hashing reduces the number of headers that have to be looked at at all. 13:25:18 that is not what hashing does at all 13:25:21 Quartus Forth has a separate namespace, too. 13:25:37 what quartus said 13:25:37 so if the hash descreases spead, at the expense of cost of not comparing elements of the word's name then how about using a better search algo 13:25:52 Hasning increases speed, or you're doing something really wrong. 13:25:54 dont word trees do the same thing? 13:26:01 with better partitioning? 13:26:04 sure 13:26:19 i'm thinking of FIG's word-tree 13:26:30 except those tend to be a) slower b) more complicated 13:26:47 Better partitioning? A good hash-function results in a balanced distribution between buckets. 13:26:54 look, we're running on hw that is atleast 100 times faster than a ][+ 13:27:03 And yes, trees are fearfully more complex, and fantastic overkill for Forth. 13:27:06 at what point does cpu speed become irrelevent? 13:27:17 that's not a good outlook on trees 13:27:17 you tell us. 13:27:24 fig has always had trees 13:27:35 only slow hw killed them or reduced populatiryt 13:27:40 FIG? FIG Forth had a single linked list. 13:27:41 populatrity 13:27:50 it had one list with branches 13:28:11 That's not a tree in any conventional algorithmic sense. 13:28:24 but now 3 decades later, the hw is a 100 times faster 13:28:29 can one of you guys change their nick? it's really hard to tell you guys apart 13:28:32 so, when does speed become irrelevent 13:28:33 :-) 13:28:35 --- nick: Quiznos -> PurpleSmurf 13:28:41 hows that? 
:) 13:28:46 thank you :-) 13:28:46 PurpleSmurf, speed is never irrelevant. My Forth runs on Palm hardware. 13:28:47 yw 13:28:58 those arent slow at all either! 13:29:05 they'd be consoles if they were 13:29:11 but they ARE gui 13:29:20 so how much is enough? 13:29:28 50Mhz? 100M, 1000M? 13:29:44 when is the hw platform fast enuf? 13:29:57 Ok. I'll leave that for somebody who thinks it's an interesting question to answer. 13:30:19 i'm surprised you or anyone else wouldnt think it's interesting. 13:30:22 :-) 13:30:37 alright, so hashing with a space contraint 13:30:42 a 0.9875 MHz 6502 is fast enough for anything. 13:30:44 is that the criteria? 13:30:50 segher yea heh 13:30:58 or a 2mhz z80 13:31:03 PAL C64 :-) 13:31:06 or a 10m 68010 13:31:11 heh 13:31:22 but we're obviously datin ourselves 13:31:23 :) 13:31:25 nah, Z80 is too slow for ANYTHING EVER 13:31:27 segher: Damn straight. 13:31:27 heh 13:31:44 * Robert was poking RSA for 6502, but got tired of it. 13:31:57 Know of any bignum libraries for it? 13:32:01 so hashing is supposed to replace a string of random length with a constant length value that is constant for the same string? 13:32:14 * segher has done perspective-correct texture mapping on c64 13:32:14 Generally speaking. 13:32:20 I was starting on one, but got tired after writing most of a dynamic memory manager. 13:32:29 so, instead of n-cell comparisions, it's reduced to a 1-cell cmp? 13:32:43 It's not used for a direct comparison at all. 13:32:58 but the flaw in that is that if the first cells comp'd dont match, then br 13:33:07 so the hash is not needed 13:33:16 A hashvalue on a dictionary header is used to determine which bucket of the hashtable to search. 13:33:26 it's the same speed to compare hashes and the first cell to determine equal 13:33:31 equality 13:33:39 You're not reading what I'm writing. I'm going out for a bit. 
13:33:41 ok, so that's more complication 13:33:47 i am reading/ 13:33:53 you normally want a hash table to have about 3 entries per hash slot worst case 13:34:01 i'm just a few lines ahead of you Quartus :) that's all 13:34:06 i type fast 13:34:10 95% change or 98% change or so 13:34:44 so, for hashsearch (hereafter HS) it's a cmp, then calc 13:34:47 instead of cmp, br 13:34:49 bran 13:34:58 it's adding more complexity to search 13:35:15 i dont get the reason for that 13:35:17 compute hash value, walk chain of words matching those hash values 13:35:34 but that chain is _way_ shorter than the chain of *all* words 13:35:35 and as Quartus said, his f. works on pda - these arent exactly limited boxen either 13:36:01 that walk, still requires two reads, one cmp and further work 13:36:33 still, it's more work to find a word with a particular name 13:36:39 two reads. who cares. 1000's and 1000's (if not 1000_000's) of reads... there is *do* care 13:36:48 s/is/i 13:37:27 ok, here's alittle more detail what i mean 13:37:37 please 13:37:48 which, {do, dont}? 13:38:21 do you want me to continue or no? 13:38:23 just being friendly, "please continue" 13:38:27 thank you 13:38:45 briefly, struct word {head{nf,lf} body{ cf, pf}} 13:38:46 ok? 13:38:55 don't be offended at irc, too many jerks on irc 13:39:02 i'm not offendable. 13:39:11 i dont use profane lang either. 13:39:15 ask ##Linux 13:39:29 i don't really get what you meant there, sorry 13:39:53 a word has two parts, head body, head two parts, namefield, linkfield 13:40:02 body, two parts, code, param fields 13:40:04 k so far 13:40:06 ok 13:40:10 ah ok 13:40:20 and we have a word : search ... ; 13:40:25 takes a string 13:40:42 and param field for a normal colon def is a list of cf's? 13:40:44 starts at the top of the (generically meant) tree of words 13:40:50 right 13:41:00 oki-dokie 13:41:19 simepl.forth reads last or latest and starts a strcmp() type word 13:41:24 tree of _which_ words? 
13:41:24 simple 13:41:29 at latest 13:41:33 or last 13:41:36 if you're eforth 13:41:40 :) 13:41:51 in the current 13:41:57 everything defined so far? or words in certain lists only? 13:42:00 simply your model alittle? 13:42:05 yes, defined. 13:42:42 so strcmp starts, read 1cell and compare to to arg1 from caller-word 13:43:12 do ==? if continue-read-cmp else fail then ; 13:43:28 we know how tree searches work, don't bother explaining that, please 13:43:30 excuse the mixed sense of do/then 13:43:36 ok 13:44:02 my point is that on the first read at first-cell, if it fails, it's going to be faster than a hash-searc 13:44:15 nah 13:44:18 why? 13:44:35 what's the signature of hash-search? 13:44:58 a binary tree search takes 1.6 * [number of bits in the number of entries in your tree] comparisons on average 13:45:16 1.6 * lg #entires 13:45:29 and for a linear as i'm thinking? 13:45:33 hash search takes about 2 13:45:35 2 13:45:45 not 2 * something, _2_ 13:45:58 but HS still needs to do more work to even get started 13:46:06 so what 13:46:10 it's gotta hash the recieved string 13:46:19 which is cheap 13:46:19 tht makes it slower 13:46:28 cheap enough, anyway 13:46:31 but not faster 13:46:32 heh 13:46:51 it is a constant cost. constant compared to searching all of the dicts, anyway 13:47:21 even an ancient ][ forth searches everything before the blink of an eye 13:47:31 as you said, that was a ~1mhz cpu! 13:47:38 we're not looking at searching 20 names. more like 20_000 or 20_000_000 13:47:41 alittle less even 13:47:44 ok 13:48:09 to quoth Wirth, algorithms + data structures = programs 13:48:28 and no, a 6502 does not search eeveryting " at the blink of an eye" 13:48:32 i'm still of the mind that if it's not fsat enough then either get a smarter compiler or a better algo 13:48:37 sure it did 13:48:45 i noticed 50ms myself often 13:48:50 lol 13:48:54 in48k? 
13:48:54 which is *quite* noticable 13:49:00 one blink 13:49:05 blink slower :) 13:49:09 lol 13:49:18 have ten of those words on one line. and then? 13:49:23 we still get that on linux when things are swapping 13:49:45 alright, so i need more imperical research 13:50:06 impirical 13:50:10 empirical 13:50:11 yea 13:50:21 i'll get it eventually 13:50:23 empirical is the word you are looking for 13:50:26 nods 13:50:40 i havent written that word in decades 13:51:42 huh. noisy in here today :) 13:51:43 actually, re ][ forth not searching, isn't quite right. 13:51:57 yes -- if you have few wordlists in your search order, and those are small, a very simple search algo might perform better thean more sophisticated algo's. BUT IT DOESN'T SCLAE 13:52:10 s/LAE/ALE/ 13:52:27 Simply put, if you have a linked list that takes n time to search, you can make it take n/m time to search by using a hashtable of m buckets. Whether the original linked list was fast enough for your purposes is irrelevant to whether or not you can improve the search time by a factor of m. 13:52:27 tathi: heh, morning :-) 13:52:40 in one of my old forth books, i recall reading that the search starts at the top and searches the trees in order and forth itself gets searched twice 13:52:56 Quartus k 13:53:22 You can also get m wordlists for free with a simple technique. 13:53:23 so, now you have trees and lists 13:53:31 There are no trees in a hashtable. 13:53:46 trees are more trouble than they're worth 13:53:46 not just simple standard forth words, but more than just words 13:53:48 quartus: and if there are N entries in your table, you would normally make your table size N or N/2... so what's the speed then (quartus, don't answer :-) ) 13:54:00 forth linkdlists, hashes, and possibily more 13:54:34 --- quit: warpzero ("leaving") 13:54:37 PurpleSmurf, you're mixing concepts and terms together apparently at random. 
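[Annotation: the traditional search PurpleSmurf has been describing — start at the newest header and strcmp your way down the link fields — looks like this in C. Field names follow his nf/lf sketch; the struct is illustrative, not any particular Forth's layout.]

```c
#include <string.h>

/* One linked list through every header, newest first, with a plain
 * string compare per entry. */
struct word {
    const char  *nf;      /* name field */
    struct word *lf;      /* link field: previously defined word */
    /* cf, pf: code and parameter fields */
};

static struct word *latest;              /* head of the dictionary */

struct word *find(const char *name)
{
    for (struct word *w = latest; w; w = w->lf)
        if (strcmp(w->nf, name) == 0)
            return w;
    return 0;    /* on average n/2 compares for a hit, n for a miss */
}
```

[The early-out on a first-character mismatch that PurpleSmurf counts on is exactly what strcmp does anyway — but this loop still visits every header on the chain, which is the cost the hash scheme divides by the number of buckets.]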
13:54:37 --- join: warpzero (n=warpzero@wza.us) joined #forth 13:54:46 sorry, not what i mean to do. 13:55:01 quartus: that wordlist xor trick is very cool, btw... do you know who thought of it first? 13:55:03 A Forth dictionary holds Forth word headers. That's it. 13:55:18 segher, I independently invented it myself, but I'm sure I wasn't the first. 13:55:20 it depends on how the dictionary is designed 13:55:38 a non-segment arch/platform doesnt need to be artificially segmented 13:55:45 quartus: it _is_ very cool. too bad i can't use it :-( 13:56:38 segher, maybe you can, in part. Split your wordlist into two pieces, one that fits the hashtable mask. 13:56:50 heh true 13:56:59 Then store the WID in the header too, for confirmation. 13:57:07 there is a flat-module package for linux that can be installed to eliminate segementation in binaries 13:57:28 making that a dep to running a non-segmented forth would be one way to eliminate segements 13:57:45 i've been thinking of having the "main" forth wordlist (and maybe a few others) be searched by something different from all others 13:58:11 Why? 13:58:20 because? 13:58:28 crooked letter 13:58:33 mostly because the access patterns are different 13:58:44 Good hashfunction will level the access patterns. 13:59:05 level it, but not take advantage of it 13:59:35 Maybe. But it complicates things. 13:59:56 i mean, if you are compiling code, you *know* most of that shit will hit in the FORTH or MACRO wordlists 14:00:18 By adding your initial wordlists to the hashtable first, they get searched first, so that gives you a boost. 14:00:25 --- quit: segher (Read error: 104 (Connection reset by peer)) 14:00:32 oops 14:01:34 You wind up comparing n/m/2 strings on average to find a word, so by putting your efforts into optimizing the primary search mechanism, it's an across-the-board win. 
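[Annotation: Quartus's "n/m/2 strings on average" is just chain-length arithmetic. Plugging in sizes that came up earlier in the conversation — 20,000 names, 4,096 buckets; the pairing is mine — shows where segher's "hash search takes about 2" comes from.]

```c
/* A hashtable of m buckets cuts the average successful search from
 * n/2 string compares (one long list) down to n/m/2 (one short chain). */
double avg_list_compares(double n)           { return n / 2; }
double avg_hash_compares(double n, double m) { return n / m / 2; }

/* 20,000 names in 4,096 buckets: the plain list averages 10,000
 * compares per hit, the hash chain about 2.4. */
```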
14:02:04 --- quit: warpzero (Read error: 104 (Connection reset by peer)) 14:02:17 agreed, but i dont want to complexify the intended porpose 14:02:42 --- join: warpzero (n=warpzero@wza.us) joined #forth 14:03:24 or the word thereof :) 14:06:41 --- join: segher (n=segher@dslb-084-056-156-029.pools.arcor-ip.net) joined #forth 14:07:23 dunno if this would help when using a better hash strategy, though 14:08:14 You wind up comparing n/m/2 strings on average to find a word, so by putting your efforts into optimizing the primary search mechanism, it's an across-the-board win. 14:09:33 it's not going to hurt, sure. and it is a really cheap optimisation to make 14:12:28 when searching over multiple wordlists though, you better watch out for not carrying some word in some wordlist more to the front than a word with identical name in some other wordlist earlier in the search order 14:13:20 Explain that further :) 14:14:42 two wordlists. both have a "config-l@" word. you find one (in some device node, so some wordlist). you want to find one in the other wordlist some time later. 14:16:28 Right. So how is that a problem? 14:16:33 basically -- what i want is, to *not* invalidate all of the search order / word finding hash table / whatever content whenever i change search order (which happens *a lot* 14:16:42 No need. 14:16:56 If two words are in different wordlists, they wind up in different buckets. 14:17:08 indeed -- the hash table schemes we discuseed tonight will work fine afaics 14:17:41 not that one though -- i can't use that one 14:19:59 can gforth get environment variables from unix? 14:20:17 yes 14:20:23 how? 14:20:28 s" HOME" environment? 14:20:33 returns false 14:21:00 read the manual? 14:21:29 sorry, i don't have a bett answer than that 14:21:36 better 14:23:50 which forth has all of the kernel's api wordified? 14:26:51 Jason, I think it's get-env or some such 14:28:03 no one knows? 
14:42:11 Quartus: thanks :) getenv worked 14:54:06 --- part: Cheery left #forth 14:58:35 environment? is the forth environment, not the unix environment 14:59:00 --- quit: warpzero (Read error: 104 (Connection reset by peer)) 14:59:05 --- join: warpzero (n=warpzero@wza.us) joined #forth 14:59:17 I see 14:59:30 anyway, I wrote this: 14:59:34 : env getenv over if true exit then drop ; ( addr u -- addr2 u2 true | false ) 15:00:06 getenv seems to return two nulls if it doesn't find the var 15:00:14 ie ( addr u -- 0 0 ) 15:00:38 probably, yes. not very clean / neat, though 15:04:14 ls 15:04:19 ack, sorry 16:30:05 --- quit: virl (Read error: 104 (Connection reset by peer)) 17:05:43 --- quit: tathi ("leaving") 17:50:32 --- join: ThinkingInBinary (n=tom@pool-68-163-163-216.bos.east.verizon.net) joined #forth 18:54:46 --- part: ThinkingInBinary left #forth 19:22:12 --- quit: Jim7J1AJH ("leaving") 21:15:13 --- quit: virsys ("bah") 21:19:42 --- quit: sproingie (Remote closed the connection) 23:19:08 --- join: swalters_ (n=swalters@6532183hfc82.tampabay.res.rr.com) joined #forth 23:19:15 --- quit: swalters_ (Remote closed the connection) 23:40:12 --- join: yoyoFreeBSD (n=root@222.90.44.37) joined #forth 23:40:23 --- part: yoyoFreeBSD left #forth 23:59:59 --- log: ended forth/06.01.28
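[Annotation: JasonWoof's `env` wrapper above turns gforth's getenv — which returns 0 0 when a variable is unset — into a word with a proper found flag. The same convention in C, using the standard library's getenv(3); the function and parameter names here are made up.]

```c
#include <stdlib.h>
#include <string.h>

/* Look up a variable; return its value and length plus a found flag,
 * instead of the bare 0 0 that gforth's getenv gives on failure. */
int env(const char *name, const char **value, size_t *len)
{
    const char *v = getenv(name);   /* NULL when the variable is unset */
    if (!v)
        return 0;                   /* like `false` from the Forth word */
    *value = v;
    *len = strlen(v);
    return 1;                       /* addr2 u2 true */
}
```

[Note that the Forth version is subtly economical: on failure, `drop` leaves the zero address itself on the stack, and that zero doubles as the false flag.]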