00:00:00 --- log: started forth/06.07.12 01:18:15 --- quit: johnnowak () 01:18:47 --- join: johnnowak (n=johnnowa@user-0cev1ia.cable.mindspring.com) joined #forth 01:20:06 --- part: johnnowak left #forth 01:28:14 --- quit: Raystm2 (Read error: 104 (Connection reset by peer)) 01:31:42 --- join: Raystm2 (n=Raystm2@adsl-69-149-43-76.dsl.rcsntx.swbell.net) joined #forth 02:43:57 --- quit: ASau (Remote closed the connection) 03:15:16 --- join: vatic (n=charlest@pool-162-83-254-201.ny5030.east.verizon.net) joined #forth 04:40:52 --- join: nighty (n=nighty@66-163-28-100.ip.tor.radiant.net) joined #forth 05:08:22 --- quit: nighty (Read error: 113 (No route to host)) 05:09:55 --- join: nighty (n=nighty@66-163-28-100.ip.tor.radiant.net) joined #forth 05:26:44 --- join: snoopy_1711 (i=snoopy_1@dslb-084-058-142-115.pools.arcor-ip.net) joined #forth 05:27:18 --- quit: snoopy_1711 (clarke.freenode.net irc.freenode.net) 05:27:18 --- quit: Snoopy42 (clarke.freenode.net irc.freenode.net) 05:27:44 --- join: snoopy_1711 (i=snoopy_1@dslb-084-058-142-115.pools.arcor-ip.net) joined #forth 05:27:44 --- join: Snoopy42 (i=snoopy_1@dslb-084-058-142-115.pools.arcor-ip.net) joined #forth 05:35:00 --- quit: Snoopy42 (Read error: 145 (Connection timed out)) 05:35:17 --- nick: snoopy_1711 -> Snoopy42 05:36:49 --- join: timlarson_ (n=timlarso@65.116.199.19) joined #forth 05:40:20 --- join: PoppaVic (n=pete@0-2pool238-21.nas24.chicago4.il.us.da.qwest.net) joined #forth 05:46:09 --- join: tathi (n=josh@pdpc/supporter/bronze/tathi) joined #forth 06:09:58 tathi: you awake, chief? 06:11:01 yeah 06:11:22 trying to decipher NetBSD's MAKEDEV script 06:11:22 --- join: Ray_work (n=Raystm2@199.227.227.26) joined #forth 06:11:45 so'k - I'll not bug you until laters. 06:12:06 I thought I knew about shell scripting, but they're using some stuff I've never needed :) 06:12:24 I hate sh script... 
really, really 06:15:11 --- quit: madwork ("?OUT OF DATA ERROR") 06:17:59 --- join: madwork (n=foo@derby.metrics.com) joined #forth 06:25:43 --- quit: tathi ("here goes nothin'") 06:35:04 --- quit: Jim7J1AJH ("leaving") 06:40:08 --- join: Jim7J1AJH (n=jim@221x115x224x2.ap221.ftth.ucom.ne.jp) joined #forth 06:40:49 --- part: Jim7J1AJH left #forth 07:20:55 --- join: tathi (n=josh@pdpc/supporter/bronze/tathi) joined #forth 07:31:05 how exactly gets the hash calculated in a forth which uses hashes for searching words? 07:31:30 depends on the forth 07:31:50 ok, but what would you use for that? 07:32:36 multiply by a number that is relatively prime to 2**n and add (for each character) 07:32:39 I need a 32bit hash sum and nothing longer. all md* seem to use 128bit hashes 07:32:39 seems to work well enough 07:32:52 yeah, you don't really need a strong hash like that 07:32:52 afaik, the goal of any string-hashing is to generate a value as close to unique as possible. 07:33:23 yeah, that seems to be the theoretical goal 07:33:58 I've always looked at it as cutting the search time down by the number of buckets 07:38:43 http://burtleburtle.net/bob/index.html 07:58:18 --- quit: PoppaVic ("Pulls the pin...") 07:59:00 --- quit: vatic (Remote closed the connection) 08:12:34 --- quit: tathi ("leaving") 08:20:45 --- join: PoppaVic (n=pete@0-1pool67-137.nas22.chicago4.il.us.da.qwest.net) joined #forth 09:45:16 --- join: tathi (n=josh@pdpc/supporter/bronze/tathi) joined #forth 10:05:25 --- quit: virl (Remote closed the connection) 10:12:03 --- quit: PoppaVic ("Stay well, folks") 10:22:45 --- join: vatic (n=charlest@ool-45740b1c.dyn.optonline.net) joined #forth 11:09:23 --- quit: tathi ("leaving") 11:44:08 --- join: virl (n=blah@metagw.funkfeuer.at) joined #forth 11:45:06 --- part: virl left #forth 11:56:39 --- nick: Raystm2 -> nanstm 13:32:22 --- quit: timlarson_ ("Leaving") 14:06:45 --- quit: nighty (Read error: 113 (No route to host)) 14:37:22 --- quit: madgarden ("?OUT OF DATA 
ERROR") 14:39:30 --- join: madgarden (n=madgarde@Toronto-HSE-ppp3708723.sympatico.ca) joined #forth 14:46:34 --- join: I440r (n=mark4@24-177-235-246.dhcp.gnvl.sc.charter.com) joined #forth 14:57:11 --- join: virl (n=virl@chello062178085149.1.12.vie.surfer.at) joined #forth 15:03:27 --- quit: Ray_work ("User pushed the X - because it's Xtra, baby") 15:04:16 --- join: docl_ (n=docl@70-101-145-1.br1.mcl.id.frontiernet.net) joined #forth 15:10:48 virl, for a dictionary hash, 32 bits is considerable overkill. 15:11:13 why overkill? 15:11:20 How big is your hashtable? 15:11:50 perhaps big perhaps small, well it was only theoretically 15:12:07 But it won't have 4 billion buckets, I'm guessing. 15:13:26 Your hashtable need be no bigger than the maximum number of dictionary entries you ever expect to hold, and in fact can be much smaller. So you need a hash function that is well distributed over just enough bits to cover the maximum hashtable size. 15:13:50 MD5 and hashes like it are slow. You want something much simpler and faster. And you may want something that lets you search case-insensitive, as well. 15:15:39 Depending on your decisions on that. 15:18:13 --- quit: docl (Read error: 110 (Connection timed out)) 15:18:18 Initializing a hashvalue with the length of the word, multiplying by a small prime (say, 5, or 17) and adding each successive character is a simple, fast and effective hash for Forth. Add an intermediate step of making each char case-insensitive before adding, if you want case-insensitive matching. 15:19:01 To make that clearer, you start with the length as the hash, and then for each character in the name you multiply by a small prime and add the character value. 15:19:39 --- nick: docl_ -> docl 15:19:54 start with zero. for each character, multiply by 33 (decimal) and then xor the character. in the end, reduce the number whatever way you need 15:20:18 Sure, xor is roughly equivalent to an add in that context. And 33 is small prime. 
But I recommend starting with the length as a seed. 15:20:25 one of the very best known hashes for character strings, and _dirt_ cheap 15:20:34 2 wrongs? 15:20:41 ? 15:20:53 xor is _way_ better than add there 15:21:09 Based on what? 15:21:11 and the prime statement... erm... look again? 15:21:26 based on 20 years of experience? 15:21:30 Oh sorry. 33 ain't prime. 15:21:42 like i said, this is a well-known ("folklore") hash 15:21:46 segher_, that won't fly, I've got more than 20 years. :) 15:22:11 Yeah, hash folklore also says the hashtable should have a prime number of buckets, which is also bogus. 15:22:27 for many hashes, it's correct 15:22:34 For many craptastic hashes. 15:22:34 not for all though 15:22:40 sure 15:22:54 ((first char *2 + second char) *2) + length 15:22:57 If your hashfunction is crappy, having a prime number of buckets helps disguise that a bit. 15:24:13 just shifts and adds 15:24:21 I have a half-dozen algorithm books here that push pjw_hash as the best function. I've tested it; it's awful. 15:24:24 no multiply divide or xor 15:24:54 tho xor instead of + might be better... hafta check that :) 15:24:58 A truncated MD5 does give a very nice distribution, but it's slow as molassass on a cold morning. 15:26:29 At any rate, these things are easily empirically measured. Pick a hashfunction, fastest one you can find, and test it for distribution in your expected hashtable size. I also recommend power-of-two hashtable sizes, they facilitate masking of the hash (rather than using a modulo function), and make for easy resizing if you want a dynamic hashtable. 15:27:05 ya 15:27:34 but i dont think a dynamic hash size would simplify anything here 15:28:28 jenkins' hash function is very good, he also has a nice analysis on his website 15:28:59 I ran my chi-squared tests against all the Forth words I had on hand, plus every English word from 1-31 chars in length. 15:29:48 My requirement was for a case-insensitive hash, as well. 
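[editor's note] The two dictionary-hash schemes discussed above can be sketched roughly as follows. This is a sketch, not anyone's actual implementation; the multiplier and the 64-bucket table size are illustrative:

```python
def mul_add_hash(name, mult=5, buckets=64):
    """Length-seeded multiply-and-add hash: start with the name's
    length, then for each character multiply by a small constant
    (5 or 17 were suggested above) and add the character value."""
    h = len(name)
    for ch in name:
        h = h * mult + ord(ch)
    return h % buckets          # reduce to the table size at the end

def mul_xor_hash(name, buckets=64):
    """The 'folklore' variant: start from zero, multiply by 33 and
    xor in each character.  33 = 32 + 1, so the multiply is just a
    shift and an add on most machines."""
    h = 0
    for ch in name:
        h = (h * 33) ^ ord(ch)
    return h % buckets
```

For case-insensitive matching, fold each character to one case before it enters the hash, as suggested above.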
15:29:54 for forth, hashes are interesting 15:30:03 forth source is not english text 15:30:14 jenkins hash looks way too complex 15:30:30 for example, you *really really* want @ to be hashed to a slot all on its own 15:30:40 segher_, why? 15:30:55 because @ is very common ? 15:31:02 Is it? 15:31:06 because it alone amounts to 8% 15:31:17 or so of all input words 15:31:26 Wow. Not in my sources. 15:31:50 i dont think finding a hash algorithm that keeps @ alone in a bucket is worth it even if @ is used on every other word lol 15:31:58 quartus: you're not telling us you use "VALUE" and friends. _heretic_. :-) 15:32:17 Anyway, beyond good distribution (which you can obviously get with overkill in the hash, for instance with MD5) you want speed, so I also benchmarked compilation times for large Forth sources. 15:32:19 segher_, i think the original variable and constant are flawed 15:32:42 I440r: if your hash uses lists per bucket, putting it at the start of the bucket would be good enough, sure 15:32:55 how so? 15:33:05 segher_, I don't use VALUE often, and globals only when appropriate. I surely don't use @ once every 13 words. 15:33:25 i use lists per buckets 15:33:31 If you really had one single word that came up 8% of the time, your dictionary search shouldn't even bother to go to the hashtable for it, it should just check for that first and return. 15:33:55 quartus: dynamically you use it way more often anyway 15:34:20 Same logic, though. I don't have any word that comes up 8% of the time; if I did, I'd have the search check for that first and just send back the xt. No lookup. 15:35:03 if your : doesn't come up at least 5% of the time, you write really crappy code 15:35:03 quartus and would the compile time speed increase warrant doing so ? 15:35:38 I440r, worth testing for, I'd say, at 8%. But I suspect that figure's being pulled out of thin air.
15:35:46 I440r: probably not, unless the hash lookup code is b0rked 15:35:46 even if you have "blah" occur 30% of the time adding code to test for blah wouldnt be worth it 15:36:38 i use a very very simple hashing algorithm that does just fine 15:37:09 tho im sure you could improve it you would slow down the hash calculation and decrease the search time 15:37:19 and you would have zero gain 15:37:21 I generally work in constrained environments where it's advantageous to find where the lines meet for both speed and distribution. 15:37:22 or next to zero 15:37:24 works much better than just linear search i guess heh 15:37:52 --- join: Raystm2 (n=Raystm2@adsl-69-149-43-76.dsl.rcsntx.swbell.net) joined #forth 15:38:07 adding a more complex hash algorithm just to distribute better doesnt give you any gain if you ask me 15:38:22 I'm not suggesting it does. 15:38:55 a very very poor hash is still going to be hugely better than no hash at all 15:39:08 and a very good hash will only be marginally better than a bad one 15:40:47 The poorest hash function would lump everything into one bucket. The best would have all chains close to the same length as each other. That will always be an easily measurable difference. 15:41:17 first char doubled. add second char. double result. add lenthg. and result with 3f 15:41:25 thats like way simple 15:41:38 and gives an ok distribution accross the buckets 15:41:43 quartus: no, it would have the static probability of hitting something in each chain identical 15:41:52 It doesn't provide very good distribution under my tests. 15:41:54 segher_, what? 15:42:00 weighted by depth, even 15:42:20 "it would make the search cost as small as possible" 15:42:20 Quartus, it might not but show me a hash algorithm thats as simple that does better and ill adopt it :) 15:42:48 Ok. Start with the length, for each successive char multiply by, say, 5, and add the char. Try that, see how it looks. 
15:42:57 multiply 15:42:59 thats slow 15:43:09 Do it with a shift and an add if you prefer. 15:43:36 your also going to take longer to calculate the hash with longer words 15:43:49 Yes. The improved distribution compensates. 15:44:48 Depending on your platform, this may not matter. On a slow CPU it makes a good deal of difference. On a desktop with many GHz at hand, it may be hard to find source large enough so that you can appreciate the difference. 15:44:59 add eax, [eax * 4 ] 15:45:12 hmm thats a *5 there 15:45:27 I'm personally in favour of efficiency and economy no matter what the platform, but it may not be important enough to worry about if there's horsepower to spare. 15:46:10 no i agree - its better to have huge gains on a slow machine with zero gains on a fast one 15:46:34 did i get the syntax of that 5* right ? 15:46:40 Does it work? 15:46:52 i dont know ive not coded it yet, i think i got the opcode wrong 15:47:04 mov eax, [eax + 4* eax ] 15:47:07 i think thats right 15:47:15 I haven't worked in x86 for awhile, I'd need to look it up. 15:48:20 it means doing alot of movzx 15:48:36 movxz eax, [ebx] 15:48:51 movzx edx, [ebx +1] 15:49:08 no i need to do a loop here 15:49:15 hang on im gona go code it 15:49:18 I440r: it's correct, you probably should use lea eax,[eax*4],eax though 15:49:19 see if it improves things 15:49:24 yea lea 15:49:45 lea eax, [eax + 4* eax] 15:49:47 It's worth a try, it should give much better distribution than a two-char plus len hash. Test the chi-square, the average chain length, and the speed of lookup. 15:49:59 whats the chi-sauare 15:50:22 It's a statistical measure of distribution. 15:50:23 it would be difficult to benchmark this - even if i use x86 profiling opcodes heh 15:50:59 It's not difficult to bench the distribution. It may be necessary to run a large source through it a number of times to get a large enough sample on a fast system. 
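[editor's note] The kind of distribution measurement described above can be sketched like this. A rough sketch: `two_char_hash` follows I440r's description, `mul5_hash` the multiply-by-5 suggestion, and the measuring function applies the bucket reduction itself:

```python
from collections import Counter

def chi_square(words, hash_fn, buckets=64):
    """Pearson chi-squared statistic over the bucket counts:
    sum((observed - expected)^2 / expected).  Lower is better;
    a perfectly even spread scores 0."""
    counts = Counter(hash_fn(w) % buckets for w in words)
    expected = len(words) / buckets
    return sum((counts.get(b, 0) - expected) ** 2 / expected
               for b in range(buckets))

def two_char_hash(name):
    # first char doubled, add second char, double, add the length
    first = ord(name[0])
    second = ord(name[1]) if len(name) > 1 else 0
    return (first * 2 + second) * 2 + len(name)

def mul5_hash(name):
    # length-seeded, multiply by 5 and add each character
    h = len(name)
    for ch in name:
        h = h * 5 + ord(ch)
    return h
```

Feed both functions the word names from your own sources and compare the statistics; as noted above, also measure average chain length and raw lookup speed, since distribution alone isn't the whole story.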
15:51:25 the only harge sources i have are my extensions 15:51:56 One large source I used was my regression test sources. Lots of lookups. 15:52:03 --- quit: nanstm (Read error: 110 (Connection timed out)) 15:56:03 Quartus Forth uses a multiply by 17, rather than 5. Same idea. 15:56:25 As I say, it's also case-insensitive, so there's a mapping step. 15:58:07 isforth is case sensative but you can switch it off 15:58:17 isforth is also 100% LOWER case 15:58:23 upper case is annoying lol 15:58:28 mixed case is worse 15:58:45 I store all words in original form in the dictionary, the lookup is case-insensitive. 15:58:56 so do i 15:59:04 i dont disallow upper or mixed 15:59:07 i discourage it 15:59:16 and you can switch case sensativity off 15:59:49 How do you search, if it's off 16:00:28 same routines do both searches 16:00:56 If you've already hashed a word case-sensitive, how do you find it case-insensitive? 16:00:59 if im doing case insensative searches i upper case the string im searching for and the string im checking it against 16:01:09 But you won't know what bucket to find it in. 16:01:12 the hash is calculated first :) 16:01:22 The hash is calculated when the word is added to the hashtable. 16:01:26 find calculates the hash 16:01:33 (find) does the case switching 16:02:26 if you create a word callee XYZZY and look for xyzzy you wont find it unless you turn off case sensativity 16:02:35 oh i see what you mean 16:02:43 So, taking that a step at a time. I add Foo to to the dictionary. It's hashed as F o o, let's say that puts it in bucket 1. I switch off case-sensitivity and search for 'foo', which is hashed as f o o, let's say that hashes to bucket 2. 16:02:51 i get it lol 16:04:18 the only way to do this is to search for foo Foo fOo foO FOo fOO etc 16:04:28 Or hash case-insensitive. 16:04:44 unless you calculate the hasn on the word in all lower case 16:04:46 And if you want it, do the case-matching in the string-match after lookup. 
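[editor's note] The fix Quartus is describing above — fold case before the characters enter the hash, then apply the sensitivity setting in the string match after lookup — might look like this. A sketch under the same 64-bucket assumption; the names and signatures are invented:

```python
def ci_hash(name, buckets=64):
    """Fold each character to one case *before* it enters the
    hash, so Foo, foo and FOO all land in the same bucket."""
    h = len(name)
    for ch in name:
        h = h * 5 + ord(ch.upper())
    return h % buckets

def find(table, name, case_sensitive=True):
    """table is a list of buckets, each a list of (name, xt) pairs.
    The bucket is always located case-insensitively; only the final
    string match applies the sensitivity setting."""
    for stored, xt in table[ci_hash(name)]:
        if stored == name or (not case_sensitive
                              and stored.upper() == name.upper()):
            return xt
    return None
```

This avoids the "search for foo Foo fOo foO ..." combinatorial problem entirely: XYZZY and xyzzy always share a bucket, and the sensitivity switch only changes the final comparison.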
16:04:49 or all upper case 16:05:24 One really marvellous feature of a hashtable-based dictionary that I always marvel at is the free wordlists. 16:05:44 Xor the hash with the wordlist value, and voila. 16:05:50 ? 16:06:03 word list value ? 16:06:12 and they are called vocabularies not word lists :P: 16:06:42 vocabulary blah blah definitions 16:06:48 not wordlist blah blah definitions 16:06:49 In fact they're called wordlists, at the risk of sending you on a rant, in the Standard. You can set up named vocabularies on top of them trivially, but at the base they're wordlists. Each is a unique number. 16:07:20 So if you have a hashtable of, say, 128 buckets, you can have 128 wordlists without any extra cost. It's like magic. 16:07:57 u lost me 16:08:09 During the dictionary search, you xor the hash of the word you're looking for with the wordlist you want to look in, and it only finds words in that wordlist. 16:08:22 Ok, an example. 16:08:48 There are two wordlists, wordlist 1 and wordlist 2. The current definitions wordlist is, say, 1. 16:08:54 So I create : foo ." Hi" ; 16:09:05 As that's added to the dictionary, its hashvalue is xored with 1. 16:09:25 So I switch the definitions wordlist to wordlist 2. 16:09:37 And I create : foo ." Bah!" ; 16:09:42 As that's added to the dictionary, its hashvalue is xored with 2. 16:10:04 At lookup time, I may be searching wordlist 1, or wordlist 2; I'll only find the right 'foo'. 16:10:11 aha. if you search the wrong vocabulary you will get the wrong bucket 16:10:20 and not find it there 16:10:24 Correct. 16:10:38 so it doesnt matter what the search order is 16:10:40 So in Standard terms, at lookup time you walk the search-order a wordlist at a time in order. 16:10:52 Right. It's very nice. 16:11:44 I have a few reserved wordlist numbers, I use them for idempotent file includes, things like that, all off of the same hashtable. 
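[editor's note] Quartus's two-foo walkthrough above can be sketched concretely. A sketch, not the Quartus Forth implementation: with a power-of-two table, xoring the wordlist id into the hash before masking gives each wordlist its own view of the table for free (the wordlist id must fit within the mask):

```python
BUCKETS = 128                    # 128 buckets -> up to 128 free wordlists

def slot(name, wid):
    """Hash the name, xor in the wordlist id, mask to the table.
    The same name in two wordlists lands in two different buckets."""
    h = len(name)
    for ch in name:
        h = h * 5 + ord(ch)
    return (h ^ wid) & (BUCKETS - 1)

table = [[] for _ in range(BUCKETS)]

def define(name, wid, xt):
    table[slot(name, wid)].append((name, wid, xt))

def search(name, order):
    """Walk the search order one wordlist at a time.  A search
    confined to wordlist 1 never even sees wordlist 2's foo,
    because it looks in a different bucket."""
    for wid in order:
        for n, w, xt in table[slot(name, wid)]:
            if n == name and w == wid:
                return xt
    return None
```

No linked list chaining the words of a wordlist together is needed; membership falls out of the bucket arithmetic.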
16:13:04 disadvantage is that you have to give your vocabularies a numerical value - could you simply take its address ? 16:13:18 no. that wont work 16:13:22 If the address was always unique within the range of the number of hashtable buckets, maybe. 16:13:24 because the hasn is 00 to 3f 16:13:50 and you might have a vocabulary at xxxxxx12 and at yyyyyy12 16:13:51 It's not a disadvantage if your wordlist system is set up in these terms, with vocabs built on top of it. 16:14:15 It was a real 'aha' moment for me back when I realized how clean, neat, and free it was. 16:14:38 i have vocabularies which are arrays of 64 threads 16:14:40 No linked-list chaining all words in a wordlist together. Just a value-add from the already insanely beneficial hashtable. 16:15:34 I know some F79 and earlier Forths had words linked to each other in various bizarre chains; perhaps there's some value to that I've never perceived. 16:15:43 do you use only also and friends ? 16:15:50 i dispensed with "also" 16:16:00 i dont need to do also x also y also z 16:16:03 i just do x y z 16:16:09 I have used them. Primarily I use module/public:/private:/end-module, which encapsulate all of that. 16:16:52 "encapsulated" 16:16:58 sounds like oop to me lol 16:17:04 With a system built on standard wordlists, you can build a vocabulary mechanism that works like that -- avoiding ALSO -- without any trouble. 16:17:21 No, it's not oop. They're just wrappers that give me named modules. 16:17:29 i have one problem but i have a solution to it 16:17:32 i just didnt implement it 16:17:37 with the also mechanism you can have 16:17:46 asm forth compiler forth asm compiler forth asm 16:17:53 you cant with my method 16:18:19 but there are advantages to being able to add a vocabulary multiple times to the search order 16:18:26 There are times when it's useful to add a wordlist to the search order despite it already being there. 
16:18:34 it gives you the ability to safely remove it later without actually removing it 16:18:42 yes 16:18:46 i have a solution to it 16:19:01 i just make the search order stack revectorable 16:19:22 I guess that'd work, though it seems to add complication. 16:19:37 not really 16:20:00 i can have +order and -order or something 16:20:03 no big deal 16:20:13 have a stack of search orders :) 16:20:24 where only the TOP one is searched 16:20:51 that way i can modify it to my hearts content and when im done i can back out and not have affected someone elses search order 16:20:56 Quartus Forth's search-order holds 16 wordlists; I've never bumped into the top. 16:21:10 Perhaps some developers would. 16:21:11 and ill NEVER hage to serch asm forth compiler root asm compiler forth compiler root etc etc 16:22:02 .vocs bug sockets block terminal i-voc h-voc root compiler forth ok 16:22:20 i have 9 vocabularies. h-voc is the headerless vocabulary 16:22:27 and i-voc is for macro : definitions 16:22:46 auto inlined - you purge the vocabulary and all headers and all code is discarded 16:23:19 As you may recall, inling replaces macros in QF, and headers are kept until turnkeying. MARKER works to roll the whole dictionary & hashtable back to a defined point, though. 16:23:23 .vocs is my word that displays all defined vocabularies 16:24:13 The closest I come is the ability to display all named modules. 16:24:16 ive had to discard forget and friends till i can metacompile 16:24:27 well - i think i can do it still but its complex 16:24:46 I don't have FORGET -- never could figure out how it could possibly work in a multi-wordlist dictionary. But I do have MARKER, which is at any rate a more powerful facility, albeit rarely used. 
16:24:58 because when isforth launches it has [list-space-memory][head-space-memory] with no space between 16:25:02 i have to relocate the headers 16:25:11 and they arent moved in the same order they are created 16:25:33 i can get forget to work but its dangerous 16:25:36 for instance 16:25:45 : some-signal-handler ...... ; 16:25:56 lol 16:26:33 i had forget working perfectly with mylti vocabularies but its sort of broken atm 16:26:40 because of the above 16:26:56 relolcating a word header is kinda complex 16:27:12 That's where MARKER is stronger; it can restore interrupt vectors, or whatever else, and rolls the whole system back to a point in time. 16:27:20 i cant just pick it up and move it - so i was doing it by relocating each thread of each voc one at a time 16:27:27 and that changed the order of headers in memory 16:28:01 no. because once you have told linux that a given signal is to execut code at address xxxxx its set 16:28:21 And can be re-set. 16:28:28 if you forget below that code and compile new code over the top and then get the signal your dead 16:28:49 True. MARKER can be implemented to reset handlers. 16:28:50 but i actually dont care about "fixing" that 16:29:10 im not adding code to wrap handler creation just so the compiler can stop you doing something silly 16:29:13 ill assume you wont 16:29:24 and when you do... youll learn from the experience :) 16:29:41 do you check for thens balancing ifs ? 16:29:45 MARKER is strictly a programmer's facility, so making it handle more cases isn't a bad thing. 16:29:59 i dont but i discourage : xxx ..... if ...... ; : yyy .... then .... ; 16:30:06 but i dont prevent it either 16:30:25 I ensure the control-flow stack is balanced when ending a definition. If you want to do : xxx ... if ... ; : yyy ... then ... ; you'd have to do a bit of extra work to circumvent the checks. 
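[editor's note] Compiler security of the sort Quartus describes above — a balance check when ending a definition, plus a sanity check that THEN resolves a branch compiled earlier in memory — can be sketched as follows. The names are hypothetical; a real Forth keeps this state on the control-flow stack during compilation:

```python
class ControlFlowError(Exception):
    pass

cf_stack = []     # (tag, addr) items pushed by IF, BEGIN, ...
here = [0]        # simulated compilation pointer

def compile_if():
    here[0] += 1                       # compile the forward branch
    cf_stack.append(("if", here[0]))   # remember where to patch it

def compile_then():
    if not cf_stack or cf_stack[-1][0] != "if":
        raise ControlFlowError("THEN without matching IF")
    tag, addr = cf_stack.pop()
    if addr > here[0]:                 # the IF must be earlier in memory
        raise ControlFlowError("THEN resolves a forward IF")
    here[0] += 1                       # resolution point

def end_definition():
    if cf_stack:                       # unresolved IF/BEGIN at ;
        raise ControlFlowError("unbalanced control flow at ;")
```

Splitting an IF/THEN across two definitions is still possible in such a system, but only by explicitly moving the control-flow items around rather than by accident.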
16:30:29 true but im not gona protect the coder from himself :) 16:31:26 I also ensure that THEN resolves an IF that's earlier in memory than it is, little things like that. Protects against typos that can otherwise have bizarre results. 16:31:43 one of the quotes in my tgz for isforth says something about unix not being designed to prevent you doing stupid stuff because that would prevent you from doing clever stuff 16:32:00 like chuck moores famous min/max definitions 16:32:05 Well, compiler security doesn't prevent you from doing clever stuff, just requires you to be explicit about it. 16:32:08 bad coding style but clever 16:32:12 --- join: LOOP-HOG (n=jason@66.213.202.50) joined #forth 16:32:24 I'm dead against clever as a coding style. 16:32:34 yea but i consider that being explicit "red tape" 16:32:42 like type casts and their ilk 16:32:49 bullshit visual clutter IMHO 16:33:02 so am i 16:33:06 but i wont dictate against it 16:33:11 For the one time in a thousand that you'll want to start an IF in one definition and terminate it in another, it's hardly a burden. 16:33:39 i use "begin while until else then" in my memory manager 16:33:50 which strictly speaking is poor style 16:33:56 but it was more efficient 16:33:59 Yes, I think it's ugly, but it'll work in Quartus Forth too. 16:34:19 and i cant recode the parts of it i want CODED yet 16:34:38 being able to revert to assembly would negate the need for that kinda crap 16:34:39 Works in an Standard system. 16:34:44 yes i know 16:34:44 any 16:34:50 what was the name of that mac channel again? 16:35:00 i have an arm assembler here that does something similar 16:35:07 not my code 16:35:10 nm 16:35:31 Likewise begin while while repeat then, and things like that. Hard to follow, but they'll work even with compiler security at full-tilt. 16:35:42 heh 16:36:13 they are easy to follow by the person who codes them. 
i very much dislike that i used the technique in my memory manager 16:36:30 Easy to follow at the time of writing, but not so much when returning to the code months later. 16:36:32 but i was unable to find any other way to do it and remain efficient 16:36:59 no. i rarely have problems with my code because i dont ever EVER EVER treat the sources as if they are comments 16:37:02 sources on the left 16:37:05 comments on the right 16:37:20 --- part: crc left #forth 16:37:38 if you have thought about your code enough to comment it you have thought about it enough to CODE it 16:37:49 thats a quote from my parody of the linux kernel coding style 16:38:39 --- join: segher (n=segher@dslb-084-056-146-125.pools.arcor-ip.net) joined #forth 16:41:00 --- join: crc (n=crc@pool-70-110-173-103.phil.east.verizon.net) joined #forth 16:41:05 hi crc! 16:41:07 --- mode: ChanServ set +o crc 16:41:21 quartus do you allow words that parse beyond the end of a line ? 16:41:22 hi I440r 16:41:36 : test 0 do i constant loop ; 16:41:36 10 test aaa bbb ccc ddd eee 16:41:36 fff ggg hhh iii jjj 16:41:41 would that work in quartus ? 16:42:07 it USED to work in isforth but i couldnt count source lines during compilation 16:42:10 If you want to write a parse-refill, you certainly can do so, but otherwise Quartus Forth is a Standard Forth, so parsing past the end of a line returns a null string. 16:44:40 yea thats what ive got 16:45:04 but the above test word might be to create 256 assembler registers each of with has 16 character names 16:45:10 just to give a worst case scenario 16:45:24 There are other ways to code it. 16:45:36 --- join: vatic_ (n=charlest@ool-45740b1c.dyn.optonline.net) joined #forth 16:45:45 yea i could do start cout creating-wrod xxx yyy zzz 16:45:57 and give a different start number and count on each line 16:46:06 For instance, sure. 
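[editor's note] The difference being discussed above — a Standard Forth returning a null string at end of line versus isforth's old parse-past-the-line behaviour — can be sketched with a toy model of the input source (invented names, not either system's implementation):

```python
class LineSource:
    """Toy line-at-a-time input source.  parse_name() hands back one
    whitespace-delimited word, or the null (empty) string at end of
    line -- unless cross_lines=True, which refills from the next
    line and keeps parsing."""
    def __init__(self, lines):
        self.lines = list(lines)
        self.words = []

    def refill(self):
        if not self.lines:
            return False
        self.words = self.lines.pop(0).split()
        return True

    def parse_name(self, cross_lines=False):
        while not self.words:
            if not cross_lines or not self.refill():
                return ""       # Standard behaviour: null string
        return self.words.pop(0)
```

A defining word run in a loop, as in the `10 test aaa bbb ccc ...` example above, consumes one name per iteration: with the Standard behaviour it starts seeing null strings at the line break, while the cross-line variant quietly keeps eating names off the next line.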
16:46:28 but not being able to parse beyond the end of a line is something that can bite you in the ass if your unaware of it 16:46:54 --- quit: segher_ (Read error: 110 (Connection timed out)) 16:47:09 But you'd be aware of it on the first failed compile. 16:47:10 im not overly worried about it - but being able to parse beyond the end of a line was nice :) 16:47:21 you would be aware of something being wrong 16:47:31 you might not be aware of exactly what 16:47:46 it could potentially stall you for quite a while 16:48:11 Still, it fits with your philosophy of not protecting the programmer from himself, and your overriding opinion that the programmer is responsible for knowing everything about the limitations of a system. 16:49:58 true :)P 16:50:06 which is why im not overly worried about it 16:50:36 it was just nice being able to parse beyond the end of a line when my entier source file was treated as a single line of souce but with line breaks :) 16:51:54 --- quit: vatic (Read error: 110 (Connection timed out)) 16:52:22 actuall this would be protecting the user from the system tho not the other way round :) 16:53:26 --- quit: LOOP-HOG ("Leaving") 16:53:40 I can see where always parsing past the end of line could result in silent errors that could be far more difficult to find than the opposite. 16:54:31 As happens with missing comment terminators in languages like C. 16:54:56 yea 16:55:04 but i wont support multi line comments 16:55:15 ( does not parse beyond the end of aline in isforth 16:55:22 but i seem to recall it DOES in ans forth 16:55:29 breaking the above parsing beyond end of line rule 16:56:49 If you support the FILE wordset version it does. 16:56:54 The CORE version doesn't. 16:57:06 i dont support any prt of ans 16:57:15 tho.. im sure alot of isforth is ans compliant 16:57:19 it wasnt a goal 16:57:29 actualy neither is ans non-compliance 17:02:08 0 [IF] ... [THEN] is the more usual contrivance for multi-line comments. 
17:04:35 i wont support conditional compilation either 17:04:52 im diametrically opposed to interleaving 3984765387465 different versions of your application all into the same source files 17:05:57 and if i add the ability to do so it will be abused like it has in C sources 17:06:00 ick 17:08:38 Trivial thing to add if anyone wants it, and it's certainly not always abused. 17:09:20 agreed. if you want it in isforth im not even going to discourage you from adding it. but i wont add it to the distrobution version 17:14:50 Protecting the developer from himself? 17:15:53 --- quit: vatic_ ("Chatzilla 0.9.71 [Firefox 1.5.0.4/2006050817]") 17:16:39 --- join: vatic (n=charlest@ool-45740b1c.dyn.optonline.net) joined #forth 17:22:14 The main instance of conditional compilation that I've seen, recently, is in a gforth networking example that ambiguates between little- and big- endian. 17:22:49 htonl PF_INET [ base c@ 0= ] [if] $10 lshift [then] 17:23:41 I've used it to allow selectable implementations of certain words, or to add or subtract features from a given build of an app. 17:24:07 Certainly it's useful to avoid defining a word that may already exist in a given Forth. 17:26:24 or to allow the definition of one thats missing 17:26:28 cruft 17:26:34 just redefine it 17:30:01 quartus - does your recursive whutsit remove redefinitions ? 17:31:01 I wouldn't imagine so. 17:32:16 I guess it could identify two words as having the same definition and then link all use to just one of them, then letting the tree-shaker take out the spare. 17:32:36 it could compare definitions with the same name and if they are identical it could ignore redefinition 17:32:44 It doesn't include a definition that's never used. 17:32:49 but : redefined 1 . ; and : redefined 2 . ; are two different words, regardless of name. 17:33:15 But if two separate words are used at two places in the code, the fact that they have the same name is irrelevant. 
17:33:42 If one word has multiple names, that is, if there's a shared xt, that code is only exported once. 17:34:11 yea but what about two different IDENTICAL definitions 17:34:12 both used. 17:34:29 you could see they have the same name and then test for identical code and strip the redefinition 17:34:35 I440r - it doesn't need to care about the name. Comparing the definition is enough to unify : NOT INVERT ; and : -1xor invert ; 17:34:39 prolly doesnt happen enough 17:35:01 as the names don't make it into the executable anyway. 17:35:38 the first-pass optimizer could change all the definition names to a hash of their definition :-) 17:36:06 :) 17:36:40 Oh, I don't scan for identical code sequences. I tried that once as an optimization, and it was both slow, and not particularly effective; it didn't happen often enough. 17:37:13 : benchmark 100 0 do i allocate drop loop 100 0 do free drop loop ; <-- im doing that but im not just running it, im auto stepping it in my debugger :) 17:37:22 its fascenating to watch 17:37:27 hypnotic even :)_ 17:38:46 itll probably take an hour or 2 tho lol 17:41:07 I even searched for partial matches at the end of words. Still not worth doing. 17:44:17 :) 17:44:43 im not a fan of a compiler outputting optimized code 17:45:03 1:1 corelatikon between source and object being lost is a huge negative imho 17:45:23 if i say dup dup drop drop the compiler has no business "optimizing it" 17:46:14 tho writing an optimizer would be fun 17:46:21 just to have DONE it :) 17:46:35 I require functional correlation. I'm quite happy to have the compiler make the code run faster and/or smaller. 17:53:51 Gives me even less reason to write 'clever' code. 17:54:06 :) 17:54:19 theres good clever and bad clever 17:54:46 but compiler optimisations are just one more level of complexity that can go wrong 17:55:16 and optimized code gives you a 1 or 2 percent improvement 17:55:21 not worth the bother 17:55:28 optimize your algorithms 17:55:48 'optimized'? 
'improvement'? I sentence you all to purchase of Programming Perl :-) 17:55:58 Optimizations can do way, way better than 1 or 2 percent. 17:56:02 It has a section on 'optimization'. 17:56:57 id rather have a copy of the X protocol book 17:57:18 In fact an additional hard-core optimizer on top of Quartus Forth might well give another factor of two, overall. 17:57:26 That's two times, not two percent. 17:58:05 Speedwise. At the same time I could knock generated code size down considerably, too. 17:58:16 I440r - write a program to stick between your favorite X programs and X, and have it copy the communications passing through it into a file. Study the file. 17:59:00 And that's not because Quartus Forth is a slacker. The last factor of two is seldom worth the effort, as that's when the optimizer gets complicated. The existing optimizations do a helluva lot better than 1 or 2 percent, though, or there'd have been no call to write it. 17:59:37 I440r - then write a program that lets you construct such communications with forth words on the fly. Construct these words safely away from where they get DISPLAYed 17:59:44 s/words/communications/ 18:00:00 u lost me 18:00:05 where? 18:00:20 just say that again but different lol 18:00:25 say what? 18:00:28 i didnt understand what you were trying to say 18:00:30 I don't know where I lost you. 18:00:37 I440r - then write a program that lets you construct such communications with forth words on the fly. Construct these words safely away from where they get DISPLAYed 18:01:49 I440r - mainly, set up your environment so that you can play at spitting X protocol at an X server without sabotaging the forth terminal you communicate from. You can probably do this with DISPLAY and xnest or somesuch. 18:02:27 thats sorta what i wanna be doing 18:02:38 i need to know the x protocol inside out tho 18:02:48 once you have that environment, and transcripts of recent X protocol communications by familiar programs, you should find it easy to get familiar.
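The suggestion above -- stick a program between your X programs and the server and copy the traffic into a file -- is an ordinary logging TCP proxy. A sketch in Python rather than Forth (the ports and log format are assumptions; an X display :n conventionally listens on TCP port 6000+n, so pointing DISPLAY at the proxy's port captures a client's session):

```python
import socket
import threading

def pump(src, dst, log, tag):
    """Copy bytes from src to dst, appending a hex transcript of each
    chunk to `log` so the protocol traffic can be studied afterwards."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        log.write("%s %s\n" % (tag, data.hex()))
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)
    except OSError:
        pass

def spy_proxy(listen_port, x_host="127.0.0.1", x_port=6000, logname="xspy.log"):
    """Accept one X client (e.g. run it with DISPLAY=127.0.0.1:1 and
    listen_port=6001), connect to the real server, and pump both ways."""
    with socket.create_server(("127.0.0.1", listen_port)) as srv:
        client, _ = srv.accept()
        server = socket.create_connection((x_host, x_port))
        with open(logname, "a") as log:
            t = threading.Thread(target=pump, args=(server, client, log, "S>C"))
            t.start()
            pump(client, server, log, "C>S")
            t.join()
```

The transcript file then serves as the raw material for the readability filter discussed next: a second pass can decode the hex back into named X requests and replies as you learn them.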
18:03:12 well, I assume that the X protocol is readable. 18:03:33 i dont have the book yet so i wouldnt know hehe 18:03:48 if it isn't, write a filter that makes readable protocol as you learn it 'inside out'. 18:03:57 :) 18:03:59 You don't? Why haven't you spied on actual programs talking to X, yet? 18:04:51 lol because im not ready to 18:11:10 --- join: tcn (n=user@pool-72-70-247-128.spfdma.east.verizon.net) joined #forth 18:11:25 hi tcn 18:11:33 hey 18:11:46 howzit going ? 18:12:06 not bad 18:12:11 A full description of the protocol is beyond the scope of this documentation; for complete information, see the X Window System Protocol, X Version 11, available as Postscript or *roff source from ftp://ftp.x.org, or Volume 0: X Protocol Reference Manual of O'Reilly & Associates's series of books about X (ISBN 1-56592-083-X, http://www.oreilly.com), which contains most of the same information. 18:12:17 I440r - there you go. 18:12:35 i made some changes to isforth - wanna do another release soon but im working out some glitches 18:12:53 ayrnieu, wow - tho id rather have it in softback :) 18:13:01 well, buy it :-) 18:13:08 gonna... eventually 18:13:37 may as well get the electronic copy, then 18:13:45 cant log into x.org grrrr 18:13:50 doesnt accept anonymous 18:13:59 have you seen Cairo? 18:14:04 cairo ? 18:14:12 graphics library? 18:14:30 no 18:14:41 Amazon has it for 25 USD 18:14:43 it's a little slow but it's postscript-quality.. supposed to be the replacement for xlib 18:14:52 adding x support to isforth is on my todo list 18:14:57 WITHOUT using xlib 18:15:05 hehe 18:15:33 I440r - I've no problem logging into ftp.x.org ... 18:15:38 hey, how do I run isforth from a script with #! ?? 18:15:44 I440r - of course, I went the way of http://ftp.x.org/ :-) 18:15:54 heh 18:16:37 any idea where that book is in there ? 18:18:21 nope.
18:20:28 and ftp://ftp.x.org/ also accepts anonymous 18:20:40 so just do an ls -R 18:20:58 theres a digest directory 18:21:00 with that in there 18:21:37 er no 18:21:40 its empty 18:21:41 ugh 18:27:49 ok i cant find it 18:27:53 wget http://ftp.x.org/pub/X11R6.7.0/PDF/proto.pdf 18:28:25 err damn lol 18:29:09 I scripted ftp to get the ls -R , and then scanned it. Hint: XiProtocol.pdf is about the X Input Protocol 18:32:14 tcn you do 18:32:24 #! /path/to/isforth -sfload 18:32:30 then the rest of the file has the sources in 18:32:34 theres examples 18:32:56 sorry tcn, missed the question till just now lol 18:33:06 I like #! /usr/bin/env gforth -- but shebang portability is for build systems. 18:35:33 that works with isforth too.. err.. sort of 18:36:15 ? 18:36:33 oh 18:36:41 you have to put -sfload 18:37:14 /usr/bin/env invokes its one (1) argument with one (1) argument: the path given to the fail with the shebang 18:37:27 to the file 18:39:55 tcn do you see pm's ? 18:39:59 well, i'm too tired for this 18:40:26 prime ministers?? 18:40:44 Prime Meridians, he means, which is nonsense: you can't see any of them. 18:40:44 perl modules? 18:41:13 you mean when i'm surveying? heh 18:41:46 no private messages lol 18:42:20 nope, I see a message telling me PM's are blocked 18:43:50 bah 18:44:37 i'm too lazy to register 18:44:57 thats lilo's anally retentive "im going to put an end to this NON-EXISTENT spam" 18:45:50 he could at least give me an option to accept PMs 18:45:57 no, the spam existed. 18:46:26 ive been on here for years and ive seen ONE spammer 18:46:49 So? 18:47:52 spam was never an issue on this network 18:49:03 tcn do you have icq/aim or yahoo ? 18:49:14 thinking about it 18:49:19 lol 18:49:58 hmm.. I had aol and yahoo chat clients but nobody was ever on there when I looked 18:50:17 no, spam was an issue. 18:50:47 that you've seen ONE spammer in years only says that you've seen ONE spammer in years. IAC, reality is.
18:54:51 tcn get yahoo or icq or something - just not jabber or... msn (ick) 18:54:54 i gtg zzz 18:54:58 --- quit: I440r ("Leaving") 18:55:21 heh.. me too 18:55:25 --- quit: tcn (Remote closed the connection) 19:00:35 --- quit: uiuiuiu (Remote closed the connection) 19:00:40 --- join: uiuiuiu (i=ian@dslb-084-056-234-005.pools.arcor-ip.net) joined #forth 19:09:58 --- join: LOOP-HOG (n=jason@66.213.202.50) joined #forth 19:27:17 --- join: snoopy_1711 (i=snoopy_1@dslb-084-058-128-170.pools.arcor-ip.net) joined #forth 19:35:01 --- quit: Snoopy42 (Read error: 145 (Connection timed out)) 19:35:17 --- nick: snoopy_1711 -> Snoopy42 19:39:59 --- part: crc left #forth 19:51:57 ah, I've had this pretty Forth code staring at me for a while, now, BFTLOMIC hack right now. Later. 19:52:23 gforth.el 's forth-mode is quite pretty , with color-theme-charcoal-black 19:54:12 : key-s ( n c-addr u -- ) rot th >r 2>R :noname 2R> POSTPONE SLiteral POSTPONE interpolate POSTPONE >mud POSTPONE ; r> ! ; 19:54:21 : key: ( n "command" ) 0 parse key-s ; 19:55:31 : superkey ( n -- addr ) >r new-keytable >r :noname r@ postpone literal postpone vikey postpone ; r> r> swap >r th ! r> ; 19:55:41 silly name, SUPERKEY 19:57:39 --- join: crc (n=crc@pool-70-110-173-103.phil.east.verizon.net) joined #forth 19:57:58 used like this: char t superkey constant target-kt target-kt with-kt[ ' set-target char s key-xt ' show-target char t key-xt ' clear-target char c key-xt ]with-kt 19:58:20 char s key" strike $target" 19:58:40 --- mode: ChanServ set +o crc 20:01:51 ayrnieu, you have a comment-free style that leaves me wondering what the hell your code does :) 20:04:27 http://paste.lisp.org/display/22511 <-- here, with one (1) comment. Is it so obscure? :-) 20:07:53 unixkey could use a comment: KEY returns -1 on EOF 20:09:07 It's somewhat better listed vertically rather than blasted out into the channel horizontally, that's for sure. 20:09:34 VI-UI , also: \ 'toplevel' for the VI UI.
20:17:53 'th' strikes me as perhaps not the best naming. 20:20:01 it's cute, yes. 20:22:05 Also perhaps too general. It looks like it'd be more universally useful as th ( addr1 n -- addr2 ) 20:23:01 well, not universally -- it'd apply to arrays of cells. 20:23:08 Or th ( n addr1 -- addr2 ) 20:23:10 Yes it would. 20:23:26 Which would be a step up over it applying only to one named array of cells as it is now. 20:23:30 here, it's part of the very brief bootstrap into user-level code. 20:23:56 s/very brief// 20:25:14 I can see a dissonance between the suggested generality of the name and its use. 20:25:21 You could ' noop instead of ignore. 20:25:40 oh, yeah. 20:25:41 --- quit: vatic (Remote closed the connection) 20:25:42 Well, sure, that's my feeling -- too general to be nailed down like that. 20:26:02 NOOP , unrelatedly, has a hugely long CODE definition in gforth 20:27:04 the way I use it here suggests that the only numbers that matter are indexes into keytables. Which is true until I get any cleverer with the 'user code'. 20:27:08 So would ignore, I'm sure. 20:27:39 The code in unixkey potentially does a lot more than ( -- +n ) suggests. 20:28:26 yes, I noted that above. It now has a comment on the previous line: \ KEY returns -1 on EOF 20:28:50 But it may BYE. 20:29:09 yes. But that's easy to see. 20:29:53 It is, but it means you have to read the definition to know it. 20:30:15 Is dup if execute else drop then necessary in vikey? Isn't the whole keytable prefilled with xts? 20:30:17 yes. But as the definition is simply : unixkey key dup 0< if bye then ; 20:30:55 I know, it's a matter of principle. The name and stack-diagram and any required comment should tell you what the word will do without requiring code examination. 20:31:00 oh! No. An assumption of 0 as a NOOP rotted, I guess. 20:31:35 The name 'unixkey' doesn't suggest that it may abort the entire session. 20:32:04 no, the name is historical; I used UNIXKEY in unix filters. 
20:32:48 So a new name there might be good, though I'm not sure the whole word isn't in need of some work. It waits for and fetches a key, aborting the session if EOF. That's a funky thing for one word to do. 20:33:18 And you use it in exactly one place, so you could insert the code back into that definition with a comment. 20:34:31 After all, vikey is now much shorter since the removal of the conditional. :) 20:34:45 : vikey ( kt -- ) key dup 0< ( EOF? ) if bye then cells + @ execute ; 20:34:51 Nicer. 20:35:34 Generalize th and you get : vikey ( kt -- ) key dup 0< ( EOF? ) if bye else th @ execute then ; 20:35:37 I shouldn't be too attached to words, once defined. 20:38:37 It'd probably be too clever to put the xt of BYE into the table in the -1 position. 20:39:18 if I generalize th and add a key! for a common case : key! vi-kt th ! ; , then I can take your suggestion and only need to add vi-kt into key-s 20:39:29 Sure. 20:39:49 yes, too clever. 20:40:54 hah, nice. 20:41:31 I don't know, I'm on the fence about it. I'd probably do it. 20:41:33 But with comments. 20:41:47 Easily done, just boost up the size by one, return the value of the 0 position. 20:41:59 Initialize the cell when you fill all the others with noop. 20:42:32 it's an optimization of something I'm not sure about, yet ; I'll keep a note to think about it later, though. 20:42:49 so... align ['] bye , here dup 256 cells dup allot ['] ignore fill-cells 20:43:03 s/ignore/noop 20:44:04 Then : vikey ( kt -- ) key th @ execute ; 20:44:26 hm, adding key! , I realize that it's just key-xt 20:44:59 Note also that '@ execute' is commonly 'perform', and that gforth has perform. 20:45:00 So 20:45:06 : vikey ( kt -- ) key th perform ; 20:45:09 which is pretty sweet. 20:45:15 yes, it is :-) 20:45:39 I'll go ahead and do that, then; it's too easy. 20:45:51 It's when the higher level stuff starts to read so sweetly, that I begin to feel the lower levels must be as they should be.
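The keytable scheme the discussion converges on -- a 256-cell table prefilled with noop's xt, key! storing handlers by keycode, and : vikey ( kt -- ) key th perform ; dispatching -- translates naturally to other languages. A sketch in Python (all names here are hypothetical analogs of the gforth words, not the actual paste):

```python
def noop():
    """Analog of gforth's noop: unbound keys do nothing."""
    pass

def new_keytable(size=256):
    """Analog of `new-keytable` + `fill-cells`: every slot starts as noop,
    so dispatch needs no 'is this slot filled?' conditional."""
    return [noop] * size

def key_bind(table, char, action):
    """Analog of `key!` / `key-xt`: store a handler at the key's slot."""
    table[ord(char)] = action

def vikey(table, keycode):
    """Analog of `: vikey ( kt -- ) key th perform ;` -- index the table
    by keycode and execute whatever handler lives there."""
    table[keycode]()
```

Prefilling with a callable no-op is what lets vikey lose its dup if execute else drop then: the table never holds anything that can't simply be performed.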
22:10:54 --- join: snowrichard (n=richard@adsl-69-155-177-154.dsl.lgvwtx.swbell.net) joined #forth 22:11:42 hi 22:11:57 Hi. 22:12:51 been fscking around with my sister's XP wireless today. I think I have it working now, just tested it in the car. 22:13:30 Freedom! :) 22:13:53 --- part: LOOP-HOG left #forth 22:14:38 so you working on anything interesting now? 22:16:10 I've been building a model spaceship. :) Not all that Forth-related, but it's interesting. 22:16:41 like trek or star wars looking thing? 22:16:55 Neither -- Space: 1999. Ran in 1975 and 1976. 22:17:03 ok 22:17:16 Post-Trek, pre-Star Wars. 22:17:17 75 was when i grad high school 22:17:37 Maybe you remember the show. Martin Landau, Barbara Bain. Set on the moon. 22:17:43 yea 22:18:01 lots of white buildings 22:18:09 The moonbase. 22:18:14 tunnels and such 22:18:46 And their spaceships called "Eagles". 22:21:35 One of which I'm scratchbuilding, in full studio scale. 22:21:51 working from a photo? 22:22:04 Quite a few photos, and also measurements taken from the original model. 22:22:22 sounds complicated like sculpture 22:22:47 It's a huge amount of work in fact. Fun though. 22:23:32 while I was messing around with the wireless I found a neighbors network, but he has a wep key on it 22:23:46 Heh. 22:24:02 same 2 wire dsl router 22:30:22 --- quit: snowrichard ("Leaving") 22:51:16 --- join: ASau (n=user@home-pool-170-3.com2com.ru) joined #forth 22:51:25 Dobre jitro! ("Good morning!") 23:59:59 --- log: ended forth/06.07.12