00:00:00 --- log: started forth/05.10.03 05:37:39 --- join: virl (n=hmpf@chello062178085149.1.12.vie.surfer.at) joined #forth 05:55:30 --- quit: LOOP-HOG (Read error: 110 (Connection timed out)) 05:58:34 --- join: Ray_work (n=vircuser@adsl-65-68-201-174.dsl.rcsntx.swbell.net) joined #forth 05:58:42 --- nick: Raystm2 -> nanstm 06:09:21 --- join: madwork (n=foo@derby.metrics.com) joined #forth 06:30:50 --- join: PoppaVic (n=pete@0-1pool73-110.nas24.chicago4.il.us.da.qwest.net) joined #forth 06:31:08 Howdy 06:50:53 hi PoppaVic! :) 06:51:01 Howdy-doody 06:51:03 ready for a productive day 06:51:08 Sure tryin' 06:51:10 ? 06:51:13 hehe 06:52:24 I've circled back to considering either a tag-field/stack-entry or a hidden-array of byte; at least for the control-stack. 06:52:28 * Ray_work finishes ( for the most part ) all of his cf projects. Time to document them, then put them to rest and move on to a 'real' language. 06:52:51 'cf'? 06:52:57 colorforth 06:53:00 ahh 06:53:18 I spent a year developing cf apps, time to give up the ghost there. 06:53:52 my robot/newbie-forth-trainer complete, just the docs to go. 06:53:54 It has occurred to me that using tags (somehow), for at least the flow-stack, would simplify flow-control code 06:54:57 crc has some pretty neat flow-control structures in his retroForth. 06:55:02 I'm not sure exactly what tags we need, but it seems likely that it's a very small set. 06:55:21 I'm certain you are correct. 06:55:57 at the minimum, markers for 'returnptr' and 'number' may well be all that is needed 06:56:39 or, even more generically: 'code' vs 'number' 06:57:12 I see. 06:57:50 This would, I believe, greatly simplify the idea of 'exit', 'leave', 'unloop', etc - we'd not NEED 'unloop', if we use a decent set of tags 07:03:07 Back shortly... 07:04:15 okay 07:21:21 back 07:39:14 The longer I stare at do-loops the more they piss me off 07:40:43 in fact, the info-file suggests about 5 different reasons to wish for a bat and a quiet alley. 
07:42:08 I'm beginning to see a similar issue with the idea of "it's just a CELL" when we talk about stack-entries. 07:45:40 Crap. I hate to admit it, but we got warts all over the place. 07:47:17 --- join: sproingie (i=foobar@64-121-2-59.c3-0.sfrn-ubr8.sfrn.ca.cable.rcn.com) joined #forth 07:47:57 Assuming merely a CELL-width stack... You've got: Signed, Unsigned and even 'intended-width' (range). 07:48:08 hi sp 07:48:24 mornin 07:49:56 sproingie: I presume your own code has some issues with 'do-loops' as well? Gforth added at least 3 more I never thought of (and should have) 07:51:54 i'm not sure what gforth's looping constructs are 07:52:10 i'm working on getting THROW and CATCH working in rf 07:52:47 well, they mention that "DO" is pretty much deprecated in favor of ?DO - and then also mention THAT has "issues" so they added +DO and U+DO, etc, etc - it's quite a mess. 07:52:52 seems to be doing the flow transfer to the handler correctly, but it isn't cleaning up the stacks 07:53:17 A lot like a goto, then? 07:53:18 THAT? 07:53:21 never heard of THAT 07:53:26 ohhh 07:53:32 that as in ?DO 07:53:36 right 07:53:37 i thought since you capitalized it ... ;) 07:53:41 sorry 07:54:04 throw/catch is more like a nonlocal return 07:54:21 flipside of goto/longjmp 07:54:30 it's exactly like setjmp/longjmp 07:54:53 yeah, the info on setjmp/longjmp never excited me. 07:55:17 I've used it a couple times in my life - but only by screaming and just throwing it at the code. 07:55:25 they're rather limited, and usually need a lot of glue to keep stuff from crashing 07:55:40 solaris has getcontext/setcontext, which aside from having better names, are also more well behaved 07:55:50 i believe linux appropriated those also 07:55:51 Their docs on what it sets up and does is pretty damning 07:56:04 I've heard of those... somewhere 07:56:25 they save enough info to use them even as downward continuations. 
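The THROW/CATCH vs. setjmp/longjmp comparison above can be sketched in a few lines of C. This is a minimal illustration of the nonlocal return, not rf's actual implementation; the names `catch_demo` and `throw_error` and the code 42 are invented for the example, and as the discussion warns, register state and stack cleanup are not handled for you.

```c
#include <setjmp.h>

static jmp_buf handler;               /* the saved CATCH context */
static int thrown;                    /* code carried by the last throw */

static void throw_error(int code)
{
    thrown = code;
    longjmp(handler, 1);              /* THROW: nonlocal return to CATCH */
}

/* Run some "protected" code; return 0 on normal completion, otherwise
   the code passed to throw_error. */
int catch_demo(void)
{
    if (setjmp(handler) != 0)         /* CATCH point: nonzero after a throw */
        return thrown;
    throw_error(42);                  /* the protected code throws */
    return 0;                         /* not reached in this demo */
}
```

Calling `catch_demo()` here returns 42; as noted in the channel, any data-stack cleanup after the jump is still the implementor's problem.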
or so i'm told 07:56:49 somehow i don't see that, since they'd have to save a big chunk of stack 07:56:50 I can't think in "continuations" 07:57:11 yeah, there is an awful lot of stack and even register foo-foo therein 07:57:39 well for upward continuations like catch/throw, you don't need to mess with the stack 07:57:54 if you were counting on registers to be saved tho, forget it 07:58:21 setjmp/longjmp make pretty lousy multitasking primitives in that regard 07:59:04 i wonder what the instruction set of the cell is like 07:59:12 what "cell"? 07:59:19 would be nice to get forth running on one of those. maybe one on each of its units 07:59:29 ahh a cpu 07:59:50 the cell, new cpu by ibm everyone's been drooling over, playstation 3 is using it 08:00:18 Well, I'm just about convinced I can all but discard double-number codes at this level, and that I need to add a CPU-emulator level. 08:00:21 looks very SIMD or vector pipeline based, possibly both 08:00:49 has one main CPU, and 7 APU's (auxiliary processing units) 08:01:13 so, it's a star/net/hex-set 08:01:23 or maybe 8 apu's ... i know sony is disabling one of 'em for higher yields 08:01:34 The idea has been around for years. More power to IBM 08:02:07 oh yeah it's not as new or strange as a VLIW like the Itanic 08:02:19 though i imagine it consumes about 1/1000th as much power as one of those 08:02:37 One of the issues always was that you'd need a special asm/compiler which had rules internalized to opt-off some load 08:02:52 which makes it a pretty decent fit for the PS3 08:03:16 since the PS2 already worked like that. 
it took developers years to get used to it but they did 08:03:21 I have to ignore special CPU's - I can't afford to even dream about such things 08:03:36 PS2 has one CPU, two VPU's and the GS 08:03:41 and they're all more or less independent 08:03:48 no, I meant the Cell 08:04:03 (and I don't even own any games) 08:04:37 the model that dev kits use for the PS2 will probably work for the PS3 pretty well 08:05:09 hopefully sony will document the FCS release in English this time 08:06:11 oh, hey - I was going to ask you.. 08:06:39 That zero-packing-terminator? Is that even used for variables, constants and such? 08:07:05 I know it's prolly more jason's issues, but I thought I'd ask you 08:07:58 Seems unlikely, but I got a bit confuzzled out of the gate this AM 08:08:37 zero-packing terminator? not sure what that is 08:08:55 is that like telling the compiler not to pack a struct? 08:09:20 yeah, it must be specific to tathi/jasons code. They use it to tell the code-fetch that no more opcodes follow. 08:09:56 you can ask a compiler to not pad structs at the end to an 8 byte boundary, but i don't think you can keep it from aligning on a 4-byte boundary 08:10:04 they fetch a CELL at a whack, and 1+ can be run, but they hit the zero and end there 08:10:40 sproingie: yeah, sorry - not C/asm packing - I'm pretty sure I don't much care for the idea. 08:10:53 i wouldn't rely on a compiler respecting a "not packed" directive 08:11:17 I don't like using gcc "metacompiler" directives at all 08:11:33 most compilers have something similar 08:11:43 sure, and I'd still avoid them 08:12:07 probably a good idea 08:12:14 The only time I can envision getting that involved would be like for tathi's object-file header-stuff 08:15:20 has anybody here swiftforth? or a trial version? I don't like that register stuff, so that would be really nice if someone could give me a little version for testing. 
08:17:11 I don't know why the hell every software company wants a form with my data when I'd like to test their software. I think it's idiotic. 08:18:42 ok, how is swift forth? 08:23:39 quartus lets people download without getting nosey 08:24:15 it's your basic shareware. registration just to download always struck me as self-defeating 08:24:35 like marketing wants all its info up front so they turn people off right away 08:28:23 * PoppaVic screams, sighs, "ah-hahs".. And then ponders How_To... 08:29:30 OK, yeah... Bastards. *sigh* I will need to create an opasm table to flog and access all the pseudo-regs. 08:30:17 I think I solved my problem with different cell sizes in xell. 08:30:47 Given a decent set, THIS would be the "layer" that folks would optimize for a specific CPU. 08:31:18 --- quit: Quartus (Remote closed the connection) 08:40:12 --- quit: warpzero ("leaving") 08:40:19 --- join: warpzero (n=warpzero@wza.us) joined #forth 08:51:47 --- quit: warpzero ("leaving") 08:52:20 --- join: warpzero (n=warpzero@wza.us) joined #forth 08:52:21 PoppaVic: it's just a terminator, rather than requiring the VM to keep a count of where it is. 08:52:36 ahh, ok. 08:52:55 so, every bytecode-seq would need the \0? 08:53:18 you want to fetch several opcodes at once (because of alignment issues - you can only fetch a literal from certain addresses) 08:53:33 right 08:53:53 no, that's why I chose zero - I just shift the instruction word down, and the rest gets filled with zeros. 08:54:07 so when you hit the end...you know you need to fetch another instruction word. 08:54:09 ohh 08:54:21 and embedded data? 08:55:11 The only kind of embedded data that my VM knows about is single-cell literals. 08:55:30 hmm...this needs a diagram :) 08:57:02 Yeah. versus the fetcher/decoder 09:07:06 * PoppaVic rotfls as he works on the pseudoasm-layer 09:10:29 tathi: to be honest, it sounds like a small caching mechanism 09:11:16 And, thus.. I have to wonder if the shifts cost more than just indexing. 
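tathi's zero-terminator scheme (fetch a cell of packed opcodes, dispatch the low bits, shift down so zeros fill in from the top) can be sketched in C. The 6-bit opcode width and the opcode numbers below are assumptions for illustration, not Fovium's actual encoding.

```c
#include <stdint.h>

/* Hypothetical 6-bit opcodes; 0 is the "fetch next word" terminator. */
enum { OP_END = 0, OP_DUP = 1, OP_ADD = 2, OP_EXIT = 3 };

/* Dispatch one packed instruction word. The opcode in the low 6 bits
   runs first; shifting right by 6 brings up the next slot and fills the
   top with zeros, so the loop naturally stops at the zero terminator
   without the VM keeping a count of where it is. */
int run_word(uint32_t iw)
{
    int dispatched = 0;
    while ((iw & 63) != OP_END) {
        dispatched++;            /* stand-in for real opcode dispatch */
        iw >>= 6;                /* next slot; zeros shift in above */
    }
    return dispatched;           /* caller now fetches another word */
}
```

Packing OP_DUP, OP_ADD, OP_EXIT into one word as `1 | 2<<6 | 3<<12` and running it dispatches three opcodes before hitting the zero fill.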
09:15:48 yes, but what alternative do you have? 09:15:54 where do you put literals? 09:16:01 right 09:16:20 I know what yer saying, I've been glaring at it for a week 09:16:47 actually, we had a major oversight when we did Fovium -- offsets for branches/calls should be IN the instruction word. 09:16:58 I don't think Jason has gotten around to switching it yet though. 09:17:01 ..but, at some point 3+ access are going to be unaligned or shifted 09:17:20 Yes. I am glaring at that right now. 09:18:23 We already know that any silicon is faster, but grossly variable. So, I am oddly leaning toward emulating a forthish CPU again. 09:18:43 this probably isn't any help, but I snipped out just the "execute loop" for Fovium. 09:18:46 http://rafb.net/paste/results/3W5Tvf84.html 09:19:20 And, because of unixish/posixish systems, there is really no point in considering interpretive-compiling/execution of the "asm" 09:20:51 I should say "real asm" 09:21:13 tathi: yeah, sorta' akin to my loeliger book 09:22:49 tathi: if it matters much, cogitate this some and email me... I think we need to consider a pseudo-CPU and what commands we need for it. If we can emulate an 'ideal' CPU, the rest will fall into place 09:23:45 I have been. I'm coming to the conclusion that there is no such thing as an 'ideal' CPU, so that it makes more sense to try and build a flexible compiler infrastructure. 09:24:17 I'm beginning to suspect we need [in]direct, {src,dest}regs, and some basics 09:25:27 tathi: well, we can't be harmed then.. All we care is our bytecode runs all over, and the interface to true asm/C or whatever does what we want to the pseudo-cpu 09:26:20 All we care is that the bytecodes/lists/structs hold up. 09:27:05 and, hell: maybe we will end up writing >native compilers per-platform 09:27:06 I just have no interest in something that runs all over (yet?). 09:27:22 I'd rather put the effort into a decent compiler for my box, and then generalize as I need it. 
09:27:33 well, sure - I can live in C for the same reasons 09:27:43 ..now, reconfigure it all the time 09:28:31 I'd rather "compile" bytecode and leave optimizing nativecode compilers as an aside 09:29:58 I have to agree with a buddy of mine from Long Ago - "Forth is the world's most unsupported Language" - it's worse than pulling teeth to get folks to remotely agree. 09:31:08 I haven't switched it yet 09:31:10 Actually, it's a lot like sitting in #asm and trying to folks 09:31:22 to [talk with] 09:31:26 I might only switch it for calls 09:31:42 so branches will be easy to patch 09:32:52 it's not that much harder to patch branches if they're in the instruction word... 09:32:54 but whichever. 09:33:18 yeah 09:34:06 >>r @ -64 & swap 6 << or r> ! 09:34:20 heh... I begin to see why folks charge for their "portable" Forths 09:34:41 vs: ! 09:35:06 vs everyones cpu-specific whores ;-) 09:36:16 oh, what am I thinking 09:36:25 ah, well *sigh* - it's par for the course 09:36:26 don't have to grab the bits, just patch in the call opcode 09:36:45 It seems to me that with a compiler, you could easily add more "primitives" 09:36:49 here 6 << op-call or swap ! 09:37:00 tathi: yep 09:37:02 Whereas with a bytecode engine, you really want to get everything designed properly before you build it. 09:37:12 assuredly 09:37:20 And...seeing as how I'm not a fan of big-design-up-front... :) 09:37:34 tathi: the kicker remains that objectfile versus pcodefile issue. 09:37:41 tathi: I don't have a problem changing the vm as I go 09:37:58 tathi: especially adding and removing opcodes 09:38:01 JasonWoof: oh, I meant as an engine for other people to build stuff on top of. 09:38:11 tathi: in forth it doesn't seem to matter much what's an opcode, and what's a word 09:38:18 true. 09:38:21 tathi: ahh. I've not read the backlog 09:38:37 Yep. 
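The call-patching trick quoted above (`here 6 << op-call or swap !`) amounts to building a whole instruction word with the target in the high bits and the call opcode in the low 6, then storing it over the slot to be patched. A C sketch of that reading, with the opcode number and 6-bit layout assumed for illustration:

```c
#include <stdint.h>

#define OP_CALL 5u   /* hypothetical call opcode number */

/* Build a call instruction word: target address (in cells) shifted
   above the 6-bit opcode field -- the C equivalent of
   "here 6 << op-call or". */
uint32_t make_call(uint32_t target)
{
    return (target << 6) | OP_CALL;
}

/* Patch a previously compiled slot in place. As JasonWoof notes, there
   is no need to grab the old bits -- the whole word is replaced. */
void patch_call(uint32_t *slot, uint32_t target)
{
    *slot = make_call(target);
}
```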
We need a version of forth that is more like CINT and C-- 09:38:47 yeah, whatever API you give people better not change much 09:38:55 exactly 09:38:55 That's what I'm aiming for. Moving at a glacial pace though... 09:39:05 lunch, bbiab 09:39:11 sure 09:39:13 ooh :) lunch 09:39:16 sounds good. 09:39:24 * JasonWoof puts on some clothes and heads for the kitchen 09:48:37 brb 09:48:48 --- quit: PoppaVic ("makes a few quick calls and returns") 09:57:33 --- join: aardvarx (n=folajimi@shell2.sea5.speakeasy.net) joined #forth 09:57:43 --- join: PoppaVic (n=pete@0-2pool198-117.nas30.chicago4.il.us.da.qwest.net) joined #forth 09:57:48 greetings, all. 09:57:48 heh. looks like I started something there :) 09:57:52 hi aardvarx 09:58:13 hi tathi 09:58:14 OK, back... I think we need a major glare, yeah 09:59:04 lol. I hadn't seen CINT before. 09:59:13 that's amusing. 09:59:28 tathi: I know you've generated asm to create direct objects. Yeah, CINT is one of several. 10:00:06 Note that I've only generated executables. Never quite got around to relocatable (.o) files. 10:00:16 ahhhh 10:00:49 It doesn't look too terribly difficult, but I've never actually done it. 10:00:51 Well, I look forward to seeing object-file code. 10:01:34 Seems almost certain that we'd benefit from a synthetic-assembly in forthish and interp+generator/compiler. 10:01:38 Yeah, I will get to it eventually. 10:02:54 I'd like to solicit comments on registers/use/opcodes, though.. I'm gonna' add a layer below what I have to talk strictly with the VM and its "registers? 10:02:59 ?/" 10:03:48 the "asm opcodes" need intense folding, which is why I called it a day early. 10:04:51 direct|indirect, sourcereg/address/destreg, etc. 10:06:49 The kicker with C remains: actual object-code is closed - fine. We want an emulator that we can forth-compile/interpret for. A second program to generate local-asm and objects/libs/modules is fine with me. 10:07:45 and, we may well add some "pseudoasm" to support dlopen-type shit. 
10:08:21 we don't NEED to fully parallel a cpu, we only need to parallel enough to interface most places. 10:09:11 My biggie is: we don't reflect the platform-cpu, just our own "system" 10:12:55 I think the asm-opcodes should be folded, spindled and mutilated as tight as we can (like the 4-byte fetches emulate) 10:13:39 and, we then have - almost universally - merely one set of opcodes/level/table that really demands "porting" 10:14:24 It mostly takes fig-forth DOWN one-level 10:15:18 And, mind you: I don't WANT to accept inline-asm (true asm). 10:15:42 Let folks use linkers for that. 10:17:03 It's just occurred to me, too: this may simplify the RStack. 10:19:39 I need more feedback on "what 'asm' set we need", but it seems likely we can generate "slower" bcode (and who cares, nowadays?) - but simplify actual compilers no end. 10:20:59 I'd even appreciate some code-kibitzing on that "asm codeset" It's not like brain-surgery. 10:21:47 anyway, the hell with it for today... I'm gonna' chow and snooze. 10:21:57 talk to you tomorrow 10:21:59 --- quit: PoppaVic ("Pulls the pin...") 10:42:10 --- join: Snoopy42 (i=snoopy_1@dslb-084-058-132-076.pools.arcor-ip.net) joined #forth 11:29:47 --- join: zoly (n=l@tor/session/x-dd7e61ce1d0e3684) joined #forth 11:29:56 'morning 11:33:43 Hi, zoly. 11:33:55 What are these tor/session/* masks? 11:34:54 tor.eff.org ? 11:35:50 right 11:36:04 Thanks. 11:38:03 prevents irc scanners from tracing back to the system one connects from 11:38:03 i found the number of portscans increasing when i connect to irc nets 11:39:38 Looks pretty nice. Is it efficient? 
11:40:08 you wouldn't want to use it for web browsing all the time 11:40:32 but with a proxy switch on your web browser, it is nice 11:41:42 the entrance behaves like a socks4/5 proxy 11:42:52 for irc, it has a bit the function of cloaked accounts, without the need to hassle freenode staff 11:43:05 and works on other irc nets too 12:20:56 --- join: Quartus (n=trailer@ansuz.pair.com) joined #forth 12:24:33 Quartus!!! 12:24:35 Howdy! 12:26:21 Hi. 12:26:32 Hi there. 12:27:09 How's the business doing? 12:27:41 Always room for improvement. :) How are things with you? 12:32:01 Not too bad, not too bad... 12:32:34 Can't seem to get the motivation to code :( 12:32:47 Not a problem I usually have. 12:33:11 Oh? 12:34:38 Yes, I enjoy the process, so I don't have to reach deep for motivation. Is there something specific you're wanting to write? 12:34:44 money counts. 12:34:56 I'm still learning C. 12:35:05 It helps to have a project in mind. 12:35:06 So, I am reading the book I have. 12:35:14 It's not a bad read... 12:35:31 but it does not make me giddy, either. 12:35:39 Maybe you need a better book :) 12:35:42 It also helps to have ONE project in mind. So you don't spend the day trying to pick a project. 12:35:54 I prefer to have several going at once. 12:35:54 Robert, I'm not that good yet. 12:36:19 Are you learning C so that you can do a specific thing or things, or just because you think it'd be a good thing to learn, in abstract terms? 12:36:23 I mean, "hello world" is not a problem, 12:36:30 Quartus, the latter. 12:36:47 aardvarx, that makes it hard. It's a means to an end, not an end in itself. 12:37:08 What can I do? 12:37:24 As I said, having something specific you want to achieve would help a lot. 12:37:56 I've got the "Hello world" thing down... but I don't know how to "use/write sockets" 12:38:05 * aardvarx wonders what a "socket" is... 12:38:05 Do you need to use/write sockets at this point? 12:38:12 Nope. 12:38:19 Then that's of little consequence, at this stage. 
:) 12:38:35 Here's a thought: 12:38:57 This might sound absurd, but please hear me out 12:40:23 I noticed that some buildings on the college campus I attended had Greek symbols on them (I don't know what it means, but they are usually over the door of the building or something...) 12:40:52 Anyway, I was wondering how many combinations of those groups can exist. 12:41:41 I was thinking I would write something that will spit out a combination (I noticed they usually are two or three letters per building...) 12:42:08 That's a pretty simple calculation -- number_of_letters_in_the_greek_alphabet**3 12:42:23 Also, there should be a way to keep track of combinations that are already used. 12:42:40 So, what do you think? 12:43:05 Do you want to generate an exhaustive list of all two- and three-symbol strings in the Greek alphabet? 12:43:36 Yes. Perhaps even include single-characters as well. 12:44:05 How much experience do you think that requires? 12:44:23 Also, is it something that can be implemented entirely in C? 12:45:12 The whole thing about "keeping track" suggests that I might need to use a database. Perhaps that might mean I need to learn MySQL? 12:45:39 aardvarx, you can write a *very* short program that prints all of them in order. 12:45:55 Short? 12:46:00 :) 12:46:01 In pretty much any language, though if you want them displayed properly you'll have to sort out fonts, etc. 12:46:22 Yes. Do you think it would be a long program to print out all numbers between, say, 000 and 999? 12:46:44 lol.. 12:46:54 I think at this point, I'd settle for just spewing out things like "Alpha", "Beta", "Gamma", etc... 12:46:57 * zoly thinks about installing oracle for managing the telephone numbers of his contacts 12:47:12 aardvarx -- then that takes away the worries about the fonts. 12:47:16 You know that many people? 12:47:23 * Robert has a ~/stalk.txt file. 12:47:24 at least 50 :) 12:47:24 zoly, I get it. 
The My 12:47:39 "MySQL idea was ridiculous" 12:48:37 There are (I believe) 24 letters in the Greek alphabet. So there are 24*24*24 possible 3-letter strings -- 13824. 12:48:49 576 two-letter strings. 24 one-letter strings. 12:49:05 In total, 14424. 12:49:14 * aardvarx queries google for Greek alphabet... 12:49:17 is there an idiom like "shooting sparrows with cannons" in english? 12:49:29 zoly, killing a fly with a sledgehammer 12:49:50 sounds like a good one 12:50:06 Quartus, it is twenty-four. 12:51:04 yup. sci-fi authors tend to futurize that to "swatting a fly with a laser cannon" :) 12:51:13 Quartus, so here's an idea: First, I start out with your suggestion of just "spewing everything out." 12:51:17 It is an extremely simple program to print an exhaustive list of all one, two, and three-symbol Greek strings. No database is required, any more than you'd need one to print the numbers from 0 to 14424 without doing one twice. 12:51:46 How about subtracting those that are already used? 12:51:56 If you're printing all of them, no subtracting is required. 12:52:20 actually, the "shooting mosquitos with laser" idiom was used by C.M. after the NC4000 came out. personal SDI 12:52:46 :) 12:53:13 Quartus, I guess that would be the next step would be to find out which string has been "taken" already. 12:53:18 though there was speculation about whether he was commenting on military applications of NC4000 12:53:41 No need to do that either, aardvarx. A three-symbol Greek string is no different than a three-digit number, except it's in base 25 instead of base 10. 12:53:44 "I guess that the next step would be to find out which string has been "taken" already." 12:54:10 Base 25 ??? 12:54:20 Wait, base 24? Sorry. Base 24. 12:54:28 aardvarx: first simplify problem, then start coding :) 12:55:00 zoly, I want to make sure that I have the tools/qualifications. 12:55:12 aardvarx, yes. Decimal digits are from 0-9; ten different symbols, hence 'base 10'. 
Greek letters are from alpha to omega; 24 different symbols, hence 'base 24'. 12:55:24 aardvarx: you think too complex 12:55:45 zoly, how did you know??? 12:55:51 :P 12:56:00 aardvarx: you told us 12:56:07 I did? 12:57:30 Quartus, how does the issue of "Base" factor into this conversation? Could you please explain? 12:57:48 Did you understand the last thing I wrote? 12:58:07 No :( 12:58:39 Ok. Assign alpha a value of 0. Assign each successive letter the next highest integer. 12:58:45 What is omega's value? 12:58:49 23 12:59:11 Ok. So if I said your greek string is represented by {0,23,0} -- what's the string? 12:59:32 Alpha, Omega, Alpha ! 12:59:35 I see! 12:59:48 Hmmm. 12:59:56 Ok. Now working in decimal, if I said your decimal string is represented by {5,4,2} -- what's the string? 13:00:58 Epsilon, Delta, Beta ? 13:01:17 I think you're off by one there. 13:01:27 Decimal. Forget Greek. Assign a value of 0 to the symbol '0', and so forth. What is the value of '9'? 13:01:59 aardvarx, the conceptual piece you may be missing is that decimal numbers are written in an alphabet of ten symbols, '0' '1' '2' ... '9'. 13:02:21 Yes! 13:02:29 Zeta, Epsilon, Gamma ? 13:02:45 A number, written out, say -- '542' is nothing more than a three-letter string in the 'decimal' alphabet. 13:03:37 Hold it! 13:03:39 A three-letter greek string, say 'Alpha Omega Alpha' is nothing more than a three-digit number in the 'Greek' alphabet, where the digits aren't named '0' ... '9', but 'alpha' to 'omega', and there's 24 of them. 13:04:02 That's not the same as {5,4,2} though. 13:04:05 Is it? 13:04:12 {5,4,2} in decimal is '542'. 13:04:42 {5,4,2} in Greek is 'Zeta Epsilon Gamma'. 13:05:03 Quartus, I guess where I run into trouble is {20,15,22} 13:05:14 What kind of trouble? 13:05:20 201522 is not a valid entry. 13:05:39 It is greater than 24*24*24 13:05:39 Not a valid entry in what context? 
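Quartus's counts above (24*24*24 three-letter strings, 576 two-letter, 24 one-letter, 14424 in total) are easy to check with a few lines of C; `count_strings` is a name invented for this sketch.

```c
/* Number of non-empty strings of length 1..max_len over an alphabet of
   `alphabet` symbols: alphabet + alphabet^2 + ... + alphabet^max_len. */
int count_strings(int alphabet, int max_len)
{
    int total = 0, per_len = 1;
    for (int len = 1; len <= max_len; len++) {
        per_len *= alphabet;   /* alphabet^len strings of this length */
        total += per_len;
    }
    return total;
}
```

`count_strings(24, 3)` gives 14424, the figure quoted in the channel.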
13:05:47 I'm going to try to simulate a fairly large DTL (diode-transistor logic) digital circuit, mainly interested in propagation delays and their effects. What I'm wondering about is how the number of inputs in a NAND gate affects the output propagation time. Anyone experienced with this? 13:06:02 aardvarx, it's 20*24*24 + 15*24 + 22. 13:06:13 ok, in decimal, you have 10 'letters', so each 'digit' is worth 10 times more than the next one. 13:06:29 I see. 13:06:31 in Greek, you have 24, so each digit is worth 24 times more than the previous one. 13:07:12 Aha! 13:07:33 So, if I get 10245 as an input... 13:08:23 zoly is right. 13:08:28 * aardvarx is thinking too hard. 13:08:35 I lost my train of thought. 13:08:44 I was going to say divide by 24*24... 13:09:22 That doesn't really say anything about the first letter. 13:09:48 tathi, wait a minute... 13:10:06 Are you saying that it can be reduced to a basic arithmetic problem? 13:10:18 aardvarx, yes. You can enumerate any string. 13:10:30 Aha! 13:10:38 aardvarx: I was just saying in words what Quartus put in an equation. :) 13:10:39 So you can iterate over the entire integer space, and generate each string in turn. 13:11:07 I was thinking that I would have to write all the possible combinations out by hand! 13:11:30 No. 13:11:49 No more than you'd have to write out each three-digit decimal number by hand if I asked you to print all the numbers from 000 to 999. 13:12:34 So do you see how you wouldn't need to keep track of those already printed? 13:13:39 Only if I don't care to keep a record, yes. If I am just counting... 13:13:54 all I have to do is keep moving forward. 13:15:27 Right. So SQL -- overkill. :) 13:15:47 Hmmm. 13:18:37 So, if I am given 12345, I will divide it by 24*24; then I will reference the remainder to the alphabet. The returned value would be the first letter (a.k.a Most significant bit) in the collection, right? 
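Quartus's expansion ({20,15,22} -> 20*24*24 + 15*24 + 22) is ordinary positional notation, and can be checked with a short C sketch (`from_digits` is an invented name):

```c
/* Pack digits (most significant first) into a single number: each digit
   is worth `base` times the one after it, so {20,15,22} in base 24 is
   20*24*24 + 15*24 + 22. */
int from_digits(const int *digits, int count, int base)
{
    int n = 0;
    for (int i = 0; i < count; i++)
        n = n * base + digits[i];   /* Horner's rule */
    return n;
}
```

The same function packs {5,4,2} in base 10 into 542, matching the decimal example in the discussion.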
13:18:45 even "keeping a record" would be simple 13:19:42 zoly, I won't argue with you (because you probably know much more than I do at this point,) even though I don't believe you. 13:20:26 you know already how to enumerate any string. i.e. representing each of them with a unique number 13:20:27 zoly, I guess *simple* is relative :P 13:20:53 use this number as an index into an array, where you mark "used" or "unused" 13:21:58 zoly, where does the information in the array come from? 13:22:03 aardvarx, you'd take 12345 mod 24 (in C this is %24). That's the lowest digit. 13:22:24 Then you'd divide 12345 by 24, discarding any fractional component. 13:22:33 aardvarx: just true vs false, or 1 vs 0 13:22:33 Then you'd take the result % 24 again, that's your next digit. 13:22:39 Repeat as requried. 13:22:42 *required 13:22:57 Goodness gracious!!! 13:23:05 you start with "all false" = non used 13:23:15 Why on earth did I start from the other end? 13:23:18 none 13:23:44 zoly, I see your point, but I.... 13:23:50 Aaaah! 13:23:50 aardvarx, note that you can extend this technique to print a value in any base. Binary = base 2, decimal = base 10, hex = base 16, etc. 13:24:11 Just do the mod operation? 13:24:20 Yes, mod by the base, divide by the base, repeat. 13:24:29 --- join: Pepe_Le_Pew (n=User@201008242045.user.veloxzone.com.br) joined #forth 13:24:31 Use the appropriate alphabet of symbols. 13:25:03 In Forth, this is what the <# # #S #> words to do produce numeric output for any value of BASE {2..36}. 13:25:22 eeew!!! 13:25:33 Is that such a good idea? 13:25:37 huh? 13:25:55 Are stacks able to do that kind of operation? 13:25:59 Of course they are. 13:26:03 I thought they can only add numbers? 13:26:07 Good grief, no. 13:26:12 What??? 13:26:26 aardvarx, are you pulling my leg here? You're in the Forth channel. 13:26:40 * aardvarx just realized that fact. 13:26:51 Apologies to the audience. 13:27:01 I forgot. 13:27:12 Are you just having us on? 
Or is this stuff really that confusing to you at this point? 13:27:27 I just didn't think that Forth can handle array operations, is all. 13:27:49 Quartus, I am not kidding. I have thought about doing this for a few years now. 13:28:04 I thought I would need to use MySQL... 13:28:11 so I never pushed it too far. 13:28:14 Have you ever used Forth at all? 13:28:33 Short answer: no. 13:29:18 The code samples I saw were just showing how RPN works. 13:30:04 Just being acquainted with Postfix created a lot of OOOOH/AAAH moments. 13:30:28 Well, here's a word that prints an unsigned double number, in Forth: : ud. <# #s #> type space ; 13:30:33 Examples of its use: 13:30:45 decimal 12345. ud. -> 12345 ok 13:30:54 decimal 12345. 2 base ! ud. -> 11000000111001 ok 13:31:05 decimal 12345. hex ud. -> 3039 ok 13:31:06 O_O 13:31:15 decimal 12345. 8 base ! ud. -> 30071 ok 13:31:24 decimal 12345. 24 base ! ud. -> LA9 ok 13:31:26 You get the idea. 13:31:28 O_O 13:31:50 After 9, A is the next digit (value 10), B is 11, etc., up to Z. 13:32:10 * aardvarx speechless 13:32:38 Note that the same ud. definition displays all of those depending solely on the value of BASE. 13:32:58 except hex 13:33:07 : hex 16 base ! ; 13:33:08 hex is shorthand for 16 base ! 13:33:21 I see. 13:33:21 decimal is shorthand for 10 base ! 13:33:31 octal? 13:33:35 8 base ! 13:33:45 Good deal! 13:34:09 binary? 13:34:41 * aardvarx thought Forth was meant to be simple (i.e. does not do complex operations) 13:35:05 It's as complex as you make it. 13:35:08 aardvarx, Forth is quite capable of extraordinarily complex operations. 13:35:23 arrays too? 13:35:26 Yes. 13:35:35 O_O 13:35:50 Any data structure you can think up. 13:35:57 an array is just some reserved memory which you can address by index 13:36:28 Can I make files just like with C (e.g. filename.c) to store Forth code? 13:36:50 Yes. 13:37:07 What is the extension? .forth ? 13:37:15 Up to you. Gforth uses .fs. 13:37:44 I've seen .f, .4th, .4. 
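The mod-by-the-base, divide-by-the-base loop Quartus describes, together with the 0-9-then-A-Z digit set, is essentially what `<# #S #>` does. A C analogue (`to_base` is an invented name; the Forth `ud.` above is the real thing):

```c
#include <string.h>

/* Render n in any base from 2 to 36, building digits from the right --
   the same mod/divide loop as Forth's <# #S #> pictured output. */
const char *to_base(unsigned long n, int base, char *buf, int len)
{
    static const char digit[] = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";
    char *p = buf + len - 1;
    *p = '\0';
    do {
        *--p = digit[n % base];  /* lowest digit, rightmost position */
        n /= base;               /* drop it; repeat until nothing left */
    } while (n > 0);
    return p;
}
```

`to_base(12345, 24, buf, sizeof buf)` yields "LA9", matching the base-24 `ud.` example above.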
13:38:03 * aardvarx wishes previous exposure to stacks can be undone... 13:38:12 like, using an editor to write source to a file? 13:38:12 and load that source file into forth? 13:38:19 It doesn't sound like you've had much exposure to stacks, aardvarx. 13:38:19 zoly, yes. 13:38:40 Quartus, the code I saw said something like this: 13:38:54 Put first number on stack... 13:39:14 Put second number on stack by pushing down first one... 13:39:30 Push '+' onto stack... 13:39:40 Ok, right there you've gone off the rails. 13:39:42 Press a shiny red button... 13:39:47 And voila! 13:40:09 aardvarx: that's how you usually load source into Forth 13:40:10 nowadays, that is... 13:40:24 some Forths, especially older ones, and stand-alone Forth, have their own simple file system. 13:40:27 That's a sketchy and questionable description of how a stack-based machine adds two numbers together. Surely, though, even given that amazingly craptastic description, you can see how it would extend to any operation on stack items? 13:40:58 Quartus, not until zoly replied. 13:41:17 zoly's talking about source in files, aardvarx, not about how stacks work. 13:41:59 I think that I have definitely left the track... 13:42:09 generally speaking, if you want to keep source, you wouldn't enter it interactively 13:42:17 I mean, now I think that each entry in the array is a stack entry. 13:42:29 aardvarx, no. An array is a data structure in memory. 13:42:45 stack != memory 13:42:48 Is that right? 13:42:57 In general terms, yes, that's right. 13:43:02 Okay. 13:43:21 You're talking RAM, right? 13:43:28 Yes. 13:43:33 Okay. 13:43:55 difference would be, stack doesn't need to be freely addressable. but "memory" would be accessed using an address. 13:44:34 for stack it is enough to be able to reach the top element(s), and push new elements, or pop data off it 13:45:01 If you try to "pop off" data that is not there... 13:45:09 stack underflow 13:45:10 You have a stack underflow condition, which is an error. 
13:45:15 Yes. 13:45:46 that's like spending more money than you have in your wallet 13:46:02 I guess that there is an overflow condition, but it has nothing to do with leaving data on the stack, right? 13:46:19 If I have more data than stack space... 13:46:38 that would cause an overflow if I tried to jam everything into the stack. 13:46:57 with extremely small stacks, or errors in program, that could happen 13:47:00 I think that's what I read. 13:47:27 aardvarx, the stack is not where you want to store your data structures. It's potentially very small. 13:47:41 Okay. 13:48:06 Consider it to be 32 entries deep at most, as a rule-of-thumb. 13:48:21 Okay. 13:48:47 I will have to try out what I learned today in C. 13:49:15 Since you frequent the Forth channel, you might try Forth at some point! 13:49:25 Quartus, that is the idea. 13:49:44 Next stop is Forth. 13:50:00 decimal 12345. hex ud. -> 3039 ok 13:50:02 It isn't easier to get into Forth by starting with C, I'll tell you. It'd be like getting into C by starting with Fortran. 13:50:17 re: stack depth. I think tathi did some experimental studies yesterday, finding that in total 12 items are used by the isForth compiler when compiling itself. 13:50:17 Worse, even. 13:50:27 Very well. 13:50:30 Robert, I can believe it. 13:50:56 aardvarx, if you want to learn Forth -- learn Forth. 13:51:15 It's the shortest path. :) 13:51:22 Very well. 13:51:36 Do you know of any resources for using arrays in Forth? 13:51:47 Anything online? 13:52:04 There's no built-in 'array' operator in a standard forth -- because it's trivial to implement arrays, and you can choose which method suits you best in any given app. 13:52:12 For instance, a simple cell-array with automatic indexing: 13:52:31 : array ( cells -- ) create cells allot does> ( n -- addr ) cells + ; 13:52:39 Then, in use: 13:52:39 no 13:52:45 mistake 13:52:46 oops, did I get that wrong? 
13:52:53 ah sorry 13:52:57 does> swap cells + 13:53:09 In use: 13:53:11 use character array first :) 13:53:13 100 array foo 13:53:34 42 5 foo ! stores 42 in the 6th cell of the array 13:53:46 It starts at zero? 13:53:54 Yes. 13:54:06 aardvarx: can you imagine how an array is represented in memory? 13:54:16 Contiguous blocks. 13:54:22 Contiguous bytes. 13:54:30 At least that's what I've been told. 13:54:53 therefore i ask whether you can visualize it yourself 13:55:03 Yes, I can. 13:55:06 pah, arrays whole memory is an array 13:55:11 doesn't help you if you repeat what you've been told without understanding it 13:55:36 if you can, you can also see what you would do to address any array element? 13:56:40 The array must have some address in memory; once I know the address of the array, I can then index to the entry of interest? 13:56:49 like "your array starts at address 'x', you have elements of size 'cell', you want to access element 'n' 13:56:51 " 13:57:13 Does my reply make sense? 13:57:24 important is the stepsize, ehm in that case cells. 13:57:44 if you understand the required calculation, you know also what the language would have to do to address array elements 13:57:47 aardvarx, it does. Zoly's asking how you convert an index into the address of a specific cell in the array. 13:58:03 Uhh.. no. 13:58:11 I didn't think that was possible. 13:58:34 I thought the array had an ADDRESS in memory and that I would be responsible for the index operation. 13:58:39 how can it be impossible if most languages know arrays? 13:59:16 It's like, I know the address to my residence. Does the kitchen have a street address as well? 13:59:17 if you know, you can teach your compiler to do the calculation for you 13:59:30 zoly, no compiler talk just yet. 13:59:48 I'm still dealing with the self-compilation heresy. 14:00:12 i don't understand you "kitchen vs residence" analog 14:01:37 You're saying that I have to have an address for a particular entry in the array. 
I thought the ENTIRE array had an address, which told me where the Contiguous block resided. I would still be responsible for indexing to the Nth entry of the array. 14:02:06 Something like 'Address.index' 14:02:19 lets simplify the array. say, an array with byte-sized elements. 14:02:32 Please excuse me. 14:02:41 I am running a tad late. 14:02:42 first array address is "x". 14:02:49 Can we resume tomorrow? 14:02:56 that is where array element 0 resides 14:03:57 Will you be around the channel tomorrow? 14:04:46 Thanks for your time; have a good day. 14:04:50 --- quit: aardvarx ("leaving") 14:19:57 Some strange conceptual bindings, there. Confusing the name of thing with the thing itself. 14:22:43 things one doesn't understand often seem more complex than they really are 14:24:33 but he seems to come from databasing. maybe use of similar terminology confused him. 14:27:29 Maybe. But the notion that an array has an address referenced by its name, but that its elements don't...! 14:32:51 :) 14:49:05 essentially, this is even correct. first address, referenced by array name. element, only referenced by offset calculation. 14:49:32 But each element has an address. 14:49:47 at least if you do manual-style arrays 14:50:17 No matter where the offset calculation is performed, each element has an address. 14:51:50 can't deny that 14:52:12 even though i tried to find a way :) 14:52:36 but what you said was: "...array has an address referenced by its name, but that its elements don't" 14:53:03 If you prefer... "...an array has an address (referenced by its name), but that its elements don't...!" 14:53:28 meaning "elements for no address, referenced by name" 14:54:51 elements _have_ no address, ref'd by name.... 14:54:52 ok. that changes the meaning of what you said. at least, it changes how i understood that. 
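The addressing rule zoly was driving at -- every element has its own address, computed as base plus index times element size -- is all that `does> swap cells +` does. A Python sketch with a bytearray standing in for memory (the cell size and addresses here are invented for the illustration):

```python
CELL = 4                         # assume 4-byte cells for this sketch
memory = bytearray(1024)         # stand-in for RAM
base = 64                        # pretend `100 array foo` reserved space here

def element_addr(base_addr, n):
    """base + n * element size -- all that `swap cells +` computes."""
    return base_addr + n * CELL

def store(addr, value):          # like !
    memory[addr:addr + CELL] = value.to_bytes(CELL, "little")

def fetch(addr):                 # like @
    return int.from_bytes(memory[addr:addr + CELL], "little")

store(element_addr(base, 5), 42)       # 42 5 foo !
print(element_addr(base, 5))           # 84: the 6th cell's own address
print(fetch(element_addr(base, 5)))    # 42
```

The kitchen has a street address after all: it's the house's address plus a fixed offset.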
14:54:59 zoly, I was trying to express the simple notion that every element of an array in memory has its own address, and that aardvarx seems to be stuck in a conceptual rut wherein he understands that an array has an address, but considers the elements of the array to have some other mystical quality that prevents them from having their own individual addresses. 14:56:11 like, an array is some kind of singularity 14:57:13 or like, an array occupies a single memory location, and all data has somehow to be squeezed into that location 14:57:24 Maybe. He's several epiphanies away from an understanding of even the basics. 14:57:42 Heh. 14:58:22 i can't imagine, if he just starts playing with a Forth interpreter, that comprehension won't come quickly. 14:59:29 It can't hurt. 14:59:44 Quartus: i tend to use byte/character arrays for first explanation. there is much more 'magic' in "swap cells +" than in "+" 15:00:45 assuming byte addressable cpu ... 15:00:45 byte addressing cpu... 15:00:56 byte addressable memory 15:01:20 Yes, except that it makes those assumptions, I agree -- it's fewer symbols to understand. 15:01:48 allows also conveniently to ignore the stack order after does> 15:02:03 But ! and @ become c! and c@, which adds some of the complexity back in. 15:02:39 And in examples you need to limit the storage to {0..255}, or use CHAR. 15:02:53 not really, if we clearly state "character array", then "character-fetch" and "character-store" are very intuitive 15:03:51 Matter of perspective, I guess. I think of c@ and c! as special-cases of @ and !. 15:04:24 thus a great path to upgrade to cell arrays, by demonstrating the limits of character arrays 15:04:29 eh? index can be anything 15:04:35 index not limited to 0..255 15:04:47 I didn't say the index was limited, but that the storage was. 15:04:55 Of each element. 15:07:39 create allot does> + ; vs create cells allot does> swap cells + ; 15:08:27 7 words vs 4 words, not counting ; 15:09:07 Right. 
Three fewer words, but you need to explain the element range restriction, explain c@ and c!, use CHAR before symbols, and illustrate the limits before showing how to lift the limits. My feeling is that showing the more general case first is preferable, saving the limited cases for later, even at the cost of three words in the ARRAY definition. 15:09:08 even though architecture is hardcoded in the first case 15:10:52 Yes, there's that, too -- your example only works on 1 CHARS equals 1 architectures. 15:11:01 objection. not "more general". just "different array element size". handles only cell size elements. 15:11:13 Otherwise it becomes : carray create chars allot does> swap chars + ; and nothing is saved. 15:11:47 objection again. works always, but element size = address increment of cpu 15:12:19 zoly, in my opinion @ and ! are more fundamental operators than c@ and c!, simply because the former have no 'c' qualifier -- no qualifier at all, in fact. 15:12:51 re "element size = address increment of cpu" -- yes. But it may be inaccessible in that form if c@ and c! require char-aligned addresses. 15:14:18 if address increment of cpu would be CELL, one would have to use @ and !, instead of c@ and c! 15:14:30 you would use c@ and c! _only_ with cpus with byte addressing 15:14:38 Also, the CELL is the basic unit of data in a Standard Forth. CHAR always fits inside a CELL, but a CELL is not necessarily an even multiple of a CHAR. 15:14:54 zoly, you'd use c@ and c! for char addressing. That may or may not be a byte. 15:15:35 which is important in the light of the data you intend to store. but not in the light of demonstrating array addressing. 15:16:25 If he only knew what his questions caused... :) 15:16:32 It's important in terms of the pedagogical order in which concepts are presented. I don't think eliminating CELLS and SWAP from the array definition provides benefits that outweigh the negatives, in particular the architectural dependency. 
15:17:56 For teaching Standard Forth, a carray is : carray chars allot does> swap chars + ; . The shorter version, without chars and swap, is a special-case that works only when 1 chars is equal to 1. 15:17:58 thus, incrementally increasing understanding would be an anti-concept 15:18:16 er, put a 'create' in that carray. :) 15:19:00 no. works always. but wouldn't be a byte array, on non-byte adressing machines. 15:19:33 zoly, can you imagine an architecture on which 'carray' would create an array, but would be inaccessible by @, !, c@, and c!? 15:20:05 I can, and if a created carray is inaccessible, I feel relatively comfortable saying it doesn't work. 15:21:12 a "carray" will be what you define it to be. if i define it to be an unbuffered disk array, memory operations wouldn't work in it 15:21:21 Imagine the complexity of having to explain that 10 carray foo isn't an array of 10 characters after all. 15:21:49 zoly, you're shifting the context away from teaching the concept of memory-based indexed arrays. 15:22:49 you may call it character array, if you like. concluding that it must contains and be accessed character wise is a requirement you have introduced 15:23:06 For the purposes of teaching arrays, yes, that's a requirement. 15:23:38 my statement concerning element size was "hardcoding architecture" 15:23:50 like, address increment of cpu 15:23:50 If it's a word to create float arrays, I require that the array be able to be accessed float-wise, too. Cell-arrays, same rationale. 15:24:05 which _may_ be byte. or _may_ be word, or cell 15:25:27 you are hardcoding archtecture too, if you claim that leaving out cells and swap cells will forcibly lead to byte arrays 15:25:32 zoly, yes. My response to that was that the general case has the multiplier in it, be it CELLS, or CHARS, or FLOATS, or what-have-you; using programming techniques that require specific architectural attributes is a special-case to be discussed later. 
15:26:38 In my opinion, teaching a technique that has to have warning signs attached to it ("Will not perform the same way on architectures where 1 CHARS does not equal 1") is making the situation more difficult for the student and teacher both. 15:30:31 1 chars wouldn't be 1 on machines, addressing less than 8 bit per address increment. 15:31:10 such as, on 4 bit per address increment, 1 chars would have to be 2 (assuming 8 bit per char) 15:31:59 Forth for 4004 :) 15:32:43 Mmm... 15:34:02 on word-addressing cpus, 1 chars would still be 1 15:34:08 not 0.5 15:35:01 word as in 16 bit 15:36:15 Right, it isn't 1 on machines with <8 bit chars, or >8 bit chars. And cell size is not required to be a multiple of char size. 15:36:49 Well, not an even multiple. At least the same size, and at least 16 bits, for a Standard Forth. 15:41:47 no. 1 cells could be 1 chars 15:41:58 Yes, it could be. It is not required to be. 15:42:28 but you would very likely have some pack/unpack words 15:42:46 Yes, and probably other words for accessing by address unit. But not Standard words. 15:45:48 iirc, cmforth for novix (which used 16-bit addressing) had pack and unpack 15:46:06 Yes, though I don't remember the specifics offhand. 15:46:08 char was 8 bit, 15:46:20 cell was 1 15:46:46 maxint was 65535 15:47:04 cell was not 1, I'm sure? :) 15:47:06 if there would have been such a constant as maxint ... 15:47:27 16 bit forth, with a cpu which had address increments of 16 bit 15:47:43 i.e. here 0 , here swap - . returned 1 15:48:21 therefore, cell must/would have been 1 15:48:32 Oh, you mean 1 CELLS is 1. Ok. 15:48:55 right. sorry. 15:49:14 i implied 1 cells constant cell 15:49:26 I guess a Standard Forth for the Novix would have had 1 CHARS equal to 1 CELLS, with additional words to access 8-bit quantities. 
15:50:33 that's where pack/unpack entered the game 15:51:02 after unpack, you had 1 char per cell unpacked, easily addressable 15:51:30 but pack stored two chars per cell, requiring less space, but not addressable anymore 15:52:11 afaik, no words no mask, and conditionally shift 8 bits right, for isolating chars, existed 15:52:19 no words to mask ... 15:52:40 after all, you had unpack for that purpose 15:52:46 Right. 15:53:53 --- quit: virl (Remote closed the connection) 15:55:38 pygmy is modelled after cmforth. i haven't had a comparative look at it, whether it attempts to implement this (for pygmy platforms unneeded) concept as well. 15:56:16 I don't remember pygmy having any need for packing or unpacking char data -- as you say, not needed on x86 platforms. 15:56:59 I consider cmForth an ancestor of Quartus Forth, as regards its native-code capability. I studied cmForth early on. 15:57:24 if it would try to attain compatibilit, it would have to. 15:57:50 zoly, as far as I know Pygmy wasn't intended to be compatible with cmForth, just descended from it. 15:57:51 you worked with nc4000? 15:57:58 zoly, no -- I studied the cmForth source. 15:58:18 It's a metacompiled Forth. 15:58:19 ah, ok. i did some work using nc4000 15:58:49 t'is a while ago 15:59:18 that's where my cmforth recollections come from 15:59:32 Hands-on! 16:00:10 some results of that work you can see on some products today 16:00:31 I will look for the 'zoly' signature. :) 16:00:45 you won't find that :) 16:01:50 but the products are consumer-grade 16:02:06 actually, it is the packaging where you would find the results :) 16:02:15 Forth-enabled packaging? :) 16:03:17 some japanese manufactures use, next to barcodes, another type of machine-readable code. a square pattern, printed from retroreflective material 16:03:52 I've seen that. 16:04:13 we made the prototype of that code 16:04:14 and the installation for reading from distance 16:04:18 Neat! 
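The pack/unpack scheme described above -- two 8-bit chars squeezed into one 16-bit cell versus one addressable char per cell -- can be sketched like this. The byte order within the cell is a guess, and the real cmForth words may have differed in detail; this is just the concept:

```python
def pack(chars):
    """Two 8-bit chars per 16-bit cell; half the space, not char-addressable."""
    cells = []
    for i in range(0, len(chars), 2):
        lo = chars[i]
        hi = chars[i + 1] if i + 1 < len(chars) else 0   # pad odd lengths
        cells.append(lo | (hi << 8))                      # assumed byte order
    return cells

def unpack(cells):
    """One char per cell: addressable again, but twice the space."""
    chars = []
    for cell in cells:
        chars.append(cell & 0xFF)
        chars.append(cell >> 8)
    return chars

packed = pack([0x48, 0x49])   # 'H', 'I'
print(packed)                 # [18760], i.e. [0x4948]
print(unpack(packed))         # [72, 73], i.e. 'H' and 'I' again
```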
16:04:45 the prototype was novix-based 16:05:57 What was the production model built on? 16:06:19 though novix was total overkill for that project. it wasn't the only one we used novix for though 16:06:19 but the only one one could have seen in the wild 16:07:03 no idea. we just did feasibility and preprotype. as in, functional demonstration. 16:07:12 pre-prototype 16:07:30 able to read it from about 10 m distance 16:07:43 and, finding the codes on that distance too 16:08:09 that's where the retroreflection comes in 16:08:19 helps to filter all other stuff out easily 16:08:31 only product codes remained 16:09:00 Clever. 16:09:05 which is useful for automated warehouses 16:10:00 send the bot, tell it to return with product xyz, even if it is not exactly in the location where one thinks it should be 16:10:02 the bot will find it :) 16:10:09 Or die trying? 16:10:43 actually, error handling protocols was not part of out work 16:11:19 You left the 'die trying' part to the production implementation. Wise. :) 16:13:24 we didn't build bots. just the discovery and recognition of distant codes 16:13:34 Interesting nonetheless. 16:14:05 at that point we had even no idea how and where that technique would be used 16:15:19 if was surprised myself when finding it on consumer goods later on 16:16:15 neat indeed. /me catches up on the days convo 16:16:34 oops sex change 16:16:39 --- nick: nanstm -> Raystm2 16:16:41 we speculated, of course. where pretty close. but didn't think the codes would go on the product cases. more like, on larger boxes full of articles, not on individual articles 16:16:42 the "automatic warehouse retrieval by bot" is still speculation, btw 16:17:29 my chuckBot could do it ( /me says with the na na na tone. ) :) 16:20:19 yes, put those stickers on power outlets 16:20:36 great idea. 16:21:02 or induction coils where it can recharge 16:21:12 even better. 16:21:42 Infact, dust off the old Tesla notebooks... 
16:22:39 The ground is ground and the warehouse air is charged with excitement. 16:38:16 In other news... Give me a thousand deep stack and I'll find a way to use the whole thing. :) 16:39:15 Not me. Three items plus or minus two, for the word under consideration. More only as nesting demands. 16:41:42 Well, sure, normally that would be the case for me as well. I'm just saying, If someone gives me that much room, I'll find a way to use it :) 16:42:17 It won't be very convoluted. I don't do very convoluted. 16:42:36 I made the data stack in Quartus Forth 1024 deep. 16:42:43 :) 16:42:50 thank you. ( tear) 16:43:00 Heh. Not just now, it's always been that way. 16:43:17 Yes, and you had my mind in mind and I thank you. 16:43:46 I'm looking so forward to getting a Palm and trying out your forth. 16:43:59 I look forward to hearing what you think of it! 16:44:17 If you want an early look, you can get the Palm OS Emulator or Simulator and try it on there. 16:44:19 The way our bonus system is running, this will be next spring. :( cant wait. 16:45:19 I may do that. But this week is for finishing up my involvement with cf, and modulating on to more profitable projects for atleast 6-8 months. 16:45:51 nanstm is getting itchy to work from home. 16:45:52 Well, it's a no-cost option. 16:46:16 Money not the problem, time... 16:46:37 I know what you mean. 16:47:05 Infact, your success has breathed much life into my endeavor. 16:47:16 oh yes? how so? 16:48:00 Cuz, monkey see, monkey do. 16:48:17 Not sure I follow. 16:48:21 I always do better when I know someone doing it. 16:48:36 Selling a Forth? 16:49:06 No, but selling just the same. Product doesn't matter, so long as you have one and it's got a demand. 16:49:28 True. 16:49:59 I sell better @work because I work with great salesmen. msmd. 16:50:15 I need a great salesman to sell Quartus Forth for me! 16:50:26 I'll sell better online cuz I know a few people actually doing it. 16:50:27 ya. 16:50:45 I'll find a way to mix it in. 
16:51:16 Infact, I'll be needing it ( Quartus forth ) to integrate my ideas on the handheld. 16:51:43 I decided that the first time we conversed. 16:51:50 Oh? You'll be working under a Palm OS? 17:00:45 My sweetie called me on her break, sorry for delay. 17:01:12 Yes, I want to integrate that as well, into the _idea_. 17:02:24 I believe I need 3 applications in forth. 17:02:27 A device using the Palm OS has a number of immediate advantages. Easy to develop for, an existing base of thousands of apps. Stable and useable. 17:02:28 One on the palm. 17:02:52 One on the home computer ( a generalization there ). 17:03:38 And a web application as some companies will not allow you to have private applications on the "house" machine... but they let you browse, kinda thingy. 17:04:11 Ah. 17:04:27 I set up a '.fhtml' type a while back, lets me embed Forth code in html. 17:04:41 excellent! 17:05:12 Have you seen what crc is doing with his 'later' word. ? 17:05:16 No, I haven't. 17:05:56 You may know it under another name, but : later pop pop swap push push ; is what i'm talking about. 17:06:21 A kind of primtive co-routine thing? 17:06:27 When he defines a tag, html or otherwise, where the ... 17:06:55 Co-routine is not registering in my memory banks. error error. 17:07:03 Carry on. 17:07:05 hehe 17:07:07 sorry. 17:07:31 ... where the string comes between tags: 17:07:51 ex stringthing 17:08:00 wow, that was painless 17:08:16 just switched the encoding of the call instruction in fovium 17:08:20 he'll code a prototype using 'later' like. 17:09:25 : htag later then the string might be like... 17:10:22 : hstring htag string ; 17:11:02 `later' made chuckBot chess moves very natural to code. 17:11:28 Raystm2: why not just give a URL? 17:11:38 Beg pardon? 17:11:45 oh to code that uses later? 17:11:55 http://wiki.forthfreak.net/index.cgi?FML 17:12:09 :) 17:13:20 Clever zoly, thanks :) 17:14:43 * Raystm2 reading that link as i've never seen the page before. 
17:15:49 latest editor of that page was .... (looking...) 17:15:49 crc 17:16:20 Very clever man as well, that crc! 17:30:50 More clever than I am, today; I can't see how it's supposed to operate. 17:33:50 Doesn't matter how much forth experiance the person has when they first see `later'... that is always the statement the seer makes. 17:34:31 Basically, pop pop off the last two return points on the Rstack, swap them and return them swaped. 17:34:32 I'd like to see an example of this FML in use. 17:34:50 The source, I mean, not the generated HTML. 17:34:54 right. 17:34:57 I got that :) 17:35:10 I'm guessing it's (p hello there ) 17:35:22 But I fail to see how that generates
<p>hello there</p> using this mechanism. 17:35:26 yes. 17:35:43 : (p ." <p>" ... so far I'm good. 17:35:54 Yes! 17:35:55 Then later ." </p>
" ; 17:35:59 right. 17:36:07 So when does 'hello there' get parsed and output? 17:36:12 okay 17:36:35 in the def you did above... 17:37:23 http://wiki.forthfreak.net/index.cgi?WikiWrittenInForth at the end of page is more about it 17:37:34 well actually your def is insuficient, I think. 17:37:46 oh cool a link, thanks zoly ;) 17:37:52 including a diagram which is supposed to demonstrate the functon 17:37:54 function 17:38:31 Oh, ok. That was the unclear bit, I guess. You have explicitly ." the text you want to output. 17:38:45 Raystm2, I pulled that def right out of the page. 17:39:08 then i'm insufficient :) 17:39:52 The definition of 'later' is, I presume, rf-specific? What are its run-time semantics? 17:40:24 not really rf-specific, any forth with pop swap and push can do it. I believe that's every forth. 17:40:26 : later r> r> swap >r >r ; ? 17:40:44 I learned about it recently - very beautiful construct. 17:40:57 indeed, some kinda zen is in there. 17:41:14 Some kind I can't yet replicate in Gforth. 17:41:15 I'm always looking for ways to use it. 17:41:44 you'll have to define it as Robert suggested. 17:41:45 I think Gforth doesn't use the return stack for calls. 17:41:51 oh 17:42:02 Looks that way. 17:42:49 But the gist is... You will be swaping back and forth from your current word, to the prototype and back again. 17:42:59 Raystm2: Found any cool ways yet (except this good use)? 17:43:18 Absolutely. please check out this... 17:44:38 blocks 252 254 and 256 are relevent http://colorforth.info/cbsep12.html 17:44:59 Yes, that's a primitive coroutine facility. I have something similar in Quartus Forth, 'cept it's called corou. 17:45:19 I see. 17:46:14 I thought maybe it was as clean as (p hello there ) ... that confused me for a bit. I see it's (p ." hello there" ) , which isn't as clean, but what can you do? 17:46:29 :) 17:46:37 In Quartus Forth return-addresses are two cells. 17:46:54 Raystm2: Checking. 17:47:07 Quartus ( teasing ) You had your reasons. 
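`later` (`: later r> r> swap >r >r ;`) is easiest to see in a toy threaded-code interpreter: swapping the top two return-stack entries makes the rest of the caller run now and the rest of the current word run after the caller exits, which is exactly how `(p` brackets its caller's output with tags. This Python model is invented for illustration and is not retroForth code:

```python
def run(word, dictionary):
    """Run a word (a list of tokens) with an explicit return stack."""
    rstack = []                      # resumption points: (word, ip) pairs
    cur, ip = word, 0
    while True:
        if ip >= len(cur):           # implicit EXIT at the end of a word
            if not rstack:
                return
            cur, ip = rstack.pop()
            continue
        token = cur[ip]; ip += 1
        if token == "later":
            # r> r> swap >r >r : trade "rest of me" for "rest of my
            # caller" -- the caller's remainder runs now, and the rest
            # of the current word runs when the caller exits
            caller = rstack.pop()
            rstack.append((cur, ip))
            cur, ip = caller
        elif token in dictionary:    # a colon definition: call it
            rstack.append((cur, ip))
            cur, ip = dictionary[token], 0
        else:                        # any other token: just print it
            print(token, end="")

words = {
    "(p":   ["<p>", "later", "</p>"],   # : (p ." <p>" later ." </p>" ;
    "demo": ["(p", "hello there"],      # the body text lives in the caller
}
run(words["demo"], words)
print()   # output: <p>hello there</p>
```

Tracing `demo`: `(p` prints `<p>`, then `later` hands control back to `demo`, which prints `hello there`; when `demo` exits, the deferred tail of `(p` prints `</p>`.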
17:49:22 Quartus. what is the benifit of the two cell return-address as it pertains to your corou? 17:49:28 Hmm... 17:49:33 Colorful code. 17:49:41 it's colorforth 17:50:10 Yeah... 17:50:17 but its the same in forth. just add proper punct. 17:50:30 if you have questions ... 17:50:31 I just mean that the code is rather, hm, different. 17:51:13 That's right. The source you see here is stored as byte code, determined by the color. 17:51:31 Red, a def. 17:51:36 green a previous def 17:51:49 yellow an executed def during compile time. 17:52:26 Yeah, been looking at colorforth.. just never used it a lot. 17:53:15 I looked at it for ten years :) then I just started using it the last two, with the last one year being the most productive. :) 17:54:07 It ain't as bad as some will say, but it is rather limiting since it's not really been actively developed. 17:54:11 AFAICT Chuck's phrase "brutally simple" about sums it up. :) 17:54:18 :) 17:54:37 Yes... no matter if you're for or against it, I'd say it's a pretty good summary. 17:54:44 I'm begining to fall out of love with it. 17:55:21 Not because _of_ it, but because of the looks you get from "respectable" forthers. 17:55:42 I was shocked to find predjudice in our minority. 17:56:15 Oh, don't get me wrong. I prefer it to ANS. ;) 17:56:37 I think the colored forth concept is interesting, as well as the "moving state from compile- to edit-time". 17:56:43 I think it has the spirit of Forth in it. 17:56:52 In American asshole terms, "We are the blackest of the black". 17:56:56 But I think Chuck's implementation leaves a bit to be desired. :) 17:57:14 there is that tathi. 17:57:22 Glypher is a fun colorforth. so far. 17:57:35 More appealing to the normal forther, i'm sure. 17:58:12 * tathi will be curious to see what herkforth looks like when it gets put back into one piece again. 17:58:33 Some one needs to port that for me :) 17:59:32 tathi: that link above of the later example... 
used your htmler to make that page :) 17:59:48 I assumed so. 18:00:01 I wish it could do dependancy as well.... :0 18:00:14 I'm amused by the gibberish in the first few blocks. :) 18:00:26 yes, I should have cut that. 18:00:49 that's the asm from 23 blocks trying to convert by your rules. 18:01:03 from = front 18:01:15 right 18:02:33 Thats the part i should replace with at listing. 18:02:42 at/a 18:03:49 it wouldn't be too hard to add dependencies, I think. 18:03:54 I need to make a listing of the asm and then link in all of the color text to it's proper dependancy. 18:04:43 what's this? moving away from hex codes for asm? :) 18:05:06 ? 18:05:13 Never! :) 18:05:15 hehe 18:05:18 I wish. 18:05:24 One day you'll re-invent the symbolic assembler. 18:05:27 Raystm2: you would only have to add " I think programmers are realists, for the most part, and become optimistic when the plan is at early stages and it seems the plan will work. 18:21:22 hehe 18:24:02 but at that time the hard disk is some minutes/hours/days closer to the point of failure 18:24:11 hahahaha 18:24:12 yes 18:24:45 ooh the Cannon is 10 tons. 18:25:30 thats a giant crossbow. 18:26:20 The super gun of its period. 18:35:52 * sproingie waves 18:36:13 Hi sproingie 18:37:35 --- quit: tathi ("leaving") 18:56:51 hm. milendorf's implementation of throw doesn't work on rf 18:57:18 --- quit: Robert (Connection reset by peer) 18:57:21 or more likely i have rp@ wrong 18:58:35 the moment i call handler @ rp!, control transfers back 19:03:29 rp@ has to be a macro, I'd think 19:04:58 Or whatever rf does when it inlines instructions. 19:05:09 rp!, too. 19:05:12 yeah, macro 19:05:23 oh, otherwise it's returning its own return address 19:05:28 right 19:06:25 more efficient as a macro 19:06:40 Yes. 19:06:41 sp@ and sp! don't need to be macros, do they? 19:06:46 No. 19:07:10 ok, now i'm getting different results ... segfault 19:07:15 at least it's a different problem 19:07:15 Progress. 19:08:31 not so good ... 
croaked just calling execute 19:08:46 Hard to help from here; I can't quite make out the code. 19:08:59 yeah, for some reason i'm losing the stack 19:09:19 Ok, well, usual approach: take it in small stages. Divide and conquer. 19:10:06 Test your sp@ sp! rp@ rp! by themselves. 19:10:06 i think i have to change the content of the macro too 19:11:33 ah yes 19:11:43 gotta take the square brackets out of the macro 19:11:53 rf caches TOS in eax, doesn't it? You need to flush that when doing sp@, cache it again at sp! 19:12:35 actually, it makes sp@ dead simple 19:12:48 how so? 19:13:11 since i push the value of esi, which is where the current TOS is going to be once the return value of sp@ pushes it down 19:13:55 sp@ basically flushes the cache by virtue of returning a value 19:14:13 Ok. I follow. 19:15:21 whoah 19:15:22 In Quartus Forth the SP register points to the second item, not the phantom stack cell where TOS would normally live. 19:15:23 just like magic 19:15:26 it works now 19:15:36 it was the macrofication 19:15:39 thanks :) 19:15:47 Sure, glad I helped. :) 19:16:17 need to find some serious uses of throw/catch to test it 19:16:36 gforth probably has a few 19:16:44 Try a THROW from inside a nested for/next or something. 19:17:31 Once you've got it working I suppose you'll want to put it in the kernel so that internal errors can use it, too. 19:17:50 ohhh duh. sp! was returning control immediately because it wasn't a macro 19:18:06 i had the sp@ problem figured out, but sp!'s problem just dawned on me 19:18:22 sproingie: redefine / to throw -10 for division by 0 19:18:25 I don't follow, but ok. 19:18:49 the sp@ word reset the return stack, then dutifully returned to the top of the RS 19:18:54 rp@. 19:18:59 er yah 19:19:04 Ok. Makes sense now. :) 19:19:28 playing with the RS makes my brane hurt 19:19:44 Yes, I mentioned that above -- rp@ and rp! both need to be macros, or else awkardly-written to work around their own return addresses. 
19:46:49 Raystm2: did you say you wished someone would port herkforth for you? 20:12:17 :) 20:12:36 I know I know, I'll get a Mac someday.... 20:12:55 I've always ment to get a Mac anyway. 20:13:04 meant even 20:19:27 hurry up, before apple goes x86 20:20:15 or maybe herkforth has been ported to x86 by then? 20:44:20 well, it's just that I'm in the middle of porting it... 20:44:28 to a virtual machine that tathi and I made up 20:44:53 so it can run easily and consistantly under OSes on various architectures 20:45:16 haven't tackled endian yet 20:45:37 the new system is called Mist 20:46:07 it is significantly (but not radically) different from herkforth 20:46:19 so far I've got an interpret loop going 20:46:24 but there's not much in the dictionary yet 20:46:38 I'm in the midst of writing an assebler in Mist 20:46:53 (I already have a cross compiler in herkforth that I used to make the interpret loop, dictionary etc) 20:48:20 JasonWoof: mist may not be an ideal name ... 20:48:55 it is the german word for "crap. muck" 20:49:07 "dung" 20:55:14 and in english, people might call it vaporware ;) 20:55:54 admittedly, first thing that popped to mind was Myst 21:33:25 what's the problem with endianness? 21:34:00 oh, I've just been ignoring it 21:34:12 I want this stuff to work the same on ppc and x86 (and others) 21:34:35 but in it's current state, it won't 21:34:42 because of endianness 21:35:51 I think I'll hack the C version of the emulator so it can run in either endian 21:37:08 hmmm... 
didn't know mist was a bad word it german 21:37:23 maybe that's why the name didn't seem to be taken 21:37:36 it reminds me too much of myst too 21:38:11 that should in general work 21:38:11 if you access bytes as bytes, and cells as cells, and avoid mixed access, like single bytes in cells, you should be fine 21:38:33 my code is endian safe 21:38:42 but the image has to be in one or the other 21:38:53 disk-image, rom, whatever you call it 21:39:31 the vm implementation just reads it into memory, and starts executing it 21:39:34 i learned on motorola, thus big endian became natural to me. though i see advantages in little-endianness. 21:39:47 I learned on ppc 21:40:02 little-endian seems bass-ackwards and stupid 21:40:17 I am unaware of the advantages 21:40:30 if you add two numbers, on what end do you start? 21:40:43 right 21:40:50 why? 21:40:57 why not start on the left? 21:41:12 because you don't know the carries yet 21:41:50 now, have a number in memory. and a pointer to it. where does it point to, in big endian systems? 21:42:02 to the big end 21:42:17 imagine, the numbers are bigger than your registers ... 21:42:35 thus, add some digits, carry to next digits 21:43:08 your pointer handling is much simpler with little endian systems 21:43:21 you only walk the pointer along the number 21:43:41 from little end to big end, just like you would add the numbers yourself 21:45:33 and how's that different from how you'd do it with big endian? 21:45:41 even though it is a bit counterintuitive, i find it to have advantages which i can not find in big endian representation 21:45:44 you have to first move the pointer to the end? 21:46:05 big endian? locate the least significant bits of the number first 21:46:12 where is the end? 21:46:36 the little end 21:46:51 how far away is it? 21:47:11 look, you can't do multiple precision math without knowing how big your numbers are 21:47:20 i can. 21:47:41 what? 
21:47:42 i incent a number format: number is terminated by leading zeroes 21:47:47 invent 21:48:00 leading zeros? 21:48:01 i don't use a fixed number of bytes 21:48:08 so now you can't use numbers that have zero digits? 21:48:24 yes, a number of bytes, until i encounter some terminating condition 21:48:25 zero termination is just plain stupid 21:48:48 the only good use for x-terminated stuff is to make things human-editable 21:48:49 thus a number could be 32 bits. or 40. or 160 ... 21:49:31 well, you named one reason already. 21:49:56 what about, infinite precision packages? 21:50:12 that's ridiculous 21:50:21 why? 21:50:30 because computers aren't infinitely big 21:50:47 computers can't do infinite precision 21:51:13 if i need one single 160 bit number, why should i represent all numbers as such? 21:51:18 but pick a reasonable precision, and the computer can do it 21:52:02 you can have variable length numbers without the possibility of them being infinitely long 21:52:29 ok. there are limits, like amount of memory, or calculation time. i'd better call it "arbitrary precision" 21:52:39 I'm not saying that the alternative to terminators is fixed length everything 21:52:41 use counts 21:53:00 some stuff makes sense as fixed length 21:53:40 terminators are a pita because they reduce the number of valid bytes/words/whatever the data can contain 21:53:48 often creating the requirement for encodings 21:54:15 counted data can contain (without encodings) other counted data 21:54:22 take as example, faculty of 100 21:54:23 but null-terminated data cannot 21:54:24 you start with small numbers 21:54:24 while you're busy, they get bigger 21:54:58 factorial? 21:55:45 ok. you *can* use counts. but still, you'd use them mainly to determine where you would start to calculate (and move pointer back towards lower addresses) with big endian. whereas, with little endian, increasing pointer leads you to the next, larger portion of the number. 
21:55:54 the latter appears to me simpler 21:56:13 factorial, right. 21:56:30 it appears simpler? 21:56:33 but it's all backwards 21:56:37 for multiplying, same thing as addition is true. you start at lower end 21:56:38 that, to me, makes it seem confusing 21:57:00 that is because you have a conceptual block, being used to big endian 21:57:10 therefore little endian confuses you 21:57:16 no, it's because I speak english 21:57:19 when I write 321 21:57:27 1 is the least significant digit. and it's on the right 21:57:39 when you do long addition, you start on the right 21:58:00 that's the one you start calculating with 21:58:03 input/output may be easier with big endian 21:58:16 if you're concerned about moving the pointers to the least-significant end, then why not just use pointers that are there to begin with? 21:59:07 but calculation not 21:59:10 you start on the smaller side of the number 21:59:10 memory has no "left" or "right" 21:59:10 it has higher and lower addresses 21:59:12 also, I don't know much about x86, but on ppc, I'm quite sure that it's faster to loop over data if you know the count first, than if you have to check conditionals the whole way 21:59:25 yes, higher and lower 21:59:38 "why not just use pointers that are there to begin with" - that's what you do with little endian 21:59:41 higher is afaik always displayed going rightwards 21:59:56 yes, and you can do the same in big endian 22:01:06 and moving your way through number would be decrementing 22:01:12 then, you would bump the pointer by a yet unknown amount towards higher addresses 22:01:13 yep 22:01:20 and decrement again towards lower addresses 22:01:28 whaat? 22:01:31 what are you talking about? 
22:01:40 that *can't* be simpler than simply, increment always by 1 22:01:50 no, but it can be the same 22:02:00 like always decrement by 1 22:02:10 what are you doing going through the bytes one at a time anyway 22:02:19 maybe it seems simpler to you, but the CPU would disagree 22:03:07 why would I store long numbers all out of order if I wanted to be able to step through them as either bytes, half-words or words? 22:03:10 that would be stupid 22:03:15 because the numbers are larger than my ALU can handle with one operation 22:03:47 if I wanted to store arbitrary precision numbers in a memory-efficient way, I'd probably store them just as you'd write them, with padding on the left of course 22:03:58 1111111111111111122222222222222222222233333333333333333444444444445555555555555 22:04:09 same as when you are adding up multi-digit numbers 22:04:14 say the above were all bytes 22:04:30 and I'd keep the sizes aligned to 4 bytes 22:04:39 I'd keep a pointer to the first 1 22:04:47 and just before the first one in memory there'd be a count 22:06:20 : ptr->end dup 4- @ + ; 22:07:40 : count 2- h@ ; 22:08:15 : anum. dup count for dup b@ b. 1+ next drop ; 22:09:12 well, i do n for a <- (p1+) a <- a+ (p2+) next 22:09:38 I'm not familiar with those words 22:10:05 plus shifting out the result back to mem. ignoring here 22:10:06 () is indirection 22:10:08 p1 and p2 are pointers 22:10:14 I was just trying to do readable code 22:10:45 what's it do? is this the add routine? 22:11:05 (p1+) is an autoincrementing read 22:11:06 <- is assignment 22:11:38 the n for next is the enclosing loop. i do n iterations 22:11:56 n for next is the bit I understood 22:12:05 is this "a <-" assigning something to a? 22:12:26 what is a? 22:14:16 first iteration, reading lowest bits. adding. using result, keeping carry 22:14:16 pointers have both been bumped to next portion of number 22:14:16 accu. 
temp var 22:14:17 temp register, if you like 22:14:21 pointers both point to least significant bits of number. ready to crunch them 22:15:29 reading both, and autoincrementing both, causes them to point to next number portion 22:15:30 repeat until end of number 22:15:31 no "pointer to end before we can begin" 22:15:31 no "incrementing sometimes, and decrementing some other times" 22:15:47 always "increment by one, until end (i.e. most significant bits) of number reached 22:15:49 " 22:16:07 if there follows another number, we're on the least significant bits again 22:17:26 but, the extra effort you have to do for calculation, i would have to do for input/output 22:17:46 because of the strange way we write numbers ... 22:18:04 we write them left to right, but calculate on them right to left ... 22:18:50 at least sub, add and mul 22:20:05 and because we have been that accustomed to writing them the wrong way, big endian appears more intuitive. 22:22:03 mind you, i find big endian more natural too 22:22:08 only, i can't find many advantages to it 22:22:09 shows how deeply ingrained this conditioning is 22:22:09 that's probably the same sort of problem other people have with RPN 22:23:39 --- quit: swalters (Read error: 104 (Connection reset by peer)) 22:24:25 what is this crap you keep spewing about incrementing sometimes? 22:26:00 you've got little endian ingrained so much that you think the way people write numbers with a pencil is wrong? 22:27:17 with big endian, you would have to increment, to get to next number, in higher memory, and decrement, in order to walk through number from least significant bits to most significant bits. 22:27:48 no you wouldn't 22:27:51 haven't you been listening 22:28:02 the more significant stuff goes lower in memory 22:28:09 don't mis-order it 22:28:25 you don't listen, i think. 
i said "i learned on motorola, thus big endian became natural to me" and "mind you, i find big endian more natural too" 22:28:59 now you claim " you've got little endian ingrained so much..." 22:29:13 when you go from a 32 bit number in memory, to a 64 bit: 1) in big endian, you would add the new 4 bytes to the LEFT of (lower than) the original 2) in little endian you'd put it after (higher memory) 22:29:37 yes 22:29:42 I am indeed quite confused about your position 22:30:11 you seem to be saying little endian is more adapted to this, but don't seem to understand an intelligent way of doing it at all with big endian 22:30:26 the way I'm proposing would not involve incrementing _at all_ for doing an add 22:30:28 again. "i find big endian more natural, more intuitive. but i find little endian more practical" 22:30:28 not once 22:31:17 and i don't say "you can't do that with big endian", i say "it is more effort/more complex" 22:32:13 no, big endian would require *decrementing* to walk through portion of number, and *incrementing by a number of bytes* to go to next number 22:32:51 assuming next number is in higher memory 22:35:15 of course you *could* place next number in lower memory, below the number you are currently working on. and have your pointer point to the end of the last number. then you need only to decrement. and at end of last number, you'd be on least significant bits of next/previous number and so on. you would effectively use little endian then. 22:35:39 just reversed, from high to low addresses 22:37:54 zoly> "and decrement, in order to walk through number from least significant bits to most significant bits" 22:37:58 (07:27:26) JasonWoof: no you wouldn't 22:38:00 (07:27:30) JasonWoof: haven't you been listening 22:38:00 (07:27:40) JasonWoof: the more significant stuff goes lower in memory 22:38:00 (07:27:48) JasonWoof: don't mis-order it 22:38:21 that's *why* you would have to decrement 22:50:27 LOL 22:50:40 yep... little endian is kinda like big endian... 
22:50:41 only... 22:50:44 reversed! 22:51:05 --- quit: JasonWoof ("off to bed") 23:07:39 'night 23:16:44 --- quit: sproingie (Remote closed the connection) 23:59:59 --- log: ended forth/05.10.03